topic | relevance score | paper name | text
---|---|---|---|
ai_researcher | 1 | Rural_Settlement_Reconstruction_Integrating_Land_Suitability_and_Individual_Difference_Factors_A_Case_Study_of_Pingba_Village_China.pdf | Spatial Accuracy 2020
Towards localized accuracy assessment of remote-sensing derived built-up
land layers across the rural-urban continuum
Johannes H. Uhl1,2, Stefan Leyk1,2
1Department of Geography, University of Colorado Boulder, Boulder, Colorado, USA
2Institute of Behavioral Science, University of Colorado Boulder, Boulder, Colorado, USA
*Corresponding author: [email protected]
Abstract
The accuracy assessment of remote-sensing derived built-up land data represents a specific
case of binary map comparison, where class imbalance varies considerably across rural-urban
trajectories. Thus, local accuracy characterization of such datasets requires specific strategies
that are robust to low sample sizes and different levels of class imbalance. Herein, we examine
the suitability of commonly used spatial agreement measures for their localized accuracy
characterization of built-up land layers across the rural-urban continuum, using the Global
Human Settlement Layer and a reference database of built-up land derived from cadastral and
building footprint data.
Keywords
Localized accuracy assessment, spatially explicit accuracy assessment, spatially constrained
confusion matrices, rural-urban continuum, Global Human Settlement Layer
With recent technological advances in geospatial data acquisition, processing, as well as cloud-
based geospatial data dissemination and analysis infrastructure, there is an increasing amount of
novel geospatial datasets available, measuring the spatial(-temporal) distribution of human
settlements at large spatial extents and at unprecedented spatial granularity. These datasets
include the Global Human Settlement Layer (Pesaresi et al. 2015), Global Urban Footprint (Esch
et al. 2013), High-Resolution Settlement Layer (Facebook Connectivity Lab & CIESIN 2016),
and the World Settlement Footprint (Marconcini et al. 2019). While such datasets greatly
facilitate
the study of urbanization, human-natural systems and related geographic-
environmental processes at unseen levels of detail, little research has been done on the accuracy
of such datasets and how accuracy trajectories can be characterized across the rural-urban
continuum, often due to the lack of reliable reference data over sufficiently large spatial extents.
Previous work has revealed varying levels of accuracy among different settlement datasets
(Klotz et al. 2016), increasing accuracy levels over time in the case of the multi-temporal Global
Human Settlement Layer (GHSL), and increases in accuracy from rural towards urban areas
(Uhl et al. 2017, Leyk et al. 2018, Uhl et al. 2020). However, these general trends are based on
coarse, regional stratification of the studied area and density variations derived from the
reference data, and thus possibly neglect local accuracy variations.
Several approaches for localized accuracy assessments of categorical spatial data have been
proposed in the past (e.g., Leyk and Zimmermann 2004, Foody 2007, Stehman and Wickham
2011), typically applied to (multi-class) land cover data at relatively coarse spatial resolutions.
High-resolution built-up land data, discriminating between built-up and not built-up land in a
binary fashion, exhibit some significant differences with respect to multi-class land cover data:
1) Class imbalance switches between highly urban and sparsely populated areas: The positive
class (i.e., “built-up“) may be the dominant class in urban areas, but highly underrepresented in
rural areas.
2) Localized (i.e., spatially constrained) confusion matrices (of dimension 2x2 in the binary case)
characterizing local accuracy may be based on small sample sizes, and possibly contain empty
elements, e.g., caused by zero instances of false positives in a local spatial context prohibiting
calculation of those measures.
Thus, a framework for localized accuracy assessment of built-up land data needs to account for
extreme, bi-directional class imbalance, as well as for low sample sizes underlying a spatially
constrained confusion matrix, and the absence of instances of one or more (dis)agreement
categories. We are currently developing such a framework, using the GHSL as test data (Figure
1a) and an integrated reference dataset derived from cadastral parcel data and building footprint
data (Leyk et al. 2018) as validation data (Figure 1b). We examine the suitability of a variety of
commonly used agreement and accuracy measures for local characterization of positional and
quantity agreement of built-up land layers, and analyze their interactions with each other, and
with variables characterizing the rural-urban continuum. We compute surfaces of Percent
Correctly Classified (PCC), User’s Accuracy (UA), Producer’s Accuracy (PA), Cohen’s Kappa,
F-measure, G-mean, Intersection-over-union (IoU) for positional agreement, as well as absolute
errors (AE) and relative errors (RE) in built-up area as measures of local quantity agreement.
Figures 1c,d show such surfaces for PCC and the F-measure, respectively, for Springfield,
Massachusetts, USA, illustrating the differences in local accuracy estimated by PCC and the F-
measure, particularly in areas of lower built-up density, where PCC yields inflated values due to
class imbalance (i.e., dominating not-built-up class).
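To make the comparison concrete, the following is a minimal illustrative sketch (not the authors' implementation) of how such localized accuracy surfaces can be derived from two co-registered binary rasters; the 33-pixel window approximates the 1x1 km focal neighborhood at 30 m resolution, and all names are assumptions.

```python
# Illustrative sketch only: localized PCC, user's/producer's accuracy, F-measure,
# and IoU surfaces from two binary rasters `pred` (e.g., GHSL) and `ref`
# (reference built-up land), both NumPy arrays with values in {0, 1}.
import numpy as np
from scipy.ndimage import uniform_filter

def local_accuracy_surfaces(pred, ref, window=33):  # ~1 km at 30 m resolution
    tp = (pred == 1) & (ref == 1)
    fp = (pred == 1) & (ref == 0)
    fn = (pred == 0) & (ref == 1)
    tn = (pred == 0) & (ref == 0)
    # Focal means of the agreement categories act as spatially constrained
    # confusion-matrix proportions for every pixel's neighborhood.
    focal = lambda x: uniform_filter(x.astype(float), size=window)
    TP, FP, FN, TN = focal(tp), focal(fp), focal(fn), focal(tn)
    with np.errstate(divide="ignore", invalid="ignore"):
        pcc = TP + TN                        # inflated where one class dominates
        ua = TP / (TP + FP)                  # user's accuracy (precision)
        pa = TP / (TP + FN)                  # producer's accuracy (recall)
        f_measure = 2 * ua * pa / (ua + pa)  # harmonic mean, robust to imbalance
        iou = TP / (TP + FP + FN)
    return pcc, ua, pa, f_measure, iou
```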
Figure 1: Data examples used in this study, shown for Springfield, Massachusetts, USA: (a) Built-up
areas (black) from the GHSL in 2014, (b) reference built-up land surface derived from building
footprint and cadastral data, and localized accuracy surfaces for (c) PCC and (d) F-measure, both
computed within focal windows of 1x1km. Black areas in (c) and (d) depict no-data areas or areas
excluded due to unreliable reference data.
Using the same grid and focal window size, we calculate surfaces of reference built-up area
density, and of focal landscape metrics (e.g., the area of the largest built-up patch), derived from
the reference data, to characterize the density and structure of built-up areas. We assume such
metrics to vary with the rural-urban gradient and thus to represent a proxy measure of the rural-
urban continuum, ranging from scattered, sparse rural settlements to dense, highly connected
built-up areas in urban settlements.
Figure 2 shows the relationships between these metrics and local measures of built-up density
and structure, computed for the state of Massachusetts and GHSL built-up labels in 2015,
revealing interesting, partially contradicting trends of the tested measures across the rural-urban
continuum. Moreover, the variability in these scatterplots indicates high levels of variation
among measures, reflecting the differing conservativeness of their mathematical structure. For example,
the Kappa index never exceeds the F-measure, and achieves similar values in regions of low built-
up density and where the largest built-up patch area is low (i.e., areas of sparse, scattered
settlements).
Figure 2: Scatterplot matrix generated by pixel-wise comparison of localized positional and quantity
agreement surfaces computed at 30m spatial resolution and within 1x1km focal windows for the state of
Massachusetts, USA.
Future work includes the analysis of effects of varying spatial support (i.e., the size of focal
windows used to generate localized confusion matrices), and of the analytical unit on local
accuracy characterization, as well as the suitability of built environment and socio-economic
variables for uncertainty prediction of remote-sensing derived built-up land data.
ACKNOWLEDGMENT
Support for this work was provided through the Eunice Kennedy Shriver National Institute of
Child Health & Human Development of the National Institutes of Health under Award Number
P2CHD066613. The content is solely the responsibility of the authors and does not necessarily
represent the official views of the National Institutes of Health.
REFERENCES
Esch, T., Marconcini, M., Felbier, A., Roth, A., Heldens, W., Huber, M., Schwinger, M., Taubenböck, H., Müller,
A. and Dech, S.J.I.G., (2013). Urban footprint processor—Fully automated processing chain generating
settlement masks from global data of the TanDEM-X mission. IEEE Geoscience and Remote Sensing Letters,
10(6), pp.1617-1621.
Facebook Connectivity Lab and Center for International Earth Science Information Network - CIESIN - Columbia
University, (2016). High Resolution Settlement Layer (HRSL). Source Imagery for HRSL © 2016 DigitalGlobe
Available at https://ciesin.columbia.edu/data/hrsl/. (Accessed 23-03-2018)
Foody, G.M. (2007). Local characterization of thematic classification accuracy through spatially constrained
confusion matrices. International Journal of Remote Sensing, 26(6), 1217-1228.
Klotz, M., Kemper, T., Geiß, C., Esch, T. and Taubenböck, H., (2016). How good is the map? A multi-scale cross-
comparison framework for global settlement layers: Evidence from Central Europe. Remote Sensing of
Environment, 178, pp.191-212.
Leyk, S., Uhl, J.H., Balk, D. and Jones, B., (2018). Assessing the accuracy of multi-temporal built-up land layers
across rural-urban trajectories in the United States. Remote sensing of environment, 204, pp.898-917.
Leyk, S., and Zimmermann, N.E. (2004). A predictive uncertainty model for field-based survey maps using
generalized linear models. International Conference on Geographic Information Science, 191-205.
Marconcini, M., Metz-Marconcini, A., Üreyen, S., Palacios-Lopez, D., Hanke, W., Bachofer, F., Zeidler, J., Esch,
T., Gorelick, N., Kakarla, A. and Strano, E., (2019). Outlining where humans live--The World Settlement
Footprint 2015. arXiv preprint arXiv:1910.12707.
Pesaresi, M., Ehrlich, D., Ferri, S., Florczyk, A., Freire, S., Haag, F., Halkia, M., Julea, A.M., Kemper, T. and Soille,
P., (2015). Global human settlement analysis for disaster risk reduction. The International Archives of
Photogrammetry, Remote Sensing and Spatial Information Sciences, 40(7), p.837.
Stehman, S.V., and Wickham, J.D. (2011). Pixels, blocks of pixels, and polygons: Choosing a spatial unit for
thematic accuracy assessment. Remote Sensing of Environment, 115(12), 3044-3055.
Uhl, J.H. and Leyk, S., (2017). Multi-Scale Effects and Sensitivities in Built-up Land Data Accuracy Assessments.
Proceedings of International Cartographic Conference 2017, Washington D.C., USA. 2017.
Uhl, J.H., Zoraghein, H., Leyk, S., Balk, D., Corbane, C., Syrris, V. and Florczyk, A.J., (2020). Exposing the urban
continuum: Implications and cross-comparison from an interdisciplinary perspective. International Journal of
Digital Earth, 13(1), pp.22-44.
|
ai_researcher | 3 | Development_of_an_Analysis_of_Alternatives_Tool_for_Human-Agent_Teaming_Research.pdf | Towards Optimizing and Evaluating a Retrieval Augmented QA Chatbot
using LLMs with Human-in-the-Loop
Anum Afzal, Alexander Kowsik, Rajna Fani, Florian Matthes
School of Computation, Information and Technology
Technical University of Munich
{anum.afzal, alexander.kowsik, rajna.fani, matthes}@tum.de
Abstract
Large Language Models have found application
in various mundane and repetitive tasks includ-
ing Human Resource (HR) support. We worked
with the domain experts of SAP SE to develop
an HR support chatbot as an efficient and effec-
tive tool for addressing employee inquiries. We
inserted a human-in-the-loop in various parts
of the development cycles such as dataset col-
lection, prompt optimization, and evaluation
of generated output. By enhancing the LLM-
driven chatbot’s response quality and explor-
ing alternative retrieval methods, we have cre-
ated an efficient, scalable, and flexible tool for
HR professionals to address employee inquiries
effectively. Our experiments and evaluation
conclude that GPT-4 outperforms other mod-
els and can overcome inconsistencies in data
through internal reasoning capabilities. Addi-
tionally, through expert analysis, we infer that
reference-free evaluation metrics such as G-
Eval and Prometheus demonstrate reliability
closely aligned with that of human evaluation.
1 Introduction
In recent years, incorporating Artificial Intelligence
(AI) into various sectors has led to significant im-
provements in automated systems, particularly in
customer service and support. Since the advent of
Large Language Models (LLMs), more companies
are now incorporating Natural Language Process-
ing (NLP) techniques to minimize the need for hu-
man support personnel, especially domain experts
(Shuster et al., 2021). With a chatbot providing
accurate and comprehensive responses promptly,
domain experts can redirect their focus towards
higher-value tasks, leading to potential cost savings
and improved productivity within the HR depart-
ment. Moreover, an effective chatbot can play a
pivotal role in enhancing overall employee satis-
faction and engagement by delivering timely and
relevant assistance.
To this end, we worked with SAP SE on de-
veloping an HR chatbot to evaluate the potential of
LLMs on industrial data. We used domain experts
as a human-in-the-loop through various iterations
of LLM-centric development such as dataset col-
lection, prompt optimization, and most importantly
the evaluation of model outputs.
The well-known Retrieval Augmented Genera-
tion (RAG) (Lewis et al., 2021) approach is ideal
for this use case as it allows the model to produce
more grounded answers, hence reducing hallucina-
tions. We optimized different modules of the stan-
dard RAG pipeline such as the retriever and model
prompts, while constantly incorporating feedback
from the domain experts. While the retrieval ac-
curacy of an LLM could still be assessed to a de-
gree, the generative nature of LLMs makes eval-
uation of the generated output quite challenging.
To overcome this, we explored the effectiveness of
both traditional reference-based and reference-free
(LLM-based) automatic evaluation metrics while
using human evaluation as a baseline.
We benchmark OpenAI’s models in our experi-
ments while using the open-source LongT5 (Guo
et al., 2022) and BERT (Devlin et al., 2019) as
a baseline. In essence, both the industry and the
research community could benefit from our find-
ings related to the retriever and the reliability of
automatic evaluation metrics.
2 Corpus
The dataset used in the development of the HR chat-
bot was compiled using SAP’s internal HR policies
with the help of domain experts. While each sam-
ple forms a triplet consisting of a Question, Answer,
and Context, additional metadata such as the user’s
region, company, employment status, and applica-
ble company policies were also included. A snippet
of such a sample is shown in Appendix A.4. The
dataset was compiled using two separate sources
to have a mix of a gold dataset (FAQ dataset) and a
user-utterance dataset (UT dataset). Both datasets
follow the same structure and differences exist in
the distribution of the questions. We extracted all
unique HR articles to form a knowledge base for
answering new user questions. Additionally, an
evaluation set of 6k samples was used to evaluate
both the retriever and the chatbot as a whole.
2.1 Dataset Collection
FAQ Dataset (N≈48k): This is a collection of
potential questions, along with their corresponding
articles and gold-standard answers. It is carefully
created and curated by domain experts based on
the company’s internal policies.
UT Dataset (N≈41k): This is a collection of real
user utterances (UT) gathered from previous itera-
tions of the chatbot. Inspired by a semi-supervised
learning approach, a simplistic text-matching ap-
proach was implemented that mapped each user
query to a question from the FAQ dataset. The
chatbot logs from this development cycle were in-
spected and corrected by the domain experts.
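The paper does not specify which text-matching technique was used; the sketch below is therefore a hypothetical illustration of one simplistic option, mapping a user utterance to its closest FAQ question via TF-IDF cosine similarity.

```python
# Hypothetical text-matching sketch (the actual method used is not described):
# map a user utterance to the most similar FAQ question.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def match_to_faq(utterance, faq_questions):
    vectorizer = TfidfVectorizer(lowercase=True, stop_words="english")
    faq_matrix = vectorizer.fit_transform(faq_questions)  # one row per FAQ question
    query_vec = vectorizer.transform([utterance])
    scores = cosine_similarity(query_vec, faq_matrix)[0]
    best = int(scores.argmax())
    return faq_questions[best], float(scores[best])        # matched question, score
```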
2.2 Dataset Statistics
Figure 1 shows that the majority of the articles
in our dataset have under 4k tokens. Hence, they
can easily fit into the context window of OpenAI
models. As displayed in Table 1, the most asked
questions in the dataset revolve around payslips,
leave days of any kind, and questions regarding
management.
10 most frequent user queries
How can I change my approver?
Where do I see how much leave I have left?
How can I view my payslip online?
Am I paid during maternity leave?
If I am sick whilst on holiday, can I claim my holiday
back?
Can I cancel a leave request?
I have a question about my payslip, who do I contact?
Where can I find information about my payslip?
Do I receive sick pay?
How can I have an overview of my leave?
Table 1: Top 10 most frequent user queries
3 Methodology
In general, the HR chatbot follows the standard
RAG pipeline with optimizations done on individ-
ual modules with the help of domain experts as shown in Figure 2. The methodology illustrates various parts of the chatbot pipeline that are influenced by a human-in-the-loop and is further discussed in Appendix B.
Figure 1: Distribution over the number of tokens of all unique articles in our HR dataset.
3.1 Retriever
We compiled a comprehensive knowledge base of
all possible HR articles occurring in the whole
dataset as the basis for retrieval, resulting in
roughly 50k unique articles. Given a user utterance,
the goal of the retriever is to find the most relevant
article from the collection. While the technical de-
tails for each retriever may differ, in general, they
are both embedding-based. Technical details of the
Retriever module are discussed in Appendix D.1.
Moreover, we developed extensive filter func-
tionalities, ensuring that the vector search only con-
siders articles relevant to the user, like their country,
region, or employment status as shown in Table 4.
For example, from the top retrieved articles, we
filter them to only keep the ones that are applicable
to the employee and then pick the article with the
maximum similarity score from the filtered list.
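A minimal sketch of this post-retrieval filtering step is shown below; the metadata field names mirror Table 4, but the exact schema and fallback behavior are assumptions.

```python
# Sketch of metadata filtering over the top retrieved articles; field names
# (country, region, employment_status) are illustrative.
def filter_and_pick(retrieved, user):
    """retrieved: list of dicts with 'article', 'score', and metadata fields."""
    applicable = [
        r for r in retrieved
        if r.get("country") in (None, user["country"])
        and r.get("region") in (None, user["region"])
        and r.get("employment_status") in (None, user["employment_status"])
    ]
    if not applicable:
        return None  # e.g., fall back to asking the user to open an HRdirect ticket
    return max(applicable, key=lambda r: r["score"])  # highest-scoring applicable article
```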
3.1.1 Dense Passage Retriever (BERT)
Dense Passage Retriever (DPR) fine-tunes bert-base-uncased embeddings to generate a model that,
given a user query, retrieves the most relevant ar-
ticle from a set of documents. The dataset used
for training was processed to contain questions
paired with their respective gold answers, as well
as positive and negative contexts for each question.
A triplet loss function (Hoffer and Ailon, 2018)
was used for training such that the relevant article
served as the positive context, with two random ar-
ticles from the entire dataset providing the negative
contexts. This retriever is used in the framework
with the fine-tuned LongT5 model and also serves
as a baseline for evaluating the OpenAI retriever.
Figure 2: Block diagram of the methodology introduced in our paper, illustrating baseline and OpenAI models, highlighting the role of the human-in-the-loop during development.
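The triplet objective of Section 3.1.1 can be sketched as follows; this assumes PyTorch and Hugging Face Transformers, uses simple [CLS] pooling, a single negative per question, and an assumed margin, and omits the optimizer loop, so it is not the exact training code.

```python
# Sketch of the triplet loss used to fine-tune the BERT-based DPR encoder.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")
triplet_loss = torch.nn.TripletMarginLoss(margin=1.0)  # margin is an assumed value

def embed(texts):
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    cls = encoder(**batch).last_hidden_state[:, 0]      # [CLS] token embedding
    return torch.nn.functional.normalize(cls, dim=-1)

def triplet_loss_for_batch(questions, positive_articles, negative_articles):
    # Anchor = question, positive = gold article, negative = random article.
    return triplet_loss(embed(questions),
                        embed(positive_articles),
                        embed(negative_articles))
```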
3.1.2 Vector Search (OpenAI)
The OpenAI retriever is a plain vector search that
utilizes the text-embedding-ada-002 embedding
model by OpenAI to generate embeddings for each
article, followed by using similarity search to find
the relevant article. To further enhance retrieval
accuracy, we implemented various Query Trans-
formation techniques1 (Cormack et al., 2009a).
These methods alter the user query into a different
representation using LLMs before the embedding
model computes the query vector. The following
three query transformation methods were explored
and evaluated:
1) Intended Topics: Inspired by Ma et al. (2023),
the user question is sent to an LLM with the in-
struction to return a list of three intended topics of
the question, which are then embedded instead of
the user question.
Example: How to request a parental leave?
→ parental leave, childcare leave, maternity leave
2) HyDE (Hypothetical Document Embeddings):
In this method introduced by Gao et al. (2022),
the user question is transformed by an LLM into
three distinct excerpts from potential HR articles
answering the original question. These parts are
then embedded instead of the user question itself.
This approach leads to query embeddings that are
very close to the article embeddings, because of the
very similar content.
1https://docs.llamaindex.ai/en/stable/optimizing/advanced_retrieval/query_transformations/
Example: How to request a parental leave?
→ To request parental leave, please submit..., If
you wish to request..., ...
3) Multi-Query: This method2 employs LLMs to
generate multiple variations of a user’s question
varying in length and phrasing but maintaining the
same meaning and intent as the original question.
We then embed each of these variants individually.
Along with the embedded original question, we
perform a vector search for each query, combining
the results using Reciprocal Rank Fusion (Cormack
et al., 2009b). Additionally, we include queries
from the Intended Topics and HyDE methods.
Example: parental leave request?
→ How can I request a parental leave?, Where can
I apply for parental leave?, ...
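As a concrete illustration of the fusion step, the sketch below implements Reciprocal Rank Fusion over the ranked lists produced by the query variants; k = 60 is the constant commonly used with RRF, not a value reported in the paper.

```python
# Sketch of Reciprocal Rank Fusion (Cormack et al., 2009) over per-query rankings.
from collections import defaultdict

def reciprocal_rank_fusion(ranked_lists, k=60):
    scores = defaultdict(float)
    for ranking in ranked_lists:                    # one ranked list of article IDs per query variant
        for rank, article_id in enumerate(ranking, start=1):
            scores[article_id] += 1.0 / (k + rank)  # higher-ranked articles contribute more
    return sorted(scores, key=scores.get, reverse=True)
```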
3.2 NLG Module
3.2.1 LongT5 (Fine-tuning driven)
We fine-tuned LongT5 (Guo et al., 2022), employ-
ing the local-attention-based variant3, which con-
sists of 296 million trainable parameters. This
model was fine-tuned on a combination of the FAQ
dataset and UT dataset for a generative question-
answering task. To limit computational require-
ments, we fine-tuned it on a context window of
7168 tokens, retaining approximately ∼86K samples from the original dataset to avoid truncation.
2https://docs.llamaindex.ai/en/latest/examples/retrievers/reciprocal_rerank_fusion/
3https://huggingface.co/google/long-t5-local-base
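A rough sketch of this fine-tuning setup is given below, assuming Hugging Face Transformers and a pre-tokenized dataset of (question + context, answer) pairs; only the hyperparameters from Appendix D.3 are taken from the paper.

```python
# Sketch of LongT5 fine-tuning; `train_dataset` is assumed to be prepared elsewhere.
from transformers import (AutoModelForSeq2SeqLM, AutoTokenizer,
                          Seq2SeqTrainer, Seq2SeqTrainingArguments)

def finetune_longt5(train_dataset, output_dir="longt5-hr-qa"):
    tokenizer = AutoTokenizer.from_pretrained("google/long-t5-local-base")
    model = AutoModelForSeq2SeqLM.from_pretrained("google/long-t5-local-base")
    args = Seq2SeqTrainingArguments(
        output_dir=output_dir,
        learning_rate=1e-4,              # Appendix D.3
        per_device_train_batch_size=8,   # Appendix D.3
        num_train_epochs=5,              # Appendix D.3
    )
    trainer = Seq2SeqTrainer(model=model, args=args,
                             train_dataset=train_dataset, tokenizer=tokenizer)
    trainer.train()
    return trainer
```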
3.2.2 OpenAI Models (Prompt driven)
We used OpenAI’s ChatGPT and GPT-4 to gener-
ate the answer to the user’s query by passing both
the user query and the retrieved article via a mean-
ingful prompt. We conducted extensive prompt
engineering to tailor the responses of the LLMs
to the company’s requirements for an HR chat-
bot. Prompt engineering was an iterative process
that included our qualitative analysis and multi-
ple small evaluations of 10-100 sample responses
by the company’s HR experts who served as the
human-in-the-loop. We analyzed feedback from
these evaluation runs and addressed the main issues
in the next iteration of the process to produce the
final prompt shown in Table 5.
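The generation step can be sketched as a single chat-completion call that combines the system prompt from Table 5 with the retrieved article; the client version, temperature, and function names used here are assumptions rather than the paper's exact implementation.

```python
# Sketch of prompt-driven answer generation with the OpenAI chat API.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def generate_answer(question, article, system_prompt, model="gpt-4"):
    response = client.chat.completions.create(
        model=model,
        temperature=0,
        messages=[
            {"role": "system", "content": system_prompt},  # instructions from Table 5
            {"role": "user",
             "content": f"Question: {question}\nRelevant Article: {article}"},
        ],
    )
    return response.choices[0].message.content
```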
3.3 Evaluation Framework
For our analysis we employ Reference-based eval-
uation metrics such as BERTScore (Zhang et al.,
2019), ROUGE (Lin, 2004), and BLEU (Papineni
et al., 2002). We also explore the concept of using
LLM as an evaluator, and finally, we assess the
effectiveness of automated metrics by involving
domain experts in a human-in-the-loop process.
3.3.1 Retriever Evaluation
Our primary evaluation metric for the retriever is
accuracy, defined as the percentage of times the
retriever returns the correct article for a given ques-
tion.
3.3.2 Human Evaluation Setup
The domain experts who served as the human-in-
the-loop brought a high level of precision and in-
sight to the evaluation process. Apart from dataset
curation, they also evaluated the performance of
the retriever by verifying the correctness of the
retrieved articles. After discussion with domain ex-
perts, we found four dimensions across which the
quality of the model’s output could be evaluated
on a score between 1 - 5 following a 5-point Lik-
ert (Likert, 1932) scale. One domain expert eval-
uated 100 samples across the fine-tuned LongT5,
ChatGPT and GPT-4 across Readability, Relevance,
Truthfulness, and Usability.
3.3.3 Reference-based Metrics
In evaluating the effectiveness of reference-based
metrics, we examine two distinct categories: N-gram-based and embedding-based metrics.
N-gram based metrics: N-gram-based metrics,
such as BLEU (Bilingual Evaluation Understudy)
and ROUGE (Recall-Oriented Understudy for Gist-
ing Evaluation), assess the similarity between the
generated response and the ground truth answer by
analyzing the overlap of n-grams.
Embedding-based metrics: Embedding-based
metrics, such as BERTScore, leverage deep contex-
tual embeddings from language models like BERT
to assess the semantic similarity between generated
and reference texts.
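For illustration, the sketch below computes these reference-based scores with the `evaluate` and `bert_score` packages; the paper does not state which implementations were actually used.

```python
# Sketch of BLEU, ROUGE, and BERTScore computation over aligned lists of
# generated answers and gold answers.
import evaluate
from bert_score import score as bert_score

def reference_based_scores(predictions, references):
    bleu = evaluate.load("bleu").compute(predictions=predictions,
                                         references=[[r] for r in references])
    rouge = evaluate.load("rouge").compute(predictions=predictions,
                                           references=references)
    P, R, F1 = bert_score(predictions, references, lang="en")
    return {
        "bleu": bleu["bleu"],
        "rouge1": rouge["rouge1"], "rouge2": rouge["rouge2"], "rougeL": rouge["rougeL"],
        "bertscore_p": P.mean().item(),
        "bertscore_r": R.mean().item(),
        "bertscore_f1": F1.mean().item(),
    }
```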
3.3.4 Reference-free Metrics
In the evolving landscape of Natural Language Gen-
eration evaluation, LLM-based metrics emerge as a
compelling alternative, offering insights into model
performance without the constraints of pre-defined
reference responses. Details regarding the prompts
used for these Reference-free metrics are present
in Appendix C.
Prompt-based Evaluation: Prompt-based eval-
uation is at the forefront of NLG advancements,
particularly with the utilization of LLMs (Li et al.,
2024). Inspired by G-Eval, we followed the ap-
proach described by Liu et al. (2023) and tailored
the prompts to be suitable for the evaluation of a
question-answering task.
Tuning-based Evaluation: Nowadays, there is a
significant shift toward leveraging open-source lan-
guage models, such as LLaMA (Touvron et al., 2023), for fine-tuning purposes. We utilize
Prometheus (Kim et al., 2023), which stands out
for its fine-tuned evaluation capability, leveraging
a large language model to perform nuanced anal-
ysis based on customized score rubrics (Li et al.,
2024). This unique approach enables Prometheus
to evaluate text generation tasks comprehensively,
considering factors such as creativity, relevance,
and coherence without relying on reference texts.
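Conceptually, both reference-free metrics reduce to an LLM-as-judge call over a criterion-specific rubric. The sketch below shows a G-Eval-style scoring call, with the prompt heavily condensed from Table 6; the JSON output format and parsing are assumptions, not the paper's exact setup.

```python
# Condensed sketch of prompt-based (G-Eval-style) scoring on a single criterion.
import json
from openai import OpenAI

client = OpenAI()

def judge(question, generated, reference, criterion, steps, model="gpt-4"):
    system = ("You will be given a generated answer for a given question. "
              "Compare it with the reference answer on one metric.\n"
              f"Evaluation Criteria: {criterion}\nEvaluation Steps: {steps}")
    user = (f"Question: {question}\nGenerated Answer: {generated}\n"
            f"Reference Answer: {reference}\n"
            'Return a JSON object with keys "rating" (integer 1-5) and "explanation".')
    reply = client.chat.completions.create(
        model=model, temperature=0,
        messages=[{"role": "system", "content": system},
                  {"role": "user", "content": user}],
    ).choices[0].message.content
    return json.loads(reply)  # assumes the model returns valid JSON
```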
4 Results and Discussion
4.1 Dense Passage Retriever
As depicted in Table 2, surprisingly the BERT-
based DPR significantly outperforms all new meth-
ods with a top-1 accuracy of 22.24%, whereas the
OpenAI-based retriever only reaches a top-1 accu-
racy of 11.12%. Of the latter, the best performer
is Multi-Query, with 10.92%, yet this still falls
short of the Basic retriever (no query transforma-
tion). These results resonate with the findings of
Weller et al. (2024), confirming that query transformations do not always lead to better performance.
Method | HR Test Dataset (top-1) | Stackexchange English (top-1)
BERT-based DPR | 22.24% | -
Basic | 11.12% | 69.5%
Intended Topics | 9.33% | 57.25%
HyDE | 10.01% | 65.91%
Multi-Query | 10.92% | 71.31%
Table 2: Retriever accuracy on the HR test data and the Stackexchange benchmark dataset for various retriever methods on top-1 retrieved articles
Our understanding is that the retriever performs
poorly mainly because of the noise attributed to
the dataset.
It is worth noting that our dataset
contains many variant articles for a given topic or
question, with only small differences such as the
region or the employee role. Hence, the incorrect
article may still contain sufficient knowledge to
address user queries. We confirmed these findings
with our domain experts and elaborated on them
further in Appendix A.3. Further results on up to
top-5 articles are shared in Appendix E.1.
However, to assess the effectiveness of the newly
implemented methods on a different dataset, we
gathered 10k samples from CQADupStack English
(Hoogeveen et al., 2015), a collection of English
language questions and their top answers from
the Stackexchange English forum. We used the
same embedding model as the HR dataset to em-
bed this new data and evaluated its top-1 accuracy.
It can be observed that the Intended Topics method
and HyDE both underperform compared to the Ba-
sic retriever. However, the Multi-Query method
did produce a higher top-1 accuracy. During our
experiments, we noticed that these methods are
greatly influenced by the choice of query trans-
formation prompts. For instance, when HyDE re-
sponses closely matched the desired replies, the
accuracy was significantly higher. These methods
also achieved higher accuracies than the Basic on
other types of data, which indicates that the perfor-
mance is also dependent on the type of data used.
This might explain why these methods couldn’t
achieve higher accuracy on the HR dataset.
4.2 NLG Evaluation
We use the previously optimized DPRs with the top-1 article for our NLG Module consisting of ChatGPT, GPT-4 and fine-tuned LongT5 as shown in Figure 2. An overview of all evaluation scores highlighting model performance across several dimensions is summarized in Table 3.
Metric | ChatGPT | GPT-4 | LongT5
Reference-based Evaluation
BLEU Score | 0.27 | 0.28 | 0.41
ROUGE-1 | 0.48 | 0.52 | 0.51
ROUGE-2 | 0.36 | 0.35 | 0.43
ROUGE-L | 0.46 | 0.50 | 0.49
BERTScore_P | 0.88 | 0.90 | 0.91
BERTScore_R | 0.96 | 0.93 | 0.91
BERTScore_F1 | 0.90 | 0.91 | 0.90
Reference-free Evaluation (LLM-based)
G-Eval: Relevance | 4.03 | 4.51 | 3.17
G-Eval: Readability | 4.26 | 4.49 | 3.52
G-Eval: Truthfulness | 4.12 | 4.80 | 3.36
G-Eval: Usability | 4.67 | 4.79 | 3.29
Prometheus: Relevance | 3.25 | 3.70 | 2.83
Prometheus: Readability | 3.07 | 4.22 | 3.73
Prometheus: Truthfulness | 3.20 | 3.75 | 3.32
Prometheus: Usability | 3.98 | 4.32 | 2.83
Domain Expert Evaluation
Human Eval: Readability | 4.31 | 4.76 | 4.02
Human Eval: Relevance | 4.31 | 4.67 | 3.46
Human Eval: Truthfulness | 4.09 | 4.41 | 3.67
Human Eval: Usability | 3.32 | 4.11 | 2.59
Table 3: Average Evaluation Scores. BLEU (0 to 1), ROUGE (0 to 1) and BERTScore (-1 to +1) were computed on 200 samples, Prometheus (1 to 5) on 60 samples, and Domain Expert Evaluation (1 to 5) & G-Eval (1 to 5) on 100 samples.
Overall, GPT-4 shows clear domination in terms of
generation capabilities for an HR chatbot. N-gram-
based evaluation scores such as ROUGE and BLEU
are quite low due to the generative nature of the
(L)LMs, as the answer may contain words differ-
ent than the reference answers. Nonetheless, these
results establish GPT-4 as the leading model, ef-
fectively combining advanced language skills with
the demands of content accuracy and user engage-
ment. On the other hand, the fine-tuned LongT5’s
performance is observed to be inferior when bench-
marked against the OpenAI models. This outcome
is consistent with the anticipated advancements in
LLMs, which are progressively outpacing the capa-
bilities of fine-tuning-driven models. The perfor-
mance of ChatGPT has been notably strong, trail-
ing marginally behind GPT-4 in only a few scoring
categories. Its close performance to GPT-4 raises
important considerations for the trade-offs between
computational efficiency and output quality.
4.3 Correlation Analysis
Inspired by Zhong et al. (2022), we assessed the reliability of the evaluation scores using Spearman (Myers and Sirois, 2004) and Kendall (Abdi, 2007) correlation coefficients in Table 9.
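A minimal sketch of this analysis, assuming per-sample metric scores aligned with the human ratings and SciPy's implementations of the two coefficients:

```python
# Sketch of the rank-correlation computation between an automatic metric and
# human ratings for one model and one criterion.
from scipy.stats import spearmanr, kendalltau

def correlation_with_humans(metric_scores, human_scores):
    rho, _ = spearmanr(metric_scores, human_scores)
    tau, _ = kendalltau(metric_scores, human_scores)
    return {"spearman_rho": rho, "kendall_tau": tau}
```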
Human Evaluation & Reference-based Metrics
Due to its limited innovation, LongT5 typically
produces text with fewer novel sentences, result-
ing in more favorable scores from n-gram-based
metrics like BLEU and ROUGE. The analysis of
GPT-3.5 and GPT-4, in particular, illuminates a sig-
nificant gap between automated metrics and human
judgment. As these models generate more varied
and longer sentences, their outputs increasingly di-
verge from the patterns recognized by word-overlap
metrics, such as BLEU and ROUGE. For instance,
GPT-4’s BLEU score correlation marks a clear dis-
connect, indicating that as text generation becomes
more complex, the less effective traditional metrics
are in evaluating it. This discrepancy calls into
question the reliance on current automated metrics
for assessing the creativity and nuance of outputs
from advanced language models, highlighting the
need for more sophisticated evaluation frameworks
that can better align with human judgment.
Human Evaluation & Reference-free Metrics
Despite similar average scores between Reference-
free metrics and Domain Expert evaluations shown
in Table 3, their correlations are low. Since these
methods measure linear and ordinal relationships,
similar averages in evaluations do not imply a
strong correlation as depicted in Table 9.
Overall, while Prometheus and G-Eval both
serve as proxies for human evaluation, their ef-
fectiveness varies by model and evaluation criteria.
While G-Eval excels in assessing truthfulness, its
capability in evaluating readability and usability
lags behind. Prometheus on the other hand, out-
performs G-Eval in assessing usability across all
models. However, G-Eval shows a steadier perfor-
mance across different models, particularly with
LongT5, suggesting its robustness in accurate eval-
uations. Both metrics show weak alignment in
assessing readability, reflecting the inherent chal-
lenge of one LLM evaluating another’s ability to
produce easily understandable text.
Additionally, LLM-based metrics sometimes fail to
align with human judgment, particularly when an-
swers or instructions involve unfamiliar HR terms
or sensitive information. Notably, OpenAI mod-
els’ novel answers exhibit lower human correla-
tion compared to LongT5, which provides answers
more similar to the golden response.
5 Related Work
Previously, domain-specific chatbots meant for a
specific task were designed using conversational
AI frameworks like RASA (Bocklisch et al., 2017).
Latest advancements in NLP have shifted focus to-
wards employing and optimizing LLM-based RAG
(Gao et al., 2024b). Chen et al. (2023) experi-
ment with ChatGPT and several other open-source
models like Vicuna to benchmark their capabili-
ties in RAG, and Wang et al. (2023) use a smaller
secondary domain-specific model to assist a big-
ger LLM on a domain-specific question answering
task on industrial data. Recent studies have ex-
plored various retrieval methods, including dense
vector retrieval (Karpukhin et al., 2020a), sparse
retrieval (Robertson et al., 2004, 2009), and hybrid
approaches (Guu et al., 2020a), to improve the rel-
evance and diversity of retrieved documents. Guu
et al. (2020b) uses various RAG techniques to en-
sure that chatbot responses are based on relevant
HR policies, leading to accurate and helpful user
support.
Given the diverse distribution of the text gener-
ated by LLMs, conventional metrics are not suit-
able for its evaluation (Wei et al., 2021; Belz and
Reiter, 2006; Novikova et al., 2017). Consequently,
a lot of follow-up research has come up in the area
of NLG Evaluation (Gao et al., 2024a; Li et al.,
2024). Specifically focusing on RAG, Es et al.
(2024) released a Framework for the automatic
evaluation of generated output using LLM-based
metrics with a focus on faithfulness. A similar ap-
proach is followed by Saad-Falcon et al. (2023) in
their framework ARES which also evaluates the
performance of RAG systems over relevance and
faithfulness by fine-tuning a lightweight LM judge.
6 Conclusion
By optimizing retrieval techniques and benchmark-
ing state-of-the-art LLMs with the help of domain
experts, we show how LLM-based applications
could benefit from a domain expert as human-
in-the-loop within various iterations of the devel-
opment. Even though our optimizations on the
OpenAI-based retriever show minor improvements,
the accuracy remains quite low due to the poor
quality of the evaluation dataset. Nonetheless, both
ChatGPT and GPT-4 show competence when ad-
dressing the user query. This hints that the in-
ternal reasoning capabilities and domain knowl-
edge of these LLMs are strong enough to over-
come the knowledge in the supposed incorrect ar-
ticle. This also suggests that, given the nature of
the dataset used, the accuracy metric used for the
evaluation of the retriever is not a good measure
of its performance. We employed and studied a
range of evaluation metrics and concluded that in
contrast to traditional evaluation approaches such
ROUGE & BERTScore, LLM-based metrics such
as Prometheus and G-Eval come very close to hu-
man evaluation on average. Nonetheless, our find-
ings reiterate the importance of human judgment,
particularly in use cases that require an understand-
ing of a specific domain.
Acknowledgements
The work outlined in this paper is part of a re-
search project between the Technical University
of Munich and SAP SE under SAP@TUM Col-
laboration Lab. The authors would like to thank
Patrick Heinze, Christopher Pielka, Albert Neu-
mueller, Darwin Wijaya from the SAP IES as well
as the Domain Experts from the Human Resource
department for their continued support.
Limitations
In our experiments, we mostly worked with Ope-
nAI models which are closed-source and hence
raise concerns of privacy. Additionally, their large
sizes inhibited fine-tuning as they required exten-
sive hardware. Fine-tuning open source and smaller
models tailored to HR-specific contexts could fur-
ther improve response accuracy and relevance. Ad-
ditionally, since we worked with only one domain
expert for the evaluation of the generated answers,
the human evaluation might be biased. Because
of the data protection concerns with the associated
dataset, we cannot make the dataset open source.
We employed basic filtering techniques to include
user-specific information and context, more ad-
vanced approaches could be explored to include
this information into the LLM prompt.
Ethics Statement
Throughout our experiments, we strictly adhere to
the ACL Code of Ethics. The dataset used for our
research was anonymized to not include any per-
sonal information. We employed in-house domain
experts, who receive a full salary, for the evaluation of the
generated answers. They were informed about
the task and usability of data in the research. Their
annotations were stored in an anonymized fashion,
mitigating any privacy concerns. Through our fine-
tuning strategies, no additional bias was introduced
into the models, other than what might already be
part of the dataset. The goal of the research was
to optimize an LLM-centric chatbot with the help
of a human-in-the-loop. The results and discus-
sions in this paper are meant to further promote
research in LLM-based development, bridging the
gap between academia and application.
References
Hervé Abdi. 2007. The kendall rank correlation coef-
ficient. Encyclopedia of measurement and statistics,
2:508–510.
Anja Belz and Ehud Reiter. 2006. Comparing automatic
and human evaluation of nlg systems. In 11th confer-
ence of the european chapter of the association for
computational linguistics, pages 313–320.
Tom Bocklisch, Joey Faulkner, Nick Pawlowski, and
Alan Nichol. 2017. Rasa: Open source language
understanding and dialogue management. arXiv
preprint arXiv:1712.05181.
Jiawei Chen, Hongyu Lin, Xianpei Han, and Le Sun.
2023. Benchmarking large language models in
retrieval-augmented generation.
Gordon V. Cormack, Charles L A Clarke, and Stefan
Buettcher. 2009a. Reciprocal rank fusion outper-
forms condorcet and individual rank learning meth-
ods. In Proceedings of the 32nd International ACM
SIGIR Conference on Research and Development
in Information Retrieval, SIGIR ’09, page 758–759,
New York, NY, USA. Association for Computing
Machinery.
Gordon V. Cormack, Charles L A Clarke, and Stefan
Buettcher. 2009b. Reciprocal rank fusion outper-
forms condorcet and individual rank learning meth-
ods. In Proceedings of the 32nd International ACM
SIGIR Conference on Research and Development
in Information Retrieval, SIGIR ’09, page 758–759,
New York, NY, USA. Association for Computing
Machinery.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and
Kristina Toutanova. 2019. Bert: Pre-training of deep
bidirectional transformers for language understand-
ing.
Shahul Es, Jithin James, Luis Espinosa Anke, and
Steven Schockaert. 2024. RAGAs: Automated evalu-
ation of retrieval augmented generation. In Proceed-
ings of the 18th Conference of the European Chap-
ter of the Association for Computational Linguistics:
System Demonstrations, pages 150–158, St. Julians,
Malta. Association for Computational Linguistics.
Luyu Gao, Xueguang Ma, Jimmy Lin, and Jamie Callan.
2022. Precise zero-shot dense retrieval without rele-
vance labels. arXiv preprint arXiv:2212.10496.
Mingqi Gao, Xinyu Hu, Jie Ruan, Xiao Pu, and
Xiaojun Wan. 2024a. Llm-based nlg evaluation:
Current status and challenges. arXiv preprint arXiv:2402.01383.
Yunfan Gao, Yun Xiong, Xinyu Gao, Kangxiang Jia,
Jinliu Pan, Yuxi Bi, Yi Dai, Jiawei Sun, Qianyu Guo,
Meng Wang, and Haofen Wang. 2024b. Retrieval-
augmented generation for large language models: A
survey.
Mandy Guo, Joshua Ainslie, David Uthus, Santiago On-
tanon, Jianmo Ni, Yun-Hsuan Sung, and Yinfei Yang.
2022. LongT5: Efficient text-to-text transformer for
long sequences. In Findings of the Association for
Computational Linguistics: NAACL 2022, pages 724–
736, Seattle, United States. Association for Compu-
tational Linguistics.
Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasu-
pat, and Ming-Wei Chang. 2020a. Realm: Retrieval-
augmented language model pre-training.
Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat,
and Mingwei Chang. 2020b. Retrieval augmented
language model pre-training. In International confer-
ence on machine learning, pages 3929–3938. PMLR.
Elad Hoffer and Nir Ailon. 2018. Deep metric learning
using triplet network.
Doris Hoogeveen, Karin Verspoor, and Timothy Bald-
win. 2015. Cqadupstack: A benchmark data set for
community question-answering research.
Vladimir Karpukhin, Barlas O˘guz, Sewon Min, Patrick
Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and
Wen-tau Yih. 2020a. Dense passage retrieval for
open-domain question answering. arXiv preprint
arXiv:2004.04906.
Vladimir Karpukhin, Barlas O˘guz, Sewon Min, Patrick
Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and
Wen tau Yih. 2020b. Dense passage retrieval for
open-domain question answering.
Seungone Kim, Jamin Shin, Yejin Cho, Joel Jang,
Shayne Longpre, Hwaran Lee, Sangdoo Yun,
Seongjin Shin, Sungdong Kim, James Thorne, et al.
2023. Prometheus: Inducing fine-grained evalua-
tion capability in language models. arXiv preprint
arXiv:2310.08491.
Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio
Petroni, Vladimir Karpukhin, Naman Goyal, Hein-
rich Küttler, Mike Lewis, Wen tau Yih, Tim Rock-
täschel, Sebastian Riedel, and Douwe Kiela. 2021.
Retrieval-augmented generation for knowledge-
intensive nlp tasks.
Zhen Li, Xiaohan Xu, Tao Shen, Can Xu, Jia-Chen
Gu, and Chongyang Tao. 2024. Leveraging large
language models for nlg evaluation: A survey. arXiv
preprint arXiv:2401.07103.
Rensis Likert. 1932. A technique for the measurement
of attitudes. Archives of psychology.
Chin-Yew Lin. 2004. Rouge: A package for automatic
evaluation of summaries. In Text summarization branches out, pages 74–81.
Yang Liu, Dan Iter, Yichong Xu, Shuohang Wang,
Ruochen Xu, and Chenguang Zhu. 2023. Gpteval:
Nlg evaluation using gpt-4 with better human align-
ment. arXiv preprint arXiv:2303.16634.
Xinbei Ma, Yeyun Gong, Pengcheng He, hai zhao,
and Nan Duan. 2023. Query rewriting in retrieval-
augmented large language models. In The 2023 Con-
ference on Empirical Methods in Natural Language
Processing.
Leann Myers and Maria J Sirois. 2004. Spearman cor-
relation coefficients, differences between. Encyclo-
pedia of statistical sciences, 12.
Jekaterina Novikova, Ondˇrej Dušek, Amanda Cercas
Curry, and Verena Rieser. 2017. Why we need
new evaluation metrics for nlg. arXiv preprint arXiv:1707.06875.
Kishore Papineni, Salim Roukos, Todd Ward, and Wei-
Jing Zhu. 2002. Bleu: a method for automatic evalu-
ation of machine translation. In Proceedings of the
40th annual meeting of the Association for Computa-
tional Linguistics, pages 311–318.
Stephen Robertson, Hugo Zaragoza, and Michael Taylor.
2004. Simple bm25 extension to multiple weighted
fields. In Proceedings of the thirteenth ACM inter-
national conference on Information and knowledge
management, pages 42–49.
Stephen Robertson, Hugo Zaragoza, et al. 2009. The
probabilistic relevance framework: Bm25 and be-
yond. Foundations and Trends® in Information Re-
trieval, 3(4):333–389.
Jon Saad-Falcon, Omar Khattab, Christopher Potts, and
Matei Zaharia. 2023. Ares: An automated evalua-
tion framework for retrieval-augmented generation
systems.
Kurt Shuster, Spencer Poff, Moya Chen, Douwe Kiela,
and Jason Weston. 2021. Retrieval augmentation
reduces hallucination in conversation. arXiv preprint
arXiv:2104.07567.
Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier
Martinet, Marie-Anne Lachaux, Timothée Lacroix,
Baptiste Rozière, Naman Goyal, Eric Hambro,
Faisal Azhar, et al. 2023. Llama: Open and effi-
cient foundation language models. arXiv preprint
arXiv:2302.13971.
Zezhong Wang, Fangkai Yang, Pu Zhao, Lu Wang,
Jue Zhang, Mohit Garg, Qingwei Lin, and Dong-
mei Zhang. 2023. Empower large language model to
perform better on industrial domain-specific question
answering. arXiv preprint arXiv:2305.11541.
Wei Wei, Bo Dai, Tuo Zhao, Lihong Li, Diyi Yang,
Yun-Nung Chen, Y-Lan Boureau, Asli Celikyilmaz,
Alborz Geramifard, Aman Ahuja, et al. 2021. The
first workshop on evaluations and assessments of neu-
ral conversation systems. In The First Workshop on
Evaluations and Assessments of Neural Conversation
Systems.
Orion Weller, Kyle Lo, David Wadden, Dawn Lawrie,
Benjamin Van Durme, Arman Cohan, and Luca Sol-
daini. 2024. When do generative query and docu-
ment expansions fail? a comprehensive study across
methods, retrievers, and datasets. In Findings of the
Association for Computational Linguistics: EACL
2024, pages 1987–2003, St. Julian’s, Malta. Associa-
tion for Computational Linguistics.
Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q
Weinberger, and Yoav Artzi. 2019. Bertscore: Eval-
uating text generation with bert. arXiv preprint
arXiv:1904.09675.
Ming Zhong, Yang Liu, Da Yin, Yuning Mao, Yizhu
Jiao, Pengfei Liu, Chenguang Zhu, Heng Ji, and
Jiawei Han. 2022. Towards a unified multi-dimensional evaluator for text generation. arXiv preprint arXiv:2210.07197.
A Dataset
A.1 Dataset Collection
FAQ Dataset: The internal HR policies of the
company consist of Wiki articles, where each ar-
ticle contains a description text followed by some
frequently asked questions. The FAQ dataset was
constructed from these articles by compiling
all the FAQ questions from all articles. Each FAQ
question is in the form of a triplet where the con-
text is the original Wiki article the question was de-
rived from. UT Dataset: The user utterance (UT)
dataset was compiled using the user utterances col-
lected from the chatbot logs. To reduce the manual
labeling effort, a simple text-matching approach
was deployed that mapped each user query to one
of the questions from the FAQ dataset. The respec-
tive answers and context of the matched question
were used to create the triplets that form the UT
dataset.
A.2 Dataset Pre-processing
We cleaned the dataset using regular expressions
and with the help of LLMs. This involved remov-
ing unnecessary formatting like HTML tags, lead-
ing or trailing white spaces and newline characters,
and removing some wasteful markdown annota-
tions without text. This process thus reduced the
number of tokens in each document. Some of the
documents were too long to fit into the LLM’s
context window, so we excluded them from our
analysis.
A.3 Dataset Challenges
We discovered that our dataset contains multiple ar-
ticles answering most questions. These articles dif-
fer in a few characters, often in an unequal amount
of whitespaces, or a few exchanged words, or even
entire sections not present in other articles. This sit-
uation leads to multiple slightly different versions
of the same article present in the dataset, all linked
to similar questions. Consequently, the retriever
often retrieves very relevant articles that do not
exactly match the gold standard article but are a
slightly different version.
To address this, we implemented an evaluation
method measuring the Levenshtein distance be-
tween the retrieved article and the gold article. If
this distance is below a threshold of 100, we con-
sider it a successful retrieval. However, this ap-
proach does not match articles with varying sec-
tions, as the Levenshtein distance is much higher,
and we didn’t want to risk matching incorrect arti-
cles by increasing the threshold. All of the results
in Table 2 are using this evaluation method.
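The check can be sketched as follows; a plain dynamic-programming edit distance is used here for self-containment, though any Levenshtein implementation would do.

```python
# Sketch of the Levenshtein-based retrieval check: a retrieval counts as correct
# if the edit distance between the retrieved and gold article is below 100.
def levenshtein(a, b):
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

def is_successful_retrieval(retrieved_article, gold_article, threshold=100):
    return levenshtein(retrieved_article, gold_article) < threshold
```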
As the DPR is fine-tuned on the dataset, which
likely has a strong imbalance in the counts of dif-
ferent article versions, it tends to favor the most
common version. This bias contributes to its higher
accuracy, as the retriever fetches the correct article
more often than not.
A.4 Dataset Example
Table 4 shows an example sample from the FAQ
dataset representing the training triplet along with
all metadata.
B Human-in-the-Loop
As shown in Figure 2, the domain experts are in-
volved in various parts of the development cycle
explained below:
Dataset Collection: The domain experts play a big
role in the compilation and quality control of the
datasets used in this paper.
Prompt Optimization: The domain experts eval-
uated answers generated by models on various
prompt versions. They also provided guidelines
the chatbot should follow when addressing the user
query which is reflected in the final prompt dis-
played in Table 5.
Evaluation: Domain experts also served as the
human annotators for the answers generated by
DATA TRIPLET
Question: How can I apply for half a day of holiday?
Answer: Unfortunately, vacation days in your coun-
try can only be taken as full days.
Context: {Relevant Article}
META DATA
User Role: Employee
Name of KBA: Vacation
Company Name: {Company Name}
Company Code: {Company Code}
Region: {Region}
Country Code: {Country Code}
FAQ Category: {FAQ Category}
Process ID: {Process ID}
Service ID: {Process ID}
Table 4: HR Dataset Sample
(L)LMs which helped us assess the quality of an-
swers as well as study the effectiveness of auto-
matic evaluation scores.
C Prompts Samples
In this section, we provide the extensive list of
prompts used for the OpenAI Models for the Chat-
bot Pipeline, as well as the prompts used for the
LLM-based Metrics.
C.1 Prompts used for OpenAI models
The optimized prompt used for ChatGPT and GPT-
4 during our experiments is shown in Table 5.
C.2 G-Eval Evaluation Metric Prompt
The evaluation prompt used for the Readability Cri-
teria is shown in Table 6. The prompts for other
criteria (Truthfulness, Usability, Relevance) follow
similar instructions as the one shown for the Read-
ability prompt.
C.3 Prometheus Evaluation Metric Prompt
The prompt for the Prometheus Evaluation Metric
outlined in Table 7 was based on the official paper’s
guidelines (Kim et al., 2023) for Feedback Collec-
tion. This specific prompt illustrates the Readabil-
ity Criteria and was similarly adapted for other
criteria such as Truthfulness, Relevance, and Us-
ability. In general, both LLM-based metrics follow
similar evaluation criteria in the prompts.
D Technical Details
D.1 Retriever
It is worth noting that we embed the whole arti-
cle and do not perform chunking. As shown in
Figure 1, these articles are quite long. To cater to
the limited context window of the models, we opt
for the top-1 article to be passed as context. This
also makes sense for our use case as the dataset
is designed such that the answer to any given HR
question usually exists in only one article.
D.2 Dense Passage Retriver Training
Dense Passage Retriever (DPR) (Karpukhin et al.,
2020b) powered by Haystack4 uses the bert-base-
uncased embedding model by google-bert, openly
available on HuggingFace. DPR training aims to
generate a model that creates embeddings where
the question embedding closely aligns with the rel-
evant context embedding. During retrieval, the user
query is processed through the previously trained
retriever, producing a query vector in the same em-
bedding space as the articles. This query vector
is then compared to all article vectors within the
vector store using cosine similarity. The top-k arti-
cles belonging to the embeddings with the highest
cosine similarities are returned.
D.3 LongT5 Fine-tuning
During fine-tuning of the LongT5 models, the train-
ing process was configured with a learning rate of
1e-4 and a batch size of 8, spanning 5 epochs.
E Results and Evaluation
Throughout our research, we encountered several
challenges that warrant attention. The variability
in retrieved articles due to slight differences in con-
tent or formatting posed complexities in evaluating
retrieval accuracy and ensuring consistency in re-
sponse generation. Addressing this challenge may
require further refinement of the retrieval mecha-
nism or additional preprocessing steps to standard-
ize the retrieved content.
E.1 Retriever
The accuracy of both retrievers on the top-1, top-2,
top-3, and top-5 articles is shown in
Table 8. As expected, the accuracy of the retriever
module increases as the value of k is increased.
However, we are limited to including only top-1
4https://haystack.deepset.ai/
SYSTEM PROMPT
You are an HR chatbot for SAP SE and you provide truthful and concise answers to employee questions based on provided
relevant HR articles.
1. Stay very concise and keep your answer below 150 words.
2. Do not include too much irrelevant information unrelated to the posed question.
3. Keep your response brief and on point.
4. Include URLs from the relevant article if it is important to answer the question.
5. If the answer applies to specific labs/countries/companies, include this information in your response.
6. Refer to the employee directly as "you" and not indirectly as "the employee".
7. If the provided HR article does not include the answer to the question, tell the employee to create an HRdirect ticket.
8. Answer in a polite, personal, user-friendly, and actionable way.
9. Never make up your response! If you do not know the answer to the question, just say so and ask the user to create an
HRdirect ticket!
USER PROMPT
Question: {question}
Relevant Article: {article}
Table 5: Chatbot Prompt for OpenAI Models
SYSTEM PROMPT
You will be given a generated answer for a given question. Your task is to act as an evaluator and compare the generated
answer with a reference answer on one metric. The reference answer is the fact-based benchmark and shall be assumed as
the perfect answer for your evaluation. Please make sure you read and understand these instructions very carefully. Please
keep this document open while reviewing, and refer to it as needed.
Evaluation Criteria: {criteria}
Evaluation Steps: {steps}
USER PROMPT
Example: {example}
Question: {question}
Generated Answer: {generated_answer}
Reference Answer: {reference_answer}
Evaluation Form: Please provide your output in two parts separate as a Python dictionary with keys rating and explanation.
First the rating in an integer followed by the explanation of the rating.
{metric_name}
METRIC SCORE CRITERIA
{The degree to which the generated answer matches the reference answer based on the metric description.}
Readability(1-5) - Please rate the readability of each chatbot response. This criterion assesses how easily the response can
be understood. A response with high readability should be clear, concise, and straightforward, making it easy for the reader
to comprehend the information presented. Complex sentences, jargon, or convoluted explanations should result in a lower
readability score.
METRIC SCORE STEPS
{Readability Score Steps}
1. Read the chatbot response carefully.
2. Assess how easily the response can be understood. Consider the clarity and conciseness of the response.
3. Consider the complexity of the sentences, the use of jargon, and how straightforward the explanation is.
4. Assign a readability score from 1 to 5 based on these criteria, where 1 is the lowest (hard to understand) and 5 is the
highest (very easy to understand).
Table 6: G-Eval Prompt Example for Readability Criteria
SYSTEM PROMPT
Task Description: An instruction (might include an input inside it), a response to evaluate, a reference answer that gets a
score of 5, and a score rubric representing an evaluation criterion is given.
2. After writing a feedback, write a score that is an integer between 1 and 5. You should refer to the score rubric.
3. The output format should look as follows: Feedback: [write a feedback for criteria] [RESULT] [an integer number
between 1 and 5].
4. Please do not generate any other opening, closing, and explanations.
Question to Evaluate: {instruction}
Response to Evaluate: {response}
Reference Answer (Score 5): {reference answer}
Score Rubrics: {criteria description}
Score 1: {Very Low correlation with the criteria description}
Score 2: {Low correlation with the criteria description}
Score 3: {Acceptable correlation with the criteria description}
Score 4: {Good correlation with the criteria description}
Score 5: {Excellent correlation with the criteria description}
{criteria description}: Readability(1-5) - Please rate the readability of each chatbot response. This criterion assesses
how easily the response can be understood. A response with high readability should be clear, concise, and straightforward.
Complex sentences, jargon, or convoluted explanations should result in a lower readability score.
Table 7: Prometheus Prompt Example for Readability Criteria
articles because the articles are quite long and more
samples may not fit in the model’s context window.
The BERT-based DPR model still significantly out-
performs all new methods with a top-1 accuracy
of 22.24% and a top-5 accuracy exceeding 40%.
The new retriever, in comparison, only reaches a
top-1 accuracy of 11.12% and a top-5 accuracy of
18.53% on the same dataset. These results in gen-
eral are quite underwhelming and mainly attributed
to the dataset challenges described in Appendix
A.3.
DPR            top-1    top-2    top-3    top-5
BERT-based     22.24%   30.03%   35.08%   40.06%
OpenAI-based   11.12%   15.06%   16.82%   18.53%
Table 8: Retriever accuracy for various values of k on the HR test dataset. The OpenAI-based DPR uses the Basic method.
E.2 Correlation between Automatic Evaluation and Domain Expert Evaluation
Table 9 shows the correlation of each individual evaluation metric with human evaluation across LongT5, ChatGPT, and GPT-4. The low correlation coefficients are a consequence of the Spearman and Kendall methods, which measure the monotonic (rank-based) and ordinal relationships between variables by comparing each pair of score sets. When these methods detect divergent scores between two evaluations, the correlation coefficient is reduced, revealing a discrepancy that is not apparent when considering the average scores alone.
Criteria                   LongT5                   ChatGPT                  GPT-4
                           Spearman ρ  Kendall τ    Spearman ρ  Kendall τ    Spearman ρ  Kendall τ
BLEU                       0.459       0.337        0.345       0.263        0.146       0.116
ROUGE-1                    0.435       0.321        0.364       0.284        0.113       0.091
ROUGE-2                    0.462       0.341        0.332       0.258        0.056       0.044
ROUGE-L                    0.433       0.324        0.353       0.274        0.093       0.075
BERTScore_P                0.457       0.347        0.304       0.234        0.156       0.122
BERTScore_R                0.466       0.305        0.085       0.064        −0.022      −0.018
BERTScore_F1               0.455       0.332        0.246       0.192        0.097       0.077
G-Eval Usability           0.675       0.584        0.217       0.198        0.346       0.327
G-Eval Relevance           0.569       0.499        0.339       0.304        0.325       0.306
G-Eval Readability         0.208       0.181        0.395       0.373        0.139       0.137
G-Eval Truthfulness        0.726       0.651        0.694       0.667        0.452       0.432
Prometheus Usability       0.723       0.675        0.386       0.351        0.516       0.495
Prometheus Relevance       0.467       0.439        0.419       0.371        0.382       0.357
Prometheus Readability     0.493       0.468        0.378       0.358        0.225       0.213
Prometheus Truthfulness    0.541       0.521        0.439       0.402        0.454       0.427
Table 9: Correlations between Automated Metrics and Human Evaluation across Models
|
ai_researcher | 2 | Optimizing_Instruction_Synthesis_Effective_Exploration_of_Evolutionary_Space_with_Tree_Search.pdf | 4
2
0
2
y
a
M
7
1
]
O
L
.
s
c
[
2
v
7
2
1
6
0
.
5
0
4
2
:
v
i
X
r
a
Efficiently Synthesizing Lowest Cost Rewrite Rules
for Instruction Selection
Ross Daly
Stanford University
Stanford, CA, USA
[email protected]
Caleb Donovick
Stanford University
Stanford, CA, USA
[email protected]
Caleb Terrill
Stanford University
Stanford, CA, USA
[email protected]
Jackson Melchert
Stanford University
Stanford, CA, USA
[email protected]
Pat Hanrahan
Stanford University
Stanford, CA, USA
[email protected]
Priyanka Raina
Stanford University
Stanford, CA, USA
[email protected]
Clark Barrett
Stanford University
Stanford, CA, USA
[email protected]
Abstract—Compiling programs to an instruction set architec-
ture (ISA) requires a set of rewrite rules that map patterns
consisting of compiler instructions to patterns consisting of ISA
instructions. We synthesize such rules by constructing SMT
queries, whose solutions represent two functionally equivalent
programs. These two programs are interpreted as an instruc-
tion selection rewrite rule. Existing work is limited to single-
instruction ISA patterns, whereas our solution does not have
that restriction. Furthermore, we address inefficiencies of existing
work by developing two optimized algorithms. The first only
generates unique rules by preventing synthesis of duplicate and
composite rules. The second only generates lowest-cost rules
by preventing synthesis of higher-cost rules. We evaluate our
algorithms on multiple ISAs. Without our optimizations, the
vast majority of synthesized rewrite rules are either duplicates,
composites, or higher cost. Our optimizations result in synthesis
speed-ups of up to 768× and 4004× for the two algorithms.
I. INTRODUCTION
As we approach the end of Moore’s law and Dennard
scaling, drastically improving computing performance and
energy efficiency requires designing domain-specific hardware
architectures (DSAs) or adding domain-specific extensions to
existing architectures [22]. As a result, many DSAs have
been developed in recent years [4], [8], [24], [27], [30], each
with its own custom instruction set architecture (ISA) or ISA
extension.
Targeting such ISAs from a compiler’s intermediate repre-
sentation (IR) requires a custom library of instruction selection
rewrite rules. A rewrite rule is a mapping of an IR pattern
to a functionally equivalent ISA pattern. Manual specification
of rewrite rules is error-prone,
time-consuming, and often
incomplete. It is therefore desirable to automatically generate
valid rewrite rules.
When specifying instruction selection rewrite rules, there
are two common cases. When ISAs have complex instructions,
rewrite rules will often map multi-instruction IR patterns to a
single ISA instruction. When ISAs have simple instructions,
rewrite rules will often map a single IR instruction to a multi-
instruction ISA pattern. A rewrite rule generation tool should
be able to create rewrite rules for both cases. We call such
rewrite rules many-to-many rules.
Generating instruction selectors is not a new idea. Most
relevant to this work is Gulwani et al. [21] who use a satisfia-
bility modulo theories (SMT) solver to synthesize a loop-free
program that is functionally equivalent to a given specification.
Their approach is called component-based program synthesis
(CBPS), as each synthesized program must include functional
components from a given component library. Buchwald et
al. [6] use and extend CBPS to efficiently generate multi-
instruction loop-free IR programs equivalent to a single ISA
instruction program; that is, they solve the many-to-one rewrite
rules synthesis problem. However, multi-instruction ISA pro-
grams cannot be synthesized.
Both of these algorithms produce many duplicate rules,
which are removed during a post-processing step. As we show,
this adds significant additional cost. Another issue is that
CBPS as currently formulated does not incorporate the notion
of optimizing for cost. In practice, we often want only the set
of lowest-cost rules, making it unnecessary (and expensive) to
generate equivalent higher-cost rules.
This paper presents an algorithm for automatically generat-
ing a complete set of many-to-many rewrite rules. We address
the above issues by preventing the synthesis of both duplicate
and high-cost rules at rule generation time, using exclusion
techniques. As a further optimization, we generate rules in
stages and exclude composite rules, i.e. rules that can be
composed of smaller rules found in previous stages. These
ensure we produce a minimal but complete set of rewrite
rules. Compared to previous work, our approach eliminates
unnecessary rules and significantly reduces the time required
to produce the unique necessary ones.
Our contributions are as follows:
• We define generalized component-based program synthe-
sis (GCBPS) as the task of synthesizing two functionally
equivalent programs using two component libraries. We
then present an SMT-based synthesis approach inspired
by Gulwani et al. to solve it.
• We present an iterative algorithm genAll
to generate
all unique many-to-many rules up to a given size. We
identify a set of equivalence relations for patterns encoded
as programs and for rules that map IR programs to
ISA programs. We use these relations to enumerate and
exclude duplicate rules. Furthermore, we directly exclude
composite rewrite rules. These result in up to a 768×
synthesis speed-up.
• We present an algorithm genAll LC which generates only
the lowest-cost rules by incorporating a cost metric in
addition to excluding duplicate and composite rewrite
rules. This results in a synthesis speed-up up to 4004×.
The rest of the paper is organized as follows. Section II
discusses instruction selection, existing rule generation meth-
ods, SMT, and program synthesis. Section III describes a
program synthesis query for generating many-to-many rules.
Section IV presents an algorithm for generating only unique
rewrite rules and defines duplicates and composites. Section V
presents an algorithm for synthesizing only the lowest-cost
rules. Section VI evaluates both algorithms, and Section VII
discusses limitations and further optimizations.
II. BACKGROUND AND RELATED WORK
A. Instruction Selection
Instruction selection is the task of translating code in the
compiler’s intermediate representation (IR) to functionally
equivalent code for a target ISA. Typically, a library of rewrite
rules is used in instruction selection. A rewrite rule is a
mapping from an IR pattern consisting of IR instructions
to a functionally equivalent ISA pattern consisting of ISA
instructions. Such patterns can be expression trees or directed
acyclic graphs (DAGs).
Significant work has been devoted to developing rewrite rule
tiling algorithms to perform instruction selection [1], [5], [12],
[14]–[17], [19], [26], [29]. For each rule in the rule library, a
tiling algorithm first finds all fragments from the IR program
in which the rule’s IR pattern exactly matches that fragment.
Then, the instruction selector finds a tiling of these matches
that completely covers the basic block and minimizes the total
rule cost according to some cost metric.
Simple instruction selectors only handle tree-based IR pat-
terns, which is inefficient for reused computations. Modern
instruction selectors, like LLVM's, use DAG-based matching that
allows for both richer rules and better tiling. Koes et al.
[26] describe a similar near-optimal DAG-based instruction
selection algorithm [5]. We want to generate rules that can be
used with such modern instruction selectors.
B. Generating Instruction Selectors
Generating instruction selectors from instruction semantics
has been a topic of research interest [6], [7], [9], [10], [23].
Dias and Ramsey [10] introduce an algorithm for generating
rewrite rules based on a declarative specification of the ISA.
While this solves part of the many-to-many rule task, their
work relies on an existing set of algebraic rewrite rules for
synthesizing semantically equivalent rules. Our work uses
SMT for the instruction and program semantics. However,
incorporating certain kinds of algebraic rewrite rules could
be an avenue for future optimizations.
Daly et al. [9] propose a way to synthesize instruction
selection rewrite rules from the register-transfer level (RTL)
specification of a processor. Their algorithm requires a set
of pre-specified IR patterns. In contrast, we can efficiently
synthesize rules that consider all possible multi-instruction IR
patterns up to a given size. Their approach for synthesizing
complex instruction constants and handling floating point
types could be combined with the approaches in this paper.
Most relevant to this work is the work by Buchwald
et al. [6], which leverages component-based program synthe-
sis to generate rules with multi-instruction IR patterns and
single-instruction ISA patterns. In contrast, our work synthe-
sizes rules with both multi-instruction IR patterns and multi-
instruction ISA patterns. We additionally prevent the synthesis
of duplicate, composite, and high-cost rewrite rules, unlike any
of the above approaches.
C. Program Synthesis and Equivalence
We use SMT-based program synthesis to enumerate a com-
plete set of instruction selection rewrite rules. In program
synthesis enumeration, it is common to remove equivalent
solutions [3]. We use the equivalence relation defined in
Section IV-A to determine equivalent rewrite rules. In prior
work [2], observational equivalence (i.e., programs with the same semantics) has been used for de-duplication; however, observational equivalence does not take into account the structure of the program, which is essential for rewrite rule pattern matching.
D. Logical Setting and Notation
We work in the context of many-sorted logic (e.g., [13]),
where we assume an infinite set of variables of each sort.
Terms are denoted using non-boldface symbols (e.g., X).
Boldface symbols (e.g., X) are used for sets, tuples, and
multisets, whose elements are either terms or other collections
of terms. Y := (Y1, ..., YN ) defines a tuple, where |Y| = N
and Yi refers to the i-th element. Z := {zn} defines a multiset,
where the multiplicity of element z is n ∈ N. Both ψ and ϕ
are used to denote formulas. ψ(X) is a formula whose free
variables are a subset of X. We use M ⊨ ψ(X) to denote
the satisfiability relation between the interpretation M and
the formula ψ. Assuming X is a collection of variables, MX
denotes the assignment to those variables induced by M. For
an assignment α, we write α |= ψ(X) if M |= ψ(X) for
every model M such that MX = α.
E. Component-based Program Synthesis
CBPS is a program synthesis task introduced by Gulwani
et al. The inputs to the task are:
• A specification S := (IS, OS, ϕspec(IS, OS)) containing
a tuple of input variables IS, a single output variable OS,
and a formula ϕspec(IS, OS) relating the inputs and the
output.
• A library of components (e.g., instructions) K, where the
k-th component Kk := (Ik, Ok, ϕk(Ik, Ok)) consists of
a tuple of input variables Ik, a single output variable
Ok, and a formula ϕk(Ik, Ok) defining the component’s
semantics.
An example component for an addition instruction is shown
below using the theory of bit-vectors, QF BV, where BV [n]
is an n-bit sort.
((I0 : BV [16], I1 : BV [16]), O : BV [16], I0 +[16] I1 = O)
The task is to synthesize a valid program functionally
equivalent to the specification using each component from K
exactly once.
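As an illustration only (the paper's actual encoding is built with the hwtypes library on top of pySMT), a component such as the 16-bit addition above could be written with pySMT roughly as follows; the names I0, I1, O simply mirror the tuple (Ik, Ok, ϕk):

from pysmt.shortcuts import Symbol, Equals, BVAdd
from pysmt.typing import BVType

# 16-bit addition component: a tuple of inputs, one output,
# and a formula relating them (I0 + I1 = O).
I0 = Symbol("I0", BVType(16))
I1 = Symbol("I1", BVType(16))
O = Symbol("O", BVType(16))
add_component = ((I0, I1), O, Equals(BVAdd(I0, I1), O))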
For notational convenience, we group together the set of all inputs and outputs of the components: W := ∪(Ik,Ok,·)∈K (Ok ∪ (∪Ik)). Gulwani et al. encode the program structure using a connection constraint: ϕconn(L, IS, OS, W). This is a formula representing how the
program inputs (IS) and program output (OS) are connected
via the components. The connections are specified using
location variables L. We do not go into the details of how
location variables encode connections (they are in [21]). It
is sufficient for our purposes to know that these are integer
variables, and an assignment to them uniquely determines a
way of connecting the components together into a program.
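To give a rough feel for how location variables work (a simplified sketch in Z3's Python bindings, not the actual encoding of [21]): every program input and component input/output is paired with an integer location, and any two terms whose locations coincide are forced to carry the same value.

from z3 import Int, BitVec, And, Implies

def connection_constraint(located_terms):
    # located_terms: list of (integer location variable, bit-vector term).
    # Whenever two locations are equal, the connected terms must be equal.
    clauses = []
    for i in range(len(located_terms)):
        for j in range(i + 1, len(located_terms)):
            li, ti = located_terms[i]
            lj, tj = located_terms[j]
            clauses.append(Implies(li == lj, ti == tj))
    return And(clauses)

# Example: a program input X and the first input of an add component.
X = BitVec("X", 16)
add_in0 = BitVec("add_in0", 16)
phi_conn = connection_constraint([(Int("loc_X"), X), (Int("loc_add_in0"), add_in0)])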
The program semantics ϕprog are defined as the components’
semantics conjoined with the connection constraint:
ϕprog(L, IS, OS, W) := (⋀k ϕk(Ik, Ok)) ∧ ϕconn(L, IS, OS, W).   (1)
They define a verification constraint that holds if a particular program is both well-formed (specified using a well-formedness constraint ψwfp) and satisfies the specification ϕspec:
ϕverif := ψwfp(L) ∧ ∀IS, OS, W. ϕprog(L, IS, OS, W) =⇒ ϕspec(IS, OS).   (2)
A synthesis formula ϕsynth existentially quantifies L in (2):
ϕsynth := ∃L. ∀IS, OS, W. ψwfp(L) ∧ (ϕprog(L, IS, OS, W) =⇒ ϕspec(IS, OS)).   (3)
This formula can be solved using a technique called counterexample guided inductive synthesis (CEGIS). CEGIS solves such exist-forall formulas by iteratively solving a series of quantifier-free queries and is often more efficient than trying to solve the quantified query directly. More details are in [21]. For our purposes, we assume the existence of a CEGIS implementation, CEGIS, which takes an instance of ϕsynth and returns a model M with the property that ML |= ϕverif, from which a program that is a solution to CBPS can be constructed.
III. COMPONENT-BASED PROGRAM SYNTHESIS FOR MANY-TO-MANY RULES
Given the IR and ISA instruction sets KIR and KISA, Buchwald et al. [6] use CBPS to synthesize rewrite rules. They use a single ISA instruction kISA ∈ KISA for the CBPS specification and a subset of the IR instructions for the CBPS components. A solution to the resulting ϕsynth formula gives a program PIR. If PISA is the single-instruction program consisting of kISA, they interpret the pair (PIR, PISA) as an instruction selection rewrite rule.
However, Buchwald et al.'s solution is insufficient for generating many-to-many rules, as they cannot synthesize IR and ISA programs that both contain multiple instructions. Instead, two functionally equivalent programs need to be synthesized. We first define an extension to CBPS called generalized component-based program synthesis (GCBPS) to address this problem. Then we show how to construct a synthesis query whose solutions represent pairs of functionally equivalent programs.
A. Generalized Component-based Program Synthesis
We define the GCBPS task as that of synthesizing two programs, Pa and Pb, represented using location variables La and Lb, given two sets of components Ka and Kb, two sets of inputs Ia, Ib where |Ia| = |Ib|, and two outputs Oa, Ob, where the following conditions hold true:
1) Pa uses each component in Ka exactly once.
2) Pb uses each component in Kb exactly once.
3) Pa is functionally equivalent to Pb.
B. Solving GCBPS
We start with the CBPS verification constraint from (2)
using components Ka (and a corresponding set of inputs and
outputs Wa), but modify it slightly by introducing variables
(Ia, Oa) that are fresh copies of (IS, OS):
ψwfp(La) ∧ ∀Ia, Oa, Wa, IS, OS. (ϕa_prog(La, Ia, Oa, Wa) ∧ ϕspec(IS, OS)) =⇒ ((∧i Ia_i = IS_i) =⇒ Oa = OS).   (4)
Assuming the formulas for both the program and the specifi-
cation, if their inputs are the same, their outputs must also be
the same.
We next replace the specification program with a different
component-based program using components Kb and quantify
over that program’s inputs Ib, output Ob, and component
variables Wb:
ϕverif := ψwfp(La) ∧ ψwfp(Lb) ∧ ∀Ia, Ib, Oa, Ob, Wa, Wb. (ϕa_prog(La, Ia, Oa, Wa) ∧ ϕb_prog(Lb, Ib, Ob, Wb)) =⇒ ((∧i Ia_i = Ib_i) =⇒ Oa = Ob).   (5)
This is our generalized verification constraint stating the
correctness criteria for when two component-based programs
are semantically equivalent.
To synthesize such a pair of programs, a synthesis formula
ϕsynth is defined by existentially quantifying La and Lb in the
verification formula (5):
ϕsynth := ∃La, Lb. ∀Ia, Ib, Oa, Ob, Wa, Wb. ψwfp(La) ∧ ψwfp(Lb) ∧ ((ϕa_prog(La, Ia, Oa, Wa) ∧ ϕb_prog(Lb, Ib, Ob, Wb)) =⇒ ((∧i Ia_i = Ib_i) =⇒ Oa = Ob)).   (6)
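In an SMT API, the shape of (6) is an exists-forall query in which only the location variables remain free. A schematic sketch with Z3's Python bindings is shown below; the parameters are assumed to be already-built formulas, and in practice the query is dispatched to a CEGIS loop rather than solved as a single quantified formula.

from z3 import ForAll, Implies, And

def gcbps_synthesis_formula(universal_vars, wfp_a, wfp_b,
                            prog_a, prog_b, inputs_equal, outputs_equal):
    # universal_vars: the Ia, Ib, Oa, Ob, Wa, Wb terms; the location
    # variables La, Lb are the remaining free (implicitly existential)
    # variables of the returned formula.
    body = And(wfp_a, wfp_b,
               Implies(And(prog_a, prog_b),
                       Implies(inputs_equal, outputs_equal)))
    return ForAll(universal_vars, body)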
1   genAll(KIR, KISA, NIR, NISA):
2     SR ← {}
3     for n1, n2 ∈ [1, NIR] × [1, NISA]:
4       for mIR ∈ multicomb(KIR, n1):
5         for mISA ∈ multicomb(KISA, n2):
6           for IIR, IISA ∈ allInputs(mIR, mISA):
7             ϕ, LIR, LISA ← GCBPS(mIR, mISA, IIR, IISA)
8             ϕ ← ϕ ∧ ¬AllComposites(SR, . . .)
9             SR ← SR ∪ CEGISAll(ϕ, mIR, mISA, LIR, LISA)
10    return SR
Fig. 1: Iterative algorithm to generate all unique rewrite rules up to a given size.
1   CEGISAll(ϕ, mIR, mISA, LIR, LISA):
2     SR = {}
3     while True:
4       M ← CEGIS(ϕ)
5       if M = ⊥: return SR
6       R ← rewriteRule(mIR, mISA, MLIR, MLISA)
7       SR ← SR ∪ {R}
8       ϕ ← ϕ ∧ ¬ψdup(R, (LIR, LISA))
Fig. 2: AllSAT algorithm to synthesize all unique rules. Line 8 excludes all rules that are duplicates of the current synthesized rewrite rule.
As above, we assume that calling CEGIS on ϕsynth returns
a model M such that MLa∪Lb
|= ϕverif . This can be
converted into a pair of programs (Pa, Pb) representing a
rewrite rule that
is a solution for the GCBPS task. We
write rewriteRule(Ka, Kb, MLa , MLb ) for the rewrite rule
constructed from a specific model M using the component
sets Ka and Kb.
IV. GENERATING ALL MANY-TO-MANY REWRITE RULES
Buchwald et al.
[6] describe an iterative algorithm,
IterativeCEGIS , to synthesize rewrite rules using CBPS. This
algorithm iterates over all multisets of IR instructions up to
a given size and only runs synthesis on each such multiset.
Compared to running synthesis using all the IR instructions at
once, this iterative algorithm works better in practice.
However, IterativeCEGIS cannot synthesize rewrite rules
with both multi-instruction IR programs and multi-instruction
ISA programs. Furthermore, it produces duplicate rewrite
rules which are then filtered out in a post-synthesis filtering
step. Although the results are correct, this approach is highly
inefficient because each call to CEGIS is expensive, and a
CEGIS call is made, not just for some duplicate rules, but for
every possible duplicate rule. In our approach, we make the
requirement that a solution is not a duplicate part of the CEGIS
query itself, ensuring that each successful CEGIS query finds
a new, non-redundant rewrite rule.
Our iterative algorithm, genAll , is shown in Figure 1. It
takes as parameters the IR and ISA component sets, KIR
and KISA respectively, as well as a maximum number of
components of each kind to use in rewrite rules, N IR and
N ISA, and iteratively builds up a set SR of rewrite rules,
which it returns at the end. Line 3 shows that n1 and n2
iterate up to these maximum sizes. Line 4 iterates over all
multisets of elements from KIR of size n1 using a standard
multicombination algorithm multicomb [25] (not shown). Line
5 is similar but for multisets from KISA of size n2. Next, for a
given choice of multisets, line 6 enumerates all possible ways
of selecting input vectors from those multisets that could create
well-formed programs. Line 7 constructs fresh sets of location
variables LIR and LISA and returns them along with the
instantiated GCBPS synthesis formula (using Equation (6)).1
Line 8 excludes all composite rules from the synthesis search
space. Composite rules are rules that can be constructed using
the current set of rules SR and are thus unnecessary for
instruction selection. We discuss this in more detail in Section
IV-B. Finally, on line 9, the current set of rules SR is updated
with the result of calling CEGISAll , which we describe next.
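The multiset iteration on lines 3–6 of Figure 1 is standard; as a rough illustration (the instruction names below are placeholders, and allInputs and the solver call are omitted), a Python skeleton could look like:

from itertools import combinations_with_replacement, product

K_IR = ["not", "and", "or", "xor", "neg", "add", "sub", "mul"]   # illustrative
K_ISA = ["nand", "sub"]                                          # illustrative

def iterate_multisets(n_ir_max, n_isa_max):
    # Mirrors lines 3-5 of Figure 1: every pair of sizes, then every multiset
    # (combination with repetition) of IR and ISA instructions of that size.
    for n1, n2 in product(range(1, n_ir_max + 1), range(1, n_isa_max + 1)):
        for m_ir in combinations_with_replacement(K_IR, n1):
            for m_isa in combinations_with_replacement(K_ISA, n2):
                yield m_ir, m_isa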
Figure 2 shows the CEGISAll algorithm that performs
the AllSAT [20], [31] task. Its parameters are the synthesis
formula ϕ, the multisets mIR and mISA, and the location
variables LIR and LISA. It returns a set SR of rewrite rules.
Initially this set
is empty. The algorithm iteratively calls
a standard CEGIS algorithm to solve the synthesis query,
constructing a new rewrite rule R, which is added to the set
SR of rewrite rules, when the call to CEGIS is successful. The
iteration repeats until the CEGIS query returns ⊥, indicating
that there are no more rewrite rules to be found. Note that
after each iteration, the ϕsynth formula is refined by adding
1We augment the well-formed program constraint in (6) to prevent syn-
thesizing programs containing dead code and unused inputs. This can be
accomplished by enforcing that each input and intermediate value is used
in at least one location.
the negation of a formula capturing the notion of duplicates
for this rule. We describe how this is done next.
Putting everything together, we define rule equivalence
∼rule as follows.
A. Excluding Duplicate Rules
Consider the two distinct rules below. As a syntactical con-
vention, infix operators are used for IR patterns and function
calls for ISA patterns.
I1 + (I2 · I3) → add(I1, mul(I2, I3))
(I1 · I3) + I2 → add(I2, mul(I1, I3))
The two IR patterns represent the same operation despite the
fact that the variable names and the order of the commutative
arguments to addition are both different. Both rules would
match the same program fragments in an instruction selector
and would result in the same rewrite rule application. Thus, we
consider such rules to be equivalent and would like to ensure
that only one is generated by our algorithm.
We first define a rewrite rule equivalence relation, ∼rule.
Informally, two rules are equivalent if replacing either one
by the other has no discernible effect on the execution of an
instruction selection algorithm. We make this more formal by
considering various attributes of standard instruction selection
algorithms.
Commutative Instructions Modern pattern matching algo-
rithms used for instruction selection try all argument orderings
for commutative instructions [5]. We define the commutative
equivalence relation ∼CIR as PIR_1 ∼CIR PIR_2 iff PIR_2 is a remapping of PIR_1's commutative instructions' arguments.
Same-kind Instructions Programs P generated by GCBPS have a unique identifier, the program line number, for each instruction. This means that if two instructions of the same kind appear in a program, interchanging their line numbers results in a different program, even though it makes no difference to the instruction selection algorithm. We define the same-kind equivalence relation ∼KIR as PIR_1 ∼KIR PIR_2 iff PIR_2 is the result of remapping the line numbers for same-kind instructions in PIR_1.
Data Dependency Modern instruction selection algorithms
perform pattern matching, not based on a total order of instruc-
tions, but on a partial order determined by data dependencies.
Many different sequences may thus lead to the same partial order. We define ∼DIR as PIR_1 ∼DIR PIR_2 iff PIR_1 and PIR_2 have the same data dependency graph.
Rule Input Renaming For a given rewrite rule, the input vari-
ables used for the IR program must match the input variables
used for the ISA program, but the specific variable identifiers
used do not matter. We define the equivalence relation ∼I rule
on rules (i.e., pairs of programs) as R1 ∼I rule R2 iff R2 is
the result of remapping variable identifiers in R1.
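One simple way to make rules that differ only by identifier renaming compare equal is to relabel identifiers in order of first use. A small sketch, assuming a simplified program representation as a list of (op, operand_names) pairs, is shown below; for a rule, the same mapping would be applied to both the IR and ISA halves.

def canonicalize_identifiers(program):
    # program: list of (op, operand_names); relabels every identifier in
    # order of first appearance so that two programs differing only in
    # naming produce the same canonical form.
    mapping = {}
    canonical = []
    for op, operands in program:
        renamed = []
        for name in operands:
            if name not in mapping:
                mapping[name] = "v{}".format(len(mapping) + 1)
            renamed.append(mapping[name])
        canonical.append((op, tuple(renamed)))
    return tuple(canonical)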
Rule Equivalence The first three equivalence relations defined
above are for IR programs, but the analogous relations (∼CISA,
∼KISA, ∼DISA) for ISA instructions are also useful.
Putting everything together, we define rule equivalence ∼rule as follows:
∼IR := ⊎{∼CIR, ∼KIR, ∼DIR}   (7)
∼ISA := ⊎{∼CISA, ∼KISA, ∼DISA}   (8)
∼rule := ⊎{(∼IR ⊗ ∼ISA), ∼I rule}   (9)
Overall IR equivalence is defined as the transitive closure
of the union (notated with ⊎) of the three individual IR
relations. ISA equivalence is defined similarly. Overall rewrite
rule equivalence is then defined using the ⊗ operator, where
∼⊗ = ∼a ⊗ ∼b is defined as: (a1, b1) ∼⊗ (a2, b2) iff a1 ∼a a2
and b1 ∼b b2.
combining IR equivalence in this way with ISA equivalence,
and then combining the result with ∼I rule using ⊎.
The set of all duplicates of rule R is the rule equivalence
class [R]rule, where R′ ∈ [R]rule ⇐⇒ R ∼rule R′.
ψdup can be constructed by enumerating all elements of the
equivalence class [R]rule.
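One way to build ψdup is to enumerate the equivalence class explicitly. For example, the commutative part of the class can be generated by permuting the arguments of commutative instructions; the sketch below (with an assumed commutativity annotation) illustrates the idea, and the same approach extends to the same-kind and data-dependency remappings.

from itertools import permutations, product

COMMUTATIVE = {"add", "mul", "and", "or", "xor"}   # assumed annotation

def commutative_variants(program):
    # program: list of (op, operand_names); yields every reordering of the
    # arguments of commutative instructions, i.e. the ∼CIR orbit of program.
    options = []
    for op, operands in program:
        if op in COMMUTATIVE:
            options.append([(op, p) for p in permutations(operands)])
        else:
            options.append([(op, tuple(operands))])
    for variant in product(*options):
        yield list(variant)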
B. Excluding Composite Rules
We also exclude any rule whose effect can already be
achieved using the current set of generated rules (line 8
of Figure 1). We elucidate this using a simple example.
Assume the algorithm just constructed a new query for the
multisets mIR, mISA, and the input IIR (line 7 of Fig-
ure 1) and assume that the rule library SR currently contains
rules for addition (I1 + I2 → add(I1, I2)), and multiplication
(I1 · I2 → mul(I1, I2)). Consider the following cases.
1) If IIR = (I1), mIR = {+}, and mISA = {add}, then
the rule I1 + I1 → add(I1, I1) will be synthesized by
CEGISAll . But this rule is a specialization of the existing
rule for addition. Any use of this specialized rule could
instead be replaced by the more general rule and this
rule can thus be excluded. Note that we order the inputs
on line 6 of Figure 1 to guarantee that the most general
version of a rule is found first.
2) If IIR = (I1, I2, I3), mIR = {+, ·}, and mISA =
{add, mul}, then the composite rule (I1 + (I2 · I3)) →
add(I1, mul(I2, I3)) will be synthesized by CEGISAll .
Using similar logic, any use of this composite rule
could instead use the simpler and more general rules
for addition and multiplication, and this rule can thus be
excluded. The multiset ordering used in lines 4 and 5 of
Figure 1 ensures that subsets are visited before supersets,
guaranteeing that smaller rules are found first.
Only a subset of composite rules built from existing rules
need to be excluded for each synthesis query. In general, for
a specific query based on mIR, mISA, and IIR, we exclude
composite rules R := (PIR, PISA) that meet the following
criteria:
• R has exactly |IIR| inputs.
• PIR has the same components as mIR.
• PISA has the same components as mISA.
• PIR is built from the IR programs of already-found rules
in SR.
1    genAll LC(KIR, KISA, NIR, NISA, cost):
2      Ksorted ← sortByCost(KISA, NISA, cost)
3      SR ← {}
4      for n ∈ [1, NIR]:
5        for mIR ∈ multicomb(KIR, n):
6          for mISA ∈ Ksorted:
7            ccur ← cost(mISA)
8            for IIR, IISA ∈ allInputs(mIR, mISA):
9              ϕ, LIR, LISA ← GCBPS(mIR, mISA, IIR, IISA)
10             ϕ ← ϕ ∧ ¬AllComposites LC(SR, ccur, . . .)
11             SR ← SR ∪ CEGISAll LC(ϕ, mIR, mISA, LIR, LISA)
12     return SR
Fig. 3: Iterative algorithm to generate all lowest-cost rules.
ISA multisets are ordered by cost. CEGISAll is modified to
exclude rules with duplicate IR programs.
• PISA is the result of applying the rewrite rules used to
build PIR.
These checks are encapsulated by the call to AllComposites
on line 8 of Figure 1.
V. GENERATING ALL LOWEST-COST RULES
Because all duplicates are excluded, the genAll algorithm
generates only unique rewrite rules. However, two unique rules
can share the same IR pattern. For a particular IR pattern, only
the lowest-cost rule is needed for some cost metrics. Knowing
the instruction selection cost metric at rule-generation time
presents another time-saving opportunity because we can also
prevent the synthesis of high-cost rules.
We make a few assumptions about such a cost metric.
• The cost for an instruction selection tiling is equal to the
sum of the costs of each tiling rule’s ISA program.
• The cost of an ISA program PISA only depends on the
instruction contents, not the program structure. This cost
is the sum of the cost of each instruction in the program.
While these assumptions are a restriction on the space of
possible cost metrics, they are sufficient to represent common
ones like code size and energy. If the compiler’s cost metric
violates these assumptions, the genAll algorithm can be used
instead. This restricted space of cost metrics has the important
property that the cost of any rule that would be synthesized
using the components mISA can be determined up front as
the sum of the cost of each component.
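Under these assumptions the cost of a candidate multiset is just a table lookup and a sum. For example (the per-instruction numbers below are illustrative assumptions, except that the paper states a 2.5 : 1 mul-to-add ratio for its energy metric):

CODE_SIZE = {"neg": 1, "add": 1, "add3": 1, "mul": 1, "mac": 1}
ENERGY = {"neg": 1.0, "add": 1.0, "add3": 2.0, "mul": 2.5, "mac": 3.0}  # assumed values

def multiset_cost(m_isa, cost_table):
    # Additive cost of an ISA multiset, known before any synthesis is run.
    return sum(cost_table[op] for op in m_isa)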
Figure 3 shows our synthesis algorithm updated to only
synthesize the lowest-cost rules for each unique IR pattern.
The first change is to sort all possible multisets of ISA
instructions up to size N ISA by cost (lower cost first) (line
2). This ordering ensures that the first rule synthesized for a
particular IR program will be the lowest-cost version of that
rule. Therefore, after synthesizing a new rule, all rules with
a duplicate IR program can be excluded. The second change
excludes rules with duplicate IR programs. A duplicate IR
program is defined using the IR equivalence relation:
∼IRLC := ⊎{∼CIR, ∼KIR, ∼DIR, ∼I IR}   (10)
This is the same definition as (7), but with an additional relation ∼I IR defined as PIR_1 ∼I IR PIR_2 iff PIR_2 is the result of remapping variable identifiers in PIR_1. The CEGISAll LC function called on line 11 is the same as CEGISAll, except that it uses ∼IRLC instead of ∼IR when constructing ψdup.
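Sorting the ISA multisets by this up-front cost (line 2 of Figure 3) is then straightforward; a sketch using a cost function such as multiset_cost above:

from itertools import combinations_with_replacement

def sort_by_cost(k_isa, n_isa_max, cost):
    # All ISA multisets up to the maximum size, ordered by additive cost so
    # that cheaper candidates are attempted first.
    multisets = [m for n in range(1, n_isa_max + 1)
                 for m in combinations_with_replacement(k_isa, n)]
    return sorted(multisets, key=cost)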
The third change modifies AllComposites to use the
known up-front cost cost(mISA). To see how this works,
we consider again the example from Section IV-B. As be-
fore, we assume SR currently contains two rules: one for
addition (I1 + I2 → add(I1, I2)) and one for multiplication
(I1 · I2 → mul(I1, I2)). We assume the target (ISA) expres-
sions for these rules have cost 5 and 10, respectively. Consider
the following situation:
• Suppose IIR = (I1, I2, I3), and mIR = {+, ·}. It
might be possible to synthesize a rule that has IR pat-
tern (I1 + (I2 · I3)). We know that the composite rule
(I1 + (I2 · I3)) → add(I1, mul(I2, I3)) would have a
cost of 15 since rule costs are additive. Therefore, we
can exclude any rule that matches this IR pattern and has
cost(mISA) ≥ 15.
To implement this, only one adjustment needs to be made
to the conditions in Section IV-B. Instead of requiring PISA
to have the same components as mISA, we simply require
cost(PISA) ≥ cost(mISA), i.e., for rules matching the other
conditions, if the ISA program has a cost equal to or greater than the cost of the ISA program in the current rule, it is excluded. These conditions are encapsulated by the call to AllComposites LC (line 10).
VI. EVALUATION
Our evaluation strategy is threefold. We first show that our
algorithm is capable of producing a variety of many-to-many
rules. A good set of rewrite rules involves both many-to-
one and one-to-many rules. We also show that by removing
duplicate, composite, and high-cost rules, we produce a much
smaller set of rewrite rules. Second, we analyze the effect on
performance of the optimizations described above. We show
that they all significantly reduce the time spent in synthesis.
Finally, we show that by using different cost metrics, we can
generate different sets of lowest-cost rewrite rules.
A. Implementation
All instructions are formally specified using the hwtypes
Python library [11], which leverages pySMT [18] to construct
(quantifier-free) SMT queries in the theory of bit-vectors.
We also use annotations indicating which instructions are
commutative. We use Boolector [28] as the SMT solver and
set a timeout of 12 seconds for each CEGIS invocation. Every
synthesized rewrite rule is independently verified to be valid.
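The independent validity check amounts to an SMT equivalence query over 4-bit inputs; a minimal sketch using Z3 is shown below (the paper itself uses Boolector through pySMT):

from z3 import BitVec, Solver, Not, unsat

def rule_is_valid(ir_expr, isa_expr):
    # The rule is valid iff no input assignment distinguishes the two
    # expressions, i.e. their disequality is unsatisfiable.
    solver = Solver()
    solver.add(Not(ir_expr == isa_expr))
    return solver.check() == unsat

# Example: the IR instruction not(I1) against the ISA 1a pattern nand(I1, I1).
I1 = BitVec("I1", 4)
nand = lambda a, b: ~(a & b)
assert rule_is_valid(~I1, nand(I1, I1))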
B. Instruction Specifications
To evaluate our algorithms, we selected small but non-trivial
sets of IR and ISA instructions operating on 4-bit bit-vectors.
IR We define the IR instruction set to be constants (0, 1),
bitwise operations (not, and, or, xor), arithmetic operations
(neg, add, sub), multiplication (mul), unsigned comparison
operations (ult, ule, ugt, uge), equality (eq), and dis-equality
(neq).
ISA 1 This is a minimal RISC-like ISA containing only 6
instructions: nand, sub, three comparison instructions (cmpZ,
cmpN , cmpC) which compute the zero (Z), sign (N), and
carry (C) flags respectively for a subtraction, and a flag
inverting instruction (inv).
ISA 2 This is an ISA specialized for linear algebra. It supports
the 5 instructions: neg, add, add3 (addition of 3 values), mul,
and mac (multiply-accumulate).
C. Rewrite Rule Synthesis
For each ISA we run three experiments. The first experiment
(All Rules) is the baseline that generates all many-to-many
rules including duplicate, composite, and high cost rules. This
is an implementation of Buchwald et al.’s IterativeCEGIS al-
gorithm extended to use GCBPS for many-to-many rules (no-
tated as IterativeCEGISGCBPS ). The second (Only Unique)
generates only unique rules by excluding all duplicates and
composites using the genAll algorithm. The third (Only
Lowest-Cost) generates only the lowest-cost rules using the
genAll LC algorithm in Figure 3. A code-size cost metric is
used, i.e., cost(K) is just the number of components in K.
For ISA 1, we split the rule generation into two parts.
The first part (ISA 1a) synthesizes rules composed of bitwise
and arithmetic IR instructions using the ISA’s nand and
sub instructions. The second part (ISA 1b) synthesizes rules
composed of constants and comparison instructions using the
four instructions cmpZ, cmpN , cmpC, and inv.
For 1a and 1b, we synthesize rewrite rules up to an IR
program size of 2 and an ISA program size of 3 (written 2-to-
3). For (Only Lowest-Cost), we increase the ISA program size
to 5 and 4 respectively. For ISA 2, we synthesize all rewrite
rules composed of constant, and arithmetic (including mul)
IR instructions up to size 3-to-2.
The number of rewrite rules produced for ISA 1a, 1b, and
2 are shown in Tables I, II, and III, respectively. Each table
entry is the number of rewrite rules synthesized for a particular
IR and ISA program size. For all ISAs, the extra synthesized
rules in (All Rules) were compared against the duplicate and
composite rules excluded by (Only Unique). Entries in (All
Rules) marked with a ‘(-n)’ represent ‘n’ rules that (Only
Unique) synthesized, but (All Rules) missed due to CEGIS
timeouts. The (All Rules) experiment for the entry marked
with an asterisk could not complete in 70 hours, so the number
calculated from (Only Unique) is shown.
For both ISAs we were able to synthesize 1-to-many
and many-to-1 rules for both IR and ISA instructions.
genAll produced a more complete set of rules than IterativeCEGISGCBPS.
Table IV shows the percentage of rules that are duplicates
or composites in the first column, and the percentage of rules
that are high cost in the second column. Most rules in (All
Rules) are duplicates, composites, or high cost. Out of the
349179 rules up to size 3-to-2 for ISA 2 (i.e. the sum of the
(All Rules)), 99.5% are duplicates or composites. Similarly,
most rules are high cost. In ISA 1a, 59672 out of 59822 rules
(99.7%) up to size 2-to-3 are high cost.
D. Synthesis Time Improvement with genAll
In this section we showcase the synthesis time im-
provements of genAll . The first experiment is the baseline
IterativeCEGISGCBPS . The second excludes duplicate rules
(i.e., with line 8 of Figure 2). The third, genAll , excludes both
duplicates and composites (i.e. with line 8 of both Figure 2
and Figure 1).
For each GCBPS query, we note the time required (tsat )
to run CEGISAll . Next, we measure the number of unique
rules (Nunique ) found by CEGISAll . We then add the pair
(Nunique , tsat ) to our dataset. We plot the cumulative synthesis
time versus the number of unique rules found by doing the fol-
lowing. Each data point is sorted by its slope (tsat /Nunique ).
Then, the increase in both tsat and Nunique is plotted for each
sorted point. Some data points have Nunique = 0 indicating
that every synthesized rule was redundant and is shown using
a vertical slope.
The synthesis time plot for unique rewrite rules for ISA
1b up to size 2-to-3 is shown in Figure 4a. Excluding all
duplicates shows a 5.3× speedup. Excluding both duplicates
and composites shows a 6.2× speedup. Both optimizations
find an additional 5 unique rules.
E. Synthesis Time Improvement with genAll LC
We also showcase the synthesis time improvements of
genAll LC using a similar setup. The first experiment is the
baseline IterativeCEGISGCBPS . The second excludes IR du-
plicate rules. The third, genAll LC, excludes both IR duplicates
and IR composites.
We use the same experimental setup as before except when
computing Nunique , all higher-cost rules are filtered instead.
The synthesis time plot for lowest-cost rewrite rules for ISA
1b up to size 2-to-3 is shown in Figure 4b.
Excluding rules with duplicate IR programs provides a 41×
speed-up. Also excluding high-cost composites provides a
1254× speed-up over the baseline (All Rules) configuration.
F. Total Speed-up
We summarize the speed-ups of genAll and genAll LC
compared to the IterativeCEGISGCBPS baseline for all con-
figurations in Table V. We compare the synthesis time in the
“Synth” column. We compare the total algorithm runtime in
the “Total” column (including time for iterating, solving, rule
filtering, etc.). The last row’s baseline did not complete in 70
hours, so we provide lower bounds for speed-up.
                 All Rules               Only Unique             Only Lowest-Cost
IR Prog          ISA Program Size        ISA Program Size        ISA Program Size
Size             1     2     3           1    2    3             1    2    3    4    5
1                5     32    1096        3    10   189           3    4    2    1    0
2                76    1719  56894       40   96   1940          40   67   34   12   6
TABLE I: Number of synthesized rewrite rules for ISA 1a.
                 All Rules                   Only Unique            Only Lowest-Cost
IR Program       ISA Program Size            ISA Program Size       ISA Program Size
Size             1    2          3           1    2    3            1    2    3    4
1                17   71         3662        9    51   873          7    3    0    0
2                89   3942 (-5)  199572      78   717  21511        52   64   9    0
TABLE II: Number of synthesized rewrite rules for ISA 1b.
                  All Rules               Only Unique          Only Lowest-Cost
ISA Program       IR Program Size         IR Program Size      IR Program Size
Size              1    2     3            1    2    3          1    2    3
1                 11   287   3998         3    14   315        3    14   315
2                 10   3115  341758∗      3    69   1337       1    32   760
TABLE III: Number of synthesized rewrite rules for ISA 2.
(a) genAll
(b) genAll LC
Fig. 4: Cumulative synthesis time comparison for ISA 1b up to size 2-to-3.
ISA   Rule Size up to (IR, ISA)   % Duplicate or Composite   % High-cost
1a    (2, 3)                      96.2%                      99.7%
1b    (2, 3)                      88.8%                      99.9%
2     (3, 2)                      99.5%                      99.7%
TABLE IV: Percent of rewrite rules up to (IR, ISA) size that are a duplicate or a composite, and percent that are high-cost.
ISA   Rule Size up       genAll Speed-up        genAll LC Speed-up
      to (IR, ISA)       Synth      Total       Synth       Total
1a    (2, 2)             3.5×       1.3×        11×         2.8×
1b    (2, 2)             3.1×       1.7×        26×         2.8×
2     (2, 2)             11×        2×          53×         2.5×
1a    (2, 3)             12×        6.8×        601×        57×
1b    (2, 3)             6.2×       2.7×        1254×       63×
2     (3, 2)             > 768×     > 81×       > 4004×     > 171×
TABLE V: Speed-ups compared to IterativeCEGISGCBPS.
The speed-ups depend on many parameters including the
maximum size of the rewrite rules, the number of possible
instructions, the commutativity of the instructions, and the
semantics of the instructions. The optimizations discussed
produce several orders of magnitude speed-ups. Further op-
timizing the non-solver portions (e.g. re-coding in C) would
drastically increase the “Total” speed-ups to be closer to the
“Synth” ones. Clearly, the combination of all optimizations
discussed in this paper can produce speed-ups of several orders
of magnitude.
ISA   Rule Size up to (IR, ISA)   Unique (CS)   Unique (E)   Common
1a    (2, 5)                      121           161          48
1b    (2, 4)                      99            198          36
2     (3, 2)                      134           137          991
TABLE VI: Number of unique and common rewrite rules synthesized for code size (CS) and energy (E) cost metrics.
G. Cost Metric Comparisons
Our final experiment explores how the choice of cost metric
influences the rules. We have implemented two cost metrics:
a code size metric (CS) and an estimated energy metric (E).
The energy metric was created to correspond to real hardware
energy data. For example the cost ratio for mul and add is
1 : 1 for code size, but is 2.5 : 1 for energy. The number of
common and unique lowest-cost rewrite rules for each ISA is
shown in Table VI.
While there is some overlap in common rules, each cost
metric produces a differing set of unique lowest-cost rules.
VII. CONCLUSION AND FUTURE WORK
We showed that many-to-many instruction selection rewrite
rules can be synthesized for various ISAs using program
synthesis. This supports two major trends in computer archi-
tecture. The first is the trend towards simple or reduced instruc-
tion architectures where multiple instructions are needed for
simple operations. It also supports the trend to introduce more
complex domain-specific instructions for energy efficiency.
In this case, a single instruction can implement complex
operations.
We showed that our algorithms are efficient. Removing du-
plicates, composites, and higher-cost rules results in multiple
orders of magnitude speed-ups. Synthesizing many-to-many
rewrite rules for modern IRs and ISAs may require further
optimizations. Many of our synthesized rules contain program
fragments that a compiler would optimize before instruction
selection (e.g., sub(X, X)). Excluding these could result in
further speed-ups.
Buchwald et al. [6] presented generalizations for multi-
sorted instructions, multiple outputs, preconditions, and inter-
nal attributes, enabling the modeling of memory and control
flow instructions. Our synthesis query and algorithms are
orthogonal and could incorporate these features, allowing for
a broader range of possible instruction sets.
As is the case in prior work, we limit synthesis to loop free
patterns. Relaxing this constraint and using other instruction
selection algorithms would be an interesting research avenue.
We believe this research area is fertile ground and hope our
work inspires and enables future research endeavors towards
the goal of automatically generating compilers for emerging
domain-specific architectures.
REFERENCES
[1] Alfred V. Aho, Mahadevan Ganapathi, and Steven W. K. Tjiang. Code
generation using tree matching and dynamic programming. ACM Trans-
actions on Programming Languages and Systems (TOPLAS), 11(4):491–
516, 1989.
[2] Aws Albarghouthi, Sumit Gulwani, and Zachary Kincaid. Recursive
program synthesis. In Computer Aided Verification: 25th International
Conference, CAV 2013, Saint Petersburg, Russia, July 13-19, 2013.
Proceedings 25, pages 934–950. Springer, 2013.
[3] Rajeev Alur, Arjun Radhakrishna, and Abhishek Udupa.
Scaling
enumerative program synthesis via divide and conquer. In International
Conference on Tools and Algorithms for the Construction and Analysis
of Systems, pages 319–336. Springer, 2017.
[4] Rick Bahr, Clark Barrett, Nikhil Bhagdikar, Alex Carsello, Ross Daly,
Caleb Donovick, David Durst, Kayvon Fatahalian, Kathleen Feng, Pat
Hanrahan, et al. Creating an agile hardware design flow. In 2020 57th
ACM/IEEE Design Automation Conference (DAC), pages 1–6. IEEE,
2020.
[5] Eli Bendersky. A deeper look into the LLVM code generator, Part 1,
Feb 2013.
[6] Sebastian Buchwald, Andreas Fried, and Sebastian Hack. Synthesizing
an instruction selection rule library from semantic specifications.
In
Proceedings of the 2018 International Symposium on Code Generation
and Optimization, pages 300–313, 2018.
[7] R. G. Cattell. Automatic derivation of code generators from machine
ACM Transactions on Programming Languages and
descriptions.
Systems (TOPLAS), 2(2):173–190, 1980.
[8] Yu-Hsin Chen, Joel Emer, and Vivienne Sze. Eyeriss: A spatial archi-
tecture for energy-efficient dataflow for convolutional neural networks.
ACM SIGARCH Computer Architecture News, 44(3):367–379, 2016.
[9] Ross Daly, Caleb Donovick, Jackson Melchert, Rajsekhar Setaluri,
Nestan Tsiskaridze Bullock, Priyanka Raina, Clark Barrett, and Pat
Hanrahan. Synthesizing instruction selection rewrite rules from RTL
In Conference on Formal Methods in Computer-Aided
using SMT.
Design (FMCAD), page 139, 2022.
[10] Joao Dias and Norman Ramsey. Automatically generating instruction
selectors using declarative machine descriptions. ACM Sigplan Notices,
45(1):403–416, 2010.
[11] Caleb Donovick, Ross Daly, Jackson Melchert, Lenny Truong, Priyanka
Raina, Pat Hanrahan, and Clark Barrett. Peak: A single source of truth
for hardware design and verification. arXiv preprint arXiv:2308.13106,
2023.
[12] Helmut Emmelmann, F.-W. Schr¨oer, and Rudolf Landwehr. BEG: A
generator for efficient back ends. ACM Sigplan Notices, 24(7):227–237,
1989.
[13] Herbert Enderton and Herbert B. Enderton. A mathematical introduction
to logic. Elsevier, 2001.
[14] Christopher W. Fraser and David R. Hanson. A retargetable C compiler:
Design and implementation. Addison-Wesley Longman Publishing Co.,
Inc., 1995.
[15] Christopher W. Fraser, David R. Hanson, and Todd A. Proebsting.
Engineering a simple, efficient code-generator generator. ACM Letters on
Programming Languages and Systems (LOPLAS), 1(3):213–226, 1992.
[16] Mahadevan Ganapathi. Retargetable code generation and optimization
using attribute grammars. PhD thesis, 1980. AAI8107834.
[17] Mahadevan Ganapathi and Charles N. Fischer. Description-driven
In Proceedings of the 9th
code generation using attribute grammars.
ACM SIGPLAN-SIGACT Symposium on Principles of Programming
Languages, POPL ’82, page 108–119, New York, NY, USA, 1982.
Association for Computing Machinery.
[18] Marco Gario and Andrea Micheli. PySMT: A solver-agnostic library
for fast prototyping of SMT-based algorithms. In SMT Workshop 2015,
2015.
[19] R. Steven Glanville and Susan L. Graham. A new method for compiler
In Proceedings of the 5th ACM SIGACT-SIGPLAN
code generation.
Symposium on Principles of Programming Languages, POPL ’78, page
231–254, New York, NY, USA, 1978. Association for Computing
Machinery.
[20] Orna Grumberg, Assaf Schuster, and Avi Yadgar. Memory efficient all-
solutions SAT solver and its application for reachability analysis. In For-
mal Methods in Computer-Aided Design: 5th International Conference,
FMCAD 2004, Austin, Texas, USA, November 15-17, 2004. Proceedings
5, pages 275–289. Springer, 2004.
[21] Sumit Gulwani, Susmit Jha, Ashish Tiwari, and Ramarathnam Venkate-
In Proceedings of the 32nd
san. Synthesis of loop-free programs.
ACM SIGPLAN Conference on Programming Language Design and
Implementation, 2011.
[22] John L. Hennessy and David A. Patterson. A new golden age for
computer architecture. Commun. ACM, 62(2):48–60, January 2019.
[23] Roger Hoover and Kenneth Zadeck. Generating machine specific
In Proceedings of the 23rd ACM SIGPLAN-
optimizing compilers.
SIGACT Symposium on Principles of Programming Languages, pages
219–229, 1996.
[24] Norman P. Jouppi, Cliff Young, Nishant Patil, and David Patterson. A
domain-specific architecture for deep neural networks. Communications
of the ACM, 61(9):50–59, 2018.
[25] Donald E. Knuth.
The Art of Computer Programming, Volume 4,
Fascicle 3: Generating All Combinations and Partitions. Addison-
Wesley Professional, 2005.
[26] David Ryan Koes and Seth Copen Goldstein. Near-optimal instruction
In Proceedings of the 6th Annual IEEE/ACM
selection on DAGs.
International Symposium on Code Generation and Optimization, pages
45–54, 2008.
[27] Jackson Melchert, Kathleen Feng, Caleb Donovick, Ross Daly, Ritvik
Sharma, Clark Barrett, Mark A Horowitz, Pat Hanrahan, and Priyanka
Raina. APEX: A framework for automated processing element design
space exploration using frequent subgraph analysis. In Proceedings of
the 28th ACM International Conference on Architectural Support for
Programming Languages and Operating Systems, Volume 3, pages 33–
45, 2023.
[28] Aina Niemetz, Mathias Preiner, and Armin Biere. Boolector 2.0. J.
Satisf. Boolean Model. Comput., 9(1):53–58, 2014.
[29] Eduardo Pelegri-Llopart and Susan L. Graham. Optimal code generation
for expression trees: An application of BURS theory. In Proceedings of the
15th ACM SIGPLAN-SIGACT Symposium on Principles of Programming
Languages, pages 294–308, 1988.
[30] Raghu Prabhakar, Yaqi Zhang, David Koeplinger, Matt Feldman, Tian
Zhao, Stefan Hadjis, Ardavan Pedram, Christos Kozyrakis, and Kunle
Olukotun. Plasticine: A reconfigurable architecture for parallel patterns.
ACM SIGARCH Computer Architecture News, 45(2):389–402, 2017.
[31] Takahisa Toda and Takehide Soh. Implementing efficient all solutions
sat solvers. Journal of Experimental Algorithmics (JEA), 21:1–44, 2016.
|
ai_researcher | 1 | Aplikasi_Media_Sosialisasi_Quality_Control_Circle_dan_Idea_Proposal_Guidance.pdf | Stochastic Item Descent Method
for Large Scale Equal Circle Packing Problem
Kun He1 , Min Zhang1∗ , Jianrong Zhou1 , Yan Jin1 , Chu-min Li2
1School of Computer Science and Technology, Huazhong University of Science and Technology, China
2MIS, Universit´e de Picardie Jules Verne, France
{ brooklet60, m zhang, yukihana0416,jinyan}@hust.edu.cn, [email protected]
0
2
0
2
n
a
J
2
2
]
C
O
.
h
t
a
m
[
1
v
0
4
5
8
0
.
1
0
0
2
:
v
i
X
r
a
Abstract
Stochastic gradient descent (SGD) is a powerful
method for large-scale optimization problems in
the area of machine learning, especially for a finite-
sum formulation with numerous variables. In re-
cent years, mini-batch SGD gains great success
and has become a standard technique for training
deep neural networks fed with big amount of data.
Inspired by its success in deep learning, we ap-
ply the idea of SGD with batch selection of sam-
ples to a classic optimization problem in decision
version. Given n unit circles,
the equal circle
packing problem (ECPP) asks whether there ex-
ist a feasible packing that could put all the cir-
cles inside a circular container without overlapping.
Specifically, we propose a stochastic item descent
method (SIDM) for ECPP in large scale, which ran-
domly divides the unit circles into batches and runs
Broyden-Fletcher-Goldfarb-Shanno (BFGS) algo-
rithm on the corresponding batch function itera-
tively to speedup the calculation. We also increase
the batch size during the batch iterations to gain
higher quality solution. Comparing to the current
best packing algorithms, SIDM greatly speeds up
the calculation of optimization process and guar-
antees the solution quality for large scale instances
with up to 1500 circle items, while the baseline
algorithms usually handle about 300 circle items.
The results indicate the highly efficiency of SIDM
for this classic optimization problem in large scale,
and show potential for other large scale classic op-
timization problems in which gradient descent is
used for optimization.
1 Introduction
Stochastic gradient descent (SGD) method [Robbins and
Monro, 1951] has gained great success in the area of ma-
chine learning [Bottou, 2010; Bottou et al., 2018]. Espe-
cially for deep learning tasks, mini-batch SGD has become
a standard technique for the training of deep neural net-
works fed with big amount of data [Goodfellow et al., 2016;
∗Corresponding author.
Lecun et al., 1998] . Inspired by its successful application for
such big, complex optimization problems, in this work, we
consider a classic global optimization problem well-studied
in the area of operations research for over 30 years[Kravitz,
1967], and apply the idea of batch gradient descent (BGD)
for this problem in large scale.
Specifically, we consider the equal circle packing problem
(ECPP) in decision version, the purpose of which is to answer
whether a dense arrangement of n unit circles without over-
lapping (i.e. feasible) in a circular container of fixed radius.
If we already have an efficient algorithm for the decision ver-
sion, the optimal version of minimizing the container radius
for feasible packings can be solved efficiently by combining
divide and conquer on the container radius. Our motivation is
how to design an algorithm that is very fast so as to address
the problem in large scale where hundreds and thousands of
unit circles are considered.
Finding the optimal solution of ECPP with plenty number
of circles is known to be NP hard, even the search of a subop-
timal solution is still very challenging. Many researchers de-
sign heuristic algorithms to find a suboptimal packing pattern.
In recent years, the quasi-physical energy based method was
proposed which could solve ECPP in optimal version with
up to a hundred items. Many quasi-physical researches re-
gard each circle as an elastic item and treat the container as
a rigid hollow container [He et al., 2013; He et al., 2015;
He et al., 2018]. If two items, or an item with the container
are squeezed against each other, the whole system would have
elastic potential energy, and by gradient descent method like
Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm [Liu
and Nocedal, 1989] we can reduce the potential energy of the
system so as to remove the overlapping. Then some Basin-
hopping strategy is used to jump out of the local optimal trap
where overlaps still exist.
On the other hand, SGD is a classic first-order optimiza-
tion algorithm widely used in large scale machine learning
problems due to its low computational cost and modest pre-
cision [Bottou, 2010; Bottou and Bousquet, 2008].
In the
training of deep neural networks, SGD plays a key role in the
optimization process, and promotes the great success of deep
learning. In the iteration of SGD, it randomly selects a sam-
ple and then optimizes the loss function corresponding to the
current sample.
Inspired by the success of SGD in deep learning, can we
apply this idea to the classic optimization problem of ECPP?
Specifically, can we randomly select a unit circle and opti-
mize the corresponding optimization function? If each time
a batch of circles are selected for gradient descent by fix-
ing other circles, then we have a batched version of SGD.
As quasi-Newton methods have been shown superior to first-
order gradient descent method for various circle packing
problems, we choose a quasi-Newton method like BFGS and
combine it with random batch, and design a batched version
of stochastic BFGS for ECPP.
Therefore, we propose a novel approach called stochastic
item descent method (SIDM), which can find dense layouts
for large scale ECPP. SIDM accelerates the search process,
especially for a large number of unit circles. In addition, after
attaining a local minimum or saddle points, we improve the
hopping strategy in the current best solution [He et al., 2018],
in which we gradually increase the shrinking radius of con-
tainer during the iteration to find better solutions. Compar-
ing to state-of-the-art algorithms that can only address small-
scale ECPP within reasonable time, SIDM can address up to
n = 1500 instances and reach current best solution reported
on the packomania website 1.
Our main contributions are listed as follows:
• The proposed method, SIDM, speeds up the process of reaching a local minimum or saddle point, which is the main computational load of ECPP.
• We improve the basin-hopping procedure of the existing strategy used to escape suboptimal layouts, and shrink the radius of the container more flexibly.
• Experiments demonstrate that SIDM greatly accelerates the computation while maintaining state-of-the-art packing quality.
2 Related Work
In the literature, most researchers address the optimal version of ECPP, which requires finding the smallest container radius for all items. They usually solve the decision version of ECPP as a sub-problem and then use binary search (divide and conquer on the container radius) to find the smallest container radius that admits a feasible packing. The efficiency and effectiveness of the overall algorithm mainly depend on the algorithm for the decision version. Thus, in this work, we focus on improving the efficiency of the sub-algorithm for the decision version while maintaining the same effectiveness. In the following, we provide an overview of work on the optimal version of ECPP.
ECPP has been well studied since the 1960s [Pirl, 1969]. Mathematicians found the optimal packing pattern for 1 ≤ n ≤ 13 [Pirl, 1969; Melissen, 1994; Fodor, 2000; Fodor, 2003] and n = 19 [Fodor, 1999]. However, it is very hard to mathematically find optimal solutions for larger n, and mathematicians have only found suboptimal packing patterns for n ≤ 25 [Pirl, 1969; Goldberg, 1971; Reis, 1975].
To strike a good trade-off between computational efficiency and solution quality, greedy-based heuristic algorithms perform well for n ≤ 100. Graham et al. proposed methods that simulate repulsion forces and billiards to iteratively search for the globally optimal layout [Graham et al., 1998], and found suboptimal solutions for 25 ≤ n ≤ 65. Akiyama et al. obtained dense layouts for n = 70, 73, 75, 77, 78, 79, 80 with a greedy algorithm [Akiyama et al., 2003]. Then, Grosso et al. proposed a monotonic basin hopping algorithm that improved many solutions for 66 ≤ n ≤ 100 [Grosso et al., 2010].
¹http://www.packomania.com
For heuristic approaches, a typical way is to transform ECPP into a discrete optimization problem, i.e., putting the unit circles into the container one by one [Chen et al., 2018], and then incorporating some search methods to improve the solution. A beam search algorithm [Akeb et al., 2009] and a greedy heuristic algorithm [Chen et al., 2018] have been proposed, both based on the maximum hole degree method [Huang et al., 2003]. However, the solution quality is rather limited.
Another approach is to formulate ECPP as a continuous optimization problem: put all circles into the container allowing overlaps, use gradient-based optimization algorithms to continually adjust the positions of the unit circles, and shrink the container radius for the next round of search if a feasible solution is found. Specifically, quasi-physical models are used that regard each circle as an elastic item and treat the container as a rigid hollow container [He et al., 2013; He et al., 2015; He et al., 2018]. If two items, or an item and the container, are squeezed against each other, the whole system has a certain elastic potential energy, and by a gradient-based method like BFGS we can reduce the potential energy of the system so as to remove the overlapping. Some basin-hopping strategy can then be used to jump out of local optimal traps where overlaps still exist. This category mainly includes quasi-physical algorithms [Huang et al., 2001; Wang et al., 2002; Liu et al., 2016; Zhang and Deng, 2005; Huang and Ye, 2011], basin hopping algorithms [Addis et al., 2008], iterated Tabu search algorithms [Fu et al., 2013; Zeng et al., 2016], and evolutionary search algorithms [Flores et al., 2016]. Huang et al. proposed a global optimization algorithm based on quasi-physics, tested it on instances of 1 ≤ n ≤ 200, and obtained 63 better packings [Huang and Ye, 2011]. He et al. proposed a quasi-physical quasi-human algorithm (QPQH) [He et al., 2018] that utilizes local neighbor information to speed up the calculation; tested on instances of n = 1, 2, ..., 320, it obtained 66 denser layouts with smaller container radii, which is the current state of the art.
To our knowledge, there are no formal publications on instances of n > 320, probably due to the large computational complexity. On the circle packing website http://www.packomania.com, the website maintainer Eckard Specht reported results for n = 1 to 5000 for ECPP, using his "program cci, 1999–2014". However, he did not report the running time or the computing machine, nor did he release his code.
The quasi-physical model is a general model popularly used for solving ECPP; it includes a key algorithm to obtain a suboptimal layout and a basin-hopping strategy to jump out of local optima. Our proposed method adapts this framework, and our main contribution is the design of a mini-batch BFGS method that greatly speeds up the BFGS normally used for ECPP, such that we can solve instances with up to n = 1500 items. We believe this is significant progress for the general quasi-physical model.
3 Problem Formulation
The equal circle packing problem (ECPP) in its decision version asks whether we can pack n unit circles into a circular container with fixed radius R, such that all circle items are within the border of the container and any two circle items do not overlap with each other.
Formally speaking, we build a Cartesian coordinate system with its origin located at the center of the container, and the coordinate of the center of circle i is denoted by (x_i, y_i), i ∈ {1, 2, ..., n}. Then we denote any layout configuration by X = (x_1, y_1, x_2, y_2, ..., x_n, y_n). Our purpose is to find a packing pattern of n circles without overlapping, i.e., to find (x_i, y_i), i ∈ {1, 2, ..., n}, such that:
$$\sqrt{x_i^2 + y_i^2} + 1 \le R,$$
$$\sqrt{(x_i - x_j)^2 + (y_i - y_j)^2} \ge 2,$$
where i, j ∈ {1, 2, ..., n}, i ≠ j. The first constraint denotes that any circle item does not intersect with the container and the second constraint indicates that any two items do not overlap with each other. Thus, we need to find 2n real numbers satisfying the two constraints, in which case we call X a feasible layout.
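As a minimal illustration (our own sketch, not code from the paper), the two constraints can be checked directly in Python; the function name and the numerical tolerance below are assumptions made for this example.

import math

def is_feasible(X, R, tol=1e-9):
    # X is a list of circle centers [(x1, y1), ..., (xn, yn)] in a container of radius R
    n = len(X)
    for i in range(n):
        xi, yi = X[i]
        # containment constraint: sqrt(xi^2 + yi^2) + 1 <= R
        if math.hypot(xi, yi) + 1.0 > R + tol:
            return False
        for j in range(i + 1, n):
            xj, yj = X[j]
            # non-overlap constraint: center distance >= 2 for unit circles
            if math.hypot(xi - xj, yi - yj) < 2.0 - tol:
                return False
    return True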
4 The General Quasi-physical Model
Among the current best approaches, researchers build a quasi-physical model to address this continuous optimization problem [Huang et al., 2001; He et al., 2018]. Regard the container as a rigid hollow item (denoted as item "0") fixed at the origin, and each circle i as a movable elastic circular item i. There will be some elastic potential energy if any two elastic items overlap, or if an item overlaps with the border of the container. We can then calculate the elastic potential energy for a layout configuration X, and if we reduce the potential energy by some gradient descent method, there will be less overlapping among the items.
5 The Proposed SIDM Algorithm
We adopt the general quasi-physical model for ECPP, and the key issue is how to find a local minimum of the potential energy efficiently so that we can handle large-scale instances. The advantage of our method is that it can efficiently find a feasible layout, which is also a global minimum layout for a fixed container radius. In the following discussion, we focus on the global optimization problem using the best-known radius reported on the Packomania website as the fixed container radius.
There are three procedures in the search for a feasible layout. First, a local search procedure finds a local minimum or saddle point; this is where our stochastic item descent method is proposed. The second is the basin-hopping procedure, for which we design a flexible strategy of shrinking the container radius. Finally, the global search procedure combines the local search and the basin-hopping procedures to search for a solution iteratively within reasonable time.
5.1 Stochastic Item Descent Method
For the local search procedure, we randomly select items as a mini-batch and use the classical BFGS algorithm [Liu and Nocedal, 1989] for gradient descent. The main idea of BFGS is to use the gradient information of the objective function U to approximate the inverse of the Hessian matrix, rather than calculating second-order derivatives at each iteration.
For simplicity, we use $X^s$ to denote the layout of a subset of unit circles, and $U^s$ is the corresponding elastic potential energy function of this set of circles. The complementary set of $X^s$ is denoted as $X^c$. The BFGS iteration for minimizing the potential energy $U^s$ has the form:
$$X^s_{k+1} \leftarrow X^s_k - \alpha_k H_k g_k, \qquad (3)$$
in which $X^s_k$ is the layout configuration at iteration k, $g_k$ is the gradient of $U^s$ at $X^s_k$, $H_k$ is a positive definite approximation of $\nabla^2 U^s(X^s_k)^{-1}$, and $\alpha_k$ is the step length (learning rate) at each iteration, defined in Eq. (4). $H_k$ is updated dynamically by Eq. (6), in which $I$ is the identity matrix and $u_k$, $v_k$ are defined in Eq. (5).
Definition 1 (Overlap Depth). There are two kinds of overlap: circle-circle overlap and circle-container overlap. The circle-circle overlap depth is defined as:
$$d_{ij} = \max\left(2 - \sqrt{(x_i - x_j)^2 + (y_i - y_j)^2},\; 0\right), \qquad (1)$$
where i ≠ j. The circle-container overlap depth is defined as:
$$d_{0i} = \max\left(\sqrt{x_i^2 + y_i^2} + 1 - R,\; 0\right). \qquad (2)$$
Definition 2 (Elastic Potential Energy). The elastic potential energy of the items is proportional to the square of the overlap depth. The potential energy $U_i$ of circle i is defined as $U_i = \sum_{j=0, j \neq i}^{n} d_{ij}^2$, and the total potential energy is $U(X) = \sum_{i=1}^{n} U_i$.
Obviously, the total energy U ≥ 0 for any layout configuration, and U = 0 if and only if X is a feasible layout, i.e., U = 0 is the global minimum of the potential. Thus, for a fixed R, we minimize U as the objective function so as to find a feasible solution.
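For concreteness, Eqs. (1)–(2) and Definition 2 translate directly into Python; the sketch below is ours (the function and variable names are assumptions, not from the paper), and it counts each circle-circle term twice because $d_{ij}^2$ appears in both $U_i$ and $U_j$.

import math

def total_energy(X, R):
    # X is a list of circle centers [(x1, y1), ..., (xn, yn)], R is the container radius
    n = len(X)
    U = 0.0
    for i in range(n):
        xi, yi = X[i]
        # circle-container overlap depth d_0i, Eq. (2)
        d0i = max(math.hypot(xi, yi) + 1.0 - R, 0.0)
        U += d0i ** 2
        for j in range(i + 1, n):
            xj, yj = X[j]
            # circle-circle overlap depth d_ij, Eq. (1)
            dij = max(2.0 - math.hypot(xi - xj, yi - yj), 0.0)
            U += 2.0 * dij ** 2   # d_ij^2 contributes to both U_i and U_j
    return U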
$$\alpha_k = \arg\min_{\alpha \in \mathbb{R}^+} U^s(X^s_k - \alpha H_k g_k) \qquad (4)$$
$$u_k = X^s_{k+1} - X^s_k, \qquad v_k = g_{k+1} - g_k \qquad (5)$$
$$H_{k+1} = \left(I - \frac{v_k u_k^T}{u_k^T v_k}\right)^T H_k \left(I - \frac{v_k u_k^T}{u_k^T v_k}\right) + \frac{u_k u_k^T}{u_k^T v_k} \qquad (6)$$
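As an illustration, the inverse-Hessian update of Eqs. (5)–(6) can be written with NumPy as follows. This is our own sketch; the helper name is an assumption, and safeguards such as a curvature check on the denominator are omitted for brevity.

import numpy as np

def bfgs_update(H, x_new, x_old, g_new, g_old):
    # One update of the inverse-Hessian approximation following Eqs. (5) and (6)
    u = (x_new - x_old).reshape(-1, 1)   # u_k = X^s_{k+1} - X^s_k
    v = (g_new - g_old).reshape(-1, 1)   # v_k = g_{k+1} - g_k
    denom = float(u.T @ v)               # u_k^T v_k
    I = np.eye(H.shape[0])
    A = I - (v @ u.T) / denom
    # H_{k+1} = A^T H_k A + u u^T / (u^T v)
    return A.T @ H @ A + (u @ u.T) / denom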
Based on the above definitions, we design a local BFGS algorithm, Algorithm 1, for optimizing the potential energy of the circles in X^s, while all other circles in X^c are fixed.

Algorithm 1 Local BFGS Algorithm
Input: A layout for a subset of circles X^s; container radius R.
Output: A local minimum layout X^s*.
1: iteration step k ← 0;
2: X^s_k ← X^s;
3: H_k ← I;
4: calculate g_k;
5: while k ≤ MaxIterNum do
6:    calculate α_k by Eq. (4);
7:    calculate X^s_{k+1} by Eq. (3);
8:    if U^s ≤ 10^-20 or ||g_k|| ≤ 10^-10 then
9:        return layout X^s_{k+1} as X^s*;
10:   end if
11:   calculate g_{k+1};
12:   calculate u_k, v_k, H_{k+1} by Eq. (5) and (6);
13:   k ← k + 1;
14: end while
15: return layout X^s_k as X^s*.

Combining the random selection of batches of unit circles with the local BFGS algorithm, we obtain our stochastic item descent method (SIDM). The specific idea is to randomly select a subset of circles each time and call local BFGS on this subset to get a locally better layout. We then continue to randomly select another batch of circles from the remaining set and repeat this operation until all the circles have been selected in a batch. This is equivalent to a random grouping of all circles for one round of iteration; the number of circles per group is recorded as s (except for the last group), and the local BFGS algorithm is called iteratively for each group.

Algorithm 2 Stochastic Item Descent Method
Input: A layout configuration X; container radius R.
Output: A local minimum layout X*.
1: s ← 100;
2: k ← 10;
3: g ← ⌊n/s⌋;
4: while g ≥ 1 do
5:    for i = 1 to k do
6:        randomly select s circles as a group, with a total of g groups;
7:        run Algorithm 1 for each group;
8:        if U ≤ 10^-20 then
9:            return current layout as X*;
10:       end if
11:   end for
12:   s ← min(s · 2, n);
13:   k ← max(⌊k/2⌋, 1);
14:   g ← ⌊n/s⌋;
15: end while
16: return current layout as X*.
If we simply continued to do another random grouping of the circles at the next round of iteration and ran local BFGS iteratively for each group again, then after k rounds of iterations the potential energy of the whole system would probably still be relatively high. Therefore, we consider reducing the number of groups for each round, which means the number of circles in each group increases. The local BFGS algorithm is still applied to reduce the potential energy of each group. We then need to go through k/2 rounds, and so on, until all circles are in one group at the end, in which case we run local BFGS on the whole system. As the overall packing is already relatively good at that point, a local minimum packing layout can be obtained quickly. The reason why we do not choose a fixed group size but increase s gradually is that a small fixed group size may cause oscillation during the iterations, much like stochastic gradient descent in neural network training, making it hard for the potential energy to converge to a local minimum. The pseudo code of the entire process is given in Algorithm 2.
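The grouping scheme described above can be sketched in Python as follows. This is an illustrative re-implementation under our own assumptions: it reuses total_energy from the earlier sketch, uses SciPy's generic BFGS minimizer (with numerical gradients) in place of the paper's C++ local BFGS, and stops after the full-system pass rather than looping indefinitely.

import numpy as np
from scipy.optimize import minimize

def group_energy(flat_pos, idx, positions, R):
    # Energy of the circles in `idx`, with all other circles held fixed
    pos = positions.copy()
    pos[idx] = flat_pos.reshape(-1, 2)
    return total_energy([tuple(p) for p in pos], R)

def stochastic_item_descent(positions, R, s=100, k=10, tol=1e-20):
    # Sketch of Algorithm 2: random grouping with per-group BFGS, doubling the group size
    positions = np.asarray(positions, dtype=float)
    n = len(positions)
    g = n // s
    while g >= 1:
        for _ in range(k):
            order = np.random.permutation(n)
            for start in range(0, n, s):
                idx = order[start:start + s]
                res = minimize(group_energy, positions[idx].ravel(),
                               args=(idx, positions, R), method="BFGS")
                positions[idx] = res.x.reshape(-1, 2)
            if total_energy([tuple(p) for p in positions], R) <= tol:
                return positions
        if s >= n:              # the full-system pass has been run; stop this sketch
            break
        s = min(s * 2, n)
        k = max(k // 2, 1)
        g = n // s
    return positions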
The selection of the group size s has an impact on the algorithm's efficiency. We experimentally tested two instances, n = 300 and n = 400, with various group sizes s = 50, 60, 70, ..., 150, and compare the average running time over 10 runs to reach a local minimum layout. The results are illustrated in Figure 1, from which we see that s = 100 performs best.
5.2 Basin-hopping and Global Search
Stochastic item descent usually obtains a local minimum layout, or in many cases a saddle point, and cannot guarantee that the elastic potential energy of the whole system is small enough, i.e., a feasible layout may not be found. In such cases, we need an appropriate basin-hopping strategy to help the current configuration jump out of the local optimum while having a better chance of moving toward
the global optimum.
The shrinking strategy works well on layouts with dense inner packing and sparse outer packing [He et al., 2018]. Intuitively, if we make the circles near the container center denser and make more use of the inner space, we may obtain a better layout. To reach a globally optimal layout, we often need to run the basin-hopping strategy multiple times. QPQH uses an identical shrinking scale for each initial shrinking radius. In practice, as the number of hops increases, it is unnecessary to squeeze the circles too far inward, while the circles near the boundary still need more precise adjustment because they are more scattered and irregular. Therefore, we adapt and improve the basin-hopping strategy of QPQH [He et al., 2018] by shrinking the radius of the container more flexibly.
Figure 1: Comparison of the average running time of 10 runs of SIDM, used to find a good group size.
The coordinates of all circles are fixed and the container radius is reduced by a factor of γ (0 < γ < 1): R = γR₀, where R₀ is the initial container radius and γ is defined as:
$$\gamma = \alpha + \beta \cdot hops + \frac{1 - \alpha - \beta \cdot hops}{m}\, k, \qquad (7)$$
in which α is the initial shrinking scale of the container radius, hops is the number of times the basin-hopping procedure has been run, β is the factor associated with hops that adjusts the shrinking scale during the iterations, m is the number of generated new layouts, and k varies from 0, 1, 2, ..., to m − 1. We then run stochastic item descent to reach a new layout.
If α is too small, all the circles converge toward the center of the container and the dense packing is broken severely. If α is too large, shrinking the container radius has little impact. If β is too small/large, the shrinking scale of each basin-hopping step increases too slowly/quickly during the iterations. Besides, if m is too small, the probability of generating new layouts of high quality is small; if m is too large, it is very slow to generate m new layouts. The values are chosen empirically: α = 0.4, β = 0.03 and m = 10.
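Eq. (7) with the empirical values above translates directly into code; the following short sketch (with our own naming) shows the resulting schedule.

def shrink_factor(k, hops, alpha=0.4, beta=0.03, m=10):
    # Shrinking factor gamma of Eq. (7) for the k-th new layout, k = 0, ..., m-1
    base = alpha + beta * hops
    return base + (1.0 - base) * k / m

# For hops = 0 this scales the radius by 0.4, 0.46, ..., 0.94 over k = 0, ..., 9.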
Algorithm 3 Global Search Procedure
Input: The container radius R₀.
Output: A global or local minimum layout.
1: randomly generate an initial layout;
2: run SIDM to obtain an updated layout X;
3: X* ← X;
4: hops ← 0;
5: while U(X*) > 10^-20 and time limit is not reached do
6:    for k = 0 to 9 do
7:        calculate γ by Eq. (7);
8:        R ← γR₀;
9:        run SIDM on layout X* with radius R to generate a new layout, denoted as X_k;
10:       run SIDM on X_k with radius R₀;
11:   end for
12:   if min_k U(X_k) < U(X*) then
13:       X* ← arg min_k U(X_k);
14:   end if
15:   hops ← (hops + 1) mod ⌊(1 − α)/β⌋;
16: end while
17: return current layout X*;

Table 1: Key parameters of the SIDM algorithm.
Parameter   Description                        Value
s           Initial group size                 100
α           Initial shrinking factor           0.4
β           Shrinking scale growing factor     0.03
m           Number of new layouts              10
Combining the local search procedure with the basin-hopping procedure, we obtain the global search algorithm, Algorithm 3, which finds a feasible layout in a fixed container. It is initialized with a random layout. We then run stochastic item descent to obtain a local minimum layout, and use the basin-hopping procedure to generate 10 new layouts. We continue to run SIDM on these packing patterns, and if some packing is better than the current local minimum packing, we update the current packing. The algorithm terminates when a global minimum layout is obtained or the time limit is reached. To highlight the key feature of the proposed method, we still denote the overall algorithm as SIDM.
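Putting the pieces together, Algorithm 3 can be sketched as follows, reusing total_energy, stochastic_item_descent and shrink_factor from the earlier sketches; the time limit, the uniform random initialization and the helper names are our own assumptions.

import time
import numpy as np

def _U(pos, R):
    # Total potential energy of a layout given as an (n, 2) array
    return total_energy([tuple(p) for p in pos], R)

def global_search(R0, n, time_limit=3600.0, alpha=0.4, beta=0.03, m=10, tol=1e-20):
    # Sketch of Algorithm 3: SIDM plus basin hopping by container shrinking
    best = np.random.uniform(-R0 + 1.0, R0 - 1.0, size=(n, 2))
    best = stochastic_item_descent(best, R0)
    hops = 0
    start = time.time()
    while _U(best, R0) > tol and time.time() - start < time_limit:
        candidates = []
        for k in range(m):
            gamma = shrink_factor(k, hops, alpha, beta, m)
            shrunk = stochastic_item_descent(best.copy(), gamma * R0)  # squeeze inward
            candidates.append(stochastic_item_descent(shrunk, R0))     # re-relax at the true radius
        best_new = min(candidates, key=lambda p: _U(p, R0))
        if _U(best_new, R0) < _U(best, R0):
            best = best_new
        hops = (hops + 1) % int(round((1 - alpha) / beta))
    return best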
5.3 Complexity Analysis
This subsection compares the time and space complexity of the BFGS algorithm and the local BFGS algorithm. For a fair comparison, we consider the complexity of one iteration of BFGS, in which all circle items update their positions once, against m iterations of local BFGS, where m is the number of batches, so that all circle positions are likewise updated once.
Each iteration of the BFGS algorithm (simply regarding $X^s$ as the layout of all circles) calculates the step length $\alpha_k$ by Eq. (4), the new layout by Eq. (3), the new gradient, and the updated Hessian approximation by Eqs. (5) and (6), with time complexities $O(n\log(len/\epsilon))$, $O(n^2)$, $O(n)$ and $O(n^2)$, respectively. Here len is the length of the real-number interval in the line search, $\epsilon$ is the search precision, and $n\log(len/\epsilon)$ is the time complexity of the line search algorithm. Thus, the total time complexity is $O(n\log(len/\epsilon) + n^2)$. The memory used by the BFGS algorithm is mainly for storing the Hessian matrix, so the space complexity is $O(n^2)$.
The time and space complexity for each batch of the local BFGS algorithm are analogous, namely $O((n/m)\log(len/\epsilon) + (n/m)^2)$ and $O((n/m)^2)$. So for m batches of local BFGS, the time complexity is m times that of a single batch, i.e., $O(n\log(len/\epsilon) + n^2/m)$, and the space complexity is $O(n^2/m)$.
The time complexity of BFGS and of m batches of local BFGS is mainly determined by the second term, which is $O(n^2)$ and $O(n^2/m)$, respectively. Hence the time complexity of BFGS is m times that of the m batches of local BFGS, and the space complexity of BFGS is likewise m times that of the m batches of local BFGS.
Therefore, we can conclude that SIDM using local BFGS search is more efficient than BFGS search from a complexity analysis point of view.
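As a concrete numerical illustration of these dominant terms (our own example, following the complexities stated above): for n = 1000 circles and m = 10 batches of s = 100, the quadratic term of one full BFGS sweep is on the order of n² = 10⁶ operations with n² = 10⁶ memory entries, whereas the m batches of local BFGS together cost on the order of n²/m = 10⁵ operations with n²/m = 10⁵ memory entries, a tenfold reduction in both.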
6 Experimental Results
We present our results on instances of n = 100, 200, 300, ..., 1500. The best-known packing results are maintained on the Packomania website, whereas most results for ECPP reported in the literature are for n ≤ 200. The Packomania website maintainer, Eckard Specht, also provides results computed with his program cci for n = 1 to 5000, but unfortunately he did not provide the running time, the computing machine, or the code. To our knowledge, no result has been formally published in the literature for n > 320, due to the exponential growth of computational complexity. The current state-of-the-art results formally published in the literature are from QPQH [He et al., 2018], which are not reflected on Packomania. Thus, we compare with QPQH [He et al., 2018] to demonstrate the efficiency of SIDM.
6.1 Experimental Setup
SIDM is programmed in C++ and implemented in the Visual Studio 2017 IDE. All experiments are carried out on a personal computer with a 2.5 GHz CPU and 8 GB RAM. Table 1 lists the key parameters of SIDM.
Table 2: Comparison of average running time.
n     R0              QPQH (s)   SIDM (s)
200   15.4632748785   1250       1668
210   15.8792012772   2412       1945
220   16.2253735494   1690       2047
230   16.5964300724   865        1912
240   16.8971658948   1960       2560
250   17.2629622393   2697       1867
260   17.6049551932   4617       2897
270   17.8872656677   6712       2976
280   18.2472267427   5478       3125
290   18.5493750704   3782       2698
300   18.8135833638   7153       4211
310   19.1848594632   8274       5712
320   19.4562307640   8397       4987
6.2 Computational Results
Our purpose is to evaluate whether SIDM can efficiently find a global minimum layout using the container radius reported on Packomania as the fixed container radius.
We first compare results between SIDM and QPQH (using the version in which the container radius is fixed) on instances of n = 200, 210, ..., 320. We run both algorithms five times each and report the average running time to reach a feasible pattern in Table 2. We also show the comparison in Figure 2 for an intuitive view. The average running times of the two algorithms are close when the number of circles is small (200 to 250), but as the number of circles increases, SIDM is clearly more efficient than QPQH.
Then, for the 15 instances of n = 100, 200, ..., 1500, we randomly place n circles in the container and run the overall SIDM algorithm. We stop the search when a global minimum layout is found or the maximum time limit of 15 hours is reached. For each instance, we run SIDM 10 times to reduce the impact of randomness. The results listed in Table 3 show that SIDM can find the global minimum layout for every instance except n = 1400. The hit count indicates the number of successful runs out of 10, and the time indicates the average running time over the successful runs.
Figure 2: Comparison of average running time of QPQH and SIDM.
The experimental results indicate that, as the number of circles increases, SIDM can find a feasible layout in most cases, and the running time increases almost linearly (2562 s for n = 100; scaling linearly, 2562 · 15 = 38430 s, which is close to the 41286 s observed for n = 1500).
Table 3: Experimental results for n = 100, 200, ..., 1500.
n      R0              Hit count   Time (s)
100    11.0821497243   1/10        2562
200    15.4632748785   8/10        1772
300    18.8135833638   7/10        4326
400    21.6895717951   7/10        7921
500    24.1329376240   6/10        9865
600    26.4274162694   4/10        16372
700    28.4958443164   5/10        12369
800    30.4212133790   3/10        15893
900    32.2330843545   1/10        13715
1000   33.9571409147   1/10        21735
1100   35.6161932968   2/10        19816
1200   37.1121608416   1/10        34682
1300   38.6047666608   2/10        28871
1400   40.0604065845   0/10        ——
1500   41.4126836805   1/10        41286
By comparison, QPQH cannot output any feasible result for n = 400, 500, ..., 1500 within the time limit.
7 Conclusion
Inspired by the idea of SGD in machine learning, we propose a stochastic item descent method for the large-scale equal circle packing problem (ECPP), which randomly divides the circles into batches and runs BFGS on the corresponding potential energy functions in iterations. To obtain a solution of high quality, we increase the batch size during the iterations. Besides, we improve the basin-hopping strategy and shrink the radius of the container more flexibly. Experiments have demonstrated that the proposed method is efficient for the large-scale equal circle packing problem.
In future work, we will adapt SIDM via binary search to the optimization version of the problem, which minimizes the container radius, and try the SIDM idea on various circle packing problems, such as packing equal or unequal circles into containers of various shapes. We also believe SIDM can be adapted to other classic optimization problems where gradient descent methods are used for optimization, including problems arising in the optimization process of large-scale machine learning.
References
[Addis et al., 2008] Bernardetta Addis, Marco Locatelli, and
Fabio Schoen. Efficiently packing unequal disks in a cir-
cle. Operations Research Letters, 36(01):37–42, 2008.
[Akeb et al., 2009] Hakim Akeb, Mhand Hifi, and Rym
M’Hallah. A beam search algorithm for the circular
Computer & Operation Research,
packing problem.
36(5):1513–1528, 2009.
[Akiyama et al., 2003] Jin Akiyama, Rika Mochizuki,
Nobuaki Mutoh, and Gisaku Nakamura. Maximin dis-
tance for n points in a unit square or a unit circle.
In
Discrete and Computational Geometry, pages 9–13, 2003.
[Bottou and Bousquet, 2008] Léon Bottou and Olivier Bous-
quet. The tradeoffs of large scale learning. In NIPS, pages
161–168. 2008.
[Bottou et al., 2018] L. Bottou, F. Curtis, and J. Nocedal.
Optimization methods for large-scale machine learning.
SIAM Review, 60(2):223–311, 2018.
[Bottou, 2010] Léon Bottou. Large-scale machine learning
In COMPSTAT, pages
with stochastic gradient descent.
177–186, 2010.
[Chen et al., 2018] Mao Chen, Xiangyang Tang, Ting Song,
Zhizhong Zeng, Xicheng Peng, and Sanya Liu. Greedy
heuristic algorithm for packing equal circles into a circular
container. Computers & Industrial Engineering, 119:114–
120, 2018.
[Flores et al., 2016] Juan J. Flores, Jose Martínez, and Felix
Calderón. Evolutionary computation solutions to the cir-
cle packing problem. Soft Computing, 20(4):1521–1535,
2016.
[Fodor, 1999] Ferenc Fodor. The densest packing of 19
congruent circles in a circle. Geometriae Dedicata,
74(2):139–145, 1999.
[Fodor, 2000] Ferenc Fodor. The densest packing of 12 con-
gruent circles in a circle. Contributions to Algebra and
Geometry, 41(2):401–409, 2000.
[Fodor, 2003] Ferenc Fodor. The densest packing of 13 con-
gruent circles in a circle. Contributions to Algebra and
Geometry, 44(2):431–440, 2003.
[Fu et al., 2013] Zhanghua Fu, Wenqi Huang, and Zhipeng
Lü.
Iterated tabu search for the circular open dimen-
sion problem. European Journal of Operational Research,
225(2):236–243, 2013.
[Goldberg, 1971] Michael Goldberg.
Packing of 14, 16,
17 and 20 circles in a circle. Mathematics Magazine,
44(3):134–139, 1971.
[Goodfellow et al., 2016] Ian J. Goodfellow, Yoshua Bengio,
and Aaron C. Courville. Deep Learning. Adaptive com-
putation and machine learning. MIT Press, 2016.
[Graham et al., 1998] R. L. Graham, B. D. Lubachevsky,
K. J. Nurmela, and P. R. J. Östergård. Dense packings
of congruent circles in a circle. Discrete Math., 181(1-
3):139–154, 1998.
[Grosso et al., 2010] A. Grosso, A. R. Jamali, M. Locatelli,
and F. Schoen. Solving the problem of packing equal and
unequal circles in a circular container. Journal of Global
Optimization, 47(1):63–81, 2010.
[He et al., 2013] Kun He, Danzeng Mo, Tao Ye, and Wenqi
Huang.
A coarse-to-fine quasi-physical optimization
method for solving the circle packing problem with equi-
librium constraints. Computers & Industrial Engineering,
66(4):1049–1060, 2013.
[He et al., 2015] Kun He, Menglong Huang, and Chenkai
Yang. An action-space-based global optimization algo-
rithm for packing circles into a square container. Com-
puters & Operations Research, 58:67–74, 2015.
[He et al., 2018] Kun He, Hui Ye, Zhengli Wang, and Jingfa
Liu. An efficient quasi-physical quasi-human algorithm
for packing equal circles in a circular container. Comput-
ers & Operations Research, 92:26 – 36, 2018.
[Huang and Ye, 2011] Wenqi Huang and Tao Ye. Global op-
timization method for finding dense packings of equal cir-
cles in a circle. European Journal of Operational Re-
search, 210(3):474 – 481, 2011.
[Huang et al., 2001] WQ Huang, Y Li, and RC Xu. Local
search based on a physical model for solving a circle pack-
ing problem. In Proceedings of the 4th Metaheuristics In-
ternational Conference, pages 455–459, 2001.
[Huang et al., 2003] Wenqi Huang, Yu Li, Bernard Ju-
rkowiak, Chumin Li, and Ruchu Xu. A two-level search
strategy for packing unequal circles into a circle container.
In Principles and Practice of Constraint Programming,
pages 868–872, 2003.
[Kravitz, 1967] Sidney Kravitz. Packing cylinders into cylin-
drical containers. Mathematics Magazine, 40(2):65–71,
1967.
[Lecun et al., 1998] Y. Lecun, L. Bottou, Y. Bengio, and
P. Haffner. Gradient-based learning applied to document
recognition. Proceedings of the IEEE, 86(11):2278–2324,
1998.
[Liu and Nocedal, 1989] Dong C Liu and Jorge Nocedal. On
the limited memory BFGS method for large scale optimiza-
tion. Mathematical programming, 45(1-3):503–528, 1989.
[Liu et al., 2016] Jingfa Liu, Kewang Zhang, Yonglei Yao,
Yu Xue, and Tinzhao Guan. A heuristic quasi-physical
algorithm with coarse and fine adjustment for multi-
objective weighted circles packing problem. Computers
& Industrial Engineering, 101:416–426, 2016.
[Melissen, 1994] Hans Melissen. Densest packings of eleven
congruent circles in a circle. Geometriae Dedicata,
50(1):15–25, 1994.
[Pirl, 1969] Udo Pirl. Der mindestabstand von n in der
einheitskreisscheibe gelegenen punkten. Mathematische
Nachrichten, 40(1-3):111–124, 1969.
[Reis, 1975] George E. Reis. Dense packing of equal cir-
cles within a circle. Mathematics Magazine, 48(1):33–37,
1975.
[Robbins and Monro, 1951] Herbert Robbins and Sutton
Monro. A stochastic approximation method. The Annals
of Mathematical Statistics, pages 400–407, 1951.
[Wang et al., 2002] Huaiqing Wang, Wenqi Huang, Quan
Zhang, and Dongming Xu. An improved algorithm for
the packing of unequal circles within a larger contain-
ing circle. European Journal of Operational Research,
141(2):440 – 453, 2002.
[Zeng et al., 2016] Zhizhong Zeng, Xinguo Yu, Kun He,
Wenqi Huang, and Zhanghua Fu. Iterated tabu search and
variable neighborhood descent for packing unequal circles
into a circular container. European Journal of Operational
Research, 250:615–627, 2016.
[Zhang and Deng, 2005] De-fu Zhang and An-sheng Deng.
An effective hybrid algorithm for the problem of packing
circles into a larger containing circle. Computers & Oper-
ations Research, 32:1941–1951, 2005.
|
ai_researcher | 7 | LLMatDesign_Autonomous_Materials_Discovery_with_Large_Language_Models.pdf | LLMatDesign: Autonomous Materials Discovery with
Large Language Models
Shuyi Jia†, Chao Zhang†, Victor Fung†§
†Computational Science and Engineering, Georgia Institute of Technology, Atlanta, GA, USA
§Corresponding author: [email protected]
Abstract
Discovering new materials can have significant scientific and technological implications but remains a chal-
lenging problem today due to the enormity of the chemical space. Recent advances in machine learning
have enabled data-driven methods to rapidly screen or generate promising materials, but these methods
still depend heavily on very large quantities of training data and often lack the flexibility and chemical
understanding often desired in materials discovery. We introduce LLMatDesign, a novel language-based
framework for interpretable materials design powered by large language models (LLMs). LLMatDesign uti-
lizes LLM agents to translate human instructions, apply modifications to materials, and evaluate outcomes
using provided tools. By incorporating self-reflection on its previous decisions, LLMatDesign adapts rapidly
to new tasks and conditions in a zero-shot manner. A systematic evaluation of LLMatDesign on several
materials design tasks, in silico, validates LLMatDesign’s effectiveness in developing new materials with
user-defined target properties in the small data regime. Our framework demonstrates the remarkable poten-
tial of autonomous LLM-guided materials discovery in the computational setting and towards self-driving
laboratories in the future.
1 Introduction
Discovering novel materials with useful functional properties is a longstanding challenge in materials science
due to the vast and diverse composition and structure space these materials can inhabit[1, 2]. Traditional
approaches to materials discovery often involve exhaustively screening materials via lab-based experiments or
in silico simulations, which can be time-consuming and resource-intensive[3, 4, 5]. Recent advancements have
introduced machine learning surrogate models to predict material structures and properties [6, 7], as well as
generative modeling techniques to propose novel materials [8, 9, 10, 11, 12, 13, 14]. However, these data-driven
methods rely heavily on extensive training datasets, generally derived from density functional theory (DFT)
calculations. These methods are less useful in most instances where such data is unavailable, or when only a
limited budget exists to perform experiments or high fidelity simulations. In contrast, a human expert would
be far more effective here by being able to draw from domain knowledge and prior experiences, and reason from
limited examples. Therefore, a different materials design paradigm is needed in these situations where models
should be developed to exhibit similar proficiencies as human experts.
Fueled by ever-expanding textual datasets and significant increases in computing power, large language mod-
els (LLMs) have witnessed a meteoric rise in capabilities and usage in recent years. More broadly, the re-
markable performance of LLMs across diverse tasks they have not been explicitly trained on has sparked a
burgeoning interest in developing and utilizing LLM-based agents capable of reasoning, self-reflection, and
decision-making[15, 16, 17]. These autonomous agents are typically augmented with tools or action modules,
empowering them to go beyond conventional text processing and directly interact with the physical world,
such as robotic manipulation [18, 19] and scientific experimentation [20, 21]. As the capabilities of LLMs and
LLM-based autonomous agents continue to expand, they are increasingly being recognized for their potential in
scientific domains, particularly in chemistry [22]. This surge in interest stems from the fact that the majority of
information in chemistry exists as text, aligning closely with the text-centric nature of LLMs [23]. For instance,
recent studies have demonstrated the use of LLMs to extract chemical reaction information [24, 25], predict
chemical properties [26, 27, 28, 29], and generate crystal structures [30, 31, 32], among many other applica-
tions. In particular, chemical research, such as materials discovery, traditionally hinges on human expertise and
experience encapsulated in scientific publications. LLMs, capable of ingesting vast quantities of these publi-
cations beyond human capacity, have the potential to act as intelligent copilots that might be able to extract
key insights, uncover hidden patterns, and propose novel methodologies, thereby accelerating scientific progress
[23].
Figure 1: Overview of LLMatDesign. The discovery process with LLMatDesign begins with user-provided inputs
of chemical composition and target property. It recommends modifications (addition, removal, substitution, or
exchange), and uses machine learning tools for structure relaxation and property prediction. Driven by an LLM,
this iterative process continues until the target property is achieved, with self-reflection on past modifications
fed back into the decision-making process at each step.
In this work, we present LLMatDesign (Fig. 1), a language-based framework for materials design powered
by state-of-the-art LLMs. LLMatDesign is capable of interpreting human-provided instructions and design
constraints, using computational tools for materials evaluation, and leveraging existing chemical knowledge and
feedback to act as a highly effective autonomous materials design agent. Unlike traditional methods that rely on
explicit mathematical formulations and programmed solvers, LLMatDesign as an autonomous agent works with
natural language directly, allowing it to quickly adapt to a diverse set of tasks, materials and target properties
by simply modifying the prompt. In each step, LLMatDesign generates new designs of a material by choosing
a modification of a starting material along with a corresponding hypothesis. It then applies the modification
to the material and validates its property. Here, we use surrogate models as a stand-in for DFT to perform
property validation, which can be readily replaced with any other computational or, potentially, experimental
validation method. Following this, LLMatDesign reflects on the applied modification and its outcome. This
reflection, along with the modified material and hypothesis, is then incorporated into the prompt in an iterative
process. Moreover, LLMatDesign’s flexibility allows incorporation of the entire modification history or user-
defined requirements, offering even finer control over the discovery process.
By utilizing state-of-the-art LLMs as chemical reasoning engines, LLMatDesign represents a novel framework
for materials discovery which, unlike many current data-driven generative methods, eliminates the need for large
training datasets derived from ab initio calculations. LLMatDesign’s ability to interpret human instructions and
incorporate design constraints enables rapid adaptation to new conditions, tasks, materials, and target proper-
ties via prompt modification—a flexibility that is often very difficult for current materials discovery methods
such as those using generative models. More importantly, LLMatDesign’s ability to generate hypothesis, eval-
uate outcomes, and self-reflect on past decisions in a closed-loop manner showcases the potential for a fully
automated artificial intelligence (AI) agent for materials design in both a computational setting or towards
robotic laboratories in the future.
2
2 Results
2.1 LLMatDesign Framework
LLMatDesign is a flexible framework powered by an LLM and empowered with the necessary tools to perform
materials discovery. The discovery process with LLMatDesign begins by taking the chemical composition and
property of a starting material, along with a target property value, as user-provided inputs. If a chemical com-
position is specified without an initial structure, LLMatDesign will automatically query the Materials Project
[33] database to retrieve the corresponding structure. If multiple candidates match the query, the structure with
the lowest formation energy per atom is selected. LLMatDesign then intelligently recommends one of four pos-
sible modifications—addition, removal, substitution, or exchange—to the material’s composition and structure
to achieve the target value. Specifically, “exchange” refers to swapping two elements within the material, while
“substitution” involves replacing one type of element with another. “Removal” means eliminating a specific
element from the material. In the case of “addition,” an atom of the suggested element is added to the unit cell
of the material, with its position randomly determined. These four choices act as a proxy to physical processes
in materials modification, such as doping or creating defects, and additional modification choices can also be
readily added or removed as desired within the framework.
LLMatDesign Prompt Template (GPT-4o)
I have a material and its <property>. <definition of property>.
(<chemical composition>, <property value>)
Please propose a modification to the material that results in <objective>. You can choose one of the four following
modifications:
1. exchange: exchange two elements in the material
2. substitute: substitute one element in the material with another
3. remove: remove an element from the material
4. add: add an element to the material
<additional constraints>
Your output should be a python dictionary of the following the format:
{Hypothesis: $HYPOTHESIS, Modification: [$TYPE, $ELEMENT 1, $ELEMENT 2]}.
Here are the requirements:
1. $HYPOTHESIS should be your analysis and reason for choosing a modification
2. $TYPE should be the modification type; one of “exchange”, “substitute”, “remove”, “add”
3. $ELEMENT should be the selected element type to be modified. For “exchange” and “substitute”, two $ELEMENT
placeholders are needed. For “remove” and “add”, one $ELEMENT placeholder is needed.
<modification history>
Figure 2: Prompt template for LLMatDesign with GPT-4o. Text placeholders in red angular brackets are
specific to the task given to LLMatDesign. Text placeholders in blue angular brackets are optional and can be
omitted if not needed. For Gemini-1.0-pro’s prompt template, see Appendix A.
Self-reflection Prompt Template
After completing the following modification on <previous composition>, we obtained <current composition>
and the <property> changed from <previous value> to <current value>. Please write a brief post-action reflection
on the modification, explaining how successful it was in achieving the <objective> and the reasons for its success or failure:
<hypothesis>, <modification>
Figure 3: Prompt template for self-reflection. Text placeholders in red angular brackets are specific to the task
given to LLMatDesign.
Alongside the proposed modification, LLMatDesign provides a hypothesis explaining why the suggested change
could be beneficial. This hypothesis generated by the LLM provides a window into the reasoning behind its
choices and provides a degree of interpretability which is not possible with traditional optimization algorithms.
Next, LLMatDesign modifies the material based on the given suggestion, relaxes the structure using a machine
learning force field (MLFF), and predicts its properties using a machine learning property predictor (MLPP).
If the predicted property of the new material does not match the target value within a defined threshold,
LLMatDesign then evaluates the effectiveness of the modification through a process called self-reflection where
commentary is provided on the success of failure of the chosen modification.
After self-reflection, a modification history message is created. This message includes the modified chemical
composition, the modification itself, the hypothesis behind the modification, and the self-reflection results. This
history is then fed back into LLMatDesign, which enters the next design decision-making phase towards the
goal of achieving the target property. The entire process repeats in a loop until termination conditions are met.
Optionally, density functional theory (DFT) calculations can be performed on the final material.
At the core of the entire workflow, LLMatDesign utilizes an LLM engine or agent which translates user-defined
objectives into appropriate Materials Project API calls, drives the design decision-making process, and conducts
self-reflection on previous decisions to enhance performance. In this work, we demonstrate the capabilities of
LLMatDesign using two state-of-the-art LLMs: GPT-4o [34] and Gemini-1.0-pro [35]. However, the framework
is model-agnostic and should function effectively with any capable LLMs. The overall architecture and algorithm
of LLMatDesign is depicted in Fig. 1 and Algo. 1 respectively. The modification and self-reflection prompt
templates are shown in Fig. 2 and 3 respectively.
Algorithm 1 LLMatDesign Algorithm
Input: (x0, y0): chemical composition and property of the starting material.
       ytarget: target property value to achieve.
       M := ∅: set of history messages, if any.
Output: (xi, yi): chemical composition and property of the new material.
for i = 1 : N do                                      ▷ N: maximum number of modifications
    si, hi ← LLM(xi−1, yi−1, ytarget, M)               ▷ s: modification; h: hypothesis
    x̃i ← perform modification(xi−1, si)
    xi ← MLFF(x̃i)
    yi ← MLPP(xi)
    if |yi − ytarget|/|ytarget| ≤ ε then                ▷ ε: error tolerance
        return (xi, yi)
    end if
    ri ← LLM(xi−1, xi, yi−1, yi, si, hi)                ▷ r: self-reflection
    mi ← create history message(si, hi, ri)             ▷ m: history message
    M ← M ∪ {mi}
end for
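A minimal Python rendering of this loop is sketched below. It is our own illustration, not the authors' code: call_llm, reflect, relax_structure and predict_property are hypothetical callables standing in for the LLM API, the MLFF and the MLPP, and the material is simplified to an element-count dictionary rather than a full crystal structure.

def apply_modification(composition, modification):
    # Apply one of the four modification types to an element-count dict, e.g. {"Cd": 2, "Cu": 4, "Ge": 2, "S": 8}
    comp = dict(composition)
    mtype, *elements = modification
    if mtype == "exchange":        # swap two element species present in the material
        a, b = elements
        comp[a], comp[b] = comp[b], comp[a]
    elif mtype == "substitute":    # replace every atom of element a with element b
        a, b = elements
        comp[b] = comp.get(b, 0) + comp.pop(a)
    elif mtype == "remove":        # eliminate element a from the material
        comp.pop(elements[0])
    elif mtype == "add":           # add one atom of element a (placed at a random site in the real framework)
        a = elements[0]
        comp[a] = comp.get(a, 0) + 1
    return comp

def llmatdesign_loop(x0, y0, y_target, call_llm, reflect, relax_structure, predict_property,
                     max_steps=50, eps=0.1):
    # Sketch of Algorithm 1: propose -> modify -> relax -> predict -> self-reflect
    history = []
    x_prev, y_prev = x0, y0
    for _ in range(max_steps):
        modification, hypothesis = call_llm(x_prev, y_prev, y_target, history)
        x_new = relax_structure(apply_modification(x_prev, modification))
        y_new = predict_property(x_new)
        if abs(y_new - y_target) / abs(y_target) <= eps:   # within the error tolerance
            return x_new, y_new
        reflection = reflect(x_prev, x_new, y_prev, y_new, modification, hypothesis)
        history.append({"composition": x_new, "modification": modification,
                        "hypothesis": hypothesis, "reflection": reflection})
        x_prev, y_prev = x_new, y_new
    return x_prev, y_prev

# e.g. apply_modification({"Cd": 2, "Cu": 4, "Ge": 2, "S": 8}, ["substitute", "S", "Se"])
# returns {"Cd": 2, "Cu": 4, "Ge": 2, "Se": 8}, mirroring the first step shown in Fig. 6.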
2.2 Evaluation
To evaluate the effectiveness of LLMatDesign, we performed a set of experiments with 10 starting materials
randomly selected from the Materials Project [6]. Specifically, we focus on designing materials targeting two
material properties and their corresponding objectives:
• Band gap (eV): design a new material with a band gap of 1.4 eV.
• Formation energy per atom (eV/atom): design a new material with the most negative formation energy
possible.
The objective of achieving a band gap value of 1.4 eV is chosen as an example of designing an ideal photovoltaic
material with a band gap within the range of 1–1.8 eV [36], and the aim of obtaining the most negative formation
energy requires LLMatDesign to suggest modifications that could result in more stable materials.
For the band gap experiments, we record the average number of modifications taken by LLMatDesign, with a
maximum budget of up to 50 modifications. A 10% tolerance of error to the target is used as the convergence
criterion. For the formation energy experiments, a fixed budget of 50 modifications is used, and both the
average and minimum formation energies are recorded. The experiment is then repeated 30 times for each
starting material. We present results for two different LLM engines: Gemini-1.0-pro and GPT-4o. Within
4
each LLM engine, two variants of experiments—history and historyless—are conducted to evaluate the impact
of including the knowledge of prior modification history. All results are compared against a random baseline,
where modifications to materials are randomly selected. The results for band gap and formation energy per
atom are shown in Table 1 and Table 2 respectively. Note that self-reflection is included only for GPT-4o and
not for Gemini-1.0-pro.
Table 1: LLMatDesign's performance in achieving a new material with a target band gap of 1.4 eV. Each experiment is repeated 30 times, and the average number of modifications taken to reach the target value is recorded.

Average # of Modifications
Starting Material   Gemini-1.0-pro (History / Historyless)   GPT-4o (History / Historyless)   Random
BaV2Ni2O8           17.7 / 14.4                              17.7 / 30.4                      22.4
CdCu2GeS4           11.1 / 13.4                              3.3 / 9.5                        28.7
CeAlO3              14.3 / 15.1                              7.4 / 16.9                       26.7
Co2TiO4             8.8 / 13.1                               5.5 / 1.6                        29.7
ErNi2Ge2            26.8 / 24.8                              19.3 / 47.6                      31.8
Ga2O3               10.3 / 12.3                              12.7 / 37.7                      32.8
Li2CaSiO4           15.7 / 20.5                              14.3 / 29.3                      27.4
LiSiNO              12.4 / 10.4                              4.1 / 2.8                        27.4
Na2ZnGeO4           13.0 / 15.0                              11.5 / 49.4                      22.9
SrTiO3              7.2 / 8.8                                12.0 / 40.6                      24.3
Avg.                13.7 / 14.8                              10.8 / 26.6                      27.4

Average Final Band Gap (eV)
Starting Material   Gemini-1.0-pro (History / Historyless)   GPT-4o (History / Historyless)   Random
BaV2Ni2O8           1.23 / 1.42                              1.39 / 1.89                      1.12
CdCu2GeS4           1.41 / 1.39                              1.44 / 1.38                      1.01
CeAlO3              1.42 / 1.39                              1.41 / 1.68                      1.21
Co2TiO4             1.40 / 1.30                              1.36 / 1.42                      1.02
ErNi2Ge2            1.18 / 1.26                              1.36 / 0.43                      0.90
Ga2O3               1.34 / 1.38                              1.36 / 1.76                      0.87
Li2CaSiO4           1.36 / 1.37                              1.41 / 1.81                      1.09
LiSiNO              1.38 / 1.39                              1.39 / 1.50                      1.09
Na2ZnGeO4           1.40 / 1.39                              1.39 / 2.35                      1.15
SrTiO3              1.42 / 1.41                              1.45 / 1.64                      1.11
Avg.                1.35 / 1.37                              1.39 / 1.59                      1.06
Table 2: LLMatDesign's performance in achieving a new material with as low a formation energy per atom as possible. Each experiment consists of 50 modifications, and is repeated 30 times.

Average Formation Energy (eV/atom)
Starting Material   Gemini-1.0-pro (History / Historyless)   GPT-4o (History / Historyless)   Random
BaV2Ni2O8           -0.80 / -0.20                            -2.45 / -2.50                    -0.12
CdCu2GeS4           -0.19 / 0.11                             -1.05 / -0.61                    0.29
CeAlO3              -0.77 / -0.28                            -2.79 / -2.24                    -0.04
Co2TiO4             -0.39 / 0.03                             -1.57 / -1.49                    0.00
ErNi2Ge2            -0.02 / -0.19                            -0.54 / -0.74                    -0.11
Ga2O3               -0.19 / -0.12                            -1.61 / -2.07                    -0.16
Li2CaSiO4           -0.77 / -0.41                            -2.30 / -2.69                    -0.20
LiSiNO              -0.38 / -0.19                            -1.75 / -1.54                    -0.15
Na2ZnGeO4           -0.79 / -0.25                            -2.62 / -2.52                    0.05
SrTiO3              -1.26 / -0.23                            -3.01 / -3.54                    -0.02
Avg.                -0.56 / -0.17                            -1.97 / -1.99                    -0.05

Minimum Formation Energy (eV/atom)
Starting Material   Gemini-1.0-pro (History / Historyless)   GPT-4o (History / Historyless)   Random
BaV2Ni2O8           -2.69 / -2.30                            -2.91 / -2.74                    -1.99
CdCu2GeS4           -1.31 / -1.59                            -1.61 / -0.72                    -1.37
CeAlO3              -3.44 / -3.22                            -3.73 / -3.73                    -2.50
Co2TiO4             -2.64 / -2.08                            -2.48 / -2.10                    -1.80
ErNi2Ge2            -0.96 / -1.71                            -0.94 / -1.57                    -1.40
Ga2O3               -2.05 / -1.82                            -3.31 / -3.29                    -1.67
Li2CaSiO4           -2.94 / -2.60                            -3.13 / -2.98                    -2.27
LiSiNO              -2.01 / -2.01                            -2.60 / -1.72                    -1.75
Na2ZnGeO4           -2.48 / -2.34                            -2.87 / -2.55                    -1.85
SrTiO3              -3.40 / -3.09                            -3.65 / -3.57                    -2.38
Avg.                -2.39 / -2.28                            -2.72 / -2.50                    -1.90
We observe that GPT-4o with past modification history performs the best in achieving the target band gap value
of 1.4 eV, requiring an average of 10.8 modifications (Table 1). In comparison, Gemini-1.0-pro with history
takes an average of 13.7 modifications. Both methods significantly outperform the baseline, which requires
27.4 modifications. Adding modification history to subsequent prompts allows the LLMs to converge to the
target more quickly, as both Gemini-1.0-pro and GPT-4o with modification history outperform their historyless
counterparts. Notably, the performance gap between the history and historyless variants is smaller for Gemini-
1.0-pro than for GPT-4o. From a closer inspection of the modification paths of GPT-4o without history, we find
that GPT-4o often alternates between a few of the same modifications until reaching the maximum number of
allowed iterations (see Fig. 4). For the two starting materials where GPT-4o without history performs the best
(Co2TiO4 and SrTiO3), the final materials frequently converge to identical composition by following the same
modification sequence. This indicates a lack of diversity in the newly generated materials when no history is
included in LLMatDesign’s iterative loop. In addition, GPT-4o with history achieves the best final band gap
value, averaging 1.39 eV, followed by Gemini-1.0-pro at 1.35 eV, and random at 1.06 eV.
LLMatDesign’s superior performance is also apparent when finding new materials with the lowest formation
energy per atom (Table 2), consistently outperforming the random baseline. Specifically, both the history and
5
historyless variants of GPT-4o achieve the lowest average formation energies, with −1.97 eV/atom and −1.99
eV/atom, respectively. GPT-4o with history also achieves the lowest minimum formation energy per atom at
−2.72 eV/atom. Interestingly, while the minimum formation energy per atom values achieved by Gemini-1.0-pro
are close to that of GPT-4o, its average formation energy per atom values are significantly higher, indicating
that it struggles to consistently suggest chemically stable modifications for the materials. Nonetheless, Gemini-
1.0-pro still noticeably outperforms the baseline. In Fig. D.1 and D.2, we visualize 20 materials discovered by
LLMatDesign for the band gap and formation energy tasks, respectively. These materials are obtained from the
first run of all 10 starting materials. For the band gap task, the final materials are selected. For the formation
energy task, the materials with the lowest formation energy per atom are chosen.
Figure 4: Average band gaps and formation energies over 50 modifications. The grey horizontal line indicates
the target band gap of 1.4 eV. The colored dots on the x-axis indicate the average number of modifications
taken for each method to reach the target. For formation energy, the goal is to achieve the lowest possible value.
In Fig. 4, we plot the band gaps and formation energies per atom over 50 modifications, averaged across
10 starting materials. The target band gap of 1.4 eV is indicated by the grey horizontal line. Both history
and historyless variants of Gemini-1.0-pro and GPT-4o demonstrate quick convergence to the target band
gap. However, the GPT-4o historyless variant exhibits zig-zag oscillations in band gap values as modifications
increase. This occurs because, without historical information, GPT-4o tends to oscillate between a few of the
same moves, causing the band gap to fluctuate without improving. In contrast, the random baseline fails to
converge to 1.4 eV within the maximum allowed 50 modifications. For formation energy, our findings indicate
that GPT-4o is consistently able to suggest modifications which keep formation energy low on average around
−2 eV/atom, though Gemini-1.0-pro struggles to do so despite being able to obtain a low minimum formation
energy. Notably, neither GPT-4o nor Gemini-1.0-pro are able to beat the formation energy of the starting
materials, likely due to the fact that these materials are already at or near the lowest energy states.
Fig. 5 presents heatmaps over the periodic table displaying the element occurrences in the modifications for
both the band gap and formation energy tasks, which reveal additional insights into the reason for the good
performance for LLM-driven design. The number of occurrences of each element is collected across all runs
and starting materials. In the heatmaps for the random baseline, all elements are chosen at nearly uniform
frequencies. This result is to be expected, as the random algorithm samples elements with atomic numbers
up to 99 uniformly. Meanwhile, in the heatmaps for the LLM cases, there is a clear distribution towards
certain elements, mostly focusing on elements within the first four rows of the periodic table and avoiding noble
01020304050Number of modifications0.00.51.01.5Band gap (eV)Gemini-1.0-pro01020304050Number of modifications2.01.51.00.50.0Formation energy (eV/atom)Gemini-1.0-pro01020304050Number of modifications0.00.51.01.52.0Band gap (eV)GPT-4o01020304050Number of modifications2.01.51.00.50.0Formation energy (eV/atom)GPT-4oHistoryHistorylessRandomBG: Gemini-1.0-pro with history
BG: GPT-4o with history
BG: Random
FE: Gemini-1.0-pro with history
FE: GPT-4o with history
FE: Random
Figure 5: Heatmaps of element frequencies in band gap (BG) and formation energy (FE) tasks. The periodic
table is color-coded to indicate the frequency of each element’s occurrence in all modified materials (both
intermediate and final) across all runs and starting materials. Darker colors represent higher frequencies, while
lighter colors denote lower frequencies or absence. The visualization employs log-scaling to effectively highlight
the distribution and prevalence of elements.
metals and Actinides. Both LLM models share similar distributions, such as a preference for elements like
oxygen, however Gemini-1.0-pro’s suggestions appear to exhibit a greater element diversity compared to GPT-
4o, including some of the transition metals. With Gemini-1.0-pro, we also occasionally observe modifications
suggested by the LLM that include noble gases, which is not chemically feasible due to their inert nature. With
GPT-4o, this does not occur (see Fig. C.1). Regardless, both LLM models are able to consistently suggest
chemically viable elements for modification, which is akin to how a human expert would make similar choices
based on chemical intuition or from past examples in the literature.
In Fig. 6, we present an example of the full process whereby LLMatDesign successfully completes a design
task to achieve a band gap of 1.40 eV. In the first step, LLMatDesign suggests modifying the starting material
CdCu2GeS4 by substituting S with Se, given the hypothesis that increasing atomic radius and changing the
electronegativity can alter the band gap. Upon modification, the new material CdCu2GeSe4 was found to have
an even smaller band gap, which is contrary to the desired effect as noted by the reflection. This history is
included in the second step of modification, whereby LLMatDesign suggests a subsequent modification of Ge
with Si, which increases the gap. The reflection notes a partial success is achieved, but is still not enough to reach
the target, whereupon a third step is taken. In the third step, Cu is substituted with Zn, which finally achieves
the desired band gap within an acceptable threshold, ending the process. From this example, we can observe the
LLM is successful at 1) recognizing differences in element properties (i.e. Se having a larger atomic radius than
S), 2) highlighting these properties as being relevant to the design task (i.e. atomic radius, electronegativity,
and electronic configuration affecting the band gap), and 3) recognizing whether a modification is successful
and the degree of success in the reflection. We will show in the subsequent section that it is this reasoning and
reflection process which has a significant impact on its success.
In the final step of the design process, a DFT calculation is performed to validate the material’s properties
which were obtained from a ML surrogate model. Here, we use DFT to compute the formation energy of the
minimum energy structures in all 30 runs for each of the 10 starting materials obtained with GPT-4o and random
sampling. On average, structures generated by LLMatDesign using GPT-4o with history achieved a formation
energy of -2.32 eV/atom with a job completion rate of 73.3%. In comparison, the random baseline obtained an
average formation energy of -1.51 eV/atom, with a significantly lower job completion rate of 40.0% (see Fig.
E.1). The lower completion rate is likely due to a much higher likelihood of unreasonable compositions and
configurations obtained from random sampling which cause the DFT calculations to fail. These results also
follow the same relative trends in Table 2. In a situation where DFT is directly used in each iteration rather
than a surrogate model, this step can be omitted, or potentially the materials can be validated further with
experiments.
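As a simple illustration, the averages and completion rates reported above can be aggregated from per-run DFT outcomes as in the following sketch; the data layout is an assumption, and failed calculations are represented by None.

def summarize_dft_runs(formation_energies):
    """formation_energies: one entry per run (eV/atom); None marks a failed DFT job."""
    completed = [e for e in formation_energies if e is not None]
    completion_rate = 100.0 * len(completed) / len(formation_energies)
    if not completed:
        return None, completion_rate
    mean_formation_energy = sum(completed) / len(completed)
    return mean_formation_energy, completion_rate

# Example with hypothetical values from 300 runs (30 per starting material):
# energies = [-2.4, None, -2.1, ...]
# print(summarize_dft_runs(energies))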
[Figure 5 heatmap panels: BG-History-Gemini-1.0-pro, BG-History-GPT-4o, BG-Random, FE-History-Gemini-1.0-pro, FE-History-GPT-4o, FE-Random.]
Figure 6: Example of LLMatDesign with GPT-4o on the task of modifying the starting material CdCu2GeS4
to achieve a band gap of 1.40 eV. The starting material is retrieved from the Materials Project with chemical
formula Cd2Cu4Ge2S8.
[Figure 6 panels: starting from Cd2Cu4Ge2S8 (band gap 0.39 eV), Step 1 substitutes S with Se to give Cd2Cu4Ge2Se8 (0.01 eV), Step 2 substitutes Ge with Si to give Cd2Cu4Si2Se8 (0.40 eV), and Step 3 substitutes Cu with Zn to give Cd2Zn4Si2Se8 (1.42 eV); each step is accompanied by the LLM's hypothesis and post-modification reflection.]
2.3 Self-reflection
To quantify the effect of self-reflection on the performance of LLMatDesign, we conduct band gap experiments
using GPT-4o and the same set of 10 starting materials, where we aim to find a new material with a target
band gap of 1.4 eV. Like with the history variant, past modifications are incorporated into the prompting loop.
However, in this case, self-reflection is omitted completely. In other words, the history message only includes
the modification and hypothesis pairs (see Algo. 1). The results from these experiments are shown in Table
3. As previously discussed, GPT-4o with history achieves an average of 10.8 modifications, while GPT-4o
without history requires 26.6 modifications. In comparison, GPT-4o with history but without self-reflection
now needs an average of 23.4 modifications, which is over twice as many compared to including self-reflection.
These results suggest that self-reflection, which involves the LLM evaluating and reasoning through its previous
design choices, plays a crucial role in enhancing the efficiency of LLMatDesign in achieving the given objective.
Table 3: LLMatDesign with and without self-reflection. GPT-4o is used as the LLM engine.
Starting Material      History    Historyless    History without reflection
                              Average # of Modifications
BaV2Ni2O8                17.7        30.4              45.1
CdCu2GeS4                 3.3         9.5               5.0
CeAlO3                    7.4        16.9              27.6
Co2TiO4                   5.5         1.6               7.9
ErNi2Ge2                 19.3        47.6              31.0
Ga2O3                    12.7        37.7              13.1
Li2CaSiO4                14.3        29.3              31.4
LiSiNO                    4.1         2.8               5.1
Na2ZnGeO4                11.5        49.4              31.5
SrTiO3                   12.0        40.6              36.7
Avg.                     10.8        26.6              23.4
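To make the ablation concrete, the following sketch illustrates how the history message could be assembled with and without the reflection field; the field names and formatting are assumptions, not the exact strings used by LLMatDesign (whose procedure is given in Algo. 1).

def build_history_message(past_steps, include_reflection=True):
    """past_steps: list of dicts with assumed keys 'modification', 'hypothesis', 'reflection'."""
    lines = []
    for i, step in enumerate(past_steps, 1):
        entry = (f"Step {i}: modification={step['modification']}, "
                 f"hypothesis={step['hypothesis']}")
        if include_reflection and "reflection" in step:
            entry += f", reflection={step['reflection']}"
        lines.append(entry)
    return "\n".join(lines)

# With include_reflection=False, only the (modification, hypothesis) pairs are kept,
# matching the "history without reflection" ablation.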
2.4 Prompting
Well-crafted prompts are essential for eliciting accurate and useful responses from LLMs. While the base prompt
template, shown in Fig. 2, works as intended, we subsequently show that optimizing this prompt can improve
the performance of LLMatDesign even further. To this end, we develop two additional prompt templates in a
non-exhaustive demonstration. The first template, termed GPT-4o Refined, is an enhancement of the original
prompt (Fig. 2) created by GPT-4o itself. This refinement includes rephrasing and reformatting parts of the
original prompt and appending the following sentence: “Take a deep breath and work on this problem step-
by-step. Your thoughtful and detailed analysis is highly appreciated.” The second template, named Persona,
mirrors the original prompt but incorporates the persona of a materials specialist. Specifically, it begins with
a declaration that the LLM is a materials design expert working on developing new materials with specific
properties. Detailed descriptions of these prompt templates are provided in Appendix A.
We conduct the same experiments on the band gap task using GPT-4o as the LLM engine for LLMatDesign
across all 10 starting materials. The results, shown in Table 4, indicate that both the GPT-4o Refined and Per-
sona prompt templates outperform the GPT-4o with history, with the GPT-4o Refined template achieving the
best performance, requiring an average of only 8.69 modifications to complete the task. The improvement over
the original prompt template indicates that careful prompt optimization can enhance the efficiency
and accuracy of LLM-directed materials discovery frameworks, and that this process can even be performed
by the LLM itself. This is a particularly intriguing discovery as it hints towards an unprecedented level of
autonomy which can be enabled by LLMs, whereby the prompts and instructions in the framework can be
continuously tuned in an automated manner with minimal human intervention.
Table 4: LLMatDesign with different prompts. GPT-4o is used as the LLM engine.
Starting Material      History    GPT-4o Refined    Persona
                       Average # of Modifications
BaV2Ni2O8                17.7          13.4            9.6
CdCu2GeS4                 3.3           3.1            5.3
CeAlO3                    7.4           7.2            8.7
Co2TiO4                   5.5           8.9           11.9
ErNi2Ge2                 19.3          11.9           11.7
Ga2O3                    12.7           8.4            8.3
Li2CaSiO4                14.3          11.6           11.9
LiSiNO                    4.1           5.6            1.0
Na2ZnGeO4                11.5           6.9            8.8
SrTiO3                   12.0           9.9           13.9
Avg.                     10.8          8.69           9.11
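For illustration, the sketch below shows how the placeholder fields of the prompt templates in Appendix A might be filled for a concrete band gap task; the rendering code, the abridged template, and the filled values are assumptions rather than part of the framework.

TEMPLATE = (
    "I have a material and its {property_name}. {property_definition}\n"
    "({composition}, {property_value})\n"
    "Propose a modification to this material to achieve {objective}."
)

prompt = TEMPLATE.format(
    property_name="band gap",
    property_definition="The band gap is the energy difference between the "
                        "valence band maximum and the conduction band minimum.",
    composition="SrTiO3",
    property_value="<predicted value> eV",   # supplied by the ML property predictor
    objective="a band gap of 1.4 eV",
)
print(prompt)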
2.5 Constrained Materials Design
Materials discovery with constraints ensures scientific, economic, and political viability. For instance, avoiding
the use of rare earth metals can reduce dependency on limited and expensive resources, mitigate supply chain
risks, and align with environmental and ethical standards. To this end, we evaluate LLMatDesign under three
constraints limiting its action space. Experiments are conducted on the band gap task using the starting
material SrTiO3 with GPT-4o to test whether these constraints are obeyed. Like before, each experiment is
repeated 30 times, and the percentage of modifications adhering strictly to the constraints is calculated across
all runs. As shown in Table 5, LLMatDesign perfectly adheres to the constraints of “do not use Ba or Ca”
and “do not modify Sr,” achieving 100% compliant modifications. For the constraint “do not have more than
4 distinct elements,” only 4 out of 509 modifications by LLMatDesign include 5 distinct elements, resulting in
a high compliance rate of 99.02%. These results demonstrate LLMatDesign’s robust capability in adhering to
predefined constraints as described by natural language, an advantage unique to LLM-driven design.
Table 5: LLMatDesign with different constraints on SrTiO3.
Constraint                                     % compliant modifications
Do not use Ba or Ca                            100
Do not modify Sr                               100
Do not have more than 4 distinct elements      99.02
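As a rough illustration of how such compliance figures can be computed, the following sketch checks each constraint against the element set of a modified composition; this is an assumed post-hoc check, not necessarily how LLMatDesign evaluates compliance, and each constraint was tested in its own experiment.

def check_no_forbidden(elements, forbidden=("Ba", "Ca")):
    # "Do not use Ba or Ca"
    return not (set(elements) & set(forbidden))

def check_keeps_element(elements, keep="Sr"):
    # "Do not modify Sr" (interpreted as: Sr must remain in the composition)
    return keep in set(elements)

def check_max_distinct(elements, max_distinct=4):
    # "Do not have more than 4 distinct elements"
    return len(set(elements)) <= max_distinct

# compliance (%) = 100 * sum(check(m) for m in modifications) / len(modifications)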
2.6 Further Discussion
Through extensive experiments, we find LLMatDesign consistently outperforms baselines by a significant margin,
demonstrating the viability of using LLM-based autonomous agents for materials discovery tasks under a limited
budget. While the random baseline uniformly samples from a set of elements for modification (see Fig. 5),
LLMatDesign, whether utilizing GPT-4o or Gemini-1.0-pro, exhibits inherent chemical knowledge, enabling
it to provide chemically meaningful suggestions. Furthermore, GPT-4o accurately recognizes periodic trends
such as atomic radius and electronegativity in its hypotheses and self-reflections in guiding its decisions. In
contrast, Gemini-1.0-pro is more prone to errors in this regard, likely due to it being a less robust LLM. Further
experiments also show the critical role of self-reflection in the performance of the LLM. This indicates that
by reviewing and learning from its previous decisions, LLMatDesign can refine its future suggestions more
effectively. This iterative learning process helps the model understand the implications of its modifications
better, leading to quicker convergence. In general, it is evident that there are more complex underpinnings
behind the remarkable effectiveness of LLM-driven design than simply predicting the most likely outcomes.
This work also demonstrates the lower-bound capabilities of LLM-based design, which is performed without
further fine-tuning in a zero-shot manner. A natural extension of this approach would be to further train LLMs
on chemical and materials knowledge, such as those obtained from literature articles. In the future, it would be
highly desirable for a chemically fine-tuned LLM to provide more insightful hypotheses and explanations, and even
refer to specific references of prior published experiments to support them. These capabilities can potentially
be within reach given the growing prevalence of powerful open-source LLMs and parameter-efficient fine-tuning.
In the current examples, LLMatDesign comes up with new materials designs from a limited set of modifica-
tions on the composition of a material. Nonetheless, this framework is general and can include more complex
modifications which act not only on the composition space but also the structure space. Future work in this
direction will focus on incorporating structural information when describing the material being modified, and
also suggest modifications which directly act on the positions and lattice of the crystal structure. To this end,
recent advances in multimodal LLMs can be applied here, where the atomic structure is considered to be an
additional modality to be encoded in addition to the text modality.
3 Conclusion
In this work, we present LLMatDesign, a novel materials design framework powered by state-of-the-art LLMs
that works directly with user-defined design requirements and constraints in natural language. It integrates
computational tools for structure relaxation and property evaluation, incorporates internal chemical knowledge,
and learns from previous iterations to function as an automated material design framework with high efficiency.
Additionally, LLMatDesign quickly adapts to different tasks, target properties, and design constraints by simply
modifying the prompt. In our experiments, LLMatDesign consistently outperforms the baseline, demonstrating
the effectiveness of the framework in developing new materials. Our work highlights the potential for fully
automated AI-driven materials discovery that can be seamlessly integrated into autonomous laboratories in the
future.
4 Methods
4.1 Large Language Models
Large language models (LLMs) are a class of machine learning models built on the transformer architecture
[37]. By training on vast amounts of text data, these models can understand and generate text in a human-like
manner. In this work, GPT-4o [34] refers to OpenAI’s gpt-4o model, which has a context length of 128K and
a knowledge cutoff date of October 2023. Gemini-1.0-pro [35] refers to Google’s LLM with the same name,
featuring a context length of 32K.
4.2 Machine Learning Force Field
Machine learning force fields (MLFFs) represent a significant advancement in computational chemistry and
materials science. By utilizing state-of-the-art machine learning models and training on extensive datasets of
atomic structures with energies, forces, and stresses, MLFFs can achieve high accuracy in predicting these
properties, often rivaling ab initio methods such as density functional theory (DFT) [38]. More importantly,
MLFFs provide these high-accuracy predictions with unprecedented computational efficiency, enabling the sim-
ulation of larger systems and longer timescales. In this study, we train a TorchMD-Net model [39] using the
MatDeepLearn framework [40, 41]. The training dataset, curated from the Materials Project [6], comprises
187,687 crystal structures with associated energies, forces, and stresses. The model is trained for 400 epochs on
a single Nvidia A100 80GB GPU.
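As a minimal sketch of how a trained MLFF is typically used for relaxation through ASE, the snippet below relaxes a toy structure; the EMT calculator stands in for the trained TorchMD-Net force field, whose exact calculator wrapper is not specified here.

from ase.build import bulk
from ase.calculators.emt import EMT
from ase.optimize import BFGS

atoms = bulk("Cu", "fcc", a=3.6)        # toy structure for illustration
atoms.calc = EMT()                      # stand-in for the trained MLFF calculator
opt = BFGS(atoms, logfile=None)
opt.run(fmax=0.05, steps=200)           # relax until the maximum force is below 0.05 eV/A
energy = atoms.get_potential_energy()   # energy of the relaxed structure
print(f"Relaxed energy: {energy:.3f} eV")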
4.3 Machine Learning Property Predictor
Similar to machine learning force fields (MLFFs), machine learning property predictors (MLPPs) leverage
advanced machine learning models trained on large datasets to make fast and accurate predictions for specific
target properties. In this study, we train TorchMD-Net models to predict two separate properties: band gap
and formation energy per atom. The datasets used are the mp gap and mp form datasets from the MatBench
benchmark [42], containing 106,113 and 132,752 structures from the Materials Project [6], respectively. Each
model is trained for 200 epochs on a single Nvidia A100 80GB GPU.
4.4 Modification of Material
Once LLMatDesign suggests a modification to achieve the user’s target objective, the material is modified
accordingly. Specifically, as illustrated in Fig. 1, there are four types of modifications: exchange, substitute,
remove, and add. Each modification is applied directly to an ase.Atoms object representing the material. For
example, given the modification [‘exchange’, ‘Sr’, ‘Ti’], all Sr atoms in the material are replaced with
Ti atoms and vice versa. After applying the modification, the structure undergoes relaxation using a machine
learning force field (MLFF).
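A simplified sketch of these four modification types applied to an ase.Atoms object is given below; it illustrates the described behavior rather than reproducing the framework's exact implementation (for example, the position of an added atom is chosen naively and is corrected by the subsequent relaxation).

from ase import Atom, Atoms

def apply_modification(atoms: Atoms, modification):
    """Apply a modification such as ["substitute", "S", "Se"] to an ase.Atoms object."""
    kind, *elements = modification
    symbols = atoms.get_chemical_symbols()
    if kind == "substitute":          # replace every atom of element A with element B
        a, b = elements
        atoms.set_chemical_symbols([b if s == a else s for s in symbols])
    elif kind == "exchange":          # swap the two element types
        a, b = elements
        atoms.set_chemical_symbols([b if s == a else a if s == b else s for s in symbols])
    elif kind == "remove":            # delete all atoms of the given element
        (a,) = elements
        del atoms[[i for i, s in enumerate(symbols) if s == a]]
    elif kind == "add":               # naive placement at the origin; relaxation follows
        (a,) = elements
        atoms.append(Atom(a))
    return atoms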
4.5 Density Functional Theory Calculations
The DFT calculations were performed using the Vienna Ab Initio Simulation Package (VASP) [43, 44]. All
calculations followed the same settings specified by the "MPRelaxSet" in the Pymatgen library [45] used in the
Materials Project.
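For reference, a minimal Pymatgen sketch of generating VASP inputs with MPRelaxSet is shown below; the file paths are illustrative, and writing the POTCAR assumes locally configured pseudopotential data.

from pymatgen.core import Structure
from pymatgen.io.vasp.sets import MPRelaxSet

structure = Structure.from_file("POSCAR")   # relaxed candidate from the MLFF step (illustrative path)
vasp_input = MPRelaxSet(structure)          # Materials Project relaxation settings
vasp_input.write_input("dft_run")           # writes INCAR, KPOINTS, POSCAR (POTCAR needs configured PAW data)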
5 Data Availability
The authors declare that the data, materials and code supporting the results reported in this study are available
upon the publication of this manuscript.
6 Acknowledgements
We thank Lingkai Kong and Rui Feng for helpful discussions.
This research used resources of the National Energy Research Scientific Computing Center (NERSC), a U.S. De-
partment of Energy Office of Science User Facility located at Lawrence Berkeley National Laboratory, operated
under Contract No. DE-AC02-05CH11231 using NERSC award BES-ERCAP0022842.
References
[1] Davies, D. W. et al. Computational screening of all stoichiometric inorganic materials. Chem 1, 617–627
(2016).
[2] Oganov, A. R., Pickard, C. J., Zhu, Q. & Needs, R. J. Structure prediction drives materials discovery.
Nature Reviews Materials 4, 331–348 (2019).
[3] Liu, Y., Zhao, T., Ju, W. & Shi, S. Materials discovery and design using machine learning. Journal of
Materiomics 3, 159–177 (2017).
[4] Hautier, G., Jain, A. & Ong, S. P. From the computer to the laboratory: materials discovery and design
using first-principles calculations. Journal of Materials Science 47, 7317–7340 (2012).
[5] Pyzer-Knapp, E. O., Suh, C., Gómez-Bombarelli, R., Aguilera-Iparraguirre, J. & Aspuru-Guzik, A. What
is high-throughput virtual screening? a perspective from organic materials discovery. Annual Review of
Materials Research 45, 195–216 (2015).
[6] Chen, C. & Ong, S. P. A universal graph deep learning interatomic potential for the periodic table. Nature
Computational Science 2, 718–728 (2022).
[7] Merchant, A. et al. Scaling deep learning for materials discovery. Nature 624, 80–85 (2023).
[8] Hoffmann, J. et al. Data-driven approach to encoding and decoding 3-d crystal structures. arXiv preprint
arXiv:1909.00949 (2019).
[9] Court, C. J., Yildirim, B., Jain, A. & Cole, J. M. 3-d inorganic crystal structure generation and property
prediction via representation learning. Journal of Chemical Information and Modeling 60, 4518–4535
(2020).
[10] Xie, T., Fu, X., Ganea, O.-E., Barzilay, R. & Jaakkola, T. Crystal diffusion variational autoencoder for
periodic material generation. arXiv preprint arXiv:2110.06197 (2021).
[11] Long, T. et al. Constrained crystals deep convolutional generative adversarial network for the inverse design
of crystal structures. npj Computational Materials 7, 66 (2021).
[12] Ren, Z. et al. An invertible crystallographic representation for general inverse design of inorganic crystals
with targeted properties. Matter 5, 314–335 (2022).
[13] Fung, V. et al. Atomic structure generation from reconstructing structural fingerprints. Machine Learning:
Science and Technology 3, 045018 (2022).
[14] Zeni, C. et al. Mattergen: a generative model for inorganic materials design. arXiv preprint
arXiv:2312.03687 (2023).
[15] Wei, J. et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in neural
information processing systems 35, 24824–24837 (2022).
[16] Huang, J. & Chang, K. C.-C. Towards reasoning in large language models: A survey. arXiv preprint
arXiv:2212.10403 (2022).
[17] Li, S. et al. Pre-trained language models for interactive decision-making. Advances in Neural Information
Processing Systems 35, 31199–31212 (2022).
[18] Ahn, M. et al. Do as i can, not as i say: Grounding language in robotic affordances. arXiv preprint
arXiv:2204.01691 (2022).
[19] Huang, W. et al. Voxposer: Composable 3d value maps for robotic manipulation with language models.
arXiv preprint arXiv:2307.05973 (2023).
[20] Boiko, D. A., MacKnight, R., Kline, B. & Gomes, G. Autonomous chemical research with large language
models. Nature 624, 570–578 (2023).
[21] Bran, A. M. et al. Chemcrow: Augmenting large-language models with chemistry tools. arXiv preprint
arXiv:2304.05376 (2023).
[22] AI4Science, M. R. & Quantum, M. A. The impact of large language models on scientific discovery: a
preliminary study using gpt-4. arXiv preprint arXiv:2311.07361 (2023).
[23] Mirza, A. et al. Are large language models superhuman chemists? arXiv preprint arXiv:2404.01475 (2024).
[24] Fan, V. et al. Openchemie: An information extraction toolkit for chemistry literature. arXiv preprint
arXiv:2404.01462 (2024).
[25] Ai, Q., Meng, F., Shi, J., Pelkie, B. & Coley, C. W. Extracting structured data from organic synthesis
procedures using a fine-tuned large language model. ChemRxiv preprint 10.26434/chemrxiv-2024-979fz
(2024).
[26] Zhong, Z., Zhou, K. & Mottin, D. Benchmarking large language models for molecule prediction tasks.
arXiv preprint arXiv:2403.05075 (2024).
[27] Xie, Z. et al. Fine-tuning gpt-3 for machine learning electronic and functional properties of organic
molecules. Chemical science 15, 500–510 (2024).
[28] Jablonka, K. M., Schwaller, P., Ortega-Guerrero, A. & Smit, B. Leveraging large language models for
predictive chemistry. Nature Machine Intelligence 1–9 (2024).
[29] Ock, J., Guntuboina, C. & Barati Farimani, A. Catalyst energy prediction with catberta: Unveiling feature
exploration strategies through large language models. ACS Catalysis 13, 16032–16044 (2023).
[30] Flam-Shepherd, D. & Aspuru-Guzik, A. Language models can generate molecules, materials, and protein
binding sites directly in three dimensions as xyz, cif, and pdb files. arXiv preprint arXiv:2305.05708 (2023).
[31] Antunes, L. M., Butler, K. T. & Grau-Crespo, R. Crystal structure generation with autoregressive large
language modeling. arXiv preprint arXiv:2307.04340 (2023).
[32] Gruver, N. et al. Fine-tuned language models generate stable inorganic materials as text. arXiv preprint
arXiv:2402.04379 (2024).
[33] Jain, A. et al. Commentary: The materials project: A materials genome approach to accelerating materials
innovation. APL materials 1 (2013).
[34] Achiam, J. et al. Gpt-4 technical report. arXiv preprint arXiv:2303.08774 (2023).
[35] Team, G. et al. Gemini: a family of highly capable multimodal models. arXiv preprint arXiv:2312.11805
(2023).
[36] Sutherland, B. R. Solar materials find their band gap. Joule 4, 984–985 (2020).
[37] Vaswani, A. et al. Attention is all you need. Advances in neural information processing systems 30 (2017).
[38] Ko, T. W. & Ong, S. P. Recent advances and outstanding challenges for machine learning interatomic
potentials. Nature Computational Science 3, 998–1000 (2023).
[39] Thölke, P. & De Fabritiis, G. Torchmd-net: Equivariant transformers for neural network based molecular
potentials. arXiv preprint arXiv:2202.02541 (2022).
[40] Fung, V., Zhang, J., Juarez, E. & Sumpter, B. G. Benchmarking graph neural networks for materials
chemistry. npj Computational Materials 7, 84 (2021).
[41] Jia, S. et al. Derivative-based pre-training of graph neural networks for materials property predictions.
Digital Discovery 3, 586–593 (2024).
[42] Dunn, A., Wang, Q., Ganose, A., Dopp, D. & Jain, A. Benchmarking materials property prediction
methods: the matbench test set and automatminer reference algorithm. npj Computational Materials 6,
138 (2020).
[43] Kresse, G. & Furthmüller, J. Efficient iterative schemes for ab initio total-energy calculations using a
plane-wave basis set. Physical review B 54, 11169 (1996).
[44] Kresse, G. & Furthmüller, J. Efficiency of ab-initio total energy calculations for metals and semiconductors
using a plane-wave basis set. Computational materials science 6, 15–50 (1996).
[45] Ong, S. P. et al. Python materials genomics (pymatgen): A robust, open-source python library for materials
analysis. Computational Materials Science 68, 314–319 (2013).
Supplementary Information
A Prompt Templates for LLMatDesign
A prompt template slightly different from the one in Fig. 2 is designed for Gemini-1.0-pro due to its inconsistency in
generating standardized output.
LLMatDesign Prompt Template (Gemini-1.0-pro)
I have a material and its <property>. <definition of property>.
(<chemical composition>, <property value>)
You will be given a starting material to be modified. Try to achieve <objective>. Make an informed choice of modification
based on the given material and past modifications and property values obtained after those modifications. Output a
list for the suggested modification, and a string of the reason why you think it is a good modification to take to achieve
<objective>. Make sure the modification is physically meaningful.
Material to be modified: <chemical composition>
Current property value: <property value>
<modification history>
Available modifications:
1. exchange: exchange two elements in the material
2. substitute: substitute one element in the material with another
3. remove: remove an element from the material
4. add: add an element to the material
Example output format:
1. ["exchange", "O", "N"], "some reason here"
2. ["substitute", "Ti", "Fe"], "some reason here"
3. ["add", "O"], "some reason here"
4. ["remove", "O"], "some reason here"
Figure A.1: Prompt template for LLMatDesign with Gemini-1.0-pro. Text placeholders in red angular brackets
are specific to the task given to LLMatDesign. Text placeholders in blue angular brackets are optional and can
be omitted if not needed.
GPT-4o Refined Prompt Template for LLMatDesign
I have a material with a known <property>. <definition of property>.
Material information:
• Chemical formula: <chemical composition>
• <property>: <property value>
Objective:
Propose a modification to this material to achieve <objective>. You can choose one of the following modification types:
1. exchange: exchange two elements in the material
2. substitute: substitute one element in the material with another
3. remove: remove an element from the material
4. add: add an element to the material
Your response should be a Python dictionary in the following format:
‘‘‘
{Hypothesis: $HYPOTHESIS, Modification: [$TYPE, $ELEMENT 1, $ELEMENT 2]}.
‘‘‘
Requirements:
1. $HYPOTHESIS: Provide a detailed analysis and rationale for your proposed modification.
2. $TYPE:Specify the type of modification (“exchange”, “substitute”, “remove”, “add”).
3. $Identify the element(s) involved in the modification. For ”exchange” and ”substitute”, include two elements ($ELE-
MENT 1 and $ELEMENT 2). For “remove” and “add”, include one element ($ELEMENT 1).
<modification history>
Take a deep breath and work on this problem step-by-step. Your thoughtful and detailed analysis is highly appreciated.
Figure A.2: GPT-4o refined prompt template for LLMatDesign. Text placeholders in red angular brackets are
specific to the task given to LLMatDesign. Text placeholders in blue angular brackets are optional and can be
omitted if not needed.
Persona Prompt Template for LLMatDesign
You are a materials design expert working on the development of new materials with specific properties. You will be given
a composition (chemical formula) and its corresponding <property>. You will be asked to propose a modification to the
material to achieve a target <property>.
Material information:
• Chemical formula: <chemical composition>
• <property>: <property value>
Objective:
Propose a modification to this material to achieve <objective>. You can choose one of the following modification types:
1. exchange: exchange two elements in the material
2. substitute: substitute one element in the material with another
3. remove: remove an element from the material
4. add: add an element to the material
Your response should be a Python dictionary in the following format:
‘‘‘
{Hypothesis: $HYPOTHESIS, Modification: [$TYPE, $ELEMENT 1, $ELEMENT 2]}.
‘‘‘
Requirements:
1. $HYPOTHESIS: Provide a detailed analysis and rationale for your proposed modification.
2. $TYPE:Specify the type of modification (“exchange”, “substitute”, “remove”, “add”).
3. $Identify the element(s) involved in the modification. For ”exchange” and ”substitute”, include two elements ($ELE-
MENT 1 and $ELEMENT 2). For “remove” and “add”, include one element ($ELEMENT 1).
<modification history>
Take a deep breath and work on this problem step-by-step. Your thoughtful and detailed analysis is highly appreciated.
Figure A.3: Prompt template with materials design expert persona for LLMatDesign. Text placeholders in red
angular brackets are specific to the task given to LLMatDesign. Text placeholders in blue angular brackets are
optional and can be omitted if not needed.
B Convergence Plots
Figure B.1: Average band gaps over 50 modifications for all 10 starting materials using GPT-4o. The grey
horizontal line indicates the target band gap of 1.4 eV. The colored dots on the x-axis indicate the average
number of modifications taken for each method to reach the target.
Figure B.2: Average band gaps over 50 modifications for all 10 starting materials using Gemini-1.0-pro. The
grey horizontal line indicates the target band gap of 1.4 eV. The colored dots on the x-axis indicate the average
number of modifications taken for each method to reach the target.
Figure B.3: Average formation energies over 50 modifications for all 10 starting materials using GPT-4o. The
goal is to achieve the lowest possible formation energy per atom.
Figure B.4: Average formation energies over 50 modifications for all 10 starting materials using Gemini-1.0-pro.
The goal is to achieve the lowest possible formation energy per atom.
C Heatmaps
BG: Gemini-1.0-pro without history
BG: GPT-4o without history
FE: Gemini-1.0-pro without history
FE: GPT-4o without history
Figure C.1: Heatmaps of element frequencies in band gap (BG) and formation energy (FE) tasks for Gemini-1.0-
pro and GPT-4o without history. The periodic table is color-coded to indicate the frequency of each element’s
occurrence in all modified materials (both intermediate and final) across all runs and starting materials. Darker
colors represent higher frequencies, while lighter colors denote lower frequencies or absence. The visualization
employs log-scaling to effectively highlight the distribution and prevalence of elements.
D Visualization of Selected Structures
SrTiO3 → Ba2Tl2PNO6 (1.51 eV)
BaV2Ni2O8 → BaCd2(MoO5)2 (1.30 eV)
Co2TiO4 → Co2SO4 (1.42 eV)
ErNi2Ge2 → ErGa2S2O (1.39 eV)
CeAlO3 → FeGeO3 (1.49 eV)
Li2CaSiO4 → LiSnO2 (1.29 eV)
CdCu2GeS4 → Zn2CdSiSe4 (1.42 eV)
Na2ZnGeO4 → Na4MnFe2(GeO4)2 (1.41 eV)
LiSiNO → LiGePS (1.30 eV)
Ga2O3 → Ga4SnO6 (1.33 eV)
Figure D.1: Visualization of the final structures obtained by LLMatDesign for the band gap task. These
structures are obtained from the first run of all 10 starting materials. The chemical formulae in red represent
the starting materials, followed by the formulae of the final structures and their corresponding band gaps.
GPT-4o with history is utilized as the LLM engine.
BaV2Ni2O8 → BaTi2V2O8 (−3.09 eV/atom)
SrTiO3 → BaTiO3 (−3.56 eV/atom)
Li2CaSiO4 → Li2CaSiO4 (−3.03 eV/atom)
Co2TiO4 → TiMnO3 (−2.38 eV/atom)
ErNi2Ge2 → LaAl(NiS)2 (−1.19 eV/atom)
Na2ZnGeO4 → Li2AlSiO4 (−2.85 eV/atom)
LiSiNO → Li2Si2N2O2F (−2.21 eV/atom)
CdCu2GeS4 → MgAl6O10 (−2.87 eV/atom)
CeAlO3 → YScO3 (−3.75 eV/atom)
Ga2O3 → Al2O3 (−3.29 eV/atom)
Figure D.2: Visualization of the final structures obtained by LLMatDesign for the formation energy task.
These structures represent the ones with the minimum formation energy per atom from the first run of all 10
starting materials. The chemical formulae in red represent the starting materials, followed by the formulae of
the structures with the lowest formation energies and their corresponding formation energy per atom values.
GPT-4o with history is utilized as the LLM engine.
E DFT Calculations
Table E.1: DFT results for lowest-energy structures obtained from the formation energy task, averaged across
all starting materials and runs.
                                        GPT-4o with history    Random
Formation energy per atom (eV/atom)            −2.31            −1.51
Job success rate (%)                            73.3             40.0
|
ai_researcher | 2 | Context-Aware_Summarization_for_PDF_Documents_using_Large_Language_Models.pdf | World of Computer Science and Information Technology Journal (WCSIT)
ISSN: 2221-0741
Vol. 1, No. 3, 63-70, 2011
A Cloud-based Approach for Context Information
Provisioning
Elarbi Badidi
Faculty of Information Technology
United Arab Emirates University
Al-Ain, United Arab Emirates
[email protected]
Larbi Esmahi
School for Computing & Information Systems
Athabasca University, University Drive
Athabasca, Alberta, Canada
[email protected]
Abstract— As a result of the phenomenal proliferation of modern mobile Internet-enabled devices and the widespread utilization of
wireless and cellular data networks, mobile users are increasingly requiring services tailored to their current context. High-level
context information is typically obtained from context services that aggregate raw context information sensed by various sensors
and mobile devices. Given the massive amount of sensed data, traditional context services are lacking the necessary resources to
store and process these data, as well as to disseminate high-level context information to a variety of potential context consumers.
In this paper, we propose a novel framework for context information provisioning, which relies on deploying context services on
the cloud and using context brokers to mediate between context consumers and context services using a publish/subscribe model.
Moreover, we describe a multi-attributes decision algorithm for the selection of potential context services that can fulfill context
consumers’ requests for context information. The algorithm calculates the score of each context service, per context information
type, based on the quality-of-service (QoS) and quality-of-context information (QoC) requirements expressed by the context
consumer.
One of the benefits of the approach is that context providers can scale up and down, in terms of cloud resources they use, depending
on current demand for context information. Besides, the selection algorithm allows ranking context services by matching their QoS
and QoC offers against the QoS and QoC requirements of the context consumer.
Keywords- mobile users; context-aware web services; context services; cloud services; quality-of-context; quality-of-service;
service selection.
I.
INTRODUCTION
The proliferation of wireless and cellular networks over the
last few years has led to a remarkable rise in the number of
users who are using a variety of modern mobile Internet-
enabled devices --such as iPhones, iPads, and Android-based
smartphones-- to consume online services. Mobile users are
increasingly requiring services tailored to their context as they
are on the move. Therefore, enterprise services should be
context-aware to deal with the changing environment of the
user. Several definitions of the notion of context have been
provided in the literature. According to Dey [1], “Context is
any information that can be used to characterize the situation
of an entity. An entity is a person, place, or object that is
considered relevant to the interaction between a user and an
application, including the user and applications themselves.”
According to this definition, the amount of information that
can be categorized as context information is extremely wide.
Location, time, temperature, humidity, pressure, and mobile
user activity are the most widely used context indicators by
applications. Specialized services, that we call context services,
capture, store, analyze and aggregate data to provide high-level
context information to consumer application services as
needed. Context services and context consumers are often
physically distributed. Besides, it is likely that these context
sources provide the same context information but with different
QoC [2][3]. The QoC concept is explained in Section 3.
Context-awareness raises challenges
like aggregation of
context information in a structured format, discovery, and
selection of appropriate context services for context delivery to
context consumers.
To cope with the issues of context delivery and context
service selection, we propose a novel framework for context
provisioning, which relies on components called
context brokers and on deploying context services on the cloud.
Context brokers mediate between context consumers and
context services using a publish/subscribe model. To the best of
our knowledge, there has been no previous work on deploying
context services on the cloud. We believe that our approach
will take advantage of the power of the cloud in terms of
elasticity, storage abundance, and scalability. Furthermore, we
describe a multi-attributes algorithm for the selection of context
services on the basis of the QoS and QoC they can offer. The
algorithm takes into account the QoS and QoC requirements of
context consumers for each context information to which they
subscribe with the Context Broker.
The remainder of the paper is organized as follows. Section
2 describes related work on context-awareness and context
information provisioning. Section 3 provides background
information on the concepts of cloud services and quality-of-
context. Section 4 presents an overview of our proposed
framework, describes the interactions among the framework
components and our proposed algorithm for the selection of
context services in both a single cloud and multiple clouds.
Section 5 discusses the challenges of the approach. Finally,
Section 6 concludes the paper and describes future work.
II. RELATED WORK
Over the last two decades, context provisioning has been a
particularly popular research topic, especially with the advent
of smart mobile devices, the advances in sensing technology,
and the proliferation of mobile applications. Many research
works have proposed, designed, and implemented frameworks
and middleware infrastructures for managing context
information and providing users with context-aware services.
Moreover, many surveys have been made in order to
understand the features and shortcomings of existing systems
[4][5][6].
With
the emergence of service-oriented computing,
numerous research works have investigated the design and the
implementation of context services. A context service
typically provides infrastructure support for collection,
management, and dissemination of context information vis-à-
vis a number of subjects. Subjects may be users, objects such
as handheld devices and equipment, or the environment of
users. The context service acquires context information from
various context sources. For example, consider the
"temperature" at the current location of the mobile user; this
information may be obtained directly from the mobile device
of the user. It can also be obtained from a local weather
station. Alternatively, it may be obtained from weather TV
channels providing weather information nation-wide.
Schmidt et al. designed and implemented a generic context
service with a modular architecture that allows for context
collection, discovery and monitoring [7]. This context service
provides a Web service interface that allows its integration in
heterogeneous environments. The implementation uses OWL
to describe context information and SPARQL to query and
monitor context information.
Lei et al. described the design issues and the
implementation of a middleware infrastructure for context
collection and dissemination [8]. They realize this middleware
infrastructure as a context service. To allow for wide
deployment of the context service, this work has addressed the
following issues: extensibility of the context service
architecture by supporting heterogeneous context sources,
integrated support for privacy, and quality of context
information support. Coronato et al. proposed a semantic
context service that relies on semantic Web technologies to
support smart offices [9]. It uses ontologies and rules to infer
high-level context information, such as lighting and sound
level, from low-level raw information acquired from context
sources.
As it was described in the surveys mentioned earlier, many
of the existing context-aware systems are suffering from the
lack of scalability, extensibility, interoperability, and adoption
difficulties. The originality of our approach lies in bringing
context management and delivery to the cloud by deploying
context services on the cloud. We believe that our approach
will benefit from the power of the cloud in terms of scalability,
elasticity, cloud storage abundance, and scaling up and down.
III. BACKGROUND
A. Quality-of-Context
Context information is characterized by some properties
referred in literature as QoC indicators. Buchholz et al. [2]
have defined the QoC as: “Quality of Context (QoC) is any
information that describes the quality of information that is
used as context information. Thus, QoC refers to information
and not to the process nor the hardware component that
possibly provide the information.”
Buchholz et al. [2] and Sheikh et al. [3] have identified the
following QoC indicators: precision, freshness, temporal
resolution, spatial resolution, and probability of correctness.
Precision represents the granularity with which context
information describes a real world situation. Freshness
represents the time that elapses between the determination of
context information and its delivery to a requester. Spatial
resolution represents the precision with which the physical
area, to which an instance of context information is applicable,
is expressed. Temporal resolution is the period of time during
which a single instance of context information is applicable.
Probability of correctness represents the probability that a
piece of context information is correct.
Several competing context services may provide the same
context information [2]. Therefore, potential context
consumers should be able to select context services on the
basis of the QoC they can assure.
B. Cloud services
Cloud computing enables a service-provisioning model for
computing services that relies on the Internet. This model
typically involves the provisioning of dynamically scalable
and virtualized services.
Applications or services offered by means of cloud
computing are called cloud services. Typical examples of
cloud services include office applications (word processing,
spreadsheets, and presentations) that are traditionally found
among desktop applications. Nearly all
large software
corporations, such as Google, Microsoft, Amazon, IBM, and
Oracle, are providing various kinds of cloud services. Besides,
many small businesses have launched their own Web-based
services, mainly to take advantage of the collaborative nature
of cloud services.
The user of a cloud service has access to the service
through a Web interface or via an API. Once started, the cloud
service application acts as if it is a normal desktop application.
The difference is that working documents are on the cloud
servers.
Cloud service models are:
• Infrastructure-as-a-Service (IaaS): With IaaS,
organizations rent computing resources and storage
space and access them through a private network or
across the Internet.
• Platform-as-a-Service (PaaS): With PaaS,
organizations can develop their business applications
in a cloud environment by using software tools
supported by their cloud provider. Maintenance and
management of the cloud infrastructure including
servers and operating systems is the responsibility of the
cloud provider.
• Software-as-a-Service (SaaS): With SaaS, the cloud
service application runs on the cloud provider servers
and users access the service through a Web interface
or via an API.
IV. A FRAMEWORK FOR CLOUD-BASED CONTEXT
PROVISIONING
In every business with a delivery/consumption model,
brokers emerge to mediate between consumers and providers.
This could be the case for context delivery. Context brokers
may, then, be used to decouple context consumers from
context services. Our interest in using brokers is motivated by
the fact that they have been used for a while in Service
Oriented Architecture (SOA) to mediate between services
providers, service consumers, and partners. They have also
been extensively used in multimedia systems and in mobile
computing systems to deal mainly with the issue of QoS
management.
Fig. 1 depicts our framework for context information
provisioning. The main components of the framework are:
Context-aware Web services (context consumers), Context
Brokers, and Cloud-based Context Services. Multiple context
brokers may be deployed, one for each local domain for
instance. A discovery service will allow context-aware
consumers to bind to the right context broker.
A. Context Brokers
A context broker is a mediator service that decouples
context consumers from context services. It is in charge of
handling subscriptions of context consumers in which they
express their interest to receive context information, and
registration of context services. Context services may then
publish their newly acquired context information to the
context broker, which notifies context consumers about that
newly acquired context information. Context brokers can also
be deployed on the cloud. Fig. 2 illustrates our topic-based
publish-subscribe system in which context services are the
publishers and the CAWSs are the subscribers.
Context information -- such as location, temperature, and
user activity -- represents the topics of the system. The
Publish/subscribe messaging model is a one-to-many pattern
of asynchronous message distribution based on registration of
interest. In this model, publishers associate the name of a topic
to each message (―publish‖) rather than addressing it directly
to subscribers. Then, the message system sends the message to
all eligible recipients that expressed their interest in receiving
messages on that topic ("subscribe"). As opposed to point-to-
point messaging systems, such as message queuing, the
publish/subscribe model of asynchronous communication is a
far more scalable architecture. This is because the source of
the information has only to concern itself with creating the
information, and can leave the task of servicing potential
recipients to the messaging system. It is a loosely coupled
architecture in which senders often do not need to know who
their potential subscribers are, and the subscribers do not need
to know who generates the information.
In addition to this publish/subscribe model for
provisioning context information, a context broker implements
a regular on-demand request/response model, in which it
requests up-to-date context information from context services
once a context consumer requires information for a given
topic. Therefore, a context broker may either pull context
information from context services or let context services push
updated context information.
Context services, typically residing in different clouds,
deliver context information to context consumers with various
quality-of-context and quality-of-service (QoS). Therefore, the
Context Broker is in charge of selecting appropriate context
services to deliver context information to which a context
consumer has subscribed. Context
information may be
delivered to the same consumer by several context services.
Each one may deliver a piece of context information (a topic)
that the consumer requires to adapt its behavior to the current
context of a user. In Sub-section 4.5, we describe a selection
algorithm that allows ranking context services with regard to
the QoC and the topics required by a context consumer.
Figure 2. Topic-based publish/subscribe system
B. Context Consumers
In our framework, context-aware Web services (CAWS)
are the consumers of context information obtained from the
cloud-based context services. A CAWS is a Web service that
can understand situational context and can adapt its behavior
according to the changing circumstances, as context data may
change rapidly. It produces dynamic results according to the 5
WH questions: who, where, when, what, and why it was
invoked. A CAWS can be responsive to various situational
circumstances, such as:
• The identity of the client who invoked the service,
whether it is a person, or another Web service.
• The location of the client.
• The time at which the client invokes the service.
• The activity that the client is carrying out at the time it
invokes the service.
• The preferences that the client may have defined prior
to invoking the service.
• The security and privacy policies associated with the
client of this service.
• The device (laptop, PDA, smartphone, etc.) that the
client is using to invoke the service.
C. Cloud-based Context Services
As we have mentioned earlier in the related work section,
high-level context information is typically obtained from
context services that aggregate raw context information sensed
by sensors and mobile devices. Given the massive amount of
context data processed and stored by context services and the
wide acceptance of the cloud computing technology, context
providers now can leverage their services by deploying them
on the cloud.
Fig. 3 depicts the process of context acquisition and the
deployment of context services on the cloud to provide high-
level context information to context consumers. Raw context
data sensed by various devices and sensors is processed,
aggregated by Context Aggregator components in a structured
format, and then uploaded to the cloud-based context services.
Figure 3. Deployment of high-level context information on the cloud
One of the underlying advantages of the deployment of
context services in the cloud is the economy of scale. By
making the most of the cloud infrastructure provided by a
cloud vendor, a context provider can offer better, cheaper, and
more reliable services than is possible within its premises. The
context service can utilize the full processing and storage
resources of the cloud infrastructure if needed. Another
advantage is scalability in terms of computing resources.
Context providers can scale up when additional resources are
required as a result of a rise in the demand for context
information. Conversely, they can scale down when the
demand for context information is decreasing. Another benefit
of the approach is to enable context-aware application services
to acquire their required context information on a pay-as-you-
go basis and to select cloud-based context services on the basis
of the price they have to pay and other criteria, such as the
QoC they can get. Furthermore, context-aware applications
can obtain context information from cloud-based context
services without having to be involved in context
management. The net benefit for consumers and mobile users,
in particular, is the ability to receive better services tailored to
their current context.
The SaaS model is the most appropriate model for cloud-
based context provisioning. Indeed, SaaS is seen as the trend
of the future and the most common form of cloud service
development. With SaaS, software is deployed over the
Internet and delivered to thousands of customers. Using this
model, the context service provider may license its service to
customers through a subscription or a pay-as-you-go model.
The service is then accessible using an API.
D. Interfaces and Interaction model
In this section, we describe the interactions among the
components of the framework and do consider only the case of
a single context broker. The model can be easily extended to
consider several context brokers. Fig. 4 shows a simplified
class diagram of the framework components, and Fig. 5
depicts the interactions among them.
Figure 4- Class diagram of the framework components.
The context broker acts as an intermediary between
publishers (context services) and subscribers (context
consumers) on a collection of topics (context information).
A context consumer invokes the subscribe() method of the
context broker to register its interest to receive updates on
some topics, such as location, and temperature. If the
processing of subscribe() is successful, the context broker
returns a subscription ID to the context consumer.
Similarly, a context service invokes
registerContextService() of the context broker to register its
interest to publish some types of context information through
the context broker. If the processing of that method is
successful, the context broker returns a registration ID to the
context service.
The context broker receives notifications of context change
through its notify() method that a context service invokes. It,
then, notifies a context consumer about context change by
invoking its notify() method. Furthermore, a context consumer
may request the current value for a given topic by invoking
getCurrentTopicValue() of the context broker. The broker
forwards the request to context services that are providing that
topic requested by the context consumer. A newly-subscribed
context consumer can invoke getLastTopicValue() in order to
get the last value of a given topic that other consumers have
already received.
The context broker also has two additional methods, findContextConsumers() and findContextServices(), that are self-invoked. The former is invoked to get the list of context consumers that have subscribed to a given topic once a notification of context change has been received for that topic. The latter is invoked to get the list of context services that are publishing the topic requested by a context consumer that has invoked getCurrentTopicValue().
A context aggregator can register at a context service by
specifying what topics it is an aggregator for. Once registered,
a context aggregator can submit the current value for a given
topic by invoking the setTopicValue() method at the context
service. When the topic value is changed in the context
service, the notify() method at the context broker is triggered
to notify all subscribers of that topic.
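To make the interaction model concrete, the sketch below collects the broker operations named above (subscribe(), registerContextService(), notify(), getCurrentTopicValue(), getLastTopicValue(), findContextConsumers(), findContextServices()) into a single class. It is only an illustrative sketch: the internal data structures, the ID scheme, and the way pull requests are answered are our assumptions, not part of the framework specification.

# Minimal sketch of the broker-centred interaction model (assumed implementation details).
import itertools
from collections import defaultdict


class ContextBroker:
    def __init__(self):
        self._ids = itertools.count(1)
        self._subscribers = defaultdict(dict)   # topic -> {subscription_id: consumer}
        self._services = defaultdict(dict)      # topic -> {registration_id: service}
        self._last_value = {}                   # topic -> last published value

    def subscribe(self, consumer, topics):
        """Register a consumer's interest in topics; return a subscription ID."""
        sid = next(self._ids)
        for topic in topics:
            self._subscribers[topic][sid] = consumer
        return sid

    def registerContextService(self, service, topics):
        """Register a context service that publishes the given topics; return a registration ID."""
        rid = next(self._ids)
        for topic in topics:
            self._services[topic][rid] = service
        return rid

    def notify(self, topic, value):
        """Called by a context service on a context change; forwards it to subscribers."""
        self._last_value[topic] = value
        for consumer in self.findContextConsumers(topic):
            consumer.notify(topic, value)

    def getCurrentTopicValue(self, topic):
        """Forward a pull request to the services providing the topic."""
        values = [service.getCurrentTopicValue(topic)
                  for service in self.findContextServices(topic)]
        return values[0] if values else None

    def getLastTopicValue(self, topic):
        """Return the last value already delivered for the topic (for new subscribers)."""
        return self._last_value.get(topic)

    def findContextConsumers(self, topic):
        return list(self._subscribers[topic].values())

    def findContextServices(self, topic):
        return list(self._services[topic].values())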
Figure 5- Diagram of interactions among the framework components
E. A Multi-attributes Algorithm for Context Services Selection
As we have stated earlier, the Context Broker is in charge of selecting suitable context services to deliver the context information to which a context consumer (CAWS) has subscribed.
Context information may be delivered to the same context
consumer by several context services. Each one may deliver a
piece of context information (a topic) that the context
consumer requires to adapt its behavior to the current context
of a user. Thus, the selection has to be done per topic. In this
subsection, we describe our proposed algorithm for context
services selection. The algorithm allows ranking context
services with regard to the QoC and the QoS required by a
context consumer. We first describe how the algorithm works
in the case of a single cloud; then, we extend the algorithm to
the case of multiple clouds as depicted by Fig. 1.
1) Single Cloud-based Service Selection
As numerous potential context services, within the cloud,
can deliver the context information required by a consumer, it
is indispensable to consider only potential context services that
can satisfy both the QoC and the QoS required by the context
consumer.
Let be the list of context information
(topics) to which a context consumer has subscribed by
showing its interest in receiving such context information. Let
be the list of context services in the
cloud that have subscribed with the Context Broker. Two
context services may provide different context information;
each one specializes in offering particular context information.
One service, for example, may offer location information
while another service may offer only temperature information,
and a third one may offer both of them.
These services typically provide context information with
different QoC and QoS. We assume that QoC and QoS
indicators are in normalized form with values between 0 and
1. A value of 1 means highest quality and 0 means lowest
quality. For example for the freshness quality indicator, 1
means that context sources have sensed the information in the
last minute, and 0 means that they have sensed it in the last 10
minutes. QoS indicators may concern for instance parameters
such as availability, response- time, reputation, and cost of
service.
When subscribing to context information, a context consumer specifies the minimum values of the normalized QoC and QoS indicators that it can tolerate. For instance, a context consumer subscribing to location information may require a minimum value of 80% for the freshness indicator and 93% for the probability-of-correctness indicator; it may also require 98% for the availability QoS indicator. A list of QoC indicators (parameters) and a list of QoS indicators are considered in the system.
The minimum QoC requirements that the context consumer tolerates for a given context information (topic) are expressed by a vector with one entry per QoC indicator considered in the system.
Therefore, the whole quality-of-context requirements of
the context consumer for all its subscribed topics and all QoC
indicators considered in the system can be expressed by the
following matrix:
The minimum QoS requirements that the context consumer tolerates concern all topics and are expressed by a vector in which each entry represents the minimal value that the context consumer is willing to accept for the corresponding QoS parameter.
A zero value in any QoC or QoS parameter means that the
user has not specified any constraint on that parameter.
The goal of the selection algorithm is to find, for each topic to which the context consumer has subscribed, a suitable context service from the registered set that can satisfy the minimum quality requirements of the context consumer.
The QoC offer of a context service is expressed by a matrix analogous to the requirements matrix above. A context service is suitable for a topic if its QoC offer meets or exceeds the minimum QoC requirement for that topic for every QoC indicator, and its QoS offer meets or exceeds the minimum QoS requirement for every QoS indicator (condition (1)). In other words, a context service is suitable for provisioning a topic if the minimum quality-of-context requirements as well as the minimum quality-of-service requirements are satisfied.
In the following, we will consider in the selection process only context services that meet the minimum QoS requirements of the context consumer.
The context consumer may set relative weights for the
QoC indicators. He may even set weights for each topic to
which it subscribed. For example, for the location topic, more
weight may be given to the spatial resolution indicator than to
the probability of correctness indicator. For the time of the day
topic, more weight may be given, for example, to the precision
indicator than to the other QoC indicators. Therefore, the
weight matrix is given by:
The score of a given QoC indicator for a given topic
by the offer is:
for and
(2)
Therefore, the score matrix of the offer, for all QoC
indicators and all topics is:
Given the weight matrix and the minimum QoC requirements matrix, the minimum score matrix is:
The QoS offer of a context service is expressed by a vector in which each entry is the service's offer for the corresponding QoS indicator. The quality-of-service requirements of the context consumer are independent of the topics.
The difference matrix, obtained by subtracting the minimum score matrix from the score matrix of an offer, shows whether the context service may satisfy all QoC requirements for all topics to which the context consumer has subscribed. A value that is less than zero in this matrix means that the service cannot satisfy the QoC requirement for the associated topic and QoC indicator. Therefore, we have to reason per topic and consider only context services that can meet the QoC requirements for that topic.
The score per topic for a potential context service offer aggregates its QoC indicator scores for that topic. (3)
The score of a context service for all topics can be expressed by a vector with one entry per topic:
Step-1: Construct the matrix of minimum QoC requirements of the
context consumer for all the topics it subscribes to, and the vector S of
minimum QoS requirements the context consumer can tolerate. We
assume that all values of the matrix and the vector are normalized to be
in the range [0,1].
Step-2: Construct the weight matrix W set by the context consumer for
each topic and for each QoC indicator, then the minimum score matrix
.
Step-3: For each Context service registered with the Context Broker,
a) Construct the normalized matrix of the QoC offers of the context service for all current topics to which the context consumer has subscribed, and the normalized vector of its QoS offer.
b) Calculate the score matrix that represents the score between the
QoC offer of and the context consumer QoC requirements for
each quality indicator considered in the system and for each topic.
c) Calculate the difference matrix. If a value of this matrix is less than zero, then the context service cannot satisfy the QoC requirements of the context consumer for the associated topic and the associated QoC indicator. Only rows with positive values will be considered in the next steps.
d) Calculate the score vector using equation (3). Note that rows
with negative values in the difference matrix will have a score 0 in
the score vector.
Considering the scores of all the potential context services,
we get the following decision matrix:
Step-4: Create the decision matrix, and fill out the maximum score for
each topic and the CS providing that score.
[Decision matrix: one row per topic, listing the score of each potential context service, the maximum score for the topic, and the selected CS providing that score.]
A score in the decision matrix is zero if the context service
cannot meet the QoC requirements for a given topic.
The maximum score value of each row j corresponds to the
best QoC offer that can fulfill the QoS and QoC requirements
of the context consumer for the topic .
The most suitable context service for topic , that we call
here , will be the one that maximizes the above
score, that is:
.
(4)
If no context service satisfies the context consumer QoS
and QoC requirements for a given topic, then the Context
Broker may ask the context consumer to lower its QoC
expectations.
The steps of the algorithm are summarized in Fig. 6.
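The following sketch implements Steps 1–4 in Python with numpy. The notation used here (R for the minimum QoC requirement matrix, S_min for the minimum QoS vector, W for the weight matrix, and per-service offers (Q, s)), as well as the particular scoring rule (weight times offered value), are assumptions made for illustration; the QoS filtering of condition (1), the difference matrix, and the per-topic decision follow the steps above.

# Sketch of the QoC-based context service selection algorithm (Steps 1-4).
import numpy as np

def select_context_services(R, S_min, W, offers):
    """R: (topics x QoC indicators) minimum QoC requirements, normalized to [0, 1].
    S_min: (QoS indicators,) minimum QoS requirements.
    W: (topics x QoC indicators) weights set by the context consumer.
    offers: dict mapping service id -> (Q, s), Q the QoC offer matrix, s the QoS offer vector.
    Returns: dict mapping topic index -> (best service id, best score) or None."""
    n_topics, _ = R.shape
    M_min = W * R                                   # Step 2: minimum score matrix
    decision = {t: (None, 0.0) for t in range(n_topics)}
    for sid, (Q, s) in offers.items():
        if np.any(s < S_min):                       # drop services violating the QoS part of (1)
            continue
        score_matrix = W * Q                        # Step 3b: weighted QoC offer (assumed scoring rule)
        diff = score_matrix - M_min                 # Step 3c: difference matrix
        topic_scores = np.where(np.all(diff >= 0, axis=1),
                                score_matrix.sum(axis=1), 0.0)   # Step 3d: per-topic score, 0 if infeasible
        for t in range(n_topics):                   # Step 4: keep the maximum score per topic
            if topic_scores[t] > decision[t][1]:
                decision[t] = (sid, float(topic_scores[t]))
    return {t: (sid, sc) if sid is not None else None
            for t, (sid, sc) in decision.items()}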
2) Multiple Clouds-based Service Selection
The previous subsection describes how the ranking and
selection of context services is achieved within a single cloud.
In order to find out the most suitable context services, for each
topic, within multiple clouds, the context broker selects
potential context services in each cloud according to the
algorithm described in the previous sub-section. Selected
context services from the clouds are then ranked to find out
the best context services per topic, which maximizes the score
expressed by equation (3).
Figure 6. QoC-based Context Service Selection Algorithm
V. CHALLENGES OF THE APPROACH
In conjunction with the benefits provided by the cloud, deploying context services to the cloud raises numerous issues for context providers to consider, including possible interoperability, security, and performance concerns.
The interaction model described in the previous section
provides the basis for the development of a context service
API that will be used by both context brokers and context
consumers to interact with context services. Heterogeneity of
the APIs offered by various context services will be one of the
challenges of the approach, especially if they are residing on
different clouds. Context brokers should, then, be able to
interoperate with all these heterogeneous context services.
Security is a significant concern with any SaaS application
on the cloud. Care must be taken when designing and
implementing a security solution for a cloud-based context-
service to keep it as simple and efficient as possible. For
instance, the context service may have to be integrated with an
identity management service. In this scenario, each customer
of the context service has an identity account, which is used to
authenticate the customer and track all its requests for service.
Performance monitoring, billing, managing customers’
expectations are also significant concerns among others that a
context service provider has to handle. The context provider
must ensure that its context service is highly available and that
its customers can access it. One outage or crash of the service
can affect all its customers. There is now a general trend toward implementing a Service Level Agreement (SLA) between providers of cloud services and customers, even though most SaaS vendors do not provide them at present.
Another concern, which is not linked to the cloud, but that
should be handled by context brokers and consumers is the
heterogeneity in the representation and modeling of context
information by each context service. Bettini et al. [10] provide
a survey in which they describe and compare current context
modeling and reasoning techniques. Strang et al. [11] provide
another similar survey. Modeling approaches mainly include
key-values models, graphical models, object-oriented models,
markup scheme models, logic-based models, and ontology-
based models. With this heterogeneity in context information
models, context brokers should provide a common ontology-
based context information model and the mappings from the
various models to this common model.
[8] H. Lei, D.M. Sow, J.S. Davis, G. Banavar, and M.R. Ebling, ―The design
and applications of a context service,‖ SIGMOBILE Mob. Comput.
Commun. Rev., vol 6(4), pp.45-55, October 2002.
[9] A. Coronato, G. De Pietro, and M. Esposito, ―A Semantic Context
Service for Smart Offices,‖ In Proc. of the International Conference on
Hybrid Information Technology, vol. 02, pp.391-399, 2006.
[10] C.Bettini, O. Brdiczka, K. Henricksen, J.Indulska, D. Nicklas, A.
Ranganathan, D. Riboni, ―A Survey of Context Modelling and
Reasoning Techniques,‖ Pervasive and Mobile Computing, vol. 6(2),
pp. 161-180, 2010.
[11] T. Strang, C. Linnhoff-Popien, ―A Context Modeling Survey,‖ In
Workshop on Advanced Context Modelling, Reasoning and
Management, UbiComp 2004 , Nottingham/England, 2004.
AUTHORS PROFILE
Elarbi Badidi is currently an Assistant Professor of
computer science at the Faculty of Information
Technology
(FIT) of United Arab Emirates
University. Before joining the FIT, he held the
position of bioinformatics group leader at the
Biochemistry Department of University of Montréal
from 2001 to July 2004. He received a Ph.D. in computer science in 2000 from University of Montréal, Québec (Canada). Dr. Badidi has been
conducting research in the areas of object-based
distributed systems, bioinformatics tools integration, and Web services. He is
a member of the IEEE, IEEE Computer Society, and ACM. He served on the
technical program committees of many international conferences. His research
interests include Web services and service oriented computing, middleware,
cloud computing, and bioinformatics data and tools integration.
Larbi Esmahi is an Associate Professor of the
School of Computing and Information Systems at
Athabasca University. He was the graduate program
coordinator at the same school during 2002-2005.
He holds a PhD in electrical engineering from Ecole
Polytechnique, University of Montreal. His current
research interests are in e-services, e-commerce,
multi-agent systems, and intelligent systems. He is
an associate editor for the Journal of Computer
Science, and the Tamkang Journal of Science and
Engineering. He is also member of the editorial
advisory board of the Advances in Web-Based Learning Book Series, IGI
Global, and member of the international editorial review board the
International Journal of Web-Based Learning and Teaching Technologies.
VI. CONCLUSION AND FUTURE WORK
High-level context information is typically obtained from
context services that aggregate raw context information sensed
by sensors and mobile devices. Given the enormous amount of
context data processed and stored by context services and the
wide acceptance of the cloud computing technology, context
providers now can leverage their services by deploying them
on the cloud.
In this paper, we have presented our proposed framework
for cloud-based context provisioning. The framework relies on
context brokers for context information dissemination using a
publish/subscribe model. Context services, deployed on the
cloud, can scale up and down, in terms of cloud resources they
use, according to the demand for context information. We
have described a preliminary model of interactions among the components of the framework, which could be the basis for a context service API. As future work, we first intend to
investigate further on a common ontology-based model for
context information representation that can be used by context
brokers; and then, describe the mappings from the various
context representation models described in the literature to that
common model. We also intend to implement a prototype of
the framework by considering some real scenarios for context
provisioning, and implementing a context broker and a few
similar cloud-based context services using open-source
software tools.
REFERENCES
[1] A.K. Dey, ―Understanding and Using Context,‖ Journal of Pervasive and
Ubiquitous Computing, vol. 5(1), pp. 4–7, 2001.
[2] T. Buchholz, A. Kpper, M. Schiffers, ―Quality of context: What it is and
why we need it?,‖ In Proc. of the 10th International Workshop of the HP
OpenView University association (HPOVUA), 2003.
[3] K. Sheikh, M. Wegdam, and M. Van Sinderen, ―Quality-of-Context and
its use for Protecting Privacy in Context Aware Systems,‖ Journal of
Software, vol. 3(3) pp. 83-93, March 2008.
[4] M.Baldauf, S. Dustdar, and F. Rosenberg, ―A survey on context-aware
systems,‖ International Journal of Ad Hoc and Ubiquitous Computing,
vol. 2 (4), pp. 263-277, 2007.
[5] K. Henricksen, J. Indulska, T. McFadden, and S. Balasubramaniam,
―Middleware
for Distributed Context-Aware Systems,‖ OTM
Confederated International Conferences, pp. 846-863, Springer-Verlag,
2005.
[6] H.L. Truong, and S. Dustdar, ―A Survey on Context-aware Web Service
Systems,‖ International Journal of Web Information Systems, vol. 5(1),
pp.5-31, Emerald, 2009.
[7] H. Schmidt, F. Flerlage, F.J. Hauck, ―A generic context service for
ubiquitous environments,‖ In Proc. of the IEEE International Conference
on Pervasive Computing and Communications (PERCOM), pp.1-6,
2009.
|
ai_researcher | 1 | There_Is_No_“I”_in_Team_but_There_Is_in_Innovation_How_Individual_Attributes_Impact_Team_Ideation_and_Selection_Practices.pdf | 1
2
0
2
t
c
O
2
1
]
Y
S
.
s
s
e
e
[
1
v
8
5
7
5
0
.
0
1
1
2
:
v
i
X
r
a
Role of Externally Provided Randomness in
Stochastic Teams and Zero-sum Team Games
Rahul Meshram
Abstract
The stochastic team decision problem has been studied extensively, and the existence of optimal solutions has been established in recent literature. The value of information in statistical and decision problems is a classical question; however, much of the earlier work does not qualitatively describe the role of externally provided private and common randomness in stochastic team problems and team-vs-team zero-sum games.
In this paper, we study the role of externally provided private or common randomness in stochastic team decisions. We observe that randomness independent of the environment does not benefit either team, whereas randomness dependent on the environment benefits a team and decreases its expected cost. We also study an LQG team game with a special information structure on the private or common randomness. We extend this study to the team-vs-team zero-sum game. We show that if the game admits a saddle-point solution, then private or common randomness independent of the environment does not benefit either team. We also analyze the scenario in which one team has more information than the other, where that information depends on the environment; if the game has a saddle-point solution, the team with more information benefits. This is also illustrated numerically for an LQG team-vs-team zero-sum game. Finally, we show for a discrete team-vs-team zero-sum game without a saddle point that private randomness independent of the environment can benefit a team. The role of common randomness is also discussed for the discrete game.
I. INTRODUCTION
A team decision problem consists of two or more decision makers (DMs) or players that make decisions in a random environment, where the information of each DM is a (possibly partial) observation about the random environment. A DM takes an action as a function of its information; this function is referred to as the decision rule. The DMs choose their decision rules to jointly minimize an expected cost.
Rahul Meshram is with the Electronics and Communication Engineering Department, Indian Institute of Information
Technology Allahabad. [email protected]
If the decision makers had identical observations, then the multiple decision makers could be
clumped together as a single decision maker and the problem reduces to that of a stochastic opti-
mization or control problem. Of interest to us here is the case where information is asymmetric,
whereby there is no obvious method of aggregating it. A team decision problem is in essence
a decentralized stochastic optimal control problem. Problems with this structure appear in a variety of settings, for example in sensor networks. The decision makers could be sensors situated at
different locations. These sensors observe the environment through different, possibly imperfect,
channels and under this information structure, the sensors have to act collectively to minimize
a certain cost function.
In this paper we consider the role of externally provided private and common randomness in
stochastic teams and in stochastic team v/s team zero-sum games. In the setup described above,
it is conceivable that an external source provides randomness to the players. This randomness
may or may not be correlated with their observations, and it may or may not be correlated across
players. This randomness increases the set of achievable joint distributions on the joint action
space of the DMs. Our goal is to understand the role of such randomness in a team problem
and a team v/s team game.
Qualitatively, there are three kinds of randomness that an external source may provide. First,
the source of randomness could be a coordinator – namely an entity that mixes the actions of
the DM by randomizing. Mathematically, this randomness is independent of the observations of
the DMs, but it may be correlated across DMs. This correlation makes this randomness distinct
from the usual notion of “randomized policies”, in which the randomization is independently
performed by each DM. The second kind of randomness may be imagined as a counsellor – this
entity accesses the observations of each decision maker and provides a common message to all
DMs. This kind of randomness is correlated with the environment. The sources of randomness
mentioned above are relevant for team problems as well as team v/s team games. The third kind,
which is relevant only in the team v/s team game, is that of a mole or a spy. This source of
randomness provides information about observations of the opposite team.
Our interest is in qualitatively understanding the role of the kinds of randomness mentioned
above and quantifying it. We make the following contributions in this work.
1) A team decision problem:
• We show that if a coordinator provides private or common randomness independent
of the environment, then it cannot improve the cost.
• We show that common randomness dependent on the environment can improve the
team cost. For a certain class of LQG team problems, we show that if the information
of each player is replaced by a convex combination of the information of all players,
then the team improves its cost.
2) Team v/s team zero-sum game:
• We show that randomness independent of the environment does not benefit either team if the zero-sum game admits a saddle-point solution.
• We prove that when the randomness is dependent on the environment, a team having more information than the other team benefits; in particular, the minimizing team decreases its cost.
• For LQG team zero-sum games we illustrate that common randomness dependent on the environment leads to an improvement in the optimal team cost.
• Finally, we give an example of a discrete team v/s team zero-sum game without a pure-strategy saddle point and show that private and common randomness independent of the environment can benefit the teams, although the result may not be a Nash equilibrium.
A. Related Work
Early work on the team decision problem, in the context of organization theory, appeared in [1], where the author used concepts from game theory and statistical decision theory. A general formulation of the team decision problem is described in [2], and a person-by-person optimality condition is established to solve the distributed team decision problem. Furthermore, the team decision problem was extended to an LQG team problem in [3], [4]. These works investigated static and dynamic LQG team decision problems and explored their connections with information theory and economics. In the static LQG team problem there is a unique optimal solution, linear in the information, which is obtained by solving the person-by-person optimality conditions; dynamic LQG teams with a partially nested information structure were also studied. Moreover, the symmetric static team problem was studied in [5], where it was shown that the optimal strategy for a symmetric team problem is not necessarily a pure strategy but may be a randomized strategy.
The two-team zero-sum LQG game was studied in [6], where it was shown that a team having extra information does not necessarily ameliorate its expected loss. Apart from team decision problems, the role of common randomness in multi-agent distributed control was analyzed in [7]. Our work is inspired by [3], [6], [7]. The role of common randomness is not quantified in [3], [6], whereas we discuss the role of private and common randomness in the team decision problem.
The value of information for statistical problems was first introduced in [8], [9]. It was further extended to decision problems in [10], where the author showed that increasing information leads to increasing utility. Early work on the role of increasing information in two-person games is presented in [11], with the surprising finding that increasing informativeness may lead to decreasing performance. The value of information available to players in a two-person zero-sum game is studied in [12]: as the information of a player increases, the solution moves toward the ideal optimality condition when a saddle point exists. This result motivated the study of the value of information in team-vs-team zero-sum games, and a similar result was shown for LQG team-vs-team zero-sum games. In [13], the value of information for a two-player nonzero-sum game is developed; it is shown that, in an LQG model, a better-informed player decreases the average Nash cost for both players, whereas in a duopoly problem only the better-informed player benefits.
A comprehensive reference for stochastic team decision problems is [14]. This book discusses the fundamentals of team decision problems, sequential team decision problems, comparison of information structures, topological properties of information structures, and applications to communication channels, and it has motivated further research on team decision problems in recent years.
There has been a flurry of research activity on static team problems and the existence of their solutions. In [15], a class of sequential team problems with a certain information structure is studied and the existence of optimal strategies is proved. The existence of an optimal solution is further shown under weaker assumptions, namely that the cost function is bounded and continuous, the action space of each agent may be compact or non-compact, and the observations satisfy a technical condition; ideas from weak convergence in probability theory are used to show convergence of the joint probability measures on actions. In [16], the author extended the study of [15] and further weakened its assumptions, showing the existence of optimal strategies for static teams and introducing a topology on the set of policies. In [17], the authors studied convexity properties of strategy spaces and discussed the redundancy of common or private randomness that is independent of the environment for static teams; although this result is similar to ours, their proof differs from our method. The role of common information in dynamic stochastic games is studied in [18], where asymmetry of common information among players is considered. In [19], the existence of an optimal solution to the static team problem under private and common information structures is developed using a topology of information and spaces of measures. The early ideas on the role of information developed in [8]–[11], [13] are derived for zero-sum games under slightly weaker assumptions in [20], where the existence of a saddle-point equilibrium is shown.
However, these papers do not provide a qualitative comparison of the role of externally provided private and common randomness in static teams and team-vs-team zero-sum games.
The rest of the paper is organized as follows. Private and common randomness in the static team decision problem is described in Section II. The role of private and common randomness in the static team-vs-team zero-sum game is developed in Section III. Finally, concluding remarks and future research directions are presented in Section IV.
II. PRIVATE AND COMMON RANDOMNESS IN STATIC TEAM PROBLEM
A. Team decision problem
Consider a team decision problem having N decision makers DM1, . . . , DMN in a team and
let N = {1, . . . , N}. Let ξ be a random vector taking values in a space Ξ denoting the state
of nature or an environment; let its distribution be P(·). Define yi := ηi(ξ) for a measurable
function ηi to be the information observed by DMi and let Yi be the space of yi. Let Ui ⊆ Rmi,
mi ∈ N denote the set of actions of DMi. The strategy space of DMi is Γi, the space of
measurable functions γi mapping Yi to Ui and an action ui is given by ui = γi(yi). Without loss
of generality we take Ui ⊆ R for all i, since a DM with a Rmi-valued strategy can be considered
as mi separate DMs with R-valued strategies; thus mi = 1 for all i ∈ N . Let
u := (u1, . . . , uN ),
γ := (γ1, . . . , γN ),
u−i := (u1, . . . , ui−1, ui+1, . . . , uN ),
γ−i := (γ1, . . . , γi−1, γi+1, . . . , γN )
The cost function is a measurable function κ : U × Ξ → R, where U := ∏_{i∈N} U_i, and let
J(γ) = E_ξ[κ(u_1 = γ_1(η_1(ξ)), . . . , u_N = γ_N(η_N(ξ)), ξ)].
A team optimal solution of the above problem is defined as γ* ∈ Γ := ∏_{i∈N} Γ_i such that
J*_TO := J(γ*) = min_{γ∈Γ} J(γ) = min_{γ∈Γ} E_ξ[κ(u_1 = γ_1(η_1(ξ)), . . . , u_N = γ_N(η_N(ξ)), ξ)].    (1)
We assume throughout that a team optimal solution exists and use ‘min’ instead of ‘inf’. A related concept, called the person-by-person optimal solution, is a γ ∈ Γ such that
J*_PBP = J(γ) = min_{γ'_i∈Γ_i} J(γ'_i, γ_{−i})   ∀ i ∈ N.
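As a small illustration of definition (1), the sketch below enumerates all deterministic strategies for a hypothetical two-DM problem with a four-state environment, binary observations and binary actions; the environment distribution and the cost table are arbitrary assumptions, not taken from the paper.

# Toy brute-force computation of the team-optimal cost J*_TO for an assumed discrete problem.
import itertools
import numpy as np

rng = np.random.default_rng(0)
states = [0, 1, 2, 3]
P_xi = np.array([0.25, 0.25, 0.25, 0.25])      # distribution of the environment xi
eta1 = lambda xi: xi % 2                        # DM1 observes the low bit of xi
eta2 = lambda xi: xi // 2                       # DM2 observes the high bit of xi
kappa = rng.uniform(0, 1, size=(2, 2, 4))       # cost kappa(u1, u2, xi)

def expected_cost(g1, g2):
    # g1, g2: deterministic strategies, dicts mapping an observation to an action
    return sum(P_xi[xi] * kappa[g1[eta1(xi)], g2[eta2(xi)], xi] for xi in states)

strategies = [dict(zip([0, 1], a)) for a in itertools.product([0, 1], repeat=2)]
J_TO = min(expected_cost(g1, g2) for g1 in strategies for g2 in strategies)
print("team-optimal cost J*_TO =", J_TO)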
B. Externally provided randomness
We now introduce externally provided randomness, beginning with private randomness. Sup-
pose DMi chooses ui randomly from Ui and let Q be the joint distribution of all variables
involved, namely, ξ, y, u. We say that the DMs have externally provided private randomness, if
Q(u|y) = ∏_{i∈N} Q(u_i|y_i).    (2)
This specification corresponds to the standard notion of randomized policies in stochastic control
or behavioral strategies in stochastic games, wherein the action is chosen to be a random function
of the information.
In general one has
Q(ξ, y, u) = Q(u|ξ, y)Q(ξ, y),
where Q(ξ, y, u) is the joint distribution of ξ, y, u, Q(u|ξ, y) the conditional distribution of u
given ξ, y, and Q(ξ, y) is the marginal of ξ, y (evaluated at ‘ξ = ξ, y = y, u = u’). When the
randomness provided to DMs is independent of ξ, we have
u|y ∐ ξ,
i.e, given y the choice of u is independent of ξ. Furthermore, the joint distribution of (ξ, y)
is known; denote this distribution by P (ξ, y). Consequently, any joint distribution of ξ, y, u is
given by
Q(ξ, y, u) = Q(u|y)P (ξ, y).
(3)
To describe externally provided common randomness, let w = (w1, . . . , wN ) be a random
vector, w ∐ ξ, and assume that wi is externally provided to DMi by a coordinator. With the
additional information of wi, the strategies γi of DMi are deterministic yi × wi → ui mappings
and Γi is the space of such strategies. For a given random vector w with distribution P, the team
optimal solution is defined analogously to (1), as follows:
min_{γ∈Γ} J(γ) = min_{γ∈Γ} E_{ξ,w}[κ(u_1 = γ_1(η_1(ξ), w_1), . . . , u_N = γ_N(η_N(ξ), w_N), ξ)].    (4)
Since ξ is independent of w, the expectation with respect to (ξ, w) is well defined once the
marginals of ξ, w are defined.
C. Randomness independent of ξ
In this section, we study the case of externally provided randomness that is independent of
the state of nature ξ. Our main result is that in a team problem, such randomness provides
no benefit to the team. One may interpret this to mean that a team gains nothing by hiring
a coordinator whose sole role is that of mixing the actions of the team members without the
use of any knowledge of the underlying state of nature or of the observations made by team
members.
Let P(···) be the set of joint distributions on the space '···'. Let Q be the set of joint distributions of the random variables ξ, y, u that admit the decomposition above, i.e.,
Q = {Q ∈ P(Ξ × Y × U) | Q satisfies (3), (2)}.
Consider the following problem:
J*_TOP := min_{Q∈Q} E_Q[κ(u, ξ)].    (5)
From the decomposition of Q provided by (3)-(2), it follows that (5) is a multilinear program
with separable constraints. Classical results show that (5) admits a solution that is an extreme
point, namely, one where u_i is a deterministic function of y_i. Consequently, J*_TOP = J*_TO and we have the following result.
Proposition 2.1: In a static stochastic team problem, externally provided private randomness
that is independent of the state of nature cannot improve the team’s cost.
The proof is along the lines of that of Proposition 2.2; we omit the details.
Consider the following cost:
J*_TOC = min_{γ∈Γ, P} E_{ξ,w}[κ(γ_1(η_1(ξ), w_1), . . . , γ_N(η_N(ξ), w_N), ξ)].
This is the lowest cost that can be attained via common randomness. The common randomness
is independent of environment ξ.
Proposition 2.2:
J*_TO = J*_TOP = J*_TOC.    (6)
Proof: It is enough to show that J*_TO = J*_TOC. Consider
J*_TOC = min_{γ∈Γ, P} E_{ξ,w}[κ(γ_1(η_1(ξ), w_1), . . . , γ_N(η_N(ξ), w_N), ξ)].
Assuming {y_1, · · · , y_N} are well defined and rewriting the above expression, we obtain
J*_TOC = min_{γ∈Γ, P} E_w E_{ξ|w}[κ(γ_1(η_1(ξ), w_1), . . . , γ_N(η_N(ξ), w_N), ξ)].
Since the common randomness w is independent of ξ, we have
J*_TOC = min_{γ∈Γ, P} E_w E_ξ[κ(γ_1(η_1(ξ), w_1), . . . , γ_N(η_N(ξ), w_N), ξ)].
Now we split the minimization as min_{γ∈Γ, P(w)} = min_{P(w)} min_{γ∈Γ}; we can also interchange min_{γ∈Γ} and the expectation E_w, since the DMs can cooperate and communicate in a team problem. Thus
J*_TOC = min_{P} E_w min_{γ∈Γ} E_ξ[κ(γ_1(η_1(ξ), w_1), . . . , γ_N(η_N(ξ), w_N), ξ)],
and hence
J*_TOC = min_{P} E_w[J*_TO(w)].
This is a linear program in P, so the optimum is attained at an extreme point, that is, at w* = arg min_w J*_TO(w). Then
J*_TOC = J*_TO(w*).
Now consider that J_TOC is a convex function of the decision rule γ. If the decision rule is linear in its information, that is, γ_i(η_i(ξ), w_i) = α_{i1} η_i(ξ) + α_{i2} w_i, then the cost function is convex in α_{i1} and α_{i2} for all i = 1, · · · , N. Without loss of generality assume that E[w_i] = 0 for all i = 1, · · · , N. Since w and ξ are independent, the cost function is separable, and minimizing with respect to α_{i1} and α_{i2} for all i = 1, · · · , N shows that the cost is minimized iff α_{i2} = 0 for all i = 1, · · · , N. Thus no weight is given to the additional information under this decision rule.
For the LQG team problem in Appendix A, it is illustrated that private and common randomness independent of the environment ξ does not improve the expected cost.
D. Randomness dependent on ξ
Consider a scenario where a consultant provides extra randomness about the environment to the decision makers; that is, this extra randomness is correlated with the environment ξ.
Let ω = (ω_1, . . . , ω_N) be a random vector representing the extra randomness provided to the decision makers by the consultant. Further assume that ω is a function of ξ, i.e., ω = f(ξ) = (f_1(ξ), . . . , f_N(ξ)), where f and the f_i are measurable functions. The strategies of DM_i are γ_i : y_i × ω_i → u_i, with γ_i ∈ Γ_i the space of strategies and u_i ∈ U_i the space of decision variables. The team optimal cost is defined as follows:
min_{γ∈Γ} J(γ) = min_{γ∈Γ} E_{ξ,ω}[κ(u_1 = γ_1(η_1(ξ), ω_1), . . . , u_N = γ_N(η_N(ξ), ω_N), ξ)].    (7)
Since ω is a function of ξ, the optimal cost is
J*_TOER = min_{γ∈Γ} E_ξ[κ(u_1 = γ_1(η_1(ξ), f_1(ξ)), . . . , u_N = γ_N(η_N(ξ), f_N(ξ)), ξ)].    (8)
In the distributed team problem with no extra randomness, each decision maker has only a partial observation about ξ. Thus the observations about ξ are distributed among the decision makers, and the optimal team cost J*_TO is as found in Section II-A. When a consultant provides extra randomness about the environment ξ to the decision makers, there is essentially an increase in the observations about ξ available to them. Intuitively, we expect that the extra randomness will improve the optimal cost of the distributed stochastic team. Thus we have the following result.
Proposition 2.3: In the distributed static stochastic team problem,
J*_TO ≥ J*_TOER.    (9)
Proof: We develop the proof using ideas from [8]. Let B_1 = {η_1(ξ), η_2(ξ), · · · , η_N(ξ)} be the information available to the team, and let B_2 = {(η_1(ξ), f_1(ξ)), (η_2(ξ), f_2(ξ)), · · · , (η_N(ξ), f_N(ξ))} be the information available to the team with the extra common randomness. Thus B_1 ⊂ B_2, i.e., B_2 is more informative than B_1. Since the DMs can cooperate and communicate in a team problem, the team solves the minimization problem min_{γ∈Γ} E_ξ[κ(u_1, u_2, · · · , u_N) | B]. As the f_i and η_i are measurable functions, the γ_i are measurable and Γ is a closed, bounded, convex set. The cost function is also convex and measurable; thus from [8, Theorem 2] we have
min_{γ∈Γ} E_ξ[κ(u_1, u_2, · · · , u_N) | B_2] ≤ min_{γ∈Γ} E_ξ[κ(u_1, u_2, · · · , u_N) | B_1],
which implies the desired result.
Consider an LQG stochastic team problem with decision makers DM_1 and DM_2 in a team. We consider the following variations of the LQG team problem, based on the type of observation available to the decision makers.
Problem 1: Let the decision variable be u_1 = A y, where A is a diagonal matrix with diag(A) = [α_11, . . . , α_N1], y is the observation available to the decision makers, y = [y_1, y_2]^T = [µ_1, µ_2]^T, and Σ is the covariance matrix of the random vector y. The expected team cost is
J*_TOLQG,1 = min_A E_ξ[y^T A^T B A y + 2 y^T A^T S ξ] = min_A Tr[A^T B A Σ + 2 A^T S Σ].
Let A* be the matrix attaining the optimum, so that
J*_TOLQG,1 = Tr[A*^T B A* Σ + 2 A*^T S Σ].
Problem 2: Let the decision variable be u_2 = Ã ỹ, where Ã is a diagonal matrix with diag(Ã) = [α_11, . . . , α_N1], and ỹ is the observation available to the decision makers, ỹ = [y_2, y_1]^T = [µ_2, µ_1]^T. Note that ỹ = Ĩ y with Ĩ = [[0, 1], [1, 0]], and hence ξ = y = Ĩ ỹ.
The expected team cost is
J*_TOLQG,2 = min_Ã E_ξ[ỹ^T Ã^T B Ã ỹ + 2 ỹ^T Ã^T S ξ] = min_Ã Tr[Ã^T B Ã Σ̃ + 2 Ã^T S̃ Σ̃],
where S̃ := S Ĩ and Σ̃ denotes the covariance matrix of the random vector ỹ.
Let A** be the matrix attaining the optimum, so that
J*_TOLQG,2 = Tr[A**^T B A** Σ̃ + 2 A**^T S̃ Σ̃].
Problem 3: Let the decision variable be u_3 = C ω, where C is a diagonal matrix, ω = [ω_1, ω_2]^T, ω_1 = β y_1 + (1 − β) y_2, ω_2 = (1 − β) y_1 + β y_2, β ∈ (0, 1). Hence ω = β y + (1 − β) ỹ. We assume that the decision makers have available common randomness provided by a consultant; this common randomness is a convex combination of the observations y_1 and y_2 available to the decision makers. For example, for β = 1/2 the consultant provides the average of the observations. The optimal cost functional is
J*_TOLQG,3 = min_{u_3∈U} E_ξ[u_3^T B u_3 + 2 u_3^T S ξ].
Proposition 2.4:
1) J*_TOLQG,3 ≤ β J*_TOLQG,1 + (1 − β) J*_TOLQG,2.
2) If Σ̃ = Σ and S̃ = S, then J*_TOLQG,1 = J*_TOLQG,2 and, consequently, A* = A**. Also, J*_TOLQG,3 ≤ J*_TOLQG,1.
Proof: 1) We have
J*_TOLQG,3 = min_{u_3∈U} E_ξ[u_3^T B u_3 + 2 u_3^T S ξ].
Now,
u_3^T B u_3 = ω^T C^T B C ω = (β y + (1 − β) ỹ)^T C^T B C (β y + (1 − β) ỹ) ≤ β y^T C^T B C y + (1 − β) ỹ^T C^T B C ỹ.    (10)
Since B is a symmetric positive definite matrix, (β y + (1 − β) ỹ)^T C^T B C (β y + (1 − β) ỹ) is a convex quadratic function, so inequality (10) follows from convexity. Therefore,
J*_TOLQG,3 ≤ min_C E_ξ[β y^T C^T B C y + (1 − β) ỹ^T C^T B C ỹ + 2β y^T C^T S ξ + 2(1 − β) ỹ^T C^T S ξ]
    = min_C Tr[β C^T B C Σ + 2β C^T S Σ + (1 − β) C^T B C Σ̃ + 2(1 − β) C^T S̃ Σ̃]
    = β min_C Tr[C^T B C Σ + 2 C^T S Σ] + (1 − β) min_C Tr[C^T B C Σ̃ + 2 C^T S̃ Σ̃]
    = β J*_TOLQG,1 + (1 − β) J*_TOLQG,2.
2) Let Σ̃ = Σ and S̃ = S. We have
J*_TOLQG,1 = min_A Tr[A^T B A Σ + 2 A^T S Σ],   J*_TOLQG,2 = min_Ã Tr[Ã^T B Ã Σ + 2 Ã^T S Σ].
Clearly, J*_TOLQG,1 = J*_TOLQG,2, and consequently A* = A**. Hence,
J*_TOLQG,3 ≤ J*_TOLQG,1.
So far, we have studied the role of common randomness (information) in a team problem. In the next section, we describe the role of common randomness in a two-team zero-sum game.
III. PRIVATE AND COMMON RANDOMNESS IN STATIC TEAM VS TEAM ZERO-SUM GAME
We study the role of private and common randomness in a static two-team zero-sum game. We compare the static LQG team with the zero-sum LQG team game under private and common randomness, and then we examine a discrete two-team zero-sum game.
Now consider the case where there are N + M DMs. Let M = {N + 1, . . . , N + M}. DM_i, i ∈ N, comprise a single team, say Team 1, and DM_j, j ∈ M, comprise Team 2. Team 1 and Team 2 play a zero-sum game. Let u = (u_1, . . . , u_N), γ = (γ_1, . . . , γ_N) denote the actions and strategies of the players of Team 1, and v = (v_{N+1}, . . . , v_{N+M}), δ = (δ_{N+1}, . . . , δ_{N+M}) those of the players of Team 2. Suppose the objective the teams want to optimize is
min_{u_i=γ_i(y_i), i∈N}  max_{v_j=δ_j(y_j), j∈M}  E[κ(u, v, ξ)].
Theorem 3.1: If the zero-sum team game admits a saddle point, randomness independent of
ξ does not benefit either team.
Proof: We have
min_{u_i=γ_i(y_i), i∈N} max_{v_j=δ_j(y_j), j∈M} E[κ(u, v, ξ)] = max_{v_j=δ_j(y_j), j∈M} min_{u_i=γ_i(y_i), i∈N} E[κ(u, v, ξ)],
and
min_{u_i=γ_i(y_i), i∈N} max_{v_j=δ_j(y_j), j∈M} E[κ(u, v, ξ)]
    ≥ min_{u_i=γ_i(y_i,w), i∈N} max_{v_j=δ_j(y_j,z), j∈M} E[κ(u, v, ξ)]
    ≥ max_{v_j=δ_j(y_j,z), j∈M} min_{u_i=γ_i(y_i,w), i∈N} E[κ(u, v, ξ)]    (11)
    ≥ max_{v_j=δ_j(y_j), j∈M} min_{u_i=γ_i(y_i), i∈N} E[κ(u, v, ξ)].
Eq. (11) follows from
max_{v_j=δ_j(y_j,z), j∈M} E[κ(u, v, ξ)] ≥ min_{u_i=γ_i(y_i,w), i∈N} E[κ(u, v, ξ)];
consequently,
min_{u_i=γ_i(y_i,w), i∈N} max_{v_j=δ_j(y_j,z), j∈M} E[κ(u, v, ξ)] ≥ max_{v_j=δ_j(y_j,z), j∈M} min_{u_i=γ_i(y_i,w), i∈N} E[κ(u, v, ξ)].
Since the leftmost and rightmost quantities in the chain are equal by the saddle-point assumption, all the inequalities hold with equality, and the information structures with randomness independent of ξ give the same value as the original one.
Theorem 3.2: If the zero-sum game admits a saddle point and common randomness dependent on ξ is provided to one of the teams, then that team benefits. Suppose the consultant provides common randomness z, which is dependent on ξ, to the decision makers of a team, say Team 2. Then we want to optimize
J_TOZS,CR = min_{u_i=γ_i(y_i), i∈N} max_{v_j=δ_j(y_j,z), j∈M} E[κ(u, v, ξ)].
Further, J_TOZS,CR = J_TOZS, where
J_TOZS = min_{u_i=γ_i(y_i), i∈N} max_{v_j=δ_j(y_j), j∈M} E[κ(u, v, ξ)].
Proof: We know from the team decision problem with common randomness dependent on ξ that
max_{v_j=δ_j(y_j,z), j∈M} E[κ(u, v, ξ)] ≥ max_{v_j=δ_j(y_j), j∈M} E[κ(u, v, ξ)],
and hence
min_{u_i=γ_i(y_i), i∈N} max_{v_j=δ_j(y_j,z), j∈M} E[κ(u, v, ξ)] ≥ min_{u_i=γ_i(y_i), i∈N} max_{v_j=δ_j(y_j), j∈M} E[κ(u, v, ξ)].
Since we assume a saddle-point solution of the zero-sum game,
min_{u_i=γ_i(y_i), i∈N} max_{v_j=δ_j(y_j,z), j∈M} E[κ(u, v, ξ)] = max_{v_j=δ_j(y_j,z), j∈M} min_{u_i=γ_i(y_i), i∈N} E[κ(u, v, ξ)].
We also have
max_{v_j=δ_j(y_j,z), j∈M} min_{u_i=γ_i(y_i), i∈N} E[κ(u, v, ξ)] ≥ max_{v_j=δ_j(y_j), j∈M} min_{u_i=γ_i(y_i), i∈N} E[κ(u, v, ξ)].
If the two-team zero-sum game without common randomness admits a saddle point, then
max_{v_j=δ_j(y_j), j∈M} min_{u_i=γ_i(y_i), i∈N} E[κ(u, v, ξ)] = min_{u_i=γ_i(y_i), i∈N} max_{v_j=δ_j(y_j), j∈M} E[κ(u, v, ξ)].
Hence the result J_TOZS,CR = J_TOZS follows.
Remark:
• If the common or private information is uncorrelated with the environment (the uncertainty of the world), neither team can gain anything from this information in the team-vs-team zero-sum game. This is also illustrated numerically for the LQG zero-sum team-vs-team game in Appendix C2.
• In the next subsection, we describe how a team having private information correlated with the environment benefits. This implies that the team with more information manages to decrease the cost; this also holds in the LQG team decision problem. This was first observed in [12] and later extended to LQG team problems in [6].
• We present these results for our stochastic team-vs-team zero-sum game and illustrate the role of common randomness in the team-vs-team LQG zero-sum game by numerical examples in Appendix C3.  □
A. Role of private randomness dependent on ξ
Let y_i = η_i(ξ) be the information available to player i, let ỹ_1 = (y_1, y_2, · · · , y_N) be the information available to Team 1, and let ỹ_2 = (y_{N+1}, y_{N+2}, · · · , y_{N+M}) be the information available to Team 2. Note that Team 1 is minimizing using control u and Team 2 is maximizing with control v. Define the cost function
J(u, v) = E[κ(u, v, ỹ_1, ỹ_2, ξ)].
From the saddle-point condition at the information structure (ỹ_1, ỹ_2), we have
J(u*, v) ≤ J(u*, v*) ≤ J(u, v*).
The optimal decision pair at the information structure (ỹ_1, ỹ_2) is (u*, v*). Similarly, one can define the saddle-point condition for the null information structure, in which the teams have only prior knowledge about ξ; the information structure is then denoted (y_1, y_2) and the optimal decision pair is (u^0, v^0).
The value of information for Team 1 and Team 2 is defined as follows:
V_1(ỹ_1, ỹ_2) = J(u*, v*) − J(u^0, v^0),
V_2(ỹ_1, ỹ_2) = −V_1(ỹ_1, ỹ_2).
Suppose the information of one team, say Team 2, is fixed, i.e., η′_i(ξ) = η_i(ξ) for i = N+1, · · · , N+M, while the opponent, Team 1, gets more information, i.e., η′_i(ξ) ⊆ η_i(ξ) for i = 1, 2, · · · , N. Thus the decision set for Team 1 satisfies A_{η′} ⊆ A_η and that for Team 2 satisfies C_{η′} = C_η. We have the following result.
Lemma 3.1: If the information of Team 1 is increasing, i.e., η′_i(ξ) ⊆ η_i(ξ) for i = 1, 2, · · · , N, and the information of Team 2 is fixed, i.e., η′_i(ξ) = η_i(ξ) for i = N+1, · · · , N+M, then the values of information satisfy the following inequality:
V_1(ỹ_1, ỹ_2) ≤ V_1(ŷ_1, ŷ_2).
Here y_i = η_i(ξ), ỹ_1 = (y_1, · · · , y_N), ỹ_2 = (y_{N+1}, · · · , y_{N+M}), and y′_i = η′_i(ξ), ŷ_1 = (y′_1, · · · , y′_N), ŷ_2 = (y′_{N+1}, · · · , y′_{N+M}).
The proof is analogous to [6, Lemma 3.3]; for clarity we provide the details. The saddle-point condition at the information structure η(ξ) implies that
J(u*, v) ≤ J(u*, v*) ≤ J(u, v*)
for all u ∈ A_η, v ∈ C_η. The saddle-point condition at the information structure η′(ξ) is
J(û*, v̂) ≤ J(û*, v̂*) ≤ J(û, v̂*)
for all û ∈ A_{η′} and v̂ ∈ C_{η′}.
Since C_η = C_{η′}, we have v* ∈ C_{η′}, and the second saddle-point condition gives J(û*, v*) ≤ J(û*, v̂*). Because A_{η′} ⊆ A_η, û* ∈ A_{η′} implies û* ∈ A_η; further, the first saddle-point condition gives J(u*, v*) ≤ J(û*, v*).
Thus we get J(û*, v̂*) ≥ J(u*, v*). Noting that J(u^0, v^0) does not change, subtracting J(u^0, v^0) yields the desired inequality
V_1(ỹ_1, ỹ_2) ≤ V_1(ŷ_1, ŷ_2).
B. Discrete team vs team zero-sum game
In this section, we investigate the discrete team-vs-team zero-sum game and the role of extra randomness for the teams and their decision makers.
Claim 3.3: In a discrete team-vs-team zero-sum game,
1) the game may not admit a pure-strategy saddle-point solution;
2) if a coordinator provides private randomness independent of the environment to the decision makers of a team, then it can benefit both teams and improve the team cost, although it may not achieve a Nash equilibrium;
3) if a consultant provides common randomness to the decision makers of a team, then it can lead to an improvement in the team cost, although it may not yield a Nash equilibrium.
Proofs of these claims are difficult to obtain, but we provide examples in Appendix B to support them.
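As a hypothetical illustration of item 1) of Claim 3.3 (the examples in Appendix B are the paper's own support), the sketch below checks a 2×2 zero-sum cost matrix for a pure-strategy saddle point and computes the mixed-strategy value by linear programming; the payoff entries are assumptions.

# Assumed 2x2 zero-sum game: no pure saddle point; independent randomization helps the minimizer.
import numpy as np
from scipy.optimize import linprog

K = np.array([[0.0, 1.0],       # K[u, v]: cost paid by the minimizing team (Team 1)
              [1.0, 0.0]])      # to the maximizing team (Team 2)

pure_minmax = K.max(axis=1).min()   # best guaranteed cost with pure strategies
pure_maxmin = K.min(axis=0).max()
print("pure min-max =", pure_minmax, " pure max-min =", pure_maxmin)   # 1.0 vs 0.0: no saddle point

# Mixed-strategy value: minimize v s.t. sum_u x_u K[u, v'] <= v for all v', x a probability vector
# (Team 1 randomizes independently of the environment).
m, n = K.shape
c = np.r_[np.zeros(m), 1.0]
A_ub = np.c_[K.T, -np.ones(n)]
b_ub = np.zeros(n)
A_eq = np.r_[np.ones(m), 0.0].reshape(1, -1)
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
              bounds=[(0, None)] * m + [(None, None)])
print("mixed value =", res.x[-1], "with mixture", res.x[:m])           # value 0.5 < 1.0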
IV. DISCUSSION AND CONCLUSIONS
The value of information is a classic problem in decision theory. As information increases, we anticipate that the optimal cost decreases; this was first illustrated for statistical problems in [8]. In stochastic team problems and stochastic team-vs-team zero-sum games, however, the value of private information to decision makers has not been explicitly presented in the earlier literature.
We analyzed a stochastic team decision problem in which decision makers are provided with external private randomness that is either correlated with or independent of the environment. Private randomness independent of the environment does not decrease the cost, but randomness dependent on the environment provided to the DMs in a team decreases the team cost compared with no randomness. For a stochastic LQG team decision problem under a special information structure, we showed that the correlated randomness decreases the cost function.
We next studied the stochastic team-vs-team zero-sum game and showed that randomness independent of the environment does not benefit either team if the game admits a saddle-point solution. For the LQG team-vs-team zero-sum game, we analyzed the role of common randomness correlated with the environment provided to one of the teams; the optimal value function then decreases with information. We further extended this study to a discrete team-vs-team zero-sum game without a saddle point and observed that common or private randomness independent of the environment can benefit both teams; even common randomness dependent on the environment can benefit a team and improve its cost, although this may not lead to a saddle-point condition.
This work opens future research directions on the role of private or common randomness in stochastic teams with nonzero-sum games and in sequential stochastic dynamic teams. Another direction is correlated equilibrium behavior and common knowledge in sequential stochastic team-vs-team games.
V. ACKNOWLEDGMENT
Most of this work was carried out at the Bharti Centre for Communications and the EE Dept., IIT Bombay, where the author was a PhD scholar. Part of this work was done at the EE Dept., IIT Madras, during a postdoctoral fellowship. The author is very grateful to Prof. Ankur Kulkarni, SC Dept., IIT Bombay, for guidance and extensive discussions on team decision theory and for pointing out references. The author thanks Prof. D. Manjunath, EE Dept., IIT Bombay, for initial support of this work, and the ECE Dept., IIIT Allahabad, for financial support.
REFERENCES
[1] J. Marschak, “Elements for a theory of teams,” Management Science, vol. 1, pp. 127–337, 1955.
[2] R. Radner, “Team decision problems,” Ann. Math. Statist., vol. 33, no. 3, pp. 857–881, 1962.
[3] Y. C. Ho, M. P. Kastner, and E. Wong, “Teams, signaling, and information theory,” Transaction on Automatic Control,
vol. 23, no. 2, pp. 305–311, 2010.
[4] Y. C. Ho, “Team decision theory and information structure,” Proceedings of IEEE, vol. 68, no. 6, pp. 644–654, June 1980.
[5] F. C. Schoute, “Symmetric team problems and multi-access wire communication,” Automatica, vol. 14, no. 3, pp. 255–269,
May 1978.
[6] Y. C. Ho and F. K. Sun, “Value of information in two-team zero-sum problem,” Journal of Optimization Theory and
Applicatiion, vol. 14, no. 5, 1974.
[7] V. Anantharam and V. Borkar, “Common randomness and distributed control: A counterexample,” System and Control
Letters, vol. 56, no. 7, pp. 568–572, July 2007.
[8] D. Blackwell, “Comparisons of experiments,” Berkeley Symposium on Mathematical Statistics and Probability, vol. 2, pp.
93–102, 1951.
[9] D. Blackwell, “Equivalent comparisons of experiments,” The Annals of Mathematical Statistics, vol. 24, no. 2, pp. 265–272,
1953.
[10] J. Marschak and K. Miyasawa, “Economic comparability of informative systems,” International Economic Review, vol.
9, no. 2, pp. 137–174, June 1968.
[11] Y. C. Ho and I. Blau, “A simple example on informativeness and performance,” Journal of Optimization Theory and
Applicatiions, vol. 11, no. 4, 1973.
[12] H. S. Witsenhausen, “On the relation between the values of a game and its information structure,” Information and
Control, vol. 19, no. 3, pp. 204–215, Oct. 1971.
[13] T. Basar and Y. C. Ho, “Informational properties of Nash solutions of two stochastic non-zero sum games,” Journal of
Economic Theory, vol. 7, no. 4, pp. 370–387, April 1974.
[14] S. Yuksel and T. Basar,
Stochastic Networked Control Systems: Stabilization and Optimization under Information
Constraints, Birkhauser, 2013.
[15] A. Gupta, S. Yuksel, T. Bas¸ar, and C. Langbort, “On the existence of optimal policies for a class of static and sequential
dynamic teams,” SIAM Journal Control and Optimization, vol. 53, no. 3, pp. 1681–1712, 2015.
[16] N. Saldi, “A topology for team policies and existence of optimal team policies in stochastic team theory,” IEEE Transactions
on Automatic Control, vol. 65, no. 1, pp. 310–317, 2019.
[17] S. Yuksel and N. Saldi, “Convex analysis in decentralized stochastic control, strategic measures, and optimal solutions,”
SIAM Journal in Control and Optimization, vol. 55, no. 1, pp. 1–27, 2017.
[18] A. Gupta, A. Nayyar, C. Langbort, and T. Bas¸ar, “Common information based markov perfect equilibria for linear-gaussian
games with asymmetric information,” SIAM Journal in Control and Optimization, vol. 52, no. 5, pp. 3228–3260, 2014.
[19] A. Gupta, “Existence of team-optimal solution in static teams with common information: A topology of information
approach,” SIAM Journal in Control and Optimization, vol. 58, no. 2, pp. 998–1021, 2020.
[20]
I. Hogeboom-Burr and S. Yuksel, “Comparison of information structure for zero-sum games and a partial converse to
Blackwell ordering in standard Borel spaces,” Arxiv, pp. 1–23, 2020.
APPENDIX
A. LQG Team Problem
Now we examine an example of an LQG team problem.
Consider an LQG team problem with N decision makers. Let the environment ξ := [µ_1, . . . , µ_N]^T be a random vector, Gaussian distributed with zero mean and covariance Σ. Let y_i = η_i(ξ) be the information observed by DM_i, and y = [y_1, . . . , y_N]^T the information vector observed by the decision makers. In a static LQG team problem the optimal action is linear in the information observed by the decision maker; thus the action of DM_i is u_i = γ_i(y_i) = α_{i1} y_i. Then
u = (u_1, . . . , u_N)^T = A y,
where A is a diagonal matrix of dimension N × N with diag(A) = [α_11, . . . , α_N1]. The standard LQG problem assumes a quadratic cost; here the cost function is κ(u, ξ) := u^T B u + 2 u^T S ξ, where B is a symmetric positive definite matrix.
The team optimal solution of the LQG team problem is γ ∈ Γ such that
J*_TOLQG := min_{γ∈Γ} J(γ) = min_{u∈U} E_ξ[κ(u, ξ)] = min_{u∈U} E_ξ[u^T B u + 2 u^T S ξ].    (16)
Substituting u = A y, we obtain
J*_TOLQG = min_A E_ξ[y^T A^T B A y + 2 y^T A^T S ξ],
which can be expressed as the deterministic optimization problem
J*_TOLQG = min_A Tr[A^T B A Σ + 2 A^T S Σ],
where Tr denotes the trace of a matrix.
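A minimal numerical sketch of this deterministic trace formulation is given below; the matrices B, S and Σ are illustrative assumptions, and the optimization is over the diagonal entries of A.

# Sketch: team-optimal cost by minimizing the trace expression over diag(A).
import numpy as np
from scipy.optimize import minimize

B = np.array([[2.0, -1.0], [-1.0, 1.0]])      # assumed symmetric positive definite cost matrix
S = np.eye(2)
Sigma = np.array([[1.0, 0.25], [0.25, 1.0]])  # assumed covariance of y

def team_cost(alpha):
    A = np.diag(alpha)                        # decision rule u = A y with diagonal A
    return np.trace(A.T @ B @ A @ Sigma + 2 * A.T @ S @ Sigma)

res = minimize(team_cost, x0=np.zeros(2))
print("J*_TOLQG ≈", res.fun, "with diag(A*) =", res.x)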
1) Private randomness independent of ξ: We show that in the LQG team problem the private randomness provided by a coordinator does not benefit the team optimal cost.
Let ω = [ω_1, . . . , ω_N]^T be the private randomness available to the decision makers; it is Gaussian distributed with zero mean and a diagonal covariance matrix Σ_1, and ω_i is the private randomness available at DM_i. We suppose that ω_i is independent of ω_j for i ≠ j and is also independent of y (E[ω_i ω_j] = 0 for i ≠ j and E[ω_i y_k] = 0 for i ≠ k, 1 ≤ i, j, k ≤ N).
The action is u_i = γ_i(y_i, ω_i) = α_{i1} y_i + α_{i2} ω_i. Let u = A y + C ω, where A and C are diagonal matrices of dimension N × N, diag(A) = [α_11, . . . , α_N1] and diag(C) = [α_12, . . . , α_N2].
The optimal expected cost of the LQG team problem with private randomness is
J*_TOP,LQG := min_{Q∈Q} E_Q[κ(u, ξ)] = min_A Tr[A^T B A Σ + 2 A^T S Σ] + min_C Tr[C^T B C Σ_1].    (17)
From equation (17), min_C Tr[C^T B C Σ_1] = 0 if and only if C is the zero matrix. Hence J*_TOLQG = J*_TOP,LQG.
2) Common randomness independent of ξ: We study an LQG team problem with common randomness whose structure is similar to that of the LQG team problem with private randomness. We demonstrate that if the common randomness provided to the decision makers by the consultant is independent of ξ, then it does not improve the expected cost.
Let ω = [ω_1, . . . , ω_N]^T be the common randomness available to the decision makers; it is Gaussian distributed with zero mean and covariance matrix Σ_2, and ω_i is the common randomness at DM_i. We suppose that ω_i is perfectly correlated with ω_j for i ≠ j and is independent of y (E[ω_i ω_j] ≠ 0 for i ≠ j and E[ω_i y_k] = 0 for i ≠ k, 1 ≤ i, j, k ≤ N). The action is u_i = γ_i(y_i, ω_i) = α_{i1} y_i + α_{i2} ω_i. Let u = A y + C ω. The optimal expected cost is
J*_TOC,LQG = min_A Tr[A^T B A Σ + 2 A^T S Σ] + min_C Tr[C^T B C Σ_2].    (18)
Note that in the LQG team problem B is a symmetric positive definite matrix. From (18), the expression min_C Tr[C^T B C Σ_2] attains its minimum value of 0 only if C is the zero matrix. Thus we have J*_TOLQG = J*_TOC,LQG.
3) Common randomness dependent on ξ: Next, we demonstrate the result in (9) via an example of an LQG team problem. Further, we show numerically for a two-decision-maker LQG team problem that there is a strict inequality between the team optimal cost with and without extra randomness, that is, J*_TOLQG > J*_TOER,LQG.
Consider an LQG team problem consisting of an environment ξ = [µ_1, . . . , µ_N]^T, a random vector with mean zero and covariance matrix Σ. The information observed by DM_i is y_i = η_i(ξ) = µ_i, and y = [y_1, . . . , y_N]^T. Let ω = [ω_1, . . . , ω_N]^T be the extra randomness provided by a consultant to the decision makers. Furthermore, assume that ω = f(ξ) and f is a linear function of ξ. Thus ω_i = ∑_{j=1}^{N} φ_{ij} µ_j and ω = Φ ξ = Φ y, where Φ is a matrix of dimension N × N with entries φ_{ij} ≥ 0 and ∑_{j=1}^{N} φ_{ij} = 1. The cost function is κ(u, ξ) := u^T B u + 2 u^T S ξ, and the optimal expected cost under extra randomness is
J*_TOER = min_{u∈U} E_ξ[u^T B u + 2 u^T S ξ].
Since this is a static LQG team problem, the optimal decision rule is linear in the observation variables. We assume that u_i = α_{i1} y_i + α_{i2} ω_i, i.e., u = A y + C ω, where A and C are diagonal matrices, diag(A) = [α_11, . . . , α_N1], diag(C) = [α_12, . . . , α_N2]. The optimal expected cost is
J*_TOER,LQG = min_{A,C} E_ξ[y^T A^T B A y + 2 y^T A^T S ξ + 2 y^T Φ^T C^T B A y + y^T Φ^T C^T B C Φ y + 2 y^T Φ^T C^T S ξ].
Since ξ ∼ N(0, Σ), taking the expectation and rewriting the above expression, we obtain the deterministic optimization problem
J*_TOER,LQG = min_{A,C} Tr[A^T B A Σ + 2 A^T S Σ + 2 Φ^T C^T B A Σ + Φ^T C^T B C Φ Σ + 2 Φ^T C^T S Σ].    (19)
Intuitively, the LQG team problem with no extra randomness can be described as an incomplete-information static LQG team problem. Since the extra randomness is a linear function of the environment, under the assumption of nonzero linear coefficients (φij ≠ 0 for all 1 ≤ i, j ≤ N) the LQG team problem with extra randomness can be described as a complete-information static LQG team problem. Thus it is natural to expect that J*_TOER < J*_TO. Showing this result analytically is difficult, however, because the optimization problem (19) does not separate into problems over A and over C.
To support our claim that J*_TOER < J*_TO, we numerically evaluate the optimal cost functional with and without extra randomness dependent on ξ for a two-decision-maker LQG team problem and show that the claim indeed holds. Further, we show the impact of the correlation coefficients {φij, 1 ≤ i, j ≤ 2} on the optimal cost functional.
4) Numerical example (LQG team problem): Let ξ = [µ1, µ2]^T denote the state of nature (environment) with distribution N(0, Σ). Let yi = ηi(ξ) = µi be the information observed at DMi, 1 ≤ i ≤ 2. Let ω = [ω1, ω2]^T be the extra randomness provided by a consultant to the decision makers. Consider ωi = φi1 y1 + φi2 y2 and ui = αi1 yi + αi2 ωi for 1 ≤ i ≤ 2. Thus we have
A = [[α11, 0], [0, α21]],   C = [[α12, 0], [0, α22]].
The team-optimal cost from (19) is
J*_TOER = min_{A,C} Tr[ A^T B A Σ + 2 A^T S Σ + 2 Φ^T C^T B A Σ + Φ^T C^T B C Φ Σ + 2 Φ^T C^T S Σ ].
In this example, we suppose
B = [[2, −1], [−1, 1]],   S = [[1, 0], [0, 1]],   Σ = [[σ²_µ1, σ²_µ1µ2], [σ²_µ1µ2, σ²_µ2]].
We define
δ1 = E[y1 ω1] = φ11 σ²_µ1 + φ12 σ²_µ1µ2,
δ2 = E[y1 ω2] = φ21 σ²_µ1 + φ22 σ²_µ1µ2,
δ3 = E[y2 ω1] = φ11 σ²_µ1µ2 + φ12 σ²_µ2,
δ4 = E[y2 ω2] = φ21 σ²_µ1µ2 + φ22 σ²_µ2,
δ5 = E[ω1²] = φ11² σ²_µ1 + φ12² σ²_µ2 + 2 φ11 φ12 σ²_µ1µ2,
δ6 = E[ω2²] = φ21² σ²_µ1 + φ22² σ²_µ2 + 2 φ21 φ22 σ²_µ1µ2,
δ7 = E[ω1 ω2] = φ11 φ21 σ²_µ1 + (φ11 φ22 + φ12 φ21) σ²_µ1µ2 + φ12 φ22 σ²_µ2,
δ8 = E[ω1 ξ1] = φ11 σ²_µ1 + φ12 σ²_µ1µ2,
δ9 = E[ω2 ξ2] = φ21 σ²_µ1µ2 + φ22 σ²_µ2.
Rewriting the team-optimal cost function, we obtain
J*_TOER = min_{α11, α12, α21, α22} [ 2 α11² σ²_y1 − 2 α11 α21 σ²_y1,y2 + α21² σ²_y2 + 2 α11 α12 δ1 − α21 α12 δ2 − α11 α22 δ3 + α22 α21 δ4 + 2 α12² δ5 − 2 α12 α22 δ7 + α22² δ6 + 2(α11 σ²_y1 + α21 σ²_y2) + 2(α12 δ8 + α22 δ9) ].
Differentiating the above expression with respect to α11, α21, α12, α22 and equating to zero, we obtain the linear system

[ 4σ²_y1       −2σ²_y1,y2    2δ1     −δ3  ] [α11]   [ −2σ²_y1 ]
[ −2σ²_y1,y2    2σ²_y2       −δ2      δ4  ] [α21] = [ −2σ²_y2 ]
[ 2δ1           −δ2          4δ5    −2δ7  ] [α12]   [ −2δ8    ]
[ −δ3            δ4         −2δ7     2δ6  ] [α22]   [ −2δ9    ]
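A short sketch of how such numbers can be regenerated, based on our reading of the (partly garbled) system above: it assembles the stationarity system for a given φ, solves it by least squares (so the degenerate φ = 0 case is handled), and evaluates the cost expression. The σ values are the ones fixed in the text below; all of this is an illustration rather than the authors' own code.

```python
# Sketch: solve the stationarity system for the two-DM example and evaluate
# the team cost.  sigma values follow the text (unit variances, cross 1/4).
import numpy as np

s1 = s2 = 1.0      # sigma^2_mu1, sigma^2_mu2
s12 = 0.25         # sigma^2_mu1mu2

def deltas(phi):
    (p11, p12), (p21, p22) = phi
    d1 = p11 * s1 + p12 * s12
    d2 = p21 * s1 + p22 * s12
    d3 = p11 * s12 + p12 * s2
    d4 = p21 * s12 + p22 * s2
    d5 = p11**2 * s1 + p12**2 * s2 + 2 * p11 * p12 * s12
    d6 = p21**2 * s1 + p22**2 * s2 + 2 * p21 * p22 * s12
    d7 = p11 * p21 * s1 + (p11 * p22 + p12 * p21) * s12 + p12 * p22 * s2
    return d1, d2, d3, d4, d5, d6, d7, d1, d4          # d8 = d1, d9 = d4

def team_cost(phi):
    d1, d2, d3, d4, d5, d6, d7, d8, d9 = deltas(phi)
    M = np.array([[4*s1, -2*s12, 2*d1, -d3],
                  [-2*s12, 2*s2, -d2, d4],
                  [2*d1, -d2, 4*d5, -2*d7],
                  [-d3, d4, -2*d7, 2*d6]])
    rhs = np.array([-2*s1, -2*s2, -2*d8, -2*d9])
    a11, a21, a12, a22 = np.linalg.lstsq(M, rhs, rcond=None)[0]  # handles phi = 0
    return (2*a11**2*s1 - 2*a11*a21*s12 + a21**2*s2
            + 2*a11*a12*d1 - a21*a12*d2 - a11*a22*d3 + a22*a21*d4
            + 2*a12**2*d5 - 2*a12*a22*d7 + a22**2*d6
            + 2*(a11*s1 + a21*s2) + 2*(a12*d8 + a22*d9))

print(team_cost(((0.0, 0.0), (0.0, 0.0))))        # no extra randomness: about -1.806
print(team_cost(((0.5, 0.5), (0.5, 0.5))))        # both DMs receive averaged observations
```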
Notice that computing the optimal α11, α12, α21, α22 by solving this linear system and then evaluating the optimal expected cost is computationally tedious. Without loss of generality, we suppose σ²_µ1 = σ²_µ2 = 1 and σ²_µ1µ2 = 1/4. Furthermore, we fix φ11, φ12, φ21, φ22 and evaluate the minimum team cost under the optimal α*11, α*12, α*21, α*22. Note that φ11, φ12, φ21, φ22 determine the correlation of the extra randomness with the observations available at the decision makers. From the numerical computation in Table I, we make the following concluding remarks.
1) In the distributed static LQG team problem without extra randomness, the team-optimal cost is highest.
2) In the distributed static LQG team problem, providing extra randomness correlated with ξ to only one decision maker does not improve the team-optimal cost; instead, it increases it.
3) In the distributed static LQG team problem, providing all decision makers with extra randomness correlated with ξ improves the team-optimal cost. Thus we have the strict inequality J*_TOER < J*_TO.
4) If the extra randomness provided by the consultant is the average of the observations µ1 and µ2, the team-optimal cost is better than under any other convex combination of the observations. Hence the correlation coefficients φij, 1 ≤ i, j ≤ 2, play a significant role in attaining the minimal team-optimal cost in the distributed static LQG team problem with extra randomness dependent on ξ.
Configuration                 (φ11, φ12, φ21, φ22)    (α*11, α*12, α*21, α*22)               min_α E[κ(α, ξ)]
No randomization              (0, 0, 0, 0)            (−0.6452, −1.1613, 0, 0)               −1.806
DM1 has randomness            (1/4, 3/4, 0, 0)        (0, −1, −0.3024, 2.7513)               −0.477
Both DMs have randomness      (1/2, 1/2, 1/2, 1/2)    (−0.3434, −0.7046, −2.7862, −4.0062)   −5.2974
Both DMs have randomness      (2/3, 1/3, 1/4, 3/4)    (−0.5122, −1.4833, −2.6067, −3.2171)   −4.5211
Both DMs have randomness      (1/3, 2/3, 3/4, 1/4)    (−0.7045, −0.7058, −0.6765, −1.522)    −3.6923
TABLE I
COMPARISON OF EXPECTED COST WITH DIFFERENT RANDOMIZATION PROVIDED TO DM
B. Proof of Lemma 3.3
We prove our claim by constructing an example of a discrete two-team game.
Consider two teams, labeled Team 1 and Team 2; Team 1 consists of one decision maker and Team 2 comprises two decision makers. Let ξ = [µ1, s1, s2]^T denote the environment (state of nature); it is a random vector with discrete distribution p(ξ). Each decision maker observes the environment only partially, since the decision makers are situated in a distributed manner.
Let y1 = η(ξ) denote the observation available to the decision maker of Team 1, and let zj = ζj(ξ) represent the observation available to DMj of Team 2. The decision rules of Teams 1 and 2 are
γ1 : y1 → u1   and   δj : zj → vj,   j = 1, 2.
Without loss of generality, we assume that µ1 and sj are binary random variables taking values in {0, 1}, with y1 = η(ξ) = µ1 and zj = ζj(ξ) = sj for j = 1, 2. Moreover, we consider u1, vj ∈ {L, R}, j = 1, 2. The binary random variables µ1, s1 and s2 are defined as follows.
µ1 = 1 with prob. p1,        µ1 = 0 with prob. 1 − p1;
s1 = µ1 with prob. p,        s1 = 0 with prob. 1 − p;
s2 = 1 − µ1 with prob. q,    s2 = s1 with prob. 1 − q.
The joint distribution P(µ1, s1, s2) of (µ1, s1, s2) is
P(µ1 = 0, s1 = 0, s2 = 0) = (1 − p1)(1 − q),    P(µ1 = 0, s1 = 0, s2 = 1) = (1 − p1)q,
P(µ1 = 0, s1 = 1, s2 = 0) = 0,                  P(µ1 = 0, s1 = 1, s2 = 1) = 0,
P(µ1 = 1, s1 = 0, s2 = 0) = p1(1 − p),          P(µ1 = 1, s1 = 0, s2 = 1) = 0,
P(µ1 = 1, s1 = 1, s2 = 0) = p1 p q,             P(µ1 = 1, s1 = 1, s2 = 1) = p1 p (1 − q).
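As a quick sanity check, the following sketch (our own illustration; the parameter values in the example call are arbitrary) builds this joint distribution programmatically and verifies that it sums to one.

```python
# Sketch: joint pmf of (mu1, s1, s2) as defined above.
from itertools import product

def joint(p1, p, q):
    P = {}
    for m, a, b in product((0, 1), repeat=3):
        p_m = p1 if m == 1 else 1 - p1
        # s1 equals mu1 with prob. p and 0 with prob. 1 - p
        p_a = (p if a == m else 0.0) + ((1 - p) if a == 0 else 0.0)
        # s2 equals 1 - mu1 with prob. q and s1 with prob. 1 - q
        p_b = (q if b == 1 - m else 0.0) + ((1 - q) if b == a else 0.0)
        P[(m, a, b)] = p_m * p_a * p_b
    return P

P = joint(p1=0.25, p=1/3, q=2/3)
assert abs(sum(P.values()) - 1.0) < 1e-12
print(P[(1, 1, 0)])   # equals p1*p*q
```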
There are four possible decision rules available to each decision maker. The decision rules of the Team 1 decision maker are
u^1_1 = γ^1_1(µ1) = L if µ1 = 0, R if µ1 = 1;
u^2_1 = γ^2_1(µ1) = L if µ1 = 1, R if µ1 = 0;
u^3_1 = γ^3_1(µ1) = L (for µ1 = 0 or µ1 = 1);
u^4_1 = γ^4_1(µ1) = R (for µ1 = 0 or µ1 = 1).
The first two decision rules of DM1 of Team 2 are
v^1_1 = δ^1_1(s1) = L if s1 = 0, R if s1 = 1;
v^2_1 = δ^2_1(s1) = L if s1 = 1, R if s1 = 0.
        LL    LR    RL    RR
L       20     0     1    30
R       20     1     0    30
TABLE II
PAYOFF MATRIX: TEAM VS TEAM ZERO-SUM GAME
The remaining decision rules of DM1 of Team 2 and the decision rules of DM2 of Team 2 are
v^3_1 = δ^3_1(s1) = L (for s1 = 0 or s1 = 1);
v^4_1 = δ^4_1(s1) = R (for s1 = 0 or s1 = 1);
v^1_2 = δ^1_2(s2) = L if s2 = 0, R if s2 = 1;
v^2_2 = δ^2_2(s2) = L if s2 = 1, R if s2 = 0;
v^3_2 = δ^3_2(s2) = L (for s2 = 0 or s2 = 1);
v^4_2 = δ^4_2(s2) = R (for s2 = 0 or s2 = 1).
We next formulate the team-vs-team zero-sum game: Team 1 seeks to maximize the expected payoff, whereas Team 2 seeks to minimize it. The payoff matrix is described in Table II; its rows denote the actions of Team 1 and its columns denote the joint actions of Team 2, with the corresponding payoffs as entries. Since the observation available to each decision maker is a function of the state of nature ξ, and ξ is random, we evaluate the expected payoff for the different strategies of the decision makers:
E[κ(γ^l_1(µ1), δ^m_1(s1), δ^n_2(s2))] = Σ_{(µ1,s1,s2) ∈ {0,1}³} κ(γ^l_1(µ1), δ^m_1(s1), δ^n_2(s2)) P(µ1, s1, s2),
where 1 ≤ l, m, n ≤ 4. Enumerating the expected payoff over all possible strategies of the decision makers, we obtain
(grouping the expected payoffs E[κ(γ^l_1, δ^m_1, δ^n_2)] by Team 2's strategy pair (δ^m_1, δ^n_2)):
(δ^1_1, δ^1_2):  γ^1_1: 20 − 20q + 20p1q + 10p1p − 30p1pq;  γ^2_1: 40 − 40p1 − 19q + 19p1q − 29p1pq + 30p1p;  γ^3_1: 20 − 20q + 30p1p − 29p1pq;  γ^4_1: 20 − 19q + 19p1q + 10p1p − 30p1pq
(δ^2_1, δ^1_2):  γ^1_1: 1 − p1 + 29q − 29p1q + 19p1pq + p1p;  γ^2_1: 30 − 31p1q + p1 + 20p1pq;  γ^3_1: 1 + 29q − 30p1q + 20p1pq;  γ^4_1: 30q − 30p1q + 19p1pq + p1p
(δ^3_1, δ^1_2):  γ^1_1: 20 − 20q + 19p1pq + p1p;  γ^2_1: 20 − 19q − p1q + 20p1pq;  γ^3_1: 20 − 20q + 20p1pq;  γ^4_1: 20 − 19q − p1q + 19p1pq + p1p
(δ^4_1, δ^1_2):  γ^1_1: 1 − p1 + 29q − 29p1q + 30p1p − 30p1pq;  γ^2_1: 30q − 31p1q + p1 − 29p1pq + 30p1p;  γ^3_1: 1 + 29q − 29p1q + 29p1p − 29p1pq;  γ^4_1: 30q − 30p1q + 30p1p − 30p1pq
(δ^1_1, δ^2_2):  γ^1_1: 20q − 20p1q + p1 + 30p1pq;  γ^2_1: 1 − p1 + 19q − 19p1q + 29p1pq + p1p;  γ^3_1: 20q − 20p1q + 29p1pq + p1p;  γ^4_1: 1 + 19q − 19p1q − p1p + 30p1pq
(δ^2_1, δ^2_2):  γ^1_1: 30 − 29q + 29p1q − 10p1p − 19p1pq;  γ^2_1: 30 − 10p1 − 30q + 30p1q − 20p1pq;  γ^3_1: 30 − 29q + 29p1q − 10p1p − 20p1pq;  γ^4_1: 30 − 30q + 30p1q − 10p1p − 19p1pq
(δ^3_1, δ^2_2):  γ^1_1: 20q − 20p1q + p1 + 19p1p − 19p1pq;  γ^2_1: 1 − p1 + 19q − 19p1q + 20p1p − 20p1pq;  γ^3_1: 20q − 20p1q + 20p1pq;  γ^4_1: 1 + 19q − 19p1q + 19p1p − 19p1pq
(δ^4_1, δ^2_2):  γ^1_1: 30 − 29q + 29p1q − 29p1p + 29p1pq;  γ^2_1: 30 − 30q + 30p1q − 29p1p + 29p1pq;  γ^3_1: 30 − 29q + 29p1q − 29p1p + 29p1pq;  γ^4_1: 30 − 30q + 30p1q − 30p1p + 30p1pq
(δ^1_1, δ^3_2):  γ^1_1: 20 − 20p1p;  γ^2_1: 20 − 19p1p;  γ^3_1: 20 − 19p1p;  γ^4_1: 20 − 20p1p
(δ^2_1, δ^3_2):  γ^1_1: 1 − p1 + 20p1p;  γ^2_1: p1 + 19p1p;  γ^3_1: p1 + 19p1p;  γ^4_1: 20p1p
(δ^3_1, δ^3_2):  γ^1_1 = γ^2_1 = γ^3_1 = γ^4_1: 20
(δ^4_1, δ^3_2):  γ^1_1: 1 − p1;  γ^2_1: p1;  γ^3_1: 1;  γ^4_1: 0
(δ^1_1, δ^4_2):  γ^1_1: p1 + 29p1p;  γ^2_1: 1 − p1 + 20p1p;  γ^3_1: 30p1p;  γ^4_1: 1 + 29p1p
(δ^2_1, δ^4_2):  γ^1_1: 30p1 − 29p1p;  γ^2_1: 30 − 30p1p;  γ^3_1: 30 − 30p1p;  γ^4_1: 30 − 29p1p
(δ^3_1, δ^4_2):  γ^1_1: p1;  γ^2_1: 1 − p1;  γ^3_1: 0;  γ^4_1: 1
(δ^4_1, δ^4_2):  γ^1_1 = γ^2_1 = γ^3_1 = γ^4_1: 30
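The enumeration can also be carried out mechanically. The sketch below (our own illustration, reusing joint() from the earlier sketch) loops over all 4 × 4 × 4 strategy profiles; note that the payoff entries and decision rules encoded here are only one reading of Table II and of the rule definitions above.

```python
# Sketch: expected payoff of every pure strategy profile (gamma^l_1, delta^m_1, delta^n_2).
from itertools import product

PAYOFF = {('L', 'L', 'L'): 20, ('R', 'L', 'L'): 20,   # our reading of Table II
          ('L', 'L', 'R'): 0,  ('R', 'L', 'R'): 1,
          ('L', 'R', 'L'): 1,  ('R', 'R', 'L'): 0,
          ('L', 'R', 'R'): 30, ('R', 'R', 'R'): 30}

GAMMA = [lambda m: 'L' if m == 0 else 'R',   # gamma^1_1
         lambda m: 'L' if m == 1 else 'R',   # gamma^2_1
         lambda m: 'L',                      # gamma^3_1
         lambda m: 'R']                      # gamma^4_1
DELTA = [lambda s: 'L' if s == 0 else 'R',   # delta^1
         lambda s: 'L' if s == 1 else 'R',   # delta^2
         lambda s: 'L',                      # delta^3
         lambda s: 'R']                      # delta^4

def expected_payoff(P, g, d1, d2):
    return sum(prob * PAYOFF[(g(m), d1(a), d2(b))] for (m, a, b), prob in P.items())

P = joint(p1=0.25, p=1/3, q=2/3)             # joint() from the earlier sketch
for (m_idx, d1), (n_idx, d2) in product(enumerate(DELTA), repeat=2):
    row = [expected_payoff(P, g, d1, d2) for g in GAMMA]
    print(f"delta^{m_idx+1}_1, delta^{n_idx+1}_2:", [round(v, 2) for v in row])
```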
From the above expressions it is difficult to draw any conclusion about a saddle-point solution of the zero-sum game. We therefore take p1 = 1/4, p = 1/3 and q = 2/3; our claim may also hold for other ranges of p1, p, q. The resulting expected payoff matrix of the zero-sum game is given in Table III, in which the rows denote the strategies of Team 2, the columns denote the strategies of Team 1, and the entries are the corresponding expected payoffs. Team 2 wishes to minimize the expected payoff and Team 1 wishes to maximize it. The security level of Team 1 is
V(A) = max_j min_i a_ij = 0.25,
and the security level of Team 2 is
V̄(A) = min_i max_j a_ij = 1.
Since V̄(A) > V(A), the game does not admit a pure-strategy saddle-point solution.
                       γ^1_1(µ1)   γ^2_1(µ1)   γ^3_1(µ1)   γ^4_1(µ1)
δ^1_1(s1) δ^1_2(s2)      9.16       22.3        7.54        9.66
δ^2_1(s1) δ^1_2(s2)     16.39       26.18      16.45       16.13
δ^3_1(s1) δ^1_2(s2)      7.80        8.27       7.77        8.30
δ^4_1(s1) δ^1_2(s2)     16.08       15.72      16.30       15.83
δ^1_1(s1) δ^2_2(s2)     11.91       11.94      11.69       12.08
δ^2_1(s1) δ^2_2(s2)     13.61       11.38      13.55       13.11
δ^3_1(s1) δ^2_2(s2)     10.77       10.80      11.11       11.02
δ^4_1(s1) δ^2_2(s2)     14.69       14.19      14.69       14.16
δ^1_1(s1) δ^3_2(s2)     18.33       18.41      18.41       18.33
δ^2_1(s1) δ^3_2(s2)      2.41        1.83       1.83        1.66
δ^3_1(s1) δ^3_2(s2)     20          20         20          20
δ^4_1(s1) δ^3_2(s2)      0.75        0.25       1           0
δ^1_1(s1) δ^4_2(s2)      2.66        2.41       2.5         3.41
δ^2_1(s1) δ^4_2(s2)      5.08       27.5       27.5        27.58
δ^3_1(s1) δ^4_2(s2)      0.25        0.75       0           1
δ^4_1(s1) δ^4_2(s2)     30          30         30          30
TABLE III
TWO-TEAM ZERO-SUM GAME WITH EXPECTED PAYOFF MATRIX
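The two security levels can be read off Table III mechanically; the sketch below (our own illustration) encodes the table and computes them.

```python
# Sketch: security levels of the matrix game in Table III.
# Rows: Team 2's 16 pure strategy pairs; columns: Team 1's 4 strategies.
import numpy as np

A = np.array([
    [9.16, 22.30, 7.54, 9.66],    [16.39, 26.18, 16.45, 16.13],
    [7.80, 8.27, 7.77, 8.30],     [16.08, 15.72, 16.30, 15.83],
    [11.91, 11.94, 11.69, 12.08], [13.61, 11.38, 13.55, 13.11],
    [10.77, 10.80, 11.11, 11.02], [14.69, 14.19, 14.69, 14.16],
    [18.33, 18.41, 18.41, 18.33], [2.41, 1.83, 1.83, 1.66],
    [20.0, 20.0, 20.0, 20.0],     [0.75, 0.25, 1.0, 0.0],
    [2.66, 2.41, 2.50, 3.41],     [5.08, 27.50, 27.50, 27.58],
    [0.25, 0.75, 0.0, 1.0],       [30.0, 30.0, 30.0, 30.0]])

lower_value = A.min(axis=0).max()   # Team 1's security level: 0.25
upper_value = A.max(axis=1).min()   # Team 2's security level: 1.0
print(lower_value, upper_value)     # 0.25 < 1.0, so no pure-strategy saddle point
```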
1) Role of private randomness independent of ξ: We are interested in understanding the role of private randomness in the two-team zero-sum game. We assume a coordinator provides private randomness to the decision maker of one team, say the Team 1 decision maker, and that this private randomization is independent of ξ.
Suppose the Team 1 decision maker randomizes privately over its strategies and plays strategy γ^i_1(µ1) with probability ai, 1 ≤ i ≤ 4, where Σ_{i=1}^4 ai = 1. That is,
γ1(µ1) = γ^1_1(µ1) with prob. a1,  γ^2_1(µ1) with prob. a2,  γ^3_1(µ1) with prob. a3,  γ^4_1(µ1) with prob. a4.
Then the expected payoff is
E[κ(γ1(µ1), δ^j_1(s1), δ^k_2(s2))] = Σ_{i=1}^4 E[κ((γ1(µ1) = γ^i_1(µ1)), δ^j_1(s1), δ^k_2(s2))] ai,
δ^1_1(s1) δ^1_2(s2):   9.16a1 + 22.3a2 + 7.54a3 + 9.66a4
δ^2_1(s1) δ^1_2(s2):   16.39a1 + 26.18a2 + 16.45a3 + 16.13a4
δ^3_1(s1) δ^1_2(s2):   7.80a1 + 8.27a2 + 7.77a3 + 8.30a4
δ^4_1(s1) δ^1_2(s2):   16.08a1 + 15.72a2 + 16.30a3 + 15.83a4
δ^1_1(s1) δ^2_2(s2):   11.91a1 + 11.94a2 + 11.69a3 + 12.08a4
δ^2_1(s1) δ^2_2(s2):   13.61a1 + 11.38a2 + 13.55a3 + 13.11a4
δ^3_1(s1) δ^2_2(s2):   10.77a1 + 10.80a2 + 11.11a3 + 11.02a4
δ^4_1(s1) δ^2_2(s2):   14.69a1 + 14.19a2 + 14.69a3 + 14.16a4
δ^1_1(s1) δ^3_2(s2):   18.33a1 + 18.41a2 + 18.41a3 + 18.33a4
δ^2_1(s1) δ^3_2(s2):   2.41a1 + 1.83a2 + 1.83a3 + 1.66a4
δ^3_1(s1) δ^3_2(s2):   20
δ^4_1(s1) δ^3_2(s2):   0.75a1 + 0.25a2 + 1a3
δ^1_1(s1) δ^4_2(s2):   2.66a1 + 2.41a2 + 2.5a3 + 3.41a4
δ^2_1(s1) δ^4_2(s2):   5.08a1 + 27.5a2 + 27.5a3 + 27.58a4
δ^3_1(s1) δ^4_2(s2):   0.25a1 + 0.75a2 + 1a4
δ^4_1(s1) δ^4_2(s2):   30
TABLE IV
TWO-TEAM ZERO-SUM GAME EXPECTED PAYOFF WITH TEAM 1 HAVING PRIVATE RANDOMIZATION
for 1 ≤ j, k ≤ 4. The resulting expected payoffs are given in Table IV. From Table IV, notice that Team 2's best response will be (δ^4_1(s1), δ^3_2(s2)) or (δ^3_1(s1), δ^4_2(s2)), depending on the probability vector a = [a1, a2, a3, a4] of Team 1 (i.e., on its private randomization). Without loss of generality, we assume a3 = a4; then a1 and a2 determine the best response of Team 2. We demonstrate this as follows.
1) If a1 < a2, the best response of Team 2 is (δ^4_1(s1), δ^3_2(s2)) and the expected payoff is 0.75a1 + 0.25a2 + a3. Further assuming a2 = 2a1 and a3 = a4 = 1/12, we get a1 = 5/18 and the expected payoff is 0.43.
2) If a1 > a2, the best response of Team 2 is (δ^3_1(s1), δ^4_2(s2)) and the expected payoff is 0.25a1 + 0.75a2 + a4. Similarly, assuming a1 = 2a2 and a3 = a4 = 1/12, we get a2 = 5/18 and the expected payoff is 0.43.
3) If a1 = a2, the best response of Team 2 is (δ^4_1(s1), δ^3_2(s2)) or (δ^3_1(s1), δ^4_2(s2)) and the expected payoff is a1 + a3. Assuming a3 = a4 = 1/12, we get a1 = a2 = 5/12 and the expected payoff is 0.5.
δ^1_1(s1) δ^1_2(s2):   16.33
δ^2_1(s1) δ^1_2(s2):   21.81
δ^3_1(s1) δ^1_2(s2):    8.10
δ^4_1(s1) δ^1_2(s2):   15.87
δ^1_1(s1) δ^2_2(s2):   11.92
δ^2_1(s1) δ^2_2(s2):   12.32
δ^3_1(s1) δ^2_2(s2):   10.83
δ^4_1(s1) δ^2_2(s2):   14.36
δ^1_1(s1) δ^3_2(s2):   18.38
δ^2_1(s1) δ^3_2(s2):    1.97
δ^3_1(s1) δ^3_2(s2):   20
δ^4_1(s1) δ^3_2(s2):    0.43
δ^1_1(s1) δ^4_2(s2):    2.57
δ^2_1(s1) δ^4_2(s2):   21.27
δ^3_1(s1) δ^4_2(s2):    0.57
δ^4_1(s1) δ^4_2(s2):   30
TABLE V
TWO-TEAM ZERO-SUM GAME WITH TEAM 1 PRIVATE RANDOMIZATION OVER ITS STRATEGIES, a3 = a4 = 1/12, a1 = 5/18, a2 = 10/18
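Table V is simply the a-weighted combination of the rows of Table III. The sketch below (our own illustration, reusing the matrix A from the previous sketch) computes it together with Team 2's best response for the mixture discussed in the text.

```python
# Sketch: expected payoffs under Team 1's mixed strategy a (cf. Table V).
import numpy as np

a = np.array([5/18, 10/18, 1/12, 1/12])   # the mixture discussed in the text
mixed = A @ a                             # A is the Table III matrix from the previous sketch
best = mixed.argmin()                     # Team 2 minimizes
print(np.round(mixed, 2))                 # compare with Table V
print("Team 2 best-response row:", best, "value:", round(mixed[best], 2))   # about 0.43
```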
This implies that under private randomization by one of the teams, the game does not admit a pure Nash equilibrium solution. Observe that the expected payoff relevant to Team 2 has improved from 1 to 0.43 (if a1 < a2 or a1 > a2) or to 0.5 (if a1 = a2), where Team 2 wishes to minimize the expected payoff.
From Table V, note that the best strategy of DM1 and DM2 in Team 2 is to play the pure strategies δ^4_1(s1) and δ^3_2(s2) in order to minimize the expected payoff. Furthermore, giving only one DM in Team 2 private randomness may not improve the expected payoff. To demonstrate this, consider that DM1 in Team 2 randomizes privately over his strategies (Table VI):
δ^1_2(s2):   16.32b1 + 21.81b2 + 8.10b3 + 15.87b4
δ^2_2(s2):   11.92b1 + 12.32b2 + 10.83b3 + 14.36b4
δ^3_2(s2):   18.38b1 + 1.97b2 + 20b3 + 0.43b4
δ^4_2(s2):   2.57b1 + 21.27b2 + 0.57b3 + 30b4
TABLE VI
TWO-TEAM ZERO-SUM GAME WITH TEAM 1 AND DM1 OF TEAM 2 HAVING PRIVATE RANDOMIZATION OVER THEIR STRATEGIES
δ1(s1) = δ^1_1(s1) with prob. b1,  δ^2_1(s1) with prob. b2,  δ^3_1(s1) with prob. b3,  δ^4_1(s1) with prob. b4,
with 0 ≤ bi ≤ 1 for 1 ≤ i ≤ 4 and Σ_{i=1}^4 bi = 1.
The expected payoff is
E[κ(γ1(µ1), δ1(s1), δ^k_2(s2))] = Σ_{i,j=1}^4 E[κ((γ1(µ1) = γ^i_1(µ1)), (δ1(s1) = δ^j_1(s1)), δ^k_2(s2))] ai bj.
The resulting expected payoffs are shown in Table VI. If DM1 in Team 2 does not play a pure strategy, say b1 = b3 = 0, b2 = 1/4 and b4 = 3/4, then DM2 of Team 2 will play strategy δ^3_2(s2) to minimize the expected payoff, giving an expected payoff of 0.815. Note that 0.815 < V̄(A) but greater than the expected payoff under the pure-strategy best response, which is 0.43 (this is clear from Table V). Here we assume the Team 1 decision maker randomizes privately with probability vector a3 = a4 = 1/12, a1 = 5/18, a2 = 10/18.
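The 0.815 figure can be reproduced in the same way: the sketch below (our own illustration, again reusing the Table III matrix A from the earlier sketch) lets Team 1 mix with probabilities a, lets DM1 of Team 2 mix with probabilities b, and has DM2 of Team 2 best-respond.

```python
# Sketch: both Team 1 (probabilities a) and DM1 of Team 2 (probabilities b)
# randomize independently; DM2 of Team 2 then best-responds (cf. Table VI).
import numpy as np

a = np.array([5/18, 10/18, 1/12, 1/12])
b = np.array([0.0, 0.25, 0.0, 0.75])
# Reshape Table III so that E3[n, m, l] is the payoff for (delta^m_1, delta^n_2, gamma^l_1).
E3 = A.reshape(4, 4, 4)
vals = np.einsum('nml,l,m->n', E3, a, b)   # expected payoff against each pure delta^n_2
k = vals.argmin()
print(np.round(vals, 3))                   # minimum should be about 0.815 at delta^3_2
print("DM2 plays delta^%d_2, value %.3f" % (k + 1, vals[k]))
```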
2) Role of common randomness independent of ξ: Now suppose common randomness independent of ξ is provided to DM1 and DM2 of Team 2, i.e., Team 2 randomizes jointly over its strategies. It is then best for Team 2 to put positive mass on the strategy pairs (δ^4_1(s1), δ^3_2(s2)) or (δ^3_1(s1), δ^4_2(s2)); otherwise its expected payoff exceeds the pure-strategy payoff (this is clear from Table III). The discrete team-vs-team zero-sum game with common randomness does not admit a pure Nash equilibrium solution, but the common randomness can improve the expected payoff.
C. Example: LQG team-vs-team zero-sum game
We now present an example of an LQG zero-sum team-vs-team game and show that common randomness independent of the environment ξ does not provide any benefit, whereas common randomness dependent on ξ benefits the team that receives it.
Consider a two-team LQG zero-sum game in which Team 1 and Team 2 consist of one and two decision makers, respectively. Let ξ = [µ1, s1, s2]^T denote the environment (state of nature); it is a random vector with distribution N(0, Σ), where Σ is the covariance matrix. Let yi = ηi(ξ) be the observation about ξ available at decision maker i of Team 1 (i = 1), and let zj = ζj(ξ) be the observation about ξ available at decision maker j of Team 2 (j = 1, 2). For mathematical simplicity we assume y1 = η1(ξ) = µ1 and zj = ζj(ξ) = sj, j = 1, 2. In the standard LQG two-team zero-sum game the decision rules are
γi : yi → ui,  γi ∈ Γi, ui ∈ Ui, for i = 1;
δj : zj → vj,  δj ∈ ∆j, vj ∈ Vj, for j = 1, 2.
The optimal decision rules (u*1 = γ*1(y1), v*1 = δ*1(z1), v*2 = δ*2(z2)) are such that
J_ZS,LQG(u1, v*1, v*2) ≤ J_ZS,LQG(u*1, v*1, v*2) ≤ J_ZS,LQG(u*1, v1, v2),   (20)
for all u1 ∈ U1, v1 ∈ V1 and v2 ∈ V2, where J_ZS,LQG(u1, v1, v2) = E_ξ[κ(u1, v1, v2, ξ)].
The cost function is
κ(u1, v1, v2, ξ) = κ(θ, ξ) = θ^T B θ + 2 θ^T S ξ,   (21)
where θ = [u1, v1, v2]^T and
B = [[−1, r11, r12], [r11, 1, q12], [r12, q12, 1]].
Here r11 and r12 characterize the coupling between the teams, that is, the coupling of DM1 of Team 1 with DM1 and DM2 of Team 2, respectively, and q12 denotes the coupling between DM1 and DM2 of Team 2. Moreover, we assume that Team 1 seeks to maximize the expected payoff and Team 2 seeks to minimize it. The cost function E_ξ[κ(u1, v1, v2, ξ)] is required to be concave in u1 and convex in (v1, v2); hence we assume 1 − q12² > 0 and take
S = [[1, 0, 0], [0, −1, 0], [0, 0, −1]].
The two-team LQG zero-sum game admits a saddle-point solution (we refer the reader to [6, Lemmas 3.1, 3.2, Theorem 3.1]), i.e.,
max_{u1∈U1} min_{(v1,v2)∈V} E_ξ[κ(u1, v1, v2, ξ)] = min_{(v1,v2)∈V} max_{u1∈U1} E_ξ[κ(u1, v1, v2, ξ)],   (22)
where V = V1 × V2. Since in a static LQG problem the decision variables are linear functions of the observations available at the decision makers, u1 = γ1(y1) = α11 y1 and vj = δj(zj) = α2j zj, j = 1, 2. Writing the relation between θ and the observations y1, z1, z2 more compactly, we have θ = A ỹ, where
A = [[α11, 0, 0], [0, α21, 0], [0, 0, α22]],   ỹ = [y1, z1, z2]^T.
The expected cost function is
J_ZS,LQG(α11, α21, α22) = E_ξ[ ỹ^T A^T B A ỹ + 2 ỹ^T A^T S ξ ] = Tr[ A^T B A Σ + 2 A^T S Σ ].   (23)
The equality in (23) follows from ỹ = ξ and ξ ∼ N(0, Σ). From (22) we then obtain
max_{α11} min_{α21,α22} J_ZS,LQG(α11, α21, α22) = min_{α21,α22} max_{α11} J_ZS,LQG(α11, α21, α22).   (24)
The objective of the zero-sum two-team LQG game is to determine (α*11, α*21, α*22) such that
J_ZS,LQG(α11, α*21, α*22) ≤ J_ZS,LQG(α*11, α*21, α*22) ≤ J_ZS,LQG(α*11, α21, α22)
is satisfied for all α11, α21, α22 ∈ R.
1) Discussion of the matrix B: The matrix B contains the coupling parameters r11, r12 and q12. If r11 = r12 = q12 = 0, there is no coupling between Teams 1 and 2, nor among the decision makers of Team 2; this case is not interesting. If r11 = r12 = 0, there is no coupling between Teams 1 and 2 and the problem reduces to a team decision problem. Hence we suppose r11, r12, q12 ≠ 0.
Next, we analyze the role of common randomness in the LQG two-team zero-sum game. We consider two cases:
• Case I: common randomness independent of ξ;
• Case II: common randomness dependent on ξ.
2) Common randomness independent of ξ:
Proposition A.1: In the LQG two-team zero-sum stochastic game, common randomness independent of ξ does not benefit the team that receives it.
Proof: Suppose a coordinator provides common randomness, independent of the environment ξ, to the decision makers of a team. For mathematical simplicity, we assume the common randomness is available at one team, say Team 2. The common randomness provided to decision makers DM1 and DM2 of Team 2 is denoted ω, and ω is independent of ξ. The decision rule of the Team 1 decision maker is
γ1 : y1 → u1,
and the decision rules of the Team 2 decision makers are
δj : zj × ω → vj,   j = 1, 2.
The actions of the decision makers are
u1 = γ1(y1) = α11 y1,   vj = δj(zj, ω) = α2j zj + β2j ω,   j = 1, 2.
Rewriting the above, we obtain θ = A ỹ + β ω, where
θ = [u1, v1, v2]^T,  A = [[α11, 0, 0], [0, α21, 0], [0, 0, α22]],  ỹ = [y1, z1, z2]^T,  β = [0, β21, β22]^T.
The expected payoff of the LQG two-team zero-sum game with common randomness is
J_ZS,CR,LQG(α11, α21, α22, β) = E_ξ[ ỹ^T A^T B A ỹ + 2 ỹ^T A^T S ξ + 2 ỹ^T A^T B β ω + ω^T β^T B β ω + 2 ω^T β^T S ξ ]
   = Tr[ A^T B A Σ + 2 A^T S Σ + β^T B β Σ2 ].   (25)
The equality in (25) holds because ω is independent of ξ and ω ∼ N(0, Σ2). Then
max_{α11} min_{α21,α22,β21,β22} J_ZS,CR,LQG(α11, α21, α22, β21, β22)
   = max_{α11} min_{α21,α22,β21,β22} Tr[ A^T B A Σ + 2 A^T S Σ + β^T B β Σ2 ]
   = max_{α11} min_{α21,α22} Tr[ A^T B A Σ + 2 A^T S Σ ] + min_{β21,β22} Tr[ β^T B β Σ2 ].
Clearly, from the above expression, the minimization of Tr[β^T B β Σ2] is attained at β equal to zero, i.e., β21 = 0 and β22 = 0, for the given B and Σ2 > 0. Hence
max_{α11} min_{α21,α22,β21,β22} J_ZS,CR,LQG(α11, α21, α22, β21, β22) = max_{α11} min_{α21,α22} Tr[ A^T B A Σ + 2 A^T S Σ ]
   = max_{α11} min_{α21,α22} J_ZS,LQG(α11, α21, α22)
   = min_{α21,α22,β21,β22} max_{α11} J_ZS,CR,LQG(α11, α21, α22, β21, β22).
Hence we conclude that common randomness independent of ξ does not benefit the team that has it.
3) Common randomness dependent on ξ: Suppose common randomness, denoted ω, is available to the decision makers of Team 2 of the two-team LQG zero-sum game. The decision rule of the Team 1 decision maker is
γ1 : y1 → u1,
and the decision rules of the Team 2 decision makers are
δj : zj × ω → vj,   j = 1, 2.
The actions of the decision makers are
u1 = γ1(y1) = α11 y1,   vj = δj(zj, ω) = α2j zj + β2j ω,   j = 1, 2,
and again θ = A ỹ + β ω, with
θ = [u1, v1, v2]^T,  A = [[α11, 0, 0], [0, α21, 0], [0, 0, α22]],  ỹ = [y1, z1, z2]^T,  β = [0, β21, β22]^T.
Moreover, the common randomness is assumed to depend on the environment ξ: ω = f(ξ), where f(·) is a measurable function. Let f be linear; then
ω = f(ξ) = φ11 µ1 + φ21 s1 + φ22 s2 = Φ^T ỹ = Φ^T ξ,
where Φ = [φ11, φ21, φ22]^T, ỹ = ξ and ξ ∼ N(0, Σ). The expected cost functional is
J_ZS,CR,LQG(α11, α21, α22, β21, β22) = E_ξ[ ỹ^T A^T B A ỹ + 2 ỹ^T A^T S ξ + 2 ỹ^T A^T B β ω + ω^T β^T B β ω + 2 ω^T β^T S ξ ]
   = Tr[ A^T B A Σ + 2 A^T S Σ + 2 A^T B β̃ Σ + β̃^T B β̃ Σ + 2 β̃^T S Σ ],   (26)
where β̃ = β Φ^T. The goal is to find (α*11, α*21, α*22, β*21, β*22) such that
J_ZS,CR,LQG(α11, α*21, α*22, β*21, β*22) ≤ J_ZS,CR,LQG(α*11, α*21, α*22, β*21, β*22) ≤ J_ZS,CR,LQG(α*11, α21, α22, β21, β22)
for all α11, α21, α22, β21, β22 ∈ R.
The source of information (the source of common randomness) can act as a mole or as a consultant, depending on the type of information it provides. If the source is a mole, then ω = φ11 µ1, which implies φ21 = 0 and φ22 = 0. If the source is a consultant, then ω = φ21 s1 + φ22 s2. We investigate these two cases.
a) Suppose the source of information is a mole (spy) and provides the common randomness ω = φ11 µ1. Let J^{a,*}_ZS,CR,LQG denote the saddle-point value of the LQG two-team zero-sum game with common randomness when the source of the common randomness for the Team 2 decision makers is a spy.
b) Let J^{b,*}_ZS,CR,LQG denote the saddle-point value of the LQG two-team zero-sum game with common randomness when the source of the common randomness for the Team 2 decision makers is a consultant and ω = φ21 s1 + φ22 s2.
Intuitively, we expect the following inequalities to hold:
J^{a,*}_ZS,CR,LQG ≤ J*_ZS,LQG,   (27)
J^{b,*}_ZS,CR,LQG ≤ J*_ZS,LQG,   (28)
where J*_ZS,LQG is the saddle-point value of the LQG two-team zero-sum game with no common randomness.
From (26) it is analytically difficult to prove the inequalities in (27) and (28); hence we state them as a conjecture. We now present numerical results showing that the above inequalities hold.
Let
Σ = [[σ²_µ1, σ²_µ1,s1, σ²_µ1,s2], [σ²_µ1,s1, σ²_s1, σ²_s1,s2], [σ²_µ1,s2, σ²_s1,s2, σ²_s2]].
Since ω is scalar, we have Σ2 = σ²_ω. The team cost functional is
J(α11, α21, α22, β21, β22) = −α11² σ²_µ1 + α21² σ²_s1 + α22² σ²_s2 + 2 r11 α11 α21 σ²_µ1,s1 + 2 r12 α11 α22 σ²_µ1,s2
   + 2 q12 α21 α22 σ²_s1,s2 + 2(r11 α11 β21 + r12 α11 β22) σ²_µ1,ω + 2(α21 β21 + q12 α21 β22) σ²_s1,ω
   + 2(q12 α22 β21 + α22 β22) σ²_s2,ω + (β21² + 2 q12 β21 β22 + β22²) σ²_ω + 2 α11 σ²_µ1
   − 2 α21 σ²_s1 − 2 α22 σ²_s2 − 2 β21 σ²_s1,ω − 2 β22 σ²_s2,ω.   (29)
We know that the LQG two-team zero-sum game has a saddle-point solution, that is,
max_{α11} min_{α21,α22,β21,β22} J_ZS,CR,LQG(α11, α21, α22, β21, β22) = min_{α21,α22,β21,β22} max_{α11} J_ZS,CR,LQG(α11, α21, α22, β21, β22).   (30)
To evaluate max_{α11} min_{α21,α22,β21,β22} J_ZS,CR,LQG(α11, α21, α22, β21, β22), we differentiate (29) with respect to α11, α21, α22, β21, β22 and equate to zero, obtaining the linear system

[ −σ²_µ1          r11 σ²_µ1,s1    r12 σ²_µ1,s2    r11 σ²_µ1,ω    r12 σ²_µ1,ω ] [α11]   [ −σ²_µ1   ]
[ r11 σ²_µ1,s1    σ²_s1           q12 σ²_s1,s2    σ²_s1,ω        q12 σ²_s1,ω ] [α21]   [  σ²_s1   ]
[ r12 σ²_µ1,s2    q12 σ²_s1,s2    σ²_s2           q12 σ²_s2,ω    σ²_s2,ω     ] [α22] = [  σ²_s2   ]
[ r11 σ²_µ1,ω     σ²_s1,ω         q12 σ²_s2,ω     σ²_ω           q12 σ²_ω    ] [β21]   [  σ²_s1,ω ]
[ r12 σ²_µ1,ω     q12 σ²_s1,ω     σ²_s2,ω         q12 σ²_ω       σ²_ω        ] [β22]   [  σ²_s2,ω ]
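A sketch of how this system can be assembled and solved numerically is given below. The coupling parameters and the ω-related covariances are inputs; the covariance entries used in the example call are assumptions (our reading of the partly garbled Σ in the text), so the printed numbers are illustrative rather than a reproduction of the tables that follow.

```python
# Sketch: assemble and solve the saddle-point stationarity system above.
import numpy as np

def saddle_point(r11, r12, q12, s_m1, s_s1, s_s2, c_m1s1, c_m1s2, c_s1s2,
                 s_w, c_m1w, c_s1w, c_s2w):
    M = np.array([
        [-s_m1,       r11*c_m1s1, r12*c_m1s2, r11*c_m1w, r12*c_m1w],
        [r11*c_m1s1,  s_s1,       q12*c_s1s2, c_s1w,     q12*c_s1w],
        [r12*c_m1s2,  q12*c_s1s2, s_s2,       q12*c_s2w, c_s2w],
        [r11*c_m1w,   c_s1w,      q12*c_s2w,  s_w,       q12*s_w],
        [r12*c_m1w,   q12*c_s1w,  c_s2w,      q12*s_w,   s_w]])
    rhs = np.array([-s_m1, s_s1, s_s2, c_s1w, c_s2w])
    # returns (alpha11, alpha21, alpha22, beta21, beta22)
    return np.linalg.lstsq(M, rhs, rcond=None)[0]

# Mole case with phi11 = 1/2, i.e. omega = mu1/2; Sigma entries are assumptions.
phi11 = 0.5
sol = saddle_point(r11=0.25, r12=0.25, q12=0.5,
                   s_m1=1.0, s_s1=1.0, s_s2=1.0,
                   c_m1s1=0.25, c_m1s2=0.25, c_s1s2=0.5,
                   s_w=phi11**2, c_m1w=phi11,
                   c_s1w=phi11*0.25, c_s2w=phi11*0.25)
print(sol)
```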
Numerically, we compare the results for two different choices of the matrix B:
1) B = [[−1, 1/4, 1/4], [1/4, 1, 1/2], [1/4, 1/2, 1]],   2) B = [[−1, 1/2, 1/4], [1/2, 1, 1/2], [1/4, 1/2, 1]].
For all numerical results we assume
Σ = [[1, 1/4, 1/4], [1/4, 1, 1/2], [1/4, 1/2, 1]].
a) When the source of information is a mole and ω = φ11 µ1, we have E[ω] = 0 and
σ²_ω = E[ω²] = φ11² σ²_µ1,
σ²_µ1,ω = φ11 σ²_µ1,
σ²_s1,ω = φ11 E[µ1 s1] = φ11 σ²_µ1,s1,
σ²_s2,ω = φ11 E[µ1 s2] = φ11 σ²_µ1,s2.
(r11, r12, q12)     (φ11, φ21, φ22)     J^{a,*}_ZS,CR,LQG
(1/4, 1/4, 1/2)     (1/2, 0, 0)         0.2037
(1/2, 1/4, 1/2)     (1/2, 0, 0)         0.4012
TABLE VII
WITH RANDOMIZATION: COMPARISON OF J^{a,*}_ZS,CR,LQG FOR DIFFERENT VALUES OF r11, r12, q12.
(r11, r12, q12)     (φ11, φ21, φ22)     J^{b,*}_ZS,CR,LQG
(1/4, 1/4, 1/2)     (0, 1/2, 1/2)       0.1616
(1/2, 1/4, 1/2)     (0, 1/2, 1/2)       0.2435
TABLE VIII
WITH RANDOMIZATION: COMPARISON OF J^{b,*}_ZS,CR,LQG FOR DIFFERENT VALUES OF r11, r12, q12.
Case 1) B = [[−1, 1/4, 1/4], [1/4, 1, 1/2], [1/4, 1/2, 1]]. After solving the linear system of equations, we have α*11 = 0.9615, α*21 = 0.8052, α*22 = 0.8052, β*21 = −0.7103, β*22 = −0.7103. The team cost functional is
J^{a,*}_ZS,CR,LQG = J^a_ZS,CR,LQG(α*11, α*21, α*22, β*21, β*22) = max_{α11} min_{α21,α22,β21,β22} J(α11, α21, α22, β21, β22) = 0.4012.
Case 2) B = [[−1, 1/2, 1/4], [1/2, 1, 1/2], [1/4, 1/2, 1]]. Solving the linear system of equations, we obtain α*11 = 0.8500, α*21 = 0.8052, α*22 = 0.8052, β*21 = −0.0693, β*22 = −1.7693. Evaluating the team cost functional gives
J^{a,*}_ZS,CR,LQG = 0.2037.
b) When a consultant provides the information, ω = φ21 s1 + φ22 s2. Note that E[ω] = 0 and
(r11, r12, q12)     (φ11, φ21, φ22)     J*_ZS,LQG
(1/4, 1/4, 1/2)     (0, 0, 0)           0.598
(1/2, 1/4, 1/2)     (0, 0, 0)           1.8991
TABLE IX
WITHOUT RANDOMIZATION: COMPARISON OF J*_ZS,LQG FOR DIFFERENT VALUES OF r11, r12, q12.
σ²_ω = E[ω²] = φ21² σ²_s1 + φ22² σ²_s2 + 2 φ21 φ22 σ²_s1,s2,
σ²_µ1,ω = E[µ1 ω] = φ21 σ²_µ1,s1 + φ22 σ²_µ1,s2,
σ²_s1,ω = E[s1 ω] = φ21 σ²_s1 + φ22 σ²_s1,s2,
σ²_s2,ω = E[s2 ω] = φ21 σ²_s1,s2 + φ22 σ²_s2.
We suppose φ21 = 1/2 and φ22 = 1/2.
Case 1) B = [[−1, 1/4, 1/4], [1/4, 1, 1/2], [1/4, 1/2, 1]]. Solving the linear system of equations, we have α*11 = 1.0381, α*21 = 2, α*22 = 2, β*21 = −1.391, β*22 = −1.391, and the team-optimal cost is J^{b,*}_ZS,CR,LQG = 0.1616.
Case 2) B = [[−1, 1/2, 1/4], [1/2, 1, 1/2], [1/4, 1/2, 1]]. Then α*11 = 1.0515, α*21 = 2, α*22 = 2, β*21 = −1.3333, β*22 = −1.5086, and the team-optimal cost is J^{b,*}_ZS,CR,LQG = 0.2435.
From Tables VII, VIII and IX it is clear that the inequalities in (27) and (28) are satisfied numerically. Observe that common randomness dependent on ξ, provided by either a mole or a consultant, benefits the team that receives it in the team-vs-team zero-sum game.
|
ai_researcher | 2 | DOCBENCH_A_Benchmark_for_Evaluating_LLM-based_Document_Reading_Systems.pdf | 4
2
0
2
l
u
J
5
1
]
L
C
.
s
c
[
1
v
1
0
7
0
1
.
7
0
4
2
:
v
i
X
r
a
DOCBENCH: A Benchmark for Evaluating
LLM-based Document Reading Systems
Anni Zou1,2∗, Wenhao Yu2, Hongming Zhang2, Kaixin Ma2,
Deng Cai2, Zhuosheng Zhang1, Hai Zhao1, Dong Yu2
1Shanghai Jiao Tong University 2Tencent AI Lab
[email protected],
(cid:66)[email protected] (corresponding author)
Abstract
Recently, there has been a growing interest among large language model (LLM)
developers in LLM-based document reading systems, which enable users to upload
their own documents and pose questions related to the document contents, going
beyond simple reading comprehension tasks. Consequently, these systems have
been carefully designed to tackle challenges such as file parsing, metadata extrac-
tion, multi-modal information understanding and long-context reading. However,
no current benchmark exists to evaluate their performance in such scenarios, where
a raw file and questions are provided as input, and a corresponding response is
expected as output. In this paper, we introduce DOCBENCH, a new benchmark de-
signed to evaluate LLM-based document reading systems. Our benchmark involves
a meticulously crafted process, including the recruitment of human annotators
and the generation of synthetic questions. It includes 229 real documents and
1,102 questions, spanning across five different domains and four major types of
questions. We evaluate both proprietary LLM-based systems accessible via web
interfaces or APIs, and a parse-then-read pipeline employing open-source LLMs.
Our evaluations reveal noticeable gaps between existing LLM-based document
reading systems and human performance, underscoring the challenges of develop-
ing proficient systems. To summarize, DOCBENCH aims to establish a standardized
benchmark for evaluating LLM-based document reading systems under diverse
real-world scenarios, thereby guiding future advancements in this research area. 2
1
Introduction
The emergence of large language models (LLMs) has marked a significant milestone in the field of
natural language processing, revolutionizing the way we approach a variety of tasks [2, 3, 7, 35, 37,
40, 50]. Existing LLMs such as GPT-4 [2], Llama-3 [37], and Claude-3 [3] have shown exceptional
abilities in following human instructions to perform tasks such as answering questions, translating
languages and summarizing texts. These tasks are typically characterized by straightforward input-
output interactions, where the models generate responses solely based on the provided text. However,
many real-world applications require more complex interactions involving user-provided documents.
For instance, financial analysts might need to query comprehensive financial reports to inform their
investment decisions [25, 42, 45]. Legal professionals often search through extensive legal documents
to find relevant case law [8, 10, 22]. Similarly, scientific researchers frequently sift through academic
papers to identify related works and extract key findings [5, 11].
∗This work was done during internship at Tencent AI Lab, Seattle.
2Data and code will be released at https://github.com/Anni-Zou/DocBench.
Preprint. Under review.
Figure 1: An example of OpenAI’s GPT-4 based document reading system. Unlike standalone LLMs,
recent proprietary LLM-based document reading systems employ a carefully designed approach (e.g.,
file parsing, code execution) to answer user questions related to document contents.
When users pose queries based on their provided documents, the situation becomes more intricate and
challenging [23]. Unlike standalone LLMs that are primarily trained to process and respond to textual
inputs (or images in the case of Vision LLMs), handling user-provided documents necessitates a more
sophisticated approach that stretches beyond the capabilities of a single LLM. In order to provide
accurate responses, an LLM-based document reading system should not only comprehend natural
language queries, but also excel in a range of processing skills, including parsing and interpreting
user documents and layouts, navigating complex formatting structures, extracting relevant metadata,
and managing long textual contexts along with any embedded images. Mastery of these diverse skills
is essential for generating precise and contextually relevant responses.
At the same time, recent advancements in proprietary LLM developers such as OpenAI and Anthropic
have provoked the release of several LLM-based document reading systems. Figure 1 illustrates an
example of OpenAI’s GPT-4-based document reading system. Despite widespread claims of effec-
tiveness and efficiency in various online public blogs34, the absence of a standardized benchmark
makes it difficult to objectively evaluate and compare the document reading performance across these
systems, thereby leaving a critical gap in fairly assessing these capabilities in a fine-grained manner.
To fill this gap, our paper introduces DOCBENCH, a novel benchmark specifically designed to evaluate
LLM-based document reading systems. DOCBENCH is developed to mirror real-world scenarios
where each input consists of a document paired with one or multiple associated questions, and each
question is annotated with a golden answer. Our benchmark undergoes a meticulous development
process, incorporating human annotation and synthetic question generation. To the end, DOCBENCH
features 229 real-world documents and 1,102 questions spanning 5 diverse domains: Academia,
Finance, Government, Laws, and News. Besides, the benchmark involves 4 question categories,
including text-only, multi-modal (i.e., tables and figures), meta-data, and unanswerable, ensuring
comprehensive coverage of various document reading capabilities.
Based upon DOCBENCH, we evaluate several proprietary LLM-based systems that are accessible via
web interfaces or APIs. However, these proprietary systems are close-sourced, thus leading to the
limited disclosure of their detailed operational strategies. As a result, we additionally assess a straight-
forward parse-then-read pipeline employing a series of open-source LLMs. Our evaluations reveal
noticeable gaps between existing LLM-based document reading systems and human performance,
underscoring the challenges of developing proficient systems.
In summary, DOCBENCH serves as the first standardized benchmark to evaluate LLM-based document
reading systems within real-world scenarios, where the systems take a document file paired with
one or multiple related questions as input and generate textual responses as output. Moreover, our
benchmark is carefully designed to encompass 5 diverse domains and 4 distinct question types,
3Blog: Claude can now use tools https://www.anthropic.com/news/tool-use-ga
4Blog:
using LlamaIndex
talk-with-documents-using-llamaindex-3952c76bd511
Talk with
documents
https://codemaker2016.medium.com/
2
User: Upload the PDF file of DPR paperUser: Who is most cited person in the paperSystem: Ming-Wei Chang, with 4 citationsBlack-box document reading systemsStep 1: parse the uploaded documentStep2: extract the reference sectionStep 3: extract author names from referencesStep 4: count the occurrences of each person Step 5: respond with the most cited person[Some file loading process is omitted … ]Figure 2: Construction pipeline of DOCBENCH. (a) Document Collection: gathering PDF files
from five different domains; (b) QA-pair Generation: creating diverse and comprehensive QA pairs
through a combination of LLMs and human effort; (c) Quality Check: ensuring data quality through
a multi-step process that includes auto filtering, manual review, and expert curation.
ensuring a nuanced and thorough assessment. By facilitating fair comparisons across different
systems, DOCBENCH highlights current limitations and paves the way for future advancements.
2 The DOCBENCH
DOCBENCH is a benchmark that takes raw PDF files and accompanying questions as inputs, with the
objective of generating corresponding textual answers. In this section, we will introduce the pipeline
used to construct the dataset, present detailed statistics, and explain the evaluation method.
2.1 Dataset Construction
Our dataset construction pipeline consists of three phases. First, we crawl documents across various
domains from publicly accessible online resources (§2.1.1). Second, we generate corresponding QA
pairs with the help of GPT-4 and a team of human annotators (§2.1.2). Finally, we conduct auto
filtering followed by a manual review to validate the quality of the generated instances (§2.1.3).
2.1.1 Document Collection
To establish a practical and constructive benchmark for document reading, we concentrate on
scenarios where it is crucial to read documents. We standardize the documents to PDF format due to
its high compatibility and stability. We identify five domains where documents are frequently utilized:
Academia, Finance, Government, Laws, News. For Academia, papers are downloaded from arXiv
within the range of top-k citations in the field of natural language processing on Google Scholar. 5
For Finance, we crawl the annual reports of companies with top-k global market capitalization up to
2024-02-23 from AnnualReports. 6 For Government, we manually download official governmental
reports in 2023 from the U.S. Department of State and GovInfo. 7 For Laws, files are gathered from
an official online collection of publications from the Library of Congress, within the years ranging
from 2020 to 2024. 8 For News, we collect front-page scanned documents of the New York Times,
covering dates from 2022-02-22 to 2024-02-22. 9 We set k = 100 in the initial crawling process
for academic and financial documents. After skipping the unobtainable or damaged documents, we
eventually obtained 229 PDF files, with 49 for academia, 40 for finance, 44 for government, 46 for
laws, and 50 for news. Detailed statistics are shown in Table 1.
5https://scholar.google.com/; https://arxiv.org/.
6https://companiesmarketcap.com; http://www.annualreports.com.
7https://www.state.gov/department-reports/; https://www.govinfo.gov/.
8https://www.loc.gov/collections/publications-of-the-law-library-of-congress.
9https://static01.nyt.com/images/.
3
AcademiaFinanceGovernmentLawsNews(cid:31)(cid:30)(cid:29)(cid:28)(cid:27)(cid:26)(cid:29)(cid:25)(cid:24)(cid:23)(cid:22)(cid:21)(cid:27)(cid:20)(cid:19)(cid:18)(cid:30)(cid:28)(cid:23)(cid:17)(cid:16)(cid:27)(cid:29)(cid:28)(cid:25)(cid:15)(cid:27)(cid:20)(cid:14)Q: What was the total non-operating income for Amazon in 2021? A: $13,272 million. [Evidence](cid:31)(cid:21)(cid:20)(cid:30)(cid:21)(cid:29)(cid:17)(cid:13)(cid:25)(cid:12)(cid:27)(cid:25)(cid:11)(a) Document Collection(b) QA-pair Generation(c) Quality Check(cid:10)(cid:20)(cid:29)(cid:27)(cid:20)(cid:25)(cid:17)(cid:13)(cid:25)(cid:19)(cid:23)(cid:30)(cid:15)(cid:9)(cid:25)(cid:19)(cid:31)(cid:30)(cid:29)(cid:28)(cid:27)(cid:8)(cid:22)(cid:23)(cid:7)(cid:21)(cid:29)(cid:17)(cid:6)(cid:20)(cid:5)(cid:23)Page Text:We introduce a new language model that... (cid:6)(cid:20)(cid:19)(cid:28)(cid:15)(cid:30)(cid:9)(cid:28)(cid:27)(cid:23)(cid:20)(cid:17)(cid:4)(cid:15)(cid:23)(cid:22)(cid:26)(cid:28)(cid:19)(cid:3)(cid:21)(cid:15)(cid:14)(cid:25)(cid:17)(cid:3)(cid:21)(cid:20)(cid:14)(cid:30)(cid:21)(cid:14)(cid:25)(cid:17)(cid:31)(cid:23)(cid:7)(cid:25)(cid:29)(cid:19)(cid:2)(cid:30)(cid:22)(cid:21)(cid:20)(cid:17)(cid:18)(cid:20)(cid:20)(cid:23)(cid:28)(cid:21)(cid:28)(cid:23)(cid:15)(cid:19)<Text-only> Q: What is the average sales...A: $10,537 million. [Evidence]<Multimodal> Q: According to Figure 2, what is ...A: Yes. [Evidence]<Meta-data> Q: On which page does the reportA: Page 5.<Unanswerable> Q: What does BERT...A: Not mentioned.Text-onlyBased on the above figure and text, please design three QA pairs...These questions require locating the specific information, simple orcomplex calculations, comparisons, finding the maximum or minimum... Multimodal(cid:1)(cid:127)(cid:26)(cid:25)(cid:15)(cid:28)(cid:17)(cid:129)(cid:30)(cid:15)(cid:21)(cid:28)(cid:27)(cid:23)(cid:20)Q: Is SenseBERT a model mentioned in the provided text? A: Yes. [Evidence]Q: What was the total non-operating income for Amazon in 2021? A: $13,272 million. [Evidence]Q: Is SenseBERT a model mentioned in the provided text? A: Yes. [Evidence]Table 1: Overview statistics of DOCBENCH. All documents are in PDF format. We extract text
content and calculate the corresponding #Tokens of documents.
Category      | Questions #Num | #Tokens | Documents #Num | #Pages | #Size(KB) | #Tokens
Aca.          | 303            | 16.8    | 49             | 11     | 847       | 11,123
Fin.          | 288            | 16.8    | 40             | 192    | 6,594     | 149,409
Gov.          | 148            | 14.1    | 44             | 69     | 2,183     | 36,105
Laws          | 191            | 15.4    | 46             | 58     | 969       | 32,339
News          | 172            | 13.5    | 50             | 1      | 3,095     | 2,909
Total/Avg.    | 1,102          | 15.7    | 229            | 66     | 2,738     | 46,377
Figure 3: Overview of Questions and Documents: distribution of question token counts (left);
distribution of QA pairs per document (middle); distribution of document token counts (right).
2.1.2 QA-pair Generation
The generation procedure revolves around two aspects: diversity and comprehensiveness. On one
hand, as the document itself inherently abounds with multi-dimensional and multi-modal information
including texts, tables, figures, and meta-data, we leverage the fitz library 10 to parse out the distinct
modalities within the PDF files. Afterward, we deliver plain texts to GPT-4 (gpt-4-0125-preview)
for generating text-only QA pairs and resort to GPT-4V (gpt-4-1106-vision-preview) for yield-
ing multi-modal ones based on tables, figures, and their related textual descriptions. On the other
hand, we further request a set of human annotators to manually elaborate 350 QA pairs based on the
given document files. Their primary task is to focus on types that are rarely covered in the previous
generation stage but are frequent in daily usage, such as meta-data and unanswerable instances.
Details and additional analysis of instruction prompts are attached in Appendix A.
2.1.3 Quality Check
We begin by instructing GPT-4 to automatically filter out questions that are excessively lengthy,
unnatural, or impractical. We then conduct a manual review following the automatic filtering to ensure
both the quality of questions and the accuracy of answers. To further align our data with real-world
user scenarios, we engage 7 practitioners from distinct domains to review and refine the data within
their areas of expertise. In this way, our data quality is validated from multiple perspectives.
2.2 Dataset Statistics
DOCBENCH comprises a total of 229 PDF documents sourced from publicly accessible online
repositories along with 1,102 questions, spanning across 5 domains: Academia, Finance, Government,
Law, and News. As shown in Table 1, we conduct comprehensive statistical analysis across various
angles, encompassing the number of questions, documents, and average token counts within each.
Given the unique nature of our task input, which involves processing PDF files, we additionally
include information such as page count and file size. Moreover, Figure 3 presents distributions
depicting the counts of question tokens, document tokens 11, and QA pairs per document. Notably,
we constrain the number of QA pairs per document to a maximum of 20, with its range spanning from
1 to 16, aiming to better emulate real-world usage scenarios. As for the token counts of questions and
documents, the minimum and maximum values are (6||40) and (1, 300||598, 302) respectively.
10https://pypi.org/project/fitz/
11We utilize the tokenizer of gpt-4-turbo for token measurement.
4
510152025303540#Tokens050100150200#QuestionsDistribution of Question Token Counts.246810121416#QA pairs01020304050#DocumentsDistribution of QA pairs per Document.0102030>40#Tokens(k)0102030405060#DocumentsDistribution of Document Token Counts.Table 2: Examples of instances from DOCBENCH, with multiple labels indicating our data diversity.
Question: Why does the model not perform as well in German compared to Spanish and Dutch?
Answer: Due to its complex morphology and compound words...
Labels: <Aca.> <Why> <Text-only> <Textual>
Document: When and Why are Pre-trained Word Embeddings Useful for Machine Translation [clickable file link]

Question: By how much did the number of Erica users increase from 2018 to 2019?
Answer: The number increased by 5.5 million...
Labels: <Fin.> <How> <Multimodal> <Numerical>
Document: Bank of America Annual Report 2020 [clickable file link]

Question: What is the primary focus of Bureau Objective 3.4?
Answer: The report does not contain such an objective.
Labels: <Gov.> <Wh-> <Unanswerable> <Others>
Document: Governmental report from Secretary's Office of Global Women's Issues 2022 [clickable file link]

Question: How many times does the report mention "scientific ethics"?
Answer: The report mentions "scientific ethics" 11 times.
Labels: <Laws> <How> <Meta-data> <Numerical>
Document: Report on Regulation of Stem Cell Research from Library of Congress 2023 [clickable file link]

Question: Is the article about Hurricane Ian's impact in Florida written by multiple authors?
Answer: Yes, the article is about Hurricane Ian's impact in Florida...
Labels: <News> <Y/N> <Meta-data> <Boolean>
Document: New York Times front page on 2022-09-30 [clickable file link]
Figure 4: Data distribution of DOCBENCH: (a) proportion(%) of various data groups based on four
distinct classification criteria; (b) detailed data analysis based on question types.
2.3 Dataset Analysis
Figure 4 illustrates the data distribution in DOCBENCH based on different classification criteria.
QA-pair Type The types of QA pairs can be mainly divided into four groups: text-only (37.4%),
multimodal (27.9%), meta-data (23.4%), and unanswerable (11.3%). The text-only and multimodal
types collectively account for over half (65.3%), centering on the abilities to comprehend long
contexts and interpret information from different modalities. Besides, we incorporate approximately
one-third (34.7%) of questions to more closely fit the actual scenarios as well as assess the robustness
of the document reading systems, including 23.4% inquiring about metadata (e.g., page numbers,
word counts) and 11.3% that cannot be answered based on the given document.
Question Type The types of questions can be primarily separated into four categories according to
the inquiry focus: what / who / where / when / which (58.6%), Y/N (22.1%), how (18.8%), and why
(0.5%). These categories respectively demand specific information or details, straightforward yes or
no responses, methods or degrees, and the underlying reasons behind actions or phenomena. Figure
4(b) delineates a detailed data distribution based on question types. The interrogative what holds
a dominant proportion at 40.8%, which is reasonable as users commonly seek precise information
when confronted with a document.
Answer Type The types of answers can be partitioned into four classes: numerical (37.4%), textual
(35.7%), boolean (17.3%), and others (9.6%). Within the numerical class, 69% originate from the
domains of academia and finance, as these documents naturally require extensive use of numbers to
convey information, such as performance metrics in academic papers and figures in financial reports.
5
(cid:31)(cid:30)(cid:29)(cid:28)(cid:27)(cid:26)(cid:25)(cid:28)(cid:24)(cid:23)(cid:22)(cid:28)(cid:21)(cid:29)(cid:20)(cid:19)(cid:18)(cid:17)(cid:20)(cid:19)(cid:16)(cid:17)(cid:15)(cid:14)(cid:13)(cid:23)(cid:25)(cid:12)(cid:11)(cid:10)(cid:9)(cid:12)(cid:14)(cid:26)(cid:8)(cid:23)(cid:7)(cid:29)(cid:12)(cid:10)(cid:17)(cid:14)(cid:29)(cid:13)(cid:7)(cid:29)(cid:14)(cid:29)(cid:6)(cid:25)(cid:29)(cid:28)(cid:5)(cid:4)(cid:29)(cid:14)/(cid:5)(cid:4)(cid:23)/(cid:5)(cid:4)(cid:17)(cid:3)(cid:17)/(cid:5)(cid:4)(cid:17)(cid:25)/(cid:5)(cid:4)(cid:26)(cid:30)(cid:4)(cid:2)/(cid:18)(cid:1)(cid:23)(cid:20)(cid:5)(cid:4)(cid:11)(cid:18)(cid:9)(cid:8)(cid:17)(cid:3)(cid:26)(cid:30)(cid:29)(cid:12)(cid:16)(cid:17)(cid:15)(cid:14)(cid:9)(cid:29)(cid:12)(cid:127)(cid:23)(cid:23)(cid:12)(cid:17)(cid:29)(cid:25)(cid:129)(cid:14)(cid:4)(cid:17)(cid:3)(cid:19)((cid:29))(cid:141)(cid:143)(cid:29)(cid:14)(cid:29)(cid:141)(cid:7)(cid:26)(cid:19)(cid:14)(cid:3)(cid:26)(cid:144)(cid:9)(cid:14)(cid:26)(cid:23)(cid:25)(cid:141)(cid:144)(cid:29)(cid:19)(cid:17)(cid:7)(cid:141)(cid:23)(cid:25)(cid:141)(cid:7)(cid:26)(cid:157)(cid:17)(cid:3)(cid:17)(cid:25)(cid:14)(cid:141)(cid:30)(cid:12)(cid:29)(cid:19)(cid:19)(cid:26) (cid:30)(cid:29)(cid:14)(cid:26)(cid:23)(cid:25)(cid:141)(cid:30)(cid:3)(cid:26)(cid:14)(cid:17)(cid:3)(cid:26)(cid:29)(cid:28)((cid:144))(cid:141)(cid:143)(cid:17)(cid:14)(cid:29)(cid:26)(cid:12)(cid:17)(cid:7)(cid:141)(cid:7)(cid:29)(cid:14)(cid:29)(cid:141)(cid:7)(cid:26)(cid:19)(cid:14)(cid:3)(cid:26)(cid:144)(cid:9)(cid:14)(cid:26)(cid:23)(cid:25)(cid:141)(cid:144)(cid:29)(cid:19)(cid:17)(cid:7)(cid:141)(cid:23)(cid:25)(cid:141) (cid:9)(cid:17)(cid:19)(cid:14)(cid:26)(cid:23)(cid:25)(cid:141)(cid:14)(cid:11)€(cid:17)(cid:19)(cid:141)(cid:143)(cid:23)(cid:8)(cid:29)(cid:26)(cid:25)‚(cid:31)(cid:13)€(cid:29)(cid:26)(cid:3)‚(cid:9)(cid:17)(cid:19)(cid:14)(cid:26)(cid:23)(cid:25)(cid:31)(cid:25)(cid:19)(cid:20)(cid:17)(cid:3)Table 3: The GPT-4 automatic evaluator shows a 98% agreement with human annotators. We
randomly sample 40 questions and answers from five systems, asking human annotators to assess
their accuracy. We then employ string matching (StrMatch), GPT-3.5, and GPT-4 as automatic
evaluators. Finally, we measure the agreement between the human and these automatic evaluators.
Sources        | #Correct/Wrong by different evaluators           | Agreement (human and automatic evaluators)
               | Human    | GPT-4    | GPT-3.5  | StrMatch        | GPT-4   | GPT-3.5 | StrMatch
KimiChat       | 24 / 16  | 23 / 17  | 33 / 7   | 0 / 40          | 97.5%   | 75.0%   | 40.0%
Qwen-2.5       | 17 / 23  | 18 / 22  | 31 / 9   | 0 / 40          | 97.5%   | 57.5%   | 57.5%
Gemma (7B)     | 19 / 21  | 18 / 22  | 18 / 22  | 0 / 40          | 97.5%   | 75.0%   | 52.5%
Mixtral (7B)   | 14 / 26  | 14 / 26  | 26 / 14  | 0 / 40          | 100.0%  | 65.0%   | 65.0%
Llama-3 (70B)  | 16 / 24  | 15 / 25  | 28 / 12  | 0 / 40          | 97.5%   | 62.5%   | 60.0%
Total          | 90 / 110 | 88 / 112 | 136 / 64 | 0 / 200         | 98.0%   | 67.0%   | 55.0%
2.4 Evaluation Setup
Evaluation Process Our dataset diversity poses two major evaluation challenges: (i) The evaluation
methods vary depending on the answer type. For example, for boolean or numerical answers, a
fair evaluator only needs to verify the correctness of a binary yes/no response or a specific number
using simple techniques like string matching or number extraction. In contrast, textual responses
require more nuanced standards such as natural language generation (NLG) metrics. Thus, accurately
determining the appropriate evaluation method becomes complex when the answer type is unknown.
(ii) Different LLMs and systems exhibit substantial variations in the organization and style of their
outputs, potentially leading to biases in traditional evaluation approaches. Therefore, we capitalize
on the prowess of LLMs that have proven to be decent evaluators and can be easily adapted to the
assessment of various answer types [14, 24, 39]. Inspired by Liu et al. [24], we clearly define the
evaluation criteria for various types within the instruction prompt and then instruct GPT-4 to assign a
score of 0 (incorrect) or 1 (correct). After evaluating 200 examples by both human evaluators and
GPT-4, we found that the GPT-4 automatic evaluator shows a 98% agreement with human annotators,
significantly exceeding the traditional string matching approach. Details of this experiment is shown
in Table 3, and details of evaluation instruction prompts are attached in Appendix A.
Metrics As mentioned above, we instruct GPT-4 to assign a score of 0 (incorrect) or 1 (correct),
thus using Accuracy (abbreviated as Acc.) to measure system performance. We report accuracy
across all instances, as well as for each domain and QA-pair type in Table 4.
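A minimal sketch of such an LLM-as-judge scoring loop is shown below. The prompt text is only a stand-in (the authors' actual evaluation prompt is in their Appendix A), and `samples` is assumed to be a list of dicts with the fields shown.

```python
# Sketch: GPT-4-as-judge scoring loop (prompt text is a stand-in).
from openai import OpenAI

client = OpenAI()
JUDGE_PROMPT = (
    "You are grading answers to questions about a document.\n"
    "Question: {question}\nReference answer: {reference}\nSystem answer: {prediction}\n"
    "Reply with 1 if the system answer is correct, otherwise 0."
)

def judge(sample, model="gpt-4-0125-preview"):
    msg = JUDGE_PROMPT.format(**sample)
    resp = client.chat.completions.create(model=model,
                                          messages=[{"role": "user", "content": msg}])
    return 1 if resp.choices[0].message.content.strip().startswith("1") else 0

samples = [{"question": "...", "reference": "...", "prediction": "..."}]
accuracy = sum(judge(s) for s in samples) / len(samples)
print(f"accuracy: {accuracy:.3f}")
```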
3 Experiments and Analysis
3.1 Experimental Setup
We conduct a comprehensive evaluation of 22 LLM-based document reading systems, encompassing
both proprietary systems that support document uploads and a series of parse-then-read pipelines. For
parse-then-read pipelines, we leverage the fitz package to extract text and image blocks from PDF
files. We retain the original texts and line breaks for text chunks while we denote the i-th image as
[image i] for images. Our selection for the proprietary systems includes GPT-4 and GPT-4o [2] from
OpenAI, GLM-4 12 from ZhipuAI, Kimi 13 from Moonshot AI, Claude-3 14 from Anthropic, Qwen-
2.5 15 from Alibaba Cloud, and ERNIE-3.5 16 from Baidu. In the case of the parse-then-read pipelines,
we assess 15 prominent LLMs as base models, featuring those from the GPT [2, 31], Llama [37],
Mistral [17], Yi [48], InternLM [6], Phi-3 [1], Gemma [36], ChatGLM3 [12], and Command-R [9]
families. The selection of base open-sourced LLMs adheres to three guiding principles: (i) official
release with instruct or chat versions that are supported by vLLM [20] framework; (ii) model sizes
ranging from 7B to 70B to accommodate GPU memory constraints; (iii) availability of the longest
context length and the latest version.
12https://chatglm.cn/main/doc
13https://kimi.moonshot.cn
14https://claude.ai/chats
15https://tongyi.aliyun.com/qianwen
16https://yiyan.baidu.com
6
Table 4: Results on DOCBENCH across various types and domains. Ver./Size stands for the model
version or size; File denotes the maximum uploaded file size; Cxt. refers to model’s context length.
[The body of Table 4 could not be recovered column-by-column from the extracted text. For each method
it reports the form (API, Web, or open-source), the model version or size, the maximum uploaded file
size or context length, accuracy on each domain (Academia, Finance, Government, Laws, News), accuracy
on each question type (Text-only, Multi-modal, Meta-data, Unanswerable), and overall accuracy. The rows
cover a human reference (81.2 overall), the proprietary LLM-based systems GPT-4, GPT-4o, GLM-4,
KimiChat, Claude-3, Qwen-2.5, and ERNIE-3.5, and parse-then-read pipelines built on GPT-4, GPT-3.5,
ChatGLM3, Gemma, Mixtral, InternLM2, Llama-3, Yi-1.5, Llama-2, Phi-3, InternLM2-20B, Yi-1.5-34B,
Command-R, Mixtral-8x7B, and Llama-3-70B.]
3.2 Results and Discussion
Table 4 showcases the performance of various document reading systems on DOCBENCH. Our
findings reveal substantial variations in document reading capabilities among these systems, driven by
differences in their foundational models, context-length limitations, and diverse design and implementation
approaches. In this section, we discuss the pros and cons of existing systems in more depth and highlight
the core challenges posed by DOCBENCH.
3.2.1 Interpreting Multi-modal and Metadata Information
Figure 5 presents a case study illustrating the unique challenge of answering multi-modal questions in
DOCBENCH. We observe that leading proprietary LLM-based systems often fail due to errors in
one of the steps in the Location→Extraction→Calculation sequence. Take the first case study as
an example, in the first step, KimiChat fails to locate the relevant chart on page 17. In the extraction
phase, Claude-3 misidentifies the data as 288 & 348, instead of the correct 326 & 390. Finally,
while GPT-4 locates and extracts the correct information, it errs in calculating the percentage change,
demonstrating the complexity of these questions. Interestingly, parse-then-read pipelines can achieve
reasonable performance on multi-modal questions (e.g., 63.3% for GPT-4). This is likely because the
parsing process captures certain table information, and documents often include textual descriptions
of figures.
Figure 5: To address multi-modal questions in DOCBENCH, it is essential to: (i) identify the relevant
figure/table (Location); (ii) extract specific data (Extraction); (iii) perform necessary calculations
(Calculation). In the first case study, KimiChat fails to locate the figure, Claude-3 retrieves incorrect
data, and GPT-4, despite succeeding in the first two steps, struggles with the calculation.
Meanwhile, for metadata-related questions, current methods generally lack attention to global
information, resulting in relatively low performance (below 55%).
3.2.2 Handling Lengthy Documents
Figure 6: Average accuracy (%) of two methods under five different domains.
Handling lengthy documents is demanding, especially in real-world scenarios where document size can
be virtually unlimited. Proprietary LLM-based systems struggle with uploading extensive files, while the
parse-then-read pipelines with open-sourced LLMs are constrained by their maximum context length,
leading to varying degrees of information loss. As shown in Figure 6, both methods perform poorly in
the finance domain but achieve higher performance in the news domain. This discrepancy arises because
financial documents are typically longer and contain richer information, whereas news files are limited
to single front pages with fewer messages. Furthermore, certain strong models with relatively short
context lengths may excel with smaller files, but context length becomes a crucial factor when it comes
to large files. For instance, the 8k Llama-3 family performs exceptionally well in the news domain, but is outperformed
by all the 128k models in the finance domain. Besides, we discover that KimiChat and Command-R,
which are specifically enhanced for long-context and Retrieval-Augmented Generation (RAG) capa-
bilities, achieve decent results on text-only questions. Therefore, a key challenge lies in adapting
these systems to handle documents of varying lengths while balancing the foundational model’s
capabilities and context length constraints.
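One simple way a parse-then-read pipeline can cope with this constraint is sketched below: the parsed
document is truncated to the model's token budget while keeping the head and tail. This is an
illustrative strategy only, not the method used by any particular system evaluated here, and the
tokenizer choice is an assumption.

```python
# Illustrative truncation of a long parsed document to fit a model's context window.
from transformers import AutoTokenizer

def truncate_to_context(text, model_name, max_tokens, reserve_for_prompt=1024):
    tok = AutoTokenizer.from_pretrained(model_name)
    ids = tok.encode(text)
    budget = max_tokens - reserve_for_prompt
    if len(ids) <= budget:
        return text
    half = budget // 2
    kept = ids[:half] + ids[-(budget - half):]  # keep head and tail, drop the middle
    return tok.decode(kept)
```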
3.2.3 Faithfulness to User-provided Documents
Most existing document reading systems falter when faced with unanswerable questions based
on the provided document, exhibiting a lack of fidelity. Remarkably, Gemma and KimiChat
perform better in such scenarios, which represents a crucial capability since users often expect
systems to answer questions strictly based on the given files. Intriguingly, despite sharing GPT-4 as the
base model, there is a notable performance gap between the system and the parse-then-read pipeline in
handling unanswerable questions (37.1% and 70.2% for the system and the pipeline, respectively). We
attribute this to two possible causes: (i) the proprietary LLM-based system has undergone optimizations
on top of the base model, potentially causing overfitting; and (ii) GPT-4 tends to adhere more closely to
in-context information. This phenomenon underscores a critical challenge for future document reading
systems: enhancing fidelity to the given documents.
4 Related Works
4.1 Recent Advances of LLMs and LLM-based Systems
The latest generation of LLMs, such as GPT-4 [2], Llama-3 [37] and Claude-3 [3], have significantly
extended the capabilities of language models [7, 40, 50]. These models are pre-trained on vast amounts
of web-scale data, enabling them to perform a wide range of human-instructed tasks with impressive
performance. Despite their remarkable performance, standalone LLMs may not be sufficient for many
real-world applications. For example, LLMs lack access to real-time information and may struggle
with tasks that require up-to-date knowledge [38]. Moreover, real-world applications often require
non-text inputs parsing, code execution, API calling and interaction with external environments
[15, 18, 21, 23, 44, 52]. The overall task completion usually requires multiple reasoning, execution
and reflection steps that cannot be accomplished in a simple input-output manner [33, 41, 47]. To
overcome the limitations of standalone LLMs, recent efforts have incorporated additional components
and sophisticated system design. These systems, such as Microsoft’s Co-Pilot17 and OpenAI’s GPT-4
all-in-one18, aim to provide more comprehensive and practical solutions for real-world applications.
Other pioneering efforts on designing LLM-based systems include web agents [16, 26, 51], software
agents [21, 46] and computer agents [43] that can interact with external resources (e.g., websites,
search engine, code repositories or computers) and perform multi-step tasks. The success of these
systems relies on integrating powerful LLMs with well-designed architectures and components that
enable them to handle complex tasks effectively.
4.2 Document reading: Datasets and Methods
Document reading is a critical area where LLM-based systems have demonstrated significant ad-
vancements. Proprietary developers such as OpenAI19 and Anthropic20 have introduced advanced
systems that can take a user-provided document as input, parse its structure, extract relevant meta-
data, and handle long texts and images to provide accurate responses. While these systems build
upon the fundamental capabilities of their underlying LLMs [2–4, 49], they differ in their design
and implementation, with some systems excelling in long-context reading and others focusing on
retrieval-augmented methods to improve document reading ability. Despite claims of effectiveness
and efficiency in online public blogs, the absence of a standardized benchmark makes it difficult to
objectively evaluate and compare the document reading performance across these systems. Existing
benchmarks relevant to document reading are unable to adequately reflect the real performance
of these systems. Datasets focusing on document understanding such as Doc2Dial [13], Condi-
tionalQA [34] and those specifically focusing on long-context reading like NarrativeQA [19] and
17https://copilot.microsoft.com
18https://chat.openai.com
19OpenAI’s ChatGPT: https://chat.openai.com
20Anthropic’s Claude: https://claude.ai/chats
QuALITY [32], primarily use text as input only, ignoring the complex nature of document struc-
ture and multi-modal information. On the other hand, multi-modal document reading datasets like
DocVQA [29], ChartQA [27], OCR-VQA [30], and InfoVQA [28] include multi-modal inputs and
preserve the original document structure and layout. However, these datasets often capture only parts
of a document (e.g., tables or figures) and ignore a substantial amount of textual content. Different
from previous works, DocBench requires systems to process the full documents as intact files and
covers different types of questions targeting various abilities, which can more accurately evaluate the
capabilities of LLM-based document reading systems in real-world scenarios.
5 Conclusion
In this paper, we introduce DOCBENCH, a novel benchmark created to assess LLM-based document
reading systems in a comprehensive and fine-grained manner. DOCBENCH consists of 229 documents
and 1,102 questions, spanning 5 domains and 4 question types, developed with the help of human
annotators and synthetic questions. We evaluate both proprietary LLM systems, accessible via web
interfaces or APIs, and a parse-then-read approach using open-source LLMs. Our findings reveal
significant disparities in document reading capabilities among these systems, highlighting current
limitations, presenting potential challenges, and thus driving forward progress in this research field.
References
[1] Marah Abdin, Sam Ade Jacobs, Ammar Ahmad Awan, Jyoti Aneja, Ahmed Awadallah, Hany
Awadalla, Nguyen Bach, Amit Bahree, Arash Bakhtiari, Harkirat Behl, et al. 2024. Phi-3
technical report: A highly capable language model locally on your phone. arXiv preprint
arXiv:2404.14219 (2024).
[2] Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni
Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. 2023.
Gpt-4 technical report. arXiv preprint arXiv:2303.08774 (2023).
[3] Anthropic. 2024. Claude 3 haiku: our fastest model yet. (2024). https://www.anthropic.
com/news/claude-3-haiku.
[4] Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin
Ge, Yu Han, Fei Huang, et al. 2023. Qwen technical report. arXiv preprint arXiv:2309.16609
(2023).
[5] Abeba Birhane, Atoosa Kasirzadeh, David Leslie, and Sandra Wachter. 2023. Science in the
age of large language models. Nature Reviews Physics 5, 5 (2023), 277–280.
[6] Zheng Cai, Maosong Cao, Haojiong Chen, Kai Chen, Keyu Chen, Xin Chen, Xun Chen, Zehui Chen,
Zhi Chen, Pei Chu, et al. 2024. InternLM2 technical report. arXiv preprint arXiv:2403.17297 (2024).
[7] Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen,
Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, et al. 2024. A survey on evaluation of large
language models. ACM Transactions on Intelligent Systems and Technology 15, 3 (2024), 1–45.
[8] Zhiyu Zoey Chen, Jing Ma, Xinlu Zhang, Nan Hao, An Yan, Armineh Nourbakhsh, Xianjun
Yang, Julian McAuley, Linda Petzold, and William Yang Wang. 2024. A Survey on Large
Language Models for Critical Societal Domains: Finance, Healthcare, and Law. arXiv preprint
arXiv:2405.01769 (2024).
[9] CohereAI. 2024. Introducing Command R. (2024). https://docs.cohere.com/docs/
command-r
[10] Jiaxi Cui, Zongjian Li, Yang Yan, Bohua Chen, and Li Yuan. 2023. Chatlaw: Open-
source legal large language model with integrated external knowledge bases. arXiv preprint
arXiv:2306.16092 (2023).
[11] Pradeep Dasigi, Kyle Lo, Iz Beltagy, Arman Cohan, Noah A Smith, and Matt Gardner. 2021.
A dataset of information-seeking questions and answers anchored in research papers. arXiv
preprint arXiv:2105.03011 (2021).
[12] Zhengxiao Du, Yujie Qian, Xiao Liu, Ming Ding, Jiezhong Qiu, Zhilin Yang, and Jie Tang.
2021. Glm: General language model pretraining with autoregressive blank infilling. arXiv
preprint arXiv:2103.10360 (2021).
[13] Song Feng, Hui Wan, Chulaka Gunasekara, Siva Patel, Sachindra Joshi, and Luis Lastras. 2020.
doc2dial: A Goal-Oriented Document-Grounded Dialogue Dataset. In Proceedings of the 2020
Conference on Empirical Methods in Natural Language Processing (EMNLP), Bonnie Webber,
Trevor Cohn, Yulan He, and Yang Liu (Eds.). Association for Computational Linguistics, Online,
8118–8128. https://doi.org/10.18653/v1/2020.emnlp-main.652
[14] Jinlan Fu, See-Kiong Ng, Zhengbao Jiang, and Pengfei Liu. 2023. GPTScore: Evaluate as You
Desire. arXiv:2302.04166 [cs.CL]
[15] Siyuan Guo, Cheng Deng, Ying Wen, Hechang Chen, Yi Chang, and Jun Wang. 2024. DS-
Agent: Automated Data Science by Empowering Large Language Models with Case-Based
Reasoning. arXiv:2402.17453 [cs.LG]
[16] Hongliang He, Wenlin Yao, Kaixin Ma, Wenhao Yu, Yong Dai, Hongming Zhang, Zhenzhong
Lan, and Dong Yu. 2024. WebVoyager: Building an End-to-End Web Agent with Large
Multimodal Models. arXiv preprint arXiv:2401.13919 (2024).
[17] Albert Q Jiang, Alexandre Sablayrolles, Antoine Roux, Arthur Mensch, Blanche Savary, Chris
Bamford, Devendra Singh Chaplot, Diego de las Casas, Emma Bou Hanna, Florian Bressand,
et al. 2024. Mixtral of experts. arXiv preprint arXiv:2401.04088 (2024).
[18] Carlos E. Jimenez, John Yang, Alexander Wettig, Shunyu Yao, Kexin Pei, Ofir Press, and
Karthik Narasimhan. 2023. SWE-bench: Can Language Models Resolve Real-World GitHub
Issues? arXiv:2310.06770 [cs.CL]
[19] Tomáš Koˇcisk`y, Jonathan Schwarz, Phil Blunsom, Chris Dyer, Karl Moritz Hermann, Gábor
Melis, and Edward Grefenstette. 2018. The narrativeqa reading comprehension challenge.
Transactions of the Association for Computational Linguistics 6 (2018), 317–328.
[20] Woosuk Kwon, Zhuohan Li, Siyuan Zhuang, Ying Sheng, Lianmin Zheng, Cody Hao Yu, Joseph
Gonzalez, Hao Zhang, and Ion Stoica. 2023. Efficient memory management for large language
model serving with pagedattention. In Proceedings of the 29th Symposium on Operating Systems
Principles. 611–626.
[21] Cognition Labs. 2024. Devin, AI software engineer. (2024). https://www.cognition.ai/
blog/introducing-devin
[22] Jinqi Lai, Wensheng Gan, Jiayang Wu, Zhenlian Qi, and Philip S Yu. 2023. Large language
models in law: A survey. arXiv preprint arXiv:2312.03718 (2023).
[23] Kuang-Huei Lee, Xinyun Chen, Hiroki Furuta, John Canny, and Ian Fischer. 2024. A
Human-Inspired Reading Agent with Gist Memory of Very Long Contexts. arXiv preprint
arXiv:2402.09727 (2024).
[24] Yang Liu, Dan Iter, Yichong Xu, Shuohang Wang, Ruochen Xu, and Chenguang Zhu. 2023.
G-Eval: NLG Evaluation using Gpt-4 with Better Human Alignment. In Proceedings of the
2023 Conference on Empirical Methods in Natural Language Processing, Houda Bouamor, Juan
Pino, and Kalika Bali (Eds.). Association for Computational Linguistics, Singapore, 2511–2522.
https://doi.org/10.18653/v1/2023.emnlp-main.153
[25] Zhuang Liu, Degen Huang, Kaiyu Huang, Zhuang Li, and Jun Zhao. 2021. Finbert: A pre-
trained financial language representation model for financial text mining. In Proceedings of the
twenty-ninth international conference on international joint conferences on artificial intelligence.
4513–4519.
[26] Kaixin Ma, Hongming Zhang, Hongwei Wang, Xiaoman Pan, Wenhao Yu, and Dong Yu. 2023.
LASER: LLM Agent with State-Space Exploration for Web Navigation. arXiv:2309.08172 [cs.CL]
[27] Ahmed Masry, Do Xuan Long, Jia Qing Tan, Shafiq Joty, and Enamul Hoque. 2022. Chartqa:
A benchmark for question answering about charts with visual and logical reasoning. arXiv
preprint arXiv:2203.10244 (2022).
[28] Minesh Mathew, Viraj Bagal, Rubèn Tito, Dimosthenis Karatzas, Ernest Valveny, and CV Jawa-
har. 2022. Infographicvqa. In Proceedings of the IEEE/CVF Winter Conference on Applications
of Computer Vision. 1697–1706.
[29] Minesh Mathew, Dimosthenis Karatzas, and CV Jawahar. 2021. Docvqa: A dataset for vqa
on document images. In Proceedings of the IEEE/CVF winter conference on applications of
computer vision. 2200–2209.
[30] Anand Mishra, Shashank Shekhar, Ajeet Kumar Singh, and Anirban Chakraborty. 2019. Ocr-
vqa: Visual question answering by reading text in images. In 2019 international conference on
document analysis and recognition (ICDAR). IEEE, 947–952.
[31] OpenAI. 2022. Introducing chatgpt. (2022). https://openai.com/blog/chatgpt.
[32] Richard Yuanzhe Pang, Alicia Parrish, Nitish Joshi, Nikita Nangia, Jason Phang, Angelica
Chen, Vishakh Padmakumar, Johnny Ma, Jana Thompson, He He, et al. 2022. QuALITY:
Question Answering with Long Input Texts, Yes!. In Proceedings of the 2022 Conference of the
North American Chapter of the Association for Computational Linguistics: Human Language
Technologies. 5336–5358.
[33] Noah Shinn, Federico Cassano, Edward Berman, Ashwin Gopinath, Karthik Narasimhan,
and Shunyu Yao. 2023. Reflexion: Language Agents with Verbal Reinforcement Learning.
arXiv:2303.11366 [cs.AI]
[34] Haitian Sun, William Cohen, and Ruslan Salakhutdinov. 2022. ConditionalQA: A Complex
Reading Comprehension Dataset with Conditional Answers. In Proceedings of the 60th Annual
Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Smaranda
Muresan, Preslav Nakov, and Aline Villavicencio (Eds.). Association for Computational Linguis-
tics, Dublin, Ireland, 3627–3637. https://doi.org/10.18653/v1/2022.acl-long.253
[35] Gemini Team, Rohan Anil, Sebastian Borgeaud, Yonghui Wu, Jean-Baptiste Alayrac, Jiahui Yu,
Radu Soricut, Johan Schalkwyk, Andrew M Dai, Anja Hauth, et al. 2023. Gemini: a family of
highly capable multimodal models. arXiv preprint arXiv:2312.11805 (2023).
[36] Gemma Team, Thomas Mesnard, Cassidy Hardin, Robert Dadashi, Surya Bhupatiraju, Shreya
Pathak, Laurent Sifre, Morgane Rivière, Mihir Sanjay Kale, Juliette Love, et al. 2024. Gemma:
Open models based on gemini research and technology. arXiv preprint arXiv:2403.08295
(2024).
[37] Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei,
Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. 2023. Llama 2:
Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288 (2023).
[38] Tu Vu, Mohit Iyyer, Xuezhi Wang, Noah Constant, Jerry Wei, Jason Wei, Chris Tar, Yun-Hsuan
Sung, Denny Zhou, Quoc Le, et al. 2023. Freshllms: Refreshing large language models with
search engine augmentation. arXiv preprint arXiv:2310.03214 (2023).
[39] Jiaan Wang, Yunlong Liang, Fandong Meng, Zengkui Sun, Haoxiang Shi, Zhixu Li, Jinan Xu,
Jianfeng Qu, and Jie Zhou. 2023. Is ChatGPT a Good NLG Evaluator? A Preliminary Study.
In Proceedings of the 4th New Frontiers in Summarization Workshop, Yue Dong, Wen Xiao,
Lu Wang, Fei Liu, and Giuseppe Carenini (Eds.). Association for Computational Linguistics,
Singapore, 1–11. https://doi.org/10.18653/v1/2023.newsum-1.1
[40] Lei Wang, Chen Ma, Xueyang Feng, Zeyu Zhang, Hao Yang, Jingsen Zhang, Zhiyuan Chen,
Jiakai Tang, Xu Chen, Yankai Lin, et al. 2024. A survey on large language model based
autonomous agents. Frontiers of Computer Science 18, 6 (2024), 1–26.
[41] Xingyao Wang, Yangyi Chen, Lifan Yuan, Yizhe Zhang, Yunzhu Li, Hao Peng, and Heng Ji.
2024. Executable Code Actions Elicit Better LLM Agents. In ICML. arXiv:2402.01030
[42] Shijie Wu, Ozan Irsoy, Steven Lu, Vadim Dabravolski, Mark Dredze, Sebastian Gehrmann,
Prabhanjan Kambadur, David Rosenberg, and Gideon Mann. 2023. Bloomberggpt: A large
language model for finance. arXiv preprint arXiv:2303.17564 (2023).
[43] Zhiyong Wu, Chengcheng Han, Zichen Ding, Zhenmin Weng, Zhoumianze Liu, Shunyu Yao,
Tao Yu, and Lingpeng Kong. 2024. OS-Copilot: Towards Generalist Computer Agents with
Self-Improvement. arXiv:2402.07456 [cs.AI]
[44] Tianbao Xie, Danyang Zhang, Jixuan Chen, Xiaochuan Li, Siheng Zhao, Ruisheng Cao, Toh Jing
Hua, Zhoujun Cheng, Dongchan Shin, Fangyu Lei, et al. 2024. OSWorld: Benchmarking
Multimodal Agents for Open-Ended Tasks in Real Computer Environments. arXiv preprint
arXiv:2404.07972 (2024).
[45] Hongyang Yang, Xiao-Yang Liu, and Christina Dan Wang. 2023. Fingpt: Open-source financial
large language models. arXiv preprint arXiv:2306.06031 (2023).
[46] John Yang, Carlos E Jimenez, Alexander Wettig, Kilian Lieret, Shunyu Yao, Karthik
Narasimhan, and Ofir Press. 2024. SWE-agent: Agent-Computer Interfaces Enable Auto-
mated Software Engineering. arXiv preprint arXiv:2405.15793 (2024).
[47] Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik R Narasimhan, and Yuan
Cao. 2023. ReAct: Synergizing Reasoning and Acting in Language Models. In The Eleventh
International Conference on Learning Representations.
[48] Alex Young, Bei Chen, Chao Li, Chengen Huang, Ge Zhang, Guanwei Zhang, Heng Li,
Jiangcheng Zhu, Jianqun Chen, Jing Chang, et al. 2024. Yi: Open foundation models by 01. ai.
arXiv preprint arXiv:2403.04652 (2024).
[49] Aohan Zeng, Xiao Liu, Zhengxiao Du, Zihan Wang, Hanyu Lai, Ming Ding, Zhuoyi Yang,
Yifan Xu, Wendi Zheng, Xiao Xia, et al. 2022. Glm-130b: An open bilingual pre-trained model.
arXiv preprint arXiv:2210.02414 (2022).
[50] Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang, Xiaolei Wang, Yupeng Hou, Yingqian Min,
Beichen Zhang, Junjie Zhang, Zican Dong, et al. 2023. A survey of large language models.
arXiv preprint arXiv:2303.18223 (2023).
[51] Boyuan Zheng, Boyu Gou, Jihyung Kil, Huan Sun, and Yu Su. 2024. GPT-4V(ision) is a
Generalist Web Agent, if Grounded. arXiv:2401.01614 [cs.IR]
[52] Shuyan Zhou, Frank F Xu, Hao Zhu, Xuhui Zhou, Robert Lo, Abishek Sridhar, Xianyi Cheng,
Yonatan Bisk, Daniel Fried, Uri Alon, et al. 2023. Webarena: A realistic web environment for
building autonomous agents. arXiv preprint arXiv:2307.13854 (2023).
A Instruction Prompts
A.1 Response Evaluation
Detailed instruction prompts for response evaluation are shown in Table 5.
A.2 QA-pair Generation
Details of instruction prompts for generating QA pairs are attached in Table 6. We discover that
simply passing diagrams to GPT-4V leads to subpar question quality. This issue likely stems from the
fact that figures or tables without accompanying text descriptions typically lack sufficient information,
thus causing the generated QA pairs to deviate from their intended meanings. In addition, we observe
that adding difficulty settings for QA generation (e.g., Easy, Medium, Hard) in the instruction prompt
can result in higher quality. We suspect this is because asking for several difficulty levels implicitly
prompts the model to compare its own candidates, which favors higher-quality generations.
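For illustration, the sketch below shows how the text-only prompt from Table 6 could be sent to GPT-4
and the returned "Q: | A: | ..." lines post-processed into QA pairs. The prompt is abridged here, and
the parsing logic is an assumption about one reasonable way to handle such responses, not the exact
generation code.

```python
# Sketch of synthetic QA-pair generation using the (abridged) Table 6 text-only prompt.
from openai import OpenAI

client = OpenAI()

TEXT_ONLY_PROMPT = (
    "Based on the above text, please design three question-answer pairs with different "
    "levels of difficulty: Easy, Medium, Hard. ... "  # abridged; see Table 6 for full wording
    "The output should be formalized as: '''Q: | A: | Reference: | Difficulty Level: | Answer Form:'''"
)

def generate_qa(document_text):
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "You are a helpful assistant that can generate question-answer pairs."},
            {"role": "user", "content": document_text + "\n\n" + TEXT_ONLY_PROMPT},
        ],
    )
    raw = resp.choices[0].message.content
    pairs = []
    for line in raw.splitlines():
        if line.strip().startswith("Q:"):
            pairs.append([field.strip() for field in line.split("|")])
    return pairs
```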
Table 5: Instruction Prompts in Response Evaluation.
System Content:
You are a helpful evaluator.
Prompt:
Task Overview:
You are tasked with evaluating user answers based on a given question, reference answer, and
additional reference text. Your goal is to assess the correctness of the user answer using a specific
metric.
Evaluation Criteria:
1. Yes/No Questions: Verify if the user’s answer aligns with the reference answer in terms of a
"yes" or "no" response.
2. Short Answers/Directives: Ensure key details such as numbers, specific nouns/verbs, and
dates match those in the reference answer.
3. Abstractive/Long Answers: The user’s answer can differ in wording but must convey the
same meaning and contain the same key information as the reference answer to be considered
correct.
Evaluation Process:
1. Identify the type of question presented.
2. Apply the relevant criteria from the Evaluation Criteria.
3. Compare the user’s answer against the reference answer accordingly.
4. Consult the reference text for clarification when needed.
5. Score the answer with a binary label 0 or 1, where 0 denotes wrong and 1 denotes correct.
NOTE that if the user answer is 0 or an empty string, it should get a 0 score.
Question: {{question}}
User Answer: {{sys_ans}}
Reference Answer: {{ref_ans}}
Reference Text: {{ref_text}}
Evaluation Form (score ONLY):
- Correctness:
B Performance Comparison
Figure 7 demonstrates the relative performance of LLM-based systems and parse-then-read pipelines
against the best on DOCBENCH. For LLM-based systems, KimiChat consistently scores high across
various metrics, demonstrating balanced performance. Notably, GPT-4 performs poorly in the
unanswerable category, indicating potential overfitting in optimized GPT-4 file systems, which leads
to decreased fidelity to given documents. Additionally, Claude-3 excels in the meta-data category,
highlighting its superior ability to comprehend high-level metadata information. For parse-then-read
pipelines, we select models with the highest overall accuracy for comparison. Unlike LLM-based
systems, GPT-4 demonstrates consistently high and balanced performance across all aspects within
this pipeline. Notably, significant discrepancies arise in handling multi-modal and unanswerable
questions, where GPT-4 and Gemma exhibit clear distinctions from the remaining methods.
C Analysis of Input Sources
Table 7 presents the impact of different input sources on model performance. We provide questions to
GPT-4 and GPT-4o, both with and without attached files. Remarkably, even without files, the models
correctly answer a portion of the questions (19.1% for GPT-4 and 21.7% for GPT-4o). Our analysis
reveals that the correctly answered questions are predominantly textual and are largely associated
with government, law, and news domains. This trend suggests that the models’ underlying training
Table 6: Instruction Prompts in QA-pair Generation.
System Content:
You are a helpful assistant that can generate question-answer pairs.
Text-only QA:
Based on the above text, please design three question-answer pairs with different levels of
difficulty: Easy, Medium, Hard.
The questions should be close-ended and should be answered based on the provided text.
The answer form should be as diverse as possible, including [Yes/No, Short Answer, Long
Answer, Abstractive Answer].
You should provide the reference in the text and the answer form if possible.
The output should be formalized as: ”’Q: | A: | Reference: | Difficulty Level: | Answer Form:”’
Multimodal QA (w/table+text):
Based on the above table and text, please design three question-answer pairs with different
levels of difficulty: Easy, Medium, Hard.
The text provided is text related to the table, which can provide more reference for question
generation, but the focus is still on the table itself.
These questions require locating the specific information, simple or complex calculations,
comparisons, finding the maximum and minimum, reading across rows and columns, etc.
Note that these questions also need to be realistic. You should provide the reason if possible.
The output should be formalized as: ”’Q: | A: | Reference: | Difficulty Level: | Answer Form:”’
Multimodal QA (w/figure+text):
Based on the above figure and text, please design three question-answer pairs with different
levels of difficulty: Easy, Medium, Hard.
The text provided is text related to the figure, which can provide more reference for question
generation, but the focus is still on the figure itself.
These questions require a deep reading of the meaning of the image.
Note that these questions also need to be realistic. You should provide the reason if possible.
The output should be formalized as: ”’Q: | A: | Reason: | Difficulty Level: | ”’
Multimodal QA (w/table):
Based on the above image, please design three question-answer pairs with different levels of
difficulty: Easy, Medium, Hard.
These questions require locating the specific information, simple or complex calculations,
comparisons, finding the maximum and minimum, reading across rows and columns, etc.
Note that these questions also need to be realistic. You should provide the reason if possible.
The output should be formalized as: ”’Q: | A: | Reason: | Difficulty Level: | ”’
Multimodal QA (w/figure):
Based on the above image, please design three question-answer pairs with different levels of
difficulty: Easy, Medium, Hard.
These questions require a deep reading of the meaning of the image.
Note that these questions also need to be realistic. You should provide the reason if possible.
The output should be formalized as: ”’Q: | A: | Reason: | Difficulty Level: | ”’
data is heavily skewed towards these categories, enabling them to answer some questions accurately
without additional files. Moreover, as GPT-4o is an optimized version of GPT-4, it likely benefits
from a broader and more extensive training data.
Figure 7: Performance (Relative) of two major methods on DOCBENCH against the best.
Table 7: Analyzing the Influence of Input Sources: We deliver questions with attached files and
without files to GPT-4 and GPT-4o for evaluation, respectively.

Methods           | Aca. | Fin. | Gov. | Laws | News | Text. | Multi. | Meta. | Una. | Overall Acc.
GPT-4  (w/ file)  | 65.7 | 65.3 | 75.7 | 69.6 | 79.6 | 87.9  | 74.7   | 50.8  | 37.1 | 69.8
GPT-4  (w/o file) | 10.9 | 10.8 | 23.0 | 29.3 | 32.6 | 40.8  |  8.1   |  1.6  | 10.5 | 19.1
GPT-4o (w/ file)  | 56.4 | 56.3 | 73.0 | 65.5 | 75.0 | 85.0  | 62.7   | 50.4  | 17.7 | 63.1
GPT-4o (w/o file) | 11.2 | 13.5 | 29.1 | 31.9 | 36.0 | 46.6  | 10.7   |  2.3  |  6.5 | 21.7
ai_researcher | 2 | Filter-then-Generate_Large_Language_Models_with_Structure-Text_Adapter_for_Knowledge_Graph_Completion.pdf | 8
1
0
2
r
a
M
2
2
]
P
A
.
t
a
t
s
[
1
v
3
0
5
8
0
.
3
0
8
1
:
v
i
X
r
a
Kalman Filter, Unscented Filter and Particle Flow
Filter on Non-linear Models
Author: Yan Zhao
Advisor: prof. Zhongqiang Zhang
Contents
1 Kalman Filter
  1.0.1 Linear Dynamic Systems in Discrete Time
  1.0.2 Example of Application
  1.0.3 Solving for Kalman Gain
  1.0.4 Solving for Priori and Posterior Estimation
  1.0.5 Solving for Prior and Posterior Covariance
  1.0.6 Results for Yield and Real Return Model
2 Unscented Filtering and Nonlinear Estimation
  2.0.1 General Algorithms for Unscented Kalman Filter
  2.0.2 Implementation for Yield and Real Return Model
  2.0.3 Results for Yield and Real Return Model
3 Particle Flow Filter
  3.0.1 Generalized Gromov Method for stochastic Particle Flow Filters
  3.0.2 Implementation of Particle Flow Filter
  3.0.3 Results for yield and real return model
Abstract
Filters, and especially the family of Kalman filters, have been shown to predict the variables of
stochastic models with higher accuracy than traditional statistical methods; updating the mean and
covariance at every step makes the resulting Bayesian inference more meaningful. In this paper we
focus on the derivation and implementation of three powerful filters: the Kalman filter, the unscented
Kalman filter, and the particle flow filter. Comparing these different types of filters clarifies which
applications each of them is best suited for.
Chapter 1
Kalman Filter
The Kalman filter, also called the linear quadratic estimator (LQE), is used to minimize the estimation
error for unknown variables in noisy stochastic systems. It works recursively, updating its estimate as
observed measurements arrive over time. It involves two models: the plant model, which carries the
plant noise and is used to generate the prior estimate of the current state variables, and the
measurement model, which carries the observation noise and is used to update that estimate into the
posterior estimate. The Kalman filter has wide applications, such as predicting natural weather and
the prices of traded commodities. It has also been used to monitor complex dynamic systems, such as
signal processing in GPS and motion monitoring in robotics. The Kalman filter works perfectly on
linear models, and its extensions, the extended Kalman filter and the unscented Kalman filter, have
been applied to non-linear problems.
1.0.1 Linear Dynamic Systems in Discrete Time
We suppose that the stochastic systems can be presented by the following:
Plant model:
xk = φk−1xk−1 + wk−1 with wk ∽ N (0, Qk)
(1.1)
Measurement model:
zk = Hkxk + vk with vk ∽ N (0, Rk)
(1.2)
vk and wk are assumed as independent normal random processes with mean
of zero. xk has known initial value of x0 and known initial covariance matrix
P0. The goal is to find the estimations of ˆxk presented by function of zk
such that the mean-squared error is minimized. Denote Pk(−) as the prior
covariance matrix for x at time k, Pk(+) as the posterior covariance matrix
for x at time k, ¯Kk as Kalman gain at time k, ˆxk(−) as the prior estimate
of xk and ˆxk(+) as the posterior estimate of xk. By using orthogonality, we
can prove the following updating equations:
$$P_k(-) = \phi_{k-1} P_{k-1}(+) \phi_{k-1}^T + Q_{k-1} \quad (1.3)$$
$$\bar{K}_k = P_k(-) H_k^T \left[ H_k P_k(-) H_k^T + R_k \right]^{-1} \quad (1.4)$$
$$P_k(+) = \left[ I - \bar{K}_k H_k \right] P_k(-) \quad (1.5)$$
$$\hat{x}_k(-) = \phi_{k-1} \hat{x}_{k-1}(+) \quad (1.6)$$
$$\hat{x}_k(+) = \hat{x}_k(-) + \bar{K}_k \left[ z_k - H_k \hat{x}_k(-) \right] \quad (1.7)$$
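As a concrete illustration of equations (1.3)–(1.7), the following NumPy sketch performs one
predict/update cycle. The variable names and function signature are illustrative assumptions; the
state-dependent process noise of the example model is handled separately in Section 1.0.2.

```python
# Minimal NumPy sketch of one Kalman predict/update cycle, equations (1.3)-(1.7).
import numpy as np

def kalman_step(x_post, P_post, z, Phi, H, Q, R):
    # Predict: a priori estimate and covariance, eqs. (1.6) and (1.3)
    x_prior = Phi @ x_post
    P_prior = Phi @ P_post @ Phi.T + Q
    # Kalman gain, eq. (1.4)
    S = H @ P_prior @ H.T + R
    K = P_prior @ H.T @ np.linalg.inv(S)
    # Update: a posteriori estimate and covariance, eqs. (1.7) and (1.5)
    x_post_new = x_prior + K @ (z - H @ x_prior)
    P_post_new = (np.eye(len(x_post)) - K @ H) @ P_prior
    return x_post_new, P_post_new
```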
1.0.2 Example of Application
Consider a dividend yield and S&P real return model for stocks, in which
Xn is dividend yield, δRn is real return and Yn is a two-dimensional vector
for the observation of Xn and δRn from year 1945 to 2010. ∆W1,n, ∆W2,n
are independent Brownian motion increments with
∆Wi,n = Wi,n+1 −
Wi,n, i = 1, 2
B1,n, B2,n are also independent Brownian motion increments. k, θ, σ, µ, a,
ρ, Q1 and Q2 are parameters with the given values as following:
Table 1.1: Parameters

k = 2.0714,  θ = 2.0451,  σ = 0.3003,  µ = 0.1907,  a = 0.9197,  ρ = 1.6309,  Q1 = 0.0310,  Q2 = −0.8857

$$Z_n = \begin{pmatrix} X_n \\ \delta R_n \end{pmatrix}
= \begin{pmatrix}
\frac{1}{1+k} X_{n-1} + \frac{k\theta}{1+k} + \frac{\sigma}{1+k}\sqrt{X_{n-1}}\,\Delta W_{1,n} \\
\mu X_n + a\sqrt{X_{n-1}}\left( \rho\,\Delta W_{1,n} + \sqrt{1-\rho^2}\,\Delta W_{2,n} \right)
\end{pmatrix}$$

$$Y_n = \begin{pmatrix} Y_{1,n} \\ Y_{2,n} \end{pmatrix}
= \begin{pmatrix} X_n + Q_1 B_{1,n} \\ \delta R_n + Q_2 B_{2,n} \end{pmatrix}$$
Rewriting Zn and Yn is necessary: in the plant and measurement models above, Zn should be a function
of Zn−1 and Yn a function of Zn. First, let us rewrite Zn. Xn is already expressed through Xn−1,
which is an element of the vector Zn−1, whereas δRn is expressed through Xn, so we need to rewrite
δRn in terms of Xn−1:
$$\delta R_n = \frac{\mu}{1+k} X_{n-1} + \frac{\mu k\theta}{1+k}
+ \left( \frac{\mu\sigma\sqrt{X_{n-1}}}{1+k} + a\rho\sqrt{X_{n-1}} \right) \Delta W_{1,n}
+ a\sqrt{X_{n-1}}\sqrt{1-\rho^2}\,\Delta W_{2,n} \quad (1.8)$$

Then,

$$Z_n = \begin{pmatrix} X_n \\ \delta R_n \end{pmatrix}
= \begin{pmatrix} \frac{1}{1+k} & 0 \\ \frac{\mu}{1+k} & 0 \end{pmatrix}
\begin{pmatrix} X_{n-1} \\ \delta R_{n-1} \end{pmatrix}
+ \begin{pmatrix} \frac{k\theta}{1+k} \\ \frac{\mu k\theta}{1+k} \end{pmatrix}
+ \sqrt{X_{n-1}}
\begin{pmatrix} \frac{\sigma}{1+k} & 0 \\ \frac{\mu\sigma}{1+k} + a\rho & a\sqrt{1-\rho^2} \end{pmatrix}
\begin{pmatrix} \Delta W_{1,n} \\ \Delta W_{2,n} \end{pmatrix} \quad (1.9)$$

Denote

$$\Phi = \begin{pmatrix} \frac{1}{1+k} & 0 \\ \frac{\mu}{1+k} & 0 \end{pmatrix}, \qquad
D = \begin{pmatrix} \frac{k\theta}{1+k} \\ \frac{\mu k\theta}{1+k} \end{pmatrix}, \qquad
C = \begin{pmatrix} \frac{\sigma}{1+k} & 0 \\ \frac{\mu\sigma}{1+k} + a\rho & a\sqrt{1-\rho^2} \end{pmatrix}, \qquad
W_n = \begin{pmatrix} \Delta W_{1,n} \\ \Delta W_{2,n} \end{pmatrix}$$

As a result, we can write Zn as:

$$Z_n = \Phi_{n-1} Z_{n-1} + D + \sqrt{X_{n-1}}\, C W_n \quad (1.10)$$
Next is to rewrite Yn:
Denote
Hn =
V =
We can rewrite Yn as :
Bn =
p
1 0
0 1
(cid:18)
(cid:18)
(cid:19)
Q1 0
0 Q2(cid:19)
B1,n
B2,n(cid:19)
(cid:18)
Yn = HnZn + V Bn
(1.11)
3
1.0.3 Solving for Kalman Gain
The optimal updated estimate ˆZn(+) is a linear function of a priori estimate
ˆZn(−) and measurement Yk, that is,
ˆZn(+) = K 1
n
ˆZn(−) + ¯KnYn
(1.12)
n and ¯Kn are unknown yet. We seek values of K 1
K 1
estimate ˆZn(+) satisfies the orthogonality principle:
n and ¯Kn such that the
E
[Zn −
h
ˆZn(+)]Y T
i i
= 0, f or i = 1, 2, ...n
1
−
(1.13)
If one expand Zn from equation(1.1) and Zn(+) from equation(1.12) into
equation(1.13), then one will obverse:
E
[Φn−1Zn−1+D+
h
Xn−1CWn−
Since Wn and Vn are uncorrelated, it follows that E
i
p
n
ˆZn(−)−
¯KnYn]Y T
i i
K 1
n
1. Using this result, one can get obtain the following result:
WnY T
i i
h
= 0, f or i = 1, 2, ...n
(1.14)
1
−
= 0 f or 1
≤
−
≤
E
[Φn−1Zn−1 + D
h
ˆZn(−) −
Then by substituting Yn using equation (1.11), one can get
¯KnYn]Y T
i i
K 1
n
= 0, f or i = 1, 2, ...n
−
1 (1.15)
−
E
[Φn−1Zn−1+D
h
ˆZn(−)−
Then equation(1.16) can be changed to the form
¯KnHnZn−
¯KnV Bn]Y T
i i
K 1
n
−
= 0, f or i = 1, 2, ...n
−
(1.16)
1
Φn−1E
Zn−1Y T
i i
h
¯KnV E
BnY T
i i
h
We also know that
−
+ DE
Y T
i i −
h
= 0, f or i = 1, 2, ...n
ˆZn(−)Y T
h
1
K 1
nE
i i −
−
BnY T
i i
h
Equation (1.17) can be reduced to the form
E
= 0, f or i = 1, 2, ...n
¯KnHnE
ZnY T
i i
h
(1.17)
1
−
E
Zn−1Y T
Φn−1E
i i
h
K 1
nZn −
[Zn −
h
K 1
nZn −
[Zn −
h
K 1
[I
n −
−
h
E
E
K 1
Y T
+ DE
i i −
h
¯KnHnZn]Y T
i i −
¯KnHnZn]Y T
i i
ZnY T
i i
h
= 0
¯KnHn]
E
i
= 0,
ˆZn(−)Y T
nE
h
K 1
nE
i i −
[ ˆZn(−) −
h
¯KnHnE
Zn]Y T
i i
ZnY T
i i
h
= 0,
= 0,
(1.18)
4
Equation (1.18) can be satisfied for any given Zn if
K 1
n = I
−
¯KnHn,
Thus, K 1
Define estimation errors after and before updates
n in equation (1.12) satisfied equation (1.19).
Zn
Zn
˜Zn(+)
, ˆZn(+) −
˜Zn(−)
, ˆZn(−) −
˜Yn , ˆYn(−) −
Yn
= HnZn(−) + Yn
Since ˜Yn(−) depends linearly on Yn, from equation (1.13),
E
[Zn −
h
ˆZn(+)] ˜Y T
n i
= 0
(1.19)
(1.20)
(1.21)
(1.22)
(1.23)
Substitute Zn, ˆZn(+), and ˜Yn from equations (1.10), (1.12), (1.22) respec-
tively. Then
E
Φn−1Zn−1 + D +
h
Xn−1CWn −
K 1
n
ˆZn(−) −
¯KnYn][Hn ˆZn(−) −
Yn]T
i
= 0.
p
By the orthogonality of
E
WnY T
n i
h
= E
WnX T
h
n(−)i
= 0,
We will obtain
Yn]T
i
= 0.
E
E
E
−
K 1
n
ˆZn(−))
¯KnYn][Hn ˆZn(−) −
Φn−1Zn−1+D
h
[(Zn−
h
E
Φn−1Zn−1 + D
h
Substituting for K 1
ˆZn(−) −
n, Yn and using equation (1.21)
ˆZn(−)+ ¯KHn ˆZn(−)−
−
ˆZn(−))
¯KnHn(Zn−
˜Zn(−) + ¯KnHn ˜Zn(−) −
˜Zn(−)
˜Z T
H T
n(−)i
h
n + ¯KnV E
H T
n(−)i
˜Zn(−)BT
Using the fact that E
n i
h
as follows:
[
−
h
I + ¯KnHn)E
B ˜Z T
¯KnV E
h
(
−
BnBT
n i
h
˜Z T
= E
¯KnZn−
¯KnV Bn][Hn( ˆZn(−)−
Zn)
−
−
¯KnV Bn][Hn ˜Zn(−) −
V Bn]T
i
I + ¯KnHn)E
˜Zn(−)BT
n i
n −
h
V T = 0
¯KnV Bn][Hn ˆZn(−)−
BT
n
h
n(−)i
(
−
−
−
V Bn]T
= 0,
V T
= 0,
i
(1.24)
= 0, this last result will be
HnZn−
V Bn]T
= 0,
i
(
−
I + ¯KnHn)E
˜Zn(−)
h
˜Z T
n(−)i
H T
n + ¯KnV E
BnBT
n i
h
V T = 0
(1.25)
5
For the second term of equation(1.25) ¯KnV E
BnBT
n i
h
V T :
¯Kn
Q1 0
0 Q2(cid:19)
(cid:18)
E
(cid:18)
B2
1n B2nB1n
B1nB2n B2
2n(cid:19) (cid:18)
Q1 0
0 Q2(cid:19)
= ¯Kn
Q1 0
0 Q2(cid:19) (cid:18)
(cid:18)
1 0
0 1
(cid:19) (cid:18)
Plugging the value of equation (1.26) to (1.25):
Q1 0
0 Q2(cid:19)
= ¯KnV 2
(1.26)
(
−
I + ¯KnHn)E
˜Zn(−)
h
˜Z T
n(−)i
H T
n + ¯KnV 2 = 0,
By definition, the error covariance matrix is Pn(−) = E
satisfies the equation:
˜Zn(−)
h
˜Z T
n(−)i
, it
I + ¯KnHn)Pn(−)H T
n + ¯KnV 2 = 0,
(
−
¯Kn(HnPn(−)H T
n + V 2) = Pn(−)H T
n ,
And therefore, Kalman gain can be expressed as:
¯Kn = Pn(−)H T
n (HnPn(−)H T
n + V 2)
−1,
(1.27)
which is the solution we want to seek as a function of priori covariance before
update.
1.0.4 Solving for Priori and Posterior Estimation
By definition, the priori estimation
ˆZn(−) = Φn−1 ˆZn(+) + D.
(1.28)
By substituting equation (1.19) into equation (1.12), one obtains the equa-
tions
ˆZn(+) = (I
ˆZn(+) = ˆZn(−) + ¯Kn(
¯KnHn) ˆZn(−) + ¯KnYn,
Hn ˆZn(−) + Yn)
−
−
(1.29)
Therefore, the posterior estimation we want to seek is a function of priori
estimation and kalman gain.
1.0.5 Solving for Prior and Posterior Covariance
One can derive a formula for posterior covariance, which is
Pn(+) = E
˜Zn(+)
h
˜Z T
n(+)i
(1.30)
6
By plugging equation (1.29) to equation (1.20), one obtains the equations
˜Zn(+) = ˆZn(+) −
= ˆZn(−) −
= ( ˆZn(−)Zn)
= (I
Zn = ˆZn(−) −
¯KnHn ˆZn(−) + ¯KnHnZn −
¯KnHn( ˆZn(−) −
−
¯KnHn) ˜Zn(−) + ¯KnV Bn
¯KnHn ˆZn(−) + ¯KnYn −
¯KnV Bn −
Zn) + ¯KnV Bn
−
Zn
Zn
By substituting equation (1.31) into equation (1.30) and noting that E
0, one obtains
(1.0.1)
˜Zn(−)BT
n i
h
=
[(I
h
−
¯KnHn) ˜Zn(−)
−
¯KnHn)Pn(−)(I
−
¯KnHnPn(−) −
Pn(+) = E
= E
(I
h
= (I
−
= Pn(−) −
= (I
−
¯KnHn)Pn(−) −
¯KnHn)Pn(−) −
¯KnHn)Pn(−)
= (I
= (I
−
−
¯KnHn) ˜Zn(−) + ¯KnV Bn][(I
¯KnHn) ˜Zn(−) + ¯KnV Bn]T
n V T ¯K T
n i
i
−
˜Z T
−
¯KnHn)T + ¯KnV BnBT
n(−)(I
¯KnHn)T + ¯KnV 2 ¯K T
n
¯K T + ¯KnHnPn(−)H T
¯K T
Pn(−)H T
n
n
n + ¯Kn(HnPn(−)H T
¯K T
n + v2) ¯K T
¯K T
n + Pn(−)H T
n
Pn(−)H T
n
Pn(−)H T
n
¯K T
n
n + ¯KnV 2 ¯K T
n
(1.32)
This is the final form of posterior covariance, which shows the effects of
kalman gains on priori covariance. Respectively, the definition of prior co-
variance
˜Zn(−)
h
By plugging equation (1.10) and equation (1.28) to equation (1.21), one
obtains the equations
˜Z T
n(−)i
Pn(−) = E
(1.33)
˜Zn(−) = Φn−1 ˆZn−1(+) + D
= Φn−1 ˆZn−1(+) + D
= Φn−1 ˜Zn−1(+) −
−
Zn −
−
Xn−1CWn
Zn
Φn−1Zn−1 −
Xn−1CWn
(1.34)
D
−
p
n−1 to obtain the results
Pn(−) = E
Uses the fact that E
p
˜Zn−1W T
h
[Φn−1 ˜Zn−1(+) −
h
−
˜Zn−1(+)
˜Z T
= Φn−1E
n−1 +
n−1(+)i
h
n−1 + Xn−1CC T
= Φn−1Pn−1(+)ΦT
√Xn
ΦT
1CWn][Φn−1 ˜Zn−1(+) −
C T
WnW T
Xn−1CE
n i
h
√Xn
−
Xn−1
1CWn]T
i
p
(1.35)
p
7
which gives a priori value of the covariance matrix as a function of the
previous posterior covariance.
Thus, the update equations for our yield and real return model are the following:

$$P_n(-) = \Phi_{n-1} P_{n-1}(+) \Phi_{n-1}^T + X_{n-1}\, C C^T \quad (1.35)$$
$$\bar{K}_n = P_n(-) H_n^T \left( H_n P_n(-) H_n^T + V^2 \right)^{-1} \quad (1.27)$$
$$P_n(+) = \left( I - \bar{K}_n H_n \right) P_n(-) \quad (1.32)$$
$$\hat{Z}_n(-) = \Phi_{n-1} \hat{Z}_{n-1}(+) + D \quad (1.28)$$
$$\hat{Z}_n(+) = \hat{Z}_n(-) + \bar{K}_n \left( Y_n - H_n \hat{Z}_n(-) \right) \quad (1.29)$$
These equations have the same form as equations (1.3) to (1.7); the differences arise because the
example model is not strictly linear and the noise term in the plant model depends on the previous
state.
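A minimal NumPy sketch of this recursion is given below. The arguments Phi, D, C, H, V are the
matrices defined in Section 1.0.2, Y is the array of yearly observations, and the initialization (zero
prior covariance, first state taken from the first observation) mirrors the description in the next
section; everything else, including the non-negativity guard on the yield, is an illustrative assumption.

```python
# Sketch of the filter recursion (1.35), (1.27), (1.32), (1.28), (1.29) for the example model.
import numpy as np

def kalman_filter_yield_return(Y, Phi, D, C, H, V):
    Z_post = np.array(Y[0], dtype=float)   # start from the first observation
    P_post = np.zeros((2, 2))              # P0 = 0
    estimates = [Z_post.copy()]
    for n in range(1, len(Y)):
        x_prev = max(Z_post[0], 0.0)        # keep the yield non-negative
        P_prior = Phi @ P_post @ Phi.T + x_prev * (C @ C.T)      # (1.35)
        S = H @ P_prior @ H.T + V @ V.T                          # V^2 term in (1.27)
        K = P_prior @ H.T @ np.linalg.inv(S)                     # (1.27)
        Z_prior = Phi @ Z_post + D                               # (1.28)
        Z_post = Z_prior + K @ (Y[n] - H @ Z_prior)              # (1.29)
        P_post = (np.eye(2) - K @ H) @ P_prior                   # (1.32)
        estimates.append(Z_post.copy())
    return np.array(estimates)
```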
1.0.6 Results for Yield and Real Return Model
By plugging value of Yield and Real Return from year 1945 to year 2010 to Yn
and setting the initial priori covariance as zero, one can repeat the algorithms
listed above to calculate kalman gain 65 times and correspondingly update
post covariance and posterior value of estimation. Set posterior estimation
as estimation for yield and real return, and one can plot real value and
estimation value on the same plot by using same time discretization. The
results are showing following:
The results are showing that kalman filter works well in first five to six
years with the same trend of move and estimation value approximating to
real value. After the fifth year, the value of estimations are far away from
real value but keeping the same trend of move. The reason for estimation
and real value deviating from fifth year is that the model is non-linear with
time. The results confirm that kalman filter perfectly works on linear model
and the first several steps of non-linear model, while it works worse on the
later part of non-linear model. Thus, the use of extended Kalman filter–
Unscented Kalman filter, is needed to solve this non-linear problem.
8
0.18
0.16
0.14
0.12
0.1
0.08
0.06
0.04
0.02
0
0
0.5
0.4
0.3
0.2
0.1
0
-0.1
-0.2
-0.3
-0.4
0
The plot for estimated yield and real yield
Estimated yield
Real yield
10
20
30
40
50
60
70
Year
(a) Yield
The plot for estimated return and real return
Estimated return
Real return
10
20
30
40
50
60
70
Year
(b) Return
Figure 1.1: Kalman Filter Results for Yield and Real Return
9
Chapter 2
Unscented Filtering and
Nonlinear Estimation
−
The extended Kalman Filter (EKF) has been widely used to deal with non-
linear problem. However, it is hard to implement and the results are often
inaccurate. the Unscented transformation (UT) has been developed as an
improvement to utilize information of mean and covariance to accurate re-
sults and make it easier to implement. The method is to select sigma points
according to their mean µx and covariance σx (i.e. choosing data in range
2σx, 2σx]). The non-linear function is applied to each point to generate
of [
a cloud of points. Then transformed mean and covariance can be obtained
from calculating mean and variance of those sigma points. There are two
advantages of using UT transformation. The first is selected sigma points
are no longer randomly chosen but containing information of an unknown
distribution, which is sufficient to operate statistic computation. Further-
more, mean and covariance are linearly transformable (i.e. mean ¯x will be
T ¯x after operating transformation T, and covariance Σx will be T ΣxT T )
The second is weights for sigma points can be adjusted in ways such that
more points around mean can be captured.
2.0.1 General Algorithms for Unscented Kalman Filter
1) Generating sigma points:
Consider a set of sigma points S with given mean and covariance; it contains (2Nx + 1) vectors and
their associated weights, $S = \{X^{(i)}, W^{(i)} : i = 0, 1, \dots, 2N_x\}$. By convention, $W^{(0)}$
is the weight on the mean point, which is indexed as the zeroth point:

$$X^{(0)} = \bar{X}, \qquad W^{(0)} = W^{(0)}$$

The other $2N_x$ points lie on the $\sqrt{N_x}$-th covariance contour, with half of the points on each
side of the mean:

$$X^{(i)} = \bar{X} + \left( \sqrt{\frac{N_x}{1 - W^{(0)}}\, \Sigma_x} \right)_i, \qquad
W^{(i)} = \frac{1 - W^{(0)}}{2 N_x}$$

$$X^{(i+N_x)} = \bar{X} - \left( \sqrt{\frac{N_x}{1 - W^{(0)}}\, \Sigma_x} \right)_i, \qquad
W^{(i+N_x)} = \frac{1 - W^{(0)}}{2 N_x}$$

2) Generating the transformed set, which is normally the expected value through the plant model:

$$\hat{X}^{(i)}_n = f[X^{(i)}_n, \mu_n]$$

3) Computing the predicted mean:

$$\hat{\mu}_n = \sum_{i=0}^{p} W^{(i)} \hat{X}^{(i)}_n$$

4) And computing the predicted covariance:

$$\hat{K}_n = \sum_{i=0}^{p} \{\hat{X}^{(i)}_n - \hat{\mu}_n\}\{\hat{X}^{(i)}_n - \hat{\mu}_n\}^T$$

5) Plugging each of the predicted points into the observation model:

$$\hat{Y}^{(i)}_n = g[X^{(i)}_n]$$

6) Computing the observation mean:

$$\hat{Y}_n = \sum_{i=0}^{p} W^{(i)} \hat{Y}^{(i)}_n$$

7) And computing the observation covariance:

$$\hat{S}_n = \sum_{i=0}^{p} \{\hat{Y}^{(i)}_n - \hat{Y}_n\}\{\hat{Y}^{(i)}_n - \hat{Y}_n\}^T$$

8) Finally, updating the usual Kalman filter equations:

$$V_n = Y_n - \hat{Y}_n, \qquad
W_n = \hat{K}_n \hat{S}_n^{-1}, \qquad
\mu_n = \hat{\mu}_n + W_n V_n, \qquad
K_n = \hat{K}_n - W_n \hat{S}_n W_n^T$$
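The sketch below assembles steps 1)–8) into one unscented-filter update. Two points are assumptions on
my part: the covariance sums include the sigma-point weights, as in the standard unscented transform,
and the gain uses $\hat{S}_n^{-1}$ with $\hat{K}_n$ standing in for the state–observation
cross-covariance (reasonable here because the observation map is essentially the identity). A small
diagonal jitter keeps the Cholesky factorization well defined when the covariance is singular, e.g. at
initialization.

```python
# Sketch of one unscented-filter step following steps 1)-8); f and g are the plant and
# observation maps, w0 is the weight W(0) placed on the mean point.
import numpy as np

def sigma_points(x_mean, P, w0=1/3):
    nx = len(x_mean)
    S = np.linalg.cholesky((nx / (1 - w0)) * P + 1e-12 * np.eye(nx))
    pts = [x_mean]
    for i in range(nx):
        pts.append(x_mean + S[:, i])
        pts.append(x_mean - S[:, i])
    weights = np.array([w0] + [(1 - w0) / (2 * nx)] * (2 * nx))
    return np.array(pts), weights

def ukf_step(x_mean, P, y, f, g, w0=1/3):
    X, W = sigma_points(x_mean, P, w0)
    Xp = np.array([f(x) for x in X])                 # step 2: propagate through plant model
    mu = W @ Xp                                      # step 3: predicted mean
    K_hat = sum(w * np.outer(xp - mu, xp - mu) for w, xp in zip(W, Xp))       # step 4
    Yp = np.array([g(xp) for xp in Xp])              # step 5: predicted observations
    y_hat = W @ Yp                                   # step 6: observation mean
    S_hat = sum(w * np.outer(yp - y_hat, yp - y_hat) for w, yp in zip(W, Yp)) # step 7
    gain = K_hat @ np.linalg.inv(S_hat)              # step 8: gain and update
    x_new = mu + gain @ (y - y_hat)
    P_new = K_hat - gain @ S_hat @ gain.T
    return x_new, P_new
```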
2.0.2 Implementation for Yield and Real Return Model
Here we use the same example of yield and real return model. Expected
results are better by implementing Unscented Kalman filter. That is
Zn = Φn−1Zn−1 + D +
Xn−1CWn
(1.10)
(1.11)
p
Yn = HnZn + V Bn
In order to generate sigma points, W (0) = 1
3 and 2Nx = 400 have been
set. With initial mean of sigma points µn = H −1
V Bn) and initial
covariance as zero matrix, one can repeat the algorithms by using steps
listed. Each time by choosing factorization of ˆKn, we can get the covariance
of sigma points. When implementing the algorithm, we changed a little bit
in step 2. Instead of using expectation, we use the whole function to process
sigma particles because the value of noisy parameters are relatively high and
it will be better to mimic points adding those noise. Each time we need to
guarantee Xn is positive.
n (Yn −
2.0.3 Results for Yield and Real Return Model
Predicted dividend yield matches highly with the real yield from the figure,
which means the prediction for yield is pretty sucess. Predicted real return
does not match with the real return well but keep the same trend. Reasons
for diiference of the results are: First, variance for real return is higher
than yield which enlarge the error for mis-allocated sigma points. Sigma
points for yield are intensive since it has relatively stable trend with lower
variance. Second, for updating each step, real return highly depends on the
prediction of yield from previous step, so the predicted error for yield can
be exaggerated further.
12
0.1
0.09
0.08
0.07
0.06
0.05
0.04
0.03
0.02
0.01
0
0.5
0.4
0.3
0.2
0.1
0
-0.1
-0.2
-0.3
-0.4
0
Dividend Yield
Predicted yield
Real yield
10
20
30
40
50
60
70
(a) Yield
Real Return
Predicted return
Real return
10
20
30
40
50
60
70
(b) Return
Figure 2.1: Unscented Filter Results for Yield and Real Return
13
Chapter 3
Particle Flow Filter
Particle Filters have the problem of particle degeneracy caused by Bayesian
Rule, especially in dealing with high dimensional state vectors. The algo-
rithm puts particles to wrong places when multiplying prior function with
likelihood function. Particle Flow Filter is derived to improve the estimation
accuracy in high-dimensional space by involving move functions of particles
and it is significantly mitigate the problem of degeneracy. We set each par-
ticle in d-dimensional space as a function of λ denoting as x(λ), in which
lambda is continuously changing like time. λ starts from 0 and ends up with
1 giving the results of moving from points to next points.
3.0.1 Generalized Gromov Method for stochastic Particle
Flow Filters
We start from constructing the stochastic differential equation for flow of
particles:
dx = f (x, λ)dλ + Q(x)
1
2 dWλ
(3.0.1)
Here f (x, λ) is the moving function for particles and Q is the covariance ma-
trix of the diffusion Wλ. Wλ is the measurement noise generated according
to λ.
In order to get the solution of f (x, λ) and Q(x), probability density function
log P (x, λ) is essential to be introduced. We have:
log P (x, λ) = log g(x) + λ log h(x)
log K(λ)
(3.0.2)
−
The generalized probability density function has the form of :
p(x, λ) =
g(x)h(x)λ
Rd g(x)h(x)λ dx
=
g(x)h(x)λ
K(λ)
,
(3.0.3)
R
14
in which h(x) is the likelihood, g(x) is from part a and K(λ) is the norm of
product of g(x)andh(x)λ. The purpose of K(λ) is to normalize the condi-
tional probability density.
By using equation (3.0.2), one can solve f (x, λ) by setting specific Q(x) to
simplify the PDE for f. The PDE has the form of :
∂log h
∂x
=
−
f T ∂2 log P
∂x2 −
∂div(f )
∂x −
∂ log P
∂x
∂f
∂x
+
∂[div(Q ∂P
∂x )/2P ]
∂x
(3.0.4)
The simplest way is to set:
∂div(f )
−
∂x −
∂logP
∂x
∂f
∂x
+
∂[div(Q ∂P
∂x )/2P ]
∂x
= 0
(3.0.5)
Then the solution for f (x, λ) is :
f (x, λ) =
∂2 log P (x, λ)
∂x2
[
−
−1(
]
∂ log h(x)
∂x
)T
(3.0.6)
According to equation (3.0.5), the corresponding covariance function Q is:
−
Q = [P
λP H T (R+λHP H T )
−1HP ]H T R
−1H[P
λP H T (R+λHP H T )
−1HP ]
(3.0.7)
where R is the measurement noise covariance matrix, P is the prior covari-
ance matrix, and H is the sensitive matrix in measurement model.
In order to keep the solution of Q from equation (3.0.7) as symmetric matrix,
one can implement the following method to symmetry Q immediately:
−
Q =
Q + QT
2
(3.0.8)
Algorithm 3.0.1. (Algorithm for implementing Particle Flow Filter with
diffusion)
•
•
•
a. Use Monte Carlo method randomly choose N particles around ob-
servation, and generate particle density function g(x) as prior density
function.
b. Choose suitable h(x) as likelihood function.
c. Compute p(x, λ) by Equation (3.0.3), p(x, λ) = g(x)h(x)λ
K(λ) =
Rd g(x)h(x)λ dx .
K(λ)
, where
R
15
•
d. Solve the moving function f (x, λ) and measurement covariance
matrix Q by equation (3.0.6) and (3.0.7). That is,
f (x, λ) =
∂2 log P (x, λ)
∂x2
[
−
−1(
]
)T .
(3.0.9)
Q = [P
−
λP H T (R+λHP H T )
−1HP ]H T R
λP H T (R+λHP H T )
−1HP ].
∂ log h(x)
∂x
−1H[P
−
•
e. Plug the value of f (x, λ) and Q(x), one can derive x by solving the
PDE: dx = f (x, λ)dλ + LdWλ, with L= chol(Q). We can use forward
Euler scheme
x(n+1) = x(n) + f (x(n), λn)∆λ + L∆Wλ
(3.0.10)
or implicit Euler scheme
x(n+1) = x(n) + f (x(n+1), λn+1)∆λ + L∆Wλ.
(3.0.11)
f. For updating each point, repeat steps from a to e.
Remark 3.0.2. Here h(x) can be any type of distribution but we consider
normal distribution with estimated mean and variance.
Remark 3.0.3. The use of either explicit or implicit Euler method depends
on the shape of f (x, λ).
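For the Gaussian prior/likelihood case used in this report, Algorithm 3.0.1 can be sketched as below,
with f(x, λ) in the closed form derived in Section 3.0.2 and Q from (3.0.7)–(3.0.8). The number of
λ steps, the forward-Euler integration (the implementation in Section 3.0.2 uses the implicit form
instead), and the Cholesky jitter are illustrative choices, not part of the algorithm itself.

```python
# Sketch of the particle flow update (steps c-e of Algorithm 3.0.1) for Gaussian g and h.
import numpy as np

def particle_flow_update(particles, m, Sigma1, Sigma2, P, R, H, n_lambda=29, rng=None):
    rng = np.random.default_rng(0) if rng is None else rng
    d = particles.shape[1]
    dlam = 1.0 / n_lambda
    S1i, S2i = np.linalg.inv(Sigma1), np.linalg.inv(Sigma2)
    for step in range(n_lambda):
        lam = (step + 1) * dlam
        # f(x, lam) = -[Sigma1^{-1} + lam*Sigma2^{-1}]^{-1} Sigma2^{-1} (x - m), eq. (3.0.9)
        A = np.linalg.inv(S1i + lam * S2i)
        # Q from (3.0.7), symmetrized as in (3.0.8)
        M = P - lam * P @ H.T @ np.linalg.inv(R + lam * H @ P @ H.T) @ H @ P
        Q = M @ H.T @ np.linalg.inv(R) @ H @ M
        Q = 0.5 * (Q + Q.T)
        L = np.linalg.cholesky(Q + 1e-10 * np.eye(d))   # jitter for numerical safety
        for i in range(len(particles)):
            drift = -A @ S2i @ (particles[i] - m)
            dW = rng.standard_normal(d) * np.sqrt(dlam)
            particles[i] = particles[i] + drift * dlam + L @ dW
    return particles
```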
3.0.2
Implementation of Particle Flow Filter
In our previous dividend yield and S&P real return model,with the obser-
vation model as:
Zn =
Xn
δRn(cid:19)
and measurement model as:
=
(cid:18)
1
1+k Xn−1 + kθ
1+k + σ
ρ∆W1,n +
1+k √Xn−1∆W1,n
1
ρ2∆W2,n
µXn + a√Xn−1
(cid:16)
p
−
!
(cid:17)
Yn =
Y1,n
Y2,n(cid:19)
(cid:18)
=
Xn + Q1B1,n
δRn + Q2B2,n(cid:19)
(cid:18)
We can get the particle density function is
g(x1, x2) =
1
2πσ1σ2
1
ρ2
−
p
16
− (x−µ)T Σ
e
−1
1
2
(x−µ)
where µ is sample mean and Σ1 is sample covariance
Σ1 =
σ2
1 ρσ1σ2
ρσ1σ2 σ2
2(cid:19)
(cid:18)
We can set the likelihood function as
h(x1, x2) =
1
− (x−m)T Σ
e
−1
2
2
(x−m)
Σ2|
|
where m is probability mean and Σ2 is probability covariance. Conditional
probability density function P (x, λ) follows:
2π
p
p(x, λ) =
g(x)h(x)λ
g(x)h(x)λ
||
=
||
− (x−µ)T Σ
e
−1
1
−1
(x−µ)+λ(x−m)T Σ
2
2
(x−m)
K(λ)
where
And then
K(λ) =
−1
1
− (x−µ)T Σ
e
||
−1
(x−µ)+λ(x−m)T Σ
2
2
(x−m)
||
logP (x, λ) =
µ)T Σ
−1
1 (x
(x
−
−
−
µ) + λ(x
2
−
m)T Σ
−1
2 (x
m)
−
−
log(K(λ))
log h(x) = log(
1
2π(Σ2)1/2 )
−
(x
−
−1
m)T Σ
2 (x
2
m)
−
∂2(logP (x, λ))
∂x2
∂(log h(x))
∂x
=
Σ
−1
1 −
−
λΣ
−1
2
=
Σ
−1
2 (x
−
m)
−
Moving function f (x, λ) is
f (x, λ) =
∂2log P (x, λ)
∂2x
[
−
−1(
]
∂log h(x)
∂x
) =
[Σ
−1
1 + λΣ
−1
2 ]
−1Σ
=
−
[
−
−
−1
2 (x
Σ
−1
1 −
λΣ
m)
−
−1
2 ]
−1[
−
Σ
−1
2 (x
m)]
−
According to equation (3.0.7), the corresponding Q(x) in this case is:
Q = [P
−
λP (V + λP )
−1P ]V
−1[P
λP (V + λP )
−1P ],
−
where P is the prior covariance, which has the form from Kalman Filter:
Pn(−) = ΦPn−1(+)ΦT + (1 , 0)x(n−1)CC T ,
17
and $L(x) = \sqrt{Q(x)}$ (a Cholesky factor). Then update x with respect to λ by backward Euler:

$$x^{(n+1)} = x^{(n)} + f(x^{(n+1)}, \lambda_{n+1})\,\Delta\lambda + L(x^{(n)})\,\Delta W_\lambda$$

Subtracting m on each side, one gets

$$x^{(n+1)} - m = (x^{(n)} - m)
- \left[ \Sigma_1^{-1} + \lambda_{n+1}\Sigma_2^{-1} \right]^{-1} \Sigma_2^{-1} (x^{(n+1)} - m)\,\Delta\lambda
+ L(x^{(n)})\,\Delta W_\lambda .$$

Set $y^{(n+1)} = x^{(n+1)} - m$ and $y^{(n)} = x^{(n)} - m$; the equation becomes

$$y^{(n+1)} = y^{(n)}
- \left[ \Sigma_1^{-1} + \lambda\Sigma_2^{-1} \right]^{-1} \Sigma_2^{-1} y^{(n+1)}\,\Delta\lambda
+ L(x)\,\Delta W_\lambda ,$$

$$\left( I + \Delta\lambda \left[ \Sigma_1^{-1} + \lambda\Sigma_2^{-1} \right]^{-1} \Sigma_2^{-1} \right) y^{(n+1)}
= y^{(n)} + L(x)\,\Delta W_\lambda ,$$

$$y^{(n+1)} = \left( I + \Delta\lambda \left[ \Sigma_1^{-1} + \lambda\Sigma_2^{-1} \right]^{-1} \Sigma_2^{-1} \right)^{-1}
\left( y^{(n)} + L(x)\,\Delta W_\lambda \right),$$

$$x^{(n+1)} = \left( I + \Delta\lambda \left[ \Sigma_1^{-1} + \lambda\Sigma_2^{-1} \right]^{-1} \Sigma_2^{-1} \right)^{-1}
\left( x^{(n)} - m + L(x)\,\Delta W_\lambda \right) + m .$$
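In code, this implicit update amounts to one linear solve per λ step; a minimal sketch, with λ taken at
λ_{n+1} and all matrices as defined above, could look like the following (the helper name and argument
layout are my own):

```python
# Sketch of the implicit (backward Euler) particle update derived above.
import numpy as np

def implicit_flow_step(x, m, Sigma1, Sigma2, L, lam, dlam, dW):
    S1i, S2i = np.linalg.inv(Sigma1), np.linalg.inv(Sigma2)
    A = np.eye(len(x)) + dlam * np.linalg.inv(S1i + lam * S2i) @ S2i
    rhs = (x - m) + L @ dW
    return np.linalg.solve(A, rhs) + m   # solve A * (x_new - m) = rhs
```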
3.0.3 Results for yield and real return model
By introducing the particle movement function f(x, λ), the accuracy of the real-return predictions is
greatly increased, which supports the validity of the particle flow method. The trend of the predicted
yield closely follows the real trend, and the yield prediction performs well in years with large
fluctuations, but it cannot mimic the values when the fluctuations are small. This is because we set a
relatively large covariance for the likelihood, so the filter cannot do better when the true covariance
becomes small. The corresponding drawback of the particle flow filter is therefore clear: a constant
likelihood function h(x) cannot adapt to the changes at every point.
0.12
0.1
0.08
0.06
0.04
0.02
0
-0.02
0
0.8
0.6
0.4
0.2
0
-0.2
-0.4
-0.6
0
Dividend Yield
Predicted yield
Real yield
10
20
30
40
50
60
70
(a) Yield
Real Return
Predicted return
Real return
10
20
30
40
50
60
70
(b) Return
Figure 3.1: Particle Flow Filter Results for Yield and Real Return
19
|
ai_researcher | 2 | CreativEval_Evaluating_Creativity_of_LLM-Based_Hardware_Code_Generation.pdf | CreativEval: Evaluating Creativity of
LLM-Based Hardware Code Generation
Matthew DeLorenzo, Vasudev Gohil, Jeyavijayan Rajendran
Texas A&M University, USA
{matthewdelorenzo, gohil.vasudev, jv.rajendran}@tamu.edu
arXiv:2404.08806v1 [cs.CL] 12 Apr 2024
Abstract—Large Language Models (LLMs) have proved effec-
tive and efficient in generating code, leading to their utilization
within the hardware design process. Prior works evaluating
LLMs’ abilities for register transfer level code generation solely
focus on functional correctness. However, the creativity associated
with these LLMs, or the ability to generate novel and unique
solutions, is a metric not as well understood, in part due to the
challenge of quantifying this quality.
To address this research gap, we present CreativEval, a
framework for evaluating the creativity of LLMs within the
context of generating hardware designs. We quantify four creative
sub-components, fluency, flexibility, originality, and elaboration,
through various prompting and post-processing techniques. We
then evaluate multiple popular LLMs (including GPT models,
CodeLlama, and VeriGen) upon this creativity metric, with re-
sults indicating GPT-3.5 as the most creative model in generating
hardware designs.
Index Terms—Hardware Design, LLM, Creativity
I. INTRODUCTION
Recent advancements within artificial intelligence, machine
learning, and computing performance have resulted in the
development of LLMs, which have quickly proven to be a
widely applicable and successful solution when applied to a
variety of text-based tasks [1]. After extensive training on large
quantities of text data, these transformer-based models [2] have
demonstrated the ability to not only successfully interpret the
contextual nuances of a provided text (or prompt), but also
generate effective responses to a near human-like degree [3].
This can take the form of summarizing a document, answering
and elaborating upon questions, and even generating code.
The effectiveness and versatility of LLMs regarding textual
understanding have resulted in their adoption within various
applications, such as language translation [4], customer service
chat-bots [5], and programming assistants [1].
Furthermore, the potential of LLM code generation has
recently been explored within the integrated circuit (IC) design
process [6], such as within the logic design stage. With chip
designs continually growing in scale and complexity, efforts
to increase the automation of this task through LLMs have
been explored. This includes the evaluation of LLMs’ ability to
generate hardware design codes from English prompts, leading
to promising initial results within various frameworks [7]–[10].
With the goal of further optimizing these LLMs to the level
of an experienced hardware designer, many research efforts
have focused on improving performance within the metric of
code functionality. This includes testing various LLM fine-
tuning strategies and prompting methods for domain-optimized
performance, such as register transfer level (RTL) code gen-
eration.
However, another dimension to consider when evaluating the ability of a designer, absent from previous evaluations, is creativity. This term refers to the capacity to
think innovatively—the ability to formulate new solutions or
connections that are effective and unconventional [11]. When
applied to hardware code generation, this can take the form
of writing programs that are not only correct, but also novel,
surprising, or valuable when compared to typical design ap-
proaches. This quality is essential to understanding the greater
potential of LLMs as a tool for deriving new approaches
to hardware design challenges, rather than simply a method
to accelerate existing design practices. With a quantitative
method of measuring this concept of creativity within LLM
hardware generation, valuable insights could be derived, such
as how performance could be further improved, or how LLMs
can be best utilized within the hardware design process.
To address this absence within the analysis of LLM-based
RTL code generation, we propose a comparative evaluation
framework in which the creativity of LLMs can be effectively
measured. This assessment
is composed of four cognitive
subcategories of creativity (fluency, flexibility, originality, and
elaboration), which are quantified and evaluated within the
context of generating functional Verilog modules. Further-
more, this approach utilizes various prompting structures, gen-
eration strategies, and post-processing methods, from which
the quality and variations of responses are utilized to generate
a metric for creativity. This work presents the following
contributions:
• To the best of our knowledge, we propose the first frame-
work from which a metric for creativity is defined for LLMs
within the context of hardware design and code generation.
• We provide a comparative evaluation between state-of-the-
art LLMs upon our creativity metric and its components,
with GPT-3.5 achieving the highest result.
• To enable future research, we will open-source our framework codebase and datasets here: https://github.com/matthewdelorenzo/CreativEval/
II. BACKGROUND AND RELATED WORK
A. LLMs for Code Generation and Hardware Design
Many state-of-the-art LLMs have demonstrated remarkable
success in generating code when provided only with a natural
Fig. 1. Experimental Framework - calculating creativity of LLMs in Verilog code generation.
language description, such as GPT-3.5/4 [12], BERT [13],
and Claude [14], revolutionizing the software development
process. These models demonstrate promising performance in
code functionality, such as GPT-4 generating correct code for
67% of programming tasks in the HumanEval benchmark in
a single response (pass@1) [15]–[17].
Therefore, the applications of LLMs within hardware design
through RTL code generation are explored within various
studies, such as DAVE [18] which utilized GPT-2 for this
task. VeriGen [7] then demonstrated that fine-tuning smaller
models (CodeGen) upon a curated RTL dataset can outperform
larger models in RTL tests. VerilogEval [19] presents enhanced
LLM hardware generation through supervised fine-tuning,
and provides an RTL benchmark for evaluating functionality
in RTL generation. ChipNeMo [9] applied fine-tuning upon
open-source models (Llama2 7B/13B) for various hardware
design tasks. RTLCoder [20] presents an automated method
for expanding the RTL dataset used for fine-tuning, resulting
in a 7B-parameter model that outperforms GPT-3.5 on RTL
benchmarks. Other works, including RTLLM [21] and Chip-
Chat [8], explore prompt engineering strategies to enhance the
quality and scale of LLM-generated designs. Although there
is a plethora of work on LLM-based RTL generation, none of
these prior works assess the creative component of LLMs in
the hardware design process. We address this shortcoming in
this work.
B. Evaluating Creativity
Prior cognitive science studies [22]–[25] have explored
methods in which creative thinking can be effectively mea-
sured. A widely accepted creativity model [24] defines four
primary cognitive dimensions from which divergent thinking,
or the ability to generate creative ideas through exploring
multiple possible solutions [26], can be measured—fluency,
flexibility, originality, and elaboration.
• Fluency. The quantity of relevant and separate ideas able to
be derived in response to a single given question.
• Flexibility. The ability to formulate alternative solutions to
a given problem or example across a variety of categories.
• Originality. A measure of how unique or novel a given idea
is, differing from typical responses or solutions.
• Elaboration. The ability to expand upon or refine a given
idea. This can include the ability to construct complex
solutions utilizing provided, basic concepts.
These subcategories have been widely used in evaluating human
creativity within educational research, including various stud-
ies of students [27]–[29] as a metric for effective learning.
Furthermore, recent works explore the intersection between
cognitive science and LLMs [30]–[32], in which the creativity
of LLMs are evaluated within the context of natural lan-
guage, demonstrating near-human like performance in many
cases [31]. In particular, [33] utilizes the four creative sub-
categories to evaluate LLMs across multiple language-based
cognitive tasks. However, this framework has not been adapted
to LLMs within the context of generating hardware code. To
this end, we devise our creativity evaluation framework for
LLM-based hardware code generation.
III. CR E A T I VEV A L FRAMEWORK
Given a target LLM, our CreativEval framework, as
shown in Fig. 1, seeks to evaluate the creativity associated
with LLMs in hardware code generation. CreativEval
evaluates the previously defined subcategories of creativity—
fluency, flexibility, originality, and elaboration. To this end, we
query the target LLM with different Verilog-based prompts,
and analyze the responses through various methods of post-
processing to calculate the desired metrics, as explained below.
A. Fluency
To capture the quantity of relevant and separate ideas in our
context, we define fluency as the average number of unique
Verilog solutions generated by the target LLM in response to a
given prompt. Our prompts contain a brief English description
of the module and the module’s declaration, as shown in
Listing 1. Each prompt is provided as input to the LLM, with
the response intended to be the completed implementation
of the module. As the inference process of LLMs contain
variations in the generated responses, we generate t responses
for each prompt to estimate the average performance.
Upon generating all responses, each response is then tested
for functionality against the module’s associated testbench. If
all test cases pass, the module is considered functional. Then,
for each prompt, the functional responses (if any) are collected
and compared to identify if they are unique implementations.
This is done through GNN4IP [34], a tool utilized to assess
the similarities between circuits. By representing two Verilog
[Fig. 1 diagram labels: Verilog prompts (formatted for fluency/originality, for flexibility, and for elaboration) → LLM → responses (Verilog modules) → functionality check → functional responses → GNN4IP comparison with the golden response → similarity scores → fluency, flexibility, originality, elaboration → creativity.]

//Create a full adder.
//A full adder adds three bits (including carry-in) and produces a sum and carry-out.

module top_module (
    input a, b, cin,
    output cout, sum );

Listing 1. Fluency/Originality prompt example
modules as a data-flow graph (DFG), GNN4IP generates a
similarity score within [-1,1], with larger values indicating
a higher similarity. Each correct generated solution from the
LLM is input into GNN4IP, and compared to its ideal solution,
or “golden response”. Upon the generation of each similarity
value for a given prompt, these results are then compared to
determine how many unique values are in the response set,
indicating the number of distinct solutions.
Given that there are a set of p total prompts in the dataset,
the LLM generates t responses for each. After evaluating
the functionality of these results, there is then a subset n
prompts that contain at least one success (functional module
generation). For each of these n prompts, there is a sub-total
of the t responses that are functional, defined as m. Each
of these m functional responses, r, are defined as the set
R = {r1n, ..., rmn}. The GNN4IP similarity value is then
found for each response in R, represented as the function S.
The number of unique similarity values is then determined
within the set, and normalized to total t responses. This
process is repeated for all n successful prompts and averaged
to define the fluency metric F below:
F = \frac{1}{n}\sum_{i=1}^{n} \frac{|S(R_i)|}{t}    (1)
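For illustration only (the helper names are assumptions, not from the paper), the fluency score of Eq. (1) can be computed from the per-prompt similarity values as:

def fluency(similarity_sets, t):
    # similarity_sets: one list of GNN4IP similarity values per prompt that has
    # at least one functional response; t: responses generated per prompt
    per_prompt = [len(set(sims)) / t for sims in similarity_sets]
    return sum(per_prompt) / len(per_prompt) if per_prompt else 0.0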
B. Flexibility
Flexibility is quantified as the ability of the LLM to generate
an alternative implementation of a Verilog module when
provided with a solution. The prompts for this metric are
constructed for a set of Verilog modules in which a correct
solution (the golden response) is included (Listing 2). The
LLM then rewrites the Verilog module, ideally resulting in a
functional and unique implementation.
As before, t responses are generated for each of the p total
prompts. After all responses are checked for functionality, n
prompts have at least one functional response, each with m
functional responses. These functional responses are compared
directly with the golden response (through GNN4IP) to iden-
tify their similarity value. If the similarity value s is lower
than a given threshold on the scale [-1,1], the response is
considered an alternative solution, shown in Equation 2. For
each successful prompt, the response with minimum similarity
is found and evaluated against the threshold. Then, the total
amount of n prompts with a response less than the threshold is
determined, and normalized by the total prompts n. The final
flexibility metric X is then defined below:
// You are a professional hardware designer that writes correct, fully functional Verilog modules.
// Given the fully implemented example of the Verilog module below:

module true_module(
    input a, b, cin,
    output cout, sum );
    assign sum = a ^ b ^ cin;
    assign cout = a & b | a & cin | b & cin;
endmodule

// Finish writing a different and unique implementation of the provided true_module in the module below, top_module.
module top_module (
    input a, b, cin,
    output cout, sum );

Listing 2. Flexibility prompt example
T(s) = \begin{cases} 1 & \text{if } s < 0 \\ 0 & \text{if } s \geq 0 \end{cases}    (2)

X = \frac{1}{n}\sum_{i=1}^{n} T\left[\min S(R_i)\right]    (3)
C. Originality
The originality metric is defined as the variance (unique-
ness) of an LLM-generated Verilog module in comparison
to a typical, fully functional implementation. This metric is
derived from the similarity value (generated through GNN4IP)
between successful generations and their golden response.
The originality experiment follows the same prompt struc-
ture and procedure as described in III-A. For each prompt, the
response with the minimum similarity value is found. Then,
the similarity values [-1, 1] are re-normalized to be on scale of
[0, 1] with 1 indicating the least similarity (i.e. most original).
These results are averaged over all n prompts, with the final
originality metric O is described below:
O = \frac{1}{n}\sum_{i=1}^{n} \frac{-\min S(R_i) + 1}{2}    (4)
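Similarly, a small illustrative sketch (assumed helper names, not from the paper) for evaluating Eqs. (3) and (4) from the per-prompt minimum similarity values:

def flexibility(min_sims):
    # min_sims: for each successful prompt, the minimum GNN4IP similarity of its
    # functional rewrites vs. the provided golden module (Eq. 3, threshold 0)
    return sum(1 for s in min_sims if s < 0) / len(min_sims)

def originality(min_sims):
    # rescale similarity in [-1, 1] to [0, 1], where 1 means least similar (Eq. 4)
    return sum((1.0 - s) / 2.0 for s in min_sims) / len(min_sims)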
D. Elaboration
To measure an LLM’s capacity for elaboration, the LLM is
provided with multiple smaller Verilog modules in a prompt,
and tasked with utilizing them to implement a larger, more
complex module. As this metric requires multi-modular de-
signs, a separate set of Verilog modules is utilized in con-
structing the prompts, as shown in Listing 3.
Multiple LLM responses are generated for each module,
which are all then checked for functionality. For all given
functional solutions, the responses are checked to see if the
solution utilizes the smaller modules (as opposed to a single
modular solution). If any of the responses for a given prompt are both functional and utilize the smaller modules, it is
TABLE I
COMPARISON OF DIFFERENT LLMS IN TERMS OF CREATIVITY AND ITS SUBCATEGORIES

LLM                  Functionality   Fluency   Flexibility   Originality   Elaboration   Creativity
CodeLlama-7B [35]    0.2417          0.1483    0.0000        0.2926        0.2222        0.1658
CodeLlama-13B [36]   0.3167          0.1611    0.0260        0.3021        0.3333        0.2056
VeriGen-6B [37]      0.3667          0.1244    0.1000        0.2527        0.3333        0.2026
VeriGen-16B [38]     0.3250          0.1189    0.0556        0.2771        0.3333        0.1962
GPT-3.5 [39]         0.3083          0.1343    0.1600        0.2526        0.3333        0.2201
GPT-4 [40]           0.3750          0.1644    0.0795        0.2657        0.3333        0.2107
// You are given a module add16 that performs a 16-bit addition.
// Instantiate two of them to create a 32-bit adder.

module add16 (input[15:0] a, input[15:0] b, input cin, output[15:0] sum, output cout );

module top_module (
    input [31:0] a,
    input [31:0] b,
    output [31:0] sum
);

Listing 3. Elaboration prompt example
considered a positive instance of elaboration. Given p total
Verilog prompts, of which n have at least one response that
demonstrates elaboration, the metric is specified as:
E = \frac{n}{p}    (5)
E. Creativity: Putting It All Together
Given each of the subcategories associated with creativity
defined above, the metrics are then combined to define the
overall creativity of a given LLM in Verilog hardware design.
C = (0.25)F + (0.25)X + (0.25)O + (0.25)E
(6)
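For completeness, an illustrative sketch (assumed function names, not from the paper) of the elaboration score of Eq. (5) and the overall creativity of Eq. (6):

def elaboration(n_elaborated, p_total):
    # Eq. (5): fraction of multi-module prompts with at least one functional,
    # module-reusing response
    return n_elaborated / p_total

def creativity(F, X, O, E):
    # Eq. (6): equal-weight combination of the four subcategory scores
    return 0.25 * F + 0.25 * X + 0.25 * O + 0.25 * E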
IV. EXPERIMENTAL EVALUATION
A. Experimental Setup
We evaluate multiple LLMs using the CreativEval
framework, including CodeLlama 7B [35] and 13B [36] pa-
rameter models, VeriGen 6B [37] and 16B [38] (16B model
loaded in 8-bit quantization due to memory constraints), GPT-
3.5 [39], and GPT-4 [40]. The inference process of the VeriGen
and CodeLlama models was performed locally on an NVIDIA
A100 GPU with 80 GB RAM, while GPT-3.5/4 were queried
through the OpenAI Python API. All scripts are written in
Python 3.10, with Icarus Verilog 10.3 as the simulator for eval-
uating functionality checks. The open-source GNN4IP repos-
itory was adapted to this framework to generate the similarity
scores. The prompt dataset utilized for functionality, fluency,
and originality consists of 111 single-module HDLBits [41]
prompts sourced through AutoChip [42], each containing a
correctly implemented solution and testbench. The smaller
prompt set used for elaboration contains 9 separate multi-
module prompts from the same source. The base functionality
metric (pass@10) is measured on all 120 prompts.
When generating LLM responses in all experiments, the
LLMs were all set to the following inference hyperparameters:
temperature=0.3; max tokens=1024; top k=10; top p=0.95.
All responses were trimmed to the first generated instance of “endmodule” for effective functionality evaluation.
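As a rough, non-authoritative sketch of how such sampling settings could be applied with a Hugging Face causal LM (the model name is a placeholder taken from [35]; max_new_tokens stands in for the max-tokens budget; the GPT models in the paper were instead queried through the OpenAI API):

from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("codellama/CodeLlama-7b-hf")    # placeholder choice
model = AutoModelForCausalLM.from_pretrained("codellama/CodeLlama-7b-hf")

def generate_verilog(prompt: str) -> str:
    inputs = tok(prompt, return_tensors="pt")
    out = model.generate(**inputs, do_sample=True, temperature=0.3,
                         top_k=10, top_p=0.95, max_new_tokens=1024)
    text = tok.decode(out[0], skip_special_tokens=True)
    # keep everything up to (and including) the first generated "endmodule"
    head, sep, _ = text.partition("endmodule")
    return head + sep if sep else text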
B. Results
Table I summarizes the results for all LLMs for all sub-
categories of creativity. In evaluating fluency, GPT-4 had the
highest quantity of separate and correct Verilog solutions to a
module (with respect to the modules that have at least one cor-
rect solution), with CodeLlama-13B achieving similar results.
The VeriGen models comparatively struggled in this metric,
partly due to repeated generations of similar implementations
instead of different implementations.
Regarding flexibility, GPT-3.5 had the highest rate of gen-
erating alternative solutions to provided modules across most
models. The models that struggled (e.g., CodeLlama) produced
results that were often direct copies of the provided module, in-
dicating the ability to understand the prompt’s natural language
description as an important factor that determined flexibility.
As for originality, the GPT models had slightly worse
performance than the others, with CodeLlama performing best.
This means that the successful solutions provided with the
GPT models were, on average, closer to the ideal solution. This
could be due to its large size and training dataset, resulting in a
more direct retrieval of existing solutions or coding practices.
Elaboration was largely similar for all modules, as the
HDLBits dataset for this metric is comparatively small (9 mod-
ules). The models primarily excelled in correctly connecting
the input and output parameters between separate modules,
while struggling to generate the larger module solution.
Overall, the GPT models were the most creative, with GPT-
3.5 as the best, and CodeLlama-7B was the least creative.
Creativity is shown to slightly drop for the larger model sizes
of GPT and VeriGen.
V. CONCLUSION
Recent studies on LLMs regarding their applications to
hardware design have effectively demonstrated their poten-
tial, applying many optimization strategies to increase the
performance in terms of functional correctness. However,
these studies do not investigate the creativity associated with
LLMs in their ability to generate solutions, largely due to the
lack of an effective metric. Within this work, we propose
CreativEval, a framework to evaluate the creativity of
LLMs in generating hardware code. By evaluating multiple
popular LLMs within this framework, we perform a com-
parative analysis, concluding that GPT-3.5 had the greatest
creativity. Future research in this direction can further evaluate
more LLMs and on larger prompt sets.
ACKNOWLEDGMENT
The authors acknowledge the support from the Purdue Center
for Secure Microelectronics Ecosystem – CSME#210205. This
work was also partially supported by the National Science
Foundation (NSF CNS–1822848 and NSF DGE–2039610).
REFERENCES
[1] Tim Keary, “12 Practical Large Language Model (LLM) Applications,”
https://www.techopedia.com/12-practical-large-language-model-llm-a
pplications, 2023, [Online; last accessed 21-Nov-2023].
[2] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez,
L. Kaiser, and I. Polosukhin, “Attention is all you need,” 2023.
[3] Z. G. Cai, X. Duan, D. A. Haslett, S. Wang, and M. J. Pickering, “Do
large language models resemble humans in language use?” 2024.
[4] T. Kocmi and C. Federmann, “Large language models are state-of-the-art
evaluators of translation quality,” 2023.
[5] K. Pandya and M. Holia, “Automating customer service using langchain:
Building custom open-source gpt chatbot for organizations,” 2023.
[6] R. Zhong, X. Du, S. Kai, Z. Tang, S. Xu, H.-L. Zhen, J. Hao,
Q. Xu, M. Yuan, and J. Yan, “Llm4eda: Emerging progress in large
language models for electronic design automation,” arXiv preprint
arXiv:2401.12224, 2023.
[7] S. Thakur, B. Ahmad, H. Pearce, B. Tan, B. Dolan-Gavitt, R. Karri, and
S. Garg, “Verigen: A large language model for verilog code generation,”
2023.
[8] J. Blocklove, S. Garg, R. Karri, and H. Pearce, “Chip-chat:
Challenges and opportunities in conversational hardware design,”
in 2023 ACM/IEEE 5th Workshop on Machine Learning for
CAD (MLCAD).
[Online]. Available: http:
//dx.doi.org/10.1109/MLCAD58807.2023.10299874
IEEE, Sep. 2023.
[9] M. Liu, T.-D. Ene, R. Kirby, C. Cheng, N. Pinckney, R. Liang et al.,
“Chipnemo: Domain-adapted llms for chip design,” 2024.
[10] M. DeLorenzo, A. B. Chowdhury, V. Gohil, S. Thakur, R. Karri, S. Garg,
and J. Rajendran, “Make every move count: Llm-based high-quality rtl
code generation using mcts,” 2024.
[11] M. Runco and G. Jaeger, “The standard definition of creativity,” Cre-
ativity Research Journal - CREATIVITY RES J, vol. 24, pp. 92–96, 01
2012.
[12] OpenAI, J. Achiam, S. Adler, S. Agarwal, L. Ahmad, I. Akkaya, F. L.
Aleman et al., “Gpt-4 technical report,” 2024.
[13] J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, “BERT: Pre-training
of Deep Bidirectional Transformers for Language Understanding,” in
Proceedings of the 2019 Conference of the North American Chapter
of the Association for Computational Linguistics: Human Language
Technologies, Volume 1 (Long and Short Papers). Minneapolis,
Minnesota: Association for Computational Linguistics, Jun. 2019, pp.
4171–4186. [Online]. Available: https://aclanthology.org/N19-1423
[14] [Online]. Available: https://www.anthropic.com/news/claude-3-haiku
[15] Z. Luo, C. Xu, P. Zhao, Q. Sun, X. Geng, W. Hu, C. Tao, J. Ma, Q. Lin,
and D. Jiang, “Wizardcoder: Empowering code large language models
with evol-instruct,” 2023.
[16] M. Chen, J. Tworek, H. Jun, Q. Yuan, H. P. de Oliveira Pinto, J. Kaplan
et al., “Evaluating large language models trained on code,” 2021.
[17] Y. Wang, H. Le, A. D. Gotmare, N. D. Q. Bui, J. Li, and S. C. H. Hoi,
“Codet5+: Open code large language models for code understanding and
generation,” 2023.
[18] H. Pearce, B. Tan, and R. Karri, “Dave: Deriving automatically verilog
from english,” in Proceedings of the 2020 ACM/IEEE Workshop on
Machine Learning for CAD, ser. MLCAD ’20. New York, NY,
USA: Association for Computing Machinery, 2020, p. 27–32. [Online].
Available: https://doi.org/10.1145/3380446.3430634
[19] M. Liu, N. Pinckney, B. Khailany, and H. Ren, “Verilogeval: Evaluating
large language models for verilog code generation,” 2023.
[20] S. Liu, W. Fang, Y. Lu, Q. Zhang, H. Zhang, and Z. Xie, “Rtlcoder:
Outperforming gpt-3.5 in design rtl generation with our open-source
dataset and lightweight solution,” 2024.
[21] Y. Lu, S. Liu, Q. Zhang, and Z. Xie, “Rtllm: An open-source benchmark
for design rtl generation with large language model,” 2023.
[22] L. S. Almeida, L. P. Prieto, M. Ferrando, E. Oliveira, and C. Ferr´andiz,
“Torrance test of creative thinking: The question of its construct
validity,” Thinking Skills and Creativity, vol. 3, no. 1, pp. 53–58, 2008.
[Online]. Available: https://www.sciencedirect.com/science/article/pii/
S1871187108000072
[23] S. L. Doerr, “Conjugate lateral eye movement, cerebral dominance,
and the figural creativity factors of fluency, flexibility, originality, and
elaboration,” Studies in Art Education, vol. 21, no. 3, pp. 5–11, 1980.
[Online]. Available: http://www.jstor.org/stable/1319788
[24] J. P. Guilford, The nature of human intelligence. McGraw-Hill, 1971.
[25] E. P. Torrance, “Torrance tests of creative thinking,” Educational and
psychological measurement, 1966.
[26] M. Arefi, “Comparation of creativity dimensions (fluency, flexibility,
elaboration, originality) between bilingual elementary students (azari
language-kurdish language) in urmia city iran - the iafor research
archive,” Dec 2018. [Online]. Available: https://papers.iafor.org/submi
ssion22045/
[27] S. A. Handayani, Y. S. Rahayu, and R. Agustini, “Students’
creative thinking skills
in biology learning: fluency, flexibility,
originality, and elaboration,” Journal of Physics: Conference Series,
vol. 1747, no. 1, p. 012040,
[Online]. Available:
https://dx.doi.org/10.1088/1742-6596/1747/1/012040
feb 2021.
[28] F. Alacapinar, “Grade level and creativity,” Eurasian Journal of Educa-
tional Research (EJER), vol. 13, pp. 247–266, 01 2012.
[29] M. Arefi and N. Jalali, “Comparation of creativity dimensions (flu-
ency, flexibility, elaboration, originality) between bilingual elementary
students (azari language-kurdish language) in urmia city–iran,” in The
IAFOR International Conference on Language Learning, 2016.
[30] R. Shiffrin and M. Mitchell, Mar 2023.
[Online]. Available:
https://www.pnas.org/doi/abs/10.1073/pnas.2300963120
[31] C. Stevenson, I. Smal, M. Baas, R. Grasman, and H. van der Maas,
“Putting gpt-3’s creativity to the (alternative uses) test,” 2022.
[32] M. Binz and E. Schulz, “Using cognitive psychology to understand
gpt-3,” Proceedings of the National Academy of Sciences, vol. 120,
no. 6, Feb. 2023. [Online]. Available: http://dx.doi.org/10.1073/pnas.22
18523120
[33] Y. Zhao, R. Zhang, W. Li, D. Huang, J. Guo, S. Peng, Y. Hao, Y. Wen,
X. Hu, Z. Du, Q. Guo, L. Li, and Y. Chen, “Assessing and understanding
creativity in large language models,” 2024.
[34] R. Yasaei, S.-Y. Yu, E. K. Naeini, and M. A. A. Faruque, “Gnn4ip: Graph
neural network for hardware intellectual property piracy detection,”
2021.
[35] “Hugging face.” [Online]. Available: https://huggingface.co/codellama
/CodeLlama-7b-hf
[36] “Hugging face.” [Online]. Available: https://huggingface.co/codellama
/CodeLlama-13b-hf
[37] “Hugging face.” [Online]. Available: https://huggingface.co/shailja/fin
e-tuned-codegen-6B-Verilog
[38] “Hugging face.” [Online]. Available: https://huggingface.co/shailja/fin
e-tuned-codegen-16B-Verilog
[39] “fine-tuning and api updates.” [Online]. Available: https://openai.com/b
log/gpt-3-5-turbo-fine-tuning-and-api-updates
[40] “fine-tuning and api updates.” [Online]. Available: https://openai.com/r
esearch/gpt-4
[41] [Online]. Available: https://hdlbits.01xz.net/wiki/Main Page
[42] S. Thakur, J. Blocklove, H. Pearce, B. Tan, S. Garg, and R. Karri,
“Autochip: Automating hdl generation using llm feedback,” 2023.
|
ai_researcher | 1 | Machine_Learning_for_Fast_Quantum_Mechanics-Based_Approximation_of_Drug_Lipophilicity.pdf | 0
2
0
2
l
u
J
7
2
]
h
p
-
p
m
o
c
.
s
c
i
s
y
h
p
[
1
v
6
0
2
4
1
.
7
0
0
2
:
v
i
X
r
a
Machine Learning Potential Repository
Atsuto Seko1, ∗
1Department of Materials Science and Engineering, Kyoto University, Kyoto 606-8501, Japan
(Dated: July 29, 2020)
This paper introduces a machine learning potential repository that includes Pareto optimal ma-
chine learning potentials. It also shows the systematic development of accurate and fast machine
learning potentials for a wide range of elemental systems. As a result, many Pareto optimal machine
learning potentials are available in the repository from a website [1]. Therefore, the repository will
help many scientists to perform accurate and fast atomistic simulations.
I.
INTRODUCTION
may be decomposed as
Machine learning potential (MLP) has been increas-
ingly required to perform crystal structure optimizations
and large-scale atomistic simulations more accurately
than with conventional interatomic potentials. There-
fore, many recent studies have proposed a number of
procedures to develop MLPs and have shown their ap-
plications [2–23]. Simultaneously, MLPs themselves are
necessary for their users to perform accurate atomistic
simulations. Therefore, the development and distribu-
tion of MLPs for a wide range of systems should be use-
ful, similarly to the conventional interatomic potentials
distributed in several repositories [24, 25].
This study demonstrates an MLP repository available
from a website [1]. The MLP repository includes Pareto
optimal MLPs with different trade-offs between accuracy
and computational efficiency because they are conflicting
properties and there is no single optimal MLP [26–28].
This study develops the repository by performing sys-
tematic density functional theory (DFT) calculations for
approximately 460,000 structures and by combining them
with existing DFT datasets in the literature [26, 29].
Polynomial-based potential energy models [26, 29] and
their revisions are then systematically applied to the con-
struction of MLPs for a wide range of elemental systems.
Although the present version of the repository does not
contain MLPs for multicomponent systems, the reposi-
tory will be gradually updated. Moreover, a user pack-
age that combines MLPs in the repository with atomistic
simulations using the lammps code [30] is also available
on a website [31].
II. POTENTIAL ENERGY MODELS
This section shows structural features and potential
energy models used for developing MLPs in the reposi-
tory. Given cutoff radius rc from atom i in a structure,
the short-range part of the total energy for the structure
∗ [email protected]
E =
E(i),
Xi
(1)
where E(i) denotes the contribution of atom i or the
atomic energy. The atomic energy is then given by a
function of invariants for the O(3) group [26, 32] as
E(i) = F
d(i)
1 , d(i)
2 , · · ·
,
(cid:17)
(cid:16)
(2)
In the context of MLPs, invariants {d(i)
where d(i)
n denotes an invariant derived from order pa-
rameters representing the neighboring atomic density of
atom i.
n } can
be called “structural features”. Also, a number of func-
tions are useful as function F to represent the relation-
ship between the invariants and the atomic energy, such
as artificial neural network models [2, 3, 5–8], Gaussian
process models [4, 9–12], and linear models [13–19]. In
the repository, linear models are explained as function F ,
which are shown in Sec. II B.
A. Structural features
A systematic procedure to derive a set of structural
features that can control the accuracy and computa-
tional efficiency of MLPs (e.g., [26, 32]) plays an essen-
tial role in automatically generating fast and accurate
MLPs. Therefore, the repository employs systematic sets
of structural features derived from order parameters rep-
resenting the neighboring atomic density in terms of a
basis set. They are classified into a set of structural fea-
tures derived only from radial functions and a set of struc-
tural features derived from radial and spherical harmonic
functions.
A pairwise structural feature is expressed as
d_{n0}^{(i)} = \sum_{j \in \mathrm{neighbor}} f_n(r_{ij}),    (3)
where rij denotes the distance between atoms i and j.
The repository adopts a finite basis set of Gaussian-type
radial functions given by
f_n(r) = \exp\left[-\beta_n (r - r_n)^2\right] f_c(r),    (4)
where βn and rn denote parameters. Cutoff function fc
ensures smooth decay of the radial function, and the
repository employs a cosine-based cutoff function ex-
pressed as

f_c(r) = \begin{cases} \frac{1}{2}\left[\cos\left(\pi \frac{r}{r_c}\right) + 1\right] & (r \leq r_c) \\ 0 & (r > r_c) \end{cases}    (5)
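As an illustration only (array names and shapes are assumptions, not code from the paper), the pairwise structural features of Eqs. (3)–(5) could be computed as follows:

import numpy as np

def cosine_cutoff(r, rc):
    # Eq. (5): smooth cosine-based cutoff
    return np.where(r <= rc, 0.5 * (np.cos(np.pi * r / rc) + 1.0), 0.0)

def pairwise_features(distances, betas, r_centers, rc):
    # distances: 1-D array of neighbor distances r_ij of atom i within rc
    # Eqs. (3)-(4): d_n0 = sum_j exp(-beta_n (r_ij - r_n)^2) fc(r_ij)
    fc = cosine_cutoff(distances, rc)
    return np.array([np.sum(np.exp(-b * (distances - r0) ** 2) * fc)
                     for b, r0 in zip(betas, r_centers)])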
Another structural feature is a linearly independent
polynomial invariant of the O(3) group, which is gener-
ated from order parameters representing the neighbor-
ing atomic density in terms of spherical harmonics. A
pth-order polynomial invariant for a given radial number
n and a given set of angular numbers { l1, l2, · · · , lp } is
defined by a linear combination of products of p order
parameters, expressed as

d^{(i)}_{nl_1l_2\cdots l_p,(\sigma)} = \sum_{\{m_1, m_2, \cdots, m_p\}} C^{l_1l_2\cdots l_p,(\sigma)}_{m_1m_2\cdots m_p}\, a^{(i)}_{nl_1m_1} a^{(i)}_{nl_2m_2} \cdots a^{(i)}_{nl_pm_p},    (6)
where a(i)
nlm denotes the order parameter of component
nlm representing the neighboring atomic density of atom
i. The order parameters for atom i in a given struc-
ture are approximately calculated from its neighboring
atomic density regardless of the orthonormality of the
radial functions [26] as
a^{(i)}_{nlm} = \sum_{j \in \mathrm{neighbor}} f_n(r_{ij})\, Y^{*}_{lm}(\theta_{ij}, \phi_{ij}),    (7)
where (rij , θij , φij ) denotes the spherical coordinates of
neighboring atom j centered at the position of atom i.
A coefficient set { C^{l_1l_2\cdots l_p,(\sigma)}_{m_1m_2\cdots m_p} } is determined by using
a group-theoretical projection operator method [33], en-
suring that the linear combination of Eqn. (6) is invari-
ant for arbitrary rotation [26]. In terms of fourth- and
higher-order polynomial invariants, there exist multiple
invariants that are linearly independent for most of the
set { l1, l2, · · · , lp }. Therefore, they are distinguished by
index σ if necessary.
B. Energy models with respect to structural
features
The repository uses polynomial functions as function F
representing the relationship between the atomic energy
and a given set of structural features, D = { d1, d2, · · · }.
functions with regression coefficients
The polynomial
F2 (D) =
wij didj
(8)
X{i,j}
F3,pow (D) =
wiiidididi
Xi
F3 (D) =
wijkdidjdk
X{i,j,k}
...
A potential energy model is identified with a combina-
tion of the polynomial functions and structural features.
The repository introduces the following six potential en-
ergy models. When a set of pairwise structural features
is described as D^{(i)}_{\mathrm{pair}} = \{ d^{(i)}_{n0} \}, the first model (model type = 1, feature type = pair) is composed of powers of the pairwise structural features as

E^{(i)} = F_1\left(D^{(i)}_{\mathrm{pair}}\right) + F_{2,\mathrm{pow}}\left(D^{(i)}_{\mathrm{pair}}\right) + F_{3,\mathrm{pow}}\left(D^{(i)}_{\mathrm{pair}}\right) + \cdots,    (9)
which is measured from the energy of the isolated state
of atom i. This model was proposed in Refs. 13 and 14.
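A minimal sketch (assuming a precomputed feature vector d for atom i and fitted coefficient vectors, which are not given in the text) of evaluating the first model, Eq. (9), truncated at third order:

import numpy as np

def atomic_energy_model1(d, w1, w2, w3):
    # E(i) = F1(D_pair) + F2,pow(D_pair) + F3,pow(D_pair) truncated at 3rd order:
    # sum_n w1_n d_n + sum_n w2_n d_n^2 + sum_n w3_n d_n^3
    return float(w1 @ d + w2 @ d**2 + w3 @ d**3)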
The second model (model type = 2, feature type =
pair) is a polynomial of the pairwise structural features
with their cross terms, expressed as

E^{(i)} = F_1\left(D^{(i)}_{\mathrm{pair}}\right) + F_2\left(D^{(i)}_{\mathrm{pair}}\right) + F_3\left(D^{(i)}_{\mathrm{pair}}\right) + \cdots.    (10)
This model can be regarded as a natural extension of em-
bedded atom method (EAM) potentials as demonstrated
in Ref. 15.
The other four models are derived from the polynomial
invariants of Eqn. (6). When a set of the polynomial
invariants is expressed by the union of sets of pth-order
polynomial invariants as
D^{(i)} = D^{(i)}_{\mathrm{pair}} \cup D^{(i)}_2 \cup D^{(i)}_3 \cup D^{(i)}_4 \cup \cdots,    (11)

where

D^{(i)}_2 = \{ d^{(i)}_{nll} \}, \quad D^{(i)}_3 = \{ d^{(i)}_{nl_1l_2l_3} \}, \quad D^{(i)}_4 = \{ d^{(i)}_{nl_1l_2l_3l_4,(\sigma)} \},    (12)
the third model (model type = 1, feature type =
invariants) is expressed as

E^{(i)} = F_1\left(D^{(i)}\right) + F_{2,\mathrm{pow}}\left(D^{(i)}\right) + F_{3,\mathrm{pow}}\left(D^{(i)}\right) + \cdots.    (13)
This model consists of the powers of the polynomial in-
variants. A linear polynomial form of the polynomial invariants, E^{(i)} = F_1(D^{(i)}), which was proposed in Ref. 26, is included in the third model. Note that a linear polynomial model with up to third-order invariants, expressed by

E^{(i)} = F_1\left(D^{(i)}_{\mathrm{pair}} \cup D^{(i)}_2 \cup D^{(i)}_3\right),    (14)
is regarded as a spectral neighbor analysis potential
(SNAP) [16].
The fourth model (model type = 2, feature type
= invariants) is given by a polynomial of the polyno-
mial invariants as

E^{(i)} = F_1\left(D^{(i)}\right) + F_2\left(D^{(i)}\right) + F_3\left(D^{(i)}\right) + \cdots.    (15)
A quadratic polynomial model of the polynomial invari-
ants up to the third order, expressed as

E^{(i)} = F_1\left(D^{(i)}_{\mathrm{pair}} \cup D^{(i)}_2 \cup D^{(i)}_3\right) + F_2\left(D^{(i)}_{\mathrm{pair}} \cup D^{(i)}_2 \cup D^{(i)}_3\right)    (16)
is regarded as a quadratic SNAP [34].
The fifth model (model type = 3, feature type =
invariants) is the sum of a linear polynomial form of
the polynomial invariants and a polynomial of pairwise
structural features, described as

E^{(i)} = f_1\left(D^{(i)}\right) + f_2\left(D^{(i)}_{\mathrm{pair}}\right) + f_3\left(D^{(i)}_{\mathrm{pair}}\right) + \cdots.    (17)
The sixth model (model type = 4, feature type =
invariants) is the sum of a linear polynomial form of
the polynomial invariants and a polynomial of pairwise
structural features and second-order polynomial invari-
ants. This is written as

E^{(i)} = f_1\left(D^{(i)}\right) + f_2\left(D^{(i)}_{\mathrm{pair}} \cup D^{(i)}_2\right) + \cdots.    (18)
III. DATASETS
Training and test datasets are generated from proto-
type structures, i.e., structure generators. The reposi-
tory uses two sets of structure generators for elemental
systems. One is composed of face-centered cubic (fcc),
3
body-centered cubic (bcc), hexagonal close-packed (hcp),
simple cubic (sc), ω, and β-tin structures, which was em-
ployed in Ref. 29. Hereafter, structures generated from
the structure generator set are denoted by “dataset 1”.
The other is composed of prototype structures reported
in the Inorganic Crystal Structure Database (ICSD) [35],
which aims to generate a wide variety of structures. For
elemental systems, only prototype structures composed
of single elements with zero oxidation state are chosen
from the ICSD. The total number of the structure gener-
ators is 86. A list of structure generators can be found in
the Appendix of Ref. 26. Hereafter, structures generated
from the second set are denoted by “dataset 2”.
Given a structure generator, the atomic positions and
lattice constants of the structure generator are fully opti-
mized by DFT calculation to obtain its equilibrium struc-
ture. Then, a new structure is constructed by random
lattice expansion, random lattice distortion, and random
atomic displacements into a supercell of the structure
generator. For a given parameter ε controlling the de-
gree of lattice expansion, lattice distortion, and atomic
displacements, the lattice vectors of the new structure
A′ and the fractional coordinates of an atom in the new
structure f ′ are given as
A′
f ′
= A + εR
= f + εA′−1η,
(19)
(20)
,
where the (3 × 3) matrix R and the three-dimensional
vector η are composed of uniform random numbers rang-
ing from −1 to 1. Matrix A and vector f represent the
lattice vectors of the original supercell and the fractional
coordinates of the atom in the original supercell, respec-
tively.
For each elemental system, datasets 1 and 2 are com-
posed of 3,000 and 10,000 structures, respectively, in ad-
dition to the equilibrium structures of the structure gen-
erators. Dataset 1 was developed in Ref. 29, whereas
dataset 2, except for the case of elemental aluminum, is
developed in this study. Each of the datasets is then ran-
domly divided into a training dataset and a test dataset.
In the repository, datasets 1 and 2 are available for 31 and
47 elements, respectively. This means that the repository
contains MLPs developed from a total of 567,228 DFT
calculations.
DFT calculations were performed using the plane-
wave-basis projector augmented wave method [36] within
the Perdew–Burke–Ernzerhof exchange-correlation func-
tional [37] as implemented in the vasp code [38–40]. The
cutoff energy was set to 300 eV. The total energies con-
verged to less than 10−3 meV/supercell. The atomic po-
sitions and lattice constants of the structure generators
were optimized until the residual forces were less than
10−2 eV/˚A.
IV. MODEL COEFFICIENT ESTIMATION
Coefficients of a potential energy model are estimated
from all the total energies, forces, and stress tensors in-
cluded in a training dataset. Given a potential energy
model, therefore, the predictor matrix and observation
vector are simply written in a submatrix form as
X =
Xenergy
Xforce
Xstress
yenergy
yforce
ystress
.
y =
,
(21)
The predictor matrix X is divided into three submatri-
ces, Xenergy, Xforce, and Xstress, which contain structural
features and their polynomial contributions to the total
energies, the forces acting on atoms, and the stress ten-
sors of structures in the training dataset, respectively.
The observation vector y also has three components,
yenergy, yforce, and ystress, which contain the total en-
ergy, the forces acting on atoms, and the stress tensors of
structures in the training dataset, respectively, obtained
from DFT calculations. Using the predictor matrix and
the observation vector, coefficients of a potential energy
model are estimated by linear ridge regression.
In the case of dataset 2 for elemental aluminum, the
training data has 9,086, 1,314,879, and 54,516 entries
for the energy, the force, and the stress tensor, respec-
tively. Therefore, the predictor matrix X has a size of
(1, 378, 481, ncoeff), where ncoeff denotes the number of
coefficients of the potential energy model and ranges from
10 to 32, 850 in the potential energy models of the repos-
itory.
V. PARETO OPTIMALITY
The accuracy and computational efficiency of the
present MLP strongly depend on the given input parame-
ters. They are (1) the cutoff radius, (2) the type of struc-
tural features, (3) the type of potential energy model, (4)
the number of radial functions, (5) the polynomial order
in the potential energy model, and (6) the truncation
of the polynomial invariants, i.e, the maximum angular
max, · · · , l(pmax)
numbers of spherical harmonics, {l(2)
max }
and the polynomial order of invariants, pmax. Therefore,
a systematic grid search is performed for each system to
find their optimal values. The input parameters used for
developing MLPs can be found in the repository.
max, l(3)
However, it is difficult to find the optimal set of pa-
rameters because the accuracy and computational effi-
ciency of an MLP are conflicting properties whose trade-
off should be optimized, as pointed out in Ref. 26. In
this multiobjective optimization problem involving sev-
eral conflicting objectives, there is no single optimal so-
lution but a set of alternatives with different trade-offs
between the accuracy and the computational efficiency.
In such a case, Pareto optimal points can be optimal solu-
tions with different trade-offs [41]. Therefore, the repos-
Pareto optimal MLPs (Al, Dataset 1)
4
25
20
15
10
5
MLP3
0
10-5
10-4
MLP2
10-3
MLP1
10-2
Elapsed time (s/atom/MD step) (Single CPU core)
MLP3
MLP2
MLP1
15
Training
Test
)
t
m
o
a
V
e
m
/
(
r
o
r
r
e
n
o
i
i
t
c
d
e
r
P
)
t
m
o
a
V
e
m
/
0
(
T
F
D
–
P
L
–15M
–4
–3
–2
–1
0
–4
–3
–2
–1
0
–4
–3
–2
–1
0
DFT (eV/atom)
FIG. 1. Distribution of MLPs in a grid search to find op-
timal parameters controlling the accuracy and the computa-
tional efficiency of the MLP for elemental Al. The elapsed
time for a single point calculation is estimated using a single
core of Intel R(cid:13) Xeon R(cid:13) E5-2695 v4 (2.10 GHz). The red closed
circles show the Pareto optimal points of the distribution ob-
tained using a non-dominated sorting algorithm. The cyan
closed circles indicate the MLP with the lowest prediction er-
ror and two Pareto optimal MLPs with higher computational
cost performance. The distribution of the prediction errors
for dataset 1 is also shown.
TABLE I. Model parameters of MLP1, MLP2, and MLP3
for elemental Al.
Number of coefficients
Feature type
Cutoff radius
Number of radial functions
Model type
Polynomial order (function F )
Polynomial order (invariants)
{ l(2)
max, l(3)
max, · · · }
MLP1
2410
Invariants
12.0
20
3
3
4
[4,4,2]
MLP2 MLP3
1770
Pair
12.0
20
2
3
−
−
815
Pair
8.0
15
2
3
−
−
itory contains all Pareto optimal MLPs for each system
and each dataset.
Pareto optimal MLPs (Cu, Dataset 1)
Pareto optimal MLPs (Ga, Dataset 1)
5
25
20
15
10
5
MLP3
0
10-5
MLP2
10-4
10-3
MLP1
10-2
Elapsed time (s/atom/MD step) (Single CPU core)
Pareto optimal MLPs (Mg, Dataset 1)
25
20
15
10
5
0
10-5
MLP3
10-4
MLP2
10-3
10-2
Elapsed time (s/atom/MD step) (Single CPU core)
MLP1
Pareto optimal MLPs (Ti, Dataset 1)
70
60
50
40
30
20
10
)
m
o
t
a
/
V
e
m
(
r
o
r
r
e
n
o
i
t
c
d
e
r
P
i
)
m
o
t
a
/
V
e
m
(
r
o
r
r
e
n
o
i
t
c
d
e
r
P
i
)
m
o
t
a
/
V
e
m
(
r
o
r
r
e
n
o
i
t
c
d
e
r
P
i
25
20
15
10
5
0
10-5
MLP3
10-4
MLP2
10-3
10-2
MLP1
Elapsed time (s/atom/MD step) (Single CPU core)
Pareto optimal MLPs (Zn, Dataset 1)
25
20
15
10
5
MLP3
0
10-5
10-4
MLP2
10-3
MLP1
10-2
Elapsed time (s/atom/MD step) (Single CPU core)
Pareto optimal MLPs (Zr, Dataset 1)
70
60
50
40
30
20
10
0
10-5
10-4
MLP3
10-3
MLP2
10-2
MLP1
MLP3
0
10-5
10-4
10-3
MLP2 MLP1
10-2
Elapsed time (s/atom/MD step) (Single CPU core)
Elapsed time (s/atom/MD step) (Single CPU core)
Pareto optimal MLPs (Y, Dataset 1)
Pareto optimal MLPs (Nb, Dataset 1)
70
60
50
40
30
20
10
70
60
50
40
30
20
10
)
m
o
t
a
/
V
e
m
(
r
o
r
r
e
n
o
i
t
c
d
e
r
P
i
)
m
o
t
a
/
V
e
m
(
r
o
r
r
e
n
o
i
t
c
d
e
r
P
i
)
m
o
t
a
/
V
e
m
(
r
o
r
r
e
n
o
i
t
c
d
e
r
P
i
)
m
o
t
a
/
V
e
m
(
r
o
r
r
e
n
o
i
t
c
d
e
r
P
i
)
m
o
t
a
/
V
e
m
(
r
o
r
r
e
n
o
i
t
c
d
e
r
P
i
0
10-5
MLP3
10-4
MLP2
10-3
10-2
MLP1
Elapsed time (s/atom/MD step) (Single CPU core)
0
10-5
MLP3
10-4
10-3
MLP2
10-2
MLP1
Elapsed time (s/atom/MD step) (Single CPU core)
FIG. 2. Distribution of MLPs in a grid search for elemental Cu, Ga, Mg, Zn, Ti, Zr, Y, and Nb. The closed red circles show
the Pareto optimal points of the distribution.
VI. MLPS IN REPOSITORY
Figure 1 shows the prediction error and the computa-
tional efficiency of the Pareto optimal MLPs developed
from dataset 1 for elemental Al. Figure 2 also shows
the Pareto optimal MLPs for elemental Cu, Ga, Mg, Zn,
Ti, Zr, Y, and Nb. The prediction error is estimated
using the root mean square (RMS) error of the energy
for the test dataset. The computational efficiency is es-
timated using the elapsed time to compute the energy,
the forces and the stress tensors of a structure with 284
atoms. In Figs. 1 and 2, the elapsed time is normalized
by the number of atoms because it is proportional to the
number of atoms as shown later. The behavior of the
relationship between the prediction error and the com-
putational efficiency for the other systems can be found
in the repository.
Users of the repository can choose an appropriate MLP
from the Pareto optimal ones according to their targets
and purposes. The MLP with the lowest prediction er-
ror is denoted by “MLP1”, whereas two Pareto optimal
MLPs showing higher computational cost performance
are denoted by “MLP2” and “MLP3”. As can be seen
in Figs. 1 and 2, MLP2 and MLP3 exhibit high com-
putational efficiency without significantly increasing the
prediction error. This study introduces simple scores to
evaluate the computational cost performance from the
elapsed time t with the unit of ms/atom/step and the
prediction error ∆E with the unit of meV/atom. MLP2
and MLP3 with higher computational cost performance
minimize t + ∆E and 10t + ∆E, respectively.
Figure 1 shows the distribution of the prediction errors
for structures in dataset 1. Table I also lists the values of
the model parameters of MLP1, MLP2, and MLP3. This
information for the other Pareto optimal MLPs and the
other systems can be found in the repository.
Tables II and III list the prediction error and the com-
putational efficiency of MLPs for each elemental system
obtained from datasets 1 and 2, respectively. MLP2 and
MLP3 exhibit high computational efficiency while avoid-
ing a significant increase of the prediction error. There-
fore, MLP2 and MLP3 can be regarded as better poten-
tials than MLP1 for most practical purposes.
Figure 3 shows the elapsed times of single point calcu-
lations for structures with up to 32,000 atoms using the
EAM potential [42], MLP1, MLP2, and MLP3 for ele-
mental Al. Structures were made by the expansion of the
fcc conventional unit cell with a lattice constant of 4 ˚A.
As can be seen in Fig. 3, linear scaling with respect to the
[1] A.
tial
https://sekocha.github.io/repository/index-e.html.
Seko,
Repository
Poten-
University,
Learning
Machine
Kyoto
at
6
number of atoms is achieved in all the MLPs. Although
the performance for only three MLPs is shown here, the
other MLPs also exhibit linear scaling with respect to
the number of atoms. Therefore, the computational time
required for a calculation of nstep steps for a structure
with natom atoms can be estimated as t × natom × nstep,
where t is the elapsed time per atom for a single point
calculation listed in the repository.
103
)
p
e
t
s
D
M
/
s
(
e
m
i
t
d
e
s
p
a
E
l
102
101
100
10-1
10-2
10-3
10-4
10-5
MLP1
MLP2
MLP3
EAM
10-6
100
101
102
103
Number of atoms
104
105
FIG. 3.
Dependence of the computational time required
for a single point calculation on the number of atoms. The
elapsed time is measured using a single core of Intel R(cid:13) Xeon R(cid:13)
E5-2695 v4 (2.10 GHz).
VII. CONCLUSION
An MLP repository developed by a systematic applica-
tion of the procedure to obtain Pareto optimal MLPs has
In particular, MLPs
been demonstrated in this paper.
with high computational cost performance, showing high
computational efficiency without increasing the predic-
tion error, are useful for most practical purposes. Cur-
rently, many Pareto optimal MLPs are available in the
repository from the website, and the number of MLP en-
tries in the repository is continuously increasing. There-
fore, the repository should be useful in performing accu-
rate and fast atomistic simulations.
ACKNOWLEDGMENTS
This work was supported by a Grant-in-Aid for Sci-
entific Research (B) (Grant Number 19H02419) and a
Grant-in-Aid for Scientific Research on Innovative Areas
(Grant Number 19H05787) from the Japan Society for
the Promotion of Science (JSPS).
[2] S.
Lorenz,
A. Groß,
and M.
Scheffler,
Chem. Phys. Lett. 395, 210 (2004).
[3] J. Behler and M. Parrinello, Phys. Rev. Lett. 98, 146401
(2007).
7
TABLE II. Prediction error and computational efficiency of MLPs constructed from dataset 1 for 31 elemental systems. The
normalized elapsed time for a single point calculation, the RMS error for the energy, and the RMS error for the force are
denoted by t (s/atom/step), ∆E (meV/atom), and ∆f (eV/˚A), respectively. MLP1 shows the lowest prediction error of ∆E.
MLP2 and MLP3 show the lowest values of t + ∆E and 10t + ∆E, respectively.
t
10.71
6.74
21.65
5.79
12.55
15.83
2.02
12.64
6.60
6.82
21.73
21.72
4.01
21.67
18.53
6.89
21.71
21.69
21.68
21.65
12.48
21.71
21.53
21.70
21.67
21.66
6.95
21.69
12.77
3.27
6.71
MLP1
∆E
1.9
0.5
0.5
1.0
1.1
1.0
4.6
2.8
0.5
1.7
0.5
0.9
0.8
0.5
0.4
0.1
0.4
2.4
0.2
2.4
0.5
0.7
0.5
1.6
1.4
0.5
2.2
2.9
0.8
2.3
1.4
∆f
0.004
0.006
0.006
0.005
0.019
0.004
0.016
0.061
0.001
0.004
0.006
0.039
0.004
0.005
0.000
0.001
0.002
0.065
0.001
0.048
0.001
0.017
0.003
0.056
0.035
0.005
0.048
0.080
0.016
0.008
0.044
t
0.07
0.30
0.66
0.30
1.54
0.07
0.13
2.31
0.16
0.10
0.29
1.85
0.23
0.22
0.09
0.13
0.18
2.16
0.10
2.18
0.12
0.22
0.22
0.91
1.85
0.29
1.09
2.32
0.29
0.18
1.84
MLP2
∆E
2.0
0.9
0.7
1.2
2.0
1.1
5.0
3.6
0.5
2.0
0.6
1.4
1.0
0.7
0.5
0.2
0.5
3.3
0.2
2.8
0.5
2.6
0.7
3.3
1.8
0.8
3.1
3.9
2.2
2.5
2.4
∆f
0.008
0.014
0.012
0.011
0.026
0.011
0.011
0.070
0.002
0.011
0.014
0.051
0.008
0.014
0.001
0.003
0.006
0.078
0.001
0.058
0.001
0.048
0.008
0.071
0.047
0.012
0.058
0.092
0.040
0.011
0.055
t
0.03
0.07
0.05
0.10
0.13
0.07
0.05
0.80
0.12
0.03
0.10
0.18
0.07
0.07
0.09
0.03
0.10
0.15
0.05
0.10
0.09
0.18
0.13
0.22
0.13
0.10
0.07
0.14
0.13
0.07
0.05
MLP3
∆E
2.2
1.8
1.8
1.8
5.5
1.1
5.3
5.5
0.6
2.2
1.2
4.3
1.2
1.2
0.5
0.7
0.6
9.3
0.4
9.0
0.7
3.0
0.8
8.5
5.5
1.6
8.5
12.0
3.3
2.7
11.0
∆f
0.011
0.016
0.027
0.013
0.043
0.011
0.018
0.082
0.001
0.013
0.015
0.103
0.010
0.014
0.001
0.005
0.007
0.138
0.002
0.127
0.001
0.049
0.009
0.126
0.100
0.014
0.129
0.177
0.044
0.014
0.116
Ag
Al
Au
Ba
Be
Ca
Cd
Cr
Cs
Cu
Ga
Hf
Hg
In
K
Li
Mg
Mo
Na
Nb
Rb
Sc
Sr
Ta
Ti
Tl
V
W
Y
Zn
Zr
[4] A. P. Bart´ok, M. C. Payne, R. Kondor, and G. Cs´anyi,
[12] A. Glielmo,
P.
Sollich,
and A. De Vita,
Phys. Rev. Lett. 104, 136403 (2010).
Phys. Rev. B 95, 214302 (2017).
[5] J. Behler, J. Chem. Phys. 134, 074106 (2011).
[6] J. Han, L. Zhang, R. Car, and E. Weinan, Commun.
[13] A.
Seko,
A. Takahashi,
and
I. Tanaka,
Phys. Rev. B 90, 024101 (2014).
Comput. Phys. 23, 629 (2018).
[14] A.
Seko,
A. Takahashi,
and
I. Tanaka,
[7] N.
Artrith
and
A.
Urban,
Phys. Rev. B 92, 054113 (2015).
Comput. Mater. Sci. 114, 135 (2016).
[15] A. Takahashi,
A.
Seko,
and
I. Tanaka,
[8] N. Artrith,
A. Urban,
and G. Ceder,
Phys. Rev. Mater. 1, 063801 (2017).
Phys. Rev. B 96, 014112 (2017).
[9] W. J. Szlachta, A. P. Bart´ok,
Phys. Rev. B 90, 104108 (2014).
and G. Cs´anyi,
[16] A. Thompson, L. Swiler, C. Trott, S. Foiles,
G. Tucker, J. Comput. Phys. 285, 316 (2015).
and
[17] M. A. Wood and A. P. Thompson, J. Chem. Phys. 148,
[10] A. P. Bart´ok, J. Kermode, N. Bernstein, and G. Cs´anyi,
241721 (2018).
Phys. Rev. X 8, 041048 (2018).
[18] C. Chen, Z. Deng, R. Tran, H. Tang, I.-H. Chu, and
[11] Z. Li,
J. R. Kermode,
and A. De Vita,
S. P. Ong, Phys. Rev. Mater. 1, 043603 (2017).
Phys. Rev. Lett. 114, 096405 (2015).
[19] A. V. Shapeev, Multiscale Model. Simul. 14, 1153 (2016).
TABLE III. Prediction error and computational efficiency of MLPs constructed from dataset 2 for 47 elemental systems.
8
t
18.51
28.69
23.39
23.49
36.80
8.69
23.45
23.66
23.41
13.95
37.17
23.20
23.38
23.45
13.65
23.61
23.59
22.64
7.10
37.02
23.59
23.62
18.21
3.20
36.63
14.15
23.76
23.98
14.51
23.19
36.57
37.20
13.78
23.43
23.50
23.92
23.58
23.45
11.80
22.94
13.20
24.14
14.23
22.71
36.55
23.62
14.57
MLP1
∆E
1.1
1.8
5.1
3.1
0.7
3.8
2.8
0.4
0.7
6.7
0.4
8.2
1.1
2.7
4.2
3.6
0.7
9.0
0.1
2.5
0.2
0.3
7.3
1.5
6.5
10.2
7.3
1.2
2.6
5.3
0.0
9.8
6.4
8.5
3.4
3.0
4.1
1.7
0.7
6.5
4.4
0.8
6.4
8.3
2.6
1.1
5.9
∆f
0.019
0.033
0.125
0.028
0.013
0.078
0.130
0.006
0.011
0.221
0.001
0.022
0.028
0.058
0.121
0.014
0.016
0.251
0.001
0.057
0.004
0.006
0.211
0.196
0.182
0.304
0.176
0.028
0.073
0.137
0.000
0.274
0.186
0.234
0.120
0.211
0.077
0.036
0.007
0.190
0.143
0.015
0.188
0.247
0.050
0.017
0.130
t
0.76
1.11
1.98
0.75
1.44
1.10
2.06
0.93
0.75
2.59
0.22
0.28
1.12
1.11
2.51
0.75
0.93
3.19
0.22
1.96
0.75
0.75
3.48
0.71
2.55
3.21
1.98
0.94
0.99
1.11
0.13
1.98
1.98
3.19
2.00
1.98
1.11
1.12
0.76
3.17
1.98
0.72
2.54
3.17
1.98
0.99
0.82
MLP2
∆E
1.3
3.0
8.5
3.9
1.9
5.7
4.6
1.1
1.8
8.3
0.6
8.8
1.8
5.0
6.0
4.7
1.2
10.8
0.4
3.8
0.9
0.8
8.6
2.0
7.6
11.4
9.8
2.4
4.0
7.3
0.5
13.5
8.5
9.9
6.0
4.0
7.2
3.5
1.6
7.7
6.4
2.2
8.4
9.8
3.9
1.9
9.0
∆f
0.033
0.040
0.144
0.035
0.021
0.088
0.121
0.013
0.018
0.226
0.002
0.034
0.039
0.067
0.137
0.020
0.021
0.260
0.002
0.069
0.010
0.009
0.226
0.197
0.183
0.300
0.186
0.037
0.080
0.148
0.002
0.291
0.192
0.237
0.411
0.234
0.088
0.049
0.014
0.195
0.146
0.023
0.196
0.254
0.062
0.024
0.139
t
0.28
0.28
0.58
0.28
0.13
0.28
0.62
0.17
0.28
0.35
0.09
0.10
0.27
0.58
0.75
0.13
0.28
0.75
0.08
0.69
0.10
0.14
0.75
0.17
0.75
0.75
1.11
0.17
0.28
0.72
0.07
0.71
0.71
0.75
0.75
0.75
0.75
0.58
0.18
0.75
0.69
0.15
0.71
0.99
0.71
0.27
0.75
MLP3
∆E
2.5
7.3
13.5
7.0
4.6
11.8
9.4
3.5
3.2
18.2
0.9
9.5
4.5
7.6
8.6
6.5
2.8
16.8
0.7
6.4
1.7
2.9
15.6
2.7
11.8
18.8
11.8
5.0
7.6
8.2
0.9
18.4
12.6
16.4
8.7
5.9
8.9
5.5
3.2
12.3
9.2
5.0
12.3
14.7
6.7
4.6
9.1
∆f
0.035
0.061
0.178
0.056
0.034
0.132
0.166
0.030
0.026
0.324
0.003
0.051
0.044
0.076
0.148
0.038
0.028
0.295
0.003
0.079
0.019
0.029
0.266
0.226
0.212
0.348
0.192
0.057
0.096
0.156
0.004
0.320
0.217
0.279
0.124
0.135
0.095
0.061
0.022
0.221
0.163
0.060
0.228
0.286
0.070
0.038
0.140
Ag
Al
As
Au
Ba
Be
Bi
Ca
Cd
Cr
Cs
Cu
Ga
Ge
Hf
Hg
In
Ir
K
La
Li
Mg
Mo
Na
Nb
Os
P
Pb
Pd
Pt
Rb
Re
Rh
Ru
Sb
Sc
Si
Sn
Sr
Ta
Ti
Tl
V
W
Y
Zn
Zr
|
ai_researcher | 2 | Deep_Insights_into_Automated_Optimization_with_Large_Language_Models_and_Evolutionary_Algorithms.pdf | Deep Insights into Automated Optimization with Large Language Models and
Evolutionary Algorithms
He Yu (a,b), Jing Liu (a,b)
(a) School of Artificial Intelligence, Xidian University, 2 South Taibai Road, Xi'an, 710071, Shaanxi, China
(b) Guangzhou Institute of Technology, Xidian University, Knowledge City, Guangzhou, 510555, Guangdong, China
arXiv:2410.20848v1 [cs.NE] 28 Oct 2024
Abstract
Designing optimization approaches, whether heuristic or meta-heuristic, often requires extensive manual intervention,
and the resulting methods struggle to generalize across diverse problem domains. The integration of Large Language Models (LLMs) and
Evolutionary Algorithms (EAs) presents a promising new way to overcome these limitations to make optimization
more automated, where LLMs function as dynamic agents capable of generating, refining, and interpreting optimiza-
tion strategies, while EAs explore complex solution spaces efficiently through evolutionary operators. Since this
synergy enables a more efficient and creative searching process, in this paper, we first conduct an extensive review
of recent research on the application of LLMs in optimization, focusing on LLMs’ dual functionality as solution
generators and algorithm designers. Then, we summarize the common and valuable design elements in existing work and pro-
pose a novel LLM-EA paradigm for automated optimization. Furthermore, focusing on this paradigm, we conduct
an in-depth analysis of innovative methods for three key components, namely individual representation, variation
operators, and fitness evaluation, addressing challenges related to heuristic generation and solution exploration, par-
ticularly from the perspective of LLM prompts. Our systematic review and thorough analysis of the paradigm can
help researchers better understand the current research and boost the development of combining LLMs with EAs for
automated optimization.
Keywords: evolutionary algorithms, large language models, optimization, prompt engineering, deep learning
1. Introduction
Optimization [1] plays a pivotal role in solving complex challenges across various industries, from logistics and
manufacturing to machine learning and healthcare. At its core, optimization seeks to identify the best solution from
a set of candidates according to specific objectives, while adhering to constraints. The growing scale and complexity
of real-world optimization problems demand approaches that can navigate vast search spaces efficiently. Traditional
optimization methods, such as gradient-based approaches and mathematical programming, have long been employed
for problems with well-defined objective functions. However, these methods often struggle with real-world problems
that are non-differentiable, multi-modal, or laden with constraints and uncertainties. This gap has driven the devel-
opment of more flexible, adaptable, and scalable methods, leading to the rise of heuristics [2], which aim to provide
approximate solutions efficiently.
Heuristics emerged as practical tools for generating “good-enough” solutions without requiring exhaustive searches.
While heuristics have been successful in many applications, they come with limitations. Traditional heuristics often
require careful manual design, limiting their adaptability to new problems. Meta-heuristics [3, 4], such as genetic
algorithms [5] and simulated annealing [6], offer more general approaches but often require parameter fine-tuning and
expert knowledge. Hyper-heuristics [7, 8] attempt to automate the selection or generation of heuristics, representing
a step forward. However, they remain constrained by predefined low-level heuristics or components, limiting their
adaptability to highly dynamic and complex problems.
The integration of Large Language Models (LLMs) [9] and Evolutionary Algorithms (EAs) [10] presents a
promising new way to overcome these limitations. LLMs function as dynamic agents capable of generating, refining,
and interpreting optimization strategies, while EAs explore complex solution spaces efficiently through evolutionary
Figure 1: The major development of heuristics
operators like selection, mutation, and crossover. Together, LLM-EA offers the potential to reduce the need for
manual tuning and expert knowledge, paving the way for more automated and adaptable optimization frameworks.
This paper makes several key contributions to the study of integrating LLMs with EAs for automated optimiza-
tion. First, we provide a brief review of the historical development of heuristics, from traditional methods to hyper-
heuristics, offering readers a foundational understanding of the field. Then, we conduct an extensive review of recent
research on the application of LLMs in optimization, highlighting their roles as searching operators, as solvers, and in
algorithm design. Building on these insights, we summarize the common and valuable design elements in existing work and
propose a novel LLM-EA paradigm for automated optimization, which combines the strengths of LLMs and EAs
to enhance the efficiency and adaptability of optimization processes. Furthermore, focusing on the paradigm, we
conduct an in-depth analysis of innovative methods for the three key components, namely individual representation,
variation operators, and fitness evaluation, addressing challenges related to heuristic generation and solution explo-
ration. Finally, we identify current challenges and outline future directions for research, emphasizing the potential for
further advancements in generalization, transparency, and scalability in LLM-EA systems.
The remainder of this paper is organized as follows: Section 2 presents the evolution of heuristics in automated
optimization, offering a detailed overview of traditional, meta-, and hyper-heuristic approaches. Section 3 explores
the key technologies of LLMs and EAs that enable effective heuristic and solution generation. Section 4 discusses
recent advancements in LLM-based optimization, followed by a detailed analysis of the novel LLM-EA automated
optimization paradigm in Section 5.
In Section 6, we identify and address the challenges associated with LLM-
EA systems and suggest future research directions to enhance their scalability and transparency. Finally, Section 7
concludes the paper with key insights and the potential impact of this research on future optimization methodologies.
2. Evolution of Heuristics for Automated Optimization
Heuristics are problem-solving techniques designed to provide approximate solutions to optimization problems
where finding the exact solution is computationally prohibitive. Throughout the evolution from heuristics to meta-
heuristics and hyper-heuristics as shown in Figure 1, the key goal is to create a more generalized, flexible, and
automated approach to solving optimization problems. Each development reduces the dependence on problem-
specific adjustments and domain-specific expertise. In the following subsections, we briefly discuss each of these
developments.
2.1. Pre-Heuristic Approaches to Optimization
Before the development of heuristics, optimization largely relied on methods such as exhaustive search, linear
programming, and gradient-based techniques. While effective for small and well-structured problems, these methods
struggled with the increasing size and complexity of modern optimization tasks. Exhaustive search, for example, which
systematically explores all possible solutions, quickly becomes computationally infeasible for combinatorial optimization
problems like the Traveling Salesman Problem (TSP).
Traditional optimization techniques, such as mathematical programming and gradient-based methods, offered so-
lutions for continuous variables and smooth objective functions but were insufficient for non-differentiable or discrete
problems. These limitations spurred the need for more flexible and scalable approaches, which led to the rise of
heuristic methods.
2.2. Classical Heuristics
The first wave of heuristics introduced simple yet practical techniques like construction heuristics and local search.
Construction heuristics [11] build solutions incrementally, often making greedy decisions at each step. For example,
the nearest-neighbor heuristic for the TSP selects the closest unvisited city at each step, offering quick but often
suboptimal solutions. Local search techniques [12, 13, 14] start with an initial solution and attempt to improve it by
making small adjustments within its neighborhood, such as the 2-opt heuristic [15] for the TSP, which swaps edges in
a tour to reduce the distance.
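As a concrete illustration of the construction heuristics discussed above, the following minimal Python sketch implements the nearest-neighbor rule for the TSP; the city coordinates are invented for demonstration, and the result is quick to compute but generally suboptimal.

import math

def nearest_neighbor_tour(cities, start=0):
    # Construction heuristic: repeatedly visit the closest unvisited city.
    unvisited = set(range(len(cities))) - {start}
    tour = [start]
    while unvisited:
        last = tour[-1]
        nxt = min(unvisited, key=lambda c: math.dist(cities[last], cities[c]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

cities = [(0, 0), (3, 1), (1, 4), (5, 2), (2, 2)]
print(nearest_neighbor_tour(cities))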
While these methods provided efficient ways to tackle complex problems, heuristics require careful manual design
and are easily trapped in local optima. These challenges highlighted the need for more
robust strategies that could better balance exploration and exploitation in the searching process.
2.3. Rise of Meta-Heuristics
In response to the limitations of classical heuristics, meta-heuristics emerged as a more flexible and adaptable
framework. Unlike classical heuristics, which are typically problem-specific, meta-heuristics are designed to be gen-
eral algorithms applicable across a wide range of optimization problems, both combinatorial and continuous.
A variety of meta-heuristic algorithms have been developed to tackle complex problems across various domains.
Among them, EAs stand out as a prominent representative, like Genetic Algorithms (GAs) [5] , Memetic Algorithms
[16], Particle Swarm Optimization (PSO) [17], Ant Colony Optimization (ACO) [18]. These algorithms share a
common trait: they are designed to explore the solution space efficiently by leveraging principles inspired by natural
processes or swarm intelligence.
Despite their differences in specific implementations and searching operations, these meta-heuristic algorithms all
embody a general framework that involves an iterative searching process. They initialize a set of candidate solutions,
evaluate their quality based on a predefined objective function, and iteratively update the solutions through various
mechanisms such as selection, crossover, mutation, or information sharing among individuals. This process allows
them to escape local minima and explore diverse regions of the searching space, thereby increasing the likelihood of
finding globally optimal solutions.
However, despite their generalization capabilities, meta-heuristics still face key challenges, particularly their re-
liance on carefully tuned parameters and the need for expert knowledge. This dependence on domain knowledge
for tasks like designing fitness functions, selecting appropriate operators (e.g., mutation and crossover), and adjusting
algorithmic components reduces the true generality of meta-heuristics and limits their applicability across diverse
optimization problems without significant manual intervention.
2.4. Hyper-Heuristics: Automating Heuristic Design
To reduce the reliance on expert knowledge and achieve more automated optimization, hyper-heuristics address
problems by searching for and generating heuristics tailored to the problem. They operate at a more abstract level.
Hyper-heuristics rely on predefined low-level heuristics, typically employing either heuristic selection or heuristic
generation [7, 8, 19, 20], which output heuristics through various methods, including machine learning prediction
or EA search. The machine learning approach is more generalized, relying heavily on extensive training data for
prediction, while the EA-based approach performs an iterative search using problem-specific training data.
Despite hyper-heuristics’ potential to generalize across various problems, they are constrained by the quality and
diversity of the available low-level heuristics or components, and if these components are not robust or flexible enough,
the generated heuristics may perform poorly across different problems. Additionally, no matter the hyper-heuristics
relying on training a prediction model or EAs, their effectiveness highly depends on the quality and quantity of training
data. Poor or insufficient training data may result in suboptimal heuristics that fail to generalize effectively to unseen
problem instances or variations.
2.5. Toward a New Era of Automated Optimization
The integration of LLMs and EAs offers a promising new route to automated optimization, leveraging
LLMs' generative capabilities together with EAs' iterative optimization techniques to yield a framework that
can both solve optimization problems and design optimization algorithms. The key innovation in this integration
is the dual role that LLMs play.
First, LLMs can directly generate solutions by interpreting prompt content and applying searching operators
such as mutation and crossover. In this capacity, LLMs function as meta-heuristic agents, dynamically produc-
ing solutions based on real-time feedback from the optimization process. Second, LLMs can generate and refine
heuristics, that is, problem-solving strategies, thereby assuming the role of a hyper-heuristic. This allows for continuous adaptation
and improvement of both the search strategies and the resulting solutions, enhancing the flexibility and effectiveness
of the optimization process.
The LLM-EA automated optimization paradigm holds the potential for generalized, scalable optimization
across various domains, such as network design, logistics, and machine learning model optimization. By enabling the
automated design of both solutions and the algorithms that generate them, this paradigm represents a significant leap
forward in intelligent, adaptive problem solving, offering new opportunities for addressing complex, high-dimensional
optimization challenges.
3. Fundamental Technologies in LLM and EA for Automated Optimization
3.1. Overview of LLMs and EAs
Large Language Models, such as GPT-4 and BERT [21], are built on the transformer architecture, which has
revolutionized natural language processing (NLP). This architecture allows LLMs to process input sequences in par-
allel, rather than sequentially as seen in earlier models like RNNs [22] or LSTMs [23], making them significantly
more efficient. The key innovation lies in the self-attention mechanism [24], which enables the model to weigh the
importance of different words in a sentence relative to one another, regardless of their position in the text. This capa-
bility is crucial for understanding long-range dependencies in language and for capturing both syntactic and semantic
information with high accuracy. The success of LLMs in NLP stems from pretraining on vast amounts of text, which
allows them to generalize across various domains and tasks. By fine-tuning on specific tasks, LLMs can generate
coherent and contextually relevant text, even in highly specialized fields.
Evolutionary Algorithms, on the other hand, are inspired by the process of natural evolution and use mechanisms
like selection, mutation, and crossover to solve optimization problems. EAs are particularly useful when the search
space is large, complex, or non-differentiable, rendering traditional methods like gradient descent ineffective. EAs
begin with an initial population of candidate solutions, which are evaluated using a fitness function to measure their
performance. High-performing individuals are more likely to be selected for reproduction. The crossover operation
combines features from two or more parent solutions to create offspring, while mutation introduces random variations
to maintain diversity. This iterative process refines solutions over multiple generations, making EAs especially suitable
for black-box optimization, where the internal structure of the system is unknown.
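To make this iterative process concrete, the following is a minimal EA sketch in Python; the objective (a negated sphere function, so higher is better) and all parameter values are illustrative choices rather than settings from any specific algorithm discussed in this paper.

import random

def fitness(x):
    # Toy objective: negated sphere function, maximized at the origin.
    return -sum(v * v for v in x)

def evolve(dim=5, pop_size=20, generations=100, mut_sigma=0.1):
    pop = [[random.uniform(-1, 1) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: the fitter half of the population becomes the parents.
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        offspring = []
        while len(offspring) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            # Crossover: uniform mix of two parents.
            child = [random.choice(pair) for pair in zip(a, b)]
            # Mutation: small Gaussian perturbation maintains diversity.
            child = [v + random.gauss(0, mut_sigma) for v in child]
            offspring.append(child)
        pop = parents + offspring
    return max(pop, key=fitness)

best = evolve()
print(best, fitness(best))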
3.2. Understanding Prompts in LLMs
A prompt [25] is essentially the input provided to LLMs to guide its output. The purpose of a prompt is to specify
what the LLMs should generate, whether it is answering a question, writing a paragraph, or completing a task based
on a given example. A text prompt example is given in Figure 2(a). The simplicity or complexity of a prompt depends
on the task at hand. At its core, a prompt can be thought of as the instruction or query that triggers the model’s
response. This is a foundational component of using generative LLMs like GPT or BERT, as it sets the direction for
the model’s output.
3.2.1. Prompt techniques
Prompt techniques refer to structured approaches for designing, formatting, and sequencing prompts to achieve
optimal generative performance from LLMs. Common prompt techniques include:
Zero-Shot prompt [26]: In this technique, LLMs are tasked with performing a job without being given any
examples; the model relies solely on the directions stated in the prompt. As depicted in Figure 2(b), zero-shot prompting tests
Figure 2: The examples of prompting
the LLMs’ capacity to comprehend and carry out the task solely based on the provided instructions, devoid of any
supplementary context or samples.
Few-Shot prompt [27]: As shown in Figure 2(c), this approach entails furnishing a handful of examples alongside
the task directions to assist in guiding the model. These instances are commonly known as exemplars. By presenting
the LLM with a small number of pertinent cases, few-shot prompting can enhance its performance on the task by offering
a clearer understanding of what is anticipated.
Chain-of-Thought prompt [28]: This method encourages LLMs to dissect their reasoning process into consec-
utive steps before reaching the final answer. As illustrated in Figure 2(d), the Chain-of-Thought prompt can boost
the model’s effectiveness on tasks that necessitate logical deduction or multi-step problem-solving. By prompting the
model to articulate its thought process, this technique can make the model’s reasoning more transparent and accurate.
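The short prompt texts below illustrate the three techniques in miniature; the wording and the sentiment/route examples are invented for illustration and are not the prompts shown in Figure 2.

# Zero-shot: instructions only, no examples.
zero_shot = ("Classify the sentiment of this review as positive or negative:\n"
             "'The battery dies within an hour.'")

# Few-shot: a handful of exemplars precede the actual query.
few_shot = ("Review: 'Great sound quality.' -> positive\n"
            "Review: 'Stopped working after a week.' -> negative\n"
            "Review: 'The battery dies within an hour.' ->")

# Chain-of-thought: ask the model to reason step by step before answering.
chain_of_thought = ("A tour visits cities A, B, C and returns to A. "
                    "Distances: A-B=2, B-C=3, C-A=4.\n"
                    "Think step by step: add the legs of the tour one at a time, "
                    "then state the total length.")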
3.2.2. Prompt optimization
Prompt optimization [29, 30] refers to the process of refining prompts to enhance the generative performance of
LLMs. It involves iteratively adjusting prompts, experimenting with different variations, and using automated tech-
niques to optimize for accuracy, efficiency, and output relevance. The following principles are essential for effective
prompt optimization:
• Prompt Ensembling: This technique generates multiple variations of a prompt and aggregates their outputs to
improve response quality and diversity.
• Prompt Tuning: Fine-tuning the structure, wording, and format of prompts can significantly impact the model’s
output quality, allowing for more targeted and precise results.
• Self-reflective Prompts: In this iterative approach, the LLM evaluates its own responses, identifies potential
weaknesses, and suggests improvements, enabling continuous refinement.
By applying these principles, prompt optimization can produce more reliable and fine-tuned outputs, pushing the
capabilities of LLMs in complex problem-solving scenarios.
3.3. EAs for Prompt Optimization
EAs have been used to solve optimization problems arising in LLMs, especially prompt optimization. Prompts
can be categorized into continuous prompts, which use numerical vectors (embedding vectors) to influence LLMs’
behavior directly [31] and text prompts, which are natural language instructions. EAs can effectively optimize
prompts by exploring different configurations and selecting the most effective ones.
Continuous prompts involve tuning embedding vectors that act as "soft prompts", allowing fine control over
the LLM’s responses. For example, Black-Box Tuning (BBT) [32] applies EAs to optimize continuous prompts
represented as embedding vectors, which iteratively adjusts these vectors to improve performance on various language
tasks, such as sentiment analysis and question answering. BBT shows that continuous prompts can be fine-tuned using
EAs even when the LLM is treated as a black-box system.
For text prompts, EAs adjust phrasing and structure to find better ways of prompting the LLM. For example,
EvoPrompt [33] utilizes evolutionary operators like mutation and crossover to modify parts of prompts, enabling
the exploration of new variations that might lead to better performance. PromptBreeder [34] takes this further by
evolving both the task-specific prompts and the evolutionary operator used to refine these prompts. This dual evolution
ensures that the model not only generates high-quality prompts but also refines the method of prompt generation itself,
ultimately leading to more robust outcomes when the LLM is later applied to complex optimization tasks.
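A minimal sketch of such LLM-driven variation of text prompts is given below, in the spirit of EvoPrompt; call_llm and score are placeholders for an actual LLM interface and a downstream task metric, and the meta-prompt wording is an illustrative assumption rather than the prompts used in the cited works.

import random

def call_llm(prompt: str) -> str:
    # Stand-in for a real LLM API call; returns the model's text response.
    raise NotImplementedError

def llm_crossover(prompt_a: str, prompt_b: str) -> str:
    # Ask the LLM to merge the strengths of two parent prompts.
    meta = ("You are given two prompts for the same task.\n"
            f"Prompt A: {prompt_a}\nPrompt B: {prompt_b}\n"
            "Combine their best parts into ONE new prompt. Return only the new prompt.")
    return call_llm(meta)

def llm_mutation(prompt: str) -> str:
    # Ask the LLM to rephrase a prompt while preserving its intent.
    meta = (f"Rewrite the following prompt so it is clearer, without changing the task:\n"
            f"{prompt}\nReturn only the rewritten prompt.")
    return call_llm(meta)

def evolve_prompts(population, score, generations=10):
    for _ in range(generations):
        population.sort(key=score, reverse=True)
        parents = population[: len(population) // 2]
        children = [llm_mutation(llm_crossover(*random.sample(parents, 2)))
                    for _ in range(len(population) - len(parents))]
        population = parents + children
    return max(population, key=score)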
As can be seen, EAs can improve the quality of the prompts that guide LLMs to generate better responses, which is
a natural application of EAs. More creatively, EAs have also been combined with LLMs to solve optimization problems;
this is the focus of this paper and is introduced in the following sections.
4. Development of LLM-based Optimization and LLM-EA Automated Optimization Paradigm
In this section, we first summarize existing work on LLM-based optimization, highlighting key techniques and
common patterns. Based on these insights, we propose a novel paradigm where LLMs generate solutions and heuris-
tics, while EAs iteratively refine them. This LLM-EA paradigm aims to automate and enhance optimization processes,
reducing the need for manual intervention and improving adaptability across diverse problems.
4.1. LLM-based Optimization
LLMs, with their vast knowledge base and advanced natural language understanding, reasoning, and problem-
solving capabilities, have garnered significant attention in the field of optimization. Research efforts in this area have
primarily focused on two main directions: exploring the searching abilities of LLMs and using LLMs to model
optimization problems. While the focus of this paper is on the former, for more details on the latter, readers can
refer to [35, 36]. There has also been research into using the fundamental mechanism behind LLMs—Transformer
architecture—to solve optimization problems, which is also outside the scope of this paper, and readers can refer to
[37, 38, 39] for further details.
Our focus is on how LLMs function as searching operators within optimization processes. Regardless of the
type of problem being addressed, the majority of research has treated LLMs as searching operators, embedding them
into iterative procedures or coupling them with EAs. Certainly, LLMs have also been employed in other stages of
the optimization process, such as initialization, evaluation, and selection. However, the role of LLMs as searching
operators remains the most valuable and complex to execute effectively. Therefore, in this section, we delve into how
LLMs have been designed and used as searching operators. For a more general overview, refer to [40, 41].
As searching operators, LLMs have been applied to a variety of optimization problems, extending their reach
from prompt optimization, as discussed above, to classical numerical and combinatorial optimization problems, and
even to automatic algorithm design. In prompt optimization, optimization techniques have been used to improve
the quality of prompts [31, 32, 42, 43, 44, 45]. In fact, many of these studies [31, 42, 43, 44] have already treated
LLMs as searching operators to directly optimize prompts, given that text is the format LLMs naturally excel in.
Recognizing that LLMs can optimize text, and that numerical values can also be treated as a form of text, researchers
have begun exploring whether LLMs can solve classical optimization problems directly [46]. They applied LLM-
based searching operators to continuous numerical optimization, combinatorial optimization, and more complex
optimization problems [47, 48, 49]. In these cases, LLMs are designed to generate candidate solutions directly for the
given problem, effectively functioning as solvers.
However, as research advanced, it became apparent that while LLMs are exceptional at generating text, their ability
to handle numerical optimization was somewhat limited [50]. On the other hand, LLMs have shown remarkable
capabilities in code generation [51, 52], leading researchers to shift their focus towards guiding LLMs to design
optimization algorithms rather than directly solving optimization problems. In this approach, LLMs are tasked with
designing either specific components of an algorithm or entire algorithms. These algorithm components or complete
algorithms are expressed in natural language, pseudo-code, or real code. Given that LLMs are more proficient at
handling code than numerical values, there has been growing interest in utilizing LLMs for automated algorithm
design.
Next, we focus on work that uses LLMs as solvers or for automated algorithm design, which represents the core of
this field; we also analyze the key developments and technologies that have emerged.
4.2. LLMs as a Solver
When LLMs are treated as solvers and asked to generate solutions for optimization problems directly, the core
technology lies in designing prompts for LLM-based searching operators. In general, prompts contain two main
types of information: the problem details and the required output from the LLM. In this case, the output is typically
straightforward—producing one or more candidate solutions to the problem at hand. The real challenge arises in how
the problem is presented to the LLM. The problem-related information in prompts can be categorized into three types:
available solutions, quality of those solutions, and guidance for the LLM’s search direction or pattern. Most
existing studies provide candidate solutions as examples, but the use of solution quality and search guidance varies
across different works. Below, we summarize the major developments in this area.
Initially, researchers simply provided LLMs with candidate solutions. A notable example of this is Meyerson et
al.’s work [53], where LLMs were employed as crossover operators in EAs. By providing pairs of existing solutions,
the LLMs generated new solutions based on these examples. No information about solution quality or search guidance
was included in this work. Another early study is Lehman et al.’s work [54], where LLMs were designed as mutation
operators in genetic programming (GP). Since GP deals with codes, the LLM-based mutation operators generated
code solutions for the problems being tackled. Notably, this differs from later work on code generation for algorithm
design. In [54], basic guidance was introduced alongside the solutions, with three LLM-based mutation operators requiring
the model to make changes to the current solution, make small changes to it, or modify its parameters.
While these instructions were simple, they introduced a level of search direction, influencing subsequent
research. Hemberg et al. [55] applied LLMs to each part of GP, instead of just the mutation operator.
As research advanced, it became evident that providing only candidate solutions was insufficient; including the
quality of solutions helped LLMs learn how solutions differ. A representative work in this space is OPRO, pro-
posed by Yang et al.
[56], which provided objective function values alongside each solution, though no specific
search guidance was offered. OPRO tested small-scale linear regression problems and TSPs, with a focus still on
prompt optimization. Following OPRO, the inclusion of solution quality became a standard practice in LLM-based
optimization.
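The helper below sketches how such a solver prompt can be assembled from the three types of problem-related information (available solutions, their quality, and optional search guidance); the wording and the tiny TSP instance are invented and do not reproduce the actual OPRO meta-prompt.

def build_solver_prompt(distance_matrix, scored_tours, guidance=""):
    # Assemble: problem data + past solutions with objective values + optional guidance.
    lines = ["You are solving a traveling salesman problem.",
             f"Distance matrix: {distance_matrix}",
             "Previously evaluated tours (shorter is better):"]
    for tour, length in sorted(scored_tours, key=lambda p: p[1]):
        lines.append(f"  tour={tour}, length={length}")
    if guidance:
        lines.append(f"Search guidance: {guidance}")
    lines.append("Propose a new tour that is shorter than all tours above. "
                 "Return only the tour as a list of city indices.")
    return "\n".join(lines)

print(build_solver_prompt(
    [[0, 2, 9], [2, 0, 6], [9, 6, 0]],
    [([0, 1, 2], 17), ([0, 2, 1], 17)],
    guidance="Swap pairs of cities in the best tour and keep improvements."))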
Further developments in the use of searching guidance emerged, recognizing its importance in steering LLMs
more effectively. Following OPRO, Liu et al.
[57] designed more detailed task instructions that were combined
with EA steps. These instructions specified the sequence of operations, such as performing selection, followed by
crossover, and then mutation. This combination of solution quality and structured guidance represented a significant
step forward in prompt design.
Beyond prompt design, researchers have also extended this LLM-driven optimization approach to more complex
problems, such as multi-objective optimization [58, 59]. Additionally, several innovations have further refined this
paradigm. For example, Lange et al. [60] applied evolution strategies, asking LLMs to generate the next mean for
a desired fitness level. Brahmachary et al. [61] introduced a technique that split the population into two groups for
exploration and exploitation, providing different search guidance for each group. Huang et al. [50] conducted a
comprehensive comparative study on the ability of LLMs to generate solutions for optimization problems directly,
offering insights into their relative strengths and limitations.
4.3. LLMs for Automated Algorithm Design
The use of LLMs to automate algorithm design has progressed significantly over recent years. Initially, researchers
focused on a single-round process, where LLMs were prompted to design new meta-heuristics. In these early studies,
LLMs typically selected and analyzed existing algorithms, generating new ones in pseudo-code format. However,
the performance of these generated algorithms could not be evaluated online, and their application was limited to
conversational interactions rather than fully leveraging LLMs' optimization potential [62, 63, 64]. In our view,
such approaches amount to conversing with LLMs rather than truly exploiting their optimization ability.
A more advanced approach involves embedding LLM-based searching operators into an iterative process, allow-
ing continuous improvement of the generated algorithms. This shift enabled LLMs to move beyond simple heuristic
generation to creating meta-heuristics. Within this context, researchers have focused on generating both components
of heuristics/meta-heuristics and complete algorithms. Various optimization problems have been explored, including
numerical optimization, combinatorial optimization, multi-objective optimization, and even complex network opti-
mization [47, 49]. Given the varying complexities of these problems, numerical and combinatorial optimizations have
received the most attention, with subsequent studies expanding to multi-objective and complex problems.
The correspondence between the type of algorithm generated and the problems used for testing is not strictly one-to-one; some studies apply a single
generated heuristic or meta-heuristic across multiple problem types. The iterative nature of algorithm improvement
stems from two factors: LLMs’ ability to generate codes and the integration of an iterative process. One of the most
widely used iterative processes is the EAs.
Currently, the primary focus is on designing components within heuristics or meta-heuristics. Typically, an ex-
isting optimization algorithm is fixed, and LLMs are tasked with generating specific functions for the algorithm. For
example, in FunSearch [65], LLMs generate functions to evaluate the score of each bin in the bin-packing prob-
lem. The prompt provides existing code, and the LLM is asked to generate only the function code. Similarly, EoH
[66, 67, 68] uses both code and descriptions about the algorithm to generate new function components. EoH employs
a guided local search (GLS) algorithm [14, 69], and the LLM generates an evaluation function embedded in GLS.
However, the search guidance in EoH remains fixed throughout the iterative process.
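For intuition, the function below shows the kind of bin-scoring heuristic that systems such as FunSearch and EoH ask the LLM to produce for online bin packing; it is a hand-written illustrative stand-in, not an actual output of those systems, and the penalty term is an arbitrary choice.

def bin_score(item_size, bin_capacities):
    # Score every open bin; the item is placed in the highest-scoring feasible bin.
    scores = []
    for remaining in bin_capacities:
        if remaining < item_size:
            scores.append(float("-inf"))  # infeasible bin
        else:
            slack = remaining - item_size
            # Prefer tight fits, with a mild penalty for leaving awkward small gaps.
            scores.append(-slack - 0.5 * (0 < slack < item_size))
    return scores

bins = [10.0, 4.0, 7.0]
item = 3.5
best = max(range(len(bins)), key=lambda i: bin_score(item, bins)[i])
bins[best] -= item
print(best, bins)  # places the item in the tightest-fitting bin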
Recent advancements have introduced dynamic search guidance during the iteration process. Ye et al.
[70]
proposed a system with two LLMs: one for generation and one for reflection. The reflection LLM analyzes short-
term and long-term search results and provides updated search guidance to the generation LLM. Similarly, Sun et al.
[71] developed a system of three LLM-based agents—Advisor, Coder, and Evaluator—that work together to analyze
solutions and guide further searches, representing early steps toward LLMs guiding other LLMs.
In addition to generating components for heuristics, LLMs have been applied to meta-heuristic component design.
Huang et al. [72] used LLM-based crossover and mutation operators for multi-objective optimization. Huang et al.
[73] extended this work to evolutionary multitasking algorithms, where LLMs were tasked with designing knowledge
transfer models. In other studies, LLMs have been employed to design surrogate models for expensive optimization
tasks [74] and learning rate schedules in evolutionary strategies [75, 76].
Designing complete heuristics or meta-heuristics is more challenging than just designing components, and re-
search in this area remains limited. Yu et al. [49] proposed a method for generating complete heuristics to improve
the robustness of complex networks [77, 78]. Stein et al. [79] developed a method for generating meta-heuristics for
continuous optimization problems. This direction warrants further study, since the ultimate goal is to let LLMs
design complete algorithms automatically.
Although LLMs have been used to solve various optimization problems and to design optimization algorithms, this
research direction is still in its infancy, and most work validates the optimization ability of LLMs only on small-scale
problems. Undeniably, LLMs combined with EAs provide a promising new route to automated optimization.
Thus, we summarize the common and most valuable design elements in existing work and propose a general LLM-EA auto-
mated optimization paradigm in the following subsection, which can help researchers better understand the current
research and guide future research.
4.4. LLM-EA Automated Optimization Paradigm
Through the above review, we can see that, to enhance LLMs' capabilities in solving optimization problems,
researchers have increasingly integrated EAs with LLMs. This synergy enables a more efficient and creative search-
ing process, with LLMs generating high-quality candidates and EAs optimizing these candidates through iterative
refinement. This complementary relationship sustains continuous creativity, allowing for more automated op-
timization algorithm design. Building on this foundation, we propose an LLM-EA automated optimization paradigm
in which LLMs and EAs work together to generate both solutions and heuristics, providing a flexible and powerful
framework for automated optimization. Next, we first introduce each component of this paradigm and then present
the whole paradigm.
Optimization problems aim to find the optimal candidate x* that maximizes (or minimizes) an objective function f(x). For solution search, the objective is to find the x* whose fitness is maximum, where x ∈ S_solution and S_solution is the solution search space:
$x^* = \arg\max_{x \in S_{\text{solution}}} f_s(x)$   (1)
For heuristic search, the objective is to find the best heuristic x* for the optimization problem under consideration. The performance of x* is evaluated on a training set D = {d_1, d_2, ..., d_k} of problem instances by aggregating as follows:
$x^* = \arg\max_{x \in S_{\text{heuristic}}} A\big(f_h(x, d_1), f_h(x, d_2), \ldots, f_h(x, d_k)\big)$   (2)
where S_heuristic is the heuristic search space and A(·) is an aggregation function (e.g., average or weighted sum) used to combine the fitness values across instances.
In the evolutionary process, at generation t, the population consists of N candidates:
$P(t) = \{x_1, x_2, \ldots, x_N\}, \quad x_i \in S_{\text{solution}}/S_{\text{heuristic}}, \quad i = 1, 2, \ldots, N$   (3)
Each candidate in the population represents either a solution or a heuristic.
In the paradigm, the evolutionary process includes three primary operators: the selection operator, the variation operator, and the reflective operator.
• Selection Operator: The selection operator O_select selects the candidates to which the variation operator is applied, based on their fitness values:
$P_{\text{parent}}(t) = O_{\text{select}}(P(t))$   (4)
ensuring that candidates with higher fitness are more likely to contribute to the next generation.
• Variation Operator: The variation operator generates new individuals based on P_parent(t). One or more variation operators can be designed with different emphases on exploration or exploitation. LLMs play a role in this process by generating offspring through the prompt P_variation:
$P_{\text{offspring}}(t) = O_{\text{variation}}(\text{LLM}_{\text{variation}}, P_{\text{variation}}, P_{\text{parent}}(t))$   (5)
where P_variation is defined as:
$P_{\text{variation}} = \{D_{\text{problem}}, D_{\text{task}}, x_i, \ldots, x_j\}$   (6)
Here, D_problem is the problem description, D_task is the task instruction, which includes the variation logic, and x_i, ..., x_j are the candidates used as example data.
• Reflective Operator: An optional reflective mechanism adjusts variation operators dynamically. The LLM refines the variation strategies based on the performance of previous generations:
$P_{\text{variation}} = O_{\text{reflective}}(\text{LLM}_{\text{reflective}}, P_{\text{reflective}})$   (7)
The LLM, acting as a searching operator, is guided by prompts. Whether variation prompts or reflective prompts
are used, they share a common pattern that includes a problem description, task instructions, and example data. The
problem description refers to the detailed explanation or context of the problem that needs to be solved. The task
instructions are the explicit directives given to the LLM, specifying the task or objective that the model needs to
accomplish. The example data refers to real examples provided as input or reference, helping the LLM understand
the task requirements or expected output format.
For variation prompts, the problem description outlines the optimization problem to be addressed. The task
instructions can vary from simple requirements to detailed variation logic, specifying how the LLM should approach
the generation. The example data consists of previously evaluated candidates with fitness scores, guiding the LLM in
creating improved solutions or heuristics according to the task instructions. An example of a variation prompt is
presented in Figure 3(a).
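The snippet below sketches how a variation prompt of this form can be assembled from the problem description, the task instruction, and the example data; the TSP wording and the route/fitness values are invented and only mirror the structure of Figure 3(a).

def build_variation_prompt(problem_desc, task_instruction, examples):
    # Assemble P_variation = {D_problem, D_task, example candidates} as plain text.
    example_lines = "\n".join(f"  candidate: {cand}  fitness: {fit}"
                              for cand, fit in examples)
    return (f"Problem description:\n{problem_desc}\n\n"
            f"Example data (previously evaluated candidates):\n{example_lines}\n\n"
            f"Task instruction:\n{task_instruction}\n")

print(build_variation_prompt(
    problem_desc="Traveling salesman problem with 5 cities and distance matrix D.",
    task_instruction=("Cross over the two best routes above and mutate the result; "
                      "return one route shorter than both."),
    examples=[("0-2-4-1-3-0", 31.4), ("0-1-2-3-4-0", 35.0)]))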
In reflective prompts, the problem description shifts to optimizing the existing task instruction of the variation
prompt, focusing on improving the current prompt itself. The task instruction then guides how to analyze and
Figure 3: (a) This is the variation prompt for the TSP. The problem description introduces the TSP and presents data for an instance of the problem.
The example data provides several routes along with their respective lengths. The task instruction specifies the requirement to provide a route that
is shorter than all the routes given in the example data. (b) This reflective prompt is aimed at refining or optimizing the task instruction for the TSP.
Algorithm 1 LLM-EA for Automated Optimization
Input: Fitness function f_s or f_h, number of generations T
Output: Best candidate found x*
1: Initialize population P(1) = {x_1, x_2, ..., x_N}, where x_i ∈ S_solution/S_heuristic, i = 1, 2, ..., N
2: for each candidate x_i in P(1) do
3:     if Solution Search then
4:         Evaluate the fitness using f_s(x_i)
5:     else if Heuristic Search then
6:         Evaluate the fitness using the aggregation across the training set: f(x_i) = A({f_h(x_i, d_1), f_h(x_i, d_2), ..., f_h(x_i, d_k)})
7:     end if
8: end for
9: for t = 1 to T do
10:     Apply the selection operator to form the parent population P_parent(t) using Eq. (4)
11:     for each group of candidates (x_i, ..., x_j) in P_parent(t) do
12:         Generate prompt P_variation = {D_problem, D_task, x_i, ..., x_j}
13:         Apply the variation operator using Eq. (5) to obtain P_offspring(t)
14:     end for
15:     Evaluate the fitness of individuals in P_offspring(t)
16:     Survivor selection to obtain population P(t + 1) from P_offspring(t) and P_parent(t)
17:     Optional: Apply the reflective operator to refine variation operators using Eq. (7)
18: end for
19: Return best candidate x*
compare the available example data to generate new task instructions for the variation prompt. These example data
can either consist of macro-level statistical information, representing the overall characteristics of multiple candidates,
or specific, feature-rich candidates or existing variation prompts. This example data serves as a reference for the LLM
to generate more optimized instructions. An example of a reflective prompt is presented in Figure 3(b).
Explanation of the paradigm: Algorithm 1 details the LLM-EA automated optimization paradigm. It
begins by initializing a population of candidates and evaluating their fitness. Selection and variation operations
are then applied iteratively, where the LLM generates new candidates based on prompts. An optional reflective mechanism adjusts
the variation operators based on feedback from previous generations, enhancing the optimization process over time.
The algorithm continues for a set number of generations, ultimately returning the best candidate.
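A minimal Python skeleton of this loop is sketched below; call_llm, build_prompt, parse, and evaluate are placeholders for an LLM interface, the prompt constructor, an output parser, and the fitness function (f_s, or the aggregated f_h for heuristic search), and the simple truncation-based selection is only one possible choice.

import random

def call_llm(prompt: str) -> str:
    # Placeholder for the LLM call used by the variation operator.
    raise NotImplementedError

def llm_ea(init_pop, evaluate, build_prompt, parse, generations=20, group_size=2):
    # Evaluate the initial population (steps 1-8 of Algorithm 1).
    pop = [(cand, evaluate(cand)) for cand in init_pop]
    for _ in range(generations):
        # Selection (Eq. (4)): fitter candidates become parents.
        pop.sort(key=lambda p: p[1], reverse=True)
        parents = pop[: max(group_size, len(pop) // 2)]
        offspring = []
        for _ in range(len(pop) - len(parents)):
            group = random.sample(parents, group_size)
            # Variation (Eq. (5)): the LLM generates a new candidate from the prompt.
            prompt = build_prompt([c for c, _ in group], [f for _, f in group])
            child = parse(call_llm(prompt))
            offspring.append((child, evaluate(child)))
        # Survivor selection: keep the best of parents and offspring.
        pop = sorted(parents + offspring, key=lambda p: p[1], reverse=True)[: len(pop)]
    return max(pop, key=lambda p: p[1])[0]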
5. In-Depth Analysis of Key Modules in LLM-EA Automated Optimization Paradigm
Building on the LLM-EA automated optimization paradigm, this section conducts a comprehensive analysis from
two perspectives: prompt engineering for LLMs and the evolutionary process for iterative search. By integrating these
two aspects, we provide a multidimensional exploration of the LLM-EA automated optimization paradigm.
Prompts play a fundamental role in this paradigm by guiding LLMs through optimization tasks. The structure of
these prompts—consisting of problem descriptions, task instructions, and example data—determines how effectively
the LLM participates in tasks such as crossover, mutation, and reflective optimization. Prompts not only provide
the LLM with the necessary reference points for generating new candidates, but also embed the logic of the
evolutionary operators, ensuring smooth integration with the optimization process.
In the evolutionary process, we focus on three critical components: individual representation [80, 81], variation
operators [82], and fitness evaluation [83, 84]. Individual representation shapes how candidates are structured and
determines the search space. Variation operators, such as mutation and crossover, guide the exploration of the
searching space. Fitness evaluation drives the process by measuring how well the generated candidates meet the
optimization objectives.
The following subsections provide detailed explanations of how these components are systematically integrated
into the prompt, ensuring that the LLM maximizes its effectiveness in the optimization task. Our analysis reveals how
prompts dynamically influence each phase of the evolutionary process, offering deeper insights into the synergistic
relationship between LLMs and EAs in automated optimization.
5.1. Individual Representation for Heuristic
Historically, the representation of solutions in optimization has been purely numerical, particularly in the case of
continuous and combinatorial optimization problems, where candidates are expressed as vectors or arrays. While this
method remains effective for many tasks, the advent of LLMs introduces new possibilities for representing heuristics.
These representations expand beyond simple numerical encoding to incorporate natural language, pseudo-code, and
even executable code. This shift allows LLMs to play a more creative role in generating novel problem-solving
strategies.
After analyzing current research, we define a novel classification of heuristic representation that extends traditional
solution encoding and differentiates between three main types of heuristic representation: Code-Centric Represen-
tation, Hybrid Representation, and Augmented Representation, each tailored to different levels of complexity in
optimization problems.
• Code-Centric Representation: In this form, the heuristic is represented solely as executable code. For in-
stance, FunSearch [65] uses LLMs to generate small, self-contained code snippets that are directly applied to
optimization problems. The LLM evolves the code itself, which is designed to perform specific tasks or cal-
culations without the need for external explanations. While this approach is computationally efficient, it lacks
interpretability, as the generated code does not come with any accompanying documentation or reasoning. This
method is better suited for well-defined problems where efficiency is prioritized over transparency.
• Hybrid Representation: This method blends code with natural language descriptions. In the EoH [66] frame-
work, LLMs not only generate executable code but also provide a natural language explanation of the code’s
Figure 4: An individual representation of EoH [66] is provided, where the heuristic is expressed in both natural language and executable code. The
natural language description explains how the heuristic calculates scores for each bin, considering factors such as remaining capacity, bin index,
and penalties for large differences. The code snippet implements this logic. A fitness score of 0.0143 reflects the performance of the generated
heuristic in the optimization task.
logic and intended purpose, as illustrated in Figure 4. This combination bridges the gap between machine-
generated heuristics and human-readable explanations. By co-evolving code and descriptions, this approach
enhances both performance and interpretability, making it suitable for more complex tasks where understand-
ing the reasoning behind the code is crucial.
• Augmented Representation: This extends beyond previous representations by incorporating executable code,
natural language descriptions, and domain-specific expert knowledge into the individual representation.
For example, unlike FunSearch and EoH, which represent code snippets or code paired with explanations, Au-
toRNet [49] enhances the representation by embedding higher-level concepts from network science, such as
high-degree nodes, low-degree nodes, critical nodes, and network connectivity. This enriched representa-
tion allows the LLM to contextualize the code within a broader domain-specific framework, facilitating a deeper
understanding of the problem. By incorporating expert knowledge, the LLM is not merely working with logic
and procedures but is equipped with the conceptual background to generate more advanced and applicable al-
gorithms. Augmented Representation ensures that the generated heuristics can address complex optimization
problems with a higher degree of relevance and adaptability.
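A simple data structure capturing these options is sketched below: the code field alone corresponds to a code-centric representation, adding the description yields a hybrid representation, and the optional domain-knowledge notes turn it into an augmented representation. The field names and example values are illustrative assumptions, not the internal formats of FunSearch, EoH, or AutoRNet.

from dataclasses import dataclass, field

@dataclass
class HeuristicIndividual:
    description: str                     # natural-language explanation of the idea
    code: str                            # executable code implementing the heuristic
    fitness: float = float("-inf")       # filled in after evaluation
    domain_knowledge: list = field(default_factory=list)  # expert notes (augmented)

ind = HeuristicIndividual(
    description="Score bins by remaining capacity and penalize loose fits.",
    code=("def bin_score(item, bins):\n"
          "    return [-(b - item) if b >= item else float('-inf') for b in bins]"),
    domain_knowledge=["prefer tight packing", "never exceed bin capacity"])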
5.2. LLM-based Variation Operators
Traditional EAs rely on predefined operators such as mutation and crossover, which require detailed step-by-
step programming and domain-specific expertise. With the advent of LLMs, the role of these operators has evolved,
enabling more flexible and dynamic approaches to solution generation and heuristic manipulation. We identify three
key advantages that LLMs bring to EAs:
1. High-Level Instructions Remove the Need for Step-by-Step Programming. Traditionally, variation opera-
tors require precise, step-by-step programming to define how solutions are selected, combined, and modified.
Figure 5: (a) An example of the constructed prompt when utilizing LMEA to solve TSPs. The evolutionary operator is presented in natural
language in the task instructions. (b) A prompt of EoH using the E2 strategy to design a new heuristic.
LLMs eliminate this need by interpreting high-level task instructions written in natural language, enabling flex-
ible solution generation. For instance, the LMEA [57] framework is illustrated in Figure 5(a), where LLMs are
given general directives for tasks like parent selection and mutation, allowing them to autonomously generate
solutions based on these instructions without needing detailed programming. This approach reduces reliance
on domain-specific expertise and enables more flexible solution exploration.
2. Advanced Manipulation of Heuristics via Natural Language. Heuristics, unlike numerical solutions, are
complex algorithms or pieces of code. LLMs excel in applying variation operators to these heuristics by using
their natural language understanding to combine, refine, and adjust logical structures. For example, in the
EoH [66] framework, five prompt strategies (E1, E2, E3, M1, and M2) are designed and categorized into two
groups: Exploration and Modification. Each strategy uses prompts to guide the LLM with different emphases
in evolving heuristics based on current population performance and heuristic structure. For example, the E2
strategy, detailed in Figure 5(b), puts emphasis on designing a new heuristic that differs from the
given ones.
3. Incorporation of Domain-Specific Knowledge into Variation Operators: A significant advantage of LLM-
based variation operators is their ability to integrate expert domain knowledge into the evolutionary process,
as demonstrated in AutoRNet [49] through its Network Optimization Strategies (NOS). By embedding spe-
cialized knowledge from fields like network science (e.g., degree distribution, path characteristics, clustering
coefficient, centrality measures, and community structure) into the variation operations, LLMs can guide mu-
tation and crossover with insights specific to the problem domain. This allows for more sophisticated and
effective heuristics that address complex, domain-specific optimization challenges. For example, in network
optimization, AutoRNet uses domain knowledge to adaptively modify network structures, ensuring that the
generated heuristics are deeply informed by network science principles. This integration of expert knowledge
allows LLMs to generate heuristics that are not only generalizable but also highly specialized, providing a new
layer of flexibility and precision in the evolutionary process.
Beyond generating solutions or heuristics, LLMs also play a pivotal role in optimizing the variation operators
themselves. ReEvo [70] introduces a novel reflective mechanism where LLMs evaluate and refine the variation op-
erators by analyzing the performance of previously generated heuristics. Unlike traditional EAs that rely on static
operators, ReEvo enables LLMs to reflect on both short-term and long-term performance data. This allows the LLMs
to generate adaptive mutation and crossover strategies, leading to more effective exploration of the search space.
• Short-term Reflection: LLMs assess the recent individuals, identifying immediate changes needed in mutation
or crossover operations. This dynamic response helps the evolutionary process adapt quickly to the promising
searching direction.
• Long-term Reflection: LLMs evaluate broader trends in the performance of heuristics over multiple gener-
ations, allowing for deeper adjustments to the evolutionary strategy. This ensures that the operators evolve
alongside the heuristics, leading to more robust solutions.
This reflective feedback loop enables LLM-driven optimization of the search strategy itself, moving beyond simple
heuristic generation to a more dynamic, self-improving evolutionary process.
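The function below sketches one way such a reflective operator can be realized: recent candidates and the long-term fitness trend are summarized into a prompt that asks a reflector LLM to rewrite the variation instruction. Here call_llm is a placeholder and the prompt wording is an illustrative assumption rather than ReEvo's actual prompts.

def call_llm(prompt: str) -> str:
    # Placeholder for the reflector LLM call.
    raise NotImplementedError

def reflect_and_update(task_instruction, recent, history):
    # recent: (candidate, fitness) pairs from the last generation (short-term).
    # history: (generation, best_fitness) pairs over the whole run (long-term).
    short_term = "\n".join(f"  {c}: {f}" for c, f in recent)
    long_term = ", ".join(f"gen {g}: best={b:.3f}" for g, b in history)
    prompt = ("Current variation instruction:\n" + task_instruction + "\n\n"
              "Last generation (candidate: fitness):\n" + short_term + "\n\n"
              "Best fitness per generation so far: " + long_term + "\n\n"
              "Analyze which kinds of changes improved fitness and rewrite the "
              "variation instruction so that the next generation explores more "
              "promising directions. Return only the new instruction.")
    return call_llm(prompt)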
LLMs as variation operators bring two critical innovations: the ability to interpret high-level instructions, elim-
inating the need for step-by-step programming, and the capacity for sophisticated heuristic manipulation through
natural language. When coupled with reflective optimization strategies like those in ReEvo, LLMs offer a dynamic,
self-improving approach to EAs, pushing the boundaries of what traditional operators can achieve.
5.3. Fitness Evaluation in Heuristic Optimization
The quality of solutions for optimization problems can be evaluated directly by the objective function. In contrast,
heuristics operate at a higher level of abstraction, as they represent strategies for generating solutions. Therefore,
evaluating heuristics requires a mapping from the heuristic space to the solution space, followed by the application of
fitness evaluation. This requires a more flexible and generalizable fitness function capable of capturing performance
across diverse scenarios. To address this challenge, we summarize two primary approaches:
• Adaptive fitness evaluation dynamically adjusts the criteria for assessing heuristic performance as the opti-
mization progresses. It allows for broader exploration early in the process and more focused refinement as the
search converges. AutoRNet [49] designs an adaptive fitness function (AFF) to dynamically adjust constraints
during the evolutionary process. Initially, constraints on degree distribution are relaxed, allowing for broader
exploration of the heuristic search space. As the optimization progresses, these constraints are progressively
tightened, promoting convergence toward more optimal solutions while maintaining diversity within the popula-
tion. This progressive tightening ensures that the search space is thoroughly explored while gradually refining
the candidate heuristics to meet increasingly stringent requirements.
• Benchmark-based evaluation ensures that heuristics generalize across multiple problem instances by testing
them in a variety of scenarios, reducing the risk of overfitting to a specific instance and ensuring that the
heuristic performs well in different contexts. LLaMEA [79] leverages benchmark-based fitness evaluation,
utilizing platforms like IOHexperimenter to systematically assess the performance of generated metaheuristics.
LLaMEA evaluates algorithms across a wide range of benchmark functions, providing a robust and reproducible
environment for fitness assessment. This evaluation method promotes fairness and consistency by comparing
new algorithms to well-established state-of-the-art benchmarks.
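As an illustration of the adaptive fitness evaluation described in the first bullet above, the function below penalizes constraint violations with a weight that grows over the run; the robustness objective, the linear schedule, and the weight of 10.0 are illustrative assumptions, not AutoRNet's actual formulation.

def adaptive_fitness(robustness, degree_violation, generation, max_generations):
    # Early generations: relaxed constraints encourage broad exploration.
    # Late generations: the penalty dominates, pushing toward feasible candidates.
    tightness = generation / max_generations   # 0 = relaxed, 1 = strict
    penalty_weight = 10.0 * tightness          # illustrative schedule
    return robustness - penalty_weight * degree_violation

# The same candidate is judged differently early and late in the run.
print(adaptive_fitness(0.62, 0.2, generation=5, max_generations=100))   # approx. 0.52
print(adaptive_fitness(0.62, 0.2, generation=95, max_generations=100))  # approx. -1.28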
While adaptive fitness evaluation and benchmark-based methods effectively address the generalization challenge,
some problems still pose significant computational challenges, particularly when fitness evaluations are time-consuming
or when heuristics generate solutions across multiple problem instances. In these cases, surrogate models provide a
crucial solution. Traditional surrogate models, usually using Gaussian Processes and Neural Networks [85], have
long been used in EAs. However, they come with their own set of limitations, such as the need for iterative training
and updating as new data becomes available. This adds additional computational overhead, potentially diminishing
their efficiency in real-time optimization tasks.
A novel method, as proposed in recent research, is the use of LLMs as surrogate models [74]. LLMs, with their
powerful inference capabilities, offer a unique approach by eliminating the need for iterative training. LLMs can act
as classifiers, labeling solutions as "good" or "bad" based on prior performance, or approximate the fitness values of new
solutions based on the patterns identified in historical data. This method not only reduces the computational cost but
also speeds up the optimization process, enabling the evaluation of complex problems such as network robustness
without requiring full-scale evaluations for every candidate solution.
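A minimal sketch of this idea is given below: already-evaluated (candidate, fitness) pairs are placed in a prompt and the LLM is asked for a coarse good/bad judgment, so that only promising candidates proceed to the expensive exact evaluation. call_llm is a placeholder and the prompt wording is an illustrative assumption.

def call_llm(prompt: str) -> str:
    # Placeholder for the LLM used as a surrogate evaluator.
    raise NotImplementedError

def surrogate_screen(candidate, evaluated_history):
    # evaluated_history: list of (candidate, fitness) pairs already measured exactly.
    history = "\n".join(f"  {c} -> fitness {f}" for c, f in evaluated_history)
    prompt = ("Below are heuristics we have already evaluated and their fitness values:\n"
              f"{history}\n\n"
              f"New heuristic:\n{candidate}\n\n"
              "Based on the patterns above, answer with a single word, 'good' or 'bad', "
              "indicating whether the new heuristic is likely to reach top-quartile fitness.")
    return call_llm(prompt).strip().lower() == "good"

Only candidates screened as "good" would then be passed to the true, computationally expensive fitness evaluation.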
In summary, the fitness evaluation process for heuristics presents challenges due to the need for generalization
and computational efficiency. The combined use of adaptive fitness evaluation, benchmark-based evaluation, and
LLMs as surrogate models addresses these challenges by offering flexible, scalable, and efficient methods for fitness
evaluation. These approaches ensure that heuristics are evaluated accurately across multiple instances while keeping
the computational cost low.
6. Future Research Directions
As LLM-EA automated optimization approaches continue to evolve, there are several promising areas of research
that can enhance their capabilities. This section outlines four critical directions that could drive future advancements in the
field.
6.1. Enhancing Explainability and Reasoning Capabilities
One of the key challenges in combining LLMs with EAs is the lack of transparency in the decision-making
process of LLM-generated heuristics. Explainable AI (XAI) [86, 87, 88] is essential for allowing researchers
and practitioners to understand why specific optimization strategies are generated and how they contribute to robust
solutions. Explainability not only improves trust in AI systems but also provides a foundation for diagnosing errors
and refining heuristics.
Furthermore, improving the reasoning capabilities of LLMs is crucial for developing more effective optimization
heuristics. Recent advancements like Self-Taught Reasoner (STaR) [89, 90, 91] highlight the potential of iterative
reasoning to refine outputs over multiple steps. STaR improves the accuracy of LLM-generated solutions by enabling
the model to reason through a problem progressively rather than providing a single-shot response. Incorporating such
reasoning mechanisms into LLM-EA systems can lead to more sophisticated and nuanced optimization strategies.
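The generate–critique–revise loop below is a minimal sketch of such multi-step refinement. It is only loosely inspired by STaR, which actually fine-tunes on self-generated rationales rather than looping at inference time; llm_complete and critic are hypothetical callables.

def iterative_refine(problem: str, llm_complete, critic, steps: int = 3) -> str:
    """Generate a heuristic, then repeatedly revise it based on feedback."""
    solution = llm_complete(f"Propose a heuristic for: {problem}")
    for _ in range(steps):
        feedback = critic(solution)          # e.g., failed test cases or fitness diagnostics
        if not feedback:
            break                            # no issues found, stop refining
        solution = llm_complete(
            f"Problem: {problem}\nCurrent heuristic:\n{solution}\n"
            f"Feedback:\n{feedback}\nRevise the heuristic accordingly."
        )
    return solution

# Usage with stubs standing in for a real LLM and evaluator:
print(iterative_refine("bin packing", lambda p: "first-fit decreasing", lambda s: ""))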
6.2. Integration of Domain Knowledge
While LLMs are trained on vast amounts of data, their general knowledge may not always be sufficient to solve
domain-specific optimization problems [92]. To address this, integrating domain-specific knowledge can significantly
enhance the quality and relevance of generated heuristics. Retrieval-Augmented Generation (RAG) [93, 94, 95]
provides a promising approach to this challenge by combining LLMs with external knowledge sources. By retrieving
relevant domain-specific information from large datasets or expert systems, LLMs can generate more specialized and
effective heuristics for particular fields such as logistics, network design, or healthcare optimization.
In addition, long memory models [96, 97] can play a crucial role in maintaining domain-specific context over
extended problem-solving processes. These models enable LLMs to retain and recall relevant information from previ-
ous interactions, allowing for more coherent and context-aware heuristic generation over time. The ability to leverage
both short-term and long-term knowledge will be vital in addressing complex, multi-stage optimization problems.
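A minimal sketch of how retrieval could be wired into heuristic generation is shown below; retriever and llm_complete are hypothetical callables standing in for a vector store and an LLM API, and the prompt format is illustrative only.

def generate_heuristic_with_rag(task: str, retriever, llm_complete, k: int = 3) -> str:
    """Retrieve domain snippets and condition heuristic generation on them."""
    snippets = retriever(task, k)  # e.g., design rules from a logistics or network-design corpus
    context = "\n".join(f"- {s}" for s in snippets)
    prompt = (
        f"Domain knowledge:\n{context}\n\n"
        f"Using the knowledge above, write a Python heuristic for: {task}\n"
        "Return only the code."
    )
    return llm_complete(prompt)

# Usage with stubs:
print(generate_heuristic_with_rag("vehicle routing",
                                  retriever=lambda q, k: ["prefer nearest-neighbor seeds"],
                                  llm_complete=lambda p: "def heuristic(route): ..."))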
6.3. Optimization of Evaluation and Benchmarking Platforms
To ensure that LLM-generated heuristics are robust and widely applicable, there is a need for unified evaluation
platforms that can consolidate training data from a wide variety of optimization problems. Such platforms would
enhance the generalization capabilities of the generated heuristics by exposing them to diverse problem sets. The
broad range of training data available through these platforms can help ensure that the LLM-EA systems do not
overfit to a specific problem domain, thereby improving their versatility and applicability across multiple domains.
In addition, surrogate models [74, 98, 99] can be integrated into these platforms to speed up the evaluation process.
Surrogate models approximate the fitness function using historical data, which reduces the computational cost of
evaluating large-scale optimization problems. This allows for faster heuristic testing and validation with limited
compromise to the accuracy of results. Additionally, these platforms can include benchmarking systems that provide
a standardized way to compare the performance of LLM-EA-generated heuristics against established optimization
methods, fostering greater transparency and enabling further improvements through iterative development.
6.4. Scalability of LLM-EA Systems
As optimization problems increase in complexity and scale, ensuring the scalability of LLM-EA systems becomes
a critical challenge. Distributed computing and model compression offer promising solutions to address these chal-
lenges.
Distributed computing [100, 101] enables the parallel execution of tasks by distributing the computational load
across multiple machines. This approach is particularly beneficial in the context of EAs, where large populations of
candidate solutions need to be evaluated simultaneously. By leveraging distributed computing, different stages of the
evolutionary process, such as selection, mutation, and crossover, can be run concurrently, reducing the overall runtime.
Similarly, LLM inference tasks, such as generating new heuristics or solutions, can be distributed across multiple
nodes, accelerating the optimization process. Distributed systems, therefore, provide the necessary scalability for
applying LLM-EA systems to large-scale optimization problems.
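The following sketch shows the basic pattern, assuming fitness evaluations are independent; it uses Python's standard process pool rather than any particular distributed framework discussed above.

from concurrent.futures import ProcessPoolExecutor

def fitness_fn(candidate: float) -> float:
    # Stand-in for an expensive evaluation (e.g., running a generated heuristic).
    return -abs(candidate - 3.0)

def evaluate_population(population, workers: int = 4):
    # Each candidate is independent, so evaluations can run in separate processes.
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(fitness_fn, population))

if __name__ == "__main__":
    print(evaluate_population([1.0, 2.0, 3.0, 4.0]))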
Model compression [102, 103] techniques further enhance scalability by reducing the size and computational
complexity of LLMs. Methods such as pruning, quantization, and knowledge distillation allow LLMs to maintain high
performance while significantly reducing their memory footprint and inference times. This is particularly valuable
when LLMs are repeatedly queried during the evolutionary process. Compressed models not only run more efficiently
but also reduce the energy consumption required for large-scale optimization, making LLM-EA systems more feasible
for real-world applications where computational resources are limited.
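As one concrete example of the compression techniques mentioned above, the snippet below shows the standard knowledge-distillation objective (softened teacher and student distributions compared with a KL term); it is a generic PyTorch sketch, not tied to any specific system surveyed here.

import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature: float = 2.0) -> torch.Tensor:
    """KL divergence between softened teacher and student output distributions."""
    t = temperature
    teacher_probs = F.softmax(teacher_logits / t, dim=-1)
    student_log_probs = F.log_softmax(student_logits / t, dim=-1)
    return F.kl_div(student_log_probs, teacher_probs, reduction="batchmean") * (t * t)

# Example with random logits standing in for real model outputs.
s, te = torch.randn(4, 100), torch.randn(4, 100)
print(distillation_loss(s, te).item())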
7. Conclusion
In this paper, we highlight the significant potential of the LLM-EA framework to transform the field of automated
optimization, providing a new avenue for fully automated optimization. We began by tracing the evolution of heuristic
approaches, establishing the need for more adaptive and automated solutions, followed by a comprehensive review of
existing research on applying LLMs to optimization. By identifying the most common and valuable parts of current
research, we propose a novel paradigm that integrates LLMs and EAs to advance automated optimization. LLMs,
with their robust generative and reasoning capabilities, play a dual role in our proposed paradigm as both heuristic
designers and solution generators. By combining these strengths with the iterative search and refinement processes
of EAs, the paradigm enables the automated generation of high-quality heuristics and solutions with minimal manual
intervention.
We then thoroughly analyze the novel methodologies for individual representation, variation operators,
and fitness evaluation within the LLM-EA paradigm. Our review and the proposed paradigm lay a strong foundation
for future research into the capabilities of LLMs and EAs, opening up new avenues for both academic inquiry and
practical applications, with the potential to reshape the landscape of optimization methodologies in a wide range of
fields.
In identifying future directions, we addressed ongoing challenges such as improving the transparency and explain-
ability of LLM-generated heuristics, enhancing generalization to broader problem spaces, and optimizing computa-
tional efficiency. Additionally, we pointed out the integration of domain-specific knowledge and the development of
scalable benchmarking platforms to further refine the efficacy and reliability of LLM-EA systems.
References
[1] C. A. Floudas, P. M. Pardalos (Eds.), Encyclopedia of Optimization, 2nd Edition, Springer, New York, NY, 2009.
[2] M. Hjeij, A. Vilks, A brief history of heuristics: how did research on heuristics evolve?, Humanities & Social Sciences Communications 10
(2023) 64.
[3] P. M. Pardalos, M. G. Resende (Eds.), Handbook of Metaheuristics, Springer, Boston, MA, 2003.
[4] R. Martí, M. Sevaux, K. Sörensen, 50 years of metaheuristics, European Journal of Operational Research (2024).
[5] J. H. Holland, Adaptation in Natural and Artificial Systems, MIT Press, Cambridge, MA, 1975.
[6] S. Kirkpatrick, C. D. Gelatt, M. P. Vecchi, Optimization by simulated annealing, Science 220 (1983) 671–680.
[7] E. K. Burke, M. R. Hyde, G. Kendall, G. Ochoa, E. Ozcan, J. R. Woodward, A comprehensive analysis of hyper-heuristics, Journal of the
Operational Research Society 61 (2010) 1697–1724.
[8] E. K. Burke, M. Gendreau, M. Hyde, G. Kendall, G. Ochoa, R. Qu, Hyper-heuristics: a survey of the state of the art, Journal of the
Operational Research Society 64 (12) (2013) 1695–1724.
[9] H. Naveed, A. U. Khan, S. Qiu, M. Saqib, S. Anwar, M. Usman, N. Akhtar, N. Barnes, A. Mian, A comprehensive overview of large
language models, arXiv:2307.06435 (2023).
[10] K. A. De Jong, Evolutionary Computation: A Unified Approach, MIT Press, Cambridge, MA, 2006.
[11] F. Glover, G. Gutin, A. Yeo, A. Zverovich, Construction heuristics for the asymmetric TSP, European Journal of Operational Research
129 (3) (2001) 555–568.
[12] E. Aarts, J. K. Lenstra, Local Search in Combinatorial Optimization, Princeton University Press, 2003.
[13] C. Voudouris, E. Tsang, Guided local search, in: Handbook of Metaheuristics, 1999, pp. 185–218.
[14] A. Alsheddy, C. Voudouris, E. P. K. Tsang, A. Alhindi, Guided local search, in: R. M. et al. (Ed.), Handbook of Heuristics, 2016.
[15] G. A. Croes, A method for solving traveling salesman problems, Operations Research 6 (6) (1958) 791–812.
[16] Y. S. Ong, M. H. Lim, X. S. Chen, Research frontier: memetic computation-past, present & future, IEEE Computational Intelligence
Magazine 5 (2) (2010) 24–36.
[17] J. Kennedy, R. Eberhart, Particle swarm optimization, in: Proceedings of International Conference on Neural Networks, Vol. 4, 1995, pp.
1942–1948.
[18] M. Dorigo, V. Maniezzo, A. Colorni, Ant system: optimization by a colony of cooperating agents, IEEE Transactions on Systems, Man, and
Cybernetics, Part B 26 (1) (1996) 29–41.
[19] Q. Zhao, Q. Duan, B. Yan, S. Cheng, Y. Shi, Automated design of metaheuristic algorithms: a survey, arXiv:2303.06532v3 (2024).
[20] K. Tang, X. Yao, Learn to optimize – a brief overview, National Science Review 11 (2024) nwae132.
[21] J. Devlin, M.-W. Chang, K. Lee, K. Toutanova, BERT: pre-training of deep bidirectional transformers for language understanding, in:
Proceedings of the 2019 Annual Conference of the North American Chapter of the Association for Computational Linguistics, 2019, pp.
4171–4186.
[22] I. D. Mienye, T. G. Swart, G. Obaido, Recurrent neural networks: a comprehensive review of architectures, variants, and applications, MDPI
Information 15 (2024).
[23] S. Hochreiter, J. Schmidhuber, Long short-term memory, Neural Computation 9 (8) (1997) 1735–1780.
[24] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, I. Polosukhin, Attention is all you need, in: Proceedings
of the 31st Conference on Neural Information Processing Systems (NIPS 2017), Vol. 30, 2017.
[25] S. Schulhoff, M. Ilie, N. Balepur, K. Kahadze, A. Liu, C. Si, Y. Li, A. Gupta, H. Han, S. Schulhoff, P. S. Dulepet, S. Vidyadhara, D. Ki,
S. Agrawal, C. Pham, G. Kroiz, F. Li, H. Tao, A. Srivastava, H. D. Costa, S. Gupta, M. L. Rogers, I. Goncearenco, G. Sarli, I. Galynker,
D. Peskoff, M. Carpuat, J. White, S. Anadkat, A. Hoyle, P. Resnik, The prompt report: a systematic survey of prompting techniques,
arXiv:2406.06608 (2024).
[26] T. Kojima, S. S. Gu, M. Reid, Y. Matsuo, Y. Iwasawa, Large language models are zero-shot reasoners, arXiv:2205.11916 (2022).
[27] T. B. Brown, B. Mann, N. Ryder, M. Subbiah, J. Kaplan, et al., Language models are few-shot learners, arXiv:2005.14165 (2020).
[28] J. Wei, X. Wang, D. Schuurmans, M. Bosma, B. Ichter, F. Xia, E. H. Chi, Q. V. Le, D. Zhou, Chain-of-thought prompting elicits reasoning
in large language models, arXiv:2201.11903 (2022).
[29] J. Cheng, X. Liu, K. Zheng, P. Ke, H. Wang, Y. Dong, M. Huang, Black-box prompt optimization: aligning large language models without
model training, in: Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics, 2024, pp. 3201–3219.
[30] A. Sabbatella, A. Ponti, I. Giordani, A. Candelieri, F. Archetti, Prompt optimization in large language models, MDPI Mathematics 12 (6)
(2024) 929.
[31] Q. Guo, R. Wang, J. Guo, B. Li, K. Song, X. Tan, G. Liu, J. Bian, Y. Yang, Connecting large language models with evolutionary algorithms
yields powerful prompt optimizers, in: Proceedings of the 12th International Conference on Learning Representations, 2024.
[32] T. Sun, Y. Shao, H. Qian, X. Huang, X. Qiu, Black-box tuning for language-model-as-a-service, in: Proceedings of the 39th International
Conference on Machine Learning, 2022.
[33] A. Chen, D. M. Dohan, D. R. So, Evoprompting: language models for code-level neural architecture search, in: Proceedings of the 37th
International Conference on Neural Information Processing Systems, Red Hook, NY, USA, 2024.
[34] C. Fernando, D. S. Banarse, H. Michalewski, S. Osindero, T. Rocktäschel, PromptBreeder: self-referential self-improvement via prompt
evolution, in: Proceedings of the 41st International Conference on Machine Learning, 2024.
[35] A. AhmadiTeshnizi, W. Gao, M. Udell, OptiMUS: optimization modeling using MIP solvers and large language models, arXiv:2310.06116
(2023).
[36] H. Chen, G. E. Constante-Flores, C. Li, Diagnosing infeasible optimization problems using large language models, arXiv:2308.12923
(2023).
[37] W. Chao, J. Zhao, L. Jiao, L. Li, F. Liu, S. Yang, When large language models meet evolutionary algorithms, arXiv:2401.10510 (2024).
[38] R. T. Lange, Y. Tian, Y. Tang, Evolution transformer: in-context evolutionary optimization, in: Proceedings of the Genetic and Evolutionary
Computation Conference Companion (GECCO’24), Melbourne, Australia, 2024.
[39] X. Li, K. Wu, Y. B. Li, X. Zhang, H. Wang, J. Liu, Pretrained optimization model for zero-shot black box optimization, in: Proceedings of
the 38th Conference on Neural Information Processing Systems, 2024.
[40] X. Wu, S. hao Wu, J. Wu, L. Feng, K. C. Tan, Evolutionary computation in the era of large language model: survey and roadmap,
arXiv:2401.10034 (2024).
[41] X. Song, Y. Tian, R. T. Lange, C. Lee, Y. Tang, Y. Chen, Leverage foundation models for black-box optimization, in: Proceedings of the
41st International Conference on Machine Learning, Vol. 235 of PMLR, Vienna, Austria, 2024.
[42] S. Liu, S. Yu, Z. Lin, D. Pathak, D. Ramanan, Language models as black-box optimizers for vision-language models, arXiv:2309.05950v2
(2023).
[43] F. Jin, Y. Liu, Y. Tan, Zero-shot chain-of-thought reasoning guided by evolutionary algorithms in large language models, arXiv:2402.05376v1 (2024).
[44] H. Yang, K. Li, InstOptima: evolutionary multi-objective instruction optimization via large language model-based instruction operators,
arXiv:2310.17630v1 (2023).
[45] Y. B. Li, K. Wu, SPELL: semantic prompt evolution based on a LLM, arXiv:2310.01260v1 (2023).
[46] P.-F. Guo, Y.-H. Chen, Y.-D. Tsai, S.-D. Lin, Towards optimizing with large language model, in: Workshop on Knowledge-Infused Learning
Co-located with 30th ACM KDD Conference, Barcelona, Spain, 2024.
[47] J. Mao, D. Zou, L. Sheng, S. Liu, C. Gao, Y. Wang, Y. Li, Identify critical nodes in complex network with large language models,
arXiv:2403.03962 (2024).
[48] S. Mo, K. Wu, Q. Gao, X. Teng, J. Liu, AutoSGNN: automatic propagation mechanism discovery for spectral graph neural networks, Under
Review (2024).
[49] H. Yu, J. Liu, AutoRNet: automatically optimizing heuristics for robust network design via large language models, Under Review (2024).
[50] B. Huang, X. Wu, Y. Zhou, J. Wu, L. Feng, R. Cheng, K. C. Tan, Exploring the true potential: evaluating the black-box optimization
capability of large language models, arXiv:2404.06290 (2024).
[51] H. Ghaemi, Z. Alizadehsani, A. Shahraki, J. M. Corchado, Transformers in source code generation: a comprehensive survey, Journal of
Systems Architecture 153 (2024) 103193.
[52] H. Luo, J. Wu, J. Liu, M. F. Antwi-Afari, Large language model-based code generation for the control of construction assembly robots: a
hierarchical generation approach, Developments in the Built Environment 19 (2024) 100488.
[53] E. Meyerson, M. J. Nelson, H. Bradley, A. Gaier, A. Moradi, A. K. Hoover, J. Lehman, Language model crossover: variation through
few-shot prompting, arXiv:2302.12170 (2024).
[54] J. Lehman, J. Gordon, S. Jain, K. Ndousse, C. Yeh, K. O. Stanley, Evolution through large models, in: Handbook of Evolutionary Machine
Learning, Springer, 2023, pp. 331–366.
[55] E. Hemberg, S. Moskal, U.-M. O’Reilly, Evolving code with a large language model, arXiv:2401.07102 (2024).
[56] C. Yang, X. Wang, Y. Lu, H. Liu, Q. V. Le, D. Zhou, X. Chen, Large language models as optimizers, in: Proceedings of the International
Conference on Learning Representations (ICLR), 2024.
[57] S. Liu, C. Chen, X. Qu, K. Tang, Y.-S. Ong, Large language models as evolutionary optimizers, arXiv:2310.19046v3 (2024).
[58] F. Liu, X. Lin, Z. Wang, S. Yao, X. Tong, M. Yuan, Q. Zhang, Large language model for multi-objective evolutionary optimization,
arXiv:2310.12541v1 (2023).
[59] Z. Wang, S. Liu, J. Chen, K. C. Tan, Large language model-aided evolutionary search for constrained multiobjective optimization,
arXiv:2405.05767 (2024).
[60] R. T. Lange, Y. Tian, Y. Tang, Large language models as evolution strategies, in: Proceedings of the Genetic and Evolutionary Computation
Conference Companion (GECCO’24), Melbourne, Australia, 2024.
[61] S. Brahmachary, S. M. Joshi, A. Panda, K. Koneripalli, A. K. Sagotra, H. Patel, A. Sharma, A. D. Jagtap, K. Kalyanaraman, Large language
model-based evolutionary optimizer: reasoning with elitism, arXiv:2403.02054 (2024).
[62] M. Pluhacek, A. Kazikova, T. Kadavy, A. Viktorin, R. Senkerik, Leveraging large language models for the generation of novel metaheuristic
optimization algorithms, in: Proceedings of the Genetic and Evolutionary Computation Conference Companion (GECCO), Lisbon, Portugal,
2023.
[63] M. Pluhacek, J. Kovac, A. Viktorin, P. Janku, T. Kadavy, R. Senkerik, Using LLM for automatic evolvement of metaheuristics from swarm
algorithm SOMA, in: Proceedings of the Genetic and Evolutionary Computation Conference Companion (GECCO’24), Melbourne, Aus-
tralia, 2024.
[64] R. Zhong, Y. Xu, C. Zhang, J. Yu, Leveraging large language model to generate a novel metaheuristic algorithm with CRISPE framework,
Cluster Computing (2024).
[65] B. Romera-Paredes, M. Barekatain, A. Novikov, M. Balog, M. P. Kumar, E. Dupont, F. J. R. Ruiz, J. S. Ellenberg, P. Wang, O. Fawzi,
P. Kohli, A. Fawzi, Mathematical discoveries from program search with large language models, Nature 625 (2024) 468–475.
[66] F. Liu, X. Tong, M. Yuan, X. Lin, F. Luo, Z. Wang, Z. Lu, Q. Zhang, Evolution of heuristics: towards efficient automatic algorithm design
using large language model, in: Proceedings of International Conference on Machine Learning (ICML), 2024.
[67] F. Liu, X. Tong, M. Yuan, Q. Zhang, Algorithm evolution using large language model, arXiv:2311.15249v1 (2023).
[68] F. Liu, X. Tong, M. Yuan, X. Lin, F. Luo, Z. Wang, Z. Lu, Q. Zhang, An example of evolutionary computation + large language model
beating human: design of efficient guided local search, arXiv:2401.02051v1 (2024).
[69] F. Arnold, K. Sörensen, Knowledge-guided local search for the vehicle routing problem, Computers and Operations Research 105 (2019)
32–46.
[70] H. Ye, J. Wang, Z. Cao, F. Berto, C. Hua, H. Kim, J. Park, G. Song, Large language models as hyper-heuristics for combinatorial optimiza-
tion, arXiv:2402.01145v2 (2024).
[71] Y. Sun, X. Zhang, S. Huang, S. Cai, B. Zhang, K. Wei, AutoSAT: automatically optimize sat solvers via large language models,
arXiv:2402.10705 (2024).
[72] Y. Huang, S. Wu, W. Zhang, J. Wu, L. Feng, K. C. Tan, Autonomous multi-objective optimization using large language model,
arXiv:2406.08987 (2024).
[73] Y. Huang, X. Lv, S. Wu, J. Wu, L. Feng, K. C. Tan, Advancing automated knowledge transfer in evolutionary multitasking via large language
models, arXiv:2409.04270 (2024).
[74] H. Hao, X. Zhang, A. Zhou, Large language models as surrogate models in evolutionary algorithms: a preliminary study, Swarm and
Evolutionary Computation 91 (2024) 101741.
[75] O. Kramer, Large language models for tuning evolution strategies, arXiv:2405.10999 (2024).
[76] L. L. Custode, F. Caraffini, A. Yaman, G. Iacca, An investigation on the use of large language models for hyperparameter tuning in evolu-
tionary algorithms, in: Proceedings of the Genetic and Evolutionary Computation Conference Companion (GECCO’24), Melbourne, VIC,
Australia, 2024.
[77] M. Zhou, J. Liu, A two-phase multi-objective evolutionary algorithm for enhancing the robustness of scale-free networks against multiple
malicious attacks, IEEE Transactions on Cybernetics 47 (2) (2017) 539–552.
[78] M. Zhou, J. Liu, A memetic algorithm for enhancing the robustness of scale-free networks against malicious attacks, Physica A: Statistical
Mechanics and its Applications 410 (2014) 131–143.
[79] N. van Stein, T. Bäck, LLaMEA: a large language model evolutionary algorithm for automatically generating metaheuristics,
arXiv:2405.20132 (2024).
[80] W. E. Hart, N. Krasnogor, Using multiple representations in evolutionary algorithms, in: Proceedings of the Genetic and Evolutionary
Computation Conference, 1998, pp. 359–366.
[81] K. Deb, Representation, selection, and variation in genetic algorithms, Genetic Algorithms in Engineering and Computer Science (1997)
78–98.
[82] W. M. Spears, Crossover or mutation?, Foundations of Genetic Algorithms 2 (1995) 221–237.
[83] T. Jones, S. Forrest, Fitness distance correlation as a measure of problem difficulty for genetic algorithms, in: Proceedings of the 6th
International Conference on Genetic Algorithms, 1995, pp. 184–192.
[84] M. Mitchell, S. Forrest, J. H. Holland, The royal road for genetic algorithms: fitness landscapes and GA performance, in: Proceedings of
the 1st European Conference on Artificial Life, 1992, pp. 245–254.
[85] J. A. Garcia, H. Zhenli, Gaussian process regression + deep neural network autoencoder for probabilistic surrogate modeling in nonlinear
mechanics of solids, arXiv:2407.10732 (2024).
[86] M. T. Ribeiro, S. Singh, C. Guestrin, Why should I trust you? Explaining the predictions of any classifier, in: Proceedings of the 22nd ACM
SIGKDD International Conference on Knowledge Discovery and Data Mining, 2016, pp. 1135–1144.
[87] M. T. Ribeiro, S. Singh, C. Guestrin, Lime: local interpretable model-agnostic explanations, in: Proceedings of the 22nd ACM SIGKDD
International Conference on Knowledge Discovery and Data Mining, 2016, pp. 1135–1144.
[88] W. Samek, T. Wiegand, K.-R. Müller, Explainable artificial intelligence: understanding, visualizing and interpreting deep learning models,
arXiv:1708.08296 (2017).
[89] E. Zelikman, Y. Wu, J. Mu, N. D. Goodman, STaR: bootstrapping reasoning with reasoning, arXiv:2203.14465 (2022).
[90] A. Hosseini, X. Yuan, N. Malkin, A. Courville, A. Sordoni, R. Agarwal, V-STaR: training verifiers for self-taught reasoners,
arXiv:2403.09629 (2024).
[91] E. Zelikman, G. Harik, Y. Shao, V. Jayasiri, N. Haber, N. D. Goodman, Quiet-STaR: language models can teach themselves to think before
speaking, arXiv:2403.09629 (2024).
[92] R. Zhang, F. Liu, X. Lin, Z. Wang, Z. Lu, Q. Zhang, Understanding the importance of evolutionary search in automated heuristic design
with large language models, arXiv:2407.10873v1 (2024).
[93] P. Lewis, E. Perez, A. Piktus, F. Petroni, V. Karpukhin, N. Goyal, H. Küttler, M. Lewis, W. tau Yih, T. Rocktäschel, S. Riedel, D. Kiela,
Retrieval-augmented generation for knowledge-intensive NLP tasks, in: Proceedings of the 33th International Conference on Neural Infor-
mation Processing Systems, 2020, pp. 9459–9474.
[94] K. Guu, K. Lee, Z. Tung, P. Pasupat, M.-W. Chang, REALM: retrieval-augmented language model pre-training, in: Proceedings of the 37th
International Conference on Machine Learning, 2020, pp. 3929–3938.
[95] G. Izacard, E. Grave, Fusion-in-Decoder: a novel retrieval-augmented language model, in: Proceedings of the 2021 Annual Conference of
the North American Chapter of the Association for Computational Linguistics, 2021, pp. 681–688.
[96] M. Burtsev, A. V. Kurenkov, Memorizing transformers, arXiv:2010.06891 (2020).
[97] J. W. Rae, A. Potapenko, S. M. Jayakumar, T. P. Lillicrap, Compressive transformers for long-range sequence modeling, in: Proceedings of
the 32th International Conference on Neural Information Processing Systems, 2019, pp. 4694–4707.
[98] L. Bliek, A. Guijt, R. Karlsson, S. Verwer, M. de Weerdt, Benchmarking surrogate-based optimisation algorithms on expensive black-box
functions, Applied Soft Computing 147 (2023) 110744.
[99] T. Rios, F. Lanfermann, S. Menzel, Large language model-assisted surrogate modelling for engineering optimization, in: Proceedings of
2024 IEEE Conference on Artificial Intelligence (CAI), 2024, pp. 796–803.
[100] J. Dean, S. Ghemawat, MapReduce: simplified data processing on large clusters, Communications of the ACM 51 (1) (2008) 107–113.
[101] Y. Tang, Y. Tian, R. Huang, Distributed learning with evolutionary algorithms, in: Proceedings of the Genetic and Evolutionary Computation
Conference (GECCO), 2021, pp. 851–859.
[102] S. Han, J. Pool, J. Tran, W. Dally, Deep compression: compressing deep neural networks with pruning, trained quantization and huffman
coding, arXiv:1510.00149 (2015).
[103] G. Hinton, O. Vinyals, J. Dean, Distilling the knowledge in a neural network, in: Proceedings of the NIPS Deep Learning and Representation
Learning Workshop, 2015.
|
ai_researcher | 3 | Analysis_of_LLM-Based_Narrative_Generation_Using_the_Agent-Based_Simulation.pdf | Parrot: Efficient Serving of LLM-based Applications with Semantic Variable
Chaofan Lin1∗, Zhenhua Han2, Chengruidong Zhang2, Yuqing Yang2
Fan Yang2, Chen Chen1∗, Lili Qiu2
1Shanghai Jiao Tong University, 2Microsoft Research
4
2
0
2
y
a
M
0
3
]
G
L
.
s
c
[
1
v
8
8
8
9
1
.
5
0
4
2
:
v
i
X
r
a
Abstract
The rise of large language models (LLMs) has enabled
LLM-based applications (a.k.a. AI agents or co-pilots), a new
software paradigm that combines the strength of LLM and
conventional software. Diverse LLM applications from differ-
ent tenants could design complex workflows using multiple
LLM requests to accomplish one task. However, they have
to use the over-simplified request-level API provided by to-
day’s public LLM services, losing essential application-level
information. Public LLM services have to blindly optimize
individual LLM requests, leading to sub-optimal end-to-end
performance of LLM applications.
This paper introduces Parrot, an LLM service system that
focuses on the end-to-end experience of LLM-based applica-
tions. Parrot proposes Semantic Variable, a unified abstrac-
tion to expose application-level knowledge to public LLM
services. A Semantic Variable annotates an input/output vari-
able in the prompt of a request, and creates the data pipeline
when connecting multiple LLM requests, providing a natu-
ral way to program LLM applications. Exposing Semantic
Variables to the public LLM service allows it to perform con-
ventional data flow analysis to uncover the correlation across
multiple LLM requests. This correlation opens a brand-new
optimization space for the end-to-end performance of LLM-
based applications. Extensive evaluations demonstrate that
Parrot can achieve up to an order-of-magnitude improvement
for popular and practical use cases of LLM applications.
1 Introduction
Large language models (LLMs) have demonstrated a remark-
able language understanding capability [7, 41]. This enables
a paradigm shift in application development. In this new
paradigm, one or multiple application entities, known as AI
agents or co-pilots, communicate with LLMs via natural lan-
guage, known as “prompts”, to accomplish a task collabo-
ratively. For example, Meeting applications like Microsoft
Teams or Google Meet can summarize meeting discussions
through LLMs [33]. Search engines like Google and Bing
can be enhanced with Chat ability through LLMs [14, 34].
It is believed such LLM-based applications will become the
mainstream applications in the near future [13].
∗This work is partially done while Chaofan Lin’s internship and Dr. Chen
Chen’s visting scholar in Microsoft Research.
To accomplish a task, LLM-based applications typically
require multiple rounds of conversation. The conversation, im-
plemented through multiple API calls to LLM, demonstrates
complex workflow patterns. Figure 1 illustrates several popu-
lar conversation patterns. For example, a meeting summary
application [8, 33] often divides a lengthy document into mul-
tiple shorter sections, each satisfying the length constraint
of the LLM conversation and thus can be summarized and
combined into the final summary through the Map-Reduce
or chaining summary patterns. Chat-based applications, e.g.,
Bing Copilot [34], call LLM APIs multiple times to generate
answers based on user queries. Multiple agents, each repre-
senting a different role played by different LLM calls, can
collaborate to achieve a task [22, 47, 54].
Public LLM service providers have to face diverse tenants
and applications, each with different workflows and perfor-
mance preference. However, existing API design for LLM
service provision is still request-centric. Public LLM services
only observe tons of individual requests, without knowing any
application-level information, e.g., which requests belong to
the same application, how different requests are connected, or
whether there are any similarities. The lost application-level
information makes public LLM service blindly optimize the
performance of individual requests, leading to sub-optimal
end-to-end performance of LLM applications. In this paper,
we observe there exist significant opportunities to improve
the end-to-end experience of LLM applications by exploiting
the application-level information, especially the correlation
of multiple LLM requests.
First, multiple consecutive LLM requests may be depen-
dent: the result of one request could be the direct input of
the next request. Therefore, it is desirable to colocate those
(a) Map-Reduce Summary
(b) Chain Summary
(c) LLM-Powered Search
(d) Multi-agent Coding
Figure 1: The workflow of popular LLM-based applications.
The final result requires multiple LLM requests.
requests together and execute them consecutively on the LLM
service side. However, unaware of their dependencies, these
requests have to be executed interactively between the client
side of LLM-based applications and the public LLM ser-
vices. These clients, often located on the other end of the
Internet, can only issue the second request after they receive
the result of the first request. This unnecessarily incurs extra
overhead of consecutive requests on network latency as well
as losing the opportunity of co-scheduling these consecutive
requests (§3).
Second, LLM requests may have diverse scheduling pref-
erence, even within a single application. For example, in Fig-
ure 1a, to reduce the end-to-end latency, the requests represent-
ing multiple Map tasks should be batched more aggressively
to increase the throughput of the Map tasks; while the Re-
duce task, due to its scarcity, should be optimized for latency.
Unfortunately, public LLM services cannot discriminate the
difference between the two types of tasks. As a result, the
current practice is to blindly optimize the latency for individ-
ual requests, which might not be desirable for the end-to-end
experience.
Third, there exists a high degree of commonality across
LLM requests. Popular LLM applications (e.g., Bing Copi-
lot [32], GPTs [42]) use a long system prompt, including task
definitions, examples, and safety rules, to guide the behavior
of LLM applications. The long system prompt is usually static
and common for all users. As existing public LLM services
treat each request individually, these common prefix prompts
are provided repeatedly in each request, leading to a great
waste of storage, computation, and memory bandwidth. Our
analysis of a production LLM-based search engine shows
that over 94% of tokens in the requests are repeated across
different users.
Although we have seen some emerging engine-level tech-
niques [25,56,63] proposed to optimize the above three cases,
they all work based on certain application-level knowledge,
which is lost in nowadays public LLM services. In a nut-
shell, due to the lack of understanding of the correlations of
LLM requests, existing LLM services cannot leverage the
three opportunities, leading to high end-to-end service latency
and reduced throughput. Based on the above facts and in-
sights, we introduce Parrot, an LLM service system that treats
LLM applications as first-class citizens. Parrot retains most of
application-level information by a simple abstraction Seman-
tic Variable, achieving a perfect balance between increasing
system complexity and bringing new information for opti-
mization. A Semantic Variable is a text region in the prompt
with a specific semantic purpose, such as a task instruction, a
list of few-shot examples, an input, or an output. A Semantic
Variable can also work as the data pipeline that connects mul-
tiple LLM requests. Semantic Variable naturally exposes the
information of prompt structures and correlations of requests
to LLM services. By inspecting Semantic Variable at runtime,
Parrot can perform conventional data flow analysis to derive
Figure 2: The communication of consecutive LLM requests
in multi-agent applications.
the data dependency between LLM requests just-in-time.
By analyzing the application-level information, Parrot’s
unified abstraction naturally enables joint optimizations,
which bring better global optimality. The same data pipeline
built by Semantic Variables can enable multiple optimizations
simultaneously, including hiding data pipeline’s latency, ob-
jective deduction for a better scheduling and commonality
analysis to perform de-duplication. Parrot’s scheduling also
takes different opportunities into account under the unified
abstraction. Our extensive evaluation of Parrot on popular
LLM-based applications, including the production and open-
source projects, shows Parrot achieves up to 11.7× speedup
or 12× higher throughput compared with the state-of-the-art
solutions.
2 Background
LLM Service. Most LLM services are provisioned as a
conditional generation service via a text completion API.
Completion(prompt : str) −→ generated_text : str.
The application client provides a text prompt, and the LLM
service responds with the generated text. Behind the API,
an LLM service provider runs one or multiple clusters of
LLM inference engines. A request scheduler dispatches LLM
requests from a queue to an LLM inference engine, which
uses a set of GPUs to conduct the LLM inference.
LLM-based Applications. Figure 1 highlights the repre-
sentative workflows of how LLM is used in the applications.
Due to the limited context window of LLMs (e.g., 4,096 for
GPT-3.5-Turbo [40]), data analytics on long documents fol-
low a map-reduce style (Figure 1a) or chain style (Figure 1b)
workflow to generate the final results. It splits the long tran-
script into chunks, uses multiple requests to generate partial
results for each chunk (the Map task), and combines them
altogether (a Reduce task) or incrementally (the chain style)
to generate the final result. Chat-based search engine in Fig-
ure 1c may use consecutive LLM requests to discern query
(a) Latency Breakdown
(b) Current LLM Services
(c) Our system: Parrot
Figure 3: The end-to-end latency breakdown of current LLM services. The source of the overhead comes from network and
queuing due to chatty interaction between LLM application and LLM services, which is eliminated in our system Parrot.
intention, enrich the query with supplementary information,
retrieve related data, undergo a safety check, and finally gen-
erate the response. Multi-agent in Figure 1d and Figure 2 is
another type of workflow using multiple LLM requests, each
with a designated role. Different roles work collaboratively on
the same task, e.g., AutoGen [54] and MetaGPT [22] use the
roles like product manager, architect, engineer, and QA tester.
They communicate with each other on a software project.
Each role is supported by one or multiple LLM requests to
act as the designed role to generate their responses.
3 Problems of Serving LLM Applications
Although LLM’s text completion API provides a flexible way
of building LLM applications, it loses the application-level
information to public LLM services, leading to the following
challenges.
Excessive Overhead of Consecutive Requests. As demon-
strated in Figure 1, LLM applications frequently make multi-
ple LLM calls to complete a single task. Due to the request-
centric design of existing public LLM services, which gener-
ate responses for each request individually, developers have to
parse the output of an LLM request and compose the prompts
for subsequent LLM requests on the client side. Figure 3a
shows our empirical study of the latency breakdown of the
LLM calls from a popular LLM application in our production,
which uses a chain-style workflow. The prompt lengths range
from 150 to 4000 tokens and the output length is around 50
tokens. We find there is a significant portion of the latency of
LLM API call originates outside the LLM engine (30 ∼ 50%
on average and over 70% in the worst cases). The overhead in-
creases with the growing length of prompts. The high latency
can sometimes result in API timeouts and resubmissions.
Such overhead is due to the chatty interaction between
LLM services and clients. Figure 3b illustrates the overhead
of a simple two-step LLM application (e.g., chain-style sum-
mary of two text chunks). Existing LLM services are unaware
of the dependency among such requests, where the output of
the previous request may be the direct input of the next one.
For such consecutive and dependent requests, the client has
Figure 4: Request-centric scheduling v.s. application-centric
scheduling for the map-reduce style document summary task.
to wait for the arrival of the response to the first LLM request
( 2 ) before submitting the next LLM request ( 3 ). This un-
necessarily incurs heavy network latency because clients and
LLM services are typically in different data centers. More-
over, the next LLM request has to suffer extra queuing delays
( 4 ), because requests from other applications may arrive
between the consecutive LLM requests.
In Table 1, we evaluated four popular LLM applications.
The first two are from our production, and the last two are
popular open-source projects. They all require tens of LLM
calls to complete a single task, which results in high user-
perceived latency. Our evaluation in §8.2 shows LLM services
that treat requests individually could slow down the end-to-
end latency by over 2×. An LLM service can eliminate the
overhead if it can handle consecutive requests in a batch.
Parrot adopts such an approach. As shown in Figure 3c, the
two steps of the same application are scheduled together, thus
allowing the output of Step A to be fed directly into Step
B—with the network and queuing overhead bypassed.
Misaligned Scheduling Objectives. Due to the lost appli-
LLM-based App.
Long Doc. Analytics
Chat Search
MetaGPT [22]
AutoGen [54]
Repeated (%)∗
# Calls
2 ∼ 40
3%
2 ∼ 10
94%
72%
14
99%
17
∗We count a paragraph as repeated if it appears in at least two LLM requests.
Tokens
3.5k ∼ 80k
5k
17k
57k
Table 1: Statistics of LLM calls of LLM applications.
prompt reused by different user queries.
cation information (workflow and application performance
objective), existing public LLM services have to blindly use
a universal treatment for all requests, e.g., optimizing per-
request latency [44]. However, LLM-based applications are
more concerned about the end-to-end experience, rather than
individual requests. This misalignment of optimization objectives
may negatively impact end-to-end performance. Considering
the map-reduce document summary in Figure 1a, the system
should minimize the end-to-end time it takes to receive the
final summary, rather than the latency of individual requests.
The LLM services optimized for individual requests are not
optimal for end-to-end latency.
As depicted in Figure 4, current LLM services must limit
the number of concurrent requests running on each LLM en-
gine to control the latency of individual requests. However,
there is a trade-off between latency and throughput in LLM in-
ference. Increasing the batch size can bring up to 8.2× higher
throughput but lead to 95% higher latency [9]. Yet, if we un-
derstand the application-level performance objective, which
in this case is the end-to-end latency, we can determine that
the ideal scheduling strategy should maximize the throughput
(using higher batch sizes) during the map stage and minimize
request latency during the reduce stage. This strategy reduces
end-to-end latency by 2.4×. Moreover, it uncovers the po-
tential to enhance cluster throughput without compromising
the end-to-end latency of LLM applications. This insight is
essential for addressing the conflict between rising demand
and limited hardware resources. It underscores the necessity
of scheduling LLM requests from the perspective of LLM
applications, but it also presents the challenge of managing
diverse LLM requests with varying performance objectives.
Redundant Computations. Currently, most LLM-based
applications exhibit a high degree of redundancy in the
prompts of their requests. For instance, Bing Chat [32] has
handled more than 1 billion chat prompts. These prompts
share the same system prompts that define the functionality
of Bing Chat. OpenAI introduces GPTs [42] to let users cus-
tomize a ChatGPT for a specific purpose whose prompt tem-
plate is the same across users. The commonality in prompts
is crucial as it delineates the functionality and restrictions
of LLM-based applications. The prompt structure in Fig-
ure 5 [52] includes a role definition, several examples to
enhance the precision of LLM’s behaviors and user query
details. While the user input is dynamic, the task role is al-
Figure 6: Parrot system overview.
ways fixed, and the few-shot examples could be quasi-static in
that the same type of tasks use the same examples. This is why
more than 94% of prefix tokens could be repetitively used
across LLM requests for various users (Table 1). Such com-
monality also exists in multi-agent applications. For example,
MetaGPT [22] and AutoGen [54] recurrently incorporate con-
versation history into the prompt over several rounds of LLM
requests, leading to 72% and 99% redundancy respectively.
These redundant sections excessively utilize GPU memory
bandwidth and are computed multiple times. Earlier work has
proposed optimizations in LLM engines to avoid
redundant GPU memory of shared prompt [25]. However, it is
hard for public LLM services to swiftly detect and co-locate
the prompt-sharing requests, which may be dynamically generated,
from tons of diverse requests from diverse applications. With-
out knowledge about the prompt structure, extensive token-
by-token matching for every LLM request is expensive at the
cluster level. Hence, if the cluster scheduler of public LLM
service cannot dispatch prompt-sharing requests to the same
engine, the engine-level redundancy avoidance optimizations
would be hard to take effect.
4 Parrot Design
Figure 6 depicts the overview of Parrot’s design. Parrot pro-
vides a natural way of programming LLM applications with
Semantic Variable annotations (§4.1), which is compatible with
existing LLM orchestration frameworks, e.g., LangChain [8].
Centering on this abstraction, Parrot Manager is designed
to schedule LLM requests at a cluster-level, by deriving the
application-level knowledge (§4.2) and optimizing end-to-end
performance of application (§5). The manager will schedule
the LLM requests to LLM Engine, which is formed by a GPU
server (or a group of servers) in the cluster that can serve LLM
requests independently.
4.1 Semantic Variable
Parrot treats an LLM request as a semantic function1 im-
plemented using natural language and executed by LLMs.
1The term semantic function is borrowed from Semantic Kernel [36].
import Parrot as P
from Parrot.PerformanceCriteria import LATENCY
@P.SemanticFunction
def WritePythonCode(task: P.SemanticVariable):
""" You are an expert software engineer.
Write python code of {{input:task}}.
Code: {{output:code}}
"""
@P.SemanticFunction
def WriteTestCode(
task: P.SemanticVariable,
code: P.SemanticVariable):
""" You are an experienced QA engineer.
You write test code for {{input:task}}.
Code: {{input:code}}.
Your test code: {{output:test}}
"""
def WriteSnakeGame():
task = P.SemanticVariable("a snake game")
code = WritePythonCode(task)
test = WriteTestCode(task, code)
return code.get(perf=LATENCY), test.get(perf=LATENCY)
Figure 7: Example: a multi-agent application in Parrot.
A Semantic Variable is defined as an input or output vari-
able of a semantic function, which is referred to as a place-
holder in the prompt. Figure 7 shows a simplified example of
multi-agent application like MetaGPT [22]. It contains two
SemanticFunctions, one for the software engineer to write
code and one for the QA engineer to write test code. It has
three Semantic Variables: task, code, and test, for task de-
scription, the code to be developed by the software engineer,
and the test code to be developed by the QA engineer, re-
spectively. Although existing LLM orchestration frameworks
(e.g., LangChain [8]) also allow placeholders in a prompt,
however, the placeholders are rendered with real data before
the submission, hence public LLM services cannot detect such
a structure. Instead, Parrot relies on Semantic Variables to
preserve the prompt structure for further inter-request analysis
in public LLM services side.
In addition to the semantic functions, LLM application
developers can further define orchestration functions that con-
nect multiple semantic functions (e.g., WriteSnakeGame in
Figure 7). The Semantic Variables connecting multiple se-
mantic functions form the data pipeline of multiple LLM
requests in the public LLM service. A simple data flow
analysis of the semantic functions can be done to reveal
the connections of multiple LLM requests. E.g., in Figure 7,
the code variable connects the two LLM requests originat-
ing from WritePythonCode and WriteTestCode, showing
their sequential dependency. Different from traditional com-
pletion APIs, Parrot splits a completion request into a submit
operation and a get operation (§7). Calling a
SemanticFunction will trigger the submit API to submit an
LLM request with its prompt and input Semantic Variables.
The execution of a SemanticFunction is asynchronous
thus it returns the futures of the output Semantic Variables.
Figure 8: Primitives (selected) for Inter-Request Analysis.
Through the get API, applications can fetch the value of
an output Semantic Variable from the public LLM service
in an on-demand manner. This asynchronous design allows
Parrot-powered LLM service to receive all LLM requests not
blocked by native functions and analyze their relationships
just-in-time.
The get operation supports annotation of performance cri-
teria, showing the end-to-end performance requirement of
an application, which can be end-to-end latency or through-
put (extensible to more criteria like per-token latency when
streaming, and time-to-first-token). For example, the final out-
puts, code and test in Figure 7, are fetched using get with
an objective of end-to-end latency. Criteria of middle vari-
ables will be automatically deduced and propagated from final
outputs (§5.2). After propagation, each variable is attached to
a criterion, which finally works by serving as a hint to Parrot’s
scheduler (§5.4).
4.2 Primitives of Inter-Request Analysis
In general, Parrot performs inter-request analysis mainly using
two types of application-level information deduced from Se-
mantic Variable: DAG of requests and prompt structure. Fig-
ure 8 illustrates the DAG workflow of the example shown in
Figure 7 and the primitives used for inter-request analysis and
optimizations.
DAG-based analysis. As requests, or SemanticFunctions,
are submitted beforehand, Parrot can receive them all at once
and analyze their correlations just-in-time on the service side.
Parrot maintains a DAG-like data structure in each user’s
registered session. Each node is either a request or a Seman-
tic Variable that connects different requests. When a request
comes, Parrot inserts it into the DAG by linking edges with Seman-
tic Variables it refers through placeholders in the prompts.
Parrot can perform conventional dataflow analysis [1, 38]
using the primitives to get the producer and consumers of Se-
mantic Variables (i.e., GetProducer and GetConsumers) to
recover dependency of LLM requests. Using the request DAG
and the annotated performance criteria (via GetPerfObj) of
final output Semantic Variables, Parrot can deduce the request-
level scheduling preference by analyzing the DAG and the
performance objective of final outputs (§5.2).
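The following sketch illustrates, in simplified form, how such a dependency structure can be maintained from Semantic Variable annotations; the class and method names are hypothetical and far simpler than Parrot's actual implementation.

from collections import defaultdict

class RequestDAG:
    def __init__(self):
        self.producer = {}                  # semantic variable -> request that writes it
        self.consumers = defaultdict(list)  # semantic variable -> requests that read it

    def add_request(self, request_id, inputs, outputs):
        for var in inputs:
            self.consumers[var].append(request_id)
        for var in outputs:
            self.producer[var] = request_id

    def ready(self, request_id, inputs, finished):
        # A request can be dispatched once every produced input variable is materialized.
        return all(v not in self.producer or self.producer[v] in finished for v in inputs)

dag = RequestDAG()
dag.add_request("WritePythonCode", inputs=["task"], outputs=["code"])
dag.add_request("WriteTestCode", inputs=["task", "code"], outputs=["test"])
print(dag.ready("WriteTestCode", ["task", "code"], finished=set()))                   # False
print(dag.ready("WriteTestCode", ["task", "code"], finished={"WritePythonCode"}))     # True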
structure declared by Semantic Variables, Parrot supports ex-
tracting the hash values of an LLM request at multiple po-
sitions split by Semantic Variables (i.e., PrefixHash). For
example, the prompt of WritePythonCode has two potential
sharing prefix: the text before {{input:task}} and the text
before {{output:code}}, thus there will be two prefix hash
values generated. The prefix hashes of LLM requests will
be used for swift detection of commonality across multiple
requests, supporting both static and dynamically generated
contents, as well as within the same type of application or
even across applications (§5.3).
5 Optimizations with Semantic Variable
5.1 Serving Dependent Requests
Avoiding unnecessary client-side execution requires know-
ing the dependencies among requests at the application level,
which are lost in today’s public LLM services. With the DAG and
primitives illustrated in §4.2, Parrot serves dependent requests
efficiently through a graph-based executor. The executor polls
constantly and sends a request to the corresponding engine once it is ready
(i.e. producer requests are all finished), which allows instant
execution and maximizes batching opportunities. For con-
secutive execution of dependent requests, materialized value
is transmitted through a message queue allocated for cor-
responding Semantic Variable, avoiding unnecessary chatty
communication between clients and LLM services.
The value of a Semantic Variable in a request may require
transformation before being exchanged, e.g., the value of a
Semantic Variable is extracted from the JSON-formatted out-
put of an LLM request, which is then fed into consecutive
LLM requests. Similar to existing message queue systems
that support message transformation (e.g., Kafka [5]), Parrot
also supports string transformation to manipulate Semantic
Variables during value exchanging among LLM requests. Par-
rot supports most output parsing methods of LangChain [8],
which covers most use cases of LLM applications.
5.2 Performance Objective Deduction
To optimize the end-to-end performance of applications, we
need to know the application-level performance criteria. To
help deriving the request-level scheduling preference from the
end-to-end application’s performance requirement, we need
to understand the workflow of the LLM application, which is
the DAG of LLM requests derived by Parrot’s primitives.
When an application annotates a Semantic Variable to pre-
fer higher throughput, all requests generating this Seman-
tic Variable (both directly or indirectly) will be marked as
throughput-preferred when scheduling. This scheduling pref-
erence is usually beneficial for offline data processing, such
as bulk document analysis.
Figure 9: Performance deduction for an LLM-based applica-
tion generating two latency-sensitive Semantic Variable.
Handling latency-sensitive applications is more intricate.
As demonstrated in Figure 4, achieving low end-to-end la-
tency may sometimes require prioritizing throughput at the
Mapping stage. The latency of individual requests can be sacri-
ficed so as to reduce the completion time of the entire DAG of
requests. Parrot analyzes LLM requests in reverse topological
order, beginning with those linked to latency-critical Semantic
Variable, as depicted in Figure 9. With the extracted DAG,
LLM requests that directly result in latency-critical Seman-
tic Variables are labeled as latency-sensitive (Request 1 and
2), as are their immediate predecessors (Request 3). Parallel
LLM requests at the same stage are grouped into a task group
(Task Groups 0 and 1). The scheduler should minimize the
latency of the entire task group, often leading to a higher batch
capacity for higher throughput of token generation.
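A simplified sketch of this reverse deduction is shown below; it propagates the latency-sensitive label to all upstream requests of a latency-critical output, which is a coarser rule than the task-group construction described above, and all names are illustrative.

def label_latency_sensitive(dag, latency_outputs):
    """dag: {request: set of upstream (producer) requests};
    latency_outputs: requests that directly produce latency-critical Semantic Variables."""
    labeled, frontier = set(), set(latency_outputs)
    while frontier:                       # walk the DAG in reverse topological order
        labeled |= frontier
        frontier = {p for r in frontier for p in dag.get(r, ()) if p not in labeled}
    return labeled                        # every request on a path to a latency-critical output

deps = {"reduce": {"map1", "map2"}, "map1": set(), "map2": set()}
print(label_latency_sensitive(deps, {"reduce"}))   # {'reduce', 'map1', 'map2'}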
5.3 Sharing Prompt Prefix
When an LLM request is scheduled to an LLM engine, a con-
text on the engine is created to store the state of the model
execution for this request (mainly KV cache). Existing works
have proposed to share the KV cache of common prefix of
prompts in LLM engines to save the GPU memory. However,
as we have explained in §3, today’s public LLM services face
diverse applications and requests, which makes it hard to identify
such commonality at the cluster level. Token-by-token compar-
ison is impractical due to high time complexity, especially for
very long context with massive requests. In Parrot, by expos-
ing Semantic Variables to LLM service, we can understand
the prompt structure to automatically detect the commonality
more efficiently at the granularity of Semantic Variables.
Using Parrot’s primitive of PrefixHash, Parrot only needs
to check the hash value at positions after each Semantic Vari-
able in a request’s prompt. Parrot maintains a key-value store,
where each entry maps a (hashed) prefix of tokens to a list of
requests, thus the scheduler can quickly check the opportunity
in an online manner, supporting both static and dynamically-
generated prompt within one application or even across dif-
ferent applications.
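The sketch below illustrates this kind of prefix-hash bookkeeping in simplified form; the hashing granularity and data structures are illustrative assumptions, not Parrot's actual design.

import hashlib
from collections import defaultdict

prefix_index = defaultdict(list)   # prefix hash -> request ids already seen

def prefix_hashes(prompt_segments):
    """One cumulative hash per prefix that ends at a Semantic Variable boundary."""
    h, out = hashlib.sha256(), []
    for segment in prompt_segments:
        h.update(segment.encode())
        out.append(h.hexdigest())
    return out

def register(request_id, prompt_segments):
    shared_with = []
    for ph in prefix_hashes(prompt_segments):
        shared_with.extend(prefix_index[ph])
        prefix_index[ph].append(request_id)
    return set(shared_with)        # candidate requests to co-locate on one engine

print(register("r1", ["You are an expert software engineer. ", "task A"]))   # set()
print(register("r2", ["You are an expert software engineer. ", "task B"]))   # {'r1'}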
Furthermore, we propose a better GPU kernel for the attention computation of requests with a common prefix. We first leverage vLLM's paged memory management [25] to avoid redundant GPU memory usage. But vLLM's kernel still suffers from redundant computation and memory loading of the shared tokens. Therefore, we design a new attention decoding algorithm by combining FlashAttention [12] and PagedAttention [25], treating the shared and non-shared tokens separately.
Algorithm 1: Parrot's Request Scheduling.
Data: Q: the request queue
1  Q.sort();    /* Topological order */
2  for r ∈ Q do
3      SharedReqsInQueue, CtxInEngine = FindSharedPrefix(r);
4      if r.TaskGroup ≠ ∅ then
5          r∗ = FindEngine(r.TaskGroup);
6      else if SharedReqsInQueue ≠ ∅ then
7          r∗ = FindEngine(SharedReqsInQueue);
8      else if CtxInEngine ≠ ∅ then
9          r∗ = FindEngine(r, filter=CtxInEngine);
10     if r∗ = ∅ then
11         r∗ = FindEngine(r);
12     Q.remove(r∗);
This significantly accelerates the attention computation for shared contexts (implementation details in §7).
5.4 Application-Centric Scheduling
To address the problem that existing public LLM services blindly optimize diverse individual requests, Parrot's scheduling policy leverages application-level knowledge to optimize the end-to-end performance. Specifically, the primary goal of Par-
rot’s scheduler is to meet the varied performance goals of
LLM applications while optimizing GPU cluster utilization.
As explained in §3, a conflict arises when combining through-
put and latency oriented requests: large batch sizes increase
throughput and GPU efficiency but degrade latency, and vice
versa. Transformer-based LLM inference is largely memory-
bound, with latency influenced by the count of concurrent
tokens within the engine. To meet performance targets of
LLM applications, particularly latency, an LLM engine must
regulate its token count below a specified threshold, which is determined by the LLM request with the strictest latency constraint. Therefore, Parrot's scheduling principles are
twofold: (1) group LLM requests with similar performance
requirements to circumvent the conflict, and (2) maximize
opportunities for sharing across requests.
Algorithm 1 outlines the scheduling process of Parrot. With
the extracted DAG, the system arranges the LLM requests
according to their topological order (line 1). Parrot tends to
schedule requests belonging to the same application together
to avoid the slowdown caused by interleaved scheduling (§8.2).
For requests identified as part of a task group through Parrot’s
performance objective deduction, the scheduler attempts to
allocate the entire task group together (line 4-line 5). Addi-
tionally, if Parrot detects other queued requests or running
contexts with a common prefix, it tries to assign them to
the same LLM engine (line 3, line 6-line 9), to utilize Par-
rot’s context fork to reduce the redundant computation and
GPU memory transactions. For an LLM request without the
above opportunity, Parrot schedules the request independently
(line 10-line 11). Due to limited space, we omit the details of
how Parrot chooses LLM engines (i.e., FindEngine). Briefly,
Parrot finds the engine that satisfies the scheduling preference
of a request while minimizing the negative impact on other requests. For instance, if a latency-sensitive request is scheduled to an LLM engine that could otherwise run up to 64,000 tokens of throughput-driven requests, the engine's capacity has to be reduced significantly, to 2,000 tokens, to satisfy the strict latency requirement. But if the request is scheduled to an engine that is already running a latency-sensitive request, the capacity reduction is negligible.
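To make this concrete, here is a simplified sketch of what an engine-selection routine with this flavor could look like; the scoring rule, field names, and capacity values are assumptions for illustration, not Parrot's actual FindEngine implementation.

def find_engine(request, engines, latency_capacity=2000, throughput_capacity=64000):
    """Pick the engine whose capacity shrinks the least if it admits this request.

    Each engine is a dict with 'running_tokens' and 'has_latency_sensitive'
    fields (illustrative schema). A latency-sensitive request forces an engine
    onto the small latency-oriented capacity; an engine already serving
    latency-sensitive requests therefore loses little by taking one more.
    """
    best, best_cost = None, float("inf")
    for engine in engines:
        current_cap = (latency_capacity if engine["has_latency_sensitive"]
                       else throughput_capacity)
        new_cap = latency_capacity if request["latency_sensitive"] else current_cap
        if engine["running_tokens"] + request["tokens"] > new_cap:
            continue  # engine cannot take the request under the new capacity
        cost = current_cap - new_cap  # capacity lost by admitting the request
        if cost < best_cost:
            best, best_cost = engine, cost
    return best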
6 Discussion
Dynamic Applications and Function Calling. Currently,
Parrot only supports cloud-side orchestration of LLM requests
without involving dynamic control flow and native functions
(e.g., Python Code). They still require client-side execution.
We intentionally disable the offloading of these functions
to public LLM services to minimize the security risks of
malicious injection. For private LLM services whose LLM applications are trusted, or where there is a trusted zone to execute these functions, Parrot's APIs can be easily extended with
conditional connections and native code submission. More-
over, these extensions further enable new optimizations, e.g.,
we can speculatively pre-launch high-probability branches in
dynamic applications based on past profiles. This also demonstrates the potential of Parrot's design for new types of applications. We leave these extensions as future work.
Other Applications of Inter-Request Analysis. The inter-
request analysis in Parrot enables a new optimization space
not limited to the ones we introduced in §5. A large-scale
service has more scheduling features to consider, including
handling outliers [3], job failures [58], delay scheduling [57],
fairness [15, 61], starvation [17], or supporting heterogeneous
clusters [24, 37], which have been widely studied in other
systems. Parrot provides a new view from the perspective
of LLM-based applications: we need to understand the inter-
connection and commonality of LLM requests to optimize
applications’ end-to-end performance. These features can be
revisited in the LLM service system by considering the new
characteristics of LLM applications. In this paper, we focus
on Parrot’s mechanisms and a few use cases, leaving other
optimizations as promising future work.
Parrot with LLM Orchestration Frameworks. There
have been several frameworks for developers to build LLM-
based applications, e.g., LangChain [8], SemanticKernel [36],
and PromptFlow [35]. The key function of these frameworks
is to “glue” different LLM calls to accomplish a complex
task (aka. LLM orchestration). Parrot can be integrated with
these frameworks by extending their calling of LLM service
APIs with Semantic Variables. Most of these frameworks
have already used a template-based approach in which devel-
opers can design a template with placeholders, and render the
placeholders at runtime. These placeholders naturally have
the same concept as Parrot’s Semantic Variable. However,
because these frameworks render the template prompt before submission, LLM services lose the information about the prompt structure. To make these frameworks compatible
with Parrot, both the template itself and the variables to render
the template (using Semantic Variable in Parrot) need to be
wrapped as a SemanticFunction so the necessary informa-
tion is exposed to Parrot’s LLM service.
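As a purely illustrative sketch (the template text, the splitting helper, and the resulting structure below are assumptions, not Parrot's published front-end or any framework's real API), a template-based prompt could be kept in its unrendered form so that the constant text and the placeholders remain visible to the service:

import re

# Hypothetical template with named placeholders, as an orchestration
# framework might define it before rendering.
TEMPLATE = "You are a helpful reviewer.\nPaper: {paper_text}\nWrite a short review: {review}"

def as_semantic_function(template: str):
    """Split a template into constant text and placeholder names.

    The returned structure (constant segments plus named placeholders) is the
    kind of information a Parrot-style service would need to map placeholders
    to Semantic Variables, instead of receiving one fully rendered string.
    """
    parts = re.split(r"\{(\w+)\}", template)
    segments = []
    for i, part in enumerate(parts):
        kind = "placeholder" if i % 2 == 1 else "text"
        segments.append({"kind": kind, "value": part})
    return segments

# Example usage: inspect the exposed prompt structure.
print(as_semantic_function(TEMPLATE))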
7 Implementation
Parrot is an end-to-end LLM service for LLM applications,
implemented in Python with about 14,000 lines of code. Its front-end provides the abstractions of Semantic Variable and SemanticFunction, which are transformed into Parrot's APIs (implemented with FastAPI [48]) and submitted as LLM
requests. A centralized Parrot manager handles the manage-
ment of LLM requests, including Semantic Variables, com-
munication, and scheduling. We also build an LLM engine
based on efficient kernels from vLLM [25], xFormers [26],
and ourselves. The engine supports advanced features for
LLM serving, including paged memory management [25] and
continuous batching [56]. Parrot's front-end and manager are
implemented in 1,600 and 3,200 lines of Python, respectively.
Parrot’s LLM engine is implemented in 5,400 lines of Python
and 1,600 lines of CUDA. We have implemented OPT [60]
and LLaMA [51] with PyTorch [45] and Transformers [53].
APIs. Applications programmed by SemanticFunctions
or other frontends are ultimately lowered into requests to universal APIs through different adapters. Parrot provides OpenAI-like
APIs with the extension of Semantic Variables. The request
body of two operations mentioned in §4.1 is shown as follows:
(submit) {"prompt": str, "placeholders": [{"name":
(cid:44)→
(cid:44)→
str, "in_out": bool, "semantic_var_id": str,
"transforms": str}, ...], "session_id": str}
(get) {"semantic_var_id": str, "criteria": str,
(cid:44)→
"session_id": str}
In addition to the static string prompt, Parrot preserves the
input and output placeholders. A placeholder is associated
with a Semantic Variable, either for rendering the input or for parsing the output. As introduced in §5.1, Parrot supports transformations before the input or after the output. Parrot also supports other APIs for setting and fetching the value of Semantic Variables. An error message is returned when fetching a Semantic Variable whose intermediate steps fail (including engine, communication, and string transformation errors).
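For illustration, a concrete pair of request bodies under this schema might look as follows; the prompt text, placeholder syntax, identifiers, and the exact semantics of the in_out and transforms fields are guesses for illustration, not values or behavior taken from Parrot.

# Example request body for the submit operation (illustrative values only).
submit_body = {
    "prompt": "Summarize the following chunk: {{chunk}} Previous summary: {{prev}}",
    "placeholders": [
        {"name": "chunk", "in_out": True,  "semantic_var_id": "sv_chunk_3",
         "transforms": ""},
        {"name": "prev",  "in_out": True,  "semantic_var_id": "sv_summary_2",
         "transforms": ""},
        {"name": "out",   "in_out": False, "semantic_var_id": "sv_summary_3",
         "transforms": "json:summary"},
    ],
    "session_id": "session-42",
}

# Example request body for the get operation, fetching the final summary
# with a latency-oriented performance criterion.
get_body = {
    "semantic_var_id": "sv_summary_3",
    "criteria": "LATENCY",
    "session_id": "session-42",
}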
Kernel Optimization.
vLLM’s GPU kernel, while capable
of reusing results cached in GPU memory for shared prefix to-
kens in a prompt, sometimes excessively reloads these tokens
from global to shared memory, impeding attention score com-
putations. Using OpenAI Triton [43] and CUDA, we have
developed a novel GPU kernel, integrating concepts from
PagedAttention [25] and FlashAttention [11, 12], to acceler-
ate attention decoding computation involving shared prefixes.
This kernel retains PagedAttention’s approach of storing the
key-value (KV) cache in disparate memory segments and
utilizes a page table per request to monitor block status and
placement. Furthermore, employing FlashAttention princi-
ples, the kernel maximizes data reuse within shared memory.
Unlike PagedAttention's implementation, which reloads tiles repeatedly, it loads KV cache tiles for the shared prefix
to shared memory only once, diminishing memory transac-
tions between the L2 Cache and Shared Memory. The kernel
initially calculates interim attention metrics (including atten-
tion scores, qk_max, exp_sum) for the shared prefix using the
loaded tiles and records these back to HBM. Subsequently, it
processes the new tokens’ partial attention beyond the prefix,
amalgamating this with the prefix’s interim results to derive
the ultimate attention output.
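The merging step can be illustrated with the standard online-softmax identity: partial attention results computed over the shared prefix and over the non-shared suffix, each carrying their own running maximum and exponential sum, can be combined exactly. The NumPy sketch below is only a numerical illustration of that identity (head-dimension scaling omitted), not the CUDA kernel itself.

import numpy as np

def partial_attention(q, k, v):
    """Attention over one KV segment, returning interim metrics for merging."""
    scores = k @ q                     # attention logits for this segment
    qk_max = scores.max()
    weights = np.exp(scores - qk_max)
    exp_sum = weights.sum()
    out = weights @ v                  # unnormalized weighted sum of values
    return out, qk_max, exp_sum

def merge(out1, m1, s1, out2, m2, s2):
    """Exactly combine two partial softmax-attention results."""
    m = max(m1, m2)
    s = s1 * np.exp(m1 - m) + s2 * np.exp(m2 - m)
    return (out1 * np.exp(m1 - m) + out2 * np.exp(m2 - m)) / s

# Sanity check: merging prefix and suffix partials equals full attention.
rng = np.random.default_rng(0)
q = rng.standard_normal(8)
k = rng.standard_normal((10, 8)); v = rng.standard_normal((10, 4))
logits = k @ q
full = (np.exp(logits - logits.max()) / np.exp(logits - logits.max()).sum()) @ v
merged = merge(*partial_attention(q, k[:6], v[:6]), *partial_attention(q, k[6:], v[6:]))
assert np.allclose(full, merged)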
Universal Engine Abstraction. Parrot’s cluster manager
controls multiple engines running various models, tokeniz-
ers, KV cache layouts, etc. To enable Parrot’s optimizations,
LLM engines need to support (1) stateful generation (e.g.,
guidance [18]) and (2) sharing KV cache states across dif-
ferent requests. Hence we propose a universal abstraction that describes the minimal capabilities required of an LLM engine to be integrated into Parrot.
def Fill(token_ids: List[int], context_id: int, parent_context_id: int)
def Generate(sampling_configs: Dict, context_id: int, parent_context_id: int)
def FreeContext(context_id: int)
These three methods not only cover the basic completion functionality of an LLM inference engine, but also provide a flexible context management interface. The Fill method processes the initial prompt tokens and calculates and fills the KV cache into the corresponding context. The Generate method produces tokens via generative decoding, emitting one token per iteration until it reaches the length limit, a user-defined termination character, or the EOS (end-of-sequence) token, under given sampling configurations (e.g., temperature). Fills and Generates are scheduled and batched by the engine's scheduler per iteration using continuous batching [56]. Creating and forking contexts can also be realized with these two methods by setting context_id and parent_context_id, respectively. The FreeContext method explicitly frees a context (i.e., frees its KV cache in GPU memory). Separating Fill and Generate not only fits Semantic Variables naturally (constant text and input values are processed by Fill, while output values are generated by Generate), but also breaks request-level dependencies into a finer granularity, enabling more parallel execution opportunities [2, 21, 46, 64].
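To illustrate how requests with a shared system prompt could be driven through this interface, here is a schematic sequence of calls against a hypothetical engine client; the engine and tokenizer objects, the integer context ids, and the use of -1 to mean "no parent" are assumptions of this sketch.

from typing import Dict, List

def serve_with_shared_prefix(engine, tokenizer, system_prompt: str,
                             user_inputs: List[str], sampling: Dict):
    """Schematic use of Fill/Generate/FreeContext with a forked shared context."""
    # 1. Fill the shared system prompt once into a parent context.
    #    (-1 meaning "no parent" is an assumption of this sketch.)
    parent_ctx = 0
    engine.Fill(token_ids=tokenizer.encode(system_prompt),
                context_id=parent_ctx, parent_context_id=-1)

    # 2. Fork one child context per user; only the divergent tokens are filled.
    for i, user_input in enumerate(user_inputs, start=1):
        child_ctx = i
        engine.Fill(token_ids=tokenizer.encode(user_input),
                    context_id=child_ctx, parent_context_id=parent_ctx)
        # 3. Decode this user's output in its own context, reusing the prefix.
        engine.Generate(sampling_configs=sampling,
                        context_id=child_ctx, parent_context_id=parent_ctx)
        engine.FreeContext(child_ctx)

    # 4. Release the shared prefix once all children are done.
    engine.FreeContext(parent_ctx)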
8 Evaluation
8.1 Experimental Setup
Testbed. We evaluate Parrot with two separate setups for
single-GPU and multi-GPU experiments. The single-GPU
evaluations use a server with a 24-core AMD-EPYC-7V13
CPUs equipped with one NVIDIA A100 (80GB) GPU. The
multi-GPU evaluations use a server with 64-core EPYC AMD
CPU and four NVIDIA A6000 (48GB) GPUs. Both servers
run CUDA 12.1 and cuDNN 8.9.2.
Workloads. Our evaluations run four representative LLM applications. Each LLM engine uses one GPU and runs a LLaMA 13B or LLaMA 7B model [51].
For LLM-based data analytics on long documents, we use the
Arxiv dataset [27], executing chain and map-reduce summa-
rizations on an extensive collection of academic papers. To
investigate the sharing opportunities of LLM-based applica-
tions with many users, we run the prompts from Bing Copilot
and GPTs [42] with synthesized user queries. For multi-agent
applications, we build a multi-agent programming application
using MetaGPT [22], which contains a system architect to
design APIs, multiple programmers to write code for different files, and reviewers to share review comments. The programmers then revise the code based on the comments. For chat ser-
vice workloads, we derived scenarios from the ShareGPT
dataset [50], which mirrors real LLM chat conversations. Based on the distribution we measured, we introduced a random delay of 200∼300 ms to LLM requests to emulate typical network overhead seen over the Internet. To create realistic workloads, we recorded the LLM responses generated by GPT-4 [41] and ensured the LLaMA models generated text of similar length for system performance analysis. Table 2
presents the workloads and their optimizations in Parrot.
Baseline. We benchmark Parrot against state-of-the-art so-
lutions for building LLM applications and serving LLM re-
quests. The majority of LLM applications used in our baseline
Table 2: The workloads and the optimizations taking effect. (Workloads: Data Analytics, Serving Popular LLM Applications, Multi-agent App., Mixed Workloads; optimizations: Serving Dependent Requests, Perf. Obj. Deduction, Sharing Prompt, App-centric Scheduling.)
(a) Mean Latency
(b) P90 Latency
Figure 10: Latency (per output token) of vLLM with varying
token capacities and request rates. Requests are sampled from ShareGPT [50] and their arrival times follow Poisson distributions.
comparisons are developed using LangChain [8], which is the
predominant framework for LLM application development.
The LLM applications in baselines leverage OpenAI-style
chat completion APIs as provided by FastChat [62]. FastChat
is a widely recognized open-source LLM serving system
with over 30,000 stars on its repository. Incoming requests to
FastChat are allocated to LLM engines that run either Hug-
gingFace’s Transformers library [53] or vLLM [25], both of
which incorporate cutting-edge enhancements for LLM exe-
cution, such as FlashAttention [12], PagedAttention [25], and
continuous batching techniques [56]. The default scheduling
strategy employed by FastChat assigns incoming requests
to the LLM engine with the smallest current queue. Since
existing LLM services typically expose their functionality
through "chat" completion APIs, baseline assessments treat
all requests as independent and assume a high sensitivity to
latency. To manage token generation response times, each
LLM engine is subject to a capacity threshold, which is the
aggregate token count from all active requests on the engine.
Since existing LLM token generation is usually bound by
memory bandwidth, the per-token generation latency of an
engine is mainly affected by the number of running tokens in
a batch. As depicted in Figure 10, our experiments indicate
that the latency per output token, i.e. TPOT (Time-per-output-
token) for vLLM, with continuous batching enabled, experi-
ences a notable uptick when the engine’s workload using a
batch capacity beyond 6144. In our evaluation, we use the
setting that an LLM engine can keep its generation latency
under 40 ms/s for latency-sensitive requests, consistent with
our experience of OpenAI’s LLM services. When all LLM
engines hit their maximum capacity, any additional LLM re-
quests are queued in a FIFO (First In, First Out) manner,
awaiting the completion and release of resources by ongoing
tasks. Serving longer contexts (e.g., 32k or even 1M tokens) within a satisfactory latency requires either more GPUs using
tensor-parallel [49] or sequence-parallel [6] approaches, or
approximate attention (e.g., StreamingLLM [55]), which is
beyond the scope of this paper.
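A minimal sketch of this capacity-threshold admission policy is given below; the class, field names, and the 6144-token default (taken from the measurement above) are illustrative choices rather than the baseline's actual code.

from collections import deque

class EngineAdmission:
    """Admit requests while the aggregate token count stays under a capacity
    threshold; otherwise queue them FIFO until running requests release tokens."""

    def __init__(self, capacity_tokens: int = 6144):
        self.capacity = capacity_tokens
        self.running_tokens = 0
        self.waiting = deque()

    def submit(self, request_id: str, tokens: int) -> str:
        if self.running_tokens + tokens <= self.capacity:
            self.running_tokens += tokens
            return "running"
        self.waiting.append((request_id, tokens))
        return "queued"

    def complete(self, tokens: int):
        """Release a finished request's tokens and admit queued requests FIFO."""
        self.running_tokens -= tokens
        while self.waiting and self.running_tokens + self.waiting[0][1] <= self.capacity:
            _, t = self.waiting.popleft()
            self.running_tokens += t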
(a) Output lengths
(b) Chunk sizes
(a) With background requests
(b) Multiple summary apps.
Figure 11: Average E2E latency of chain summarization with
varying output lengths and chunk sizes.
Figure 12: Average E2E latency of chain-summary with back-
ground requests or other chain-summary applications.
8.2 Data Analytics on Long Documents
Our experimental analysis within data analytics randomly
picks ten long documents from the Arxiv-March dataset [27],
using chain-summary and map-reduce summary. Each document has over 20,000 tokens. The results measure the mean end-to-end latency across all documents.
Chain-style Applications. Our evaluation demonstrates
how Parrot enhances chain summarization by mitigating the
excessive communication overhead stemming from client in-
teractions. Figure 11 presents the average end-to-end latency
for summarizing a single document using one LLM engine
(A100, LLaMA 13B). We adjust the chunk size (the count of
tokens per chunk) and the output length, with results shown in
Figure 11a and Figure 11b, respectively. Parrot achieves a re-
duction in end-to-end latency by as much as 1.38× and 1.88×
compared to the baselines employing vLLM and Hugging-
Face, respectively. The efficiency of Parrot primarily stems
from the decreased network latency, which is a consequence
of reduced client interaction. As the output length increases,
the time spent on generation becomes more significant, lead-
ing to a diminishing advantage for Parrot over the baseline. By
increasing the chunk size, we decrease the number of chunks,
yet the extent of the speedup is contingent upon the network
latency savings for each chunk. Given that token generation is
substantially more time-consuming than prompt processing,
we observe a consistent speedup with variable chunk sizes
and a fixed output length (1.2× and 1.66× relative to vLLM
and HuggingFace, respectively). This indicates that Parrot’s
optimization for dependent LLM requests is particularly bene-
ficial for shorter outputs, which are prevalent in various LLM
applications such as summarization, short answer generation,
scoring, and choice provision. Due to HuggingFace’s slower
performance relative to vLLM, subsequent evaluations focus
solely on the comparison between Parrot and vLLM.
Figure 12a extends the evaluation by introducing back-
ground LLM requests at varying rates to examine the capa-
bility of Parrot in mitigating additional queuing delays for
dependent requests. Parrot slashes the end-to-end latency by a
factor of 2.38× in comparison to the baseline (vLLM). With
Parrot, as soon as the summary for the first chunk is completed,
Figure 13: The difference in E2E latency of the 25 chain-
summary application between Baseline and Parrot. All appli-
cations finish earlier in Parrot.
the subsequent chunk is processed immediately by incorporat-
ing the summaries of previous chunks into the prompt, which
aids in generating the summary for the next chunk. In con-
trast, the baseline treats all LLM requests individually. As a
result, in addition to the network latency from client interac-
tions, subsequent requests must re-enter the queue, leading
to added queuing delays. Figure 12b further illustrates the
end-to-end latency when multiple chain-summary applica-
tions are submitted concurrently, with each application tasked
with generating a summary for a separate document. Parrot
manages to reduce the average end-to-end latency for all ap-
plications by 1.68× without slowing down any applications
compared to the baseline according to Figure 13. The base-
line, by interleaving the execution of different applications,
exacerbates the slowdown of the end-to-end latency for all
applications. These experiments validate that recognizing the
interconnections of LLM requests can significantly enhance
end-to-end performance, as opposed to processing requests
in isolation.
Map-Reduce Applications. An alternative implementation
of the document summarization application follows the map-
reduce paradigm as depicted in Figure 1a. This approach
consists of multiple parallel mapping LLM requests, where
each request summarizes a distinct segment of the document,
followed by a reducing LLM request that aggregates these
individual summaries into a final summary. As shown in
Figure 14, Parrot realizes a 2.37× acceleration over the base-
(a) Output lengths
(b) Chunk sizes
Figure 14: Average E2E latency of Map-Reduce document
summary with varying output lengths and chunk sizes.
line with one LLM engine (A100, LLaMA 13B). Since the
mapping LLM requests are independent, they are dispatched
concurrently by both Parrot and the baseline. The primary ad-
vantage of Parrot stems from its deduction of a performance
objective that identifies the mapping tasks as a task group.
By recognizing this relationship, Parrot is capable of optimiz-
ing the latency of the entire task group through larger batch
sizes, which in turn enhances throughput. In contrast, the
baseline processes each LLM request in isolation, operating
under the presumption that they are all sensitive to latency.
This constrains the baseline to utilize a limited token capacity
(4096 tokens) on the LLM engine to achieve optimal latency
for individual tasks, which is detrimental to the end-to-end
performance of applications. It underscores the necessity for
LLM services to distinguish LLM requests to optimize the
end-to-end performance of varied LLM applications.
8.3 Serving Popular LLM Applications
Production applications need to serve massive numbers of users. As ex-
plained in Figure 5, developers often need to use a very long
system prompt to define the behavior of LLMs. Therefore,
users of the same LLM application often use the shared
prompt, which can benefit from Parrot’s context fork mech-
anism and Parrot’s scheduling policy that co-locates LLM
requests sharing a long prompt prefix. Because we do not
have access to the intermediate steps of Bing Copilot, we only
evaluate the final request generating the response to users.
We synthesized 64 requests from the length distribution we
measured using Bing Copilot. The system prompt length is
about 6000 tokens. The output lengths range from 180 to
800 tokens. Figure 15 shows the average request latency of
Bing Copilot of Parrot and the baselines. Because the LLM
service in the baseline system does not know the prompt struc-
ture, it is hard to infer the shared prompt from massive LLM
requests. Compared to the baseline without sharing prompt,
Parrot achieves 1.8× ∼ 2.4× speedup for batch sizes of 8 and
16. Further increasing the batch size leads to out-of-memory
due to the massive KV cache of shared system prompt. We
also build an advanced baseline using vLLM’s paged atten-
tion to support sharing the prompt with a static prefix. Both
Figure 15: Latency of Bing Copilot with varying batch sizes.
(a) Batch Size = 32
(b) Batch Size = 64
Figure 16: Latency per output token of Bing Copilot.
Parrot and vLLM use the paged memory management [25],
thus both systems can hold the same number of tokens in
an LLM engine (A100, LLaMA 7B). Parrot further achieves
1.1× ∼ 1.7× speedup over vLLM because of the better GPU
kernel. Although vLLM can save extra memory usage of the
shared prompt, its GPU kernel still has to reload the tokens
repeatedly. Given that the token generation of LLMs is bound
by memory bandwidth, such redundant memory loading slows
down the end-to-end inference. By combining FlashAtten-
tion and PagedAttention, Parrot only needs to load the tokens
of the shared prompt once, when computing the attention
from the diverged tokens of different users. Parrot’s speedup
of shared prompt mainly comes from the token generation,
thus the longer output length leads to higher improvement.
Figure 16 shows Parrot achieves 1.58× and 1.84× speedup
compared to vLLM using paged attention, showing 40 ms
per-output-token latency at a batch size of 32.
In Figure 17, we further evaluated the serving of multiple
GPTs applications [42], each of which has multiple users, in
a multi-GPU cluster. Four A6000 (48GB) GPUs are deployed
with four LLM engines (LLaMA 7B). We select four GPTs
applications in four popular categories including productivity,
programming, image generation, and data analysis. The LLM
requests are randomly generated from the four categories with
equal probability. LLM requests arrive at fixed rates following
Poisson distribution. Parrot can sustain 12× higher request
rates compared to the baseline without sharing. Because the
baseline’s scheduling policy is not aware of the shared prompt
within each LLM application, the requests are mixed in all
LLM engines making it impossible to reuse the common
prompt prefix. Parrot’s scheduling policy co-locates LLM
requests of the same applications to maximize the sharing op-
Figure 17: Serving multiple GPTs applications.
portunity, achieving both lower inference latency and higher
cluster throughput. After turning off this affinity scheduling policy, Parrot only sustains 3× higher request rates than the baseline, because requests with a shared prefix are often dispatched to different engines, which reduces the sharing opportunities.
to achieve 2.4× higher rate compared to Parrot using vLLM’s
PagedAttention, by avoiding the redundant memory loading
for attention of shared prompts.
8.4 Multi-agent Applications
We assess the performance of multi-agent systems utiliz-
ing MetaGPT [22] within Parrot. A workflow is constructed
with three distinct roles. Initially, the Architect outlines the
project’s file structures and specifies APIs within each file
for a given task. Subsequently, multiple Coders undertake the
project implementation, with each focusing on a specific file.
Following the integration of the code from all files, several
Reviewers engage in the process, each examining and com-
menting on a single file. The Coders then revise their code
based on these comments. This review-and-revision cycle
is iterated three times to produce the final code. Figure 18
illustrates the latency and memory consumption of Parrot
compared to baseline systems on one A100 running LLaMA
13B. Parrot achieves a speedup of up to 11.7× compared
with the latency-centric baseline. The primary improvement
is attributed to Parrot's capability to deduce the performance
objectives for LLM requests based on the end-to-end perfor-
mance criteria. For this specific multi-agent scenario, the goal
is to minimize the time taken to deliver the final code. Parrot
identifies multiple task groups within the parallel processes of
coding, reviewing, and revising, facilitating larger batch sizes
to enhance throughput and reduce the completion time of task
groups. We also contrast Parrot with a throughput-centric baseline that deliberately uses larger batches to optimize cluster throughput, which also shows higher concurrency and better completion times than the latency-centric baseline.
Even when compared to the throughput-centric baseline,
Parrot demonstrates superiority, being faster by up to 2.45×.
This enhancement mainly stems from Parrot’s ability to
(a) End-to-end Latency
(b) GPU Memory of KV Cache
Figure 18: The latency and memory usage for multi-agent
programming, with varying number of files to program.
decrease redundancy through its prompt structure analysis,
which contributes a 2.35× acceleration. Given the interactive
nature of the roles in MetaGPT, there is considerable overlap
in the context among different roles, which Parrot capitalizes
on by sharing this common context as a prompt prefix. The
static prefix sharing mechanism from vLLM does not work
in this dynamic scenario. Without a grasp of the prompt’s
structure, it cannot identify dynamically generated Semantic
Variables that could also be shared during runtime. As de-
picted in Figure 18b, Parrot without this sharing capability
would hit the GPU memory ceiling. Additionally, Parrot’s spe-
cialized GPU kernel for processing the shared prefix achieves
a further 1.2× speedup when there are 16 files, compared to
using vLLM’s PagedAttention, due to the reduced memory
transactions.
8.5 Scheduling of Mixed Workloads
To assess the performance of Parrot on a multi-GPU setup, we
configure a cluster with four A6000 (48GB) GPUs, each host-
ing a separate LLM engine (LLaMA 7B), resulting in a total
of four LLM engines. We emulate a real-world scenario where
LLM services encounter a variety of demands by injecting a
mix of requests from chat applications at a rate of 1 req/s and
from data analytic tasks (i.e., map-reduce applications) previ-
ously analyzed in §8.2. Requests from the chat applications
are characterized by their need for low latency, whereas the
map-reduce applications prioritize high throughput, creating a
challenge when they are concurrently processed by the same
LLM engine. We benchmark Parrot against two reference
implementations: one tailored for latency, limiting engine ca-
pacity to reduce decoding time, and another for throughput,
orthogonal to them. With more application-level knowledge
exposed by Semantic Variables, Parrot can do data flow analy-
sis on LLM requests, which enables a brand new optimization
space with the final goal of optimizing the end-to-end perfor-
mance of applications, rather than individual requests.
LLM Orchestrator Frameworks. LLM orchestration
frameworks help developers create and manage applications
powered by LLMs. They simplify the process of prompt de-
sign, and orchestration of multiple LLM requests, which en-
able developers to interact with LLMs easily. LangChain [8]
is a Python framework that provides many workflow patterns,
e.g., chain, map-reduce so that developers can easily cus-
tomize their own LLM applications. Semantic Kernel [36]
introduces Planners, semantic agents that can automatically generate plans based on the needs of the users. Prompt-
Flow [35] supports chains of native and semantic functions
and visualizes them as a graph. LlamaIndex [29] allows de-
velopers to use natural language queries to retrieve relevant
documents. Parrot is orthogonal to these frameworks and can
be easily integrated with these frameworks to support Parrot’s
APIs with Semantic Variable abstraction, as discussed in §6.
DAG-aware System Optimizations. Dependency graphs
or DAGs (Directed Acyclic Graphs) widely exist in many
kinds of systems, and many optimizations have been proposed
to optimize the systems by exploiting the DAG information.
Tez [4], Dryad [23], and Graphene [16] use the task depen-
dency to optimize the scheduling and packing of parallel data
analytic workloads. SONIC [30], Caerus [59], and Orion [31]
optimize serverless functions from the aspects of communica-
tion, latency, and cost. Parrot learns from these previous systems and recognizes the importance of the correlations among LLM requests for optimizing the end-to-end performance of LLM ap-
plications. This motivates Parrot to build APIs for exposing
such dependency information. Moreover, it is unique to LLM
applications to understand the prompt structure in addition to
request-level dependency, which is necessary for communica-
tion and identifying commonality across LLM requests. This
motivates us to propose the Semantic Variable abstraction,
instead of just using a DAG of requests.
10 Conclusion
This paper proposes Parrot, which treats LLM applications as first-class citizens and aims to optimize the end-to-end per-
formance of LLM applications, instead of only optimizing
individual LLM requests. We propose Semantic Variable as
the key abstraction that exposes the dependency and common-
ality of LLM requests, enabling a new optimization space.
Our evaluation shows Parrot can optimize LLM-based ap-
plications by up to 11.7×. We envision this new angle of
efficiency improvement of LLM applications brings a broad
Figure 19: The mixture of chat and map-reduce applications.
utilizing full engine capacity to maximize GPU utilization.
The results depicted in Figure 19 demonstrate that Par-
rot attains a 5.5× and 1.23× improvement in normalized
latency (measured as request latency per number of output
tokens) [25, 56] for chat applications in comparison to the
latency-focused and throughput-focused baselines, respec-
tively. In terms of token generation speed for chat applications,
Parrot delivers performance on par with the latency-centric
baseline and outperforms the throughput-centric baseline by
1.72×. For map-reduce applications, Parrot reaches a 3.7×
speedup over the latency-centric baseline and is 1.05× more
efficient than the throughput-centric baseline. Parrot excels
by providing both low latency for chat applications and high
throughput for map-reduce applications. It mitigates the con-
tention between chat and map-reduce workloads by intelli-
gently scheduling them on separate engines. These findings
underscore the significance of specialized handling for diverse
requests to enhance the overall performance of LLM services.
9 Related Works
Deep Learning Serving Systems. The field of model serv-
ing has seen a surge of research activity in recent years,
with many systems developed to address the different chal-
lenges of deep learning model deployment. The systems in-
clude Clipper [10], TensorFlow Serving [39], Clockwork [19],
REEF [20], AlpaServe [28], which have explored many as-
pects including batching, caching, placement, scheduling,
model parallelism for the serving of single or multiple models.
These systems were proposed for serving general deep learn-
ing models, which have less consideration about the unique
requirements of large language models, e.g., autoregressive
decoding. Orca [56] proposed a fine-grained scheduling mech-
anism that can batch multiple LLM requests at the iteration
level, which is also known as continuous batching. vLLM
proposes PagedAttention [25], which allows the batching of LLM
requests with different lengths using non-contiguous memory,
increasing memory utilization. These systems for LLM serv-
ing still treat LLM requests separately, missing the opportuni-
ties to understand the interconnections within an application
and exploit the commonality of different requests. Parrot is
future direction to study other scheduling features like the
fairness of end-to-end performance of LLM applications.
[8] Harrison Chase. LangChain. https://github.com/
langchain-ai/langchain, October 2022.
Acknowledgments
We thank the anonymous reviewers and the shepherd for their
constructive feedback and suggestions. Zhenhua Han, Yuqing
Yang and Chen Chen are the corresponding authors.
References
[1] Martín Abadi, Paul Barham, Jianmin Chen, Zhifeng
Chen, Andy Davis, Jeffrey Dean, Matthieu Devin, San-
jay Ghemawat, Geoffrey Irving, Michael Isard, Man-
junath Kudlur, Josh Levenberg, Rajat Monga, Sherry
Moore, Derek G. Murray, Benoit Steiner, Paul Tucker,
Vijay Vasudevan, Pete Warden, Martin Wicke, Yuan Yu,
and Xiaoqiang Zheng. TensorFlow: A system for Large-
Scale machine learning. In 12th USENIX Symposium on
Operating Systems Design and Implementation (OSDI
16), pages 265–283, Savannah, GA, November 2016.
USENIX Association.
[2] Amey Agrawal, Nitin Kedia, Ashish Panwar, Jayashree
Mohan, Nipun Kwatra, Bhargav S Gulavani, Alexey Tu-
manov, and Ramachandran Ramjee. Taming throughput-
latency tradeoff in llm inference with sarathi-serve.
arXiv preprint arXiv:2403.02310, 2024.
[3] Ganesh Ananthanarayanan, Srikanth Kandula, Albert
Greenberg, Ion Stoica, Yi Lu, Bikas Saha, and Edward
Harris. Reining in the outliers in Map-Reduce clusters
using mantri. In 9th USENIX Symposium on Operating
Systems Design and Implementation (OSDI 10), Van-
couver, BC, October 2010. USENIX Association.
[4] Apache. Tez. https://tez.apache.org/, November
2019.
[5] Apache. Kafka. https://kafka.apache.org/, Octo-
ber 2023.
[6] Zhengda Bian, Hongxin Liu, Boxiang Wang, Haichen
Huang, Yongbin Li, Chuanrui Wang, Fan Cui, and Yang
You. Colossal-ai: A unified deep learning system for
large-scale parallel training. CoRR, abs/2110.14883,
2021.
[7] Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan,
Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee,
Yin Tat Lee, Yuanzhi Li, Scott Lundberg, Harsha Nori,
Hamid Palangi, Marco Tulio Ribeiro, and Yi Zhang.
Sparks of artificial general intelligence: Early experi-
ments with gpt-4, 2023.
[9] Lequn Chen. Dissecting batching effects in gpt inference. https://le.qun.ch/en/blog/2023/05/13/transformer-batching/, May 2023.
[10] Daniel Crankshaw, Xin Wang, Guilio Zhou, Michael J.
Franklin, Joseph E. Gonzalez, and Ion Stoica. Clipper:
A Low-Latency online prediction serving system. In
14th USENIX Symposium on Networked Systems Design
and Implementation (NSDI 17), pages 613–627, Boston,
MA, March 2017. USENIX Association.
[11] Tri Dao. Flashattention-2: Faster attention with bet-
ter parallelism and work partitioning. arXiv preprint
arXiv:2307.08691, 2023.
[12] Tri Dao, Dan Fu, Stefano Ermon, Atri Rudra, and
Christopher Ré.
Flashattention: Fast and memory-
efficient exact attention with io-awareness. In S. Koyejo,
S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, and
A. Oh, editors, Advances in Neural Information Process-
ing Systems, volume 35, pages 16344–16359. Curran
Associates, Inc., 2022.
[13] Bill Gates. Ai is about to completely change how you
use computers and upend the software industry. https:
//www.gatesnotes.com/AI-agents, Nov 2023.
[14] Google. Google bard. https://bard.google.com/,
Nov 2023.
[15] Robert Grandl, Mosharaf Chowdhury, Aditya Akella,
and Ganesh Ananthanarayanan. Altruistic scheduling
in Multi-Resource clusters. In 12th USENIX Sympo-
sium on Operating Systems Design and Implementa-
tion (OSDI 16), pages 65–80, Savannah, GA, November
2016. USENIX Association.
[16] Robert Grandl, Srikanth Kandula, Sriram Rao, Aditya
Akella, and Janardhan Kulkarni. GRAPHENE: Packing
and Dependency-Aware scheduling for Data-Parallel
In 12th USENIX Symposium on Operating
clusters.
Systems Design and Implementation (OSDI 16), pages
81–97, Savannah, GA, November 2016. USENIX Asso-
ciation.
[17] Juncheng Gu, Mosharaf Chowdhury, Kang G. Shin,
Yibo Zhu, Myeongjae Jeon, Junjie Qian, Hongqiang Liu,
and Chuanxiong Guo. Tiresias: A GPU cluster manager
for distributed deep learning. In 16th USENIX Sympo-
sium on Networked Systems Design and Implementation
(NSDI 19), pages 485–500, Boston, MA, February 2019.
USENIX Association.
[18] guidance ai. Guidance.
https://github.com/
guidance-ai/guidance, November 2023.
[19] Arpan Gujarati, Reza Karimi, Safya Alzayat, Wei Hao,
Antoine Kaufmann, Ymir Vigfusson, and Jonathan
Mace. Serving DNNs like clockwork: Performance
predictability from the bottom up. In 14th USENIX Sym-
posium on Operating Systems Design and Implementa-
tion (OSDI 20), pages 443–462. USENIX Association,
November 2020.
[20] Mingcong Han, Hanze Zhang, Rong Chen, and Haibo
Chen. Microsecond-scale preemption for concurrent
In 16th USENIX
GPU-accelerated DNN inferences.
Symposium on Operating Systems Design and Imple-
mentation (OSDI 22), pages 539–558, Carlsbad, CA,
July 2022. USENIX Association.
[21] Connor Holmes, Masahiro Tanaka, Michael Wyatt, Am-
mar Ahmad Awan, Jeff Rasley, Samyam Rajbhan-
dari, Reza Yazdani Aminabadi, Heyang Qin, Arash
Bakhtiari, Lev Kurilenko, et al. Deepspeed-fastgen:
High-throughput text generation for llms via mii and
deepspeed-inference. arXiv preprint arXiv:2401.08671,
2024.
[22] Sirui Hong, Xiawu Zheng, Jonathan Chen, Yuheng
Cheng, Ceyao Zhang, Zili Wang, Steven Ka Shing Yau,
Zijuan Lin, Liyang Zhou, Chenyu Ran, et al. Metagpt:
Meta programming for multi-agent collaborative frame-
work. arXiv preprint arXiv:2308.00352, 2023.
[23] Michael Isard, Mihai Budiu, Yuan Yu, Andrew Birrell,
and Dennis Fetterly. Dryad: Distributed data-parallel
programs from sequential building blocks. In Proceed-
ings of the 2nd ACM SIGOPS/EuroSys European Con-
ference on Computer Systems 2007, EuroSys ’07, page
59–72, New York, NY, USA, 2007. Association for Com-
puting Machinery.
[24] Suhas Jayaram Subramanya, Daiyaan Arfeen, Shouxu
Lin, Aurick Qiao, Zhihao Jia, and Gregory R. Ganger.
Sia: Heterogeneity-aware, goodput-optimized ml-cluster
scheduling. In Proceedings of the 29th Symposium on
Operating Systems Principles, SOSP ’23, page 642–657,
New York, NY, USA, 2023. Association for Computing
Machinery.
[25] Woosuk Kwon, Zhuohan Li, Siyuan Zhuang, Ying
Sheng, Lianmin Zheng, Cody Hao Yu, Joseph Gonzalez,
Hao Zhang, and Ion Stoica. Efficient memory man-
agement for large language model serving with page-
dattention. In Proceedings of the 29th Symposium on
Operating Systems Principles, SOSP ’23, page 611–626,
New York, NY, USA, 2023. Association for Computing
Machinery.
[26] Benjamin
Lefaudeux, Francisco Massa, Diana
Liskovich, Wenhan Xiong, Vittorio Caggiano, Sean
Naren, Min Xu, Jieru Hu, Marta Tintore, Susan Zhang,
Patrick Labatut, and Daniel Haziza. xformers: A modu-
lar and hackable transformer modelling library. https:
//github.com/facebookresearch/xformers,
2022.
[27] Yucheng Li. Unlocking context constraints of llms: En-
hancing context efficiency of llms with self-information-
based content filtering, 2023.
[28] Zhuohan Li, Lianmin Zheng, Yinmin Zhong, Vincent
Liu, Ying Sheng, Xin Jin, Yanping Huang, Zhifeng Chen,
Hao Zhang, Joseph E. Gonzalez, and Ion Stoica. Al-
paServe: Statistical multiplexing with model parallelism
In 17th USENIX Sympo-
for deep learning serving.
sium on Operating Systems Design and Implementa-
tion (OSDI 23), pages 663–679, Boston, MA, July 2023.
USENIX Association.
[29] Jerry Liu. LlamaIndex, November 2022.
[30] Ashraf Mahgoub, Karthick Shankar, Subrata Mitra,
Ana Klimovic, Somali Chaterji, and Saurabh Bagchi.
SONIC: Application-aware data passing for chained
serverless applications. In 2021 USENIX Annual Tech-
nical Conference (USENIX ATC 21), pages 285–301.
USENIX Association, July 2021.
[31] Ashraf Mahgoub, Edgardo Barsallo Yi, Karthick
Shankar, Sameh Elnikety, Somali Chaterji, and Saurabh
Bagchi. ORION and the three rights: Sizing, bundling,
and prewarming for serverless DAGs. In 16th USENIX
Symposium on Operating Systems Design and Imple-
mentation (OSDI 22), pages 303–320, Carlsbad, CA,
July 2022. USENIX Association.
[32] Microsoft. Bing chat. https://www.bing.com/chat,
Nov 2023.
[33] Microsoft. Meeting recap in microsoft teams. https://www.microsoft.com/en-us/microsoft-teams/premium, May 2023.
[34] Microsoft. Microsoft 365 copilot. https://www.microsoft.com/en-us/microsoft-365/enterprise/microsoft-365-copilot, Mar 2023.
[35] Microsoft. PromptFlow. https://github.com/microsoft/promptflow, November 2023.
[36] Microsoft. Semantic Kernel. https://github.com/
microsoft/semantic-kernel, November 2023.
[37] Deepak Narayanan, Keshav
Santhanam, Fiodar
Kazhamiaka, Amar Phanishayee, and Matei Zaharia.
Heterogeneity-Aware cluster scheduling policies for
deep learning workloads. In 14th USENIX Symposium
on Operating Systems Design and Implementation
(OSDI 20), pages 481–498. USENIX Association,
November 2020.
[38] Flemming Nielson, Hanne R Nielson, and Chris Hankin.
Principles of program analysis. Springer, 2015.
[39] Christopher Olston, Fangwei Li, Jeremiah Harmsen, Jor-
dan Soyke, Kiril Gorovoy, Li Lao, Noah Fiedel, Sukriti
Ramesh, and Vinu Rajashekhar. Tensorflow-serving:
Flexible, high-performance ml serving. In Workshop on
ML Systems at NIPS 2017, 2017.
[40] OpenAI. Chatgpt. https://chat.openai.com/, Nov
2023.
[41] OpenAI. Gpt-4 technical report, 2023.
[42] OpenAI.
Introducing gpts. https://openai.com/
blog/introducing-gpts, Nov 2023.
[43] OpenAI. OpenAI Triton.
https://github.com/
openai/triton, November 2023.
[44] OpenAI. Production best practices - openai api. https://platform.openai.com/docs/guides/production-best-practices/improving-latencies, Nov 2023.
[45] Adam Paszke, Sam Gross, Soumith Chintala, Gregory
Chanan, Edward Yang, Zachary DeVito, Zeming Lin,
Alban Desmaison, Luca Antiga, and Adam Lerer. Auto-
matic differentiation in pytorch. 2017.
[46] Pratyush Patel, Esha Choukse, Chaojie Zhang, Íñigo
Goiri, Aashaka Shah, Saeed Maleki, and Ricardo Bian-
chini. Splitwise: Efficient generative llm inference using
phase splitting. arXiv preprint arXiv:2311.18677, 2023.
[47] Chen Qian, Xin Cong, Cheng Yang, Weize Chen,
Yusheng Su, Juyuan Xu, Zhiyuan Liu, and Maosong
Sun. Communicative agents for software development.
arXiv preprint arXiv:2307.07924, 2023.
[48] Sebastián Ramírez. FastAPI. https://github.com/
tiangolo/fastapi.
[49] Mohammad Shoeybi, Mostofa Patwary, Raul Puri,
Patrick LeGresley, Jared Casper, and Bryan Catanzaro.
Megatron-lm: Training multi-billion parameter language
models using model parallelism. CoRR, abs/1909.08053,
2019.
[50] ShareGPT Team.
Sharegpt dataset.
https://
sharegpt.com/, Nov 2023.
[51] Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier
Martinet, Marie-Anne Lachaux, Timothée Lacroix, Bap-
tiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar,
Aurelien Rodriguez, Armand Joulin, Edouard Grave,
and Guillaume Lample. Llama: Open and efficient foun-
dation language models, 2023.
[52] Unknown.
Prompt of bing chat.
https:
//www.make-safe-ai.com/is-bing-chat-safe/
Prompts_Conversations.txt, Nov 2023.
[53] Thomas Wolf, Lysandre Debut, Victor Sanh, Julien
Chaumond, Clement Delangue, Anthony Moi, Perric
Cistac, Clara Ma, Yacine Jernite, Julien Plu, Canwen
Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame,
Quentin Lhoest, and Alexander M. Rush. Transformers:
State-of-the-Art Natural Language Processing. pages
38–45. Association for Computational Linguistics, Oc-
tober 2020.
[54] Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu,
Shaokun Zhang, Erkang Zhu, Beibin Li, Li Jiang, Xi-
aoyun Zhang, and Chi Wang. Autogen: Enabling next-
gen llm applications via multi-agent conversation frame-
work. arXiv preprint arXiv:2308.08155, 2023.
[55] Guangxuan Xiao, Yuandong Tian, Beidi Chen, Song
Han, and Mike Lewis. Efficient streaming language
models with attention sinks. arXiv, 2023.
[56] Gyeong-In Yu, Joo Seong Jeong, Geon-Woo Kim, Soo-
jeong Kim, and Byung-Gon Chun. Orca: A distributed
serving system for Transformer-Based generative mod-
els. In 16th USENIX Symposium on Operating Systems
Design and Implementation (OSDI 22), pages 521–538,
Carlsbad, CA, July 2022. USENIX Association.
[57] Matei Zaharia, Dhruba Borthakur, Joydeep Sen Sarma,
Khaled Elmeleegy, Scott Shenker, and Ion Stoica. Delay
scheduling: A simple technique for achieving locality
and fairness in cluster scheduling. In Proceedings of
the 5th European Conference on Computer Systems,
EuroSys ’10, page 265–278, New York, NY, USA, 2010.
Association for Computing Machinery.
[58] Matei Zaharia, Andy Konwinski, Anthony D Joseph,
Randy H Katz, and Ion Stoica. Improving mapreduce
In 8th
performance in heterogeneous environments.
USENIX Symposium on Operating Systems Design and
Implementation (OSDI 08), San Diego, CA, 2008.
[59] Hong Zhang, Yupeng Tang, Anurag Khandelwal, Jin-
grong Chen, and Ion Stoica. Caerus: NIMBLE task
scheduling for serverless analytics. In 18th USENIX
Symposium on Networked Systems Design and Imple-
mentation (NSDI 21), pages 653–669. USENIX Associ-
ation, April 2021.
[60] Susan Zhang, Stephen Roller, Naman Goyal, Mikel
Artetxe, Moya Chen, Shuohui Chen, Christopher De-
wan, Mona Diab, Xian Li, Xi Victoria Lin, Todor Mi-
haylov, Myle Ott, Sam Shleifer, Kurt Shuster, Daniel
Simig, Punit Singh Koura, Anjali Sridhar, Tianlu Wang,
and Luke Zettlemoyer. Opt: Open pre-trained trans-
former language models, 2022.
[61] Hanyu Zhao, Zhenhua Han, Zhi Yang, Quanlu Zhang,
Fan Yang, Lidong Zhou, Mao Yang, Francis C.M. Lau,
Yuqi Wang, Yifan Xiong, and Bin Wang. HiveD: Sharing
a GPU cluster for deep learning with guarantees. In 14th
USENIX Symposium on Operating Systems Design and
Implementation (OSDI 20), pages 515–532. USENIX
Association, November 2020.
[62] Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan
Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuo-
han Li, Dacheng Li, Eric. P Xing, Hao Zhang, Joseph E.
Gonzalez, and Ion Stoica. Judging llm-as-a-judge with
mt-bench and chatbot arena, 2023.
[63] Lianmin Zheng, Liangsheng Yin, Zhiqiang Xie, Jeff
Huang, Chuyue Sun, Cody Hao Yu, Shiyi Cao, Chris-
tos Kozyrakis, Ion Stoica, Joseph E. Gonzalez, Clark
Barrett, and Ying Sheng. Efficiently programming large
language models using sglang, 2023.
[64] Yinmin Zhong, Shengyu Liu, Junda Chen, Jianbo Hu,
Yibo Zhu, Xuanzhe Liu, Xin Jin, and Hao Zhang. Dist-
serve: Disaggregating prefill and decoding for goodput-
optimized large language model serving. arXiv preprint
arXiv:2401.09670, 2024.
|
ai_researcher | 1 | Should_ChatGPT_help_with_my_research_A_caution_against_artificial_intelligence_in_qualitative_analysis.pdf | 1
Can ChatGPT evaluate research quality?
Mike Thelwall: Information School, University of Sheffield, UK. https://orcid.org/0000-0001-
6065-205X
Purpose: Assess whether ChatGPT 4.0 is accurate enough to perform research evaluations on
journal articles to automate this time-consuming task.
Design/methodology/approach: Test the extent to which ChatGPT-4 can assess the quality
of journal articles using a case study of the published scoring guidelines of the UK Research
Excellence Framework (REF) 2021 to create a research evaluation ChatGPT. This was applied
to 51 of my own articles and compared against my own quality judgements.
Findings: ChatGPT-4 can produce plausible document summaries and quality evaluation
rationales that match the REF criteria. Its overall scores have weak correlations with my self-
evaluation scores of the same documents (averaging r=0.281 over 15 iterations, with 8 being
statistically significantly different from 0). In contrast, the average scores from the 15
iterations produced a statistically significant positive correlation of 0.509. Thus, averaging
scores from multiple ChatGPT-4 rounds seems more effective than individual scores. The
positive correlation may be due to ChatGPT being able to extract the author’s significance,
rigour, and originality claims from inside each paper. If my weakest articles are removed, then
the correlation with average scores (r=0.200) falls below statistical significance, suggesting
that ChatGPT struggles to make fine-grained evaluations.
Research limitations: The data is self-evaluations of a convenience sample of articles from
one academic in one field.
Practical implications: Overall, ChatGPT does not yet seem to be accurate enough to be
trusted for any formal or informal research quality evaluation tasks. Research evaluators,
including journal editors, should therefore take steps to control its use.
Originality/value: This is the first published attempt at post-publication expert review
accuracy testing for ChatGPT.
Keywords: ChatGPT, Large Language Models, LLM, Research Excellence Framework, REF
2021, research quality, research assessment
1 Introduction
Academic peer review of articles entails reading a complex document containing text and
perhaps also tables and images, and then judging its value. For journal peer review the results
might be a publishing recommendation and a list of corrections. After publication, a similar
evaluation by someone conducting a literature review might inform their decision about
whether and how to use the information in the article in future research. A similar evaluation
might also judge the overall quality of the article formally for a process like the UK’s Research
Excellence Framework (REF) national evaluation (www.ref.ac.uk) or the equivalents in Italy
(www.anvur.it/en/activities/vqr) and New Zealand (www.tec.govt.nz/funding/funding-and-
performance/funding/fund-finder/pbrf). A cut down review evaluation might also be used for
informal or less systematic evaluations, including for appointments, tenure and promotions.
The time-consuming nature of this task has led to the partial automation of some aspects by
journals, such as plagiarism checking (Memon, 2020), reviewer selection and assignment
(Zhao & Zhang, 2022) and statistics checking (Baker, 2016). In addition, there have been
attempts to more fully automate some types of peer review evaluation, such as by replacing
them with bibliometrics (Sivertsen, 2017) or artificial intelligence (Thelwall, et al., 2023). In
addition, ChatGPT can provide useful advice to peer reviewers about individual paper
evaluations (Liang et al., 2023). Despite these calls and applications, peer review remains a
labour-intensive task that consumes the time of academic experts.
The emergence of Large Language Models (LLMs) like ChatGPT (Wu et al., 2023) that
have shown new general-purpose text and image processing capabilities has created a new
possibility for research evaluation. LLMs work by processing enormous collections of
documents and learning layers of patterns in them to the extent that they are self-trained
grammar experts and highly capable at linguistics tasks like translation, sentiment analysis
and question answering (Kocoń et al., 2023). In addition, they can write short programs on
demand (Feng et al., 2023) and might also be useful for eliciting information or giving support
through chat-based dialog with patients (Cheng, et al., 2023). In education and wider
examination contexts, ChatGPT performs well at answering questions, including providing
answers that could pass university exams and attain professional qualifications (Nazir &
Wang, 2023). Overall, ChatGPT 3.5 and 4 seem to perform above the baseline but below the
state-of-the-art algorithms for natural language processing tasks. They are least accurate for
tasks involving understanding and for practical tasks (Kocoń et al., 2023). The main
advantages of LLMs may lie as being part of task pipelines (e.g., Wei et al., 2023) and their
ready availability for a wide range of tasks (Kocoń et al., 2023).
In theory, an LLM might replace human peer reviewers by judging academic article
quality, especially if given guidelines about how to perform the evaluation. Alternatively, they
might instead provide support to a human reviewer if the human took responsibility for the
final report (Hosseini & Horbach, 2023). Nevertheless, since LLMs can produce misleadingly
plausible incorrect (Nazir & Wang, 2023) or incomplete (Johnson et al., 2023) answers, careful
accuracy testing is needed.
This article assesses the extent to which ChatGPT-4 can estimate the quality of
academic journal articles using the REF 2021 quality criteria (REF, 2019ab). ChatGPT-4 was
chosen as apparently the most capable LLM at the time of writing. The REF 2021 quality
criteria are appropriate for this task because they are both public definitions of four quality
scores and guidelines about what to consider as aspects of quality in four different broad
areas of scholarship. This gives perhaps the clearest criteria for evaluating any type of
research quality. Whilst this conception of research quality is not appropriate for Global South
research, for pre-publication peer review, or for evaluations of field contributions, the results
may provide a starting point for investigating LLMs for these other types of evaluation. The following research
questions drive the study.
• RQ1: Can ChatGPT 4.0 understand the REF research quality evaluation task in the
sense of producing plausible outputs?
• RQ2: Does ChatGPT 4.0 allocate the full range of REF research quality scores?
• RQ3: Is ChatGPT 4.0 consistent in its REF quality scoring?
• RQ4: Is ChatGPT 4.0 accurate in its REF quality scoring?
• RQ5: Does averaging ChatGPT 4.0 scores improve its accuracy?
• RQ6: Can ChatGPT 4.0 scores distinguish between high-quality articles?
2 Background
This section describes ChatGPT and REF2021 to set the context for the methods and results.
2.1 LLMs and ChatGPT
A large language model (LLM) contains information about language that has been extracted
from a huge collection of text and is stored abstracted in a neural network format. This
information allows the model to accurately determine if new text is likely or not. Whilst
previous small scale linguistic models could determine sentence likelihood based on grammar
information and patterns (e.g., “The cat sat on.” is unlikely because “[noun phrase] sat on”
should be followed by something), large language models have ingested sufficient
information to also make fact-based determinations about grammatically correct sentences
(e.g., “The cat sat on the sea.” is unlikely because a sea cannot be sat on). The abstraction is
important because it allows the LLM to make determinations about text that it has not seen
before.
LLMs are currently built with the transformer neural network architecture, a type of
deep learning. When built, an LLM is “pre-trained” in the sense of having learned from
ingesting a huge amount of text. A Generative Pretrained Transformer (GPT) goes one step
further by generating likely text. It harnesses its LLM and, when fed some input text, predicts
what the next text could plausibly be. For this, it uses random parameters, so the text
generated is not always the same. Thus, if fed with “The cat sat on the”, it could easily guess
“mat” but if asked many times might occasionally produce different plausible answers, like
“lap” and “sofa”, but not “sea”. Much more impressively, an LLM could also complete large
sections of credible text, such as writing an entire thematically coherent poem with this
starting phrase.
The accuracy or usefulness of the output of a GPT can be improved by systematic large
scale human evaluation of its responses. The GPT can learn from this human feedback to
produce more consistently useful or correct results. Feedback can also help it learn to avoid
controversial or illegal responses. At the time of writing, the latest GPT from OpenAI was GPT
4.0 (openai.com/gpt-4, openai.com/research/gpt-4). Whilst general technical details about
GPT 4.0 are public (OpenAI, 2023), some details are retained as a commercial secret.
Essentially, though, each version of OpenAI’s GPT series seems to have more input data, a
larger network to abstract it, and more human feedback to fine tune it.
ChatGPT is a GPT from OpenAI that is optimised for a chat-like environment, where it
delivers responses to a series of human inputs (called “prompts”). It is general purpose, so
the goal of a chat could just as easily be to elicit fiction reading recommendations as to
identify a timeline of flash drive maximum capacities. ChatGPT therefore gives a mediated
public interface to the capabilities of the underlying GPT.
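The role of randomness in generation can be illustrated with a toy sketch in Python. The vocabulary, probabilities, and temperature below are invented for illustration only; a real GPT samples from a learned distribution over a vocabulary of many thousands of tokens.

import random

# Invented next-token distribution for the prompt "The cat sat on the".
next_token_probs = {"mat": 0.70, "lap": 0.20, "sofa": 0.099, "sea": 0.001}

def sample_next_token(probs, temperature=0.8):
    # Lower temperature concentrates probability on the most likely tokens;
    # higher temperature makes rarer continuations more likely to appear.
    weights = {tok: p ** (1.0 / temperature) for tok, p in probs.items()}
    total = sum(weights.values())
    threshold = random.uniform(0, total)
    cumulative = 0.0
    for tok, weight in weights.items():
        cumulative += weight
        if threshold <= cumulative:
            return tok
    return tok  # guard against floating-point rounding

# Repeated sampling mostly returns "mat", occasionally "lap" or "sofa", and almost never "sea".
print([sample_next_token(next_token_probs) for _ in range(10)])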
2.2 Research quality and UK REF2021
The purpose of academic research is broadly to advance the world body of knowledge and
understanding. The tangible outputs are usually journal articles, conference papers,
monographs, and book chapters but can also include more diverse entities like software,
datasets, compositions, and performances. For many different purposes, editors, reviewers,
funders, peers, and managers may attempt to assess the “quality” of documentary outputs.
Although there are many different definitions of research quality, partly reflecting
stakeholder goals, methodological rigour, novelty/originality, and impact on science or
society are usually included or explicitly stated as the three core components (Langfeldt et
al., 2020). In line with this, the UK REF definition of research quality revolves around rigour,
originality and significance.
The Research Excellence Framework in the UK is a periodic national assessment of
research processes, environments, and societal impacts in public universities and other
government funded research institutions. It succeeded the Research Assessment Exercise
(RAE), with iterations including REF2014, REF2021, and REF2029 (projected). The results are
primarily based on evaluating research outputs, with the scores used to direct the entire UK
block grant for research until the next iteration.
REF2021 is split into 34 Units of Assessment (UoAs), each of which corresponds to a
large academic field (e.g., UoA 8 Chemistry) or a collection of related fields (e.g., UoA 24 Sport
and Exercise Sciences, Leisure and Tourism). Institutions can choose how to split their work
between these UoAs. Each UoA has a team of assessors who are field experts. Most are full
professors although there are also some from outside academia. Collectively, there were over
1000 assessors for REF2021 evaluating 2.5 outputs on average per full time equivalent
academic at the source institution. The UoAs are grouped into four related sets: Main Panel
A (UoAs 1-6; mainly health and life sciences); Main Panel B (UoAs 7-12; mainly physical
sciences and engineering), Main Panel C (UoAs 13-24; mainly social sciences), and Main Panel
D (UoAs 25-34; mainly arts and humanities).
Each REF2021 journal article or other output was given a quality score on the following
scale: 4* (world-leading), 3* (internationally excellent), 2* (recognised internationally), or 1*
(recognised nationally). Explanations of these levels are given by each Main Panel, and these
are public (REF, 2019ab). A few outputs were also scored as 0 for being out of scope or low
quality. Each output is primarily scored by two assessors from the relevant UoA, who agree
on a score. In some UoAs (1-9, 11, 16) they may consult a narrow set of standardised
bibliometrics provided by the REF team, but these have little influence (Wilsdon et al., 2015)
so REF scores almost purely reflect expert judgement.
3 Methods
3.1 Article selection
I checked my recently published articles to obtain at least 50 open access articles of variable
quality. Fifty was chosen as a large but practical round number in the absence of any
expectation about correlations that could be used for statistical power calculations. I searched
from the present day backwards for articles that were open access and for which I had
retained copyright so that there would be no ambiguity about whether I could legally upload
them to ChatGPT. After this, I searched for articles from the same period (2019-2024) that I
had written but not published. These were either not submitted to a journal because I thought
them to be substandard or were submitted but rejected and I considered them to be not
worth resubmitting elsewhere. These were included to give lower quality articles so that the
collection had the full range of REF quality ratings. The final total was 51 articles.
My research has always been submitted to REF UoA 34 Communication, Cultural and
Media Studies, Library and Information Management. This Main Panel D area contains a mix
of social science and humanities approaches, but journal articles are still important for it.
Thus, I consider the 51 articles to be within the remit of UoA 34. They include articles about
scientometrics, gender, research evaluation, and social media analysis. All contain primary
research (rather than reviews) and would therefore be eligible for the REF.
3.2 Article scoring by my judgements
Before entering any of the articles into ChatGPT or any other LLM, I assigned each one a
quality score using the REF2021 quality criteria for Main Panel D. I am very familiar with these
criteria, not just as a UK academic and leader of UoA 34 submission to REF2021, but also from
spending six months developing traditional AI solutions for estimating REF2021 scores and
evaluating REF2021 score data (Thelwall et al., 2023a). I am also familiar with my own work
so consider myself to be in a good position to estimate its quality. Nevertheless, like most
academics, I probably tend to overestimate the quality of my own work. I therefore tried
to be conservative in my quality judgements and allocate them scores that I considered REF
assessors might give them. In cases where I valued an article highly, but it had been rejected
from at least one journal, I used this information to lower the score given. Thus, the final
scores reflect my own judgements, occasionally tempered by negative opinions from
reviewers. None of my scores were changed after seeing any ChatGPT results.
Using my own judgements as the core evidence in my own article is clearly not ideal
from a rigour perspective. Nevertheless, this strategy seems preferable to using others’
judgements because it takes a substantial amount of time and expertise to read and evaluate
academic work and so asking others to do this task would risk obtaining surface-level
judgements.
3.3 ChatGPT 4 REF D configuration and scores
ChatGPT 4 was subscribed to for this project (chat.openai.com). It allows custom ChatBots to
be created for specific tasks. The approach used was zero-shot learning in the sense that no
answers or feedback were provided to ChatGPT. All ChatGPT’s requests for feedback on its
answers, even if purely stylistic, were ignored to avoid any human input that could potentially
bias it.
A custom chatbot, ChatGPT 4 REF D, was created in December 2023 with the official
REF Main Panel D criteria, as used for UoA 34. The online public information for REF assessors
and submitting researchers was entered almost verbatim as the setup instructions. The
overall definitions of quality levels (1*, 2*, 3*, and 4*) and statements about the three
dimensions to be assessed (rigour, originality, and impact) were merged with the panel-
specific criteria. Small changes were made to the original REF text to align with the task (e.g.,
changing pronouns, deleting repeated text). The “unclassified” category was also removed
because this was very rare in the REF and removing it would simplify the already complex
instructions. This information was entered twice. It was entered in response to the setup
instructions for a new GPT, but much of the information entered was lost during this process,
so the missing information was subsequently added to the ChatGPT configuration section to
ensure that all information was available for the evaluations. The final version of the
configuration instructions is in the Appendix. This is the only ChatGPT used in the current
article and so the name is sometimes abbreviated to ChatGPT below for convenience.
For each article, ChatGPT 4 REF D was started, the PDF was uploaded, and it was given
the instruction, “score this”. Initially, all articles were scored by the same chatbot instance on
the basis that (a) each was uploaded separately and should therefore be allocated a separate
score and (b) processing multiple articles from a collection might help each chatbot to
calibrate its scoring level by comparing new articles with scores from its memory. This did not
work because the scores were very stable, with long sequences of the same number, strongly
suggesting that ChatGPT was giving a quality score to the current article partly based on all
articles previously uploaded in the same session. In response to this, a new chatbot was
started for each article and the issue of unchanging scores for different consecutive articles
stopped. These queries were submitted manually in January-February 2024.
The results were inspected in each case to check that the PDF had been scanned
correctly and for obvious errors and the presence of a score. Identity checking was possible
in almost all cases because the ChatGPT output usually started with a sentence containing the
article title, which it could only have extracted from the PDF. No problems with implausible
results (e.g., answering a different question) were found.
The overall score was extracted from the ChatGPT output text. In some cases, ChatGPT
4.0 REF D did not give an immediate clear score, but a score was always obtained through
follow up prompts. For example, the follow up prompt “Give an exact REF score” within the
same chat session gave an answer for the last two cases below. Thus, although an additional
prompt was occasionally needed, ChatGPT 4.0 was always able to produce a plausible
response to the request for a research quality score. The following problems occasionally
occurred.
• Reporting an error processing the uploaded PDF (always solved by re-uploading the
PDF, sometimes after a break if several attempts did not work).
• Reporting an error whilst it was writing the report. In such cases, the report was
retained if it contained the score but regenerated if not.
• Reporting that it could not decide between 3* and 4* (e.g., “The decision between 3*
and 4* would depend on further details about the broader impact and recognition of
the work within the international community, as well as additional evidence of its
influence on policy, practice, or subsequent research.” ChatGPT 4.0). In these cases,
the average was reported.
• Scoring originality, rigour, and significance separately without an overall score. In this
case, the mean of these three scores was recorded.
• Evaluating originality, rigour, and significance but not reporting a numerical score.
• Summarising the contents of the PDF without clearly evaluating originality, rigour, and
significance separately or giving a numerical score. It is possible that ChatGPT 4.0
triggered a stopping condition before producing a score in such cases, since the other
outputs tended to start with an article summary.
After scores had been obtained for all articles, the process was repeated fourteen times to
get additional batches of scores to obtain average scores for each of the 51 articles. A total of
15 repetitions was judged sufficient to obtain a reasonably reliable average estimate.
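For readers who want to reproduce a similar workflow, the sketch below shows roughly how the scoring loop could be automated. It is not the procedure used in this study, which uploaded PDFs by hand to a custom GPT in the ChatGPT web interface and extracted scores manually: the model name, the pypdf extraction step, and the simple regular expression for pulling out a star score are illustrative assumptions.

import re
from pypdf import PdfReader      # assumed library for extracting the article text
from openai import OpenAI       # assumed OpenAI Python client (v1-style API)

REF_INSTRUCTIONS = "..."         # the Main Panel D criteria reproduced in the Appendix

client = OpenAI()

def score_article(pdf_path, rounds=15, model="gpt-4"):
    """Request several independent scores for one article and return any star scores found."""
    text = "".join(page.extract_text() or "" for page in PdfReader(pdf_path).pages)
    scores = []
    for _ in range(rounds):
        # One request per round, with no shared history, mirrors starting a new chat per article.
        response = client.chat.completions.create(
            model=model,
            messages=[
                {"role": "system", "content": REF_INSTRUCTIONS},
                {"role": "user", "content": "score this\n\n" + text},
            ],
        )
        report = response.choices[0].message.content
        match = re.search(r"([1-4](?:\.\d+)?)\s*\*", report)   # e.g. "3*" or "3.5*"
        if match:
            scores.append(float(match.group(1)))
    return scores

In practice the reports varied in format, so automated score extraction would need the same kind of follow-up prompting and manual checking described above.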
3.4 Analyses
For RQ1 (Can ChatGPT 4.0 understand the REF research quality evaluation task in the sense
of producing plausible outputs?), I read and qualitatively examined the ChatGPT outputs for
whether they delivered an appropriate response to the task.
For RQ2 (Does ChatGPT 4.0 allocate the full range of REF research quality scores?), the
scores were summarised. Averages were also reported for additional detail.
For RQ3 (Is ChatGPT 4.0 consistent in its REF quality scoring?) the scores from each of
the 15 rounds of scoring were correlated against each other and the average correlation
calculated. Even though the data consists of ranks, Pearson correlations were used because
some of the scores were fractional and there are no extreme values. Whilst it is also
reasonable to argue that REF scores are not equidistant in the sense that the quality
difference between, for example, 1* and 2* might not be the same as the quality difference
between 3* and 4*, it seems more appropriate to make this assumption than to treat
fractional ranks as full ranks. The Pearson correlation assesses the extent to which two scores
form a linear relationship but not the extent to which they agree. For example, a perfect
correlation of 1 would occur if the REF scored all articles as 3* or 4* and ChatGPT scored all
REF 3* articles as 1* and all REF 4* articles as 2*. Thus, the correlation assesses the extent to
which the two processes recognise the same quality differences between articles but not the
extent to which they agree on the precise score.
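A minimal sketch of this consistency calculation, assuming a 15 x 51 array of scores (one row of article scores per scoring round; the values below are placeholders rather than study data):

from itertools import combinations
import numpy as np

# One row of 51 article scores per ChatGPT round (placeholder data).
rounds = np.random.uniform(2.0, 4.0, size=(15, 51))

pairwise = [np.corrcoef(a, b)[0, 1] for a, b in combinations(rounds, 2)]
print(len(pairwise))       # 105 pairs of rounds
print(np.mean(pairwise))   # average round-vs-round Pearson correlation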
For RQ4 (Is ChatGPT 4.0 accurate in its REF quality scoring?), the degree to which my
scores agree with the ChatGPT scores was tested in two ways. First, the two were correlated
using the Pearson correlation coefficient. Again, although REF scores are ranks, the
Pearson correlation is more appropriate because some of ChatGPT's outputs are
fractional. The Mean Absolute Difference (MAD) was calculated between my scores and
ChatGPT scores to assess the extent to which they differ. A MAD of 0 means that they agree
100% whereas a MAD of 1 would occur if ChatGPT differed from my score by 1 on average.
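Both accuracy measures take only a few lines of code; the score arrays here are placeholder values, not the study data:

import numpy as np
from scipy.stats import pearsonr

author_scores = np.array([2.0, 2.5, 3.0, 3.5, 4.0])    # placeholder author ratings
gpt_scores = np.array([3.0, 3.0, 3.0, 3.5, 3.0])        # placeholder ChatGPT ratings

r, p_value = pearsonr(author_scores, gpt_scores)        # strength of the linear relationship
mad = np.mean(np.abs(author_scores - gpt_scores))       # mean absolute difference (MAD)
print(round(r, 3), round(p_value, 3), round(mad, 3))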
For RQ5 (Does averaging ChatGPT 4.0 scores improve its accuracy?), the correlation
was calculated for the average of the 15 ChatGPT scores in the same way as for the original scores.
For RQ6 (Can ChatGPT 4.0 scores distinguish between high-quality articles?), the
evaluations were repeated for the 34 articles that I had scored as being 2.5* or higher and
also for the 24 articles that I had scored as being 3* or higher. In the REF, funding is only
allocated for articles scoring 3* or 4* so a threshold of 2.5* is reasonable on the basis that
such articles have a chance of getting a 3* score. Similarly, a threshold of 3* is also reasonable
since such articles are predicted to get funding.
4 Results
4.1 RQ1: Can ChatGPT 4.0 understand the REF research quality evaluation task
in the sense of producing plausible outputs?
ChatGPT 4.0 almost always appeared to understand the task in the sense of producing a
report that critically evaluated the originality, rigour, and significance of the uploaded article
and giving either a valid score or a score range. The only strange results that it produced were
(a) that in a few cases it illustrated a star score on a five-point scale (e.g., 3* was illustrated
as: ★★★☆☆) despite there being no 5* score, and (b) it sometimes reported fractional
scores, such as 3.5*, which is reasonable but does not conform to the REF guidelines.
The following illustrates the ChatGPT 4.0 REF D output evaluating one document with
the prompt, “score this”. The output formats varied considerably with the only nearly
common factor being that the first one to three paragraphs usually summarised the contents
of the article, with the remainder either describing or evaluating its rigour, significance, and
originality. Scores were usually given in the final paragraph.
[A two-paragraph summary of the article]
Based on the REF criteria for assessing research outputs, the article would be
evaluated under the following categories:
1. **Originality**: The study demonstrates originality through its comprehensive,
large-scale analysis of academic collaboration patterns across multiple countries and
disciplines. It engages with the complex problem of understanding the dynamics of
academic collaboration and its impact on citation rates, a topic of significant relevance
in the context of global research practices.
2. **Significance**: The research has a significant potential to influence understanding in
multiple fields, particularly in research policy and scholarly communication. It provides
valuable insights that can inform national and international strategies for research
collaboration and funding. The study’s findings challenge the uniformity of collaboration
practices across different regions and disciplines, highlighting the need for context-specific
approaches in research management and policy-making.
3. **Rigour**: The study exhibits intellectual coherence and integrity. It adopts a
robust and appropriate methodology, utilizing a large dataset and a well-justified
metric for citation impact analysis. The research design is methodologically sound, and
the authors provide a thorough analysis of their findings, acknowledging the
limitations of their study.
Based on these considerations, the article would be rated as follows:
- **4* (World-leading)**: While the study is methodologically sound and offers new
insights into the patterns of academic collaboration and citation impact, it may not
reach the highest standards of world-leading work
in terms of providing
transformative theoretical advances or developing entirely new methods or
paradigms. It offers substantial contributions to the field but may fall slightly short of
a pioneering breakthrough that would characterize a definitive 4* rating.
- **3* (Internationally excellent)**: The article most likely fits within this category. It
is of high quality, contributes important knowledge, and is likely to be influential at an
international level. The study is well-designed, applies a robust methodology, and
addresses a significant issue in academic research. It is likely to be an important point
of reference for future research in this area.
In conclusion, the article [title redacted] is assessed to be of 3* quality, indicating that
it is internationally excellent in terms of originality, significance, and rigour.
Output from ChatGPT 4.0 REF D
Despite each article being uploaded to ChatGPT 15 times, the reports were always different.
They varied moderately in overall structure and content, but the exact phrasing of expressions
was always novel.
4.2 RQ2: Does ChatGPT 4.0 allocate the full range of REF research quality
scores?
ChatGPT 4.0 REF D only ever allocated scores between 2* and 4*, never using the lowest score
of 1*. Over two thirds of the time it allocated a score of 3*, with lower scores only being given
2.5% of the time. My average score for these articles was 2.75* and the ChatGPT 4.0 REF D
average score was only slightly higher at 3*. Thus, ChatGPT 4.0 REF D seems to be slightly
biased towards higher scores, at least compared to my self-evaluations, and it is substantially
biased towards allocating a 3* score, irrespective of the merits of an article.
Table 1. The scores given by ChatGPT-4 REF D and me to 51 of my open access articles.
Score    GPT (n)   GPT %    Me (n)   Me %
1*             0     0.0%        2      4%
1.5*           0     0.0%        3      6%
2*            14     1.8%       12     24%
2.33*          1     0.1%        0      0%
2.5*           2     0.3%        9     18%
2.67*          2     0.3%        0      0%
2.75*          0     0.0%        1      2%
3*           509    66.5%        8     16%
3.33*          9     1.2%        0      0%
3.5*          14     1.8%        7     14%
3.67*         15     2.0%        0      0%
4*           199    26.0%        9     18%
Total        765   100.0%       51    100%
4.3 RQ3/4/5/6: Is ChatGPT 4.0 REF D consistent and accurate in its REF quality
scoring?
In terms of accuracy, the ChatGPT 4.0 REF D quality scores differed from my scores by 0.802
on average (mean absolute difference, MAD). When the ChatGPT 4.0 REF D quality scores are averaged across all 15
attempts, the MAD is the same at 0.802. Thus, ChatGPT is inaccurate.
Nevertheless, a high correlation is more important than accuracy because it would indicate
that the ChatGPT scores could be useful, if appropriately scaled.
For the complete set of 51 articles, the correlation between my scores and the average
ChatGPT-4 REF D scores (0.509) was positive and statistically significantly different from 0
(Table 2). This supports the hypothesis that ChatGPT has some capability to detect REF
research quality. Nevertheless, the correlation is only moderate, with ChatGPT being able to
account for only 25% (= 0.509²) of the variance in my scores. Moreover, the correlation is
lower and not statistically significant for both sets of higher quality articles. Thus, whilst
ChatGPT has some power for mixed quality sets of articles, its power is probably weaker for
more uniformly high-quality sets of articles.
Table 2. Pearson correlations for 51 of my open access articles, comparing my initial scores,
and scores from ChatGPT-4 REF D.
Correlation                                                         | All articles          | Articles scored 2.5+ by me | Articles scored 3+ by me
GPT average vs. author (95% CI)                                     | 0.509 (0.271, 0.688)  | 0.200 (-0.148, 0.504)      | 0.246 (-0.175, 0.590)
GPT vs. author, average of 15 pairs (fraction of 95% CIs excluding 0) | 0.281 (8/15)        | 0.102 (1/15)               | 0.128 (1/15)
GPT vs. GPT (average of 105 pairs)                                  | 0.245                 | 0.194                      | 0.215
Sample size (articles)                                              | 51                    | 34                         | 24
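The reported intervals are consistent with the standard Fisher z-transformation interval for a Pearson correlation, although the exact method is not stated in the text; a sketch of that calculation:

import numpy as np

def pearson_ci(r, n, z_crit=1.96):
    # 95% confidence interval for a Pearson correlation via the Fisher z-transformation.
    z = np.arctanh(r)
    se = 1.0 / np.sqrt(n - 3)
    return float(np.tanh(z - z_crit * se)), float(np.tanh(z + z_crit * se))

print(pearson_ci(0.509, 51))   # roughly (0.271, 0.688), matching the first column of Table 2
print(pearson_ci(0.200, 34))   # roughly (-0.148, 0.504)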
Despite the moderate correlation for all articles between my scores and the ChatGPT average,
some low-quality articles had high ChatGPT averages and vice versa (Figure 1). The graph is
consistent with ChatGPT being better able to distinguish 1*-2* articles from 2.5*-4*
articles than to distinguish between articles within those score ranges.
Figure 1. The average REF star rating given by the REF D GPT against the author’s prior
evaluation of the REF score of 51 of his open access articles.
ChatGPT gave at least two different scores to 50 out of the 51 articles, with the remaining
article being scored as 3* all 15 times (article 11 in Figure 2). Five of the 51 articles were given
all three of the main scores (2*, 3* and 4*) in different rounds by ChatGPT illustrating that it
is scoring inconsistently. The inconsistency of the scores between rounds is also evident in
the correlations between different rounds of ChatGPT being about the same as the
correlation between individual rounds of ChatGPT and my scores, and much lower than the
correlations between average ChatGPT scores and my scores, at least for the full dataset of
51 articles (Table 2). This suggests that the averaging strategy is better than using individual
ChatGPT rounds.
Figure 2. The range of REF star ratings given by the REF D GPT against the author’s prior
evaluation of the REF score of 51 of his open access articles. The area of each bubble is
proportional to the number of times the y axis score was given by ChatGPT to the x axis article.
My REF scores are marked on the x axis.
5 Discussion
5.1 Limitations and alternatives
This study has major limitations. The articles evaluated are from a single author and
disciplinary area and most of the results are based on my self-evaluations of the quality of
these articles. It is possible that a greater rate of agreement could have been obtained if the
scores had been given by REF judges instead. Moreover, higher correlations might also have
been obtained from different configurations of ChatGPT or other LLMs than the ChatGPT 4
REF D configuration used here. Because there were too few articles to create separate
development and evaluation subsets of articles, it was not practical to experiment with
different configurations or prompt chains to find one that gave higher correlations.
Nevertheless, ChatGPT seemed to follow the REF rules well, giving no indication that it was
doing anything inappropriate or sub-optimal.
Another limitation is that LLMs are evolving rapidly, and more accurate results may be
obtained in the future from upgraded systems. More generally, the REF quality definition is
not the only one and ChatGPT may work better on other versions. Finally, as mentioned
above, standards varied between UoAs within a Main Panel (Thelwall et al., 2023a), and this
was not considered by the instructions.
5.2 Comparison with prior research
There is no comparable study, with the partial exception of a traditional machine learning
approach that used journal, citation, and authorship data to estimate REF scores. If only the
above-2* set here is considered, then the results of the current paper would be comparable with the
prior results for UoA 34 (Thelwall et al., 2023a). The discussion of the contents of the reports
also agrees with prior research that ChatGPT can provide useful advice to peer reviewers about
individual paper evaluations (Liang et al., 2023). There is agreement in the sense that the ChatGPT
output here generally gave correct and meaningful information about the articles’ rigour,
originality, and significance. More generally, the results of this study also confirm prior
observations that ChatGPT can generate output that is plausible but inaccurate (Nazir &
Wang, 2023).
5.3 Potential applications
The current article used a “zero-shot” approach by not feeding ChatGPT with any “correct”
scores to learn from. Although articles are unique and diverse, ChatGPT’s performance might
be improved with reference to example scores. A previous machine learning study that used
citation data and metadata as inputs (but not full text) was able to make predictions that had
high correlations with REF scores in some UoAs (mainly health, life, and physical sciences)
(Thelwall et al., 2023a), so ChatGPT does not seem like a realistic alternative to this traditional
approach in these areas.
It is not clear whether ChatGPT could augment the traditional machine learning
approach, for example by providing score predictions for articles where the machine learning
approach reports low confidence in its score, or for UoAs where the traditional approach does not
work at all. ChatGPT might also be useful for curating inputs to a machine learning model, by
extracting useful information like the number of figures and tables, although other software
could also do this.
5.4 Potential threats
ChatGPT’s ability to produce plausible complex written quality evaluations of academic
research despite little capacity to detect quality (at least as found in the current experiments)
is a threat to peer review. This is because reviewers might try to save time by uploading
documents to ChatGPT (probably in breach of copyright) and trust the output because of its
plausibility. Thus, LLM use should be explicitly banned or controlled by journals, funders,
universities, and other organisations that evaluate research (see also: Garcia, 2024). Explicit
rules are already common for journals (Perkins & Roe, 2024) but the current study emphasises
their importance and the potential for ChatGPT output to be plausible but misleading. The
results also support a previous call for journals to actively detect whether reviewers have used
generative AI for their evaluation (Mollaki, 2024).
5.5 Reason for positive correlations
I read and compared the ChatGPT reports to try to detect how ChatGPT evaluated rigour,
significance, and originality, with the goal of understanding why it had some ability to detect
an article’s quality. In all cases it seemed to primarily extract originality, significance and
rigour claims from inside the article rather than by applying externally obtained information
to make judgements. The results are therefore consistent with ChatGPT having the ability to
translate an author’s information about strengths and weaknesses into a quality judgement.
It sometimes brought in wider information to make a claim about the potential significance
or reach of an article, suggesting that it might be applying some ability to generalise.
To test this, I created a fake article and uploaded it to ChatGPT 4.0 REF D for a score.
The article was titled, “Do squirrel surgeons generate more citation impact?” and it was based
on a short article that had been rejected from a journal and that I did not resubmit elsewhere
(not one of the 51 evaluated), but that I would have scored as 1.5*. I changed two words
throughout the article to make it a comparison between humans and squirrels for surgery
research to test whether ChatGPT could detect that this research would have no significance
(or that its data was fake). It allocated it a 4* score, however, justifying it with, “The study
stands out for its innovative approach, potential influence on scholarly thought and policy,
and rigorous methodology.” The report also made clearly false claims that it had uncritically
derived from the paper, “By highlighting species-based differences in citation impact, the
research could contribute to broader discussions on diversity and representation in
academia.” I asked ChatGPT 4.0 separately, “can squirrels write academic research journal
articles?” and it gave a definitive reply, “No, squirrels cannot write academic research journal
articles. Squirrels are animals without the cognitive capabilities necessary for complex tasks
like academic writing.” Thus, it had ingested the information necessary to draw an
appropriate conclusion but had not applied it to the fake article.
Whilst this is a single case and fake research rather than poor quality research, it partly
undermines my initial hypothesis that ChatGPT could harness its wider information to
estimate the significance of an article. It seems that ChatGPT can’t reliably do this.
6 Conclusion
The results suggest that ChatGPT 4.0 can write plausible REF reviews of journal articles and
has a weak capacity to estimate REF scores, but that this is probably due to an ability to
differentiate between research that is and isn’t high quality (above 2* in REF terms). The most
accurate way to use ChatGPT for quality scores seems to be to apply it multiple times and
then use the average score. Norm referencing and scaling will also be needed because it may
have a strong tendency to assign a default score (e.g., 3*) to most articles. Its evaluative
reports are primarily derived from the article itself in terms of information about significance,
rigour, and originality. It is not clear why it can score articles with some degree of accuracy,
but it might typically deduce quality from author claims inside an article rather than by primarily
applying external information.
In terms of practical advice, it would be unethical and may breach copyright for a
reviewer to use a public LLM like ChatGPT to help review a document that was not already in
the public domain (Buriak et al., 2023; Flanagin et al., 2023). Moreover, even published
documents that are not open access may be legally problematic to upload, so it seems that
ChatGPT should be avoided for all research evaluation purposes until the copyright situation
is clarified or explicit permission is obtained from the copyright holder first and an effective
prompt engineering strategy is developed and validated. When LLM use is ethical and does
not breach copyright, the most important immediate conclusion is that ChatGPT’s output can
be misleading, and it should be avoided by researchers, editors, reviewers, literature review
authors, and evaluators attempting to make quality judgements of articles unless an
improved prompt engineering strategy can be developed or the existing strategy becomes
more effective on newer LLMs.
7 References
Baker, M. (2016). Stat-checking software stirs up psychology. Nature, 540(7631), 151-152.
Buriak, J. M., Hersam, M. C., & Kamat, P. V. (2023). Can ChatGPT and Other AI Bots Serve as
Peer Reviewers? ACS Energy Letters, 9, 191-192.
Cheng, S. W., Chang, C. W., Chang, W. J., Wang, H. W., Liang, C. S., Kishimoto, T., & Su, K. P.
(2023). The now and future of ChatGPT and GPT in psychiatry. Psychiatry and Clinical
Neurosciences, 77(11), 592-596.
Feng, Y., Vanam, S., Cherukupally, M., Zheng, W., Qiu, M., & Chen, H. (2023). Investigating
Code Generation Performance of Chat-GPT with Crowdsourcing Social Data.
In Proceedings of the 47th IEEE Computer Software and Applications Conference (pp. 1-
10).
Flanagin, A., Kendall-Taylor, J., & Bibbins-Domingo, K. (2023). Guidance for authors, peer
reviewers, and editors on use of AI, language models, and chatbots. JAMA.
https://doi.org/10.1001/jama.2023.12500
Garcia, M. B. (2024). Using AI tools in writing peer review reports: should academic journals
embrace the use of ChatGPT? Annals of biomedical engineering, 52, 139-140.
Gov.uk (2023). Guidance: Exceptions to copyright. https://www.gov.uk/guidance/exceptions-
to-copyright
Hosseini, M., & Horbach, S. P. (2023). Fighting reviewer fatigue or amplifying bias?
Considerations and recommendations for use of ChatGPT and other Large Language
Models in scholarly peer review. Research Integrity and Peer Review, 8(1), 4.
https://doi.org/10.1186/s41073-023-00133-5
Huang, J., & Tan, M. (2023). The role of ChatGPT in scientific communication: writing better
scientific review articles. American Journal of Cancer Research, 13(4), 1148.
Johnson, D., Goodman, R., Patrinely, J., Stone, C., Zimmerman, E., Donald, R., & Wheless, L.
(2023). Assessing the accuracy and reliability of AI-generated medical responses: an
evaluation of the Chat-GPT model. PubMed.
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10002821/
Kocoń, J., Cichecki, I., Kaszyca, O., Kochanek, M., Szydło, D., Baran, J., & Kazienko, P. (2023).
ChatGPT: Jack of all trades, master of none. Information Fusion, 101861.
Langfeldt, L., Nedeva, M., Sörlin, S., & Thomas, D. A. (2020). Co-existing notions of research
quality: A framework to study context-specific understandings of good research.
Minerva, 58(1), 115-137.
Liang, W., Zhang, Y., Cao, H., Wang, B., Ding, D., Yang, X., & Zou, J. (2023). Can large language
models provide useful feedback on research papers? A large-scale empirical analysis.
arXiv preprint arXiv:2310.01783
Memon, A. R. (2020). Similarity and plagiarism in scholarly journal submissions: bringing
clarity to the concept for authors, reviewers and editors. Journal of Korean medical
science, 35(27). https://synapse.koreamed.org/articles/1146064
Mollaki, V. (2024). Death of a reviewer or death of peer review integrity? the challenges of
using AI tools in peer reviewing and the need to go beyond publishing policies. Research
Ethics, 17470161231224552.
Nazir, A., & Wang, Z. (2023). A Comprehensive Survey of ChatGPT: Advancements,
Applications, Prospects, and Challenges. Meta-radiology, 100022.
OpenAI (2023). GPT-4 technical report. https://arxiv.org/abs/2303.08774
Perkins, M., & Roe, J. (2024). Academic publisher guidelines on AI usage: A ChatGPT supported
thematic analysis. F1000Research, 12, 1398.
REF (2019a). Guidance on submissions (2019/01). https://archive.ref.ac.uk/publications-and-
reports/guidance-on-submissions-201901/
REF (2019b). Panel criteria and working methods (2019/02).
https://archive.ref.ac.uk/publications-and-reports/panel-criteria-and-working-methods-201902/
Sivertsen, G. (2017). Unique, but still best practice? The Research Excellence Framework (REF)
from an international perspective. Palgrave Communications, 3(1), 1-6.
Thelwall, M., Kousha, K., Wilson, P., Makita, M., Abdoli, M., Stuart, E., Levitt, J. & Cancellieri,
M. (2023a). Predicting article quality scores with machine learning: The UK Research
Excellence Framework. Quantitative Science Studies, 4(2), 547-573.
Thelwall, M., Kousha, K., Stuart, E., Makita, M., Abdoli, M., Wilson, P. & Levitt, J. (2023b). Does
the perceived quality of interdisciplinary research vary between fields? Journal of
Documentation. 79(6), 1514-1531. https://doi.org/10.1108/JD-01-2023-0012
Wei, X., Cui, X., Cheng, N., Wang, X., Zhang, X., Huang, S., & Han, W. (2023). Zero-shot
information extraction via chatting with chatgpt. arXiv preprint arXiv:2302.10205.
Wilsdon, J., Allen, L., Belfiore, E., Campbell, P., Curry, S., Hill, S., et al. (2015). The metric tide.
Report of the independent review of the role of metrics in research assessment and
management. https://www.ukri.org/publications/review-of-metrics-in-research-assessment-and-management/
Wu, T., He, S., Liu, J., Sun, S., Liu, K., Han, Q. L., & Tang, Y. (2023). A brief overview of ChatGPT:
The history, status quo and potential future development. IEEE/CAA Journal of
Automatica Sinica, 10(5), 1122-1136.
Zhao, X., & Zhang, Y. (2022). Reviewer assignment algorithms for peer review automation: A
survey. Information Processing & Management, 59(5), 103028.
8 Appendix: ChatGPT configuration
The configuration reported below largely quotes and uses small paraphrases of text from REF
documentation (REF, 2019ab). Breaking academic conventions about plagiarism, these are
not in quotes because the quotes might confuse ChatGPT.
8.1 ChatGPT-4 REF D configuration instructions
REF Assessor for Main Panel D employs an academic tone, prioritizing precision, formality,
and clarity in its analyses. It avoids casual language, overly simplistic explanations, and
subjective judgments not grounded in REF criteria. It focuses on providing objective,
evidence-based assessments, maintaining the integrity and seriousness expected in
academic evaluations. The GPT's interactions are guided by the principles of scholarly
communication, ensuring that every assessment aligns with academic standards of
originality, significance, and rigour.
Originality will be understood as the extent to which the output makes an important and
innovative contribution to understanding and knowledge in the field. Research outputs
that demonstrate originality may do one or more of the following: produce and
interpret new empirical findings or new material; engage with new and/or complex
problems; develop
innovative research methods, methodologies and analytical
techniques; show imaginative and creative scope; provide new arguments and/or new
forms of expression, formal innovations, interpretations and/or insights; collect and
engage with novel types of data; and/or advance theory or the analysis of doctrine,
policy or practice, and new forms of expression.
Significance will be understood as the extent to which the work has influenced, or has the
capacity to influence, knowledge and scholarly thought, or the development and
understanding of policy and/or practice.
Rigour will be understood as the extent to which the work demonstrates intellectual
coherence and integrity, and adopts robust and appropriate concepts, analyses,
sources, theories and/or methodologies.
The scoring system used is 1*, 2*, 3* or 4*, which are defined as follows.
4*: Quality that is world-leading in terms of originality, significance and rigour.
3*: Quality that is internationally excellent in terms of originality, significance and rigour but
which falls short of the highest standards of excellence.
2*: Quality that is recognised internationally in terms of originality, significance and rigour.
1* Quality that is recognised nationally in terms of originality, significance and rigour.
The terms ‘world-leading’, ‘international’ and ‘national’ will be taken as quality benchmarks
within the generic definitions of the quality levels. They will relate to the actual, likely
or deserved influence of the work, whether in the UK, a particular country or region
outside the UK, or on international audiences more broadly. There will be no
assumption of any necessary international exposure in terms of publication or
reception, or any necessary research content in terms of topic or approach. Nor will
there be an assumption that work published in a language other than English or Welsh
is necessarily of a quality that is or is not internationally benchmarked.
In assessing outputs, look for evidence of originality, significance and rigour and apply the
generic definitions of the starred quality levels as follows:
In assessing work as being 4* (quality that is world-leading in terms of originality, significance
and rigour), expect to see evidence of, or potential for, some of the following types of
characteristics across and possibly beyond its area/field:
• a primary or essential point of reference
• of profound influence
• instrumental in developing new thinking, practices, paradigms, policies or audiences
• a major expansion of the range and the depth of research and its application
• outstandingly novel, innovative and/or creative.
In assessing work as being 3* (quality that is internationally excellent in terms of originality,
significance and rigour but which falls short of the highest standards of excellence), expect
to see evidence of, or potential for, some of the following types of characteristics across and
possibly beyond its area/field:
• an important point of reference
• of considerable influence
• a catalyst for, or important contribution to, new thinking, practices, paradigms, policies or audiences
• a significant expansion of the range and the depth of research and its application
• significantly novel or innovative or creative.
In assessing work as being 2* (quality that is recognised internationally in terms of originality,
significance and rigour), expect to see evidence of, or potential for, some of the following
types of characteristics across and possibly beyond its area/field:
• a recognised point of reference
• of some influence
• an incremental and cumulative advance on thinking, practices, paradigms, policies or audiences
• a useful contribution to the range or depth of research and its application.
In assessing work as being 1* (quality that is recognised nationally in terms of originality,
significance and rigour), expect to see evidence of the following characteristics within its
area/field:
• an identifiable contribution to understanding without advancing existing paradigms of enquiry or practice
• of minor influence.
|
ai_researcher | 1 | Explain_ability_and_interpretability_in_machine_learning_models.pdf | Assessing the Local Interpretability of Machine Learning Models
Dylan Slack,1 Sorelle A. Friedler,1 Carlos Scheidegger,2 Chitradeep Dutta Roy3
1Haverford College
2University of Arizona
3University of Utah
arXiv:1902.03501v2 [cs.LG] 2 Aug 2019
Abstract
The increasing adoption of machine learning tools has led to
calls for accountability via model interpretability. But what
does it mean for a machine learning model to be interpretable
by humans, and how can this be assessed? We focus on two
definitions of interpretability that have been introduced in the
machine learning literature: simulatability (a user’s ability to
run a model on a given input) and “what if” local explain-
ability (a user’s ability to correctly determine a model’s pre-
diction under local changes to the input, given knowledge of
the model’s original prediction). Through a user study with
1000 participants, we test whether humans perform well on
tasks that mimic the definitions of simulatability and “what
if” local explainability on models that are typically consid-
ered locally interpretable. To track the relative interpretabil-
ity of models, we employ a simple metric, the runtime opera-
tion count on the simulatability task. We find evidence that as
the number of operations increases, participant accuracy on
the local interpretability tasks decreases. In addition, this ev-
idence is consistent with the common intuition that decision
trees and logistic regression models are interpretable and are
more interpretable than neural networks.
Introduction
Recently, there has been growing interest in interpreting ma-
chine learning models. The goal of interpretable machine
learning is to allow oversight and understanding of machine-
learned decisions. Much of the work in interpretable ma-
chine learning has come in the form of devising methods to
better explain the predictions of machine learning models.
However, such work usually leaves a noticeable gap in un-
derstanding interpretability (Lipton 2018; Doshi-Velez and
Kim 2017). The field currently stands on shaky foundations:
papers mean different things when they use the word “in-
terpretability”, and interpretability claims are typically not
validated by measuring human performance on a controlled
task. However, there is growing recognition in the merit
of such human validated assessments (Lage et al. 2018b;
Lage et al. 2018a; Lakkaraju, Bach, and Leskovec 2016).
In line with this goal, we seek concrete, falsifiable notions
of interpretability.
Copyright © 2019, Association for the Advancement of Artificial
Intelligence (www.aaai.org). All rights reserved.
“Interpretability” can be broadly divided into global inter-
pretability, meaning understanding the entirety of a trained
model including all decision paths, and local interpretabil-
ity, the goal of understanding the results of a trained model
on a specific input and small deviations from that input. We
focus on local interpretability, and on two specific defini-
tions. We assess simulatability (Lipton 2018), the ability of
a person to—independently of a computer—run a model
and get the correct output for a given input, and “what
if” local explainability (Ribeiro, Singh, and Guestrin 2016;
Lipton 2018): the ability of a person to correctly determine
how small changes to a given input affect the model out-
put. We will refer to a model as locally interpretable if users
are able to correctly perform both of these tasks when given
a model and input. The experiments we present here are
necessarily artificial and limited in scope. We see these as
lower bounds on the local interpretability of a model; if peo-
ple cannot perform these interpretability tasks, these models
should not be deemed locally interpretable.
In addition to considering the successful completion of
these tasks a lower bound on the local interpretability of
a model, we might reasonably ask whether these are valu-
able interpretability tasks at all. Though purposefully lim-
ited in scope, we argue that these tasks are still valuable in
real-world settings. Consider a defense attorney faced with a
client’s resulting score generated by a machine learned risk
assessment. In order to properly defend their client, the attor-
ney may want to verify that the risk score was correctly cal-
culated (simulatability) and argue about the extent to which
small changes in features about their client could change the
calculated score (local explainability). Despite being simple
interpretability tasks, successfully completing them is im-
portant to the attorney’s ability to defend their client from
potential errors or issues with the risk assessment.
We assessed the simulatability and “what if” local ex-
plainability of decision trees, logistic regressions, and neu-
ral networks through a crowdsourced user study using Pro-
lific (Prolific 2014). We asked 1,000 participants to simulate
the model on a given input and anticipate the outcome on
a slightly modified version of the input. We measured user
accuracy and completion time over varied datasets, inputs,
and model types (described in detail in the User Study De-
sign section). The results are consistent with the folk hy-
potheses (Lipton 2018) that decision trees and logistic re-
gression models are locally interpretable and are more lo-
cally interpretable than neural networks given the particular
model representations, datasets, and user inputs used in the
study.
As has been previously observed (Lipton 2018), it may
be the case that a small neural network is more interpretable
than a very large decision tree. To begin to answer questions
surrounding cross-model comparisons and generalizations
of these results to models not studied here, we investigated
a measure for its suitability as a proxy for the users’ ability
to correctly perform both the simulation and “what if” lo-
cal explainability tasks. We hypothesized that the number of
program operations performed by an execution trace of the
model on a given input would be a good proxy for the time
and accuracy of users’ attempts to locally interpret the model
under both definitions; specifically, that as the total number
of operations increased, the time taken would increase and
the accuracy on the combined task would decrease.
Analyzing the results of this study, we find evidence that
as the number of total operations performed by the model
increases, the time taken by the user increases and their ac-
curacy on the combined local interpretability task decreases.
We anticipated that as the number of operations increases,
the model would become uninterpretable because all users
are eventually expected to make a mistake simulating a very
large model. The operation count at which the users cannot
locally interpret a model can be considered an upper bound
limit to the interpretability of the model. Users reached this
upper bound when simulating the largest neural network
sizes we considered. We see this work as a first step in a
more nuanced understanding of the users’ experience of in-
terpretable machine learning.
Related Work
Work on the human interpretability of machine learning
models began as early as Breiman’s study of random forests
(Breiman 2001). Since then, many approaches to the in-
terpretability of machine learning models have been con-
sidered, including the development of new globally inter-
pretable models (Ustun and Rudin 2016), post-hoc local ex-
planations (Ribeiro, Singh, and Guestrin 2016) and visual-
izations (Olah et al. 2018), and post-hoc measurement of the
global importance of different features (Henelius et al. 2014;
Datta, Sen, and Zick 2016; Adler et al. 2018). We re-
fer the interested reader to Molnar and Guidotti et al. for a
more detailed discussion of these methods (Molnar 2018;
Guidotti et al. 2018).
Some of the recent activity on interpretability has been
prompted by Europe’s General Data Protection Regulation
(GDPR). A legal discussion of the meaning of the regula-
tion with respect to interpretability is ongoing. Initially, the
GDPR regulations were described as providing a “right to
an explanation” (Goodman and Flaxman 2016), although
subsequent work challenges that claim (Wachter, Mittel-
stadt, and Floridi 2017), supporting a more nuanced right
to “meaningful information” about any automated decision
impacting a user (Selbst and Powles 2017). Exactly what
is meant by interpretability to support the GDPR and in a
broader legal context remains in active discussion (Selbst
and Barocas 2018).
The uncertainty around the meaning of “interpretability”
has prompted calls for more precise definitions and carefully
delineated goals (Lipton 2018). One thought-provoking pa-
per makes the case for a research agenda in interpretabil-
ity driven by user studies and formalized metrics that can
serve as validated proxies for user understanding (Doshi-
Velez and Kim 2017). Doshi-Velez and Kim argue that hu-
man evaluation of the interpretability of a method in its spe-
cific application context is the pinnacle of an interpretability
research hierarchy followed by human evaluation of inter-
pretability on a simplified or synthetic task and analysis of
proxy tasks without associated user studies. In order to per-
form interpretability analysis without user studies, they ar-
gue, it is necessary to first assess proxies for user behavior.
Here, we propose one such metric and assess its suitability
as a proxy for the local interpretability of a model.
Although we are unaware of existing metrics for the lo-
cal interpretability of a general model, many measures de-
veloped by the program analysis community aim at assess-
ing the understandability of a general program, which could
be seen as metrics for global interpretability. For example,
the cyclomatic complexity counts the number of independent
paths through a program using its control flow graph (Mc-
Cabe 1976). Metrics for specific model types have also been
developed. Lage et al. (Lage et al. 2018a) investigate how
different measures of complexity in decision sets affect ac-
curacy and response time on tasks consisting of simulatabil-
ity, verification, and counterfactual-reasoning. Via six differ-
ent user studies of 150 people (for a total of 900 participants)
they find that increased complexity in decision set logic re-
sults in increased response time but do not find a signifi-
cant connection with accuracy. They measure decision set
complexity as a combination between the explanation size,
clauses in the disjunctive normal form of the input (called
cognitive chunks), and number of repeated input conditions
to the decision set. Their work is specific to decision sets and
does not generalize to other model types.
There have also been experimentally grounded assess-
ments of model properties related to (but different from) in-
terpretability. Poursabzi-Sangdeh et al. (Poursabzi-Sangdeh
et al. 2017) consider the impact of model attributes (e.g.
black-box vs. clear) on user trust, simulatability, and mis-
take detection using randomized user studies on a similar
scale to what we will consider here. They find that clear
models (models where the inner calculations are displayed
to the user) are best simulated. Allahyari et al. (Allahyari and
Lavesson 2011) measure the perceived relative understand-
ability of decision trees and rule-based models and find de-
cision trees are seen as more understandable than rule-based
models.
Other methods are concerned with human in the loop opti-
mization of the interpretability of machine learning models.
Lage et. al. (Lage et al. 2018b) develop a method that op-
timizes models for both interpretability and accuracy by in-
cluding user studies in the optimization loop. Their method
minimizes the number of user studies needed to generate
models that are both interpretable and accurate. They per-
form experiments on optimizing decision trees and find that
the proxy interpretability metric optimized by the model
(e.g. number of nodes, mean path length) varies based on
dataset.
A Metric for Local Interpretability
Figure 1: A decision tree where the answer when run on the
input (a = −80, b = 200) is shown circled in blue and the
result of running the same model on the input (a = −64, b =
115) is shown circled in red.
Motivated by the previous literature and its calls for user-
validated metrics that capture aspects of interpretability, we
wish to assess whether a candidate metric captures a user’s
ability to simulate and “what if” locally explain a model.
The candidate metric we consider here is the total number
of runtime operation counts performed by the model when
run on a given input. We consider two basic variants of op-
erations, arithmetic and boolean, and track their totals sepa-
rately. Effectively, we seek a proxy for the work that a user
must do (in their head or via a calculator) in order to simu-
late a model on a given input, and will claim that the total
number of operations also impacts a user’s ability to perform
a “what if” local explanation of a model.
An Example
As an example of how this metric would work, consider the
visualization of a decision tree in Figure 1. The result of run-
ning the model on the input (a = −80, b = 200) is shown
circled in blue and the result of running the same model on
the input (a = −64, b = 115) is shown circled in red. The
red answer is at a depth of 10 in the decision tree while the
blue answer is at a depth of 5. Counting the operations that
the model takes to run on the input (including each boolean
comparison operation or memory access required, which we
count as an arithmetic operation) gives the total number of
runtime operations - our candidate metric. Using the be-
low methodology to count these operations, to determine the
number of runtime operations executed when evaluating the
decision tree model on the inputs from the example above,
(blue: a = −80, b = 200 and red: a = −64, b = 115), the
blue input is found to require 17 total operations (6 opera-
tions are arithmetic and 11 are boolean) while the red input
requires 32 total operations (11 arithmetic and 21 boolean).
Essentially, at each branch point one arithmetic operation is
performed to do a memory access, one boolean operation is
performed to check if the node is a leaf node, and one more
boolean operation is performed for the branching operation.
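To make the counting rule concrete, the following sketch applies it to a small hand-coded decision tree stored in parallel arrays (the layout scikit-learn uses internally). The node structure, thresholds, and inputs are illustrative placeholders rather than the tree from Figure 1; the counting rule follows the description above, with one arithmetic operation per memory access and one boolean operation each for the leaf check and the branching comparison.

# Illustrative sketch: counting operations while traversing a small decision tree.
# Node arrays mimic scikit-learn's internal layout; the values are placeholders.
feature = [0, 1, -2, 1, -2, -2, -2]        # feature tested at each node (-2 marks a leaf)
threshold = [-70.0, 150.0, 0.0, 120.0, 0.0, 0.0, 0.0]
left = [1, 3, -1, 5, -1, -1, -1]
right = [2, 4, -1, 6, -1, -1, -1]
value = [None, None, "no", None, "yes", "no", "yes"]   # class stored at each leaf

def predict_with_counts(x):
    arithmetic = boolean = 0
    node = 0
    while True:
        boolean += 1                        # check whether the current node is a leaf
        if feature[node] == -2:
            return value[node], arithmetic, boolean
        arithmetic += 1                     # memory access to fetch the input feature
        boolean += 1                        # branching comparison against the threshold
        node = left[node] if x[feature[node]] <= threshold[node] else right[node]

print(predict_with_counts({0: -80.0, 1: 200.0}))   # deeper paths accumulate more operations
print(predict_with_counts({0: -64.0, 1: 115.0}))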
Calculating Runtime Operation Counts
In order to calculate the number of runtime operations for
a given input, we instrumented the prediction operation
for existing trained models in Python’s scikit-learn pack-
age (Buitinck et al. 2013). The source code for the tech-
nique is available at URL removed for anonymization. Since
most machine learning models in scikit-learn use (indirectly,
via other dependencies) cython, Fortran, and C for speed
and memory efficiency, we implemented a pure Python ver-
sion of the predict method for the classifiers, and in-
strumented the Python bytecode directly. We created pure-
Python versions of the decision tree, logistic regression, and
neural network classifiers in scikit-learn.1
1Specifically, sklearn.tree.DecisionTreeClassifier, sklearn.linear_model.LogisticRegression, and sklearn.neural_network.MLPClassifier.
Once working only with pure Python code, we used the
tracing feature of Python’s sys module and a custom tracer
function to count the number of boolean and arithmetic op-
erations. The default behavior of the tracer in Python is line
based, meaning the trace handler method is called for each
line of the source code. We used the dis module to mod-
ify the compiled bytecode objects of useful modules stored
in their respective .pyc files. In particular, we modified the
line numbering metadata so that every bytecode is given a
new line number, ensuring that our tracer function is called
for every bytecode instruction (Ned 2008b; Ned 2008a;
Ike-Nwosu 2018). Inside the tracer function we use the dis
module to determine when a byte corresponds to a valid op-
eration and count them accordingly for our simplified pre-
dict method implementations when run on a given input.
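As a rough illustration of this bytecode-level counting, the sketch below uses frame.f_trace_opcodes (available since Python 3.7) to receive one trace callback per bytecode instruction, instead of rewriting the .pyc line-numbering metadata as described above. The grouping of opcodes into arithmetic and boolean categories is an assumption for illustration (and the names shown predate the CPython 3.11 bytecode changes), so this is a simplified stand-in for the instrumented predict methods rather than the released implementation.

import dis
import sys
from collections import Counter

# Illustrative opcode grouping; the exact arithmetic/boolean split used in the
# actual instrumentation may differ. Opcode names shown are pre-CPython-3.11.
BOOLEAN_OPS = {"COMPARE_OP", "POP_JUMP_IF_TRUE", "POP_JUMP_IF_FALSE", "UNARY_NOT"}
ARITHMETIC_OPS = {"BINARY_ADD", "BINARY_SUBTRACT", "BINARY_MULTIPLY",
                  "BINARY_TRUE_DIVIDE", "BINARY_SUBSCR"}

counts = Counter()

def tracer(frame, event, arg):
    frame.f_trace_opcodes = True            # request per-instruction 'opcode' events
    if event == "opcode":
        opname = dis.opname[frame.f_code.co_code[frame.f_lasti]]
        if opname in BOOLEAN_OPS:
            counts["boolean"] += 1
        elif opname in ARITHMETIC_OPS:
            counts["arithmetic"] += 1
    return tracer

def count_runtime_operations(predict_fn, x):
    """Run a pure-Python predict function on one input and tally its operations."""
    counts.clear()
    sys.settrace(tracer)
    try:
        prediction = predict_fn(x)
    finally:
        sys.settrace(None)
    return prediction, dict(counts)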
User Study Design
We have two overall goals in this project: to assess the
simulatability and “what if” local explainability of machine
learning models, and to study the extent to which the pro-
posed metric works as proxy for local interpretability. To
those ends, we designed a crowdsourced experiment that
was given to 1000 participants. Participants were asked to
run a model on a given input and then evaluate the same
model on a locally changed version of the input. We start
by describing the many potentially interacting factors that
required a careful experimental design.
Models and Representations
For this study we consider the local
interpretability of
three models: decision trees, logistic regression, and neu-
ral networks. We chose decision trees and logistic re-
gression because they are commonly considered to be in-
terpretable (Lipton 2018). In contrast, we picked neural
networks because they are commonly considered uninter-
pretable. The models were trained using the standard pack-
age scikit-learn.2
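For reference, the training setup (with the hyperparameter choices listed in footnote 2) can be reproduced along the following lines; the placeholder data is included only to make the sketch self-contained, since the study's own synthetic datasets are described in the next subsection, and max_iter is raised here only to avoid convergence warnings.

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

# Placeholder data so the sketch runs; the study used its own synthetic datasets.
X, y = make_classification(n_samples=500, n_features=2, n_informative=2,
                           n_redundant=0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,
                                                    random_state=0)

models = {
    # Default parameters, no depth restriction (footnote 2).
    "decision tree": DecisionTreeClassifier(),
    # Multinomial logistic regression with the 'sag' solver (footnote 2).
    "logistic regression": LogisticRegression(multi_class="multinomial",
                                              solver="sag", max_iter=1000),
    # One hidden layer of 3 ReLU units (footnote 2); max_iter raised for convergence.
    "neural network": MLPClassifier(hidden_layer_sizes=(3,), activation="relu",
                                    max_iter=2000),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    print(name, model.score(X_test, y_test))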
Our decision tree representation is a standard node-link
diagram representation for a decision tree or flow chart. In
order to allow users to simulate the logistic regression and
neural network classifiers we needed a representation that
would walk the users through the calculations without pre-
vious training in using the model or any assumed mathemat-
ical knowledge beyond arithmetic. The resulting representa-
tion for logistic regression is shown in Figure 2. The neural
network representation used the same representation as the
logistic regression for each node and one page per layer.
The representations described so far are for the first ques-
tion a user will be asked about a model - the request to sim-
ulate it on a given input. In order to allow users to assess the
“what if” local explainability of the model, we also asked
them to determine the output of the model for a perturbed
version of the initial input they were shown. The represen-
tations used here are the same as the ones described, but a
snapshot of the participants’ previously filled in answers are
shown for the logistic regression and neural network repre-
sentations (see Figure 3) and users are not given blank en-
tries to allow the re-simulation of the model.
Data and Inputs
In order to avoid effects from study participants with do-
main knowledge, we created synthetic datasets to train the
models. We created four synthetic datasets simple enough
so that each model could achieve 100% test accuracy. These
datasets consisted of a 2 dimensional dataset with rota-
tion around an axis applied, 2 dimensional without rotation
around an axis, 3 dimensional with rotation around an axis,
2Decision trees were trained using sklearn.tree.DecisionTreeClassifier with default parameters and without any depth restrictions. Logistic regression was trained using sklearn.linear_model.LogisticRegression with the multi_class argument set to ’multinomial’ and ’sag’ (stochastic average gradient descent) as the solver. The neural network was implemented using sklearn.neural_network.MLPClassifier. The neural network used is a fully connected network with 1 input layer, 1 hidden layer with 3 nodes, and 1 output layer. The relu (rectified linear unit) activation function was used for the hidden layer.
Figure 2: The logistic regression representation shown to
users.
and 5 dimensional with rotation around an axis. As the num-
ber of dimensions increases, so does the operation count.
These four datasets were used to train the three considered
models via an 80/20 train-test split. We generated user in-
puts using the test data. For each test data point, we changed
one dimension incrementally in order to create a perturbed
input.
From this set of input and perturbed input pairs, we then
chose a set of eight pairs for each trained model (i.e., for
each model type and dataset combination) to show to the
participants. The set was chosen to fit the following condi-
tions: 50% of the classifications of the original inputs are
True, 50% of the classifications on the perturbed input are
True, and 50% of the time the classification between input
and its perturbed input changes. We used these criteria in or-
der to distribute classification patterns evenly across users
so that a distribution of random guesses by the participants
would lead to 50% correctness on each task, and guessing
that the perturbed input had the same outcome as the origi-
nal input would also be correct 50% of the time.
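The following sketch illustrates one way such data could be produced; the labelling rule, rotation angle, and perturbation step are illustrative assumptions rather than the study's exact generation procedure, which is not fully specified here.

import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Two-dimensional, cleanly separable data, rotated so the decision boundary is
# not axis-aligned. Angle, ranges, and labelling rule are illustrative choices.
n_points, angle = 400, np.pi / 6
X = rng.uniform(-100, 100, size=(n_points, 2))
y = (X[:, 0] > 0).astype(int)                      # separable before rotation
rotation = np.array([[np.cos(angle), -np.sin(angle)],
                     [np.sin(angle),  np.cos(angle)]])
X = X @ rotation.T                                 # apply the rotation around the origin

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,
                                                    random_state=0)

# Perturbed inputs: change one dimension of each test point incrementally.
perturbation_step = 15.0                           # illustrative increment
X_perturbed = X_test.copy()
X_perturbed[:, 0] += perturbation_step

# The study then selected eight (input, perturbed input) pairs per trained model
# so that original and perturbed classifications were each 50% True and the
# label changed between the members of a pair 50% of the time.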
Pilot Studies
In order to assess the length of the study and work out any
problems with instructions, we conducted three pilot studies.
In the first informal study, one of us watched and took notes
while a student attempted to simulate an input on each of
the three types of models and determine the outcome for a
perturbed input for each of those three models. In the second
two pilots we recruited about 40 participants through Prolific
and gave the study for a few fixed models and inputs with the
same setup as we would be using for the full study. The main
takeaways from these pilot studies were that we estimated it
would take users 20-30 minutes to complete the survey, but
that some users would take much longer. We had originally
planned to include a dataset with 10 dimensions, and based
on the time taken by users in the pilot survey decreased our
largest dataset to 5 dimensions and added the 2-dimensional
dataset with no rotation.
Experimental Setup
We used Prolific to distribute the survey to 1000 users each
of whom was paid $3.50 for completing it. Participants were
restricted to those with at least a high school education (due
to the mathematical nature of the task) and a Prolific rat-
ing greater than 75 out of 100. The full survey information
(hosted through Qualtrics) and resulting data is available on-
line.3
Each participant was asked to calculate the output of a
machine learning model for a given input, and then to de-
termine the output of a perturbed input applied to the same
model. We showed each participant three trained models: a
logistic regression, a decision tree, and a neural network in a
random order. Each participant was shown a model trained
on a specific dataset (chosen from the four described ear-
lier) at most once to avoid memory effects across models.
Each question began with the initial input and a brief de-
scription of the task. As an attention check, we included a
question in the survey that asked users to do some basic ad-
dition. Lastly, we asked each user at the end of the study to
indicate whether they fully attempted to determine correct
answers and that they would still be compensated in case
they selected no. We considered only the data of the 930
users, whom we refer to as confident respondents, who
indicated that they fully tried to determine correct answers
and who correctly answered the basic addition problem.
Preregistered Hypotheses We preregistered two experi-
mental hypotheses. Namely, that time to complete will be
positively related to operation count and that accuracy will
be negatively related to operation count. We also preregis-
tered two exploratory hypotheses. These were that we would
explore the specific relationship between time and accuracy
versus operation count and that we would explore how the
3URL removed for anonymization
Figure 3: The “what if” local explainability question shown
to users for the neural network model. Note that while the
simulatability question on the neural networks allowed users
to fill in the blanks, the shown blanks in the above image rep-
resent where the variable values will be filled in, and users
are given no location to fill in partial simulations of the neu-
ral network.
                 Simulatability        “What If” Local Explainability
DT   Correct     717 / 930             719 / 930
     p-Value     5.9 × 10−63           5.16 × 10−64
     95% CI      [0.73, 0.81]          [0.73, 0.82]
LR   Correct     592 / 930             579 / 930
     p-Value     1.94 × 10−15          2.07 × 10−12
     95% CI      [0.59, 0.69]          [0.57, 0.67]
NN   Correct     556 / 930             499 / 930
     p-Value     7.34 × 10−8           0.78
     95% CI      [0.55, 0.65]          [0.49, 0.59]

Table 1: Per-model correct responses out of the total confident respondents on the original input (simulatability task) and perturbed inputs (“what if” local explainability task) for decision trees, logistic regression, and neural networks. p-values given are with respect to the null hypothesis that respondents are correct 50% of the time, using exact binomial tests.
perturbed input is related to time and operation count. These
hypotheses can be found at the Open Science Framework at:
url removed for anonymization
Study Setup Issues After running the user study, we
found that an error in the survey setup meant that the survey
exited prematurely for users given two of the eight inputs on
the decision tree models for one dataset. Since we did not
receive data from these participants, Prolific recruited other
participants who were allocated to other inputs and datasets,
so the analyzed dataset does not include data for these two
inputs. Users who contacted us to let us know about the
problem were still paid.
Multiple Comparison Corrections
In order to mitigate
the problem of multiple comparisons, all p-values and confi-
dence intervals we report in the next section include a Bon-
ferroni correction factor of 28. While we include 15 statisti-
cal tests in this paper, we considered a total of 28. Reported
p-values greater than one arise from these corrections.
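Concretely, the correction multiplies each raw p-value by the number of comparisons considered; a one-line illustration (the raw value shown is made up):

def bonferroni_corrected(p_raw, n_comparisons=28):
    # Corrected values are reported directly, without clipping at 1.0,
    # which is why some reported p-values exceed one.
    return p_raw * n_comparisons

print(bonferroni_corrected(0.046))   # -> 1.288, i.e. reported as a value above 1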
User Study Results
Based on the results from the described user study, we now
examine folk hypotheses regarding the local interpretability
of different model types, consider the relative local inter-
pretability of these models, and assess our proposed metric.
Assessing the Local Interpretability of Models
In order to assess the local interpretability of different model
types, we first separately consider the user success on the
task for simulatability (the original input) and the task for
“what if” local explainability (the perturbed input). Since
inputs were chosen so that 50% of the correct model out-
puts were “yes” and 50% were “no”, we compare the result-
ing participant correctness rates to the null hypothesis that
respondents are correct 50% of the time. The resulting p-
values and confidence intervals are shown in Table 1.
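A sketch of how the Table 1 entries can be reproduced with SciPy's exact binomial test, applying the Bonferroni factor of 28 described earlier; small numerical differences from the published values may remain depending on the test sidedness, the confidence-interval method, and the library version.

from scipy.stats import binomtest

def correctness_vs_chance(n_correct, n_total=930, n_comparisons=28):
    result = binomtest(n_correct, n_total, p=0.5)          # null: 50% correct
    ci = result.proportion_ci(confidence_level=1 - 0.05 / n_comparisons)
    return result.pvalue * n_comparisons, (round(ci.low, 2), round(ci.high, 2))

# Decision-tree simulatability row of Table 1: 717 correct out of 930.
print(correctness_vs_chance(717))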
The results indicate strong support for the simulatability
of decision trees, logistic regression, and neural networks
based on the representations the users were given. The re-
sults also indicate strong support for the “what if” local ex-
plainability of decision trees and logistic regression models,
but neural networks were not found to be “what if” locally
explainable.
Recall that we consider models to be locally interpretable
if they are both simulatable and “what if” locally explain-
able. Based on the results in Table 1, we thus have evidence
that decision trees and logistic regression models are locally
interpretable and neural networks are not, partially validat-
ing the folk hypotheses about the interpretability of these
models. Next, we’ll consider the relative local interpretabil-
ity of these models.
Assessing Relative Local Interpretability
In order to assess the relative local interpretability of models
— to evaluate the folk hypothesis that decision trees and lo-
gistic regression models are more interpretable than neural
networks — we compared the distributions of correct and
incorrect answers on both tasks across pairs of model types.
We applied one-sided Fisher exact tests with the null hy-
pothesis that the models were equally simulatable, “what if”
locally explainable, or locally interpretable. The alternative
hypotheses were that decision trees and logistic regression
models were more interpretable (had a greater number of
correct responses) than neural networks and that decision
trees were more interpretable than logistic regression.
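The corresponding computation for Table 2 can be done with SciPy's Fisher exact test; the counts below are the DT versus NN simulatability cells from the table, and the Bonferroni factor of 28 is applied as before.

from scipy.stats import fisher_exact

# Rows: correct / incorrect counts for each model on the simulatability task.
dt_vs_nn = [[717, 213],   # decision tree
            [556, 374]]   # neural network

odds_ratio, p_raw = fisher_exact(dt_vs_nn, alternative="greater")
print(odds_ratio, p_raw * 28)    # one-sided test that DT is more often correct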
The results (see Table 2) give strong evidence that deci-
sion trees are more locally interpretable than logistic regres-
sion or neural network models on both the simulatability and
“what if” local explainability tasks. While there was strong
evidence that logistic regression is more “what if” locally
explainable and more locally interpretable than neural net-
works, there is not evidence that logistic regression is more
simulatable than neural networks using the given representa-
tions. This may be because the logistic regression and neu-
ral network representations were very similar. An analysis
of the users who got both tasks right, i.e., were able to lo-
cally interpret the model, shows that the alternative hypothe-
sis was strongly supported in all three cases, thus supporting
the folk hypotheses that decision trees and logistic regres-
sion models are more interpretable than neural networks.
Assessing Runtime Operations as a Metric for
Local Interpretability
In order to evaluate our preregistered hypotheses, we consid-
ered the relationship between total operation counts, time,
and accuracy on the simulatability, “what if” local explain-
ability, and combined local interpretability tasks. The graphs
showing these relationships, including ellipses that depict
the degree to which the different measurements are linearly
related to each other, are shown in Figure 4. The time and
accuracy given for the simulatability and “what if” local ex-
plainability tasks are separated individually for those tasks
in the first two columns of the figure, while the final local
interpretability column includes the sum of the time taken
Table 2: Comparative correct / incorrect distributions and p-values between model types generated through Fisher Exact Tests
for confident responses. Relative correctness is shown for simulatability (correctness on the original input), “what if” local
explainability (correctness on the perturbed input), and local interpretability (correctness on both parts). DT stands for Decision
Tree, LR stands for Logistic Regression, and NN stands for Neural Network.
Relative Simulatability:
                      DT > NN            DT > LR            LR > NN
Correct / Incorrect   DT: 717 / 213      DT: 717 / 213      LR: 592 / 338
                      NN: 556 / 374      LR: 592 / 338      NN: 556 / 374
p-value               1.5 × 10−14        3.7 × 10−9         1.3
95% CI                [1.69, ∞]          [1.43, ∞]          [0.90, ∞]

Relative “What If” Local Explainability:
                      DT > NN            DT > LR            LR > NN
Correct / Incorrect   DT: 719 / 211      DT: 719 / 211      LR: 579 / 351
                      NN: 499 / 431      LR: 579 / 351      NN: 499 / 431
p-value               7.3 × 10−26        2.6 × 10−11        2.9 × 10−3
95% CI                [2.20, ∞]          [1.54, ∞]          [1.09, ∞]

Relative Local Interpretability:
                      DT > NN            DT > LR            LR > NN
Correct / Incorrect   DT: 594 / 336      DT: 594 / 336      LR: 425 / 505
                      NN: 337 / 593      LR: 425 / 505      NN: 337 / 593
p-value               9.3 × 10−32        5.9 × 10−14        5.7 × 10−4
95% CI                [2.36, ∞]          [1.60, ∞]          [1.13, ∞]
by the user on both tasks and credits the user with an ac-
curate answer only if both the simulatability and “what if”
local explainability tasks were correctly answered. The ac-
curacies as displayed in the figure are averaged over all users
given the same input into the trained model. All total oper-
ation counts given are for the simulation task on the spe-
cific input. In the case of the “what if” local explainability
task for decision trees, this operation count is for the simu-
latability task on the perturbed input; the logistic regression
and neural network simulatability operation counts do not
vary based on input. The local interpretability total opera-
tion count is the sum of the counts on the simulatability and
“what if” local explainability tasks. Additionally, we con-
sidered the effect on time and accuracy of just the arithmetic
operation counts. The overall trends are discussed below.
Assessing the Relationship Between Runtime
Operations and Time
The number of operations has a positive relationship
with the time taken. Across all three interpretability tasks
it appears clear that as the number of operations increases,
the total time taken by the user also increases (see the first
row of Figure 4). This trend is especially clear for the sim-
ulatability task, validating Hypothesis 1. This effect is per-
haps not surprising, since the operation count considered is
for the simulatability task and the representations given fo-
cus on performing each operation.
Users were locally interpreting the “what if” local ex-
plainability task. Users spent much less time on the lo-
cal explainability task than the simulatability task across all
models. The difference suggests that users were actually lo-
cally interpreting the model on the “what if” local explain-
ability task as opposed to re-simulating the whole model.
The time taken to simulate neural networks might not be
feasible in practice. The neural network simulation time
was noticeably greater than that of the decision tree and
logistic regression. In some cases, the time expended was
greater than 30 minutes. A user attempting to simulate the
results of a model might give up or be unable to dedicate
that much time to the task. The study takers likely feared
lack of compensation if they gave up. This result suggests
that in time constrained situations, neural networks are not
simulatable.
Assessing the Relationship Between Runtime
Operations and Accuracy
The relationship between accuracy and operation count
is clear for decision trees but not the other model types.
As the total number of runtime operations increases, we hy-
pothesized that the accuracy would decrease. In the second
row of Figure 4 we can see that this trend appears to hold
clearly for all three interpretability tasks for the decision tree
models, but there is no clear trend for the logistic regression
and neural network models. This lack of effect may be due
to the comparatively smaller range of operation counts ex-
amined for these two model types, or it may be that the local
interpretability of these model types is not as related to oper-
ation count as it is for decision trees. The lack of overlap in
the ranges for the operation counts of logistic regression and
neural networks also makes it hard to separate the effects of
the model type on the results.
Some users might not have understood the logistic re-
gression and neural network tasks. Because the logis-
tic regression and neural network tasks could be considered
more challenging than the decision tree task, there may have
been noise introduced by the variability in user ability to per-
Figure 4: Comparisons shown are between total operations for a particular trained model and input, the time taken by the
user to complete the task, and the accuracy of the users on that task for the simulatability (original input), “what if” local
explainability (perturbed input), and the combined local interpretability (getting both tasks correct) tasks. The total time shown
is in seconds. The total operation count is for the simulatability task on the specific input; this is the same for both “what if”
local explainability and simulatability except for in the case of the decision tree models, where operation counts differ based on
input. The local interpretability operation count is the sum for the simulatability and “what if” local explainability task operation
counts. Accuracy shown is averaged over all users who were given the same input for that task and trained model. The models
considered are decision trees (DT), logistic regression models (LR), and neural networks (NN). The ellipses surrounding each
group depict the covariance between the two displayed variables, and capture 95% of the sample variance.
form the task. While operation counts might influence the
accuracy for users who are able to understand the base task,
this trend may be hidden by the fact that some users who
were confident did not understand the task.
Discussion and Conclusion
We investigated the local interpretability of three common
model types: decision trees, logistic regression, and neural
networks, and our user study provides evidence for the folk
hypotheses that decision trees and logistic regression models
are locally interpretable, while neural networks are not. We
also found that decision trees are more locally interpretable
than logistic regression or neural network models. We further
showed that as the number of runtime operations increases,
participants take longer to locally interpret a model, and they
become less accurate on local interpretation tasks. This run-
time operations metric provides some insight into the local
interpretability of the discussed models and representations,
and could indicate to practitioners the extent to which their
models fulfill a lower bound requirement of interpretabil-
ity. Further work is needed to consider the extent to which
the metric generalizes to other model types. In addition, we
found that users were consistently unable to locally interpret
the neural networks with the largest operation counts shown to
them, and their inability to simulate such neural networks
could suggest that users struggle to locally interpret models
requiring more than 100 operations. Because we did not give
users other models of similar operation count, due to their
potential display size, further work is needed to verify whether
users' inability to locally interpret large neural networks
was caused by the number of operations or by the neural
network model type itself.
Further, there are many caveats and limitations to the
reach of this work. The domain-agnostic nature of our syn-
thetic dataset has transferability advantages, but also has dis-
advantages in that it does not study interpretability within
its target domain. The definitions of local interpretability
that we assess here — simulatability and “what if” local
explainability— are limited in their reach and the specific
user study setup that we introduce may be limited in captur-
ing the nuance of these definitions. Still, this work provides
a starting point for designing user studies to validate notions
of interpretability in machine learning. Such controlled stud-
ies are delicate and time-consuming, but are ultimately nec-
essary in order for the field to make progress.
References
[Adler et al. 2018] Adler, P.; Falk, C.; Friedler, S. A.; Nix, T.; Rybeck, G.; Scheidegger, C.; Smith, B.; and Venkatasubramanian, S. 2018. Auditing black-box models for indirect influence. Knowledge and Information Systems 54(1):95–122.
[Allahyari and Lavesson 2011] Allahyari, H., and Lavesson, N. 2011. User-oriented assessment of classification model understandability. In 11th Scandinavian Conference on Artificial Intelligence. IOS Press.
[Breiman 2001] Breiman, L. 2001. Random forests. Machine Learning 45(1):5–32.
[Buitinck et al. 2013] Buitinck, L.; Louppe, G.; Blondel, M.; Pedregosa, F.; Mueller, A.; Grisel, O.; Niculae, V.; Prettenhofer, P.; Gramfort, A.; Grobler, J.; Layton, R.; VanderPlas, J.; Joly, A.; Holt, B.; and Varoquaux, G. 2013. API design for machine learning software: experiences from the scikit-learn project. In ECML PKDD Workshop: Languages for Data Mining and Machine Learning, 108–122.
[Datta, Sen, and Zick 2016] Datta, A.; Sen, S.; and Zick, Y. 2016. Algorithmic transparency via quantitative input influence: Theory and experiments with learning systems. In Security and Privacy (SP), 2016 IEEE Symposium on, 598–617. IEEE.
[Doshi-Velez and Kim 2017] Doshi-Velez, F., and Kim, B. 2017. Towards a rigorous science of interpretable machine learning. arXiv preprint arXiv:1702.08608.
[Goodman and Flaxman 2016] Goodman, B., and Flaxman, S. 2016. European Union regulations on algorithmic decision-making and a "right to explanation". Presented at the 2016 ICML Workshop on Human Interpretability in Machine Learning (WHI 2016), New York, NY.
[Guidotti et al. 2018] Guidotti, R.; Monreale, A.; Ruggieri, S.; Turini, F.; Giannotti, F.; and Pedreschi, D. 2018. A survey of methods for explaining black box models. ACM Computing Surveys (CSUR) 51(5):93.
[Henelius et al. 2014] Henelius, A.; Puolamäki, K.; Boström, H.; Asker, L.; and Papapetrou, P. 2014. A peek into the black box: exploring classifiers by randomization. Data Mining and Knowledge Discovery 28(5-6):1503–1529.
[Ike-Nwosu 2018] Ike-Nwosu, O. 2018. Inside Python Virtual Machine. Lean Publishing, 1st edition. Chapter 5, Code Objects, 68–78.
[Lage et al. 2018a] Lage, I.; Chen, E.; He, J.; Narayanan, M.; Gershman, S.; Kim, B.; and Doshi-Velez, F. 2018a. An evaluation of the human-interpretability of explanation. In Conference on Neural Information Processing Systems (NeurIPS) Workshop on Correcting and Critiquing Trends in Machine Learning.
[Lage et al. 2018b] Lage, I.; Slavin Ross, A.; Kim, B.; Gershman, S. J.; and Doshi-Velez, F. 2018b. Human-in-the-loop interpretability prior. In Conference on Neural Information Processing Systems (NeurIPS).
[Lakkaraju, Bach, and Leskovec 2016] Lakkaraju, H.; Bach, S. H.; and Leskovec, J. 2016. Interpretable decision sets: A joint framework for description and prediction. In ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD).
[Lipton 2018] Lipton, Z. C. 2018. The mythos of model interpretability. Queue 16(3):30.
[McCabe 1976] McCabe, T. J. 1976. A complexity measure. IEEE Transactions on Software Engineering (4):308–320.
[Molnar 2018] Molnar, C. 2018. Interpretable Machine Learning. https://christophm.github.io/interpretable-ml-book/.
[Ned 2008a] Ned, B. 2008a. The structure of .pyc files. Blog. https://nedbatchelder.com/blog/200804/the_structure_of_pyc_files.html.
[Ned 2008b] Ned, B. 2008b. Wicked hack: Python bytecode tracing. Blog. https://nedbatchelder.com/blog/200804/wicked_hack_python_bytecode_tracing.html.
[Olah et al. 2018] Olah, C.; Satyanarayan, A.; Johnson, I.; Carter, S.; Schubert, L.; Ye, K.; and Mordvintsev, A. 2018. The building blocks of interpretability. Distill. https://distill.pub/2018/building-blocks.
[Poursabzi-Sangdeh et al. 2017] Poursabzi-Sangdeh, F.; Goldstein, D. G.; Hofman, J. M.; Vaughan, J. W.; and Wallach, H. 2017. Manipulating and measuring model interpretability. Transparent and Interpretable Machine Learning in Safety Critical Environments Workshop at NIPS.
[Prolific 2014] Prolific. 2014. https://prolific.ac/, last accessed on June 5th, 2019.
[Ribeiro, Singh, and Guestrin 2016] Ribeiro, M. T.; Singh, S.; and Guestrin, C. 2016. Why should I trust you?: Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144. ACM.
[Selbst and Barocas 2018] Selbst, A. D., and Barocas, S. 2018. The intuitive appeal of explainable machines. Fordham Law Review. Forthcoming. Available at SSRN: https://ssrn.com/abstract=3126971.
[Selbst and Powles 2017] Selbst, A. D., and Powles, J. 2017. Meaningful information and the right to explanation. International Data Privacy Law 7(4):233–242.
[Ustun and Rudin 2016] Ustun, B., and Rudin, C. 2016. Supersparse linear integer models for optimized medical scoring systems. Machine Learning 102(3):349–391.
[Wachter, Mittelstadt, and Floridi 2017] Wachter, S.; Mittelstadt, B.; and Floridi, L. 2017. Why a right to explanation of automated decision-making does not exist in the general data protection regulation. International Data Privacy Law 7(2):76–99.
| ai_researcher | 1 |
ML Models and Ensemble Strategies for Network
Intrusion Detection Systems
Ismail Bibers, Osvaldo Arreche, and Mustafa Abdallah
arXiv:2410.15597v1 [cs.CR] 21 Oct 2024
Abstract—The escalating frequency of intrusions in networked
systems has spurred the exploration of new research avenues
in devising artificial intelligence (AI) techniques for intrusion
detection systems (IDS). Various AI techniques have been used
to automate network intrusion detection tasks, yet each model
possesses distinct strengths and weaknesses. Selecting the optimal
model for a given dataset can pose a challenge, necessitating
the exploration of ensemble methods to enhance generalization
and applicability in network intrusion detection. This paper
addresses this gap by conducting a comprehensive evaluation
of diverse individual models and both simple and advanced
ensemble methods for network IDS. We introduce an ensemble
learning framework tailored for assessing individual models
and ensemble methods in network intrusion detection tasks.
Our framework encompasses the loading of
input datasets,
training of individual models and ensemble methods, and the
generation of evaluation metrics. Furthermore, we incorporate
all features across individual models and ensemble techniques.
The study presents results for our framework, encompassing
14 methods, including various bagging, stacking, blending, and
boosting techniques applied to multiple base learners such as
decision trees, neural networks, and among others. We evaluate
the framework using two distinct network intrusion datasets,
RoEduNet-SIMARGL2021 and CICIDS-2017, each possessing
unique characteristics. Additionally, we categorize AI models
based on their performances on our evaluation metrics and
via their confusion matrices. Our assessment demonstrates the
efficacy of learning across most setups explored in this study.
Furthermore, we contribute to the community by releasing
our source codes, providing a foundational ensemble learning
framework for network intrusion detection.
Index Terms—Intrusion Detection Systems, Ensemble Learning,
Network Security, Machine Learning, CICIDS-2017,
RoEduNet-SIMARGL2021, Predictive Modeling, and Evaluation
Metrics.
I. INTRODUCTION
The primary aim of intrusion detection systems (IDS) is
to detect unauthorized utilization, misuse, and exploitation of
computer network systems by both internal users and external
intruders [1]–[3]. Traditional IDS designs typically operate
under the assumption that the behavior of an intruder will
This work is partially supported by the Lilly Endowment through the
AnalytixIN grant,
the Enhanced Mentoring Program with Opportunities
for Ways to Excel in Research (EMPOWER), and the 1st Year Research
Immersion Program (1RIP) grants from the Office of the Vice Chancellor
for Research at Indiana University-Purdue University Indianapolis. Ismail
Bibers and Mustafa Abdallah are with Computer and Information Technology
Department, Purdue University in Indianapolis, Indianapolis, IN, USA. Email:
{ibibers,abdalla0}@purdue.edu. Osvaldo Arreche is with Electrical
and Computer Engineering Department, Purdue University in Indianapolis,
Indianapolis, IN, USA. Email: [email protected]
deviate noticeably from that of a legitimate user and that
many unauthorized actions are discernible. The potential of
artificial intelligence (AI) has spurred the advancement of
fully automated intrusion detection systems [4], [5]. Various
AI methodologies have been employed to automate intrusion
detection tasks, including neural networks [6], [7], decision
trees [8], [9], logistic regression [10], [11], and random
forest [12], [13].
The majority of these AI methods, with the exception of
random forest, operate as standalone learning models where
the combination of their decisions is not utilized by the
IDS [14], [15]. Each of these AI models harbors its own
constraints, such as a high false positive rate for certain models
(for instance, approximately half of the major companies
contend with 10,000 security alerts daily from AI-based threat
monitoring tools [16]), and a high false negative rate for others
(which poses a significant challenge in safety-critical computer
network applications [17]).
Prior AI-focused studies predominantly emphasized the
classification accuracy of different AI algorithms without
harnessing the collective potential of
these diverse AI
techniques. This inherent limitation has stressed the urgent
necessity to exploit various ensemble learning methods to
bolster IDS [18]–[20].
Numerous recent studies have begun delving into the
utilization of ensemble learning with various AI models for
IDS, as evidenced by works such as [21]–[34]. Specifically,
works like [23], [24], [26], [28], [29], [31], [32] have proposed
ensemble learning frameworks for anomaly detection, focusing
on binary classification to discern normal from anomalous
traffic. Conversely, other studies [21], [22], [27], [30],
[33], [34] have developed ensemble learning frameworks for
classification of network intrusions, encompassing categories
like Denial of Service (DoS) attacks, Port Scanning, Normal
traffic, and others.
These frameworks employ ensemble learning techniques
such as Boosting, Stacking, and Bagging, considering various
base models like Decision Trees, Support Vector Machines,
and Neural Networks. Primary evaluation metrics include
traditional AI metrics like accuracy, precision, recall (true
positive rate), F1 score, and false positive rates. While
most studies utilize benchmark datasets for IDS, such as
CICIDS-2017, KDD’99, NSL-KDD, and UNSW-NB15 [23],
[28], [29], [33], some conduct tests on real networks like the
Palo Alto network [31], and even in real-time scenarios, as
demonstrated by the “kitsune” framework [24].
An exemplary contribution in this domain is the work
by [22], which focuses on generating a new dataset and
benchmarking it using ensemble learning techniques. Another
notable approach is showcased in [26], where ensemble
procedures are employed to select the AI model variant with
the best performance. However, a comprehensive evaluation of
a wide array of AI methods across different intrusion datasets
is lacking in these works, potentially impacting their general
applicability. Each study tends to concentrate on a singular
ensemble learning method to enhance the performance of a
limited set of base models.
This paper aims to address the aforementioned gap by
comprehensively evaluating diverse ensemble methods for
network intrusion detection systems. We establish multiple
Individual ML models and Simple and Advanced ensemble
learning frameworks to assess such methods in the context
of network intrusion detection. Leveraging prior works such
as [21]–[34], which have outlined various ensemble learning
approaches, our framework can be categorized as follows.
• Individual Models: The initial phase of the framework
involves implementing individual models such as decision
trees [8], [9], logistic regression [10], [11], and neural
networks [6], [7]. This phase encompasses tasks like loading
datasets (e.g., CICIDS-2017, and RoEduNet-SIMARGL2021),
training the models, and assessing performance using metrics
like accuracy, precision, recall, and F1 score.
• Simple Ensemble Methods: The subsequent stage of
the framework involves implementing simple ensemble
methods such as averaging, max voting, and weighted
averaging. Performance evaluation is conducted using
metrics like accuracy, precision, recall, and F1 score.
• Advanced Ensemble Methods: The third phase focuses
on implementing advanced ensemble methods including
bagging, boosting, stacking, and blending. Note that we
consider random forest [12], [13] in this category since it
is built based on bagging of many decision trees. Again,
evaluation metrics like accuracy, precision, recall, and F1
score are used to assess performance.
• Comparative Analysis: The final step entails evaluating
all individual, simple, and advanced ensemble models to
identify the most effective models for IDS and analyze
the impacts of ensemble learning techniques.
Additionally, our study presents results for various ensemble
model combinations, including bagging methods, stacking,
and boosting, applied to multiple base learners such as
decision trees, logistic regression, random forest, neural
networks, among others. These distinctions highlight the novel
contributions of our work compared to previous studies, as
discussed in Related Work Section (Section II).
We conduct evaluations of our framework using two
prominent network intrusion datasets, each dataset with
distinct characteristics. The first is RoEduNet-SIMARGL2021
[35], a recent collection from the SIMARGL project, supported
by the European Union. Notably, to our knowledge, limited
prior work has applied comprehensive ensemble learning
methods to this
dataset, as discussed in the related work section. This dataset
comprises realistic network traffic data,
including features
derived from live traffic, rendering it highly suitable for
network intrusion detection systems. The second dataset
utilized in our evaluation is CICIDS-2017 [36], established
by the Canadian Institute for Cybersecurity at the University
of New Brunswick in 2017. This dataset serves as a benchmark for
intrusion detection and encompasses various attack profiles.
For each dataset, we assess various approaches,
encompassing different base learners and variants of
ensemble methods, applied to different AI models.
models under consideration include Logistic Regression
(LR), Decision Trees (DT), K-Nearest Neighbors (KNN),
Multi-Layer Perceptron (MLP), Adaptive Boosting (ADA),
eXtreme Gradient Boosting (XGB), CatBoosting (CAT),
Gradient Boosting (GB), Averaging (Avg), Max Voting,
Weighted Averaging, and Random Forest (RF).
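As a rough sketch of how these model families can be instantiated with scikit-learn (XGB and CAT come from the separate xgboost and catboost packages and are omitted here), assuming X_train and y_train hold preprocessed intrusion-detection features and multi-class labels; hyperparameters are left at library defaults rather than the settings used in our experiments, and the voting weights shown are illustrative.

from sklearn.ensemble import (AdaBoostClassifier, BaggingClassifier,
                              GradientBoostingClassifier, RandomForestClassifier,
                              StackingClassifier, VotingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

# Individual base learners (LR, DT, KNN, MLP).
base_learners = [("lr", LogisticRegression(max_iter=1000)),
                 ("dt", DecisionTreeClassifier()),
                 ("knn", KNeighborsClassifier()),
                 ("mlp", MLPClassifier())]

models = {
    # Simple ensembles: hard voting corresponds to max voting; soft voting
    # averages class probabilities, and adding weights gives weighted averaging.
    "max_voting": VotingClassifier(estimators=base_learners, voting="hard"),
    "averaging": VotingClassifier(estimators=base_learners, voting="soft"),
    "weighted_averaging": VotingClassifier(estimators=base_learners, voting="soft",
                                           weights=[1, 2, 1, 2]),  # illustrative weights
    # Advanced ensembles.
    "bagging_dt": BaggingClassifier(DecisionTreeClassifier()),
    "random_forest": RandomForestClassifier(),
    "adaboost": AdaBoostClassifier(),
    "gradient_boosting": GradientBoostingClassifier(),
    "stacking": StackingClassifier(estimators=base_learners,
                                   final_estimator=LogisticRegression(max_iter=1000)),
}

# for name, model in models.items():
#     model.fit(X_train, y_train)
#     predictions = model.predict(X_test)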
For all these models across both datasets, we present and
analyze the evaluation metrics generated by our framework.
We thoroughly discuss the results, providing insights into the
performance of each method. Additionally, we categorize the
AI models based on their performances on evaluation metrics
with the datasets considered in this study. Notably, we rank
these different methods in descending order of F1 score,
offering a clear perspective on their effectiveness (given by
performance on the network intrusion datasets).
This comprehensive evaluation allows us to identify the
most promising approaches for network intrusion detection
across different datasets and AI models, facilitating informed
decision-making in the implementation of IDS. This work
represents a significant advancement in bridging the gap in
the application of ensemble learning methods for network
intrusion detection systems
(IDS). Through conducting
extensive evaluations and comparisons of various metrics, we
contribute to enhancing the understanding of these ensemble
methods’ efficacy in the realm of IDS.
The metrics employed in our evaluation encompass crucial
network security requirements for AI models,
including
accuracy, precision, recall, and F1 score of intrusion detection
methods, along with their corresponding runtimes. By
thoroughly examining these metrics, we provide valuable
insights into the performance and efficiency of different
ensemble methods in detecting network intrusions. Our
framework not only addresses existing limitations but also
expands the application of ensemble learning techniques in
network intrusion detection systems. By doing so, we pave
the way for further advancements and enhancements in this
research area, ultimately contributing to the development of
more robust and effective network security solutions.
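For completeness, these metrics can be computed directly with scikit-learn once a model has been fitted; the sketch below is a generic helper, and the use of weighted averaging for the multi-class scores is an assumption for illustration rather than necessarily the exact averaging mode used in our tables.

import time
from sklearn.metrics import (accuracy_score, confusion_matrix, f1_score,
                             precision_score, recall_score)

def evaluate_model(model, X_train, y_train, X_test, y_test):
    start = time.time()
    model.fit(X_train, y_train)
    y_pred = model.predict(X_test)
    runtime_seconds = time.time() - start
    return {
        "accuracy": accuracy_score(y_test, y_pred),
        # 'weighted' averaging is an illustrative choice for imbalanced,
        # multi-class intrusion labels.
        "precision": precision_score(y_test, y_pred, average="weighted", zero_division=0),
        "recall": recall_score(y_test, y_pred, average="weighted", zero_division=0),
        "f1": f1_score(y_test, y_pred, average="weighted", zero_division=0),
        "runtime_seconds": runtime_seconds,
        "confusion_matrix": confusion_matrix(y_test, y_pred),
    }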
Summary of Contributions: We summarize below our main
contributions in this current work.
• Evaluation of Individual and Ensemble Learning
Methods: We conduct a comprehensive evaluation and
comparison of various Individual ML models, along with
various simple and advanced ensemble learning methods
for network intrusion detection systems (IDS).
• Assessment Across Diverse Metrics: Our evaluation
considers a range of metrics crucial for network security
requirements, including accuracy, precision, recall, and
F1 score of intrusion detection methods, as well as their
runtime performance.
• Evaluation on Two Prominent Datasets: We evaluate
our framework on two well-known network intrusion
datasets with distinct characteristics: RoEduNet-SIMARGL2021
and CICIDS-2017. This allowed for a comprehensive analysis
across different network intrusion scenarios.
• Performance Ranking: We categorized AI models
(individual and ensemble ones) based on their performances
on evaluation metrics, ranking these methods in descending
order of F1 score, providing valuable insights into the
effectiveness of each approach.
• Expansion of Ensemble Learning Applications for
IDS: By demonstrating the efficacy of ensemble learning
methods
the
application of these techniques in this critical research
area, paving the way for further advancements.
in network IDS, our work expands
• Availability of Source Codes: We make our source codes
available to the community for accessing the framework
designed for network intrusion detection and for further
development with new datasets and models.1
II. RELATED WORK
A. Existing Efforts in Leveraging Ensemble Learning for IDS
The survey conducted in the previous work [25] offers an
overview of intrusion detection systems (IDS), focusing on the
evolution of ensemble systems and methodologies employed
in their design, particularly emphasizing ensemble techniques
between 2009 and 2020. This study comprehensively discusses
the current state of ensemble models, highlighting various
approaches such as Stacking, Bagging, Boosting, and voting,
among others. The analyzed works encompass a range
of datasets,
including KDD’99, NSL-KDD, Kyoto 2006+,
and AWID, along with diverse models such as neural
networks (NN), support vector machines (SVM), and decision
trees
function
(RBF). The primary contribution of this work lies in its
in-depth exploration of the existing landscape, stimulating
the investigation of novel combination methods, such as the
exploration of new combination rules. These insights offer
valuable directions for further research on ensemble learning
for IDS.
fuzzy clustering, and radial basis
(DT),
Ensemble Learning for Binary Classification Anomaly
Detection Approaches: Within this domain, the study by
[23] introduces an anomaly detection framework that operates
on input datasets such as CICIDS-2017, UNSW-NB15,
and KDD’99. The framework preprocesses the data and
conducts feature selection (employing the Chi-square method
in this study), subsequently applying various base models
including Gaussian Naive Bayes, Logistic Regression, and
Decision Trees. The predictions are then integrated using the
Stochastic Gradient Descent ensemble model
to yield the
final prediction. The primary contributions of this research
1The GitHub URL for our source codes is: https://github.com/sm3a96/
IDS-Machine-Learning-Techniques-.git
lie in the amalgamation of learning algorithms via stacking
to enhance IDS performance, with potential applicability to
other benchmark datasets. However, the study acknowledges
limitations related to data imbalance issues, suggesting that
the utilization of data augmentation techniques could alleviate
such imbalances. Furthermore, the framework could benefit
from incorporating different ensemble learning models to
further enhance performance.
Similarly, [28] proposes an ensemble learning framework
for binary classification of anomalies in IDS, utilizing
the NSL-KDD and UNSW-NB15 datasets along with base
models
like Random Forest, AdaBoost, XGBoost, and
Gradient boosting decision trees. The framework combines the
outcomes of these models using a soft voting scheme. The
presented results highlight the potential of the proposed NIDS
framework to improve the accuracy of cyber-attack detection
and minimize false alarm rates.
Moreover, [29] presents a framework for IDS applied to
datasets such as NSL-KDD, and UNSW-NB15. Base models
encompass LR, DT, NB, NN, and SVM, while ensemble
techniques include Majority Voting, DT, NB, LR, NN, and
SVM. The study also explores combinations of
feature
selection methods, with results indicating superior overall
performance for ensemble techniques. However, the authors
underscore the need for new datasets, particularly real-world
ones, and advocate for the integration of unsupervised learning
methods.
Additionally,
[31] applies its framework to real-world
datasets, including the Palo Alto network log, in addition
to NSL-KDD and UNSW-NB15 datasets. Anomaly detection
is addressed through ensemble methods employing weighted
voting atop base learners like SVM, Autoencoder, and
Random Forest. The primary contribution of
this work
is
the introduction of a new ANIDS approach with
real-world applicability, reducing false predictions. However,
scalability issues and reliance solely on weighted voting are
acknowledged as limitations, potentially necessitating more
diverse approaches for efficient performance across different
scenarios in this network intrusion detection task.
Considering the IoT domain, the study by [32] introduces
a framework for anomaly detection utilizing the TON-IoT
network dataset. It employs four supervised machine
learning (ML) models as base models, including Random
Forests, Decision Trees, Logistic Regression, and K-Nearest
Neighbors. These base models are subsequently integrated
into an ensemble method, employing stacking and voting
mechanisms to enhance attack detection efficiency. Limitations
of this research include its narrow focus on the TON-IoT
dataset without exploring other datasets, and its omission of
other popular ensemble learning methods like bagging and
averaging.
In contrast, [24] presents an online anomaly detection
system for network intrusion detection using a series
of ensemble Autoencoders, catering to real-time detection
requirements. This approach differs from previous works
specifically tailored
by leveraging ensemble techniques
[26] addresses the issue
for Autoencoders. Additionally,
small binary
of overfitting in ensemble
learning for
classification datasets. However, the framework utilizes several
non-IDS-related datasets and employs base models such
as Random Forest, Naive Bayes, and Logistic Regressor.
The primary contribution of this work lies in its ensemble
model selection procedure, which searches for the best model
for a particular instance. Nonetheless, a limitation of this
approach is the high computational cost associated with
the cross-validation technique. While pruning may mitigate
overfitting, its effectiveness may vary across different datasets
and models.
Ensemble Learning for Multiclass Classification
Approaches: Several studies explore ensemble learning
techniques for multiclass classification in IDS. For example,
in [21], a novel ensemble learning approach applies
on datasets like CICIDS-2017 and ToN IoT. They utilize
stacking with Tensorflow models (CNN, DNN, RNN, LSTM),
where class predictions feed into a DNN ensemble method.
However, limitations include the absence of real IoT scenario
experimentation, reliance on a single ensemble method,
and resource-intensive operations on IoT devices. Another
work, [22], introduces the GTCS dataset for multiclass
classification, addressing NSL-KDD dataset limitations.
Employing the Weka toolkit, it employs adaptive ensemble
learning with J48, MLP, and IBK base models, utilizing
majority voting for ensemble learning. Drawbacks include
lack of real-world deployment, external dataset validation,
and limited AI model variety.
In [27], an ensemble learning approach achieves higher
accuracy and lower false alarms by incorporating Random
Forest to alleviate data imbalance. Utilizing Linear Genetic
Programming (LGP), Adaptive Neural Fuzzy Inference System
(ANFIS), and Random Forest classifiers, it employs weighted
voting for ensemble. However, challenges
include lack
of optimal weight assignment, generalization issues across
datasets, and model variety. Similarly, [33] applies bagging to
NB, PART, and Adaptive Boosting on KDD’99, using voting
for component selection. Limitations involve restricted dataset
testing and AI model diversity.
B. Contribution of Our Work
Our contribution lies in introducing a comprehensive
intrusion classification framework encompassing individual
ML models, along with simple and advanced ensemble
techniques. We operate on two distinct datasets,
RoEduNet-SIMARGL2021 and CICIDS-2017, aiming to
generate key performance metrics including Accuracy, Recall,
Precision, and F1 score. Upon dataset loading and preparation,
we incorporate all available features for analysis. Initially,
our framework executes individual models leveraging LR,
DT, RF, MLP, and KNN as base learners. Subsequently, we
explore simple ensemble techniques such as Averaging (Avg),
Max Voting, and Weighted Averaging. In the next phase,
advanced ensemble methods including Bagging, Boosting
methods (ADA, GB, XGB, CAT), Blending, and Stacking are
applied in our framework.
Notably, our work stands out for its extensive benchmarking
experimentation across various model combinations.
Furthermore, our inclusion of the RoEduNet-SIMARGL2021
dataset in the experiments fills a gap in existing research, as
prior works seldom consider this dataset in their analyses.
Throughout the experimentation, we meticulously collect
and evaluate result metrics to benchmark optimal performance.
III. BACKGROUND AND PROBLEM STATEMENT
This section outlines the fundamental concepts of network
intrusion detection, highlights the hurdles posed by artificial
intelligence (AI), underscores the necessity of ensemble
learning, and elucidates the challenges inherent in evaluating
these methodologies within the context of network intrusion
detection tasks.
A. Types of Network Intrusions
Various network intrusion types exist, categorized within the
widely recognized MITRE ATT&CK framework [37]. In our
study, we address the primary network attacks outlined in this
framework. Consequently, network traffic is broadly classified
into the following categories:
Normal traffic: This refers to regular network activity
observed within the system.
link it
to specific creators,
Malware / Malware Repository Information obtained
regarding malicious software [MITRE ATT&CK ID:
DS0004]: This refers to analyzing malware for traits that
might
like the compiler used,
debugging traces, code similarities, or group identifiers related
to particular MaaS providers. Finding overlaps in malware
usage by different adversaries may suggest the malware was
acquired rather than independently developed. In this context,
overlapping features in malware used by various adversaries
could indicate a shared quartermaster [38].
PortScan (PS) / Network Service Discovery [MITRE
ATT&CK ID: T1046]: PortScan involves an intrusion where
the attacker conducts reconnaissance on the victim’s computer.
Often utilized as an initial step in an attack, it aims to identify
vulnerabilities and potential entry points. The method involves
sending connection requests to various ports, without finalizing
the connection. Responses received from these ports help map
potential entry points for exploitation [39].
Denial of Service (DoS) / Network Denial of Service
[MITRE ATT&CK ID: T1498]: This type of attack aims to
disrupt the target’s network availability. A common example
involves the attacker continuously sending connection requests
to a server. However, upon receiving acknowledgment from
the server, the attacker fails to respond, leaving the server’s
resources tied up and eventually leading to its unavailability.
For comprehensive classifications of DoS attacks, readers are
referred to [40].
Brute Force [MITRE ATT&CK ID: T1110]: This attack
involves attempting all possible password combinations to gain
unauthorized access to the victim’s network. Attackers often
leverage commonly used passwords in conjunction with this
method. Success is more likely when users employ weak or
easily guessable passwords [40].
Web Attack / Initial Access [MITRE ATT&CK ID:
TA0001, T1659, T1189]: This category encompasses attacks
in
public-facing
conducted through web channels, exploiting vulnerabilities
in web systems. For
attackers may exploit
instance,
leveraging
applications,
vulnerabilities
software bugs, misconfigurations, or glitches to gain access to
the application’s underlying instance or container. Examples
of such attacks include Drive-by Compromise [41]. However,
it is noteworthy that while web attacks such as SQL injection
(SQLi) and Cross-Site Scripting (XSS) are common,
they
typically do not directly provide initial access to a remote
server [37].
Infiltration / Initial Access [MITRE ATT&CK ID:
TA0001]: This type of attack occurs when an unauthorized
entity attempts to gain initial access to a system or application.
It encompasses various techniques, including targeted spear
phishing and exploiting vulnerabilities in public-facing web
servers. The initial access gained through this attack can vary,
ranging from simply changing a password to maintaining
persistent access through legitimate accounts and external
remote services.
Botnet / Compromise Infrastructure [MITRE ATT&CK
ID: T1584.005, T1059, T1036, T1070]: This type of attack
involves the use of automated scripts executed remotely
by attackers through hijacked devices. These scripts, known
as bots, emulate human behavior and replicate it across
multiple devices. The scripted nature of this technique enables
scalability and easy deployment, making it an effective tool for
targeting multiple attack points simultaneously. Consequently,
botnets are a prevalent type of network attack.
Probe Attack / Network Scanning or Surveillance
[MITRE ATT&CK ID: T1595]: Probe attacks serve as the
initial phase of a broader attack strategy. These attacks involve
scanning a network to collect information or identify known
vulnerabilities [42]. Armed with a map detailing the available
machines and services within a network, attackers can leverage
this information to seek out potential exploits. It is important
to note that while port scanning represents a type of probe
attack, not all probe attacks involve port scans. Some may
target specific vulnerabilities or utilize alternative methods,
such as ping sweeps [43] or DNS zone transfers [44].
B. Intrusion Detection Systems
The escalating complexity of cyber attacks poses a substantial risk to critical infrastructure across diverse industries [45], [46]. As a result, IDS plays a pivotal role
in defending computer network systems against malicious
activities, whether perpetrated by internal users or external
adversaries [47]. Conventional IDS architectures typically
operate under the assumption that an intruder’s actions will
noticeably diverge from those of a legitimate user, thereby
enabling the detection of many unauthorized activities [48].
With recent strides in artificial
intelligence (AI) over the
past decade, this architectural paradigm has facilitated the
emergence of AI models capable of autonomously identifying
network intrusions [49].
C. Limitations of Base Learner Models
While AI models have greatly automated intrusion
detection, their inherent complexity presents constraints due
to the intricate nature of their learning and decision-making
mechanisms. This complexity poses challenges for a single model to fully grasp the subtleties of datasets, resulting in difficulties in learning specific subsets and achieving
satisfactory metrics for certain outcomes. This challenge is
widespread across various AI models, such as Decision Trees
(DT), K-nearest neighbors (KNN), Support Vector Machines
(SVM), Deep Neural Networks (DNN), among others. Despite
their high predictive accuracy in Intrusion Detection Systems
(IDS),
there persists a gap in attaining better accuracy,
precision, recall, and F1 scores, particularly in error or attack
scenarios (including a high false positive rate for some AI
models [16] and a high false negative rate for others [17]).
This issue is especially critical in safety-sensitive applications
like network security through IDS. Consequently, there is a
growing impetus to enhance performance and broaden the
application of AI models in IDS. This has spurred the urgent
need to employ diverse ensemble learning techniques to bolster
IDS by leveraging the combination of different base learner
models [18]–[20].
D. Key Advantages of Ensemble Methods
It is crucial to recognize that individual base learners
possess distinct strengths and weaknesses. Depending on the
specific application or task, one model may outperform others,
adding complexity to the model selection process. Machine
learning algorithms operate on diverse underlying principles.
For instance, K-nearest neighbors (KNN), which classifies a sample based on the labels of its nearest training points, is sensitive to factors like the number of neighbors (K), class outliers, and irrelevant features, besides being computationally demanding at prediction time.
(NN), on the other hand,
typically require large datasets
and substantial computational resources, while also being
susceptible to variations in input data. Regression methods
like Logistic Regression offer simplicity and interpretability
but may struggle to capture intricate relationships, such as
higher-order polynomials. Similarly, Decision Trees boast
quick training times but can oversimplify problems, potentially
leading to overfitting. Consequently, amalgamating these
AI models through ensemble techniques can enhance their
robustness, generalizability, and effectiveness in network
intrusion detection tasks by leveraging their complementary
strengths and mitigating their weaknesses.
Ensemble Learning is a dynamic field that delves into the
concept of harnessing the strengths of diverse base learners to
enhance predictive performance. Among the most renowned
ensemble techniques are Bagging, Boosting, Blending, and
Stacking. Bagging, short for Bootstrap Aggregating, involves
creating multiple subsets of the dataset through bootstrapping,
wherein data points are sampled with replacement, and training
separate instances of a machine learning model on each
subset. These models are trained independently. The primary
objective of Bagging is to mitigate overfitting and enhance
generalization by leveraging the diversity among the models.
In contrast, Boosting operates by sequentially training multiple
instances of the same base model, with each subsequent model
aiming to correct the errors made by its predecessors. Boosting
achieves this by assigning higher weights or emphasis to
misclassified data points, effectively prioritizing instances that
were previously difficult to classify correctly. By iteratively
refining the model’s performance, Boosting endeavors to
improve predictive accuracy and reduce bias in the final
ensemble. Meanwhile, the Stacking method adopts a distinct
approach by training a diverse array of base learners and
utilizing their predictions as features to train a meta-model.
This meta-model learns to combine the predictions of the base
learners, effectively capturing complex relationships between
features and the target variable.
The ensemble methods such as Bagging, Boosting, and
Stacking offer sophisticated strategies for improving predictive
performance by leveraging the collective intelligence of
diverse base learners. By combining the strengths of individual
models and mitigating their weaknesses, ensemble techniques
pave the way for more accurate and robust predictions across
a wide range of machine-learning tasks.
In our study, we investigate various ensemble learning
approaches within our
framework, exclusively utilizing
base models for network intrusion detection tasks. This
comparative analysis is conducted across two distinct datasets,
each possessing unique characteristics,
in order to gain a
comprehensive insight into our proposed framework.
IV. FRAMEWORK
The primary objective of this study is to develop an
ensemble learning pipeline aimed at enhancing result metrics
across diverse datasets. Our framework aims to assist security
analysts in selecting effective methodologies for identifying
intrusions and classifying attacks on network traffic, thereby
bolstering intrusion prevention measures within their scope.
To achieve this, we delineate a methodological framework
comprising key stages for investigating the efficacy of various
ensemble learning techniques tailored for intrusion detection
systems (IDS), shown in Figure 1.
A. Data Preprocessing
The CICIDS-2017 and RoEduNet-SIMARGL2021 datasets underwent thorough preprocessing for intrusion detection systems (IDS). For CICIDS-2017, duplicate records were removed, and missing values were imputed with the mean for the 'Flow Bytes/s' column. Leading space characters in feature names were also removed, and label encoding was applied to categorical data in the 'Label' column.
Similarly, for RoEduNet-SIMARGL2021, duplicate records were removed, columns with singular unique values were dropped, and missing values were filled with the mean values for the respective columns. Categorical features were encoded into numerical values using the Ordinal Encoder. These preprocessing steps aimed to enhance data quality and consistency for subsequent analyses.
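As a minimal illustrative sketch of these preprocessing steps (the CSV file names are placeholders, and the column handling follows the description above):

import pandas as pd
from sklearn.preprocessing import LabelEncoder, OrdinalEncoder

# CICIDS-2017: drop duplicates, strip leading spaces in feature names,
# impute 'Flow Bytes/s' with the column mean, and label-encode 'Label'.
cic = pd.read_csv("cicids2017.csv")  # placeholder path
cic = cic.drop_duplicates()
cic.columns = cic.columns.str.strip()
cic["Flow Bytes/s"] = cic["Flow Bytes/s"].fillna(cic["Flow Bytes/s"].mean())
cic["Label"] = LabelEncoder().fit_transform(cic["Label"])

# RoEduNet-SIMARGL2021: drop duplicates and constant columns, fill missing
# numeric values with column means, ordinal-encode categorical columns.
roe = pd.read_csv("roedunet2021.csv")  # placeholder path
roe = roe.drop_duplicates()
roe = roe.loc[:, roe.nunique() > 1]
num_cols = roe.select_dtypes(include="number").columns
roe[num_cols] = roe[num_cols].fillna(roe[num_cols].mean())
cat_cols = roe.select_dtypes(exclude="number").columns
roe[cat_cols] = OrdinalEncoder().fit_transform(roe[cat_cols])

The resulting feature matrices and encoded labels are then split into the 70%/30% training and testing sets used throughout the experiments.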
B. Model Selection and Used Techniques
In this section, we will explain the model selection process
including the selection of individual base learners and the use
of ensemble techniques such as simple and advanced ensemble
techniques.
1) The Individual Base Learner Models: We carefully
selected a set of diverse and established base learners to
leverage their complementary strengths.
• Decision Trees: Decision trees are well-known for
their simplicity and easy-to-understand nature. They
provide an intuitive representation of the decision-making
processes in the data.
• Neural Networks: We focused on the intricacy and non-linearity of neural networks, especially the Multi-layer Perceptron Classifier.
• Logistic Regression: It is a benchmark model that provides insights into linear relationships in the data.
2) The Simple Ensemble Techniques:
In conjunction
with individual base learners, we employed various simple
ensemble techniques to enhance predictive performance. These
techniques included:
• Averaging Predictions: By averaging the predictions of
multiple individual models, we aimed to reduce variance
and improve overall prediction accuracy.
• Max Voting: Employing a majority voting scheme, max
voting aggregates predictions from multiple models and
selects the most frequently occurring class label as the
final prediction.
• Weighted Averaging: Assigning weights to predictions from individual models based on their performance, weighted averaging allowed us to emphasize the contributions of more accurate models while mitigating the impact of less accurate ones. We explain how the weights are assigned in our experiments in the Evaluation Section.
3) The Advanced Ensemble Techniques: To further bolster
our ensemble model’s efficacy, we delved into advanced
ensemble techniques, comprising:
• Bagging: Through bootstrap aggregating, bagging generates diverse subsets of the training data and trains multiple base learners on each subset. By averaging their predictions, bagging reduces variance and enhances model robustness. In this context, Random Forests aggregate predictions from several decision trees to reduce overfitting and maintain robust predictive performance across different datasets.
• Blending: Leveraging the outputs of multiple base
learners as features, blending combines their predictions
using a meta-learner to generate the final prediction.
This technique harnesses the diversity of base learners
to improve generalization.
• Boosting: Sequentially training base learners to correct
the errors of preceding models, boosting emphasizes the
misclassified instances, thereby iteratively refining the
model’s predictive performance.
• Stacking: Combining predictions from multiple base
learners as features, stacking employs a meta-learner to
learn the optimal combination of base learner predictions.
This hierarchical ensemble technique leverages
the
diverse strengths of individual models to improve overall
performance.
Fig. 1: An overview of our ensemble learning framework for network IDS. It considers a diverse set of AI models and ensemble
methods, along with network intrusion datasets.
C. Model Implementation and Training
Following the meticulous selection of models, the implementation phase commenced, leveraging Python for the realization of our ensemble framework. Our implementation strategy began with the deployment of individual models, followed by the integration of simple ensemble techniques, culminating in the incorporation of a diverse array of advanced ensemble techniques.
In order to make the best use of our high-performance
system, we decided to use TensorFlow’s distribution strategy,
specifically tf.distribute.MirroredStrategy(). This strategy is
designed for synchronous training across multiple GPUs
within a single machine. It works by replicating the model’s
variables and computations across all available GPUs, which
makes parallelism more efficient and speeds up the training
process significantly. Each GPU independently computes
gradients for a subset of
the training data, and these
gradients are aggregated across all GPUs to update the
model's parameters. By synchronizing training across all GPUs, this approach maximizes GPU utilization, prevents inconsistencies, and ultimately accelerates the training process while improving overall efficiency. This strategy aligns perfectly with our goal of using our high-performance computer's computational resources to speed up model development and experimentation.
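A minimal sketch of this distribution strategy is shown below; the layer sizes, feature count, class count, and the training arrays X_train and y_train are illustrative placeholders rather than our exact configuration:

import tensorflow as tf

num_features, num_classes = 29, 4  # illustrative values for RoEduNet-SIMARGL2021

strategy = tf.distribute.MirroredStrategy()  # replicates variables on all GPUs
print("Replicas in sync:", strategy.num_replicas_in_sync)

with strategy.scope():
    # Variables created inside the scope are mirrored across the GPUs.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu", input_shape=(num_features,)),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

# Per-replica gradients are aggregated before each synchronous update.
model.fit(X_train, y_train, epochs=5, batch_size=1024)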
1) Individual Model Implementation and Training : We
implemented and trained each chosen base learner on its
own, giving us the opportunity to explore its algorithms and
performance characteristics in detail. With the help of Python's powerful libraries (scikit-learn, TensorFlow, and Keras), we were able to implement and train decision trees and random forests, neural networks (especially the Multi-layer Perceptron Classifier), and logistic regression models with ease. To implement and train the decision tree model, we utilized the scikit-learn library: the DecisionTreeClassifier (DT) class was used to create a decision tree classifier object, and then we trained the classifier using the fit function with our training data. Similar approaches were followed for implementing and training the other models, such as random forests, the Multi-layer Perceptron Classifier, and logistic regression. Each model was instantiated using its respective class from scikit-learn, and then trained on the training data using the "fit" function.
After training each model, we utilized the "predict" function to test the trained models using the test dataset that we prepared. This allowed us to evaluate the performance of each model on unseen data.
We further evaluated the models by computing their accuracy using the "accuracy_score" function from scikit-learn. Additionally, we printed the classification report using the "classification_report" function to obtain precision, recall, and F1-score for each class. We then visualized the performance of the models using confusion matrices generated by the "confusion_matrix" function. These evaluations provided us with insights into the overall performance and effectiveness of each model for the task.
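A hedged sketch of this train-and-evaluate loop for one base learner (variable names such as X_train, X_test, y_train, and y_test are placeholders for the preprocessed splits):

from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score, classification_report, confusion_matrix

# Train a single base learner on the preprocessed training split.
dt = DecisionTreeClassifier(random_state=42)
dt.fit(X_train, y_train)

# Evaluate on the held-out test split.
y_pred = dt.predict(X_test)
print("Accuracy:", accuracy_score(y_test, y_pred))
print(classification_report(y_test, y_pred))
print(confusion_matrix(y_test, y_pred))

The same pattern applies to the other base learners by swapping in the corresponding scikit-learn class.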
2) Simple Ensemble Techniques Implementation and Training: Following the individual model implementations, simple ensemble techniques were employed to combine predictions from multiple models, thereby enhancing predictive performance. Python's array manipulation capabilities and built-in functions facilitated the seamless integration of averaging, max voting, and weighted averaging techniques.
In the averaging technique implementation, three diverse
classifiers (Decision Tree (DT), K-Nearest Neighbors (KNN),
and Random Forest (RF)) are trained simultaneously within
the distributed scope, enabling efficient GPU acceleration.
Predictions are then made using each model, and a Soft Voting
ensemble technique is applied to combine the predictions
into a final output. This ensemble approach aims to enhance
predictive performance by leveraging diverse models and
distributed computing resources.
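A minimal sketch of this averaging (soft voting) ensemble, assuming the preprocessed training and test splits from the previous steps:

from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

# Soft voting averages the class probabilities predicted by the base models.
avg_ensemble = VotingClassifier(
    estimators=[("dt", DecisionTreeClassifier()),
                ("knn", KNeighborsClassifier(n_neighbors=5)),
                ("rf", RandomForestClassifier(n_estimators=100))],
    voting="soft")
avg_ensemble.fit(X_train, y_train)
y_pred = avg_ensemble.predict(X_test)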
After training the ensemble model using the averaging technique, we utilized the "predict" function to test the model using the test dataset. We further evaluated the model's performance by computing its accuracy using the "accuracy_score" function and printing the classification report using the "classification_report" function. Additionally, we generated a confusion matrix using the "confusion_matrix" function to visualize the model's performance (as will be shown in the next sections).
In the max voting technique implementation, a distributed
training approach using TensorFlow’s MirroredStrategy is
employed to optimize the training process across multiple
GPUs. Three distinct classifiers (K-Nearest Neighbors (KNN),
Decision Tree (DT), and Random Forest (RF)) are initialized.
These models are then integrated into a Voting Classifier using
hard voting. The Voting Classifier aggregates the predictions
from individual models and selects the class label with the
majority vote as the final prediction. Subsequently, the Voting
Classifier is trained on the training data, and its performance
is evaluated using the test data. This ensemble approach
enhances the model’s predictive capability by leveraging
strengths of multiple classifiers.
In the weighted averaging technique implementation, a distributed training strategy using TensorFlow's MirroredStrategy is employed, facilitating parallel execution
across multiple GPUs. Within this distributed scope, three
distinct classifiers (Decision Tree (DT), K-Nearest Neighbors
(KNN), and Random Forest (RF)) are instantiated. These
classifiers are then integrated into a Voting Classifier, which
aggregates
their predictions using hard voting. Custom
weights are assigned to each classifier to influence their
contribution to the final prediction, with DT accounting for 40%, KNN for 30%, and RF for 30%. Finally, the ensemble model is trained on the provided training data.
This ensemble-based approach aims to enhance predictive
accuracy by leveraging the diverse capabilities of individual
classifiers while considering their respective contributions to
the final prediction.
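A minimal sketch of this weighted voting setup with the 40/30/30 weights described above (omitting the weights argument reduces it to the plain max voting configuration):

from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

# Hard voting with per-model weights: DT 0.4, KNN 0.3, RF 0.3.
weighted_vote = VotingClassifier(
    estimators=[("dt", DecisionTreeClassifier()),
                ("knn", KNeighborsClassifier()),
                ("rf", RandomForestClassifier())],
    voting="hard",
    weights=[0.4, 0.3, 0.3])
weighted_vote.fit(X_train, y_train)
y_pred = weighted_vote.predict(X_test)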
Following the training of the ensemble model using the weighted averaging technique, we conducted prediction and
evaluation by computing accuracy, printing the classification
report, and generating the confusion matrix.
3) Advanced Ensemble Techniques
Implementation:
Finally, advanced ensemble techniques were implemented
to further enhance the predictive capabilities of the model.
Python’s machine learning libraries,
including scikit-learn,
provide seamless integration of various advanced ensemble
techniques such as bagging, blending, boosting (including
Adaptive Boosting, Cat Boosting, Gradient Boosting, and
XGBoost (Extreme Gradient Boosting)), and stacking.
For the bagging technique implementation, a distributed
training strategy using TensorFlow’s MirroredStrategy is
initiated to enable parallel execution across multiple GPUs.
Within this distributed context, a list of diverse base models
is instantiated, including RF, MLP, LR, and DT classifiers.
These base models serve as the foundational components for
the ensemble approach. Subsequently, a Bagging Classifier
is constructed, utilizing RF as the base model. Bagging is
a robust ensemble technique effective in reducing overfitting
by aggregating predictions from multiple models. In this
implementation, the Bagging Classifier is configured with the
same number of estimators as the number of base models to
ensure diversity and effectiveness in prediction. The Bagging
Classifier is then trained on the provided training data.
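A minimal sketch of this bagging setup; note that the base-estimator argument is named estimator in recent scikit-learn releases and base_estimator in older ones, and the hyperparameter values are illustrative:

from sklearn.ensemble import BaggingClassifier, RandomForestClassifier

# Bagging with a Random Forest base estimator; each of the 4 estimators is
# trained on a bootstrap sample (drawn with replacement) of the training data.
bagging = BaggingClassifier(
    estimator=RandomForestClassifier(n_estimators=100),
    n_estimators=4,
    bootstrap=True,
    n_jobs=-1)
bagging.fit(X_train, y_train)
y_pred = bagging.predict(X_test)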
Similarly, for the blending technique implementation,
TensorFlow’s MirroredStrategy is employed to facilitate
parallel execution across multiple GPUs. Under this strategy’s
scope, several base models including RF, MLP, LR, and DT
are initialized and trained. Each of these models generates
predictions for the test data, which are then combined using the
blending technique to create a new dataset. This dataset serves
as the input for a meta-model, another DT Classifier. The
meta-model is trained on the blended predictions to learn how
to best combine the outputs of the base models. Additionally,
predictions are made on the test set using the trained base
models, and a new dataset (blend X test) is created with these
predictions to be used as input for the meta-model. Finally,
the meta-model predicts the final output based on the blended
predictions from the base models.
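A hedged sketch of this blending scheme; for simplicity, the base-model predictions on the training split serve as meta-features here, whereas a common variant reserves a separate hold-out split for the meta-model:

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

# Train the base models on the training split.
base_models = [RandomForestClassifier(), MLPClassifier(max_iter=300),
               LogisticRegression(max_iter=1000), DecisionTreeClassifier()]
for m in base_models:
    m.fit(X_train, y_train)

# Stack the base-model predictions column-wise to build the blended features.
blend_X_train = np.column_stack([m.predict(X_train) for m in base_models])
blend_X_test = np.column_stack([m.predict(X_test) for m in base_models])

# A Decision Tree meta-model learns how to combine the base predictions.
meta_model = DecisionTreeClassifier()
meta_model.fit(blend_X_train, y_train)
final_pred = meta_model.predict(blend_X_test)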
Along the same lines, for the boosting technique implementations, including Adaptive Boosting, Cat Boosting, Gradient Boosting, and XGBoost (Extreme Gradient Boosting), we followed the same aforementioned process.
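A minimal sketch of these boosting variants, assuming the external xgboost and catboost packages are installed and treating the hyperparameter values as illustrative:

from sklearn.ensemble import AdaBoostClassifier, GradientBoostingClassifier
from xgboost import XGBClassifier        # external package, assumed installed
from catboost import CatBoostClassifier  # external package, assumed installed

boosters = {
    "ADA": AdaBoostClassifier(n_estimators=100),
    "GB": GradientBoostingClassifier(n_estimators=100),
    "XGB": XGBClassifier(n_estimators=100, tree_method="hist"),
    "CAT": CatBoostClassifier(iterations=100, verbose=0),
}
for name, clf in boosters.items():
    # Each booster sequentially fits new learners to the errors of earlier ones.
    clf.fit(X_train, y_train)
    print(name, "accuracy:", clf.score(X_test, y_test))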
For the stacking technique implementation, TensorFlow’s
MirroredStrategy is employed to facilitate parallel execution
across multiple GPUs. Within this distributed context, four
base models (RF, MLP, LR, and DT) are instantiated and
integrated into pipelines incorporating Principal Component
Analysis (PCA) for dimensionality reduction. These pipelines
preprocess the data, enhancing model performance and reducing computational complexity. A meta-model, represented by another DT Classifier, is instantiated to learn from the outputs of the base models. The Stacking Classifier from scikit-learn is utilized to stack the base models, combining their predictions as features for the meta-model. Finally, the stacked model is trained on the provided training data, enabling it to learn the optimal combination of predictions from the base models.
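A minimal sketch of this stacking setup; the scaler and the number of PCA components are illustrative assumptions rather than our exact configuration:

from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier

def pca_pipe(estimator, n_components=20):
    # Each base learner is wrapped with scaling and PCA dimensionality reduction.
    return make_pipeline(StandardScaler(), PCA(n_components=n_components), estimator)

stack = StackingClassifier(
    estimators=[("rf", pca_pipe(RandomForestClassifier())),
                ("mlp", pca_pipe(MLPClassifier(max_iter=300))),
                ("lr", pca_pipe(LogisticRegression(max_iter=1000))),
                ("dt", pca_pipe(DecisionTreeClassifier()))],
    final_estimator=DecisionTreeClassifier())  # the DT meta-model
stack.fit(X_train, y_train)
y_pred = stack.predict(X_test)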
D. Evaluation Metrics and Model Selection Rationale
Results’ Metrics: To evaluate
the performance of
the selected models and techniques comprehensively, we
employed four primary performance indicators: Accuracy,
Precision, Recall, and F1 score. Additionally, runtime was
considered as a metric to assess the computational efficiency
of the models. These metrics collectively provide insights into
the effectiveness and efficiency of the models in detecting
intrusions.
We organized the results systematically to facilitate analysis
and comparison across different models and techniques.
This structured approach enables us to draw meaningful
conclusions regarding the suitability and efficacy of the models
for IDS applications.
Model Selection Criteria: The models chosen for this study
were selected based on several key factors. Primarily, their
prevalence in prior research pertaining to Intrusion Detection
Systems (IDS) ensured alignment with established literature,
enabling effective comparison with seminal studies such
as [40], [50]. Furthermore, these diverse ensemble learning
methods had success in different applications. By adopting
widely-used models, our research maintains consistency with
existing methodologies, facilitating a robust evaluation of
various models and ensemble learning techniques utilized in
our investigation. In this context, we emphasize that we used
different AI models with different working principles for our
ensemble learning (i.e., the KNN uses a different reasoning
from MLP that also uses a different reasoning than DT).
E. Comprehensive Overview of Top Network Intrusion Features and Their Role in the Learning
In this subsection, we present a detailed list of the top network intrusion features along with their explanations for the two datasets under study, as they play a crucial role throughout the entirety of our paper. Tables I and II provide descriptions for each feature in the RoEduNet-SIMARGL2021 and CICIDS-2017 network intrusion datasets, respectively. These tables elucidate key features specific to each dataset, highlighting significant features and offering clarity and contextual understanding.
However, it is essential to clarify that all features listed in
Table IIIb were utilized in our preliminary experiments. This
inclusive approach enabled us to fully exploit the datasets
for our analysis of network intrusion detection. Notably,
Table IIIb summarizes the overall composition of each dataset,
encompassing the number of features.
V. FOUNDATIONS OF EVALUATION
In this section, we present a comprehensive evaluation
aimed at addressing key research questions that underpin our
study:
1) What are the optimal individual ML models suited for
a given network intrusion detection dataset?
TABLE I: Description of main features for RoEduNet-SIMARGL2021 dataset [51].
RoEduNet-SIMARGL2021 Features | Explanation
FLOW DURATION MILLISECONDS | Flow duration in milliseconds
PROTOCOL MAP | IP protocol name (tcp, ipv6, udp, icmp)
TCP FLAGS | Cumulation of all flow TCP flags
TCP WIN MAX IN | Max TCP Window (src->dst)
TCP WIN MAX OUT | Max TCP Window (dst->src)
TCP WIN MIN IN | Min TCP Window (src->dst)
TCP WIN MIN OUT | Min TCP Window (dst->src)
TCP WIN SCALE IN | TCP Window Scale (src->dst)
TCP WIN MSS IN | TCP Max Segment Size (src->dst)
TCP WIN SCALE OUT | TCP Window Scale (dst->src)
SRC TOS | TOS/DSCP (src->dst)
DST TOS | TOS/DSCP (dst->src)
FIRST SWITCHED | SysUptime of First Flow Packet
LAST SWITCHED | SysUptime of Last Flow Packet
TOTAL FLOWS EXP | Total number of exported flows
TABLE II: Description of the main features for the CICIDS-2017 dataset [52].
CICIDS-2017 Features | Explanation
Packet Length Std | Standard deviation length of a packet
Total Length of Bwd Packets | Total size of packet in backward direction
Subflow Bwd Bytes | Average number of bytes in backward sub-flow
Destination Port | Destination Port Address
Packet Length Variance | Variance length of a packet
Bwd Packet Length Mean | Mean size of packet in backward direction
Avg Bwd Segment Size | Average size observed in the backward direction
Bwd Packet Length Max | Maximum size of packet in backward direction
Init Win Bytes Backward | Total number of bytes in initial backward window
Total Length of Fwd Packets | Total packets in the forward direction
Subflow Fwd Bytes | Average number of bytes in a forward sub-flow
Init Win Bytes Forward | Total number of bytes in initial forward window
Average Packet Size | Average size of packet
Packet Length Mean | Mean length of a packet
Max Packet Length | Maximum length of a packet
2) Which ensemble method exhibits superior performance
on a given dataset?
3) How do the evaluated methods within our framework
perform across key metrics such as Accuracy, Precision,
Recall, F1 Score, and runtime?
4) What are the inherent limitations and strengths associated with the application of ensemble learning methods in the context of network intrusion detection?
Before delving into the detailed evaluation results, we
provide a comprehensive overview of the experimental setup.
A. DataSet Description
RoEduNet-SIMARGL2021 Dataset
[35]: This dataset
stems from the SIMARGL project, a collaborative initiative
supported by the European Union under the Horizon program,
in conjunction with the Romanian Education Network
(RoEduNet).
It comprises authentic network traffic data,
incorporating features derived from real-time traffic analysis.
The dataset adheres to a structured data schema reminiscent
of Netflow [53], a network protocol developed by CISCO for
the purpose of capturing and monitoring network flows.
CICIDS-2017 Dataset [36]: Serving as a benchmark for
intrusion detection, this dataset was curated by the Canadian
Institute for Cybersecurity at the University of Brunswick in
2017. It encompasses six distinct attack profiles, including
activities such as brute force, heartbleed, botnet, Denial of
Service (DoS), portscan, web attack, and infiltration attack.
To establish a realistic context, the dataset incorporates background traffic generated through a B-Profile system [54],
TABLE III: Summary and statistics of the two network intrusion datasets used in this work, including the size of the dataset, number of attack types (labels), number of intrusion features, and distribution of samples among attack types.
(a) Basic statistics of datasets
Dataset | No. of Labels | No. of Features | No. of Samples
CICIDS-2017 | 7 | 78 | 2,775,364
RoEduNet-SIMARGL2021 | 3 | 29 | 31,433,875
(b) Distribution of samples among different attack types
Dataset | Normal | DoS | PortScan | Brute Force | Web Attack | Bot | Infiltration
CICIDS-2017 | 84.442% | 9.104% | 5.726% | 0.498% | 0.157% | 0.071% | 0.001%
RoEduNet2021 | 62.20% | 24.53% | 13.27% | - | - | - | -
which captures various user behaviors based on popular
network protocols.
Summary and Statistics of the Datasets: Each dataset is
characterized by its size, the number of attack types (labels),
and the quantity of intrusion features. Detailed statistics
regarding these attributes are presented in Table III.
B. Experimental Setup
Computing Resources: Our experiments were conducted
on a high-performance computing (HPC) system equipped
with robust hardware capabilities. The HPC configuration
includes two NVIDIA A100 GPUs, 64 GPU-accelerated
nodes, each boasting 256 GB of memory, and a single
64-core AMD EPYC 7713 processor running at 2.0 GHz
with a power consumption of 225 watts. This setup enables
a peak performance of approximately 7 petaFLOPs, making
it exceptionally well-suited for intensive AI and machine
learning tasks [55].
Coding Tools: To ensure versatility and openness in our
implementation, we utilized the Python programming language
alongside various AI toolboxes such as Keras and ScikitLearn.
Additionally, we leveraged essential libraries including Pandas
and Matplotlib. By adopting these open-source tools, we aimed
to facilitate transparency and reproducibility in our research
endeavors.
C. Evaluation Metrics
In this study, the utilization of well-established evaluation metrics is crucial to ascertain the most effective model for integration within an Intrusion Detection System (IDS). Accuracy, precision, recall, and F1-score stand as
quintessential performance evaluation metrics. These metrics
are derived from four fundamental measures: true positive
(TP), false positive (FP), true negative (TN), and false negative
(FN) rates. The evaluation metrics are delineated as follows:
• Accuracy [(TP + TN)/Total]: Signifies the proportion of accurately identified network traffic instances over the total data instances.
• Precision [TP/(FP + TP)]: Measures the frequency with which the model accurately discerns an attack.
• Recall [TP/(FN + TP)]: Measures the model's ability to correctly identify attacks (or intrusions). Recall is also referred to as the true-positive rate, sensitivity, or detection rate.
• F1-Score [2TP/(2TP + FP + FN)]: Represents the harmonic mean of precision and recall.
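A minimal sketch of how these metrics can be computed with scikit-learn; the macro averaging mode over the intrusion classes is an assumption for illustration:

from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

acc = accuracy_score(y_test, y_pred)
pre = precision_score(y_test, y_pred, average="macro", zero_division=0)
rec = recall_score(y_test, y_pred, average="macro", zero_division=0)
f1 = f1_score(y_test, y_pred, average="macro", zero_division=0)
print(f"ACC={acc:.4f}  PRE={pre:.4f}  REC={rec:.4f}  F1={f1:.4f}")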
TABLE IV: Performance of different models (both base learners and ensemble methods) on the RoEduNet-SIMARGL2021 dataset. The results are organized by F1 score (highest to lowest).
Models | Accuracy (ACC) | Precision (PRE) | Recall (REC) | F1 Score
Random Forest (RF) | 1.00 | 1.00 | 1.00 | 1.00
Decision Tree (DT) | 1.00 | 1.00 | 1.00 | 1.00
Average (avg) | 1.00 | 1.00 | 1.00 | 1.00
Max Voting (Max Vot) | 0.999 | 1.00 | 1.00 | 1.00
Stacking | 0.99998 | 1.00 | 1.00 | 1.00
Weighted Average (weighed avg) | 0.998 | 1.00 | 1.00 | 1.00
Bagging (Bag) | 0.998 | 1.00 | 1.00 | 1.00
Blending (Bled) | 0.998 | 1.00 | 1.00 | 1.00
AdaBoost (ADA) | 0.99981 | 1.00 | 1.00 | 1.00
Cat Boost (CAT) | 0.998 | 0.998 | 0.998 | 0.998
Gradient Boosting (GB) | 0.988 | 0.99 | 0.99 | 0.998
XGBoost (XGB) | 0.996 | 0.996 | 0.996 | 0.996
Logistic Regression (LR) | 0.6781 | 0.56 | 0.68 | 0.58
Multi-Layer Perceptron (MLP) | 0.6178 | 0.38 | 0.62 | 0.47
D. AI Models
In this section, we outline the main AI models employed in
our study.
(i) Base Learners: We utilized four widely-used AI
classification algorithms as base learners, namely: Multi-Layer
Perceptron (MLP) [56], Decision Tree (DT) [57], Logistic
Regression (LR) [58], and k-Nearest Neighbor (KNN) [59].
These AI methods form the foundation of our evaluation,
allowing us to assess both their individual performances
and their contributions to our network intrusion detection
framework.
(ii) Ensemble Methods: In addition to the base learners,
our framework incorporates advanced ensemble techniques
such as stacking, blending, boosting (including Cat Boosting
(CAT)
[60], Light Gradient-Boosting Machine (LGBM)
[61], AdaBoost (ADA) [62], Gradient Boosting (GB) [63],
Extreme Gradient-Boosting (XGBoost) [64]), Random Forest,
and bagging techniques. Furthermore, we employ simpler
ensemble methods like Voting [65], Averaging [66], and
Weighted Averaging, alongside the aforementioned models.
These ensemble methods enhance the robustness and accuracy
of our intrusion detection system.
Hyperparameters : We provide our main hyperparameter
choices for each AI model and each ensemble method used in
our work in Appendix A.
Having provided the main experimental setup, we next detail
our evaluation results and findings.
E. RoEduNet-SIMARGL2021 Analysis
Main Results : Table IV presents the performance metrics
of various models on the RoEduNet-SIMARGL2021 dataset
when utilizing all available features. Notably, the majority
of models demonstrate remarkable performance across all
metrics, with particularly high values for precision, recall,
and F1 scores. The top-performing techniques and models,
including Random Forest (RF), Decision Tree (DT), Average
(Avg), Max Voting (Max Vot), Stacking, and Bagging (Bag),
consistently achieve perfect scores (1.00) across all metrics.
This convergence suggests robust model performance across
different ensemble techniques and underscores the efficacy of
utilizing the complete feature set for classification.
Conversely, Logistic Regression (LR) and Multi-Layer
Perceptron (MLP) exhibit comparatively lower performance
TABLE V: Model Training and Testing Timetable (Seconds)
for RoEduNet-SIMARGL2021 Dataset. The runtimes are
organized (from shortest to lengthiest). Logistic Regression is
the most efficient individual model while bagging is the most
time-efficient ensemble method for this dataset.
Models | Time (Seconds)
Logistic Regression (LR) | 122.72
Decision Tree (DT) | 900.74
Multi-Layer Perceptron (MLP) | 6200.45
Bagging (Bag) | 9834.9
Random Forest (RF) | 10800.85
XGBoost (XGB) | 12520.23
AdaBoost (ADA) | 12610.54
Cat Boost (CAT) | 18050.12
Gradient Boosting (GB) | 18250.02
Blending (Bled) | 19600.43
Average (avg) | 27216.16
Stacking | 28800.32
Weighted Average (weighed avg) | 30816.53
Max Voting (Max Vot) | 32400.452
metrics, indicating potential limitations in capturing the underlying patterns within the dataset.
Given the already optimal performance attained by several models, further optimization may yield marginal improvements at best. However, exploring alternative feature engineering strategies or investigating potential data augmentation techniques could offer avenues for enhancing model generalization and resilience to unseen data.
Runtime Performance: Table V offers insights into the
runtime performance of RoEduNet-SIMARGL2021 models,
measured in seconds. Models such as Logistic Regression
(LR) and Decision Tree (DT) demonstrate shorter runtimes,
reflecting their computational efficiency. In contrast, ensemble methods and gradient boosting algorithms like Stacking, Average, and Gradient Boosting exhibit longer runtimes due to their inherent complexity and resource-intensive nature. However, their high performance across evaluation metrics justifies the computational investment, particularly in accuracy-sensitive applications.
Considering the extensive RoEduNet-SIMARGL2021 dataset comprising approximately 30 million samples, prioritizing models with optimal performance across metrics while balancing computational efficiency is crucial. Models like LR and DT emerge as promising candidates due to their shorter runtimes, making them attractive for scenarios with computational constraints. Conversely, models with longer runtimes, such as Blending (Bled) and Stacking, may require substantial resources. Nevertheless, their superior performance justifies resource allocation, especially in applications prioritizing precision and recall metrics.
To manage computational complexity for blending and
stacking, experiments were conducted on a subset of the
dataset,
limiting the sample size to 20% through random
sampling, ensuring manageable overhead while retaining
analytical integrity.
1) Confusion Matrices of the Models and Ensemble Methods for RoEduNet-SIMARGL2021: The following results shown in
Fig. 2: Confusion Matrix of different models with perfect
performance on RoEduNet-SIMARGL2021 Dataset. These
models are: Random Forest
(RF), Decision Trees (DT),
averaging ensemble technique (avg), Weighted Averaging
(weighed avg), Bagging ensemble technique (Bag), and Max
Voting ensemble technique (Max Vot).
this section provide the classification accuracy of the methods via confusion matrices that are displayed using a heat map. The 14 different learners are tested to classify the four traffic classes (normal traffic and three attack types) in the RoEduNet-SIMARGL2021 Dataset. We group
those as follows:
(i) Confusion Matrices of Models with Perfect Performance: Figure 2 shows the confusion matrices of different models with perfect performance on the RoEduNet-SIMARGL2021 Dataset. These models are: Random Forest (RF), Decision Trees (DT), averaging ensemble technique (avg), Weighted Averaging (weighed avg), Bagging ensemble technique (Bag), and Max Voting ensemble technique (Max Vot). The figure shows that all normal samples along with the three intrusion classes (Denial of Service, Malware, and Port Scanning) have been predicted perfectly for these models.
(ii) Confusion Matrices of Models with Near-Perfect
Performance: The second category of models include those
that have near-perfect performances but have few errors in
prediction. This category includes Adaptive Boosting (ADA)
(Figure 3) and Stacking ensemble (Figure 4).
(iii) Confusion Matrices of Methods with Low Performance: We finally show the confusion matrices for the models with the lowest performances on our first dataset. Figure 5 shows the confusion matrix of the MLP Classifier on the RoEduNet-SIMARGL2021 Dataset. It has low prediction performance, particularly for Denial of Service and Port Scanning attacks. On the other hand, it has
Fig. 3: Confusion Matrix of Adaptive Boosting (ADA)
ensemble technique on RoEduNet-SIMARGL2021 Dataset. It
has near-perfect performance.
Fig. 5: Confusion Matrix of MLP Classifier (MLP) on RoEduNet-SIMARGL2021 Dataset. It has low prediction performance, particularly for Denial of Service and Port Scanning attacks. On the other hand, it has perfect performance in identifying normal samples.
Fig. 4: Confusion Matrix of Stacking ensemble technique
on RoEduNet-SIMARGL2021 Dataset. It has near-perfect
performance on the samples.
Fig. 6: Confusion Matrix of Logistic Regression (LR) on
RoEduNet-SIMARGL2021 Dataset. Here, label 0 represents
Denial of Service (DOS), 1 represents Malware, 2 represents
Normal, and 3 represents Port Scanning. LR has low
performance on DoS and Port Scanning intrusions while
having near-perfect performance on normal (label 2) samples.
Fig. 7: Confusion Matrix of Bagging (Bag), Blending
(Bled), and Cat Boosting (CAT) ensemble techniques on
RoEduNet-SIMARGL2021 Dataset.
perfect performance in identifying normal samples. Moreover, Figure 6 shows the confusion matrix of Logistic Regression (LR) on the RoEduNet-SIMARGL2021 Dataset. LR has low performance on Denial of Service and Port Scanning intrusions while having near-perfect performance on normal traffic instances (or samples).
(iv) Confusion Matrices for Boosting Models with Reduced Sample Size: Finally, we show confusion matrices for the models that have computational issues on the RoEduNet-SIMARGL2021 Dataset. Figure 8 shows the confusion matrices of these models, which are the Extreme Gradient Boosting (XGB) and Gradient Boosting (GB) ensemble techniques. Note that these two methods were tested on a reduced sample size, yet they achieve almost perfect performance on all intrusion classes. Similarly, Figure 7 shows the confusion matrices for the Bagging, Blending, and CAT boosting techniques.
Having finished our detailed evaluation analysis on
RoEduNet-SIMARGL2021 dataset, we next show the detailed
evaluation analysis for CICIDS-2017 dataset.
F. CICIDS-2017 Analysis
Main Results: Table VI presents the performance metrics of various models on the CICIDS-2017 dataset when utilizing all available features. The table provides insights into the effectiveness of different machine learning algorithms in classifying network traffic. Notably, several models demonstrate high performance across all metrics, with Random Forest (RF), Bagging (Bag), Blending (Bled), Weighted Average (weighed avg), Stacking, Gradient Boosting (GB),
Fig. 8: Confusion Matrix of Extreme Gradient Boosting (XGB) and Gradient Boosting (GB) ensemble techniques on RoEduNet-SIMARGL2021 Dataset. Note that these two methods were tested on a reduced sample size.
Decision Tree (DT), and Cat Boost (CAT) consistently achieving near-perfect scores (F1 score of 0.998 or higher). This robust performance underscores the suitability of ensemble methods and tree-based algorithms for the classification task on this dataset.
Conversely, Logistic Regression (LR) and AdaBoost (ADA)
exhibit comparatively lower performance metrics, suggesting
potential limitations in capturing the complex patterns present
in the dataset. The overlapping top-performing models further
emphasize their stability and reliability across different feature
sets.
Runtime Performance: Table VII presents the runtime
performance of various machine learning models on the
CICIDS-2017 dataset, measured in seconds. Considering the
importance of achieving optimal results while also minimizing
computational overhead,
runtime performance becomes a
crucial factor in model selection. Among the models exhibiting
perfect F1 scores (i.e., RF, DT), Decision Tree (DT) emerges
as the most time-efficient option, requiring approximately four
minutes for training and testing combined. This makes it an
attractive choice for scenarios where computational resources
are limited.
Moving beyond models with perfect F1 scores, Logistic
Regression (LR) stands out as the fastest option among those
achieving relatively good performance across all metrics.
Conversely, models like Bagging (Bag) and Blending (Bled) demonstrate significantly longer runtimes, exceeding two hours. While these models may offer competitive performance,
their computational demands make them less practical for
resource-constrained applications. We also emphasize that
when an IDS is online (in real time), the ensemble models
are already trained, thus they can still be used for prediction.
TABLE VI: Performance of different models (individual ML models and different ensemble methods) organized by F1 score (highest to lowest) on CICIDS-2017 Dataset.
Models | Accuracy (ACC) | Precision (PRE) | Recall (REC) | F1 Score
Random Forest (RF) | 1.00 | 1.00 | 1.00 | 1.00
Bagging (Bag) | 0.998 | 1.00 | 1.00 | 1.00
Blending (Bled) | 0.998 | 1.00 | 1.00 | 1.00
Weighted Average (weighed avg) | 0.998 | 1.00 | 1.00 | 1.00
Stacking | 0.997 | 1.00 | 1.00 | 1.00
Gradient Boosting (GB) | 0.988 | 0.99 | 0.99 | 0.99
Decision Tree (DT) | 0.998 | 0.998 | 0.998 | 0.998
Cat Boost (CAT) | 0.998 | 0.998 | 0.998 | 0.998
Multi-Layer Perceptron (MLP) | 0.996 | 0.996 | 0.996 | 0.996
XGBoost (XGB) | 0.996 | 0.996 | 0.996 | 0.996
Average (avg) | 0.996 | 0.997 | 0.996 | 0.996
Max Voting (Max Vot) | 0.926 | 0.89 | 0.93 | 0.90
Logistic Regression (LR) | 0.889 | 0.866 | 0.899 | 0.877
AdaBoost (ADA) | 0.891 | 0.81 | 0.89 | 0.85
TABLE VII: Model Training and Testing Timetable (Seconds)
for CICIDS-2017 Dataset. Logistic Regression and Decision
Tree are the most efficient individual model while Max Voting
is the most time-efficient ensemble learning method.
Models | Time (Seconds)
Logistic Regression (LR) | 120.31
Decision Tree (DT) | 240.7
Max Voting (Max Vot) | 670.25
Random Forest (RF) | 1402.8
AdaBoost (ADA) | 2040.54
Average (avg) | 2160.11
XGBoost (XGB) | 2220.00
Bagging (Bag) | 2305.31
Cat Boost (CAT) | 2640.02
Gradient Boosting (GB) | 3000.02
Multi-Layer Perceptron (MLP) | 4200.45
Weighted Average (weighed avg) | 5040.3
Blending (Bled) | 9600.52
Stacking | 9720.54
Fig. 9: Confusion Matrix of Decision Tree (DT) on
CICIDS-2017 Dataset. Most attacks are predicted very
efficiently (except Web Attack and Bot).
Additionally, it is worth noting that the dataset itself plays a significant role in determining runtime complexity. The CICIDS-2017 dataset is large and complex, which further accentuates the importance of efficient model selection. By analyzing the runtime performance of various models, stakeholders gain valuable insights into the computational demands associated with each algorithm, enabling informed decisions regarding model deployment and scalability.
1) Confusion Matrices for Different Individual and Ensemble Models for CICIDS-2017 Dataset: We next show confusion matrices for the 14 different learners tested to classify various attacks in the CICIDS-2017 Dataset. Due to the higher number of attack classes here compared to RoEduNet-SIMARGL2021, the confusion matrices differ for most models. We now provide them for different models along with their main insights.
(i) Confusion Matrices of
Individual Models on
CICIDS-2017: Figure 9 shows the confusion matrix of
Decision Tree (DT) on CICIDS-2017 Dataset. It shows that
most attacks are predicted very efficiently (except Web Attack
and Bot). On the other hand, Figure 10 shows the confusion
matrix of Logistic Regression (LR) on CICIDS-2017, which
shows much lower prediction accuracy.
(ii) Confusion Matrices of Simple Ensemble Methods
on CICIDS-2017: Figure 11 shows the confusion matrix
of Max Voting (Max Vot) on CICIDS-2017, which shows
Fig. 10: Confusion Matrix of Logistic Regression (LR)
on CICIDS-2017 Dataset. It shows much lower prediction
accuracy for different attacks.
Fig. 11: Confusion Matrix of Max Voting (Max Vot) simple
ensemble technique on CICIDS-2017 Dataset. It has lower
prediction accuracy for several attacks.
Fig. 13: Confusion Matrix of Bagging (Bag) ensemble technique on CICIDS-2017 Dataset. Most attacks are predicted very efficiently (except Web Attack-XSS and Bot).
Fig. 12: Confusion Matrix of Weighted Averaging (Weighted avg) ensemble technique on CICIDS-2017 Dataset. Most attacks are predicted very efficiently (except Web Attack and Bot).
lower prediction accuracy for several attacks. On the contrary, Figure 12 shows the confusion matrix of Weighted Averaging (Weighted avg) on CICIDS-2017 Dataset. It shows that most attacks are predicted very efficiently (except Web Attack and Bot). The Averaging simple ensemble method shows the same behaviour as Weighted Averaging (figure omitted).
(iii) Confusion Matrices of Advanced Ensemble Methods on CICIDS-2017 Dataset: Figures 13-14 show the confusion matrices of Bagging (Bag) and Blending (Bled) on CICIDS-2017 Dataset, respectively. They show that both ensemble methods predict most attacks very efficiently (except Web Attack-XSS and Bot). Along the same lines, Figure 15 shows the confusion matrix for the Cat Boosting (CAT) ensemble technique on CICIDS-2017 Dataset. It has near-perfect performance for all intrusion classes except Bot. Similarly, Figure 16 shows that the Stacking ensemble method predicts most attacks very efficiently (except Web Attack-XSS and Bot). On the other hand, Boosting ensemble learning techniques provide lower prediction accuracy for different classes for the CICIDS-2017 dataset (as shown in Figures 17-18). However, we emphasize that the Extreme Gradient Boosting ensemble technique (XGB) has better performance compared to Adaptive Boosting (ADA).
G. Performance Enhancement Across Datasets
CICIDS-2017: Our analysis on the CICIDS-2017 dataset revealed significant gains achieved through the implementation of our framework. The RF was the best model that demonstrated a notable improvement in accuracy, precision, recall (achieving a perfect score of 1.000), and F1 score.
Furthermore, leveraging this base learner in conjunction with the best ensemble learning methods (including bagging, blending, weighted averaging, and stacking) yields near-perfect performance metrics, including accuracy and precision, while maintaining a perfect recall and F1 score.
RoEduNet-SIMARGL2021: Similarly, on the RoEduNet-SIMARGL2021 dataset, employing our framework yielded remarkable improvements. The best ensemble learning model is Averaging, which achieved perfect scores of 1.000 across all performance metrics, including accuracy, precision, recall, and F1 score. Similarly, Max Voting,
Fig. 14: Confusion Matrix of Blending (Bled) ensemble
technique on CICIDS-2017 Dataset. Most attacks are predicted
very efficiently (except Web Attack-XSS and Bot).
Fig. 16: Confusion Matrix of Stacking ensemble technique
on CICIDS-2017 Dataset. Most attacks are predicted very
efficiently (except Web Attack-XSS and Bot).
Fig. 15: Confusion Matrix of Cat Boosting (CAT) ensemble
technique on CICIDS-2017 Dataset.
It has near-perfect
performance for all intrusion classes except Bot.
Fig. 17: Confusion Matrix of Adaptive Boosting (ADA)
ensemble technique on CICIDS-2017 Dataset. It has the lowest
performance across all models.
Stacking, Bagging, Boosting, and Weighted Averaging yield
almost perfect performance across all four metrics.
Overall, our comprehensive evaluation showcases
the
potential benefits of our proposed simple and advanced
ensemble learning. The detailed breakdown of performance
metrics for each dataset underscores the efficacy of our
approach and its ability to enhance classification accuracy and
reliability across diverse datasets.
We also emphasize that one strong point in this work is
categorizing the different
individual models and ensemble
methods based on their confusion matrices (which show
different aspects of model’s performance on different intrusion
classes). In particular,
these confusion matrices can give
insights about which model can be used depending on the
expected types of network attacks for the organization that
the security analyst is monitoring. This also can lead to doing
“ensemble of ensembles,” where we fuse different ensemble
methods to confront different network attacks (where each
ensemble method has some strong performance on one or more
of these attacks).
Having finished our detailed evaluation analysis on our two
datasets, we next show the main discussions and limitations
of our work in the next section.
TABLE VIII: A summary of top-5 models for CICIDS-2017
and RoEduNet-SIMARGL2021 datasets.
Ranking | CICIDS-2017 | RoEduNet-SIMARGL2021
1st | Random Forest (RF) | Random Forest (RF)
2nd | Bagging (Bag) | Decision Tree (DT)
3rd | Blending (Bled) | Averaging (avg)
4th | Weighted Averaging | Max Voting
5th | Stacking | Stacking
Performance Metrics: This section focuses on metrics such
as Accuracy, Precision, Recall, and F1 scores utilized to assess
individual models and ensemble techniques. Notably, superior
results, indicated by higher metric scores, were observed for
models and ensemble techniques including Random Forest
(RF), Decision Tree (DT), Weighted Average (weighed avg),
Stacking, Bagging (Bag), Blending (Bled), AdaBoost (ADA),
Cat Boost (CAT), Gradient Boosting (GB), and XGBoost
(XGB).
Runtime: The runtime analysis involves evaluating the
execution times of the 14 individual models and ensemble
techniques utilized. In Section V, a runtime table is provided,
arranged from the fastest to the slowest models. Following this
analysis, the fastest models overall include LR, DT, MLP, and
RF. Models with an average runtime encompass Bag, XGB,
ADA, CAT, and GB. Conversely, the slowest models comprise
Bled, avg, stacking, max Vot, and weighted avg.
Optimal Models: By intersecting these results (performance metrics and runtime), we identify the optimal models, including DT, RF, Bag, Bled, ADA, CAT, and GB. Subsequently, Stacking, avg, weighted avg, and Max Vot are also recognized, albeit with slower runtimes. These models demonstrated superior performance across the metrics outlined in this study.
Superiority of Ensemble Learning Methods: Table VIII
presents a summary of top-5 models (in terms of F-1 score)
for CICIDS-2017 and RoEduNet-SIMARGL2021 Datasets.
We notice that
the ensemble methods have superiority
over individual models for both datasets. In particular, for
CICIDS-2017 dataset, all
the top-5 models are ensemble
methods. On the other hand, for RoEduNet-SIMARGL2021
dataset we have four ensemble methods in the top-5 methods
(where decision tree (DT) is the only individual model in this
list of top-5 models).
3) Random Forest Assessment: The findings highlight
Random Forest (RF) as one of the top-performing ensemble
techniques across both datasets. However,
its applicability
may be limited by a tendency towards overfitting and bias
when deployed in diverse scenarios. For instance, concerning
the CICIDS-2017 dataset, the most effective base learner is
decision tree (DT), which achieved an Accuracy of 0.998,
Precision of 1.0, Recall of 1.0, and F1 score of 1.0.
Similarly, for the RoEduNet-SIMARGL2021 dataset, the next
best-performing base learner is decision tree (DT), attaining
an Accuracy of 1.0, Precision of 1.0, Recall of 1.0, and F1
score of 1.0.
4) Advantages of Ensemble Learning Framework: The ensemble learning framework offers versatility in its construction approach. Our setup facilitates a
Fig. 18: Confusion Matrix of Extreme Gradient Boosting
(XGB) ensemble technique on CICIDS-2017 Dataset. It has
lower performance compared to other advanced ensemble
methods (except ADA).
VI. LIMITATIONS, DISCUSSION, AND FUTURE
DIRECTIONS
A. DISCUSSION
1) Significance of our Framework:
In the contemporary
landscape of flourishing information, the frequency of network
attacks is expected to rise (as evidenced by recent studies
such as the one conducted by the Center for Strategic &
International Studies (CSIS) [67]). Despite the evolution of
Intrusion Detection Systems (IDS), security analysts still
face the challenge of verifying potential attack incidents in
this rapidly evolving environment. Hence, having a reliable
framework for intrusion detection systems can significantly mitigate this challenge by reducing the instances of false positive rates (FPR) to scrutinize and enabling a focused analysis of critical traffic data. The framework presented in our study contributes to addressing this issue via enhancing the performance metrics (including accuracy, recall, precision, and F1 score) of intrusion detection systems, which is pivotal for their deployment to ensure better network security in the aforementioned modern systems. Our framework was also evaluated using confusion matrices (which show intrusion-specific performance of different individual methods and ensemble approaches). The time analysis can also help in choosing the best model, depending on the needs of the security analyst.
2) Summary of Results: All findings presented in this paper
are succinctly outlined in Section V, addressing key inquiries
readers may have, such as determining the optimal AI model
and ensemble technique and assessing their viability. Section V
provides a comprehensive overview of the results, highlighting
performance metrics and crucial aspects of the ensemble
technique, including runtime and confusion metrics, employed
in evaluating the framework.
Our study also provides a comprehensive evaluation of the two datasets utilized, including RoEduNet-SIMARGL2021, which, to the best of our knowledge, has been evaluated in this context by very limited works within the existing literature [68].
However, our current work has several differences from
the prior work [68]. First, our work considers all
the
individual classes of CICIDS-2017 (total of 15 classes), and
RoEduNet-SIMARGL2021 (total of 4 classes). The previous
work simplified the problem for CICIDS-2017 by grouping the attacks into 7 classes, and by not using the malware class for RoEduNet-SIMARGL2021. Moreover, this current work presents in detail the confusion matrices used for each case scenario, shedding light on the classes that ensemble learning can identify best and on other attacks whose detection might need enhancement. This important analysis is not present in the previous work [68] since it considered
the problem from a holistic point of view. Furthermore,
this current work employs the blending technique which is
not present in the prior work [68]. Finally, the work [68]
focused on hierarchical ensemble learning where it built a
two-level ensemble learning framework for network intrusion
detection tasks, which is different from our current focus of
comprehensive comparison of popular ensemble methods.
Furthermore, this study presents an extensive evaluation
that distinguishes it
from previous works by analyzing
14 models/ensemble methods across two distinct datasets,
yielding results for Accuracy, Precision, Recall, F1, Confusion
Matrices, and Runtime. Notably, we achieve near-perfect
results for several models in terms of F1 score, emphasizing
the significance of these metrics for Intrusion Detection Systems (IDS). Given the imperative for security analysts, stakeholders, and users to accurately and rapidly identify potential threats, undetected attacks can pose substantial risks.
It is worth emphasizing that we have taken the extra step
of making our codes open source. Designed to be easily
adaptable for use with other datasets and further analysis,
they do not constitute a deployable solution for production,
as they have not undergone extensive testing or validation
by an authoritative entity. Instead, they serve as a proof of
concept highlighting the benefits of our proposed framework
and represent a crucial step towards enhancing the field of
AI-based network IDS.
B. LIMITATIONS
1) Dataset Analysis and Biases: The experiments conducted enable us to draw several noteworthy conclusions
regarding the datasets employed. Firstly, it is evident that the
models within our framework achieved superior gains when
applied to CICIDS-2017 and RoEduNet-SIMARGL2021
datasets. Each dataset exhibits distinct characteristics. For
example, RoEduNet-SIMARGL2021 comprises nearly 30
million data points with approximately 20 feature columns,
whereas CICIDS-2017 contains almost 2 million data
points but approximately 70 feature columns (refer to Table III). This discrepancy accounts for the observed increase in runtime for these datasets. Additionally, the disparity in data volume and class count is noteworthy, with RoEduNet-SIMARGL2021 featuring four prediction classes and CICIDS-2017 incorporating 15 prediction classes.
Interestingly, AI models appear to readily learn patterns in
the RoEduNet-SIMARGL2021 dataset owing to its extensive
size and fewer prediction classes. Conversely, despite the
higher number of prediction classes in CICIDS-2017, models
demonstrate adeptness and achieve commendable scores,
possibly due to its heavily unbalanced class distribution (as
depicted in Table III, where four classes combined represent
less than 1% of the entire dataset). We stress here that this
is close to a real-world scenario since most of the traffic
is normal traffic. This limitation underscores the necessity
for future research to explore alternative datasets or employ
uncalibrated models in ensemble learning to broaden
benchmarking and testing within our framework.
C. FUTURE DIRECTIONS
While our work represents a foundational step towards advancing AI-based Intrusion Detection Systems (IDS), there are numerous avenues for future exploration and
refinement within our framework. Expanding our framework
to encompass additional datasets, diverse AI models, and a
broader array of ensemble methods holds promise for creating
a more comprehensive and insightful framework.
Another promising avenue for future research involves
delving into multi-level ensemble learning approaches, which
involve utilizing multi-level classification techniques to further
enhance detection accuracy and robustness. Additionally,
exploring a wider range of feature selection methods could
provide valuable insights into optimizing model performance
and interpretability.
Moreover, integrating explainable AI (XAI) frameworks to generate explanations for ensemble methods presents an intriguing direction for future investigation. By providing transparent and interpretable explanations for model predictions, XAI techniques could enhance the trust in and understanding of IDS [69].
Furthermore, the ultimate goal of our ongoing efforts includes the implementation of real-time capabilities and validation through collaboration with security experts and analysts. This collaboration aims to gather invaluable insights and feedback, leading to continuous improvements and real-world applicability of our framework.
VII. CONCLUSION
The primary goal of a security intrusion detection tool is to
serve as a robust shield against potential intruders. Leveraging
artificial
intelligence (AI) can significantly enhance the
automation and effectiveness of these tools. The increasing
frequency of intrusions in networked systems has driven
extensive research into developing AI techniques for intrusion
detection systems (IDS). While various AI models have been
deployed for this purpose, each model has its own strengths
and weaknesses, presenting a challenge in selecting the most
suitable model for a given dataset.
To address this challenge, combining multiple AI models
can substantially improve their overall performance and
applicability in network intrusion detection. This paper aims
to bridge this crucial gap by evaluating a diverse array
of ensemble methods for IDS. Specifically, we present a
comprehensive comparative study of individual models and
both simple and advanced ensemble learning frameworks
for network intrusion detection tasks. Our approach involves
training base learners and ensemble methods to generate
evaluation metrics.
We present results for 14 combinations of individual and
ensemble models within our framework, utilizing various
boosting, stacking, and blending methods on diverse base
learners. Evaluation is conducted on two network intrusion
datasets, each possessing unique characteristics. Our analysis
categorizes AI models based on their performance metrics
(including accuracy, precision,
recall, and F1-score), and
runtime, highlighting the advantages of learning across various
setups for two very important datasets.
The best models for each dataset were identified based
on their performance metrics. For the CICIDS-2017 dataset,
the top three ensemble models were Random Forest (RF),
Bagging (Bag), and Blending (Bled). These models achieved
exceptional results, with the Random Forest model achieving
perfect scores in Accuracy (ACC), Precision (PRE), Recall
(REC), and F1 Score. Bagging and Blending models also
performed remarkably well, achieving near-perfect metrics across the board (see Table VIII). Similarly, for the RoEduNet-SIMARGL2021 dataset, the top three models were Random Forest (RF), Decision Tree (DT), and Bagging (Bag). Both the Random Forest and Decision Tree models achieved perfect scores in all performance metrics, while the Bagging model performed almost perfectly (see Table VIII).
Our evaluation results show that using ensemble learning
was beneficial as it significantly enhanced the performance of
the models, leading to high accuracy, precision, recall, and F1
scores across both datasets.
We contribute to the community by providing our source
codes, offering a foundational ensemble learning framework
for network intrusion detection that can be expanded with new
models and datasets. We also provide insights into the best
models for each dataset, highlighting common and distinct
behaviors among them through confusion matrices, which
influence their performance and results. We conclude with
an in-depth discussion of our main findings and the primary
benefits of our framework. This study represents a significant
advancement
in utilizing ensemble learning methods for
network Intrusion Detection Systems (IDS), achieved through
comprehensive evaluations and comparisons of various metrics
to assess the effectiveness of these ensemble methods.
REFERENCES
[1] S. Northcutt and J. Novak, Network intrusion detection.
Sams
Publishing, 2002.
[2] B. Mukherjee, L. T. Heberlein, and K. N. Levitt, “Network intrusion
detection,” IEEE network, vol. 8, no. 3, pp. 26–41, 1994.
[3] G. Apruzzese, M. Andreolini, L. Ferretti, M. Marchetti,
and
M. Colajanni, “Modeling realistic adversarial attacks against network
intrusion detection systems,” Digital Threats: Research and Practice
(DTRAP), vol. 3, no. 3, pp. 1–19, 2022.
[4] A. L. Buczak and E. Guven, “A survey of data mining and
machine learning methods for cyber security intrusion detection,” IEEE
Communications surveys & tutorials, vol. 18, no. 2, pp. 1153–1176,
2015.
[5] A. S. Dina and D. Manivannan, “Intrusion detection based on machine
learning techniques in computer networks,” Internet of Things, vol. 16,
p. 100462, 2021.
[6] J. Kim, N. Shin, S. Y. Jo, and S. H. Kim, “Method of intrusion detection
using deep neural network,” in 2017 IEEE international conference on
big data and smart computing (BigComp).
IEEE, 2017, pp. 313–316.
[7] C. Tang, N. Luktarhan, and Y. Zhao, “Saae-dnn: Deep learning method
on intrusion detection,” Symmetry, vol. 12, no. 10, p. 1695, 2020.
[8] M. A. Ferrag, L. Maglaras, A. Ahmim, M. Derdour, and H. Janicke,
“Rdtids: Rules and decision tree-based intrusion detection system for
internet-of-things networks,” Future internet, vol. 12, no. 3, p. 44, 2020.
[9] M. Al-Omari, M. Rawashdeh, F. Qutaishat, M. Alshira’H, and
N. Ababneh, “An intelligent tree-based intrusion detection model for
cyber security,” Journal of Network and Systems Management, vol. 29,
no. 2, pp. 1–18, 2021.
[10] T. G. Nick and K. M. Campbell, “Logistic regression,” Topics in
Biostatistics, pp. 273–301, 2007.
[11] R. Panigrahi, S. Borah, M. Pramanik, A. K. Bhoi, P. Barsocchi, S. R. Nayak, and W. Alnumay, “Intrusion detection in cyber–physical environment using hybrid naïve bayes—decision table and multi-objective evolutionary feature selection,” Computer Communications, vol. 188, pp. 133–144, 2022.
[12] A. K. Balyan, S. Ahuja, U. K. Lilhore, S. K. Sharma, P. Manoharan,
A. D. Algarni, H. Elmannai, and K. Raahemifar, “A hybrid intrusion
detection model using ega-pso and improved random forest method,”
Sensors, vol. 22, no. 16, p. 5986, 2022.
[13] S. Waskle, L. Parashar, and U. Singh, “Intrusion detection system using
pca with random forest approach,” in 2020 International Conference on
Electronics and Sustainable Communication Systems (ICESC).
IEEE,
2020, pp. 803–808.
[14] S. Arisdakessian, O. A. Wahab, A. Mourad, H. Otrok, and M. Guizani,
“A survey on iot intrusion detection: Federated learning, game theory,
social psychology and explainable ai as future directions,” IEEE Internet
of Things Journal, 2022.
[15] S. I. Sabev, “Integrated approach to cyber defence: Human in the loop.
technical evaluation report,” Information & Security: An International
Journal, vol. 44, pp. 76–92, 2020.
[16] S. D. DCunha, “Is AI Shifting The Human-In-The-Loop Model In Cybersecurity?” https://datatechvibe.com/ai/is-ai-shifting-the-human-in-the-loop-model-in-cybersecurity/, 2017, [Online; accessed 21-October-2021].
In
[17] J. Mijalkovic and A. Spognardi, “Reducing the false negative rate in
deep learning based network intrusion detection systems,” Algorithms,
vol. 15, no. 8, p. 258, 2022.
[18] N. H. Al-A’araji, S. O. Al-Mamory, and A. H. Al-Shakarchi,
“Classification and clustering based ensemble techniques for intrusion
detection systems: A survey,” in Journal of Physics: Conference Series,
vol. 1818, no. 1. IOP Publishing, 2021, p. 012106.
[19] A. A. Aburomman and M. B. I. Reaz, “A survey of intrusion detection
systems based on ensemble and hybrid classifiers,” Computers &
security, vol. 65, pp. 135–152, 2017.
[20] B. A. Tama and S. Lim, “Ensemble learning for intrusion detection
systems: A systematic mapping study and cross-benchmark evaluation,”
Computer Science Review, vol. 39, p. 100357, 2021.
[21] R. Lazzarini, H. Tianfield, and V. Charissis, “A stacking ensemble of
deep learning models for iot intrusion detection,” Knowledge-Based
Systems, vol. 279, p. 110941, 2023.
[Online]. Available: https:
//www.sciencedirect.com/science/article/pii/S0950705123006913
[22] A. Mahfouz, A. Abuhussein, D. Venugopal, and S. Shiva, “Ensemble
classifiers for network intrusion detection using a novel network attack
dataset,” Future Internet, vol. 12, no. 11, 2020. [Online]. Available:
https://www.mdpi.com/1999-5903/12/11/180
[23] N. Thockchom, M. Singh, and U. Nandi, “A novel ensemble
learning-based model for network intrusion detection,” Complex &
Intelligent Systems, vol. 9, 04 2023.
[24] Y. Mirsky, T. Doitshman, Y. Elovici, and A. Shabtai, “Kitsune: An
ensemble of autoencoders for online network intrusion detection,” 2018.
[25] N. H. Al-A’araji, S. O. Al-Mamory, and A. H. Al-Shakarchi,
“Classification and clustering based ensemble techniques for intrusion
detection systems: A survey,” Journal of Physics: Conference Series,
[Online]. Available:
vol. 1818, no. 1, p. 012106, mar 2021.
https://dx.doi.org/10.1088/1742-6596/1818/1/012106
[26] R. Caruana, A. Niculescu-Mizil, G. Crew, and A. Ksikes, “Ensemble
selection from libraries of models,” 09 2004.
[27] A. Zainal, M. Maarof, and S. M. Shamsuddin, “Ensemble classifiers for
network intrusion detection system,” Journal of Information Assurance
and Security, vol. 4, pp. 217–225, 07 2009.
[28] A. Z. Kiflay, A. Tsokanos, and R. Kirner, “A network intrusion detection
system using ensemble machine learning,” in 2021 International
Carnahan Conference on Security Technology (ICCST), 2021, pp. 1–6.
[29] S. Das, S. Saha, A. T. Priyoti, E. K. Roy, F. T. Sheldon, A. Haque, and
S. Shiva, “Network intrusion detection and comparative analysis using
ensemble machine learning and feature selection,” IEEE Transactions
on Network and Service Management, vol. 19, no. 4, pp. 4821–4833,
2022.
[30] H. Zhang, J.-L. Li, X.-M. Liu, and C. Dong, “Multi-dimensional
feature fusion and stacking ensemble mechanism for network intrusion
detection,” Future Generation Computer Systems, vol. 122, pp.
130–143, 2021.
[Online]. Available: https://www.sciencedirect.com/
science/article/pii/S0167739X2100114X
[31] Y.-F. Hsu, Z. He, Y. Tarutani, and M. Matsuoka, “Toward an online
network intrusion detection system based on ensemble learning,” in 2019
IEEE 12th International Conference on Cloud Computing (CLOUD),
2019, pp. 174–178.
[32] Y. Alotaibi and M. Ilyas, “Ensemble-learning framework for intrusion
detection to enhance internet of things’ devices security,” Sensors,
vol. 23, no. 12, 2023. [Online]. Available: https://www.mdpi.com/
1424-8220/23/12/5568
[33] R. Kumar Singh Gautam and E. A. Doegar, “An ensemble approach
for intrusion detection system using machine learning algorithms,” in
2018 8th International Conference on Cloud Computing, Data Science
& Engineering (Confluence), 2018, pp. 14–15.
[34] T. Divyasree and K. Sherly, “A network intrusion detection system based on ensemble cvm using efficient feature selection approach,” Procedia Computer Science, vol. 143, pp. 442–449, 2018, 8th International Conference on Advances in Computing & Communications (ICACC-2018). [Online]. Available: https://www.sciencedirect.com/science/article/pii/S1877050918321136
[35] M.-E. Mihailescu, D. Mihai, M. Carabas, M. Komisarek, M. Pawlicki,
W. Hołubowicz, and R. Kozik, “The proposition and evaluation of
the roedunet-simargl2021 network intrusion detection dataset,” Sensors,
vol. 21, no. 13, p. 4319, 2021.
[36] R. Panigrahi and S. Borah, “A detailed analysis of cicids2017 dataset
for designing intrusion detection systems,” International Journal of
Engineering & Technology, vol. 7, no. 3.24, pp. 479–482, 2018.
[37] B. E. Strom, A. Applebaum, D. P. Miller, K. C. Nickels, A. G.
Pennington, and C. B. Thomas, “Mitre att&ck: Design and philosophy,”
in Technical report. The MITRE Corporation, 2018.
[38] MITRE ATT&CK, “Malware repository,” https://attack.mitre.org/datasources/DS0004/, 2021, [Online; accessed 30-April-2024].
[39] C. B. Lee, C. Roedel, and E. Silenok, “Detection and characterization
of port scan attacks,” Univeristy of California, Department of Computer
Science and Engineering, 2003.
[40] Kurniabudi, D. Stiawan, Darmawijoyo, M. Y. Bin Idris, A. M. Bamhdi,
and R. Budiarto, “Cicids-2017 dataset feature analysis with information
gain for anomaly detection,” IEEE Access, vol. 8, pp. 132 911–132 921,
2020.
[41] MITRE ATT&CK, “Drive-by Compromise,” https://attack.mitre.org/techniques/T1189/, 2023, [Online; accessed 21-October-2023].
[42] Y. Chen, Q. Lin, W. Wei, J. Ji, K.-C. Wong, and C. A. Coello Coello, “Intrusion detection using multi-objective evolutionary convolutional neural network for internet of things in fog computing,” Knowledge-Based Systems, vol. 244, p. 108505, 2022. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S0950705122002179
[43] V. Gorodetski and I. Kotenko, “Attacks against computer network:
Formal grammar-based framework and simulation tool,” in International
Workshop on Recent Advances in Intrusion Detection. Springer, 2002,
pp. 219–238.
[44] M. Skwarek, M. Korczynski, W. Mazurczyk,
and A. Duda,
“Characterizing vulnerability of dns axfr transfers with global-scale
scanning,” in 2019 IEEE Security and Privacy Workshops (SPW). IEEE,
2019, pp. 193–198.
[45] A. Khan, H. Kim, and B. Lee, “M2mon: Building an mmio-based security reference monitor for unmanned vehicles,” 2021.
[46] S. R. Hussain, I. Karim, A. A. Ishtiaq, O. Chowdhury, and E. Bertino, “Noncompliance as deviant behavior: An automated black-box noncompliance checker for 4g lte cellular devices,” in Proceedings of the 2021 ACM SIGSAC Conference on Computer and Communications Security, 2021, pp. 1082–1099.
[47] O. Mirzaei, R. Vasilenko, E. Kirda, L. Lu, and A. Kharraz, “Scrutinizer:
Detecting code reuse in malware via decompilation and machine
learning,” in Detection of Intrusions and Malware, and Vulnerability
Assessment: 18th International Conference, DIMVA 2021, Virtual Event,
July 14–16, 2021, Proceedings 18. Springer, 2021, pp. 130–150.
[48] S. Lukacs, D. H. Lutas, A. V. Colesa et al., “Strongly isolated malware scanning using secure virtual containers,” Aug. 25 2015, US Patent 9,117,081.
[49] A. Kim, M. Park, and D. H. Lee, “Ai-ids: Application of deep
learning to real-time web intrusion detection,” IEEE Access, vol. 8, pp.
70 245–70 261, 2020.
[50] L. Dhanabal and S. Shantharajah, “A study on nsl-kdd dataset
for intrusion detection system based on classification algorithms,”
International
journal of advanced research in computer and
communication engineering, vol. 4, no. 6, pp. 446–452, 2015.
[51] “Flow information elements — nprobe 10.1 documentation.” [Online]. Available: https://www.ntop.org/guides/nprobe/flow_information_elements.html
[52] Ahlashkari, “CICFlowMeter/ReadMe.txt at master · ahlashkari/CICFlowMeter,” Jun 2021. [Online]. Available: https://github.com/ahlashkari/CICFlowMeter/blob/master/ReadMe.txt
[53] B. Claise, “Cisco systems netflow services export version 9,” Tech. Rep.,
2004.
[54] I. Sharafaldin, A. Gharib, A. H. Lashkari, and A. A. Ghorbani, “Towards
a reliable intrusion detection benchmark dataset,” Software Networking,
vol. 2018, no. 1, pp. 177–200, 2018.
[55] C. A. Stewart, V. Welch, B. Plale, G. C. Fox, M. Pierce, and T. Sterling,
“Indiana university pervasive technology institute,” techreport, 2017.
[56] J. O. Mebawondu, O. D. Alowolodu, J. O. Mebawondu, and A. O.
Adetunmbi, “Network intrusion detection system using supervised
learning paradigm,” Scientific African, vol. 9, p. e00497, 2020.
[57] Y.-Y. Song and L. Ying, “Decision tree methods: applications for
classification and prediction,” Shanghai archives of psychiatry, vol. 27,
no. 2, p. 130, 2015.
[58] S. Dreiseitl and L. Ohno-Machado, “Logistic regression and artificial
neural network classification models: a methodology review,” Journal
of biomedical informatics, vol. 35, no. 5-6, pp. 352–359, 2002.
[59] W. Li, P. Yi, Y. Wu, L. Pan, and J. Li, “A new intrusion detection system
based on knn classification algorithm in wireless sensor network,”
Journal of Electrical and Computer Engineering, vol. 2014, 2014.
[60] A. V. Dorogush, V. Ershov, and A. Gulin, “Catboost: gradient boosting
with categorical features support,” arXiv preprint arXiv:1810.11363,
2018.
[61] D. Jin, Y. Lu, J. Qin, Z. Cheng, and Z. Mao, “Swiftids: Real-time
intrusion detection system based on lightgbm and parallel intrusion
detection mechanism,” Computers & Security, vol. 97, p. 101984, 2020.
[62] A. Yulianto, P. Sukarno, and N. A. Suwastika, “Improving adaboost-based intrusion detection system (ids) performance on cic ids 2017 dataset,” in Journal of Physics: Conference Series, vol. 1192. IOP Publishing, 2019, p. 012018.
[63] A. Natekin and A. Knoll, “Gradient boosting machines, a tutorial,”
Frontiers in Neurorobotics, vol. 7, p. 21, 2013.
[64] S. S. Dhaliwal, A.-A. Nahid, and R. Abbas, “Effective intrusion
detection system using xgboost,” Information, vol. 9, no. 7, p. 149, 2018.
[65] T. G. Dietterich, “Ensemble methods in machine learning,” in International workshop on multiple classifier systems. Springer, 2000, pp. 1–15.
[66] M. Zounemat-Kermani, O. Batelaan, M. Fadaee, and R. Hinkelmann,
“Ensemble machine learning paradigms in hydrology: A review,”
Journal of Hydrology, vol. 598, p. 126266, 2021.
[67] I. Insights, “42 Cyber Attack Statistics by Year: A Look at the Last Decade,” https://sectigostore.com/blog/42-cyber-attack-statistics-by-year-a-look-at-the-last-decade/, February 2020, [Online; accessed 10-March-2023].
[68] O. Arreche, I. Bibers, and M. Abdallah, “A two-level ensemble learning
framework for enhancing network intrusion detection systems,” IEEE
Access, vol. 12, pp. 83 830–83 857, 2024.
[69] B. Mahbooba, M. Timilsina, R. Sahal, and M. Serrano, “Explainable
artificial intelligence (xai) to enhance trust management in intrusion
detection systems using decision tree model,” Complexity, vol. 2021,
2021.
APPENDIX A
AI MODELS AND HYPER-PARAMETERS
We present the hyperparameters of the various AI models and ensemble methods employed in this study.
A. Details of AI Models and Hyperparameters
1) Base Models: First, we outline the primary details of the base models.
Logistic Regression (LR): We first employed Logistic Regression. The parameter configuration for this classifier remains at default settings.
Decision Tree (DT): Continuing with the subsequent classifier, we utilized the Decision Tree. The parameter configuration for this classifier remains at default settings.
Multi-layer Perceptron (MLP): Next, we utilized the MLP classifier with the following configuration: the architecture comprises two hidden layers, each containing 50 neurons, utilizing the Rectified Linear Unit (ReLU) activation function. We employed the Adaptive Moment Estimation (Adam) solver for optimization, with an L2 regularization term (alpha) set to 0.0001. The batch size was dynamically adjusted based on the dataset size. Additionally, the learning rate was kept constant at 0.001 throughout training, with a maximum of 1000 iterations. The random seed was fixed at 42 for reproducibility. Early stopping was disabled, and progress messages were printed during training.
2) Ensemble Methods: Next, we present the key details of our ensemble methods.
AdaBoost (ADA): AdaBoost was employed as the next classifier. The parameter configuration for this classifier remains at the default settings.
Extreme Gradient Boost (XGB): XGB was then utilized as a classifier. The parameter configuration sets the learning rate to 0.1 and the loss function to multi:softmax.
CatBoost (CAT): Subsequently, CatBoost was utilized as a classifier with its default parameter settings.
Max Voting: The following classifier employed is Voting, a simple ensemble method that aggregates each model's decision. In this implementation, a VotingClassifier is instantiated with two base classifiers, Logistic Regression (LR) and Decision Tree (DT), using hard voting.
Average: Additionally, the Average classifier was employed. This approach involves initializing three base classifiers: Decision Tree, K-nearest Neighbors, and Random Forest. These models are trained on the training data. The predictions from each model (pred1, pred2, pred3) are then averaged to generate the final prediction using a simple averaging technique.
Weighted Average: Weighted Average was also utilized in this implementation. This method involves initializing three base classifiers: Decision Tree, K-nearest Neighbors, and Random Forest. An ensemble is then created using a VotingClassifier, with the classifiers assigned weights based on their importance. In this case, Decision Tree is assigned a weight of 0.4, K-nearest Neighbors a weight of 0.3, and Random Forest a weight of 0.3.
Bagging: Bagging, a class of ensemble methods, was also utilized in this scenario. This method involves dividing the dataset into subsets with replacement and utilizing these subsets as input data for diverse base models. Subsequently, the predictions from each base model are aggregated to reach a final decision. In this implementation, the Bagging Classifier was instantiated with various base models, including Random Forest, MLP Classifier, Logistic Regression, and Decision Tree Classifier. The number of estimators for the Bagging Classifier was set to the total number of base models, ensuring that each base model contributes to the ensemble's prediction.
Random Forest (RF): One ensemble classifier utilized for detecting malicious samples in network traffic was the Random Forest (RF). The hyperparameters employed for this classifier are as follows: n_estimators (the number of trees) was set to 100, the maximum tree depth was set to 10, and the minimum number of samples required to split an internal node was set to 2, while the remaining parameters were left at their defaults.
Blending: Blending, another class of ensemble methods, was employed. This method uses a holdout (validation) set from the training set to make predictions. The process involves splitting the training set into training and validation sets, fitting models on the training set, and making predictions on the validation and test sets. The validation set and its predictions are then used to build a new model, which makes the final decision on the test set. In this implementation, the blending method was applied using several base models, including Random Forest, Multi-layer Perceptron, Logistic Regression, and Decision Tree. The predictions from these base models on the validation set are used to train the final estimator, which makes the final decision.
Stacking: Lastly, Stacking, another class of ensemble methods, was employed. This method stacks the decisions of base models, utilizing their outcomes to create a new dataset. Subsequently, various models and ensemble methods are applied again to make a final decision. In this implementation, a StackingClassifier was instantiated with several base models, including Random Forest, Multi-layer Perceptron, Logistic Regression, and Decision Tree. The predictions from these base models are then used to train the final estimator, which makes the final decision.
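As a companion to the descriptions above, the following sketch illustrates how the listed hyperparameters and ensemble configurations could be instantiated with scikit-learn and XGBoost. It is an illustrative approximation rather than a verbatim excerpt of our released code; in particular, the exact wiring of the Bagging ensemble and the voting mode of the weighted average are assumptions of this sketch.

```python
# Illustrative sketch of the configurations described in this appendix
# (simplified; see the released source code for the exact implementation).
from sklearn.ensemble import (BaggingClassifier, RandomForestClassifier,
                              StackingClassifier, VotingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier
from xgboost import XGBClassifier

# Base models with the hyperparameters listed above.
lr = LogisticRegression()                      # defaults
dt = DecisionTreeClassifier()                  # defaults
mlp = MLPClassifier(hidden_layer_sizes=(50, 50), activation="relu",
                    solver="adam", alpha=0.0001, batch_size="auto",
                    learning_rate="constant", learning_rate_init=0.001,
                    max_iter=1000, random_state=42, early_stopping=False,
                    verbose=True)
rf = RandomForestClassifier(n_estimators=100, max_depth=10, min_samples_split=2)
xgb = XGBClassifier(learning_rate=0.1, objective="multi:softmax")

# Max Voting: hard vote over LR and DT.
max_voting = VotingClassifier(estimators=[("lr", lr), ("dt", dt)], voting="hard")

# Weighted Average: DT, KNN, and RF weighted 0.4 / 0.3 / 0.3
# (soft voting is assumed here to realize the weighted average of predictions).
knn = KNeighborsClassifier()
weighted_avg = VotingClassifier(
    estimators=[("dt", dt), ("knn", knn), ("rf", rf)],
    voting="soft", weights=[0.4, 0.3, 0.3])

# Bagging: one bagged ensemble per base model (the `estimator` keyword is
# named `base_estimator` in older scikit-learn releases).
bagging = {name: BaggingClassifier(estimator=model, n_estimators=4)
           for name, model in [("rf", rf), ("mlp", mlp), ("lr", lr), ("dt", dt)]}

# Stacking: base-model predictions feed a final estimator.
stacking = StackingClassifier(
    estimators=[("rf", rf), ("mlp", mlp), ("lr", lr), ("dt", dt)],
    final_estimator=LogisticRegression())
```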
|
ai_researcher | 1 | PubGraph_A_Large_Scale_Scientific_Temporal_Knowledge_Graph.pdf | 3
2
0
2
y
a
M
9
1
]
I
A
.
s
c
[
2
v
1
3
2
2
0
.
2
0
3
2
:
v
i
X
r
a
PubGraph: A Large-Scale Scientific Knowledge
Graph
Kian Ahrabian, Xinwei Du, Richard Delwin Myloth, Arun Baalaaji Sankar
Ananthan, and Jay Pujara
University of Southern California, Information Sciences Institute, Marina del Rey CA
90292, USA
{ahrabian,xinweidu,myloth,arunbaal,jpujara}@usc.edu
Abstract. Research publications are the primary vehicle for sharing
scientific progress in the form of new discoveries, methods, techniques,
and insights. Unfortunately, the lack of a large-scale, comprehensive, and
easy-to-use resource capturing the myriad relationships between publi-
cations, their authors, and venues presents a barrier to applications for
gaining a deeper understanding of science. In this paper, we present
PubGraph, a new resource for studying scientific progress that takes
the form of a large-scale knowledge graph (KG) with more than 385M
entities, 13B main edges, and 1.5B qualifier edges. PubGraph is com-
prehensive and unifies data from various sources, including Wikidata,
OpenAlex, and Semantic Scholar, using the Wikidata ontology. Beyond
the metadata available from these sources, PubGraph includes outputs
from auxiliary community detection algorithms and large language mod-
els. To further support studies on reasoning over scientific networks,
we create several large-scale benchmarks extracted from PubGraph for
the core task of knowledge graph completion (KGC). These benchmarks
present many challenges for knowledge graph embedding models, includ-
ing an adversarial community-based KGC evaluation setting, zero-shot
inductive learning, and large-scale learning. All of the aforementioned
resources are accessible at https://pubgraph.isi.edu/ and released un-
der the CC-BY-SA license. We plan to update PubGraph quarterly to
accommodate the release of new publications.
Keywords: Scientific Knowledge Graphs · Knowledge Graph Comple-
tion · Inductive Learning
1
Introduction
Scientific progress takes many forms, from discovering new species to repurpos-
ing extant models for novel tasks. Innovation in science has been studied from a
variety of perspectives, including the combination of scholarly domains [12,28],
sociological factors [8], and analogical reasoning [13,17]. However, many studies
of this phenomenon have been limited due to the difficulty in finding and using
large-scale data for the domain. In this paper, we address this obstacle by in-
troducing PubGraph, a knowledge graph (KG) with new resources and bench-
marks, enabling the study of scientific research at scale using structural patterns
in citation and collaboration networks. PubGraph also provides a unique op-
portunity to compare models on core tasks such as transductive and inductive
knowledge graph completion (KGC).
PubGraph is a large-scale multi-relational KG built on top of the OpenAlex
catalog [23] and the Wikidata [29] ontology. It consists of more than 385M en-
tities, comprising authors, institutions, sources, papers, and concepts, and more
than 13B main edges and 1.5B qualifier edges among those entities. PubGraph
captures temporal information, allowing the study of scientific works’ dynamics.
Additionally, it also connects the scholarly articles available in OpenAlex to their
counterparts in the Semantic Scholar Academic Graph (S2AG) [30] and Wiki-
data through external ids. Moreover, besides the metadata information available
in OpenAlex, PubGraph provides outputs from auxiliary community detection
algorithms and large language models to further assist future studies of scien-
tific articles. Fig. 1 illustrates an overview of PubGraph schema. In this paper,
we describe the methodology used to construct PubGraph, i.e., the ontological
choices made for mapping OpenAlex to Wikidata, the model choices to extract
outputs from auxiliary models, and the entity resolution procedure for mapping
OpenAlex articles to S2AG and Wikidata.
One of the essential parts of studying scientific progress is understanding
and reasoning about connections between ideas and discoveries. However, there
is a shortage of benchmarks that could be used to study such topics. In the
past, citations have proven to be crucial in studying publications and their im-
pact [22]. Prior works have also studied tasks on citations such as intent classi-
fication [5,11,16], recommendation [3,9], and prediction [7,20]. In this work, we
introduce new large-scale benchmarks for finding connections among scientific
works framed as a KGC task. The KGC task requires models to predict a target
entity, given a source entity and a relation. The aim of this task is to support
the study of citations from a structural perspective in both transductive, i.e.,
all nodes are known, and inductive, i.e., evaluation nodes are unseen, settings.
Moreover, we also identify a community-based adversarial evaluation setting that
mitigates the influence of random negative sampling in the evaluation phase of
large-scale KGs.
The contributions of this work are summarized as follows:
1. Introducing PubGraph, a billion-scale, multi-relational KG built on top of
the OpenAlex catalog
2. Mapping the OpenAlex metadata to Wikidata ontology
3. Connecting two other large-scale scholarly metadata repositories, S2AG and
Wikidata, to make PubGraph a unifying and comprehensive resource
4. Introducing large-scale extrapolated KGC benchmarks for KG models in
both transductive and inductive settings
5. Identifying challenging adversarial evaluation settings for KGC benchmarks
2 Building PubGraph
The primary source for creating PubGraph is the metadata in the OpenAlex
catalog that we map to the Wikidata ontology. OpenAlex is an open-source cat-
Fig. 1. Overview of PubGraph schema. Legend. Colors: Blue → Main entity, Yellow
→ Boolean attribute, Purple → Multi attribute, and Green → New attribute; Shapes:
Rounded rectangle → Entity attribute, and Rectangle → Regular attribute.
alog of scholarly entities that provides metadata for works, authors, institutions,
sources, publishers, and concepts. Moreover, we add connections to both S2AG
and Wikidata repositories to provide a more unifying resource for the researchers.
Furthermore, we provide outputs from auxiliary models to further enrich Pub-
Graph for future studies. The rest of this section is organized as follows: Sec.
2.1 introduces the mapping procedure from OpenAlex metadata to Wikidata
ontology, Sec. 2.2 describes the implemented procedure to connect S2AG and
Wikidata with OpenAlex along with some statistics of the resolution, and Sec.
2.3 presents the model choices for auxiliary outputs included in PubGraph.
2.1 Mapping to Wikidata Ontology
To transform the OpenAlex dump (taken on April 9th, 2023) into PubGraph,
we follow the well-known and well-studied Wikidata ontology. Specifically, we
create a mapping between metadata information from the OpenAlex dump to
Wikidata properties. Using Wikidata enables broader adoption of the KG and
clear semantics for entities and relationships.
Table 1 presents the mapping from OpenAlex metadata to Wikidata proper-
ties. These mappings are selected such that they best describe the metadata field.
Here, we explain the ontological design choices that we made for the mapping:
1. abstract → P7535: Due to the absence of a one-to-one match, we use P7535
(scope and content), which is defined as “a summary statement providing an
overview of the archival collection.”
2. author position → P1545: Since this field defines an order of the authors, we
use P1545 (series ordinal), which is defined as the “position of an item in its
parent series (most frequently a 1-based index).”
3. first page + last page → P304: Since OpenAlex uses two different fields to
present this information, we merge them into one attribute to be aligned
with the Wikidata ontology.
4. score → P4271: Since this field indicates the relatedness of two concepts as
produced by a model, it matches the definition of P4271 (rating) defined as
“qualifier to indicate a score given by the referenced source indicating the
quality or completeness of the statement.”
5. descriptor ui + qualifier ui → P1038: Since OpenAlex uses two different fields
to present this information, we merge them into one attribute to be aligned
with the Wikidata ontology.
6. apc usd → P2555: Since this field describes a “source’s article processing
charge in US Dollars”, we match it to P2555 (fee) defined as “fee or toll
payable to use, transit or enter the subject.”
7. relationship → P1039: Since this field describes the relation between two
institutions, we use P1039 (kinship to subject) defined as “qualifier of "rela-
tive" (P1038) to indicate less usual family relationships.”
8. location → P1433: Since this field describes the publishing location of a work,
we match it with P1433 (published in).
9. latitude + longitude → P625: Since OpenAlex uses two different fields to
present this information, we merge them into one attribute to be aligned
with the Wikidata ontology.
Table 1. OpenAlex metadata mapping to properties covered by Wikidata ontology.
OpenAlex Metadata → Wikidata Property
abstract → P7535
author position → P1545
landing page url → P973
license → P275
volume → P478
first page + last page → P304
score → P4271
created date → P571
mag → P6366
pmcid → P932
oa status → P6954
publication date → P577
title → P1476
updated date → P5017
display name → P2561
orcid → P496
twitter → P2002
last known institution → P1416
alternate titles → P1476
country code → P297
host organization → P749
issn → P236
associated institution → P1416
display name acronyms → P1813
geonames city id → P1566
ror → P6782
international display name → P4970
level → P1545
hierarchy level → P1545
location → P1433
related concept → P921
author → P50
institution → P1416
pdf url → P953
version → P9767
issue → P433
concept → P921
year → P585
doi → P356
pmid → P698
descriptor ui + qualifier ui → P9340
oa url → P2699
referenced work → P2860
type → P31
works count → P3740
display name alternatives → P4970
scopus → P1153
wikipedia → P4656
abbreviated title → P1813
apc usd → P2555
homepage url → P856
issn-l → P7363
fatcat → P8608
relationship → P1039
homepage url → P856
latitude + longitude → P625
grid → P2427
language → P9753
alternate titles → P4970
parent publisher → P749
ancestor → P4900
corpus id → P8299
10. level → P1545 and hierarchy level → P1545: Since there is no Wikidata
property to describe a position in a hierarchy, we use the closest property
P1545 (series ordinal), which is defined as the “position of an item in its
parent series (most frequently a 1-based index).”
Table 2. OpenAlex metadata mapping to properties not covered by Wikidata ontology.
OpenAlex Metadata → New Property
best oa location → P_best_oa_location
primary location → P_primary_location
cited by count (total) → P_total_cited_by_count
cited by count (per year) → P_cited_by_count
2yr mean citedness → P_impact_factor
h-index → P_h_index
i10-index → P_i10_index
umls aui → P_umls_aui
wikidata → P_wikidata
community id → P_community_id
Table 3. OpenAlex boolean metadata mapping to edges using Wikidata ontology.
OpenAlex Metadata → Edge
is corresponding → P31 → Q36988860
is paratext → P31 → Q853520
is in doaj → P31 → Q1227538
is oa → P31 → Q232932
is retracted → P31 → Q45182324
Table 4. OpenAlex entity type mapping to edges using Wikidata ontology.
OpenAlex Entity Type → Edge
work → P31 → Q13442814
author → P31 → Q482980
source → P31 → Q1711593
institution → P31 → Q178706
concept → P31 → Q115949945
publisher → P31 → Q2085381
For the metadata with no suitable parallel property, we create new ones to
keep the KG as complete as possible, as showcased in Table 2. Note that for “cited
by count”, OpenAlex provides both yearly and total values; hence, the reason
for having two different new properties. Moreover, for metadata with a boolean
type, we add a new edge (main or qualifier) when true. Table 3 presents the edges
representing each boolean metadata with all the relations and entities taken from
the Wikidata repository. This choice was made to maintain a better semantic
composure and avoid creating new properties in the KG. For example, there is no
property in Wikidata for “is paratext”; however, there exists an paratext entity
(Q853520). Hence, instead of creating new property such as P_is_paratext, we
can create a new edge when “is paratext” is true to this entity with relation P31
(instance of). Finally, we also add “instance of” edges to indicate the type of each
entity as classified by OpenAlex, as presented in Table 4. Given its flexibility
to represent attributed graphs, we use RDF∗ as the graph representation for
PubGraph (as illustrated in Fig. 1).
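To make the role of qualifier edges concrete, the snippet below sketches one authorship statement in RDF-star-style Turtle embedded in a Python string. The namespace prefixes and the identifiers W123, A456, and I789 are made-up placeholders for illustration, not actual PubGraph URIs or OpenAlex IDs.

```python
# A minimal, hypothetical example of one PubGraph statement with qualifiers,
# written as RDF-star-style Turtle inside a Python string. All prefixes and
# identifiers below are illustrative placeholders, not real PubGraph URIs.
example_statement = """
@prefix pg: <https://example.org/pubgraph/entity/> .
@prefix p:  <https://example.org/pubgraph/prop/> .

# Main edge: work W123 has author A456 (Wikidata property P50).
# Qualifier edges annotate that edge itself: the author's position in the
# author list (P1545, series ordinal) and their affiliation (P1416).
<< pg:W123 p:P50 pg:A456 >> p:P1545 "1" ;
                            p:P1416 pg:I789 .
"""
print(example_statement)
```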
Fig. 2. Distribution of publication years in the 2000-2023 period for OpenAlex, S2AG,
and Wikidata. Note that only ∼128.3M out of the ∼211.5M papers in S2AG have
publication dates and are included.
Fig. 3. Coverage of S2AG and Wikidata papers after entity resolution in the 2000-2023
period.
2.2 S2AG and WikiData Entity Resolution
To make PubGraph a more unifying and comprehensive resource, we opt to con-
nect works in OpenAlex to two other large-scale repositories of scholarly meta-
data: S2AG (taken on April 11th, 2023) and Wikidata (taken on April 28th,
2023). Fig. 2 showcases the distribution of publication years in the 2000-2023
period for the works available in these three repositories. During this analysis,
we noticed that only ∼128.3M out of the ∼211.5M papers in S2AG have pub-
lication dates. This finding further highlights the importance of a unifying and
comprehensive resource. To this end, we follow a two-step procedure. First, we
match entities based on the following IDs: DOI, MAG, PMID, and PMCID. For
S2AG, this results in ∼197.6M out of ∼211.5M unique papers being matched to
OpenAlex works, roughly providing a 93.4% coverage. For Wikidata, this results
in ∼33.2M out of ∼38.9M unique papers being matched to OpenAlex works,
roughly providing an 85.4% coverage.
Then, among the remaining unmatched entities, we run an exact title search
and only keep one-to-one mappings. For S2AG, this step further increases the
number of matched unique papers to ∼199.2M, roughly providing a 94.2% cov-
erage. For Wikidata, this step further increases the number of matched unique
papers to ∼36.4M, roughly providing a 93.6% coverage. Fig. 3 provides a cov-
erage distribution over the 2000-2023 period for both S2AG and Wikidata. As
evident from this distribution, the coverage of both data sources seems to be rel-
atively unbiased toward the time of publication. We believe the Wikidata drop
from 2021 onward is due to the low number of papers available in the platform
in the period, and the S2AG drop is due to the potential delays in adding recent
publications. Moreover, regarding more recent data, Wikidata seems to benefit
drastically from adding new entities through external sources. We plan to im-
prove our entity resolution heuristic using other metadata, such as authors, to
cover more entities in future releases.
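A minimal sketch of this two-step resolution is shown below using pandas; the column names (id, doi, mag, pmid, pmcid, title) are assumed for illustration and do not necessarily match the raw dump schemas.

```python
# Minimal sketch (assumed column names) of the two-step entity resolution:
# (1) join on shared external IDs, (2) exact-title match kept only when one-to-one.
import pandas as pd

def resolve(openalex: pd.DataFrame, other: pd.DataFrame) -> pd.DataFrame:
    matches = []
    matched_other = set()

    # Step 1: match on external identifiers.
    for key in ["doi", "mag", "pmid", "pmcid"]:
        left = openalex.dropna(subset=[key])
        right = other.dropna(subset=[key])
        right = right[~right["id"].isin(matched_other)]
        m = left.merge(right, on=key, suffixes=("_oa", "_other"))[["id_oa", "id_other"]]
        matches.append(m)
        matched_other.update(m["id_other"])

    # Step 2: exact title search on the remaining entities, keeping only
    # one-to-one title matches to avoid ambiguous links.
    rest = other[~other["id"].isin(matched_other)]
    m = openalex.merge(rest, on="title", suffixes=("_oa", "_other"))
    m = m[~m["title"].duplicated(keep=False)][["id_oa", "id_other"]]
    matches.append(m)

    return pd.concat(matches, ignore_index=True).drop_duplicates(subset=["id_other"])
```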
2.3 Auxiliary Outputs
Community Detection Besides sharing scientific findings, scholarly articles
represent the research interests of their authors. Therefore, by referencing each
other’s publications, authors create communities of shared interests. To enable
the study of these communities, we provide the results obtained from the Leiden
community detection algorithm [26] as auxiliary outputs for papers in PubGraph.
To this end, we first extract the full citation network from all the publication-
publication links. Then, we tune the Leiden algorithm1 on the extracted citation
network with the following parameters: quality function ∈ {Modular, RBER,
Significance, Surprise}, maximum papers per community ∈ {300k, 500k}, and
number of communities ∈ {3000, 4000, 5000, 6000}. To evaluate the communi-
ties’ quality, we use a purity proxy metric extracted from the ancestral graph of
the concepts connected to the publications in OpenAlex. Specifically, we count
the number of children for each root concept and select the largest root concept
for each community. Then, we calculate the percentage of the papers that are
children of that root concept as the proxy metric. Figure 4 illustrates our results
on different numbers of communities. Based on our experiments, the highest
quality communities are produced by the following parameters: quality func-
tion = Significance, maximum papers per community = 300k, and number of
communities = 3000.
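The following sketch outlines how such a configuration could be run with the python-igraph and leidenalg packages and how the purity proxy could be computed. The tuning loop and data loading are omitted, and the max_comm_size argument assumes a reasonably recent leidenalg release.

```python
# Sketch of the community detection step and the purity proxy (assumes a
# recent leidenalg version supporting max_comm_size in find_partition).
from collections import Counter

import igraph as ig
import leidenalg as la

def detect_communities(num_papers, citation_edges, max_size=300_000):
    # citation_edges: list of (citing_paper_idx, cited_paper_idx) pairs.
    g = ig.Graph(n=num_papers, edges=citation_edges, directed=True)
    partition = la.find_partition(g, la.SignificanceVertexPartition,
                                  max_comm_size=max_size)
    return partition.membership  # community id per paper

def community_purity(membership, root_concept):
    # root_concept: maps paper index -> its largest root concept in the
    # OpenAlex concept hierarchy (the proxy described above).
    papers_per_community = {}
    for paper, comm in enumerate(membership):
        papers_per_community.setdefault(comm, []).append(paper)
    purities = {}
    for comm, papers in papers_per_community.items():
        counts = Counter(root_concept[p] for p in papers if p in root_concept)
        if counts:
            purities[comm] = counts.most_common(1)[0][1] / sum(counts.values())
    return purities
```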
Large Language Models PubGraph was developed to enable researchers to
study scholarly works from a graph perspective. Through PubGraph, it is pos-
sible to learn representations for papers using graph-based methods, which then
could be used for various downstream tasks. Orthogonal to this relational and
structural information is textual information based on scholarly works’ con-
tent. When available, textual features complement the graph-based features and
can improve the performance of the models [2].
Recently, many large language models (LLM) have been introduced to tackle
the problem of generating representations for scientific documents [1,6]. These
pre-trained models are specifically tuned for scientific data and could be used to
1 https://github.com/vtraag/leidenalg
Fig. 4. Analysis of the effect of the number of communities on the quality of commu-
nities. A higher area under the curve (AUC) indicates more pure communities.
generate low-dimensional embeddings for input documents. In this work, to fur-
ther enable multi-view studies of PubGraph, we provide embeddings generated
by LLMs for all the papers. These embeddings also save resources for researchers
who want to use textual information. To this end, first, we obtain a representing
text by concatenating the title and the abstract of each work. This approach
allows us to cover all the works with at least one of these attributes available,
improving the general coverage of this data. Then, we run the representing text
through the SciNCL model [21] to obtain the embeddings, with each generated
embedding being a 768-dimensional vector. All the generated embeddings are
released with an index to match the corresponding papers.
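A minimal version of this embedding step is sketched below with the Hugging Face transformers library; the model identifier "malteos/scincl" and the plain-text concatenation of title and abstract are assumptions of this sketch rather than a verbatim excerpt of our pipeline.

```python
# Sketch of generating SciNCL embeddings for papers (the hub model id
# "malteos/scincl" is assumed; adjust if the hosted name differs).
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("malteos/scincl")
model = AutoModel.from_pretrained("malteos/scincl")
model.eval()

def embed(title: str, abstract: str) -> torch.Tensor:
    # Representing text: concatenation of title and abstract (either may be empty).
    text = " ".join(part for part in (title, abstract) if part)
    inputs = tokenizer(text, truncation=True, max_length=512,
                       padding=True, return_tensors="pt")
    with torch.no_grad():
        output = model(**inputs)
    # Use the first ([CLS]) token representation as the 768-dimensional embedding.
    return output.last_hidden_state[:, 0, :].squeeze(0)
```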
3 Knowledge Graph Completion
Traditionally, knowledge graph embedding (KGE) models [25,27] have been eval-
uated in an interpolated, transductive KGC setting where all entities, e.g., pa-
pers and authors, are known. However, one of the challenging aspects of study-
ing scientific progress is dealing with new publications which require inference
over unseen samples. A better-aligned evaluation setting for this purpose is the
extrapolated, inductive setting. An inductive setting requires models to make
predictions over previously unseen entities. While KGs capture the structure
Fig. 5. Overview of the training and evaluation scheme. Intra-period current links
(black) are used for training in all experiment settings. Intra-period future links (red)
are used for evaluation in both validation and testing phases in all experiment settings.
Exo-period links (dotted blue) are used in the training phase in transductive settings;
however, in inductive settings, these links are only used as auxiliary links during the
evaluation phase. Auxiliary links establish connections between seen training nodes
and unseen evaluation nodes.
necessary for this setting, many models do not address this use case. Moreover,
extrapolated prediction requires train and test sets to be partitioned by a tem-
poral threshold, so model predictions are for a future time epoch.
In this work, we introduce new resources and benchmarks in the extrapo-
lated setting for both inductive and transductive models, framing the research
question as a KGC task and supporting the study of this problem from a purely
structural standpoint at different scales and across various models. Moreover, we
also introduce a community-based adversarial evaluation setting to 1) mitigate
the influence of random negative sampling (due to the scale) in the evaluation
phase and 2) maintain the same level of difficulty as evaluated on all of the enti-
ties. Fig. 5 presents an overview of the training and evaluation schemes for the
KGC benchmarks in both transductive and inductive settings. The rest of this
section is organized as follows: Sec. 3.1 describes the methodology used to create
PG-X benchmarks, Sec. 3.2 presents a data quality analysis over the extracted
samples, and Sec. 3.3 presents a set of adversarial evaluation settings for the
KGC tasks.
3.1 Building PG-X Benchmarks
The full PubGraph KG contains a vast amount of information in the form of lit-
eral values and sparse properties that are not easily usable by many KG models.
We extract subsets of PubGraph, designated as PG-X, to create easier-to-use
benchmarks for KG models. To extract PGs from the transformed data, we first
remove all the publications with no citations that do not cite any other papers
Table 5. Statistics of PG-X benchmarks splits.
Benchmark | #Training (Validation) | #Training (Testing) | #Validation | #Test
PG-1M | 18.2M | 20.5M | 265k | 146k
PG-10M | 269.0M | 305.9M | 3.1M | 2.3M
PG-Full | 1.88B | 2.17B | 28.1M | 26.3M
Table 6. Validity and completeness metrics of sampled KGs.
Metric | PG-1M | PG-10M | PG-Full
Mutual Citations | 0.03% | 0.04% | 0.06%
Authorship Completeness | 99.97% | 99.97% | 99.92%
Venue Completeness | 92.37% | 90.25% | 75.34%
Institution Completeness | 81.45% | 71.21% | 45.77%
to get PG-Full. Since these nodes are disconnected from other publications, this
step mitigates the sparsity problem and reduces the KG size by a large margin.
Given the enormous size of the PG-Full, we create two small and medium-
sized sub-KGs to allow future studies at different scales. To this end, we use
snowball sampling [10] to extract PG-1M and PG-10M with 1M and 10M publi-
cation nodes, respectively. After sampling, we remove any publication without a
publication date. Next, we extract all the “cites work (P2860),” “author (P50),”
“published in (P1433)," and “affiliation (P1416)" links for the sampled publi-
cations. We ensure to include all the available author, source, and institution
links from the sampled publications in the benchmarks. Finally, we split all the
benchmarks temporally, using all the publications before 2017 for training, 2017
up until 2020 for validation, and 2020 onward for testing. Table 5 presents the
statistics on the extracted splits of each benchmark.
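The temporal split can be expressed compactly as sketched below; the input is assumed to be a table of citation links annotated with the citing work's publication year.

```python
# Sketch of the extrapolated temporal split used for the PG-X benchmarks
# (assumes a DataFrame `edges` with a `year` column for the citing work).
import pandas as pd

def temporal_split(edges: pd.DataFrame):
    train = edges[edges["year"] < 2017]                            # before 2017
    valid = edges[(edges["year"] >= 2017) & (edges["year"] < 2020)]  # 2017 up to 2020
    test = edges[edges["year"] >= 2020]                            # 2020 onward
    return train, valid, test
```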
3.2 Data Quality
To evaluate the quality of the extracted benchmarks, we check the validity and
completeness of our KGs. For validity, we look for potential mutual citations,
cases where two papers reference each other, violating strict temporal order.
This artifact may appear when articles have several revisions, but OpenAlex only
reports the earliest publication date. For completeness, we calculate publication-
author, publication-source, and author-institution relations completeness. Table
6 showcases these metrics on the extracted KGs. As evident from the metrics,
all the benchmarks exhibit an extremely low mutual citations percentage which
is evidence of their quality. Moreover, the small and medium-sized KGs exhibit
higher completeness metrics which we attribute to the forced inclusion of all
authors, venues, and institutions links.
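Both checks can be computed directly from the edge lists, for example as sketched below; the input formats are assumed for illustration.

```python
# Sketch of the validity check: fraction of citation links (u -> v) whose
# reverse link (v -> u) also exists, violating strict temporal order.
def mutual_citation_rate(citation_edges):
    edge_set = set(citation_edges)
    mutual = sum(1 for (u, v) in edge_set if (v, u) in edge_set)
    return mutual / max(len(edge_set), 1)

# Sketch of a completeness metric: share of publications with at least one
# author link (analogous metrics apply to venues and institutions).
def authorship_completeness(publication_ids, authorship_edges):
    with_author = {work for (work, _author) in authorship_edges}
    return sum(1 for p in publication_ids if p in with_author) / len(publication_ids)
```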
Table 7. Negative sampling results on the PG-1M benchmark.
Variation | #Negative Samples | MRR | Hits@1 | Hits@10 | Time (Seconds)
Random | 1000 | 0.723 | 0.608 | 0.918 | 588 (CPU)
Entity Type | 1000 | 0.560 | 0.418 | 0.826 | 655 (CPU)
Time Constrained | 1000 | 0.577 | 0.449 | 0.817 | 601 (CPU)
Community | 1000 | 0.076 | 0.023 | 0.167 | 1008 (CPU)
Full | ∼3.38M | 0.015 | 0.000 | 0.036 | 81987 (GPU)
Fig. 6. Analysis of the effect of negative samples count on the model’s performance
measured by MRR.
3.3 Adversarial Evaluation Setting
One of the most common strategies to evaluate KGC on large-scale graphs is
to sample a fixed number of negative samples for each positive sample during
the evaluation phase. However, this strategy is prone to exhibiting inflated per-
formance due to having no control over the difficulty of the sampled nodes.
Moreover, calculating the evaluation metrics on the complete set of samples
becomes increasingly more expensive as the size of the KG grows. Hence, we
propose three alternative strategies for negative sampling during the evaluation
phase. These strategies aim to find an efficient method to be used as a proxy for
complete metric calculations. Our proposed strategies are as follows:
1. Entity Type: This is the most straightforward strategy in which we only
sample candidate nodes with the same type as the target node. For example,
in our case, we only sample from the publications.
2. Time Constrained: Building upon our first strategy, we further add the
constraint of only sampling candidate nodes from the nodes within the evalu-
ation period. Intuitively, these unseen (inductive) or less seen (transductive)
nodes will pose more problems for the model during the evaluation phase.
3. Community: Given a target node, we sample candidate nodes only from its
community. This strategy relies on the auxiliary outputs, i.e., communities,
generated as described in Sec. 2.3. We hypothesize that these nodes pose the
most difficulty for the model during the evaluation phase.
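A sketch of the community-constrained variant is given below; the other two strategies follow by swapping in a different candidate pool. The data structures are assumed for illustration.

```python
# Sketch of community-based negative sampling for evaluating one test triple;
# candidate pools for the other strategies are obtained by restricting to
# entities of the same type or to evaluation-period entities instead.
import random

def sample_negatives(true_tail, community_of, members_of, known_positives,
                     num_samples=1000, seed=0):
    rng = random.Random(seed)
    pool = [e for e in members_of[community_of[true_tail]]
            if e != true_tail and e not in known_positives]
    if len(pool) <= num_samples:
        return pool  # small communities: use every available candidate
    return rng.sample(pool, num_samples)
```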
To test the proposed strategies, we train a ComplEx [27] model using the
DGL-KE toolkit [31]. We tune the hyper-parameters of our model using the
following set of values: embedding dimensions ∈ {50, 100, 200, 400}, learning
rate ∈ {0.003, 0.01, 0.03, 0.1, 0.3}, number of negative samples ∈ {128, 256, 512
, 1024, 2048}, and regularization coefficient ∈ {0.0, 1e-9, 1e-8, 1e-7, 1e-6, 1e-5}.
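The tuning procedure simply enumerates the Cartesian product of this grid, as sketched below; the train_and_evaluate function is a hypothetical stand-in for a DGL-KE training and validation run.

```python
# Sketch of the hyperparameter search over the grid listed above; the
# train_and_evaluate callback is a hypothetical placeholder that returns
# validation MRR for one configuration.
from itertools import product

grid = {
    "embedding_dim": [50, 100, 200, 400],
    "learning_rate": [0.003, 0.01, 0.03, 0.1, 0.3],
    "num_negative_samples": [128, 256, 512, 1024, 2048],
    "regularization_coef": [0.0, 1e-9, 1e-8, 1e-7, 1e-6, 1e-5],
}

def grid_search(train_and_evaluate):
    best_mrr, best_config = -1.0, None
    for values in product(*grid.values()):
        config = dict(zip(grid.keys(), values))
        mrr = train_and_evaluate(**config)
        if mrr > best_mrr:
            best_mrr, best_config = mrr, config
    return best_config, best_mrr
```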
Table 7 presents the results of our experiments with the aforementioned neg-
ative sampling strategies in the evaluation phase. The reported times are for one
evaluation run over the complete testing set of the PG-1M benchmark (∼147K
samples). As evident from these results, the community-based method is the best
proxy to the full metrics calculation while still being significantly time efficient.
Even if we factor in the 11.5 hours (41400 seconds) that it takes to learn commu-
nities for all the 91M publications, the difference in computation time becomes
much more significant when we have to repeat the evaluation process over and
over again, e.g., for validation, fine-tuning, etc. Moreover, the full metrics are
calculated on a GPU which is far more efficient than the calculations on the
CPU. It is important to note that the community-based method is helpful in
evaluation settings where the ground truth is known; however, in settings where
the ground truth is unknown, e.g., a deployed model, there is no workaround to
complete ranking computations as we have to consider all the entities regardless.
We further analyze the effect of the number of negative samples on the
model’s performance. Figure 6 presents the result of our experiments with vary-
ing numbers of negative samples on all the introduced strategies. As expected,
the model’s performance rapidly drops with the increase of negative samples.
Moreover, the community-based negative sampling results act as an excellent
proxy at 5k negative samples and seem to converge to the full variation around
10k negative samples. This finding is further evidence of the effectiveness of this
method.
4 Related Works
4.1 Scientific Knowledge Graphs
In recent years, a wide range of scientific KGs (SKG) have emerged in the re-
search community. Examples of these SKGs are Scholia [19], ORKG [24], Ope-
nAIRE [18], and MAG240M [14]. Each of the aforementioned SKGs has different
Table 8. Comparison between PubGraph and the existing SKGs.
SKG | #Articles | Source | Ontology | Embeddings | Community | External Links (Other Sources)
Scholia | 39M | Wikidata | Wikidata | ✗ | ✗ | ✗
ORKG | 25k | Curated | Proprietary | ✗ | ✗ | ✗
OpenAIRE | 164M | Curated | Proprietary | ✗ | ✗ | ✗
MAG240M | 121M | MAG | Proprietary | ✓ | ✗ | ✗
PubGraph | 250M | OpenAlex | Wikidata | ✓ | ✓ | ✓
Table 9. Statistics of extracted benchmarks compared to the existing large-scale KGC
benchmarks. As evident, PG-Full has more than 2x nodes and 3.6x edges compared to
the largest existing benchmarks.
Benchmark | #Nodes | #Edges | #Relations
ogbl-citation2 [15] | 2,927,963 | 30,561,187 | 1
Freebase [4] | 86,054,151 | 338,586,276 | 14,824
WikiKG90Mv2 [14] | 91,230,610 | 601,062,811 | 1,315
PG-1M | 3,378,202 | 25,312,490 | 4
PG-10M | 22,442,976 | 315,225,337 | 4
PG-Full | 184,126,885 | 2,201,239,147 | 4
characteristics that make them unique and interesting to the community. Table
8 compares PubGraph with the existing SKGs across various properties. As
evident from this table, PubGraph is built on a more grounded ontology and
provides much more information and artifacts compared to other SKGs.
4.2 Large Scale KGC Benchmarks
KGC is one of the most common tasks defined on KGs. Recent efforts [14,15]
have shifted toward introducing more large-scale benchmarks for KGC; however,
there is still a shortage of benchmarks for large-scale graph learning. We believe
the PG-X benchmarks introduced in this paper can help mitigate this shortage.
Table 9 showcases the statistics of the sampled KGs along with a comparison
to existing large-scale KGC benchmarks in the literature. As evident from the
numbers, PG-X benchmarks provide an opportunity to evaluate KG models
on larger (2x nodes and 3.6x edges) and more flexible (3.3M to 184M range)
benchmarks.
5 Conclusion and Future Work
In this work, we introduced PubGraph, a new large-scale resource in the form
of a KG built on Wikidata ontology and extracted from the OpenAlex cata-
log with more than 13B edges and 385M nodes. As presented through different
comparisons, PubGraph provides a much-needed unifying and comprehensive re-
source for researchers to study scientific progress that connects multiple sources.
PubGraph also enables the study of scientific documents from distinct perspec-
tives through the information extracted from auxiliary community detection
algorithms and large language models. Moreover, we created three KGC bench-
marks with varying sizes to enable future studies at different scales and for both
transductive and inductive settings. Finally, we identified a set of challenging ad-
versarial evaluation settings for the introduced benchmarks that overcome the
common downfall of large-scale KGC evaluation settings. As for future directions
for PubGraph, one direction is to improve the coverage of connections to exter-
nal sources. Moreover, it is possible to bring in more external data sources, e.g.,
SKGs such as Scholia, and link them with PubGraph. Finally, another avenue
is to add other metadata that is of interest to the community, such as awards
and grants, which further enables researchers to study these events in the larger
context.
Acknowledgements
This work was funded by the Defense Advanced Research Projects Agency with
award W911NF-19-20271 and with support from a Keston Exploratory Research
Award.
Resource Availability Statement: The source code for building PubGraph, along
with a data schema, is available from GitHub, released under the CC-BY-SA
license2. All the introduced benchmarks and resources are publicly accessible
and released under the CC-BY-SA license3. Due to the sheer size of the resources
(> 2TB), we could not host the data in any commonly used platform and had
to resort to self-provisioned servers.
References
1. Beltagy, I., Lo, K., Cohan, A.: SciBERT: A pretrained language model for scientific
text. In: Proceedings of the 2019 Conference on Empirical Methods in Natural Lan-
guage Processing and the 9th International Joint Conference on Natural Language
Processing (EMNLP-IJCNLP). pp. 3615–3620. Association for Computational Lin-
guistics, Hong Kong, China (Nov 2019). https://doi.org/10.18653/v1/D19-1371,
https://aclanthology.org/D19-1371
2. Berrebbi, D., Huynh, N., Balalau, O.: Graphcite: Citation intent classification in
scientific publications via graph embeddings. In: Companion Proceedings of the
Web Conference 2022. pp. 779–783 (2022)
3. Bhagavatula, C., Feldman, S., Power, R., Ammar, W.: Content-based citation rec-
ommendation. arXiv preprint arXiv:1802.08301 (2018)
2 https://github.com/usc-isi-i2/isi-pubgraph
3 https://pubgraph.isi.edu/
4. Bollacker, K., Evans, C., Paritosh, P., Sturge, T., Taylor, J.: Freebase: a collabo-
ratively created graph database for structuring human knowledge. In: Proceedings
of the 2008 ACM SIGMOD international conference on Management of data. pp.
1247–1250 (2008)
5. Cohan, A., Ammar, W., Van Zuylen, M., Cady, F.: Structural scaffolds for cita-
tion intent classification in scientific publications. arXiv preprint arXiv:1904.01608
(2019)
6. Cohan, A., Feldman, S., Beltagy, I., Downey, D., Weld, D.: SPECTER: Document-
level representation learning using citation-informed transformers. In: Proceed-
ings of the 58th Annual Meeting of the Association for Computational Lin-
guistics. pp. 2270–2282. Association for Computational Linguistics, Online (Jul
2020). https://doi.org/10.18653/v1/2020.acl-main.207, https://aclanthology.org/
2020.acl-main.207
7. Cohan, A., Feldman, S., Beltagy, I., Downey, D., Weld, D.S.: Specter: Document-
level representation learning using citation-informed transformers. arXiv preprint
arXiv:2004.07180 (2020)
8. De Vaan, M., Stark, D., Vedres, B.: Game changer: The topology of creativity.
American Journal of Sociology 120(4), 1144–1194 (2015)
9. Färber, M., Sampath, A.: Hybridcite: A hybrid model for context-aware citation
recommendation. In: Proceedings of the ACM/IEEE Joint Conference on Digital
Libraries in 2020. pp. 117–126 (2020)
10. Goodman, L.A.: Snowball sampling. The annals of mathematical statistics pp.
148–170 (1961)
11. Gururangan, S., Marasović, A., Swayamdipta, S., Lo, K., Beltagy, I., Downey, D.,
Smith, N.A.: Don’t stop pretraining: adapt language models to domains and tasks.
arXiv preprint arXiv:2004.10964 (2020)
12. Hofstra, B., Kulkarni, V.V., Munoz-Najar Galvez, S., He, B., Jurafsky, D., Mc-
Farland, D.A.: The diversity–innovation paradox in science. Proceedings of the
National Academy of Sciences 117(17), 9284–9291 (2020)
13. Hope, T., Chan, J., Kittur, A., Shahaf, D.: Accelerating innovation through analogy
mining. In: Proceedings of the 23rd ACM SIGKDD International Conference on
Knowledge Discovery and Data Mining. pp. 235–243 (2017)
14. Hu, W., Fey, M., Ren, H., Nakata, M., Dong, Y., Leskovec, J.: Ogb-lsc: A large-
scale challenge for machine learning on graphs. arXiv preprint arXiv:2103.09430
(2021)
15. Hu, W., Fey, M., Zitnik, M., Dong, Y., Ren, H., Liu, B., Catasta, M., Leskovec,
J.: Open graph benchmark: Datasets for machine learning on graphs. Advances in
neural information processing systems 33, 22118–22133 (2020)
16. Jurgens, D., Kumar, S., Hoover, R., McFarland, D., Jurafsky, D.: Measuring the
evolution of a scientific field through citation frames. Transactions of the Associa-
tion for Computational Linguistics 6, 391–406 (2018)
17. Kang, H.B., Qian, X., Hope, T., Shahaf, D., Chan, J., Kittur, A.: Augmenting scien-
tific creativity with an analogical search engine. ACM Transactions on Computer-
Human Interaction (2022)
18. Manghi, P., Bardi, A., Atzori, C., Baglioni, M., Manola, N., Schirrwagen, J.,
Principe, P.: The openaire research graph data model (Apr 2019). https://doi.
org/10.5281/zenodo.2643199, https://doi.org/10.5281/zenodo.2643199
19. Nielsen, F.Å., Mietchen, D., Willighagen, E.: Scholia and scientometrics with wiki-
data. In: Scientometrics 2017. pp. 237–259 (November 2017). https://doi.org/10.
1007/978-3-319-70407-4_36, https://arxiv.org/pdf/1703.04222
20. Ostendorff, M., Rethmeier, N., Augenstein, I., Gipp, B., Rehm, G.: Neighborhood
contrastive learning for scientific document representations with citation embed-
dings. arXiv preprint arXiv:2202.06671 (2022)
21. Ostendorff, M., Rethmeier, N., Augenstein, I., Gipp, B., Rehm, G.: Neighborhood
contrastive learning for scientific document representations with citation embed-
dings. In: Proceedings of the 2022 Conference on Empirical Methods in Natural
Language Processing. pp. 11670–11688. Association for Computational Linguis-
tics, Abu Dhabi, United Arab Emirates (Dec 2022), https://aclanthology.org/2022.
emnlp-main.802
22. Price, D.J.D.S.: Networks of scientific papers: The pattern of bibliographic ref-
erences indicates the nature of the scientific research front. Science 149(3683),
510–515 (1965)
23. Priem, J., Piwowar, H., Orr, R.: Openalex: A fully-open index of scholarly works,
authors, venues, institutions, and concepts. arXiv preprint arXiv:2205.01833 (2022)
24. Stocker, M., Oelen, A., Jaradeh, M.Y., Haris, M., Oghli, O.A., Heidari, G., Hussein,
H., Lorenz, A.L., Kabenamualu, S., Farfar, K.E., et al.: Fair scientific information
with the open research knowledge graph. FAIR Connect 1(1), 19–21 (2023)
25. Sun, Z., Deng, Z.H., Nie, J.Y., Tang, J.: Rotate: Knowledge graph embedding by
relational rotation in complex space. arXiv preprint arXiv:1902.10197 (2019)
26. Traag, V.A., Waltman, L., Van Eck, N.J.: From louvain to leiden: guaranteeing
well-connected communities. Scientific reports 9(1), 1–12 (2019)
27. Trouillon, T., Welbl, J., Riedel, S., Gaussier, É., Bouchard, G.: Complex embed-
dings for simple link prediction. In: International conference on machine learning.
pp. 2071–2080. PMLR (2016)
28. Uzzi, B., Mukherjee, S., Stringer, M., Jones, B.: Atypical combinations and scien-
tific impact. Science 342(6157), 468–472 (2013)
29. Vrandečić, D., Krötzsch, M.: Wikidata: a free collaborative knowledgebase. Com-
munications of the ACM 57(10), 78–85 (2014)
30. Wade, A.D.: The semantic scholar academic graph (s2ag). In: Companion Proceed-
ings of the Web Conference 2022. pp. 739–739 (2022)
31. Zheng, D., Song, X., Ma, C., Tan, Z., Ye, Z., Dong, J., Xiong, H., Zhang, Z.,
Karypis, G.: Dgl-ke: Training knowledge graph embeddings at scale. In: Proceed-
ings of the 43rd International ACM SIGIR Conference on Research and Develop-
ment in Information Retrieval. pp. 739–748 (2020)
|
ai_researcher | 4 | "I'm_categorizing_LLM_as_a_productivity_tool"_Examining_ethics_of_LLM_use_in_HCI_research_practices.pdf
arXiv:2401.08772v2 [cs.CL] 12 Apr 2024
HUIXIANGDOU: OVERCOMING GROUP CHAT SCE-
NARIOS WITH LLM-BASED TECHNICAL ASSISTANCE
Huanjun Kong
Songyang Zhang
Jiaying Li
Min Xiao
Jun Xu
Kai Chen
Shanghai AI Laboratory
ABSTRACT
In this work, we present HuixiangDou1, a technical assistant powered by Large
Language Models (LLM). This system is designed to assist algorithm developers
by providing insightful responses to questions related to open-source algorithm
projects, such as computer vision and deep learning projects from OpenMM-
Lab. We further explore the integration of this assistant into the group chats of
instant messaging (IM) tools such as WeChat and Lark. Through several itera-
tive improvements and trials, we have developed a sophisticated technical chat
assistant capable of effectively answering users’ technical questions without caus-
ing message flooding. This paper’s contributions include: 1) Designing an algo-
rithm pipeline specifically for group chat scenarios; 2) Verifying the reliable per-
formance of text2vec in task rejection; 3) Identifying three critical requirements
for LLMs in technical-assistant-like products, namely scoring ability, In-Context
Learning (ICL), and Long Context. We have made the source code, android app
and web service available at Github, OpenXLab and YouTube to aid in future re-
search and application. HuixiangDou is applicable to any group chat within IM
tools.
1 INTRODUCTION
Authors of open-source projects often set up user groups on IM tools (like WeChat, Slack, Discord,
etc.) for discussing project-related technical questions. As the number of users gradually increases,
the maintainers, aiming to reduce the time spent on answering user questions while ensuring these
questions are addressed, tend to pin some content or set up a bot to automatically answer FAQs.
However, user inquiries are strongly correlated with their local development environments, and most
messages in the group are unrelated to the project. However, traditional NLP solutions can neither
parse the users’ intent nor often provide the answers they desire.
ChatGPT, a large language model service from OpenAI, has a good performance in multiple test
sets and natural language communication. However, directly integrating ChatGPT into group chats
could lead to more severe issues.
• ChatGPT is designed for single-user chat. If it responds to too many messages within a group,
it may impact others’ experience and cause them to leave the group.
• For really valuable queries such as code implementation principles and modification meth-
ods, ChatGPT fails to provide correct answers. This is because its training data comes
from the public internet, not domain-specific knowledge, and its data cannot be updated
immediately with code modifications.
• Even though ChatGPT exhibits high accuracy in numerous datasets, it still faces the issue
of hallucination. For example, asking ”Who is the author of ncnn?” can yield an incorrect
response related to ”Nvidia Compute Library”.
Hence, a technical assistant operating in group chats has different requirements.
1HuixiangDou is a dish from the Chinese classical story ’Kong Yiji’. It’s a meal Kong Yiji would routinely
order from a local tavern, reflecting his humble conditions yet exalted spirit.
Target true help-seekers A technical assistant should not respond to non-technical content, such as politics, chit-chat, or personal information. It should only be activated to respond to technical inquiries when users genuinely require assistance.
Strictly no hallucination Even a single instance of hallucination could make users perceive the
bot as unreliable from a product perspective. Therefore, the system is implemented to avoid creating
any false impressions of understanding.
Understand domain-specific knowledge Possessing exclusivity not found in the public internet
is the fundamental value of an assistant. Simultaneously, it can update the version content of the
knowledge base at a relatively low cost.
No rush for response Users might ask questions late at night without having much expectation
for response time. Therefore, we can adopt more complex processing procedures.
2 APPROACH
In addressing these unique needs, we started with a basic version and arrived at our current solution
after making two improvements. As show in Figure 1, our final version consists of three parts:
Preprocess, Rejection and Response.
The underlying philosophy of HuixiangDou is to eliminate irrelevant noise to improve precision;
enhance retrieval capabilities to increase recall.
Figure 1: The overall structure of the approach. After the user’s message is preprocessed, small talk
will be filtered out, and only genuine questions will be responded to.
2.1 PREPROCESS USER INPUT
In a chat group, multiple users may pose questions and communicate among themselves. How-
ever, the typical LLM chat template only supports three roles: system, user and bot. Hence, we
concatenate groupid and userid as the unique ID for users, in order to accommodate chat template.
Given that users are unlikely to describe their problem completely in one go, we pack multiple
consecutive messages into a single one. In the process, we use OCR service to parse image and
disregard other elements such as video, emojis, and voice messages.
Furthermore, extremely short messages that do not pose algorithmic challenges and messages that
clearly do not seek interaction with the assistant by quoting others will also be disregarded.
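A minimal sketch of this preprocessing step is shown below; the ocr callable and the length threshold are illustrative assumptions rather than the exact values used in HuixiangDou.

from dataclasses import dataclass

@dataclass
class RawMessage:
    group_id: str
    user_id: str
    text: str = ""
    image_path: str = ""

def preprocess(messages, ocr, min_len=8):
    # Group messages per user (a simplification of packing consecutive messages).
    packed, order = {}, []
    for msg in messages:
        uid = f"{msg.group_id}|{msg.user_id}"   # concatenate group id and user id
        text = msg.text
        if msg.image_path:                      # parse images; other media are ignored
            text += " " + ocr(msg.image_path)
        if uid not in packed:
            packed[uid] = []
            order.append(uid)
        packed[uid].append(text)
    queries = []
    for uid in order:
        merged = " ".join(packed[uid]).strip()
        if len(merged) >= min_len:              # drop extremely short messages
            queries.append((uid, merged))
    return queries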
2.2 REJECTION PIPELINE
In the group chat scenario, hallucinations come from two sources: user gossip and the model itself (where the model’s training data and domain knowledge are not aligned). The rejection pipeline is designed to dismiss casual chat-like discourse, as shown in Figure 2.
Figure 2: The structure of rejection pipeline. We build a two-stage refusal-to-answer filter using
text2vec and LLM scoring.
Refusal to Answer Based on Text2Vec LangChain (langchain contributors, 2023) and wenda
(wenda contributors, 2023) were originally used for RAG. After repeated tests, we found their retrieval abilities to be merely average, yet surprisingly suitable for telling whether a question deserves to be answered. Undesired questions are those whose topics are too distant from the knowledge base.
Refusal to Answer Based on LLM Scoring Because the text2vec model judges topic similarity, it is easily influenced by tone words in group chat questions. From the perspective of the
text2vec model, there is a high degree of similarity between ’This development board is very
good’ and ’This board is poorly designed’. Influenced by moral and other factors, humans do not
believe that these two sentences express the same meaning.
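The two-stage refusal logic can be sketched as follows; embed (a text2vec model) and llm_score (an LLM call returning a 0-10 score) are placeholders, and both thresholds are illustrative.

import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def should_answer(query, kb_chunks, embed, llm_score,
                  sim_threshold=0.5, score_threshold=5):
    # Stage 1: is the topic close enough to the knowledge base?
    q = embed(query)
    best_sim = max(cosine(q, embed(chunk)) for chunk in kb_chunks)
    if best_sim < sim_threshold:
        return False   # too far from domain knowledge -> refuse
    # Stage 2: is it actually a question worth answering (not sentiment or chit-chat)?
    return llm_score(f'Is "{query}" a technical question? Score 0-10.') >= score_threshold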
2.3 RESPONSE PIPELINE
Clearly, models with robust In-Context Learning capabilities can mitigate internal hallucinations through search mechanisms. As shown in Figure 3, the response pipeline is designed to identify the underlying background knowledge of the issue.
The key here is to determine the importance of the information source and use them in order.
Extract Keywords User queries often contain many modal words, which can greatly impact the
precision of text2vec models. Therefore, we cannot directly use original queries for text2vec search.
As LLM excels at NLP part-of-speech segmentation tasks, we leverage it to extract keywords and
phrases from the query.
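A minimal sketch of this keyword extraction step, with llm_chat standing in for whichever chat service is used:

def extract_keywords(query, llm_chat):
    prompt = (
        "Extract the technical keywords and phrases from the following question, "
        "separated by commas, without any explanation.\n"
        f"Question: {query}"
    )
    reply = llm_chat(prompt)
    return [kw.strip() for kw in reply.split(",") if kw.strip()]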
Feature and Rerank We mixedly use LangChain and BCEmbedding (Netease, 2024) to retrieve
domain-specific knowledge. In this scenario, our search result is a list of document snippets. To
fully utilize the context length of the model, we also employ LLM scoring to judge the relevance
Figure 3: The structure of response pipeline. We employ scoring and partial ordering to filter high-
quality text from rerank model, web search and knowledge graph for the LLM to generate responses.
To save costs, we mix and schedule different LLMs. We have established a set of security mecha-
nisms to ensure that replies to chat groups do not involve sensitive topics.
between the query and the document. This helps avoid any distractions for the LLM from irrelevant
inputs. It’s evident that LLM’s In-Context Learning capability is extremely crucial for this scenario.
Due to the varying performance of different text2vec models, the Response Pipeline does not share
a feature database with the Rejection Pipeline.
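The retrieve-then-rerank step can be sketched as follows; retriever, llm_score, and count_tokens are placeholders, and the top-k value, relevance threshold, and token budget are illustrative assumptions.

def build_context(query, retriever, llm_score, count_tokens,
                  top_k=20, relevance_threshold=5, token_budget=8000):
    snippets = retriever(query, top_k=top_k)
    scored = []
    for snippet in snippets:
        s = llm_score(
            f'Question: "{query}"\nDocument: "{snippet}"\n'
            "Rate the relevance from 0 to 10. Answer with a number only."
        )
        if s >= relevance_threshold:
            scored.append((s, snippet))
    scored.sort(reverse=True)            # partial ordering by relevance
    context, used = [], 0
    for s, snippet in scored:            # pack the best snippets within the token budget
        n = count_tokens(snippet)
        if used + n > token_budget:
            break
        context.append(snippet)
        used += n
    return "\n\n".join(context)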
Web Search We first retrieve multiple search results (the number depends on the maximum token length supported by the model) and then utilize LLM scoring to filter the results associated with the question. These results are finally packed into the background document. Web search may return illegal content, so safety filtering is also necessary.
As stated by Hsieh et al. (2024), although many models claim to perform excellently on the needle-in-a-haystack task, their true long-context ability remains questionable. Thus we cannot feed all documents into the LLM at one time. This step involves deciding which pieces of information will make up the
final input for the LLM Chat based on prior knowledge. For instance, for PyTorch-related questions,
we tend to look up the official PyTorch documentation rather than some tech blog.
The trick here is that the data quality from web searches is not controllable and necessitates stringent
review.
Knowledge Graph Search engines face the entire spectrum of internet information, but the back-
ground information implicit in the group chat technical assistant isn’t fully utilized. For instance,
users wouldn’t actually ask mmdetection (Chen et al., 2019) questions in the opencompass (open-
compass contributors, 2023) user group.
Based on sourcegraph (McColl et al., 2013), we built a unique search engine for each repository,
routing queries from different groups accordingly. This improvement enables the assistant to answer
difficult questions that internet searches can’t locate, which we discuss further in our LLM paging
experiments.
Scoring The raw string responses of an LLM cannot be directly used for control flow in Python or Java, hence we implement process control through LLM scoring, such as for intent recognition and relevance assessment. The Experiments section will showcase more examples.
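A minimal sketch of this pattern, in which the model is asked for a bare number that is parsed into an integer and used in an if statement (llm_chat is a placeholder for the underlying chat service):

import re

def llm_score(prompt, llm_chat, default=0):
    reply = llm_chat(prompt)
    match = re.search(r"\d+", reply)    # tolerate extra words around the number
    return int(match.group()) if match else default

def is_question(sentence, llm_chat, threshold=5):
    prompt = (
        "Determine whether the following sentence is a topical interrogative "
        f'sentence, scored from 0 to 10. Provide the score directly.\n"{sentence}"'
    )
    return llm_score(prompt, llm_chat) >= threshold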
Hybrid LLM Service Our product focuses more on cost-efficiency and does not insist on a single
model possessing all capabilities. We treat LLM chat as an RPC (Remote Procedure Call) service
that can internally integrate multiple models for use as needed.
The hybrid service is not a mishmash; it requires identifying the strengths of various models first and
then invoking them based on the circumstances. In HuixiangDou, we can fully leverage Internlm2’s
scoring capability and kimi chat’s long context ability.
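A minimal sketch of such routing is given below; the backend names and the rough token estimate are assumptions for illustration, not the actual service wiring.

def hybrid_chat(prompt, backends, needs_scoring=False, max_backend_tokens=16000):
    # backends: {"scoring": callable, "long_context": callable, "default": callable}
    if needs_scoring:
        return backends["scoring"](prompt)        # e.g., a model with stable scoring
    if len(prompt) // 4 > max_backend_tokens:     # rough token estimate
        return backends["long_context"](prompt)   # e.g., a 128k-context model
    return backends["default"](prompt)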
LLM Response Directly using snippets to answer questions can lead to local optima. We read the
original text corresponding to the snippet and hand it over to the LLM for processing along with the
original question. The experimental part will showcase our work on Long Context.
Ultimately, we use LLM scoring to evaluate the relevance between the response and the query. If
the relevance is low, assistant will not respond.
Security Many regions emphasize the security of AI applications. To ensure foolproof safety, we
implemented four seat belts:
• Check all string variables and their association with prohibited topics based on LLM scoring to
prevent the generation of illegal content.
• Integrate traditional security service to check whether the assistant’s responses are illegal.
• Set working hours for the assistant to ensure all activities are under human supervision.
• Everyone can withdraw HuixiangDou’s response if they deem inappropriate.
Model                       Precision  Recall
text2vec-large-chinese      0.92       0.99
text2vec-bge-large-chinese  0.81       0.95

Table 1: Test the refusal to answer using different text2vec models on manually annotated data. The text2vec model has demonstrated strong robustness in the refusal-to-answer task.
3 EXPERIMENTS
In this section, we validate the feasibility of key technical points during the iterative process of
the pipeline. Section 3.1 presents the fine-tuning process and conclusions of the LLM. Section
3.2 demonstrates Rejection Pipeline effects. Section 3.3 details the implementation and testing
conclusions of the scoring method. Section 3.4 is dedicated to the necessary experiments with Long
Context responses. Section 4.2 is an attempt to further enhance the search capabilities.
3.1 FINE-TUNED MODEL
Base model selection Due to resource limitations, we cannot train from scratch and must select a
base model for fine-tuning. Our selection criteria are as follows.
• Understanding domain-specific terminologies. That means training data includes the vocabulary
needed for business operations. Otherwise, we believe that the required results cannot be calculated
using attention score.
• Long context. Since we can use ReRoPE (Su, 2023) or dynamic NTK (emozilla and bloc97, 2023)
for expansion, a model supporting RoPE can be considered capable of handling long context.
• In-Context Learning (ICL) and stable scoring ability.
Data preparation Our training data comprises 28,000 QA pairs, which are made up of three parts:
1. While cleaning existing OpenMMLab group chat data, we removed personal information and divided the dialogues into the QA format required for training, yielding about 8,000 question-answer pairs.
2. For unanswered questions, we constructed responses using a larger LLM. These account for
approximately 12,000 of the total.
3. We also scraped closed issues from GitHub, amounting to about 8,000 entries.
Train and test We used the XTuner (xtuner contributors, 2023) qLoRA method to fine-tune the 7B and 13B models, respectively. Our learning rate is 2e-5 and we train for 5 epochs. Regardless of the combination, there were significant issues with hallucinations. In the best version, the model learned colloquial expressions from users in WeChat groups rather than technical answers, as shown in Appendix A.
We believe that the biggest issue lies in data quality. Other users answer the questions casually
and unprofessionally. For domain-specific questions, the answers provided by the LLM are not correct.
3.2 TEXT2VEC IN REJECTION PIPELINE
We manually annotated hundreds of user contents, with human judgement determining whether they
were related to domain-specific knowledge. We then used different text2vec models to construct a
database and test the accuracy of refusal to answer. See table 1.
We also examined the impact of various text split methods on precision, including langchain.MarkdownHeaderTextSplitter, langchain.CharacterTextSplitter, and their combined implementation. Experiments showed that the impact of the split method on the precision of refusal to answer can be disregarded.
3.3 LLM SCORING IN INTENT RECOGNITION
LLM Scoring has been utilized in intent determination, the evaluation of relevance between ques-
tions and background, as well as within security contexts.
This is implemented by determining the final score of the task. In engineering practice, we often use
integers and booleans instead of strings to determine the result of an if statement. Appendix B is the
LLM scoring prompt.
We randomly selected 1,303 domain-related queries and used InternLM2-7B to estimate the likelihood of each query being a question; the higher the score, the more likely it is a question. As shown in Figure 4, 11.6% of the content consists of user questions, which is in line with common sense.
Figure 4: Question likelihood with InternLM2-7B on 1,303 domain-related group chat sentences,
11.6% are questions. This distribution aligns with common sense, and the scoring method can
effectively handle intent recognition tasks.
While elaborating on the problem can enhance the model’s performance, the model’s perspective
of the world doesn’t straightforwardly equate to that of a human. For instance, a submarine can
”swim,” but this is not equivalent to human swimming.
We extracted 11,362 sentences from the content sent by users. To improve the precision of the rejection pipeline, we included scoring examples in the prompt (see Appendix C). However, after adding these
examples, the scores for all 7,753 pieces of data increased.
But in reality, more than 80% of group chat consists of idle chatter. If precision improves, the score
distribution should present a polarized state.
3.4 LONG CONTEXT OPTIMIZATION
Based on our experience in Chinese-English bilingual scenarios, the token length of a search result
can exceed 11k. Considering the prompt and historical dialogue, the model’s max token length
should be more than 16k. Only with 32k can we achieve relatively good results.
Considering the prohibitive training cost of YaRN (Peng et al., 2023), we optimized ReRoPE’s inference performance2 using Triton (triton contributors, 2019), and also introduced dynamic quantization3 based on LMDeploy (lmdeploy contributors, 2023).
Model         Length (Memory)  Precision
baseline      4k (*65GB)       1.0
ReRoPE        14k (79GB)       1.0
ReRoPE+Quant  40k (75GB)       1.0

Table 2: Passkey test results of different methods on openbuddy-llama2-13B-v8.1-fp16. To optimize speed, LMDeploy automatically pre-allocates memory based on the GPU, hence the base version occupies 65GB. We ultimately achieved a 40k-token context on a single card, proving that the ReRoPE method is feasible.
Eventually, we achieved support for 40k token length on an A100 80G card. Table 2 is our precision
test report for passkey retrieval, with the base model being openbuddy-llama2-13B-v8.1-fp16.
4 OTHER ATTEMPTS
In order to enhance the accuracy, we have explored Natural Language Processing (NLP) as well
as prompting techniques, but these methods have insurmountable shortcomings, and thus were ulti-
mately not adopted.
4.1 NLP IN RAG
Since the capabilities of the text2vec model are limited, we have tried to simplify the query and
document with NLP methods. For example, inputting ”How to install mmdet and mmcv” will
identify the CC (coordinating conjunction) part of speech, thereby decomposing the query into two simpler questions. But in actual operation,
we encountered more difficult problems.
• Domain-specific part-of-speech tagging lacks precision. For example, in the field of deep learning,
the part of speech for ”deploy” depends on the context, which is different from daily communication.
• Bilingual problems. HanLP exhibits subpar performance in English, and other well-known
projects do not support Chinese. Utilizing translation APIs to bridge this gap in bilingual models
poses further complications. Due to the lack of appropriate translations for certain terms, it can
result in significant misinterpretations, such as with the term ”transformers”.
4.2 PROMPT TECHNICS
Paging Suppose we want to make LLM understand an entire repository via prompts. Even the lat-
est 192k context length can’t accommodate the full source code of OpenCompass. During ReRoPE
optimization, we also realized that the transformer kv cache and attention score mechanism severely
limit the maximum context length.
Inspired by the operating system paging mechanism, we compressed each Python module into a single description, thereby shrinking the OpenCompass project to within 120k tokens. For user technical queries, we let the LLM decide which modules to view, then extract the module source code for secondary inquiries.
However, in practice, LLM only finds partial source code using a 128k context, and user questions
may involve multiple knowledge points. Appendix D is an LLM Paging example without any web
search nor RAG results.
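A minimal sketch of the paging idea is given below; llm_chat, module_summaries (module name to one-line description), and load_source are hypothetical placeholders.

def paged_answer(question, module_summaries, load_source, llm_chat, max_modules=3):
    # Step 1: show the compressed module catalog and let the LLM pick relevant modules.
    catalog = "\n".join(f"{name}: {desc}" for name, desc in module_summaries.items())
    pick_prompt = (
        f"Question: {question}\nModules:\n{catalog}\n"
        f"List at most {max_modules} module names (comma separated) needed to answer."
    )
    picked = [m.strip() for m in llm_chat(pick_prompt).split(",")
              if m.strip() in module_summaries]
    # Step 2: load only the chosen modules' source code for a second query.
    sources = "\n\n".join(load_source(m) for m in picked[:max_modules])
    return llm_chat(f"Question: {question}\nRelevant source code:\n{sources}\nAnswer the question.")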
Rephrase and Respond Deng et al. (2023) attempts to enhance the prompt using LLM, but this
is constrained by the understanding ability of the base model, making it incapable of extending this
technique to interrogative sentences. Otherwise, it would lead to confusion in the LLM. Here is a
scoring example.
2See https://github.com/InternLM/lmdeploy/pull/625
3See https://github.com/InternLM/lmdeploy/pull/718
Rephrase and Respond Example
User:
”Determine whether the following sentences are topical interrogative sentences, with results
ranging from 0 to 10. Provide scores directly without explanation.”
Rephrase and expand the question, and respond.
Assistant:
1
Figure 5: RaR prompt is not applicable to interrogative sentences.
ReAct Yao et al. (2023) utilizes training data to potentially generate fixed-format json results based
on inputs, which are then employed to invoke tools such as search engines. However, from a product
perspective, being unable to debug indicates a significant risk. In practical use, search behaviors are
often triggered even for simple queries. Considering that this approach requires training data, we
don’t deem it cost-effective for practical use.
5 CONCLUSION AND LIMITS
In this work, we demonstrated the feasibility of using text2vec for refusing to answer, and showed that multiple search methods can substantially mitigate the hallucinations caused by LLMs.
As long as an LLM has the following capabilities, it can sufficiently address most demands within group chat scenarios without resorting to convoluted prompting tricks:
• Understanding domain-specific terminologies.
• Supporting a minimum token length of 16k.
• Scoring capability.
• In-Context Learning.
However, as users’ questions become more professional, it’s increasingly difficult to provide sat-
isfactory responses based on the prompt and search method. This necessitates that the LLM truly
understands the source code in the repository. We think efficient further pretraining is the next-stage solution.
Due to the limitations of the ChatML (OpenAI, 2022) format, we have merely divided the group
messages according to the user, which in fact has led to a significant loss of contextual information.
The new chat format should fully express the context of the problem, the historical messages of
the speaker, and the remarks.
Additionally, users are very fond of first sending log screenshots before asking questions. Many
valuable contexts are contained within these images, but HuixiangDou does not yet support multimodal input. We will address this in future work.
6 ACKNOWLEDGMENTS
• We would like to express our gratitude towards the OpenMMLab users and ncnn contributors for
their understanding and tolerance of the numerous bugs in the technical assistant.
• We are grateful to the teams at OpenCompass, XTuner, and LMDeploy for their guidance during
the exploratory phase of the project.
• Our thanks also go to Moonshot AI and Xinran Xu for providing a free 128k context LLM API.
• We extend our appreciation to Jianlin Su, the author of RoPE, for his profound insights into the
structure of transformers.
• Finally, we want to thank Bowen Li and Kuikun Liu for their ideas on NLP, thank Song Yang for
his android app contribution on Github, thank Wenxing Hu for his method of integrating WeChat
and Siyue Zhao for the proofreading on this Report.
REFERENCES
Kai Chen, Jiaqi Wang, Jiangmiao Pang, Yuhang Cao, Yu Xiong, Xiaoxiao Li, Shuyang Sun, Wansen
Feng, Ziwei Liu, Jiarui Xu, Zheng Zhang, Dazhi Cheng, Chenchen Zhu, Tianheng Cheng, Qijie
Zhao, Buyu Li, Xin Lu, Rui Zhu, Yue Wu, Jifeng Dai, Jingdong Wang, Jianping Shi, Wanli
Ouyang, Chen Change Loy, and Dahua Lin. Mmdetection: Open mmlab detection toolbox and
benchmark. arXiv preprint arXiv:1906.07155, 2019.
Yihe Deng, Weitong Zhang, Zixiang Chen, and Quanquan Gu. Rephrase and respond: Let large
language models ask better questions for themselves, 2023.
emozilla and bloc97. Dynamically scaled rope further increases performance of long context llama
with zero fine-tuning, June 2023. URL https://www.reddit.com/r/LocalLLaMA/
comments/14mrgpr/dynamically_scaled_rope_further_increases/.
Cheng-Ping Hsieh, Simeng Sun, Samuel Kriman, Shantanu Acharya, Dima Rekesh, Fei Jia, and
Boris Ginsburg. Ruler: What’s the real context size of your long-context language models?,
2024.
langchain contributors. Langchain: Building applications with llms through composability, 2023.
URL https://github.com/langchain-ai/langchain.
lmdeploy contributors. Lmdeploy is a toolkit for compressing, deploying, and serving llms, 2023.
URL https://github.com/internlm/lmdeploy.
Rob McColl, David Ediger, Jason Poovey, Dan Campbell, and David Bader. A Brief Study of Open
Source Graph Databases, 2013.
Netease. Netease youdao’s open-source embedding and reranker models for rag products., Jan 2024.
URL https://github.com/netease-youdao/BCEmbedding.
OpenAI. The official python library for the openai api, 2022. URL https://github.com/openai/openai-python/blob/e389823ba013a24b4c32ce38fa0bd87e6bccae94/chatml.md.
opencompass contributors. Opencompass: A universal evaluation platform for foundation models,
2023. URL https://github.com/open-compass/opencompass.
Bowen Peng, Jeffrey Quesnelle, Honglu Fan, and Enrico Shippole. Yarn: Efficient context window
extension of large language models, 2023.
Jianlin Su. The upgrade path of transformer: 12, infinite extrapolation with rerope?, Aug 2023. URL
https://spaces.ac.cn/archives/9708.
triton contributors. Development repository for the triton language and compiler, 2019. URL
https://github.com/openai/triton.
wenda contributors. wenda: A large language model (llm) invocation platform, 2023. URL https:
//github.com/wenda-LLM/wenda.
xtuner contributors. Xtuner: A toolkit for efficiently fine-tuning llm, 2023. URL https://
github.com/internlm/xtuner.
Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, and Yuan Cao.
React: Synergizing reasoning and acting in language models, 2023.
A FINE-TUNED LLM CHAT EXAMPLES
Fine-tuned LLM Chat Examples
User:
Can mmdet3.0 use version 0.7 of mmcv ?
Assistant:
mmdetection3.0 and mmcv version 0.7 are not compatible due to differences in certain
features. Mmdetection 3.x requires mmcv 2.0 or a higher version. If you need to use an
older version of mmcv, you may consider downgrading to mmdetection 2.x.
Hallucination Example:
User: Does mmdet3.0 support mmcv0.7 ?
Assistant: There’s no need for that, the version of mmcv should be between 1.3.3 and 1.5.0.
Response Style Example:
User: mmyolo uses the coco format, right ?
Assistant: Haha, your understanding is very precise. Yes, mmyolo indeed utilizes the coco
format.
Figure 6: Fine-tuned LLM examples.
B SYSTEM PROMPT IN SCORING
C SYSTEM PROMPT WITH EXAMPLES
D LLM PAGING EXAMPLE
System prompt in Scoring
Prompt:
Determine whether the following sentences are topical interrogative sentences, with results
ranging from 0 to 10. Provide scores directly without explanation.
Scoring standards:
A score of 10 for sentences with subject, predicate, and object that are
interrogative;
points deducted for missing subject, verb, or object; a score of 0 for declarative
sentences;
a score of 0 for non-interrogative sentences.
New question ”{}”, what is the score? Provide scores directly without explanation.
Figure 7: Prompt for LLM scoring task.
LLM Scoring with Examples
Prompt:
Determine whether the following sentences are topical interrogative sentences, with results
ranging from 0 to 10. Provide scores directly without explanation.
Scoring standards:
A score of 10 for sentences with subject, predicate, and object that are
interrogative;
points deducted for missing subject, verb, or object; a score of 0 for declarative
sentences;
a score of 0 for non-interrogative sentences.
Here are some examples:
Question "Excuse me, how should mmdeploy be installed?", Score: 9
Question "How to apply for modification of rpm?", Missing subject, Score: 7
Question "Please check if the environment is installed with your revised version",
It’s a declarative sentence, Score: 0
Question "If you treat bot as a living person, everything will go smoothly", It’s
not a question, Score: 0
New question ”{}”, what is the score? Provide scores directly without explanation.
Figure 8: LLM scoring with examples; building the prompt with examples does not improve the scoring.
LLM Paging Example
User:
How do I output the results of triviaqa 5shot in a summarizer?
Real Solution:
Add an element ’triviaqa 5shot’ to the array summarizer[’dataset abbrs’] in the configuration file.
Assistant:
First, you need to ensure that the results of ”triviaqa 5shot” are stored in your code in an appropriate data structure (such as a dictionary or list), and that these results can be accessed from your data processing or loading section.
Next, you need to find or create a method within the relevant summarizer class that handles and formats the ”triviaqa 5shot” results. This might involve parsing, computing, or transforming data so they can be correctly included in the output of the summarizer.
Finally, you need to insert code into the output methods of the summarizer (like form table or format raw txt) so that the ”triviaqa 5shot” results are included when generating summaries. This might involve adding new columns or rows to a table or text, or modifying existing data to include new information.
Figure 9: LLM Paging only searched the source code of opencompass.summarizer, and was
not aware of the existence of the config file.
|
ai_researcher | 1 | Characterizing_the_orthodontic_patient's_purchase_decision.pdf | YULONG.DOU ET AL: TOOTH ALIGNMENT IN PHOTOGRAPH
1
4
2
0
2
g
u
A
8
3D Structure-guided Network for Tooth
Alignment in 2D Photograph
Yulong Dou1
[email protected]
Lanzhuju Mei1
[email protected]
Dinggang Shen1,2,3
[email protected]
Zhiming Cui1
[email protected]
]
V
C
.
s
c
[
1 School of Biomedical Engineering
ShanghaiTech University
Shanghai 201210, China
2 Shanghai United Imaging Intelligence
Co., Ltd.
Shanghai 200230, China
3 Shanghai Clinical Research and Trial
Center
Shanghai 201210, China
2
v
6
0
1
1
1
.
0
1
3
2
:
v
i
X
r
a
Abstract
Orthodontics focuses on rectifying misaligned teeth (i.e., malocclusions), affecting
both masticatory function and aesthetics. However, orthodontic treatment often involves
complex, lengthy procedures. As such, generating a 2D photograph depicting aligned
teeth prior to orthodontic treatment is crucial for effective dentist-patient communica-
tion and, more importantly, for encouraging patients to accept orthodontic intervention.
In this paper, we propose a 3D structure-guided tooth alignment network that takes 2D
photographs as input (e.g., photos captured by smartphones) and aligns the teeth within
the 2D image space to generate an orthodontic comparison photograph featuring aes-
thetically pleasing, aligned teeth. Notably, while the process operates within a 2D im-
age space, our method employs 3D intra-oral scanning models collected in clinics to
learn about orthodontic treatment, i.e., projecting the pre- and post-orthodontic 3D tooth
structures onto 2D tooth contours, followed by a diffusion model to learn the mapping
relationship. Ultimately, the aligned tooth contours are leveraged to guide the genera-
tion of a 2D photograph with aesthetically pleasing, aligned teeth and realistic textures.
We evaluate our network on various facial photographs, demonstrating its exceptional
performance and strong applicability within the orthodontic industry.
1 Introduction
Orthodontic treatment is an effective remedy for correcting tooth misalignment (i.e., mal-
occlusions). It is estimated that over 90% of people suffer from malocclusion problems of various degrees[2], and most of them can benefit from orthodontic intervention. This treat-
ment not only helps prevent oral diseases at a physiological level, but also significantly boosts
patients’ confidence, enhancing their psychological well-being[23]. However, the complex-
ity of orthodontic procedure, which often spans several months or even years, can deter
individuals from seeking treatment. Hence, the generation and visualization of potential
Figure 1: Orthodontic comparison photographs. For each case, we show the facial photo-
graph with misaligned teeth (left) and the facial photograph with well-aligned teeth generated
by our network (right), and the image in the lower right corner is a zoom-in of mouth region.
post-treatment facial photographs with aesthetical teeth becomes crucial. Such predictive
imaging not only engages and motivates patients but also fosters more effective communica-
tion between orthodontists and their patients.
In clinical practice, visualizing patients’ appearance after orthodontic treatment is re-
ferred to "Visual Treatment Objective" (VTO). This is typically performed on X-Ray images
by deforming soft tissues and skeleton based on detected landmarks[21, 25]. However, this
operation leaves the teeth’s appearance unaltered, making it challenging for patients to make
a realistic comparison. In this study, our objective is to take a 2D photograph as input (e.g.,
photos captured by smartphones), and directly generate the "Orthodontic Comparison Photo-
graph" with aligned teeth and realistic textures, as shown in Figure 1. Note that the generated
photograph should follow the unique tooth alignment property of each patient in real-world
treatment, instead of a simplistic Photoshop approach with template teeth[32].
Currently, significant advancements in deep learning, particularly in generative networks,
have achieved promising results in the computer vision community. However, most of these models heavily rely on paired images, which is not suitable for our task. This is primarily be-
cause collecting paired pre- and post-orthodontic facial photographs is challenging due to the
long-term orthodontic procedure and changes in facial appearance over time. Furthermore,
the 2D photograph does not provide the 3D structure of teeth. Thus, how to learn the clinical
knowledge of tooth alignment, defined on 3D tooth models, from 2D photographs is also a
significant challenge.
In this paper, we propose a 3D structure-guided network for tooth alignment in 2D pho-
tograph. The key idea is to learn the clinical tooth alignment knowledge defined on the
3D intra-oral scanning models[17], and apply the learned property to guide the 2D post-
orthodontic photograph generation. Specifically, we begin by collecting a set of paired pre-
and post-orthodontic intra-oral scanning tooth models in clinics, and render[24] them onto
the oral area of a 2D facial photograph.
In this way, we can obtain the paired pre- and
post-orthodontic tooth contours in 2D photograph (as shown in Figure 2). Then, a Diffusion
Model[11] is applied to learn tooth alignment knowledge, i.e., generating post-orthodontic
tooth contours with the input of pre-orthodontic tooth contours, derived from 3D tooth mod-
els. Note that only the tooth structures are captured, without any texture information. In the
inference process, we can directly take the tooth contours segmented from 2D facial photo-
graph. Finally, guided by the aligned tooth contours, we employ another Diffusion Model
to generate a realistic 2D photograph with aligned teeth. In particular, to enhance simi-
larity with the patient’s original appearance, we incorporate facial skin color and intra-oral
highlights into the generation process, accounting for texture and lighting information. In
the experiment, we collect a large number of photographs from patients suffering from mal-
occlusion problems with various degrees, and achieve superior performance compared to
the state-of-the-art methods, including GAN[8] and Diffusion Models. Furthermore, we also
conduct a user study to validate the alignment and authenticity performance of our algorithm,
demonstrating its potential applicability within orthodontic industry.
2 Related Work
Digital Orthodontics. Digital orthodontics employs digital imaging technologies such as
intra-oral scanning[17], CBCT[7], and panoramic radiograph[1] to provide dentists with
information about the structure and occlusion of patients’ teeth. This helps dentists with
pre-treatment diagnosis and orthodontic treatment planning. A variety of emerging tech-
niques have been introduced in related fields, including tooth segmentation[5, 6], 3D tooth
reconstruction[36, 37], and 3D tooth arrangement[35]. In terms of orthodontic comparison
photographs, Lingchen et al. [20] have introduced iOrthoPredictor which can synthesize an
image of well-aligned teeth based on a patient’s facial photograph and an additional input
of the patient’s 3D dental model. Chen et al. [3] have introduced OrthoAligner which needs
only a facial photograph but no 3D dental model as input, by introducing the concept of
StyleGAN inversion. But OrthoAligner is limited in that it only uses facial photographs to
learn tooth transformation, without utilizing information from 3D dental models.
Image Generation. Image generation is a field of research in computer vision that aims to
generate new digital images by using algorithms or models from scratch or by modifying ex-
isting images. Several models have been proposed for image generation, including GAN[8],
VAE[18], Diffusion Model[11]. Specifically, GAN simultaneously trains the generator and
discriminator to generate more realistic images. Many models based on GAN have been
proposed, such as unsupervised StyleGAN[15], and supervised Pix2pix GAN[13]. VAE is
a generative model that uses variational inference for sampling from probability distribu-
tions. Ho et al. [11] propose Diffusion Model based on Score Matching[12] and Denoising
Autoencoder[34], and elaborate on its mathematical principles. Diffusion Model is a gener-
ative model that utilizes a forward process of step-by-step adding noise and a backward de-
noising process to generate high-quality images. Choi et al. [4] propose a reference-guided
conditional Diffusion Model that fine-tunes the backward denoising process. Singh et al.
[31] introduce condition noise to navigate Diffusion Model. Saharia et al. [29] propose an
image-to-image Diffusion Model guided by condition image.
3 Method
3.1 Overview
Overall, our goal is to design a tooth alignment network that incorporates 3D structural infor-
mation derived from intra-oral scanning models, which is essential for clinical orthodontic
treatment, and guide the orthodontic comparison photograph generation.
We have pre-orthodontic intra-oral scanning models S = {S1, S2, ..., SN}, post-orthodontic
intra-oral scanning models ˆS = { ˆS1, ˆS2, ..., ˆSN} of same patients collected in clinics, and un-
paired facial photographs I = {I1, I2, ..., IM} collected by smartphones. Given that the facial
photographs I and 3D intra-oral scanning models S, ˆS in our dataset are not paired, we design
a module, named Align-Mod, for tooth alignment that can still incorporate 3D structural in-
formation from intra-oral scanning models as guidance. This module randomly selects the
pre- and post-orthodontic intra-oral scanning models (i.e., Sr ∈ S and ˆSr ∈ ˆS) for an unpaired
facial photograph (i.e., Ir ∈ I), and makes a coarse 2D-3D registration between Sr, ˆSr and
Ir, respectively. Then, the 3D tooth structures are projected onto the 2D facial photograph
to obtain pre-orthodontic tooth contours Cr ∈ R128×256 and post-orthodontic tooth contours
ˆCr ∈ R128×256. In this way, our Align-Mod module can learn the tooth transformation
T (·), which represents the clinical orthodontic knowledge derived from the 3D intra-
oral scanning models.
In addition to the pre-trained tooth alignment module, we also design a segmentation
module, named Segm-Mod, to locate the mouth region and segment tooth contours C
from facial photographs I, and a generation module, named Gen-Mod, to generate the
facial image with aesthetically pleasing teeth.
In summary, the three modules designed in this framework are shown in Figure 2.
Figure 2: Overall pipeline. When a facial photograph is input into our network, it first goes
through Segm-Mod to obtain oral mask, mouth region and tooth contours. Then it enters Pre-
trained Align-Mod to predict well-aligned tooth contours, and finally goes through Gen-Mod
to generate a facial photograph with well-aligned teeth.
3.2 Segmentation Module
To begin with, Segm-Mod needs to detect the position of face[16, 30] and obtain a standard-
ized face Fi ∈ R512×512 from any given facial photograph Ii ∈ I ⊆ R. As shown in Figure
2, to accurately locate the mouth, we propose an oral detection network OD(·)[19, 38] to
segment the oral mask Mi ∈ R128×256 and crop the mouth region Ri ∈ R128×256 from the
standardized face Fi. Then, to obtain tooth contours Ci ∈ R128×256 which contains struc-
tural information of teeth, we employ a commonly-used supervised segmentation network,
U-Net[27], for segmenting tooth contours Ci from the mouth region Ri. The formulation of
process of Segm-Mod is as:
Ri, Mi = OD(Ii),
Ci = U(Ri, Mi), ∀i = 1, 2, ..., N,
(1)
where OD(·) is the oral detection network performed in 2D facial photograph Ii, U(·) de-
notes the U-Net-based contour segmentation network, and Ci, Ri, Mi are the pre-orthodontic
tooth contours, mouth region and oral mask obtained by Segm-Mod, respectively.
To train the network, we employ the Dice Loss[22] and Weighted Cross-Entropy Loss[28].
Given the imbalanced area between foreground (tooth contours) to background, Dice Loss
has excellent performance in situations with severe imbalance and focuses on learning the
foreground area. Furthermore, Weighted Cross-Entropy Loss can address the imbalance
problem by adjusting the weighted proportion, thus making it a suitable complement to Dice
Loss. Our designed loss function is defined as:
L = wdice · Ldice + wce · Lce.
(2)
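A minimal PyTorch sketch of this loss is given below, assuming a two-class (contour vs. background) logit map; the weights follow the values reported in Section 4.1 (w_dice = 0.8, w_ce = 0.2, foreground weight 20, background weight 1).

import torch
import torch.nn.functional as F

def contour_loss(logits, target, w_dice=0.8, w_ce=0.2, eps=1e-6):
    # logits: (B, 2, H, W); target: (B, H, W) with values {0, 1}
    ce_weight = torch.tensor([1.0, 20.0], device=logits.device)
    loss_ce = F.cross_entropy(logits, target.long(), weight=ce_weight)

    prob_fg = F.softmax(logits, dim=1)[:, 1]          # foreground (contour) probability
    target_fg = target.float()
    inter = (prob_fg * target_fg).sum(dim=(1, 2))
    union = prob_fg.sum(dim=(1, 2)) + target_fg.sum(dim=(1, 2))
    loss_dice = 1.0 - ((2.0 * inter + eps) / (union + eps)).mean()

    return w_dice * loss_dice + w_ce * loss_ce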
3.3 Alignment Module
One of the most innovative aspects of our method is that we can incorporate structural in-
formation from 3D intra-oral scanning models S, ˆS into Align-Mod, which are essential for
clinical orthodontic treatment. We employ a 3D-to-2D Render to project the 3D intra-oral
scanning models S, ˆS into the oral area of the 2D facial photographs, as opposed to the ap-
proach used by Wirtz et al. [36], Zheng et al. [40] to reconstruct 3D dental models from
multi-view tooth photographs. Furthermore, we design a conditional Diffusion Model-based
network for learning the clinical orthodontic knowledge T (·) in the space of tooth contours
Cr, ˆCr obtained by Render.
3.3.1 Render
Since the 3D intra-oral scanning models S, ˆS and facial photographs I are collected from
different environments and sources, i.e., one is in clinics and another is from smartphones
in daily life, we cannot perform precise 3D-2D registration through rigid transformation.
Fortunately, precise registration is not necessary for our task as our purpose is to create
paired tooth contours from the intra-oral scanning models. Therefore, to obtain tooth con-
tours Cr, ˆCr ∈ R128×256, we perform coarse registration based on the landmarks between the
coordinates of tooth cusp points of central and lateral incisors in both render-used intra-oral
scanning models Sr ∈ S, ˆSr ∈ ˆS and facial photograph Ir ∈ I.
We use Numerical Optimization[33] to perform coarse registration and then obtain 2D
tooth contours by projection. The essential principle of projecting 3D to 2D is to solve the
camera parameters. As shown in Equation 3,
ρ m = (K R^T | −K R^T C) (M, 1)^T ,    (3)
where M represents the coordinate of the point in world coordinate system, denoted as M =
(X,Y, Z)T , and m represents the coordinate of the corresponding point in pixel coordinate
system, denoted as m = (u, v, 1)T . ρ is the projection depth, C is the position of camera, R is
the rotation matrix that represents camera pose, and K is the matrix of intrinsic parameters[9].
Specifically, we use the four paired tooth cusp points mentioned above on both the facial
photo and the intra-oral scanning model to derive their coordinates, represented as m and M.
Subsequently, we can resolve camera parameters, primarily the unknown variables in K and
C, given the other known parameters.
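A minimal PyTorch sketch of this optimization is given below. It is an illustration rather than the exact implementation: only the camera position C and a scalar focal length are treated as unknowns, the rotation R is assumed given, and the optimizer settings mirror those reported in Section 4.1 (SGD with learning rate 0.01, decayed by 0.9 every 500 steps).

import torch

def solve_camera(points_3d, points_2d, R, cx, cy, steps=5000):
    # points_3d: (N, 3) landmarks on the intra-oral scan; points_2d: (N, 2) pixel landmarks
    focal = torch.tensor(213.33, requires_grad=True)
    C = torch.zeros(3, requires_grad=True)
    opt = torch.optim.SGD([focal, C], lr=0.01)
    sched = torch.optim.lr_scheduler.StepLR(opt, step_size=500, gamma=0.9)
    for _ in range(steps):
        opt.zero_grad()
        K = torch.stack([
            torch.stack([focal, torch.tensor(0.0), torch.tensor(cx)]),
            torch.stack([torch.tensor(0.0), focal, torch.tensor(cy)]),
            torch.tensor([0.0, 0.0, 1.0]),
        ])
        cam = (points_3d - C) @ R        # world -> camera coordinates (R^T applied row-wise)
        proj = cam @ K.T                 # apply intrinsics
        uv = proj[:, :2] / proj[:, 2:3]  # divide by the projection depth rho
        loss = ((uv - points_2d) ** 2).mean()
        loss.backward()
        opt.step()
        sched.step()
    return focal.detach(), C.detach()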
3.3.2 Tooth Transformation
Once we render 3D intra-oral scanning models Sr, ˆSr onto the 2D facial photograph Ir and
obtain pre- and post-orthodontic tooth contours Cr, ˆCr as mentioned above, we can then learn
the T (·), i.e., clinical orthodontic knowledge. We employ a network based on image-to-
image Diffusion Model[29], as shown in Figure 2. To emphasize the generation of oral
regions, we introduce Gaussian noise Gr ∈ R128×256 generated within the oral mask Mr. We
concatenate the pre-orthodontic tooth contours Cr with the Gaussian noise Gr to form the
condition information, which serves as guidance for our diffusion model. Therefore, the
formula for T (·) during the pre-trained process is given as:
ˆCr = T (Cr c⃝ Gr),
(4)
where c⃝ denotes channel-wise concatenation.
Since we have pre-learned the T (·), which represents the clinical orthodontic knowledge
of tooth transformation, we can apply the learned knowledge to process the 2D tooth contours
Ci derived from Segm-Mod. Hence, we concatenate the tooth contours Ci, together with intra-
oral Gaussian noise Gi, and then feed them into our diffusion model, expecting a reasonable
prediction for well-aligned tooth contours ˆCi. The inference process is formulated as:
ˆCi = T (Ci c⃝ Gi), ∀i = 1, 2, ..., N.
(5)
3.4 Generation Module
After obtaining well-aligned tooth contours ˆCi through the tooth transformation T (·) of our
Align-Mod, we aim to generate a mouth region with realistic teeth ˆRi guided by ˆCi. To
achieve this, we adapt a conditional Diffusion Model-based generative network G(·) in our
Gen-Mod. We still introduce Gaussian noise Gi ∈ R128×256 generated within the oral mask
Mi to emphasize the generation of oral regions. Besides well-aligned tooth contours ˆCi and
intra-oral Gaussian noise Gi mentioned above, we additionally introduce intra-oral highlights
Li ∈ R128×256 and facial skin color Ki ∈ R128×256, which are helpful for generating more re-
alistic tooth color and environmental lighting. Then, four of them are concatenated together
as the condition information to guide our generation model.
In terms of intra-oral highlights, we employ Contrast Limited Adaptive Histogram Equal-
ization (CLAHE) [41] and Thresholding, aimed to enhance image contrast and detect high-
lights within the oral region. Specifically, we utilize CLAHE as in Equation 6 to improve
local contrast in mouth region Ri by using a histogram equalization approach with a specified
contrast limit of 5 in 20 × 20 local window, thus preventing over-amplification of noise in
flat areas while enhancing contrast in textured areas, defined as:
g_xy = ((L − 1) / S_xy) · Σ_{z=0}^{f_xy} h(x, z),   f′_xy = g_xy × (L − 1),   CLAHE_xy = { f′_xy, if f′_xy < L − 1;   L − 1, if f′_xy ≥ L − 1 },    (6)
where S_xy denotes the size of the local window, f_xy denotes the pixel intensity at pixel (x, y), h(x, z) is the histogram of pixel intensities in the local window, L is the number of intensity levels, g_xy is the gain factor, and f′_xy is the transformed pixel intensity.
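A minimal OpenCV sketch of this highlight extraction is given below; the clip limit of 5 follows the text, while mapping the 20 x 20 local window onto OpenCV's tileGridSize argument and the threshold value of 220 are assumptions for illustration.

import cv2

def extract_highlights(mouth_bgr, oral_mask, clip_limit=5.0, tile=(20, 20), thresh=220):
    gray = cv2.cvtColor(mouth_bgr, cv2.COLOR_BGR2GRAY)
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile)
    enhanced = clahe.apply(gray)                                     # local contrast enhancement
    _, highlights = cv2.threshold(enhanced, thresh, 255, cv2.THRESH_BINARY)
    return cv2.bitwise_and(highlights, highlights, mask=oral_mask)   # keep the oral region only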
Once we generate a mouth region with realistic teeth ˆRi, we use the face and mouth po-
sition stored in Segm-Mod to replace the oral region in the initial facial photograph, thereby
obtaining a facial photograph with well-aligned and aesthetically pleasing teeth ˆIi for VTO
(see Figure 2). The formulation of process of Gen-Mod is shown as:
ˆIi = G( ˆCi c⃝ Gi c⃝ Li c⃝ Ki), ∀i = 1, 2, ..., N,
(7)
where c⃝ denotes channel-wise concatenation of segmented tooth contours ˆCi, Gaussian
ˆIi is the predicted facial photo-
noise Gi, intra-oral highlights Li and facial skin color Ki.
graph with well-aligned and aesthetically pleasing teeth through the Diffusion Model-based
generative network G(·) mentioned in Gen-Mod.
4 Experiments
4.1 Experiments Settings
Dataset. Our dataset comprises 1367 facial photographs I, of which 1129 are used to train
Segm-Mod and Gen-Mod, 138 are used to create datasets through Render in Align-Mod, and
the remaining 100 are reserved for testing our overall pipeline. For the 138 render-used facial
photographs, we manually annotate the coordinates of tooth cusp points of central and lateral
incisors in the upper jaw. Moreover, we have 1257 3D intra-oral scanning models S collected
in dental clinics, along with their corresponding orthodontic treatment plans provided by
dentists. In this way, we also obtain the corresponding 1257 post-orthodontic intra-oral
scanning models ˆS. Note that for each of the 138 render-used facial photographs,
we randomly select 10 models from the pool of 1257 intra-oral scanning models to perform the
Render process as mentioned in Section 3.3.1. Thus we have 1380 pre- and post-orthodontic tooth
contours Cr and ˆCr, respectively, for training Align-Mod.
Implementation Details. The proposed method is implemented in PyTorch with 2 NVIDIA
A100 80GB GPUs. By iteratively tuning and training Segm-Mod, we ultimately choose to
assign 0.8 to wdice and 0.2 to wce, along with a weight of 20 for the foreground and a weight
of 1 for the background in Lce. In terms of Align-Mod and Gen-Mod, we set the batch size
of our diffusion model to 60. The learning rate is 5e-5, and we use Exponential Moving
Average [10] with β = 0.9999 to update the parameters of the diffusion model. Lastly, regarding the
parameters in Render, we set the focal length in the camera intrinsic parameters to 213.33, and we
use the SGD [26] optimizer with an initial learning rate of 0.01 and a learning rate scheduler,
which reduces the learning rate by a factor of 0.9 every 500 steps.
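A minimal PyTorch sketch of these optimization settings is given below. Only the stated hyperparameters (learning rates, EMA decay, step schedule) come from the text; the choice of base optimizer for the diffusion model, the use of StepLR for the Render stage, and the placeholder parameter tensors are our assumptions.

import torch

# Render stage: SGD with initial lr 0.01, decayed by 0.9 every 500 steps (StepLR assumed).
render_params = [torch.nn.Parameter(torch.zeros(6))]  # hypothetical pose parameters
render_optimizer = torch.optim.SGD(render_params, lr=0.01)
render_scheduler = torch.optim.lr_scheduler.StepLR(render_optimizer, step_size=500, gamma=0.9)

# Diffusion model: lr 5e-5 (optimizer type assumed), EMA of the weights with beta = 0.9999.
diffusion_model = torch.nn.Linear(16, 16)   # placeholder for the diffusion network
ema_model = torch.nn.Linear(16, 16)         # shadow copy holding the EMA weights
ema_model.load_state_dict(diffusion_model.state_dict())
optimizer = torch.optim.Adam(diffusion_model.parameters(), lr=5e-5)

@torch.no_grad()
def ema_update(model, ema, beta=0.9999):
    # Exponential moving average: ema = beta * ema + (1 - beta) * current weights.
    for p, p_ema in zip(model.parameters(), ema.parameters()):
        p_ema.mul_(beta).add_(p, alpha=1.0 - beta)

# Inside a training step, after optimizer.step(), one would call:
#   ema_update(diffusion_model, ema_model)
# and, in the Render optimization loop, render_scheduler.step().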
4.2 Results
Based on the pre-trained Align-Mod, our three-stage network can infer a facial photograph
with well-aligned and aesthetically pleasing teeth ˆIi from the patient's previous photograph Ii,
without requiring any 3D intra-oral scanning model as input, while still benefiting from the
guidance of clinical orthodontic knowledge embedded in the 3D structure of intra-oral scanning
models. To demonstrate the results of our method and provide a more detailed view
of the inference process and confidence level, we present some testing cases in Figure 1 and
Figure 3.
Based on the visual results and the inference process presented, it is evident that our
Segm-Mod has excellent segmentation ability, even for misaligned teeth. Our Align-Mod
also predicts well-aligned tooth contours reliably, closely following the pre-trained
transformation T(·) in the image space, which is crucial for clinical orthodontic
treatment. Besides, our Gen-Mod can infer reasonably realistic teeth whose color and
lighting are similar to the patient's previous photograph and its shooting environment.
Figure 3: Inference process. For each detected mouth region Ri (a), we segment to obtain
the oral mask Mi (b) and oral region (c). We further obtain tooth contours Ci (d) from our
Segm-Mod and input it into our Align-Mod to yield well-aligned tooth contours ˆCi (e). We
finally predict a mouth region with well-aligned teeth ˆRi (f) through our Gen-Mod.
4.3 Comparison
Additionally, we qualitatively compare our tooth alignment network with Pix2pix GAN[8,
13], especially for Align-Mod and Gen-Mod, and the comparison results are visually shown
in Figure 4. Our Diffusion Model-based methods are more capable than the Pix2pix GAN-based
methods, with more reasonable alignment predictions and more realistic tooth color and lighting.
As shown in Table 1, we quantitatively compare our proposed Align-Mod and Gen-Mod with
Pix2pix GAN using the pixel-wise L1 and L2 errors and the LPIPS error [14, 39]. L1 and L2 are
commonly used pixel-wise metrics for quantifying discrepancies between the generated result and
the target, whereas LPIPS is a perceptual metric that measures the perceptual distance and visual
similarity between images. Our methods are consistently better than the Pix2pix GAN-based
methods on all three metrics.
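A minimal sketch of how these three metrics can be computed with PyTorch and the LPIPS reference implementation of [39] is shown below; the choice of the AlexNet backbone and the [0, 1] input scaling are our assumptions.

import torch
import lpips  # reference implementation of the LPIPS metric [39]

lpips_fn = lpips.LPIPS(net='alex')  # assumed backbone; a VGG variant is also available

def evaluate_pair(pred, target):
    """pred, target: float tensors of shape (1, 3, H, W) scaled to [0, 1]."""
    l1 = torch.mean(torch.abs(pred - target)).item()
    l2 = torch.mean((pred - target) ** 2).item()
    # LPIPS expects inputs in [-1, 1].
    perceptual = lpips_fn(pred * 2 - 1, target * 2 - 1).item()
    return l1, l2, perceptual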
4.4 Ablation Study
We have mentioned in Subsection 3.4 that, in order to make our Gen-Mod yield more realistic
tooth color and environmental lighting, we introduce intra-oral highlights Li and facial skin
color Ki and concatenate them together with well-aligned tooth contours ˆCi and intra-oral
Gaussian noise Gi as guidance. To validate the effectiveness of these two condition images in
guiding Gen-Mod, we conduct an ablation study. We design four groups of ablation experiments
as shown in Table 2, all using the Diffusion Model-based generative network and the same
datasets for training. Visual results of the four ablation experiments are shown in Figure 5.
Specifically, Ablation I includes neither of the two condition images, resulting in the
poorest generation performance. Ablation II includes facial skin color, resulting in better
Method                               L1      L2      LPIPS
Diffusion Model-based Align-Mod      0.053   0.319   0.057
Pix2pix GAN-based Align-Mod          0.061   0.343   0.104
Diffusion Model-based Gen-Mod        0.029   0.093   0.038
Pix2pix GAN-based Gen-Mod            0.047   0.148   0.107

Table 1: Quantitative comparisons between different methods on the testing dataset.
Figure 4: Qualitative comparisons. Columns show the input, our result, the Pix2pix GAN result,
and the ground truth (GT). The upper two rows are two testing cases of Align-Mod, and the
lower two rows are two testing cases of Gen-Mod.
                facial skin color   intra-oral highlights
Ablation I             -                     -
Ablation II            ✓                     -
Ablation III           -                     ✓
Ablation IV            ✓                     ✓

Table 2: Four groups of ablation experiments.

Figure 5: Visual results of the four ablation experiments (Ablation I-IV) and the ground truth (GT).
tooth color, but the lighting information is lost compared to the original image. Ablation III adds
intra-oral highlights, significantly restoring the environmental lighting but with less realistic tooth
color. Ablation IV achieves the best generation performance, with both realistic facial
skin color and intra-oral highlights.
4.5 User Study
To further demonstrate the reliability and credibility of our method, we conduct a user study
in which we invite 30 individuals to assess the alignment and authenticity of photographs
(focusing only on the mouth region) generated by our method. Specifically, for assessing
alignment, we randomly select 10 generated facial photographs and 10 photographs from patients
who have received orthodontic treatment. Participants are asked to rate the alignment of teeth
in the photographs on a scale of 1 to 5, with higher scores indicating better alignment. Similarly,
for assessing authenticity, we randomly select 10 generated facial photographs and 10 real ones,
and ask participants to vote on whether each is real or fake. Table 3 shows the average alignment
scores and the average percentage of photographs classified as "real". Compared with well-aligned
or real photographs, the photographs generated by our method achieve high scores in terms of
both alignment and authenticity, only slightly lower than the scores of well-aligned teeth in real
photographs.
5 Discussion
In this work, we propose a 3D structure-guided tooth alignment network to effectively gen-
erate orthodontic comparison photographs. According to the experimental results above, our
                Well-aligned photos   Real photos    Ours
Alignment             3.84                 -         3.82
Authenticity           -                72.67%      65.00%

Table 3: Voting results of the user study. The first row gives the average alignment score; the
second row gives the average percentage of photographs classified as "real".
method utilizes 3D dental models to learn the orthodontic knowledge in the image space.
The 3D structure successfully guides the learning and prediction of our network, giving our
method practical clinical significance. Additionally, we introduce the Diffusion Model into the
task of orthodontic comparison photograph generation and demonstrate its effectiveness in our
Align-Mod and Gen-Mod. Importantly, different from state-of-the-art methods [3, 20] in the field,
our method incorporates clinical orthodontic knowledge into the network without requiring dental
models as additional input. This makes our method more clinically practical, user-friendly and
applicable within the orthodontic industry.
Our method, however, is not without limitations. For example, our network cannot handle
certain cases, such as severely misaligned teeth or very wide smiles. Moreover, our method
cannot take collision and occlusal relationships into consideration, since it operates purely
in the image space. In the future, we plan to first reconstruct 3D tooth models from the 2D
photograph and then perform the tooth alignment.
6 Conclusion
In this paper, we have designed a 3D structure-guided network to infer a facial photograph
with well-aligned and aesthetically pleasing teeth based on the patient’s previous facial pho-
tograph. Our method stands out from existing approaches as it learns clinical orthodontic
knowledge from 3D intra-oral scanning models, making it highly reliable and potentially
applicable in clinical practice.
Acknowledgements
This work was supported in part by NSFC grants (No. 6230012077).
References
[1] Christos Angelopoulos, Aurelija Bedard, Jerald O Katz, Stelios Karamanis, and Nikos
Parissis. Digital panoramic radiography: An overview. In Seminars in Orthodontics,
volume 10, pages 194–203. Elsevier, 2004.
[2] Olaf Bernhardt, Karl-Friedrich Krey, Amro Daboul, Henry Voelzke, Stefan Kindler,
Thomas Kocher, and Christian Schwahn. New insights in the link between malocclu-
sion and periodontal disease. Journal of clinical periodontology, 46(2):144–159, 2019.
[3] Beijia Chen, Hongbo Fu, Kun Zhou, and Youyi Zheng. Orthoaligner: Image-based
teeth alignment prediction via latent style manipulation. IEEE Transactions on Visual-
ization and Computer Graphics, 2022.
[4] Jooyoung Choi, Sungwon Kim, Yonghyun Jeong, Youngjune Gwon, and Sungroh
Yoon. Ilvr: Conditioning method for denoising diffusion probabilistic models. arXiv
preprint arXiv:2108.02938, 2021.
[5] Zhiming Cui, Changjian Li, and Wenping Wang. Toothnet: automatic tooth instance
segmentation and identification from cone beam ct images. In Proceedings of the
IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6368–
6377, 2019.
[6] Zhiming Cui, Yu Fang, Lanzhuju Mei, Bojun Zhang, Bo Yu, Jiameng Liu, Caiwen
Jiang, Yuhang Sun, Lei Ma, Jiawei Huang, et al. A fully automatic ai system for tooth
and alveolar bone segmentation from cone-beam ct images. Nature communications,
13(1):2096, 2022.
[7] W. De Vos, Jan Casselman, and G. R. J. Swennen. Cone-beam computerized
tomography (cbct) imaging of the oral and maxillofacial region: a systematic review of
the literature. International journal of oral and maxillofacial surgery, 38(6):609–625,
2009.
[8] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley,
Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial networks.
Communications of the ACM, 63(11):139–144, 2020.
[9] Richard Hartley and Andrew Zisserman. Multiple view geometry in computer vision.
Cambridge university press, 2003.
[10] David Haynes, Steven Corns, and Ganesh Kumar Venayagamoorthy. An exponential
moving average algorithm. In 2012 IEEE Congress on Evolutionary Computation,
pages 1–8. IEEE, 2012.
[11] Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models.
Advances in Neural Information Processing Systems, 33:6840–6851, 2020.
[12] Aapo Hyvärinen and Peter Dayan. Estimation of non-normalized statistical models by
score matching. Journal of Machine Learning Research, 6(4), 2005.
[13] Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A Efros. Image-to-image trans-
lation with conditional adversarial networks. In Proceedings of the IEEE conference
on computer vision and pattern recognition, pages 1125–1134, 2017.
[14] Justin Johnson, Alexandre Alahi, and Li Fei-Fei. Perceptual losses for real-time style
transfer and super-resolution. In Computer Vision–ECCV 2016: 14th European Con-
ference, Amsterdam, The Netherlands, October 11-14, 2016, Proceedings, Part II 14,
pages 694–711. Springer, 2016.
[15] Tero Karras, Samuli Laine, and Timo Aila. A style-based generator architecture for
generative adversarial networks. In Proceedings of the IEEE/CVF conference on com-
puter vision and pattern recognition, pages 4401–4410, 2019.
[16] Vahid Kazemi and Josephine Sullivan. One millisecond face alignment with an ensem-
ble of regression trees. In Proceedings of the IEEE conference on computer vision and
pattern recognition, pages 1867–1874, 2014.
[17] Hidemichi Kihara, Wataru Hatakeyama, Futoshi Komine, Kyoko Takafuji, Toshiyuki
Takahashi, Jun Yokota, Kenta Oriso, and Hisatomo Kondo. Accuracy and practicality
of intraoral scanner in dentistry: A literature review. Journal of prosthodontic research,
64(2):109–113, 2020.
[18] Diederik P Kingma and Max Welling. Auto-encoding variational bayes. arXiv preprint
arXiv:1312.6114, 2013.
[19] Cheng-Han Lee, Ziwei Liu, Lingyun Wu, and Ping Luo. Maskgan: Towards diverse
and interactive facial image manipulation. In Proceedings of the IEEE/CVF Conference
on Computer Vision and Pattern Recognition, pages 5549–5558, 2020.
[20] YANG Lingchen, SHI Zefeng, WU Yiqian, LI Xiang, ZHOU Kun, FU Hongbo, and
Youyi Zheng. iorthopredictor: model-guided deep prediction of teeth alignment. ACM
Transactions on Graphics, 39(6):216, 2020.
[21] Richard P McLaughlin and John C Bennett. The dental vto: an analysis of orthodontic
tooth movement. Journal of Clinical Orthodontics: JCO, 33(7):394–403, 1999.
[22] Fausto Milletari, Nassir Navab, and Seyed-Ahmad Ahmadi. V-net: Fully convolutional
neural networks for volumetric medical image segmentation. In 2016 Fourth International
Conference on 3D Vision (3DV), pages 565–571. IEEE, 2016.
[23] Minghui Peng, Jing Kang, and Xiao Deng. The role of body image in orthodontic
treatment for adolescents. West China Journal of Stomatology, 35(5):489, 2017.
[24] Matt Pharr, Wenzel Jakob, and Greg Humphreys. Physically based rendering: From
theory to implementation. Morgan Kaufmann, 2016.
[25] G Power, J Breckon, M Sherriff, and F McDonald. Dolphin imaging software: an
analysis of the accuracy of cephalometric digitization and orthognathic prediction. In-
ternational journal of oral and maxillofacial surgery, 34(6):619–626, 2005.
[26] Herbert Robbins and Sutton Monro. A stochastic approximation method. The annals
of mathematical statistics, pages 400–407, 1951.
[27] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks
for biomedical image segmentation. In Medical Image Computing and Computer-
Assisted Intervention–MICCAI 2015: 18th International Conference, Munich, Ger-
many, October 5-9, 2015, Proceedings, Part III 18, pages 234–241. Springer, 2015.
[28] Reuven Rubinstein. The cross-entropy method for combinatorial and continuous opti-
mization. Methodology and computing in applied probability, 1:127–190, 1999.
[29] Chitwan Saharia, William Chan, Huiwen Chang, Chris Lee, Jonathan Ho, Tim Sali-
mans, David Fleet, and Mohammad Norouzi. Palette: Image-to-image diffusion mod-
els. In ACM SIGGRAPH 2022 Conference Proceedings, pages 1–10, 2022.
[30] Florian Schroff, Dmitry Kalenichenko, and James Philbin. Facenet: A unified embedding
for face recognition and clustering. In Proceedings of the IEEE conference on
computer vision and pattern recognition, pages 815–823, 2015.
[31] Vedant Singh, Surgan Jandial, Ayush Chopra, Siddharth Ramesh, Balaji Krishna-
murthy, and Vineeth N Balasubramanian. On conditioning the input noise for controlled
image generation with diffusion models. arXiv preprint arXiv:2205.03859, 2022.
[32] Manoj Kumar Sundar and BDS Venkataraman Chelliah. Ten steps to create virtual
smile design templates with adobe photoshop cs6. Compendium, 39(3), 2018.
[33] Bo D Tapley and JM Lewallen. Comparison of several numerical optimization meth-
ods. Journal of Optimization Theory and Applications, 1:1–32, 1967.
[34] Pascal Vincent. A connection between score matching and denoising autoencoders.
Neural computation, 23(7):1661–1674, 2011.
[35] Guodong Wei, Zhiming Cui, Yumeng Liu, Nenglun Chen, Runnan Chen, Guiqing Li,
and Wenping Wang. Tanet: towards fully automatic tooth arrangement. In Computer
Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020,
Proceedings, Part XV 16, pages 481–497. Springer, 2020.
[36] Andreas Wirtz, Florian Jung, Matthias Noll, Anqi Wang, and Stefan Wesarg. Auto-
matic model-based 3-d reconstruction of the teeth from five photographs with prede-
fined viewing directions. In Medical Imaging 2021: Image Processing, volume 11596,
pages 198–212. SPIE, 2021.
[37] Chenglei Wu, Derek Bradley, Pablo Garrido, Michael Zollhöfer, Christian Theobalt,
Markus H Gross, and Thabo Beeler. Model-based teeth reconstruction. ACM Trans.
Graph., 35(6):220–1, 2016.
[38] Changqian Yu, Jingbo Wang, Chao Peng, Changxin Gao, Gang Yu, and Nong Sang.
Bisenet: Bilateral segmentation network for real-time semantic segmentation. In Pro-
ceedings of the European conference on computer vision (ECCV), pages 325–341,
2018.
[39] Richard Zhang, Phillip Isola, Alexei A Efros, Eli Shechtman, and Oliver Wang. The
unreasonable effectiveness of deep features as a perceptual metric. In Proceedings
of the IEEE conference on computer vision and pattern recognition, pages 586–595,
2018.
[40] TX Zheng, Shuai Huang, YF Li, and MC Feng. Key techniques for vision based 3d
reconstruction: a review. Zidonghua Xuebao, 46:631–652, 2020.
[41] Karel Zuiderveld. Contrast limited adaptive histogram equalization. Graphics gems,
pages 474–485, 1994.
|
ai_researcher | 1 | Exploration_and_optimization_of_surgical_techniques_for_laparoscopic_transhiatal_lower_mediastinal_lymph_node_dissection_for_adenocarcinoma_of_esophagogastric_junction_A_prospective_IDEAL_2a_study_with_qualitative_design.pdf | 4
2
0
2
n
u
J
0
2
]
V
C
.
s
c
[
1
v
5
6
3
4
1
.
6
0
4
2
:
v
i
X
r
a
Journal of Machine Learning for Biomedical Imaging 2024:008 vol. 2, pp. 798–816
Special issue: MICCAI 2023 Lymph Node Quantification Challenge
Guest editors: Steve Pieper, Erik Ziegler, Tawa Idris, Bhanusupriya Somarouthu, Reuben
Dorent, Gordon Harris, Ron Kikinis
Submitted 01/2024
Published 06/2024
Mask the Unknown: Assessing Different Strategies to
Handle Weak Annotations in the MICCAI2023 Mediastinal
Lymph Node Quantification Challenge
Stefan M. Fischer1,2,3,4  [email protected]
Johannes Kiechle1,2,3,4  [email protected]
Daniel M. Lang1,3  [email protected]
Jan C. Peeken2  [email protected]
Julia A. Schnabel1,3,4,5  [email protected]
1: School of Computation, Information and Technology, Technical University Munich, Germany
2: Department of RadioOncology, Klinikum rechts der Isar, Technical University Munich, Germany
3: Institute of Machine Learning in Biomedical Imaging, Helmholtz Munich, Germany
4: Munich Center of Machine Learning (MCML), Germany
5: School of Biomedical Engineering and Imaging Sciences, King’s College London, UK
Abstract
Pathological lymph node delineation is crucial in cancer diagnosis, progression assess-
ment, and treatment planning. The MICCAI 2023 Lymph Node Quantification Chal-
lenge published the first public dataset for pathological lymph node segmentation in the
mediastinum. As lymph node annotations are expensive, the challenge was formed as a
weakly supervised learning task, where only a subset of all lymph nodes in the training
set have been annotated. For the challenge submission, multiple methods for training on
these weakly supervised data were explored, including noisy label training, loss masking
of unlabeled data, and an approach that integrated the TotalSegmentator toolbox as a
form of pseudo labeling in order to reduce the number of unknown voxels. Furthermore,
multiple public TCIA datasets were incorporated into the training to improve the perfor-
mance of the deep learning model. Our submitted model achieved a Dice score of 0.628
and an average symmetric surface distance of 5.8 mm on the challenge test set. With
our submitted model, we accomplished third rank in the MICCAI2023 LNQ challenge. A
finding of our analysis was that the integration of all visible, including non-pathological,
lymph nodes improved the overall segmentation performance on pathological lymph nodes
of the test set. Furthermore, segmentation models trained only on clinically enlarged
lymph nodes, as given in the challenge scenario, could not generalize to smaller patho-
logical lymph nodes. The code and model for the challenge submission are available at
https://gitlab.lrz.de/compai/MediastinalLymphNodeSegmentation.
Keywords: deep learning, lymph node quantification, weakly supervised learning, image
segmentation
©2024 Fischer, Kiechle, Lang, Peeken and Schnabel. License: CC-BY 4.0
https://doi.org/10.59275/j.melba.2024-8g8b
1. Introduction
In the following section, we introduce the ”Mediastinal Lymph Node Quantification: Seg-
mentation of Heterogeneous CT Data” (LNQ2023) challenge, and discuss related work.
1.1 Motivation
Lymph nodes (LNs) are small anatomical structures scattered throughout the body. Based
on their location, they are grouped into various LN stations according to established def-
initions, such as those from the International Association for the Study of Lung Cancer
(IASLC) (Rusch et al., 2009). During cancer progression, the tumor grows, and cancer
cells spread into nearby anatomical structures, developing into metastasis and radically in-
creasing the severity of the disease. Infiltration of cancerous tissue into LNs may lead to
enlargement of those LNs. Thus, assessing metastatic LNs is a critical factor for initial
diagnosis, tumor staging, and treatment planning. The conventional criteria for quantify-
ing lymph node size is based on Response Evaluation Criteria In Solid Tumours (RECIST)
guidelines (Eisenhauer et al., 2009). Enlarged LNs are defined as those whose shortest diam-
eter exceeds 10 mm on an axial CT slice. Medical professionals solely rely on unidirectional
or bidirectional measurements on a single axial slice of just one or a few LNs, introducing
limitations in capturing the full extent of abnormalities (Guo et al., 2022).
However, studies indicate that solely relying on the feature of shortest LN diameter for
malignancy assessment yields recall rates of only 60%-80% in lung cancer patients (Yan
et al., 2023). The facts that assessing the status of LNs is a complex, time-consuming
task and that the shortest diameter is a limited metric highlight the necessity for ac-
curate segmentation in three dimensions to comprehensively evaluate lymph node disease
(Guo et al., 2022). Furthermore, precise delineation of all tumorous regions is particularly
crucial in radiation therapy, where irradiating metastatic areas such as LNs impacts pa-
tient outcomes significantly (Chapet et al., 2005). Thus, automated segmentation not only
holds promise for reducing the inter and intra-observer variability, but also reduces the
task-related working time.
The objectives of the LNQ2023 challenge are twofold, each aiming to address critical
aspects in the field of lymph node identification and segmentation (Khajavibajestani et al.,
2023). The primary goal of the challenge was to establish a benchmark for the detection
and segmentation of mediastinal lymph nodes. The mediastinum, located between the lung
lobes, poses a particular challenge due to the presence of ten or more lymph nodes, often
with three or more enlarged nodes exceeding 10 mm in diameter. The secondary goal
focuses on the exploration and application of weakly supervised learning techniques in the
scenario of LN segmentation. Given the time-consuming nature of manual annotation and
the possible presence of several pathological LN instances, there is a lack of pre-existing fully
annotated pathological LN datasets. This setting aligns with the current growing interest
in the medical imaging community in harnessing weak annotations (Kemnitz et al., 2018;
Petit et al., 2018; Zhou et al., 2019; Shi et al., 2021; Dong et al., 2022; Ulrich et al., 2023).
1.2 Mediastinal Lymph Node Segmentation
The first public dataset for mediastinal LN segmentation was introduced by Roth et al.
(2014, 2015), offering segmentation annotations of enlarged lymph nodes. This task presents
unique challenges due to reduced contrast compared to axillary and pelvic nodes (Nogues
et al., 2016). Early methods often relied on machine learning approaches that incorporated
handcrafted features and manual region-of-interests (ROI) (Aerts et al., 2014a; Liu et al.,
2014, 2016; Oda et al., 2017b,a; Roth et al., 2014). Notably, Roth et al. (2014) introduced
the first neural network (NN) based method, specifically for LN detection after an ROI
proposal step, reducing false positives and marking a shift in methodology.
Building upon this, subsequent works explored various NN-based methods operating
directly on full CT volumes (Bouget et al., 2019, 2023; Guo et al., 2022; Yan et al., 2023;
Iuga et al., 2021).
To enhance segmentation accuracy and robustness, the inclusion of anatomical key re-
gions to guide the segmentation has been explored (Bouget et al., 2023; Oda et al., 2018;
Bouget et al., 2019). The integration of lymph node station information has been a focal
point, initially introduced by Liu et al. (2014). The idea of mapping lymph node stations,
according to IASLC guidelines, has been extended by various methods, including NN-based
approaches. Guo et al. (2022) proposed dedicated encoders for different LN stations, and
Yan et al. (2023) introduced a station-stratified LN detector, emphasizing the importance
of incorporating station information during training (Liu et al., 2014; Guo et al., 2022; Yan
et al., 2023).
1.3 Weakly Supervised Learning
Weakly supervised learning has recently become a popular topic in medical imaging research,
as most datasets only provide annotations for one single or few structures of interest. Com-
bining multiple datasets results in a partially supervised learning setting, as datasets come
with full supervision of a few classes.
It forms a special case of weak supervision. Sev-
eral research groups have concentrated on such partially supervised learning approaches,
aiming to enhance model performance by aggregating information from multiple partially
labeled datasets during training (Kemnitz et al., 2018; Petit et al., 2018; Zhou et al., 2019;
Shi et al., 2021; Dong et al., 2022; Ulrich et al., 2023). Some works focus on masking the
loss of missing classes in the current training label (Ulrich et al., 2023; Dong et al., 2022).
Furthermore, different approaches to constrain the loss functions were motivated by mutual
exclusion of classes or implemented by merging classes to superclasses (Kemnitz et al., 2018;
Shi et al., 2021; Ulrich et al., 2023; Petit et al., 2018).
However, there is only limited work outside the partially supervised setting.
In the
computer vision domain, missing labels are often treated as background, a method viewed
as a simplistic form of dealing with noisy labels. This approach is effective when the pixels of
missing classes constitute a significantly smaller portion of images compared to background
pixels (Dong et al., 2022).
A general strategy for training on weakly labeled datasets is masking the loss of voxels
lacking annotations (Kemnitz et al., 2018). Additionally, incorporating semi-supervised
learning (SSL) has been explored to gain additional training feedback by mining unlabeled
voxels (Zhou et al., 2019; Petit et al., 2018).
Most works in SSL focus on the classification or segmentation of completely unlabeled
images.
In contrast, the LNQ2023 challenge is given as an incomplete pixel-level label
scenario in which one or multiple foreground instances are annotated. Nguyen et al. (2020)
have explored this particular scenario. They focused on two distinct scenarios: speech
balloon segmentation in comics and cell segmentation in medical imaging. In both cases,
the incomplete annotation is generated using an automatic extraction method. The learning
of background information is facilitated by a small set of background voxels chosen as direct
neighbors of known foreground instances. By applying an SSL technique, they were able to
learn the representation of foreground and background.
1.4 Problem Setting and Contributions
The LNQ2023 challenge goal was to segment all pathological mediastinal LNs from thorax
CT volumes, while the given training labels only covered some instances of the foreground.
Those instances, furthermore, were only covering enlarged pathological LN components.
The problem setting was a weakly or semi-supervised learning task with incomplete pixel-
level annotations. As the challenge was an open challenge, it was allowed to use the public
CT Lymph Nodes dataset from TCIA (Clark et al., 2013; Roth et al., 2015), providing a
set of CT volumes in which enlarged LNs were fully annotated. Our challenge strategy
was to develop a supervised training strategy, handling the incomplete pixel-level labels of
the LNQ2023 challenge data. Furthermore, we integrated additional public data and the
TotalSegmentator (Wasserthal et al., 2023) to gain performance improvements.
Our main contributions are summarized as follows:
1. Starting from a fully annotated dataset of image volumes, namely the public TCIA
Lymph Nodes dataset, we implemented different strategies to integrate the additional
incomplete pixel-level labeled data, as given in the LNQ2023 challenge, into the train-
ing process. Those strategies were noisy label training, loss masking, and wrapping
each foreground instance with a background shell.
2. We applied the public toolbox TotalSegmentator to identify anatomical structures
and, by exclusion, set those to background class voxels. We refer to this as TotalSeg-
mentator Pseudo Labeling.
3. We explored the effect of integrating different public datasets, namely TCIA CT
Lymph Nodes, an annotation-refined version of CT Lymph Nodes, TCIA NSCLC-
Radiomics, TCIA NSCLC-Radiogenomics, and TCIA NSCLC-Radiomics-Interobserver
on the downstream performance.
4. Furthermore, we performed experiments on the impact of adding all visible, poten-
tially non-pathological lymph nodes to the model training on the overall segmentation
performance and the performance regarding the lymph node’s shortest diameter.
2. Integration of Weakly Annotated Data
In the following section, we present the different strategies to integrate the weakly annotated
data into the training process, shown in Figure 1. We cropped the input CT volumes with
Figure 1: Sketch of different strategies to handle weakly annotated data in our analysis.
The missed lymph node instance is incorrectly set to the background class in
the noisy label training. For loss masking, foreground instance coating, and
TotalSegmentator Pseudo Labeling, the missing instance is removed from the
training process by loss masking.
the help of the TotalSegmentator, a deep learning-based toolbox capable of segmenting 104
different anatomical structures on CT volumes (Wasserthal et al., 2023). Using the toolbox,
we created a bounding box of the lung lobes, to which the CT volume was cropped. The
resulting image volume then contained the full mediastinum, leading to improved compu-
tational efficiency. As a segmentation network, we used the nnUNet, a fully self-configuring
segmentation pipeline (Isensee et al., 2021).
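A minimal sketch of the lung-based ROI cropping is given below; it assumes the lung-lobe masks have already been produced by the TotalSegmentator and combined into a single binary array, and the margin parameter is our assumption.

import numpy as np

def crop_to_lung_bbox(ct_volume, lung_mask, margin=5):
    """Crop a CT volume to the bounding box of a binary lung(-lobe) mask.

    ct_volume: 3D numpy array (z, y, x) of HU values.
    lung_mask: 3D binary numpy array of the same shape (union of the lung lobes).
    margin:    extra voxels kept around the bounding box (assumed value).
    """
    coords = np.argwhere(lung_mask > 0)
    lower = np.maximum(coords.min(axis=0) - margin, 0)
    upper = np.minimum(coords.max(axis=0) + 1 + margin, ct_volume.shape)
    slices = tuple(slice(lo, hi) for lo, hi in zip(lower, upper))
    return ct_volume[slices], slices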
Noisy Label The unlabeled voxels of the LNQ2023 training set were set to the back-
ground class as a form of noisy labels. Consequently, unlabeled LN instances were naïvely
set to background. The foreground only consisted of the expert-annotated foreground in-
stances.
Loss Masking Another simple strategy to include the weakly labeled image volumes in
the training procedure was to mask out regions without class annotation. For such voxels,
the loss was set to zero so that only labeled voxels contributed to the learning process.
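A minimal PyTorch sketch of this loss masking is given below; the voxel-wise cross-entropy term and the label convention (an ignore value of -1 for unlabeled voxels) are our assumptions and not the exact nnUNet implementation.

import torch
import torch.nn.functional as F

def masked_ce_loss(logits, labels, ignore_value=-1):
    """Cross-entropy over labeled voxels only.

    logits: (B, C, D, H, W) network outputs.
    labels: (B, D, H, W) long tensor of class indices; unlabeled voxels carry ignore_value.
    """
    mask = labels != ignore_value
    safe_labels = labels.clone()
    safe_labels[~mask] = 0  # dummy class, removed again by the mask below
    per_voxel = F.cross_entropy(logits, safe_labels, reduction='none')
    # Zero out the contribution of unlabeled voxels and normalize by the labeled count.
    # (For cross-entropy alone, the built-in ignore_index argument achieves the same effect.)
    return (per_voxel * mask).sum() / mask.sum().clamp(min=1)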
Foreground Instance Coating Given foreground LN instances in the weakly annotated
LNQ2023 training data, we followed the approach of Nguyen et al. (2020) and set voxels
neighboring the foreground instances to the background class. We implemented that by
running the morphological operator binary dilation. In this way, each foreground component
was embedded in a hull of background voxels.
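A sketch of the coating step using scipy is shown below; the one-voxel margin follows the description in Section 3.2, while the label encoding (1 foreground, 0 background, -1 unlabeled) is our assumption.

import numpy as np
from scipy.ndimage import binary_dilation

def coat_foreground_instances(weak_label, margin=1, unlabeled=-1):
    """Wrap every annotated foreground voxel group in a hull of background voxels.

    weak_label: 3D int array with 1 = annotated lymph node, unlabeled elsewhere.
    Returns a label map with 1 = foreground, 0 = background hull, -1 = still unlabeled.
    """
    foreground = weak_label == 1
    dilated = binary_dilation(foreground, iterations=margin)
    hull = dilated & ~foreground
    coated = np.full(weak_label.shape, unlabeled, dtype=np.int8)
    coated[foreground] = 1
    coated[hull] = 0
    return coated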
TotalSegmentator Pseudo Labeling All 104 output structures of the TotalSegmenta-
tor, covering organs, vessels, bones, and muscles, by definition, should not contain any LNs.
We exploited this fact to constrain our problem setting. We utilized the output classes to
annotate unlabeled voxels as background if classified as foreground by the TotalSegmen-
tator. During that process, we skipped overwriting expert annotated LN instances in the
LNQ2023 training set. This strategy is especially beneficial in the mediastinum, as the To-
talSegmentator covers many known anatomical structures located within the mediastinum.
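The pseudo-labeling step can be sketched as follows; it assumes the TotalSegmentator multi-label output has already been resampled to the CT grid, and it reuses the label convention introduced above (our assumption).

import numpy as np

def totalsegmentator_pseudo_label(weak_label, totalseg_map, unlabeled=-1):
    """Set voxels covered by TotalSegmentator structures to the background class.

    weak_label:   3D int array, 1 = expert-annotated lymph node, unlabeled elsewhere.
    totalseg_map: 3D int array with 0 = none and >0 = one of the 104 structures,
                  already resampled to the CT grid.
    """
    pseudo = weak_label.copy()
    anatomy = totalseg_map > 0
    # Annotated lymph nodes are never overwritten; only unlabeled voxels are filled.
    fill = anatomy & (pseudo == unlabeled)
    pseudo[fill] = 0
    return pseudo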
Dataset              Volumes   Labeled LNs   Labeled Enlarged LNs   Fully Labeled
LNQ2023 train set      393         558               512                No
LNQ2023 test set       100         845               289                Yes
CT Lymph Nodes          90         294               244                Yes
Bouget Refinements      90        1403               414                Yes
NSCLC datasets         585           0                 0                No
Table 1: Lymph node statistics per dataset. A lymph node component is considered en-
larged if its shortest diameter is equal to or greater than 10 mm.
Figure 2: Histograms of shortest diameter of lymph node components in TCIA CT Lymph
Nodes dataset, refined annotations by Bouget et al. (2019) and the LNQ2023
training and test set.
3. Experiments
In this section, we describe the ablation study we performed to select the building blocks of
the final model submission. For this purpose, we added various training components from
the network training, such as extra training data or different training strategy changes, to
analyze their contribution to the model’s performance on the test set.
3.1 Data
We included three different source datasets for training the segmentation model. The
LNQ2023 training data consists of 393 weakly annotated thorax CT volumes of patients
suffering from various cancer types, including breast cancer, leukemia, lung cancer, and
others.
Another dataset used is the TCIA CT Lymph Nodes dataset introduced by Roth et al.
(2015).
It contains 90 thorax CT volumes with fully annotated mediastinal LNs. Here,
only clinically enlarged LN instances are annotated. The TCIA CT Lymph Nodes dataset
was refined in the work of Bouget et al. (2019). They updated the existing annotations and
integrated all visible LNs without separating them into pathological or healthy LNs.
Furthermore, to increase the amount of data, we added three different lung cancer
datasets, adding up to 585 CT volumes, to the training data. The first is a subset of
the NSCLC-Radiogenomics dataset, consisting of 143 thorax CT volumes with tumor de-
lineation (Gevaert et al., 2012; Bakr et al., 2017, 2018; Clark et al., 2013). The other
lung cancer dataset is the NSCLC-Radiomics containing 422 thorax CT volumes with tu-
mor delineation (Aerts et al., 2014a,b; Clark et al., 2013). Additionally, we integrated the
NSCLC-Radiomics-Interobserver1 dataset by adding the 20 thorax CT volumes and the
delineation of the tumor by the medical expert 1 to the data pool (Wee et al., 2019; Clark
et al., 2013; Aerts et al., 2014a). Multiple individuals of the NSCLC datasets developed
metastatic LNs documented in the patient’s N-staging.
The occurring LN instances per dataset were generated by a connected component
analysis, and statistics of the datasets are specified in Table 1. Not all LN components in
the CT Lymph Nodes dataset were considered enlarged. The Bouget refinements, which
are refined annotations of the standard CT Lymph Nodes dataset, contained more than
four times as many annotated components. The refined version had almost twice as many
enlarged LN instances. LNQ2023 training set is not fully annotated, so the number of LNs
per CT volume should be similar to the LNQ2023 test set. For the NSLC datasets, roughly
half of the patients suffer from pathological thorax LNs, while at least 171 patients had
spread into the mediastinal LNs. The histograms of the shortest LN component diameter
per labeled dataset are shown in Figure 2.
3.2 Ablation Study for Integration Strategy of Weakly Annotated Data
We performed an ablation study to evaluate the various training strategy building blocks.
The LNQ2023 challenge test set, containing 100 fully annotated samples, was used for eval-
uation. The Dice score and the average symmetric surface distance (ASSD) were computed
for performance assessment.
Initially, during the challenge, we used 20 samples of the
CT Lymph Nodes dataset for validation during method development; these samples were therefore
omitted from the training of the models in the ablation study.
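For reference, a minimal sketch of the two evaluation metrics on binary masks is given below; it uses a straightforward surface extraction via binary erosion and a Euclidean distance transform, which is one of several equivalent ways to compute the ASSD and not necessarily the implementation used for the challenge evaluation.

import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def dice_score(pred, gt):
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * inter / denom if denom > 0 else 1.0

def assd(pred, gt, spacing=(1.0, 1.0, 1.0)):
    """Average symmetric surface distance between two binary 3D masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    surf_pred = pred & ~binary_erosion(pred)
    surf_gt = gt & ~binary_erosion(gt)
    if surf_pred.sum() == 0 or surf_gt.sum() == 0:
        return float('nan')
    # Distance of every voxel to the nearest surface voxel of the other mask.
    dist_to_gt = distance_transform_edt(~surf_gt, sampling=spacing)
    dist_to_pred = distance_transform_edt(~surf_pred, sampling=spacing)
    d1 = dist_to_gt[surf_pred]
    d2 = dist_to_pred[surf_gt]
    return (d1.sum() + d2.sum()) / (len(d1) + len(d2))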
We used default nnUNet planning and training for all models. All different training
strategies introduced in Section 2 and various combinations of data for training were eval-
uated on the LNQ2023 test set:
• Model 1: Baseline segmentation model trained on 70 fully annotated samples of the
TCIA CT Lymph Nodes dataset. Thus, training annotations only included enlarged
LN components.
• Model 2: Model trained on data combination of TCIA CT Lymph Node and LNQ2023
training set. Unlabeled voxels were set to background class, so the model was trained
in a noisy label fashion.
• Model 3: Instead of noisy label training here, loss masking was applied. The loss of
the unlabeled voxels was set to zero so that it did not affect the training process.
• Model 4: The LNQ2023 training data was preprocessed with foreground instance
coating with a background margin of one voxel for each hull.
• Model 5: For this model, the TotalSegmentator toolbox was used for reducing the
number of unlabeled voxels by TotalSegmentator Pseudo Labeling.
• Model 6: To evaluate the effect of replacing the standard TCIA CT Lymph Node
annotations with the according Bouget label refinements, a model was trained on
this data combination. As a weakly supervised learning strategy, TotalSegmentator
Pseudo Labeling was used. In the standard TCIA CT Lymph Node annotations, only
enlarged LN components are contained, while the Bouget refinements include all sizes
of LN components.
• Model 7: Finally, the NSCLC data were integrated into the training cohort. A
model using TotalSegmentator Pseudo Labeling was trained on the data combination
of the LNQ2023 training set, CT Lymph Nodes data with Bouget refinements, and
the NSCLC datasets. The given tumor annotations of the NSCLC datasets were set
to background class.
We evaluated the trained models on the 100 samples of the LNQ2023 test set and
computed Dice score plus ASSD. The results of the experiments are shown in Table 3.
3.3 Performance Analysis regarding Lymph Node Shortest Diameter
To test whether a model solely trained on clinically enlarged pathological LN components
was able to segment non-enlarged pathological LN components, we analyzed the predictions
of Model 5 and Model 6 regarding their performance for different LN shortest diameters.
To assess the performance, we iterated through all annotated ground truth LN components and
computed the overlap of each component with the entire model prediction, normalized by the
component's volume, resulting in values between 0 and 1. This was interpreted as a proxy of the
model sensitivity, allowing us to analyze the sensitivity over different LN sizes. We repeated the
same assessment in the opposite direction, iterating through all predicted LN components and
computing their overlap with the full ground truth annotation, normalized by the predicted
component's volume, which was then a proxy for the precision.
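A sketch of this per-component overlap analysis is given below; the use of scipy connected components with default connectivity is our assumption.

import numpy as np
from scipy.ndimage import label

def component_overlaps(source_mask, target_mask):
    """For every connected component of source_mask, return the fraction of its
    voxels covered by target_mask (values in [0, 1]).

    Iterating over ground truth components with the prediction as target gives the
    sensitivity proxy; swapping the roles gives the precision proxy.
    """
    components, num = label(source_mask.astype(bool))
    overlaps = []
    for idx in range(1, num + 1):
        comp = components == idx
        overlaps.append(np.logical_and(comp, target_mask).sum() / comp.sum())
    return overlaps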
4. Results
The following presents an analysis of the preprocessing strategies, results of the ablation
study, and an analysis of the segmentation performance on different LN component sizes.
Statistics Per CT Volume    Total Voxels [·10^6]   Labeled-Unlabeled Voxel Ratio [%]
Raw Data                        38.2 ± 24.5              0.02 ± 0.03
+ ROI crop                       7.5 ± 4.1               0.10 ± 0.17
+ TotalSegmentator PL            7.5 ± 4.1              55.05 ± 3.31
Table 2: Voxel statistics of the LNQ2023 training set for each preprocessing step.
Figure 3: Preprocessing steps performed on LNQ2023 training set example, shown in three
orthogonal views. Top: raw input volume with weak lymph node annotation
(green), Middle: volume after lung bounding box cropping, Bottom: TotalSeg-
mentator Pseudo Labeling setting known structures to the background (blue).
4.1 Image and Annotation Preprocessing
First, the CT volumes were resampled to a common spacing of [3.0 mm, 0.93 mm, 0.93 mm].
The average number of voxels per CT volume and the ratio between labeled and unlabeled
voxels are given in Table 2. The number of voxels per CT volume was reduced from
38.2 · 106 to 7.5 · 106 with the ROI cropping, leading to an increase in the ratio of labeled to
unlabeled voxels from 0.02% to 0.10%. TotalSegmentator Pseudo Labeling further increased
the number of labeled voxels, resulting in a ratio of 55.05%. Figure 3 shows the preprocessing
steps of one LNQ2023 training set example.
4.2 Ablation Study for Integration Strategy of Weakly Annotated Data
The results of the ablation study are presented in Table 3. Model 1, only trained on
the fully annotated CT Lymph Nodes dataset containing only enlarged LN components,
achieved a low Dice score of 0.172 and an ASSD of 48.95 mm on average.
Figure 4: Different cases of LNQ2023 test set with ground truth annotation and model pre-
dictions. For intuitive visualization the trachea is shown in blue, model prediction
is in yellow and ground truth in green. For inference Model 7 was used. Left:
worst case (Dice score 0.108, ASSD 19.2 mm), Center: average case (Dice score
0.626, ASSD 5.65 mm), Right: best case (Dice score 0.860, ASSD 2.38 mm)
Model                                        Dice Score ↑      ASSD [mm] ↓
Model 1: CT Lymph Nodes dataset              0.172 ± 0.182     48.95 ± 64.49
Integration of LNQ2023 training samples:
Model 2: Noisy Label Strategy                0.343 ± 0.201     18.46 ± 23.86
Model 3: Loss Masking Strategy               0.552 ± 0.200      9.37 ± 9.79
Model 4: Foreground Instance Coating         0.548 ± 0.219     12.19 ± 17.62
Model 5: TotalSegmentator PL                 0.601 ± 0.173      7.08 ± 7.54
Replace Annotations/Add data:
Model 6: Bouget Label Refinements            0.665 ± 0.143      4.47 ± 4.67
Model 7: Integration of NSCLC data           0.663 ± 0.136      3.97 ± 2.83
Table 3: Performance of the lymph node segmentation model with different strategies to
handle the weakly annotated data and additional training data. Models were
tested on the 100 fully annotated samples of the LNQ2023 test set. Best achieved
scores are highlighted in bold. (↑: higher is better, ↓: lower is better)
Integration of the weakly annotated challenge training data improved the results overall
for all applied weak annotation handling strategies. The worst performance was the noisy
label training, with a Dice score of 0.343 and an ASSD of 9.37 mm. Loss masking and its
variants, foreground instance coating, and TotalSegmentator Pseudo Labeling outperformed
the noisy labels. Foreground instance coating did decrease the performance compared to
raw loss masking in both metrics. The best weak annotation handling strategy was the
TotalSegmentator Pseudo labeling with a Dice score of 0.601 and an ASSD of 7.08 mm.
Figure 5: Overlap of each ground truth lymph node component with prediction and over-
lap of each predicted lymph node component with all ground truth lymph node
components over the shortest diameter. Lymph node components were binned
regarding shortest diameter in 2.5 mm steps. Model predictions were generated
by Model 5 (green) and Model 6 (blue).
Changing the standard CT Lymph Nodes annotation to the Bouget refinements im-
proved the performance.
Important to note is that the Bouget refinements are annota-
tions of all visible LNs. This gave an increase to a Dice score of 0.665 and an ASSD of
4.47 mm with the TotalSegmentator Pseudo Labeling. Furthermore, a model trained with
the additional NSCLC datasets achieved a similar Dice score and a slightly better ASSD.
The significance of the segmentation performance is analyzed with a non-parametric paired
Wilcoxon signed-rank test in Appendix A. Predictions of Model 7 on three different
cases of the LNQ2023 test set are shown in Figure 4.
4.3 Performance Analysis regarding Lymph Node Shortest Diameter
In Figure 5, the overlap of each ground truth LN component with the prediction and the overlap
of each predicted LN component with the ground truth are plotted. Model 6, which was trained
on LN components of all sizes, achieved a better average overlap between ground truth LN
components and the full model prediction for all LN shortest diameters. Model 5 detected far
fewer LN instances shorter than 10 mm in diameter than Model 6.
The overlap between predicted LN instances and the complete ground truth annotation is
similar for both models, while for small LN instances Model 5 indicates a higher precision.
5. Challenge Submission
Our submitted challenge model, trained via TotalSegmentator Pseudo Labeling on the com-
bination of Bouget refined CT Lymph Node data, LNQ2023 training set, and the TCIA
NSCLC datasets, ranked third place in the MICCAI LNQ2023 challenge. The model per-
formed slightly worse on the test set than the developed models in the ablation study of
Subsection 3.2. We discuss reasons for this difference in Section 6. The nnUNet instance
of the submission follows all the findings from the ablation study and corresponds to Model 7 of
the ablation study.
Authors (Team Name)                          Dice Score ↑      ASSD [mm] ↓
Rank 1: Deissler et al. (Skeleton Suns)      0.674 ± 0.165     4.5 ± 4.7
Rank 2: Zhang et al. (IMR)                   0.665 ± 0.163     5.4 ± 4.3
Rank 3: Fischer et al. (CompAI)              0.628 ± 0.193     5.8 ± 3.6
Rank 4: Kondo et al. (HiLab)                 0.603 ± 0.141     8.2 ± 11.8
Rank 5: Engelson et al. (sofija engelson)    0.569 ± 0.185     6.9 ± 4.0
CompAI (ours) without Postprocessing         0.660 ± 0.150     5.35 ± 10.71
Model 7                                      0.663 ± 0.136     3.97 ± 2.83
Table 4: Performance of the top five performing team models on the 100 samples from
LNQ2023 test set. Additionally, the performance of our submitted model without
the removal of small LN components is given, as well as the Model 7 from the
ablation study. Best achieved scores are highlighted in bold.
For the challenge submission, the default nnUNet normalization scheme was replaced by
intensity clipping to [−150, 350], inspired by the work of Bouget et al. (2019), followed by
intensity standardization. Furthermore, the challenge submission model was trained with a
learning rate of 1e-3 instead of the default 1e-2 in order to stabilize the training process.
Accordingly, we increased the number of epochs from the default 1000 to 2000. To train one
nnUNet instance, the full available data were used, also integrating the 20 samples of CT Lymph
Nodes that had originally been held out for validation.
The prediction was postprocessed by a connected component analysis for the challenge
submission. Predicted LN components with a shortest diameter smaller than 9.5 mm were
removed from the prediction so that only LN components that are considered enlarged are
kept. The ranking of the challenge participating teams is given in Table 4. Our submission
was ranked third place in the Lymph Node Quantification Challenge 2023 with a Dice score
of 0.628 and an ASSD of 5.8 mm.
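A sketch of this postprocessing step is given below; the helper estimating the shortest diameter from the second moments of each component's largest axial cross-section is a hypothetical simplification of the RECIST-style measurement and not the exact procedure used for the submission.

import numpy as np
from scipy.ndimage import label

def estimate_short_diameter_mm(axial_slice, spacing_yx):
    """Hypothetical proxy: minor axis (mm) of the ellipse with the same second moments
    as the component's largest axial cross-section."""
    ys, xs = np.nonzero(axial_slice)
    if ys.size < 2:
        return 0.0
    coords = np.stack([ys * spacing_yx[0], xs * spacing_yx[1]], axis=0)
    eigvals = np.linalg.eigvalsh(np.cov(coords))
    return 4.0 * np.sqrt(max(eigvals.min(), 0.0))

def remove_small_components(pred_mask, spacing_yx, min_short_diameter_mm=9.5):
    """Drop predicted components whose estimated shortest diameter is below the cut-off."""
    components, num = label(pred_mask.astype(bool))
    keep = np.zeros_like(pred_mask, dtype=bool)
    for idx in range(1, num + 1):
        comp = components == idx
        axial = comp[int(np.argmax(comp.sum(axis=(1, 2))))]  # largest axial slice
        if estimate_short_diameter_mm(axial, spacing_yx) >= min_short_diameter_mm:
            keep |= comp
    return keep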
Model 7 of our ablation study would have achieved third place in the Dice score while
having the lowest standard deviation. Furthermore, it would have scored the best ASSD
among all challenge submission models.
6. Discussion
For the challenge, only semantic segmentation maps have been provided, while the CT
Lymph Nodes dataset comes with instance segmentation annotations. Therefore, original
annotations of the CT Lymph Nodes were interpreted as semantic segmentation maps, and
LN components were created by a connected component analysis. Thus, there is a difference
in the reported LN instances in the work of Roth et al. (2015) compared to ours.
In Table 1, the number of LN components is provided for each dataset. For the CT
Lymph Nodes dataset, which should only contain enlarged LNs, a subset of non-enlarged LNs is
also included. An explanation for this trend is that clinicians follow the RECIST guidelines and
thus only consider the axial slice direction. For the LNQ2023 training set, the same trend holds
but is less prominent. This might originate from the weak annotation done by the physicians,
probably resulting in a bias toward the larger pathological LNs per image volume.
Another related surprising finding was the difference in enlarged LN instances in CT
Lymph Nodes and the Bouget refinements. There might be a bias originating from the
intention of annotating all visible LNs in the refinements, while the clinicians searched only
for enlarged LNs via the RECIST criteria in the standard dataset. Another factor is that
each LN of a neighboring LN cluster might be considered as healthy, while annotating all
of them and processing the cluster with a connected component analysis can result in an
enlarged LN component.
The TCIA CT Lymph Nodes dataset was essential for the development of the proposed
method, as it provided fully annotated training cases. A possible solution to solve the task
with only the weakly annotated challenge data is to follow the work of Nguyen et al. (2020),
which corresponds to foreground instance coating. Using TotalSegmentator Pseudo Labeling
would likely also improve the performance in this scenario. Another approach would be noisy label
training. We did not perform any experiments on the challenge data alone.
In our scenario, the noisy label training improved the performance compared to only
using CT Lymph Nodes. The integration of the LNQ2023 training set outweighed the effect
of false negatives generated from unlabeled LN instances.
We hypothesize the reason for the failure of the foreground coating is the difficulty
of labeling LNs in a binary manner, as LNs are known to be confluent and to lack sharp
intensity drops at their boundaries. Thus, the instance coating might generate a lot of
false negatives. Increasing the margin of the coating might reduce this issue and lead to
performance gains. Furthermore, the presence of bulky lymph nodes in which only one LN
instance was annotated will result in false negative annotations.
TotalSegmentator Pseudo Labeling offers limited generalization to other tasks, as it is
only applicable to the CT modality and its structure coverage differs across body regions. Furthermore,
there is also a small overlap between LN voxels and TotalSegmentator structures, leading
to false negatives. There might be better structure subsets of the TotalSegmentator for the
LN segmentation task.
The segmentation models benefit from integrating the Bouget refinements, also reported
in the work of Bouget et al. (2023). We support their hypothesis that the inclusion of all
visible LNs is an efficient form of data augmentation, integrating possible LN locations.
Another aspect is the incorporation of non-enlarged LNs into the training, which is essential
as models trained only on enlarged LNs do not generalize well to small LNs.
We hypothesize that the better ASSD and more stable Dice scores when including the NSCLC
data originate from the higher number of training samples and the
learning of lung cancer anomalies as background.
Extending the method with a semi-supervised learning component, in the form of TotalSegmentator
Pseudo Labeling, was shown to improve the model. Additionally, pseudo labeling
of the remaining unlabeled voxels as described in the work of Huang et al. (2022) might
further increase the performance.
Until the final challenge submission, the goal of the challenge, that all pathological
lymph nodes should be segmented, was ambiguous to us. Our intention was to segment
only clinically enlarged lymph nodes. Thus, we introduced a postprocessing step that filters the
segmentation by LN enlargement. This postprocessing led to a lower performance.
Nevertheless, we were still able to achieve third place in the final challenge ranking.
In this work, different strategies to handle weak annotations were proposed that sig-
nificantly improved the performance on the task of mediastinal pathological lymph node
segmentation. The usage of the TotalSegmentator was highly beneficial in our case, both for ROI
cropping and as a network providing informative pseudo labels. Different datasets were
integrated into the training and were fundamental for the submission model. One impor-
tant finding is that the integration of non-pathological lymph nodes also aided our task of
pathological lymph node segmentation.
Acknowledgments
Stefan Fischer has received funding by the Deutsche Forschungsgemeinschaft (DFG, Ger-
man Research Foundation) – 515279324 / SPP 2177. Johannes Kiechle was supported by
the DAAD programme Konrad Zuse School of Excellence in reliable Artificial Intelligence
(relAI), sponsored by the Federal Ministry of Education and Research.
Ethical Standards
The work follows appropriate ethical standards in conducting research and writing the
manuscript, following all applicable laws and regulations regarding treatment of animals or
human subjects.
Conflicts of Interest
We declare we do not have conflicts of interest.
Data availability
All experiments and models are performed and trained on publicly available data. The
LNQ2023 challenge data will be published by the challenge hosts in the future as part of
TCIA. Bouget refinements are accessible via the Google-Drive links via
https://github.com/dbouget/ct mediastinal structures segmentation.
References
Hugo J. W. L. Aerts, Emmanuel Rios Velazquez, Ralph T. H. Leijenaar, Chintan Par-
mar, Patrick Grossmann, Sara Carvalho, Johan Bussink, Ren´e Monshouwer, Benjamin
Haibe-Kains, Derek Rietveld, Frank Hoebers, Michelle M. Rietbergen, C. Ren´e Leemans,
Andre Dekker, John Quackenbush, Robert J. Gillies, and Philippe Lambin. Decoding tu-
mour phenotype by noninvasive imaging using a quantitative radiomics approach. Nature
Communications, 5(1):4006, 2014a.
Hugo J. W. L. Aerts, Emmanuel Rios Velazquez, Ralph T. H. Leijenaar, Chintan Parmar,
Patrick Grossmann, Sara Carvalho, Johan Bussink, Ren´e Monshouwer, Benjamin Haibe-
Kains, Derek Rietveld, Frank Hoebers, Michelle M. Rietbergen, C. Ren´e Leemans, Andre
Dekker, John Quackenbush, Robert J. Gillies, and Philippe Lambin. Data From NSCLC-
Radiomics (version 4) [Data set], 2014b. URL https://wiki.cancerimagingarchive.
net/display/Public/NSCLC-Radiomics.
Shaimaa Bakr, Olivier Gevaert, Sebastian Echegaray, Kelsey Ayers, Mu Zhou, Majid Shafiq,
Hong Zheng, Weiruo Zhang, Ann Leung, Michael Kadoch, Joseph Shrager, Andrew
Quon, Daniel L. Rubin, Sylvia K. Plevritis, and Sandy Napel. Data for NSCLC Ra-
diogenomics (Version 4) [Data set], 2017. URL https://wiki.cancerimagingarchive.
net/display/Public/NSCLC+Radiogenomics.
Shaimaa Bakr, Olivier Gevaert, Sebastian Echegaray, Kelsey Ayers, Mu Zhou, Majid Shafiq,
Hong Zheng, Weiruo Zhang, Ann Leung, Michael Kadoch, Joseph Shrager, Andrew Quon,
Daniel L. Rubin, Sylvia K. Plevritis, and Sandy Napel. A radiogenomic dataset of non-
small cell lung cancer. Scientific Data, 5(1):1–9, 2018.
David Bouget, Arve Jørgensen, Gabriel Kiss, Haakon O. Leira, and Thomas Langø. Seman-
tic segmentation and detection of mediastinal lymph nodes and anatomical structures in
CT data for lung cancer staging. International Journal of Computer Assisted Radiology
and Surgery, 14(6):977–986, 2019.
David Bouget, Andr´e Pedersen, Johanna Vanel, Haakon O. Leira, and Thomas Langø.
Mediastinal lymph nodes segmentation using 3D convolutional neural network ensem-
bles and anatomical priors guiding. Computer Methods in Biomechanics and Biomedical
Engineering: Imaging & Visualization, 11(1):44–58, 2023.
Olivier Chapet, Feng-Ming Kong, Leslie E. Quint, Andrew C. Chang, Randall K. Ten Haken,
Avraham Eisbruch, and James A. Hayman. CT-based definition of thoracic lymph node
stations: an atlas from the University of Michigan. International Journal of Radiation
Oncology - Biology - Physics, 63(1):170–178, 2005.
Kenneth Clark, Bruce Vendt, Kirk Smith, John Freymann, Justin Kirby, Paul Koppel,
Stephen Moore, Stanley Phillips, David Maffitt, Michael Pringle, Lawrence Tarbox, and
Fred Prior. The Cancer Imaging Archive (TCIA): maintaining and operating a public
information repository. Journal of Digital Imaging, 26:1045–1057, 2013.
Nanqing Dong, Michael Kampffmeyer, Xiaodan Liang, Min Xu, Irina Voiculescu, and Eric
Xing. Towards robust partially supervised multi-structure medical image segmentation
on small-scale data. Applied Soft Computing, 114:108074, 2022.
E. A. Eisenhauer, P. Therasse, J. Bogaerts, L. H. Schwartz, D. Sargent, R. Ford, J. Dancey,
S. Arbuck, S. Gwyther, M. Mooney, L. Rubinstein, L. Shankar, L. Dodd, R. Kaplan,
D. Lacombe, and J. Verweij. New response evaluation criteria in solid tumours: revised
RECIST guideline (version 1.1). European Journal of Cancer, 45(2):228–247, 2009.
Olivier Gevaert, Jiajing Xu, Chuong D Hoang, Ann N. Leung, Yue Xu, Andrew Quon,
Daniel L. Rubin, Sandy Napel, and Sylvia K. Plevritis. Non–small cell lung cancer: Iden-
tifying prognostic imaging biomarkers by leveraging public gene expression microarray
data—methods and preliminary results. Radiology, 264(2):387–396, 2012.
Dazhou Guo, Jia Ge, Ke Yan, Puyang Wang, Zhuotun Zhu, Dandan Zheng, Xian-Sheng
Hua, Le Lu, Tsung-Ying Ho, Xianghua Ye, and Dakai Jin. Thoracic lymph node segmen-
tation in CT imaging via lymph node station stratification and size encoding. In Inter-
national Conference on Medical Image Computing and Computer-Assisted Intervention–
MICCAI, volume 13435 of Lecture Notes in Computer Science. Springer, 2022.
Ziyan Huang, Haoyu Wang, Jin Ye, Jingqi Niu, Can Tu, Yuncheng Yang, Shiyi Du, Zhongy-
ing Deng, Lixu Gu, and Junjun He. Revisiting nnU-net for iterative pseudo labeling
and efficient sliding window inference. In MICCAI Challenge on Fast and Low-Resource
Semi-supervised Abdominal Organ Segmentation held in conjunction with MICCAI 2022.
Springer, 2022.
Fabian Isensee, Paul F. Jaeger, Simon A. A. Kohl, Jens Petersen, and Klaus H. Maier-
Hein. nnU-Net: a self-configuring method for deep learning-based biomedical image
segmentation. Nature Methods, 18(2):203–211, 2021.
Andra-Iza Iuga, Heike Carolus, Anna J. H¨oink, Tom Brosch, Tobias Klinder, David Maintz,
Thorsten Persigehl, Bettina Baeßler, and Michael P¨usken. Automated detection and
segmentation of thoracic lymph nodes from CT using 3D foveal fully convolutional neural
networks. BMC Medical Imaging, 21(1):1–12, 2021.
Jana Kemnitz, Christian F. Baumgartner, Wolfgang Wirth, Felix Eckstein, Sebastian K.
Eder, and Ender Konukoglu. Combining heterogeneously labeled datasets for training segmentation networks. In Machine Learning in Medical Imaging: 9th International Workshop, MLMI 2018, Held in Conjunction with MICCAI 2018. Springer, 2018.
Roya Khajavibajestani, Steve Pieper, Erik Ziegler, Tagwa Idris, Reuben Dorent,
Bhanusupriya Somarouthu, Sonia Pujol, Ann LaCasce, Heather Jacene, Gordon Har-
ris, and Ron Kikinis. Mediastinal Lymph Node Quantification (LNQ): Segmentation of
Heterogeneous CT Data, 2023. URL https://doi.org/10.5281/zenodo.7844666.
Jiamin Liu, Jocelyn Zhao, Joanne Hoffman, Jianhua Yao, Weidong Zhang, Evrim B. Turk-
bey, Shijun Wang, Christine Kim, and Ronald M. Summers. Mediastinal lymph node
detection on thoracic CT scans using spatial prior from multi-atlas label fusion. In Med-
ical Imaging 2014: Computer-Aided Diagnosis. SPIE, 2014.
Jiamin Liu, Joanne Hoffman, Jocelyn Zhao, Jianhua Yao, Le Lu, Lauren Kim, Evrim B.
Turkbey, and Ronald M. Summers. Mediastinal lymph node detection and station map-
ping on chest CT using spatial priors and random forest. Medical Physics, 43(7):4362–
4374, 2016.
Nhu-Van Nguyen, Christophe Rigaud, Arnaud Revel, and Jean-Christophe Burie. A learn-
ing approach with incomplete pixel-level labels for deep neural networks. Neural Networks,
130:111–125, 2020.
Isabella Nogues, Le Lu, Xiaosong Wang, Holger Roth, Gedas Bertasius, Nathan Lay, Jianbo
Shi, Yohannes Tsehay, and Ronald M. Summers. Automatic lymph node cluster segmen-
tation using holistically-nested neural networks and structured optimization in CT im-
ages. In International Conference on Medical Image Computing and Computer-Assisted
Intervention–MICCAI, volume 9901 of Lecture Notes in Computer Science. Springer,
2016.
Hirohisa Oda, Kanwal K. Bhatia, Masahiro Oda, Takayuki Kitasaka, Shingo Iwano, Hiroto-
shi Homma, Hirotsugu Takabatake, Masaki Mori, Hiroshi Natori, Julia A. Schnabel, and
Kensaku Mori. Automated mediastinal lymph node detection from CT volumes based
on intensity targeted radial structure tensor analysis. Journal of Medical Imaging, 4(4):
044502–044502, 2017a.
Hirohisa Oda, Kanwal K. Bhatia, Masahiro Oda, Takayuki Kitasaka, Shingo Iwano, Hiro-
toshi Homma, Hirotsugu Takabatake, Masaki Mori, Hiroshi Natori, Julia A. Schnabel,
and Kensaku Mori. Hessian-assisted supervoxel: Structure-oriented voxel clustering and
application to mediastinal lymph node detection from CT volumes. In Medical Imaging
2017: Computer-Aided Diagnosis. SPIE, 2017b.
Hirohisa Oda, Holger R. Roth, Kanwal K. Bhatia, Masahiro Oda, Takayuki Kitasaka, Shingo
Iwano, Hirotoshi Homma, Hirotsugu Takabatake, Masaki Mori, Hiroshi Natori, et al.
Dense volumetric detection and segmentation of mediastinal lymph nodes in chest CT
images. In Medical Imaging 2018: Computer-Aided Diagnosis. SPIE, 2018.
Olivier Petit, Nicolas Thome, Arnaud Charnoz, Alexandre Hostettler, and Luc Soler. Han-
dling missing annotations for semantic segmentation with deep convnets. In Deep Learn-
ing in Medical Image Analysis and Multimodal Learning for Clinical Decision Support:
4th International Workshop, DLMIA 2018, and 8th International Workshop, ML-CDS
2018, Held in Conjunction with MICCAI 2018. Springer, 2018.
Holger R. Roth, Le Lu, Ari Seff, Kevin M. Cherry, Joanne Hoffman, Shijun Wang, Jiamin
Liu, Evrim Turkbey, and Ronald M. Summers. A new 2.5D representation for lymph
node detection using random sets of deep convolutional neural network observations. In
Medical Image Computing and Computer-Assisted Intervention – MICCAI, volume 8673
of Lecture Notes in Computer Science. Springer, 2014.
Holger R. Roth, Le Lu, Ari Seff, Kevin M. Cherry, Joanne Hoffman, Shijun Wang, Jiamin
Liu, Evrim Turkbey, and Ronald M. Summers. A new 2.5D representation for lymph
node detection in CT [Data set], 2015. URL https://www.cancerimagingarchive.
net/collection/ct-lymph-nodes/.
Valerie W. Rusch, Hisao Asamura, Hirokazu Watanabe, Dorothy J. Giroux, Ramon Rami-
Porta, and Peter Goldstraw. The IASLC lung cancer staging project: a proposal for
a new international lymph node map in the forthcoming seventh edition of the TNM
classification for lung cancer. Journal of Thoracic Oncology, 4(5):568–577, 2009.
Gonglei Shi, Li Xiao, Yang Chen, and S. Kevin Zhou. Marginal loss and exclusion loss for
partially supervised multi-organ segmentation. Medical Image Analysis, 70:101979, 2021.
Constantin Ulrich, Fabian Isensee, Tassilo Wald, Maximilian Zenk, Michael Baumgart-
ner, and Klaus H. Maier-Hein. Multitalent: A multi-dataset approach to medical image
segmentation. In International Conference on Medical Image Computing and Computer-
Assisted Intervention–MICCAI, volume 14222 of Lecture Notes in Computer Science.
Springer, 2023.
Jakob Wasserthal, Hans-Christian Breit, Manfred T. Meyer, Maurice Pradella, Daniel
Hinck, Alexander W. Sauter, Tobias Heye, Daniel T. Boll, Joshy Cyriac, Shan Yang,
Michael Bach, and Martin Segeroth. Totalsegmentator: Robust segmentation of 104
anatomic structures in CT images. Radiology: Artificial Intelligence, 5(5), 2023.
Leonard Wee, Hugo J. W. L. Aerts, Petros Kalendralis, and Andre Dekker. Data from NSCLC-Radiomics-Interobserver1 [Data set], 2019. URL https://wiki.cancerimagingarchive.net/display/Public/NSCLC-Radiomics-Interobserver1.
Ke Yan, Dakai Jin, Dazhou Guo, Minfeng Xu, Na Shen, Xian-Sheng Hua, Xianghua Ye,
and Le Lu. Anatomy-aware lymph node detection in chest CT using implicit station
stratification. In International Conference on Medical Image Computing and Computer-
Assisted Intervention, volume 14394 of Lecture Notes in Computer Science. Springer,
2023.
Yuyin Zhou, Zhe Li, Song Bai, Chong Wang, Xinlei Chen, Mei Han, Elliot Fishman, and
Alan L. Yuille. Prior-aware neural network for partially-supervised multi-organ segmen-
tation. In Proceedings of the IEEE/CVF International Conference on Computer Vision,
2019.
Appendix A. Significance Testing on Ablation Study
The results are given in Table 5 and Table 6. For the ablation study of Subsection 3.2, p-values are computed via the paired sample non-parametric Wilcoxon signed-rank test.
Table 5: p-value of the non-parametric paired Wilcoxon signed-rank test of the null hypothesis for the Dice score, computed pairwise between Models 1-7.
Table 6: p-value of the non-parametric paired Wilcoxon signed-rank test of the null hypothesis for ASSD [mm], computed pairwise between Models 1-7.
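As a rough illustration of the statistical procedure used for Tables 5 and 6 (a sketch only, not the authors' evaluation code; the per-case Dice arrays below are placeholders rather than challenge results), such a paired, non-parametric p-value could be computed with SciPy:

# Sketch: paired, non-parametric Wilcoxon signed-rank test between two ablation models.
# The per-case Dice scores are illustrative placeholders, not values from the paper.
import numpy as np
from scipy.stats import wilcoxon

dice_model_a = np.array([0.61, 0.55, 0.70, 0.48, 0.66, 0.59])  # same test cases, model A
dice_model_b = np.array([0.52, 0.50, 0.63, 0.41, 0.60, 0.57])  # same test cases, model B

# Two-sided test of the null hypothesis that the paired differences are symmetric about zero.
statistic, p_value = wilcoxon(dice_model_a, dice_model_b)
print(f"Wilcoxon statistic = {statistic:.3f}, p-value = {p_value:.3g}")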
|
ai_researcher | 1 | Time_and_its_Study_in_Design_Ideation_Processes.pdf | Proceedings of the ASME 2024 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference
IDETC/CIE2024
August 25-28, 2024, Washington, DC
DETC2024-143166
AUTOTRIZ: ARTIFICIAL IDEATION WITH TRIZ AND LARGE LANGUAGE MODELS
Shuo Jiang
Singapore University of Technology and Design,
Singapore
[email protected]
Jianxi Luo
Department of Systems Engineering,
City University of Hong Kong, Hong Kong
[email protected]
ABSTRACT
Researchers and innovators have made enormous efforts in
developing ideation methods, such as morphological analysis
and design-by-analogy, to aid engineering design ideation for
problem solving and innovation. Among these, the Theory of
Inventive Problem Solving (TRIZ) stands out as one of the most
well-known approaches, widely applied for systematic
innovation. However, the complexity of TRIZ resources and
concepts, coupled with its reliance on users' knowledge,
experience, and reasoning capabilities, limits its practicality.
Therefore, we explore the recent advances of large language
models (LLMs) for a generative approach to bridge this gap.
This paper proposes AutoTRIZ, an artificial ideation tool that
uses LLMs to automate and enhance the TRIZ methodology. By
leveraging the broad knowledge and advanced reasoning
capabilities of LLMs, AutoTRIZ offers a novel approach for
design automation and interpretable ideation with artificial
intelligence. AutoTRIZ takes a problem statement from the user
as its initial input, and automatically generates a solution report
after the reasoning process. We demonstrate and evaluate the
effectiveness of AutoTRIZ through consistency experiments in
contradiction detection, and a case study comparing solutions
generated by AutoTRIZ with the experts’ analyses from the
textbook. Moreover, the proposed LLM-based framework holds
the potential for extension to automate other knowledge-based
ideation methods, including SCAMPER, Design Heuristics, and
Design-by-Analogy, paving the way for a new era of artificial
ideation for design innovation.
Keywords: Innovation, Design Ideation, Problem Solving,
TRIZ, Large Language Models, Artificial Intelligence
1. INTRODUCTION
Intuitive or structured
ideation methods such as
brainstorming, morphological analysis, and mind-mapping [1–3]
have been used to aid creative ideation of human designers for
concept generation. Among these, the Theory of Inventive
Problem Solving (TRIZ) [4] stands out as one of the most well-
known approaches, widely applied for systematic innovation.
TRIZ is a knowledge-based ideation methodology that provides
a structured framework for engineering problem solving by
identifying and overcoming technical contradictions using
inventive principles derived from a large-scale patent database.
However, the complexity of TRIZ resources and concepts poses
significant cognitive challenges to effectively learning and
applying it. In addition, the problem-solving process in TRIZ is
highly dependent on the reasoning capabilities of human users.
While some researchers have employed natural language
processing and machine learning techniques to support certain
steps within TRIZ [5–7], the effectiveness still depends heavily
on the users’ proficiency with TRIZ.
Large Language Models (LLMs) such as OpenAI's GPT [8]
and Meta's Llama [9] have not only acquired broad knowledge
but also developed emergent abilities such as in-context learning
[10], instruction following [10], and step-by-step reasoning [10].
These capabilities have been applied across various domains,
including medicine [11], chemistry [12], and mathematics [13].
Recently, researchers have evaluated the capabilities of LLMs in
engineering-related tasks [14,15] and reported the extensive
engineering knowledge within these models as well as their wide
applicability in engineering design and manufacturing. In terms
of engineering problem solving and idea generation, there has
been preliminary exploration using LLMs [16–19]. However, the
lack of transparency and limited control over reasoning steps
during ideation often leads to divergent results, requiring
multiple heuristic attempts by users to achieve desired outcomes,
which places significant demands on their domain-specific
expertise. Besides, the interpretability of generated concepts
remains challenging, as users obtain only the final results
without understanding the ideation reasoning process.
In this work, we aim to leverage the broad knowledge and
advanced reasoning capabilities of LLMs to automate the TRIZ
method, showcasing the potential of LLMs in design automation
and interpretable innovation. We have developed an LLM-based
tool, AutoTRIZ (www.autotriz.ai), capable of
intelligent
artificial
ideation for problem solving with TRIZ-based
interpretability. AutoTRIZ begins with a problem statement from
the user and automatically generates a report that includes
multiple solutions, strictly following the TRIZ thinking flow and
reasoning process. In this paper, we also evaluate the
effectiveness and performance of AutoTRIZ through quantitative comparison, as well as case studies involving human uses of TRIZ from TRIZ textbooks.
2. RELATED WORK
2.1 TRIZ
TRIZ is a knowledge-based systematic approach of
inventive problem solving, developed in the 1960s by Genrich S.
Altshuller and his colleagues [4]. Through a thorough analysis of
over 40,000 patents, Altshuller and his collaborators identified
repeated patterns of innovation and underlying innovative
principles within these documents. By inductively analyzing
these patterns, they proposed a comprehensive problem-solving
framework, applying selected inventive principles for ideation.
Since then, TRIZ has been developed continually and some
modern TRIZ databases rely on the analysis of over 2 million
patents. It has been widely applied in industries, research, and
education with notable influence in many fields, such as energy,
electrical, automotive industries, and mechanical engineering
[20].
The TRIZ toolkit contains a series of theories and tools that
cover all aspects of problem understanding and solving,
including the trimming method, evolution trends, and 76
standard solutions [4]. In this paper, we focus on the best-known
tool, the Method of Inventive Principles, which represents the
basic reasoning logic behind TRIZ. Figure 1 shows the overview
of its framework (adapted from [21]), which contains four steps:
(1) Identify the specific problem.
(2) Transform the specific problem into a general problem
by identifying physical contradictions. The contradictions
involve an improving feature and a worsening feature. These
features are drawn from Altshuller’s 39 engineering parameters.
(3) Search for selected inventive principles from the
contradiction matrix using
identified contradictions. The
contradiction matrix is organized in the form of 39-improving
features and 39-worsening features (a 39 by 39 matrix) with each
cell entry listing the most often used principles (from TRIZ’s 40
inventive principles) that may be used to solve the problem.
(4) Use the selected principles to generate solutions to the
problem.
Although TRIZ has demonstrated its effectiveness, it still
suffers from drawbacks that hinder its practical applications. For
instance, the complexity of TRIZ resources and concepts poses
cognitive challenges to effectively learning and applying it,
particularly for non-experts. Additionally, the efficacy of TRIZ
is heavily constrained by the users’ reasoning capabilities and
prior knowledge already acquired.
FIGURE 1: Four steps for problem solving using TRIZ
Recent advancements in machine learning and natural language processing have been applied in conjunction with TRIZ [5,7,22]. These efforts aim to automate the TRIZ reasoning process, thereby reducing the difficulty of use. For
instance, Cascini and Russo [5] developed the PAT-ANALYZER
system that can analyze patent texts and automatically extract the
contradictory information underlying the innovation for the use
of TRIZ. Similarly, Guarino et al. [7] proposed the PaTRIZ,
combining the Bidirectional Encoder Representations from
Transformers (BERT) and Conditional Random Fields (CRF) for
word-level patent analysis and TRIZ contradiction mining. Li et
al. [22] proposed an approach that leverages natural language
processing techniques to assess patent innovations according to
the level of invention as defined in TRIZ. Berdyugina
and Cavallucci [23] proposed a methodology for the automatic
extraction of inventive information from texts for formulating an
inventive problem into TRIZ engineering parameters. Their
method combined a series of text-mining techniques, including
topic modeling, word embedding, and clustering. Hall et al. [6]
proposed an approach that uses topic modeling and unsupervised
machine learning to map TRIZ inventive principles to individual
patents and detect the novelty.
However, most of these works focus on utilizing algorithms
to improve specific steps of the TRIZ process. They still require
innovators to dedicate much time and effort to extensive
reasoning. Employing these methods does not directly assist
users throughout the entire process, from analyzing a problem to
creating practical solutions. In this paper, we aim to harness
LLMs to automate the entire TRIZ reasoning process and
minimize the cognitive requirements for users during its
application.
FIGURE 2: The framework of AutoTRIZ
2.2 Large Language Models for Design and Innovation
Over the past years, many data-driven approaches have
utilized machine learning and deep learning techniques to
augment design and innovation [24,25]. Evolved from deep
learning and pre-trained language models, LLMs typically refer
to Transformer-based models that contain hundreds of billions
of parameters for processing and generating natural language
texts [10]. They are trained on extremely large-scale corpora,
enabling them to acquire a wide range of knowledge and
capabilities,
including understanding context, generating
coherent text, and step-by-step reasoning [10]. Some research has already explored the application of LLMs in engineering design and innovation within specific fields, including microfluidic devices [26], robotics [27], and the user interface of webpages [28]. However, most of these early efforts primarily utilize conversational interactions, such as those facilitated by the ChatGPT Interface [8], to engage in the innovation process. Meanwhile, with the development of LLMs, there has been an increase in efforts to create LLM-driven methods and tools to offer more generalized innovation assistance and directly support users in rapid ideation.
For instance, several studies have harnessed LLMs for processing vast amounts of design documentation, representing designs in specific forms, and identifying user needs for product development [16,17,29]. Han et al. [17] introduced an LLM-based attribute-sentiment-guided summarization model to extract user needs from online product reviews. Qiu et al. [29]
applied a transformer-based language model to distill design-
related knowledge from extensive reports and documents.
Moreover, Wang et al. [16] utilized LLMs to decompose
conceptual design tasks into Function-Behavior-Structure (FBS)
formats, assisting users in ideation across different aspects.
Recent studies have developed tools and methodologies
utilizing LLMs to aid the design process, enhance human-
computer collaborative
innovation, or directly produce
innovative concepts for users [18,19,30,31]. Ding et al. [31]
conducted a systematic exploration of LLMs’ potential to boost
cross-domain analogical creativity. Huang et al. [30] proposed
CausalMapper, a system that combines LLMs with causal
mapping to reason about the connections between problems and
solutions. Ma et al. [32,33] evaluated the differences between
LLM-generated and crowdsourced design solutions through
multiple perspectives, including human expert evaluations and
computational metrics. Zhu and Luo [19] presented GPT-based
models with domain-specific tuning and task-specific learning,
to generate original and useful design concepts. Notably, they
applied their approach to automating bio-inspired design concept
generation [18].
Although these recent idea-generation methods directly
leverage the reasoning capabilities of LLMs, the lack of control
over LLMs may hinder their effectiveness when assisting
ideation. These approaches often lead to solutions that are too
divergent to meet specific needs. Managing the problem-solving
process to ensure that solutions are both innovative and practical,
as well as understanding the reasoning process behind generated
innovative solutions, remains a challenge. In this study, we
address this issue by integrating TRIZ with LLMs, presenting
AutoTRIZ as a tool that follows the TRIZ reasoning steps to
generate inventive solutions with interpretability.
3. AUTOTRIZ
In this section, we introduce AutoTRIZ, an artificial
ideation tool that automates TRIZ with LLMs. The architecture
of AutoTRIZ is depicted in Figure 2. At the core of AutoTRIZ is
the utilization of LLMs to learn the reasoning process of the
TRIZ methodology, which engineers often find it challenging to
learn and excel at.
Overall, AutoTRIZ takes a problem statement from the user
as its initial input, and automatically generates a solution report
after the reasoning process. The report includes detailed
information about the reasoning process based on TRIZ and the
resulting solutions to the problem. Within AutoTRIZ, we have
defined a four-step reasoning flow based on the classic TRIZ
workflow. The system includes an inner fixed knowledge base
which consists of three segments related to TRIZ details,
enabling controlled reasoning. It is noteworthy that our focus is
on controlling the entire problem-solving reasoning process,
while remaining open to the knowledge used in ideation. The
problem-related knowledge applied during the problem-solving
process is drawn from the knowledge base that the LLM has
acquired through pre-training on the large-scale corpus.
3.1 Controlling the TRIZ Reasoning Flow
To ensure that the system strictly follows the TRIZ thinking
flow and reasoning process, we have configured AutoTRIZ with
four modules, each corresponding to the four steps in TRIZ. As
depicted in Figure 2, Modules 1, 2, and 4, outlined by solid-line
frames, are driven by LLMs, whereas Module 3, outlined by a
dashed-line frame, is controlled by predefined functions without
using LLMs. Specifically, we exploit the instruction-following
capabilities of LLMs for backend reasoning control. In each
module that incorporates LLMs, relevant instructions are
engineered into the input as system and assistant prompts.
Specifically, in Module 1, AutoTRIZ identifies the problem
to be solved from user input and converts it into descriptive text.
Ideally, we hope that the content entered by the user is a clear
problem statement. However, user inputs may include additional
information such as scenario descriptions, background details,
and even some redundant information. Therefore, in this module,
AutoTRIZ is designed to identify and extract information related
to the problem and then reorganize it into clear and concise text.
In Module 2, AutoTRIZ receives the processed problem
description and detects its engineering contradiction, which is
represented by a space constructed from two out of the 39
engineering parameters. At this stage, AutoTRIZ learns all the
engineering parameters based on its inner knowledge base. The
outputs of this module are presented in a structured format (i.e.,
the indexes of the improving and worsening features). It is
important to note that for the same problem statement, the
identified contradiction may differ with each execution of this
module. On the one hand, a single problem may encompass
multiple contradictory pairs, yet our system is designed to
identify only one contradiction. On the other hand, there is an
inherent randomness in the content generation by LLMs. In the
next section, we will conduct experimental investigations to
examine the efficacy of contradiction identification and the
consistency of the outputs.
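As an illustration of how this module could be instructed (the prompt wording, the parameter excerpt, and the JSON schema below are assumptions for the sketch; the released AutoTRIZ prompts are not reproduced in this paper), Module 2 might request the improving and worsening indexes as structured output:

# Sketch of Module 2: contradiction identification with a structured JSON output.
# Prompt text, parameter excerpt, and schema are illustrative assumptions.
import json

ENGINEERING_PARAMETERS = {
    1: "Weight of moving object",
    6: "Area of stationary object",
    9: "Speed",
    13: "Stability of the object's composition",
    # ... remaining entries of the 39 TRIZ engineering parameters
}

def identify_contradiction(problem: str, call_llm) -> tuple[int, int]:
    """Ask the backend LLM for one improving/worsening parameter pair."""
    parameter_list = "\n".join(f"{i}. {name}" for i, name in ENGINEERING_PARAMETERS.items())
    prompt = (
        "You are given the 39 TRIZ engineering parameters:\n"
        f"{parameter_list}\n\n"
        "Identify one engineering contradiction in the problem below. Answer only "
        'with JSON of the form {"improving": <index>, "worsening": <index>}.\n\n'
        f"Problem: {problem}"
    )
    result = json.loads(call_llm(prompt))
    return result["improving"], result["worsening"]

# Usage with a stubbed backend that always returns a fixed pair.
if __name__ == "__main__":
    stub = lambda prompt: '{"improving": 6, "worsening": 13}'
    print(identify_contradiction("Metal shots wear out the plastic pipe elbows.", stub))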
Once the contradiction is identified, Module 3 searches the
contradiction matrix to find the indexes of relevant inventive
principles and returns their descriptions. Following this, Module
4 synthesizes the original problem description, the identified
engineering contradiction, and the inventive principles recommended by the system through TRIZ, to generate the final solutions.
LLMs can generate complex structured data, such as those
in HTML and LaTeX formats [34]. In AutoTRIZ, we harness
this capability to integrate all generated content and directly
produce a reader-friendly problem-solving report in a structured
format. We have engineered the format template directly into
Module 4, enabling it to output documents formatted in LaTeX.
In practice, the template for the report generation can be adjusted
as needed to suit specific requirements.
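A minimal orchestration sketch of the four modules described in this subsection is shown below; the prompt texts, helper names, and report template are illustrative assumptions rather than the released implementation, and call_llm stands in for any chat-completion backend:

# Sketch of the four-module AutoTRIZ flow: Modules 1, 2, and 4 call the backend LLM,
# while Module 3 is a plain lookup function with no LLM involved.
import json

def run_autotriz(user_input: str, call_llm, contradiction_matrix, principles) -> str:
    # Module 1: distill the user input into a concise problem statement.
    problem = call_llm("Rewrite the following as a concise problem statement:\n" + user_input)

    # Module 2: identify one contradiction as improving/worsening parameter indexes,
    # returned as JSON (see the structured-output sketch above).
    pair = json.loads(call_llm(
        "Identify one TRIZ contradiction for this problem. "
        'Answer only with JSON {"improving": i, "worsening": j}.\n' + problem))
    improving, worsening = pair["improving"], pair["worsening"]

    # Module 3: predefined function, no LLM -- contradiction matrix lookup.
    principle_ids = contradiction_matrix[(improving, worsening)]
    selected = [principles[i] for i in principle_ids]

    # Module 4: synthesize everything into a LaTeX-formatted solution report.
    return call_llm(
        "Apply each inventive principle to the problem and write one solution per "
        "principle, formatted as a LaTeX report.\n"
        f"Problem: {problem}\n"
        f"Contradiction: improve parameter {improving} without worsening {worsening}.\n"
        f"Principles: {selected}")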
3.2 Learning from the Fixed Knowledge Base
AutoTRIZ acquires the necessary information to learn the
prior knowledge of TRIZ, enabling it to handle various types of
problems. We have curated a static knowledge base, which
interacts with the modules we described above, thereby
empowering AutoTRIZ to master and apply the relevant
knowledge.
In AutoTRIZ, the internal fixed knowledge base includes
three main components: (1) the TRIZ 39 Engineering Parameters
[4], (2) the TRIZ Contradiction Matrix [4], and (3) the TRIZ 40
Inventive Principles [4]. Notably, the contradiction matrix here
is identical to the traditional TRIZ contradiction matrix. The
knowledge regarding engineering parameters and inventive
principles includes titles and detailed descriptions for each entry.
For example, for the first engineering parameter:
[INDEX]1 [TITLE] Weight of moving object
[DESCRIPTION]The mass of the object in a
gravitational field, essentially the
force that the body exerts on its
support or suspension.
Similarly, for the first inventive principle:
[INDEX]1 [TITLE] Segmentation
[DESCRIPTION] The Segmentation principle
encourages consideration of the division
of an object or system into smaller
independent parts, making it sectional,
making it easy to assemble or
disassemble, and increasing the degree
of its divisibility or fragmentation.
All engineering parameters are configured into Module 2 as
assistant information. The backend LLMs learn instructions and
the output parameter space through in-context learning, enabling
zero-shot reasoning. Regarding inventive principles, only
selected contents are delivered to the system based on the
position in the contradiction matrix. This process is very similar
to LLMs’ Retrieval Augmented Generation (RAG) [35]. By
retrieving additional information related to the query from
external databases, RAG incorporates these external texts into
LLM prompts to address the hallucination problem, leading to
better generation [35]. Whereas in our system, the problem-
solving process involves precise search-augmented generation,
effectively bridging the gap between the prior TRIZ knowledge
from experts and the reasoning capabilities of LLMs derived
from large-scale pre-training. Simultaneously, all solutions
generated are interpretable because each solution is derived from
the application of selected inventive principles.
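A minimal sketch of how the fixed knowledge base and the matrix-based retrieval could be represented is given below; the entries are abbreviated, the descriptions are paraphrased, and only the matrix cell discussed for case 7 in Section 4.3 is shown rather than the full 39-by-39 matrix:

# Sketch of the fixed TRIZ knowledge base and the matrix-based retrieval step.
# Entries are abbreviated and descriptions paraphrased for illustration.
INVENTIVE_PRINCIPLES = {
    1: ("Segmentation", "Divide an object or system into smaller independent parts."),
    2: ("Extraction", "Separate an interfering part or property from an object."),
    28: ("Mechanical Substitution", "Replace a mechanical means with another field or sense."),
    # ... remaining entries of the 40 inventive principles
}

# Contradiction matrix cells keyed by (improving, worsening) parameter indexes.
CONTRADICTION_MATRIX = {
    (6, 13): [2, 39],  # the cell reported for case 7 in Section 4.3
}

def retrieve_principles(improving: int, worsening: int) -> list[str]:
    """Return the principle descriptions injected into the Module 4 prompt."""
    ids = CONTRADICTION_MATRIX.get((improving, worsening), [])
    return [
        f"[INDEX] {i} [TITLE] {INVENTIVE_PRINCIPLES[i][0]} "
        f"[DESCRIPTION] {INVENTIVE_PRINCIPLES[i][1]}"
        for i in ids if i in INVENTIVE_PRINCIPLES
    ]

print(retrieve_principles(6, 13))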
3.3 System Implementation
We developed a web-based tool for public users to test and
use AutoTRIZ, available at: https://www.autotriz.ai/. Figure 3
shows the user interface of the tool. Throughout the deployment
of this tool and all experiments conducted in this study, we
utilized GPT-4 (Version: 20231106, the state-of-the-art model at
the time this work was done) as the backend LLM. However, it
is important to note that since the proposed AutoTRIZ is a
general framework, the backend LLM can be replaced with any
other closed-source LLM (e.g., Claude) or open-source LLM
(e.g., Llama) with minimal effort required for adapting the
corresponding prompts. For the TRIZ knowledge base in
AutoTRIZ, we adopt the TRIZ definitions and descriptions in an
engineering design textbook [36].
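Because the framework only assumes a chat-completion backend, swapping the LLM amounts to re-implementing one thin wrapper. The sketch below assumes the OpenAI Python client (v1+); the model identifier, system prompt, and temperature are illustrative assumptions rather than the deployed configuration:

# Sketch of a thin backend wrapper; replacing the LLM only requires rewriting this function.
from openai import OpenAI

_client = OpenAI()  # reads OPENAI_API_KEY from the environment

def call_llm(prompt: str, system: str = "You are AutoTRIZ, a TRIZ reasoning assistant.") -> str:
    response = _client.chat.completions.create(
        model="gpt-4-1106-preview",  # assumed identifier for GPT-4 version 20231106
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": prompt},
        ],
        temperature=0.7,
    )
    return response.choices[0].message.content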
FIGURE 3: AutoTRIZ web-based tool
4. EXPERIMENTAL EVALUATION
In this section, we evaluate the effectiveness of the
proposed AutoTRIZ through quantitative experiments and
comparative studies. Specifically, we collected several case
studies analyzed by human experts from TRIZ textbooks,
constructing a case base. Then, we explored the consistency of
the system in identifying engineering contradictions, as well as
its overlap with human analysis. Finally, we selected a specific
problem from the case base, then compared and discussed the
solutions generated by AutoTRIZ against the results of human
experts.
4.1 Constructing the TRIZ Case Base
To evaluate the performance of AutoTRIZ, we first
constructed a case base containing TRIZ problem-solving cases
developed by human experts. Initially, we gathered several
TRIZ-related textbooks, some of which are focused on general
design innovation, while others are specifically about TRIZ.
From 7 of these textbooks [4,36–41], we collected 10 initial
cases. The selection criteria include: (1) the content of the case
contains all elements of the TRIZ reasoning process, including
problem description, contradiction identification, inventive
principle positioning, and solutions; (2) the problem is defined
clearly and comprehensively; (3) the cases do not contain similar
problems. All cases are stored in JSON format. For more details
on collected cases, please refer to our GitHub repository (https://github.com/shuojiangcn/AutoTRIZ-DETC24).
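The exact case file schema is not reproduced in this paper, but since each case must store the four TRIZ elements listed above, one entry could plausibly be serialized as follows (field names are assumptions; the values echo the textbook analysis of case 7 in Section 4.3):

# Illustrative structure for one entry of the TRIZ case base; field names are assumptions.
import json

case = {
    "id": 7,
    "domain": "manufacturing",
    "problem": "Pneumatic transport of metal shots wears out plastic pipe elbows.",
    "contradiction": {"improving": 9, "worsening": 13},
    "inventive_principles": [28],
    "solution": "Place a magnet at the elbow so metal shots form a protective layer.",
    "source": "textbook",
}

with open("case_007.json", "w") as f:
    json.dump(case, f, indent=2)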
The initial 10 cases cover various domains, including
environmental engineering,
transportation, manufacturing,
material science, aerospace technology, and so on. The evaluation of these cases can serve as a preliminary benchmark,
enabling users to understand and experience the usage protocol
and performance of AutoTRIZ. In the future, we will continue to
expand the case base for more robust testing. Beyond serving
experimental purposes in this study, the curated case base can
also store the results generated by users with AutoTRIZ. As the
size of the base expands, we can also explore the interaction
between the reasoning module and the existing case base,
enabling AutoTRIZ's innovative capabilities to be scalable.
4.2 Assessing the Contradiction Identification
Detecting contradictions is an essential step in the entire
TRIZ problem-solving process. Accurate identification of the
contradictions within a problem can effectively assist the system
in recommending the appropriate inventive principles for the
next step. Within LLMs, randomness is incorporated into the text
generation process. These models often use sampling methods
(e.g., top-k sampling) or temperature adjustments to control the
generation process, leading to a variety of possible outputs rather
than repeating the same response every time. Because of this
inherent variability, LLMs may suffer from instability during
inference. As a result, some LLM-based agents adopt self-
consistency techniques that create several reasoning paths and
then perform an ensemble on all generated answers, selecting the
most consistent one through majority voting [42]. However, in
traditional TRIZ, analyzing the same problem from different
perspectives can yield different possible contradictions. Such
stochastic nature of LLM-based generation can be useful for
increasing the diversity of generated ideas [32]. Based on this,
we maintain the setting of producing a single contradiction in
each entry. To assess the performance and consistency of this
setting, we conducted the following experiments.
For each given problem statement, we performed the
analysis 100 times, resulting in 100 pairs of identified parameters
(contradictions). Then, we counted all results and calculated
their respective proportions. In cases of high consistency, a
particular contradiction could be dominant. In some cases, one
parameter in the contradiction may have higher certainty than the
other, leading to more dispersed results.
We used information entropy as the uncertainty score,
where smaller entropy value indicates greater confidence in the
model's output. The information entropy metric is widely used
for uncertainty measurement [43]. Given a probability
distribution 𝑋 generated by the model, we can calculate the
entropy by:
$H(X) = -\sum_{i=1}^{n} P(x_i) \log P(x_i)$
where $P(x_i)$ represents the frequency probability of the i-th class in a total of 100 trials and n is the number of possible classes.
Since we have 100 trials in our experiments, the entropy value
ranges from 0 to 6.64, where a smaller value indicates higher
consistency.
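As a sketch of this computation (the detected pairs below are placeholders, not experimental results), the uncertainty score for one case can be obtained from its 100 detected contradictions as follows, using a base-2 logarithm so that the maximum over 100 trials is log2(100), approximately 6.64:

# Sketch: information entropy over the contradictions detected in 100 runs of one case.
# The list of detected (improving, worsening) pairs is a placeholder.
import math
from collections import Counter

detections = [(9, 13)] * 62 + [(6, 13)] * 25 + [(9, 30)] * 13  # 100 trials

counts = Counter(detections)
total = sum(counts.values())

# H(X) = -sum_i P(x_i) * log2 P(x_i); ranges from 0 to log2(100) for 100 trials.
entropy = -sum((c / total) * math.log2(c / total) for c in counts.values())
print(f"entropy = {entropy:.3f} bits")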
Furthermore, we examined the overlap between AutoTRIZ's detection and the analysis results of human experts
from textbooks, categorizing them into three scenarios: complete
match, half match, and no match. It is important to note that since
human expert analysis also includes subjectivity and bias, it
cannot be considered a golden standard. The main purpose of this experiment is to showcase and quantitatively compare AutoTRIZ against human uses of TRIZ.
Figure 4 shows the experimental results, where the bar chart
for each case illustrates the top 3 detections by proportion. The
top 3 detections represent the output results corresponding to the
three classes with the highest probabilities in the probability
distribution obtained from the 100 trials. The use of top 3
detections enables us to account for both the model accuracy and
the randomness in its predictions. In the chart, green bars
represent complete match, blue bars indicate half match, and
yellow bars denote not match. The table at the bottom shows the
entropy of each case and whether the top 3 detections match the
reference from textbooks, with symbols (✓, ✓, ✗) indicating
complete match, half match, and not match, respectively.
Overall, 7 out of 10 cases match or half-match the
textbook’s analysis within the top 3 detections, indicating that
AutoTRIZ's inference overlaps with the human experts’ results
to a certain degree. A minority of the cases show relatively
higher consistency (cases 5, 6, 7, 8), where the proportion of the
top 1 detection is significantly higher than the other detections,
including two complete match detections. For these cases,
utilizing self-consistency may be beneficial
to enhance
performance. For other cases, the experimental results show
greater diversity, indicated by higher information entropy. By
examining the content of the top 3 detections of contradiction for
each case, we observe that for almost all cases, one parameter is
fixed while the other varies. Moreover, when using the
textbook’s analysis as a reference, a pattern emerges across all
cases where outputs with higher probabilities (within the top 3
detections) show a better match in alignment. These findings can
serve as the initial benchmark for assessing the performance of
AutoTRIZ’s contradiction identification. As the case base
expands in the future, we can explore these patterns in a more
fine-grained way with greater statistical significance. For
example, we can examine the differences between various themes, leveraging techniques such as self-consistency reasoning in conjunction with the identified patterns to improve overall performance.
4.3 Comparing AutoTRIZ and Human Expertise
In this section, we select one of the collected cases (case 7)
to compare AutoTRIZ's generated report with humans’ analysis
results from the textbook. The reasons for choosing case 7 are
two-fold: (1) This case exhibits relatively high consistency in
identifying engineering contradictions, with one dominant
outcome (Figure 4); (2) The top 3 detections of contradiction are
all half-match with the reference. This ensures a certain degree
of reliability while allowing the distinction between the
subsequent reasoning paths of AutoTRIZ and humans.
The problem of case 7 is about the pneumatic transportation
of metal shots through a system of plastic piping [39]. Here is
the original problem statement:
We are faced with a challenge involving the pneumatic
transportation of metal shots through a system of plastic
piping originally intended for plastic pellets. The transition
to metal shots, despite their advantages for production
purposes, has led to significant wear and damage, particularly at the pipe's elbows. This issue arises from the incompatibility between the metal shots and the existing plastic elbow design. The task is to identify and implement a solution that resolves this conflict, ensuring the system's durability and effectiveness for transporting metal shots.
FIGURE 4: Experimental results about contradiction detection
In the textbook, the identified improving parameter is
"Speed" (Parameter 9), and the worsening parameter is "Stability
of the object's composition" (Parameter 13). According to the
contradiction matrix, the author selects "Mechanical Substitution" (Principle 28) from the obtained inventive principles. Applying this principle, the author describes the solution as placing a magnet at the elbow to bind the metal shots
to a plastic material, thereby creating a blanket of shots that absorb the energy.
Figure 5 shows the problem-solving report generated by
AutoTRIZ, containing the reasoning process and solutions. The
same problem statement is used as the input. Firstly, we can see
that AutoTRIZ simplifies the original problem statement,
identifying the main issue that needs to be addressed. Regarding
the identification of contradictions, AutoTRIZ diverges from
human expertise. Both AutoTRIZ and the textbook’s analysis
consistently recognize the "Stability of the object's composition"
(Parameter 13) as the worsening feature. However, concerning
the improving feature, AutoTRIZ detects "Area of stationary
object" (Parameter 6), while the textbook's analysis considers it
to be "Speed" (Parameter 9). From the original problem
statement, we understand that the key issue is to avoid wear on
the plastic elbows by the metal shots to ensure durability, which
clearly indicates that one of the contradictory parameters
involves stability. Whereas the identification of the other
parameter is not directly mentioned, leading to a variety of
possible interpretations. AutoTRIZ reasons that the surface area
needs improvement to withstand the impact and wear of the
metal shot, while the expert asserts speed as the system’s top
priority. These two analyses highlight different needs, thereby
guiding subsequent innovative directions differently.
In the textbook's analysis, the author selected a single inventive principle (28, 'Mechanical Substitution') and created a solution by positioning a magnet at the piping's elbow, which magnetically attaches metal shots to the plastic, forming an energy-absorbing layer. This approach represents a direct and effective innovation. However, based on the identified parameter pair, the contradiction matrix could yield four inventive principles (i.e., (1, 'Segmentation'), (28, 'Mechanical Substitution'), (33, 'Homogeneity'), (18, 'Mechanical Vibration')). Some principles may be challenging to apply, as the
outcomes are directly influenced by the users’ reasoning ability,
experience, and familiarity with TRIZ materials. This step also
requires the most human effort in TRIZ. By comparison,
AutoTRIZ can effectively overcome this issue. After identifying
the contradiction (Parameter 6 vs. Parameter 13), AutoTRIZ
identifies two inventive principles from the contradiction matrix
(i.e., (2, 'Extraction'), (39, 'Strong Oxidants')). For each principle, AutoTRIZ applies it and generates a corresponding solution. Both proposed solutions demonstrate feasibility and innovation. Solution 1 implements a physical alteration to prevent direct contact between the metal shots and the piping. Solution 2, integrating 'Strong Oxidants', involves a surface treatment to improve the piping's durability against metal shots through a protective coating.
In summary, both the textbook's solution and the solutions
automatically generated by AutoTRIZ are practical, originating
from different inventive principles and leading to different
approaches. In the previous section, we performed 100 trials on
each case for contradiction detection. We randomly selected one
trial's solutions to compare and discuss with humans' analysis
results from the textbook in this section. We only randomly chose
one result because the solutions and the complete report are relatively lengthy and complex. Besides the case study exploration, we will also seek computational evaluation methods and metrics [44] regarding the quality of generated solutions in future work. It is important to note that these solutions are relatively preliminary and can serve as foundational directions for innovators to further develop and refine their designs. On this basis, we will continue to develop AutoTRIZ to produce more detailed solutions for the given problem.
FIGURE 5: AutoTRIZ generated solution report for case 7
5. DISCUSSION
So far, we have presented a new methodology that
integrates LLMs and the systematic innovation method, TRIZ,
to automatically generate inventive solutions for any given
problem in an interpretable way. This methodology has been
implemented into a web-based tool, AutoTRIZ. We have demonstrated its effectiveness and practicality through experiments and case studies.
Prior studies [14,15] have assessed LLMs’ capabilities
across a broad range of engineering-related tasks, revealing that
these models (especially GPT-series models) hold extensive
engineering knowledge, such as design and manufacturing.
Therefore, in our framework, we only control the reasoning flow,
without limiting the knowledge involved in the ideation process,
to fully leverage the general knowledge and capabilities of LLMs. In this study, our case base of 10 problems spans multiple distinct domains, and AutoTRIZ has effectively generated inventive solutions in each case.
FIGURE 6: The multi-input usages of AutoTRIZ
The proposed method significantly reduces the entry barrier
to TRIZ. AutoTRIZ can generate a multitude of solutions in a
short period of time because it leverages the computational
power and vast knowledge base of LLMs. This efficiency is
further enhanced by its user-friendly interface, allowing for easy
configuration and use, significantly reducing the time needed to
generate ideas and refine problem-solving strategies. In contrast,
mastering the traditional TRIZ method for professional use
typically requires months of training and substantial intellectual
and cognitive efforts [45].
In the comparative study of case 7, we observed that the
problem statement contains information related to the desired
direction of improvement, which is relevant to the contradiction.
Such information aids in aligning AutoTRIZ’s detections with
those of human experts. Accordingly, as demonstrated in Figure
6, we can incorporate multi-input configurations into the system,
enabling AutoTRIZ to generate solutions that fully consider
detailed requirements from users. The user interaction settings
with AutoTRIZ are also a topic worth exploring. We currently
keep it simple to ensure accessibility for all users, including
those without an understanding of TRIZ. We plan to investigate
user interaction with TRIZ, AutoTRIZ, and vanilla LLMs,
examining the differences to identify the most effective methods
for improving the overall user experience and system performance.
Although this study focuses on automating the TRIZ
reasoning process using LLMs, the proposed framework can be
extended to automate other knowledge-based innovation methods. For instance, Yilmaz et al. [46] identified 77 design
heuristics from over 3,000 design process outcomes, and
suggested a subset of heuristics to designers, which when
selected at random, has produced improved design outcomes
[47]. By applying our framework to this research, one could treat
the identified heuristics as an internal knowledge base for the
LLM-based agent, determining how to utilize these heuristics in
the backend. Moreover, to develop a more powerful tool, one
could also integrate various knowledge-based idea generation
methods into the reasoning modules of LLMs, such as
SCAMPER [48], IDEO Method Cards [49], Bio-inspired Design
[50], and Design-by-Analogy [51–53].
The proposed AutoTRIZ framework has several limitations.
Firstly,
the solutions generated by LLMs may contain
hallucinations or erroneous information. We plan to include fact-
check modules to ensure the accuracy of the solutions.
Additionally, there is no objective mechanism to evaluate the
effectiveness of generated solutions. Users must independently
assess solution quality and rank them for practical use. The
evaluation studies conducted in this paper compared results
solely from textbooks, which usually represent the analysis of a
single expert or a small group of experts. Future studies will
involve many more experts analyzing the same problems for
comparison, making the conclusions more robust. Moreover, this
study was demonstrated on a limited set of problem cases,
providing only an initial insight into AutoTRIZ that might
introduce some bias. In future research, we aim to apply this
method to a broader and more diverse range of problems,
systematically evaluating AutoTRIZ's performance.
6. CONCLUSION
In this paper, we propose AutoTRIZ, an artificial ideation
workflow and tool that leverages LLMs to automate the TRIZ
methodology and enhance its applications. AutoTRIZ is
constructed by multiple LLM-based reasoning modules and a
pre-defined function module, interacting with the inner fixed
knowledge base. It takes problem statements from users as initial
inputs and automatically produces an interpretable solution
report by following the step-by-step TRIZ reasoning process.
The efficacy of this method is demonstrated and evaluated
through quantitative and comparative experiments, as well as
case studies involving human uses of TRIZ from TRIZ
textbooks. Although this paper primarily focuses on integrating
LLMs with TRIZ, the proposed framework holds the potential to
be extended to other knowledge-based ideation methods,
including SCAMPER, Design Heuristics, and Design-by-
Analogy. Despite its current limitations, we invite interested
innovators to test and use AutoTRIZ at: https://www.autotriz.ai/.
REFERENCES
[1] Zwicky, F., 1967, "The Morphological Approach to Discovery, Invention, Research and Construction," New Methods of Thought and Procedure, A.G. Zwicky Fritz and Wilson, ed., Springer Berlin Heidelberg, Berlin, Heidelberg, pp. 273–297.
[2] White, C. K., Wood, K. L., and Jensen, D., 2012, "From Brainstorming to C-Sketch to Principles of Historical Innovators: Ideation Techniques to Enhance Student Creativity," J STEM Educ, 13(5).
[3] Camburn, B., Arlitt, R., Anderson, D., Sanaei, R., Raviselam, S., Jensen, D., and Wood, K. L., 2020, "Computer-Aided Mind Map Generation via Crowdsourcing and Machine Learning," Res Eng Des, 31, pp. 383–409.
[4] Altshuller, G. S., 1999, The Innovation Algorithm: TRIZ, Systematic Innovation and Technical Creativity, Technical Innovation Center Inc.
[5] Cascini, G., and Russo, D., 2007, "Computer-Aided Analysis of Patents and Search for TRIZ Contradictions," International Journal of Product Development, 4(1–2), pp. 52–67.
[6] Hall, S., Mollan, C., Pandey, V., and Mourelatos, Z., 2022, "TRIZ Mapping and Novelty Detection of Engineering Design Patents Using Machine Learning," International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, p. V006T06A044.
[7] Guarino, G., Samet, A., and Cavallucci, D., 2022, "PaTRIZ: A Framework for Mining TRIZ Contradictions in Patents," Expert Syst Appl, 207, p. 117942.
[8] Achiam, J., Adler, S., Agarwal, S., Ahmad, L., Akkaya, I., Aleman, F. L., Almeida, D., Altenschmidt, J., Altman, S., Anadkat, S., and others, 2023, "GPT-4 Technical Report," arXiv preprint arXiv:2303.08774.
[9] Touvron, H., Lavril, T., Izacard, G., Martinet, X., Lachaux, M.-A., Lacroix, T., Rozière, B., Goyal, N., Hambro, E., Azhar, F., and others, 2023, "Llama: Open and Efficient Foundation Language Models," arXiv preprint arXiv:2302.13971.
[10] Wei, J., Tay, Y., Bommasani, R., Raffel, C., Zoph, B., Borgeaud, S., Yogatama, D., Bosma, M., Zhou, D., Metzler, D., and others, 2022, "Emergent Abilities of Large Language Models," Transactions on Machine Learning Research.
[11] Singhal, K., Azizi, S., Tu, T., Mahdavi, S. S., Wei, J., Chung, H. W., Scales, N., Tanwani, A., Cole-Lewis, H., Pfohl, S., and others, 2023, "Large Language Models Encode Clinical Knowledge," Nature, pp. 1–9.
[12] Boiko, D. A., MacKnight, R., Kline, B., and Gomes, G., 2023, "Autonomous Chemical Research with Large Language Models," Nature, 624(7992), pp. 570–578.
[13] Romera-Paredes, B., Barekatain, M., Novikov, A., Balog, M., Kumar, M. P., Dupont, E., Ruiz, F. J. R., Ellenberg, J. S., Wang, P., Fawzi, O., and others, 2024, "Mathematical Discoveries from Program Search with Large Language Models," Nature, 625(7995), pp. 468–475.
[14] Makatura, L., Foshey, M., Wang, B., HähnLein, F., Ma, P., Deng, B., Tjandrasuwita, M., Spielberg, A., Owens, C. E., Chen, P. Y., and others, 2023, "How Can Large Language Models Help Humans in Design and Manufacturing?," arXiv preprint arXiv:2307.14377.
[15] Picard, C., Edwards, K. M., Doris, A. C., Man, B., Giannone, G., Alam, M. F., and Ahmed, F., 2023, "From Concept to Manufacturing: Evaluating Vision-Language Models for Engineering Design," arXiv preprint arXiv:2311.12668.
[16] Wang, B., Zuo, H., Cai, Z., Yin, Y., Childs, P., Sun, L., and Chen, L., 2023, "A Task-Decomposed AI-Aided Approach for Generative Conceptual Design," International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, p. V006T06A009.
[17] Han, Y., Nanda, G., and Moghaddam, M., 2023, "Attribute-Sentiment-Guided Summarization of User Opinions From Online Reviews," Journal of Mechanical Design, 145(4), p. 41402.
[18] Zhu, Q., Zhang, X., and Luo, J., 2023, "Biologically Inspired Design Concept Generation Using Generative Pre-Trained Transformers," Journal of Mechanical Design, 145(4), p. 41409.
[19] Zhu, Q., and Luo, J., 2023, "Generative Transformers for Design Concept Generation," J Comput Inf Sci Eng, 23(4), p. 41003.
[20] Spreafico, C., and Russo, D., 2016, "TRIZ Industrial Case Studies: A Critical Survey," Procedia CIRP, 39, pp. 51–56.
[21] Silverstein, D., DeCarlo, N., and Slocum, M., 2008, "How to Achieve Competitive Excellence Using TRIZ," NW: Taylor&Francis Group.
[22] Li, Z., Tate, D., Lane, C., and Adams, C., 2012, "A Framework for Automatic TRIZ Level of Invention Estimation of Patents Using Natural Language Processing, Knowledge-Transfer and Patent Citation Metrics," Computer-Aided Design, 44(10), pp. 987–1010.
[23] Berdyugina, D., and Cavallucci, D., 2023, "Automatic Extraction of Inventive Information out of Patent Texts in Support of Manufacturing Design Studies Using Natural Languages Processing," J Intell Manuf, 34(5), pp. 2495–2509.
[24] Luo, J., 2022, "Data-Driven Innovation: What Is It?," IEEE Trans Eng Manag, pp. 1–19.
[25] Jiang, S., Sarica, S., Song, B., Hu, J., and Luo, J., 2022, "Patent Data for Engineering Design: A Critical Review and Future Directions," J Comput Inf Sci Eng, 22(6), p. 060902.
[26] Nelson, M. D., Goenner, B. L., and Gale, B. K., 2023, "Utilizing ChatGPT to Assist CAD Design for Microfluidic Devices," Lab Chip, 23(17), pp. 3778–3784.
[27] Stella, F., Della Santina, C., and Hughes, J., 2023, "How Can LLMs Transform the Robotic Design Process?," Nat Mach Intell, pp. 1–4.
[28] Li, A., Wu, J., and Bigham, J. P., 2023, "Using LLMs to Customize the UI of Webpages," Adjunct Proceedings of the 36th Annual ACM Symposium on User Interface Software and Technology, pp. 1–3.
[29] Qiu, Y., and Jin, Y., 2023, "Document Understanding-Based Design Support: Application of Language Model for Design Knowledge Extraction," Journal of Mechanical Design, 145(12), p. 121401.
[30] Huang, Z., Quan, K., Chan, J., and MacNeil, S., 2023, "CausalMapper: Challenging Designers to Think in Systems with Causal Maps and Large Language Model," Proceedings of the 15th Conference on Creativity and Cognition, pp. 325–329.
[31] Ding, Z., Srinivasan, A., MacNeil, S., and Chan, J., 2023, "Fluid Transformers and Creative Analogies: Exploring Large Language Models' Capacity for Augmenting Cross-Domain Analogical Creativity," Proceedings of the 15th Conference on Creativity and Cognition, pp. 489–505.
[32] Ma, K., Grandi, D., McComb, C., and Goucher-Lambert, K., 2024, "Exploring the Capabilities of Large Language Models for Generating Diverse Design Solutions," arXiv preprint arXiv:2405.02345.
[33] Ma, K., Grandi, D., McComb, C., and Goucher-Lambert, K., 2023, "Conceptual Design Generation Using Large Language Models," International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, p. V006T06A021.
[34] Tang, X., Zong, Y., Zhao, Y., Cohan, A., and Gerstein, M., 2023, "Struc-Bench: Are Large Language Models Really Good at Generating Complex Structured Data?," arXiv preprint arXiv:2309.08963.
[35] Lewis, P., Perez, E., Piktus, A., Petroni, F., Karpukhin, V., Goyal, N., Küttler, H., Lewis, M., Yih, W., Rocktäschel, T., and others, 2020, "Retrieval-Augmented Generation for Knowledge-Intensive Nlp Tasks," Adv Neural Inf Process Syst, 33, pp. 9459–9474.
[36] Childs, P., 2013, Mechanical Design Engineering Handbook, Butterworth-Heinemann.
[37] Orloff, M. A., 2006, Inventive Thinking through TRIZ: A Practical Guide, Springer Berlin, Heidelberg.
[38] Orloff, M. A., 2012, Modern TRIZ: A Practical Course with Easytriz Technology, Springer Science & Business Media.
[39] Savransky, S. D., 2000, Engineering of Creativity: Introduction to TRIZ Methodology of Inventive Problem Solving, CRC Press.
[40] Silverstein, D., DeCarlo, N., and Slocum, M., 2007, Insourcing Innovation: How to Achieve Competitive Excellence Using TRIZ, CRC Press.
[41] Fey, V., and Rivin, E., 2005, Innovation on Demand: New Product Development Using TRIZ, Cambridge University Press.
[42] Wang, X., Wei, J., Schuurmans, D., Le, Q., Chi, E., Narang, S., Chowdhery, A., and Zhou, D., 2023, "Self-Consistency Improves Chain of Thought Reasoning in Language Models," The Eleventh International Conference on Learning Representations, Kigali, Rwanda.
[43] Zhang, X., Chen, F., Lu, C.-T., and Ramakrishnan, N., 2019, "Mitigating Uncertainty in Document Classification," arXiv preprint arXiv:1907.07590.
[44] Regenwetter, L., Srivastava, A., Gutfreund, D., and Ahmed, F., 2023, "Beyond Statistical Similarity: Rethinking Metrics for Deep Generative Models in Engineering Design," Computer-Aided Design, 165, p. 103609.
[45] Ilevbare, I. M., Probert, D., and Phaal, R., 2013, "A Review of TRIZ, and Its Benefits and Challenges in Practice," Technovation, 33(2–3), pp. 30–37.
[46] Yilmaz, S., Daly, S. R., Seifert, C. M., and Gonzalez, R., 2016, "Evidence-Based Design Heuristics for Idea Generation," Des Stud, 46, pp. 95–124.
[47] Daly, S., Yilmaz, S., Christian, J. L., Seifert, C. M., and Richard Gonzalez, 2012, "Design Heuristics in Engineering Concept Generation," Journal of Engineering Education, 101(4), pp. 602–628.
[48] Eberle, B., 1996, Scamper on: Games for Imagination Development, Prufrock Press Inc.
[49] IDEO., 2003, IDEO Method Cards: 51 Ways to Inspire Design, William Stout.
[50] Fu, K., Moreno, D., Yang, M., and Wood, K. L., 2014, "Bio-Inspired Design: An Overview Investigating Open Questions From the Broader Field of Design-by-Analogy," ASME Journal of Mechanical Design, 136(11, SI), p. 111102.
[51] Jiang, S., Hu, J., Wood, K. L., and Luo, J., 2022, "Data-Driven Design-By-Analogy: State-of-the-Art and Future Directions," ASME Journal of Mechanical Design, 144(2), p. 020801.
[52] Murphy, J., Fu, K., Otto, K., Yang, M., Jensen, D., and Wood, K., 2014, "Function Based Design-by-Analogy: A Functional Vector Approach to Analogical Search," ASME Journal of Mechanical Design, 136(10), p. 101102.
[53] Hey, J., Linsey, J., Agogino, A. M., and Wood, K. L., 2008, "Analogies and Metaphors in Creative Design," International Journal of Engineering Education, 24(2), pp. 283–294.
10
Copyright © 2024 by ASME
|
ai_researcher | 3 | Redefining_Creativity_in_the_Era_of_AI_Perspectives_of_Computer_Scientists_and_New_Media_Artists.pdf | GENERATIVE AI AND HUMAN CAPITAL
AUGMENTING MINDS OR AUTOMATING SKILLS? THE DIFFERENTIAL ROLE OF
HUMAN CAPITAL IN GENERATIVE AI’S IMPACT ON CREATIVE TASKS
Meiling Huang1, Ming Jin2, and Ning Li1
1School of Economics and Management, Tsinghua University
2School of Management, Wuhan University of Technology
Author Note
All data, analysis code, output, and research materials including the full list of items are
available at https://osf.io/ynhtu/?view_only=31642f7caac74082940eb1153d4e9e55.
Correspondence concerning this article should be addressed to Ning Li, Leadership and
Organization Management, Tsinghua University, Beijing, Beijing, China. Email:
[email protected]
Abstract
Generative AI is rapidly reshaping creative work, raising critical questions about its beneficiaries
and societal implications. This study challenges prevailing assumptions by exploring how
generative AI interacts with diverse forms of human capital in creative tasks. Through two
randomized controlled experiments in flash fiction writing and song composition, we uncover a
paradox: while AI democratizes access to creative tools, it simultaneously amplifies cognitive
inequalities. Our findings reveal that AI enhances general human capital (cognitive abilities and
education) by facilitating adaptability and idea integration but diminishes the value of domain-
specific expertise. We introduce a novel theoretical framework that merges human capital theory
with the automation-augmentation perspective, offering a nuanced understanding of human-AI
collaboration. This framework elucidates how AI shifts the locus of creative advantage from
specialized expertise to broader cognitive adaptability. Contrary to the notion of AI as a universal
equalizer, our work highlights its potential to exacerbate disparities in skill valuation, reshaping
workplace hierarchies and redefining the nature of creativity in the AI era. These insights
advance theories of human capital and automation while providing actionable guidance for
organizations navigating AI integration amidst workforce inequalities.
Generative AI is transforming creative industries, challenging traditional notions of
human expertise and reshaping the dynamics of work. This technology offers both promise and
peril: while democratizing access to creative tools, it also risks deepening cognitive and social
inequalities. Scholars have highlighted generative AI’s potential to augment human creativity in
areas as diverse as writing, music, and visual arts (Noy & Zhang, 2023; Zhou & Lee, 2024;
Nakavachara et al., 2024). Yet, others caution that such advancements may exacerbate disparities
in skill valuation, favoring those who can effectively leverage AI while marginalizing others
(Acemoglu et al., 2022; Doshi & Hauser, 2024; Lee & Chung, 2024; Eloundou et al., 2023). As
AI evolves from a mere tool to a co-creator, understanding who benefits most from this
transformation is increasingly critical—not only for individuals and organizations but also for
broader societal equity.
While initial evidence suggests that generative AI can enhance creative performance,
answers to this nuanced question remain elusive (Jia et al., 2023; Li et al., 2024). Some argue
that AI could reduce inequality by leveling the playing field, allowing lower-performing
individuals to close the performance gap (Eloundou et al., 2023; Noy & Zhang, 2023). Yet,
studies also suggest that much of the observed performance gain stems from participants relying
heavily on AI-generated outputs with minimal human input, resulting in automation rather than
meaningful human-AI collaboration (Noy & Zhang, 2023; Doshi & Hauser, 2024). This paradox
highlights the need to examine whether generative AI truly democratizes creativity or amplifies
disparities by favoring those already equipped with the skills to use it effectively.
These observations point to a broader tension inherent in AI’s role in the workplace.
Raisch and Krakowski (2021) describe the “automation-augmentation paradox,” where AI can
both replace human tasks through automation and simultaneously enhance human abilities
through augmentation. This framework highlights that generative AI has the potential not only to
automate processes— reducing the need for human involvement in certain elements of the
creative process —but also to augment human capabilities by enhancing creative and cognitive
functions. These dynamics complicate our understanding of AI’s broader impact, raising critical
questions about its beneficiaries and its potential to reshape skill hierarchies. This distinction is
critical for understanding how generative AI reshapes the dynamics of creativity and skill
valuation. Will AI level the playing field, or will it widen existing gaps by privileging those with
broader, more adaptable abilities? To explore these questions, we challenge conventional
wisdom and offer a novel framework that redefines human-AI collaboration in creative tasks.
Specifically, we integrate the augmentation and automation framework with human
capital theories to propose that generative AI has a dual and contrasting impact: it lowers
knowledge barriers by diminishing the value of domain-specific expertise, while simultaneously
increasing the importance of general human capital, such as cognitive adaptability and education
(Choudhury et al., 2020; Teodoridis et al., 2019). Building on this dual effect, we develop a novel
framework that differentiates between general human capital (broad, transferable skills like
problem-solving and learning capacity) and specific human capital (deep, domain-specific
expertise unique to particular tasks; Rietzschel et al., 2007; Teodoridis et al., 2019). This
framework sheds light on how generative AI interacts unevenly with these forms of human
capital, revealing its potential to both empower and marginalize. Rather than uniformly
enhancing productivity, we suggest that generative AI disproportionately benefits individuals
with adaptable, transferable skills, while devaluing specialized expertise. By providing this
nuanced lens, our study challenges assumptions and investigates whether AI will serve as a force
for democratizing creative work or as a catalyst for reinforcing inequalities in skill valuation.
To empirically test these ideas, we conducted two randomized controlled experiments
examining how generative AI interacts with human capital in creative tasks. The first experiment
focused on flash fiction writing, a task accessible to a broad range of individuals. Participants’
general human capital (e.g., IQ and education level) and specific human capital (e.g., writing
skills) were assessed, and they were randomly assigned to either work independently or
collaborate with AI. To ensure ecological validity, members of the public evaluated the flash
fiction, providing real-world audience judgments (Berg, 2019; Yin et al., 2024).
The second experiment extended this investigation to song lyric composition (Nelson et
al., 2023), a more specialized creative domain. Participants—ranging from novices to
experienced lyricists—were provided with pre-composed musical pieces and tasked with writing
lyrics tailored to their assigned composition. Once completed, the songs were professionally
recorded with trained singers. Public evaluations of the finished songs were again used to reflect
authentic consumer responses (Berg, 2016, 2022), allowing us to capture the nuanced ways AI
impacts creativity across varying levels of human capital.
Through these experiments, we reveal how generative AI's impact on creativity depends
on the interplay between general and specific human capital. Our findings challenge the
assumption that AI universally enhances productivity, showing that its benefits are
disproportionately influenced by individuals’ human capital profiles. Rather than leveling the
creative playing field, AI enhances the value of general human capital—such as cognitive
adaptability—while diminishing the relative importance of specialized expertise. These insights
highlight the duality of AI’s role: it democratizes access to creative tools but risks widening
disparities based on cognitive adaptability. By examining these dynamics, our work advances
understanding of human-AI collaboration, offering critical guidance for organizations and
policymakers seeking to balance innovation with equity in the AI era.
THEORETICAL DEVELOPMENT
Generative AI and Creative Performance
Generative AI, a new generation of artificial intelligence that creates new content and
solutions across various domains, has rapidly become a pivotal tool for enhancing creativity
among knowledge workers (Dell’Acqua et al., 2023; Lee & Chung, 2024). A growing body of
research has demonstrated its capacity to augment human performance in diverse tasks, ranging
from text generation and coding assistance to complex creative endeavors such as storytelling,
music composition, and visual art creation. Studies by Huang et al. (2021) and Brynjolfsson et al.
(2023) have shown that generative AI can significantly increase efficiency and creativity by
automating routine tasks and offering novel ideas that humans might not conceive independently.
However, while the general consensus is that generative AI improves performance, the
question of who benefits most from this technology remains underexplored. Early findings, such
as those from Park et al. (2023) and Noy and Zhang (2023), suggest that AI can reduce
performance disparities by offering significant support to lower-performing individuals. Yet,
these studies often focus on relatively simple tasks requiring minimal human input, where AI
largely operates autonomously. Noy and Zhang (2023), for instance, found that AI compresses
performance variance by boosting lower performers but also observed limited human-AI
interaction, as many participants submitted AI-generated outputs with minimal editing. This
disparity in benefit may also reflect a ceiling effect, where higher-performing individuals
experience limited incremental gains relative to their lower-performing counterparts.
Consequently, these findings may not fully capture the complexities of more collaborative tasks,
where deeper human-AI collaboration is required.
A Contingent Approach: Integrating Human Capital Theory
To understand the nuanced effects of generative AI on performance, it is critical to
develop a contingent approach that accounts for individual differences in human capital (Becker,
1962; Rosen, 1976). Human capital theory, widely established in organizational behavior and
economics, provides a useful framework for understanding how individuals’ abilities and
knowledge influence their interaction with AI (Lepak & Snell, 1999; Carpenter et al., 2001;
Ployhart et al., 2011).
Within this theory, human capital is typically categorized into two distinct types: general
human capital and specific human capital (Coff, 1997). General human capital represents
cognitive abilities and formal education that equip individuals with versatile, transferable skills
(Lepak & Snell, 2002; Ritchie & Tucker-Drob, 2018). These skills enable people to quickly learn
and adapt across various tasks and industries. Importantly, general human capital fosters
problem-solving, critical thinking, and the ability to work with complex information (Crook et
al., 2011; Ritchie & Tucker-Drob, 2018). Because these cognitive skills are broad in nature,
individuals with higher levels of general human capital are capable of navigating diverse
environments and performing a wide range of tasks.
On the other hand, specific human capital encompasses specialized knowledge and
expertise that is narrowly focused on particular tasks, industries, or domains (Baer, 2015; Plucker
& Beghetto, 2004; Tu et al., 2020). This type of capital reflects deep, technical proficiency in a
specific area, allowing individuals to excel in highly specialized roles that demand extensive
training and experience.
In the context of AI, the distinction between general and specific human capital becomes
even more salient. While generative AI democratizes access to knowledge and facilitates the
completion of tasks that once required specialized expertise, it also interacts with human capital
in ways that can either amplify or diminish the relative value of these skills (Doshi & Hauser,
2024; Zhu & Zou, 2024). The contingent approach suggests that the benefits of AI are not
uniformly distributed but are instead influenced by the type of human capital an individual
possesses.
The Augmentation-Automation Perspective on Generative AI and Human Capital
Generative AI’s unique features—its lack of agency and its expansive knowledge span—
make it both a powerful tool and a complex variable in human-AI collaboration (Rouse, 2020;
Gilardi et al., 2023). These features interact differently with general and specific human capital,
leading to distinct outcomes based on the type of human capital individuals possess (Pyatt &
Becker, 1966; Plucker & Beghetto, 2004). The augmentation-automation framework provides a
useful lens to understand this interaction, illustrating how AI either complements or substitutes
human labor depending on whether individuals rely more on general or specific human capital
(Raisch & Krakowski, 2021).
Generative AI’s lack of agency requires human input to produce meaningful outputs,
making it heavily reliant on the cognitive and evaluative capacities of users (Boussioux et al.,
2024; Wang et al., 2023). This reliance means that the effectiveness of AI in creative, complex
tasks is closely tied to the user’s general human capital (Choudhury et al., 2020; Mariz-Perez et
al., 2012). Individuals with high levels of general human capital—those equipped with cognitive
versatility, critical thinking, and broad educational backgrounds—are better positioned to extract
value from AI. They can assess, refine, and apply AI-generated content within complex processes
such as strategic decision-making, design, and creative work (Agarwal et al., 2023; Hui et al.,
2024; Rafner et al., 2023). Because these tasks require judgment, adaptation, and the integration
of diverse information, AI acts as a powerful amplifier for individuals with strong general human
capital. The lack of agency in AI necessitates that human oversight remains essential, meaning
that those who possess broader cognitive skills will be increasingly instrumental in guiding AI
towards producing meaningful, innovative outputs. This dynamic amplifies the value of general
human capital, making it indispensable in an AI-augmented workplace.
At the same time, generative AI’s expansive knowledge span allows it to access and
apply information across a vast array of domains, fundamentally altering how tasks that
traditionally relied on specific human capital are performed (Acemoglu et al., 2022; Anthony et
al., 2023). In creative work, domain-specific expertise is typically acquired through years of
experience, learning, and deep familiarity with the nuances of a particular field (Amabile, 2012;
Lifshitz-Assaf, 2018). This expertise allows individuals to produce creative outputs informed by
their specialized knowledge, which is often tied to domain-specific memory and learned
associations (Baer, 2015; Bruns, 2013; Ward, 2008). However, generative AI’s ability to
synthesize nearly all human knowledge and understand complex connections across fields
reduces the need for such narrowly focused expertise (Anthony et al., 2023; Li et al., 2024). AI’s
training across vast datasets allows it to not only access deep knowledge in specific areas but
also combine insights from multiple domains, enabling it to perform creative tasks that were
once the exclusive domain of highly specialized experts.
By integrating these two key features of generative AI—its need for human oversight and
its expansive knowledge span—with the augmentation-automation framework (Raisch &
Krakowski, 2021), we can better understand how AI differentially interacts with general and
specific human capital (Raisch & Krakowski, 2021; Einola & Khoreva, 2023; Lee & Chung,
2024). From the augmentation perspective, generative AI enhances the capabilities of individuals
with general human capital. AI tools increase cognitive and creative productivity by providing
vast resources for exploration, iteration, and decision-making (Luo et al., 2021; Einola &
Khoreva, 2023; Agarwal et al., 2023). Individuals with broad, adaptable skills are better
equipped to harness these tools, guiding AI in ways that enhance performance on complex, non-
routine creative tasks (Meincke et al., 2024; Wang et al., 2023). In this context, the demand for
general human capital rises, as the role of human oversight and creative input remains critical in
realizing AI’s potential.
From the automation perspective, AI’s expansive knowledge span enables it to perform
creative tasks traditionally dominated by specific human capital, reducing the economic value of
specialized knowledge (Einola & Khoreva, 2023). As AI efficiently generates creative outputs by
synthesizing knowledge across domains, the demand for deep, domain-specific expertise among
experts declines, while novices may find new opportunities to engage in creative processes
(Dell’Acqua et al., 2023). The more AI automates creative tasks that rely on established
knowledge connections, the less critical specialized human capital becomes in driving creative
performance. This shift poses challenges for workers whose roles are defined by their domain-
specific expertise, as AI’s capacity to replicate or approximate these tasks diminishes the relative
value of such expertise while simultaneously opening pathways for novices.
Building on this foundation, we now turn to the development of specific hypotheses that
stem from these key mechanisms and relationships.
HYPOTHESES
We first posit that the use of generative AI enhances individual creativity, a baseline
assumption supported by prior research showing AI’s ability to boost productivity and creative
output. Studies indicate that AI can augment creativity by generating new ideas, offering
alternative solutions, and streamlining iteration processes in tasks like writing and consulting
(Brynjolfsson et al., 2023; Doshi & Hauser, 2024). These tasks benefit from AI’s strengths in
synthesizing information, producing coherent narratives, and offering stylistic variations.
However, in highly creative tasks—such as flash fiction and songwriting, where brevity,
originality, and rapid shifts in focus are key—the impact of AI is less straightforward (Lee &
Chung, 2024; Zhou & Lee, 2024). These tasks often demand novel ideas, emotional depth, and
unpredictable shifts, traditionally seen as the realm of human intuition, raising questions about
AI’s role in enhancing creativity in such contexts.
Nevertheless, several core mechanisms suggest that AI could still improve creative
performance in these highly dynamic tasks. First, AI’s capacity to access and synthesize vast
knowledge across genres, themes, and styles provides a wealth of inspiration, allowing users to
explore novel ideas that might not be immediately apparent through human creativity alone
(Marrone et al., 2024; Meincke et al., 2024). This extensive knowledge base enables individuals
to combine concepts in innovative ways, potentially sparking fresh and unique creative outputs.
Moreover, AI facilitates rapid iteration, allowing people to experiment with multiple creative
directions (Peng et al., 2023; Nakavachara et al., 2024). This iterative process increases the
likelihood of refining ideas and enhancing the final creative product. Therefore,
Hypothesis 1. The use of generative AI enhances individual creativity.
Building on the first hypothesis, which posits that generative AI enhances individual
creativity, we now consider how general human capital augments this relationship. The core of
this argument lies in how individuals’ cognitive abilities and education level interact with AI’s
capabilities, particularly in creative tasks, where novelty and adaptability are key (Harvey &
Berry, 2023; Doshi & Hauser, 2024; Lee & Chung, 2024).
Generative AI offers a vast array of ideas, but it lacks the ability to independently direct
or refine them—relying instead on humans to guide the process (Acemoglu et al., 2022; Noy &
Zhang, 2023). This is where general human capital comes into play. Individuals with high
cognitive flexibility can more effectively interpret and integrate AI-generated content, drawing
from a range of inputs and integrating them in unique ways (Tu et al., 2020; Meincke et al.,
2024). In tasks that demand originality, those with higher education level are better equipped to
navigate and synthesize AI’s diverse offerings. For instance, in songwriting, an individual with
broad knowledge might use AI-generated lyrics from various musical genres and styles, merging
them into something fresh and innovative that goes beyond what AI alone could produce.
Additionally, the human role in providing oversight becomes critical. While AI can
suggest numerous creative paths, individuals must exercise judgment to evaluate and refine these
ideas (Anthony et al., 2023; Peng et al., 2023). Here, the cognitive strength associated with
general human capital enables individuals to make strategic decisions about which AI-generated
ideas to pursue (Boussioux et al., 2024). For example, in fiction writing, someone with high
cognitive ability may discern which AI-generated plot elements will best enhance the emotional
resonance or thematic complexity of the story, resulting in a more compelling final product.
Furthermore, AI’s ability to draw on a vast expanse of knowledge across fields is most
effectively utilized by individuals with a similarly broad base of knowledge (Jia et al., 2023; Noy
& Zhang, 2023). Those with higher levels of general human capital can connect AI-generated
content to a variety of contexts, pushing creative boundaries further (Mariz-Perez et al., 2012;
Dell’Acqua et al., 2023). In songwriting, for example, an individual might blend poetic,
historical, and contemporary influences into their lyrics, creating something more original than
either they or the AI could achieve alone.
Taken together, individuals with higher levels of general human capital are not only better
at guiding AI but also at leveraging its wide-ranging capabilities to produce more innovative and
impactful creative outputs (Huang et al., 2024; Rafner et al., 2023). Their ability to adapt,
evaluate, and synthesize AI-generated content enhances the creative process, making the
relationship between AI use and creativity particularly strong for those with greater cognitive
flexibility and educational background. Therefore,
Hypothesis 2. General human capital positively moderates the relationship between the
use of generative AI and creativity, such that the positive relationship between AI-use and
creativity will be stronger when individuals’ general human capital is higher (H2a:
education; H2b: IQ).
In contrast to the synergistic interaction between AI and general human capital,
generative AI may diminish the importance of specific human capital in creative tasks (Baer,
2015; Dane, 2010; Tu et al., 2020). Specific human capital, built through years of domain-
specific learning and expertise, plays a vital role in producing creative outputs informed by deep
knowledge (Amabile, 2012; Bruns, 2013; Teodoridis et al., 2019). However, AI’s expansive
knowledge span, coupled with its ability to synthesize information from diverse fields, reduces
the need for narrowly focused expertise (Acemoglu & Restrepo, 2022; Eloundou et al., 2023).
This shift challenges the value of specific human capital, particularly in tasks such as fiction
writing and songwriting, where AI can now perform functions once requiring deep, domain-
specific skills.
A key mechanism is AI’s ability to automate routine elements of creative tasks. Much of
specific human capital involves knowledge internalized through years of experience, such as
understanding narrative structures or lyrical patterns (Zhou & Lee, 2024). For example, a
professional lyricist develops an intricate understanding of lyrical structure, genre conventions,
and thematic depth over time, applying these learned associations to produce high-quality
compositions. However, generative AI, trained on vast knowledge corpus, can replicate these
established techniques, reducing the need for domain-specific human intervention. AI’s
proficiency in producing creative outputs that follow conventional structures undermines the
unique value that specific human capital once offered, especially in formulaic aspects of
creativity.
Additionally, AI’s ability to draw from a wide array of knowledge domains goes beyond
the more constrained scope of specific human capital (Yin et al., 2024; Zhou & Lee, 2024).
While domain-specific experts focus on the nuances of their particular field, AI can integrate
diverse insights across disciplines, broadening creative possibilities (Luo et al., 2021; Lee &
Chung, 2024). The fixed nature of specific human capital, often referred to as the curse of
knowledge (Camerer et al., 1989), may limit flexibility in exploring ideas beyond familiar
frameworks. For example, experts deeply rooted in their field may overlook novel ideas that lie
outside their established knowledge base, especially when AI suggests unconventional
combinations (Dane, 2010; Miller et al., 2006; Ward, 2008; Schillebeeckx et al., 2019). AI’s lack
of agency, requiring human oversight, further complicates this interaction, as specialists may rely
too heavily on their own expertise, missing out on creative possibilities that don’t align with their
domain-specific knowledge (Amabile, 1985; Lawless & Kulikowich, 2006; Rietzschel et al.,
2007).
Furthermore, the distinctiveness of specific skills, often developed through extensive
training (Tu et al., 2020), becomes less critical when AI can replicate them at scale (Huang et al.,
2024). The value of deep expertise, once a significant advantage in creative fields, is diminished
when AI can produce outputs that rival or exceed the quality of those created by human experts
(Doshi & Hauser, 2024; Zhou & Lee, 2024). AI’s ability to emulate specific techniques and
structures reduces the competitive edge of those with domain-specific skills, as the unique
contributions of such expertise are no longer as essential to the creative process (Harvey & Kou,
2013; Agarwal et al., 2023).
As AI automates routine tasks, integrates diverse knowledge, and offers creative solutions
beyond the confines of specific expertise, the traditional advantages of specific human capital are
diminished (Puranam, 2021; Marrone et al., 2024).
Hypothesis 3. Specific human capital negatively moderates the relationship between the
use of generative AI and creativity, such that the positive relationship between AI-use and
creativity will be weaker when the individuals’ specific human capital is higher.
OVERVIEW OF STUDIES
We conducted two experiments to test the effects of generative AI on creativity and the
moderating roles of general and specific human capital. Study 1 focused on flash-fiction writing,
while Study 2 extended this investigation to a lyric-writing task, addressing the limitations of the
first study and examining the interaction effects between AI use and human capital on creativity
(see Figure 1 and Figure 2 for detailed experiment designs). In both studies, participants were
randomly assigned to either use generative AI or complete the task independently. The AI tool
was deployed via a user-friendly, dialogue-based interface built using OpenAI’s API (GPT-4),
allowing participants to interact seamlessly with the system (see Figure 3 for the interface of the
used AI tool). By employing distinct creative contexts across the two studies, we aimed to
capture a broader understanding of how AI influences creative output and how this relationship is
moderated by individual differences in human capital.1
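For illustration, a minimal sketch of such a dialogue-based wrapper around the GPT-4 API is shown below. The system prompt, model identifier, and function names are our own assumptions for this sketch and do not reproduce the authors' actual tool, interface, or prompts.

```python
# Minimal sketch of a dialogue-based GPT-4 assistant loop (illustrative only;
# the system prompt, model name, and helper names are assumptions, not the
# authors' actual tool).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def chat_session(system_prompt: str = "You are a creative-writing assistant.") -> None:
    """Run a simple multi-turn console dialogue, keeping the full message history."""
    history = [{"role": "system", "content": system_prompt}]
    while True:
        user_msg = input("You: ").strip()
        if not user_msg:
            break
        history.append({"role": "user", "content": user_msg})
        response = client.chat.completions.create(model="gpt-4", messages=history)
        reply = response.choices[0].message.content
        history.append({"role": "assistant", "content": reply})
        print("AI:", reply)


if __name__ == "__main__":
    chat_session()
```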
EXPERIMENT 1
Samples and Procedures
We recruited participants with a shared interest in story creation, through various
channels including social media and online interest-based groups, ensuring a diverse sample
comprising university students and professionals across various industries in China. Participants
signed up for the experiment and visited our behavioral lab at scheduled times. A total of 162 individuals
participated in the first experiment, each compensated with 30 CNY. Of the final sample, 111
(68.52%) were female, with an average age of 26.27 years (SD = 5.62). The majority, 154
participants (95.06%), held at least a bachelor’s degree. Among them, 101 were college students,
while the remaining participants worked in different sectors such as technology (8.02%) and
education (6.79%).
The experiment was conducted in three stages. First, participants completed an IQ test
and provided demographic information. Second, they were randomly assigned to one of two
conditions: one group used generative AI (GPT-4) to compose a flash-fiction story of under 500
Chinese characters, while the other group completed the task without AI assistance. Both groups
1 This study is part of a broader research project titled “Human Interactions with Artificial Intelligence in
Organizations”, which received IRB approval. All data, analysis code, output, and research materials including the
full list of items are available at https://osf.io/ynhtu/?view_only=31642f7caac74082940eb1153d4e9e55. All data
were analyzed using STATA MP Version 17.0.
were informed of basic fiction writing techniques and requirements. For the AI-assisted group,
information about effective prompt crafting was additionally provided to ensure all participants
could use AI. After the experiment, participants completed a post-experiment survey to capture
their subjective perceptions during the creative process and received their compensations (see
online Appendix A for measures used in survey).
Measures
Creativity measure. We measured creativity using the consensual assessment technique
(Amabile et al., 1996; Amabile & Pratt, 2016), following Berg’s (2016, 2019) approach. We
recruited an online panel of raters who evaluated the submitted stories on two dimensions: novelty
(ICC₂ = .90–.91) and usefulness (ICC₂ = .87–.89)2. Novelty was defined as the extent to which
the story presented novel and distinctive ideas, reflecting originality and uniqueness. Usefulness
was defined as the degree to which the story provoked thought and conveyed meaningful insights
or lessons, recognizing that its value may vary based on the context of the task.
To assess the quality of each story, we included an overall enjoyment rating from raters
(ICC₂ = .89–.91) as an additional dimension. This measure complements the specific dimensions
of novelty and usefulness, providing a broader perspective on the stories’ impact. Overall
enjoyment serves as a key indicator of how well the stories resonate with audiences. To ensure
consistent assessments, raters participated in online training and received standardized
definitions and criteria (see Appendix A). Ratings were made on a 10-point scale (1 = Extremely
low, 10 = Extremely high). To mitigate potential bias perceptions against AI (Yin et al., 2024),
raters indicated whether they believed each story involved generative AI (1 = Yes, 0 = No).
2 When multiple groups of raters were used, the range of ICC2’s is shown.
Attention checks were randomly embedded; data from two raters were excluded due to failure in
these checks. Each story was evaluated by an average of 43.87 raters (SD = 1.77).
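For readers unfamiliar with the reliability index reported here, the sketch below computes an average-measures, two-way random-effects intraclass correlation, ICC(2,k), from a complete stories-by-raters rating matrix. The data and variable names are illustrative only, not the study's ratings.

```python
# Illustrative ICC(2,k) computation (two-way random effects, average measures)
# from a complete targets-by-raters rating matrix; the demo data are made up.
import numpy as np


def icc2k(ratings: np.ndarray) -> float:
    """ratings: shape (n_targets, k_raters), no missing values."""
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)   # per-story means
    col_means = ratings.mean(axis=0)   # per-rater means

    ss_rows = k * np.sum((row_means - grand) ** 2)
    ss_cols = n * np.sum((col_means - grand) ** 2)
    ss_total = np.sum((ratings - grand) ** 2)
    ss_error = ss_total - ss_rows - ss_cols

    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_error = ss_error / ((n - 1) * (k - 1))

    # Shrout & Fleiss ICC(2,k): reliability of the k-rater average
    return (ms_rows - ms_error) / (ms_rows + (ms_cols - ms_error) / n)


rng = np.random.default_rng(0)
demo = rng.integers(1, 11, size=(20, 5)).astype(float)  # 20 stories, 5 raters
print(round(icc2k(demo), 3))
```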
General human capital. Participants’ general human capital was assessed through their
educational attainment and IQ test scores (Pyatt & Becker, 1966; Crook et al., 2011; Mariz-Perez
et al., 2012), both collected during the initial phase of the experiment. Participants first reported
their highest level of education (1 = junior high school and below, 6 = doctoral degree). They
took an 18-item version of the Raven Progressive Matrices test, which consisted of reasoning
questions and had a 10-minute time limit (Sefcek et al., 2016).
Specific human capital. To measure the participants’ specific human capital in fiction
writing, we utilized self-reported assessments of their literary writing skills. This was measured
with two items: “How would you rate your literary writing ability?” (1 = Extremely poor, 5 =
Extremely good) and “Compared to your peers, how would you rate your literary writing
ability?” (1 = Significantly worse than most peers, 5 = Significantly better than most peers). The
average of these two items was used to represent participants’ overall literary writing ability
(Cronbach’s α = .77).
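As a point of reference, the two-item reliability reported above corresponds to the standard Cronbach's alpha formula; a minimal sketch with simulated responses is given below (the data are made up for illustration).

```python
# Illustrative Cronbach's alpha for a small item set (simulated responses).
import numpy as np


def cronbach_alpha(items: np.ndarray) -> float:
    """items: shape (n_respondents, n_items)."""
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var_sum / total_var)


rng = np.random.default_rng(1)
trait = rng.normal(3, 1, size=(100, 1))                       # latent writing ability
items = np.hstack([trait + rng.normal(0, 0.7, size=(100, 1))  # two noisy indicators
                   for _ in range(2)])
print(round(cronbach_alpha(items), 2))
```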
Control variables. We controlled for several variables to ensure the robustness of our
findings. First, we included demographic factors—age and gender. To account for personality
traits, we controlled for openness, measured using Saucier’s (1994) brief Big Five scale (8 items;
e.g., “imaginative and creative”; Cronbach’s α = .83). We controlled for the frequency of AI
usage (0 = never, 5 = daily) because frequent AI users may be more proficient with AI tools,
potentially enhancing creative outcomes due to their experience rather than the experimental
conditions. Separately, we controlled for participants’ mind perception of AI, measured with an
adapted scale from Yam et al. (2021; 8 items; e.g., “AI can think,” “AI can plan”; Cronbach’s α
= .81), as individuals who perceive AI as more cognitively capable might interact differently
with AI during the task, influencing their reliance on and utilization of the technology. To address
potential biases related to participant motivation, we coded their motivation for participation (0 =
monetary compensation, 1 = other reasons such as interest in AI or fiction). Finally, to account
for potential evaluation bias toward AI (Yin et al., 2024), we controlled for the AI identification
ratio, calculated as the proportion of raters who believed AI was used in creating each story.
Results
We employed Ordinary Least Squares (OLS) regression models to test our hypotheses.
Table 1 presents the descriptive statistics and correlations among the study variables, and Table 2
provides the detailed regression results.
In support of Hypothesis 1, AI use was positively and significantly related to novelty (b
= 0.403, p = .035), usefulness (b = 0.352, p = .032), and overall impression (b = 0.370, p = .015).
Hypothesis 2 posits that general human capital amplifies the effect of AI use on creativity.
The interaction between AI use and education was found to be positive and significant for
novelty (b = 0.480, p = .015) and approaching significance for overall impression (b = 0.309, p
= .064), indicating that the positive effect of AI use on creativity is stronger for individuals with
higher education levels. In contrast, the moderation effect on usefulness was positive but not
significant (b = 0.295, p = .118). Simple slope analysis revealed that, for individuals with high
education, the positive effect of AI use on novelty was significant (b = 0.774, t(149) = 3.25, p
= .001). Conversely, this effect was not significant for those with low education (b = -0.010,
t(149) = -0.04, p = .968), as illustrated in Figure 4. A similar pattern emerged from the simple
slope analysis for the usefulness and overall impression dimensions, as shown in the figures in
the Appendix F. These results partially support Hypothesis 2a.
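The analyses above were run in Stata; as a language-agnostic illustration of the moderation test and the ±1 SD simple-slope probe described here, a minimal Python/statsmodels sketch with simulated data and hypothetical variable names (and without the control variables) might look as follows.

```python
# Illustrative moderated OLS and +/-1 SD simple slopes (hypothetical column
# names and simulated data; the paper's actual models were estimated in Stata
# with the full set of controls).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 160
df = pd.DataFrame({
    "ai_use": rng.integers(0, 2, n),   # 0 = solo condition, 1 = AI-assisted
    "edu_c": rng.normal(0, 0.8, n),    # mean-centered education score
})
df["novelty"] = (5 + 0.4 * df["ai_use"] + 0.1 * df["edu_c"]
                 + 0.5 * df["ai_use"] * df["edu_c"] + rng.normal(0, 1, n))

model = smf.ols("novelty ~ ai_use * edu_c", data=df).fit()
b_ai, b_int = model.params["ai_use"], model.params["ai_use:edu_c"]
sd = df["edu_c"].std()
print(f"AI-use slope at +1 SD education: {b_ai + b_int * sd:.3f}")
print(f"AI-use slope at -1 SD education: {b_ai - b_int * sd:.3f}")
```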
Similarly, the interaction between AI use and IQ was positive and significant for both
novelty (b = 0.193, p = .008) and overall impression (b = 0.140, p = .041), and approaching
significance for usefulness (b = 0.106, p = .060). These findings suggest that the positive effect
of AI use on creativity is stronger for individuals with higher IQ levels, supporting Hypothesis
2b. Simple slope analysis confirmed a significant positive effect on novelty when IQ was high (b
= 0.869, t(149) = 3.04, p = .003), while the effect was not significant when IQ was low (b = -
0.147, t(149) = -0.62, p = .537), as shown in Figure 5. Similar patterns were observed for the
usefulness and overall impression dimensions, with detailed results available in the online
Appendix F.
Hypothesis 3 posits that specific human capital weakens the relationship between AI use
and creativity. The interaction between AI use and specific human capital was negative and
significant for usefulness (b = -0.600, p = .003) and overall impression (b = -0.404, p = .047),
suggesting that the positive effect of AI use on creativity is diminished among individuals with
higher levels of specific human capital. Although the moderation effect on novelty was negative,
it was not significant (b = -0.341, p = .169). Further analysis revealed that for the usefulness
dimension, the simple slope was positive and significant when writing skills were low (b =
0.706, t(149) = 3.49, p = .001), but not significant when writing skills were high (b = -0.065,
t(149) = -0.31, p = .758), as shown in Figure 6. Similar patterns were observed for novelty and
overall impression. These findings collectively suggest partial support for Hypothesis 3.
Supplementary Analysis
Building on our main hypotheses, we conducted additional analyses to deepen our
understanding of the effects of AI on creativity. First, we investigated whether individuals with
varying levels of general and specific human capital interacted with AI differently in terms of
style or mode. We conducted mean split analyses to categorize participants into high and low
groups for both specific and general human capital. Specific human capital, measured by self-
reported writing skills, was split at the mean score of 3.26 (SD = 0.64, Nlow = 54, Nhigh = 57).
Independent samples t-tests revealed no significant differences between these groups in terms of
prompt length (t(109) = 0.923, p = .358) and the number of interaction rounds with the AI (t(109)
= 1.075, p = .285). Participants were divided into high and low education groups based on a
mean of 4.59 (SD = 0.82, Nlow = 52, Nhigh = 59). T-tests showed no significant differences
between high and low education groups regarding prompt length (t(109) = -1.403, p = .164) and
interaction rounds (t(109) = 0.897, p = .386). Similarly, for IQ, the mean split was at 15.56 (SD =
2.63, Nlow = 62, Nhigh = 49). T-tests indicated no significant differences in prompt length (t(109) =
-0.194, p = .846) or interaction rounds (t(109) = 1.05, p = .916) between high and low IQ groups.
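A minimal sketch of the mean-split comparison reported in this paragraph is shown below; the column names are hypothetical and the data are simulated.

```python
# Illustrative mean-split comparison of AI-interaction behavior
# (hypothetical column names; simulated data).
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(3)
df = pd.DataFrame({
    "writing_skill": rng.normal(3.26, 0.64, 111),   # self-reported skill (simulated)
    "prompt_length": rng.normal(120, 40, 111),      # characters per prompt (simulated)
})
high = df["writing_skill"] >= df["writing_skill"].mean()   # mean split
t, p = stats.ttest_ind(df.loc[high, "prompt_length"],
                       df.loc[~high, "prompt_length"], equal_var=True)
print(f"t = {t:.3f}, p = {p:.3f}")
```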
Next, considering prior research suggesting that AI use may lead to increased similarity
in outputs, we employed textual analysis techniques (embedding) to assess the similarity of the
creative products. Interestingly, our findings showed no significant increase in similarity among
AI-assisted outputs compared to those created independently, indicating that AI use in our study
did not homogenize creative work.
Third, we explored whether AI use impacted participants’ cognitive perceptions of their
creativity. Results revealed that using AI significantly reduced participants’ psychological
ownership of their creative products (b = -1.239, p = .001).
Lastly, to ensure the robustness of our main results, we conducted an omnibus test by
including all interactions in the same regression model. The findings remained highly consistent
with our initial analyses, and in several cases, the interaction effects became stronger. Together,
these supplementary analyses contribute to a more comprehensive understanding of the nuanced
effects of AI on creativity, supporting the robustness of our main findings. Additional details are
provided in the online Appendix D.
Experiment 1 Discussion
Experiment 1 demonstrated that generative AI significantly enhances creativity in flash-
fiction writing, positively impacting novelty, usefulness, and overall impression. Notably,
individuals with higher general human capital benefited more from AI, while those with higher
specific human capital experienced less benefit.
Despite testing all hypotheses, several limitations warrant consideration. First, the general
nature of writing may dilute the unique impact of specific human capital, potentially explaining
some insignificant moderation effects. Second, our assessment of specific human capital relied
on broad self-reports of writing ability, which may not capture essential skills for novel writing,
such as story development and emotional expression, leading to possible response bias. Third,
participants completed tasks within a constrained timeframe in the lab, which may not reflect the
extended periods typical of real-world creative processes.
To address these limitations, our second study involves a lyric-writing task with both
expert and novice lyricists, allowing for a clearer operationalization of specific human capital
based on lyric-writing publication history. This study also spans one week, providing participants
ample time to engage deeply with the creative process, thereby enhancing ecological validity and
better mimicking real-world work conditions.
EXPERIMENT 2
Sample and Procedures
In Experiment 2, participants were recruited from universities, companies, and online
music platforms, ensuring a diverse range of lyric-writing skills, including individuals with prior
writing and publication experience. To incentivize participation and engagement, each
participant was promised a professionally recorded song composition based on their own lyric
creation, in addition to receiving 100 CNY upon completing all stages of the experiment.
The participants were tasked with writing song lyrics, a key component of a song
alongside vocal melodies and instrumental accompaniments. To support this task, we provided
each participant with both a vocal melody and instrumental accompaniment. We prepared ten
royalty-free accompaniment tracks in various styles, and two professional composers created
vocal melodies for five tracks each, resulting in ten complete song demos. Each demo consisted
of an accompaniment track paired with a vocal melody performed using “la-la-la” syllables (see
the online Appendix B for delivered materials).
After registration and an online IQ test (Nstart = 685), participants were randomly assigned
to either an AI-assisted group or a control group that composed lyrics without AI support. Both
groups received basic instructions on lyric-writing techniques, while the AI-assisted group
received additional guidance on using generative AI (Ninformation = 611). Participants were
assigned a demo file along with a simplified musical score of the vocal melody, which included
annotations for lyric breaks and suggested word counts. This design aimed to engage participants
effectively, regardless of their experience level.
The experiment was conducted online, allowing participants one week to complete the
task at their own pace, closely mimicking typical lyrics-writing processes. Throughout this
period, participants could listen to the provided melody and refer to the musical score as they
composed their lyrics, facilitating a structured and supportive creative environment. After
completing their initial assignment, participants were encouraged to write lyrics for the
remaining nine demos. They then submitted their lyrics and completed a follow-up questionnaire
about their experiences (Nsubmission = 348).
Sample attrition occurred primarily during the lyrics-creation stage, which had a dropout
rate of 43.04% despite efforts to simplify the task. Among the submissions, 329 works from 299
participants were deemed suitable for song recording by professional producers and singers. Of
these participants, 289 composed lyrics for only one song (96.66%). The final sample of 299
participants had an average age of 24.10 years (SD = 6.06); 191 (63.88%) were female, and 279
(93.31%) held a bachelor’s degree or higher.
Measures
Creativity measure. In Experiment 2, we employed a dual-method approach to assess
the creativity of the composed lyrics, ensuring a comprehensive evaluation that captures both the
intrinsic qualities of the lyrics and their reception within a musical context. First, we recruited an
online panel of raters to evaluate the lyrics independently of their musical performance, thereby
minimizing potential influences from accompanying melodies and arrangements. These raters
rated each lyric on three dimensions: novelty (ICC₂ = .35–.52), defined as the originality and
unique expression within the lyrics, including innovative rhetorical techniques and perspectives;
usefulness (emotional expression; ICC₂ = .26–.41), referring to the extent to which the emotional
content resonates with and engages the audience; and overall impression (ICC₂ = .42–.59), which
pertains to the overall quality and impact of the lyrics themselves. To ensure consistency, raters
participated in online training sessions and were provided with standardized definitions and
criteria for each dimension (see online Appendix C). Ratings were conducted using a 10-point
scale (1 = Extremely low, 10 = Extremely high), with attention checks embedded throughout the
process, resulting in the exclusion of data from two raters. Each lyric was evaluated by an
average of 5.12 raters (SD = 0.71).
Second, to evaluate the creativity of the lyrics within a musical context, we composed the
written lyrics into complete songs. Ten members from university choirs, all with systematic and
professional training in singing, performed the composed lyrics. A professional music producer
then finalized these performances into complete song recordings. An online panel of raters
assessed the lyrics on the same two dimensions of novelty (ICC₂ = .85–.87) and usefulness
(emotional appeal; ICC₂ = .78–.81), maintaining the same criteria. Additionally, raters evaluated
their overall impression of the songs based on their liking as listeners (ICC₂ = .83–.85), reflecting
the audience’s reception of the complete musical piece. This second rating phase allowed us to
consider the fit between the lyrics and their musical execution. Raters received training and
detailed definitions to ensure consistent evaluations. Attention checks were included, leading to
the exclusion of five raters who failed these checks and two raters whose correlation coefficients
with the average scores were below .3. Each song was evaluated by an average of 28.33 raters
(SD = 1.61).
General human capital. Consistent with Experiment 1, participants’ general human
capital was assessed through their education level and IQ test scores using the same
measurement (Mariz-Perez et al., 2012; Ployhart et al., 2011; Sefcek et al., 2016).
Specific human capital. We measured participants’ specific human capital in the field of
lyrics writing using an objective measure. A single indicator was employed to measure
participants’ previous experience with publishing or showcasing their lyrics: “Have you ever
published or presented your lyrics?” The responses were distributed as follows—1 (No)
accounted for 64.44%, 2 (Yes, but only within a small range) for 29.79%, 3 (Yes, on a public
platform) for 4.86%, and 4 (My work has received awards or recognition) for 0.91%. Compared
to self-assessed writing ability, participants’ experience in publishing or showcasing their lyrics
provides a more objective measure and directly reflects their accumulated experience in this
field.
Control variables. Consistent with Experiment 1, we controlled for participants’ age,
gender, openness, frequency of AI usage, motivation for participating in the experiment, mind
perception of AI, and the AI identification ratio by raters. All measures were the same as in
Experiment 1. Additionally, we included fixed effects for the selected demo in our
regression analysis.
Results
We employed Ordinary Least Squares (OLS) regression models to test our hypotheses.
Descriptive statistics and correlations are presented in Table 3, and regression results are shown
in Tables 4–8.
The results provide limited support for Hypothesis 1. When evaluating lyrics alone, AI
use did not significantly predict creativity. In contrast, when assessing complete songs, AI use
showed positive coefficients for novelty (b = 0.133, p = .061) and usefulness (b = 0.108, p
= .075), though these effects did not reach statistical significance.
Hypothesis 2a proposed that education would moderate the relationship between AI use
and creativity, with stronger effects for individuals with higher education levels. For lyrics
ratings, the interaction between AI use and education was significant across all creativity
dimensions: novelty (b = 0.407, p = .008), usefulness (b = 0.341, p = .021), and overall
impression (b = 0.489, p = .002). Simple slope analyses revealed that for individuals with high
education, AI use positively influenced novelty (b = 0.389, t(308) = 2.49, p = .013; see Figure 7),
usefulness (b = 0.333, t(308) = 2.22, p = .027), and overall impression (b = 0.461, t(308) = 2.93,
p = .004). For those with low education, the effects of AI use were negative but not significant
across all dimensions. In the context of complete songs, the interaction effects between AI use
and education were positive but not statistically significant, although the direction remained
consistent with our hypothesis. These results indicate that the positive effect of AI use on
creativity is stronger among more educated individuals when evaluating lyrics alone, providing
partial support for Hypothesis 2a.
Hypothesis 2b suggested that IQ would moderate the relationship between AI use and
creativity, with stronger effects for individuals with higher IQ scores. The interactions were not
statistically significant for any creativity dimensions in either lyrics ratings or complete songs.
Although the coefficients were in the expected direction, we do not find support for Hypothesis
2b.
Hypothesis 3 posited that specific human capital, measured by prior lyrics publication
experience, would negatively moderate the relationship between AI use and creativity. The
results support this hypothesis across both lyrics ratings and complete songs. For lyrics ratings,
the interaction between AI use and specific human capital was significantly negative for novelty
(b = -0.391, p = .017) and was approaching significance for usefulness (b = -0.327, p = .056) and
overall impression (b = -0.283, p = .091). Similarly, for complete songs, the interaction was
significantly negative for novelty (b = -0.221, p = .029), usefulness (b = -0.195, p = .017), and
overall impression (b = -0.209, p = .014). Simple slope analyses showed that for individuals with
low specific human capital, AI use positively influenced creativity—significantly for novelty (b
= 0.264, t(308) = 2.76, p = .006; See Figure 8), usefulness (b = 0.223, t(308) = 2.75, p = .006),
and overall impression (b = 0.211, t(308) = 2.568, p = .011) in the context of complete songs.
Conversely, for those with high specific human capital, the effects of AI use on creativity were
negative but not statistically significant across all dimensions. These findings indicate that the
positive effects of AI use on creativity are weaker for individuals with higher levels of specific
human capital, supporting Hypothesis 3.
Supplementary Analysis
We also conducted several supplementary analyses similar to those in the first study. First, we
categorized participants into high and low groups for specific human capital by identifying
whether they had previously published any lyrics (Nlow = 131, Nhigh = 66). Independent
samples t-tests revealed no significant differences between these groups in terms of prompt
length (t(195) = 1.539, p = .126). However, experts interacted with AI significantly less than novices
in terms of the number of interaction rounds (t(195) = 2.207, p = .029). Similar to
Experiment 1, we conducted mean split analyses to categorize participants into high and low
groups for general human capital. For general human capital, participants were divided into high
and low education groups based on a mean of 4.31 (SD = 0.71, Nlow = 132, Nhigh = 65). T-tests
showed individual with higher education interact significantly more with AI than their
counterparts with lower education level regarding both prompt length (t(195) = -3.189, p = .002)
and interaction rounds (t(195) = -2.206, p = .029). Similarly, for IQ, the mean split was at 14.47
(SD = 3.04, Nlow = 97, Nhigh = 100). T-tests indicated individuals with higher IQ interact with AI
approaching significantly more than their counterparts in prompt length (t(195) = -1.925, p
= .056) but no significant difference in interaction rounds (t(195) = -1.222, p = .223) between
high and low IQ groups.
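As a rough illustration of these group comparisons, the snippet below performs a mean split on a moderator and runs independent-samples t-tests on the two engagement measures; the column names (education, prompt_length, interaction_rounds) are assumed placeholders, not the study's actual variable names.

```python
# Sketch of the mean-split grouping and independent-samples t-tests described above.
import pandas as pd
from scipy import stats

def mean_split_t_tests(df: pd.DataFrame, moderator: str = "education") -> dict:
    cutoff = df[moderator].mean()
    low, high = df[df[moderator] <= cutoff], df[df[moderator] > cutoff]
    results = {}
    for dv in ("prompt_length", "interaction_rounds"):
        # Two-tailed Student's t-test comparing the low and high groups.
        t, p = stats.ttest_ind(low[dv], high[dv])
        results[dv] = {"t": float(t), "p": float(p),
                       "n_low": len(low), "n_high": len(high)}
    return results
```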
Second, we again assessed the similarity of the creative products using textual analysis
techniques (embeddings). In contrast to Experiment 1, the results showed increased similarity
among AI-assisted outputs compared to those created without AI (cosine similarity: b = 0.014,
p = .008; L2 distance: b = -0.014, p = .007), leaving the question of whether AI use homogenizes
creative work unresolved.
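The sketch below indicates one way such a similarity check can be computed from sentence embeddings of the creative products; the embedding step itself is abstracted away, and the array shape is an assumption rather than a detail reported in the study.

```python
# Sketch of the embedding-based similarity measures (cosine and L2) referenced above.
# `embeddings` is assumed to be an (n_texts, dim) array produced by any
# sentence-embedding model; the model choice is left open.
import numpy as np

def mean_pairwise_similarity(embeddings: np.ndarray) -> tuple[float, float]:
    # Cosine similarity: normalize rows, then take pairwise dot products.
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    cosine = normed @ normed.T
    # Euclidean (L2) distance between every pair of texts.
    diffs = embeddings[:, None, :] - embeddings[None, :, :]
    l2 = np.linalg.norm(diffs, axis=-1)
    # Average over unique pairs only (strict upper triangle).
    iu = np.triu_indices(len(embeddings), k=1)
    return float(cosine[iu].mean()), float(l2[iu].mean())
```

Higher mean cosine similarity (or lower mean L2 distance) among AI-assisted outputs than among unassisted outputs is the pattern the coefficients above reflect.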
Third, we investigated whether the use of AI affected participants' cognitive perceptions.
Results showed that using AI significantly reduced participants' psychological ownership of their
creative product (b = -1.124, p < .001). Unlike in Experiment 1, AI use in the second study
also increased participants' creative self-efficacy (b = 0.284, p = .014).
Consistent with Experiment 1, to ensure the robustness of our main results, we conducted
an omnibus test by including all interactions in the same regression model. The findings
remained highly consistent with our initial analyses. Details are reported in Online Appendix E.
Experiment 2 Discussion
Unlike in Experiment 1, working with generative AI in Experiment 2 did not significantly
improve creativity. The moderating role of education, which was evident in the lyric-writing
task, diminished when evaluating the full songs. Additionally, IQ did not significantly moderate
the relationship between AI use and creativity in either case. However, specific human capital
consistently moderated the AI-creativity relationship negatively across both lyrics and songs,
indicating that individuals with higher domain-specific expertise benefited less from AI
assistance. These differences could stem from the complexity and specificity of the songwriting
task. Songwriting, as a more specialized creative domain, may reduce the impact of general
human capital while amplifying the importance of specific expertise. The clearer distinction
between experts and novices in Experiment 2, based on prior lyrics publication, likely intensified
the negative moderation effect of specific human capital.
GENERAL DISCUSSION
This research examined how generative AI interacts with different forms of human
capital to influence creativity. Across two studies—flash fiction writing and songwriting—we
explored how AI affects creativity and how general human capital (education and IQ) and
specific human capital (domain-specific expertise) moderate these effects. The results reveal that
AI significantly enhances creativity, especially for individuals with higher levels of general
human capital. However, specific human capital consistently moderated this relationship
negatively, indicating that individuals with greater domain expertise benefited less from AI
assistance. These findings suggest that AI’s impact on creativity is uneven, favoring those with
broader cognitive skills while offering diminished advantages for those with specialized
knowledge.
Theoretical Implications
Our study makes several important theoretical contributions. First, it challenges the
notion that generative AI uniformly enhances productivity and reduces performance disparities
among individuals (Noy & Zhang, 2023). Contrary to prior research on human-AI interactions,
which suggests that domain experts may benefit more from AI due to their ability to effectively
utilize predictive algorithms (e.g., Agrawal et al., 2019; Huang et al., 2024), our findings reveal
that generative AI—unlike traditional predictive AI—can actually reduce the competitive edge of
domain experts. By democratizing access to knowledge, generative AI breaks down traditional
barriers, allowing individuals without specific expertise to perform tasks previously reserved for
specialists (Anthony et al., 2023; Brynjolfsson et al., 2023; Wang et al., 2023). This shift
underscores a fundamental change in the dynamics of knowledge work, where general cognitive
skills become more valuable than specialized knowledge.
Second, by integrating human capital theory with the context of generative AI, we
develop a novel framework that explains how different forms of human capital interact with AI
technologies. Our findings illustrate that augmentation and automation coexist in the AI-human
collaboration landscape and that their relative influence depends on the type of human capital
individuals possess. Specifically, generative AI augments the capabilities of those with high
general human capital by enhancing their ability to process and integrate vast amounts of
information creatively. In contrast, it automates tasks traditionally reliant on specific human
capital, thereby reducing the unique value of specialized expertise. This framework advances
human capital theory by demonstrating that the value of different skill types is reshaped in the
presence of generative AI. It explains why experts may not benefit more from AI: the breaking of
knowledge barriers by AI diminishes the exclusivity of their expertise. Additionally, experts may
engage less with AI tools due to factors such as AI aversion or overreliance on their own
knowledge, limiting their ability to leverage AI effectively (Doshi & Hauser, 2024; Yin et al.,
2024). Our research thus highlights the need to reconsider how specific and general human
capital are valued in future work.
Third, our study uncovers nuanced insights into the limitations of generative AI. In
Experiment 2, we did not observe significant main effects of AI use on creativity when
evaluating lyrics alone, diverging from previous studies that reported consistent positive effects
(Jia et al., 2023; Noy & Zhang, 2023). This suggests that AI’s effectiveness may depend on the
nature of the task. For instance, songwriting relies less on writing skills—a strength of generative
AI—and more on idea generation and emotional expression, which may not be as readily
enhanced by AI assistance.
Furthermore, we did not consistently observe the hypothesized homophily effect, which
suggests that AI use leads to increased similarity in outputs (Wang et al., 2023; Anthony et al.,
2023). While some studies argue that AI can homogenize creative products due to reliance on
common algorithms, our findings indicate that this effect is not consistent and may vary
depending on the type of task and the level of human-AI interaction.
Finally, our exploration into participants’ perceptions revealed that AI use could impact
intrinsic motivation. Some participants reported reduced feelings of ownership over their creative
work when using AI, potentially diminishing intrinsic motivation (Amabile & Pratt, 2016).
However, AI assistance also appeared to boost self-efficacy in creative domains, encouraging
individuals to engage in tasks they might have otherwise avoided due to perceived skill gaps
(Anthony et al., 2023; Noy & Zhang, 2023). These contrasting effects suggest that AI’s influence
on motivation is complex and warrants further investigation.
Practical Implications
Our findings have important practical implications for organizations navigating the
integration of AI in creative and knowledge-based work. As AI becomes more prevalent across
industries, understanding how different forms of human capital interact with AI can inform talent
acquisition, workforce development, and task allocation strategies (Dell’Acqua et al., 2023;
Frank et al., 2019; Paudel, 2024). Organizations should recognize the increasing value of general
human capital—skills such as critical thinking, problem-solving, and adaptability—in an AI-
enhanced workplace. Prioritizing these skills in hiring and training programs can enhance
employees’ ability to collaborate effectively with AI technologies.
Companies can invest in developing general cognitive skills through targeted training
initiatives, thereby maximizing the benefits of AI integration. At the same time, industries
heavily reliant on domain-specific expertise may need to reconsider the role of such knowledge
in an AI-driven economy (Allen & Choudhury, 2022; Brynjolfsson et al., 2023). Our findings
suggest that AI’s capacity to automate specialized tasks could reduce the competitive advantage
of individuals with narrowly focused expertise. Organizations might therefore shift toward
fostering interdisciplinary skills and encouraging employees to develop broader competencies.
From a societal perspective, policymakers and educators should emphasize broad-based
educational programs that cultivate general cognitive abilities, ensuring that individuals are
equipped to thrive alongside AI technologies (Furman & Seamans, 2019; Frank et al., 2019).
Strategies to mitigate potential inequalities exacerbated by differences in human capital are
essential, promoting inclusive access to skills development opportunities.
Limitations and Future Directions
Despite the contributions of our research, several limitations warrant acknowledgment
and present avenues for future investigation. First, in Experiment 1, the distinction between high
and low specific human capital may not have been salient due to reliance on broad self-
assessments of writing ability. Experiment 2 addressed this limitation by using prior lyrics
publication as a clearer indicator of specific human capital, resulting in more consistent results.
Future research should employ precise and validated measures of specific human capital to better
capture its nuances across different creative domains.
Second, some inconsistencies between Experiment 1 and Experiment 2, particularly
regarding the main effects of AI use and the moderating role of general human capital (e.g., IQ),
suggest that the impact of AI may vary across tasks. Songwriting may rely less on writing
skills—a strength of generative AI—and more on idea generation and emotional expression,
areas where AI assistance may be less effective. Future research should explore a range of
creative tasks to determine the conditions under which AI enhances or diminishes creativity.
Third, our measures of general human capital—IQ tests based on Raven’s Progressive
Matrices and education level—focus on logic, reasoning, and cognitive skills that may not
directly translate to artistic creativity (Ritchie & Tucker-Drob, 2018). This may explain why IQ
and education did not predict performance directly in our studies. Additionally, cultural factors,
such as the emphasis on logic and mathematics in Chinese education, may limit the applicability
of these measures to creative tasks. Future studies should consider alternative measures of
general human capital that capture a broader range of cognitive abilities relevant to creativity.
Finally, our research focused on creative tasks involving writing and lyric creation. It
remains to be seen whether similar patterns emerge in tasks involving different cognitive
demands, such as logical reasoning, coding, or analytical problem-solving. Investigating the
interaction of AI and human capital in diverse domains would enhance the generalizability of our
theoretical framework and inform AI integration strategies across various industries.
Conclusion
In conclusion, our study provides valuable insights into the complex interplay between
generative AI and human capital in creative work. By demonstrating that AI does not uniformly
enhance productivity and that its benefits are contingent on the type of human capital individuals
possess, we contribute to a more nuanced understanding of AI’s role in the modern workplace.
These findings have significant implications for theory, practice, and future research,
highlighting the need to reconsider how we value and develop human skills in an era
increasingly shaped by AI technologies.
REFERENCES
Acemoglu, D., Autor, D., Hazell, J., & Restrepo, P. (2022). AI and jobs: Evidence from online
vacancies. Journal of Labor Economics, 40(S1), S293–S340.
https://doi.org/10.1086/718327
Acemoglu, D., & Restrepo, P. (2022). Tasks, automation, and the rise in U.S. wage inequality.
Econometrica, 90(5), 1973–2016. https://doi.org/10.3982/ECTA19815
Agarwal, N., Moehring, A., Rajpurkar, P., & Salz, T. (2023). Combining human expertise with
artificial intelligence: Experimental evidence from radiology. National Bureau of Economic
Research. https://doi.org/10.3386/w31422
Agrawal, A., Gans, J. S., & Goldfarb, A. (2019). Artificial intelligence: The ambiguous labor
market impact of automating prediction. Journal of Economic Perspectives, 33(2), 31–50.
https://doi.org/10.1257/jep.33.2.31
Allen, R., & Choudhury, P. (Raj). (2022). Algorithm-augmented work and domain experience:
The countervailing forces of ability and aversion. Organization Science, 33(1), 149–169.
https://doi.org/10.1287/orsc.2021.1554
Amabile, T. M. (1985). Motivation and creativity: Effects of motivational orientation on creative
writers. Journal of Personality and Social Psychology, 48(2), 393–399.
https://doi.org/10.1037/0022-3514.48.2.393
Amabile, T. M. (2012). Componential theory of creativity (pp. 538–559). Boston, MA: Harvard
Business School.
Amabile, T. M., Conti, R., Coon, H., Lazenby, J., & Herron, M. (1996). Assessing the work
environment for creativity. Academy of Management Journal, 39(5), 1154–1184.
https://doi.org/10.2307/256995
Amabile, T. M., & Pratt, M. G. (2016). The dynamic componential model of creativity and
innovation in organizations: Making progress, making meaning. Research in Organizational
Behavior, 36, 157–183. https://doi.org/10.1016/j.riob.2016.10.001
Anthony, C., Bechky, B. A., & Fayard, A.-L. (2023). “Collaborating” with AI: Taking a system
view to explore the future of work. Organization Science, 34(5), 1672–1694.
https://doi.org/10.1287/orsc.2022.1651
Baer, J. (2015). The importance of domain-specific expertise in creativity. Roeper Review, 37(3),
165–178. https://doi.org/10.1080/02783193.2015.1047480
Becker, G. S. (1962). Investment in human capital: A theoretical analysis. Journal of Political
Economy, 70(5, Part 2), 9–49. https://doi.org/10.1086/258724
Berg, J. M. (2016). Balancing on the creative highwire: Forecasting the success of novel ideas in
organizations. Administrative Science Quarterly, 61(3), 433–468.
https://doi.org/10.1177/0001839216642211
Berg, J. M. (2019). When silver is gold: Forecasting the potential creativity of initial ideas.
Organizational Behavior and Human Decision Processes, 154, 96–117.
https://doi.org/10.1016/j.obhdp.2019.08.004
Berg, J. M. (2022). One-hit wonders versus hit makers: Sustaining success in creative industries.
Administrative Science Quarterly, 67(3), 630–673.
https://doi.org/10.1177/00018392221083650
Boussioux, L., Lane, J. N., Zhang, M., Jacimovic, V., & Lakhani, K. R. (2024). The crowdless
future? Generative AI and creative problem-solving. Organization Science, 35(5), 1589–
1607. https://doi.org/10.1287/orsc.2023.18430
Bruns, H. C. (2013). Working alone together: Coordination in collaboration across domains of
expertise. Academy of Management Journal, 56(1), 62–83.
https://doi.org/10.5465/amj.2010.0756
Brynjolfsson, E., Li, D., & Raymond, L. (2023). Generative AI at work. National Bureau of
Economic Research. https://doi.org/10.3386/w31161
Camerer, C., Loewenstein, G., & Weber, M. (1989). The curse of knowledge in economic
settings: An experimental analysis. Journal of Political Economy, 97(5), 1232–1254.
Carpenter, M. A., Sanders, W. G., & Gregersen, H. B. (2001). Bundling human capital with
organizational context: The impact of international assignment experience on multinational
firm performance and CEO pay. Academy of Management Journal, 44(3), 493–511.
https://doi.org/10.5465/3069366
Choudhury, P., Starr, E., & Agarwal, R. (2020). Machine learning and human capital
complementarities: Experimental evidence on bias mitigation. Strategic Management
Journal, 41(8), 1381–1411. https://doi.org/10.1002/smj.3152
Coff, R. W. (1997). Human assets and management dilemmas: Coping with hazards on the road
to resource-based theory. Academy of Management Review, 22(2), 374.
https://doi.org/10.2307/259327
Crook, T. R., Todd, S. Y., Combs, J. G., Woehr, D. J., & Ketchen, D. J. (2011). Does human
capital matter? A meta-analysis of the relationship between human capital and firm
performance. Journal of Applied Psychology, 96(3), 443–456.
https://doi.org/10.1037/a0022147
Dane, E. (2010). Reconsidering the trade-off between expertise and flexibility: A cognitive
entrenchment perspective. Academy of Management Review, 35(4), 579–603.
https://doi.org/10.5465/amr.35.4.zok579
Dell’Acqua, F., McFowland, E., Mollick, E. R., Lifshitz-Assaf, H., Kellogg, K., Rajendran, S.,
Krayer, L., Candelon, F., & Lakhani, K. R. (2023). Navigating the jagged technological
frontier: Field experimental evidence of the effects of AI on knowledge worker productivity
and quality. Harvard Business School Technology & Operations Mgt. Unit Working Paper,
24–013. https://doi.org/10.2139/ssrn.4573321
Doshi, A. R., & Hauser, O. P. (2024). Generative AI enhances individual creativity but reduces
the collective diversity of novel content. Science Advances, 10(28), eadn5290.
https://doi.org/10.1126/sciadv.adn5290
Einola, K., & Khoreva, V. (2023). Best friend or broken tool? Exploring the co‐existence of
humans and artificial intelligence in the workplace ecosystem. Human Resource
Management, 62(1), 117–135. https://doi.org/10.1002/hrm.22147
Eloundou, T., Manning, S., Mishkin, P., & Rock, D. (2023). GPTs are GPTs: An early look at the
labor market impact potential of large language models. arXiv Preprint arXiv:2303.10130.
https://doi.org/10.48550/arXiv.2303.10130
Frank, M. R., Autor, D., Bessen, J. E., Brynjolfsson, E., Cebrian, M., Deming, D. J., Feldman,
M., Groh, M., Lobo, J., Moro, E., Wang, D., Youn, H., & Rahwan, I. (2019). Toward
understanding the impact of artificial intelligence on labor. Proceedings of the National
Academy of Sciences, 116(14), 6531–6539. https://doi.org/10.1073/pnas.1900949116
Furman, J., & Seamans, R. (2019). AI and the Economy. Innovation Policy and the Economy,
19(1), 161–191. https://doi.org/10.1086/699936
Gilardi, F., Alizadeh, M., & Kubli, M. (2023). ChatGPT outperforms crowd workers for text-
annotation tasks. Proceedings of the National Academy of Sciences, 120(30), e2305016120.
https://doi.org/10.1073/pnas.2305016120
Harvey, S., & Berry, J. W. (2023). Toward a meta-theory of creativity forms: How novelty and
usefulness shape creativity. Academy of Management Review, 48(3), 504–529.
https://doi.org/10.5465/amr.2020.0110
Harvey, S., & Kou, C.-Y. (2013). Collective engagement in creative tasks: The role of evaluation
in the creative process in groups. Administrative Science Quarterly, 58(3), 346–386.
https://doi.org/10.1177/0001839213498591
Huang, L. L., Chen, R. P., & Chan, K. W. (2024). Pairing up with anthropomorphized artificial
agents: Leveraging employee creativity in service encounters. Journal of the Academy of
Marketing Science, 52(4), 955–975. https://doi.org/10.1007/s11747-024-01017-w
Huang, M.-H., & Rust, R. T. (2021). A strategic framework for artificial intelligence in
marketing. Journal of the Academy of Marketing Science, 49(1), 30–50.
https://doi.org/10.1007/s11747-020-00749-9
Hui, X., Reshef, O., & Zhou, L. (2024). The short-term effects of generative artificial
intelligence on employment: Evidence from an online labor market. Organization Science.
https://doi.org/10.1287/orsc.2023.18441
Jia, N., Luo, X., Fang, Z., & Liao, C. (2023). When and how artificial intelligence augments
employee creativity. Academy of Management Journal, 67(1), 5–32.
https://doi.org/10.2139/ssrn.4397280
Lawless, K. A., & Kulikowich, J. M. (2006). Domain knowledge and individual interest: The
effects of academic level and specialization in statistics and psychology. Contemporary
Educational Psychology, 31(1), 30–43. https://doi.org/10.1016/j.cedpsych.2005.01.002
Lee, B. C., & Chung, J. (2024). An empirical investigation of the impact of ChatGPT on
creativity. Nature Human Behaviour, 1–9. https://doi.org/10.1038/s41562-024-01953-1
Lepak, D. P., & Snell, S. A. (1999). The human resource architecture: Toward a theory of human
capital allocation and development. Academy of Management Review, 24(1), 31.
https://doi.org/10.2307/259035
Lepak, D. P., & Snell, S. A. (2002). Examining the human resource architecture: The
relationships among human capital, employment, and human resource configurations.
Journal of Management, 28(4), 517–543. https://doi.org/10.1177/014920630202800403
Li, N., Zhou, H., Deng, W., Liu, J., Liu, F., & Mikel-Hong, K. (2024). When advanced AI isn’t
enough: Human factors as drivers of success in generative AI-human collaborations.
Available at SSRN 4738829. https://doi.org/10.2139/ssrn.4738829
Lifshitz-Assaf, H. (2018). Dismantling knowledge boundaries at NASA: The critical role of
professional identity in open innovation. Administrative Science Quarterly, 63(4), 746–782.
https://doi.org/10.1177/0001839217747876
Luo, X., Qin, M. S., Fang, Z., & Qu, Z. (2021). Artificial intelligence coaches for sales agents:
Caveats and solutions. Journal of Marketing, 85(2), 14–32.
https://doi.org/10.1177/0022242920956676
Mariz-Perez, R. M., Teijeiro-Alvarez, M. M., & Garcìa-Alvarez, M. T. (2012). The relevance of
human capital as a driver for innovation. Cuadernos de Economía, 35(98), 68–76.
https://doi.org/10.1016/S0210-0266(12)70024-9
Marrone, R., Cropley, D., & Medeiros, K. (2024). How does narrow AI impact human creativity?
Creativity Research Journal, 1–11. https://doi.org/10.1080/10400419.2024.2378264
Meincke, L., Mollick, E. R., & Terwiesch, C. (2024). Prompting diverse ideas: Increasing AI
idea variance. arXiv Preprint arXiv:2402.01727. https://doi.org/10.2139/ssrn.4708466
Miller, K. D., Zhao, M., & Calantone, R. J. (2006). Adding interpersonal learning and tacit
knowledge to March’s exploration-exploitation model. Academy of Management Journal,
49(4), 709–722. https://doi.org/10.5465/amj.2006.22083027
Nakavachara, V., Potipiti, T., & Chaiwat, T. (2024). Experimenting with generative AI: Does
ChatGPT really increase everyone’s productivity? arXiv Preprint arXiv:2403.01770.
https://doi.org/10.2139/ssrn.4746770
Nelson, A., Anthony, C., & Tripsas, M. (2023). “If I could turn back time”: Occupational
dynamics, technology trajectories, and the reemergence of the analog music synthesizer.
Administrative Science Quarterly, 68(2), 551–599.
https://doi.org/10.1177/00018392231163178
Noy, S., & Zhang, W. (2023). Experimental evidence on the productivity effects of generative
artificial intelligence. Science, 381(6654), 187–192.
https://doi.org/10.1126/science.adh2586
Park, J. S., O’Brien, J. C., Cai, C. J., Morris, M. R., Liang, P., & Bernstein, M. S. (2023).
Generative agents: Interactive simulacra of human behavior. In Proceedings of the 36th
Annual ACM Symposium on User Interface Software and Technology, 1–22.
https://doi.org/10.1145/3586183.3606763
Paudel, R. (2024). The impact of automation and artificial intelligence (AI) on leadership and the
workforce. Indonesian Journal of Banking and Financial Technology, 2(2), 109–124.
https://doi.org/10.55927/fintech.v2i2.8904
Peng, S., Kalliamvakou, E., Cihon, P., & Demirer, M. (2023). The impact of AI on developer
productivity: Evidence from gitHub copilot. arXiv Preprint arXiv:2302.06590.
https://doi.org/10.48550/arXiv.2302.06590
Ployhart, R. E., Van Iddekinge, C. H., & MacKenzie, W. I. (2011). Acquiring and developing
human capital in service contexts: The interconnectedness of human capital resources.
Academy of Management Journal, 54(2), 353–368.
https://doi.org/10.5465/amj.2011.60263097
Plucker, J. A., & Beghetto, R. A. (2004). Why creativity is domain general, why it looks domain
specific, and why the distinction does not matter. In R. J. Sternberg, E. L. Grigorenko, & J.
L. Singer (Eds.), Creativity: From potential to realization. (pp. 153–167). American
Psychological Association. https://doi.org/10.1037/10692-009
Puranam, P. (2021). Human–AI collaborative decision-making as an organization design
problem. Journal of Organization Design, 10(2), 75–80. https://doi.org/10.1007/s41469-
021-00095-2
Pyatt, G., & Becker, G. S. (1966). Human capital: A theoretical and empirical analysis, with
special reference to education. The Economic Journal, 76(303), 635.
https://doi.org/10.2307/2229541
Rafner, J., Beaty, R. E., Kaufman, J. C., Lubart, T., & Sherson, J. (2023). Creativity in the age of
generative AI. Nature Human Behaviour, 7(11), 1836–1838.
https://doi.org/10.1038/s41562-023-01751-1
Raisch, S., & Krakowski, S. (2021). Artificial intelligence and management: The automation–
augmentation paradox. Academy of Management Review, 46(1), 192–210.
https://doi.org/10.5465/amr.2018.0072
Rietzschel, E. F., Nijstad, B. A., & Stroebe, W. (2007). Relative accessibility of domain
knowledge and creativity: The effects of knowledge activation on the quantity and
originality of generated ideas. Journal of Experimental Social Psychology, 43(6), 933–946.
https://doi.org/10.1016/j.jesp.2006.10.014
Ritchie, S. J., & Tucker-Drob, E. M. (2018). How much does education improve intelligence? A
meta-analysis. Psychological Science, 29(8), 1358–1369.
https://doi.org/10.1177/0956797618774253
Rosen, S. (1976). A theory of life earnings. Journal of Political Economy, 84(4, Part 2), S45–
S67. https://doi.org/10.1086/260532
Rouse, E. D. (2020). Where you end and I begin: Understanding intimate co-creation. Academy
of Management Review, 45(1), 181–204. https://doi.org/10.5465/amr.2016.0388
Saucier, G. (1994). Mini-Markers: A Brief Version of Goldberg’s Unipolar Big-Five Markers.
Journal of Personality Assessment, 63(3), 506–516.
https://doi.org/10.1207/s15327752jpa6303_8
Schad, J., & Bansal, P. (2018). Seeing the forest and the trees: How a systems perspective
informs paradox research. Journal of Management Studies, 55(8), 1490–1506.
https://doi.org/10.1111/joms.12398
Schillebeeckx, S. J. D., Lin, Y., & George, G. (2019). When do expert teams fail to create
impactful inventions? Journal of Management Studies, 56(6), 1073–1104.
https://doi.org/10.1111/joms.12447
Sefcek, J. A., Miller, G. F., & Figueredo, A. J. (2016). Development and validation of an 18-item
medium form of the ravens advanced progressive matrices. Sage Open, 6(2),
2158244016651915. https://doi.org/10.1177/2158244016651915
Teodoridis, F., Bikard, M., & Vakili, K. (2019). Creativity at the knowledge frontier: The impact
of specialization in fast- and slow-paced domains. Administrative Science Quarterly, 64(4),
894–927. https://doi.org/10.1177/0001839218793384
Tu, C., Guo, J., Hatcher, R. C., & Kaufman, J. C. (2020). The relationship between emotional
intelligence and domain-specific and domain-general creativity. The Journal of Creative
Behavior, 54(2), 337–349. https://doi.org/10.1002/jocb.369
Wang, W., Gao, G. (Gordon), & Agarwal, R. (2023). Friend or foe? Teaming between artificial
intelligence and workers with variation in experience. Management Science, 70(9), 5753–
5775. https://doi.org/10.1287/mnsc.2021.00588
Ward, T. B. (2008). The role of domain knowledge in creative generation. Learning and
Individual Differences, 18(4), 363–366. https://doi.org/10.1016/j.lindif.2007.07.002
Yam, K. C., Bigman, Y. E., Tang, P. M., Ilies, R., De Cremer, D., Soh, H., & Gray, K. (2021).
Robots at work: People prefer—and forgive—service robots with perceived feelings.
Journal of Applied Psychology, 106(10), 1557–1572. https://doi.org/10.1037/apl0000834
Yin, Y., Jia, N., & Wakslak, C. J. (2024). AI can help people feel heard, but an AI label
diminishes this impact. Proceedings of the National Academy of Sciences, 121(14),
e2319112121. https://doi.org/10.1073/pnas.2319112121
Zhou, E., & Lee, D. (2024). Generative artificial intelligence, human creativity, and art. PNAS
Nexus, 3(3), pgae052. https://doi.org/10.1093/pnasnexus/pgae052
Zhu, F., & Zou, W. (2024). The role of generative AI in human creative processes: Experimental
evidence. Available at SSRN 4676053. https://doi.org/10.2139/ssrn.4676053
Table 1 Means, SDs, and Correlation of Studied Variables (Experiment 1)

VARIABLES                     Mean     SD       1          2          3          4          5          6          7
1 Novelty                     4.879    1.036    -
2 Usefulness                  4.799    0.940    0.744●●●   -
3 Overall Impression          4.912    0.918    0.870●●●   0.881●●●   -
4 Education                   4.586    0.816    -0.086     -0.086     -0.155●    -
5 IQ                          15.556   2.626    -0.093     -0.049     -0.116     0.206●●    -
6 Specific Human Capital      3.269    0.643    0.008      0.040      0.016      0.065      0.031      -
7 Age                         26.272   5.622    -0.117     -0.124     -0.186●    0.305●●●   -0.144     0.024      -
8 Gender                      1.685    0.466    -0.048     -0.094     -0.041     0.031      -0.100     0.097      -0.022
9 Openness                    3.854    0.607    0.053      0.072      0.042      0.098      0.008      0.364●●●   0.019
10 AI Use Frequency           2.204    1.458    -0.104     -0.114     -0.132     0.191●     0.329●●●   -0.016     -0.135
11 Purpose for Experiment     0.056    0.230    0.180●     0.219●●    0.189●     0.024      -0.124     -0.207●●   -0.021
12 Mind Perception            3.983    0.827    -0.176●    -0.096     -0.125     -0.019     -0.003     -0.010     -0.036
13 AI Identification Ratio    0.369    0.134    -0.262●●●  -0.404●●●  -0.441●●●  -0.004     -0.049     -0.065     0.144

Continued

VARIABLES                     8          9          10         11         12        13
8 Gender                      -
9 Openness                    -0.032     -
10 AI Use Frequency           -0.042     0.041      -
11 Purpose for Experiment     -0.242●●   -0.103     0.059      -
12 Mind Perception            0.010      0.050      -0.020     -0.011     -
13 AI Identification Ratio    -0.044     -0.138     -0.061     -0.122     0.101     -

Notes. Female = 2, Male = 1. All p values in this table are two-tailed.
●●● p<0.001, ●● p<0.01, ● p<0.05.
Table 2 Regression Results (Experiment 1)
VARIABLES
Novelty
Usefulness
Overall
Impression
Novelty
Usefulness
Overall
Impression
Novelty
Usefulness
Overall
Impression
Novelty
Usefulness
Overall
Impression
Model 1
Model 2a
Model 2b
Model 3
AI Use
AI Use × Education
AI Use × IQ
AI Use × Specific Human Capital
Education
IQ
Specific Human Capital
Age
Gender
openness
AI Use Frequency
Purpose for Experiment
Mind Perception
AI Identification Ratio
Constant
0.403●
(0.19)
0.352●
(0.16)
0.370●
(0.15)
0.382●
(0.19)
0.480●
(0.20)
0.339●
(0.16)
0.295
(0.19)
0.357●
(0.15)
0.309+
(0.17)
-0.040
(0.11)
-0.019
(0.04)
-0.024
(0.13)
-0.016
(0.01)
-0.039
(0.17)
0.102
(0.12)
-0.080
(0.06)
0.703+
(0.38)
-0.211●
(0.08)
-2.093●●
(0.63)
6.996●●●
(1.12)
-0.048
(0.09)
0.000
(0.03)
0.033
(0.11)
-0.010
(0.01)
-0.131
(0.15)
0.065
(0.11)
-0.092+
(0.05)
0.735●
(0.34)
-0.080
(0.07)
-2.966●●●
(0.57)
6.482●●●
(0.98)
-0.087
(0.08)
-0.025
(0.03)
-0.005
(0.11)
-0.019
(0.01)
-0.056
(0.13)
0.029
(0.10)
-0.085+
(0.05)
0.549+
(0.32)
-0.109
(0.07)
-3.185●●●
(0.56)
7.704●●●
(0.94)
-0.367●
(0.17)
-0.020
(0.04)
-0.010
(0.14)
-0.018
(0.01)
-0.015
(0.17)
0.072
(0.12)
-0.093
(0.06)
0.670+
(0.39)
-0.202●
(0.09)
-2.131●●●
(0.61)
6.945●●●
(1.10)
-0.249
(0.17)
-0.000
(0.03)
0.042
(0.11)
-0.012
(0.01)
-0.116
(0.15)
0.047
(0.11)
-0.100+
(0.05)
0.714●
(0.34)
-0.074
(0.07)
-2.989●●●
(0.57)
6.344●●●
(0.99)
-0.298●
(0.14)
-0.025
(0.03)
0.004
(0.11)
-0.020+
(0.01)
-0.041
(0.13)
0.010
(0.10)
-0.094●
(0.04)
0.528+
(0.32)
-0.103
(0.07)
-3.209●●●
(0.55)
7.388●●●
(0.93)
0.361●
(0.18)
0.329●
(0.16)
0.340●
(0.15)
0.386●
(0.19)
0.321●
(0.16)
0.349●
(0.15)
0.193●●
(0.07)
0.106+
(0.06)
0.140●
(0.07)
-0.036
(0.10)
-0.164●
(0.07)
-0.008
(0.13)
-0.018
(0.01)
-0.057
(0.17)
0.056
(0.12)
-0.066
(0.06)
0.457
(0.35)
-0.183●
(0.09)
-2.078●●●
(0.59)
6.810●●●
(0.91)
-0.046
(0.09)
-0.079
(0.05)
0.042
(0.11)
-0.012
(0.01)
-0.141
(0.15)
0.040
(0.11)
-0.085
(0.05)
0.600+
(0.32)
-0.064
(0.07)
-2.957●●●
(0.56)
6.549●●●
(0.81)
-0.084
(0.08)
-0.130●
(0.06)
0.007
(0.11)
-0.021+
(0.01)
-0.069
(0.13)
-0.005
(0.10)
-0.075
(0.05)
0.371
(0.29)
-0.089
(0.07)
-3.174●●●
(0.52)
7.403●●●
(0.75)
-0.341
(0.25)
-0.043
(0.11)
-0.021
(0.04)
0.214
(0.22)
-0.018
(0.01)
-0.001
(0.17)
0.118
(0.12)
-0.073
(0.06)
0.753+
(0.41)
-0.222●
(0.09)
-2.048●●
(0.64)
6.919●●●
(1.10)
-0.600●●
(0.20)
-0.053
(0.08)
-0.003
(0.03)
0.451●●
(0.17)
-0.014
(0.01)
-0.064
(0.14)
0.094
(0.11)
-0.080
(0.05)
0.822●
(0.32)
-0.100
(0.07)
-2.886●●●
(0.57)
6.593●●●
(0.90)
-0.404●
(0.20)
-0.091
(0.08)
-0.027
(0.03)
0.276
(0.18)
-0.021+
(0.01)
-0.011
(0.13)
0.048
(0.10)
-0.077
(0.05)
0.608+
(0.33)
-0.123+
(0.07)
-3.131●●●
(0.56)
7.690●●●
(0.88)
Observations
R-squared
Notes. Robust standard errors in parentheses. All p values in this table are two-tailed. Education was centered in Model 2a; IQ was centered in Model 2b; Specific Human Capital was centered in
Model 3.
●●● p<0.001, ●● p<0.01, ● p<0.05, + p<0.1
162
0.178
162
0.279
162
0.223
162
0.295
162
0.330
162
0.276
162
0.187
162
0.315
162
0.263
162
0.207
162
0.345
162
0.330
Table 3 Means, SDs, and Correlations of the Studied Variables (Experiment 2)

VARIABLES                      Mean     SD       1          2          3          4          5          6          7          8
1 Novelty_L                    5.012    1.015    -
2 Usefulness_L                 5.374    0.950    0.808●●●   -
3 Overall Impression_L         5.366    1.026    0.882●●●   0.846●●●   -
4 Novelty_S                    4.836    0.744    0.618●●●   0.498●●●   0.633●●●   -
5 Usefulness_S                 5.143    0.636    0.469●●●   0.479●●●   0.526●●●   0.792●●●   -
6 Overall Impression_S         4.944    0.681    0.444●●●   0.417●●●   0.513●●●   0.797●●●   0.895●●●   -
7 Education                    4.307    0.711    0.119●     0.151●●    0.134●     0.101      0.090      0.052      -
8 IQ                           14.465   3.038    0.130●     0.096      0.101      0.072      0.059      0.051      0.221●●●   -
9 Specific Human Capital       1.422    0.630    0.063      0.074      0.098      0.078      0.074      0.092      -0.100     -0.041
10 AI Identification Ratio_L   0.626    0.231    -0.231●    -0.273●●●  -0.188●●●  -0.057     -0.026     -0.009     -0.055     -0.011
11 AI Identification Ratio_S   0.569    0.114    -0.134●    -0.223●●●  -0.106     -0.231●●●  -0.337●●●  -0.299●●●  -0.090     -0.075
12 Age                         24.617   6.267    0.111●     0.125●     0.085      0.109●     0.047      0.008      0.097      0.112●
13 Gender                      1.623    0.485    0.057      0.085      0.079      0.033      0.059      0.021      0.080      0.003
14 Openness                    3.961    0.571    -0.063     -0.048     -0.075     -0.068     -0.040     -0.050     0.067      -0.079
15 AI Use Frequency            2.900    1.651    -0.040     -0.004     -0.014     -0.001     -0.027     -0.023     0.213●●●   0.063
16 Purpose for Experiment      0.049    0.215    0.018      -0.021     -0.011     0.082      0.092      0.101      0.002      0.045
17 Mind Perception             3.856    0.861    -0.091     -0.090     -0.096     -0.034     -0.091     -0.084     -0.083     -0.050

Continued

VARIABLES                      9          10         11         12         13         14         15         16        17
9 Specific Human Capital       -
10 AI Identification Ratio_L   -0.028     -
11 AI Identification Ratio_S   -0.039     0.359●●●   -
12 Age                         0.219●●●   -0.069     -0.080     -
13 Gender                      -0.245●●●  -0.038     -0.120●    -0.227●●●  -
14 Openness                    0.078      -0.056     -0.022     0.076      0.022      -
15 AI Use Frequency            -0.117●    0.013      0.048      -0.190●●●  0.097      0.132●     -
16 Purpose for Experiment      -0.107     -0.005     -0.049     -0.029     0.001      -0.229●●●  -0.038     -
17 Mind Perception             0.115●     0.001      0.164●●    0.060      -0.116●    0.192●●●   0.042      -0.054    -

Notes. Female = 2, Male = 1. All p values in this table are two-tailed. “_L” refers to rating scores on lyrics only; “_S” refers to rating scores with songs hearing.
●●● p<0.001, ●● p<0.01, ● p<0.05.
Table 4 Regression Results of AI Use on Creativity Measured by Lyrics (Experiment 2)
VARIABLES
(1)
(2)
(3)
(4)
Novelty_L
Novelty_L Usefulness_L Usefulness_L
(5)
Overall
Impression_L
(6)
Overall
Impression_L
AI Use
Education
IQ
Specific Human Capital
Age
Gender
Openness
AI Use frequency
Purpose for Experiment
Mind Perception
AI Identification Ratio_L
Constant
0.087
(0.11)
0.156+
(0.08)
0.030
(0.02)
0.109
(0.08)
0.013
(0.01)
0.117
(0.11)
-0.085
(0.09)
-0.028
(0.04)
0.080
(0.24)
-0.078
(0.06)
-1.404●●●
(0.22)
4.906●●●
(0.66)
0.079
(0.11)
0.183●
(0.08)
0.017
(0.02)
0.110
(0.08)
0.016
(0.01)
0.186+
(0.11)
-0.111
(0.09)
-0.002
(0.04)
-0.111
(0.23)
-0.076
(0.06)
-1.398●●●
(0.21)
5.350●●●
(0.62)
0.097
(0.12)
0.198●
(0.08)
0.015
(0.02)
0.184●
(0.08)
0.009
(0.01)
0.186
(0.12)
-0.148
(0.09)
-0.012
(0.03)
-0.013
(0.25)
-0.094
(0.06)
-1.259●●●
(0.23)
5.481●●●
(0.65)
0.201●
(0.08)
0.015
(0.02)
0.190●
(0.08)
0.010
(0.01)
0.193
(0.12)
-0.158+
(0.09)
-0.012
(0.03)
-0.017
(0.25)
-0.094
(0.06)
-1.233●●●
(0.23)
5.531●●●
(0.65)
0.186●
(0.08)
0.017
(0.02)
0.116
(0.08)
0.016+
(0.01)
0.192+
(0.11)
-0.119
(0.09)
-0.001
(0.04)
-0.114
(0.23)
-0.076
(0.06)
-1.378●●●
(0.21)
5.391●●●
(0.61)
0.159●
(0.08)
0.029
(0.02)
0.115
(0.08)
0.013
(0.01)
0.123
(0.11)
-0.093
(0.10)
-0.028
(0.04)
0.076
(0.24)
-0.078
(0.06)
-1.382●●●
(0.22)
4.951●●●
(0.66)
i.demo
Observations
R-squared
Notes. Robust standard errors in parentheses. All p values in this table are two-tailed. “Y” means model includes the fixed
effect of assigned demo. “_L” refers to rating scores on lyrics only.
●●● p<0.001, ●● p<0.01, ● p<0.05, + p<0.1
Y
329
0.248
Y
329
0.245
Y
329
0.246
Y
329
0.215
Y
329
0.216
Y
329
0.249
Table 5 Regression Results of AI Use on Creativity Measured by Songs (Experiment 2)
VARIABLES
(1)
(2)
(3)
(4)
Novelty_S
Novelty_S Usefulness_S Usefulness_S
(5)
Overall
Impression_S
(6)
Overall
Impression_S
AI Use
Education
IQ
Specific Human Capital
Age
Gender
Openness
AI Use frequency
Purpose for Experiment
Mind Perception
AI Identification Ratio_S
Constant
0.133+
(0.07)
0.138●●
(0.05)
0.005
(0.01)
0.051
(0.05)
0.011●
(0.01)
0.034
(0.07)
-0.072
(0.06)
-0.007
(0.02)
0.189
(0.16)
0.041
(0.05)
-1.620●●●
(0.33)
4.986●●●
(0.48)
0.108+
(0.06)
0.111●●
(0.04)
0.006
(0.01)
0.039
(0.04)
0.003
(0.01)
0.052
(0.06)
-0.031
(0.05)
-0.013
(0.02)
0.185+
(0.11)
-0.006
(0.03)
-1.620●●●
(0.30)
5.840●●●
(0.37)
0.087
(0.06)
0.090●
(0.04)
0.007
(0.01)
0.055
(0.04)
-0.002
(0.00)
0.031
(0.06)
-0.052
(0.05)
-0.016
(0.02)
0.231●
(0.11)
-0.025
(0.03)
-1.364●●●
(0.30)
5.999●●●
(0.36)
0.093●
(0.04)
0.007
(0.01)
0.061
(0.04)
-0.002
(0.00)
0.039
(0.06)
-0.060
(0.05)
-0.015
(0.02)
0.229●
(0.11)
-0.027
(0.03)
-1.300●●●
(0.31)
6.020●●●
(0.35)
0.115●●
(0.04)
0.006
(0.01)
0.047
(0.04)
0.003
(0.01)
0.062
(0.06)
-0.041
(0.05)
-0.012
(0.02)
0.182+
(0.11)
-0.009
(0.03)
-1.540●●●
(0.30)
5.866●●●
(0.37)
0.143●●
(0.05)
0.005
(0.01)
0.061
(0.05)
0.012●
(0.01)
0.046
(0.07)
-0.085
(0.06)
-0.006
(0.02)
0.186
(0.15)
0.038
(0.05)
-1.522●●●
(0.34)
5.018●●●
(0.48)
i.demo
Observations
R-squared
Notes. Robust standard errors in parentheses. All p values in this table are two-tailed. “Y” means model includes the fixed
effect of assigned demo. “_S” refers to rating scores with songs hearing.
●●● p<0.001, ●● p<0.01, ● p<0.05, + p<0.1
Y
329
0.440
Y
329
0.454
Y
329
0.508
Y
329
0.448
Y
329
0.446
Y
329
0.512
Table 6 Regression Results of Interaction of AI Use and Education on Creativity (Experiment 2)
VARIABLES
(1)
(2)
Novelty_L Usefulness_L
(3)
Overall
Impression_L
(4)
(5)
Novelty_S Usefulness_S
(6)
Overall
Impression_S
AI Use
Education
AI Use × Education
IQ
Specific Human Capital
Age
Gender
Openness
AI Use frequency
Purpose for Experiment
Mind Perception
AI Identification Ratio_L
AI Identification Ratio_S
Constant
0.100
(0.11)
-0.106
(0.12)
0.407●●
(0.15)
0.028
(0.02)
0.094
(0.09)
0.011
(0.01)
0.116
(0.11)
-0.086
(0.09)
-0.027
(0.04)
0.083
(0.24)
-0.090
(0.07)
-1.434●●●
(0.21)
0.090
(0.11)
-0.036
(0.13)
0.341●
(0.15)
0.015
(0.02)
0.098
(0.08)
0.015
(0.01)
0.185+
(0.11)
-0.112
(0.09)
-0.001
(0.04)
-0.108
(0.23)
-0.086
(0.06)
-1.423●●●
(0.21)
0.113
(0.12)
-0.117
(0.13)
0.489●●
(0.16)
0.013
(0.02)
0.165●
(0.08)
0.008
(0.01)
0.185
(0.12)
-0.150
(0.09)
-0.011
(0.03)
-0.008
(0.24)
-0.108+
(0.06)
-1.294●●●
(0.22)
0.137+
(0.07)
0.043
(0.10)
0.146
(0.11)
0.004
(0.01)
0.046
(0.05)
0.011●
(0.01)
0.034
(0.07)
-0.073
(0.06)
-0.006
(0.02)
0.190
(0.16)
0.037
(0.05)
0.113+
(0.06)
-0.012
(0.09)
0.192+
(0.10)
0.005
(0.01)
0.032
(0.04)
0.002
(0.01)
0.052
(0.06)
-0.031
(0.05)
-0.013
(0.02)
0.186+
(0.11)
-0.012
(0.03)
0.092
(0.06)
-0.013
(0.09)
0.159+
(0.10)
0.007
(0.01)
0.049
(0.04)
-0.002
(0.00)
0.031
(0.06)
-0.052
(0.05)
-0.015
(0.02)
0.232●
(0.11)
-0.030
(0.03)
5.692●●●
(0.63)
6.234●●●
(0.62)
6.470●●●
(0.66)
-1.623●●●
(0.33)
5.612●●●
(0.44)
-1.624●●●
(0.29)
6.364●●●
(0.36)
-1.368●●●
(0.29)
6.422●●●
(0.35)
i.demo
Observations
R-squared
Notes. Robust standard errors in parentheses. All p values in this table are two-tailed. “Y” means model includes the fixed
effect of assigned demo. “_L” refers to rating scores on lyrics only; “_S” refers to rating scores with songs hearing.
●●● p<0.001, ●● p<0.01, ● p<0.05, + p<0.1
Y
329
0.267
Y
329
0.464
Y
329
0.271
Y
329
0.518
Y
329
0.230
Y
329
0.450
Table 7 Regression Results of Interaction of AI Use and IQ on Creativity (Experiment 2)
VARIABLES
(1)
(2)
Novelty_L Usefulness_L
(3)
Overall
Impression_L
(4)
(5)
Novelty_S Usefulness_S
(6)
Overall
Impression_S
AI Use
IQ
AI Use × IQ
Education
Specific Human Capital
Age
Gender
Openness
AI Use frequency
Purpose for Experiment
Mind Perception
AI Identification Ratio_L
AI Identification Ratio_S
Constant
0.083
(0.11)
-0.028
(0.04)
0.076+
(0.04)
0.160●
(0.08)
0.109
(0.08)
0.011
(0.01)
0.116
(0.11)
-0.106
(0.09)
-0.032
(0.04)
0.112
(0.25)
-0.096
(0.07)
-1.405●●●
(0.22)
0.077
(0.11)
-0.015
(0.04)
0.042
(0.04)
0.186●
(0.08)
0.111
(0.08)
0.015
(0.01)
0.185+
(0.11)
-0.122
(0.09)
-0.004
(0.04)
-0.093
(0.23)
-0.086
(0.06)
-1.398●●●
(0.21)
0.095
(0.12)
-0.022
(0.04)
0.050
(0.04)
0.201●
(0.08)
0.184●
(0.08)
0.008
(0.01)
0.185
(0.12)
-0.162+
(0.09)
-0.015
(0.03)
0.008
(0.25)
-0.106+
(0.06)
-1.259●●●
(0.22)
0.132+
(0.07)
-0.007
(0.02)
0.016
(0.02)
0.138●●
(0.05)
0.051
(0.05)
0.011●
(0.01)
0.033
(0.07)
-0.077
(0.06)
-0.008
(0.02)
0.196
(0.15)
0.037
(0.05)
0.108+
(0.06)
0.005
(0.02)
0.002
(0.02)
0.111●●
(0.04)
0.039
(0.04)
0.003
(0.01)
0.052
(0.06)
-0.031
(0.05)
-0.013
(0.02)
0.186+
(0.11)
-0.007
(0.03)
0.088
(0.06)
0.016
(0.02)
-0.011
(0.02)
0.089●
(0.04)
0.055
(0.04)
-0.001
(0.00)
0.032
(0.06)
-0.049
(0.05)
-0.015
(0.02)
0.227●
(0.11)
-0.023
(0.03)
5.549●●●
(0.64)
5.712●●●
(0.60)
5.843●●●
(0.64)
-1.625●●●
(0.33)
5.110●●●
(0.48)
-1.621●●●
(0.30)
5.935●●●
(0.36)
-1.361●●●
(0.30)
6.073●●●
(0.37)
i.demo
Observations
R-squared
Notes. Robust standard errors in parentheses. All p values in this table are two-tailed. “Y” means model includes the fixed
effect of assigned demo. “_L” refers to rating scores on lyrics only; “_S” refers to rating scores with songs hearing.
●●● p<0.001, ●● p<0.01, ● p<0.05, + p<0.1
Y
329
0.258
Y
329
0.447
Y
329
0.250
Y
329
0.219
Y
329
0.454
Y
329
0.512
Table 8 Regression Results of Interaction of AI Use and Specific Human Capital on Creativity
VARIABLES
AI Use
Specific Human Capital
AI Use × Specific Human Capital
Education
IQ
Age
Gender
Openness
AI Use frequency
Purpose for Experiment
Mind Perception
AI Identification Ratio_L
AI Identification Ratio_S
Constant
(Experiment 2)
(1)
(2)
Novelty_L Usefulness_L
(3)
Overall
Impression_L
(4)
(5)
Novelty_S Usefulness_S
(6)
Overall
Impression_S
0.074
(0.11)
0.377●●
(0.14)
-0.391●
(0.16)
0.168●
(0.08)
0.031+
(0.02)
0.013
(0.01)
0.124
(0.11)
-0.059
(0.10)
-0.022
(0.04)
0.085
(0.24)
-0.085
(0.07)
-1.391●●●
(0.22)
0.068
(0.11)
0.334●
(0.15)
-0.327+
(0.17)
0.193●
(0.08)
0.018
(0.02)
0.016+
(0.01)
0.192+
(0.11)
-0.090
(0.09)
0.004
(0.03)
-0.106
(0.23)
-0.082
(0.06)
-1.387●●●
(0.21)
0.088
(0.12)
0.378●●
(0.14)
-0.283+
(0.17)
0.207●
(0.08)
0.016
(0.02)
0.010
(0.01)
0.191
(0.12)
-0.130
(0.09)
-0.008
(0.03)
-0.009
(0.24)
-0.099
(0.06)
-1.249●●●
(0.23)
0.124+
(0.07)
0.204●
(0.09)
-0.221●
(0.10)
0.144●●
(0.05)
0.006
(0.01)
0.012●
(0.00)
0.039
(0.07)
-0.058
(0.06)
-0.003
(0.02)
0.193
(0.15)
0.036
(0.05)
0.100+
(0.06)
0.174●
(0.07)
-0.195●
(0.08)
0.117●●
(0.04)
0.007
(0.01)
0.003
(0.01)
0.057
(0.06)
-0.018
(0.05)
-0.010
(0.02)
0.188+
(0.11)
-0.011
(0.03)
0.079
(0.06)
0.198●●
(0.07)
-0.209●
(0.08)
0.096●
(0.04)
0.008
(0.01)
-0.002
(0.00)
0.036
(0.06)
-0.038
(0.05)
-0.012
(0.02)
0.235●
(0.11)
-0.030
(0.03)
4.907●●●
(0.63)
5.379●●●
(0.60)
5.633●●●
(0.62)
-1.580●●●
(0.33)
4.953●●●
(0.47)
-1.585●●●
(0.30)
5.802●●●
(0.36)
-1.327●●●
(0.30)
5.977●●●
(0.34)
i.demo
Observations
R-squared
Notes. Robust standard errors in parentheses. All p values in this table are two-tailed. “Y” means model includes the fixed effect of
assigned demo. “_L” refers to rating scores on lyrics only; “_S” refers to rating scores with songs hearing.
●●● p<0.001, ●● p<0.01, ● p<0.05, + p<0.1
Y
329
0.261
Y
329
0.461
Y
329
0.226
Y
329
0.453
Y
329
0.253
Y
329
0.519
Figure 1 Design of Experiment 1
Note. Figures and icons are generated by AI.
Figure 2 Design of Experiment 2
Note. Figures and icons are generated by AI. “PL” stands for Professional Lyricists, and “LN” stands for Lyric Novices.
Figure 3 AI Tool Used in Two Experiments
Figure 4 Interaction of AI Use and Education Predicting Novelty Score
Figure 5 Interaction of AI Use and IQ Predicting Novelty Score
Figure 6 Interaction of AI Use and Specific Human Capital Predicting Novelty Score
Figure 7 Interaction of AI Use and Education Predicting Novelty Score by Song Rating
Figure 8 Interaction of AI Use and Specific Human Capital Predicting Novelty Score by Song Rating
|
ai_researcher | 3 | Is_Your_LLM_Secretly_a_World_Model_of_the_Internet_Model-Based_Planning_for_Web_Agents.pdf | 4
2
0
2
v
o
N
0
1
]
I
A
.
s
c
[
1
v
9
5
5
6
0
.
1
1
4
2
:
v
i
X
r
a
Is Your LLM Secretly a World Model of the Internet?
MODEL-BASED PLANNING FOR WEB AGENTS
Yu Gu1,†, Boyuan Zheng1,†, Boyu Gou1, Kai Zhang1, Cheng Chang2, Sanjari Srivastava2,
Yanan Xie2, Peng Qi2, Huan Sun1, Yu Su1
1The Ohio State University, 2Orby AI
{gu.826, zheng.2372, sun.397, su.809}@osu.edu
ABSTRACT
Language agents have demonstrated promising capabilities in automating web-
based tasks, though their current reactive approaches still underperform largely
compared to humans. While incorporating advanced planning algorithms, par-
ticularly tree search methods, could enhance these agents’ performance, imple-
menting tree search directly on live websites poses significant safety risks and
practical constraints due to irreversible actions such as confirming a purchase. In
this paper, we introduce a novel paradigm that augments language agents with
model-based planning, pioneering the innovative use of large language models
(LLMs) as world models in complex web environments. Our method, WEB-
DREAMER, builds on the key insight that LLMs inherently encode comprehen-
sive knowledge about website structures and functionalities. Specifically, WEB-
DREAMER uses LLMs to simulate outcomes for each candidate action (e.g., “what
would happen if I click this button?”) using natural language descriptions, and
then evaluates these imagined outcomes to determine the optimal action at each
step. Empirical results on two representative web agent benchmarks with on-
line interaction—VisualWebArena and Mind2Web-live—demonstrate that WEB-
DREAMER achieves substantial improvements over reactive baselines. By estab-
lishing the viability of LLMs as world models in web environments, this work
lays the groundwork for a paradigm shift in automated web interaction. More
broadly, our findings open exciting new avenues for future research into 1) opti-
mizing LLMs specifically for world modeling in complex, dynamic environments,
and 2) model-based speculative planning for language agents. 1
1 INTRODUCTION
Planning (Mattar & Lengyel, 2022)—the strategic search for optimal action sequences to achieve
goals from initial states—has been fundamental to artificial intelligence since its inception, driving
remarkable breakthroughs including superhuman performance in games like Go (Feng et al., 2023;
Silver et al., 2016). Recent advances have demonstrated that integrating large language models
(LLMs) with advanced planning algorithms (e.g., Yao et al. (2023a); Hao et al. (2023); Gu et al.
(2023); Wang et al. (2024); Feng et al. (2023); Brown et al. (2024)) substantially enhances their per-
formance on complex reasoning tasks beyond chain-of-thought (CoT) (Wei et al., 2022) approaches,
with OpenAI’s o1 (OpenAI, 2024b) serving as a prominent example. These methods effectively
scale inference-time compute and enable LLMs to explore multiple potential solution paths, which
ultimately lead to more accurate outcomes.
Alongside these developments, research into generalist web agents capable of planning and execut-
ing a sequence of actions to complete complex tasks across diverse websites has garnered significant
interest (Deng et al., 2023; Zhou et al., 2023; Zheng et al., 2024; Koh et al., 2024a), partly due to the
web’s potential as a complex yet realistic environment for driving agent research and development.
However, applying existing planning algorithms to the online web environment presents formidable
challenges. Chief among these challenges are the inherent safety risks associated with live website
†Equal contribution.
1Github: OSU-NLP-Group/WebDreamer
Figure 1: Schematic illustration of different strategies for web agents formulated as a search prob-
lem. Each node represents a webpage.
(a) Reactive: The agent selects locally optimal actions
without forward planning, often leading to suboptimal outcomes. (b) Tree search with real interac-
tions: The agent explores multiple paths through active website navigation and permits backtracking
(indicated by dashed arrows). However, in real-world websites, backtracking is often infeasible due
to the prevalence of irreversible actions. (c) Model-based planning: The agent simulates poten-
tial outcomes (illustrated by cloud-bordered nodes) to determine optimal actions prior to real-world
execution, thus minimizing actual website interactions while maintaining effectiveness. For visual
clarity, only one-step simulated outcomes are depicted. Faded nodes indicate unexplored webpages,
while green checkmarks and red crosses denote successful and unsuccessful outcomes, respectively.
interactions (Liao et al., 2024), such as inadvertently submitting forms with sensitive information
or triggering unintended transactions. These risks become even more pronounced when employ-
ing tree search algorithms (Koh et al., 2024b; Putta et al., 2024), as their exhaustive exploration
can expose the agent to hidden vulnerabilities and unforeseen scenarios. Additionally, many online
actions, such as confirming a purchase or sending an email, are irreversible, which further makes
backtracking—a crucial component of planning algorithms—highly challenging, if not infeasible.
One promising solution to address these challenges is model-based planning (Pascanu et al., 2017;
Moerland et al., 2023), which equips agents with the ability to simulate interactions using a world
model—a computational representation of environment dynamics. By simulating action sequences
within this virtual environment, agents can explore potential outcomes safely, without directly inter-
acting with live websites. This approach not only reduces safety risks but also preserves the agent’s
capacity to explore and plan. Yet, the true challenge lies in creating a versatile world model that can
faithfully capture the landscape of the ever-evolving Internet. While previous research demonstrates
that LLMs can function as effective world models in simplistic settings like blocksworld (Hao et al.,
2023) and gridworld (Kim et al., 2024), a bolder question emerges: Can LLMs rise to the challenge
of modeling the vast, dynamic Internet? With their extensive pre-trained knowledge—spanning web
structures, protocols, and user behaviors—LLMs are uniquely positioned to take on this task. Build-
ing on these insights, we present WEBDREAMER, a pioneering framework that leverages LLMs
as world models to navigate the web (Figure 1). At the core of WEBDREAMER lies the concept
of “dreaming”: before committing to any action, the agent uses the LLM to imagine the outcome
of each possible step, expressed as natural language descriptions of how the state would change.
These simulated outcomes are then evaluated based on their progress toward achieving the task ob-
jective. The most promising action is executed, and the process is repeated iteratively until the LLM
determines that the goal has been reached (Section 4).
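To make this loop concrete, the sketch below outlines the simulate-then-act cycle in code. It is a simplified illustration and not the paper's implementation: the LLM is abstracted as a plain text-in/text-out callable, the candidate actions are assumed to be given, and the prompts and scoring rule are deliberately minimal.

```python
from typing import Callable, List, Tuple

LLM = Callable[[str], str]  # any text-in/text-out language model client

def simulate_and_choose(llm: LLM, task: str, observation: str,
                        candidates: List[str]) -> Tuple[str, str]:
    """Return the most promising candidate action and its imagined outcome."""
    best = (-1.0, candidates[0], "")
    for action in candidates:
        # 1) "Dream": use the LLM as a world model to describe the next state
        #    in natural language, without touching the live website.
        outcome = llm(
            f"Task: {task}\nCurrent webpage: {observation}\n"
            f"In one short paragraph, describe the page after executing: {action}"
        )
        # 2) Score the imagined outcome's progress toward the task objective.
        reply = llm(
            f"Task: {task}\nImagined next page: {outcome}\n"
            "Rate the progress toward completing the task from 0 to 1. Reply with only a number."
        )
        try:
            score = float(reply.strip())
        except ValueError:
            score = 0.0  # ill-formed rating; treat as uninformative
        if score > best[0]:
            best = (score, action, outcome)
    # 3) Only the chosen action is executed on the real website; the agent then
    #    observes the new page and repeats until a stop action is selected.
    return best[1], best[2]
```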
To validate the effectiveness of WEBDREAMER, we evaluate it on two representative benchmarks
that support online interaction: VisualWebArena (Koh et al., 2024a) and Mind2Web-live (Pan et al.,
2024b). WEBDREAMER achieves substantial performance gains over reactive agents on both bench-
marks, underscoring its practical value despite its conceptual simplicity. While tree search with
actual interactions shows slightly superior performance on VisualWebArena, which features a con-
trolled environment of three locally hosted websites, this method is rarely feasible in practical appli-
cations, given its inherent limitations regarding safety risks and the potential for irreversible actions
in real-world websites. In contrast, our simulation-based approach offers a more flexible solution,
balancing performance gains with practical applicability in real-world web navigation tasks.
In summary, our work introduces a new direction for AI planning in complex, real-world environ-
ments like the web using world models simulated by LLMs. With WEBDREAMER, we tackle the
dual challenges of safety and complexity in web navigation. Our results validate the potential of
LLM-based world models for planning in complex web environments and highlight new opportu-
nities for optimizing LLMs as world models and improving model-based planning algorithms for
language agents.
2 RELATED WORK
2.1 WEB AGENTS
Driven by the goal of automating tedious and repetitive web-based tasks, web agents powered by
(multimodal) language models have made substantial progress in various aspects. Benchmarks
have evolved from MiniWoB++ (Shi et al., 2017; Liu et al., 2018) to WebShop (Yao et al., 2022)
and WebArena (Zhou et al., 2023), offering increasingly realistic website simulations. VisualWe-
bArena (Koh et al., 2024a) and Mind2Web (Deng et al., 2023) challenge models’ ability to handle
visual information and generalize across diverse tasks, websites, and domains.
Reactive Agents. Reactive agents make decisions based on immediate observations from the envi-
ronment without performing any search or simulation of future actions, typically implemented with
the ReAct framework (Yao et al., 2023b). Much progress has been made to enhance the fundamen-
tal capabilities of reactive web agents through both prompting closed-source models (Zheng et al.,
2024; He et al., 2024; Deng et al., 2023) and training models using HTML and webpage screen-
shots (Lee et al., 2023; Gur et al., 2023; Furuta et al., 2023; Hong et al., 2024; Baechler et al., 2024).
Additionally, models’ abilities to ground web agent actions to elements have been improved through
training on action-coordinate pair data (You et al., 2024; Cheng et al., 2024). Further advancements
have been achieved by training on web agent trajectories, utilizing both human-annotated trajec-
tories (Shaw et al., 2023; Hong et al., 2024; Deng et al., 2023; Lai et al., 2024) and synthesized
exploration trajectories (Furuta et al., 2023; Song et al., 2024; Patel et al., 2024). However, reactive
agents inherently suffer from short-sightedness, which can often lead to suboptimal performance in
multi-step decision making.
Agents with tree search. Pan et al. (2024a) introduces a reward model based on GPT-4V, de-
signed to provide both step-wise and trajectory-level rewards to guide inference-time search. Search
Agent (Koh et al., 2024b) investigates inference-time search algorithms in interactive web environ-
ments, enabling explicit exploration and multi-step planning. In contrast to Search Agent, which
employs a variant of best-first tree search, AgentQ (Putta et al., 2024) and WebPilot (Zhang et al.,
2024) utilize Monte Carlo Tree Search (MCTS) as their primary search strategy.
While tree search on websites has demonstrated significant improvements, it still presents several
limitations. First, the search process substantially increases inference time due to the need for exten-
sive exploration, which is difficult to parallelize given its inherently sequential nature. Backtracking
to previous states is essential for search-based methods but impractical on real-world websites. Koh
et al. (2024b) addressed this in sandbox environments by storing action sequences to resume states
after resetting the environment. However, resetting the environment or undoing action sequences
is not feasible on live websites. Finally, the extra explorations introduced by search algorithms
substantially amplify the risk of destructive actions that may irreversibly alter the website’s state,
potentially causing harmful side effects.
2.2 WORLD MODELS
World models, a cornerstone of model-based reinforcement learning (Moerland et al., 2023) since
the introduction of Dyna by Sutton (1991), are typically trained on observed state transitions to
predict future states and rewards. These world models enable efficient training through simulated
experiences, reducing environmental interactions and improving sample efficiency (Ha & Schmid-
huber, 2018). Beyond their role in training, researchers have explored the use of world models to
facilitate planning (Pascanu et al., 2017; Schrittwieser et al., 2020). Fundamentally, world models in
reinforcement learning often involve task-specific training, with a primary focus on enhancing data
efficiency in the agent learning process.
In contrast to traditional world models in reinforcement learning, LLMs employed as world models
primarily focus on facilitating decision-making in planning rather than training. This distinction
leads LLM-based models to prioritize key task abstractions over the high-fidelity simulations typi-
cally required in reinforcement learning. Recent research has demonstrated the potential of LLMs
as world models for simple environments, leveraging their encoded broad world knowledge (Hao
et al., 2023; Kim et al., 2024). Our study aims to advance this field by investigating the capabil-
ities of LLM-based world models in more complex real-world environments, specifically diverse
websites. A concurrent work (Chae et al., 2024) also explores augmenting web agents with LLM-
simulated action outcomes; however, their focus is on data collection to train an open-weights LLM,
while ours centers on understanding the potential of this new paradigm using advanced LLMs such
as GPT-4o (OpenAI, 2024a).
3 PRELIMINARY
3.1 TASK FORMULATION
Web agents tasked with automating activities in live
websites confront vast and complex search spaces.
Formally, each task with a task instruction I can
be framed as a partially observable Markov decision
process (POMDP): (S, A, O, T, R, Ω), where S rep-
resents the set of all possible states of the environ-
ment, A represents all possible actions the agent can
take, O represents the set of possible observations
from the environment, T : S × A → S represents
the state transition function, R is a binary reward de-
noting whether the task specified in I has been com-
pleted or not, and Ω : S → O is a deterministic
function that projects a state to an observation. The
goal of the task is to execute a sequence of actions
that achieves a reward of 1.
Table 1: Action space for web navigation defined in VisualWebArena (Koh et al., 2024a).
Action Type a          Description
click [elem]           Click on elem.
hover [elem]           Hover over elem.
type [elem] [text]     Type text into elem.
press [key comb]       Press a key combo.
goto [url]             Go to url.
go back                Click back.
go forward             Click forward.
new tab                Open a new tab.
tab focus [index]      Focus on the i-th tab.
tab close              Close current tab.
scroll [up/down]       Scroll up or down.
stop [answer]          End with an output.
In practical scenarios, the environment is partially
observable due to the complexity of web environ-
ments. The true state encompasses server-side variables, dynamically loaded content, hidden UI
elements, and is subject to network conditions and browser limitations. Consequently, the agent can
only perceive the environment through a limited viewport (i.e., an observation o ∈ O), which repre-
sents an incomplete projection of the true system state. The observation space typically manifests as
screenshots or text-based accessibility trees, reflecting common implementation practices. This con-
strained observability naturally shapes the action space A, which comprises operations executable
on interactable elements within o, such as element clicks, text input, and URL navigation (Table 1).
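For concreteness, a minimal Python sketch of how an agent implementation might represent observations and the Table 1 action space is shown below; the class and field names are illustrative assumptions rather than part of the VisualWebArena API.
```python
from dataclasses import dataclass
from typing import Optional

# Action types from Table 1; each action operates on the current observation.
ACTION_TYPES = [
    "click", "hover", "type", "press", "goto", "go_back", "go_forward",
    "new_tab", "tab_focus", "tab_close", "scroll", "stop",
]

@dataclass
class Action:
    kind: str                         # one of ACTION_TYPES
    element_id: Optional[str] = None  # target element for click/hover/type
    text: Optional[str] = None        # typed text, key combo, URL, or final answer

@dataclass
class Observation:
    screenshot: bytes        # rendered viewport (e.g., Set-of-Marks annotated image)
    accessibility_tree: str  # or raw HTML, depending on the benchmark setting
    url: str                 # current page URL
```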
3.2 PLANNING THROUGH SIMULATION
Planning an optimal action sequence through tree search using real interactions governed by T is
costly and risks irreversible actions. Model-based planning addresses these challenges by using a
computational representation of the environment to simulate interaction outcomes. Instead of ex-
ecuting actions in the real environment, the agent leverages an approximate model to predict state
transitions, enabling efficient exploration and evaluation of action sequences without real-world
interactions. While offline planning can compute entire action sequences before execution in de-
terministic environments like BlocksWorld (Hao et al., 2023), web environments are too complex
for such long-term prediction. This necessitates online planning approaches that interleave planning
and execution, computing one action at a time.
One prominent approach is Model Predictive Control (MPC; Garcia et al. (1989)), which iteratively
simulates future trajectories to select actions. At each state s, MPC simulates trajectories over a
Figure 2: Illustration of WEBDREAMER using the LLM to simulate the outcome of each candidate
action. The LLM simulates trajectories in natural language descriptions for three candidate actions:
(1) Click “Office Products”, (2) Click “Electronics”, and (3) Type “Disk” into textbox. Through
these simulations, each resulting trajectory is scored to identify the action most likely to succeed.
In this case, the LLM selects Click “Electronics” as the optimal step and executes it. Each
dotted box represents an LLM-generated state description after each simulated action. This example
demonstrates a two-step planning horizon.
finite horizon H for each possible action a ∈ A using a simulator function sim(s, a) and evaluates
them using a scoring function score(τ). The action leading to the most promising trajectory is then
executed: a* = arg max_{a ∈ A} score(sim(s, a)). This process repeats after observing the new
state, allowing the agent to adapt its plan based on actual outcomes while avoiding costly real-world
exploration. In practice, we cannot access the true state due to partial observability; as a result, we
instead compute sim(o, a) using the observation o = Ω(s).
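A minimal sketch of this MPC-style action selection, assuming generic `sim`, `score`, and candidate functions rather than any particular implementation:
```python
from typing import Any, Callable, Iterable

def mpc_select_action(
    obs: Any,
    candidates: Iterable[Any],
    sim: Callable[[Any, Any], Any],   # simulate a horizon-H trajectory from obs under an action
    score: Callable[[Any], float],    # estimate how promising a simulated trajectory is
) -> Any:
    """Select the action whose simulated trajectory looks most promising.

    This mirrors a* = arg max_a score(sim(o, a)); only the selected action is
    executed in the real environment, and planning repeats from the next
    observation.
    """
    return max(candidates, key=lambda a: score(sim(obs, a)))
```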
4 WEBDREAMER: MODEL-BASED PLANNING FOR WEB AGENTS
In this paper, we propose WEBDREAMER, a pioneering approach leveraging LLMs as world mod-
els to enable efficient planning in complex digital environments. Our approach is motivated by the
observation that web interfaces, despite their complexity, are designed to be predictable for human
users. When browsing websites, humans can effectively anticipate action outcomes based on visual
cues and common design patterns—clicking a “Submit” button leads to form submission, select-
ing a product image navigates to its detail page. Given that LLMs are trained on vast amounts
of web-related data, we hypothesize that they have acquired sufficient knowledge to simulate the
consequences of user actions, potentially serving as effective world models for planning.
4.1 CORE DESIGN
WEBDREAMER follows the planning through simulation paradigm introduced in Section 3.2. Fig-
ure 2 illustrates this process with three candidate actions, where WEBDREAMER simulates two-step
trajectories for each action, selects the trajectory with the highest score, and executes its correspond-
ing initial action. At its core, WEBDREAMER leverages an LLM to implement both the simulation
function sim and the scoring function score.
Implementation for sim: Our implementation of sim consists of two modules: one predicts state
changes after action execution, approximating T , while the other imagines a possible action based
on the predicted state. Together, these two modules generate trajectories of length H, where H
is a configurable horizon parameter (i.e., the simulation depth). Specifically, to represent the state
changes, we prompt the LLM to generate a concise natural language description focusing only on
the effects of the action. For example, in Figure 2, the LLM would output a short description as
follows when prompted to predict the effect of executing the action Click “Electronics”:
→ Click “Electronics”
The ‘Electronics’ category will display three sub-categories: ‘Computers & Accessories’, ‘Accessories
& Supplies’, and ‘Car & Vehicle Electronics’.
Based on this predicted state, the LLM then imagines the next action (i.e., Click “Computers &
Accessories”), which leads to another state change prediction. This process generates a trajectory
of horizon H = 2.
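A simplified sketch of this simulation module is given below; `llm` is an assumed text-in/text-out callable, and the prompts are condensed stand-ins for the full prompts in Appendix A.
```python
from typing import Callable, List, Tuple

def simulate_trajectory(
    llm: Callable[[str], str],   # assumed text-in/text-out interface to the LLM
    task: str,
    observation: str,            # textual rendering of the current page (e.g., SoM / AXTree)
    action: str,
    horizon: int = 2,
) -> List[Tuple[str, str]]:
    """Roll out a simulated trajectory of (action, predicted state change) pairs.

    One module predicts the effect of an action as a short natural-language
    state-change description (approximating T); the other imagines a plausible
    next action from the predicted state.
    """
    trajectory = []
    current_action = action
    state_so_far = observation
    for _ in range(horizon):
        change = llm(
            f"Task: {task}\nCurrent state: {state_so_far}\n"
            f"Predict the state changes after the action: {current_action}"
        )
        trajectory.append((current_action, change))
        state_so_far = f"{state_so_far}\nAfter {current_action}: {change}"
        current_action = llm(
            f"Task: {task}\nCurrent (predicted) state: {state_so_far}\n"
            f"Propose the single most plausible next action."
        )
    return trajectory
```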
Implementation for score: After collecting a trajectory τi simulated from each candidate ac-
tion ai using sim, we further use the LLM as a scoring function for each simulation. Follow-
ing Koh et al. (2024b), we prompt the LLM to evaluate each simulated trajectory with a three-
scale response—complete (1.0), on track (0.5), or incorrect (0)—indicating its progress toward
task completion. The final score is computed by averaging multiple samples of these evaluations.
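A sketch of this scoring function, again assuming a generic `llm` callable and the three-scale mapping described above:
```python
from typing import Callable, List, Tuple

_RESPONSE_TO_SCORE = {"complete": 1.0, "on track": 0.5, "incorrect": 0.0}

def score_trajectory(
    llm: Callable[[str], str],
    task: str,
    trajectory: List[Tuple[str, str]],
    num_samples: int = 3,
) -> float:
    """Average several three-scale LLM judgments of a simulated trajectory."""
    rollout = "\n".join(f"{a} -> {chg}" for a, chg in trajectory)
    prompt = (
        f"Task: {task}\nSimulated steps:\n{rollout}\n"
        "Answer with exactly one of: complete, on track, incorrect."
    )
    scores = []
    for _ in range(num_samples):
        reply = llm(prompt).strip().lower()
        scores.append(next((v for k, v in _RESPONSE_TO_SCORE.items() if k in reply), 0.0))
    return sum(scores) / len(scores)
```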
In addition to sim and score, a prerequi-
site to planning is candidate action genera-
tion. We employ a two-stage approach: first
sampling top-k actions following Koh et al.
(2024b), then using the LLM to self-refine the
candidate set by removing actions that are unnecessary to simulate. This self-
refinement step is motivated by our observa-
tion that at different steps, the same k can in-
troduce varying degrees of irrelevant actions—
some steps naturally have fewer plausible ac-
tions than others. We show the pseudo code
of WEBDREAMER’s overall design in Algo-
rithm 1. termination check verifies if
the model outputs a stop action, reaches max
steps, or repeats an action over 3 times, also
following the implementation by Koh et al.
(2024b).
Algorithm 1: WEBDREAMER
Input: Instruction I; initial observation o0
Output: Sequence of actions a0, a1, . . . , aT
t ← 0;
while True do
    At ← get_candidate(I, ot);
    A′t ← self_refine(At);
    at ← arg max_{a ∈ A′t} score(sim(ot, a));
    ot+1 ← execute(at);
    t ← t + 1;
    if termination_check() = True then
        break;
    end
end
Return result;
All system prompts used in WEBDREAMER can be found in Appendix A.
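Putting the components together, Algorithm 1 can be rendered as the following Python sketch; `get_candidate`, `self_refine`, `sim`, `score`, `execute`, and `termination_check` are assumed helpers corresponding to the modules described above.
```python
def webdreamer_episode(instruction, o0, get_candidate, self_refine,
                       sim, score, execute, termination_check):
    """Online planning loop: simulate candidates, execute the best one, repeat."""
    obs, actions = o0, []
    while True:
        candidates = get_candidate(instruction, obs)         # top-k action proposals
        refined = self_refine(instruction, obs, candidates)  # drop clearly irrelevant ones
        best = max(refined, key=lambda a: score(sim(obs, a)))
        obs = execute(best)                                  # single real interaction per step
        actions.append(best)
        if termination_check(actions):                       # stop action / max steps / repeats
            break
    return actions
```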
4.2 DISCUSSION
To justify our design choices in light of our goal—a pioneering study on using LLMs as world
models for web environments—we discuss three key considerations:
State change description instead of HTML/Accessibility Tree. While we use natural language
descriptions to capture state changes, an alternative is to prompt the LLM to predict the HTML
or accessibility tree of the resulting page. However, since most webpage elements remain un-
changed after an action, predicting the entire page structure is unnecessarily wasteful. Moreover,
such concrete predictions are more prone to hallucination—HTML requires precise details about
the website, whereas state descriptions need only capture the essential changes. For our pioneering
study, we embrace this simpler, more intuitive representation, though we make no claims about its
strict superiority over HTML or accessibility trees (see Section 6.1 for a detailed analysis).
Prompting instead of fine-tuning. In this work, we implement WEBDREAMER through direct
prompting of state-of-the-art LLMs (i.e., GPT-4o (OpenAI, 2024a)) without fine-tuning. Our ra-
tionale is straightforward: we aim to first establish the feasibility of using advanced LLMs as world
models for web environments and their effectiveness in planning. Demonstrating promising results
with this approach will lay the foundation for future work on optimizing this direction through fine-
tuning OSS models on targeted datasets.
Straightforward MPC-based planning instead of MCTS. We adopt a relatively straightforward
MPC-based planning algorithm rather than more sophisticated approaches like MCTS that have
been prominent in recent LLM planning research (Hao et al., 2023; Feng et al., 2023). This choice
is motivated by our empirical findings: increasing the planning horizon of WEBDREAMER yields
diminishing returns, which suggests the current limitations of LLMs in accurately modeling multi-
step trajectories (see Section 6.1). Given our goal of exploring LLMs as world models for web
environments, this simpler approach suffices to demonstrate the key insights while acknowledging
the current capabilities of LLMs.
5 EXPERIMENTS
5.1 SETUP
To properly test our planning framework’s real-world performance, we use benchmarks with on-
line evaluation, capturing the dynamic nature of web interactions. We focus on two representative
benchmarks: VisualWebArena (VWA; Koh et al. (2024a)), which emphasizes a multimodal setting,
and Mind2Web-live (Pan et al., 2024b), which operates with HTML by default. VWA comprises
910 tasks across three locally hosted websites: Shopping, Classifieds, and Reddit. In contrast,
Mind2Web-live includes 104 tasks spanning 69 real-world websites. We adhere to the default set-
tings of both benchmarks: for VWA, we use screenshots with Set-of-Marks prompting as the obser-
vation space, while for Mind2Web-live, we use HTML. For our LLM, we choose the most advanced
multimodal LLM available, GPT-4o, as it best serves our aim to pioneer model-based planning with
LLMs and explore the full potential of this envisioned paradigm. In our experiments, we empiri-
cally set the planning horizon H to 1. A comprehensive analysis of this parameter is presented in
Section 6.1.
To demonstrate the effectiveness of our proposal, we primarily compare our approach with two
major baselines: the reactive agent and the tree search agent with real interactions.2 While we can
readily implement our own method for both benchmarks, for the tree search baseline (Koh et al.,
2024b), we can only compare with it on VWA, because of the infeasibility of doing tree search in
real-world websites in Mind2Web-live. Specifically, in VWA, Koh et al. (2024b) keep track of
the sequences of actions that lead to states in previous trajectories. During backtracking, they reset the
sandbox and re-execute the action sequence to restore the state. However, resetting the environment
to undo effects is not always feasible in real-world websites featured in Mind2Web-live.
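A sketch of this replay-based backtracking, assuming a hypothetical sandbox wrapper `env` with `reset` and `step` methods (which live websites generally do not provide):
```python
def restore_state(env, action_sequence):
    """Backtracking strategy usable only with sandboxed sites: reset, then replay.

    `env` is a hypothetical sandbox wrapper; live websites generally cannot be
    reset, which is why tree search is impractical there.
    """
    obs = env.reset()                 # restore the initial website state
    for action in action_sequence:    # re-execute the recorded path to the target state
        obs = env.step(action)
    return obs
```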
5.2 MAIN RESULTS
Effectiveness. We present the overall performance results in Table 2. WEBDREAMER demon-
strates substantial improvements over the reactive agent on both VWA and Mind2Web-live datasets.
Notably, on the VWA dataset, our proposed method achieves a 33.3% relative performance gain.
Despite these improvements, our approach still falls short of the tree search baseline in terms
of overall success rate. However, it is crucial to emphasize that tree search is not a practical option
for real-world websites, whereas WEBDREAMER provides a more flexible and adaptive alternative.
On Mind2Web-live, WEBDREAMER outperforms the reactive baseline by 2.9% (a relative gain of
13.1%), which is less significant than the improvement on VWA. However, it is worth noting that
the Mind2Web-live dataset does not offer as much discriminative power, as evidenced by the min-
imal performance differences across multiple base LLMs shown in Table 2. The strong results on
both VWA and Mind2Web-live indicate the effectiveness of our method across different observation
settings.
We further conduct a more granular analysis comparing our proposed method to the reactive baseline
on the VWA dataset across multiple dimensions. Table 3 demonstrates that our model-based plan-
ning approach consistently outperforms the reactive baseline across all websites and task difficulty
levels. On tasks of medium difficulty according to the official annotation by VWA, model-based
planning even surpasses the performance of tree search (24.1% vs. 22.2%). Despite its promise,
model-based planning still struggles with hard tasks in VWA that necessitate multi-step simulations.
2We will refer tree search with real interactions simply as tree search in our experiments for brevity.
Table 2: Results on VisualWebArena and Mind2Web-live. WEBDREAMER significantly outper-
forms the reactive baseline and falls only slightly short of the tree search baseline on VWA while
requiring far fewer website interactions. For Mind2Web-live, implementing tree search algorithms
poses significant challenges due to the requirement for website backtracking, leading us to omit tree
search performance metrics. This limitation further underscores the flexibility of our model-based
planning method. We also include additional baselines (denoted by gray cells) to provide broader
context. While these comparisons may not directly assess our core hypothesis, they offer valuable
background for understanding our method’s performance in the web navigation landscape. † We
run the reactive baseline on VWA by ourselves because local hosting requirements may lead to
hardware-dependent performance variations.
Benchmark          Observation O     Method                                           Completion Rate   Success Rate
VisualWebArena     Screenshot+SoM    Gemini-1.5-Pro + Reactive (Koh et al., 2024a)    -                 12.0%
                                     GPT-4 + Reactive (Koh et al., 2024a)             -                 16.4%
                                     GPT-4o + Reactive (Koh et al., 2024a)            -                 17.7%†
                                     GPT-4o + Tree Search (Koh et al., 2024b)         -                 26.4%
                                     GPT-4o + WEBDREAMER                              -                 23.6% (↑33.3%)
Mind2Web-live      HTML              GPT-4 + Reactive (Pan et al., 2024b)             48.8%             23.1%
                                     Claude-3-Sonnet + Reactive (Pan et al., 2024b)   47.9%             22.1%
                                     Gemini-1.5-Pro + Reactive (Pan et al., 2024b)    44.6%             22.3%
                                     GPT-4-turbo + Reactive (Pan et al., 2024b)       44.3%             21.1%
                                     GPT-3.5-turbo + Reactive (Pan et al., 2024b)     40.2%             16.5%
                                     GPT-4o + Reactive (Pan et al., 2024b)            47.6%             22.1%
                                     GPT-4o + WEBDREAMER                              49.9%             25.0% (↑13.1%)
The accuracy of simulations diminishes as the number of steps increases, presenting a significant
challenge for handling hard tasks.
Table 3: Success rate breakdown based on different dimensions. γ = (SR_WEBDREAMER − SR_reactive) / (SR_tree search − SR_reactive) measures
the extent to which WEBDREAMER narrows the gap between the reactive agent and the tree search
agent.
(a) Websites
Websites      Reactive   Tree Search   WEBDREAMER   γ
Classifieds   16.8%      26.5%         22.6%        59.8%
Reddit        15.3%      20.5%         18.6%        63.5%
Shopping      19.4%      29.0%         26.5%        74.0%
(b) Task Difficulty
Difficulty    Reactive   Tree Search   WEBDREAMER   γ
Easy          28.8%      42.3%         37.4%        63.7%
Medium        16.4%      22.2%         24.1%        132.8%
Hard          10.7%      14.9%         12.7%        47.6%
Efficiency. Another key advantage of model-based planning is its efficiency compared with tree
search using actual explorations. As shown in Table 4, tree search requires approximately three
times more steps than the reactive baseline across all environments, whereas our method maintains a
comparable number of action steps. Notably, tree search introduces about ten times more wall clock latency due to
the extra actions and backtracking, while the simulation overhead in our approach is minimal and
can be further reduced with increased parallelization.
Table 4: Action steps and wall clock time on VWA.
(a) Number of Action Steps
Steps         Reactive   Tree Search   WEBDREAMER
Classifieds   3.4        9.9           4.1
Reddit        5.1        13.6          5.2
Shopping      4.5        11.4          4.5
(b) Task Completion Wall Clock Time
Seconds       Reactive   Tree Search   WEBDREAMER
Classifieds   68.3       749.2         183.6
Reddit        83.5       972.1         233.7
Shopping      87.7       785.7         179.4
Figure 4: We demonstrate the performance on a subset of the VWA dataset, varying both the state
representation within simulations and the planning horizon. Long-horizon planning with simulation
remains challenging, regardless of the state representation employed.
6 ANALYSES
6.1 STATE REPRESENTATION AND PLANNING HORIZON
Our model-based planning approach relies on two critical dimensions for simulation: the state rep-
resentation and the planning horizon (i.e., the simulation depth). To gain deeper insights into its ef-
fectiveness and limitations, we investigate how various configurations affect the final performance.
Given the high computational cost of these experiments, we conduct this analysis using a subset of
the VWA dataset, comprising 100 shopping tasks with officially annotated human trajectories.
In addition to the state change description used in our primary experiments, we explore alterna-
tive approaches where GPT-4o predicts either the HTML code or the accessibility tree of the re-
sulting webpage within the simulation. For each of these state representations, we evaluate plan-
ning horizons of 1, 2, and 3 steps. As depicted in Figure 4, all three state representations sig-
nificantly outperform the reactive baseline. However, their effectiveness diminishes as the plan-
ning horizon extends to 3 steps, indicating a common limitation in long-horizon simulation across
these approaches. Specifically, the action proposal within the simulation tends to hallucinate rel-
evant actions for task completion, even when such actions may not exist in the current state pre-
dicted by the LLM. Notably, the state change representation exhibits the most pronounced per-
formance degradation as planning horizons extend. This decline is particularly severe with a
planning horizon of 3, where performance falls below that of the reactive baseline. This vul-
nerability stems from its implicit specification of available interactive elements on the current
webpage, requiring the model to infer these elements by applying changes to the initial state.
In contrast, HTML and accessibility tree representations provide explicit element information.
Consequently, the state change approach is more
susceptible to hallucination during extended simula-
tions. Despite this limitation, the state change ap-
proach remains a viable choice given the current
capabilities of LLMs.
It matches the performance
of HTML and accessibility tree representations for
planning horizons less than 3 while consuming fewer
output tokens.
6.2 ABLATION STUDY
Figure 3: Ablation study on the simulation stage and self-refinement stage.
To determine if the observed improvements come from specific parts of our model-based planning
approach, we perform ablation studies on the simulation and self-refinement stages, using the same
subset from Section 6.1. We pay special attention to the simulation stage, which is the core of
model-based planning. One might argue that the primary
improvement stems from reranking candidate actions, irrespective of whether this ranking relies on
simulation. To test this idea, we conduct an experiment where we remove the simulation stage com-
pletely and instead ask the reward model to directly evaluate each candidate action. As shown in
Figure 3, this modified reranking approach does lead to some improvement over the reactive baseline,
but the gain is small and still falls well behind WEBDREAMER. These results confirm that the LLM-
based world model simulation plays a crucial role in the planning process. Furthermore, we observe
a decrease in performance when removing the self-refinement stage. Upon closer examination, we
find that this decline is primarily due to the self-refinement module’s ability to effectively filter out
less relevant candidate actions when the next optimal action is clear. In contrast, directly simulating
all actions may introduce additional noise that can negatively impact performance.
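The contrast between the reranking-only ablation and the full method can be summarized in a short sketch; `score_action` is an assumed helper that judges a candidate action directly, without any simulated rollout.
```python
def select_action_reranking_only(obs, candidates, score_action):
    """Ablation: rank candidate actions directly, skipping world-model simulation."""
    return max(candidates, key=lambda a: score_action(obs, a))

def select_action_webdreamer(obs, candidates, sim, score):
    """Full method: rank candidates by the score of their simulated trajectories."""
    return max(candidates, key=lambda a: score(sim(obs, a)))
```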
6.3 CASE STUDY
To clarify the role of simulation in planning, we present a case study covering both positive and
negative examples. This illustrates how simulation aids the agent in exploring the environment, as
well as how inaccuracies in simulation can lead to incorrect predictions. Detailed examples are
provided in Appendix B.
7 CONCLUSION
In this paper, we demonstrate the strong potential of using LLMs as world models to support plan-
ning in complex environments. Specifically, our model-based planning approach, WEBDREAMER,
shows substantial improvement over reactive baselines and offers greater flexibility than tree search,
which is often impossible in real-world websites. As a pioneering effort in this area, our work opens
new avenues for model-based planning with LLM-simulated world models. Future work can focus
on further optimizing LLMs as world models for complex environments and developing more robust
model-based planning algorithms for long-horizon planning.
LIMITATIONS
Our study, as a pioneering exploration of MPC-based planning with LLMs for web navigation,
naturally comes with several limitations, which are also exciting future research directions:
Simplicity of Planning Algorithm. In this preliminary work, we deliberately employed a straight-
forward planning algorithm to demonstrate the core potential of our approach. While effective, this
simplicity leaves ample room for future enhancements. More sophisticated planning techniques,
such as Monte Carlo Tree Search (MCTS), could be integrated to further improve performance. As
a foundational study, our focus was on establishing the viability of the concept rather than opti-
mizing every aspect of the system. This strategic choice allows future research to build upon our
findings and explore more advanced planning strategies within the framework we’ve established.
Computational Cost. Our current implementation, utilizing state-of-the-art models like GPT-4o,
incurs non-trivial API costs (approximately $1 per task on VWA). This cost reflects our prioriti-
zation of exploring the full potential of LLM-based planning without immediate constraints. For
practical applications, future work could investigate cost-effective alternatives such as fine-tuning
specialized models for simulation tasks. This sets a benchmark for future optimizations that balance
performance and efficiency.
These limitations underscore the nature of our work as a proof of concept, opening up numerous
avenues for future research and optimization. By establishing the foundational potential of MPC-
based planning with LLMs, we have laid the groundwork for a new planning paradigm for LLM-
based language agents, inviting further innovations that can refine and extend model-based planning.
ACKNOWLEDGMENTS
We would like to extend our appreciation to colleagues from the OSU NLP group and Orby AI for
their insightful comments. This work is supported in part by Orby AI and ARL W911NF2220144.
The views and conclusions contained herein are those of the authors and should not be interpreted as
representing the official policies, either expressed or implied, of the U.S. government. The U.S. gov-
ernment is authorized to reproduce and distribute reprints for government purposes notwithstanding
any copyright notice herein.
REFERENCES
Gilles Baechler, Srinivas Sunkara, Maria Wang, Fedir Zubach, Hassan Mansoor, Vincent Etter, Vic-
tor C˘arbune, Jason Lin, Jindong Chen, and Abhanshu Sharma. Screenai: A vision-language
model for ui and infographics understanding. ArXiv preprint, abs/2402.04615, 2024. URL
https://arxiv.org/abs/2402.04615.
Bradley Brown, Jordan Juravsky, Ryan Ehrlich, Ronald Clark, Quoc V Le, Christopher R´e, and
Azalia Mirhoseini. Large language monkeys: Scaling inference compute with repeated sampling.
ArXiv preprint, abs/2407.21787, 2024. URL https://arxiv.org/abs/2407.21787.
Hyungjoo Chae, Namyoung Kim, Kai Tzu-iunn Ong, Minju Gwak, Gwanwoo Song, Jihoon Kim,
Sunghwan Kim, Dongha Lee, and Jinyoung Yeo. Web agents with world models: Learning and
leveraging environment dynamics in web navigation. arXiv preprint arXiv:2410.13232, 2024.
Kanzhi Cheng, Qiushi Sun, Yougang Chu, Fangzhi Xu, Li YanTao, Jianbing Zhang, and Zhiyong
Wu. SeeClick: Harnessing GUI grounding for advanced visual GUI agents. In Proceedings
of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long
Papers), pp. 9313–9332, Bangkok, Thailand, 2024. Association for Computational Linguistics.
URL https://aclanthology.org/2024.acl-long.505.
Xiang Deng, Yu Gu, Boyuan Zheng, Shijie Chen, Samual Stevens, Boshi Wang, Huan Sun,
and Yu Su. Mind2web: Towards a generalist agent for the web. In Alice Oh, Tristan Naumann,
Amir Globerson, Kate Saenko, Moritz Hardt, and Sergey Levine (eds.), Advances in Neural
Information Processing Systems 36: Annual Conference on Neural Information Processing
Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10-16, 2023, 2023. URL
http://papers.nips.cc/paper_files/paper/2023/hash/5950bf290a1570ea401bf98882128160-Abstract-Datasets_and_Benchmarks.html.
Xidong Feng, Ziyu Wan, Muning Wen, Stephen Marcus McAleer, Ying Wen, Weinan Zhang, and
Jun Wang. Alphazero-like tree-search can guide large language model decoding and training.
ArXiv preprint, abs/2309.17179, 2023. URL https://arxiv.org/abs/2309.17179.
Hiroki Furuta, Kuang-Huei Lee, Ofir Nachum, Yutaka Matsuo, Aleksandra Faust, Shixiang Shane
Gu, and Izzeddin Gur. Multimodal web navigation with instruction-finetuned foundation models.
ArXiv preprint, abs/2305.11854, 2023. URL https://arxiv.org/abs/2305.11854.
Carlos E Garcia, David M Prett, and Manfred Morari. Model predictive control: Theory and prac-
tice—a survey. Automatica, 25(3):335–348, 1989.
Yu Gu, Xiang Deng, and Yu Su. Don’t generate, discriminate: A proposal for grounding language
models to real-world environments. In Proceedings of the 61st Annual Meeting of the Association
for Computational Linguistics (Volume 1: Long Papers), pp. 4928–4949, Toronto, Canada, July
2023. Association for Computational Linguistics. doi: 10.18653/v1/2023.acl-long.270. URL
https://aclanthology.org/2023.acl-long.270.
Izzeddin Gur, Hiroki Furuta, Austin Huang, Mustafa Safdari, Yutaka Matsuo, Douglas Eck, and
Aleksandra Faust. A real-world webagent with planning, long context understanding, and pro-
gram synthesis. ArXiv preprint, abs/2307.12856, 2023. URL https://arxiv.org/abs/
2307.12856.
David Ha and J¨urgen Schmidhuber. World models. ArXiv preprint, abs/1803.10122, 2018. URL
https://arxiv.org/abs/1803.10122.
Shibo Hao, Yi Gu, Haodi Ma, Joshua Hong, Zhen Wang, Daisy Wang, and Zhiting Hu. Reasoning
with language model is planning with world model. In Houda Bouamor, Juan Pino, and Kalika
Bali (eds.), Proceedings of the 2023 Conference on Empirical Methods in Natural Language
Processing, pp. 8154–8173, Singapore, 2023. Association for Computational Linguistics. doi:
10.18653/v1/2023.emnlp-main.507. URL https://aclanthology.org/2023.emnlp-main.507.
Hongliang He, Wenlin Yao, Kaixin Ma, Wenhao Yu, Yong Dai, Hongming Zhang, Zhenzhong Lan,
and Dong Yu. Webvoyager: Building an end-to-end web agent with large multimodal models.
ArXiv preprint, abs/2401.13919, 2024. URL https://arxiv.org/abs/2401.13919.
Wenyi Hong, Weihan Wang, Qingsong Lv, Jiazheng Xu, Wenmeng Yu, Junhui Ji, Yan Wang, Zihan
Wang, Yuxiao Dong, Ming Ding, and Jie Tang. Cogagent: A visual language model for gui agents.
In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp.
14281–14290, 2024.
Doyoung Kim, Jongwon Lee, Jinho Park, and Minjoon Seo. Cognitive map for language models:
Optimal planning via verbally representing the world model. ArXiv preprint, abs/2406.15275,
2024. URL https://arxiv.org/abs/2406.15275.
Jing Yu Koh, Robert Lo, Lawrence Jang, Vikram Duvvur, Ming Chong Lim, Po-Yu Huang, Graham
Neubig, Shuyan Zhou, Ruslan Salakhutdinov, and Daniel Fried. Visualwebarena: Evaluating
multimodal agents on realistic visual web tasks. ArXiv preprint, abs/2401.13649, 2024a. URL
https://arxiv.org/abs/2401.13649.
Jing Yu Koh, Stephen McAleer, Daniel Fried, and Ruslan Salakhutdinov. Tree search for language
model agents. ArXiv preprint, abs/2407.01476, 2024b. URL https://arxiv.org/abs/
2407.01476.
Hanyu Lai, Xiao Liu, Iat Long Iong, Shuntian Yao, Yuxuan Chen, Pengbo Shen, Hao Yu, Hanchen
Zhang, Xiaohan Zhang, Yuxiao Dong, and Jie Tang. Autowebglm: A large language model-
based web navigating agent. In Proceedings of the 30th ACM SIGKDD Conference on Knowledge
Discovery and Data Mining, pp. 5295–5306, 2024.
Kenton Lee, Mandar Joshi, Iulia Raluca Turc, Hexiang Hu, Fangyu Liu, Julian Martin Eisensch-
los, Urvashi Khandelwal, Peter Shaw, Ming-Wei Chang, and Kristina Toutanova. Pix2struct:
Screenshot parsing as pretraining for visual language understanding. In Andreas Krause, Emma
Brunskill, Kyunghyun Cho, Barbara Engelhardt, Sivan Sabato, and Jonathan Scarlett (eds.), In-
ternational Conference on Machine Learning, ICML 2023, 23-29 July 2023, Honolulu, Hawaii,
USA, volume 202 of Proceedings of Machine Learning Research, pp. 18893–18912. PMLR, 2023.
URL https://proceedings.mlr.press/v202/lee23g.html.
Zeyi Liao, Lingbo Mo, Chejian Xu, Mintong Kang, Jiawei Zhang, Chaowei Xiao, Yuan Tian, Bo Li,
and Huan Sun. EIA: environmental injection attack on generalist web agents for privacy leakage.
CoRR, abs/2409.11295, 2024. doi: 10.48550/ARXIV.2409.11295. URL https://doi.org/
10.48550/arXiv.2409.11295.
Evan Zheran Liu, Kelvin Guu, Panupong Pasupat, Tianlin Shi, and Percy Liang. Reinforcement
learning on web interfaces using workflow-guided exploration. In 6th International Conference
on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018,
Conference Track Proceedings. OpenReview.net, 2018. URL https://openreview.net/forum?id=ryTp3f-0-.
Marcelo G Mattar and M´at´e Lengyel. Planning in the brain. Neuron, 110(6):914–934, 2022.
Thomas M Moerland, Joost Broekens, Aske Plaat, Catholijn M Jonker, et al. Model-based rein-
forcement learning: A survey. Foundations and Trends® in Machine Learning, 16(1):1–118,
2023.
OpenAI. Hello GPT-4o. https://openai.com/index/hello-gpt-4o/, 2024a. Ac-
cessed: 2024-09-28.
OpenAI. Introducing OpenAI o1. https://openai.com/o1/, 2024b. Accessed: 2024-09-29.
Jiayi Pan, Yichi Zhang, Nicholas Tomlin, Yifei Zhou, Sergey Levine, and Alane Suhr. Autonomous
evaluation and refinement of digital agents. ArXiv preprint, abs/2404.06474, 2024a. URL
https://arxiv.org/abs/2404.06474.
Yichen Pan, Dehan Kong, Sida Zhou, Cheng Cui, Yifei Leng, Bing Jiang, Hangyu Liu, Yanyi Shang,
Shuyan Zhou, Tongshuang Wu, and Zhengyang Wu. Webcanvas: Benchmarking web agents in
online environments. ArXiv preprint, abs/2406.12373, 2024b. URL https://arxiv.org/
abs/2406.12373.
Razvan Pascanu, Yujia Li, Oriol Vinyals, Nicolas Heess, Lars Buesing, Sebastien Racani`ere, David
Reichert, Th´eophane Weber, Daan Wierstra, and Peter Battaglia. Learning model-based plan-
ning from scratch. ArXiv preprint, abs/1707.06170, 2017. URL https://arxiv.org/abs/
1707.06170.
Ajay Patel, Markus Hofmarcher, Claudiu Leoveanu-Condrei, Marius-Constantin Dinu, Chris
Callison-Burch, and Sepp Hochreiter. Large language models can self-improve at web agent tasks.
ArXiv preprint, abs/2405.20309, 2024. URL https://arxiv.org/abs/2405.20309.
Pranav Putta, Edmund Mills, Naman Garg, Sumeet Motwani, Chelsea Finn, Divyansh Garg, and
Rafael Rafailov. Agent q: Advanced reasoning and learning for autonomous ai agents. ArXiv
preprint, abs/2408.07199, 2024. URL https://arxiv.org/abs/2408.07199.
Julian Schrittwieser, Ioannis Antonoglou, Thomas Hubert, Karen Simonyan, Laurent Sifre, Simon
Schmitt, Arthur Guez, Edward Lockhart, Demis Hassabis, Thore Graepel, et al. Mastering atari,
go, chess and shogi by planning with a learned model. Nature, 588(7839):604–609, 2020.
Peter Shaw, Mandar Joshi, James Cohan, Jonathan Berant, Panupong Pasupat, Hexiang Hu,
Urvashi Khandelwal, Kenton Lee, and Kristina Toutanova. From pixels to UI actions:
Learning to follow instructions via graphical user interfaces. In Alice Oh, Tristan Naumann,
Amir Globerson, Kate Saenko, Moritz Hardt, and Sergey Levine (eds.), Advances in Neural
Information Processing Systems 36: Annual Conference on Neural Information Processing
Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10-16, 2023, 2023. URL
http://papers.nips.cc/paper_files/paper/2023/hash/6c52a8a4fadc9129c6e1d1745f2dfd0f-Abstract-Conference.html.
Tianlin Shi, Andrej Karpathy, Linxi Fan, Jonathan Hernandez, and Percy Liang. World of bits:
An open-domain platform for web-based agents.
In Doina Precup and Yee Whye Teh (eds.),
Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney,
NSW, Australia, 6-11 August 2017, volume 70 of Proceedings of Machine Learning Research,
pp. 3135–3144. PMLR, 2017. URL http://proceedings.mlr.press/v70/shi17a.
html.
David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche,
Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering
the game of go with deep neural networks and tree search. nature, 529(7587):484–489, 2016.
Yifan Song, Da Yin, Xiang Yue, Jie Huang, Sujian Li, and Bill Yuchen Lin. Trial and error:
Exploration-based trajectory optimization for llm agents. ArXiv preprint, abs/2403.02502, 2024.
URL https://arxiv.org/abs/2403.02502.
Richard S Sutton. Dyna, an integrated architecture for learning, planning, and reacting. ACM Sigart
Bulletin, 2(4):160–163, 1991.
Evan Wang, Federico Cassano, Catherine Wu, Yunfeng Bai, Will Song, Vaskar Nath, Ziwen Han,
Sean Hendryx, Summer Yue, and Hugh Zhang. Planning in natural language improves llm search
for code generation. ArXiv preprint, abs/2409.03733, 2024. URL https://arxiv.org/
abs/2409.03733.
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed H. Chi,
Quoc V. Le, and Denny Zhou. Chain-of-thought prompting elicits reasoning in large language
models.
In Sanmi Koyejo, S. Mohamed, A. Agarwal, Danielle Belgrave, K. Cho, and A. Oh
(eds.), Advances in Neural Information Processing Systems 35: Annual Conference on Neural
Information Processing Systems 2022, NeurIPS 2022, New Orleans, LA, USA, November 28 - De-
cember 9, 2022, 2022. URL http://papers.nips.cc/paper_files/paper/2022/
hash/9d5609613524ecf4f15af0f7b31abca4-Abstract-Conference.html.
Shunyu Yao, Howard Chen, John Yang, and Karthik Narasimhan. Webshop: Towards scalable
real-world web interaction with grounded language agents. In Sanmi Koyejo, S. Mohamed,
A. Agarwal, Danielle Belgrave, K. Cho, and A. Oh (eds.), Advances in Neural Information
Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022,
NeurIPS 2022, New Orleans, LA, USA, November 28 - December 9, 2022, 2022. URL
http://papers.nips.cc/paper_files/paper/2022/hash/82ad13ec01f9fe44c01cb91814fd7b8c-Abstract-Conference.html.
Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Tom Griffiths, Yuan Cao, and Karthik
Narasimhan. Tree of thoughts: Deliberate problem solving with large language models.
In
Alice Oh, Tristan Naumann, Amir Globerson, Kate Saenko, Moritz Hardt, and Sergey Levine
(eds.), Advances in Neural Information Processing Systems 36: Annual Conference on Neu-
ral Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December
10 - 16, 2023, 2023a. URL http://papers.nips.cc/paper_files/paper/2023/
hash/271db9922b8d1f4dd7aaef84ed5ac703-Abstract-Conference.html.
Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik R. Narasimhan, and Yuan
Cao. React: Synergizing reasoning and acting in language models. In The Eleventh International
Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023. OpenRe-
view.net, 2023b. URL https://openreview.net/forum?id=WE_vluYUL-X.
Keen You, Haotian Zhang, Eldon Schoop, Floris Weers, Amanda Swearngin, Jeffrey Nichols, Yinfei
Yang, and Zhe Gan. Ferret-ui: Grounded mobile ui understanding with multimodal llms. ArXiv
preprint, abs/2404.05719, 2024. URL https://arxiv.org/abs/2404.05719.
Yao Zhang, Zijian Ma, Yunpu Ma, Zhen Han, Yu Wu, and Volker Tresp. Webpilot: A versatile and
autonomous multi-agent system for web task execution with strategic exploration. ArXiv preprint,
abs/2408.15978, 2024. URL https://arxiv.org/abs/2408.15978.
Boyuan Zheng, Boyu Gou, Jihyung Kil, Huan Sun, and Yu Su. Gpt-4v(ision) is a generalist web
agent, if grounded. In Forty-first International Conference on Machine Learning, 2024. URL
https://openreview.net/forum?id=piecKJ2DlB.
Shuyan Zhou, Frank F Xu, Hao Zhu, Xuhui Zhou, Robert Lo, Abishek Sridhar, Xianyi Cheng,
Yonatan Bisk, Daniel Fried, Uri Alon, et al. Webarena: A realistic web environment for building
autonomous agents. ArXiv preprint, abs/2307.13854, 2023. URL https://arxiv.org/
abs/2307.13854.
A PROMPTS FOR FOUR STAGES IN MPC-BASED PLANNING
A.1 ACTION PROPOSAL
Action Proposal
You are an autonomous intelligent agent tasked with navigating a web browser. You will be given
web-based tasks. These tasks will be accomplished through the use of specific actions you can issue.
Here’s the information you’ll have: {Web Information}
The user’s objective: {Task Objective} This is the task you’re trying to complete.
The current web page screenshot: {Web Page Screenshot Image} This is a screenshot of the
webpage, with each interactable element assigned a unique numerical id. Each bounding box and its
respective id shares the same color.
The observation, which lists the IDs of all interactable elements on the current web page with their
text content if any, in the format [id][tagType][text content]. tagType is the type of the
element, such as button, link, or textbox. text content is the text content of the element. For example,
[1234][button][’Add to Cart’] means that there is a button with id 1234 and text content
’Add to Cart’ on the current web page. [][StaticText][text] means that the element is
of some text that is not interactable.
The current web page’s URL: {Web URL} This is the page you’re currently navigating.
The open tabs: {Previous Tabs} These are the tabs you have open.
The previous action: {Previous Action} This is the action you just performed. It may be helpful
to track your progress.
The actions you can perform fall into several categories:
Page Operation Actions:
- click [id]: This action clicks on an element with a specific id on the webpage.
- type [id] [content]: Use this to type the content into the field with id. By default,
the Enter key is pressed after typing unless press enter after is set to 0, i.e., type [id]
[content] [0].
- hover [id]: Hover over an element with id.
- press [key comb]: Simulates the pressing of a key combination on the keyboard (e.g., Ctrl+V)
- scroll [down] or scroll [up]: Scroll the page up or down.
Tab Management Actions:
- new tab: Open a new, empty browser tab.
- tab focus [tab index]: Switch the browser’s focus to a specific tab using its index.
- close tab: Close the currently active tab.
URL Navigation Actions:
- goto [url]: Navigate to a specific URL.
- go back: Navigate to the previously viewed page.
- go forward: Navigate to the next page (if a previous go back action was performed).
Completion Action:
- stop [answer]: Issue this action when you believe the task is complete. If the objective is to
find a text-based answer, provide the answer in the bracket.
Homepage:
If you want to visit other websites, check out the homepage at http://homepage.com. It has a list of
websites you can visit. http://homepage.com/password.html lists all the account name and password
for the websites. You can use them to log in to the websites.
To be successful, it is very important to follow the following rules:
1. You should only issue an action that is valid given the current observation
2. You should only issue one action at a time.
3. You should follow the examples to reason step by step and then issue the next action.
4. Generate the action in the correct format. Start with a “In summary, the next action I will perform
is” phrase, followed by action. For example, In summary, the next action I will perform is click
[1234].
5. Issue stop action when you think you have achieved the objective. Don’t generate anything after
stop.
A.2 SELF-REFINEMENT
Self-Refinement
You are assisting a web navigation agent to help a human user navigate a website to complete a task.
Given the user’s intent, the action history, and the current state of the webpage, the agent has proposed
a set of candidate actions to take at the current step.
Your role is not to determine a best action for the agent at this step, but to filter out the actions that are
very likely not relevant or helpful for the agent to accomplish the task.
Please select all actions that you think that could possibly lead the agent to accomplish the task. It’s
important to note that to accomplish a task, the agent will execute a sequence of actions. So the action
to take at this step does not have to immediately lead to the completion of the task. You should select
any action that could be relevant for the agent to take in the current state of the webpage. Try to be as
thoughtful and comprehensive as you can! Don’t miss any possible action. If there is one action that
is clearly the best, and all other actions are clearly not very relevant, you can only select one action.
Please do this sparingly, since some actions may be helpful in a longer horizon.
An action should be included as long as it could be relevant to the task, even if it may not be the most
direct action to take at this step!! Some relevant actions might seem indirect at the first glance, but
could be helpful in a longer horizon. Please also include those actions.
Please at least select one action.
*IMPORTANT*
Format your response into two lines as shown below:
Thoughts: <your thoughts and reasoning process>. You must explicitly evaluate each
action one by one and imagine whether it could be relevant to the task following the format:
action:... rationale:...
Selected actions: id0;id1;id2;... (please return the index of the action in the candidate actions
list, starting from 0. Don’t output the action description itself. Separate the indices with semicolons.
Do not add spaces or any other characters between after the semicolons.)
Action History: {last actions str}
Current URL: {current url}
The images corresponding to the user intent are shown in the FIRST {len(intent images)}
images (before the User Intent).
The last {len(screenshots)} snapshots of the agent’s trajectory are shown in the LAST
{len(screenshots)} images. The LAST IMAGE represents the current state of the webpage.
Proposed Action: {action descriptions}
A.3 WORLD MODEL
World Model
You are an agent that predicts the effect of an action on a webpage. You will be given a screenshot of a
webpage, a sequence of actions and state changes applied to the initial screenshot, and an operation to
perform on the webpage. You are required to predict the new changes that will occur on the webpage
after the operation is performed, such as the appearance of new elements, the disappearance of existing
elements, or changes in the content of existing elements. The operation type and the element to operate
will be provided in the prompt. Directly output State changes:... and don’t output anything
else. Try to be as comprehensive and detailed as possible.
Based on the initial screenshot and the changes to the webpage, please predict the changes after action:
A.4 REWARD MODEL
Reward Model
You are an expert in evaluating the performance of a web navigation agent. The agent is designed to
help a human user navigate a website to complete a task. Given the user’s intent, the agent’s action
history, the current state of the webpage, your goal is to decide **whether the simulated steps by
the agent indicate a successful execution of the user intent**. In particular, if the predicted state
(i.e., the current state represented by the last image plus all the predicted changes so far) corresponds
to a successful final state. If it is a failure but it looks like the simulated steps are on the right track
towards success, you should also output as such. Note that, in the simulated steps, all the state changes
are predicted by the agent’s world model, and they may not actually be faithful to the real website
interactions (e.g., some proposed actions may not be available in a realistic website). You should also
account for this in your evaluation (e.g., if the predicted state changes are not reasonable then it’s
probably a failure).
*IMPORTANT*
Format your response into two lines as shown below:
Thoughts: <your thoughts and reasoning process>
Status: "success" or "failure"
On the right track to success: "yes" or "no"
B CASE STUDY
B.1 ERROR CAUSED BY IMPERFECT WORLD MODEL SIMULATION
Figure 5: An error case caused by imperfect world model simulation.
B.2 POSITIVE CASE BENEFITING FROM WORLD MODEL SIMULATION
Figure 6: A positive case where the simulation leads to correct action prediction.
ai_researcher | 1 | Difficult_Conversations_A_Collaborative_Interprofessional_Simulation_for_Social_Work_Athletic_Training_and_Physician_Assistant_Programs.pdf | DurIAN-SC: Duration Informed Attention Network based Singing Voice
Conversion System
Liqiang Zhang∗1, Chengzhu Yu2, Heng Lu2, Chao Weng2, Chunlei Zhang2, Yusong Wu2,
Xiang Xie1, Zijin Li3, Dong Yu2
1Beijing Institute of Technology
2Tencent AI Lab
3China Conservatory of Music
{zhlq,xiexiang}@bit.edu.cn,{czyu,bearlu,cweng,cleizhang,ysw,dyu}@tencent.com
Abstract
Singing voice conversion converts the timbre of the source singing to the target speaker's voice
while keeping the singing content the same. However, singing data for a target speaker is much
more difficult to collect than normal speech data. In this paper, we introduce a singing voice
conversion algorithm that is capable of generating high-quality singing in the target speaker's
voice using only his/her normal speech data. First, we manage to integrate the training and
conversion processes of speech and singing into one framework by unifying the features used in
standard speech synthesis and singing synthesis systems. In this way, normal speech data can also
contribute to singing voice conversion training, making the singing voice conversion system more
robust, especially when the singing database is small. Moreover, in order to achieve one-shot
singing voice conversion, a speaker embedding module is developed using both speech and singing
data, which provides target speaker identity information during conversion. Experiments indicate
that the proposed singing voice conversion system can convert source singing to the target speaker's
high-quality singing with only 20 seconds of the target speaker's
enrollment speech data.
Index Terms: Singing Voice Conversion, Singing Synthesis,
Speaker D-vector, Speaker Embedding
1. Introduction
Singing is one of the predominant forms of the musical arts, and
singing voice conversion and synthesis can have many potential
applications in entertainment industries. Over the past decades,
many methods have been proposed to increase the naturalness
of synthesized singing. These include the methods based on
unit selection and concatenation[1] as well as the more recent
approaches based on deep neural network (DNN) [2] and auto-
regressive generation models [3].
While existing singing synthesis algorithms are able to produce natural singing, they typically
require a large amount of singing data from the same speaker in order to generate his/her
singing. Compared to normal speech data collection, singing
data is much more difficult and more expensive to obtain. To
alleviate such limitations, data efficient singing synthesis ap-
proaches [4] have been proposed recently. In [4], a large singing
synthesis model trained on multi-speaker data is adaptively fine-
tuned with a small amount of target speaker’s singing data to
generate the target singing model. Alternatively, singing gen-
eration for new voices can be achieved through singing voice
conversion. The goal of singing voice conversion is to convert the source singing to the timbre of
the target speaker while keeping the singing content untouched.
∗Work performed while interning at Tencent AI Lab.
Traditional singing voice conversion [5, 6, 7] relies on parallel singing data to learn conversion
functions between different speakers. However, a recent study
[8] proposed an unsupervised singing voice conversion method
based on a WaveNet [9] autoencoder architecture to achieve non-parallel
singing voice conversion. In [8], neither singing data
nor transcribed lyrics or notes are needed.
While the above-mentioned methods can efficiently generate
singing with new voices, they still require a substantial amount
of singing voice samples from target speakers. This limits the
applications of singing generation to relatively restricted sce-
narios where there has to be target speaker’s singing data. On
the other hand, normal speech samples are much easier to col-
lect than singing. There are only limited studies on investigat-
ing the use of normal speech data to enhance singing generation.
The speech-to-singing synthesis method proposed in [10] at-
tempts to convert a speaking voice to singing by directly modi-
fying acoustic features such as f0 contour and phone duration
extracted from reading speech. While speech-to-singing ap-
proaches can produce singing from read lyrics, they normally
require a non-trivial amount of manual tuning of acoustic fea-
tures for achieving high intelligibility and naturalness of singing
voices.
Duration Informed Attention Network (DurIAN)[11], orig-
inally proposed for the task of multimodal synthesis, is essen-
tially an autoregressive feature generation framework that can
generate acoustic features (e.g., mel-spectrogram) for any audio
source frame by frame. In this paper, we propose a DurIAN-based speech and singing voice
conversion system (DurIAN-SC), a unified speech and singing conversion framework1. There
are two major contributions of the proposed method: 1) Although the input features for
conventional speech synthesis and singing synthesis are different, the proposed framework unifies the
training process for both speech and singing synthesis. Thus in
this work, we can even train the singing voice conversion model
just using speech data. 2) Instead of the commonly used train-
able Look Up Table (LUT)[8] for speaker embedding, we use a
pre-trained speaker embedding network module for speaker d-
vector[12, 13] extraction. Extracted speaker d-vectors are then
fed into singing voice conversion network as the speaker em-
bedding to represent the speaker identity. During conversion,
only 20 seconds speech or singing data is needed for the tester’s
d-vector extraction. Experiments show proposed algorithm can
generates high-quality singing voices when using only speech
1 Sound demo of the proposed algorithm can be found at https://tencent-ailab.github.io/learning_singing_from_speech
Figure 1: Model architecture of DurIAN-SC. RMSE means root mean square energy, FC represents the fully connected layer, and Expansion means expanding the time dimension to frame level.
data. The Mean Opinion Scores (MOS) for naturalness and similarity indicate that our system can perform one-shot singing voice conversion with only 20 seconds of the tester's speech data.
The paper is organized as follows. Section 2 introduces the architecture of our proposed conversion model. Experiments are presented in Section 3, and Section 4 concludes the paper.
2. Model Architecture
2.1. DurIAN-SC
While DurIAN was originally proposed for the task of multimodal speech synthesis, it has many advantages over conventional end-to-end frameworks, especially its stability in synthesis and its duration controllability. The original DurIAN model is modified here to perform speech and singing synthesis at the same time. We use text/song lyrics as one of the inputs for both speech and singing data. The text or song lyrics are then transformed into a phone sequence with prosody tokens by a text-to-speech (TTS) front-end module. The commonly used music score is not used in our singing voice conversion framework. Instead, we use frame-level f0 and average root mean square energy (RMSE) extracted from the original singing/speech as additional input conditions (Fig. 1). For singing voice conversion, the f0 and rhythm are entirely determined by the score notes and the content itself, and this is the part we do not convert unless there is a large gap between the source and target speakers' singing pitch ranges. Furthermore, we found that using RMSE as an input condition in training makes the loss converge much faster.
The architecture of DurIAN-SC is illustrated in Fig. 1. It includes (1) an encoder that encodes the context of each phone, (2) an alignment model that aligns the input phone sequence to the target acoustic frames, and (3) an auto-regressive decoder network that generates the target mel-spectrogram features frame by frame.
2.1.1. Encoder
We use the phone sequence x1:N directly as input for both speech and singing synthesis. The output of the encoder h1:N is a sequence of hidden states containing the sequential representation of the input phones:

h1:N = encoder(x1:N)    (1)

where N is the length of the input phone sequence. The encoder module consists of a phone embedding, fully connected layers and a CBHG [14] module, which is a combination of convolution layers, a highway network [15] and a bidirectional GRU [16].
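To make the encoder structure concrete, the following is a minimal PyTorch sketch; it is an illustration only, with the full CBHG module replaced by a single fully connected layer plus bidirectional GRU, and all class names and layer sizes are our own assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class SimpleEncoder(nn.Module):
    """Simplified stand-in for the DurIAN-SC encoder: phone embedding ->
    fully connected layer -> bidirectional GRU (in place of the full CBHG)."""
    def __init__(self, num_phones, emb_dim=256, hidden_dim=256):
        super().__init__()
        self.embedding = nn.Embedding(num_phones, emb_dim)
        self.fc = nn.Linear(emb_dim, hidden_dim)
        # A single BiGRU approximates the sequential modeling done by the CBHG.
        self.gru = nn.GRU(hidden_dim, hidden_dim // 2,
                          batch_first=True, bidirectional=True)

    def forward(self, phone_ids):            # phone_ids: (batch, N)
        x = torch.relu(self.fc(self.embedding(phone_ids)))
        h, _ = self.gru(x)                   # h: (batch, N, hidden_dim)
        return h
```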
2.1.2. Alignment model
The purpose of the alignment model is to generate frame-aligned hidden states that are then fed into the auto-regressive decoder. The output hidden sequence from the encoder h1:N is first expanded according to the duration of each phone:

e1:T = state_expand(h1:N, d1:N)    (2)

where T is the total number of input audio frames. The state expansion is simply the replication of hidden states according to the provided phone durations d1:N. The duration of each phone is obtained from forced alignment performed on the input source phones and acoustic feature sequences. The frame-aligned hidden states e1:T are then concatenated with the frame-level f0, RMSE and speaker embedding, as shown in Fig. 1:

e′1:T = FC(e1:T ∨ f1:T ∨ r1:T ∨ D1:T)    (3)

where ∨ indicates concatenation, FC indicates the fully connected layer, f1:T represents the f0 for each frame, r1:T is the RMSE for each frame, and D1:T represents the speaker embedding expanded to frame level.
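The state expansion and conditioning of Eqs. (2)-(3) can be sketched as follows; this is a simplified illustration in which the tensor shapes, names and helper function are our assumptions, not the original code.

```python
import torch

def expand_and_condition(h, durations, f0, rmse, spk_emb, fc):
    """h: (N, d) encoder states; durations: (N,) integer frame counts per phone;
    f0, rmse: (T,) frame-level conditions; spk_emb: (d_spk,) speaker d-vector;
    fc: a torch.nn.Linear mapping the concatenated features to the decoder size."""
    # Eq. (2): replicate each phone's hidden state for its number of frames.
    e = torch.repeat_interleave(h, durations, dim=0)           # (T, d)
    T = e.shape[0]
    # Expand the utterance-level speaker embedding to frame level.
    D = spk_emb.unsqueeze(0).expand(T, -1)                     # (T, d_spk)
    # Eq. (3): concatenate with frame-level f0 and RMSE, then apply the FC layer.
    e_prime = fc(torch.cat([e, f0.unsqueeze(1), rmse.unsqueeze(1), D], dim=1))
    return e_prime
```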
2.1.3. Decoder
The decoder is the same as in DurIAN, composed of two auto-regressive RNN layers. Different from the attention mechanism used in end-to-end systems, the attention context here is computed from a small number of encoded hidden states that are aligned with the target frames, which reduces the artifacts observed in end-to-end systems [14]. We decode two frames per time step in our system. The output from the decoder network y′1:T is passed through a post-CBHG [14] to improve the quality of the predicted mel-spectrogram, where

y′1:T = decoder(e′1:T)    (4)
2.2.2. Speaker embedding network
To provide DurIAN-SC with robust speaker embeddings for Mandarin, external Mandarin corpora are used to train a speaker embedding network, which is then used as a pre-trained module. The external training set contains 8,800 speakers drawn from two gender-balanced public speech recognition datasets2. The training data is then augmented 2-fold to incorporate variability from distance (reverberation), channel, or background noise, resulting in a training pool of 2.8M utterances. 257-dimensional raw short-time Fourier transform (STFT) features are extracted with a 32 ms window and a frame shift of 16 ms. The non-speech parts are removed by energy-based voice activity detection. Each utterance is randomly segmented into 100-200 frames to control the duration variability in the training phase. For the network architecture, we employ a TDNN framework similar to [13, 19]. The speaker embedding training is guided by a multi-task loss, which employs both the large margin cosine loss (LMCL) and the triplet loss [20, 21, 22].
To further boost the capability on singing data, the internal singing corpus is incorporated into the speaker embedding training. Since the singing corpus is not provided with speaker labels, we employ bottom-up hierarchical agglomerative clustering (HAC) to assign a pseudo speaker label to each singing segment. Specifically, we first extract speaker embeddings for the singing corpus using the external speaker embedding model. Then, HAC is applied to produce 1,000 speaker "IDs" from the training singing corpus (3,500 singing segments). Finally, the clustered corpus is pooled with the external speech data for another round of speaker embedding training. The final system is used to extract speaker embeddings for speech/singing.
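The pseudo-labeling step described above can be sketched with scikit-learn as below; the cluster count and segment count come from the text, while the file name and clustering settings are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

# singing_embeddings: (num_segments, emb_dim) d-vectors extracted for the
# unlabeled singing segments with the external (speech-only) embedding model.
singing_embeddings = np.load("singing_dvectors.npy")  # hypothetical file

# Bottom-up HAC producing 1,000 pseudo speaker "IDs" for ~3,500 segments.
hac = AgglomerativeClustering(n_clusters=1000)
pseudo_speaker_ids = hac.fit_predict(singing_embeddings)

# The clustered singing segments (with pseudo IDs) are then pooled with the
# labeled external speech data for another round of embedding training.
```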
2.2.3. Training and conversion process
In the training stage, both normal speech and singing data can be used as training data. The f0, RMSE, phone sequence and phone durations are extracted as described in Section 2.2.1. Speaker embeddings are extracted using the pre-trained speaker embedding network introduced in the previous section. The DurIAN-SC model is then trained on these extracted acoustic features and speaker embeddings.
In the singing voice conversion stage, f0, RMSE and phone durations are extracted from the source singing and later used as conditions in the conversion process. Using the pre-trained speaker embedding network, the target speaker embedding can be obtained from only 20 seconds of the target speaker's singing or speech data. By conditioning on the extracted target speaker embedding, a mel-spectrogram with the target speaker's timbre can be generated by the model trained as described above. Finally, WaveRNN [17] is employed as the neural vocoder for waveform generation.
In case there is a large gap between the source and target speakers' singing pitch ranges, which often happens when performing cross-gender conversion, we shift the original source key linearly to make it easier for the target speaker to 'sing' the same song as the source. The input f0 is multiplied by a factor ν as

ν = mean(xt) / mean(xs)    (6)

where xs is the source singing f0 and xt is the target register speech or singing f0. mean(·) denotes the average f0 across all vowel phones in all audio from the source or target speaker.
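A small sketch of the key-shift factor in Eq. (6) is shown below, assuming frame-level f0 contours and vowel masks have already been extracted; all names are illustrative.

```python
import numpy as np

def key_shift_factor(source_f0, source_vowel_mask, target_f0, target_vowel_mask):
    """Eq. (6): ratio of the target's mean vowel f0 to the source's mean vowel f0.
    f0 arrays hold frame-level values in Hz; the boolean masks select vowel frames."""
    mean_target = target_f0[target_vowel_mask].mean()
    mean_source = source_f0[source_vowel_mask].mean()
    return mean_target / mean_source

# The source f0 condition is then multiplied by this factor before conversion:
# shifted_f0 = key_shift_factor(...) * source_f0
```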
2http://en.speechocean.com/datacenter/details/254.htm
Figure 2: The process diagram of the training stage. The WaveRNN [17] model is trained separately.
Figure 3: The process diagram of the conversion stage.
ŷ1:T = cbhg(y′1:T)    (5)

The entire network is trained to minimize the mel-spectrogram prediction loss, the same as in DurIAN.
2.2. Singing Voice Conversion Process
The training stage is illustrated in Fig. 2, and the conversion stage is illustrated in Fig. 3.
2.2.1. Data Preparation
Our training dataset is composed of a mix of normal speech data and singing data. A TTS front-end is used to parse the text or song lyrics into a phone sequence. Acoustic features including the mel-spectrogram, f0 and RMSE are extracted for every frame of the training data. Note that the f0 is extracted with the World vocoder [18]. Since the DurIAN structure needs phone alignment as input, a time delay neural network (TDNN) is employed here to force-align the extracted acoustic features with the phone sequence. Different from normal Mandarin TTS, which uses phone identity plus 5 tones in the modeling, non-tonal phones are used in our experiment to bridge the gap between speech phones and singing phones. Finally, phone durations can be extracted from the aligned phone sequence.
3. Experiments
3.1. Dataset
Two databases are used in our experiments. Database A is a large multi-singer Mandarin singing corpus containing 18 hours of singing data. There are 3,600 singing segments from various songs in corpus A, each with an average length of 20 seconds. Each singing fragment is by a different singer. Among all singing fragments, 2,600 are by female singers and 1,000 are by male singers. This multi-singer singing corpus was recorded by the singers themselves with various recording devices. All songs are downsampled to 24 kHz.
Database B is a speech database containing 10 hours of multi-speaker Mandarin normal TTS speech data. There are 3 male speakers and 4 female speakers in this corpus, each with a duration of around 1.5 hours. The sampling rate is also set to 24 kHz. In the singing voice conversion experiments, all source singing is chosen randomly from another Mandarin singing corpus C.
3.2. Model Hyperparameters
In our experiments, the dimensions of the phone embedding, speaker embedding, encoder CBHG module and attention layer are all set to 256. The decoder has 2 GRU layers with 256 dimensions, and batch normalization is used in the encoder and post-net modules. We use the Adam optimizer with a 0.001 initial learning rate and a warm-up [23] schedule. In the training stage, a total of 250k steps with a batch size of 32 were trained until convergence.
3.3. Naturalness and Similarity Evaluation
In the singing voice conversion test, Mean Opinion Scores (MOS) on naturalness and similarity to the target speaker are evaluated. The MOS scale ranges from 1 to 5, with 5 representing the best performance and 1 the worst. Ten testers participated in our listening test.
3.3.1. Experiment on speaker embedding representation
In this experiment, we compare the singing naturalness and similarity to the target speaker obtained with the proposed d-vector based speaker embedding and with the LUT based trainable speaker embedding. Two systems are built accordingly. The training dataset used here is the 18-hour singing database A introduced in Section 3.1. We use a total of 3,500 singing fragments in training. In testing, 3 female and 3 male singers are randomly chosen from the training set for the in-set test. To evaluate the out-of-set singing voice conversion performance, 4 speakers from the speech dataset B are chosen for testing. Here, only a 20 s segment of singing or speech data from each tester is used for speaker d-vector extraction. As the baseline system, the LUT based trainable speaker embedding is trained alongside the singing voice conversion DurIAN-SC model. The out-of-set baseline system is not tested because the baseline system cannot convert to unseen targets.
Table 1: Comparison of speaker embedding extraction methods: LUT and speaker D-vector. The 'Target Singer' column indicates whether the target speaker's singing data is used in training.

Method     Target Singer   Naturalness   Similarity
D-vector   in-set          3.70          3.61
LUT        in-set          3.61          3.56
D-vector   out-of-set      3.69          3.10
LUT        out-of-set      -             -
As shown in Table 1, for the in-set test, the proposed D-vector speaker embedding system outperforms the baseline LUT speaker embedding system in both MOS naturalness and similarity by a small margin. This result is in line with expectations. For the baseline trainable LUT speaker embedding system, the speaker embedding is trained alongside the singing voice conversion model, which makes the total number of free parameters in the system actually larger than in the proposed method, especially for 'seen' speakers. On the other hand, because there is only 20 seconds of data per singer in training, it can be hard for the trainable LUT speaker embedding method to learn a really good speaker embedding. Meanwhile, the proposed speaker embedding network is an independent module which is pre-trained on a large amount of extra speaker recognition data. For the out-of-set test, the MOS scores of the proposed method are lower than in the in-set test, especially for similarity. We believe this is a normal result, since the model parameters are not fine-tuned with the 'unseen' speaker's data, and the speaker d-vectors are extracted from only 20 seconds of the target speaker's enrollment speech or singing. Nevertheless, unlike the baseline system, the proposed method saves the trouble of fine-tuning and updating model parameters for each new user.
3.3.2. Using speech corpus in singing voice conversion
To demonstrate that the proposed system can learn singing voice conversion from speech data alone, three different systems are trained using: 1) only speech data, 2) a mix of speech and singing data, and 3) only singing data, respectively, for comparison.
Table 2: Singing voice conversion experiments trained with speech data. Dataset indicates the type of training data.

Dataset            Naturalness   Similarity
Speech & Singing   3.71          3.74
Only Speech        3.65          3.71
Only Singing       3.70          3.61
The results in Table 2 show that all three above-mentioned systems have similar performance. This interesting result indicates that, in the proposed system, speech data can contribute to singing voice conversion as much as singing data. In this case, we can use only speech data when the target's singing data is not available. In our experiments, we noticed that adding some speech data to the singing voice conversion training process gives the generated target singing clearer pronunciation. Speech data in training also helps to improve the singing voice conversion similarity.
4. Conclusion
In this paper, we proposed a singing voice conversion model, DurIAN-SC, with a unified framework for speech and singing data. For speakers with no singing data, our method can generate their singing by training on their speech data alone. Through a pre-trained speaker embedding network, we can convert to 'unseen' speakers' singing with only 20 seconds of data. Experiments indicate that the proposed model can generate high-quality singing voices for in-set 'seen' target speakers in terms of both naturalness and similarity. Meanwhile, the proposed system can also perform one-shot conversion to out-of-set 'unseen' users with a small amount of enrollment data. In future work, we will continue to make our model more robust and improve the similarity of 'unseen' singing voice conversion.
[19] X. Ji, M. Yu, C. Zhang, D. Su, T. Yu, X. Liu, and D. Yu,
“Speaker-aware target speaker enhancement by jointly learning
with speaker embedding extraction,” in ICASSP 2020 - 2020 IEEE
International Conference on Acoustics, Speech and Signal Pro-
cessing (ICASSP), 2020, pp. 7294–7298.
[20] C. Zhang and K. Koishida, “End-to-end text-independent speaker
verification with triplet loss on short utterances,” in Proc. Inter-
speech 2017, 2017, pp. 1487–1491.
[21] H. Wang, Y. Wang, Z. Zhou, X. Ji, D. Gong, J. Zhou, Z. Li, and
W. Liu, “Cosface: Large margin cosine loss for deep face recog-
nition,” in Proceedings of CVPR, 2018, pp. 5265–5274.
[22] C. Zhang, F. Bahmaninezhad, S. Ranjan, H. Dubey, W. Xia, and
J. H. Hansen, “Utd-crss systems for 2018 nist speaker recogni-
tion evaluation,” in ICASSP 2019-2019 IEEE International Con-
ference on Acoustics, Speech and Signal Processing (ICASSP).
IEEE, 2019, pp. 5776–5780.
[23] P. Goyal, P. Dollár, R. Girshick, P. Noordhuis, L. Wesolowski, A. Kyrola, A. Tulloch, Y. Jia, and K. He, "Accurate, large minibatch SGD: Training ImageNet in 1 hour," arXiv preprint arXiv:1706.02677, 2017.
5. References
[1] J. Bonada, M. Umbert, and M. Blaauw, “Expressive singing syn-
thesis based on unit selection for the singing synthesis challenge
2016.” in INTERSPEECH, 2016, pp. 1230–1234.
[2] M. Nishimura, K. Hashimoto, K. Oura, Y. Nankaku, and
K. Tokuda, “Singing voice synthesis based on deep neural net-
works.” in Interspeech, 2016, pp. 2478–2482.
[3] M. Blaauw and J. Bonada, “A neural parametric singing synthe-
sizer modeling timbre and expression from natural songs,” Ap-
plied Sciences, vol. 7, no. 12, p. 1313, 2017.
[4] M. Blaauw, J. Bonada, and R. Daido, “Data efficient voice cloning
for neural singing synthesis,” in ICASSP 2019-2019 IEEE Inter-
national Conference on Acoustics, Speech and Signal Processing
(ICASSP).
IEEE, 2019, pp. 6840–6844.
[5] K. Kobayashi, T. Toda, G. Neubig, S. Sakti, and S. Nakamura,
“Statistical singing voice conversion with direct waveform mod-
ification based on the spectrum differential,” in Fifteenth Annual
Conference of the International Speech Communication Associa-
tion, 2014.
[6] ——, “Statistical singing voice conversion based on direct wave-
form modification with global variance,” in Sixteenth Annual Con-
ference of the International Speech Communication Association,
2015.
[7] F. Villavicencio and J. Bonada, “Applying voice conversion to
concatenative singing-voice synthesis,” in Eleventh Annual Con-
ference of the International Speech Communication Association,
2010.
[8] E. Nachmani and L. Wolf, “Unsupervised singing voice conver-
sion,” arXiv preprint arXiv:1904.06590, 2019.
[9] A. v. d. Oord, S. Dieleman, H. Zen, K. Simonyan, O. Vinyals,
A. Graves, N. Kalchbrenner, A. Senior, and K. Kavukcuoglu,
“Wavenet: A generative model for raw audio,” arXiv preprint
arXiv:1609.03499, 2016.
[10] T. Saitou, M. Goto, M. Unoki, and M. Akagi, “Speech-to-singing
synthesis: Converting speaking voices to singing voices by con-
trolling acoustic features unique to singing voices,” in 2007 IEEE
Workshop on Applications of Signal Processing to Audio and
Acoustics.
IEEE, 2007, pp. 215–218.
[11] C. Yu, H. Lu, N. Hu, M. Yu, C. Weng, K. Xu, P. Liu,
D. Tuo, S. Kang, G. Lei et al., “Durian: Duration informed
attention network for multimodal synthesis,” arXiv preprint
arXiv:1909.01700, 2019.
[12] J. Gonzalez-Dominguez, “Deep neural networks for small foot-
print text-dependent speaker verification,” in ICASSP 2014 - 2014
IEEE International Conference on Acoustics, Speech and Signal
Processing (ICASSP), 2014.
[13] D. Snyder, D. Garcia-Romero, G. Sell, D. Povey, and S. Khudan-
pur, “X-vectors: Robust dnn embeddings for speaker recognition,”
in 2018 IEEE International Conference on Acoustics, Speech and
Signal Processing (ICASSP).
IEEE, 2018, pp. 5329–5333.
[14] Y. Wang, R. Skerry-Ryan, D. Stanton, Y. Wu, R. J. Weiss,
N. Jaitly, Z. Yang, Y. Xiao, Z. Chen, S. Bengio et al.,
“Tacotron: Towards end-to-end speech synthesis,” arXiv preprint
arXiv:1703.10135, 2017.
[15] R. K. Srivastava, K. Greff, and J. Schmidhuber, “Highway net-
works,” 2015.
[16] J. Chung, C. Gulcehre, K. Cho, and Y. Bengio, “Empirical evalu-
ation of gated recurrent neural networks on sequence modeling,”
arXiv preprint arXiv:1412.3555, 2014.
[17] N. Kalchbrenner, E. Elsen, K. Simonyan,
S. Noury,
N. Casagrande, E. Lockhart, F. Stimberg, A. van den
Oord, S. Dieleman, and K. Kavukcuoglu, “Efficient neural
audio synthesis,” CoRR, vol. abs/1802.08435, 2018. [Online].
Available: http://arxiv.org/abs/1802.08435
[18] M. Morise, F. Yokomori, and K. Ozawa, “World: a vocoder-based
high-quality speech synthesis system for real-time applications,”
IEICE TRANSACTIONS on Information and Systems, vol. 99,
no. 7, pp. 1877–1884, 2016.
|
ai_researcher | 3 | Scientific_Language_Models_for_Biomedical_Knowledge_Base_Completion_An_Empirical_Study.pdf | Automated Knowledge Base Construction (2020)
Conference paper
Scientific Language Models for Biomedical Knowledge Base
Completion: An Empirical Study
Rahul Nadkarni1
David Wadden1
Iz Beltagy2
Noah A. Smith1,2
Hannaneh Hajishirzi1,2
Tom Hope1,2
1Paul G. Allen School of Computer Science & Engineering, University of Washington
2Allen Institute for Artificial Intelligence (AI2)
[email protected]
[email protected]
[email protected]
[email protected]
[email protected]
[email protected]
Abstract
Biomedical knowledge graphs (KGs) hold rich information on entities such as diseases,
drugs, and genes. Predicting missing links in these graphs can boost many important appli-
cations, such as drug design and repurposing. Recent work has shown that general-domain
language models (LMs) can serve as “soft” KGs, and that they can be fine-tuned for the
task of KG completion. In this work, we study scientific LMs for KG completion, exploring
whether we can tap into their latent knowledge to enhance biomedical link prediction. We
evaluate several domain-specific LMs, fine-tuning them on datasets centered on drugs and
diseases that we represent as KGs and enrich with textual entity descriptions. We inte-
grate the LM-based models with KG embedding models, using a router method that learns
to assign each input example to either type of model and provides a substantial boost in
performance. Finally, we demonstrate the advantage of LM models in the inductive setting
with novel scientific entities. Our datasets and code are made publicly available.1
1. Introduction
Understanding complex diseases such as cancer, HIV, and COVID-19 requires rich biolog-
ical, chemical, and medical knowledge. This knowledge plays a vital role in the process of
discovering therapies for these diseases — for example, identifying targets for drugs [Lindsay,
2003] requires knowing what genes or proteins are involved in a disease, and designing drugs
requires predicting whether a drug molecule will interact with specific target proteins. In
addition, to alleviate the great costs of designing new drugs, drug repositioning [Luo et al.,
2021] involves identification of existing drugs that can be re-purposed for other diseases.
Due to the challenging combinatorial nature of these tasks, there is need for automation
with machine learning techniques. Given the many links between biomedical entities, re-
cent work [Bonner et al., 2021a,b] has highlighted the potential benefits of knowledge graph
(KG) data representations, formulating the associated tasks as KG completion problems —
predicting missing links between drugs and diseases, diseases and genes, and so forth.
The focus of KG completion work — in the general domain, as well as in biomedical
applications — is on using graph structure to make predictions, such as with KG embedding
1. https://github.com/rahuln/lm-bio-kgc
(KGE) models and graph neural networks [Zitnik et al., 2018, Chang et al., 2020]. In parallel,
recent work in the general domain has explored the use of pretrained language models (LMs)
as “soft” knowledge bases, holding factual knowledge latently encoded in their parameters
[Petroni et al., 2019, 2020]. An emerging direction for using this information for the task
of KG completion involves fine-tuning LMs to predict relations between pairs of entities
based on their textual descriptions [Yao et al., 2019, Kim et al., 2020, Wang et al., 2021,
Daza et al., 2021]. In the scientific domain, this raises the prospect of using LMs trained
on millions of research papers to tap into the scientific knowledge that may be embedded
in their parameters. While this text-based approach has been evaluated on general domain
benchmarks derived from WordNet [Miller, 1995] and Freebase [Bollacker et al., 2008], to
our knowledge it has not been applied to the task of scientific KG completion.
Our contributions. We perform an extensive study of LM-based KG completion in the
biomedical domain, focusing on three datasets centered on drugs and diseases, two of which
have not been used to date for the KG completion task. To enable exploration of LM-based
models, we collect missing entity descriptions, obtaining them for over 35k entities across all
datasets. We evaluate a range of KGE models and domain-specific scientific LMs pretrained
on different biomedical corpora [Beltagy et al., 2019, Lee et al., 2020, Alsentzer et al., 2019,
Gu et al., 2020]. We conduct analyses of predictions made by both types of models and find
them to have complementary strengths, echoing similar observations made in recent work
in the general domain [Wang et al., 2021] and motivating integration of both text and graph
modalities. Unlike previous work, we train a router that selects for each input instance which
type of model is likely to do better, finding it to often outperform average-based ensembles.
Integration of text and graph modalities provides substantial relative improvements of 13–
36% in mean reciprocal rank (MRR), and routing across multiple LM-based models further
boosts results. Finally, we demonstrate the utility of LM-based models when applied to
entities unseen during training, an important scenario in the rapidly evolving scientific
domain. Our hope is that this work will encourage further research into using scientific
LMs for biomedical KG completion, tapping into knowledge embedded in these models and
making relational inferences between complex scientific concepts.
2. Task and Methods
We begin by presenting the KG completion task and the approaches we employ for pre-
dicting missing links in biomedical KGs, including our model integration and inductive KG
completion methodologies. An overview of our approaches is illustrated in Figure 1.
2.1 KG Completion Task
Formally, a KG consists of entities E, relations R, and triples T representing facts. Each
triple (h, r, t) ∈ T consists of head and tail entities h, t ∈ E and a relation r ∈ R. An entity
can be one of many types, with the type of an entity e denoted as T (e). In our setting,
each entity is also associated with some text, denoted as text(e) for e ∈ E. This text can
be an entity name, description, or both; we use the entity’s name concatenated with its
description when available, or just the name otherwise. For example, the fact (aspirin,
Figure 1: Illustration of the main methods we apply for biomedical KG completion: (a) LM
fine-tuning; (b) KGE models; (c) an approach that combines both; and (d) using an LM to
impute missing entities in a KGE model.
treats, headache) might be an (h, r, t) triple found in a biomedical KG that relates drugs
and diseases, with the head and tail entities having types T (h) = drug and T (t) = disease.
The task of KG completion or link prediction involves receiving a triple (h, r, ?) (where ?
can replace either the head or tail entity) and scoring all candidate triples {(h, r, t′) | t′ ∈ S}
such that the correct entity that replaces ? has the highest score. For the example listed
above, a well-performing model that receives the incomplete triple (aspirin, treats, ?) should
rank the tail entity headache higher than an incorrect one such as diabetes. S can be the
entire set of entities (i.e., S = E) or some fixed subset. In the transductive setting, the set
of facts T is split into a training set Ttrain and a test set Ttest such that all positive triples in
the test set contain entities seen during training. In contrast, for inductive KG completion
the triples in the test set may contain entities not seen during training (see Section 2.4).
2.2 Methods
Ranking-based KG completion. Each KG completion model in our experiments learns
a function f that computes a ranking score s = f (x) for a given triple x = (h, r, t). Models
are trained to assign a high ranking score to correct positive triples from the set of known
facts T and a low ranking score to triples that are likely to be incorrect. To do so, we use
the max-margin loss function Lrank(x) = (1/N) Σ_{i=1}^{N} max(0, λ − f(x) + f(x′_i)), where λ is a margin hyperparameter, x ∈ T is a known positive triple in the KG, and x′_i is a negative triple constructed by randomly corrupting either the head or tail entity of x with an entity of the same type.
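As an illustration, a minimal PyTorch sketch of this max-margin ranking loss might look as follows; function and variable names are ours.

```python
import torch

def max_margin_loss(pos_score, neg_scores, margin=1.0):
    """pos_score: scalar tensor f(x) for a positive triple.
    neg_scores: (N,) tensor of f(x'_i) for N corrupted negatives.
    Returns (1/N) * sum_i max(0, margin - f(x) + f(x'_i))."""
    return torch.clamp(margin - pos_score + neg_scores, min=0).mean()
```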
KG embedding (KGE) models. For each entity e ∈ E and each relation r ∈ R, KG
embedding (KGE) models learn a vector representation E(e) ∈ Rm and R(r) ∈ Rn. For a
given triple (h, r, t), each model computes the ranking score f (h, r, t) as a simple function
of these embeddings (Figure 1b). We include a variety of different KGE models in our
experiments, including TransE [Bordes et al., 2013], DistMult [Yang et al., 2015], ComplEx
[Trouillon et al., 2016], and RotatE [Sun et al., 2019].
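For concreteness, sketches of two of these scoring functions (TransE and DistMult) are given below; they follow the cited papers' standard definitions and are not tied to any particular implementation details of ours.

```python
import torch

def transe_score(h_emb, r_emb, t_emb):
    # TransE: higher score = smaller distance between h + r and t.
    return -torch.norm(h_emb + r_emb - t_emb, p=2, dim=-1)

def distmult_score(h_emb, r_emb, t_emb):
    # DistMult: trilinear product <h, r, t>.
    return (h_emb * r_emb * t_emb).sum(dim=-1)
```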
LM-based models. KGE methods do not capture the rich information available from
textual descriptions of nodes. To address this limitation, previous KG completion ap-
[CLS] aspirin [SEP] headache [SEP]scoreEEERscore(h, r, t)h tLM2KGEg(h, r, t)𝜙sKGEsLM2ɑscore(a)(b)(c)LM(d)estriolestroneLM1sLM1Nadkarni, Wadden, Beltagy, Smith, Hajishirzi, & Hope
proaches have incorporated textual representations [Toutanova et al., 2015, Wang and Li,
2016], most recently with approaches such as KG-BERT [Yao et al., 2019] that fine-tune the
BERT language model (LM) [Devlin et al., 2019] for the task of KG completion. Our focus
in this work is on LMs pretrained on corpora of biomedical documents (e.g., PubMedBERT
[Gu et al., 2020]; see Appendix B.1.2 for full details). To score a triple using an LM, we
use a cross-encoder approach [Yao et al., 2019, Kim et al., 2020] (Fig. 1a), where we encode
the text of the head and tail entities together along with the appropriate special tokens.
Specifically, a triple (h, r, t) is encoded as v = LM([CLS] text(h) [SEP] text(t) [SEP]),
where v is the contextualized representation of the [CLS] token at the last layer.2 We then
apply an additional linear layer with a single output dimension to v to compute the ranking
score for the triple (f (x) = Wrankv ∈ R), and train the LM with the same max-margin
loss. Recent work on applying BERT for KG completion on general domain benchmarks
has shown that multi-task training improves performance [Wang et al., 2021, Kim et al.,
2020]. We use the approach of Kim et al. [2020] and incorporate two additional losses for
each LM: a binary triple classification loss to identify if a triple is positive or negative, and
a multi-class relation classification loss.3
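A hedged sketch of the cross-encoder scoring with the Hugging Face Transformers library is shown below; the checkpoint name is only an example, and the linear ranking head and truncation length are illustrative choices rather than the exact experimental setup.

```python
import torch.nn as nn
from transformers import AutoTokenizer, AutoModel

model_name = "microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract"  # example checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
encoder = AutoModel.from_pretrained(model_name)
rank_head = nn.Linear(encoder.config.hidden_size, 1)  # plays the role of W_rank

def triple_score(head_text, tail_text):
    # Encodes "[CLS] text(h) [SEP] text(t) [SEP]" and scores the [CLS] vector.
    inputs = tokenizer(head_text, tail_text, return_tensors="pt",
                       truncation=True, max_length=256)
    cls_vec = encoder(**inputs).last_hidden_state[:, 0]  # (1, hidden)
    return rank_head(cls_vec).squeeze(-1)                # ranking score f(x)
```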
2.3 Integrating KGE and LM: Model Averaging vs. Routing
Previous work using text for KG completion on general domain benchmarks has demon-
strated the benefit of combining KGE and text-based models [Xie et al., 2016, Wang et al.,
2021]. We study integration of graph-based and text-based methods (Figure 1c), exploring
whether learning to route input instances adaptively to a single model can improve perfor-
mance over previous approaches that compute a weighted average of ranking scores [Wang
et al., 2021]. We also explore the more general setup of combining more than two models.
More formally, for a given triple x = (h, r, t), let φ(x) be its feature vector. We can learn
a function g(φ(x)) that outputs a set of weights α = [α1, . . . , αk], with Σ_i αi = 1 and αi > 0 ∀i. These weights can be used to perform a weighted average of the ranking scores {s1, . . . , sk} for a set of k models we wish to combine, such that the final ranking score is s = Σ_i αi si.
We use a variety of graph-, triple-, and text-based features to construct the feature vector
φ(x) such as node degree, entity and relation type, string edit distance between head and
tail entity names, and overlap in graph neighbors of head and tail nodes. We explore these
features further in Section 4.1, and provide a full list in Appendix B.1.3 (Table 6).
For the function g(·), we experiment with an input-dependent weighted average
that outputs arbitrary weights α and a router that outputs a constrained α such that
αi = 1 for some i and αj = 0 ∀j ≠ i (i.e., α is a one-hot vector).4
In practice, we
implement the router as a classifier which selects a single KG completion model for each
example by training it to predict which model will perform better.5 For the input-dependent
weighted average we train a multilayer perceptron (MLP) using the max-margin ranking
loss. We train all models on the validation set and evaluate on the test set for each dataset.
2. We experiment with encoding the relation text as well, but find that this did not improve performance.
3. See details in Appendix B. Wang et al. [2021] omit the relation classification loss and use a bi-encoder;
we find that both of these modifications reduce performance in our setting.
4. We also try a global weighted average with a single set of weights; see Appendix B.1.3 for details.
5. We explore a range of methods for the router’s classifier, with the best found to be gradient boosted
decision trees (GBDT) and multilayer perceptrons (MLP).
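As an illustration of the router, the sketch below trains a binary classifier over the per-example features φ(x) for the two-model case; the feature files, the scikit-learn classifier choice and the function names are assumptions made for illustration.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# features: (num_examples, num_features) matrix of phi(x) built on the validation
# set (model scores, node degrees, entity types, text lengths, etc.).
# labels: 1 if the LM ranked the positive triple higher than the KGE model, else 0.
features = np.load("router_features.npy")   # hypothetical files
labels = np.load("router_labels.npy")

router = GradientBoostingClassifier().fit(features, labels)

def routed_score(phi_x, kge_score, lm_score):
    # Hard routing: alpha is a one-hot choice of a single model per example.
    use_lm = router.predict(phi_x.reshape(1, -1))[0] == 1
    return lm_score if use_lm else kge_score
```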
Dataset                #Entities   #Rel   Train     Dev.     Test     Avg. Desc. Length
RepoDB                 2,748       1      5,342     667      668      49.54
Hetionet (our subset)  12,733      4      124,544   15,567   15,568   44.65
MSI                    29,959      6      387,724   48,465   48,465   45.13
WN18RR                 40,943      11     86,835    3,034    3,134    14.26
FB15k-237              14,541      237    272,115   17,535   20,466   139.32

Table 1: Statistics for our datasets and a sample of general domain benchmarks. The Train, Dev., and Test columns give the number of positive edges in each split.
When performing ranking evaluation, we use the features φ(x) of each positive example
to compute the weights α, then apply the same weights to all negative examples ranked
against that positive example.
2.4 Inductive KG Completion
KGE models are limited to the transductive setting where all entities seen during evaluation
have appeared during training. Inductive KG completion is important in the biomedical
domain, where we may want to make predictions on novel entities such as emerging biomed-
ical concepts or drugs/proteins mentioned in the literature that are missing from existing
KGs. Due to their ability to form compositional representations from entity text, LMs are
well-suited to this setting. In addition to using LMs fine-tuned for KGC, we try a simple
technique using LMs to “fill in” missing KGE embeddings without explicitly using the LM
for prediction (Fig. 1d). Given a set of entities E for which a KGE model has trained embed-
dings and a set of unknown entities U, for each e ∈ E ∪ U we encode its text using an LM to
form ve = LM([CLS] text(e) [SEP]), ∀e ∈ E ∪ U, where ve is the [CLS] token representation
at the last layer. We use the cosine similarity between embeddings to replace each unseen entity's embedding with the closest trained embedding as E(u) = E(argmax_{e∈E} cos-sim(ve, vu)), where e is of the same type as u, i.e., T(e) = T(u).
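A sketch of this nearest-neighbor imputation is given below; array names are illustrative, and the same-type filtering described above is omitted for brevity.

```python
import numpy as np

def impute_unseen_embeddings(kge_emb, lm_emb_seen, lm_emb_unseen):
    """kge_emb: (n_seen, d_kge) trained KGE embeddings for seen entities.
    lm_emb_seen: (n_seen, d_lm) LM [CLS] encodings of the seen entities' text.
    lm_emb_unseen: (n_unseen, d_lm) LM encodings of the unseen entities' text.
    Returns a (n_unseen, d_kge) matrix copied from each unseen entity's
    nearest seen neighbor under cosine similarity."""
    seen = lm_emb_seen / np.linalg.norm(lm_emb_seen, axis=1, keepdims=True)
    unseen = lm_emb_unseen / np.linalg.norm(lm_emb_unseen, axis=1, keepdims=True)
    nearest = (unseen @ seen.T).argmax(axis=1)   # argmax of cosine similarity
    return kge_emb[nearest]
```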
3. Experimental Setup
3.1 Datasets
We use three datasets in the biomedical domain that cover a range of sizes comparable to
existing general domain benchmarks, each pooled from a broad range of biomedical sources.
Our datasets include RepoDB [Brown and Patel, 2017], a collection of drug-disease pairs
intended for drug repositioning research; MSI (multiscale interactome; [Ruiz et al., 2021]),
a recent network of diseases, proteins, genes, drug targets, and biological functions; and
Hetionet [Himmelstein and Baranzini, 2015], a heterogeneous biomedical knowledge graph
which following Alshahrani et al. [2021] we restrict to interactions involving drugs, diseases,
symptoms, genes, and side effects.6 Statistics for all datasets and a sample of popular
general domain benchmark KGs can be found in Table 1.
6. More information on each dataset is available in Appendix A.1.
While Hetionet has previously been explored for the task of KG completion as link
prediction using KGE models (though not LMs) [Alshahrani et al., 2021, Bonner et al.,
2021b], to our knowledge neither RepoDB nor MSI have been represented as KGs and used
for evaluating KG completion models despite the potential benefits of this representation
[Bonner et al., 2021a], especially in conjunction with textual information. In order to apply
LMs to each dataset, we scrape entity names (when not provided by the original dataset) as
well as descriptions from the original online sources used to construct each KG (see Table 5
in the appendix).
We construct an 80%/10%/10% training/development/test transductive split for each
KG by removing edges from the complete graph while ensuring that all nodes remain in
the training graph. We also construct inductive splits, where each positive triple in the test
test has one or both entities unseen during training.
3.2 Pretrained LMs and KGE Integration
We experiment with several LMs pretrained on biomedical corpora (see Table 2 and Ap-
pendix B.1.2). For each LM that has been fine-tuned for KG completion, we add the prefix
“KG-” (e.g., KG-PubMedBERT) to differentiate it from the base LM. We use the umbrella
term “model integration” for both model averaging and routing, unless stated otherwise.
Model integration. We explore integration of all pairs of KGE models as well as each
KGE model paired with KG-PubMedBERT. This allows us to compare the effect of inte-
grating pairs of KG completion models in general with integrating graph- and text-based
approaches. For all pairs of models, we use the router-based and input-dependent weighted
average methods. We also explore combinations of multiple KGE models and LMs, where
we start with the best pair of KG-PubMedBERT and a KGE model based on the validation
set and add either KG-BioBERT or the best-performing KGE model (or the second-best,
if the best KGE model is in the best pair with KG-PubMedBERT).
3.3 Evaluation
At test time, each positive triple is ranked against a set of negatives constructed by replacing
either the head or tail entity by a fixed set of entities of the same type. When constructing
the edge split for each of the three datasets, we generate a fixed set of negatives for every
positive triple in the validation and test sets, each corresponding to replacing the head or
tail entity with an entity of the same type and filtering out negatives that appear as positive
triples in either the training, validation, or test set (exact details in Appendix A.4). For
each positive triple, we use its rank to compute the mean reciprocal rank (MRR), Hits@3
(H@3), and Hits@10 (H@10) metrics.
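For reference, these metrics can be computed from the rank of each positive triple as in the following sketch (1-based ranks assumed; names are ours).

```python
import numpy as np

def ranking_metrics(ranks):
    """ranks: array of 1-based ranks of each positive triple among its negatives."""
    ranks = np.asarray(ranks, dtype=float)
    return {
        "MRR": float(np.mean(1.0 / ranks)),
        "H@3": float(np.mean(ranks <= 3)),
        "H@10": float(np.mean(ranks <= 10)),
    }

# Example: ranking_metrics([1, 4, 2, 15]) -> {'MRR': ~0.45, 'H@3': 0.5, 'H@10': 0.75}
```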
4. Experimental Results
4.1 Transductive Link Prediction Results
We report performance on the link prediction task across all datasets and models in Ta-
ble 2. While LMs perform competitively with KGE models and even outperform some,
they generally do not match the best KGE model on RepoDB and MSI. This echoes re-
RepoDB
Hetionet
MRR H@3 H@10 MRR H@3 H@10 MRR H@3 H@10
MSI
KGE
LM (fine-tuned)
ComplEx
DistMult
RotatE
TransE
62.3
62.0
58.8
60.0
51.7
RoBERTa
59.7
SciBERT
BioBERT
58.2
Bio+ClinicalBERT 55.7
60.8
PubMedBERT-abs
59.9
PubMedBERT-full
Two models
(router)
Best pair of KGE
Best KGE + LM
Two models
(input-dep. avg.)
Best pair of KGE
Best KGE + LM
62.2
70.6
65.2
65.9
71.1
70.4
65.9
68.6
60.3
67.6
65.8
64.0
70.7
69.3
70.4
80.3
74.3
74.4
85.6
85.2
79.8
81.1
82.3
88.5
86.8
84.1
89.5
88.8
83.7
94.3
87.6
91.5
Three models
(router)
2 KGE + 1 LM
1 KGE + 2 LM
72.7
72.1
81.6
82.5
95.2
95.7
45.9
46.0
50.6
50.2
46.4
50.3
50.3
43.6
50.8
51.7
56.1
59.7
65.3
70.3
62.6
62.1
53.6
53.5
58.2
58.0
53.6
57.1
57.5
49.1
58.0
58.7
65.5
68.6
75.3
78.7
71.7
71.9
77.8
77.8
79.3
79.8
76.9
79.1
79.4
72.6
80.0
80.8
85.4
87.2
90.2
92.2
89.4
89.5
40.3
29.6
32.4
32.7
30.1
34.2
33.4
32.6
34.3
34.2
45.2
48.5
39.8
40.6
44.3
34.1
35.3
36.5
33.3
37.9
37.1
36.1
38.0
37.7
50.6
54.4
44.9
44.6
57.5
53.6
49.8
53.8
50.6
55.0
54.8
53.5
55.3
55.1
66.2
70.1
62.0
61.2
50.9
51.2
57.1
57.0
73.2
73.0
Table 2: KG completion results. All values are in the range [0, 100], higher is better.
Underlined values denote the best result within a model category (KGE, LM, two models
with router, two models with input-dependent weighted average, three models with router),
while bold values denote the best result for each dataset.
sults in the general domain for link prediction on subsets of WordNet and Freebase [Yao
et al., 2019, Wang et al., 2021]. On all datasets and metrics, the best-performing LM
is KG-PubMedBERT, which aligns with results for natural language understanding tasks
over biomedical text [Gu et al., 2020]. The biomedical LMs also generally outperform
KG-RoBERTa, illustrating the benefit of in-domain pretraining even in the KG completion
setting.
Comparing model errors. By examin-
ing a selected set of examples in Table 3,
we can observe cases where information
in text provides LMs an advantage and
where a lack of context favors KGE models.
KG-PubMedBERT is able to make connec-
tions between biomedical concepts – like the
fact that a disease that affects the stomach
might cause weight loss – and align related
concepts expressed with different terminol-
ogy – like connecting antineoplastic with
cancer (a type of neoplasm), or recogniz-
ing that an echocardiogram is a technique
for imaging the heart. In contrast, RotatE offers an advantage when the descriptions do not
immediately connect the two terms (mediastinal cancer, hoarseness), where a description
Figure 2: Fraction of test set examples where
each model performs better.
Relation: Disease presents Symptom
  RotatE better: Disease: mediastinal cancer; a cancer in the mediastinum. Symptom: hoarseness; a deep or rough quality of voice.
  KG-PubMedBERT better: Disease: stomach cancer; a gastrointestinal cancer in the stomach. Symptom: weight loss; decrease in existing body weight.

Relation: Compound treats Disease
  RotatE better: Compound: methylprednisolone; a prednisolone derivative glucocorticoid with higher potency. Disease: allergic rhinitis; a rhinitis that is an allergic inflammation and irritation of the nasal airways.
  KG-PubMedBERT better: Compound: altretamine; an alkylating agent proposed as an antineoplastic. Disease: ovarian cancer; a female reproductive organ cancer that is located in the ovary.

Relation: Compound causes Side Effect
  RotatE better: Compound: cefaclor; semi-synthetic, broad-spectrum antibiotic derivative of cephalexin. Side Effect: tubulointerstitial nephritis; no description.
  KG-PubMedBERT better: Compound: perflutren; a diagnostic medication to improve contrast in echocardiograms. Side Effect: palpitations; irregular and/or forceful beating of the heart.

Table 3: Examples from Hetionet where one model ranks the shown positive pair considerably higher than the other. LMs often perform better when there is semantic relatedness between head and tail text, but can be outperformed by a KGE model when head/tail entity text is missing or unrelated. Entity descriptions cut to fit.
may be too technical or generic to be informative (methylprednisolone, allergic rhinitis),
or where no description is available (cefaclor, tubulointerstitial nephritis).7 Furthermore,
Fig. 2 shows that KG-PubMedBERT outperforms the best KGE model on a substantial
fraction of the test set examples for each dataset.8 These observations motivate an ap-
proach that leverages the strengths of both types of models by identifying examples where
each model might do better, which leads to our results for model integration.
4.2 Model Averaging and Routing
Integrating pairs of models. Table 2 shows that combining each class of models boosts
results by a large relative improvement of 13–36% in MRR across datasets. Moreover, the
best-performing combination always includes a KGE model and KG-PubMedBERT rather
than two KGE models (Fig. 3), showing the unique benefit of using LMs to augment models
relying on KG structure alone.
Averaging vs. routing. We also compare the router and input-dependent weighted
average approaches of integrating a pair of models in Table 2, with the router-based ap-
proach outperforming the weighted average for the best KGE + LM pair on RepoDB and
MSI. This presents routing as a promising alternative for integrating KGE and LM models.
Since the gradient boosted decision trees (GBDT) router achieves the best validation set
7. Table 7 in the appendix shows the drop in performance when one or both entities are missing descriptions.
8. See MRR breakdown by relation type in Fig. 4 in the appendix.
Figure 3: Test set MRR for all pairs of KG completion models using an MLP router.
The best combination of a KGE model and KG-PubMedBERT always performs better
than the best pair of KGE models, and for RepoDB and Hetionet all pairs involving KG-
PubMedBERT outperform all KGE-only pairs.
performance in most cases across classifiers and integration methods, we use this method
for combinations of more than two models, such as multiple LMs with a single KGE model.
Integrating additional models. The bottom of Table 2 shows results for three-model
combinations. Adding a third model improves performance compared to the best pair of
models, though whether the best third model is an LM or KGE model varies across datasets
and metrics. Although there are diminishing returns to including a third model, the three-
model combinations provide the best performance for RepoDB and MSI.
Interpreting model routing. We compute average feature gain for all datasets, using
a GBDT router implemented with XGBoost [Chen and Guestrin, 2016] (see Fig. 5 in the
appendix). We find that the most salient features are the ranking scores output by each
model, which is intuitive as these scores reflect each model’s confidence. Graph features
like node degree and PageRank also factor into the classifier’s predictions, as well as tex-
tual features such as entity text length and edit distance between entity names. General
concepts such as Hypertensive disease and Infection of skin and/or subcutaneous tissue
are central nodes for which we observe KGE models to often do better. KGE models also
tend to do better on entities with short, non-descriptive names (e.g., P2RY14), especially
when no descriptions are available. Generally, these patterns are not clear-cut, and non-
linear or interaction effects likely exist. It remains an interesting challenge to gain deeper
understanding into the strengths and weaknesses of LM-based and graph-based models.
4.3 Inductive KG Completion
For our inductive KG completion experiments, we use ComplEx as the KGE model and
KG-PubMedBERT as our LM-based model, and compare the performance of each method
to ComplEx with entity embeddings imputed using the method described in Section 2.4.
We use either the untrained PubMedBERT or the fine-tuned KG-PubMedBERT as the LM
for retrieving nearest-neighbor (NN) entities (see examples in Table 9 in the appendix).
We also compare to DKRL [Xie et al., 2016], which constructs entity representations from
text using a CNN encoder and uses the TransE scoring function. We use PubMedBERT’s
RepoDB
Hetionet
MRR H@3 H@10 MRR H@3 H@10 MRR H@3 H@10
MSI
DKRL
KG-PubMedBERT
ComplEx
NN-ComplEx, frozen LM
NN-ComplEx, fine-tuned
15.9
15.6
38.8 43.4
0.4
22.3
30.3
0.8
20.1
26.9
28.2
67.5
1.6
31.2
39.4
18.5
17.8
21.6 22.3
0.7
18.4
12.9
3.6
18.1
13.9
31.9
42.8
2.8
32.8
25.5
14.1
13.3
20.2 21.7
0.1
16.9
15.4
0.5
15.8
14.6
22.4
32.2
0.4
23.4
21.4
Table 4: Inductive KG completion results. NN-ComplEx refers to the version of ComplEx
with unseen entity embeddings replaced using an LM to find the 1-nearest neighbor, either
with PubMedBERT frozen or fine-tuned for KG completion (KG-PubMedBERT).
token embeddings as input to DKRL and train with the same multi-task loss. While other
methods for inductive KG completion exist, such as those based on graph neural networks
[Schlichtkrull et al., 2018, Vashishth et al., 2020, Bhowmik and de Melo, 2020], they require
the unseen entity to have known connections to entities that were seen during training in
order to propagate information needed to construct the new embedding. In our inductive
experiments, we consider the more challenging setup where every test set triple has at least
one entity with no known connections to entities seen during training, such that graph
neural network-based methods cannot be applied. This models the phenomenon of rapidly
emerging concepts in the biomedical domain, where a novel drug or protein may be newly
studied and discussed in the scientific literature without having been integrated into existing
knowledge bases.
As seen in Table 4, ComplEx unsurprisingly performs poorly as it attempts link predic-
tion with random embeddings for unseen entities. DKRL does substantially better, with
KG-PubMedBERT further increasing MRR with a relative improvement of 21% (Hetionet)
to over 2x (RepoDB). Our strategy for replacing ComplEx embeddings for unseen entities
performs comparably to or better than DKRL in most cases, with untrained PubMedBERT
encodings generally superior to using KG-PubMedBERT’s encodings. In either case, this
simple strategy for replacing the untrained entity embeddings of a KGE model shows the
ability of an LM to augment a structure-based method for KG completion that is typically
only used in the transductive setting, even without using the LM to compute ranking scores.
5. Conclusion and Discussion
We perform the first empirical study of scientific language models (LMs) applied to biomed-
ical knowledge graph (KG) completion. We evaluate domain-specific biomedical LMs, fine-
tuning them to predict missing links in KGs that we construct by enriching biomedical
datasets with textual entity descriptions. We find that LMs and more standard KG embed-
ding models have complementary strengths, and propose a routing approach that integrates
the two by assigning each input example to either type of model to boost performance. Fi-
nally, we demonstrate the utility of LMs in the inductive setting with entities not seen during
training, an important scenario in the scientific domain with many emerging concepts.
Our work raises several directions for further study. For instance, several structural
differences exist between general-domain and biomedical text that would be interesting to
explore in more depth and leverage more explicitly to improve KG completion performance.
For example, entities with uninformative technical names – such as protein names that are
combinations of numbers and letters (e.g., P2RY14) – appear very often in scientific KGs,
and are likely related to the benefit of adding descriptions (Table 7, appendix). The surface
forms of entity mentions in the biomedical literature on which the LMs were pretrained
tend to be diverse with many aliases, while entities such as cities or people in the general
domain often show less variety in their surface forms used in practice. This could potentially
be challenging when trying to tap into the latent knowledge LMs hold on specific entities
as part of the KG completion task, and likely requires LMs to disambiguate these surface
forms to perform the task well. General-domain LMs are also trained on corpora such as
Wikipedia which has “centralized” pages with comprehensive information about entities,
while in the scientific literature information on entities such as drugs or genes is scattered
across the many papers that form the training corpora for the LMs.
Previous work [Wang et al., 2021] has also observed that combining graph- and LM-based
models improves KG completion results. We provide further analyses into this phenomenon
based on textual and graph properties, but a deeper understanding of the strengths and
weaknesses of each modality is needed. Interpreting neural models is generally a challenging
problem; further work in our setting could help reveal the latent scientific knowledge embed-
ded in language models. Importantly, our results point to the potential for designing new
models that capitalize on both graph and text modalities, perhaps by injecting structured
knowledge into LMs [Peters et al., 2019] or with entity-centric pretraining [Zemlyanskiy
et al., 2021]. Finally, our findings provide a promising direction for biomedical knowledge
completion tasks, and for literature-based scientific discovery [Swanson, 1986, Gopalakrish-
nan et al., 2019].
Acknowledgments
This project is supported in part by NSF Grant OIA-2033558 and by the Office of Naval
Research under MURI grant N00014-18-1-2670.
References
Emily Alsentzer, John R. Murphy, Willie Boag, Wei-Hung Weng, Di Jin, Tristan Naumann,
and Matthew B. A. McDermott. Publicly Available Clinical BERT Embeddings. In 2nd
Clinical Natural Language Processing Workshop, 2019.
Mona Alshahrani, Maha A. Thafar, and Magbubah Essack. Application and evaluation of
knowledge graph embeddings in biomedical data. PeerJ Computer Science, 7, 2021.
Iz Beltagy, Kyle Lo, and Arman Cohan. SciBERT: A Pretrained Language Model for
Scientific Text. In EMNLP, 2019.
Rajarshi Bhowmik and Gerard de Melo. Explainable Link Prediction for Emerging Entities
in Knowledge Graphs. In SEMWEB, 2020.
Kurt Bollacker, Colin Evans, Praveen K. Paritosh, Tim Sturge, and Jamie Taylor. Free-
base: A Collaboratively Created Graph Database For Structuring Human Knowledge. In
SIGMOD Conference, 2008.
Stephen Bonner, Ian P Barrett, Cheng Ye, Rowan Swiers, Ola Engkvist, Andreas Bender,
Charles Tapley Hoyt, and William Hamilton. A Review of Biomedical Datasets Relating
to Drug Discovery: A Knowledge Graph Perspective. arXiv:2102.10062, 2021a.
Stephen Bonner, Ian P Barrett, Cheng Ye, Rowan Swiers, Ola Engkvist, and William L
Hamilton. Understanding the Performance of Knowledge Graph Embeddings in Drug
Discovery. arXiv:2105.10488, 2021b.
Antoine Bordes, Nicolas Usunier, Alberto García-Durán, Jason Weston, and Oksana
Yakhnenko. Translating Embeddings for Modeling Multi-relational Data. In NIPS, 2013.
Adam S. Brown and Chirag J. Patel. A standard database for drug repositioning. Scientific
Data, 4, 2017.
David Chang, Ivana Balažević, Carl Allen, Daniel Chawla, Cynthia Brandt, and Andrew
Taylor. Benchmark and Best Practices for Biomedical Knowledge Graph Embeddings.
In 19th SIGBioMed Workshop on Biomedical Language Processing, pages 167–176, 2020.
Tianqi Chen and Carlos Guestrin. XGBoost: A Scalable Tree Boosting System. KDD,
2016.
Daniel Daza, Michael Cochez, and Paul T. Groth. Inductive Entity Representations from
Text via Link Prediction. In WWW, 2021.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training
of Deep Bidirectional Transformers for Language Understanding. In NAACL-HLT, 2019.
Vishrawas Gopalakrishnan, Kishlay Jha, Wei Jin, and Aidong Zhang. A survey on literature
based discovery approaches in biomedical domain. Journal of biomedical informatics, 93:
103141, 2019.
Yu Gu, Robert Tinn, Hao Cheng, Michael Lucas, Naoto Usuyama, Xiaodong Liu, Tristan
Naumann, Jianfeng Gao, and Hoifung Poon. Domain-Specific Language Model Pretrain-
ing for Biomedical Natural Language Processing. arXiv:2007.15779, 2020.
Daniel S. Himmelstein and Sergio E. Baranzini. Heterogeneous Network Edge Prediction: A
Data Integration Approach to Prioritize Disease-Associated Genes. PLoS Computational
Biology, 11, 2015.
Bosung Kim, Taesuk Hong, Youngjoong Ko, and Jungyun Seo. Multi-Task Learning for
Knowledge Graph Completion with Pre-trained Language Models. In COLING, 2020.
Diederik P. Kingma and Jimmy Ba. Adam: A Method for Stochastic Optimization. In
ICLR, 2015.
Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So,
and Jaewoo Kang. BioBERT: a pre-trained biomedical language representation model
for biomedical text mining. Bioinformatics, 36:1234 – 1240, 2020.
Mark A Lindsay. Target discovery. Nature Reviews Drug Discovery, 2(10):831–838, 2003.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy,
Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. RoBERTa: A Robustly Optimized
BERT Pretraining Approach. arXiv:1907.11692, 2019.
Huimin Luo, Min Li, Mengyun Yang, Fang-Xiang Wu, Yaohang Li, and Jianxin Wang.
Biomedical data and computational models for drug repositioning: a comprehensive re-
view. Briefings in bioinformatics, 22(2):1604–1619, 2021.
George A. Miller. WordNet: a lexical database for English. Commun. ACM, 38:39–41,
1995.
F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blon-
del, G. Louppe, P. Prettenhofer, R. Weiss, J. Vanderplas, A. Passos, D. Cournapeau,
M. Brucher, M. Perrot, and E. Duchesnay. Scikit-learn: Machine Learning in Python. J.
Mach. Learn. Res., 12:2825–2830, 2011.
Matthew E. Peters, Mark Neumann, Robert Logan, Roy Schwartz, Vidur Joshi, Sameer
Singh, and Noah A. Smith. Knowledge Enhanced Contextual Word Representations. In
EMNLP, 2019.
Fabio Petroni, Tim Rocktäschel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, Alexander H. Miller, and Sebastian Riedel. Language Models as Knowledge Bases? In EMNLP, 2019.
Fabio Petroni, Patrick Lewis, Aleksandra Piktus, Tim Rocktäschel, Yuxiang Wu, Alexander H. Miller, and Sebastian Riedel. How Context Affects Language Models' Factual Predictions. In AKBC, 2020.
Nils Reimers and Iryna Gurevych. Sentence-BERT: Sentence Embeddings using Siamese
BERT-Networks. In EMNLP, 2019.
Camilo Ruiz, Marinka Zitnik, and Jure Leskovec. Identification of disease treatment mech-
anisms through the multiscale interactome. Nature communications, 2021.
Michael Schlichtkrull, Thomas Kipf, Peter Bloem, Rianne van den Berg, Ivan Titov, and
Max Welling. Modeling Relational Data with Graph Convolutional Networks. In ESWC,
2018.
Zhiqing Sun, Zhihong Deng, Jian-Yun Nie, and Jian Tang. RotatE: Knowledge Graph
Embedding by Relational Rotation in Complex Space. In ICLR, 2019.
Don R. Swanson. Fish oil, Raynaud’s syndrome, and undiscovered public knowledge. Per-
spectives in biology and medicine, 30(1):7–18, 1986.
Kristina Toutanova, Danqi Chen, Patrick Pantel, Hoifung Poon, Pallavi Choudhury, and
Michael Gamon. Representing Text for Joint Embedding of Text and Knowledge Bases.
In EMNLP, 2015.
Théo Trouillon, Johannes Welbl, Sebastian Riedel, Éric Gaussier, and Guillaume Bouchard. Complex Embeddings for Simple Link Prediction. In ICML, 2016.
Shikhar Vashishth, Soumya Sanyal, Vikram Nitin, and Partha Talukdar. Composition-based
Multi-Relational Graph Convolutional Networks. In ICLR, 2020.
Bo Wang, Tao Shen, Guodong Long, Tianyi Zhou, Ying Wang, and Yi Chang. Structure-
Augmented Text Representation Learning for Efficient Knowledge Graph Completion. In
WWW, 2021.
Zhigang Wang and Juan-Zi Li. Text-Enhanced Representation Learning for Knowledge
Graph. In IJCAI, 2016.
Ruobing Xie, Zhiyuan Liu, Jia Jia, Huanbo Luan, and Maosong Sun. Representation
Learning of Knowledge Graphs with Entity Descriptions. In AAAI, 2016.
Bishan Yang, Wen-tau Yih, Xiaodong He, Jianfeng Gao, and Li Deng. Embedding Entities and Relations for Learning and Inference in Knowledge Bases. In ICLR, 2015.
Liang Yao, Chengsheng Mao, and Yuan Luo. KG-BERT: BERT for Knowledge Graph
Completion. arXiv:1909.03193, 2019.
Yury Zemlyanskiy, Sudeep Gandhe, Ruining He, Bhargav Kanagal, Anirudh Ravula, Ju-
raj Gottweis, Fei Sha, and Ilya Eckstein. DOCENT: Learning Self-Supervised Entity
Representations from Large Document Collections. In EACL, 2021.
Marinka Zitnik, Monica Agrawal, and Jure Leskovec. Modeling polypharmacy side effects
with graph convolutional networks. Bioinformatics, 34(13):i457–i466, 2018.
Dataset | Link | Sources
RepoDB | http://apps.chiragjpgroup.org/repoDB/ | DrugBank, UMLS
Hetionet | https://github.com/hetio/hetionet | DrugBank, Disease Ontology, Entrez, SIDER, MeSH
MSI | https://github.com/snap-stanford/multiscale-interactome | DrugBank, Gene Ontology, Entrez, UMLS
Table 5: Links and sources of entity names and descriptions for each dataset.
Appendix A. Dataset Construction
A.1 Sources
RepoDB Drugs in RepoDB have statuses including approved, terminated, withdrawn, and
suspended. We restrict our KG to pairs in the approved category.
Hetionet was constructed using data from various publicly-available scientific reposito-
ries. Following Alshahrani et al. [2021], we restrict the KG to the treats, presents, associates,
and causes relation types. This includes interactions between drugs and the diseases they
treat, diseases and their symptoms, diseases and associated genes, and drugs and their side
effects. We use this subset of the full Hetionet dataset to avoid scalability issues that arise
when training large Transformer-based language models, inspired by benchmark datasets
such as FB15K [Bordes et al., 2013], a subset of the Freebase knowledge base.
MSI includes diseases and the proteins they perturb, drug targets, and biological functions, and is designed to discover drug-disease treatment pairs through the pathways that connect them via genes, proteins, and their functions. We include all entities and relation types in the dataset.
We collect each of the datasets from the links listed in Table 5. For missing entity names
and all descriptions, we write scripts to scrape the information from the resources listed
above using the entity identifiers provided by each of the datasets.
A.2 Transductive Splits
To construct transductive splits for each dataset, we begin with the complete graph, and
repeat the following steps:
1. Randomly sample an edge from the graph.
2. If the degree of both nodes incident to the edge is greater than one, remove the edge.
3. Otherwise, replace the edge and continue.
The above steps are repeated until validation and test graphs have been constructed of
the desired size while ensuring that no entities are removed from the training graph. We
construct 80%/10%/10% training/validation/test splits of all datasets.
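To make the procedure concrete, the following minimal Python sketch (our own illustration, not the authors' released code; it assumes triples are (head, relation, tail) tuples and that the graph has enough removable edges) implements the same edge-removal logic:

import random
from collections import defaultdict

def transductive_split(triples, frac_valid=0.1, frac_test=0.1, seed=0):
    # Split triples so that every entity keeps at least one training edge.
    rng = random.Random(seed)
    train = list(triples)
    degree = defaultdict(int)
    for h, _, t in train:
        degree[h] += 1
        degree[t] += 1
    n_valid = int(frac_valid * len(train))
    n_test = int(frac_test * len(train))
    held_out = []
    while len(held_out) < n_valid + n_test:
        idx = rng.randrange(len(train))       # 1. randomly sample an edge
        h, r, t = train[idx]
        if degree[h] > 1 and degree[t] > 1:   # 2. remove only if both endpoints keep another edge
            held_out.append(train.pop(idx))
            degree[h] -= 1
            degree[t] -= 1
        # 3. otherwise leave the edge in place and sample again
    return train, held_out[:n_valid], held_out[n_valid:]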
A.3 Inductive Splits
To construct inductive splits for each dataset, we follow the procedure outlined in the
“Technical Details” section of the appendix of Daza et al. [2021]. We similarly construct a
80%/10%/10% training/validation/test split of each dataset in the inductive setting.
A.4 Negative Validation/Test Triples
In order to perform a ranking-based evaluation for each dataset in both the transductive
and inductive settings, we generate a set of negative triples to be ranked against each
positive triple. To generate negative entities to replace both the head and tail entity of each
validation and test positive, we follow the procedure below:
1. Begin with the set of all entities in the knowledge graph.
2. Remove all entities that do not have the same entity type as the entity to be ranked
against in the positive triple.
3. Remove all entities that would result in a valid positive triple in either the training,
validation, or test sets.
4. Randomly sample a fixed set of size m from the remaining set of entities.
We use a value of m = 500 for RepoDB and MSI, and a value of m = 80 for Hetionet (due
to the constraints above, the minimum number of valid entities remaining across positive
triples for Hetionet was 80). Using a fixed set of entities allows for fair comparison when
assessing performance of subsets of the test set, such as when examining the effect of subsets
where descriptions are present for neither, one, or both entities (Table 7).
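A rough Python sketch of this filtered negative sampling (illustrative only; entity_type is assumed to map each entity to its type, and known_triples to contain all train/validation/test positives) is:

import random

def sample_negatives(positive, all_entities, entity_type, known_triples,
                     m=500, replace_head=True, seed=0):
    # Return m corrupted entities to rank against one validation/test positive (h, r, t).
    rng = random.Random(seed)
    h, r, t = positive
    target = h if replace_head else t
    candidates = []
    for e in all_entities:
        if e == target or entity_type[e] != entity_type[target]:
            continue  # step 2: keep only entities of the same type
        corrupted = (e, r, t) if replace_head else (h, r, e)
        if corrupted in known_triples:
            continue  # step 3: drop corruptions that are actually valid triples
        candidates.append(e)
    return rng.sample(candidates, min(m, len(candidates)))  # step 4: fixed-size sample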
Appendix B. Training
B.1 Transductive Setting
For all individual models, we train the models on the training set of each dataset while
periodically evaluating on the validation set. We save the model with the best validation
set MRR, then use that model to evaluate on the test set. We also perform hyperparameter
tuning for all models, and use validation set MRR to select the final set of hyperparameters
for each model.
B.1.1 Knowledge Graph Embeddings
We use the max-margin ranking loss for all KGE methods. We use a batch size of 512 for
all models. We train models for 10,000 steps (958 epochs) on RepoDB, 50,000 steps (205
epochs) on Hetionet, and 50,000 steps (66 epochs) on MSI. We evaluate on the validation
set every 500 steps for RepoDB and 5,000 steps for Hetionet and MSI. We use the Adam
optimizer for training. We perform a hyperparameter search over the following values:
• Embedding dimension: 500, 1000, 2000
• Margin for max-margin loss: 0.1, 1
• Learning rate: 1e-3, 1e-4
• Number of negative samples per positive: 128, 256
• Parameter for L3 regularization of embeddings: 1e-5, 1e-6
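As a hedged illustration of this setup (not the exact training code; DistMult is used here only as a representative KGE scorer), the max-margin objective with L3 regularization could be written in PyTorch as:

import torch
import torch.nn as nn

class DistMult(nn.Module):
    # Representative KGE scorer; other scoring functions plug in the same way.
    def __init__(self, n_entities, n_relations, dim=500):
        super().__init__()
        self.ent = nn.Embedding(n_entities, dim)
        self.rel = nn.Embedding(n_relations, dim)

    def score(self, h, r, t):
        return (self.ent(h) * self.rel(r) * self.ent(t)).sum(dim=-1)

def kge_loss(model, pos, neg, margin=1.0, l3_weight=1e-5):
    # pos, neg: LongTensors of shape (batch, 3) holding (head, relation, tail) ids.
    pos_scores = model.score(pos[:, 0], pos[:, 1], pos[:, 2])
    neg_scores = model.score(neg[:, 0], neg[:, 1], neg[:, 2])
    rank_loss = torch.relu(margin - pos_scores + neg_scores).mean()
    reg = model.ent.weight.abs().pow(3).sum() + model.rel.weight.abs().pow(3).sum()
    return rank_loss + l3_weight * reg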
B.1.2 Language Models
Pretrained scientific LMs. We explore various pretrained LMs, with their initialization,
vocabulary, and pretraining corpora described below. In particular, we study a range of
LMs trained on different scientific and biomedical literature, and also on clinical notes.
• BioBERT [Lee et al., 2020] Initialized from BERT and using the same general do-
main vocabulary, with additional pretraining on the PubMed repository of scientific
abstracts and full-text articles.
• Bio+ClinicalBERT [Alsentzer et al., 2019] Initialized from BioBERT with addi-
tional pretraining on the MIMIC-III corpus of clinical notes.
• SciBERT [Beltagy et al., 2019] Pretrained from scratch with a domain-specific vocabulary on a sample of the Semantic Scholar corpus, in which biomedical papers form a significant fraction alongside papers from other scientific domains.
• PubMedBERT [Gu et al., 2020] Pretrained from scratch with a domain-specific
vocabulary on PubMed. We apply two versions of PubMedBERT, one trained on
PubMed abstracts alone (PubMedBERT-abstract) and the other on abstracts as well
as full-text articles (PubMedBERT-fulltext).
We also use RoBERTa [Liu et al., 2019] – pretrained from scratch on the BookCor-
pus, English Wikipedia, CC-News, OpenWebText, and Stories datasets – as a strongly-
performing general domain model for comparison. For all LMs, we follow Kim et al. [2020]
and use the multi-task loss consisting of binary triple classification, multi-class relation
classification, and max-margin ranking loss, with a margin of 1 for the max-margin loss.
For triple classification, given the correct label y ∈ {0, 1} (positive or negative triple)
we apply a linear layer to the [CLS] token representation v to output the probability
p of the triple being correct as p = σ(Wtriplev), and use the binary cross entropy loss
Ltriple(x) = −y log(p) − (1 − y) log(1 − p). For relation classification over R relation types,
we apply a linear layer to v to calculate a probability distribution q over relation classes
with q = softmax(Wrelv), and use the cross entropy loss with one-hot vector y ∈ {0, 1}R as
the correct relation label: Lrel(x) = −∑_{i=1}^{R} yi log qi. The final loss is the equally-weighted
sum of all three losses: L(x) = Lrank(x) + Ltriple(x) + Lrel(x).
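A minimal PyTorch sketch of these three heads and the combined loss (our own illustration with hypothetical layer names; it assumes one negative per positive and that v is the [CLS] representation produced by the LM) is:

import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTaskKGHead(nn.Module):
    # Linear heads on top of the [CLS] vector v for the three losses described above.
    def __init__(self, hidden_size, n_relations):
        super().__init__()
        self.w_triple = nn.Linear(hidden_size, 1)          # binary triple classification
        self.w_rel = nn.Linear(hidden_size, n_relations)   # relation classification
        self.w_rank = nn.Linear(hidden_size, 1)            # ranking score

    def forward(self, v):
        return self.w_triple(v).squeeze(-1), self.w_rel(v), self.w_rank(v).squeeze(-1)

def multitask_loss(head, v_pos, v_neg, rel_labels, margin=1.0):
    # v_pos, v_neg: (batch, hidden) [CLS] vectors for positive/negative triples;
    # rel_labels: (batch,) LongTensor of relation ids for the positive triples.
    p_logit, rel_logits, s_pos = head(v_pos)
    n_logit, _, s_neg = head(v_neg)
    l_triple = F.binary_cross_entropy_with_logits(
        torch.cat([p_logit, n_logit]),
        torch.cat([torch.ones_like(p_logit), torch.zeros_like(n_logit)]))
    l_rel = F.cross_entropy(rel_logits, rel_labels)          # relation loss on positives
    l_rank = torch.relu(margin - s_pos + s_neg).mean()       # max-margin ranking loss
    return l_rank + l_triple + l_rel                         # equally-weighted sum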
We train for 40 epochs on RepoDB, and 10 epochs on Hetionet and MSI. We evaluate
on the validation set every epoch for RepoDB, and three times per epoch for Hetionet and
MSI. For RepoDB, Hetionet, and MSI we use 32, 16, and 8 negative samples per positive,
respectively. We use the Adam optimizer for training. We perform a hyperparameter search
over the following values:
• Batch size: 16, 32
• Learning rate: 1e-5, 3e-5, 5e-5
B.1.3 Integrated Models
Global weighted average. For the global weighted average, we compute ranking scores
for positive and negative examples as the weighted average of ranking scores output by all
KG completion models being integrated. Specifically, for a set of ranking scores s1, . . . , sk
output by k models for an example, we learn a set of weights α = [α1, . . . , αk] to compute
the final ranking score as s = ∑_{i=1}^{k} αi si, where the same weight vector α is used for all
examples. We search for each αi over the grid [0.05, 0.95] with steps of 0.05, ensuring that
all αi’s sum to 1. We choose values that maximize validation set MRR, then apply them to
the test set.
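The grid search itself is straightforward; a sketch for the two-model case (illustrative helper names; the pos/neg arrays hold validation-set ranking scores from each model) is:

import numpy as np

def mrr(pos_scores, neg_scores):
    # pos_scores: (N,); neg_scores: (N, m) scores of the m negatives per positive.
    ranks = 1 + (neg_scores >= pos_scores[:, None]).sum(axis=1)
    return float((1.0 / ranks).mean())

def fit_global_weights(pos_by_model, neg_by_model, step=0.05):
    # Grid-search a convex combination alpha * model_0 + (1 - alpha) * model_1.
    best_alpha, best_mrr = None, -1.0
    for alpha in np.arange(step, 1.0, step):
        pos = alpha * pos_by_model[0] + (1 - alpha) * pos_by_model[1]
        neg = alpha * neg_by_model[0] + (1 - alpha) * neg_by_model[1]
        score = mrr(pos, neg)
        if score > best_mrr:
            best_alpha, best_mrr = float(alpha), score
    return best_alpha, best_mrr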
Router. For the router-based method, we train a classifier to select a single model out of
a set of KG completion models to use for computing ranking scores for a positive example
and its associated negatives. The class to be predicted for a particular example corresponds
to which model performs best on that example (i.e., gives the best rank), with an additional
class for examples where all models perform the same. We explore a number of different clas-
sifiers, including logistic regression, decision tree, gradient boosted decision tree (GBDT),
and multilayer perceptron (MLP), finding that GBDT and MLP classifiers perform the
best. As input to the classifier, we use a diverse set of features computed from each positive
example (listed in Table 6) as well as each model’s ranking score for the positive example.
Classifiers are trained on the validation set and evaluated on the test set for each dataset.
We additionally perform hyperparameter tuning over the following values for each classifier:
Logistic regression:
• Penalty: L1, L2
• Regularization parameter: 9 values evenly log-spaced between 1e-5 and 1e3
Decision tree:
• Max depth: 2, 4, 8
• Learning rate: 1e-1, 1e-2, 1e-3
GBDT:
• Number of boosting rounds: 100, 500, 1000
• Max depth: 2, 4, 8
• Learning rate: 1e-1, 1e-2, 1e-3
MLP:
• Number of hidden layers: 1, 2
• Hidden layer size: 128, 256
• Batch size: 64, 128, 256
• Learning rate: 1e-1, 1e-2, 1e-3
We perform five-fold cross-validation on the validation set and use validation set accuracy
to choose the best set of hyperparameters for each classifier. We use Scikit-Learn [Pedregosa
et al., 2011] to implement the logistic regression and MLP classifiers, and XGBoost [Chen
and Guestrin, 2016] to implement the decision tree and GBDT classifiers, using default
parameters other than the ones listed above.
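For the GBDT variant, a hedged sketch using XGBoost's scikit-learn interface and the hyperparameter grid above (feature construction is omitted) could look like:

from sklearn.model_selection import GridSearchCV
from xgboost import XGBClassifier

def train_gbdt_router(features, best_model_labels):
    # features: (N, d) matrix built from Table 6 features plus each model's positive score;
    # best_model_labels: for each validation example, the index of the best-ranking model.
    param_grid = {
        "n_estimators": [100, 500, 1000],
        "max_depth": [2, 4, 8],
        "learning_rate": [1e-1, 1e-2, 1e-3],
    }
    search = GridSearchCV(XGBClassifier(), param_grid, cv=5, scoring="accuracy")
    search.fit(features, best_model_labels)
    return search.best_estimator_

# At test time each example is scored only by the model the router selects, e.g.:
# chosen_model = router.predict(test_features)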
Input-dependent weighted average. The input-dependent weighted average method
of integrating KG completion models operates similarly to the global weighted average,
except that the set of weights can vary for each positive example and are a function of its
feature vector (the same set of weights is used for all negative examples used to rank against
each positive example). We train an MLP to output a set of weights that are then used
to compute a weighted average of ranking scores for a set of KG completion models. The
MLP is trained on the validation set and evaluated on the test set for each dataset. We use
the max-margin ranking loss with a margin of 1. In order to compare to the MLP trained
as a router, we train the MLP using the Adam optimizer [Kingma and Ba, 2015] for 200
epochs with early stopping on the training loss and a patience of 10 epochs (the default
settings for an MLP classifier in Scikit-Learn). We perform a hyperparameter search over
the following values (matching the values for the MLP router where applicable):
• Number of hidden layers: 1, 2
• Hidden layer size: 128, 256
• Batch size: 64, 128, 256
• Learning rate: 1e-1, 1e-2, 1e-4
• Number of negatives (for max-margin loss): 16, 32
We select the best hyperparameters by MRR on a held-out portion of the validation set.
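A compact PyTorch sketch of this input-dependent weighting (illustrative only; the MLP maps an example's feature vector to softmax weights over the k base models, and the positive's weights are reused for its negatives) is:

import torch
import torch.nn as nn

class WeightingMLP(nn.Module):
    # Maps an example's feature vector to softmax weights over k base models.
    def __init__(self, n_features, k_models, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(), nn.Linear(hidden, k_models))

    def forward(self, feats, model_scores):
        # feats: (batch, n_features); model_scores: (batch, k) ranking scores
        weights = torch.softmax(self.net(feats), dim=-1)
        return (weights * model_scores).sum(dim=-1)  # combined ranking score

def ranking_step(mlp, feats, pos_scores, neg_scores, margin=1.0):
    # neg_scores: (batch, n_neg, k); the positive's weights are reused for its negatives.
    s_pos = mlp(feats, pos_scores)                              # (batch,)
    w = torch.softmax(mlp.net(feats), dim=-1).unsqueeze(1)      # (batch, 1, k)
    s_neg = (w * neg_scores).sum(dim=-1)                        # (batch, n_neg)
    return torch.relu(margin - s_pos.unsqueeze(1) + s_neg).mean()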
Features for integrated models. Both the router and input-dependent weighted average methods of model integration use a function that outputs weights based on a feature vector of an example. A complete list of the features used by each method can be found in
Table 6. We also use the ranking score for the positive example from each KG completion
model being integrated as additional features.
B.2 Inductive Setting
B.2.1 Knowledge Graph Embeddings and Language Models
For the KGE and LM models, we follow the same training procedure for the inductive splits
as for the transductive splits. We perform hyperparameter tuning over the same grids of
hyperparameters, periodically evaluate on the validation set and save the checkpoint with
the best validation set MRR, and use the set of hyperparameters corresponding to the
highest validation set MRR to evaluate on the test set.
entity type
relation type
head/tail node in-/out-degree
head/tail node PageRank
Adamic-Adar index of edge
edit dist. between head/tail entity names
length of text in chars.
presence of word “unknown” in name/desc.
missing desc.
number/ratio of punctuation/numeric chars.
tokens-to-words ratio of entity name/desc.
Table 6: Complete list of features used by router classifiers.
B.2.2 DKRL
In addition to the KGE and LM-based methods, we also train DKRL [Xie et al., 2016]
for inductive KG completion as another text-based baseline for comparison. DKRL uses a
two-layer CNN encoder applied to the word or subword embeddings of an entity’s textual
description to construct a fixed-length entity embedding. To score a triple, DKRL combines
its entity embeddings constructed from text with a separately-learned relation embedding
using the TransE [Bordes et al., 2013] scoring function. The original DKRL model uses a
joint scoring function with structure-based and description-based components; we restrict
to the description-based component as we are applying DKRL in the inductive setting. We
use PubMedBERT subword embeddings at the input layer of the CNN encoder, encode
entity names and descriptions, and apply the same multi-task loss as for the LM-based
models. To apply the triple classification and relation classification losses, for head and tail
entity embeddings h and t, we apply a separate linear layer for each loss to the concatenated
vector [h; t; |h − t|], following previous work on models that use a bi-encoder to construct
entity or sentence representations [Wang et al., 2021, Reimers and Gurevych, 2019]. We
use the same number of training epochs and number of negatives per positive for DKRL
as for the LM-based methods on each dataset. We use a batch size of 64, and perform a
hyperparameter search over the following values:
• Learning rate: 1e-3, 1e-4, 1e-5
• Embedding dimension: 500, 1000, 2000
• Parameter for L2 regularization of embeddings: 0, 1e-3, 1e-2
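For reference, a simplified sketch of the description-only DKRL scorer used here (illustrative; the full model additionally shares the multi-task heads described above, and token_embs would come from PubMedBERT's subword embedding layer) is:

import torch
import torch.nn as nn

class DescriptionCNN(nn.Module):
    # Two-layer CNN over (sub)word embeddings of an entity description (DKRL-style).
    def __init__(self, emb_dim, out_dim, kernel=3):
        super().__init__()
        self.conv1 = nn.Conv1d(emb_dim, out_dim, kernel, padding=kernel // 2)
        self.conv2 = nn.Conv1d(out_dim, out_dim, kernel, padding=kernel // 2)

    def forward(self, token_embs):            # (batch, seq_len, emb_dim)
        x = token_embs.transpose(1, 2)        # Conv1d expects (batch, channels, seq_len)
        x = torch.relu(self.conv1(x))
        x = torch.relu(self.conv2(x))
        return x.max(dim=-1).values           # max-pool over tokens -> (batch, out_dim)

def transe_score(h, r, t):
    # Description-based TransE score for a triple; higher means more plausible.
    return -torch.norm(h + r - t, p=1, dim=-1)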
Appendix C. Additional Results
C.1 Transductive Setting, Individual Models
Missing entity descriptions. Table 7 shows test set MRR for KG-PubMedBERT on
each dataset broken down by triples with either both, one, or neither entities having available
descriptions. Across datasets, performance clearly degrades when fewer descriptions are
available to provide context for the LM to generate a ranking score.
Relation-level performance. Figure 4 shows test set MRR broken down by relation for
the datasets with multiple relation types (Hetionet and MSI). KG-PubMedBERT performs
better on all relation types except compound-side effect for Hetionet, and on the function-
function relation for MSI.
#entities with desc. in pair | RepoDB MRR | Hetionet MRR | MSI MRR
None | N/A | 25.6 | 25.1
One | 59.5 | 43.6 | 25.4
Both | 63.7 | 52.6 | 37.3
Table 7: Effect of descriptions on KG-PubMedBERT test set MRR.
Figure 4: Test set MRR for the best KGE model compared to KG-PubMedBERT broken
down by relation type for Hetionet and MSI.
C.2 Transductive Setting, Integrated Models
Method | RepoDB | Hetionet | MSI
Global avg. | 70.4 | 55.8 | 42.1
Input-dep. avg. | 65.9 | 70.3 | 40.6
Router | 70.6 | 59.7 | 48.5
Table 8: Test set MRR for the best pair of a KGE model and KG-PubMedBERT for
different methods of model integration.
C.3 Inductive Setting
Figure 5: Feature importances for GBDT router for a selection of most important features.
Ranking scores output by each model tend to be the most important, with other graph-
and text-based features also contributing.
Imputation Model with Better Ranking | Unseen Entity | KG-PubMedBERT nearest neighbors | PubMedBERT nearest neighbors
PubMedBERT | eye redness | skin burning sensation, skin discomfort | conjunctivitis, throat sore
PubMedBERT | ecchymosis | gas, thrombophlebitis | petechiae, macule
PubMedBERT | estrone | vitamin a, methyltestosterone | estriol, calcitriol
KG-PubMedBERT | keratoconjunctivitis | conjunctivitis allergic, otitis externa | enteritis, parotitis
KG-PubMedBERT | malnutrition | dehydration, anaemia | meningism, wasting generalized
KG-PubMedBERT | congestive cardiomyopathy | diastolic dysfunction, cardiomyopathy | carcinoma breast, hypertrophic cardiomyopathy
Table 9: Samples of unseen entities and their nearest neighbors found by KG-PubMedBERT
and PubMedBERT, for test set examples in the Hetionet inductive split where the Pub-
MedBERT neighbor performs better than the KG-PubMedBERT neighbor (first three) and
vice versa (last three). Each LM offers a larger improvement per example when its nearest
neighbor is more semantically related to the unseen entity.
|
ai_researcher | 1 | Magnetic_resonance_imaging_as_a_tool_for_quality_control_in_extrusion‐based_bioprinting.pdf | Improvement of Printing Quality for Laser-induced Forward Transfer based Laser-
Assisted Bioprinting Process using a CFD-based numerical model
Jie Qua,b, Chaoran Douc, Ben Xud*, Jianzhi Lic*, Zhonghao Raob,, Andrew Tsine
a. Department of Mechanical Engineering, The University of Texas Rio Grande Valley, Edinburg, TX
78539, USA
b. School of Electrical and Power Engineering, China University of Mining and Technology, Xuzhou
221116, China
c. Department of Manufacturing and Industrial Engineering, The University of Texas Rio Grande
Valley, Edinburg, TX 78539, USA
d. Department of Mechanical Engineering, Mississippi State University, Mississippi State, MS 39762,
USA
e. Department of Molecular Science, The University of Texas Rio Grande Valley School of Medicine,
Edinburg, TX, 78539, USA
Corresponding author
Ben Xu: Tel.: +1 (662) 325-5632; Email: [email protected]
Jianzhi Li: Tel.: +1 (956) 665-7329; Email: [email protected]
Abstract:
As one of the three-dimensional (3D) bioprinting techniques with great application potential, laser-induced-forward-transfer (LIFT) based laser assisted bioprinting (LAB)
transfers the bioink through a developed jet flow, and the printing quality highly depends on
the stability of jet flow regime. To understand the connection between the jet flow and printing
outcomes, a Computational Fluid Dynamic (CFD) model was developed for the first time to
accurately describe the jet flow regime and provide a guidance for optimal printing process
planning. By adopting the printing parameters recommended by the CFD model, the printing
quality was greatly improved by forming stable jet regime and organized printing patterns on
the substrate, and the size of printed droplet can also be accurately predicted through a static
equilibrium model. The ultimate goal of this research is to direct the LIFT-based LAB process
and eventually improve the quality of bioprinting.
Keywords: Laser assisted bioprinting; Laser Induced Forward Transfer (LIFT); Bioink;
Bubble formation and collapse; Jet Regime; Computational Fluid Dynamics (CFD).
INTRODUCTION
3D bioprinting is an emerging technology that has been investigated in fields varying from
printing of live cells to biosensors fabrication and from stem cell fabrication to artificial organ
generation (1-3). 3D bioprinting has gained special momentum in generation of the 3D
functional tissues and organs due to its capability of periodic arrangement of various biological
materials in a precisely controlled manner (4). As one kind of the 3D bioprinting techniques,
laser assisted bioprinting (LAB) can print biological materials with as small as cell-level
resolution, therefore by controlling the cell density and organization, LAB potentially holds a
great promise to fabricate living tissues or organs with biomimetic physiological functionality
(5). LAB is based on the principle of laser induced forward transfer (LIFT), which was first
proposed by Bohandy et al. in 1986 (6) as an accurate solid deposition technology with high
resolution. LIFT uses a pulsed laser beam focused through a transparent glass/quartz plate onto
a donor layer coated on the other side of the plate to eject a tiny volume of the donor material
towards a receiving substrate (7). The bioink transfer in LAB process is believed as the key to
the formation and growth of a vapor bubble and a jet because of the rapid evaporation caused
by the high energy laser pulse (8, 9). LIFT-based LAB has great advantages over other
bioprinting technologies. These advantages include non-contact printing, high fabrication
precision and high adaptability, supporting different cell patterns with good cell viability
(~85%) (2). LIFT has similar functionality to droplet-on-demand inkjet printing (nozzle-based),
however, since LIFT is a nozzle-free process, it does not suffer from nozzle clogging and
compatibility issues between bioink and nozzle’s materials, which provide the possibility to
print bioink with a variety of properties (viscosity, and density etc.) (10).
Due to these advantages, LIFT-based LAB has drawn attentions from researchers and
practitioners for its potential application in printing tissue or organs (11-16). Nevertheless, the
main drawback of LIFT-based LAB is also due to its high resolution, resulting in a low flow
rate, therefore it may experience some difficulty to accurately position cells on the receiving
substrate (17-19). In addition, even though the nozzle-free feature resolves the clogging issue,
it in turn has no restrictions to the flow direction and the jet regime, since the bioink transfer
process completely depends on the formation of jet flow, therefore if the flow and jet regime
cannot be controlled precisely, the process could suffer from deteriorated printing quality. As
shown in Fig.1, when the jet flow is not fully developed, no bioink can be transferred from the
coated quartz to the receiving substrate. Even if the bioink can be transferred, there are still two
scenarios which may affect the printing process: the plume and the splashing cases, which
actually will lead to unorganized printing pattern on the substrate with irregular droplet
surrounded by many splashes. Those two printing patterns are not acceptable for precise
bioprinting and the scattered droplet distribution strongly influences the final printing quality
as well as the cell viability. Fig. 1 shows that only the stable jet can achieve controlled printing
patterns with organized and circular droplets, therefore this is the only transfer scenario that
allows for precise printing with a good printing quality and high cell viability. Consequently,
a deep understanding of the jet flow regime is critical to the adoption of LIFT-based LAB
process.
As agreed upon in several reported investigations, a variety of printing parameters can affect the jet flow regime and, in turn, the printing patterns on the substrate. These parameters include
the pulse laser energy intensity (20-22), the focal spot size (23), the liquid layer thickness,
material properties (5, 19) and so on. Therefore, it is extremely difficult to theoretically model
the formation of jet flow because of its nature of complex multiphysics and multiscale
phenomena involved in the LIFT based LAB process. For example shock wave (24), plasma
generation (25) and irradiation (26), are reported in the laser-liquid interaction during the LIFT-
based bioprinting process. Meanwhile, the laser-liquid interaction occurs in an extremely fast
manner with a typical time duration ranging from 10-10s to 10-12s, while the jet development
process could take a time period ranging from 10-3s to 10-6s. These multiscale time duration
will certainly complicates the attempt of developing accurate mathematical models. As a result,
most reported studies required tedious experimental efforts to explore the relationship between
the jet flow regime and the final printing outcomes, in order to fully understand the relationship
between the process parameters and the formation of a stable jet.
Fig. 1. Different Jet regime and corresponding printed pattern (17, 19, 22, 27)
Computational Fluid Dynamics (CFD) simulation is a very popular approach to predict the
formation of jet and bubble in various multiphase transport processes. It could bring a good
opportunity for reducing the tedious experimental efforts required in investigation of the LIFT-
based LAB process. However, considering the complex multiphysics phenomena at the very
beginning stage of LAB, modeling the laser-liquid interaction process from a multiscale point
of view in a very concise and consistent way becomes extremely difficult. Through literature
review, while there are investigations that attempted to explain thoroughly the laser-liquid
interaction mechanism in LAB, most of the current work either ignored the initial bubble
forming process, or relied on experimental observations by missing key information in small
scales, based on which, assumptions are made. For example, Brown et al. (28) and Kalaitzis et
al. (29) chose to experimentally track the interface deformation during the bioprinting, and then
utilized
the experimental results as the moving boundary condition to model the liquid
movement and the jet. This model highly relied on the earlier experimental results, therefore it
3
is only applicable under specific conditions, such as the same energy input, the same liquid
layer thickness, and the same liquid properties. The other model is the initial bubble model (30,
31), which assumes that the input laser energy is converted into the internal and kinetic energy
of an initial bubble. Most of the published works, which adopted the initial bubble model, chose
the properties and dimensions of the initial bubble (such as the size, pressure and temperature)
based on their own experiments. However, the laser energy intensity, the donor layer thickness
and the position of laser focal point have strong impacts on the formation of jet and bubble
(21), therefore it is extremely hard to extend the reported models to explore the LIFT process
when these parameters are changed (30-32). Consequently,
it is desired to develop a
generalized and solid model to determine the properties and dimensions of such an initial
bubble, then this generalized model can be incorporated in the CFD simulation in order to
precisely model the entire LIFT based LAB process.
In the present work, a novel generalized mathematical model was developed for the first
time to accurately determine the size, pressure and temperature of the initial bubble based on
the energy conservation law, and then a CFD study by incorporating the proposed generalized
mathematical model for the initial bubble was performed for the first time to predict the
formation of jet flow and the final printing pattern on the receiving substrate. The proposed
CFD-directed simulation model was validated and shown its capability of precise prediction of
the jet flow behavior. Furthermore, by utilizing the simulation results as parameters input, a
static equilibrium model was employed to accurately predict the size of the printed droplet.
Meanwhile, a LIFT-based LAB experimental platform was built and utilized to perform more
experimental works by altering the printing parameters, where a femtosecond pulse laser with
1040nm wavelength and a maximum pulse lase energy of 40μJ was adopted in this study, as
shown in Fig. 2. Deionized water with dye was selected as the liquid layer for all the
experimental cases. The printing quality with various printing parameters was analyzed in
detail using the proposed CFD model. By adopting the printing parameters recommended by
the CFD model, the printing quality was greatly improved by forming stable jet regime and
organized printing patterns on the receiving substrate, and the size of printed droplet can be
accurately predicted through the static equilibrium model. The ultimate goal of this research is
to develop a solid connection by utilizing the proposed CFD model to direct the LAB process
and improve the printing quality.
Fig. 2. Experimental platform of LIFT process
RESULTS AND DISCUSSION
In this study, we first performed LIFT bioprinting experiments with non-optimize d
parameters, such as the liquid layer thickness and pulse laser energy intensity. Not surprisingly,
the unstable jet regime was formed so the printing quality was fairly low with unorganized
printing outcomes and irregular droplets on the receiving substrate. CFD study was then
performed so that the appropriate combinations of printing parameters were identified, and the
bioprinting experiments were conducted one more time to verify the predicted results.
Eventually, the printing quality was greatly improved by forming stable jet regime and very
organized printing patterns on the receiving substrate.
First attempt to obtain well-organized printed droplets
As shown in Fig. 3, a laser generator (Spirit One 1040-8) was chosen to generate the pulse
laser, and the laser intensity distribution satisfies the Gaussian distribution. The laser’s
wavelength is 1040nm, its maximum pulse energy is 40μJ, and the pulse duration is 300fs. In
the experiment, every laser pulse was reflected by the mirrors and went through the
galvanometer and the focusing lens, eventually focused on the ribbon, which is a quartz with
liquid layer coated at the bottom. The radius of laser focal spot is 30μm, and the thickness of
the quartz is 0.64cm. Deionized water was selected as the liquid layer. To enhance the
absorption rate of deionized water, 1% w.t. of graphene solution was added as a dye, which
can also introduce an additional benefit of biocompatibility when the actual bioink is used in
the printing process. Since the liquid layer thickness was selected from 1μm to 100μm (7), for
the first attempt in this study, an median liquid layer thickness was selected as 50μm while the
pulse laser energy was varied from 10μJ to 40μJ.
Fig. 3 shows the liquid transfer and printing patterns with 50μm thick liquid layer and
various pulse laser energies. It is important to note that there are two mirror lines at the top and
bottom part of these figures, because the reflection of the two substrates in the figures. To
clearly show the printing patterns, the mirror line at the bottom was marked by a light blue dash
line. From Fig. 3A-D, the jet flow is separated as two stages: the first stage shows that a thin
jet flow came out from the cone-shape structure as marked by the red dash line, and it only
needed a short time period to complete the liquid transfer until 58.8μs, as shown in Fig. 3A-D;
the second stage demonstrates the development of the cone-shape structure, which can be
developed into two sub-stages: 1) the formation of a jet and a single droplet underneath; 2) the
collapse of jet to complete the liquid transfer. The second stage took a longer time to complete
than the first stage. For the case with pulse laser energy of 10μJ or under, the first stage needed
about 176.4μs to be completed (Fig. 3A). However, after 176.4μs, since the pulse laser energy
was too small to develop the cone-shape well, the second stage liquid transfer process could
not be completed. Once the jet collapsed at 176.4μs, the droplet started to move upward instead
of downward. With the development of jet flow, the droplet underneath the ribbon tended to
move downward due to the remaining momentum from the bubble’s fast expansion, while the
cone-shape structure had a tendency to move upward and bounce back to the liquid layer below
the ribbon due to the collapse of bubble and the surface tension, and provided a force pointing
upward. The opposite movement direction between the droplet and the cone-shape structure
eventually led to the break of jet, as shown in Fig. 3A at 235.2μs. For the case with pulse laser
energy of 10μJ, the upward momentum from the cone-shape structure was dominated over
other effects, therefore it made the droplet bounce back to the liquid layer. In this case, the
printed droplet with 10μJ laser energy has the smallest diameter, as shown in Fig. 3E. For jet
flow with laser pulse energy of 20μJ and 30μJ, the first stage can be completed before 117.6μs
with stable jet regime (Fig. 3B and C). And the second stage was well developed with the
droplet nearly touching the substrate. With a bigger energy input, the remaining downward
momentum was dominated, so the underneath droplet moved downward after jet broke,
therefore the second liquid transfer stage can also be completed. The major difference between
the jet flow with 20μJ and 30μJ pulse laser energy was that the second jet was thicker and more
liquid was transferred if 30μJ pulse laser energy was adopted. Apparently, both cases can print
reasonably round shape droplets, and the diameters were around 187.5μm and 237.5μm,
respectively. And the case with 30μJ laser energy input has a bigger droplet area.
0μs
58.8μs 117.6μs 176.4μs 235.2μs 294.0μs 352.8μs 411.6μs 470.4μs 529.2μs 588.0μs
Fig. 3. Liquid transfer and printing patterns with 50μm thick liquid layer. (A) Jet flow with
10μJ pulse laser energy. (B) Jet flow with 20μJ pulse laser energy. (C) Jet flow with 30μJ pulse
laser energy. (D) Jet flow with 40μJ pulse laser energy. (E) Printing patterns of 50μm thickness
liquid layer with different pulse laser energies.
However, once the pulse laser energy was further increased, although the two stages of
liquid transfer can be completed, the jet still cannot hold a stable cone shape and was turned
into a splashing regime, as shown in Fig. 3D. Both the first and second jets broke into multiple
tiny droplets and then scattered. The laser energy input was too big for the liquid layer to hold
and develop a stable jet flow. Meanwhile, the printed droplet on the substrate with 40μJ showed
a chaotic printing pattern, such that a biggest droplet was surrounded by multiple satellite
droplets, as shown in Fig. 3E. The diameter of the biggest droplet was around 43.1μm, while
the smallest droplet had only about 2.5μm diameter. Apparently, this type of printing pattern
was not acceptable for precise LIFT-based bioprinting, because it would completely ruin the
structure of printed tissue or organ.
In summary, from our first attempt of 3D bioprinting using water as the liquid, we can
conclude that only a stable jet could result in well-printed outcomes, and the jet regime can
predict the printing pattern based on the input laser energy. Nevertheless, a quantitative analysis
cannot be developed with such limited information about the jet formation and jet regime,
therefore in the next section we will discuss about the proposed CFD model and simulations.
Numerical simulation of the development of bubble/jet flow during LIFT process
Since the development of bubble/jet flow in the first stage occurs in a wide span of spatial
and temporal scales, it is extremely difficult to monitor the printing process and tune the
printing parameters in order to improve the printing quality. In addition,
the first stage
demonstrates most of the underlined features for the entire LIFT based LAB process, such as
bubble growth and jet breakage, therefore if the development of bubble/jet flow in the first
stage can be controlled precisely, the printing quality will be significantly improved and obtain
well-organized printing patterns. CFD simulation is a powerful and efficient tool which can
assist the design process by reducing the tedious experimental efforts. By combining CFD and
the bioprinting experiment, CFD can predict the unique features of jet and bubble formation in
the first stage, and direct the bioprinting process for better printing quality by recommending
reasonable printing parameters based on the relationship between the jet regime and the
printing patterns on the substrate.
Because the development of bubble/jet flow simulation is a multiphase process, the Volume
of Fluid (VOF) model was employed to track the liquid-gas interface. The geometry and
meshing configuration of the CFD model are shown in Fig. 4A. The computational domain is
part of the LIFT ribbon with various thickness, 800μm width liquid layer and 900μm air in
length. Only half of the model was meshed and simulated because of the axisymmetric
geometry. A structured mesh was used in this case, and the mesh near the boundary was refined.
The boundary conditions are also shown in Fig. 4A. The right side of the liquid layer was
defined as pressure-inlet while that of air zone was defined as pressure-outlet. Besides the
axisymmetric boundary condition at the axis, other boundaries were all defined as “wall”. The
parameters of initial bubbles were patched before simulation started. The vapor was set as the
ideal gas while the liquid and air were assumed incompressible fluid. Considering the jet flow
regime observed in the experiment, the laminar model was selected in the simulation. To
validate the current model with the published experimental works, one case with 100μm thick
65%-glycerol layer and 717 mJ/cm2 laser fluence was simulated and compared with the
experimental results from literature (17), as shown in Fig. 4D.
Fig. 4B shows the simulation results of LIFT process with 100μm 65%-glycerol layer and
7
717 mJ/cm2 laser fluence. It clearly demonstrated the entire development of jet flow, including
generation and breakage. Firstly, once the high-energy laser pulse hit the liquid layer, the rapid
evaporation of liquid generated a high pressure and high temperature initial vapor bubble. Due
to the high pressure vapor inside the bubble, the initial bubble expanded rapidly. Because the
quartz can be considered as a rigid wall boundary condition,
the initial bubble expanded
asymmetrically to a cone shape. With the bubble expansion, the high pressure inside the bubble
was released and decreased. Once the pressure inside the bubble became lower than the outside
atmosphere pressure, the bubble began to collapse. At this time, liquid around the tip of the
bubble moved downward due to the remaining momentum from the fast bubble expansion, and
then the jet flow was formed at the tip of the bubble. Meanwhile, because the viscous forces
and surface tension, a reversed jet inside the bubble was also generated (23), as shown in Fig.
4C. With the development of both jets, the reversed jet reached a much higher velocity than
that of the primary jet, for instance, the velocity of the primary jet was about 49m/s, while that
of the reversed jet was 87m/s. This phenomenon was because the reversed jet was much smaller
than the primary jet, and the pressure inside the bubble was lower than the outside ambient
pressure.
Fig. 4. Simulation of jet flow. (A) Simulation model geometry and meshing configuration. (B)
Jet flow with 100μm thickness 65% glycerol layer and 717 mJ/cm2 laser fluence. (C) Velocity
of jet flow at 8μs. (D) Comparison with experimental results (17).
A comparison between the simulations and experimental results was also provided in Fig.
4D, where all the experimental conditions were maintained the same as the simulation. The
length of jet in the simulation was slightly longer than that of the experiment, and the relative
difference between the simulation and experiment was around 14%. Considering the associated
numerical error, the proposed CFD model can be validated in a reasonable range, therefore it
is trustworthy for other studies in order to identify the appropriate printing parameters for good
printing quality.
Since the experimental results already showed that the liquid transfer and printing pattern
were unacceptable for 50μm thick liquid layer and 40μJ pulse laser input, cases with different
liquid layer thickness (50μm, 100μm, 150μm) with pulse laser energy of 40μJ were studied in
this section to obtain an optimized layer thickness. Meanwhile, cases with 100μm thick liquid
layer and various pulse laser input (10μJ, 20μJ, 30μJ and 40μJ) were also simulated to study
the effect of pulse laser energy. Once all the simulations were completed, in the next section
experiments were carried out by adopting the recommended printing parameters from the
simulations. The printing parameters used in simulations and experiments are shown in Table.1.
Table 1. Parameters for simulations and experiments
Pulse laser
Pulse laser
Pulse laser
Pulse laser
energy-10μJ
energy-20μJ
energy-30μJ
energy-40μJ
Liquid layer thickness-50μm
E-1
Liquid layer thickness -100μm
E-5 / S-2
Liquid layer thickness -150μm
Note: E: experiment; S-simulation.
N/A
E-2
E-6 / S-3
N/A
E-3
E-7 / S-4
N/A
E-4 / S-1
E-8 / S-5
E-9 / S-6
The simulation results of LIFT process with 50μm liquid layer and 40μJ (S-1) are shown
in Fig. 5A. As discussed before, the initial bubble expanded rapidly at first. However, the liquid
layer could not hold the rapid bubble expansion and therefore it was broken at about 0.5μs.
Apparently the stable jet could not be formed for this case, therefore it showed a good
agreement with the experiment in Fig. 3C. With the breakage of the bubble, the high pressure
and high temperature vapor inside were released and then mixed with the ambient. With the
same pulse laser energy input, increasing the liquid layer thickness would help to generate a
stable jet. As shown in Fig. 5B and C, when the liquid layer thickness were increased from
50μm to 100μm (S-5) and 150μm (S-6), the bubble was broken first, and then the bubble kept
developing and formed a regular jet flow. Because a thicker liquid layer was more capable of
holding the vapor bubble, therefore it had a more robust bubble development. It is noteworthy
that the length of jet for S-5 was always longer than that of S-6 at the same instant, as shown
in Fig. 5D. The reason for this phenomenon is that a thicker liquid layer could provide a bigger
flow resistance to slow down the rapid bubble expansion with the same laser energy input. For
both S-5 and S-6, it showed a linear relationship between the length of the jet and the time
duration. The maximum velocity of jet flow with different liquid layer are shown in Fig. 5E.
For S-5, the maximum jet flow velocity could reach 157 m/s at 1μs, while the maximum jet
flow velocity was 89.4 m/s for S-6. In addition, with the bubble expansion, the maximum
velocity decreased until the tip of the jet flow was generated. After that, the velocity was
increased slightly at 4μs for S-5 and at 6μs for S-6, respectively. This phenomenon is because
the liquid tip was less affected by the surface tension when it bulged out from the liquid film,
therefore it kept developing to a longer jet. Furthermore, the maximum velocity eventually
became stable for each case, for example, the maximum velocity of S-5 was about 110m/s,
while it was about 60m/s for S-6.
Fig. 5. Simulation of jet flow. (A) Jet flow with 50μm thick liquid layer and 40μJ pulse laser
energy. (B) Jet flow with 100μm thick liquid layer and 40μJ pulse laser energy. (C) Jet flow
with 150μm thick liquid layer and 40μJ pulse laser energy. (D) The length of jet flow with
different liquid layer thickness. (E) The maximum velocity of jet flow with different liquid layer
thickness.
The simulation results of 100μm liquid layer with various laser energy inputs are shown in
Fig. 6. The bubble expansion and jet formation process of S-5 was already discussed, and these
processes for other cases with smaller laser energy input were all very similar. Nevertheless,
the size of bubble, the length of jet and the velocity were different for those cases. At the same
time instant of simulation, the size of expanded bubble increased with the increasing of pulse
laser energy, and the length of jet flow increased with the increasing of pulse laser energy as
well. For certain pulse laser energy input, the length of the jet flow and its time duration showed
a linear relationship, but the relationship between the length of the jet flow at the same instant
and pulse laser energy was nonlinear (Fig. 6D). The velocity of the jet flow also increased with
the increasing of pulse laser energy. With the developing of jet flow, the velocity remained
almost as a constant after 4μs. The velocity of the stable jet flow with 10μJ (S-2) was about
25m/s, while it was around 70m/s for S-3 with 20μJ, increased about 180%. However, for the
pulse energy changing from 20μJ (S-3) to 30μJ (S-4), the velocity was only increased 33.3%,
which also showed a nonlinear relationship between the velocity and the laser energy input.
Fig. 6G shows the mass flow rate versus time for cases with different laser energies. The mass
flow rate was defined by the amount of liquid moved downward through the initial liquid-a ir
interface per unit time. From Fig. 6G, the mass flow rate decreased with the development of
the jet flow. Even though the tip of jet flow remained at a similar level of velocity, the whole
jet flow was slowed down by the bubble collapse, and the adhesion force also provided flow
resistances. In addition, it also showed a nonlinear relationship between the mass flow rate and
the pulse laser energy (Fig. 6H).
To summarize this section, by adopting the proposed CFD model, different cases with
various liquid layer thicknesses and laser energies were investigated numerically, and the
developing of the bubble expansion and jet flow was clearly described. With the increase of
liquid layer thickness from 50μm to 100μm for cases with 40μJ pulse laser energy, the jet
regime developed from unstable jet to stable jet regime. With the increase of the pulse laser
energy for the cases with 100μm thick liquid layer, the length and velocity of the jet both got
increased. Based on the simulation results, a stable jet can be obtained by choosing 100μm
liquid layer with various pulse laser energy from 10μJ to 40μJ. In conclusion, for pulse laser
energy varying from 10μJ to 40μJ, the CFD simulations recommended a liquid layer thickness
around 100μm for a better printing quality.
11
Fig. 6. Simulation results of jet flow with different laser energy. (A) Jet flow with 100μm
thickness liquid layer and 10μJ pulse laser energy. (B) Jet flow with 100μm thickness liquid
layer and 20μJ pulse laser energy. (C) Jet flow with 100μm thickness liquid layer and 30μJ
pulse laser energy. (D) Jet flow with 100μm thickness liquid layer and 40μJ pulse laser energy.
(E) The length of jet flow with different laser energy. (F) The maximum velocity of jet flow with
different laser energy. (G) The mass flow rate of jet flow versus time with different laser energy.
(H) The mass flow rate of jet flow versus laser energy.
Printed droplets after optimization
In this section, we tried to utilize these recommended printing parameters to experimentally
print out the droplets and also find out the connection between the size of printing pattern and
the characteristics of jet flow.
The liquid transfer of 150μm and 100μm thick water layer with 40μJ pulse laser energy are
shown in Fig. 7A and B. No jet flow and liquid transfer were observed when the liquid layer
thickness was 150μm (Fig. 7A). Compared with 50μm liquid layer (Fig.3), the same amount
of pulse laser energy input could not provide adequate pressure to overcome a bigger flow
resistance. The generated bubble still could be expanded, but it only formed a peak at 117.6μs,
and started to collapse afterwards. At about 400μs, the liquid layer returned to a flat surface at
the upper layer. When the liquid layer thickness was 100μm (Fig. 7B), a complete process of
both the first and second stages of jet flow was formed. The jet flow in the first stage was
connected with the substrate at 58.8μs while in the second stage it was connected with the
substrate at about 235.2μs. This happened because the laser energy input was large enough to
drive the jet flow and develop to a sufficient length, at the same time the jet with the 100μm
thick liquid layer (E-8 in Table 1) was also robust enough not to break during the jet developing.
Later on the linkage between the liquid layer and the substrate became thinner and thinner, and
eventually detached from the top liquid layer between 411.6μs and 470.4μs, as shown in Fig.
7B. The broken linkage finally formed a droplet due to the surface tension and fell on the
substrate by completing the second stage of liquid transfer.
Fig. 7. Liquid transfer with different thickness liquid layer and pulse laser energy. (A) Jet flow
of 150μm thick liquid layer with 40μJ pulse laser energy. (B) Jet flow of 100μm thick liquid
layer with 40μJ pulse laser energy.
More cases with 100μm thick liquid layer and different pulse laser energy inputs were
experimentally investigated, and the liquid transfer and printing patterns were shown in Fig.8.
As predicted by the CFD studies in the previous section, a stable jet can be formed for the cases
with 100μm thick liquid layer and pulse laser energy varying from 10μJ to 40μJ. The test results
actually demonstrated that the jet flow process with 100μm thick liquid layer showed very
similar phenomenon as the case with 50μm thick liquid layer, as shown in Fig. 8 A-C, where
13
the connection between the jet and the liquid layer became much thinner while maintaining the
liquid transfer, and a separated droplet (marked by the blue dash circle) was formed on top of
the primary droplet. The gourd-shaped droplet was also formed and can be observed at 235.2μs
in E-5, 294.0μs in E-6 and E-7. However, the gourd-shaped droplet could not be detected when
the liquid layer thickness was 50μm. Therefore, the jet flow remained more robust for the cases
with thicker liquid layer than other cases, for instant the liquid layer thickness of jet in E-2 and
E-6 were 33.8μm and 92.9μm at 176.4μs, respectively.
Fig. 8 A-C shows the moving trajectory of the separated droplet, which is marked by the
blue dashed circle. For the liquid transfer process with the 50μm thick liquid layer, the velocity of
the separated droplet increased with increasing pulse laser energy, and when the pulse laser
energy reached 40μJ, the jet regime changed from the stable jet to the splashing jet mode.
In contrast, the velocity of the separated droplet was not strongly affected by the pulse laser energy
input for the case with the 100μm thick liquid layer (Fig. 8F). This is probably because the separated
droplet was almost static for the stable jet, i.e., its initial velocity was
almost zero. With the assistance of gravity, the separated droplet then fell onto the receiving
substrate, and with such a short distance between the two substrates and such a short time period, the
falling velocity was about the same for all the cases with different pulse laser energy inputs.
However, when the pulse laser energy reached 40μJ (Fig. 8D), the jet flow could directly
connect with the substrate, and no separated droplets were formed. Similarly, the size of the printed
droplet on the substrate increased with increasing pulse laser energy. As shown in Fig. 8G,
for the same pulse laser energy, the size of the printed droplet on the substrate for the case with the
100μm thick liquid layer was larger than for the case with the 50μm thick liquid layer. Considering
the unstable jet regime of the liquid transfer process with 50μm liquid layer thickness and pulse
laser energies of 10μJ and 40μJ, the droplet sizes in these cases were not typical. For the case with the
100μm thick liquid layer, the droplet size showed a linear relationship with the pulse
laser energy, which agrees with the conclusions of Lin et al. (8) and Kattamis et al. (33):
their results also indicated that the LIFT processes with and without an absorption layer
show a linear relationship between droplet size and laser energy input, and therefore
share a similar mechanism of liquid transfer.
[Fig. 8 image sequences and plots: panels A-D show frames recorded in 58.8μs steps (up to 999.6μs depending on the case); panels E-G show the printing patterns and measured data]
Fig. 8. Liquid transfer and printing patterns with a 100μm thick liquid layer. (A) Jet flow with
10μJ pulse laser energy. (B) Jet flow with 20μJ pulse laser energy. (C) Jet flow with 30μJ pulse
laser energy. (D) Jet flow with 40μJ pulse laser energy. (E) Printing patterns with different
pulse laser energies. (F) The velocity of the separated droplet. (G) Printed droplet
size for different liquid layer thicknesses and pulse laser energies.
Based on the discussions above, the printing parameters recommended by the CFD
simulation were shown to ensure a stable jet regime and to improve the printing quality. Because
the initial jet flow significantly affects the printing quality and the size of the printed patterns on
the substrate, a quantitative analysis is desired to reveal the relationship between the jet flow
and the size of the printing pattern. A regression curve fit and a static equilibrium model
were developed in this study to predict the size of the printed droplet by utilizing the simulation
results as input parameters, and the experimental results were utilized to verify the prediction.
The flow chart of the comparison strategy between the simulation and experimental results is
shown in Fig. 9.
Based on the conclusion from van Dam & Le Clerc (34), the velocity and volume of the droplet
are the two main factors that influence the size of the printing pattern. Since the moving
velocity of the jet flow was already obtained from the simulations, the transferred liquid volume
can be plotted against the mass flow rate obtained from the simulation, as shown in Fig. 10A. The
volume of the transferred liquid and the mass flow rate showed a linear relationship, and a
regression model can be obtained as V = 1.13×10-6 ṁ, where V is the transferred liquid volume
and ṁ is the mass flow rate. The coefficient of this curve-fitting equation, 1.13×10-6, is related
to the developing time of the jet flow and the distance between the two substrates.
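As a concrete illustration of this curve-fitting step, the short Python sketch below fits a proportional model V = k·ṁ to a set of (mass flow rate, transferred volume) pairs. The numerical values are placeholders for illustration only, not the actual simulation data, and the unit system is assumed; only the fitting procedure itself mirrors the regression described above.

import numpy as np

# Placeholder (assumed) simulation outputs for the four pulse energies:
# mass flow rate and transferred liquid volume in a consistent unit system.
mass_flow_rate = np.array([0.7, 1.0, 1.4, 1.9])          # e.g. g/s (assumed values)
transferred_volume = np.array([0.80, 1.12, 1.58, 2.15])  # e.g. nL (assumed values)

# Least-squares fit of a line through the origin, V = k * mdot,
# mirroring the linear regression used for Fig. 10A.
k = np.sum(mass_flow_rate * transferred_volume) / np.sum(mass_flow_rate ** 2)
print(f"fitted coefficient k = {k:.3e}")

# Predict the transferred volume for a new simulated mass flow rate.
mdot_new = 1.2
print(f"predicted volume for mdot = {mdot_new}: {k * mdot_new:.3f}")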
[Fig. 9 flow chart: the CFD model provides the mass flow rate, which is curve-fitted to give a predicted transferred liquid volume; the static equilibrium model then converts this volume into a predicted printed droplet size. Both the predicted transferred liquid volume and the predicted droplet size are compared with the transferred liquid volume and printed droplet size obtained from the experiments.]
Fig. 9. Flow chart of comparison strategy between simulation results and experimental
results
In addition, we can also utilize a mathematical model to predict the maximum size of the
printed droplet on the receiving substrate, and then compare it with the experiment, as shown
in Fig. 9. Since the droplet on the substrate is in a static state, the size of the droplet is only
related to the volume, the surface tension of the liquid, and the surface properties of the substrate.
Assuming the droplet takes the shape of a part of a sphere, the static equilibrium equation is shown as
follows (35),
(1)
where σ is the surface tension, θ is the contact angle, ρ is the density, r is the radius of the droplet,
z is the height of the droplet, R is the radius of the sphere, and l is the arc length of the droplet.
These geometric quantities could be defined as below:
(2)
(3)
(4)
where V is the volume of the droplet. Based on the discussions above, the volume could be
obtained from both the experimental results or the predicted results calculated by the curve
fitting equation.
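The sketch below illustrates one way to estimate the printed droplet footprint from the transferred volume under the same part-of-a-sphere assumption. It neglects the gravity term of Eq. (1) and uses only standard spherical-cap geometry together with an assumed contact angle, so it is a simplified stand-in for the full static equilibrium model rather than a reproduction of it; the example volume and contact angle are assumed values.

import numpy as np

def spherical_cap_droplet(volume, contact_angle_deg):
    """Contact radius r, height z and sphere radius R of a spherical-cap
    droplet of a given volume, ignoring gravity (assumed simplification)."""
    theta = np.radians(contact_angle_deg)
    # Spherical-cap volume: V = (pi/3) * R^3 * (1 - cos t)^2 * (2 + cos t)
    R = (3.0 * volume
         / (np.pi * (1.0 - np.cos(theta)) ** 2 * (2.0 + np.cos(theta)))) ** (1.0 / 3.0)
    z = R * (1.0 - np.cos(theta))   # cap height
    r = R * np.sin(theta)           # contact (footprint) radius
    return r, z, R

# Example: 1.5 nL droplet (1.5e-12 m^3) with an assumed 60 degree contact angle.
r, z, R = spherical_cap_droplet(1.5e-12, 60.0)
print(f"footprint diameter = {2e6 * r:.1f} um, height = {1e6 * z:.1f} um")
# Geometric consistency check with the sphere-radius relation R = (r^2 + z^2) / (2 z):
print("R check:", np.isclose(R, (r ** 2 + z ** 2) / (2.0 * z)))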
Eq. (1) was adopted to calculate the size of the printed droplet by using the transferred
liquid volume from experimental results and predicted results, and the comparison between the
experiment and simulation is shown in Fig. 10B. Both the simulation and experimental results
showed that the size of the printed droplet increased with increasing pulse laser energy.
Meanwhile, the sizes of the printed droplet calculated from the transferred liquid volume of the
experimental results and from the predicted results both showed good agreement with the actual measured
droplet size, while the simulation results with the liquid volume from the curve-fitting prediction as
input were closer, especially at a pulse laser energy of 40μJ. Utilizing this static equilibrium
model can directly connect the size of the printed droplet with the simulation results, as shown in
Fig. 10C. Furthermore, the static equilibrium model can also be combined with the proposed
CFD model to predict the jet flow regime and the size of the printed droplet, and it can provide a
useful guideline to direct the design of the experimental process.
Eventually, a well-organized pattern of different letters printed on the receiving substrate
is shown in Fig. 10D, where UT stands for the University of Texas and CUMT is the
abbreviation of the China University of Mining and Technology; this is a successful
demonstration of the CFD-based improvement of printing quality for the LIFT-based LAB process.
zRlzRl1/31/31/21/2226263333VVVVzrrr222rzRzarcsinrlRRV
Fig. 10. (A) Prediction of transferred liquid volume with the mass flow rate from simulation
results. (B)Experimental and simulation results of printed pattern size. (C)Experimental and
simulation results of printed pattern size versus mass flow rate. (D) Printed pattern after
optimization (UT: University of Texas, CUMT: China University of Mining and Technology)
CONCLUSIONS
The major contribution of this work is to develop, for the first time in the bioprinting research
community, a CFD model to guide the LIFT-based LAB process. This model provides a great
opportunity to quantitatively predict the generation and development of the bubble and jet flow in
the LIFT-based LAB process, and eventually to improve the final printing quality by adopting the
appropriate printing parameters recommended by the CFD model. The numerical model was
validated by the experimental results, and good agreement was achieved in terms of the size
of the printed droplet. By utilizing the proposed CFD model, this study demonstrated a successful
example of a well-printed pattern, as shown in Fig. 10D. The key conclusions are listed as follows:
(1) The liquid layer thickness strongly affects the formation and development of the jet flow.
A thin liquid layer cannot maintain the jet flow due to the rapid bubble expansion under a large
pulse laser energy input; therefore the jet eventually breaks and reaches the splashing jet
regime. Furthermore, the jet cannot be formed when the liquid layer is too thick.
(2) For all the stable jets investigated in this study, the length of the jet flow and the time
duration of the jet flow showed a linear relationship, as shown in Fig. 5D. With the development
of the jet, the velocity of the jet flow remained almost constant. A reversed jet inside the bubble
was also observed because of the viscous forces and surface tension.
(3) For cases with the same liquid layer thickness, the size of the printed droplet, the
velocity and length of the jet flow all increased with the increase of pulse laser energy, as shown
in Fig. 6 and Fig.8 G.
(4) Utilizing the simulation results, the volume of liquid transferred through the LIFT-based
LAB process could be accurately predicted. With the assistance of the static equilibrium model
describing the static balance of the droplet on the substrate, the size of the printed droplet can also be
predicted, as shown in Fig. 10B and C.
MATERIALS AND METHODS
Description of experiments
The experimental platform can be found in Fig. 1. An XY stage (Pro115LM Aerotech) was
utilized to move the substrate up and down to get different print patterns. A light source
(HL150-A Fisher Scientific) was used to provide a sharp background, and a high-speed camera
(Phantom VEO 410L) was adopted to monitor and record the LIFT printing process. Several
high magnification zoom lenses (Navitar) were utilized to obtain videos and images with high
resolutions. The frame rate was set as 57,000 fps and the exposure time was fixed as 3μs. In
addition, a microscope (LEICA MC 170 HD) was utilized to observe and record the printed
droplet patterns on the substrate for more analysis.
Modeling - Initial bubble parameters
As shown in Fig. S1, the laser energy distribution
in this study was adopted as a
Gaussian distribution (25, 36, 37),
(1)
where
is the pulse energy at different position,
is approximated as a 99.7%
distribution range,
beam profile.
is the position of interest,
is the spatial standard deviation of laser
Due to the Gaussian distribution of laser energy, the energy increases toward the center
while decreases toward the edge. Assuming a threshold of laser fluence exists to define the
laser interaction diameter, while only the liquid layer inside this interaction area could absorb
the laser energy input for phase change and temperature increase. The threshold can be defined
by utilizing the energy input at
divide the area of ring around
,
(2)
where
is the threshold of laser interaction fluence,
is the laser interaction radius,
is the half of width of the ring near the laser interaction radius. For different types of lasers and
liquids, the threshold of laser interaction fluence should be different.
After calculating the laser interaction radius
, the energy absorbed by the liquid layer
E202()exp22ErErE0ErTrTr22()()()TTTTErFrrrrTFTrxTr
can be calculated by integrating the laser energy distribution from −r_T to r_T,
E_a = ∫_{−r_T}^{+r_T} E(r) dr,    (3)
where E_a is the absorbed energy by the liquid layer.
Fig. S1. Gaussian distribution of pulse laser energy and the actual interaction area
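A small numerical sketch of this step is given below. It assumes a Gaussian radial energy profile with standard deviation sigma and a fluence threshold, and then finds the radius at which the local fluence drops to that threshold. The specific functional form of the profile, the example numbers, and the threshold value are assumptions chosen only for illustration; they are not values taken from the experiments or from the paper's exact Eq. (1) and Eq. (2).

import numpy as np

def gaussian_fluence(r, pulse_energy, sigma):
    """Assumed Gaussian fluence profile (J/m^2) of a pulse with total energy
    pulse_energy (J) and spatial standard deviation sigma (m)."""
    return pulse_energy / (2.0 * np.pi * sigma ** 2) * np.exp(-r ** 2 / (2.0 * sigma ** 2))

def interaction_radius(pulse_energy, sigma, fluence_threshold):
    """Radius at which the local fluence falls to the assumed threshold value."""
    peak = gaussian_fluence(0.0, pulse_energy, sigma)
    if peak <= fluence_threshold:
        return 0.0   # pulse too weak to exceed the threshold anywhere
    # Invert F(r) = F_T analytically for the Gaussian profile.
    return sigma * np.sqrt(2.0 * np.log(peak / fluence_threshold))

# Example with assumed numbers: 20 uJ pulse, sigma = 10 um, threshold = 2 kJ/m^2.
r_t = interaction_radius(20e-6, 10e-6, 2.0e3)
print(f"laser interaction radius r_T = {1e6 * r_t:.1f} um")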
Considering the extremely short interaction period between the pulse laser and the liquid ,
we assumed an initial bubble existed inside the coated liquid layer after the laser interaction
with the liquid, and such an initial bubble has the same size as the laser interaction diameter
(30, 31). Without considering the effect of pressure change, the latent heat E_L and the
sensible heat E_S can be calculated by Eqs. (4) and (5),
E_L = (4/3)·π·r_T³·ρ_l·h_fg    (4)
E_S = (4/3)·π·r_T³·ρ_l·c_p·(T_i − T_e)    (5)
where ρ_l is the density of the liquid layer, h_fg is the latent heat, c_p is the specific heat
capacity, and T_i and T_e are the initial temperature of the initial bubble and the environmental
temperature, respectively.
The sum of the latent heat E_L and the sensible heat E_S should equal E_a, the total energy
absorbed by the liquid layer,
E_a = E_L + E_S.    (6)
In addition, the pressure inside the initial bubble P_i can be calculated by Eq. (7),
(7)
where ρ_v is the density of the vapor at the initial temperature, and P_e is the atmospheric
pressure.
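To make the energy balance concrete, the sketch below inverts E_a = E_L + E_S for the initial bubble temperature, treating the initial bubble as a sphere of radius r_T whose liquid content is vaporized and heated. The absorbed energy, the interaction radius, the ambient temperature, and the use of the Table S1 water properties are assumptions chosen only to illustrate the calculation.

import numpy as np

# Liquid properties taken from Table S1 (deionized water), in SI units.
rho_l = 998.2      # density, kg/m^3
h_fg = 2257.2e3    # latent heat, J/kg
c_p = 4182.0       # specific heat capacity, J/(kg K)
T_e = 20.0         # assumed environmental temperature, deg C

def initial_bubble_temperature(E_a, r_T):
    """Initial bubble temperature from E_a = E_L + E_S with
    E_L = (4/3) pi r_T^3 rho_l h_fg and E_S = (4/3) pi r_T^3 rho_l c_p (T_i - T_e)."""
    volume = 4.0 / 3.0 * np.pi * r_T ** 3
    E_L = volume * rho_l * h_fg
    E_S = E_a - E_L   # remaining energy goes into sensible heating
    if E_S < 0.0:
        raise ValueError("absorbed energy too small to vaporize the interaction volume")
    return T_e + E_S / (volume * rho_l * c_p)

# Example with assumed inputs: 20 uJ absorbed within a 10 um interaction radius.
print(f"T_i = {initial_bubble_temperature(20e-6, 10e-6):.1f} deg C")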
CFD Modeling - governing equations
The Rayleigh bubble dynamics model (38) has been widely applied to study the response
of surrounding incompressible flow to the expansion of a single spherical bubble. The
governing equation for the bubble expansion within liquid can be described as follows,
ρ_l·[R·(d²R/dt²) + (3/2)·(dR/dt)²] = P_i(t) − P_∞(t) − (4μ_l/R)·(dR/dt) − 2σ/R    (8)
where R is the bubble radius, P_i(t) is the pressure inside the bubble, P_∞(t) is the pressure of
the hydrogel flow at an infinite distance from the bubble, σ is the surface tension, and μ_l is
the coefficient of viscosity.
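For readers who want to experiment with the bubble dynamics directly, the sketch below integrates the standard Rayleigh-Plesset equation with SciPy. The constant driving pressure, the initial radius, and the water-like properties are assumed placeholders, and the simple forcing is not the actual pressure history of the CFD model; the sketch only illustrates the form of Eq. (8).

import numpy as np
from scipy.integrate import solve_ivp

# Water-like properties (Table S1) and assumed driving pressures.
rho_l = 998.2     # liquid density, kg/m^3
mu_l = 1.003e-3   # dynamic viscosity, kg/(m s)
sigma = 0.0728    # surface tension, N/m
P_inf = 1.013e5   # far-field pressure, Pa
P_i = 5.0e6       # assumed constant pressure inside the bubble, Pa

def rayleigh_plesset(t, y):
    """Standard Rayleigh-Plesset equation written as a first-order system:
    rho_l*(R*R'' + 1.5*R'^2) = P_i - P_inf - 4*mu_l*R'/R - 2*sigma/R."""
    R, dR = y
    d2R = ((P_i - P_inf - 4.0 * mu_l * dR / R - 2.0 * sigma / R) / rho_l
           - 1.5 * dR ** 2) / R
    return [dR, d2R]

# Start from an assumed 10 um bubble at rest and integrate for 5 us.
sol = solve_ivp(rayleigh_plesset, (0.0, 5e-6), [10e-6, 0.0], max_step=1e-8)
print(f"bubble radius after 5 us: {1e6 * sol.y[0, -1]:.1f} um")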
Because the growth and development of the bubble and jet flow is a multiphase process, the
Volume of Fluid (VOF) model in ANSYS Fluent was employed to track the liquid-gas
interface. Considering the short interaction period, the phase change between vapor and liquid
was ignored in the current model. The governing equations are shown as follows,
Energy equation
Momentum equation
Continuity equation
VOF model equation
(9)
(10)
(11)
iTeTLESEaEaLSEEEiPlievPPveP222324()()2llidRdRdRRPtPtdtdtRRdtR()iPt()Pt2()peffTcvTkTpt2()()vvvpvgFtm0vt
(1/ρ_q)·[∂(α_q·ρ_q)/∂t + ∇·(α_q·ρ_q·v_q)] = 0    (12)
where ρ is the density of the mixture, p is the pressure, k_eff is the effective conductivity,
c_p is the heat capacity, μ is the dynamic viscosity, and α_q is the volume fraction of phase q.
Modeling – boundary conditions and properties
Fig. S2. Geometry of computational domain with boundary conditions and configuration of
mesh
The dimensions of the computational domain, boundary conditions and mesh configuration
were shown in Fig. S2. The computational domain includes the ribbon, the liquid layer and air.
Only half of the model was meshed and simulated because of its axisymmetric geometry.
Structured meshes were used in this study, and the mesh near all the boundaries was refined.
Because the computational domain was only part of the ribbon, the right side of liquid was
defined as the pressure inlet while the right side of air zone was defined as the pressure outlet.
Besides the axisymmetric boundary condition at the axis, other boundaries were all defined as
“wall”. The parameters of initial bubbles were set before simulation started.
The physical properties of liquid
layer were shown in Table S1. 65%-glycerol and
deionized water were utilized in the simulation.
Table S1. Physical property parameters of liquid layer
Properties          Density (kg/m3)   Heat capacity (kJ/(kg·°C))   Latent heat ΔH (kJ/kg)   Viscosity (kg/(m·s))   Surface tension (N/m)
65%-glycerol        1169.1            3.030                        1426.1                   0.0177                 0.068
Deionized water     998.2             4.182                        2257.2                   0.001003               0.0728
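These property values can also be used directly in quick order-of-magnitude checks. The sketch below stores the Table S1 values and computes the Ohnesorge number Oh = μ/√(ρσL) for both liquids at an assumed characteristic length of 100μm; this dimensionless number is a standard way of comparing how strongly viscosity resists jet break-up, and it is included here only as an illustrative use of the tabulated properties, not as part of the original analysis.

import math

# Physical properties from Table S1 (SI units).
liquids = {
    "65%-glycerol":    {"rho": 1169.1, "mu": 0.0177,   "sigma": 0.068},
    "deionized water": {"rho": 998.2,  "mu": 0.001003, "sigma": 0.0728},
}

L = 100e-6  # assumed characteristic length scale (liquid layer thickness), m

for name, p in liquids.items():
    oh = p["mu"] / math.sqrt(p["rho"] * p["sigma"] * L)
    print(f"{name}: Ohnesorge number = {oh:.3f}")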
Modeling – mesh independent study
The grid independence study was carried out to determine a reasonable mesh number by considering
the balance of computational load and numerical accuracy. Fig. S3 shows the comparison of the
maximum liquid velocity at 1μs among six different cases. The case with
580,000 cells is the most appropriate one, with reasonable computational load and good
numerical accuracy, since its maximum velocity changes by less than 0.5% when the number
of grid cells is further increased. Therefore, a grid number of 580,000 is sufficient, and similar
grid sizes were used in this study for all other CFD cases.
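The convergence criterion used here can be expressed in a few lines of code. The sketch below takes a list of (mesh count, maximum velocity) pairs, such as those plotted in Fig. S3, and reports the first mesh whose result changes by less than 0.5% when the mesh is refined further; the velocity numbers are made-up placeholders, not the values from the actual study.

# Placeholder (assumed) results of the grid study: (cell count, max velocity in m/s).
grid_study = [
    (100_000, 94.0),
    (200_000, 96.5),
    (300_000, 98.0),
    (420_000, 98.9),
    (580_000, 99.4),
    (700_000, 99.5),
]

def first_converged_mesh(results, tolerance=0.005):
    """Return the first mesh whose value changes by less than `tolerance`
    (relative) when the mesh is refined to the next level."""
    for (cells, value), (_, next_value) in zip(results, results[1:]):
        if abs(next_value - value) / abs(next_value) < tolerance:
            return cells
    return None

print("converged mesh:", first_converged_mesh(grid_study))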
Fig. S3. Grid dependence analysis
Prediction of droplet size – Static equilibrium equation
Fig. S4. Geometry of part of sphere shape droplet and the force analysis
Considering the droplet as part of a sphere in shape, a geometric model was established to
show a differential control volume, where r is the radius of the droplet, z is the height of
the droplet, and R is the radius of the sphere, as shown in Fig. S4A.
The governing equation of the static equilibrium model for the differential control volume
(Fig. S4B and C) is shown as follows,
(13)
By integrating both sides of Eq. (13), we can obtain the governing equation of the static
equilibrium model as follows,
(14)
REFERENCES
1.
B. Starly, R. Shirwaiker, "3D Bioprinting Techniques"
in 3D Bioprinting and
Nanotechnology in Tissue Engineering and Regenerative Medicine (2015), pp. 57-77.
S. V. Murphy, A. Atala, 3D bioprinting of tissues and organs. Nature Biotechnology 32,
773-785 (2014).
C. Mandrycky, Z. Wang, K. Kim, D. H. Kim, 3D bioprinting for engineering complex
tissues. Biotechnol Adv 34, 422-434 (2016).
Q. Dasgupta, L. D. Black, A FRESH SLATE for 3D bioprinting. Science 365, 446
(2019).
B. Guillotin, A. Souquet, S. Catros, M. Duocastella, B. Pippenger, S. Bellance, R.
Bareille, M. Remy, L. Bordenave, J. Amedee, F. Guillemot, Laser assisted bioprinting
2.
3.
4.
5.
rzRsincos2sin22sin02drddddrdlPrddzPzddrrzddPdPzdr225323222112-2(cos)2arcsinarcsin0443rzrrrrlrRgRzrrRzrRz
6.
7.
8.
9.
of engineered tissue with high cell density and microscale organization. Biomaterials
31, 7250-7256 (2010).
J. Bohandy, B. F. Kim, F. J. Adrian, Metal deposition from a supported metal film using
an excimer laser. Journal of Applied Physics 60, 1538-1539 (1986).
P. Serra, A. Piqué, Laser-Induced Forward Transfer: Fundamentals and Applications.
Advanced Materials Technologies 4, (2019).
Y. Lin, Y. Huang, D. B. Chrisey, Droplet formation in matrix-assisted pulsed-laser
evaporation direct writing of glycerol-water solution. Journal of Applied Physics 105,
(2009).
B. Guillotin, S. Catros, F. Guillemot, "Laser Assisted Bio-printing (LAB) of Cells and
Bio-materials Based on Laser Induced Forward Transfer (LIFT)" in Laser Technology
in Biomimetics (Biological and Medical Physics, Biomedical Engineering, 2013),
chap. Chapter 8, pp. 193-209.
10. M. Morales, D. Munoz-Martin, A. Marquez, S. Lauzurica, C. Molpeceres, "Laser-
in Advances in Laser
11.
12.
13.
14.
15.
16.
17.
18.
19.
Induced Forward Transfer Techniques and Applications"
Materials Processing (2018), pp. 339-379.
K. C. Hribar, P. Soman, J. Warner, P. Chung, S. Chen, Light-assisted direct-write of
3D functional biomaterials. Lab Chip 14, 268-275 (2014).
K. C. Hribar, K. Meggs, J. Liu, W. Zhu, X. Qu, S. Chen, Three-dimensional direct cell
patterning in collagen hydrogels with near-infrared femtosecond laser. Sci Rep 5, 17203
(2015).
R. Xiong, Z. Zhang, W. Chai, Y. Huang, D. B. Chrisey, Freeform drop-on-demand laser
printing of 3D alginate and cellular constructs. Biofabrication 7, 045011 (2015).
A. Sorkio, L. Koch, L. Koivusalo, A. Deiwick, S. Miettinen, B. Chichkov, H. Skottman,
Human stem cell based corneal tissue mimicking structures using laser-assisted 3D
bioprinting and functional bioinks. Biomaterials 171, 57-71 (2018).
V. Keriquel, F. Guillemot, I. Arnault, B. Guillotin, S. Miraux, J. Amédée, J.-C. Fricain,
S. Catros, In vivobioprinting for computer- and robotic-assisted medical intervention:
preliminary study in mice. Biofabrication 2, 014101 (2010).
V. Keriquel, H. Oliveira, M. Remy, S. Ziane, S. Delmond, B. Rousseau, S. Rey, S.
Catros, J. Amedee, F. Guillemot, J. C. Fricain, In situ printing of mesenchymal stromal
cells, by laser-assisted bioprinting, for in vivo bone regeneration applications. Sci Rep
7, 1778 (2017).
J. Yan, Y. Huang, C. Xu, D. B. Chrisey, Effects of fluid properties and laser fluence on
jet formation during laser direct writing of glycerol solution. Journal of Applied Physics
112, (2012).
J. Yan, Y. Huang, D. B. Chrisey, Laser-assisted printing of alginate long tubes and
annular constructs. Biofabrication 5, 015002 (2013).
Z. Zhang, R. Xiong, R. Mei, Y. Huang, D. B. Chrisey, Time-Resolved Imaging Study
of Jetting Dynamics during Laser Printing of Viscoelastic Alginate Solutions.
Langmuir 31, 6447-6456 (2015).
20. M. Duocastella, J. M. Fernández-Pradas, P. Serra, J. L. Morenza, Jet formation in the
laser forward transfer of liquids. Applied Physics A 93, 453-456 (2008).
21. M. Duocastella, J. M. Fernández-Pradas, J. L. Morenza, P. Serra, Time-resolved
imaging of the laser forward transfer of liquids. Journal of Applied Physics 106,
(2009).
22. M. Ali, E. Pages, A. Ducom, A. Fontaine, F. Guillemot, Controlling laser-induced jet
formation for bioprinting mesenchymal stem cells with high viability and high
resolution. Biofabrication 6, 045001 (2014).
23. M. S. Brown, N. T. Kattamis, C. B. Arnold, Time-resolved dynamics of laser-induced
24.
25.
26.
27.
micro-jets from thin liquid films. Microfluidics and Nanofluidics 11, 199-207 (2011).
A. V. Joachim Noack, Single-shot spatially resolved characterization of laser-induced
shock waves in water. Appl. Opt. 37, 4092-4099 (1998).
A. V. Joachim Noack, Laser-Induced Plasma Formation in Water at Nanosecond to
Femtosecond Time Scales: Calculation of Thresholds, Absorption Coefficients, and
Energy Density. IEEE JOURNAL OF QUANTUM ELECTRONICS 35, 1156-1167
(1999).
B. T. Ohan Baghdassarian, Gary A. Williams, Luminescence Characteristics of Laser-
Induced Bubbles in Water. Phys Rev Lett 83, 2437-2440 (1999).
Z. Zhang, R. Xiong, D. T. Corr, Y. Huang, Study of Impingement Types and Printing
Quality during Laser Printing of Viscoelastic Alginate Solutions. Langmuir 32, 3004-
3014 (2016).
29.
28. M. S. Brown, C. F. Brasz, Y. Ventikos, C. B. Arnold, Impulsively actuated jets from
thin liquid films for high-resolution printing applications. Journal of Fluid Mechanics
709, 341-370 (2012).
A. Kalaitzis, M. Makrygianni, I. Theodorakos, A. Hatziapostolou, S. Melamed, A.
Kabla, F. de la Vega, I. Zergioti, Jetting dynamics of Newtonian and non-Newtonian
fluids via laser-induced forward transfer: Experimental and simulation studies. Applied
Surface Science 465, 136-142 (2019).
C. Mezel, A. Souquet, L. Hallo, F. Guillemot, Bioprinting by laser-induced forward
transfer for tissue engineering applications: jet formation modeling. Biofabrication 2,
014103 (2010).
30.
31. W. Wang, G. Li, Y. Huang, paper presented at the ASME 2008 International
32.
33.
34.
35.
36.
37.
Manufacturing Science and Engineering Conference, Volume 2, 2008.
R. Xiong, Z. Zhang, J. Shen, Y. Lin, Y. Huang, D. B. Chrisey, Bubble Formation
Modeling During Laser Direct Writing of Glycerol Solutions. Journal of Micro and
Nano-Manufacturing 3, (2015).
N. T. Kattamis, P. E. Purnick, R. Weiss, C. B. Arnold, Thick film laser induced forward
transfer for deposition of thermally and mechanically sensitive materials. Applied
Physics Letters 91, (2007).
D. B. van Dam, C. Le Clerc, Experimental study of the impact of an ink-jet printed
droplet on a solid substrate. Physics of Fluids 16, 3403-3414 (2004).
Y. Tao, "Theoretical research on static spreading of droplet impact on horizontal
surface", thesis, Dalian University of Technology (2014).
J. Ready, Effects of High-Power Laser Radiation. Elsevier, New York , 67–125. (2012).
F. Docchio, P. Regondi, M. R. C. Capon, J. Mellerio, Study of the temporal and spatial
dynamics of plasmas induced in liquids by nanosecond Nd:YAG laser pulses. 1:
Analysis of the plasma starting times. Appl. Opt. 27, 3661-3668 (1988).
38. M. S. Plesset, A. Prosperetti, Bubble dynamics and cavitation. Annual review of fluid
mechanics 9, 145-185 (1977).
|
ai_researcher | 1 | Live_Synchronous_Computing_in_Robot_Driven_Design.pdf | Can parallel lives provide a solution to Hardy’s paradox?
˙Inan¸c S¸ahin1,
∗
1Department of Physics, Faculty of Sciences,
Ankara University, Ankara, Turkey
Abstract
Parallel lives is a model which provides an interpretation of quantum theory that is both local
and realistic. This model assumes that all quantum fields are composed of point beings called
”lives”. Lives interact locally and have a memory of their previous interactions. The reduction
of the state vector is not included in this model: lives can be divided into different worlds. This
feature resembles many worlds interpretation. However in the parallel lives model, the division of
lives into different worlds takes place locally. The parallel lives model is expected to be compatible
with special relativity, as the lives propagate at a speed that does not exceed the speed of light and
interact locally. On the other hand, it is open to paradoxes based on counterfactual propositions,
as it provides a realistic interpretation of quantum theory. In this paper, we confront the parallel
lives model with the paradox proposed by Hardy [1]. We show that the parallel lives model
cannot overcome the dilemma in Hardy’s paradox. We discuss implications of this confrontation
on special theory of relativity, and speculate a solution that we believe, fits the spirit of the parallel
lives model.
Keywords: Parallel lives model, many worlds interpretation, quantum theory, relativity
arXiv:2009.07633v1 [quant-ph] 22 Aug 2020
∗[email protected]
I.
INTRODUCTION
Parallel lives (PL) is an ontological model that was first proposed by Brassard and
Raymond-Robichaud [2, 3] in order to provide a local and realistic interpretation to quantum
theory (QT). The details of the PL model have been developed in Ref.[4]. According to PL,
all quantum fields are composed of point beings called ”lives” moving on continuous world-
lines with a speed bounded by the speed of light [4]. Lives can only interact locally when
their world-lines coincide. However, not all lives whose world-lines coincide interact with
one another. Lives have a memory of their previous interactions, and this memory determines
which live they will interact with. Lives that do not interact are invisible to each other. We
can say that these invisible ”lives” are living in different worlds. The network of internal
interactions of a very large collection of lives forms a macroscopic system. If a live is hidden
relative to one of the lives that make up the macroscopic system, it should also be hidden
relative to other lives in that macroscopic system.1 Thus, it is possible to have macroscopic
systems that live in parallel and are hidden relative to each other. This feature recalls the many
worlds interpretation [5, 6]. However, in the many worlds interpretation the entire universe splits
into copies, while in PL, lives locally split into relative worlds. When the state vector of a
system is reduced to one of the orthogonal terms in it, the lives that make up that system
split locally into different relative worlds. Therefore, there is no reduction of the state vector
in the PL model; each orthogonal term in the superposition lives parallel in space-time. For
instance, let’s consider an EPR-type experiment with two spin-1/2 particles in the singlet
state, |0, 0> = (1/√2)[ |↑, ↓> − |↓, ↑> ]. Let A and B be spacelike separated macroscopic ob-
server systems carrying Stern-Gerlach apparatuses. After the spins become entangled in the
singlet state at the midpoint between A and B, one moves to A and the other to B. Then,
the lives of spins and observers A and B split into relative worlds. In one world spin is up
and observer measures spin-up and in the other world spin is down and observer measures
spin-down. If A
(A
↑
↓
) represents observer A measuring spin-up (spin-down) then, the lives
of A
↑
and A
can only interact with the lives of B
. Similarly, A
are hidden with respect to B
↑
↓
respectively. Therefore, we say that A
↑
↓
↓
and B
can interact with B
↑
↓
, but A
↑
and
1 Here we should note that not all lives in a macroscopic system need to interact with each other, but
they must be part of the same network of interactions. The interaction waves propagating through the
macroscopic system form a network of interactions and the memory of a distant live is shared in this way.
2
B
are living in a world parallel to the world of A
and B
.
↓
It is often thought that Bell’s theorem rules out local realistic interpretations of QT. In
↑
↓
fact, Bell’s theorem rules out local hidden variable theories, not local realistic interpretations
of QT [2, 3, 7]. However, this issue is subtle and a detailed review is required. In local hidden
variable theories, the result of a measurement is given as a function of hidden variables and
locally defined adjustable apparatus parameters [7, 8]. It is also assumed that experimenters
have a free will to adjust apparatus parameters2 [9]. Let us denote the measurement result
by the function R(λ, a), where λ and a represent hidden variables and apparatus parame-
ters respectively. The existence of the function R(λ, a) tells us that when the values of the
parameters λ and a are given, the measurement result is uniquely determined. We will call
this property determinism. In the PL model, different possible outcomes of a measurement
and observers observing these results can live in parallel in different relative worlds. Thus,
reality depends on which relative world we live in; there is no single concept of reality. Due
to this multiple reality concept, some authors give up using conventional realism [10]. On
the other hand, PL assumes ontological reality according to which measurement results cor-
responding to orthogonal terms in the superposition exist in different relative worlds prior
to measurement. This view is different from Copenhagen interpretation, where ontological
reality of the wave function is denied. PL can provide deterministic rules for the behaviors
of the lives [10]. If we consider whole worlds of lives living parallel, then PL gives a deter-
ministic model. On the other hand, each individual observer living parallel in space-time,
experiences indeterminism. For example, the observer A performing a spin measurement
(see the example at the end of page 2) can find herself in the relative world of A
↓
after measurement. But she does not know in advance which relative world she will be in.
or A
↑
Since, the observers cannot know in advance which one among several possible outcomes
will actually occur, the process generated by the rules of PL is completely indeterministic
according to observers. Therefore, the measurement results cannot be given as a determin-
istic function predicted by a local hidden variable theory. In the language of the free will
theorem of Conway and Kochen [9], the response of universe to the measurement is not a
function of the information accessible to the particle. The universe makes a free decision in
the neighborhood of the particle and this decision determines in which relative world the
2 Otherwise, we cannot eliminate the superdeterminism option.
observer lives.3 Consequently, the locality and reality4 features of the PL model do not
conflict with Bell’s theorem.
On the other hand, as demonstrated in several studies in the literature, the realistic
interpretations of QT are inconsistent with the special theory of relativity [1, 11, 12]. We
should note that their arguments are based on counterfactual reasoning. When we consider
the results of actual measurements, we do not encounter paradoxes [13]. Nevertheless, if
we have a realistic model where the wave function or say the probability distribution of
the possible outcomes exists prior to the measurement then counterfactual propositions
become somewhat legitimate [14]. Therefore, any model that claims to provide a realistic
interpretation of QT must be confronted with counterfactual paradoxes. In this context, we
confront PL model with the second paradox in Hardy’s paper [1]. As we will see, in PL
some counterfactual propositions become part of the reality in various alternative worlds.
This has interesting implications for the theory of relativity, which we will examine.
II. REVISITING HARDY’S PARADOX
In 1992 Hardy [1] proposed a gedankenexperiment consisting of two Mach-Zehnder interfer-
ometers, one for positrons and one for electrons (Fig. 1). The experiment is designed so that
u+ and u− paths of these two Mach-Zehnder interferometers overlap. If the positron and
electron take u+ and u− paths then they will meet at P and annihilate one another. Pair
annihilation is expressed in Hardy’s notation as
u+ >
|
u− >
|
→ |
γ > .
(1)
Using the experimental setup shown in the Fig.1, Hardy first demonstrated an inequality-free
version of the Bell’s theorem. Hardy secondly demonstrated that if the ”elements of reality”
corresponding to Lorentz-invariant observables are themselves Lorentz invariant, then real-
istic interpretations of quantum mechanics are incompatible with special theory of relativity.
For the purpose of this paper we will concentrate on his second result. The summary of the
reasoning that led him to this conclusion is as follows: Consider three different reference
3 According to weak anthropic principle, the observer is in one of the relative worlds just because she
observes the measurement result in that relative world.
4 Unless otherwise stated, reality will be used in the sense of ontological reality.
4
frames: LAB, S+ and S− frames of reference. In LAB frame, the measurements on electron
and positron are simultaneous. The relative velocities of S+ and S− frames to LAB frame
are so arranged that these measurements are not simultaneous with respect to S+ and S−.
According to S+ frame the measurement on the positron occurs before the electron arrives
at BS2− and according to S− frame the measurement on the electron occurs before the
positron arrives at BS2+. Let’s denote the initial electron-positron states by
e− >
|
|
e+ >.
After the particles pass point P, but before they reach BS2± the initial state evolves to
e− >
|
|
e+ >
→
1
2
(
γ > +i
|
−|
u+ >
v− > +i
|
|
v+ >
|
u− > +
v+ >
|
|
v− >).
(2)
Since this state is orthogonal to
u− >, according to an observer in the LAB frame
positron and electron cannot take u+ and u− paths simultaneously. The beam splitters
|
|
u+ >
BS2± perform the following transformations:
u± >
|
→
1
√2
(i
|
d± > +
c± >),
|
v± >
|
→
1
√2
(i
|
c± > +
d± >).
|
(3)
Using equations (2) and and (3) we see that the state vector reduces to
a probability of 1
d− > with
16th of the experiments both D+ and D− detectors receive
16 . Hence, in 1
|
|
d+ >
signals.
Now, let’s examine the same experiment according to observers in the S− and S+ frames.
According to S− when the electron passes through BS2− but the positron has not yet
reached BS2+, the following state is obtained:
1
2
(
−|
γ >
1
√2 |
−
u+ >
c− > +
|
i
√2 |
u+ >
|
d− > +i√2
v+ >
|
|
c− >).
(4)
Here we use (2) and transformations for
u− > and
|
|
v− > in (3). When the electron is
detected in D−, then the state vector is reduced to
u+ >
|
|
d− > .
(5)
Then, the observer in the S− frame infers that positron takes u+ path. On the other hand,
according to S+ when the positron passes through BS2+ but the electron has not yet reached
BS2−, the following state is obtained:
1
2
(
−|
γ >
1
√2 |
−
c+ >
|
u− > +
i
√2 |
d+ >
|
u− > +i√2
c+ >
|
|
v− >).
(6)
5
Here we use (2) and transformations for
u+ > and
|
v+ > in (3). When the positron is
|
detected in D+, then the state vector is reduced to
d+ >
|
|
u− > .
(7)
Then, the observer in the S+ frame infers that electron takes u− path.5
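The state-vector bookkeeping in this section can be checked numerically. The short Python sketch below builds the two-particle state of Eq. (2) as amplitudes over labelled basis kets, applies the beam-splitter transformations of Eq. (3) to each particle (the two-photon term |γ> is simply left unchanged, since it never reaches the beam splitters), and prints the joint probability of a D+D− coincidence. It is an independent illustration written with the standard conventions quoted in the text, not code taken from the paper.

# Joint state after the particles pass P (Eq. 2), as amplitudes over labelled kets.
state = {
    ("gamma",): -0.5,
    ("u+", "u-"): 0.0,    # the state is orthogonal to |u+>|u->
    ("u+", "v-"): 0.5j,
    ("v+", "u-"): 0.5j,
    ("v+", "v-"): 0.5,
}

# Beam-splitter transformations of Eq. (3) for each particle.
bs = {
    "u+": {"d+": 1j / 2 ** 0.5, "c+": 1 / 2 ** 0.5},
    "v+": {"c+": 1j / 2 ** 0.5, "d+": 1 / 2 ** 0.5},
    "u-": {"d-": 1j / 2 ** 0.5, "c-": 1 / 2 ** 0.5},
    "v-": {"c-": 1j / 2 ** 0.5, "d-": 1 / 2 ** 0.5},
}

# Propagate every two-particle term through both beam splitters.
final = {}
for ket, amp in state.items():
    if ket == ("gamma",):
        final[ket] = final.get(ket, 0) + amp
        continue
    plus, minus = ket
    for out_p, a_p in bs[plus].items():
        for out_m, a_m in bs[minus].items():
            key = (out_p, out_m)
            final[key] = final.get(key, 0) + amp * a_p * a_m

p_joint = abs(final[("d+", "d-")]) ** 2
print(f"P(D+ and D-) = {p_joint:.4f}")   # expected 1/16 = 0.0625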
Hardy used EPR’s [15] ”element of reality” criterion. If a system is in an eigenstate of
an operator corresponding to an observable, then we can predict certainly the result of the
measurement of this observable. Therefore, according to EPR’s reality criterion, the value
of this observable (which is the eigenvalue of the observable corresponding to the system
eigenstate) is an element of reality even if the measurement is not performed. We can define
|
u± >< u±
. Since the vectors
the operators U ± =
u− > are eigenvectors of
U ±, there exist elements of reality associated with paths u+ and u−. However, as we have
shown, the reference frames S+ and S− infer that electron and positron take the paths u−
and u+ respectively. If the elements of reality corresponding to Lorentz-invariant observables
u+ > and
|
|
|
are themselves Lorentz invariant, then these inferences must be true for all inertial frames.
On the contrary, as shown previously it is not true for the LAB frame. To summarize
very briefly, what Hardy did is that he associated counterfactuals about particle paths with
elements of reality. Then, he showed that elements of reality corresponding to these paths
are not Lorentz invariant.
As stated in his article, Hardy’s result can be applied to any realistic interpretation of
QT which assumes that particles have real trajectories. In PL model, lives move on real
trajectories in space-time. Therefore, confrontation of the PL model with Hardy’s paradox
can have important consequences. Before examining Hardy’s paradox in the PL model, let’s
examine lives of a single photon on a beam splitter and in a Mach-Zehnder interferometer.
In Fig.2 we show a single photon on a 50-50 beam splitter.
Incident photon can either
be transmitted along path (1) or reflected along path (2). Each path has 50% probability.
Assume that an observer performs a measurement using photon detectors to determine the
path along which the photon moves. This measurement causes an entanglement between
5 Here we should note that the inferences of observers in LAB, S+ and S− frames about particle trajectories
(u+ and u−) are counterfactual. They don’t make measurements to determine real paths, but they infer
these results from D+ and D− detections via counterfactual reasoning.
6
photon paths and measurement apparatus:
ψ >=
|
1
√2|
1γ >
|
1m > +
1
√2|
2γ >
2m >
|
(8)
where,
|
1γ > represents the photon state in path (1) and
1m > represents the state of
|
measurement apparatus measuring a photon in path (1). Similar definitions hold for
2γ >
|
and
|
2m >. Furthermore, we can also say that the observer is entangled with photon paths.
By looking at the result of the measurement, the observer can decide to behave in one way
or another. For instance, assume that if the photon takes path (1), then the observer will
have lunch. On the other hand, if the photon takes path (2), then she will be on diet. Thus,
we can write
ψ >=
|
1
√2 |
1γ >
|
1o > +
1
√2 |
2γ >
2o > .
|
(9)
Here, subscript ”o” denotes the observer. The description of the experiment within the PL
model can be given as follows: The lives of the incident photon are divided into two group
of lives living in the same world. One of them takes path (1) and the other takes path
(2). When the lives of the photons moving on paths (1) and (2) meet with the detectors,
lives of each detector, subsequently lives of the measurement apparatus and the observer
are divided into two different worlds.
In one world D1 detects a signal but D2 does not
detect any signal, in another world D1 does not detect any signal but D2 detects a signal.
Consequently, in one world observer measures a photon moving on path (1) and in the other
world she measures a photon moving on path (2). These two worlds are hidden with respect
to each other.
Now, let’s consider a single photon in a Mach-Zehnder interferometer Fig.3. Due to
destructive interference, D2 detector does not detect any signal. Therefore, in this case
photon paths are not entangled with the measurement apparatus or the observer. Hence,
the lives of the measurement apparatus and the observer are not divided into relative worlds.
When the initial photon passes through the first beam splitter, its lives are divided into two
group of lives, one going through the path (1) and the other going through the path (2).
These two group of lives moving on paths (1) and (2), exist in the same world. In the second
beam splitter they interact with each other and produce the usual interference effects.
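The beam-splitter algebra behind this destructive interference is easy to verify numerically. In the sketch below the 50-50 beam splitter is represented by the commonly used unitary matrix (1/√2)[[1, i], [i, 1]] acting on the two path amplitudes; this particular phase convention is an assumption of the sketch, but any equivalent beam-splitter convention leads to the same conclusion that one output detector never fires.

import numpy as np

# 50-50 beam splitter acting on the two path amplitudes (assumed standard convention).
BS = np.array([[1, 1j],
               [1j, 1]]) / np.sqrt(2)

photon_in = np.array([1.0, 0.0])   # photon enters through one input port

# Mach-Zehnder interferometer with equal arm lengths: two beam splitters in sequence.
after_mz = BS @ BS @ photon_in
probabilities = np.abs(after_mz) ** 2
print("output port probabilities:", probabilities.round(6))   # one port stays dark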
Finally, let’s try to examine Hardy’s paradox in the framework of the PL model. In the
LAB frame of reference, both of the particles reach second beam splitters simultaneously.
7
Just before they reach second beam splitters the state of the system is given by (2). This
u+ >
|
state is orthogonal to
u− >. Therefore, according to an observer in the LAB
frame positron and electron cannot take u+ and u− paths together. Accordingly, lives of
the positron and electron moving on paths u+ and u− must be hidden in the world of the
LAB frame, i.e. they are living parallel to the LAB frame.6 In Fig.4 we show a diagram
|
representing the lives observed in the LAB frame.
Let’s depict the same experiment according to an observer in S− frame of reference. Due
to the relativity of simultaneity, the positron has not yet reached BS2+ as soon as the
electron passes through BS2−. At this instant, the system is described by the state given
in (4). Within a very short time, electron can reach C − and D−. Hence, the following
entangled state is obtained:
γ >
1
2 |
i
2√2 |
−
+
C − = 0; D− = 0 >
|
1
2√2 |
−
u+ >
c− >
|
|
C − = 1; D− = 0 >
u+ >
d− >
|
|
C − = 0; D− = 1 > +
i
√2 |
v+ >
c− >
|
|
C − = 1; D− = 0 > (10)
where,
|
C − = 0, 1; D− = 0, 1 > is the state of the measurement apparatus; 1 represents
detection of a particle and 0 represents a null value (no detection). Consequently, lives of
the observer and experimental apparatus split into four different worlds, corresponding to
orthogonal terms in the superposition (10). Since we restrict ourselves to the situation where
D detectors detect signal, we consider the relative world of S− described by the third term
in (10). In this relative world, lives of the positron take u+ path and lives moving on paths
u− and v+ are hidden. In Fig.5 we show the lives of the experimental apparatus observed
in the S− frame. On the other hand, according to an observer in S+ frame of reference, the
electron has not yet reached BS2− as soon as the positron passes through BS2+. At this
instant, the system is described by the state given in (6). Within a very short time, positron
6 This is evident from equation (2), but it is also conceivable from pair annihilation process at point P .
If the particles take paths u+ and u−, then pair annihilation occur.
In this case, the positron and
electron turn into two photon and do not leave any signal in the detectors D+, D−, C+, C−. If we have
additional photon detectors, we can capture photon signals from pair annihilation. However, since we
restrict ourselves to the situation where both D+ and D− detectors detect signals, there should be no
pair annihilation in the world of the LAB frame.
8
can reach C + and D+. Hence, the following entangled state is obtained:
γ >
1
2 |
i
2√2 |
−
+
C + = 0; D+ = 0 >
|
1
2√2 |
−
u− >
c+ >
|
|
C + = 1; D+ = 0 >
u− >
d+ >
|
|
C + = 0; D+ = 1 > +
i
√2 |
v− >
c+ >
|
|
C + = 1; D+ = 0 > .
(11)
In the relative world of S+ described by the third term in (11), lives of the electron take
u− path and lives moving on paths u+ and v− are hidden. The lives of the experimental
apparatus observed in the S+ frame is given in Fig.6.
To summarize, the lives of particles in the worlds of different reference frames are different
from each other. The lives moving on path u+ are part of the world of the S− frame, but
not part of the worlds of the S+ and LAB frames. Similarly, the lives moving on path u−
are part of the world of the S+ frame, but not part of the worlds of the S− and LAB frames.
However, we should note that actually the lives were there all along. The only thing that
changes from one frame of reference to another is whether lives of the particles interact or
not with the apparatus. As we have discussed in the introduction, noninteracting lives are
hidden, and the observer cannot experience them in her world.
The fact that different reference frames live parallel to each other in different worlds
seems to fit the logic of PL at first sight. However as we will see, there is a problem we
have to overcome. The observer in each reference frame observes not only the experimental
apparatus but also the observer in the other reference frame. For instance, let’s denote the
lives of the observer in the S− frame of reference observing the measurement results C − = 0
and D− = 1 by OS−(D− = 1). Denote also the lives of the experimental apparatus with
C − = 0 and D− = 1 by A(D− = 1). When these two lives meet, they merge to form a
bigger set of lives that we will denote as
OS−(D− = 1)
⊕
AS−(D− = 1).
(12)
Here, the subscript S− in A represents the configuration of the lives of the apparatus ob-
served by OS− (configuration in Fig.5). Let the lives OS−(D− = 1) and AS−(D− = 1) meet
the lives of the observer in S+ frame before positron reaches BS2+, then the following set
of lives is obtained:
OS−(D− = 1)
AS−(D− = 1)
⊕
⊕
OS+(D− = 1)3.
(13)
where the subscript ”3.” indicates that this describes a ”third-person perspective”: observer
in S− frame observes in her world another ”observer” in the S+ frame of reference which
9
she denotes (OS+)3..7 After a while, positron also passes BS2+ and is then detected. The
detection of the positron causes the lives of the apparatus and the observers split into relative
worlds: In one world we obtain C + = 0, D+ = 1 and in the other world C + = 1, D+ = 0.
Since we consider D− = 1, D+ = 1 case, lives of the joint system become
OS−(D− = 1; D+ = 1)
AS−(D− = 1; D+ = 1)
⊕
⊕
OS+(D− = 1; D+ = 1)3..
(14)
The above expression reflects first-person perspective of the observer OS−.8 In this perspec-
tive D− = 1 and D+ = 1 detections occurred due to lives coming from v− and u+ paths (see
Fig.5). Therefore, lives moving on paths v− and u+ are part of the history of (14). On the
other hand, first-person perspective of the observer OS+ has experienced an other history.
According to OS+, D− = 1 and D+ = 1 detections occurred due to lives coming from u−
and v+ paths (see Fig.6). In the first-person perspective of the observer OS+, we can write
the following world of lives:
OS−(D− = 1; D+ = 1)3. ⊕
AS+(D− = 1; D+ = 1)
⊕
OS+(D− = 1; D+ = 1).
(15)
From the analysis we performed above, we get the following odd-looking result: first-person
and third-person perspectives of the same observer belong to different worlds. The observer
(OS+)3. in the world of (OS−)1. lives parallel to the world of (OS+)1.. But if quantum laws
apply equally to all observers, then (OS+)3. should not observe that the positron is detected
before the electron.9 However, this result is incompatible with the relativity of simultaneity:
(OS+)3. is moving relative to (OS+)1., and the time order of the detection events should be
reversed. Consequently, we encounter a discrepancy between special relativity and the PL
model.
Nevertheless, we need to say that such a discrepancy does not arise for any interpretation
of QT that does not accept the reality of anything other than the measurement outcomes.
According to such an interpretation, the paths u−, u+, v− and v+ are just mathematical
auxiliary concepts; they are not related to reality.
7 We borrow this terminology from Ref.[16]. However, Ref.[16] used this terminology in the context of
algorithmic information theory and did not apply it to relativistic observers.
8 we omit the subscript ”1.” for abbreviation.
9 Otherwise the state vector is reduced to (7), which indicates that electron takes u− path. However, this
is erroneous as seen from (14).
10
III. SPECULATIONS ON THEORY OF RELATIVITY
If we persist in the realistic interpretations of QT, the discrepancy with the theory of
relativity needs to be resolved. One solution to this discrepancy is to modify the theory of
relativity by proposing a preferred frame of reference. Such a modification of the theory
has been discussed for a long time [17]. However, there are obscurities in this approach,
such as which criteria should be used to determine the preferred frame of reference.10
In
this paper we will make the following speculation which we believe offers a solution to the
discrepancy and also fits the spirit of the PL model: There is no particular preferred frame
of reference, but for each frame there is always a world in which that frame is preferred. The
world observed from an observer’s first-person perspective is the world where the observer’s
stationary frame is preferred. Lorentz transformations11 are defined between first-person
perspectives of observers on different inertial frames of reference.
According to the assumptions above, lives of each observer split into infinitely many
worlds; one of them corresponds to observer’s first-person perspective and others correspond
to third-person perspectives of some other observers. Suppose that S1,S2,...,Sn are different
inertial reference frames. Then lives of the observer of each reference frame Si, i
1, 2, ...n
}
split into n relative worlds. One of them is the world observed in the first-person perspective
∈ {
of the observer in the frame Si. In this world we denote the lives of the observer in Si by
(OSi)1.. All other observers are in the third-person perspective and denoted by (OS1)3.,
(OS2)3.,..,(OSi−1)3., (OSi+1)3.,..,(OSn)3.. As is known, Lorentz transformations have a sym-
metrical form, i.e. the transformations Si → Sj and Sj → Si, (i ≠ j), have exactly the same
form, up to the sign in front of the velocity. This feature implies that we cannot distinguish
one frame of reference from another. In our assumptions, a Lorentz transformation from Si
to Sj, essentially defines a transformation from (OSi)1. to (OSj)1.. (OSi)1. and (OSj)1. live
parallel in different worlds and each is the preferred observer in her own world. We interpret
the symmetry feature of Lorentz transformations as the equivalence of the worlds of (OSi)1.
and (OSj)1. in defining the laws of nature.
10 One possible candidate for preferred frame of reference is the frame in which the cosmic microwave
background is isotropic [17, 18]. However, there is not any apparent reason why this frame should be the
preferred frame of reference.
11 Conventional Lorentz transformations in the symmetrical form.
11
One can then ask the transformations between observers in the first-person and third-
person perspectives, i.e. transformations (OSi)1. →
frame of reference. Therefore, the order of events observed by (OSi)1. determine the physical
(OSj)3.? In this case Si is the preferred
behavior in Hardy’s gedankenexperiment. For instance, if Si coincides with S− frame of
reference then the detection of the electron takes place before the detection of the positron
and hence, lives of the joint system of observers and the apparatus is given by (14). All other
observers in the world of (OSi)1. should observe same order of detection events. Therefore
(OSj)3. observes variable speed of light, and hence the transformations (OSi)1. ⇆ (OSj)3.
does not obey conventional Lorentz transformation formula. To be precise, assume that the
detections in the D+ and D− detectors are synchronized with light pulses from outer point
K. According to (OSi)1., these light pulses propagate with a speed c. Then, according
= j) speeds of these light pulses moving from K to D+ and D− can vary
to (OSj)3., (i
and their values may no longer be c. The discussion of what the explicit forms of these
transformations is beyond the purpose of this paper. However, we would like to draw
attention to the following point: Whatever new transformations are, it may not be valid
globally. For instance, speed of light from emission event at K to the absorption event at
D− may not be equal to the speed of light moving between other two events.12 Therefore,
the transformation used, varies depending on which events it is used for. This gives us
locally defined transformations. This peculiar situation becomes understandable to some
extent if we realize that the world of (OSi)1. emerge as a result of the entanglement of the
Hardy’s experimental setup with (OSi)1.. Accordingly, in this world we can attribute a
special meaning to the signal events in D− and D+ detectors. We can consider some kind
of transformation which gives conventional Lorentz transformation formula for events not
associated with Hardy’s experimental setup, but gives a new or modified transformation
formula for signal events in the D− and D+ detectors. Obviously, this new transformation
violates the Lorentz symmetry. However, the Lorentz symmetry is violated only for events
associated with quantum entanglement between the observer and some quantum system.
Therefore, we can say that Lorentz symmetry is almost valid.
As it was said by Barbour [19], Einstein did not create a theory of clocks and duration from
first principles. He avoided ever having to address the physical working of rods and clocks;
12 Even the speeds of light pulses from K to D− and K to D+ may not be equal.
12
they were always treated separately as independent entities in both relativity theories. Their
properties were not deduced from the inner structure of the theory, but were simply required
to accord with the relativity principle [19]. We claim that QT gives actual physical working
of rods and clocks. But we should be open to the idea that the relativity principle may not
be absolute, and can be violated for certain events associated with quantum entanglement.
Finally, we want to discuss how we should interpret the non-equivalence of an observer’s
first-person and third-person perspectives. What exactly does this mean? Does this mean
that the observer (OSj)3. in (OSi)1.’s world is an unconscious being, such as a zombie or a
robot? This is not what we intend to say. If we want to explain with the example of Hardy’s
gedankenexperiment we discussed in the previous section, we can say that the measurement
performed by (OSi)1. and her conscious perception causes the state vector to collapse.13 But
this does not mean that (OSj)3. is an unconscious being. It simply means that in (OSi)1.’s
world, (OSj)3.’s perception of the measurement result has no effect on the state vector’s
collapse; all observers in different reference frames respect the order of events and recorded
history that the observer (OSi)1. sees on Hardy’s experimental setup. On the other hand, if
we repeat or perform another experiment, lives will split again and (OSj)3. can find herself
in the world of her first-person perspective where her frame of reference is the preferred
frame. As soon as this happens, the subscript ”3.” should be replaced by ”1.”.
IV. CONCLUSIONS
PL is a model that is expected to be compatible with the relativity theory because it in-
cludes the local interactions of lives and their motions that do not exceed the speed of light.
However, we negated this expectation by showing that the PL model could not overcome
the paradox suggested by Hardy. Our results can also be applied to many world interpreta-
tion where counterfactual propositions assumed to be part of reality in different alternative
worlds, or any realistic interpretation of QT that assumes real particle trajectories. But
we want to emphasize that there is no conflict between the special theory of relativity and
QT for approaches and interpretations that regard state vectors as auxiliary mathematical
concepts and do not relate them to reality. Therefore, one way to overcome the Hardy’s
13 Of course, there is no state vector collapse in the PL model. But since we think many physicists are more
familiar with this terminology, we use the term ”collapse” for clarity.
13
paradox is to adopt such an approach. On the other hand, if we insist on a realistic interpre-
tation as we have just mentioned, we must accept the possibility that Lorentz symmetry is
violated. Such a Lorentz symmetry violation can be realized by choosing a preferred frame of
reference, as noted in Hardy’s original paper [1]. In section III, we have made an interesting
speculation which we believe offers a solution to the discrepancy between QT and special
theory of relativity, and also fits the spirit of the PL model.
[1] L. Hardy, ”Quantum mechanics, local realistic theories, and Lorentz-invariant realistic theo-
ries,” Phys. Rev. Lett. 68, 2981-2984 (1992).
[2] G. Brassard and P. Raymond-Robichaud, ”Can free will emerge from determinism in quantum
theory?,” in ”Is Science Compatible with Free Will? Exploring free will and consciousness in
light of quantum physics and neuroscience” A. Suarez and P. Adams (Eds.), Chapter 4., pp.
41-61, Springer, 2013 [arXiv:1204.2128 [quant-ph]].
[3] G. Brassard, and P. Raymond-Robichaud. ”Parallel Lives: A Local-Realistic Interpretation of
Nonlocal Boxes,” Entropy 21(1), 87 (2019) [arXiv:1709.10016 [quant-ph]].
[4] M. Waegell, “An Ontology of Nature with Local Causality, Parallel Lives, and Many Relative
Worlds,” Found. Phys. 48, no.12, 1698-1730 (2018) [arXiv:1707.06324 [quant-ph]].
[5] H. Everett III, ”relative state formulation of quantum mechanics,” Rev. Mod. Phys. 29, no.
3, p. 454 (1957).
[6] B. S. DeWitt, Physics Today 23, 9, 30 (1970).
[7] J. S. Bell, ”On the Einstein-Podolsky-Rosen paradox,” Physics 1, 195-200 (1964).
[8] J. F. Clauser, M. A. Horne, A. Shimony, and R. A. Holt, ”Proposed Experiment to Test Local
Hidden-Variable Theories,” Phys. Rev. Lett. 23, 880 (1969).
[9] J. Conway, and S. Kochen. ”The Free Will Theorem,” Found. Phys. 36, no.10, 1441-1473
(2006) [arXiv:0604079 [quant-ph]].
[10] M. Waegell, ”Locally Causal and Deterministic Interpretations of Quantum Mechanics:
Parallel Lives and Cosmic Inflation,” Quantum Stud.: Math. Found. 4, 323-337 (2017)
[arXiv:1604.07874 [quant-ph]].
[11] R. Clifton, C. Pagonis and I. Pitowsky, ”Relativity, Quantum Mechanics and EPR”, Proceed-
ings of the Biennial Meeting of the Philosophy of Science Association, Volume 1, pp. 114-128
(1992).
[12] I. Pitowsky, ”The Relativity of Quantum Predictions”, Phys. Lett. A 156, 137-139 (1991).
[13] Y. Aharonov, et al. ”Revisiting Hardy's Paradox: Counterfactual Statements, Real Measure-
ments, Entanglement and Weak Values”, Phys. Lett. A 301, 130-138 (2002).
[14] L. Vaidman, ”Counterfactuals in Quantum Mechanics,” in ”Compendium of Quantum
Physics” Greenberger D., Hentschel K., Weinert F. (eds) Springer, 2009.
[15] A. Einstein, B. Podolsky and N. Rosen, ”Can Quantum-Mechanical Description of Physical
Reality be Considered Complete?”, Phys. Rev. 47, 777, (1935).
[16] M. P. Müller, “Law without law: from observer states to physics via algorithmic information
theory,” arXiv:1712.01826 [quant-ph].
[17] C. M. Will, ”Theory and Experiment in Gravitational Physics”, Cambridge University Press,
Cambridge, (1993).
[18] S. R. Coleman and S. L. Glashow, ”Cosmic ray and neutrino tests of special relativity,” Phys.
Lett. B 405, 249-252 (1997) [arXiv:hep-ph/9703240 [hep-ph]].
[19] J. Barbour, ”The End Of Time”, Oxford University Press, Oxford, (1999).
FIG. 1: Scheme of Hardy's gedankenexperiment [1]. BS1+, BS1−, BS2+, BS2− represent beam
splitters and M1+, M1−, M2+, M2− represent mirrors. C+, D+, C−, D− are detectors.
FIG. 2: Single photon on a beam splitter.
FIG. 3: Single photon in a Mach-Zehnder interferometer.
FIG. 4: Diagram representing the lives observed in the LAB frame. Dotted lines represent hidden
lives living in parallel.
FIG. 5: Diagram representing the lives observed in the S− frame. Dotted lines represent hidden
lives living in parallel.
FIG. 6: Diagram representing the lives observed in the S+ frame. Dotted lines represent hidden
lives living in parallel.
|
ai_researcher | 1 | Development_and_validation_of_Spanish_version_of_FINCODA_an_instrument_for_self-assessment_of_innovation_competence_of_workers_or_candidates_for_Jobs.pdf | 1
2
0
2
t
c
O
3
1
]
L
C
.
s
c
[
1
v
1
6
4
6
0
.
0
1
1
2
:
v
i
X
r
a
FAKE NEWS DETECTION IN SPANISH USING DEEP LEARNING
TECHNIQUES
Kevin Martínez-Gallego
Intelligent Information Systems Lab
Universidad de Antioquia
Calle 67 No. 53 - 108, 050010, Medellín, Colombia.
[email protected]
Andrés M. Álvarez-Ortiz
Intelligent Information Systems Lab
Universidad de Antioquia
Calle 67 No. 53 - 108, 050010, Medellín, Colombia.
[email protected]
Julián D. Arias-Londoño
Intelligent Information Systems Lab
Dpt. of Systems Engineering and Computer Science
Universidad de Antioquia
Calle 67 No. 53 - 108, 050010, Medellín, Colombia.
[email protected]
ABSTRACT
This paper addresses the problem of fake news detection in Spanish using Machine Learning
techniques. It is fundamentally the same problem tackled for the English language; however, there
is not a significant amount of publicly available and adequately labeled fake news in Spanish to
effectively train a Machine Learning model, similarly to those proposed for the English language.
Therefore, this work explores different training strategies and architectures to establish a baseline for
further research in this area. Four datasets were used, two in English and two in Spanish, and four
experimental schemes were tested, including a baseline with classical Machine Learning models,
trained and validated using a small dataset in Spanish. The remaining schemes include state-of-the-art
Deep Learning models trained (or fine-tuned) and validated in English, trained and validated in
Spanish, and fitted in English and validated with automatic translated Spanish sentences. The Deep
Learning architectures were built on top of different pre-trained Word Embedding representations,
including GloVe, ELMo, BERT, and BETO (a BERT version trained on a large corpus in Spanish).
According to the results, the best strategy was a combination of a pre-trained BETO model and a
Recurrent Neural Network based on LSTM layers, yielding an accuracy of up to 80%; nonetheless,
a baseline model using a Random Forest estimator obtained similar outcomes. Additionally, the
translation strategy did not yield acceptable results because of error propagation; a significant difference
in model performance was also observed between training in English and in Spanish, mainly
attributable to the number of samples available for each language.
Keywords Deep Learning · Fake News Detection · Spanish · Supervised Learning · Word Embeddings · Transfer
Learning
1
Introduction
In social networks, the proliferation of fake news is a strategy used to manipulate public opinion. A well-known example
is the case of Cambridge Analytica, where the “private data of millions of people was used to psychologically
manipulate voters in the 2016 US elections, where Donald Trump was elected president. The company not only sent
tailored advertising but developed fake news that it then replicated across social networks, blogs and media" [1]. One of
the strategies that have begun to be explored to prevent the proliferation of fake news, is the use of Artificial Intelligence
(AI) techniques and, more precisely, Deep Learning (DL) for their detection and subsequent removal. Most of the
work that has been done in this field uses datasets of news written in English as a source of information, which are
composed of properly labeled sentences publicly available. Although fake news is a common problem across different
languages, including Spanish, there is not a significant amount of properly labeled fake news in Spanish to effectively
train a DL model for fake news detection, similar to those proposed for the English language. Indeed, to the best of our
knowledge, recently in 2019 the first corpus of fake news in Spanish exclusively adapted for such a task was presented
in [2]; nevertheless, this corpus consists of 971 labeled news, which is an insufficient amount of samples to train a solid
DL model from scratch. Therefore, the main objective of this work is to design a Machine Learning (ML) strategy for
the detection of fake news in Spanish, based on Transfer Learning techniques and/or machine translation tools, which
allow the use of previously trained models, both in English and Spanish.
This paper is organized as follows: section 2 presents some antecedents on the use of ML and DL models for fake
news detection in different languages, doing emphasis on English and Spanish; section 3 presents the pre-processing
strategies applied to the texts, and also the models and embeddings we employed; in section 4 we present the datasets
utilized and the methodology for evaluating the different models, as well as the settings for the experiments carried out
and the outcomes we obtained. Finally, we discuss the results and present the conclusions of this paper in section 5.
2 Related Work
Automatic fake news detection (FND) is a task that has attracted extensive attention from AI researchers in recent years;
this has been evidenced by the large number of publications in which the problem has been addressed by applying
different strategies. Shu et al. [3] report a summary of research works on FND in social networks, analyzing the aspects
involved from psychology, social theories, and algorithmic points of view. As in many ML applications, the proposed
approaches addressing FND are composed of two stages: feature extraction and model building; the first refers to the
numerical representation of news content and related information; the second proposes the development of a machine
learning model to distinguish between fake news and legitimate news. For example, Wu and Liu in [4] assume that fake
news is typically manipulated to resemble real news; thus, they propose a classifier based on propagation paths in social
networks using Long Short-Term Memory Recurrent Neural Networks (LSTM-RNN) and Embeddings. Although the
FND task has traditionally been stated as a bi-class classification problem, in [5] the author presents a dataset in English
(The Liar Dataset), which is composed of 6 classes: pants-fire, false, barely true, half-true, mostly true, and true. In
addition, this author evaluates four classification models following an approach where he considered both meta-data and
text; hence, he presented Support Vector Machine (SVM) as the best classical ML model, and Convolutional Neural
Network (CNN) as the best DL model, which outperformed the other models with an accuracy of 27% on the test set.
Using this same dataset, Brașoveanu and Andonie in [6], propose to add a pre-processing stage based on the extraction
of semantic features from the text. These authors also evaluate classical ML and DL models, finding the SVM model
to be the best in terms of performance (28.4%) for classical ML, and the CapNetLSTM model (64.4%) for DL, which
was used in combination with a pre-trained Embeddings model; these results were obtained on the dataset presented in
[5]. The authors conclude that employing Semantic Features significantly improves accuracy in fake news detection;
in particular, for DL models, the improvement in accuracy was up to 5-6%. Furthermore, they also highlighted that
"the accuracy of the various models greatly varies depending on the data sets and the number of classes involved", a
phenomenon we also noticed across this state-of-the-art review.
Previous works tackled the FND task using datasets in English; however, this paper focuses on FND for Spanish.
Faustini and Covoes propose in [7] an approach using text features that can be generated independently of the news
source platform and, as far as possible, independently of the news language under analysis. The authors report
competitive results for news in languages belonging to the Germanic, Latin, and Slavic language groups. They used
five datasets, and each one was processed with four different Natural Language Processing (NLP) techniques for text
representation. Then, experiments were performed with different models obtaining the best result with Random Forest
and SVM algorithms, combined with Bag-of-Words (BoW) as text representation technique; hence, they got a prediction
rate of up to 95% for news in the specified linguistic groups. Additionally, Posadas-Durán et al. [2] address the FND
task for Spanish news, using different classical ML models: SVM, Random Forest, Logistic Regression, and Boosting;
these models were combined with different strategies for text pre-processing that allow extracting useful semantic
information for the detection task: BOW, Part of Speech tags (POS tags) and n-grams, as well as applying Stop Words
to avoid prepositions and/or punctuation marks in the text. The experiments were carried out using a proprietary dataset
1 (released under CC-BY-4.0 license). The authors report results of up to 77.28% accuracy for one of the combinations.
To the best of our knowledge, no works applying DL models in the Spanish FND task have been published so far.
1https://github.com/jpposadas/FakeNewsCorpusSpanish
3 Methods
3.1 Preprocessing steps
In order to obtain consistent results, a data standardization process known as Text Normalization was performed, which,
in addition to eliminating non-alphanumeric characters in the text, includes some of the most commonly used techniques
in NLP:
• Stop Words: we removed words that are generally agreed not to contribute to the model's learning process in
the context of the problem addressed; for instance, articles and prepositions.
• Stemming: this technique was used to reduce words to their root.
• Tokenization and Padding: as usual in text processing tasks, we performed tokenization and padding, when
required, for words and sentences representation.
Subsequently, we decided to compare some of the most common techniques regarding text representation: BoW, which
provides the number of occurrences of each word in the text corpus; term frequency-inverse document frequency (tf-idf),
which provides a weighted measure of the importance of each term within the text (according to its frequency of
occurrence in sentences); and pre-trained Word Embeddings, where words and the semantic relationships among them
are represented as a vector. It is worth clarifying that, we call Word Embeddings to both pre-trained vectors such as
word2vec or GloVe, and embeddings obtained from pre-trained models such as ELMo or BERT (presented in subsection
3.2).
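As a concrete illustration (this sketch is ours, not the authors' code, which is available in the repository referenced
in Section 4), the normalization steps and the BoW/tf-idf representations described above can be combined with NLTK
and scikit-learn roughly as follows; variable names and parameter values are illustrative assumptions:

import re
import nltk
from nltk.corpus import stopwords
from nltk.stem import SnowballStemmer
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

nltk.download("stopwords", quiet=True)
SPANISH_STOPWORDS = set(stopwords.words("spanish"))
STEMMER = SnowballStemmer("spanish")

def normalize(text, remove_stopwords=True, stem=True):
    # Drop non-alphanumeric characters, lower-case the text and split on whitespace.
    tokens = re.sub(r"[^0-9a-záéíóúüñ\s]", " ", text.lower()).split()
    if remove_stopwords:
        tokens = [t for t in tokens if t not in SPANISH_STOPWORDS]
    if stem:
        tokens = [STEMMER.stem(t) for t in tokens]
    return " ".join(tokens)

def vectorize(corpus, representation="tfidf", vocab_size=10000):
    # BoW counts word occurrences; tf-idf additionally weights them by inverse document frequency.
    cls = TfidfVectorizer if representation == "tfidf" else CountVectorizer
    vectorizer = cls(max_features=vocab_size)
    X = vectorizer.fit_transform([normalize(doc) for doc in corpus])
    return X, vectorizer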
3.2 Models
Classical ML models, and DL models based on artificial neural networks were used. We employed ML models intending
to create a baseline for comparison purposes; hence, we selected the following: Support Vector Machine (SVM),
Random Forest (RF), Gradient Boosting Tree (GBT), and Multi-Layer Perceptron (MLP). For the case of DL classifiers,
besides word embeddings, two types of layers were used: Long Short-Term Memory Recurrent Neural Network
(LSTM-RNN) using a many-to-one architecture, and Convolutional Neural Network (CNN). LSTM-RNN processes
the input data as sequences of dependent observations, while CNNs can process n-grams through the application of
convolutional filters. A schematic of the DL classifiers in combination with an embedding layer is illustrated in Figure 1;
this figure shows the arrangement of the aforementioned layers and the different word embeddings we used, which are
presented next.
Figure 1: Schematic of DL classifiers in combination with Embedding Layer
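A minimal Keras sketch of the two classifier heads in Figure 1 is shown below (our illustration, not the authors'
exact implementation); layer sizes fall within the ranges listed in Section 4.2, and the embedding layer here is the
trainable variant discussed next:

import tensorflow as tf
from tensorflow.keras import layers

def build_classifier(vocab_size, embedding_dim=128, head="lstm"):
    # A shared embedding layer followed by either an LSTM head (many-to-one) or a
    # CNN head (Conv1D + global max pooling), with a sigmoid output for the binary
    # fake/legitimate decision.
    inputs = tf.keras.Input(shape=(None,), dtype="int32")
    x = layers.Embedding(vocab_size, embedding_dim)(inputs)
    if head == "lstm":
        x = layers.LSTM(16)(x)  # only the last hidden state feeds the classifier
    else:
        x = layers.Conv1D(filters=16, kernel_size=10, activation="relu")(x)
        x = layers.GlobalMaxPooling1D()(x)
        x = layers.Dense(4, activation="relu")(x)
    outputs = layers.Dense(1, activation="sigmoid")(x)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model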
In some experiments, we trained the embedding layer as part of the model training process. However, in most cases, we
used transfer learning strategies through pre-trained word embeddings. This procedure makes much sense when either
there are not many samples available for training or the computational resources are limited. Four word-embedding
variants were evaluated:
1. Global Vectors for Word Representation (GloVe): it is an unsupervised method that captures statistical
information from a text corpus; in order to generate word representations, its training process is based on
the spectral co-occurrence matrix decomposition [8]. Hence, we made use of the 300-dimensional vectors
pre-trained on the English Wikipedia corpus, available at [9].
2. Embeddings from a Language Model (ELMo): in contrast to GloVe, which provides a fixed meaning for
each word, the representations generated by ELMo are functions of the entire sequence, instead of a single
word. ELMo encodes words by individual characters, such that it allows the same word to have different
representative vectors under different contexts [10]. The pre-trained model was downloaded from the public
repository Tensorflow Hub, which can be found at [11].
3. Bidirectional Encoder Representations from Transformers (BERT): it is a language representation model
based on another type of model called Transformer, where instead of strictly analyzing the temporal dependence
of the input sequence, all possible combinations of the input sequence are evaluated through an attention layer
[12]. It has the advantage that its training process can be performed in parallel, since it does not depend on the
temporal condition. For this research, the so-called BERT Base was used, which is a model with a total of 110
million pre-trained parameters. The pre-trained model was downloaded from the public repository Tensorflow
Hub, available at [13].
4. BETO: this model corresponds to a BERT version that, instead of English, is trained on a large
corpus of text in Spanish [14]. The size of the model is similar to a BERT Base, with approximately 110
million parameters.
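For the transfer-learning variants, the pre-trained vectors can be injected as frozen weights of the embedding layer.
The sketch below shows the usual pattern for the 300-dimensional GloVe file (the file name is an assumption; an
analogous wrapping applies to the TF Hub modules used for ELMo and BERT):

import numpy as np
from tensorflow.keras import layers
from tensorflow.keras.initializers import Constant

def glove_embedding_layer(word_index, glove_path="glove.6B.300d.txt", dim=300):
    # Build a (vocab_size x dim) matrix whose rows are the pre-trained GloVe vectors;
    # words missing from GloVe keep a zero row.
    matrix = np.zeros((len(word_index) + 1, dim), dtype="float32")
    with open(glove_path, encoding="utf-8") as fh:
        for line in fh:
            parts = line.rstrip().split(" ")
            word, vector = parts[0], np.asarray(parts[1:], dtype="float32")
            if word in word_index:
                matrix[word_index[word]] = vector
    # trainable=False keeps the pre-trained vectors fixed, as done in the experiments.
    return layers.Embedding(matrix.shape[0], dim,
                            embeddings_initializer=Constant(matrix), trainable=False)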
4 Experiments and Results
4.1 Datasets
Four free-to-use datasets were chosen to use in this study. Two of them consist of news in English labeled as fake or
real: Fake and real news dataset [15], and News Data Set - Fake OR Real [16]; the other two datasets correspond to
news in Spanish, also properly labeled: The Spanish Fake News Corpus [2] containing 971 news items, and fake news
in Spanish [17], consisting of 1600 news items. None of the above datasets had missing or null data, and they are well
balanced considering the two classes involved. The English datasets were merged, resulting in a final English corpus
comprising 51233 samples, of which 26645 are fake, and 24588 are genuine; the same procedure was followed over the
individual datasets in Spanish, resulting in a final Spanish corpus consisting of 2571 samples, such that 1280 of them
are fake and 1291 genuine 2. Figure 2 shows through a chart the corresponding distribution for each resulting dataset.
Since the number of samples in Spanish is considerably small to train a DL model from scratch, one of the strategies
followed during experiments, consists of evaluating the capacity of a DL model trained with the English corpus to
predict fake Spanish news translated into English using the Google translation API.
4.2 Experimental setup
The experiments were carried out using a Bootstrap (ShuffleSplit) validation methodology, considering 5 iterations.
Depending on each scheme (described below), the partitioning was done into three subsets (train, development, test) or
into two subsets (train, test), taking 80% for training and 20% for testing (in the case of partitioning into three subsets,
for the internal sub-division train/development we took 80% / 20% respectively); however, in some experiments the
split of the dataset was set to a ratio of train: 90% / test: 10%. These variations in the partitioning were considered due
to the few samples we had in the final corpus in Spanish; hence, we wanted to try different combinations aiming to keep
as many samples for training as possible.
Considering the balancing condition on the datasets we used, and the fact that we were addressing a bi-class
classification task, we chose Accuracy as the performance metric for measuring the generalization ability of the models.
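A sketch of this Bootstrap-style (ShuffleSplit) evaluation, assuming scikit-learn and any estimator with fit/predict
methods:

import numpy as np
from sklearn.model_selection import ShuffleSplit
from sklearn.metrics import accuracy_score

def shuffle_split_accuracy(estimator, X, y, n_splits=5, test_size=0.2, seed=0):
    # Draw n_splits random train/test partitions and average the test accuracy.
    splitter = ShuffleSplit(n_splits=n_splits, test_size=test_size, random_state=seed)
    scores = []
    for train_idx, test_idx in splitter.split(X):
        estimator.fit(X[train_idx], y[train_idx])
        scores.append(accuracy_score(y[test_idx], estimator.predict(X[test_idx])))
    return np.mean(scores), np.std(scores)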
2In this paper, we refer to the resulting datasets presented in this subsection as follows: the dataset in English, the dataset in
Spanish, and the translated dataset.
Figure 2: Sample distribution for the resulting datasets in English and Spanish
Regarding the execution of experiments, four schemes were defined in combination with the datasets and the different
models. Initially, a baseline was established, training and validating the four classical ML models listed in subsection
3.2, using the dataset in Spanish (first scheme). For classical ML approaches, the texts were represented using BoW
and tf-idf techniques; this step was carried out to have a reference point for comparison purposes with the DL-based
architectures. The subsequent experiments combine different DL models with different word embedding representations
(presented in sub-section 3.2), varying the datasets used for training and validation. The second scheme uses the
dataset in Spanish both to train and validate two vanilla DL architectures based on LSTM and CNN layers, and more
sophisticated architectures built on top of BERT-type models. Concerning the experiments with BETO embedding, we
tried different values for the number of epochs, and also applied the early stopping strategy considering different
values of the hyperparameters tolerance and patience. For its part, the third scheme is similar to the former one but
using the dataset in English instead, so in this case no experiments were performed using the pre-trained BETO model.
The last scheme trains the models with the dataset in English and validates with the translated dataset (fourth scheme).
Moreover, we conducted some experiments where samples from the translated dataset were progressively mixed with
the dataset in English during the training phase; then, the remaining portion of the translated dataset was used for
validation, i.e., emulating a learning curve.
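The early stopping strategy mentioned above for the BETO experiments maps onto a standard Keras callback, where
min_delta plays the role of the tolerance hyperparameter (the concrete values below are placeholders, not the ones
used in the experiments):

import tensorflow as tf

early_stopping = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss",            # watch the validation loss
    min_delta=1e-3,                # "tolerance": minimum change counted as an improvement
    patience=5,                    # "patience": epochs without improvement before stopping
    restore_best_weights=True,
)
# model.fit(X_train, y_train, validation_data=(X_dev, y_dev), epochs=25, callbacks=[early_stopping])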
The following is the collection of hyperparameter values we considered when training and validating the different
models. Regarding the ML models for the baseline:
• (SVM) RBF and Linear Kernel; regularization parameter ’C’: 1e3, 1e-3; kernel coefficient for RBF ’gamma’:
0.1, 1
• (RF and GBT) number of trees: 50, 100, 200, 300, 500; maximum number of features: 50, 100, 200, 300
• (MLP) hidden layers: 1, 2, 3; number of neurons per hidden layer: 10, 50; epochs: 1000, 1500
Furthermore, the combinations of these models were evaluated with BoW and tf-idf representations; removing and not
removing Stop Words; applying and not applying Stemming; and considering a maximum vocabulary size of 10000,
20000, 30000 and 40000 words.
Similarly, for the DL models evaluated we considered:
• LSTM: units present in hidden layers (Units) [this model was only implemented with a single hidden layer],
kernel regularizer (KR), recurrent regularizer (RR), dropout (D).
• CNN: amount of filters (F), kernel size (KS), number of units for additional dense layer (Units), kernel
regularizer (KR).
It is also worth pointing out that, in order to set the input length for the models, we used a histograms-based approach to
determine the most common length (in words) of the news items, in both the datasets for English and Spanish news:
1500 and 500 words, respectively.
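The histogram-based choice of the input length and the subsequent padding/truncation can be sketched as follows
(the 95th percentile is an illustrative cut-off, not necessarily the authors' exact rule):

import numpy as np
from tensorflow.keras.preprocessing.sequence import pad_sequences

def choose_input_length(token_sequences, percentile=95):
    # Inspect the distribution of document lengths (in words) and pick a cut-off that
    # covers most news items (about 1500 words for English and 500 for Spanish here).
    lengths = np.array([len(seq) for seq in token_sequences])
    return int(np.percentile(lengths, percentile))

def pad_to_length(token_sequences, max_len):
    return pad_sequences(token_sequences, maxlen=max_len, padding="post", truncating="post")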
The source code used to carry out the experiments can be found in a publicly accessible repository at GitHub 3.
4.3 Results
Table 1 shows the best results for each of the models considered during the experiments of the first scheme, which
was described in subsection 4.2. It also shows the configuration of pre-processing steps that achieved the best results.
Moreover, the hyperparameter values selected for each model were the following:
• (SVM) kernel RBF; ’C’: 1e3; kernel coefficient ’gamma’: 1
• (RF and GBT) number of trees: 500; maximum number of features: 50
• (MLP) hidden layers: 1; neurons: 10; epochs: 1500
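For reference, the best baseline configuration in Table 1 (tf-idf features with a Random Forest) corresponds roughly
to the following scikit-learn pipeline; this is a sketch under the hyperparameters listed above, not the authors'
exact code:

from sklearn.pipeline import Pipeline
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer

baseline_rf = Pipeline([
    ("tfidf", TfidfVectorizer(max_features=40000)),           # vocabulary size from Table 1
    ("clf", RandomForestClassifier(n_estimators=500,          # number of trees
                                   max_features=50)),         # maximum number of features per split
])
# baseline_rf.fit(train_texts, train_labels)
# accuracy = baseline_rf.score(test_texts, test_labels)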
Table 1: Baseline results for the dataset in Spanish
Model   Vocab Size   Stemming   Remove StopWords   Text Representation   test_acc
SVM     10000        NO         YES                tf-idf                0.798
RF      40000        NO         YES                tf-idf                0.802
GBT     40000        YES        NO                 BoW                   0.783
MLP     10000        YES        NO                 tf-idf                0.794
According to the baseline results, RF in combination with a tf-idf text representation showed the highest accuracy.
Subsequently, we performed the experiments with the DL models (LSTM, CNN) in combination with the different types
of Word Embeddings; hence, we followed the second, third, and fourth schemes. From this point on, we permanently
removed Stop Words and did not apply Stemming anymore regarding data pre-processing.
Initially, we ran some experiments using a trainable embedding layer; the results are summarized in Table 2, where the
hyperparameter values selected for each model were:
• LSTM (Spanish) 16 units; KR and KK equals 1; D equals 0
• LSTM (English) 4 units; KR and KK equals 0.01; D equals 0
• CNN (Spanish) F equals 16; KS equals 10; 4 units; KR equals 0.01
• CNN (English) F equals 16; KS equals 10; 12 units; KR equals 0
The LSTM and CNN models trained with the English dataset, whose results are shown in Table 2, were also validated
with the whole translated dataset, yielding accuracies of 56.7% and 53.2% respectively.
Table 2: Results for DL models with trainable embedding layer; the column dev_acc shows the accuracy in the
development set; std is the standard deviation, and test_acc shows the accuracy in the test set.
Model   Dataset Language   dev_acc   std     test_acc
LSTM    Spanish            0.714     0.026   0.761
LSTM    English            0.95      0.02    0.931
CNN     Spanish            0.73      0.021   0.685
CNN     English            0.984     0.002   0.982
Next, we performed the experiments using a Transfer Learning approach, with the pre-trained 300-feature GloVe
embedding layer presented in subsection 3.2; this time, the embedding values were left fixed during the training process,
and only the added hidden layers were fine-tuned. Since the GloVe vectors utilized were trained on a corpus in English,
these experiments correspond to the third and fourth schemes. The results are summarized in Table 3; moreover, the
hyperparameter values chosen for each model were the following:
• (LSTM) 8 units; KR and KK equals 0; D equals 0.5
• (CNN) F equals 16; KS equals 10; 4 units; KR equals 0
Then, when validating the former models using the translated dataset (fourth scheme), we got accuracy values of 54%
and 53.8% for LSTM and CNN layers, respectively.
3https://github.com/kevinmaiden7/Spanish_FakeNewsDetection
Table 3: Results for DL models using GloVe embedding with the dataset in English. The column dev_acc shows the
accuracy in the development set; std is the standard deviation, and test_acc shows the accuracy in the test set.
Model   dev_acc   std     test_acc
LSTM    0.962     0.006   0.924
CNN     0.974     0.002   0.973
At this point, we decided to implement a learning curve in order to evaluate the effect of including data from the
translated dataset, into a training dataset composed of the original news in English. The models evaluated included
LSTM and CNN layers as well as trainable Embeddings and GloVe. Five experiments were run for each case, adding
500, 1000, 1500, 2000 and 2500 samples from the translated dataset to the training set, and then validating with the total
amount of remaining samples of the translated dataset. Figure 3 shows the results for the CNN model with trainable
Embedding, which was the best combination found in terms of accuracy, when applying this strategy.
Figure 3: Learning curve results for CNN with trainable embedding
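The learning-curve procedure behind Figure 3 amounts to the loop sketched below; build_model stands for any of the
Keras architectures above, and the array names are illustrative:

import numpy as np

def translated_learning_curve(build_model, X_en, y_en, X_tr, y_tr,
                              increments=(500, 1000, 1500, 2000, 2500), epochs=5):
    # Progressively move translated samples into the English training set and evaluate
    # on the translated samples that were left out.
    results = {}
    for n in increments:
        X_train = np.concatenate([X_en, X_tr[:n]])
        y_train = np.concatenate([y_en, y_tr[:n]])
        model = build_model()
        model.fit(X_train, y_train, epochs=epochs, verbose=0)
        _, accuracy = model.evaluate(X_tr[n:], y_tr[n:], verbose=0)
        results[n] = accuracy
    return results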
Based on the previous results obtained with GloVe, and to simplify the experimental phase, the best hyperparameters of
LSTM and CNN layers found using this embedding, were left fixed for the subsequent experiments applying ELMo and
BERT-type embeddings.
Table 4: Results for ELMo, BERT and BETO Embeddings
Embedding   Model   Scheme   Epochs   test_acc
ELMo        LSTM    Third    7        0.973
ELMo        CNN     Third    5        0.957
ELMo        CNN     Fourth   7        0.525
BERT        CNN     Third    7        0.957
BERT        CNN     Fourth   7        0.53
BETO        LSTM    Second   25       0.80
Regarding the results with ELMo embedding, we got high accuracy outcomes for both LSTM and CNN models
concerning the third scheme, as it is shown in Table 4; however, when it comes to the fourth scheme, the results show a
degradation in performance. Additionally, taking into account the trend identified in the aforementioned learning curve
approach, we added 2500 samples from the translated dataset to the training set, fitted the CNN layer in combination
with ELMo embedding on this set, and validated with the remaining 71 samples of the translated dataset: we reached an
accuracy value of 70%.
For the models built on top of BERT embedding, experiments were only carried out with the CNN layers; with this in
mind, as it is described in Table 4, we obtained a high accuracy result for the third scheme again, whereas not such a
good level was achieved for the fourth scheme. Similarly to the procedure followed with ELMo embedding, 2500
samples were added from the translated dataset to the training set, for then validating with the remaining 71 samples: in
this case we got an accuracy level of 63.4%.
Concerning BETO embedding, the experiments with this model were framed in the second scheme (training and
validating with the dataset in Spanish) since BETO itself is a model trained over a corpus in this language; in this case,
we used both LSTM and CNN layers. Table 4 shows the results for this experiment, where it is possible to observe that
the LSTM model trained with 25 epochs achieved results of up to 80% in accuracy for the test set; this is the best result
we reached with DL models for the dataset in Spanish. Thus, Figure 4 shows the confusion matrix associated with this
architecture for the test set; it is worth highlighting that we used the early stopping strategy, although it did not yield a
significant improvement in comparison to the previous results.
Figure 4: (BETO) normalized confusion matrix for LSTM with the dataset in Spanish
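The normalized confusion matrix of Figure 4 can be reproduced from the model predictions with scikit-learn; the 0.5
threshold on the sigmoid output is an assumption of this sketch:

import numpy as np
from sklearn.metrics import confusion_matrix

def normalized_confusion(model, X_test, y_test):
    # Row-normalized confusion matrix: each row shows how one true class was classified.
    y_pred = (model.predict(X_test).ravel() >= 0.5).astype(int)
    return confusion_matrix(y_test, y_pred, normalize="true")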
5 Discussion and Conclusions
Regarding the proposed baseline for datasets in Spanish (first scheme), RF showed the best performance getting an
accuracy of 80%, and using the tf-idf text representation; however, this performance was not statistically different from
that obtained with SVM, where a smaller vocabulary size was used. This model outperformed the best result reported in
[2], where The Spanish Fake News Corpus dataset was also utilized. Furthermore, we noticed that for this scheme, there
were no significant differences in the performance of the models when applying Stemming or removing StopWords, or
even when varying the text representation strategy or the vocabulary size.
It is worth highlighting the gap between the number of samples in the resulting datasets for English and Spanish,
as it was shown in Figure 2. Since the models we used in this research follow a phenomenological approach, they
highly depend on the amount of experimental data they are trained on. The above was evidenced by the prominent
difference on accuracy we obtained in the third and fourth schemes, using GloVe, ELMo, and BERT embeddings.
Also, concerning the second scheme, we noticed that the models exhibited a trend of overfitting due to the small
number of samples available for training; moreover, we observed that the regularization strategies we employed did not
significantly improve the performance.
In regard to the fourth scheme, it is important to underline that the vocabulary present in the translated dataset
corresponded to just 60% of that present in the dataset in English; this situation negatively affected the results we
obtained for the translation strategy. For this scheme, despite the excellent performance of the models trained and
validated with the dataset in English (third scheme), when we validated with the translated dataset, the values of
accuracy were drastically reduced. Consequently, the results of the implemented learning curve indicated a performance
improvement (although in different ratios) as more samples were added from the translated dataset to the training
set (as it was shown in Figure 3); this pattern was noticed regardless of the combination between a model and the
embedding layer utilized.
Concerning the different embeddings we used, similar results were obtained for the third and fourth schemes when
using GloVe, ELMo or BERT, in each of them; in fact, it is noteworthy that, in combination with these different
embeddings, the LSTM and CNN layers showed similar results. Furthermore, taking into account these pre-trained
models corresponded to the state-of-the-art in NLP, we were expecting to obtain salient results by mixing portions of
the translated dataset to the dataset in English for training; however, due to the small number of samples available in the
dataset in Spanish, and the discrepancies resulting from the translation, the predictive capability of the models was
limited. Nonetheless, BETO embedding allowed us to obtain the best result for the dataset in Spanish (or any validation
over news items in Spanish using DL models), considering the fact that its approximately 110 million parameters were
pre-trained on a corpus in this language; this enabled us to take full advantage of the Transfer Learning strategy, and
obtain an outstanding performance of 80% of accuracy on the test set, in spite of the small number of samples available
for such deep network architecture. Figure 4 showed the result for the LSTM model combined with this embedding,
which corresponds to the best strategy identified for detecting fake news in Spanish using DL techniques: out of the
258 news items used for validation, the model correctly classified 76% of the fake news items, as well as 86% of the
legitimate news items, which we consider a good hit ratio; by contrast, the model tends to confuse news items that are
fake with legitimate ones, which corresponds to the main error condition it incurred.
Although the best detection rate achieved for DL models was similar to that obtained with RF, there is indubitably
more room for improvement in the case of Deep Neural network architectures, due to the combination with Word
Embeddings and more advanced techniques. Thus, in the future, this research could continue aiming at building a
more robust system from the best strategy we found (BETO + LSTM), if a set of labeled news in Spanish with a more
representative number of samples are available; furthermore, more experiments combining hyperparameter values and
network architectures could also be carried out.
References
[1] “5 claves para entender el escándalo de Cambridge Analytica que hizo que Facebook perdiera US 37.000 millones
en un día” (BBC News Mundo). https://www.bbc.com/mundo/noticias-43472797, Mar 2018. [Online; accessed 09-Sep-2021].
[2] J.-P. Posadas-Durán, H. Gómez-Adorno, G. Sidorov, and J. J. M. Escobar, “Detection of fake news in a new corpus
for the spanish language,” Journal of Intelligent & Fuzzy Systems, vol. 36, no. 5, pp. 4869–4876, 2019.
[3] K. Shu, A. Sliva, S. Wang, J. Tang, and H. Liu, “Fake news detection on social media: A data mining perspective,”
ACM SIGKDD explorations newsletter, vol. 19, no. 1, pp. 22–36, 2017.
[4] L. Wu and H. Liu, “Tracing fake-news footprints: Characterizing social media messages by how they propagate,”
in Proceedings of the eleventh ACM international conference on Web Search and Data Mining, pp. 637–645, 2018.
[5] W. Y. Wang, “‘Liar, Liar Pants on Fire’: A New Benchmark Dataset for Fake News Detection,” arXiv preprint
arXiv:1705.00648, 2017.
[6] A. M. Bra¸soveanu and R. Andonie, “Semantic fake news detection: a machine learning perspective,” in Interna-
tional Work-Conference on Artificial Neural Networks, pp. 656–667, Springer, 2019.
[7] P. H. A. Faustini and T. F. Covões, “Fake news detection in multiple platforms and languages,” Expert Systems
with Applications, p. 113503, 2020.
[8] J. Pennington, R. Socher, and C. D. Manning, “Glove: Global vectors for word representation,” in Empirical
Methods in Natural Language Processing (EMNLP), pp. 1532–1543, 2014.
[9] J. Pennington, “Glove: Global vectors for word representation.” https://nlp.stanford.edu/projects/
glove/, 2014. [Online; accessed 09-Sep-2021].
[10] M. E. Peters, M. Neumann, M. Iyyer, M. Gardner, C. Clark, K. Lee, and L. Zettlemoyer, “Deep contextualized
word representations,” arXiv preprint arXiv:1802.05365, 2018.
[11] Google, “Elmo - tensorflow hub.” https://tfhub.dev/google/elmo/3, 2020. [Online; accessed 09-Sep-
2021].
[12] J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, “Bert: Pre-training of deep bidirectional transformers for
language understanding,” arXiv preprint arXiv:1810.04805, 2018.
[13] Google. https://tfhub.dev/tensorflow/bert_en_uncased_L-12_H-768_A-12/2, 2020.
[Online; ac-
cessed 09-Sep-2021].
[14] J. Cañete, G. Chaperon, R. Fuentes, J.-H. Ho, H. Kang, and J. Pérez, “Spanish pre-trained bert model and
evaluation data,” in PML4DC at ICLR 2020, 2020.
[15] C. Bisaillon, “Fake and real news dataset.” https://www.kaggle.com/clmentbisaillon/fake-and-real-news-dataset, Mar 2020. [Online; accessed 09-Sep-2021].
[16] V. Ukani, “News data set - fake or real.” https://kaggle.com/vikasukani/news-data-set-fake-news-with-python, Jul 2020. [Online; accessed 09-Sep-2021].
[17] A. Tretiakov, “Noticias falsas en español 2020.” https://kaggle.com/arseniitretiakov/noticias-falsas-en-espaol, 2020. [Online; accessed 09-Sep-2021].
|
ai_researcher | 3 | Let's_Be_Self-generated_via_Step_by_Step_A_Curriculum_Learning_Approach_to_Automated_Reasoning_with_Large_Language_Models.pdf | 6
9
9
1
g
u
A
8
2
2
v
2
0
0
7
0
6
9
/
n
a
-
t
c
n
u
f
:
v
i
X
r
a
ON SELF-ADJOINTNESS OF A SCHRÖDINGER
OPERATOR ON DIFFERENTIAL FORMS
Maxim Braverman
Abstract. Let M be a complete Riemannian manifold and let Ω•(M) denote the
space of differential forms on M. Let d : Ω•(M) → Ω•+1(M) be the exterior dif-
ferential operator and let ∆ = dd∗ + d∗d be the Laplacian. We establish a suffi-
cient condition for the Schrödinger operator H = ∆ + V (x) (where the potential
V (x) : Ω•(M) → Ω•(M) is a zero order differential operator) to be self-adjoint. Our
result generalizes a theorem by I. Oleinik about self-adjointness of a Schrödinger
operator which acts on the space of scalar valued functions.
1. Introduction. Suppose M is a complete Riemannian non-compact manifold.
We will assume that M is oriented and connected. Let T∗M denote the cotangent
bundle to M and let Λ•(T∗M) = ⊕i Λi(T∗M) denote the exterior algebra of T∗M.
We denote by L2Ω•(M) the space of square integrable complex valued differential
forms on M, i.e. the space of sections of Λ•(T∗M) ⊗ C which are square integrable
with respect to the scalar product

(1)    ⟨α, β⟩ = ∫M α ∧ ∗β,    α, β ∈ L2Ω•(M).

Here ∗ denotes the Hodge operator associated to the Riemannian metric on M.
Note that L2Ω0(M) is just the space of square integrable complex valued functions
on M.
Let d : L2Ω•(M) → L2Ω•+1(M) denote the exterior differential and let d∗ be
the operator formally adjoint to d with respect to the scalar product (1).
Let ∆ = dd∗ + d∗d be the Laplacian and consider the Schrödinger operator

(2)    H = ∆ + V(x) : L2Ω•(M) → L2Ω•(M),

where the potential V(x) is a measurable section of the bundle End(Λ•(T∗M)) of
endomorphisms of Λ•(T∗M) which belongs to the class L∞loc (i.e. such that for any
compact set K ⊂ M there exists a constant CK > 0 such that |V(x)| ≤ CK for
almost all x ∈ K).
We denote by H0 the restriction of H on the space Ω•c(M) of smooth differential
forms with compact support. The purpose of this paper is to introduce a sufficient
condition on the potential V(x) for operator H0 to be self-adjoint.
1991 Mathematics Subject Classification. Primary: 58G25 Secondary: 35P05.
The research was supported by US - Israel Binational Science Foundation grant No. 9400299
Typeset by AMS-TEX
2. Statement of results. For x, y ∈ M let dist(x, y) denote the Riemannian
distance between x and y. Fix a point p ∈ M and set r(x) = dist(x, p).
Fix x ∈ M. The Riemannian metric on M defines a scalar product ⟨·, ·⟩x on
the fiber Λ•(T∗xM) ⊗ C of the vector bundle Λ•(T∗M) ⊗ C. As usual, we write
V(x) ≥ C if

(3)    ⟨V(x)ξ, ξ⟩x ≥ C ⟨ξ, ξ⟩x

for any ξ ∈ Λ•(T∗xM) ⊗ C. Note that it follows from (3) that V(x) is a self-adjoint
endomorphism of Λ•(T∗xM).

Theorem A. Assume that for almost all x ∈ M the potential V(x) of the operator
(2) satisfies the estimate

(4)    V(x) ≥ −Q(x),

where 1 ≤ Q(x) ≤ ∞ and Q−1/2(x) is a Lipschitz function on M such that

(5)    |Q−1/2(x) − Q−1/2(y)| ≤ K dist(x, y)    for any x, y ∈ M.

If for any piecewise smooth curve γ : [0, ∞) → M such that limt→∞ r(γ(t)) = ∞
the integral

(6)    ∫γ Q−1/2(x) dγ = ∞,

then the operator H0 is essentially self-adjoint.
For the case of a Schrödinger operator acting on scalar valued functions this
theorem was established by I. Oleinik [O2]. Note that Q(x) may be equal to infinity
on a set of positive measure.
As a simple consequence of Theorem A we obtain the following
Theorem B. Suppose that for almost all x ∈ M the potential V(x) satisfies the
estimate V(x) ≥ −q(r(x)), where 1 ≤ q ≤ ∞ and q−1/2(t) is a Lipschitz function on
R such that ∫0∞ q−1/2(t) dt = ∞. Then the operator H0 is essentially self-adjoint.
In particular, if M = Rn and V(x) ≥ −C|x|2 then the operator H0 is essentially
self-adjoint.
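As a worked check (added for concreteness; it is not part of the original argument), the quadratic example in the
last sentence satisfies the hypotheses of Theorem B. Assume M = Rn, p = 0 (so r(x) = |x|) and V(x) ≥ −C|x|2 with,
say, C ≥ 1 (enlarging C if necessary). Then
\[
  V(x) \;\ge\; -C|x|^{2} \;\ge\; -C\bigl(1+r(x)\bigr)^{2} \;=\; -q\bigl(r(x)\bigr),
  \qquad q(t) := C\,(1+t)^{2} \;\ge\; 1,
\]
\[
  q^{-1/2}(t) \;=\; \frac{1}{\sqrt{C}\,(1+t)} \ \text{ is Lipschitz on } [0,\infty),
  \qquad
  \int_{0}^{\infty} q^{-1/2}(t)\,dt \;=\; \frac{1}{\sqrt{C}}\int_{0}^{\infty}\frac{dt}{1+t} \;=\; \infty,
\]
so Theorem B applies and H0 is essentially self-adjoint.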
Remark. Theorem A remains true if we replace L2Ω•(M ) by the space of square
integrable forms on M with values in a flat Hermitian vector bundle F over M ,
provided that the Hermitian structure on F is flat. In this case the differential d
should be replaced by the covariant differential associated to the flat structure on
F. The proof is a verbatim repetition of the proof for the scalar case, cf. below.
However the notation in the vector valued case is more complicated.
3. Historical remarks. An analogue of Theorem B for the case M = R1
was established by Sears [Se]. B. Levitan [Le] proved the Sears theorem for the
Schrödinger operator acting on scalar valued functions on M = Rn. F. Rofe-
Beketov [RB] extended these results to the case where the potential V (x) can not
be estimated by a function depending only on dist(x, p). Many results and references
about the essential self-adjointness of Schrödinger operators on Rn may be found
in [RS].
I. Oleinik [O1,O2] established Theorem A for the Schrödinger operator acting
on scalar valued functions on a complete Riemannian manifold.
Essential self-adjointness of a pure Laplacian (without lower order terms) on
differential forms on a complete Riemannian manifold was first stated and proved
by M. P. Gaffney [Ga1,Ga2]. A number of related results may be found in [Sh].
In [BFS], Theorem B is established for the case where M is a manifold with
cylindrical ends and the potential V (x) ≥ 0. The result is used there to study
Witten deformation of the Laplacian on a non-compact manifold.
4. Acknowledgment. I would like to thank M. Shubin for posing the problem
and for essential help.
I am very grateful to I. Oleinik for pointing out a gap in a preliminary version
of the paper and for drawing my attention to the paper [O2].
I am also thankful to M. Farber and V. Matsaev for valuable discussions.
5. The domain of D(H∗0). Let H∗0 denote the operator adjoint to H0. The domain
D(H∗0) of H∗0 consists of forms α ∈ L2Ω•(M) such that Hα, understood in the sense
of distributions, also belongs to L2Ω•(M).
The operator H0 is symmetric. Hence, to show that its closure is self-adjoint it
is enough to show that the adjoint operator H∗0 is symmetric. In other words we
have to prove that

(7)    ∫M (Hα ∧ ∗β − α ∧ ∗Hβ) = 0    for any α, β ∈ D(H∗0).

To prove (7) we need some information about the behavior of differential forms
from D(H∗0). The main result of this section is the following lemma, which provides
us with this information.

Lemma 1. If α ∈ D(H∗0) then the forms Q−1/2dα, Q−1/2d∗α are square integrable.
Remark. 1. By the standard theory of elliptic operators any α ∈ D(H ∗
0 ) belongs
to the Sobolev space H 2
loc. Hence, dα, d∗α are locally square integrable. Thus the
lemma provides us with an information about the behavior of the forms from D(H ∗
0 )
at infinity.
2. For the Schr¨odinger operator on scalar valued functions on Rn an analogous
lemma was established in [RB]. The proof was adopted in [O1,O2] to the case
of a Riemannian manifold. In our proof we follow rather closely the lines of [O2].
However, the fact that we deal with differential forms rather than with scalar valued
functions demands a more careful analysis.
Proof. Recall that we fixed a point p ∈ M and that for any x ∈ M we denoted by
r(x) the Riemannian distance between x and p.
It is shown in [O2, Proof of Lemma 1] that for any R > 0, ε > 0 there exist
smooth functions rR,ε(x), FR,ε(x) on M which approximate the Lipschitz functions
r(x), Q−1/2(x) in the sense that
(8)
|rR,ε(x) − r(x)| < ε,
Q−1/2(x) − ε < FR,ε(x) < (1 + ε) Q−1/2(x),
lim
ε→0
|drR,ε(x)| ≤ 1,
lim
ε→0
|dFR,ε(x)| ≤ K,
for any x ∈ r−1
R,ε([0, R + 1]). Here K is the same constant as in (5).
Let Ψ : [0, +∞) → [0, 1] be a smooth function which is equal to one when t ≤ 1/2
and which is equal to zero when t ≥ 1. Set
(9)
ψR,ε(x) =
Ψ
rR,ε(x)
R
FR,ε(x)
(
0
(cid:0)
(cid:1)
rR,ε(x) ≤ R;
if
outside of the set rR,ε(x) ≤ R.
For any R > 0 the functions ψR,ε, ε < 1 vanish outside of the compact set
r−1([0, R + 1]). Hence, it follows from (8) and (4) that there exist a constant
K1 > 0 not depending on R and a number εR > 0 (which does depend on R) such
that
(10)
|dψR,ε(x)| ≤ K1,
ψ2
R,ε(x) ≤ 2,
ψ2
R,ε α ∧ ∗V α
≤ 2 kαk2,
(cid:12)
(cid:12)
(cid:12)
(cid:12)
for any x ∈ M, R > 1, 0 < ε < εR, α ∈ L2Ω•(M ). Here kαk = hα, αi
L2-norm of the form α.
(cid:12)
(cid:12)
(cid:12)
(cid:12)
ZM
1
2 denotes the
Functions ψR,ε have compact support. Hence, in view of the remark 1 after
the statement of the lemma, the forms ψR,εdα and ψR,εd∗α are square integrable.
Assume that α ∈ D(H ∗
0 ) is a real valued form and set
(11) J 2
R,ε = kψR,εdαk2 + kψR,εd∗αk2 =
ψR,ε(x)2
dα ∧ ∗dα + d∗α ∧ ∗d∗α
.
ZM
(cid:0)
(cid:1)
It follows from (8), (9) that to prove the lemma it is enough to show that
(12)
lim
R→∞
lim
ε→0
JR,ε < ∞.
Let us first rewrite the integrand in (11) in a more convenient form.
In the
[Wa, §6.1]) d∗α = (−1)|α| ∗−1 d ∗ α
calculations bellow we use the equality (cf.
where |α| denotes the degree of the differential form α.
(13)
ψ2
ψ2
R,ε dα∧ ∗dα = d
R,ε d∗α∧∗d∗α = (−1)|α|ψ2
ψ2
= −d
(cid:0)
ψ2
R,εd∗α ∧ ∗α
R,εα ∧ ∗dα
− 2ψR,εdψR,ε ∧ α ∧ ∗dα + ψ2
R,εα ∧ ∗d∗dα,
R,ε d∗α ∧ d ∗ α
(cid:1)
+ 2ψR,εdψR,ε ∧ d∗α ∧ ∗α + ψ2
R,εdd∗α ∧ ∗α.
It follows now from (13), (10) and from the Stokes theorem that, if R > 1, ε < εR,
then
(cid:0)
(cid:1)
kψR,εdαk2 =
ψ2
R,ε dα ∧ ∗dα = −2hdψR,ε ∧ α, ψR,εdαi + hα, ψ2
R,εd∗dαi
≤ 2K1kαk kψR,εdαk + hα, ψ2
R,εd∗dαi,
kψR,εd∗αk2 =
R,ε d∗α ∧ ∗d∗α = 2hdψR,ε ∧ ψR,εd∗α, αi + hψ2
ψ2
R,εdd∗α, αi
ZM
ZM
≤ 2K1kψR,εd∗αk kαk + hα, ψ2
R,εdd∗αi.
Summing these two equations we obtain
(14) J 2
R,ε ≤ 2K1kαk
kψR,εdαk + kψR,εd∗αk
+ hα, ψ2
R,ε∆αi
≤ 4K1kαk JR,ε +
(cid:0)
ZM
ψ2
R,ε
(cid:1)
α ∧ ∗Hα − α ∧ ∗V α
(cid:0)
≤ 4K1kαk JR,ε + 2kαk kHαk + 2kαk2.
(cid:1)
Here the last inequality follows from (10).
It follows from (14) that the set {JR,ε : R > 1, ε < εR} is bounded from above.
Hence (12) holds. The proof of the lemma is completed. (cid:3)
6. Proof of Theorem A. We apply a modification of the method used in [RB]
suggested by I. Oleinik [O2].
The quantity
(15)
ρ(x, y) = inf
γ
Zγ
Q−1/2(x) dγ,
e
where the infimum is taken over all piecewise smooth curves connecting the points
x, y ∈ M , is called generalized distance between x and y. It is a symmetric function
in x, y which satisfies the triangular inequality. The first metric axiom is not valid
in general. Note, however, that (6) implies, that the sets P −1([0, R]) are compact
for any R > 0.
Recall that in Section 2 we have fixed a point p ∈ M . Set P (x) =
ρ(x, p). Then
(cf. [O2, Lemma 2])
(16)
|P (x) − P (y)| ≤ Q−1/2(x) dist(x, y) +
e
K
2
(dist(x, y))2
for any x, y ∈ M . It follows (cf.
smooth function
PR,ε(x) which approximates P (x) in the sense that
[O2]) that for any R > 0, ε > 0 there exists a
(17)
e
PR,ε(x) − P (x)| ≤ ε,
|
|d
PR,ε(x)| ≤ Q−1/2(x),
lim
ε→0
for any x ∈ P −1([0, R + 1]).
e
e
Assume that ε < 1 so that
P −1
R,ε([0, R]) ⊂ P −1([0, R + 1]). Let us define a
piecewise smooth function PR,ε(x) on M by the formula
(18)
PR,ε(x) =
By (17), the inequality
e
PR,ε(x)
(
R
e
if
PR,ε(x) ≤ R;
outside the set
e
PR,ε(x) ≤ R.
e
(19)
|dPR,ε(x)| ≤ Q−1/2(x)
lim
ε→0
holds almost everywhere on M .
Recall from Section 5, that the statement of Theorem A is equivalent to equality
0 ) and consider the following approximation of the integral (7)
(7). Fix α, β ∈ D(H ∗
(20)
IR,ε =
1 −
PR,ε
R
ZM (cid:18)
(cid:19)
(cid:0)
Hα ∧ ∗β − α ∧ ∗Hβ
=
1 −
ZM (cid:18)
(cid:1)
PR,ε
R
(cid:19)
By the Fatou theorem ([RS, Theorem I.17]), it is enough to show that
∆α ∧ ∗β − α ∧ ∗∆β
.
(cid:0)
(cid:1)
(21)
lim
R→∞
lim
ε→0
IR,ε = 0.
We will need the following “integration by parts” lemma1
Lemma 2. Let φ : M → R be a smooth function with compact support. Then
(22)
φ ∆α ∧ ∗β
ZM
=
φ
dα ∧ ∗dβ + d∗α ∧ ∗d∗β
+
dφ ∧
β ∧ ∗dα − d∗α ∧ ∗β
ZM
(cid:0)
for any α, β ∈ D(H ∗
0 ).
ZM
(cid:1)
(cid:0)
(cid:1)
Note that, by remark 1 after the statement of Lemma 1, all the integrals in (22)
have sense.
Proof. Recall that d∗u = (−1)|u| ∗−1 d ∗ u where |u| denotes the degree of the
differential form u. Hence, if |u| = |v| − 1, then
(23)
φdu ∧ ∗v = φu ∧ ∗d∗v − dφ ∧ u ∧ ∗w + d (φu ∧ ∗v)
Substituting into (23) first u = d∗α, v = β and then u = β, v = dα we obtain
φdd∗α ∧ ∗β = −dφ ∧ d∗α ∧ ∗β + φd∗α ∧ ∗d∗β + d (φd∗α ∧ ∗β),
φd∗dα ∧ ∗β = φβ ∧ ∗d∗dα = dφ ∧ β ∧ dα + φdα ∧ ∗dβ − d (φβ ∧ ∗dα).
In the last equality we used that u ∧ ∗v = v ∧ ∗u for any differential forms u, v of
the same degree. Summing the above equations, integrating over M and using the
Stokes theorem we get (22). (cid:3)
Using definition (20) of IR,ε and Lemma 2 we obtain
(24)
IR,ε =
1
R
ZM
dPR,ε ∧
β ∧ ∗dα − d∗α ∧ ∗β − α ∧ ∗dβ + d∗β ∧ ∗α
.
(cid:0)
Let dµ(x) denote the Riemannian density on M . For any ξ ∈
denote by |ξ| its norm with respect to the scalar product on
by the Riemannian structure on M . Then
V
V
(cid:1)
k(T ∗M ) ⊗ C we
•(T ∗M ) ⊗ C induced
(25)
|hα, βi| ≤
|α ∧ ∗β| dµ(x) ≤
|α| |β| dµ(x) ≤ kαk kβk
ZM
ZM
1I learned this lemma from M. Shubin.
for any α, β ∈ L2Ω•(M ).
Let us estimate the behavior of the right hand side of (24) as ε → 0. For the
first term we obtain
(26)
lim
ε→0
1
R
(cid:12)
(cid:12)
(cid:12)
(cid:12)
ZM
dPR,ε ∧ β ∧ ∗dα
≤
|dPR,ε| |d∗α| |β| dµ(x)
1
R
lim
ε→0
ZM
|Q−1/2d∗α| |β| dµ(x) ≤
≤
1
R
(cid:12)
(cid:12)
(cid:12)
(cid:12)
ZM
kQ−1/2d∗αk kβk
R
.
In the second inequality in (26) we used the estimate (19). The last inequality in
(26) follows from Lemma 1.
Analogously, one can estimate the other terms in the right hand side of (24).
That proves (21) and Theorem A. (cid:3)
References
[BS]
[BFS] M. Braverman, M. Farber, M. Shubin, The Novikov-Bott inequalities on a manifold with
F. A. Berezin, M. A. Shubin, The Schr¨odinger equation, Kluwer, Dordrecht, 1991.
a cylindrical ends, In preparation.
[Ga1] M. P. Gaffney, The Harmonic operators for exterior differential forms, Proc. Nat. Acad.
Sci. USA 37 (1951), 48–50.
[Ga2] M. P. Gaffney, A special Stokes’s theorem for complete Riemannian manifolds, Ann. of
[Le]
[O1]
[O2]
[RB]
[RS]
[Se]
[Sh]
[Wa]
Math. 60 (1954), 140–145.
B. M. Levitan, On a theorem of Titchmarsh and Sears, Usp. Math. Nauk 16 (1961),
175–178.
I. M. Oleinik, On the essential self-adjointness of the Schr¨odinger operator on a complete
Riemannian manifold, Mathematical Notes 54 (1993), 934–939.
I. M. Oleinik, On the connection of the classical and quantum mechanical completeness
of a potential at infinity on complete Riemannian manifolds, Mathematical Notes 55
(1994), 380–386.
F. S. Rofe-Beketov, Self-adjointness conditions for the Schr¨odinger operator, Mat. Za-
metki 8 (1970), 741–751.
M. Reed, B. Simon, Methods of modern mathematical physics, Vol. I, II, Academic
Press, London, 1978.
D. B. Sears, Note on the uniqueness of Green’s functions associated with certain differ-
ential equations, Canadian J. Math. 2 (1950), 314–325.
M. A. Shubin, Spectral theory of elliptic operators on non-compact manifolds, Ast´erisque
207 (1992), 37–108.
F. W. Warner, Foundations of differentiable manifolds and Lie groups, Graduate Texts
in Mathematics, Springer-Verlag, New-York, Berlin, Heidelberg, Tokyo, 1983.
School of Mathematical Sciences, Tel-Aviv University, Ramat-Aviv 69978, Israel
E-mail address: [email protected]
|
ai_researcher | 1 | Sentinel_Lymph_Node_Mapping_by_Retroperitoneal_vNOTES_for_Uterus-Confined_Malignancies_A_Standardized_10-Step_Approach.pdf | 1
Deep Learning Provides Rapid Screen for Breast Cancer Metastasis with
Sentinel Lymph Nodes
Kareem Allam1, Xiaohong Iris Wang1, Songlin Zhang1, Jianmin Ding1, Kevin Chiu2,
Karan Saluja1, Amer Wahed1, Hongxia Sun1, Andy N.D. Nguyen1
1Department of Pathology and Laboratory Medicine
2Medical School
University of Texas Health Science Center-Houston, Medical School, Texas 77030
ABSTRACT
Deep learning has been shown to be useful to detect breast cancer metastases by analyzing whole
slide images (WSI) of sentinel lymph nodes (SLNs); however, it requires extensive scanning and
analysis of all the lymph node slides for each case. Our deep learning study focuses on breast
cancer screening with only a small set of image patches from any SLN to detect changes in
tumor environment and not in the tumor itself. This study involves breast pathologists in our
department and uses our in-house breast cancer cases and WSI scanners. We design a
convolutional neural network in the Python language to build a diagnostic model for four
diagnostic categories (macrometastasis, micrometastasis, isolated tumor cells, and negative
metastasis). SLNs with macrometastasis and micrometastasis are defined as positive cases; while
those with isolated tumor cells only or true negative for metastatic tumor cells are defined as
negative cases. We obtained WSIs of Hematoxylin and Eosin-stained slides from 34 cases with
near equal distribution in 4 diagnostic categories. A total of 2720 image patches, from which
2160 (79%) were used for training, 240 (9%) for validation, and 320 (12%) for testing.
Interobserver variation was also examined among 3 users. The test results showed excellent
diagnostic results: accuracy (91.15%), sensitivity (77.92%), specificity (92.09%), positive
predictive value (90.86%), and negative predictive value (80.66%). No significant variation in
results was observed among the 3 observers. This preliminary study provided a proof of concept
for incorporating automated metastatic screen into the digital pathology workflow to augment the
pathologists’ productivity. Our approach is unique since it provides a very rapid screen rather
than an exhaustive search for tumor in all fields of all sentinel lymph nodes.
Key Words: Deep Learning, Whole Slide Imaging, Breast Cancer, Sentinel Lymph Nodes,
Metastasis, Rapid Screen
Corresponding Author:
Andy N.D. Nguyen, MD, MS
Department of Pathology and Laboratory Medicine
University of Texas Health Science Center-Houston, Medical School
6431 Fannin Street MSB 2.292
Houston, Texas 77030
Telephone: (713) 500-5337
Fax: (713) 500-0712
Email: [email protected]
2
INTRODUCTION
In surgery for a patient with breast cancer, the surgeon finds and removes the first lymph node(s)
to which a tumor is likely to spread (called SLNs). To do this, the surgeon injects a radioactive
substance and/or a blue dye into the tumor, the area around it, or the area around the nipple.
Lymphatic vessels will carry these substances along the same path that the cancer would take.
The first lymph nodes that dye or radioactive substance travels to are the SLNs. The evaluation
of breast SLNs is an important component of treatment. Patients with a SLN positive for
metastatic cancer will receive a more aggressive clinical management, including axillary lymph
nodes dissection.
The manual microscopic examination of SLNs is time-consuming and laborious, particularly in
cases where the lymph nodes are negative for cancer or contain only small foci of metastatic
cancer [1]. SLNs can be grouped into two types: positive for metastasis or negative for
metastasis. Of the ones that are positive for metastasis, they can have macrometastasis (tumor
region of at least 2.0 mm) or micrometastasis (tumor region of at least 200 cells or with size
between 0.2 mm and 2.0 mm). Of the ones that are negative, they can be truly negative, or have
isolated tumor cells (ITC) only, which is a tumor region of up to 200 cells and/or smaller than
0.2 mm [2]. Potential morphologic features for metastasis include pleomorphic nuclei and some
features of the tumor microenvironment, including lymphocytic infiltrates in the stroma, the
sinus, and follicular hyperplasia [3].
Due to the large number of SLNs to screen for breast cancer metastasis, histopathologic
screening often presents a challenge to the pathologists. An automated diagnosis using digital
images would be helpful to assist the pathologist in daily work. In this study, we investigate how
automated screening methods can be combined with microscopic examination by pathologists to
achieve better accuracy. We focus on using reactive morphology in non-tumor areas of lymph
nodes to predict positive metastasis. To analyze slides by automated techniques, it is first
necessary to scan the slide into the computer’s data storage. This process is called whole slide
imaging (WSI). Techniques of WSI involve scanning and compressing the images before they
are analyzed [4]. WSI offers many advantages such as ease of slide sharing and image analysis
[5].
Previous attempts to digitally classify histologic images were based on specific criteria (such as
nuclear shape, nuclear size, texture, etc.) [3,6]. They turned out not to be successful [7].
Attention has turned to machine learning. Machine learning can be defined as software
algorithms that can learn from and make predictions on data. This gives the software the ability
to learn without being explicitly programmed. There are numerous machine learning methods.
Some examples are decision trees, cluster analysis, support vector machines, random forests,
Bayesian networks, regression analysis, and neural networks [7]. Neural networks consist of
multiple artificial nodes (“neurons”) connected to form a network for prediction/classification
[8]. This is inspired by biological neural networks. Early generations of neural networks used
supervised training, but this has some disadvantages. One disadvantage is that the parameters
(such as the strengths of the connections between the neurons) may not converge, leaving no
solution. Another disadvantage is that it may not scale well.
Deep learning is the most recent and most disruptive method of machine learning; it is based on
neural networks. In 2006, major breakthroughs in deep learning started. One is unsupervised
learning, which allows a network to be fed with raw data (no known outcomes) and discover the
representations needed for detection or classification. Another is the use of multiple layers in the
network, which allows it to extract high-level and complex data representations and avoid some
of the problems of older neural networks. Since such methods perform many operations in
parallel, they can be speeded up by using graphics processing units (GPUs). Studies have been
done to assess the reproducibility of deep learning algorithms by using them to identify the tissue
of origin from 512x512 pixel tiles. The performance of the algorithm was better than that of
pathologists viewing the same tiles [9]. Deep learning techniques, especially third-generation
neural networks called convolutional neural networks (CNN or ConvNet), have quickly become
the state of the art in computer vision [10]. The ventral visual pathway is organized as a
hierarchical series of four interconnected cortical visual areas. Neurons in early
areas, such as area V1, respond to comparatively simple visual features of the retinal image,
while later areas, such as area V4, respond to increasingly complex visual features. The
specialization of receptor cells is incorporated into the design of the CNN as pairs of convolution
operators followed by a pooling layer (Figure 1) [11]. Convolution is an operation in image
processing using filters to modify or detect certain characteristics of an image (such as Smooth,
Sharpen, Intensify, Enhance). In CNNs, it is used to extract features of images. Mathematically,
a convolution is done by multiplying the pixels’ values in an image patch by a filter (kernel)
matrix and then adding them up (Figure 2). This operation is also called a "dot product". By
moving the filter across the input image, one obtains the final output as a modified filtered
image. The CNN consists of interleaved convolutional and max pooling layers and then a final
fully connected layer (Figure 1) [12]. The convolutional layers (C) perform 'feature extraction'
consecutively from the image patch to higher level features. The max pooling layers (S) reduce
image size by subsampling. The last 'fully connected' layers (F) provide prediction.
Convolutional neural networks have been used to generate heat maps of tumors and tumor-
infiltrating lymphocytes [13, 14]. Big companies are analyzing large volumes of data for
business analysis and decisions, using deep learning technology (Google’s search engine, Google
Photo, automobile companies with self-driving cars, and IBM’s Watson).
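To make the convolution operation described above concrete, the short Python sketch below computes each output pixel as the dot product of an image patch with a kernel and slides the kernel across a small grayscale image. NumPy and the example sharpening kernel are assumptions made only for illustration; as in CNN libraries, the kernel is applied without flipping (cross-correlation).

import numpy as np

def convolve2d(image, kernel):
    """Valid 2D convolution: slide the kernel over the image and take the
    dot product of each image patch with the kernel (no padding, stride 1)."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    output = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            patch = image[i:i + kh, j:j + kw]
            output[i, j] = np.sum(patch * kernel)  # the "dot product"
    return output

# Example: a 5x5 grayscale patch and a 3x3 sharpening kernel
image = np.random.rand(5, 5)
sharpen = np.array([[ 0, -1,  0],
                    [-1,  5, -1],
                    [ 0, -1,  0]])
print(convolve2d(image, sharpen).shape)  # (3, 3) filtered output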
The application of deep learning to digital pathology imaging has a promising start; it could
impact personalized diagnostics and treatment. Deep learning has also been considered in
interpreting and integrating multiple sources of information in pathology (histology, molecular,
etc.) [15]. Recent studies have shown promising results in using deep learning to detect breast
cancer in whole slide imaging of SLNs (examples: Camelyon16, ICIAR 2018)[16, 17]. However,
they require extensive scanning and analysis of all the lymph node slides for each case. We
explore how deep learning could be used for breast cancer screening with only a small set of
image patches (5 patches) from any SLN. Our goal is to detect changes in the tumor
microenvironment and not the tumor itself (Figure 3). Our approach is unique since it provides a
very rapid screen rather than an exhaustive search for tumors in all fields of all lymph nodes. We
also set out to examine the feasibility of looking at either negative or positive slides (in the
uninvolved area) to predict metastasis. The tumor microenvironment has been shown to be
important in diagnosing the tumor [18]. We examined three areas of interest: interfollicular
lymphocyte-rich area, follicles, and the sinus (Fig. 3) to see which is best for predicting
metastasis. Previous studies have examined the tumor-infiltrating lymphocytes [19, 20].
Interobserver variation was also examined among different users. We assessed variation in
predictive results with data obtained by 3 users.
Fig. 1 The CNN deep learning model
Fig. 2 Convolution of input with kernel
MATERIALS AND METHODS
Our study was approved by the Institution Review Board at the University of Texas Health
Science Center. We obtained WSIs of SLNs using Motic scanners (Motic Easy Scan, Motic
Instrument, Richmond, BC, Canada) in the pathology department of the University of Texas-
Houston Medical School to obtain our data. The Motic Digital Slide Assistant software (by the
same company) was used to view the WSIs. SnagIt (TechSmith Corp, Okemos, Michigan, USA)
was used to capture and automatically save image patches (100x100 pixels) in files with JPEG
format. Our study includes 34 cases with near equal distribution in 4 diagnostic categories:
1. Macrometastasis: 10 cases
2. Micrometastasis: 8 cases
3. Isolated tumor cells (ITC): 6 cases
4. Negative: 10 cases
A positive WSI and a negative WSI were selected from each positive case, and two negative
WSIs were selected from each negative case to obtain a total of 68 WSIs. For each WSI, 40
image patches were obtained for a total of 68x40 = 2720 image patches. Of the 2720 image
patches, 2160 images (79%) were used for training the model, 240 (9%) were used for
validation, and 320 (12%) for testing. We designed the CNN model using the Python language
together with the TensorFlow and Keras libraries. The model ran on 64-bit Windows 10
Professional edition. Keras allows for parallel computing using graphics processing units (GPUs)
with the Compute Unified Device Architecture (CUDA) by NVIDIA (Santa Clara, CA, USA).
The hardware was 9th Gen Intel® Core™ i7 9700 (8-Core, 12MB Cache, 4.7GHz), 32GB RAM
(DDR4 at 2666MHz), and NVIDIA® GPU (GeForce RTX™ 2070, 8GB GDDR6, 2304 CUDA
cores). Our deep learning model used 14 layers, including convolution, max-pooling, and dense
layers (Figure 1).
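As an illustration of such an architecture, the sketch below builds a 14-layer Keras CNN for 100x100-pixel RGB patches with four output classes (negative, ITC, micrometastasis, macrometastasis). The filter counts, kernel sizes, and dropout rates are illustrative assumptions, not the exact published configuration; only the overall layer types match the description above.

from tensorflow import keras
from tensorflow.keras import layers

def build_model(input_shape=(100, 100, 3), num_classes=4):
    """A 14-layer CNN sketch with convolution, max-pooling, and dense layers."""
    model = keras.Sequential([
        keras.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu"),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(2),
        layers.Conv2D(64, 3, activation="relu"),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(2),
        layers.Conv2D(128, 3, activation="relu"),
        layers.MaxPooling2D(2),
        layers.Flatten(),
        layers.Dense(256, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(64, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(num_classes, activation="softmax"),  # 4 diagnostic categories
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_model()
model.summary()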
RESULTS
We looked at different areas of interest in WSIs to see which one would be of most predictive
value (positive vs negative metastasis): 1. Interfollicular lymphocyte-rich areas, 2. Follicles, and
3. The sinus (Figure 3). The preliminary results indicated that areas containing interfollicular
lymphocytes are of most predictive value (full results not reported in this article). Subsequently,
our study has been focusing on this parameter alone.
The image-by-image accuracy of user 1 was found to be 161/320 = 50.31% (Table 1). When the
diagnoses were grouped by rank (i.e., diagnoses 0 and 1 are considered negative, 2 and 3 are
considered positive), significantly better accuracy was achieved at 275/320 = 85.93% (Table 2).
For each test case, the predicted diagnosis was obtained by combining the predictions for 5 images
(at least 3 of the 5 must agree), a process known as “majority voting” (see examples in Table 3).
This led to a higher accuracy of 92.18% (59 sets/64 sets). When majority voting was used, we
obtained the data in Table 4.
From this, we calculated the accuracy, sensitivity, specificity, positive predictive value (PPV),
and negative predictive value (NPV) for user 1:
Accuracy = 59/64 = 92.18%
Sensitivity = 27/(27+5) = 84.4%
Specificity = 32/(32+0) = 100%
PPV = 27/(27+0) = 100%
NPV = 32/(32+5) = 86.5%
In a similar manner, we calculated those values for the other 2 users. The results are tabulated
in Table 5. No significant variation was observed among the 3 observers. The average results
were as follows: Accuracy = 91.15%, Sensitivity = 77.92%, Specificity = 92.09%, PPV = 90.86%,
NPV = 80.66%.
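A minimal Python sketch of the grouped ranking, majority voting, and metric computations described above is given below; the function names and data layout are assumptions made only for illustration.

from collections import Counter

def grouped(dx):
    """Group ranking: diagnoses 0 (negative) and 1 (ITC) count as negative,
    2 (micro) and 3 (macro) count as positive."""
    return int(dx >= 2)

def majority_vote(predicted_dxs):
    """Case-level verdict from 5 image-level predictions: positive if at
    least 3 of the 5 grouped predictions are positive."""
    votes = Counter(grouped(dx) for dx in predicted_dxs)
    return int(votes[1] >= 3)

def metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity, specificity, PPV, and NPV from a 2x2 table."""
    return {
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Example: the second set in Table 3 (observed dx=2, predictions 2, 2, 0, 0, 2)
print(majority_vote([2, 2, 0, 0, 2]))        # 1 -> predicted positive (correct)
# User 1, case-level counts from Table 4: TP=27, FP=0, TN=32, FN=5
print(metrics(tp=27, fp=0, tn=32, fn=5))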
Fig. 3 Areas of interest in WSI
Table 1. Image-by-image accuracy

                                        Predicted Diagnosis
                         Negative (0)   ITC (1)   Micro Met (2)   Macro Met (3)
Observed   Negative (0)       42          36            2               0
Diagnosis  ITC (1)            20          54            3               3
           Micro Met (2)      12           2           65               1
           Macro Met (3)      22           1           57               0

Accuracy: 161/320 = 50.31%
Table 2. Grouped ranking

                                               Predicted Diagnosis
                                       Negative (0) or ITC (1)   Micro Met (2) or Macro Met (3)
Observed   Negative (0) or ITC (1)              152                            8
Diagnosis  Micro Met (2) or Macro Met (3)        37                          123

Accuracy: 275/320 = 85.93%
Table 3. Examples of majority voting process

Result for each image                 Case-by-case (set of 5)
Observed dx=3, Predicted dx=1
Observed dx=3, Predicted dx=0
Observed dx=3, Predicted dx=2         2/5 → Incorrect
Observed dx=3, Predicted dx=0
Observed dx=3, Predicted dx=2

Observed dx=2, Predicted dx=2
Observed dx=2, Predicted dx=2
Observed dx=2, Predicted dx=0         3/5 → Correct
Observed dx=2, Predicted dx=0
Observed dx=2, Predicted dx=2

Observed dx=1, Predicted dx=1
Observed dx=1, Predicted dx=1
Observed dx=1, Predicted dx=1         5/5 → Correct
Observed dx=1, Predicted dx=0
Observed dx=1, Predicted dx=0

Observed dx=0, Predicted dx=1
Observed dx=0, Predicted dx=2
Observed dx=0, Predicted dx=0         4/5 → Correct
Observed dx=0, Predicted dx=1
Observed dx=0, Predicted dx=1
Table 4. Data for majority voting with group ranking for user 1.

                     Predicted negative   Predicted positive
Observed negative           32                    0
Observed positive            5                   27

Accuracy: 59/64 = 92.18%
Table 5. Accuracy, sensitivity, specificity, PPV, and NPV for all 3 users.

User     Specificity   Sensitivity   Accuracy   PPV     NPV
User 1      95            76.88        92.19    93.89   80.42
User 2      89.38         78.75        87.5     88.11   80.79
User 3      91.88         78.12        93.75    90.58   80.77
Means       92.09         77.92        91.15    90.86   80.66
DISCUSSION
Deep learning has been shown to be useful for the identification of breast cancer metastases by
analyzing whole sections of slide images of SLNs [12, 21]. Our study focuses on breast
cancer screening using deep learning with only a small set of image patches from any SLN
(positive or negative) to detect changes in the tumor microenvironment and not the tumor itself.
Our approach is unique since it provides a very rapid screen rather than an exhaustive search for
tumors in all fields of all lymph nodes. We obtain excellent predictive results for cancer
metastasis in this study, which provide a proof of concept for incorporating automated breast
cancer metastatic screen into the digital pathology workflow to potentially augment the
pathologists’ productivity. This could have a significant impact on health economics.
Some limitations of this study are:
1. The model was only validated on one hardware platform (Motic scanner),
2. Representative images require preselection of lymphocyte-rich areas,
3. Lack of explicit diagnostic criteria (inherent to deep learning).
Our preliminary study nevertheless provided a proof of concept for incorporating an automated
breast cancer screen based on digital microscopic images into the pathology workflow to augment
pathologists' quality assurance (QA). Future studies will need to (a) include more hardware platforms and many
more cases for training and validation, and (b) use automated segmentation of WSIs for
lymphocyte-rich areas.
CONCLUSION
We obtained excellent predictive results for cancer metastasis from this study: 91% accuracy,
78% sensitivity, and 92% specificity using a set of 5 random image patches (100x100 pixels)
from each test case. There is a potential role for this model in clinical work as a QA tool. If a
case is positive by histology, a final diagnosis of metastasis can readily be made. For cases that
are negative by histology, our model can be used to screen for metastasis. If the screen is
negative, a final diagnosis of negative metastasis can be made, and if the screen is positive, the
slide can be re-examined to either find the metastases or to make a final diagnosis of negative
metastasis if none is found. In this way, the model serves as an extra checking step to help detect
metastases that otherwise would be missed with just manual examination.
REFERENCES
1. Wang, Dayong, et al. "Deep learning for identifying metastatic breast cancer." arXiv preprint
arXiv:1606.05718 (2016).
2. Maguire, Aoife, and Edi Brogi. "Sentinel lymph nodes for breast carcinoma: a paradigm
shift." Archives of pathology & laboratory medicine 140.8 (2016): 791-798.
3. Seidl, Maximilian, et al. "Morphology of immunomodulation in breast cancer tumor draining
lymph nodes depends on stage and intrinsic subtype." Scientific Reports 8.1 (2018): 1-12.
4. Zarella, Mark D., et al. "A practical guide to whole slide imaging: a white paper from the
digital pathology association." Archives of pathology & laboratory medicine 143.2 (2019): 222-
234.
5. Aeffner, Famke, et al. "Introduction to digital image analysis in whole-slide imaging: a white
paper from the digital pathology association." Journal of pathology informatics 10.1 (2019): 9.
6. Choras, Ryszard S. "Feature extraction for CBIR and biometrics applications." 7th WSEAS
International Conference on Applied Computer Science. Vol. 7. 2007.
7. Marsland, Stephen. Machine learning: an algorithmic perspective. Chapman and Hall/CRC,
2011.
8. Mitchell, Tom M. Machine learning. New York: McGraw-Hill, 1997.
9. Bizzego, Andrea, et al. "Evaluating reproducibility of AI algorithms in digital pathology with
DAPPER." PLoS computational biology 15.3 (2019): e1006269.
10. Roy, Kaushiki, et al. "Patch-based system for classification of breast histology images using
deep learning." Computerized Medical Imaging and Graphics 71 (2019): 90-103.
11. Roy, Sudipta, and Anirban Choudhury. "On the intersection of deep learning and chest
radiography: Background and Prospects." Available at SSRN 3861229 (2019).
12. El Achi, Hanadi, et al. "Automated diagnosis of lymphoma with digital pathology images
using deep learning." Annals of Clinical & Laboratory Science 49.2 (2019): 153-160.
13. Le, Han, et al. "Utilizing automated breast cancer detection to identify spatial distributions of
tumor-infiltrating lymphocytes in invasive breast cancer." The American journal of pathology
190.7 (2020): 1491-1504.
14. Amgad, Mohamed, et al. "Report on computational assessment of tumor infiltrating
lymphocytes from the International Immuno-Oncology Biomarker Working Group." NPJ breast
cancer 6.1 (2020): 1-13.
15. Fuchs, Thomas J., and Joachim M. Buhmann. "Computational pathology: challenges and
promises for tissue analysis." Computerized Medical Imaging and Graphics 35.7-8 (2011): 515-
530.
16. Rakhlin, Alexander, et al. "Deep convolutional neural networks for breast cancer histology
image analysis." international conference image analysis and recognition. Springer, Cham,
2018.
17. Kovalev, V., A. Kalinovsky, and V. Liauchuk. "Deep Learning in Big Image Data: Histology
image classification for breast cancer diagnosis." Big Data and Advanced Analytics, Proc. 2nd
International Conference, BSUIR, Minsk. sn, 2016.
18. Manoharan, Malini, et al. "A computational approach identifies immunogenic features of
prognosis in human cancers." Frontiers in immunology 9 (2018): 3017.
19. Lu, Zixiao, et al. "Deep-learning–based characterization of tumor-infiltrating lymphocytes in
breast cancers from histopathology images and multiomics data." JCO clinical cancer
informatics 4 (2020): 480-490.
20. Kim, So-Woon, et al. "Distribution pattern of tumor infiltrating lymphocytes and tumor
microenvironment composition as prognostic indicators in anorectal malignant melanoma."
Modern Pathology 34.1 (2021): 141-160.
21. Maguire, Aoife, and Edi Brogi. "Sentinel lymph nodes for breast carcinoma: an update on
current practice." Histopathology 68.1 (2016): 152-167.
|
ai_researcher | 1 | Optimised_Realistic_Test_Input_Generation.pdf | 4
2
0
2
r
a
M
2
2
]
E
S
.
s
c
[
1
v
5
6
0
5
1
.
3
0
4
2
:
v
i
X
r
a
Testing for Fault Diversity in Reinforcement Learning
Quentin Mazouni
Simula Research Laboratory
Oslo, Norway
[email protected]
Arnaud Gotlieb
Simula Research Laboratory
Oslo, Norway
[email protected]
ABSTRACT
Reinforcement Learning is the premier technique to approach se-
quential decision problems, including complex tasks such as driving
cars and landing spacecraft. Among the software validation and
verification practices, testing for functional fault detection is a con-
venient way to build trustworthiness in the learned decision model.
While recent works seek to maximise the number of detected faults,
none consider fault characterisation during the search for more
diversity. We argue that policy testing should not find as many
failures as possible (e.g., inputs that trigger similar car crashes) but
rather aim at revealing as informative and diverse faults as possible
in the model. In this paper, we explore the use of quality diversity
optimisation to solve the problem of fault diversity in policy testing.
Quality diversity (QD) optimisation is a type of evolutionary algo-
rithm to solve hard combinatorial optimisation problems where
high-quality diverse solutions are sought. We define and address
the underlying challenges of adapting QD optimisation to the test
of action policies. Furthermore, we compare classical QD optimisers
to state-of-the-art frameworks dedicated to policy testing, both in
terms of search efficiency and fault diversity. We show that QD
optimisation, while being conceptually simple and generally appli-
cable, finds effectively more diverse faults in the decision model,
and conclude that QD-based policy testing is a promising approach.
CCS CONCEPTS
• Software and its engineering → Software verification and
validation; • Computing methodologies → Reinforcement
learning.
KEYWORDS
Software Testing, Reinforcement Learning, Quality Diversity
ACM Reference Format:
Quentin Mazouni, Helge Spieker, Arnaud Gotlieb, and Mathieu Acher. 2024.
Testing for Fault Diversity in Reinforcement Learning. In 5th ACM/IEEE
Permission to make digital or hard copies of all or part of this work for personal or
classroom use is granted without fee provided that copies are not made or distributed
for profit or commercial advantage and that copies bear this notice and the full citation
on the first page. Copyrights for components of this work owned by others than the
author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or
republish, to post on servers or to redistribute to lists, requires prior specific permission
and/or a fee. Request permissions from [email protected].
AST ’24, April 15–16, 2024, Lisbon, Portugal
© 2024 Copyright held by the owner/author(s). Publication rights licensed to ACM.
ACM ISBN 979-8-4007-0588-5/24/04. . . $15.00
https://doi.org/10.1145/3644032.3644458
Helge Spieker
Simula Research Laboratory
Oslo, Norway
[email protected]
Mathieu Acher
Univ Rennes, Inria, INSA Rennes, CNRS, IRISA
Rennes, France
[email protected]
International Conference on Automation of Software Test (AST 2024) (AST
’24), April 15–16, 2024, Lisbon, Portugal. ACM, New York, NY, USA, 11 pages.
https://doi.org/10.1145/3644032.3644458
1 INTRODUCTION
In the last decade, Reinforcement Learning (RL) combined with
neural networks (NNs) was shown to be able to effectively solve
complex sequential decision problems in various fields, such as
planning, game playing or system control [10, 21, 28]. Deployment
of such action policies in real-world applications involves strong
software validation and verification practices. Among them, testing,
which trades exhaustiveness for efficiency, helps build trust and
confidence in the decision model. In this regard, a lot of work on
testing learnt policies has recently emerged [14, 18, 29, 31]. Some of
them [18, 31] further investigate the test results after the search.
For example, Pang et al. [18] retrain the policy and Zolfagharian
et al. [31] extract interpretable rules from a Decision Tree to charac-
terise the fault-revealing inputs. However, none of them proposes
to consider how the policy under test solves (or fails) the problem
at each test case and look for diversity. Instead, they all maximise
the number of faults found, whatever they reveal or mean.
Still, fault diversity is important: it helps improve the model, it reveals
the range of possible incorrect decisions (policy explainability), and it
can also serve as a test coverage measure, the latter being especially
difficult to assess for NN-based policies. That way,
stakeholders could accurately assess the safety of the NN-based
policy and eventually build trust and trustworthiness. Fortunately,
an evolutionary search technique known as Quality Diversity (QD,
or Illumination) addresses this very same issue.
Indeed, QD optimisation finds both diverse and high-quality
solutions for a given task. To do so, diversity is not computed in
the search space but rather in a behaviour space that describes how
a solution actually solves the task. Typical QD applications are in
robotics, where the objective is to find the best action policies (i.e.,
that successfully solve the task, like getting out of a maze) while
discovering as many behaviourally different policies as possible,
e.g., how the robot leaves the maze.
In this paper, we propose to address the problem of fault diver-
sity in policy testing with Quality Diversity. In other words, we
investigate if QD optimisation can find diverse faults in trained poli-
cies. Especially, we define and address the underlying challenges of
adapting QD optimisation to policy testing. We compare how two
existing QD optimisers solve the subsequent policy testing task as
a QD problem to a state-of-the-art framework dedicated to policy
testing, both in terms of search efficiency and fault diversity. Our
results show that QD-based testing finds diverse solutions, without
additional test budget cost. The contributions of this paper are thus:
• We propose the first reformulation of policy testing as a
Quality Diversity optimisation problem.
• We implement our method with two classical Illumination
algorithms.
• We compare the two QD-based frameworks to a SOTA frame-
work dedicated to policy testing on three use-cases.
The rest of the paper is organised as follows. Section 2 describes
the current policy testing techniques and positions our study. Sec-
tion 3 introduces Reinforcement Learning for decision making and
Quality Diversity optimisation. Section 4 presents the challenges to
solve the problem of fault diversity in policy testing in the Quality
Diversity framework. Section 5 describes the empirical evaluation
of our QD-based policy testing framework, implemented with 2 QD
optimisers, against a dedicated policy testing technique. We discuss the
current limitations in Section 6. Eventually, Section 7 concludes the
paper and draws some perspectives.
2 RELATED WORK
Policy testing has been recently addressed in numerous ways [15].
In the following, we describe the different testing objectives studied
alongside the corresponding testing techniques proposed.
Zolfagharian et al. [31] test RL policies with a genetic algo-
rithm [9], where test cases are individuals of a population. Here, the
individuals are episodes and their genes, state-action pairs. Start-
ing from historical data (e.g., training data of the policy tested),
the search consists of evolving the population to generate faulty
episodes which are likely to be consistent with the policy under test
(i.e., the trajectories match policy’s decision). While this approach
lets it avoid executing the policy, it also requires that resulting
faulty episodes be validated with respect to the policy. Lu et al. [14]
and Ul Haq et al. [29] investigate active policy testing, which con-
sists of dynamically changing the simulator (the policy under test
interacts with) during executions. As such, the test cases are de-
fined as the sequences of the environmental changes applied to
the simulator. They both turn the search problem into a RL task,
where an agent learns to perturb the simulation to generate hazard
decisions in the policy. More precisely, Lu et al. [14] address the
case of an autonomous driving system (where the possible modelled
perturbations include changing weather conditions and dynamics
of pedestrians and vehicles, e.g., their position or velocity), while
Ul Haq et al. [29] consider the case of multiple testing requirements
(i.e., many-objective search). Tappler et al. [24] look for states that
trigger unsafe decisions, called boundary states. The crucial differ-
ence with all the other methodologies is that they do not look for
those boundary states but, rather, retrieve the latter from the state
space explored by an initial backtracking-based, depth-first search
for a solution of the decision-making problem. This approach can
thus be computationally expansive (depending on the difficulty of
the decision task) and provides no guarantee of finding boundary
states from the search.
Fuzzing frameworks have also been recently proposed [4, 18, 22].
Pang et al. [18] consider seeds as initial situations of the decision-
making task. Similarly to genetic searches, the search space is ex-
plored by mutating used seeds. Even though the search does not
look for diversity, it accounts for novelty by maintaining the pool
of seeds which produce uncovered state sequences. Precisely, they
compute the likelihood of the latter (collected after each execu-
tion) with Gaussian Mixture Models [6] and keep a seed only if the
likelihood is lower than a defined threshold. As for [22] and [4],
they investigate the bug confirmation problem for NN-based action
policies, which corresponds to finding avoidable failures. To bypass
the oracle problem in such a situation, Steinmetz et al. [22] use
heuristics known in classical AI planning, and Eniser et al. [4] rely
on metamorphic testing. To do so, the authors design the metamor-
phic operations around state relaxation, a well-studied concept also
taken from the AI planning community. Their idea is that a relaxed
version of a given environment should represent an easier problem
than the original one. Therefore, the policy under test contains a
bug if it solves the original problem but fails to solve its “relaxed”
counterpart. Besides, Tian et al. [25] and Pei et al. [19] consider
white-box testing of image input-based NNs. In their work, the
objective is to find behaviour inconsistencies, which is approached
as a neuron-coverage-guided greedy search. They differ in their
test oracles: Tian et al. [25] use metamorphic testing [1] to check
that the model tested outputs the same action for morphed images,
while Pei et al. [19] rely on differential testing [16] (i.e., several NNs
are simultaneously tested and inconsistencies are detected when
their decisions differ).
3 BACKGROUND
In this section, we introduce Reinforcement Learning to approach
sequential decision-making and Quality Diversity optimisation.
3.1 Reinforcement Learning for sequential
decision-making
Informally, sequential decision-making refers to tasks that can be
solved by any decision model in a step-by-step manner and which
accounts for the dynamics of the environment [5]. This definition
is very broad, and in the following we consider tasks that involve
a single decision entity. As such, solving a sequential decision-
making problem consists of initialising the world (or environment)
to a particular starting situation and letting the decision model
(or agent) interact with the former (e.g., simulations) through a
step-wise observation-decision-action process until a final state is
reached: if the latter is satisfying, the agent has solved the task;
otherwise, the agent fails. A typical example is the case of path
planning in robotics, where the agent is expected to safely reach
a given position from an initial situation. Formally, those kind of
problems are formulated as Markov Decision Processes (MDPs),
defined as 4-tuples ⟨𝑆, 𝐴, 𝑅,𝑇 ⟩ where:
• 𝑆 is a set of states. Referred to as the observation space, it
corresponds to what the agent can know about its environ-
ment.
• 𝐴 is the set of actions. Referred to as the action space, it
specifies how the agent acts on its environment.
• 𝑅 : 𝑆 × 𝐴 ↦→ R is the reward function. It reflects the agent’s
performance by associating any pair of state-action with a
numerical value.
• 𝑇 : 𝑆 × 𝐴 × 𝑆 ↦→ [0, 1] is the transition function, which is a
probability distribution over the observation and the action
space. It depicts which state the environment will transit to
after an action is executed. The function is not known by
the agent and governs the environment’s dynamic.
Solutions to MDPs are called policies and denoted 𝜋; they are functions
mapping from the observation space 𝑆 to the action space 𝐴.
A common approach to training policies is Reinforcement Learning,
a sub-field of Machine Learning which consists in learning from
rewards/penalties [23]. Precisely, RL learns an optimal, or nearly-
optimal, policy 𝜋 : 𝑆 ×𝐴 ↦→ [0, 1] that maximises the total expected
discounted cumulative reward 𝑅𝑡 = ∑𝑡>0 𝛾^(𝑡−1) 𝑟𝑡, where 0 < 𝛾 ≤ 1
is the discount factor. This parameter controls how the agent takes
into account the future rewards. Precisely, a small value encourages
the agent to maximise short-term rewards, whereas high values
(usually close to 1) lead the agent to focus on maximising long-term
rewards.
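As a concrete illustration of this step-wise observation-decision-action loop and of the discounted return, the sketch below rolls out a deterministic policy in a Gymnasium environment. The Gymnasium API, the environment name/version, and the policy callable are assumptions made only for illustration.

import gymnasium as gym

def rollout(env_name, policy, seed=0, gamma=0.99):
    """Run one episode with a deterministic policy pi: S -> A and return
    the discounted cumulative reward R = sum_{t>0} gamma^(t-1) * r_t."""
    env = gym.make(env_name)
    obs, _ = env.reset(seed=seed)
    discounted_return, discount, done = 0.0, 1.0, False
    while not done:
        action = policy(obs)                      # the agent's decision
        obs, reward, terminated, truncated, _ = env.step(action)
        discounted_return += discount * reward
        discount *= gamma
        done = terminated or truncated
    env.close()
    return discounted_return

# Example with a trivial policy that always fires the main engine (action 2)
print(rollout("LunarLander-v2", policy=lambda obs: 2))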
In this work, we consider black-box testing of deterministic decision
models, i.e., without access to the internals of the policy or the
simulator, which changes the previous definition to 𝜋 : 𝑆 ↦→ 𝐴.
In the following, we introduce Quality Diversity optimisation.
3.2 Quality Diversity
Informally, Quality Diversity optimisation stems from the evolu-
tionary algorithms, but provides a shift in methodology by not only
considering the maximisation of a fitness function, but explicitly
targeting the discovery of diverse solutions as characterised by
their behaviour, i.e. the way a problem is solved. Formally, it as-
sumes the objective function 𝑓 now returns a fitness value 𝑓𝑥 and a
behavioural descriptor 𝑏𝑥 for any parameter 𝑥, i.e., 𝑓𝑥 , 𝑏𝑥 ← 𝑓 (𝑥).
As previously mentioned, the behavioural descriptor describes how
the solution solves the problem, while the fitness value quantifies
how well it is solved. If we assume that we want to maximise the
fitness function and define the behaviour space as 𝐵, then the goal
of QD optimisation is to find for each point 𝑏 ∈ 𝐵 the parameters
𝑥 ∈ 𝑋 with the maximum fitness value:
∀𝑏 ∈ 𝐵, 𝑥∗ = arg max𝑥 𝑓𝑥 | 𝑏𝑥 = 𝑏
The goal of QD is thus to return a collection of behaviourally distinct
solutions, whose individuals are the best performers (also called
elites) in their local behaviour area (or behaviour niches).
The first method in the QD paradigm is Novelty Search (NS) [11,
12], which entirely abandons the search for the objective function
and only explores behavioural novelty. While later developments
in QD reconsider the inclusion of the objective function into the
search by competition of behaviourally similar solutions with dif-
ferent quality [7, 13], we see a parallel between the pure search
for novelty and fault-triggering inputs for policies. In the search
for fault-triggering inputs, there is no clear objective function to
maximise besides the binary test verdict of a successful or failing
episode. There is no indication of the closeness to a failure, even
though some existing works introduce surrogate objectives to help
guide the search [31]. We therefore consider novelty search as a
potentially interesting approach to explore the behaviour space
without the necessity for guidance through an objective function.
In its recent variants, QD tends to discover as many diverse be-
haviours as possible, while improving their elite, i.e., the individual
with the highest quality in that niche. The result of QD optimisa-
tion is a set of solutions, which is usually referred to as collection,
archive or container and is structured through the niches of the
behaviour space. During the optimisation process, the collection is
filled with any candidate whose behaviour is novel enough and lets
the latter compete with the current collection’s elite if its behaviour
is deemed to belong in a niche already covered. As such, the col-
lection of any QD algorithm structures its search, since it defines
the behavioural neighbourhood of the parameters evaluated (i.e.,
how to consider if two solutions have close enough behaviours to
belong to the same behavioural area or niche).
In the QD literature, one can distinguish two types of collections,
namely structured and unstructured ones. In the former case, the
behaviour space is discretised with a segmentation pattern into
a grid, and each cell represents a niche (or behaviour descriptor
location). This approach is for example implemented in the MAP-
Elites algorithm [17], one of the most used QD optimisers. Here,
the collection is a regular grid and the search aims at filling ev-
ery cell of that grid with the best possible solution. On the other
hand, unstructured archives do not define niche locations before the
optimisation starts and relies on distance thresholds and/or local
densities to assess the behavioural similarities and neighbourhoods
between solutions.
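To illustrate the structured (grid) container, here is a minimal MAP-Elites-style archive sketch: a 2D behaviour space is discretised into a regular grid, and a candidate replaces the current elite of its cell only if it has a better fitness. The class and method names are illustrative assumptions rather than an existing library API.

import numpy as np

class GridArchive:
    """A structured QD container: one elite per cell of a regular grid
    discretising a 2D behaviour space."""

    def __init__(self, bins=(50, 50), lows=(0.0, 0.0), highs=(1.0, 1.0)):
        self.bins = np.array(bins)
        self.lows, self.highs = np.array(lows), np.array(highs)
        self.cells = {}  # cell index -> (solution, behaviour, fitness)

    def _cell(self, behaviour):
        # Map a behaviour descriptor to its niche (grid cell) index.
        ratio = (np.array(behaviour) - self.lows) / (self.highs - self.lows)
        idx = np.clip((ratio * self.bins).astype(int), 0, self.bins - 1)
        return tuple(idx)

    def attempt_to_add(self, solution, behaviour, fitness):
        """Add the candidate if its niche is empty or if it beats the elite."""
        cell = self._cell(behaviour)
        elite = self.cells.get(cell)
        if elite is None or fitness > elite[2]:
            self.cells[cell] = (solution, behaviour, fitness)
            return True
        return False

    def coverage(self):
        return len(self.cells)  # number of behaviour niches filled

archive = GridArchive()
archive.attempt_to_add(solution=[0.3], behaviour=(0.2, 0.7), fitness=1.5)
print(archive.coverage())  # 1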
Cully and Demiris [2] propose a unified definition of QD opti-
misation that we follow to introduce the general, high-level op-
timisation process in QD (see Algorithm 1). In their formulation,
algorithms vary depending on (i) the type of container (i.e., how the
data is gathered and ordered into a collection); (ii) the selection op-
erator (i.e., how the solutions to be modified in the next generation
are selected) and (iii) the type of scores computed for the container
and the selection operator to work. Given those parameters and
operators, the execution of a QD algorithm follows a 4-step loop
until the budget is consumed:
• A new batch of candidates is produced from the individuals
selected by the selection operator.
• The candidates are evaluated and their performance and
descriptors, recorded.
• Each candidate is possibly added to the container (according
to the solutions already in the collection).
• The scores maintained by the container are eventually up-
dated (if needed). Common scores are the novelty, the local
competition, or the curiosity score.
For more information about the different variants studied and the
most widely used scores, see [2].
4 QD OPTIMISATION FOR POLICY TESTING
This section describes the challenges in optimising policy testing
for fault diversity with Quality Diversity and how we address them.
We consider a black-box setting, where neither the 𝑀𝐷𝑃 nor the
policy under test 𝜋 can be directly inspected, but only inputs and
outputs can be observed.
Algorithm 1 4-step QD-algorithm for Policy Testing
Input: 𝑁 : iteration budget, 𝑁𝑖𝑛𝑖𝑡 : number of initial iterations
Output: 𝐴: archive of diverse and high-performing solutions
1:  𝐴 ← ∅                                        ⊲ empty archive of solutions ⊳
2:  for 𝐼 ← 0 to 𝑁 do
3:      ⊲ start with random parents and offspring ⊳
4:      if 𝐼 < 𝑁𝑖𝑛𝑖𝑡 then
5:          𝑋𝑝𝑎𝑟𝑒𝑛𝑡𝑠 ← random_solutions()
6:          𝑋𝑜𝑓𝑓𝑠𝑝𝑟𝑖𝑛𝑔 ← random_solutions()
7:      else
8:          ⊲ 1. select individuals from the archive and/or the previous batch ⊳
9:          𝑋𝑝𝑎𝑟𝑒𝑛𝑡𝑠 ← select(𝐴, 𝑋𝑜𝑓𝑓𝑠𝑝𝑟𝑖𝑛𝑔)
10:         ⊲ 2. create randomly modified copies of 𝑋𝑝𝑎𝑟𝑒𝑛𝑡𝑠 (mutation and/or crossover) ⊳
11:         𝑋𝑜𝑓𝑓𝑠𝑝𝑟𝑖𝑛𝑔 ← variation(𝑋𝑝𝑎𝑟𝑒𝑛𝑡𝑠)
12:     end if
13:     for 𝑥 ∈ 𝑋𝑜𝑓𝑓𝑠𝑝𝑟𝑖𝑛𝑔 do
14:         ⊲ 3. record the behaviour and quality of the candidate ⊳
15:         𝑏𝑥, 𝑓𝑥 ← 𝑓(𝑥)
16:         𝑥.store_score(𝑏𝑥, 𝑓𝑥)
17:         ⊲ 3*. initialise 𝑀𝐷𝑃 with 𝑥 and characterise 𝑥 with (1) the behaviour and fitness of 𝜋; (2) the test oracle result ⊳
18:         𝑏𝑥, 𝑓𝑥, 𝑜𝑥 ← Evaluate(𝑀𝐷𝑃, 𝜋, 𝑥)
19:         𝑥.store_score(𝑏𝑥, 𝑓𝑥, 𝑜𝑥)
20:         ⊲ 4. attempt to add the candidate to the archive (local competition) ⊳
21:         attempt_to_add(𝑥, 𝐴)
22:         update_scores(parent(𝑥), 𝐴)          ⊲ parent's scores might be updated ⊳
23:     end for
24:     ⊲ possibly update the scores of all solutions (e.g., curiosity scores) ⊳
25:     update_scores(𝐴)
26: end for
27: return 𝐴
The only assumption we make is that it is possible to instrument the
𝑀𝐷𝑃's simulator with an initial state and a random seed.
Solution Behaviour. Since we aim to find fault-triggering initial
states of the MDP (under which 𝜋 fails), the search space corre-
sponds to the parameter (or input) space of the MDP’s simulator.
The exact definition of such parameter spaces depends on the imple-
mentation of the simulator and the decision-making problem (see
Section 5). Therefore, at first glance, the search space does not come
with any behavioural definition directly: parameters like gravity
or objects’ positions do not behave in a specific way. This is not an
issue though, as we can still use the traditional behaviour space
definition of QD. Indeed, we similarly want to characterise how the
policy under test solves the MDP. The main difference is that in QD,
the solutions evaluated are the policy's parameters (since the goal is
to find good policies), whereas in our reformulation, 𝜋 is the same
and its behaviour depends on the initial scenario described by the
solution evaluated. This allows us to leverage the QD’s literature
richness of behavioural policy analysis and use already proposed
behaviour space definitions (see Section 5) to describe the solutions.
That way, finding diverse solutions will exercise as many different
policy behaviours as possible and, similarly, the fault-triggering
ones will imply diverse hazard decisions.
Solution Quality. The second challenge in the adaption of QD
for policy testing lies in the definition of the solutions’ quality (or
fitness). Indeed, in policy testing the quality of a test case (i.e., the
evaluation of an execution) boils down to the boolean value of the
test oracle (i.e., 𝜋 solves the task or fails), which is not informative
enough. We instead define the quality of solutions as the ac-
cumulated reward of the policy under test (like in common QD
applications, i.e., as if we were finding policies) and define the opti-
misation task of QD with minimisation (i.e., accumulated reward
minimisation).
Assumptions. In this first work, we fix the randomness effects
during the simulation, thus letting the 𝑀𝐷𝑃, its simulator and all
their executions be deterministic. As such, every input test (i.e.,
solution) generates a single trajectory and thus, a single behaviour
(and test result).
QD-based Policy Testing. Thanks to the flexibility of Quality Di-
versity, our proposal to search for fault-triggering simulator inputs
for policy testing with QD involves only a few changes to the
high-level framework proposed by Cully and Demiris [2], as shown
in Algorithm 1. The modifications are highlighted at lines 18-19 and
mostly consist of changing how solutions are evaluated. First, a
solution 𝑥 is used to initialise the 𝑀𝐷𝑃's simulator. Then,
we let 𝜋 solve/fail the decision task and characterise 𝑥 with (1) the
behaviour and fitness of 𝜋; (2) the test oracle result. After the search,
fault-triggering solutions (faults for short) can be retrieved from
the archive by filtering it with the test oracle results.
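A sketch of this modified evaluation step (lines 18-19 of Algorithm 1) is given below: a solution x seeds the simulator, the policy under test is rolled out, and the execution is summarised by a behaviour descriptor, a fitness (the accumulated reward, to be minimised), and the test oracle verdict. The make_env, oracle, and behaviour_descriptor helpers are assumptions made for illustration.

def evaluate(make_env, policy, x, oracle, behaviour_descriptor, seed=0):
    """Evaluate one test input x: initialise the MDP's simulator with x,
    let the policy act until termination, and return the behaviour b_x,
    the fitness f_x (accumulated reward) and the oracle verdict o_x."""
    env = make_env(x)                       # simulator initialised with the solution
    obs, _ = env.reset(seed=seed)           # fixed seed: deterministic execution
    trajectory, accumulated_reward, done = [obs], 0.0, False
    while not done:
        action = policy(obs)
        obs, reward, terminated, truncated, _ = env.step(action)
        trajectory.append(obs)
        accumulated_reward += reward
        done = terminated or truncated
    env.close()
    b_x = behaviour_descriptor(trajectory)  # how the policy solved (or failed) the task
    f_x = accumulated_reward                # quality of the execution
    o_x = oracle(trajectory)                # True if the execution is fault-revealing
    return b_x, f_x, o_x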
5 EXPERIMENTAL EVALUATION
5.1 Research Questions
To evaluate QD optimisation for fault diversity in policy testing,
we conduct experiments to answer three research questions (RQs):
RQ1 How efficient is QD optimisation compared to dedicated
policy testing techniques?
RQ2 How does QD optimisation improve diversity?
RQ3 How does the behaviour space definition impact the perfor-
mance of QD-based policy testing?
Answering the first two research questions will let us assess
the benefits and cost of prioritising diversity. Here, we compare
the number of faults revealed and their diversity among QD-based
testing and testing without consideration of behaviours. Finally,
the last research question investigates the impact of the definition
of the behaviour space, a key configuration parameter for QD opti-
misation. We expect that the number of faults found by QD-based
testing will still be competitive and that it improves fault diversity by
accounting for the behaviour of the policy under test.
5.2 Experiments
To answer the RQs, we conduct experiments with three standard
environments [27]: Lunar Lander, Bipedal Walker and Taxi. We
compare Random Testing with MDPFuzz [18], a recent black-box
policy testing technique for MDPs and two implementations of
our QD-based testing framework. The first one uses the QD opti-
miser MAP-Elites [17], and the second one, Novelty Search [11].
Random Testing will assess the difficulty of the testing task and act
as a baseline to compare the other methodologies. We choose the
policy-dedicated testing framework MDPFuzz since it addresses
complex environments (which has not been shown by other frame-
works such as STARLA [31]) and drives its fuzzing search towards
uncovered state sequences, thus accounting for diversity. Finally,
as part of our QD-based testing framework, we study MAP-Elites
(ME) since it is one of the first and most conceptually simple illumi-
nation algorithms, while Novelty Search (NS) will let us assess the
relevance of accounting for the quality of the executions, as this
algorithm emphasises diversity only.
5.2.1 Environments. The three selected environments are com-
monly used benchmarks in the RL literature.
Lunar Lander. This control problem consists in safely landing
a spacecraft. We chose this environment since it has been used
in QD optimisation and RL. In particular, a behaviour space has
already been studied [26]. The spacecraft always starts at the top
centre of the space and, similarly, the landing pad is always at the
centre of the ground. The initial situations differ in the shape of
the landscape (around the landing pad) and the initial force applied
to the spacecraft. The policy controls the main and orientation
engines of the spacecraft. Precisely, there are four possible actions:
do nothing, fire the left orientation engine, fire the main engine
and fire the right orientation engine.
Bipedal Walker. This problem consists in piloting a 4-joint walker
robot across an uneven landscape composed of obstacles like steps,
pits, and stumps. We chose this environment to follow the evalua-
tion of MDPFuzz [18] (thus enabling a fair comparison), but also
because behaviour spaces have already been proposed [8]. The
impact of these definitions on the results is studied in RQ3. In
this problem, the walker always starts at the same position. The
initial situations differ in the shape of the landscape (positions of
the steps, the stumps and the pits). The action space is continuous.
Precisely, the action of the policy is the motor speed values at the 4
joints of the robot, which are localised at the hip and its knees.
Taxi. In this classical environment, the policy navigates in a
grid world to pick up passengers and safely drop them off at their
destinations [3]. Every test initiates a particular initial situation
as the position of the passenger, the position of the taxi and the
passenger’s destination. The six actions possible by this policy
are the next taxi’s direction (going north, south, east or west) and
interactions with the passenger (pick it up, drop it off). We use a
version of the Taxi environment with an enlarged 18x13 map, which,
unlike the standard 5x5 grid, prevents the simple enumeration of all
the MDP's possible states.
5.3 Metrics
To answer RQ1, we measure what we call the test efficiency as the
number of distinct faults found over time. To answer RQ2, we study
the diversity of testing (i.e., how a test methodology exercises the
policy under test) and diversity of the faults. We consider two met-
rics to measure diversity. First, we compare the behaviour coverage,
that is: how many behaviours are discovered during testing. To do
so, we follow the QD literature and count the number of bins filled
in each result archive. This archive corresponds to the regular grid
used by MAP-Elites. For fault diversity, only the bins filled by at
least one fault-triggering solution are counted. To complement this
method space-based measure, we also analyse the diversity with the
final states of the simulations. The idea of the final state comparison
is to make the result analysis possibly more accessible and not only
do a comparison in the method space (i.e., behaviour space). Indeed,
the definition of the behaviour space is domain-dependent and can
therefore vary. Thus, complementing the behaviour space coverage
with final state diversity analysis will provide us with conclusions
that are not bound to how behaviours are actually computed from
the trajectories. We report final state diversity as the average dis-
tances of the 3 nearest neighbours in the solution sets, since this
metric captures the sparseness of a data set (i.e., if the set is com-
posed of dense points or if the points are smoothly distributed).
Similarly to the first metric, fault diversity only considers the final
states of fault-revealing executions (failure states). Eventually, to
answer RQ3, we compare the effect of four behaviour spaces for
the Bipedal Walker use-case on all the previous metrics mentioned
above.
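The final state diversity metric can be computed as in the following sketch, which reports the average distance of each final state to its 3 nearest neighbours; NumPy and scikit-learn are assumed here only for convenience.

import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_sparseness(final_states, k=3):
    """Average distance of the k nearest neighbours, averaged over all
    final states: higher values indicate a sparser (more diverse) set."""
    states = np.asarray(final_states)
    # k + 1 because the closest neighbour of a point is the point itself.
    nn = NearestNeighbors(n_neighbors=k + 1).fit(states)
    distances, _ = nn.kneighbors(states)
    return distances[:, 1:].mean()

# Example with random 2D final states
print(knn_sparseness(np.random.rand(100, 2)))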
5.4 Implementation
We run all methods with a budget of 5000 tests, and an initialisation
phase of 1000 for MAP-Elites and MDPFuzz (following the evalua-
tion in [18]). For Novelty Search, we use a population size of 100
and let the search iterate for 50 iterations. The result archives used
to collect data rely on a regular grid of 50x50 bins. All the exper-
iments were executed on a Linux machine (Ubuntu 22.04.3 LTS)
equipped with an AMD Ryzen 9 3950X 16-Core processor and 32GB
of RAM. We accounted for randomness effects by repeating all the
experiments with 10 seeds and report the median results as well as
the first and third quantiles. The source code of the experiments is
available online1.
Test oracles. In Lunar Lander, a failure occurs if the lander crashes
into the ground or moves outside the viewport. In Bipedal Walker,
failure occurs if the body of the robot collides with the ground. For
the Taxi environment, a fault occurs in case of an illegal action
(for instance, dropping the passenger even though the taxi is still
empty) or collision (by moving into a wall).
Input/Parameter sampling and mutation. For the Lunar Lander en-
vironment, the shape of the landscape is fixed by our assumption to
consider deterministic environments. As such, the parameter space
is the two-dimensional space [−1000, 1000]2 that describes the pos-
sible initial forces applied to the spacecraft. The mutation operator
slightly perturbs the original parameter (clipped, if needed). For the
Bipedal Walker use-case, we follow the experimental settings of
MDPFuzz [18] for both the parameter space and the mutation oper-
ator. Precisely, the parameter space encodes the type of obstacles
(flat, pit, steps or stump) of the landscape as 15-size vectors whose
values ∈ [0, 1, 2, 3], while the mutation operator randomly changes
at least one of those values (clipped if needed). As mentioned in the
aforementioned paper, the obstacles are sufficiently well away from
1https://github.com/QuentinMaz/QD_Based_Testing_RL
Figure 1: Evolution of the number of fault-triggering solutions found for each framework evaluated. The lines show the median
results over 10 executions, and the shaded areas correspond to the first and third quantiles.
each other so that they can be passed by an optimal policy 𝜋 ∗ (i.e.,
all the problems are solvable). As for the last use-case, the parameter
space encodes the initial position of the taxi and the passenger as
well as its destination. The map is static in this environment. The
mutation operator increments or decrements one value (clipped if
needed). For instance, the taxi would now start from a cell in the
grid world close to the initial one.
Behaviour Spaces. All the behaviour spaces studied are two-
dimensional spaces. For the Lunar Lander use-case, we use the
behavioural definition proposed by [26]. It describes how the policy
lands the spacecraft as its horizontal position and vertical veloc-
ity when it first touches the ground. If the lander moves outside
the viewport, we consider the last values observed. For Bipedal
Walker, we follow a previous work that studies policy behaviour
for this very same use-case [8]. In particular, they define a set of
hand-designed behaviour descriptors (averaged over the observa-
tion state sequence) such as Distance, the walker’s position relative
to the goal, Hull angle, the body’s angle of the agent, Torque, the
force applied to the agent’s hip and knee joints, Jump, that describes
when both legs are simultaneously not in contact with the ground
and Hip angle/speed, that are the angle and speed values of the
agent’s hip joints, respectively. We define the behaviour space as
the pair of the descriptor Distance and Hull angle. The effect of dif-
ferent descriptor pairs as behaviour spaces is addressed in RQ3 (see
Subsection 5.7). As for the last use-case, the behaviour is defined as
the two dimensional point that counts the number of actions to 1)
pick-up the passenger and 2) to drop them off afterwards.
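As an example, the Lunar Lander descriptor described at the beginning of this paragraph can be extracted from a trajectory of observations as sketched below. The observation layout (index 0 for the horizontal position, index 3 for the vertical velocity, indices 6 and 7 for the leg contact flags) follows the standard Gymnasium LunarLander observation and is an assumption of this sketch.

def lunar_lander_descriptor(trajectory):
    """Behaviour descriptor from [26]: horizontal position and vertical
    velocity of the lander when it first touches the ground; if it never
    does (e.g., it leaves the viewport), use the last observed values."""
    for obs in trajectory:
        if obs[6] == 1.0 or obs[7] == 1.0:      # a leg touches the ground
            return float(obs[0]), float(obs[3])
    last = trajectory[-1]
    return float(last[0]), float(last[3])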
Models Under Test. For the Bipedal Walker and Lunar Lander
experiments, we use the models freely available on the Stable-
Baselines3 repository [20]. For the customised Taxi use-case, we
train the agent via Q-Learning [30].
Hyperparameters. We follow the guidelines indicated in [18]
to configure the Gaussian Mixture Models used by MDPFuzz. As
for Novelty Search, it computes the novelty scores as the average
distance of the 3 nearest neighbours and the novelty threshold for
updating its novelty archive has been set to 𝑡 = 0.9 for the Taxi use-
case and 𝑡 = 0.005 for Bipedal Walker and Lunar Lander. Similarly
to all the previously mentioned experimental parameters, they have
been obtained in a prior study.
5.5 RQ1: Effective Fault Detection
We first investigate the efficiency of the approaches evaluated. Fig-
ure 1 shows the evolution of the number of distinct fault-triggering
solutions.
Results. All the frameworks evaluated find more faults than Ran-
dom Testing for the Bipedal Walker use-case. Precisely, at the end of
testing, we report an average improvement of 30% and 56% for ME
and NS, respectively, while MDPFuzz finds 17% more faults. Greater
improvements of QD optimisation are observed for the Lunar Lan-
der use-case (up to 206% and 152%), while MDPFuzz matches the
baseline. We note that in the case of NS, the latter vary a lot (see
the shaded areas in Figure 1). The results for the last use-case show
a different picture though. Here, only ME beats Random Testing
(up to 12%), while Novelty Search and MDPFuzz find 23% and 27%
fewer faults, respectively.
Analysis. Searching for diversity can impede test efficiency, as
Random Testing outperforms Novelty Search in the Taxi experi-
ments. Furthermore, NS is the most sensitive framework to ran-
domness, which is likely to be caused by its high dependency to
its initial population. It is therefore difficult to recommend as a
general optimiser for efficient QD-based policy testing. However,
accounting for both quality and diversity lets MAP-Elites system-
atically beat Random Testing, while showing lesser sensitivity to
its initialisation. Besides, MDPFuzz does not seem to be suited for
all the use-cases studied, since it is only able to compete with the
efficiency of QD-based policy testing on the Bipedal Walker envi-
ronment. While being deceptive, those results are not completely
surprising. Indeed, this framework drives its search with Gaussian
Mixture Models (GMMs), whose parameters were studied for sev-
eral applications, including Bipedal Walker. We therefore suspect
that MDPFuzz ends up sharing its results with Random Testing
on Lunar Lander because of a non-optimal GMM configuration. As
for the Taxi use-case, we recall that MDPFuzz aims at enabling the
testing of policies solving complex MDPs [18], which is definitely not
the case of this toy problem.
Figure 2: Evolution of the behaviour space coverage over
time as the number of behaviour niches (bins) illuminated
during testing. In the second column, only bins filled by fault-
triggering solutions are counted, i.e., faulty behaviours. The
lines show the median results over 10 executions, and the
shaded areas correspond to the first and third quantiles.
Figure 3: Final state diversity as the average distances of
the 3 nearest neighbours. Since their scale depends on the
observation space of each use-case, we report the relative
performance of the methodologies to Random Testing. The
lines show the median results over 10 executions.
Conclusion. The complexity of dedicated policy testing tech-
niques such as MDPFuzz can hurt their efficiency – especially for
smaller MDPs –, while QD optimisation consistently finds the great-
est number of faults in the decision model. About that, discarding
solution quality in favour of novelty can lead to better, yet unstable
results. QD-based policy testing does not come with poorer effi-
ciency (as we expected) but rather with a significant increase in the
number of functional faults found in the model.
5.6 RQ2: Testing and Fault Diversity
Next, we study how QD optimisation improves diversity in terms
of behaviours and final states. Figure 2 shows the behaviour and
faulty behaviour coverage, and Figure 3 the average distances of
the 3 nearest neighbours of the final states. Since the latter are
distances between observation states, Figure 3 shows the relative
performance of the methodologies evaluated to the Random Testing
baseline.
Results. ME systematically improves behaviour discovery, rang-
ing from 7% (Taxi) to 14% (Lunar Lander). MDPFuzz matches RT’s
performance for all the use-cases, as Novelty Search does, except
for Lunar Lander for which NS covers up to 20% more behaviour
niches (slightly outperforming MAP-Elites). Regarding faulty be-
haviours, our results only show significant differences to Random
Testing for MAP-Elites and Novelty Search on the Lunar Lander
experiments. Precisely, they stand out after 2000 iterations and
end up with 189% and 138% more faulty behaviours discovered,
respectively. Similarly to the previous results though, we note that
NS's performance varies a lot. If we now look at final state diversity,
we find mixed-bag results. For Bipedal Walker, we observe that
Novelty Search explores around 7.5% more diverse final states than
the random baseline throughout testing. For Lunar Lander, none of
the testing technique significantly improves the baseline’s results.
Actually, the failure state distribution tends to be worse, especially
for QD optimisation (up to a 30% decrease). For the last use-case,
MDPFuzz outperforms all the other techniques with significant
margins, with 40% greater distances in the final states (averaged
throughout testing) and up to 50% sparser failure states. Meanwhile,
if QD optimisation does not show significant difference with the
baseline for the final state diversity, we observe great, yet unstable
improvements in their failure state results (as shown in the second
chart of the bottom row in Figure 3). In particular, Novelty Search
catches up with MDPFuzz’s performance on the last iterations.
Figure 4: Impact of the behaviour space parameter for the Bipedal Walker experiments. The four spaces are different pairs of
hand-designed descriptors studied in [8]. Each column displays the results for a behaviour space. From top to bottom: number
of faults, number of behaviours and faulty behaviours, final and failure state diversity (FS and FFS) relative to Random Testing.
The results are the medians of 10 executions.
Analysis. QD-based policy testing discovers more behaviours
than both MDPFuzz and Random Testing. More importantly, in
some cases (Lunar Lander), this also applies for fault-triggering so-
lutions. In other words, QD-based policy testing finds more diverse
faults in the model under test than dedicated testing techniques. It
is interesting to note that the SOTA framework MDPFuzz does not
cover fewer behaviours than Random Testing, something that one
could have expected. Complementing the diversity measurement
of our behavioural evaluation with final state analysis lets us shade
interesting details. Indeed, discovering more behaviours does not
translate to sparser final state distribution, as the two QD-based
frameworks either matches or slightly improves Random Testing.
This is especially the case for Lunar Lander, for which both ME and NS significantly improve faulty behaviour coverage (i.e., find di-
verse faults in the model tested) while showing poor failure state
distributions. As such, the use of behaviour spaces helps policy
testing, as simply looking at final states is not enough to cover
diverse faults in the model. If we consider the actual descriptors
used to define the behaviour spaces, we can explain the lack of a
general correlation between increased behaviour and final state
discovery. Indeed, none of the descriptors solely depends on the
termination of an execution to define the behaviour. For instance,
in the Bipedal Walker experiments, the behaviours are averages of
some observations’ features.
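To illustrate what such descriptors look like in practice, the sketch below averages two hypothetical, placeholder observation features over a trajectory and maps the result to a niche of a discretised behaviour space; it is only an illustration and does not reproduce the exact descriptors of [8].

import numpy as np

def behaviour_descriptor(observations, feature_idx=(0, 2)):
    # observations: list of observation vectors from one execution of the policy.
    # Each selected feature is averaged over time, so the descriptor summarises
    # how the policy behaved during the episode, not only where it ended up.
    obs = np.asarray(observations, dtype=float)
    return obs[:, list(feature_idx)].mean(axis=0)

def behaviour_cell(descriptor, low, high, bins=25):
    # Discretise a 2D descriptor into the behaviour niche used for coverage counts.
    d = np.clip(descriptor, low, high)
    idx = ((d - low) / (high - low + 1e-12) * bins).astype(int)
    return tuple(np.minimum(idx, bins - 1))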
Interestingly though, the descriptors defined for the Lunar Lan-
der use-case let us better understand what type of faults every
methodology tends to find. In particular, behaviours are based on
the first contact of the spacecraft with the ground; but the policy can later fail if the landing was not on the targeted pad (in which case the lander has to be safely glided towards it). Therefore, by
confronting the results of our bi-metric evaluation, an interpretation is that here the QD-based frameworks find solutions for which 𝜋 lands the spacecraft at various positions (i.e., diverse behaviours) but later fails the task by colliding the lander's body with the same edge of the landscape (i.e., dense failure states), while MDPFuzz finds slightly more distinct edges (around 10% more distributed failure states) but the policy actually lands at similar positions (i.e., poor
faulty behaviour discovery).
Conclusion. The ability of QD optimisation to find diverse high-quality solutions carries over to policy testing, that is, it finds fault-triggering inputs that exercise the model such that it fails with varied behaviours. As found in the previous analysis, discarding solution quality leads to unstable results, which is mitigated by Quality Diversity optimisers that balance quality and diversity, such as MAP-Elites.
5.7 RQ3: Behaviour Space Impact
In this last research question, we investigate the effect of differ-
ent behaviour spaces on the proposed QD-based policy testing
framework. As previously mentioned, Gupta et al. [8] define sev-
eral descriptors to characterise the behaviour of a policy. We use
the ones introduced in Subsection 5.4 to define three additional
behaviour spaces (as descriptor pairs) to study how the results of
our approach can differ. Figure 4 summarises our findings, where
each column corresponds to a behaviour space. Note that the first
column corresponds to the results found above. In the following,
we analyse the impact on each metric.
Test efficiency. ME improves the number of faults found com-
pared to Random Testing, regardless of the behaviour space used,
to an extent that can decrease to 11.5% (down from 30%, the first
column on Figure 4). However, Novelty Search shows significantly
higher sensitivity to the behaviour space. In particular, NS can al-
most double the number of faults found by Random Testing with
the third behaviour definition (90%), but has around 20% poorer
performance with an inappropriate space (as shown in the second
column of Figure 4). The drastic difference in behaviour sensitiv-
ity of the two implementations again lies in the balance between
how quality and diversity of solutions are accounted for. Since MAP-Elites mostly relies on quality (by focusing on mutating its elites), its results are steadier and less sensitive to the exact behaviour descriptors.
In other words, as long as the behaviour space is able to capture
diversity – which is the case here, as all the spaces are based on
meaningful, hand-designed descriptors – MAP-Elites seems to be
an efficient optimiser for QD-based policy testing. On the other
hand, by discarding quality to only account for behaviour novelty,
Novelty Search becomes more sensitive to the behaviour space
used.
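The difference can be made concrete with the acceptance rules of the two optimisers. The sketch below is a simplified illustration (not the exact implementation evaluated here), where evaluate is assumed to return a solution's quality together with its behaviour-space cell, and novelty scores candidates by the distance of their behaviour to previously seen ones.

import random

def map_elites_step(archive, mutate, evaluate):
    # archive: dict mapping a behaviour cell to its (solution, quality) elite.
    parent, _ = random.choice(list(archive.values()))
    child = mutate(parent)
    quality, cell = evaluate(child)
    # Acceptance is driven by quality: keep the child only if its cell is empty
    # or if it beats the current elite of that cell.
    if cell not in archive or quality > archive[cell][1]:
        archive[cell] = (child, quality)

def novelty_search_step(population, candidates, novelty, keep=50):
    # Acceptance is driven by novelty alone: quality is ignored entirely,
    # which explains the higher sensitivity to the chosen behaviour space.
    scored = sorted(candidates, key=novelty, reverse=True)
    population.extend(scored[:keep])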
Behaviour Diversity. The number of behaviours discovered by all the methodologies is for the most part steady across the evaluated spaces, matching the performance of the baseline. We only observe that ME discovers around 8% more behaviours with two of the
spaces (first and third columns in Figure 4). As for the diversity of
faulty behaviours, we report no significant change compared to our
previous findings, that is, all frameworks share similar figures. One may note that MDPFuzz has consistently lower numbers (10% lower on average over the spaces). While QD-based policy testing never
impedes behaviour discovery, its ability to significantly improve
the random baseline depends on the behaviour space. For instance,
we find that the behaviour of the policy tested is best captured with
the first pair of descriptors.
Final State Diversity. The behaviour space definition does not affect the sparseness of the final states, which is hardly surprising since behaviours are computed as feature averages. Similarly, the relative performance of the methodologies with respect to the baseline does not fluctuate significantly. In particular, Novelty Search shows an 8% to 12% smoother state distribution. Finally, if we only consider failure states (bottom row), we can see that there is no general trend in the results either, with minor fluctuations around the baseline of less than 11% throughout testing.
5.8 Summary
Our first and most important conclusion is that QD manages to
find diverse faults in the decision model, despite the simplicity
of our approach. In particular, it shows that complex, dedicated policy testing techniques such as MDPFuzz are not always needed. Second,
we systematically observe a lack of consistency in the results of
our framework optimised with Novelty Search, especially when
running close to our set test budget. As such, despite its outstanding performance in some cases, we recommend using our proposal with QD optimisers that account for quality and diversity in more
balanced ways. Finally, the selection of the precise behaviour space can boost the performance of QD-based policy testing. We observe this
for the Bipedal Walker use-case, where a well-chosen behaviour
space substantially increases the number of faults detected. At the
same time, the other behaviour spaces are all competitive and do
not negatively impact the ability to reveal diverse faults.
6 THREATS TO VALIDITY
In the following, we discuss the limitations of our proposal as well as the threats to the validity of our experimental evaluation.
External Threats. A first threat to our evaluation is the policy under test, which we mitigate by using the same model as previous work [18]. Similarly, there is some inherent bias to the results
from the selected use-cases. In that regard, we consider three en-
vironments of various natures, two of them having already been
studied in the QD and RL literature. Finally, we only evaluate one
SOTA policy testing technique, namely MDPFuzz. Given the space
limitation, we decided to prioritise the number of use-cases, since
this work primarily introduces Illumination optimisation in policy
testing.
Construction Threats. In this first work, we assume deterministic executions with fixed randomness effects. While deter-
ministic executions are a common approach for testing policies
and their deployment, the challenge of handling MDP stochasticity
lies in the fact that multiple executions of a particular solution
would generate different trajectories and thus, possibly different
behaviours. Similarly, some of these executions might fail depend-
ing on the selected policy actions or state transitions in the MDP.
We acknowledge that this is an important challenge for QD-based policy testing that needs to be addressed in future work. Practi-
cally speaking, by fixing the random effects we reduce the search
space to a subset of inputs that reveal a fault with the given random
seed. We thereby limit the experimental evaluation to a subset of
all possibly detectable faults in the policy.
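In practice, fixing the random effects amounts to resetting the environment with a constant seed before every rollout. A minimal sketch with Gymnasium is given below; the environment id, the policy interface and the crude fault proxy are placeholders for illustration, not the paper's exact setup, and the injection of a candidate test input as the initial state is environment-specific and omitted.

import gymnasium as gym

def run_once(policy, seed=0, env_id="LunarLander-v2", max_steps=1000):
    # With a constant seed, the same initial state always yields the same
    # trajectory, so any fault detected during testing is reproducible.
    env = gym.make(env_id)
    obs, info = env.reset(seed=seed)
    total_reward, failed = 0.0, False
    for _ in range(max_steps):
        action = policy(obs)                                   # policy under test
        obs, reward, terminated, truncated, info = env.step(action)
        total_reward += reward
        if terminated or truncated:
            failed = terminated and reward < 0                 # illustrative fault proxy
            break
    env.close()
    return total_reward, failed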
7 CONCLUSION
This work introduces Quality Diversity (QD) for policy testing. QD
is a flexible, black-box optimisation framework that optimises a
population of individuals by considering both their behaviour, i.e.
how they solve a given problem, and their quality, i.e. how well
they solve it.
We illustrate how to adapt QD to policy testing and propose the
first formulation of diversity-oriented policy testing as an Illumina-
tion task. Precisely, we characterise test inputs with the behaviour
of the policy under test, that is, how it solves/fails the test case. We
implement our QD-based testing framework with two commonly
used QD optimisers of different paradigms: the elitist MAP-Elites
and the divergent Novelty Search algorithms. We perform experi-
ments on three use-cases from the reinforcement learning literature,
and compare QD-based testing to state-of-the-art policy testing
and random testing as the state-of-the-practice. Our results show
that QD optimisation, while being a conceptually straightforward
and easy-to-apply approach, not only improves fault diversity but
also fault detection. We further assess the impact of the behaviour
space definition, which we consider the most decisive parameter
of our approach. With this first work, we open a new application
area for Quality Diversity.
In future work, we will address the inclusion of generic or learned behaviour spaces to reduce the initial effort to set up QD, and the handling of stochastic MDPs in the search space to
further guide the search.
ACKNOWLEDGEMENTS
This work is funded by the Norwegian Ministry of Education and
Research, the Research Council of Norway under grant number
324674 (AutoCSP), and is part of the RESIST_EA Inria-Simula asso-
ciate team.
REFERENCES
[1] T.Y. Chen, S.C. Cheung, and S.M. Yiu. 1998. Metamorphic Testing: A New Approach
for Generating Next Test Cases. Technical Report. Department of Computer
Science, Hong Kong University of Science and Technology.
[2] Antoine Cully and Yiannis Demiris. 2018. Quality and Diversity Optimization: A
Unifying Modular Framework. IEEE Transactions on Evolutionary Computation
22, 2 (2018), 245–259. https://doi.org/10.1109/TEVC.2017.2704781
[3] Thomas G Dietterich. 2000. Hierarchical reinforcement learning with the MAXQ
value function decomposition. Journal of artificial intelligence research 13 (2000),
227–303.
[4] Hasan Ferit Eniser, Timo P. Gros, Valentin Wüstholz, Jörg Hoffmann, and
Maria Christakis. 2022. Metamorphic Relations via Relaxations: An Approach
to Obtain Oracles for Action-Policy Testing. In Proceedings of the 31st ACM
SIGSOFT International Symposium on Software Testing and Analysis.
https:
//doi.org/10.1145/3533767.3534392
[5] Keith Frankish and William M. Ramsey (Eds.). 2014. The Cambridge Handbook of
Artificial Intelligence. Cambridge University Press, Cambridge, UK.
[6] D. N. Geary. 2018. Mixture Models: Inference and Applications to Clustering. Journal of the Royal Statistical Society Series A: Statistics in Society 152, 1 (12 2018), 126–127. https://doi.org/10.2307/2982840
[7] Jorge Gomes, Pedro Mariano, and Anders Lyhne Christensen. 2015. Devising
Effective Novelty Search Algorithms: A Comprehensive Empirical Study. In Pro-
ceedings of the 2015 Annual Conference on Genetic and Evolutionary Computation
(Madrid, Spain) (GECCO ’15). Association for Computing Machinery, New York,
NY, USA, 943–950. https://doi.org/10.1145/2739480.2754736
[8] Vikas Gupta, Nathanael Aubert-Kato, and Leo Cazenille. 2020. Exploring the
BipedalWalker Benchmark with MAP-Elites and Curiosity-Driven A3C. In Pro-
ceedings of the 2020 Genetic and Evolutionary Computation Conference Companion
(Cancún, Mexico) (GECCO ’20). Association for Computing Machinery, New York,
NY, USA, 79–80. https://doi.org/10.1145/3377929.3389921
[9] John H. Holland. 1992. Genetic Algorithms. Scientific American (1992).
[10] Rushang Karia and Siddharth Srivastava. 2020. Learning Generalized Relational
Heuristic Networks for Model-Agnostic Planning. CoRR (2020).
[11] Joel Lehman and Kenneth O. Stanley. 2008. Exploiting Open-Endedness to Solve
Problems Through the Search for Novelty. In IEEE Symposium on Artificial Life.
https://api.semanticscholar.org/CorpusID:2367605
[12] Joel Lehman and Kenneth O Stanley. 2011. Abandoning objectives: Evolution
through the search for novelty alone. Evolutionary computation 19, 2 (2011),
189–223.
[13] Joel Lehman and Kenneth O. Stanley. 2011. Evolving a Diversity of Virtual
Creatures through Novelty Search and Local Competition. In Proceedings of
the 13th Annual Conference on Genetic and Evolutionary Computation (Dublin,
Ireland) (GECCO ’11). Association for Computing Machinery, New York, NY,
USA, 211–218. https://doi.org/10.1145/2001576.2001606
[14] Chengjie Lu, Yize Shi, Huihui Zhang, Man Zhang, Tiexin Wang, Tao Yue, and
Shaukat Ali. 2023. Learning Configurations of Operating Environment of Au-
tonomous Vehicles to Maximize their Collisions. IEEE Transactions on Software
Engineering (2023). https://doi.org/10.1109/TSE.2022.3150788
[15] Quentin Mazouni, Helge Spieker, Arnaud Gotlieb, and Mathieu Acher. 2023. A Re-
view of Validation and Verification of Neural Network-based Policies for Sequen-
tial Decision Making. In Rencontres des Jeunes Chercheurs en Intelligence Artifi-
cielle (RJCIA). https://pfia23.icube.unistra.fr/conferences/rjcia/Actes/RJCIA2023_
paper_5.pdf
[16] William M. McKeeman. 1998. Differential Testing for Software. Digit. Tech. J.
(1998).
[17] Jean-Baptiste Mouret and Jeff Clune. 2015. Illuminating search spaces by mapping
elites. arXiv:1504.04909 [cs.AI]
[18] Qi Pang, Yuanyuan Yuan, and Shuai Wang. 2022. MDPFuzz: Testing Models
Solving Markov Decision Processes. In Proceedings of the 31st ACM SIGSOFT
International Symposium on Software Testing and Analysis. https://doi.org/10.
1145/3533767.3534388
[19] Kexin Pei, Yinzhi Cao, Junfeng Yang, and Suman Jana. 2017. Deepxplore: Au-
tomated whitebox testing of deep learning systems. In proceedings of the 26th
Symposium on Operating Systems Principles. 1–18.
[20] Antonin Raffin. 2020. RL Baselines3 Zoo. https://github.com/DLR-RM/rl-
baselines3-zoo.
[21] David Silver, Thomas Hubert, Julian Schrittwieser, Ioannis Antonoglou, Matthew
Lai, Arthur Guez, Marc Lanctot, Laurent Sifre, Dharshan Kumaran, Thore Grae-
pel, Timothy Lillicrap, Karen Simonyan, and Demis Hassabis. 2018. A general
reinforcement learning algorithm that masters chess, shogi, and Go through
self-play. Science (2018). https://doi.org/10.1126/science.aar6404
[22] Marcel Steinmetz, Daniel Fišer, Hasan Ferit Eniser, Patrick Ferber, Timo P. Gros,
Philippe Heim, Daniel Höller, Xandra Schuler, Valentin Wüstholz, Maria Chris-
takis, and Jörg Hoffmann. 2022. Debugging a Policy: Automatic Action-Policy
Testing in AI Planning. Proceedings of the International Conference on Automated
Planning and Scheduling (2022). https://doi.org/10.1609/icaps.v32i1.19820
[23] Richard S Sutton and Andrew G Barto. 2018. Reinforcement learning: An intro-
duction. MIT press.
[24] Martin Tappler, Filip Cano Córdoba, Bernhard K. Aichernig, and Bettina
Könighofer. 2022. Search-Based Testing of Reinforcement Learning. In Pro-
ceedings of the Thirty-First International Joint Conference on Artificial Intelligence,
IJCAI 2022, Vienna, Austria, 23-29 July 2022, Luc De Raedt (Ed.). ijcai.org, 503–510.
https://doi.org/10.24963/IJCAI.2022/72
[25] Yuchi Tian, Kexin Pei, Suman Jana, and Baishakhi Ray. 2018. DeepTest: Automated
Testing of Deep-Neural-Network-Driven Autonomous Cars. In Proceedings of the
40th International Conference on Software Engineering. https://doi.org/10.1145/
3180155.3180220
[26] Bryon Tjanaka, Sam Sommerer, Nikitas Klapsis, Matthew C. Fontaine, and Ste-
fanos Nikolaidis. 2021. Using CMA-ME to Land a Lunar Lander Like a Space
Shuttle. pyribs.org (2021). https://docs.pyribs.org/en/stable/tutorials/lunar_
lander.html
[27] Mark Towers, Jordan K. Terry, Ariel Kwiatkowski, John U. Balis, Gianluca de
Cola, Tristan Deleu, Manuel Goulão, Andreas Kallinteris, Arjun KG, Markus
Krimmel, Rodrigo Perez-Vicente, Andrea Pierré, Sander Schulhoff, Jun Jet Tai,
Andrew Tan Jin Shen, and Omar G. Younis. 2023. Gymnasium. https://doi.org/
10.5281/zenodo.8127026
[28] Sam Toyer, Sylvie Thiébaux, Felipe Trevizan, and Lexing Xie. 2020. Asnets: Deep
learning for generalised planning. Journal of Artificial Intelligence Research 68
(2020).
[29] Fitash Ul Haq, Donghwan Shin, and Lionel C. Briand. 2023. Many-Objective
Reinforcement Learning for Online Testing of DNN-Enabled Systems. In 2023
IEEE/ACM 45th International Conference on Software Engineering (ICSE). 1814–
1826. https://doi.org/10.1109/ICSE48619.2023.00155
[30] Christopher J. C. H. Watkins and Peter Dayan. 1992. Q-learning. Machine
Learning 8, 3 (01 May 1992), 279–292. https://doi.org/10.1007/BF00992698
[31] Amirhossein Zolfagharian, Manel Abdellatif, Lionel C. Briand, Mojtaba
Bagherzadeh, and Ramesh S. 2023. A Search-Based Testing Approach for Deep
Reinforcement Learning Agents. IEEE Transactions on Software Engineering 49, 7
(2023), 3715–3735. https://doi.org/10.1109/TSE.2023.3269804
|
ai_researcher | 1 | R-Bot_An_LLM-based_Query_Rewrite_System.pdf | 2
arXiv:2111.14573v2 [math.RA] 16 Nov 2022
Polyfunctions over Commutative Rings
Ernst Specker1, Norbert Hungerb¨uhler2, and Micha Wasem3
1Dedicated to the memory of the first author
2Department of Mathematics, ETH Z¨urich, R¨amistrasse 101, 8092 Z¨urich,
Switzerland
3HTA Freiburg, HES-SO University of Applied Sciences and Arts Western
Switzerland, P´erolles 80, 1700 Freiburg, Switzerland
November 17, 2022
Abstract
A function f : R → R, where R is a commutative ring with unit element,
is called polyfunction if it admits a polynomial representative p ∈ R[x]. Based
on this notion we introduce ring invariants which associate to R the numbers
s(R) and s(R′; R), where R′ is the subring generated by 1. For the ring R =
Z/nZ the invariant s(R) coincides with the number theoretic Smarandache
or Kempner function s(n). If every function in a ring R is a polyfunction,
then R is a finite field according to the R´edei-Szele theorem, and it holds that
s(R) = |R|. However, the condition s(R) = |R| does not imply that every
function f : R → R is a polyfunction. We classify all finite commutative rings
R with unit element which satisfy s(R) = |R|. For infinite rings R, we obtain
a bound on the cardinality of the subring R′ and for s(R′; R) in terms of
s(R). In particular we show that |R′| 6 s(R)!. We also give two new proofs
for the R´edei-Szele theorem which are based on our results.
1
Introduction
For a commutative ring R with unit element, a function f : R → R is said to
be a polyfunction if there exists a polynomial p ∈ R[x] such that f (x) = p(x)
for all x ∈ R (see [11, 9], and also [1, 2] for a discussion on polyfunctions from
1
Zm → Zn). The set of polyfunctions over R equipped with pointwise addition and
multiplication forms a subring
G(R) := {f : R → R, ∃p ∈ R[x] ∀x ∈ R =⇒ p(x) = f (x)}
of RR and will be called the ring of polyfunctions over R. The polynomials in R[x]
which represent the zero element in G(R) are called null-polynomials (see [13]). If
S is a subring of R, then
G(S; R) := {f : R → R, ∃p ∈ S[x] ∀x ∈ R =⇒ p(x) = f (x)},
In particular, the subring R′ generated by the
is a natural subring of G(R).
unit element 1 in R gives rise to the integer polyfunctions G(R′; R). Instead of
restricting the ring of allowed coefficients as in the construction for G(S; R), one
obtains other rings of polyfunctions by restricting the domain: The ring
{f : S → R, ∃p ∈ R[x] ∀x ∈ S =⇒ p(x) = f (x)}
e.g. contains G(R) as a subring.
If S is a subring of R, a characteristic number connected to S and R is the minimal
degree m such that the function x 7→ xm can be represented by a polynomial in
S[x] of degree strictly smaller than m. Then, in particular, every function in
G(S; R) has a polynomial representative of degree strictly less than m. We set
s(S; R) := min{m ∈ N, ∃p ∈ S[x], deg(p) < m, ∀x ∈ R =⇒ p(x) = xm}
and s(R) := s(R; R) for brevity. We set s(S; R) := ∞ if no function x 7→ xm can
be represented by a polynomial of degree strictly smaller than m.
Trivially, we have s(S; R) > s(T ; R) > s(R) whenever S ⊂ T are subrings of R.
On the other hand, we will see in Section 3, that s(R′; R) < ∞ is bounded in
terms of s(R) if s(R) < ∞.
Clearly, if two rings R1, R2 are isomorphic, then s(R1) = s(R2) and s(R′1, R1) = s(R′2, R2). In other words, R 7→ s(R) and R 7→ s(R′, R) are ring invariants.
The function s, which associates to a given ring R the number s(R) ∈ N ∪ {∞} has been introduced in [5] and is called Smarandache function. This naming stems from the fact, that for all 2 ≤ n ∈ N, the map n 7→ s(Z/nZ) coincides with the well-known number theoretic Smarandache or Kempner function s (see [5, Theorem 2]) defined by
s(n) := min{k ∈ N, n | k!}
(1)
2
(see Lucas [8], Neuberg [10] and Kempner [6]). In fact, Legendre has already studied aspects of the function s(n): In [7] he showed that if n = pµ for some
prime p and 1 6 µ ∈ N, then s(n) verifies
s(n) = µ(p − 1) + a0 + a1 + . . . + ak,
where the numbers ai are the digits of s(n) in base p. i.e. s(n) = akpk + . . . + a0
and 0 6 ai < p. We refer to Dickson [3, p. 263–265] for the history of the function
s(n).
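As a quick illustrative check (not part of the original text): for n = 8 = 2^3 one has s(8) = 4, since 8 | 4! = 24 while 8 does not divide 3! = 6, and Legendre's formula gives the same value:

% s(8) = 4 and 4 = (100)_2, so a_2 = 1, a_1 = a_0 = 0 in base p = 2.
\[
  s(2^3) = \mu(p-1) + a_0 + a_1 + a_2 = 3\cdot(2-1) + 0 + 0 + 1 = 4 .
\]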
In a finite field F , every function is a polyfunction as a polynomial respresentative
of a function f : F → F is, e.g., given by the Lagrange interpolation polynomial
for f . This representation property characterizes finite fields among commutative
rings with unit element (see [12]):
Theorem 1 (R´edei, Szele). If R is a commutative ring with unit element then R
is a finite field if and only if every function f : R → R can be represented by a
polynomial in R[x].
We will include two short alternative proofs of this theorem in Section 4. For finite
fields F , one has s(F ) = |F |, so in view of Theorem 1, it is natural to ask what
can be said about commutative rings R with unit element for which s(R) = |R|
holds true. Note that if R is a finite ring, it trivially holds that s(R) 6 |R|, as the
polynomial
y∈R
Y
is a normed null-polynomial of degree |R|.
p(x) =
(x − y)
The following theorem (which will be restated below for the reader’s convenience
as Theorem 3), answers the above question and classifies all finite commutative
rings R with unit element that satisfy s(R) = |R|:
Theorem. Let R be a finite commutative ring with unit element. Then, s(R) =
|R| holds if and only if R is one of the following:
(a) R is a finite field, or
(b) R is Z4, or
(c) R is the ring ρ with four elements {0, 1, a, 1 + a} with 1 + 1 = 0 and a2 = 0.
Remarks:
3
1. The ring ρ is not a field since it has zero divisors, and since it is of charac-
teristic 2, it is not isomorphic to Z4.
2. Observe the similarity between this result and the fact that for n > 2, the
usual Smarandache function satisfies s(n) = n if and only if n is prime or
n = 4.
Section 2 is devoted to the proof of this theorem. In Section 3 we discuss infinite
rings and show that for an infinite commutative ring R with unit element and
s(R) < ∞, we obtain an upper bound for |R′| and for s(R′; R) in terms of s(R),
where R′ is the subring of R generated by 1. Finally, in Section 4, we give two
proofs of Theorem 1 – a direct one and one that is based on Theorem 3.
Throughout the article, n > 2 will denote a natural number, and Zn = Z/nZ is
the ring of integers modulo n, and we write a | b if b is an integer multiple of a.
2 Polyfunctions over Finite Rings
Theorem 1 answered the question, when a ring R has the property, that every
function f : R → R can be represented by a polynomial in R[x]. For finite rings
a necessary (but not sufficient) condition for this property to hold is
s(R) = |R|,
(2)
(see Theorem 3 below). In this section, we want to address the question for which
finite rings, equation (2) holds. The first step to answer this, is the following
proposition:
Proposition 2. If R is a commutative ring with unit element and with zero divi-
sors then either
(a) there exist a, b ∈ R \ {0} with a 6= b and ab = 0, or
(b) R is Z4, or
(c) R is the ring ρ with four elements {0, 1, a, 1 + a} with 1 + 1 = 0 and a2 = 0.
Proof
Let us assume that in R the implication holds: if u, v ∈ R \ {0} and uv = 0 then it
follows u = v. Let a ∈ R \ {0} be a zero divisor: a2 = 0. Thus, if x is an element
4
in R with ax = 0, we have either x = 0 or x = a. Notice that for all u ∈ R we
have
and hence for all u ∈ R
a(au) = 0
au = 0 or a(u − 1) = 0.
Hence, we have only the four cases u = 0 or u = a or u = 1 or u = 1 + a. If
✷
1 + 1 = a, then R = Z4, if 1 + 1 = 0, then R is the ring ρ in (c).
We can now prove the main result of this section:
Theorem 3. Let R be a finite commutative ring with unit element. Then, s(R) =
|R| holds if and only if R is one of the following:
(a) R is a finite field, or
(b) R is Z4, or
(c) R is the ring ρ with four elements {0, 1, a, 1 + a} with 1 + 1 = 0 and a2 = 0.
Proof
If R is not a field and not Z4 and not the ring ρ, then, according to Proposition 2,
R is a ring with a, b ∈ R \ {0} such that ab = 0 and with a 6= b. Then
(x − a)(x − b)
(x − z)
Yz∈R\{a,b,0}
is a normed null-polynomial of degree |R| − 1. Therefore s(R) < |R|.
To prove the opposite direction, we go through the three cases:
(a) If R is a field, then a polynomial of degree n has at most n roots. Hence,
s(R) = |R|.
(b) If R is Z4, then (by [5, Theorem 2]) s(Z4) = s(4) = 4 = |Z4|.
(c) If R is the ring ρ with elements {0, 1, a, 1 + a} and with 1 + 1 = 0 and a2 = 0,
we have to prove that s(R) = 4. Assume by contradiction, that p(x) ∈ R[x] is a
normed null-polynomial of degree 3. Since p(0) = p(1) = 0, p(x) must be of the
form
p(x) = x(x + 1)(ξ + x).
From p(a) = 0, it follows that aξ = 0 and from p(a+ 1) = 0 it subsequently follows
✷
that a = 0 which is a contradiction.
5
3
Infinite Rings
In this section R is a commutative ring with unit element and R′ the subring of R
which is generated by 1. We will need the following lemma, which is a corollary
of [5, Lemma 4, p.4]:
Lemma 4. For all k, n ∈ N ∪ {0}, k 6 n we have
n
(−1)n−j
j=0
X
(with the convention 00 := 1).
jk = δknn!
n
j
(cid:19)
(cid:18)
Proposition 5. If s(R) < ∞ then R′ is a finite ring and |R′|
s(R)!.
Remark: We notice, that s(R) < ∞ may hold even if R is an infinite ring. As
an example consider the ring
(cid:12)
(cid:12)
R = Z2[x1, x2, . . .]/{x2
1, x2
2, . . .}
in which all u ∈ R satisfy the relation u4 = u2. On the other hand, if R is finite,
we trivially have s(R) 6 |R|.
Proof of Proposition 5
By assumption, for n = s(R) there exist coefficients ai ∈ R, i ∈ {0, 1, . . . n − 1},
such that for all u ∈ R we have
We denote
un −
n−1
i=0
X
aiui = 0.
1 + 1 + . . . + 1
∈ R′
m times
by ¯m. Then, by Lemma 4, we have for k 6 n
}
{z
|
j=0
X
Hence, it follows from (3) that
n
(−1)n−j
n
j
(cid:19)
(cid:18)
jk = δknn!
n
0 =
(−1)n−j
=
j=0
X
n
j=0
X
(−1)n−j
n
j
n
j
(cid:18)
(cid:18)
n−1
ai¯ji
¯jn −
i=0
X
n−1
n
(cid:19)
=
!
jn −
ai
(−1)n−j
(cid:19)
i=0
X
j=0
X
6
(3)
(4)
ji = n!
n
j
(cid:18)
(cid:19)
where the last equality follows from (4).
✷
Remark: As the example R = Zn! shows, the estimate on the size of R′ emerging
from Proposition 5, |R′| 6 s(R)!, cannot be improved in general.
Lemma 6. If n := s(R) < ∞ then there exists a bound Λ = n!(2n)nn for the
cardinality of the orbits of the elements of R, i.e., for all u ∈ R there holds
|{uk, k ∈ N}| 6 Λ.
Proof
As in the previous proof, we adopt (3). For k ∈ N let
Mk
:=
n−1
n
i=0
Y
aεi
i , εi ∈ {0, 1, . . . , k}
o
Nk
:=
rµ µ, rµ ∈ {0, 1, . . . , n! − 1}
.
µ∈Mk
n X
o
Observe that |Mk| 6 (k + 1)n and |Nk| 6 n!|Mk|. By Proposition 5 it follows that
for a, b ∈ Nk, the sum a+b also belongs to Nk. On the other hand, by applying (3)
to u = a2
j , j ∈ {0, 1, . . . , n − 1}, we obtain
n−1
a2n
j =
aia2i
j ,
i=0
X
and hence, Nk = Nk−1 for k > 2n. It follows for all u ∈ R and all k ∈ N that uk
is of the form
n−1
uk =
µi(k)uj
i=0
X
for certain coefficients µi(k) ∈ N2n−1 and hence |{uk, k ∈ N}| 6 |N2n−1|n 6 Λ.
✷
Theorem 7. If n := s(R) < ∞ then s(R′; R) 6 lcm(Λ) + Λ, where Λ = n!(2n)nn.
Remarks:
(a) Here lcm(n) denotes the least common multiple of the numbers in the set
{1, 2, . . . , n}.
7
(b) Since R′ is contained in every subring T (with 1) of R, the given bound also
holds for s(T ; R).
Proof of Theorem 7
By Lemma 6, there exist for arbitrary u ∈ R integers l < k 6 Λ + 1 such that
uk = ul. Thus, we have
ulcm(Λ)+Λ = ulcm(Λ)+Λ− lcm(Λ)
k−l (k−l) = uΛ.
✷
We conclude this section by an example of a ring R which has the property, that
s(R) < s(R′, R).
Example: Let R = Z2[x]/{x3 + x4}.
The following lemma shows that for this particular ring s(R) 6 4.
Lemma 8. For all polynomials P ∈ Z2[x] we have that
xP + (1 + x)P 2 + P 4 ≡ 0 mod (x3 + x4).
Proof
We first consider the special case P (x) = xm. We have to show, that
xxm + (1 + x)x2m + x4m = xm+1 + x2m + x2m+1 + x4m ≡ 0 mod (x3 + x4).
This is readily checked:
m = 0 :
m = 1 :
m > 2 :
x + 1 + x + 1 ≡ 0 mod (x3 + x4)
x2 + x2 + x3 + x4 ≡ 0 mod (x3 + x4)
x3 + x3 + x3 + x3 ≡ 0 mod (x3 + x4)
Now, for arbitrary P , the claim follows by additivity in Z2[x]:
x(P1 + P2) + (1 + x)(P1 + P2)2 + (P1 + P2)4 =
xPi + (1 + x)P 2
i + P 4
i .
2
i=1
X
✷
Remark: We leave it to the reader to verify, that in fact s(R) = 4.
Now, we show that s(R′; R) > 6.
Lemma 9. Let ai ∈ Z2 be such that
a0 = · · · = a5 = 0.
5
i=0 akuk = 0 in R for all u ∈ R. Then
P
8
Proof
First, by choosing u to be the class of x in R (which we denote by ¯x), we obtain
a0 + a1 ¯x + a2 ¯x2 + (a3 + a4 + a5)¯x3 = 0
in R
and hence, we conclude that a0 = a1 = a2 = 0 and a3 + a4 + a5 = 0. Next, we
choose u to be the class of 1 + x in R. Observing that
(1 + ¯x)3 = 1 + ¯x + ¯x2 + ¯x3 in R
(1 + ¯x)4 = 1 + ¯x4 = 1 + ¯x3 in R
(1 + ¯x)5 = 1 + ¯x
in R
we have
0 = a3u3 + a4u4 + a5u5 =
= (a3 + a4 + a5) + (a3 + a5)¯x + a3 ¯x2 + (a3 + a4)¯x3 in R
which immediately implies that a3 = a4 = a5 = 0. This completes the proof.
✷
Finally we prove that s(R′; R) = 6.
Lemma 10. For all u ∈ R it holds that u3 + u4 + u5 + u6 = 0 in R.
Proof
Let u be the class of a polynomial P ∈ Z2[x] in R.
First case: P (0) = 0. In this case, we have
P (x) = xQ(x)
P 2(x) ≡ x2Q2(x) mod (x3 + x4)
P 3(x) ≡ x3Q3(x) ≡ x3Q(1) mod (x3 + x4)
P 4(x) ≡ x4Q4(x) ≡ x3Q(1) mod (x3 + x4)
and hence P 3(x) ≡ P 4(x) mod (x3 + x4). This proves the claim in this case.
Second case: P (0) = 1. In this case, we have
P (x) = 1 + xQ(x)
P 2(x) ≡ 1 + x2Q2(x) mod (x3 + x4)
P 3(x) ≡ (1 + xQ(x))(1 + x2Q2(x)) ≡
≡ 1 + xQ(x) + x2Q2(x) + x3Q(1) mod (x3 + x4)
P 4(x) ≡ 1 + x4Q4(x) ≡ 1 + x3Q(1) mod (x3 + x4)
P 5(x) ≡ (1 + xQ(x))(1 + x3Q(1)) ≡ 1 + xQ(x) ≡ P (x) mod (x3 + x4)
9
which allows to verify the claim easily.
✷
4 Two Alternative Proofs of the R´edei-Szele Theorem
We start with a short direct proof of Theorem 1. Let R be a commutative ring
with unit element. One implication is immediate:
Assume that R is a finite field and f : R → R. Then the Lagrange interpolation
polynomial
p(x) =
f (y)py(x),
y∈R
X
py(x) =
(x − z)
(y − z)
−1
,
Yz∈R\{y}
(cid:16) Yz∈R\{y}
(cid:17)
where
represents f .
For the opposite implication, we assume that every function f : R → R can be
represented by a polynomial in R[x]. In particular, for the function
f (x) :=
−1,
0,
(
if x = 0
if x 6= 0
there exists a representing polynomial
akxk = f (x)
for all x ∈ R.
n
Xk=0
Since a0 = f (0) = −1, it follows that
n
n
x
akxk−1
=
akxk = 1
for all x ∈ R \ {0}.
k=1
X
=x−1
k=1
X
{z
Hence, R is a field. Moreover, for all x ∈ R
}
|
0 = xf (x) =
akxk+1.
n
Xk=0
(5)
The right hand side of (5) is a polynomial of degree n + 1 which (in the field R)
✷
has at most n + 1 roots. Hence, |R| 6 n + 1.
10
A second alternative proof uses the characterization of the rings for which s(R) =
|R| (see Theorem 3). This condition is necessary for the property, that all functions
from R to R have a polynomial representative.
In order to rule out the case
R = Z4, we use the following formula from [4, Theorem 6, p.9]: If p is a prime
number and m ∈ N, the number of polyfunctions over Zpm is given by
Ψ(pm) := |G(Zpm)| = expp
m
.
s(pk)
(cid:17)
(cid:16)
k=1
X
Here s denotes the usual number theoretic Smarandache function (see equation (1)),
and expp(q) := pq for better readability. It follows that there are Ψ(4) = Ψ(22) =
22+4 = 64 polyfunctions over Z4, but the number of functions from Z4 to Z4 equals
44 = 256. The case R = ρ is ruled out by explicit verification that
f (x) =
0
1
(
for x 6= 0 and
for x = 0
is not a polyfunction over ρ: Since s(ρ) = 4, it is enough to show that no polyno-
mial p ∈ ρ[x] of degree 6 3 represents f . Suppose there is
3
p(x) =
akxk
Xk=0
representing f . Then p(0) = a0 = 1 and p(a) = 1 + a1a = 0, which implies that
✷
a1a = 1 which is impossible since a does not have a multiplicative inverse.
References
[1] M. Bhargava: Congruence preservation and polynomial functions from
Zn to Zm. Discrete Math. 173 (1997), no. 1–3, 15–21.
[2] Z. Chen: On polynomial functions from Zn to Zm. Discrete Math. 137
(1995), no. 1–3, 137–145.
[3] L. E. Dickson: History of the Theory of Numbers, vol. 1. Carnegie Insti-
tution of Washington Publication, 1919.
[4] N. Hungerb¨uhler, E. Specker: A generalization of the Smarandache func-
tion to several variables. Integers 6 (2006): Paper A23, 11 p.
11
[5] N. Hungerb¨uhler, E. Specker, M. Wasem: The Ring of Polyfunctions
over Z/nZ. Comm. Algebra, Published online: 17 July 2022, DOI:
https://doi.org/10.1080/00927872.2022.2092628.
[6] A. J. Kempner: Concerning the smallest integer m! divisible by a given
integer n. Amer. Math. Monthly 25 (1918), 204–210.
[7] A. M. Legendre: Essai sur la th´eorie des nombres, 2nd edition, Paris:
Courcier, 1808.
[8] E. Lucas: Question ×288. Mathesis 3 (1883), 232.
[9] G. Mullen, H. Stevens: Polynomial functions (mod m). Acta Math. Hun-
gar. 44 (1984), no. 3–4, 237–241.
[10] J. Neuberg: Solutions de questions propos´ees, ×Question 288. Mathesis
7 (1887), 68–69.
[11] L. R´edei, T. Szele: Algebraisch-zahlentheoretische Betrachtungen ¨uber
Ringe. I. Acta Math. 79, (1947), 291–320.
[12] L. R´edei, T. Szele: Algebraisch-zahlentheoretische Betrachtungen ¨uber
Ringe. II. Acta Math. 82, (1950), 209–241.
[13] D. Singmaster: On polynomial functions (mod m). J. Number Theory 6
(1974), 345–352.
12
|
ai_researcher | 2 | Do_large_language_models_“understand”_their_knowledge.pdf | Defining Knowledge: Bridging Epistemology and Large Language Models
Constanza Fierro† Ruchira Dhar†‡ Filippos Stamatiou‡
Nicolas Garneau† Anders Søgaard†‡
†Department of Computer Science, University of Copenhagen
‡ Center for Philosophy in Artificial Intelligence, University of Copenhagen
4
2
0
2
t
c
O
3
]
L
C
.
s
c
[
1
v
9
9
4
2
0
.
0
1
4
2
:
v
i
X
r
a
Abstract
Knowledge claims are abundant in the litera-
ture on large language models (LLMs); but can
we say that GPT-4 truly “knows” the Earth is
round? To address this question, we review
standard definitions of knowledge in epistemol-
ogy and we formalize interpretations applicable
to LLMs. In doing so, we identify inconsis-
tencies and gaps in how current NLP research
conceptualizes knowledge with respect to epis-
temological frameworks. Additionally, we con-
duct a survey of 100 professional philosophers
and computer scientists to compare their prefer-
ences in knowledge definitions and their views
on whether LLMs can really be said to know.
Finally, we suggest evaluation protocols for
testing knowledge in accordance to the most
relevant definitions.
1
Introduction
NLP researchers have used the term knowl-
edge somewhat haphazardly in the context of large
language models (LLMs), e.g., discussing “knowl-
edge contained in language models” (Jiang et al.,
2020), their “knowledge gaps” (Feng et al., 2024b),
or how “LLMs encode knowledge” (Farquhar et al.,
2023), and “model’s internal knowledge” (Kass-
ner et al., 2023). Petroni et al. (2019) defined an
LLM to know a fact if it correctly completes a
cloze sentence such as “The capital of Germany is
__”, which are typically generated directly from
so-called knowledge graphs. Many have evalu-
ated knowledge in this way (Jiang et al., 2020;
Paik et al., 2021; Dai et al., 2022; Kassner et al.,
2020, 2021a; Keleg and Magdy, 2023, inter alia).
However, the predictions of semantically equiva-
lent cloze sentences can be inconsistent1 (Elazar
et al., 2021; Kassner and Schütze, 2020; Fierro and
*Correspondance: Constanza Fierro <[email protected]>,
Ruchira Dhar <[email protected]>.
1An LLM may predict Berlin in the above, but Hamburg
for “The city which is the capital of Germany is called __”.
Figure 1: From our survey (§4): Philosophers and com-
puter scientists prefer different definitions of knowl-
edge.
Søgaard, 2022), leading to question the meaningful-
ness of knowledge claims. Should we then require
an LLM to predict correctly all the paraphrases
of a given fact to say it knows it? What about re-
lated facts? Can we really say that an LLM knows
that ‘Lionel Messi plays for Inter Miami’ if it does
not know that ‘Lionel Messi resides in Miami’?
What, then, are sufficient conditions for saying an
LLM knows? Or more generally, can LLMs know
anything? That is:
Can LLMs have bona fide knowledge?
Whether LLMs know, or in what sense, depends
on how knowing is defined. Determining what inter-
nal knowledge LLMs possess could have important
implications on their trustworthiness, as knowledge
modulates our trust in agents (Hardwig, 1991; Ped-
erneschi, 2024). We tend to lose trust in others
when they do not appear to know what we con-
sider basic facts. Furthermore, studying knowledge
in LLMs could potentially have implications for
epistemology itself (Cappelen and Dever, 2021).
Recent works have approached the question
of how to define knowledge, considering addi-
tional requirements for determining what an LLM
knows. Some require correct predictions across
paraphrases (De Cao et al., 2021; Zhong et al.,
tb-knowledgej-knowledgeg-knowledgev-knowledgep-knowledgeDefinitions of Knowledge0.00.10.20.30.40.50.6Percentage Agreement(4-5 on Likert scale)Agreement on Definitions of Knowledge by ProfessionPhilosophersComputer Scientists
Definition      p is known if and only if                             Philosopher
tb-knowledge    p is true, and p is believed+                         Sartwell (1992)
j-knowledge     p is true, p is believed, and p is justified          Nozick (2000)
g-knowledge     p is known sui generis                                Williamson (2005)
v-knowledge     p is inferred with intellectual virtue                Zagzebski (1999)
p-knowledge     p is believed and facilitates correct predictions     Austin (2000)
Table 1: Five standard definitions of knowledge in philosophy, i.e., knowledge-that p (where p is a proposition).
The naming is arbitrary and motivated by keywords. See Appendix A for formalizations in epistemic modal logic.
2023b), and others additionally require correct pre-
dictions on logically derived facts (Kassner et al.,
2021b; Cohen et al., 2024). However, so far, NLP
research has approached knowledge claims in a
somewhat arbitrary manner, driven by what seems
to make sense intuitively when discussing knowl-
edge. Since philosophy has long tried to define
what it means to know, we turn to epistemology
to better ground our definitions of knowledge for
LLMs.
Contributions We survey the most commonly
used definitions of knowledge in epistemology, and
discuss and formalize how to map these definitions
to LLMs. We compare current research of knowl-
edge in LLMs to our formal definitions, identi-
fying shortcomings in evaluation practices. We
present the results of our survey to philosophers
and computer scientists about their views on LLMs
and knowledge, finding disagreements about when
LLMs can be said to know. These disagreements
seem to arise from adherence to slightly different
definitions of knowledge (Figure 1). Finally, we
provide protocols that follow the epistemological
definitions for evaluating and testing knowledge in
LLMs. We hope that the connection we provide
to epistemology can inform better evaluations and
claims regarding knowledge in LLMs.
2 Definitions of Knowledge
While the NLP research community’s use of the
word knowledge has been somewhat unclear, in
philosophy there is a long tradition of trying to
pin down exactly what is involved in knowledge
claims. Knowledge – or propositional knowledge,2
to be precise – is what is at stake when we say
that ‘x knows that p’ where x is an entity whose
knowledge is under question, and p is a declara-
2Knowledge is not always propositional; there is also what
is referred to as knowledge-how, which is related to perfor-
mance, i.e., knowing how to perform an action (Ryle, 1949).
tive statement.3 But what are the necessary and
sufficient conditions for knows here? We review
5 definitions of knowledge (see Table 1),4 and we
interpret and formalize a corresponding definition
for LLMs. In §3, we discuss if these definitions are
used in the LLM literature, and whether evaluating
knowledge claims under them is feasible or not.
2.1 True beliefs (tb-knowledge)
Sartwell (1992) defines knowledge as a belief that
is true, that is ‘x believes that p’ and ‘p is true’.
Mary can on this account believe the capital of Ger-
many is Hamburg, but since Hamburg is not the
capital of Germany (Berlin is), Mary cannot be said
to know that the capital of Germany is Hamburg.
Sartwell argues that there is no need for more re-
quirements for what is knowledge, as long as one
has a solid definition of belief. A lucky guess does
not qualify as knowledge because, in Sartwell’s
view, a guess is not a belief. Sartwell (1992) re-
quires, in his definition of beliefs, that beliefs are
coherent. As Sartwell puts it, “no belief stands in
isolation; I cannot have the belief that Goldbach’s
conjecture is true and fail to have any related be-
liefs. The belief is constituted as a belief within a
system of beliefs.” Thus we define,
Definition 2.1 (belief). An LLM M believes p ⇐⇒
p is assigned high confidence.5
Definition 2.2 (belief+). Let p, q be propositions.
A proposition p is believed+ ⇐⇒
3If, for example, x= “John” and p=“Berlin is the capital
of Germany”, we can say that x knows p, if John knows the
fact that Berlin is the capital of Germany.
4We have selected five popular epistemological definitions
of knowledge, which are among the most common and formal.
However, we acknowledge that other perspectives on episte-
mological knowledge exist. Nonetheless, we believe these five
definitions can serve as a solid foundation.
5This does not simply refer to the output probability as-
signed to the proposition p, as most models could assign fairly
high probability to any grammatical sentence, but rather to M
assigning high confidence to p relative to other values that p
could take.
1. p is believed.
2. ∀q st. p =⇒ q, then q is believed.
3. ∄q st. q is believed ∧ q =⇒ ¬p.
That is, p is believed (Def. 2.1), any other proposi-
tion that follows logically from p is also believed,
and p is consistent with any other proposition that
is believed (by the same system).6 Thus,
Definition 2.3 (tb-knowledge). An LLM M tb-
knows p ⇐⇒ p is true ∧ M believes+ p.7
2.2
Justification (j-knowledge)
Nozick (2000) takes another approach and defines
knowledge as justified true beliefs,8 with a less
strict definition of belief of the sort ‘x thinks that p’
and x has some justification for thinking it.9 Noz-
ick (2000) posits that a lucky guess is not knowl-
edge because a guess is not justified. Thus, for
LLMs:
Definition 2.4 (j-knowledge). An LLM M j-knows
p ⇐⇒ p is true ∧ M believes p ∧ M (or M ’s
inference that p) is partially interpretable (justi-
fied).10
2.3 Sui generis (g-knowledge)
Williamson (2005) argues for a relativist and primi-
tive view of knowledge, where the truthfulness of p
is relative to the agent. Knowledge, on this view, is
sui generis which is a legal term literally meaning
‘of its own kind’ or ‘unique’. Williamson (2005)
argues that we can’t analyze knowledge in terms
of other requirements or atomic concepts (belief
and justification) because knowledge is the atomic
concept, which in effect explains what a belief or a
justification is and not the other way around.11
6If I believe in Goldbach’s conjecture (any even number
greater than two is the sum of two primes), I have to believe
the definition of prime numbers, and I can’t believe 1+1=3.
7Our definitions are semi-formal. In epistemic logic, this
would be expressed as (cid:50)sp ⇔ p ∧ (cid:51)+p. See Appendix A, for
epistemic logic formalizations of our knowledge definitions.
8The idea that knowledge may require some kind of justifi-
cation goes back at least to Plato (Plato, 2019, 187b–201c). In
the Theaetetus, the definition of knowledge as true judgement
is ultimately rejected, before arguing that some sort of account
is necessary for knowledge (Plato, 2019, 201d-210a).
9E.g: Mary thinks there are five oranges on the table, be-
cause she counted them up. There really are five oranges; so
Mary knows there are five oranges on the table.
10We take this to mean that M can, possibly from ad-hoc
methods, provide a rationale for p (Joshi et al., 2023).
11In his view, a belief is an attempt at knowing, if I believe
the tree in front is a Sequoia then I will act as if I know it.
Thus, belief is explain through knowledge and not the reverse.
Definition 2.5 (g-knowledge). An LLM M g-know
p ⇐⇒ M includes p in its knowledge bank.
We discuss below (§3) what, precisely, it means
for propositions to be included in an LLM’s knowl-
edge bank. The core intuition is that there is some-
thing akin to a knowledge box (Fodor, 1985) from
which known propositions can be extracted. One
extreme version would be if the LLM is its own
knowledge box, meaning an LLM g-knows what-
ever it outputs, but g-knowledge could also be seen
as a modular component in LLM architectures.
2.4 Virtue (v-knowledge)
The virtue definition of knowledge became popular
in the 1980s (Sosa, 1980; Greco, 1993). Zagzebski
(1999) used it to address the challenge from Gettier
cases12 of the justified true belief definition, and
states that knowledge is belief arising out of acts
of intellectual virtue. As Zagzebski (1999) puts
it, “virtues are properties of persons. Intellectual
virtues are properties of persons that aim at intel-
lectual goods, most specially the truth.” An act of
virtue is an act in which there is imitation of the
behavior of virtuous persons and success in reach-
ing the end for that reason. Therefore if the end is
reached by accident and not as a consequence of
the virtuous action then it is not considered an act
of virtue.13 So we need to define that an LLM is
behaving in a virtuous way, that is, it is aiming at
the truth and arriving to a prediction as a result of
this aim, thus,
Definition 2.6 (v-knowledge). An LLM M v-knows
p ⇐⇒ p is true ∧ M believes p ∧ M ’s cause for
believing p is motivated only by truthfulness.
12Gettier (1963) challenged Nozick’s definition of knowl-
edge as (j-knowledge) by citing a case where justified true
belief would not imply knowledge: John sees a sheep in the
field and forms the belief that there is a sheep in the field. The
sheep that he saw is in fact a dog, but there is a sheep in the
field, occluded from John’s vision. In this case, John had a
true belief, as well as a justification (‘I saw it with my own
eyes’) but his justification was false, and John really arrived at
the right conclusion out of sheer luck (Chisholm et al., 1989).
13E.g: A judge determines by an impeccable procedure
and motivated by justice that the man is guilty. The judge
does everything he ought to do and exhibits all the virtues
appropriate in this situation. Nonetheless, for some accidental
reason the accused is the wrong man (e.g. the evidence was
fabricated). Suppose that the actual killer is secretly switched
with the accused man, so the judge ends up sentencing the
right man (Zagzebski, 1999). Here, a feature of luck has
cancelled out the bad and the end has been reached, but not
because of the virtuous act of the judge.
2.5 Predictive accuracy (p-knowledge)
For Austin (2000), to know means to be able to
make correct and relevant assertions about the sub-
ject in question. If M p-knows p, M believes p,
and believing p facilitates correct and relevant pre-
dictions. Austin’s definition is pragmatic. For him
“believing in other persons, in authority and testi-
mony, is an essential part of the act of communi-
cating”, and knowledge is the belief that works out
over time. Austin (2000) states that knowledge is
relevant true belief under deductive closure; that
is, if the subject knows p, and believing p implies
believing q (with q relevant), then q must be true
(and therefore the subject knows q as well). Thus,
p facilitates relevant and correct predictions (q).
This is similar to tb-knowledge, in which belief+
is epistemically closed, however, in tb-knowledge
the closure scopes over all propositions q, not just
the relevant ones. Moreover, since the definition is
pragmatic, the deductive closure is only probabilis-
tic.
Definition 2.7 (p-knowledge). Let p, q be relevant
propositions st. believing p =⇒ believing q.
Then, an LLM M p-knows p ⇐⇒ M probably
tb-knows p ∧ M probably tb-knows q.
Relevance is ambiguous and could be defined as
p and q being relevant for each other, i.e., q being
relevant for knowing p; or p and q being relevant
for performing a target task (see §5).
3 Knowledge in NLP Research
Now, we discuss perspectives from NLP research
on what constitutes knowledge, and how these align
with the definitions we extracted from the philo-
sophical literature.
tb-knowledge Most knowledge probing work
seems to rely (loosely) on tb-knowledge or p-
knowledge. Namely, works related to measuring
knowledge encoded in LLMs (Petroni et al., 2019;
Jiang et al., 2020; Wallat et al., 2020; Roberts et al.,
2020; Paik et al., 2021; Dai et al., 2022; Kassner
et al., 2020, 2021a; Dhingra et al., 2022; Chalkidis
et al., 2023; Keleg and Magdy, 2023; Qi et al.,
2023; Fierro et al., 2024b, inter alia), understand-
ing the mechanisms of recalling (Dai et al., 2022;
Geva et al., 2023; Sharma et al., 2024), knowledge
edits (Meng et al., 2022; Hase et al., 2023a; Meng
et al., 2023; Wang et al., 2024), and analyses of
LLM’s knowledge vs contextual factual informa-
tion (Neeman et al., 2023; Yu et al., 2023). These
works follow the LAMA protocol (Petroni et al.,
2019), where propositions {p} are derived from
knowledge graphs,14 and an LLM is said to know p
if it predicts p correctly in a fill-in-the-blank state-
ment. Since p is true (from a knowledge graph)
and believed (predicted) by the LLM, the LLM is
said to know p.15 However, such work fails to ad-
dress the fact that tb-knowledge relies on p being
believed+, or that p-knowledge requires epistemic
closure over relevant propositions.16 We discuss
how best to evaluate whether an LLM believes+ p
in §5.
Some works propose to enhance the LLM with
an extra component to ensure more consistent be-
liefs; a so-called belief bank (Kassner et al., 2021b)
or reflex layer (Kassner et al., 2023). This ex-
tra component is optimized for consistency via
weighted MaxSAT (Park, 2002), and it is used to
prompt the model to be consistent to its previous
stated beliefs (Kassner et al., 2021b), or it is di-
rectly used to determine the system’s prediction
(Kassner et al., 2023). Both works aim to rely on
tb-knowledge, where the extra component approx-
imates belief+.17 However, it is only an approxi-
mation as the extra component is not necessarily
fully consistent and the entailed facts are sampled.
This approximation would not be a problem if we
consider their approach to be under p-knowledge,
although in that case the entailed facts should be se-
lected according to some measure of relevance. Fur-
thermore, Kassner et al. (2023) are slightly incon-
sistent in how they use the term knowledge, e.g., us-
ing interchangeably “model beliefs” and “models’
internal knowledge”, if these were to be the same
then they would be talking about g-knowledge.
j-knowledge Hase et al. (2023b) adheres to j-
knowledge, but they study LLMs’ beliefs and not
its knowledge as they argue “in a traditional view of
knowledge as Justified True Belief, it is relatively
more difficult to say that an LM knows something
rather than believes it”. Nonetheless, they align
14E.g.:https://www.wikidata.org/
15Note that under this framework we only need to find one
surface form of p for which the LLM predicts it correctly to
say that it knows p.
16Knowledge edits works usually have a mismatch in
their definition of knowledge, as they employ true belief (tb-
knowledge without belief+) to determine the set of facts that
the model knows. But then evaluate the success of an update
by measuring correct predictions of paraphrases, and thus
accounting to some extent for belief+.
17They track consistency and accuracy to compare systems.
Consistency measures the approximation of tb-knowledge,
while accuracy only accounts for belief (Definition 2.1).
their experiments with the belief+ definition by
measuring beliefs consistency under paraphrasing
and entailment.
A justification for j-knowledge could be pro-
vided in different ways, namely, post-hoc attri-
bution to training data using attribution methods
(Hampel, 1974; Koh and Liang, 2017; Pruthi et al.,
2020; Akyurek et al., 2022), logical derivation with
a chain-of-thought mechanism (Wei et al., 2022),
generation of factual statements with citations to
sources (Gao et al., 2023; Menick et al., 2022;
Fierro et al., 2024a), or potentially as Jiang et al.
(2021) proposed, the probability of a calibrated
language model could be use as justification to dif-
ferentiate between mere beliefs and knowledge. In
any case, the jury is still out on which justifica-
tion procedures are valid and/or superior, but note
that all these methods seem to require partial inter-
pretability.
g-knowledge One extreme interpretation of the
knowledge bank in g-knowledge’s definition is rel-
ativist and deflationary: An LLM knows p if it
asserts p, simply by generating it. This conflates
assertion and true knowledge, and as such, beliefs
and knowledge. A more interesting interpretation
would be to assume that LLMs have distinct mem-
orization strategies for knowledge and learn to in-
duce modular knowledge components. While some
LLM researchers have explored memorization com-
ponents (Dai et al., 2022; Meng et al., 2022), no one
has, to the best of our knowledge, identified knowl-
edge components. Some researchers insert devoted
knowledge layers (Dai and Huang, 2019; Kassner
et al., 2021b, 2023; Feng et al., 2024a; Liu et al.,
2024), which could be interpreted as the knowledge
box, but it remains to be seen if such layers permit
unambiguous extraction of knowledge claims.
v-knowledge
If knowledge can only be inferred
with intellectual virtue, then the difficulty lies iden-
tifying intellectual virtues for LLMs. How to test
for predictions that are acts of intellectual virtue
is an open question. However, we could consider
using training data attribution methods as proof of
such acts. Another promising avenue is mechanis-
tic interpretability, if we could distinguish factual
recall (Geva et al., 2023) from guessing (Stoehr
et al., 2024) mechanisms. This distinction would
relate in interesting ways to the epistemological
view of proper functioning (Plantinga, 1993). Yad-
kori et al. (2024) suggest making such a distinction
is feasible for some models.
In recent works, Biran et al. (2024) address the
intellectual virtue condition to some extent by only
analyzing the model’s virtue knowledge. They do
this by filtering out facts p that the model can cor-
rectly predict without using critical components in
the input, thereby merely guessing the fact (acting
unvirtuous). This is a step in the right direction,
but a more in depth detection of the inner work-
ings of the model is necessary to filter out all the
non-virtuous predictions.
Note that if we interpret the detection of a virtue
act can be viewed as a model justification and then
it is somewhat unclear what would distinguish j-
knowledge from v-knowledge. This is unsurprising,
however, since v-knowledge can be seen as an at-
tempt to flesh out what justification turns on (Greco,
1993). As we insist on concrete methodological
interpretations, the two definitions of knowledge
may coincide.
p-knowledge
In the context of editing factual
knowledge in LLMs, Zhong et al. (2023a); Cohen
et al. (2024) propose to not only evaluate the modi-
fied fact itself, but also to evaluate related facts. For
example, if we edit an LLM to predict that Lionel
Messi now plays in a different football team, then
a successful edit should also modify the league in
which he plays and the country where he resides.
Such evaluation follows the p-knowledge defini-
tion, particularly since they focus on evaluating
only logically related facts (i.e., only the relevant
ones) that are two hops away from the subject or
object in question. This type of evaluation could
be directly applied to measure the knowledge of
the LLM, not just to assess the update accuracy of
edits.
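As a concrete illustration of this style of evaluation, the sketch below loops over a hand-written set of related facts after a hypothetical edit. It is not the evaluation code of Zhong et al. (2023a) or Cohen et al. (2024): the query_model stub, the example edit and the fact list are all invented placeholders that would be replaced by the edited model and a real two-hop fact set in practice.

# Minimal sketch (not the cited authors' code) of evaluating logically related facts
# after a knowledge edit. query_model is a stand-in for a call to the edited LLM;
# the edit and the related facts below are invented for illustration only.

def query_model(prompt: str) -> str:
    # Stand-in: wire this to the edited model in a real evaluation.
    canned = {
        "Which team does Lionel Messi play for?": "Inter Miami",
        "In which league does Lionel Messi play?": "Major League Soccer",
        "In which country does Lionel Messi reside?": "the United States",
    }
    return canned.get(prompt, "")

# (question, expected answer after the edit); the last two are one and two hops away.
related_facts = [
    ("Which team does Lionel Messi play for?", "Inter Miami"),
    ("In which league does Lionel Messi play?", "Major League Soccer"),
    ("In which country does Lionel Messi reside?", "United States"),
]

def ripple_accuracy(facts) -> float:
    """Fraction of related facts the (edited) model answers as expected."""
    hits = sum(expected.lower() in query_model(q).lower() for q, expected in facts)
    return hits / len(facts)

print(f"ripple accuracy: {ripple_accuracy(related_facts):.2f}")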
The logically related facts to evaluate could also
be defined in terms of task relevance. For example,
in the context of legal knowledge, Chalkidis et al.
(2023) studied the relevance of the knowledge pos-
sessed by an LLM for downstream performance in
legal classification tasks.
4 Survey Results
To determine how researchers think about knowl-
edge, we turn to our survey of computer
scientists and philosophers. We had 105 respon-
dents, out of which 50.4% considered themselves
philosophers, 36.2% considered themselves com-
puter scientists, 2.3% both, and 10.5% none of the
two.18 Most respondents from computer science
reported a better understanding of LLMs compared
to philosophers (see Figure 2) while the majority of
philosophers reported better understanding of epis-
temology compared to 40% of computer scientists
(see Figure 3). See Appendix B for more details.
18Some considered themselves mathematicians, cognitive
scientists, cultural theorists, etc.
Figure 2: LLMs understanding of respondents.
Figure 3: Epistemology understanding of respondents.
4.1 Questions on Knowledge Definitions
We asked our respondents to indicate from 1-5 if
they disagree completely (1) or agree completely
(5) with statements that verbalized our knowledge
definitions. See Figures 1 and 4 for a summary
of the results. In brief, philosophers disagreed
with tb-knowledge, with 49% selecting 1-2, while
the computer scientists agreed more, with 52% se-
lecting 4-5. Philosophers were divided about j-
knowledge, with a slight tendency to agree (33.9%
chose 1-2 and 47% chose 4-5). Here, they were
in some agreement with computer scientists, 57%
of whom selected 4-5. Philosophers disagreed
strongly with the g-knowledge definition (84% an-
swers 1-2), whereas computer scientists tended to
disagree (57% answers 1-2). Everyone seemed to
like v-knowledge better, with philosophers select-
ing 4-5 62% of the time, and computer scientists
selecting 4-5 57% of the time. Philosophers dis-
agreed with p-knowledge, since 60% selected 1-2;
whereas computer scientists seemed more divided,
with 36% choosing 1-2 and 31% choosing 4-5.
Overall, the survey shows that j-knowledge and
v-knowledge are the most accepted across the two
groups. tb-knowledge has more mixed results.19
The disagreement with p-knowledge is somewhat
surprising, since this aligns well with practical
evaluation methodologies in the LLM literature.20
On the other hand, there is an agreement among
philosophers and computer scientists to reject the
g-knowledge definition.
19This could either reflect the philosophers’ knowledge of
the challenges to such definitions of knowledge, or it could
reflect the fact that we did not discuss the implications of
epistemic closure in the survey (for brevity). In the absence of
epistemic closure, maybe some philosophers felt inclined to
disagree with this definition.
20One possible explanation was our use of the word “use-
ful” in the survey. This word was intended to convey p-
knowledge’s pragmatic flavor, but may have misled some
respondents to think that all knowledge has to be directly
useful for some user-defined goal.
Figure 4: Disagreements on epistemological definitions
of knowledge.
4.2 General Questions
Can non-human entities know? Both computer
scientists and philosophers generally agree that
non-human entities can possess knowledge (see
Figure 5a). Disagreement within each group is rela-
tively low, with 7% among computer scientists and
22% among philosophers.21
21This question is intentionally ambiguous, e.g., animals
could be considered as non-human entities. We aim to find out
whether people think differently about LLMs compared to
general non-human entities.
Should knowledge be defined differently for hu-
mans and non-humans? Computer scientists
generally believe that knowledge should be defined
differently for humans and non-humans, while
philosophers are more divided. Among philoso-
phers, 33% think it should be different, and 30%
think it should be the same. Among computer sci-
entists, 44% think it should be different, and 34%
think it should be the same (see Figure 5b).
(a) Survey answers to “Can non-human entities know?”.
(b) Survey responses on defining global or specific knowledge.
(c) Survey results to the question of LLMs having knowledge.
(d) Survey results on LLMs being able to have knowledge.
Figure 5: Four of the survey questions and their respective answers.
Do LLMs know (empirically, in practice, now)?
There is a significant difference in opinion between
philosophers and computer scientists. Philosophers
largely disagree, with 54% saying no and only 11%
saying yes. In contrast, computer scientists are
more divided, with 31% saying no, 34% saying
yes, and the remaining respondents undecided or
unclear (see Figure 5c). Computer scientists, in
other words, evaluate LLM knowledge claims more
positively.
Can LLMs know (in theory)? When consider-
ing the question theoretically (as opposed to in
practice), approval increases in both groups (see
Figure 5d). Among philosophers, 24% now say yes
and 33% say no, showing a more divided opinion.
Among computer scientists, 55% say yes and 21%
say no, indicating that most believe LLMs can pos-
sess knowledge.
The survey results thus indicate that scholars from
both epistemology and computer science think that
the notion of knowledge for LLMs is not a trivial
one. Despite differences in opinion, two key points
emerge: most scholars believe non-humans can
possess knowledge, and LLMs have the potential
to "know" in some sense.
5 Best Practices
Given our discussion of mapping knowledge defi-
nitions to LLMs and the results of our survey, we
provide possible protocols for evaluating knowl-
edge of LLMs in relation to each discussed defini-
tion.22 We also provide a simple example to con-
trast some of the definitions in a more practical
manner. We use Llama-3-8B-Instruct23 with
greedy decoding for generating completions.24
Protocol for tb-knowledge A protocol for evalu-
ating knowledge of p as per Definition 2.3 would
involve evaluating the three conditions for belief+
(Definition 2.2), which can be done by evaluating
model confidence in the true statement itself, as
well as in all that follows logically from the true
statement. The model should, of course, have low
confidence in statements that could imply ¬p.
22We provide practical examples on how the definitions
could be implemented with the current research. However
these protocols may change completely in the future as we
better understand the inner workings of LLMs and develop
new methodologies and algorithms.
23https://github.com/meta-llama/llama3
24We use the system prompt: “You are a helpful chatbot
that aims to be truthful.”
Most current work (§3) evaluates model confi-
dence in p, but to assert tb-knowledge in LLMs, we
must also evaluate model confidence in all that is
implied by p. In our small example (Table 2), we
evaluate whether Llama-3 knows
p = ‘Platypuses are mammals’
We first test model confidence in the answer to
‘Are platypuses mammals?’ being yes. We then
evaluate the epistemic closure by evaluating model
confidence in facts that follow logically from the
platypuses being a mammal, e.g., ‘Do platypuses
have hair or fur?’ For this question, the model
has more confidence in the answer yes, they have
fur. We now prompt the model ‘Do mammals lay
eggs?’, and the model answers no. Its answer to
is yes. Therefore, the
‘Do platypus lay eggs?’
model believes
q = ‘Platypuses lay eggs and mammals do not’
which implies ¬p, thus violating condition 3 from
the belief+ definition; leading us to conclude that
Llama-3 does not tb-know p.25
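One way this protocol could be scripted is sketched below. This is our illustration, not the authors' evaluation code: the Hugging Face model id is assumed to correspond to the Llama-3-8B-Instruct model of footnote 23, the system prompt follows footnote 24, and approximating "confidence" by the probability of a Yes versus No first answer token is an assumption of the sketch (any calibrated confidence measure could be substituted).

# Sketch of the tb-knowledge check (our illustration): the model must be confident in p,
# in what follows from p, and must not believe propositions that jointly imply not-p.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Meta-Llama-3-8B-Instruct"   # Hugging Face id assumed for the model in footnote 23
tok = AutoTokenizer.from_pretrained(model_id)
lm = AutoModelForCausalLM.from_pretrained(model_id)  # add torch_dtype/device_map as appropriate

def yes_probability(question: str) -> float:
    """Probability mass on 'Yes' relative to 'No' as the first token of the answer."""
    chat = [{"role": "system", "content": "You are a helpful chatbot that aims to be truthful."},
            {"role": "user", "content": question + " Answer Yes or No."}]
    ids = tok.apply_chat_template(chat, add_generation_prompt=True, return_tensors="pt")
    with torch.no_grad():
        logits = lm(ids).logits[0, -1]
    yes_id = tok.encode("Yes", add_special_tokens=False)[0]
    no_id = tok.encode("No", add_special_tokens=False)[0]
    return torch.softmax(logits[[yes_id, no_id]], dim=-1)[0].item()

p = "Are platypuses mammals?"
believes_p = yes_probability(p) > 0.5                                  # confidence in p itself
closure_ok = yes_probability("Do platypuses have hair or fur?") > 0.5  # one sampled consequence of p
# q = "platypuses lay eggs and mammals do not" implies not-p (condition 3 of belief+).
believes_q = (yes_probability("Do platypuses lay eggs?") > 0.5 and
              yes_probability("Do mammals lay eggs?") < 0.5)

print("tb-knows p:", believes_p and closure_ok and not believes_q)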
Protocol for j-knowledge
If we subscribe to j-
knowledge – which many computer scientists do
(§4) – then we need to have a two part protocol:
(1) Same as in tb-knowledge the model’s confi-
dence in the true statement should be high; and
(2) we must also attribute this belief to a training
data which unambiguously states p, or reasoning
that justifies how p can be derived from already
established propositions.26
In our running example, we obtain a justification
by prompting Llama-3 with ‘Are platypus mam-
mals? Please explain step-by-step’, for which the
model generates the definition of a mammal, platy-
pus characteristics corresponding to mammals’ fea-
tures, and explains that platypus are mammals even
though they do not comply with all the mammals’
features (exact answer in Appendix C). By estab-
lishing that the intermediate reasoning steps are
correct (the characteristics of mammals and platy-
pus) we can conclude that Llama-3 j-knows p.27
25In this example conditions (2) and (3) have been tested
with only one proposition that follows logically, but in real-
ity one should obviously sample from a large enough set of
propositions. We have also used greedy decoding but different
approaches to high confidence can be used.
26See §3 for references to current methodologies of reason-
ing and training data attribution.
27We have used chain-of-thought prompting in this example,
however it should be noted that the reasoning steps need to
be verified for this to be a valid justification (Golovneva et al.,
2023; Jacovi et al., 2024).
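Continuing the running example, the sketch below shows how the two-part j-knowledge check could be scripted. It is our illustration and reuses lm, tok and yes_probability from the tb-knowledge sketch above; greedy decoding matches the setup in §5. The step verifier is deliberately left as a placeholder: in practice it would be a reasoning-chain verifier or a training-data attribution method, as discussed in §3.

# Sketch of the two-part j-knowledge protocol (our illustration); reuses lm, tok and
# yes_probability from the tb-knowledge sketch above.

def generate(prompt: str, max_new_tokens: int = 300) -> str:
    chat = [{"role": "system", "content": "You are a helpful chatbot that aims to be truthful."},
            {"role": "user", "content": prompt}]
    ids = tok.apply_chat_template(chat, add_generation_prompt=True, return_tensors="pt")
    out = lm.generate(ids, max_new_tokens=max_new_tokens, do_sample=False)  # greedy decoding
    return tok.decode(out[0, ids.shape[1]:], skip_special_tokens=True)

# (1) High confidence in the true statement itself.
confident_in_p = yes_probability("Are platypuses mammals?") > 0.5

# (2) A justification whose intermediate steps can be verified.
explanation = generate("Are platypus mammals? Please explain step-by-step.")
steps = [line.strip("*- ").strip() for line in explanation.splitlines() if line.strip()]

def step_is_verified(step: str) -> bool:
    # Placeholder: a real protocol would verify each step with a reasoning verifier
    # (e.g., ROSCOE-style metrics) or attribute it to reliable training data.
    return True

j_knows = confident_in_p and all(step_is_verified(s) for s in steps)
print("j-knows p:", j_knows)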
Protocol for g-knowledge
If by g-knowing p
we simply mean the ability to state p, then g-
knowledge will not do much work for us. On such
an account, knowledge becomes indistinguishable
from beliefs. In line with our discussion in §3, we
generally recommend to adopt other definitions.
Protocol for v-knowledge The v-knowledge def-
inition seems to be quite popular among both
philosophers and computer scientists. In §3, we
cited possible interpretations of intellectual virtue
in LLMs. Training data reliability assessments
could involve attributing the inference of p to train-
ing data that contains p, and showing that the model
knows this data is reliable, e.g., by using a linear
probe to see whether the model successfully distin-
guishes reliable from unreliable training data. On
the other hand, if the model infers p from in-context
data that we know is reliable, we need to show that
the model is indeed generating the proposition us-
ing the provided in-context knowledge, e.g., via
mechanistic interpretability (Yu et al., 2023; Wu
et al., 2024).
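One way the reliability-probing step could look in code is sketched below. It is our illustration under the assumption that the model's own hidden states carry a usable reliability signal; it reuses lm and tok from the tb-knowledge sketch, and the four labelled snippets are invented toy data.

# Sketch of a reliability probe for the v-knowledge protocol (our illustration): a linear
# probe over the model's representations of source snippets labelled reliable (1) or
# unreliable (0). Reuses lm and tok from the tb-knowledge sketch above.
import numpy as np
import torch
from sklearn.linear_model import LogisticRegression

def embed(text: str) -> np.ndarray:
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        hidden = lm(ids, output_hidden_states=True).hidden_states[-1]
    return hidden[0].mean(dim=0).float().numpy()          # mean-pooled last-layer state

snippets = [
    ("Peer-reviewed zoology textbook: platypuses are egg-laying mammals.", 1),
    ("Forum post: platypuses are actually reptiles, trust me.", 0),
    ("Encyclopedia entry on monotremes and their milk production.", 1),
    ("Chain email claiming no mammal has ever laid an egg.", 0),
]
X = np.stack([embed(text) for text, _ in snippets])
y = np.array([label for _, label in snippets])
probe = LogisticRegression(max_iter=1000).fit(X, y)

# If the probe separates reliable from unreliable sources, and the inference of p can be
# attributed to a source the probe marks reliable, that is (weak) evidence of a virtue act.
print("probe accuracy on its own training data:", probe.score(X, y))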
Protocol for p-knowledge
If knowledge is some-
thing that facilitates correct predictions, we need to
be able to sample from the set of relevant situations.
This is of course a familiar challenge to LLM re-
searchers interested in evaluating performance in
the wild. We propose to evaluate p-knowledge as
we would evaluate tb-knowledge, albeit in a prob-
abilistic setting, and only over the relevant set of
implied propositions.28 While computer scientists
prefer tb-knowledge over p-knowledge (by some
margin; see §4), the definition of p-knowledge
seems more in line with current practices in the
LLM community. Following the example in
Table 2, here we would conclude that Llama-3 p-
knows ‘Platypuses are mammals’, as opposed to
tb-knowing it: even though believing that mammals
do not lay eggs contradicts p, the implied proposi-
tions q hold true most of the time.
28This seems to make the p-knowledge definition strictly
weaker than tb-knowledge, with the implication that any
model that tb-knows p will also p-know p. This conclusion
depends on whether our notion of model usefulness is limited
to knowledge. If we can dissociate knowledge performance
from task performance and talk about model usefulness only
in terms of knowledge, it holds that p-knowledge is strictly
weaker than tb-knowledge. If not, we must add the additional
requirement that models perform well on the domain they are
supposed to be knowledgeable about.
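A corresponding sketch of such a probabilistic check is given below, again reusing yes_probability from the tb-knowledge sketch. The short list of "relevant" implications is a placeholder sample, and the 0.95 threshold follows the probabilistic closure condition formalized in Appendix A; a real evaluation would sample a much larger, task-relevant set.

# Sketch of a p-knowledge check (our illustration): believe p and, with high probability,
# believe the relevant propositions implied by p. Reuses yes_probability from the
# tb-knowledge sketch above; the implication list is a tiny placeholder sample.
p_question = "Are platypuses mammals?"
relevant_implications = [
    "Do platypuses have hair or fur?",
    "Do platypuses produce milk for their young?",
    "Are platypuses warm-blooded?",
]

believes_p = yes_probability(p_question) > 0.5
closure_rate = sum(yes_probability(q) > 0.5 for q in relevant_implications) / len(relevant_implications)

# Threshold follows the probabilistic closure condition in Appendix A (0.95); with only
# three sampled implications this effectively requires all of them to be believed.
p_knows = believes_p and closure_rate > 0.95
print(f"closure rate: {closure_rate:.2f}; p-knows p: {p_knows}")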
6 Conclusion
In this paper, we reviewed epistemological defini-
tions and formalized interpretations in the context
of large language models (LLMs). Then, we ex-
amined how existing works in NLP research align
with these definitions, highlighting gaps in their
interpretations of knowledge. Furthermore, we pre-
sented the results of our survey of philosophers and
computer scientists, showcasing the different views
in terms of definitions of knowledge and whether
LLMs can be said to know. Finally, we outlined
evaluation protocols for each knowledge defini-
tion using existing algorithms and methodologies.
We hope that the connection to epistemological def-
initions of knowledge can inform the evaluations
of knowledge in LLMs and can provide a more
solid foundation for the necessary tests to deter-
mine when an LLM truly knows a fact.
Limitations
We presented five standard definitions of knowl-
edge in philosophy. However, there are more nu-
ances and potentially additional definitions that
could apply; nonetheless, we believe these are
the most standard and serve as a starting point
to ground the evaluations of knowledge in LLMs
more formally. Regarding Section 3, there are cer-
tainly more works evaluating knowledge in LLMs
that could be included. Nonetheless, we included
as many as possible and believe these lay out the
current landscape of knowledge evaluation. Finally,
as stated in the main body, the protocols are practi-
cal methodologies that may become irrelevant as
more research on LLMs is conducted. However, we
included them here to clarify how the definitions
can be implemented in practice.
Acknowledgements
We thank our colleagues at the Center for Philos-
ophy in AI and the CoAStaL NLP group for in-
sightful discussions throughout this project. In
particular, we would like to thank Daniel Hersh-
covich, Ilias Chalkidis and Jiaang Li for valuable
comments on the final manuscript. This work has
been supported by Carlsberg Semper Ardens Ad-
vance Grant CF22-1432.
References
Ekin Akyurek, Tolga Bolukbasi, Frederick Liu, Bin-
bin Xiong, Ian Tenney, Jacob Andreas, and Kelvin
Guu. 2022. Towards tracing knowledge in language
models back to the training data. In Findings of the
Association for Computational Linguistics: EMNLP
2022, pages 2429–2446, Abu Dhabi, United Arab
Emirates. Association for Computational Linguistics.
Sergei Artemov. 2008. The logic of justification. Re-
view of Symbolic Logic, 1(4):477–513.
J. L. Austin. 2000. Other minds. In Sven Bernecker
and Fred I. Dretske, editors, Knowledge: Readings
in Contemporary Epistemology. Oxford University
Press.
Eden Biran, Daniela Gottesman, Sohee Yang, Mor Geva,
and Amir Globerson. 2024. Hopping too late: Ex-
ploring the limitations of large language models on
multi-hop queries. arXiv preprint arXiv:2406.12775.
Herman Cappelen and Josh Dever. 2021. Making Ai
Intelligible: Philosophical Foundations. Oxford Uni-
versity Press, New York, USA.
Ilias Chalkidis, Nicolas Garneau, Catalina Goanta,
Daniel Katz, and Anders Søgaard. 2023. LeXFiles
and LegalLAMA: Facilitating English multinational
legal language model development. In Proceedings
of the 61st Annual Meeting of the Association for
Computational Linguistics (Volume 1: Long Papers),
pages 15513–15535, Toronto, Canada. Association
for Computational Linguistics.
Roderick M. Chisholm. 1989. Theory of knowledge, vol-
ume 3. Prentice-Hall, Englewood Cliffs, NJ.
Roi Cohen, Eden Biran, Ori Yoran, Amir Globerson,
and Mor Geva. 2024. Evaluating the ripple effects
of knowledge editing in language models. Transac-
tions of the Association for Computational Linguis-
tics, 12:283–298.
Damai Dai, Li Dong, Yaru Hao, Zhifang Sui, Baobao
Chang, and Furu Wei. 2022. Knowledge neurons in
pretrained transformers. In Proceedings of the 60th
Annual Meeting of the Association for Computational
Linguistics (Volume 1: Long Papers), pages 8493–
8502, Dublin, Ireland. Association for Computational
Linguistics.
Zeyu Dai and Ruihong Huang. 2019. A regulariza-
tion approach for incorporating event knowledge and
coreference relations into neural discourse parsing.
In Proceedings of the 2019 Conference on Empirical
Methods in Natural Language Processing and the
9th International Joint Conference on Natural Lan-
guage Processing (EMNLP-IJCNLP), pages 2976–
2987, Hong Kong, China. Association for Computa-
tional Linguistics.
Nicola De Cao, Wilker Aziz, and Ivan Titov. 2021. Edit-
ing factual knowledge in language models. In Pro-
ceedings of the 2021 Conference on Empirical Meth-
ods in Natural Language Processing, pages 6491–
6506, Online and Punta Cana, Dominican Republic.
Association for Computational Linguistics.
Bhuwan Dhingra, Jeremy R Cole, Julian Martin
Eisenschlos, Daniel Gillick, Jacob Eisenstein, and
William W Cohen. 2022. Time-aware language mod-
els as temporal knowledge bases. Transactions of the
Association for Computational Linguistics, 10:257–
273.
Yanai Elazar, Nora Kassner, Shauli Ravfogel, Abhi-
lasha Ravichander, Eduard Hovy, Hinrich Schütze,
and Yoav Goldberg. 2021. Measuring and improving
consistency in pretrained language models. Transac-
tions of the Association for Computational Linguis-
tics, 9:1012–1031.
Sebastian Farquhar, Vikrant Varma, Zachary Kenton, Jo-
hannes Gasteiger, Vladimir Mikulik, and Rohin Shah.
2023. Challenges with unsupervised llm knowledge
discovery. arXiv preprint arXiv:2312.10029.
Shangbin Feng, Weijia Shi, Yuyang Bai, Vidhisha Bal-
achandran, Tianxing He, and Yulia Tsvetkov. 2024a.
Knowledge card: Filling LLMs’ knowledge gaps
with plug-in specialized language models. In The
Twelfth International Conference on Learning Repre-
sentations.
Shangbin Feng, Weijia Shi, Yike Wang, Wenxuan Ding,
Vidhisha Balachandran, and Yulia Tsvetkov. 2024b.
Don’t hallucinate, abstain: Identifying llm knowl-
edge gaps via multi-llm collaboration. arXiv preprint
arXiv:2402.00367.
Constanza Fierro, Reinald Kim Amplayo, Fantine Huot,
Nicola De Cao, Joshua Maynez, Shashi Narayan, and
Mirella Lapata. 2024a. Learning to plan and generate
text with citations. arXiv preprint arXiv:2404.03381.
Constanza Fierro, Nicolas Garneau, Emanuele
Bugliarello, Yova Kementchedjhieva, and Anders
Søgaard. 2024b. Mulan: A study of fact mutability
in language models.
Constanza Fierro and Anders Søgaard. 2022. Factual
consistency of multilingual pretrained language mod-
els. In Findings of the Association for Computational
Linguistics: ACL 2022, pages 3046–3052, Dublin,
Ireland. Association for Computational Linguistics.
Jerry A. Fodor. 1985. Fodor’s guide to mental represen-
tation: The intelligent auntie’s vade-mecum. Mind,
94(373):76–100.
Luyu Gao, Zhuyun Dai, Panupong Pasupat, Anthony
Chen, Arun Tejasvi Chaganty, Yicheng Fan, Vincent
Zhao, Ni Lao, Hongrae Lee, Da-Cheng Juan, and
Kelvin Guu. 2023. RARR: Researching and revising
what language models say, using language models.
In Proceedings of the 61st Annual Meeting of the
Association for Computational Linguistics (Volume 1:
Long Papers), pages 16477–16508, Toronto, Canada.
Association for Computational Linguistics.
Edmund L. Gettier. 1963.
Is Justified True Belief
Knowledge? Analysis, 23(6):121–123.
Mor Geva, Jasmijn Bastings, Katja Filippova, and Amir
Globerson. 2023. Dissecting recall of factual associa-
tions in auto-regressive language models. In Proceed-
ings of the 2023 Conference on Empirical Methods in
Natural Language Processing, pages 12216–12235,
Singapore. Association for Computational Linguis-
tics.
Olga Golovneva, Moya Peng Chen, Spencer Poff, Mar-
tin Corredor, Luke Zettlemoyer, Maryam Fazel-
Zarandi, and Asli Celikyilmaz. 2023. ROSCOE: A
suite of metrics for scoring step-by-step reasoning. In
The Eleventh International Conference on Learning
Representations.
John Greco. 1993. Virtues and vices of virtue episte-
mology. Canadian Journal of Philosophy, 23(3):413–
432.
Frank R Hampel. 1974. The influence curve and its
role in robust estimation. Journal of the american
statistical association, 69(346):383–393.
John Hardwig. 1991. The role of trust in knowledge.
Journal of Philosophy, 88(12):693–708.
Peter Hase, Mohit Bansal, Been Kim, and Asma Ghan-
deharioun. 2023a. Does localization inform editing?
surprising differences in causality-based localization
vs. knowledge editing in language models. In Thirty-
seventh Conference on Neural Information Process-
ing Systems.
Peter Hase, Mona Diab, Asli Celikyilmaz, Xian Li, Zor-
nitsa Kozareva, Veselin Stoyanov, Mohit Bansal, and
Srinivasan Iyer. 2023b. Methods for measuring, up-
dating, and visualizing factual beliefs in language
models. In Proceedings of the 17th Conference of
the European Chapter of the Association for Compu-
tational Linguistics, pages 2714–2731, Dubrovnik,
Croatia. Association for Computational Linguistics.
Alon Jacovi, Yonatan Bitton, Bernd Bohnet, Jonathan
Herzig, Or Honovich, Michael Tseng, Michael
Collins, Roee Aharoni, and Mor Geva. 2024. A
chain-of-thought is as strong as its weakest link: A
benchmark for verifiers of reasoning chains. arXiv
preprint arXiv:2402.00559.
Zhengbao Jiang, Jun Araki, Haibo Ding, and Graham
Neubig. 2021. How can we know when language
models know? on the calibration of language models
for question answering. Transactions of the Associa-
tion for Computational Linguistics, 9:962–977.
Zhengbao Jiang, Frank F. Xu, Jun Araki, and Graham
Neubig. 2020. How can we know what language
models know? Transactions of the Association for
Computational Linguistics, 8:423–438.
Brihi Joshi, Ziyi Liu, Sahana Ramnath, Aaron Chan,
Zhewei Tong, Shaoliang Nie, Qifan Wang, Yejin
Choi, and Xiang Ren. 2023. Are machine rationales
(not) useful to humans? measuring and improving
human utility of free-text rationales. In Proceedings
of the 61st Annual Meeting of the Association for
Computational Linguistics (Volume 1: Long Papers),
pages 7103–7128, Toronto, Canada. Association for
Computational Linguistics.
Nora Kassner, Philipp Dufter, and Hinrich Schütze.
2021a. Multilingual LAMA: Investigating knowl-
edge in multilingual pretrained language models. In
Proceedings of the 16th Conference of the European
Chapter of the Association for Computational Lin-
guistics: Main Volume, pages 3250–3258, Online.
Association for Computational Linguistics.
Nora Kassner, Benno Krojer, and Hinrich Schütze. 2020.
Are pretrained language models symbolic reasoners
over knowledge? In Proceedings of the 24th Confer-
ence on Computational Natural Language Learning,
pages 552–564, Online. Association for Computa-
tional Linguistics.
Nora Kassner and Hinrich Schütze. 2020. Negated and
misprimed probes for pretrained language models:
Birds can talk, but cannot fly. In Proceedings of the
58th Annual Meeting of the Association for Compu-
tational Linguistics, pages 7811–7818, Online. Asso-
ciation for Computational Linguistics.
Nora Kassner, Oyvind Tafjord, Ashish Sabharwal, Kyle
Richardson, Hinrich Schuetze, and Peter Clark. 2023.
Language models with rationality. In Proceedings
of the 2023 Conference on Empirical Methods in
Natural Language Processing, pages 14190–14201,
Singapore. Association for Computational Linguis-
tics.
Nora Kassner, Oyvind Tafjord, Hinrich Schütze, and
Peter Clark. 2021b. BeliefBank: Adding memory to
a pre-trained language model for a systematic notion
of belief. In Proceedings of the 2021 Conference
on Empirical Methods in Natural Language Process-
ing, pages 8849–8861, Online and Punta Cana, Do-
minican Republic. Association for Computational
Linguistics.
Amr Keleg and Walid Magdy. 2023. DLAMA: A frame-
work for curating culturally diverse facts for prob-
ing the knowledge of pretrained language models.
In Findings of the Association for Computational
Linguistics: ACL 2023, pages 6245–6266, Toronto,
Canada. Association for Computational Linguistics.
Pang Wei Koh and Percy Liang. 2017. Understanding
black-box predictions via influence functions.
In
International conference on machine learning, pages
1885–1894. PMLR.
Yanming Liu, Xinyue Peng, Xuhong Zhang, Weihao
Liu, Jianwei Yin, Jiannan Cao, and Tianyu Du. 2024.
Ra-isf: Learning to answer and understand from re-
trieval augmentation via iterative self-feedback.
Kevin Meng, David Bau, Alex Andonian, and Yonatan
Belinkov. 2022. Locating and editing factual asso-
ciations in GPT. Advances in Neural Information
Processing Systems, 36.
Kevin Meng, Arnab Sen Sharma, Alex J Andonian,
Yonatan Belinkov, and David Bau. 2023. Mass-
editing memory in a transformer. In The Eleventh
International Conference on Learning Representa-
tions.
Jacob Menick, Maja Trebacz, Vladimir Mikulik,
John Aslanides, Francis Song, Martin Chadwick,
Mia Glaese, Susannah Young, Lucy Campbell-
Gillingham, Geoffrey Irving, et al. 2022. Teaching
language models to support answers with verified
quotes. arXiv preprint arXiv:2203.11147.
Ella Neeman, Roee Aharoni, Or Honovich, Leshem
Choshen, Idan Szpektor, and Omri Abend. 2023.
DisentQA: Disentangling parametric and contextual
knowledge with counterfactual question answering.
In Proceedings of the 61st Annual Meeting of the
Association for Computational Linguistics (Volume 1:
Long Papers), pages 10056–10070, Toronto, Canada.
Association for Computational Linguistics.
Robert Nozick. 2000. Knowledge and scepticism. In
Sven Bernecker and Fred I. Dretske, editors, Knowl-
edge: Readings in Contemporary Epistemology. Ox-
ford University Press.
Cory Paik, Stéphane Aroca-Ouellette, Alessandro Ron-
cone, and Katharina Kann. 2021. The World of an
Octopus: How Reporting Bias Influences a Language
Model’s Perception of Color. In Proceedings of the
2021 Conference on Empirical Methods in Natural
Language Processing, pages 823–835, Online and
Punta Cana, Dominican Republic. Association for
Computational Linguistics.
James D. Park. 2002. Using weighted max-sat engines
to solve mpe. In AAAI/IAAI.
Anna Pederneschi. 2024. An analysis of bias and dis-
trust in social hinge epistemology. Philosophical
Psychology, 37(1):258–277.
Fabio Petroni, Tim Rocktäschel, Sebastian Riedel,
Patrick Lewis, Anton Bakhtin, Yuxiang Wu, and
Alexander Miller. 2019. Language models as knowl-
edge bases? In Proceedings of the 2019 Confer-
ence on Empirical Methods in Natural Language Pro-
cessing and the 9th International Joint Conference
on Natural Language Processing (EMNLP-IJCNLP),
pages 2463–2473, Hong Kong, China. Association
for Computational Linguistics.
Alvin Plantinga. 1993. Warrant and proper function.
Oxford University Press.
Plato. 2019. Theaetetus. BoD–Books on Demand.
Garima Pruthi, Frederick Liu, Satyen Kale, and Mukund
Sundararajan. 2020. Estimating training data influ-
ence by tracing gradient descent. Advances in Neural
Information Processing Systems, 33:19920–19930.
Jirui Qi, Raquel Fernández, and Arianna Bisazza. 2023.
Cross-lingual consistency of factual knowledge in
multilingual language models. In The 2023 Con-
ference on Empirical Methods in Natural Language
Processing.
Yasin Abbasi Yadkori, Ilja Kuzborskij, András György,
and Csaba Szepesvári. 2024. To believe or not to
believe your llm. arXiv preprint arXiv:2406.02543.
Adam Roberts, Colin Raffel, and Noam Shazeer. 2020.
How much knowledge can you pack into the param-
eters of a language model? In Proceedings of the
2020 Conference on Empirical Methods in Natural
Language Processing (EMNLP), pages 5418–5426,
Online. Association for Computational Linguistics.
Qinan Yu, Jack Merullo, and Ellie Pavlick. 2023. Char-
acterizing mechanisms for factual recall in language
models. In Proceedings of the 2023 Conference on
Empirical Methods in Natural Language Processing,
pages 9924–9959, Singapore. Association for Com-
putational Linguistics.
Gilbert Ryle. 1949. The Concept of Mind: 60Th An-
niversary Edition. Hutchinson & Co, New York.
Crispin Sartwell. 1992. Why knowledge is merely true
belief. Journal of Philosophy, 89(4):167–180.
Arnab Sen Sharma, David Atkinson, and David Bau.
2024. Locating and editing factual associations in
mamba. In First Conference on Language Modeling.
Ernest Sosa. 1980. The raft and the pyramid: Coher-
ence versus foundations in the theory of knowledge.
Midwest Studies in Philosophy, 5(1):3–26.
Niklas Stoehr, Mitchell Gordon, Chiyuan Zhang, and
Owen Lewis. 2024. Localizing paragraph mem-
arXiv preprint
orization in language models.
arXiv:2403.19851.
Alasdair Urquhart. 1972. Semantics for relevant logics.
Journal of Symbolic Logic, 37(1):159–169.
Jonas Wallat, Jaspreet Singh, and Avishek Anand. 2020.
BERTnesia: Investigating the capture and forgetting
of knowledge in BERT. In Proceedings of the Third
BlackboxNLP Workshop on Analyzing and Interpret-
ing Neural Networks for NLP, pages 174–183, On-
line. Association for Computational Linguistics.
Jiaan Wang, Yunlong Liang, Zengkui Sun, Yuxuan Cao,
Jiarong Xu, and Fandong Meng. 2024. Cross-lingual
knowledge editing in large language models. In Pro-
ceedings of the 62nd Annual Meeting of the Associa-
tion for Computational Linguistics (Volume 1: Long
Papers), pages 11676–11686, Bangkok, Thailand.
Association for Computational Linguistics.
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten
Bosma, brian ichter, Fei Xia, Ed H. Chi, Quoc V Le,
and Denny Zhou. 2022. Chain of thought prompt-
ing elicits reasoning in large language models. In
Advances in Neural Information Processing Systems.
Timothy Williamson. 2005. Knowledge, context, and
the agent’s point of view. In Gerhard Preyer and
Georg Peter, editors, Contextualism in Philosophy:
Knowledge, Meaning, and Truth, pages 91–114. Ox-
ford University Press.
Wenhao Wu, Yizhong Wang, Guangxuan Xiao, Hao
Peng, and Yao Fu. 2024. Retrieval head mechanisti-
cally explains long-context factuality. arXiv preprint
arXiv:2404.15574.
Linda Zagzebski. 1999. "what is knowledge?". In John
Greco and Ernest Sosa, editors, The Blackwell Guide
to Epistemology, pages 92–116. Oxford: Blackwell.
Zexuan Zhong, Zhengxuan Wu, Christopher Man-
ning, Christopher Potts, and Danqi Chen. 2023a.
MQuAKE: Assessing knowledge editing in language
models via multi-hop questions. In Proceedings of
the 2023 Conference on Empirical Methods in Natu-
ral Language Processing, pages 15686–15702, Sin-
gapore. Association for Computational Linguistics.
Zexuan Zhong, Zhengxuan Wu, Christopher D Man-
ning, Christopher Potts, and Danqi Chen. 2023b.
MQuAKE: Assessing knowledge editing in language
models via multi-hop questions. arXiv preprint
arXiv:2305.14795.
A Epistemic logic
The syntax of standard epistemic logic is defined
by:
ϕ ::= p | ¬ϕ | (ϕ ∧ ψ) | □ϕ | ◇ϕ
The veridicality principle (also known as axiom
T) that what is known, is also true, is expressed as
follows: □ϕ → ϕ. We will distinguish between
different definitions of knowing by subscripting the
modal operators. One standard epistemic logic is
the so-called S4 logic, axiomatized as follows:
K □(ϕ → ψ) → (□ϕ → □ψ)
T □ϕ → ϕ
4 □ϕ → □□ϕ
Axiom 4 is also called the principle of positive
introspection. This is not the only epistemic modal
logic on the table, but it suffices for our purposes.
We extend S4 in various ways to accommodate
for the five definitions. Specifically, v-knowledge
introduces the concept of virtue, and p-knowledge
relies on some notion of empirical risk. The virtue
definition of knowledge introduces a new operator
that does not satisfy the veridicality principle T.
tb-knowledge A naïve implementation of knowl-
edge as true belief falls out of S4 and the principle
called KB1, which goes all the way back to Plato:
□sϕ → ◇ϕ
Sartwell (1992), however, relies on an extended
notion of belief which we will have to formalize,
also. Let us introduce a new operator ◇+ and call
the epistemic closure principle for this operator
+:
◇+: □sp → ◇+p
+: ◇+p → ((p → q) → ◇+q)
One way to express that belief is consistent is by
the principle:
¬◇⊥
j-knowledge The idea of justified true beliefs
calls for so-called justification logic (Artemov,
2008) with justification operators:
□nϕ → t : ϕ
Justification logic can be axiomatized in differ-
ent ways, but these details go beyond our main
concerns here.
g-knowledge
If we insist on a sui generis inter-
pretation of knowledge in S4, we would have to in-
troduce a new operator, say †. This operator would
have very different properties from the standard
epistemic modal logic □-operator. T would not
apply. K would apply, and knowledge would still
be required to be consistent. It is unclear whether
4 would apply to the †-operator. A minimal axiom
system could, perhaps, be something like this:
K †(ϕ → ψ) → (†ϕ → †ψ)
0 ¬ † ⊥
We leave further details open.
v-knowledge Virtue reliabilist accounts of
knowledge and justification are versions of epis-
temological externalism. Sosa characterizes an in-
tellectual virtue, very generally, as “a quality bound
to help maximize one’s surplus of truth over error”
(1991: 225). For most virtue reliabilists, intel-
lectual virtue is what leads to justification, and
virtue-based knowledge definitions are therefore of-
ten formalized in justification logics.
□wϕ → t : ϕ
but with slightly different model-theoretic se-
mantics than □n.
p-knowledge Definition 2.7 translates into the
following in an extended probabilistic version of
S4:
□ap → ◇p ∧ P(◇q | p → q) > 0.95
which is taken to mean that the definition of
p-knowing p (□ap) is that you believe p (◇p),
and that, whenever q follows from p, you probably
also believe q, i.e., the probability of you believing q
(P(◇q | p → q)) is higher than some threshold,
say 0.95. This is simply the probabilistic version
of tb-knowledge. The definition of p-knowledge
also requires pragmatic usefulness. One way to
formalize this is in terms of empirical risk on rele-
vant benchmarks. An alternative is formalization
through relevance logics (Urquhart, 1972).
B Survey Details
We recruited professional LLM researchers and
philosophers through Computer Science and Phi-
losophy mailing lists, research groups mailing lists,
and points of contact of the authors at other univer-
sities. All respondents participated free of charge
on a completely voluntary basis. The respondents
were informed about the intended use of the survey.
The full instructions were:
The <X> is running a survey about the re-
lationship between knowledge and Large
Language Models (LLMs). We are inter-
ested in getting as many perspectives as
possible, especially from philosophers
and computer scientists. The survey
should not take more than 5 minutes of
your time.
We first ask general questions about the respon-
dent and their knowledge of language models and
epistemology (Figure 6). Then, we present an ex-
ample to motivate the informal knowledge defini-
tions (Figure 7), and we ask to rank the definitions
in a Likert scale (Figure 8). Finally, we ask ques-
tions related to whether LLMs can be said to know
(Figure 9).
C Protocols Example
See Table 2.
Figure 6: First part of questions from our survey.
Figure 7: Example to be used for the informal knowl-
edge definitions.
Figure 8: Second part of questions from the survey,
related to the knowledge definitions.
Figure 9: Final part of questions from our survey.
Prompt: What is a mammal?
Llama-3’s Completion: A mammal is (...) key characteristics of mammals include:
1. Hair or fur: Mammals are covered in hair or fur, which helps to regulate their
body temperature and protect their skin. (...)
5. Live birth: Mammals give birth to live young, rather than laying eggs like birds
or reptiles. (...)
Prompt: Do mammals lay eggs?
Llama-3’s Completion: No, mammals do not lay eggs. Mammals are a group of (...)
Prompt: What are platypus?
Llama-3’s Completion: The platypus! They’re one of the most fascinating and unique creatures in the animal
kingdom. The platypus (Ornithorhynchus anatinus) is a semi-aquatic mammal that
is native to eastern Australia (...)
Prompt: Do platypuses have hair or fur?
Llama-3’s Completion: Platypuses do not have hair, but they do have a covering of soft, woolly fur. This fur
is made up (...)
Prompt: Do platypuses lay eggs?
Llama-3’s Completion: Yes, platypuses do lay eggs! In fact, they (...)
Prompt: Are platypuses mammals?
Llama-3’s Completion: Yes, platypuses are indeed mammals! They belong to the order Monotremata, which
(...)
Prompt: Are platypus mammals? Please explain step-by-step.
Llama-3’s Completion:
I’d be happy to help clarify whether platypus are mammals or not.
Here’s a step-by-step explanation:
1. **Definition of a mammal**: A mammal is a warm-blooded animal that
has hair or fur, produces milk to feed its young, and has mammary
glands. Mammals also have a specific type of skeleton, brain
structure, and dental formula.
2. **Platypus characteristics**: Platypuses have some characteristics
that are typical of mammals, such as:
* They are warm-blooded, meaning they regulate their own body
temperature.
* They have hair or fur, although it’s not very visible.
* They produce milk to feed their young, just like other mammals.
3. **Unique platypus features**: However, platypuses also have some
features that are not typical of mammals:
* They lay eggs instead of giving birth to live young like most
other mammals.
* They have a duck-billed snout and webbed feet, which are more
similar to those of reptiles and birds.
* They have a venomous spur on their hind leg, which is a
characteristic of some reptiles.
4. **Classification**: Based on their characteristics, platypuses are
classified as mammals, but they are also considered to be part of a
unique group called monotremes. Monotremes are mammals that lay eggs
instead of giving birth to live young.
5. **Conclusion**: In conclusion, platypuses are indeed mammals, but they
are a special type of mammal that has some characteristics that are
different from those of other mammals. Their ability to lay eggs
and have a venomous spur are just a few examples of their unique
features.
Table 2: Greedy decoding of Llama-3-8B-Instruct given a prompt. The ‘(...)’ means that more text was generated
but omitted here due to space limitations.
|
ai_researcher | 1 | Three-arm_randomised_controlled_trial_of_an_m-health_app_and_digital_engagement_strategy_for_improving_treatment_adherence_and_reducing_suicidal_ideation_in_young_people_study_protocol.pdf | Phononic frequency comb via three-mode parametric three-wave mixing
Authors: Adarsh Ganesan1, Cuong Do1, Ashwin Seshia1
1. Nanoscience Centre, University of Cambridge, Cambridge, UK
This paper is motivated by the recent demonstration of three-wave mixing based phononic
frequency comb. While the previous experiments have shown the existence of three-wave mixing
pathway in a system of two-coupled phonon modes, this work demonstrates a similar pathway in
a system of three-coupled phonon modes. The paper also presents a number of interesting
experimental facts concomitant to the three-mode three-wave mixing based frequency comb
observed in a specific micromechanical device. The experimental validation of three-mode three-
wave mixing along with the previous demonstration of two-mode three-wave mixing points to the
ultimate possibility of multimode frequency combs.
Optical frequency combs have significantly transformed modern metrology and molecular
spectroscopy [1-2]. Recently, we experimentally demonstrated the existence of such frequency
combs in the phononic domain [3] after its theoretical prediction in [4]. While both optical and
phononic frequency combs carry similar spectral features, the dynamics describing the respective
generation processes are different. The optical frequency combs usually arise through a well-
established Kerr nonlinear pathway [4]. However, the pathway describing our more recent phononic
frequency combs is nonlinear three-wave mixing [4]. Through this mechanism, a single drive tone
intrinsically couples with the eigenfrequency of a phonon mode. Such physical interaction can
particularly enable high-precision micro and nano-mechanical resonant sensors adapted for the
direct monitoring of slowly varying intrinsic or extrinsic physical processes.
In the phononic frequency comb demonstrated in [3], the three-wave mixing process is particularly
operative in the system of two parametrically coupled phonon modes. However, the preceding
theoretical work [4] on frequency combs has actually shown the possibility of frequency combs in
a system comprising three and four parametrically coupled phonon modes. Hence, inspired by these
mathematical predictions [4], we now experimentally describe the three-wave mixing process even in
three-mode parametric resonance [5]. Additionally, this paper also presents surprising experimental
facts specific to three-mode three-wave mixing based phononic frequency comb.
For understanding the surprising nature of three-mode three-wave mixing based phononic
frequency comb, we first present a brief discussion of two-mode three-wave mixing based phononic
frequency comb. Hence, we first consider the coupled two-mode dynamics.
\ddot{Q}_i = -\omega_i^2 Q_i - 2\zeta_i \omega_i \dot{Q}_i
  + \sum_{\tau_1=1}^{2} \sum_{\tau_2=1}^{2} \alpha_{\tau_1\tau_2} Q_{\tau_1} Q_{\tau_2}
  + \sum_{\tau_1=1}^{2} \sum_{\tau_2=1}^{2} \sum_{\tau_3=1}^{2} \beta_{\tau_1\tau_2\tau_3} Q_{\tau_1} Q_{\tau_2} Q_{\tau_3}
  + P \cos(\omega_d t)    (1)
where 𝑃 is the drive level, 𝛼 and 𝛽 are quadratic coupling coefficients and 𝜔𝑖=1,2 and 𝜁𝑖=1,2 are
natural frequencies and damping coefficients of modes 𝑖 = 1,2 respectively.
Here, when the drive frequency 𝜔𝑑 is closer to 𝜔1, the direct excitation of mode 1 takes place. The
larger drive level 𝑃 causes a higher displacement 𝑄1, which in turn results in the parametric excitation
of mode 2 through the nonlinear term 𝑄1𝑄2. The frequency of this parametrically excited tone is
expected to be 𝜔𝑑/2 in the case of two-mode parametric resonance [6]. However, owing to an intrinsic
three-wave mixing pathway, the excitation of an 𝜔1/2 tone instead of 𝜔𝑑/2 is observed in our recent
demonstration of the phononic frequency comb. Also, this specific three-wave mixing pathway is only
operative when 𝜔𝑑 is set outside the dispersion band of the driven phonon mode. Subsequent to this
excitation, through high-order interactions, a frequency comb of spacing |𝜔𝑑 − 𝜔1| is formed about
𝜔1 and 𝜔1/2.
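As an aside for readers who want to experiment with this pathway numerically, a toy integration of the two-mode form of the dynamics is sketched below. The coefficients are arbitrary dimensionless values chosen only to exhibit direct drive of mode 1 and quadratic coupling into mode 2; they are not the device parameters of [3], and whether comb sidebands actually appear depends on the chosen values.

# Toy integration of the two-mode form of Eq. (1) (our illustration, dimensionless units):
# mode 1 is driven directly and mode 2 is coupled in through the quadratic Q1*Q2 term.
import numpy as np
from scipy.integrate import solve_ivp

w1, w2 = 2*np.pi*1.0, 2*np.pi*0.5        # natural frequencies, chosen so that w1 is near 2*w2
z1, z2 = 1e-3, 1e-3                      # damping coefficients
a12, a21 = 0.4, 0.4                      # quadratic coupling coefficients (arbitrary)
P, wd = 0.3, 2*np.pi*1.002               # drive level and slightly detuned drive frequency

def rhs(t, y):
    q1, v1, q2, v2 = y
    dv1 = -w1**2*q1 - 2*z1*w1*v1 + a12*q1*q2 + P*np.cos(wd*t)
    dv2 = -w2**2*q2 - 2*z2*w2*v2 + a21*q1*q2
    return [v1, dv1, v2, dv2]

y0 = [0.0, 0.0, 1e-6, 0.0]               # tiny seed in mode 2 so parametric growth can start
sol = solve_ivp(rhs, (0, 2000), y0, max_step=0.02, dense_output=True)

t = np.linspace(1000, 2000, 2**16)       # discard the transient, then inspect the spectrum of Q1
spectrum = np.abs(np.fft.rfft(sol.sol(t)[0]))
freqs = np.fft.rfftfreq(t.size, d=t[1] - t[0])
print("strongest spectral bins (dimensionless frequency):", freqs[np.argsort(spectrum)[-5:]])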
While the concept of three-wave mixing with two-mode parametric resonance is clear, we now turn
to three-wave mixing with three-mode parametric resonance. For understanding this, we consider a
system of three coupled modes [5].
\ddot{Q}_i = -\omega_i^2 Q_i - 2\zeta_i \omega_i \dot{Q}_i
  + \sum_{\tau_1=1}^{3} \sum_{\tau_2=1}^{3} \alpha_{\tau_1\tau_2} Q_{\tau_1} Q_{\tau_2}
  + \sum_{\tau_1=1}^{3} \sum_{\tau_2=1}^{3} \sum_{\tau_3=1}^{3} \beta_{\tau_1\tau_2\tau_3} Q_{\tau_1} Q_{\tau_2} Q_{\tau_3}
  + P \cos(\omega_d t); \quad i = 1, 2, 3    (2)
Based on these dynamics, for 𝜔𝑑 ≅ 𝜔1 ≅ (𝜔2 + 𝜔3), the parametric excitation of tones 𝜔𝑥 ≅ 𝜔2 and
𝜔𝑦 ≅ 𝜔3 which satisfy the condition 𝜔𝑥 + 𝜔𝑦 = 𝜔𝑑 is expected. Using the perspective derived
from two-mode three-wave mixing, we now outline the three-wave mixing pathway in this system of
three-coupled modes.
In the case of two-mode three-wave mixing, one of the frequencies corresponding to the combs of
spacing |𝜔𝑑 − 𝜔1| had to satisfy the condition 2𝜔 = 𝜔1 instead of 2𝜔 = 𝜔𝑑. Hence, a frequency
comb of 𝜔1/2 ± 𝑛(𝜔𝑑 − 𝜔1) is formed instead of 𝜔𝑑/2 ± 𝑛(𝜔𝑑 − 𝜔1). On similar lines, in the case of
three-mode three-wave mixing, one might expect to have two frequencies 𝜔𝑝 and 𝜔𝑞 in the respective
frequency combs of modes 2 and 3 that satisfy a frequency matching condition of 𝜔𝑝 + 𝜔𝑞 = 𝜔1
instead of 𝜔𝑝 + 𝜔𝑞 = 𝜔𝑑. In the frequency combs: 𝜔𝑥 ± 𝑛(𝜔𝑑 − 𝜔1); 𝜔𝑦 ± 𝑛(𝜔𝑑 − 𝜔1), such a
condition of 𝜔𝑝 + 𝜔𝑞 = 𝜔1 is satisfied when 𝜔𝑝 = 𝜔𝑥; 𝜔𝑞 = 𝜔𝑦 − (𝜔𝑑 − 𝜔1). Additionally, unlike
two-mode three-wave mixing, the frequency condition of 𝜔𝑝 + 𝜔𝑞 = 𝜔𝑑 is also satisfied in these
frequency combs when 𝜔𝑝 = 𝜔𝑥; 𝜔𝑞 = 𝜔𝑦.
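The frequency bookkeeping above is easy to check numerically. The short snippet below does so with representative values read off the experiment reported below (drive at 3.857 MHz, comb spacing of about 5.035 kHz, and a lower parametric tone near 1.791 MHz); the values are approximate and used only for illustration.

# Numerical check (our illustration) of the two matching conditions, using approximate
# representative values from the experiment described in the following sections.
f_d = 3.857e6
delta = 5.035e3                  # comb spacing |f_d - f_1|
f_1 = f_d - delta
f_x = 1.791e6
f_y = f_d - f_x                  # the parametric pair always satisfies f_x + f_y = f_d

def comb(center, n=3):
    return [center + k * delta for k in range(-n, n + 1)]

sums = {round(p + q) for p in comb(f_x) for q in comb(f_y)}
print("some pair of comb lines sums to f_d:", round(f_d) in sums)   # f_p = f_x, f_q = f_y
print("some pair of comb lines sums to f_1:", round(f_1) in sums)   # f_p = f_x, f_q = f_y - delta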
In order to establish this pathway for comb formation, we experimentally probe a micromechanical
device and organise the experimental observations to support the above discussion. Note: In the
experimental section, unlike in the theory section, the temporal frequency 𝑓 is used in the discussion
instead of angular frequency 𝜔 = 2𝜋𝑓.
For studying the three-mode three-wave mixing based frequency comb, the same experimental
system that was used to study two-mode three-wave mixing [3] is once again considered, i.e. an AlN-
on-Si free-free micro-beam of dimensions 1100 × 350 × 11 𝜇𝑚3 (Figure 1A). A sinusoidal electrical
signal is applied through one of the split electrodes patterned on the microstructure, and the output
signal, extracted via another split electrode, is analysed using an Agilent Infiniium 54830B DSO.
The experiments were carried out under ambient pressure and temperature conditions.
Figure 1B shows the frequency spectrum of output electrical signal when 𝑆𝑖𝑛(𝑓𝑑 = 3.857 𝑀𝐻𝑧) =
15 𝑑𝐵𝑚 is applied. In this spectrum, we can see five thick spectral features b1-b5. The frequency
corresponding to b1 is close to the drive frequency 𝑓𝑑 ≅ 𝑓1. While the features b2 and b3 are
associated with frequencies 𝑓𝑚 ≅ 𝑓2 and 𝑓𝑛 ≅ 𝑓3 respectively, their sum is seen to approximately
equal to the drive frequency. The final two features b4 and b5 have frequencies 2𝑓𝑚 and 2𝑓𝑛
respectively and these correspond to the second harmonics of features b2 and b3 respectively. To
clearly visualize each of these spectral features, the zoomed-in images are presented in the figures
1b1-1b5. These figures clearly show that b1-b5 correspond to the frequency combs of spacing
5.035 𝑘𝐻𝑧 about
the
frequencies 3.857 𝑀𝐻𝑧 = 𝑓𝑑, 1.791 𝑀𝐻𝑧 ≅ 𝑓2, 2.066 𝑀𝐻𝑧 ≅ 𝑓3,
3.582 𝑀𝐻𝑧 ≅ 2𝑓2 and 4.132 𝑀𝐻𝑧 ≅ 2𝑓3 respectively. The tone corresponding to the drive
frequency 𝑓𝑑 can be located in the figure 1b1 and the additional spectral lines in the output
spectrum arise through the nonlinear three-wave mixing process.
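For readers reproducing this kind of analysis, the comb spacing can be estimated directly from the measured output spectrum by simple peak picking; a minimal sketch is shown below, with a synthetic comb standing in for the digitised oscilloscope trace (the peak height and distance thresholds are arbitrary choices of the sketch).

# Sketch of comb-spacing extraction from an output spectrum (our illustration); the
# synthetic comb below merely stands in for the digitised DSO trace of Figure 1B.
import numpy as np
from scipy.signal import find_peaks

freqs = np.linspace(3.80e6, 3.92e6, 60001)          # frequency axis, 2 Hz resolution
f_d, spacing_true = 3.857e6, 5.035e3
spectrum = np.full_like(freqs, -110.0)              # flat noise floor (dBm)
for k in range(-5, 6):                              # eleven synthetic comb lines about f_d
    spectrum += 40 * np.exp(-0.5 * ((freqs - (f_d + k * spacing_true)) / 200.0) ** 2)

peaks, _ = find_peaks(spectrum, height=-90, distance=100)   # thresholds are arbitrary
estimated = np.median(np.diff(freqs[peaks]))
print(f"estimated comb spacing: {estimated / 1e3:.3f} kHz")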
To systematically understand the evolution of three-mode three-wave mixing based frequency
comb, the experiments are carried out for a range of drive levels: 4 − 23.8 𝑑𝐵𝑚 and the frequency
combs about 𝑓𝑑 ≅ 𝑓1, 𝑓𝑚 ≅ 𝑓2 and 𝑓𝑛 ≅ 𝑓3 are examined. Figure 2 shows the drive level dependence
of the frequency combs. Each of the frequency combs about 𝑓𝑑, 𝑓𝑚 and 𝑓𝑛 presented in figures 2A-
2C correspond to different phonon modes and their mode shapes are presented in figure 2 inset.
The vertical line in figure 2A corresponds to 𝑓𝑑 and the additional lines are formed about 𝑓𝑑 with
equidistant spacing. Interestingly, this spacing is found to increase with the drive level 𝑆𝑖𝑛.
Additionally, at higher drive levels 𝑆𝑖𝑛 ≥ 18 𝑑𝐵𝑚, inter-leaved spectral lines are also formed. This is
broadly similar to the case presented in our previous demonstration of three-wave mixing [3].
Corresponding to the frequency generation about 𝑓𝑑, combs are also formed about 𝑓𝑚 ≅ 𝑓2 and
𝑓𝑛 ≅ 𝑓3 (Figure 2B and 2C). However, there are no vertical lines in figures 2B and 2C. This shows that
the frequencies 𝑓𝑚 and 𝑓𝑛 corresponding to the two parametrically excited internal modes are also
drive level dependent and such dependences are related to the nonlinear comb generation process.
This leads to a drive-level dependence of the comb spacing in the presented pathway.
To understand more about the drive level dependence of the frequency comb, the evolution of
frequencies 𝑓𝑚 and 𝑓𝑛 is investigated. It can be seen from figures 3A and 3B that 𝑓𝑚 and 𝑓𝑛 are not
simply drive level dependent but their dependences are also characterized by peculiar nonlinear
functions. Despite this, 𝑓𝑚 + 𝑓𝑛 is always equal to 𝑓𝑑 for any drive level 𝑆𝑖𝑛. The experimental facts
presented in the figures 3A-3C also confirm that the specific form of the frequency combs observed in our
device is (𝑓𝑚 = 𝑓𝑥) ± 𝑛(𝑓𝑑 − 𝑓1); (𝑓𝑛 = 𝑓𝑦) ± 𝑛(𝑓𝑑 − 𝑓1); 𝑓𝑑 ± 𝑛(𝑓𝑑 − 𝑓1). This frequency comb
possesses an equidistant spacing of |𝑓𝑑 − 𝑓1|. The figure 3D shows that the resonant frequency 𝑓1 is
drive level dependent although linear. This is similar to the case observed in [3]. This drive level
dependence of 𝑓1 thus leads to the drive level dependent comb spacing as noted in figure 2A. Figure
4A further validates that the spacing of frequency combs formed about 𝑓𝑑, 𝑓𝑥 and 𝑓𝑦 are equal. To
obtain the relevance of the dispersion band of the driven phonon mode on the frequency comb, the
experiments were conducted at different drive frequencies. Despite the significantly lower three-mode
parametric resonance threshold within the dispersion band, for instance at 𝑓𝑑 = 3.856 𝑀𝐻𝑧, the
frequency comb only exists outside the dispersion band, specifically on the right side (Figure 4B).
The reason for the existence of an asymmetric frequency comb on only one side of the dispersion band,
unlike the case presented in [3], can possibly be explained by the asymmetry of the drive-frequency-
dependent parametric excitation threshold (Figure 4B). In the frequency combs observed in our
experiments, similar to those presented in [3], the spacing stays the same for different drive
frequencies at a specific drive level in addition to the increase in spacing with drive level for a
specific drive frequency. This can also be evidenced from the colour-maps in the figure 4B.
Now, we turn to the interesting trend presented in the figures 3A-3C. Frequencies 𝑓𝑥 and 𝑓𝑦 shift
symmetrically while maintaining the condition 𝑓𝑥 + 𝑓𝑦 = 𝑓𝑑. The magnitude of this shift possesses a
peculiar nonlinear relationship with drive level. To prove the relevance of three-mode three-wave
mixing process on this trend, the nature of drive level dependence of 𝑓𝑥 and 𝑓𝑦 for different drive
frequencies are examined. It is possible to clearly note from the figures 5A and 5B that the
simultaneous downshifts and upshifts in 𝑓𝑥 and 𝑓𝑦 respectively are only observed when the three-
mode three-wave mixing is existent. Despite these shifts, 𝑓𝑥 + 𝑓𝑦 for all drive levels and drive
frequencies are equal to the respective 𝑓𝑑. Further, for the drive frequencies that constitute three-
mode three-wave mixing, the respective nonlinear relationships with drive level are both different
and non-monotonic. Such relationships may arise through the nonlinear feedback involved in the
frequency comb generation process and merit a rigorous theoretical treatment, although such a
treatment falls outside the scope of the current manuscript. In addition to the
mountains and pits of the respective figures 5A and 5B, we can also observe another interesting
characteristic associated with the three-mode three-wave mixing. Even in the absence of three-
mode three-wave mixing, there exists an obscure nominal relationship between 𝑓𝑥 & 𝑓𝑦 and 𝑓𝑑
which is clear from the planes presented in the figures 5A-5C and any influence of three-wave mixing
is only above these nominal planes. These planes are parallel to the drive level axis as the
frequencies 𝑓𝑥 and 𝑓𝑦 for a specific 𝑓𝑑 do not vary with the drive level and such possible drive level
dependences are only observed under the influence of three-wave mixing. Figures 5A and 5B show
that the slopes of respective planes are not same. This suggests that 𝑓𝑦 − 𝑓𝑥 is not constant with 𝑓𝑑
and such a specific relationship, along with the comb-induced symmetric frequency shifts, should also
be theoretically understood.
This paper thus presents the first experimental demonstration of a phononic frequency comb via
three-mode three-wave mixing using a micromechanical resonator. The specific experimental facts
associated with this form of frequency comb are also provided. The validated existence of phononic
frequency combs via both two-mode three-wave mixing and three-mode three-wave mixing can
thus create a general perspective on multi-mode three-wave mixing based frequency combs. Such
multi-mode frequency combs can possibly be helpful in distributing frequency combs across
multiple segments of the frequency spectrum.
Acknowledgements
Funding from the Cambridge Trusts is gratefully acknowledged.
References
[1] T. Udem, R. Holzwarth, and T. W. Hansch, "Optical frequency metrology," Nature, vol. 416, pp.
233-237, 2002.
[2] J. Ye, Femtosecond optical frequency comb: principle, operation and applications: Springer
Science & Business Media, 2005.
[3] A. Ganesan, C. Do, and A. Seshia, "Phononic Frequency Comb via Intrinsic Three-Wave Mixing,"
Physical Review Letters, vol. 118, p. 033903, 2017.
[4] L. S. Cao, D. X. Qi, R. W. Peng, M. Wang, and P. Schmelcher, "Phononic Frequency Combs through
Nonlinear Resonances," Physical Review Letters, vol. 112, p. 075505, 2014.
[5] A. Ganesan, C. Do, and A. Seshia, "Observation of three-mode parametric instability in a
micromechanical resonator," Applied Physics Letters, vol. 109, p. 193501, 2016.
[6] E. Mathieu, "Mémoire sur le mouvement vibratoire d'une membrane de forme elliptique,"
Journal de mathématiques pures et appliquées, vol. 13, pp. 137-203, 1868.
Figure 1: Observation of phononic frequency comb via three-mode three-wave mixing. A: An
electrical signal 𝑆𝑖𝑛(𝑓𝑑 = 3.857 𝑀𝐻𝑧) is provided to a free-free beam microstructure; B: The
frequency spectrum of the output electrical signal 𝑆𝑜𝑢𝑡; b1-b5: The zoomed views of spectral
features b1-b5 in B respectively.
Figure 2: Drive level dependence of frequency comb. A-C: The spectral maps of output electrical
signal 𝑆𝑜𝑢𝑡 around 𝑓𝑑, 𝑓𝑚 and 𝑓𝑛 respectively for different drive conditions 𝑆𝑖𝑛(𝑓𝑑1 = 3.86 𝑀𝐻𝑧) =
4 − 23.8 𝑑𝐵𝑚. The inset figures show the vibration mode shapes corresponding to the respective
frequency combs and the red and blue correspond to maximum and minimum displacements in
these inset figures.
Figure 3: Drive level dependence of frequency comb (Contd.). A-D: The drive level 𝑆𝑖𝑛 dependence
of 𝑓𝑛, 𝑓𝑚, 𝑓𝑚 + 𝑓𝑛 and 𝑓̃1 respectively.
Figure 4: Drive frequency dependence of frequency comb. A: The spacing of frequency combs
around 𝑓𝑥, 𝑓𝑦 and 𝑓𝑑 for the drive frequency 𝑓𝑑 = 3.857 𝑀𝐻𝑧; B: The spacing of frequency combs for
different drive frequencies 𝑓𝑑 and drive levels 𝑆𝑖𝑛. The colour-maps indicate the spacing. The
absence of colour or white-colour indicates the absence of frequency comb for that drive condition.
The dotted black line indicates the parametric excitation threshold. The drive level 𝑆𝑖𝑛 above this
threshold line leads to parametric resonance.
Figure 5: Drive frequency dependence of frequency comb (Contd.). A-C: The value of 𝑓𝑥, 𝑓𝑦 and 𝑓𝑥 +
𝑓𝑦 for different drive frequencies 𝑓𝑑 and drive levels 𝑆𝑖𝑛 respectively. The colour-maps indicate the
values of these frequencies. The absence of colour or white-colour indicates the absence of
frequency comb for that drive condition. The sketched planes correspond to the nominal drive
frequency dependence of 𝑓𝑥, 𝑓𝑦 and 𝑓𝑥 + 𝑓𝑦 i.e. under the absence of three-wave mixing. Note: The
projections of 3-D plots on the 𝑆𝑖𝑛 − 𝑓𝑑 plane are also shown for clarity.
|
ai_researcher | 3 | Toward_the_Development_of_a_Computer-Assisted_Real-Time_Assessment_of_Ideational_Dynamics_in_Collaborative_Creative_Groups.pdf | 9
1
0
2
l
u
J
5
2
]
C
D
.
s
c
[
1
v
4
0
9
0
1
.
7
0
9
1
:
v
i
X
r
a
Collaborative Heterogeneous Computing on MPSoCs
Extended Abstract∗
Siqi Wang
1 INTRODUCTION
With the emerging demand for computations on mobile devices,
heterogeneous multi-processor system-on-chips (MPSoCs) are en-
visioned to dominate the current and future mobile computing
landscape. Heterogeneous MPSoCs usually comprise various pro-
cessing elements such as general-purpose cores (CPUs) with differ-
ent performance-power characteristics and application-specific ac-
celerators, examples of which are graphics processing units (GPUs),
digital signal processors (DSPs), reconfigurable accelerators (FP-
GAs, etc.) and the recent neural acceleration engines (NPUs, etc.).
Such heterogeneity presented on the SoC enables delicate match-
ing of computational kernels to the processing elements that are
best suited to perform the computation, which leads to substantial
improvements in performance and energy-efficiency.
The heterogeneity can be broadly classified into performance
and functional heterogeneity, while commercial SoCs are trending
toward adopting both in the same chip. Performance heterogene-
ity consists of cores with the same functionality (instruction-set
architecture, ISA) but with different power-performance charac-
teristics, an example of which is the ARM big.LITTLE CPU archi-
tecture. The difference stems from distinct micro-architectural fea-
tures such as in-order core versus out-of-order core. The complex
cores provide better performance at the cost of higher power con-
sumption while the simpler cores exhibit low-power behavior with
lower performance. Functional heterogeneity features cores with
very different functionality (different ISA) existing on the same die.
Such heterogeneity takes advantage of certain execution patterns to achieve exceptional speed-ups that meet performance requirements under
the stringent power budget. Under carefully managed exploitation
of multiple forms of heterogeneity, heterogeneous MPSoCs present
great potential to sustain the performance and power requirements
for next generation mobile computing.
While architectural heterogeneity is promising, software devel-
opment efforts are required to fully benefit from this architectural
advancement [4]. This thesis (extended abstract) presents the soft-
ware development efforts toward efficient exploitation of hetero-
geneity through intricate mapping of computational kernels, col-
laborative execution of multiple processing elements and applica-
tion specific techniques. The goal is to embrace the heterogeneity
to unleash the full potential of the heterogeneous MPSoCs towards
high-performance energy-efficient mobile computing.
2 EXPLOITATION OF HETEROGENEITY
Functional heterogeneity presents application developers with a
diverse choice of processing elements on the same chip. They now
have the opportunity and the responsibility to take advantage of
∗Accepted to ACM SIGDA Ph.D. Forum at Design Automation Conference (DAC) 2019.
Siqi Wang is with the Department of Computer Science, School of Computing, Na-
tional University of Singapore, SG. E-mail: ([email protected])
the unique characteristics of different processing elements to im-
prove execution performance. However, the matching of compu-
tational kernels to processing elements is difficult as the perfor-
mance is a complex interplay among the exposed parallelism, the
compiler, and the processor architecture. Furthermore, the applica-
tion kernel needs to be implemented in different processor-specific
languages to measure the performance of each processing element.
If the performance of the applications on different processing ele-
ments is made available at an early stage, the developers will then
be able to make an informed decision in selecting the most appro-
priate processing element and concentrate on further processor-
specific languages and optimizations. CGPredict [5] is proposed to
guide developers in the early design choice without tedious rede-
velopment efforts. It is an analytical framework that accurately es-
timates the performance of a computational kernel on an embed-
ded GPU architecture from unoptimized, single-threaded C code.
CGPredict takes a computational kernel in the form of single-
threaded C code and generates its execution trace through a Trace
Extraction phase. In order to emulate the behavior of GPU, a Warp
Formation phase is introduced to transform the single-threaded
trace into its multi-threaded equivalent. CGPredict then extracts
computation (compute instructions) and memory access informa-
tion. The compute cycle count is obtained by mapping compute in-
structions to GPU instructions in the Computation Analysis stage,
while the memory cycle count is obtained through memory access
information analysis with access patterns and cache behavior in
the Memory Behavior Analysis stage. The results from the two anal-
ysis stages complete the execution characteristics we need from
the kernel for performance prediction. Lastly, together with the
hardware architectural parameters obtained from micro-bench-
marking, a comprehensive Analytical Prediction Model is engaged
to predict the final execution performance using the computation
and memory execution characteristics.
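To make the prediction step concrete, the following is a minimal editorial sketch (not CGPredict itself) of how per-warp compute and memory cycle estimates might be folded together with micro-benchmarked hardware parameters; the parameter names, the latency-hiding assumption, and all numbers are hypothetical.

from dataclasses import dataclass

@dataclass
class GpuParams:
    # Illustrative hardware parameters, as if obtained from micro-benchmarking.
    clock_ghz: float           # core clock frequency
    warps_per_sm: int          # resident warps per streaming multiprocessor
    num_sm: int                # number of streaming multiprocessors
    mem_latency_cycles: float  # average latency of a cache-missing load

def predict_kernel_cycles(compute_cycles_per_warp: float,
                          mem_requests_per_warp: float,
                          cache_miss_rate: float,
                          total_warps: int,
                          hw: GpuParams) -> float:
    """Toy analytical model: per-warp compute and memory cycles are estimated
    from the instruction and memory traces, then scaled by how many batches of
    warps the GPU must execute. Memory latency is assumed to be partially
    hidden by switching among resident warps (a common modelling assumption)."""
    mem_cycles_per_warp = mem_requests_per_warp * cache_miss_rate * hw.mem_latency_cycles
    # Latency hiding: a stalled warp can be covered by the other resident warps.
    exposed_mem = max(0.0, mem_cycles_per_warp - compute_cycles_per_warp * (hw.warps_per_sm - 1))
    cycles_per_warp = compute_cycles_per_warp + exposed_mem
    warp_batches = -(-total_warps // (hw.warps_per_sm * hw.num_sm))  # ceiling division
    return cycles_per_warp * warp_batches

if __name__ == "__main__":
    hw = GpuParams(clock_ghz=0.6, warps_per_sm=8, num_sm=4, mem_latency_cycles=200)
    cycles = predict_kernel_cycles(compute_cycles_per_warp=1200,
                                   mem_requests_per_warp=64,
                                   cache_miss_rate=0.25,
                                   total_warps=2048,
                                   hw=hw)
    print(f"predicted time: {cycles / (hw.clock_ghz * 1e9) * 1e3:.2f} ms")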
CGPredict provides accurate GPU performance estimations from
only C code with 9% error. It also provides insights regarding the
characteristics of the kernel and the GPU that influence perfor-
mance, such as coalescing of memory accesses and shared mem-
ory usage. These insights offer opportunities for the developers to
understand the intrinsic strengths and weaknesses of the architec-
ture in the context of a particular kernel that can facilitate further
code optimizations. Furthermore, CGPredict in conjunction with
an existing FPGA performance predictor from C code [6] achieves
our objective of making the perfect choice of processing elements
(CPU, GPU or FPGA) given a kernel.
3 CO-EXECUTION ON MOBILE PLATFORM
The ever-increasing processing requirements impose higher pres-
sure on mobile devices with limited processing capability. Execut-
ing an application on a single processing element may not sustain
the performance requirements, while other processing elements
that can potentially be used are not actively contributing. The con-
current co-execution of a single computational kernel on multi-
ple processing elements thus exhibits great potential in achieving
additional performance. The design space of co-execution is huge
with the exploitation of both performance and functional hetero-
geneity. In addition, the ability to vary clock frequencies enables
the compromise between the achievable performance and power
consumption which further extends the design space. We show
through exhaustive design space search [1] that by executing a
computational kernel simultaneously on all available processing
elements (big.LITTLE CPU cores, GPUs) together with suitable
voltage-frequency settings for all these cores, as high as 39% en-
ergy savings and 19% improvement in runtime are achieved com-
pared to the stand-alone executions. The improvement in runtime
allows developers to have more flexibility in tuning the various
voltage-frequency settings to achieve higher performance with cer-
tain constraints.
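The shape of this exhaustive search can be sketched as follows; this is an editorial illustration rather than the setup of [1], and the frequency lists, workload-split granularity, and power/runtime models are placeholders.

import itertools

# Hypothetical per-device models: runtime (s) and average power (W) for a given
# workload fraction and frequency. In practice these come from profiling or models.
def cpu_runtime(frac, freq_mhz): return frac * 10.0 * (1800.0 / freq_mhz)
def gpu_runtime(frac, freq_mhz): return frac * 6.0 * (600.0 / freq_mhz)
def cpu_power(freq_mhz): return 0.5 + 2.5e-9 * freq_mhz ** 3
def gpu_power(freq_mhz): return 0.8 + 4.0e-9 * freq_mhz ** 3

CPU_FREQS = [800, 1200, 1800]          # MHz, illustrative DVFS levels
GPU_FREQS = [300, 450, 600]
SPLITS = [i / 10 for i in range(11)]   # fraction of the kernel given to the GPU

def best_config():
    best = None
    for gpu_frac, fc, fg in itertools.product(SPLITS, CPU_FREQS, GPU_FREQS):
        # Co-execution: both devices run concurrently, so runtime is the slower side.
        t = max(cpu_runtime(1 - gpu_frac, fc), gpu_runtime(gpu_frac, fg))
        energy = t * (cpu_power(fc) + gpu_power(fg))
        if best is None or energy < best[0]:
            best = (energy, t, gpu_frac, fc, fg)
    return best

if __name__ == "__main__":
    energy, t, gpu_frac, fc, fg = best_config()
    print(f"GPU share {gpu_frac:.1f}, CPU {fc} MHz, GPU {fg} MHz "
          f"-> {t:.2f} s, {energy:.2f} J")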
On the other hand, the inherent characteristics of mobile sys-
tems demand stringent power and thermal requirements as com-
pared to server systems; this is especially so because of the lack of
active cooling measures on mobile devices. Commercial heteroge-
neous MPSoCs usually implement operating system level thermal
management techniques such as processor frequency throttling to
prevent failure of the chip at high temperatures. Engaging multiple
processing elements concurrently may expedite the heating up of
the system, necessitating frequency throttling and hence degrada-
tion of performance. Therefore, the benefit of co-execution can be
compromised by the throttling of frequency due to thermal issues.
We propose OPTiC [2] to anticipate such thermal impact on execu-
tion when engaging multiple processing elements for performance
optimization.
OPTiC presents a static partitioning strategy to split a compu-
tational kernel across CPU and GPU cores for concurrent execu-
tion, with the voltage-frequency settings of the cores carefully de-
termined considering the thermal effects. OPTiC builds on exten-
sive and comprehensive modeling of power and runtime, resource
contention and thermal behavior. The power and runtime of the
CPU and GPU cores at all frequencies are predicted through ana-
lytical modeling from one profile run at a sample frequency. The
thermal behavior is captured through a thermal throttling model
that predicts the occurrence of OS frequency throttling and the
resultant runtime under such thermal conditions. From the indi-
vidual performances, the allocation of the workload and the co-
execution performance are predicted through a co-execution model
that considers the effect of thermal frequency throttling and re-
source contention. The framework then goes through all the possi-
ble frequency settings and predicts the performance to locate the
optimal configuration and workload allocation. While the perfor-
mance of an application is largely affected by thermal conditions,
OPTiC is able to predict the configuration that presents on aver-
age 14% runtime improvement over standalone execution. OPTiC
further demonstrates great temperature control with real-life ap-
plications. With the configuration predicted by OPTiC, the chip
exhibits a much cooler temperature as compared to the Linux fre-
quency governors.
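The partitioning idea at the core of this approach can be illustrated with a small sketch; the throughput numbers and the flat throttling derating below are placeholders, whereas OPTiC relies on detailed thermal, contention, and DVFS models.

def effective_throughput(nominal_items_per_s: float,
                         predicted_to_throttle: bool,
                         throttle_factor: float = 0.7) -> float:
    """If the thermal model predicts OS frequency throttling for this
    configuration, derate the nominal throughput (placeholder factor)."""
    return nominal_items_per_s * (throttle_factor if predicted_to_throttle else 1.0)

def split_workload(total_items: int, cpu_tp: float, gpu_tp: float):
    """Split work so CPU and GPU finish at roughly the same time:
    items_cpu / cpu_tp == items_gpu / gpu_tp."""
    gpu_share = gpu_tp / (cpu_tp + gpu_tp)
    gpu_items = round(total_items * gpu_share)
    cpu_items = total_items - gpu_items
    runtime = max(cpu_items / cpu_tp, gpu_items / gpu_tp)
    return cpu_items, gpu_items, runtime

if __name__ == "__main__":
    cpu_tp = effective_throughput(120.0, predicted_to_throttle=True)
    gpu_tp = effective_throughput(200.0, predicted_to_throttle=False)
    cpu_items, gpu_items, runtime = split_workload(10_000, cpu_tp, gpu_tp)
    print(f"CPU: {cpu_items} items, GPU: {gpu_items} items, ~{runtime:.1f} s")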
4 TOWARD MACHINE LEARNING
Lastly, the rise of machine learning applications poses great chal-
lenges to mobile platforms. Deploying neural network inferencing
on mobile platforms requires the exploitation of heterogeneity to
sustain the performance requirements given limited resources and
stringent power budgets. Although dedicated neural accelerators
(NPUs, etc.) show exceptional speed-ups for applications like con-
volutional neural network (CNN), the technique is highly platform
dependent and not applicable to general architectures without the
accelerator. Furthermore, CNNs are more commonly used as build-
ing blocks to construct more complex systems. We envision in the
near future that multiple independent inference sub-tasks are ex-
pected to be performed concurrently. This requires all the available
processing elements to run the inference engines in parallel. There-
fore, it is important to develop general techniques that are applica-
ble to existing heterogeneous MPSoCs on mobile platforms.
Commercial CNN libraries usually only engage one of the pro-
cessing elements and are often ignorant of the co-execution of mul-
tiple processing elements. ARM Compute Library (ARM-CL) pro-
vides out-of-the-box support for parallel execution through multi-
threading for the CPU clusters. But the concurrent co-execution
of the big and LITTLE clusters with multi-threading is harmful to performance due to cache coherence overheads. Thus, kernel-level splitting among processing elements fails to reduce the end-to-end latency or to improve the throughput. We present an alter-
native framework pipe-it [3] that employs a pipelined design to
split the convolutional layers across processing elements (different
CPU clusters) to improve throughput for streaming inferencing.
Here, the two CPU core clusters are divided into multiple sub-core-
clusters as processing elements to construct the pipeline stages to
better match the resources and workload. Pipe-it includes an ana-
lytical performance model that predicts the performance of a con-
volutional layer on different configurations (core type, count) from
its network structure descriptions. The predicted performance is
then used as input into a design space exploration algorithm that
navigates the design space and locates the best fitting pipeline
configuration and respective layer allocation. Pipe-it with the pre-
dicted multi-stage pipeline achieves on average 39% throughput
gain compared with the execution on a single processing element.
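A toy version of the stage-allocation search conveys the idea; the per-layer latencies are invented and only two pipeline stages are considered, whereas Pipe-it explores richer sub-cluster configurations.

# Hypothetical per-layer latencies (ms) of a small CNN on two sub-clusters.
LAYER_MS_BIG    = [4.0, 6.5, 8.0, 8.0, 3.0]     # big-core sub-cluster
LAYER_MS_LITTLE = [9.0, 14.0, 18.0, 18.0, 7.0]  # LITTLE-core sub-cluster

def best_two_stage_split(big_ms, little_ms):
    """Try every cut point: layers [0, cut) run on one cluster and [cut, n) on
    the other, forming a two-stage pipeline. Steady-state throughput is limited
    by the slower stage."""
    n = len(big_ms)
    best = None
    for cut in range(1, n):
        for first, second, tag in [(big_ms, little_ms, "big then LITTLE"),
                                   (little_ms, big_ms, "LITTLE then big")]:
            stage1 = sum(first[:cut])
            stage2 = sum(second[cut:])
            fps = 1000.0 / max(stage1, stage2)  # frames per second
            if best is None or fps > best[0]:
                best = (fps, cut, tag)
    return best

if __name__ == "__main__":
    fps, cut, tag = best_two_stage_split(LAYER_MS_BIG, LAYER_MS_LITTLE)
    print(f"best split: layers [0,{cut}) / [{cut},...) mapped as {tag}: {fps:.1f} FPS")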
REFERENCES
[1] A. Prakash, S. Wang, A. E. Irimiea, and T. Mitra. 2015. Energy-efficient execution of data-parallel applications on heterogeneous mobile platforms. In
2015 33rd IEEE International Conference on Computer Design (ICCD). 208–215.
https://doi.org/10.1109/ICCD.2015.7357105
[2] S. Wang, G. Ananthanarayanan, and T. Mitra. 2019. OPTiC: Optimizing Collabo-
rative CPU-GPU Computing on Mobile Devices With Thermal Constraints. IEEE
Transactions on Computer-Aided Design of Integrated Circuits and Systems 38, 3
(March 2019), 393–406. https://doi.org/10.1109/TCAD.2018.2873210
[3] S. Wang, G. Ananthanarayanan, Y. Zeng, N. Goel, A. Pathania, and T. Mitra. 2019.
High-Throughput CNN Inference on Embedded ARM big. LITTLE Multi-Core
Processors. arXiv preprint arXiv:1903.05898 (2019).
[4] S. Wang, A. Prakash, and T. Mitra. 2018. Software Support for Heterogeneous
Computing. In 2018 IEEE Computer Society Annual Symposium on VLSI (ISVLSI).
756–762. https://doi.org/10.1109/ISVLSI.2018.00142
[5] S. Wang, G. Zhong, and T. Mitra. 2017. CGPredict: Embedded GPU Performance
Estimation from Single-Threaded Applications. ACM Trans. Embed. Comput. Syst.
16, 5s, Article 146 (Sept. 2017), 22 pages. https://doi.org/10.1145/3126546
[6] G. Zhong, A. Prakash, S. Wang, Y. Liang, T. Mitra, and S. Niar. 2017. Design
Space exploration of FPGA-based accelerators with multi-level parallelism. In De-
sign, Automation Test in Europe Conference Exhibition (DATE), 2017. 1141–1146.
https://doi.org/10.23919/DATE.2017.7927161
|
ai_researcher | 1 | Capabilities_of_GPT-4_on_Medical_Challenge_Problems.pdf | Evaluation of GPT-4o & GPT-4o-mini’s Vision Capabilities for Salt Evaporite
Identification
Deven B. Dangi, Lawton Chiles High School & Florida State University, Tallahassee, FL
Beni B. Dangi, Florida A&M University, Tallahassee, FL
Oliver Steinbock, Florida State University, Tallahassee, FL
Abstract
Identifying salts from images of their ‘stains’ has diverse practical applications. While specialized
AI models are being developed, this paper explores the potential of OpenAI’s state-of-the-art vision
models (GPT-4o and GPT-4o-mini) as an immediate solution. Testing with 12 different types of
salts, the GPT-4o model achieved 57% accuracy and a 0.52 F1 score, significantly outperforming
both random chance (8%) and GPT-4o mini (11% accuracy). However, GPT-4o mini also had
significantly biased responses, diminishing the representativeness of its accuracy. Results suggest
that current vision models could serve as an interim solution for salt identification from their stain
images.
Introduction
Whether it is forensics experts identifying residues at a crime scene or astronomers examining
materials from other planets, the ability to rapidly identify salts from macroscopic images has
significant implications for various fields (1, 2). Potential software could enable low-cost,
equipment-free salt analysis, benefiting both specialized and general users.
Current efforts are underway, and somewhat successful, in making such an ability a reality
(3)—but these methods are not immediately available, and it is unknown how long it will take for
them to become viable for general or specialized use. Thus, a possible alternative method for
identifying salts through these images is the target of investigation: using state-of-the-art large
language models (LLMs) with vision capabilities. These models have demonstrated remarkable
capabilities in transferring knowledge across domains and an impressive level of ability in
identifying certain complex visual patterns without domain-specific fine-tuning (4), suggesting
potential applicability to salt crystal morphology analysis.
OpenAI's GPT-4o and GPT-4o-mini models represent current state-of-the-art capabilities in
image recognition and analysis, with GPT-4o-mini being a lighter, cost-efficient variant of GPT-4o
(4). For cost- and time-efficiency, the batch processing method offered by OpenAI was preferred,
which allows for processing of multiple requests simultaneously with half the cost and greater
granularity (5, 6). Most significantly, it allows control over parameters such as model type,
temperature, and seed values, which can help produce outputs that are closer to being deterministic
and reproducible, despite the inherently probabilistic nature of these models (7).
This study examines 12 salts (NaCl, KCl, NH₄Cl, Na₂SO₄, K₂SO₄, NH₄NO₃, NaH₂PO₄,
NaNO₃, Na₃PO₄, KBr, KNO₃, and RbCl) selected to represent a diverse range of ionic compounds,
including both naturally occurring minerals and industrially significant compounds, to evaluate the
accuracy and consistency of these models in salt identification. The findings reveal significant
variations in identification accuracy across different salt types, with implications for practical
applications.
Method
The GPT-4o and GPT-4o-mini models have a wide breadth of knowledge, but their knowledge is
not deep enough to understand what the macroscopic images of salt deposits formed by the
evaporation of droplets look like for each type of salt (4). Thus, training images for each type of salt
are included for the model to understand their appearance, sourced from images of laboratory-
generated salts from Dr. Steinbock’s website, which were created for the related machine learning
methods of identification (3, 8). These training images were the same for all trials.
Figure 1: Example images of each type of salt (B-M). Sourced from Batista et al. (3)
The OpenAI APIs were leveraged in order to specifically utilize the batch processing
method, allowing for multiple requests simultaneously at lower pricing and with higher token limits
compared to regular requests (5, 6). Using the API also ensures every request is independent
(stateless), with no memory carrying over between requests (9). This also means the training images
the model requires to grasp the differences between salts must be provided in every single request,
significantly contributing to the token count and therefore the total cost.
For batch processing, a JSONL file is required, in which every line contains a valid JSON
object representing individual requests to the API. Each trial would require multiple batches due to
file size and token limits per batch. In order to generate the batches, 12 folders containing the
images of the salts were iterated over. For each salt type, an empty set would be created, which
would keep track of already-seen images to avoid duplicate requests within the same trial. A total of
100 images would be randomly chosen from that folder to ensure an unbiased representation of
each salt type’s potential morphological variations, then added to the current batch as individual
requests. In total, each trial contains 1200 requests (12 salts times 100 images), which were spread
across as many batches as necessary.
Each request had a few key parameters: the ID, which was determined by combining the
image’s name (a randomly generated and unique 9-digit number) and the name of the image’s
parent folder, allowing later identification of the image’s true identity. Each request also contained
the specific model name (either gpt-4o-mini-2024-07-18 or gpt-4o-2024-08-06), a temperature of 0,
and a seed of 17 (chosen arbitrarily) in order to push the results closer to being deterministic (7).
Each request also had the system prompt as shown below:
You are a helpful assistant who is knowledgeable about different types of salt
crystals and can identify them from images. You can identify these 12 different
salts: NaCl, KCl, NH4Cl, Na2SO4, K2SO4, NH4NO3, NaH2PO4, NaNO3,
Na3PO4, KBr, KNO3, and RbCl.
For training, each request had 12 user messages passed in as context, each formed by 5
images and a line of text. Finally, a 13th user message would be included in the request, reading,
“Identify this salt with just the name.” In other words, the models were trained using 12 sets of
images labeled with their corresponding salt types, allowing the models to learn the visual
characteristics of each salt.
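As an illustration, one line of the batch input file could be assembled as sketched below. This is an editorial sketch rather than the study's script: the folder layout, helper names, and custom-ID encoding are assumptions, while the request fields (custom_id, method, url, body) and the base64 image-message format follow OpenAI's publicly documented Batch and Chat Completions APIs at the time of writing.

import base64, json, random
from pathlib import Path

SALTS = ["NaCl", "KCl", "NH4Cl", "Na2SO4", "K2SO4", "NH4NO3",
         "NaH2PO4", "NaNO3", "Na3PO4", "KBr", "KNO3", "RbCl"]
SYSTEM_PROMPT = ("You are a helpful assistant who is knowledgeable about different "
                 "types of salt crystals and can identify them from images. You can "
                 "identify these 12 different salts: " + ", ".join(SALTS) + ".")

def image_part(path: Path) -> dict:
    b64 = base64.b64encode(path.read_bytes()).decode()
    return {"type": "image_url", "image_url": {"url": f"data:image/jpeg;base64,{b64}"}}

def build_request(test_image: Path, train_dir: Path, model: str) -> dict:
    messages = [{"role": "system", "content": SYSTEM_PROMPT}]
    for salt in SALTS:  # 12 labelled training messages, 5 images each
        examples = sorted((train_dir / salt).glob("*.jpg"))[:5]
        content = [image_part(p) for p in examples]
        content.append({"type": "text", "text": f"These are images of {salt}."})
        messages.append({"role": "user", "content": content})
    messages.append({"role": "user",
                     "content": [image_part(test_image),
                                 {"type": "text", "text": "Identify this salt with just the name."}]})
    custom_id = f"{test_image.parent.name}_{test_image.stem}"  # folder name encodes the label
    return {"custom_id": custom_id, "method": "POST", "url": "/v1/chat/completions",
            "body": {"model": model, "temperature": 0, "seed": 17, "messages": messages}}

if __name__ == "__main__":
    train_dir, test_dir = Path("train_images"), Path("test_images")
    with open("batch_input.jsonl", "w") as f:
        for salt in SALTS:
            pool = list((test_dir / salt).glob("*.jpg"))
            for img in random.sample(pool, min(100, len(pool))):
                f.write(json.dumps(build_request(img, train_dir, "gpt-4o-2024-08-06")) + "\n")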
Once all the JSONL files (batches) were created programmatically, they were then sent to
be processed and receive a response from the specified model. For convenience and ease of use for
non-experts, the web interface was used for batch processing (https://platform.openai.com/batches),
though the underlying logic is the same as using the APIs. Initially, due to OpenAI’s usage tier
system, only one batch of a relatively smaller size could be done at a time, but at higher tiers it is
possible to run multiple large batches simultaneously without hitting a rate limit.
Each batch, once fully processed, outputs a JSONL file that contains the responses received
for each of the requests within the batch. In order to analyze the data, the custom ID associated with
each request is used to identify what salt the image is in actuality, and the model’s response is also
recorded. This is done by searching the response for the first mention of one of the 12 salts, which
should be the only salt named in the concise outputs, due to the prompt.
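A minimal sketch of this parsing step is given below (editorial illustration only); the nested field access assumes the documented batch output layout, and the custom-ID convention matches the request-building sketch above.

import json

SALTS = ["NaCl", "KCl", "NH4Cl", "Na2SO4", "K2SO4", "NH4NO3",
         "NaH2PO4", "NaNO3", "Na3PO4", "KBr", "KNO3", "RbCl"]

def first_salt_mentioned(text: str) -> str | None:
    # Return the salt whose name appears earliest in the model's reply.
    hits = [(text.find(s), s) for s in SALTS if s in text]
    return min(hits)[1] if hits else None

def load_results(output_jsonl: str):
    y_true, y_pred = [], []
    with open(output_jsonl) as f:
        for line in f:
            rec = json.loads(line)
            true_salt = rec["custom_id"].split("_")[0]  # parent-folder part of the ID
            reply = rec["response"]["body"]["choices"][0]["message"]["content"]
            y_true.append(true_salt)
            y_pred.append(first_salt_mentioned(reply) or "unknown")
    return y_true, y_pred

if __name__ == "__main__":
    y_true, y_pred = load_results("batch_output.jsonl")
    acc = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    print(f"accuracy: {acc:.2%}")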
Results
Agreements
First, the results for all models were tested with each other for agreement using Cohen’s Kappa.
Cohen’s Kappa is a statistical measure used to evaluate the level of agreement between two raters
while accounting for the likelihood of agreement occurring by chance (10). Cohen's Kappa values
range from -1 to 1, with 1 representing perfect agreement and values close to and lower than 0
representing very poor agreement (11). Figure 2 displays the Cohen’s Kappa values for comparisons
between each of the four batch results.
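With scikit-learn the statistic reduces to a single call once the two trials' predictions have been paired; the labels below are toy values, and how predictions from trials drawn over different random image subsets are paired is an implementation choice not shown here.

from sklearn.metrics import cohen_kappa_score

# Toy example: predictions from two paired trials over the same images.
trial_a = ["NaCl", "KCl", "Na3PO4", "KBr", "NaCl", "RbCl"]
trial_b = ["NaCl", "NaCl", "Na3PO4", "KBr", "NaCl", "Na2SO4"]

kappa = cohen_kappa_score(trial_a, trial_b)
print(f"Cohen's kappa between the two trials: {kappa:.2f}")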
Figure 2: Cohen’s Kappa between every pair of trials
The 4o-mini batch 1 and batch 2 trials, when compared to each other, had a kappa value of
0.91, signifying high consistency between the trials of the same model. This holds true for the full-
sized 4o model as well, with its batch 1 and batch 2 trials achieving an even higher kappa value of
0.96.
That each kappa falls below 1, meaning trials of the same model do not agree perfectly, is to be expected. Since the batches used different random sets of 100 images
the pool of 500 total images for each salt, each batch had slightly different testing images, meaning
the model is expected to have different responses. Still, due to the overall similarity between images
of the same salt, we can still apply Cohen’s Kappa.
When comparing the full-sized 4o and the 4o-mini models, however, the kappa values are
near 0, indicating virtually no agreement across the models. Clearly, the two models are consistent
throughout their own trials but have minimal agreement with each other’s predictions when
accounting for agreement by chance.
Accuracies
Given the consistency within and disagreement between models, it is then crucial to know which
model is more accurate. In terms of pure accuracy, the 4o full-sized model trials show significantly
better results, with 57.25% and 57.17% accuracy, respectively. Meanwhile, the 4o-mini model
responses had a relatively low 11% and 10.09% accuracy, respectively. However, both the mini and
full models had higher accuracy than the guessing accuracy would be given 12 salts (8%), though
the mini model only exceeds that marker by a marginal amount.
Figure 3: Batch 1 4o-mini Model Responses
Looking closer at the responses from the first batch of the mini model, as can be seen in
Figure 3, there is a clear bias towards Na₃PO₄ in the model’s responses. In fact, the accuracies for
every other salt are substantially lower, suggesting that the overall accuracy for the mini model is
deceptively high due to its tendency to predict Na₃PO₄. This pattern is reconfirmed in Figure 4, as
the second batch of the mini model’s responses demonstrates a similar pattern, with approximately
55% of all its predictions being Na₃PO₄. This bias, which is not present in the full model results,
suggests potential limitations in the model’s visual learning capabilities compared to the 4o model.
Figure 4: Batch 2 4o-mini Model Responses
Comparatively, the first batch of the 4o full-sized model lacks this bias, showing much better
results overall with multiple salts having 90%+ accuracy, even reaching 100% on NaH₂PO₄.
However, the results are still uneven, as can be seen in Figure 5; in particular, KBr and KCl were
very often misidentified as NaCl (98 and 99 times, respectively), leaving their individual accuracies
at 2% and 1%. Clearly, there is massive confusion between these three salts for the model,
alongside a bias towards NaCl. This may be due to the prevalence of NaCl in the model’s original
training data as one of the most common salts. Another major point of confusion was Na₂SO₄, as
the model often misidentified multiple other salts (RbCl, K₂SO₄, and NH₄Cl especially) as Na₂SO₄
instead.
Figure 5: Batch 1 4o Full Model Responses
The second batch remains highly consistent with the first, as seen in Figure 6. This is further
evidenced by the high Cohen’s Kappa value and is in part due to the temperature of 0 applied to
each request (7) along with the model’s superior vision capabilities compared to its mini version.
While there are slight differences numerically, these are to be expected due to differences in the
randomly chosen testing images, and the overall patterns remain the same. The issues from the first
batch are still prevalent in the second batch, alongside similar strengths, maintaining a 100%
accuracy on NaH₂PO₄ and frequent confusions concerning NaCl.
Figure 6: Batch 2 4o Full Model Responses
With the calculated levels of disagreement between the full and mini models as well as the
differences in accuracy, the full model proves clearly superior. In fact, the mini model proves to be
less worthwhile even when prioritizing financial or computational efficacy over accuracy. The
models process images differently, with the mini model requiring more computational resources per
image (higher token counts), negating the price advantage of the mini model for vision applications
(12).
F1 Scores
F1 scores, which provide a balanced measure of precision and recall, were also calculated for each
salt type across all models. The macro F1 scores for each model differed slightly from the
accuracies. To be exact, the mini model had 0.0522 and 0.0443 F1 scores across trials (compared to
accuracies of 11% and 10.09%), while the full model had 0.5253 and 0.5263 F1 scores (compared
to accuracies of 57.25% and 57.17%). All 12 classes of salts are equally represented in our provided
training data, but the difference between the F1 scores and the accuracy implies an imbalance in
responses. This could be due to a similar imbalance in the original training data of the 4o and 4o-
mini models stemming from the differences in the prevalence of certain salts, or simply difficulties
in identifying certain salts in particular. This difference between accuracy and F1 scores is
especially visible in the mini models, aligning with previous observations in the individual trial
data.
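Both quantities are available directly in scikit-learn; the toy labels below (illustrative only) show how over-predicting a single salt leaves accuracy higher than the macro F1.

from sklearn.metrics import accuracy_score, f1_score

# Toy ground-truth labels and predictions; NaCl is over-predicted on purpose.
y_true = ["NaCl", "KCl", "KBr", "Na3PO4", "NaH2PO4", "RbCl"]
y_pred = ["NaCl", "NaCl", "NaCl", "Na3PO4", "NaH2PO4", "Na2SO4"]

print(f"accuracy: {accuracy_score(y_true, y_pred):.2f}")   # 0.50
print(f"macro F1: {f1_score(y_true, y_pred, average='macro', zero_division=0):.2f}")  # lower than accuracy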
Figure 7 shows the F1 scores for each of the 12 salt types for all the trials. The full model is
obviously more capable overall, but it shares a few weaknesses and strengths with the mini model.
Firstly, the two most obvious weaknesses in the responses are KBr and KCl, both of which have
extremely low F1 scores for both models. Clearly, there is heavy confusion concerning those salts.
Beyond that, Figure 7 also elucidates other weaknesses of the models, such as K₂SO₄, NaCl,
and RbCl; this could be attributed to class imbalance caused by prior knowledge the model has
(especially for common salts like NaCl) or possible deficiencies in the training data. Many of the
salts also appear similar on a macroscopic scale, making them difficult to discern and identify due
to the variability in their possible forms.
Figure 7: F1 Scores for every salt across all trials
Conclusions
In lieu of more reliable methods, large language models with vision capabilities are a possible
method of identifying the original salt from an image of its evaporite. In testing, between the 4o-
mini and full 4o models, the mini model performed drastically worse, proving to be ineffective even
when considering cost and computing efficiency. For the full models, we see considerably higher
than random accuracy and F1 scores, indicating acceptable performance for initial testing; potential
improvement can be found in the fine-tuning feature (13), leaving an opportunity for better results.
In general, vision models excel at identifying common objects in images but struggle when it comes
to learning from images to identify new things—and, in particular, when it comes to images that
require granular attention to detail and can often be homogeneous between classes (4). However,
such confusion is not unexpected and can be improved on by allowing for greater attention to detail
in models. Overall, the full GPT-4o model shows promising results for salt-evaporite analysis,
allowing for quick picturing and identification of salts when provided with the appropriate training
data.
References
(1) Da Silveira Tortolero Araujo Lourenço, M.; Di Maggio, R. M.; Germinario, C.; Grifa, C.; Izzo,
F.; Langella, A.; Mercurio, M. Discovering halite traces on a victim’s clothing through a forensic
geoscience analytical approach: A suspicious case in Italy. Forensic Sci. 2024, 4 (3), 396−408. DOI:
10.3390/forensicsci4030024.
(2) Osterloo, M. M.; Hamilton, V. E.; Bandfield, J. L.; Glotch, T. D.; Baldridge, A. M.; Christensen,
P. R.; Tornabene, L. L.; Anderson, F. S. Chloride-bearing materials in the southern highlands of
Mars. Science 2008, 319 (5870), 1651−1654. DOI: 10.1126/science.1150690.
(3) Batista, B. C.; Tekle, S. D.; Yan, J.; Dangi, B. B.; Steinbock, O. Chemical composition from
photos: Dried solution drops reveal a morphogenetic tree. Proc. Natl. Acad. Sci. U.S.A. 2024, 121
(27). DOI: 10.1073/pnas.2405963121.
(4) Shahriar, S.; Lund, B. D.; Mannuru, N. R.; Arshad, M. A.; Hayawi, K.; Bevara, R. V. K.;
Mannuru, A.; Batool, L. Putting GPT-4O to the sword: A comprehensive evaluation of language,
vision, speech, and multimodal proficiency. Appl. Sci. 2024, 14 (17), 7782. DOI:
10.3390/app14177782.
(5) Cheng, Z.; Kasai, J.; Yu, T. Batch prompting: Efficient inference with large language model
APIs. arXiv 2023, arXiv:2023.emnlp-industry.74. DOI: 10.18653/v1/2023.emnlp-industry.74.
(6) OpenAI. Batch API. https://platform.openai.com/docs/guides/batch (accessed Nov 24, 2024).
(7) Ouyang, S.; Zhang, J. M.; Harman, M.; Wang, M. An empirical study of the non-determinism of
ChatGPT in code generation. ACM Trans. Softw. Eng. Methodol. 2024. DOI: 10.1145/3697010.
(8) Steinbock Group. Saltscapes: Images of Salt Crystals Formed by Evaporation.
https://www.chem.fsu.edu/~steinbock/saltscapes.php (accessed Nov 30, 2024).
(9) OpenAI. Assistants API Overview (Python SDK).
https://cookbook.openai.com/examples/assistants_api_overview_python (accessed Nov 24, 2024).
(10) Warrens, M. J. Five ways to look at Cohen’s kappa. J. Psychol. Psychother. 2015, 5 (4). DOI:
10.4172/2161-0487.1000197.
(11) Sim, J.; Wright, C. C. The kappa statistic in reliability studies: Use, interpretation, and sample
size requirements. Phys. Ther. 2005, 85 (3), 257−268. DOI: 10.1093/ptj/85.3.257.
(12) OpenAI. Pricing. https://openai.com/api/pricing/ (accessed Nov 28, 2024).
(13) Wu, E.; Wu, K.; Zou, J. FineTuneBench: How well do commercial fine-tuning APIs infuse
knowledge into LLMs? arXiv 2024, arXiv:2411.05059.
|
ai_researcher | 2 | A_Statistical_Analysis_of_LLMs'_Self-Evaluation_Using_Proverbs.pdf | Pride and Prejudice: LLM Amplifies Self-Bias in Self-Refinement
Wenda Xu†, Guanglei Zhu‡, Xuandong Zhao†, Liangming Pan†,
Lei Li‡, William Yang Wang†
†University of California, Santa Barbara, ‡Carnegie Mellon University
{wendaxu,xuandongzhao,liangmingpan,william}@cs.ucsb.edu,
{guanglez,leili}@cs.cmu.edu
arXiv:2402.11436v2 [cs.CL] 18 Jun 2024
Abstract
Recent studies show that large language mod-
els (LLMs) improve their performance through
self-feedback on certain tasks while degrade
on others. We discovered that such a con-
trary is due to LLM’s bias in evaluating their
own output.
In this paper, we formally de-
fine LLM’s self-bias – the tendency to favor
its own generation – using two statistics. We
analyze six LLMs (GPT-4, GPT-3.5, Gemini,
LLaMA2, Mixtral and DeepSeek) on transla-
tion, constrained text generation, and mathe-
matical reasoning tasks. We find that self-bias
is prevalent in all examined LLMs across mul-
tiple languages and tasks. Our analysis reveals
that while the self-refine pipeline improves the
fluency and understandability of model out-
puts, it further amplifies self-bias. To miti-
gate such biases, we discover that larger model
size and external feedback with accurate as-
sessment can significantly reduce bias in the
self-refine pipeline, leading to actual perfor-
mance improvement in downstream tasks. The
code and data are released at https://github.
com/xu1998hz/llm_self_bias.
1 Introduction
Large language models (LLMs) have shown strong
capabilities in many NLP tasks. While these mod-
els still make mistakes, recent studies show that
“self-refine” (also known as “self-reflection”) is
promising to rectify errors based on LLM’s self-
feedback (Madaan et al., 2024; Chen et al., 2024;
Shinn et al., 2024; Manakul et al., 2023; Pan et al.,
2023). Meanwhile, opposite study also shows that
LLMs fail to correct their mistakes and their perfor-
mance even gets worse after self-feedback (Huang
et al., 2023b). These contradictory results sug-
gest that LLM’s self-feedback is unreliable. Self-
refine procedure relies on LLM’s evaluation capa-
bility of the generated text. We hypothesize that if
there is a bias during the self-evaluation process,
such bias will be amplified during iterative self-
Figure 1: How LLM’s self-feedback inflates scores com-
pared to human assessment. Bias is the mean differ-
ence between LLM and human scores, while skewness
(Dskew) measures the asymmetry of their distribution
around zero. Non-biased estimation will have Dskew=0.
refinement. This is consistent with a prior finding
that LM-based metrics (e.g. BARTScore) exhibit
“narcissism” during self-evaluation, i.e., the metric
model favors text generated by the same underly-
ing language model in the context of summariza-
tion tasks (Liu et al., 2023b). However, it remains
unclear whether bias exists universally in LLMs
across a wide range of tasks. How to quantify such
biases? How does this “narcissism” impact LLM’s
self-refinement?
In this work, we define “self-bias” as the degree
that an LLM favors its own generation. We pro-
pose to use two principled statistics to estimate
self-bias in LLM’s self-refinement procedure. The
first one measures the degree of inflation in the
LLM’s self-evaluation compared to the true (hu-
man) evaluation. The second measures whether
LLM’s self-evaluation is skewed compared to the
true estimate. Figure 1 illustrates these two statis-
tics. We examine self-bias scores on six diverse
LLMs, covering four languages across three dis-
tinct tasks: machine translation, constrained text
generation, and mathematical reasoning. We find
that self-bias is universal in self-refine and self-
rewarding pipelines, regardless of the languages
and tasks. This bias causes LLMs to optimize for
false positive corrections rather than improving the
actual output quality.
We further investigate what is the real benefit
of self-refine. We find that while the self-refine
pipeline improves the fluency and understandabil-
ity of model outputs, it does not necessarily lead to
intended improvements as specified in the prompt.
Moreover, LLMs may favor texts that mirror their
style, potentially leading to false positive optimiza-
tion and reduced diversity in text generation. To
mitigate the self-bias, we propose two solutions:
increasing the model size and incorporating ex-
ternal feedback to provide accurate assessment,
thereby directing the LLM towards more accurate
self-correction. Our contributions are:
1. We formally define the self-bias of an LLM us-
ing two principled estimated statistics.
2. We quantify self-biases for six diverse LLMs
and find that self-bias amplifies during self-
refine across many languages and tasks.
3. We observe two factors that contribute to self-
bias and pinpoint two directions to mitigate it
and elicit LLMs’ self-correction ability.
2 Related Work
Large Language Model Self-correction. Re-
cent works demonstrate that LLM can utilize its
own feedback signal to refine itself (Madaan et al.,
2024; Chen et al., 2024; Shinn et al., 2024). Wang
et al. (2023) further proposed to sample diverse
reasoning paths and use a majority vote to find the
most confident answer. Huang et al. (2023a) lever-
ages self-consistency to further fine-tune the LLM
on the most confident reasoning path with diverse
instruction formats. On the other hand, LLM’s self-
feedback can also be used as a reward signal to
further align LLM to follow instructions (Gulcehre
et al., 2023; Yuan et al., 2024).
Despite some demonstrations of performance improvements, most findings indicate that LLMs struggle to rectify their initial mistakes, and their performance even worsens after self-correction (Huang et al., 2023b; Tyen et al., 2023;
Ke et al., 2023). This issue arises because the
quality of the model’s self-generated feedback is
bounded by its existing knowledge and abilities
(Stechly et al., 2023; Hong et al., 2023). Therefore,
internal feedback may not offer any extra advantage
for improving the results; it might even steer the
model away from the correct answer (Valmeekam
et al., 2023). However, prior works only had em-
pirical observations on this phenomenon, while
lacking a quantitative analysis. Moreover, prior
works only focus on specific tasks, such as reason-
ing or code generation. In this work, we are the first
to quantitatively analyze the self-bias of different
LLMs across three tasks and four languages, which
provides a novel and generalizable view to address
the perils of self-refine.
LLMs as Evaluators. Liu et al. (2023a) lever-
ages GPT-4 to evaluate text through chain-of-
thoughts prompting. Fu et al. (2023) leverages
GPT-3’s sequence likelihood to estimate model per-
formance. Kocmi and Federmann (2023); Xu et al.
(2023) designed detailed error schemes for LLM to
output fine-grained error annotations. Despite the
popularity of using LLMs as evaluators, Koo et al.
(2023) pointed out that LLM exhibits cognitive
bias when evaluating the text, misaligning from
human preference. Zheng et al. (2023) pointed
out LLMs have verbosity and self-enhancement
bias, which makes them prefer long and verbose an-
swers and answers generated by themselves. Chang
et al. (2023) found out that LLM prefers memo-
rized text over non-memorized text, creating unfair
judgments over texts. Deutsch et al. (2022); Liu
et al. (2023b) point out that reference-free metrics
are inherently biased on their own outputs.
Although the above empirical studies provide valu-
able insights, they lack a formal definition to quantify those biases and do not connect them to the
self-refine framework. In this work, we define and
quantify self-bias and provide the first in-depth
analysis of its impact on the self-refine pipeline.
We analyze potential bias attributions and pinpoint
two mitigation directions.
3 Quantifying Self-Bias
This section outlines the approach used to quantify
the self-bias exhibited by LLMs in an iterative self-
refinement pipeline. We employ statistical bias
and distance skewness (Szekely and Móri, 2006)
estimation to measure self-bias.
3.1 Iterative Self-Refinement in LLMs
Self-refinement is an inference time method, in
which the LLM first generates a response yi to a
given prompt x, and then the same LLM generates
feedback fi based on the candidate output yi and
input x. Based on feedback fi, input x, and candi-
date output yi, the LLM then generates a refined
output ri. LLM iterates between the feedback and
the refinement steps, continuing until it reaches a
predetermined number of iterations. At each refine-
ment step, the refined output will only be accepted
if it demonstrates superior quality compared to the
previously generated text. The quality of the text is
assessed through self-feedback from the language
model itself. At each feedback or refinement step,
LLM only sees the last iteration’s generation or
feedback, without accessing the entire history of
output or feedback.
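The loop can be written compactly as below; this is an illustrative sketch in which generate, feedback, and refine stand for the task-specific prompted model calls, and feedback is assumed to return a numeric self-assessed score together with textual comments.

from typing import Callable, Tuple

def self_refine(x: str,
                generate: Callable[[str], str],
                feedback: Callable[[str, str], Tuple[float, str]],
                refine: Callable[[str, str, str], str],
                n_iters: int = 10) -> str:
    """Iterative self-refinement with an accept-only-if-better rule.
    `generate`, `feedback`, and `refine` wrap the task-specific prompts;
    `feedback` returns (self-assessed score, textual comments)."""
    y = generate(x)                      # initial candidate y_0
    score, comments = feedback(x, y)
    for _ in range(n_iters):
        r = refine(x, y, comments)       # refinement conditioned on the last feedback only
        r_score, r_comments = feedback(x, r)
        if r_score > score:              # accept only if the self-feedback score improves
            y, score, comments = r, r_score, r_comments
    return y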
3.2 Bias Estimation
We estimate the self-bias of LLMs using the statis-
tical bias definition. This bias is characterized by
the disparity between an LLM’s predicted quality
score and the expected quality score, as follows:
Bias(\hat{\theta}) = \frac{1}{n} \sum_{i=1}^{n} \left( E[\hat{\theta}_i] - \theta_i \right), \qquad (1)
where E[ˆθi] is an expected LLM’s quality predic-
tion at sample i, and θi denotes the true quality of
sample i. Ideally, θi should be derived from hu-
man annotations, for example, multidimensional
quality metrics (MQM) human annotations (Fre-
itag et al., 2021) for machine translation, or pre-
defined criteria such as word coverage for con-
strained text generation (Madaan et al., 2024). The
LLM’s quality prediction is expected to precisely
follow the human annotation procedure or prede-
fined criteria, ensuring consistency between θ and
E[ˆθ]. When Bias(ˆθ) > 0, the LLM assigns a higher
quality score to its own sample compared to the
expected quality score. When Bias(ˆθ) < 0, the
LLM underestimates the sample quality compared
to the expected quality score. The larger the value
of Bias(ˆθ), the more pronounced the LLM’s bias
against its own samples.
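Computationally, Eq. (1) is the mean of paired score differences; a minimal sketch with made-up MQM-style numbers is shown below.

import numpy as np

def self_bias(model_scores, true_scores) -> float:
    """Eq. (1): mean difference between the LLM's self-assessed quality
    and the reference quality over n samples."""
    model_scores = np.asarray(model_scores, dtype=float)
    true_scores = np.asarray(true_scores, dtype=float)
    return float(np.mean(model_scores - true_scores))

if __name__ == "__main__":
    # Toy MQM-style scores (0 = perfect, -25 = worst).
    llm = [-1, 0, -2, 0, -1]
    ref = [-6, -4, -8, -3, -5]
    print(f"bias estimate: {self_bias(llm, ref):.2f}")  # positive means inflated self-evaluation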
3.3 Distance Skewness Estimation
In an ideal scenario, an unbiased LLM should
have equal chance of over-estimation and under-
estimation of text quality (Bias(ˆθ) = 0), resulting
Figure 2: Bias(ˆθ) = 0 does not guarantee a symmetric
distribution of E[ˆθ] − θ. One tail could be long and thin,
while the other is short and fat (shown in the right figure).
We use distance skewness to measure the asymmetry
of distribution. Therefore, using two meta-metrics as
complimentary, we can measure the self-bias of LLM.
in a perfectly symmetric distribution when plotting
E[ˆθ] − θ. However, Bias(ˆθ) = 0 does not guaran-
tee a symmetric distribution (In Figure 2, one tail
could be long and thin, while the other is short and
fat, yet they balance out overall). Therefore, we
introduce another meta-metric, distance skewness,
to measure the asymmetry of E[ˆθ]−θ’s distribution.
Specifically,
dSkew_n(X) = 1 - \frac{\sum_{i,j} \| x_i - x_j \|}{\sum_{i,j} \| x_i + x_j - 2\gamma \|}, \qquad (2)
where xi and xj are two independent, identically distributed samples drawn from E[ˆθ] − θ. dSkewn(X)
measures the asymmetry of X with respect to
γ. Distance skewness ranges between 0 and 1.
dSkewn(X) equals 0 if and only if X is diago-
nally distributed with respect to γ. dSkewn(X) equals 1
if and only if X is distributed at a constant on one
side of γ. A higher distance skewness indicates a
more asymmetric distribution of E[ˆθ] − θ. In our
experimental setup, we use both bias and distance
skewness to measure the model’s bias towards its
quality prediction.
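Eq. (2) can be estimated directly from the sample of score differences using the full pairwise sums; the implementation below is an illustrative sketch, with γ defaulting to 0 in line with the "around zero" reading in Figure 1.

import numpy as np

def distance_skewness(x, gamma: float = 0.0) -> float:
    """Eq. (2): distance skewness of the score differences x = E[theta_hat] - theta
    with respect to gamma. Values near 0 indicate a diagonally symmetric sample;
    values near 1 indicate that the mass sits on one side of gamma."""
    x = np.asarray(x, dtype=float)
    diff = np.abs(x[:, None] - x[None, :]).sum()
    refl = np.abs(x[:, None] + x[None, :] - 2.0 * gamma).sum()
    return 1.0 - diff / refl if refl > 0 else 0.0

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    symmetric = rng.normal(0, 1, 500)
    one_sided = np.abs(rng.normal(0, 1, 500))
    print(f"symmetric sample: {distance_skewness(symmetric):.3f}")  # close to 0
    print(f"one-sided sample: {distance_skewness(one_sided):.3f}")  # clearly larger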
4 Analyzing LLM’s Self-Bias
4.1 Experimental Setup
We include three closed-source LLMs (GPT-4
(Achiam et al., 2023), GPT-3.5-Turbo and Gemini
(Team et al., 2023)) and three open-source LLMs
(LLaMA2-7B (Touvron et al., 2023), Mixtral-MOE
8x7B (Jiang et al., 2024) and DeepSeekMoE 16B
(Dai et al., 2024)). These models have been shown
to have strong instruction-following capabilities
(Madaan et al., 2024; Shinn et al., 2024), making
them well-suited to demonstrate self-bias.
For each model, we first prompt it to produce the
initial generation. Then, we prompt the model
to generate the feedback for the initial generation.
The model takes in both the feedback and the prior
step generation to produce a refined output. We
will only accept refinement if the feedback score is
improved on the refined output. We listed specific
model API/checkpoints in Appendix Section A.
Machine Translation. We evaluated LLMs on
Flores-200 (Costa-jussà et al., 2022) dataset with
four language pairs: Yoruba to English (Yor-En),
Javanese to English (Jav-En), Armenian to English
(Arm-En), and Igbo to English (Ig-En), using 100
test examples per language pair. We concentrate on
low-to-medium resource language pairs, as Kocmi
et al. (2023) indicate that LLMs like GPT-4 already
perform at a nearly human-like level in high re-
source language pairs such as Chinese-to-English,
leaving limited potential for further improvement
through self-refine.
To ensure high-quality evaluations, we utilized
feedback prompts based on the MQM human an-
notation from Freitag et al. (2021), as in Kocmi
and Federmann (2023). LLMs will input source
text and candidate text and output feedback, includ-
ing error location, error type, and severity labels.
We adopt the same error scoring as Freitag et al.
(2021), assigning −1 for minor errors and −5 for
major errors, with a score range of 0 to −25 (0 for
perfect translations, −25 for samples with more
than five severe errors). The details of the prompts
are provided in the Appendix Table 8, 9 and 10.
Ideally, human raters would have evaluated each
sample, but due to cost and scalability constraints,
we utilized the reference-based learned metric
BLEURT (Sellam et al., 2020) as an approximation
of human judgments. BLEURT generates quality
scores based on the similarity between candidate
and reference translations. To align BLEURT’s
score distribution with that of human ratings, we
employed quantile mapping (Cannon et al., 2015),
yielding a score range from 0 to -25. Although au-
tomatic metrics are primarily used, we also conduct
modified MQM human evaluations (Freitag et al.,
2021) for validation purposes. Our bias estimation
ranged from -25 to 25. Details on quantile mapping
are provided in the Appendix Section B.
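A minimal form of this mapping is sketched below; the calibration distributions are synthetic stand-ins rather than the actual BLEURT and MQM score pools, and only the quantile-matching step itself is shown.

import numpy as np

def quantile_map(raw_scores, calib_raw, calib_target):
    """Map each raw metric score to the target (MQM-style) scale by matching
    empirical quantiles: find each score's quantile in the calibration metric
    distribution, then read off the same quantile of the target distribution."""
    raw_scores = np.asarray(raw_scores, dtype=float)
    calib_raw_sorted = np.sort(calib_raw)
    quantiles = np.searchsorted(calib_raw_sorted, raw_scores) / len(calib_raw_sorted)
    return np.quantile(calib_target, np.clip(quantiles, 0.0, 1.0))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    calib_bleurt = rng.beta(5, 2, 1000)     # stand-in BLEURT calibration scores
    calib_mqm = -25 * rng.beta(2, 8, 1000)  # stand-in MQM scores in [-25, 0]
    mapped = quantile_map([0.3, 0.6, 0.9], calib_bleurt, calib_mqm)
    print(np.round(mapped, 2))              # low BLEURT maps to strongly negative MQM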
Constrained Text Generation. We conducted
experiments on commonsense text generation, fol-
lowing (Lin et al., 2020). We tested LLMs on 100
examples from the CommonGen Hard dataset. For
each testing instance, the large language model
(LLM) received approximately 30 concepts and
was tasked with generating a fluent and logically
sound text. To generate the initial output, we
adopted a similar prompt design to that of (Lin
et al., 2020). Next, we provided two ICL feedback
examples to help the LLM identify missing con-
cepts in its initial output. In each feedback example,
the LLM was given concept words and the previ-
ous generation and asked to indicate any missing
concepts. This feedback allowed the LLM to revise
its output and generate a text with better coverage
of the input concepts. The details of the prompts
are included in the Appendix Table 12, 13 and 14.
To evaluate the coverage of the generated texts,
we adopted the evaluation metric used in (Madaan
et al., 2024). This metric uses strict string matching
to determine whether each concept word from the
input appears in the generated text (metric outputs 1
if all concepts are covered and 0 otherwise). From
feedback of LLM’s missing concepts, we assigned
a binary score (0 or 1) to each text based on its full
coverage of concepts. Since our string-matching
metric and LLM feedback score were on the same
scale, we were able to compute bias and distance
skewness directly. The range of bias estimation is
between −1 to 1.
Mathematical Reasoning. We conducted exper-
iments on mathematical reasoning. We tested
LLMs on 100 examples from the MATH testing
set (Hendrycks et al., 2021). For each instance,
LLM receives a problem statement and generates
a step-by-step solution with a final answer. In this
task, we use the self-refine pipeline by providing
the feedback on the step-by-step solution. In each
iteration, the previous solution will be compared
against the ground truth answer, outputting 1 if they
are matched and 0 otherwise. Therefore, we can
directly compute bias and distance skewness. The
range of bias estimation is between −1 to 1. The
details of the prompts are included in the Appendix
Table 11. In addition, we also conducted exper-
iments by replacing the self-evaluation (LLM as
evaluator) with self-consistency verification (self-
consistency as an evaluator) (Huang et al., 2023a).
We include those results in the Appendix D.
4.2 Self-Bias Amplification during Iterative
Refinement
Machine Translation.
In Figure 3, we illustrate
that all large language models (LLMs) exhibit a
Figure 3: Average Bias and Dskew estimations for Yor-
En, Jav-En, Arm-En, and Ig-En translations on FLo-
res200, with the x-axis showing self-refine steps, re-
veal that all LLMs exhibit self-bias, where open-source
LLMs exhibit higher levels than GPT-4 and Gemini.
self-bias in the self-refine pipeline. Notably, open-
source LLMs and GPT-3.5-Turbo tend to exhibit
higher levels of self-bias throughout iterations than
stronger instruction-following LLMs, such as GPT-
4 and Gemini. This suggests that GPT-4 and Gem-
ini possess a certain level of capability in resist-
ing self-bias. However, despite some robustness
demonstrated by GPT-4 and Gemini, we observe
a consistent amplification of self-bias through the
self-refine pipeline across four language directions,
indicating that even these advanced LLMs are sus-
ceptible to self-bias amplification.
In Figure 4, we illustrate a comparison be-
tween GPT-4 and Gemini’s quality assessments
of their own outputs and performance measured
by reference-based BLEURT over ten iterations.
Our findings suggest that the primary reason for
the amplification of bias during self-refine itera-
tion is that actual performance does not improve
through iterations.
Instead, GPT-4 and Gemini
mistakenly perceive performance improvements in
their refined outputs. This discrepancy between the
false positive performance measure and the true
performance measure grows larger with each iter-
ation. The appendix Section C details Gemini’s
shift from right-skewed to left-skewed distribution,
resulting in a decrease in distance skewness during
early iterations and an increase in later ones.
Constrained Text Generation. Figure 5 depicts
the amplification of self-bias through ten self-refine
iterations in constrained text generation for GPT-
3.5-Turbo, GPT-4, and Gemini. Notably, GPT-4
exhibits a higher bias estimation at earlier iterations
compared to GPT-3.5-Turbo and Gemini. This can
be attributed to GPT-4’s higher coverage ratio at
initial generation (approximately 40%) compared
Figure 4: GPT-4 and Gemini overestimate improve-
ments in self-refined outputs, leading to amplified bias
over iterations compared to actual performance mea-
sured by BLEURT.
to its counterparts (GPT-3.5-Turbo at around 2%).
Consequently, GPT-4 struggles to identify a few
missing concepts, while GPT-3.5-Turbo and Gem-
ini have more coverage issues and can easily iden-
tify missing input concepts.
As GPT-3.5-Turbo reaches 20% coverage around
the 5th iteration, it experiences a significant rise in
bias and skewness estimation. It is worth noting
that the rate of LLM’s self-estimated improvements
is much higher than the true coverage improve-
ments. This phenomenon results in a saturation of
performance improvements after the 5th iteration
for both GPT-4 and GPT-3.5-Turbo.
Mathematical Reasoning. Figure 6 illustrates
that all large language models (LLMs) exhibit an
increase in bias and skewness estimation in the
iterative self-refine pipeline. This suggests that
LLMs introduce self-biases towards some math
solutions during self-refine.
Human Evaluation on Bias Estimation. We em-
ploy one graduate student to annotate 50 examples
from the 0th and 10th iteration of GPT-4, GPT-
3.5-Turbo and Gemini’s outputs at Yor-En, respec-
tively. The human rater compares candidate text
against reference and labels error location, error
type, and severity labels at candidate text. The
scoring scheme follows MQM style (Freitag et al.,
2021), which matches the scoring range of LLM’s
feedback. Our human score indicates that all three
LLMs have not received measurable improvements
via the self-refine pipeline (The raw human scores