Dataset schema (column summary):

Column           Type    Values / Range
topic            string  2 classes
relevance score  int64   1 to 10
paper name       string  19 to 239 characters
text             string  1.56k to 680k characters
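For readers who want to work with records in this layout, below is a minimal loading-and-filtering sketch. The file name papers.jsonl and the JSON-lines layout are assumptions for illustration; only the four column names above come from the schema.

```python
# Minimal sketch of loading and filtering records with this schema.
# "papers.jsonl" and the JSON-lines layout are assumptions, not a
# documented artifact of this dataset; the field names mirror the schema.
import json

def load_records(path: str):
    """Yield one record per JSON line: topic, relevance score, paper name, text."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            yield json.loads(line)

# Example: keep only highly relevant ai_researcher papers.
relevant = [
    r for r in load_records("papers.jsonl")
    if r["topic"] == "ai_researcher" and r["relevance score"] >= 3
]
for r in relevant:
    print(r["paper name"], len(r["text"]))
```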
ai_researcher
1
Rural_Settlement_Reconstruction_Integrating_Land_Suitability_and_Individual_Difference_Factors_A_Case_Study_of_Pingba_Village_China.pdf
Spatial Accuracy 2020

Towards localized accuracy assessment of remote-sensing derived built-up land layers across the rural-urban continuum

Johannes H. Uhl1,2, Stefan Leyk1,2
1Department of Geography, University of Colorado Boulder, Boulder, Colorado, USA
2Institute of Behavioral Science, University of Colorado Boulder, Boulder, Colorado, USA
*Corresponding author: [email protected]

Abstract
The accuracy assessment of remote-sensing derived built-up land data represents a specific case of binary map comparison, where class imbalance varies considerably across rural-urban trajectories. Local accuracy characterization of such datasets therefore requires strategies that are robust to low sample sizes and to different levels of class imbalance. Herein, we examine the suitability of commonly used spatial agreement measures for localized accuracy characterization of built-up land layers across the rural-urban continuum, using the Global Human Settlement Layer and a reference database of built-up land derived from cadastral and building footprint data.

Keywords: Localized accuracy assessment, spatially explicit accuracy assessment, spatially constrained confusion matrices, rural-urban continuum, Global Human Settlement Layer

With recent technological advances in geospatial data acquisition and processing, as well as in cloud-based geospatial data dissemination and analysis infrastructure, an increasing number of novel geospatial datasets have become available that measure the spatial(-temporal) distribution of human settlements over large spatial extents and at unprecedented spatial granularity. These datasets include the Global Human Settlement Layer (Pesaresi et al. 2015), the Global Urban Footprint (Esch et al. 2013), the High-Resolution Settlement Layer (Facebook Connectivity Lab & CIESIN 2016), and the World Settlement Footprint (Marconcini et al. 2019). While such datasets greatly facilitate the study of urbanization, human-natural systems, and related geographic-environmental processes at unprecedented levels of detail, little research has examined the accuracy of such datasets and how accuracy trajectories can be characterized across the rural-urban continuum, often due to the lack of reliable reference data over sufficiently large spatial extents. Previous work has revealed varying levels of accuracy among different settlement datasets (Klotz et al. 2016), increasing accuracy levels over time in the case of the multi-temporal Global Human Settlement Layer (GHSL), and increases in accuracy from rural towards urban areas (Uhl et al. 2017, Leyk et al. 2018, Uhl et al. 2020). However, these general trends are based on coarse, regional stratification of the study area and on density variations derived from the reference data, and thus possibly neglect local accuracy variations.

Several approaches for localized accuracy assessment of categorical spatial data have been proposed in the past (e.g., Leyk and Zimmermann 2004, Foody 2007, Stehman and Wickham 2011), typically applied to (multi-class) land cover data at relatively coarse spatial resolutions. High-resolution built-up land data, which discriminate between built-up and not built-up land in a binary fashion, differ from multi-class land cover data in several important ways:

1) Class imbalance switches between highly urban and sparsely populated areas: the positive class (i.e., “built-up”) may be the dominant class in urban areas, but is highly underrepresented in rural areas.
2) Localized (i.e., spatially constrained) confusion matrices (of dimension 2x2 in the binary case) characterizing local accuracy may be based on small sample sizes and may contain empty elements, e.g., caused by zero instances of false positives in a local spatial context, prohibiting the calculation of some measures.

Thus, a framework for localized accuracy assessment of built-up land data needs to account for extreme, bi-directional class imbalance, for low sample sizes underlying a spatially constrained confusion matrix, and for the absence of instances of one or more (dis)agreement categories. We are currently developing such a framework, using the GHSL as test data (Figure 1a) and an integrated reference dataset derived from cadastral parcel data and building footprint data (Leyk et al. 2018) as validation data (Figure 1b). We examine the suitability of a variety of commonly used agreement and accuracy measures for local characterization of positional and quantity agreement of built-up land layers, and analyze their interactions with each other and with variables characterizing the rural-urban continuum. We compute surfaces of Percent Correctly Classified (PCC), User’s Accuracy (UA), Producer’s Accuracy (PA), Cohen’s Kappa, F-measure, G-mean, and Intersection-over-Union (IoU) for positional agreement, as well as absolute errors (AE) and relative errors (RE) in built-up area as measures of local quantity agreement. Figures 1c,d show such surfaces for PCC and the F-measure, respectively, for Springfield, Massachusetts, USA, illustrating the differences in local accuracy estimated by PCC and the F-measure, particularly in areas of lower built-up density, where PCC yields inflated values due to class imbalance (i.e., a dominating not-built-up class).

Figure 1: Data examples used in this study, shown for Springfield, Massachusetts, USA: (a) built-up areas (black) from the GHSL in 2014, (b) reference built-up land surface derived from building footprint and cadastral data, and localized accuracy surfaces for (c) PCC and (d) F-measure, both computed within focal windows of 1x1 km. Black areas in (c) and (d) depict no-data areas or areas excluded due to unreliable reference data.

Using the same grid and focal window size, we calculate surfaces of reference built-up area density and of focal landscape metrics (e.g., the area of the largest built-up patch), derived from the reference data, to characterize the density and structure of built-up areas. We assume such metrics to vary with the rural-urban gradient and thus to represent a proxy measure of the rural-urban continuum, ranging from scattered, sparse rural settlements to dense, highly connected built-up areas in urban settlements. Figure 2 shows the relationships between these metrics and local measures of built-up density and structure, computed for the state of Massachusetts and GHSL built-up labels in 2015, revealing interesting, partially contradicting trends of the tested measures across the rural-urban continuum.

Moreover, the variability in these scatterplots indicates high levels of variation among measures due to the conservativeness of their mathematical structure. For example, the Kappa index never exceeds the F-measure, and achieves similar values in regions of low built-up density and where the largest built-up patch area is low (i.e., areas of sparse, scattered settlements). A minimal sketch of how such focal agreement surfaces can be computed is shown below.
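The following sketch is our illustration, not the authors' implementation. It computes per-pixel PCC and F-measure surfaces from aligned binary rasters using a square focal window; the 33-pixel default assumes the 30 m grid and 1x1 km windows mentioned in the text, and the epsilon guard merely sidesteps (rather than solves) the empty-cell problem the authors raise.

```python
# Minimal sketch of localized agreement surfaces for a binary built-up
# layer against a reference layer, using a square focal window.
# Inputs are assumed to be aligned 2D 0/1 arrays.
import numpy as np
from scipy.ndimage import uniform_filter

def focal_agreement(pred: np.ndarray, ref: np.ndarray, window: int = 33):
    """Return per-pixel PCC and F-measure computed in window x window blocks."""
    pred = pred.astype(float)
    ref = ref.astype(float)
    # Focal means of the four confusion-matrix cells (TP, FP, FN, TN);
    # ratios of means equal ratios of counts within each window.
    tp = uniform_filter(pred * ref, size=window)
    fp = uniform_filter(pred * (1 - ref), size=window)
    fn = uniform_filter((1 - pred) * ref, size=window)
    tn = uniform_filter((1 - pred) * (1 - ref), size=window)
    eps = 1e-12                        # guard against empty categories
    pcc = (tp + tn) / (tp + fp + fn + tn + eps)
    precision = tp / (tp + fp + eps)   # User's Accuracy
    recall = tp / (tp + fn + eps)      # Producer's Accuracy
    f_measure = 2 * precision * recall / (precision + recall + eps)
    return pcc, f_measure
```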
Figure 2: Scatterplot matrix generated by pixel-wise comparison of localized positional and quantity agreement surfaces computed at 30 m spatial resolution and within 1x1 km focal windows for the state of Massachusetts, USA.

Future work includes the analysis of the effects of varying spatial support (i.e., the size of the focal windows used to generate localized confusion matrices) and of the analytical unit on local accuracy characterization, as well as of the suitability of built environment and socio-economic variables for uncertainty prediction of remote-sensing derived built-up land data.

ACKNOWLEDGMENT
Support for this work was provided through the Eunice Kennedy Shriver National Institute of Child Health & Human Development of the National Institutes of Health under Award Number P2CHD066613. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.

REFERENCES
Esch, T., Marconcini, M., Felbier, A., Roth, A., Heldens, W., Huber, M., Schwinger, M., Taubenböck, H., Müller, A. and Dech, S. (2013). Urban footprint processor—Fully automated processing chain generating settlement masks from global data of the TanDEM-X mission. IEEE Geoscience and Remote Sensing Letters, 10(6), pp. 1617-1621.
Facebook Connectivity Lab and Center for International Earth Science Information Network - CIESIN - Columbia University (2016). High Resolution Settlement Layer (HRSL). Source imagery for HRSL © 2016 DigitalGlobe. Available at https://ciesin.columbia.edu/data/hrsl/. (Accessed 23-03-2018)
Foody, G.M. (2007). Local characterization of thematic classification accuracy through spatially constrained confusion matrices. International Journal of Remote Sensing, 26(6), 1217-1228.
Klotz, M., Kemper, T., Geiß, C., Esch, T. and Taubenböck, H. (2016). How good is the map? A multi-scale cross-comparison framework for global settlement layers: Evidence from Central Europe. Remote Sensing of Environment, 178, pp. 191-212.
Leyk, S., Uhl, J.H., Balk, D. and Jones, B. (2018). Assessing the accuracy of multi-temporal built-up land layers across rural-urban trajectories in the United States. Remote Sensing of Environment, 204, pp. 898-917.
Leyk, S. and Zimmermann, N.E. (2004). A predictive uncertainty model for field-based survey maps using generalized linear models. International Conference on Geographic Information Science, 191-205.
Marconcini, M., Metz-Marconcini, A., Üreyen, S., Palacios-Lopez, D., Hanke, W., Bachofer, F., Zeidler, J., Esch, T., Gorelick, N., Kakarla, A. and Strano, E. (2019). Outlining where humans live—The World Settlement Footprint 2015. arXiv preprint arXiv:1910.12707.
Pesaresi, M., Ehrlich, D., Ferri, S., Florczyk, A., Freire, S., Haag, F., Halkia, M., Julea, A.M., Kemper, T. and Soille, P. (2015). Global human settlement analysis for disaster risk reduction. The International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, 40(7), p. 837.
Stehman, S.V. and Wickham, J.D. (2011). Pixels, blocks of pixels, and polygons: Choosing a spatial unit for thematic accuracy assessment. Remote Sensing of Environment, 115(12), 3044-3055.
Uhl, J.H. and Leyk, S. (2017). Multi-scale effects and sensitivities in built-up land data accuracy assessments. Proceedings of the International Cartographic Conference 2017, Washington, D.C., USA.
Uhl, J.H., Zoraghein, H., Leyk, S., Balk, D., Corbane, C., Syrris, V. and Florczyk, A.J. (2020). Exposing the urban continuum: Implications and cross-comparison from an interdisciplinary perspective. International Journal of Digital Earth, 13(1), pp. 22-44.
ai_researcher
3
Development_of_an_Analysis_of_Alternatives_Tool_for_Human-Agent_Teaming_Research.pdf
Towards Optimizing and Evaluating a Retrieval Augmented QA Chatbot using LLMs with Human-in-the-Loop

Anum Afzal, Alexander Kowsik, Rajna Fani, Florian Matthes
School of Computation, Information and Technology
Technical University of Munich
{anum.afzal, alexander.kowsik, rajna.fani, matthes}@tum.de

Abstract
Large Language Models have found application in various mundane and repetitive tasks including Human Resource (HR) support. We worked with the domain experts of SAP SE to develop an HR support chatbot as an efficient and effective tool for addressing employee inquiries. We inserted a human-in-the-loop in various parts of the development cycle, such as dataset collection, prompt optimization, and evaluation of generated output. By enhancing the LLM-driven chatbot’s response quality and exploring alternative retrieval methods, we have created an efficient, scalable, and flexible tool for HR professionals to address employee inquiries effectively. Our experiments and evaluation conclude that GPT-4 outperforms other models and can overcome inconsistencies in data through internal reasoning capabilities. Additionally, through expert analysis, we infer that reference-free evaluation metrics such as G-Eval and Prometheus demonstrate reliability closely aligned with that of human evaluation.

1 Introduction
In recent years, incorporating Artificial Intelligence (AI) into various sectors has led to significant improvements in automated systems, particularly in customer service and support. Since the onset of Large Language Models (LLMs), more companies are incorporating Natural Language Processing (NLP) techniques to minimize the need for human support personnel, especially domain experts (Shuster et al., 2021). With a chatbot providing accurate and comprehensive responses promptly, domain experts can redirect their focus towards higher-value tasks, leading to potential cost savings and improved productivity within the HR department. Moreover, an effective chatbot can play a pivotal role in enhancing overall employee satisfaction and engagement by delivering timely and relevant assistance.

To this end, we worked with SAP SE on developing an HR chatbot to evaluate the potential of LLMs on industrial data. We used domain experts as a human-in-the-loop through various iterations of LLM-centric development, such as dataset collection, prompt optimization, and, most importantly, the evaluation of model outputs.

The well-known Retrieval Augmented Generation (RAG) (Lewis et al., 2021) approach is ideal for this use case, as it allows the model to produce more grounded answers, hence reducing hallucinations. We optimized different modules of the standard RAG pipeline, such as the retriever and model prompts, while constantly incorporating feedback from the domain experts. While the retrieval accuracy of an LLM can still be assessed to a degree, the generative nature of LLMs makes evaluation of the generated output quite challenging. To overcome this, we explored the effectiveness of both traditional reference-based and reference-free (LLM-based) automatic evaluation metrics, while using human evaluation as a baseline.

We benchmark OpenAI’s models in our experiments while using the open-source LongT5 (Guo et al., 2022) and BERT (Devlin et al., 2019) as baselines. In essence, both the industry and the research community could benefit from our findings related to the retriever and the reliability of automatic evaluation metrics.
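Before the paper details its pipeline, the following sketch shows the standard RAG loop it builds on: embed the query, retrieve the best-matching article, and prompt an LLM with it. The callables embed_fn and generate_fn are placeholders for any embedding and chat model (the paper uses text-embedding-ada-002 and GPT-4); the prompt wording here is our assumption, not the authors' prompt.

```python
# Minimal sketch of a retrieve-then-generate (RAG) pipeline.
from typing import Callable, List
import numpy as np

def retrieve(query: str, articles: List[str],
             article_vecs: np.ndarray,
             embed_fn: Callable[[str], np.ndarray]) -> str:
    """Return the top-1 article by cosine similarity to the query."""
    q = embed_fn(query)
    sims = article_vecs @ q / (
        np.linalg.norm(article_vecs, axis=1) * np.linalg.norm(q) + 1e-12)
    return articles[int(np.argmax(sims))]

def answer(query: str, articles: List[str], article_vecs: np.ndarray,
           embed_fn: Callable[[str], np.ndarray],
           generate_fn: Callable[[str], str]) -> str:
    """Ground the LLM answer in the retrieved article."""
    context = retrieve(query, articles, article_vecs, embed_fn)
    prompt = (f"Answer the employee question using only the article below.\n"
              f"Article: {context}\nQuestion: {query}")
    return generate_fn(prompt)
```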
2 Corpus
The dataset used in the development of the HR chatbot was compiled from SAP’s internal HR policies with the help of domain experts. Each sample forms a triplet consisting of a Question, an Answer, and a Context; additional metadata such as the user’s region, company, employment status, and applicable company policies are also included. A snippet of such a sample is shown in Appendix A.4. The dataset was compiled from two separate sources to obtain a mix of a gold dataset (FAQ dataset) and a user-utterance dataset (UT dataset). Both datasets follow the same structure; differences exist in the distribution of the questions. We extracted all unique HR articles to form a knowledge base for answering new user questions. Additionally, an evaluation set of 6k samples was used to evaluate both the retriever and the chatbot as a whole.

2.1 Dataset Collection
FAQ Dataset (N≈48k): This is a collection of potential questions, along with their corresponding articles and gold-standard answers. It is carefully created and curated by domain experts based on the company’s internal policies.
UT Dataset (N≈41k): This is a collection of real user utterances (UT) gathered from previous iterations of the chatbot. Inspired by a semi-supervised learning approach, a simplistic text-matching approach was implemented that mapped each user query to a question from the FAQ dataset. The chatbot logs from this development cycle were inspected and corrected by the domain experts.

2.2 Dataset Statistics
Figure 1 shows that the majority of the articles in our dataset have under 4k tokens. Hence, they can easily fit into the context window of OpenAI models. As displayed in Table 1, the most asked questions in the dataset revolve around payslips, leave days of any kind, and questions regarding management.

Figure 1: Distribution over the number of tokens of all unique articles in our HR dataset.

Table 1: Top 10 most frequent user queries
1. How can I change my approver?
2. Where do I see how much leave I have left?
3. How can I view my payslip online?
4. Am I paid during maternity leave?
5. If I am sick whilst on holiday, can I claim my holiday back?
6. Can I cancel a leave request?
7. I have a question about my payslip, who do I contact?
8. Where can I find information about my payslip?
9. Do I receive sick pay?
10. How can I have an overview of my leave?

3 Methodology
In general, the HR chatbot follows the standard RAG pipeline, with optimizations done on individual modules with the help of domain experts, as shown in Figure 2. The methodology illustrates various parts of the chatbot pipeline that are influenced by a human-in-the-loop and is further discussed in Appendix B.

3.1 Retriever
We compiled a comprehensive knowledge base of all possible HR articles occurring in the whole dataset as the basis for retrieval, resulting in roughly 50k unique articles. Given a user utterance, the goal of the retriever is to find the most relevant article from the collection. While the technical details for each retriever may differ, in general, both are embedding-based. Technical details of the retriever module are discussed in Appendix D.1. Moreover, we developed extensive filter functionality, ensuring that the vector search only considers articles relevant to the user, based on attributes like their country, region, or employment status, as shown in Table 4. For example, from the top retrieved articles, we keep only the ones that are applicable to the employee and then pick the article with the maximum similarity score from the filtered list; a sketch of this filtering step follows below.
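This sketch is our illustration of the filtering step, not SAP's code. The metadata field names loosely mirror the sample in Table 4 and are assumptions; the fallback behavior echoes the paper's HRdirect-ticket guidance.

```python
# Minimal sketch of metadata filtering over vector-search hits:
# keep only articles applicable to the requesting employee, then
# return the most similar one. Field names are hypothetical.
from typing import Dict, List, Tuple

def filter_and_pick(hits: List[Tuple[Dict, float]], user: Dict) -> Dict:
    """hits: (article, similarity) pairs ranked by the vector search."""
    applicable = [
        (art, sim) for art, sim in hits
        if art["country_code"] == user["country_code"]
        and art["company_code"] == user["company_code"]
        and user["role"] in art["allowed_roles"]
    ]
    if not applicable:  # no valid article: route the user to a ticket
        raise LookupError("no applicable article; create an HRdirect ticket")
    return max(applicable, key=lambda pair: pair[1])[0]
```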
3.1.1 Dense Passage Retriever (BERT)
Dense Passage Retriever (DPR) fine-tunes the bert-base-uncased embedding to generate a model that, given a user query, retrieves the most relevant article from a set of documents. The dataset used for training was processed to contain questions paired with their respective gold answers, as well as positive and negative contexts for each question. A triplet loss function (Hoffer and Ailon, 2018) was used for training, such that the relevant article served as the positive context, with two random articles from the entire dataset providing the negative contexts. This retriever is used in the framework with the fine-tuned LongT5 model and also serves as a baseline for evaluating the OpenAI retriever.

Figure 2: Block diagram of the methodology introduced in our paper, illustrating baseline and OpenAI models and highlighting the role of the human-in-the-loop during development.

3.1.2 Vector Search (OpenAI)
The OpenAI retriever is a plain vector search that utilizes the text-embedding-ada-002 embedding model by OpenAI to generate embeddings for each article, followed by a similarity search to find the relevant article. To further enhance retrieval accuracy, we implemented various query transformation techniques (Cormack et al., 2009a; see https://docs.llamaindex.ai/en/stable/optimizing/advanced_retrieval/query_transformations/). These methods alter the user query into a different representation using LLMs before the embedding model computes the query vector. The following three query transformation methods were explored and evaluated:

1) Intended Topics: Inspired by Ma et al. (2023), the user question is sent to an LLM with the instruction to return a list of three intended topics of the question, which are then embedded instead of the user question. Example: How to request a parental leave? → parental leave, childcare leave, maternity leave

2) HyDE (Hypothetical Document Embeddings): In this method, introduced by Gao et al. (2022), the user question is transformed by an LLM into three distinct excerpts from potential HR articles answering the original question. These excerpts are then embedded instead of the user question itself. This approach leads to query embeddings that are very close to the article embeddings, because of the very similar content. Example: How to request a parental leave? → To request parental leave, please submit..., If you wish to request..., ...

3) Multi-Query: This method (see https://docs.llamaindex.ai/en/latest/examples/retrievers/reciprocal_rerank_fusion/) employs LLMs to generate multiple variations of a user’s question, varying in length and phrasing but maintaining the same meaning and intent as the original question. We then embed each of these variants individually. Along with the embedded original question, we perform a vector search for each query, combining the results using Reciprocal Rank Fusion (Cormack et al., 2009b). Additionally, we include queries from the Intended Topics and HyDE methods. Example: parental leave request? → How can I request a parental leave?, Where can I apply for parental leave?, ... A sketch of the rank-fusion step is shown below.
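The following is a minimal sketch of Reciprocal Rank Fusion as used by the Multi-Query method: each query variant yields its own ranking, and an article's fused score sums 1/(k + rank) over all rankings it appears in (Cormack et al., 2009b). The constant k = 60 is a common default in the literature; the paper does not state its value, so it is an assumption here.

```python
# Minimal sketch of Reciprocal Rank Fusion over per-variant rankings.
from collections import defaultdict
from typing import Dict, List

def reciprocal_rank_fusion(rankings: List[List[str]], k: int = 60) -> List[str]:
    scores: Dict[str, float] = defaultdict(float)
    for ranking in rankings:                  # one ranking per query variant
        for rank, article_id in enumerate(ranking, start=1):
            scores[article_id] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Usage: fuse the result lists of the original query and its variants.
fused = reciprocal_rank_fusion([
    ["a12", "a7", "a3"],   # ranking for the original question
    ["a7", "a12", "a9"],   # ranking for variant 1
    ["a7", "a3", "a12"],   # ranking for variant 2
])
print(fused[0])  # top fused article id
```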
3.2 NLG Module
3.2.1 LongT5 (Fine-tuning driven)
We fine-tuned LongT5 (Guo et al., 2022), employing the local-attention-based variant (https://huggingface.co/google/long-t5-local-base), which consists of 296 million trainable parameters. This model was fine-tuned on a combination of the FAQ dataset and the UT dataset for a generative question-answering task. To limit computational requirements, we fine-tuned it with a context window of 7168 tokens, retaining approximately ∼86K samples from the original dataset to avoid truncation.

3.2.2 OpenAI Models (Prompt driven)
We used OpenAI’s ChatGPT and GPT-4 to generate the answer to the user’s query by passing both the user query and the retrieved article via a meaningful prompt. We conducted extensive prompt engineering to tailor the responses of the LLMs to the company’s requirements for an HR chatbot. Prompt engineering was an iterative process that included our qualitative analysis and multiple small evaluations of 10-100 sample responses by the company’s HR experts, who served as the human-in-the-loop. We analyzed feedback from these evaluation runs and addressed the main issues in the next iteration of the process to produce the final prompt shown in Table 5.

3.3 Evaluation Framework
For our analysis, we employ reference-based evaluation metrics such as BERTScore (Zhang et al., 2019), ROUGE (Lin, 2004), and BLEU (Papineni et al., 2002). We also explore the concept of using an LLM as an evaluator, and finally, we assess the effectiveness of automated metrics by involving domain experts in a human-in-the-loop process.

3.3.1 Retriever Evaluation
Our primary evaluation metric for the retriever is accuracy, defined as the percentage of times the retriever returns the correct article for a given question.

3.3.2 Human Evaluation Setup
The domain experts who served as the human-in-the-loop brought a high level of precision and insight to the evaluation process. Apart from dataset curation, they also evaluated the performance of the retriever by verifying the correctness of the retrieved articles. After discussion with the domain experts, we settled on four dimensions across which the quality of the model’s output could be evaluated on a score between 1 and 5, following a 5-point Likert (Likert, 1932) scale. One domain expert evaluated 100 samples from each of the fine-tuned LongT5, ChatGPT, and GPT-4 across Readability, Relevance, Truthfulness, and Usability.

3.3.3 Reference-based Metrics
In evaluating the effectiveness of reference-based metrics, we examine two distinct categories: n-gram-based and embedding-based metrics. A sketch of the former follows below.
N-gram-based metrics: N-gram-based metrics, such as BLEU (Bilingual Evaluation Understudy) and ROUGE (Recall-Oriented Understudy for Gisting Evaluation), assess the similarity between the generated response and the ground-truth answer by analyzing the overlap of n-grams.
Embedding-based metrics: Embedding-based metrics, such as BERTScore, leverage deep contextual embeddings from language models like BERT to assess the semantic similarity between generated and reference texts.
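As a concrete illustration of the n-gram overlap idea behind BLEU and ROUGE, the sketch below computes a clipped n-gram precision. It is a toy, not what the paper ran: real BLEU combines multiple n-gram orders with a brevity penalty, and reportable numbers should come from an evaluation library such as sacrebleu or rouge-score.

```python
# Minimal sketch of clipped n-gram precision (the core of BLEU/ROUGE).
from collections import Counter

def ngram_precision(candidate: str, reference: str, n: int = 2) -> float:
    def ngrams(text: str) -> Counter:
        toks = text.lower().split()
        return Counter(tuple(toks[i:i + n]) for i in range(len(toks) - n + 1))
    cand, ref = ngrams(candidate), ngrams(reference)
    overlap = sum(min(c, ref[g]) for g, c in cand.items())  # clipped counts
    total = sum(cand.values())
    return overlap / total if total else 0.0

print(ngram_precision("you can view your payslip online",
                      "your payslip can be viewed online"))
```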
3.3.4 Reference-free Metrics
In the evolving landscape of Natural Language Generation evaluation, LLM-based metrics emerge as a compelling alternative, offering insights into model performance without the constraints of pre-defined reference responses. Details regarding the prompts used for these reference-free metrics are presented in Appendix C.
Prompt-based Evaluation: Prompt-based evaluation is at the forefront of NLG advancements, particularly with the utilization of LLMs (Li et al., 2024). Inspired by G-Eval, we followed the approach described by Liu et al. (2023) and tailored the prompts to be suitable for the evaluation of a question-answering task.
Tuning-based Evaluation: There is currently a significant shift toward leveraging open-source language models, such as LLaMA (Touvron et al., 2023), for fine-tuning purposes. We utilize Prometheus (Kim et al., 2023), which stands out for its fine-tuned evaluation capability, leveraging a large language model to perform nuanced analysis based on customized score rubrics (Li et al., 2024). This approach enables Prometheus to evaluate text generation tasks comprehensively, considering factors such as creativity, relevance, and coherence, without relying on reference texts.

4 Results and Discussion
4.1 Dense Passage Retriever
As depicted in Table 2, the BERT-based DPR surprisingly and significantly outperforms all new methods, with a top-1 accuracy of 22.24%, whereas the OpenAI-based retriever only reaches a top-1 accuracy of 11.12%. Of the latter, the best performer is Multi-Query, at 10.92%, yet this still falls short of the Basic retriever (no query transformation). These results resonate with the findings of Weller et al. (2024), confirming that query transformations do not always lead to better performance.

Table 2: Retriever accuracy on the HR test data and the Stackexchange benchmark dataset for various retriever methods on top-1 retrieved articles

Method            HR Test Dataset (top-1)   Stackexchange English (top-1)
BERT-based DPR    22.24%                    -
Basic             11.12%                    69.5%
Intended Topics    9.33%                    57.25%
HyDE              10.01%                    65.91%
Multi-Query       10.92%                    71.31%

Our understanding is that the retriever performs poorly mainly because of the noise in the dataset. It is worth noting that our dataset contains many variant articles for a given topic or question, with only small differences such as the region or the employee role. Hence, an incorrect article may still contain sufficient knowledge to address user queries. We confirmed these findings with our domain experts and elaborate on them further in Appendix A.3. Further results for up to top-5 articles are shared in Appendix E.1.

However, to assess the effectiveness of the newly implemented methods on a different dataset, we gathered 10k samples from CQADupStack English (Hoogeveen et al., 2015), a collection of English-language questions and their top answers from the Stackexchange English forum. We used the same embedding model as for the HR dataset to embed this new data and evaluated its top-1 accuracy. It can be observed that the Intended Topics method and HyDE both underperform compared to the Basic retriever. However, the Multi-Query method did produce a higher top-1 accuracy. During our experiments, we noticed that these methods are greatly influenced by the choice of query transformation prompts. For instance, when HyDE responses closely matched the desired replies, the accuracy was significantly higher. These methods also achieved higher accuracies than Basic on other types of data, which indicates that performance also depends on the type of data used. This might explain why these methods could not achieve higher accuracy on the HR dataset.
4.2 NLG Evaluation
We use the previously optimized DPRs with the top-1 article for our NLG module, consisting of ChatGPT, GPT-4, and the fine-tuned LongT5, as shown in Figure 2. An overview of all evaluation scores highlighting model performance across several dimensions is summarized in Table 3.

Table 3: Average evaluation scores. BLEU (0 to 1), ROUGE (0 to 1), and BERTScore (-1 to +1) were computed on 200 samples, Prometheus (1 to 5) on 60 samples, and Domain Expert Evaluation (1 to 5) and G-Eval (1 to 5) on 100 samples.

Metric                      ChatGPT   GPT-4   LongT5
Reference-based Evaluation
BLEU Score                  0.27      0.28    0.41
ROUGE-1                     0.48      0.52    0.51
ROUGE-2                     0.36      0.35    0.43
ROUGE-L                     0.46      0.50    0.49
BERTScore_P                 0.88      0.90    0.91
BERTScore_R                 0.96      0.93    0.91
BERTScore_F1                0.90      0.91    0.90
Reference-free Evaluation (LLM-based)
G-Eval: Relevance           4.03      4.51    3.17
G-Eval: Readability         4.26      4.49    3.52
G-Eval: Truthfulness        4.12      4.80    3.36
G-Eval: Usability           4.67      4.79    3.29
Prometheus: Relevance       3.25      3.70    2.83
Prometheus: Readability     3.07      4.22    3.73
Prometheus: Truthfulness    3.20      3.75    3.32
Prometheus: Usability       3.98      4.32    2.83
Domain Expert Evaluation
Human Eval: Readability     4.31      4.76    4.02
Human Eval: Relevance       4.31      4.67    3.46
Human Eval: Truthfulness    4.09      4.41    3.67
Human Eval: Usability       3.32      4.11    2.59

Overall, GPT-4 shows clear dominance in terms of generation capabilities for an HR chatbot. N-gram-based evaluation scores such as ROUGE and BLEU are quite low due to the generative nature of the (L)LMs, as an answer may contain words different from the reference answer. Nonetheless, these results establish GPT-4 as the leading model, effectively combining advanced language skills with the demands of content accuracy and user engagement. On the other hand, the fine-tuned LongT5’s performance is observed to be inferior when benchmarked against the OpenAI models. This outcome is consistent with the anticipated advancements in LLMs, which are progressively outpacing the capabilities of fine-tuning-driven models. The performance of ChatGPT has been notably strong, trailing marginally behind GPT-4 in only a few scoring categories. Its close performance to GPT-4 raises important considerations about the trade-offs between computational efficiency and output quality.

4.3 Correlation Analysis
Inspired by Zhong et al. (2022), we assessed the reliability of the evaluation scores using the Spearman (Myers and Sirois, 2004) and Kendall (Abdi, 2007) correlation coefficients in Table 9; a sketch of this computation is shown below.
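The rank correlations reported in Table 9 can be computed with standard statistics tools. The sketch below is our illustration; the score arrays are made-up stand-ins, not values from the paper.

```python
# Minimal sketch of rank correlation between an automatic metric and
# human scores over the same evaluation samples.
from scipy.stats import kendalltau, spearmanr

human = [4, 5, 3, 4, 2, 5, 4, 3]                     # expert Likert scores
metric = [3.8, 4.6, 3.1, 3.9, 2.7, 4.2, 3.5, 3.4]    # e.g., G-Eval scores

rho, rho_p = spearmanr(human, metric)
tau, tau_p = kendalltau(human, metric)
print(f"Spearman rho = {rho:.3f} (p = {rho_p:.3f})")
print(f"Kendall tau  = {tau:.3f} (p = {tau_p:.3f})")
```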
Human Evaluation & Reference-free Metrics Despite similar average scores between Reference- free metrics and Domain Expert evaluations shown in Table 3, their correlations are low. Since these methods measure linear and ordinal relationships, similar averages in evaluations do not imply a strong correlation as depicted in Table 9. Overall, while Prometheus and G-Eval both serve as proxies for human evaluation, their ef- fectiveness varies by model and evaluation criteria. While G-Eval excels in assessing truthfulness, its capability in evaluating readability and usability lags behind. Prometheus on the other hand, out- performs G-Eval in assessing usability across all models. However, G-Eval shows a steadier perfor- mance across different models, particularly with LongT5, suggesting its robustness in accurate eval- uations. Both metrics show weak alignment in assessing readability, reflecting the inherent chal- lenge of one LLM evaluating another’s ability to produce easily understandable text. Additionally, LLM-based metrics sometimes fail to align with human judgment, particularly when an- swers or instructions involve unfamiliar HR terms or sensitive information. Notably, OpenAI mod- els’ novel answers exhibit lower human correla- tion compared to LongT5, which provides answers more similar to the golden response. 5 Related Work Previously, domain-specific chatbots meant for a specific task were designed using conversational AI frameworks like RASA (Bocklisch et al., 2017). Latest advancements in NLP have shifted focus to- wards employing and optimizing LLM-based RAG (Gao et al., 2024b). Chen et al. (2023) experi- ment with ChatGPT and several other open-source models like Vicuna to benchmark their capabili- ties in RAG, and Wang et al. (2023) use a smaller secondary domain-specific model to assist a big- ger LLM on a domain-specific question answering task on industrial data. Recent studies have ex- plored various retrieval methods, including dense vector retrieval (Karpukhin et al., 2020a), sparse retrieval (Robertson et al., 2004, 2009), and hybrid approaches (Guu et al., 2020a), to improve the rel- evance and diversity of retrieved documents. Guu et al. (2020b) uses various RAG techniques to en- sure that chatbot responses are based on relevant HR policies, leading to accurate and helpful user support. Given the diverse distribution of the text gener- ated by LLMs, conventional metrics are not suit- able for its evaluation (Wei et al., 2021; Belz and Reiter, 2006; Novikova et al., 2017). Consequently, a lot of follow-up research has come up in the area of NLG Evaluation (Gao et al., 2024a; Li et al., 2024). Specifically focusing on RAG, Es et al. (2024) released a Framework for the automatic evaluation of generated output using LLM-based metrics with a focus on faithfulness. A similar ap- proach is followed by Saad-Falcon et al. (2023) in their framework ARES which also evaluates the performance of RAG systems over relevance and faithfulness by fine-tuning a lightweight LM judge. 6 Conclusion By optimizing retrieval techniques and benchmark- ing state-of-the-art LLMs with the help of domain experts, we show how LLM-based applications could benefit from a domain expert as human- in-the-loop within various iterations of the devel- opment. Even though our optimizations on the OpenAI-based retriever show minor improvements, the accuracy remains quite low due to the poor quality of the evaluation dataset. 
Nonetheless, both ChatGPT and GPT-4 show competence when ad- dressing the user query. This hints that the in- ternal reasoning capabilities and domain knowl- edge of these LLMs are strong enough to over- come the knowledge in the supposed incorrect ar- ticle. This also suggests that, given the nature of the dataset used, the accuracy metric used for the evaluation of the retriever is not a good measure of its performance. We employed and studied a range of evaluation metrics and concluded that in contrast to traditional evaluation approaches such ROUGE & BERTScore, LLM-based metrics such as Prometheus and G-Eval come very close to hu- man evaluation on average. Nonetheless, our find- ings reiterate the importance of human judgment, particularly in use cases that require an understand- ing of a specific domain. Acknowledgements The work outlined in this paper is part of a re- search project between the Technical University of Munich and SAP SE under SAP@TUM Col- laboration Lab. The authors would like to thank Patrick Heinze, Christopher Pielka, Albert Neu- mueller, Darwin Wijaya from the SAP IES as well as the Domain Experts from the Human Resource department for their continued support. Limitations In our experiments, we mostly worked with Ope- nAI models which are closed-source and hence raise concerns of privacy. Additionally, their large sizes inhibited fine-tuning as they required exten- sive hardware. Fine-tuning open source and smaller models tailored to HR-specific contexts could fur- ther improve response accuracy and relevance. Ad- ditionally, since we worked with only one domain expert for the evaluation of the generated answers, the human evaluation might be biased. Because of the data protection concerns with the associated dataset, we cannot make the dataset open source. We employed basic filtering techniques to include user-specific information and context, more ad- vanced approaches could be explored to include this information into the LLM prompt. Ethics Statement Throughout our experiments, we strictly adhere to the ACL Code of Ethics. The dataset used for our research was anonymized to not include any per- sonal information. We employed in-house domain experts, who receive a full salary for evaluation for generated summaries. They were informed about the task and usability of data in the research. Their annotations were stored in an anonymized fashion, mitigating any privacy concerns. Through our fine- tuning strategies, no additional bias was introduced into the models, other than what might already be part of the dataset. The goal of the research was to optimize an LLM-centric chatbot with the help of a human-in-the-loop. The results and discus- sions in this paper are meant to further promote research in LLM-based development, bridging the gap between academia and application. References Hervé Abdi. 2007. The kendall rank correlation coef- ficient. Encyclopedia of measurement and statistics, 2:508–510. Anja Belz and Ehud Reiter. 2006. Comparing automatic and human evaluation of nlg systems. In 11th confer- ence of the european chapter of the association for computational linguistics, pages 313–320. Tom Bocklisch, Joey Faulkner, Nick Pawlowski, and Alan Nichol. 2017. Rasa: Open source language understanding and dialogue management. arXiv preprint arXiv:1712.05181. Jiawei Chen, Hongyu Lin, Xianpei Han, and Le Sun. 2023. Benchmarking large language models in retrieval-augmented generation. Gordon V. Cormack, Charles L A Clarke, and Stefan Buettcher. 2009a. 
Reciprocal rank fusion outper- forms condorcet and individual rank learning meth- ods. In Proceedings of the 32nd International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR ’09, page 758–759, New York, NY, USA. Association for Computing Machinery. Gordon V. Cormack, Charles L A Clarke, and Stefan Buettcher. 2009b. Reciprocal rank fusion outper- forms condorcet and individual rank learning meth- ods. In Proceedings of the 32nd International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR ’09, page 758–759, New York, NY, USA. Association for Computing Machinery. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understand- ing. Shahul Es, Jithin James, Luis Espinosa Anke, and Steven Schockaert. 2024. RAGAs: Automated evalu- ation of retrieval augmented generation. In Proceed- ings of the 18th Conference of the European Chap- ter of the Association for Computational Linguistics: System Demonstrations, pages 150–158, St. Julians, Malta. Association for Computational Linguistics. Luyu Gao, Xueguang Ma, Jimmy Lin, and Jamie Callan. 2022. Precise zero-shot dense retrieval without rele- vance labels. arXiv preprint arXiv:2212.10496. Mingqi Gao, Xinyu Hu, Jie Ruan, Xiao Pu, and Xiaojun Wan. 2024a. Llm-based nlg evaluation: arXiv preprint Current status and challenges. arXiv:2402.01383. Yunfan Gao, Yun Xiong, Xinyu Gao, Kangxiang Jia, Jinliu Pan, Yuxi Bi, Yi Dai, Jiawei Sun, Qianyu Guo, Meng Wang, and Haofen Wang. 2024b. Retrieval- augmented generation for large language models: A survey. Mandy Guo, Joshua Ainslie, David Uthus, Santiago On- tanon, Jianmo Ni, Yun-Hsuan Sung, and Yinfei Yang. 2022. LongT5: Efficient text-to-text transformer for long sequences. In Findings of the Association for Computational Linguistics: NAACL 2022, pages 724– 736, Seattle, United States. Association for Compu- tational Linguistics. Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasu- pat, and Ming-Wei Chang. 2020a. Realm: Retrieval- augmented language model pre-training. Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, and Mingwei Chang. 2020b. Retrieval augmented language model pre-training. In International confer- ence on machine learning, pages 3929–3938. PMLR. Elad Hoffer and Nir Ailon. 2018. Deep metric learning using triplet network. Doris Hoogeveen, Karin Verspoor, and Timothy Bald- win. 2015. Cqadupstack: A benchmark data set for community question-answering research. Vladimir Karpukhin, Barlas O˘guz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020a. Dense passage retrieval for open-domain question answering. arXiv preprint arXiv:2004.04906. Vladimir Karpukhin, Barlas O˘guz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen tau Yih. 2020b. Dense passage retrieval for open-domain question answering. Seungone Kim, Jamin Shin, Yejin Cho, Joel Jang, Shayne Longpre, Hwaran Lee, Sangdoo Yun, Seongjin Shin, Sungdong Kim, James Thorne, et al. 2023. Prometheus: Inducing fine-grained evalua- tion capability in language models. arXiv preprint arXiv:2310.08491. Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Hein- rich Küttler, Mike Lewis, Wen tau Yih, Tim Rock- täschel, Sebastian Riedel, and Douwe Kiela. 2021. Retrieval-augmented generation for knowledge- intensive nlp tasks. Zhen Li, Xiaohan Xu, Tao Shen, Can Xu, Jia-Chen Gu, and Chongyang Tao. 2024. 
Leveraging large language models for nlg evaluation: A survey. arXiv preprint arXiv:2401.07103. Rensis Likert. 1932. A technique for the measurement of attitudes. Archives of psychology. Chin-Yew Lin. 2004. Rouge: A package for automatic In Text summarization evaluation of summaries. branches out, pages 74–81. Yang Liu, Dan Iter, Yichong Xu, Shuohang Wang, Ruochen Xu, and Chenguang Zhu. 2023. Gpteval: Nlg evaluation using gpt-4 with better human align- ment. arXiv preprint arXiv:2303.16634. Xinbei Ma, Yeyun Gong, Pengcheng He, hai zhao, and Nan Duan. 2023. Query rewriting in retrieval- augmented large language models. In The 2023 Con- ference on Empirical Methods in Natural Language Processing. Leann Myers and Maria J Sirois. 2004. Spearman cor- relation coefficients, differences between. Encyclo- pedia of statistical sciences, 12. Jekaterina Novikova, Ondˇrej Dušek, Amanda Cercas Curry, and Verena Rieser. 2017. Why we need arXiv preprint new evaluation metrics for nlg. arXiv:1707.06875. Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic evalu- ation of machine translation. In Proceedings of the 40th annual meeting of the Association for Computa- tional Linguistics, pages 311–318. Stephen Robertson, Hugo Zaragoza, and Michael Taylor. 2004. Simple bm25 extension to multiple weighted fields. In Proceedings of the thirteenth ACM inter- national conference on Information and knowledge management, pages 42–49. Stephen Robertson, Hugo Zaragoza, et al. 2009. The probabilistic relevance framework: Bm25 and be- yond. Foundations and Trends® in Information Re- trieval, 3(4):333–389. Jon Saad-Falcon, Omar Khattab, Christopher Potts, and Matei Zaharia. 2023. Ares: An automated evalua- tion framework for retrieval-augmented generation systems. Kurt Shuster, Spencer Poff, Moya Chen, Douwe Kiela, and Jason Weston. 2021. Retrieval augmentation reduces hallucination in conversation. arXiv preprint arXiv:2104.07567. Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. 2023. Llama: Open and effi- cient foundation language models. arXiv preprint arXiv:2302.13971. Zezhong Wang, Fangkai Yang, Pu Zhao, Lu Wang, Jue Zhang, Mohit Garg, Qingwei Lin, and Dong- mei Zhang. 2023. Empower large language model to perform better on industrial domain-specific question answering. arXiv preprint arXiv:2305.11541. Wei Wei, Bo Dai, Tuo Zhao, Lihong Li, Diyi Yang, Yun-Nung Chen, Y-Lan Boureau, Asli Celikyilmaz, Alborz Geramifard, Aman Ahuja, et al. 2021. The first workshop on evaluations and assessments of neu- ral conversation systems. In The First Workshop on Evaluations and Assessments of Neural Conversation Systems. Orion Weller, Kyle Lo, David Wadden, Dawn Lawrie, Benjamin Van Durme, Arman Cohan, and Luca Sol- daini. 2024. When do generative query and docu- ment expansions fail? a comprehensive study across methods, retrievers, and datasets. In Findings of the Association for Computational Linguistics: EACL 2024, pages 1987–2003, St. Julian’s, Malta. Associa- tion for Computational Linguistics. Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q Weinberger, and Yoav Artzi. 2019. Bertscore: Eval- uating text generation with bert. arXiv preprint arXiv:1904.09675. Ming Zhong, Yang Liu, Da Yin, Yuning Mao, Yizhu Jiao, Pengfei Liu, Chenguang Zhu, Heng Ji, and Towards a unified multi- Jiawei Han. 2022. dimensional evaluator for text generation. 
arXiv preprint arXiv:2210.07197. A Dataset A.1 Dataset Collection FAQ Dataset: The internal HR policies of the company consist of Wiki articles, where each ar- ticle contains a description text followed by some frequently asked questions. The FAQ dataset was constructed by the domain articles by compiling all the FAQ questions from all articles. Each FAQ question is in the form of a triplet where the con- text is the original Wiki article the question was de- rived from. UT Dataset: The user utterance (UT) dataset was compiled using the user utterances col- lected from the chatbot logs. To reduce the manual labeling effort, a simple text-matching approach was deployed that mapped each user query to one of the questions from the FAQ dataset. The respec- tive answers and context of the matched question were used to create the triplets that form the UT dataset. A.2 Dataset Pre-processing We cleaned the dataset using regular expressions and with the help of LLMs. This involved remov- ing unnecessary formatting like HTML tags, lead- ing or trailing white spaces and newline characters, and removing some wasteful markdown annota- tions without text. This process thus reduced the number of tokens in each document. Some of the documents were too long to fit into the LLM’s context window, so we excluded them from our analysis. A.3 Dataset Challenges We discovered that our dataset contains multiple ar- ticles answering most questions. These articles dif- fer in a few characters, often in an unequal amount of whitespaces, or a few exchanged words, or even entire sections not present in other articles. This sit- uation leads to multiple slightly different versions of the same article present in the dataset, all linked to similar questions. Consequently, the retriever often retrieves very relevant articles that do not exactly match the gold standard article but are a slightly different version. To address this, we implemented an evaluation method measuring the Levenshtein distance be- tween the retrieved article and the gold article. If this distance is below a threshold of 100, we con- sider it a successful retrieval. However, this ap- proach does not match articles with varying sec- tions, as the Levenshtein distance is much higher, and we didn’t want to risk matching incorrect arti- cles by increasing the threshold. All of the results in Table 2 are using this evaluation method. As the DPR is fine-tuned on the dataset, which likely has a strong imbalance in the counts of dif- ferent article versions, it tends to favor the most common version. This bias contributes to its higher accuracy, as the retriever fetches the correct article more often than not. A.4 Dataset Example Table 4 shows an example sample from the FAQ dataset representing the training triplet along with all metadata. B Human-in-the-Loop As shown in Figure 2, the domain experts are in- volved in various parts of the development cycle explained below: Dataset Collection: The domain experts play a big role in the compilation and quality control of the datasets used in this paper Prompt Optimization: The domain experts eval- uated answers generated by models on various prompt versions. They also provided guidelines the chatbot should follow when addressing the user query which is reflected in the final prompt dis- played in Table 5. Evaluation: Domain experts also served as the human annotators for the answers generated by DATA TRIPLET Question: How can I apply for half a day of holiday? 
Answer: Unfortunately, vacation days in your coun- try can only be taken as full days. Context: {Relevant Article} META DATA User Role: Employee Name of KBA: Vacation Company Name: {Company Name} Company Code: {Company Code} Region: {Region} Country Code: {Country Code} FAQ Category: {FAQ Category} Process ID: {Process ID} Service ID: {Process ID} Table 4: HR Dataset Sample (L)LMs which helped us assess the quality of an- swers as well as study the effectiveness of auto- matic evaluation scores. C Prompts Samples In this section, we provide the extensive list of prompts used for the OpenAI Models for the Chat- bot Pipeline, as well as the prompts used for the LLM-based Metrics. C.1 Prompts used for OpenAI models The optimized prompt used for ChatGPT and GPT- 4 during our experiments is shown in Table 5. C.2 G-Eval Evaluation Metric Prompt The evaluation prompt used for the Readability Cri- teria is shown in Table 6. The prompts for other criteria (Truthfulness, Usability, Relevance) follow similar instructions as the one shown for the Read- ability prompt. C.3 Prometheus Evaluation Metric Prompt The prompt for the Prometheus Evaluation Metric outlined in Table 7 was based on the official paper’s guidelines (Kim et al., 2023) for Feedback Collec- tion. This specific prompt illustrates the Readabil- ity Criteria and was similarly adapted for other criteria such as Truthfulness, Relevance, and Us- ability. In general, both LLM-based metrics follow similar evaluation criteria in the prompts. D Technical Details D.1 Retriever It is worth noting that we embed the whole arti- cle and do not perform chunking. As shown in Figure 1, these articles are quite long. To cater to the limited context window of the models, we opt for the top-1 article to be passed as context. This also makes sense for our use case as the dataset is designed such that the answer to any given HR question usually exists in only one article. D.2 Dense Passage Retriver Training Dense Passage Retriever (DPR) (Karpukhin et al., 2020b) powered by Haystack4 uses the bert-base- uncased embedding model by google-bert, openly available on HuggingFace. DPR training aims to generate a model that creates embeddings where the question embedding closely aligns with the rel- evant context embedding. During retrieval, the user query is processed through the previously trained retriever, producing a query vector in the same em- bedding space as the articles. This query vector is then compared to all article vectors within the vector store using cosine similarity. The top-k arti- cles belonging to the embeddings with the highest cosine similarities are returned. D.3 LongT5 Fine-tuning During fine-tuning of the LongT5 models, the train- ing process was configured with a learning rate of 1e-4 and a batch size of 8, spanning 5 epochs. E Results and Evaluation Throughout our research, we encountered several challenges that warrant attention. The variability in retrieved articles due to slight differences in con- tent or formatting posed complexities in evaluating retrieval accuracy and ensuring consistency in re- sponse generation. Addressing this challenge may require further refinement of the retrieval mecha- nism or additional preprocessing steps to standard- ize the retrieved content. E.1 Retriever The accuracy of both DPR on the top-1, top-2, top- 3, and top-5 articles on both retrievers is shown in Table 8. As expected, the accuracy of the retriever module increases as the value of k is increased. 
However, we are limited to including only top-1 4https://haystack.deepset.ai/ SYSTEM PROMPT You are an HR chatbot for SAP SE and you provide truthful and concise answers to employee questions based on provided relevant HR articles. 1. Stay very concise and keep your answer below 150 words. 2. Do not include too much irrelevant information unrelated to the posed question. 3. Keep your response brief and on point. 4. Include URLs from the relevant article if it is important to answer the question. 5. If the answer applies to specific labs/countries/companies, include this information in your response. 6. Refer to the employee directly as "you" and not indirectly as "the employee". 7. If the provided HR article does not include the answer to the question, tell the employee to create an HRdirect ticket. 8. Answer in a polite, personal, user-friendly, and actionable way. 9. Never make up your response! If you do not know the answer to the question, just say so and ask the user to create an HRdirect ticket! USER PROMPT Question: {question} Relevant Article: {article} Table 5: Chatbot Prompt for OpenAI Models SYSTEM PROMPT You will be given a generated answer for a given question. Your task is to act as an evaluator and compare the generated answer with a reference answer on one metric. The reference answer is the fact-based benchmark and shall be assumed as the perfect answer for your evaluation. Please make sure you read and understand these instructions very carefully. Please keep this document open while reviewing, and refer to it as needed. Evaluation Criteria: {criteria} Evaluation Steps: {steps} USER PROMPT Example: {example} Question: {question} Generated Answer: {generated_answer} Reference Answer: {reference_answer} Evaluation Form: Please provide your output in two parts separate as a Python dictionary with keys rating and explanation. First the rating in an integer followed by the explanation of the rating. {metric_name} METRIC SCORE CRITERIA {The degree to which the generated answer matches the reference answer based on the metric description.} Readability(1-5) - Please rate the readability of each chatbot response. This criterion assesses how easily the response can be understood. A response with high readability should be clear, concise, and straightforward, making it easy for the reader to comprehend the information presented. Complex sentences, jargon, or convoluted explanations should result in a lower readability score. METRIC SCORE STEPS {Readability Score Steps} 1. Read the chatbot response carefully. 2. Assess how easily the response can be understood. Consider the clarity and conciseness of the response. 3. Consider the complexity of the sentences, the use of jargon, and how straightforward the explanation is. 4. Assign a readability score from 1 to 5 based on these criteria, where 1 is the lowest (hard to understand) and 5 is the highest (very easy to understand). Table 6: G-Eval Prompt Example for Readability Criteria SYSTEM PROMPT Task Description: An instruction (might include an input inside it), a response to evaluate, a reference answer that gets a score of 5, and a score rubric representing an evaluation criterion is given. 2. After writing a feedback, write a score that is an integer between 1 and 5. You should refer to the score rubric. 3. The output format should look as follows: Feedback: [write a feedback for criteria] [RESULT] [an integer number between 1 and 5]. 4. Please do not generate any other opening, closing, and explanations. 
Question to Evaluate: {instruction} Response to Evaluate: {response} Reference Answer (Score 5): {reference answer} Score Rubrics: {criteria description} Score 1: {Very Low correlation with the criteria description} Score 2: {Low correlation with the criteria description} Score 3: {Acceptable correlation with the criteria description} Score 4: {Good correlation with the criteria description} Score 5: {Excellent correlation with the criteria description} {criteria description}: Readability(1-5) - Please rate the readability of each chatbot response. This criterion assesses how easily the response can be understood. A response with high readability should be clear, concise, and straightforward. Complex sentences, jargon, or convoluted explanations should result in a lower readability score. Table 7: Prometheus Prompt Example for Readability Criteria articles because the articles are quite long and more samples may not fit in the model’s context window. The BERT-based DPR model still significantly out- performs all new methods with a top-1 accuracy of 22.24% and a top-5 accuracy exceeding 40%. The new retriever, in comparison, only reaches a top-1 accuracy of 11.12% and a top-5 accuracy of 18.53% on the same dataset. These results in gen- eral are quite underwhelming and mainly attributed to the dataset challenges described in Appendix A.3. DPR BERT-based OpenAI-based top-2 top-1 22.24% 30.03% 35.08% 40.06% 11.12% 15.06% 16.82% 18.53% top-3 top-5 Table 8: Retriever Accuracy on the HR test dataset for various values of k on the HR Dataset. The OpenAI- based DPR uses the Basic method. E.2 Correlation between Automatic Evaluation and Domain Expert Evaluation Table 9 shows the individual across for correlation of each evaluation metric with human evaluation across LongT5, ChatGPT, and GPT-4. The low correlation coefficients are a consequence of the Spearman and Kendall methods, which analyze the linear and ordinal relationships between vari- ables by comparing each set of scores. When these methods detect divergent scores between two eval- uations, it leads to a reduced correlation coefficient, indicating a disproportion that is not apparent when considering the average scores alone. Criteria LongT5 ChatGPT GPT-4 Spearman ρ Kendall τ Spearman ρ Kendall τ Spearman ρ Kendall τ BLEU ROUGE-1 ROUGE-2 ROUGE-L BERTScore_P BERTScore_R BERTScore_F1 G-Eval Usability Relevance Readability Truthfulness Prometheus Usability Relevance Readability Truthfulness 0.459 0.435 0.462 0.433 0.457 0.466 0.455 0.675 0.569 0.208 0.726 0.723 0.467 0.493 0.541 0.337 0.321 0.341 0.324 0.347 0.305 0.332 0.584 0.499 0.181 0.651 0.675 0.439 0.468 0.521 0.345 0.364 0.332 0.353 0.304 0.085 0.246 0.217 0.339 0.395 0.694 0.386 0.419 0.378 0.439 0.263 0.284 0.258 0.274 0.234 0.064 0.192 0.198 0.304 0.373 0.667 0.351 0.371 0.358 0.402 0.146 0.113 0.056 0.093 0.156 −0.022 0.097 0.346 0.325 0.139 0.452 0.516 0.382 0.225 0.454 0.116 0.091 0.044 0.075 0.122 −0.018 0.077 0.327 0.306 0.137 0.432 0.495 0.357 0.213 0.427 Table 9: Correlations between Automated Metrics and Human Evaluation across Models
ai_researcher
2
Optimizing_Instruction_Synthesis_Effective_Exploration_of_Evolutionary_Space_with_Tree_Search.pdf
Efficiently Synthesizing Lowest Cost Rewrite Rules for Instruction Selection

Ross Daly, Stanford University, Stanford, CA, USA ([email protected])
Caleb Donovick, Stanford University, Stanford, CA, USA ([email protected])
Caleb Terrill, Stanford University, Stanford, CA, USA ([email protected])
Jackson Melchert, Stanford University, Stanford, CA, USA ([email protected])
Pat Hanrahan, Stanford University, Stanford, CA, USA ([email protected])
Priyanka Raina, Stanford University, Stanford, CA, USA ([email protected])
Clark Barrett, Stanford University, Stanford, CA, USA ([email protected])

Abstract—Compiling programs to an instruction set architecture (ISA) requires a set of rewrite rules that map patterns consisting of compiler instructions to patterns consisting of ISA instructions. We synthesize such rules by constructing SMT queries, whose solutions represent two functionally equivalent programs. These two programs are interpreted as an instruction selection rewrite rule. Existing work is limited to single-instruction ISA patterns, whereas our solution does not have that restriction. Furthermore, we address inefficiencies of existing work by developing two optimized algorithms. The first only generates unique rules by preventing synthesis of duplicate and composite rules. The second only generates lowest-cost rules by preventing synthesis of higher-cost rules. We evaluate our algorithms on multiple ISAs. Without our optimizations, the vast majority of synthesized rewrite rules are either duplicates, composites, or higher cost. Our optimizations result in synthesis speed-ups of up to 768× and 4004× for the two algorithms.

I. INTRODUCTION

As we approach the end of Moore's law and Dennard scaling, drastically improving computing performance and energy efficiency requires designing domain-specific hardware architectures (DSAs) or adding domain-specific extensions to existing architectures [22]. As a result, many DSAs have been developed in recent years [4], [8], [24], [27], [30], each with its own custom instruction set architecture (ISA) or ISA extension.

Targeting such ISAs from a compiler's intermediate representation (IR) requires a custom library of instruction selection rewrite rules. A rewrite rule is a mapping of an IR pattern to a functionally equivalent ISA pattern. Manual specification of rewrite rules is error-prone, time-consuming, and often incomplete. It is therefore desirable to automatically generate valid rewrite rules.

When specifying instruction selection rewrite rules, there are two common cases. When ISAs have complex instructions, rewrite rules will often map multi-instruction IR patterns to a single ISA instruction. When ISAs have simple instructions, rewrite rules will often map a single IR instruction to a multi-instruction ISA pattern. A rewrite rule generation tool should be able to create rewrite rules for both cases. We call such rewrite rules many-to-many rules.

Generating instruction selectors is not a new idea. Most relevant to this work is Gulwani et al. [21] who use a satisfiability modulo theories (SMT) solver to synthesize a loop-free program that is functionally equivalent to a given specification. Their approach is called component-based program synthesis (CBPS), as each synthesized program must include functional components from a given component library. Buchwald et al.
[6] use and extend CBPS to efficiently generate multi- instruction loop-free IR programs equivalent to a single ISA instruction program; that is, they solve the many-to-one rewrite rules synthesis problem. However, multi-instruction ISA pro- grams cannot be synthesized. Both of these algorithms produce many duplicate rules, which are removed during a post-processing step. As we show, this adds significant additional cost. Another issue is that CBPS as currently formulated does not incorporate the notion of optimizing for cost. In practice, we often want only the set of lowest-cost rules, making it unnecessary (and expensive) to generate equivalent higher-cost rules. This paper presents an algorithm for automatically generat- ing a complete set of many-to-many rewrite rules. We address the above issues by preventing the synthesis of both duplicate and high-cost rules at rule generation time, using exclusion techniques. As a further optimization, we generate rules in stages and exclude composite rules, i.e. rules that can be composed of smaller rules found in previous stages. These ensure we produce a minimal but complete set of rewrite rules. Compared to previous work, our approach eliminates unnecessary rules and significantly reduces the time required to produce the unique necessary ones. Our contributions are as follows: • We define generalized component-based program synthe- sis (GCBPS) as the task of synthesizing two functionally equivalent programs using two component libraries. We then present an SMT-based synthesis approach inspired by Gulwani et al. to solve it. • We present an iterative algorithm genAll to generate all unique many-to-many rules up to a given size. We identify a set of equivalence relations for patterns encoded as programs and for rules that map IR programs to ISA programs. We use these relations to enumerate and exclude duplicate rules. Furthermore, we directly exclude composite rewrite rules. These result in up to a 768× synthesis speed-up. • We present an algorithm genAll LC which generates only the lowest-cost rules by incorporating a cost metric in addition to excluding duplicate and composite rewrite rules. This results in a synthesis speed-up up to 4004×. The rest of the paper is organized as follows. Section II discusses instruction selection, existing rule generation meth- ods, SMT, and program synthesis. Section III describes a program synthesis query for generating many-to-many rules. Section IV presents an algorithm for generating only unique rewrite rules and defines duplicates and composites. Section V presents an algorithm for synthesizing only the lowest-cost rules. Section VI evaluates both algorithms, and Section VII discusses limitations and further optimizations. II. BACKGROUND AND RELATED WORK A. Instruction Selection Instruction selection is the task of translating code in the compiler’s intermediate representation (IR) to functionally equivalent code for a target ISA. Typically, a library of rewrite rules is used in instruction selection. A rewrite rule is a mapping from an IR pattern consisting of IR instructions to a functionally equivalent ISA pattern consisting of ISA instructions. Such patterns can be expression trees or directed acyclic graphs (DAGs). Significant work has been devoted to developing rewrite rule tiling algorithms to perform instruction selection [1], [5], [12], [14]–[17], [19], [26], [29]. 
For each rule in the rule library, a tiling algorithm first finds all fragments from the IR program in which the rule's IR pattern exactly matches that fragment. Then, the instruction selector finds a tiling of these matches that completely covers the basic block and minimizes the total rule cost according to some cost metric.

Simple instruction selectors only handle tree-based IR patterns, which is inefficient for reused computations. Modern instruction selectors like LLVM use DAG-based matching that allows for both richer rules and better tiling. Koes et al. [26] describe a similar near-optimal DAG-based instruction selection algorithm [5]. We want to generate rules that can be used with such modern instruction selectors.

B. Generating Instruction Selectors

Generating instruction selectors from instruction semantics has been a topic of research interest [6], [7], [9], [10], [23]. Dias and Ramsey [10] introduce an algorithm for generating rewrite rules based on a declarative specification of the ISA. While this solves part of the many-to-many rule task, their work relies on an existing set of algebraic rewrite rules for synthesizing semantically equivalent rules. Our work uses SMT for the instruction and program semantics. However, incorporating certain kinds of algebraic rewrite rules could be an avenue for future optimizations.

Daly et al. [9] propose a way to synthesize instruction selection rewrite rules from the register-transfer level (RTL) specification of a processor. Their algorithm requires a set of pre-specified IR patterns. In contrast, we can efficiently synthesize rules that consider all possible multi-instruction IR patterns up to a given size. Their approach for synthesizing complex instruction constants and handling floating point types could be combined with the approaches in this paper.

The most relevant to this work is the work by Buchwald et al. [6], which leverages component-based program synthesis to generate rules with multi-instruction IR patterns and single-instruction ISA patterns. In contrast, our work synthesizes rules with both multi-instruction IR patterns and multi-instruction ISA patterns. We additionally prevent the synthesis of duplicate, composite, and high-cost rewrite rules, unlike any of the above approaches.

C. Program Synthesis and Equivalence

We use SMT-based program synthesis to enumerate a complete set of instruction selection rewrite rules. In program synthesis enumeration, it is common to remove equivalent solutions [3]. We use the equivalence relation defined in Section IV-A to determine equivalent rewrite rules. In prior work [2], observational equivalence (i.e., programs with the same semantics) has been used for de-duplication; however, observational equivalence does not take into account the structure of the program, which is essential for rewrite rule pattern matching.

D. Logical Setting and Notation

We work in the context of many-sorted logic (e.g., [13]), where we assume an infinite set of variables of each sort. Terms are denoted using non-boldface symbols (e.g., X). Boldface symbols (e.g., X) are used for sets, tuples, and multisets, whose elements are either terms or other collections of terms. Y := (Y1, ..., YN) defines a tuple, where |Y| = N and Yi refers to the i-th element. Z := {z^n} defines a multiset, where the multiplicity of element z is n ∈ N. Both ψ and ϕ are used to denote formulas. ψ(X) is a formula whose free variables are a subset of X.
We use M ⊨ ψ(X) to denote the satisfiability relation between the interpretation M and the formula ψ. Assuming X is a collection of variables, M_X denotes the assignment to those variables induced by M. For an assignment α, we write α ⊨ ψ(X) if M ⊨ ψ(X) for every model M such that M_X = α.

E. Component-based Program Synthesis

CBPS is a program synthesis task introduced by Gulwani et al. The inputs to the task are:
• A specification S := (I_S, O_S, ϕ_spec(I_S, O_S)) containing a tuple of input variables I_S, a single output variable O_S, and a formula ϕ_spec(I_S, O_S) relating the inputs and the output.
• A library of components (e.g., instructions) K, where the k-th component K_k := (I_k, O_k, ϕ_k(I_k, O_k)) consists of a tuple of input variables I_k, a single output variable O_k, and a formula ϕ_k(I_k, O_k) defining the component's semantics.

An example component for an addition instruction is shown below using the theory of bit-vectors, QF_BV, where BV[n] is an n-bit sort.

((I0 : BV[16], I1 : BV[16]), O : BV[16], I0 +[16] I1 = O)

The task is to synthesize a valid program functionally equivalent to the specification using each component from K exactly once.

For notational convenience, we group together all of the inputs and outputs of the components: W := ⋃_{(I_k, O_k, ·) ∈ K} (O_k ∪ (∪ I_k)). Gulwani et al. encode the program structure using a connection constraint ϕ_conn(L, I_S, O_S, W). This is a formula representing how the program inputs (I_S) and program output (O_S) are connected via the components. The connections are specified using location variables L. We do not go into the details of how location variables encode connections (they are in [21]). It is sufficient for our purposes to know that these are integer variables, and an assignment to them uniquely determines a way of connecting the components together into a program. The program semantics ϕ_prog are defined as the components' semantics conjoined with the connection constraint:

ϕ_prog(L, I_S, O_S, W) := (⋀_k ϕ_k(I_k, O_k)) ∧ ϕ_conn(L, I_S, O_S, W).   (1)

They define a verification constraint that holds if a particular program is both well-formed (specified using a well-formedness constraint ψ_wfp) and satisfies the specification ϕ_spec:

ϕ_verif := ψ_wfp(L) ∧ ∀ I_S, O_S, W. (ϕ_prog(L, I_S, O_S, W) =⇒ ϕ_spec(I_S, O_S)).   (2)

A synthesis formula ϕ_synth existentially quantifies L in (2):

ϕ_synth := ∃ L. ∀ I_S, O_S, W. ψ_wfp(L) ∧ (ϕ_prog(L, I_S, O_S, W) =⇒ ϕ_spec(I_S, O_S)).   (3)

This formula can be solved using a technique called counterexample guided inductive synthesis (CEGIS). CEGIS solves such exist-forall formulas by iteratively solving a series of quantifier-free queries and is often more efficient than trying to solve the quantified query directly. More details are in [21]. For our purposes, we assume the existence of a CEGIS implementation, CEGIS, which takes an instance of ϕ_synth and returns a model M with the property that M_L ⊨ ϕ_verif, from which a program that is a solution to CBPS can be constructed.

III. COMPONENT-BASED PROGRAM SYNTHESIS FOR MANY-TO-MANY RULES

Given the IR and ISA instruction sets K^IR and K^ISA, Buchwald et al. [6] use CBPS to synthesize rewrite rules. They use a single ISA instruction k^ISA ∈ K^ISA for the CBPS specification and a subset of the IR instructions for the CBPS components. A solution to the resulting ϕ_synth formula gives a program P^IR. If P^ISA is the single-instruction program consisting of k^ISA, they interpret the pair (P^IR, P^ISA) as an instruction selection rewrite rule.

However, Buchwald et al.'s solution is insufficient for generating many-to-many rules, as they cannot synthesize IR and ISA programs that both contain multiple instructions. Instead, two functionally equivalent programs need to be synthesized. We first define an extension to CBPS called generalized component-based program synthesis (GCBPS) to address this problem. Then we show how to construct a synthesis query whose solutions represent pairs of functionally equivalent programs.
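To make these encodings concrete, the following is a minimal pySMT sketch (pySMT is the library the paper's tooling builds on, but the fragment below is our own illustration, not the paper's code). It writes the semantics ϕ_k of a 16-bit add component and checks an equation-(2)-style implication for one fixed program by refutation: the program satisfies the specification iff prog ∧ ¬spec is unsatisfiable.

```python
# Illustrative only: component semantics phi_add and a refutation-style check
# that a fixed single-component program meets a (commuted) specification.
from pysmt.shortcuts import Symbol, BVAdd, Equals, And, Not, is_sat
from pysmt.typing import BVType

BV16 = BVType(16)
I0, I1, O = (Symbol(n, BV16) for n in ("I0", "I1", "O"))

phi_add = Equals(O, BVAdd(I0, I1))    # phi_k:    O = I0 +[16] I1
phi_spec = Equals(O, BVAdd(I1, I0))   # phi_spec: O = I1 +[16] I0

# prog => spec is valid iff prog AND (NOT spec) is unsatisfiable.
assert not is_sat(And(phi_add, Not(phi_spec)))
print("program refines the specification")
```

The full ϕ_verif in (2) additionally quantifies over all inputs and conjoins the well-formedness constraint on the location variables; CEGIS sidesteps the quantifier alternation with iteratively collected counterexamples.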
A. Generalized Component-based Program Synthesis

We define the GCBPS task as that of synthesizing two programs, P^a and P^b, represented using location variables L^a and L^b, given two sets of components K^a and K^b, two sets of inputs I^a, I^b where |I^a| = |I^b|, and two outputs O^a, O^b, where the following conditions hold true:
1) P^a uses each component in K^a exactly once.
2) P^b uses each component in K^b exactly once.
3) P^a is functionally equivalent to P^b.

B. Solving GCBPS

We start with the CBPS verification constraint from (2) using components K^a (and a corresponding set of inputs and outputs W^a), but modify it slightly by introducing variables (I^a, O^a) that are fresh copies of (I_S, O_S):

ψ_wfp(L^a) ∧ ∀ I^a, O^a, W^a, I_S, O_S.
((ϕ^a_prog(L^a, I^a, O^a, W^a) ∧ ϕ_spec(I_S, O_S)) =⇒ ((⋀_i I^a_i = I^S_i) =⇒ O^a = O_S)).   (4)

Assuming the formulas for both the program and the specification, if their inputs are the same, their outputs must also be the same. We next replace the specification program with a different component-based program using components K^b and quantify over that program's inputs I^b, output O^b, and component variables W^b:

ϕ_verif := ψ_wfp(L^a) ∧ ψ_wfp(L^b) ∧ ∀ I^a, I^b, O^a, O^b, W^a, W^b.
((ϕ^a_prog(L^a, I^a, O^a, W^a) ∧ ϕ^b_prog(L^b, I^b, O^b, W^b)) =⇒ ((⋀_i I^a_i = I^b_i) =⇒ O^a = O^b)).   (5)

This is our generalized verification constraint stating the correctness criteria for when two component-based programs are semantically equivalent. To synthesize such a pair of programs, a synthesis formula ϕ_synth is defined by existentially quantifying L^a and L^b in the verification formula (5):

ϕ_synth := ∃ L^a, L^b. ∀ I^a, I^b, O^a, O^b, W^a, W^b.
ψ_wfp(L^a) ∧ ψ_wfp(L^b) ∧ ((ϕ^a_prog(L^a, I^a, O^a, W^a) ∧ ϕ^b_prog(L^b, I^b, O^b, W^b)) =⇒ ((⋀_i I^a_i = I^b_i) =⇒ O^a = O^b)).   (6)

1   genAll(K^IR, K^ISA, N^IR, N^ISA):
2       S_R ← {}
3       for n1, n2 ∈ [1, N^IR] × [1, N^ISA]:
4           for m^IR ∈ multicomb(K^IR, n1):
5               for m^ISA ∈ multicomb(K^ISA, n2):
6                   for I^IR, I^ISA ∈ allInputs(m^IR, m^ISA):
7                       ϕ, L^IR, L^ISA ← GCBPS(m^IR, m^ISA, I^IR, I^ISA)
8                       ϕ ← ϕ ∧ ¬AllComposites(S_R, ...)
9                       S_R ← S_R ∪ CEGISAll(ϕ, m^IR, m^ISA, L^IR, L^ISA)
10      return S_R

Fig. 1: Iterative algorithm to generate all unique rewrite rules up to a given size.

1   CEGISAll(ϕ, m^IR, m^ISA, L^IR, L^ISA):
2       S_R = {}
3       while True:
4           M ← CEGIS(ϕ)
5           if M = ⊥: return S_R
6           R ← rewriteRule(m^IR, m^ISA, M_{L^IR}, M_{L^ISA})
7           S_R ← S_R ∪ {R}
8           ϕ ← ϕ ∧ ¬ψ_dup(R, (L^IR, L^ISA))

Fig. 2: AllSAT algorithm to synthesize all unique rules. Line 8 excludes all rules that are duplicates of the current synthesized rewrite rule.

As above, we assume that calling CEGIS on ϕ_synth returns a model M such that M_{L^a ∪ L^b} ⊨ ϕ_verif. This can be converted into a pair of programs (P^a, P^b) representing a rewrite rule that is a solution for the GCBPS task. We write rewriteRule(K^a, K^b, M_{L^a}, M_{L^b}) for the rewrite rule constructed from a specific model M using the component sets K^a and K^b.

IV. GENERATING ALL MANY-TO-MANY REWRITE RULES

Buchwald et al. [6] describe an iterative algorithm, IterativeCEGIS, to synthesize rewrite rules using CBPS. This algorithm iterates over all multisets of IR instructions up to a given size and only runs synthesis on each such multiset.
Compared to running synthesis using all the IR instructions at once, this iterative algorithm works better in practice. However, IterativeCEGIS cannot synthesize rewrite rules with both multi-instruction IR programs and multi-instruction ISA programs. Furthermore, it produces duplicate rewrite rules, which are then filtered out in a post-synthesis filtering step. Although the results are correct, this approach is highly inefficient because each call to CEGIS is expensive, and a CEGIS call is made not just for some duplicate rules, but for every possible duplicate rule. In our approach, we make the requirement that a solution is not a duplicate part of the CEGIS query itself, ensuring that each successful CEGIS query finds a new, non-redundant rewrite rule.

Our iterative algorithm, genAll, is shown in Figure 1. It takes as parameters the IR and ISA component sets, K^IR and K^ISA respectively, as well as a maximum number of components of each kind to use in rewrite rules, N^IR and N^ISA, and iteratively builds up a set S_R of rewrite rules, which it returns at the end. Line 3 shows that n1 and n2 iterate up to these maximum sizes. Line 4 iterates over all multisets of elements from K^IR of size n1 using a standard multicombination algorithm multicomb [25] (not shown). Line 5 is similar but for multisets from K^ISA of size n2. Next, for a given choice of multisets, line 6 enumerates all possible ways of selecting input vectors from those multisets that could create well-formed programs. Line 7 constructs fresh sets of location variables L^IR and L^ISA and returns them along with the instantiated GCBPS synthesis formula (using Equation (6)).1 Line 8 excludes all composite rules from the synthesis search space. Composite rules are rules that can be constructed using the current set of rules S_R and are thus unnecessary for instruction selection. We discuss this in more detail in Section IV-B. Finally, on line 9, the current set of rules S_R is updated with the result of calling CEGISAll, which we describe next.

Figure 2 shows the CEGISAll algorithm that performs the AllSAT [20], [31] task. Its parameters are the synthesis formula ϕ, the multisets m^IR and m^ISA, and the location variables L^IR and L^ISA. It returns a set S_R of rewrite rules. Initially this set is empty. The algorithm iteratively calls a standard CEGIS algorithm to solve the synthesis query, constructing a new rewrite rule R, which is added to the set S_R of rewrite rules, when the call to CEGIS is successful. The iteration repeats until the CEGIS query returns ⊥, indicating that there are no more rewrite rules to be found. Note that after each iteration, the ϕ_synth formula is refined by adding the negation of a formula capturing the notion of duplicates for this rule. We describe how this is done next.

1 We augment the well-formed program constraint in (6) to prevent synthesizing programs containing dead code and unused inputs. This can be accomplished by enforcing that each input and intermediate value is used in at least one location.
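For intuition, here is a schematic Python rendering of the loop in Fig. 2. Every name in it (cegis, rewrite_rule_from, psi_dup, and formula objects that support the & and ~ operators) is a placeholder assumption of this sketch, not an API from the paper's artifact.

```python
# Schematic rendering of CEGISAll (Fig. 2) with duplicate-class blocking.
def cegis_all(phi, m_ir, m_isa, L_ir, L_isa, cegis, rewrite_rule_from, psi_dup):
    rules = set()
    while True:
        model = cegis(phi)            # solve the current synthesis query
        if model is None:             # corresponds to CEGIS returning ⊥
            return rules              # search space exhausted: all rules found
        rule = rewrite_rule_from(m_ir, m_isa, model[L_ir], model[L_isa])
        rules.add(rule)
        # Line 8 of Fig. 2: conjoin the negated duplicate-class formula so the
        # next CEGIS call cannot return anything equivalent to `rule`.
        phi = phi & ~psi_dup(rule, (L_ir, L_isa))
```

The key design point is that the blocking constraint covers the whole equivalence class of the found rule, so each expensive CEGIS invocation is guaranteed to pay for a genuinely new rule.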
A. Excluding Duplicate Rules

Consider the two distinct rules below. As a syntactical convention, infix operators are used for IR patterns and function calls for ISA patterns.

I1 + (I2 · I3) → add(I1, mul(I2, I3))
(I1 · I3) + I2 → add(I2, mul(I1, I3))

The two IR patterns represent the same operation despite the fact that the variable names and the order of the commutative arguments to addition are both different. Both rules would match the same program fragments in an instruction selector and would result in the same rewrite rule application. Thus, we consider such rules to be equivalent and would like to ensure that only one is generated by our algorithm.

We first define a rewrite rule equivalence relation, ∼rule. Informally, two rules are equivalent if replacing either one by the other has no discernible effect on the execution of an instruction selection algorithm. We make this more formal by considering various attributes of standard instruction selection algorithms.

Commutative Instructions: Modern pattern matching algorithms used for instruction selection try all argument orderings for commutative instructions [5]. We define the commutative equivalence relation ∼CIR as P^IR_1 ∼CIR P^IR_2 iff P^IR_2 is a remapping of P^IR_1's commutative instruction arguments.

Same-kind Instructions: Programs P generated by GCBPS have a unique identifier, the program line number, for each instruction. This means that if two instructions of the same kind appear in a program, interchanging their line numbers results in a different program, even though it makes no difference to the instruction selection algorithm. We define the same-kind equivalence relation ∼KIR as P^IR_1 ∼KIR P^IR_2 iff P^IR_2 is the result of remapping the line numbers for same-kind instructions in P^IR_1.

Data Dependency: Modern instruction selection algorithms perform pattern matching, not based on a total order of instructions, but on a partial order determined by data dependencies. Many different sequences may thus lead to the same partial order. We define ∼DIR as P^IR_1 ∼DIR P^IR_2 iff P^IR_1 and P^IR_2 have the same data dependency graph.

Rule Input Renaming: For a given rewrite rule, the input variables used for the IR program must match the input variables used for the ISA program, but the specific variable identifiers used do not matter. We define the equivalence relation ∼Irule on rules (i.e., pairs of programs) as R1 ∼Irule R2 iff R2 is the result of remapping variable identifiers in R1.

Rule Equivalence: The first three equivalence relations defined above are for IR programs, but the analogous relations (∼CISA, ∼KISA, ∼DISA) for ISA instructions are also useful. Putting everything together, we define rule equivalence ∼rule as follows:

∼IR := ⊎{∼CIR, ∼KIR, ∼DIR}   (7)
∼ISA := ⊎{∼CISA, ∼KISA, ∼DISA}   (8)
∼rule := ⊎{(∼IR ⊗ ∼ISA), ∼Irule}   (9)

Overall IR equivalence is defined as the transitive closure of the union (notated with ⊎) of the three individual IR relations. ISA equivalence is defined similarly. Overall rewrite rule equivalence is then defined using the ⊗ operator, where ∼⊗ = ∼a ⊗ ∼b is defined as: (a1, b1) ∼⊗ (a2, b2) iff a1 ∼a a2 and b1 ∼b b2. Specifically, rule equivalence is obtained by combining IR equivalence in this way with ISA equivalence, and then combining the result with ∼Irule using ⊎.

The set of all duplicates of rule R is the rule equivalence class [R]rule, where R′ ∈ [R]rule ⇐⇒ R ∼rule R′. ψ_dup can be constructed by enumerating all elements of the equivalence class [R]rule.
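One practical way to see why the two example rules above coincide is to compare canonical forms of their tree-shaped IR patterns under the commutative and input-renaming relations. The sketch below is our own illustration (the paper instead enumerates the full equivalence class inside ψ_dup, which also handles same-kind and data-dependency equivalence); ties between same-shaped commutative arguments are a subtle case this simple version does not fully resolve.

```python
COMMUTATIVE = {"add", "mul"}

def shape(pat):
    # Name-independent shape key used to order commutative arguments.
    if isinstance(pat, str):
        return ("var",)
    op, *args = pat
    return (op,) + tuple(shape(a) for a in args)

def sort_comm(pat):
    if isinstance(pat, str):
        return pat
    op, *args = pat
    args = [sort_comm(a) for a in args]
    if op in COMMUTATIVE:
        args.sort(key=shape)   # note: same-shape arguments remain ambiguous
    return (op, *args)

def rename(pat, env):
    # Canonical input renaming in first-occurrence order (the ~Irule relation).
    if isinstance(pat, str):
        return env.setdefault(pat, "v%d" % len(env))
    op, *args = pat
    return (op, *(rename(a, env) for a in args))

def canon(pat):
    return rename(sort_comm(pat), {})

assert canon(("add", "I1", ("mul", "I2", "I3"))) == \
       canon(("add", ("mul", "I2", "I3"), "I1"))
```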
B. Excluding Composite Rules

We also exclude any rule whose effect can already be achieved using the current set of generated rules (line 8 of Figure 1). We elucidate this using a simple example. Assume the algorithm just constructed a new query for the multisets m^IR, m^ISA, and the input I^IR (line 7 of Figure 1), and assume that the rule library S_R currently contains rules for addition (I1 + I2 → add(I1, I2)) and multiplication (I1 · I2 → mul(I1, I2)). Consider the following cases.

1) If I^IR = (I1), m^IR = {+}, and m^ISA = {add}, then the rule I1 + I1 → add(I1, I1) will be synthesized by CEGISAll. But this rule is a specialization of the existing rule for addition. Any use of this specialized rule could instead be replaced by the more general rule, and this rule can thus be excluded. Note that we order the inputs on line 6 of Figure 1 to guarantee that the most general version of a rule is found first.

2) If I^IR = (I1, I2, I3), m^IR = {+, ·}, and m^ISA = {add, mul}, then the composite rule (I1 + (I2 · I3)) → add(I1, mul(I2, I3)) will be synthesized by CEGISAll. Using similar logic, any use of this composite rule could instead use the simpler and more general rules for addition and multiplication, and this rule can thus be excluded. The multiset ordering used in lines 4 and 5 of Figure 1 ensures that subsets are visited before supersets, guaranteeing that smaller rules are found first.

Only a subset of composite rules built from existing rules need to be excluded for each synthesis query. In general, for a specific query based on m^IR, m^ISA, and I^IR, we exclude composite rules R := (P^IR, P^ISA) that meet the following criteria:
• R has exactly |I^IR| inputs.
• P^IR has the same components as m^IR.
• P^ISA has the same components as m^ISA.
• P^IR is built from the IR programs of already-found rules in S_R.
This restricted space of cost metrics has the important property that the cost of any rule that would be synthesized using the components mISA can be determined up front as the sum of the cost of each component. Figure 3 shows our synthesis algorithm updated to only synthesize the lowest-cost rules for each unique IR pattern. The first change is to sort all possible mulitsets of ISA instructions up to size N ISA by cost (lower cost first) (line 2). This ordering ensures that the first rule synthesized for a particular IR program will be the lowest-cost version of that rule. Therefore, after synthesizing a new rule, all rules with a duplicate IR program can be excluded. The second change excludes rules with duplicate IR programs. A duplicate IR program is defined using the IR equivalence relation: ∼IRLC := ⊎{∼CIR , ∼KIR , ∼DIR , ∼I IR } (10) This is the same definition as (7), but with an additional 1 ∼I IR PIR relation ∼I IR defined as PIR is the result 2 of remapping variable identifiers in PIR 1 . The CEGISAll LC function called on line 11 is the same as CEGISAll , except that it uses ∼IRLC instead of ∼IR when constructing ψdup. iff PIR 2 The third change modifies AllComposites to use the known up-front cost cost(mISA). To see how this works, we consider again the example from Section IV-B. As be- fore, we assume SR currently contains two rules: one for addition (I1 + I2 → add(I1, I2)) and one for multiplication (I1 · I2 → mul(I1, I2)). We assume the target (ISA) expres- sions for these rules have cost 5 and 10, respectively. Consider the following situation: • Suppose IIR = (I1, I2, I3), and mIR = {+, ·}. It might be possible to synthesize a rule that has IR pat- tern (I1 + (I2 · I3)). We know that the composite rule (I1 + (I2 · I3)) → add(I1, mul(I2, I3)) would have a cost of 15 since rule costs are additive. Therefore, we can exclude any rule that matches this IR pattern and has cost(mISA) ≥ 15. To implement this, only one adjustment needs to be made to the conditions in Section IV-B. Instead of requiring PISA to have the same components as mISA, we simply require cost(PISA) ≥ cost(mISA), i.e., for rules matching the other conditions, if the ISA program has a cost equal to or greater than cost of the ISA program in the current rule, is excluded. These conditions are encapsulated by the call to AllComposites LC (line 10). it VI. EVALUATION Our evaluation strategy is threefold. We first show that our algorithm is capable of producing a variety of many-to-many rules. A good set of rewrite rules involves both many-to- one and one-to-many rules. We also show that by removing duplicate, composite, and high-cost rules, we produce a much smaller set of rewrite rules. Second, we analyze the effect on performance of the optimizations described above. We show that they all significantly reduce the time spent in synthesis. Finally, we show that by using different cost metrics, we can generate different sets of lowest-cost rewrite rules. A. Implementation All instructions are formally specified using the hwtypes Python library [11], which leverages pySMT [18] to construct (quantifier-free) SMT queries in the theory of bit-vectors. We also use annotations indicating which instructions are commutative. We use Boolector [28] as the SMT solver and set a timeout of 12 seconds for each CEGIS invocation. Every synthesized rewrite rule is independently verified to be valid. B. 
B. Instruction Specifications

To evaluate our algorithms, we selected small but non-trivial sets of IR and ISA instructions operating on 4-bit bit-vectors.

IR: We define the IR instruction set to be constants (0, 1), bitwise operations (not, and, or, xor), arithmetic operations (neg, add, sub), multiplication (mul), unsigned comparison operations (ult, ule, ugt, uge), equality (eq), and dis-equality (neq).

ISA 1: This is a minimal RISC-like ISA containing only 6 instructions: nand, sub, three comparison instructions (cmpZ, cmpN, cmpC) which compute the zero (Z), sign (N), and carry (C) flags respectively for a subtraction, and a flag inverting instruction (inv).

ISA 2: This is an ISA specialized for linear algebra. It supports the 5 instructions: neg, add, add3 (addition of 3 values), mul, and mac (multiply-accumulate).

C. Rewrite Rule Synthesis

For each ISA we run three experiments. The first experiment (All Rules) is the baseline that generates all many-to-many rules including duplicate, composite, and high-cost rules. This is an implementation of Buchwald et al.'s IterativeCEGIS algorithm extended to use GCBPS for many-to-many rules (notated as IterativeCEGIS_GCBPS). The second (Only Unique) generates only unique rules by excluding all duplicates and composites using the genAll algorithm. The third (Only Lowest-Cost) generates only the lowest-cost rules using the genAll_LC algorithm in Figure 3. A code-size cost metric is used, i.e., cost(K) is just the number of components in K.

For ISA 1, we split the rule generation into two parts. The first part (ISA 1a) synthesizes rules composed of bitwise and arithmetic IR instructions using the ISA's nand and sub instructions. The second part (ISA 1b) synthesizes rules composed of constants and comparison instructions using the four instructions cmpZ, cmpN, cmpC, and inv.

For 1a and 1b, we synthesize rewrite rules up to an IR program size of 2 and an ISA program size of 3 (written 2-to-3). For (Only Lowest-Cost), we increase the ISA program size to 5 and 4 respectively. For ISA 2, we synthesize all rewrite rules composed of constant and arithmetic (including mul) IR instructions up to size 3-to-2.

The number of rewrite rules produced for ISA 1a, 1b, and 2 is shown in Tables I, II, and III, respectively. Each table entry is the number of rewrite rules synthesized for a particular IR and ISA program size. For all ISAs, the extra synthesized rules in (All Rules) were compared against the duplicate and composite rules excluded by (Only Unique). Entries in (All Rules) marked with a '(-n)' represent 'n' rules that (Only Unique) synthesized, but (All Rules) missed due to CEGIS timeouts. The (All Rules) experiment for the entry marked with an asterisk could not complete in 70 hours, so the number calculated from (Only Unique) is shown.

For both ISAs we were able to synthesize 1-to-many and many-to-1 rules for both IR and ISA instructions. genAll produced a more complete set of rules than IterativeCEGIS_GCBPS.

Table IV shows the percentage of rules that are duplicates or composites in the first column, and the percentage of rules that are high cost in the second column. Most rules in (All Rules) are duplicates, composites, or high cost. Out of the 349179 rules up to size 3-to-2 for ISA 2 (i.e., the sum of the (All Rules)), 99.5% are duplicates or composites. Similarly, most rules are high cost. In ISA 1a, 59672 out of 59822 rules (99.7%) up to size 2-to-3 are high cost.
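To give a flavor of these semantics, the following is a plain-Python sketch of ISA 1's instructions on 4-bit values. The encoding and the flag conventions (in particular, the carry flag being set when the subtraction does not borrow) are our own assumptions for illustration; the paper specifies these formally with hwtypes/pySMT.

```python
# Plain-Python sketch of 4-bit ISA 1 semantics (our encoding, not the paper's).
MASK = 0xF                                       # 4-bit bit-vectors

def nand(a, b): return ~(a & b) & MASK
def sub(a, b):  return (a - b) & MASK
def cmpZ(a, b): return int(sub(a, b) == 0)       # zero flag of a - b
def cmpN(a, b): return int((sub(a, b) >> 3) & 1) # sign flag (bit 3) of a - b
def cmpC(a, b): return int(a >= b)               # carry flag: no borrow in a - b
def inv(f):     return 1 - f                     # flag inversion

# e.g. the IR instruction ult (unsigned <) can be realized as inv(cmpC),
# a 2-instruction ISA pattern: a < b iff the subtraction a - b borrows.
assert all(inv(cmpC(a, b)) == int(a < b) for a in range(16) for b in range(16))
```

This also illustrates why one-to-many rules are unavoidable for such a minimal ISA: even a single IR comparison needs a flag computation plus an inversion.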
D. Synthesis Time Improvement with genAll

In this section we showcase the synthesis time improvements of genAll. The first experiment is the baseline IterativeCEGIS_GCBPS. The second excludes duplicate rules (i.e., with line 8 of Figure 2). The third, genAll, excludes both duplicates and composites (i.e., with line 8 of both Figure 2 and Figure 1).

For each GCBPS query, we note the time required (t_sat) to run CEGISAll. Next, we measure the number of unique rules (N_unique) found by CEGISAll. We then add the pair (N_unique, t_sat) to our dataset. We plot the cumulative synthesis time versus the number of unique rules found by doing the following. Each data point is sorted by its slope (t_sat/N_unique). Then, the increase in both t_sat and N_unique is plotted for each sorted point. Some data points have N_unique = 0, indicating that every synthesized rule was redundant; these are shown using a vertical slope.

The synthesis time plot for unique rewrite rules for ISA 1b up to size 2-to-3 is shown in Figure 4a. Excluding all duplicates shows a 5.3× speedup. Excluding both duplicates and composites shows a 6.2× speedup. Both optimizations find an additional 5 unique rules.

E. Synthesis Time Improvement with genAll_LC

We also showcase the synthesis time improvements of genAll_LC using a similar setup. The first experiment is the baseline IterativeCEGIS_GCBPS. The second excludes IR duplicate rules. The third, genAll_LC, excludes both IR duplicates and IR composites. We use the same experimental setup as before, except that when computing N_unique, all higher-cost rules are filtered out instead.

The synthesis time plot for lowest-cost rewrite rules for ISA 1b up to size 2-to-3 is shown in Figure 4b. Excluding rules with duplicate IR programs provides a 41× speed-up. Also excluding high-cost composites provides a 1254× speed-up over the baseline (All Rules) configuration.

F. Total Speed-up

We summarize the speed-ups of genAll and genAll_LC compared to the IterativeCEGIS_GCBPS baseline for all configurations in Table V. We compare the synthesis time in the "Synth" column. We compare the total algorithm runtime in the "Total" column (including time for iterating, solving, rule filtering, etc.). The last row's baseline did not complete in 70 hours, so we provide lower bounds for speed-up.

                All Rules             Only Unique         Only Lowest-Cost
ISA Prog Size   1     2      3        1    2    3         1    2    3    4    5
IR Size 1       5     32     1096     3    10   189       3    4    2    1    0
IR Size 2       76    1719   56894    40   96   1940      40   67   34   12   6

TABLE I: Number of synthesized rewrite rules for ISA 1a.

                All Rules                  Only Unique          Only Lowest-Cost
ISA Prog Size   1     2          3         1    2     3         1    2    3    4
IR Size 1       17    71         3662      9    51    873       0    3    0    0
IR Size 2       89    3942 (-5)  199572    78   717   21511     7    64   9    52

TABLE II: Number of synthesized rewrite rules for ISA 1b.

                All Rules          Only Unique     Only Lowest-Cost
ISA Prog Size   1      2           1     2         1     2
IR Size 1       11     10          3     3         3     1
IR Size 2       287    3115        14    69        14    32
IR Size 3       3998   341758∗     315   1337      315   760

TABLE III: Number of synthesized rewrite rules for ISA 2.

Fig. 4: Cumulative synthesis time comparison for ISA 1b up to size 2-to-3. (a) genAll. (b) genAll_LC.

ISA   Rule Size up to (IR, ISA)   % Duplicate or Composite   % High-cost
1a    (2, 3)                      96.2%                      99.7%
1b    (2, 3)                      88.8%                      99.9%
2     (3, 2)                      99.5%                      99.7%

TABLE IV: Percent of rewrite rules up to (IR, ISA) size that are a duplicate or a composite, and percent that are high-cost.
                                  genAll Speed-up      genAll_LC Speed-up
ISA   Rule Size up to (IR, ISA)   Synth     Total      Synth      Total
1a    (2, 2)                      3.5×      1.3×       11×        2.8×
1b    (2, 2)                      3.1×      1.7×       26×        2.8×
2     (2, 2)                      11×       2×         53×        2.5×
1a    (2, 3)                      12×       6.8×       601×       57×
1b    (2, 3)                      6.2×      2.7×       1254×      63×
2     (3, 2)                      > 768×    > 81×      > 4004×    > 171×

TABLE V: Speed-ups compared to IterativeCEGIS_GCBPS.

The speed-ups depend on many parameters including the maximum size of the rewrite rules, the number of possible instructions, the commutativity of the instructions, and the semantics of the instructions. The optimizations discussed produce several orders of magnitude speed-ups. Further optimizing the non-solver portions (e.g., re-coding in C) would drastically increase the "Total" speed-ups to be closer to the "Synth" ones. Clearly, the combination of all optimizations discussed in this paper can produce speed-ups of several orders of magnitude.

ISA   Rule Size up to (IR, ISA)   Unique (CS)   Unique (E)   Common
1a    (2, 5)                      121           161          48
1b    (2, 4)                      99            198          36
2     (3, 2)                      134           137          991

TABLE VI: Number of unique and common rewrite rules synthesized for code size (CS) and energy (E) cost metrics.

G. Cost Metric Comparisons

Our final experiment explores how the choice of cost metric influences the rules. We have implemented two cost metrics: a code size metric (CS) and an estimated energy metric (E). The energy metric was created to correspond to real hardware energy data. For example, the cost ratio for mul and add is 1 : 1 for code size, but is 2.5 : 1 for energy. The number of common and unique lowest-cost rewrite rules for each ISA is shown in Table VI. While there is some overlap in common rules, each cost metric produces a differing set of unique lowest-cost rules.

VII. CONCLUSION AND FUTURE WORK

We showed that many-to-many instruction selection rewrite rules can be synthesized for various ISAs using program synthesis. This supports two major trends in computer architecture. The first is the trend towards simple or reduced instruction architectures where multiple instructions are needed for simple operations. It also supports the trend to introduce more complex domain-specific instructions for energy efficiency. In this case, a single instruction can implement complex operations.

We showed that our algorithms are efficient. Removing duplicates, composites, and higher-cost rules results in multiple orders of magnitude speed-ups.

Synthesizing many-to-many rewrite rules for modern IRs and ISAs may require further optimizations. Many of our synthesized rules contain program fragments that a compiler would optimize before instruction selection (e.g., sub(X, X)). Excluding these could result in further speed-ups.

Buchwald et al. [6] presented generalizations for multi-sorted instructions, multiple outputs, preconditions, and internal attributes, enabling the modeling of memory and control flow instructions. Our synthesis query and algorithms are orthogonal and could incorporate these features, allowing for a broader range of possible instruction sets.

As is the case in prior work, we limit synthesis to loop-free patterns. Relaxing this constraint and using other instruction selection algorithms would be an interesting research avenue.

We believe this research area is fertile ground and hope our work inspires and enables future research endeavors towards the goal of automatically generating compilers for emerging domain-specific architectures.

REFERENCES

[1] Alfred V. Aho, Mahadevan Ganapathi, and Steven W. K. Tjiang. Code generation using tree matching and dynamic programming.
ACM Trans- actions on Programming Languages and Systems (TOPLAS), 11(4):491– 516, 1989. [2] Aws Albarghouthi, Sumit Gulwani, and Zachary Kincaid. Recursive program synthesis. In Computer Aided Verification: 25th International Conference, CAV 2013, Saint Petersburg, Russia, July 13-19, 2013. Proceedings 25, pages 934–950. Springer, 2013. [3] Rajeev Alur, Arjun Radhakrishna, and Abhishek Udupa. Scaling enumerative program synthesis via divide and conquer. In International Conference on Tools and Algorithms for the Construction and Analysis of Systems, pages 319–336. Springer, 2017. [4] Rick Bahr, Clark Barrett, Nikhil Bhagdikar, Alex Carsello, Ross Daly, Caleb Donovick, David Durst, Kayvon Fatahalian, Kathleen Feng, Pat Hanrahan, et al. Creating an agile hardware design flow. In 2020 57th ACM/IEEE Design Automation Conference (DAC), pages 1–6. IEEE, 2020. [5] Eli Bendersky. A deeper look into the LLVM code generator, Part 1, Feb 2013. [6] Sebastian Buchwald, Andreas Fried, and Sebastian Hack. Synthesizing an instruction selection rule library from semantic specifications. In Proceedings of the 2018 International Symposium on Code Generation and Optimization, pages 300–313, 2018. [7] R. G. Cattell. Automatic derivation of code generators from machine ACM Transactions on Programming Languages and descriptions. Systems (TOPLAS), 2(2):173–190, 1980. [8] Yu-Hsin Chen, Joel Emer, and Vivienne Sze. Eyeriss: A spatial archi- tecture for energy-efficient dataflow for convolutional neural networks. ACM SIGARCH Computer Architecture News, 44(3):367–379, 2016. [9] Ross Daly, Caleb Donovick, Jackson Melchert, Rajsekhar Setaluri, Nestan Tsiskaridze Bullock, Priyanka Raina, Clark Barrett, and Pat Hanrahan. Synthesizing instruction selection rewrite rules from RTL In Conference on Formal Methods in Computer-Aided using SMT. Design (FMCAD), page 139, 2022. [10] Joao Dias and Norman Ramsey. Automatically generating instruction selectors using declarative machine descriptions. ACM Sigplan Notices, 45(1):403–416, 2010. [11] Caleb Donovick, Ross Daly, Jackson Melchert, Lenny Truong, Priyanka Raina, Pat Hanrahan, and Clark Barrett. Peak: A single source of truth for hardware design and verification. arXiv preprint arXiv:2308.13106, 2023. [12] Helmut Emmelmann, F.-W. Schr¨oer, and Rudolf Landwehr. BEG: A generator for efficient back ends. ACM Sigplan Notices, 24(7):227–237, 1989. [13] Herbert Enderton and Herbert B. Enderton. A mathematical introduction to logic. Elsevier, 2001. [14] Christopher W. Fraser and David R. Hanson. A retargetable C compiler: Design and implementation. Addison-Wesley Longman Publishing Co., Inc., 1995. [15] Christopher W. Fraser, David R. Hanson, and Todd A. Proebsting. Engineering a simple, efficient code-generator generator. ACM Letters on Programming Languages and Systems (LOPLAS), 1(3):213–226, 1992. [16] Mahadevan Ganapathi. Retargetable code generation and optimization using attribute grammars. PhD thesis, 1980. AAI8107834. [17] Mahadevan Ganapathi and Charles N. Fischer. Description-driven In Proceedings of the 9th code generation using attribute grammars. ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, POPL ’82, page 108–119, New York, NY, USA, 1982. Association for Computing Machinery. [18] Marco Gario and Andrea Micheli. PySMT: A solver-agnostic library for fast prototyping of SMT-based algorithms. In SMT Workshop 2015, 2015. [19] R. Steven Glanville and Susan L. Graham. A new method for compiler In Proceedings of the 5th ACM SIGACT-SIGPLAN code generation. 
Symposium on Principles of Programming Languages, POPL ’78, page 231–254, New York, NY, USA, 1978. Association for Computing Machinery. [20] Orna Grumberg, Assaf Schuster, and Avi Yadgar. Memory efficient all- solutions SAT solver and its application for reachability analysis. In For- mal Methods in Computer-Aided Design: 5th International Conference, FMCAD 2004, Austin, Texas, USA, November 15-17, 2004. Proceedings 5, pages 275–289. Springer, 2004. [21] Sumit Gulwani, Susmit Jha, Ashish Tiwari, and Ramarathnam Venkate- In Proceedings of the 32nd san. Synthesis of loop-free programs. ACM SIGPLAN Conference on Programming Language Design and Implementation, 2011. [22] John L. Hennessy and David A. Patterson. A new golden age for computer architecture. Commun. ACM, 62(2):48–60, January 2019. [23] Roger Hoover and Kenneth Zadeck. Generating machine specific In Proceedings of the 23rd ACM SIGPLAN- optimizing compilers. SIGACT Symposium on Principles of Programming Languages, pages 219–229, 1996. [24] Norman P. Jouppi, Cliff Young, Nishant Patil, and David Patterson. A domain-specific architecture for deep neural networks. Communications of the ACM, 61(9):50–59, 2018. [25] Donald E. Knuth. The Art of Computer Programming, Volume 4, Fascicle 3: Generating All Combinations and Partitions. Addison- Wesley Professional, 2005. [26] David Ryan Koes and Seth Copen Goldstein. Near-optimal instruction In Proceedings of the 6th Annual IEEE/ACM selection on DAGs. International Symposium on Code Generation and Optimization, pages 45–54, 2008. [27] Jackson Melchert, Kathleen Feng, Caleb Donovick, Ross Daly, Ritvik Sharma, Clark Barrett, Mark A Horowitz, Pat Hanrahan, and Priyanka Raina. APEX: A framework for automated processing element design space exploration using frequent subgraph analysis. In Proceedings of the 28th ACM International Conference on Architectural Support for Programming Languages and Operating Systems, Volume 3, pages 33– 45, 2023. [28] Aina Niemetz, Mathias Preiner, and Armin Biere. Boolector 2.0. J. Satisf. Boolean Model. Comput., 9(1):53–58, 2014. [29] Eduardo Pelegri-Llopart and Susan L. Graham. Optimal code generation for expression trees: An application BURS theory. In Proceedings of the 15th ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, pages 294–308, 1988. [30] Raghu Prabhakar, Yaqi Zhang, David Koeplinger, Matt Feldman, Tian Zhao, Stefan Hadjis, Ardavan Pedram, Christos Kozyrakis, and Kunle Olukotun. Plasticine: A reconfigurable architecture for parallel patterns. ACM SIGARCH Computer Architecture News, 45(2):389–402, 2017. [31] Takahisa Toda and Takehide Soh. Implementing efficient all solutions sat solvers. Journal of Experimental Algorithmics (JEA), 21:1–44, 2016.
ai_researcher
1
Aplikasi_Media_Sosialisasi_Quality_Control_Circle_dan_Idea_Proposal_Guidance.pdf
Stochastic Item Descent Method for Large Scale Equal Circle Packing Problem

Kun He1, Min Zhang1*, Jianrong Zhou1, Yan Jin1, Chu-min Li2
1School of Computer Science and Technology, Huazhong University of Science and Technology, China
2MIS, Université de Picardie Jules Verne, France
{brooklet60, m_zhang, yukihana0416, jinyan}@hust.edu.cn, [email protected]
* Corresponding author.

Abstract

Stochastic gradient descent (SGD) is a powerful method for large-scale optimization problems in the area of machine learning, especially for a finite-sum formulation with numerous variables. In recent years, mini-batch SGD has gained great success and has become a standard technique for training deep neural networks fed with large amounts of data. Inspired by its success in deep learning, we apply the idea of SGD with batch selection of samples to a classic optimization problem in decision version. Given n unit circles, the equal circle packing problem (ECPP) asks whether there exists a feasible packing that could put all the circles inside a circular container without overlapping. Specifically, we propose a stochastic item descent method (SIDM) for ECPP in large scale, which randomly divides the unit circles into batches and runs the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm on the corresponding batch function iteratively to speed up the calculation. We also increase the batch size during the batch iterations to gain higher quality solutions. Comparing to the current best packing algorithms, SIDM greatly speeds up the calculation of the optimization process and guarantees the solution quality for large scale instances with up to 1500 circle items, while the baseline algorithms usually handle about 300 circle items. The results indicate the high efficiency of SIDM for this classic optimization problem in large scale, and show potential for other large scale classic optimization problems in which gradient descent is used for optimization.

1 Introduction

Stochastic gradient descent (SGD) [Robbins and Monro, 1951] has gained great success in the area of machine learning [Bottou, 2010; Bottou et al., 2018]. Especially for deep learning tasks, mini-batch SGD has become a standard technique for the training of deep neural networks fed with large amounts of data [Goodfellow et al., 2016; Lecun et al., 1998]. Inspired by its successful application for such big, complex optimization problems, in this work, we consider a classic global optimization problem well-studied in the area of operations research for over 30 years [Kravitz, 1967], and apply the idea of batch gradient descent (BGD) for this problem in large scale.

Specifically, we consider the equal circle packing problem (ECPP) in decision version, the purpose of which is to answer whether there exists a dense arrangement of n unit circles without overlapping (i.e., a feasible packing) in a circular container of fixed radius. If we already have an efficient algorithm for the decision version, the optimal version of minimizing the container radius for feasible packings can be solved efficiently by combining divide and conquer on the container radius. Our motivation is to design an algorithm that is very fast so as to address the problem in large scale, where hundreds and thousands of unit circles are considered.

Finding the optimal solution of ECPP with a large number of circles is known to be NP hard, and even the search for a suboptimal solution is still very challenging. Many researchers design heuristic algorithms to find a suboptimal packing pattern. In recent years, the quasi-physical energy based method was proposed, which could solve ECPP in optimal version with up to a hundred items. Many quasi-physical researches regard each circle as an elastic item and treat the container as a rigid hollow container [He et al., 2013; He et al., 2015; He et al., 2018]. If two items, or an item with the container, are squeezed against each other, the whole system would have elastic potential energy, and by a gradient descent method like the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm [Liu and Nocedal, 1989] we can reduce the potential energy of the system so as to remove the overlapping. Then some basin-hopping strategy is used to jump out of the local optimal trap where overlaps still exist.

On the other hand, SGD is a classic first-order optimization algorithm widely used in large scale machine learning problems due to its low computational cost and modest precision [Bottou, 2010; Bottou and Bousquet, 2008]. In the training of deep neural networks, SGD plays a key role in the optimization process, and promotes the great success of deep learning.

Inspired by the success of SGD in deep learning, can we apply this idea to the classic optimization problem of ECPP?
Many researchers de- sign heuristic algorithms to find a suboptimal packing pattern. In recent years, the quasi-physical energy based method was proposed which could solve ECPP in optimal version with up to a hundred items. Many quasi-physical researches re- gard each circle as an elastic item and treat the container as a rigid hollow container [He et al., 2013; He et al., 2015; He et al., 2018]. If two items, or an item with the container are squeezed against each other, the whole system would have elastic potential energy, and by gradient descent method like Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm [Liu and Nocedal, 1989] we can reduce the potential energy of the system so as to remove the overlapping. Then some Basin- hopping strategy is used to jump out of the local optimal trap where overlaps still exist. On the other hand, SGD is a classic first-order optimiza- tion algorithm widely used in large scale machine learning problems due to its low computational cost and modest pre- cision [Bottou, 2010; Bottou and Bousquet, 2008]. In the training of deep neural networks, SGD plays a key role in the optimization process, and promotes the great success of deep learning. In the iteration of SGD, it randomly selects a sam- ple and then optimizes the loss function corresponding to the current sample. Inspired by the success of SGD in deep learning, can we apply this idea to the classic optimization problem of ECPP? Specifically, can we randomly select a unit circle and opti- mize the corresponding optimization function? If each time a batch of circles are selected for gradient descent by fix- ing other circles, then we have a batched version of SGD. As quasi-Newton methods have been shown superior to first- order gradient descent method for various circle packing problems, we choose a quasi-Newton method like BFGS and combine it with random batch, and design a batched version of stochastic BFGS for ECPP. Therefore, we propose a novel approach called stochastic item descent method (SIDM), which can find dense layouts for large scale ECPP. SIDM accelerates the search process, especially for a large number of unit circles. In addition, after attaining a local minimum or saddle points, we improve the hopping strategy in the current best solution [He et al., 2018], in which we gradually increase the shrinking radius of con- tainer during the iteration to find better solutions. Compar- ing to state-of-the-art algorithms that can only address small- scale ECPP within reasonable time, SIDM can address up to n = 1500 instances and reach current best solution reported on the packomania website 1. Our main contributions are listed as follows: • The proposed novel method SIDM can speed up the process of reaching the local minimum or saddle point, which is the main computation load of the ECPP. • We improve the basin-hopping procedure of the existing strategy used to escape suboptimal layouts, and shrink the radius of the container more flexibly. • Experiments demonstrate that SIDM can greatly accel- erate the computation while maintaining the state-of-art packing quality. 2 Related Work In the literature, most researchers address the optimal version of ECPP that requires to find the smallest container radius for all items. But they usually solve the decision version of ECPP as a sub-problem and then use binary search (divide and con- quer on the container radius) so as to find a possible smallest container radius for feasible packing. 
The efficiency and ef- fectiveness of the overall algorithm mainly depend on the al- gorithm on the decision version. Thus, in this work, we focus on improving the efficiency of the sub-algorithm for the de- cision version while maintaining the same effectiveness. And in the following, we provide an overview for the ECPP in optimal version. ECPP is a well studied problem since 1960’s [Pirl, 1969]. Mathematicians found the optimal packing pattern for 1 ≤ n ≤ 13 [Pirl, 1969; Melissen, 1994; Fodor, 2000; Fodor, 2003] and n = 19 [Fodor, 1999]. However, it is very hard to mathematically find optimal solutions for bigger n, and mathematicians only found suboptimal packing patterns for n ≤ 25 [Pirl, 1969; Goldberg, 1971; Reis, 1975]. To earn a good trade-off between the computation effi- ciency and solution equality, greedy based heuristic algo- rithms performance well for n ≤ 100. Graham et al. 1http://www.packomania.com proposed methods that simulate repulsion forces and bil- liards to iteratively search for global optimal layout [Gra- ham et al., 1998], and found suboptimal solution for 25 ≤ n ≤ 65. Akiyama et al. obtained dense layout for n = 70, 73, 75, 77, 78, 79, 80 by a greedy algorithm [Akiyama et al., 2003]. Then, Grosso et al. proposed a monotonic basin hopping algorithm that improved many solutions for 66 ≤ n ≤ 100 [Grosso et al., 2010]. i.e. For heuristic approaches, a typical way is to transform ECPP into a discrete optimization problem, putting the unit circles into the container one by one [Chen et al., 2018], and then incorporating some search methods to im- prove the solution. Beam search algorithm [Akeb et al., 2009] and greedy heuristic algorithm [Chen et al., 2018] have been proposed, which are all based on max hole degree method [Huang et al., 2003]. However, the solution quality is rather limited. Another approach is to formulate ECPP into a continuous optimization problem, that is, put all circles into the container allowing overlapping, use gradient based optimization algo- rithms to constantly adjust positions of the unit circles, and shrink the container radius for the next round of search if fea- sible solution is found. Specifically, quasi-physical models are used that regard each circle as an elastic item and treat the container as a rigid hollow container [He et al., 2013; He et al., 2015; He et al., 2018]. If two items, or an item and the container are squeezed against each other, the whole system would have certain elastic potential energy, and by gradient descent method like BFGS we can reduce the potential energy of the system so as to remove the overlapping. Then some Basin-hopping strategy can be used to jump out of the local optimal trap where overlaps still exist. This category mainly includes some quasi-physical algorithms [Huang et al., 2001; Wang et al., 2002; Liu et al., 2016; Zhang and Deng, 2005; Huang and Ye, 2011], basin hopping algorithms [Addis et al., 2008], iterated Tabu search algorithms [Fu et al., 2013; Zeng et al., 2016], and evolutionary search algorithms [Flo- res et al., 2016]. Huang et al. proposed a global optimiza- tion algorithm based on quasi-physics, tested on instances of 1 ≤ n ≤ 200 and obtained 63 better packings [Huang and Ye, 2011]. He et al. 
He et al. proposed a quasi-physical quasi-human algorithm (QPQH) [He et al., 2018] that utilizes local neighbor information to speed up the calculation; tested on instances of n = 1, 2, ..., 320, it obtained 66 denser layouts with smaller container radii and is the current state of the art.

To our knowledge, there are no formal publications on instances of n > 320, probably due to the large computational complexity. On the circle packing website http://www.packomania.com, the maintainer Eckard Specht reports ECPP results for n = 1 to 5000, obtained with his "program cci, 1999-2014". However, he did not report the running time or computing machine, nor did he release his code.

The quasi-physical model is a general model popularly used for solving ECPP; it comprises a key algorithm for obtaining suboptimal layouts and a basin-hopping strategy for jumping out of local optima. Our proposed method adapts this framework, and our main contribution is the design of a mini-batch BFGS method that greatly speeds up the BFGS routine normally used for ECPP, allowing us to solve instances of up to n = 1500 items; we believe this is significant progress for the general quasi-physical model.

3 Problem Formulation

The equal circle packing problem (ECPP) in its decision version asks whether we can pack n unit circles into a circular container with fixed radius R, such that all circle items are within the border of the container and no two circle items overlap.

Formally, we build a Cartesian coordinate system with its origin at the center of the container, and denote the coordinates of the center of circle i by (x_i, y_i), i ∈ {1, 2, ..., n}. Any layout configuration is then denoted by X = (x_1, y_1, x_2, y_2, ..., x_n, y_n). Our purpose is to find a packing pattern of n circles without overlaps, i.e., to find (x_i, y_i), i ∈ {1, 2, ..., n}, such that

$\sqrt{x_i^2 + y_i^2} + 1 \le R$,

$\sqrt{(x_i - x_j)^2 + (y_i - y_j)^2} \ge 2$,

where i, j ∈ {1, 2, ..., n}, i ≠ j. The first constraint states that no circle item crosses the border of the container, and the second indicates that no two items overlap. Thus, we need to find 2n real numbers satisfying the two constraints, in which case we call X a feasible layout.
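The two constraints can be checked directly for any candidate layout. The following minimal sketch in Python (illustrative only; the paper's implementation is in C++) tests feasibility of a layout given as an n x 2 array of circle centers:

```python
import numpy as np

def is_feasible(X, R, eps=1e-9):
    """X: array of shape (n, 2) holding the circle centers (x_i, y_i).
    Feasible iff every unit circle lies inside the container of radius R
    and no two circles overlap."""
    X = np.asarray(X, dtype=float)
    # containment: sqrt(x_i^2 + y_i^2) + 1 <= R
    if np.any(np.linalg.norm(X, axis=1) + 1.0 > R + eps):
        return False
    # pairwise separation: ||c_i - c_j|| >= 2 for all i != j
    diff = X[:, None, :] - X[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    n = len(X)
    dist[np.arange(n), np.arange(n)] = np.inf   # ignore self-distances
    return bool(np.all(dist >= 2.0 - eps))
```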
4 The General Quasi-physical Model

Among the current best approaches, researchers build a quasi-physical model to address this continuous optimization problem [Huang et al., 2001; He et al., 2018]. Regard the container as a rigid hollow item (denoted item "0") fixed at the origin, and each circle i as a movable elastic circular item. Elastic potential energy arises whenever two elastic items overlap, or an item overlaps the border of the container. We can then calculate the elastic potential energy of a layout configuration X, and reducing the potential energy by a gradient descent method reduces the overlap among the items.

5 The Proposed SIDM Algorithm

We adopt the general quasi-physical model for ECPP; the key issue is how to find a local minimum of the potential energy efficiently so that we can handle large-scale instances. The advantage of our method is that it can efficiently find a feasible layout, which is also a global minimum layout for a fixed container radius. In the following discussion, we focus on the global optimization problem using the best-known radius reported on the Packomania website.

A feasible layout search consists of three procedures. First, a local search procedure finds a local minimum or saddle point; it is here that our stochastic item descent method is proposed. The second is the basin-hopping procedure, for which we design a flexible strategy of shrinking the container radius. Finally, the global search procedure combines the local search and basin-hopping procedures to search for a solution iteratively within reasonable time.

5.1 Stochastic Item Descent Method

For the local search procedure, we randomly select items as a mini-batch and use the classical BFGS algorithm [Liu and Nocedal, 1989] for gradient descent. The main idea of BFGS is to use gradient information of the objective function U to approximate the inverse of the Hessian matrix rather than calculate second-order derivatives at each iteration.

Definition 1 (Overlap Depth). There are two kinds of overlap: circle-circle overlap and circle-container overlap. The circle-circle overlap depth is defined as

$d_{ij} = \max\left(2 - \sqrt{(x_i - x_j)^2 + (y_i - y_j)^2},\; 0\right)$,   (1)

where i ≠ j, and the circle-container overlap depth is defined as

$d_{0i} = \max\left(\sqrt{x_i^2 + y_i^2} + 1 - R,\; 0\right)$.   (2)

Definition 2 (Elastic Potential Energy). The elastic potential energy of the items is proportional to the square of the overlap depth. The potential energy of circle i is defined as $U_i = \sum_{j=0, j \neq i}^{n} d_{ij}^2$, and the total potential energy is $U(X) = \sum_{i=1}^{n} U_i$.

Obviously, the total energy satisfies U ≥ 0 for any layout configuration, and U = 0 if and only if X is a feasible layout, i.e., U = 0 is a global minimum of the potential. Thus, for a fixed R, we minimize U as the objective function so as to find a feasible solution.

For simplicity, we use $X^s$ to denote the layout of a subset of unit circles, and $U^s$ the corresponding elastic potential energy function of this set of circles; the complementary set of $X^s$ is denoted $X^c$. The BFGS iteration for minimizing the potential energy $U^s$ has the form

$X^s_{k+1} \leftarrow X^s_k - \alpha_k H_k g_k$,   (3)

in which $X^s_k$ is the layout configuration at iteration k, $g_k$ is the gradient of $U^s$ at $X^s_k$, $H_k$ is a positive definite approximation of $\nabla^2 U^s(X^s_k)^{-1}$, and $\alpha_k$ is the step length (learning rate) at each iteration, defined in Eq. (4). $H_k$ is updated dynamically by Eq. (6), in which I is the identity matrix and $u_k$, $v_k$ are defined in Eq. (5):

$\alpha_k = \arg\min_{\alpha \in \mathbb{R}^+} U^s(X^s_k - \alpha H_k g_k)$,   (4)

$u_k = X^s_{k+1} - X^s_k, \quad v_k = g_{k+1} - g_k$,   (5)

$H_{k+1} = \left(I - \frac{v_k u_k^T}{u_k^T v_k}\right)^T H_k \left(I - \frac{v_k u_k^T}{u_k^T v_k}\right) + \frac{u_k u_k^T}{u_k^T v_k}$.   (6)

Based on the above definitions, we design a local BFGS algorithm, Algorithm 1, for optimizing the potential of the circles in $X^s$ while all other circles in $X^c$ stay fixed.

Algorithm 1 Local BFGS Algorithm
Input: A layout for a subset of circles $X^s$; container radius R.
Output: A local minimum layout $X^{s*}$.
1: iteration step k ← 0;
2: $X^s_k$ ← $X^s$;
3: $H_k$ ← I;
4: calculate $g_k$;
5: while k ≤ MaxIterNum do
6:   calculate $\alpha_k$ by Eq. (4);
7:   calculate $X^s_{k+1}$ by Eq. (3);
8:   if $U^s \le 10^{-20}$ or $\|g_k\| \le 10^{-10}$ then
9:     return layout $X^s_{k+1}$ as $X^{s*}$;
10:  end if
11:  calculate $g_{k+1}$;
12:  calculate $u_k$, $v_k$, $H_{k+1}$ by Eqs. (5) and (6);
13:  k ← k + 1;
14: end while
15: return layout $X^s_k$ as $X^{s*}$.
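To make Definitions 1-2 and Algorithm 1 concrete, here is a minimal Python sketch of the total potential energy U(X) and a batch step that optimizes only a subset of circles while the rest stay fixed. It delegates the inner quasi-Newton update to SciPy's BFGS instead of the paper's hand-written BFGS with the exact line search of Eq. (4), so it is an approximation of Algorithm 1 rather than a faithful reimplementation:

```python
import numpy as np
from scipy.optimize import minimize

def total_energy(flat_X, R):
    """Total elastic potential energy U(X) of Definition 2: squared overlap
    depths d_0i (circle-container) and d_ij (circle-circle). Each circle-circle
    pair appears in both U_i and U_j, hence the factor of 2."""
    X = np.asarray(flat_X, dtype=float).reshape(-1, 2)
    d0 = np.maximum(np.linalg.norm(X, axis=1) + 1.0 - R, 0.0)
    diff = X[:, None, :] - X[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    iu = np.triu_indices(len(X), k=1)
    dij = np.maximum(2.0 - dist[iu], 0.0)
    return np.sum(d0 ** 2) + 2.0 * np.sum(dij ** 2)

def local_bfgs(X, subset, R):
    """Optimize the circles indexed by `subset` while all other circles stay
    fixed (the role of Algorithm 1); returns the updated full layout."""
    X = np.array(X, dtype=float)

    def u_subset(z):
        X[subset] = z.reshape(-1, 2)        # move only the batch circles
        return total_energy(X.reshape(-1), R)

    result = minimize(u_subset, X[subset].reshape(-1), method="BFGS",
                      options={"gtol": 1e-10})
    X[subset] = result.x.reshape(-1, 2)
    return X
```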
Combining the random selection of batches of unit circles with the local BFGS algorithm, we obtain our stochastic item descent method (SIDM). The idea is to randomly select a subset of circles and call local BFGS on this subset to get a locally better layout. We then randomly select another batch of circles from the remaining set and repeat this operation until all circles have been selected in some batch. This is equivalent to a random grouping of all circles for one round of iteration; the number of circles per group (except possibly the last group) is recorded as s, and the local BFGS algorithm is called iteratively for each group.

If we simply did another random grouping of the circles at the next round and ran BFGS iteratively for each group again, then after k rounds of iteration the potential energy of the whole system would probably still be relatively high. Therefore, we reduce the number of groups in each round, which means the number of circles in each group increases; the local BFGS algorithm is still applied to reduce the potential energy of each group. We go through k/2 rounds at each stage until all circles belong to one group, at which point we run local BFGS on the whole system. As the overall packing is already relatively good by then, a local minimum packing layout can be obtained quickly. The reason we do not keep a fixed group size but increase s gradually is that a small fixed group size may cause oscillation during the iterations, much like stochastic gradient descent in neural network training, making it hard for the potential energy to converge to a local minimum. The pseudocode of the entire process is given in Algorithm 2.

Algorithm 2 Stochastic Item Descent Method
Input: A layout configuration X; container radius R.
Output: A local minimum layout X*.
1: s ← 100;
2: k ← 10;
3: g ← ⌊n/s⌋;
4: while g ≥ 1 do
5:   for i = 1 to k do
6:     randomly select s circles as a group, for a total of g groups;
7:     run Algorithm 1 for each group;
8:     if U ≤ 10^{-20} then
9:       return current layout as X*;
10:    end if
11:  end for
12:  s ← min(s · 2, n);
13:  k ← max(⌊k/2⌋, 1);
14:  g ← ⌊n/s⌋;
15: end while
16: return current layout as X*.

The selection of the group size s affects the algorithm's efficiency. We experimented on two instances, n = 300 and n = 400, with group sizes s = 50, 60, 70, ..., 150, comparing the average running time of 10 runs to reach a local minimum layout. The results are illustrated in Figure 1, which shows that s = 100 is the best choice.

Figure 1: Comparison of the average running time of 10 runs of SIDM to find a good group size.
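The grouping schedule of Algorithm 2, random groups of size s with s doubled and the round count k halved until a single group remains, can be sketched as follows, reusing total_energy and local_bfgs from the sketch above (again illustrative; the early exit once s reaches n is our own termination guard):

```python
import numpy as np

def sidm(X, R, s=100, k=10, tol=1e-20):
    """Stochastic item descent (Algorithm 2): run local BFGS on random
    groups of circles, doubling the group size s and halving the round
    count k until every circle belongs to a single group."""
    X = np.array(X, dtype=float)
    n = len(X)
    while True:
        for _ in range(k):
            perm = np.random.permutation(n)          # one random grouping
            for start in range(0, n, s):
                X = local_bfgs(X, perm[start:start + s], R)
            if total_energy(X.reshape(-1), R) <= tol:
                return X                             # feasible layout found
        if s >= n:                                   # full-system pass done:
            return X                                 # guard against looping forever
        s, k = min(2 * s, n), max(k // 2, 1)
```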
5.2 Basin-hopping and Global Search

Stochastic item descent usually reaches a local minimum layout, or in many cases a saddle point, and cannot guarantee that the elastic potential energy of the whole system becomes small enough; that is, a feasible layout may not be found. In such cases, we need an appropriate basin-hopping strategy to help the current configuration jump out of the local optimum while retaining a good chance of moving toward the global optimum.

The shrinking strategy works well on layouts with dense inner packing and sparse outer packing [He et al., 2018]. Intuitively, if we make the circles near the container center denser and make more use of the inner space, we may obtain a better layout. To reach a globally optimal layout, we often need to run the basin-hopping strategy multiple times. QPQH uses an identical shrinking scale for each initial shrinking radius. In practice, as the number of hops increases, it becomes unnecessary to squeeze the circles too far inside, while the circles near the boundary still need more precise adjustment because they are more scattered and irregular. Therefore, we adapt and improve the basin-hopping strategy of QPQH [He et al., 2018] by shrinking the radius of the container more flexibly.

The coordinates of all circles are fixed and the container radius is reduced by a factor γ (0 < γ < 1): R = γR₀, where R₀ is the initial container radius and γ is defined as

$\gamma = \alpha + \beta \cdot hops + \frac{1 - \alpha - \beta \cdot hops}{m}\, k$,   (7)

in which α is the initial shrinking scale of the container radius, hops is the number of times the basin-hopping procedure has been run, β is the factor that adjusts the shrinking scale with hops over the iterations, m is the number of generated new layouts, and k varies over 0, 1, 2, ..., m − 1. We then run stochastic item descent to reach a new layout.

If α is too small, all circles converge to the center of the container and the densest packing is broken severely; if α is too large, shrinking the container radius has little effect. If β is too small or too large, the shrinking scale of each basin-hopping step grows too slowly or too quickly over the iterations. Moreover, if m is too small, the probability of generating high-quality new layouts is small; if m is too large, generating m new layouts becomes very slow. The values are chosen empirically: α = 0.4, β = 0.03, and m = 10.

Algorithm 3 Global Search Procedure
Input: The container radius R₀.
Output: A global or local minimum layout.
1: randomly generate an initial layout;
2: run SIDM to obtain an updated layout X;
3: X* ← X;
4: hops ← 0;
5: while U(X*) > 10^{-20} and time limit is not reached do
6:   for k = 0 to 9 do
7:     calculate γ by Eq. (7);
8:     R ← γR₀;
9:     run SIDM on layout X* with radius R to generate a new layout, denoted X_k;
10:    run SIDM on X_k with radius R₀;
11:  end for
12:  if min_k U(X_k) < U(X*) then
13:    X* ← argmin_k U(X_k);
14:  end if
15:  hops ← (hops + 1) mod ⌊(1 − α)/β⌋;
16: end while
17: return current layout X*.

Combining the local search procedure with the basin-hopping procedure, we obtain the global search algorithm, Algorithm 3, which finds a feasible layout in a fixed container. It is initialized with a random layout. We run stochastic item descent to obtain a local minimum layout and then use the basin-hopping procedure to generate 10 new layouts. We continue running SIDM on these packing patterns, and if some packing is better than the current local minimum packing, we update the current packing. The algorithm terminates when a global minimum layout is obtained or the time limit is reached.
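For clarity, Eq. (7) and one round of the basin-hopping procedure of Algorithm 3 can be sketched as below, with sidm standing in for the routine sketched earlier and parameter defaults taken from the empirical values in the text:

```python
def gamma(k, hops, alpha=0.4, beta=0.03, m=10):
    """Shrinking factor of Eq. (7): starts at alpha + beta*hops and grows
    linearly toward 1 as k runs over 0, 1, ..., m - 1."""
    base = alpha + beta * hops
    return base + (1.0 - base) * k / m

def basin_hop(X_star, R0, hops, m=10):
    """One basin-hopping round of Algorithm 3: squeeze the layout into a
    shrunken container, re-optimize, then relax at the original radius,
    and keep the lowest-energy candidate among the m new layouts."""
    candidates = []
    for k in range(m):
        Xk = sidm(X_star.copy(), gamma(k, hops) * R0)   # shrunken container
        candidates.append(sidm(Xk, R0))                 # restore radius R0
    return min(candidates, key=lambda X: total_energy(X.reshape(-1), R0))
```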
To emphasize the key feature of the proposed method, we still denote the overall algorithm as SIDM.

5.3 Complexity Analysis

This subsection compares the time and space complexity of the BFGS algorithm and the local BFGS algorithm. For a fair comparison, we consider one iteration of BFGS, in which all circle items update their positions once, against m iterations of local BFGS, where m is the number of batches, so that all circle positions are likewise updated once.

Each iteration of the BFGS algorithm (simply regard $X^s$ as the layout of all circles) calculates the step length α by Eq. (4), the new layout by Eq. (3), the new gradient, and the Hessian matrix by Eqs. (5) and (6); the corresponding time complexities are $O(n \log(\mathrm{len}/\epsilon))$, $O(n^2)$, $O(n)$, and $O(n^2)$, respectively. Here, len is the length of the real-number interval in the line search, ε is the search precision, and $n \log(\mathrm{len}/\epsilon)$ is the time complexity of the line search algorithm. Thus, the total time complexity is $O(n \log(\mathrm{len}/\epsilon) + n^2)$. The memory used by the BFGS algorithm mainly stores the Hessian matrix, so the space complexity is $O(n^2)$.

The time and space complexity of each batch of the local BFGS algorithm are analogous, namely $O(\frac{n}{m} \log(\mathrm{len}/\epsilon) + (\frac{n}{m})^2)$ and $O((\frac{n}{m})^2)$. For m batches of local BFGS, the time complexity is m times that of a single batch, i.e., $O(n \log(\mathrm{len}/\epsilon) + \frac{n^2}{m})$, and the space complexity is $O(\frac{n^2}{m})$. The time complexities of BFGS and of m batches of local BFGS are dominated by their second terms, $O(n^2)$ and $O(\frac{n^2}{m})$, respectively: the time complexity of BFGS is m times that of the m batches of local BFGS, and the space complexity of BFGS is likewise m times larger. Therefore, we can conclude that SIDM with local BFGS search is more efficient than BFGS search from a complexity analysis point of view.

6 Experimental Results

We present our results on instances of n = 100, 200, 300, ..., 1500. The best-known packing results are maintained on the Packomania website, whereas most ECPP results in the literature are reported for n ≤ 200. The Packomania maintainer, Eckard Specht, also provides results from his program cci for n = 1 to 5000, but unfortunately without running times, machine specifications, or code. To our knowledge, no result has been formally published in the literature for n > 320 due to the exponential growth of computational complexity. The current state-of-the-art results formally published in the literature are from QPQH [He et al., 2018], which is not updated on Packomania. Thus, we compare with QPQH [He et al., 2018] to demonstrate the efficiency of SIDM.

6.1 Experimental Setup

SIDM is programmed in C++ and implemented in the Visual Studio 2017 IDE. All experiments are carried out on a personal computer with a 2.5 GHz CPU and 8 GB RAM. Table 1 lists the key parameters of SIDM.

Table 1: Key parameters of the SIDM algorithm.
Parameter | Description                    | Value
s         | Initial group size             | 100
α         | Initial shrinking factor       | 0.4
β         | Shrinking scale growing factor | 0.03
m         | Number of new layouts          | 10

6.2 Computational Results

Our purpose is to evaluate whether SIDM can find a global minimum layout efficiently using the container radius reported on Packomania as the fixed container radius.

We first compare SIDM and QPQH (the version with fixed container radius) on instances of n = 200, 210, ..., 320. We run both algorithms five times each and report the average running time to reach a feasible pattern in Table 2; Figure 2 shows the same comparison for an intuitive view. The average running times of the two algorithms are close when the number of circles is small (200 to 250), but as the number of circles increases, SIDM behaves more efficiently than QPQH.

Table 2: Comparison on average running time.
n   | R0            | QPQH (s) | SIDM (s)
200 | 15.4632748785 | 1250     | 1668
210 | 15.8792012772 | 2412     | 1945
220 | 16.2253735494 | 1690     | 2047
230 | 16.5964300724 | 865      | 1912
240 | 16.8971658948 | 1960     | 2560
250 | 17.2629622393 | 2697     | 1867
260 | 17.6049551932 | 4617     | 2897
270 | 17.8872656677 | 6712     | 2976
280 | 18.2472267427 | 5478     | 3125
290 | 18.5493750704 | 3782     | 2698
300 | 18.8135833638 | 7153     | 4211
310 | 19.1848594632 | 8274     | 5712
320 | 19.4562307640 | 8397     | 4987
Then, for the 15 instances n = 100, 200, ..., 1500, we randomly place n circles in the container and run the overall SIDM algorithm. We stop the search when a global minimum layout is found or the maximum time limit of 15 hours is reached. For each instance, we run SIDM 10 times to reduce the impact of randomness. The results in Table 3 show that SIDM finds the global minimum layout for every instance except n = 1400. The hit count indicates the number of successful runs out of 10, and the time indicates the average running time of the successful runs.

Figure 2: Comparison on average running time of QPQH and SIDM.

Table 3: Experimental results for n = 100, 200, ..., 1500.
n    | R0            | Hit count | Time (s)
100  | 11.0821497243 | 1/10      | 2562
200  | 15.4632748785 | 8/10      | 1772
300  | 18.8135833638 | 7/10      | 4326
400  | 21.6895717951 | 7/10      | 7921
500  | 24.1329376240 | 6/10      | 9865
600  | 26.4274162694 | 4/10      | 16372
700  | 28.4958443164 | 5/10      | 12369
800  | 30.4212133790 | 3/10      | 15893
900  | 32.2330843545 | 1/10      | 13715
1000 | 33.9571409147 | 1/10      | 21735
1100 | 35.6161932968 | 2/10      | 19816
1200 | 37.1121608416 | 1/10      | 34682
1300 | 38.6047666608 | 2/10      | 28871
1400 | 40.0604065845 | 0/10      | ——
1500 | 41.4126836805 | 1/10      | 41286

The experimental results indicate that, as the number of circles increases, SIDM finds a feasible layout in most cases, and the running time grows roughly linearly: the 2562 s for n = 100 extrapolates linearly to 2562 · 15 = 38430 s, close to the observed 41286 s for n = 1500. By comparison, QPQH cannot output any feasible result for n = 400, 500, ..., 1500 within the time limit.

7 Conclusion

Inspired by the idea of SGD in machine learning, we propose a stochastic item descent method for the large-scale equal circle packing problem (ECPP), which randomly divides the circles into batches and runs BFGS on the corresponding potential energy functions in iterations. To obtain solutions of high quality, we increase the batch size during the iterations. Besides, we improve the basin-hopping strategy by shrinking the radius of the container more flexibly. Experiments demonstrate that the proposed method is efficient for the large-scale equal circle packing problem.

In future work, we will adapt SIDM via binary search to the optimization version of the problem, which minimizes the container radius, and try the SIDM idea on other circle packing problems, such as packing equal or unequal circles in containers of various shapes. We also believe SIDM can be adapted to other classic optimization problems where gradient descent is used, including those arising in the optimization of large-scale machine learning.

References

[Addis et al., 2008] Bernardetta Addis, Marco Locatelli, and Fabio Schoen. Efficiently packing unequal disks in a circle. Operations Research Letters, 36(01):37-42, 2008.

[Akeb et al., 2009] Hakim Akeb, Mhand Hifi, and Rym M'Hallah. A beam search algorithm for the circular packing problem. Computers & Operations Research, 36(5):1513-1528, 2009.

[Akiyama et al., 2003] Jin Akiyama, Rika Mochizuki, Nobuaki Mutoh, and Gisaku Nakamura. Maximin distance for n points in a unit square or a unit circle. In Discrete and Computational Geometry, pages 9-13, 2003.

[Bottou and Bousquet, 2008] Léon Bottou and Olivier Bousquet. The tradeoffs of large scale learning. In NIPS, pages 161-168, 2008.

[Bottou et al., 2018] L. Bottou, F. Curtis, and J. Nocedal. Optimization methods for large-scale machine learning. SIAM Review, 60(2):223-311, 2018.

[Bottou, 2010] Léon Bottou. Large-scale machine learning with stochastic gradient descent. In COMPSTAT, pages 177-186, 2010.
[Chen et al., 2018] Mao Chen, Xiangyang Tang, Ting Song, Zhizhong Zeng, Xicheng Peng, and Sanya Liu. Greedy heuristic algorithm for packing equal circles into a circular container. Computers & Industrial Engineering, 119:114-120, 2018.

[Flores et al., 2016] Juan J. Flores, José Martínez, and Félix Calderón. Evolutionary computation solutions to the circle packing problem. Soft Computing, 20(4):1521-1535, 2016.

[Fodor, 1999] Ferenc Fodor. The densest packing of 19 congruent circles in a circle. Geometriae Dedicata, 74(2):139-145, 1999.

[Fodor, 2000] Ferenc Fodor. The densest packing of 12 congruent circles in a circle. Contributions to Algebra and Geometry, 41(2):401-409, 2000.

[Fodor, 2003] Ferenc Fodor. The densest packing of 13 congruent circles in a circle. Contributions to Algebra and Geometry, 44(2):431-440, 2003.

[Fu et al., 2013] Zhanghua Fu, Wenqi Huang, and Zhipeng Lü. Iterated tabu search for the circular open dimension problem. European Journal of Operational Research, 225(2):236-243, 2013.

[Goldberg, 1971] Michael Goldberg. Packing of 14, 16, 17 and 20 circles in a circle. Mathematics Magazine, 44(3):134-139, 1971.

[Goodfellow et al., 2016] Ian J. Goodfellow, Yoshua Bengio, and Aaron C. Courville. Deep Learning. Adaptive Computation and Machine Learning. MIT Press, 2016.

[Graham et al., 1998] R. L. Graham, B. D. Lubachevsky, K. J. Nurmela, and P. R. J. Östergård. Dense packings of congruent circles in a circle. Discrete Mathematics, 181(1-3):139-154, 1998.

[Grosso et al., 2010] A. Grosso, A. R. Jamali, M. Locatelli, and F. Schoen. Solving the problem of packing equal and unequal circles in a circular container. Journal of Global Optimization, 47(1):63-81, 2010.

[He et al., 2013] Kun He, Danzeng Mo, Tao Ye, and Wenqi Huang. A coarse-to-fine quasi-physical optimization method for solving the circle packing problem with equilibrium constraints. Computers & Industrial Engineering, 66(4):1049-1060, 2013.

[He et al., 2015] Kun He, Menglong Huang, and Chenkai Yang. An action-space-based global optimization algorithm for packing circles into a square container. Computers & Operations Research, 58:67-74, 2015.

[He et al., 2018] Kun He, Hui Ye, Zhengli Wang, and Jingfa Liu. An efficient quasi-physical quasi-human algorithm for packing equal circles in a circular container. Computers & Operations Research, 92:26-36, 2018.

[Huang and Ye, 2011] Wenqi Huang and Tao Ye. Global optimization method for finding dense packings of equal circles in a circle. European Journal of Operational Research, 210(3):474-481, 2011.

[Huang et al., 2001] W. Q. Huang, Y. Li, and R. C. Xu. Local search based on a physical model for solving a circle packing problem. In Proceedings of the 4th Metaheuristics International Conference, pages 455-459, 2001.

[Huang et al., 2003] Wenqi Huang, Yu Li, Bernard Jurkowiak, Chumin Li, and Ruchu Xu. A two-level search strategy for packing unequal circles into a circle container. In Principles and Practice of Constraint Programming, pages 868-872, 2003.

[Kravitz, 1967] Sidney Kravitz. Packing cylinders into cylindrical containers. Mathematics Magazine, 40(2):65-71, 1967.

[LeCun et al., 1998] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998.

[Liu and Nocedal, 1989] Dong C. Liu and Jorge Nocedal. On the limited memory BFGS method for large scale optimization. Mathematical Programming, 45(1-3):503-528, 1989.
[Liu et al., 2016] Jingfa Liu, Kewang Zhang, Yonglei Yao, Yu Xue, and Tinzhao Guan. A heuristic quasi-physical algorithm with coarse and fine adjustment for multi-objective weighted circles packing problem. Computers & Industrial Engineering, 101:416-426, 2016.

[Melissen, 1994] Hans Melissen. Densest packings of eleven congruent circles in a circle. Geometriae Dedicata, 50(1):15-25, 1994.

[Pirl, 1969] Udo Pirl. Der Mindestabstand von n in der Einheitskreisscheibe gelegenen Punkten. Mathematische Nachrichten, 40(1-3):111-124, 1969.

[Reis, 1975] George E. Reis. Dense packing of equal circles within a circle. Mathematics Magazine, 48(1):33-37, 1975.

[Robbins and Monro, 1951] Herbert Robbins and Sutton Monro. A stochastic approximation method. The Annals of Mathematical Statistics, pages 400-407, 1951.

[Wang et al., 2002] Huaiqing Wang, Wenqi Huang, Quan Zhang, and Dongming Xu. An improved algorithm for the packing of unequal circles within a larger containing circle. European Journal of Operational Research, 141(2):440-453, 2002.

[Zeng et al., 2016] Zhizhong Zeng, Xinguo Yu, Kun He, Wenqi Huang, and Zhanghua Fu. Iterated tabu search and variable neighborhood descent for packing unequal circles into a circular container. European Journal of Operational Research, 250:615-627, 2016.

[Zhang and Deng, 2005] De-fu Zhang and An-sheng Deng. An effective hybrid algorithm for the problem of packing circles into a larger containing circle. Computers & Operations Research, 32:1941-1951, 2005.
LLMatDesign: Autonomous Materials Discovery with Large Language Models

Shuyi Jia†, Chao Zhang†, Victor Fung†§
†Computational Science and Engineering, Georgia Institute of Technology, Atlanta, GA, USA
§Corresponding author: [email protected]

Abstract

Discovering new materials can have significant scientific and technological implications but remains a challenging problem today due to the enormity of the chemical space. Recent advances in machine learning have enabled data-driven methods to rapidly screen or generate promising materials, but these methods still depend heavily on very large quantities of training data and often lack the flexibility and chemical understanding often desired in materials discovery. We introduce LLMatDesign, a novel language-based framework for interpretable materials design powered by large language models (LLMs). LLMatDesign utilizes LLM agents to translate human instructions, apply modifications to materials, and evaluate outcomes using provided tools. By incorporating self-reflection on its previous decisions, LLMatDesign adapts rapidly to new tasks and conditions in a zero-shot manner. A systematic evaluation of LLMatDesign on several materials design tasks, in silico, validates LLMatDesign's effectiveness in developing new materials with user-defined target properties in the small data regime. Our framework demonstrates the remarkable potential of autonomous LLM-guided materials discovery in the computational setting and towards self-driving laboratories in the future.

1 Introduction

Discovering novel materials with useful functional properties is a longstanding challenge in materials science due to the vast and diverse composition and structure space these materials can inhabit [1, 2]. Traditional approaches to materials discovery often involve exhaustively screening materials via lab-based experiments or in silico simulations, which can be time-consuming and resource-intensive [3, 4, 5]. Recent advancements have introduced machine learning surrogate models to predict material structures and properties [6, 7], as well as generative modeling techniques to propose novel materials [8, 9, 10, 11, 12, 13, 14]. However, these data-driven methods rely heavily on extensive training datasets, generally derived from density functional theory (DFT) calculations. These methods are less useful in most instances where such data is unavailable, or when only a limited budget exists to perform experiments or high-fidelity simulations. In contrast, a human expert would be far more effective here by being able to draw from domain knowledge and prior experiences, and to reason from limited examples. Therefore, a different materials design paradigm is needed in these situations, in which models should be developed to exhibit proficiencies similar to those of human experts.

Fueled by ever-expanding textual datasets and significant increases in computing power, large language models (LLMs) have witnessed a meteoric rise in capabilities and usage in recent years. More broadly, the remarkable performance of LLMs across diverse tasks they have not been explicitly trained on has sparked a burgeoning interest in developing and utilizing LLM-based agents capable of reasoning, self-reflection, and decision-making [15, 16, 17]. These autonomous agents are typically augmented with tools or action modules, empowering them to go beyond conventional text processing and directly interact with the physical world, such as robotic manipulation [18, 19] and scientific experimentation [20, 21].
As the capabilities of LLMs and LLM-based autonomous agents continue to expand, they are increasingly being recognized for their potential in scientific domains, particularly in chemistry [22]. This surge in interest stems from the fact that the majority of information in chemistry exists as text, aligning closely with the text-centric nature of LLMs [23]. For instance, recent studies have demonstrated the use of LLMs to extract chemical reaction information [24, 25], predict chemical properties [26, 27, 28, 29], and generate crystal structures [30, 31, 32], among many other applications. In particular, chemical research, such as materials discovery, traditionally hinges on human expertise and
LLMatDesign’s ability to interpret human instructions and incorporate design constraints enables rapid adaptation to new conditions, tasks, materials, and target proper- ties via prompt modification—a flexibility that is often very difficult for current materials discovery methods such as those using generative models. More importantly, LLMatDesign’s ability to generate hypothesis, eval- uate outcomes, and self-reflect on past decisions in a closed-loop manner showcases the potential for a fully automated artificial intelligence (AI) agent for materials design in both a computational setting or towards robotic laboratories in the future. 2 LLMatDesignDesign Decision-makingChoices of ModificationExchangeSubstitutionRemovalAdditionStructure RelaxationProperty PredictorML Force Field̂yYESNÔyModification HistorySelf-reflectionSelf-evaluate the suggested modification and its effectiveness•Modification •Reasoning •ReflectionClose to ?yTarget yDesign LoopThis module uses LLMThis module is a toolDFT calculation 2 Results 2.1 LLMatDesign Framework LLMatDesign is a flexible framework powered by an LLM and empowered with the necessary tools to perform materials discovery. The discovery process with LLMatDesign begins by taking the chemical composition and property of a starting material, along with a target property value, as user-provided inputs. If a chemical com- position is specified without an initial structure, LLMatDesign will automatically query the Materials Project [33] database to retrieve the corresponding structure. If multiple candidates match the query, the structure with the lowest formation energy per atom is selected. LLMatDesign then intelligently recommends one of four pos- sible modifications—addition, removal, substitution, or exchange—to the material’s composition and structure to achieve the target value. Specifically, “exchange” refers to swapping two elements within the material, while “substitution” involves replacing one type of element with another. “Removal” means eliminating a specific element from the material. In the case of “addition,” an atom of the suggested element is added to the unit cell of the material, with its position randomly determined. These four choices act as a proxy to physical processes in materials modification, such as doping or creating defects, and additional modification choices can also be readily added or removed as desired within the framework. LLMatDesign Prompt Template (GPT-4o) I have a material and its <property>. <definition of property>. (<chemical composition>, <property value>) Please propose a modification to the material that results in <objective>. You can choose one of the four following modifications: 1. exchange: exchange two elements in the material 2. substitute: substitute one element in the material with another 3. remove: remove an element from the material 4. add: add an element to the material <additional constraints> Your output should be a python dictionary of the following the format: {Hypothesis: $HYPOTHESIS, Modification: [$TYPE, $ELEMENT 1, $ELEMENT 2]}. Here are the requirements: 1. $HYPOTHESIS should be your analysis and reason for choosing a modification 2. $TYPE should be the modification type; one of “exchange”, “substitute”, “remove”, “add” 3. $ELEMENT should be the selected element type to be modified. For “exchange” and “substitute”, two $ELEMENT placeholders are needed. For “remove” and “add”, one $ELEMENT placeholder is needed. <modification history> Figure 2: Prompt template for LLMatDesign with GPT-4o. 
Text placeholders in red angular brackets are specific to the task given to LLMatDesign. Text placeholders in blue angular brackets are optional and can be omitted if not needed. For Gemini-1.0-pro’s prompt template, see Appendix A. Self-reflection Prompt Template After completing the following modification on <previous composition>, we obtained <current composition> and the <property> changed from <previous value> to <current value>. Please write a brief post-action reflection on the modification, explaining how successful it was in achieving the <objective> and the reasons for its success or failure: <hypothesis>, <modification> Figure 3: Prompt template for self-reflection. Text placeholders in red angular brackets are specific to the task given to LLMatDesign. Alongside the proposed modification, LLMatDesign provides a hypothesis explaining why the suggested change could be beneficial. This hypothesis generated by the LLM provides a window into the reasoning behind its choices and provides a degree of interpretability which is not possible with traditional optimization algorithms. 3 Next, LLMatDesign modifies the material based on the given suggestion, relaxes the structure using a machine learning force field (MLFF), and predicts its properties using a machine learning property predictor (MLPP). If the predicted property of the new material does not match the target value within a defined threshold, LLMatDesign then evaluates the effectiveness of the modification through a process called self-reflection where commentary is provided on the success of failure of the chosen modification. After self-reflection, a modification history message is created. This message includes the modified chemical composition, the modification itself, the hypothesis behind the modification, and the self-reflection results. This history is then fed back into LLMatDesign, which enters the next design decision-making phase towards the goal of achieving the target property. The entire process repeats in a loop until termination conditions are met. Optionally, density functional theory (DFT) calculations can be performed on the final material. At the core of the entire workflow, LLMatDesign utilizes an LLM engine or agent which translates user-defined objectives into appropriate Materials Project API calls, drives the design decision-making process, and conducts self-reflection on previous decisions to enhance performance. In this work, we demonstrate the capabilities of LLMatDesign using two state-of-the-art LLMs: GPT-4o [34] and Gemini-1.0-pro [35]. However, the framework is model-agnostic and should function effectively with any capable LLMs. The overall architecture and algorithm of LLMatDesign is depicted in Fig. 1 and Algo. 1 respectively. The modification and self-reflection prompt templates are shown in Fig. 2 and 3 respectively. Algorithm 1 LLMatDesign Algorithm Input: (x0, y0): chemical composition and property of the starting material. ytarget: target property value to achieve. M := ∅: set of history messages, if any. Output: (xi, yi): chemical composition and property of the new material. 
Algorithm 1 LLMatDesign Algorithm
Input: (x_0, y_0): chemical composition and property of the starting material; y_target: target property value to achieve; M := ∅: set of history messages, if any.
Output: (x_i, y_i): chemical composition and property of the new material.
1: for i = 1 : N do                                   ▷ N: maximum number of modifications
2:   s_i, h_i ← LLM(x_{i-1}, y_{i-1}, y_target, M)    ▷ s: modification; h: hypothesis
3:   x̃_i ← perform_modification(x_{i-1}, s_i)
4:   x_i ← MLFF(x̃_i)
5:   y_i ← MLPP(x_i)
6:   if |y_i − y_target| / |y_target| ≤ ε then        ▷ ε: error tolerance
7:     return (x_i, y_i)
8:   end if
9:   r_i ← LLM(x_{i-1}, x_i, y_{i-1}, y_i, s_i, h_i)  ▷ r: self-reflection
10:  m_i ← create_history_message(s_i, h_i, r_i)      ▷ m: history message
11:  M ← M ∪ {m_i}
12: end for

2.2 Evaluation

To evaluate the effectiveness of LLMatDesign, we performed a set of experiments with 10 starting materials randomly selected from the Materials Project [6]. Specifically, we focus on designing materials targeting two material properties and their corresponding objectives:
• Band gap (eV): design a new material with a band gap of 1.4 eV.
• Formation energy per atom (eV/atom): design a new material with the most negative formation energy possible.
The objective of achieving a band gap of 1.4 eV is chosen as an example of designing an ideal photovoltaic material, whose band gap should lie within the range of 1-1.8 eV [36]; the aim of obtaining the most negative formation energy requires LLMatDesign to suggest modifications that could result in more stable materials. For the band gap experiments, we record the average number of modifications taken by LLMatDesign, with a maximum budget of 50 modifications; a 10% error tolerance relative to the target is used as the convergence criterion. For the formation energy experiments, a fixed budget of 50 modifications is used, and both the average and minimum formation energies are recorded. Each experiment is repeated 30 times for each starting material. We present results for two different LLM engines: Gemini-1.0-pro and GPT-4o. For each LLM engine, two variants of experiments, history and historyless, are conducted to evaluate the impact of including the knowledge of prior modification history. All results are compared against a random baseline, in which modifications to materials are randomly selected. The results for band gap and formation energy per atom are shown in Table 1 and Table 2, respectively. Note that self-reflection is included only for GPT-4o and not for Gemini-1.0-pro.
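Algorithm 1 maps onto a short driver loop. The sketch below is a minimal illustration in which llm_propose, relax (the MLFF), predict (the MLPP), and self_reflect are hypothetical stand-ins for the components of Fig. 1, and apply_modification is the sketch given earlier; the 10% tolerance matches the convergence criterion described above.

```python
def llmatdesign_loop(x0, y0, y_target, N=50, eps=0.1):
    """Design loop of Algorithm 1: propose, modify, relax, predict, and
    self-reflect until within eps of the target or out of budget.
    llm_propose/relax/predict/self_reflect are hypothetical helpers."""
    history = []
    x, y = x0, y0
    for _ in range(N):
        mod, hypothesis = llm_propose(x, y, y_target, history)  # LLM call
        x_new = relax(apply_modification(x, mod))   # MLFF relaxation
        y_new = predict(x_new)                      # MLPP property prediction
        if abs(y_new - y_target) / abs(y_target) <= eps:
            return x_new, y_new                     # target reached
        reflection = self_reflect(x, x_new, y, y_new, mod, hypothesis)
        history.append((x_new, mod, hypothesis, reflection))
        x, y = x_new, y_new
    return x, y
```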
Table 1: LLMatDesign's performance in achieving a new material with a target band gap of 1.4 eV. Each experiment is repeated 30 times, and the average number of modifications taken to reach the target value is recorded. Columns give results for Gemini-1.0-pro and GPT-4o with history (H) and without history (HL), and for the random baseline.

Starting Material | Average # of Modifications: Gem-H | Gem-HL | GPT-H | GPT-HL | Rand || Average Final Band Gap (eV): Gem-H | Gem-HL | GPT-H | GPT-HL | Rand
BaV2Ni2O8  | 17.7 | 14.4 | 17.7 | 30.4 | 22.4 || 1.23 | 1.42 | 1.39 | 1.89 | 1.12
CdCu2GeS4  | 11.1 | 13.4 | 3.3  | 9.5  | 28.7 || 1.41 | 1.39 | 1.44 | 1.38 | 1.01
CeAlO3     | 14.3 | 15.1 | 7.4  | 16.9 | 26.7 || 1.42 | 1.39 | 1.41 | 1.68 | 1.21
Co2TiO4    | 8.8  | 13.1 | 5.5  | 1.6  | 29.7 || 1.40 | 1.30 | 1.36 | 1.42 | 1.02
ErNi2Ge2   | 26.8 | 24.8 | 19.3 | 47.6 | 31.8 || 1.18 | 1.26 | 1.36 | 0.43 | 0.90
Ga2O3      | 10.3 | 12.3 | 12.7 | 37.7 | 32.8 || 1.34 | 1.38 | 1.36 | 1.76 | 0.87
Li2CaSiO4  | 15.7 | 20.5 | 14.3 | 29.3 | 27.4 || 1.36 | 1.37 | 1.41 | 1.81 | 1.09
LiSiNO     | 12.4 | 10.4 | 4.1  | 2.8  | 27.4 || 1.38 | 1.39 | 1.39 | 1.50 | 1.09
Na2ZnGeO4  | 13.0 | 15.0 | 11.5 | 49.4 | 22.9 || 1.40 | 1.39 | 1.39 | 2.35 | 1.15
SrTiO3     | 7.2  | 8.8  | 12.0 | 40.6 | 24.3 || 1.42 | 1.41 | 1.45 | 1.64 | 1.11
Avg.       | 13.7 | 14.8 | 10.8 | 26.6 | 27.4 || 1.35 | 1.37 | 1.39 | 1.59 | 1.06

Table 2: LLMatDesign's performance in achieving a new material with as low a formation energy per atom as possible. Each experiment consists of 50 modifications and is repeated 30 times. Column abbreviations are as in Table 1.

Starting Material | Average Formation Energy (eV/atom): Gem-H | Gem-HL | GPT-H | GPT-HL | Rand || Minimum Formation Energy (eV/atom): Gem-H | Gem-HL | GPT-H | GPT-HL | Rand
BaV2Ni2O8  | -0.80 | -0.20 | -2.45 | -2.50 | -0.12 || -2.69 | -2.30 | -2.91 | -2.74 | -1.99
CdCu2GeS4  | -0.19 | 0.11  | -1.05 | -0.61 | 0.29  || -1.31 | -1.59 | -1.61 | -0.72 | -1.37
CeAlO3     | -0.77 | -0.28 | -2.79 | -2.24 | -0.04 || -3.44 | -3.22 | -3.73 | -3.73 | -2.50
Co2TiO4    | -0.39 | 0.03  | -1.57 | -1.49 | 0.00  || -2.64 | -2.08 | -2.48 | -2.10 | -1.80
ErNi2Ge2   | -0.02 | -0.19 | -0.54 | -0.74 | -0.11 || -0.96 | -1.71 | -0.94 | -1.57 | -1.40
Ga2O3      | -0.19 | -0.12 | -1.61 | -2.07 | -0.16 || -2.05 | -1.82 | -3.31 | -3.29 | -1.67
Li2CaSiO4  | -0.77 | -0.41 | -2.30 | -2.69 | -0.20 || -2.94 | -2.60 | -3.13 | -2.98 | -2.27
LiSiNO     | -0.38 | -0.19 | -1.75 | -1.54 | -0.15 || -2.01 | -2.01 | -2.60 | -1.72 | -1.75
Na2ZnGeO4  | -0.79 | -0.25 | -2.62 | -2.52 | 0.05  || -2.48 | -2.34 | -2.87 | -2.55 | -1.85
SrTiO3     | -1.26 | -0.23 | -3.01 | -3.54 | -0.02 || -3.40 | -3.09 | -3.65 | -3.57 | -2.38
Avg.       | -0.56 | -0.17 | -1.97 | -1.99 | -0.05 || -2.39 | -2.28 | -2.72 | -2.50 | -1.90

We observe that GPT-4o with past modification history performs best in achieving the target band gap of 1.4 eV, requiring an average of 10.8 modifications (Table 1). In comparison, Gemini-1.0-pro with history takes an average of 13.7 modifications. Both methods significantly outperform the baseline, which requires 27.4 modifications. Adding modification history to subsequent prompts allows the LLMs to converge to the target more quickly, as both Gemini-1.0-pro and GPT-4o with modification history outperform their historyless counterparts. Notably, the performance gap between the history and historyless variants is smaller for Gemini-1.0-pro than for GPT-4o. From a closer inspection of the modification paths of GPT-4o without history, we find that GPT-4o often alternates between a few of the same modifications until reaching the maximum number of allowed iterations (see Fig. 4). For the two starting materials where GPT-4o without history performs best (Co2TiO4 and SrTiO3), the final materials frequently converge to identical compositions by following the same modification sequence. This indicates a lack of diversity in the newly generated materials when no history is included in LLMatDesign's iterative loop. In addition, GPT-4o with history achieves the best final band gap value, averaging 1.39 eV, followed by Gemini-1.0-pro at 1.35 eV and random at 1.06 eV.

LLMatDesign's superior performance is also apparent when finding new materials with the lowest formation energy per atom (Table 2), consistently outperforming the random baseline. Specifically, the history and historyless variants of GPT-4o achieve the lowest average formation energies, at -1.97 eV/atom and -1.99 eV/atom, respectively. GPT-4o with history also achieves the lowest minimum formation energy per atom, at -2.72 eV/atom. Interestingly, while the minimum formation energies achieved by Gemini-1.0-pro are close to those of GPT-4o, its average formation energies are significantly higher, indicating that it struggles to consistently suggest chemically stable modifications for the materials. Nonetheless, Gemini-1.0-pro still noticeably outperforms the baseline. In Fig. D.1 and D.2, we visualize 20 materials discovered by LLMatDesign for the band gap and formation energy tasks, respectively. These materials are obtained from the first run for each of the 10 starting materials; for the band gap task, the final materials are selected, and for the formation energy task, the materials with the lowest formation energy per atom are chosen.
Figure 4: Average band gaps and formation energies over 50 modifications. The grey horizontal line indicates the target band gap of 1.4 eV. The colored dots on the x-axis indicate the average number of modifications taken by each method to reach the target. For formation energy, the goal is to achieve the lowest possible value.

In Fig. 4, we plot the band gaps and formation energies per atom over 50 modifications, averaged across the 10 starting materials. The target band gap of 1.4 eV is indicated by the grey horizontal line. Both the history and historyless variants of Gemini-1.0-pro and GPT-4o demonstrate quick convergence to the target band gap. However, the GPT-4o historyless variant exhibits zig-zag oscillations in band gap values as modifications increase. This occurs because, without historical information, GPT-4o tends to oscillate between a few of the same moves, causing the band gap to fluctuate without improving. In contrast, the random baseline fails to converge to 1.4 eV within the maximum allowed 50 modifications. For formation energy, our findings indicate that GPT-4o consistently suggests modifications that keep the formation energy low, on average around -2 eV/atom, whereas Gemini-1.0-pro struggles to do so despite obtaining low minimum formation energies. Notably, neither GPT-4o nor Gemini-1.0-pro is able to beat the formation energy of the starting materials, likely because these materials are already at or near their lowest energy states.

Fig. 5 presents heatmaps over the periodic table displaying the element occurrences in the modifications for both the band gap and formation energy tasks, which reveal additional insights into the reasons for the good performance of LLM-driven design. The number of occurrences of each element is collected across all runs and starting materials. In the heatmaps for the random baseline, all elements are chosen at nearly uniform frequencies. This result is to be expected, as the random algorithm samples elements with atomic numbers up to 99 uniformly. Meanwhile, in the heatmaps for the LLM cases, there is a clear distribution towards certain elements, mostly focusing on elements within the first four rows of the periodic table and avoiding noble metals and actinides. Both LLM models share similar distributions, such as a preference for elements like oxygen; however, Gemini-1.0-pro's suggestions appear to exhibit greater element diversity than GPT-4o's, including some of the transition metals.

Figure 5: Heatmaps of element frequencies in the band gap (BG) and formation energy (FE) tasks. The periodic table is color-coded to indicate the frequency of each element's occurrence in all modified materials (both intermediate and final) across all runs and starting materials. Darker colors represent higher frequencies, while lighter colors denote lower frequencies or absence. The visualization employs log-scaling to effectively highlight the distribution and prevalence of elements.
With Gemini-1.0-pro, we also occasionally observe suggested modifications that include noble gases, which is not chemically feasible due to their inert nature; with GPT-4o, this does not occur (see Fig. C.1). Regardless, both LLM models consistently suggest chemically viable elements for modification, akin to how a human expert would make similar choices based on chemical intuition or past examples in the literature.

In Fig. 6, we present an example of the full process whereby LLMatDesign successfully completes a design task to achieve a band gap of 1.40 eV. In the first step, LLMatDesign suggests modifying the starting material CdCu2GeS4 by substituting S with Se, given the hypothesis that increasing the atomic radius and changing the electronegativity can alter the band gap. Upon modification, the new material CdCu2GeSe4 was found to have an even smaller band gap, which is contrary to the desired effect, as noted by the reflection. This history is included in the second step of modification, in which LLMatDesign suggests a subsequent substitution of Ge with Si, which increases the gap. The reflection notes that a partial success is achieved but is still not enough to reach the target, whereupon a third step is taken. In the third step, Cu is substituted with Zn, which finally achieves the desired band gap within an acceptable threshold, ending the process. From this example, we can observe that the LLM is successful at 1) recognizing differences in element properties (i.e., Se having a larger atomic radius than S), 2) highlighting these properties as being relevant to the design task (i.e., atomic radius, electronegativity, and electronic configuration affecting the band gap), and 3) recognizing in the reflection whether a modification is successful and the degree of its success. We will show in the subsequent section that it is this reasoning and reflection process which has a significant impact on its success.

In the final step of the design process, a DFT calculation is performed to validate the material's properties, which were obtained from an ML surrogate model. Here, we use DFT to compute the formation energy of the minimum-energy structures from all 30 runs for each of the 10 starting materials, obtained with GPT-4o and with random sampling. On average, structures generated by LLMatDesign using GPT-4o with history achieved a formation energy of -2.32 eV/atom, with a job completion rate of 73.3%. In comparison, the random baseline obtained an average formation energy of -1.51 eV/atom, with a significantly lower job completion rate of 40.0% (see Fig. E.1). The lower completion rate is likely due to a much higher likelihood of unreasonable compositions and configurations from random sampling, which cause the DFT calculations to fail. These results also follow the same relative trends as in Table 2. In a situation where DFT is used directly in each iteration rather than a surrogate model, this step can be omitted, or the materials can potentially be validated further with experiments.
[Figure 5 comprises six periodic-table heatmap panels: BG-History-Gemini-1.0-pro, BG-History-GPT-4o, BG-Random, FE-History-Gemini-1.0-pro, FE-History-GPT-4o, and FE-Random.]

Figure 6: Example of LLMatDesign with GPT-4o on the task of modifying the starting material CdCu2GeS4 to achieve a band gap of 1.40 eV. The starting material is retrieved from the Materials Project with chemical formula Cd2Cu4Ge2S8. The task reads: "Given CdCu2GeS4, which has a band gap of 0.388 eV, find a new material with a target band gap of 1.4 eV."

Step 1 (Cd2Cu4Ge2S8, 0.39 eV → [Substitute, S, Se] → Cd2Cu4Ge2Se8, 0.01 eV). Hypothesis: substituting sulfur (S) with selenium (Se) is likely to increase the band gap of the material; selenium has a larger atomic radius and different electronegativity compared to sulfur, which can affect the electronic structure and increase the band gap. Post-modification reflection: the modification was unsuccessful in achieving the target band gap value of 1.4 eV, as the band gap decreased significantly to 0.01 eV instead of increasing, contrary to our hypothesis.

Step 2 (Cd2Cu4Ge2Se8, 0.01 eV → [Substitute, Ge, Si] → Cd2Cu4Si2Se8, 0.40 eV). Hypothesis: substituting germanium (Ge) with silicon (Si) is likely to increase the band gap; silicon has a smaller atomic radius and different electronegativity compared to germanium, which can affect the electronic structure and potentially increase the band gap. Post-modification reflection: the band gap increased from 0.01 eV to 0.39 eV, indicating a partial success, but it fell short of the target value of 1.4 eV, suggesting that additional modifications or different approaches may be necessary.

Step 3 (Cd2Cu4Si2Se8, 0.40 eV → [Substitute, Cu, Zn] → Cd2Zn4Si2Se8, 1.42 eV). Hypothesis: substituting copper (Cu) with zinc (Zn) is likely to increase the band gap; zinc has a different electronic configuration and electronegativity compared to copper, which can affect the electronic structure and potentially increase the band gap. Post-modification reflection: the modification was highly successful, increasing the band gap from 0.39 eV to 1.42 eV, closely aligning with our hypothesis. Final answer: "I have found a new material, Cd2Zn4Si2Se8, which has a band gap of 1.42 eV, closely matching the target value of 1.4 eV."
Selenium has a larger atomic radius and different electronegativity compared to sulfur, which can affect the electronic structure and increase the band gap.Post-modification Reflection: the modification of substituting sulfur (S) with selenium (Se) in Cd2Cu4Ge2S8 to obtain Cd2Cu4Ge2Se8 was unsuccessful in achieving the target band gap value of 1.4 eV, as the band gap decreased significantly to 0.01 eV instead of increasing, contrary to our hypothesis.Post-modification Reflection: The modification of substituting Ge with Si in Cd2Cu4Ge2Se8 to form Cd2Cu4Si2Se8 resulted in an increase in the band gap from 0.01 eV to 0.39 eV, which indicates a partial success in increasing the band gap, but it fell short of the target value of 1.4 eV, suggesting that additional modifications or different approaches may be necessary to achieve the desired band gap.Hypothesis: Substituting germanium (Ge) with silicon (Si) is likely to increase the band gap of the material. Silicon has a smaller atomic radius and different electronegativity compared to germanium, which can affect the electronic structure and potentially increase the band gap. 2.3 Self-reflection To quantify the effect of self-reflection on the performance of LLMatDesign, we conduct band gap experiments using GPT-4o and the same set of 10 starting materials, where we aim to find a new material with a target band gap of 1.4 eV. Like with the history variant, past modifications are incorporated into the prompting loop. However, in this case, self-reflection is omitted completely. In other words, the history message only includes the modification and hypothesis pairs (see Algo. 1). The results from these experiments are shown in Table 3. As previously discussed, GPT-4o with history achieves an average of 10.8 modifications, while GPT-4o without history requires 26.6 modifications. In comparison, GPT-4o with history but without self-reflection now needs an average of 23.4 modifications, which is over twice as many compared to including self-reflection. These results suggest that self-reflection, which involves the LLM evaluating and reasoning through its previous design choices, plays a crucial role in enhancing the efficiency of LLMatDesign in achieving the given objective. Table 3: LLMatDesign with and without self-reflection. GPT-4o is used as the LLM engine. Starting Material History Historyless History without reflection Average # of Modifications BaV2Ni2O8 CdCu2GeS4 CeAlO3 Co2TiO4 ErNi2Ge2 Ga2O3 Li2CaSiO4 LiSiNO Na2ZnGeO4 SrTiO3 Avg. 17.7 3.3 7.4 5.5 19.3 12.7 14.3 4.1 11.5 12.0 10.8 30.4 9.5 16.9 1.6 47.6 37.7 29.3 2.8 49.4 40.6 26.6 45.1 5.0 27.6 7.9 31.0 13.1 31.4 5.1 31.5 36.7 23.4 2.4 Prompting Well-crafted prompts are essential for eliciting accurate and useful responses from LLMs. While the base prompt template, shown in Fig. 2, works as intended, we subsequently show that optimizing this prompt can improve the performance of LLMatDesign even further. To this end, we develop two additional prompt templates in a non-exhaustive demonstration. The first template, termed GPT-4o Refined, is an enhancement of the original prompt (Fig. 2) created by GPT-4o itself. This refinement includes rephrasing and reformatting parts of the original prompt and appending the following sentence: “Take a deep breath and work on this problem step- by-step. Your thoughtful and detailed analysis is highly appreciated.” The second template, named Persona, mirrors the original prompt but incorporates the persona of a materials specialist. 
2.4 Prompting

Well-crafted prompts are essential for eliciting accurate and useful responses from LLMs. While the base prompt template, shown in Fig. 2, works as intended, we subsequently show that optimizing this prompt can improve the performance of LLMatDesign even further. To this end, we develop two additional prompt templates in a non-exhaustive demonstration. The first template, termed GPT-4o Refined, is an enhancement of the original prompt (Fig. 2) created by GPT-4o itself. This refinement includes rephrasing and reformatting parts of the original prompt and appending the following sentence: "Take a deep breath and work on this problem step-by-step. Your thoughtful and detailed analysis is highly appreciated." The second template, named Persona, mirrors the original prompt but incorporates the persona of a materials specialist. Specifically, it begins with a declaration that the LLM is a materials design expert working on developing new materials with specific properties. Detailed descriptions of these prompt templates are provided in Appendix A.

We conduct the same experiments on the band gap task using GPT-4o as the LLM engine for LLMatDesign across all 10 starting materials. The results, shown in Table 4, indicate that both the GPT-4o Refined and Persona prompt templates outperform GPT-4o with history, with the GPT-4o Refined template achieving the best performance, requiring an average of only 8.69 modifications to complete the task. The improvement over the original prompt template indicates that careful prompt optimization can positively enhance the efficiency and accuracy of LLM-directed materials discovery frameworks, and that this process can even be performed by the LLM itself. This is a particularly intriguing discovery, as it hints at an unprecedented level of autonomy enabled by LLMs, whereby the prompts and instructions in the framework can be continuously tuned in an automated manner with minimal human intervention.

Table 4: LLMatDesign with different prompts (average number of modifications). GPT-4o is used as the LLM engine.

Starting Material    History    GPT-4o Refined    Persona
BaV2Ni2O8            17.7       13.4              9.6
CdCu2GeS4            3.3        3.1               5.3
CeAlO3               7.4        7.2               8.7
Co2TiO4              5.5        8.9               11.9
ErNi2Ge2             19.3       11.9              11.7
Ga2O3                12.7       8.4               8.3
Li2CaSiO4            14.3       11.6              11.9
LiSiNO               4.1        5.6               1.0
Na2ZnGeO4            11.5       6.9               8.8
SrTiO3               12.0       9.9               13.9
Avg.                 10.8       8.69              9.11

2.5 Constrained Materials Design

Materials discovery with constraints ensures scientific, economic, and political viability. For instance, avoiding the use of rare earth metals can reduce dependency on limited and expensive resources, mitigate supply chain risks, and align with environmental and ethical standards. To this end, we evaluate LLMatDesign under three constraints limiting its action space. Experiments are conducted on the band gap task using the starting material SrTiO3 with GPT-4o to test whether these constraints are obeyed. As before, each experiment is repeated 30 times, and the percentage of modifications adhering strictly to the constraints is calculated across all runs. As shown in Table 5, LLMatDesign perfectly adheres to the constraints "do not use Ba or Ca" and "do not modify Sr," achieving 100% compliant modifications. For the constraint "do not have more than 4 distinct elements," only 4 out of 509 modifications by LLMatDesign include 5 distinct elements, resulting in a high compliance rate of 99.02%. These results demonstrate LLMatDesign's robust capability to adhere to predefined constraints expressed in natural language, an advantage unique to LLM-driven design; a sketch of such a compliance check follows Table 5.

Table 5: LLMatDesign with different constraints on SrTiO3.

Constraint                                      % compliant modifications
Do not use Ba or Ca                             100
Do not modify Sr                                100
Do not have more than 4 distinct elements       99.02
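As an illustration of how such compliance percentages could be computed, the snippet below checks the distinct-element constraint over a list of modified compositions. The helper name and the use of pymatgen's Composition parser are our assumptions, not code from the paper.

```python
from pymatgen.core import Composition  # pymatgen is cited by the paper [45]

def compliance_rate(formulas, max_distinct=4):
    """Percentage of compositions obeying the distinct-element constraint."""
    ok = sum(1 for f in formulas if len(Composition(f).elements) <= max_distinct)
    return 100.0 * ok / len(formulas)

# Toy example: the third modified material violates the constraint.
print(compliance_rate(["SrTiO3", "SrTiZrO3", "SrBaTiZrNbO3"]))  # ~66.7
```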
2.6 Further Discussion

Through extensive experiments, we find LLMatDesign consistently outperforms baselines by a significant margin, demonstrating the viability of using LLM-based autonomous agents for materials discovery tasks under a limited budget. While the random baseline uniformly samples from a set of elements for modification (see Fig. 5), LLMatDesign, whether utilizing GPT-4o or Gemini-1.0-pro, exhibits inherent chemical knowledge, enabling it to provide chemically meaningful suggestions. Furthermore, GPT-4o accurately recognizes periodic trends such as atomic radius and electronegativity in the hypotheses and self-reflections that guide its decisions. In contrast, Gemini-1.0-pro is more prone to errors in this regard, likely due to it being a less robust LLM. Further experiments also show the critical role of self-reflection in the performance of the LLM. This indicates that by reviewing and learning from its previous decisions, LLMatDesign can refine its future suggestions more effectively. This iterative learning process helps the model better understand the implications of its modifications, leading to quicker convergence. In general, it is evident that there are more complex underpinnings behind the remarkable effectiveness of LLM-driven design than simply predicting the most likely outcomes.

This work also demonstrates the lower-bound capabilities of LLM-based design, which is performed without further fine-tuning in a zero-shot manner. A natural extension of this approach would be to further train LLMs on chemical and materials knowledge, such as that obtained from literature articles. In the future, it would be highly desirable for a chemically fine-tuned LLM to provide more insightful hypotheses and explanations, and even to refer to specific references of prior published experiments to support them. These capabilities can potentially be within reach given the growing prevalence of powerful open-source LLMs and parameter-efficient fine-tuning.

In the current examples, LLMatDesign arrives at new materials designs from a limited set of modifications on the composition of a material. Nonetheless, this framework is general and can include more complex modifications which act not only on the composition space but also on the structure space. Future work in this direction will focus on incorporating structural information when describing the material being modified, and on suggesting modifications which act directly on the positions and lattice of the crystal structure. To this end, recent advances in multimodal LLMs can be applied here, where the atomic structure is treated as an additional modality to be encoded alongside the text modality.

3 Conclusion

In this work, we present LLMatDesign, a novel materials design framework powered by state-of-the-art LLMs that works directly with user-defined design requirements and constraints in natural language. It integrates computational tools for structure relaxation and property evaluation, incorporates internal chemical knowledge, and learns from previous iterations to function as an automated materials design framework with high efficiency. Additionally, LLMatDesign quickly adapts to different tasks, target properties, and design constraints by simply modifying the prompt. In our experiments, LLMatDesign consistently outperforms the baseline, demonstrating the effectiveness of the framework in developing new materials. Our work highlights the potential for fully automated AI-driven materials discovery that can be seamlessly integrated into autonomous laboratories in the future.

4 Methods

4.1 Large Language Models

Large language models (LLMs) are a class of machine learning models built on the transformer architecture [37]. By training on vast amounts of text data, these models can understand and generate text in a human-like manner. In this work, GPT-4o [34] refers to OpenAI's gpt-4o model, which has a context length of 128K and a knowledge cutoff date of October 2023. Gemini-1.0-pro [35] refers to Google's LLM of the same name, featuring a context length of 32K.
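For concreteness, a single design-loop query to GPT-4o via OpenAI's Python client might look like the minimal sketch below. The prompt contents would come from the template in Fig. 2; everything else here (variable names, omission of retries and output validation) is our illustrative assumption rather than the paper's code.

```python
from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

prompt = "I have a material and its band gap. ..."  # filled-in template from Fig. 2

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)  # hypothesis + suggested modification
```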
4.2 Machine Learning Force Field

Machine learning force fields (MLFFs) represent a significant advancement in computational chemistry and materials science. By utilizing state-of-the-art machine learning models and training on extensive datasets of atomic structures with energies, forces, and stresses, MLFFs can achieve high accuracy in predicting these properties, often rivaling ab initio methods such as density functional theory (DFT) [38]. More importantly, MLFFs provide these high-accuracy predictions with unprecedented computational efficiency, enabling the simulation of larger systems and longer timescales. In this study, we train a TorchMD-Net model [39] using the MatDeepLearn framework [40, 41]. The training dataset, curated from the Materials Project [6], comprises 187,687 crystal structures with associated energies, forces, and stresses. The model is trained for 400 epochs on a single Nvidia A100 80GB GPU.

4.3 Machine Learning Property Predictor

Similar to machine learning force fields (MLFFs), machine learning property predictors (MLPPs) leverage advanced machine learning models trained on large datasets to make fast and accurate predictions for specific target properties. In this study, we train TorchMD-Net models to predict two separate properties: band gap and formation energy per atom. The datasets used are the mp_gap and mp_form datasets from the MatBench benchmark [42], containing 106,113 and 132,752 structures from the Materials Project [6], respectively. Each model is trained for 200 epochs on a single Nvidia A100 80GB GPU.

4.4 Modification of Material

Once LLMatDesign suggests a modification to achieve the user's target objective, the material is modified accordingly. Specifically, as illustrated in Fig. 1, there are four types of modifications: exchange, substitute, remove, and add. Each modification is applied directly to an ase.Atoms object representing the material. For example, given the modification ['exchange', 'Sr', 'Ti'], all Sr atoms in the material are replaced with Ti atoms and vice versa. After applying the modification, the structure undergoes relaxation using a machine learning force field (MLFF).
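The exchange and substitute cases map naturally onto chemical-symbol manipulation in ASE. The helper below is a minimal sketch of this step under our own assumptions (the paper does not show its implementation); the remove and add cases would additionally require choosing atom indices and positions, which are omitted here.

```python
from ase import Atoms

def apply_modification(atoms: Atoms, mod: list) -> Atoms:
    """Apply an ['exchange'|'substitute', elem_a, elem_b] action in place."""
    kind, a, b = mod
    symbols = atoms.get_chemical_symbols()
    if kind == "exchange":       # swap all occurrences of the two elements
        symbols = [b if s == a else (a if s == b else s) for s in symbols]
    elif kind == "substitute":   # replace every atom of element a with b
        symbols = [b if s == a else s for s in symbols]
    atoms.set_chemical_symbols(symbols)
    return atoms                 # MLFF relaxation would follow this step

# Example: swap Sr and Ti in a toy SrTiO3 cell (positions are placeholders).
atoms = Atoms("SrTiO3",
              positions=[[0, 0, 0], [1, 1, 1], [2, 0, 0], [0, 2, 0], [0, 0, 2]])
apply_modification(atoms, ["exchange", "Sr", "Ti"])
```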
4.5 DFT Calculations

The DFT calculations were performed using the Vienna Ab Initio Simulation Package (VASP) [43, 44]. All calculations followed the settings specified by the "MPRelaxSet" in the Pymatgen library [45], as used in the Materials Project.

5 Data Availability

The authors declare that the data, materials and code supporting the results reported in this study are available upon the publication of this manuscript.

6 Acknowledgements

We thank Lingkai Kong and Rui Feng for helpful discussions. This research used resources of the National Energy Research Scientific Computing Center (NERSC), a U.S. Department of Energy Office of Science User Facility located at Lawrence Berkeley National Laboratory, operated under Contract No. DE-AC02-05CH11231 using NERSC award BES-ERCAP0022842.

References

[1] Davies, D. W. et al. Computational screening of all stoichiometric inorganic materials. Chem 1, 617–627 (2016).
[2] Oganov, A. R., Pickard, C. J., Zhu, Q. & Needs, R. J. Structure prediction drives materials discovery. Nature Reviews Materials 4, 331–348 (2019).
[3] Liu, Y., Zhao, T., Ju, W. & Shi, S. Materials discovery and design using machine learning. Journal of Materiomics 3, 159–177 (2017).
[4] Hautier, G., Jain, A. & Ong, S. P. From the computer to the laboratory: materials discovery and design using first-principles calculations. Journal of Materials Science 47, 7317–7340 (2012).
[5] Pyzer-Knapp, E. O., Suh, C., Gómez-Bombarelli, R., Aguilera-Iparraguirre, J. & Aspuru-Guzik, A. What is high-throughput virtual screening? A perspective from organic materials discovery. Annual Review of Materials Research 45, 195–216 (2015).
[6] Chen, C. & Ong, S. P. A universal graph deep learning interatomic potential for the periodic table. Nature Computational Science 2, 718–728 (2022).
[7] Merchant, A. et al. Scaling deep learning for materials discovery. Nature 624, 80–85 (2023).
[8] Hoffmann, J. et al. Data-driven approach to encoding and decoding 3-d crystal structures. arXiv preprint arXiv:1909.00949 (2019).
[9] Court, C. J., Yildirim, B., Jain, A. & Cole, J. M. 3-d inorganic crystal structure generation and property prediction via representation learning. Journal of Chemical Information and Modeling 60, 4518–4535 (2020).
[10] Xie, T., Fu, X., Ganea, O.-E., Barzilay, R. & Jaakkola, T. Crystal diffusion variational autoencoder for periodic material generation. arXiv preprint arXiv:2110.06197 (2021).
[11] Long, T. et al. Constrained crystals deep convolutional generative adversarial network for the inverse design of crystal structures. npj Computational Materials 7, 66 (2021).
[12] Ren, Z. et al. An invertible crystallographic representation for general inverse design of inorganic crystals with targeted properties. Matter 5, 314–335 (2022).
[13] Fung, V. et al. Atomic structure generation from reconstructing structural fingerprints. Machine Learning: Science and Technology 3, 045018 (2022).
[14] Zeni, C. et al. Mattergen: a generative model for inorganic materials design. arXiv preprint arXiv:2312.03687 (2023).
[15] Wei, J. et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems 35, 24824–24837 (2022).
[16] Huang, J. & Chang, K. C.-C. Towards reasoning in large language models: A survey. arXiv preprint arXiv:2212.10403 (2022).
[17] Li, S. et al. Pre-trained language models for interactive decision-making. Advances in Neural Information Processing Systems 35, 31199–31212 (2022).
[18] Ahn, M. et al. Do as I can, not as I say: Grounding language in robotic affordances. arXiv preprint arXiv:2204.01691 (2022).
[19] Huang, W. et al. Voxposer: Composable 3d value maps for robotic manipulation with language models. arXiv preprint arXiv:2307.05973 (2023).
[20] Boiko, D. A., MacKnight, R., Kline, B. & Gomes, G. Autonomous chemical research with large language models. Nature 624, 570–578 (2023).
[21] Bran, A. M. et al. Chemcrow: Augmenting large-language models with chemistry tools. arXiv preprint arXiv:2304.05376 (2023).
[22] AI4Science, M. R. & Quantum, M. A. The impact of large language models on scientific discovery: a preliminary study using GPT-4. arXiv preprint arXiv:2311.07361 (2023).
[23] Mirza, A. et al. Are large language models superhuman chemists? arXiv preprint arXiv:2404.01475 (2024).
[24] Fan, V. et al. Openchemie: An information extraction toolkit for chemistry literature. arXiv preprint arXiv:2404.01462 (2024).
[25] Ai, Q., Meng, F., Shi, J., Pelkie, B. & Coley, C. W. Extracting structured data from organic synthesis procedures using a fine-tuned large language model. ChemRxiv preprint 10.26434/chemrxiv-2024-979fz (2024).
[26] Zhong, Z., Zhou, K. & Mottin, D. Benchmarking large language models for molecule prediction tasks. arXiv preprint arXiv:2403.05075 (2024).
[27] Xie, Z. et al. Fine-tuning GPT-3 for machine learning electronic and functional properties of organic molecules. Chemical Science 15, 500–510 (2024).
[28] Jablonka, K. M., Schwaller, P., Ortega-Guerrero, A. & Smit, B. Leveraging large language models for predictive chemistry. Nature Machine Intelligence 1–9 (2024).
[29] Ock, J., Guntuboina, C. & Barati Farimani, A. Catalyst energy prediction with catberta: Unveiling feature exploration strategies through large language models. ACS Catalysis 13, 16032–16044 (2023).
[30] Flam-Shepherd, D. & Aspuru-Guzik, A. Language models can generate molecules, materials, and protein binding sites directly in three dimensions as xyz, cif, and pdb files. arXiv preprint arXiv:2305.05708 (2023).
[31] Antunes, L. M., Butler, K. T. & Grau-Crespo, R. Crystal structure generation with autoregressive large language modeling. arXiv preprint arXiv:2307.04340 (2023).
[32] Gruver, N. et al. Fine-tuned language models generate stable inorganic materials as text. arXiv preprint arXiv:2402.04379 (2024).
[33] Jain, A. et al. Commentary: The materials project: A materials genome approach to accelerating materials innovation. APL Materials 1 (2013).
[34] Achiam, J. et al. GPT-4 technical report. arXiv preprint arXiv:2303.08774 (2023).
[35] Team, G. et al. Gemini: a family of highly capable multimodal models. arXiv preprint arXiv:2312.11805 (2023).
[36] Sutherland, B. R. Solar materials find their band gap. Joule 4, 984–985 (2020).
[37] Vaswani, A. et al. Attention is all you need. Advances in Neural Information Processing Systems 30 (2017).
[38] Ko, T. W. & Ong, S. P. Recent advances and outstanding challenges for machine learning interatomic potentials. Nature Computational Science 3, 998–1000 (2023).
[39] Thölke, P. & De Fabritiis, G. Torchmd-net: Equivariant transformers for neural network based molecular potentials. arXiv preprint arXiv:2202.02541 (2022).
[40] Fung, V., Zhang, J., Juarez, E. & Sumpter, B. G. Benchmarking graph neural networks for materials chemistry. npj Computational Materials 7, 84 (2021).
[41] Jia, S. et al. Derivative-based pre-training of graph neural networks for materials property predictions. Digital Discovery 3, 586–593 (2024).
[42] Dunn, A., Wang, Q., Ganose, A., Dopp, D. & Jain, A. Benchmarking materials property prediction methods: the matbench test set and automatminer reference algorithm. npj Computational Materials 6, 138 (2020).
[43] Kresse, G. & Furthmüller, J. Efficient iterative schemes for ab initio total-energy calculations using a plane-wave basis set. Physical Review B 54, 11169 (1996).
[44] Kresse, G. & Furthmüller, J. Efficiency of ab-initio total energy calculations for metals and semiconductors using a plane-wave basis set. Computational Materials Science 6, 15–50 (1996).
[45] Ong, S. P. et al. Python materials genomics (pymatgen): A robust, open-source python library for materials analysis. Computational Materials Science 68, 314–319 (2013).

Supplementary Information

A Prompt Templates for LLMatDesign

A slightly different prompt template from that of Fig. 2 is designed for Gemini-1.0-pro, due to its inconsistency in generating standardized output.

LLMatDesign Prompt Template (Gemini-1.0-pro)

I have a material and its <property>. <definition of property>.
(<chemical composition>, <property value>)
You will be given a starting material to be modified. Try to achieve <objective>.
Make an informed choice of modification based on the given material and past modifications and property values obtained after those modifications. Output a list for the suggested modification, and a string of the reason why you think it is a good modification to take to achieve <objective>. Make sure the modification is physically meaningful.
Material to be modified: <chemical composition>
Current property value: <property value>
<modification history>
Available modifications:
1. exchange: exchange two elements in the material
2. substitute: substitute one element in the material with another
3. remove: remove an element from the material
4. add: add an element to the material
Example output format:
1. ["exchange", "O", "N"], "some reason here"
2. ["substitute", "Ti", "Fe"], "some reason here"
3. ["add", "O"], "some reason here"
4. ["remove", "O"], "some reason here"

Figure A.1: Prompt template for LLMatDesign with Gemini-1.0-pro. Text placeholders in red angular brackets are specific to the task given to LLMatDesign. Text placeholders in blue angular brackets are optional and can be omitted if not needed.
GPT-4o Refined Prompt Template for LLMatDesign

I have a material with a known <property>. <definition of property>.
Material information:
• Chemical formula: <chemical composition>
• <property>: <property value>
Objective: Propose a modification to this material to achieve <objective>.
You can choose one of the following modification types:
1. exchange: exchange two elements in the material
2. substitute: substitute one element in the material with another
3. remove: remove an element from the material
4. add: add an element to the material
Your response should be a Python dictionary in the following format:
``` {Hypothesis: $HYPOTHESIS, Modification: [$TYPE, $ELEMENT_1, $ELEMENT_2]} ```
Requirements:
1. $HYPOTHESIS: Provide a detailed analysis and rationale for your proposed modification.
2. $TYPE: Specify the type of modification ("exchange", "substitute", "remove", "add").
3. $ELEMENT: Identify the element(s) involved in the modification. For "exchange" and "substitute", include two elements ($ELEMENT_1 and $ELEMENT_2). For "remove" and "add", include one element ($ELEMENT_1).
<modification history>
Take a deep breath and work on this problem step-by-step. Your thoughtful and detailed analysis is highly appreciated.

Figure A.2: GPT-4o refined prompt template for LLMatDesign. Text placeholders in red angular brackets are specific to the task given to LLMatDesign. Text placeholders in blue angular brackets are optional and can be omitted if not needed.
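Since the templates request a loosely Python-dictionary-shaped reply rather than strict JSON, a tolerant parser is needed on the framework side. The regex-based helper below is our own illustrative sketch of that step, not code from the paper.

```python
import re

def parse_design_reply(text: str):
    """Extract (hypothesis, modification) from a '{Hypothesis: ..., Modification: [...]}' reply."""
    hyp = re.search(r"Hypothesis:\s*(.*?),?\s*Modification:", text, re.S)
    mod = re.search(r"Modification:\s*\[([^\]]*)\]", text)
    hypothesis = hyp.group(1).strip() if hyp else None
    modification = ([t.strip().strip("'\"") for t in mod.group(1).split(",")]
                    if mod else None)
    return hypothesis, modification

reply = '{Hypothesis: Zn widens the gap relative to Cu, Modification: ["substitute", "Cu", "Zn"]}'
print(parse_design_reply(reply))
# ('Zn widens the gap relative to Cu', ['substitute', 'Cu', 'Zn'])
```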
Persona Prompt Template for LLMatDesign

You are a materials design expert working on the development of new materials with specific properties. You will be given a composition (chemical formula) and its corresponding <property>. You will be asked to propose a modification to the material to achieve a target <property>.
Material information:
• Chemical formula: <chemical composition>
• <property>: <property value>
Objective: Propose a modification to this material to achieve <objective>.
You can choose one of the following modification types:
1. exchange: exchange two elements in the material
2. substitute: substitute one element in the material with another
3. remove: remove an element from the material
4. add: add an element to the material
Your response should be a Python dictionary in the following format:
``` {Hypothesis: $HYPOTHESIS, Modification: [$TYPE, $ELEMENT_1, $ELEMENT_2]} ```
Requirements:
1. $HYPOTHESIS: Provide a detailed analysis and rationale for your proposed modification.
2. $TYPE: Specify the type of modification ("exchange", "substitute", "remove", "add").
3. $ELEMENT: Identify the element(s) involved in the modification. For "exchange" and "substitute", include two elements ($ELEMENT_1 and $ELEMENT_2). For "remove" and "add", include one element ($ELEMENT_1).
<modification history>
Take a deep breath and work on this problem step-by-step. Your thoughtful and detailed analysis is highly appreciated.

Figure A.3: Prompt template with materials design expert persona for LLMatDesign. Text placeholders in red angular brackets are specific to the task given to LLMatDesign. Text placeholders in blue angular brackets are optional and can be omitted if not needed.

B Convergence Plots

Figure B.1: Average band gaps over 50 modifications for all 10 starting materials using GPT-4o. The grey horizontal line indicates the target band gap of 1.4 eV. The colored dots on the x-axis indicate the average number of modifications taken for each method to reach the target.

Figure B.2: Average band gaps over 50 modifications for all 10 starting materials using Gemini-1.0-pro. The grey horizontal line indicates the target band gap of 1.4 eV. The colored dots on the x-axis indicate the average number of modifications taken for each method to reach the target.

[Figures B.1 and B.2: ten panels each (SrTiO3, Ga2O3, BaV2Ni2O8, Na2ZnGeO4, ErNi2Ge2, CeAlO3, Co2TiO4, LiSiNO, Li2CaSiO4, CdCu2GeS4), plotting band gap (eV) against number of modifications for the History, Historyless, and Random methods.]

Figure B.3: Average formation energies over 50 modifications for all 10 starting materials using GPT-4o. The goal is to achieve the lowest possible formation energy per atom.

Figure B.4: Average formation energies over 50 modifications for all 10 starting materials using Gemini-1.0-pro. The goal is to achieve the lowest possible formation energy per atom.
[Figures B.3 and B.4: ten panels each, plotting formation energy (eV/atom) against number of modifications for the History, Historyless, and Random methods.]

C Heatmaps

[Figure C.1 panels: BG: Gemini-1.0-pro without history; BG: GPT-4o without history; FE: Gemini-1.0-pro without history; FE: GPT-4o without history.]

Figure C.1: Heatmaps of element frequencies in band gap (BG) and formation energy (FE) tasks for Gemini-1.0-pro and GPT-4o without history. The periodic table is color-coded to indicate the frequency of each element's occurrence in all modified materials (both intermediate and final) across all runs and starting materials. Darker colors represent higher frequencies, while lighter colors denote lower frequencies or absence. The visualization employs log-scaling to effectively highlight the distribution and prevalence of elements.
D Visualization of Selected Structures

Band gap task (starting material -> final structure, band gap):
SrTiO3 -> Ba2Tl2PNO6 (1.51 eV); BaV2Ni2O8 -> BaCd2(MoO5)2 (1.30 eV); Co2TiO4 -> Co2SO4 (1.42 eV); ErNi2Ge2 -> ErGa2S2O (1.39 eV); CeAlO3 -> FeGeO3 (1.49 eV); Li2CaSiO4 -> LiSnO2 (1.29 eV); CdCu2GeS4 -> Zn2CdSiSe4 (1.42 eV); Na2ZnGeO4 -> Na4MnFe2(GeO4)2 (1.41 eV); LiSiNO -> LiGePS (1.30 eV); Ga2O3 -> Ga4SnO6 (1.33 eV).

Figure D.1: Visualization of the final structures obtained by LLMatDesign for the band gap task. These structures are obtained from the first run of all 10 starting materials. The first chemical formula in each pair is the starting material, followed by the formula of the final structure and its corresponding band gap. GPT-4o with history is utilized as the LLM engine.

Formation energy task (starting material -> lowest-energy structure, formation energy per atom):
BaV2Ni2O8 -> BaTi2V2O8 (-3.09 eV/atom); SrTiO3 -> BaTiO3 (-3.56 eV/atom); Li2CaSiO4 -> Li2CaSiO4 (-3.03 eV/atom); Co2TiO4 -> TiMnO3 (-2.38 eV/atom); ErNi2Ge2 -> LaAl(NiS)2 (-1.19 eV/atom); Na2ZnGeO4 -> Li2AlSiO4 (-2.85 eV/atom); LiSiNO -> Li2Si2N2O2F (-2.21 eV/atom); CdCu2GeS4 -> MgAl6O10 (-2.87 eV/atom); CeAlO3 -> YScO3 (-3.75 eV/atom); Ga2O3 -> Al2O3 (-3.29 eV/atom).

Figure D.2: Visualization of the final structures obtained by LLMatDesign for the formation energy task. These structures represent the ones with the minimum formation energy per atom from the first run of all 10 starting materials. The first chemical formula in each pair is the starting material, followed by the formula of the structure with the lowest formation energy and its corresponding formation energy per atom value. GPT-4o with history is utilized as the LLM engine.

E DFT Calculations

Table E.1: DFT results for lowest-energy structures obtained from the formation energy task, averaged across all starting materials and runs.

Method                 Formation energy per atom (eV/atom)    Job success rate (%)
GPT-4o with history    -2.31                                  73.3
Random                 -1.51                                  40.0
ai_researcher
2
Context-Aware_Summarization_for_PDF_Documents_using_Large_Language_Models.pdf
World of Computer Science and Information Technology Journal (WCSIT) ISSN: 2221-0741 Vol. 1, No. 3, 63-70, 2011 A Cloud-based Approach for Context Information Provisioning Elarbi Badidi Faculty of Information Technology United Arab Emirates University Al-Ain, United Arab Emirates [email protected] Larbi Esmahi School for Computing & Information Systems Athabasca University, University Drive Athabasca, Alberta, Canada [email protected] Abstract— As a result of the phenomenal proliferation of modern mobile Internet-enabled devices and the widespread utilization of wireless and cellular data networks, mobile users are increasingly requiring services tailored to their current context. High-level context information is typically obtained from context services that aggregate raw context information sensed by various sensors and mobile devices. Given the massive amount of sensed data, traditional context services are lacking the necessary resources to store and process these data, as well as to disseminate high-level context information to a variety of potential context consumers. In this paper, we propose a novel framework for context information provisioning, which relies on deploying context services on the cloud and using context brokers to mediate between context consumers and context services using a publish/subscribe model. Moreover, we describe a multi-attributes decision algorithm for the selection of potential context services that can fulfill context consumers’ requests for context information. The algorithm calculates the score of each context service, per context information type, based on the quality-of-service (QoS) and quality-of-context information (QoC) requirements expressed by the context consumer. One of the benefits of the approach is that context providers can scale up and down, in terms of cloud resources they use, depending on current demand for context information. Besides, the selection algorithm allows ranking context services by matching their QoS and QoC offers against the QoS and QoC requirements of the context consumer. Keywords- mobile users; context-aware web services; context services; cloud services; quality-of-context; quality-of-service; service selection. I. INTRODUCTION The proliferation of wireless and cellular networks over the last few years has led to a remarkable rise in the number of users who are using a variety of modern mobile Internet- enabled devices --such as iPhones, iPads, and Android-based smartphones-- to consume online services. Mobile users are increasingly requiring services tailored to their context as they are on the move. Therefore, enterprise services should be context-aware to deal with the changing environment of the user. Several definitions of the notion of context have been provided in the literature. According to Dey [1], “Context is any information that can be used to characterize the situation of an entity. An entity is a person, place, or object that is considered relevant to the interaction between a user and an application, including the user and applications themselves.” According to this definition, the amount of information that can be categorized as context information is extremely wide. Location, time, temperature, humidity, pressure, and mobile user activity are the most widely used context indicators by applications. Specialized services, that we call context services, capture, store, analyze and aggregate data to provide high-level context information to consumer application services as needed. 
Context services and context consumers are often physically distributed. Besides, it is likely that these context sources provide the same context information but with different QoC [2][3]. The QoC concept is explained in Section 3. Context-awareness raises challenges like aggregation of context information in a structured format, and discovery and selection of appropriate context services for context delivery to context consumers. To cope with the issues of context delivery and context service selection, we propose a novel framework for context provisioning, which relies on using components called context brokers, and on deploying context services on the cloud. Context brokers mediate between context consumers and context services using a publish/subscribe model. To the best of our knowledge, there was no previous work on deploying context services on the cloud. We believe that our approach will take advantage of the power of the cloud in terms of elasticity, storage abundance, and scalability. Furthermore, we describe a multi-attributes algorithm for the selection of context services on the basis of the QoS and QoC they can offer. The algorithm takes into account the QoS and QoC requirements of context consumers for each context information to which they subscribe with the Context Broker.

The remainder of the paper is organized as follows. Section 2 describes related work on context-awareness and context information provisioning. Section 3 provides background information on the concepts of cloud services and quality-of-context. Section 4 presents an overview of our proposed framework, describes the interactions among the framework components, and presents our proposed algorithm for the selection of context services in both a single cloud and multiple clouds. Section 5 discusses the challenges of the approach. Finally, Section 6 concludes the paper and describes future work.

II. RELATED WORK

Over the last two decades, context provisioning has been a particularly popular research topic, especially with the advent of smart mobile devices, the advances in sensing technology, and the proliferation of mobile applications. Many research works have proposed, designed, and implemented frameworks and middleware infrastructures for managing context information and providing users with context-aware services. Moreover, many surveys have been made in order to understand the features and shortcomings of existing systems [4][5][6].

With the emergence of service-oriented computing, numerous research works have investigated the design and the implementation of context services. A context service typically provides infrastructure support for the collection, management, and dissemination of context information vis-à-vis a number of subjects. Subjects may be users, objects such as handheld devices and equipment, or the environment of users. The context service acquires context information from various context sources. For example, consider the "temperature" at the current location of the mobile user; this information may be obtained directly from the mobile device of the user. It can also be obtained from a local weather station. Alternatively, it may be obtained from weather TV channels providing weather information nation-wide.

Schmidt et al. designed and implemented a generic context service with a modular architecture that allows for context collection, discovery and monitoring [7].
This context service provides a Web service interface that allows its integration in heterogeneous environments. The implementation uses OWL to describe context information and SPARQL to query and monitor context information.

Lei et al. described the design issues and the implementation of a middleware infrastructure for context collection and dissemination [8]. They realize this middleware infrastructure as a context service. To allow for wide deployment of the context service, this work has addressed the following issues: extensibility of the context service architecture by supporting heterogeneous context sources, integrated support for privacy, and support for the quality of context information.

Coronato et al. proposed a semantic context service that relies on semantic Web technologies to support smart offices [9]. It uses ontologies and rules to infer high-level context information, such as lighting and sound level, from low-level raw information acquired from context sources.

As described in the surveys mentioned earlier, many of the existing context-aware systems suffer from a lack of scalability, extensibility, and interoperability, as well as from adoption difficulties. The originality of our approach lies in bringing context management and delivery to the cloud by deploying context services on the cloud. We believe that our approach will benefit from the power of the cloud in terms of scalability, elasticity, cloud storage abundance, and the ability to scale up and down.

III. BACKGROUND

A. Quality-of-Context

Context information is characterized by some properties referred to in the literature as QoC indicators. Buchholz et al. [2] have defined the QoC as: "Quality of Context (QoC) is any information that describes the quality of information that is used as context information. Thus, QoC refers to information and not to the process nor the hardware component that possibly provide the information."

Buchholz et al. [2] and Sheikh et al. [3] have identified the following QoC indicators: precision, freshness, temporal resolution, spatial resolution, and probability of correctness. Precision represents the granularity with which context information describes a real world situation. Freshness represents the time that elapses between the determination of context information and its delivery to a requester. Spatial resolution represents the precision with which the physical area, to which an instance of context information is applicable, is expressed. Temporal resolution is the period of time during which a single instance of context information is applicable. Probability of correctness represents the probability that a piece of context information is correct. Several competing context services may provide the same context information [2]. Therefore, potential context consumers should be able to select context services on the basis of the QoC they can assure.

B. Cloud services

Cloud computing enables a service-provisioning model for computing services that relies on the Internet. This model typically involves the provisioning of dynamically scalable and virtualized services. Applications or services offered by means of cloud computing are called cloud services. Typical examples of cloud services include office applications (word processing, spreadsheets, and presentations) that are traditionally found among desktop applications. Nearly all large software corporations, such as Google, Microsoft, Amazon, IBM, and Oracle, provide various kinds of cloud services. Besides, many small businesses have launched their own Web-based services, mainly to take advantage of the collaborative nature of cloud services.
Besides, many small businesses have launched their own Web-based 64 WCSIT 1 (3), 63 -70, 2011 services, mainly to take advantage of the collaborative nature of cloud services. The user of a cloud service has access to the service through a Web interface or via an API. Once started, the cloud service application acts as if it is a normal desktop application. The difference is that working documents are on the cloud servers. Cloud services models are:  Infrastructure-as-a-Service IaaS, organizations rent computing resources and storage space and access them through a private network or across the Internet. (IaaS): With  Platform-as-a-Service (PaaS): With PaaS, organizations can develop their business applications in a cloud environment by using software tools supported by their cloud provider. Maintenance and management of the cloud infrastructure including severs and operating system is the responsibility of the cloud provider.  Software-as-a-Service (SaaS): With SaaS, the cloud service application runs on the cloud provider servers and users access the service through a Web interface or via an API. IV. A FRAMEWORK FOR CLOUD-BASED CONTEXT PROVISIONING In every business with a delivery/consumption model, brokers emerge to mediate between consumers and providers. This could be the case for context delivery. Context brokers may, then, be used to decouple context consumers from context services. Our interest in using brokers is motivated by the fact that they have been used for a while in Service Oriented Architecture (SOA) to mediate between services providers, service consumers, and partners. They have also been extensively used in multimedia systems and in mobile computing systems to deal mainly with the issue of QoS management. Fig. 1 depicts our framework for context information provisioning. The main components of the framework are: Context-aware Web services (context consumers), Context Brokers, and Cloud-based Context Services. Multiple context brokers may be deployed, one for each local domain for instance. A discovery service will allow context-aware consumers to bind to the right context broker. A. Context Brokers A context broker is a mediator service that decouples context consumers from context services. It is in charge of handling subscriptions of context consumers in which they express their interest to receive context information, and registration of context services. Context services may then publish their newly acquired context information to the context broker, which notifies context consumers about that newly acquired context information. Context brokers can also be deployed on the cloud. Fig. 2 illustrates our topic-based publish-subscribe system in which context services are the publishers and the CAWSs are the subscribers. 65 Figure 1. Framework for Cloud-based Context Provisioning Context information -- such as location, temperature, and user activity -- represents the topics of the system. The Publish/subscribe messaging model is a one-to-many pattern of asynchronous message distribution based on registration of interest. In this model, publishers associate the name of a topic to each message (―publish‖) rather than addressing it directly to subscribers. Then, the message system sends the message to all eligible recipients that expressed their interest in receiving messages on that topic (―subscribe‖). 
As opposed to point-to- point messaging systems, such as message queuing, the publish/subscribe model of asynchronous communication is a far more scalable architecture. This is because the source of the information has only to concern itself with creating the information, and can leave the task of servicing potential recipients to the messaging system. It is a loosely coupled architecture in which senders often do not need to know who their potential subscribers are, and the subscribers do not need to know who generates the information. to In addition this publish/subscribe model for provisioning context information, a context broker implements a regular on-demand request/response model, in which it requests up-to-date context information from context services once a context consumer requires information for a given topic. Therefore, a context broker may either pull context information from context services or let context services push updated context information. Context services, typically residing in different clouds, deliver context information to context consumers with various quality-of-context and quality-of-service (QoS). Therefore, the Context Broker is in charge of selecting appropriate context services to deliver context information to which a context consumer has subscribed. Context information may be delivered to the same consumer by several context services. Each one may deliver a piece of context information (a topic) that the consumer requires to adapt its behavior to the current context of a user. In Sub-section 4.5, we describe a selection algorithm that allows ranking context services with regard to the QoC and the topics required by a context consumer. WCSIT 1 (3), 63 -70, 2011 Figure 2. Topic-based publish/subscribe system B. Context Consumers In our framework, context-aware Web services (CAWS) are the consumers of context information obtained from the cloud-based context services. A CAWS is a Web service that can understand situational context and can adapt its behavior according the changing circumstances as context data may change rapidly. It produces dynamic results according to the 5 WH questions: who, where, when, what, and why it was invoked. A CAWS can be responsive to various situational circumstances, such as:  The identity of the client who invoked the service, whether it is a person, or another Web service.  The location of the client.  The time at which the client invokes the service.  The activity that the client is carrying out at the time it invokes the service.  The preferences that the client may have defined prior to invoking the service.  The security and privacy policies associated with the client of this service.  The device (laptop, PDA, smartphone, etc.) that the client is using to invoke the service. C. Cloud-based Context Services As we have mentioned earlier in the related work section, high-level context information is typically obtained from context services that aggregate raw context information sensed by sensors and mobile devices. Given the massive amount of context data processed and stored by context services and the wide acceptance of the cloud computing technology, context providers now can leverage their services by deploying them on the cloud. Fig. 3 depicts the process of context acquisition and the deployment of context services on the cloud to provide high- level context information to context consumers. 
Raw context data sensed by various devices and sensors is processed, aggregated by Context Aggregator components in a structured format, and then uploaded to the cloud-based context services. Figure 3. Deployment of high-level context information on the cloud One of the underlying advantages of the deployment of context services in the cloud is the economy of scale. By making the most of the cloud infrastructure provided by a cloud vendor, a context provider can offer better, cheaper, and more reliable services than is possible within its premises. The context service can utilize the full processing and storage resources of the cloud infrastructure if needed. Another advantage is scalability in terms of computing resources. Context providers can scale up when additional resources are required as a result of a rise in the demand for context information. Conversely, they can scale down when the demand for context information is decreasing. Another benefit of the approach is to enable context-aware application services to acquire their required context information on a pay-as-you- go basis and to select cloud-based context services on the basis of the price they have to pay and other criteria, such as the QoC they can get. Furthermore, context-aware applications can obtain context information from cloud-based context services without having in context management. The net benefit for consumers and mobile users, in particular, is the ability to receive better services tailored to their current context. involved to be The SaaS model is the most appropriate model for cloud- based context provisioning. Indeed, SaaS is seen as the trend of the future and the most common form of cloud service development. With SaaS, software is deployed over the Internet and delivered to thousands of customers. Using this model, the context service provider may license its service to customers through a subscription or a pay-as-you-go model. The service is then accessible using an API. D. Interfaces and Interaction model In this section, we describe the interactions among the components of the framework and do consider only the case of a single context broker. The model can be easily extended to consider several context brokers. Fig. 4 shows a simplified class diagram of the framework components, and Fig. 5 depicts the interactions among them. 66 WCSIT 1 (3), 63 -70, 2011 Figure 4- Class diagram of the framework components. The context broker acts as an intermediary between (context (context services) and subscribers publishers consumers) on a collection of topics (context information). A context consumer invokes the subscribe() method of the context broker to register its interest to receive updates on some topics, such as location, and temperature. If the processing of subscribe() is successful, the context broker returns a subscription ID to the context consumer. a Similarly, invokes context registerContextService() of the context broker to register its interest to publish some types of context information through the context broker. If the processing of that method is successful, the context broker returns a registration ID to the context service. service The context broker receives notifications of context change through its notify() method that a context service invokes. It, then, notifies a context consumer about context change by invoking its notify() method. Furthermore, a context consumer may request the current value for a given topic by invoking getCurrentTopicValue() of the context broker. 
The broker forwards the request to context services that are providing that topic requested by the context consumer. A newly-subscribed context consumer can invoke getLastTopicValue() in order to get the last value of a given topic that other consumers have already received. The context broker has also two additional methods findContextConsumers() and findContextServices() that are self-invoked. The former is invoked to get the list of context consumers that have subscribed to a given topic once a notification of context change has been received for that topic. The latest is invoked to get the list of context services that are publishing the topic requested by a context consumer that has invoked getCurrentTopicValue(). A context aggregator can register at a context service by specifying what topics it is an aggregator for. Once registered, a context aggregator can submit the current value for a given topic by invoking the setTopicValue() method at the context service. When the topic value is changed in the context service, the notify() method at the context broker is triggered to notify all subscribers of that topic. 67 Figure 5- Diagram of interactions among the framework components E. A Multi-attributes Algorithm for Context Services Selection As we have stated earlier, the Context Broker is in charge of selecting suitable context services to deliver context information to which context consumer (CAWS) subscribed. Context information may be delivered to the same context consumer by several context services. Each one may deliver a piece of context information (a topic) that the context consumer requires to adapt its behavior to the current context of a user. Thus, the selection has to be done per topic. In this subsection, we describe our proposed algorithm for context services selection. The algorithm allows ranking context services with regard to the QoC and the QoS required by a context consumer. We first describe how the algorithm works in the case of a single cloud; then, we extend the algorithm to the case of multiple clouds as depicted by Fig. 1. 1) Single Cloud-based Service Selection As numerous potential context services, within the cloud, can deliver the context information required by a consumer, it is indispensable to consider only potential context services that can satisfy both the QoC and the QoS required by the context consumer. Let be the list of context information (topics) to which a context consumer has subscribed by showing its interest in receiving such context information. Let be the list of context services in the cloud that have subscribed with the Context Broker. Two context services may provide different context information; each one specializes in offering particular context information. One service, for example, may offer location information while another service may offer only temperature information, and a third one may offer both of them. These services typically provide context information with different QoC and QoS. We assume that QoC and QoS indicators are in normalized form with values between 0 and 1. A value of 1 means highest quality and 0 means lowest quality. For example for the freshness quality indicator, 1 means that context sources have sensed the information in the last minute, and 0 means that they have sensed it in the last 10 minutes. QoS indicators may concern for instance parameters such as availability, response- time, reputation, and cost of service. 
E. A Multi-attributes Algorithm for Context Services Selection

As we have stated earlier, the Context Broker is in charge of selecting suitable context services to deliver the context information to which a context consumer (CAWS) has subscribed. Context information may be delivered to the same context consumer by several context services. Each one may deliver a piece of context information (a topic) that the context consumer requires to adapt its behavior to the current context of a user. Thus, the selection has to be done per topic. In this subsection, we describe our proposed algorithm for context services selection. The algorithm allows ranking context services with regard to the QoC and the QoS required by a context consumer. We first describe how the algorithm works in the case of a single cloud; then, we extend the algorithm to the case of multiple clouds as depicted by Fig. 1.

1) Single Cloud-based Service Selection

As numerous potential context services within the cloud can deliver the context information required by a consumer, it is indispensable to consider only potential context services that can satisfy both the QoC and the QoS required by the context consumer.

Let T = {t_1, ..., t_m} be the list of context information items (topics) to which a context consumer has subscribed by showing its interest in receiving such context information. Let CS = {cs_1, ..., cs_n} be the list of context services in the cloud that have registered with the Context Broker. Two context services may provide different context information; each one specializes in offering particular context information. One service, for example, may offer location information while another service may offer only temperature information, and a third one may offer both of them. These services typically provide context information with different QoC and QoS.

We assume that QoC and QoS indicators are in normalized form, with values between 0 and 1. A value of 1 means highest quality and 0 means lowest quality. For example, for the freshness quality indicator, 1 means that context sources have sensed the information in the last minute, and 0 means that they have sensed it in the last 10 minutes. QoS indicators may concern, for instance, parameters such as availability, response time, reputation, and cost of service.

When subscribing to context information, a context consumer specifies the minimum values of the normalized QoC and QoS indicators that it can tolerate. For instance, a context consumer subscribing to the location topic may require a minimum value of 80% for the freshness indicator and 93% for the probability-of-correctness indicator; it may also require 98% for the availability QoS indicator.

Let Q = {q_1, ..., q_p} be the list of QoC indicators (parameters) considered in the system, and let P = {p_1, ..., p_r} be the list of QoS indicators considered in the system. The minimum QoC requirements that the context consumer tolerates for a given topic t_j are expressed by the following vector:

R_j = (r_j1, r_j2, ..., r_jp), with 1 <= j <= m, where p is the cardinality of Q.

Therefore, the whole quality-of-context requirements of the context consumer, for all its subscribed topics and all QoC indicators considered in the system, can be expressed by the m x p matrix R = [r_jk], 1 <= j <= m, 1 <= k <= p.

The minimum QoS requirements that the context consumer tolerates, which concern all topics, are expressed by the following vector:

S = (s_1, s_2, ..., s_r), where s_l represents the minimal value that the context consumer is willing to accept for the QoS parameter p_l, for 1 <= l <= r.

The quality-of-service requirements of the context consumer are independent from the topics. A zero value for any QoC or QoS parameter means that the user has not specified any constraint on that parameter. The goal of the selection algorithm is to find, for each topic t_j to which the context consumer subscribed, a suitable context service from the set CS that can satisfy the minimum quality requirements of the context consumer.

The QoC offer of a context service cs_i is expressed by the m x p matrix O^i = [o^i_jk], and its QoS offer is expressed by the vector U^i = (u^i_1, ..., u^i_r), where u^i_l is the offer of cs_i for the QoS indicator p_l, 1 <= l <= r. cs_i is suitable for a topic t_j if the following condition is satisfied:

o^i_jk >= r_jk for 1 <= k <= p, and u^i_l >= s_l for 1 <= l <= r.    (1)

In other words, cs_i is suitable for provisioning topic t_j if the minimum quality-of-context requirements as well as the minimum quality-of-service requirements are satisfied. In the following, we will consider in the selection process only context services that meet the minimum QoS requirements of the context consumer.

The context consumer may set relative weights for the QoC indicators. It may even set weights per topic to which it subscribed. For example, for the location topic, more weight may be given to the spatial resolution indicator than to the probability of correctness indicator. For the time-of-day topic, more weight may be given, for example, to the precision indicator than to the other QoC indicators. Therefore, the weight matrix is given by W = [w_jk], 1 <= j <= m, 1 <= k <= p.

The score of a given QoC indicator q_k for a given topic t_j by the offer of cs_i is:

sc^i_jk = w_jk * o^i_jk, for 1 <= j <= m and 1 <= k <= p.    (2)

Therefore, the score matrix of the offer, for all QoC indicators and all topics, is SC^i = [sc^i_jk]. Given the weight matrix and the minimum QoC requirements matrix, the minimum score matrix is SC^min = [w_jk * r_jk].

The difference matrix, D^i = SC^i - SC^min, shows whether cs_i may satisfy or not all QoC requirements for all topics to which the context consumer has subscribed. A value that is less than zero in this matrix means that cs_i cannot satisfy the QoC requirement for the associated topic and QoC indicator. Therefore, we have to reason per topic, and consider only context services that can meet the QoC requirements for that topic.

The score per topic t_j for a potential context service offer is:

Score_j(cs_i) = sum over k = 1..p of sc^i_jk.    (3)

The scores of cs_i for all topics can be expressed by the vector (Score_1(cs_i), ..., Score_m(cs_i)).

Considering the scores of all the potential context services, we get the following decision matrix, which records, for each topic, the score of each candidate service, the maximum score, and the context service providing it:

Topic    Score by cs_1 ... cs_n              Max score              Selected CS
t_1      Score_1(cs_1) ... Score_1(cs_n)     max_i Score_1(cs_i)    CS*_1
...      ...                                 ...                    ...
t_m      Score_m(cs_1) ... Score_m(cs_n)     max_i Score_m(cs_i)    CS*_m

A score in the decision matrix is zero if the context service cannot meet the QoC requirements for a given topic. The maximum score value of each row j corresponds to the best QoC offer that can fulfill the QoS and QoC requirements of the context consumer for the topic t_j. The most suitable context service for topic t_j, which we call CS*_j, will be the one that maximizes the above score, that is:

CS*_j = argmax over cs_i in CS of Score_j(cs_i).    (4)

If no context service satisfies the context consumer's QoS and QoC requirements for a given topic, then the Context Broker may ask the context consumer to lower its QoC expectations. The steps of the algorithm are summarized in Fig. 6.

Step-1: Construct the matrix R of minimum QoC requirements of the context consumer for all the topics it subscribes to, and the vector S of minimum QoS requirements the context consumer can tolerate. We assume that all values of the matrix and the vector are normalized to be in the range [0,1].
Step-2: Construct the weight matrix W set by the context consumer for each topic and for each QoC indicator, then the minimum score matrix SC^min.
Step-3: For each context service cs_i registered with the Context Broker,
a) Construct the normalized matrix O^i of the QoC offers of cs_i for all current topics to which the context consumer has subscribed, and the normalized vector U^i of the QoS offer of cs_i.
b) Calculate the score matrix SC^i that represents the score between the QoC offer of cs_i and the context consumer's QoC requirements, for each quality indicator considered in the system and for each topic.
c) Calculate the difference matrix D^i = SC^i - SC^min. If a value of this matrix is less than zero, then it means that cs_i cannot satisfy the QoC requirements of the context consumer for the associated topic and the associated QoC indicator. Only rows with positive values will be considered in the next steps.
d) Calculate the score vector using equation (3). Note that rows with negative values in the difference matrix will have a score of 0 in the score vector.
Step-4: Create the decision matrix, and fill out the maximum score for each topic and the CS providing that score.

Figure 6. QoC-based Context Service Selection Algorithm

2) Multiple Clouds-based Service Selection

The previous subsection describes how the ranking and selection of context services is achieved within a single cloud. In order to find the most suitable context services, for each topic, within multiple clouds, the context broker selects potential context services in each cloud according to the algorithm described in the previous sub-section. Selected context services from the clouds are then ranked to find the best context service per topic, i.e., the one that maximizes the score expressed by equation (3).
Context brokers should, then, be able to interoperate with all these heterogeneous context services. Security is a significant concern with any SaaS application on the cloud. Care must be taken when designing and implementing a security solution for a cloud-based context- service to keep it as simple and efficient as possible. For instance, the context service may have to be integrated with an identity management service. In this scenario, each customer of the context service has an identity account, which is used to authenticate the customer and track all its requests for service. Performance monitoring, billing, managing customers’ expectations are also significant concerns among others that a context service provider has to handle. The context provider must ensure that its context service is highly available and that its customers can access it. One outage or crash of the service can affect all its customers. Now, there is a general trend toward implementing a Service Level Agreement (SLA) between providers of cloud services and customers, even though that most SaaS vendors do not provide them at present. Another concern, which is not linked to the cloud, but that should be handled by context brokers and consumers is the 69 WCSIT 1 (3), 63 -70, 2011 heterogeneity in the representation and modeling of context information by each context service. Bettini et al. [10] provide a survey in which they describe and compare current context modeling and reasoning techniques. Strang et al. [11] provide another similar survey. Modeling approaches mainly include key-values models, graphical models, object-oriented models, markup scheme models, logic-based models, and ontology- based models. With this heterogeneity in context information models, context brokers should provide a common ontology- based context information model and the mappings from the various models to this common model. [8] H. Lei, D.M. Sow, J.S. Davis, G. Banavar, and M.R. Ebling, ―The design and applications of a context service,‖ SIGMOBILE Mob. Comput. Commun. Rev., vol 6(4), pp.45-55, October 2002. [9] A. Coronato, G. De Pietro, and M. Esposito, ―A Semantic Context Service for Smart Offices,‖ In Proc. of the International Conference on Hybrid Information Technology, vol. 02, pp.391-399, 2006. [10] C.Bettini, O. Brdiczka, K. Henricksen, J.Indulska, D. Nicklas, A. Ranganathan, D. Riboni, ―A Survey of Context Modelling and Reasoning Techniques,‖ Pervasive and Mobile Computing, vol. 6(2), pp. 161-180, 2010. [11] T. Strang, C. Linnhoff-Popien, ―A Context Modeling Survey,‖ In Workshop on Advanced Context Modelling, Reasoning and Management, UbiComp 2004 , Nottingham/England, 2004. AUTHORS PROFILE Elarbi Badidi is currently an Assistant Professor of computer science at the Faculty of Information Technology (FIT) of United Arab Emirates University. Before joining the FIT, he held the position of bioinformatics group leader at the Biochemistry Department of University of Montréal from 2001 to July 2004. He received a Ph.D. in in 2000 from University of computer science Montréal, Québec (Canada). Dr. Badidi has been conducting research in the areas of object-based distributed systems, bioinformatics tools integration, and Web services. He is a member of the IEEE, IEEE Computer Society, and ACM. He served on the technical program committees of many international conferences. His research interests include Web services and service oriented computing, middleware, cloud computing, and bioinformatics data and tools integration. 
Larbi Esmahi is an Associate Professor of the School of Computing and Information Systems at Athabasca University. He was the graduate program coordinator at the same school during 2002-2005. He holds a PhD in electrical engineering from Ecole Polytechnique, University of Montreal. His current research interests are in e-services, e-commerce, multi-agent systems, and intelligent systems. He is an associate editor for the Journal of Computer Science, and the Tamkang Journal of Science and Engineering. He is also member of the editorial advisory board of the Advances in Web-Based Learning Book Series, IGI Global, and member of the international editorial review board the International Journal of Web-Based Learning and Teaching Technologies. VI. CONCLUSION AND FUTURE WORK High-level context information is typically obtained from context services that aggregate raw context information sensed by sensors and mobile devices. Given the enormous amount of context data processed and stored by context services and the wide acceptance of the cloud computing technology, context providers now can leverage their services by deploying them on the cloud. In this paper, we have presented our proposed framework for cloud-based context provisioning. The framework relies on context brokers for context information dissemination using a publish/subscribe model. Context services, deployed on the cloud, can scale up and down, in terms of cloud resources they use, according to the demand for context information. We have described a preliminary model of interactions, among the components of the framework, and that could be the basis for a context service API. As a future work, we first intend to investigate further on a common ontology-based model for context information representation that can be used by context brokers; and then, describe the mappings from the various context representation models described in the literature to that common model. We also intend to implement a prototype of the framework by considering some real scenarios for context provisioning, and implementing a context broker and few similar cloud-based context services using open-source software tools. REFERENCES [1] A.K. Dey, ―Understanding and Using Context,‖ Journal of Pervasive and Ubiquitous Computing, vol. 5(1), pp. 4–7, 2001. [2] T. Buchholz, A. Kpper, M. Schiffers, ―Quality of context: What it is and why we need it?,‖ In Proc. of the 10th International Workshop of the HP OpenView University association (HPOVUA), 2003. [3] K. Sheikh, M. Wegdam, and M. Van Sinderen, ―Quality-of-Context and its use for Protecting Privacy in Context Aware Systems,‖ Journal of Software, vol. 3(3) pp. 83-93, March 2008. [4] M.Baldauf, S. Dustdar, and F. Rosenberg, ―A survey on context-aware systems,‖ International Journal of Ad Hoc and Ubiquitous Computing, vol. 2 (4), pp. 263-277, 2007. [5] K. Henricksen, J. Indulska, T. McFadden, and S. Balasubramaniam, ―Middleware for Distributed Context-Aware Systems,‖ OTM Confederated International Conferences, pp. 846-863, Springer-Verlag, 2005. [6] H.L. Truong, and S. Dustdar, ―A Survey on Context-aware Web Service Systems,‖ International Journal of Web Information Systems, vol. 5(1), pp.5-31, Emerald, 2009. [7] H. Schmidt, F. Flerlage, F.J. Hauck, ―A generic context service for ubiquitous environments,‖ In Proc. of the IEEE International Conference on Pervasive Computing and Communications (PERCOM), pp.1-6, 2009. 70
ai_researcher
1
There_Is_No_“I”_in_Team_but_There_Is_in_Innovation_How_Individual_Attributes_Impact_Team_Ideation_and_Selection_Practices.pdf
1 2 0 2 t c O 2 1 ] Y S . s s e e [ 1 v 8 5 7 5 0 . 0 1 1 2 : v i X r a Role of Externally Provided Randomness in Stochastic Teams and Zero-sum Team Games Rahul Meshram Abstract Stochastic team decision problem is extensively studied in literature and the existence of optimal solution is obtained in recent literature. The value of information in statistical problem and decision theory is classical problem. Much of earlier does not qualitatively describe role of externally provided private and common randomness in stochastic team problem and team vs team zero sum game. In this paper, we study the role of extrenally provided private or common randomness in stochastic team decision. We make observation that the randomness independent of environment does not benefit either team but randomness dependent on environment benefit teams and decreases the expected cost function. We also studied LQG team game with special information structure on private or common randomness. We extend these study to problem team vs team zero sum game. We show that if a game admits saddle point solution, then private or common randomness independent of environment does not benefit either team. We also analyze the scenario when a team with having more information than other team which is dependent on environment and game has saddle point solution, then team with more information benefits. This is also illustrated numerically for LQG team vs team zero sum game. Finally, we show for discrete team vs team zero sum game that private randomness independent of environment benefits team when there is no saddle point condition. Role of common randomness is discussed for discrete game. I. INTRODUCTION A team decision problem consists of two or more of decision makers (DMs) or players that make decisions in a random environment where the information of each DM is a (possibly partial) observation about the random environment. A DM takes an action as a function of the information; this function is referred to the as the decision rule. DMs choose the decision rule to jointly minimize an expected cost. Rahul Meshram is with the Electronics and Communication Engineering Department, Indian Institute of Information Technology Allahabad. [email protected] If the decision makers had identical observations, then the multiple decision makers could be clumped together as a single decision maker and the problem reduces to that of a stochastic opti- mization or control problem. Of interest to us here is the case where information is asymmetric, whereby there is no obvious method of aggregating it. A team decision problem is in essence a decentralized stochastic optimal control problem. Problems with structure appear in a variety of settings for example in sensor networks. The decision makers could be sensors situated at different locations. These sensors observe the environment through different, possibly imperfect, channels and under this information structure, the sensors have to act collectively to minimize a certain cost function. In this paper we consider the role of externally provided private and common randomness in stochastic teams and in stochastic team v/s team zero-sum games. In the setup described above, it is conceivable that an external source provides randomness to the players. This randomness may or may not be correlated with their observations, and it may or may not be correlated across players. This randomness increases the set of achievable joint distributions on the joint action space of the DMs. 
Our goal is to understand the role of such randomness in a team problem and a team v/s team game. Qualitatively, there are three of kinds randomness that an external source may provide. First, the source of randomness could be a coordinator – namely an entity that mixes the actions of the DM by randomizing. Mathematically, this randomness is independent of the observations of the DMs, but it may be correlated across DMs. This correlation makes this randomness distinct from the usual notion of “randomized policies”, in which the randomization is independently performed by each DM. The second kind of randomness, may be imagined as a counsellor – this entity accesses the observations of each decision maker and provides a common message to all DMs. This kind of randomness is correlated with the environment. The sources of randomness mentioned above are relevant for team problems as well as team v/s team games. The third kind, which is relevant only in the team v/s team game, is that of a mole or a spy. This source of randomness provides information about observations of the opposite team. Our interest is in qualitatively understanding the role of the kinds of randomness mentioned above and quantifying it. We make following contribution in this work. 1) A team decision problem: • We show that if a coordinator provides private or common randomness independent of the environment, then it cannot improve the cost. • We show that common randomness dependent on the environment can improve the team cost. For a certain class of LQG team problems, we show that if the information of each player is replaced by a convex combination of the information of all players, then the team improves its cost. 2) Team v/s team zero-sum game: • We show randomness independent the an environment does not benefit teams if the zero-sum game admits saddle point solution. • We prove that a team having more information than other team, benefit and decreases the cost function for minimizing team when randomness is dependent on environment. • For LQG team zero-sum games we illustrate that common randomness dependent on the environment leads to an improvement in the optimal team cost. • Finally, We give an example of a discrete team v/s team zero sum game, without a pure strategy saddle point, we also show that private and common randomness independent of an environment benefit teams. But it may not have Nash equilibrium. A. Related Work Early work on team decision problem in aspect of an organization theory studied in [1]; where author used the concepts from game theory and statistical decision theory. A general formulation of team decision problem are described in [2] and person by person optimality condition is established to solve the distributed team decision problem. Furthermore, the team decision problem extended to a LQG team problem in [3], [4]. They investigated static and dynamic LQG team decision problem and explored its connection with information theory and economics. In LQG team problem, there is a unique optimal solution, linear in information and it is obtained via solving person by person optimal condition. They also studied dynamic LQG team with partial-nested information structure. Moreover, the symmetric static team problem studied in [5] and have shown that the optimal strategy for a symmetric team problem not necessarily a pure strategy but it can have randomized strategy. 
Two-team zero-sum game in LQG problem studied in [6], and they show that team having extra information not necessary ameliorate the expected loss. Apart from a team decision problem, the role of common randomness in multi-agent distributed control problem is analyzed in [7]. Our work is inspired from [3], [6], [7]. Role of common randomness is not quantified in [3], [6], whereas we discuss role of the private and common randomness in team decision problem. The value of information for statistical problems is first introduced in [8], [9]. This is further extended to decision problems in [10], and author have shown that increasing information lead to increasing in utility. Early work on role of increasing information in two person game problem is presented in [11].The surprising finding is presented in [11], where author finds that increasing informativeness leads to decreasing performance. The value of information available to players with two-person zero sum game is studied in [12]. As the additional information increased for a player, may lead to solution toward ideal optimality condition when there is a saddle point condition exists. This result further motivated study on value of information in team vs team zero sum game and similar result have shown for LQG team vs team zero sum game. In [13], the value of information for two players non zero sum game is developed, and they have show that in LQG model with better informed player, it decreases the average Nash cost for both players but in duopoly problem, the better informed player benefits only. The great reference for stochastic team decision problem is [14]. In this book, authors dis- cussed fundamental of team decision problem, sequential team decision problem, comparison of information structure, topological properties of information structure and its application to communication channel. It has motivated further research on team decision problem in recent time. There are flurry of research activities on static team problems and their existence of solution. In [15], the class of sequential team problem is studied with a certain information structure and existence of optimal strategies are proved. Further, they have shown the existence of optimal solution for team problem under weaker assumptions, i.e., assumption on cost function to be bounded and continuous, action space of agent to be compact or not compact and observation satisfies technical condition. The ideas from weak convergence in probability theory is used to show convergence of measure of joint probability of actions. In [16], author extended study of [15], further weaken assumptions their. They have shown the existence of optimal strategies for static teams and topology on set of policies are introduced. In [17], authors studied convexity properties of strategy spaces and discussed redundancy of common or private information that is independent of randomness for static team. Though this result is similar to ours, their proof differs from our method. The role of common information in dynamic stochastic game is studied in [18], where asymmetric of common information is considered among players. In [19], the existence of optimal solution to static team problem under private and common information structure is developed using topology of information and space of measures. Early ideas developed in [8]–[11], [13] on role of information are derived for zero sum game under slightly weaker assumptions in [20] and have shown existence of saddle point equilibrium. 
But these paper do not provide qualitative comparison of role of externally provided private and common randomness in static team and team vs team zero sum game. The rest of the paper is organized as follows. Private and common randomness in static team decision problem described in Section II. Role of private and common randomness in static team vs team zero-sum game developed in Section III. Finally, concluding remarks and future direction of research presented in Section IV. II. PRIVATE AND COMMON RANDOMNESS IN STATIC TEAM PROBLEM A. Team decision problem Consider a team decision problem having N decision makers DM1, . . . , DMN in a team and let N = {1, . . . , N}. Let ξ be a random vector taking values in a space Ξ denoting the state of nature or an environment; let its distribution be P(·). Define yi := ηi(ξ) for a measurable function ηi to be the information observed by DMi and let Yi be the space of yi. Let Ui ⊆ Rmi, mi ∈ N denote the set of actions of DMi. The strategy space of DMi is Γi, the space of measurable functions γi mapping Yi to Ui and an action ui is given by ui = γi(yi). Without loss of generality we take Ui ⊆ R for all i, since a DM with a Rmi-valued strategy can be considered as mi separate DMs with R-valued strategies; thus mi = 1 for all i ∈ N . Let u := (u1, . . . , uN ), γ := (γ1, . . . , γN ), u−i := (u1, . . . , ui−1, ui+1, . . . , uN ), γ−i := (γ1, . . . , γi−1, γi+1, . . . , γN ) The cost function is measurable function κ : U × Ξ → R, where U : Qi∈N Ui and let J (γ) = Eξ[κ(u1 = γ1(η1(ξ)), . . . , uN = γN (ηN (ξ)), ξ)]. A team optimal solution of the above problem is defined as γ∗ ∈ Γ := Qi∈N Γi such that J ∗ TO , J (γ∗) = min γ∈Γ J (γ) = min γ∈Γ Eξ[κ(u1 = γ1(η1(ξ)), . . . , uN = γN (ηN (ξ)), ξ)]. (1) We assume throughout that a team optimal solution exists and use ‘min’ instead of ‘inf’. A related concept, called the person by person optimal solution is a γ ∈ Γ such that J ∗ PBP = J (γ) = min γ′ i∈Γi J (γ′ i, γ−i) ∀ i ∈ N . B. Externally provided randomness We now introduce externally provided randomness, beginning with private randomness. Sup- pose DMi chooses ui randomly from Ui and let Q be the joint distribution of all variables involved, namely, ξ, y, u. We say that the DMs have externally provided private randomness, if Y i∈N This specification corresponds to the standard notion of randomized policies in stochastic control Q(u|y) = Q(ui|yi). (2) or behavioral strategies in stochastic games, wherein the action is chosen to be a random function of the information. In general one has Q(ξ, y, u) = Q(u|ξ, y)Q(ξ, y), where Q(ξ, y, u) is the joint distribution of ξ, y, u, Q(u|ξ, y) the conditional distribution of u given ξ, y, and Q(ξ, y) is the marginal of ξ, y (evaluated at ‘ξ = ξ, y = y, u = u’). When the randomness provided to DMs is independent of ξ, we have u|y ∐ ξ, i.e, given y the choice of u is independent of ξ. Furthermore, the joint distribution of (ξ, y) is known; denote this distribution by P (ξ, y). Consequently, any joint distribution of ξ, y, u is given by Q(ξ, y, u) = Q(u|y)P (ξ, y). (3) To describe externally provided common randomness, let w = (w1, . . . , wN ) be a random vector, w ∐ ξ, and assume that wi is externally provided to DMi by a coordinator. With the additional information of wi, the strategies γi of DMi are deterministic yi × wi → ui mappings and Γi is the space of such strategies. 
For a given random vector w with distribution P, the team optimal solution is defined analogously to (1), as follows: min γ∈Γ J (γ) = min γ∈Γ Eξ,w[κ(u1 = γ1(η1(ξ), w1), . . . , uN = γN (ηN (ξ), wN ), ξ)]. (4) Since ξ is independent of w, the expectation with respect to (ξ, w) is well defined once the marginals of ξ, w are defined. C. Randomness independent of ξ In this section, we study the case of externally provided randomness that is independent of the state of nature ξ. Our main result is that in a team problem, such randomness provides no benefit to the team. One may interpret this to mean that a team a gains nothing by hiring a coordinator whose sole role is that of mixing the actions of the team members without the use of any knowledge of the underlying state of nature or of the observations made by team members. Let P(· · · ) be the set of joint distributions of on the space ‘· · · ’. Let Q be the set of joint distributions of random variables ξ, y, u that admit the decomposition above. i.e., Q = {Q ∈ P(Ξ × Y × U) | Q satisfies (3), (2)}. Consider the following problem: J ∗ TOP , min Q∈Q EQ[κ(u, ξ)]. (5) From the decomposition of Q provided by (3)-(2), it follows that (5) is a multilinear program with separable constraints. Classical results show that (5) admits a solution that is an extreme point, namely, one where ui is a deterministic function of yi. Consequently, J ∗ TOP = J ∗ TO and we have the following result. Proposition 2.1: In a static stochastic team problem, externally provided private randomness that is independent of the state of nature cannot improve the team’s cost. Proof is along the lines of proof of Proposition 2.2. We skip the proof details. Consider the following cost: J ∗ TOC = min γ∈Γ,P Eξ,w[κ(γ1(η1(ξ), w1), . . . , γN (ηN (ξ), wN ), ξ)]. This is the lowest cost that can be attained via common randomness. The common randomness is independent of environment ξ. Proposition 2.2: J ∗ TO = J ∗ TOP = J ∗ TOC . (6) Proof: It is enough to show that J ∗ TO = J ∗ TOC . Now consider J ∗ TOC = min γ∈Γ,P Eξ,w[κ(γ1(η1(ξ), w1), . . . , γN (ηN (ξ), wN ), ξ)]. Assuming {y1, · · · , yN } are well defined. Rewriting above expression, we obtain J ∗ TOC = min γ∈Γ,P EwEξ/w[κ(γ1(η1(ξ), w1), . . . , γN (ηN (ξ), wN ), ξ)]. Since common randomness w independent of ξ, we have J ∗ TOC = min γ∈Γ,P EwEξ[κ(γ1(η1(ξ), w1), . . . , γN (ηN (ξ), wN), ξ)]. Now, we split the minimization minγ1,γ2∈Γ,P(w) = minP(w) minγ1,γ2∈Γ, we can also interchange minγ1,γ2∈Γ and expectation Ew since DMs can cooperate and communicate in team problem. J ∗ TOC = min P Ew min γ∈Γ Eξ[κ(γ1(η1(ξ), w1), . . . , γN (ηN (ξ), wN ), ξ)]. Next we have J ∗ TOC = min P Ew[J ∗ TO(w)]. It is linear program, thus it has optimal at extreme points, that is, w∗ = arg minw J ∗ TO(w). Then TOC = J ∗ J ∗ TO(w∗). Now consider that JTOC is a convex function of decision rule γ. If the decision rule is linear in its information, that is, γi(ηi(ξ), wi) = αi1ηi(ξ) + αi2wi, then clearly cost function will convex in αi1 and αi2 for all i = 1, · · · , N. Without loss of generality assume that E[wi] = 0 for all i = 1, · · · , N. Since w and ξ are independent the cost function will be separable and minimization w.r.t. variable αi1 and αi2 for all i = 1, · · · , N. It implies that cost will be minimum iff αi2 = 0 for all i = 1, · · · , N. Thus no weightage given to additional information under this decision rule. 
For LQG team problem in Appendix A, it is illustrated that if private and common randomness independent of the environment ξ, it does not improve the expected cost function. D. Randomness dependent on ξ Consider a scenario where consultant provides an extra randomness about an environment to decision makers. That means these extra randomness is correlated with an environment ξ. Let ω = (ω1, . . . , ωN ) be a random vector represents an extra randomness provided to decision makers by consultant. Further assume that ω is function of ξ, i.e. ω = f (ξ) = (f1(ξ), . . . , fN (ξ)), here f, fi be the measurable functions. The strategies of DMi are γi : yi × ωi → ui, γi ∈ Γi space of strategies and ui ∈ Ui space of decision variables. The team optimal cost is defined as follows. min γ∈Γ J (γ) = min γ∈Γ Eξ,ω[κ(u1 = γ1(η1(ξ), ω1), . . . , γN (ηN (ξ), ωN ))]. Note that ω is function of ξ. The optimal cost function is J ∗ TOER = min γ∈Γ Eξ[κ(u1 = γ1(η1(ξ), f1(ξ)), . . . , γN (ηN (ξ), fN (ξ)))]. (7) (8) In distributed team problem with no extra randomness, decision maker have only partial observa- tion about ξ. Thus an observations about ξ is distributed among decision makers and an optimal team cost J ∗ TO found in section II-A. When a consultant provides an extra randomness about an environment ξ to the decision makers. Essentially, there is an increase in observation about ξ available at decision makers. Intuitively, we expect that optimal cost under extra randomness in distributed stochastic team will improve optimal cost functional. Thus we have following result. Proposition 2.3: In distributed static stochastic team problem, J ∗ TO ≥ J ∗ TOER. (9) Proof: We develop the proof using the ideas from [8]. Let B1 = {η1(ξ), η2(ξ), · · · , ηN (ξ)} be the information available at team and B2 = {(η1(ξ), f1(ξ)), (η2(ξ), f1(ξ)), · · · , (ηN (ξ), fN (ξ))} be the another information available at team with extra common randomness. Thus B1 ⊂ B2, i.e., B2 is more informative than B1. Since DMs can cooperate and communicate in team problem. The minimization problem minγ∈Γ Eξ [κ (u1, u2, · · · , uN ) | B] . As fis are measurable func- tions, ηis are measurable functions, so γi are measurable and Γ is closed bounded convex set. The cost function is also convex and measurable, thus from [8, Theorem 2] we can have min γ∈Γ Eξ [κ (u1, u2, · · · , uN ) | B2] ≤ min γ∈Γ Eξ [κ (u1, u2, · · · , uN ) | B1] This implies the desired result. Consider LQG stochastic team problem which has decision maker DM1 and DM2 in a team, and we have following different variation of LQG team problem based on types of observation available at decision makers. Problem 1: Let decision variable u1 = Ay, where A is diagonal matrix, diag(A) = [α11, . . . , αN 1], y is observation available at decision makers, y = [y1, y2]T = [µ1, µ2]T , and Σ is covariance matrix of random vector y. The expected team cost J ∗ TOLQG,1 = min A Eξ[yT AT BAy + 2yT AT Sξ] = min A Tr[AT BAΣ + 2AT SΣ]. Let A∗ be the matrix such that optimal cost function of team is TOLQG,1 = Tr[A∗T BA∗Σ + 2A∗T SΣ]. J ∗ Problem 2: Let decision variable u2 = ˜A˜y, where ˜A is diagonal matrix, diag( ˜A) = [α11, . . . , αN 1], ˜y is observation available at decision makers, ˜y = [y2, y1]T = [µ2, µ1]T . Note that ˜y =   y2 y1   =   0 1 1 0     y1 y2    . Thus ˜y = ˜Iy and ξ = y = ˜I ˜y. Let ˜I =   0 1 1 0  The expected team cost is J ∗ TOLQG,2 = min ˜A Eξ[˜yT ˜AT B ˜A˜y + 2˜yT ˜AT Sξ] = min ˜A Tr[ ˜AT B ˜AΣ + 2 ˜AT ˜S ˜Σ]. 
Here ˜S := S ˜I and ˜Σ denote the covariance matrix of random vector ˜y . Let A∗∗ be the matrix such that TOLQG,2 = Tr[A∗∗T BA∗∗ ˜Σ + 2A∗∗T ˜S ˜Σ]. J ∗ Problem 3: Let decision variable u3 = Cω, where C is diagonal matrix, ω = [ω1, ω2]T , ω1 = βy1 + (1 − β)y2, = ω2 = (1 − β)y1 + βy2, β ∈ (0, 1). Hence ω = βy + (1 − β)˜y. We assume that decision maker has available common randomness provided by a consultant. These common randomness is convex combination of observation available at decision maker that is y1 and y2. For example β = 1 2 , a consultant provides an average of observations. The optimal cost functional is J ∗ TOLQG,3 = min u3∈U Proposition 2.4: 1) Eξ[uT 3 Bu3 + 2uT 3 Sξ]. TOLQG,3 ≤ βJ ∗ J ∗ TOLQG,1 + (1 − β)J ∗ TOLQG,2. 2) If ˜Σ = Σ and ˜S = S, then Furthermore, A∗ = A∗∗. Also, TOLQG,1 = J ∗ J ∗ TOLQG,2. TOLQG,3 ≤ J ∗ J ∗ TOLQG,1. Proof: 1) We have: J ∗ TOLQG,3 = min u3∈U Now, 3 Bu3 = ωT C T BCω uT Eξ[uT 3 Bu3 + 2uT 3 Sξ] = (βy + (1 − β)˜y)T C T BC(βy + (1 − β)˜y) ≤ βyT C T BCy + (1 − β)˜yT C T BC ˜y (10) Since B is symmetric positive definite matrix, (βy+(1−β)˜y)T C T BC(βy+(1−β)˜y) is quadratic convex function. Thus inequality in (10) follows from convexity property of function. J ∗ TOLQG,3 ≤ min C = min C Eξ[βyT C T BCy + (1 − β)˜yT C T BC ˜y + 2βyT C T Sξ + 2(1 − β)˜yT C T Sξ] Tr[βC T BCΣ + 2βC T SΣ + (1 − β)C T BC ˜Σ + 2(1 − β)C T ˜S ˜Σ] Tr[C T BCΣ + 2C T SΣ] + (1 − β) min C Tr[C T BC ˜Σ + 2C T ˜S ˜Σ] = β min C = βJ ∗ TOLQG,1 + (1 − β)J ∗ TOLQG,2. 2) Let ˜Σ = Σ and ˜S = S, we have: J ∗ TOLQG,1 = min A Tr[AT BAΣ + 2AT SΣ]. Clearly, J ∗ TOLQG,1 = J ∗ J ∗ TOLQG,2 = min ˜A TOLQG,2. Consequntly, A∗ = A∗∗. Hence, Tr[ ˜AT B ˜AΣ + 2 ˜AT ˜S ˜Σ]. TOLQG,3 ≤ J ∗ J ∗ TOLQG,1. So far, we studied role of common randomness (information) in a team problem. In next section, we describe the role of common randomness in two team zero-sum game. III. PRIVATE AND COMMON RANDOMNESS IN STATIC TEAM VS TEAM ZERO-SUM GAME We study role of private and common randomness in static two-team zero-sum game. We compare the static LQG team with zero-sum LQG team game under private and common randomness. Then We demonstrate the two team zero-sum discrete game. Now consider the case where there are N + M DMs. Let M = {N + 1, . . . , M}. DMi, i ∈ N comprise of a single team, say Team 1, and DMj, j ∈ M comprise of Team 2. Team 1 and Team 2 play a zero-sum game. Let u = (u1, . . . , uN ), γ = (γ1, . . . , γN ) denote the actions of players of Team 1 and v = (vN +1, . . . , vM ), δ = (δN +1, . . . , δM ) denote the actions of players in Team 2. Suppose the function the teams want to optimize is min ui=γi(yi),i∈N max vj =δj (yj ),j∈M E[κ(u, v, ξ)] Theorem 3.1: If the zero-sum team game admits a saddle point, randomness independent of ξ does not benefit either team. Proof: We have: min ui=γi(yi),i∈N max vj =δj (yj ),j∈M E[κ(u, v, ξ)] = max vj=δj (yj ),j∈M min ui=γi(yi),i∈N E[κ(u, v, ξ)] min ui=γi(yi),i∈N max vj =δj(yj ),j∈M E[κ(u, v, ξ)] ≥ min ui=γi(yi,w),i∈N max vj =δj(yj ,z),j∈M E[κ(u, v, ξ)] ≥ max vj =δj(yj ,z),j∈M min ui=γi(yi,w),i∈N E[κ(u, v, ξ)] (11) ≥ max vj =δj(yj ),j∈M min ui=γi(yi),i∈N E[κ(u, v, ξ)]. Eq (11) follows from: max vj =δj(yj ,z),j∈M E[κ(u, v, ξ)] ≥ min ui=γi(yi,w),i∈N E[κ(u, v, ξ)] . Consequently, min ui=γi(yi,w),i∈N max vj =δj (yj ,z),j∈M E[κ(u, v, ξ)] ≥ max vj =δj (yj ,z),j∈M min ui=γi(yi,w),i∈N E[κ(u, v, ξ)] . Theorem 3.2: If zero-sum game admits a saddle point, common randomness dependent of ξ is provided to one of team, then that team benefits. 
Suppose the consultant provides common randomness z which is dependent of ξ to decision makers of a team say, Team 2. Then we want to optimize JTOZS,CR = min ui=γi(yi),i∈N max vj =δj (yj,z),j∈M E[κ(u, v, ξ)]. Further, JTOZS,CR = JTOZS , where JTOZS = min ui=γi(yi),i∈N max vj =δj (yj),j∈M E[κ(u, v, ξ)]. Proof: We know from a team decision problem with common randomness dependent of ξ, then max vj =δj(yj ,z),j∈M E[κ(u, v, ξ)] ≥ max vj =δj (yj),j∈M E[κ(u, v, ξ)] min ui=γi(yi),i∈N max vj =δj(yj ,z),j∈M E[κ(u, v, ξ)] ≥ min ui=γi(yi),i∈N max vj =δj(yj ),j∈M E[κ(u, v, ξ)] Since we assume saddle point solution of zero-sum game, min ui=γi(yi),i∈N max vj =δj (yj,z),j∈M E[κ(u, v, ξ)] = max vj=δj (yj ,z),j∈M min ui=γi(yi),i∈N E[κ(u, v, ξ)] We also have max vj =δj (yj,z),j∈M min ui=γi(yi),i∈N E[κ(u, v, ξ)] ≥ max vj =δj (yj),j∈M min ui=γi(yi),i∈N E[κ(u, v, ξ)] If two-team zero sum game without common randomness admits a saddle point, then max vj =δj(yj ),j∈M min ui=γi(yi),i∈N E[κ(u, v, ξ)] = min ui=γi(yi),i∈N max vj =δj(yj ),j∈M E[κ(u, v, ξ)]. Hence result JTOZS,CR = JTOZS follows. Remark: • If the common or private information is uncorrelated with an environment or uncertainty of world, no one can gain anything from this information in team vs team zero-sum game. This is also illustrated numerically for LQG zero sum team vs team game is illustrated in Appendix C2. • In next subsection, we describe that a team having private information correlated with environment benefits. This implies that the team with more information manage to decrease the cost and even this is true in LQG teams decision problem. This is first observed by [12] and later this is extended to LQG teams problem in [6]. • We present results in our stochastic team vs team zero sum game. We illustrate role of common randomness in team vs team LQG zero sum game by numerical examples in Appendix C3. (cid:3) A. Role of private randomness dependent on ξ Let yi = ηi(ξ) be the information available at player i, and 2. Note that a team 1 is minimizing using control u and team 2 is maximizing with control v. y1 = (y1, y2, · · · , yN ) be informa- e y2 = (yN +1, yN +2, · · · , yN +M ) be the information available at team e tion available at team 1 and Define the cost function y2, ξ)] e From saddle point condition at the information structure ( J(u, v) = E[κ(u, v, y1, y1, e y2), we have e J(u∗, v) ≤ J(u∗, v∗) ≤ J(u, v∗). e The optimal decision pair is (u∗, v∗) at the information structure y1, and y2. Similarly, one can define saddle point condition for null information structure and has only prior knowledge about e e ξ, information structure is (y1, y2) and optimal decision pair is (u0, v0). The value of information for team 1 and team 2 is defined as follows. = J(u∗, v∗) − J(u0, v0) V1 V2 y1, (cid:0) e y1, (cid:0) e y2 e y2 e (cid:1) (cid:1) = −V1 (cid:0) y1, y2 e e (cid:1) Suppose the information at a team, say team 2 is fixed, i.e., η′ M. The opponent gets more information, say team 1, i.e., η′ i(ξ) = ηi(ξ) for i = N +1, · · · , N + i(ξ) ⊆ ηi(ξ) for i = 1, 2, · · · , N. Thus the decision set for team 1 is Aη′ ⊆ Aη and that for team 2 is Cη′ = Cη. We have the following result. Lemma 3.1: If the information of team 1 is increasing, i.e., η′ i(ξ) ⊆ ηi(ξ) for i = 1, 2, · · · , N, i(ξ) = ηi(ξ) for i = N + 1, · · · , N + M, then the and the information of team 2 is fixed, i.e., η′ value of information satisfy the following inequality Here yi = ηi(ξ), y1 = (y1, · · · , yN ), e N +1, · · · , y′ N +M ). y2 = (y′ V1 y2 y1, (cid:0) ≤ V1 y1, y2 . 
(cid:1) (cid:1) e e (cid:0) y2 = (yN +1, · · · , yN +M ), and y′ e b b i = η′ i(ξ), y1 = (y′ 1, · · · , y′ N ), b b The proof is analogous to [6, Lemma 3.3]. For clarity purpose we provide details is as follows. The saddle point condition at information structure η(ξ) implies that J(u∗, v) ≤ J(u∗, v∗) ≤ J(u, v∗) for u ∈ Aη, v ∈ Cη. Another saddle point condition at information structure η′(ξ) is for v ∈ Cη′. u ∈ Aη′ and b b Since Cη = Cη′ we can have Because Aη′ ⊆ Aη, and J( u∗, v) ≤ J( u∗, b b b v∗) ≤ J( b u, b v∗) b v∗ ∈ Cη and then it implies that b J(u∗, v∗) = J(u∗, v∗). b u∗ ∈ Aη′ implies b u, J( u∗, v∗) ≥ J( b b b u∗ ∈ Aη. Further, b v∗) ≥ J(u∗, b v∗) = J(u∗, v∗) b Thus we get J( u∗, v∗) ≥ J(u∗, v∗). As we note that J(u0, v0) does not change. After sub- b stracting J(u0, v0), we have desired inequality b V1 y1, (cid:0) e y2 e (cid:1) ≤ V1 (cid:0) y1, y2 b b . (cid:1) B. Discrete team vs team zero-sum game In this section, we investigate discrete team vs team zero-sum game and the role of extra randomness in the team and its decision makers. Claim 3.3: In discrete team vs team zero-sum game, 1) it may not admit pure-strategy saddle point solution, (12) (13) (14) (15) 2) if a coordinator provides the private randomness independent of an environment to decision makers of team then it benefit both team and improves the team cost. But it may not achieve Nash equilibrium, 3) if a consultant provides the common randomness to decision makers of team, then it lead to improve in team cost. But it may not have Nash equilibrium. Proofs of these are difficult to obtain but we provide examples in appendix B to support our claim. IV. DISCUSSION AND CONCLUSIONS The value of information is classic problem in decision theory. As information increases, we anticipated that the optimal cost decreases. This is first illustrated for statistical problems in [8]. In stochastic team problem and stochastic team vs team zero sum games, the value of private information to decision makers is not explicitly presented in earlier literature. We analyzed a stochastic team decision problem when decision makers are provided with external private randomness which is correlated or independent of environment. The private randomness independent of environment does not decrease the cost function. But this randomness dependent on environment provided to DMs in a team decreases the cost function of team compare to no randomness. In stochastic LQG team decision problem under special information structure, we have shown that the correlated randomness decreases the cost function. We next studied stochastic team vs team zero sum game, and showed that the randomness independent of environment does not benefit either time if a game admits a saddle point condition. In LQG team vs team zero sum game, we analyze the role of common randomness which is correlated with environment for one of team, then the optimal value function decreases with information. We further extended this finding to discrete team vs team zeros sum game when there is no saddle point condition and observed that common or private randomness independent of environment benefits both team. Even common randomness dependent on environment benefit a team and improves cost. This may not lead to saddle point condition. It opens future research direction on problem of role of private or common randomness in stochastic teams with non zero sum games and sequential stochastic dynamic teams. 
Another research directions is on correlated equilibrium behaviors and common knowledge in sequential stochastic team vs team games. V. ACKNOWLEDGMENT Most part of this work was carried out at the Bharti Centre for Communications at IIT Bombay and EE Dept. IIT Bombay, where author was PhD scholar. Part of this work was done at EE Dept. IIT Madras during Postdoctoral Fellowship. Author is very grateful to Prof. Ankur Kulkarni, SC Dept. IIT Bombay for guidance and extensive discussion on problem of Team decision theory and pointing out references. Author is thankful to Prof. D. Manjunath, EE Dept. IIT Bombay for initial support on the work. Author is also thankful to ECE Dept. IIIT Allahabad for financial support. REFERENCES [1] J. Marschak, “Elements for a theory of teams,” Management Science, vol. 1, pp. 127–337, 1955. [2] R. Radner, “Team decision problems,” Ann. Math. Statist., vol. 33, no. 3, pp. 857–881, 1962. [3] Y. C. Ho, M. P. Kastner, and E. Wong, “Teams, signaling, and information theory,” Transaction on Automatic Control, vol. 23, no. 2, pp. 305–311, 2010. [4] Y. C. Ho, “Team decision theory and information structure,” Proceedings of IEEE, vol. 68, no. 6, pp. 644–654, June 1980. [5] F. C. Schoute, “Symmetric team problems and multi-access wire communication,” Automatica, vol. 14, no. 3, pp. 255–269, May 1978. [6] Y. C. Ho and F. K. Sun, “Value of information in two-team zero-sum problem,” Journal of Optimization Theory and Applicatiion, vol. 14, no. 5, 1974. [7] V. Anantharam and V. Borkar, “Common randomness and distributed control: A counterexample,” System and Control Letters, vol. 56, no. 7, pp. 568–572, July 2007. [8] D. Blackwell, “Comparisons of experiments,” Berkeley Symposium on Mathematical Statistics and Probability, vol. 2, pp. 93–102, 1951. [9] D. Blackwell, “Equivalent comparisons of experiments,” The Annals of Mathematical Statistics, vol. 24, no. 2, pp. 265–272, 1953. [10] J. Marschak and K. Miyasawa, “Economic comparability of informative systems,” International Economic Review, vol. 9, no. 2, pp. 137–174, June 1968. [11] Y. C. Ho and I. Blau, “A simple example on informativeness and performance,” Journal of Optimization Theory and Applicatiions, vol. 11, no. 4, 1973. [12] H. S. Witsenhausen, “On the relation between the values of a game and its information structure,” Information and Control, vol. 19, no. 3, pp. 204–215, Oct. 1971. [13] T. Basar and Y. C. Ho, “Informational properties of Nash solutions of two stochastic non-zero sum games,” Journal of Economic Theory, vol. 7, no. 4, pp. 370–387, April 1974. [14] S. Yuksel and T. Basar, Stochastic Networked Control Systems: Stabilization and Optimization under Information Constraints, Birkhauser, 2013. [15] A. Gupta, S. Yuksel, T. Bas¸ar, and C. Langbort, “On the existence of optimal policies for a class of static and sequential dynamic teams,” SIAM Journal Control and Optimization, vol. 53, no. 3, pp. 1681–1712, 2015. [16] N. Saldi, “A topology for team policies and existence of optimal team policies in stochastic team theory,” IEEE Transactions on Automatic Control, vol. 65, no. 1, pp. 310–317, 2019. [17] S. Yuksel and N. Saldi, “Convex analysis in decentralized stochastic control, strategic measures, and optimal solutions,” SIAM Journal in Control and Optimization, vol. 55, no. 1, pp. 1–27, 2017. [18] A. Gupta, A. Nayyar, C. Langbort, and T. Bas¸ar, “Common information based markov perfect equilibria for linear-gaussian games with asymmetric information,” SIAM Journal in Control and Optimization, vol. 52, no. 
5, pp. 3228–3260, 2014. [19] A. Gupta, “Existence of team-optimal solution in static teams with common information: A topology of information approach,” SIAM Journal in Control and Optimization, vol. 58, no. 2, pp. 998–1021, 2020. [20] I. Hogeboom-Burr and S. Yuksel, “Comparison of information structure for zero-sum games and a partial converse to Blackwell ordering in standard Borel spaces,” Arxiv, pp. 1–23, 2020. APPENDIX A. LQG Team Problem Now we examine an example of a LQG team problem. Consider a LQG team problem of having N decision maker. Let an environment ξ := [µ1, . . . , µN ]T be random vector; it is Gaussian distributed zero mean and covariance Σ. Let yi = ηi(ξ) be the information observed by DMi, y = [y1, . . . , yN ]T information vector observed by decision makers. In a static LQG team problem optimal action is linear in information observed by decision maker. Thus action of DMi is ui = γi(yi) = αi1yi. Then u = (u1, . . . , uN )T = Ay, where A is diagonal matrix of dimensional N × N, diag(A) = [α11, . . . , αN 1]. Standard LQG problem assumes cost function to be quadratic in nature. The cost function is κ(u, ξ) := uT Bu + 2uT Sξ, here B is symmetric positive matrix. The team optimal solution of LQG team problem is γ ∈ Γ such that J ∗ TOLQG , min γ∈Γ J (γ) = min u∈U Eξ[κ(u, ξ)] = min u∈U Eξ[uT Bu + 2uT Sξ]. (16) Replacing u = Ay, we obtain J ∗ TOLQG = min A Eξ[yT AT BAy + 2yT AT Sξ]. Further this can expressed as deterministic optimization problem as follows. J ∗ TOLQG = min A Tr[AT BAΣ + 2AT SΣ], Note that Tr denote trace of matrix. 1) Private randomness independent of ξ : We will show that in LQG team problem the private randomness provided by a coordinator do not benefit the team optimal cost functional. Consider ω = [ω1, . . . , ωN ]T is private randomness available to decision makers, it is Gaussian distributed with zero mean and covariance matrix Σ1 is diagonal; ωi is private randomness available at DMi. We suppose that ωi is independent of ωj for i 6= j and it is also independent of y. (E[ωiωj] = 0 for i 6= j and E[ωiyk] = 0 for i 6= k, 1 ≤ i, j, k ≤ N.) The action ui = γi(yi, ωi) = αi1yi + αi2ωi. Let u = Ay + Cω, where A and C are diagonal matrix of dimension N × N, diag(A) = [α11, . . . , αN 1] and diag(C) = [α12, . . . , αN 2]. The optimal expected cost functional of LQG team problem with private randomness is J ∗ TOP,LQG , min Q∈Q EQ[κ(u, ξ)] = min A Tr[AT BAΣ + 2AT SΣ] + min C Tr[C T BCΣ1]. (17) From equation (17), minC Tr[C T BCΣ1] = 0 if and only if C is zero matrix. Hence J ∗ J ∗ TOP,LQG. 2) Common randomness independent of ξ : We study a LQG team problem with common TOLQG = randomness has structure similar to that of LQG team problem with private randomness. We demonstrate that common randomness provided to decision makers by the consultant is inde- pendent of ξ, then it do not improve the expected cost functional. Consider ω = [ω1, . . . , ωN ]T is common randomness available to decision makers, it is Gaussian distributed with zero mean and covariance matrix Σ2; ωi is the common randomness at DMi. We suppose that ωi is perfect correlation with ωj for i 6= j and it is also independent of y. (E[ωiωj] 6= 0 for i 6= j and E[ωiyk] = 0 for i 6= k, 1 ≤ i, j, k ≤ N.) The action ui = γi(yi, ωi) = αi1yi + αi2ωi. Let u = Ay + Cω. The optimal expected cost function is J ∗ TOC,LQG = min A Tr[AT BAΣ + 2AT SΣ] + min C Tr[C T BCΣ2] (18) Note that in LQG team problem, B is symmetric positive definite matrix. 
From (18), expression minC Tr[C T BCΣ2] attains minimum value = 0 if C is zero matrix. Thus we have following relation, J ∗ TOLQG = J ∗ TOC,LQG. 3) Common randomness dependent on ξ: Next, we demonstrate the result in (9) via an example of LQG team problem. Further we show numerically for two decision maker LQG team problem that there is strict inequality between team optimal cost with and without extra randomness, that is J ∗ TOLQG > J ∗ TOER,LQG. Consider a LQG team problem consists of an environment ξ = [µ1, . . . , µN ]T as random vector with mean zero and covariance matrix Σ. The information observed by DMi is yi = ηi(ξ) = µi, y = [y1, . . . , yN ]T . Let ω = [ω1, . . . , ωN ]T be the extra randomness provided by a consultant to decision makers. Furthermore, assume that ω = f (ξ) and f is linear function in ξ. Thus N j φijµj, ω = Φξ = Φy, Φ is matrix of dimension N × N, with entries in φij ≥ 0 and ωi = N j=1 φij = 1. The cost function is κ(u, ξ) := uT Bu + 2uT Sξ, the optimal expected cost under P P extra randomness is J ∗ TOER = min u∈U Eξ[uT Bu + 2uT Sξ]. Since it is static LQG team problem, optimal decision rule is linear in observation variable. We assume that ui = αi1yi + αi2ωi, u = Ay + Cω, where A and C are diagonal matrices, diag(A) = [α11, . . . , αN 1], diag(C) = [α12, . . . , αN 2]. The optimal expected cost is J ∗ TOER,LQG = min A,C Eξ[yT AT BAy +2yT AT Sξ +2yT ΦT C T BAy +yT ΦT C T BCΦy +2yT ΦT C T Sξ]. We have ξ ∼ N(0, Σ), taking expectation and rewriting above expression, we obtain deterministic optimization problem as follows. J ∗ TOER,LQG = min A,C Tr[AT BAT Σ + 2AT SΣ + 2ΦT C T BAΣ + ΦT C T BCΦΣ + 2ΦT C T SΣ]. (19) Intuitively, in LQG team problem with no extra randomness can described as incomplete information static LQG team problem. Since extra randomness is linear function of an environ- ment and under assumption of nonzero linear coefficient (φij 6= 0 for all 1 ≤ i, j ≤ N), LQG team problem with extra randomness can be describe as complete information static LQG team problem. Thus it is natural to expect that J ∗ TO. But showing this result analytically TOER difficult due to in-separability of optimization problem (19) into optimization problem with < J ∗ respect to A and C. To support our claim of J ∗ TO, we numerically evaluate the optimal cost functional with and without extra randomness which is dependent on ξ for LQG two team problem and TOER < J ∗ show that our claim is indeed true. Further, we show impact of correlation coefficient {φij, 1 ≤ i, j ≤ 2} on optimal cost functional. 4) Numerical example–LQG team problem: Let ξ = [µ1, µ2]T denote the state of nature or an environment having probability distribution N(0, Σ). Let yi = ηi(ξ) = µi be the information observed at DMi for 1 ≤ i ≤ 2. Let ω = [ω1, ω2]T be an extra randomness provided by a consultant to decision makers. Consider ωi = φi1y1 + φi2y2, ui = αi1yi + αi2ωi for 1 ≤ i ≤ 2. α11 0 α12 0 Thus we have A =    Tr[AT BAΣ + 2AT SΣ + 2ΦT C T BAΣ + ΦT C T BCΦΣ + 2ΦT C T SΣ]. . Team optimal cost from (19) is  , C =    α21 α22 J ∗  0 0 TOER = min A,C 2 −1 1 0 In this example, we suppose B =   We define δ1 = E[y1w1] = φ11σ2 µ1µ2 +φ12σ2 E[y2w1] = φ11σ2  , S =  σ2 −1   µ1µ2, δ2 = E[y1w2] = φ21σ2 µ1 + φ12σ2 µ1, δ5 = E[w2  , Σ =    µ1µ2 +φ22σ2 µ1,µ2 σ2 µ2 µ1,µ2 µ1 + φ22σ2 11σ2 1] = φ2 µ1, δ4 = E[y2w2] = φ21σ2 0 1 1 σ2 µ1 σ2  . 
 µ1µ2, δ3 = 12σ2 µ2 + µ1 +φ2 φ11φ12σ2 φ12φ21)σ2 µ1µ2, δ6 = E[w2 µ1µ2 + φ22φ12σ2 21σ2 µ1 +φ2 2] = φ2 µ2, δ8 = E[w1ξ1] = φ11σ2 22σ2 µ2 +φ21φ22σ2 µ1µ2, δ7 = E[w1w2] = φ11φ21σ2 µ1µ2, δ9 = E[w2ξ2] = φ21σ2 µ1 +(φ22φ11+ µ1µ2 + φ22σ2 µ2. µ1 + φ12σ2 Now rewriting team optimal cost function we obtain, J ∗ TOER = min α11,α12,α21,α22 2α2 11σ2 y1 − 2α11α21σ2 y1,y2 + α2 21σ2 y2 + 2α11α12δ1 − α21α12δ2 − α11α22δ3 + α22α21δ4 + 2α2 12δ5 − 2α12α22δ7 + α2 22δ6 + 2(α11σ2 y1 + α21σ2 y2) + 2(α12δ8 + α22δ9). Differentiating above expression with respect to α11, α12, α21, α22 and equating to 0. We have 4σ2 y1 −2σ2 y1y2 2δ1 −δ3        −2σ2 y1y2 2δ1 −δ3 2σ2 y1 −δ2 δ4 −δ2 δ4 4δ5 −2δ7 −2δ7 2δ6               α11 α21 α12 α22        = −2σ2 y1 −2σ2 y2 −2δ8 −2δ9        .        Notice that computing optimal α11, α12, α21, α22 via solving linear systems of equations and finding optimal expected cost is computationally tedious. Without loss of generality, we suppose µ1µ2 = 1 µ2 = 1 and σ2 µ1 = σ2 σ2 team cost under optimal α∗ 4. Furthermore, we fix φ11, φ12, φ21, φ22 and evaluate the minimum 22. Note that φ11, φ12, φ21, φ22 determines the correlation of extra randomness with observations available at decision makers. From numerical computation 11, α∗ 12, α∗ 21, α∗ in table (I), we make following concluding remarks. 1) In distributed static LQG team problem without extra randomness, the team optimal cost is highest. 2) In distributed static LQG team problem, only one decision maker having extra randomness which is correlated with ξ do not lead to improve in the team optimal cost. Instead it lead to increase in the team optimal cost. 3) In distributed static LQG team problem,all decision maker having extra randomness which is correlated with ξ lead to improvement in the team optimal cost. Thus we have strict inequality between J ∗ TO and J ∗ TOER, that means J ∗ TOER < J ∗ TO. 4) if an extra randomness provided by a consultant is an average of the observations µ1 and µ2, then team optimal cost is best than any other convex combination of the observations µ1 and µ2. Hence correlation coefficient φij for 1 ≤ i, j ≤ 2 plays significant role to attain minimal team optimal cost in distributed static LQG team problem with extra randomness dependent on ξ. (φ11, φ12, φ21, φ22) No randomization DM1 have randomness Both DM have randomness Both DM have randomness Both DM have randomness (0, 0, 0, 0) ( 1 4 , 3 2 , 1 ( 1 ( 2 3 , 1 ( 1 3 , 2 4 , 0, 0) 2 , 1 2 , 1 2 ) 4 , 1 3 , 3 4 ) 4 , 3 3 , 1 4 ) 12, α∗ 11, α∗ 21, α∗ (α∗ (−0.6452, −1.1613, 0, 0) 22) (0, −1, −0.3024, 2.7513) minα E[κ(α, ξ)] −1.806 −0.477 (−0.3434, −0.7046, −2.7862, −4.0062) −5.2974 (−0.5122, −1.4833, −2.6067, −3.2171) −4.5211 (−0.7045, −0.7058, −0.6765, −1.522) −3.6923 TABLE I COMPARISON OF EXPECTED COST WITH DIFFERENT RANDOMIZATION PROVIDED TO DM B. Proof of Lemma 3.3 We prove our claim via illustrating an example of two-team discrete game. Consider two-team label them as Team 1 and 2, Team 1 consists of a decision maker and Team 2 comprises two decision makers. Let ξ = [µ1, s1, s2]T denote an environment or the state of nature; it is random vector with discrete distribution p(ξ). Each decision maker observes an environment partially since decision maker are situated distributed manner. Let y1 = η(ξ) denote an observation available at decision maker of Team 1; zj = ζj(ξ) represent an observation available at DMj of Team 2. Decision rule at Team 1 and 2 is and j = 1, 2. 
γ1 : y1 → u1 δj : zj → vj Without loss of generality, we assume that µ1, sj is binary random variable take values {0, 1}; y1 = η(ξ) = µ1, zj = ζj(ξ) = sj, for j = 1, 2. Moreover, we consider u1, vj ∈ {L, R}, j = 1, 2. Binary random variable µ1, s1 and s2 defined as follows. µ1 =    s1 =   1 0 µ1 0  with prob. p1 with prob. 1 − p1. with prob. p with prob. 1 − p. s2 =   1 − µ1 s1 with prob. q with prob. 1 − q.  The joint distribution of (µ1, s1, s2) is P(µ1, s1, s2) and is written as P(µ1 = 0, s1 = 0, s2 = 0) = (1 − p1)(1 − q) P(µ1 = 0, s1 = 0, s2 = 1) = (1 − p1)q P(µ1 = 0, s1 = 1, s2 = 0) = 0 P(µ1 = 0, s1 = 1, s2 = 1) = 0 P(µ1 = 1, s1 = 0, s2 = 0) = p1(1 − p) P(µ1 = 1, s1 = 0, s2 = 1) = 0 P(µ1 = 0, s1 = 1, s2 = 0) = p1pq P(µ1 = 1, s1 = 1, s2 = 1) = p1p(1 − q) There are four possible decision rule available at each decision maker. The decision rule of a decision maker is 1 = γ1 u1 1(y1) = γ1 1(µ1) =   1 = γ2 u2 1(y1) = γ2  1(µ1) =    1 = γ3 u3 1(y1) = γ3 1(µ1) = n L 1 = γ4 u4 1(y1) = γ4 1(µ1) = n R 1 = δ1 v1 1(z1) = δ1 1(s1) =   1 = δ2 v2 1(z1) = δ2  1(s1) =    L R L R if if L R L R if if if if µ1 = 0 µ1 = 1 µ1 = 1 µ1 = 0 µ1 = 1 µ1 = 1 or or µ1 = 0 µ1 = 0 if if if if s1 = 0 s1 = 1 s1 = 1 s1 = 0 LL LR RL RR L R 20 20 0 1 1 0 30 30 TABLE II PAYOFF MATRIX: TEAM VS TEAM ZERO-SUM GAME 1 = δ3 v3 1(z1) = δ3 1(s1) = n L 1 = δ4 v4 1(z1) = δ4 1(s1) = n R 2 = δ1 v1 2(z2) = δ1 2(s2) =   2 = δ2 v2 2(z2) = δ2  2(s2) =   if if L R L R s1 = 1 s1 = 1 or or s1 = 0 s1 = 0 if if if if s2 = 0 s2 = 1 s2 = 1 s2 = 0 2δ3 v3 2(z2) = δ3 2(s2) = n L 2 = δ4 v4 2(z2) = δ4 2(s2) = n R  if s2 = 1 or s2 = 0 if s2 = 1 or s2 = 0 We next formulate team vs team zero-sum game, Team 1 seeks to maximize the expected payoff whereas Team 2 seeks to minimize the expected payoff. We describe payoff matrix in table II . In table II row vector denotes actions of Team 1 and corresponding payoff; column vector denotes actions of Team 2 and corresponding payoff. Since observations available at each decision maker in team is function of state of nature ξ and ξ is random variable, we evaluate the expected payoff for different actions of decision makers and it is E κ (cid:2) (cid:0) 1(µ1), δm γl 1 (s1), δn 2 (s2) = (cid:1)(cid:3) X µ1,s1,s2∈{0,1}3 κ (cid:0) 1(µ1), δm γl 1 (s1)δn 2 (s2) P(µ1, s1, s2). (cid:1) where 1 ≤ l, m, n ≤ 4. 
Enumerating the expected payoff over all possible actions of decision makers, we obtain E 1(µ1), δ1 γ1 κ (cid:2) (cid:0) γ2 1(µ1), δ1 1(s1)δ1 E κ (cid:2) (cid:0) 1(s1)δ1 2(s2) = 20 − 20q + 20p1q + 10p1p − 30p1pq (cid:1)(cid:3) 2(s2) = 40 − 40p1 − 19q + 19p1q − 29p1pq + 30p1p (cid:1)(cid:3) E κ 1(µ1), δ1 γ3 (cid:0) 1(s1)δ1 2(s2) = 20 − 20q + 30p1p − 29p1pq (cid:1)(cid:3) E κ (cid:2) E κ (cid:2) (cid:2) 1(µ1), δ1 γ4 (cid:0) 1(µ1), δ2 γ1 (cid:0) E κ (cid:2) E κ (cid:2) E 1(µ1), δ2 γ2 (cid:0) 1(µ1), δ2 γ3 (cid:0) 1(µ1), δ2 γ4 1(s1)δ1 2(s2) = 20 − 19q + 19p1q + 10p1p − 30p1pq (cid:1)(cid:3) 1(s1)δ1 2(s2) = 1 − p1 + 29q − 29p1q + 19p1pq + p1p (cid:1)(cid:3) 1(s1)δ1 2(s2) = 30 − 31p1q + p1 + 20p1pq (cid:1)(cid:3) 1(s1)δ1 2(s2) = 1 + 29q − 30p1q + 20p1pq (cid:1)(cid:3) 1(s1)δ1 2(s2) = 30q − 30p1q + 19p1pq + p1p κ (cid:2) E (cid:0) κ (cid:2) E κ (cid:2) (cid:0) (cid:0) 1(µ1), δ3 γ1 1(s1)δ1 = 20 − 20q + 19p1pq + p1p 1(µ1), δ3 γ2 1(s1)δ1 2(s2) = 20 − 19q − p1q + 20p1pq (cid:1)(cid:3) 2(s2) (cid:1)(cid:3) (cid:1)(cid:3) 2(s2) 1(s1)δ1 E κ 1(µ1), δ3 γ3 (cid:0) (cid:2) 1(µ1), δ3 γ4 (cid:0) 1(s1)δ1 E κ (cid:2) γ1 1(µ1), δ4 2(s2) = 20 − 19q − p1q + 19p1pq + p1p (cid:1)(cid:3) 1(s1)δ1 2(s2) = 1 − p1 + 29q − 29p1q + 30p1p − 30p1pq = 20 − 20q + 20p1pq (cid:1)(cid:3) 1(µ1), δ4 γ2 (cid:0) 1(s1)δ1 1(µ1), δ4 γ3 1(s1)δ1 2(s2) (cid:1)(cid:3) = 30q − 31p1q + p1 − 29p1pq + 30p1p = 1 + 29q − 29p1q + 29p1p − 29p1pq 1(µ1), δ4 γ4 (cid:0) 1(s1)δ1 = 30q − 30p1q + 30p1p − 30p1pq E κ (cid:2) E (cid:0) κ (cid:2) E κ (cid:2) E (cid:0) κ (cid:2) E = 20q − 20p1q + p1 + 30p1pq = 1 − p1 + 19q − 19p1q + 29p1pq + p1p (cid:1)(cid:3) 2(s2) (cid:1)(cid:3) 2(s2) (cid:1)(cid:3) 2(s2) (cid:1)(cid:3) 1(s1)δ2 2(s2) (cid:1)(cid:3) 1(s1)δ2 2(s2) = 20q − 20p1q + 29p1pq + p1p (cid:1)(cid:3) 1(s1)δ2 2(s2) = 1 + 19q − 19p1q − p1p + 30p1pq (cid:1)(cid:3) 1(s1)δ2 2(s2) = 30 − 29q + 29p1q − 10p1p − 19p1pq (cid:1)(cid:3) 1(s1)δ2 2(s2) = 30 − 10p1 − 30q + 30p1q − 20p1pq (cid:1)(cid:3) = 30 − 29q + 29p1q − 10p1p − 20p1pq = 30 − 30q + 30p1q − 10p1p − 19p1pq (cid:1)(cid:3) (cid:1)(cid:3) 1(µ1), δ2 γ2 (cid:0) γ3 1(µ1), δ2 (cid:0) 1(s1)δ2 2(s2) 1(µ1), δ2 γ4 1(s1)δ2 2(s2) (cid:0) 1(µ1), δ3 γ1 (cid:0) 1(s1)δ2 2(s2) = 20q − 20p1q + p1 + 19p1p − 19p1pq (cid:1)(cid:3) E κ (cid:2) E κ (cid:0) (cid:2) E E E κ (cid:2) κ (cid:2) κ (cid:2) E κ (cid:2) E κ 1(µ1), δ1 γ1 (cid:2) (cid:0) 1(µ1), δ1 γ2 (cid:0) E 1(µ1), δ1 γ3 1(s1)δ2 κ (cid:2) (cid:0) γ4 1(µ1), δ1 (cid:0) 1(µ1), δ2 γ1 κ (cid:2) = 1 − p1 + 19q − 19p1q + 20p1p − 20p1pq E κ (cid:2) (cid:0) 1(µ1), δ3 γ2 1(s1)δ2 2(s2) (cid:1)(cid:3) 1(s1)δ2 1(µ1), δ3 γ3 E κ (cid:0) (cid:2) 1(µ1), δ3 γ4 (cid:0) 1(µ1), δ4 γ1 (cid:0) 1(µ1), δ4 γ2 (cid:0) 1(µ1), δ4 γ3 (cid:0) 1(µ1), δ4 γ4 (cid:0) E κ (cid:2) E E E E κ (cid:2) κ (cid:2) κ (cid:2) κ (cid:2) 2(s2) = 20q − 20p1q + 20p1pq (cid:1)(cid:3) 1(s1)δ2 2(s2) = 1 + 19q − 19p1q + 19p1p − 19p1pq (cid:1)(cid:3) 1(s1)δ2 2(s2) 1(s1)δ2 2(s2) 1(s1)δ2 2(s2) = 30 − 29q + 29p1q − 29p1p + 29p1pq = 30 − 30q + 30p1q − 29p1p + 29p1pq = 30 − 29q + 29p1q − 29p1p + 29p1pq (cid:1)(cid:3) (cid:1)(cid:3) (cid:1)(cid:3) = 30 − 30q + 30p1q − 30p1p + 30p1pq E E E E 1(s1)δ2 2(s2) (cid:2) (cid:2) (cid:2) (cid:2) κ κ 1(s1)δ3 (cid:1)(cid:3) 1(s1)δ3 1(µ1), δ1 γ1 (cid:0) 1(µ1), δ1 γ2 (cid:0) 1(µ1), δ1 γ3 (cid:0) γ4 1(µ1), δ1 (cid:0) 1(µ1), δ2 γ1 (cid:0) 1(s1)δ3 1(s1)δ3 1(s1)δ3 κ κ 2(s2) 2(s2) 2(s2) 2(s2) = 20 − 20p1p = 20 − 19p1p = 20 − 19p1p = 20 − 20p1p (cid:1)(cid:3) (cid:1)(cid:3) (cid:1)(cid:3) (cid:1)(cid:3) E κ (cid:2) E E κ (cid:2) κ (cid:2) E 2(s2) = 1 − p1 + 20p1p (cid:1)(cid:3) 2(s2) 1(s1)δ3 
1(s1)δ3 2(s2) 1(µ1), δ2 γ2 (cid:0) 1(µ1), δ2 γ3 (cid:0) (cid:1)(cid:3) = p1 + 19p1p = p1 + 19p1p 1(µ1), δ2 γ4 1(s1)δ3 = 20p1p (cid:1)(cid:3) 2(s2) (cid:1)(cid:3) 2(s2) 1(s1)δ3 1(s1)δ3 2(s2) 1(s1)δ3 2(s2) 1(s1)δ3 2(s2) = 20 = 20 = 20 = 20 (cid:1)(cid:3) (cid:1)(cid:3) (cid:1)(cid:3) (cid:1)(cid:3) κ (cid:2) E (cid:2) E E E (cid:2) (cid:2) (cid:2) (cid:0) κ κ κ κ 1(µ1), δ3 γ1 (cid:0) 1(µ1), δ3 γ2 (cid:0) γ3 1(µ1), δ3 (cid:0) 1(µ1), δ3 γ4 (cid:0) 1(µ1), δ4 γ1 E (cid:0) (cid:0) κ (cid:2) E κ (cid:2) E κ (cid:2) E κ (cid:2) 1(s1)δ3 2(s2) = 1 − p1 (cid:1)(cid:3) 2(s2) = p1 (cid:1)(cid:3) γ2 1(µ1), δ4 1(s1)δ3 1(µ1), δ4 γ3 1(s1)δ3 2(s2) 1(µ1), δ4 γ4 1(s1)δ3 2(s2) (cid:0) (cid:0) = 1 = 0 (cid:1)(cid:3) (cid:1)(cid:3) E (cid:2) κ κ 1(µ1), δ1 γ1 (cid:0) 1(µ1), δ1 γ2 (cid:0) E E (cid:2) 1(s1)δ4 2(s2) 1(s1)δ4 2(s2) (cid:1)(cid:3) 1(s1)δ4 2(s2) = p1 + 29p1p (cid:1)(cid:3) = 1 − p1 + 20p1p 1(µ1), δ1 γ3 κ (cid:2) κ (cid:0) 1(µ1), δ1 γ4 (cid:0) 1(µ1), δ2 γ1 = 30p1p (cid:1)(cid:3) = 1 + 29p1p (cid:1)(cid:3) = 30p1 − 29p1p = 30 − 30p1p 1(s1)δ4 2(s2) 1(s1)δ4 2(s2) 1(s1)δ4 (cid:1)(cid:3) 2(s2) (cid:1)(cid:3) 1(µ1), δ2 γ2 (cid:0) 1(µ1), δ2 γ3 (cid:0) 1(µ1), δ2 γ4 (cid:0) E 1(s1)δ4 2(s2) = 30 − 30p1p (cid:1)(cid:3) 1(s1)δ4 2(s2) = 30 − 29p1p 1(µ1), δ3 γ1 1(s1)δ4 (cid:0) 1(µ1), δ3 γ2 1(s1)δ4 2(s2) = p1 (cid:1)(cid:3) = 1 − p1 E E (cid:2) κ (cid:2) E (cid:0) κ (cid:2) E E κ κ (cid:2) (cid:2) E (cid:1)(cid:3) 2(s2) (cid:1)(cid:3) 2(s2) (cid:0) κ (cid:2) κ (cid:2) κ κ (cid:2) κ (cid:2) E E E E E E (cid:2) κ κ κ (cid:2) (cid:2) (cid:2) 1(µ1), δ3 γ3 1(s1)δ4 (cid:0) 1(µ1), δ3 γ4 1(s1)δ4 2(s2) (cid:0) 1(µ1), δ4 γ1 (cid:0) γ2 1(µ1), δ4 (cid:0) 1(µ1), δ4 γ3 (cid:0) 1(µ1), δ4 γ4 (cid:0) 1(s1)δ4 2(s2) 1(s1)δ4 2(s2) 1(s1)δ4 2(s2) 1(s1)δ4 2(s2) = 0 = 1 (cid:1)(cid:3) (cid:1)(cid:3) = 30 = 30 = 30 = 30 (cid:1)(cid:3) (cid:1)(cid:3) (cid:1)(cid:3) (cid:1)(cid:3) From above expression, it is difficult to make any comment on saddle point solution of zero-sum game. Thus we suppose p1 = 1 3 but it is also possible that under different range of p1, p, q our claim holds true. Rewriting expected payoff matrix for zero-sum game in 3 and q = 2 4, p = 1 table III. In table III, row vector denote strategies of a Team 2, column vector denote strategies of a Team 1 and corresponding expected payoff. Here, Team 2 wishes to minimize the expected payoff and Team 1 wishes to maximize the expected payoff. The security level of Team 1 is Similarly, the security level of Team 2 is V (A) = max j min i aij = 0.25 max j Notice that we have V (A) > V (A), it implies this game do not admit the pure strategy saddle V (A) = min aij = 1. i point solution. δ1 1 (s1)δ1 δ2 1 (s1)δ1 1 (s1)δ1 δ3 δ4 1 (s1)δ1 δ1 1 (s1)δ2 1 (s1)δ2 δ2 1 (s1)δ2 δ3 δ4 1 (s1)δ2 1 (s1)δ3 δ1 δ2 1 (s1)δ3 δ3 1 (s1)δ3 1 (s1)δ3 δ4 δ1 1 (s1)δ4 δ2 1 (s1)δ4 1 (s1)δ4 δ3 1 (s1)δ4 δ4 2(s2) 2(s2) 2(s2) 2(s2) 2(s2) 2(s2) 2(s2) 2(s2) 2(s2) 2(s2) 2(s2) 2(s2) 2(s2) 2(s2) 2(s2) 2(s2) γ1 1 (µ1) γ2 1 (µ1) γ3 1 (µ1) γ4 1 (µ1) 9.16 22.3 7.54 9.66 16.39 26.18 16.45 16.13 7.80 16.08 11.91 13.61 10.77 14.69 18.33 2.41 20 0.75 2.66 5.08 0.25 30 8.27 15.72 11.94 11.38 10.80 14.19 18.41 1.83 20 0.25 2.41 27.5 0.75 30 7.77 16.30 11.69 13.55 11.11 14.69 18.41 1.83 20 1 2.5 27.5 0 30 8.30 15.83 12.08 13.11 11.02 14.16 18.33 1.66 20 0 3.41 27.58 1 30 TWO-TEAM ZERO-SUM GAME WITH EXPECTED PAYOFF MATRIX TABLE III 1) Role of the private randomness independent of ξ: We are interested to understand the role of the private randomness in two-team zero-sum game. 
We assume a coordinator provides the private randomness to decision maker of a team, say Team 1 decision maker. Further we assume that these private randomization is independent of ξ. Consider Team 1 decision maker has private randomization over its strategies and plays strategy 4 i=1 ai = 1. That is γi 1(µ1) with probability ai for 1 ≤ i ≤ 4 and P γ1(µ1) = Then the expected payoff is E κ (cid:2) (cid:0) γ1(µ1)δj 1(s1)δk 2 (s2) γ1 1(µ1) γ2 1(µ1) γ3 1(µ1) γ4 1(µ1)    4 = (cid:1)(cid:3) X i=1 E κ (cid:2) with prob. with prob. with prob. a1 a2 a3 with prob. a4. (γ1(µ1) = γi (cid:0) 1(µ1))δj 1(s1)δk 2 (s2) ai (cid:1)(cid:3) 2 (s2) 2 (s2) 2 (s2) 1(s1)δ1 δ1 δ2 1(s1)δ1 δ3 1(s1)δ1 1(s1)δ1 δ4 δ1 1(s1)δ2 δ2 1(s1)δ2 1(s1)δ2 δ3 1(s1)δ2 δ4 δ1 1(s1)δ3 1(s1)δ3 δ2 δ3 1(s1)δ3 δ4 1(s1)δ3 1(s1)δ4 δ1 δ2 1(s1)δ4 δ3 1(s1)δ4 1(s1)δ4 δ4 2 (s2) 2 (s2) 2 (s2) 2 (s2) 2 (s2) 2 (s2) 2 (s2) 2 (s2) 2 (s2) 2 (s2) 2 (s2) 2 (s2) 2 (s2) 9.16a1 + 22.3a2 + 7.54a3 + 9.66a4 16.39a1 + 26.18a2 + 16.45a3 + 16.13a4 7.80a1 + 8.27a2 + 7.77a3 + 8.30a4 16.08a1 + 15.72a2 + 16.30a3 + 15.83a4 11.91a1 + 11.94a2 + 11.69a3 + 12.08a4 13.61a1 + 11.38a2 + 13.55a3 + 13.11a4 10.77a1 + 10.80a2 + 11.11a3 + 11.02a4 14.69a1 + 14.19a2 + 14.69a3 + 14.16a4 18.33a1 + 18.41a2 + 18.41a3 + 18.33a4 2.41a1 + 1.83a2 + 1.83a3 + 1.66a4 20 0.75a1 + 0.25a2 + 1a3 2.66a1 + 2.41a2 + 2.5a3 + 3.41a4 5.08a1 + 27.5a2 + 27.5a3 + 27.58a4 0.25a1 + 0.75a2 + 1a4 30 TABLE IV TWO-TEAM ZERO-SUM GAME EXPECTED PAYOFF WITH TEAM 1 HAS PRIVATE RANDOMIZATION notice that a Team 2 best response will be (δ3 for 1 ≤ j, k ≤ 4. We have evaluated the expected payoff and given in table IV. From table IV, 1(s1)δ4 2(s2)) depend on probability vector a = [a1, a2, a3, a4] at Team 1 (i.e. private randomization). Without loss of generality, we 2(s2)) or (δ4 1(s1)δ3 assume a3 = a4, now observe that a1 and a2 determines the best response of Team 2. We demonstrate this as follows. 1) If a1 < a2, thw best response of team 2 will be (δ3 1(s1)δ4 be (0.75a1 + 0.25a2 + 1a3). Further assume a2 = 2a1, a3 = a4 = 1 2(s2)) and expected payoff will 18 and 12, then a1 = 5 expected payoff is 0.43. 2) If a1 > a2, the best response of team 2 will be (δ4 1(s1)δ3 (0.25a1 + 0.75a2 + 1a4). Similarly, we assume a1 = 2a2, a3 = a4 = 1 2(s2)) and expected payoff will be 18 and 12, then a2 = 5 expected payoff is 0.43. 3) If a1 = a2, the best response of team 2 will be (δ3 expected payoff will be a1 + a3. We assume a3 = a4 = 1 2(s2)) or (δ4 1(s1)δ4 12 ,then a2 = a1 = 5 1(s1)δ3 2(s2)) and 12 and expected 1 (s1)δ1 δ1 δ2 1 (s1)δ1 δ3 1 (s1)δ1 1 (s1)δ1 δ4 δ1 1 (s1)δ2 δ2 1 (s1)δ2 1 (s1)δ2 δ3 1 (s1)δ2 δ4 δ1 1 (s1)δ3 1 (s1)δ3 δ2 δ3 1 (s1)δ3 δ4 1 (s1)δ3 1 (s1)δ4 δ1 δ2 1 (s1)δ4 δ3 1 (s1)δ4 1 (s1)δ4 δ4 2 (s2) 2 (s2) 2 (s2) 2 (s2) 2 (s2) 2 (s2) 2 (s2) 2 (s2) 2 (s2) 2 (s2) 2 (s2) 2 (s2) 2 (s2) 2 (s2) 2 (s2) 2 (s2) 16.33 21.81 8.10 15.87 11.92 12.32 10.83 14.36 18.38 1.97 20 0.43 2.57 21.27 0.57 30 TABLE V TWO-TEAM ZERO-SUM GAME WITH TEAM 1 PRIVATE RANDOMIZATION OVER ITS STRATEGIES a3 = a4 = 1 12 , a1 = 5 18 a2 = 10 18 . payoff is 0.5. This implies that under private randomization at one of team, it do not admit Nash equilibrium solution. Observe that the expected payoff of Team 2 has improved from 1 to 0.43 if a1 < a2 or a1 > a2 and 0.5 if a1 = a2 where Team 2 wishes to minimize the expected payoff. Now from table V, note that the best strategy of DM1 and DM2 in Team 2 would be to play pure strategy as δ4 2(s2) to minimize the expected payoff. 
Furthermore, one of DM in Team 2 having private randomness may not lead to improve in 1(s1) and δ3 the expected payoff. To demonstrate this, consider DM1 in Team 2 has private randomization δ1 2 (s2) δ2 2 (s2) δ3 2 (s2) δ4 2 (s2) 16.32b1 + 21.81b2 + 8.10b3 + 15.87b4 11.92b1 + 12.32b2 + 10.83b3 + 14.36b4 18.38b1 + 1.97b2 + 20b3 + 0.43b4 2.57b1 + 21.27b2 + 0.57b3 + 30b4 TABLE VI TWO-TEAM ZERO-SUM GAME WITH TEAM 1 AND 2 HAVING PRIVATE RANDOMIZATION OVER ITS STRATEGIES over his strategies. δ1(s1) = δ1 1(s1) δ2 1(s1) δ3 1(s1) δ4 1(s1) with prob. with prob. with prob. with prob. b1 b2 b3 b4    and 0 ≤ bi ≤ 1 for 1 ≤ i ≤ 4, 4 i=1 bi = 1. P The expected payoff payoff is E κ (cid:2) γ1(µ1)δ1(s1)δk (cid:0) 2 (s2) (cid:1)(cid:3) = 4 X i,j E κ (cid:2) (cid:0) (γ1(µ1) = γi 1(µ1))(δ1(s1) = δj 1(s1))δk 2 (s2) aibj, (cid:1)(cid:3) We illustrated the expected payoff matrix in table VI. If DM1 in Team 2 do not play pure strategy, assume b1 = b3 = 0, b2 = 1 4 , then DM2 of Team 2 will play strategy 2(s2) to minimize the expected payoff. Thus expected payoff 0.815. Note that 0.815 < ¯VA δ3 but greater than pure strategy expected payoff (it is clear from table V) since expected payoff 4, and b4 = 3 under pure strategy solution is 0.43. Here we assume if decision maker in Team 1 having private randomization with probability vector a3 = a4 = 1 12, a1 = 5 18 a2 = 10 18. 2) Role of common randomness independent of ξ: Now consider common randomness in- dependent of ξ is provided to DM1 and DM2 of team 2, i.e. Team 2 does joint randomization over its strategy then best for for team 2 to put positive mass on strategies (δ4 2(s2)) or 2(s2)). Otherwise its expected payoff more than pure strategy (it is clear from table III). In discrete team vs team zero-sum game with common randomness, do not admit Nash 1(s1), δ3 1(s1), δ4 (δ3 equilibrium solution. It also lead to improve in the expected payoff. C. Example: LQG team vs team zero-sum game Now, we illustrate an example of LQG zero-sum team vs team game and show that common randomness independent of environment ξ does not benefit. We also demonstrate that common randomness dependent on ξ benefit a team having extra randomness. Consider two team LQG zero sum game, Team 1 and Team 2 consists of a decision maker and two decision makers, respectively. Let ξ = [µ1, s1, s2]T denote an environment or state of nature; it is random vector having probability distribution N(0, Σ), Σ is covariance matrix. Let yi = ηi(ξ) be the observations about ξ available at decision maker i of Team 1, for i = 1; zj = ζj(ξ) represents the observations about ξ available at decision maker j of Team 2, for j = 1, 2. Mathematical simplicity, we assume y1 = η1(ξ) = µ1, zj = ζj(ξ) = sj, j = 1, 2. In standard LQG two-team zero-sum game decision rule is defined as follows. γi ∈ Γi and ui ∈ Ui for i = 1; δj ∈ ∆j and vj ∈ Vj for j = 1, 2. γi : yi → ui, δj : zj → vj, The optimal decision rule (u∗ 1 = γ∗ 1(y1), v∗ 1 = δ∗ 1(z1), v∗ 2 = δ∗ 2(z2)) such that JZS,LQG(u1, v∗ 1, v∗ 2) ≤ JZS,LQG(u∗ 1, v∗ 1, v∗ 2) ≤ JZS,LQG(u∗ 1, v1, v2), (20) for all u1 ∈ U1, v1 ∈ V1 and v2 ∈ V2; JZS,LQG(u1, v1, v2) = Eξ[κ(u1, v1, v2, ξ)]. The cost function: κ(u1, v1, v2, ξ) = κ(θ, ξ), = θT Bθ + 2θT Sξ, (21) −1 r11 r12   where θ = [u1, v1, v2]T , B =     among teams, that is r11 and r12 is coupling of DM1 of Team 1 with DM1 and DM2 of Team 2 , here r11 and r12 characterizes the coupling     r12 r11 q12 q12 1 1 respectively. And q12 denotes coupling among DM1 and DM2 of Team 2. 
Moreover, we assume that Team 1 seeks to maximize the expected payoff and Team 2 seeks to minimize the expected payoff. It is required that the cost function Eξ[κ(u1, v1, v2, ξ)] to be concave in u1 and convex in v1 and v2. Hence, we assume 1 − q2 12 > 0 and S = 0 −1 1  0 0 0 .          0 0 −1 Two-team LQG zero-sum game admits a saddle point solution (for which we refer the reader to [6, lemma 3.1, 3.2, theorem 3.1]•), i.e. max u1∈U1 min (v1,v2)∈V Eξ[κ(u1, v1, v2, ξ)] = min (v1,v2)∈V max u1∈U1 Eξ[κ(u1, v1, v2, ξ)]. (22) V = V1 × V2. Since, in static LQG problem, decision variable are linear function of observations available at decision makers, u1 = γi(y1) = α11y1, vj = δj(zj) = α2jzj, j = 1, 2. Re-writing relation of θ and observations y1, z1 z2 more compactly, we have θ = A˜y, where A =      0 0 α11 0 α21 0 0 0 α22 , and ˜y = [y1, z1, z2]T . The expected cost function is      JZS,LQG(α11, α21, α22) = Eξ[˜yT AT BA˜y + 2˜yT AT Sξ] = Tr[AT BAΣ + 2AT SΣ]. (23) Equality in (23) follows from ˜y = ξ and ξ ∼ N(0, Σ). Then we obtain from (22), max α11 min α21,α22 JZS,LQG(α11, α21, α22) = min α21,α22 max α11 JZS,LQG(α11, α21, α22) (24) An objective of zero-sum two team LQG game is to determine (α∗ 11, α∗ 21, α∗ 22) such that JZS,LQG(α11, α∗ 21, α∗ 22) ≤ JZS,LQG(α∗ 11, α∗ 21, α∗ 22) ≤ JZS,LQG(α∗ 11, α21, α22), will be satisfied for α11, α21, α22 ∈ R. 1) Discussion on matrix B : In matrix B, we have coupling parameter r11, r12 and q12. If r11 = r12 = q12 = 0, there is no coupling among Team 1 and 2, as well as among decision makers of Team 2. This is not at all interesting. If r11 = r12 = 0, then there is no coupling among team 1 and 2. Problem becomes team decision problem. Hence we suppose r11, r12, q12 6= 0. Next, we analyze the role of common randomness in LQG two-team zero-sum game. We describe two cases as follows. • Case I: Common randomness independent of ξ. • Case II: Common randomness dependent on ξ. 2) Common randomness independent of ξ: Proposition A.1: In LQG two-team zero-sum stochastic game, common randomness indepen- dent of ξ do not benifit the team. Proof: Consider a coordinator provides common randomness which is independent of environment ξ to the decision makers of teams. For mathematical simplicity, we assume common randomness is available at one of team, say Team 2. The common randomness provided to decision maker DM1 and DM2 of team 2 is represented as ω, and also ω ∐ ξ. The decision rule of a decision maker of Team 1 is γ1 : y1 → u1, and decision rule of Team 2 decision makers are δj : zj × ω → vj, j = 1, 2. Actions of decision makers are u1 = γ1(y1) = α11y1, vj = δj(zj, ω) = α2jzj + β2jω, for j = 1, 2. Rewriting above expression, we obtain θ = A˜y + βω, here, θ = , A = u1 v1 v2           α11    , ˜y = 0 α21 0 0 0 α22     y1 z1 z2          0  , β =  .     β21 β22     0 0     The expected payoff of LQG two team zero-sum game with common randomness is JZS,CR,LQG(α11, α21, α22, β) = Eξ[˜yT AT BA˜y + 2˜yT AT Sξ + 2˜yT AT Bβω + ωT βT Bβω + 2ωT βT Sξ], = Tr[AT BAΣ + 2AT SΣ + βT BβΣ2]. (25) Equality in (26) because ω ∐ ξ, ω ∼ N(0, Σ2). max α11 min α21,α22,β21,β22 JZS,CR,LQG(α11, α21, α22, β21, β22) = max α11 min α21,α22,β21,β22 Tr[AT BAΣ + 2AT SΣ + βT BβΣ2], = max α11 min α21,α22 Tr[AT BAΣ + 2AT SΣ] + min β21,β22 Tr[βT BβΣ2]. Clearly, from above expression, minimization of Tr[βT BβΣ2] attained at β equals to zero, i.e.β11 = 0, β21 = 0, β22 = 0 for given B and Σ2 > 0. 
max α11 min α21,α22,β21,β22 JZS,CR,LQG(α11, α21, α22, β21, β22) = max α11 min α21,α22 Tr[AT BAΣ + 2AT SΣ] = max α11 min α21,α22 JZS,LQG(α11, α21, α22) = min α21,α22,β21,β22 max α11 JZS,CR,LQG(α11, α21, α22, β21, β22) Hence we conclude that common randomness independent of ξ do not benefit the team having common randomness. 3) Common randomness dependent on ξ: Suppose the common randomness available at decision makers of Team 2 of two-team LQG zero-sum game; it is denoted as ω. The decision rule of a decision maker in Team 1 is and decision rule of Team 2 decision makers are γ1 : y1 → u1, δj : zj × ω → vj, j = 1, 2. Actions of decision makers are for j = 1, 2. We have6 u1 = γ1(y1) = α11y1, vj = δj(zj, ω) = α2jzj + β2jω, θ = A˜y + βω, here, θ = , A = u1 v1 v2           α11    , ˜y = 0 α21 0 0 0 α22     y1 z1 z2          0  , β =  .     β21 β22     0 0     Moreover it is assume that the common randomness is dependent on an environment ξ. Hence ω is function of ξ, that is ω = f (ξ); f (·) is measurable function. Let f be the linear function, then ω = f (ξ) = φ11µ1 + φ21s1 + φ22s2 = ΦT ˜y = ΦT ξ. Where Φ = [φ11, φ21, φ22]T , ˜y = ξ and ξ ∼ N(0, Σ). The expected cost functional is JZS,CR,LQG(α11, α21, α22, β21, β22) = Eξ[˜yT AT BA˜y + 2˜yT AT Sξ + 2˜yT AT Bβω + ωT βT Bβω + 2ωT βT Sξ], = Tr[AT BAΣ + 2AT SΣ + 2AT B ˜βΣ + ˜βT B ˜βΣ + 2 ˜βT SΣ]. (26) In (26), ˜β = βΦT . Goal is to find (α∗ 11, α∗ 21, α∗ 22, β∗ 21, β∗ 22) such that JZS,CR,LQG(α11, α∗ 21, α∗ 22, β∗ 21, β∗ 22) ≤ JZS,CR,LQG(α∗ 11, α∗ 21, α∗ 22, β∗ 21, β∗ 22) ≤ JZS,CR,LQG(α∗ 11, α21, α22, β21, β22) for α11, α21, α22, β21, β22 ∈ R. Source of information (source of common randomness) can act as a mole or consultant depending on type of information it provides. If source of information is a mole then ω = φ11µ1. It implies φ21 = 0, and φ22 = 0. If source of information is consultant, then ω = φ21s1 + φ22s2. We will investigate two different cases based on source of information and types of information it provides. a) Suppose the source of information is a mole or spy and it provide information (common randomness) ω = φ11µ1. Let J a,∗ ZS,CR,LQG denote the saddle point solution of LQG two-team zero- sum game with common randomness when source of common randomness to Team 2 decision makers is spy. b) Let J b,∗ ZS,CR,LQG represents the saddle point solution of LQG two-team zero-sum game with common randomness when source of common randomness to Team 2 decision makers is consultant and ω = φ21s1 + φ22s2. Intuitively, we expect to have following inequalities. J a,∗ ZS,CR,LQG ≤ J ∗ ZS,LQG. J b,∗ ZS,CR,LQG ≤ J ∗ ZS,LQG. (27) (28) Note J ∗ ZS,LQG is saddle point solution of LQG two-team zero-sum game with no common randomness. From (26), analytically, it is difficult to prove the inequalities in (27), (28). Hence we conjecture result in (27), (28). Now we present numerical results and show that above inequalities are true. Let Σ =      σ2 µ1 µ1,s1 σ2 σ2 µ1,s2 σ2 σ2 s1 s1,s2 σ2 s2 σ2 µ1,s1 µ1,s2 σ2 σ2 s1,s2 , Since ω is scalar, we have Σ2 = σ2 ω. 
Team cost functional      is J (α11, α21, α22, β21, β22) = −α2 11σ2 µ1 + α2 21σ2 s1 + α2 22σ2 s2 + 2r11α11α21σ2 µ1,s1 + 2r12α11α22σ2 µ1,s2 +2q12α21α22σ2 s1,s2 + 2(r11α11β21 + r12α11β22)σ2 µ1,w + 2(α21β21 + q12α21β22)σ2 s1,w +2(q12α22β21 + α22β22)σ2 s2,w + (β2 21 + 2q12β21β22 + β2 22)σ2 w + 2α11σ2 µ1 −2α21σ2 s1 − 2α22σ2 s2 − 2β21σ2 s1,w − 2β22σ2 s2,w.(29) We know that LQG two-team zero-sum game has saddle point solution, that is max α11 min α21,α22,β21,β22 JZS,CR,LQG(α11, α21, α22, β21, β22) = min α21,α22,β21,β22 max α11 JZS,CR,LQG(α11, α21, α22, β21, β22). (30) To evaluate maxα11 minα21,α22,β21,β22 JZS,CR,LQG(α11, α21, α22, β21, β22), we differentiate (29) with respect to α11, α21, α22, β21, β22 and equate to 0. We obtain linear systems of equations as follows. µ1,s1 µ1,s2 µ1,s1 µ1,s2 −σ2 µ1 r11σ2 r12σ2 r11σ2 r12σ2 µ1,w µ1,w           r11σ2 σ2 s1 q12σ2 σ2 s1,s2 s1,w q12σ2 s1,w s1s2 r12σ2 q12σ2 σ2 s2 q12σ2 σ2 s2,w s2,w r11σ2 σ2 s1,w q12σ2 σ2 w q12σ2 w µ1,w r12σ2 q12σ2 σ2 s2,w µ1,w s1,w s2,w q12σ2 w σ2 w                     α11 α21 α22 β21 β22           = −σ2 µ1 σ2 s1 σ2 s2 σ2 s1,w σ2 s2,w           .           Numerically, we compare our result for different values matrix B. 1)B =      1 −1 1 4 4 1 1 2 1 2 1 1 4 1 4  We assume Σ = , 2)B =      2 1 4 4 1 1 2 1 2 1 1 4 1 4 1          1 −1 1 4 2 1 1 2 1 2 1 1 4 1 2       ,     for all numerical results. a) When source of information is a mole and ω = φ11µ1, we have E[ω] = 0, E[ω2] = σ2 ω = φ2 11σ2 µ1. σ2 µ1,ω = φ11σ2 µ1 s1,ω = φ11E[µ1s1] = φ11σ2 σ2 µ1,s1. s2,ω = φ11E[µ1s2] = φ11σ2 σ2 µ1,s2. (r11, r12, q12) 4 , 1 ( 1 4 , 1 2 ) 2 , 1 4 , 1 ( 1 2 ) (φ11, φ21, φ22) J a,∗ ( 1 2 , 0, 0) ( 1 2 , 0, 0) 0.2037 0.4012 ZS,CR,LQG WITH RANDOMIZATION: COMPARISON OF J a,∗ ZS,CR,LQG FOR DIFFERENT VALUES OF r11, r12, q12. TABLE VII (r11, r12, q12) 4 , 1 4 , 1 ( 1 2 ) 2 , 1 4 , 1 ( 1 2 ) (φ11, φ21, φ22) J b,∗ (0, 1 (0, 1 2 , 1 2 ) 2 , 1 2 ) 0.1616 0.2435 ZS,CR,LQG WITH RANDOMIZATION: COMPARISON OF J b,∗ ZS,CR,LQG FOR DIFFERENT VALUES OF r11, r12, q12. TABLE VIII   Case 1) B = −1 1 1 4 4 1 1 2 1 2 1 After solving linear systems of equation, we have         1 4 1 4 11 = 0.9615, α∗ α∗ 21 = 0.8052, α∗ 22 = 0.8052, β∗ 21 = −0.7103, β∗ 22 = −0.7103. Team cost functional is J a,∗ ZS,CR,LQG = J a ZS,CR,LQG(α∗ 11, α∗ 21, α∗ 22, β∗ 21, β∗ 22) = max α11 min α21,α22,β21,β22 J (α11, α21, α22, β21, β22) = 0.4012.  Case 2) B = −1 1 1 4 2 1 1 2 1 2 1 Solving linear systems of equations we obtain α∗  .         1 2 1 4 11 = 0.8500, α∗ 21 = 0.8052, α∗ 22 = 0.8052 21 = −0.0693, β∗ β∗ 22 = −1.7693. Evaluating team cost functional J a,∗ ZS,CR,LQG = 0.2037. b) When a consultant provides an information, ω = φ21s1 + φ22s2. Note that E[w] = 0, (r11, r12, q12) 4 , 1 ( 1 4 , 1 2 ) 2 , 1 4 , 1 ( 1 2 ) (φ11, φ21, φ22) J ∗ ZS,LQG (0, 0, 0) (0, 0, 0) TABLE IX 0.598 1.8991 WITHOUT RANDOMIZATION: COMPARISON OF J ∗ ZS,LQG FOR DIFFERENT VALUES OF r11, r12, q12. w = E[w2] = φ2 σ2 21σ2 s1 + φ2 22σ2 s2 + 2φ21φ22σ2 s1,s2. µ1,w = E[µ1w] = φ21σ2 σ2 µ1,s1 + φ22σ2 µ1,s2. s1,w = E[s1w] = φ21σ2 σ2 s1 + φ22σ2 s1,s2. s2,w = E[s2w] = φ21σ2 σ2 s1,s2 + φ22σ2 s2. We suppose φ21 = 1     22 = 2, β∗ 1 4 1 4      2, φ22 = 1 2 . 1 −1 1  4 4 1 1 2 1 2 1 21 = −1.391, β∗ −1 1 1 4 2 1 1 2 1 2 1      1 2 1 4      Case 1) B = Solving linear system of eqaution we have α∗ 11 = 1.0381, 21 = 2, α∗ α∗ 22 = −1.391 and team optimal cost J b,∗ ZS,CR,LQG = 0.1616. Case 2)B = . 
Then α∗ 11 = 1.0515, α∗ 21 = 2, α∗ 22 = 2, β∗ 21 = −1.3333, β∗ 22 = −1.5086 and team optimal cost J b,∗ ZS,CR,LQG = 0.2435. From table VII, VIII, IX, it clear that inequalities in (27),(28) satisfy numerically. Observe that common randomness dependent on ξ provided by either a mole or consultant benefits the team vs team zero-sum game.
ai_researcher
2
DOCBENCH_A_Benchmark_for_Evaluating_LLM-based_Document_Reading_Systems.pdf
4 2 0 2 l u J 5 1 ] L C . s c [ 1 v 1 0 7 0 1 . 7 0 4 2 : v i X r a DOCBENCH: A Benchmark for Evaluating LLM-based Document Reading Systems Anni Zou1,2∗, Wenhao Yu2(cid:66), Hongming Zhang2, Kaixin Ma2, Deng Cai2, Zhuosheng Zhang1, Hai Zhao1, Dong Yu2 1Shanghai Jiao Tong University 2Tencent AI Lab [email protected], (cid:66)[email protected] (corresponding author) Abstract Recently, there has been a growing interest among large language model (LLM) developers in LLM-based document reading systems, which enable users to upload their own documents and pose questions related to the document contents, going beyond simple reading comprehension tasks. Consequently, these systems have been carefully designed to tackle challenges such as file parsing, metadata extrac- tion, multi-modal information understanding and long-context reading. However, no current benchmark exists to evaluate their performance in such scenarios, where a raw file and questions are provided as input, and a corresponding response is expected as output. In this paper, we introduce DOCBENCH, a new benchmark de- signed to evaluate LLM-based document reading systems. Our benchmark involves a meticulously crafted process, including the recruitment of human annotators and the generation of synthetic questions. It includes 229 real documents and 1,102 questions, spanning across five different domains and four major types of questions. We evaluate both proprietary LLM-based systems accessible via web interfaces or APIs, and a parse-then-read pipeline employing open-source LLMs. Our evaluations reveal noticeable gaps between existing LLM-based document reading systems and human performance, underscoring the challenges of develop- ing proficient systems. To summarize, DOCBENCH aims to establish a standardized benchmark for evaluating LLM-based document reading systems under diverse real-world scenarios, thereby guiding future advancements in this research area. 2 1 Introduction The emergence of large language models (LLMs) has marked a significant milestone in the field of natural language processing, revolutionizing the way we approach a variety of tasks [2, 3, 7, 35, 37, 40, 50]. Existing LLMs such as GPT-4 [2], Llama-3 [37], and Claude-3 [3] have shown exceptional abilities in following human instructions to perform tasks such as answering questions, translating languages and summarizing texts. These tasks are typically characterized by straightforward input- output interactions, where the models generate responses solely based on the provided text. However, many real-world applications require more complex interactions involving user-provided documents. For instance, financial analysts might need to query comprehensive financial reports to inform their investment decisions [25, 42, 45]. Legal professionals often search through extensive legal documents to find relevant case law [8, 10, 22]. Similarly, scientific researchers frequently sift through academic papers to identify related works and extract key findings [5, 11]. ∗This work was done during internship at Tencent AI Lab, Seattle. 2Data and code will be released at https://github.com/Anni-Zou/DocBench. Preprint. Under review. Figure 1: An example of OpenAI’s GPT-4 based document reading system. Unlike standalone LLMs, recent proprietary LLM-based document reading systems employ a carefully designed approach (e.g., file parsing, code execution) to answer user questions related to document contents. 
When users pose queries based on their provided documents, the situation becomes more intricate and challenging [23]. Unlike standalone LLMs that are primarily trained to process and respond to textual inputs (or images in the case of Vision LLMs), handling user-provided documents necessitates a more sophisticated approach that stretches beyond the capabilities of a single LLM. In order to provide accurate responses, an LLM-based document reading system should not only comprehend natural language queries, but also excel in a range of processing skills, including parsing and interpreting user documents and layouts, navigating complex formatting structures, extracting relevant metadata, and managing long textual contexts along with any embedded images. Mastery of these diverse skills is essential for generating precise and contextually relevant responses. At the same time, recent advancements in proprietary LLM developers such as OpenAI and Anthropic have provoked the release of several LLM-based document reading systems. Figure 1 illustrates an example of OpenAI’s GPT-4-based document reading system. Despite widespread claims of effec- tiveness and efficiency in various online public blogs34, the absence of a standardized benchmark makes it difficult to objectively evaluate and compare the document reading performance across these systems, thereby leaving a critical gap in fairly assessing these capabilities in a fine-grained manner. To fill this gap, our paper introduces DOCBENCH, a novel benchmark specifically designed to evaluate LLM-based document reading systems. DOCBENCH is developed to mirror real-world scenarios where each input consists of a document paired with one or multiple associated questions, and each question is annotated with a golden answer. Our benchmark undergoes a meticulous development process, incorporating human annotation and synthetic question generation. To the end, DOCBENCH features 229 real-world documents and 1,102 questions spanning 5 diverse domains: Academia, Finance, Government, Laws, and News. Besides, the benchmark involves 4 question categories, including text-only, multi-modal (i.e., tables and figures), meta-data, and unanswerable, ensuring comprehensive coverage of various document reading capabilities. Based upon DOCBENCH, we evaluate several proprietary LLM-based systems that are accessible via web interfaces or APIs. However, these proprietary systems are close-sourced, thus leading to the limited disclosure of their detailed operational strategies. As a result, we additionally assess a straight- forward parse-then-read pipeline employing a series of open-source LLMs. Our evaluations reveal noticeable gaps between existing LLM-based document reading systems and human performance, underscoring the challenges of developing proficient systems. In summary, DOCBENCH serves as the first standardized benchmark to evaluate LLM-based document reading systems within real-world scenarios, where the systems take a document file paired with one or multiple related questions as input and generate textual responses as output. 
Moreover, our benchmark is carefully designed to encompass 5 diverse domains and 4 distinct question types, 3Blog: Claude can now use tools https://www.anthropic.com/news/tool-use-ga 4Blog: using LlamaIndex talk-with-documents-using-llamaindex-3952c76bd511 Talk with documents https://codemaker2016.medium.com/ 2 User: Upload the PDF file of DPR paperUser: Who is most cited person in the paperSystem: Ming-Wei Chang, with 4 citationsBlack-box document reading systemsStep 1: parse the uploaded documentStep2: extract the reference sectionStep 3: extract author names from referencesStep 4: count the occurrences of each person Step 5: respond with the most cited person[Some file loading process is omitted … ] Figure 2: Construction pipeline of DOCBENCH. (a) Document Collection: gathering PDF files from five different domains; (b) QA-pair Generation: creating diverse and comprehensive QA pairs through a combination of LLMs and human effort; (c) Quality Check: ensuring data quality through a multi-step process that includes auto filtering, manual review, and expert curation. ensuring a nuanced and thorough assessment. By facilitating fair comparisons across different systems, DOCBENCH highlights current limitations and paves the way for future advancements. 2 The DOCBENCH DOCBENCH is a benchmark that takes raw PDF files and accompanying questions as inputs, with the objective of generating corresponding textual answers. In this section, we will introduce the pipeline used to construct the dataset, present detailed statistics, and explain the evaluation method. 2.1 Dataset Construction Our dataset construction pipeline consists of three phases. First, we crawl documents across various domains from publicly accessible online resources (§2.1.1). Second, we generate corresponding QA pairs with the help of GPT-4 and a team of human annotators (§2.1.2). Finally, we conduct auto filtering followed by a manual review to validate the quality of the generated instances (§2.1.3). 2.1.1 Document Collection To establish a practical and constructive benchmark for document reading, we concentrate on scenarios where it is crucial to read documents. We standardize the documents to PDF format due to its high compatibility and stability. We identify five domains where documents are frequently utilized: Academia, Finance, Government, Laws, News. For Academia, papers are downloaded from arXiv within the range of top-k citations in the field of natural language processing on Google Scholar. 5 For Finance, we crawl the annual reports of companies with top-k global market capitalization up to 2024-02-23 from AnnualReports. 6 For Government, we manually download official governmental reports in 2023 from the U.S. Department of State and GovInfo. 7 For Laws, files are gathered from an official online collection of publications from the Library of Congress, within the years ranging from 2020 to 2024. 8 For News, we collect front-page scanned documents of the New York Times, covering dates from 2022-02-22 to 2024-02-22. 9 We set k = 100 in the initial crawling process for academic and financial documents. After skipping the unobtainable or damaged documents, we eventually obtained 229 PDF files, with 49 for academia, 40 for finance, 44 for government, 46 for laws, and 50 for news. Detailed statistics are shown in Table 1. 5https://scholar.google.com/; https://arxiv.org/. 6https://companiesmarketcap.com; http://www.annualreports.com. 7https://www.state.gov/department-reports/; https://www.govinfo.gov/. 
8https://www.loc.gov/collections/publications-of-the-law-library-of-congress. 9https://static01.nyt.com/images/. 3 AcademiaFinanceGovernmentLawsNews(cid:31)(cid:30)(cid:29)(cid:28)(cid:27)(cid:26)(cid:29)(cid:25)(cid:24)(cid:23)(cid:22)(cid:21)(cid:27)(cid:20)(cid:19)(cid:18)(cid:30)(cid:28)(cid:23)(cid:17)(cid:16)(cid:27)(cid:29)(cid:28)(cid:25)(cid:15)(cid:27)(cid:20)(cid:14)Q: What was the total non-operating income for Amazon in 2021? A: $13,272 million. [Evidence](cid:31)(cid:21)(cid:20)(cid:30)(cid:21)(cid:29)(cid:17)(cid:13)(cid:25)(cid:12)(cid:27)(cid:25)(cid:11)(a) Document Collection(b) QA-pair Generation(c) Quality Check(cid:10)(cid:20)(cid:29)(cid:27)(cid:20)(cid:25)(cid:17)(cid:13)(cid:25)(cid:19)(cid:23)(cid:30)(cid:15)(cid:9)(cid:25)(cid:19)(cid:31)(cid:30)(cid:29)(cid:28)(cid:27)(cid:8)(cid:22)(cid:23)(cid:7)(cid:21)(cid:29)(cid:17)(cid:6)(cid:20)(cid:5)(cid:23)Page Text:We introduce a new language model that... (cid:6)(cid:20)(cid:19)(cid:28)(cid:15)(cid:30)(cid:9)(cid:28)(cid:27)(cid:23)(cid:20)(cid:17)(cid:4)(cid:15)(cid:23)(cid:22)(cid:26)(cid:28)(cid:19)(cid:3)(cid:21)(cid:15)(cid:14)(cid:25)(cid:17)(cid:3)(cid:21)(cid:20)(cid:14)(cid:30)(cid:21)(cid:14)(cid:25)(cid:17)(cid:31)(cid:23)(cid:7)(cid:25)(cid:29)(cid:19)(cid:2)(cid:30)(cid:22)(cid:21)(cid:20)(cid:17)(cid:18)(cid:20)(cid:20)(cid:23)(cid:28)(cid:21)(cid:28)(cid:23)(cid:15)(cid:19)<Text-only> Q: What is the average sales...A: $10,537 million. [Evidence]<Multimodal> Q: According to Figure 2, what is ...A: Yes. [Evidence]<Meta-data> Q: On which page does the reportA: Page 5.<Unanswerable> Q: What does BERT...A: Not mentioned.Text-onlyBased on the above figure and text, please design three QA pairs...These questions require locating the specific information, simple orcomplex calculations, comparisons, finding the maximum or minimum... Multimodal(cid:1)(cid:127)(cid:26)(cid:25)(cid:15)(cid:28)(cid:17)(cid:129)(cid:30)(cid:15)(cid:21)(cid:28)(cid:27)(cid:23)(cid:20)Q: Is SenseBERT a model mentioned in the provided text? A: Yes. [Evidence]Q: What was the total non-operating income for Amazon in 2021? A: $13,272 million. [Evidence]Q: Is SenseBERT a model mentioned in the provided text? A: Yes. [Evidence] Table 1: Overview statistics of DOCBENCH. All documents are in PDF format. We extract text content and calculate the corresponding #Tokens of documents. Category Questions. Documents. #Num #Tokens #Num #Pages #Size(KB) #Tokens Aca. Fin. Gov. Laws News 303 288 148 191 172 Total/Avg. 1,102 16.8 16.8 14.1 15.4 13.5 15.7 49 40 44 46 50 229 11 192 69 58 1 66 847 6,594 2,183 969 3,095 2,738 11,123 149,409 36,105 32,339 2,909 46,377 Figure 3: Overview of Questions and Documents: distribution of question token counts (left); distribution of QA pairs per document (middle); distribution of document token counts (right). 2.1.2 QA-pair Generation The generation procedure revolves around two aspects: diversity and comprehensiveness. On one hand, as the document itself inherently abounds with multi-dimensional and multi-modal information including texts, tables, figures, and meta-data, we leverage the fitz library 10 to parse out the distinct modalities within the PDF files. Afterward, we deliver plain texts to GPT-4 (gpt-4-0125-preview) for generating text-only QA pairs and resort to GPT-4V (gpt-4-1106-vision-preview) for yield- ing multi-modal ones based on tables, figures, and their related textual descriptions. 
On the other hand, we further request a set of human annotators to manually elaborate 350 QA pairs based on the given document files. Their primary task is to focus on types that are rarely covered in the previous generation stage but are frequent in daily usage, such as meta-data and unanswerable instances. Details and additional analysis of instruction prompts are attached in Appendix A. 2.1.3 Quality Check We begin by instructing GPT-4 to automatically filter out questions that are excessively lengthy, unnatural, or impractical. We then conduct a manual review following the automatic filtering to ensure both the quality of questions and the accuracy of answers. To further align our data with real-world user scenarios, we engage 7 practitioners from distinct domains to review and refine the data within their areas of expertise. In this way, our data quality is validated from multiple perspectives. 2.2 Dataset Statistics DOCBENCH comprises a total of 229 PDF documents sourced from publicly accessible online repositories along with 1,102 questions, spanning across 5 domains: Academia, Finance, Government, Law, and News. As shown in Table 1, we conduct comprehensive statistical analysis across various angles, encompassing the number of questions, documents, and average token counts within each. Given the unique nature of our task input, which involves processing PDF files, we additionally include information such as page count and file size. Moreover, Figure 3 presents distributions depicting the counts of question tokens, document tokens 11, and QA pairs per document. Notably, we constrain the number of QA pairs per document to a maximum of 20, with its range spanning from 1 to 16, aiming to better emulate real-world usage scenarios. As for the token counts of questions and documents, the minimum and maximum values are (6||40) and (1, 300||598, 302) respectively. 10https://pypi.org/project/fitz/ 11We utilize the tokenizer of gpt-4-turbo for token measurement. 4 510152025303540#Tokens050100150200#QuestionsDistribution of Question Token Counts.246810121416#QA pairs01020304050#DocumentsDistribution of QA pairs per Document.0102030>40#Tokens(k)0102030405060#DocumentsDistribution of Document Token Counts. Table 2: Examples of instances from DOCBENCH, with multiple labels indicating our data diversity. Question Answer Labels Document Why does the model not per- form as well in German com- pared to Spanish and Dutch? Due to its complex mor- phology and compound words... <Aca.><Why> <Text-only> <Textual> When and Why are Pre-trained Word Embeddings Useful for Machine Translation [clickable file link] By how much did the num- ber of Erica users increase from 2018 to 2019? The number increased by 5.5 million... <Fin.><How> <Multimodal> <Numerical> Bank of America Annual Report 2020 [clickable file link] What is the primary focus of Bureau Objective 3.4? The report does not con- tain such objective. <Gov.> <Wh-> <Unanswerable> <Others> Governmental report from Secre- tary’s Office of Global Women’s Is- sues 2022 [clickable file link] How many times does the report mention "scientific ethics"? The report mentions "sci- entific ethics" 11 times. Is the article about Hurricane Ian’s impact in Florida writ- ten by multiple authors? Yes, the article is about Hurrican Ian’s impace in Florida... 
<Laws><How> <Meta-data> <Numerical> <News><Y/N> <Meta-data> <Boolean> Report on Regulation of Stem Cell Research from Library of Congress 2023 [clickable file link] New York Times front page on 2022-09-30 [clickable file link] Figure 4: Data distribution of DOCBENCH: (a) proportion(%) of various data groups based on four distinct classification criteria; (b) detailed data analysis based on question types. 2.3 Dataset Analysis Figure 4 illustrates the data distribution in DOCBENCH based on different classification criteria. QA-pair Type The types of QA pairs can be mainly divided into four groups: text-only (37.4%), multimodal (27.9%), meta-data (23.4%), and unanswerable (11.3%). The text-only and multimodal types collectively account for over half (65.3%), centering on the abilities to comprehend long contexts and interpret information from different modalities. Besides, we incorporate approximately one-third (34.7%) of questions to more closely fit the actual scenarios as well as assess the robustness of the document reading systems, including 23.4% inquiring about metadata (e.g., page numbers, word counts) and 11.3% that cannot be answered based on the given document. Question Type The types of questions can be primarily separated into four categories according to the inquiry focus: what / who / where / when / which (58.6%), Y/N (22.1%), how (18.8%), and why (0.5%). These categories respectively demand specific information or details, straightforward yes or no responses, methods or degrees, and the underlying reasons behind actions or phenomena. Figure 4(b) delineates a detailed data distribution based on question types. The interrogative what holds a dominant proportion at 40.8%, which is reasonable as users commonly seek precise information when confronted with a document. Answer Type The types of answers can be partitioned into four classes: numerical (37.4%), textual (35.7%), boolean (17.3%), and others (9.6%). Within the numerical class, 69% originate from the domains of academia and finance, as these documents naturally require extensive use of numbers to convey information, such as performance metrics in academic papers and figures in financial reports. 
5 (cid:31)(cid:30)(cid:29)(cid:28)(cid:27)(cid:26)(cid:25)(cid:28)(cid:24)(cid:23)(cid:22)(cid:28)(cid:21)(cid:29)(cid:20)(cid:19)(cid:18)(cid:17)(cid:20)(cid:19)(cid:16)(cid:17)(cid:15)(cid:14)(cid:13)(cid:23)(cid:25)(cid:12)(cid:11)(cid:10)(cid:9)(cid:12)(cid:14)(cid:26)(cid:8)(cid:23)(cid:7)(cid:29)(cid:12)(cid:10)(cid:17)(cid:14)(cid:29)(cid:13)(cid:7)(cid:29)(cid:14)(cid:29)(cid:6)(cid:25)(cid:29)(cid:28)(cid:5)(cid:4)(cid:29)(cid:14)/(cid:5)(cid:4)(cid:23)/(cid:5)(cid:4)(cid:17)(cid:3)(cid:17)/(cid:5)(cid:4)(cid:17)(cid:25)/(cid:5)(cid:4)(cid:26)(cid:30)(cid:4)(cid:2)/(cid:18)(cid:1)(cid:23)(cid:20)(cid:5)(cid:4)(cid:11)(cid:18)(cid:9)(cid:8)(cid:17)(cid:3)(cid:26)(cid:30)(cid:29)(cid:12)(cid:16)(cid:17)(cid:15)(cid:14)(cid:9)(cid:29)(cid:12)(cid:127)(cid:23)(cid:23)(cid:12)(cid:17)(cid:29)(cid:25)(cid:129)(cid:14)(cid:4)(cid:17)(cid:3)(cid:19)((cid:29))(cid:141)(cid:143)(cid:29)(cid:14)(cid:29)(cid:141)(cid:7)(cid:26)(cid:19)(cid:14)(cid:3)(cid:26)(cid:144)(cid:9)(cid:14)(cid:26)(cid:23)(cid:25)(cid:141)(cid:144)(cid:29)(cid:19)(cid:17)(cid:7)(cid:141)(cid:23)(cid:25)(cid:141)(cid:7)(cid:26)(cid:157)(cid:17)(cid:3)(cid:17)(cid:25)(cid:14)(cid:141)(cid:30)(cid:12)(cid:29)(cid:19)(cid:19)(cid:26) (cid:30)(cid:29)(cid:14)(cid:26)(cid:23)(cid:25)(cid:141)(cid:30)(cid:3)(cid:26)(cid:14)(cid:17)(cid:3)(cid:26)(cid:29)(cid:28)((cid:144))(cid:141)(cid:143)(cid:17)(cid:14)(cid:29)(cid:26)(cid:12)(cid:17)(cid:7)(cid:141)(cid:7)(cid:29)(cid:14)(cid:29)(cid:141)(cid:7)(cid:26)(cid:19)(cid:14)(cid:3)(cid:26)(cid:144)(cid:9)(cid:14)(cid:26)(cid:23)(cid:25)(cid:141)(cid:144)(cid:29)(cid:19)(cid:17)(cid:7)(cid:141)(cid:23)(cid:25)(cid:141) (cid:9)(cid:17)(cid:19)(cid:14)(cid:26)(cid:23)(cid:25)(cid:141)(cid:14)(cid:11)€(cid:17)(cid:19)(cid:141)(cid:143)(cid:23)(cid:8)(cid:29)(cid:26)(cid:25)‚(cid:31)(cid:13)€(cid:29)(cid:26)(cid:3)‚(cid:9)(cid:17)(cid:19)(cid:14)(cid:26)(cid:23)(cid:25)(cid:31)(cid:25)(cid:19)(cid:20)(cid:17)(cid:3) Table 3: The GPT-4 automatic evaluator shows a 98% agreement with human annotators. We randomly sample 40 questions and answers from five systems, asking human annotators to assess their accuracy. We then employ string matching (StrMatch), GPT-3.5, and GPT-4 as automatic evaluators. Finally, we measure the agreement between the human and these automatic evaluators. Sources # Correct / Wrong by different evaluators Agreement (human and automatic evaluators) Human GPT-4 GPT-3.5 StrMatch KimiChat Qwen-2.5 Gemma (7B) Mixtral (7B) Llama-3 (70B) 24 / 16 17 / 23 19 / 21 14 / 26 16 / 24 23 / 17 18 / 22 18 / 22 14 / 26 15 / 25 33 / 7 31 / 9 18 / 22 26 / 14 28 / 12 0 / 40 0 / 40 0 / 40 0 / 40 0 / 40 Total 90 / 110 88 / 112 136 / 64 0 / 200 GPT-4 97.5% 97.5% 97.5% 100.0% 97.5% 98.0% GPT-3.5 StrMatch 75.0% 57.5% 75.0% 65.0% 62.5% 67.0% 40.0% 57.5% 52.5% 65.0% 60.0% 55.0% 2.4 Evaluation Setup Evaluation Process Our dataset diversity poses two major evaluation challenges: (i) The evaluation methods vary depending on the answer type. For example, for boolean or numerical answers, a fair evaluator only needs to verify the correctness of a binary yes/no response or a specific number using simple techniques like string matching or number extraction. In contrast, textual responses require more nuanced standards such as natural language generation (NLG) metrics. Thus, accurately determining the appropriate evaluation method becomes complex when the answer type is unknown. 
(ii) Different LLMs and systems exhibit substantial variations in the organization and style of their outputs, potentially leading to biases in traditional evaluation approaches. Therefore, we capitalize on the prowess of LLMs that have proven to be decent evaluators and can be easily adapted to the assessment of various answer types [14, 24, 39]. Inspired by Liu et al. [24], we clearly define the evaluation criteria for various types within the instruction prompt and then instruct GPT-4 to assign a score of 0 (incorrect) or 1 (correct). After evaluating 200 examples by both human evaluators and GPT-4, we found that the GPT-4 automatic evaluator shows a 98% agreement with human annotators, significantly exceeding the traditional string matching approach. Details of this experiment is shown in Table 3, and details of evaluation instruction prompts are attached in Appendix A. Metrics As mentioned above, we instruct GPT-4 to assign a score of 0 (incorrect) or 1 (correct), thus using Accuracy (abbreviated as Acc.) to measure system performance. We report accuracy across all instances, as well as for each domain and QA-pair type in Table 4. 3 Experiments and Analysis 3.1 Experimental Setup We conduct a comprehensive evaluation of 22 LLM-based document reading systems, encompassing both proprietary systems that support document uploads and a series of parse-then-read pipelines. For parse-then-read pipelines, we leverage the fitz package to extract text and image blocks from PDF files. We retain the original texts and line breaks for text chunks while we denote the i-th image as [image i] for images. Our selection for the proprietary systems includes GPT-4 and GPT-4o [2] from OpenAI, GLM-4 12 from ZhipuAI, Kimi 13 from Moonshot AI, Claude-3 14 from Anthropic, Qwen- 2.5 15 from Alibaba Cloud, and ERNIE-3.5 16 from Baidu. In the case of the parse-then-read pipelines, we assess 15 prominent LLMs as base models, featuring those from the GPT [2, 31], Llama [37], Mistral [17], Yi [48], InternLM [6], Phi-3 [1], Gemma [36], ChatGLM3 [12], and Command-R [9] families. The selection of base open-sourced LLMs adheres to three guiding principles: (i) official release with instruct or chat versions that are supported by vLLM [20] framework; (ii) model sizes ranging from 7B to 70B to accommodate GPU memory constraints; (iii) availability of the longest context length and the latest version. 12https://chatglm.cn/main/doc 13https://kimi.moonshot.cn 14https://claude.ai/chats 15https://tongyi.aliyun.com/qianwen 16https://yiyan.baidu.com 6 Table 4: Results on DOCBENCH across various types and domains. Ver./Size stands for the model version or size; File denotes the maximum uploaded file size; Cxt. refers to model’s context length. Methods Form Ver. /Size File /Cxt. Domain Type Overall Acc. Aca. Fin. Gov. Laws News Text. Multi. Meta. Una. 
Human - - - 83.0 82.2 77.8 75.0 86.4 81.4 83.3 77.5 82.2 81.2 LLM-based systems API 0409 100M 65.7 65.3 75.7 69.6 API 0513 100M 56.4 56.3 73.0 65.5 Web Web - - 20M 55.8 35.4 61.5 62.8 100M 62.4 61.8 77.0 78.5 Web Opus 10M 73.9 40.6 70.3 79.1 Web Web - - 150M 42.9 29.9 51.4 55.5 10M 56.4 37.5 54.7 58.1 79.6 75.0 82.0 87.2 86.6 69.2 58.1 87.9 85.0 73.1 87.6 80.8 61.7 63.6 Parse-then-Read Pipelines GPT-4 GPT-4o GLM-4 KimiChat Claude-3 Qwen-2.5 ERNIE-3.5 GPT-4 GPT-3.5 ChatGLM3 Gemma Mixtral InternLM2 Llama-3 Yi-1.5 Llama-2 Phi-3 API 0409 128k 70.0 47.9 68.9 70.7 API 0125 16k 49.8 24.0 58.8 50.3 Open Open Open Open Open Open 6B 7B 7B 7B 8B 9B 128k 34.7 41.7 58.1 51.3 8k 34.3 12.5 43.2 34.0 32k 42.6 29.2 58.8 50.3 32k 38.6 27.1 52.0 46.1 8k 44.6 23.6 61.5 54.5 16k 40.6 26.4 58.1 52.4 Open 13B 4k 20.8 18.4 29.7 23.6 Open 14B 128k 50.2 44.4 65.5 64.4 InternLM2 Open 20B 32k 43.2 28.5 59.5 54.5 Yi-1.5 Open 34B 16k 47.2 27.1 59.5 56.5 Command-R Open 35B 128k 49.5 38.9 66.2 64.4 Mixtral-8x7B Open 47B 32k 48.5 31.9 60.1 59.2 Llama-3 Open 70B 8k 52.1 25.3 68.2 59.2 3.2 Results and Discussion 74.7 62.7 50.3 65.3 64.6 31.8 47.7 63.3 37.0 40.3 17.2 33.8 28.9 29.2 33.8 15.9 45.8 33.4 39.0 50.0 42.9 38.6 50.8 50.4 48.8 50.4 54.3 36.0 36.8 54.3 42.6 31.0 21.3 38.4 35.3 45.0 45.7 21.7 45.3 43.0 49.2 49.6 46.9 49.2 37.1 17.7 33.1 71.8 58.9 58.1 54.0 70.2 44.4 12.1 77.4 30.6 25.8 49.2 27.4 12.9 44.4 22.6 19.4 13.7 12.1 56.5 69.8 63.1 56.5 70.9 67.6 46.9 51.8 67.9 49.6 46.2 34.6 48.7 42.9 49.6 47.9 27.2 57.4 49.4 50.1 56.4 52.7 54.5 93.6 83.7 58.1 65.1 82.0 65.7 86.6 83.1 55.2 76.7 80.8 78.5 80.8 81.4 90.7 79.1 65.0 70.4 43.0 71.8 63.3 68.0 66.0 43.4 77.4 73.3 68.2 78.4 76.0 69.2 Table 4 showcases the performance of various document reading systems on DOCBENCH. Our findings reveal substantial variations in document reading capabilities among these systems, driven by differences in their foundational models, context length limitations, diverse design and implementation approaches, and etc. In this section, we will provide further discussions to delve deeper into the pros and cons of existing systems, as well as uncover the core challenges posed by DOCBENCH. 3.2.1 Interpreting Multi-modal and Metadata Information Figure 5 presents a case study illustrating the unique challenge of answering multi-modal questions in DOCBENCH. We observe that leading proprietary LLM-based systems often fail due to errors in one of the steps in the Location→Extraction→Calculation sequence. Take the first case study as an example, in the first step, KimiChat fails to locate the relevant chart on page 17. In the extraction phase, Claude-3 misidentifies the data as 288 & 348, instead of the correct 326 & 390. Finally, while GPT-4 locates and extracts the correct information, it errs in calculating the percentage change, demonstrating the complexity of these questions. Interestingly, parse-then-read pipelines can achieve reasonable performance on multi-modal questions (e.g., 63.3% for GPT-4). This is likely because the 7 Figure 5: To address multi-modal questions in DOCBENCH, it is essential to: (i) identify the relevant figure/table (Location); (ii) extract specific data (Extraction); (iii) perform necessary calculations (Calculation). In the first case study, KimiChat fails to locate the figure, Claude-3 retrieves incorrect data, and GPT-4, despite succeeding in the first two steps, struggles with the calculation. parsing process captures certain table information, and documents often include textual descriptions of figures. 
Meanwhile, for metadata-related questions, current methods generally lack attention to global information, resulting in relative low performances (below 55%). 3.2.2 Handling Lengthy Documents Handling lengthy documents is demanding, especially in real-world scenarios where doc- ument size can be virtually unlimited. Pro- prietary LLM-based systems struggle with up- loading extensive files, while the parse-then- read pipelines with open-sourced LLMs are con- strained by their maximum context length, lead- ing to varying degrees of information loss. As shown in Figure 6, both methods perform poorly in the finance domain but achieve higher perfor- mance in the news domain. This discrepancy arises because financial documents are typically longer and contain richer information, whereas news files are limited to single front pages with fewer messages. Furthermore, certain strong models with relatively short context lengths may excel with smaller files, but context length becomes a crucial factor when it comes to large files. For instance, the 8k Llama-3 family performs exceptionally well in the news domain, but is outperformed Figure 6: Average accuracy (%) of two methods under five different domains. 8 Question: What is the percentage change in the number of offshore stores for the company from FY18 to FY19? LocationExtractionCalculation(390-326)/326 = 19.63%GPT-4KimiChatClaude-3The percentage change in the number of offshore stores from FY18 to FY19 is 12.8% (326 stores in FY18 to 390 in FY19) . The percentage change in the number of offshore stores from FY18 to FY19 is not provided in the document. The number of offshore stores increased by 21% from 288 in FY18 to 348 in FY19. Question: Which region has the highest number of employees of the company and how many?LocationExtractionCalculationmax(43181, 22674, 678...)GPT-4KimiChatClaude-3 Head Genentech Research & Early Development (gRED) The region with the highest number of employees of the company is Asia, with 22,674 employees. This information is provided in the text: "Asia 22,674.", Europe has the highest number of employees at Roche, with a total of 43,181 employees. Question: Which domain has the highest number of dialogues in the test set?LocationExtractionCalculationGPT-4KimiChatClaude-3The "Restaurant" domain has the highest number of dialogues in the test set, with 437 dialogues.The "Taxi" domain has the highest number of dialogues in the test set, with 195 dialogues.The restaurant domain has the highest number of dialogues (437) in the test set. max(394, 494, 395, 437...)LLM-based SystemsParse-then-Read Pipelines020406080Average Accuracy (%)59.144.546.729.866.257.967.152.876.877.5Aca.Fin.Gov.LawsNews by all the 128k models in the finance domain. Besides, we discover that KimiChat and Command-R, which are specifically enhanced for long-context and Retrieval-Augmented Generation (RAG) capa- bilities, achieve decent results on text-only questions. Therefore, a key challenge lies in adapting these systems to handle documents of varying lengths while balancing the foundational model’s capabilities and context length constraints. 3.2.3 Faithfulness to User-provided Documents Most existing document reading systems falter when faced with unanswerable questions based on the provided document, exhibiting a lack of fidelity. Remarkably, Gemma and KimiChat perform better in such scenarios, which represents a crucial capability since users often expect systems to answer questions strictly based on given files. 
Intriguingly, despite the commonly-shared base model on GPT-4, there is a notable performance gap between the system and the parse-then- read pipeline in handling unanswerable questions (i.e., 37.1% and 70.2 % for system and pipeline, respectively). We analyze that this may be due to: (i) the proprietary LLM-based system have undergone optimizations on the base model, potentially causing overfitting; (ii) GPT-4 tends to adhere more closely to the in-context learning information. Such phenomenon thus underscores a critical challenge for future document reading systems on enhancing fidelity to the given documents. 4 Related Works 4.1 Recent Advances of LLMs and LLM-based Systems The latest generation of LLMs, such as GPT-4 [2], Llama-3 [37] and Claude-3 [3], have significantly extended the capabilities of language models [7, 40, 50]. These models are pre-trained on vast amounts of web-scale data, enabling them to perform a wide range of human-instructed tasks with impressive performance. Despite their remarkable performance, standalone LLMs may not be sufficient for many real-world applications. For example, LLMs lack access to real-time information and may struggle with tasks that require up-to-date knowledge [38]. Moreover, real-world applications often require non-text inputs parsing, code execution, API calling and interaction with external environments [15, 18, 21, 23, 44, 52]. The overall task completion usually requires multiple reasoning, execution and reflection steps that cannot be accomplished in a simple input-output manner [33, 41, 47]. To overcome the limitations of standalone LLMs, recent efforts have incorporated additional components and sophisticated system design. These systems, such as Microsoft’s Co-Pilot17 and OpenAI’s GPT-4 all-in-one18, aim to provide more comprehensive and practical solutions for real-world applications. Other pioneering efforts on designing LLM-based systems include web agents [16, 26, 51], software agents [21, 46] and computer agents [43] that can interact with external resources (e.g., websites, search engine, code repositories or computers) and perform multi-step tasks. The success of these systems relies on integrating powerful LLMs with well-designed architectures and components that enable them to handle complex tasks effectively. 4.2 Document reading: Datasets and Methods Document reading is a critical area where LLM-based systems have demonstrated significant ad- vancements. Proprietary developers such as OpenAI19 and Anthropic20 have introduced advanced systems that can take a user-provided document as input, parse its structure, extract relevant meta- data, and handle long texts and images to provide accurate responses. While these systems build upon the fundamental capabilities of their underlying LLMs [2–4, 49], they differ in their design and implementation, with some systems excelling in long-context reading and others focusing on retrieval-augmented methods to improve document reading ability. Despite claims of effectiveness and efficiency in online public blogs, the absence of a standardized benchmark makes it difficult to objectively evaluate and compare the document reading performance across these systems. Existing benchmarks relevant to document reading are unable to adequately reflect the real performance of these systems. 
Datasets focusing on document understanding such as Doc2Dial [13], Condi- tionalQA [34] and those specifically focusing on long-context reading like NarrativeQA [19] and 17https://copilot.microsoft.com 18https://chat.openai.com 19OpenAI’s ChatGPT: https://chat.openai.com 20Anthropic’s Claude: https://claude.ai/chats 9 QuALITY [32], primarily use text as input only, ignoring the complex nature of document struc- ture and multi-modal information. On the other hand, multi-modal document reading datasets like DocVQA [29], ChartQA [27], OCR-VQA [30], and InfoVQA [28] include multi-modal inputs and preserve the original document structure and layout. However these datasets often capture only parts of document (e.g. tables or figures) and ignored substantial amount of textual content. Different from previous works, DocBench requires systems to process the full documents as intact files and covers different types of questions targeting various abilities, which can more accurately evaluate the capabilities of LLM-based document reading systems in real-world scenarios. 5 Conclusion In this paper, we introduce DOCBENCH, a novel benchmark created to assess LLM-based document reading systems in a comprehensive and fine-grained manner. DOCBENCH consists of 229 documents and 1,102 questions, spanning 5 domains and 4 question types, developed with the help of human annotators and synthetic questions. We evaluate both proprietary LLM systems, accessible via web interfaces or APIs, and a parse-then-read approach using open-source LLMs. Our findings reveal significant disparities in document reading capabilities among these systems, highlighting current limitations, presenting potential challenges, and thus driving forward progress in this research field. References [1] Marah Abdin, Sam Ade Jacobs, Ammar Ahmad Awan, Jyoti Aneja, Ahmed Awadallah, Hany Awadalla, Nguyen Bach, Amit Bahree, Arash Bakhtiari, Harkirat Behl, et al. 2024. Phi-3 technical report: A highly capable language model locally on your phone. arXiv preprint arXiv:2404.14219 (2024). [2] Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. 2023. Gpt-4 technical report. arXiv preprint arXiv:2303.08774 (2023). [3] Anthropic. 2024. Claude 3 haiku: our fastest model yet. (2024). https://www.anthropic. com/news/claude-3-haiku. [4] Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, et al. 2023. Qwen technical report. arXiv preprint arXiv:2309.16609 (2023). [5] Abeba Birhane, Atoosa Kasirzadeh, David Leslie, and Sandra Wachter. 2023. Science in the age of large language models. Nature Reviews Physics 5, 5 (2023), 277–280. [6] Zheng Cai, Maosong Cao, Haojiong Chen, Kai Chen, Keyu Chen, Xin Chen, Xun Chen, Internlm2 technical report. arXiv preprint Zehui Chen, Zhi Chen, Pei Chu, et al. 2024. arXiv:2403.17297 (2024). [7] Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, et al. 2024. A survey on evaluation of large language models. ACM Transactions on Intelligent Systems and Technology 15, 3 (2024), 1–45. [8] Zhiyu Zoey Chen, Jing Ma, Xinlu Zhang, Nan Hao, An Yan, Armineh Nourbakhsh, Xianjun Yang, Julian McAuley, Linda Petzold, and William Yang Wang. 2024. A Survey on Large Language Models for Critical Societal Domains: Finance, Healthcare, and Law. arXiv preprint arXiv:2405.01769 (2024). [9] CohereAI. 2024. 
Introducing Command R. (2024). https://docs.cohere.com/docs/command-r

[10] Jiaxi Cui, Zongjian Li, Yang Yan, Bohua Chen, and Li Yuan. 2023. ChatLaw: Open-source legal large language model with integrated external knowledge bases. arXiv preprint arXiv:2306.16092 (2023).

[11] Pradeep Dasigi, Kyle Lo, Iz Beltagy, Arman Cohan, Noah A Smith, and Matt Gardner. 2021. A dataset of information-seeking questions and answers anchored in research papers. arXiv preprint arXiv:2105.03011 (2021).

[12] Zhengxiao Du, Yujie Qian, Xiao Liu, Ming Ding, Jiezhong Qiu, Zhilin Yang, and Jie Tang. 2021. GLM: General language model pretraining with autoregressive blank infilling. arXiv preprint arXiv:2103.10360 (2021).

[13] Song Feng, Hui Wan, Chulaka Gunasekara, Siva Patel, Sachindra Joshi, and Luis Lastras. 2020. doc2dial: A Goal-Oriented Document-Grounded Dialogue Dataset. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), Bonnie Webber, Trevor Cohn, Yulan He, and Yang Liu (Eds.). Association for Computational Linguistics, Online, 8118–8128. https://doi.org/10.18653/v1/2020.emnlp-main.652

[14] Jinlan Fu, See-Kiong Ng, Zhengbao Jiang, and Pengfei Liu. 2023. GPTScore: Evaluate as You Desire. arXiv:2302.04166 [cs.CL]

[15] Siyuan Guo, Cheng Deng, Ying Wen, Hechang Chen, Yi Chang, and Jun Wang. 2024. DS-Agent: Automated Data Science by Empowering Large Language Models with Case-Based Reasoning. arXiv:2402.17453 [cs.LG]

[16] Hongliang He, Wenlin Yao, Kaixin Ma, Wenhao Yu, Yong Dai, Hongming Zhang, Zhenzhong Lan, and Dong Yu. 2024. WebVoyager: Building an End-to-End Web Agent with Large Multimodal Models. arXiv preprint arXiv:2401.13919 (2024).

[17] Albert Q Jiang, Alexandre Sablayrolles, Antoine Roux, Arthur Mensch, Blanche Savary, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Emma Bou Hanna, Florian Bressand, et al. 2024. Mixtral of experts. arXiv preprint arXiv:2401.04088 (2024).

[18] Carlos E. Jimenez, John Yang, Alexander Wettig, Shunyu Yao, Kexin Pei, Ofir Press, and Karthik Narasimhan. 2023. SWE-bench: Can Language Models Resolve Real-World GitHub Issues? arXiv:2310.06770 [cs.CL]

[19] Tomáš Kočiský, Jonathan Schwarz, Phil Blunsom, Chris Dyer, Karl Moritz Hermann, Gábor Melis, and Edward Grefenstette. 2018. The NarrativeQA reading comprehension challenge. Transactions of the Association for Computational Linguistics 6 (2018), 317–328.

[20] Woosuk Kwon, Zhuohan Li, Siyuan Zhuang, Ying Sheng, Lianmin Zheng, Cody Hao Yu, Joseph Gonzalez, Hao Zhang, and Ion Stoica. 2023. Efficient memory management for large language model serving with PagedAttention. In Proceedings of the 29th Symposium on Operating Systems Principles. 611–626.

[21] Cognition Labs. 2024. Devin, AI software engineer. (2024). https://www.cognition.ai/blog/introducing-devin

[22] Jinqi Lai, Wensheng Gan, Jiayang Wu, Zhenlian Qi, and Philip S Yu. 2023. Large language models in law: A survey. arXiv preprint arXiv:2312.03718 (2023).

[23] Kuang-Huei Lee, Xinyun Chen, Hiroki Furuta, John Canny, and Ian Fischer. 2024. A Human-Inspired Reading Agent with Gist Memory of Very Long Contexts. arXiv preprint arXiv:2402.09727 (2024).

[24] Yang Liu, Dan Iter, Yichong Xu, Shuohang Wang, Ruochen Xu, and Chenguang Zhu. 2023. G-Eval: NLG Evaluation using GPT-4 with Better Human Alignment. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, Houda Bouamor, Juan Pino, and Kalika Bali (Eds.). Association for Computational Linguistics, Singapore, 2511–2522.
https://doi.org/10.18653/v1/2023.emnlp-main.153

[25] Zhuang Liu, Degen Huang, Kaiyu Huang, Zhuang Li, and Jun Zhao. 2021. FinBERT: A pre-trained financial language representation model for financial text mining. In Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence. 4513–4519.

[26] Kaixin Ma, Hongming Zhang, Hongwei Wang, Xiaoman Pan, Wenhao Yu, and Dong Yu. 2023. LASER: LLM Agent with State-Space Exploration for Web Navigation. arXiv:2309.08172 [cs.CL]

[27] Ahmed Masry, Do Xuan Long, Jia Qing Tan, Shafiq Joty, and Enamul Hoque. 2022. ChartQA: A benchmark for question answering about charts with visual and logical reasoning. arXiv preprint arXiv:2203.10244 (2022).

[28] Minesh Mathew, Viraj Bagal, Rubèn Tito, Dimosthenis Karatzas, Ernest Valveny, and CV Jawahar. 2022. InfographicVQA. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision. 1697–1706.

[29] Minesh Mathew, Dimosthenis Karatzas, and CV Jawahar. 2021. DocVQA: A dataset for VQA on document images. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision. 2200–2209.

[30] Anand Mishra, Shashank Shekhar, Ajeet Kumar Singh, and Anirban Chakraborty. 2019. OCR-VQA: Visual question answering by reading text in images. In 2019 International Conference on Document Analysis and Recognition (ICDAR). IEEE, 947–952.

[31] OpenAI. 2022. Introducing ChatGPT. (2022). https://openai.com/blog/chatgpt.

[32] Richard Yuanzhe Pang, Alicia Parrish, Nitish Joshi, Nikita Nangia, Jason Phang, Angelica Chen, Vishakh Padmakumar, Johnny Ma, Jana Thompson, He He, et al. 2022. QuALITY: Question Answering with Long Input Texts, Yes!. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. 5336–5358.

[33] Noah Shinn, Federico Cassano, Edward Berman, Ashwin Gopinath, Karthik Narasimhan, and Shunyu Yao. 2023. Reflexion: Language Agents with Verbal Reinforcement Learning. arXiv:2303.11366 [cs.AI]

[34] Haitian Sun, William Cohen, and Ruslan Salakhutdinov. 2022. ConditionalQA: A Complex Reading Comprehension Dataset with Conditional Answers. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Smaranda Muresan, Preslav Nakov, and Aline Villavicencio (Eds.). Association for Computational Linguistics, Dublin, Ireland, 3627–3637. https://doi.org/10.18653/v1/2022.acl-long.253

[35] Gemini Team, Rohan Anil, Sebastian Borgeaud, Yonghui Wu, Jean-Baptiste Alayrac, Jiahui Yu, Radu Soricut, Johan Schalkwyk, Andrew M Dai, Anja Hauth, et al. 2023. Gemini: a family of highly capable multimodal models. arXiv preprint arXiv:2312.11805 (2023).

[36] Gemma Team, Thomas Mesnard, Cassidy Hardin, Robert Dadashi, Surya Bhupatiraju, Shreya Pathak, Laurent Sifre, Morgane Rivière, Mihir Sanjay Kale, Juliette Love, et al. 2024. Gemma: Open models based on Gemini research and technology. arXiv preprint arXiv:2403.08295 (2024).

[37] Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. 2023. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288 (2023).

[38] Tu Vu, Mohit Iyyer, Xuezhi Wang, Noah Constant, Jerry Wei, Jason Wei, Chris Tar, Yun-Hsuan Sung, Denny Zhou, Quoc Le, et al. 2023. FreshLLMs: Refreshing large language models with search engine augmentation.
arXiv preprint arXiv:2310.03214 (2023).

[39] Jiaan Wang, Yunlong Liang, Fandong Meng, Zengkui Sun, Haoxiang Shi, Zhixu Li, Jinan Xu, Jianfeng Qu, and Jie Zhou. 2023. Is ChatGPT a Good NLG Evaluator? A Preliminary Study. In Proceedings of the 4th New Frontiers in Summarization Workshop, Yue Dong, Wen Xiao, Lu Wang, Fei Liu, and Giuseppe Carenini (Eds.). Association for Computational Linguistics, Singapore, 1–11. https://doi.org/10.18653/v1/2023.newsum-1.1

[40] Lei Wang, Chen Ma, Xueyang Feng, Zeyu Zhang, Hao Yang, Jingsen Zhang, Zhiyuan Chen, Jiakai Tang, Xu Chen, Yankai Lin, et al. 2024. A survey on large language model based autonomous agents. Frontiers of Computer Science 18, 6 (2024), 1–26.

[41] Xingyao Wang, Yangyi Chen, Lifan Yuan, Yizhe Zhang, Yunzhu Li, Hao Peng, and Heng Ji. 2024. Executable Code Actions Elicit Better LLM Agents. In ICML. arXiv:2402.01030

[42] Shijie Wu, Ozan Irsoy, Steven Lu, Vadim Dabravolski, Mark Dredze, Sebastian Gehrmann, Prabhanjan Kambadur, David Rosenberg, and Gideon Mann. 2023. BloombergGPT: A large language model for finance. arXiv preprint arXiv:2303.17564 (2023).

[43] Zhiyong Wu, Chengcheng Han, Zichen Ding, Zhenmin Weng, Zhoumianze Liu, Shunyu Yao, Tao Yu, and Lingpeng Kong. 2024. OS-Copilot: Towards Generalist Computer Agents with Self-Improvement. arXiv:2402.07456 [cs.AI]

[44] Tianbao Xie, Danyang Zhang, Jixuan Chen, Xiaochuan Li, Siheng Zhao, Ruisheng Cao, Toh Jing Hua, Zhoujun Cheng, Dongchan Shin, Fangyu Lei, et al. 2024. OSWorld: Benchmarking Multimodal Agents for Open-Ended Tasks in Real Computer Environments. arXiv preprint arXiv:2404.07972 (2024).

[45] Hongyang Yang, Xiao-Yang Liu, and Christina Dan Wang. 2023. FinGPT: Open-source financial large language models. arXiv preprint arXiv:2306.06031 (2023).

[46] John Yang, Carlos E Jimenez, Alexander Wettig, Kilian Lieret, Shunyu Yao, Karthik Narasimhan, and Ofir Press. 2024. SWE-agent: Agent-Computer Interfaces Enable Automated Software Engineering. arXiv preprint arXiv:2405.15793 (2024).

[47] Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik R Narasimhan, and Yuan Cao. 2023. ReAct: Synergizing Reasoning and Acting in Language Models. In The Eleventh International Conference on Learning Representations.

[48] Alex Young, Bei Chen, Chao Li, Chengen Huang, Ge Zhang, Guanwei Zhang, Heng Li, Jiangcheng Zhu, Jianqun Chen, Jing Chang, et al. 2024. Yi: Open foundation models by 01.AI. arXiv preprint arXiv:2403.04652 (2024).

[49] Aohan Zeng, Xiao Liu, Zhengxiao Du, Zihan Wang, Hanyu Lai, Ming Ding, Zhuoyi Yang, Yifan Xu, Wendi Zheng, Xiao Xia, et al. 2022. GLM-130B: An open bilingual pre-trained model. arXiv preprint arXiv:2210.02414 (2022).

[50] Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang, Xiaolei Wang, Yupeng Hou, Yingqian Min, Beichen Zhang, Junjie Zhang, Zican Dong, et al. 2023. A survey of large language models. arXiv preprint arXiv:2303.18223 (2023).

[51] Boyuan Zheng, Boyu Gou, Jihyung Kil, Huan Sun, and Yu Su. 2024. GPT-4V(ision) is a Generalist Web Agent, if Grounded. arXiv:2401.01614 [cs.IR]

[52] Shuyan Zhou, Frank F Xu, Hao Zhu, Xuhui Zhou, Robert Lo, Abishek Sridhar, Xianyi Cheng, Yonatan Bisk, Daniel Fried, Uri Alon, et al. 2023. WebArena: A realistic web environment for building autonomous agents. arXiv preprint arXiv:2307.13854 (2023).

A Instruction Prompts

A.1 Response Evaluation

Detailed instruction prompts for response evaluation are shown in Table 5.

A.2 QA-pair Generation

Details of the instruction prompts for generating QA pairs are given in Table 6.
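To make the evaluation protocol concrete, the following is a minimal sketch of how the Table 5 judge prompt could be templated and scored. It assumes an OpenAI-style chat client; the abbreviated template, function names, and judge model choice are illustrative assumptions, not part of DOCBENCH's released code.

# Minimal sketch of the Table 5 LLM-as-judge protocol (illustrative only).
# Assumes the `openai` Python client; prompt text abbreviated from Table 5.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

JUDGE_TEMPLATE = """Task Overview: You are tasked with evaluating user answers ...
Question: {question}
User Answer: {sys_ans}
Reference Answer: {ref_ans}
Reference Text: {ref_text}
Evaluation Form (score ONLY):
- Correctness:"""

def judge(question: str, sys_ans: str, ref_ans: str, ref_text: str) -> int:
    """Return a binary correctness label (0 or 1) from the judge model."""
    prompt = JUDGE_TEMPLATE.format(question=question, sys_ans=sys_ans,
                                   ref_ans=ref_ans, ref_text=ref_text)
    resp = client.chat.completions.create(
        model="gpt-4",  # judge model; any capable LLM could be substituted
        messages=[{"role": "system", "content": "You are a helpful evaluator."},
                  {"role": "user", "content": prompt}],
        temperature=0,
    )
    text = resp.choices[0].message.content.strip()
    return 1 if text and text[0] == "1" else 0  # Table 5 asks for "0" or "1" only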
We discover that simply passing diagrams to GPT-4V leads to subpar question quality. This issue likely stems from the fact that figures or tables without accompanying text descriptions typically lack sufficient information, causing the generated QA pairs to deviate from their intended meanings. In addition, we observe that adding difficulty settings for QA generation (e.g., Easy, Medium, Hard) in the instruction prompt results in higher quality; we hypothesize that the difficulty settings let the model implicitly compare candidate questions and favor higher-quality generations.

Table 5: Instruction Prompts in Response Evaluation.

System Content: You are a helpful evaluator.

Prompt:
Task Overview: You are tasked with evaluating user answers based on a given question, reference answer, and additional reference text. Your goal is to assess the correctness of the user answer using a specific metric.
Evaluation Criteria:
1. Yes/No Questions: Verify if the user's answer aligns with the reference answer in terms of a "yes" or "no" response.
2. Short Answers/Directives: Ensure key details such as numbers, specific nouns/verbs, and dates match those in the reference answer.
3. Abstractive/Long Answers: The user's answer can differ in wording but must convey the same meaning and contain the same key information as the reference answer to be considered correct.
Evaluation Process:
1. Identify the type of question presented.
2. Apply the relevant criteria from the Evaluation Criteria.
3. Compare the user's answer against the reference answer accordingly.
4. Consult the reference text for clarification when needed.
5. Score the answer with a binary label 0 or 1, where 0 denotes wrong and 1 denotes correct.
NOTE that if the user answer is 0 or an empty string, it should get a 0 score.
Question: {{question}}
User Answer: {{sys_ans}}
Reference Answer: {{ref_ans}}
Reference Text: {{ref_text}}
Evaluation Form (score ONLY):
- Correctness:

B Performance Comparison

Figure 7 demonstrates the relative performance of LLM-based systems and parse-then-read pipelines against the best on DOCBENCH. For LLM-based systems, KimiChat consistently scores high across various metrics, demonstrating balanced performance. Notably, GPT-4 performs poorly in the unanswerable category, indicating potential overfitting in optimized GPT-4 file systems, which leads to decreased fidelity to given documents. Additionally, Claude-3 excels in the meta-data category, highlighting its superior ability to comprehend high-level metadata information. For parse-then-read pipelines, we select the models with the highest overall accuracy for comparison. Unlike the LLM-based systems, GPT-4 demonstrates consistently high and balanced performance across all aspects within this pipeline. Notably, significant discrepancies arise in handling multi-modal and unanswerable questions, where GPT-4 and Gemma exhibit clear distinctions from the remaining methods.

C Analysis of Input Sources

Table 7 presents the impact of different input sources on model performance. We provide questions to GPT-4 and GPT-4o, both with and without attached files. Remarkably, even without files, the models correctly answer a portion of the questions (19.1% for GPT-4 and 21.7% for GPT-4o). Our analysis reveals that the correctly answered questions are predominantly textual and are largely associated with the government, law, and news domains.

Table 6: Instruction Prompts in QA-pair Generation.
System Content: You are a helpful assistant that can generate question-answer pairs.

Text-only QA: Based on the above text, please design three question-answer pairs with different levels of difficulty: Easy, Medium, Hard. The questions should be close-ended and should be answered based on the provided text. The answer form should be as diverse as possible, including [Yes/No, Short Answer, Long Answer, Abstractive Answer]. You should provide the reference in the text and the answer form if possible. The output should be formalized as: '''Q: | A: | Reference: | Difficulty Level: | Answer Form:'''

Multimodal QA (w/ table + text): Based on the above table and text, please design three question-answer pairs with different levels of difficulty: Easy, Medium, Hard. The text provided is related to the table and can provide more reference for question generation, but the focus is still on the table itself. These questions require locating specific information, simple or complex calculations, comparisons, finding the maximum and minimum, reading across rows and columns, etc. Note that these questions also need to be realistic. You should provide the reason if possible. The output should be formalized as: '''Q: | A: | Reference: | Difficulty Level: | Answer Form:'''

Multimodal QA (w/ figure + text): Based on the above figure and text, please design three question-answer pairs with different levels of difficulty: Easy, Medium, Hard. The text provided is related to the figure and can provide more reference for question generation, but the focus is still on the figure itself. These questions require a deep reading of the meaning of the image. Note that these questions also need to be realistic. You should provide the reason if possible. The output should be formalized as: '''Q: | A: | Reason: | Difficulty Level: |'''

Multimodal QA (w/ table): Based on the above image, please design three question-answer pairs with different levels of difficulty: Easy, Medium, Hard. These questions require locating specific information, simple or complex calculations, comparisons, finding the maximum and minimum, reading across rows and columns, etc. Note that these questions also need to be realistic. You should provide the reason if possible. The output should be formalized as: '''Q: | A: | Reason: | Difficulty Level: |'''

Multimodal QA (w/ figure): Based on the above image, please design three question-answer pairs with different levels of difficulty: Easy, Medium, Hard. These questions require a deep reading of the meaning of the image. Note that these questions also need to be realistic. You should provide the reason if possible. The output should be formalized as: '''Q: | A: | Reason: | Difficulty Level: |'''

This trend suggests that the models' underlying training data is heavily skewed towards these categories, enabling them to answer some questions accurately without additional files. Moreover, as GPT-4o is an optimized version of GPT-4, it likely benefits from broader and more extensive training data.

Figure 7: Performance (relative) of the two major methods on DOCBENCH against the best. [The figure's panels compare LLM-based systems (Kimi, GPT-4, Claude-3, GLM-4, ERNIE-3.5, Qwen-2.5) and parse-then-read pipelines (GPT-4, Command-R-35B, Phi-3, Llama-3-70B, Mixtral-8x7B, InternLM2-20B, Yi-1.5-34B, Gemma, ChatGLM-6B) across the Overall, Text-only, Multi-modal, Meta-data, and Unanswerable categories.]

Table 7: Analyzing the influence of input sources: we deliver questions with attached files and without files to GPT-4 and GPT-4o for evaluation, respectively.

                  |            Domain             |             Type            |
Methods           | Aca.  Fin.  Gov.  Laws  News  | Text. Multi. Meta.  Una.    | Overall Acc.
GPT-4  w/ file    | 65.7  65.3  75.7  69.6  79.6  | 87.9  74.7   50.8   37.1    | 69.8
GPT-4  w/o file   | 10.9  10.8  23.0  29.3  32.6  | 40.8   8.1    1.6   10.5    | 19.1
GPT-4o w/ file    | 56.4  56.3  73.0  65.5  75.0  | 85.0  62.7   50.4   17.7    | 63.1
GPT-4o w/o file   | 11.2  13.5  29.1  31.9  36.0  | 46.6  10.7    2.3    6.5    | 21.7
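As a quick illustration of the ablation in Table 7, the following sketch computes the per-category gap between answering with and without the attached file. The values are transcribed from the table above (the reconstruction of the flattened layout is ours), so treat them accordingly.

# Quick sketch: compute the with-file vs. without-file accuracy gap per
# category from Table 7 (values transcribed above; illustrative only).
import pandas as pd

cols = ["Aca.", "Fin.", "Gov.", "Laws", "News",
        "Text.", "Multi.", "Meta.", "Una.", "Overall"]
rows = {
    ("GPT-4", "w/ file"):   [65.7, 65.3, 75.7, 69.6, 79.6, 87.9, 74.7, 50.8, 37.1, 69.8],
    ("GPT-4", "w/o file"):  [10.9, 10.8, 23.0, 29.3, 32.6, 40.8,  8.1,  1.6, 10.5, 19.1],
    ("GPT-4o", "w/ file"):  [56.4, 56.3, 73.0, 65.5, 75.0, 85.0, 62.7, 50.4, 17.7, 63.1],
    ("GPT-4o", "w/o file"): [11.2, 13.5, 29.1, 31.9, 36.0, 46.6, 10.7,  2.3,  6.5, 21.7],
}
df = pd.DataFrame(rows, index=cols).T          # MultiIndex rows: (model, condition)
gap = df.xs("w/ file", level=1) - df.xs("w/o file", level=1)
print(gap.round(1))  # w/o-file accuracy is highest for Gov/Laws/News, matching the analysis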
ai_researcher
2
Filter-then-Generate_Large_Language_Models_with_Structure-Text_Adapter_for_Knowledge_Graph_Completion.pdf
arXiv:1803.08503v1 [stat.AP] 22 Mar 2018

Kalman Filter, Unscented Filter and Particle Flow Filter on Non-linear Models

Author: Yan Zhao
Advisor: Prof. Zhongqiang Zhang

Contents

1 Kalman Filter
  1.0.1 Linear Dynamic Systems in Discrete Time
  1.0.2 Example of Application
  1.0.3 Solving for Kalman Gain
  1.0.4 Solving for Priori and Posterior Estimation
  1.0.5 Solving for Prior and Posterior Covariance
  1.0.6 Results for Yield and Real Return Model
2 Unscented Filtering and Nonlinear Estimation
  2.0.1 General Algorithms for Unscented Kalman Filter
  2.0.2 Implementation for Yield and Real Return Model
  2.0.3 Results for Yield and Real Return Model
3 Particle Flow Filter
  3.0.1 Generalized Gromov Method for Stochastic Particle Flow Filters
  3.0.2 Implementation of Particle Flow Filter
  3.0.3 Results for Yield and Real Return Model

Abstract

Filters, especially the wide range of Kalman Filters, have shown their power in predicting variables of stochastic models with higher accuracy than traditional statistical methods. Updating the mean and covariance at each step makes Bayesian inference more meaningful. In this paper, we focus on the derivation and implementation of three powerful filters: the Kalman Filter, the Unscented Kalman Filter, and the Particle Flow Filter. Comparing these different types of filters clarifies which applications each is suited for.

Chapter 1

Kalman Filter

The Kalman Filter, also called the Linear Quadratic Estimator (LQE), is used to minimize the estimation error for unknown variables in noisy stochastic systems. It works recursively, updating the estimate as observed measurements arrive over time. It contains two models: the first is the observation (plant) model and the second is the measurement model. The plant model, involving plant noise, is used to generate the prior estimate of the current state variables; the measurement model, including observation noise, is used to update the estimate and generate the posterior estimate. The Kalman Filter has wide applications, such as predicting natural weather and prices of traded commodities. It has also been used to monitor complex dynamic systems, like signal processing in GPS and motion monitoring in robotics. The Kalman Filter works perfectly on linear models, and its extended versions, the Extended Kalman Filter and the Unscented Kalman Filter, have been applied to non-linear problems.

1.0.1 Linear Dynamic Systems in Discrete Time

We suppose that the stochastic system can be represented by the following:

Plant model: x_k = φ_{k−1} x_{k−1} + w_{k−1}, with w_k ∼ N(0, Q_k)   (1.1)

Measurement model: z_k = H_k x_k + v_k, with v_k ∼ N(0, R_k)   (1.2)

Here v_k and w_k are assumed to be independent normal random processes with mean zero. x_k has known initial value x_0 and known initial covariance matrix P_0. The goal is to find estimates x̂_k, expressed as functions of z_k, such that the mean-squared error is minimized. Denote P_k(−) as the prior covariance matrix for x at time k, P_k(+) as the posterior covariance matrix for x at time k, K̄_k as the Kalman gain at time k, x̂_k(−) as the prior estimate of x_k, and x̂_k(+) as the posterior estimate of x_k.
By using orthogonality, we can prove the following updating equations:

P_k(−) = φ_{k−1} P_{k−1}(+) φ_{k−1}^T + Q_{k−1}   (1.3)
K̄_k = P_k(−) H_k^T [H_k P_k(−) H_k^T + R_k]^{−1}   (1.4)
P_k(+) = [I − K̄_k H_k] P_k(−)   (1.5)
x̂_k(−) = φ_{k−1} x̂_{k−1}(+)   (1.6)
x̂_k(+) = x̂_k(−) + K̄_k [z_k − H_k x̂_k(−)]   (1.7)

1.0.2 Example of Application

Consider a dividend yield and S&P real return model for stocks, in which X_n is the dividend yield, δR_n is the real return, and Y_n is a two-dimensional vector of observations of X_n and δR_n from year 1945 to 2010. ΔW_{1,n}, ΔW_{2,n} are independent Brownian motion increments with ΔW_{i,n} = W_{i,n+1} − W_{i,n}, i = 1, 2. B_{1,n}, B_{2,n} are also independent Brownian motion increments. k, θ, σ, µ, a, ρ, Q_1 and Q_2 are parameters with the following given values:

Table 1.1: Parameters
k = 2.0714, θ = 2.0451, σ = 0.3003, µ = 0.1907, a = 0.9197, ρ = 1.6309, Q_1 = 0.0310, Q_2 = −0.8857

The model is Z_n = (X_n, δR_n)^T with

X_n = (1/(1+k)) X_{n−1} + kθ/(1+k) + (σ/(1+k)) √X_{n−1} ΔW_{1,n},
δR_n = µ X_n + a √X_{n−1} (ρ ΔW_{1,n} + √(1 − ρ²) ΔW_{2,n}),

Y_n = (Y_{1,n}, Y_{2,n})^T = (X_n + Q_1 B_{1,n}, δR_n + Q_2 B_{2,n})^T.

Rewriting Z_n and Y_n is necessary: in the observation and measurement models, Z_n should be a function of Z_{n−1} and Y_n a function of Z_n. First, let us rewrite Z_n. We can see that X_n is represented in terms of X_{n−1}, which is an element of the vector Z_{n−1}, while δR_n is represented in terms of X_n. So we need to rewrite δR_n in terms of X_{n−1}:

δR_n = (µ/(1+k)) X_{n−1} + µkθ/(1+k) + (µσ√X_{n−1}/(1+k) + aρ√X_{n−1}) ΔW_{1,n} + a √X_{n−1} √(1 − ρ²) ΔW_{2,n}.   (1.8)

Then,

Z_n = (X_n, δR_n)^T = Φ Z_{n−1} + D + √X_{n−1} C W_n,   (1.9)

where we denote

Φ = [ 1/(1+k), 0 ; µ/(1+k), 0 ],
D = [ kθ/(1+k) ; µkθ/(1+k) ],
C = [ σ/(1+k), 0 ; µσ/(1+k) + aρ, a√(1 − ρ²) ],
W_n = [ ΔW_{1,n} ; ΔW_{2,n} ].

As a result, we can write Z_n as:

Z_n = Φ_{n−1} Z_{n−1} + D + √X_{n−1} C W_n.   (1.10)

Next is to rewrite Y_n. Denote

H_n = [ 1, 0 ; 0, 1 ],  V = [ Q_1, 0 ; 0, Q_2 ],  B_n = [ B_{1,n} ; B_{2,n} ].

We can rewrite Y_n as:

Y_n = H_n Z_n + V B_n.   (1.11)
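To make the state-space form (1.10)–(1.11) concrete, here is a minimal NumPy simulation sketch using the Table 1.1 parameters. The initial state is an illustrative assumption, and since the quoted ρ exceeds 1, the code takes |1 − ρ²| to keep the diffusion term real.

# Minimal simulation sketch of the yield/return state-space model
# (equations 1.10-1.11), using the Table 1.1 parameter values.
import numpy as np

k, theta, sigma, mu = 2.0714, 2.0451, 0.3003, 0.1907
a, rho, Q1, Q2 = 0.9197, 1.6309, 0.0310, -0.8857

Phi = np.array([[1/(1+k), 0.0], [mu/(1+k), 0.0]])
D = np.array([k*theta/(1+k), mu*k*theta/(1+k)])
C = np.array([[sigma/(1+k), 0.0],
              [mu*sigma/(1+k) + a*rho, a*np.sqrt(abs(1 - rho**2))]])  # |1-rho^2|: rho > 1
H = np.eye(2)
V = np.diag([Q1, Q2])

rng = np.random.default_rng(0)
Z = np.array([theta, mu*theta])       # assumed initial state (illustrative)
for n in range(66):                   # one step per year, 1945-2010
    X_prev = max(Z[0], 0.0)           # keep the sqrt argument non-negative
    Z = Phi @ Z + D + np.sqrt(X_prev) * (C @ rng.standard_normal(2))   # (1.10)
    Y = H @ Z + V @ rng.standard_normal(2)                             # (1.11)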
1.0.3 Solving for Kalman Gain

The optimal updated estimate Ẑ_n(+) is a linear function of the a priori estimate Ẑ_n(−) and the measurement Y_n, that is,

Ẑ_n(+) = K_n^1 Ẑ_n(−) + K̄_n Y_n,   (1.12)

where K_n^1 and K̄_n are as yet unknown. We seek values of K_n^1 and K̄_n such that the estimate Ẑ_n(+) satisfies the orthogonality principle:

E[(Z_n − Ẑ_n(+)) Y_i^T] = 0, for i = 1, 2, ..., n − 1.   (1.13)

If one expands Z_n from equation (1.10) and Ẑ_n(+) from equation (1.12) and inserts them into equation (1.13), one observes:

E[(Φ_{n−1} Z_{n−1} + D + √X_{n−1} C W_n − K_n^1 Ẑ_n(−) − K̄_n Y_n) Y_i^T] = 0, for i = 1, 2, ..., n − 1.   (1.14)

Since W_n and V_n are uncorrelated, it follows that E[W_n Y_i^T] = 0 for 1 ≤ i ≤ n − 1. Using this result, one obtains:

E[(Φ_{n−1} Z_{n−1} + D − K_n^1 Ẑ_n(−) − K̄_n Y_n) Y_i^T] = 0, for i = 1, 2, ..., n − 1.   (1.15)

Then, by substituting for Y_n using equation (1.11),

E[(Φ_{n−1} Z_{n−1} + D − K_n^1 Ẑ_n(−) − K̄_n H_n Z_n − K̄_n V B_n) Y_i^T] = 0, for i = 1, 2, ..., n − 1.   (1.16)

Equation (1.16) can be rewritten as

Φ_{n−1} E[Z_{n−1} Y_i^T] + D E[Y_i^T] − K_n^1 E[Ẑ_n(−) Y_i^T] − K̄_n H_n E[Z_n Y_i^T] − K̄_n V E[B_n Y_i^T] = 0, for i = 1, 2, ..., n − 1.   (1.17)

We also know that E[B_n Y_i^T] = 0 for i = 1, 2, ..., n − 1, so equation (1.17) reduces to

E[(Z_n − K_n^1 Z_n − K̄_n H_n Z_n) Y_i^T] − K_n^1 E[(Ẑ_n(−) − Z_n) Y_i^T] = 0,
E[(Z_n − K_n^1 Z_n − K̄_n H_n Z_n) Y_i^T] = 0,
(I − K_n^1 − K̄_n H_n) E[Z_n Y_i^T] = 0,   (1.18)

where the middle step uses E[(Ẑ_n(−) − Z_n) Y_i^T] = 0. Equation (1.18) is satisfied for any given Z_n if

K_n^1 = I − K̄_n H_n.   (1.19)

Thus, K_n^1 in equation (1.12) satisfies equation (1.19). Define the estimation errors after and before the update as

Z̃_n(+) ≜ Ẑ_n(+) − Z_n,   (1.20)
Z̃_n(−) ≜ Ẑ_n(−) − Z_n,   (1.21)
Ỹ_n ≜ Ŷ_n(−) − Y_n = H_n Ẑ_n(−) − Y_n.   (1.22)

Since Ỹ_n depends linearly on Y_n, it follows from equation (1.13) that

E[(Z_n − Ẑ_n(+)) Ỹ_n^T] = 0.   (1.23)

Substitute Z_n, Ẑ_n(+), and Ỹ_n from equations (1.10), (1.12), and (1.22), respectively. Then

E[(Φ_{n−1} Z_{n−1} + D + √X_{n−1} C W_n − K_n^1 Ẑ_n(−) − K̄_n Y_n)(H_n Ẑ_n(−) − Y_n)^T] = 0.

By the orthogonality relations E[W_n Y_n^T] = E[W_n Ẑ_n(−)^T] = 0, we obtain

E[(Φ_{n−1} Z_{n−1} + D − K_n^1 Ẑ_n(−) − K̄_n Y_n)(H_n Ẑ_n(−) − Y_n)^T] = 0.

Substituting for K_n^1 and Y_n and using equation (1.21),

E[(Z_n − Ẑ_n(−) + K̄_n H_n Ẑ_n(−) − K̄_n H_n Z_n − K̄_n V B_n)(H_n Ẑ_n(−) − H_n Z_n − V B_n)^T] = 0,
E[(−Z̃_n(−) + K̄_n H_n Z̃_n(−) − K̄_n V B_n)(H_n Z̃_n(−) − V B_n)^T] = 0,   (1.24)

and expanding,

(−I + K̄_n H_n) E[Z̃_n(−) Z̃_n(−)^T] H_n^T − (−I + K̄_n H_n) E[Z̃_n(−) B_n^T] V^T − K̄_n V E[B_n Z̃_n(−)^T] H_n^T + K̄_n V E[B_n B_n^T] V^T = 0.

Using the fact that E[Z̃_n(−) B_n^T] = 0, this last result becomes

(−I + K̄_n H_n) E[Z̃_n(−) Z̃_n(−)^T] H_n^T + K̄_n V E[B_n B_n^T] V^T = 0.   (1.25)
1.0.5 Solving for Prior and Posterior Covariance One can derive a formula for posterior covariance, which is Pn(+) = E ˜Zn(+) h ˜Z T n(+)i (1.30) 6 By plugging equation (1.29) to equation (1.20), one obtains the equations ˜Zn(+) = ˆZn(+) − = ˆZn(−) − = ( ˆZn(−)Zn) = (I Zn = ˆZn(−) − ¯KnHn ˆZn(−) + ¯KnHnZn − ¯KnHn( ˆZn(−) − − ¯KnHn) ˜Zn(−) + ¯KnV Bn ¯KnHn ˆZn(−) + ¯KnYn − ¯KnV Bn − Zn) + ¯KnV Bn − Zn Zn By substituting equation (1.31) into equation (1.30) and noting that E 0, one obtains (1.0.1) ˜Zn(−)BT n i h = [(I h − ¯KnHn) ˜Zn(−) − ¯KnHn)Pn(−)(I − ¯KnHnPn(−) − Pn(+) = E = E (I h = (I − = Pn(−) − = (I − ¯KnHn)Pn(−) − ¯KnHn)Pn(−) − ¯KnHn)Pn(−) = (I = (I − − ¯KnHn) ˜Zn(−) + ¯KnV Bn][(I ¯KnHn) ˜Zn(−) + ¯KnV Bn]T n V T ¯K T n i i − ˜Z T − ¯KnHn)T + ¯KnV BnBT n(−)(I ¯KnHn)T + ¯KnV 2 ¯K T n ¯K T + ¯KnHnPn(−)H T ¯K T Pn(−)H T n n n + ¯Kn(HnPn(−)H T ¯K T n + v2) ¯K T ¯K T n + Pn(−)H T n Pn(−)H T n Pn(−)H T n ¯K T n n + ¯KnV 2 ¯K T n (1.32) This is the final form of posterior covariance, which shows the effects of kalman gains on priori covariance. Respectively, the definition of prior co- variance ˜Zn(−) h By plugging equation (1.10) and equation (1.28) to equation (1.21), one obtains the equations ˜Z T n(−)i Pn(−) = E (1.33) ˜Zn(−) = Φn−1 ˆZn−1(+) + D = Φn−1 ˆZn−1(+) + D = Φn−1 ˜Zn−1(+) − − Zn − − Xn−1CWn Zn Φn−1Zn−1 − Xn−1CWn (1.34) D − p n−1 to obtain the results Pn(−) = E Uses the fact that E p ˜Zn−1W T h [Φn−1 ˜Zn−1(+) − h − ˜Zn−1(+) ˜Z T = Φn−1E n−1 + n−1(+)i h n−1 + Xn−1CC T = Φn−1Pn−1(+)ΦT √Xn ΦT 1CWn][Φn−1 ˜Zn−1(+) − C T WnW T Xn−1CE n i h √Xn − Xn−1 1CWn]T i p (1.35) p 7 which gives a priori value of the covariance matrix as a function of the previous posterior covariance. Thus, the update equations for our yield and real return model are listed following: Pn(−) = Φn−1Pn−1(+)ΦT n−1 + Xn−1CC T ¯Kn = Pn(−)H T n (HnPn(−)H T n + V 2) −1, Pn(+) = (I − ¯KnHn)Pn(−) ˆZn(−) = Φn−1 ˆZn(+) + D. ˆZn(+) = ˆZn(−) + ¯Kn( − Hn ˆZn(−) + Yn) (1.35) (1.27) (1.32) (1.28) (1.29) The form of equations of example model are similar to equation (1.3) to (1.7), but the differences are because the example model is not strictly linear and noisy parts from plant model are relying on the previous steps. 1.0.6 Results for Yield and Real Return Model By plugging value of Yield and Real Return from year 1945 to year 2010 to Yn and setting the initial priori covariance as zero, one can repeat the algorithms listed above to calculate kalman gain 65 times and correspondingly update post covariance and posterior value of estimation. Set posterior estimation as estimation for yield and real return, and one can plot real value and estimation value on the same plot by using same time discretization. The results are showing following: The results are showing that kalman filter works well in first five to six years with the same trend of move and estimation value approximating to real value. After the fifth year, the value of estimations are far away from real value but keeping the same trend of move. The reason for estimation and real value deviating from fifth year is that the model is non-linear with time. The results confirm that kalman filter perfectly works on linear model and the first several steps of non-linear model, while it works worse on the later part of non-linear model. Thus, the use of extended Kalman filter– Unscented Kalman filter, is needed to solve this non-linear problem. 
[Figure 1.1: Kalman Filter Results for Yield and Real Return. (a) Estimated yield vs. real yield; (b) estimated return vs. real return; x-axis in years.]

Chapter 2

Unscented Filtering and Nonlinear Estimation

The Extended Kalman Filter (EKF) has been widely used to deal with non-linear problems. However, it is hard to implement and the results are often inaccurate. The Unscented Transformation (UT) was developed as an improvement that uses the information of the mean and covariance to make the results more accurate and easier to implement. The method selects sigma points according to their mean µ_x and covariance σ_x (i.e., choosing data in the range [−2σ_x, 2σ_x]). The non-linear function is applied to each point to generate a cloud of points; the transformed mean and covariance can then be obtained by computing the mean and variance of those sigma points. There are two advantages of using the UT transformation. The first is that the selected sigma points are no longer randomly chosen but carry information about an unknown distribution, which is sufficient for statistical computation; furthermore, mean and covariance transform linearly (i.e., a mean x̄ becomes T x̄ after a transformation T, and a covariance Σ_x becomes T Σ_x T^T). The second is that the weights of the sigma points can be adjusted so that more points around the mean are captured.

2.0.1 General Algorithms for Unscented Kalman Filter

1) Generating sigma points: Consider a set S of sigma points with given mean and covariance; it contains (2N_x + 1) vectors and their associated weights, S = {X^(i), W^(i), i = 0, 1, ..., 2N_x}. By convention, W^(0) is the weight on the mean point, which is indexed as the zeroth point:

X^(0) = X̄, with weight W^(0).

The other 2N_x points lie on the √N_x-th covariance contour, with half of the points on the left side of the mean and half on the right side:

X^(i) = X̄ + ( √(N_x / (1 − W^(0))) Σ_x )_i,    W^(i) = (1 − W^(0)) / (2N_x),
X^(i+N_x) = X̄ − ( √(N_x / (1 − W^(0))) Σ_x )_i,    W^(i+N_x) = (1 − W^(0)) / (2N_x).

2) Generating the transformed set, which is normally the expected value propagated through the plant model: X̂^(i)_n = f[X^(i)_n, µ_n].

3) Computing the predicted mean: µ̂_n = Σ_{i=0}^{2N_x} W^(i) X̂^(i)_n.

4) Computing the predicted covariance: K̂_n = Σ_{i=0}^{2N_x} W^(i) (X̂^(i)_n − µ̂_n)(X̂^(i)_n − µ̂_n)^T.

5) Plugging each of the predicted points into the observation model: Ŷ^(i)_n = g[X̂^(i)_n].

6) Computing the observation mean: Ŷ_n = Σ_{i=0}^{2N_x} W^(i) Ŷ^(i)_n.
Instead of using expectation, we use the whole function to process sigma particles because the value of noisy parameters are relatively high and it will be better to mimic points adding those noise. Each time we need to guarantee Xn is positive. n (Yn − 2.0.3 Results for Yield and Real Return Model Predicted dividend yield matches highly with the real yield from the figure, which means the prediction for yield is pretty sucess. Predicted real return does not match with the real return well but keep the same trend. Reasons for diiference of the results are: First, variance for real return is higher than yield which enlarge the error for mis-allocated sigma points. Sigma points for yield are intensive since it has relatively stable trend with lower variance. Second, for updating each step, real return highly depends on the prediction of yield from previous step, so the predicted error for yield can be exaggerated further. 12 0.1 0.09 0.08 0.07 0.06 0.05 0.04 0.03 0.02 0.01 0 0.5 0.4 0.3 0.2 0.1 0 -0.1 -0.2 -0.3 -0.4 0 Dividend Yield Predicted yield Real yield 10 20 30 40 50 60 70 (a) Yield Real Return Predicted return Real return 10 20 30 40 50 60 70 (b) Return Figure 2.1: Unscented Filter Results for Yield and Real Return 13 Chapter 3 Particle Flow Filter Particle Filters have the problem of particle degeneracy caused by Bayesian Rule, especially in dealing with high dimensional state vectors. The algo- rithm puts particles to wrong places when multiplying prior function with likelihood function. Particle Flow Filter is derived to improve the estimation accuracy in high-dimensional space by involving move functions of particles and it is significantly mitigate the problem of degeneracy. We set each par- ticle in d-dimensional space as a function of λ denoting as x(λ), in which lambda is continuously changing like time. λ starts from 0 and ends up with 1 giving the results of moving from points to next points. 3.0.1 Generalized Gromov Method for stochastic Particle Flow Filters We start from constructing the stochastic differential equation for flow of particles: dx = f (x, λ)dλ + Q(x) 1 2 dWλ (3.0.1) Here f (x, λ) is the moving function for particles and Q is the covariance ma- trix of the diffusion Wλ. Wλ is the measurement noise generated according to λ. In order to get the solution of f (x, λ) and Q(x), probability density function log P (x, λ) is essential to be introduced. We have: log P (x, λ) = log g(x) + λ log h(x) log K(λ) (3.0.2) − The generalized probability density function has the form of : p(x, λ) = g(x)h(x)λ Rd g(x)h(x)λ dx = g(x)h(x)λ K(λ) , (3.0.3) R 14 in which h(x) is the likelihood, g(x) is from part a and K(λ) is the norm of product of g(x)andh(x)λ. The purpose of K(λ) is to normalize the condi- tional probability density. By using equation (3.0.2), one can solve f (x, λ) by setting specific Q(x) to simplify the PDE for f. The PDE has the form of : ∂log h ∂x = − f T ∂2 log P ∂x2 − ∂div(f ) ∂x − ∂ log P ∂x ∂f ∂x + ∂[div(Q ∂P ∂x )/2P ] ∂x (3.0.4) The simplest way is to set: ∂div(f ) − ∂x − ∂logP ∂x ∂f ∂x + ∂[div(Q ∂P ∂x )/2P ] ∂x = 0 (3.0.5) Then the solution for f (x, λ) is : f (x, λ) = ∂2 log P (x, λ) ∂x2 [ − −1( ] ∂ log h(x) ∂x )T (3.0.6) According to equation (3.0.5), the corresponding covariance function Q is: − Q = [P λP H T (R+λHP H T ) −1HP ]H T R −1H[P λP H T (R+λHP H T ) −1HP ] (3.0.7) where R is the measurement noise covariance matrix, P is the prior covari- ance matrix, and H is the sensitive matrix in measurement model. 
In order to keep the solution of Q from equation (3.0.7) as symmetric matrix, one can implement the following method to symmetry Q immediately: − Q = Q + QT 2 (3.0.8) Algorithm 3.0.1. (Algorithm for implementing Particle Flow Filter with diffusion) • • • a. Use Monte Carlo method randomly choose N particles around ob- servation, and generate particle density function g(x) as prior density function. b. Choose suitable h(x) as likelihood function. c. Compute p(x, λ) by Equation (3.0.3), p(x, λ) = g(x)h(x)λ K(λ) = Rd g(x)h(x)λ dx . K(λ) , where R 15 • d. Solve the moving function f (x, λ) and measurement covariance matrix Q by equation (3.0.6) and (3.0.7). That is, f (x, λ) = ∂2 log P (x, λ) ∂x2 [ − −1( ] )T . (3.0.9) Q = [P − λP H T (R+λHP H T ) −1HP ]H T R λP H T (R+λHP H T ) −1HP ]. ∂ log h(x) ∂x −1H[P − • e. Plug the value of f (x, λ) and Q(x), one can derive x by solving the PDE: dx = f (x, λ)dλ + LdWλ, with L= chol(Q). We can use forward Euler scheme x(n+1) = x(n) + f (x(n), λn)∆λ + L∆Wλ (3.0.10) or implicit Euler scheme x(n+1) = x(n) + f (x(n+1), λn+1)∆λ + L∆Wλ. (3.0.11) f. For updating each point, repeat steps from a to e. Remark 3.0.2. Here h(x) can be any type of distribution but we consider normal distribution with estimated mean and variance. Remark 3.0.3. The use of either explicit or implicit Euler method depends on the shape of f (x, λ). 3.0.2 Implementation of Particle Flow Filter In our previous dividend yield and S&P real return model,with the obser- vation model as: Zn = Xn δRn(cid:19) and measurement model as: = (cid:18) 1 1+k Xn−1 + kθ 1+k + σ ρ∆W1,n + 1+k √Xn−1∆W1,n 1 ρ2∆W2,n µXn + a√Xn−1 (cid:16) p − ! (cid:17) Yn = Y1,n Y2,n(cid:19) (cid:18) = Xn + Q1B1,n δRn + Q2B2,n(cid:19) (cid:18) We can get the particle density function is g(x1, x2) = 1 2πσ1σ2 1 ρ2 − p 16 − (x−µ)T Σ e −1 1 2 (x−µ) where µ is sample mean and Σ1 is sample covariance Σ1 = σ2 1 ρσ1σ2 ρσ1σ2 σ2 2(cid:19) (cid:18) We can set the likelihood function as h(x1, x2) = 1 − (x−m)T Σ e −1 2 2 (x−m) Σ2| | where m is probability mean and Σ2 is probability covariance. Conditional probability density function P (x, λ) follows: 2π p p(x, λ) = g(x)h(x)λ g(x)h(x)λ || = || − (x−µ)T Σ e −1 1 −1 (x−µ)+λ(x−m)T Σ 2 2 (x−m) K(λ) where And then K(λ) = −1 1 − (x−µ)T Σ e || −1 (x−µ)+λ(x−m)T Σ 2 2 (x−m) || logP (x, λ) = µ)T Σ −1 1 (x (x − − − µ) + λ(x 2 − m)T Σ −1 2 (x m) − − log(K(λ)) log h(x) = log( 1 2π(Σ2)1/2 ) − (x − −1 m)T Σ 2 (x 2 m) − ∂2(logP (x, λ)) ∂x2 ∂(log h(x)) ∂x = Σ −1 1 − − λΣ −1 2 = Σ −1 2 (x − m) − Moving function f (x, λ) is f (x, λ) = ∂2log P (x, λ) ∂2x [ − −1( ] ∂log h(x) ∂x ) = [Σ −1 1 + λΣ −1 2 ] −1Σ = − [ − − −1 2 (x Σ −1 1 − λΣ m) − −1 2 ] −1[ − Σ −1 2 (x m)] − According to equation (3.0.7), the corresponding Q(x) in this case is: Q = [P − λP (V + λP ) −1P ]V −1[P λP (V + λP ) −1P ], − where P is the prior covariance, which has the form from Kalman Filter: Pn(−) = ΦPn−1(+)ΦT + (1 , 0)x(n−1)CC T , 17 and L(x) = Q(x). Then update x with respect to λ by Backward Euler: p x(n+1) = x(n) + f (x(n+1), λn+1)∆λ + L(x(n))∆Wλ Subtracting m on each side, one can get x(n+1) m = (x(n) − − Set y(n+1) = x(n+1) m) [Σ −1 1 +λn+1Σ −1 2 ] − m and y(n) = x(n) −1Σ −1 2 (x(n+1) − m)∆λ+L(x(n))∆Wλ. − y(n+1) = y(n) [Σ −1 1 + λΣ −1 2 ] m, the equation becomes − −1 −1Σ 2 y(n+1)∆λ + L(x)∆Wλ. 
(I + ∆λ[Σ − −1 −1 2 ] 1 + λΣ −1 y(n+1) = (I + ∆λ[Σ 1 + λΣ −1 2 ] −1 1 + λΣ −1Σ x(n+1) = (I + ∆λ[Σ −1 2 )y(n+1) = y(n) + L(x)∆Wλ −1 2 ] −1Σ −1 2 ) −1(x(n) −1(y(n) + L(x)∆Wλ) −1Σ −1 2 ) m + L(x)∆Wλ) + m − 3.0.3 Results for yield and real return model By involving function of movement f (x, λ), accuracy for predicting of real return has been highly increased. The validity of using particle flow methods has been proved. The trends for predicted yield are highly similar to the real trend. And prediction for yield has great performance at the years with large fluctuation but cannot mimic the value with lower fluctuation. That is because we set relative larger covariance for likelihood matrix, which means it cannot do better when the real covariance become lower. Then the corresponding cons for Particle Flow Filter is clear to see that constant likelihood function h(x) is hard to satisfy the change for each points. 18 0.12 0.1 0.08 0.06 0.04 0.02 0 -0.02 0 0.8 0.6 0.4 0.2 0 -0.2 -0.4 -0.6 0 Dividend Yield Predicted yield Real yield 10 20 30 40 50 60 70 (a) Yield Real Return Predicted return Real return 10 20 30 40 50 60 70 (b) Return Figure 3.1: Particle Flow Filter Results for Yield and Real Return 19 Bibliography [1] Narayan Kovvali ; Mahesh Banavar ; Andreas Spanias. An Introduc- tion to Kalman Filtering with MATLAB Examples. Morgan & Claypool, Reading, 9781627051408, 2013. [2] Simon J. Julier ; Jeffery K. Uhlmann. Unscented Filtering and Nonlinear estimation. Digital Objective Identifier, 0018-9219/04, 2014. [3] Fred Daum ; Jim Huang ; Arjang Noushin Generalized Gromov method for Stochastic Particle Flow Filters 0277-786X/17, 2017 20
ai_researcher
2
CreativEval_Evaluating_Creativity_of_LLM-Based_Hardware_Code_Generation.pdf
CreativEval: Evaluating Creativity of LLM-Based Hardware Code Generation

Matthew DeLorenzo, Vasudev Gohil, Jeyavijayan Rajendran
Texas A&M University, USA
{matthewdelorenzo, gohil.vasudev, jv.rajendran}@tamu.edu

arXiv:2404.08806v1 [cs.CL] 12 Apr 2024

Abstract—Large Language Models (LLMs) have proved effective and efficient in generating code, leading to their utilization within the hardware design process. Prior works evaluating LLMs' abilities for register transfer level code generation solely focus on functional correctness. However, the creativity associated with these LLMs, or the ability to generate novel and unique solutions, is a metric not as well understood, in part due to the challenge of quantifying this quality. To address this research gap, we present CreativEval, a framework for evaluating the creativity of LLMs within the context of generating hardware designs. We quantify four creative sub-components, fluency, flexibility, originality, and elaboration, through various prompting and post-processing techniques. We then evaluate multiple popular LLMs (including GPT models, CodeLlama, and VeriGen) upon this creativity metric, with results indicating GPT-3.5 as the most creative model in generating hardware designs.

Index Terms—Hardware Design, LLM, Creativity

I. INTRODUCTION

Recent advancements within artificial intelligence, machine learning, and computing performance have resulted in the development of LLMs, which have quickly proven to be a widely applicable and successful solution when applied to a variety of text-based tasks [1]. After extensive training on large quantities of text data, these transformer-based models [2] have demonstrated the ability to not only successfully interpret the contextual nuances of a provided text (or prompt), but also generate effective responses to a near human-like degree [3]. This can take the form of summarizing a document, answering and elaborating upon questions, and even generating code. The effectiveness and versatility of LLMs regarding textual understanding have resulted in their adoption within various applications, such as language translation [4], customer service chat-bots [5], and programming assistants [1].

Furthermore, the potential of LLM code generation has recently been explored within the integrated circuit (IC) design process [6], such as within the logic design stage. With chip designs continually growing in scale and complexity, efforts to increase the automation of this task through LLMs have been explored. This includes the evaluation of LLMs' ability to generate hardware design codes from English prompts, leading to promising initial results within various frameworks [7]–[10]. With the goal of further optimizing these LLMs to the level of an experienced hardware designer, many research efforts have focused on improving performance within the metric of code functionality. This includes testing various LLM fine-tuning strategies and prompting methods for domain-optimized performance, such as register transfer level (RTL) code generation.

However, another dimension to consider when evaluating the ability of a designer, absent from previous evaluations, is creativity. This term refers to the capacity to think innovatively: the ability to formulate new solutions or connections that are effective and unconventional [11]. When applied to hardware code generation, this can take the form of writing programs that are not only correct, but also novel, surprising, or valuable when compared to typical design approaches. This quality is essential to understanding the greater potential of LLMs as a tool for deriving new approaches to hardware design challenges, rather than simply a method to accelerate existing design practices. With a quantitative method of measuring this concept of creativity within LLM hardware generation, valuable insights could be derived, such as how performance could be further improved, or how LLMs can be best utilized within the hardware design process.

To address this absence within the analysis of LLM-based RTL code generation, we propose a comparative evaluation framework in which the creativity of LLMs can be effectively measured. This assessment is composed of four cognitive subcategories of creativity (fluency, flexibility, originality, and elaboration), which are quantified and evaluated within the context of generating functional Verilog modules. Furthermore, this approach utilizes various prompting structures, generation strategies, and post-processing methods, from which the quality and variations of responses are utilized to generate a metric for creativity. This work presents the following contributions:

• To the best of our knowledge, we propose the first framework from which a metric for creativity is defined for LLMs within the context of hardware design and code generation.
• We provide a comparative evaluation between state-of-the-art LLMs upon our creativity metric and its components, with GPT-3.5 achieving the highest result.
• To enable future research, we will open-source our framework codebase and datasets here: https://github.com/matthewdelorenzo/CreativEval/

II. BACKGROUND AND RELATED WORK

A. LLMs for Code Generation and Hardware Design
When applied to hardware code generation, this can take the form of writing programs that are not only correct, but also novel, surprising, or valuable when compared to typical design ap- proaches. This quality is essential to understanding the greater potential of LLMs as a tool for deriving new approaches to hardware design challenges, rather than simply a method to accelerate existing design practices. With a quantitative method of measuring this concept of creativity within LLM hardware generation, valuable insights could be derived, such as how performance could be further improved, or how LLMs can be best utilized within the hardware design process. To address this absence within the analysis of LLM-based RTL code generation, we propose a comparative evaluation framework in which the creativity of LLMs can be effectively measured. This assessment is composed of four cognitive subcategories of creativity (fluency, flexibility, originality, and elaboration), which are quantified and evaluated within the context of generating functional Verilog modules. Further- more, this approach utilizes various prompting structures, gen- eration strategies, and post-processing methods, from which the quality and variations of responses are utilized to generate a metric for creativity. This work presents the following contributions: • To the best of our knowledge, we propose the first frame- work from which a metric for creativity is defined for LLMs within the context of hardware design and code generation. • We provide a comparative evaluation between state-of-the- art LLMs upon our creativity metric and its components, with GPT-3.5 achieving the highest result. future enable • To datasets our https://github.com/matthewdelorenzo/CreativEval/ research, we will framework codebase and open-source here: II. BACKGROUND AND RELATED WORK A. LLMs for Code Generation and Hardware Design Many state-of-the-art LLMs have demonstrated remarkable success in generating code when provided only with a natural Fig. 1. Experimental Framework - calculating creativity of LLMs in Verilog code generation. language description, such as GPT-3.5/4 [12], BERT [13], and Claude [14], revolutionizing the software development process. These models demonstrate promising performance in code functionality, such as GPT-4 generating correct code for 67% of programming tasks in the HumanEval benchmark in a single response (pass@1) [15]–[17]. Therefore, the applications of LLMs within hardware design through RTL code generation are explored within various studies, such as DAVE [18] which utilized GPT-2 for this task. VeriGen [7] then demonstrated that fine-tuning smaller models (CodeGen) upon a curated RTL dataset can outperform larger models in RTL tests. VerilogEval [19] presents enhanced LLM hardware generation through supervised fine-tuning, and provides an RTL benchmark for evaluating functionality in RTL generation. ChipNeMo [9] applied fine-tuning upon open-source models (Llama2 7B/13B) for various hardware design tasks. RTLCoder [20] presents an automated method for expanding the RTL dataset used for fine-tuning, resulting in a 7B-parameter model that outperforms GPT-3.5 on RTL benchmarks. Other works, including RTLLM [21] and Chip- Chat [8], explore prompt engineering strategies to enhance the quality and scale of LLM-generated designs. Although there is a plethora of work on LLM-based RTL generation, none of these prior works assess the creative component of LLMs in the hardware design process. 
We address this shortcoming in this work. B. Evaluating Creativity Prior cognitive science studies [22]–[25] have explored methods in which creative thinking can be effectively mea- sured. A widely accepted creativity model [24] defines four primary cognitive dimensions from which divergent thinking, or the ability to generate creative ideas through exploring multiple possible solutions [26], can be measured—fluency, flexibility, originality, and elaboration. • Fluency. The quantity of relevant and separate ideas able to be derived in response to a single given question. • Flexibility. The ability to formulate alternative solutions to a given problem or example across a variety of categories. • Originality. A measure of how unique or novel a given idea is, differing from typical responses or solutions. • Elaboration. The ability to expand upon or refine a given idea. This can include the ability to construct complex solutions utilizing provided, basic concepts. These subcategories have been widely in evaluating human creativity within educational research, including various stud- ies of students [27]–[29] as a metric for effective learning. Furthermore, recent works explore the intersection between cognitive science and LLMs [30]–[32], in which the creativity of LLMs are evaluated within the context of natural lan- guage, demonstrating near-human like performance in many cases [31]. In particular, [33] utilizes the four creative sub- categories to evaluate LLMs across multiple language-based cognitive tasks. However, this framework has not been adapted to LLMs within the context of generating hardware code. To this end, we devise our creativity evaluation framework for LLM-based hardware code generation. III. CR E A T I VEV A L FRAMEWORK Given a target LLM, our CreativEval framework, as shown in Fig. 1, seeks to evaluate the creativity associated with LLMs in hardware code generation. CreativEval evaluates the previously defined subcategories of creativity— fluency, flexibility, originality, and elaboration. To this end, we query the target LLM with different Verilog-based prompts, and analyze the responses through various methods of post- processing to calculate the desired metrics, as explained below. A. Fluency To capture the quantity of relevant and separate ideas in our context, we define fluency as the average number of unique Verilog solutions generated by the target LLM in response to a given prompt. Our prompts contain a brief English description of the module and the module’s declaration, as shown in Listing 1. Each prompt is provided as input to the LLM, with the response intended to be the completed implementation of the module. As the inference process of LLMs contain variations in the generated responses, we generate t responses for each prompt to estimate the average performance. Upon generating all responses, each response is then tested for functionality against the module’s associated testbench. If all test cases pass, the module is considered functional. Then, for each prompt, the functional responses (if any) are collected and compared to identify if they are unique implementations. This is done through GNN4IP [34], a tool utilized to assess the similarities between circuits. 
By representing two Verilog Formatted for fluencyand originalityFormatted for flexibilityFormatted forelaborationLLMResponses(VerilogModules)FunctionalResponsesGNN4IPGoldenResponseSimilarityScoresCreativityFluencyOriginalityFlexibilityElaborationFunctionalityCheckVerilog Prompts 1 //Create a full adder. 2 //A full adder adds three bits (including carry-in) and produces a sum and carry-out. 3 4 module top_module ( input a, b, cin, 5 output cout, sum ); 6 Listing 1. Fluency/Originality prompt example modules as a data-flow graph (DFG), GNN4IP generates a similarity score within [-1,1], with larger values indicating a higher similarity. Each correct generated solution from the LLM is input into GNN4IP, and compared to its ideal solution, or “golden response”. Upon the generation of each similarity value for a given prompt, these results are then compared to determine how many unique values are in the response set, indicating the number of distinct solutions. Given that there are a set of p total prompts in the dataset, the LLM generates t responses for each. After evaluating the functionality of these results, there is then a subset n prompts that contain at least one success (functional module generation). For each of these n prompts, there is a sub-total of the t responses that are functional, defined as m. Each of these m functional responses, r, are defined as the set R = {r1n, ..., rmn}. The GNN4IP similarity value is then found for each response in R, represented as the function S. The number of unique similarity values is then determined within the set, and normalized to total t responses. This process is repeated for all n successful prompts and averaged to define the fluency metric F below: F = 1 n n (cid:88) i=1 (cid:19) (cid:18) |S(Ri)| t (1) B. Flexibility Flexibility is quantified as the ability of the LLM to generate an alternative implementation of a Verilog module when provided with a solution. The prompts for this metric are constructed for a set of Verilog modules in which a correct solution (the golden response) is included (Listing 2). The LLM then rewrites the Verilog module, ideally resulting in a functional and unique implementation. As before, t responses are generated for each of the p total prompts. After all responses are checked for functionality, n prompts have at least one functional response, each with m functional responses. These functional responses are compared directly with the golden response (through GNN4IP) to iden- tify their similarity value. If the similarity value s is lower than a given threshold on the scale [-1,1], the response is considered an alternative solution, shown in Equation 2. For each successful prompt, the response with minimum similarity is found and evaluated against the threshold. Then, the total amount of n prompts with a response less than the threshold is determined, and normalized by the total prompts n. The final flexibility metric X is then defined below: 1 // You are a professional hardware designer that writes correct, fully functional Verilog modules. 2 // Given the fully implemented example of the Verilog module below: 3 4 module true_module( input a, b, cin, 5 output cout, sum ); 6 assign sum = a ^ b ^ cin; 7 assign cout = a & b | a & cin | b & cin; 8 9 endmodule 10 11 // Finish writing a different and unique implementation of the provided true_module in the module below, top_module. 12 module top_module ( input a, b, cin, 13 output cout, sum ); 14 Listing 2. 
B. Flexibility

Flexibility is quantified as the ability of the LLM to generate an alternative implementation of a Verilog module when provided with a solution. The prompts for this metric are constructed from a set of Verilog modules in which a correct solution (the golden response) is included (Listing 2). The LLM then rewrites the Verilog module, ideally resulting in a functional and unique implementation.

// You are a professional hardware designer that writes correct, fully functional Verilog modules.
// Given the fully implemented example of the Verilog module below:

module true_module(
    input a, b, cin,
    output cout, sum );
  assign sum = a ^ b ^ cin;
  assign cout = a & b | a & cin | b & cin;
endmodule

// Finish writing a different and unique implementation of the provided true_module in the module below, top_module.
module top_module (
    input a, b, cin,
    output cout, sum );

Listing 2. Flexibility prompt example

As before, t responses are generated for each of the p total prompts. After all responses are checked for functionality, n prompts have at least one functional response, each with m functional responses. These functional responses are compared directly with the golden response (through GNN4IP) to identify their similarity value. If the similarity value s is lower than a given threshold on the scale [−1, 1], the response is considered an alternative solution, as shown in Equation 2. For each successful prompt, the response with minimum similarity is found and evaluated against the threshold. Then, the total number of the n prompts with a response below the threshold is determined and normalized by the total prompts n. The final flexibility metric X is then defined below:

T(s) = 1 if s < 0, 0 if s ≥ 0   (2)

X = (1/n) Σ_{i=1}^{n} T[min S(R_i)]   (3)

C. Originality

The originality metric is defined as the variance (uniqueness) of an LLM-generated Verilog module in comparison to a typical, fully functional implementation. This metric is derived from the similarity value (generated through GNN4IP) between successful generations and their golden response. The originality experiment follows the same prompt structure and procedure as described in III-A. For each prompt, the response with the minimum similarity value is found. Then, the similarity values on [−1, 1] are re-normalized to the scale [0, 1], with 1 indicating the least similarity (i.e., most original). These results are averaged over all n prompts, and the final originality metric O is described below:

O = (1/n) Σ_{i=1}^{n} (−min S(R_i) + 1) / 2   (4)

D. Elaboration

To measure an LLM's capacity for elaboration, the LLM is provided with multiple smaller Verilog modules in a prompt and tasked with utilizing them to implement a larger, more complex module. As this metric requires multi-modular designs, a separate set of Verilog modules is utilized in constructing the prompts, as shown in Listing 3. Multiple LLM responses are generated for each module, which are all then checked for functionality. For all functional solutions, the responses are checked to see whether the solution utilizes the smaller modules (as opposed to a single-module solution).
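The flexibility and originality metrics in Equations (2)–(4) reduce to simple post-processing of each prompt's minimum similarity score; a minimal sketch with illustrative inputs follows.

# Flexibility (eqs. 2-3) and originality (eq. 4) sketches from the per-prompt
# minimum GNN4IP similarity scores of functional responses.
def flexibility(min_sims):
    """min_sims: min similarity to the golden response per successful prompt."""
    n = len(min_sims)
    return sum(1 for s in min_sims if s < 0) / n if n else 0.0

def originality(min_sims):
    """Re-normalize [-1, 1] similarities to [0, 1] originality and average."""
    n = len(min_sims)
    return sum((-s + 1) / 2 for s in min_sims) / n if n else 0.0

mins = [-0.4, 0.2, 0.9]                        # example per-prompt minima
print(flexibility(mins), originality(mins))    # -> 0.3333..., 0.3833...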
IV. EXPERIMENTAL EVALUATION

A. Experimental Setup

We evaluate multiple LLMs using the CreativEval framework, including the CodeLlama 7B [35] and 13B [36] parameter models, VeriGen 6B [37] and 16B [38] (the 16B model loaded in 8-bit quantization due to memory constraints), GPT-3.5 [39], and GPT-4 [40]. The inference process of the VeriGen and CodeLlama models was performed locally on an NVIDIA A100 GPU with 80 GB RAM, while GPT-3.5/4 were queried through the OpenAI Python API. All scripts are written in Python 3.10, with Icarus Verilog 10.3 as the simulator for evaluating functionality checks. The open-source GNN4IP repository was adapted to this framework to generate the similarity scores.

The prompt dataset utilized for functionality, fluency, and originality consists of 111 single-module HDLBits [41] prompts sourced through AutoChip [42], each containing a correctly implemented solution and testbench. The smaller prompt set used for elaboration contains 9 separate multi-module prompts from the same source. The base functionality metric (pass@10) is measured on all 120 prompts. When generating LLM responses in all experiments, the LLMs were all set to the following inference hyperparameters: top_p = 0.95; top_k = 10; temperature = 0.3; max_tokens = 1024. All responses were trimmed to the first generated instance of "endmodule" for effective functionality evaluation.

TABLE I
COMPARISON OF DIFFERENT LLMS IN TERMS OF CREATIVITY AND ITS SUBCATEGORIES

LLM                 Functionality  Fluency  Flexibility  Originality  Elaboration  Creativity
CodeLlama-7B [35]   0.2417         0.1483   0.0000       0.2926       0.2222       0.1658
CodeLlama-13B [36]  0.3167         0.1611   0.0260       0.3021       0.3333       0.2056
VeriGen-6B [37]     0.3667         0.1244   0.1000       0.2527       0.3333       0.2026
VeriGen-16B [38]    0.3250         0.1189   0.0556       0.2771       0.3333       0.1962
GPT-3.5 [39]        0.3083         0.1343   0.1600       0.2526       0.3333       0.2201
GPT-4 [40]          0.3750         0.1644   0.0795       0.2657       0.3333       0.2107

B. Results

Table I summarizes the results for all LLMs across all subcategories of creativity. In evaluating fluency, GPT-4 had the highest quantity of separate and correct Verilog solutions to a module (with respect to the modules that have at least one correct solution), with CodeLlama-13B achieving similar results. The VeriGen models comparatively struggled on this metric, partly due to repeatedly generating similar implementations instead of different ones.

Regarding flexibility, GPT-3.5 had the highest rate of generating alternative solutions to provided modules among the evaluated models. The models that struggled (e.g., CodeLlama) often produced results that were direct copies of the provided module, indicating that the ability to understand the prompt's natural language description is an important factor determining flexibility.

As for originality, the GPT models had slightly worse performance than the others, with CodeLlama performing best. This means that the successful solutions produced by the GPT models were, on average, closer to the ideal solution. This could be due to their large size and training dataset, resulting in a more direct retrieval of existing solutions or coding practices.

Elaboration was largely similar for all models, as the HDLBits dataset for this metric is comparatively small (9 modules). The models primarily excelled in correctly connecting the input and output parameters between separate modules, while struggling to generate the larger module solution.

Overall, the GPT models were the most creative, with GPT-3.5 as the best, and CodeLlama-7B was the least creative. Creativity is shown to drop slightly for the larger model sizes of GPT and VeriGen.

V. CONCLUSION

Recent studies on LLMs regarding their applications to hardware design have effectively demonstrated their potential, applying many optimization strategies to increase performance in terms of functional correctness. However, these studies do not investigate the creativity associated with LLMs in their ability to generate solutions, largely due to the lack of an effective metric. Within this work, we propose CreativEval, a framework to evaluate the creativity of LLMs in generating hardware code. By evaluating multiple popular LLMs within this framework, we perform a comparative analysis, concluding that GPT-3.5 had the greatest creativity. Future research in this direction can further evaluate more LLMs and larger prompt sets.

ACKNOWLEDGMENT

The authors acknowledge the support from the Purdue Center for Secure Microelectronics Ecosystem – CSME#210205. This work was also partially supported by the National Science Foundation (NSF CNS–1822848 and NSF DGE–2039610).
REFERENCES
[1] Tim Keary, "12 Practical Large Language Model (LLM) Applications," https://www.techopedia.com/12-practical-large-language-model-llm-applications, 2023, [Online; last accessed 21-Nov-2023].
[2] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin, "Attention is all you need," 2023.
[3] Z. G. Cai, X. Duan, D. A. Haslett, S. Wang, and M. J. Pickering, "Do large language models resemble humans in language use?" 2024.
[4] T. Kocmi and C. Federmann, "Large language models are state-of-the-art evaluators of translation quality," 2023.
[5] K. Pandya and M. Holia, "Automating customer service using LangChain: Building custom open-source GPT chatbot for organizations," 2023.
[6] R. Zhong, X. Du, S. Kai, Z. Tang, S. Xu, H.-L. Zhen, J. Hao, Q. Xu, M. Yuan, and J. Yan, "LLM4EDA: Emerging progress in large language models for electronic design automation," arXiv preprint arXiv:2401.12224, 2023.
[7] S. Thakur, B. Ahmad, H. Pearce, B. Tan, B. Dolan-Gavitt, R. Karri, and S. Garg, "VeriGen: A large language model for Verilog code generation," 2023.
[8] J. Blocklove, S. Garg, R. Karri, and H. Pearce, "Chip-chat: Challenges and opportunities in conversational hardware design," in 2023 ACM/IEEE 5th Workshop on Machine Learning for CAD (MLCAD). IEEE, Sep. 2023. [Online]. Available: http://dx.doi.org/10.1109/MLCAD58807.2023.10299874
[9] M. Liu, T.-D. Ene, R. Kirby, C. Cheng, N. Pinckney, R. Liang et al., "ChipNeMo: Domain-adapted LLMs for chip design," 2024.
[10] M. DeLorenzo, A. B. Chowdhury, V. Gohil, S. Thakur, R. Karri, S. Garg, and J. Rajendran, "Make every move count: LLM-based high-quality RTL code generation using MCTS," 2024.
[11] M. Runco and G. Jaeger, "The standard definition of creativity," Creativity Research Journal, vol. 24, pp. 92–96, 01 2012.
[12] OpenAI, J. Achiam, S. Adler, S. Agarwal, L. Ahmad, I. Akkaya, F. L. Aleman et al., "GPT-4 technical report," 2024.
[13] J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, "BERT: Pre-training of deep bidirectional transformers for language understanding," in Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers). Minneapolis, Minnesota: Association for Computational Linguistics, Jun. 2019, pp. 4171–4186. [Online]. Available: https://aclanthology.org/N19-1423
[14] [Online]. Available: https://www.anthropic.com/news/claude-3-haiku
[15] Z. Luo, C. Xu, P. Zhao, Q. Sun, X. Geng, W. Hu, C. Tao, J. Ma, Q. Lin, and D. Jiang, "WizardCoder: Empowering code large language models with Evol-Instruct," 2023.
[16] M. Chen, J. Tworek, H. Jun, Q. Yuan, H. P. de Oliveira Pinto, J. Kaplan et al., "Evaluating large language models trained on code," 2021.
[17] Y. Wang, H. Le, A. D. Gotmare, N. D. Q. Bui, J. Li, and S. C. H. Hoi, "CodeT5+: Open code large language models for code understanding and generation," 2023.
[18] H. Pearce, B. Tan, and R. Karri, "DAVE: Deriving automatically Verilog from English," in Proceedings of the 2020 ACM/IEEE Workshop on Machine Learning for CAD, ser. MLCAD '20. New York, NY, USA: Association for Computing Machinery, 2020, pp. 27–32. [Online]. Available: https://doi.org/10.1145/3380446.3430634
[19] M. Liu, N. Pinckney, B. Khailany, and H. Ren, "VerilogEval: Evaluating large language models for Verilog code generation," 2023.
[20] S. Liu, W. Fang, Y. Lu, Q. Zhang, H. Zhang, and Z. Xie, "RTLCoder: Outperforming GPT-3.5 in design RTL generation with our open-source dataset and lightweight solution," 2024.
[21] Y. Lu, S. Liu, Q. Zhang, and Z. Xie, "RTLLM: An open-source benchmark for design RTL generation with large language model," 2023.
[22] L. S. Almeida, L. P. Prieto, M. Ferrando, E. Oliveira, and C. Ferrándiz, "Torrance Test of Creative Thinking: The question of its construct validity," Thinking Skills and Creativity, vol. 3, no. 1, pp. 53–58, 2008. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S1871187108000072
[23] S. L. Doerr, "Conjugate lateral eye movement, cerebral dominance, and the figural creativity factors of fluency, flexibility, originality, and elaboration," Studies in Art Education, vol. 21, no. 3, pp. 5–11, 1980. [Online]. Available: http://www.jstor.org/stable/1319788
[24] J. P. Guilford, The Nature of Human Intelligence. McGraw-Hill, 1971.
[25] E. P. Torrance, "Torrance Tests of Creative Thinking," Educational and Psychological Measurement, 1966.
[26] M. Arefi, "Comparation of creativity dimensions (fluency, flexibility, elaboration, originality) between bilingual elementary students (Azari language–Kurdish language) in Urmia city, Iran," The IAFOR Research Archive, Dec 2018. [Online]. Available: https://papers.iafor.org/submission22045/
[27] S. A. Handayani, Y. S. Rahayu, and R. Agustini, "Students' creative thinking skills in biology learning: fluency, flexibility, originality, and elaboration," Journal of Physics: Conference Series, vol. 1747, no. 1, p. 012040, Feb 2021. [Online]. Available: https://dx.doi.org/10.1088/1742-6596/1747/1/012040
[28] F. Alacapinar, "Grade level and creativity," Eurasian Journal of Educational Research (EJER), vol. 13, pp. 247–266, 01 2012.
[29] M. Arefi and N. Jalali, "Comparation of creativity dimensions (fluency, flexibility, elaboration, originality) between bilingual elementary students (Azari language–Kurdish language) in Urmia city, Iran," in The IAFOR International Conference on Language Learning, 2016.
[30] R. Shiffrin and M. Mitchell, Mar 2023. [Online]. Available: https://www.pnas.org/doi/abs/10.1073/pnas.2300963120
[31] C. Stevenson, I. Smal, M. Baas, R. Grasman, and H. van der Maas, "Putting GPT-3's creativity to the (alternative uses) test," 2022.
[32] M. Binz and E. Schulz, "Using cognitive psychology to understand GPT-3," Proceedings of the National Academy of Sciences, vol. 120, no. 6, Feb. 2023. [Online]. Available: http://dx.doi.org/10.1073/pnas.2218523120
[33] Y. Zhao, R. Zhang, W. Li, D. Huang, J. Guo, S. Peng, Y. Hao, Y. Wen, X. Hu, Z. Du, Q. Guo, L. Li, and Y. Chen, "Assessing and understanding creativity in large language models," 2024.
[34] R. Yasaei, S.-Y. Yu, E. K. Naeini, and M. A. A. Faruque, "GNN4IP: Graph neural network for hardware intellectual property piracy detection," 2021.
[35] "Hugging Face." [Online]. Available: https://huggingface.co/codellama/CodeLlama-7b-hf
[36] "Hugging Face." [Online]. Available: https://huggingface.co/codellama/CodeLlama-13b-hf
[37] "Hugging Face." [Online]. Available: https://huggingface.co/shailja/fine-tuned-codegen-6B-Verilog
[38] "Hugging Face." [Online]. Available: https://huggingface.co/shailja/fine-tuned-codegen-16B-Verilog
[39] "GPT-3.5 Turbo fine-tuning and API updates." [Online]. Available: https://openai.com/blog/gpt-3-5-turbo-fine-tuning-and-api-updates
[40] "GPT-4." [Online]. Available: https://openai.com/research/gpt-4
[41] [Online]. Available: https://hdlbits.01xz.net/wiki/Main_Page
[42] S. Thakur, J. Blocklove, H. Pearce, B. Tan, S. Garg, and R. Karri, "AutoChip: Automating HDL generation using LLM feedback," 2023.
ai_researcher
1
Machine_Learning_for_Fast_Quantum_Mechanics-Based_Approximation_of_Drug_Lipophilicity.pdf
arXiv:2007.14206v1 [physics.comp-ph] 27 Jul 2020

Machine Learning Potential Repository

Atsuto Seko^1,∗
^1 Department of Materials Science and Engineering, Kyoto University, Kyoto 606-8501, Japan
∗ [email protected]
(Dated: July 29, 2020)

This paper introduces a machine learning potential repository that includes Pareto optimal machine learning potentials. It also shows the systematic development of accurate and fast machine learning potentials for a wide range of elemental systems. As a result, many Pareto optimal machine learning potentials are available in the repository from a website [1]. Therefore, the repository will help many scientists to perform accurate and fast atomistic simulations.

I. INTRODUCTION

Machine learning potential (MLP) has been increasingly required to perform crystal structure optimizations and large-scale atomistic simulations more accurately than with conventional interatomic potentials. Therefore, many recent studies have proposed a number of procedures to develop MLPs and have shown their applications [2–23]. Simultaneously, MLPs themselves are necessary for their users to perform accurate atomistic simulations. Therefore, the development and distribution of MLPs for a wide range of systems should be useful, similarly to the conventional interatomic potentials distributed in several repositories [24, 25].

This study demonstrates an MLP repository available from a website [1]. The MLP repository includes Pareto optimal MLPs with different trade-offs between accuracy and computational efficiency because they are conflicting properties and there is no single optimal MLP [26–28]. This study develops the repository by performing systematic density functional theory (DFT) calculations for approximately 460,000 structures and by combining them with existing DFT datasets in the literature [26, 29]. Polynomial-based potential energy models [26, 29] and their revisions are then systematically applied to the construction of MLPs for a wide range of elemental systems. Although the present version of the repository does not contain MLPs for multicomponent systems, the repository will be gradually updated. Moreover, a user package that combines MLPs in the repository with atomistic simulations using the lammps code [30] is also available on a website [31].

II. POTENTIAL ENERGY MODELS

This section shows structural features and potential energy models used for developing MLPs in the repository. Given cutoff radius r_c from atom i in a structure, the short-range part of the total energy for the structure may be decomposed as

E = \sum_i E^{(i)},    (1)

where E^{(i)} denotes the contribution of atom i, or the atomic energy. The atomic energy is then given by a function of invariants for the O(3) group [26, 32] as

E^{(i)} = F\left(d_1^{(i)}, d_2^{(i)}, \cdots\right),    (2)

where d_n^{(i)} denotes an invariant derived from order parameters representing the neighboring atomic density of atom i. In the context of MLPs, invariants \{d_n^{(i)}\} can be called "structural features". Also, a number of functions are useful as function F to represent the relationship between the invariants and the atomic energy, such as artificial neural network models [2, 3, 5–8], Gaussian process models [4, 9–12], and linear models [13–19]. In the repository, linear models are adopted as function F; they are described in Sec. II B.
A. Structural features

A systematic procedure to derive a set of structural features that can control the accuracy and computational efficiency of MLPs (e.g., [26, 32]) plays an essential role in automatically generating fast and accurate MLPs. Therefore, the repository employs systematic sets of structural features derived from order parameters representing the neighboring atomic density in terms of a basis set. They are classified into a set of structural features derived only from radial functions and a set of structural features derived from radial and spherical harmonic functions.

A pairwise structural feature is expressed as

d_{n0}^{(i)} = \sum_{j \in \mathrm{neighbor}} f_n(r_{ij}),    (3)

where r_{ij} denotes the distance between atoms i and j. The repository adopts a finite basis set of Gaussian-type radial functions given by

f_n(r) = \exp\left[-\beta_n (r - r_n)^2\right] f_c(r),    (4)

where \beta_n and r_n denote parameters. Cutoff function f_c ensures smooth decay of the radial function, and the repository employs a cosine-based cutoff function expressed as

f_c(r) = \begin{cases} \frac{1}{2}\left[\cos\left(\pi \frac{r}{r_c}\right) + 1\right] & (r \le r_c) \\ 0 & (r > r_c) \end{cases}    (5)

Another structural feature is a linearly independent polynomial invariant of the O(3) group, which is generated from order parameters representing the neighboring atomic density in terms of spherical harmonics. A pth-order polynomial invariant for a given radial number n and a given set of angular numbers \{l_1, l_2, \cdots, l_p\} is defined by a linear combination of products of p order parameters, expressed as

d_{n l_1 l_2 \cdots l_p, (\sigma)}^{(i)} = \sum_{\{m_1, m_2, \cdots, m_p\}} C_{m_1 m_2 \cdots m_p}^{l_1 l_2 \cdots l_p, (\sigma)} \, a_{n l_1 m_1}^{(i)} a_{n l_2 m_2}^{(i)} \cdots a_{n l_p m_p}^{(i)},    (6)

where a_{nlm}^{(i)} denotes the order parameter of component nlm representing the neighboring atomic density of atom i. The order parameters for atom i in a given structure are approximately calculated from its neighboring atomic density regardless of the orthonormality of the radial functions [26] as

a_{nlm}^{(i)} = \sum_{j \in \mathrm{neighbor}} f_n(r_{ij}) Y_{lm}^*(\theta_{ij}, \phi_{ij}),    (7)

where (r_{ij}, \theta_{ij}, \phi_{ij}) denotes the spherical coordinates of neighboring atom j centered at the position of atom i. A coefficient set \{C_{m_1 m_2 \cdots m_p}^{l_1 l_2 \cdots l_p, (\sigma)}\} is determined by using a group-theoretical projection operator method [33], ensuring that the linear combination of Eqn. (6) is invariant for arbitrary rotation [26]. In terms of fourth- and higher-order polynomial invariants, there exist multiple invariants that are linearly independent for most of the set \{l_1, l_2, \cdots, l_p\}. Therefore, they are distinguished by index \sigma if necessary.
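The pairwise structural features of Eqs. (3)-(5) are straightforward to compute. The following is a minimal illustrative sketch (not the repository's implementation); parameter values and function names are placeholders chosen for the example.

    import numpy as np

    def cutoff(r, rc):
        # Cosine-based cutoff of Eq. (5): smooth decay to zero at r = rc.
        return np.where(r <= rc, 0.5 * (np.cos(np.pi * r / rc) + 1.0), 0.0)

    def radial(r, beta, rn, rc):
        # Gaussian-type radial function of Eq. (4).
        return np.exp(-beta * (r - rn) ** 2) * cutoff(r, rc)

    def pairwise_features(distances, betas, centers, rc):
        # Eq. (3): one feature d_n0 per (beta_n, r_n) pair, summed over the
        # distances r_ij to all neighbors of atom i within the cutoff.
        r = np.asarray(distances, dtype=float)
        return np.array([radial(r, b, rn, rc).sum()
                         for b, rn in zip(betas, centers)])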
B. Energy models with respect to structural features

The repository uses polynomial functions as function F representing the relationship between the atomic energy and a given set of structural features, D = \{d_1, d_2, \cdots\}. The polynomial functions with regression coefficients \{w\} are given as follows.

F_1(D) = \sum_i w_i d_i
F_2(D) = \sum_{\{i,j\}} w_{ij} d_i d_j
F_3(D) = \sum_{\{i,j,k\}} w_{ijk} d_i d_j d_k
\cdots
F_{2,\mathrm{pow}}(D) = \sum_i w_{ii} d_i d_i
F_{3,\mathrm{pow}}(D) = \sum_i w_{iii} d_i d_i d_i
\cdots    (8)

A potential energy model is identified with a combination of the polynomial functions and structural features. The repository introduces the following six potential energy models. When a set of pairwise structural features is described as D_\mathrm{pair}^{(i)} = \{d_{n0}^{(i)}\}, the first model (model type = 1, feature type = pair) is composed of powers of the pairwise structural features as

E^{(i)} = F_1\left(D_\mathrm{pair}^{(i)}\right) + F_{2,\mathrm{pow}}\left(D_\mathrm{pair}^{(i)}\right) + F_{3,\mathrm{pow}}\left(D_\mathrm{pair}^{(i)}\right) + \cdots,    (9)

which is measured from the energy of the isolated state of atom i. This model was proposed in Refs. 13 and 14. The second model (model type = 2, feature type = pair) is a polynomial of the pairwise structural features with their cross terms, expressed as

E^{(i)} = F_1\left(D_\mathrm{pair}^{(i)}\right) + F_2\left(D_\mathrm{pair}^{(i)}\right) + F_3\left(D_\mathrm{pair}^{(i)}\right) + \cdots.    (10)

This model can be regarded as a natural extension of embedded atom method (EAM) potentials as demonstrated in Ref. 15. The other four models are derived from the polynomial invariants of Eqn. (6). When a set of the polynomial invariants is expressed by the union of sets of pth-order polynomial invariants as

D^{(i)} = D_\mathrm{pair}^{(i)} \cup D_2^{(i)} \cup D_3^{(i)} \cup D_4^{(i)} \cup \cdots,    (11)

where

D_2^{(i)} = \{d_{nll}^{(i)}\}, \quad D_3^{(i)} = \{d_{n l_1 l_2 l_3}^{(i)}\}, \quad D_4^{(i)} = \{d_{n l_1 l_2 l_3 l_4, (\sigma)}^{(i)}\},    (12)

the third model (model type = 1, feature type = invariants) is expressed as

E^{(i)} = F_1\left(D^{(i)}\right) + F_{2,\mathrm{pow}}\left(D^{(i)}\right) + F_{3,\mathrm{pow}}\left(D^{(i)}\right) + \cdots.    (13)

This model consists of the powers of the polynomial invariants. A linear polynomial form of the polynomial invariants, E^{(i)} = F_1(D^{(i)}), which was proposed in Ref. 26, is included in the third model. Note that a linear polynomial model with up to third-order invariants, expressed by

E^{(i)} = F_1\left(D_\mathrm{pair}^{(i)} \cup D_2^{(i)} \cup D_3^{(i)}\right),    (14)

is regarded as a spectral neighbor analysis potential (SNAP) [16]. The fourth model (model type = 2, feature type = invariants) is given by a polynomial of the polynomial invariants as

E^{(i)} = F_1\left(D^{(i)}\right) + F_2\left(D^{(i)}\right) + F_3\left(D^{(i)}\right) + \cdots.    (15)

A quadratic polynomial model of the polynomial invariants up to the third order, expressed as

E^{(i)} = F_1\left(D_\mathrm{pair}^{(i)} \cup D_2^{(i)} \cup D_3^{(i)}\right) + F_2\left(D_\mathrm{pair}^{(i)} \cup D_2^{(i)} \cup D_3^{(i)}\right),    (16)

is regarded as a quadratic SNAP [34]. The fifth model (model type = 3, feature type = invariants) is the sum of a linear polynomial form of the polynomial invariants and a polynomial of pairwise structural features, described as

E^{(i)} = F_1\left(D^{(i)}\right) + F_2\left(D_\mathrm{pair}^{(i)}\right) + F_3\left(D_\mathrm{pair}^{(i)}\right) + \cdots.    (17)

The sixth model (model type = 4, feature type = invariants) is the sum of a linear polynomial form of the polynomial invariants and a polynomial of pairwise structural features and second-order polynomial invariants. This is written as

E^{(i)} = F_1\left(D^{(i)}\right) + F_2\left(D_\mathrm{pair}^{(i)} \cup D_2^{(i)}\right) + \cdots.    (18)
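To make the structure of these models concrete, the following is a minimal sketch of evaluating the linear (F_1) and quadratic (F_2) contributions of Eq. (8) for one atom and summing atomic energies as in Eq. (1). It is illustrative only; the coefficient arrays stand in for the ridge-regressed coefficients described later.

    import numpy as np

    def atomic_energy(d, w1, W2=None):
        # d: structural-feature vector of one atom; w1: linear coefficients.
        e = w1 @ d                      # F1(D) = sum_i w_i d_i
        if W2 is not None:              # F2(D) = sum_{i,j} w_ij d_i d_j
            e += d @ W2 @ d
        return e

    def total_energy(features_per_atom, w1, W2=None):
        # Eq. (1): the total energy is the sum of atomic contributions.
        return sum(atomic_energy(d, w1, W2) for d in features_per_atom)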
III. DATASETS

Training and test datasets are generated from prototype structures, i.e., structure generators. The repository uses two sets of structure generators for elemental systems. One is composed of face-centered cubic (fcc), body-centered cubic (bcc), hexagonal close-packed (hcp), simple cubic (sc), \omega, and \beta-tin structures, which was employed in Ref. 29. Hereafter, structures generated from this structure generator set are denoted by "dataset 1". The other is composed of prototype structures reported in the Inorganic Crystal Structure Database (ICSD) [35], which aims to generate a wide variety of structures. For elemental systems, only prototype structures composed of single elements with zero oxidation state are chosen from the ICSD. The total number of the structure generators is 86. A list of structure generators can be found in the Appendix of Ref. 26. Hereafter, structures generated from the second set are denoted by "dataset 2".

Given a structure generator, the atomic positions and lattice constants of the structure generator are fully optimized by DFT calculation to obtain its equilibrium structure. Then, a new structure is constructed by random lattice expansion, random lattice distortion, and random atomic displacements into a supercell of the structure generator. For a given parameter \varepsilon controlling the degree of lattice expansion, lattice distortion, and atomic displacements, the lattice vectors of the new structure A' and the fractional coordinates of an atom in the new structure f' are given as

A' = A + \varepsilon R    (19)
f' = f + \varepsilon A'^{-1} \eta,    (20)

where the (3 \times 3) matrix R and the three-dimensional vector \eta are composed of uniform random numbers ranging from -1 to 1. Matrix A and vector f represent the lattice vectors of the original supercell and the fractional coordinates of the atom in the original supercell, respectively.

For each elemental system, datasets 1 and 2 are composed of 3,000 and 10,000 structures, respectively, in addition to the equilibrium structures of the structure generators. Dataset 1 was developed in Ref. 29, whereas dataset 2, except for the case of elemental aluminum, is developed in this study. Each of the datasets is then randomly divided into a training dataset and a test dataset. In the repository, datasets 1 and 2 are available for 31 and 47 elements, respectively. This means that the repository contains MLPs developed from a total of 567,228 DFT calculations.

DFT calculations were performed using the plane-wave-basis projector augmented wave method [36] within the Perdew–Burke–Ernzerhof exchange-correlation functional [37] as implemented in the vasp code [38–40]. The cutoff energy was set to 300 eV. The total energies converged to less than 10^{-3} meV/supercell. The atomic positions and lattice constants of the structure generators were optimized until the residual forces were less than 10^{-2} eV/Å.
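The random perturbation of Eqs. (19)-(20) maps directly to a few lines of code. The following is a minimal sketch under the conventions above (lattice vectors as rows of a 3 × 3 matrix, fractional coordinates as an N × 3 array); it is illustrative, not the repository's implementation.

    import numpy as np

    def perturb_structure(A, frac, eps, seed=None):
        rng = np.random.default_rng(seed)
        R = rng.uniform(-1.0, 1.0, size=(3, 3))
        A_new = A + eps * R                               # Eq. (19)
        eta = rng.uniform(-1.0, 1.0, size=frac.shape)
        # Eq. (20): displacements eta are mapped to fractional coordinates
        # through the inverse of the new lattice matrix.
        frac_new = frac + eps * eta @ np.linalg.inv(A_new).T
        return A_new, frac_new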
IV. MODEL COEFFICIENT ESTIMATION

Coefficients of a potential energy model are estimated from all the total energies, forces, and stress tensors included in a training dataset. Given a potential energy model, therefore, the predictor matrix and observation vector are simply written in a submatrix form as

X = \begin{bmatrix} X_\mathrm{energy} \\ X_\mathrm{force} \\ X_\mathrm{stress} \end{bmatrix}, \quad y = \begin{bmatrix} y_\mathrm{energy} \\ y_\mathrm{force} \\ y_\mathrm{stress} \end{bmatrix}.    (21)

The predictor matrix X is divided into three submatrices, X_energy, X_force, and X_stress, which contain structural features and their polynomial contributions to the total energies, the forces acting on atoms, and the stress tensors of structures in the training dataset, respectively. The observation vector y also has three components, y_energy, y_force, and y_stress, which contain the total energy, the forces acting on atoms, and the stress tensors of structures in the training dataset, respectively, obtained from DFT calculations. Using the predictor matrix and the observation vector, coefficients of a potential energy model are estimated by linear ridge regression.

In the case of dataset 2 for elemental aluminum, the training data has 9,086, 1,314,879, and 54,516 entries for the energy, the force, and the stress tensor, respectively. Therefore, the predictor matrix X has a size of (1,378,481, n_coeff), where n_coeff denotes the number of coefficients of the potential energy model and ranges from 10 to 32,850 in the potential energy models of the repository.

V. PARETO OPTIMALITY

The accuracy and computational efficiency of the present MLP strongly depend on the given input parameters. They are (1) the cutoff radius, (2) the type of structural features, (3) the type of potential energy model, (4) the number of radial functions, (5) the polynomial order in the potential energy model, and (6) the truncation of the polynomial invariants, i.e., the maximum angular numbers of spherical harmonics, \{l_\max^{(2)}, l_\max^{(3)}, \cdots, l_\max^{(p_\max)}\}, and the polynomial order of invariants, p_\max. Therefore, a systematic grid search is performed for each system to find their optimal values. The input parameters used for developing MLPs can be found in the repository.

However, it is difficult to find the optimal set of parameters because the accuracy and computational efficiency of an MLP are conflicting properties whose trade-off should be optimized, as pointed out in Ref. 26. In this multiobjective optimization problem involving several conflicting objectives, there is no single optimal solution but a set of alternatives with different trade-offs between the accuracy and the computational efficiency. In such a case, Pareto optimal points can be optimal solutions with different trade-offs [41]. Therefore, the repository contains all Pareto optimal MLPs for each system and each dataset.

[Figure 1: Pareto optimal MLPs for elemental Al (dataset 1) — prediction error (meV/atom) versus elapsed time (s/atom/MD step), with additional panels comparing MLP1, MLP2, and MLP3 predictions against DFT energies (eV/atom) for the training and test sets.]

FIG. 1. Distribution of MLPs in a grid search to find optimal parameters controlling the accuracy and the computational efficiency of the MLP for elemental Al. The elapsed time for a single point calculation is estimated using a single core of Intel® Xeon® E5-2695 v4 (2.10 GHz). The red closed circles show the Pareto optimal points of the distribution obtained using a non-dominated sorting algorithm. The cyan closed circles indicate the MLP with the lowest prediction error and two Pareto optimal MLPs with higher computational cost performance. The distribution of the prediction errors for dataset 1 is also shown.

TABLE I. Model parameters of MLP1, MLP2, and MLP3 for elemental Al.

                                         MLP1        MLP2   MLP3
Number of coefficients                   2410        1770   815
Feature type                             Invariants  Pair   Pair
Cutoff radius                            12.0        12.0   8.0
Number of radial functions               20          20     15
Model type                               3           2      2
Polynomial order (function F)            3           3      3
Polynomial order (invariants)            4           −      −
{l_max^(2), l_max^(3), ...}              [4,4,2]     −      −
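For a two-objective problem like this one (elapsed time t versus prediction error ΔE), extracting the Pareto optimal points is simple. The following is an illustrative sketch of the underlying idea, not the non-dominated sorting algorithm actually used for the figures.

    def pareto_front(points):
        # points: iterable of (t, dE) pairs; a point is Pareto optimal if no
        # other point is at least as fast and at least as accurate, with at
        # least one strict improvement.
        front = []
        for t, de in points:
            dominated = any(t2 <= t and de2 <= de and (t2 < t or de2 < de)
                            for t2, de2 in points)
            if not dominated:
                front.append((t, de))
        return sorted(front)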
[Figure 2: Pareto optimal MLPs (dataset 1) for elemental Cu, Ga, Mg, Zn, Ti, Zr, Y, and Nb — one panel per element showing prediction error (meV/atom) versus elapsed time (s/atom/MD step, single CPU core), with MLP1, MLP2, and MLP3 marked.]

FIG. 2. Distribution of MLPs in a grid search for elemental Cu, Ga, Mg, Zn, Ti, Zr, Y, and Nb. The closed red circles show the Pareto optimal points of the distribution.

VI. MLPS IN REPOSITORY

Figure 1 shows the prediction error and the computational efficiency of the Pareto optimal MLPs developed from dataset 1 for elemental Al. Figure 2 also shows the Pareto optimal MLPs for elemental Cu, Ga, Mg, Zn, Ti, Zr, Y, and Nb. The prediction error is estimated using the root mean square (RMS) error of the energy for the test dataset. The computational efficiency is estimated using the elapsed time to compute the energy, the forces, and the stress tensors of a structure with 284 atoms. In Figs. 1 and 2, the elapsed time is normalized by the number of atoms because it is proportional to the number of atoms, as shown later. The behavior of the relationship between the prediction error and the computational efficiency for the other systems can be found in the repository.

Users of the repository can choose an appropriate MLP from the Pareto optimal ones according to their targets and purposes. The MLP with the lowest prediction error is denoted by "MLP1", whereas two Pareto optimal MLPs showing higher computational cost performance are denoted by "MLP2" and "MLP3". As can be seen in Figs. 1 and 2, MLP2 and MLP3 exhibit high computational efficiency without significantly increasing the prediction error. This study introduces simple scores to evaluate the computational cost performance from the elapsed time t with the unit of ms/atom/step and the prediction error ∆E with the unit of meV/atom: MLP2 and MLP3, with higher computational cost performance, minimize t + ∆E and 10t + ∆E, respectively. Figure 1 shows the distribution of the prediction errors for structures in dataset 1. Table I also lists the values of the model parameters of MLP1, MLP2, and MLP3. This information for the other Pareto optimal MLPs and the other systems can be found in the repository.
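Given a Pareto front of (t, ΔE) pairs, the MLP1/MLP2/MLP3 selection rules above reduce to three one-line minimizations. This sketch is illustrative and assumes t in ms/atom/step and ΔE in meV/atom, as in the scores defined in the text.

    def select_mlps(front):
        # front: list of (t, dE) Pareto optimal points.
        mlp1 = min(front, key=lambda p: p[1])               # lowest error
        mlp2 = min(front, key=lambda p: p[0] + p[1])        # minimizes t + dE
        mlp3 = min(front, key=lambda p: 10 * p[0] + p[1])   # minimizes 10t + dE
        return mlp1, mlp2, mlp3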
Tables II and III list the prediction error and the computational efficiency of MLPs for each elemental system obtained from datasets 1 and 2, respectively. MLP2 and MLP3 exhibit high computational efficiency while avoiding a significant increase of the prediction error. Therefore, MLP2 and MLP3 can be regarded as better potentials than MLP1 for most practical purposes.

Figure 3 shows the elapsed times of single point calculations for structures with up to 32,000 atoms using the EAM potential [42], MLP1, MLP2, and MLP3 for elemental Al. Structures were made by the expansion of the fcc conventional unit cell with a lattice constant of 4 Å. As can be seen in Fig. 3, linear scaling with respect to the number of atoms is achieved in all the MLPs. Although the performance for only three MLPs is shown here, the other MLPs also exhibit linear scaling with respect to the number of atoms. Therefore, the computational time required for a calculation of n_step steps for a structure with n_atom atoms can be estimated as t × n_atom × n_step, where t is the elapsed time per atom for a single point calculation listed in the repository.

[Figure 3: Elapsed time per MD step versus number of atoms (log–log) for MLP1, MLP2, MLP3, and the EAM potential.]

FIG. 3. Dependence of the computational time required for a single point calculation on the number of atoms. The elapsed time is measured using a single core of Intel® Xeon® E5-2695 v4 (2.10 GHz).

VII. CONCLUSION

An MLP repository developed by a systematic application of the procedure to obtain Pareto optimal MLPs has been demonstrated in this paper. In particular, MLPs with high computational cost performance, showing high computational efficiency without increasing the prediction error, are useful for most practical purposes. Currently, many Pareto optimal MLPs are available in the repository from the website, and the number of MLP entries in the repository is continuously increasing. Therefore, the repository should be useful in performing accurate and fast atomistic simulations.

ACKNOWLEDGMENTS

This work was supported by a Grant-in-Aid for Scientific Research (B) (Grant Number 19H02419) and a Grant-in-Aid for Scientific Research on Innovative Areas (Grant Number 19H05787) from the Japan Society for the Promotion of Science (JSPS).
TABLE II. Prediction error and computational efficiency of MLPs constructed from dataset 1 for 31 elemental systems. The normalized elapsed time for a single point calculation, the RMS error for the energy, and the RMS error for the force are denoted by t (s/atom/step), ∆E (meV/atom), and ∆f (eV/Å), respectively. MLP1 shows the lowest prediction error of ∆E. MLP2 and MLP3 show the lowest values of t + ∆E and 10t + ∆E, respectively.

        MLP1                  MLP2                 MLP3
     t      ∆E    ∆f       t     ∆E    ∆f       t     ∆E     ∆f
Ag   10.71  1.9   0.004    0.07  2.0   0.008    0.03  2.2    0.011
Al   6.74   0.5   0.006    0.30  0.9   0.014    0.07  1.8    0.016
Au   21.65  0.5   0.006    0.66  0.7   0.012    0.05  1.8    0.027
Ba   5.79   1.0   0.005    0.30  1.2   0.011    0.10  1.8    0.013
Be   12.55  1.1   0.019    1.54  2.0   0.026    0.13  5.5    0.043
Ca   15.83  1.0   0.004    0.07  1.1   0.011    0.07  1.1    0.011
Cd   2.02   4.6   0.016    0.13  5.0   0.011    0.05  5.3    0.018
Cr   12.64  2.8   0.061    2.31  3.6   0.070    0.80  5.5    0.082
Cs   6.60   0.5   0.001    0.16  0.5   0.002    0.12  0.6    0.001
Cu   6.82   1.7   0.004    0.10  2.0   0.011    0.03  2.2    0.013
Ga   21.73  0.5   0.006    0.29  0.6   0.014    0.10  1.2    0.015
Hf   21.72  0.9   0.039    1.85  1.4   0.051    0.18  4.3    0.103
Hg   4.01   0.8   0.004    0.23  1.0   0.008    0.07  1.2    0.010
In   21.67  0.5   0.005    0.22  0.7   0.014    0.07  1.2    0.014
K    18.53  0.4   0.000    0.09  0.5   0.001    0.09  0.5    0.001
Li   6.89   0.1   0.001    0.13  0.2   0.003    0.03  0.7    0.005
Mg   21.71  0.4   0.002    0.18  0.5   0.006    0.10  0.6    0.007
Mo   21.69  2.4   0.065    2.16  3.3   0.078    0.15  9.3    0.138
Na   21.68  0.2   0.001    0.10  0.2   0.001    0.05  0.4    0.002
Nb   21.65  2.4   0.048    2.18  2.8   0.058    0.10  9.0    0.127
Rb   12.48  0.5   0.001    0.12  0.5   0.001    0.09  0.7    0.001
Sc   21.71  0.7   0.017    0.22  2.6   0.048    0.18  3.0    0.049
Sr   21.53  0.5   0.003    0.22  0.7   0.008    0.13  0.8    0.009
Ta   21.70  1.6   0.056    0.91  3.3   0.071    0.22  8.5    0.126
Ti   21.67  1.4   0.035    1.85  1.8   0.047    0.13  5.5    0.100
Tl   21.66  0.5   0.005    0.29  0.8   0.012    0.10  1.6    0.014
V    6.95   2.2   0.048    1.09  3.1   0.058    0.07  8.5    0.129
W    21.69  2.9   0.080    2.32  3.9   0.092    0.14  12.0   0.177
Y    12.77  0.8   0.016    0.29  2.2   0.040    0.13  3.3    0.044
Zn   3.27   2.3   0.008    0.18  2.5   0.011    0.07  2.7    0.014
Zr   6.71   1.4   0.044    1.84  2.4   0.055    0.05  11.0   0.116
TABLE III. Prediction error and computational efficiency of MLPs constructed from dataset 2 for 47 elemental systems.

        MLP1                  MLP2                 MLP3
     t      ∆E    ∆f       t     ∆E    ∆f       t     ∆E     ∆f
Ag   18.51  1.1   0.019    0.76  1.3   0.033    0.28  2.5    0.035
Al   28.69  1.8   0.033    1.11  3.0   0.040    0.28  7.3    0.061
As   23.39  5.1   0.125    1.98  8.5   0.144    0.58  13.5   0.178
Au   23.49  3.1   0.028    0.75  3.9   0.035    0.28  7.0    0.056
Ba   36.80  0.7   0.013    1.44  1.9   0.021    0.13  4.6    0.034
Be   8.69   3.8   0.078    1.10  5.7   0.088    0.28  11.8   0.132
Bi   23.45  2.8   0.130    2.06  4.6   0.121    0.62  9.4    0.166
Ca   23.66  0.4   0.006    0.93  1.1   0.013    0.17  3.5    0.030
Cd   23.41  0.7   0.011    0.75  1.8   0.018    0.28  3.2    0.026
Cr   13.95  6.7   0.221    2.59  8.3   0.226    0.35  18.2   0.324
Cs   37.17  0.4   0.001    0.22  0.6   0.002    0.09  0.9    0.003
Cu   23.20  8.2   0.022    0.28  8.8   0.034    0.10  9.5    0.051
Ga   23.38  1.1   0.028    1.12  1.8   0.039    0.27  4.5    0.044
Ge   23.45  2.7   0.058    1.11  5.0   0.067    0.58  7.6    0.076
Hf   13.65  4.2   0.121    2.51  6.0   0.137    0.75  8.6    0.148
Hg   23.61  3.6   0.014    0.75  4.7   0.020    0.13  6.5    0.038
In   23.59  0.7   0.016    0.93  1.2   0.021    0.28  2.8    0.028
Ir   22.64  9.0   0.251    3.19  10.8  0.260    0.75  16.8   0.295
K    7.10   0.1   0.001    0.22  0.4   0.002    0.08  0.7    0.003
La   37.02  2.5   0.057    1.96  3.8   0.069    0.69  6.4    0.079
Li   23.59  0.2   0.004    0.75  0.9   0.010    0.10  1.7    0.019
Mg   23.62  0.3   0.006    0.75  0.8   0.009    0.14  2.9    0.029
Mo   18.21  7.3   0.211    3.48  8.6   0.226    0.75  15.6   0.266
Na   3.20   1.5   0.196    0.71  2.0   0.197    0.17  2.7    0.226
Nb   36.63  6.5   0.182    2.55  7.6   0.183    0.75  11.8   0.212
Os   14.15  10.2  0.304    3.21  11.4  0.300    0.75  18.8   0.348
P    23.76  7.3   0.176    1.98  9.8   0.186    1.11  11.8   0.192
Pb   23.98  1.2   0.028    0.94  2.4   0.037    0.17  5.0    0.057
Pd   14.51  2.6   0.073    0.99  4.0   0.080    0.28  7.6    0.096
Pt   23.19  5.3   0.137    1.11  7.3   0.148    0.72  8.2    0.156
Rb   36.57  0.0   0.000    0.13  0.5   0.002    0.07  0.9    0.004
Re   37.20  9.8   0.274    1.98  13.5  0.291    0.71  18.4   0.320
Rh   13.78  6.4   0.186    1.98  8.5   0.192    0.71  12.6   0.217
Ru   23.43  8.5   0.234    3.19  9.9   0.237    0.75  16.4   0.279
Sb   23.50  3.4   0.120    2.00  6.0   0.411    0.75  8.7    0.124
Sc   23.92  3.0   0.211    1.98  4.0   0.234    0.75  5.9    0.135
Si   23.58  4.1   0.077    1.11  7.2   0.088    0.75  8.9    0.095
Sn   23.45  1.7   0.036    1.12  3.5   0.049    0.58  5.5    0.061
Sr   11.80  0.7   0.007    0.76  1.6   0.014    0.18  3.2    0.022
Ta   22.94  6.5   0.190    3.17  7.7   0.195    0.75  12.3   0.221
Ti   13.20  4.4   0.143    1.98  6.4   0.146    0.69  9.2    0.163
Tl   24.14  0.8   0.015    0.72  2.2   0.023    0.15  5.0    0.060
V    14.23  6.4   0.188    2.54  8.4   0.196    0.71  12.3   0.228
W    22.71  8.3   0.247    3.17  9.8   0.254    0.99  14.7   0.286
Y    36.55  2.6   0.050    1.98  3.9   0.062    0.71  6.7    0.070
Zn   23.62  1.1   0.017    0.99  1.9   0.024    0.27  4.6    0.038
Zr   14.57  5.9   0.130    0.82  9.0   0.139    0.75  9.1    0.140
[1] A. Seko, Machine Learning Potential Repository at Kyoto University, https://sekocha.github.io/repository/index-e.html.
[2] S. Lorenz, A. Groß, and M. Scheffler, Chem. Phys. Lett. 395, 210 (2004).
[3] J. Behler and M. Parrinello, Phys. Rev. Lett. 98, 146401 (2007).
[4] A. P. Bartók, M. C. Payne, R. Kondor, and G. Csányi, Phys. Rev. Lett. 104, 136403 (2010).
[5] J. Behler, J. Chem. Phys. 134, 074106 (2011).
[6] J. Han, L. Zhang, R. Car, and E. Weinan, Commun. Comput. Phys. 23, 629 (2018).
[7] N. Artrith and A. Urban, Comput. Mater. Sci. 114, 135 (2016).
[8] N. Artrith, A. Urban, and G. Ceder, Phys. Rev. B 96, 014112 (2017).
[9] W. J. Szlachta, A. P. Bartók, and G. Csányi, Phys. Rev. B 90, 104108 (2014).
[10] A. P. Bartók, J. Kermode, N. Bernstein, and G. Csányi, Phys. Rev. X 8, 041048 (2018).
[11] Z. Li, J. R. Kermode, and A. De Vita, Phys. Rev. Lett. 114, 096405 (2015).
[12] A. Glielmo, P. Sollich, and A. De Vita, Phys. Rev. B 95, 214302 (2017).
[13] A. Seko, A. Takahashi, and I. Tanaka, Phys. Rev. B 90, 024101 (2014).
[14] A. Seko, A. Takahashi, and I. Tanaka, Phys. Rev. B 92, 054113 (2015).
[15] A. Takahashi, A. Seko, and I. Tanaka, Phys. Rev. Mater. 1, 063801 (2017).
[16] A. Thompson, L. Swiler, C. Trott, S. Foiles, and G. Tucker, J. Comput. Phys. 285, 316 (2015).
[17] M. A. Wood and A. P. Thompson, J. Chem. Phys. 148, 241721 (2018).
[18] C. Chen, Z. Deng, R. Tran, H. Tang, I.-H. Chu, and S. P. Ong, Phys. Rev. Mater. 1, 043603 (2017).
[19] A. V. Shapeev, Multiscale Model. Simul. 14, 1153 (2016).
[20] V. L. Deringer, C. J. Pickard, and G. Csányi, Phys. Rev. Lett. 120, 156001 (2018).
[21] E. V. Podryabinkin, E. V. Tikhonov, A. V. Shapeev, and A. R. Oganov, arXiv preprint arXiv:1802.07605 (2018).
[22] K. Gubaev, E. V. Podryabinkin, G. L. Hart, and A. V. Shapeev, Comput. Mater. Sci. 156, 148 (2019).
[23] T. Mueller, A. Hernandez, and C. Wang, J. Chem. Phys. 152, 050902 (2020).
[24] NIST Interatomic Potentials Repository, http://www.ctcms.nist.gov/potentials.
[25] E. Tadmor, R. Elliott, J. Sethna, R. Miller, and C. Becker, Knowledgebase of interatomic models (KIM), https://openkim.org (2011).
[26] A. Seko, A. Togo, and I. Tanaka, Phys. Rev. B 99, 214108 (2019).
[27] A. Hernandez, A. Balasubramanian, F. Yuan, S. A. Mason, and T. Mueller, npj Comput. Mater. 5, 1 (2019).
[28] Y. Zuo, C. Chen, X. Li, Z. Deng, Y. Chen, J. Behler, G. Csányi, A. V. Shapeev, A. P. Thompson, M. A. Wood, and S. P. Ong, J. Phys. Chem. A 124, 731 (2020), pMID: 31916773.
[29] A. Takahashi, A. Seko, and I. Tanaka, J. Chem. Phys. 148, 234106 (2018).
[30] LAMMPS code, http://lammps.sandia.gov.
[31] A. Seko, lammps-mlip-package, https://github.com/sekocha/lammps-mlip-package.
[32] A. P. Bartók, R. Kondor, and G. Csányi, Phys. Rev. B 87, 184115 (2013).
[33] M. El-Batanouny and F. Wooten, Symmetry and Condensed Matter Physics: A Computational Approach (Cambridge University Press, 2008).
[34] M. A. Wood and A. P. Thompson, J. Chem. Phys. 148, 241721 (2018).
[35] G. Bergerhoff and I. D. Brown, in Crystallographic Databases, edited by F. H. Allen et al. (International Union of Crystallography, Chester, 1987).
[36] P. E. Blöchl, Phys. Rev. B 50, 17953 (1994).
[37] J. P. Perdew, K. Burke, and M. Ernzerhof, Phys. Rev. Lett. 77, 3865 (1996).
[38] G. Kresse and J. Hafner, Phys. Rev. B 47, 558 (1993).
[39] G. Kresse and J. Furthmüller, Phys. Rev. B 54, 11169 (1996).
[40] G. Kresse and D. Joubert, Phys. Rev. B 59, 1758 (1999).
[41] J. Branke, K. Deb, K. Miettinen, and R. Słowiński, Multiobjective Optimization: Interactive and Evolutionary Approaches, Vol. 5252 (Springer Science & Business Media, 2008).
[42] K. W. Jacobsen, J. K. Norskov, and M. J. Puska, Phys. Rev. B 35, 7423 (1987).
ai_researcher
2
Deep_Insights_into_Automated_Optimization_with_Large_Language_Models_and_Evolutionary_Algorithms.pdf
Deep Insights into Automated Optimization with Large Language Models and Evolutionary Algorithms

He Yu^a,b, Jing Liu^a,b
^a School of Artificial Intelligence, Xidian University, 2 South Taibai Road, Xi'an, 710071, Shaanxi, China
^b Guangzhou Institute of Technology, Xidian University, Knowledge City, Guangzhou, 510555, Guangdong, China

arXiv:2410.20848v1 [cs.NE] 28 Oct 2024

Abstract

Designing optimization approaches, whether heuristic or meta-heuristic, often requires extensive manual intervention, and such approaches struggle to generalize across diverse problem domains. The integration of Large Language Models (LLMs) and Evolutionary Algorithms (EAs) presents a promising new way to overcome these limitations and make optimization more automated, where LLMs function as dynamic agents capable of generating, refining, and interpreting optimization strategies, while EAs explore complex solution spaces efficiently through evolutionary operators. Since this synergy enables a more efficient and creative searching process, in this paper, we first conduct an extensive review of recent research on the application of LLMs in optimization, focusing on LLMs' dual functionality as solution generators and algorithm designers. Then, we summarize the common and valuable design in existing work and propose a novel LLM-EA paradigm for automated optimization. Furthermore, focusing on this paradigm, we conduct an in-depth analysis of innovative methods for three key components, namely, individual representation, variation operators, and fitness evaluation, addressing challenges related to heuristic generation and solution exploration, particularly from the perspective of LLM prompts. Our systematic review and thorough analysis of the paradigm can help researchers better understand the current research and boost the development of combining LLMs with EAs for automated optimization.

Keywords: evolutionary algorithms, large language models, optimization, prompt engineering, deep learning

1. Introduction

Optimization [1] plays a pivotal role in solving complex challenges across various industries, from logistics and manufacturing to machine learning and healthcare. At its core, optimization seeks to identify the best solution from a set of candidates according to specific objectives, while adhering to constraints. The growing scale and complexity of real-world optimization problems demand approaches that can navigate vast search spaces efficiently. Traditional optimization methods, such as gradient-based approaches and mathematical programming, have long been employed for problems with well-defined objective functions. However, these methods often struggle with real-world problems that are non-differentiable, multi-modal, or laden with constraints and uncertainties. This gap has driven the development of more flexible, adaptable, and scalable methods, leading to the rise of heuristics [2], which aim to provide approximate solutions efficiently.

Heuristics emerged as practical tools for generating "good-enough" solutions without requiring exhaustive searches. While heuristics have been successful in many applications, they come with limitations. Traditional heuristics often require careful manual design, limiting their adaptability to new problems. Meta-heuristics [3, 4], such as genetic algorithms [5] and simulated annealing [6], offer more general approaches but often require parameter fine-tuning and expert knowledge.
Hyper-heuristics [7, 8] attempt to automate the selection or generation of heuristics, representing a step forward. However, they remain constrained by predefined low-level heuristics or components, limiting their adaptability to highly dynamic and complex problems. The integration of Large Language Models (LLMs) [9] and Evolutionary Algorithms (EAs) [10] presents a promising new way to overcome these limitations. LLMs function as dynamic agents capable of generating, refining, and interpreting optimization strategies, while EAs explore complex solution spaces efficiently through evolutionary operators like selection, mutation, and crossover. Together, LLM-EA offers the potential to reduce the need for manual tuning and expert knowledge, paving the way for more automated and adaptable optimization frameworks.

This paper makes several key contributions to the study of integrating LLMs with EAs for automated optimization. First, we provide a brief review of the historical development of heuristics, from traditional methods to hyper-heuristics, offering readers a foundational understanding of the field. Then, we conduct an extensive review of recent research on the application of LLMs in optimization, highlighting their roles as searching operators, solvers, and in algorithm design. Building on these insights, we summarize the common and valuable design in existing work and propose a novel LLM-EA paradigm for automated optimization, which combines the strengths of LLMs and EAs to enhance the efficiency and adaptability of optimization processes. Furthermore, focusing on the paradigm, we conduct an in-depth analysis of innovative methods for the three key components, namely, individual representation, variation operators, and fitness evaluation, addressing challenges related to heuristic generation and solution exploration. Finally, we identify current challenges and outline future directions for research, emphasizing the potential for further advancements in generalization, transparency, and scalability in LLM-EA systems.

The remainder of this paper is organized as follows: Section 2 presents the evolution of heuristics in automated optimization, offering a detailed overview of traditional, meta-, and hyper-heuristic approaches. Section 3 explores the key technologies of LLMs and EAs that enable effective heuristic and solution generation. Section 4 discusses recent advancements in LLM-based optimization, followed by a detailed analysis of the novel LLM-EA automated optimization paradigm in Section 5. In Section 6, we identify and address the challenges associated with LLM-EA systems and suggest future research directions to enhance their scalability and transparency. Finally, Section 7 concludes the paper with key insights and the potential impact of this research on future optimization methodologies.

Figure 1: The major development of heuristics

2. Evolution of Heuristics for Automated Optimization

Heuristics are problem-solving techniques designed to provide approximate solutions to optimization problems where finding the exact solution is computationally prohibitive. Throughout the evolution from heuristics to meta-heuristics and hyper-heuristics, as shown in Figure 1, the key goal is to create more generalized, flexible, and automated approaches for solving optimization problems. Each development reduces the dependence on problem-specific adjustments and domain-specific expertise.
In the following subsections, we briefly discuss each of these developments.

2.1. Pre-Heuristic Approaches to Optimization

Before the development of heuristics, optimization largely relied on methods such as exhaustive search, linear programming, and gradient-based techniques. While effective for small and well-structured problems, these methods struggled with the increasing size and complexity of modern optimization tasks. Exhaustive search, for example, which systematically explores all possible solutions, quickly becomes computationally infeasible for combinatorial optimization problems like the Traveling Salesman Problem (TSP).

Traditional optimization techniques, such as mathematical programming and gradient-based methods, offered solutions for continuous variables and smooth objective functions but were insufficient for non-differentiable or discrete problems. These limitations spurred the need for more flexible and scalable approaches, which led to the rise of heuristic methods.

2.2. Classical Heuristics

The first wave of heuristics introduced simple yet practical techniques like construction heuristics and local search. Construction heuristics [11] build solutions incrementally, often making greedy decisions at each step. For example, the nearest-neighbor heuristic for the TSP selects the closest unvisited city at each step, offering quick but often suboptimal solutions. Local search techniques [12, 13, 14] start with an initial solution and attempt to improve it by making small adjustments within its neighborhood, such as the 2-opt heuristic [15] for the TSP, which swaps edges in a tour to reduce the distance. While these methods provided efficient ways to tackle complex problems, heuristics require careful manual design and are easily trapped in local optima. These challenges highlighted the need for more robust strategies that could better balance exploration and exploitation in the searching process.

2.3. Rise of Meta-Heuristics

In response to the limitations of classical heuristics, meta-heuristics emerged as a more flexible and adaptable framework. Unlike classical heuristics, which are typically problem-specific, meta-heuristics are designed to be general algorithms applicable across a wide range of optimization problems, both combinatorial and continuous. A variety of meta-heuristic algorithms have been developed to tackle complex problems across various domains. Among them, EAs stand out as a prominent representative, including Genetic Algorithms (GAs) [5], Memetic Algorithms [16], Particle Swarm Optimization (PSO) [17], and Ant Colony Optimization (ACO) [18]. These algorithms share a common trait: they are designed to explore the solution space efficiently by leveraging principles inspired by natural processes or swarm intelligence.

Despite their differences in specific implementations and searching operations, these meta-heuristic algorithms all embody a general framework that involves an iterative searching process. They initialize a set of candidate solutions, evaluate their quality based on a predefined objective function, and iteratively update the solutions through various mechanisms such as selection, crossover, mutation, or information sharing among individuals. This process allows them to escape local minima and explore diverse regions of the searching space, thereby increasing the likelihood of finding globally optimal solutions.
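The shared iterative framework just described can be captured in a few lines. The following is a minimal, generic sketch for a minimization problem; the `init`, `evaluate`, and `vary` callables are problem-specific placeholders, not part of any particular algorithm named above.

    import random

    def metaheuristic(init, evaluate, vary, pop_size=20, generations=100):
        # Initialize a set of candidate solutions and track the best one.
        population = [init() for _ in range(pop_size)]
        best = min(population, key=evaluate)
        for _ in range(generations):
            parents = random.sample(population, 2)   # selection
            child = vary(*parents)                   # crossover / mutation
            population.append(child)
            population.sort(key=evaluate)            # keep the fittest
            population = population[:pop_size]
            best = min(best, population[0], key=evaluate)
        return best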
However, despite their generalization capabilities, meta-heuristics still face key challenges, particularly their reliance on carefully tuned parameters and the need for expert knowledge. This dependence on domain knowledge for tasks like designing fitness functions, selecting appropriate operators (e.g., mutation and crossover), and adjusting algorithmic components reduces the true generality of meta-heuristics and limits their applicability across diverse optimization problems without significant manual intervention.

2.4. Hyper-Heuristics: Automating Heuristic Design

To reduce the reliance on expert knowledge and achieve more automated optimization, hyper-heuristics address problems by searching for and generating heuristics tailored to the problem. They operate at a more abstract level. Hyper-heuristics rely on predefined low-level heuristics, typically employing either heuristic selection or heuristic generation [7, 8, 19, 20], which output heuristics through various methods, including machine learning prediction or EA search. The machine learning approach is more generalized, relying heavily on extensive training data for prediction, while the EA approach is based on an iterative search using specific problem-related training data.

Despite hyper-heuristics' potential to generalize across various problems, they are constrained by the quality and diversity of the available low-level heuristics or components, and if these components are not robust or flexible enough, the generated heuristics may perform poorly across different problems. Additionally, whether hyper-heuristics rely on training a prediction model or on EAs, their effectiveness highly depends on the quality and quantity of training data. Poor or insufficient training data may result in suboptimal heuristics that fail to generalize effectively to unseen problem instances or variations.

2.5. Toward a New Era of Automated Optimization

The integration of LLMs and EAs offers a promising new method for automated optimization, which leverages LLMs' generative capabilities combined with EAs' iterative optimization techniques, resulting in a framework that can both solve optimization problems and design the optimization algorithms. The key innovation in this integration is the dual role that LLMs play.

First, LLMs can directly generate solutions by interpreting prompt content and applying searching operators such as mutation and crossover. In this capacity, LLMs function as meta-heuristic agents, dynamically producing solutions based on real-time feedback from the optimization process. Second, LLMs can generate and refine heuristics—problem-solving strategies—assuming the role of a hyper-heuristic. This allows for continuous adaptation and improvement of both the search strategies and the resulting solutions, enhancing the flexibility and effectiveness of the optimization process.

The LLM-EA automated optimization paradigm holds the potential for generalized, scalable optimization across various domains, such as network design, logistics, and machine learning model optimization. By enabling the automated design of both solutions and the algorithms that generate them, this paradigm represents a significant leap forward in intelligent, adaptive problem solving, offering new opportunities for addressing complex, high-dimensional optimization challenges.

3. Fundamental Technologies in LLM and EA for Automated Optimization
3.1. Overview of LLMs and EAs

Large Language Models, such as GPT-4 and BERT [21], are built on the transformer architecture, which has revolutionized natural language processing (NLP). This architecture allows LLMs to process input sequences in parallel, rather than sequentially as seen in earlier models like RNNs [22] or LSTMs [23], making them significantly more efficient. The key innovation lies in the self-attention mechanism [24], which enables the model to weigh the importance of different words in a sentence relative to one another, regardless of their position in the text. This capability is crucial for understanding long-range dependencies in language and for capturing both syntactic and semantic information with high accuracy. The success of LLMs in NLP stems from pretraining on vast amounts of text, which allows them to generalize across various domains and tasks. By fine-tuning on specific tasks, LLMs can generate coherent and contextually relevant text, even in highly specialized fields.

Evolutionary Algorithms, on the other hand, are inspired by the process of natural evolution and use mechanisms like selection, mutation, and crossover to solve optimization problems. EAs are particularly useful when the search space is large, complex, or non-differentiable, rendering traditional methods like gradient descent ineffective. EAs begin with an initial population of candidate solutions, which are evaluated using a fitness function to measure their performance. High-performing individuals are more likely to be selected for reproduction. The crossover operation combines features from two or more parent solutions to create offspring, while mutation introduces random variations to maintain diversity. This iterative process refines solutions over multiple generations, making EAs especially suitable for black-box optimization, where the internal structure of the system is unknown.

3.2. Understanding Prompts in LLMs

A prompt [25] is essentially the input provided to LLMs to guide their output. The purpose of a prompt is to specify what the LLM should generate, whether it is answering a question, writing a paragraph, or completing a task based on a given example. A text prompt example is given in Figure 2(a). The simplicity or complexity of a prompt depends on the task at hand. At its core, a prompt can be thought of as the instruction or query that triggers the model's response. This is a foundational component of using generative LLMs like GPT or BERT, as it sets the direction for the model's output.
By presenting the LLMs with a small number of pertinent cases, few-shot prompting can enhance performance on the task by offering a clearer understanding of what is anticipated.

Chain-of-Thought prompt [28]: This method encourages LLMs to dissect their reasoning process into consecutive steps before reaching the final answer. As illustrated in Figure 2(d), the Chain-of-Thought prompt can boost the model's effectiveness on tasks that necessitate logical deduction or multi-step problem-solving. By prompting the model to articulate its thought process, this technique can make the model's reasoning more transparent and accurate.

3.2.2. Prompt optimization

Prompt optimization [29, 30] refers to the process of refining prompts to enhance the generative performance of LLMs. It involves iteratively adjusting prompts, experimenting with different variations, and using automated techniques to optimize for accuracy, efficiency, and output relevance. The following principles are essential for effective prompt optimization:

• Prompt Ensembling: This technique generates multiple variations of a prompt and aggregates their outputs to improve response quality and diversity.

• Prompt Tuning: Fine-tuning the structure, wording, and format of prompts can significantly impact the model's output quality, allowing for more targeted and precise results.

• Self-reflective Prompts: In this iterative approach, the LLM evaluates its own responses, identifies potential weaknesses, and suggests improvements, enabling continuous refinement.

By applying these principles, prompt optimization can produce more reliable and fine-tuned outputs, pushing the capabilities of LLMs in complex problem-solving scenarios.

3.3. EAs for Prompt Optimization

EAs have been used to solve optimization problems in LLMs, especially prompt optimization. Prompts can be categorized into continuous prompts, which use numerical vectors (embedding vectors) to influence LLMs' behavior directly [31], and text prompts, which are natural language instructions. EAs can effectively optimize prompts by exploring different configurations and selecting the most effective ones.

Continuous prompts involve tuning embedding vectors that act as "soft prompts", allowing fine control over the LLM's responses. For example, Black-Box Tuning (BBT) [32] applies EAs to optimize continuous prompts represented as embedding vectors, iteratively adjusting these vectors to improve performance on various language tasks, such as sentiment analysis and question answering. BBT shows that continuous prompts can be fine-tuned using EAs even when the LLM is treated as a black-box system.

For text prompts, EAs adjust phrasing and structure to find better ways of prompting the LLM. For example, EvoPrompt [33] utilizes evolutionary operators like mutation and crossover to modify parts of prompts, enabling the exploration of new variations that might lead to better performance. PromptBreeder [34] takes this further by evolving both the task-specific prompts and the evolutionary operators used to refine these prompts. This dual evolution ensures that the model not only generates high-quality prompts but also refines the method of prompt generation itself, ultimately leading to more robust outcomes when the LLM is later applied to complex optimization tasks.

As can be seen, improving the quality of prompts that guide LLMs to generate better responses is a natural application of EAs.
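To make this concrete, the sketch below outlines how an EvoPrompt-style loop over text prompts might be organized; it is an illustrative sketch, not EvoPrompt's actual implementation. The callables llm (which sends a prompt to a model and returns generated text) and score (which measures a prompt's task accuracy on a validation set) are assumed placeholders:

import random

def evolve_prompts(llm, score, seed_prompts, generations=10, pop_size=8):
    # Population of (prompt, fitness) pairs; fitness = validation accuracy.
    population = [(p, score(p)) for p in seed_prompts]
    for _ in range(generations):
        # Selection: keep the better half as parents.
        population.sort(key=lambda pf: pf[1], reverse=True)
        parents = population[: pop_size // 2]
        offspring = []
        while len(parents) + len(offspring) < pop_size:
            p1, p2 = random.sample([p for p, _ in parents], 2)
            # The LLM itself acts as the crossover + mutation operator.
            child = llm(
                "Combine the two instructions below into one new instruction, "
                "then reword it slightly to create a variant.\n"
                f"Instruction 1: {p1}\nInstruction 2: {p2}\nNew instruction:"
            )
            offspring.append((child, score(child)))
        population = parents + offspring
    return max(population, key=lambda pf: pf[1])[0]

Note that the evolutionary scaffolding (selection, population management) is ordinary EA code; only the variation step is delegated to the LLM.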
In fact, more creatively, EAs are combined with LLMs to solve optimization problems, which is the focus of this paper and is introduced in the following sections.

4. Development of LLM-based Optimization and LLM-EA Automated Optimization Paradigm

In this section, we first summarize existing work on LLM-based optimization, highlighting key techniques and common patterns. Based on these insights, we propose a novel paradigm where LLMs generate solutions and heuristics, while EAs iteratively refine them. This LLM-EA paradigm aims to automate and enhance optimization processes, reducing the need for manual intervention and improving adaptability across diverse problems.

4.1. LLM-based Optimization

LLMs, with their vast knowledge base and advanced natural language understanding, reasoning, and problem-solving capabilities, have garnered significant attention in the field of optimization. Research efforts in this area have primarily focused on two main directions: exploring the searching abilities of LLMs and using LLMs to model optimization problems. While the focus of this paper is on the former, for more details on the latter, readers can refer to [35, 36]. There has also been research into using the fundamental mechanism behind LLMs, the Transformer architecture, to solve optimization problems directly; this is also outside the scope of this paper, and readers can refer to [37, 38, 39] for further details.

Our focus is on how LLMs function as searching operators within optimization processes. Regardless of the type of problem being addressed, the majority of research has treated LLMs as searching operators, embedding them into iterative procedures or coupling them with EAs. Certainly, LLMs have also been employed in other stages of the optimization process, such as initialization, evaluation, and selection. However, the role of LLMs as searching operators remains the most valuable and complex to execute effectively. Therefore, in this section, we delve into how LLMs have been designed and used as searching operators. For a more general overview, refer to [40, 41].

As searching operators, LLMs have been applied to a variety of optimization problems, extending their reach from prompt optimization, as discussed above, to classical numerical and combinatorial optimization problems, and even to automatic algorithm design. In prompt optimization, optimization techniques have been used to improve the quality of prompts [31, 32, 42, 43, 44, 45]. In fact, many of these studies [31, 42, 43, 44] have already treated LLMs as searching operators to directly optimize prompts, given that text is the format LLMs naturally excel in. Recognizing that LLMs can optimize text, and that numerical values can also be treated as a form of text, researchers have begun exploring whether LLMs can solve classical optimization problems directly [46]. They applied LLM-based searching operators to continuous numerical optimization, combinatorial optimization, and more complex optimization problems [47, 48, 49]. In these cases, LLMs are designed to generate candidate solutions directly for the given problem, effectively functioning as solvers. However, as research advanced, it became apparent that while LLMs are exceptional at generating text, their ability to handle numerical optimization is somewhat limited [50].
On the other hand, LLMs have shown remarkable capabilities in code generation [51, 52], leading researchers to shift their focus towards guiding LLMs to design optimization algorithms rather than directly solving optimization problems. In this approach, LLMs are tasked with designing either specific components of an algorithm or entire algorithms. These algorithm components or complete algorithms are expressed in natural language, pseudo-code, or real code. Given that LLMs are more proficient at handling code than numerical values, there has been growing interest in utilizing LLMs for automated algorithm design.

Next, we focus on work that takes LLMs as a solver or uses them for automated algorithm design, which not only represents the core focus of optimization in this field but also lets us analyze the key developments and technologies that have emerged.

4.2. LLMs as a Solver

When LLMs are treated as solvers and asked to generate solutions for optimization problems directly, the core technology lies in designing prompts for LLM-based searching operators. In general, prompts contain two main types of information: the problem details and the required output from the LLM. In this case, the output is typically straightforward—producing one or more candidate solutions to the problem at hand. The real challenge arises in how the problem is presented to the LLM. The problem-related information in prompts can be categorized into three types: available solutions, the quality of those solutions, and guidance for the LLM's search direction or pattern. Most existing studies provide candidate solutions as examples, but the use of solution quality and search guidance varies across different works. Below, we summarize the major developments in this area.

Initially, researchers simply provided LLMs with candidate solutions. A notable example of this is Meyerson et al.'s work [53], where LLMs were employed as crossover operators in EAs. By providing pairs of existing solutions, the LLMs generated new solutions based on these examples. No information about solution quality or search guidance was included in this work. Another early study is Lehman et al.'s work [54], where LLMs were designed as mutation operators in genetic programming (GP). Since GP deals with code, the LLM-based mutation operators generated code solutions for the problems being tackled. Notably, this differs from later work on code generation for algorithm design. In [54], basic guidance was introduced alongside the solutions, with three LLM-mutation operators requiring the model to either make changes or small changes to the current solution, or to modify parameters of the current solution. While these instructions were simple, they introduced a level of search direction, influencing subsequent research. Hemberg et al. [55] applied LLMs to each part of GP, instead of just the mutation operator.

As research advanced, it became evident that providing only candidate solutions was insufficient; including the quality of solutions helped LLMs learn how solutions differ. A representative work in this space is OPRO, proposed by Yang et al. [56], which provided objective function values alongside each solution, though no specific search guidance was offered. OPRO was tested on small-scale linear regression problems and TSPs, with a focus still on prompt optimization. Following OPRO, the inclusion of solution quality became a standard practice in LLM-based optimization.
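As an illustration of this pattern, the sketch below assembles an OPRO-style meta-prompt that pairs each candidate solution with its objective value; the exact wording and ordering conventions are illustrative assumptions rather than OPRO's verbatim template:

def build_opro_style_prompt(problem_desc, scored_solutions, k=10):
    # Keep the k best (solution, value) pairs; lower objective value is better.
    top = sorted(scored_solutions, key=lambda sv: sv[1])[:k]
    lines = [problem_desc, "",
             "Below are previous solutions and their objective values:"]
    # List worst-to-best so the strongest examples appear closest to the request.
    for sol, val in sorted(top, key=lambda sv: sv[1], reverse=True):
        lines.append(f"solution: {sol}   value: {val}")
    lines += ["", "Give me a new solution that is different from all solutions "
                  "above and has a lower value than all of them."]
    return "\n".join(lines)

# Hypothetical usage with two TSP tours and their lengths:
prompt = build_opro_style_prompt(
    "Traveling salesman problem over cities 0..4 with a given distance matrix.",
    [([0, 1, 2, 3, 4], 21.0), ([0, 2, 1, 3, 4], 17.5)],
)

Note how the prompt carries only solutions and their quality; no explicit search guidance is included, matching the OPRO setting described above.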
Further developments in the use of search guidance emerged, recognizing its importance in steering LLMs more effectively. Following OPRO, Liu et al. [57] designed more detailed task instructions that were combined with EA steps. These instructions specified the sequence of operations, such as performing selection, followed by crossover, and then mutation. This combination of solution quality and structured guidance represented a significant step forward in prompt design.

Beyond prompt design, researchers have also extended this LLM-driven optimization approach to more complex problems, such as multi-objective optimization [58, 59]. Additionally, several innovations have further refined this paradigm. For example, Lange et al. [60] applied evolution strategies, asking LLMs to generate the next mean for a desired fitness level. Brahmachary et al. [61] introduced a technique that split the population into two groups for exploration and exploitation, providing different search guidance for each group. Huang et al. [50] conducted a comprehensive comparative study on the ability of LLMs to generate solutions for optimization problems directly, offering insights into their relative strengths and limitations.

4.3. LLMs for Automated Algorithm Design

The use of LLMs to automate algorithm design has progressed significantly over recent years. Initially, researchers focused on a single-round process, where LLMs were prompted to design new meta-heuristics. In these early studies, LLMs typically selected and analyzed existing algorithms, generating new ones in pseudo-code format. However, the performance of these generated algorithms could not be evaluated online, and their application was limited to conversational interactions rather than fully leveraging LLMs' optimization potential [62, 63, 64]. These approaches resemble chatting with LLMs more than mining the optimization ability of LLMs.

A more advanced approach involves embedding LLM-based searching operators into an iterative process, allowing continuous improvement of the generated algorithms. This shift enabled LLMs to move beyond simple heuristic generation to creating meta-heuristics. Within this context, researchers have focused on generating both components of heuristics/meta-heuristics and complete algorithms. Various optimization problems have been explored, including numerical optimization, combinatorial optimization, multi-objective optimization, and even complex network optimization [47, 49]. Given the varying complexities of these problems, numerical and combinatorial optimization have received the most attention, with subsequent studies expanding to multi-objective and complex problems. The mapping between the type of algorithm generated and the problems used for testing is not strictly one-to-one; some studies apply a single generated heuristic or meta-heuristic across multiple problem types. The iterative nature of algorithm improvement stems from two factors: LLMs' ability to generate code and the integration of an iterative process. One of the most widely used iterative processes is the EA.

Currently, the primary focus is on designing components within heuristics or meta-heuristics. Typically, an existing optimization algorithm is fixed, and LLMs are tasked with generating specific functions for the algorithm. For example, in FunSearch [65], LLMs generate functions to evaluate the score of each bin in the bin-packing problem. The prompt provides existing code, and the LLM is asked to generate only the function code; a simplified sketch of this evaluation setup is given below.
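The sketch illustrates a FunSearch-style setup for online bin packing: a fixed packing loop calls an LLM-generated scoring function, and the fitness of the generated code is simply the number of bins used. The function name score, the packing policy, and the fitness definition are simplifying assumptions, and a real system would sandbox the exec call rather than run untrusted code directly:

def evaluate_generated_heuristic(func_code, items, capacity):
    namespace = {}
    exec(func_code, namespace)  # compile the LLM-generated function (sandboxing omitted)
    score = namespace["score"]
    bins = []  # remaining capacity of each open bin
    for item in items:
        feasible = [i for i, r in enumerate(bins) if r >= item]
        if feasible:
            # Place the item in the bin the generated heuristic rates highest.
            best = max(feasible, key=lambda i: score(bins[i], item))
            bins[best] -= item
        else:
            bins.append(capacity - item)
    return len(bins)  # fitness: fewer bins is better

# One LLM-generated candidate (this one happens to encode best-fit):
candidate = "def score(remaining, item):\n    return -(remaining - item)\n"
print(evaluate_generated_heuristic(candidate, [4, 8, 1, 4, 2, 1], capacity=10))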
Similarly, EoH [66, 67, 68] uses both code and descriptions of the algorithm to generate new function components. EoH employs a guided local search (GLS) algorithm [14, 69], and the LLM generates an evaluation function embedded in GLS. However, the search guidance in EoH remains fixed throughout the iterative process.

Recent advancements have introduced dynamic search guidance during the iteration process. Ye et al. [70] proposed a system with two LLMs: one for generation and one for reflection. The reflection LLM analyzes short-term and long-term search results and provides updated search guidance to the generation LLM. Similarly, Sun et al. [71] developed a system of three LLM-based agents—Advisor, Coder, and Evaluator—that work together to analyze solutions and guide further searches, representing early steps toward LLMs guiding other LLMs.

In addition to generating components for heuristics, LLMs have been applied to meta-heuristic component design. Huang et al. [72] used LLM-based crossover and mutation operators for multi-objective optimization. Huang et al. [73] extended this work to evolutionary multitasking algorithms, where LLMs were tasked with designing knowledge transfer models. In other studies, LLMs have been employed to design surrogate models for expensive optimization tasks [74] and learning rate schedules in evolution strategies [75, 76].

Designing complete heuristics or meta-heuristics is more challenging than designing components alone, and research in this area remains limited. Yu et al. [49] proposed a method for generating complete heuristics to improve the robustness of complex networks [77, 78]. Stein et al. [79] developed a method for generating meta-heuristics for continuous optimization problems. This direction needs further study, since the ultimate goal is to let LLMs design complete algorithms automatically.

Although LLMs have been used to solve various optimization problems and to design optimization algorithms, this research direction is just beginning, and most work uses only small-scale problems to validate the ability of LLMs in optimization. Undeniably, LLMs combined with EAs provide a promising new avenue for automated optimization. Thus, we summarize the most common and valuable designs in existing work and propose a general LLM-EA automated optimization paradigm in the following subsection, which can help researchers better understand current research and guide future work.

4.4. LLM-EA Automated Optimization Paradigm

Through the above review, we can see that, to enhance LLMs' capabilities in solving optimization problems, researchers have increasingly integrated EAs with LLMs. This synergy enables a more efficient and creative searching process, with LLMs generating high-quality candidates and EAs optimizing these candidates through iterative refinement. This complementary relationship sustains continuous creativity, allowing for more automated optimization algorithm design. Building on this foundation, we propose an LLM-EA automated optimization paradigm in which LLMs and EAs work together to generate both solutions and heuristics, providing a flexible and powerful framework for automated optimization. Next, we first introduce each component of this paradigm and then present the whole paradigm.
Optimization problems aim to find the optimal candidate $x^*$ that maximizes (or minimizes) an objective function $f(x)$. For solution search, the objective is to find the $x^*$ with maximum fitness, where $x \in S_{\text{solution}}$ and $S_{\text{solution}}$ is the solution search space:

$$x^* = \arg\max_{x \in S_{\text{solution}}} f_s(x) \qquad (1)$$

For heuristic search, the objective is to find the best heuristic $x^*$ for the optimization problem under consideration. The performance of $x^*$ is evaluated on a training set $D = \{d_1, d_2, \ldots, d_k\}$ of problem instances by aggregating as follows:

$$x^* = \arg\max_{x \in S_{\text{heuristic}}} A\big(f_h(x, d_1), f_h(x, d_2), \ldots, f_h(x, d_k)\big) \qquad (2)$$

where $S_{\text{heuristic}}$ is the heuristic search space and $A(\cdot)$ is an aggregation function (e.g., average or weighted sum) used to combine the fitness values across instances.

In the evolutionary process, at generation $t$, the population consists of $N$ candidates:

$$P(t) = \{x_1, x_2, \ldots, x_N\}, \quad x_i \in S_{\text{solution}} \text{ or } S_{\text{heuristic}}, \quad i = 1, 2, \ldots, N \qquad (3)$$

Each candidate in the population represents either a solution or a heuristic. In the paradigm, the evolutionary process includes three primary operators: selection, variation, and reflection.

• Selection Operator: The selection operator $O_{\text{select}}$ selects the candidates for the variation operator based on their fitness values:

$$P_{\text{parent}}(t) = O_{\text{select}}(P(t)) \qquad (4)$$

ensuring that candidates with higher fitness are more likely to contribute to the next generation.

• Variation Operator: The variation operator generates new individuals based on $P_{\text{parent}}(t)$. One or more variation operators can be designed with different emphases on exploration or exploitation. LLMs play a role in this process by generating offspring through the prompt $\mathcal{P}_{\text{variation}}$:

$$P_{\text{offspring}}(t) = O_{\text{variation}}(\text{LLM}_{\text{variation}}, \mathcal{P}_{\text{variation}}, P_{\text{parent}}(t)) \qquad (5)$$

where $\mathcal{P}_{\text{variation}}$ is defined as:

$$\mathcal{P}_{\text{variation}} = \{D_{\text{problem}}, D_{\text{task}}, x_i, \ldots, x_j\} \qquad (6)$$

Here, $D_{\text{problem}}$ is the problem description, $D_{\text{task}}$ is the task instruction, which includes the variation logic, and $x_i, \ldots, x_j$ are the candidates serving as example data.

• Reflective Operator: An optional reflective mechanism adjusts variation operators dynamically. The LLM refines the variation strategies based on the performance of previous generations:

$$\mathcal{P}_{\text{variation}} = O_{\text{reflective}}(\text{LLM}_{\text{reflective}}, \mathcal{P}_{\text{reflective}}) \qquad (7)$$

The LLM, acting as a searching operator, is guided by prompts. Whether variation prompts or reflective prompts are used, they share a common pattern that includes a problem description, task instructions, and example data. The problem description is the detailed explanation or context of the problem that needs to be solved. The task instructions are the explicit directives given to the LLM, specifying the task or objective that the model needs to accomplish. The example data consists of real examples provided as input or reference, helping the LLM understand the task requirements or expected output format.

For variation prompts, the problem description outlines the optimization problem to be addressed. The task instructions can vary from simple requirements to detailed variation logic, specifying how the LLM should approach generation. The example data consists of previously evaluated candidates with fitness scores, guiding the LLM in creating improved solutions or heuristics according to the task instructions. An example of a variation prompt is presented in Figure 3(a).
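A minimal sketch of assembling such a variation prompt, in the spirit of Figure 3(a), is given below; the wording of the problem description and task instruction is an illustrative assumption:

def build_variation_prompt(problem_desc, task_instruction, examples):
    # P_variation = {D_problem, D_task, x_i, ..., x_j}, serialized as plain text.
    example_lines = [f"route: {route}   length: {length}"
                     for route, length in examples]
    return "\n".join([problem_desc, *example_lines, task_instruction])

prompt = build_variation_prompt(
    problem_desc="Traveling salesman problem over 5 cities with coordinates ...",
    task_instruction=("Give me a new route that visits every city exactly once "
                      "and is shorter than all the routes above."),
    examples=[([0, 2, 1, 4, 3], 42.1), ([0, 1, 2, 3, 4], 45.7)],
)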
In reflective prompts, the problem description shifts to optimizing the existing task instruction of the variation prompt, focusing on improving the current prompt itself. The task instructions then guide how to analyze and compare the available example data to generate new task instructions for the variation prompt. This example data can either consist of macro-level statistical information, representing the overall characteristics of multiple candidates, or of specific, feature-rich candidates or existing variation prompts. It serves as a reference for the LLM to generate more optimized instructions. An example of a reflective prompt is presented in Figure 3(b).

Figure 3: (a) The variation prompt for the TSP. The problem description introduces the TSP and presents data for an instance of the problem. The example data provides several routes along with their respective lengths. The task instruction specifies the requirement to provide a route that is shorter than all the routes given in the example data. (b) The reflective prompt, aimed at refining or optimizing the task instruction for the TSP.

Algorithm 1 LLM-EA for Automated Optimization
Input: Fitness function fs or fh, number of generations T
Output: Best candidate found x*
1: Initialize population P(1) = {x1, x2, ..., xN}, where xi ∈ S_solution or S_heuristic, i = 1, 2, ..., N
2: for each candidate xi in P(1) do
3:    if Solution Search then
4:        Evaluate the fitness using fs(xi)
5:    else if Heuristic Search then
6:        Evaluate the fitness by aggregating across the training set: f(xi) = A({fh(xi, d1), fh(xi, d2), ..., fh(xi, dk)})
7:    end if
8: end for
9: for t = 1 to T do
10:    Apply the selection operator to form the parent population Pparent(t) using Eq. (4)
11:    for each group of candidates (xi, ..., xj) in Pparent(t) do
12:        Generate prompt Pvariation = {Dproblem, Dtask, xi, ..., xj}
13:        Apply the variation operator using Eq. (5) to obtain Poffspring(t)
14:    end for
15:    Evaluate the fitness of individuals in Poffspring(t)
16:    Apply survivor selection to obtain population P(t + 1) from Poffspring(t) and Pparent(t)
17:    Optional: Apply the reflective operator to refine variation operators using Eq. (7)
18: end for
19: Return best candidate x*

Explanation of the paradigm: Algorithm 1 gives the details of the LLM-EA automated optimization paradigm. It begins by initializing a population of candidates and iteratively evaluates their fitness. Selection and variation operations are applied, where the LLM generates new candidates based on prompts. An optional reflective mechanism adjusts the variation operators based on feedback from previous generations, enhancing the optimization process over time. The algorithm continues for a set number of generations, ultimately returning the best candidate.
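The following Python sketch mirrors Algorithm 1 for the solution-search case; llm (which maps a variation prompt to a new candidate) and fitness are assumed placeholders, and parsing of the LLM output into a valid candidate is omitted for brevity:

import random

def llm_ea_optimize(llm, fitness, init_population, problem_desc, task_instr,
                    generations=20, group_size=2):
    # Population of (candidate, fitness) pairs, evaluated up front (lines 1-8).
    population = [(x, fitness(x)) for x in init_population]
    n = len(population)
    for _ in range(generations):  # line 9
        # Selection (Eq. 4): the fitter half becomes the parent pool.
        population.sort(key=lambda xf: xf[1], reverse=True)
        parents = population[: max(2, n // 2)]
        offspring = []
        while len(offspring) < n:  # lines 11-14
            group = random.sample(parents, group_size)
            # Variation (Eq. 5-6): build P_variation and query the LLM.
            prompt = "\n".join(
                [problem_desc]
                + [f"candidate: {x}   fitness: {f}" for x, f in group]
                + [task_instr])
            child = llm(prompt)
            offspring.append((child, fitness(child)))  # line 15
        # Survivor selection (line 16): best n of parents + offspring.
        population = sorted(parents + offspring,
                            key=lambda xf: xf[1], reverse=True)[:n]
        # A reflective operator (Eq. 7) could rewrite task_instr here (line 17).
    return max(population, key=lambda xf: xf[1])[0]  # line 19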
5. In-Depth Analysis of Key Modules in the LLM-EA Automated Optimization Paradigm

Building on the LLM-EA automated optimization paradigm, this section conducts a comprehensive analysis from two perspectives: prompt engineering for LLMs and the evolutionary process for iterative search. By integrating these two aspects, we provide a multidimensional exploration of the LLM-EA automated optimization paradigm.

Prompts play a fundamental role in this paradigm by guiding LLMs through optimization tasks. The structure of these prompts—consisting of problem descriptions, task instructions, and example data—determines how effectively the LLM participates in tasks such as crossover, mutation, and reflective optimization. Prompts not only provide the LLM with the necessary reference points for generating new candidates, but also embed the logic of the evolutionary operators, ensuring smooth integration with the optimization process.

In the evolutionary process, we focus on three critical components: individual representation [80, 81], variation operators [82], and fitness evaluation [83, 84]. Individual representation shapes how candidates are structured and determines the search space. Variation operators, such as mutation and crossover, guide the exploration of the search space. Fitness evaluation drives the process by measuring how well the generated candidates meet the optimization objectives. The following subsections provide detailed explanations of how these components are systematically integrated into the prompt, ensuring that the LLM maximizes its effectiveness in the optimization task. Our analysis reveals how prompts dynamically influence each phase of the evolutionary process, offering deeper insights into the synergistic relationship between LLMs and EAs in automated optimization.

5.1. Individual Representation for Heuristics

Historically, the representation of solutions in optimization has been purely numerical, particularly in the case of continuous and combinatorial optimization problems, where candidates are expressed as vectors or arrays. While this method remains effective for many tasks, the advent of LLMs introduces new possibilities for representing heuristics. These representations expand beyond simple numerical encoding to incorporate natural language, pseudo-code, and even executable code. This shift allows LLMs to play a more creative role in generating novel problem-solving strategies. After analyzing current research, we define a novel classification of heuristic representation that extends traditional solution encoding and differentiates between three main types: Code-Centric Representation, Hybrid Representation, and Augmented Representation, each tailored to different levels of complexity in optimization problems.

• Code-Centric Representation: In this form, the heuristic is represented solely as executable code. For instance, FunSearch [65] uses LLMs to generate small, self-contained code snippets that are directly applied to optimization problems. The LLM evolves the code itself, which is designed to perform specific tasks or calculations without the need for external explanations. While this approach is computationally efficient, it lacks interpretability, as the generated code does not come with any accompanying documentation or reasoning. This method is better suited for well-defined problems where efficiency is prioritized over transparency.

• Hybrid Representation: This method blends code with natural language descriptions. In the EoH [66] framework, LLMs not only generate executable code but also provide a natural language explanation of the code's logic and intended purpose, as illustrated in Figure 4. This combination bridges the gap between machine-generated heuristics and human-readable explanations. By co-evolving code and descriptions, this approach enhances both performance and interpretability, making it suitable for more complex tasks where understanding the reasoning behind the code is crucial.

Figure 4: An individual representation of EoH [66], where the heuristic is expressed in both natural language and executable code. The natural language description explains how the heuristic calculates scores for each bin, considering factors such as remaining capacity, bin index, and penalties for large differences. The code snippet implements this logic. A fitness score of 0.0143 reflects the performance of the generated heuristic in the optimization task.

• Augmented Representation: This extends beyond the previous representations by incorporating executable code, natural language descriptions, and domain-specific expert knowledge into the individual representation. For example, unlike FunSearch and EoH, which represent code snippets or code paired with explanations, AutoRNet [49] enhances the representation by embedding higher-level concepts from network science, such as high-degree nodes, low-degree nodes, critical nodes, and network connectivity. This enriched representation allows the LLM to contextualize the code within a broader domain-specific framework, facilitating a deeper understanding of the problem. By incorporating expert knowledge, the LLM is not merely working with logic and procedures but is equipped with the conceptual background to generate more advanced and applicable algorithms. Augmented Representation ensures that the generated heuristics can address complex optimization problems with a higher degree of relevance and adaptability.
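For instance, a hybrid individual can be held as a simple record that couples the natural language description with the code string and its measured fitness; the fields and the example heuristic below are illustrative, not EoH's actual data structure:

# One hybrid individual: description + executable code + fitness (cf. Figure 4).
individual = {
    "description": ("Score each feasible bin by its remaining capacity after "
                    "placement, penalizing placements that leave a small "
                    "unusable gap."),
    "code": ("def score(remaining, item):\n"
             "    gap = remaining - item\n"
             "    return -gap - (0.5 if 0 < gap < 2 else 0.0)\n"),
    "fitness": None,  # filled in after evaluation on the training instances
}

Variation operators can then mutate the description, the code, or both, while fitness evaluation only ever executes the code field.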
5.2. LLM-based Variation Operators

Traditional EAs rely on predefined operators such as mutation and crossover, which require detailed step-by-step programming and domain-specific expertise. With the advent of LLMs, the role of these operators has evolved, enabling more flexible and dynamic approaches to solution generation and heuristic manipulation. We identify three key advantages that LLMs bring to EAs:

1. High-Level Instructions Remove the Need for Step-by-Step Programming. Traditionally, variation operators require precise, step-by-step programming to define how solutions are selected, combined, and modified. LLMs eliminate this need by interpreting high-level task instructions written in natural language, enabling flexible solution generation. For instance, the LMEA [57] framework is illustrated in Figure 5(a), where LLMs are given general directives for tasks like parent selection and mutation, allowing them to autonomously generate solutions based on these instructions without needing detailed programming. This approach reduces reliance on domain-specific expertise and enables more flexible solution exploration.

2. Advanced Manipulation of Heuristics via Natural Language. Heuristics, unlike numerical solutions, are complex algorithms or pieces of code. LLMs excel in applying variation operators to these heuristics by using their natural language understanding to combine, refine, and adjust logical structures. For example, in the EoH [66] framework, five prompt strategies (E1, E2, E3, M1, and M2) are designed and categorized into two groups: Exploration and Modification. Each strategy uses prompts to guide the LLM with a different emphasis in evolving heuristics based on current population performance and heuristic structure. For example, the E2 strategy, detailed in Figure 5(b), puts emphasis on designing a new heuristic different from the given ones.

Figure 5: (a) An example of the prompt constructed when utilizing LMEA to solve TSPs. The evolutionary operator is presented as natural language in the task instructions. (b) A prompt of EoH using the E2 strategy to design a new heuristic.

3. Incorporation of Domain-Specific Knowledge into Variation Operators. A significant advantage of LLM-based variation operators is their ability to integrate expert domain knowledge into the evolutionary process, as demonstrated in AutoRNet [49] through its Network Optimization Strategies (NOS). By embedding specialized knowledge from fields like network science (e.g., degree distribution, path characteristics, clustering coefficient, centrality measures, and community structure) into the variation operations, LLMs can guide mutation and crossover with insights specific to the problem domain. This allows for more sophisticated and effective heuristics that address complex, domain-specific optimization challenges. For example, in network optimization, AutoRNet uses domain knowledge to adaptively modify network structures, ensuring that the generated heuristics are deeply informed by network science principles. This integration of expert knowledge allows LLMs to generate heuristics that are not only generalizable but also highly specialized, providing a new layer of flexibility and precision in the evolutionary process.
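A sketch of such an LLM-based variation operator is shown below: the operator's behavior is specified entirely in natural language task instructions rather than hand-coded logic, loosely following the LMEA style. Here llm is an assumed callable and the instruction wording is illustrative:

def llm_variation(llm, problem_desc, parents, mode="crossover"):
    if mode == "crossover":
        task = ("Act as a crossover operator: combine the parent solutions "
                "below into one child solution that inherits traits from both.")
    else:
        task = ("Act as a mutation operator: make a small random change to "
                "the solution below while keeping it feasible.")
    prompt = "\n".join([problem_desc, task]
                       + [f"parent: {p}" for p in parents])
    return llm(prompt)  # the LLM's reply is the new candidate

Swapping operators then amounts to editing a sentence, and domain knowledge (as in AutoRNet's NOS) can be injected by extending the task text.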
Beyond generating solutions or heuristics, LLMs also play a pivotal role in optimizing the variation operators themselves. ReEvo [70] introduces a novel reflective mechanism where LLMs evaluate and refine the variation operators by analyzing the performance of previously generated heuristics. Unlike traditional EAs that rely on static operators, ReEvo enables LLMs to reflect on both short-term and long-term performance data. This allows the LLMs to generate adaptive mutation and crossover strategies, leading to more effective exploration of the search space.

• Short-term Reflection: LLMs assess recent individuals, identifying immediate changes needed in mutation or crossover operations. This dynamic response helps the evolutionary process adapt quickly to promising search directions.

• Long-term Reflection: LLMs evaluate broader trends in the performance of heuristics over multiple generations, allowing for deeper adjustments to the evolutionary strategy. This ensures that the operators evolve alongside the heuristics, leading to more robust solutions.

This reflective feedback loop enables LLM-driven optimization of the search strategy itself, moving beyond simple heuristic generation to a more dynamic, self-improving evolutionary process.

LLMs as variation operators bring two critical innovations: the ability to interpret high-level instructions, eliminating the need for step-by-step programming, and the capacity for sophisticated heuristic manipulation through natural language. When coupled with reflective optimization strategies like those in ReEvo, LLMs offer a dynamic, self-improving approach to EAs, pushing the boundaries of what traditional operators can achieve.

5.3. Fitness Evaluation in Heuristic Optimization

The quality of solutions for optimization problems can be evaluated directly by the objective function. In contrast, heuristics operate at a higher level of abstraction, as they represent strategies for generating solutions. Therefore, evaluating heuristics requires a mapping from the heuristic space to the solution space, followed by the application of fitness evaluation. This requires a more flexible and generalizable fitness function capable of capturing performance across diverse scenarios.
To address this challenge, we summarize two primary approaches:

• Adaptive fitness evaluation dynamically adjusts the criteria for assessing heuristic performance as the optimization progresses. It allows for broader exploration early in the process and more focused refinement as the search converges. AutoRNet [49] designs an adaptive fitness function (AFF) to dynamically adjust constraints during the evolutionary process. Initially, constraints on degree distribution are relaxed, allowing for broader exploration of the heuristic search space. As the optimization progresses, these constraints are progressively tightened, promoting convergence toward more optimal solutions while maintaining diversity within the population. This progressive tightening ensures that the search space is thoroughly explored while gradually refining the candidate heuristics to meet increasingly stringent requirements.

• Benchmark-based evaluation ensures that heuristics generalize across multiple problem instances by testing them in a variety of scenarios, reducing the risk of overfitting to a specific instance and ensuring that the heuristic performs well in different contexts. LLaMEA [79] leverages benchmark-based fitness evaluation, utilizing platforms like IOHexperimenter to systematically assess the performance of generated metaheuristics. LLaMEA evaluates algorithms across a wide range of benchmark functions, providing a robust and reproducible environment for fitness assessment. This evaluation method promotes fairness and consistency by comparing new algorithms to well-established state-of-the-art benchmarks.

While adaptive fitness evaluation and benchmark-based methods effectively address the generalization challenge, some problems still pose significant computational challenges, particularly when fitness evaluations are time-consuming or when heuristics generate solutions across multiple problem instances. In these cases, surrogate models provide a crucial solution. Traditional surrogate models, usually based on Gaussian Processes and Neural Networks [85], have long been used in EAs. However, they come with their own set of limitations, such as the need for iterative training and updating as new data becomes available. This adds additional computational overhead, potentially diminishing their efficiency in real-time optimization tasks.

A novel method, as proposed in recent research, is the use of LLMs as surrogate models [74]. LLMs, with their powerful inference capabilities, offer a unique approach by eliminating the need for iterative training. LLMs can act as surrogates by classifying solutions as "good" or "bad" based on prior performance and approximating the fitness values of new solutions based on the patterns identified in historical data. This method not only reduces the computational cost but also speeds up the optimization process, enabling the evaluation of complex problems such as network robustness without requiring full-scale evaluations for every candidate solution.

In summary, the fitness evaluation process for heuristics presents challenges due to the need for generalization and computational efficiency. The combined use of adaptive fitness evaluation, benchmark-based evaluation, and LLMs as surrogate models addresses these challenges by offering flexible, scalable, and efficient methods for fitness evaluation. These approaches ensure not only that heuristics are evaluated accurately across multiple instances, but also that this is done at reduced computational cost.
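A minimal sketch of the heuristic-level fitness evaluation described above, following Eq. (2), might look as follows; solve (which applies a heuristic to one instance) and objective are assumed placeholders, and the aggregation function is taken to be the mean:

from statistics import mean

def heuristic_fitness(heuristic, instances, solve, objective, aggregate=mean):
    # Map the heuristic into the solution space on every training instance,
    # score each resulting solution, then aggregate across instances (Eq. 2).
    scores = [objective(solve(heuristic, inst), inst) for inst in instances]
    return aggregate(scores)

Benchmark-based evaluation then corresponds to drawing instances from a benchmark suite, and a surrogate model would replace the inner objective call with a cheap approximation.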
6. Future Research Directions

As LLM-EA automated optimization approaches continue to evolve, there are several promising areas of research that can enhance their capabilities. This section outlines four critical directions that could drive future advancements in the field.

6.1. Enhancing Explainability and Reasoning Capabilities

One of the key challenges in combining LLMs with EAs is the lack of transparency in the decision-making process behind LLM-generated heuristics. Explainable AI (XAI) [86, 87, 88] is essential to allow researchers and practitioners to understand why specific optimization strategies are generated and how they contribute to robust solutions. Explainability not only improves trust in AI systems but also provides a foundation for diagnosing errors and refining heuristics.

Furthermore, improving the reasoning capabilities of LLMs is crucial for developing more effective optimization heuristics. Recent advancements like the Self-Taught Reasoner (STaR) [89, 90, 91] highlight the potential of iterative reasoning to refine outputs over multiple steps. STaR improves the accuracy of LLM-generated solutions by enabling the model to reason through a problem progressively rather than providing a single-shot response. Incorporating such reasoning mechanisms into LLM-EA systems can lead to more sophisticated and nuanced optimization strategies.

6.2. Integration of Domain Knowledge

While LLMs are trained on vast amounts of data, their general knowledge may not always be sufficient to solve domain-specific optimization problems [92]. To address this, integrating domain-specific knowledge can significantly enhance the quality and relevance of generated heuristics. Retrieval-Augmented Generation (RAG) [93, 94, 95] provides a promising approach to this challenge by combining LLMs with external knowledge sources. By retrieving relevant domain-specific information from large datasets or expert systems, LLMs can generate more specialized and effective heuristics for particular fields such as logistics, network design, or healthcare optimization.

In addition, long-memory models [96, 97] can play a crucial role in maintaining domain-specific context over extended problem-solving processes. These models enable LLMs to retain and recall relevant information from previous interactions, allowing for more coherent and context-aware heuristic generation over time. The ability to leverage both short-term and long-term knowledge will be vital in addressing complex, multi-stage optimization problems.

6.3. Optimization of Evaluation and Benchmarking Platforms

To ensure that LLM-generated heuristics are robust and widely applicable, there is a need for unified evaluation platforms that can consolidate training data from a wide variety of optimization problems. Such platforms would enhance the generalization capabilities of the generated heuristics by exposing them to diverse problem sets. The broad range of training data available through these platforms can help ensure that LLM-EA systems do not overfit to a specific problem domain, thereby improving their versatility and applicability across multiple domains.

In addition, surrogate models [74, 98, 99] can be integrated into these platforms to speed up the evaluation process. Surrogate models approximate the fitness function using historical data, which reduces the computational cost of evaluating large-scale optimization problems. This allows for faster heuristic testing and validation with limited compromise to the accuracy of results.
Additionally, these platforms can include benchmarking systems that provide a standardized way to compare the performance of LLM-EA-generated heuristics against established optimization methods, fostering greater transparency and enabling further improvements through iterative development.

6.4. Scalability of LLM-EA Systems

As optimization problems increase in complexity and scale, ensuring the scalability of LLM-EA systems becomes a critical challenge. Distributed computing and model compression offer promising solutions to address these challenges.

Distributed computing [100, 101] enables the parallel execution of tasks by distributing the computational load across multiple machines. This approach is particularly beneficial in the context of EAs, where large populations of candidate solutions need to be evaluated simultaneously. By leveraging distributed computing, different stages of the evolutionary process, such as selection, mutation, and crossover, can be run concurrently, reducing the overall runtime. Similarly, LLM inference tasks, such as generating new heuristics or solutions, can be distributed across multiple nodes, accelerating the optimization process. Distributed systems therefore provide the necessary scalability for applying LLM-EA systems to large-scale optimization problems.

Model compression [102, 103] techniques further enhance scalability by reducing the size and computational complexity of LLMs. Methods such as pruning, quantization, and knowledge distillation allow LLMs to maintain high performance while significantly reducing their memory footprint and inference times. This is particularly valuable when LLMs are repeatedly queried during the evolutionary process. Compressed models not only run more efficiently but also reduce the energy consumption required for large-scale optimization, making LLM-EA systems more feasible for real-world applications where computational resources are limited.

7. Conclusion

In this paper, we highlight the significant potential of the LLM-EA framework to transform the field of automated optimization, providing a new avenue for fully automated optimization. We began by tracing the evolution of heuristic approaches, establishing the need for more adaptive and automated solutions, followed by a comprehensive review of existing research on applying LLMs to optimization. By identifying the most common and valuable parts of current research, we propose a novel paradigm that integrates LLMs and EAs to advance automated optimization.

LLMs, with their robust generative and reasoning capabilities, play a dual role in our proposed paradigm as both heuristic designers and solution generators. By combining these strengths with the iterative search and refinement processes of EAs, the paradigm enables the automated generation of high-quality heuristics and solutions with minimal manual intervention. We then provide a thorough analysis of the novel methodologies for individual representation, variation operators, and fitness evaluation within the LLM-EA paradigm. Our review and the proposed paradigm lay a strong foundation for future research into the capabilities of LLMs and EAs, opening up new avenues for both academic inquiry and practical applications, with the potential to reshape the landscape of optimization methodologies in a wide range of fields.
In identifying future directions, we addressed ongoing challenges such as improving the transparency and explainability of LLM-generated heuristics, enhancing generalization to broader problem spaces, and optimizing computational efficiency. Additionally, we pointed to the integration of domain-specific knowledge and the development of scalable benchmarking platforms to further refine the efficacy and reliability of LLM-EA systems.

References

[1] C. A. Floudas, P. M. Pardalos (Eds.), Encyclopedia of Optimization, 2nd Edition, Springer, New York, NY, 2009.
[2] M. Hjeij, A. Vilks, A brief history of heuristics: how did research on heuristics evolve?, Humanities & Social Sciences Communications 10 (2023) 64.
[3] P. M. Pardalos, M. G. Resende (Eds.), Handbook of Metaheuristics, Springer, Boston, MA, 2003.
[4] R. Martí, M. Sevaux, K. Sörensen, 50 years of metaheuristics, European Journal of Operational Research (2024).
[5] J. H. Holland, Adaptation in Natural and Artificial Systems, MIT Press, Cambridge, MA, 1975.
[6] S. Kirkpatrick, C. D. Gelatt, M. P. Vecchi, Optimization by simulated annealing, Science 220 (1983) 671–680.
[7] E. K. Burke, M. R. Hyde, G. Kendall, G. Ochoa, E. Ozcan, J. R. Woodward, A comprehensive analysis of hyper-heuristics, Journal of the Operational Research Society 61 (2010) 1697–1724.
[8] E. K. Burke, M. Gendreau, M. Hyde, G. Kendall, G. Ochoa, R. Qu, Hyper-heuristics: a survey of the state of the art, Journal of the Operational Research Society 64 (12) (2013) 1695–1724.
[9] H. Naveed, A. U. Khan, S. Qiu, M. Saqib, S. Anwar, M. Usman, N. Akhtar, N. Barnes, A. Mian, A comprehensive overview of large language models, arXiv:2307.06435 (2023).
[10] K. A. De Jong, Evolutionary Computation: A Unified Approach, MIT Press, Cambridge, MA, 2006.
[11] F. Glover, G. Gutin, A. Yeo, A. Zverovich, Construction heuristics for the asymmetric TSP, European Journal of Operational Research 129 (3) (2001) 555–568.
[12] E. Aarts, J. K. Lenstra, Local Search in Combinatorial Optimization, Princeton University Press, 2003.
[13] C. Voudouris, E. Tsang, Guided local search, in: Handbook of Metaheuristics, 1999, pp. 185–218.
[14] A. Alsheddy, C. Voudouris, E. P. K. Tsang, A. Alhindi, Guided local search, in: R. M. et al. (Ed.), Handbook of Heuristics, 2016.
[15] G. A. Croes, A method for solving traveling salesman problems, Operations Research 6 (6) (1958) 791–812.
[16] Y. S. Ong, M. H. Lim, X. S. Chen, Research frontier: memetic computation-past, present & future, IEEE Computational Intelligence Magazine 5 (2) (2010) 24–36.
[17] J. Kennedy, R. Eberhart, Particle swarm optimization, in: Proceedings of International Conference on Neural Networks, Vol. 4, 1995, pp. 1942–1948.
[18] M. Dorigo, V. Maniezzo, A. Colorni, Ant system: optimization by a colony of cooperating agents, IEEE Transactions on Systems, Man, and Cybernetics, Part B 26 (1) (1996) 29–41.
[19] Q. Zhao, Q. Duan, B. Yan, S. Cheng, Y. Shi, Automated design of metaheuristic algorithms: a survey, arXiv:2303.06532v3 (2024).
[20] K. Tang, X. Yao, Learn to optimize – a brief overview, National Science Review 11 (2024) nwae132.
[21] J. Devlin, M.-W. Chang, K. Lee, K. Toutanova, BERT: pre-training of deep bidirectional transformers for language understanding, in: Proceedings of the 2019 Annual Conference of the North American Chapter of the Association for Computational Linguistics, 2019, pp. 4171–4186.
[22] I. D. Mienye, T. G. Swart, G.
Obaido, Recurrent neural networks: a comprehensive review of architectures, variants, and applications, MDPI Information 15 (2024). [23] S. Hochreiter, J. Schmidhuber, Long short-term memory, Neural Computation 9 (8) (1997) 1735–1780. [24] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, I. Polosukhin, Attention is all you need, in: Proceedings of the 31st Conference on Neural Information Processing Systems (NIPS 2017), Vol. 30, 2017. [25] S. Schulhoff, M. Ilie, N. Balepur, K. Kahadze, A. Liu, C. Si, Y. Li, A. Gupta, H. Han, S. Schulhoff, P. S. Dulepet, S. Vidyadhara, D. Ki, S. Agrawal, C. Pham, G. Kroiz, F. Li, H. Tao, A. Srivastava, H. D. Costa, S. Gupta, M. L. Rogers, I. Goncearenco, G. Sarli, I. Galynker, D. Peskoff, M. Carpuat, J. White, S. Anadkat, A. Hoyle, P. Resnik, The prompt report: a systematic survey of prompting techniques, arXiv:2406.06608 (2024). [26] T. Kojima, S. S. Gu, M. Reid, Y. Matsuo, Y. Iwasawa, Large language models are zero-shot reasoners, arXiv:2205.11916 (2022). [27] T. B. Brown, B. Mann, N. Ryder, M. Subbiah, J. Kaplan, et al., Language models are few-shot learners, arXiv:2005.14165 (2020). [28] J. Wei, X. Wang, D. Schuurmans, M. Bosma, B. Ichter, F. Xia, E. H. Chi, Q. V. Le, D. Zhou, Chain-of-thought prompting elicits reasoning in large language models, arXiv:2201.11903 (2022). [29] J. Cheng, X. Liu, K. Zheng, P. Ke, H. Wang, Y. Dong, M. Huang, Black-box prompt optimization: aligning large language models without model training, in: Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics, 2024, pp. 3201–3219. [30] A. Sabbatella, A. Ponti, I. Giordani, A. Candelieri, F. Archetti, Prompt optimization in large language models, MDPI Mathematics 12 (6) (2024) 929. [31] Q. Guo, R. Wang, J. Guo, B. Li, K. Song, X. Tan, G. Liu, J. Bian, Y. Yang, Connecting large language models with evolutionary algorithms yields powerful prompt optimizers, in: Proceedings of the 12th International Conference on Learning Representations, 2024. [32] T. Sun, Y. Shao, H. Qian, X. Huang, X. Qiu, Black-box tuning for language-model-as-a-service, in: Proceedings of the 39th International Conference on Machine Learning, 2022. [33] A. Chen, D. M. Dohan, D. R. So, Evoprompting: language models for code-level neural architecture search, in: Proceedings of the 37th International Conference on Neural Information Processing Systems, Red Hook, NY, USA, 2024. [34] C. Fernando, D. S. Banarse, H. Michalewski, S. Osindero, T. Rocktäschel, PromptBreeder: self-referential self-improvement via prompt evolution, in: Proceedings of the 41st International Conference on Machine Learning, 2024. [35] A. AhmadiTeshnizi, W. Gao, M. Udell, OptiMUS: optimization modeling using MIP solvers and large language models, arXiv:2310.06116 (2023). [36] H. Chen, G. E. Constante-Flores, C. Li, Diagnosing infeasible optimization problems using large language models, arXiv:2308.12923 (2023). [37] W. Chao, J. Zhao, L. Jiao, L. Li, F. Liu, S. Yang, When large language models meet evolutionary algorithms, arXiv:2401.10510 (2024). [38] R. T. Lange, Y. Tian, Y. Tang, Evolution transformer: in-context evolutionary optimization, in: Proceedings of the Genetic and Evolutionary Computation Conference Companion (GECCO’24), Melbourne, Australia, 2024. [39] X. Li, K. Wu, Y. B. Li, X. Zhang, H. Wang, J. Liu, Pretrained optimization model for zero-shot black box optimization, in: Proceedings of the 38th Conference on Neural Information Processing Systems, 2024. [40] X. Wu, S. 
hao Wu, J. Wu, L. Feng, K. C. Tan, Evolutionary computation in the era of large language model: survey and roadmap, arXiv:2401.10034 (2024).
[41] X. Song, Y. Tian, R. T. Lange, C. Lee, Y. Tang, Y. Chen, Leverage foundation models for black-box optimization, in: Proceedings of the 41st International Conference on Machine Learning, Vol. 235 of PMLR, Vienna, Austria, 2024.
[42] S. Liu, S. Yu, Z. Lin, D. Pathak, D. Ramanan, Language models as black-box optimizers for vision-language models, arXiv:2309.05950v2 (2023).
[43] F. Jin, Y. Liu, Y. Tan, Zero-shot chain-of-thought reasoning guided by evolutionary algorithms in large language models, arXiv:2402.05376v1 (2024).
[44] H. Yang, K. Li, InstOptima: evolutionary multi-objective instruction optimization via large language model-based instruction operators, arXiv:2310.17630v1 (2023).
[45] Y. B. Li, K. Wu, SPELL: semantic prompt evolution based on a LLM, arXiv:2310.01260v1 (2023).
[46] P.-F. Guo, Y.-H. Chen, Y.-D. Tsai, S.-D. Lin, Towards optimizing with large language model, in: Workshop on Knowledge-Infused Learning Co-located with 30th ACM KDD Conference, Barcelona, Spain, 2024.
[47] J. Mao, D. Zou, L. Sheng, S. Liu, C. Gao, Y. Wang, Y. Li, Identify critical nodes in complex network with large language models, arXiv:2403.03962 (2024).
[48] S. Mo, K. Wu, Q. Gao, X. Teng, J. Liu, AutoSGNN: automatic propagation mechanism discovery for spectral graph neural networks, under review (2024).
[49] H. Yu, J. Liu, AutoRNet: automatically optimizing heuristics for robust network design via large language models, under review (2024).
[50] B. Huang, X. Wu, Y. Zhou, J. Wu, L. Feng, R. Cheng, K. C. Tan, Exploring the true potential: evaluating the black-box optimization capability of large language models, arXiv:2404.06290 (2024).
[51] H. Ghaemi, Z. Alizadehsani, A. Shahraki, J. M. Corchado, Transformers in source code generation: a comprehensive survey, Journal of Systems Architecture 153 (2024) 103193.
[52] H. Luo, J. Wu, J. Liu, M. F. Antwi-Afari, Large language model-based code generation for the control of construction assembly robots: a hierarchical generation approach, Developments in the Built Environment 19 (2024) 100488.
[53] E. Meyerson, M. J. Nelson, H. Bradley, A. Gaier, A. Moradi, A. K. Hoover, J. Lehman, Language model crossover: variation through few-shot prompting, arXiv:2302.12170 (2024).
[54] J. Lehman, J. Gordon, S. Jain, K. Ndousse, C. Yeh, K. O. Stanley, Evolution through large models, in: Handbook of Evolutionary Machine Learning, Springer, 2023, pp. 331–366.
[55] E. Hemberg, S. Moskal, U.-M. O'Reilly, Evolving code with a large language model, arXiv:2401.07102 (2024).
[56] C. Yang, X. Wang, Y. Lu, H. Liu, Q. V. Le, D. Zhou, X. Chen, Large language models as optimizers, in: Proceedings of the International Conference on Learning Representations (ICLR), 2024.
[57] S. Liu, C. Chen, X. Qu, K. Tang, Y.-S. Ong, Large language models as evolutionary optimizers, arXiv:2310.19046v3 (2024).
[58] F. Liu, X. Lin, Z. Wang, S. Yao, X. Tong, M. Yuan, Q. Zhang, Large language model for multi-objective evolutionary optimization, arXiv:2310.12541v1 (2023).
[59] Z. Wang, S. Liu, J. Chen, K. C. Tan, Large language model-aided evolutionary search for constrained multiobjective optimization, arXiv:2405.05767 (2024).
[60] R. T. Lange, Y. Tian, Y. Tang, Large language models as evolution strategies, in: Proceedings of the Genetic and Evolutionary Computation Conference Companion (GECCO'24), Melbourne, Australia, 2024.
[61] S. Brahmachary, S. M. Joshi, A.
Panda, K. Koneripalli, A. K. Sagotra, H. Patel, A. Sharma, A. D. Jagtap, K. Kalyanaraman, Large language model-based evolutionary optimizer: reasoning with elitism, arXiv:2403.02054 (2024).
[62] M. Pluhacek, A. Kazikova, T. Kadavy, A. Viktorin, R. Senkerik, Leveraging large language models for the generation of novel metaheuristic optimization algorithms, in: Proceedings of the Genetic and Evolutionary Computation Conference Companion (GECCO), Lisbon, Portugal, 2023.
[63] M. Pluhacek, J. Kovac, A. Viktorin, P. Janku, T. Kadavy, R. Senkerik, Using LLM for automatic evolvement of metaheuristics from swarm algorithm SOMA, in: Proceedings of the Genetic and Evolutionary Computation Conference Companion (GECCO'24), Melbourne, Australia, 2024.
[64] R. Zhong, Y. Xu, C. Zhang, J. Yu, Leveraging large language model to generate a novel metaheuristic algorithm with CRISPE framework, Cluster Computing (2024).
[65] B. Romera-Paredes, M. Barekatain, A. Novikov, M. Balog, M. P. Kumar, E. Dupont, F. J. R. Ruiz, J. S. Ellenberg, P. Wang, O. Fawzi, P. Kohli, A. Fawzi, Mathematical discoveries from program search with large language models, Nature 625 (2024) 468–475.
[66] F. Liu, X. Tong, M. Yuan, X. Lin, F. Luo, Z. Wang, Z. Lu, Q. Zhang, Evolution of heuristics: towards efficient automatic algorithm design using large language model, in: Proceedings of the International Conference on Machine Learning (ICML), 2024.
[67] F. Liu, X. Tong, M. Yuan, Q. Zhang, Algorithm evolution using large language model, arXiv:2311.15249v1 (2023).
[68] F. Liu, X. Tong, M. Yuan, X. Lin, F. Luo, Z. Wang, Z. Lu, Q. Zhang, An example of evolutionary computation + large language model beating human: design of efficient guided local search, arXiv:2401.02051v1 (2024).
[69] F. Arnold, K. Sörensen, Knowledge-guided local search for the vehicle routing problem, Computers and Operations Research 105 (2019) 32–46.
[70] H. Ye, J. Wang, Z. Cao, F. Berto, C. Hua, H. Kim, J. Park, G. Song, Large language models as hyper-heuristics for combinatorial optimization, arXiv:2402.01145v2 (2024).
[71] Y. Sun, X. Zhang, S. Huang, S. Cai, B. Zhang, K. Wei, AutoSAT: automatically optimize SAT solvers via large language models, arXiv:2402.10705 (2024).
[72] Y. Huang, S. Wu, W. Zhang, J. Wu, L. Feng, K. C. Tan, Autonomous multi-objective optimization using large language model, arXiv:2406.08987 (2024).
[73] Y. Huang, X. Lv, S. Wu, J. Wu, L. Feng, K. C. Tan, Advancing automated knowledge transfer in evolutionary multitasking via large language models, arXiv:2409.04270 (2024).
[74] H. Hao, X. Zhang, A. Zhou, Large language models as surrogate models in evolutionary algorithms: a preliminary study, Swarm and Evolutionary Computation 91 (2024) 101741.
[75] O. Kramer, Large language models for tuning evolution strategies, arXiv:2405.10999 (2024).
[76] L. L. Custode, F. Caraffini, A. Yaman, G. Iacca, An investigation on the use of large language models for hyperparameter tuning in evolutionary algorithms, in: Proceedings of the Genetic and Evolutionary Computation Conference Companion (GECCO'24), Melbourne, VIC, Australia, 2024.
[77] M. Zhou, J. Liu, A two-phase multi-objective evolutionary algorithm for enhancing the robustness of scale-free networks against multiple malicious attacks, IEEE Transactions on Cybernetics 47 (2) (2017) 539–552.
[78] M. Zhou, J. Liu, A memetic algorithm for enhancing the robustness of scale-free networks against malicious attacks, Physica A: Statistical Mechanics and its Applications 410 (2014) 131–143.
[79] N. van Stein, T.
Bäck, LLaMEA: a large language model evolutionary algorithm for automatically generating metaheuristics, arXiv:2405.20132 (2024). [80] W. E. Hart, N. Krasnogor, Using multiple representations in evolutionary algorithms, in: Proceedings of the Genetic and Evolutionary Computation Conference, 1998, pp. 359–366. [81] K. Deb, Representation, selection, and variation in genetic algorithms, Genetic Algorithms in Engineering and Computer Science (1997) 78–98. [82] W. M. Spears, Crossover or mutation?, Foundations of Genetic Algorithms 2 (1995) 221–237. [83] T. Jones, S. Forrest, Fitness distance correlation as a measure of problem difficulty for genetic algorithms, in: Proceedings of the 6th International Conference on Genetic Algorithms, 1995, pp. 184–192. [84] M. Mitchell, S. Forrest, J. H. Holland, The royal road for genetic algorithms: fitness landscapes and GA performance, in: Proceedings of the 1st European Conference on Artificial Life, 1992, pp. 245–254. [85] J. A. Garcia, H. Zhenli, Gaussian process regression + deep neural network autoencoder for probabilistic surrogate modeling in nonlinear mechanics of solids, arXiv:2407.10732 (2024). [86] M. T. Ribeiro, S. Singh, C. Guestrin, Why should I trust you? Explaining the predictions of any classifier, in: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2016, pp. 1135–1144. [87] M. T. Ribeiro, S. Singh, C. Guestrin, Lime: local interpretable model-agnostic explanations, in: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2016, pp. 1135–1144. [88] W. Samek, T. Wiegand, K.-R. Müller, Explainable artificial intelligence: understanding, visualizing and interpreting deep learning models, arXiv:1708.08296 (2017). [89] E. Zelikman, Y. Wu, J. Mu, N. D. Goodman, STaR: bootstrapping reasoning with reasoning, arXiv:2203.14465 (2022). [90] A. Hosseini, X. Yuan, N. Malkin, A. Courville, A. Sordoni, R. Agarwal, V-STaR: training verifiers for self-taught reasoners, arXiv:2403.09629 (2024). [91] E. Zelikman, G. Harik, Y. Shao, V. Jayasiri, N. Haber, N. D. Goodman, Quiet-STaR: language models can teach themselves to think before speaking, arXiv:2403.09629 (2024). [92] R. Zhang, F. Liu, X. Lin, Z. Wang, Z. Lu, Q. Zhang, Understanding the importance of evolutionary search in automated heuristic design with large language models, arXiv:2407.10873v1 (2024). [93] P. Lewis, E. Perez, A. Piktus, F. Petroni, V. Karpukhin, N. Goyal, H. Küttler, M. Lewis, W. tau Yih, T. Rocktäschel, S. Riedel, D. Kiela, Retrieval-augmented generation for knowledge-intensive NLP tasks, in: Proceedings of the 33th International Conference on Neural Infor- mation Processing Systems, 2020, pp. 9459–9474. [94] K. Guu, K. Lee, Z. Tung, P. Pasupat, M.-W. Chang, REALM: retrieval-augmented language model pre-training, in: Proceedings of the 37th International Conference on Machine Learning, 2020, pp. 3929–3938. [95] G. Izacard, E. Grave, Fusion-in-Decoder: a novel retrieval-augmented language model, in: Proceedings of the 2021 Annual Conference of the North American Chapter of the Association for Computational Linguistics, 2021, pp. 681–688. [96] M. Burtsev, A. V. Kurenkov, Memorizing transformers, arXiv:2010.06891 (2020). [97] J. W. Rae, A. Potapenko, S. M. Jayakumar, T. P. Lillicrap, Compressive transformers for long-range sequence modeling, in: Proceedings of the 32th International Conference on Neural Information Processing Systems, 2019, pp. 4694–4707. [98] L. Bliek, A. Guijt, R. 
Karlsson, S. Verwer, M. de Weerdt, Benchmarking surrogate-based optimisation algorithms on expensive black-box functions, Applied Soft Computing 147 (2023) 110744. [99] T. Rios, F. Lanfermann, S. Menzel, Large language model-assisted surrogate modelling for engineering optimization, in: Proceedings of 2024 IEEE Conference on Artificial Intelligence (CAI), 2024, pp. 796–803. [100] J. Dean, S. Ghemawat, MapReduce: simplified data processing on large clusters, Communications of the ACM 51 (1) (2008) 107–113. [101] Y. Tang, Y. Tian, R. Huang, Distributed learning with evolutionary algorithms, in: Proceedings of the Genetic and Evolutionary Computation Conference (GECCO), 2021, pp. 851–859. [102] S. Han, J. Pool, J. Tran, W. Dally, Deep compression: compressing deep neural networks with pruning, trained quantization and huffman coding, arXiv:1510.00149 (2015). [103] G. Hinton, O. Vinyals, J. Dean, Distilling the knowledge in a neural network, in: Proceedings of the NIPS Deep Learning and Representation Learning Workshop, 2015. 19
ai_researcher
3
Analysis_of_LLM-Based_Narrative_Generation_Using_the_Agent-Based_Simulation.pdf
Parrot: Efficient Serving of LLM-based Applications with Semantic Variable

Chaofan Lin1*, Zhenhua Han2, Chengruidong Zhang2, Yuqing Yang2, Fan Yang2, Chen Chen1*, Lili Qiu2
1Shanghai Jiao Tong University, 2Microsoft Research

Abstract

The rise of large language models (LLMs) has enabled LLM-based applications (a.k.a. AI agents or co-pilots), a new software paradigm that combines the strength of LLMs and conventional software. Diverse LLM applications from different tenants could design complex workflows using multiple LLM requests to accomplish one task. However, they have to use the over-simplified request-level API provided by today's public LLM services, losing essential application-level information. Public LLM services have to blindly optimize individual LLM requests, leading to sub-optimal end-to-end performance of LLM applications.

This paper introduces Parrot, an LLM service system that focuses on the end-to-end experience of LLM-based applications. Parrot proposes Semantic Variable, a unified abstraction to expose application-level knowledge to public LLM services. A Semantic Variable annotates an input/output variable in the prompt of a request, and creates the data pipeline when connecting multiple LLM requests, providing a natural way to program LLM applications. Exposing Semantic Variables to the public LLM service allows it to perform conventional data flow analysis to uncover the correlation across multiple LLM requests. This correlation opens a brand-new optimization space for the end-to-end performance of LLM-based applications. Extensive evaluations demonstrate that Parrot can achieve up to an order-of-magnitude improvement for popular and practical use cases of LLM applications.

1 Introduction

Large language models (LLMs) have demonstrated a remarkable language understanding capability [7, 41]. This enables a paradigm shift in application development. In this new paradigm, one or multiple application entities, known as AI agents or co-pilots, communicate with LLMs via natural language, known as "prompts", to accomplish a task collaboratively. For example, meeting applications like Microsoft Teams or Google Meet can summarize meeting discussions through LLMs [33]. Search engines like Google and Bing can be enhanced with chat ability through LLMs [14, 34]. It is believed such LLM-based applications will become the mainstream applications in the near future [13].

*This work was partially done during Chaofan Lin's internship, and Dr. Chen Chen's visit as a scholar, at Microsoft Research.

To accomplish a task, LLM-based applications typically require multiple rounds of conversation. The conversation, implemented through multiple API calls to the LLM, demonstrates complex workflow patterns. Figure 1 illustrates several popular conversation patterns. For example, a meeting summary application [8, 33] often divides a lengthy document into multiple shorter sections, each satisfying the length constraint of the LLM conversation, so that the sections can be summarized and combined into the final summary through the map-reduce or chaining summary patterns. Chat-based applications, e.g., Bing Copilot [34], call LLM APIs multiple times to generate answers based on user queries. Multiple agents, each representing a different role played by different LLM calls, can collaborate to achieve a task [22, 47, 54].
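To make these patterns concrete, the following is a minimal sketch of the chain-summary workflow as an application must express it today, against a generic request-level completion endpoint. The endpoint URL, the complete() helper, and the response field "text" are hypothetical stand-ins, not any specific provider's API.

    import requests

    API_URL = "https://llm.example.com/v1/completions"  # hypothetical endpoint

    def complete(prompt: str) -> str:
        """One request-level completion call; the service sees it in isolation."""
        resp = requests.post(API_URL, json={"prompt": prompt}, timeout=60)
        resp.raise_for_status()
        return resp.json()["text"]

    def chain_summarize(chunks: list[str]) -> str:
        """Chain-style summary (Figure 1b): each step depends on the previous
        one, so the client must wait for every response before issuing the
        next request, paying network and queuing overhead once per chunk."""
        summary = ""
        for chunk in chunks:
            prompt = (f"Summary of the document so far: {summary}\n"
                      f"Extend the summary with this section:\n{chunk}\nSummary:")
            summary = complete(prompt)  # one client round trip per chunk
        return summary

Note that the dependency between steps exists only in the client's loop; the service observes a sequence of unrelated completion calls.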
Public LLM service providers have to face diverse tenants and applications, each with different workflows and performance preferences. However, existing API design for LLM service provision is still request-centric. Public LLM services only observe tons of individual requests, without knowing any application-level information, e.g., which requests belong to the same application, how different requests are connected, or whether there are any similarities. The lost application-level information makes public LLM services blindly optimize the performance of individual requests, leading to sub-optimal end-to-end performance of LLM applications. In this paper, we observe there exist significant opportunities to improve the end-to-end experience of LLM applications by exploiting the application-level information, especially the correlation of multiple LLM requests.

Figure 1: The workflow of popular LLM-based applications. The final result requires multiple LLM requests. (a) Map-Reduce Summary; (b) Chain Summary; (c) LLM-Powered Search; (d) Multi-agent Coding.

First, multiple consecutive LLM requests may be dependent: the result of one request could be the direct input of the next request. Therefore, it is desirable to colocate those requests together and execute them consecutively on the LLM service side. However, unaware of their dependencies, these requests have to be executed interactively between the client side of LLM-based applications and the public LLM services. These clients, often located on the other end of the Internet, can only issue the second request after they receive the result of the first request. This unnecessarily incurs extra overhead of consecutive requests on network latency, as well as losing the opportunity of co-scheduling these consecutive requests (§3).

Second, LLM requests may have diverse scheduling preferences, even within a single application. For example, in Figure 1a, to reduce the end-to-end latency, the requests representing multiple Map tasks should be batched more aggressively to increase the throughput of the Map tasks; while the Reduce task, due to its scarcity, should be optimized for latency. Unfortunately, public LLM services cannot discriminate between the two types of tasks. As a result, the current practice is to blindly optimize the latency for individual requests, which might not be desirable for the end-to-end experience.

Third, there exists a high degree of commonality across LLM requests. Popular LLM applications (e.g., Bing Copilot [32], GPTs [42]) use a long system prompt, including task definitions, examples, and safety rules, to guide the behavior of LLM applications. The long system prompt is usually static and common for all users. As existing public LLM services treat each request individually, these common prefix prompts are provided repeatedly in each request, leading to a great waste of storage, computation, and memory bandwidth. Our analysis of a production LLM-based search engine shows that over 94% of tokens in the requests are repeated across different users.
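A repeated-token ratio of this kind can be estimated with a simple longest-common-prefix computation over tokenized requests. The sketch below is illustrative only; the tokenizer and the exact methodology behind the production measurement above are not described in the text.

    def common_prefix_len(a: list[int], b: list[int]) -> int:
        """Length of the shared token prefix of two requests."""
        n = 0
        for x, y in zip(a, b):
            if x != y:
                break
            n += 1
        return n

    def repeated_token_ratio(requests: list[list[int]]) -> float:
        """Fraction of tokens covered by the longest prefix shared with any
        earlier request: a rough proxy for a 'repeated tokens' statistic."""
        seen: list[list[int]] = []
        repeated = total = 0
        for toks in requests:
            best = max((common_prefix_len(toks, s) for s in seen), default=0)
            repeated += best
            total += len(toks)
            seen.append(toks)
        return repeated / total if total else 0.0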
Although we have seen some emerging engine-level techniques [25, 56, 63] proposed to optimize the above three cases, they all rely on certain application-level knowledge, which is lost in today's public LLM services. In a nutshell, due to the lack of understanding of the correlations of LLM requests, existing LLM services cannot leverage the three opportunities, leading to high end-to-end service latency and reduced throughput. Based on the above facts and insights, we introduce Parrot, an LLM service system that treats LLM applications as first-class citizens. Parrot retains most application-level information through a simple abstraction, the Semantic Variable, achieving a balance between increasing system complexity and bringing new information for optimization. A Semantic Variable is a text region in the prompt with a specific semantic purpose, such as a task instruction, a list of few-shot examples, an input, or an output. A Semantic Variable can also work as the data pipeline that connects multiple LLM requests. Semantic Variables naturally expose the information of prompt structures and correlations of requests to LLM services. By inspecting Semantic Variables at runtime, Parrot can perform conventional data flow analysis to derive the data dependency between LLM requests just-in-time.

Figure 2: The communication of consecutive LLM requests in multi-agent applications.

By analyzing the application-level information, Parrot's unified abstraction naturally enables joint optimizations, which bring better global optimality. The same data pipeline built by Semantic Variables can enable multiple optimizations simultaneously, including hiding the data pipeline's latency, objective deduction for better scheduling, and commonality analysis to perform de-duplication. Parrot's scheduling also takes different opportunities into account under the unified abstraction. Our extensive evaluation of Parrot on popular LLM-based applications, including production and open-source projects, shows Parrot achieves up to 11.7x speedup or 12x higher throughput compared with the state-of-the-art solutions.

2 Background

LLM Service. Most LLM services are provisioned as a conditional generation service via a text completion API: Completion(prompt: str) -> generated_text: str. The application client provides a text prompt, and the LLM service responds with the generated text. Behind the API, an LLM service provider runs one or multiple clusters of LLM inference engines. A request scheduler dispatches LLM requests from a queue to an LLM inference engine, which uses a set of GPUs to conduct the LLM inference.

LLM-based Applications. Figure 1 highlights the representative workflows of how LLMs are used in applications. Due to the limited context window of LLMs (e.g., 4,096 tokens for GPT-3.5-Turbo [40]), data analytics on long documents follows a map-reduce style (Figure 1a) or chain style (Figure 1b) workflow to generate the final results. It splits the long transcript into chunks, uses multiple requests to generate partial results for each chunk (the Map tasks), and combines them all at once (a Reduce task) or incrementally (the chain style) to generate the final result. A chat-based search engine, as in Figure 1c, may use consecutive LLM requests to discern query intention, enrich the query with supplementary information, retrieve related data, undergo a safety check, and finally generate the response. Multi-agent workflows, as in Figure 1d and Figure 2, are another type of workflow using multiple LLM requests, each with a designated role. Different roles work collaboratively on the same task; e.g., AutoGen [54] and MetaGPT [22] use roles like product manager, architect, engineer, and QA tester, which communicate with each other on a software project. Each role is supported by one or multiple LLM requests that act as the designated role to generate its responses.

Figure 3: The end-to-end latency breakdown of current LLM services. The source of the overhead comes from the network and queuing due to chatty interaction between LLM applications and LLM services, which is eliminated in our system, Parrot. (a) Latency Breakdown; (b) Current LLM Services; (c) Our system: Parrot.

3 Problems of Serving LLM Applications

Although the LLM's text completion API provides a flexible way of building LLM applications, it loses the application-level information to public LLM services, leading to the following challenges.

Excessive Overhead of Consecutive Requests. As demonstrated in Figure 1, LLM applications frequently make multiple LLM calls to complete a single task. Due to the request-centric design of existing public LLM services, which generate responses for each request individually, developers have to parse the output of an LLM request and compose the prompts for subsequent LLM requests on the client side. Figure 3a shows our empirical study of the latency breakdown of the LLM calls from a popular LLM application in our production, which uses a chain-style workflow. The prompt lengths range from 150 to 4,000 tokens and the output length is around 50 tokens. We find that a significant portion of the latency of an LLM API call originates outside the LLM engine (30~50% on average and over 70% in the worst cases). The overhead increases with the growing length of prompts. The high latency can sometimes result in API timeouts and resubmissions.

Such overhead is due to the chatty interaction between LLM services and clients. Figure 3b illustrates the overhead of a simple two-step LLM application (e.g., chain-style summary of two text chunks). Existing LLM services are unaware of the dependency among such requests, where the output of the previous request may be the direct input of the next one. For such consecutive and dependent requests, the client has to wait for the arrival of the response to the first LLM request (2) before submitting the next LLM request (3). This unnecessarily incurs heavy network latency because clients and LLM services are typically in different data centers. Moreover, the next LLM request has to suffer extra queuing delays (4), because requests from other applications may arrive between the consecutive LLM requests.

Figure 4: Request-centric scheduling vs. application-centric scheduling for the map-reduce style document summary task.

In Table 1, we evaluated four popular LLM applications. The first two are from our production, and the last two are popular open-source projects. They all require tens of LLM calls to complete a single task, which results in high user-perceived latency. Our evaluation in §8.2 shows that LLM services treating requests individually could slow down the end-to-end latency by over 2x. An LLM service can eliminate the overhead if it can handle consecutive requests in a batch. Parrot adopts such an approach.
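A back-of-the-envelope calculation makes the cost concrete. All constants below are assumed for illustration only; the shape of the result matches the breakdown in Figure 3.

    # Assumed per-step costs (illustration only), in seconds:
    T_NET, T_QUEUE, T_GEN, STEPS = 0.10, 0.05, 0.50, 10

    # Request-centric service: every dependent step pays a client round trip.
    request_centric = STEPS * (T_NET + T_QUEUE + T_GEN)   # 6.50 s

    # Consecutive requests handled in-service (Figure 3c): one round trip.
    service_side = T_NET + T_QUEUE + STEPS * T_GEN        # 5.15 s

    print(f"{request_centric:.2f}s vs {service_side:.2f}s")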
As shown in Figure 3c, the two steps of the same application are scheduled together, thus allowing the output of Step A to be fed directly into Step B, with the network and queuing overhead bypassed.

Misaligned Scheduling Objectives. Due to the lost application information (workflow and application performance objective), existing public LLM services have to blindly use a universal treatment for all requests, e.g., optimizing per-request latency [44]. However, LLM-based applications are more concerned about the end-to-end experience than about individual requests. These misaligned optimization objectives may negatively impact end-to-end performance. Considering the map-reduce document summary in Figure 1a, the system should minimize the end-to-end time it takes to receive the final summary, rather than the latency of individual requests. LLM services optimized for individual requests are not optimal for end-to-end latency.

Table 1: Statistics of LLM calls of LLM applications.
LLM-based App.        # Calls   Tokens       Repeated (%)*
Long Doc. Analytics   2 ~ 40    3.5k ~ 80k   3%
Chat Search           2 ~ 10    5k           94%
MetaGPT [22]          14        17k          72%
AutoGen [54]          17        57k          99%
*We count a paragraph as repeated if it appears in at least two LLM requests.

As depicted in Figure 4, current LLM services must limit the number of concurrent requests running on each LLM engine to control the latency of individual requests. However, there is a trade-off between latency and throughput in LLM inference. Increasing the batch size can bring up to 8.2x higher throughput but lead to 95% higher latency [9]. Yet, if we understand the application-level performance objective, which in this case is the end-to-end latency, we can determine that the ideal scheduling strategy should maximize the throughput (using higher batch sizes) during the map stage and minimize request latency during the reduce stage. This strategy reduces end-to-end latency by 2.4x. Moreover, it uncovers the potential to enhance cluster throughput without compromising the end-to-end latency of LLM applications. This insight is essential for addressing the conflict between rising demand and limited hardware resources. It underscores the necessity of scheduling LLM requests from the perspective of LLM applications, but it also presents the challenge of managing diverse LLM requests with varying performance objectives.

Redundant Computations. Currently, most LLM-based applications exhibit a high degree of redundancy in the prompts of their requests. For instance, Bing Chat [32] has handled more than 1 billion chat prompts. These prompts share the same system prompts that define the functionality of Bing Chat. OpenAI introduces GPTs [42] to let users customize a ChatGPT for a specific purpose, whose prompt template is the same across users. The commonality in prompts is crucial as it delineates the functionality and restrictions of LLM-based applications.

Figure 5: The prompt structure of Bing Copilot shows a long prompt reused by different user queries: a static task role, quasi-static few-shot examples, and a dynamic user input.

The prompt structure in Figure 5 [52] includes a role definition, several examples to enhance the precision of the LLM's behaviors, and user query details. While the user input is dynamic, the task role is always fixed, and the few-shot examples could be quasi-static in that the same type of tasks use the same examples. This is why more than 94% of prefix tokens could be repetitively used across LLM requests for various users (Table 1). Such commonality also exists in multi-agent applications. For example, MetaGPT [22] and AutoGen [54] recurrently incorporate conversation history into the prompt over several rounds of LLM requests, leading to 72% and 99% redundancy, respectively. These redundant sections excessively utilize GPU memory bandwidth and are computed multiple times. Earlier results have proposed optimizations in LLM engines to avoid redundant GPU memory for shared prompts [25]. However, it is hard for public LLM services to swiftly detect and co-locate prompt-sharing requests, which may be dynamically generated, among tons of diverse requests from diverse applications. Without knowledge of the prompt structure, extensive token-by-token matching for every LLM request is expensive at the cluster level. Hence, if the cluster scheduler of a public LLM service cannot dispatch prompt-sharing requests to the same engine, the engine-level redundancy avoidance optimizations can hardly take effect.

4 Parrot Design

Figure 6: Parrot system overview.

Figure 6 depicts the overview of Parrot's design. Parrot provides a natural way of programming LLM applications with Semantic Variable annotations (§4.1), which is compatible with existing LLM orchestration frameworks, e.g., LangChain [8]. Centering on this abstraction, the Parrot Manager is designed to schedule LLM requests at the cluster level, by deriving the application-level knowledge (§4.2) and optimizing the end-to-end performance of applications (§5). The manager schedules the LLM requests to LLM Engines, each formed by a GPU server (or a group of servers) in the cluster that can serve LLM requests independently.

4.1 Semantic Variable

Parrot treats an LLM request as a semantic function1 implemented using natural language and executed by LLMs.

1The term semantic function is borrowed from Semantic Kernel [36].

import Parrot as P
from Parrot.PerformanceCriteria import LATENCY

@P.SemanticFunction
def WritePythonCode(task: P.SemanticVariable):
    """ You are an expert software engineer.
    Write python code of {{input:task}}.
    Code: {{output:code}} """

@P.SemanticFunction
def WriteTestCode(task: P.SemanticVariable,
                  code: P.SemanticVariable):
    """ You are an experienced QA engineer.
    You write test code for {{input:task}}.
    Code: {{input:code}}.
    Your test code: {{output:test}} """

def WriteSnakeGame():
    task = P.SemanticVariable("a snake game")
    code = WritePythonCode(task)
    test = WriteTestCode(task, code)
    return code.get(perf=LATENCY), test.get(perf=LATENCY)

Figure 7: Example: a multi-agent application in Parrot.

A Semantic Variable is defined as an input or output variable of a semantic function, which is referred to as a placeholder in the prompt. Figure 7 shows a simplified example of a multi-agent application like MetaGPT [22]. It contains two SemanticFunctions, one for the software engineer to write code and one for the QA engineer to write test code. It has three Semantic Variables: task, code, and test, for the task description, the code to be developed by the software engineer, and the test code to be developed by the QA engineer, respectively. Although existing LLM orchestration frameworks (e.g., LangChain [8]) also allow placeholders in a prompt, the placeholders are rendered with real data before submission, hence public LLM services cannot detect such a structure. Instead, Parrot relies on Semantic Variables to preserve the prompt structure for further inter-request analysis on the public LLM service side.

In addition to the semantic functions, LLM application developers can further define orchestration functions that connect multiple semantic functions (e.g., WriteSnakeGame in Figure 7). The Semantic Variables connecting multiple semantic functions form the data pipeline of multiple LLM requests in the public LLM service. A simple data flow analysis of the semantic functions can be done to reveal the connections of multiple LLM requests. E.g., in Figure 7, the code variable connects the two LLM requests originating from WritePythonCode and WriteTestCode, showing their sequential dependency. Different from the traditional completion API, Parrot splits a completion request into a submit operation and a get operation (§7). A call to a SemanticFunction triggers the submit API to submit an LLM request with its prompt and input Semantic Variables. The execution of a SemanticFunction is asynchronous, thus it returns futures of the output Semantic Variables.

Figure 8: Primitives (selected) for Inter-Request Analysis.

Through the get API, applications can fetch the value of an output Semantic Variable from the public LLM service in an on-demand manner. This asynchronous design allows the Parrot-powered LLM service to receive all LLM requests not blocked by native functions and analyze their relationships just-in-time.

The get operation supports annotation of performance criteria, expressing the end-to-end performance requirement of an application, which can be end-to-end latency or throughput (extensible to more criteria like per-token latency when streaming, and time-to-first-token). For example, the final outputs, code and test in Figure 7, are fetched using get with an objective of end-to-end latency. Criteria of intermediate variables will be automatically deduced and propagated from final outputs (§5.2). After propagation, each variable is attached to a criterion, which finally serves as a hint to Parrot's scheduler (§5.4).

4.2 Primitives of Inter-Request Analysis

In general, Parrot performs inter-request analysis mainly by two types of application-level information deduced from Semantic Variables: the DAG of requests and the prompt structure. Figure 8 illustrates the DAG workflow of the example shown in Figure 7 and the primitives used for inter-request analysis and optimizations.

DAG-based analysis. As requests, or SemanticFunctions, are submitted beforehand, Parrot can receive them all at once and analyze their correlations just-in-time on the service side. Parrot maintains a DAG-like data structure in each user's registered session. Each node is either a request or a Semantic Variable that connects different requests. When a request comes, Parrot inserts it into the DAG by linking edges with the Semantic Variables it refers to through placeholders in the prompts. Parrot can perform conventional dataflow analysis [1, 38] using the primitives to get the producer and consumers of Semantic Variables (i.e., GetProducer and GetConsumers) to recover the dependencies of LLM requests. Using the request DAG and the annotated performance criteria (via GetPerfObj) of final output Semantic Variables, Parrot can deduce the request-level scheduling preference by analyzing the DAG and the performance objective of final outputs (§5.2).
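A minimal sketch of how a service might implement these primitives and the objective deduction of §5.2 is given below. All class and field names are illustrative, not Parrot's actual internals, and the backward propagation here is transitive; the propagation depth is a policy choice.

    from collections import defaultdict
    from dataclasses import dataclass

    @dataclass
    class Request:
        request_id: str
        inputs: list[str]     # ids of Semantic Variables the prompt consumes
        outputs: list[str]    # ids of Semantic Variables the request produces

    class RequestDAG:
        """Service-side registry linking requests through Semantic Variables."""
        def __init__(self):
            self.producer: dict[str, Request] = {}
            self.consumers: dict[str, list[Request]] = defaultdict(list)

        def insert(self, req: Request) -> None:
            for var in req.outputs:
                self.producer[var] = req
            for var in req.inputs:
                self.consumers[var].append(req)

        def get_producer(self, var: str):            # GetProducer
            return self.producer.get(var)

        def get_consumers(self, var: str):           # GetConsumers
            return self.consumers[var]

    def deduce_latency_sensitive(dag: RequestDAG, latency_vars: set[str]) -> set[str]:
        """Walk backwards from latency-critical outputs (cf. §5.2);
        requests not reached default to throughput-preferred."""
        marked: set[str] = set()
        stack = [dag.get_producer(v) for v in latency_vars if dag.get_producer(v)]
        while stack:
            req = stack.pop()
            if req.request_id in marked:
                continue
            marked.add(req.request_id)
            stack.extend(p for p in (dag.get_producer(v) for v in req.inputs) if p)
        return marked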
For con- secutive execution of dependent requests, materialized value is transmitted through a message queue allocated for cor- responding Semantic Variable, avoiding unnecessary chatty communication between clients and LLM services. The value of a Semantic Variable in a request may require transformation before being exchanged, e.g., the value of a Semantic Variable is extracted from the JSON-formatted out- put of an LLM request, which is then fed into consecutive LLM requests. Similar to existing message queue systems that support message transformation (e.g., Kafka [5]), Parrot also supports string transformation to manipulate Semantic Variables during value exchanging among LLM requests. Par- rot supports most output parsing methods of LangChain [8], which covers most use cases of LLM applications. 5.2 Performance Objective Deduction To optimize the end-to-end performance of applications, we need to know the application-level performance criteria. To help deriving the request-level scheduling preference from the end-to-end application’s performance requirement, we need to understand the workflow of the LLM application, which is the DAG of LLM requests derived by Parrot’s primitives. When an application annotates a Semantic Variable to pre- fer higher throughput, all requests generating this Seman- tic Variable (both directly or indirectly) will be marked as throughput-preferred when scheduling. This scheduling pref- erence is usually beneficial for offline data processing, such as bulk document analysis. Figure 9: Performance deduction for an LLM-based applica- tion generating two latency-sensitive Semantic Variable. Handling latency-sensitive applications is more intricate. As demonstrated in Figure 4, achieving low end-to-end la- tency may sometimes require prioritizing throughput at the Mapping stage. The latency of individual requests can sacri- ficed so as to reduce the completion time of the entire DAG of requests. Parrot analyzes LLM requests in reverse topological order, beginning with those linked to latency-critical Semantic Variable, as depicted in Figure 9. With the extracted DAG, LLM requests that directly result in latency-critical Seman- tic Variables are labeled as latency-sensitive (Request 1 and 2), as are their immediate predecessors (Request 3). Parallel LLM requests at the same stage are grouped into a task group (Task Groups 0 and 1). The scheduler should minimize the latency of the entire task group, often leading to a higher batch capacity for higher throughput of token generation. 5.3 Sharing Prompt Prefix When an LLM request is scheduled to an LLM engine, a con- text on the engine is created to store the state of the model execution for this request (mainly KV cache). Existing works have proposed to share the KV cache of common prefix of prompts in LLM engines to save the GPU memory. However, as we have explained in §3, today’s public LLM service face diverse applications and requests, which is hard to identify the commonality at the cluster level. Token-by-token compar- ison is impractical due to high time complexity, especially for very long context with massive requests. In Parrot, by expos- ing Semantic Variables to LLM service, we can understand the prompt structure to automatically detect the commonality more efficiently at the granularity of Semantic Variables. Using Parrot’s primitive of PrefixHash, Parrot only needs to check the hash value at positions after each Semantic Vari- able in a request’s prompt. 
Parrot maintains a key-value store, where each entry maps a (hashed) prefix of tokens to a list of requests, so the scheduler can quickly check the sharing opportunity in an online manner, supporting both static and dynamically generated prompts within one application or even across different applications.

Furthermore, we propose a better GPU kernel for the attention computation of requests with a common prefix. We first leverage vLLM's paged memory management [25] to save the redundant GPU memory. But vLLM's kernel still suffers from redundant computation and memory loading of the shared tokens. Therefore, we design a new attention decoding algorithm by combining FlashAttention [12] and PagedAttention [25] that treats the shared and non-shared tokens separately. This significantly accelerates the attention of shared contexts (implementation details in §7).

Algorithm 1: Parrot's Request Scheduling.
Data: Q: the request queue
1   Q.sort();                     /* Topological order */
2   for r in Q do
3       SharedReqsInQueue, CtxInEngine = FindSharedPrefix(r);
4       if r.TaskGroup != None then
5           r* = FindEngine(r.TaskGroup);
6       else if SharedReqsInQueue != None then
7           r* = FindEngine(SharedReqsInQueue);
8       else if CtxInEngine != None then
9           r* = FindEngine(r, filter=CtxInEngine);
10      if r* = None then
11          r* = FindEngine(r);
12      Q.remove(r*);

5.4 Application-Centric Scheduling

To fix the problem of existing public LLM services that blindly optimize diverse individual requests, Parrot's scheduling policy leverages the application-level knowledge to optimize the end-to-end performance. Specifically, the primary goal of Parrot's scheduler is to meet the varied performance goals of LLM applications while optimizing GPU cluster utilization. As explained in §3, a conflict arises when combining throughput- and latency-oriented requests: large batch sizes increase throughput and GPU efficiency but degrade latency, and vice versa. Transformer-based LLM inference is largely memory-bound, with latency influenced by the count of concurrent tokens within the engine. To meet the performance targets of LLM applications, particularly latency, an LLM engine must regulate the token count below a specified threshold, which is determined by the LLM request with the strictest latency constraint. Therefore, Parrot's scheduling principles are twofold: (1) group LLM requests with similar performance requirements to circumvent the conflict, and (2) maximize opportunities for sharing across requests.

Algorithm 1 outlines the scheduling process of Parrot. With the extracted DAG, the system arranges the LLM requests according to their topological order (line 1). Parrot tends to schedule requests belonging to the same application together to avoid the slowdown caused by interleaved scheduling (§8.2). For requests identified as part of a task group through Parrot's performance objective deduction, the scheduler attempts to allocate the entire task group together (lines 4-5). Additionally, if Parrot detects other queued requests or running contexts with a common prefix, it tries to assign them to the same LLM engine (lines 3, 6-9), to utilize Parrot's context fork to reduce redundant computation and GPU memory transactions. For an LLM request without the above opportunities, Parrot schedules the request independently (lines 10-11). Due to limited space, we omit the details of how Parrot chooses LLM engines (i.e., FindEngine). Briefly, Parrot finds the engine that satisfies the scheduling preference of a request while minimizing the negative impacts. For instance, if a latency-sensitive request is scheduled to an LLM engine that can run up to 64,000 tokens of throughput-driven requests, the engine's capacity will be significantly reduced to 2,000 to satisfy the request's strict latency requirement. But if it is scheduled to an engine that is already running a latency-sensitive request, the capacity reduction is negligible.

6 Discussion

Dynamic Applications and Function Calling. Currently, Parrot only supports cloud-side orchestration of LLM requests without involving dynamic control flow and native functions (e.g., Python code). These still require client-side execution. We intentionally disable the offloading of such functions to public LLM services to minimize the security risks of malicious injection. For private LLM services whose LLM applications are trusted, or where there is a trusted zone to execute these functions, Parrot's APIs can be easily extended with conditional connections and native code submission. Moreover, these extensions further enable new optimizations, e.g., we can speculatively pre-launch high-probability branches in dynamic applications based on past profiles. This also proves the potential of Parrot's design when facing new types of applications. We leave these extensions as future work.

Other Applications of Inter-Request Analysis. The inter-request analysis in Parrot enables a new optimization space not limited to the ones we introduced in §5. A large-scale service has more scheduling features to consider, including handling outliers [3], job failures [58], delay scheduling [57], fairness [15, 61], starvation [17], or supporting heterogeneous clusters [24, 37], which have been widely studied in other systems. Parrot provides a new view from the perspective of LLM-based applications: we need to understand the interconnection and commonality of LLM requests to optimize applications' end-to-end performance. These features can be revisited in the LLM service system by considering the new characteristics of LLM applications. In this paper, we focus on Parrot's mechanisms and a few use cases, leaving other optimizations as promising future work.

Parrot with LLM Orchestration Frameworks. There have been several frameworks for developers to build LLM-based applications, e.g., LangChain [8], Semantic Kernel [36], and PromptFlow [35]. The key function of these frameworks is to "glue" different LLM calls to accomplish a complex task (a.k.a. LLM orchestration). Parrot can be integrated with these frameworks by extending their calls to LLM service APIs with Semantic Variables. Most of these frameworks already use a template-based approach in which developers design a template with placeholders and render the placeholders at runtime. These placeholders naturally match the concept of Parrot's Semantic Variables. However, because these frameworks render the template prompt before submission, LLM services lose the information about the prompt structure. To make these frameworks compatible with Parrot, both the template itself and the variables to render the template (using Semantic Variables in Parrot) need to be wrapped as a SemanticFunction so that the necessary information is exposed to Parrot's LLM service.

7 Implementation

Parrot is an end-to-end LLM service for LLM applications, implemented in Python with about 14,000 lines of code. Its front-end provides the abstractions of Semantic Variable and SemanticFunction, which are transformed into Parrot's APIs (implemented with FastAPI [48]) to be submitted as LLM requests. A centralized Parrot manager handles the management of LLM requests, including Semantic Variables, communication, and scheduling. We also build an LLM engine based on efficient kernels from vLLM [25], xFormers [26], and our own. The engine supports advanced features for LLM serving, including paged memory management [25] and continuous batching [56]. Parrot's front-end and manager are implemented in 1,600 and 3,200 lines of Python, respectively. Parrot's LLM engine is implemented in 5,400 lines of Python and 1,600 lines of CUDA. We have implemented OPT [60] and LLaMA [51] with PyTorch [45] and Transformers [53].

APIs. Applications programmed with SemanticFunctions or other front-ends are finally lowered to requests to universal APIs through different adapters. Parrot provides OpenAI-like APIs with the extension of Semantic Variables. The request bodies of the two operations mentioned in §4.1 are as follows:

(submit) {"prompt": str, "placeholders": [{"name": str, "in_out": bool, "semantic_var_id": str, "transforms": str}, ...], "session_id": str}
(get) {"semantic_var_id": str, "criteria": str, "session_id": str}

In addition to the static string prompt, Parrot preserves the input and output placeholders. A placeholder is associated with a Semantic Variable, either for rendering the input or parsing the output. As introduced in §5.1, Parrot supports transformations before the input or after the output. Parrot also supports other APIs for setting and fetching the values of Semantic Variables. An error message will be returned when fetching a Semantic Variable whose intermediate steps fail (including engine execution, communication, and string transformation).

Kernel Optimization. vLLM's GPU kernel, while capable of reusing results cached in GPU memory for shared prefix tokens in a prompt, sometimes excessively reloads these tokens from global to shared memory, impeding attention score computations. Using OpenAI Triton [43] and CUDA, we have developed a novel GPU kernel, integrating concepts from PagedAttention [25] and FlashAttention [11, 12], to accelerate attention decoding computation involving shared prefixes. This kernel retains PagedAttention's approach of storing the key-value (KV) cache in disparate memory segments and utilizes a page table per request to monitor block status and placement. Furthermore, employing FlashAttention principles, the kernel maximizes data reuse within shared memory. Unlike the repeated tile reloading in PagedAttention's implementation, it loads KV cache tiles for the shared prefix into shared memory only once, diminishing memory transactions between the L2 cache and shared memory. The kernel initially calculates interim attention metrics (including attention scores, qk_max, exp_sum) for the shared prefix using the loaded tiles and writes these back to HBM. Subsequently, it processes the new tokens' partial attention beyond the prefix, combining this with the prefix's interim results to derive the final attention output.

Universal Engine Abstraction. Parrot's cluster manager controls multiple engines running various models, tokenizers, KV cache layouts, etc. To enable Parrot's optimizations, LLM engines need to support (1) stateful generation (e.g., guidance [18]) and (2) sharing KV cache states across different requests. Hence we propose a universal abstraction describing the minimal capability required for LLM engines to be integrated into Parrot:

def Fill(token_ids: List[int], context_id: int, parent_context_id: int)
def Generate(sampling_configs: Dict, context_id: int, parent_context_id: int)
def FreeContext(context_id: int)

These three methods not only cover the basic completion functionality of an LLM inference engine but also provide a flexible context management interface. The Fill method processes the initial prompt tokens, and calculates and fills the KV cache into the corresponding context. The Generate method produces tokens via generative decoding, one token per iteration, until it reaches the length limit, a user-defined termination character, or the EOS (end-of-sequence) token, under certain sampling configurations (e.g., temperature). Fills and Generates are scheduled and batched by the engine's scheduler per iteration using continuous batching [56]. Creating and forking contexts can be realized with these two methods by setting context_id and parent_context_id, respectively. The FreeContext method explicitly frees a context (i.e., frees its KV cache in GPU memory). Separating Fill and Generate not only fits Semantic Variables naturally (constant text and input values are processed by Fill, while output values are generated by Generate), but also breaks the request-level dependency into a finer granularity, enabling more parallel execution opportunities [2, 21, 46, 64].
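To illustrate how the three methods compose, the following sketch shows how an engine client might serve several requests that share a system-prompt prefix: the prefix is filled once into a context, each request forks a child context, and decoding proceeds per request. The engine object follows the abstraction above; everything else (context ids, the -1 sentinel for "no parent", and the loop structure) is an illustrative assumption.

    def serve_shared_prefix(engine, prefix_tokens, user_inputs, sampling):
        """Sketch: context fork over the Fill/Generate/FreeContext abstraction.
        `engine` is assumed to expose the three methods defined above."""
        PREFIX_CTX = 0
        # Fill the shared system prompt once; its KV cache lives in context 0.
        engine.Fill(token_ids=prefix_tokens, context_id=PREFIX_CTX,
                    parent_context_id=-1)            # -1: no parent (assumed)
        outputs = []
        for i, tokens in enumerate(user_inputs, start=1):
            # Fork a child context: the child reuses the prefix KV cache.
            engine.Fill(token_ids=tokens, context_id=i,
                        parent_context_id=PREFIX_CTX)
            outputs.append(engine.Generate(sampling_configs=sampling,
                                           context_id=i,
                                           parent_context_id=PREFIX_CTX))
            engine.FreeContext(context_id=i)         # free per-request KV cache
        return outputs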
8 Evaluation

8.1 Experimental Setup

Testbed. We evaluate Parrot with two separate setups for single-GPU and multi-GPU experiments. The single-GPU evaluations use a server with a 24-core AMD EPYC 7V13 CPU equipped with one NVIDIA A100 (80GB) GPU. The multi-GPU evaluations use a server with a 64-core AMD EPYC CPU and four NVIDIA A6000 (48GB) GPUs. Both servers run CUDA 12.1 and cuDNN 8.9.2.

Workloads. Our evaluations run four representative LLM applications. Each LLM engine uses one GPU and runs a LLaMA 13B or LLaMA 7B model [51]. For LLM-based data analytics on long documents, we use the Arxiv dataset [27], executing chain and map-reduce summarizations on an extensive collection of academic papers. To investigate the sharing opportunities of LLM-based applications with many users, we run the prompts from Bing Copilot and GPTs [42] with synthesized user queries. For multi-agent applications, we build a multi-agent programming application using MetaGPT [22], which contains a system architect who designs APIs, multiple programmers who write code for different files, and reviewers who share review comments; the programmers also revise the code based on the comments. For chat service workloads, we derived scenarios from the ShareGPT dataset [50], which mirrors real LLM chat conversations. According to the distribution of our measurements, we introduced a random delay of 200~300 ms to LLM requests to emulate typical network overhead seen over the Internet. To create realistic workloads, we recorded the LLM responses using GPT-4 [41], ensuring the LLaMA models generated text of similar length for system performance analysis. Table 2 presents the workloads and the optimizations in Parrot that take effect.

Table 2: The workloads and the optimizations taking effect. Workloads: Data Analytics, Serving Popular LLM Applications, Multi-agent App., Mixed Workloads; optimizations: Serving Dependent Requests, Perf. Obj. Deduction, Sharing Prompt, App-centric Scheduling.

Baseline. We benchmark Parrot against state-of-the-art solutions for building LLM applications and serving LLM requests. The majority of LLM applications used in our baseline comparisons are developed using LangChain [8], which is the predominant framework for LLM application development. The LLM applications in the baselines leverage OpenAI-style chat completion APIs as provided by FastChat [62]. FastChat is a widely recognized open-source LLM serving system with over 30,000 stars on its repository. Incoming requests to FastChat are allocated to LLM engines that run either HuggingFace's Transformers library [53] or vLLM [25], both of which incorporate cutting-edge enhancements for LLM execution, such as FlashAttention [12], PagedAttention [25], and continuous batching techniques [56]. The default scheduling strategy employed by FastChat assigns incoming requests to the LLM engine with the smallest current queue. Since existing LLM services typically expose their functionality through "chat" completion APIs, the baseline assessments treat all requests as independent and assume a high sensitivity to latency.

Figure 10: Latency (per output token) of vLLM with varying token capacities and request rates. Requests are sampled from ShareGPT [50] and their arrival times follow Poisson distributions. (a) Mean Latency; (b) P90 Latency.

To manage token generation response times, each LLM engine is subject to a capacity threshold, which is the aggregate token count from all active requests on the engine. Since existing LLM token generation is usually bound by memory bandwidth, the per-token generation latency of an engine is mainly affected by the number of running tokens in a batch. As depicted in Figure 10, our experiments indicate that the latency per output token (TPOT, time-per-output-token) for vLLM, with continuous batching enabled, experiences a notable uptick when the engine's workload exceeds a batch capacity of 6,144 tokens. In our evaluation, we use the setting that an LLM engine keeps its generation latency under 40 ms per output token for latency-sensitive requests, consistent with our experience of OpenAI's LLM services. When all LLM engines hit their maximum capacity, any additional LLM requests are queued in a FIFO (first in, first out) manner, awaiting the completion and release of resources by ongoing tasks. Serving longer contexts (e.g., 32k or even 1M tokens) within a satisfactory latency requires either more GPUs using tensor-parallel [49] or sequence-parallel [6] approaches, or approximate attention (e.g., StreamingLLM [55]), which is beyond the scope of this paper.
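The capacity threshold above suggests a simple per-engine admission rule. The sketch below is a hypothetical illustration of such token-based admission control; the threshold values echo the measurement above, and the class layout is ours, not FastChat's or Parrot's.

    class Engine:
        """Token-capacity bookkeeping for one LLM engine (illustrative only)."""
        def __init__(self, latency_cap_tokens=6144, throughput_cap_tokens=12288):
            self.latency_cap = latency_cap_tokens       # keeps TPOT ~40 ms
            self.throughput_cap = throughput_cap_tokens
            self.running_tokens = 0
            self.has_latency_requests = False

        def capacity(self) -> int:
            # One latency-sensitive request forces the stricter cap.
            return self.latency_cap if self.has_latency_requests \
                   else self.throughput_cap

        def try_admit(self, request_tokens: int, latency_sensitive: bool) -> bool:
            cap = self.latency_cap if latency_sensitive else self.capacity()
            if self.running_tokens + request_tokens > cap:
                return False                            # caller keeps it queued (FIFO)
            self.running_tokens += request_tokens
            self.has_latency_requests |= latency_sensitive
            return True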
Our evaluation demonstrates how Parrot enhances chain summarization by mitigating the excessive communication overhead stemming from client in- teractions. Figure 11 presents the average end-to-end latency for summarizing a single document using one LLM engine (A100, LLaMA 13B) . We adjust the chunk size (the count of tokens per chunk) and the output length, with results shown in Figure 11a and Figure 11b, respectively. Parrot achieves a re- duction in end-to-end latency by as much as 1.38× and 1.88× compared to the baselines employing vLLM and Hugging- Face, respectively. The efficiency of Parrot primarily stems from the decreased network latency, which is a consequence of reduced client interaction. As the output length increases, the time spent on generation becomes more significant, lead- ing to a diminishing advantage for Parrot over the baseline. By increasing the chunk size, we decrease the number of chunks, yet the extent of the speedup is contingent upon the network latency savings for each chunk. Given that token generation is substantially more time-consuming than prompt processing, we observe a consistent speedup with variable chunk sizes and a fixed output length (1.2× and 1.66× relative to vLLM and HuggingFace, respectively). This indicates that Parrot’s optimization for dependent LLM requests is particularly bene- ficial for shorter outputs, which are prevalent in various LLM applications such as summarization, short answer generation, scoring, and choice provision. Due to HuggingFace’s slower performance relative to vLLM, subsequent evaluations focus solely on the comparison between Parrot and vLLM. Figure 12a extends the evaluation by introducing back- ground LLM requests at varying rates to examine the capa- bility of Parrot in mitigating additional queuing delays for dependent requests. Parrot slashes the end-to-end latency by a factor of 2.38× in comparison to the baseline (vLLM). With Parrot, as soon as the summary for the first chunk is completed, Figure 13: The difference in E2E latency of the 25 chain- summary application between Baseline and Parrot. All appli- cations finish earlier in Parrot. the subsequent chunk is processed immediately by incorporat- ing the summaries of previous chunks into the prompt, which aids in generating the summary for the next chunk. In con- trast, the baseline treats all LLM requests individually. As a result, in addition to the network latency from client interac- tions, subsequent requests must re-enter the queue, leading to added queuing delays. Figure 12b further illustrates the end-to-end latency when multiple chain-summary applica- tions are submitted concurrently, with each application tasked with generating a summary for a separate document. Parrot manages to reduce the average end-to-end latency for all ap- plications by 1.68× without slowing down any applications compared to the baseline according to Figure 13. The base- line, by interleaving the execution of different applications, exacerbates the slowdown of the end-to-end latency for all applications. These experiments validate that recognizing the interconnections of LLM requests can significantly enhance end-to-end performance, as opposed to processing requests in isolation. Map-Reduce Applications. An alternative implementation of the document summarization application follows the map- reduce paradigm as depicted in Figure 1a. 
This approach consists of multiple parallel mapping LLM requests, where each request summarizes a distinct segment of the document, followed by a reducing LLM request that aggregates these individual summaries into a final summary. As shown in Figure 14, Parrot realizes a 2.37× acceleration over the base- 255075100Output Length (# tokens)050100150200250Average Latency (s)1.38x1.21x1.14x1.11x1.88x1.64x1.55x1.52xParrotBaseline (vLLM)Baseline (HuggingFace)512102415362048Chunk Size (# tokens)050100150200250Average Latency (s)1.21x1.21x1.20x1.19x1.63x1.62x1.60x1.61xParrotBaseline (vLLM)Baseline (HuggingFace)0.00.51.01.52.02.53.03.5Request Rate (reqs/s)50100150200250Average Latency (s)1.21x1.19x1.31x1.79x2.38xParrotBaseline (vLLM)10152025Number of Apps0100200300Average Latency (s)1.38x1.52x1.63x1.68xParrotBaseline (vLLM)12345678910111213141516171819202122232425Application No.050100150200250Latency in Baseline - Latency in Parrot (s) (a) Output lengths (b) Chunk sizes Figure 14: Average E2E latency of Map-Reduce document summary with varying output lengths and chunk sizes. line with one LLM engine (A100, LLaMA 13B). Since the mapping LLM requests are independent, they are dispatched concurrently by both Parrot and the baseline. The primary ad- vantage of Parrot stems from its deduction of a performance objective that identifies the mapping tasks as a task group. By recognizing this relationship, Parrot is capable of optimiz- ing the latency of the entire task group through larger batch sizes, which in turn enhances throughput. In contrast, the baseline processes each LLM request in isolation, operating under the presumption that they are all sensitive to latency. This constrains the baseline to utilize a limited token capacity (4096 tokens) on the LLM engine to achieve optimal latency for individual tasks, which is detrimental to the end-to-end performance of applications. It underscores the necessity for LLM services to distinguish LLM requests to optimize the end-to-end performance of varied LLM applications. 8.3 Serving Popular LLM Applications Production applications need to face massive users. As ex- plained in Figure 5, developers often need to use a very long system prompt to define the behavior of LLMs. Therefore, users of the same LLM application often use the shared prompt, which can benefit from Parrot’s context fork mech- anism and Parrot’s scheduling policy that co-locates LLM requests sharing a long prompt prefix. Because we do not have access to the intermediate steps of Bing Copilot, we only evaluate the final request generating the response to users. We synthesized 64 requests from the length distribution we measured using Bing Copilot. The system prompt length is about 6000 tokens. The output lengths ranges from 180 to 800 tokens. Figure 15 shows the average request latency of Bing Copilot of Parrot and the baselines. Because the LLM service in the baseline system does not know the prompt struc- ture, it is hard to infer the shared prompt from massive LLM requests. Compared to the baseline without sharing prompt, Parrot achieves 1.8× ∼ 2.4× speedup for batch sizes of 8 and 16. Further increasing the batch size leads to out-of-memory due to the massive KV cache of shared system prompt. We also build an advanced baseline using vLLM’s paged atten- tion to support sharing the prompt with a static prefix. Both Figure 15: Latency of Bing Copilot with varying batch sizes. (a) Batch Size = 32 (b) Batch Size = 64 Figure 16: Latency per output token of Bing Copilot. 
8.3 Serving Popular LLM Applications

Production applications need to serve massive numbers of users. As explained in Figure 5, developers often need to use a very long system prompt to define the behavior of LLMs. Therefore, users of the same LLM application often share the same prompt, which can benefit from Parrot's context fork mechanism and from Parrot's scheduling policy that co-locates LLM requests sharing a long prompt prefix. Because we do not have access to the intermediate steps of Bing Copilot, we only evaluate the final request that generates the response to users. We synthesized 64 requests from the length distribution we measured using Bing Copilot. The system prompt length is about 6000 tokens, and the output lengths range from 180 to 800 tokens. Figure 15 shows the average request latency of Bing Copilot for Parrot and the baselines. Because the LLM service in the baseline system does not know the prompt structure, it is hard for it to infer the shared prompt from massive numbers of LLM requests. Compared to the baseline without prompt sharing, Parrot achieves a 1.8× ∼ 2.4× speedup for batch sizes of 8 and 16. Further increasing the batch size leads to out-of-memory errors due to the massive KV cache of the shared system prompt. We also build an advanced baseline that uses vLLM's paged attention to share the prompt as a static prefix.

[Figure 15: Latency of Bing Copilot with varying batch sizes; plot omitted.]

Both Parrot and vLLM use paged memory management [25], so both systems can hold the same number of tokens in an LLM engine (A100, LLaMA 7B). Parrot further achieves a 1.1× ∼ 1.7× speedup over vLLM because of its better GPU kernel. Although vLLM can save the extra memory usage of the shared prompt, its GPU kernel still has to reload the shared tokens repeatedly. Given that the token generation of LLMs is bound by memory bandwidth, such redundant memory loading slows down end-to-end inference. By combining FlashAttention and PagedAttention, Parrot only needs to load the tokens of the shared prompt once when computing the attention from the diverged tokens of different users. Parrot's speedup for shared prompts mainly comes from token generation, so longer output lengths lead to higher improvements. Figure 16 shows that Parrot achieves 1.58× and 1.84× speedups compared to vLLM using paged attention, reaching 40 ms per-output-token latency at a batch size of 32.

[Figure 16: Latency per output token of Bing Copilot at (a) batch size 32 and (b) batch size 64; plots omitted.]

In Figure 17, we further evaluate the serving of multiple GPTs applications [42], each of which has multiple users, in a multi-GPU cluster. Four A6000 (48GB) GPUs are deployed with four LLM engines (LLaMA 7B). We select four GPTs applications from four popular categories: productivity, programming, image generation, and data analysis. The LLM requests are randomly generated from the four categories with equal probability and arrive at fixed rates following a Poisson distribution. Parrot can sustain 12× higher request rates compared to the baseline without sharing. Because the baseline's scheduling policy is not aware of the shared prompt within each LLM application, the requests are mixed across all LLM engines, making it impossible to reuse the common prompt prefix. Parrot's scheduling policy co-locates LLM requests of the same application to maximize the sharing opportunity, achieving both lower inference latency and higher cluster throughput. After turning off this affinity scheduling policy, Parrot only sustains 3× higher request rates than the baseline, because requests with a shared prefix are often dispatched to different engines, reducing the sharing opportunities. Moreover, Parrot's attention kernel helps it achieve a 2.4× higher rate compared to Parrot using vLLM's PagedAttention, by avoiding the redundant memory loading for attention over shared prompts.

[Figure 17: Serving multiple GPTs applications; plot of normalized latency (ms/token) against request rate (req/s) omitted, comparing Parrot, Parrot w/ PagedAttention, Parrot w/o Scheduling, and the vLLM baseline.]
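The affinity policy can be pictured with the toy scheduler below; this is a minimal sketch of my reading of the behavior described above, not Parrot's actual implementation. Requests carrying the same shared prefix (e.g., one GPTs application's system prompt) are routed to the engine that already caches that prefix, so its KV cache can be reused; a new prefix is pinned to the least-loaded engine.

class AffinityScheduler:
    def __init__(self, num_engines: int):
        self.load = [0] * num_engines          # outstanding requests per engine
        self.prefix_home: dict[str, int] = {}  # prefix id -> engine index

    def dispatch(self, prefix_id: str) -> int:
        engine = self.prefix_home.get(prefix_id)
        if engine is None:                     # first request with this prefix
            engine = min(range(len(self.load)), key=self.load.__getitem__)
            self.prefix_home[prefix_id] = engine
        self.load[engine] += 1
        return engine

    def complete(self, engine: int) -> None:
        self.load[engine] -= 1

# All requests of one application land on one engine and share its prefix KV cache.
sched = AffinityScheduler(num_engines=4)
print([sched.dispatch(app) for app in ["copilot", "copilot", "gpts-code", "copilot"]])
# e.g. [0, 0, 1, 0]

A real policy would also cap per-engine load and evict cold prefixes, but the sketch captures why turning affinity off scatters a prefix across engines and destroys the sharing opportunity.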
8.4 Multi-agent Applications

We assess the performance of multi-agent systems by running MetaGPT [22] within Parrot. A workflow is constructed with three distinct roles. Initially, the Architect outlines the project's file structures and specifies the APIs within each file for a given task. Subsequently, multiple Coders undertake the project implementation, each focusing on a specific file. Following the integration of the code from all files, several Reviewers engage in the process, each examining and commenting on a single file. The Coders then revise their code based on these comments. This review-and-revision cycle is iterated three times to produce the final code.

Figure 18 illustrates the latency and memory consumption of Parrot compared to the baseline systems on one A100 running LLaMA 13B. Parrot achieves a speedup of up to 11.7× compared with the latency-centric baseline. The primary improvement is attributed to Parrot's capability to deduce the performance objectives of LLM requests from the end-to-end performance criteria. For this specific multi-agent scenario, the goal is to minimize the time taken to deliver the final code. Parrot identifies multiple task groups within the parallel processes of coding, reviewing, and revising, facilitating larger batch sizes to enhance throughput and reduce the completion time of the task groups. We also contrast Parrot with a throughput-centric baseline that deliberately uses larger batches to optimize cluster throughput; this baseline also shows higher concurrency and better completion times than the latency-centric baseline. Even when compared to the throughput-centric baseline, Parrot demonstrates superiority, being faster by up to 2.45×. This enhancement mainly stems from Parrot's ability to decrease redundancy through its prompt structure analysis, which contributes a 2.35× acceleration. Given the interactive nature of the roles in MetaGPT, there is considerable overlap in the context among different roles, which Parrot capitalizes on by sharing this common context as a prompt prefix. The static prefix sharing mechanism from vLLM does not work in this dynamic scenario: without a grasp of the prompt's structure, it cannot identify dynamically generated Semantic Variables that could also be shared at runtime. As depicted in Figure 18b, Parrot without this sharing capability would hit the GPU memory ceiling. Additionally, Parrot's specialized GPU kernel for processing the shared prefix achieves a further 1.2× speedup when there are 16 files, compared to using vLLM's PagedAttention, due to the reduced memory transactions.

[Figure 18: The latency and memory usage for multi-agent programming with a varying number of files to program: (a) end-to-end latency and (b) GPU memory of the KV cache; plots omitted.]
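A rough sketch of the objective-deduction idea follows; this is my interpretation of how objectives might be derived from a request DAG, not Parrot's actual algorithm. A request whose output is consumed only by other requests can be scheduled for throughput (larger batches), while a request whose output reaches the user stays latency-sensitive because it gates the end-to-end result.

def deduce_objectives(edges: dict[str, list[str]]) -> dict[str, str]:
    """edges maps a request id to its consumers; 'user' marks an application output."""
    return {
        request: "latency" if "user" in consumers else "throughput"
        for request, consumers in edges.items()
    }

# One review-and-revise round: per-file requests only feed later requests, so they
# form throughput-oriented task groups; only the final merge is latency-sensitive.
dag = {
    "review_file_1": ["revise_file_1"],
    "review_file_2": ["revise_file_2"],
    "revise_file_1": ["merge"],
    "revise_file_2": ["merge"],
    "merge": ["user"],
}
print(deduce_objectives(dag))   # {'review_file_1': 'throughput', ..., 'merge': 'latency'}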
8.5 Scheduling of Mixed Workloads

To assess the performance of Parrot in a multi-GPU setup, we configure a cluster with four A6000 (48GB) GPUs, each hosting a separate LLM engine (LLaMA 7B), resulting in a total of four LLM engines. We emulate a real-world scenario in which LLM services face a variety of demands by injecting a mix of requests from chat applications, at a rate of 1 req/s, and from the data analytic tasks (i.e., the map-reduce applications) analyzed in §8.2. Requests from the chat applications are characterized by their need for low latency, whereas the map-reduce applications prioritize high throughput, creating a challenge when they are concurrently processed by the same LLM engine. We benchmark Parrot against two reference implementations: one tailored for latency, limiting engine capacity to reduce decoding time, and another tailored for throughput, utilizing full engine capacity to maximize GPU utilization.

The results depicted in Figure 19 demonstrate that Parrot attains a 5.5× and a 1.23× improvement in normalized latency (measured as request latency per number of output tokens) [25, 56] for chat applications in comparison to the latency-focused and throughput-focused baselines, respectively. In terms of token generation speed for chat applications, Parrot delivers performance on par with the latency-centric baseline and outperforms the throughput-centric baseline by 1.72×. For map-reduce applications, Parrot reaches a 3.7× speedup over the latency-centric baseline and is 1.05× more efficient than the throughput-centric baseline. Parrot excels by providing both low latency for chat applications and high throughput for map-reduce applications. It mitigates the contention between the chat and map-reduce workloads by intelligently scheduling them on separate engines. These findings underscore the significance of specialized handling of diverse requests for enhancing the overall performance of LLM services.

[Figure 19: The mixture of chat and map-reduce applications; plots omitted (average chat normalized latency, average chat decode time, and average map-reduce JCT for Parrot and the throughput- and latency-centric baselines).]

9 Related Works

Deep Learning Serving Systems. The field of model serving has seen a surge of research activity in recent years, with many systems developed to address the different challenges of deep learning model deployment. These systems include Clipper [10], TensorFlow Serving [39], Clockwork [19], REEF [20], and AlpaServe [28], which have explored many aspects of serving single or multiple models, including batching, caching, placement, scheduling, and model parallelism. These systems were proposed for serving general deep learning models and give little consideration to the unique requirements of large language models, e.g., autoregressive decoding. Orca [56] proposed a fine-grained scheduling mechanism that can batch multiple LLM requests at the iteration level, also known as continuous batching. vLLM proposes PagedAttention [25], which allows the batching of LLM requests with different lengths using non-contiguous memory, increasing memory utilization. These systems for LLM serving still treat LLM requests separately, missing the opportunity to understand the interconnections within an application and to exploit the commonality of different requests. Parrot is orthogonal to them: with more application-level knowledge exposed by Semantic Variables, Parrot can perform data flow analysis on LLM requests, which enables a brand-new optimization space whose final goal is optimizing the end-to-end performance of applications rather than individual requests.

LLM Orchestrator Frameworks. LLM orchestration frameworks help developers create and manage applications powered by LLMs. They simplify prompt design and the orchestration of multiple LLM requests, enabling developers to interact with LLMs easily. LangChain [8] is a Python framework that provides many workflow patterns, e.g., chain and map-reduce, so that developers can easily customize their own LLM applications. Semantic Kernel [36] introduces Planners, semantic agents that can automatically generate plans based on the needs of users. PromptFlow [35] supports chains of native and semantic functions and visualizes them as a graph. LlamaIndex [29] allows developers to use natural language queries to retrieve relevant documents. Parrot is orthogonal to these frameworks and can be easily integrated with them to support Parrot's APIs with the Semantic Variable abstraction, as discussed in §6.

DAG-aware System Optimizations. Dependency graphs, or DAGs (directed acyclic graphs), exist widely in many kinds of systems, and many optimizations have been proposed that exploit this DAG information. Tez [4], Dryad [23], and Graphene [16] use task dependencies to optimize the scheduling and packing of parallel data analytic workloads. SONIC [30], Caerus [59], and Orion [31] optimize serverless functions in terms of communication, latency, and cost. Parrot learns from these earlier systems and recognizes the importance of the correlations among LLM requests for optimizing the end-to-end performance of LLM applications; this motivates Parrot to build APIs that expose such dependency information. Moreover, it is unique to LLM applications that the prompt structure must be understood in addition to request-level dependencies, which is necessary for communication and for identifying commonality across LLM requests. This motivates us to propose the Semantic Variable abstraction, instead of just using a DAG of requests.

10 Conclusion

This paper proposes Parrot, which treats LLM applications as first-class citizens and targets optimizing the end-to-end performance of LLM applications, instead of only optimizing individual LLM requests. We propose the Semantic Variable as the key abstraction that exposes the dependency and commonality of LLM requests, enabling a new optimization space. Our evaluation shows Parrot can optimize LLM-based applications by up to 11.7×. We envision this new angle on the efficiency of LLM applications opening a broad future direction, including the study of other scheduling features such as the fairness of the end-to-end performance of LLM applications.
Acknowledgments

We thank the anonymous reviewers and the shepherd for their constructive feedback and suggestions. Zhenhua Han, Yuqing Yang and Chen Chen are the corresponding authors.

References

[1] Martín Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Geoffrey Irving, Michael Isard, Manjunath Kudlur, Josh Levenberg, Rajat Monga, Sherry Moore, Derek G. Murray, Benoit Steiner, Paul Tucker, Vijay Vasudevan, Pete Warden, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. TensorFlow: A system for large-scale machine learning. In 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16), pages 265–283, Savannah, GA, November 2016. USENIX Association.
[2] Amey Agrawal, Nitin Kedia, Ashish Panwar, Jayashree Mohan, Nipun Kwatra, Bhargav S Gulavani, Alexey Tumanov, and Ramachandran Ramjee. Taming throughput-latency tradeoff in LLM inference with Sarathi-Serve. arXiv preprint arXiv:2403.02310, 2024.

[3] Ganesh Ananthanarayanan, Srikanth Kandula, Albert Greenberg, Ion Stoica, Yi Lu, Bikas Saha, and Edward Harris. Reining in the outliers in Map-Reduce clusters using Mantri. In 9th USENIX Symposium on Operating Systems Design and Implementation (OSDI 10), Vancouver, BC, October 2010. USENIX Association.

[4] Apache. Tez. https://tez.apache.org/, November 2019.

[5] Apache. Kafka. https://kafka.apache.org/, October 2023.

[6] Zhengda Bian, Hongxin Liu, Boxiang Wang, Haichen Huang, Yongbin Li, Chuanrui Wang, Fan Cui, and Yang You. Colossal-AI: A unified deep learning system for large-scale parallel training. CoRR, abs/2110.14883, 2021.

[7] Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, Harsha Nori, Hamid Palangi, Marco Tulio Ribeiro, and Yi Zhang. Sparks of artificial general intelligence: Early experiments with GPT-4, 2023.

[8] Harrison Chase. LangChain. https://github.com/langchain-ai/langchain, October 2022.

[9] Lequn Chen. Dissecting batching effects in GPT inference. https://le.qun.ch/en/blog/2023/05/13/transformer-batching/, May 2023.

[10] Daniel Crankshaw, Xin Wang, Guilio Zhou, Michael J. Franklin, Joseph E. Gonzalez, and Ion Stoica. Clipper: A low-latency online prediction serving system. In 14th USENIX Symposium on Networked Systems Design and Implementation (NSDI 17), pages 613–627, Boston, MA, March 2017. USENIX Association.

[11] Tri Dao. FlashAttention-2: Faster attention with better parallelism and work partitioning. arXiv preprint arXiv:2307.08691, 2023.

[12] Tri Dao, Dan Fu, Stefano Ermon, Atri Rudra, and Christopher Ré. FlashAttention: Fast and memory-efficient exact attention with IO-awareness. In S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, and A. Oh, editors, Advances in Neural Information Processing Systems, volume 35, pages 16344–16359. Curran Associates, Inc., 2022.

[13] Bill Gates. AI is about to completely change how you use computers and upend the software industry. https://www.gatesnotes.com/AI-agents, Nov 2023.

[14] Google. Google Bard. https://bard.google.com/, Nov 2023.

[15] Robert Grandl, Mosharaf Chowdhury, Aditya Akella, and Ganesh Ananthanarayanan. Altruistic scheduling in multi-resource clusters. In 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16), pages 65–80, Savannah, GA, November 2016. USENIX Association.

[16] Robert Grandl, Srikanth Kandula, Sriram Rao, Aditya Akella, and Janardhan Kulkarni. GRAPHENE: Packing and dependency-aware scheduling for data-parallel clusters. In 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16), pages 81–97, Savannah, GA, November 2016. USENIX Association.

[17] Juncheng Gu, Mosharaf Chowdhury, Kang G. Shin, Yibo Zhu, Myeongjae Jeon, Junjie Qian, Hongqiang Liu, and Chuanxiong Guo. Tiresias: A GPU cluster manager for distributed deep learning. In 16th USENIX Symposium on Networked Systems Design and Implementation (NSDI 19), pages 485–500, Boston, MA, February 2019. USENIX Association.

[18] guidance-ai. Guidance. https://github.com/guidance-ai/guidance, November 2023.

[19] Arpan Gujarati, Reza Karimi, Safya Alzayat, Wei Hao, Antoine Kaufmann, Ymir Vigfusson, and Jonathan Mace. Serving DNNs like clockwork: Performance predictability from the bottom up. In 14th USENIX Symposium on Operating Systems Design and Implementation (OSDI 20), pages 443–462. USENIX Association, November 2020.
[20] Mingcong Han, Hanze Zhang, Rong Chen, and Haibo Chen. Microsecond-scale preemption for concurrent GPU-accelerated DNN inferences. In 16th USENIX Symposium on Operating Systems Design and Implementation (OSDI 22), pages 539–558, Carlsbad, CA, July 2022. USENIX Association.

[21] Connor Holmes, Masahiro Tanaka, Michael Wyatt, Ammar Ahmad Awan, Jeff Rasley, Samyam Rajbhandari, Reza Yazdani Aminabadi, Heyang Qin, Arash Bakhtiari, Lev Kurilenko, et al. DeepSpeed-FastGen: High-throughput text generation for LLMs via MII and DeepSpeed-Inference. arXiv preprint arXiv:2401.08671, 2024.

[22] Sirui Hong, Xiawu Zheng, Jonathan Chen, Yuheng Cheng, Ceyao Zhang, Zili Wang, Steven Ka Shing Yau, Zijuan Lin, Liyang Zhou, Chenyu Ran, et al. MetaGPT: Meta programming for multi-agent collaborative framework. arXiv preprint arXiv:2308.00352, 2023.

[23] Michael Isard, Mihai Budiu, Yuan Yu, Andrew Birrell, and Dennis Fetterly. Dryad: Distributed data-parallel programs from sequential building blocks. In Proceedings of the 2nd ACM SIGOPS/EuroSys European Conference on Computer Systems 2007, EuroSys '07, pages 59–72, New York, NY, USA, 2007. Association for Computing Machinery.

[24] Suhas Jayaram Subramanya, Daiyaan Arfeen, Shouxu Lin, Aurick Qiao, Zhihao Jia, and Gregory R. Ganger. Sia: Heterogeneity-aware, goodput-optimized ML-cluster scheduling. In Proceedings of the 29th Symposium on Operating Systems Principles, SOSP '23, pages 642–657, New York, NY, USA, 2023. Association for Computing Machinery.

[25] Woosuk Kwon, Zhuohan Li, Siyuan Zhuang, Ying Sheng, Lianmin Zheng, Cody Hao Yu, Joseph Gonzalez, Hao Zhang, and Ion Stoica. Efficient memory management for large language model serving with PagedAttention. In Proceedings of the 29th Symposium on Operating Systems Principles, SOSP '23, pages 611–626, New York, NY, USA, 2023. Association for Computing Machinery.

[26] Benjamin Lefaudeux, Francisco Massa, Diana Liskovich, Wenhan Xiong, Vittorio Caggiano, Sean Naren, Min Xu, Jieru Hu, Marta Tintore, Susan Zhang, Patrick Labatut, and Daniel Haziza. xFormers: A modular and hackable transformer modelling library. https://github.com/facebookresearch/xformers, 2022.

[27] Yucheng Li. Unlocking context constraints of LLMs: Enhancing context efficiency of LLMs with self-information-based content filtering, 2023.

[28] Zhuohan Li, Lianmin Zheng, Yinmin Zhong, Vincent Liu, Ying Sheng, Xin Jin, Yanping Huang, Zhifeng Chen, Hao Zhang, Joseph E. Gonzalez, and Ion Stoica. AlpaServe: Statistical multiplexing with model parallelism for deep learning serving. In 17th USENIX Symposium on Operating Systems Design and Implementation (OSDI 23), pages 663–679, Boston, MA, July 2023. USENIX Association.

[29] Jerry Liu. LlamaIndex, November 2022.

[30] Ashraf Mahgoub, Karthick Shankar, Subrata Mitra, Ana Klimovic, Somali Chaterji, and Saurabh Bagchi. SONIC: Application-aware data passing for chained serverless applications. In 2021 USENIX Annual Technical Conference (USENIX ATC 21), pages 285–301. USENIX Association, July 2021.

[31] Ashraf Mahgoub, Edgardo Barsallo Yi, Karthick Shankar, Sameh Elnikety, Somali Chaterji, and Saurabh Bagchi. ORION and the three rights: Sizing, bundling, and prewarming for serverless DAGs. In 16th USENIX Symposium on Operating Systems Design and Implementation (OSDI 22), pages 303–320, Carlsbad, CA, July 2022. USENIX Association.
[32] Microsoft. Bing Chat. https://www.bing.com/chat, Nov 2023.

[33] Microsoft. Meeting recap in Microsoft Teams. https://www.microsoft.com/en-us/microsoft-teams/premium, May 2023.

[34] Microsoft. Microsoft 365 Copilot. https://www.microsoft.com/en-us/microsoft-365/enterprise/microsoft-365-copilot, Mar 2023.

[35] Microsoft. PromptFlow. https://github.com/microsoft/promptflow, November 2023.

[36] Microsoft. Semantic Kernel. https://github.com/microsoft/semantic-kernel, November 2023.

[37] Deepak Narayanan, Keshav Santhanam, Fiodar Kazhamiaka, Amar Phanishayee, and Matei Zaharia. Heterogeneity-aware cluster scheduling policies for deep learning workloads. In 14th USENIX Symposium on Operating Systems Design and Implementation (OSDI 20), pages 481–498. USENIX Association, November 2020.

[38] Flemming Nielson, Hanne R. Nielson, and Chris Hankin. Principles of Program Analysis. Springer, 2015.

[39] Christopher Olston, Fangwei Li, Jeremiah Harmsen, Jordan Soyke, Kiril Gorovoy, Li Lao, Noah Fiedel, Sukriti Ramesh, and Vinu Rajashekhar. TensorFlow-Serving: Flexible, high-performance ML serving. In Workshop on ML Systems at NIPS 2017, 2017.

[40] OpenAI. ChatGPT. https://chat.openai.com/, Nov 2023.

[41] OpenAI. GPT-4 technical report, 2023.

[42] OpenAI. Introducing GPTs. https://openai.com/blog/introducing-gpts, Nov 2023.

[43] OpenAI. OpenAI Triton. https://github.com/openai/triton, November 2023.

[44] OpenAI. Production best practices - OpenAI API. https://platform.openai.com/docs/guides/production-best-practices/improving-latencies, Nov 2023.

[45] Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. Automatic differentiation in PyTorch. 2017.

[46] Pratyush Patel, Esha Choukse, Chaojie Zhang, Íñigo Goiri, Aashaka Shah, Saeed Maleki, and Ricardo Bianchini. Splitwise: Efficient generative LLM inference using phase splitting. arXiv preprint arXiv:2311.18677, 2023.

[47] Chen Qian, Xin Cong, Cheng Yang, Weize Chen, Yusheng Su, Juyuan Xu, Zhiyuan Liu, and Maosong Sun. Communicative agents for software development. arXiv preprint arXiv:2307.07924, 2023.

[48] Sebastián Ramírez. FastAPI. https://github.com/tiangolo/fastapi.

[49] Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper, and Bryan Catanzaro. Megatron-LM: Training multi-billion parameter language models using model parallelism. CoRR, abs/1909.08053, 2019.

[50] ShareGPT Team. ShareGPT dataset. https://sharegpt.com/, Nov 2023.

[51] Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample. LLaMA: Open and efficient foundation language models, 2023.

[52] Unknown. Prompt of Bing Chat. https://www.make-safe-ai.com/is-bing-chat-safe/Prompts_Conversations.txt, Nov 2023.

[53] Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Perric Cistac, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. Transformers: State-of-the-art natural language processing. pages 38–45. Association for Computational Linguistics, October 2020.
[54] Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Shaokun Zhang, Erkang Zhu, Beibin Li, Li Jiang, Xiaoyun Zhang, and Chi Wang. AutoGen: Enabling next-gen LLM applications via multi-agent conversation framework. arXiv preprint arXiv:2308.08155, 2023.

[55] Guangxuan Xiao, Yuandong Tian, Beidi Chen, Song Han, and Mike Lewis. Efficient streaming language models with attention sinks. arXiv, 2023.

[56] Gyeong-In Yu, Joo Seong Jeong, Geon-Woo Kim, Soojeong Kim, and Byung-Gon Chun. Orca: A distributed serving system for transformer-based generative models. In 16th USENIX Symposium on Operating Systems Design and Implementation (OSDI 22), pages 521–538, Carlsbad, CA, July 2022. USENIX Association.

[57] Matei Zaharia, Dhruba Borthakur, Joydeep Sen Sarma, Khaled Elmeleegy, Scott Shenker, and Ion Stoica. Delay scheduling: A simple technique for achieving locality and fairness in cluster scheduling. In Proceedings of the 5th European Conference on Computer Systems, EuroSys '10, pages 265–278, New York, NY, USA, 2010. Association for Computing Machinery.

[58] Matei Zaharia, Andy Konwinski, Anthony D. Joseph, Randy H. Katz, and Ion Stoica. Improving MapReduce performance in heterogeneous environments. In 8th USENIX Symposium on Operating Systems Design and Implementation (OSDI 08), San Diego, CA, 2008.

[59] Hong Zhang, Yupeng Tang, Anurag Khandelwal, Jingrong Chen, and Ion Stoica. Caerus: NIMBLE task scheduling for serverless analytics. In 18th USENIX Symposium on Networked Systems Design and Implementation (NSDI 21), pages 653–669. USENIX Association, April 2021.

[60] Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, Todor Mihaylov, Myle Ott, Sam Shleifer, Kurt Shuster, Daniel Simig, Punit Singh Koura, Anjali Sridhar, Tianlu Wang, and Luke Zettlemoyer. OPT: Open pre-trained transformer language models, 2022.

[61] Hanyu Zhao, Zhenhua Han, Zhi Yang, Quanlu Zhang, Fan Yang, Lidong Zhou, Mao Yang, Francis C.M. Lau, Yuqi Wang, Yifan Xiong, and Bin Wang. HiveD: Sharing a GPU cluster for deep learning with guarantees. In 14th USENIX Symposium on Operating Systems Design and Implementation (OSDI 20), pages 515–532. USENIX Association, November 2020.

[62] Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric P. Xing, Hao Zhang, Joseph E. Gonzalez, and Ion Stoica. Judging LLM-as-a-judge with MT-Bench and Chatbot Arena, 2023.

[63] Lianmin Zheng, Liangsheng Yin, Zhiqiang Xie, Jeff Huang, Chuyue Sun, Cody Hao Yu, Shiyi Cao, Christos Kozyrakis, Ion Stoica, Joseph E. Gonzalez, Clark Barrett, and Ying Sheng. Efficiently programming large language models using SGLang, 2023.

[64] Yinmin Zhong, Shengyu Liu, Junda Chen, Jianbo Hu, Yibo Zhu, Xuanzhe Liu, Xin Jin, and Hao Zhang. DistServe: Disaggregating prefill and decoding for goodput-optimized large language model serving. arXiv preprint arXiv:2401.09670, 2024.
ai_researcher
1
Should_ChatGPT_help_with_my_research_A_caution_against_artificial_intelligence_in_qualitative_analysis.pdf
Can ChatGPT evaluate research quality?

Mike Thelwall: Information School, University of Sheffield, UK. https://orcid.org/0000-0001-6065-205X

Purpose: Assess whether ChatGPT 4.0 is accurate enough to perform research evaluations on journal articles to automate this time-consuming task.

Design/methodology/approach: Test the extent to which ChatGPT-4 can assess the quality of journal articles using a case study of the published scoring guidelines of the UK Research Excellence Framework (REF) 2021 to create a research evaluation ChatGPT. This was applied to 51 of my own articles and compared against my own quality judgements.

Findings: ChatGPT-4 can produce plausible document summaries and quality evaluation rationales that match the REF criteria. Its overall scores have weak correlations with my self-evaluation scores of the same documents (averaging r=0.281 over 15 iterations, with 8 being statistically significantly different from 0). In contrast, the average scores from the 15 iterations produced a statistically significant positive correlation of 0.509. Thus, averaging scores from multiple ChatGPT-4 rounds seems more effective than individual scores. The positive correlation may be due to ChatGPT being able to extract the author's significance, rigour, and originality claims from inside each paper. If my weakest articles are removed, then the correlation with average scores (r=0.200) falls below statistical significance, suggesting that ChatGPT struggles to make fine-grained evaluations.

Research limitations: The data consists of self-evaluations of a convenience sample of articles from one academic in one field.

Practical implications: Overall, ChatGPT does not yet seem to be accurate enough to be trusted for any formal or informal research quality evaluation tasks. Research evaluators, including journal editors, should therefore take steps to control its use.

Originality/value: This is the first published attempt at post-publication expert review accuracy testing for ChatGPT.

Keywords: ChatGPT, Large Language Models, LLM, Research Excellence Framework, REF 2021, research quality, research assessment

1 Introduction

Academic peer review of articles entails reading a complex document containing text and perhaps also tables and images, and then judging its value. For journal peer review, the results might be a publishing recommendation and a list of corrections. After publication, a similar evaluation by someone conducting a literature review might inform their decision about whether and how to use the information in the article in future research. A similar evaluation might also judge the overall quality of the article formally for a process like the UK's Research Excellence Framework (REF) national evaluation (www.ref.ac.uk) or the equivalents in Italy (www.anvur.it/en/activities/vqr) and New Zealand (www.tec.govt.nz/funding/funding-and-performance/funding/fund-finder/pbrf). A cut-down review evaluation might also be used for informal or less systematic evaluations, including for appointments, tenure, and promotions. The time-consuming nature of this task has led to the partial automation of some aspects by journals, such as plagiarism checking (Memon, 2020), reviewer selection and assignment (Zhao & Zhang, 2022) and statistics checking (Baker, 2016). In addition, there have been attempts to more fully automate some types of peer review evaluation, such as by replacing them with bibliometrics (Sivertsen, 2017) or artificial intelligence (Thelwall et al., 2023).
In addition, ChatGPT can provide useful advice to peer reviewers about individual paper evaluations (Liang et al., 2023). Despite these calls and applications, peer review remains a labour-intensive task that consumes the time of academic experts.

The emergence of Large Language Models (LLMs) like ChatGPT (Wu et al., 2023), which have shown new general-purpose text and image processing capabilities, has created a new possibility for research evaluation. LLMs work by processing enormous collections of documents and learning layers of patterns in them, to the extent that they are self-trained grammar experts and highly capable at linguistic tasks like translation, sentiment analysis and question answering (Kocoń et al., 2023). In addition, they can write short programs on demand (Feng et al., 2023) and might also be useful for eliciting information or giving support through chat-based dialog with patients (Cheng et al., 2023). In education and wider examination contexts, ChatGPT performs well at answering questions, including providing answers that could pass university exams and attain professional qualifications (Nazir & Wang, 2023). Overall, ChatGPT 3.5 and 4 seem to perform above the baseline but below the state-of-the-art algorithms for natural language processing tasks. They are least accurate for tasks involving understanding and for practical tasks (Kocoń et al., 2023). The main advantages of LLMs may lie in being part of task pipelines (e.g., Wei et al., 2023) and in their ready availability for a wide range of tasks (Kocoń et al., 2023).

In theory, an LLM might replace human peer reviewers by judging academic article quality, especially if given guidelines about how to perform the evaluation. Alternatively, an LLM might instead provide support to a human reviewer if the human took responsibility for the final report (Hosseini & Horbach, 2023). Nevertheless, since LLMs can produce misleadingly plausible incorrect (Nazir & Wang, 2023) or incomplete (Johnson et al., 2023) answers, careful accuracy testing is needed.

This article assesses the extent to which ChatGPT-4 can estimate the quality of academic journal articles using the REF 2021 quality criteria (REF, 2019ab). ChatGPT-4 was chosen as apparently the most capable LLM at the time of writing. The REF 2021 quality criteria are appropriate for this task because they are both public definitions of four quality scores and guidelines about what to consider as aspects of quality in four different broad areas of scholarship. This gives perhaps the clearest available criteria for evaluating research quality. Whilst this definition of research quality is not appropriate for Global South research, for pre-publication peer review, or for evaluations of field contributions, the results may provide a starting point for investigating LLMs for these other types. The following research questions drive the study.

• RQ1: Can ChatGPT 4.0 understand the REF research quality evaluation task in the sense of producing plausible outputs?
• RQ2: Does ChatGPT 4.0 allocate the full range of REF research quality scores?
• RQ3: Is ChatGPT 4.0 consistent in its REF quality scoring?
• RQ4: Is ChatGPT 4.0 accurate in its REF quality scoring?
• RQ5: Does averaging ChatGPT 4.0 scores improve its accuracy?
• RQ6: Can ChatGPT 4.0 scores distinguish between high-quality articles?

2 Background

This section describes ChatGPT and REF2021 to set the context for the methods and results.
2.1 LLMs and ChatGPT

A large language model (LLM) contains information about language that has been extracted from a huge collection of text and stored in abstracted form in a neural network. This information allows the model to accurately determine whether new text is likely or not. Whilst previous small-scale linguistic models could determine sentence likelihood based on grammar information and patterns (e.g., "The cat sat on." is unlikely because "[noun phrase] sat on" should be followed by something), large language models have ingested sufficient information to also make fact-based determinations about grammatically correct sentences (e.g., "The cat sat on the sea." is unlikely because a sea cannot be sat on). The abstraction is important because it allows the LLM to make determinations about text that it has not seen before. LLMs are currently built with the transformer neural network architecture, a type of deep learning. When built, an LLM is "pre-trained" in the sense of having learned from ingesting a huge amount of text.

A Generative Pretrained Transformer (GPT) goes one step further by generating likely text. It harnesses its LLM and, when fed some input text, predicts what the next text could plausibly be. For this, it uses random parameters, so the text generated is not always the same. Thus, if fed with "The cat sat on the", it could easily guess "mat" but, if asked many times, might occasionally produce different plausible answers, like "lap" and "sofa", but not "sea". Much more impressively, an LLM could also complete large sections of credible text, such as writing an entire thematically coherent poem with this starting phrase.

The accuracy or usefulness of the output of a GPT can be improved by systematic large-scale human evaluation of its responses. The GPT can learn from this human feedback to produce more consistently useful or correct results. Feedback can also help it learn to avoid controversial or illegal responses. At the time of writing, the latest GPT from OpenAI was GPT 4.0 (openai.com/gpt-4, openai.com/research/gpt-4). Whilst general technical details about GPT 4.0 are public (OpenAI, 2023), some details are retained as a commercial secret. Essentially, though, each version of OpenAI's GPT series seems to have more input data, a larger network to abstract it, and more human feedback to fine-tune it.

ChatGPT is a GPT from OpenAI that is optimised for a chat-like environment, where it delivers responses to a series of human inputs (called "prompts"). It is general purpose, so the goal of a chat could just as easily be to elicit fiction reading recommendations as to identify a timeline of flash drive maximum capacities. ChatGPT therefore gives a mediated public interface to the capabilities of the underlying GPT.
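To illustrate why repeated completions differ, the toy example below samples a next token from an invented score distribution for "The cat sat on the"; a real model would derive such scores from its network, and the numbers here are purely for illustration.

import math, random

def sample_next_token(scores: dict[str, float], temperature: float = 0.8) -> str:
    # Softmax with temperature: lower temperatures concentrate probability on "mat".
    weights = {tok: math.exp(s / temperature) for tok, s in scores.items()}
    r = random.uniform(0, sum(weights.values()))
    for token, w in weights.items():
        r -= w
        if r <= 0:
            return token
    return token  # floating-point fallback

next_token_scores = {"mat": 4.0, "lap": 2.5, "sofa": 2.0, "sea": -5.0}
print([sample_next_token(next_token_scores) for _ in range(5)])
# e.g. ['mat', 'mat', 'lap', 'mat', 'sofa']; "sea" is effectively never chosen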
2.2 Research quality and UK REF2021

The purpose of academic research is broadly to advance the world's body of knowledge and understanding. The tangible outputs are usually journal articles, conference papers, monographs, and book chapters, but they can also include more diverse entities like software, datasets, compositions, and performances. For many different purposes, editors, reviewers, funders, peers, and managers may attempt to assess the "quality" of documentary outputs. Although there are many different definitions of research quality that partly conform to stakeholder goals, methodological rigour, novelty/originality, and impact on science or society are usually included or explicitly stated as the three core components (Langfeldt et al., 2020). In line with this, the UK REF definition of research quality revolves around rigour, originality, and significance.

The Research Excellence Framework in the UK is a periodic national assessment of research processes, environments, and societal impacts in public universities and other government-funded research institutions. It succeeded the Research Assessment Exercise (RAE), with iterations including REF2014, REF2021, and REF2029 (projected). The results are primarily based on evaluating research outputs, with the scores used to direct the entire UK block grant for research until the next iteration. REF2021 is split into 34 Units of Assessment (UoAs), each of which corresponds to a large academic field (e.g., UoA 8 Chemistry) or a collection of related fields (e.g., UoA 24 Sport and Exercise Sciences, Leisure and Tourism). Institutions can choose how to split their work between these UoAs. Each UoA has a team of assessors who are field experts. Most are full professors, although there are also some from outside academia. Collectively, there were over 1000 assessors for REF2021, evaluating 2.5 outputs on average per full-time-equivalent academic at the source institution. The UoAs are grouped into four related sets: Main Panel A (UoAs 1-6; mainly health and life sciences), Main Panel B (UoAs 7-12; mainly physical sciences and engineering), Main Panel C (UoAs 13-24; mainly social sciences), and Main Panel D (UoAs 25-34; mainly arts and humanities).

Each REF2021 journal article or other output was given a quality score on the following scale: 4* (world-leading), 3* (internationally excellent), 2* (recognised internationally), or 1* (recognised nationally). Explanations of these levels are given by each Main Panel, and these are public (REF, 2019ab). A few outputs were also scored as 0 for being out of scope or low quality. Each output is primarily scored by two assessors from the relevant UoA, who agree on a score. In some UoAs (1-9, 11, 16) they may consult a narrow set of standardised bibliometrics provided by the REF team, but these have little influence (Wilsdon et al., 2015), so REF scores almost purely reflect expert judgement.

3 Methods

3.1 Article selection

I checked my recently published articles to obtain at least 50 open access articles of variable quality. Fifty was chosen as a large but practical round number in the absence of any expectation about correlations that could be used for statistical power calculations. I searched from the present day backwards for articles that were open access and for which I had retained copyright, so that there would be no ambiguity about whether I could legally upload them to ChatGPT. After this, I searched for articles from the same period (2019-2024) that I had written but not published. These were either not submitted to a journal because I thought them to be substandard, or were submitted but rejected and I considered them not worth resubmitting elsewhere. They were included to give lower quality articles so that the collection had the full range of REF quality ratings. The final total was 51 articles.

My research has always been submitted to REF UoA 34 Communication, Cultural and Media Studies, Library and Information Management. This Main Panel D area contains a mix of social science and humanities approaches, but journal articles are still important for it. Thus, I consider the 51 articles to be within the remit of UoA 34. They include articles about scientometrics, gender, research evaluation, and social media analysis.
All contain primary research (rather than reviews) and would therefore be eligible for the REF.

3.2 Article scoring by my judgements

Before entering any of the articles into ChatGPT or any other LLM, I assigned each one a quality score using the REF2021 quality criteria for Main Panel D. I am very familiar with these criteria, not just as a UK academic and leader of a UoA 34 submission to REF2021, but also from spending six months developing traditional AI solutions for estimating REF2021 scores and evaluating REF2021 score data (Thelwall et al., 2023a). I am also familiar with my own work, so I consider myself to be in a good position to estimate its quality. Nevertheless, like most academics, I probably tend to overestimate the quality of my own work. I therefore tried to be conservative in my quality judgements and to allocate the articles scores that I considered REF assessors might give them. In cases where I valued an article highly but it had been rejected from at least one journal, I used this information to lower the score given. Thus, the final scores reflect my own judgements, occasionally tempered by negative opinions from reviewers. None of my scores were changed after seeing any ChatGPT results.

Using my own judgements as the core evidence in my own article is clearly not ideal from a rigour perspective. Nevertheless, this strategy seems preferable to using others' judgements because it takes a substantial amount of time and expertise to read and evaluate academic work, and asking others to do this task would risk obtaining surface-level judgements.

3.3 ChatGPT 4 REF D configuration and scores

ChatGPT 4 was subscribed to for this project (chat.openai.com). It allows custom chatbots to be created for specific tasks. The approach used was zero-shot learning, in the sense that no answers or feedback were provided to ChatGPT. All of ChatGPT's requests for feedback on its answers, even if purely stylistic, were ignored to avoid any human input that could potentially bias it.

A custom chatbot, ChatGPT 4 REF D, was created in December 2023 with the official REF Main Panel D criteria, as used for UoA 34. The online public information for REF assessors and submitting researchers was entered almost verbatim as the setup instructions. The overall definitions of the quality levels (1*, 2*, 3*, and 4*) and statements about the three dimensions to be assessed (rigour, originality, and significance) were merged with the panel-specific criteria. Small changes were made to the original REF text to align with the task (e.g., changing pronouns, deleting repeated text). The "unclassified" category was also removed because this was very rare in the REF and removing it would simplify the already complex instructions. This information was entered twice: it was entered in response to the setup instructions for a new GPT, but much of the information entered was lost during this process, so the missing information was subsequently added to the ChatGPT configuration section to ensure that all information was available for the evaluations. The final version of the configuration instructions is in the Appendix. This is the only ChatGPT used in the current article, so the name is sometimes abbreviated to ChatGPT below for convenience.

For each article, ChatGPT 4 REF D was started, the PDF was uploaded, and it was given the instruction, "score this".
Initially, all articles were scored by the same chatbot instance, on the basis that (a) each was uploaded separately and should therefore be allocated a separate score, and (b) processing multiple articles from a collection might help the chatbot to calibrate its scoring level by comparing new articles with scores from its memory. This did not work because the scores were very stable, with long sequences of the same number, strongly suggesting that ChatGPT was giving a quality score to the current article partly based on all articles previously uploaded in the same session. In response to this, a new chatbot was started for each article, and the issue of unchanging scores for consecutive articles stopped.

These queries were submitted manually in January-February 2024. The results were inspected in each case to check that the PDF had been scanned correctly, to look for obvious errors, and to confirm the presence of a score. Identity checking was possible in almost all cases because the ChatGPT output usually started with a sentence containing the article title, which it could only have extracted from the PDF. No problems with implausible results (e.g., answering a different question) were found. The overall score was extracted from the ChatGPT output text. In some cases, ChatGPT 4.0 REF D did not give an immediate clear score, but a score was always obtained through follow-up prompts. For example, the follow-up prompt "Give an exact REF score" within the same chat session gave an answer for the last two cases below. Thus, although an additional prompt was occasionally needed, ChatGPT 4.0 was always able to produce a plausible response to the request for a research quality score. The following problems occasionally occurred.

• Reporting an error processing the uploaded PDF (always solved by re-uploading the PDF, sometimes after a break if several attempts did not work).
• Reporting an error whilst it was writing the report. In such cases, the report was retained if it contained the score but regenerated if not.
• Reporting that it could not decide between 3* and 4* (e.g., "The decision between 3* and 4* would depend on further details about the broader impact and recognition of the work within the international community, as well as additional evidence of its influence on policy, practice, or subsequent research." ChatGPT 4.0). In these cases, the average was recorded.
• Scoring originality, rigour, and significance separately without an overall score. In this case, the mean of these three scores was recorded.
• Evaluating originality, rigour, and significance but not reporting a numerical score.
• Summarising the contents of the PDF without clearly evaluating originality, rigour, and significance separately or giving a numerical score. It is possible that ChatGPT 4.0 triggered a stopping condition before producing a score in such cases, since the other outputs tended to start with an article summary.

After scores had been obtained for all articles, the process was repeated fourteen times to obtain additional batches of scores and hence average scores for each of the 51 articles. A total of 15 repetitions was judged sufficient to obtain a reasonably reliable average estimate.
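Although the queries in this study were submitted manually, the score-handling rules above amount to a simple extraction-and-averaging procedure. The sketch below illustrates that bookkeeping with an invented regular expression and report snippets; it is not the code used for the study.

import re
from statistics import mean

STAR = re.compile(r"(\d(?:\.\d+)?)\s*\*")

def extract_score(report: str) -> float | None:
    """Return the last star score mentioned, since scores usually close the report."""
    matches = STAR.findall(report)
    return float(matches[-1]) if matches else None   # None -> needs a follow-up prompt

reports = [
    "...rigour is strong... assessed to be of 3* quality.",
    "...the decision is between 3* and 4*, so 3.5* overall.",
    "...internationally excellent: 3*.",
]
scores = [s for s in (extract_score(r) for r in reports) if s is not None]
print(mean(scores))   # the per-article average over rounds, here about 3.17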
3.4 Analyses

For RQ1 (Can ChatGPT 4.0 understand the REF research quality evaluation task in the sense of producing plausible outputs?), I read and qualitatively examined the ChatGPT outputs to judge whether they delivered an appropriate response to the task.

For RQ2 (Does ChatGPT 4.0 allocate the full range of REF research quality scores?), the scores were summarised. Averages were also reported for additional detail.

For RQ3 (Is ChatGPT 4.0 consistent in its REF quality scoring?), the scores from each of the 15 rounds of scoring were correlated against each other and the average correlation was calculated. Even though the data consists of ranks, Pearson correlations were used because some of the scores were fractional and there are no extreme values. Whilst it is also reasonable to argue that REF scores are not equidistant, in the sense that the quality difference between, for example, 1* and 2* might not be the same as the quality difference between 3* and 4*, it seems more appropriate to make this assumption than to treat fractional ranks as full ranks. The Pearson correlation assesses the extent to which two sets of scores form a linear relationship, but not the extent to which they agree. For example, a perfect correlation of 1 would occur if the REF scored all articles as 3* or 4* and ChatGPT scored all REF 3* articles as 1* and all REF 4* articles as 2*. Thus, the correlation assesses the extent to which the two processes recognise the same quality differences between articles, but not the extent to which they agree on the precise score.

For RQ4 (Is ChatGPT 4.0 accurate in its REF quality scoring?), the degree to which my scores agree with the ChatGPT scores was tested in two ways. First, the two were correlated using the Pearson correlation coefficient. Again, although REF scores are ranks, the Pearson correlation is more appropriate because some of ChatGPT's outputs are fractional. Second, the Mean Absolute Difference (MAD) was calculated between my scores and the ChatGPT scores to assess the extent to which they differ. A MAD of 0 means that they agree 100%, whereas a MAD of 1 would occur if ChatGPT differed from my score by 1 star on average.

For RQ5 (Does averaging ChatGPT 4.0 scores improve its accuracy?), the correlation was calculated for the average of the 15 ChatGPT scores in the same way as for the original scores.

For RQ6 (Can ChatGPT 4.0 scores distinguish between high-quality articles?), the evaluations were repeated for the 34 articles that I had scored as 2.5* or higher and for the 24 articles that I had scored as 3* or higher. In the REF, funding is only allocated for outputs scoring 3* or 4*, so a threshold of 2.5* is reasonable on the basis that such articles have a chance of getting a 3* score. Similarly, a threshold of 3* is also reasonable since such articles are predicted to get funding.

4 Results

4.1 RQ1: Can ChatGPT 4.0 understand the REF research quality evaluation task in the sense of producing plausible outputs?

ChatGPT 4.0 almost always appeared to understand the task, in the sense of producing a report that critically evaluated the originality, rigour, and significance of the uploaded article and gave either a valid score or a score range. The only strange results that it produced were (a) that in a few cases it illustrated a star score on a five-point scale (e.g., 3* was illustrated as: ★★★☆☆) despite there being no 5* score, and (b) that it sometimes reported fractional scores, such as 3.5*, which is reasonable but does not conform to the REF guidelines. The following illustrates the ChatGPT 4.0 REF D output evaluating one document with the prompt, "score this".
[A two-paragraph summary of the article]

Based on the REF criteria for assessing research outputs, the article would be evaluated under the following categories:

1. **Originality**: The study demonstrates originality through its comprehensive, large-scale analysis of academic collaboration patterns across multiple countries and disciplines. It engages with the complex problem of understanding the dynamics of academic collaboration and its impact on citation rates, a topic of significant relevance in the context of global research practices.

2. **Significance**: The research has a significant potential to influence understanding in multiple fields, particularly in research policy and scholarly communication. It provides valuable insights that can inform national and international strategies for research collaboration and funding. The study's findings challenge the uniformity of collaboration practices across different regions and disciplines, highlighting the need for context-specific approaches in research management and policy-making.

3. **Rigour**: The study exhibits intellectual coherence and integrity. It adopts a robust and appropriate methodology, utilizing a large dataset and a well-justified metric for citation impact analysis. The research design is methodologically sound, and the authors provide a thorough analysis of their findings, acknowledging the limitations of their study.

Based on these considerations, the article would be rated as follows:

- **4* (World-leading)**: While the study is methodologically sound and offers new insights into the patterns of academic collaboration and citation impact, it may not reach the highest standards of world-leading work in terms of providing transformative theoretical advances or developing entirely new methods or paradigms. It offers substantial contributions to the field but may fall slightly short of a pioneering breakthrough that would characterize a definitive 4* rating.

- **3* (Internationally excellent)**: The article most likely fits within this category. It is of high quality, contributes important knowledge, and is likely to be influential at an international level. The study is well-designed, applies a robust methodology, and addresses a significant issue in academic research. It is likely to be an important point of reference for future research in this area.

In conclusion, the article [title redacted] is assessed to be of 3* quality, indicating that it is internationally excellent in terms of originality, significance, and rigour.

Output from ChatGPT 4.0 REF D

The output formats varied considerably, with the only nearly common factor being that the first one to three paragraphs usually summarised the contents of the article, with the remainder either describing or evaluating its rigour, significance, and originality. Scores were usually given in the final paragraph. Despite each article being uploaded to ChatGPT 15 times, the reports were always different. They varied moderately in overall structure and content, but the exact phrasing of expressions was always novel.

4.2 RQ2: Does ChatGPT 4.0 allocate the full range of REF research quality scores?

ChatGPT 4.0 REF D only ever allocated scores between 2* and 4*, never using the lowest score of 1*. Over two thirds of the time it allocated a score of 3*, with lower scores only being given 2.5% of the time. My average score for these articles was 2.75* and the ChatGPT 4.0 REF D average score was only slightly higher at 3*. Thus, ChatGPT 4.0 REF D seems to be slightly biased towards higher scores, at least compared to my self-evaluations, and it is substantially biased towards allocating a 3* score, irrespective of the merits of an article.

Table 1. The scores given by ChatGPT-4 REF D and me to 51 of my open access articles.

Score    GPT    GPT %    Me    Me %
1*         0     0.0%     2      4%
1.5*       0     0.0%     3      6%
2*        14     1.8%    12     24%
2.33*      1     0.1%     0      0%
2.5*       2     0.3%     9     18%
2.67*      2     0.3%     0      0%
2.75*      0     0.0%     1      2%
3*       509    66.5%     8     16%
3.33*      9     1.2%     0      0%
3.5*      14     1.8%     7     14%
3.67*     15     2.0%     0      0%
4*       199    26.0%     9     18%
Total    765   100.0%    51    100%
Thus, ChatGPT 4.0 REF D seems to be slightly biased towards higher scores, at least compared to my self-evaluations, and it is substantially biased towards allocating a 3* score, irrespective of the merits of an article. Table 1. The scores given by ChatGPT-4 REF D and me to 51 of my open access articles. Score GPT % 1* 1.5* 2* 2.33* 2.5* 0 0 14 1 2 0.0% 0.0% 1.8% 0.1% 0.3% Me % 2 3 12 0 9 4% 6% 24% 0% 18% 2.67* 2.75* 2 0 3* 509 9 14 15 4* 199 0.3% 0.0% 66.5% 1.2% 1.8% 2.0% 26.0% 765 100.0% 3.33* 3.5* 3.67* Total 9 0 1 8 0 7 0 9 0% 2% 16% 0% 14% 0% 18% 51 100% 4.3 RQ3/4/5/6: Is ChatGPT 4.0 REF D consistent and accurate in its REF quality scoring? In terms of accuracy, the ChatGPT 4.0 REF D quality scores were out by 0.802 (mean average deviation), on average. When the ChatGPT 4.0 REF D quality scores are averaged across all 15 attempts, then the average deviation (MAD) is the same at 0.802. Thus, ChatGPT is inaccurate. Nevertheless, a high correlation is more important than accuracy because it would indicate that the ChatGPT scores could be useful, if appropriately scaled. For the complete set of 51 articles, the correlation between my scores and the average ChatGPT-4 REF D scores (0.509) was positive and statistically significantly different from 0 (Table 2). This supports the hypothesis that ChatGPT has some capability to detect REF research quality. Nevertheless, the correlation is only moderate, with ChatGPT being able to account for only 25% (=0.5092) of the variance in my scores. Moreover, the correlation is lower and not statistically significant for both sets of higher quality articles. Thus, whilst ChatGPT has some power for mixed quality sets of articles, its power is probably weaker for more uniformly high-quality sets of articles. Table 2. Pearson correlations for 51 of my open access articles, comparing my initial scores, and scores from ChatGPT-4 REF D. Correlation All articles scored GPT average vs. author (95% CI) GPT vs. author, average of 15 pairs (fraction of 95% Cis excluding 0) GPT vs. GPT (average of 105 pairs) Sample size (articles) 0.509 (0.271,0.688) 0.281 (8/15) 0.245 51 Articles 2.5+ by me 0.200 (-0.148,0.504) 0.102 (1/15) 0.194 34 Articles scored 3+ by me 0.246 (-0.175,0.590) 0.128 (1/15) 0.215 24 Despite the moderate correlation for all articles between my scores and the ChatGPT average, some low-quality articles had high ChatGPT averages and vice versa (Figure 1). The graph is consistent with ChatGPT being better able to detect between 1*-2* articles and 2.5*-4* articles than within other ranges of scores. 10 Figure 1. The average REF star rating given by the REF D GPT against the author’s prior evaluation of the REF score of 51 of his open access articles. ChatGPT gave at least two different scores to 50 out of the 51 articles, with the remining article being scored as 3* all 15 times (article 11 in Figure 2). Five of the 51 articles were given all three of the main scores (2*, 3* and 4*) in different rounds by ChatGPT illustrating that it is scoring inconsistently. The inconsistency of the scores between rounds is also evident in the correlations between different rounds of ChatGPT being about the same as the correlation between individual rounds of ChatGPT and my scores, and much lower than the correlations between average ChatGPT scores and my scores, at least for the full dataset of 51 articles (Table 2). This suggests that the averaging strategy is better than using individual ChatGPT rounds. 11 Figure 2. 
Figure 2. The range of REF star ratings given by the REF D GPT against the author's prior evaluation of the REF score of 51 of his open access articles. The area of each bubble is proportional to the number of times the y-axis score was given by ChatGPT to the x-axis article. My REF scores are marked on the x axis.

5 Discussion

5.1 Limitations and alternatives
This study has major limitations. The articles evaluated are from a single author and disciplinary area, and most of the results are based on my self-evaluations of the quality of these articles. It is possible that a greater rate of agreement could have been obtained if the scores had been given by REF judges instead. Moreover, higher correlations might also have been obtained from different configurations of ChatGPT, or other LLMs, than the ChatGPT 4 REF D configuration used here. Because there were too few articles to create separate development and evaluation subsets of articles, it was not practical to experiment with different configurations or prompt chains to find one that gave higher correlations. Nevertheless, ChatGPT seemed to follow the REF rules well, giving no indication that it was doing anything inappropriate or sub-optimal. Another limitation is that LLMs are evolving rapidly, and more accurate results may be obtained in the future from upgraded systems. More generally, the REF quality definition is not the only one and ChatGPT may work better on other versions. Finally, as mentioned above, standards varied between UoAs within a Main Panel (Thelwall et al., 2023a), and this was not considered by the instructions.

5.2 Comparison with prior research
There is no directly comparable study, with the partial exception of a traditional machine learning approach that used journal, citation, and authorship data to estimate REF scores. If only the above-2* set here is considered, then the results of the current paper would be comparable with the prior results for UoA 34 (Thelwall et al., 2023a). The discussion of the contents of the reports also agrees with prior research that ChatGPT can provide useful advice to peer reviewers about individual paper evaluations (Liang et al., 2023). There is agreement in the sense that the ChatGPT output here gave generally correct and meaningful information about the articles' rigour, originality, and significance. More generally, the results of this study also confirm prior observations that ChatGPT can generate output that is plausible but inaccurate (Nazir & Wang, 2023).

5.3 Potential applications
The current article used a "zero-shot" approach by not feeding ChatGPT with any "correct" scores to learn from. Although articles are unique and diverse, ChatGPT's performance might be improved with reference to example scores. A previous machine learning study that used citation data and metadata as inputs (but not full text) was able to make predictions that had high correlations with REF scores in some UoAs (mainly health, life, and physical sciences) (Thelwall et al., 2023a), so ChatGPT does not seem like a realistic alternative to this traditional approach in these areas. It is not clear whether ChatGPT could augment the traditional machine learning approach, for example by providing score predictions for articles where the machine learning approach reports low confidence in its score, or for UoAs where the traditional approach does not work at all.
ChatGPT might also be useful for curating inputs to a machine learning model, by extracting useful information such as the number of figures and tables, although other software could also do this.

5.4 Potential threats
ChatGPT's ability to produce plausible, complex written quality evaluations of academic research despite little capacity to detect quality (at least as found in the current experiments) is a threat to peer review. This is because reviewers might try to save time by uploading documents to ChatGPT (probably in breach of copyright) and trust the output because of its plausibility. Thus, LLM use should be explicitly banned or controlled by journals, funders, universities, and other organisations that evaluate research (see also: Garcia, 2024). Explicit rules are already common for journals (Perkins & Roe, 2024), but the current study emphasises their importance and the potential for ChatGPT output to be plausible but misleading. The results also support a previous call for journals to actively detect whether reviewers have used generative AI for their evaluation (Mollaki, 2024).

5.5 Reason for positive correlations
I read and compared the ChatGPT reports to try to detect how ChatGPT evaluated rigour, significance, and originality, with the goal of understanding why it had some ability to detect an article's quality. In all cases it seemed to primarily extract originality, significance and rigour claims from inside the article rather than applying externally obtained information to make judgements. The results are therefore consistent with ChatGPT having the ability to translate an author's information about strengths and weaknesses into a quality judgement. It sometimes brought in wider information to make a claim about the potential significance or reach of an article, suggesting that it might be applying some ability to generalise.

To test this, I created a fake article and uploaded it to ChatGPT 4.0 REF D for a score. The article was titled, "Do squirrel surgeons generate more citation impact?" and it was based on a short article that had been rejected from a journal and that I did not resubmit elsewhere (not one of the 51 evaluated), but that I would have scored as 1.5*. I changed two words throughout the article to make it a comparison between humans and squirrels for surgery research, to test whether ChatGPT could detect that this research would have no significance (or that its data was fake). It allocated it a 4* score, however, justifying it with, "The study stands out for its innovative approach, potential influence on scholarly thought and policy, and rigorous methodology." The report also made clearly false claims that it had uncritically derived from the paper: "By highlighting species-based differences in citation impact, the research could contribute to broader discussions on diversity and representation in academia." I asked ChatGPT 4.0 separately, "can squirrels write academic research journal articles?" and it gave a definitive reply, "No, squirrels cannot write academic research journal articles. Squirrels are animals without the cognitive capabilities necessary for complex tasks like academic writing.[]". Thus, it had ingested the information necessary to draw an appropriate conclusion but had not applied it to the fake article. Whilst this is a single case, and fake research rather than poor quality research, it partly undermines my initial hypothesis that ChatGPT could harness its wider information to estimate the significance of an article.
It seems that ChatGPT can't reliably do this.

6 Conclusion
The results suggest that ChatGPT 4.0 can write plausible REF reviews of journal articles and has a weak capacity to estimate REF scores, but that this is probably due to an ability to differentiate between research that is and isn't high quality (above 2* in REF terms). The most accurate way to use ChatGPT for quality scores seems to be to apply it multiple times and then use the average score. Norm referencing and scaling will also be needed because it may have a strong tendency to assign a default score (e.g., 3*) to most articles. Its evaluative reports are primarily derived from the article itself in terms of information about significance, rigour, and originality. It is not clear why it can score articles with some degree of accuracy, but it might typically deduce scores from author claims inside an article rather than by primarily applying external information.

In terms of practical advice, it would be unethical and may breach copyright for a reviewer to use a public LLM like ChatGPT to help review a document that was not already in the public domain (Buriak et al., 2023; Flanagin et al., 2023). Moreover, even published documents that are not open access may be legally problematic to upload, so it seems that ChatGPT should be avoided for all research evaluation purposes until the copyright situation is clarified, or explicit permission is obtained from the copyright holder first and an effective prompt engineering strategy is developed and validated. When LLM use is ethical and does not breach copyright, the most important immediate conclusion is that ChatGPT's output can be misleading, and it should be avoided by researchers, editors, reviewers, literature review authors, and evaluators attempting to make quality judgements of articles unless an improved prompt engineering strategy can be developed or the existing strategy becomes more effective on newer LLMs.

7 References
Baker, M. (2016). Stat-checking software stirs up psychology. Nature, 540(7631), 151-152.
Buriak, J. M., Hersam, M. C., & Kamat, P. V. (2023). Can ChatGPT and other AI bots serve as peer reviewers? ACS Energy Letters, 9, 191-192.
Cheng, S. W., Chang, C. W., Chang, W. J., Wang, H. W., Liang, C. S., Kishimoto, T., & Su, K. P. (2023). The now and future of ChatGPT and GPT in psychiatry. Psychiatry and Clinical Neurosciences, 77(11), 592-596.
Feng, Y., Vanam, S., Cherukupally, M., Zheng, W., Qiu, M., & Chen, H. (2023). Investigating code generation performance of Chat-GPT with crowdsourcing social data. In Proceedings of the 47th IEEE Computer Software and Applications Conference (pp. 1-10).
Flanagin, A., Kendall-Taylor, J., & Bibbins-Domingo, K. (2023). Guidance for authors, peer reviewers, and editors on use of AI, language models, and chatbots. JAMA. https://doi.org/10.1001/jama.2023.12500
Garcia, M. B. (2024). Using AI tools in writing peer review reports: should academic journals embrace the use of ChatGPT? Annals of Biomedical Engineering, 52, 139-140.
Gov.uk (2023). Guidance: Exceptions to copyright. https://www.gov.uk/guidance/exceptions-to-copyright
Hosseini, M., & Horbach, S. P. (2023). Fighting reviewer fatigue or amplifying bias? Considerations and recommendations for use of ChatGPT and other Large Language Models in scholarly peer review. Research Integrity and Peer Review, 8(1), 4. https://doi.org/10.1186/s41073-023-00133-5
Huang, J., & Tan, M. (2023). The role of ChatGPT in scientific communication: writing better scientific review articles. American Journal of Cancer Research, 13(4), 1148.
Johnson, D., Goodman, R., Patrinely, J., Stone, C., Zimmerman, E., Donald, R., & Wheless, L. (2023). Assessing the accuracy and reliability of AI-generated medical responses: an evaluation of the Chat-GPT model. PubMed. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10002821/
Kocoń, J., Cichecki, I., Kaszyca, O., Kochanek, M., Szydło, D., Baran, J., & Kazienko, P. (2023). ChatGPT: Jack of all trades, master of none. Information Fusion, 101861.
Langfeldt, L., Nedeva, M., Sörlin, S., & Thomas, D. A. (2020). Co-existing notions of research quality: A framework to study context-specific understandings of good research. Minerva, 58(1), 115-137.
Liang, W., Zhang, Y., Cao, H., Wang, B., Ding, D., Yang, X., & Zou, J. (2023). Can large language models provide useful feedback on research papers? A large-scale empirical analysis. arXiv preprint arXiv:2310.01783.
Memon, A. R. (2020). Similarity and plagiarism in scholarly journal submissions: bringing clarity to the concept for authors, reviewers and editors. Journal of Korean Medical Science, 35(27). https://synapse.koreamed.org/articles/1146064
Mollaki, V. (2024). Death of a reviewer or death of peer review integrity? The challenges of using AI tools in peer reviewing and the need to go beyond publishing policies. Research Ethics, 17470161231224552.
Nazir, A., & Wang, Z. (2023). A comprehensive survey of ChatGPT: Advancements, applications, prospects, and challenges. Meta-Radiology, 100022.
OpenAI (2023). GPT-4 technical report. https://arxiv.org/abs/2303.08774
Perkins, M., & Roe, J. (2024). Academic publisher guidelines on AI usage: A ChatGPT supported thematic analysis. F1000Research, 12, 1398.
REF (2019a). Guidance on submissions (2019/01). https://archive.ref.ac.uk/publications-and-reports/guidance-on-submissions-201901/
REF (2019b). Panel criteria and working methods (2019/02). https://archive.ref.ac.uk/publications-and-reports/panel-criteria-and-working-methods-201902/
Sivertsen, G. (2017). Unique, but still best practice? The Research Excellence Framework (REF) from an international perspective. Palgrave Communications, 3(1), 1-6.
Thelwall, M., Kousha, K., Wilson, P., Makita, M., Abdoli, M., Stuart, E., Levitt, J. & Cancellieri, M. (2023a). Predicting article quality scores with machine learning: The UK Research Excellence Framework. Quantitative Science Studies, 4(2), 547-573.
Thelwall, M., Kousha, K., Stuart, E., Makita, M., Abdoli, M., Wilson, P. & Levitt, J. (2023b). Does the perceived quality of interdisciplinary research vary between fields? Journal of Documentation, 79(6), 1514-1531. https://doi.org/10.1108/JD-01-2023-0012
Wei, X., Cui, X., Cheng, N., Wang, X., Zhang, X., Huang, S., & Han, W. (2023). Zero-shot information extraction via chatting with ChatGPT. arXiv preprint arXiv:2302.10205.
Wilsdon, J., Allen, L., Belfiore, E., Campbell, P., Curry, S., Hill, S., et al. (2015). The metric tide: Report of the independent review of the role of metrics in research assessment and management. https://www.ukri.org/publications/review-of-metrics-in-research-assessment-and-management/
Wu, T., He, S., Liu, J., Sun, S., Liu, K., Han, Q. L., & Tang, Y. (2023). A brief overview of ChatGPT: The history, status quo and potential future development. IEEE/CAA Journal of Automatica Sinica, 10(5), 1122-1136.
Zhao, X., & Zhang, Y. (2022). Reviewer assignment algorithms for peer review automation: A survey. Information Processing & Management, 59(5), 103028.
8 Appendix: ChatGPT configuration
The configuration reported below largely quotes and uses small paraphrases of text from REF documentation (REF, 2019ab). Breaking academic conventions about plagiarism, these are not in quotes because the quotes might confuse ChatGPT.

8.1 ChatGPT-4 REF D configuration instructions
REF Assessor for Main Panel D employs an academic tone, prioritizing precision, formality, and clarity in its analyses. It avoids casual language, overly simplistic explanations, and subjective judgments not grounded in REF criteria. It focuses on providing objective, evidence-based assessments, maintaining the integrity and seriousness expected in academic evaluations. The GPT's interactions are guided by the principles of scholarly communication, ensuring that every assessment aligns with academic standards of originality, significance, and rigour.

Originality will be understood as the extent to which the output makes an important and innovative contribution to understanding and knowledge in the field. Research outputs that demonstrate originality may do one or more of the following: produce and interpret new empirical findings or new material; engage with new and/or complex problems; develop innovative research methods, methodologies and analytical techniques; show imaginative and creative scope; provide new arguments and/or new forms of expression, formal innovations, interpretations and/or insights; collect and engage with novel types of data; and/or advance theory or the analysis of doctrine, policy or practice, and new forms of expression.

Significance will be understood as the extent to which the work has influenced, or has the capacity to influence, knowledge and scholarly thought, or the development and understanding of policy and/or practice.

Rigour will be understood as the extent to which the work demonstrates intellectual coherence and integrity, and adopts robust and appropriate concepts, analyses, sources, theories and/or methodologies.

The scoring system used is 1*, 2*, 3* or 4*, which are defined as follows.
4*: Quality that is world-leading in terms of originality, significance and rigour.
3*: Quality that is internationally excellent in terms of originality, significance and rigour but which falls short of the highest standards of excellence.
2*: Quality that is recognised internationally in terms of originality, significance and rigour.
1*: Quality that is recognised nationally in terms of originality, significance and rigour.

The terms 'world-leading', 'international' and 'national' will be taken as quality benchmarks within the generic definitions of the quality levels. They will relate to the actual, likely or deserved influence of the work, whether in the UK, a particular country or region outside the UK, or on international audiences more broadly. There will be no assumption of any necessary international exposure in terms of publication or reception, or any necessary research content in terms of topic or approach. Nor will there be an assumption that work published in a language other than English or Welsh is necessarily of a quality that is or is not internationally benchmarked.
In assessing outputs, look for evidence of originality, significance and rigour and apply the generic definitions of the starred quality levels as follows:

In assessing work as being 4* (quality that is world-leading in terms of originality, significance and rigour), expect to see evidence of, or potential for, some of the following types of characteristics across and possibly beyond its area/field:
• a primary or essential point of reference
• of profound influence
• instrumental in developing new thinking, practices, paradigms, policies or audiences
• a major expansion of the range and the depth of research and its application
• outstandingly novel, innovative and/or creative.

In assessing work as being 3* (quality that is internationally excellent in terms of originality, significance and rigour but which falls short of the highest standards of excellence), expect to see evidence of, or potential for, some of the following types of characteristics across and possibly beyond its area/field:
• an important point of reference
• of considerable influence
• a catalyst for, or important contribution to, new thinking, practices, paradigms, policies or audiences
• a significant expansion of the range and the depth of research and its application
• significantly novel or innovative or creative.

In assessing work as being 2* (quality that is recognised internationally in terms of originality, significance and rigour), expect to see evidence of, or potential for, some of the following types of characteristics across and possibly beyond its area/field:
• a recognised point of reference
• of some influence
• an incremental and cumulative advance on thinking, practices, paradigms, policies or audiences
• a useful contribution to the range or depth of research and its application.

In assessing work as being 1* (quality that is recognised nationally in terms of originality, significance and rigour), expect to see evidence of the following characteristics within its area/field:
• an identifiable contribution to understanding without advancing existing paradigms of enquiry or practice
• of minor influence.
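For readers who want to replicate this setup outside the custom-GPT web interface, a minimal sketch using the OpenAI Python client (v1.x) is shown below. The model name, prompt wording, and repetition count are assumptions for illustration; the paper itself used a custom GPT in ChatGPT rather than the API:

```python
# Minimal sketch, assuming the OpenAI Python client (v1.x) and an API key in the
# environment. The full Section 8.1 configuration text would be pasted verbatim
# into REF_D_INSTRUCTIONS; it is abbreviated here.
from openai import OpenAI

REF_D_INSTRUCTIONS = """REF Assessor for Main Panel D employs an academic tone...
(remainder of the Section 8.1 configuration text)"""

client = OpenAI()

def ref_report(article_text: str, model: str = "gpt-4") -> str:
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": REF_D_INSTRUCTIONS},
            {"role": "user", "content": "Assess the following output against the REF "
                                        "criteria and give a star score:\n\n" + article_text},
        ],
    )
    return response.choices[0].message.content

# The averaging strategy recommended in the conclusion: score repeatedly, then
# extract and aggregate the star ratings from the 15 reports.
reports = [ref_report(open("article.txt").read()) for _ in range(15)]
```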
ai_researcher
1
Explain_ability_and_interpretability_in_machine_learning_models.pdf
Assessing the Local Interpretability of Machine Learning Models
Dylan Slack,1 Sorelle A. Friedler,1 Carlos Scheidegger,2 Chitradeep Dutta Roy3
1Haverford College 2University of Arizona 3University of Utah
arXiv:1902.03501v2 [cs.LG] 2 Aug 2019

Abstract
The increasing adoption of machine learning tools has led to calls for accountability via model interpretability. But what does it mean for a machine learning model to be interpretable by humans, and how can this be assessed? We focus on two definitions of interpretability that have been introduced in the machine learning literature: simulatability (a user's ability to run a model on a given input) and "what if" local explainability (a user's ability to correctly determine a model's prediction under local changes to the input, given knowledge of the model's original prediction). Through a user study with 1000 participants, we test whether humans perform well on tasks that mimic the definitions of simulatability and "what if" local explainability on models that are typically considered locally interpretable. To track the relative interpretability of models, we employ a simple metric, the runtime operation count on the simulatability task. We find evidence that as the number of operations increases, participant accuracy on the local interpretability tasks decreases. In addition, this evidence is consistent with the common intuition that decision trees and logistic regression models are interpretable and are more interpretable than neural networks.

Introduction
Recently, there has been growing interest in interpreting machine learning models. The goal of interpretable machine learning is to allow oversight and understanding of machine-learned decisions. Much of the work in interpretable machine learning has come in the form of devising methods to better explain the predictions of machine learning models. However, such work usually leaves a noticeable gap in understanding interpretability (Lipton 2018; Doshi-Velez and Kim 2017). The field currently stands on shaky foundations: papers mean different things when they use the word "interpretability", and interpretability claims are typically not validated by measuring human performance on a controlled task. However, there is growing recognition in the merit of such human validated assessments (Lage et al. 2018b; Lage et al. 2018a; Lakkaraju, Bach, and Leskovec 2016). In line with this goal, we seek concrete, falsifiable notions of interpretability.

Copyright © 2019, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

"Interpretability" can be broadly divided into global interpretability, meaning understanding the entirety of a trained model including all decision paths, and local interpretability, the goal of understanding the results of a trained model on a specific input and small deviations from that input. We focus on local interpretability, and on two specific definitions. We assess simulatability (Lipton 2018), the ability of a person to, independently of a computer, run a model and get the correct output for a given input, and "what if" local explainability (Ribeiro, Singh, and Guestrin 2016; Lipton 2018): the ability of a person to correctly determine how small changes to a given input affect the model output. We will refer to a model as locally interpretable if users are able to correctly perform both of these tasks when given a model and input.
The experiments we present here are necessarily artificial and limited in scope. We see these as lower bounds on the local interpretability of a model; if people cannot perform these interpretability tasks, these models should not be deemed locally interpretable.

In addition to considering the successful completion of these tasks a lower bound on the local interpretability of a model, we might reasonably ask whether these are valuable interpretability tasks at all. Though purposefully limited in scope, we argue that these tasks are still valuable in real-world settings. Consider a defense attorney faced with a client's resulting score generated by a machine learned risk assessment. In order to properly defend their client, the attorney may want to verify that the risk score was correctly calculated (simulatability) and argue about the extent to which small changes in features about their client could change the calculated score (local explainability). Despite being simple interpretability tasks, successfully completing them is important to the attorney's ability to defend their client from potential errors or issues with the risk assessment.

We assessed the simulatability and "what if" local explainability of decision trees, logistic regressions, and neural networks through a crowdsourced user study using Prolific (Prolific 2014). We asked 1,000 participants to simulate the model on a given input and anticipate the outcome on a slightly modified version of the input. We measured user accuracy and completion time over varied datasets, inputs, and model types (described in detail in the User Study Design section). The results are consistent with the folk hypotheses (Lipton 2018) that decision trees and logistic regression models are locally interpretable and are more locally interpretable than neural networks given the particular model representations, datasets, and user inputs used in the study.

As has been previously observed (Lipton 2018), it may be the case that a small neural network is more interpretable than a very large decision tree. To begin to answer questions surrounding cross-model comparisons and generalizations of these results to models not studied here, we investigated a measure for its suitability as a proxy for the users' ability to correctly perform both the simulation and "what if" local explainability tasks. We hypothesized that the number of program operations performed by an execution trace of the model on a given input would be a good proxy for the time and accuracy of users' attempts to locally interpret the model under both definitions; specifically, that as the total number of operations increased, the time taken would increase and the accuracy on the combined task would decrease.

Analyzing the results of this study, we find evidence that as the number of total operations performed by the model increases, the time taken by the user increases and their accuracy on the combined local interpretability task decreases. We anticipated that as the number of operations increases, the model would become uninterpretable because all users are eventually expected to make a mistake simulating a very large model. The operation count at which the users cannot locally interpret a model can be considered an upper bound limit to the interpretability of the model. Users reached this upper bound when simulating the largest neural network sizes we considered.
We see this work as a first step in a more nuanced understanding of the users' experience of interpretable machine learning.

Related Work
Work on the human interpretability of machine learning models began as early as Breiman's study of random forests (Breiman 2001). Since then, many approaches to the interpretability of machine learning models have been considered, including the development of new globally interpretable models (Ustun and Rudin 2016), post-hoc local explanations (Ribeiro, Singh, and Guestrin 2016) and visualizations (Olah et al. 2018), and post-hoc measurement of the global importance of different features (Henelius et al. 2014; Datta, Sen, and Zick 2016; Adler et al. 2018). We refer the interested reader to Molnar and Guidotti et al. for a more detailed discussion of these methods (Molnar 2018; Guidotti et al. 2018).

Some of the recent activity on interpretability has been prompted by Europe's General Data Protection Regulation (GDPR). A legal discussion of the meaning of the regulation with respect to interpretability is ongoing. Initially, the GDPR regulations were described as providing a "right to an explanation" (Goodman and Flaxman 2016), although subsequent work challenges that claim (Wachter, Mittelstadt, and Floridi 2017), supporting a more nuanced right to "meaningful information" about any automated decision impacting a user (Selbst and Powles 2017). Exactly what is meant by interpretability to support the GDPR and in a broader legal context remains in active discussion (Selbst and Barocas 2018).

The uncertainty around the meaning of "interpretability" has prompted calls for more precise definitions and carefully delineated goals (Lipton 2018). One thought-provoking paper makes the case for a research agenda in interpretability driven by user studies and formalized metrics that can serve as validated proxies for user understanding (Doshi-Velez and Kim 2017). Doshi-Velez and Kim argue that human evaluation of the interpretability of a method in its specific application context is the pinnacle of an interpretability research hierarchy, followed by human evaluation of interpretability on a simplified or synthetic task and analysis of proxy tasks without associated user studies. In order to perform interpretability analysis without user studies, they argue, it is necessary to first assess proxies for user behavior. Here, we propose one such metric and assess its suitability as a proxy for the local interpretability of a model.

Although we are unaware of existing metrics for the local interpretability of a general model, many measures developed by the program analysis community aim at assessing the understandability of a general program, which could be seen as metrics for global interpretability. For example, the cyclomatic complexity counts the number of independent paths through a program using its control flow graph (McCabe 1976). Metrics for specific model types have also been developed. Lage et al. (Lage et al. 2018a) investigate how different measures of complexity in decision sets affect accuracy and response time on tasks consisting of simulatability, verification, and counterfactual-reasoning. Via six different user studies of 150 people (for a total of 900 participants) they find that increased complexity in decision set logic results in increased response time but do not find a significant connection with accuracy.
They measure decision set complexity as a combination of the explanation size, clauses in the disjunctive normal form of the input (called cognitive chunks), and the number of repeated input conditions in the decision set. Their work is specific to decision sets and does not generalize to other model types.

There have also been experimentally grounded assessments of model properties related to (but different from) interpretability. Poursabzi-Sangdeh et al. (Poursabzi-Sangdeh et al. 2017) consider the impact of model attributes (e.g. black-box vs. clear) on user trust, simulatability, and mistake detection using randomized user studies on a similar scale to what we will consider here. They find that clear models (models where the inner calculations are displayed to the user) are best simulated. Allahyari et al. (Allahyari and Lavesson 2011) measure the perceived relative understandability of decision trees and rule-based models and find decision trees are seen as more understandable than rule-based models.

Other methods are concerned with human in the loop optimization of the interpretability of machine learning models. Lage et al. (Lage et al. 2018b) develop a method that optimizes models for both interpretability and accuracy by including user studies in the optimization loop. Their method minimizes the number of user studies needed to generate models that are both interpretable and accurate. They perform experiments on optimizing decision trees and find that the proxy interpretability metric optimized by the model (e.g. number of nodes, mean path length) varies based on dataset.

A Metric for Local Interpretability

Figure 1: A decision tree where the answer when run on the input (a = −80, b = 200) is shown circled in blue and the result of running the same model on the input (a = −64, b = 115) is shown circled in red.

Motivated by the previous literature and its calls for user-validated metrics that capture aspects of interpretability, we wish to assess whether a candidate metric captures a user's ability to simulate and "what if" locally explain a model. The candidate metric we consider here is the total number of runtime operation counts performed by the model when run on a given input. We consider two basic variants of operations, arithmetic and boolean, and track their totals separately. Effectively, we seek a proxy for the work that a user must do (in their head or via a calculator) in order to simulate a model on a given input, and will claim that the total number of operations also impacts a user's ability to perform a "what if" local explanation of a model.

An Example
As an example of how this metric would work, consider the visualization of a decision tree in Figure 1. The result of running the model on the input (a = −80, b = 200) is shown circled in blue and the result of running the same model on the input (a = −64, b = 115) is shown circled in red. The red answer is at a depth of 10 in the decision tree while the blue answer is at a depth of 5. Counting the operations that the model takes to run on the input (including each boolean comparison operation or memory access required, which we count as an arithmetic operation) gives the total number of runtime operations, our candidate metric.
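The worked example implies a simple closed form for decision tree paths; the sketch below is my inference from the reported counts (1 arithmetic and 2 boolean operations per branch point, plus a final leaf visit), not a formula stated by the authors:

```python
def decision_tree_ops(depth: int) -> dict:
    # Per branch point: 1 arithmetic op (memory access) + 2 boolean ops
    # (leaf check, branching comparison); one extra of each for the leaf.
    arithmetic = depth + 1
    boolean = 2 * depth + 1
    return {"arithmetic": arithmetic, "boolean": boolean,
            "total": arithmetic + boolean}

print(decision_tree_ops(5))    # blue input, depth 5:  6 arithmetic, 11 boolean, 17 total
print(decision_tree_ops(10))   # red input, depth 10: 11 arithmetic, 21 boolean, 32 total
```

Both outputs match the counts reported in the next paragraph, which suggests the inferred pattern is consistent with the authors' counting scheme.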
Using the below methodology to count these operations, to determine the number of runtime operations executed when evaluating the decision tree model on the inputs from the example above (blue: a = −80, b = 200 and red: a = −64, b = 115), the blue input is found to require 17 total operations (6 operations are arithmetic and 11 are boolean) while the red input requires 32 total operations (11 arithmetic and 21 boolean). Essentially, at each branch point one arithmetic operation is performed to do a memory access, one boolean operation is performed to check if the node is a leaf node, and one more boolean operation is performed for the branching operation.

Calculating Runtime Operation Counts
In order to calculate the number of runtime operations for a given input, we instrumented the prediction operation for existing trained models in Python's scikit-learn package (Buitinck et al. 2013). The source code for the technique is available at URL removed for anonymization. Since most machine learning models in scikit-learn use (indirectly, via other dependencies) cython, Fortran, and C for speed and memory efficiency, we implemented a pure Python version of the predict method for the classifiers, and instrumented the Python bytecode directly. We created pure-Python versions of the decision tree, logistic regression, and neural network classifiers in scikit-learn.1

Once working only with pure Python code, we used the tracing feature of Python's sys module and a custom tracer function to count the number of boolean and arithmetic operations. The default behavior of the tracer in Python is line based, meaning the trace handler method is called for each line of the source code. We used the dis module to modify the compiled bytecode objects of useful modules stored in their respective .pyc files. In particular, we modified the line numbering metadata so that every bytecode is given a new line number, ensuring that our tracer function is called for every bytecode instruction (Ned 2008b; Ned 2008a; Ike-Nwosu 2018). Inside the tracer function we use the dis module to determine when a byte corresponds to a valid operation and count them accordingly for our simplified predict method implementations when run on a given input.

User Study Design
We have two overall goals in this project: to assess the simulatability and "what if" local explainability of machine learning models, and to study the extent to which the proposed metric works as a proxy for local interpretability. To those ends, we designed a crowdsourced experiment that was given to 1000 participants. Participants were asked to run a model on a given input and then evaluate the same model on a locally changed version of the input. We start by describing the many potentially interacting factors that required a careful experimental design.

Models and Representations
For this study we consider the local interpretability of three models: decision trees, logistic regression, and neural networks. We chose decision trees and logistic regression because they are commonly considered to be interpretable (Lipton 2018). In contrast, we picked neural networks because they are commonly considered uninterpretable.

1 Specifically, sklearn.tree.DecisionTreeClassifier, sklearn.linear_model.LogisticRegression, and sklearn.neural_network.MLPClassifier.
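As an aside on the counting machinery described above: the paper rewrote .pyc line-number metadata so a line-based tracer fires per bytecode, but CPython 3.7+ exposes opcode-level tracing directly. The sketch below uses that simpler hook; the opcode groupings are illustrative rather than the paper's exact sets, the opcode names are those of CPython 3.10 and earlier, and the predict function is a stand-in for the paper's pure-Python re-implementations:

```python
import sys
from dis import opname

# Illustrative groupings; not the paper's exact operation taxonomy.
BOOLEAN_OPS = {"COMPARE_OP", "UNARY_NOT", "POP_JUMP_IF_FALSE", "POP_JUMP_IF_TRUE"}
ARITHMETIC_OPS = {"BINARY_ADD", "BINARY_MULTIPLY", "BINARY_SUBTRACT", "BINARY_SUBSCR"}
counts = {"boolean": 0, "arithmetic": 0}

def tracer(frame, event, arg):
    frame.f_trace_opcodes = True          # request an event on every bytecode op
    if event == "opcode":
        op = opname[frame.f_code.co_code[frame.f_lasti]]
        if op in BOOLEAN_OPS:
            counts["boolean"] += 1
        elif op in ARITHMETIC_OPS:
            counts["arithmetic"] += 1
    return tracer

def predict(x):                            # stand-in for a pure-Python predict method
    return 1 if x[0] * 2.0 + x[1] > 0 else 0

sys.settrace(tracer)
predict([-80.0, 200.0])
sys.settrace(None)
print(counts)
```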
The models were trained using the standard package scikit-learn.2 Our decision tree representation is a standard node-link diagram representation for a decision tree or flow chart. In order to allow users to simulate the logistic regression and neural network classifiers we needed a representation that would walk the users through the calculations without previous training in using the model or any assumed mathematical knowledge beyond arithmetic. The resulting representation for logistic regression is shown in Figure 2. The neural network representation used the same representation as the logistic regression for each node and one page per layer.

Figure 2: The logistic regression representation shown to users.

The representations described so far are for the first question a user will be asked about a model, the request to simulate it on a given input. In order to allow users to assess the "what if" local explainability of the model, we also asked them to determine the output of the model for a perturbed version of the initial input they were shown. The representations used here are the same as the ones described, but a snapshot of the participants' previously filled in answers is shown for the logistic regression and neural network representations (see Figure 3) and users are not given blank entries to allow the re-simulation of the model.

Data and Inputs
In order to avoid effects from study participants with domain knowledge, we created synthetic datasets to train the models. We created four synthetic datasets simple enough so that each model could achieve 100% test accuracy. These datasets consisted of a 2 dimensional dataset with rotation around an axis applied, 2 dimensional without rotation around an axis, 3 dimensional with rotation around an axis, and 5 dimensional with rotation around an axis. As the number of dimensions increases, so does the operation count. These four datasets were used to train the three considered models via an 80/20 train-test split. We generated user inputs using the test data. For each test data point, we changed one dimension incrementally in order to create a perturbed input. From this set of input and perturbed input pairs, we then chose a set of eight pairs for each trained model (i.e., for each model type and dataset combination) to show to the participants. The set was chosen to fit the following conditions: 50% of the classifications of the original inputs are True, 50% of the classifications on the perturbed input are True, and 50% of the time the classification between input and its perturbed input changes. We used these criteria in order to distribute classification patterns evenly across users, so that a distribution of random guesses by the participants would lead to 50% correctness on each task, and guessing that the perturbed input had the same outcome as the original input would also be correct 50% of the time.

2 Decision trees were trained using sklearn.tree.DecisionTreeClassifier with default parameters and without any depth restrictions. Logistic regression was trained using sklearn.linear_model.LogisticRegression with the multi_class argument set as 'multinomial' and using the 'sag' (stochastic average gradient descent) solver. The neural network was implemented using sklearn.neural_network.MLPClassifier. The neural network used is a fully connected network with 1 input layer, 1 hidden layer with 3 nodes, and 1 output layer. The relu (rectified linear unit) activation function was used for the hidden layer.
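A minimal sketch of this setup for the rotated 2-dimensional case follows. The rotation angle, data ranges, and labeling rule are assumptions (the paper does not specify them), and footnote 2's multi_class='multinomial' argument is omitted because recent scikit-learn versions no longer accept it:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.uniform(-100, 100, size=(1000, 2))
theta = np.pi / 6                              # assumed rotation angle
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
y = ((X @ R.T)[:, 0] > 0).astype(int)          # assumed separable rule after rotation

# 80/20 train-test split, mirroring the paper.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
models = {
    "DT": DecisionTreeClassifier(),
    "LR": LogisticRegression(solver="sag", max_iter=1000),
    "NN": MLPClassifier(hidden_layer_sizes=(3,), activation="relu",
                        max_iter=2000, random_state=0),
}
for name, clf in models.items():
    print(name, clf.fit(X_tr, y_tr).score(X_te, y_te))
```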
Pilot Studies
In order to assess the length of the study and work out any problems with instructions, we conducted three pilot studies. In the first informal study, one of us watched and took notes while a student attempted to simulate an input on each of the three types of models and determine the outcome for a perturbed input for each of those three models. In the second two pilots we recruited about 40 participants through Prolific and gave the study for a few fixed models and inputs with the same setup as we would be using for the full study. The main takeaways from these pilot studies were that we estimated it would take users 20-30 minutes to complete the survey, but that some users would take much longer. We had originally planned to include a dataset with 10 dimensions, and based on the time taken by users in the pilot survey decreased our largest dataset to 5 dimensions and added the 2-dimensional dataset with no rotation.

Experimental Setup
We used Prolific to distribute the survey to 1000 users, each of whom was paid $3.50 for completing it. Participants were restricted to those with at least a high school education (due to the mathematical nature of the task) and a Prolific rating greater than 75 out of 100. The full survey information (hosted through Qualtrics) and resulting data is available online.3 Each participant was asked to calculate the output of a machine learning model for a given input, and then to determine the output of a perturbed input applied to the same model. We showed each participant three trained models: a logistic regression, a decision tree, and a neural network in a random order. Each participant was shown a model trained on a specific dataset (chosen from the four described earlier) at most once to avoid memory effects across models. Each question began with the initial input and a brief description of the task. As an attention check, we included a question in the survey that asked users to do some basic addition. Lastly, we asked each user at the end of the study to indicate whether they fully attempted to determine correct answers, noting that they would still be compensated in case they selected no. We considered only the data of the 930 users, who we will refer to as confident respondents, who selected that they fully tried to determine correct answers and who correctly answered the basic addition problem.

3 URL removed for anonymization

Figure 3: The "what if" local explainability question shown to users for the neural network model. Note that while the simulatability question on the neural networks allowed users to fill in the blanks, the shown blanks in the above image represent where the variable values will be filled in, and users are given no location to fill in partial simulations of the neural network.

Preregistered Hypotheses
We preregistered two experimental hypotheses. Namely, that time to complete will be positively related to operation count, and that accuracy will be negatively related to operation count. We also preregistered two exploratory hypotheses. These were that we would explore the specific relationship between time and accuracy versus operation count, and that we would explore how the perturbed input is related to time and operation count. These hypotheses can be found at the Open Science Framework at: url removed for anonymization.
Study Setup Issues
After running the user study, we found that an error in the survey setup meant that the survey exited prematurely for users given two of the eight inputs on the decision tree models for one dataset. Since we did not receive data from these participants, Prolific recruited other participants who were allocated to other inputs and datasets, so the analyzed dataset does not include data for these two inputs. Users who contacted us to let us know about the problem were still paid.

Multiple Comparison Corrections
In order to mitigate the problem of multiple comparisons, all p-values and confidence intervals we report in the next section include a Bonferroni correction factor of 28. While we include 15 statistical tests in this paper, we considered a total of 28. Reported p-values greater than one arise from these corrections.

User Study Results
Based on the results from the described user study, we now examine folk hypotheses regarding the local interpretability of different model types, consider the relative local interpretability of these models, and assess our proposed metric.

Assessing the Local Interpretability of Models
In order to assess the local interpretability of different model types, we first separately consider the user success on the task for simulatability (the original input) and the task for "what if" local explainability (the perturbed input). Since inputs were chosen so that 50% of the correct model outputs were "yes" and 50% were "no", we compare the resulting participant correctness rates to the null hypothesis that respondents are correct 50% of the time. The resulting p-values and confidence intervals are shown in Table 1.

Table 1: Per-model correct responses out of the total confident respondents on the original input (simulatability task) and perturbed inputs ("what if" local explainability task) for decision trees, logistic regression, and neural networks. p-values given are with respect to the null hypothesis that respondents are correct 50% of the time, using exact binomial tests.

                              DT                      LR                       NN
Simulatability                717 / 930               592 / 930                556 / 930
                              p = 5.9 × 10−63         p = 1.94 × 10−15         p = 7.34 × 10−8
                              95% CI [0.73, 0.81]     95% CI [0.59, 0.69]      95% CI [0.55, 0.65]
"What If" Local               719 / 930               579 / 930                499 / 930
Explainability                p = 5.16 × 10−64        p = 2.07 × 10−12         p = 0.78
                              95% CI [0.73, 0.82]     95% CI [0.57, 0.67]      95% CI [0.49, 0.59]

The results indicate strong support for the simulatability of decision trees, logistic regression, and neural networks based on the representations the users were given. The results also indicate strong support for the "what if" local explainability of decision trees and logistic regression models, but neural networks were not found to be "what if" locally explainable.

Recall that we consider models to be locally interpretable if they are both simulatable and "what if" locally explainable. Based on the results in Table 1, we thus have evidence that decision trees and logistic regression models are locally interpretable and neural networks are not, partially validating the folk hypotheses about the interpretability of these models. Next, we'll consider the relative local interpretability of these models.
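Before moving on, the exact binomial tests behind Table 1 can be reproduced from the reported counts alone. A sketch assuming SciPy 1.7 or later; the Bonferroni handling (multiplying p-values by 28 and widening the intervals accordingly) follows the paper's stated procedure:

```python
from scipy.stats import binomtest

counts = {  # (simulatability correct, "what if" correct) out of 930
    "DT": (717, 719), "LR": (592, 579), "NN": (556, 499),
}
for model, (sim, whatif) in counts.items():
    for task, k in (("simulatability", sim), ("what-if", whatif)):
        res = binomtest(k, n=930, p=0.5)               # two-sided exact test
        ci = res.proportion_ci(confidence_level=1 - 0.05 / 28)
        print(f"{model} {task}: corrected p = {28 * res.pvalue:.3g}, "
              f"CI = [{ci.low:.2f}, {ci.high:.2f}]")
```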
Assessing Relative Local Interpretability
In order to assess the relative local interpretability of models, that is, to evaluate the folk hypothesis that decision trees and logistic regression models are more interpretable than neural networks, we compared the distributions of correct and incorrect answers on both tasks across pairs of model types. We applied one-sided Fisher exact tests with the null hypothesis that the models were equally simulatable, "what if" locally explainable, or locally interpretable. The alternative hypotheses were that decision trees and logistic regression models were more interpretable (had a greater number of correct responses) than neural networks, and that decision trees were more interpretable than logistic regression.

Table 2: Comparative correct / incorrect distributions and p-values between model types generated through Fisher exact tests for confident responses. Relative correctness is shown for simulatability (correctness on the original input), "what if" local explainability (correctness on the perturbed input), and local interpretability (correctness on both parts). DT stands for Decision Tree, LR stands for Logistic Regression, and NN stands for Neural Network.

Relative Simulatability:
  DT > NN: DT 717 correct / 213 incorrect vs. NN 556 / 374; p = 1.5 × 10−14; 95% CI [1.69, ∞]
  DT > LR: DT 717 / 213 vs. LR 592 / 338; p = 3.7 × 10−9; 95% CI [1.43, ∞]
  LR > NN: LR 592 / 338 vs. NN 556 / 374; p = 1.3; 95% CI [0.90, ∞]

Relative "What If" Local Explainability:
  DT > NN: DT 719 / 211 vs. NN 499 / 431; p = 7.3 × 10−26; 95% CI [2.20, ∞]
  DT > LR: DT 719 / 211 vs. LR 579 / 351; p = 2.6 × 10−11; 95% CI [1.54, ∞]
  LR > NN: LR 579 / 351 vs. NN 499 / 431; p = 2.9 × 10−3; 95% CI [1.09, ∞]

Relative Local Interpretability:
  DT > NN: DT 594 / 336 vs. NN 337 / 593; p = 9.3 × 10−32; 95% CI [2.36, ∞]
  DT > LR: DT 594 / 336 vs. LR 425 / 505; p = 5.9 × 10−14; 95% CI [1.60, ∞]
  LR > NN: LR 425 / 505 vs. NN 337 / 593; p = 5.7 × 10−4; 95% CI [1.13, ∞]

The results (see Table 2) give strong evidence that decision trees are more locally interpretable than logistic regression or neural network models on both the simulatability and "what if" local explainability tasks. While there was strong evidence that logistic regression is more "what if" locally explainable and more locally interpretable than neural networks, there is not evidence that logistic regression is more simulatable than neural networks using the given representations. This may be because the logistic regression and neural network representations were very similar. An analysis of the users who got both tasks right, i.e., were able to locally interpret the model, shows that the alternative hypothesis was strongly supported in all three cases, thus supporting the folk hypotheses that decision trees and logistic regression models are more interpretable than neural networks.

Assessing Runtime Operations as a Metric for Local Interpretability
In order to evaluate our preregistered hypotheses, we considered the relationship between total operation counts, time, and accuracy on the simulatability, "what if" local explainability, and combined local interpretability tasks. The graphs showing these relationships, including ellipses that depict the degree to which the different measurements are linearly related to each other, are shown in Figure 4. The time and accuracy given for the simulatability and "what if" local explainability tasks are separated individually for those tasks in the first two columns of the figure, while the final local interpretability column includes the sum of the time taken by the user on both tasks and credits the user with an accurate answer only if both the simulatability and "what if" local explainability tasks were correctly answered. The accuracies as displayed in the figure are averaged over all users given the same input into the trained model. All total operation counts given are for the simulation task on the specific input. In the case of the "what if" local explainability task for decision trees, this operation count is for the simulatability task on the perturbed input; the logistic regression and neural network simulatability operation counts do not vary based on input. The local interpretability total operation count is the sum of the counts on the simulatability and "what if" local explainability tasks. Additionally, we considered the effect on time and accuracy of just the arithmetic operation counts. The overall trends are discussed below.

Assessing the Relationship Between Runtime Operations and Time
The number of operations has a positive relationship with the time taken. Across all three interpretability tasks it appears clear that as the number of operations increases, the total time taken by the user also increases (see the first row of Figure 4). This trend is especially clear for the simulatability task, validating Hypothesis 1. This effect is perhaps not surprising, since the operation count considered is for the simulatability task and the representations given focus on performing each operation.

Users were locally interpreting the "what if" local explainability task. Users spent much less time on the local explainability task than the simulatability task across all models. The difference suggests that users were actually locally interpreting the model on the "what if" local explainability task as opposed to re-simulating the whole model.

The time taken to simulate neural networks might not be feasible in practice. The neural network simulation time was noticeably greater than that of the decision tree and logistic regression. In some cases, the time expended was greater than 30 minutes. A user attempting to simulate the results of a model might give up or be unable to dedicate that much time to the task. (The study takers likely feared lack of compensation if they gave up.) This result suggests that in time constrained situations, neural networks are not simulatable.

Assessing the Relationship Between Runtime Operations and Accuracy
The relationship between accuracy and operation count is clear for decision trees but not the other model types. As the total number of runtime operations increases, we hypothesized that the accuracy would decrease.
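As a brief methodological aside before the accuracy trends, the one-sided Fisher exact tests behind Table 2 can be sketched from the reported 2x2 tables; assuming SciPy, with the paper's Bonferroni factor of 28:

```python
from scipy.stats import fisher_exact

# Rows: DT, NN; columns: (correct, incorrect) on the simulatability task.
table = [[717, 213], [556, 374]]
odds_ratio, p = fisher_exact(table, alternative="greater")  # one-sided: DT > NN
print(odds_ratio, p * 28)  # corrected p-values can exceed 1, as the paper notes
```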
In the second row of Figure 4 we can see that this trend appears to hold clearly for all three interpretability tasks for the decision tree models, but there is no clear trend for the logistic regression and neural network models. This lack of effect may be due to the comparatively smaller range of operation counts examined for these two model types, or it may be that the local interpretability of these model types is not as related to operation count as it is for decision trees. The lack of overlap in the ranges for the operation counts of logistic regression and neural networks also makes it hard to separate the effects of the model type on the results.

Some users might not have understood the logistic regression and neural network tasks. Because the logistic regression and neural network tasks could be considered more challenging than the decision tree task, there may have been noise introduced by the variability in user ability to perform the task. While operation counts might influence the accuracy for users who are able to understand the base task, this trend may be hidden by the fact that some users who were confident did not understand the task.

Figure 4: Comparisons shown are between total operations for a particular trained model and input, the time taken by the user to complete the task, and the accuracy of the users on that task for the simulatability (original input), "what if" local explainability (perturbed input), and the combined local interpretability (getting both tasks correct) tasks. The total time shown is in seconds. The total operation count is for the simulatability task on the specific input; this is the same for both "what if" local explainability and simulatability except in the case of the decision tree models, where operation counts differ based on input. The local interpretability operation count is the sum of the simulatability and "what if" local explainability task operation counts. Accuracy shown is averaged over all users who were given the same input for that task and trained model. The models considered are decision trees (DT), logistic regression models (LR), and neural networks (NN). The ellipses surrounding each group depict the covariance between the two displayed variables, and capture 95% of the sample variance.

Discussion and Conclusion
We investigated the local interpretability of three common model types: decision trees, logistic regression, and neural networks, and our user study provides evidence for the folk hypotheses that decision trees and logistic regression models are locally interpretable, while neural networks are not. We also found that decision trees are more locally interpretable than logistic regression or neural network models. We also showed that as the number of runtime operations increases, participants take longer to locally interpret a model, and they become less accurate on local interpretation tasks. This runtime operations metric provides some insight into the local interpretability of the discussed models and representations, and could indicate to practitioners the extent to which their models fulfill a lower bound requirement of interpretability. Further work is needed to consider the extent to which the metric generalizes to other model types.
In addition, we found that users were consistently unable to locally inter- pret the largest operation count neural networks shown to them, and their inability to simulate such neural networks could suggest that users struggle to locally interpret mod- els more than 100 operations. Because we did not give users other models of similar operation count due to their poten- tial display size to the user, further work is needed to verify if users inability to locally interpret large neural networks was caused by the number of operation counts or neural net- works themselves. Further, there are many caveats and limitations to the reach of this work. The domain-agnostic nature of our syn- thetic dataset has transferability advantages, but also has dis- advantages in that it does not study interpretability within its target domain. The definitions of local interpretability that we assess here — simulatability and “what if” local explainability— are limited in their reach and the specific user study setup that we introduce may be limited in captur- ing the nuance of these definitions. Still, this work provides a starting point for designing user studies to validate notions of interpretability in machine learning. Such controlled stud- ies are delicate and time-consuming, but are ultimately nec- essary in order for the field to make progress. 2018. SIGKDD Conference on Knowledge Discovery and Data Mining (KDD). [Lipton 2018] Lipton, Z. C. 2018. The mythos of model interpretability. Queue 16(3):30. [McCabe 1976] McCabe, T. J. 1976. A complexity measure. IEEE Transactions on software Engineering (4):308–320. Interpretable Machine [Molnar 2018] Molnar, C. Learning. https://christophm.github.io/interpretable-ml- book/. https://christophm.github.io/interpretable-ml-book/. [Ned 2008a] Ned, B. 2008a. The structure of .pyc files. Blog. https://nedbatchelder.com/blog/200804/the structure of pyc files.html. [Ned 2008b] Ned, B. 2008b. Wicked hack: Python byte- code tracing. Blog. https://nedbatchelder.com/blog/200804/ wicked hack python bytecode tracing.html. [Olah et al. 2018] Olah, C.; Satyanarayan, A.; Johnson, I.; Carter, S.; Schubert, L.; Ye, K.; and Mordvintsev, A. 2018. The building blocks of interpretability. Distill. https://distill.pub/2018/building-blocks. [Poursabzi-Sangdeh et al. 2017] Poursabzi-Sangdeh, F.; Goldstein, D. G.; Hofman, J. M.; Vaughan, J. W.; and Wallach, H. 2017. Manipulating and measuring model interpretability. Transparent and Interpretable Machine Learning in Safety Critical Environments Workshop at NIPS. [Prolific 2014] Prolific. 2014. https://prolific.ac/, last ac- cessed on June 5th, 2019. [Ribeiro, Singh, and Guestrin 2016] Ribeiro, M. T.; Singh, S.; and Guestrin, C. 2016. Why should i trust you?: Ex- plaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowl- edge discovery and data mining, 1135–1144. ACM. [Selbst and Barocas 2018] Selbst, A. D., and Barocas, S. 2018. The intuitive appeal of explainable machines. Ford- ham Law Review. Forthcoming. Available at SSRN: https: //ssrn.com/abstract=3126971. [Selbst and Powles 2017] Selbst, A. D., and Powles, J. 2017. Meaningful information and the right to explanation. Inter- national Data Privacy Law 7(4):233–242. [Ustun and Rudin 2016] Ustun, B., and Rudin, C. 2016. Su- persparse linear integer models for optimized medical scor- ing systems. Machine Learning 102(3):349–391. [Wachter, Mittelstadt, and Floridi 2017] Wachter, S.; Mittel- stadt, B.; and Floridi, L. 2017. 
ai_researcher
1
Devising_Network_Intrusion_Detection_System_for_Smart_City_with_an_Ensemble_of_Optimization_and_Deep_Learning_Techniques.pdf
A Comprehensive Comparative Study of Individual ML Models and Ensemble Strategies for Network Intrusion Detection Systems

Ismail Bibers, Osvaldo Arreche, and Mustafa Abdallah

arXiv:2410.15597v1 [cs.CR] 21 Oct 2024

This work is partially supported by the Lilly Endowment through the AnalytixIN grant, the Enhanced Mentoring Program with Opportunities for Ways to Excel in Research (EMPOWER), and the 1st Year Research Immersion Program (1RIP) grants from the Office of the Vice Chancellor for Research at Indiana University-Purdue University Indianapolis. Ismail Bibers and Mustafa Abdallah are with the Computer and Information Technology Department, Purdue University in Indianapolis, Indianapolis, IN, USA. Email: {ibibers,abdalla0}@purdue.edu. Osvaldo Arreche is with the Electrical and Computer Engineering Department, Purdue University in Indianapolis, Indianapolis, IN, USA. Email: [email protected]

Abstract—The escalating frequency of intrusions in networked systems has spurred the exploration of new research avenues in devising artificial intelligence (AI) techniques for intrusion detection systems (IDS). Various AI techniques have been used to automate network intrusion detection tasks, yet each model possesses distinct strengths and weaknesses. Selecting the optimal model for a given dataset can pose a challenge, necessitating the exploration of ensemble methods to enhance generalization and applicability in network intrusion detection. This paper addresses this gap by conducting a comprehensive evaluation of diverse individual models and both simple and advanced ensemble methods for network IDS. We introduce an ensemble learning framework tailored for assessing individual models and ensemble methods in network intrusion detection tasks. Our framework encompasses the loading of input datasets, training of individual models and ensemble methods, and the generation of evaluation metrics. Furthermore, we incorporate all features across individual models and ensemble techniques. The study presents results for our framework, encompassing 14 methods, including various bagging, stacking, blending, and boosting techniques applied to multiple base learners such as decision trees and neural networks, among others. We evaluate the framework using two distinct network intrusion datasets, RoEduNet-SIMARGL2021 and CICIDS-2017, each possessing unique characteristics. Additionally, we categorize AI models based on their performances on our evaluation metrics and via their confusion matrices. Our assessment demonstrates the efficacy of ensemble learning across most setups explored in this study. Furthermore, we contribute to the community by releasing our source codes, providing a foundational ensemble learning framework for network intrusion detection.

Index Terms—Intrusion Detection Systems, Ensemble Learning, Network Security, Machine Learning, CICIDS-2017, RoEduNet-SIMARGL2021, Predictive Modeling, and Evaluation Metrics.

I. INTRODUCTION

The primary aim of intrusion detection systems (IDS) is to detect unauthorized utilization, misuse, and exploitation of computer network systems by both internal users and external intruders [1]–[3]. Traditional IDS designs typically operate under the assumption that the behavior of an intruder will deviate noticeably from that of a legitimate user and that many unauthorized actions are discernible. The potential of artificial intelligence (AI) has spurred the advancement of fully automated intrusion detection systems [4], [5].
Various AI methodologies have been employed to automate intrusion detection tasks, including neural networks [6], [7], decision trees [8], [9], logistic regression [10], [11], and random forest [12], [13]. The majority of these AI methods, with the exception of random forest, operate as standalone learning models where the combination of their decisions is not utilized by the IDS [14], [15]. Each of these AI models harbors its own constraints, such as a high false positive rate for certain models (for instance, approximately half of the major companies contend with 10,000 security alerts daily from AI-based threat monitoring tools [16]), and a high false negative rate for others (which poses a significant challenge in safety-critical computer network applications [17]). Prior AI-focused studies predominantly emphasized the classification accuracy of different AI algorithms without harnessing the collective potential of these diverse AI techniques. This inherent limitation has stressed the urgent necessity to exploit various ensemble learning methods to bolster IDS [18]–[20].

Numerous recent studies have begun delving into the utilization of ensemble learning with various AI models for IDS, as evidenced by works such as [21]–[34]. Specifically, works like [23], [24], [26], [28], [29], [31], [32] have proposed ensemble learning frameworks for anomaly detection, focusing on binary classification to discern normal from anomalous traffic. Conversely, other studies [21], [22], [27], [30], [33], [34] have developed ensemble learning frameworks for the classification of network intrusions, encompassing categories like Denial of Service (DoS) attacks, Port Scanning, Normal traffic, and others. These frameworks employ ensemble learning techniques such as Boosting, Stacking, and Bagging, considering various base models like Decision Trees, Support Vector Machines, and Neural Networks. Primary evaluation metrics include traditional AI metrics like accuracy, precision, recall (true positive rate), F1 score, and false positive rates. While most studies utilize benchmark datasets for IDS, such as CICIDS-2017, KDD'99, NSL-KDD, and UNSW-NB15 [23], [28], [29], [33], some conduct tests on real networks like the Palo Alto network [31], and even in real-time scenarios, as demonstrated by the "kitsune" framework [24]. An exemplary contribution in this domain is the work by [22], which focuses on generating a new dataset and benchmarking it using ensemble learning techniques. Another notable approach is showcased in [26], where ensemble procedures are employed to select the AI model variant with the best performance. However, a comprehensive evaluation of a wide array of AI methods across different intrusion datasets is lacking in these works, potentially impacting their general applicability. Each study tends to concentrate on a singular ensemble learning method to enhance the performance of a limited set of base models.

This paper aims to address the aforementioned gap by comprehensively evaluating diverse ensemble methods for network intrusion detection systems. We establish multiple individual ML models and simple and advanced ensemble learning frameworks to assess such methods in the context of network intrusion detection. Leveraging prior works such as [21]–[34], which have outlined various ensemble learning approaches, our framework can be categorized as follows.

• Individual Models: The initial phase of the framework involves implementing individual models such as decision trees [8], [9], logistic regression [10], [11], and neural networks [6], [7].
This phase encompasses tasks like loading datasets (e.g., CICIDS-2017 and RoEduNet-SIMARGL2021), training the models, and assessing performance using metrics like accuracy, precision, recall, and F1 score.
• Simple Ensemble Methods: The subsequent stage of the framework involves implementing simple ensemble methods such as averaging, max voting, and weighted averaging. Performance evaluation is conducted using metrics like accuracy, precision, recall, and F1 score.
• Advanced Ensemble Methods: The third phase focuses on implementing advanced ensemble methods including bagging, boosting, stacking, and blending. Note that we consider random forest [12], [13] in this category since it is built based on bagging of many decision trees. Again, evaluation metrics like accuracy, precision, recall, and F1 score are used to assess performance.
• Comparative Analysis: The final step entails evaluating all individual, simple, and advanced ensemble models to identify the most effective models for IDS and analyze the impacts of ensemble learning techniques.

Additionally, our study presents results for various ensemble model combinations, including bagging methods, stacking, and boosting, applied to multiple base learners such as decision trees, random forest, logistic regression, and neural networks, among others. These distinctions highlight the novel contributions of our work compared to previous studies, as discussed in the Related Work Section (Section II).

We conduct evaluations of our framework using two prominent network intrusion datasets, each with distinct characteristics. The first dataset is RoEduNet-SIMARGL2021 [35], a recent collection from the SIMARGL project, supported by the European Union. Notably, to our knowledge, limited prior work has applied comprehensive ensemble learning methods to this dataset, as discussed in the related work section.
This comprehensive evaluation allows us to identify the most promising approaches for network intrusion detection across different datasets and AI models, facilitating informed decision-making in the implementation of IDS. This work represents a significant advancement in bridging the gap in the application of ensemble learning methods for network intrusion detection systems (IDS). Through conducting extensive evaluations and comparisons of various metrics, we contribute to enhancing the understanding of these ensemble methods’ efficacy in the realm of IDS. The metrics employed in our evaluation encompass crucial network security requirements for AI models, including accuracy, precision, recall, and F1 score of intrusion detection methods, along with their corresponding runtimes. By thoroughly examining these metrics, we provide valuable insights into the performance and efficiency of different ensemble methods in detecting network intrusions. Our framework not only addresses existing limitations but also expands the application of ensemble learning techniques in network intrusion detection systems. By doing so, we pave the way for further advancements and enhancements in this research area, ultimately contributing to the development of more robust and effective network security solutions. Summary of Contributions: We summarize below our main contributions in this current work. • Evaluation of Individual and Ensemble Learning Methods: We conduct a comprehensive evaluation and comparison of various Individual ML models, along with various simple and advanced ensemble learning methods for network intrusion detection systems (IDS). • Assessment Across Diverse Metrics: Our evaluation considers a range of metrics crucial for network security requirements, including accuracy, precision, recall, and F1 score of intrusion detection methods, as well as their runtime performance. • Evaluation on Two Prominent Datasets: We evaluate our framework on two well-known network intrusion characteristics: datasets with and CICIDS-2017. This RoEduNet-SIMARGL2021 allowed for a comprehensive analysis across different network intrusion scenarios. distinct and ones) based ensemble • Performance Ranking: We categorized AI models their on (individual performances on evaluation metrics, ranking these methods in descending order of F1 score, providing valuable insights into the effectiveness of each approach. • Expansion of Ensemble Learning Applications for IDS: By demonstrating the efficacy of ensemble learning methods the application of these techniques in this critical research area, paving the way for further advancements. in network IDS, our work expands • Availability of Source Codes: We make our source codes available to the community for accessing the framework designed for network intrusion detection and for further development with new datasets and models.1 II. RELATED WORK A. Existing Efforts in Leveraging Ensemble Learning for IDS The survey conducted in the previous work [25] offers an overview of intrusion detection systems (IDS), focusing on the evolution of ensemble systems and methodologies employed in their design, particularly emphasizing ensemble techniques between 2009 and 2020. This study comprehensively discusses the current state of ensemble models, highlighting various approaches such as Stacking, Bagging, Boosting, and voting, among others. 
The analyzed works encompass a range of datasets, including KDD'99, NSL-KDD, Kyoto 2006+, and AWID, along with diverse models such as neural networks (NN), support vector machines (SVM), decision trees (DT), fuzzy clustering, and radial basis function (RBF) networks. The primary contribution of this work lies in its in-depth exploration of the existing landscape, stimulating the investigation of novel combination methods, such as the exploration of new combination rules. These insights offer valuable directions for further research on ensemble learning for IDS.

Ensemble Learning for Binary Classification Anomaly Detection Approaches: Within this domain, the study by [23] introduces an anomaly detection framework that operates on input datasets such as CICIDS-2017, UNSW-NB15, and KDD'99. The framework preprocesses the data and conducts feature selection (employing the Chi-square method in this study), subsequently applying various base models including Gaussian Naive Bayes, Logistic Regression, and Decision Trees. The predictions are then integrated using the Stochastic Gradient Descent ensemble model to yield the final prediction. The primary contributions of this research lie in the amalgamation of learning algorithms via stacking to enhance IDS performance, with potential applicability to other benchmark datasets. However, the study acknowledges limitations related to data imbalance issues, suggesting that the utilization of data augmentation techniques could alleviate such imbalances. Furthermore, the framework could benefit from incorporating different ensemble learning models to further enhance performance.

Similarly, [28] proposes an ensemble learning framework for binary classification of anomalies in IDS, utilizing the NSL-KDD and UNSW-NB15 datasets along with base models like Random Forest, AdaBoost, XGBoost, and Gradient boosting decision trees. The framework combines the outcomes of these models using a soft voting scheme. The presented results highlight the potential of the proposed NIDS framework to improve the accuracy of cyber-attack detection and minimize false alarm rates. Moreover, [29] presents a framework for IDS applied to datasets such as NSL-KDD and UNSW-NB15. Base models encompass LR, DT, NB, NN, and SVM, while ensemble techniques include Majority Voting, DT, NB, LR, NN, and SVM. The study also explores combinations of feature selection methods, with results indicating superior overall performance for ensemble techniques. However, the authors underscore the need for new datasets, particularly real-world ones, and advocate for the integration of unsupervised learning methods. Additionally, [31] applies its framework to real-world datasets, including the Palo Alto network log, in addition to the NSL-KDD and UNSW-NB15 datasets. Anomaly detection is addressed through ensemble methods employing weighted voting atop base learners like SVM, Autoencoder, and Random Forest. The primary contribution of this work is the introduction of a new ANIDS approach with real-world applicability, reducing false predictions. However, scalability issues and reliance solely on weighted voting are acknowledged as limitations, potentially necessitating more diverse approaches for efficient performance across different scenarios in this network intrusion detection task.

1The GitHub URL for our source codes is: https://github.com/sm3a96/IDS-Machine-Learning-Techniques-.git
Considering the IoT domain, the study by [32] introduces a framework for anomaly detection utilizing the TON-IoT network dataset. It employs four supervised machine learning (ML) models as base models, including Random Forests, Decision Trees, Logistic Regression, and K-Nearest Neighbors. These base models are subsequently integrated into an ensemble method, employing stacking and voting mechanisms to enhance attack detection efficiency. Limitations of this research include its narrow focus on the TON-IoT dataset without exploring other datasets, and its omission of other popular ensemble learning methods like bagging and averaging. In contrast, [24] presents an online anomaly detection system for network intrusion detection using a series of ensemble Autoencoders, catering to real-time detection requirements. This approach differs from previous works by leveraging ensemble techniques specifically tailored for Autoencoders. Additionally, [26] addresses the issue of overfitting in ensemble learning for small binary classification datasets. However, that framework utilizes several non-IDS-related datasets and employs base models such as Random Forest, Naive Bayes, and Logistic Regressor. The primary contribution of this work lies in its ensemble model selection procedure, which searches for the best model for a particular instance. Nonetheless, a limitation of this approach is the high computational cost associated with the cross-validation technique. While pruning may mitigate overfitting, its effectiveness may vary across different datasets and models.

Ensemble Learning for Multiclass Classification Approaches: Several studies explore ensemble learning techniques for multiclass classification in IDS. For example, in [21], a novel ensemble learning approach is applied on datasets like CICIDS-2017 and ToN IoT. They utilize stacking with Tensorflow models (CNN, DNN, RNN, LSTM), where class predictions feed into a DNN ensemble method. However, limitations include the absence of real IoT scenario experimentation, reliance on a single ensemble method, and resource-intensive operations on IoT devices. Another work [22] introduces the GTCS dataset for multiclass classification, addressing NSL-KDD dataset limitations. Employing the Weka toolkit, it employs adaptive ensemble learning with J48, MLP, and IBK base models, utilizing majority voting for ensemble learning. Drawbacks include lack of real-world deployment, external dataset validation, and limited AI model variety.

In [27], an ensemble learning approach achieves higher accuracy and lower false alarms by incorporating Random Forest to alleviate data imbalance. Utilizing Linear Genetic Programming (LGP), Adaptive Neural Fuzzy Inference System (ANFIS), and Random Forest classifiers, it employs weighted voting for the ensemble. However, challenges include lack of optimal weight assignment, generalization issues across datasets, and limited model variety. Similarly, [33] applies bagging to NB, PART, and Adaptive Boosting on KDD'99, using voting for component selection. Limitations involve restricted dataset testing and AI model diversity.

B. Contribution of Our Work

Our contribution lies in introducing a comprehensive intrusion classification framework encompassing individual ML models, along with simple and advanced ensemble techniques. We operate on two distinct datasets, RoEduNet-SIMARGL2021 and CICIDS-2017, aiming to generate key performance metrics including Accuracy, Recall, Precision, and F1 score.
Upon dataset loading and preparation, we incorporate all available features for analysis. Initially, our framework executes individual models leveraging LR, DT, RF, MLP, and KNN as base learners. Subsequently, we explore simple ensemble techniques such as Averaging (Avg), Max Voting, and Weighted Averaging. In the next phase, advanced ensemble methods including Bagging, Boosting methods (ADA, GB, XGB, CAT), Blending, and Stacking are applied in our framework. Notably, our work stands out for its extensive benchmarking experimentation across various model combinations. Furthermore, our inclusion of the RoEduNet-SIMARGL2021 dataset in the experiments fills a gap in existing research, as prior works seldom consider this dataset in their analyses.

III. BACKGROUND AND PROBLEM STATEMENT

This section outlines the fundamental concepts of network intrusion detection, highlights the hurdles posed by artificial intelligence (AI), underscores the necessity of ensemble learning, and elucidates the challenges inherent in evaluating these methodologies within the context of network intrusion detection tasks.

A. Types of Network Intrusions

Various network intrusion types exist, categorized within the widely recognized MITRE ATT&CK framework [37]. In our study, we address the primary network attacks outlined in this framework. Consequently, network traffic is broadly classified into the following categories:

Normal traffic: This refers to regular network activity observed within the system.

Malware / Malware Repository: Information obtained regarding malicious software [MITRE ATT&CK ID: DS0004]. This refers to analyzing malware for traits that might link it to specific creators, like the compiler used, debugging traces, code similarities, or group identifiers related to particular MaaS providers. Finding overlaps in malware usage by different adversaries may suggest the malware was acquired rather than independently developed. In this context, overlapping features in malware used by various adversaries could indicate a shared quartermaster [38].

PortScan (PS) / Network Service Discovery [MITRE ATT&CK ID: T1046]: PortScan involves an intrusion where the attacker conducts reconnaissance on the victim's computer. Often utilized as an initial step in an attack, it aims to identify vulnerabilities and potential entry points. The method involves sending connection requests to various ports, without finalizing the connection. Responses received from these ports help map potential entry points for exploitation [39].

Denial of Service (DoS) / Network Denial of Service [MITRE ATT&CK ID: T1498]: This type of attack aims to disrupt the target's network availability. A common example involves the attacker continuously sending connection requests to a server. However, upon receiving acknowledgment from the server, the attacker fails to respond, leaving the server's resources tied up and eventually leading to its unavailability. For comprehensive classifications of DoS attacks, readers are referred to [40].

Brute Force [MITRE ATT&CK ID: T1110]: This attack involves attempting all possible password combinations to gain unauthorized access to the victim's network. Attackers often leverage commonly used passwords in conjunction with this method. Success is more likely when users employ weak or easily guessable passwords [40].
Web Attack / Initial Access [MITRE ATT&CK ID: TA0001, T1659, T1189]: This category encompasses attacks conducted through web channels, exploiting vulnerabilities in public-facing web systems. For instance, attackers may exploit vulnerabilities in applications, leveraging software bugs, misconfigurations, or glitches to gain access to the application's underlying instance or container. Examples of such attacks include Drive-by Compromise [41]. However, it is noteworthy that while web attacks such as SQL injection (SQLi) and Cross-Site Scripting (XSS) are common, they typically do not directly provide initial access to a remote server [37].

Infiltration / Initial Access [MITRE ATT&CK ID: TA0001]: This type of attack occurs when an unauthorized entity attempts to gain initial access to a system or application. It encompasses various techniques, including targeted spear phishing and exploiting vulnerabilities in public-facing web servers. The initial access gained through this attack can vary, ranging from simply changing a password to maintaining persistent access through legitimate accounts and external remote services.

Botnet / Compromise Infrastructure [MITRE ATT&CK ID: T1584.005, T1059, T1036, T1070]: This type of attack involves the use of automated scripts executed remotely by attackers through hijacked devices. These scripts, known as bots, emulate human behavior and replicate it across multiple devices. The scripted nature of this technique enables scalability and easy deployment, making it an effective tool for targeting multiple attack points simultaneously. Consequently, botnets are a prevalent type of network attack.

Probe Attack / Network Scanning or Surveillance [MITRE ATT&CK ID: T1595]: Probe attacks serve as the initial phase of a broader attack strategy. These attacks involve scanning a network to collect information or identify known vulnerabilities [42]. Armed with a map detailing the available machines and services within a network, attackers can leverage this information to seek out potential exploits. It is important to note that while port scanning represents a type of probe attack, not all probe attacks involve port scans. Some may target specific vulnerabilities or utilize alternative methods, such as ping sweeps [43] or DNS zone transfers [44].

Throughout the experimentation, we meticulously collect and evaluate result metrics to benchmark optimal performance.

B. Intrusion Detection Systems

The escalating complexity of cyber attacks poses a substantial risk to critical infrastructure across diverse industries [45], [46]. As a result, IDS plays a pivotal role in defending computer network systems against malicious activities, whether perpetrated by internal users or external adversaries [47]. Conventional IDS architectures typically operate under the assumption that an intruder's actions will noticeably diverge from those of a legitimate user, thereby enabling the detection of many unauthorized activities [48]. With recent strides in artificial intelligence (AI) over the past decade, this architectural paradigm has facilitated the emergence of AI models capable of autonomously identifying network intrusions [49].

C. Limitations of Base Learner Models

While AI models have greatly automated intrusion detection, their inherent complexity presents constraints due to the intricate nature of their learning and decision-making mechanisms. This complexity poses challenges for a single model to fully grasp the subtleties of datasets, resulting in difficulties in learning specific subsets and achieving satisfactory metrics for certain outcomes.
This challenge is widespread across various AI models, such as Decision Trees (DT), K-nearest neighbors (KNN), Support Vector Machines (SVM), Deep Neural Networks (DNN), among others. Despite their high predictive accuracy in Intrusion Detection Systems (IDS), there persists a gap in attaining better accuracy, precision, recall, and F1 scores, particularly in error or attack scenarios (including a high false positive rate for some AI models [16] and a high false negative rate for others [17]). This issue is especially critical in safety-sensitive applications like network security through IDS. Consequently, there is a growing impetus to enhance performance and broaden the application of AI models in IDS. This has spurred the urgent need to employ diverse ensemble learning techniques to bolster IDS by leveraging the combination of different base learner models [18]–[20]. D. Key Advantages of Ensemble Methods It is crucial to recognize that individual base learners possess distinct strengths and weaknesses. Depending on the specific application or task, one model may outperform others, adding complexity to the model selection process. Machine learning algorithms operate on diverse underlying principles. For instance, K-nearest neighbors (KNN), which clusters similar data around centroids, is sensitive to factors like the number of clusters (K), class outliers, and irrelevant features, besides being computationally demanding. Neural Networks (NN), on the other hand, typically require large datasets and substantial computational resources, while also being susceptible to variations in input data. Regression methods like Logistic Regression offer simplicity and interpretability but may struggle to capture intricate relationships, such as higher-order polynomials. Similarly, Decision Trees boast quick training times but can oversimplify problems, potentially leading to overfitting. Consequently, amalgamating these AI models through ensemble techniques can enhance their robustness, generalizability, and effectiveness in network intrusion detection tasks by leveraging their complementary strengths and mitigating their weaknesses. Ensemble Learning is a dynamic field that delves into the concept of harnessing the strengths of diverse base learners to enhance predictive performance. Among the most renowned ensemble techniques are Bagging, Boosting, Blending, and Stacking. Bagging, short for Bootstrap Aggregating, involves creating multiple subsets of the dataset through bootstrapping, wherein data points are sampled with replacement, and training separate instances of a machine learning model on each subset. These models are trained independently. The primary objective of Bagging is to mitigate overfitting and enhance generalization by leveraging the diversity among the models. In contrast, Boosting operates by sequentially training multiple instances of the same base model, with each subsequent model aiming to correct the errors made by its predecessors. Boosting achieves this by assigning higher weights or emphasis to misclassified data points, effectively prioritizing instances that were previously difficult to classify correctly. By iteratively refining the model’s performance, Boosting endeavors to improve predictive accuracy and reduce bias in the final ensemble. Meanwhile, the Stacking method adopts a distinct approach by training a diverse array of base learners and utilizing their predictions as features to train a meta-model. 
This meta-model learns to combine the predictions of the base learners, effectively capturing complex relationships between features and the target variable. Ensemble methods such as Bagging, Boosting, and Stacking thus offer sophisticated strategies for improving predictive performance by leveraging the collective intelligence of diverse base learners. By combining the strengths of individual models and mitigating their weaknesses, ensemble techniques pave the way for more accurate and robust predictions across a wide range of machine-learning tasks.

In our study, we investigate various ensemble learning approaches within our framework, exclusively utilizing base models for network intrusion detection tasks. This comparative analysis is conducted across two distinct datasets, each possessing unique characteristics, in order to gain a comprehensive insight into our proposed framework.

IV. FRAMEWORK

The primary objective of this study is to develop an ensemble learning pipeline aimed at enhancing result metrics across diverse datasets. Our framework aims to assist security analysts in selecting effective methodologies for identifying intrusions and classifying attacks on network traffic, thereby bolstering intrusion prevention measures within their scope. To achieve this, we delineate a methodological framework comprising key stages for investigating the efficacy of various ensemble learning techniques tailored for intrusion detection systems (IDS), shown in Figure 1.

A. Data Preprocessing

The CICIDS-2017 and RoEduNet-SIMARGL2021 datasets underwent thorough preprocessing for intrusion detection systems (IDS). For CICIDS-2017, duplicate records were removed, and missing values were imputed with the mean for the 'Flow Bytes/s' column. Leading space characters in feature names were also removed, and label encoding was applied to categorical data in the 'Label' column. Similarly, for RoEduNet-SIMARGL2021, duplicate records were removed, columns with a single unique value were dropped, and missing values were filled with the mean values of their respective columns. Categorical features were encoded into numerical values using the Ordinal Encoder. These preprocessing steps aimed to enhance data quality and consistency for subsequent analyses.
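As a concrete illustration, the following is a minimal sketch of the CICIDS-2017 preprocessing steps described above; the file name cicids2017.csv is a hypothetical placeholder rather than the actual path used in our experiments.

    import pandas as pd
    from sklearn.preprocessing import LabelEncoder

    df = pd.read_csv("cicids2017.csv")  # hypothetical CSV export of the dataset

    # Remove duplicate records and strip leading spaces from feature names.
    df = df.drop_duplicates()
    df.columns = df.columns.str.strip()

    # Impute missing values in 'Flow Bytes/s' with the column mean.
    df["Flow Bytes/s"] = pd.to_numeric(df["Flow Bytes/s"], errors="coerce")
    df["Flow Bytes/s"] = df["Flow Bytes/s"].fillna(df["Flow Bytes/s"].mean())

    # Label-encode the categorical target column.
    df["Label"] = LabelEncoder().fit_transform(df["Label"])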
B. Model Selection and Used Techniques

In this section, we explain the model selection process, including the selection of individual base learners and the use of simple and advanced ensemble techniques.

1) The Individual Base Learner Models: We carefully selected a set of diverse and established base learners to leverage their complementary strengths.
• Decision Trees: Decision trees are well known for their simplicity and easy-to-understand nature. They provide an intuitive representation of the decision-making processes in the data.
• Neural Networks: We focused on the intricacy and non-linearity of neural networks, especially the Multi-Layer Perceptron classifier.
• Logistic Regression: It is a benchmark model that provides insights into linear relationships in the data.

2) The Simple Ensemble Techniques: In conjunction with individual base learners, we employed various simple ensemble techniques to enhance predictive performance. These techniques included:
• Averaging Predictions: By averaging the predictions of multiple individual models, we aimed to reduce variance and improve overall prediction accuracy.
• Max Voting: Employing a majority voting scheme, max voting aggregates predictions from multiple models and selects the most frequently occurring class label as the final prediction.
• Weighted Averaging: Assigning weights to predictions from individual models based on their performance, weighted averaging allowed us to emphasize the contributions of more accurate models while mitigating the impact of less accurate ones. We explain how the weights are assigned in our experiments in the Evaluation Section.

3) The Advanced Ensemble Techniques: To further bolster our ensemble model's efficacy, we delved into advanced ensemble techniques, comprising:
• Bagging: Through bootstrap aggregating, bagging generates diverse subsets of the training data and trains multiple base learners on each subset. By averaging their predictions, bagging reduces variance and enhances robustness. In this context, the Random Forest model aggregates predictions from several decision trees to reduce overfitting and maintain robust predictive performance across different datasets.
• Blending: Leveraging the outputs of multiple base learners as features, blending combines their predictions using a meta-learner to generate the final prediction. This technique harnesses the diversity of base learners to improve generalization.
• Boosting: Sequentially training base learners to correct the errors of preceding models, boosting emphasizes the misclassified instances, thereby iteratively refining the model's predictive performance.
• Stacking: Combining predictions from multiple base learners as features, stacking employs a meta-learner to learn the optimal combination of base learner predictions. This hierarchical ensemble technique leverages the diverse strengths of individual models to improve overall performance.

Fig. 1: An overview of our ensemble learning framework for network IDS. It considers a diverse set of AI models and ensemble methods, along with network intrusion datasets.

C. Model Implementation and Training

Following the meticulous selection of models, the implementation phase commenced, leveraging Python for the realization of our ensemble framework. Our implementation strategy began with the deployment of individual models, followed by the integration of simple ensemble techniques, culminating in the incorporation of a diverse array of advanced ensemble techniques. In order to make the best use of our high-performance system, we decided to use TensorFlow's distribution strategy, specifically tf.distribute.MirroredStrategy(). This strategy is designed for synchronous training across multiple GPUs within a single machine. It works by replicating the model's variables and computations across all available GPUs, which makes parallelism more efficient and speeds up the training process significantly. Each GPU independently computes gradients for a subset of the training data, and these gradients are aggregated across all GPUs to update the model's parameters. By synchronizing training across all GPUs, this approach maximizes GPU utilization, prevents inconsistencies, and ultimately accelerates the training process while improving overall efficiency. This strategy aligns perfectly with our goal of using our high-performance computer's computational resources to speed up model development and experimentation.
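To illustrate the pattern, the following is a minimal sketch of synchronous multi-GPU training under tf.distribute.MirroredStrategy with a small Keras classifier; the layer sizes, epochs, batch size, and placeholder data are illustrative rather than our exact training configuration.

    import numpy as np
    import tensorflow as tf

    # Placeholder data shaped like CICIDS-2017 (78 features, 7 classes).
    X_train = np.random.rand(1000, 78).astype("float32")
    y_train = np.random.randint(0, 7, size=1000)

    strategy = tf.distribute.MirroredStrategy()  # replicates across visible GPUs
    print("Replicas in sync:", strategy.num_replicas_in_sync)

    with strategy.scope():
        # Model creation and compilation must happen inside the scope.
        model = tf.keras.Sequential([
            tf.keras.layers.Dense(64, activation="relu", input_shape=(78,)),
            tf.keras.layers.Dense(7, activation="softmax"),
        ])
        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])

    # Per-replica gradients are aggregated before each parameter update.
    model.fit(X_train, y_train, epochs=5, batch_size=1024)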
1) Individual Model Implementation and Training: We implemented and trained each chosen base learner on its own, giving us the opportunity to explore its algorithms and performance characteristics in detail. With the help of Python's powerful libraries (scikit-learn, TensorFlow, and Keras), we were able to implement and train decision trees, random forests, neural networks (especially the Multi-Layer Perceptron classifier), and logistic regression models with ease. To implement and train the decision tree model, we utilized the scikit-learn library: the DecisionTreeClassifier class was used to create a decision tree classifier object, and we then trained the classifier using the fit function with our training data. Similar approaches were followed for implementing and training the other models, such as random forests, the Multi-Layer Perceptron classifier, and logistic regression. Each model was instantiated using its respective class from scikit-learn and then trained on the training data using the "fit" function. After training each model, we utilized the "predict" function to test the trained models using the test dataset that we prepared. This allowed us to evaluate the performance of each model on unseen data.

We further evaluated the models by computing their accuracy using the "accuracy_score" function from scikit-learn. Additionally, we printed the classification report using the "classification_report" function to obtain precision, recall, and F1-score for each class. We then visualized the performance of the models using confusion matrices generated by the "confusion_matrix" function. These evaluations provided us with insights into the overall performance and effectiveness of each model for the task.
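For concreteness, the following is a minimal sketch of this train-and-evaluate loop for the decision tree base learner; X and y stand for the preprocessed features and labels from the preprocessing stage, and the split mirrors the 70%/30% train/test partitioning shown in Figure 1.

    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.metrics import accuracy_score, classification_report, confusion_matrix

    # X, y: preprocessed feature matrix and label vector (placeholders).
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, stratify=y, random_state=42)

    clf = DecisionTreeClassifier(random_state=42)
    clf.fit(X_train, y_train)      # train on the training split
    y_pred = clf.predict(X_test)   # predict on unseen data

    print("Accuracy:", accuracy_score(y_test, y_pred))
    print(classification_report(y_test, y_pred))   # per-class precision/recall/F1
    print(confusion_matrix(y_test, y_pred))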
After In the max voting technique implementation, a distributed training approach using TensorFlow’s MirroredStrategy is employed to optimize the training process across multiple GPUs. Three distinct classifiers (K-Nearest Neighbors (KNN), Decision Tree (DT), and Random Forest (RF)) are initialized. These models are then integrated into a Voting Classifier using hard voting. The Voting Classifier aggregates the predictions from individual models and selects the class label with the majority vote as the final prediction. Subsequently, the Voting Classifier is trained on the training data, and its performance is evaluated using the test data. This ensemble approach enhances the model’s predictive capability by leveraging strengths of multiple classifiers. using training distributed In the weighted averaging technique implementation, TensorFlow’s strategy a MirroredStrategy is employed, facilitating parallel execution across multiple GPUs. Within this distributed scope, three distinct classifiers (Decision Tree (DT), K-Nearest Neighbors (KNN), and Random Forest (RF)) are instantiated. These classifiers are then integrated into a Voting Classifier, which aggregates their predictions using hard voting. Custom weights are assigned to each classifier to influence their contribution to the final prediction, with DT accounting the for 40%, KNN for 30%, and RF for 30%. Finally, ensemble model is trained on the provided training data. This ensemble-based approach aims to enhance predictive accuracy by leveraging the diverse capabilities of individual classifiers while considering their respective contributions to the final prediction. Following the training of the ensemble model using weighted averaging technique, we conducted prediction and evaluation by computing accuracy, printing the classification report, and generating the confusion matrix. 3) Advanced Ensemble Techniques Implementation: Finally, advanced ensemble techniques were implemented to further enhance the predictive capabilities of the model. Python’s machine learning libraries, including scikit-learn, provide seamless integration of various advanced ensemble techniques such as bagging, blending, boosting (including Adaptive Boosting, Cat Boosting, Gradient Boosting, and XGBoost Extreme Gradient Boosting), and stacking. For the bagging technique implementation, a distributed training strategy using TensorFlow’s MirroredStrategy is initiated to enable parallel execution across multiple GPUs. Within this distributed context, a list of diverse base models is instantiated, including RF, MLP, LR, and DT classifiers. These base models serve as the foundational components for the ensemble approach. Subsequently, a Bagging Classifier is constructed, utilizing RF as the base model. Bagging is a robust ensemble technique effective in reducing overfitting by aggregating predictions from multiple models. In this implementation, the Bagging Classifier is configured with the same number of estimators as the number of base models to ensure diversity and effectiveness in prediction. The Bagging Classifier is then trained on the provided training data. for Similarly, the blending technique implementation, TensorFlow’s MirroredStrategy is employed to facilitate parallel execution across multiple GPUs. Under this strategy’s scope, several base models including RF, MLP, LR, and DT are initialized and trained. Each of these models generates predictions for the test data, which are then combined using the blending technique to create a new dataset. 
3) Advanced Ensemble Techniques Implementation: Finally, advanced ensemble techniques were implemented to further enhance the predictive capabilities of the model. Python's machine learning libraries, including scikit-learn, provide seamless integration of various advanced ensemble techniques such as bagging, blending, boosting (including Adaptive Boosting, Cat Boosting, Gradient Boosting, and XGBoost (Extreme Gradient Boosting)), and stacking.

For the bagging technique implementation, a distributed training strategy using TensorFlow's MirroredStrategy is initiated to enable parallel execution across multiple GPUs. Within this distributed context, a list of diverse base models is instantiated, including RF, MLP, LR, and DT classifiers. These base models serve as the foundational components for the ensemble approach. Subsequently, a Bagging Classifier is constructed, utilizing RF as the base model. Bagging is a robust ensemble technique effective in reducing overfitting by aggregating predictions from multiple models. In this implementation, the Bagging Classifier is configured with the same number of estimators as the number of base models to ensure diversity and effectiveness in prediction. The Bagging Classifier is then trained on the provided training data.

Similarly, for the blending technique implementation, TensorFlow's MirroredStrategy is employed to facilitate parallel execution across multiple GPUs. Under this strategy's scope, several base models including RF, MLP, LR, and DT are initialized and trained. Each of these models generates predictions for the test data, which are then combined using the blending technique to create a new dataset. This dataset serves as the input for a meta-model, another DT classifier. The meta-model is trained on the blended predictions to learn how to best combine the outputs of the base models. Additionally, predictions are made on the test set using the trained base models, and a new dataset (blend_X_test) is created with these predictions to be used as input for the meta-model. Finally, the meta-model predicts the final output based on the blended predictions from the base models.

Along the same lines, for the boosting technique implementations, including Adaptive Boosting, Cat Boosting, Gradient Boosting, and XGBoost (Extreme Gradient Boosting), we followed the same aforementioned process.

For the stacking technique implementation, TensorFlow's MirroredStrategy is employed to facilitate parallel execution across multiple GPUs. Within this distributed context, four base models (RF, MLP, LR, and DT) are instantiated and integrated into pipelines incorporating Principal Component Analysis (PCA) for dimensionality reduction. These pipelines preprocess the data, enhancing model performance and reducing computational complexity. A meta-model, represented by another DT classifier, is instantiated to learn from the outputs of the base models. The StackingClassifier from scikit-learn is utilized to stack the base models, combining their predictions as features for the meta-model. Finally, the stacked model is trained on the provided training data, enabling it to learn the optimal combination of predictions from the base models.
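A minimal sketch of this stacking setup with scikit-learn is shown below; the PCA component count and other hyperparameters are illustrative placeholders rather than our exact configuration.

    from sklearn.decomposition import PCA
    from sklearn.ensemble import RandomForestClassifier, StackingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.neural_network import MLPClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.tree import DecisionTreeClassifier

    # Base learners wrapped in PCA pipelines for dimensionality reduction.
    base_models = [
        ("rf", make_pipeline(PCA(n_components=10), RandomForestClassifier(random_state=42))),
        ("mlp", make_pipeline(PCA(n_components=10), MLPClassifier(max_iter=300))),
        ("lr", make_pipeline(PCA(n_components=10), LogisticRegression(max_iter=1000))),
        ("dt", make_pipeline(PCA(n_components=10), DecisionTreeClassifier(random_state=42))),
    ]

    # A decision tree meta-model learns to combine the base predictions.
    stack = StackingClassifier(estimators=base_models,
                               final_estimator=DecisionTreeClassifier(random_state=42))
    stack.fit(X_train, y_train)
    y_pred = stack.predict(X_test)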
D. Evaluation Metrics and Model Selection Rationale

Results' Metrics: To evaluate the performance of the selected models and techniques comprehensively, we employed four primary performance indicators: Accuracy, Precision, Recall, and F1 score. Additionally, runtime was considered as a metric to assess the computational efficiency of the models. These metrics collectively provide insights into the effectiveness and efficiency of the models in detecting intrusions. We organized the results systematically to facilitate analysis and comparison across different models and techniques. This structured approach enables us to draw meaningful conclusions regarding the suitability and efficacy of the models for IDS applications.

Model Selection Criteria: The models chosen for this study were selected based on several key factors. Primarily, their prevalence in prior research pertaining to Intrusion Detection Systems (IDS) ensured alignment with established literature, enabling effective comparison with seminal studies such as [40], [50]. Furthermore, these diverse ensemble learning methods have had success in different applications. By adopting widely used models, our research maintains consistency with existing methodologies, facilitating a robust evaluation of the various models and ensemble learning techniques utilized in our investigation. In this context, we emphasize that we used AI models with different working principles for our ensemble learning (i.e., KNN uses a different reasoning from MLP, which in turn uses a different reasoning than DT).

E. Comprehensive Overview of Top Network Features and Their Role in the Intrusion Learning

In this subsection, we present a detailed list of the top network intrusion features along with their explanations for the two datasets under study, as they play a crucial role throughout the entirety of our paper. Tables I and II elucidate key features specific to the RoEduNet-SIMARGL2021 and CICIDS-2017 datasets, respectively, providing descriptions for each feature. These tables serve to highlight significant features from each dataset, offering clarity and contextual understanding. However, it is essential to clarify that all available features (whose counts are given in Table III) were utilized in our preliminary experiments. This inclusive approach enabled us to fully exploit the datasets for our analysis of network intrusion detection. Notably, Table III summarizes the overall composition of each dataset, encompassing the number of features.

TABLE I: Description of the main features for the RoEduNet-SIMARGL2021 dataset [51].

FLOW_DURATION_MILLISECONDS: Flow duration in milliseconds
PROTOCOL_MAP: IP protocol name (tcp, ipv6, udp, icmp)
TCP_FLAGS: Cumulation of all flow TCP flags
TCP_WIN_MAX_IN: Max TCP Window (src->dst)
TCP_WIN_MAX_OUT: Max TCP Window (dst->src)
TCP_WIN_MIN_IN: Min TCP Window (src->dst)
TCP_WIN_MIN_OUT: Min TCP Window (dst->src)
TCP_WIN_SCALE_IN: TCP Window Scale (src->dst)
TCP_WIN_MSS_IN: TCP Max Segment Size (src->dst)
TCP_WIN_SCALE_OUT: TCP Window Scale (dst->src)
SRC_TOS: TOS/DSCP (src->dst)
DST_TOS: TOS/DSCP (dst->src)
FIRST_SWITCHED: SysUptime of First Flow Packet
LAST_SWITCHED: SysUptime of Last Flow Packet
TOTAL_FLOWS_EXP: Total number of exported flows

TABLE II: Description of the main features for the CICIDS-2017 dataset [52].

Packet Length Std: Standard deviation length of a packet
Total Length of Bwd Packets: Total size of packet in backward direction
Subflow Bwd Bytes: Average number of bytes in backward sub-flow
Destination Port: Destination Port Address
Packet Length Variance: Variance length of a packet
Bwd Packet Length Mean: Mean size of packet in backward direction
Avg Bwd Segment Size: Average size observed in the backward direction
Bwd Packet Length Max: Maximum size of packet in backward direction
Init Win Bytes Backward: Total number of bytes in initial backward window
Total Length of Fwd Packets: Total packets in the forward direction
Subflow Fwd Bytes: Average number of bytes in a forward sub-flow
Init Win Bytes Forward: Total number of bytes in initial forward window
Average Packet Size: Average size of packet
Packet Length Mean: Mean length of a packet
Max Packet Length: Maximum length of a packet

V. FOUNDATIONS OF EVALUATION

In this section, we present a comprehensive evaluation aimed at addressing key research questions that underpin our study:
1) What are the optimal individual ML models suited for a given network intrusion detection dataset?
The dataset adheres to a structured data schema reminiscent of Netflow [53], a network protocol developed by CISCO for the purpose of capturing and monitoring network flows. CICIDS-2017 Dataset [36]: Serving as a benchmark for intrusion detection, this dataset was curated by the Canadian Institute for Cybersecurity at the University of Brunswick in 2017. It encompasses six distinct attack profiles, including activities such as brute force, heartbleed, botnet, Denial of Service (DoS), portscan, web attack, and infiltration attack. incorporates To establish a realistic context, background traffic generated through a B-Profile system [54], the dataset TABLE III: Summary and statistics of the three network intrusion datasets used in this work, including the size of the dataset, number of attack types (labels), number of intrusion features, and distribution of samples among attack types. (a) Basic statistics of datasets Dataset CICIDS-2017 RoEduNet-SIMARGL2021 No. of Labels 7 3 No. of Features 78 29 No. of Samples 2,775,364 31,433,875 (b) Distribution of samples among different attack types Dataset CICIDS-2017 RoEduNet2021 DoS Normal 84.442% 9.104% 24.53% 62.20% PortScan 5.726% 13.27% Brute Force Web Attack 0.498% - 0.157% - Bot 0.071% - Infiltration 0.001% - which captures various user behaviors based on popular network protocols. Summary and Statistics of the Datasets: Each dataset is characterized by its size, the number of attack types (labels), and the quantity of intrusion features. Detailed statistics regarding these attributes are presented in Table III. B. Experimental Setup Computing Resources: Our experiments were conducted on a high-performance computing (HPC) system equipped with robust hardware capabilities. The HPC configuration includes two NVIDIA A100 GPUs, 64 GPU-accelerated nodes, each boasting 256 GB of memory, and a single 64-core AMD EPYC 7713 processor running at 2.0 GHz with a power consumption of 225 watts. This setup enables a peak performance of approximately 7 petaFLOPs, making it exceptionally well-suited for intensive AI and machine learning tasks [55]. Coding Tools: To ensure versatility and openness in our implementation, we utilized the Python programming language alongside various AI toolboxes such as Keras and ScikitLearn. Additionally, we leveraged essential libraries including Pandas and Matplotlib. By adopting these open-source tools, we aimed to facilitate transparency and reproducibility in our research endeavors. C. Evaluation Metrics In this study, the utilization of well-established evaluation to ascertain the most effective model metrics is crucial for integration within an Intrusion Detection System (IDS). Accuracy, precision, recall, and F1-score stand as quintessential performance evaluation metrics. These metrics are derived from four fundamental measures: true positive (TP), false positive (FP), true negative (TN), and false negative (FN) rates. The evaluation metrics are delineated as follows: • Accuracy [(T P + T N )/T otal]: Signifies the proportion of accurately identified network traffic instances over the total data instances. • Precision [T P/(F P + T P )]: Measures the frequency with which the model accurately discerns an attack. • Recall [T P/(F N + T P )]: Measures the model’s ability to correctly identify attacks (or intrusions). Recall is also referred to as the true-positive rate, sensitivity, or detection rate. • F1-Score [2T P/(2T N + F P + F N )]: Represents the harmonic mean of precision and recall. 
TABLE IV: Performance of different models (both base learners and ensemble methods) on the RoEduNet-SIMARGL2021 dataset. The results are organized by F1 score (highest to lowest).
Model | ACC | PRE | REC | F1
Random Forest (RF) | 1.00 | 1.00 | 1.00 | 1.00
Decision Tree (DT) | 1.00 | 1.00 | 1.00 | 1.00
Average (Avg) | 1.00 | 1.00 | 1.00 | 1.00
Max Voting (Max Vot) | 0.999 | 1.00 | 1.00 | 1.00
Stacking | 0.99998 | 1.00 | 1.00 | 1.00
Weighted Average (Weighted Avg) | 0.998 | 1.00 | 1.00 | 1.00
Bagging (Bag) | 0.998 | 1.00 | 1.00 | 1.00
Blending (Blend) | 0.998 | 1.00 | 1.00 | 1.00
AdaBoost (ADA) | 0.99981 | 1.00 | 1.00 | 1.00
Cat Boost (CAT) | 0.998 | 0.998 | 0.998 | 0.998
Gradient Boosting (GB) | 0.988 | 0.99 | 0.99 | 0.998
XGBoost (XGB) | 0.996 | 0.996 | 0.996 | 0.996
Logistic Regression (LR) | 0.6781 | 0.56 | 0.68 | 0.58
Multi-Layer Perceptron (MLP) | 0.6178 | 0.38 | 0.62 | 0.47

D. AI Models

In this section, we outline the main AI models employed in our study.

(i) Base Learners: We utilized four widely used AI classification algorithms as base learners, namely Multi-Layer Perceptron (MLP) [56], Decision Tree (DT) [57], Logistic Regression (LR) [58], and k-Nearest Neighbors (KNN) [59]. These AI methods form the foundation of our evaluation, allowing us to assess both their individual performance and their contributions to our network intrusion detection framework.

(ii) Ensemble Methods: In addition to the base learners, our framework incorporates advanced ensemble techniques such as stacking, blending, boosting (including Cat Boosting (CAT) [60], Light Gradient-Boosting Machine (LGBM) [61], AdaBoost (ADA) [62], Gradient Boosting (GB) [63], and Extreme Gradient Boosting (XGBoost) [64]), Random Forest, and bagging. Furthermore, we employ simpler ensemble methods such as Voting [65], Averaging [66], and Weighted Averaging alongside the aforementioned models. These ensemble methods enhance the robustness and accuracy of our intrusion detection system.

Hyperparameters: We provide our main hyperparameter choices for each AI model and each ensemble method used in our work in Appendix A. Having described the main experimental setup, we next detail our evaluation results and findings.

E. RoEduNet-SIMARGL2021 Analysis

Main Results: Table IV presents the performance metrics of various models on the RoEduNet-SIMARGL2021 dataset when utilizing all available features. Notably, the majority of models demonstrate remarkable performance across all metrics, with particularly high values for precision, recall, and F1 scores. The top-performing techniques and models, including Random Forest (RF), Decision Tree (DT), Average (Avg), Max Voting (Max Vot), Stacking, and Bagging (Bag), consistently achieve perfect scores (1.00) across all metrics. This convergence suggests robust model performance across different ensemble techniques and underscores the efficacy of utilizing the complete feature set for classification.

Conversely, Logistic Regression (LR) and Multi-Layer Perceptron (MLP) exhibit comparatively lower performance metrics, indicating potential limitations in capturing the underlying patterns within the dataset.

TABLE V: Model training and testing times (in seconds) for the RoEduNet-SIMARGL2021 dataset. The runtimes are organized from shortest to longest. Logistic Regression is the most time-efficient individual model, while Bagging is the most time-efficient ensemble method for this dataset.
Model | Time (Seconds)
Logistic Regression (LR) | 122.72
Decision Tree (DT) | 900.74
Multi-Layer Perceptron (MLP) | 6200.45
Bagging (Bag) | 9834.9
Random Forest (RF) | 10800.85
XGBoost (XGB) | 12520.23
AdaBoost (ADA) | 12610.54
Cat Boost (CAT) | 18050.12
Gradient Boosting (GB) | 18250.02
Blending (Blend) | 19600.43
Average (Avg) | 27216.16
Stacking | 28800.32
Weighted Average (Weighted Avg) | 30816.53
Max Voting (Max Vot) | 32400.452

Given the already optimal performance attained by several models, further optimization may yield only marginal improvements at best. However, exploring alternative feature engineering strategies or investigating potential data augmentation techniques could offer avenues for enhancing model generalization and resilience to unseen data.

Runtime Performance: Table V offers insights into the runtime performance of the models on RoEduNet-SIMARGL2021, measured in seconds. Models such as Logistic Regression (LR) and Decision Tree (DT) demonstrate shorter runtimes, reflecting their computational efficiency. In contrast, ensemble methods and gradient boosting algorithms such as Stacking, Average, and Gradient Boosting exhibit longer runtimes due to their inherent complexity and resource-intensive nature. However, their high performance across evaluation metrics justifies the computational investment, particularly in accuracy-sensitive applications.

Considering the extensive RoEduNet-SIMARGL2021 dataset, comprising approximately 30 million samples, prioritizing models with optimal performance across metrics while balancing computational efficiency is crucial. Models like LR and DT emerge as promising candidates due to their shorter runtimes, making them attractive for scenarios with computational constraints. Conversely, models with longer runtimes, such as Blending (Blend) and Stacking, may require substantial resource allocation. Nevertheless, their superior performance justifies the investment, especially in applications prioritizing precision and recall metrics.

To manage computational complexity for blending and stacking, experiments were conducted on a subset of the dataset, limiting the sample size to 20% through random sampling, which ensures manageable overhead while retaining analytical integrity.

1) Confusion Matrices for RoEduNet-SIMARGL2021: The results shown in this section provide the classification accuracy of the models and ensemble methods via confusion matrices, displayed as heat maps. The 14 different learners are tested to classify the four traffic classes in the RoEduNet-SIMARGL2021 dataset. We group the results as follows:

Fig. 2: Confusion matrix of the models with perfect performance on the RoEduNet-SIMARGL2021 dataset: Random Forest (RF), Decision Tree (DT), the Averaging ensemble technique (Avg), Weighted Averaging (Weighted Avg), the Bagging ensemble technique (Bag), and the Max Voting ensemble technique (Max Vot).

(i) Confusion Matrices of Models with Perfect Performance: Figure 2 shows the confusion matrix of the models with perfect performance on the RoEduNet-SIMARGL2021 dataset. These models are Random Forest (RF), Decision Tree (DT), the Averaging ensemble technique (Avg), Weighted Averaging (Weighted Avg), the Bagging ensemble technique (Bag), and the Max Voting ensemble technique (Max Vot). The figure shows that all normal samples, along with the three intrusion classes (Denial of Service, Malware, and Port Scanning), are predicted perfectly by these models.
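As a minimal sketch (not the authors' exact plotting code), such confusion-matrix heat maps can be produced with scikit-learn and Matplotlib; the labels and predictions below are illustrative placeholders:

```python
# Illustrative confusion-matrix heat map for the four traffic classes.
import matplotlib.pyplot as plt
from sklearn.metrics import ConfusionMatrixDisplay, confusion_matrix

classes = ["DoS", "Malware", "Normal", "PortScan"]
y_true = ["Normal", "DoS", "PortScan", "Normal", "Malware", "DoS"]
y_pred = ["Normal", "DoS", "PortScan", "Normal", "Malware", "PortScan"]

cm = confusion_matrix(y_true, y_pred, labels=classes)
ConfusionMatrixDisplay(cm, display_labels=classes).plot(cmap="Blues")
plt.title("Confusion matrix (illustrative)")
plt.show()
```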
(ii) Confusion Matrices of Models with Near-Perfect Performance: The second category includes models that have near-perfect performance but make a few prediction errors. This category includes Adaptive Boosting (ADA) (Figure 3) and the Stacking ensemble (Figure 4).

Fig. 3: Confusion matrix of the Adaptive Boosting (ADA) ensemble technique on the RoEduNet-SIMARGL2021 dataset. It has near-perfect performance.

Fig. 4: Confusion matrix of the Stacking ensemble technique on the RoEduNet-SIMARGL2021 dataset. It has near-perfect performance on the samples.

(iii) Confusion Matrices of Models with Low Performance: We finally show the confusion matrices for the models with the lowest performance on our first dataset. Figure 5 shows the confusion matrix of the MLP classifier on the RoEduNet-SIMARGL2021 dataset. It has low prediction performance, particularly for Denial of Service and Port Scanning attacks. On the other hand, it has perfect performance in identifying normal samples. Moreover, Figure 6 shows the confusion matrix of Logistic Regression (LR) on the RoEduNet-SIMARGL2021 dataset. LR has low performance on Denial of Service and Port Scanning intrusion traffic while having near-perfect performance on normal instances (or samples).

Fig. 5: Confusion matrix of the MLP classifier on the RoEduNet-SIMARGL2021 dataset. It has low prediction performance, particularly for Denial of Service and Port Scanning attacks. On the other hand, it has perfect performance in identifying normal samples.

Fig. 6: Confusion matrix of Logistic Regression (LR) on the RoEduNet-SIMARGL2021 dataset. Here, label 0 represents Denial of Service (DoS), 1 represents Malware, 2 represents Normal, and 3 represents Port Scanning. LR has low performance on DoS and Port Scanning intrusions while having near-perfect performance on normal (label 2) samples.

(iv) Confusion Matrices for Models with Reduced Sample Size: Finally, we show confusion matrices for the models that have computational issues on the RoEduNet-SIMARGL2021 dataset. Figure 8 shows the confusion matrices of the Extreme Gradient Boosting (XGB) and Gradient Boosting (GB) ensemble techniques. Note that these two methods were tested on a reduced sample size, yet they achieve almost perfect performance on all intrusion classes. Similarly, Figure 7 shows the confusion matrices for the Bagging, Blending, and Cat Boosting techniques.

Fig. 7: Confusion matrices of the Bagging (Bag), Blending (Blend), and Cat Boosting (CAT) ensemble techniques on the RoEduNet-SIMARGL2021 dataset.

Fig. 8: Confusion matrices of the Extreme Gradient Boosting (XGB) and Gradient Boosting (GB) ensemble techniques on the RoEduNet-SIMARGL2021 dataset. Note that these two methods were tested on a reduced sample size.

Having finished our detailed evaluation analysis on the RoEduNet-SIMARGL2021 dataset, we next present the detailed evaluation analysis for the CICIDS-2017 dataset.

F. CICIDS-2017 Analysis

Main Results: Table VI presents the performance metrics of various models on the CICIDS-2017 dataset when utilizing all available features. The table provides insights into the effectiveness of different machine learning algorithms in classifying network traffic. Notably, several models demonstrate high performance across all metrics, with Random Forest (RF), Bagging (Bag), Blending (Blend), Weighted Average (Weighted Avg), Stacking, Gradient Boosting (GB), Decision Tree (DT), and Cat Boost (CAT) consistently achieving near-perfect scores (an F1 score of 0.998 or higher).
This robust performance underscores the suitability of ensemble methods and tree-based algorithms for the classification task on this dataset. Conversely, Logistic Regression (LR) and AdaBoost (ADA) exhibit comparatively lower performance metrics, suggesting potential limitations in capturing the complex patterns present in the dataset. The overlap among the top-performing models further emphasizes their stability and reliability across different feature sets.

Runtime Performance: Table VII presents the runtime performance of the various machine learning models on the CICIDS-2017 dataset, measured in seconds. Considering the importance of achieving optimal results while also minimizing computational overhead, runtime performance becomes a crucial factor in model selection. Among the models exhibiting perfect F1 scores (i.e., RF and DT), Decision Tree (DT) emerges as the most time-efficient option, requiring approximately four minutes for training and testing combined. This makes it an attractive choice for scenarios where computational resources are limited. Moving beyond models with perfect F1 scores, Logistic Regression (LR) stands out as the fastest option among those achieving relatively good performance across all metrics. Conversely, models like Bagging (Bag) and Blending (Blend) demonstrate significantly longer runtimes, exceeding two hours. While these models may offer competitive performance, their computational demands make them less practical for resource-constrained applications. We also emphasize that when an IDS is online (operating in real time), the ensemble models are already trained, so they can still be used for prediction.

TABLE VI: Performance of different models (individual ML models and different ensemble methods), organized by F1 score (highest to lowest), on the CICIDS-2017 dataset.
Model | ACC | PRE | REC | F1
Random Forest (RF) | 1.00 | 1.00 | 1.00 | 1.00
Bagging (Bag) | 0.998 | 1.00 | 1.00 | 1.00
Blending (Blend) | 0.998 | 1.00 | 1.00 | 1.00
Weighted Average (Weighted Avg) | 0.998 | 1.00 | 1.00 | 1.00
Stacking | 0.997 | 1.00 | 1.00 | 1.00
Gradient Boosting (GB) | 0.988 | 0.99 | 0.99 | 0.99
Decision Tree (DT) | 0.998 | 0.998 | 0.998 | 0.998
Cat Boost (CAT) | 0.998 | 0.998 | 0.998 | 0.998
Multi-Layer Perceptron (MLP) | 0.996 | 0.996 | 0.996 | 0.996
XGBoost (XGB) | 0.996 | 0.996 | 0.996 | 0.996
Average (Avg) | 0.996 | 0.997 | 0.996 | 0.996
Max Voting (Max Vot) | 0.926 | 0.89 | 0.93 | 0.90
Logistic Regression (LR) | 0.889 | 0.866 | 0.899 | 0.877
AdaBoost (ADA) | 0.891 | 0.81 | 0.89 | 0.85

TABLE VII: Model training and testing times (in seconds) for the CICIDS-2017 dataset. Logistic Regression and Decision Tree are the most time-efficient individual models, while Max Voting is the most time-efficient ensemble learning method.
Model | Time (Seconds)
Logistic Regression (LR) | 120.31
Decision Tree (DT) | 240.7
Max Voting (Max Vot) | 670.25
Random Forest (RF) | 1402.8
AdaBoost (ADA) | 2040.54
Average (Avg) | 2160.11
XGBoost (XGB) | 2220.00
Bagging (Bag) | 2305.31
Cat Boost (CAT) | 2640.02
Gradient Boosting (GB) | 3000.02
Multi-Layer Perceptron (MLP) | 4200.45
Weighted Average (Weighted Avg) | 5040.3
Blending (Blend) | 9600.52
Stacking | 9720.54

Fig. 9: Confusion matrix of Decision Tree (DT) on the CICIDS-2017 dataset. Most attacks are predicted very accurately (except Web Attack and Bot).

Additionally, it is worth noting that the dataset itself plays a significant role in determining runtime complexity. The CICIDS-2017 dataset is large and complex, which further accentuates the importance of efficient model selection.
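A minimal sketch of how the combined training-plus-testing times in Tables V and VII can be measured follows; the model and synthetic data are placeholders, not the paper's actual pipeline:

```python
# Illustrative timing of combined training and inference.
import time
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=10_000, n_features=20, random_state=42)
model = DecisionTreeClassifier()

start = time.perf_counter()
model.fit(X, y)      # training
model.predict(X)     # testing/inference
elapsed = time.perf_counter() - start
print(f"Train + test time: {elapsed:.2f} s")
```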
By analyzing the runtime performance of the various models, stakeholders gain valuable insights into the computational demands associated with each algorithm, enabling informed decisions regarding model deployment and scalability.

1) Confusion Matrices for Different Individual and Ensemble Models on the CICIDS-2017 Dataset: We next show confusion matrices for the 14 different learners tested to classify the various attacks in the CICIDS-2017 dataset. Because there is a higher number of attack classes here compared to RoEduNet-SIMARGL2021, the confusion matrices differ for most models. We now provide them for the different models along with their main insights.

(i) Confusion Matrices of Individual Models on CICIDS-2017: Figure 9 shows the confusion matrix of Decision Tree (DT) on the CICIDS-2017 dataset. It shows that most attacks are predicted very accurately (except Web Attack and Bot). On the other hand, Figure 10 shows the confusion matrix of Logistic Regression (LR) on CICIDS-2017, which exhibits much lower prediction accuracy.

Fig. 10: Confusion matrix of Logistic Regression (LR) on the CICIDS-2017 dataset. It shows much lower prediction accuracy for different attacks.

(ii) Confusion Matrices of Simple Ensemble Methods on CICIDS-2017: Figure 11 shows the confusion matrix of Max Voting (Max Vot) on CICIDS-2017, which shows lower prediction accuracy for several attacks. On the contrary, Figure 12 shows the confusion matrix of Weighted Averaging (Weighted Avg) on the CICIDS-2017 dataset. It shows that most attacks are predicted very accurately (except Web Attack and Bot). The Averaging simple ensemble method shows the same behaviour as Weighted Averaging (figure omitted).

Fig. 11: Confusion matrix of the Max Voting (Max Vot) simple ensemble technique on the CICIDS-2017 dataset. It has lower prediction accuracy for several attacks.

Fig. 12: Confusion matrix of the Weighted Averaging (Weighted Avg) ensemble technique on the CICIDS-2017 dataset. Most attacks are predicted very accurately (except Web Attack and Bot).

(iii) Confusion Matrices of Advanced Ensemble Methods on the CICIDS-2017 Dataset: Figures 13-14 show the confusion matrices of Bagging (Bag) and Blending (Blend) on the CICIDS-2017 dataset, respectively. They show that both ensemble methods predict most attacks very accurately (except Web Attack-XSS and Bot). Along the same lines, Figure 15 shows the confusion matrix for the Cat Boosting (CAT) ensemble technique on the CICIDS-2017 dataset. It has near-perfect performance for all intrusion classes except Bot. Similarly, Figure 16 shows that the Stacking ensemble method predicts most attacks very accurately (except Web Attack-XSS and Bot). On the other hand, the boosting ensemble learning techniques provide lower prediction accuracy for different classes on the CICIDS-2017 dataset (as shown in Figures 17-18). However, we emphasize that Extreme Gradient Boosting (XGB) has better performance compared to Adaptive Boosting (ADA).

Fig. 13: Confusion matrix of the Bagging (Bag) ensemble technique on the CICIDS-2017 dataset. Most attacks are predicted very accurately (except Web Attack-XSS and Bot).

G. Performance Enhancement Across Datasets

CICIDS-2017: Our analysis on the CICIDS-2017 dataset revealed significant gains achieved through the implementation of our framework. The best model was RF, which demonstrated a notable improvement in accuracy, precision, recall (achieving a perfect score of 1.000), and F1 score.
Furthermore, leveraging this base learner in conjunction with the best ensemble learning methods (including bagging, blending, weighted averaging, and stacking) yields near-perfect performance metrics, including accuracy and precision, while maintaining a perfect recall and F1 score.

RoEduNet-SIMARGL2021: Similarly, on the RoEduNet-SIMARGL2021 dataset, employing our framework yielded remarkable improvements. The best ensemble learning model is Averaging, which achieved perfect scores of 1.000 across all performance metrics, including accuracy, precision, recall, and F1 score. Similarly, Max Voting, Stacking, Bagging, Boosting, and Weighted Averaging yield almost perfect performance across all four metrics.

Fig. 14: Confusion matrix of the Blending (Blend) ensemble technique on the CICIDS-2017 dataset. Most attacks are predicted very accurately (except Web Attack-XSS and Bot).

Fig. 15: Confusion matrix of the Cat Boosting (CAT) ensemble technique on the CICIDS-2017 dataset. It has near-perfect performance for all intrusion classes except Bot.

Fig. 16: Confusion matrix of the Stacking ensemble technique on the CICIDS-2017 dataset. Most attacks are predicted very accurately (except Web Attack-XSS and Bot).

Fig. 17: Confusion matrix of the Adaptive Boosting (ADA) ensemble technique on the CICIDS-2017 dataset. It has the lowest performance across all models.

Overall, our comprehensive evaluation showcases the potential benefits of our proposed simple and advanced ensemble learning. The detailed breakdown of performance metrics for each dataset underscores the efficacy of our approach and its ability to enhance classification accuracy and reliability across diverse datasets.

We also emphasize that one strong point of this work is categorizing the different individual models and ensemble methods based on their confusion matrices (which reveal different aspects of a model's performance on different intrusion classes). In particular, these confusion matrices can give insights into which model should be used depending on the types of network attacks expected by the organization that the security analyst is monitoring. This can also lead to an "ensemble of ensembles," where we fuse different ensemble methods to confront different network attacks (where each ensemble method has strong performance on one or more of these attacks); a sketch of this idea is given after Table VIII. Having finished our detailed evaluation analysis on our two datasets, we present the main discussion and limitations of our work in the next section.

TABLE VIII: A summary of the top-5 models for the CICIDS-2017 and RoEduNet-SIMARGL2021 datasets.
Ranking | CICIDS-2017 | RoEduNet-SIMARGL2021
1st | Random Forest (RF) | Random Forest (RF)
2nd | Bagging (Bag) | Decision Tree (DT)
3rd | Blending (Blend) | Averaging (Avg)
4th | Weighted Averaging | Max Voting
5th | Stacking | Stacking
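As a minimal, hypothetical sketch of the "ensemble of ensembles" idea mentioned above, two already strong ensemble methods can themselves be fused through soft voting; the choice of members here is illustrative, not prescribed by the paper:

```python
# Illustrative fusion of two ensemble methods via soft voting.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier, RandomForestClassifier, VotingClassifier

X, y = make_classification(n_samples=5_000, n_features=20, random_state=0)

fused = VotingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
        ("bag", BaggingClassifier(n_estimators=10, random_state=0)),
    ],
    voting="soft",  # average the predicted probabilities of the two ensembles
)
fused.fit(X, y)
print(fused.predict(X[:5]))
```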
Fig. 18: Confusion matrix of the Extreme Gradient Boosting (XGB) ensemble technique on the CICIDS-2017 dataset. It has lower performance compared to the other advanced ensemble methods (except ADA).

VI. LIMITATIONS, DISCUSSION, AND FUTURE DIRECTIONS

A. Discussion

1) Significance of our Framework: In the contemporary landscape of flourishing information, the frequency of network attacks is expected to rise (as evidenced by recent studies such as the one conducted by the Center for Strategic & International Studies (CSIS) [67]). Despite the evolution of Intrusion Detection Systems (IDS), security analysts still face the challenge of verifying potential attack incidents in this rapidly evolving environment. Hence, having a reliable framework for intrusion detection systems can significantly mitigate this challenge by reducing the number of false positives that analysts must scrutinize and enabling a focused analysis of critical traffic data. The framework presented in our study contributes to addressing this issue by enhancing the performance metrics (including accuracy, recall, precision, and F1 score) of intrusion detection systems, which is pivotal for ensuring better network security in modern systems. Our framework was also evaluated using confusion matrices (which show the intrusion-specific performance of the different individual methods and ensemble approaches). The runtime analysis can likewise help in choosing the best model, depending on the needs of the security analyst.
2) Summary of Results: All findings presented in this paper are succinctly outlined in Section V, addressing key questions readers may have, such as determining the optimal AI model and ensemble technique and assessing their viability. Section V provides a comprehensive overview of the results, highlighting the performance metrics and crucial aspects of the ensemble techniques, including runtime and confusion matrices, employed in evaluating the framework. It also provides a comprehensive evaluation of the two datasets utilized, including RoEduNet-SIMARGL2021, which, to the best of our knowledge, has been evaluated in this context by only a very limited number of works in the existing literature [68].

However, our current work differs from the prior work [68] in several respects. First, our work considers all the individual classes of CICIDS-2017 (a total of 15 classes) and RoEduNet-SIMARGL2021 (a total of 4 classes), whereas the previous work simplified the problem for CICIDS-2017 by grouping the attacks into 7 classes and by not using the malware class for RoEduNet-SIMARGL2021. Moreover, the current work presents in detail the confusion matrices for each case scenario, shedding light on the classes that ensemble learning can identify best and the other attacks whose detection might need enhancement. This important analysis is not present in the previous work [68], since it considered the problem from a holistic point of view. Furthermore, the current work employs the blending technique, which is not present in the prior work [68]. Finally, the work [68] focused on hierarchical ensemble learning, building a two-level ensemble learning framework for network intrusion detection tasks, which differs from our current focus on a comprehensive comparison of popular ensemble methods.

Furthermore, this study presents an extensive evaluation that distinguishes it from previous works by analyzing 14 models/ensemble methods across two distinct datasets, yielding results for Accuracy, Precision, Recall, F1, confusion matrices, and runtime. Notably, we achieve near-perfect results for several models in terms of F1 score, emphasizing the significance of these metrics for Intrusion Detection Systems (IDS), given the imperative for security analysts, stakeholders, and users to accurately and rapidly identify potential threats, since undetected attacks can pose substantial risks.

It is worth emphasizing that we have taken the extra step of making our code open source. Designed to be easily adaptable for use with other datasets and further analysis, it does not constitute a deployable solution for production, as it has not undergone extensive testing or validation by an authoritative entity. Instead, it serves as a proof of concept highlighting the benefits of our proposed framework and represents a crucial step towards enhancing the field of AI-based network IDS.

Performance Metrics: This part focuses on the Accuracy, Precision, Recall, and F1 scores utilized to assess the individual models and ensemble techniques. Notably, superior results, indicated by higher metric scores, were observed for models and ensemble techniques including Random Forest (RF), Decision Tree (DT), Weighted Average (Weighted Avg), Stacking, Bagging (Bag), Blending (Blend), AdaBoost (ADA), Cat Boost (CAT), Gradient Boosting (GB), and XGBoost (XGB).

Runtime: The runtime analysis involves evaluating the execution times of the 14 individual models and ensemble techniques utilized. In Section V, runtime tables are provided, arranged from the fastest to the slowest models. Following this analysis, the fastest models overall include LR, DT, MLP, and RF. Models with an average runtime encompass Bag, XGB, ADA, CAT, and GB. Conversely, the slowest models comprise Blend, Avg, Stacking, Max Vot, and Weighted Avg.

Optimal Models: By intersecting these results (performance metrics and runtime), we identify the optimal models, including DT, RF, Bag, Blend, ADA, CAT, and GB. Subsequently, Stacking, Avg, Weighted Avg, and Max Vot are also recognized, albeit with slower runtimes. These models demonstrated superior performance across the metrics outlined in this study.

Superiority of Ensemble Learning Methods: Table VIII presents a summary of the top-5 models (in terms of F1 score) for the CICIDS-2017 and RoEduNet-SIMARGL2021 datasets. We notice that the ensemble methods have superiority over the individual models for both datasets. In particular, for the CICIDS-2017 dataset, all of the top-5 models are ensemble methods. On the other hand, for the RoEduNet-SIMARGL2021 dataset, we have four ensemble methods in the top-5 (where Decision Tree (DT) is the only individual model in this list).

3) Random Forest Assessment: The findings highlight Random Forest (RF) as one of the top-performing ensemble techniques across both datasets. However, its applicability may be limited by a tendency towards overfitting and bias when deployed in diverse scenarios. For instance, concerning the CICIDS-2017 dataset, the most effective base learner is Decision Tree (DT), which achieved an Accuracy of 0.998, a Precision of 1.0, a Recall of 1.0, and an F1 score of 1.0. Similarly, for the RoEduNet-SIMARGL2021 dataset, the next best-performing base learner is Decision Tree (DT), attaining an Accuracy of 1.0, a Precision of 1.0, a Recall of 1.0, and an F1 score of 1.0.

4) Advantages of the Ensemble Learning Framework: Our framework offers versatility in its ensemble learning construction approach; its setup facilitates the deployment of different base learners and ensemble methods.

B. Limitations

1) Dataset Analysis and Biases: The experiments conducted enable us to draw several noteworthy conclusions regarding the datasets employed. Firstly, it is evident that the models within our framework achieved superior gains when applied to the CICIDS-2017 and RoEduNet-SIMARGL2021 datasets. Each dataset exhibits distinct characteristics. For example, RoEduNet-SIMARGL2021 comprises nearly 30 million data points with approximately 20 feature columns, whereas CICIDS-2017 contains almost 2 million data points but approximately 70 feature columns (refer to Table III). This discrepancy accounts for the observed differences in runtime between the datasets.
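A minimal sketch of how the per-class distributions reported in Table III can be inspected follows; the CSV file name and the label column name are assumptions, not the actual artifact names from this work:

```python
# Illustrative inspection of the class distribution of a dataset.
import pandas as pd

df = pd.read_csv("cicids2017.csv")                      # hypothetical file name
dist = df["Label"].value_counts(normalize=True) * 100   # assumed label column
print(dist.round(3))                                    # per-class percentage
```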
Additionally, the disparity in data volume and class count is noteworthy, with RoEduNet-SIMARGL2021 featuring four prediction classes and CICIDS-2017 incorporating a larger number of prediction classes. Interestingly, AI models appear to readily learn patterns in the RoEduNet-SIMARGL2021 dataset owing to its extensive size and fewer prediction classes. Conversely, despite the higher number of prediction classes in CICIDS-2017, models demonstrate adeptness and achieve commendable scores, possibly due to its heavily unbalanced class distribution (as depicted in Table III, where four classes combined represent less than 1% of the entire dataset). We stress that this is close to a real-world scenario, since most of the traffic is normal traffic. This limitation underscores the necessity for future research to explore alternative datasets, or to employ uncalibrated models within ensemble learning, to broaden the benchmarking and testing within our framework.

C. Future Directions

While our work represents a foundational step towards advancing AI-based Intrusion Detection Systems (IDS), there are numerous avenues for future exploration and refinement within our framework. Expanding our framework to encompass additional datasets, diverse AI models, and a broader array of ensemble methods holds promise for creating a more comprehensive and insightful framework. Another promising avenue for future research involves delving into multi-level ensemble learning approaches, which utilize multi-level classification techniques to further enhance detection accuracy and robustness. Additionally, exploring a wider range of feature selection methods could provide valuable insights into optimizing model performance and interpretability.

Moreover, integrating explainable AI (XAI) frameworks to generate explanations for ensemble methods presents an intriguing direction for future investigation. By providing transparent and interpretable explanations for model predictions, XAI techniques could enhance the trust and understanding of IDS systems [69].

Furthermore, the ultimate goal of our ongoing efforts includes the implementation of real-time capabilities and validation through collaboration with security experts and analysts. This collaboration aims to gather invaluable insights and feedback, leading to continuous improvements and real-world applicability of our framework.

VII. CONCLUSION

The primary goal of a security intrusion detection tool is to serve as a robust shield against potential intruders. Leveraging artificial intelligence (AI) can significantly enhance the automation and effectiveness of these tools. The increasing frequency of intrusions in networked systems has driven extensive research into developing AI techniques for intrusion detection systems (IDS). While various AI models have been deployed for this purpose, each model has its own strengths and weaknesses, presenting a challenge in selecting the most suitable model for a given dataset. To address this challenge, combining multiple AI models can substantially improve their overall performance and applicability in network intrusion detection. This paper aims to bridge this crucial gap by evaluating a diverse array of ensemble methods for IDS. Specifically, we present a comprehensive comparative study of individual models and both simple and advanced ensemble learning frameworks for network intrusion detection tasks. Our approach involves training base learners and ensemble methods to generate the evaluation metrics.
We present results for 14 combinations of individual and ensemble models within our framework, utilizing various boosting, stacking, and blending methods on diverse base learners. Evaluation is conducted on two network intrusion datasets, each possessing unique characteristics. Our analysis categorizes the AI models based on their performance metrics (including accuracy, precision, recall, and F1-score) and runtime, highlighting the advantages of ensemble learning across various setups for two very important datasets.

The best models for each dataset were recommended based on their performance metrics. For the CICIDS-2017 dataset, the top three ensemble models were Random Forest (RF), Bagging (Bag), and Blending (Blend). These models achieved exceptional results, with the Random Forest model achieving perfect scores in Accuracy (ACC), Precision (PRE), Recall (REC), and F1 Score. The Bagging and Blending models also performed remarkably well, achieving near-perfect metrics across the board (see Table VIII). Similarly, for the RoEduNet-SIMARGL2021 dataset, the top three models were Random Forest (RF), Decision Tree (DT), and Bagging (Bag). Both the Random Forest and Decision Tree models achieved perfect scores in all performance metrics, while the Bagging model performed almost perfectly (see Table VIII).

Our evaluation results show that using ensemble learning was beneficial, as it significantly enhanced the performance of the models, leading to high accuracy, precision, recall, and F1 scores across both datasets. We contribute to the community by providing our source code, offering a foundational ensemble learning framework for network intrusion detection that can be expanded with new models and datasets. We also provide insights into the best models for each dataset, highlighting common and distinct behaviors among them through confusion matrices, which influence their performance and results. We conclude with an in-depth discussion of our main findings and the primary benefits of our framework. This study represents a significant advancement in utilizing ensemble learning methods for network Intrusion Detection Systems (IDS), achieved through comprehensive evaluations and comparisons of various metrics to assess the effectiveness of these ensemble methods.

REFERENCES
[1] S. Northcutt and J. Novak, Network Intrusion Detection. Sams Publishing, 2002.
[2] B. Mukherjee, L. T. Heberlein, and K. N. Levitt, "Network intrusion detection," IEEE Network, vol. 8, no. 3, pp. 26–41, 1994.
[3] G. Apruzzese, M. Andreolini, L. Ferretti, M. Marchetti, and M. Colajanni, "Modeling realistic adversarial attacks against network intrusion detection systems," Digital Threats: Research and Practice (DTRAP), vol. 3, no. 3, pp. 1–19, 2022.
[4] A. L. Buczak and E. Guven, "A survey of data mining and machine learning methods for cyber security intrusion detection," IEEE Communications Surveys & Tutorials, vol. 18, no. 2, pp. 1153–1176, 2015.
[5] A. S. Dina and D. Manivannan, "Intrusion detection based on machine learning techniques in computer networks," Internet of Things, vol. 16, p. 100462, 2021.
[6] J. Kim, N. Shin, S. Y. Jo, and S. H. Kim, "Method of intrusion detection using deep neural network," in 2017 IEEE International Conference on Big Data and Smart Computing (BigComp). IEEE, 2017, pp. 313–316.
[7] C. Tang, N. Luktarhan, and Y. Zhao, "Saae-dnn: Deep learning method on intrusion detection," Symmetry, vol. 12, no. 10, p. 1695, 2020.
[8] M. A. Ferrag, L. Maglaras, A. Ahmim, M. Derdour, and H.
Janicke, "Rdtids: Rules and decision tree-based intrusion detection system for internet-of-things networks," Future Internet, vol. 12, no. 3, p. 44, 2020.
[9] M. Al-Omari, M. Rawashdeh, F. Qutaishat, M. Alshira'H, and N. Ababneh, "An intelligent tree-based intrusion detection model for cyber security," Journal of Network and Systems Management, vol. 29, no. 2, pp. 1–18, 2021.
[10] T. G. Nick and K. M. Campbell, "Logistic regression," Topics in Biostatistics, pp. 273–301, 2007.
[11] R. Panigrahi, S. Borah, M. Pramanik, A. K. Bhoi, P. Barsocchi, S. R. Nayak, and W. Alnumay, "Intrusion detection in cyber–physical environment using hybrid naïve bayes—decision table and multi-objective evolutionary feature selection," Computer Communications, vol. 188, pp. 133–144, 2022.
[12] A. K. Balyan, S. Ahuja, U. K. Lilhore, S. K. Sharma, P. Manoharan, A. D. Algarni, H. Elmannai, and K. Raahemifar, "A hybrid intrusion detection model using ega-pso and improved random forest method," Sensors, vol. 22, no. 16, p. 5986, 2022.
[13] S. Waskle, L. Parashar, and U. Singh, "Intrusion detection system using pca with random forest approach," in 2020 International Conference on Electronics and Sustainable Communication Systems (ICESC). IEEE, 2020, pp. 803–808.
[14] S. Arisdakessian, O. A. Wahab, A. Mourad, H. Otrok, and M. Guizani, "A survey on iot intrusion detection: Federated learning, game theory, social psychology and explainable ai as future directions," IEEE Internet of Things Journal, 2022.
[15] S. I. Sabev, "Integrated approach to cyber defence: Human in the loop. technical evaluation report," Information & Security: An International Journal, vol. 44, pp. 76–92, 2020.
[16] S. D. DCunha, "Is AI Shifting The Human-In-The-Loop Model In Cybersecurity?" https://datatechvibe.com/ai/is-ai-shifting-the-human-in-the-loop-model-in-cybersecurity/, 2017, [Online; accessed 21-October-2021].
[17] J. Mijalkovic and A. Spognardi, "Reducing the false negative rate in deep learning based network intrusion detection systems," Algorithms, vol. 15, no. 8, p. 258, 2022.
[18] N. H. Al-A'araji, S. O. Al-Mamory, and A. H. Al-Shakarchi, "Classification and clustering based ensemble techniques for intrusion detection systems: A survey," in Journal of Physics: Conference Series, vol. 1818, no. 1. IOP Publishing, 2021, p. 012106.
[19] A. A. Aburomman and M. B. I. Reaz, "A survey of intrusion detection systems based on ensemble and hybrid classifiers," Computers & Security, vol. 65, pp. 135–152, 2017.
[20] B. A. Tama and S. Lim, "Ensemble learning for intrusion detection systems: A systematic mapping study and cross-benchmark evaluation," Computer Science Review, vol. 39, p. 100357, 2021.
[21] R. Lazzarini, H. Tianfield, and V. Charissis, "A stacking ensemble of deep learning models for iot intrusion detection," Knowledge-Based Systems, vol. 279, p. 110941, 2023. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S0950705123006913
[22] A. Mahfouz, A. Abuhussein, D. Venugopal, and S. Shiva, "Ensemble classifiers for network intrusion detection using a novel network attack dataset," Future Internet, vol. 12, no. 11, 2020. [Online]. Available: https://www.mdpi.com/1999-5903/12/11/180
[23] N. Thockchom, M. Singh, and U. Nandi, "A novel ensemble learning-based model for network intrusion detection," Complex & Intelligent Systems, vol. 9, 04 2023.
[24] Y. Mirsky, T. Doitshman, Y. Elovici, and A. Shabtai, "Kitsune: An ensemble of autoencoders for online network intrusion detection," 2018.
[25] N. H. Al-A'araji, S. O.
Al-Mamory, and A. H. Al-Shakarchi, "Classification and clustering based ensemble techniques for intrusion detection systems: A survey," Journal of Physics: Conference Series, vol. 1818, no. 1, p. 012106, Mar 2021. [Online]. Available: https://dx.doi.org/10.1088/1742-6596/1818/1/012106
[26] R. Caruana, A. Niculescu-Mizil, G. Crew, and A. Ksikes, "Ensemble selection from libraries of models," 09 2004.
[27] A. Zainal, M. Maarof, and S. M. Shamsuddin, "Ensemble classifiers for network intrusion detection system," Journal of Information Assurance and Security, vol. 4, pp. 217–225, 07 2009.
[28] A. Z. Kiflay, A. Tsokanos, and R. Kirner, "A network intrusion detection system using ensemble machine learning," in 2021 International Carnahan Conference on Security Technology (ICCST), 2021, pp. 1–6.
[29] S. Das, S. Saha, A. T. Priyoti, E. K. Roy, F. T. Sheldon, A. Haque, and S. Shiva, "Network intrusion detection and comparative analysis using ensemble machine learning and feature selection," IEEE Transactions on Network and Service Management, vol. 19, no. 4, pp. 4821–4833, 2022.
[30] H. Zhang, J.-L. Li, X.-M. Liu, and C. Dong, "Multi-dimensional feature fusion and stacking ensemble mechanism for network intrusion detection," Future Generation Computer Systems, vol. 122, pp. 130–143, 2021. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S0167739X2100114X
[31] Y.-F. Hsu, Z. He, Y. Tarutani, and M. Matsuoka, "Toward an online network intrusion detection system based on ensemble learning," in 2019 IEEE 12th International Conference on Cloud Computing (CLOUD), 2019, pp. 174–178.
[32] Y. Alotaibi and M. Ilyas, "Ensemble-learning framework for intrusion detection to enhance internet of things' devices security," Sensors, vol. 23, no. 12, 2023. [Online]. Available: https://www.mdpi.com/1424-8220/23/12/5568
[33] R. Kumar Singh Gautam and E. A. Doegar, "An ensemble approach for intrusion detection system using machine learning algorithms," in 2018 8th International Conference on Cloud Computing, Data Science & Engineering (Confluence), 2018, pp. 14–15.
[34] T. Divyasree and K. Sherly, "A network intrusion detection system based on ensemble cvm using efficient feature selection approach," Procedia Computer Science, vol. 143, pp. 442–449, 2018, 8th International Conference on Advances in Computing & Communications (ICACC-2018). [Online]. Available: https://www.sciencedirect.com/science/article/pii/S1877050918321136
[35] M.-E. Mihailescu, D. Mihai, M. Carabas, M. Komisarek, M. Pawlicki, W. Hołubowicz, and R. Kozik, "The proposition and evaluation of the roedunet-simargl2021 network intrusion detection dataset," Sensors, vol. 21, no. 13, p. 4319, 2021.
[36] R. Panigrahi and S. Borah, "A detailed analysis of cicids2017 dataset for designing intrusion detection systems," International Journal of Engineering & Technology, vol. 7, no. 3.24, pp. 479–482, 2018.
[37] B. E. Strom, A. Applebaum, D. P. Miller, K. C. Nickels, A. G. Pennington, and C. B. Thomas, "Mitre att&ck: Design and philosophy," Technical report. The MITRE Corporation, 2018.
[38] MITRE, "Malware repository," https://attack.mitre.org/datasources/DS0004/, 2021, [Online; accessed 30-April-2024].
[39] C. B. Lee, C. Roedel, and E. Silenok, "Detection and characterization of port scan attacks," University of California, Department of Computer Science and Engineering, 2003.
[40] Kurniabudi, D. Stiawan, Darmawijoyo, M. Y. Bin Idris, A. M. Bamhdi, and R.
Budiarto, "Cicids-2017 dataset feature analysis with information gain for anomaly detection," IEEE Access, vol. 8, pp. 132911–132921, 2020.
[41] MITRE ATT&CK, "Drive-by Compromise," https://attack.mitre.org/techniques/T1189/, 2023, [Online; accessed 21-October-2023].
[42] Y. Chen, Q. Lin, W. Wei, J. Ji, K.-C. Wong, and C. A. Coello Coello, "Intrusion detection using multi-objective evolutionary convolutional neural network for internet of things in fog computing," Knowledge-Based Systems, vol. 244, p. 108505, 2022. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S0950705122002179
[43] V. Gorodetski and I. Kotenko, "Attacks against computer network: Formal grammar-based framework and simulation tool," in International Workshop on Recent Advances in Intrusion Detection. Springer, 2002, pp. 219–238.
[44] M. Skwarek, M. Korczynski, W. Mazurczyk, and A. Duda, "Characterizing vulnerability of dns axfr transfers with global-scale scanning," in 2019 IEEE Security and Privacy Workshops (SPW). IEEE, 2019, pp. 193–198.
[45] A. Khan, H. Kim, and B. Lee, "M2mon: Building an mmio-based security reference monitor for unmanned vehicles," 2021.
[46] S. R. Hussain, I. Karim, A. A. Ishtiaq, O. Chowdhury, and E. Bertino, "Noncompliance as deviant behavior: An automated black-box noncompliance checker for 4g lte cellular devices," in Proceedings of the 2021 ACM SIGSAC Conference on Computer and Communications Security, 2021, pp. 1082–1099.
[47] O. Mirzaei, R. Vasilenko, E. Kirda, L. Lu, and A. Kharraz, "Scrutinizer: Detecting code reuse in malware via decompilation and machine learning," in Detection of Intrusions and Malware, and Vulnerability Assessment: 18th International Conference, DIMVA 2021, Virtual Event, July 14–16, 2021, Proceedings 18. Springer, 2021, pp. 130–150.
[48] S. Lukacs, D. H. Lutas, A. V. Colesa et al., "Strongly isolated malware scanning using secure virtual containers," Aug. 25 2015, US Patent 9,117,081.
[49] A. Kim, M. Park, and D. H. Lee, "Ai-ids: Application of deep learning to real-time web intrusion detection," IEEE Access, vol. 8, pp. 70245–70261, 2020.
[50] L. Dhanabal and S. Shantharajah, "A study on nsl-kdd dataset for intrusion detection system based on classification algorithms," International Journal of Advanced Research in Computer and Communication Engineering, vol. 4, no. 6, pp. 446–452, 2015.
[51] "Flow information elements - nProbe 10.1 documentation." [Online]. Available: https://www.ntop.org/guides/nprobe/flow_information_elements.html
[52] A. H. Lashkari, "CICFlowMeter/ReadMe.txt at master · ahlashkari/CICFlowMeter," Jun 2021. [Online]. Available: https://github.com/ahlashkari/CICFlowMeter/blob/master/ReadMe.txt
[53] B. Claise, "Cisco systems netflow services export version 9," Tech. Rep., 2004.
[54] I. Sharafaldin, A. Gharib, A. H. Lashkari, and A. A. Ghorbani, "Towards a reliable intrusion detection benchmark dataset," Software Networking, vol. 2018, no. 1, pp. 177–200, 2018.
[55] C. A. Stewart, V. Welch, B. Plale, G. C. Fox, M. Pierce, and T. Sterling, "Indiana university pervasive technology institute," Tech. Rep., 2017.
[56] J. O. Mebawondu, O. D. Alowolodu, J. O. Mebawondu, and A. O. Adetunmbi, "Network intrusion detection system using supervised learning paradigm," Scientific African, vol. 9, p. e00497, 2020.
[57] Y.-Y. Song and L. Ying, "Decision tree methods: applications for classification and prediction," Shanghai Archives of Psychiatry, vol. 27, no. 2, p.
130, 2015.
[58] S. Dreiseitl and L. Ohno-Machado, "Logistic regression and artificial neural network classification models: a methodology review," Journal of Biomedical Informatics, vol. 35, no. 5-6, pp. 352–359, 2002.
[59] W. Li, P. Yi, Y. Wu, L. Pan, and J. Li, "A new intrusion detection system based on knn classification algorithm in wireless sensor network," Journal of Electrical and Computer Engineering, vol. 2014, 2014.
[60] A. V. Dorogush, V. Ershov, and A. Gulin, "Catboost: gradient boosting with categorical features support," arXiv preprint arXiv:1810.11363, 2018.
[61] D. Jin, Y. Lu, J. Qin, Z. Cheng, and Z. Mao, "Swiftids: Real-time intrusion detection system based on lightgbm and parallel intrusion detection mechanism," Computers & Security, vol. 97, p. 101984, 2020.
[62] A. Yulianto, P. Sukarno, and N. A. Suwastika, "Improving adaboost-based intrusion detection system (ids) performance on cic ids 2017 dataset," in Journal of Physics: Conference Series, vol. 1192. IOP Publishing, 2019, p. 012018.
[63] A. Natekin and A. Knoll, "Gradient boosting machines, a tutorial," Frontiers in Neurorobotics, vol. 7, p. 21, 2013.
[64] S. S. Dhaliwal, A.-A. Nahid, and R. Abbas, "Effective intrusion detection system using xgboost," Information, vol. 9, no. 7, p. 149, 2018.
[65] T. G. Dietterich, "Ensemble methods in machine learning," in International Workshop on Multiple Classifier Systems. Springer, 2000, pp. 1–15.
[66] M. Zounemat-Kermani, O. Batelaan, M. Fadaee, and R. Hinkelmann, "Ensemble machine learning paradigms in hydrology: A review," Journal of Hydrology, vol. 598, p. 126266, 2021.
[67] Sectigo Store Insights, "42 Cyber Attack Statistics by Year: A Look at the Last Decade," https://sectigostore.com/blog/42-cyber-attack-statistics-by-year-a-look-at-the-last-decade/, February 2020, [Online; accessed 10-March-2023].
[68] O. Arreche, I. Bibers, and M. Abdallah, "A two-level ensemble learning framework for enhancing network intrusion detection systems," IEEE Access, vol. 12, pp. 83830–83857, 2024.
[69] B. Mahbooba, M. Timilsina, R. Sahal, and M. Serrano, "Explainable artificial intelligence (xai) to enhance trust management in intrusion detection systems using decision tree model," Complexity, vol. 2021, 2021.

APPENDIX A
AI MODELS AND HYPER-PARAMETERS

We present the hyperparameters of the various AI models and ensemble methods employed in this study.

A. Details of AI Models and Hyperparameters

1) Base Models: First, we outline the primary details of the base models.

Logistic Regression (LR): We employed Logistic Regression with its parameter configuration left at the default settings.

Decision Tree (DT): We utilized the Decision Tree classifier with its parameter configuration left at the default settings.

Multi-layer Perceptron (MLP): We utilized the MLP classifier with the following configuration. The MLP architecture comprises two hidden layers, each containing 50 neurons, using the Rectified Linear Unit (ReLU) activation function. We employed the Adaptive Moment Estimation (Adam) solver for optimization, with an L2 regularization term (alpha) set to 0.0001. The batch size was adjusted dynamically based on the dataset size. Additionally, the learning rate was kept constant at 0.001 throughout training, with a maximum of 1000 iterations. The random seed was fixed at 42 for reproducibility. Early stopping was disabled, and progress messages were printed during training.
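A sketch of this MLP configuration, expressed with scikit-learn's MLPClassifier and mirroring the stated hyperparameters, follows (the exact class used in the original code base is not stated, so this is an assumed but faithful rendering):

```python
# MLP configuration matching the hyperparameters described above.
from sklearn.neural_network import MLPClassifier

mlp = MLPClassifier(
    hidden_layer_sizes=(50, 50),   # two hidden layers of 50 neurons each
    activation="relu",
    solver="adam",
    alpha=0.0001,                  # L2 regularization term
    batch_size="auto",             # adjusted based on dataset size
    learning_rate_init=0.001,      # constant learning rate of 0.001
    max_iter=1000,
    random_state=42,               # fixed seed for reproducibility
    early_stopping=False,
    verbose=True,                  # print progress messages during training
)
```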
2) Ensemble Methods: Next, we present the key details of our ensemble methods.

AdaBoost (ADA): AdaBoost was employed with its parameter configuration left at the default settings.

Extreme Gradient Boosting (XGB): XGB was utilized as a classifier with the learning rate set to 0.1 and the loss function set to multi:softmax.

CatBoost (CAT): CatBoost was utilized as a classifier with its parameters left at the default settings.

Max Voting: The next classifier employed is Voting, a simple ensemble method that aggregates each model's decision. In this implementation, a VotingClassifier is instantiated with two base classifiers, Logistic Regression (LR) and Decision Tree (DT), using hard voting.

Average: Additionally, the Average classifier was employed. This approach involves initializing three base classifiers: Decision Tree, k-Nearest Neighbors, and Random Forest. These models are trained on the training data, and the predictions from each model (pred1, pred2, pred3) are then averaged to generate the final prediction using a simple averaging technique.
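A minimal sketch of this averaging scheme follows; the synthetic data is a placeholder, and averaging is performed over the models' predicted class probabilities (one reasonable reading of "averaged predictions"):

```python
# Averaging ensemble over three base classifiers, as described above.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2_000, n_features=20, random_state=1)
models = [DecisionTreeClassifier(), KNeighborsClassifier(), RandomForestClassifier()]
for m in models:
    m.fit(X, y)

# pred1, pred2, pred3 in the text correspond to these probability matrices.
avg_proba = np.mean([m.predict_proba(X) for m in models], axis=0)
final_pred = np.argmax(avg_proba, axis=1)
```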
Weighted Average: The Weighted Average method was also utilized. It involves initializing three base classifiers: Decision Tree, k-Nearest Neighbors, and Random Forest. An ensemble is then created using a VotingClassifier, with the classifiers assigned weights based on their importance. In this case, Decision Tree is assigned a weight of 0.4, k-Nearest Neighbors a weight of 0.3, and Random Forest a weight of 0.3.

Bagging: Bagging, a class of ensemble methods, was also utilized. This method involves dividing the dataset into subsets with replacement and using these subsets as input data for diverse base models. Subsequently, the predictions from each base model are aggregated to reach a final decision. In this implementation, a Bagging Classifier was instantiated with various base models, including Random Forest, the MLP classifier, Logistic Regression, and the Decision Tree classifier. The number of estimators for the Bagging Classifier was set to the total number of base models, ensuring that each base model contributes to the ensemble's prediction.

Random Forest (RF): One ensemble classifier utilized for detecting malicious samples in network traffic was Random Forest (RF). The hyperparameters employed for this classifier are as follows: n_estimators (the number of trees) was set to 100, the maximum tree depth was set to 10, the minimum number of samples required to split an internal node was set to 2, and the remaining parameters were left at their defaults.

Blending: Blending, another class of ensemble methods, was employed. This method uses a holdout (validation) set drawn from the training set to make predictions. The process involves splitting the training set into training and validation sets, fitting models on the training set, and making predictions on the validation and test sets. The validation set and its predictions are then used to build a new model, which makes the final decision on the test set. In this implementation, the blending method was applied using several base models, including Random Forest, Multi-layer Perceptron, Logistic Regression, and Decision Tree. The predictions from these base models on the validation set are used to train the final estimator, which makes the final decision.

Stacking: Lastly, Stacking, another class of ensemble methods, was employed. This method stacks the decisions of the base models, using their outcomes to create a new dataset, on which a final model is trained to make the final decision. In this implementation, a StackingClassifier was instantiated with several base models, including Random Forest, Multi-layer Perceptron, Logistic Regression, and Decision Tree. The predictions from these base models are then used to train the final estimator, which makes the final decision.
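A minimal sketch of this stacking setup follows; the final estimator shown (logistic regression, scikit-learn's default choice) is an assumption, as the text does not name it:

```python
# Stacking with the four base models named above.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2_000, n_features=20, random_state=2)

stack = StackingClassifier(
    estimators=[
        ("rf", RandomForestClassifier()),
        ("mlp", MLPClassifier(max_iter=500)),
        ("lr", LogisticRegression(max_iter=1000)),
        ("dt", DecisionTreeClassifier()),
    ],
    final_estimator=LogisticRegression(max_iter=1000),  # assumed meta-learner
)
stack.fit(X, y)
```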
PubGraph: A Large-Scale Scientific Knowledge Graph

Kian Ahrabian, Xinwei Du, Richard Delwin Myloth, Arun Baalaaji Sankar Ananthan, and Jay Pujara
University of Southern California, Information Sciences Institute, Marina del Rey CA 90292, USA
{ahrabian,xinweidu,myloth,arunbaal,jpujara}@usc.edu

Abstract. Research publications are the primary vehicle for sharing scientific progress in the form of new discoveries, methods, techniques, and insights. Unfortunately, the lack of a large-scale, comprehensive, and easy-to-use resource capturing the myriad relationships between publications, their authors, and venues presents a barrier to applications for gaining a deeper understanding of science. In this paper, we present PubGraph, a new resource for studying scientific progress that takes the form of a large-scale knowledge graph (KG) with more than 385M entities, 13B main edges, and 1.5B qualifier edges. PubGraph is comprehensive and unifies data from various sources, including Wikidata, OpenAlex, and Semantic Scholar, using the Wikidata ontology. Beyond the metadata available from these sources, PubGraph includes outputs from auxiliary community detection algorithms and large language models. To further support studies on reasoning over scientific networks, we create several large-scale benchmarks extracted from PubGraph for the core task of knowledge graph completion (KGC). These benchmarks present many challenges for knowledge graph embedding models, including an adversarial community-based KGC evaluation setting, zero-shot inductive learning, and large-scale learning. All of the aforementioned resources are accessible at https://pubgraph.isi.edu/ and released under the CC-BY-SA license. We plan to update PubGraph quarterly to accommodate the release of new publications.

Keywords: Scientific Knowledge Graphs · Knowledge Graph Completion · Inductive Learning

1 Introduction

Scientific progress takes many forms, from discovering new species to repurposing extant models for novel tasks. Innovation in science has been studied from a variety of perspectives, including the combination of scholarly domains [12,28], sociological factors [8], and analogical reasoning [13,17]. However, many studies of this phenomenon have been limited due to the difficulty in finding and using large-scale data for the domain. In this paper, we address this obstacle by introducing PubGraph, a knowledge graph (KG) with new resources and benchmarks, enabling the study of scientific research at scale using structural patterns
Moreover, besides the metadata information available in OpenAlex, PubGraph provides outputs from auxiliary community detection algorithms and large language models to further assist future studies of scien- tific articles. Fig. 1 illustrates an overview of PubGraph schema. In this paper, we describe the methodology used to construct PubGraph, i.e., the ontological choices made for mapping OpenAlex to Wikidata, the model choices to extract outputs from auxiliary models, and the entity resolution procedure for mapping OpenAlex articles to S2AG and Wikidata. One of the essential parts of studying scientific progress is understanding and reasoning about connections between ideas and discoveries. However, there is a shortage of benchmarks that could be used to study such topics. In the past, citations have proven to be crucial in studying publications and their im- pact [22]. Prior works have also studied tasks on citations such as intent classi- fication [5,11,16], recommendation [3,9], and prediction [7,20]. In this work, we introduce new large-scale benchmarks for finding connections among scientific works framed as a KGC task. The KGC task requires models to predict a target entity, given a source entity and a relation. The aim of this task is to support the study of citations from a structural perspective in both transductive, i.e., all nodes are known, and inductive, i.e., evaluation nodes are unseen, settings. Moreover, we also identify a community-based adversarial evaluation setting that mitigates the influence of random negative sampling in the evaluation phase of large-scale KGs. The contributions of this work are summarized as follows: 1. Introducing PubGraph, a billion-scale, multi-relational KG built on top of the OpenAlex catalog 2. Mapping the OpenAlex metadata to Wikidata ontology 3. Connecting two other large-scale scholarly metadata repositories, S2AG and Wikidata, to make PubGraph a unifying and comprehensive resource 4. Introducing large-scale extrapolated KGC benchmarks for KG models in both transductive and inductive settings 5. Identifying challenging adversarial evaluation settings for KGC benchmarks 2 Building PubGraph The primary source for creating PubGraph is the metadata in the OpenAlex catalog that we map to the Wikidata ontology. OpenAlex is an open-source cat- PubGraph: A Large-Scale Scientific Knowledge Graph 3 Fig. 1. Overview of PubGraph schema. Legend. Colors: Blue → Main entity, Yellow → Boolean attribute, Purple → Multi attribute, and Green → New attribute; Shapes: Rounded rectangle → Entity attribute, and Rectangle → Regular attribute. alog of scholarly entities that provides metadata for works, authors, institutions, sources, publishers, and concepts. Moreover, we add connections to both S2AG and Wikidata repositories to provide a more unifying resource for the researchers. 4 K. Ahrabian et al. Furthermore, we provide outputs from auxiliary models to further enrich Pub- Graph for future studies. The rest of this section is organized as follows: Sec. 2.1 introduces the mapping procedure from OpenAlex metadata to Wikidata ontology, Sec. 2.2 describes the implemented procedure to connect S2AG and Wikidata with OpenAlex along with some statistics of the resolution, and Sec. 2.3 presents the model choices for auxiliary outputs included in PubGraph. 2.1 Mapping to Wikidata Ontology To transform the OpenAlex dump (taken on April 9th, 2023) into PubGraph, we follow the well-known and well-studied Wikidata ontology. 
Specifically, we create a mapping between metadata information from the OpenAlex dump and Wikidata properties. Using Wikidata enables broader adoption of the KG and clear semantics for entities and relationships.

Table 1 presents the mapping from OpenAlex metadata to Wikidata properties. These mappings are selected such that they best describe each metadata field.

Table 1. OpenAlex metadata mapping to properties covered by the Wikidata ontology (OpenAlex metadata → Wikidata property).
abstract → P7535 · author position → P1545 · landing page url → P973 · license → P275 · volume → P478 ·
first page + last page → P304 · score → P4271 · created date → P571 · mag → P6366 · pmcid → P932 ·
oa status → P6954 · publication date → P577 · title → P1476 · updated date → P5017 · display name → P2561 ·
orcid → P496 · twitter → P2002 · last known institution → P1416 · alternate titles → P1476 · country code → P297 ·
host organization → P749 · issn → P236 · associated institution → P1416 · display name acronyms → P1813 ·
geonames city id → P1566 · ror → P6782 · international display name → P4970 · level → P1545 ·
hierarchy level → P1545 · location → P1433 · related concept → P921 · author → P50 · institution → P1416 ·
pdf url → P953 · version → P9767 · issue → P433 · concept → P921 · year → P585 · doi → P356 · pmid → P698 ·
descriptor ui + qualifier ui → P9340 · oa url → P2699 · referenced work → P2860 · type → P31 ·
works count → P3740 · display name alternatives → P4970 · scopus → P1153 · wikipedia → P4656 ·
abbreviated title → P1813 · apc usd → P2555 · homepage url → P856 · issn-l → P7363 · fatcat → P8608 ·
relationship → P1039 · homepage url → P856 · latitude + longitude → P625 · grid → P2427 · language → P9753 ·
alternate titles → P4970 · parent publisher → P749 · ancestor → P4900 · corpus id → P8299

Here, we explain the ontological design choices that we made for the mapping:
1. abstract → P7535: Due to the absence of a one-to-one match, we use P7535 (scope and content), which is defined as "a summary statement providing an overview of the archival collection."
2. author position → P1545: Since this field defines an order over the authors, we use P1545 (series ordinal), which is defined as the "position of an item in its parent series (most frequently a 1-based index)."
3. first page + last page → P304: Since OpenAlex uses two different fields to present this information, we merge them into one attribute to align with the Wikidata ontology.
4. score → P4271: Since this field indicates the relatedness of two concepts as produced by a model, it matches the definition of P4271 (rating), defined as a "qualifier to indicate a score given by the referenced source indicating the quality or completeness of the statement."
5. descriptor ui + qualifier ui → P1038: Since OpenAlex uses two different fields to present this information, we merge them into one attribute to align with the Wikidata ontology.
6. apc usd → P2555: Since this field describes a source's article processing charge in US Dollars, we match it to P2555 (fee), defined as a "fee or toll payable to use, transit or enter the subject."
7. relationship → P1039: Since this field describes the relation between two institutions, we use P1039 (kinship to subject), defined as a "qualifier of 'relative' (P1038) to indicate less usual family relationships."
8. location → P1433: Since this field describes the publishing location of a work, we match it to P1433 (published in).
9. latitude + longitude → P625: Since OpenAlex uses two different fields to present this information, we merge them into one attribute to align with the Wikidata ontology.
10. level → P1545 and hierarchy level → P1545: Since there is no Wikidata property describing a position in a hierarchy, we use the closest property, P1545 (series ordinal), defined as the "position of an item in its parent series (most frequently a 1-based index)."
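To make this mapping step concrete, the following is a minimal sketch of how such a field-to-property table could be applied to a single OpenAlex work record. The field names, helper names, and sample record are illustrative assumptions, not the actual PubGraph implementation; note in particular how paired fields are merged into one attribute, mirroring design choices 3 and 9 above.

```python
# Hypothetical sketch: apply an OpenAlex-to-Wikidata field mapping to one record.
FIELD_TO_PROPERTY = {
    "title": "P1476",
    "publication_date": "P577",
    "doi": "P356",
    "abstract": "P7535",
}

def map_work(work: dict):
    """Yield (property, value) statements for one OpenAlex work record."""
    for field, prop in FIELD_TO_PROPERTY.items():
        if work.get(field) is not None:
            yield prop, work[field]
    # Paired fields are merged into a single attribute, e.g. pages -> P304.
    if work.get("first_page") and work.get("last_page"):
        yield "P304", f"{work['first_page']}-{work['last_page']}"

work = {"title": "Example", "doi": "10.1234/xyz", "first_page": "1", "last_page": "9"}
print(list(map_work(work)))
```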
For the metadata with no suitable parallel property, we create new ones to keep the KG as complete as possible, as showcased in Table 2. Note that for "cited by count", OpenAlex provides both yearly and total values; hence the two different new properties.

Table 2. OpenAlex metadata mapping to new properties not covered by the Wikidata ontology (OpenAlex metadata → new property).
best oa location → P_best_oa_location · primary location → P_primary_location ·
cited by count (total) → P_total_cited_by_count · cited by count (yearly) → P_cited_by_count ·
2yr mean citedness → P_impact_factor · i10-index → P_i10_index · h-index → P_h_index ·
wikidata → P_wikidata · umls aui → P_umls_aui · community id → P_community_id

Moreover, for metadata with a boolean type, we add a new edge (main or qualifier) when the value is true. Table 3 presents the edges representing each boolean metadata field, with all the relations and entities taken from the Wikidata repository. This choice was made to maintain better semantic composure and avoid creating new properties in the KG. For example, there is no property in Wikidata for "is paratext"; however, there exists a paratext entity (Q853520). Hence, instead of creating a new property such as P_is_paratext, we create a new edge to this entity with the relation P31 (instance of) whenever "is paratext" is true.

Table 3. OpenAlex boolean metadata mapping to edges using the Wikidata ontology.
is corresponding → P31 → Q36988860 · is paratext → P31 → Q853520 · is in doaj → P31 → Q1227538 ·
is oa → P31 → Q232932 · is retracted → P31 → Q45182324

Finally, we also add "instance of" edges to indicate the type of each entity as classified by OpenAlex, as presented in Table 4.

Table 4. OpenAlex entity type mapping to edges using the Wikidata ontology.
work → P31 → Q13442814 · source → P31 → Q1711593 · concept → P31 → Q115949945 ·
author → P31 → Q482980 · institution → P31 → Q178706 · publisher → P31 → Q2085381

Given its flexibility for representing attributed graphs, we use RDF∗ as the graph representation for PubGraph (as illustrated in Fig. 1).
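As a brief illustration of what the RDF∗ representation buys us, the sketch below formats a single qualified statement: a main author edge (P50) carrying a series-ordinal qualifier (P1545) for the author's position. The entity IDs and prefix are hypothetical; only the quoted-triple syntax follows the RDF-star convention.

```python
# Hypothetical sketch of an RDF-star (Turtle-star) qualified statement.
def rdf_star_statement(subj, pred, obj, qual_pred, qual_val):
    # The main triple is quoted with << >> and the qualifier attaches to it.
    return f"<< {subj} {pred} {obj} >> {qual_pred} {qual_val} ."

print(rdf_star_statement("pg:W123", "wdt:P50", "pg:A456", "wdt:P1545", '"2"'))
# Output: << pg:W123 wdt:P50 pg:A456 >> wdt:P1545 "2" .
```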
Fig. 2. Distribution of publication years in the 2000-2023 period for OpenAlex, S2AG, and Wikidata. Note that only ∼128.3M out of the ∼211.5M papers in S2AG have publication dates and are included.

Fig. 3. Coverage of S2AG and Wikidata papers after entity resolution in the 2000-2023 period.

2.2 S2AG and Wikidata Entity Resolution

To make PubGraph a more unifying and comprehensive resource, we connect works in OpenAlex to two other large-scale repositories of scholarly metadata: S2AG (taken on April 11th, 2023) and Wikidata (taken on April 28th, 2023). Fig. 2 shows the distribution of publication years in the 2000-2023 period for the works available in these three repositories. During this analysis, we noticed that only ∼128.3M out of the ∼211.5M papers in S2AG have publication dates. This finding further highlights the importance of a unifying and comprehensive resource.

To this end, we follow a two-step procedure. First, we match entities based on the following IDs: DOI, MAG, PMID, and PMCID. For S2AG, this results in ∼197.6M out of ∼211.5M unique papers being matched to OpenAlex works, roughly providing 93.4% coverage. For Wikidata, this results in ∼33.2M out of ∼38.9M unique papers being matched to OpenAlex works, roughly providing 85.4% coverage. Then, among the remaining unmatched entities, we run an exact title search and only keep one-to-one mappings. For S2AG, this step further increases the number of matched unique papers to ∼199.2M, roughly providing 94.2% coverage. For Wikidata, this step further increases the number of matched unique papers to ∼36.4M, roughly providing 93.6% coverage. Fig. 3 provides the coverage distribution over the 2000-2023 period for both S2AG and Wikidata. As evident from this distribution, the coverage of both data sources appears relatively unbiased with respect to the time of publication. We believe the Wikidata drop from 2021 onward is due to the low number of recent papers available on the platform, and the S2AG drop is due to potential delays in adding recent publications. Moreover, for more recent data, Wikidata seems to benefit drastically from new entities added through external sources. We plan to improve our entity resolution heuristic using other metadata, such as authors, to cover more entities in future releases.

2.3 Auxiliary Outputs

Community Detection. Besides sharing scientific findings, scholarly articles represent the research interests of their authors. Therefore, by referencing each other's publications, authors create communities of shared interests. To enable the study of these communities, we provide the results obtained from the Leiden community detection algorithm [26] as auxiliary outputs for papers in PubGraph. To this end, we first extract the full citation network from all the publication-publication links. Then, we tune the Leiden algorithm (using the implementation at https://github.com/vtraag/leidenalg) on the extracted citation network with the following parameters: quality function ∈ {Modular, RBER, Significance, Surprise}, maximum papers per community ∈ {300k, 500k}, and number of communities ∈ {3000, 4000, 5000, 6000}. To evaluate the quality of the communities, we use a purity proxy metric extracted from the ancestral graph of the concepts connected to the publications in OpenAlex. Specifically, we count the number of children for each root concept and select the largest root concept for each community. Then, we calculate the percentage of the papers that are children of that root concept as the proxy metric. Figure 4 illustrates our results for different numbers of communities. Based on our experiments, the highest-quality communities are produced by the following parameters: quality function = Significance, maximum papers per community = 300k, and number of communities = 3000.

Fig. 4. Analysis of the effect of the number of communities on the quality of the communities. A higher area under the curve (AUC) indicates more pure communities.
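To make the purity proxy concrete, the following is a minimal sketch of the metric as described above, assuming we already know, for each paper, the set of root concepts it descends from in the OpenAlex concept hierarchy. All names and data shapes are illustrative.

```python
from collections import Counter

def community_purity(papers: list, paper_roots: dict) -> float:
    """Fraction of a community's papers under its single largest root concept."""
    counts = Counter()
    for p in papers:
        counts.update(paper_roots.get(p, set()))
    if not counts:
        return 0.0
    _, top_count = counts.most_common(1)[0]
    return top_count / len(papers)

paper_roots = {"w1": {"CS"}, "w2": {"CS", "Math"}, "w3": {"Biology"}}
print(community_purity(["w1", "w2", "w3"], paper_roots))  # 2/3 of papers under "CS"
```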
Large Language Models. PubGraph was developed to enable researchers to study scholarly works from a graph perspective. Through PubGraph, it is possible to learn representations for papers using graph-based methods, which can then be used for various downstream tasks. Orthogonal to this relational and structural information is the textual information carried by the content of scholarly works. When available, textual features complement graph-based features and can improve model performance [2]. Recently, many large language models (LLMs) have been introduced to tackle the problem of generating representations for scientific documents [1,6]. These pre-trained models are specifically tuned for scientific data and can be used to generate low-dimensional embeddings for input documents. In this work, to further enable multi-view studies of PubGraph, we provide embeddings generated by LLMs for all the papers. These embeddings also save resources for researchers who want to use textual information. To this end, we first obtain a representative text by concatenating the title and the abstract of each work. This approach allows us to cover all the works with at least one of these attributes available, improving the general coverage of this data. Then, we run the representative text through the SciNCL model [21] to obtain the embeddings, with each generated embedding being a 768-dimensional vector. All the generated embeddings are released with an index to match the corresponding papers.
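As an illustration of this embedding step, the sketch below runs a title-plus-abstract text through SciNCL via Hugging Face transformers. The model ID "malteos/scincl" and the title[SEP]abstract input convention are assumptions based on SciNCL's common usage, not the exact PubGraph pipeline.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("malteos/scincl")  # assumed model ID
model = AutoModel.from_pretrained("malteos/scincl")

title, abstract = "PubGraph", "A large-scale scientific knowledge graph."
text = title + tokenizer.sep_token + abstract
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
with torch.no_grad():
    # Take the [CLS] token representation as the document embedding.
    embedding = model(**inputs).last_hidden_state[:, 0, :]
print(embedding.shape)  # torch.Size([1, 768])
```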
3 Knowledge Graph Completion

Traditionally, knowledge graph embedding (KGE) models [25,27] have been evaluated in an interpolated, transductive KGC setting where all entities, e.g., papers and authors, are known. However, one of the challenging aspects of studying scientific progress is dealing with new publications, which requires inference over unseen samples. A better-aligned evaluation setting for this purpose is the extrapolated, inductive setting. An inductive setting requires models to make predictions over previously unseen entities. While KGs capture the structure necessary for this setting, many models do not address this use case. Moreover, extrapolated prediction requires train and test sets to be partitioned by a temporal threshold, so model predictions are for a future time epoch.

In this work, we introduce new resources and benchmarks in the extrapolated setting for both inductive and transductive models, framing the research question as a KGC task and supporting the study of this problem from a purely structural standpoint at different scales and across various models. Moreover, we also introduce a community-based adversarial evaluation setting to 1) mitigate the influence of random negative sampling (due to the scale) in the evaluation phase and 2) maintain the same level of difficulty as evaluating on all of the entities. Fig. 5 presents an overview of the training and evaluation schemes for the KGC benchmarks in both transductive and inductive settings.

Fig. 5. Overview of the training and evaluation scheme. Intra-period current links (black) are used for training in all experiment settings. Intra-period future links (red) are used for evaluation in both validation and testing phases in all experiment settings. Exo-period links (dotted blue) are used in the training phase in transductive settings; in inductive settings, these links are only used as auxiliary links during the evaluation phase. Auxiliary links establish connections between seen training nodes and unseen evaluation nodes.

The rest of this section is organized as follows: Sec. 3.1 describes the methodology used to create the PG-X benchmarks, Sec. 3.2 presents a data quality analysis over the extracted samples, and Sec. 3.3 presents a set of adversarial evaluation settings for the KGC tasks.

3.1 Building PG-X Benchmarks

The full PubGraph KG contains a vast amount of information in the form of literal values and sparse properties that are not easily usable by many KG models. We extract subsets of PubGraph, designated as PG-X, to create easier-to-use benchmarks for KG models. To extract the PGs from the transformed data, we first remove all the publications that have no citations and do not cite any other papers, obtaining PG-Full. Since these nodes are disconnected from other publications, this step mitigates the sparsity problem and reduces the KG size by a large margin. Given the enormous size of PG-Full, we create two small and medium-sized sub-KGs to allow future studies at different scales. To this end, we use snowball sampling [10] to extract PG-1M and PG-10M with 1M and 10M publication nodes, respectively. After sampling, we remove any publication without a publication date. Next, we extract all the "cites work (P2860)," "author (P50)," "published in (P1433)," and "affiliation (P1416)" links for the sampled publications. We ensure that all the available author, source, and institution links from the sampled publications are included in the benchmarks. Finally, we split all the benchmarks temporally, using all the publications before 2017 for training, 2017 up until 2020 for validation, and 2020 onward for testing. Table 5 presents the statistics of the extracted splits of each benchmark.

Table 5. Statistics of PG-X benchmark splits.
Benchmark | #Training (Validation) | #Training (Testing) | #Validation | #Test
PG-1M | 18.2M | 20.5M | 265k | 146k
PG-10M | 269.0M | 305.9M | 3.1M | 2.3M
PG-Full | 1.88B | 2.17B | 28.1M | 26.3M

3.2 Data Quality

To evaluate the quality of the extracted benchmarks, we check the validity and completeness of our KGs. For validity, we look for potential mutual citations, cases where two papers reference each other, violating strict temporal order. This artifact may appear when articles have several revisions but OpenAlex only reports the earliest publication date. For completeness, we calculate publication-author, publication-source, and author-institution relation completeness. Table 6 showcases these metrics on the extracted KGs. As evident from the metrics, all the benchmarks exhibit an extremely low mutual citation percentage, which is evidence of their quality. Moreover, the small and medium-sized KGs exhibit higher completeness metrics, which we attribute to the forced inclusion of all author, venue, and institution links.

Table 6. Validity and completeness metrics of the sampled KGs.
Metric | PG-1M | PG-10M | PG-Full
Mutual Citations | 0.03% | 0.04% | 0.06%
Authorship Completeness | 99.97% | 99.97% | 99.92%
Venue Completeness | 92.37% | 90.25% | 75.34%
Institution Completeness | 81.45% | 71.21% | 45.77%
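The mutual-citation validity check lends itself to a simple set-based computation; the sketch below shows one way it could be done, assuming citation edges are available as (citing, cited) pairs. Names and the toy data are illustrative.

```python
def mutual_citation_rate(edges: list) -> float:
    """Fraction of citation edges whose reverse edge also exists."""
    edge_set = set(edges)
    mutual = sum(1 for a, b in edge_set if (b, a) in edge_set)
    return mutual / max(len(edge_set), 1)

edges = [("w1", "w2"), ("w2", "w1"), ("w2", "w3")]
print(f"{mutual_citation_rate(edges):.2%}")  # 66.67%: two of three edges form a mutual pair
```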
3.3 Adversarial Evaluation Setting

One of the most common strategies for evaluating KGC on large-scale graphs is to sample a fixed number of negative samples for each positive sample during the evaluation phase. However, this strategy is prone to exhibiting inflated performance because it exerts no control over the difficulty of the sampled nodes. Moreover, calculating the evaluation metrics on the complete set of samples becomes increasingly expensive as the size of the KG grows. Hence, we propose three alternative strategies for negative sampling during the evaluation phase. These strategies aim to find an efficient method that can serve as a proxy for complete metric calculations. Our proposed strategies are as follows:

1. Entity Type: This is the most straightforward strategy, in which we only sample candidate nodes with the same type as the target node. For example, in our case, we only sample from the publications.
2. Time Constrained: Building upon our first strategy, we further add the constraint of only sampling candidate nodes from the nodes within the evaluation period. Intuitively, these unseen (inductive) or less seen (transductive) nodes will pose more problems for the model during the evaluation phase.
3. Community: Given a target node, we sample candidate nodes only from its community. This strategy relies on the auxiliary outputs, i.e., the communities, generated as described in Sec. 2.3. We hypothesize that these nodes pose the most difficulty for the model during the evaluation phase.

To test the proposed strategies, we train a ComplEx [27] model using the DGL-KE toolkit [31]. We tune the hyper-parameters of our model using the following sets of values: embedding dimensions ∈ {50, 100, 200, 400}, learning rate ∈ {0.003, 0.01, 0.03, 0.1, 0.3}, number of negative samples ∈ {128, 256, 512, 1024, 2048}, and regularization coefficient ∈ {0.0, 1e-9, 1e-8, 1e-7, 1e-6, 1e-5}.

Table 7. Negative sampling results on the PG-1M benchmark.
Variation | #Negative Samples | MRR | Hits@1 | Hits@10 | Time (Seconds)
Random | 1000 | 0.723 | 0.608 | 0.918 | 588 (CPU)
Entity Type | 1000 | 0.560 | 0.418 | 0.826 | 655 (CPU)
Time Constrained | 1000 | 0.577 | 0.449 | 0.817 | 601 (CPU)
Community | 1000 | 0.076 | 0.023 | 0.167 | 1008 (CPU)
Full | ∼3.38M | 0.015 | 0.000 | 0.036 | 81987 (GPU)

Table 7 presents the results of our experiments with the aforementioned negative sampling strategies in the evaluation phase. The reported times are for one evaluation run over the complete testing set of the PG-1M benchmark (∼147K samples). As evident from these results, the community-based method is the best proxy for the full metric calculation while still being significantly more time-efficient. Even if we factor in the 11.5 hours (41400 seconds) it takes to learn communities for all 91M publications, the difference in computation time becomes much more significant once the evaluation process must be repeated over and over again, e.g., for validation, fine-tuning, etc. Moreover, the full metrics are calculated on a GPU, which is far more efficient than the calculations on the CPU. It is important to note that the community-based method is helpful in evaluation settings where the ground truth is known; in settings where the ground truth is unknown, e.g., a deployed model, there is no workaround for the complete ranking computation, as we have to consider all the entities regardless.

We further analyze the effect of the number of negative samples on the model's performance. Figure 6 presents the results of our experiments with varying numbers of negative samples for all the introduced strategies. As expected, the model's performance drops rapidly as the number of negative samples increases. Moreover, the community-based negative sampling results act as an excellent proxy at 5k negative samples and seem to converge to the full variation around 10k negative samples. This finding is further evidence of the effectiveness of this method.

Fig. 6. Analysis of the effect of the negative sample count on the model's performance, measured by MRR.
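The community-constrained strategy above reduces to restricting the candidate pool before sampling; the sketch below shows a minimal version, assuming precomputed community assignments. Names and the toy data are illustrative.

```python
import random

def community_negatives(target: str, community_of: dict, members: dict,
                        k: int, exclude: set) -> list:
    """Sample up to k negative candidates from the target node's own community."""
    pool = [n for n in members[community_of[target]]
            if n != target and n not in exclude]
    return random.sample(pool, min(k, len(pool)))

community_of = {"w1": 0, "w2": 0, "w3": 0, "w4": 1}
members = {0: ["w1", "w2", "w3"], 1: ["w4"]}
print(community_negatives("w1", community_of, members, k=2, exclude=set()))
```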
4 Related Works

4.1 Scientific Knowledge Graphs

In recent years, a wide range of scientific KGs (SKGs) have emerged in the research community. Examples of these SKGs are Scholia [19], ORKG [24], OpenAIRE [18], and MAG240M [14]. Each of the aforementioned SKGs has different characteristics that make it unique and interesting to the community. Table 8 compares PubGraph with the existing SKGs across various properties. As evident from this table, PubGraph is built on a more grounded ontology and provides much more information and many more artifacts than other SKGs.

Table 8. Comparison between PubGraph and the existing SKGs.
SKG | #Articles | Source | Ontology | Embeddings | Community | External Links (Other Sources)
Scholia | 39M | Wikidata | Wikidata | ✗ | ✗ | ✗
ORKG | 25k | Curated | Proprietary | ✗ | ✗ | ✗
OpenAIRE | 164M | Curated | Proprietary | ✗ | ✗ | ✗
MAG240M | 121M | MAG | Proprietary | ✓ | ✗ | ✗
PubGraph | 250M | OpenAlex | Wikidata | ✓ | ✓ | ✓

4.2 Large-Scale KGC Benchmarks

KGC is one of the most common tasks defined on KGs. Recent efforts [14,15] have shifted toward introducing more large-scale benchmarks for KGC; however, there is still a shortage of benchmarks for large-scale graph learning. We believe the PG-X benchmarks introduced in this paper can help mitigate this shortage. Table 9 showcases the statistics of the sampled KGs along with a comparison to existing large-scale KGC benchmarks in the literature. As evident from the numbers, the PG-X benchmarks provide an opportunity to evaluate KG models on larger (2x nodes and 3.6x edges) and more flexible (3.3M to 184M range) benchmarks.

Table 9. Statistics of the extracted benchmarks compared to existing large-scale KGC benchmarks. As evident, PG-Full has more than 2x the nodes and 3.6x the edges of the largest existing benchmarks.
Benchmark | #Nodes | #Edges | #Relations
ogbl-citation2 [15] | 2,927,963 | 30,561,187 | 1
Freebase [4] | 86,054,151 | 338,586,276 | 14,824
WikiKG90Mv2 [14] | 91,230,610 | 601,062,811 | 1,315
PG-1M | 3,378,202 | 22,442,976 | 4
PG-10M | 25,312,490 | 184,126,885 | 4
PG-Full | 315,225,337 | 2,201,239,147 | 4

5 Conclusion and Future Work

In this work, we introduced PubGraph, a new large-scale resource in the form of a KG built on the Wikidata ontology and extracted from the OpenAlex catalog, with more than 13B edges and 385M nodes. As shown through different comparisons, PubGraph provides a much-needed unifying and comprehensive resource for researchers to study scientific progress, connecting multiple sources. PubGraph also enables the study of scientific documents from distinct perspectives through the information extracted from auxiliary community detection algorithms and large language models. Moreover, we created three KGC benchmarks of varying sizes to enable future studies at different scales and in both transductive and inductive settings. Finally, we identified a set of challenging adversarial evaluation settings for the introduced benchmarks that overcome a common downfall of large-scale KGC evaluation settings. As for future directions for PubGraph, one direction is to improve the coverage of connections to external sources. Moreover, it is possible to bring in more external data sources, e.g., SKGs such as Scholia, and link them with PubGraph.
Finally, another avenue is to add other metadata of interest to the community, such as awards and grants, which would further enable researchers to study these events in a larger context.

Acknowledgements

This work was funded by the Defense Advanced Research Projects Agency with award W911NF-19-20271 and with support from a Keston Exploratory Research Award.

Resource Availability Statement: The source code for building PubGraph, along with a data schema, is available from GitHub (https://github.com/usc-isi-i2/isi-pubgraph), released under the CC-BY-SA license. All the introduced benchmarks and resources are publicly accessible at https://pubgraph.isi.edu/ and released under the CC-BY-SA license. Due to the sheer size of the resources (> 2TB), we could not host the data on any commonly used platform and had to resort to self-provisioned servers.

References

1. Beltagy, I., Lo, K., Cohan, A.: SciBERT: A pretrained language model for scientific text. In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). pp. 3615–3620. Association for Computational Linguistics, Hong Kong, China (Nov 2019). https://doi.org/10.18653/v1/D19-1371, https://aclanthology.org/D19-1371
2. Berrebbi, D., Huynh, N., Balalau, O.: GraphCite: Citation intent classification in scientific publications via graph embeddings. In: Companion Proceedings of the Web Conference 2022. pp. 779–783 (2022)
3. Bhagavatula, C., Feldman, S., Power, R., Ammar, W.: Content-based citation recommendation. arXiv preprint arXiv:1802.08301 (2018)
4. Bollacker, K., Evans, C., Paritosh, P., Sturge, T., Taylor, J.: Freebase: a collaboratively created graph database for structuring human knowledge. In: Proceedings of the 2008 ACM SIGMOD International Conference on Management of Data. pp. 1247–1250 (2008)
5. Cohan, A., Ammar, W., Van Zuylen, M., Cady, F.: Structural scaffolds for citation intent classification in scientific publications. arXiv preprint arXiv:1904.01608 (2019)
6. Cohan, A., Feldman, S., Beltagy, I., Downey, D., Weld, D.: SPECTER: Document-level representation learning using citation-informed transformers. In: Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. pp. 2270–2282. Association for Computational Linguistics, Online (Jul 2020). https://doi.org/10.18653/v1/2020.acl-main.207, https://aclanthology.org/2020.acl-main.207
7. Cohan, A., Feldman, S., Beltagy, I., Downey, D., Weld, D.S.: SPECTER: Document-level representation learning using citation-informed transformers. arXiv preprint arXiv:2004.07180 (2020)
8. De Vaan, M., Stark, D., Vedres, B.: Game changer: The topology of creativity. American Journal of Sociology 120(4), 1144–1194 (2015)
9. Färber, M., Sampath, A.: HybridCite: A hybrid model for context-aware citation recommendation. In: Proceedings of the ACM/IEEE Joint Conference on Digital Libraries in 2020. pp. 117–126 (2020)
10. Goodman, L.A.: Snowball sampling. The Annals of Mathematical Statistics pp. 148–170 (1961)
11. Gururangan, S., Marasović, A., Swayamdipta, S., Lo, K., Beltagy, I., Downey, D., Smith, N.A.: Don't stop pretraining: adapt language models to domains and tasks. arXiv preprint arXiv:2004.10964 (2020)
12. Hofstra, B., Kulkarni, V.V., Munoz-Najar Galvez, S., He, B., Jurafsky, D., McFarland, D.A.: The diversity–innovation paradox in science.
Proceedings of the National Academy of Sciences 117(17), 9284–9291 (2020)
13. Hope, T., Chan, J., Kittur, A., Shahaf, D.: Accelerating innovation through analogy mining. In: Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. pp. 235–243 (2017)
14. Hu, W., Fey, M., Ren, H., Nakata, M., Dong, Y., Leskovec, J.: OGB-LSC: A large-scale challenge for machine learning on graphs. arXiv preprint arXiv:2103.09430 (2021)
15. Hu, W., Fey, M., Zitnik, M., Dong, Y., Ren, H., Liu, B., Catasta, M., Leskovec, J.: Open Graph Benchmark: Datasets for machine learning on graphs. Advances in Neural Information Processing Systems 33, 22118–22133 (2020)
16. Jurgens, D., Kumar, S., Hoover, R., McFarland, D., Jurafsky, D.: Measuring the evolution of a scientific field through citation frames. Transactions of the Association for Computational Linguistics 6, 391–406 (2018)
17. Kang, H.B., Qian, X., Hope, T., Shahaf, D., Chan, J., Kittur, A.: Augmenting scientific creativity with an analogical search engine. ACM Transactions on Computer-Human Interaction (2022)
18. Manghi, P., Bardi, A., Atzori, C., Baglioni, M., Manola, N., Schirrwagen, J., Principe, P.: The OpenAIRE research graph data model (Apr 2019). https://doi.org/10.5281/zenodo.2643199
19. Nielsen, F.Å., Mietchen, D., Willighagen, E.: Scholia and scientometrics with Wikidata. In: Scientometrics 2017. pp. 237–259 (November 2017). https://doi.org/10.1007/978-3-319-70407-4_36, https://arxiv.org/pdf/1703.04222
20. Ostendorff, M., Rethmeier, N., Augenstein, I., Gipp, B., Rehm, G.: Neighborhood contrastive learning for scientific document representations with citation embeddings. arXiv preprint arXiv:2202.06671 (2022)
21. Ostendorff, M., Rethmeier, N., Augenstein, I., Gipp, B., Rehm, G.: Neighborhood contrastive learning for scientific document representations with citation embeddings. In: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing. pp. 11670–11688. Association for Computational Linguistics, Abu Dhabi, United Arab Emirates (Dec 2022), https://aclanthology.org/2022.emnlp-main.802
22. Price, D.J.D.S.: Networks of scientific papers: The pattern of bibliographic references indicates the nature of the scientific research front. Science 149(3683), 510–515 (1965)
23. Priem, J., Piwowar, H., Orr, R.: OpenAlex: A fully-open index of scholarly works, authors, venues, institutions, and concepts. arXiv preprint arXiv:2205.01833 (2022)
24. Stocker, M., Oelen, A., Jaradeh, M.Y., Haris, M., Oghli, O.A., Heidari, G., Hussein, H., Lorenz, A.L., Kabenamualu, S., Farfar, K.E., et al.: FAIR scientific information with the Open Research Knowledge Graph. FAIR Connect 1(1), 19–21 (2023)
25. Sun, Z., Deng, Z.H., Nie, J.Y., Tang, J.: RotatE: Knowledge graph embedding by relational rotation in complex space. arXiv preprint arXiv:1902.10197 (2019)
26. Traag, V.A., Waltman, L., Van Eck, N.J.: From Louvain to Leiden: guaranteeing well-connected communities. Scientific Reports 9(1), 1–12 (2019)
27. Trouillon, T., Welbl, J., Riedel, S., Gaussier, É., Bouchard, G.: Complex embeddings for simple link prediction. In: International Conference on Machine Learning. pp. 2071–2080. PMLR (2016)
28. Uzzi, B., Mukherjee, S., Stringer, M., Jones, B.: Atypical combinations and scientific impact. Science 342(6157), 468–472 (2013)
29. Vrandečić, D., Krötzsch, M.: Wikidata: a free collaborative knowledgebase. Communications of the ACM 57(10), 78–85 (2014)
30. Wade, A.D.: The Semantic Scholar Academic Graph (S2AG). In: Companion Proceedings of the Web Conference 2022. pp. 739–739 (2022)
31. Zheng, D., Song, X., Ma, C., Tan, Z., Ye, Z., Dong, J., Xiong, H., Zhang, Z., Karypis, G.: DGL-KE: Training knowledge graph embeddings at scale. In: Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval. pp. 739–748 (2020)
ai_researcher
4
"I'm_categorizing_LLM_as_a_productivity_tool"_Examining_ethics_of_LLM_use_in_HCI_research_practices.pdf
HUIXIANGDOU: OVERCOMING GROUP CHAT SCENARIOS WITH LLM-BASED TECHNICAL ASSISTANCE

Huanjun Kong, Songyang Zhang, Jiaying Li, Min Xiao, Jun Xu, Kai Chen
Shanghai AI Laboratory

ABSTRACT

In this work, we present HuixiangDou, a technical assistant powered by Large Language Models (LLM). (HuixiangDou is a dish from the Chinese classical story "Kong Yiji": a meal Kong Yiji would routinely order from a local tavern, reflecting his humble conditions yet exalted spirit.) This system is designed to assist algorithm developers by providing insightful responses to questions related to open-source algorithm projects, such as computer vision and deep learning projects from OpenMMLab. We further explore the integration of this assistant into the group chats of instant messaging (IM) tools such as WeChat and Lark. Through several iterative improvements and trials, we have developed a sophisticated technical chat assistant capable of effectively answering users' technical questions without causing message flooding. This paper's contributions include: 1) designing an algorithm pipeline specifically for group chat scenarios; 2) verifying the reliable performance of text2vec in task rejection; 3) identifying three critical requirements for LLMs in technical-assistant-like products, namely scoring ability, In-Context Learning (ICL), and Long Context. We have made the source code, Android app and web service available on GitHub, OpenXLab and YouTube to aid future research and application. HuixiangDou is applicable to any group chat within IM tools.

1 INTRODUCTION

Authors of open-source projects often set up user groups on IM tools (like WeChat, Slack, Discord, etc.) for discussing project-related technical questions. As the number of users gradually increases, the maintainers, aiming to reduce the time spent on answering user questions while ensuring these questions are addressed, tend to pin some content or set up a bot to automatically answer FAQs. However, user inquiries are strongly correlated with their local development environments, and most messages in the group are unrelated to the project; traditional NLP solutions can neither parse users' intent nor provide the answers they desire.

ChatGPT, a large language model service from OpenAI, performs well on multiple test sets and in natural language communication. However, directly integrating ChatGPT into group chats could lead to more severe issues.

• ChatGPT is designed for single-user chat. If it responds to too many messages within a group, it may degrade others' experience and cause them to leave the group.
• For truly valuable queries, such as code implementation principles and modification methods, ChatGPT fails to provide correct answers. This is because its training data comes from the public internet, not domain-specific knowledge, and its data cannot be updated immediately as code changes.
• Even though ChatGPT exhibits high accuracy on numerous datasets, it still faces the issue of hallucination. For example, asking "Who is the author of ncnn?" can yield an incorrect response related to "Nvidia Compute Library".

Hence, a technical assistant operating in group chats has different requirements.
Target true help-seekers. The technical assistant should not respond to non-technical content, such as politics, chit-chat, or personal information. It is only activated to respond to technical inquiries when users genuinely require assistance.

Strictly no hallucination. Even a single instance of hallucination could make users perceive the bot as unreliable from a product perspective. Therefore, the system is implemented to avoid creating any false impressions of understanding.

Understand domain-specific knowledge. Possessing exclusive knowledge not found on the public internet is the fundamental value of the assistant. At the same time, the assistant should be able to update the knowledge base to new versions at relatively low cost.

No rush for response. Users might ask questions late at night without much expectation for response time. Therefore, we can adopt more complex processing procedures.

2 APPROACH

In addressing these unique needs, we started with a basic version and arrived at our current solution after two rounds of improvement. As shown in Figure 1, our final version consists of three parts: Preprocess, Rejection and Response. The underlying philosophy of HuixiangDou is to eliminate irrelevant noise to improve precision, and to enhance retrieval capabilities to increase recall.

Figure 1: The overall structure of the approach. After the user's message is preprocessed, small talk is filtered out, and only genuine questions are responded to.

2.1 PREPROCESS USER INPUT

In a chat group, multiple users may pose questions and communicate among themselves. However, the typical LLM chat template only supports three roles: system, user and bot. Hence, we concatenate groupid and userid as a unique user ID to fit the chat template. Given that users are unlikely to describe their problem completely in one go, we pack multiple consecutive messages into a single one. In the process, we use an OCR service to parse images and disregard other elements such as videos, emojis, and voice messages. Furthermore, extremely short messages that pose no algorithmic challenge, as well as messages that quote others and thus clearly do not seek interaction with the assistant, are also disregarded.

2.2 REJECTION PIPELINE

In the group chat scenario, hallucinations come from two sources: user gossip and the model itself (where the model's training data and domain knowledge are not aligned). The rejection pipeline is designed to dismiss casual, chat-like discourse, as shown in Figure 2.

Figure 2: The structure of the rejection pipeline. We build a two-stage refusal-to-answer filter using text2vec and LLM scoring.

Refusal to Answer Based on Text2Vec. LangChain (langchain contributors, 2023) and wenda (wenda contributors, 2023) were originally built for RAG. After repeated tests, we find their retrieval abilities unremarkable, yet surprisingly well-suited for telling whether a question deserves to be answered. Undesired questions are topics that are too distant from the knowledge base.

Refusal to Answer Based on LLM Scoring. Because the text2vec model judges topic similarity, it is easily influenced by tone words in group chat questions. From the perspective of the text2vec model, there is a high degree of similarity between "This development board is very good" and "This board is poorly designed". Influenced by moral and other factors, humans do not believe that these two sentences express the same meaning.
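To illustrate the first-stage filter, the following is a minimal sketch of similarity-based refusal: embed the query, compare it against knowledge-base chunk embeddings, and reject when nothing is close enough. The model name and threshold here are illustrative assumptions, not HuixiangDou's exact configuration.

```python
from sentence_transformers import SentenceTransformer, util

# Hypothetical model choice; any text2vec-style embedding model could stand in.
model = SentenceTransformer("shibing624/text2vec-base-chinese")
kb_chunks = ["How to install mmdeploy", "mmcv build instructions"]
kb_emb = model.encode(kb_chunks, convert_to_tensor=True)

def should_answer(query: str, threshold: float = 0.5) -> bool:
    q_emb = model.encode(query, convert_to_tensor=True)
    # Reject when the query is too distant from every knowledge-base chunk.
    return util.cos_sim(q_emb, kb_emb).max().item() >= threshold
```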
2.3 RESPONSE PIPELINE

Obviously, models with robust In-Context Learning capabilities can mitigate internal hallucinations through search mechanisms. As shown in Figure 3, the response pipeline is designed to identify the background knowledge underlying the question. The key here is to determine the importance of each information source and use them in order.

Figure 3: The structure of the response pipeline. We employ scoring and partial ordering to filter high-quality text from the rerank model, web search and knowledge graph for the LLM to generate responses. To save costs, we mix and schedule different LLMs. We have established a set of security mechanisms to ensure that replies to chat groups do not involve sensitive topics.

Extract Keywords. User queries often contain many modal words, which can greatly impact the precision of text2vec models. Therefore, we cannot directly use the original queries for text2vec search. As LLMs excel at NLP part-of-speech segmentation tasks, we leverage them to extract keywords and phrases from the query.

Feature and Rerank. We use LangChain and BCEmbedding (Netease, 2024) in combination to retrieve domain-specific knowledge. In this scenario, our search result is a list of document snippets. To fully utilize the context length of the model, we also employ LLM scoring to judge the relevance between the query and each document. This helps avoid any distraction for the LLM from irrelevant inputs. It is evident that the LLM's In-Context Learning capability is extremely crucial for this scenario. Due to the varying performance of different text2vec models, the response pipeline does not share a feature database with the rejection pipeline.

Web Search. We first retrieve multiple search results (the number depends on the maximum token length supported by the model) and then use LLM scoring to filter the results associated with the question. These results are finally packed into the background document. Web search can return illegal content, so safety filtering is also necessary.

As stated by Hsieh et al. (2024), although many models claim to perform excellently on the needle-in-a-haystack task, their true long-context ability remains uncertain. Thus we cannot feed all documents into the LLM at once. This step involves deciding, based on prior knowledge, which pieces of information will make up the final input for LLM chat. For instance, for PyTorch-related questions, we tend to look up the official PyTorch documentation rather than some tech blog. The catch here is that the data quality from web searches is not controllable and necessitates stringent review.

Knowledge Graph. Search engines face the entire spectrum of internet information, but the background information implicit in a group chat technical assistant is not fully utilized. For instance, users would not ask mmdetection (Chen et al., 2019) questions in the opencompass (opencompass contributors, 2023) user group. Based on sourcegraph (McColl et al., 2013), we built a dedicated search engine for each repository, routing queries from different groups accordingly. This improvement enables the assistant to answer difficult questions that internet searches cannot locate, which we discuss further in our LLM paging experiments.

Scoring. Strings returned by the LLM cannot be directly integrated into Python or Java control flow, hence we implement process control through LLM scoring, e.g., for intent recognition and relevance assessment. The Experiments section showcases more examples.
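A minimal sketch of this scoring pattern follows, assuming a generic `chat(prompt) -> str` LLM client; the prompt wording and threshold are illustrative. The point is that parsing an integer score turns free-form LLM output into a value usable in an if statement.

```python
import re

def llm_score(chat, question: str) -> int:
    prompt = ("Determine whether the following sentence is a topical question, "
              f"scored 0 to 10. Give the score only.\nSentence: \"{question}\"")
    reply = chat(prompt)
    match = re.search(r"\d+", reply)  # pull the integer out of free-form text
    return int(match.group()) if match else 0

def is_question(chat, text: str, threshold: int = 6) -> bool:
    return llm_score(chat, text) >= threshold
```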
Hybrid LLM Service. Our product focuses on cost-efficiency and does not insist on a single model possessing all capabilities. We treat LLM chat as an RPC (Remote Procedure Call) service that can internally integrate multiple models and use them as needed. The hybrid service is not a mishmash; it requires first identifying the strengths of the various models and then invoking them according to the circumstances. In HuixiangDou, we fully leverage InternLM2's scoring capability and kimi chat's long-context ability.

LLM Response. Directly using snippets to answer questions can lead to local optima. We therefore read the original text corresponding to each snippet and hand it to the LLM together with the original question. The experimental section showcases our work on Long Context. Finally, we use LLM scoring to evaluate the relevance between the response and the query. If the relevance is low, the assistant does not respond.

Security. Many regions emphasize the safety of AI applications. To ensure foolproof safety, we implemented four seat belts:
• Check all string variables for association with prohibited topics based on LLM scoring, to prevent the generation of illegal content.
• Integrate a traditional security service to check whether the assistant's responses are illegal.
• Set working hours for the assistant to ensure all activities are under human supervision.
• Everyone can withdraw HuixiangDou's response if they deem it inappropriate.

3 EXPERIMENTS

In this section, we validate the feasibility of key technical points arrived at during the iterative development of the pipeline. Section 3.1 presents the fine-tuning process and conclusions for the LLM. Section 3.2 demonstrates the effects of the rejection pipeline. Section 3.3 details the implementation and testing conclusions of the scoring method. Section 3.4 is dedicated to the necessary experiments with Long Context responses. Section 4.2 is an attempt to further enhance the search capabilities.

3.1 FINE-TUNED MODEL

Base model selection. Due to resource limitations, we cannot train from scratch and must select a base model for fine-tuning. Our selection criteria are as follows.
• Understanding domain-specific terminologies. This means the training data should include the vocabulary needed for business operations; otherwise, we believe the required results cannot be computed from attention scores.
• Long context. Since we can use ReRoPE (Su, 2023) or dynamic NTK (emozilla and bloc97, 2023) for extension, a model supporting RoPE can be considered capable of handling long context.
• In-Context Learning (ICL) and stable scoring ability.

Data preparation. Our training data comprises 28,000 QA pairs, made up of three parts:
1. While cleaning existing OpenMMLab group chat data, we removed personal information and divided the dialogues into the QA format required for training, yielding about 8,000 question-answer pairs.
2. For unanswered questions, we constructed responses using a larger LLM. These account for approximately 12,000 of the total.
3. We also scraped closed issues from GitHub, amounting to about 8,000 entries.

Train and test. We used the XTuner (xtuner contributors, 2023) qLoRA method to fine-tune the 7B and 13B models. Our learning rate is 2e-5 and we train for 5 epochs. Regardless of the combination, there were significant issues with hallucination. In the best version, the model learned colloquial expressions from users in WeChat groups rather than technical answers, as shown in Appendix A. We believe the biggest issue lies in data quality: other users answer questions casually and unprofessionally, and for domain-specific questions, the answers provided by the LLM are not correct.
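For illustration, the sketch below packs cleaned dialogues into conversation-style QA records for supervised fine-tuning. This is a generic record shape commonly used for instruction tuning, not necessarily XTuner's exact schema; the system prompt and field names are assumptions.

```python
import json

def to_qa_records(dialogues: list) -> list:
    """Convert cleaned (question, answer) dialogues into fine-tuning records."""
    records = []
    for d in dialogues:
        records.append({
            "conversation": [{
                "system": "You are a technical assistant for OpenMMLab projects.",
                "input": d["question"],
                "output": d["answer"],
            }]
        })
    return records

sample = [{"question": "How to install mmcv?", "answer": "pip install mmcv"}]
print(json.dumps(to_qa_records(sample), indent=2))
```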
3.2 TEXT2VEC IN REJECTION PIPELINE

We manually annotated hundreds of user messages, with human judgment determining whether they were related to domain-specific knowledge. We then used different text2vec models to construct the database and tested the accuracy of refusal to answer; see Table 1.

Table 1: Refusal-to-answer results with different text2vec models on manually annotated data. The text2vec models demonstrate strong robustness on the refusal-to-answer task.
Model | Precision | Recall
text2vec-large-chinese | 0.92 | 0.99
text2vec-bge-large-chinese | 0.81 | 0.95

We also examined the impact of various text split methods on precision, including langchain.MarkdownHeaderTextSplitter, langchain.CharacterTextSplitter and their combined implementation. Experiments showed that the impact of the split method on the precision of refusal to answer is negligible.
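The precision and recall in Table 1 reduce to a simple computation over predicted versus human-labeled refusal decisions; a minimal sketch follows, with illustrative names and toy data.

```python
def precision_recall(predictions: list, labels: list):
    """Precision/recall of treating a message as domain-related (True)."""
    tp = sum(p and l for p, l in zip(predictions, labels))
    fp = sum(p and not l for p, l in zip(predictions, labels))
    fn = sum(l and not p for p, l in zip(predictions, labels))
    return tp / max(tp + fp, 1), tp / max(tp + fn, 1)

preds = [True, True, False, True]
gold = [True, False, False, True]
print(precision_recall(preds, gold))  # (0.666..., 1.0)
```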
Eventually, we achieved support for 40k token length on an A100 80G card. Table 2 is our precision test report for passkey retrieval, with the base model being openbuddy-llama2-13B-v8.1-fp16. 4 OTHER ATTEMPTS In order to enhance the accuracy, we have explored Natural Language Processing (NLP) as well as prompting techniques, but these methods have insurmountable shortcomings, and thus were ulti- mately not adopted. 4.1 NLP IN RAG Since the capabilities of the text2vec model are limited, we have tried to simplify the query and document with NLP methods. For example, inputting ”How to install mmdet and mmcv” will identify CC part of speech, thereby decomposing into two simple questions. But in actual operation, we encountered more difficult problems. • Domain-specific part-of-speech tagging lacks precision. For example, in the field of deep learning, the part of speech for ”deploy” depends on the context, which is different from daily communication. • Bilingual problems. HanLP ? exhibits subpar performance in English, and other well-known projects do not support Chinese. Utilizing translation APIs to bridge this gap in bilingual models poses further complications. Due to the lack of appropriate translations for certain terms, it can result in significant misinterpretations, such as with the term ”transformers”. 4.2 PROMPT TECHNICS Paging Suppose we want to make LLM understand an entire repository via prompts. Even the lat- est 192k context length can’t accommodate the full source code of OpenCompass. During ReRoPE optimization, we also realized that the transformer kv cache and attention score mechanism severely limit the maximum context length. Inspired by the operating system paging mechanism, we compressed the Python module into a single description, thereby shrinking the OpenCompass project within 120k. For user technical queries, we let LLM decide which modules to view, then extract the module source code for secondary inquiries. However, in practice, LLM only finds partial source code using a 128k context, and user questions may involve multiple knowledge points. Appendix D is an LLM Paging example without any web search nor RAG results. Rephrase and Respond Deng et al. (2023) attempts to enhance the prompt using LLM, but this is constrained by the understanding ability of the base model, making it incapable of extending this technique to interrogative sentences. Otherwise, it would lead to confusion in the LLM. Here is an scoring example. 2See https://github.com/InternLM/lmdeploy/pull/625 3See https://github.com/InternLM/lmdeploy/pull/718 8 Rephrase and Respond Example User: ”Determine whether the following sentences are topical interrogative sentences, with results ranging from 0 to 10. Provide scores directly without explanation.” Rephrase and expand the question, and respond. Assistant: 1 Figure 5: RaR prompt is not applicable to interrogative sentences. ReAct Yao et al. (2023) utilizes training data to potentially generate fixed-format json results based on inputs, which are then employed to invoke tools such as search engines. However, from a product perspective, being unable to debug indicates a significant risk. In practical use, search behaviors are often triggered even for simple queries. Considering that this approach requires training data, we don’t deem it cost-effective for practical use. 
5 CONCLUSION AND LIMITS In this work, we demonstrated the feasibility of using text2vec for refusal response, and multiple search methods can substantially mitigate the hallucination caused by LLMs. As long as an LLM has the following capabilities, instead of strange prayer postures, it can suffi- ciently address most demands within group chat scenarios: • Understanding domain-specific terminologies. • Supporting a minimum token length of 16k. • Scoring capability. • In-Context Learning. However, as users’ questions become more professional, it’s increasingly difficult to provide sat- isfactory responses based on the prompt and search method. This necessitates that the LLM truly understands the source code in the repository. We think efficient further pretrain is the next stage solution. Due to the limitations of the ChatML (OpenAI, 2022) format, we have merely divided the group messages according to the user, which in fact has led to a significant loss of contextual information. The new chat format should fully expressing the context of the problem, the historical messages of the speaker, and the remarks. Additionally, users are very fond of first sending log screenshots before asking questions. Many valuable contexts are contained within these images, but HuixiangDou does not support multimodal. We will work on it. 6 ACKNOWLEDGMENTS • We would like to express our gratitude towards the OpenMMLab users and ncnn contributors for their understanding and tolerance of the numerous bugs in the technical assistant. • We are grateful to the teams at OpenCompass, XTuner, and LMDeploy for their guidance during the exploratory phase of the project. • Our thanks also go to Moonshot AI and Xinran Xu for providing a free 128k context LLM API. • We extend our appreciation to Jianlin Su, the author of RoPE, for his profound insights into the structure of transformers. 9 • Finally, we want to thank Bowen Li and Kuikun Liu for their ideas on NLP, thank Song Yang for his android app contribution on Github, thank Wenxing Hu for his method of integrating WeChat and Siyue Zhao for the proofreading on this Report. REFERENCES Kai Chen, Jiaqi Wang, Jiangmiao Pang, Yuhang Cao, Yu Xiong, Xiaoxiao Li, Shuyang Sun, Wansen Feng, Ziwei Liu, Jiarui Xu, Zheng Zhang, Dazhi Cheng, Chenchen Zhu, Tianheng Cheng, Qijie Zhao, Buyu Li, Xin Lu, Rui Zhu, Yue Wu, Jifeng Dai, Jingdong Wang, Jianping Shi, Wanli Ouyang, Chen Change Loy, and Dahua Lin. Mmdetection: Open mmlab detection toolbox and benchmark. arXiv preprint arXiv:1906.07155, 2019. Yihe Deng, Weitong Zhang, Zixiang Chen, and Quanquan Gu. Rephrase and respond: Let large language models ask better questions for themselves, 2023. emozilla and bloc97. Dynamically scaled rope further increases performance of long context llama with zero fine-tuning, June 2023. URL https://www.reddit.com/r/LocalLLaMA/ comments/14mrgpr/dynamically_scaled_rope_further_increases/. Cheng-Ping Hsieh, Simeng Sun, Samuel Kriman, Shantanu Acharya, Dima Rekesh, Fei Jia, and Boris Ginsburg. Ruler: What’s the real context size of your long-context language models?, 2024. langchain contributors. Langchain: Building applications with llms through composability, 2023. URL https://github.com/langchain-ai/langchain. lmdeploy contributors. Lmdeploy is a toolkit for compressing, deploying, and serving llms, 2023. URL https://github.com/internlm/lmdeploy. Rob McColl, David Ediger, Jason Poovey, Dan Campbell, and David Bader. A Brief Study of Open Source Graph Databases, 2013. Netease. 
Netease youdao’s open-source embedding and reranker models for rag products., Jan 2024. URL https://github.com/netease-youdao/BCEmbedding. OpenAI. URL e389823ba013a24b4c32ce38fa0bd87e6bccae94/chatml.md. 2022. https://github.com/openai/openai-python/blob/ official python library openai The api, the for opencompass contributors. Opencompass: A universal evaluation platform for foundation models, 2023. URL https://github.com/open-compass/opencompass. Bowen Peng, Jeffrey Quesnelle, Honglu Fan, and Enrico Shippole. Yarn: Efficient context window extension of large language models, 2023. Jianlin Su. The upgrade path of transformer: 12, infinite extrapolation with rerope?, Aug 2023. URL https://spaces.ac.cn/archives/9708. triton contributors. Development repository for the triton language and compiler, 2019. URL https://github.com/openai/triton. wenda contributors. wenda: A large language model (llm) invocation platform, 2023. URL https: //github.com/wenda-LLM/wenda. xtuner contributors. Xtuner: A toolkit for efficiently fine-tuning llm, 2023. URL https:// github.com/internlm/xtuner. Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, and Yuan Cao. React: Synergizing reasoning and acting in language models, 2023. 10 A FINE-TUNED LLM CHAT EXAMPLES Fine-tuned LLM Chat Examples User: Can mmdet3.0 use version 0.7 of mmcv ? Assistant: mmdetection3.0 and mmcv version 0.7 are not compatible due to differences in certain features. Mmdetection 3.x requires mmcv 2.0 or a higher version. If you need to use an older version of mmcv, you may consider downgrading to mmdetection 2.x. Hallucination Example: User: Does mmdet3.0 support mmcv0.7 ? Assistant: There’s no need for that, the version of mmcv should be between 1.3.3 and 1.5.0. Response Style Example: User: mmyolo uses the coco format, right ? Assistant: Haha, your understanding is very precise. Yes, mmyolo indeed utilizes the coco format. Figure 6: Fine-tuned LLM examples. B SYSTEM PROMPT IN SCORING C SYSTEM PROMPT WITH EXAMPLES D LLM PAGING EXAMPLE System prompt in Scoring Prompt: Determine whether the following sentences are topical interrogative sentences, with results ranging from 0 to 10. Provide scores directly without explanation. Scoring standards: A score of 10 for sentences with subject, predicate, and object that are interrogative; points deducted for missing subject, verb, or object; a score of 0 for declarative sentences; a score of 0 for non-interrogative sentences. New question ”{}”, what is the score? Provide scores directly without explanation. Figure 7: Prompt for LLM scoring task. 11 LLM Scoring with Examples Prompt: Determine whether the following sentences are topical interrogative sentences, with results ranging from 0 to 10. Provide scores directly without explanation. Scoring standards: A score of 10 for sentences with subject, predicate, and object that are interrogative; points deducted for missing subject, verb, or object; a score of 0 for declarative sentences; a score of 0 for non-interrogative sentences. Here are some examples: Question "Excuse me, how should mmdeploy be installed?", Score: 9 Question "How to apply for modification of rpm?", Missing subject, Score: 7 Question "Please check if the environment is installed with your revised version", It’s a declarative sentence, Score: 0 Question "If you treat bot as a living person, everything will go smoothly", It’s not a question, Score: 0 New question ”{}”, what is the score? Provide scores directly without explanation. 
Figure 8: LLM scoring with examples; building the prompt with examples does not enhance the score.

D LLM PAGING EXAMPLE

LLM Paging Example

User: How do I output the results of triviaqa 5shot in a summarizer?
Real Solution: Add the element 'triviaqa 5shot' to the summarizer['dataset_abbrs'] array in the configuration file.
Assistant: First, you need to ensure that the results of "triviaqa 5shot" are stored in your code in an appropriate data structure (such as a dictionary or list), and that these results can be accessed from your data processing or loading section.
Next, you need to find or create a method within the relevant summarizer class that handles and formats the "triviaqa 5shot" results. This might involve parsing, computing, or transforming data so they can be correctly included in the output of the summarizer.
Finally, you need to insert code into the output methods of the summarizer (like form_table or format_raw_txt) so that the "triviaqa 5shot" results are included when generating summaries. This might involve adding new columns or rows to a table or text, or modifying existing data to include new information.

Figure 9: LLM Paging only searched the source code of opencompass.summarizer, and was not aware of the existence of the config file.
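The scoring prompt of Figures 7 and 8 lends itself to a simple gate that decides whether a group-chat message should be answered at all. Below is a minimal sketch; the llm_complete callable and the threshold of 6 are our assumptions, not part of HuixiangDou's released code.

```python
import re

SCORING_PROMPT = (
    'Determine whether the following sentences are topical interrogative sentences, '
    'with results ranging from 0 to 10. Provide scores directly without explanation. '
    'Scoring standards: A score of 10 for sentences with subject, predicate, and object '
    'that are interrogative; points deducted for missing subject, verb, or object; '
    'a score of 0 for declarative sentences; a score of 0 for non-interrogative sentences. '
    'New question "{}", what is the score? Provide scores directly without explanation.'
)

def is_topical_question(message: str, llm_complete, threshold: int = 6) -> bool:
    """Gate a group-chat message with the LLM scoring prompt.

    llm_complete: hypothetical callable mapping a prompt string to the raw
    LLM reply; substitute your own client.
    """
    reply = llm_complete(SCORING_PROMPT.format(message))
    match = re.search(r"\d+", reply)  # the model is asked to reply with a bare number
    if match is None:
        return False                  # unparseable reply: refuse rather than hallucinate
    return int(match.group()) >= threshold
```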
ai_researcher
1
Characterizing_the_orthodontic_patient's_purchase_decision.pdf
3D Structure-guided Network for Tooth Alignment in 2D Photograph

Yulong Dou1 ([email protected]), Lanzhuju Mei1 ([email protected]), Dinggang Shen1,2,3 ([email protected]), Zhiming Cui1 ([email protected])

1 School of Biomedical Engineering, ShanghaiTech University, Shanghai 201210, China
2 Shanghai United Imaging Intelligence Co., Ltd., Shanghai 200230, China
3 Shanghai Clinical Research and Trial Center, Shanghai 201210, China

arXiv:2310.11106v2 [cs.CV] 8 Aug 2024

Abstract

Orthodontics focuses on rectifying misaligned teeth (i.e., malocclusions), which affect both masticatory function and aesthetics. However, orthodontic treatment often involves complex, lengthy procedures. As such, generating a 2D photograph depicting aligned teeth prior to orthodontic treatment is crucial for effective dentist-patient communication and, more importantly, for encouraging patients to accept orthodontic intervention. In this paper, we propose a 3D structure-guided tooth alignment network that takes 2D photographs as input (e.g., photos captured by smartphones) and aligns the teeth within the 2D image space to generate an orthodontic comparison photograph featuring aesthetically pleasing, aligned teeth. Notably, while the process operates within a 2D image space, our method employs 3D intra-oral scanning models collected in clinics to learn about orthodontic treatment, i.e., projecting the pre- and post-orthodontic 3D tooth structures onto 2D tooth contours, followed by a diffusion model to learn the mapping relationship. Ultimately, the aligned tooth contours are leveraged to guide the generation of a 2D photograph with aesthetically pleasing, aligned teeth and realistic textures. We evaluate our network on various facial photographs, demonstrating its exceptional performance and strong applicability within the orthodontic industry.

1 Introduction

Orthodontic treatment is an effective remedy for correcting tooth misalignment (i.e., malocclusions). It is estimated that over 90% of people suffer from malocclusion problems of various degrees[2], and most people can benefit from orthodontic intervention. This treatment not only helps prevent oral diseases at a physiological level, but also significantly boosts patients' confidence, enhancing their psychological well-being[23]. However, the complexity of orthodontic procedures, which often span several months or even years, can deter individuals from seeking treatment. Hence, the generation and visualization of potential post-treatment facial photographs with aesthetic teeth becomes crucial. Such predictive imaging not only engages and motivates patients but also fosters more effective communication between orthodontists and their patients.

© 2023. The copyright of this document resides with its authors. It may be distributed unchanged freely in print or electronic forms.

Figure 1: Orthodontic comparison photographs. For each case, we show the facial photograph with misaligned teeth (left) and the facial photograph with well-aligned teeth generated by our network (right); the image in the lower right corner is a zoom-in of the mouth region.

In clinical practice, visualizing patients' appearance after orthodontic treatment is referred to as the "Visual Treatment Objective" (VTO). This is typically performed on X-ray images by deforming soft tissues and skeleton based on detected landmarks[21, 25].
However, this operation leaves the teeth's appearance unaltered, making it challenging for patients to make a realistic comparison. In this study, our objective is to take a 2D photograph as input (e.g., photos captured by smartphones) and directly generate an "Orthodontic Comparison Photograph" with aligned teeth and realistic textures, as shown in Figure 1. Note that the generated photograph should follow the unique tooth alignment properties of each patient in real-world treatment, instead of a simplistic Photoshop approach with template teeth[32].

Recent significant advancements in deep learning, particularly in generative networks, have achieved promising results in the computer vision community. However, most of these models heavily rely on paired images, which is not suitable for our task. This is primarily because collecting paired pre- and post-orthodontic facial photographs is challenging due to the long-term orthodontic procedure and changes in facial appearance over time. Furthermore, a 2D photograph does not provide the 3D structure of teeth. Thus, how to learn the clinical knowledge of tooth alignment, defined on 3D tooth models, from 2D photographs is also a significant challenge.

In this paper, we propose a 3D structure-guided network for tooth alignment in 2D photographs. The key idea is to learn the clinical tooth alignment knowledge defined on 3D intra-oral scanning models[17], and apply the learned properties to guide the 2D post-orthodontic photograph generation. Specifically, we begin by collecting a set of paired pre- and post-orthodontic intra-oral scanning tooth models in clinics, and render[24] them onto the oral area of a 2D facial photograph. In this way, we obtain paired pre- and post-orthodontic tooth contours in the 2D photograph (as shown in Figure 2). Then, a Diffusion Model[11] is applied to learn tooth alignment knowledge, i.e., generating post-orthodontic tooth contours with the input of pre-orthodontic tooth contours derived from 3D tooth models. Note that only the tooth structures are captured, without any texture information. In the inference process, we can directly take the tooth contours segmented from the 2D facial photograph. Finally, guided by the aligned tooth contours, we employ another Diffusion Model to generate a realistic 2D photograph with aligned teeth. In particular, to enhance similarity with the patient's original appearance, we incorporate facial skin color and intra-oral highlights into the generation process, accounting for texture and lighting information. In our experiments, we collect a large number of photographs from patients suffering from malocclusion problems of various degrees, and achieve superior performance compared to state-of-the-art methods, including GAN[8] and Diffusion Models. Furthermore, we also conduct a user study to validate the alignment and authenticity performance of our algorithm, demonstrating its potential applicability within the orthodontic industry.

2 Related Work

Digital Orthodontics. Digital orthodontics employs digital imaging technologies such as intra-oral scanning[17], CBCT[7], and panoramic radiography[1] to provide dentists with information about the structure and occlusion of patients' teeth. This helps dentists with pre-treatment diagnosis and orthodontic treatment planning.
A variety of emerging techniques have been introduced in related fields, including tooth segmentation[5, 6], 3D tooth reconstruction[36, 37], and 3D tooth arrangement[35]. In terms of orthodontic comparison photographs, Lingchen et al. [20] introduced iOrthoPredictor, which can synthesize an image of well-aligned teeth based on a patient's facial photograph and an additional input of the patient's 3D dental model. Chen et al. [3] introduced OrthoAligner, which needs only a facial photograph and no 3D dental model as input, by introducing the concept of StyleGAN inversion. However, OrthoAligner is limited in that it only uses facial photographs to learn the tooth transformation, without utilizing information from 3D dental models.

Image Generation. Image generation is a field of research in computer vision that aims to generate new digital images, using algorithms or models, either from scratch or by modifying existing images. Several models have been proposed for image generation, including GAN[8], VAE[18], and the Diffusion Model[11]. Specifically, GAN simultaneously trains a generator and a discriminator to generate more realistic images. Many GAN-based models have been proposed, such as the unsupervised StyleGAN[15] and the supervised Pix2pix GAN[13]. VAE is a generative model that uses variational inference for sampling from probability distributions. Ho et al. [11] propose the Diffusion Model based on Score Matching[12] and Denoising Autoencoders[34], and elaborate on its mathematical principles. The Diffusion Model is a generative model that utilizes a forward process of step-by-step noise addition and a backward denoising process to generate high-quality images. Choi et al. [4] propose a reference-guided conditional Diffusion Model that fine-tunes the backward denoising process. Singh et al. [31] introduce condition noise to navigate the Diffusion Model. Saharia et al. [29] propose an image-to-image Diffusion Model guided by a condition image.

3 Method

3.1 Overview

Overall, our goal is to design a tooth alignment network that incorporates 3D structural information derived from intra-oral scanning models, which is essential for clinical orthodontic treatment, to guide the orthodontic comparison photograph generation.

We have pre-orthodontic intra-oral scanning models S = {S_1, S_2, ..., S_N}, post-orthodontic intra-oral scanning models Ŝ = {Ŝ_1, Ŝ_2, ..., Ŝ_N} of the same patients collected in clinics, and unpaired facial photographs I = {I_1, I_2, ..., I_M} collected by smartphones. Given that the facial photographs I and the 3D intra-oral scanning models S, Ŝ in our dataset are not paired, we design a module, named Align-Mod, for tooth alignment that can still incorporate 3D structural information from intra-oral scanning models as guidance. This module randomly selects pre- and post-orthodontic intra-oral scanning models (i.e., S_r ∈ S and Ŝ_r ∈ Ŝ) for an unpaired facial photograph (i.e., I_r ∈ I), and performs a coarse 2D-3D registration between S_r, Ŝ_r and I_r, respectively. Then, the 3D tooth structures are projected onto the 2D facial photograph to obtain pre-orthodontic tooth contours C_r ∈ R^(128×256) and post-orthodontic tooth contours Ĉ_r ∈ R^(128×256). In this way, our Align-Mod module can learn the tooth transformation T(·), which represents the clinical orthodontic knowledge derived from the 3D intra-oral scanning models.
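Before detailing each module, here is a compact sketch of the intended inference flow; the function names are illustrative stand-ins for Align-Mod and the segmentation and generation modules introduced next, not the authors' actual API.

```python
def orthodontic_comparison(photo, segm_mod, align_mod, gen_mod):
    """Illustrative three-stage inference (hypothetical callables standing in
    for the trained Segm-Mod, Align-Mod, and Gen-Mod)."""
    # Segm-Mod: detect the face, crop the mouth region, and segment
    # the oral mask and pre-orthodontic tooth contours.
    mouth, mask, contours, bbox = segm_mod(photo)
    # Align-Mod: predict well-aligned tooth contours from the
    # pre-orthodontic contours (learned transformation T).
    aligned_contours = align_mod(contours, mask)
    # Gen-Mod: synthesize a realistic mouth region guided by the
    # aligned contours, then paste it back at the stored position.
    new_mouth = gen_mod(aligned_contours, mask, mouth)
    result = photo.copy()
    y0, y1, x0, x1 = bbox
    result[y0:y1, x0:x1] = new_mouth
    return result
```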
In addition to the pre-trained tooth alignment module, we also design a segmentation module, named Segm-Mod, to locate the mouth region and segment tooth contours C from facial photographs I, and a generation module, named Gen-Mod, to generate the facial image with aesthetically pleasing teeth. In summary, the three modules designed in this framework are shown in Figure 2.

Figure 2: Overall pipeline. When a facial photograph is input into our network, it first goes through Segm-Mod to obtain the oral mask, mouth region, and tooth contours. It then enters the pre-trained Align-Mod to predict well-aligned tooth contours, and finally goes through Gen-Mod to generate a facial photograph with well-aligned teeth.

3.2 Segmentation Module

To begin with, Segm-Mod needs to detect the position of the face[16, 30] and obtain a standardized face F_i ∈ R^(512×512) from any given facial photograph I_i ∈ I. As shown in Figure 2, to accurately locate the mouth, we propose an oral detection network OD(·)[19, 38] to segment the oral mask M_i ∈ R^(128×256) and crop the mouth region R_i ∈ R^(128×256) from the standardized face F_i. Then, to obtain tooth contours C_i ∈ R^(128×256), which contain the structural information of the teeth, we employ a commonly used supervised segmentation network, U-Net[27], for segmenting the tooth contours C_i from the mouth region R_i. The process of Segm-Mod is formulated as:

R_i, M_i = OD(I_i), C_i = U(R_i, M_i), ∀i = 1, 2, ..., N,  (1)

where OD(·) is the oral detection network applied to the 2D facial photograph I_i, U(·) denotes the U-Net-based contour segmentation network, and C_i, R_i, M_i are the pre-orthodontic tooth contours, mouth region, and oral mask obtained by Segm-Mod, respectively.

To train the network, we employ the Dice Loss[22] and the Weighted Cross-Entropy Loss[28]. Given the imbalanced area between foreground (tooth contours) and background, the Dice Loss performs excellently in situations with severe imbalance and focuses on learning the foreground area. Furthermore, the Weighted Cross-Entropy Loss can address the imbalance problem by adjusting the class weighting, making it a suitable complement to the Dice Loss. Our designed loss function is defined as:

L = w_dice · L_dice + w_ce · L_ce.  (2)
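A minimal PyTorch sketch of the combined loss in Eq. 2, using the weights reported later in the implementation details (w_dice = 0.8, w_ce = 0.2, foreground class weight 20); the exact formulation of the soft Dice term is our assumption.

```python
import torch
import torch.nn.functional as F

def segm_loss(logits, target, w_dice=0.8, w_ce=0.2, fg_weight=20.0, eps=1e-6):
    """Combined Dice + weighted cross-entropy loss (Eq. 2) for binary
    contour segmentation.

    logits: (B, 2, H, W) raw network outputs.
    target: (B, H, W) integer class map (dtype long) with values in {0, 1}.
    """
    # Weighted cross-entropy: up-weight the sparse foreground contours.
    class_weights = torch.tensor([1.0, fg_weight], device=logits.device)
    ce = F.cross_entropy(logits, target, weight=class_weights)

    # Soft Dice on the foreground probability map.
    prob_fg = torch.softmax(logits, dim=1)[:, 1]
    target_fg = target.float()
    inter = (prob_fg * target_fg).sum(dim=(1, 2))
    denom = prob_fg.sum(dim=(1, 2)) + target_fg.sum(dim=(1, 2))
    dice = (2 * inter + eps) / (denom + eps)
    dice_loss = 1 - dice.mean()

    return w_dice * dice_loss + w_ce * ce
```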
3.3 Alignment Module

One of the most innovative aspects of our method is that we incorporate structural information from the 3D intra-oral scanning models S, Ŝ into Align-Mod, which is essential for clinical orthodontic treatment. We employ a 3D-to-2D Render to project the 3D intra-oral scanning models S, Ŝ onto the oral area of the 2D facial photographs, as opposed to the approach of Wirtz et al. [36] and Zheng et al. [40] of reconstructing 3D dental models from multi-view tooth photographs. Furthermore, we design a conditional Diffusion Model-based network for learning the clinical orthodontic knowledge T(·) in the space of the tooth contours C_r, Ĉ_r obtained by the Render.

3.3.1 Render

Since the 3D intra-oral scanning models S, Ŝ and the facial photographs I are collected from different environments and sources, i.e., one in clinics and the other from smartphones in daily life, we cannot perform precise 3D-2D registration through rigid transformation. Fortunately, precise registration is not necessary for our task, as our purpose is to create paired tooth contours from the intra-oral scanning models. Therefore, to obtain tooth contours C_r, Ĉ_r ∈ R^(128×256), we perform a coarse registration based on landmarks, namely the coordinates of the tooth cusp points of the central and lateral incisors in both the render-used intra-oral scanning models S_r ∈ S, Ŝ_r ∈ Ŝ and the facial photograph I_r ∈ I.

We use numerical optimization[33] to perform the coarse registration and then obtain the 2D tooth contours by projection. The essential principle of projecting 3D to 2D is to solve for the camera parameters, as shown in Equation 3:

ρ m = (K Rᵀ | −K Rᵀ C) (M; 1)ᵀ,  (3)

where M represents the coordinates of a point in the world coordinate system, denoted as M = (X, Y, Z)ᵀ, and m represents the coordinates of the corresponding point in the pixel coordinate system, denoted as m = (u, v, 1)ᵀ. ρ is the projection depth, C is the position of the camera, R is the rotation matrix representing the camera pose, and K is the matrix of intrinsic parameters[9]. Specifically, we use the four paired tooth cusp points mentioned above on both the facial photo and the intra-oral scanning model to derive their coordinates, represented as m and M. Subsequently, we can solve for the camera parameters, primarily the unknown variables in K and C, given the other known parameters.

3.3.2 Tooth Transformation

Once we render the 3D intra-oral scanning models S_r, Ŝ_r onto the 2D facial photograph I_r and obtain the pre- and post-orthodontic tooth contours C_r, Ĉ_r as mentioned above, we can learn T(·), i.e., the clinical orthodontic knowledge. We employ a network based on the image-to-image Diffusion Model[29], as shown in Figure 2. To emphasize the generation of oral regions, we introduce Gaussian noise G_r ∈ R^(128×256) generated within the oral mask M_r. We concatenate the pre-orthodontic tooth contours C_r with the Gaussian noise G_r to form the condition information, which serves as guidance for our diffusion model. Therefore, the formula for T(·) during the pre-training process is:

Ĉ_r = T(C_r ⊚ G_r),  (4)

where ⊚ denotes channel-wise concatenation.

Since we have pre-learned T(·), which represents the clinical orthodontic knowledge of tooth transformation, we can apply the learned knowledge to process the 2D tooth contours C_i derived from Segm-Mod. Hence, we concatenate the tooth contours C_i with the intra-oral Gaussian noise G_i and feed them into our diffusion model, expecting a reasonable prediction of well-aligned tooth contours Ĉ_i. The inference process is formulated as:

Ĉ_i = T(C_i ⊚ G_i), ∀i = 1, 2, ..., N.  (5)

3.4 Generation Module

After obtaining the well-aligned tooth contours Ĉ_i through the tooth transformation T(·) of our Align-Mod, we aim to generate a mouth region with realistic teeth R̂_i guided by Ĉ_i. To achieve this, we adapt a conditional Diffusion Model-based generative network G(·) in our Gen-Mod. We again introduce Gaussian noise G_i ∈ R^(128×256) generated within the oral mask M_i to emphasize the generation of oral regions. Besides the well-aligned tooth contours Ĉ_i and the intra-oral Gaussian noise G_i mentioned above, we additionally introduce intra-oral highlights L_i ∈ R^(128×256) and facial skin color K_i ∈ R^(128×256), which are helpful for generating more realistic tooth color and environmental lighting. These four maps are then concatenated together as the condition information to guide our generation model.
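The channel-wise concatenation ⊚ used in Eqs. 4, 5, and 7 simply stacks the condition maps along the channel axis before they enter the denoising network. A minimal sketch (tensor shapes follow the paper; the function itself is illustrative, not the authors' code):

```python
import torch

def build_condition(contours, noise, highlights=None, skin_color=None):
    """Stack condition maps channel-wise, as in Eq. 4 (Align-Mod,
    two channels) and Eq. 7 (Gen-Mod, four channels).

    Each input: (B, 1, 128, 256) tensor; returns (B, C, 128, 256).
    """
    maps = [contours, noise]
    if highlights is not None and skin_color is not None:
        maps += [highlights, skin_color]  # Gen-Mod conditioning (Eq. 7)
    return torch.cat(maps, dim=1)
```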
In terms of intra-oral highlights, we employ Contrast Limited Adaptive Histogram Equalization (CLAHE)[41] and thresholding, aiming to enhance image contrast and detect highlights within the oral region. Specifically, we utilize CLAHE as in Equation 6 to improve local contrast in the mouth region R_i, using a histogram equalization approach with a specified contrast limit of 5 in a 20 × 20 local window, thus preventing over-amplification of noise in flat areas while enhancing contrast in textured areas:

g_xy = (1 / S_xy) Σ_{z=0}^{f_xy} h(x, z),  f′_xy = g_xy × (L − 1),
CLAHE_xy = f′_xy if f′_xy < L − 1, and L − 1 if f′_xy ≥ L − 1,  (6)

where S_xy denotes the size of the local window, f_xy denotes the pixel intensity at pixel (x, y), h(x, z) is the histogram of pixel intensities in the local window, L is the number of intensity levels, g_xy is the gain factor, and f′_xy is the transformed pixel intensity.

Once we generate a mouth region with realistic teeth R̂_i, we use the face and mouth positions stored in Segm-Mod to replace the oral region in the initial facial photograph, thereby obtaining a facial photograph with well-aligned and aesthetically pleasing teeth Î_i for VTO (see Figure 2). The process of Gen-Mod is formulated as:

Î_i = G(Ĉ_i ⊚ G_i ⊚ L_i ⊚ K_i), ∀i = 1, 2, ..., N,  (7)

where ⊚ denotes channel-wise concatenation of the segmented tooth contours Ĉ_i, Gaussian noise G_i, intra-oral highlights L_i, and facial skin color K_i. Î_i is the predicted facial photograph with well-aligned and aesthetically pleasing teeth produced by the Diffusion Model-based generative network G(·) of Gen-Mod.

4 Experiments

4.1 Experiment Settings

Dataset. Our dataset comprises 1367 facial photographs I, of which 1129 are used to train Segm-Mod and Gen-Mod, 138 are used to create datasets through the Render in Align-Mod, and the remaining 100 are reserved for testing our overall pipeline. For the 138 render-used facial photographs, we manually annotate the coordinates of the tooth cusp points of the central and lateral incisors in the upper jaw. Moreover, we have 1257 3D intra-oral scanning models S collected in dental clinics, along with their corresponding orthodontic treatment plans provided by dentists. In this way, we also obtain the corresponding 1257 post-orthodontic intra-oral scanning models Ŝ. Note that for each of the 138 render-used facial photographs, we randomly select 10 models from the pool of 1257 intra-oral scanning models to perform the Render process described in Section 3.3.1. We thus have 1380 pre- and post-orthodontic tooth contours C_r, Ĉ_r, respectively, for training Align-Mod.

Implementation Details. The proposed method is implemented in PyTorch on 2 NVIDIA A100 80GB GPUs. By iteratively tuning and training Segm-Mod, we ultimately assign 0.8 to w_dice and 0.2 to w_ce, along with a weight of 20 for the foreground and a weight of 1 for the background in L_ce. For Align-Mod and Gen-Mod, we set the batch size of our diffusion model to 60. The learning rate is 5e-5, and we use an Exponential Moving Average[10] with β = 0.9999 to update the parameters of the diffusion model. Lastly, regarding the parameters of the Render, we set the focal length of the camera intrinsic parameters to 213.33, and we use an SGD[26] optimizer with an initial learning rate of 0.01 and a learning rate scheduler that reduces the learning rate by a factor of 0.9 every 500 steps.
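To illustrate the highlight-extraction step of Section 3.4 concretely, here is a minimal OpenCV sketch using the CLAHE parameters given above (clip limit 5, 20 × 20 tiles); the threshold value is our assumption, as the paper does not specify it.

```python
import cv2
import numpy as np

def intraoral_highlights(mouth_bgr, oral_mask, thresh=220):
    """Extract an intra-oral highlight map L_i via CLAHE + thresholding.

    mouth_bgr: (H, W, 3) uint8 mouth crop; oral_mask: (H, W) in {0, 1}.
    thresh is a hypothetical threshold; tune per dataset.
    """
    gray = cv2.cvtColor(mouth_bgr, cv2.COLOR_BGR2GRAY)
    # Contrast-limited adaptive histogram equalization (Eq. 6),
    # clip limit 5 on a 20x20 tile grid as in the paper.
    clahe = cv2.createCLAHE(clipLimit=5.0, tileGridSize=(20, 20))
    enhanced = clahe.apply(gray)
    # Keep only bright pixels, restricted to the oral region.
    _, bright = cv2.threshold(enhanced, thresh, 255, cv2.THRESH_BINARY)
    return bright * oral_mask.astype(np.uint8)
```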
4.2 Results

Based on the pre-trained Align-Mod, our three-stage network can infer a facial photograph with well-aligned and aesthetically pleasing teeth Î_i from the patient's previous photograph I_i, without any 3D intra-oral scanning model as input, while still benefiting from the guidance of clinical orthodontic knowledge contained in the 3D structure of intra-oral scanning models. To demonstrate the results of our method and provide a more detailed view of the inference process and confidence level, we present testing cases in Figure 1 and Figure 3. Based on the visual results and the inference process presented, it is evident that our Segm-Mod has excellent segmentation ability, even with misaligned teeth. Our Align-Mod also shows great reliability in predicting well-aligned tooth contours, closely following the pre-trained transformation T(·) in the image space, which is crucial for clinical orthodontic treatment. Moreover, our Gen-Mod can infer reasonably realistic teeth, with color and lighting similar to the patient's previous teeth photograph and its shooting environment.

Figure 3: Inference process. For each detected mouth region R_i (a), we segment to obtain the oral mask M_i (b) and oral region (c). We further obtain tooth contours C_i (d) from our Segm-Mod and input them into our Align-Mod to yield well-aligned tooth contours Ĉ_i (e). We finally predict a mouth region with well-aligned teeth R̂_i (f) through our Gen-Mod.

4.3 Comparison

Additionally, we qualitatively compare our tooth alignment network with Pix2pix GAN[8, 13], specifically for Align-Mod and Gen-Mod; the comparison results are shown in Figure 4. The Diffusion Model-based methods are more capable than the Pix2pix GAN-based methods, with more reasonable alignment predictions and more realistic tooth color and lighting. As shown in Table 1, we also quantitatively evaluate our proposed Align-Mod and Gen-Mod against Pix2pix GAN using pixel-wise L1, L2, and LPIPS errors[14, 39]. L1 and L2 are commonly used pixel-wise metrics for quantifying discrepancies between generated results and the target, whereas LPIPS is a perceptual metric that calculates the perceptual distance and visual similarity between images. Our method is consistently better than the Pix2pix GAN-based methods on these metrics.

4.4 Ablation Study

We mentioned in Subsection 3.4 that, in order to make our Gen-Mod yield more realistic tooth color and environmental lighting, we introduce intra-oral highlights L_i and facial skin color K_i and concatenate them together with the well-aligned tooth contours Ĉ_i and the intra-oral Gaussian noise G_i as guidance. To validate the effectiveness of these two condition images in guiding Gen-Mod, we conduct an ablation study. We design four groups of ablation experiments, as shown in Table 2, all using the Diffusion Model-based generative network and the same datasets for training. Visual results of the four ablation experiments are shown in Figure 5. Specifically, Ablation I includes neither of the two condition images, resulting in the poorest generation performance.
Ablation II includes facial skin color, resulting in better tooth color, but lighting information is lost compared to the original image. Ablation III adds intra-oral highlights, significantly restoring the environmental lighting but yielding less realistic tooth color. Ablation IV intuitively has the best generation performance, with both realistic facial skin color and intra-oral highlights.

Table 1: Quantitative comparisons between different methods on the testing dataset.

Method | L1 ↓ | L2 ↓ | LPIPS ↓
Diffusion Model-based Align-Mod | 0.053 | 0.319 | 0.057
Pix2pix GAN-based Align-Mod | 0.061 | 0.343 | 0.104
Diffusion Model-based Gen-Mod | 0.029 | 0.093 | 0.038
Pix2pix GAN-based Gen-Mod | 0.047 | 0.148 | 0.107

Figure 4: Qualitative comparisons (columns: Input, Ours, Pix2pix GAN, GT). The upper two rows are two testing cases of Align-Mod, and the lower two rows are two testing cases of Gen-Mod.

Table 2: Four groups of ablation experiments.

Experiment | facial skin color | intra-oral highlights
Ablation I | - | -
Ablation II | ✓ | -
Ablation III | - | ✓
Ablation IV | ✓ | ✓

Figure 5: Visual results of the four ablation experiments (panels: Ablations I-IV and GT).

4.5 User Study

To further demonstrate the reliability and credibility of our method, we conduct a user study in which we invite 30 individuals to vote on the alignment and authenticity of photographs (focused only on the mouth region) generated by our method. Specifically, for assessing alignment, we randomly select 10 generated facial photographs and 10 photographs from patients who have received orthodontic treatment. Participants are asked to rate the alignment of the teeth in the photographs on a scale of 1 to 5, with higher scores indicating better alignment. Similarly, for assessing authenticity, we randomly select 10 generated facial photographs and 10 real ones, and ask participants to vote on whether they are real or fake. Table 3 shows the average alignment scores and the average percentage of photographs classified as "real". Compared with well-aligned or real photographs, the photographs generated by our method achieve high scores in terms of both alignment and authenticity, only slightly lower than the scores of well-aligned teeth on real photographs.

Table 3: Voting results of the user study. The first row reports the average alignment score; the second row reports the average percentage of photographs classified as "real".

Metric | Well-aligned/Real photos | Ours
Alignment | 3.84 | 3.82
Authenticity | 72.67% | 65.00%

5 Discussion

In this work, we propose a 3D structure-guided tooth alignment network to effectively generate orthodontic comparison photographs. According to the experimental results above, our method utilizes 3D dental models to learn orthodontic knowledge in the image space. The 3D structure successfully guides the learning and prediction of our network, giving our method practical clinical significance. Additionally, we introduce the Diffusion Model into the task of orthodontic comparison photograph generation and show its great power in our Align-Mod and Gen-Mod. Importantly, unlike state-of-the-art methods[3, 20] in the field, our method can incorporate clinical orthodontic knowledge into the network without requiring an additional input of dental models. This demonstrates that our method is more clinically practical, user-friendly, and applicable within the orthodontic industry. Our method, however, is not without limitations.
For example, our network cannot handle certain cases, such as severely misaligned teeth or very wide smiles. Moreover, our method cannot take collisions and occlusal relationships into consideration, since it operates purely in the image space. In the future, we plan to first reconstruct 3D tooth models from the 2D photograph and then perform the tooth alignment.

6 Conclusion

In this paper, we have designed a 3D structure-guided network to infer a facial photograph with well-aligned and aesthetically pleasing teeth based on the patient's previous facial photograph. Our method stands out from existing approaches as it learns clinical orthodontic knowledge from 3D intra-oral scanning models, making it highly reliable and potentially applicable in clinical practice.

Acknowledgements

This work was supported in part by NSFC grants (No. 6230012077).

References

[1] Christos Angelopoulos, Aurelija Bedard, Jerald O Katz, Stelios Karamanis, and Nikos Parissis. Digital panoramic radiography: An overview. In Seminars in Orthodontics, volume 10, pages 194–203. Elsevier, 2004.

[2] Olaf Bernhardt, Karl-Friedrich Krey, Amro Daboul, Henry Voelzke, Stefan Kindler, Thomas Kocher, and Christian Schwahn. New insights in the link between malocclusion and periodontal disease. Journal of Clinical Periodontology, 46(2):144–159, 2019.

[3] Beijia Chen, Hongbo Fu, Kun Zhou, and Youyi Zheng. OrthoAligner: Image-based teeth alignment prediction via latent style manipulation. IEEE Transactions on Visualization and Computer Graphics, 2022.

[4] Jooyoung Choi, Sungwon Kim, Yonghyun Jeong, Youngjune Gwon, and Sungroh Yoon. ILVR: Conditioning method for denoising diffusion probabilistic models. arXiv preprint arXiv:2108.02938, 2021.

[5] Zhiming Cui, Changjian Li, and Wenping Wang. ToothNet: Automatic tooth instance segmentation and identification from cone beam CT images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6368–6377, 2019.

[6] Zhiming Cui, Yu Fang, Lanzhuju Mei, Bojun Zhang, Bo Yu, Jiameng Liu, Caiwen Jiang, Yuhang Sun, Lei Ma, Jiawei Huang, et al. A fully automatic AI system for tooth and alveolar bone segmentation from cone-beam CT images. Nature Communications, 13(1):2096, 2022.

[7] W. De Vos, Jan Casselman, and G.R.J. Swennen. Cone-beam computerized tomography (CBCT) imaging of the oral and maxillofacial region: A systematic review of the literature. International Journal of Oral and Maxillofacial Surgery, 38(6):609–625, 2009.

[8] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial networks. Communications of the ACM, 63(11):139–144, 2020.

[9] Richard Hartley and Andrew Zisserman. Multiple View Geometry in Computer Vision. Cambridge University Press, 2003.

[10] David Haynes, Steven Corns, and Ganesh Kumar Venayagamoorthy. An exponential moving average algorithm. In 2012 IEEE Congress on Evolutionary Computation, pages 1–8. IEEE, 2012.

[11] Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems, 33:6840–6851, 2020.

[12] Aapo Hyvärinen and Peter Dayan. Estimation of non-normalized statistical models by score matching. Journal of Machine Learning Research, 6(4), 2005.

[13] Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A. Efros.
Image-to-image translation with conditional adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1125–1134, 2017.

[14] Justin Johnson, Alexandre Alahi, and Li Fei-Fei. Perceptual losses for real-time style transfer and super-resolution. In Computer Vision–ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, October 11-14, 2016, Proceedings, Part II 14, pages 694–711. Springer, 2016.

[15] Tero Karras, Samuli Laine, and Timo Aila. A style-based generator architecture for generative adversarial networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4401–4410, 2019.

[16] Vahid Kazemi and Josephine Sullivan. One millisecond face alignment with an ensemble of regression trees. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1867–1874, 2014.

[17] Hidemichi Kihara, Wataru Hatakeyama, Futoshi Komine, Kyoko Takafuji, Toshiyuki Takahashi, Jun Yokota, Kenta Oriso, and Hisatomo Kondo. Accuracy and practicality of intraoral scanner in dentistry: A literature review. Journal of Prosthodontic Research, 64(2):109–113, 2020.

[18] Diederik P Kingma and Max Welling. Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114, 2013.

[19] Cheng-Han Lee, Ziwei Liu, Lingyun Wu, and Ping Luo. MaskGAN: Towards diverse and interactive facial image manipulation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5549–5558, 2020.

[20] Yang Lingchen, Shi Zefeng, Wu Yiqian, Li Xiang, Zhou Kun, Fu Hongbo, and Youyi Zheng. iOrthoPredictor: Model-guided deep prediction of teeth alignment. ACM Transactions on Graphics, 39(6):216, 2020.

[21] Richard P McLaughlin and John C Bennett. The dental VTO: An analysis of orthodontic tooth movement. Journal of Clinical Orthodontics: JCO, 33(7):394–403, 1999.

[22] Fausto Milletari, Nassir Navab, and Seyed-Ahmad Ahmadi. V-Net: Fully convolutional neural networks for volumetric medical image segmentation. In 2016 Fourth International Conference on 3D Vision (3DV), pages 565–571. IEEE, 2016.

[23] Minghui Peng, Jing Kang, and Xiao Deng. The role of body image in orthodontic treatment for adolescents. West China Journal of Stomatology, 35(5):489, 2017.

[24] Matt Pharr, Wenzel Jakob, and Greg Humphreys. Physically Based Rendering: From Theory to Implementation. Morgan Kaufmann, 2016.

[25] G Power, J Breckon, M Sherriff, and F McDonald. Dolphin imaging software: An analysis of the accuracy of cephalometric digitization and orthognathic prediction. International Journal of Oral and Maxillofacial Surgery, 34(6):619–626, 2005.

[26] Herbert Robbins and Sutton Monro. A stochastic approximation method. The Annals of Mathematical Statistics, pages 400–407, 1951.

[27] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-Net: Convolutional networks for biomedical image segmentation. In Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015: 18th International Conference, Munich, Germany, October 5-9, 2015, Proceedings, Part III 18, pages 234–241. Springer, 2015.

[28] Reuven Rubinstein. The cross-entropy method for combinatorial and continuous optimization. Methodology and Computing in Applied Probability, 1:127–190, 1999.

[29] Chitwan Saharia, William Chan, Huiwen Chang, Chris Lee, Jonathan Ho, Tim Salimans, David Fleet, and Mohammad Norouzi. Palette: Image-to-image diffusion models.
In ACM SIGGRAPH 2022 Conference Proceedings, pages 1–10, 2022.

[30] Florian Schroff, Dmitry Kalenichenko, and James Philbin. FaceNet: A unified embedding for face recognition and clustering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 815–823, 2015.

[31] Vedant Singh, Surgan Jandial, Ayush Chopra, Siddharth Ramesh, Balaji Krishnamurthy, and Vineeth N Balasubramanian. On conditioning the input noise for controlled image generation with diffusion models. arXiv preprint arXiv:2205.03859, 2022.

[32] Manoj Kumar Sundar and BDS Venkataraman Chelliah. Ten steps to create virtual smile design templates with Adobe Photoshop CS6. Compendium, 39(3), 2018.

[33] Bo D Tapley and JM Lewallen. Comparison of several numerical optimization methods. Journal of Optimization Theory and Applications, 1:1–32, 1967.

[34] Pascal Vincent. A connection between score matching and denoising autoencoders. Neural Computation, 23(7):1661–1674, 2011.

[35] Guodong Wei, Zhiming Cui, Yumeng Liu, Nenglun Chen, Runnan Chen, Guiqing Li, and Wenping Wang. TANet: Towards fully automatic tooth arrangement. In Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XV 16, pages 481–497. Springer, 2020.

[36] Andreas Wirtz, Florian Jung, Matthias Noll, Anqi Wang, and Stefan Wesarg. Automatic model-based 3-D reconstruction of the teeth from five photographs with predefined viewing directions. In Medical Imaging 2021: Image Processing, volume 11596, pages 198–212. SPIE, 2021.

[37] Chenglei Wu, Derek Bradley, Pablo Garrido, Michael Zollhöfer, Christian Theobalt, Markus H Gross, and Thabo Beeler. Model-based teeth reconstruction. ACM Transactions on Graphics, 35(6):220–1, 2016.

[38] Changqian Yu, Jingbo Wang, Chao Peng, Changxin Gao, Gang Yu, and Nong Sang. BiSeNet: Bilateral segmentation network for real-time semantic segmentation. In Proceedings of the European Conference on Computer Vision (ECCV), pages 325–341, 2018.

[39] Richard Zhang, Phillip Isola, Alexei A Efros, Eli Shechtman, and Oliver Wang. The unreasonable effectiveness of deep features as a perceptual metric. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 586–595, 2018.

[40] TX Zheng, Shuai Huang, YF Li, and MC Feng. Key techniques for vision based 3D reconstruction: A review. Zidonghua Xuebao, 46:631–652, 2020.

[41] Karel Zuiderveld. Contrast limited adaptive histogram equalization. Graphics Gems, pages 474–485, 1994.
ai_researcher
1
Exploration_and_optimization_of_surgical_techniques_for_laparoscopic_transhiatal_lower_mediastinal_lymph_node_dissection_for_adenocarcinoma_of_esophagogastric_junction_A_prospective_IDEAL_2a_study_with_qualitative_design.pdf
arXiv:2406.14365v1 [cs.CV] 20 Jun 2024

Journal of Machine Learning for Biomedical Imaging 2024:008, vol. 2, pp. 798–816
Special issue: MICCAI 2023 Lymph Node Quantification Challenge
Guest editors: Steve Pieper, Erik Ziegler, Tawa Idris, Bhanusupriya Somarouthu, Reuben Dorent, Gordon Harris, Ron Kikinis
Submitted 01/2024, Published 06/2024

Mask the Unknown: Assessing Different Strategies to Handle Weak Annotations in the MICCAI2023 Mediastinal Lymph Node Quantification Challenge

Stefan M. Fischer1,2,3,4 ([email protected]), Johannes Kiechle1,2,3,4 ([email protected]), Daniel M. Lang1,3 ([email protected]), Jan C. Peeken2 ([email protected]), Julia A. Schnabel1,3,4,5 ([email protected])

1: School of Computation, Information and Technology, Technical University Munich, Germany
2: Department of RadioOncology, Klinikum rechts der Isar, Technical University Munich, Germany
3: Institute of Machine Learning in Biomedical Imaging, Helmholtz Munich, Germany
4: Munich Center of Machine Learning (MCML), Germany
5: School of Biomedical Engineering and Imaging Sciences, King's College London, UK

Abstract

Pathological lymph node delineation is crucial in cancer diagnosis, progression assessment, and treatment planning. The MICCAI 2023 Lymph Node Quantification Challenge published the first public dataset for pathological lymph node segmentation in the mediastinum. As lymph node annotations are expensive, the challenge was formed as a weakly supervised learning task, where only a subset of all lymph nodes in the training set have been annotated. For the challenge submission, multiple methods for training on these weakly supervised data were explored, including noisy label training, loss masking of unlabeled data, and an approach that integrated the TotalSegmentator toolbox as a form of pseudo labeling in order to reduce the number of unknown voxels. Furthermore, multiple public TCIA datasets were incorporated into the training to improve the performance of the deep learning model. Our submitted model achieved a Dice score of 0.628 and an average symmetric surface distance of 5.8 mm on the challenge test set. With our submitted model, we achieved third place in the MICCAI2023 LNQ challenge. One finding of our analysis was that the integration of all visible, including non-pathological, lymph nodes improved the overall segmentation performance on pathological lymph nodes of the test set. Furthermore, segmentation models trained only on clinically enlarged lymph nodes, as given in the challenge scenario, could not generalize to smaller pathological lymph nodes. The code and model for the challenge submission are available at https://gitlab.lrz.de/compai/MediastinalLymphNodeSegmentation.

Keywords: deep learning, lymph node quantification, weakly supervised learning, image segmentation

©2024 Fischer, Kiechle, Lang, Peeken and Schnabel. License: CC-BY 4.0. https://doi.org/10.59275/j.melba.2024-8g8b

1. Introduction

In the following section, we introduce the "Mediastinal Lymph Node Quantification: Segmentation of Heterogeneous CT Data" (LNQ2023) challenge and discuss related work.

1.1 Motivation

Lymph nodes (LNs) are small anatomical structures scattered throughout the body. Based on their location, they are grouped into various LN stations according to established definitions, such as those from the International Association for the Study of Lung Cancer (IASLC) (Rusch et al., 2009).
During cancer progression, the tumor grows, and cancer cells spread into nearby anatomical structures, developing into metastases and radically increasing the severity of the disease. Infiltration of cancerous tissue into LNs may lead to enlargement of those LNs. Thus, assessing metastatic LNs is a critical factor for initial diagnosis, tumor staging, and treatment planning. The conventional criterion for quantifying lymph node size is based on the Response Evaluation Criteria In Solid Tumours (RECIST) guidelines (Eisenhauer et al., 2009). Enlarged LNs are defined as those whose shortest diameter exceeds 10 mm on an axial CT slice. Medical professionals rely solely on unidirectional or bidirectional measurements on a single axial slice of just one or a few LNs, introducing limitations in capturing the full extent of abnormalities (Guo et al., 2022).

However, studies indicate that relying solely on the shortest LN diameter for malignancy assessment yields recall rates of only 60%-80% in lung cancer patients (Yan et al., 2023). The facts that assessing the status of LNs is a complex, time-consuming task and that the shortest diameter is a limiting metric highlight the necessity for accurate segmentation in three dimensions to comprehensively evaluate lymph node disease (Guo et al., 2022). Furthermore, precise delineation of all tumorous regions is particularly crucial in radiation therapy, where irradiating metastatic areas such as LNs impacts patient outcomes significantly (Chapet et al., 2005). Thus, automated segmentation not only holds promise for reducing inter- and intra-observer variability, but also reduces the task-related working time.

The objectives of the LNQ2023 challenge are twofold, each aiming to address critical aspects in the field of lymph node identification and segmentation (Khajavibajestani et al., 2023). The primary goal of the challenge was to establish a benchmark for the detection and segmentation of mediastinal lymph nodes. The mediastinum, located between the lung lobes, poses a particular challenge due to the presence of ten or more lymph nodes, often with three or more enlarged nodes exceeding 10 mm in diameter. The secondary goal focuses on the exploration and application of weakly supervised learning techniques in the scenario of LN segmentation. Given the time-consuming nature of manual annotation and the possible presence of several pathological LN instances, there is a lack of pre-existing, fully annotated pathological LN datasets. This setting aligns with the current growing interest in the medical imaging community in harnessing weak annotations (Kemnitz et al., 2018; Petit et al., 2018; Zhou et al., 2019; Shi et al., 2021; Dong et al., 2022; Ulrich et al., 2023).

1.2 Mediastinal Lymph Node Segmentation

The first public dataset for mediastinal LN segmentation was introduced by Roth et al. (2014, 2015), offering segmentation annotations of enlarged lymph nodes. This task presents unique challenges due to reduced contrast compared to axillary and pelvic nodes (Nogues et al., 2016). Early methods often relied on machine learning approaches that incorporated handcrafted features and manual regions of interest (ROIs) (Aerts et al., 2014a; Liu et al., 2014, 2016; Oda et al., 2017b,a; Roth et al., 2014). Notably, Roth et al.
(2014) introduced the first neural network (NN)-based method, specifically for LN detection after an ROI proposal step, reducing false positives and marking a shift in methodology. Building upon this, subsequent works explored various NN-based methods operating directly on full CT volumes (Bouget et al., 2019, 2023; Guo et al., 2022; Yan et al., 2023; Iuga et al., 2021).

To enhance segmentation accuracy and robustness, the inclusion of anatomical key regions to guide the segmentation has been explored (Bouget et al., 2023; Oda et al., 2018; Bouget et al., 2019). The integration of lymph node station information has been a focal point, initially introduced by Liu et al. (2014). The idea of mapping lymph node stations according to IASLC guidelines has been extended by various methods, including NN-based approaches. Guo et al. (2022) proposed dedicated encoders for different LN stations, and Yan et al. (2023) introduced a station-stratified LN detector, emphasizing the importance of incorporating station information during training (Liu et al., 2014; Guo et al., 2022; Yan et al., 2023).

1.3 Weakly Supervised Learning

Weakly supervised learning has recently become a popular topic in medical imaging research, as most datasets only provide annotations for a single or a few structures of interest. Combining multiple datasets results in a partially supervised learning setting, as each dataset comes with full supervision of only a few classes; this forms a special case of weak supervision. Several research groups have concentrated on such partially supervised learning approaches, aiming to enhance model performance by aggregating information from multiple partially labeled datasets during training (Kemnitz et al., 2018; Petit et al., 2018; Zhou et al., 2019; Shi et al., 2021; Dong et al., 2022; Ulrich et al., 2023). Some works focus on masking the loss of missing classes in the current training label (Ulrich et al., 2023; Dong et al., 2022). Furthermore, different approaches to constrain the loss functions were motivated by mutual exclusion of classes or implemented by merging classes into superclasses (Kemnitz et al., 2018; Shi et al., 2021; Ulrich et al., 2023; Petit et al., 2018).

However, there is only limited work outside the partially supervised setting. In the computer vision domain, missing labels are often treated as background, a method viewed as a simplistic form of dealing with noisy labels. This approach is effective when the pixels of missing classes constitute a significantly smaller portion of the images compared to background pixels (Dong et al., 2022). A general strategy for training on weakly labeled datasets is masking the loss of voxels lacking annotations (Kemnitz et al., 2018). Additionally, semi-supervised learning (SSL) has been explored to gain additional training feedback by mining unlabeled voxels (Zhou et al., 2019; Petit et al., 2018).

Most works in SSL focus on the classification or segmentation of completely unlabeled images. In contrast, the LNQ2023 challenge is given as an incomplete pixel-level label scenario in which one or multiple foreground instances are annotated. Nguyen et al. (2020) have explored this particular scenario. They focused on two distinct settings: speech balloon segmentation in comics and cell segmentation in medical imaging. In both cases, the incomplete annotation is generated using an automatic extraction method.
The learning of background information is facilitated by a small set of background voxels chosen as direct neighbors of known foreground instances. By applying an SSL technique, they were able to learn the representation of foreground and background.

1.4 Problem Setting and Contributions

The LNQ2023 challenge goal was to segment all pathological mediastinal LNs from thorax CT volumes, while the given training labels only covered some instances of the foreground. Furthermore, those instances only covered enlarged pathological LN components. The problem setting was therefore a weakly or semi-supervised learning task with incomplete pixel-level annotations. As the challenge was an open challenge, it was allowed to use the public CT Lymph Nodes dataset from TCIA (Clark et al., 2013; Roth et al., 2015), providing a set of CT volumes in which enlarged LNs were fully annotated. Our challenge strategy was to develop a supervised training strategy handling the incomplete pixel-level labels of the LNQ2023 challenge data. Furthermore, we integrated additional public data and the TotalSegmentator (Wasserthal et al., 2023) to gain performance improvements. Our main contributions are summarized as follows:

1. Starting from a fully annotated dataset of image volumes, namely the public TCIA CT Lymph Nodes dataset, we implemented different strategies to integrate the additional incomplete pixel-level labeled data, as given in the LNQ2023 challenge, into the training process. Those strategies were noisy label training, loss masking, and wrapping each foreground instance with a background shell.

2. We applied the public toolbox TotalSegmentator to identify anatomical structures and, by exclusion, set those to background class voxels. We refer to this as TotalSegmentator Pseudo Labeling.

3. We explored the effect of integrating different public datasets, namely TCIA CT Lymph Nodes, an annotation-refined version of CT Lymph Nodes, TCIA NSCLC-Radiomics, TCIA NSCLC-Radiogenomics, and TCIA NSCLC-Radiomics-Interobserver, on the downstream performance.

4. Furthermore, we performed experiments on the impact of adding all visible, potentially non-pathological, lymph nodes to the model training on the overall segmentation performance and the performance regarding the lymph node's shortest diameter.

2. Integration of Weakly Annotated Data

In the following section, we present the different strategies to integrate the weakly annotated data into the training process, shown in Figure 1.

Figure 1: Sketch of different strategies to handle weakly annotated data in our analysis. The missed lymph node instance is incorrectly set to the background class in the noisy label training. For loss masking, foreground instance coating, and TotalSegmentator Pseudo Labeling, the missing instance is removed from the training process by loss masking.

We cropped the input CT volumes with the help of the TotalSegmentator, a deep learning-based toolbox capable of segmenting 104 different anatomical structures on CT volumes (Wasserthal et al., 2023). Using the toolbox, we created a bounding box of the lung lobes, to which the CT volume was cropped. The resulting image volume then contained the full mediastinum, leading to improved computational efficiency. As a segmentation network, we used the nnUNet, a fully self-configuring segmentation pipeline (Isensee et al., 2021).
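A minimal sketch of this ROI cropping, assuming the TotalSegmentator lung-lobe output has been merged into a single binary NumPy mask; the margin and the helper name are our assumptions, not part of the toolbox.

```python
import numpy as np

def crop_to_lung_bbox(ct_volume, lung_mask, margin=5):
    """Crop a CT volume to the bounding box of the lung lobes.

    ct_volume, lung_mask: (Z, Y, X) arrays; lung_mask > 0 where any
    lung lobe was segmented by TotalSegmentator. margin (in voxels)
    is our assumption to keep the full mediastinum inside the crop.
    """
    coords = np.argwhere(lung_mask > 0)
    lo = np.maximum(coords.min(axis=0) - margin, 0)
    hi = np.minimum(coords.max(axis=0) + margin + 1, ct_volume.shape)
    slicer = tuple(slice(l, h) for l, h in zip(lo, hi))
    return ct_volume[slicer], slicer
```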
Noisy Label: The unlabeled voxels of the LNQ2023 training set were set to the background class as a form of noisy labels. Consequently, unlabeled LN instances were naively set to background. The foreground only consisted of the expert-annotated foreground instances.

Loss Masking: Another simple strategy to include the weakly labeled image volumes in the training procedure was to mask out regions without class annotation. For such voxels, the loss was set to zero, so that only labeled voxels contributed to the learning process.

Foreground Instance Coating: Given the foreground LN instances in the weakly annotated LNQ2023 training data, we followed the approach of Nguyen et al. (2020) and set voxels neighboring the foreground instances to the background class. We implemented this by running the morphological operator binary dilation. In this way, each foreground component was embedded in a hull of background voxels.

TotalSegmentator Pseudo Labeling: All 104 output structures of the TotalSegmentator, covering organs, vessels, bones, and muscles, should by definition not contain any LNs. We exploited this fact to constrain our problem setting. We utilized the output classes to annotate unlabeled voxels as background if they were classified as foreground by the TotalSegmentator, skipping the expert-annotated LN instances in the LNQ2023 training set during this process. This strategy is especially beneficial in the mediastinum, as the TotalSegmentator covers many known anatomical structures located within the mediastinum. The label-preparation side of these strategies is sketched below.
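The following minimal sketch makes the label preparation concrete; the ignore-label convention (e.g., as supported by nnUNet) and all function details are our assumptions, not the authors' released code.

```python
import numpy as np
from scipy.ndimage import binary_dilation

IGNORE = 2  # ignore label excluded from the loss (e.g., nnUNet ignore label)

def prepare_weak_label(ln_annotation, totalseg_foreground=None, coat=False):
    """Build a training label from an incomplete LN annotation.

    ln_annotation: (Z, Y, X) binary mask of the annotated LN instances.
    totalseg_foreground: optional binary mask of all TotalSegmentator
    structures, used as background pseudo labels.
    """
    label = np.full(ln_annotation.shape, IGNORE, dtype=np.uint8)
    label[ln_annotation > 0] = 1  # known foreground

    if coat:
        # Foreground instance coating: one-voxel background shell
        # around each annotated component.
        shell = binary_dilation(ln_annotation > 0) & (ln_annotation == 0)
        label[shell] = 0

    if totalseg_foreground is not None:
        # TotalSegmentator Pseudo Labeling: known anatomy is background,
        # but never overwrite the expert LN annotations.
        bg = (totalseg_foreground > 0) & (ln_annotation == 0)
        label[bg] = 0

    return label  # the noisy-label variant would instead set IGNORE voxels to 0
```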
Table 1: Lymph node statistics per dataset. A lymph node component is considered enlarged if its shortest diameter is equal to or greater than 10 mm.

Dataset | Volumes | Labeled LNs | Labeled Enlarged LNs | Fully Labeled
LNQ2023 train set | 393 | 558 | 512 | No
LNQ2023 test set | 100 | 845 | 289 | Yes
CT Lymph Nodes | 90 | 294 | 244 | Yes
Bouget Refinements | 90 | 1403 | 414 | Yes
NSCLC datasets | 585 | 0 | 0 | No

Figure 2: Histograms of the shortest diameter of lymph node components in the TCIA CT Lymph Nodes dataset, the refined annotations by Bouget et al. (2019), and the LNQ2023 training and test sets (x-axis: shortest diameter [mm]; y-axis: number of lymph node components).

3. Experiments

In this section, we describe the ablation study we performed to select the building blocks of the final model submission. For this purpose, we added various training components, such as extra training data or training strategy changes, to analyze their contribution to the model's performance on the test set.

3.1 Data

We included three different source datasets for training the segmentation model. The LNQ2023 training data consists of 393 weakly annotated thorax CT volumes of patients suffering from various cancer types, including breast cancer, leukemia, lung cancer, and others. Another dataset used is the TCIA CT Lymph Nodes dataset introduced by Roth et al. (2015). It contains 90 thorax CT volumes with fully annotated mediastinal LNs. Here, only clinically enlarged LN instances are annotated. The TCIA CT Lymph Nodes dataset was refined in the work of Bouget et al. (2019). They updated the existing annotations and integrated all visible LNs, without separating them into pathological or healthy LNs.

Furthermore, to increase the amount of data, we added three different lung cancer datasets, adding up to 585 CT volumes, to the training data. The first is a subset of the NSCLC-Radiogenomics dataset, consisting of 143 thorax CT volumes with tumor delineation (Gevaert et al., 2012; Bakr et al., 2017, 2018; Clark et al., 2013). The second is NSCLC-Radiomics, containing 422 thorax CT volumes with tumor delineation (Aerts et al., 2014a,b; Clark et al., 2013). Additionally, we integrated the NSCLC-Radiomics-Interobserver1 dataset by adding its 20 thorax CT volumes and the tumor delineation of medical expert 1 to the data pool (Wee et al., 2019; Clark et al., 2013; Aerts et al., 2014a). Multiple individuals in the NSCLC datasets developed metastatic LNs, documented in the patients' N-staging.

The LN instances per dataset were generated by a connected component analysis; statistics of the datasets are given in Table 1. Not all LN components in the CT Lymph Nodes dataset are considered enlarged. The Bouget refinements, which are refined annotations of the standard CT Lymph Nodes dataset, contain more than four times as many annotated components and almost twice as many enlarged LN instances. The LNQ2023 training set is not fully annotated, but the number of LNs per CT volume should be similar to the LNQ2023 test set. For the NSCLC datasets, roughly half of the patients suffer from pathological thorax LNs, and at least 171 patients had spread into the mediastinal LNs. The histograms of the shortest LN component diameter per labeled dataset are shown in Figure 2.

3.2 Ablation Study for Integration Strategy of Weakly Annotated Data

We performed an ablation study to evaluate the various training strategy building blocks. The LNQ2023 challenge test set, containing 100 fully annotated samples, was used for evaluation. The Dice score and the average symmetric surface distance (ASSD) were computed for performance assessment. Initially, during the challenge, we used 20 samples of the CT Lymph Nodes dataset for validation during method development; these were therefore omitted from the trained models of the ablation study.

We used the default nnUNet planning and training for all models. All training strategies introduced in Section 2 and various combinations of training data were evaluated on the LNQ2023 test set:

• Model 1: Baseline segmentation model trained on 70 fully annotated samples of the TCIA CT Lymph Nodes dataset. Thus, the training annotations only included enlarged LN components.

• Model 2: Model trained on the combination of the TCIA CT Lymph Nodes and LNQ2023 training sets. Unlabeled voxels were set to the background class, so the model was trained in a noisy label fashion.

• Model 3: Instead of noisy label training, loss masking was applied here. The loss of the unlabeled voxels was set to zero, so that it did not affect the training process.

• Model 4: The LNQ2023 training data was preprocessed with foreground instance coating, with a background margin of one voxel for each hull.

• Model 5: For this model, the TotalSegmentator toolbox was used to reduce the number of unlabeled voxels by TotalSegmentator Pseudo Labeling.

• Model 6: To evaluate the effect of replacing the standard TCIA CT Lymph Nodes annotations with the corresponding Bouget label refinements, a model was trained on this data combination. As the weakly supervised learning strategy, TotalSegmentator Pseudo Labeling was used. The standard TCIA CT Lymph Nodes annotations contain only enlarged LN components, while the Bouget refinements include LN components of all sizes.
3.2 Ablation Study for Integration Strategy of Weakly Annotated Data

We performed an ablation study to evaluate the various training strategy building blocks. The LNQ2023 challenge test set, containing 100 fully annotated samples, was used for evaluation. The Dice score and the average symmetric surface distance (ASSD) were computed for performance assessment. Initially, during the challenge, we used 20 samples of the CT Lymph Nodes dataset for validation of the method development, which we, therefore, omitted in the trained models of the ablation study.

We used default nnUNet planning and training for all models. All different training strategies introduced in Section 2 and various combinations of data for training were evaluated on the LNQ2023 test set:

• Model 1: Baseline segmentation model trained on 70 fully annotated samples of the TCIA CT Lymph Nodes dataset. Thus, training annotations only included enlarged LN components.

• Model 2: Model trained on the data combination of the TCIA CT Lymph Nodes and LNQ2023 training sets. Unlabeled voxels were set to the background class, so the model was trained in a noisy label fashion.

• Model 3: Instead of noisy label training, loss masking was applied here. The loss of the unlabeled voxels was set to zero so that it did not affect the training process.

• Model 4: The LNQ2023 training data was preprocessed with foreground instance coating with a background margin of one voxel for each hull.

• Model 5: For this model, the TotalSegmentator toolbox was used for reducing the number of unlabeled voxels by TotalSegmentator Pseudo Labeling.

• Model 6: To evaluate the effect of replacing the standard TCIA CT Lymph Nodes annotations with the corresponding Bouget label refinements, a model was trained on this data combination. As a weakly supervised learning strategy, TotalSegmentator Pseudo Labeling was used. The standard TCIA CT Lymph Nodes annotations contain only enlarged LN components, while the Bouget refinements include all sizes of LN components.

• Model 7: Finally, the NSCLC data were integrated into the training cohort. A model using TotalSegmentator Pseudo Labeling was trained on the data combination of the LNQ2023 training set, the CT Lymph Nodes data with Bouget refinements, and the NSCLC datasets. The given tumor annotations of the NSCLC datasets were set to the background class.

We evaluated the trained models on the 100 samples of the LNQ2023 test set and computed the Dice score plus ASSD. The results of the experiments are shown in Table 3.

3.3 Performance Analysis regarding Lymph Node Shortest Diameter

To test whether a model solely trained on clinically enlarged pathological LN components was able to segment non-enlarged pathological LN components, we analyzed the predictions of Model 5 and Model 6 regarding their performance for different LN shortest diameters. To assess the performance, we iterated through all predicted LN components and computed the overlap of each predicted LN component with its ground truth annotation. The overlap was normalized by the component's volume, resulting in values between 0 and 1. This was then interpreted as a proxy of the model sensitivity. Thus, we were able to analyze the sensitivity over different LN sizes. We repeated the same assessment, iterated through all annotated ground truth LN components, and computed their overlap with the entire model prediction on the corresponding volume, which was then a proxy for the precision.

4. Results

The following presents an analysis of the preprocessing strategies, results of the ablation study, and an analysis of the segmentation performance on different LN component sizes.

Statistics Per CT Volume   Total Voxels [·10^6]   Labeled-Unlabeled Voxel Ratio [%]
Raw Data                   38.2 ± 24.5            0.02 ± 0.03
+ ROI crop                 7.5 ± 4.1              0.10 ± 0.17
+ TotalSegmentator PL      7.5 ± 4.1              55.05 ± 3.31

Table 2: Voxel statistics of the LNQ2023 training set for each preprocessing step.

Figure 3: Preprocessing steps performed on an LNQ2023 training set example, shown in three orthogonal views. Top: raw input volume with weak lymph node annotation (green), Middle: volume after lung bounding box cropping, Bottom: TotalSegmentator Pseudo Labeling setting known structures to the background (blue).

4.1 Image and Annotation Preprocessing

First, the CT volumes were resampled to a common spacing of [3.0 mm, 0.93 mm, 0.93 mm]. The average number of voxels per CT volume and the ratio between labeled and unlabeled voxels are given in Table 2. The number of voxels per CT volume was reduced from 38.2 · 10^6 to 7.5 · 10^6 with the ROI cropping, leading to an increase in the ratio of labeled to unlabeled voxels from 0.02% to 0.10%. TotalSegmentator Pseudo Labeling further increased the number of labeled voxels, resulting in a ratio of 55.05%. Figure 3 shows the preprocessing steps of one LNQ2023 training set example.
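A sketch of these two preprocessing steps, assuming SimpleITK for the resampling and a lung mask (e.g., from the TotalSegmentator) for the ROI crop; margins and names are illustrative.

```python
import numpy as np
import SimpleITK as sitk

# SimpleITK orders spacing as (x, y, z); the paper's [3.0, 0.93, 0.93] mm is
# assumed here to be (z, y, x), i.e., 3.0 mm slice thickness.
TARGET_SPACING = (0.93, 0.93, 3.0)

def resample_to_spacing(image, spacing=TARGET_SPACING, interp=sitk.sitkLinear):
    size = [int(round(sz * old / new)) for sz, old, new
            in zip(image.GetSize(), image.GetSpacing(), spacing)]
    return sitk.Resample(image, size, sitk.Transform(), interp,
                         image.GetOrigin(), spacing, image.GetDirection(),
                         0, image.GetPixelID())

def crop_to_lung_roi(volume, lung_mask, margin=2):
    # Crop to the bounding box of the lung mask, padded by a small voxel margin.
    coords = np.argwhere(lung_mask)
    lo = np.maximum(coords.min(axis=0) - margin, 0)
    hi = np.minimum(coords.max(axis=0) + margin + 1, volume.shape)
    return volume[tuple(slice(l, h) for l, h in zip(lo, hi))]
```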
4.2 Ablation Study for Integration Strategy of Weakly Annotated Data

The results of the ablation study are presented in Table 3. Model 1, only trained on the fully annotated CT Lymph Nodes dataset containing only enlarged LN components, achieved a low Dice score of 0.172 and an ASSD of 48.95 mm on average.

Figure 4: Different cases of the LNQ2023 test set with ground truth annotation and model predictions. For intuitive visualization the trachea is shown in blue, the model prediction in yellow, and the ground truth in green. For inference Model 7 was used. Left: worst case (Dice score 0.108, ASSD 19.2 mm), Center: average case (Dice score 0.626, ASSD 5.65 mm), Right: best case (Dice score 0.860, ASSD 2.38 mm)

Model                                    Dice Score ↑     ASSD [mm] ↓
Model 1: CT Lymph Nodes dataset          0.172 ± 0.182    48.95 ± 64.49
Integration of LNQ2023 training samples:
Model 2: Noisy Label Strategy            0.343 ± 0.201    18.46 ± 23.86
Model 3: Loss Masking Strategy           0.552 ± 0.200    9.37 ± 9.79
Model 4: Foreground Instance Coating     0.548 ± 0.219    12.19 ± 17.62
Model 5: TotalSegmentator PL             0.601 ± 0.173    7.08 ± 7.54
Replace Annotations/Add data:
Model 6: Bouget Label Refinements        0.665 ± 0.143    4.47 ± 4.67
Model 7: Integration of NSCLC data       0.663 ± 0.136    3.97 ± 2.83

Table 3: Performance of the lymph node segmentation model with different strategies to handle the weakly annotated data and additional training data. Models were tested on the 100 fully annotated samples of the LNQ2023 test set. Best achieved scores are highlighted in bold. (↑: higher is better, ↓: lower is better)

Integration of the weakly annotated challenge training data improved the results overall for all applied weak annotation handling strategies. The worst performance was the noisy label training, with a Dice score of 0.343 and an ASSD of 18.46 mm. Loss masking and its variants, foreground instance coating and TotalSegmentator Pseudo Labeling, outperformed the noisy labels. Foreground instance coating did decrease the performance compared to raw loss masking in both metrics. The best weak annotation handling strategy was TotalSegmentator Pseudo Labeling, with a Dice score of 0.601 and an ASSD of 7.08 mm.

Figure 5: Overlap of each ground truth lymph node component with the prediction, and overlap of each predicted lymph node component with all ground truth lymph node components, over the shortest diameter. Lymph node components were binned regarding shortest diameter in 2.5 mm steps. Model predictions were generated by Model 5 (green) and Model 6 (blue).

Changing the standard CT Lymph Nodes annotation to the Bouget refinements improved the performance. Important to note is that the Bouget refinements are annotations of all visible LNs. This gave an increase to a Dice score of 0.665 and an ASSD of 4.47 mm with the TotalSegmentator Pseudo Labeling. Furthermore, a model trained with the additional NSCLC datasets did have a similar Dice score and a slightly better ASSD. The significance of the segmentation performance is analysed with a non-parametric paired Wilcoxon signed-rank test in Appendix A. Predictions of Model 7 on three different test set cases of the LNQ2023 are shown in Figure 4.
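The per-case significance analysis mentioned above could be reproduced with SciPy's paired Wilcoxon signed-rank test; the arrays below are assumed to hold per-case scores of two models on the same 100 test volumes.

```python
from scipy.stats import wilcoxon

def compare_models(scores_a, scores_b):
    """Paired non-parametric Wilcoxon signed-rank test on per-case scores
    (e.g., Dice or ASSD); a small p-value indicates that the two models
    differ significantly on the shared test cases."""
    statistic, p_value = wilcoxon(scores_a, scores_b)
    return statistic, p_value

# Example usage with hypothetical per-case Dice scores:
# stat, p = compare_models(dice_model_5, dice_model_6)
```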
4.3 Performance Analysis regarding Lymph Node Shortest Diameter

In Figure 5, the overlap of each ground truth LN component with the prediction components, and the overlap of each predicted LN component with the ground truth components, are plotted. Model 6, which was trained on LN components of all sizes, achieved a better overlap between a ground truth LN component and the full model prediction on average for all different LN shortest diameters. Model 5 detected far fewer LN instances shorter than 10 mm in diameter than Model 6. The overlap between predicted LN instances and the complete ground truth annotations behaves similarly, while for small LN instances, Model 5 indicates a higher precision.

5. Challenge Submission

Our submitted challenge model, trained via TotalSegmentator Pseudo Labeling on the combination of the Bouget-refined CT Lymph Nodes data, the LNQ2023 training set, and the TCIA NSCLC datasets, ranked third place in the MICCAI LNQ2023 challenge. The model performed slightly worse on the test set than the developed models in the ablation study of Subsection 3.2. We discuss reasons for this difference in Section 6. The nnUNet instance of the submission follows all the findings from the ablation study and refers to Model 7 of the ablation study.

Authors (Team Name)                          Dice Score ↑     ASSD [mm] ↓
Rank 1: Deissler et al. (Skeleton Suns)      0.674 ± 0.165    4.5 ± 4.7
Rank 2: Zhang et al. (IMR)                   0.665 ± 0.163    5.4 ± 4.3
Rank 3: Fischer et al. (CompAI)              0.628 ± 0.193    5.8 ± 3.6
Rank 4: Kondo et al. (HiLab)                 0.603 ± 0.141    8.2 ± 11.8
Rank 5: Engelson et al. (sofija engelson)    0.569 ± 0.185    6.9 ± 4.0
CompAI (ours) without Postprocessing         0.660 ± 0.150    5.35 ± 10.71
Model 7                                      0.663 ± 0.136    3.97 ± 2.83

Table 4: Performance of the top five performing team models on the 100 samples from the LNQ2023 test set. Additionally, the performance of our submitted model without the removal of small LN components is given, as well as Model 7 from the ablation study. Best achieved scores are highlighted in bold.

For the challenge submission, the default nnUNet normalization scheme was replaced by intensity clipping to [−150, 350], inspired by the work of Bouget et al. (2019), and intensity standardization. Furthermore, the challenge submission model was trained with a learning rate of 1e−3 instead of the default 1e−2 in order to stabilize the training process. Thus, we increased the number of epochs from the default 1000 epochs to 2000 epochs. To train one nnUNet instance, the full available data were used, also integrating the 20 samples of CT Lymph Nodes originally held out for validation. The prediction was postprocessed by a connected component analysis for the challenge submission. Predicted LN components with a short diameter smaller than 9.5 mm were removed from the prediction so that only LN components that are considered enlarged are kept. The ranking of the challenge participating teams is given in Table 4. Our submission was ranked third place in the Lymph Node Quantification Challenge 2023 with a Dice score of 0.628 and an ASSD of 5.8 mm. Model 7 of our ablation study would have achieved third place in the Dice score while having the lowest standard deviation. Furthermore, it would have scored the best ASSD among all challenge submission models.
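The submission postprocessing, removing predicted components below the enlargement threshold, could look as follows, reusing the inscribed-sphere diameter proxy from the earlier sketch in Section 3.1; the exact diameter measurement used for the submission may differ.

```python
import numpy as np
from scipy.ndimage import label, distance_transform_edt

def remove_small_components(prediction, spacing, min_diameter_mm=9.5):
    # Keep only predicted LN components whose approximate shortest diameter
    # reaches the enlargement threshold; everything else is discarded.
    components, n = label(prediction)
    kept = np.zeros_like(prediction)
    for idx in range(1, n + 1):
        comp = components == idx
        diameter = 2.0 * distance_transform_edt(comp, sampling=spacing).max()
        if diameter >= min_diameter_mm:
            kept[comp] = 1
    return kept
```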
6. Discussion

For the challenge, only semantic segmentation maps have been provided, while the CT Lymph Nodes dataset comes with instance segmentation annotations. Therefore, the original annotations of the CT Lymph Nodes were interpreted as semantic segmentation maps, and LN components were created by a connected component analysis. Thus, there is a difference in the reported LN instances in the work of Roth et al. (2015) compared to ours.

In Table 1, the number of LN components is provided for each dataset. For the CT Lymph Nodes dataset, which should only contain enlarged LNs, a subset of non-enlarged LNs is also included. An explanation for this trend is that clinicians follow the RECIST guidelines and thus only consider axial slice directions. For the LNQ2023 training set, the same trend holds but is less prominent. This might originate from the weak annotating done by the physicians, probably resulting in a bias toward the larger pathological LNs per image volume. Another related surprising finding was the difference in enlarged LN instances between CT Lymph Nodes and the Bouget refinements. There might be a bias originating from the intention of annotating all visible LNs in the refinements, while the clinicians searched only for enlarged LNs via the RECIST criteria in the standard dataset. Another factor is that each LN of a neighboring LN cluster might be considered healthy, while annotating all of them and processing the cluster with a connected component analysis can result in an enlarged LN component.

The TCIA CT Lymph Nodes dataset was essential for the development of the proposed method, as it provided fully annotated training cases. A possible solution to solve the task with only the weakly annotated challenge data is to follow the work of Nguyen et al. (2020), which refers to foreground instance coating. By using TotalSegmentator Pseudo Labeling, the performance in this scenario would improve. Another approach would be noisy label training. We did not perform any experiments on the challenge data alone.

In our scenario, the noisy label training improved the performance compared to only using CT Lymph Nodes. The integration of the LNQ2023 training set outweighed the effect of false negatives generated from unlabeled LN instances. We hypothesize the reason for the failure of the foreground coating is the difficulty of labeling LNs in a binary manner, as LNs are known to be confluent and to lack sharp intensity drops at their boundaries. Thus, the instance coating might generate a lot of false negatives. Increasing the margin of the coating might reduce this issue and lead to performance gains. Furthermore, the presence of bulky lymph nodes in which only one LN instance was annotated will result in false negative annotations.

TotalSegmentator Pseudo Labeling offers limited generalization to other tasks, as it is only applicable to the CT modality and its coverage differs across body regions. Furthermore, there is also a small overlap between LN voxels and TotalSegmentator structures, leading to false negatives. There might be better structure subsets of the TotalSegmentator for the LN segmentation task.

The segmentation models benefit from integrating the Bouget refinements, as also reported in the work of Bouget et al. (2023). We support their hypothesis that the inclusion of all visible LNs is an efficient form of data augmentation, integrating possible LN locations. Another aspect is the incorporation of non-enlarged LNs into the training, which is essential as models trained only on enlarged LNs do not generalize well to small LNs. We hypothesize that the better ASSD performance and more stable Dice scores when including the NSCLC data originate from the higher number of training samples and the learning of lung cancer anomalies as background.
Extending the method with a semi-supervised learning component, in the form of TotalSegmentator Pseudo Labeling, was shown to improve the model. Additionally, pseudo labeling of the remaining unlabeled voxels, as described in the work of Huang et al. (2022), might further increase the performance.

Until the final challenge submission, the goal of the challenge, that all pathological lymph nodes should be segmented, was ambiguous to us. Our intention was to segment only clinically enlarged lymph nodes. Thus, we introduced the postprocessing of filtering the segmentation regarding LN enlargement. This postprocessing led to a lower performance. Nevertheless, we were still able to achieve third place in the final challenge ranking.

In this work, different strategies to handle weak annotations were proposed that significantly improved the performance on the task of mediastinal pathological lymph node segmentation. The usage of the TotalSegmentator was highly beneficial in our case, both for ROI cropping and as a network providing informative pseudo labels. Different datasets were integrated into the training and were fundamental for the submission model. One important finding is that the integration of non-pathological lymph nodes also aided our task of pathological lymph node segmentation.

Acknowledgments

Stefan Fischer has received funding by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) – 515279324 / SPP 2177. Johannes Kiechle was supported by the DAAD programme Konrad Zuse School of Excellence in reliable Artificial Intelligence (relAI), sponsored by the Federal Ministry of Education and Research.

Ethical Standards

The work follows appropriate ethical standards in conducting research and writing the manuscript, following all applicable laws and regulations regarding treatment of animals or human subjects.

Conflicts of Interest

We declare we do not have conflicts of interest.

Data availability

All experiments and models are performed and trained on publicly available data. The LNQ2023 challenge data will be published by the challenge hosts in the future as part of TCIA. The Bouget refinements are accessible via the Google Drive links at https://github.com/dbouget/ct_mediastinal_structures_segmentation.

References

Hugo J. W. L. Aerts, Emmanuel Rios Velazquez, Ralph T. H. Leijenaar, Chintan Parmar, Patrick Grossmann, Sara Carvalho, Johan Bussink, René Monshouwer, Benjamin Haibe-Kains, Derek Rietveld, Frank Hoebers, Michelle M. Rietbergen, C. René Leemans, Andre Dekker, John Quackenbush, Robert J. Gillies, and Philippe Lambin. Decoding tumour phenotype by noninvasive imaging using a quantitative radiomics approach. Nature Communications, 5(1):4006, 2014a.

Hugo J. W. L. Aerts, Emmanuel Rios Velazquez, Ralph T. H. Leijenaar, Chintan Parmar, Patrick Grossmann, Sara Carvalho, Johan Bussink, René Monshouwer, Benjamin Haibe-Kains, Derek Rietveld, Frank Hoebers, Michelle M. Rietbergen, C. René Leemans, Andre Dekker, John Quackenbush, Robert J. Gillies, and Philippe Lambin. Data From NSCLC-Radiomics (version 4) [Data set], 2014b. URL https://wiki.cancerimagingarchive.net/display/Public/NSCLC-Radiomics.

Shaimaa Bakr, Olivier Gevaert, Sebastian Echegaray, Kelsey Ayers, Mu Zhou, Majid Shafiq, Hong Zheng, Weiruo Zhang, Ann Leung, Michael Kadoch, Joseph Shrager, Andrew Quon, Daniel L. Rubin, Sylvia K. Plevritis, and Sandy Napel. Data for NSCLC Radiogenomics (Version 4) [Data set], 2017.
URL https://wiki.cancerimagingarchive.net/display/Public/NSCLC+Radiogenomics.

Shaimaa Bakr, Olivier Gevaert, Sebastian Echegaray, Kelsey Ayers, Mu Zhou, Majid Shafiq, Hong Zheng, Weiruo Zhang, Ann Leung, Michael Kadoch, Joseph Shrager, Andrew Quon, Daniel L. Rubin, Sylvia K. Plevritis, and Sandy Napel. A radiogenomic dataset of non-small cell lung cancer. Scientific Data, 5(1):1–9, 2018.

David Bouget, Arve Jørgensen, Gabriel Kiss, Haakon O. Leira, and Thomas Langø. Semantic segmentation and detection of mediastinal lymph nodes and anatomical structures in CT data for lung cancer staging. International Journal of Computer Assisted Radiology and Surgery, 14(6):977–986, 2019.

David Bouget, André Pedersen, Johanna Vanel, Haakon O. Leira, and Thomas Langø. Mediastinal lymph nodes segmentation using 3D convolutional neural network ensembles and anatomical priors guiding. Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization, 11(1):44–58, 2023.

Olivier Chapet, Feng-Ming Kong, Leslie E. Quint, Andrew C. Chang, Randall K. Ten Haken, Avraham Eisbruch, and James A. Hayman. CT-based definition of thoracic lymph node stations: an atlas from the University of Michigan. International Journal of Radiation Oncology - Biology - Physics, 63(1):170–178, 2005.

Kenneth Clark, Bruce Vendt, Kirk Smith, John Freymann, Justin Kirby, Paul Koppel, Stephen Moore, Stanley Phillips, David Maffitt, Michael Pringle, Lawrence Tarbox, and Fred Prior. The Cancer Imaging Archive (TCIA): maintaining and operating a public information repository. Journal of Digital Imaging, 26:1045–1057, 2013.

Nanqing Dong, Michael Kampffmeyer, Xiaodan Liang, Min Xu, Irina Voiculescu, and Eric Xing. Towards robust partially supervised multi-structure medical image segmentation on small-scale data. Applied Soft Computing, 114:108074, 2022.

E. A. Eisenhauer, P. Therasse, J. Bogaerts, L. H. Schwartz, D. Sargent, R. Ford, J. Dancey, S. Arbuck, S. Gwyther, M. Mooney, L. Rubinstein, L. Shankar, L. Dodd, R. Kaplan, D. Lacombe, and J. Verweij. New response evaluation criteria in solid tumours: revised RECIST guideline (version 1.1). European Journal of Cancer, 45(2):228–247, 2009.

Olivier Gevaert, Jiajing Xu, Chuong D. Hoang, Ann N. Leung, Yue Xu, Andrew Quon, Daniel L. Rubin, Sandy Napel, and Sylvia K. Plevritis. Non–small cell lung cancer: Identifying prognostic imaging biomarkers by leveraging public gene expression microarray data—methods and preliminary results. Radiology, 264(2):387–396, 2012.

Dazhou Guo, Jia Ge, Ke Yan, Puyang Wang, Zhuotun Zhu, Dandan Zheng, Xian-Sheng Hua, Le Lu, Tsung-Ying Ho, Xianghua Ye, and Dakai Jin. Thoracic lymph node segmentation in CT imaging via lymph node station stratification and size encoding. In International Conference on Medical Image Computing and Computer-Assisted Intervention–MICCAI, volume 13435 of Lecture Notes in Computer Science. Springer, 2022.

Ziyan Huang, Haoyu Wang, Jin Ye, Jingqi Niu, Can Tu, Yuncheng Yang, Shiyi Du, Zhongying Deng, Lixu Gu, and Junjun He. Revisiting nnU-Net for iterative pseudo labeling and efficient sliding window inference. In MICCAI Challenge on Fast and Low-Resource Semi-supervised Abdominal Organ Segmentation held in conjunction with MICCAI 2022. Springer, 2022.

Fabian Isensee, Paul F. Jaeger, Simon A. A. Kohl, Jens Petersen, and Klaus H. Maier-Hein. nnU-Net: a self-configuring method for deep learning-based biomedical image segmentation. Nature Methods, 18(2):203–211, 2021.
Andra-Iza Iuga, Heike Carolus, Anna J. Höink, Tom Brosch, Tobias Klinder, David Maintz, Thorsten Persigehl, Bettina Baeßler, and Michael Püsken. Automated detection and segmentation of thoracic lymph nodes from CT using 3D foveal fully convolutional neural networks. BMC Medical Imaging, 21(1):1–12, 2021.

Jana Kemnitz, Christian F. Baumgartner, Wolfgang Wirth, Felix Eckstein, Sebastian K. Eder, and Ender Konukoglu. Combining heterogeneously labeled datasets for training segmentation networks. In Machine Learning in Medical Imaging: 9th International Workshop, MLMI 2018, Held in Conjunction with MICCAI 2018. Springer, 2018.

Roya Khajavibajestani, Steve Pieper, Erik Ziegler, Tagwa Idris, Reuben Dorent, Bhanusupriya Somarouthu, Sonia Pujol, Ann LaCasce, Heather Jacene, Gordon Harris, and Ron Kikinis. Mediastinal Lymph Node Quantification (LNQ): Segmentation of Heterogeneous CT Data, 2023. URL https://doi.org/10.5281/zenodo.7844666.

Jiamin Liu, Jocelyn Zhao, Joanne Hoffman, Jianhua Yao, Weidong Zhang, Evrim B. Turkbey, Shijun Wang, Christine Kim, and Ronald M. Summers. Mediastinal lymph node detection on thoracic CT scans using spatial prior from multi-atlas label fusion. In Medical Imaging 2014: Computer-Aided Diagnosis. SPIE, 2014.

Jiamin Liu, Joanne Hoffman, Jocelyn Zhao, Jianhua Yao, Le Lu, Lauren Kim, Evrim B. Turkbey, and Ronald M. Summers. Mediastinal lymph node detection and station mapping on chest CT using spatial priors and random forest. Medical Physics, 43(7):4362–4374, 2016.

Nhu-Van Nguyen, Christophe Rigaud, Arnaud Revel, and Jean-Christophe Burie. A learning approach with incomplete pixel-level labels for deep neural networks. Neural Networks, 130:111–125, 2020.

Isabella Nogues, Le Lu, Xiaosong Wang, Holger Roth, Gedas Bertasius, Nathan Lay, Jianbo Shi, Yohannes Tsehay, and Ronald M. Summers. Automatic lymph node cluster segmentation using holistically-nested neural networks and structured optimization in CT images. In International Conference on Medical Image Computing and Computer-Assisted Intervention–MICCAI, volume 9901 of Lecture Notes in Computer Science. Springer, 2016.

Hirohisa Oda, Kanwal K. Bhatia, Masahiro Oda, Takayuki Kitasaka, Shingo Iwano, Hirotoshi Homma, Hirotsugu Takabatake, Masaki Mori, Hiroshi Natori, Julia A. Schnabel, and Kensaku Mori. Automated mediastinal lymph node detection from CT volumes based on intensity targeted radial structure tensor analysis. Journal of Medical Imaging, 4(4):044502, 2017a.

Hirohisa Oda, Kanwal K. Bhatia, Masahiro Oda, Takayuki Kitasaka, Shingo Iwano, Hirotoshi Homma, Hirotsugu Takabatake, Masaki Mori, Hiroshi Natori, Julia A. Schnabel, and Kensaku Mori. Hessian-assisted supervoxel: Structure-oriented voxel clustering and application to mediastinal lymph node detection from CT volumes. In Medical Imaging 2017: Computer-Aided Diagnosis. SPIE, 2017b.

Hirohisa Oda, Holger R. Roth, Kanwal K. Bhatia, Masahiro Oda, Takayuki Kitasaka, Shingo Iwano, Hirotoshi Homma, Hirotsugu Takabatake, Masaki Mori, Hiroshi Natori, et al. Dense volumetric detection and segmentation of mediastinal lymph nodes in chest CT images. In Medical Imaging 2018: Computer-Aided Diagnosis. SPIE, 2018.

Olivier Petit, Nicolas Thome, Arnaud Charnoz, Alexandre Hostettler, and Luc Soler. Handling missing annotations for semantic segmentation with deep convnets.
In Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support: 4th International Workshop, DLMIA 2018, and 8th International Workshop, ML-CDS 2018, Held in Conjunction with MICCAI 2018. Springer, 2018.

Holger R. Roth, Le Lu, Ari Seff, Kevin M. Cherry, Joanne Hoffman, Shijun Wang, Jiamin Liu, Evrim Turkbey, and Ronald M. Summers. A new 2.5D representation for lymph node detection using random sets of deep convolutional neural network observations. In Medical Image Computing and Computer-Assisted Intervention – MICCAI, volume 8673 of Lecture Notes in Computer Science. Springer, 2014.

Holger R. Roth, Le Lu, Ari Seff, Kevin M. Cherry, Joanne Hoffman, Shijun Wang, Jiamin Liu, Evrim Turkbey, and Ronald M. Summers. A new 2.5D representation for lymph node detection in CT [Data set], 2015. URL https://www.cancerimagingarchive.net/collection/ct-lymph-nodes/.

Valerie W. Rusch, Hisao Asamura, Hirokazu Watanabe, Dorothy J. Giroux, Ramon Rami-Porta, and Peter Goldstraw. The IASLC lung cancer staging project: a proposal for a new international lymph node map in the forthcoming seventh edition of the TNM classification for lung cancer. Journal of Thoracic Oncology, 4(5):568–577, 2009.

Gonglei Shi, Li Xiao, Yang Chen, and S. Kevin Zhou. Marginal loss and exclusion loss for partially supervised multi-organ segmentation. Medical Image Analysis, 70:101979, 2021.

Constantin Ulrich, Fabian Isensee, Tassilo Wald, Maximilian Zenk, Michael Baumgartner, and Klaus H. Maier-Hein. MultiTalent: A multi-dataset approach to medical image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention–MICCAI, volume 14222 of Lecture Notes in Computer Science. Springer, 2023.

Jakob Wasserthal, Hans-Christian Breit, Manfred T. Meyer, Maurice Pradella, Daniel Hinck, Alexander W. Sauter, Tobias Heye, Daniel T. Boll, Joshy Cyriac, Shan Yang, Michael Bach, and Martin Segeroth. TotalSegmentator: Robust segmentation of 104 anatomic structures in CT images. Radiology: Artificial Intelligence, 5(5), 2023.

Leonard Wee, Hugo J. W. L. Aerts, Petros Kalendralis, and Andre Dekker. Data from NSCLC-Radiomics-Interobserver1 [Data set], 2019. URL https://wiki.cancerimagingarchive.net/display/Public/NSCLC-Radiomics-Interobserver1.

Ke Yan, Dakai Jin, Dazhou Guo, Minfeng Xu, Na Shen, Xian-Sheng Hua, Xianghua Ye, and Le Lu. Anatomy-aware lymph node detection in chest CT using implicit station stratification. In International Conference on Medical Image Computing and Computer-Assisted Intervention, volume 14394 of Lecture Notes in Computer Science. Springer, 2023.

Yuyin Zhou, Zhe Li, Song Bai, Chong Wang, Xinlei Chen, Mei Han, Elliot Fishman, and Alan L. Yuille. Prior-aware neural network for partially-supervised multi-organ segmentation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 2019.

Appendix A. Significance Testing on Ablation Study

For the ablation study of Subsection 3.2, p-values are computed via the paired sample non-parametric Wilcoxon signed-rank test. The results are given in Table 5 and Table 6.
Table 5: P-values of the non-parametric paired Wilcoxon signed-rank test of the null hypothesis for the Dice score (pairwise comparisons of Models 1–7).

Table 6: P-values of the non-parametric paired Wilcoxon signed-rank test of the null hypothesis for the ASSD (pairwise comparisons of Models 1–7).
International Design Engineering Technical Conferences and Computers and Information in Engineering Conference
Proceedings of the ASME 2024 IDETC/CIE2024
August 25-28, 2024, Washington, DC
DETC2024-143166

AUTOTRIZ: ARTIFICIAL IDEATION WITH TRIZ AND LARGE LANGUAGE MODELS

Shuo Jiang, Singapore University of Technology and Design, Singapore, [email protected]
Jianxi Luo, Department of Systems Engineering, City University of Hong Kong, Hong Kong, [email protected]

ABSTRACT

Researchers and innovators have made enormous efforts in developing ideation methods, such as morphological analysis and design-by-analogy, to aid engineering design ideation for problem solving and innovation. Among these, the Theory of Inventive Problem Solving (TRIZ) stands out as one of the most well-known approaches, widely applied for systematic innovation. However, the complexity of TRIZ resources and concepts, coupled with its reliance on users' knowledge, experience, and reasoning capabilities, limits its practicality. Therefore, we explore the recent advances of large language models (LLMs) for a generative approach to bridge this gap. This paper proposes AutoTRIZ, an artificial ideation tool that uses LLMs to automate and enhance the TRIZ methodology. By leveraging the broad knowledge and advanced reasoning capabilities of LLMs, AutoTRIZ offers a novel approach for design automation and interpretable ideation with artificial intelligence. AutoTRIZ takes a problem statement from the user as its initial input, and automatically generates a solution report after the reasoning process. We demonstrate and evaluate the effectiveness of AutoTRIZ through consistency experiments in contradiction detection, and a case study comparing solutions generated by AutoTRIZ with the experts' analyses from the textbook. Moreover, the proposed LLM-based framework holds the potential for extension to automate other knowledge-based ideation methods, including SCAMPER, Design Heuristics, and Design-by-Analogy, paving the way for a new era of artificial ideation for design innovation.

Keywords: Innovation, Design Ideation, Problem Solving, TRIZ, Large Language Models, Artificial Intelligence

1. INTRODUCTION

Intuitive or structured ideation methods such as brainstorming, morphological analysis, and mind-mapping [1–3] have been used to aid creative ideation of human designers for concept generation. Among these, the Theory of Inventive Problem Solving (TRIZ) [4] stands out as one of the most well-known approaches, widely applied for systematic innovation. TRIZ is a knowledge-based ideation methodology that provides a structured framework for engineering problem solving by identifying and overcoming technical contradictions using inventive principles derived from a large-scale patent database. However, the complexity of TRIZ resources and concepts poses significant cognitive challenges to effectively learning and applying it. In addition, the problem-solving process in TRIZ is highly dependent on the reasoning capabilities of human users. While some researchers have employed natural language processing and machine learning techniques to support certain steps within TRIZ [5–7], the effectiveness still depends heavily on the users' proficiency with TRIZ.

Large Language Models (LLMs) such as OpenAI's GPT [8] and Meta's Llama [9] have not only acquired broad knowledge but also developed emergent abilities such as in-context learning [10], instruction following [10], and step-by-step reasoning [10].
These capabilities have been applied across various domains, including medicine [11], chemistry [12], and mathematics [13]. Recently, researchers have evaluated the capabilities of LLMs in engineering-related tasks [14,15] and reported the extensive engineering knowledge within these models as well as their wide applicability in engineering design and manufacturing. In terms of engineering problem solving and idea generation, there has been preliminary exploration using LLMs [16–19]. However, the lack of transparency and limited control over reasoning steps during ideation often leads to divergent results, requiring multiple heuristic attempts by users to achieve desired outcomes, which places significant demands on their domain-specific expertise. Besides, the interpretability of generated concepts remains challenging, as users obtain only the final results without understanding the ideation reasoning process.

In this work, we aim to leverage the broad knowledge and advanced reasoning capabilities of LLMs to automate the TRIZ method, showcasing the potential of LLMs in design automation and interpretable innovation. We have developed an LLM-based tool, AutoTRIZ (www.autotriz.ai), capable of intelligent artificial ideation for problem solving with TRIZ-based interpretability. AutoTRIZ begins with a problem statement from the user and automatically generates a report that includes multiple solutions, strictly following the TRIZ thinking flow and reasoning process. In this paper, we also evaluate the effectiveness and performance of AutoTRIZ through quantitative comparison, as well as case studies involving human uses of TRIZ from TRIZ textbooks.

2. RELATED WORK

2.1 TRIZ

TRIZ is a knowledge-based systematic approach of inventive problem solving, developed in the 1960s by Genrich S. Altshuller and his colleagues [4]. Through a thorough analysis of over 40,000 patents, Altshuller and his collaborators identified repeated patterns of innovation and underlying innovative principles within these documents. By inductively analyzing these patterns, they proposed a comprehensive problem-solving framework, applying selected inventive principles for ideation. Since then, TRIZ has been developed continually, and some modern TRIZ databases rely on the analysis of over 2 million patents. It has been widely applied in industries, research, and education with notable influence in many fields, such as the energy, electrical, and automotive industries, and mechanical engineering [20].

The TRIZ toolkit contains a series of theories and tools that cover all aspects of problem understanding and solving, including the trimming method, evolution trends, and 76 standard solutions [4]. In this paper, we focus on the best-known tool, the Method of Inventive Principles, which represents the basic reasoning logic behind TRIZ. Figure 1 shows the overview of its framework (adapted from [21]), which contains four steps:

(1) Identify the specific problem.
(2) Transform the specific problem into a general problem by identifying physical contradictions. The contradictions involve an improving feature and a worsening feature. These features are drawn from Altshuller's 39 engineering parameters.
(3) Search for selected inventive principles from the contradiction matrix using the identified contradictions. The contradiction matrix is organized in the form of 39 improving features and 39 worsening features (a 39 by 39 matrix), with each cell entry listing the most often used principles (from TRIZ's 40 inventive principles) that may be used to solve the problem.
(4) Use the selected principles to generate solutions to the problem.
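In code, step (3) amounts to a plain table lookup. The sketch below illustrates the idea with a single cell, the (Speed, Stability of the object's composition) pair that appears in the case study later in this paper; a complete implementation would store all 39 × 39 cells.

```python
PRINCIPLE_TITLES = {
    1: "Segmentation",
    18: "Mechanical Vibration",
    28: "Mechanical Substitution",
    33: "Homogeneity",
}

# Keys are (improving, worsening) engineering parameter indexes; values are
# the recommended inventive principle indexes. Only the cell for
# (9: Speed, 13: Stability of the object's composition) is shown here.
CONTRADICTION_MATRIX = {
    (9, 13): [1, 28, 33, 18],
}

def lookup_principles(improving, worsening):
    indexes = CONTRADICTION_MATRIX[(improving, worsening)]
    return [(i, PRINCIPLE_TITLES[i]) for i in indexes]

print(lookup_principles(9, 13))
# [(1, 'Segmentation'), (28, 'Mechanical Substitution'), ...]
```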
Although TRIZ has demonstrated its effectiveness, it still suffers from drawbacks that hinder its practical applications. For instance, the complexity of TRIZ resources and concepts poses cognitive challenges to effectively learning and applying it, particularly for non-experts. Additionally, the efficacy of TRIZ is heavily constrained by the users' reasoning capabilities and prior knowledge already acquired.

FIGURE 1: Four steps for problem solving using TRIZ

Recent advancements in machine learning and natural language processing have been applied in conjunction with TRIZ [5,7,22]. These efforts aim to automate the TRIZ reasoning process, thereby reducing the difficulty of use. For instance, Cascini and Russo [5] developed the PAT-ANALYZER system that can analyze patent texts and automatically extract the contradictory information underlying the innovation for the use of TRIZ. Similarly, Guarino et al. [7] proposed PaTRIZ, combining Bidirectional Encoder Representations from Transformers (BERT) and Conditional Random Fields (CRF) for word-level patent analysis and TRIZ contradiction mining. Li et al. [22] proposed an approach that leverages natural language processing techniques to assess patent innovations according to the level of invention as defined in TRIZ. Berdyugina and Cavallucci [23] proposed a methodology for the automatic extraction of inventive information from texts for formulating an inventive problem into TRIZ engineering parameters. Their method combined a series of text-mining techniques, including topic modeling, word embedding, and clustering. Hall et al. [6] proposed an approach that uses topic modeling and unsupervised machine learning to map TRIZ inventive principles to individual patents and detect the novelty. However, most of these works focus on utilizing algorithms to improve specific steps of the TRIZ process. They still require innovators to dedicate much time and effort to extensive reasoning. Employing these methods does not directly assist users throughout the entire process, from analyzing a problem to creating practical solutions. In this paper, we aim to harness LLMs to automate the entire TRIZ reasoning process and minimize the cognitive requirements for users during its application.

FIGURE 2: The framework of AutoTRIZ

2.2 Large Language Models for Design and Innovation

Over the past years, many data-driven approaches have utilized machine learning and deep learning techniques to augment design and innovation [24,25]. Evolved from deep learning and pre-trained language models, LLMs typically refer to Transformer-based models that contain hundreds of billions of parameters for processing and generating natural language texts [10]. They are trained on extremely large-scale corpora, enabling them to acquire a wide range of knowledge and capabilities, including understanding context, generating coherent text, and step-by-step reasoning [10]. Some research has already explored the application of LLMs in engineering innovation within specific fields, including the design of microfluidic devices [26], robotics [27], and the user interface of webpages [28].
However, most of these early efforts primarily utilize conversational interactions, such as those facilitated by the ChatGPT interface [8], to engage in the innovation process. Meanwhile, with the development of LLMs, there has been an increase in efforts to create LLM-driven methods and tools to offer more generalized innovation assistance and directly support users in rapid ideation.

For instance, several studies have harnessed LLMs for processing vast amounts of design documentation, representing designs in specific forms, and identifying user needs for product development [16,17,29]. Han et al. [17] introduced an LLM-based attribute-sentiment-guided summarization model to extract user needs from online product reviews. Qiu et al. [29] applied a transformer-based language model to distill design-related knowledge from extensive reports and documents. Moreover, Wang et al. [16] utilized LLMs to decompose conceptual design tasks into Function-Behavior-Structure (FBS) formats, assisting users in ideation across different aspects.

Recent studies have developed tools and methodologies utilizing LLMs to aid the design process, enhance human-computer collaborative innovation, or directly produce innovative concepts for users [18,19,30,31]. Ding et al. [31] conducted a systematic exploration of LLMs' potential to boost cross-domain analogical creativity. Huang et al. [30] proposed CausalMapper, a system that combines LLMs with causal mapping to reason about the connections between problems and solutions. Ma et al. [32,33] evaluated the differences between LLM-generated and crowdsourced design solutions through multiple perspectives, including human expert evaluations and computational metrics. Zhu and Luo [19] presented GPT-based models with domain-specific tuning and task-specific learning to generate original and useful design concepts. Notably, they applied their approach to automating bio-inspired design concept generation [18].

Although these recent idea-generation methods directly leverage the reasoning capabilities of LLMs, the lack of control over LLMs may hinder their effectiveness when assisting ideation. These approaches often lead to solutions that are too divergent to meet specific needs. Managing the problem-solving process to ensure that solutions are both innovative and practical, as well as understanding the reasoning process behind generated innovative solutions, remains a challenge. In this study, we address this issue by integrating TRIZ with LLMs, presenting AutoTRIZ as a tool that follows the TRIZ reasoning steps to generate inventive solutions with interpretability.

3. AUTOTRIZ

In this section, we introduce AutoTRIZ, an artificial ideation tool that automates TRIZ with LLMs. The architecture of AutoTRIZ is depicted in Figure 2. At the core of AutoTRIZ is the utilization of LLMs to learn the reasoning process of the TRIZ methodology, which engineers often find challenging to learn and excel at. Overall, AutoTRIZ takes a problem statement from the user as its initial input, and automatically generates a solution report after the reasoning process. The report includes detailed information about the reasoning process based on TRIZ and the resulting solutions to the problem. Within AutoTRIZ, we have defined a four-step reasoning flow based on the classic TRIZ workflow. The system includes an inner fixed knowledge base which consists of three segments related to TRIZ details, enabling controlled reasoning.
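A high-level sketch of this four-module flow is given below, assuming an OpenAI-style chat-completions client; the prompt wording is illustrative and not the authors' actual engineered prompts.

```python
from openai import OpenAI  # assumes the OpenAI Python SDK; any chat LLM client works

client = OpenAI()

def ask_llm(system, user):
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "system", "content": system},
                  {"role": "user", "content": user}])
    return response.choices[0].message.content

def autotriz(user_input, parameters, matrix, principles):
    # Module 1 (LLM): distill the user input into a concise problem statement.
    problem = ask_llm("Extract and restate the engineering problem clearly.",
                      user_input)
    # Module 2 (LLM): identify one contradiction as two of the 39 parameters.
    pair = ask_llm("Given the 39 engineering parameters below, return the "
                   f"improving and worsening parameter indexes as 'i,j'.\n{parameters}",
                   problem)
    improving, worsening = (int(x) for x in pair.split(","))
    # Module 3 (no LLM): deterministic lookup in the contradiction matrix.
    selected = {i: principles[i] for i in matrix[(improving, worsening)]}
    # Module 4 (LLM): apply each retrieved principle and format the report.
    return ask_llm("Apply each TRIZ inventive principle to the problem and "
                   "write a structured solution report.",
                   f"Problem: {problem}\nPrinciples: {selected}")
```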
It is noteworthy that our focus is on controlling the entire problem-solving reasoning process, while remaining open to the knowledge used in ideation. The problem-related knowledge applied during the problem-solving process is drawn from the knowledge base that the LLM has acquired through pre-training on the large-scale corpus.

3.1 Controlling the TRIZ Reasoning Flow

To ensure that the system strictly follows the TRIZ thinking flow and reasoning process, we have configured AutoTRIZ with four modules, each corresponding to one of the four steps in TRIZ. As depicted in Figure 2, Modules 1, 2, and 4, outlined by solid-line frames, are driven by LLMs, whereas Module 3, outlined by a dashed-line frame, is controlled by predefined functions without using LLMs. Specifically, we exploit the instruction-following capabilities of LLMs for backend reasoning control. In each module that incorporates LLMs, relevant instructions are engineered into the input as system and assistant prompts.

Specifically, in Module 1, AutoTRIZ identifies the problem to be solved from the user input and converts it into descriptive text. Ideally, we hope that the content entered by the user is a clear problem statement. However, user inputs may include additional information such as scenario descriptions, background details, and even some redundant information. Therefore, in this module, AutoTRIZ is designed to identify and extract information related to the problem and then reorganize it into clear and concise text.

In Module 2, AutoTRIZ receives the processed problem description and detects its engineering contradiction, which is represented by a space constructed from two out of the 39 engineering parameters. At this stage, AutoTRIZ learns all the engineering parameters based on its inner knowledge base. The outputs of this module are presented in a structured format (i.e., the indexes of the improving and worsening features). It is important to note that for the same problem statement, the identified contradiction may differ with each execution of this module. On the one hand, a single problem may encompass multiple contradictory pairs, yet our system is designed to identify only one contradiction. On the other hand, there is an inherent randomness in the content generation by LLMs. In the next section, we will conduct experimental investigations to examine the efficacy of contradiction identification and the consistency of the outputs.

Once the contradiction is identified, Module 3 searches the contradiction matrix to find the indexes of relevant inventive principles and returns their descriptions. Following this, Module 4 synthesizes the original problem description, the identified engineering contradiction, and the inventive principles recommended by the system through TRIZ, to generate the final solutions.

LLMs can generate complex structured data, such as those in HTML and LaTeX formats [34]. In AutoTRIZ, we harness this capability to integrate all generated content and directly produce a reader-friendly problem-solving report in a structured format. We have engineered the format template directly into Module 4, enabling it to output documents formatted in LaTeX. In practice, the template for the report generation can be adjusted as needed to suit specific requirements.
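For instance, the engineered format could be a LaTeX skeleton that Module 4 is instructed to fill in; the section names below merely mirror the report contents described above and are not the authors' published template.

```python
# Illustrative LaTeX skeleton for the generated report (assumed structure).
REPORT_TEMPLATE = r"""
\documentclass{article}
\begin{document}
\section*{Problem Description}
% Module 1 output: the distilled problem statement
\section*{Identified Engineering Contradiction}
% Module 2 output: improving vs. worsening parameter
\section*{Retrieved Inventive Principles}
% Module 3 output: principle indexes, titles, and descriptions
\section*{Generated Solutions}
% Module 4 output: one solution per applied principle
\end{document}
"""
```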
3.2 Learning from the Fixed Knowledge Base

AutoTRIZ acquires the necessary information to learn the prior knowledge of TRIZ, enabling it to handle various types of problems. We have curated a static knowledge base, which interacts with the modules we described above, thereby empowering AutoTRIZ to master and apply the relevant knowledge. In AutoTRIZ, the internal fixed knowledge base includes three main components: (1) the TRIZ 39 Engineering Parameters [4], (2) the TRIZ Contradiction Matrix [4], and (3) the TRIZ 40 Inventive Principles [4]. Notably, the contradiction matrix here is identical to the traditional TRIZ contradiction matrix. The knowledge regarding engineering parameters and inventive principles includes titles and detailed descriptions for each entry. For example, for the first engineering parameter:

[INDEX] 1 [TITLE] Weight of moving object [DESCRIPTION] The mass of the object in a gravitational field, essentially the force that the body exerts on its support or suspension.

Similarly, for the first inventive principle:

[INDEX] 1 [TITLE] Segmentation [DESCRIPTION] The Segmentation principle encourages consideration of the division of an object or system into smaller independent parts, making it sectional, making it easy to assemble or disassemble, and increasing the degree of its divisibility or fragmentation.

All engineering parameters are configured into Module 2 as assistant information. The backend LLMs learn the instructions and the output parameter space through in-context learning, enabling zero-shot reasoning. Regarding inventive principles, only selected contents are delivered to the system based on the position in the contradiction matrix. This process is very similar to LLMs' Retrieval Augmented Generation (RAG) [35]. By retrieving additional information related to the query from external databases, RAG incorporates these external texts into LLM prompts to address the hallucination problem, leading to better generation [35]. In our system, by contrast, the problem-solving process involves precise search-augmented generation, effectively bridging the gap between the prior TRIZ knowledge from experts and the reasoning capabilities of LLMs derived from large-scale pre-training. Simultaneously, all solutions generated are interpretable, because each solution is derived from the application of selected inventive principles.
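One plausible way to store the two descriptive knowledge segments, mirroring the [INDEX]/[TITLE]/[DESCRIPTION] fields shown above (entries beyond the first are omitted for brevity):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class KnowledgeEntry:
    index: int
    title: str
    description: str

ENGINEERING_PARAMETERS = [
    KnowledgeEntry(1, "Weight of moving object",
                   "The mass of the object in a gravitational field, essentially "
                   "the force that the body exerts on its support or suspension."),
    # ... entries 2-39 omitted
]

INVENTIVE_PRINCIPLES = [
    KnowledgeEntry(1, "Segmentation",
                   "Encourages dividing an object or system into smaller "
                   "independent parts, making it sectional and easy to assemble "
                   "or disassemble."),
    # ... entries 2-40 omitted
]

def as_prompt(entry):
    # Serialize an entry in the in-context format used by the modules.
    return f"[INDEX] {entry.index} [TITLE] {entry.title} [DESCRIPTION] {entry.description}"
```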
3.3 System Implementation

We developed a web-based tool for public users to test and use AutoTRIZ, available at https://www.autotriz.ai/. Figure 3 shows the user interface of the tool. Throughout the deployment of this tool and all experiments conducted in this study, we utilized GPT-4 (Version: 20231106, the state-of-the-art model at the time this work was done) as the backend LLM. However, it is important to note that since the proposed AutoTRIZ is a general framework, the backend LLM can be replaced with any other closed-source LLM (e.g., Claude) or open-source LLM (e.g., Llama) with minimal effort required for adapting the corresponding prompts. For the TRIZ knowledge base in AutoTRIZ, we adopt the TRIZ definitions and descriptions in an engineering design textbook [36].

FIGURE 3: AutoTRIZ web-based tool

4. EXPERIMENTAL EVALUATION

In this section, we evaluate the effectiveness of the proposed AutoTRIZ through quantitative experiments and comparative studies. Specifically, we collected several case studies analyzed by human experts from TRIZ textbooks, constructing a case base. Then, we explored the consistency of the system in identifying engineering contradictions, as well as its overlap with human analysis. Finally, we selected a specific problem from the case base, then compared and discussed the solutions generated by AutoTRIZ against the results of human experts.

4.1 Constructing the TRIZ Case Base

To evaluate the performance of AutoTRIZ, we first constructed a case base containing TRIZ problem-solving cases developed by human experts. Initially, we gathered several TRIZ-related textbooks, some of which are focused on general design innovation, while others are specifically about TRIZ. From 7 of these textbooks [4,36–41], we collected 10 initial cases. The selection criteria include: (1) the content of the case contains all elements of the TRIZ reasoning process, including problem description, contradiction identification, inventive principle positioning, and solutions; (2) the problem is defined clearly and comprehensively; (3) the cases do not contain similar problems. All cases are stored in JSON format. For more details on the collected cases, please refer to our GitHub repository (https://github.com/shuojiangcn/AutoTRIZ-DETC24).

The initial 10 cases cover various domains, including environmental engineering, transportation, manufacturing, material science, aerospace technology, and so on. The evaluation of these cases can serve as a preliminary benchmark, enabling users to understand and experience the usage protocol and performance of AutoTRIZ. In the future, we will continue to expand the case base for more robust testing. Beyond serving experimental purposes in this study, the curated case base can also store the results generated by users with AutoTRIZ. As the size of the base expands, we can also explore the interaction between the reasoning module and the existing case base, enabling AutoTRIZ's innovative capabilities to be scalable.

4.2 Assessing the Contradiction Identification

Detecting contradictions is an essential step in the entire TRIZ problem-solving process. Accurate identification of the contradictions within a problem can effectively assist the system in recommending the appropriate inventive principles for the next step. Within LLMs, randomness is incorporated into the text generation process. These models often use sampling methods (e.g., top-k sampling) or temperature adjustments to control the generation process, leading to a variety of possible outputs rather than repeating the same response every time. Because of this inherent variability, LLMs may suffer from instability during inference. As a result, some LLM-based agents adopt self-consistency techniques that create several reasoning paths and then perform an ensemble on all generated answers, selecting the most consistent one through majority voting [42]. However, in traditional TRIZ, analyzing the same problem from different perspectives can yield different possible contradictions. The stochastic nature of LLM-based generation can thus be useful for increasing the diversity of generated ideas [32]. Based on this, we maintain the setting of producing a single contradiction in each entry.

To assess the performance and consistency of this setting, we conducted the following experiments. For each given problem statement, we performed the analysis 100 times, resulting in 100 pairs of identified parameters (contradictions). Then, we counted all results and calculated their respective proportions. In cases of high consistency, a particular contradiction could be dominant. In some cases, one parameter in the contradiction may have higher certainty than the other, leading to more dispersed results.
We used information entropy as the uncertainty score, where a smaller entropy value indicates greater confidence in the model's output. The information entropy metric is widely used for uncertainty measurement [43]. Given a probability distribution X generated by the model, we can calculate the entropy by:

H(X) = -\sum_{i=1}^{n} P(x_i) \log_2 P(x_i)

where P(x_i) represents the frequency probability of the i-th class over the total of 100 trials and n is the number of possible classes. Since we have 100 trials in our experiments, the entropy value ranges from 0 to 6.64, where a smaller value indicates higher consistency.
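A direct implementation of this uncertainty score over the 100 detected contradiction pairs, using the base-2 logarithm that matches the stated 0 to log2(100) ≈ 6.64 range:

```python
import math
from collections import Counter

def uncertainty_score(detections):
    """Shannon entropy (base 2) of repeated contradiction detections.

    detections: list of (improving, worsening) tuples from repeated trials;
    0 means a fully consistent output, log2(len(detections)) means all
    trials disagreed.
    """
    counts = Counter(detections)
    total = len(detections)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Example: 100 trials in which one contradiction dominates.
trials = [(9, 13)] * 80 + [(6, 13)] * 20
print(round(uncertainty_score(trials), 3))  # 0.722 -> relatively consistent
```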
The reasons for choosing case 7 are two-fold: (1) this case exhibits relatively high consistency in identifying engineering contradictions, with one dominant outcome (Figure 4); (2) the top 3 detections of the contradiction are all half-matches with the reference. This ensures a certain degree of reliability while allowing a distinction between the subsequent reasoning paths of AutoTRIZ and humans.

The problem of case 7 concerns the pneumatic transportation of metal shots through a system of plastic piping [39]. Here is the original problem statement:

We are faced with a challenge involving the pneumatic transportation of metal shots through a system of plastic piping originally intended for plastic pellets. The transition to metal shots, despite their advantages for production purposes, has led to significant wear and damage, particularly at the pipe's elbows. This issue arises from the incompatibility between the metal shots and the existing plastic elbow design. The task is to identify and implement a solution that resolves this conflict, ensuring the system's durability and effectiveness for transporting metal shots.

FIGURE 4: Experimental results about contradiction detection

In the textbook, the identified improving parameter is "Speed" (Parameter 9), and the worsening parameter is "Stability of the object's composition" (Parameter 13). According to the contradiction matrix, the author selects "Mechanical Substitution" (Principle 28) from the obtained inventive principles. Applying this principle, the author describes the solution as placing a magnet at the elbow to bind the metal shots to a plastic material, thereby creating a blanket of shots that absorbs the energy.

Figure 5 shows the problem-solving report generated by AutoTRIZ, containing the reasoning process and solutions. The same problem statement is used as the input. Firstly, we can see that AutoTRIZ simplifies the original problem statement, identifying the main issue that needs to be addressed. Regarding the identification of contradictions, AutoTRIZ diverges from human expertise. Both AutoTRIZ and the textbook's analysis recognize the "Stability of the object's composition" (Parameter 13) as the worsening feature. However, concerning the improving feature, AutoTRIZ detects "Area of stationary object" (Parameter 6), while the textbook's analysis considers it to be "Speed" (Parameter 9). From the original problem statement, we understand that the key issue is to avoid wear on the plastic elbows by the metal shots to ensure durability, which clearly indicates that one of the contradictory parameters involves stability. The identification of the other parameter, however, is not directly stated in the problem, leaving room for a variety of possible interpretations. AutoTRIZ reasons that the surface area needs improvement to withstand the impact and wear of the metal shots, while the expert asserts speed as the system's top priority. These two analyses highlight different needs, thereby guiding subsequent innovative directions differently.

In the textbook's analysis, the author selected a single inventive principle (28, 'Mechanical Substitution') and created a solution by positioning a magnet at the piping's elbow, which magnetically attaches metal shots to the plastic, forming an energy-absorbing layer. This approach represents a direct and effective innovation. However, based on the identified parameter pair, the contradiction matrix could yield four inventive principles (i.e., (1, 'Segmentation'), (28, 'Mechanical Substitution'), (33, 'Homogeneity'), and (18, 'Mechanical Vibration')). Some principles may be challenging to apply, as the outcomes are directly influenced by the users' reasoning ability, experience, and familiarity with TRIZ materials. This step also requires the most human effort in TRIZ. By comparison, AutoTRIZ can effectively overcome this issue. After identifying the contradiction (Parameter 6 vs. Parameter 13), AutoTRIZ identifies two inventive principles from the contradiction matrix (i.e., (2, 'Extraction') and (39, 'Strong Oxidants')). For each principle, AutoTRIZ applies it and generates a corresponding solution. Both proposed solutions demonstrate feasibility and innovation. Solution 1 implements a physical alteration to prevent direct contact between the metal shots and the piping. Solution 2, integrating 'Strong Oxidants', involves a surface treatment that improves the piping's durability against metal shots through a protective coating.
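For readers who want to trace the lookup step, the sketch below hard-codes just the two contradiction-matrix cells discussed for case 7, using the parameter and principle numbers quoted above; AutoTRIZ's fixed function module stores the full 39x39 matrix, and the dictionary form here is only illustrative.

# Sparse excerpt of the 39x39 TRIZ contradiction matrix, keyed by
# (improving parameter, worsening parameter). Only the two cells
# relevant to case 7 are shown.
CONTRADICTION_MATRIX = {
    (9, 13): [1, 28, 33, 18],  # Speed vs. Stability: the textbook's lookup
    (6, 13): [2, 39],          # Area of stationary object vs. Stability: AutoTRIZ's lookup
}

PRINCIPLES = {
    1: "Segmentation", 2: "Extraction", 18: "Mechanical Vibration",
    28: "Mechanical Substitution", 33: "Homogeneity", 39: "Strong Oxidants",
}

def lookup(improving: int, worsening: int) -> list[str]:
    """Return the inventive principles recommended for a parameter pair."""
    return [PRINCIPLES[p] for p in CONTRADICTION_MATRIX.get((improving, worsening), [])]

print(lookup(6, 13))  # ['Extraction', 'Strong Oxidants']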
In summary, both the textbook's solution and the solutions automatically generated by AutoTRIZ are practical, originating from different inventive principles and leading to different approaches. In the previous section, we performed 100 trials on each case for contradiction detection. In this section, we randomly selected one trial's solutions to compare and discuss against the human analysis results from the textbook. We chose only one result at random because the solutions and the complete report are relatively lengthy and complex. Besides the case-study exploration, we will also seek computational evaluation methods and metrics [44] regarding the quality of generated solutions in future work. It is important to note that these solutions are relatively preliminary and can serve as foundational directions for innovators to further develop and refine their designs. On this basis, we will continue to develop AutoTRIZ to produce more detailed solutions for the given problem.

FIGURE 5: AutoTRIZ generated solution report for case 7

5. DISCUSSION

So far, we have presented a new methodology that integrates LLMs and the systematic innovation method, TRIZ, to automatically generate inventive solutions for any given problem in an interpretable way. This methodology has been implemented into a web-based tool, AutoTRIZ. We have demonstrated its effectiveness and practicality through experiments and case studies.

Prior studies [14,15] have assessed LLMs' capabilities across a broad range of engineering-related tasks, revealing that these models (especially GPT-series models) hold extensive engineering knowledge, such as in design and manufacturing. Therefore, in our framework, we only control the reasoning flow, without limiting the knowledge involved in the ideation process, to fully leverage the general knowledge and capabilities of LLMs. In this study, our case base of 10 problems spans multiple distinct domains, and AutoTRIZ has effectively generated inventive solutions in each case.

FIGURE 6: The multi-input usages of AutoTRIZ

The proposed method significantly reduces the entry barrier to TRIZ. AutoTRIZ can generate a multitude of solutions in a short period of time because it leverages the computational power and vast knowledge base of LLMs. This efficiency is further enhanced by its user-friendly interface, allowing for easy configuration and use, and significantly reducing the time needed to generate ideas and refine problem-solving strategies. The sketch below illustrates this automated reasoning flow.
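The following is a rough Python sketch of that flow, not AutoTRIZ's released implementation: the prompts are our own wording, the OpenAI Python SDK is assumed as the backbone interface, and the contradiction-detection step is assumed to return parseable JSON.

import json
from openai import OpenAI  # assumes the official OpenAI Python SDK

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(prompt: str) -> str:
    """One reasoning step delegated to the LLM backbone."""
    resp = client.chat.completions.create(
        model="gpt-4", messages=[{"role": "user", "content": prompt}]
    )
    return resp.choices[0].message.content

def autotriz_report(problem: str, matrix: dict, principles: dict) -> dict:
    # Module 1: restate and simplify the user's problem.
    restated = ask(f"Summarize the core engineering problem in one paragraph:\n{problem}")
    # Module 2: detect one engineering contradiction as indices into the
    # 39 TRIZ engineering parameters (assumed to come back as valid JSON).
    pair = json.loads(ask(
        "Identify the contradiction for the problem below. Reply with JSON "
        '{"improving": <1-39>, "worsening": <1-39>} only.\n' + restated
    ))
    # Module 3: the fixed (non-LLM) function module - look up inventive
    # principles in the 39x39 contradiction matrix (see the earlier sketch).
    recs = [principles[p] for p in matrix[(pair["improving"], pair["worsening"])]]
    # Module 4: apply each recommended principle to generate one solution.
    solutions = ask(
        f"Problem: {restated}\nApply each inventive principle and propose "
        f"one solution per principle: {', '.join(recs)}"
    )
    return {"problem": restated, "contradiction": pair,
            "principles": recs, "solutions": solutions}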
In contrast, mastering the traditional TRIZ method for professional use typically requires months of training and substantial intellectual and cognitive effort [45].

In the comparative study of case 7, we observed that the problem statement contains information related to the desired direction of improvement, which is relevant to the contradiction. Such information aids in aligning AutoTRIZ's detections with those of human experts. Accordingly, as demonstrated in Figure 6, we can incorporate multi-input configurations into the system, enabling AutoTRIZ to generate solutions that fully consider detailed requirements from users.
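As one possible realization of this idea, the sketch below assembles optional user inputs into a single problem description before it enters the reasoning modules; the field names are hypothetical, not the tool's actual interface.

def build_problem_input(statement: str,
                        improvement_direction: str | None = None,
                        constraints: list[str] | None = None) -> str:
    """Assemble multi-input user content into one problem description.

    Besides the bare problem statement, optional fields let users state
    the desired direction of improvement and known constraints, which
    can steer contradiction detection toward the intended parameters.
    """
    parts = [f"Problem: {statement}"]
    if improvement_direction:
        parts.append(f"Desired improvement: {improvement_direction}")
    if constraints:
        parts.append("Constraints: " + "; ".join(constraints))
    return "\n".join(parts)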
The user interaction settings with AutoTRIZ are also a topic worth exploring. We currently keep them simple to ensure accessibility for all users, including those without an understanding of TRIZ. We plan to investigate user interaction with TRIZ, AutoTRIZ, and vanilla LLMs, examining the differences to identify the most effective methods for improving the overall user experience and system performance.

Although this study focuses on automating the TRIZ reasoning process using LLMs, the proposed framework can be extended to automate other knowledge-based innovation methods. For instance, Yilmaz et al. [46] identified 77 design heuristics from over 3,000 design process outcomes and suggested a subset of heuristics to designers, which, when selected at random, produced improved design outcomes [47]. By applying our framework to this research, one could treat the identified heuristics as an internal knowledge base for the LLM-based agent, determining how to utilize these heuristics in the backend. Moreover, to develop a more powerful tool, one could also integrate various knowledge-based idea generation methods into the reasoning modules of LLMs, such as SCAMPER [48], IDEO Method Cards [49], Bio-inspired Design [50], and Design-by-Analogy [51–53].

The proposed AutoTRIZ framework has several limitations. Firstly, the solutions generated by LLMs may contain hallucinations or erroneous information. We plan to include fact-check modules to ensure the accuracy of the solutions. Additionally, there is no objective mechanism to evaluate the effectiveness of generated solutions. Users must independently assess solution quality and rank them for practical use. The evaluation studies conducted in this paper compared results solely from textbooks, which usually represent the analysis of a single expert or a small group of experts. Future studies will involve many more experts analyzing the same problems for comparison, making the conclusions more robust. Moreover, this study was demonstrated on a limited set of problem cases, providing only an initial insight into AutoTRIZ that might introduce some bias. In future research, we aim to apply this method to a broader and more diverse range of problems, systematically evaluating AutoTRIZ's performance.

6. CONCLUSION

In this paper, we propose AutoTRIZ, an artificial ideation workflow and tool that leverages LLMs to automate the TRIZ methodology and enhance its applications. AutoTRIZ is constructed from multiple LLM-based reasoning modules and a pre-defined function module, interacting with an inner fixed knowledge base. It takes problem statements from users as initial inputs and automatically produces an interpretable solution report by following the step-by-step TRIZ reasoning process. The efficacy of this method is demonstrated and evaluated through quantitative and comparative experiments, as well as case studies involving human uses of TRIZ from TRIZ textbooks. Although this paper primarily focuses on integrating LLMs with TRIZ, the proposed framework holds the potential to be extended to other knowledge-based ideation methods, including SCAMPER, Design Heuristics, and Design-by-Analogy. Despite its current limitations, we invite interested innovators to test and use AutoTRIZ at: https://www.autotriz.ai/.

REFERENCES
[1] Zwicky, F., 1967, "The Morphological Approach to Discovery, Invention, Research and Construction," New Methods of Thought and Procedure, F. Zwicky and A. G. Wilson, eds., Springer Berlin Heidelberg, Berlin, Heidelberg, pp. 273–297.
[2] White, C. K., Wood, K. L., and Jensen, D., 2012, "From Brainstorming to C-Sketch to Principles of Historical Innovators: Ideation Techniques to Enhance Student Creativity," J STEM Educ, 13(5).
[3] Camburn, B., Arlitt, R., Anderson, D., Sanaei, R., Raviselam, S., Jensen, D., and Wood, K. L., 2020, "Computer-Aided Mind Map Generation via Crowdsourcing and Machine Learning," Res Eng Des, 31, pp. 383–409.
[4] Altshuller, G. S., 1999, The Innovation Algorithm: TRIZ, Systematic Innovation and Technical Creativity, Technical Innovation Center Inc.
[5] Cascini, G., and Russo, D., 2007, "Computer-Aided Analysis of Patents and Search for TRIZ Contradictions," International Journal of Product Development, 4(1–2), pp. 52–67.
[6] Hall, S., Mollan, C., Pandey, V., and Mourelatos, Z., 2022, "TRIZ Mapping and Novelty Detection of Engineering Design Patents Using Machine Learning," International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, p. V006T06A044.
[7] Guarino, G., Samet, A., and Cavallucci, D., 2022, "PaTRIZ: A Framework for Mining TRIZ Contradictions in Patents," Expert Syst Appl, 207, p. 117942.
[8] Achiam, J., Adler, S., Agarwal, S., Ahmad, L., Akkaya, I., Aleman, F. L., Almeida, D., Altenschmidt, J., Altman, S., Anadkat, S., and others, 2023, "GPT-4 Technical Report," arXiv preprint arXiv:2303.08774.
[9] Touvron, H., Lavril, T., Izacard, G., Martinet, X., Lachaux, M.-A., Lacroix, T., Rozière, B., Goyal, N., Hambro, E., Azhar, F., and others, 2023, "Llama: Open and Efficient Foundation Language Models," arXiv preprint arXiv:2302.13971.
[10] Wei, J., Tay, Y., Bommasani, R., Raffel, C., Zoph, B., Borgeaud, S., Yogatama, D., Bosma, M., Zhou, D., Metzler, D., and others, 2022, "Emergent Abilities of Large Language Models," Transactions on Machine Learning Research.
[11] Singhal, K., Azizi, S., Tu, T., Mahdavi, S. S., Wei, J., Chung, H. W., Scales, N., Tanwani, A., Cole-Lewis, H., Pfohl, S., and others, 2023, "Large Language Models Encode Clinical Knowledge," Nature, pp. 1–9.
[12] Boiko, D. A., MacKnight, R., Kline, B., and Gomes, G., 2023, "Autonomous Chemical Research with Large Language Models," Nature, 624(7992), pp. 570–578.
[13] Romera-Paredes, B., Barekatain, M., Novikov, A., Balog, M., Kumar, M. P., Dupont, E., Ruiz, F. J. R., Ellenberg, J. S., Wang, P., Fawzi, O., and others, 2024, "Mathematical Discoveries from Program Search with Large Language Models," Nature, 625(7995), pp. 468–475.
[14] Makatura, L., Foshey, M., Wang, B., HähnLein, F., Ma, P., Deng, B., Tjandrasuwita, M., Spielberg, A., Owens, C. E., Chen, P. Y., and others, 2023, "How Can Large Language Models Help Humans in Design and Manufacturing?," arXiv preprint arXiv:2307.14377.
[15] Picard, C., Edwards, K. M., Doris, A. C., Man, B., Giannone, G., Alam, M. F., and Ahmed, F., 2023, "From Concept to Manufacturing: Evaluating Vision-Language Models for Engineering Design," arXiv preprint arXiv:2311.12668.
[16] Wang, B., Zuo, H., Cai, Z., Yin, Y., Childs, P., Sun, L., and Chen, L., 2023, "A Task-Decomposed AI-Aided Approach for Generative Conceptual Design," International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, p. V006T06A009.
[17] Han, Y., Nanda, G., and Moghaddam, M., 2023, "Attribute-Sentiment-Guided Summarization of User Opinions From Online Reviews," Journal of Mechanical Design, 145(4), p. 41402.
[18] Zhu, Q., Zhang, X., and Luo, J., 2023, "Biologically Inspired Design Concept Generation Using Generative Pre-Trained Transformers," Journal of Mechanical Design, 145(4), p. 41409.
[19] Zhu, Q., and Luo, J., 2023, "Generative Transformers for Design Concept Generation," J Comput Inf Sci Eng, 23(4), p. 41003.
[20] Spreafico, C., and Russo, D., 2016, "TRIZ Industrial Case Studies: A Critical Survey," Procedia CIRP, 39, pp. 51–56.
[21] Silverstein, D., DeCarlo, N., and Slocum, M., 2008, "How to Achieve Competitive Excellence Using TRIZ," NW: Taylor & Francis Group.
[22] Li, Z., Tate, D., Lane, C., and Adams, C., 2012, "A Framework for Automatic TRIZ Level of Invention Estimation of Patents Using Natural Language Processing, Knowledge-Transfer and Patent Citation Metrics," Computer-Aided Design, 44(10), pp. 987–1010.
[23] Berdyugina, D., and Cavallucci, D., 2023, "Automatic Extraction of Inventive Information out of Patent Texts in Support of Manufacturing Design Studies Using Natural Languages Processing," J Intell Manuf, 34(5), pp. 2495–2509.
[24] Luo, J., 2022, "Data-Driven Innovation: What Is It?," IEEE Trans Eng Manag, pp. 1–19.
[25] Jiang, S., Sarica, S., Song, B., Hu, J., and Luo, J., 2022, "Patent Data for Engineering Design: A Critical Review and Future Directions," J Comput Inf Sci Eng, 22(6), p. 060902.
[26] Nelson, M. D., Goenner, B. L., and Gale, B. K., 2023, "Utilizing ChatGPT to Assist CAD Design for Microfluidic Devices," Lab Chip, 23(17), pp. 3778–3784.
[27] Stella, F., Della Santina, C., and Hughes, J., 2023, "How Can LLMs Transform the Robotic Design Process?," Nat Mach Intell, pp. 1–4.
[28] Li, A., Wu, J., and Bigham, J. P., 2023, "Using LLMs to Customize the UI of Webpages," Adjunct Proceedings of the 36th Annual ACM Symposium on User Interface Software and Technology, pp. 1–3.
[29] Qiu, Y., and Jin, Y., 2023, "Document Understanding-Based Design Support: Application of Language Model for Design Knowledge Extraction," Journal of Mechanical Design, 145(12), p. 121401.
[30] Huang, Z., Quan, K., Chan, J., and MacNeil, S., 2023, "CausalMapper: Challenging Designers to Think in Systems with Causal Maps and Large Language Model," Proceedings of the 15th Conference on Creativity and Cognition, pp. 325–329.
[31] Ding, Z., Srinivasan, A., MacNeil, S., and Chan, J., 2023, "Fluid Transformers and Creative Analogies: Exploring Large Language Models' Capacity for Augmenting Cross-Domain Analogical Creativity," Proceedings of the 15th Conference on Creativity and Cognition, pp. 489–505.
[32] Ma, K., Grandi, D., McComb, C., and Goucher-Lambert, K., 2024, "Exploring the Capabilities of Large Language Models for Generating Diverse Design Solutions," arXiv preprint arXiv:2405.02345.
[33] Ma, K., Grandi, D., McComb, C., and Goucher-Lambert, K., 2023, "Conceptual Design Generation Using Large Language Models," International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, p. V006T06A021.
[34] Tang, X., Zong, Y., Zhao, Y., Cohan, A., and Gerstein, M., 2023, "Struc-Bench: Are Large Language Models Really Good at Generating Complex Structured Data?," arXiv preprint arXiv:2309.08963.
[35] Lewis, P., Perez, E., Piktus, A., Petroni, F., Karpukhin, V., Goyal, N., Küttler, H., Lewis, M., Yih, W., Rocktäschel, T., and others, 2020, "Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks," Adv Neural Inf Process Syst, 33, pp. 9459–9474.
[36] Childs, P., 2013, Mechanical Design Engineering Handbook, Butterworth-Heinemann.
[37] Orloff, M. A., 2006, Inventive Thinking through TRIZ: A Practical Guide, Springer Berlin, Heidelberg.
[38] Orloff, M. A., 2012, Modern TRIZ: A Practical Course with Easytriz Technology, Springer Science & Business Media.
[39] Savransky, S. D., 2000, Engineering of Creativity: Introduction to TRIZ Methodology of Inventive Problem Solving, CRC Press.
[40] Silverstein, D., DeCarlo, N., and Slocum, M., 2007, Insourcing Innovation: How to Achieve Competitive Excellence Using TRIZ, CRC Press.
[41] Fey, V., and Rivin, E., 2005, Innovation on Demand: New Product Development Using TRIZ, Cambridge University Press.
[42] Wang, X., Wei, J., Schuurmans, D., Le, Q., Chi, E., Narang, S., Chowdhery, A., and Zhou, D., 2023, "Self-Consistency Improves Chain of Thought Reasoning in Language Models," The Eleventh International Conference on Learning Representations, Kigali, Rwanda.
[43] Zhang, X., Chen, F., Lu, C.-T., and Ramakrishnan, N., 2019, "Mitigating Uncertainty in Document Classification," arXiv preprint arXiv:1907.07590.
[44] Regenwetter, L., Srivastava, A., Gutfreund, D., and Ahmed, F., 2023, "Beyond Statistical Similarity: Rethinking Metrics for Deep Generative Models in Engineering Design," Computer-Aided Design, 165, p. 103609.
[45] Ilevbare, I. M., Probert, D., and Phaal, R., 2013, "A Review of TRIZ, and Its Benefits and Challenges in Practice," Technovation, 33(2–3), pp. 30–37.
[46] Yilmaz, S., Daly, S. R., Seifert, C. M., and Gonzalez, R., 2016, "Evidence-Based Design Heuristics for Idea Generation," Des Stud, 46, pp. 95–124.
[47] Daly, S., Yilmaz, S., Christian, J. L., Seifert, C. M., and Gonzalez, R., 2012, "Design Heuristics in Engineering Concept Generation," Journal of Engineering Education, 101(4), pp. 602–628.
[48] Eberle, B., 1996, Scamper on: Games for Imagination Development, Prufrock Press Inc.
[49] IDEO, 2003, IDEO Method Cards: 51 Ways to Inspire Design, William Stout.
[50] Fu, K., Moreno, D., Yang, M., and Wood, K. L., 2014, "Bio-Inspired Design: An Overview Investigating Open Questions From the Broader Field of Design-by-Analogy," ASME Journal of Mechanical Design, 136(11, SI), p. 111102.
[51] Jiang, S., Hu, J., Wood, K. L., and Luo, J., 2022, "Data-Driven Design-by-Analogy: State-of-the-Art and Future Directions," ASME Journal of Mechanical Design, 144(2), p. 020801.
[52] Murphy, J., Fu, K., Otto, K., Yang, M., Jensen, D., and Wood, K., 2014, "Function Based Design-by-Analogy: A Functional Vector Approach to Analogical Search," ASME Journal of Mechanical Design, 136(10), p. 101102.
[53] Hey, J., Linsey, J., Agogino, A. M., and Wood, K. L., 2008, "Analogies and Metaphors in Creative Design," International Journal of Engineering Education, 24(2), pp. 283–294.
AUGMENTING MINDS OR AUTOMATING SKILLS? THE DIFFERENTIAL ROLE OF HUMAN CAPITAL IN GENERATIVE AI'S IMPACT ON CREATIVE TASKS

Meiling Huang1, Ming Jin2, and Ning Li1
1School of Economics and Management, Tsinghua University
2School of Management, Wuhan University of Technology

Author Note
All data, analysis code, output, and research materials including the full list of items are available at https://osf.io/ynhtu/?view_only=31642f7caac74082940eb1153d4e9e55. Correspondence concerning this article should be addressed to Ning Li, Leadership and Organization Management, Tsinghua University, Beijing, China. Email: [email protected]

Abstract
Generative AI is rapidly reshaping creative work, raising critical questions about its beneficiaries and societal implications. This study challenges prevailing assumptions by exploring how generative AI interacts with diverse forms of human capital in creative tasks. Through two randomized controlled experiments in flash fiction writing and song composition, we uncover a paradox: while AI democratizes access to creative tools, it simultaneously amplifies cognitive inequalities. Our findings reveal that AI enhances general human capital (cognitive abilities and education) by facilitating adaptability and idea integration but diminishes the value of domain-specific expertise. We introduce a novel theoretical framework that merges human capital theory with the automation-augmentation perspective, offering a nuanced understanding of human-AI collaboration. This framework elucidates how AI shifts the locus of creative advantage from specialized expertise to broader cognitive adaptability. Contrary to the notion of AI as a universal equalizer, our work highlights its potential to exacerbate disparities in skill valuation, reshaping workplace hierarchies and redefining the nature of creativity in the AI era. These insights advance theories of human capital and automation while providing actionable guidance for organizations navigating AI integration amidst workforce inequalities.

Generative AI is transforming creative industries, challenging traditional notions of human expertise and reshaping the dynamics of work. This technology offers both promise and peril: while democratizing access to creative tools, it also risks deepening cognitive and social inequalities. Scholars have highlighted generative AI's potential to augment human creativity in areas as diverse as writing, music, and visual arts (Noy & Zhang, 2023; Zhou & Lee, 2024; Nakavachara et al., 2024). Yet, others caution that such advancements may exacerbate disparities in skill valuation, favoring those who can effectively leverage AI while marginalizing others (Acemoglu et al., 2022; Doshi & Hauser, 2024; Lee & Chung, 2024; Eloundou et al., 2023). As AI evolves from a mere tool to a co-creator, understanding who benefits most from this transformation is increasingly critical, not only for individuals and organizations but also for broader societal equity.

While initial evidence suggests that generative AI can enhance creative performance, answers to this nuanced question remain elusive (Jia et al., 2023; Li et al., 2024). Some argue that AI could reduce inequality by leveling the playing field, allowing lower-performing individuals to close the performance gap (Eloundou et al., 2023; Noy & Zhang, 2023). Yet, studies also suggest that much of the observed performance gain stems from participants relying heavily on AI-generated outputs with minimal human input, resulting in automation rather than meaningful human-AI collaboration (Noy & Zhang, 2023; Doshi & Hauser, 2024). This paradox highlights the need to examine whether generative AI truly democratizes creativity or amplifies disparities by favoring those already equipped with the skills to use it effectively.

These observations point to a broader tension inherent in AI's role in the workplace. Raisch and Krakowski (2021) describe the "automation-augmentation paradox," whereby AI can both replace human tasks through automation and simultaneously enhance human abilities through augmentation. This framework highlights that generative AI has the potential not only to automate processes (reducing the need for human involvement in certain elements of the creative process) but also to augment human capabilities by enhancing creative and cognitive functions. These dynamics complicate our understanding of AI's broader impact, raising critical questions about its beneficiaries and its potential to reshape skill hierarchies. This distinction is critical for understanding how generative AI reshapes the dynamics of creativity and skill valuation. Will AI level the playing field, or will it widen existing gaps by privileging those with broader, more adaptable abilities?

To explore these questions, we challenge conventional wisdom and offer a novel framework that redefines human-AI collaboration in creative tasks. Specifically, we integrate the augmentation-automation framework with human capital theories to propose that generative AI has a dual and contrasting impact: it lowers knowledge barriers by diminishing the value of domain-specific expertise, while simultaneously increasing the importance of general human capital, such as cognitive adaptability and education (Choudhury et al., 2020; Teodoridis et al., 2019). Building on this dual effect, we develop a novel framework that differentiates between general human capital (broad, transferable skills like problem-solving and learning capacity) and specific human capital (deep, domain-specific expertise unique to particular tasks; Rietzschel et al., 2007; Teodoridis et al., 2019). This framework sheds light on how generative AI interacts unevenly with these forms of human capital, revealing its potential to both empower and marginalize. Rather than uniformly enhancing productivity, we suggest that generative AI disproportionately benefits individuals with adaptable, transferable skills, while devaluing specialized expertise. By providing this nuanced lens, our study challenges assumptions and investigates whether AI will serve as a force for democratizing creative work or as a catalyst for reinforcing inequalities in skill valuation.

To empirically test these ideas, we conducted two randomized controlled experiments examining how generative AI interacts with human capital in creative tasks. The first experiment focused on flash fiction writing, a task accessible to a broad range of individuals. Participants' general human capital (e.g., IQ and education level) and specific human capital (e.g., writing skills) were assessed, and they were randomly assigned to either work independently or collaborate with AI. To ensure ecological validity, members of the public evaluated the flash fiction, providing real-world audience judgments (Berg, 2019; Yin et al., 2024). The second experiment extended this investigation to song lyric composition (Nelson et al., 2023), a more specialized creative domain. Participants, ranging from novices to experienced lyricists, were provided with pre-composed musical pieces and tasked with writing lyrics tailored to their assigned composition. Once completed, the songs were professionally recorded with trained singers. Public evaluations of the finished songs were again used to reflect authentic consumer responses (Berg, 2016, 2022), allowing us to capture the nuanced ways AI impacts creativity across varying levels of human capital.

Through these experiments, we reveal how generative AI's impact on creativity depends on the interplay between general and specific human capital. Our findings challenge the assumption that AI universally enhances productivity, showing that its benefits are disproportionately influenced by individuals' human capital profiles. Rather than leveling the creative playing field, AI enhances the value of general human capital, such as cognitive adaptability, while diminishing the relative importance of specialized expertise. These insights highlight the duality of AI's role: it democratizes access to creative tools but risks widening disparities based on cognitive adaptability. By examining these dynamics, our work advances understanding of human-AI collaboration, offering critical guidance for organizations and policymakers seeking to balance innovation with equity in the AI era.

THEORETICAL DEVELOPMENT

Generative AI and Creative Performance

Generative AI, a new generation of artificial intelligence that creates new content and solutions across various domains, has rapidly become a pivotal tool for enhancing creativity among knowledge workers (Dell'Acqua et al., 2023; Lee & Chung, 2024). A growing body of research has demonstrated its capacity to augment human performance in diverse tasks, ranging from text generation and coding assistance to complex creative endeavors such as storytelling, music composition, and visual art creation. Studies by Huang et al. (2021) and Brynjolfsson et al. (2023) have shown that generative AI can significantly increase efficiency and creativity by automating routine tasks and offering novel ideas that humans might not conceive independently.

However, while the general consensus is that generative AI improves performance, the question of who benefits most from this technology remains underexplored. Early findings, such as those from Park et al. (2023) and Noy and Zhang (2023), suggest that AI can reduce performance disparities by offering significant support to lower-performing individuals. Yet, these studies often focus on relatively simple tasks requiring minimal human input, where AI largely operates autonomously. Noy and Zhang (2023), for instance, found that AI compresses performance variance by boosting lower performers but also observed limited human-AI interaction, as many participants submitted AI-generated outputs with minimal editing. This disparity in benefit may also reflect a ceiling effect, where higher-performing individuals experience limited incremental gains relative to their lower-performing counterparts.
Consequently, these findings may not fully capture the complexities of more collaborative tasks, where deeper human-AI collaboration is required.

A Contingent Approach: Integrating Human Capital Theory

To understand the nuanced effects of generative AI on performance, it is critical to develop a contingent approach that accounts for individual differences in human capital (Becker, 1962; Rosen, 1976). Human capital theory, widely established in organizational behavior and economics, provides a useful framework for understanding how individuals' abilities and knowledge influence their interaction with AI (Lepak & Snell, 1999; Carpenter et al., 2001; Ployhart et al., 2011). Within this theory, human capital is typically categorized into two distinct types: general human capital and specific human capital (Coff, 1997).

General human capital represents cognitive abilities and formal education that equip individuals with versatile, transferable skills (Lepak & Snell, 2002; Ritchie & Tucker-Drob, 2018). These skills enable people to quickly learn and adapt across various tasks and industries. Importantly, general human capital fosters problem-solving, critical thinking, and the ability to work with complex information (Crook et al., 2011; Ritchie & Tucker-Drob, 2018). Because these cognitive skills are broad in nature, individuals with higher levels of general human capital are capable of navigating diverse environments and performing a wide range of tasks. On the other hand, specific human capital encompasses specialized knowledge and expertise that is narrowly focused on particular tasks, industries, or domains (Baer, 2015; Plucker & Beghetto, 2004; Tu et al., 2020). This type of capital reflects deep, technical proficiency in a specific area, allowing individuals to excel in highly specialized roles that demand extensive training and experience.

In the context of AI, the distinction between general and specific human capital becomes even more salient. While generative AI democratizes access to knowledge and facilitates the completion of tasks that once required specialized expertise, it also interacts with human capital in ways that can either amplify or diminish the relative value of these skills (Doshi & Hauser, 2024; Zhu & Zou, 2024). The contingent approach suggests that the benefits of AI are not uniformly distributed but are instead influenced by the type of human capital an individual possesses.

The Augmentation-Automation Perspective on Generative AI and Human Capital

Generative AI's unique features, its lack of agency and its expansive knowledge span, make it both a powerful tool and a complex variable in human-AI collaboration (Rouse, 2020; Gilardi et al., 2023). These features interact differently with general and specific human capital, leading to distinct outcomes based on the type of human capital individuals possess (Pyatt & Becker, 1966; Plucker & Beghetto, 2004). The augmentation-automation framework provides a useful lens for understanding this interaction, illustrating how AI either complements or substitutes human labor depending on whether individuals rely more on general or specific human capital (Raisch & Krakowski, 2021).

Generative AI's lack of agency requires human input to produce meaningful outputs, making it heavily reliant on the cognitive and evaluative capacities of users (Boussioux et al., 2024; Wang et al., 2023). This reliance means that the effectiveness of AI in creative, complex tasks is closely tied to the user's general human capital (Choudhury et al., 2020; Mariz-Perez et al., 2012). Individuals with high levels of general human capital, those equipped with cognitive versatility, critical thinking, and broad educational backgrounds, are better positioned to extract value from AI. They can assess, refine, and apply AI-generated content within complex processes such as strategic decision-making, design, and creative work (Agarwal et al., 2023; Hui et al., 2024; Rafner et al., 2023). Because these tasks require judgment, adaptation, and the integration of diverse information, AI acts as a powerful amplifier for individuals with strong general human capital. The lack of agency in AI necessitates that human oversight remain essential, meaning that those who possess broader cognitive skills will be increasingly instrumental in guiding AI towards producing meaningful, innovative outputs. This dynamic amplifies the value of general human capital, making it indispensable in an AI-augmented workplace.

At the same time, generative AI's expansive knowledge span allows it to access and apply information across a vast array of domains, fundamentally altering how tasks that traditionally relied on specific human capital are performed (Acemoglu et al., 2022; Anthony et al., 2023). In creative work, domain-specific expertise is typically acquired through years of experience, learning, and deep familiarity with the nuances of a particular field (Amabile, 2012; Lifshitz-Assaf, 2018). This expertise allows individuals to produce creative outputs informed by their specialized knowledge, which is often tied to domain-specific memory and learned associations (Baer, 2015; Bruns, 2013; Ward, 2008). However, generative AI's ability to synthesize nearly all human knowledge and understand complex connections across fields reduces the need for such narrowly focused expertise (Anthony et al., 2023; Li et al., 2024). AI's training across vast datasets allows it not only to access deep knowledge in specific areas but also to combine insights from multiple domains, enabling it to perform creative tasks that were once the exclusive domain of highly specialized experts.

By integrating these two key features of generative AI, its need for human oversight and its expansive knowledge span, with the augmentation-automation framework (Raisch & Krakowski, 2021), we can better understand how AI differentially interacts with general and specific human capital (Raisch & Krakowski, 2021; Einola & Khoreva, 2023; Lee & Chung, 2024). From the augmentation perspective, generative AI enhances the capabilities of individuals with general human capital. AI tools increase cognitive and creative productivity by providing vast resources for exploration, iteration, and decision-making (Luo et al., 2021; Einola & Khoreva, 2023; Agarwal et al., 2023). Individuals with broad, adaptable skills are better equipped to harness these tools, guiding AI in ways that enhance performance on complex, non-routine creative tasks (Meincke et al., 2024; Wang et al., 2023). In this context, the demand for general human capital rises, as the role of human oversight and creative input remains critical in realizing AI's potential.

From the automation perspective, AI's expansive knowledge span enables it to perform creative tasks traditionally dominated by specific human capital, reducing the economic value of specialized knowledge (Einola & Khoreva, 2023). As AI efficiently generates creative outputs by synthesizing knowledge across domains, the demand for deep, domain-specific expertise among experts declines, while novices may find new opportunities to engage in creative processes (Dell'Acqua et al., 2023). The more AI automates creative tasks that rely on established knowledge connections, the less critical specialized human capital becomes in driving creative performance. This shift poses challenges for workers whose roles are defined by their domain-specific expertise, as AI's capacity to replicate or approximate these tasks diminishes the relative value of such expertise while simultaneously opening pathways for novices. Building on this foundation, we now turn to the development of specific hypotheses that stem from these key mechanisms and relationships.

HYPOTHESES

We first posit that the use of generative AI enhances individual creativity, a baseline assumption supported by prior research showing AI's ability to boost productivity and creative output. Studies indicate that AI can augment creativity by generating new ideas, offering alternative solutions, and streamlining iteration processes in tasks like writing and consulting (Brynjolfsson et al., 2023; Doshi & Hauser, 2024). These tasks benefit from AI's strengths in synthesizing information, producing coherent narratives, and offering stylistic variations. However, in highly creative tasks such as flash fiction and songwriting, where brevity, originality, and rapid shifts in focus are key, the impact of AI is less straightforward (Lee & Chung, 2024; Zhou & Lee, 2024). These tasks often demand novel ideas, emotional depth, and unpredictable shifts, traditionally seen as the realm of human intuition, raising questions about AI's role in enhancing creativity in such contexts.

Nevertheless, several core mechanisms suggest that AI could still improve creative performance in these highly dynamic tasks. First, AI's capacity to access and synthesize vast knowledge across genres, themes, and styles provides a wealth of inspiration, allowing users to explore novel ideas that might not be immediately apparent through human creativity alone (Marrone et al., 2024; Meincke et al., 2024). This extensive knowledge base enables individuals to combine concepts in innovative ways, potentially sparking fresh and unique creative outputs. Moreover, AI facilitates rapid iteration, allowing people to experiment with multiple creative directions (Peng et al., 2023; Nakavachara et al., 2024). This iterative process increases the likelihood of refining ideas and enhancing the final creative product. Therefore,

Hypothesis 1. The use of generative AI enhances individual creativity.

Building on the first hypothesis, which posits that generative AI enhances individual creativity, we now consider how general human capital augments this relationship. The core of this argument lies in how individuals' cognitive abilities and education level interact with AI's capabilities, particularly in creative tasks, where novelty and adaptability are key (Harvey & Berry, 2023; Doshi & Hauser, 2024; Lee & Chung, 2024).
Generative AI offers a vast array of ideas, but it lacks the ability to independently direct or refine them, relying instead on humans to guide the process (Acemoglu et al., 2022; Noy & Zhang, 2023). This is where general human capital comes into play. Individuals with high cognitive flexibility can more effectively interpret and integrate AI-generated content, drawing from a range of inputs and integrating them in unique ways (Tu et al., 2020; Meincke et al., 2024). In tasks that demand originality, those with a higher education level are better equipped to navigate and synthesize AI's diverse offerings. For instance, in songwriting, an individual with broad knowledge might use AI-generated lyrics from various musical genres and styles, merging them into something fresh and innovative that goes beyond what AI alone could produce.

Additionally, the human role in providing oversight becomes critical. While AI can suggest numerous creative paths, individuals must exercise judgment to evaluate and refine these ideas (Anthony et al., 2023; Peng et al., 2023). Here, the cognitive strength associated with general human capital enables individuals to make strategic decisions about which AI-generated ideas to pursue (Boussioux et al., 2024). For example, in fiction writing, someone with high cognitive ability may discern which AI-generated plot elements will best enhance the emotional resonance or thematic complexity of the story, resulting in a more compelling final product.

Furthermore, AI's ability to draw on a vast expanse of knowledge across fields is most effectively utilized by individuals with a similarly broad base of knowledge (Jia et al., 2023; Noy & Zhang, 2023). Those with higher levels of general human capital can connect AI-generated content to a variety of contexts, pushing creative boundaries further (Mariz-Perez et al., 2012; Dell'Acqua et al., 2023). In songwriting, for example, an individual might blend poetic, historical, and contemporary influences into their lyrics, creating something more original than either they or the AI could achieve alone. Taken together, individuals with higher levels of general human capital are not only better at guiding AI but also at leveraging its wide-ranging capabilities to produce more innovative and impactful creative outputs (Huang et al., 2024; Rafner et al., 2023). Their ability to adapt, evaluate, and synthesize AI-generated content enhances the creative process, making the relationship between AI use and creativity particularly strong for those with greater cognitive flexibility and educational background. Therefore,

Hypothesis 2. General human capital positively moderates the relationship between the use of generative AI and creativity, such that the positive relationship between AI use and creativity will be stronger when individuals' general human capital is higher (H2a: education; H2b: IQ).

In contrast to the synergistic interaction between AI and general human capital, generative AI may diminish the importance of specific human capital in creative tasks (Baer, 2015; Dane, 2010; Tu et al., 2020). Specific human capital, built through years of domain-specific learning and expertise, plays a vital role in producing creative outputs informed by deep knowledge (Amabile, 2012; Bruns, 2013; Teodoridis et al., 2019). However, AI's expansive knowledge span, coupled with its ability to synthesize information from diverse fields, reduces the need for narrowly focused expertise (Acemoglu & Restrepo, 2022; Eloundou et al., 2023). This shift challenges the value of specific human capital, particularly in tasks such as fiction writing and songwriting, where AI can now perform functions once requiring deep, domain-specific skills.

A key mechanism is AI's ability to automate routine elements of creative tasks. Much of specific human capital involves knowledge internalized through years of experience, such as understanding narrative structures or lyrical patterns (Zhou & Lee, 2024). For example, a professional lyricist develops an intricate understanding of lyrical structure, genre conventions, and thematic depth over time, applying these learned associations to produce high-quality compositions. However, generative AI, trained on a vast knowledge corpus, can replicate these established techniques, reducing the need for domain-specific human intervention. AI's proficiency in producing creative outputs that follow conventional structures undermines the unique value that specific human capital once offered, especially in the formulaic aspects of creativity.

Additionally, AI's ability to draw from a wide array of knowledge domains goes beyond the more constrained scope of specific human capital (Yin et al., 2024; Zhou & Lee, 2024). While domain-specific experts focus on the nuances of their particular field, AI can integrate diverse insights across disciplines, broadening creative possibilities (Luo et al., 2021; Lee & Chung, 2024). The fixed nature of specific human capital, often referred to as the curse of knowledge (Camerer et al., 1989), may limit flexibility in exploring ideas beyond familiar frameworks. For example, experts deeply rooted in their field may overlook novel ideas that lie outside their established knowledge base, especially when AI suggests unconventional combinations (Dane, 2010; Miller et al., 2006; Ward, 2008; Schillebeeckx et al., 2019). AI's lack of agency, requiring human oversight, further complicates this interaction, as specialists may rely too heavily on their own expertise, missing out on creative possibilities that do not align with their domain-specific knowledge (Amabile, 1985; Lawless & Kulikowich, 2006; Rietzschel et al., 2007).

Furthermore, the distinctiveness of specific skills, often developed through extensive training (Tu et al., 2020), becomes less critical when AI can replicate them at scale (Huang et al., 2024). The value of deep expertise, once a significant advantage in creative fields, is diminished when AI can produce outputs that rival or exceed the quality of those created by human experts (Doshi & Hauser, 2024; Zhou & Lee, 2024). AI's ability to emulate specific techniques and structures reduces the competitive edge of those with domain-specific skills, as the unique contributions of such expertise are no longer as essential to the creative process (Harvey & Kou, 2013; Agarwal et al., 2023). As AI automates routine tasks, integrates diverse knowledge, and offers creative solutions beyond the confines of specific expertise, the traditional advantages of specific human capital are diminished (Puranam, 2021; Marrone et al., 2024). Therefore,
Hypothesis 3. Specific human capital negatively moderates the relationship between the use of generative AI and creativity, such that the positive relationship between AI use and creativity will be weaker when individuals' specific human capital is higher.

OVERVIEW OF STUDIES

We conducted two experiments to test the effects of generative AI on creativity and the moderating roles of general and specific human capital. Study 1 focused on flash-fiction writing, while Study 2 extended this investigation to a lyric-writing task, addressing the limitations of the first study and examining the interaction effects between AI use and human capital on creativity (see Figure 1 and Figure 2 for detailed experiment designs). In both studies, participants were randomly assigned to either use generative AI or complete the task independently. The AI tool was deployed via a user-friendly, dialogue-based interface built using OpenAI's API (GPT-4), allowing participants to interact seamlessly with the system (see Figure 3 for the interface of the AI tool). By employing distinct creative contexts across the two studies, we aimed to capture a broader understanding of how AI influences creative output and how this relationship is moderated by individual differences in human capital. (This study is part of a broader research project titled "Human Interactions with Artificial Intelligence in Organizations," which received IRB approval. All data, analysis code, output, and research materials, including the full list of items, are available at https://osf.io/ynhtu/?view_only=31642f7caac74082940eb1153d4e9e55. All data were analyzed using STATA MP Version 17.0.)

EXPERIMENT 1

Samples and Procedures

We recruited participants with a shared interest in story creation through various channels, including social media and online interest-based groups, ensuring a diverse sample comprising university students and professionals across various industries in China. Participants signed up for our experiment and visited our behavioral lab at their scheduled times. A total of 162 individuals participated in the first experiment, each compensated 30 CNY. Of the final sample, 111 (68.52%) were female, with an average age of 26.27 years (SD = 5.62). The majority, 154 participants (95.06%), held at least a bachelor's degree. Among them, 101 were college students, while the remaining participants worked in sectors such as technology (8.02%) and education (6.79%).

The experiment was conducted in three stages. First, participants completed an IQ test and provided demographic information. Second, they were randomly assigned to one of two conditions: one group used generative AI (GPT-4) to compose a flash fiction of under 500 Chinese characters, while the other group completed the task without AI assistance. Both groups were informed of basic fiction-writing techniques and requirements. For the AI-assisted group, information about effective prompt crafting was additionally provided to ensure all participants could use the AI. After the experiment, participants completed a post-experiment survey to capture their subjective perceptions during the creative process and received their compensation (see online Appendix A for the measures used in the survey).
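The paper specifies only that the tool was a dialogue-based interface built on OpenAI's GPT-4 API; a minimal sketch of such a loop might look like the following (the system prompt and function name are our own illustration, not the study's deployed code):

from openai import OpenAI  # assumes the official OpenAI Python SDK

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def chat_session():
    """Minimal dialogue loop of the kind the AI-assisted condition used:
    participants converse freely with GPT-4 while drafting their story."""
    history = [{"role": "system",
                "content": "You are a writing assistant helping a participant "
                           "draft a flash fiction of under 500 Chinese characters."}]
    while True:
        user_msg = input("You: ")
        if user_msg.strip().lower() in {"quit", "exit"}:
            break
        history.append({"role": "user", "content": user_msg})
        resp = client.chat.completions.create(model="gpt-4", messages=history)
        reply = resp.choices[0].message.content
        history.append({"role": "assistant", "content": reply})
        print("AI:", reply)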
Measures

Creativity measure. We measured creativity using the consensual assessment technique (Amabile et al., 1996; Amabile & Pratt, 2016), following Berg's (2016, 2019) approach. We recruited an online panel of raters who evaluated the created fictions on two dimensions: novelty (ICC₂ = .90–.91) and usefulness (ICC₂ = .87–.89; where multiple groups of raters were used, the range of ICC₂ values is shown). Novelty was defined as the extent to which the story presented novel and distinctive ideas, reflecting originality and uniqueness. Usefulness was defined as the degree to which the story provoked thought and conveyed meaningful insights or lessons, recognizing that its value may vary based on the context of the task. To assess the quality of each story, we also included an overall enjoyment rating from raters (ICC₂ = .89–.91) as an additional dimension. This measure complements the specific dimensions of novelty and usefulness, providing a broader perspective on the stories' impact. Overall enjoyment serves as a key indicator of how well the stories resonate with audiences.

To ensure consistent assessments, raters participated in online training and received standardized definitions and criteria (see Appendix A). Ratings were made on a 10-point scale (1 = Extremely low, 10 = Extremely high). To mitigate potential perception biases against AI (Yin et al., 2024), raters also indicated whether they believed each story involved generative AI (1 = Yes, 0 = No). Attention checks were randomly embedded; data from two raters were excluded due to failures on these checks. Each story was evaluated by an average of 43.87 raters (SD = 1.77).
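The ICC₂ values above index the reliability of the mean rating across the k raters; one conventional way to compute this statistic is (MSB - MSW) / MSB from a one-way ANOVA with stories as groups. The sketch below illustrates that convention under the assumption of a complete stories-by-raters matrix; the authors' exact computation may differ.

import numpy as np

def icc2(ratings: np.ndarray) -> float:
    """Reliability of the k-rater mean: (MS_between - MS_within) / MS_between,
    from a one-way ANOVA with stories as groups.

    `ratings` is an (n_stories x k_raters) matrix of scores.
    """
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)
    ms_between = k * np.sum((row_means - grand) ** 2) / (n - 1)
    ms_within = np.sum((ratings - row_means[:, None]) ** 2) / (n * (k - 1))
    return (ms_between - ms_within) / ms_between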
General human capital. Participants' general human capital was assessed through their educational attainment and IQ test scores (Pyatt & Becker, 1966; Crook et al., 2011; Mariz-Perez et al., 2012), both collected during the initial phase of the experiment. Participants first reported their highest level of education (1 = junior high school and below, 6 = doctoral degree). They then took an 18-item version of the Raven's Progressive Matrices test, which consisted of reasoning questions and had a 10-minute time limit (Sefcek et al., 2016).

Specific human capital. To measure participants' specific human capital in fiction writing, we utilized self-reported assessments of their literary writing skills. This was measured with two items: "How would you rate your literary writing ability?" (1 = Extremely poor, 5 = Extremely good) and "Compared to your peers, how would you rate your literary writing ability?" (1 = Significantly worse than most peers, 5 = Significantly better than most peers). The average of these two items was used to represent participants' overall literary writing ability (Cronbach's α = .77).

Control variables. We controlled for several variables to ensure the robustness of our findings. First, we included demographic factors: age and gender. To account for personality traits, we controlled for openness, measured using Saucier's (1994) brief Big Five scale (8 items; e.g., "imaginative and creative"; Cronbach's α = .83). We controlled for the frequency of AI usage (0 = never, 5 = daily) because frequent AI users may be more proficient with AI tools, potentially enhancing creative outcomes due to their experience rather than the experimental conditions. Separately, we controlled for participants' mind perception of AI, measured with a scale adapted from Yam et al. (2021; 8 items; e.g., "AI can think," "AI can plan"; Cronbach's α = .81), as individuals who perceive AI as more cognitively capable might interact differently with AI during the task, influencing their reliance on and utilization of the technology. To address potential biases related to participant motivation, we coded their motivation for participation (0 = monetary compensation, 1 = other reasons such as interest in AI or fiction). Finally, to account for potential evaluation bias toward AI (Yin et al., 2024), we controlled for the AI identification ratio, calculated as the proportion of raters who believed AI was used in creating each story.

Results

We employed Ordinary Least Squares (OLS) regression models to test our hypotheses. Table 1 presents the descriptive statistics and correlations among the study variables, and Table 2 provides the detailed regression results. In support of Hypothesis 1, AI use was positively and significantly related to novelty (b = 0.403, p = .035), usefulness (b = 0.352, p = .032), and overall impression (b = 0.370, p = .015).

Hypothesis 2 posits that general human capital amplifies the effect of AI use on creativity. The interaction between AI use and education was positive and significant for novelty (b = 0.480, p = .015) and approached significance for overall impression (b = 0.309, p = .064), indicating that the positive effect of AI use on creativity is stronger for individuals with higher education levels. In contrast, the moderation effect on usefulness was positive but not significant (b = 0.295, p = .118). Simple slope analysis revealed that, for individuals with high education, the positive effect of AI use on novelty was significant (b = 0.774, t(149) = 3.25, p = .001). Conversely, this effect was not significant for those with low education (b = -0.010, t(149) = -0.04, p = .968), as illustrated in Figure 4. A similar pattern emerged from the simple slope analyses for the usefulness and overall impression dimensions, as shown in the figures in Appendix F. These results partially support Hypothesis 2a.
Further analysis revealed that for the usefulness dimension, the simple slope was positive and significant when writing skills were low (b = 0.706, t(149) = 3.49, p = .001), but not significant when writing skills were high (b = -0.065, t(149) = -0.31, p = .758), as shown in Figure 6. Similar patterns were observed for novelty and overall impression. These findings collectively suggest partial support for Hypothesis 3.
Supplementary Analysis
Building on our main hypotheses, we conducted additional analyses to deepen our understanding of the effects of AI on creativity. First, we investigated whether individuals with varying levels of general and specific human capital interacted with AI differently in terms of style or mode. We conducted mean-split analyses to categorize participants into high and low groups for both specific and general human capital. Specific human capital, measured by self-reported writing skills, was split at the mean score of 3.26 (SD = 0.64, Nlow = 54, Nhigh = 57). Independent samples t-tests revealed no significant differences between these groups in terms of prompt length (t(109) = 0.923, p = .358) or the number of interaction rounds with the AI (t(109) = 1.075, p = .285). Participants were divided into high and low education groups based on a mean of 4.59 (SD = 0.82, Nlow = 52, Nhigh = 59). T-tests showed no significant differences between high and low education groups regarding prompt length (t(109) = -1.403, p = .164) and interaction rounds (t(109) = 0.897, p = .386). Similarly, for IQ, the mean split was at 15.56 (SD = 2.63, Nlow = 62, Nhigh = 49). T-tests indicated no significant differences in prompt length (t(109) = -0.194, p = .846) or interaction rounds (t(109) = 1.05, p = .916) between high and low IQ groups.
Next, considering prior research suggesting that AI use may lead to increased similarity in outputs, we employed textual analysis techniques (text embeddings; see the sketch at the end of this subsection) to assess the similarity of the creative products. Interestingly, our findings showed no significant increase in similarity among AI-assisted outputs compared to those created independently, indicating that AI use in our study did not homogenize creative work. Third, we explored whether AI use impacted participants' cognitive perceptions of their creativity. Results revealed that using AI significantly reduced participants' psychological ownership of their creative products (b = -1.239, p = .001). Lastly, to ensure the robustness of our main results, we conducted an omnibus test by including all interactions in the same regression model. The findings remained highly consistent with our initial analyses, and in several cases, the interaction effects became stronger. Together, these supplementary analyses contribute to a more comprehensive understanding of the nuanced effects of AI on creativity, supporting the robustness of our main findings. Additional details are provided in online Appendix D.
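The similarity check referenced above can be illustrated with the following minimal sketch, which is not the pipeline used in this study: each story is embedded, and the average pairwise cosine similarity is compared between the AI-assisted and independent groups. The embedding model and the example texts are assumptions for illustration.

```python
# Sketch of comparing within-group pairwise similarity of stories.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")    # assumed embedding model

def mean_pairwise_cosine(texts):
    emb = model.encode(texts, normalize_embeddings=True)
    sims = emb @ emb.T                              # cosine, since normalized
    iu = np.triu_indices(len(texts), k=1)           # unique pairs only
    return sims[iu].mean()

ai_stories = ["story text A ...", "story text B ..."]        # hypothetical
control_stories = ["story text C ...", "story text D ..."]   # hypothetical
print("AI-assisted :", mean_pairwise_cosine(ai_stories))
print("Independent :", mean_pairwise_cosine(control_stories))
```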
Experiment 1 Discussion
Experiment 1 demonstrated that generative AI significantly enhances creativity in flash-fiction writing, positively impacting novelty, usefulness, and overall impression. Notably, individuals with higher general human capital benefited more from AI, while those with higher specific human capital experienced less benefit. Despite testing all hypotheses, several limitations warrant consideration.
First, the general nature of the writing task may dilute the unique impact of specific human capital, potentially explaining some insignificant moderation effects. Second, our assessment of specific human capital relied on broad self-reports of writing ability, which may not capture essential skills for fiction writing, such as story development and emotional expression, leading to possible response bias. Third, participants completed tasks within a constrained timeframe in the lab, which may not reflect the extended periods typical of real-world creative processes.
To address these limitations, our second study involves a lyric-writing task with both expert and novice lyricists, allowing for a clearer operationalization of specific human capital based on lyric-writing publication history. This study also spans one week, providing participants ample time to engage deeply with the creative process, thereby enhancing ecological validity and better mimicking real-world work conditions.
EXPERIMENT 2
Sample and Procedures
In Experiment 2, participants were recruited from universities, companies, and online music platforms, ensuring a diverse range of lyric-writing skills, including individuals with prior writing and publication experience. To incentivize participation and engagement, each participant was promised a professionally recorded song composition based on their own lyric creation, in addition to receiving 100 CNY upon completing all stages of the experiment.
The participants were tasked with writing song lyrics, a key component of a song alongside vocal melodies and instrumental accompaniments. To support this task, we provided each participant with both a vocal melody and instrumental accompaniment. We prepared ten royalty-free accompaniment tracks in various styles, and two professional composers created vocal melodies for five tracks each, resulting in ten complete song demos. Each demo consisted of an accompaniment track paired with a vocal melody performed using "la-la-la" syllables (see online Appendix B for the delivered materials).
After registration and an online IQ test (Nstart = 685), participants were randomly assigned to either an AI-assisted group or a control group that composed lyrics without AI support. Both groups received basic instructions on lyric-writing techniques, while the AI-assisted group received additional guidance on using generative AI (Ninformation = 611). Participants were assigned a demo file along with a simplified musical score of the vocal melody, which included annotations for lyric breaks and suggested word counts. This design aimed to engage participants effectively, regardless of their experience level. The experiment was conducted online, allowing participants one week to complete the task at their own pace, closely mimicking typical lyric-writing processes. Throughout this period, participants could listen to the provided melody and refer to the musical score as they composed their lyrics, facilitating a structured and supportive creative environment. After completing their initial assignment, participants were encouraged to write lyrics for the remaining nine demos. They then submitted their lyrics and completed a follow-up questionnaire about their experiences (Nsubmission = 348). Sample attrition occurred primarily during the lyrics-creation stage, which had a dropout rate of 43.04% despite efforts to simplify the task.
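As a quick illustration of where the reported attrition figure comes from, the snippet below (ours for exposition, not part of the study materials) recomputes the stage-wise retention and the lyrics-creation dropout rate from the sample counts reported above.

```python
# Attrition bookkeeping for Experiment 2's stages.
n_start, n_information, n_submission = 685, 611, 348

dropout_creation = (n_information - n_submission) / n_information
print(f"reached instructions: {n_information / n_start:.2%}")
print(f"lyrics-creation dropout: {dropout_creation:.2%}")   # ~43.04%
```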
Among the submissions, 329 works from 299 participants were deemed suitable for song recording by professional producers and singers. Of these participants, 289 (96.66%) composed lyrics for only one song. The final sample of 299 participants had an average age of 24.10 years (SD = 6.06); 191 (63.88%) were female, and 279 (93.31%) held a bachelor's degree or higher.
Measures
Creativity measure. In Experiment 2, we employed a dual-method approach to assess the creativity of the composed lyrics, ensuring a comprehensive evaluation that captures both the intrinsic qualities of the lyrics and their reception within a musical context. First, we recruited an online panel of raters to evaluate the lyrics independently of their musical performance, thereby minimizing potential influences from accompanying melodies and arrangements. These raters rated each lyric on three dimensions: novelty (ICC₂ = .35–.52), defined as the originality and unique expression within the lyrics, including innovative rhetorical techniques and perspectives; usefulness (emotional expression; ICC₂ = .26–.41), referring to the extent to which the emotional content resonates with and engages the audience; and overall impression (ICC₂ = .42–.59), which pertains to the overall quality and impact of the lyrics themselves. To ensure consistency, raters participated in online training sessions and were provided with standardized definitions and criteria for each dimension (see online Appendix C). Ratings were conducted using a 10-point scale (1 = Extremely low, 10 = Extremely high), with attention checks embedded throughout the process, resulting in the exclusion of data from two raters. Each lyric was evaluated by an average of 5.12 raters (SD = 0.71).
Second, to evaluate the creativity of the lyrics within a musical context, we turned the written lyrics into complete songs. Ten members of university choirs, all with systematic and professional training in singing, performed the composed lyrics. A professional music producer then finalized these performances into complete song recordings. An online panel of raters assessed the lyrics on the same two dimensions of novelty (ICC₂ = .85–.87) and usefulness (emotional appeal; ICC₂ = .78–.81), maintaining the same criteria. Additionally, raters evaluated their overall impression of the songs based on their liking as listeners (ICC₂ = .83–.85), reflecting the audience's reception of the complete musical piece. This second rating phase allowed us to consider the fit between the lyrics and their musical execution. Raters received training and detailed definitions to ensure consistent evaluations. Attention checks were included, leading to the exclusion of five raters who failed these checks and two raters whose correlation coefficients with the average scores were below .3. Each song was evaluated by an average of 28.33 raters (SD = 1.61).
General human capital. Consistent with Experiment 1, participants' general human capital was assessed through their education level and IQ test scores using the same measurement (Mariz-Perez et al., 2012; Ployhart et al., 2011; Sefcek et al., 2016).
Specific human capital. We measured participants' specific human capital in the field of lyric writing using an objective measurement.
A single indicator was employed to measure participants' previous experience with publishing or showcasing their lyrics: "Have you ever published or presented your lyrics?" The responses were distributed as follows: 1 (No) accounted for 64.44%, 2 (Yes, but only within a small range) for 29.79%, 3 (Yes, on a public platform) for 4.86%, and 4 (My work has received awards or recognition) for 0.91%. Compared to self-assessed writing ability, participants' experience in publishing or showcasing their lyrics provides a more objective measure and directly reflects their accumulated experience in this field.
Control variables. Consistent with Experiment 1, we controlled for participants' age, gender, openness, frequency of AI usage, motivation for participating in the experiment, mind perception of AI, and the AI identification ratio by raters. All measurements were identical to those in Experiment 1. Additionally, we included fixed effects for the selected demo in our regression analysis.
Results
We employed Ordinary Least Squares (OLS) regression models to test our hypotheses. Descriptive statistics and correlations are presented in Table 3, and regression results are shown in Tables 4–8.
The results provide limited support for Hypothesis 1. When evaluating lyrics alone, AI use did not significantly predict creativity. In contrast, when assessing complete songs, AI use showed positive coefficients for novelty (b = 0.133, p = .061) and usefulness (b = 0.108, p = .075), though these effects did not reach conventional levels of statistical significance.
Hypothesis 2a proposed that education would moderate the relationship between AI use and creativity, with stronger effects for individuals with higher education levels. For lyrics ratings, the interaction between AI use and education was significant across all creativity dimensions: novelty (b = 0.407, p = .008), usefulness (b = 0.341, p = .021), and overall impression (b = 0.489, p = .002). Simple slope analyses revealed that for individuals with high education, AI use positively influenced novelty (b = 0.389, t(308) = 2.49, p = .013; see Figure 7), usefulness (b = 0.333, t(308) = 2.22, p = .027), and overall impression (b = 0.461, t(308) = 2.93, p = .004). For those with low education, the effects of AI use were negative but not significant across all dimensions. In the context of complete songs, the interaction effects between AI use and education were positive but not statistically significant, although the direction remained consistent with our hypothesis. These results indicate that the positive effect of AI use on creativity is stronger among more educated individuals when evaluating lyrics alone, providing partial support for Hypothesis 2a.
Hypothesis 2b suggested that IQ would moderate the relationship between AI use and creativity, with stronger effects for individuals with higher IQ scores. The interactions were not statistically significant for any creativity dimension in either lyrics ratings or complete songs. Although the coefficients were in the expected direction, we do not find support for Hypothesis 2b.
Hypothesis 3 posited that specific human capital, measured by prior lyrics publication experience, would negatively moderate the relationship between AI use and creativity. The results support this hypothesis across both lyrics ratings and complete songs.
For lyrics ratings, the interaction between AI use and specific human capital was significantly negative for novelty (b = -0.391, p = .017) and approached significance for usefulness (b = -0.327, p = .056) and overall impression (b = -0.283, p = .091). Similarly, for complete songs, the interaction was significantly negative for novelty (b = -0.221, p = .029), usefulness (b = -0.195, p = .017), and overall impression (b = -0.209, p = .014). Simple slope analyses showed that for individuals with low specific human capital, AI use positively and significantly influenced creativity in the context of complete songs: novelty (b = 0.264, t(308) = 2.76, p = .006; see Figure 8), usefulness (b = 0.223, t(308) = 2.75, p = .006), and overall impression (b = 0.211, t(308) = 2.568, p = .011). Conversely, for those with high specific human capital, the effects of AI use on creativity were negative but not statistically significant across all dimensions. These findings indicate that the positive effects of AI use on creativity are weaker for individuals with higher levels of specific human capital, supporting Hypothesis 3.
Supplementary Analysis
We also conducted several supplementary analyses similar to those in the first study. First, we categorized participants into high and low groups for specific human capital based on whether they had previously published any lyrics (Nlow = 131, Nhigh = 66). Independent samples t-tests revealed no significant difference between these groups in terms of prompt length (t(195) = 1.539, p = .126). However, experts interacted with the AI significantly less than novices in terms of the number of interaction rounds (t(195) = 2.207, p = .029). Similar to Experiment 1, we conducted mean-split analyses to categorize participants into high and low groups for general human capital. Participants were divided into high and low education groups based on a mean of 4.31 (SD = 0.71, Nlow = 132, Nhigh = 65). T-tests showed that individuals with higher education interacted with the AI significantly more than their counterparts with lower education, in terms of both prompt length (t(195) = -3.189, p = .002) and interaction rounds (t(195) = -2.206, p = .029). Similarly, for IQ, the mean split was at 14.47 (SD = 3.04, Nlow = 97, Nhigh = 100). T-tests indicated that individuals with higher IQ wrote marginally longer prompts than their counterparts (t(195) = -1.925, p = .056), with no significant difference in interaction rounds (t(195) = -1.222, p = .223) between high and low IQ groups.
Second, we again assessed the similarity of the creative products using textual analysis techniques (text embeddings). In contrast to Experiment 1, the results showed increased similarity among AI-assisted outputs compared to those created without AI (cosine: b = 0.014, p = .008; L2 distance: b = -0.014, p = .007), leaving open the question of whether AI use homogenizes creative work. Third, we investigated whether the use of AI affected participants' cognitive perceptions. Results showed that using AI significantly reduced participants' psychological ownership of their creative products (b = -1.124, p < .001). Unlike in Experiment 1, AI use in the second study also increased participants' creative self-efficacy (b = 0.284, p = .014). Consistent with Experiment 1, to ensure the robustness of our main results, we conducted an omnibus test by including all interactions in the same regression model.
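For concreteness, the omnibus specification can be sketched as follows. This is an illustrative snippet rather than the code used in this study; the file and variable names are hypothetical, and the moderators are mean-centered before entering the interactions.

```python
# Sketch of the omnibus model: all three interactions at once,
# with fixed effects for the assigned demo.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("experiment2.csv")                # hypothetical file
for col in ["education", "iq", "specific_hc"]:
    df[col + "_c"] = df[col] - df[col].mean()      # center the moderators

omnibus = smf.ols(
    "novelty_s ~ ai_use * (education_c + iq_c + specific_hc_c)"
    " + age + gender + openness + ai_use_freq + purpose"
    " + mind_perception + ai_ident_ratio_s + C(demo)",
    data=df,
).fit(cov_type="HC1")                              # robust SEs
print(omnibus.summary())
```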
The findings remained highly consistent with our initial analyses. Details are provided in online Appendix E.
Experiment 2 Discussion
Unlike in Experiment 1, working with generative AI in Experiment 2 did not significantly improve creativity. The moderating role of education, which was evident in the lyric-writing task, diminished when evaluating the full songs. Additionally, IQ did not significantly moderate the relationship between AI use and creativity in either case. However, specific human capital consistently moderated the AI-creativity relationship negatively across both lyrics and songs, indicating that individuals with higher domain-specific expertise benefited less from AI assistance. These differences could stem from the complexity and specificity of the songwriting task. Songwriting, as a more specialized creative domain, may reduce the impact of general human capital while amplifying the importance of specific expertise. The clearer distinction between experts and novices in Experiment 2, based on prior lyrics publication, likely intensified the negative moderation effect of specific human capital.
GENERAL DISCUSSION
This research examined how generative AI interacts with different forms of human capital to influence creativity. Across two studies (flash fiction writing and songwriting), we explored how AI affects creativity and how general human capital (education and IQ) and specific human capital (domain-specific expertise) moderate these effects. The results reveal that AI significantly enhances creativity, especially for individuals with higher levels of general human capital. However, specific human capital consistently moderated this relationship negatively, indicating that individuals with greater domain expertise benefited less from AI assistance. These findings suggest that AI's impact on creativity is uneven, favoring those with broader cognitive skills while offering diminished advantages for those with specialized knowledge.
Theoretical Implications
Our study makes several important theoretical contributions. First, it challenges the notion that generative AI uniformly enhances productivity and reduces performance disparities among individuals (Noy & Zhang, 2023). Contrary to prior research on human-AI interactions, which suggests that domain experts may benefit more from AI due to their ability to effectively utilize predictive algorithms (e.g., Agrawal et al., 2019; Huang et al., 2024), our findings reveal that generative AI, unlike traditional predictive AI, can actually reduce the competitive edge of domain experts. By democratizing access to knowledge, generative AI breaks down traditional barriers, allowing individuals without specific expertise to perform tasks previously reserved for specialists (Anthony et al., 2023; Brynjolfsson et al., 2023; Wang et al., 2023). This shift underscores a fundamental change in the dynamics of knowledge work, where general cognitive skills become more valuable than specialized knowledge.
Second, by integrating human capital theory with the context of generative AI, we develop a novel framework that explains how different forms of human capital interact with AI technologies. Our findings illustrate that augmentation and automation coexist in the AI-human collaboration landscape and that their relative influence depends on the type of human capital individuals possess.
Specifically, generative AI augments the capabilities of those with high general human capital by enhancing their ability to process and integrate vast amounts of information creatively. In contrast, it automates tasks traditionally reliant on specific human capital, thereby reducing the unique value of specialized expertise. This framework advances human capital theory by demonstrating that the value of different skill types is reshaped in the presence of generative AI. It explains why experts may not benefit more from AI: the breaking of knowledge barriers by AI diminishes the exclusivity of their expertise. Additionally, experts may engage less with AI tools due to factors such as AI aversion or overreliance on their own knowledge, limiting their ability to leverage AI effectively (Doshi & Hauser, 2024; Yin et al., 2024). Our research thus highlights the need to reconsider how specific and general human capital are valued in future work.
Third, our study uncovers nuanced insights into the limitations of generative AI. In Experiment 2, we did not observe significant main effects of AI use on creativity when evaluating lyrics alone, diverging from previous studies that reported consistent positive effects (Jia et al., 2023; Noy & Zhang, 2023). This suggests that AI's effectiveness may depend on the nature of the task. For instance, songwriting relies less on writing skills, a strength of generative AI, and more on idea generation and emotional expression, which may not be as readily enhanced by AI assistance.
Furthermore, we did not consistently observe the hypothesized homogenization effect, whereby AI use leads to increased similarity in outputs (Wang et al., 2023; Anthony et al., 2023). While some studies argue that AI can homogenize creative products due to reliance on common algorithms, our findings indicate that this effect is not consistent and may vary depending on the type of task and the level of human-AI interaction.
Finally, our exploration into participants' perceptions revealed that AI use could impact intrinsic motivation. Some participants reported reduced feelings of ownership over their creative work when using AI, potentially diminishing intrinsic motivation (Amabile & Pratt, 2016). However, AI assistance also appeared to boost self-efficacy in creative domains, encouraging individuals to engage in tasks they might have otherwise avoided due to perceived skill gaps (Anthony et al., 2023; Noy & Zhang, 2023). These contrasting effects suggest that AI's influence on motivation is complex and warrants further investigation.
Practical Implications
Our findings have important practical implications for organizations navigating the integration of AI in creative and knowledge-based work. As AI becomes more prevalent across industries, understanding how different forms of human capital interact with AI can inform talent acquisition, workforce development, and task allocation strategies (Dell'Acqua et al., 2023; Frank et al., 2019; Paudel, 2024). Organizations should recognize the increasing value of general human capital, such as critical thinking, problem-solving, and adaptability, in an AI-enhanced workplace. Prioritizing these skills in hiring and training programs can enhance employees' ability to collaborate effectively with AI technologies. Companies can invest in developing general cognitive skills through targeted training initiatives, thereby maximizing the benefits of AI integration.
At the same time, industries heavily reliant on domain-specific expertise may need to reconsider the role of such knowledge in an AI-driven economy (Allen & Choudhury, 2022; Brynjolfsson et al., 2023). Our findings suggest that AI's capacity to automate specialized tasks could reduce the competitive advantage of individuals with narrowly focused expertise. Organizations might therefore shift toward fostering interdisciplinary skills and encouraging employees to develop broader competencies. From a societal perspective, policymakers and educators should emphasize broad-based educational programs that cultivate general cognitive abilities, ensuring that individuals are equipped to thrive alongside AI technologies (Furman & Seamans, 2019; Frank et al., 2019). Strategies to mitigate potential inequalities exacerbated by differences in human capital are essential, promoting inclusive access to skill development opportunities.
Limitations and Future Directions
Despite the contributions of our research, several limitations warrant acknowledgment and present avenues for future investigation. First, in Experiment 1, the distinction between high and low specific human capital may not have been salient due to reliance on broad self-assessments of writing ability. Experiment 2 addressed this limitation by using prior lyrics publication as a clearer indicator of specific human capital, resulting in more consistent results. Future research should employ precise and validated measures of specific human capital to better capture its nuances across different creative domains.
Second, some inconsistencies between Experiment 1 and Experiment 2, particularly regarding the main effects of AI use and the moderating role of general human capital (e.g., IQ), suggest that the impact of AI may vary across tasks. Songwriting may rely less on writing skills, a strength of generative AI, and more on idea generation and emotional expression, areas where AI assistance may be less effective. Future research should explore a range of creative tasks to determine the conditions under which AI enhances or diminishes creativity.
Third, our measures of general human capital, IQ tests based on Raven's Progressive Matrices and education level, focus on logic, reasoning, and cognitive skills that may not directly translate to artistic creativity (Ritchie & Tucker-Drob, 2018). This may explain why IQ and education did not predict performance directly in our studies. Additionally, cultural factors, such as the emphasis on logic and mathematics in Chinese education, may limit the applicability of these measures to creative tasks. Future studies should consider alternative measures of general human capital that capture a broader range of cognitive abilities relevant to creativity.
Finally, our research focused on creative tasks involving writing and lyric creation. It remains to be seen whether similar patterns emerge in tasks involving different cognitive demands, such as logical reasoning, coding, or analytical problem-solving. Investigating the interaction of AI and human capital in diverse domains would enhance the generalizability of our theoretical framework and inform AI integration strategies across various industries.
Conclusion
In conclusion, our study provides valuable insights into the complex interplay between generative AI and human capital in creative work.
By demonstrating that AI does not uniformly enhance productivity and that its benefits are contingent on the type of human capital individuals possess, we contribute to a more nuanced understanding of AI's role in the modern workplace. These findings have significant implications for theory, practice, and future research, highlighting the need to reconsider how we value and develop human skills in an era increasingly shaped by AI technologies.
REFERENCES
Acemoglu, D., Autor, D., Hazell, J., & Restrepo, P. (2022). AI and jobs: Evidence from online vacancies. Journal of Labor Economics, 40(S1), S293–S340. https://doi.org/10.1086/718327
Acemoglu, D., & Restrepo, P. (2022). Tasks, automation, and the rise in U.S. wage inequality. Econometrica, 90(5), 1973–2016. https://doi.org/10.3982/ECTA19815
Agarwal, N., Moehring, A., Rajpurkar, P., & Salz, T. (2023). Combining human expertise with artificial intelligence: Experimental evidence from radiology. National Bureau of Economic Research. https://doi.org/10.3386/w31422
Agrawal, A., Gans, J. S., & Goldfarb, A. (2019). Artificial intelligence: The ambiguous labor market impact of automating prediction. Journal of Economic Perspectives, 33(2), 31–50. https://doi.org/10.1257/jep.33.2.31
Allen, R., & Choudhury, P. (Raj). (2022). Algorithm-augmented work and domain experience: The countervailing forces of ability and aversion. Organization Science, 33(1), 149–169. https://doi.org/10.1287/orsc.2021.1554
Amabile, T. M. (1985). Motivation and creativity: Effects of motivational orientation on creative writers. Journal of Personality and Social Psychology, 48(2), 393–399. https://doi.org/10.1037/0022-3514.48.2.393
Amabile, T. M. (2012). Componential theory of creativity (pp. 538–559). Boston, MA: Harvard Business School.
Amabile, T. M., Conti, R., Coon, H., Lazenby, J., & Herron, M. (1996). Assessing the work environment for creativity. Academy of Management Journal, 39(5), 1154–1184. https://doi.org/10.2307/256995
Amabile, T. M., & Pratt, M. G. (2016). The dynamic componential model of creativity and innovation in organizations: Making progress, making meaning. Research in Organizational Behavior, 36, 157–183. https://doi.org/10.1016/j.riob.2016.10.001
Anthony, C., Bechky, B. A., & Fayard, A.-L. (2023). "Collaborating" with AI: Taking a system view to explore the future of work. Organization Science, 34(5), 1672–1694. https://doi.org/10.1287/orsc.2022.1651
Baer, J. (2015). The importance of domain-specific expertise in creativity. Roeper Review, 37(3), 165–178. https://doi.org/10.1080/02783193.2015.1047480
Becker, G. S. (1962). Investment in human capital: A theoretical analysis. Journal of Political Economy, 70(5, Part 2), 9–49. https://doi.org/10.1086/258724
Berg, J. M. (2016). Balancing on the creative highwire: Forecasting the success of novel ideas in organizations. Administrative Science Quarterly, 61(3), 433–468. https://doi.org/10.1177/0001839216642211
Berg, J. M. (2019). When silver is gold: Forecasting the potential creativity of initial ideas. Organizational Behavior and Human Decision Processes, 154, 96–117. https://doi.org/10.1016/j.obhdp.2019.08.004
Berg, J. M. (2022). One-hit wonders versus hit makers: Sustaining success in creative industries. Administrative Science Quarterly, 67(3), 630–673. https://doi.org/10.1177/00018392221083650
Boussioux, L., Lane, J. N., Zhang, M., Jacimovic, V., & Lakhani, K. R. (2024). The crowdless future? Generative AI and creative problem-solving.
Organization Science, 35(5), 1589–1607. https://doi.org/10.1287/orsc.2023.18430
Bruns, H. C. (2013). Working alone together: Coordination in collaboration across domains of expertise. Academy of Management Journal, 56(1), 62–83. https://doi.org/10.5465/amj.2010.0756
Brynjolfsson, E., Li, D., & Raymond, L. (2023). Generative AI at work. National Bureau of Economic Research. https://doi.org/10.3386/w31161
Camerer, C., Loewenstein, G., & Weber, M. (1989). The curse of knowledge in economic settings: An experimental analysis. Journal of Political Economy, 97(5), 1232–1254.
Carpenter, M. A., Sanders, W. G., & Gregersen, H. B. (2001). Bundling human capital with organizational context: The impact of international assignment experience on multinational firm performance and CEO pay. Academy of Management Journal, 44(3), 493–511. https://doi.org/10.5465/3069366
Choudhury, P., Starr, E., & Agarwal, R. (2020). Machine learning and human capital complementarities: Experimental evidence on bias mitigation. Strategic Management Journal, 41(8), 1381–1411. https://doi.org/10.1002/smj.3152
Coff, R. W. (1997). Human assets and management dilemmas: Coping with hazards on the road to resource-based theory. Academy of Management Review, 22(2), 374. https://doi.org/10.2307/259327
Crook, T. R., Todd, S. Y., Combs, J. G., Woehr, D. J., & Ketchen, D. J. (2011). Does human capital matter? A meta-analysis of the relationship between human capital and firm performance. Journal of Applied Psychology, 96(3), 443–456. https://doi.org/10.1037/a0022147
Dane, E. (2010). Reconsidering the trade-off between expertise and flexibility: A cognitive entrenchment perspective. Academy of Management Review, 35(4), 579–603. https://doi.org/10.5465/amr.35.4.zok579
Dell'Acqua, F., McFowland, E., Mollick, E. R., Lifshitz-Assaf, H., Kellogg, K., Rajendran, S., Krayer, L., Candelon, F., & Lakhani, K. R. (2023). Navigating the jagged technological frontier: Field experimental evidence of the effects of AI on knowledge worker productivity and quality. Harvard Business School Technology & Operations Mgt. Unit Working Paper, 24-013. https://doi.org/10.2139/ssrn.4573321
Doshi, A. R., & Hauser, O. P. (2024). Generative AI enhances individual creativity but reduces the collective diversity of novel content. Science Advances, 10(28), eadn5290. https://doi.org/10.1126/sciadv.adn5290
Einola, K., & Khoreva, V. (2023). Best friend or broken tool? Exploring the co-existence of humans and artificial intelligence in the workplace ecosystem. Human Resource Management, 62(1), 117–135. https://doi.org/10.1002/hrm.22147
Eloundou, T., Manning, S., Mishkin, P., & Rock, D. (2023). GPTs are GPTs: An early look at the labor market impact potential of large language models. arXiv preprint arXiv:2303.10130. https://doi.org/10.48550/arXiv.2303.10130
Frank, M. R., Autor, D., Bessen, J. E., Brynjolfsson, E., Cebrian, M., Deming, D. J., Feldman, M., Groh, M., Lobo, J., Moro, E., Wang, D., Youn, H., & Rahwan, I. (2019). Toward understanding the impact of artificial intelligence on labor. Proceedings of the National Academy of Sciences, 116(14), 6531–6539. https://doi.org/10.1073/pnas.1900949116
Furman, J., & Seamans, R. (2019). AI and the economy. Innovation Policy and the Economy, 19(1), 161–191. https://doi.org/10.1086/699936
Gilardi, F., Alizadeh, M., & Kubli, M. (2023). ChatGPT outperforms crowd workers for text-annotation tasks.
Proceedings of the National Academy of Sciences, 120(30), e2305016120. https://doi.org/10.1073/pnas.2305016120
Harvey, S., & Berry, J. W. (2023). Toward a meta-theory of creativity forms: How novelty and usefulness shape creativity. Academy of Management Review, 48(3), 504–529. https://doi.org/10.5465/amr.2020.0110
Harvey, S., & Kou, C.-Y. (2013). Collective engagement in creative tasks: The role of evaluation in the creative process in groups. Administrative Science Quarterly, 58(3), 346–386. https://doi.org/10.1177/0001839213498591
Huang, L. L., Chen, R. P., & Chan, K. W. (2024). Pairing up with anthropomorphized artificial agents: Leveraging employee creativity in service encounters. Journal of the Academy of Marketing Science, 52(4), 955–975. https://doi.org/10.1007/s11747-024-01017-w
Huang, M.-H., & Rust, R. T. (2021). A strategic framework for artificial intelligence in marketing. Journal of the Academy of Marketing Science, 49(1), 30–50. https://doi.org/10.1007/s11747-020-00749-9
Hui, X., Reshef, O., & Zhou, L. (2024). The short-term effects of generative artificial intelligence on employment: Evidence from an online labor market. Organization Science. https://doi.org/10.1287/orsc.2023.18441
Jia, N., Luo, X., Fang, Z., & Liao, C. (2023). When and how artificial intelligence augments employee creativity. Academy of Management Journal, 67(1), 5–32. https://doi.org/10.2139/ssrn.4397280
Lawless, K. A., & Kulikowich, J. M. (2006). Domain knowledge and individual interest: The effects of academic level and specialization in statistics and psychology. Contemporary Educational Psychology, 31(1), 30–43. https://doi.org/10.1016/j.cedpsych.2005.01.002
Lee, B. C., & Chung, J. (2024). An empirical investigation of the impact of ChatGPT on creativity. Nature Human Behaviour, 1–9. https://doi.org/10.1038/s41562-024-01953-1
Lepak, D. P., & Snell, S. A. (1999). The human resource architecture: Toward a theory of human capital allocation and development. Academy of Management Review, 24(1), 31. https://doi.org/10.2307/259035
Lepak, D. P., & Snell, S. A. (2002). Examining the human resource architecture: The relationships among human capital, employment, and human resource configurations. Journal of Management, 28(4), 517–543. https://doi.org/10.1177/014920630202800403
Li, N., Zhou, H., Deng, W., Liu, J., Liu, F., & Mikel-Hong, K. (2024). When advanced AI isn't enough: Human factors as drivers of success in generative AI-human collaborations. Available at SSRN 4738829. https://doi.org/10.2139/ssrn.4738829
Lifshitz-Assaf, H. (2018). Dismantling knowledge boundaries at NASA: The critical role of professional identity in open innovation. Administrative Science Quarterly, 63(4), 746–782. https://doi.org/10.1177/0001839217747876
Luo, X., Qin, M. S., Fang, Z., & Qu, Z. (2021). Artificial intelligence coaches for sales agents: Caveats and solutions. Journal of Marketing, 85(2), 14–32. https://doi.org/10.1177/0022242920956676
Mariz-Perez, R. M., Teijeiro-Alvarez, M. M., & García-Alvarez, M. T. (2012). The relevance of human capital as a driver for innovation. Cuadernos de Economía, 35(98), 68–76. https://doi.org/10.1016/S0210-0266(12)70024-9
Marrone, R., Cropley, D., & Medeiros, K. (2024). How does narrow AI impact human creativity? Creativity Research Journal, 1–11. https://doi.org/10.1080/10400419.2024.2378264
Meincke, L., Mollick, E. R., & Terwiesch, C. (2024). Prompting diverse ideas: Increasing AI idea variance. arXiv preprint arXiv:2402.01727.
https://doi.org/10.2139/ssrn.4708466
Miller, K. D., Zhao, M., & Calantone, R. J. (2006). Adding interpersonal learning and tacit knowledge to March's exploration-exploitation model. Academy of Management Journal, 49(4), 709–722. https://doi.org/10.5465/amj.2006.22083027
Nakavachara, V., Potipiti, T., & Chaiwat, T. (2024). Experimenting with generative AI: Does ChatGPT really increase everyone's productivity? arXiv preprint arXiv:2403.01770. https://doi.org/10.2139/ssrn.4746770
Nelson, A., Anthony, C., & Tripsas, M. (2023). "If I could turn back time": Occupational dynamics, technology trajectories, and the reemergence of the analog music synthesizer. Administrative Science Quarterly, 68(2), 551–599. https://doi.org/10.1177/00018392231163178
Noy, S., & Zhang, W. (2023). Experimental evidence on the productivity effects of generative artificial intelligence. Science, 381(6654), 187–192. https://doi.org/10.1126/science.adh2586
Park, J. S., O'Brien, J. C., Cai, C. J., Morris, M. R., Liang, P., & Bernstein, M. S. (2023). Generative agents: Interactive simulacra of human behavior. In Proceedings of the 36th Annual ACM Symposium on User Interface Software and Technology, 1–22. https://doi.org/10.1145/3586183.3606763
Paudel, R. (2024). The impact of automation and artificial intelligence (AI) on leadership and the workforce. Indonesian Journal of Banking and Financial Technology, 2(2), 109–124. https://doi.org/10.55927/fintech.v2i2.8904
Peng, S., Kalliamvakou, E., Cihon, P., & Demirer, M. (2023). The impact of AI on developer productivity: Evidence from GitHub Copilot. arXiv preprint arXiv:2302.06590. https://doi.org/10.48550/arXiv.2302.06590
Ployhart, R. E., Van Iddekinge, C. H., & MacKenzie, W. I. (2011). Acquiring and developing human capital in service contexts: The interconnectedness of human capital resources. Academy of Management Journal, 54(2), 353–368. https://doi.org/10.5465/amj.2011.60263097
Plucker, J. A., & Beghetto, R. A. (2004). Why creativity is domain general, why it looks domain specific, and why the distinction does not matter. In R. J. Sternberg, E. L. Grigorenko, & J. L. Singer (Eds.), Creativity: From potential to realization (pp. 153–167). American Psychological Association. https://doi.org/10.1037/10692-009
Puranam, P. (2021). Human–AI collaborative decision-making as an organization design problem. Journal of Organization Design, 10(2), 75–80. https://doi.org/10.1007/s41469-021-00095-2
Pyatt, G., & Becker, G. S. (1966). Human capital: A theoretical and empirical analysis, with special reference to education. The Economic Journal, 76(303), 635. https://doi.org/10.2307/2229541
Rafner, J., Beaty, R. E., Kaufman, J. C., Lubart, T., & Sherson, J. (2023). Creativity in the age of generative AI. Nature Human Behaviour, 7(11), 1836–1838. https://doi.org/10.1038/s41562-023-01751-1
Raisch, S., & Krakowski, S. (2021). Artificial intelligence and management: The automation–augmentation paradox. Academy of Management Review, 46(1), 192–210. https://doi.org/10.5465/amr.2018.0072
Rietzschel, E. F., Nijstad, B. A., & Stroebe, W. (2007). Relative accessibility of domain knowledge and creativity: The effects of knowledge activation on the quantity and originality of generated ideas. Journal of Experimental Social Psychology, 43(6), 933–946. https://doi.org/10.1016/j.jesp.2006.10.014
Ritchie, S. J., & Tucker-Drob, E. M. (2018). How much does education improve intelligence? A meta-analysis. Psychological Science, 29(8), 1358–1369.
https://doi.org/10.1177/0956797618774253
Rosen, S. (1976). A theory of life earnings. Journal of Political Economy, 84(4, Part 2), S45–S67. https://doi.org/10.1086/260532
Rouse, E. D. (2020). Where you end and I begin: Understanding intimate co-creation. Academy of Management Review, 45(1), 181–204. https://doi.org/10.5465/amr.2016.0388
Saucier, G. (1994). Mini-Markers: A brief version of Goldberg's unipolar Big-Five markers. Journal of Personality Assessment, 63(3), 506–516. https://doi.org/10.1207/s15327752jpa6303_8
Schad, J., & Bansal, P. (2018). Seeing the forest and the trees: How a systems perspective informs paradox research. Journal of Management Studies, 55(8), 1490–1506. https://doi.org/10.1111/joms.12398
Schillebeeckx, S. J. D., Lin, Y., & George, G. (2019). When do expert teams fail to create impactful inventions? Journal of Management Studies, 56(6), 1073–1104. https://doi.org/10.1111/joms.12447
Sefcek, J. A., Miller, G. F., & Figueredo, A. J. (2016). Development and validation of an 18-item medium form of the Ravens Advanced Progressive Matrices. SAGE Open, 6(2), 2158244016651915. https://doi.org/10.1177/2158244016651915
Teodoridis, F., Bikard, M., & Vakili, K. (2019). Creativity at the knowledge frontier: The impact of specialization in fast- and slow-paced domains. Administrative Science Quarterly, 64(4), 894–927. https://doi.org/10.1177/0001839218793384
Tu, C., Guo, J., Hatcher, R. C., & Kaufman, J. C. (2020). The relationship between emotional intelligence and domain-specific and domain-general creativity. The Journal of Creative Behavior, 54(2), 337–349. https://doi.org/10.1002/jocb.369
Wang, W., Gao, G. (Gordon), & Agarwal, R. (2023). Friend or foe? Teaming between artificial intelligence and workers with variation in experience. Management Science, 70(9), 5753–5775. https://doi.org/10.1287/mnsc.2021.00588
Ward, T. B. (2008). The role of domain knowledge in creative generation. Learning and Individual Differences, 18(4), 363–366. https://doi.org/10.1016/j.lindif.2007.07.002
Yam, K. C., Bigman, Y. E., Tang, P. M., Ilies, R., De Cremer, D., Soh, H., & Gray, K. (2021). Robots at work: People prefer—and forgive—service robots with perceived feelings. Journal of Applied Psychology, 106(10), 1557–1572. https://doi.org/10.1037/apl0000834
Yin, Y., Jia, N., & Wakslak, C. J. (2024). AI can help people feel heard, but an AI label diminishes this impact. Proceedings of the National Academy of Sciences, 121(14), e2319112121. https://doi.org/10.1073/pnas.2319112121
Zhou, E., & Lee, D. (2024). Generative artificial intelligence, human creativity, and art. PNAS Nexus, 3(3), pgae052. https://doi.org/10.1093/pnasnexus/pgae052
Zhu, F., & Zou, W. (2024). The role of generative AI in human creative processes: Experimental evidence. Available at SSRN 4676053. https://doi.org/10.2139/ssrn.4676053
Table 1
Means, SDs, and Correlations of the Studied Variables (Experiment 1)
Variables: 1 = Novelty; 2 = Usefulness; 3 = Overall Impression; 4 = Education; 5 = IQ; 6 = Specific Human Capital; 7 = Age; 8 = Gender; 9 = Openness; 10 = AI Use Frequency; 11 = Purpose for Experiment; 12 = Mind Perception; 13 = AI Identification Ratio.
Means: 4.879, 4.799, 4.912, 4.586, 15.556, 3.269, 26.272, 1.685, 3.854, 2.204, 0.056, 3.983, 0.369.
SDs: 1.036, 0.940, 0.918, 0.816, 2.626, 0.643, 5.622, 0.466, 0.607, 1.458, 0.230, 0.827, 0.134.
Correlations (lower triangle; each line lists variables k+1 through 13 with variable k):
(1): 0.744***, 0.870***, -0.086, -0.093, 0.008, -0.117, -0.048, 0.053, -0.104, 0.180*, -0.176*, -0.262***
(2): 0.881***, -0.086, -0.049, 0.040, -0.124, -0.094, 0.072, -0.114, 0.219**, -0.096, -0.404***
(3): -0.155*, -0.116, 0.016, -0.186*, -0.041, 0.042, -0.132, 0.189*, -0.125, -0.441***
(4): 0.206**, 0.065, 0.305***, 0.031, 0.098, 0.191*, 0.024, -0.019, -0.004
(5): 0.031, -0.144, -0.100, 0.008, 0.329***, -0.124, -0.003, -0.049
(6): 0.024, 0.097, 0.364***, -0.016, -0.207**, -0.010, -0.065
(7): -0.022, 0.019, -0.135, -0.021, -0.036, 0.144
(8): -0.032, -0.042, -0.242**, 0.010, -0.044
(9): 0.041, -0.103, 0.050, -0.138
(10): 0.059, -0.020, -0.061
(11): -0.011, -0.122
(12): 0.101
Notes. Female = 2, Male = 1. All p values in this table are two-tailed. *** p<0.001, ** p<0.01, * p<0.05.

Table 2
Regression Results (Experiment 1)
Coefficients (robust standard errors in parentheses); covariate order within each column after the AI Use and interaction terms: Education; IQ; Specific Human Capital; Age; Gender; Openness; AI Use Frequency; Purpose for Experiment; Mind Perception; AI Identification Ratio; Constant.
Model 1 (AI Use):
Novelty: AI Use 0.403* (0.19); -0.040 (0.11); -0.019 (0.04); -0.024 (0.13); -0.016 (0.01); -0.039 (0.17); 0.102 (0.12); -0.080 (0.06); 0.703+ (0.38); -0.211* (0.08); -2.093** (0.63); 6.996*** (1.12)
Usefulness: AI Use 0.352* (0.16); -0.048 (0.09); 0.000 (0.03); 0.033 (0.11); -0.010 (0.01); -0.131 (0.15); 0.065 (0.11); -0.092+ (0.05); 0.735* (0.34); -0.080 (0.07); -2.966*** (0.57); 6.482*** (0.98)
Overall Impression: AI Use 0.370* (0.15); -0.087 (0.08); -0.025 (0.03); -0.005 (0.11); -0.019 (0.01); -0.056 (0.13); 0.029 (0.10); -0.085+ (0.05); 0.549+ (0.32); -0.109 (0.07); -3.185*** (0.56); 7.704*** (0.94)
Model 2a (AI Use × Education; Education centered):
Novelty: AI Use 0.382* (0.19); AI Use × Education 0.480* (0.20); -0.367* (0.17); -0.020 (0.04); -0.010 (0.14); -0.018 (0.01); -0.015 (0.17); 0.072 (0.12); -0.093 (0.06); 0.670+ (0.39); -0.202* (0.09); -2.131*** (0.61); 6.945*** (1.10)
Usefulness: AI Use 0.339* (0.16); AI Use × Education 0.295 (0.19); -0.249 (0.17); -0.000 (0.03); 0.042 (0.11); -0.012 (0.01); -0.116 (0.15); 0.047 (0.11); -0.100+ (0.05); 0.714* (0.34); -0.074 (0.07); -2.989*** (0.57); 6.344*** (0.99)
Overall Impression: AI Use 0.357* (0.15); AI Use × Education 0.309+ (0.17); -0.298* (0.14); -0.025 (0.03); 0.004 (0.11); -0.020+ (0.01); -0.041 (0.13); 0.010 (0.10); -0.094* (0.04); 0.528+ (0.32); -0.103 (0.07); -3.209*** (0.55); 7.388*** (0.93)
Model 2b (AI Use × IQ; IQ centered):
Novelty: AI Use 0.361* (0.18); AI Use × IQ 0.193** (0.07); -0.036 (0.10); -0.164* (0.07); -0.008 (0.13); -0.018 (0.01); -0.057 (0.17); 0.056 (0.12); -0.066 (0.06); 0.457 (0.35); -0.183* (0.09); -2.078*** (0.59); 6.810*** (0.91)
Usefulness: AI Use 0.329* (0.16); AI Use × IQ 0.106+ (0.06); -0.046 (0.09); -0.079 (0.05); 0.042 (0.11); -0.012 (0.01); -0.141 (0.15); 0.040 (0.11); -0.085 (0.05); 0.600+ (0.32); -0.064 (0.07); -2.957*** (0.56); 6.549*** (0.81)
Overall Impression: AI Use 0.340* (0.15); AI Use × IQ 0.140* (0.07); -0.084 (0.08); -0.130* (0.06); 0.007 (0.11); -0.021+ (0.01); -0.069 (0.13); -0.005 (0.10); -0.075 (0.05); 0.371 (0.29); -0.089 (0.07); -3.174*** (0.52); 7.403*** (0.75)
Model 3 (AI Use × Specific Human Capital; Specific Human Capital centered):
Novelty: AI Use 0.386* (0.19); AI Use × Specific Human Capital -0.341 (0.25); -0.043 (0.11); -0.021 (0.04); 0.214 (0.22); -0.018 (0.01); -0.001 (0.17); 0.118 (0.12); -0.073 (0.06); 0.753+ (0.41); -0.222* (0.09); -2.048** (0.64); 6.919*** (1.10)
Usefulness: AI Use 0.321* (0.16); AI Use × Specific Human Capital -0.600** (0.20); -0.053 (0.08); -0.003 (0.03); 0.451** (0.17); -0.014 (0.01); -0.064 (0.14); 0.094 (0.11); -0.080 (0.05); 0.822* (0.32); -0.100 (0.07); -2.886*** (0.57); 6.593*** (0.90)
Overall Impression: AI Use 0.349* (0.15); AI Use × Specific Human Capital -0.404* (0.20); -0.091 (0.08); -0.027 (0.03); 0.276 (0.18); -0.021+ (0.01); -0.011 (0.13); 0.048 (0.10); -0.077 (0.05); 0.608+ (0.33); -0.123+ (0.07); -3.131*** (0.56); 7.690*** (0.88)
Observations = 162 for all models; R²: 0.178, 0.279, 0.223, 0.295, 0.330, 0.276, 0.187, 0.315, 0.263, 0.207, 0.345, 0.330.
Notes. Robust standard errors in parentheses. All p values in this table are two-tailed. Education was centered in Model 2a; IQ was centered in Model 2b; Specific Human Capital was centered in Model 3. *** p<0.001, ** p<0.01, * p<0.05, + p<0.1.

Table 3
Means, SDs, and Correlations of the Studied Variables (Experiment 2)
Variables: 1 = Novelty_L; 2 = Usefulness_L; 3 = Overall Impression_L; 4 = Novelty_S; 5 = Usefulness_S; 6 = Overall Impression_S; 7 = Education; 8 = IQ; 9 = Specific Human Capital; 10 = AI Identification Ratio_L; 11 = AI Identification Ratio_S; 12 = Age; 13 = Gender; 14 = Openness; 15 = AI Use Frequency; 16 = Purpose for Experiment; 17 = Mind Perception.
Means: 5.012, 5.374, 5.366, 4.836, 5.143, 4.944, 4.307, 14.465, 1.422, 0.626, 0.569, 24.617, 1.623, 3.961, 2.900, 0.049, 3.856.
SDs: 1.015, 0.950, 1.026, 0.744, 0.636, 0.681, 0.711, 3.038, 0.630, 0.231, 0.114, 6.267, 0.485, 0.571, 1.651, 0.215, 0.861.
Correlations (lower triangle; each line lists variables k+1 through 17 with variable k):
(1): 0.808***, 0.882***, 0.618***, 0.469***, 0.444***, 0.119*, 0.130*, 0.063, -0.231*, -0.134*, 0.111*, 0.057, -0.063, -0.040, 0.018, -0.091
(2): 0.846***, 0.498***, 0.479***, 0.417***, 0.151**, 0.096, 0.074, -0.273***, -0.223***, 0.125*, 0.085, -0.048, -0.004, -0.021, -0.090
(3): 0.633***, 0.526***, 0.513***, 0.134*, 0.101, 0.098, -0.188***, -0.106, 0.085, 0.079, -0.075, -0.014, -0.011, -0.096
(4): 0.792***, 0.797***, 0.101, 0.072, 0.078, -0.057, -0.231***, 0.109*, 0.033, -0.068, -0.001, 0.082, -0.034
(5): 0.895***, 0.090, 0.059, 0.074, -0.026, -0.337***, 0.047, 0.059, -0.040, -0.027, 0.092, -0.091
(6): 0.052, 0.051, 0.092, -0.009, -0.299***, 0.008, 0.021, -0.050, -0.023, 0.101, -0.084
(7): 0.221***, -0.100, -0.055, -0.090, 0.097, 0.080, 0.067, 0.213***, 0.002, -0.083
(8): -0.041, -0.011, -0.075, 0.112*, 0.003, -0.079, 0.063, 0.045, -0.050
(9): -0.028, -0.039, 0.219***, -0.245***, 0.078, -0.117*, -0.107, 0.115*
(10): 0.359***, -0.069, -0.038, -0.056, 0.013, -0.005, 0.001
(11): -0.080, -0.120*, -0.022, 0.048, -0.049, 0.164**
(12): -0.227***, 0.076, -0.190***, -0.029, 0.060
(13): 0.022, 0.097, 0.001, -0.116*
(14): 0.132*, -0.229***, 0.192***
(15): -0.038, 0.042
(16): -0.054
Notes. Female = 2, Male = 1. All p values in this table are two-tailed. "_L" refers to rating scores on lyrics only; "_S" refers to rating scores made after listening to the complete songs. *** p<0.001, ** p<0.01, * p<0.05.

Table 4
Regression Results of AI Use on Creativity Measured by Lyrics (Experiment 2)
Coefficients (robust standard errors in parentheses); covariate order within each column: AI Use (where included); Education; IQ; Specific Human Capital; Age; Gender; Openness; AI Use frequency; Purpose for Experiment; Mind Perception; AI Identification Ratio_L; Constant. All models include fixed effects for the assigned demo.
Novelty_L, with AI Use: 0.087 (0.11); 0.156+ (0.08); 0.030 (0.02); 0.109 (0.08); 0.013 (0.01); 0.117 (0.11); -0.085 (0.09); -0.028 (0.04); 0.080 (0.24); -0.078 (0.06); -1.404*** (0.22); 4.906*** (0.66)
Usefulness_L, with AI Use: 0.079 (0.11); 0.183* (0.08); 0.017 (0.02); 0.110 (0.08); 0.016 (0.01); 0.186+ (0.11); -0.111 (0.09); -0.002 (0.04); -0.111 (0.23); -0.076 (0.06); -1.398*** (0.21); 5.350*** (0.62)
Overall Impression_L, with AI Use: 0.097 (0.12); 0.198* (0.08); 0.015 (0.02); 0.184* (0.08); 0.009 (0.01); 0.186 (0.12); -0.148 (0.09); -0.012 (0.03); -0.013 (0.25); -0.094 (0.06); -1.259*** (0.23); 5.481*** (0.65)
Overall Impression_L, controls only: 0.201* (0.08); 0.015 (0.02); 0.190* (0.08); 0.010 (0.01); 0.193 (0.12); -0.158+ (0.09); -0.012 (0.03); -0.017 (0.25); -0.094 (0.06); -1.233*** (0.23); 5.531*** (0.65)
Usefulness_L, controls only: 0.186* (0.08); 0.017 (0.02); 0.116 (0.08); 0.016+ (0.01); 0.192+ (0.11); -0.119 (0.09); -0.001 (0.04); -0.114 (0.23); -0.076 (0.06); -1.378*** (0.21); 5.391*** (0.61)
Novelty_L, controls only: 0.159* (0.08); 0.029 (0.02); 0.115 (0.08); 0.013 (0.01); 0.123 (0.11); -0.093 (0.10); -0.028 (0.04); 0.076 (0.24); -0.078 (0.06); -1.382*** (0.22); 4.951*** (0.66)
Observations = 329 per model; R²: 0.248, 0.245, 0.246, 0.215, 0.216, 0.249.
Notes. Robust standard errors in parentheses. All p values in this table are two-tailed. "_L" refers to rating scores on lyrics only. *** p<0.001, ** p<0.01, * p<0.05, + p<0.1.

Table 5
Regression Results of AI Use on Creativity Measured by Songs (Experiment 2)
Coefficients (robust standard errors in parentheses); covariate order as in Table 4, with AI Identification Ratio_S in place of AI Identification Ratio_L. All models include fixed effects for the assigned demo.
Novelty_S, with AI Use: 0.133+ (0.07); 0.138** (0.05); 0.005 (0.01); 0.051 (0.05); 0.011* (0.01); 0.034 (0.07); -0.072 (0.06); -0.007 (0.02); 0.189 (0.16); 0.041 (0.05); -1.620*** (0.33); 4.986*** (0.48)
Usefulness_S, with AI Use: 0.108+ (0.06); 0.111** (0.04); 0.006 (0.01); 0.039 (0.04); 0.003 (0.01); 0.052 (0.06); -0.031 (0.05); -0.013 (0.02); 0.185+ (0.11); -0.006 (0.03); -1.620*** (0.30); 5.840*** (0.37)
Overall Impression_S, with AI Use: 0.087 (0.06); 0.090* (0.04); 0.007 (0.01); 0.055 (0.04); -0.002 (0.00); 0.031 (0.06); -0.052 (0.05); -0.016 (0.02); 0.231* (0.11); -0.025 (0.03); -1.364*** (0.30); 5.999*** (0.36)
Overall Impression_S, controls only: 0.093* (0.04); 0.007 (0.01); 0.061 (0.04); -0.002 (0.00); 0.039 (0.06); -0.060 (0.05); -0.015 (0.02); 0.229* (0.11); -0.027 (0.03); -1.300*** (0.31); 6.020*** (0.35)
Usefulness_S, controls only: 0.115** (0.04); 0.006 (0.01); 0.047 (0.04); 0.003 (0.01); 0.062 (0.06); -0.041 (0.05); -0.012 (0.02); 0.182+ (0.11); -0.009 (0.03); -1.540*** (0.30); 5.866*** (0.37)
Novelty_S, controls only: 0.143** (0.05); 0.005 (0.01); 0.061 (0.05); 0.012* (0.01); 0.046 (0.07); -0.085 (0.06); -0.006 (0.02); 0.186 (0.15); 0.038 (0.05); -1.522*** (0.34); 5.018*** (0.48)
Notes. Robust standard errors in parentheses. All p values in this table are two-tailed. "_S" refers to rating scores made after listening to the complete songs.
Observations = 329 per model; R²: 0.440, 0.454, 0.508, 0.448, 0.446, 0.512. *** p<0.001, ** p<0.01, * p<0.05, + p<0.1.

Table 6
Regression Results of Interaction of AI Use and Education on Creativity (Experiment 2)
Coefficients (robust standard errors in parentheses); row order within each column: AI Use; Education; AI Use × Education; IQ; Specific Human Capital; Age; Gender; Openness; AI Use frequency; Purpose for Experiment; Mind Perception; AI Identification Ratio (_L or _S, matching the outcome); Constant. All models include fixed effects for the assigned demo.
Novelty_L: 0.100 (0.11); -0.106 (0.12); 0.407** (0.15); 0.028 (0.02); 0.094 (0.09); 0.011 (0.01); 0.116 (0.11); -0.086 (0.09); -0.027 (0.04); 0.083 (0.24); -0.090 (0.07); -1.434*** (0.21); 5.692*** (0.63)
Usefulness_L: 0.090 (0.11); -0.036 (0.13); 0.341* (0.15); 0.015 (0.02); 0.098 (0.08); 0.015 (0.01); 0.185+ (0.11); -0.112 (0.09); -0.001 (0.04); -0.108 (0.23); -0.086 (0.06); -1.423*** (0.21); 6.234*** (0.62)
Overall Impression_L: 0.113 (0.12); -0.117 (0.13); 0.489** (0.16); 0.013 (0.02); 0.165* (0.08); 0.008 (0.01); 0.185 (0.12); -0.150 (0.09); -0.011 (0.03); -0.008 (0.24); -0.108+ (0.06); -1.294*** (0.22); 6.470*** (0.66)
Novelty_S: 0.137+ (0.07); 0.043 (0.10); 0.146 (0.11); 0.004 (0.01); 0.046 (0.05); 0.011* (0.01); 0.034 (0.07); -0.073 (0.06); -0.006 (0.02); 0.190 (0.16); 0.037 (0.05); -1.623*** (0.33); 5.612*** (0.44)
Usefulness_S: 0.113+ (0.06); -0.012 (0.09); 0.192+ (0.10); 0.005 (0.01); 0.032 (0.04); 0.002 (0.01); 0.052 (0.06); -0.031 (0.05); -0.013 (0.02); 0.186+ (0.11); -0.012 (0.03); -1.624*** (0.29); 6.364*** (0.36)
Overall Impression_S: 0.092 (0.06); -0.013 (0.09); 0.159+ (0.10); 0.007 (0.01); 0.049 (0.04); -0.002 (0.00); 0.031 (0.06); -0.052 (0.05); -0.015 (0.02); 0.232* (0.11); -0.030 (0.03); -1.368*** (0.29); 6.422*** (0.35)
Notes. Robust standard errors in parentheses. All p values in this table are two-tailed. "_L" refers to rating scores on lyrics only; "_S" refers to rating scores made after listening to the complete songs.
Observations = 329 per model; R²: 0.267, 0.464, 0.271, 0.518, 0.230, 0.450. *** p<0.001, ** p<0.01, * p<0.05, + p<0.1.

Table 7
Regression Results of Interaction of AI Use and IQ on Creativity (Experiment 2)
Coefficients (robust standard errors in parentheses); row order within each column: AI Use; IQ; AI Use × IQ; Education; Specific Human Capital; Age; Gender; Openness; AI Use frequency; Purpose for Experiment; Mind Perception; AI Identification Ratio (_L or _S, matching the outcome); Constant. All models include fixed effects for the assigned demo.
Novelty_L: 0.083 (0.11); -0.028 (0.04); 0.076+ (0.04); 0.160* (0.08); 0.109 (0.08); 0.011 (0.01); 0.116 (0.11); -0.106 (0.09); -0.032 (0.04); 0.112 (0.25); -0.096 (0.07); -1.405*** (0.22); 5.549*** (0.64)
Usefulness_L: 0.077 (0.11); -0.015 (0.04); 0.042 (0.04); 0.186* (0.08); 0.111 (0.08); 0.015 (0.01); 0.185+ (0.11); -0.122 (0.09); -0.004 (0.04); -0.093 (0.23); -0.086 (0.06); -1.398*** (0.21); 5.712*** (0.60)
Overall Impression_L: 0.095 (0.12); -0.022 (0.04); 0.050 (0.04); 0.201* (0.08); 0.184* (0.08); 0.008 (0.01); 0.185 (0.12); -0.162+ (0.09); -0.015 (0.03); 0.008 (0.25); -0.106+ (0.06); -1.259*** (0.22); 5.843*** (0.64)
Novelty_S: 0.132+ (0.07); -0.007 (0.02); 0.016 (0.02); 0.138** (0.05); 0.051 (0.05); 0.011* (0.01); 0.033 (0.07); -0.077 (0.06); -0.008 (0.02); 0.196 (0.15); 0.037 (0.05); -1.625*** (0.33); 5.110*** (0.48)
Usefulness_S: 0.108+ (0.06); 0.005 (0.02); 0.002 (0.02); 0.111** (0.04); 0.039 (0.04); 0.003 (0.01); 0.052 (0.06); -0.031 (0.05); -0.013 (0.02); 0.186+ (0.11); -0.007 (0.03); -1.621*** (0.30); 5.935*** (0.36)
Overall Impression_S: 0.088 (0.06); 0.016 (0.02); -0.011 (0.02); 0.089* (0.04); 0.055 (0.04); -0.001 (0.00); 0.032 (0.06); -0.049 (0.05); -0.015 (0.02); 0.227* (0.11); -0.023 (0.03); -1.361*** (0.30); 6.073*** (0.37)
Notes. Robust standard errors in parentheses. All p values in this table are two-tailed. "_L" refers to rating scores on lyrics only; "_S" refers to rating scores made after listening to the complete songs.
Table 8  Regression Results of Interaction of AI Use and Specific Human Capital on Creativity (Experiment 2)

VARIABLES | (1) Novelty_L | (2) Usefulness_L | (3) Overall Impression_L | (4) Novelty_S | (5) Usefulness_S | (6) Overall Impression_S
AI Use | 0.074 (0.11) | 0.068 (0.11) | 0.088 (0.12) | 0.124+ (0.07) | 0.100+ (0.06) | 0.079 (0.06)
Specific Human Capital | 0.377●● (0.14) | 0.334● (0.15) | 0.378●● (0.14) | 0.204● (0.09) | 0.174● (0.07) | 0.198●● (0.07)
AI Use × Specific Human Capital | -0.391● (0.16) | -0.327+ (0.17) | -0.283+ (0.17) | -0.221● (0.10) | -0.195● (0.08) | -0.209● (0.08)
Education | 0.168● (0.08) | 0.193● (0.08) | 0.207● (0.08) | 0.144●● (0.05) | 0.117●● (0.04) | 0.096● (0.04)
IQ | 0.031+ (0.02) | 0.018 (0.02) | 0.016 (0.02) | 0.006 (0.01) | 0.007 (0.01) | 0.008 (0.01)
Age | 0.013 (0.01) | 0.016+ (0.01) | 0.010 (0.01) | 0.012● (0.00) | 0.003 (0.01) | -0.002 (0.00)
Gender | 0.124 (0.11) | 0.192+ (0.11) | 0.191 (0.12) | 0.039 (0.07) | 0.057 (0.06) | 0.036 (0.06)
Openness | -0.059 (0.10) | -0.090 (0.09) | -0.130 (0.09) | -0.058 (0.06) | -0.018 (0.05) | -0.038 (0.05)
AI Use frequency | -0.022 (0.04) | 0.004 (0.03) | -0.008 (0.03) | -0.003 (0.02) | -0.010 (0.02) | -0.012 (0.02)
Purpose for Experiment | 0.085 (0.24) | -0.106 (0.23) | -0.009 (0.24) | 0.193 (0.15) | 0.188+ (0.11) | 0.235● (0.11)
Mind Perception | -0.085 (0.07) | -0.082 (0.06) | -0.099 (0.06) | 0.036 (0.05) | -0.011 (0.03) | -0.030 (0.03)
AI Identification Ratio_L | -1.391●●● (0.22) | -1.387●●● (0.21) | -1.249●●● (0.23) | – | – | –
AI Identification Ratio_S | – | – | – | -1.580●●● (0.33) | -1.585●●● (0.30) | -1.327●●● (0.30)
Constant | 4.907●●● (0.63) | 5.379●●● (0.60) | 5.633●●● (0.62) | 4.953●●● (0.47) | 5.802●●● (0.36) | 5.977●●● (0.34)
i.demo | Y | Y | Y | Y | Y | Y
Observations | 329 | 329 | 329 | 329 | 329 | 329
R-squared | 0.261 | 0.461 | 0.226 | 0.453 | 0.253 | 0.519

Notes. Robust standard errors in parentheses. All p values in this table are two-tailed. "Y" means the model includes the fixed effect of the assigned demo. "_L" refers to rating scores based on lyrics only; "_S" refers to rating scores given after hearing the songs.
●●● p<0.001, ●● p<0.01, ● p<0.05, + p<0.1

Figure 1  Design of Experiment 1 (Note: Figures and icons are generated by AI.)
Figure 2  Design of Experiment 2 (Note: Figures and icons are generated by AI. "PL" stands for Professional Lyricists, and "LN" stands for Lyric Novices.)
Figure 3  AI Tool Used in Two Experiments
Figure 4  Interaction of AI Use and Education Predicting Novelty Score
Figure 5  Interaction of AI Use and IQ Predicting Novelty Score
Figure 6  Interaction of AI Use and Specific Human Capital Predicting Novelty Score
Figure 7  Interaction of AI Use and Education Predicting Novelty Score by Song Rating
Figure 8  Interaction of AI Use and Specific Human Capital Predicting Novelty Score by Song Rating
ai_researcher
3
Is_Your_LLM_Secretly_a_World_Model_of_the_Internet_Model-Based_Planning_for_Web_Agents.pdf
Is Your LLM Secretly a World Model of the Internet?
MODEL-BASED PLANNING FOR WEB AGENTS

Yu Gu1,†, Boyuan Zheng1,†, Boyu Gou1, Kai Zhang1, Cheng Chang2, Sanjari Srivastava2, Yanan Xie2, Peng Qi2, Huan Sun1, Yu Su1
1The Ohio State University, 2Orby AI
{gu.826, zheng.2372, sun.397, su.809}@osu.edu

†Equal contribution.
1Github: OSU-NLP-Group/WebDreamer

ABSTRACT

Language agents have demonstrated promising capabilities in automating web-based tasks, though their current reactive approaches still largely underperform humans. While incorporating advanced planning algorithms, particularly tree search methods, could enhance these agents' performance, implementing tree search directly on live websites poses significant safety risks and practical constraints due to irreversible actions such as confirming a purchase. In this paper, we introduce a novel paradigm that augments language agents with model-based planning, pioneering the innovative use of large language models (LLMs) as world models in complex web environments. Our method, WEBDREAMER, builds on the key insight that LLMs inherently encode comprehensive knowledge about website structures and functionalities. Specifically, WEBDREAMER uses LLMs to simulate outcomes for each candidate action (e.g., "what would happen if I click this button?") using natural language descriptions, and then evaluates these imagined outcomes to determine the optimal action at each step. Empirical results on two representative web agent benchmarks with online interaction—VisualWebArena and Mind2Web-live—demonstrate that WEBDREAMER achieves substantial improvements over reactive baselines. By establishing the viability of LLMs as world models in web environments, this work lays the groundwork for a paradigm shift in automated web interaction. More broadly, our findings open exciting new avenues for future research into 1) optimizing LLMs specifically for world modeling in complex, dynamic environments, and 2) model-based speculative planning for language agents.

1 INTRODUCTION

Planning (Mattar & Lengyel, 2022)—the strategic search for optimal action sequences to achieve goals from initial states—has been fundamental to artificial intelligence since its inception, driving remarkable breakthroughs including superhuman performance in games like Go (Feng et al., 2023; Silver et al., 2016). Recent advances have demonstrated that integrating large language models (LLMs) with advanced planning algorithms (e.g., Yao et al. (2023a); Hao et al. (2023); Gu et al. (2023); Wang et al. (2024); Feng et al. (2023); Brown et al. (2024)) substantially enhances their performance on complex reasoning tasks beyond chain-of-thought (CoT) (Wei et al., 2022) approaches, with OpenAI's o1 (OpenAI, 2024b) serving as a prominent example. These methods effectively scale inference-time compute and enable LLMs to explore multiple potential solution paths, which ultimately lead to more accurate outcomes.

Alongside these developments, research into generalist web agents capable of planning and executing a sequence of actions to complete complex tasks across diverse websites has garnered significant interest (Deng et al., 2023; Zhou et al., 2023; Zheng et al., 2024; Koh et al., 2024a), partly due to the web's potential as a complex yet realistic environment for driving agent research and development. However, applying existing planning algorithms to the online web environment presents formidable challenges.
Chief among these challenges are the inherent safety risks associated with live website interactions (Liao et al., 2024), such as inadvertently submitting forms with sensitive information or triggering unintended transactions. These risks become even more pronounced when employing tree search algorithms (Koh et al., 2024b; Putta et al., 2024), as their exhaustive exploration can expose the agent to hidden vulnerabilities and unforeseen scenarios. Additionally, many online actions, such as confirming a purchase or sending an email, are irreversible, which further makes backtracking—a crucial component of planning algorithms—highly challenging, if not infeasible.

Figure 1: Schematic illustration of different strategies for web agents formulated as a search problem. Each node represents a webpage. (a) Reactive: The agent selects locally optimal actions without forward planning, often leading to suboptimal outcomes. (b) Tree search with real interactions: The agent explores multiple paths through active website navigation and permits backtracking (indicated by dashed arrows). However, in real-world websites, backtracking is often infeasible due to the prevalence of irreversible actions. (c) Model-based planning: The agent simulates potential outcomes (illustrated by cloud-bordered nodes) to determine optimal actions prior to real-world execution, thus minimizing actual website interactions while maintaining effectiveness. For visual clarity, only one-step simulated outcomes are depicted. Faded nodes indicate unexplored webpages, while green checkmarks and red crosses denote successful and unsuccessful outcomes, respectively.

One promising solution to address these challenges is model-based planning (Pascanu et al., 2017; Moerland et al., 2023), which equips agents with the ability to simulate interactions using a world model—a computational representation of environment dynamics. By simulating action sequences within this virtual environment, agents can explore potential outcomes safely, without directly interacting with live websites. This approach not only reduces safety risks but also preserves the agent's capacity to explore and plan. Yet, the true challenge lies in creating a versatile world model that can faithfully capture the landscape of the ever-evolving Internet. While previous research demonstrates that LLMs can function as effective world models in simplistic settings like blocksworld (Hao et al., 2023) and gridworld (Kim et al., 2024), a bolder question emerges: Can LLMs rise to the challenge of modeling the vast, dynamic Internet? With their extensive pre-trained knowledge—spanning web structures, protocols, and user behaviors—LLMs are uniquely positioned to take on this task. Building on these insights, we present WEBDREAMER, a pioneering framework that leverages LLMs as world models to navigate the web (Figure 1). At the core of WEBDREAMER lies the concept of "dreaming": before committing to any action, the agent uses the LLM to imagine the outcome of each possible step, expressed as natural language descriptions of how the state would change. These simulated outcomes are then evaluated based on their progress toward achieving the task objective. The most promising action is executed, and the process is repeated iteratively until the LLM determines that the goal has been reached (Section 4).
To validate the effectiveness of WEBDREAMER, we evaluate it on two representative benchmarks that support online interaction: VisualWebArena (Koh et al., 2024a) and Mind2Web-live (Pan et al., 2024b). WEBDREAMER achieves substantial performance gains over reactive agents on both benchmarks, underscoring its practical value despite its conceptual simplicity. While tree search with actual interactions shows slightly superior performance on VisualWebArena, which features a controlled environment of three locally hosted websites, this method is rarely feasible in practical applications, given its inherent limitations regarding safety risks and the potential for irreversible actions in real-world websites. In contrast, our simulation-based approach offers a more flexible solution, balancing performance gains with practical applicability in real-world web navigation tasks.

In summary, our work introduces a new direction for AI planning in complex, real-world environments like the web using world models simulated by LLMs. With WEBDREAMER, we tackle the dual challenges of safety and complexity in web navigation. Our results validate the potential of LLM-based world models for planning in complex web environments and highlight new opportunities for optimizing LLMs as world models and improving model-based planning algorithms for language agents.

2 RELATED WORK

2.1 WEB AGENTS

Driven by the goal of automating tedious and repetitive web-based tasks, web agents powered by (multimodal) language models have made substantial progress in various aspects. Benchmarks have evolved from MiniWoB++ (Shi et al., 2017; Liu et al., 2018) to WebShop (Yao et al., 2022) and WebArena (Zhou et al., 2023), offering increasingly realistic website simulations. VisualWebArena (Koh et al., 2024a) and Mind2Web (Deng et al., 2023) challenge models' ability to handle visual information and generalize across diverse tasks, websites, and domains.

Reactive Agents. Reactive agents make decisions based on immediate observations from the environment without performing any search or simulation of future actions, typically implemented with the ReAct framework (Yao et al., 2023b). Much progress has been made to enhance the fundamental capabilities of reactive web agents through both prompting closed-source models (Zheng et al., 2024; He et al., 2024; Deng et al., 2023) and training models using HTML and webpage screenshots (Lee et al., 2023; Gur et al., 2023; Furuta et al., 2023; Hong et al., 2024; Baechler et al., 2024). Additionally, models' abilities to ground web agent actions to elements have been improved through training on action-coordinate pair data (You et al., 2024; Cheng et al., 2024). Further advancements have been achieved by training on web agent trajectories, utilizing both human-annotated trajectories (Shaw et al., 2023; Hong et al., 2024; Deng et al., 2023; Lai et al., 2024) and synthesized exploration trajectories (Furuta et al., 2023; Song et al., 2024; Patel et al., 2024). However, reactive agents inherently suffer from short-sightedness, which can often lead to suboptimal performance in multi-step decision making.

Agents with tree search. Pan et al. (2024a) introduce a reward model based on GPT-4V, designed to provide both step-wise and trajectory-level rewards to guide inference-time search.
Search Agent (Koh et al., 2024b) investigates inference-time search algorithms in interactive web environments, enabling explicit exploration and multi-step planning. In contrast to Search Agent, which employs a variant of best-first tree search, AgentQ (Putta et al., 2024) and WebPilot (Zhang et al., 2024) utilize Monte Carlo Tree Search (MCTS) as their primary search strategy.

While tree search on websites has demonstrated significant improvements, it still presents several limitations. First, the search process substantially increases inference time due to the need for extensive exploration, which is difficult to parallelize given its inherently sequential nature. Second, backtracking to previous states is essential for search-based methods but impractical on real-world websites. Koh et al. (2024b) addressed this in sandbox environments by storing action sequences to resume states after resetting the environment. However, resetting the environment or undoing action sequences is not feasible on live websites. Finally, the extra explorations introduced by search algorithms substantially amplify the risk of destructive actions that may irreversibly alter the website's state, potentially causing harmful side effects.

2.2 WORLD MODELS

World models, a cornerstone of model-based reinforcement learning (Moerland et al., 2023) since the introduction of Dyna by Sutton (1991), are typically trained on observed state transitions to predict future states and rewards. These world models enable efficient training through simulated experiences, reducing environmental interactions and improving sample efficiency (Ha & Schmidhuber, 2018). Beyond their role in training, researchers have explored the use of world models to facilitate planning (Pascanu et al., 2017; Schrittwieser et al., 2020). Fundamentally, world models in reinforcement learning often involve task-specific training, with a primary focus on enhancing data efficiency in the agent learning process.

In contrast to traditional world models in reinforcement learning, LLMs employed as world models primarily focus on facilitating decision-making in planning rather than training. This distinction leads LLM-based models to prioritize key task abstractions over the high-fidelity simulations typically required in reinforcement learning. Recent research has demonstrated the potential of LLMs as world models for simple environments, leveraging their encoded broad world knowledge (Hao et al., 2023; Kim et al., 2024). Our study aims to advance this field by investigating the capabilities of LLM-based world models in more complex real-world environments, specifically diverse websites. A concurrent work (Chae et al., 2024) also explores augmenting web agents with LLM-simulated action outcomes; however, their focus is on data collection to train an open-weights LLM, while ours centers on understanding the potential of this new paradigm using advanced LLMs such as GPT-4o (OpenAI, 2024a).

3 PRELIMINARY

3.1 TASK FORMULATION

Web agents tasked with automating activities in live websites confront vast and complex search spaces.
Formally, each task with a task instruction I can be framed as a partially observable Markov decision process (POMDP): (S, A, O, T, R, Ω), where S represents the set of all possible states of the environment, A represents all possible actions the agent can take, O represents the set of possible observations from the environment, T : S × A → S represents the state transition function, R is a binary reward denoting whether the task specified in I has been completed or not, and Ω : S → O is a deterministic function that projects a state to an observation. The goal of the task is to execute a sequence of actions that achieves a reward of 1.

Table 1: Action space for web navigation defined in VisualWebArena (Koh et al., 2024a).

Action Type a | Description
click [elem] | Click on elem.
hover [elem] | Hover over elem.
type [elem] [text] | Type text into elem.
press [key_comb] | Press a key combo.
goto [url] | Go to url.
go back | Click back.
go forward | Click forward.
new tab | Open a new tab.
tab focus [index] | Focus on the i-th tab.
tab close | Close current tab.
scroll [up/down] | Scroll up or down.
stop [answer] | End with an output.

In practical scenarios, the environment is partially observable due to the complexity of web environments. The true state encompasses server-side variables, dynamically loaded content, and hidden UI elements, and is subject to network conditions and browser limitations. Consequently, the agent can only perceive the environment through a limited viewport (i.e., an observation o ∈ O), which represents an incomplete projection of the true system state. The observation space typically manifests as screenshots or text-based accessibility trees, reflecting common implementation practices. This constrained observability naturally shapes the action space A, which comprises operations executable on interactable elements within o, such as element clicks, text input, and URL navigation (Table 1).

3.2 PLANNING THROUGH SIMULATION

Planning an optimal action sequence through tree search using real interactions governed by T is costly and risks irreversible actions. Model-based planning addresses these challenges by using a computational representation of the environment to simulate interaction outcomes. Instead of executing actions in the real environment, the agent leverages an approximate model to predict state transitions, enabling efficient exploration and evaluation of action sequences without real-world interactions. While offline planning can compute entire action sequences before execution in deterministic environments like BlocksWorld (Hao et al., 2023), web environments are too complex for such long-term prediction. This necessitates online planning approaches that interleave planning and execution, computing one action at a time.

One prominent approach is Model Predictive Control (MPC; Garcia et al. (1989)), which iteratively simulates future trajectories to select actions. At each state s, MPC simulates trajectories over a finite horizon H for each possible action a ∈ A using a simulator function sim(s, a) and evaluates them using a scoring function score(τ). The action leading to the most promising trajectory is then executed: a∗ = arg max_{a ∈ A} score(sim(s, a)). This process repeats after observing the new state, allowing the agent to adapt its plan based on actual outcomes while avoiding costly real-world exploration. In reality, we cannot access the real state due to partial observability; as a result, we instead compute sim(o, a) using the observation o = Ω(s).
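To make this MPC-style selection rule concrete, the following minimal Python sketch (our own illustration, not the authors' released code; sim, score, and the candidate list are placeholders for the components defined above) implements a∗ = arg max_{a ∈ A} score(sim(o, a)):

from typing import Callable, List, Tuple

Observation = str   # e.g., an accessibility tree or a textual state description
Action = str        # e.g., 'click [123]' or 'type [45] [disk]'
Trajectory = List[Tuple[Action, Observation]]

def mpc_select_action(
    obs: Observation,
    candidates: List[Action],
    sim: Callable[[Observation, Action, int], Trajectory],  # world-model rollout of depth H
    score: Callable[[Trajectory], float],                   # progress estimate for a rollout
    horizon: int = 1,
) -> Action:
    """One MPC step: pick the candidate whose simulated rollout scores highest."""
    best_action, best_value = candidates[0], float("-inf")
    for action in candidates:
        trajectory = sim(obs, action, horizon)  # simulated only; no real website interaction
        value = score(trajectory)
        if value > best_value:
            best_action, best_value = action, value
    return best_action

After executing the returned action on the live website and receiving a new observation, the same routine is invoked again, which is exactly the interleaving of planning and execution described above.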
4 WEBDREAMER: MODEL-BASED PLANNING FOR WEB AGENTS

In this paper, we propose WEBDREAMER, a pioneering approach leveraging LLMs as world models to enable efficient planning in complex digital environments. Our approach is motivated by the observation that web interfaces, despite their complexity, are designed to be predictable for human users. When browsing websites, humans can effectively anticipate action outcomes based on visual cues and common design patterns—clicking a "Submit" button leads to form submission, selecting a product image navigates to its detail page. Given that LLMs are trained on vast amounts of web-related data, we hypothesize that they have acquired sufficient knowledge to simulate the consequences of user actions, potentially serving as effective world models for planning.

4.1 CORE DESIGN

WEBDREAMER follows the planning-through-simulation paradigm introduced in Section 3.2. Figure 2 illustrates this process with three candidate actions, where WEBDREAMER simulates two-step trajectories for each action, selects the trajectory with the highest score, and executes its corresponding initial action. At its core, WEBDREAMER leverages an LLM to implement both the simulation function sim and the scoring function score.

Figure 2: Illustration of WEBDREAMER using the LLM to simulate the outcome of each candidate action. The LLM simulates trajectories in natural language descriptions for three candidate actions: (1) Click "Office Products", (2) Click "Electronics", and (3) Type "Disk" into textbox. Through these simulations, each resulting trajectory is scored to identify the action most likely to succeed. In this case, the LLM selects Click "Electronics" as the optimal step and executes it. Each dotted box represents an LLM-generated state description after each simulated action. This example demonstrates a two-step planning horizon.

Implementation for sim: Our implementation of sim consists of two modules: one predicts state changes after action execution, approximating T, while the other imagines a possible action based on the predicted state. Together, these two modules generate trajectories of length H, where H is a configurable horizon parameter (i.e., the simulation depth). Specifically, to represent the state changes, we prompt the LLM to generate a concise natural language description focusing only on the effects of the action.
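As a rough illustration, the two modules can be chained as in the sketch below (our own sketch under stated assumptions: llm is a hypothetical text-generation wrapper, and the prompt strings are paraphrases rather than the actual prompts, which are given in Appendix A):

from typing import List, Tuple

def simulate(observation: str, action: str, horizon: int, llm) -> List[Tuple[str, str]]:
    """Roll out an imagined trajectory of `horizon` steps using the LLM as world model."""
    trajectory = []
    state_description = observation
    for _ in range(horizon):
        # Module 1: predict how the page state changes after executing the action.
        state_description = llm(
            f"Current state: {state_description}\nAction: {action}\n"
            "Describe only the changes this action causes on the webpage."
        )
        trajectory.append((action, state_description))
        # Module 2: imagine a plausible next action from the predicted state.
        action = llm(
            f"Predicted state: {state_description}\n"
            "Propose the next action toward the task objective."
        )
    return trajectory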
For example, in Figure 2, the LLM would output a short description as follows when prompted to predict the effect of executing the action Click "Electronics":

→ Click "Electronics"
The 'Electronics' category will display three sub-categories: 'Computers & Accessories', 'Accessories & Supplies', and 'Car & Vehicle Electronics'.

Based on this predicted state, the LLM then imagines the next action (i.e., Click "Computers & Accessories"), which leads to another state change prediction. This process generates a trajectory of horizon H = 2.

Implementation for score: After collecting a trajectory τi simulated from each candidate action ai using sim, we further use the LLM as a scoring function for each simulation. Following Koh et al. (2024b), we prompt the LLM to evaluate each simulated trajectory with a three-scale response—complete (1.0), on track (0.5), or incorrect (0)—indicating its progress toward task completion. The final score is computed by averaging multiple samples of these evaluations.

In addition to sim and score, a prerequisite to planning is candidate action generation. We employ a two-stage approach: first sampling the top-k actions following Koh et al. (2024b), then using the LLM to self-refine the candidate set, pruning actions that are unnecessary to simulate. This self-refinement step is motivated by our observation that at different steps, the same k can introduce varying degrees of irrelevant actions—some steps naturally have fewer plausible actions than others. We show the pseudocode of WEBDREAMER's overall design in Algorithm 1. The termination check verifies whether the model outputs a stop action, reaches the maximum number of steps, or repeats an action more than 3 times, also following the implementation by Koh et al. (2024b).

Algorithm 1: WEBDREAMER
Input: Instruction I; initial observation o_0
Output: Sequence of actions a_0, a_1, ..., a_T
t ← 0;
while True do
    A_t ← get_candidate(I, o_t);
    A′_t ← self_refine(A_t);
    a_t ← arg max_{a ∈ A′_t} score(sim(o_t, a));
    o_{t+1} ← execute(a_t);
    t ← t + 1;
    if termination_check() = True then
        break;
end
Return result;

All system prompts used in WEBDREAMER can be found in Appendix A.
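For concreteness, the scoring step could be implemented roughly as follows (a hedged sketch of the three-scale scheme described above; llm_judge is a hypothetical wrapper around a chat-completion call and is not part of the released codebase):

import statistics

# Mapping from the three-scale judgment to a numeric score, following the text:
# complete = 1.0, on track = 0.5, incorrect = 0.0.
SCALE = {"complete": 1.0, "on track": 0.5, "incorrect": 0.0}

def score_trajectory(task: str, trajectory: str, llm_judge, n_samples: int = 5) -> float:
    """Average several sampled LLM judgments of a simulated trajectory."""
    samples = []
    for _ in range(n_samples):
        # llm_judge is assumed to return one of: 'complete', 'on track', 'incorrect'.
        judgment = llm_judge(
            f"Task: {task}\nSimulated steps: {trajectory}\n"
            "Does this indicate success, being on track, or failure?"
        )
        samples.append(SCALE.get(judgment, 0.0))
    return statistics.mean(samples)

Averaging over several sampled judgments smooths out the discreteness of the three-scale response, which is why the final score in Algorithm 1 can meaningfully rank candidate actions.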
4.2 DISCUSSION

To justify our design choices in light of our goal—a pioneering study on using LLMs as world models for web environments—we discuss three key considerations:

State change description instead of HTML/accessibility tree. While we use natural language descriptions to capture state changes, an alternative is to prompt the LLM to predict the HTML or accessibility tree of the resulting page. However, since most webpage elements remain unchanged after an action, predicting the entire page structure is unnecessarily wasteful. Moreover, such concrete predictions are more prone to hallucination—HTML requires precise details about the website, whereas state descriptions need only capture the essential changes. For our pioneering study, we embrace this simpler, more intuitive representation, though we make no claims about its strict superiority over HTML or accessibility trees (see Section 6.1 for a detailed analysis).

Prompting instead of fine-tuning. In this work, we implement WEBDREAMER through direct prompting of state-of-the-art LLMs (i.e., GPT-4o (OpenAI, 2024a)) without fine-tuning. Our rationale is straightforward: we aim to first establish the feasibility of using advanced LLMs as world models for web environments and their effectiveness in planning. Demonstrating promising results with this approach will lay the foundation for future work on optimizing this direction through fine-tuning OSS models on targeted datasets.

Straightforward MPC-based planning instead of MCTS. We adopt a relatively straightforward MPC-based planning algorithm rather than more sophisticated approaches like MCTS that have been prominent in recent LLM planning research (Hao et al., 2023; Feng et al., 2023). This choice is motivated by our empirical findings: increasing the planning horizon of WEBDREAMER yields diminishing returns, which suggests the current limitations of LLMs in accurately modeling multi-step trajectories (see Section 6.1). Given our goal of exploring LLMs as world models for web environments, this simpler approach suffices to demonstrate the key insights while acknowledging the current capabilities of LLMs.

5 EXPERIMENTS

5.1 SETUP

To properly test our planning framework's real-world performance, we use benchmarks with online evaluation, capturing the dynamic nature of web interactions. We focus on two representative benchmarks: VisualWebArena (VWA; Koh et al. (2024a)), which emphasizes a multimodal setting, and Mind2Web-live (Pan et al., 2024b), which operates with HTML by default. VWA comprises 910 tasks across three locally hosted websites: Shopping, Classifieds, and Reddit. In contrast, Mind2Web-live includes 104 tasks spanning 69 real-world websites. We adhere to the default settings of both benchmarks: for VWA, we use screenshots with Set-of-Marks prompting as the observation space, while for Mind2Web-live, we use HTML. For our LLM, we choose the most advanced multimodal LLM available, GPT-4o, as it best serves our aim to pioneer model-based planning with LLMs and explore the full potential of this envisioned paradigm. In our experiments, we empirically set the planning horizon H to 1. A comprehensive analysis of this parameter is presented in Section 6.1.

To demonstrate the effectiveness of our proposal, we primarily compare our approach with two major baselines: the reactive agent and the tree search agent with real interactions.2 While we can readily implement our own method for both benchmarks, for the tree search baseline (Koh et al., 2024b), we can only compare with it on VWA, because of the infeasibility of doing tree search in real-world websites in Mind2Web-live. Specifically, in VWA, Koh et al. (2024b) keep track of the sequences of actions used to reach states in previous trajectories. During backtracking, they reset the sandbox and re-execute the action sequence to restore the state. However, resetting the environment to undo effects is not always feasible in real-world websites featured in Mind2Web-live.

2We refer to tree search with real interactions simply as tree search in our experiments for brevity.

5.2 MAIN RESULTS

Effectiveness. We present the overall performance results in Table 2. WEBDREAMER demonstrates substantial improvements over the reactive agent on both the VWA and Mind2Web-live datasets. Notably, on the VWA dataset, our proposed method achieves a 33.3% relative performance gain. Despite this improvement, our approach still falls short of the tree search baseline in terms of overall success rate. However, it is crucial to emphasize that tree search is not a practical option for real-world websites, whereas WEBDREAMER provides a more flexible and adaptive alternative.
On Mind2Web-live, WEBDREAMER outperforms the reactive baseline by 2.9% (a relative gain of 13.1%), which is less significant than the improvement on VWA. However, it is worth noting that the Mind2Web-live dataset does not offer as much discriminative power, as evidenced by the minimal performance differences across multiple base LLMs shown in Table 2. The strong results on both VWA and Mind2Web-live indicate the effectiveness of our method across different observation settings.

We further conduct a more granular analysis comparing our proposed method to the reactive baseline on the VWA dataset across multiple dimensions. Table 3 demonstrates that our model-based planning approach consistently outperforms the reactive baseline across all websites and task difficulty levels. On tasks of medium difficulty according to the official annotation by VWA, model-based planning even surpasses the performance of tree search (24.1% vs. 22.2%). Despite its promise, model-based planning still struggles with hard tasks in VWA that necessitate multistep simulations. The accuracy of simulations diminishes as the number of steps increases, presenting a significant challenge for handling hard tasks.

Table 2: Results on VisualWebArena and Mind2Web-live. WEBDREAMER significantly outperforms the reactive baseline and falls only slightly short of the tree search baseline on VWA while requiring far fewer website interactions. For Mind2Web-live, implementing tree search algorithms poses significant challenges due to the requirement for website backtracking, leading us to omit tree search performance metrics. This limitation further underscores the flexibility of our model-based planning method. We also include additional baselines (denoted by gray cells) to provide broader context. While these comparisons may not directly assess our core hypothesis, they offer valuable background for understanding our method's performance in the web navigation landscape. †We run the reactive baseline on VWA ourselves because local hosting requirements may lead to hardware-dependent performance variations.

Benchmark | Observation O | Method | Completion Rate | Success Rate
VisualWebArena | Screenshot+SoM | Gemini-1.5-Pro + Reactive (Koh et al., 2024a) | – | 12.0%
VisualWebArena | Screenshot+SoM | GPT-4 + Reactive (Koh et al., 2024a) | – | 16.4%
VisualWebArena | Screenshot+SoM | GPT-4o + Reactive (Koh et al., 2024a) | – | 17.7%†
VisualWebArena | Screenshot+SoM | GPT-4o + Tree Search (Koh et al., 2024b) | – | 26.4%
VisualWebArena | Screenshot+SoM | GPT-4o + WEBDREAMER | – | 23.6% (↑33.3%)
Mind2Web-live | HTML | GPT-4 + Reactive (Pan et al., 2024b) | 48.8% | 23.1%
Mind2Web-live | HTML | Claude-3-Sonnet + Reactive (Pan et al., 2024b) | 47.9% | 22.1%
Mind2Web-live | HTML | Gemini-1.5-Pro + Reactive (Pan et al., 2024b) | 44.6% | 22.3%
Mind2Web-live | HTML | GPT-4-turbo + Reactive (Pan et al., 2024b) | 44.3% | 21.1%
Mind2Web-live | HTML | GPT-3.5-turbo + Reactive (Pan et al., 2024b) | 40.2% | 16.5%
Mind2Web-live | HTML | GPT-4o + Reactive (Pan et al., 2024b) | 47.6% | 22.1%
Mind2Web-live | HTML | GPT-4o + WEBDREAMER | 49.9% | 25.0% (↑13.1%)

Table 3: Success rate breakdown based on different dimensions. γ = (SR_WEBDREAMER − SR_reactive) / (SR_tree search − SR_reactive) measures the extent to which WEBDREAMER narrows the gap between the reactive agent and the tree search agent.

(a) Websites
Websites | Reactive | Tree Search | WEBDREAMER | γ
Classifieds | 16.8% | 26.5% | 22.6% | 59.8%
Reddit | 15.3% | 20.5% | 18.6% | 63.5%
Shopping | 19.4% | 29.0% | 26.5% | 74.0%

(b) Task Difficulty
Difficulty | Reactive | Tree Search | WEBDREAMER | γ
Easy | 28.8% | 42.3% | 37.4% | 63.7%
Medium | 16.4% | 22.2% | 24.1% | 132.8%
Hard | 10.7% | 14.9% | 12.7% | 47.6%
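As a sanity check on the γ metric, the table values can be reproduced directly from the success rates; the short snippet below (our illustration, not part of the paper's evaluation code) recomputes γ for the Classifieds split:

def gamma(sr_webdreamer: float, sr_reactive: float, sr_tree_search: float) -> float:
    """Fraction of the reactive-to-tree-search gap closed by WebDreamer."""
    return (sr_webdreamer - sr_reactive) / (sr_tree_search - sr_reactive)

# Classifieds split from Table 3(a): reactive 16.8%, tree search 26.5%, WebDreamer 22.6%
print(f"{gamma(22.6, 16.8, 26.5):.1%}")  # -> 59.8%

Note that γ can exceed 100% when WEBDREAMER outperforms tree search, as it does on medium-difficulty tasks (132.8%).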
Efficiency. Another key advantage of model-based planning is its efficiency compared with tree search using actual explorations. As shown in Table 4, tree search requires approximately three times more steps than the baseline across all environments, whereas our method maintains comparable action steps. Notably, tree search introduces about ten times more wall-clock latency due to the extra actions and backtracking, while the simulation overhead in our approach is minimal and can be further reduced with increased parallelization.

Table 4: Action steps and wall clock time on VWA.

(a) Number of Action Steps
Steps | Reactive | Tree Search | WEBDREAMER
Classifieds | 3.4 | 9.9 | 4.1
Reddit | 5.1 | 13.6 | 5.2
Shopping | 4.5 | 11.4 | 4.5

(b) Task Completion Wall Clock Time
Seconds | Reactive | Tree Search | WEBDREAMER
Classifieds | 68.3 | 749.2 | 183.6
Reddit | 83.5 | 972.1 | 233.7
Shopping | 87.7 | 785.7 | 179.4

Figure 4: We demonstrate the performance on a subset of the VWA dataset, varying both the state representation within simulations and the planning horizon. Planning over long horizons with simulation remains challenging, regardless of the state representation employed.

6 ANALYSES

6.1 STATE REPRESENTATION AND PLANNING HORIZON

Our model-based planning approach relies on two critical dimensions for simulation: the state representation and the planning horizon (i.e., the simulation depth). To gain deeper insights into its effectiveness and limitations, we investigate how various configurations affect the final performance. Given the high computational cost of these experiments, we conduct this analysis using a subset of the VWA dataset, comprising 100 shopping tasks with officially annotated human trajectories.

In addition to the state change description used in our primary experiments, we explore alternative approaches where GPT-4o predicts either the HTML code or the accessibility tree of the resulting webpage within the simulation. For each of these state representations, we evaluate planning horizons of 1, 2, and 3 steps. As depicted in Figure 4, all three state representations significantly outperform the reactive baseline. However, their effectiveness diminishes as the planning horizon extends to 3 steps, indicating a common limitation in long-horizon simulation across these approaches. Specifically, the action proposal within the simulation tends to hallucinate relevant actions for task completion, even when such actions may not exist in the current state predicted by the LLM. Notably, the state change representation exhibits the most pronounced performance degradation as planning horizons extend. This decline is particularly severe with a planning horizon of 3, where performance falls below that of the reactive baseline. This vulnerability stems from its implicit specification of available interactive elements on the current webpage, requiring the model to infer these elements by applying changes to the initial state. In contrast, HTML and accessibility tree representations provide explicit element information. Consequently, the state change approach is more susceptible to hallucination during extended simulations. Despite this limitation, the state change approach remains a viable choice given the current capabilities of LLMs. It matches the performance of HTML and accessibility tree representations for planning horizons less than 3 while consuming fewer output tokens.
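For reference, the three state representations compared above differ only in what the world-model prompt asks the LLM to produce; a hedged sketch of such a switch (our illustration with hypothetical template strings, not the exact prompts used in the paper) might look like:

# Hypothetical prompt templates for the three state representations in Section 6.1.
TEMPLATES = {
    "state_change": "Describe only the changes that occur on the page after: {action}",
    "accessibility_tree": "Output the full accessibility tree of the page after: {action}",
    "html": "Output the full HTML of the page after: {action}",
}

def build_world_model_prompt(representation: str, observation: str, action: str) -> str:
    """Assemble a world-model prompt for the chosen state representation."""
    instruction = TEMPLATES[representation].format(action=action)
    return f"Current page:\n{observation}\n\n{instruction}"

The token-cost argument above follows directly from this framing: the state-change template elicits only a short delta, while the other two elicit a full page serialization.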
6.2 ABLATION STUDY

To determine if the observed improvements come from specific parts of our model-based planning approach, we perform ablation studies on the simulation and self-refinement stages, using the same subset from Section 6.1. We pay special attention to the simulation stage, which is the core of model-based planning.

Figure 3: Ablation study on the simulation stage and self-refinement stage. (Bar chart comparing success rates of Reactive, Reranking, WebDreamer w/o Self-Refinement, and WebDreamer.)
Our current implementation, utilizing state-of-the-art models like GPT-4o, incurs non-trivial API costs (approximately $1 per task on VWA). This cost reflects our prioriti- zation of exploring the full potential of LLM-based planning without immediate constraints. For practical applications, future work could investigate cost-effective alternatives such as fine-tuning specialized models for simulation tasks. This sets a benchmark for future optimizations that balance performance and efficiency. These limitations underscore the nature of our work as a proof of concept, opening up numerous avenues for future research and optimization. By establishing the foundational potential of MPC- based planning with LLMs, we have laid the groundwork for a new planning paradigm for LLM- based language agents, inviting further innovations that can refine and extend model-based planning. 10 ACKNOWLEDGMENTS We would like to extend our appreciation to colleagues from the OSU NLP group and Orby AI for their insightful comments. This work is supported in part by Orby AI and ARL W911NF2220144. The views and conclusions contained herein are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the U.S. government. The U.S. gov- ernment is authorized to reproduce and distribute reprints for government purposes notwithstanding any copyright notice herein. REFERENCES Gilles Baechler, Srinivas Sunkara, Maria Wang, Fedir Zubach, Hassan Mansoor, Vincent Etter, Vic- tor C˘arbune, Jason Lin, Jindong Chen, and Abhanshu Sharma. Screenai: A vision-language model for ui and infographics understanding. ArXiv preprint, abs/2402.04615, 2024. URL https://arxiv.org/abs/2402.04615. Bradley Brown, Jordan Juravsky, Ryan Ehrlich, Ronald Clark, Quoc V Le, Christopher R´e, and Azalia Mirhoseini. Large language monkeys: Scaling inference compute with repeated sampling. ArXiv preprint, abs/2407.21787, 2024. URL https://arxiv.org/abs/2407.21787. Hyungjoo Chae, Namyoung Kim, Kai Tzu-iunn Ong, Minju Gwak, Gwanwoo Song, Jihoon Kim, Sunghwan Kim, Dongha Lee, and Jinyoung Yeo. Web agents with world models: Learning and leveraging environment dynamics in web navigation. arXiv preprint arXiv:2410.13232, 2024. Kanzhi Cheng, Qiushi Sun, Yougang Chu, Fangzhi Xu, Li YanTao, Jianbing Zhang, and Zhiyong In Proceedings Wu. SeeClick: Harnessing GUI grounding for advanced visual GUI agents. of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 9313–9332, Bangkok, Thailand, 2024. Association for Computational Linguistics. URL https://aclanthology.org/2024.acl-long.505. Xiang Deng, Yu Gu, Boyuan Zheng, Shijie Chen, Samual Stevens, Boshi Wang, Huan Sun, In Alice Oh, Tristan Nau- and Yu Su. Mind2web: Towards a generalist agent for the web. mann, Amir Globerson, Kate Saenko, Moritz Hardt, and Sergey Levine (eds.), Advances in Neural Information Processing Systems 36: Annual Conference on Neural Informa- tion Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - URL http://papers.nips.cc/paper_files/paper/2023/ 16, 2023, 2023. hash/5950bf290a1570ea401bf98882128160-Abstract-Datasets_and_ Benchmarks.html. Xidong Feng, Ziyu Wan, Muning Wen, Stephen Marcus McAleer, Ying Wen, Weinan Zhang, and Jun Wang. Alphazero-like tree-search can guide large language model decoding and training. ArXiv preprint, abs/2309.17179, 2023. URL https://arxiv.org/abs/2309.17179. 
Hiroki Furuta, Kuang-Huei Lee, Ofir Nachum, Yutaka Matsuo, Aleksandra Faust, Shixiang Shane Gu, and Izzeddin Gur. Multimodal web navigation with instruction-finetuned foundation models. ArXiv preprint, abs/2305.11854, 2023. URL https://arxiv.org/abs/2305.11854.

Carlos E Garcia, David M Prett, and Manfred Morari. Model predictive control: Theory and practice—a survey. Automatica, 25(3):335–348, 1989.

Yu Gu, Xiang Deng, and Yu Su. Don't generate, discriminate: A proposal for grounding language models to real-world environments. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 4928–4949, Toronto, Canada, 2023. Association for Computational Linguistics. URL https://aclanthology.org/2023.acl-long.270.

Izzeddin Gur, Hiroki Furuta, Austin Huang, Mustafa Safdari, Yutaka Matsuo, Douglas Eck, and Aleksandra Faust. A real-world webagent with planning, long context understanding, and program synthesis. ArXiv preprint, abs/2307.12856, 2023. URL https://arxiv.org/abs/2307.12856.
In Andreas Krause, Emma Brunskill, Kyunghyun Cho, Barbara Engelhardt, Sivan Sabato, and Jonathan Scarlett (eds.), In- ternational Conference on Machine Learning, ICML 2023, 23-29 July 2023, Honolulu, Hawaii, USA, volume 202 of Proceedings of Machine Learning Research, pp. 18893–18912. PMLR, 2023. URL https://proceedings.mlr.press/v202/lee23g.html. Zeyi Liao, Lingbo Mo, Chejian Xu, Mintong Kang, Jiawei Zhang, Chaowei Xiao, Yuan Tian, Bo Li, and Huan Sun. EIA: environmental injection attack on generalist web agents for privacy leakage. CoRR, abs/2409.11295, 2024. doi: 10.48550/ARXIV.2409.11295. URL https://doi.org/ 10.48550/arXiv.2409.11295. Evan Zheran Liu, Kelvin Guu, Panupong Pasupat, Tianlin Shi, and Percy Liang. Reinforcement In 6th International Confer- learning on web interfaces using workflow-guided exploration. ence on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. OpenReview.net, 2018. URL https://openreview.net/ forum?id=ryTp3f-0-. Marcelo G Mattar and M´at´e Lengyel. Planning in the brain. Neuron, 110(6):914–934, 2022. Thomas M Moerland, Joost Broekens, Aske Plaat, Catholijn M Jonker, et al. Model-based rein- forcement learning: A survey. Foundations and Trends® in Machine Learning, 16(1):1–118, 2023. 12 OpenAI. Hello GPT-4o. https://openai.com/index/hello-gpt-4o/, 2024a. Ac- cessed: 2024-09-28. OpenAI. Introducing OpenAI o1. https://openai.com/o1/, 2024b. Accessed: 2024-09-29. Jiayi Pan, Yichi Zhang, Nicholas Tomlin, Yifei Zhou, Sergey Levine, and Alane Suhr. Autonomous evaluation and refinement of digital agents. ArXiv preprint, abs/2404.06474, 2024a. URL https://arxiv.org/abs/2404.06474. Yichen Pan, Dehan Kong, Sida Zhou, Cheng Cui, Yifei Leng, Bing Jiang, Hangyu Liu, Yanyi Shang, Shuyan Zhou, Tongshuang Wu, and Zhengyang Wu. Webcanvas: Benchmarking web agents in online environments. ArXiv preprint, abs/2406.12373, 2024b. URL https://arxiv.org/ abs/2406.12373. Razvan Pascanu, Yujia Li, Oriol Vinyals, Nicolas Heess, Lars Buesing, Sebastien Racani`ere, David Reichert, Th´eophane Weber, Daan Wierstra, and Peter Battaglia. Learning model-based plan- ning from scratch. ArXiv preprint, abs/1707.06170, 2017. URL https://arxiv.org/abs/ 1707.06170. Ajay Patel, Markus Hofmarcher, Claudiu Leoveanu-Condrei, Marius-Constantin Dinu, Chris Callison-Burch, and Sepp Hochreiter. Large language models can self-improve at web agent tasks. ArXiv preprint, abs/2405.20309, 2024. URL https://arxiv.org/abs/2405.20309. Pranav Putta, Edmund Mills, Naman Garg, Sumeet Motwani, Chelsea Finn, Divyansh Garg, and Rafael Rafailov. Agent q: Advanced reasoning and learning for autonomous ai agents. ArXiv preprint, abs/2408.07199, 2024. URL https://arxiv.org/abs/2408.07199. Julian Schrittwieser, Ioannis Antonoglou, Thomas Hubert, Karen Simonyan, Laurent Sifre, Simon Schmitt, Arthur Guez, Edward Lockhart, Demis Hassabis, Thore Graepel, et al. Mastering atari, go, chess and shogi by planning with a learned model. Nature, 588(7839):604–609, 2020. Peter Shaw, Mandar Joshi, James Cohan, Jonathan Berant, Panupong Pasupat, Hexiang Hu, From pixels to UI actions: Urvashi Khandelwal, Kenton Lee, and Kristina Toutanova. In Alice Oh, Tristan Nau- Learning to follow instructions via graphical user interfaces. 
Tianlin Shi, Andrej Karpathy, Linxi Fan, Jonathan Hernandez, and Percy Liang. World of Bits: An open-domain platform for web-based agents. In Proceedings of the 34th International Conference on Machine Learning (ICML 2017), pp. 3135–3144. PMLR, 2017. URL http://proceedings.mlr.press/v70/shi17a.html.

David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587):484–489, 2016.

Yifan Song, Da Yin, Xiang Yue, Jie Huang, Sujian Li, and Bill Yuchen Lin. Trial and error: Exploration-based trajectory optimization for LLM agents. ArXiv preprint, abs/2403.02502, 2024. URL https://arxiv.org/abs/2403.02502.

Richard S Sutton. Dyna, an integrated architecture for learning, planning, and reacting. ACM SIGART Bulletin, 2(4):160–163, 1991.

Evan Wang, Federico Cassano, Catherine Wu, Yunfeng Bai, Will Song, Vaskar Nath, Ziwen Han, Sean Hendryx, Summer Yue, and Hugh Zhang. Planning in natural language improves LLM search for code generation. ArXiv preprint, abs/2409.03733, 2024. URL https://arxiv.org/abs/2409.03733.

Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed H. Chi, Quoc V. Le, and Denny Zhou. Chain-of-thought prompting elicits reasoning in large language models. In Advances in Neural Information Processing Systems 35 (NeurIPS 2022), 2022. URL http://papers.nips.cc/paper_files/paper/2022/hash/9d5609613524ecf4f15af0f7b31abca4-Abstract-Conference.html.

Shunyu Yao, Howard Chen, John Yang, and Karthik Narasimhan. WebShop: Towards scalable real-world web interaction with grounded language agents. In Advances in Neural Information Processing Systems 35 (NeurIPS 2022), 2022. URL http://papers.nips.cc/paper_files/paper/2022/hash/82ad13ec01f9fe44c01cb91814fd7b8c-Abstract-Conference.html.

Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Tom Griffiths, Yuan Cao, and Karthik Narasimhan. Tree of thoughts: Deliberate problem solving with large language models. In Advances in Neural Information Processing Systems 36 (NeurIPS 2023), 2023a. URL http://papers.nips.cc/paper_files/paper/2023/hash/271db9922b8d1f4dd7aaef84ed5ac703-Abstract-Conference.html.

Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik R. Narasimhan, and Yuan Cao. ReAct: Synergizing reasoning and acting in language models. In The Eleventh International Conference on Learning Representations (ICLR 2023). OpenReview.net, 2023b. URL https://openreview.net/forum?id=WE_vluYUL-X.
Keen You, Haotian Zhang, Eldon Schoop, Floris Weers, Amanda Swearngin, Jeffrey Nichols, Yinfei Yang, and Zhe Gan. Ferret-UI: Grounded mobile UI understanding with multimodal LLMs. ArXiv preprint, abs/2404.05719, 2024. URL https://arxiv.org/abs/2404.05719.

Yao Zhang, Zijian Ma, Yunpu Ma, Zhen Han, Yu Wu, and Volker Tresp. WebPilot: A versatile and autonomous multi-agent system for web task execution with strategic exploration. ArXiv preprint, abs/2408.15978, 2024. URL https://arxiv.org/abs/2408.15978.

Boyuan Zheng, Boyu Gou, Jihyung Kil, Huan Sun, and Yu Su. GPT-4V(ision) is a generalist web agent, if grounded. In Forty-first International Conference on Machine Learning, 2024. URL https://openreview.net/forum?id=piecKJ2DlB.

Shuyan Zhou, Frank F Xu, Hao Zhu, Xuhui Zhou, Robert Lo, Abishek Sridhar, Xianyi Cheng, Yonatan Bisk, Daniel Fried, Uri Alon, et al. WebArena: A realistic web environment for building autonomous agents. ArXiv preprint, abs/2307.13854, 2023. URL https://arxiv.org/abs/2307.13854.

A PROMPTS FOR FOUR STAGES IN MPC-BASED PLANNING

A.1 ACTION PROPOSAL

Action Proposal

You are an autonomous intelligent agent tasked with navigating a web browser. You will be given web-based tasks. These tasks will be accomplished through the use of specific actions you can issue.

Here's the information you'll have:
{Web Information}
The user's objective: {Task Objective} This is the task you're trying to complete.
The current web page screenshot: {Web Page Screenshot Image} This is a screenshot of the webpage, with each interactable element assigned a unique numerical id. Each bounding box and its respective id shares the same color.
The observation, which lists the IDs of all interactable elements on the current web page with their text content if any, in the format [id][tagType][text content]. tagType is the type of the element, such as button, link, or textbox. text content is the text content of the element. For example, [1234][button]['Add to Cart'] means that there is a button with id 1234 and text content 'Add to Cart' on the current web page. [][StaticText][text] means that the element is of some text that is not interactable.
The current web page's URL: {Web URL} This is the page you're currently navigating.
The open tabs: {Previous Tabs} These are the tabs you have open.
The previous action: {Previous Action} This is the action you just performed. It may be helpful to track your progress.

The actions you can perform fall into several categories:

Page Operation Actions:
- click [id]: This action clicks on an element with a specific id on the webpage.
- type [id] [content]: Use this to type the content into the field with id. By default, the Enter key is pressed after typing unless press_enter_after is set to 0, i.e., type [id] [content] [0].
- hover [id]: Hover over an element with id.
- press [key_comb]: Simulates the pressing of a key combination on the keyboard (e.g., Ctrl+V).
- scroll [down] or scroll [up]: Scroll the page up or down.

Tab Management Actions:
- new tab: Open a new, empty browser tab.
- tab focus [tab_index]: Switch the browser's focus to a specific tab using its index.
- close tab: Close the currently active tab.

URL Navigation Actions:
- goto [url]: Navigate to a specific URL.
- go back: Navigate to the previously viewed page.
- go forward: Navigate to the next page (if a previous go back action was performed).

Completion Action:
- stop [answer]: Issue this action when you believe the task is complete. If the objective is to find a text-based answer, provide the answer in the bracket.

Homepage: If you want to visit other websites, check out the homepage at http://homepage.com. It has a list of websites you can visit. http://homepage.com/password.html lists all the account names and passwords for the websites. You can use them to log in to the websites.

To be successful, it is very important to follow the following rules:
1. You should only issue an action that is valid given the current observation.
2. You should only issue one action at a time.
3. You should follow the examples to reason step by step and then issue the next action.
4. Generate the action in the correct format. Start with an "In summary, the next action I will perform is" phrase, followed by the action. For example, In summary, the next action I will perform is click [1234].
5. Issue the stop action when you think you have achieved the objective. Don't generate anything after stop.

A.2 SELF-REFINEMENT

Self-Refinement

You are assisting a web navigation agent to help a human user navigate a website to complete a task. Given the user's intent, the action history, and the current state of the webpage, the agent has proposed a set of candidate actions to take at the current step. Your role is not to determine the best action for the agent at this step, but to filter out the actions that are very likely not relevant or helpful for the agent to accomplish the task.

Please select all actions that you think could possibly lead the agent to accomplish the task. It's important to note that to accomplish a task, the agent will execute a sequence of actions. So the action to take at this step does not have to immediately lead to the completion of the task. You should select any action that could be relevant for the agent to take in the current state of the webpage. Try to be as thoughtful and comprehensive as you can! Don't miss any possible action. If there is one action that is clearly the best, and all other actions are clearly not very relevant, you can select only one action. Please do this sparingly, since some actions may be helpful over a longer horizon. An action should be included as long as it could be relevant to the task, even if it may not be the most direct action to take at this step! Some relevant actions might seem indirect at first glance, but could be helpful over a longer horizon. Please also include those actions. Please select at least one action.

*IMPORTANT* Format your response into two lines as shown below:
Thoughts: <your thoughts and reasoning process>. You must explicitly evaluate each action one by one and imagine whether it could be relevant to the task following the format: action: ... rationale: ...
Selected actions: id0;id1;id2;... (please return the index of the action in the candidate actions list, starting from 0. Don't output the action description itself. Separate the indices with semicolons. Do not add spaces or any other characters after the semicolons.)

Action History: {last actions str}
Current URL: {current url}
The images corresponding to the user intent are shown in the FIRST {len(intent images)} images (before the User Intent). The last {len(screenshots)} snapshots of the agent's trajectory are shown in the LAST {len(screenshots)} images.
The LAST IMAGE represents the current state of the webpage.
Proposed Action: {action descriptions}

A.3 WORLD MODEL

World Model

You are an agent that predicts the effect of an action on a webpage. You will be given a screenshot of a webpage, a sequence of actions and state changes applied to the initial screenshot, and an operation to perform on the webpage. You are required to predict the new changes that will occur on the webpage after the operation is performed, such as the appearance of new elements, the disappearance of existing elements, or changes in the content of existing elements. The operation type and the element to operate on will be provided in the prompt. Directly output State changes: ... and don't output anything else. Try to be as comprehensive and detailed as possible.

Based on the initial screenshot and the changes to the webpage, please predict the changes after action:

A.4 REWARD MODEL

Reward Model

You are an expert in evaluating the performance of a web navigation agent. The agent is designed to help a human user navigate a website to complete a task. Given the user's intent, the agent's action history, and the current state of the webpage, your goal is to decide **whether the simulated steps by the agent indicate a successful execution of the user intent**. In particular, decide whether the predicted state (i.e., the current state represented by the last image plus all the predicted changes so far) corresponds to a successful final state. If it is a failure but it looks like the simulated steps are on the right track towards success, you should also output as such. Note that, in the simulated steps, all the state changes are predicted by the agent's world model, and they may not actually be faithful to the real website interactions (e.g., some proposed actions may not be available on a realistic website). You should also account for this in your evaluation (e.g., if the predicted state changes are not reasonable then it's probably a failure).

*IMPORTANT* Format your response into two lines as shown below:
Thoughts: <your thoughts and reasoning process>
Status: "success" or "failure"
On the right track to success: "yes" or "no"

B CASE STUDY

B.1 ERROR CAUSED BY IMPERFECT WORLD MODEL SIMULATION

Figure 5: An error case caused by imperfect world model simulation.

B.2 POSITIVE CASE BENEFITING FROM WORLD MODEL SIMULATION

Figure 6: A positive case where the simulation leads to correct action prediction.
[Figure 5 panels: for the task "Find me a printer of the same brand as the product in this picture. It needs to be white and have at least 11 reviews with an average rating greater than 4," the world model predicts page-state changes for candidate actions such as Click 'Next Page', Scroll Down, and Stop, with predicted values v = 0.5, 0.4, 0.1, and 0.2 and a correction of an erroneous prediction.]
[Figure 6 panels: for the task "What are the two types of birds on the front of that colorful shirt?", the world model simulates Click, Hover, and Stop candidates with values v = 0.75, 0.25, and 0.05, and the answer "parrots and palm trees" is returned.]
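The four prompts above correspond to the proposal, refinement, simulation, and evaluation stages of the MPC-based planner. As a rough Python sketch of how these stages could be wired together, consider the following; the llm() callable, the *_PROMPT templates, and the parsing details are hypothetical stand-ins inferred from the prompt formats, not the authors' released code.

```python
import re

def parse_action(response: str) -> str:
    """Extract the action from the fixed phrase required by the action-proposal
    prompt, e.g. 'In summary, the next action I will perform is click [1234]'."""
    match = re.search(r"In summary, the next action I will perform is (.+)", response)
    return match.group(1).strip() if match else "stop []"

def parse_selected(response: str, n: int) -> list:
    """Parse the 'Selected actions: id0;id1;...' line of the self-refinement reply."""
    for line in response.splitlines():
        if line.startswith("Selected actions:"):
            ids = [s for s in line.split(":", 1)[1].split(";") if s.strip().isdigit()]
            kept = [int(s) for s in ids if int(s) < n]
            if kept:
                return kept
    return list(range(n))  # fall back to keeping all candidates

def mpc_plan_step(llm, state, objective, n_candidates=5):
    """One MPC planning step: propose, self-refine, simulate, score, pick."""
    # Stage 1 (A.1): sample several candidate next actions.
    candidates = [parse_action(llm(ACTION_PROPOSAL_PROMPT.format(
        state=state, objective=objective))) for _ in range(n_candidates)]
    # Stage 2 (A.2): filter out candidates unlikely to help.
    reply = llm(SELF_REFINEMENT_PROMPT.format(
        state=state, objective=objective, actions=candidates))
    candidates = [candidates[i] for i in parse_selected(reply, len(candidates))]
    # Stages 3-4 (A.3, A.4): simulate each survivor with the world model,
    # then score the predicted state with the reward model.
    best_action, best_value = candidates[0], float("-inf")
    for action in candidates:
        predicted = llm(WORLD_MODEL_PROMPT.format(state=state, action=action))
        verdict = llm(REWARD_MODEL_PROMPT.format(
            state=state, changes=predicted, objective=objective))
        value = 1.0 if 'Status: "success"' in verdict else (
            0.5 if 'On the right track to success: "yes"' in verdict else 0.0)
        if value > best_value:
            best_action, best_value = action, value
    return best_action  # only this action is executed in the real environment
```

Only the selected action is applied to the real browser; all intermediate state changes exist purely in the world model's text predictions, which is what makes errors like the one in Figure 5 possible.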
ai_researcher
1
Difficult_Conversations_A_Collaborative_Interprofessional_Simulation_for_Social_Work_Athletic_Training_and_Physician_Assistant_Programs.pdf
DurIAN-SC: Duration Informed Attention Network based Singing Voice Conversion System

Liqiang Zhang*1, Chengzhu Yu2, Heng Lu2, Chao Weng2, Chunlei Zhang2, Yusong Wu2, Xiang Xie1, Zijin Li3, Dong Yu2
1Beijing Institute of Technology 2Tencent AI Lab 3China Conservatory of Music
{zhlq,xiexiang}@bit.edu.cn, {czyu,bearlu,cweng,cleizhang,ysw,dyu}@tencent.com

Abstract

Singing voice conversion is converting the timbre in the source singing to the target speaker's voice while keeping the singing content the same. However, singing data for a target speaker is much more difficult to collect than normal speech data. In this paper, we introduce a singing voice conversion algorithm that is capable of generating high-quality singing in a target speaker's voice using only his/her normal speech data. First, we integrate the training and conversion processes of speech and singing into one framework by unifying the features used in standard speech synthesis systems and singing synthesis systems. In this way, normal speech data can also contribute to singing voice conversion training, making the singing voice conversion system more robust, especially when the singing database is small. Moreover, in order to achieve one-shot singing voice conversion, a speaker embedding module is developed using both speech and singing data, which provides target speaker identity information during conversion. Experiments indicate that the proposed singing voice conversion system can convert source singing to a target speaker's high-quality singing with only 20 seconds of the target speaker's enrollment speech data.

Index Terms: Singing Voice Conversion, Singing Synthesis, Speaker D-vector, Speaker Embedding

1. Introduction

Singing is one of the predominant forms of the music arts, and singing voice conversion and synthesis have many potential applications in the entertainment industries. Over the past decades, many methods have been proposed to increase the naturalness of synthesized singing. These include methods based on unit selection and concatenation [1] as well as more recent approaches based on deep neural networks (DNN) [2] and auto-regressive generation models [3].

While existing singing synthesis algorithms are able to produce natural singing, they basically require a large amount of singing data from the same speaker in order to generate his/her singing. Compared with normal speech data collection, singing data is much more difficult and more expensive to obtain. To alleviate such limitations, data-efficient singing synthesis approaches [4] have been proposed recently. In [4], a large singing synthesis model trained on multiple speakers is adaptively fine-tuned with a small amount of the target speaker's singing data to generate the target singing model. Alternatively, singing generation for new voices can be achieved through singing voice conversion. The goal of singing voice conversion is to convert the source singing to the timbre of the target speaker while keeping the singing content untouched. Traditional singing voice conversion [5, 6, 7] relies on parallel singing data to learn conversion functions between different speakers. However, a recent study [8] proposed an unsupervised singing voice conversion method based on a WaveNet [9] autoencoder architecture to achieve non-parallel singing voice conversion. In [8], neither parallel singing data nor transcribed lyrics or notes are needed.

* Work performed while interning at Tencent AI Lab.
While the above-mentioned methods can efficiently generate singing with new voices, they still require an essential amount of singing voice samples from target speakers. This limits the application of singing generation to relatively restricted scenarios where the target speaker's singing data has to be available. On the other hand, normal speech samples are much easier to collect than singing. There are only limited studies investigating the use of normal speech data to enhance singing generation. The speech-to-singing synthesis method proposed in [10] attempts to convert a speaking voice to singing by directly modifying acoustic features such as the f0 contour and phone durations extracted from reading speech. While speech-to-singing approaches can produce singing from read lyrics, they normally require a non-trivial amount of manual tuning of acoustic features to achieve high intelligibility and naturalness of the singing voice.

The Duration Informed Attention Network (DurIAN) [11], originally proposed for the task of multimodal synthesis, is essentially an autoregressive feature generation framework that can generate acoustic features (e.g., mel-spectrograms) for any audio source frame by frame. In this paper, we propose a DurIAN-based speech and singing voice conversion system (DurIAN-SC), a unified speech and singing conversion framework.1 There are two major contributions in the proposed method: 1) Although the input features for conventional speech synthesis and singing synthesis are different, the proposed framework unifies the training process for both speech and singing synthesis. Thus, in this work, we can even train the singing voice conversion model using only speech data. 2) Instead of the commonly used trainable Look-Up Table (LUT) [8] for speaker embedding, we use a pre-trained speaker embedding network module for speaker d-vector [12, 13] extraction. Extracted speaker d-vectors are then fed into the singing voice conversion network as the speaker embedding to represent the speaker identity. During conversion, only 20 seconds of speech or singing data is needed for the tester's d-vector extraction. Experiments show the proposed algorithm can generate high-quality singing voices when using only speech data. The Mean Opinion Scores (MOS) of naturalness and similarity indicate our system can perform one-shot singing voice conversion with only 20 seconds of a tester's speech data.

1 Sound demos of the proposed algorithm can be found at https://tencent-ailab.github.io/learning_singing_from_speech

Figure 1: Model architecture of DurIAN-SC. RMSE means root mean square energy, FC represents the fully connected layer, Expansion means expanding the time dimension to frame level.

The paper is organized as follows. Section 2 introduces the architecture of our proposed conversion model. Experiments are presented in Section 3, and Section 4 is the conclusion.

2. Model Architecture

2.1. DurIAN-SC

While DurIAN was originally proposed for the task of multimodal speech synthesis, it has many advantages over the conventional end-to-end framework, especially its stability in synthesis and its duration controllability. The original DurIAN model is modified here to perform speech and singing synthesis at the same time. Here we use text/song lyrics as one of the inputs for both speech and singing data. Text or song lyrics are then transferred to a phone sequence with prosody tokens by a text-to-speech (TTS) front-end module.
The commonly used music score is not used in our singing voice conversion framework. Instead, we use frame-level f0 and average Root Mean Square Energy (RMSE) extracted from the original singing/speech as additional input conditions (Fig. 1). For singing voice conversion, the f0 and rhythm are totally decided by the score notes and the content itself, and this is the part we do not convert unless there is a large gap between the source and target speaker's singing pitch range. Further, we found that using RMSE as an input condition in training makes the loss converge much faster.

The architecture of DurIAN-SC is illustrated in Fig. 1. It includes (1) an encoder that encodes the context of each phone, (2) an alignment model that aligns the input phone sequence to the target acoustic frames, and (3) an auto-regressive decoder network that generates target mel-spectrogram features frame by frame.

2.1.1. Encoder

We use the phone sequence x_{1:N} directly as input for both speech and singing synthesis. The output of the encoder h_{1:N} is a sequence of hidden states containing the sequential representation of the input phones:

h_{1:N} = encoder(x_{1:N})    (1)

where N is the length of the input phone sequence. The encoder module contains a phone embedding, fully connected layers, and a CBHG [14] module, which is a combination of a convolution layer, highway network [15], and bidirectional GRU [16].

2.1.2. Alignment model

The purpose of the alignment model is to generate frame-aligned hidden states which are further fed into the auto-regressive decoder. Here, the output hidden sequence from the encoder h_{1:N} is first expanded according to the duration of each phone:

e_{1:T} = state_expand(h_{1:N}, d_{1:N})    (2)

where T is the total number of input audio frames. The state expansion is simply the replication of hidden states according to the provided phone durations d_{1:N}. The duration of each phone is obtained from forced alignment performed on the input source phones and acoustic feature sequences. The frame-aligned hidden states e_{1:T} are then concatenated with the frame-level f0, RMSE, and speaker embedding, as we can see in Fig. 1:

e'_{1:T} = FC(e_{1:T} ∨ f_{1:T} ∨ r_{1:T} ∨ D_{1:T})    (3)

where ∨ indicates concatenation, FC indicates the fully connected layer, f_{1:T} represents the f0 for each frame, D_{1:T} represents the speaker embedding expanded to frame level, and r_{1:T} is the RMSE for each frame.

2.1.3. Decoder

The decoder is the same as in DurIAN, composed of two auto-regressive RNN layers. Different from the attention mechanism used in end-to-end systems, the attention context here is computed from a small number of encoded hidden states that are aligned with the target frames, which reduces the artifacts observed in end-to-end systems [14]. We decode two frames per time step in our system:

y'_{1:T} = decoder(e'_{1:T})    (4)

The output from the decoder network y'_{1:T} is then passed through a post-CBHG [14] to improve the quality of the predicted mel-spectrogram:

ŷ_{1:T} = cbhg(y'_{1:T})    (5)

The entire network is trained to minimize the mel-spectrogram prediction loss, the same as in DurIAN.

2.2. Singing Voice Conversion Process

The training stage is illustrated in Fig. 2, and the conversion stage is illustrated in Fig. 3.

Figure 2: The process diagram of the training stage. The WaveRNN [17] model is trained separately.

Figure 3: The process diagram of the conversion stage.

2.2.1. Data Preparation

Our training dataset is composed of a mix of normal speech data and singing data. A TTS front-end is used to parse text or song lyrics into a phone sequence. Acoustic features including the mel-spectrogram, f0, and RMSE are extracted for every frame of training data. Note that the f0 is extracted with the World vocoder [18]. Since the DurIAN structure needs phone alignment as input, a time delay neural network (TDNN) is employed here to force-align the extracted acoustic features with the phone sequence. Different from normal TTS for Mandarin, which uses phone identity plus 5 tones in the modeling, non-tonal phones are used in our experiment to bridge the gap between speech phones and singing phones. Finally, phone durations can be extracted from the aligned phone sequence.

2.2.2. Speaker embedding network

To provide DurIAN-SC with robust speaker embeddings for Mandarin, external Mandarin corpora are explored to train a speaker embedding network, which is then used as a pre-trained module. The external training set contains 8,800 speakers drawn from two gender-balanced public speech recognition datasets.2 The training data is then augmented 2-fold to incorporate variabilities from distance (reverberation), channel, or background noise, resulting in a training pool with 2.8M utterances. 257-d raw short-time Fourier transform (STFT) features are extracted with a 32ms window and a time shift of 16ms between feature frames. The non-speech part is removed by an energy-based voice activity detection. Each utterance is randomly segmented into 100-200 frames to control the duration variability in the training phase. For the choice of network architecture, we employ a TDNN framework similar to [13, 19]. The speaker embedding training is guided by a multi-task loss, which employs both the large margin cosine loss (LMCL) and the triplet loss [20, 21, 22].

In order to further boost the capability for singing data, an internal singing corpus is incorporated in the speaker embedding training. Since the singing corpus is not provided with speaker labels, we employ bottom-up hierarchical agglomerative clustering (HAC) to assign a pseudo speaker label to each singing segment. Specifically, we first extract speaker embeddings for the singing corpus using the external speaker embedding model. Then, HAC is applied to produce 1000 speaker "IDs" from the training singing corpus (3500 singing segments). Finally, the clustered corpus is pooled with the external speech data for another round of speaker embedding training. The final system is utilized to extract speaker embeddings for speech/singing.

2 http://en.speechocean.com/datacenter/details/254.htm

2.2.3. Training and conversion process

In the training stage, both normal speech and singing data can be used as input training data. The f0, RMSE, phone sequence, and phone durations are extracted as described in Section 2.2.1. Speaker embeddings are extracted using the pre-trained speaker embedding network introduced in the previous section. The DurIAN-SC model is then trained on these extracted acoustic features and speaker embeddings.

In the singing voice conversion stage, f0, RMSE, and phone durations are extracted from the source singing and later used in the conversion process as conditions. Using the pre-trained speaker embedding network, the target speaker embedding can be obtained from the target speaker's singing or speech data with a length of only 20 seconds. By conditioning on the extracted target speaker embedding, a mel-spectrogram can be generated with the target speaker's timbre through the model trained as described above. Finally, WaveRNN [17] is employed as the neural vocoder for waveform generation.

In case there is a large gap between the source and target speaker's singing pitch range, which often happens when performing cross-gender conversion, we shift the original source key linearly to make it easier for the target speaker to 'sing' the same song as the source. The input f0 is multiplied by a factor ν:

ν = mean(x_t) / mean(x_s)    (6)

where x_s is the source singing f0, x_t is the target speaker's enrollment speech or singing f0, and mean represents averaging the f0 across all vowel phones in all the audio by the source or target speaker.

3. Experiments

3.1. Dataset

Two databases are used in our experiments. Database A is a large multi-singer Mandarin singing corpus containing 18 hours of singing data. There are 3600 singing segments from various songs in corpus A, each with an average length of 20 seconds. Each singing fragment is by a different singer. Among all singing fragments, 2600 are by female singers and 1000 are by male singers. This multi-singer singing corpus was recorded by the singers themselves with various recording devices. All songs are downsampled to 24kHz.

Database B is a speech database containing 10 hours of multi-speaker Mandarin normal TTS speech data. There are 3 male speakers and 4 female speakers in this corpus, each with a duration of around 1.5 hours. The sampling rate is also set to 24kHz. In the singing voice conversion experiments, all source singing is chosen randomly from another Mandarin singing corpus C.

3.2. Model Hyperparameters

In our experiment, the dimensions of the phone embedding, speaker embedding, encoder CBHG module, and attention layer are all set to 256. The decoder has 2 GRU layers with 256 dimensions, and batch normalization is used in the encoder and post-net module. We use the Adam optimizer and a 0.001 initial learning rate with a warm-up [23] schedule. In the training stage, a total of 250k steps with a batch size of 32 were trained until convergence.

3.3. Naturalness and Similarity Evaluation

In the singing voice conversion test, Mean Opinion Scores (MOS) on naturalness and similarity to the target speaker are evaluated. The MOS scale is set between 1 and 5, with 5 representing the best performance and 1 the worst. 10 testers participated in our listening test.

3.3.1. Experiment on speaker embedding representation

In this experiment, we compare the singing naturalness and similarity to the target speaker of the proposed d-vector-based speaker embedding and the LUT-based trainable speaker embedding. Two systems are built respectively. The training dataset used here is the 18-hour singing database A introduced in Section 3.1. We use a total of 3500 singing fragments in training. In testing, 3 female and 3 male singers are randomly chosen from the training set for the in-set test. To evaluate the out-of-set singing voice conversion performance, 4 speakers from the speech dataset B are chosen for the test.
Here, only a 20-second segment of singing or speech data is used from each tester for speaker d-vector extraction. As the baseline system, the LUT-based trainable speaker embedding is trained alongside the singing voice conversion DurIAN-SC model. The out-of-set baseline system is not tested because the baseline system cannot convert to unseen targets.

Table 1: Comparison of speaker embedding extraction methods: LUT and speaker D-vector. The 'Target Singer' column indicates whether the target speaker's singing data is used in training.

Method     Target Singer   Naturalness   Similarity
D-vector   in-set          3.70          3.61
LUT        in-set          3.61          3.56
D-vector   out-of-set      3.69          3.10
LUT        out-of-set      -             -

As shown in Table 1, for the in-set test, the proposed D-vector speaker embedding system outperforms the baseline LUT speaker embedding system in both MOS naturalness and similarity by a small margin. This result is in line with expectations. For the baseline trainable LUT speaker embedding system, the speaker embedding is trained alongside the singing voice conversion model, which means the total number of free parameters in the system is actually larger than in the proposed method, especially for 'seen' speakers. On the other hand, because there is only 20 seconds of data per singer in training, it can be hard for the trainable LUT speaker embedding method to learn a really good speaker embedding. Meanwhile, the proposed speaker embedding network is an independent module which is pre-trained on a large amount of extra speaker recognition data. For the out-of-set test, the MOS scores of the proposed method are lower than in the in-set test, especially on similarity. We believe this is a normal result, as the model parameters are not fine-tuned on the 'unseen' speaker's data, and the speaker d-vectors are extracted from only 20 seconds of the target speaker's enrollment speech or singing. Unlike the baseline system, the proposed method at least saves the trouble of fine-tuning and updating model parameters for each new user.

3.3.2. Using a speech corpus in singing voice conversion

To demonstrate that the proposed system can learn singing voice conversion from only speech data, three different systems are trained using: 1) only speech data, 2) a mix of speech and singing data, and 3) only singing data, respectively, for comparison.

Table 2: Singing voice conversion experiments trained with speech data. Dataset indicates the type of training data.

Dataset            Naturalness   Similarity
Speech & Singing   3.71          3.74
Only Speech        3.65          3.71
Only Singing       3.70          3.61

Results in Table 2 show that all three above-mentioned systems have close performance. This interesting result indicates that in the proposed system, speech data can contribute to singing voice conversion as much as singing data. In this case, we can use only speech data when the target's singing data is not available. In our experiments, we noticed that adding some speech data to the singing voice conversion training process gives the generated target singing clearer pronunciation. Speech data in training also helps to improve the singing voice conversion similarity.

4. Conclusion

In this paper, we proposed a singing voice conversion model, DurIAN-SC, with a unified framework for speech and singing data. For speakers with no singing data, our method can convert to their singing voices by training on only their speech data. Through a pre-trained speaker embedding network, we can convert to 'unseen' speakers' singing with only 20 seconds of data.
Experiments indicate the proposed model can generate high-quality singing voices for in-set 'seen' target speakers in terms of both naturalness and similarity. Meanwhile, the proposed system can also perform one-shot conversion for out-of-set 'unseen' users with a small amount of enrollment data. In future work, we will continue to make our model more robust and improve the similarity of 'unseen' singing voice conversion.

5. References

[1] J. Bonada, M. Umbert, and M. Blaauw, "Expressive singing synthesis based on unit selection for the singing synthesis challenge 2016," in INTERSPEECH, 2016, pp. 1230–1234.
[2] M. Nishimura, K. Hashimoto, K. Oura, Y. Nankaku, and K. Tokuda, "Singing voice synthesis based on deep neural networks," in Interspeech, 2016, pp. 2478–2482.
[3] M. Blaauw and J. Bonada, "A neural parametric singing synthesizer modeling timbre and expression from natural songs," Applied Sciences, vol. 7, no. 12, p. 1313, 2017.
[4] M. Blaauw, J. Bonada, and R. Daido, "Data efficient voice cloning for neural singing synthesis," in ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2019, pp. 6840–6844.
[5] K. Kobayashi, T. Toda, G. Neubig, S. Sakti, and S. Nakamura, "Statistical singing voice conversion with direct waveform modification based on the spectrum differential," in Fifteenth Annual Conference of the International Speech Communication Association, 2014.
[6] ——, "Statistical singing voice conversion based on direct waveform modification with global variance," in Sixteenth Annual Conference of the International Speech Communication Association, 2015.
[7] F. Villavicencio and J. Bonada, "Applying voice conversion to concatenative singing-voice synthesis," in Eleventh Annual Conference of the International Speech Communication Association, 2010.
[8] E. Nachmani and L. Wolf, "Unsupervised singing voice conversion," arXiv preprint arXiv:1904.06590, 2019.
[9] A. v. d. Oord, S. Dieleman, H. Zen, K. Simonyan, O. Vinyals, A. Graves, N. Kalchbrenner, A. Senior, and K. Kavukcuoglu, "Wavenet: A generative model for raw audio," arXiv preprint arXiv:1609.03499, 2016.
[10] T. Saitou, M. Goto, M. Unoki, and M. Akagi, "Speech-to-singing synthesis: Converting speaking voices to singing voices by controlling acoustic features unique to singing voices," in 2007 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics. IEEE, 2007, pp. 215–218.
[11] C. Yu, H. Lu, N. Hu, M. Yu, C. Weng, K. Xu, P. Liu, D. Tuo, S. Kang, G. Lei et al., "Durian: Duration informed attention network for multimodal synthesis," arXiv preprint arXiv:1909.01700, 2019.
[12] J. Gonzalez-Dominguez, "Deep neural networks for small footprint text-dependent speaker verification," in ICASSP 2014 - 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2014.
[13] D. Snyder, D. Garcia-Romero, G. Sell, D. Povey, and S. Khudanpur, "X-vectors: Robust dnn embeddings for speaker recognition," in 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2018, pp. 5329–5333.
[14] Y. Wang, R. Skerry-Ryan, D. Stanton, Y. Wu, R. J. Weiss, N. Jaitly, Z. Yang, Y. Xiao, Z. Chen, S. Bengio et al., "Tacotron: Towards end-to-end speech synthesis," arXiv preprint arXiv:1703.10135, 2017.
[15] R. K. Srivastava, K. Greff, and J. Schmidhuber, "Highway networks," 2015.
[16] J. Chung, C. Gulcehre, K. Cho, and Y. Bengio, "Empirical evaluation of gated recurrent neural networks on sequence modeling," arXiv preprint arXiv:1412.3555, 2014.
[17] N. Kalchbrenner, E. Elsen, K. Simonyan, S. Noury, N. Casagrande, E. Lockhart, F. Stimberg, A. van den Oord, S. Dieleman, and K. Kavukcuoglu, "Efficient neural audio synthesis," CoRR, vol. abs/1802.08435, 2018. [Online]. Available: http://arxiv.org/abs/1802.08435
[18] M. Morise, F. Yokomori, and K. Ozawa, "World: a vocoder-based high-quality speech synthesis system for real-time applications," IEICE TRANSACTIONS on Information and Systems, vol. 99, no. 7, pp. 1877–1884, 2016.
[19] X. Ji, M. Yu, C. Zhang, D. Su, T. Yu, X. Liu, and D. Yu, "Speaker-aware target speaker enhancement by jointly learning with speaker embedding extraction," in ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2020, pp. 7294–7298.
[20] C. Zhang and K. Koishida, "End-to-end text-independent speaker verification with triplet loss on short utterances," in Proc. Interspeech 2017, 2017, pp. 1487–1491.
[21] H. Wang, Y. Wang, Z. Zhou, X. Ji, D. Gong, J. Zhou, Z. Li, and W. Liu, "Cosface: Large margin cosine loss for deep face recognition," in Proceedings of CVPR, 2018, pp. 5265–5274.
[22] C. Zhang, F. Bahmaninezhad, S. Ranjan, H. Dubey, W. Xia, and J. H. Hansen, "Utd-crss systems for 2018 nist speaker recognition evaluation," in ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2019, pp. 5776–5780.
[23] P. Goyal, P. Dollár, R. Girshick, P. Noordhuis, L. Wesolowski, A. Kyrola, A. Tulloch, Y. Jia, and K. He, "Accurate, large minibatch sgd: Training imagenet in 1 hour," arXiv preprint arXiv:1706.02677, 2017.
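As a concrete illustration of the alignment module (Eqs. 2-3) and the key-shift factor (Eq. 6) described in Section 2, here is a minimal PyTorch sketch. Tensor shapes, module sizes, and the voiced-frame averaging are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn as nn

class AlignmentModule(nn.Module):
    """Expands phone-level encoder states to frame level (Eq. 2) and fuses them
    with frame-level f0, RMSE, and a speaker embedding via an FC layer (Eq. 3)."""
    def __init__(self, hidden_dim=256, spk_dim=256):
        super().__init__()
        # encoder state + f0 (1) + RMSE (1) + speaker embedding -> fused state
        self.fc = nn.Linear(hidden_dim + 1 + 1 + spk_dim, hidden_dim)

    def forward(self, h, durations, f0, rmse, spk_emb):
        # h: (N, hidden_dim) phone-level states; durations: (N,) frames per phone
        # f0, rmse: (T,) frame-level conditions; spk_emb: (spk_dim,)
        e = torch.repeat_interleave(h, durations, dim=0)   # Eq. 2: (T, hidden_dim)
        T = e.size(0)
        spk = spk_emb.unsqueeze(0).expand(T, -1)           # expand speaker embedding to frame level
        fused = torch.cat([e, f0.unsqueeze(1), rmse.unsqueeze(1), spk], dim=1)
        return self.fc(fused)                              # Eq. 3: e'_{1:T}

def key_shift_factor(source_f0, target_f0):
    """Eq. 6: scale the source f0 toward the target speaker's range. Here the
    mean over voiced (f0 > 0) frames stands in for the vowel-phone average."""
    return target_f0[target_f0 > 0].mean() / source_f0[source_f0 > 0].mean()
```

During cross-gender conversion, the source f0 track would simply be multiplied by the returned factor ν before being fed to the alignment module.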
ai_researcher
3
Scientific_Language_Models_for_Biomedical_Knowledge_Base_Completion_An_Empirical_Study.pdf
Automated Knowledge Base Construction (2020) Conference paper

Scientific Language Models for Biomedical Knowledge Base Completion: An Empirical Study

Rahul Nadkarni1, David Wadden1, Iz Beltagy2, Noah A. Smith1,2, Hannaneh Hajishirzi1,2, Tom Hope1,2
1Paul G. Allen School of Computer Science & Engineering, University of Washington
2Allen Institute for Artificial Intelligence (AI2)
[email protected], [email protected], [email protected], [email protected], [email protected], [email protected]

Abstract

Biomedical knowledge graphs (KGs) hold rich information on entities such as diseases, drugs, and genes. Predicting missing links in these graphs can boost many important applications, such as drug design and repurposing. Recent work has shown that general-domain language models (LMs) can serve as "soft" KGs, and that they can be fine-tuned for the task of KG completion. In this work, we study scientific LMs for KG completion, exploring whether we can tap into their latent knowledge to enhance biomedical link prediction. We evaluate several domain-specific LMs, fine-tuning them on datasets centered on drugs and diseases that we represent as KGs and enrich with textual entity descriptions. We integrate the LM-based models with KG embedding models, using a router method that learns to assign each input example to either type of model and provides a substantial boost in performance. Finally, we demonstrate the advantage of LM models in the inductive setting with novel scientific entities. Our datasets and code are made publicly available.1

1. https://github.com/rahuln/lm-bio-kgc

1. Introduction

Understanding complex diseases such as cancer, HIV, and COVID-19 requires rich biological, chemical, and medical knowledge. This knowledge plays a vital role in the process of discovering therapies for these diseases — for example, identifying targets for drugs [Lindsay, 2003] requires knowing what genes or proteins are involved in a disease, and designing drugs requires predicting whether a drug molecule will interact with specific target proteins. In addition, to alleviate the great costs of designing new drugs, drug repositioning [Luo et al., 2021] involves identification of existing drugs that can be re-purposed for other diseases. Due to the challenging combinatorial nature of these tasks, there is need for automation with machine learning techniques. Given the many links between biomedical entities, recent work [Bonner et al., 2021a,b] has highlighted the potential benefits of knowledge graph (KG) data representations, formulating the associated tasks as KG completion problems — predicting missing links between drugs and diseases, diseases and genes, and so forth.

The focus of KG completion work — in the general domain, as well as in biomedical applications — is on using graph structure to make predictions, such as with KG embedding (KGE) models and graph neural networks [Zitnik et al., 2018, Chang et al., 2020]. In parallel, recent work in the general domain has explored the use of pretrained language models (LMs) as "soft" knowledge bases, holding factual knowledge latently encoded in their parameters [Petroni et al., 2019, 2020].
An emerging direction for using this information for the task of KG completion involves fine-tuning LMs to predict relations between pairs of entities based on their textual descriptions [Yao et al., 2019, Kim et al., 2020, Wang et al., 2021, Daza et al., 2021]. In the scientific domain, this raises the prospect of using LMs trained on millions of research papers to tap into the scientific knowledge that may be embedded in their parameters. While this text-based approach has been evaluated on general domain benchmarks derived from WordNet [Miller, 1995] and Freebase [Bollacker et al., 2008], to our knowledge it has not been applied to the task of scientific KG completion. Our contributions. We perform an extensive study of LM-based KG completion in the biomedical domain, focusing on three datasets centered on drugs and diseases, two of which have not been used to date for the KG completion task. To enable exploration of LM-based models, we collect missing entity descriptions, obtaining them for over 35k entities across all datasets. We evaluate a range of KGE models and domain-specific scientific LMs pretrained on different biomedical corpora [Beltagy et al., 2019, Lee et al., 2020, Alsentzer et al., 2019, Gu et al., 2020]. We conduct analyses of predictions made by both types of models and find them to have complementary strengths, echoing similar observations made in recent work in the general domain [Wang et al., 2021] and motivating integration of both text and graph modalities. Unlike previous work, we train a router that selects for each input instance which type of model is likely to do better, finding it to often outperform average-based ensembles. Integration of text and graph modalities provides substantial relative improvements of 13– 36% in mean reciprocal rank (MRR), and routing across multiple LM-based models further boosts results. Finally, we demonstrate the utility of LM-based models when applied to entities unseen during training, an important scenario in the rapidly evolving scientific domain. Our hope is that this work will encourage further research into using scientific LMs for biomedical KG completion, tapping into knowledge embedded in these models and making relational inferences between complex scientific concepts. 2. Task and Methods We begin by presenting the KG completion task and the approaches we employ for pre- dicting missing links in biomedical KGs, including our model integration and inductive KG completion methodologies. An overview of our approaches is illustrated in Figure 1. 2.1 KG Completion Task Formally, a KG consists of entities E, relations R, and triples T representing facts. Each triple (h, r, t) ∈ T consists of head and tail entities h, t ∈ E and a relation r ∈ R. An entity can be one of many types, with the type of an entity e denoted as T (e). In our setting, each entity is also associated with some text, denoted as text(e) for e ∈ E. This text can be an entity name, description, or both; we use the entity’s name concatenated with its description when available, or just the name otherwise. For example, the fact (aspirin, Scientific Language Models for Biomedical Knowledge Base Completion Figure 1: Illustration of the main methods we apply for biomedical KG completion: (a) LM fine-tuning; (b) KGE models; (c) an approach that combines both; and (d) using an LM to impute missing entities in a KGE model. 
treats, headache) might be an (h, r, t) triple found in a biomedical KG that relates drugs and diseases, with the head and tail entities having types T(h) = drug and T(t) = disease.

Figure 1: Illustration of the main methods we apply for biomedical KG completion: (a) LM fine-tuning; (b) KGE models; (c) an approach that combines both; and (d) using an LM to impute missing entities in a KGE model.

The task of KG completion or link prediction involves receiving a triple (h, r, ?) (where ? can replace either the head or tail entity) and scoring all candidate triples {(h, r, t′) | t′ ∈ S} such that the correct entity that replaces ? has the highest score. For the example listed above, a well-performing model that receives the incomplete triple (aspirin, treats, ?) should rank the tail entity headache higher than an incorrect one such as diabetes. S can be the entire set of entities (i.e., S = E) or some fixed subset. In the transductive setting, the set of facts T is split into a training set T_train and a test set T_test such that all positive triples in the test set contain entities seen during training. In contrast, for inductive KG completion the triples in the test set may contain entities not seen during training (see Section 2.4).

2.2 Methods

Ranking-based KG completion. Each KG completion model in our experiments learns a function f that computes a ranking score s = f(x) for a given triple x = (h, r, t). Models are trained to assign a high ranking score to correct positive triples from the set of known facts T and a low ranking score to triples that are likely to be incorrect. To do so, we use the max-margin loss function

L_rank(x) = (1/N) Σ_{i=1}^{N} max(0, λ − f(x) + f(x′_i)),

where λ is a margin hyperparameter, x ∈ T is a known positive triple in the KG, and x′_i is a negative triple constructed by randomly corrupting either the head or tail entity of x with an entity of the same type.

KG embedding (KGE) models. For each entity e ∈ E and each relation r ∈ R, KG embedding (KGE) models learn a vector representation E(e) ∈ R^m and R(r) ∈ R^n. For a given triple (h, r, t), each model computes the ranking score f(h, r, t) as a simple function of these embeddings (Figure 1b). We include a variety of different KGE models in our experiments, including TransE [Bordes et al., 2013], DistMult [Yang et al., 2015], ComplEx [Trouillon et al., 2016], and RotatE [Sun et al., 2019].

LM-based models. KGE methods do not capture the rich information available from textual descriptions of nodes. To address this limitation, previous KG completion approaches have incorporated textual representations [Toutanova et al., 2015, Wang and Li, 2016], most recently with approaches such as KG-BERT [Yao et al., 2019] that fine-tune the BERT language model (LM) [Devlin et al., 2019] for the task of KG completion. Our focus in this work is on LMs pretrained on corpora of biomedical documents (e.g., PubMedBERT [Gu et al., 2020]; see Appendix B.1.2 for full details). To score a triple using an LM, we use a cross-encoder approach [Yao et al., 2019, Kim et al., 2020] (Fig. 1a), where we encode the text of the head and tail entities together along with the appropriate special tokens.
Specifically, a triple (h, r, t) is encoded as v = LM([CLS] text(h) [SEP] text(t) [SEP]), where v is the contextualized representation of the [CLS] token at the last layer.2 We then apply an additional linear layer with a single output dimension to v to compute the ranking score for the triple (f(x) = W_rank v ∈ R), and train the LM with the same max-margin loss. Recent work on applying BERT for KG completion on general domain benchmarks has shown that multi-task training improves performance [Wang et al., 2021, Kim et al., 2020]. We use the approach of Kim et al. [2020] and incorporate two additional losses for each LM: a binary triple classification loss to identify if a triple is positive or negative, and a multi-class relation classification loss.3

2.3 Integrating KGE and LM: Model Averaging vs. Routing

Previous work using text for KG completion on general domain benchmarks has demonstrated the benefit of combining KGE and text-based models [Xie et al., 2016, Wang et al., 2021]. We study integration of graph-based and text-based methods (Figure 1c), exploring whether learning to route input instances adaptively to a single model can improve performance over previous approaches that compute a weighted average of ranking scores [Wang et al., 2021]. We also explore the more general setup of combining more than two models.

More formally, for a given triple x = (h, r, t), let φ(x) be its feature vector. We can learn a function g(φ(x)) that outputs a set of weights α = [α_1, . . . , α_k], with Σ_i α_i = 1 and α_i > 0 ∀i. These weights can be used to perform a weighted average of the ranking scores {s_1, . . . , s_k} for a set of k models we wish to combine, such that the final ranking score is s = Σ_i α_i s_i. We use a variety of graph-, triple-, and text-based features to construct the feature vector φ(x), such as node degree, entity and relation type, string edit distance between head and tail entity names, and overlap in graph neighbors of head and tail nodes. We explore these features further in Section 4.1, and provide a full list in Appendix B.1.3 (Table 6).

For the function g(·), we experiment with an input-dependent weighted average that outputs arbitrary weights α and a router that outputs a constrained α such that α_i = 1 for some i and α_j = 0, ∀j ≠ i (i.e., α is a one-hot vector).4 In practice, we implement the router as a classifier which selects a single KG completion model for each example by training it to predict which model will perform better.5 For the input-dependent weighted average we train a multilayer perceptron (MLP) using the max-margin ranking loss. We train all models on the validation set and evaluate on the test set for each dataset.

2. We experiment with encoding the relation text as well, but find that this did not improve performance.
3. See details in Appendix B. Wang et al. [2021] omit the relation classification loss and use a bi-encoder; we find that both of these modifications reduce performance in our setting.
4. We also try a global weighted average with a single set of weights; see Appendix B.1.3 for details.
5. We explore a range of methods for the router's classifier, with the best found to be gradient boosted decision trees (GBDT) and multilayer perceptrons (MLP).
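To make Sections 2.2-2.3 concrete, the sketch below scores a triple with a cross-encoder LM and routes between an LM score and a KGE score with a GBDT classifier. The checkpoint name, hyperparameters, and feature choices are illustrative assumptions; the released implementation is in the repository cited above.

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer
from xgboost import XGBClassifier

class CrossEncoderScorer(nn.Module):
    """Scores (h, r, t) from entity text: v = LM([CLS] text(h) [SEP] text(t) [SEP]),
    then f(x) = W_rank v, as in Section 2.2."""
    def __init__(self, name="microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract"):
        super().__init__()
        self.tokenizer = AutoTokenizer.from_pretrained(name)
        self.lm = AutoModel.from_pretrained(name)
        self.w_rank = nn.Linear(self.lm.config.hidden_size, 1)

    def forward(self, head_text, tail_text):
        enc = self.tokenizer(head_text, tail_text, truncation=True, return_tensors="pt")
        v = self.lm(**enc).last_hidden_state[:, 0]   # [CLS] representation
        return self.w_rank(v).squeeze(-1)            # ranking score f(x)

def margin_loss(pos_score, neg_scores, margin=1.0):
    """Max-margin ranking loss over N corrupted negatives (L_rank in Section 2.2)."""
    return torch.clamp(margin - pos_score + neg_scores, min=0).mean()

# Router (Section 2.3): a classifier over per-triple features phi(x) that picks,
# for each example, whichever model (KGE vs. LM) is predicted to rank it better.
router = XGBClassifier(n_estimators=200, max_depth=4)
# X_val: features such as both models' ranking scores, node degrees, text lengths;
# y_val: 1 if the LM ranked the validation triple better than the KGE model, else 0.
# router.fit(X_val, y_val)
# At test time: use the LM score where router.predict(X_test) == 1, else the KGE score.
```

The one-hot routing decision corresponds to the constrained α described above, while replacing the classifier with a small MLP over the same features would recover the input-dependent weighted average.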
Dataset                 #Entities   #Rel   #Positive Edges               Avg. Desc.
                                           Train      Dev.     Test      Length
RepoDB                  2,748       1      5,342      667      668       49.54
Hetionet (our subset)   12,733      4      124,544    15,567   15,568    44.65
MSI                     29,959      6      387,724    48,465   48,465    45.13
WN18RR                  40,943      11     86,835     3,034    3,134     14.26
FB15k-237               14,541      237    272,115    17,535   20,466    139.32

Table 1: Statistics for our datasets and a sample of general domain benchmarks.

When performing ranking evaluation, we use the features φ(x) of each positive example to compute the weights α, then apply the same weights to all negative examples ranked against that positive example.

2.4 Inductive KG Completion

KGE models are limited to the transductive setting where all entities seen during evaluation have appeared during training. Inductive KG completion is important in the biomedical domain, where we may want to make predictions on novel entities such as emerging biomedical concepts or drugs/proteins mentioned in the literature that are missing from existing KGs. Due to their ability to form compositional representations from entity text, LMs are well-suited to this setting. In addition to using LMs fine-tuned for KGC, we try a simple technique using LMs to "fill in" missing KGE embeddings without explicitly using the LM for prediction (Fig. 1d). Given a set of entities E for which a KGE model has trained embeddings and a set of unknown entities U, for each e ∈ E ∪ U we encode its text using an LM to form v_e = LM([CLS] text(e) [SEP]), ∀e ∈ E ∪ U, where v_e is the [CLS] token representation at the last layer. We use the cosine similarity between embeddings to replace each unseen entity's embedding with the closest trained embedding, as E(u) = E(argmax_{e∈E} cos-sim(v_e, v_u)), where e is of the same type as u, i.e., T(e) = T(u).

3. Experimental Setup

3.1 Datasets

We use three datasets in the biomedical domain that cover a range of sizes comparable to existing general domain benchmarks, each pooled from a broad range of biomedical sources. Our datasets include RepoDB [Brown and Patel, 2017], a collection of drug-disease pairs intended for drug repositioning research; MSI (multiscale interactome; [Ruiz et al., 2021]), a recent network of diseases, proteins, genes, drug targets, and biological functions; and Hetionet [Himmelstein and Baranzini, 2015], a heterogeneous biomedical knowledge graph which following Alshahrani et al. [2021] we restrict to interactions involving drugs, diseases, symptoms, genes, and side effects.6 Statistics for all datasets and a sample of popular general domain benchmark KGs can be found in Table 1.

6. More information on each dataset is available in Appendix A.1.

While Hetionet has previously been explored for the task of KG completion as link prediction using KGE models (though not LMs) [Alshahrani et al., 2021, Bonner et al., 2021b], to our knowledge neither RepoDB nor MSI have been represented as KGs and used for evaluating KG completion models despite the potential benefits of this representation [Bonner et al., 2021a], especially in conjunction with textual information. In order to apply LMs to each dataset, we scrape entity names (when not provided by the original dataset) as well as descriptions from the original online sources used to construct each KG (see Table 5 in the appendix). We construct an 80%/10%/10% training/development/test transductive split for each KG by removing edges from the complete graph while ensuring that all nodes remain in the training graph.
We also construct inductive splits, where each positive triple in the test set has one or both entities unseen during training.

3.2 Pretrained LMs and KGE Integration

We experiment with several LMs pretrained on biomedical corpora (see Table 2 and Appendix B.1.2). For each LM that has been fine-tuned for KG completion, we add the prefix "KG-" (e.g., KG-PubMedBERT) to differentiate it from the base LM. We use the umbrella term "model integration" for both model averaging and routing, unless stated otherwise.

Model integration. We explore integration of all pairs of KGE models as well as each KGE model paired with KG-PubMedBERT. This allows us to compare the effect of integrating pairs of KG completion models in general with integrating graph- and text-based approaches. For all pairs of models, we use the router-based and input-dependent weighted average methods. We also explore combinations of multiple KGE models and LMs, where we start with the best pair of KG-PubMedBERT and a KGE model based on the validation set and add either KG-BioBERT or the best-performing KGE model (or the second-best, if the best KGE model is in the best pair with KG-PubMedBERT).

3.3 Evaluation

At test time, each positive triple is ranked against a set of negatives constructed by replacing either the head or tail entity by a fixed set of entities of the same type. When constructing the edge split for each of the three datasets, we generate a fixed set of negatives for every positive triple in the validation and test sets, each corresponding to replacing the head or tail entity with an entity of the same type and filtering out negatives that appear as positive triples in either the training, validation, or test set (exact details in Appendix A.4). For each positive triple, we use its rank to compute the mean reciprocal rank (MRR), Hits@3 (H@3), and Hits@10 (H@10) metrics.

4. Experimental Results

4.1 Transductive Link Prediction Results

We report performance on the link prediction task across all datasets and models in Table 2. While LMs perform competitively with KGE models and even outperform some, they generally do not match the best KGE model on RepoDB and MSI.

                                              RepoDB               Hetionet             MSI
                                              MRR   H@3   H@10     MRR   H@3   H@10     MRR   H@3   H@10
KGE              ComplEx                      62.3  71.1  85.6     45.9  53.6  77.8     40.3  44.3  57.5
                 DistMult                     62.0  70.4  85.2     46.0  53.5  77.8     29.6  34.1  53.6
                 RotatE                       58.8  65.9  79.8     50.6  58.2  79.3     32.4  35.3  49.8
                 TransE                       60.0  68.6  81.1     50.2  58.0  79.8     32.7  36.5  53.8
LM (fine-tuned)  RoBERTa                      51.7  60.3  82.3     46.4  53.6  76.9     30.1  33.3  50.6
                 SciBERT                      59.7  67.6  88.5     50.3  57.1  79.1     34.2  37.9  55.0
                 BioBERT                      58.2  65.8  86.8     50.3  57.5  79.4     33.4  37.1  54.8
                 Bio+ClinicalBERT             55.7  64.0  84.1     43.6  49.1  72.6     32.6  36.1  53.5
                 PubMedBERT-abs               60.8  70.7  89.5     50.8  58.0  80.0     34.3  38.0  55.3
                 PubMedBERT-full              59.9  69.3  88.8     51.7  58.7  80.8     34.2  37.7  55.1
Two models       Best pair of KGE (router)    62.2  70.4  83.7     56.1  65.5  85.4     45.2  50.6  66.2
                 Best KGE + LM (router)       70.6  80.3  94.3     59.7  68.6  87.2     48.5  54.4  70.1
Two models       Best pair of KGE (avg.)      65.2  74.3  87.6     65.3  75.3  90.2     39.8  44.9  62.0
                 Best KGE + LM (avg.)         65.9  74.4  91.5     70.3  78.7  92.2     40.6  44.6  61.2
Three models     2 KGE + 1 LM (router)        72.7  81.6  95.2     62.6  71.7  89.4     50.9  57.1  73.2
                 1 KGE + 2 LM (router)        72.1  82.5  95.7     62.1  71.9  89.5     51.2  57.0  73.0

Table 2: KG completion results. All values are in the range [0, 100], higher is better.
Underlined values denote the best result within a model category (KGE, LM, two models with router, two models with input-dependent weighted average, three models with router), while bold values denote the best result for each dataset.

This echoes results in the general domain for link prediction on subsets of WordNet and Freebase [Yao et al., 2019, Wang et al., 2021]. On all datasets and metrics, the best-performing LM is KG-PubMedBERT, which aligns with results for natural language understanding tasks over biomedical text [Gu et al., 2020]. The biomedical LMs also generally outperform KG-RoBERTa, illustrating the benefit of in-domain pretraining even in the KG completion setting.

Comparing model errors. By examining a selected set of examples in Table 3, we can observe cases where information in text provides LMs an advantage and where a lack of context favors KGE models. KG-PubMedBERT is able to make connections between biomedical concepts – like the fact that a disease that affects the stomach might cause weight loss – and align related concepts expressed with different terminology – like connecting antineoplastic with cancer (a type of neoplasm), or recognizing that an echocardiogram is a technique for imaging the heart. In contrast, RotatE offers an advantage when the descriptions do not immediately connect the two terms (mediastinal cancer, hoarseness), where a description may be too technical or generic to be informative (methylprednisolone, allergic rhinitis), or where no description is available (cefaclor, tubulointerstitial nephritis).7 Furthermore, Fig. 2 shows that KG-PubMedBERT outperforms the best KGE model on a substantial fraction of the test set examples for each dataset.8 These observations motivate an approach that leverages the strengths of both types of models by identifying examples where each model might do better, which leads to our results for model integration.

Figure 2: Fraction of test set examples where each model performs better.

Table 3: Examples from Hetionet where one model ranks the shown positive pair considerably higher than the other. LMs often perform better when there is semantic relatedness between head and tail text, but can be outperformed by a KGE model when head/tail entity text is missing or unrelated. Entity descriptions cut to fit.

Relation: Disease presents Symptom
  RotatE better: Disease: mediastinal cancer; a cancer in the mediastinum. Symptom: hoarseness; a deep or rough quality of voice.
  KG-PubMedBERT better: Disease: stomach cancer; a gastrointestinal cancer in the stomach. Symptom: weight loss; decrease in existing body weight.
Relation: Compound treats Disease
  RotatE better: Compound: methylprednisolone; a prednisolone derivative glucocorticoid with higher potency. Disease: allergic rhinitis; a rhinitis that is an allergic inflammation and irritation of the nasal airways.
  KG-PubMedBERT better: Compound: altretamine; an alkylating agent proposed as an antineoplastic. Disease: ovarian cancer; a female reproductive organ cancer that is located in the ovary.
Relation: Compound causes Side Effect
  RotatE better: Compound: cefaclor; semi-synthetic, broad-spectrum antibiotic derivative of cephalexin. Side Effect: tubulointerstitial nephritis; no description.
  KG-PubMedBERT better: Compound: perflutren; a diagnostic medication to improve contrast in echocardiograms. Side Effect: palpitations; irregular and/or forceful beating of the heart.

7. Table 7 in the appendix shows the drop in performance when one or both entities are missing descriptions.
8. See MRR breakdown by relation type in Fig. 4 in the appendix.

4.2 Model Averaging and Routing

Integrating pairs of models.
Table 2 shows that combining each class of models boosts results by a large relative improvement of 13–36% in MRR across datasets. Moreover, the best-performing combination always includes a KGE model and KG-PubMedBERT rather than two KGE models (Fig. 3), showing the unique benefit of using LMs to augment models relying on KG structure alone. Averaging vs. routing. We also compare the router and input-dependent weighted average approaches of integrating a pair of models in Table 2, with the router-based ap- proach outperforming the weighted average for the best KGE + LM pair on RepoDB and MSI. This presents routing as a promising alternative for integrating KGE and LM models. Since the gradient boosted decision trees (GBDT) router achieves the best validation set 7. Table 7 in the appendix shows the drop in performance when one or both entities are missing descriptions. 8. See MRR breakdown by relation type in Fig. 4 in the appendix. Scientific Language Models for Biomedical Knowledge Base Completion Figure 3: Test set MRR for all pairs of KG completion models using an MLP router. The best combination of a KGE model and KG-PubMedBERT always performs better than the best pair of KGE models, and for RepoDB and Hetionet all pairs involving KG- PubMedBERT outperform all KGE-only pairs. performance in most cases across classifiers and integration methods, we use this method for combinations of more than two models, such as multiple LMs with a single KGE model. Integrating additional models. The bottom of Table 2 shows results for three-model combinations. Adding a third model improves performance compared to the best pair of models, though whether the best third model is an LM or KGE model varies across datasets and metrics. Although there are diminishing returns to including a third model, the three- model combinations provide the best performance for RepoDB and MSI. Interpreting model routing. We compute average feature gain for all datasets, using a GBDT router implemented with XGBoost [Chen and Guestrin, 2016] (see Fig. 5 in the appendix). We find that the most salient features are the ranking scores output by each model, which is intuitive as these scores reflect each model’s confidence. Graph features like node degree and PageRank also factor into the classifier’s predictions, as well as tex- tual features such as entity text length and edit distance between entity names. General concepts such as Hypertensive disease and Infection of skin and/or subcutaneous tissue are central nodes for which we observe KGE models to often do better. KGE models also tend to do better on entities with short, non-descriptive names (e.g., P2RY14), especially when no descriptions are available. Generally, these patterns are not clear-cut, and non- linear or interaction effects likely exist. It remains an interesting challenge to gain deeper understanding into the strengths and weaknesses of LM-based and graph-based models. 4.3 Inductive KG Completion For our inductive KG completion experiments, we use ComplEx as the KGE model and KG-PubMedBERT as our LM-based model, and compare the performance of each method to ComplEx with entity embeddings imputed using the method described in Section 2.4. We use either the untrained PubMedBERT or the fine-tuned KG-PubMedBERT as the LM for retrieving nearest-neighbor (NN) entities (see examples in Table 9 in the appendix). We also compare to DKRL [Xie et al., 2016], which constructs entity representations from text using a CNN encoder and uses the TransE scoring function. 
We use PubMedBERT’s ComplExDistMultRotatETransEDistMultRotatETransEPubMedBERT63.463.263.663.763.762.270.470.870.670.9RepoDBComplExDistMultRotatETransEDistMultRotatETransEPubMedBERT48.056.156.155.055.255.759.659.759.259.6HetionetComplExDistMultRotatETransEDistMultRotatETransEPubMedBERT41.345.041.644.439.838.648.342.843.242.8MSI Nadkarni, Wadden, Beltagy, Smith, Hajishirzi, & Hope RepoDB Hetionet MRR H@3 H@10 MRR H@3 H@10 MRR H@3 H@10 MSI DKRL KG-PubMedBERT ComplEx NN-ComplEx, frozen LM NN-ComplEx, fine-tuned 15.9 15.6 38.8 43.4 0.4 22.3 30.3 0.8 20.1 26.9 28.2 67.5 1.6 31.2 39.4 18.5 17.8 21.6 22.3 0.7 18.4 12.9 3.6 18.1 13.9 31.9 42.8 2.8 32.8 25.5 14.1 13.3 20.2 21.7 0.1 16.9 15.4 0.5 15.8 14.6 22.4 32.2 0.4 23.4 21.4 Table 4: Inductive KG completion results. NN-ComplEx refers to the version of ComplEx with unseen entity embeddings replaced using an LM to find the 1-nearest neighbor, either with PubMedBERT frozen or fine-tuned for KG completion (KG-PubMedBERT). token embeddings as input to DKRL and train with the same multi-task loss. While other methods for inductive KG completion exist, such as those based on graph neural networks [Schlichtkrull et al., 2018, Vashishth et al., 2020, Bhowmik and de Melo, 2020], they require the unseen entity to have known connections to entities that were seen during training in order to propagate information needed to construct the new embedding. In our inductive experiments, we consider the more challenging setup where every test set triple has at least one entity with no known connections to entities seen during training, such that graph neural network-based methods cannot be applied. This models the phenomenon of rapidly emerging concepts in the biomedical domain, where a novel drug or protein may be newly studied and discussed in the scientific literature without having been integrated into existing knowledge bases. As seen in Table 4, ComplEx unsurprisingly performs poorly as it attempts link predic- tion with random embeddings for unseen entities. DKRL does substantially better, with KG-PubMedBERT further increasing MRR with a relative improvement of 21% (Hetionet) to over 2x (RepoDB). Our strategy for replacing ComplEx embeddings for unseen entities performs comparably to or better than DKRL in most cases, with untrained PubMedBERT encodings generally superior to using KG-PubMedBERT’s encodings. In either case, this simple strategy for replacing the untrained entity embeddings of a KGE model shows the ability of an LM to augment a structure-based method for KG completion that is typically only used in the transductive setting, even without using the LM to compute ranking scores. 5. Conclusion and Discussion We perform the first empirical study of scientific language models (LMs) applied to biomed- ical knowledge graph (KG) completion. We evaluate domain-specific biomedical LMs, fine- tuning them to predict missing links in KGs that we construct by enriching biomedical datasets with textual entity descriptions. We find that LMs and more standard KG embed- ding models have complementary strengths, and propose a routing approach that integrates the two by assigning each input example to either type of model to boost performance. Fi- nally, we demonstrate the utility of LMs in the inductive setting with entities not seen during training, an important scenario in the scientific domain with many emerging concepts. Our work raises several directions for further study. 
For instance, several structural differences exist between general-domain and biomedical text that would be interesting to Scientific Language Models for Biomedical Knowledge Base Completion explore in more depth and leverage more explicitly to improve KG completion performance. For example, entities with uninformative technical names – such as protein names that are combinations of numbers and letters (e.g., P2RY14) – appear very often in scientific KGs, and are likely related to the benefit of adding descriptions (Table 7, appendix). The surface forms of entity mentions in the biomedical literature on which the LMs were pretrained tend to be diverse with many aliases, while entities such as cities or people in the general domain often show less variety in their surface forms used in practice. This could potentially be challenging when trying to tap into the latent knowledge LMs hold on specific entities as part of the KG completion task, and likely requires LMs to disambiguate these surface forms to perform the task well. General-domain LMs are also trained on corpora such as Wikipedia which has “centralized” pages with comprehensive information about entities, while in the scientific literature information on entities such as drugs or genes is scattered across the many papers that form the training corpora for the LMs. Previous work [Wang et al., 2021] has also observed that combining graph- and LM-based models improves KG completion results. We provide further analyses into this phenomenon based on textual and graph properties, but a deeper understanding of the strengths and weaknesses of each modality is needed. Interpreting neural models is generally a challenging problem; further work in our setting could help reveal the latent scientific knowledge embed- ded in language models. Importantly, our results point to the potential for designing new models that capitalize on both graph and text modalities, perhaps by injecting structured knowledge into LMs [Peters et al., 2019] or with entity-centric pretraining [Zemlyanskiy et al., 2021]. Finally, our findings provide a promising direction for biomedical knowledge completion tasks, and for literature-based scientific discovery [Swanson, 1986, Gopalakrish- nan et al., 2019]. Acknowledgments This project is supported in part by NSF Grant OIA-2033558 and by the Office of Naval Research under MURI grant N00014-18-1-2670. References Emily Alsentzer, John R. Murphy, Willie Boag, Wei-Hung Weng, Di Jin, Tristan Naumann, and Matthew B. A. McDermott. Publicly Available Clinical BERT Embeddings. In 2nd Clinical Natural Language Processing Workshop, 2019. Mona Alshahrani, Maha A. Thafar, and Magbubah Essack. Application and evaluation of knowledge graph embeddings in biomedical data. PeerJ Computer Science, 7, 2021. Iz Beltagy, Kyle Lo, and Arman Cohan. SciBERT: A Pretrained Language Model for Scientific Text. In EMNLP, 2019. Rajarshi Bhowmik and Gerard de Melo. Explainable Link Prediction for Emerging Entities in Knowledge Graphs. In SEMWEB, 2020. Nadkarni, Wadden, Beltagy, Smith, Hajishirzi, & Hope Kurt Bollacker, Colin Evans, Praveen K. Paritosh, Tim Sturge, and Jamie Taylor. Free- base: A Collaboratively Created Graph Database For Structuring Human Knowledge. In SIGMOD Conference, 2008. Stephen Bonner, Ian P Barrett, Cheng Ye, Rowan Swiers, Ola Engkvist, Andreas Bender, Charles Tapley Hoyt, and William Hamilton. A Review of Biomedical Datasets Relating to Drug Discovery: A Knowledge Graph Perspective. arXiv:2102.10062, 2021a. 
Stephen Bonner, Ian P Barrett, Cheng Ye, Rowan Swiers, Ola Engkvist, and William L Hamilton. Understanding the Performance of Knowledge Graph Embeddings in Drug Discovery. arXiv:2105.10488, 2021b. Antoine Bordes, Nicolas Usunier, Alberto Garc´ıa-Dur´an, Jason Weston, and Oksana Yakhnenko. Translating Embeddings for Modeling Multi-relational Data. In NIPS, 2013. Adam S. Brown and Chirag J. Patel. A standard database for drug repositioning. Scientific Data, 4, 2017. David Chang, Ivana Balaˇzevi´c, Carl Allen, Daniel Chawla, Cynthia Brandt, and Andrew Taylor. Benchmark and Best Practices for Biomedical Knowledge Graph Embeddings. In 19th SIGBioMed Workshop on Biomedical Language Processing, pages 167–176, 2020. Tianqi Chen and Carlos Guestrin. XGBoost: A Scalable Tree Boosting System. KDD, 2016. Daniel Daza, Michael Cochez, and Paul T. Groth. Inductive Entity Representations from Text via Link Prediction. In WWW, 2021. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In NAACL-HLT, 2019. Vishrawas Gopalakrishnan, Kishlay Jha, Wei Jin, and Aidong Zhang. A survey on literature based discovery approaches in biomedical domain. Journal of biomedical informatics, 93: 103141, 2019. Yu Gu, Robert Tinn, Hao Cheng, Michael Lucas, Naoto Usuyama, Xiaodong Liu, Tristan Naumann, Jianfeng Gao, and Hoifung Poon. Domain-Specific Language Model Pretrain- ing for Biomedical Natural Language Processing. arXiv:2007.15779, 2020. Daniel S. Himmelstein and Sergio E. Baranzini. Heterogeneous Network Edge Prediction: A Data Integration Approach to Prioritize Disease-Associated Genes. PLoS Computational Biology, 11, 2015. Bosung Kim, Taesuk Hong, Youngjoong Ko, and Jungyun Seo. Multi-Task Learning for Knowledge Graph Completion with Pre-trained Language Models. In COLING, 2020. Diederik P. Kingma and Jimmy Ba. Adam: A Method for Stochastic Optimization. In ICLR, 2015. Scientific Language Models for Biomedical Knowledge Base Completion Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics, 36:1234 – 1240, 2020. Mark A Lindsay. Target discovery. Nature Reviews Drug Discovery, 2(10):831–838, 2003. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv:1907.11692, 2019. Huimin Luo, Min Li, Mengyun Yang, Fang-Xiang Wu, Yaohang Li, and Jianxin Wang. Biomedical data and computational models for drug repositioning: a comprehensive re- view. Briefings in bioinformatics, 22(2):1604–1619, 2021. George A. Miller. WordNet: a lexical database for English. Commun. ACM, 38:39–41, 1995. F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blon- del, G. Louppe, P. Prettenhofer, R. Weiss, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. Scikit-learn: Machine Learning in Python. J. Mach. Learn. Res., 12:2825–2830, 2011. Matthew E. Peters, Mark Neumann, Robert Logan, Roy Schwartz, Vidur Joshi, Sameer Singh, and Noah A. Smith. Knowledge Enhanced Contextual Word Representations. In EMNLP, 2019. Fabio Petroni, Tim Rockt¨aschel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, Alexander H. Miller, and Sebastien Riedel. Language Models as Knowledge Bases? In EMNLP, 2019. 
Fabio Petroni, Patrick Lewis, Aleksandra Piktus, Tim Rockt¨aschel, Yuxiang Wu, Alexan- der H Miller, and Sebastian Riedel. How Context Affects Language Models’ Factual Predictions. In AKBC, 2020. Nils Reimers and Iryna Gurevych. Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks. In EMNLP, 2019. Camilo Ruiz, Marinka Zitnik, and Jure Leskovec. Identification of disease treatment mech- anisms through the multiscale interactome. Nature communications, 2021. Michael Schlichtkrull, Thomas Kipf, Peter Bloem, Rianne van den Berg, Ivan Titov, and Max Welling. Modeling Relational Data with Graph Convolutional Networks. In ESWC, 2018. Zhiqing Sun, Zhihong Deng, Jian-Yun Nie, and Jian Tang. RotatE: Knowledge Graph Embedding by Relational Rotation in Complex Space. In ICLR, 2019. Don R. Swanson. Fish oil, Raynaud’s syndrome, and undiscovered public knowledge. Per- spectives in biology and medicine, 30(1):7–18, 1986. Nadkarni, Wadden, Beltagy, Smith, Hajishirzi, & Hope Kristina Toutanova, Danqi Chen, Patrick Pantel, Hoifung Poon, Pallavi Choudhury, and Michael Gamon. Representing Text for Joint Embedding of Text and Knowledge Bases. In EMNLP, 2015. Th´eo Trouillon, Johannes Welbl, S. Riedel, ´Eric Gaussier, and Guillaume Bouchard. Com- plex Embeddings for Simple Link Prediction. In ICML, 2016. Shikhar Vashishth, Soumya Sanyal, Vikram Nitin, and Partha Talukdar. Composition-based Multi-Relational Graph Convolutional Networks. In ICLR, 2020. Bo Wang, Tao Shen, Guodong Long, Tianyi Zhou, Ying Wang, and Yi Chang. Structure- Augmented Text Representation Learning for Efficient Knowledge Graph Completion. In WWW, 2021. Zhigang Wang and Juan-Zi Li. Text-Enhanced Representation Learning for Knowledge Graph. In IJCAI, 2016. Ruobing Xie, Zhiyuan Liu, Jia Jia, Huanbo Luan, and Maosong Sun. Representation Learning of Knowledge Graphs with Entity Descriptions. In AAAI, 2016. Bishan Yang, Wen tau Yih, Xiadong He, Jianfeng Gao, and Li Deng. Embedding Entities and Relations for Learning and Inference in Knowledge Bases. In ICLR, 2015. Liang Yao, Chengsheng Mao, and Yuan Luo. KG-BERT: BERT for Knowledge Graph Completion. arXiv:1909.03193, 2019. Yury Zemlyanskiy, Sudeep Gandhe, Ruining He, Bhargav Kanagal, Anirudh Ravula, Ju- raj Gottweis, Fei Sha, and Ilya Eckstein. DOCENT: Learning Self-Supervised Entity Representations from Large Document Collections. In EACL, 2021. Marinka Zitnik, Monica Agrawal, and Jure Leskovec. Modeling polypharmacy side effects with graph convolutional networks. Bioinformatics, 34(13):i457–i466, 2018. Scientific Language Models for Biomedical Knowledge Base Completion Dataset Link RepoDB http://apps.chiragjpgroup.org/repoDB/ Hetionet https://github.com/hetio/hetionet MSI https://github.com/snap-stanford/multiscale-interactome Sources DrugBank UMLS DrugBank Disease Ontology Entrez SIDER MeSH DrugBank Gene Ontology Entrez UMLS Table 5: Links and sources of entity names and descriptions for each dataset. Appendix A. Dataset Construction A.1 Sources RepoDB Drugs in RepoDB have statuses including approved, terminated, withdrawn, and suspended. We restrict our KG to pairs in the approved category. Hetionet was constructed using data from various publicly-available scientific reposito- ries. Following Alshahrani et al. [2021], we restrict the KG to the treats, presents, associates, and causes relation types. This includes interactions between drugs and the diseases they treat, diseases and their symptoms, diseases and associated genes, and drugs and their side effects. 
We use this subset of the full Hetionet dataset to avoid scalability issues that arise when training large Transformer-based language models, inspired by benchmark datasets such as FB15K [Bordes et al., 2013], a subset of the Freebase knowledge base. MSI includes diseases and the proteins they perturb, drug targets, and biological functions designed to discover drug-disease treatment pairs through the pathways that connect them via genes, proteins, and their functions. We include all entities and relation types in the dataset. We collect each of the datasets from the links listed in Table 5. For missing entity names and all descriptions, we write scripts to scrape the information from the resources listed above using the entity identifiers provided by each of the datasets. A.2 Transductive Splits To construct transductive splits for each dataset, we begin with the complete graph, and repeat the following steps: 1. Randomly sample an edge from the graph. 2. If the degree of both nodes incident to the edge is greater than one, remove the edge. 3. Otherwise, replace the edge and continue. Nadkarni, Wadden, Beltagy, Smith, Hajishirzi, & Hope The above steps are repeated until validation and test graphs have been constructed of the desired size while ensuring that no entities are removed from the training graph. We construct 80%/10%/10% training/validation/test splits of all datasets. A.3 Inductive Splits To construct inductive splits for each dataset, we follow the procedure outlined in the “Technical Details” section of the appendix of Daza et al. [2021]. We similarly construct a 80%/10%/10% training/validation/test split of each dataset in the inductive setting. A.4 Negative Validation/Test Triples In order to perform a ranking-based evaluation for each dataset in both the transductive and inductive settings, we generate a set of negative triples to be ranked against each positive triple. To generate negative entities to replace both the head and tail entity of each validation and test positive, we follow the procedure below: 1. Begin with the set of all entities in the knowledge graph. 2. Remove all entities that do not have the same entity type as the entity to be ranked against in the positive triple. 3. Remove all entities that would result in a valid positive triple in either the training, validation, or test sets. 4. Randomly sample a fixed set of size m from the remaining set of entities. We use a value of m = 500 for RepoDB and MSI, and a value of m = 80 for Hetionet (due to the constraints above, the minimum number of valid entities remaining across positive triples for Hetionet was 80). Using a fixed set of entities allows for fair comparison when assessing performance of subsets of the test set, such as when examining the effect of subsets where descriptions are present for neither, one, or both entities (Table 7). Appendix B. Training B.1 Transductive Setting For all individual models, we train the models on the training set of each dataset while periodically evaluating on the validation set. We save the model with the best validation set MRR, then use that model to evaluate on the test set. We also perform hyperparameter tuning for all models, and use validation set MRR to select the final set of hyperparameters for each model. B.1.1 Knowledge Graph Embeddings We use the max-margin ranking loss for all KGE methods. We use a batch size of 512 for all models. 
We train models for 10,000 steps (958 epochs) on RepoDB, 50,000 steps (205 epochs) on Hetionet, and 50,000 steps (66 epochs) on MSI. We evaluate on the validation set every 500 steps for RepoDB and 5,000 steps for Hetionet and MSI. We use the Adam optimizer for training. We perform a hyperparameter search over the following values: Scientific Language Models for Biomedical Knowledge Base Completion • Embedding dimension: 500, 1000, 2000 • Margin for max-margin loss: 0.1, 1 • Learning rate: 1e-3, 1e-4 • Number of negative samples per positive: 128, 256 • Parameter for L3 regularization of embeddings: 1e-5, 1e-6 B.1.2 Language Models Pretrained scientific LMs. We explore various pretrained LMs, with their initialization, vocabulary, and pretraining corpora described below. In particular, we study a range of LMs trained on different scientific and biomedical literature, and also on clinical notes. • BioBERT [Lee et al., 2020] Initialized from BERT and using the same general do- main vocabulary, with additional pretraining on the PubMed repository of scientific abstracts and full-text articles. • Bio+ClinicalBERT [Alsentzer et al., 2019] Initialized from BioBERT with addi- tional pretraining on the MIMIC-III corpus of clinical notes. • SciBERT [Beltagy et al., 2019] Pretrained from scratch with a domain-specific vo- cabulary on a sample of the Semantic Scholar corpus, of which biomedical papers are a significant fraction but also papers from other scientific domains. • PubMedBERT [Gu et al., 2020] Pretrained from scratch with a domain-specific vocabulary on PubMed. We apply two versions of PubMedBERT, one trained on PubMed abstracts alone (PubMedBERT-abstract) and the other on abstracts as well as full-text articles (PubMedBERT-fulltext). We also use RoBERTa [Liu et al., 2019] – pretrained from scratch on the BookCor- pus, English Wikipedia, CC-News, OpenWebText, and Stories datasets – as a strongly- performing general domain model for comparison. For all LMs, we follow Kim et al. [2020] and use the multi-task loss consisting of binary triple classification, multi-class relation classification, and max-margin ranking loss, with a margin of 1 for the max-margin loss. For triple classification, given the correct label y ∈ {0, 1} (positive or negative triple) we apply a linear layer to the [CLS] token representation v to output the probability p of the triple being correct as p = σ(Wtriplev), and use the binary cross entropy loss Ltriple(x) = −y log(p) − (1 − y) log(1 − p). For relation classification over R relation types, we apply a linear layer to v to calculate a probability distribution q over relation classes with q = softmax(Wrelv), and use the cross entropy loss with one-hot vector y ∈ {0, 1}R as the correct relation label: Lrel(x) = − (cid:80)R i=1 yi log qi. The final loss is the equally-weighted sum of all three losses: L(x) = Lrank(x) + Ltriple(x) + Lrel(x). We train for 40 epochs on RepoDB, and 10 epochs on Hetionet and MSI. We evaluate on the validation set every epoch for RepoDB, and three times per epoch for Hetionet and MSI. For RepoDB, Hetionet, and MSI we use 32, 16, and 8 negative samples per positive, respectively. We use the Adam optimizer for training. We perform a hyperparameter search over the following values: Nadkarni, Wadden, Beltagy, Smith, Hajishirzi, & Hope • Batch size: 16, 32 • Learning rate: 1e-5, 3e-5, 5e-5 B.1.3 Integrated Models Global weighted average. 
For the global weighted average, we compute ranking scores for positive and negative examples as the weighted average of ranking scores output by all KG completion models being integrated. Specifically, for a set of ranking scores s1, . . . , sk output by k models for an example, we learn a set of weights α = [α1, . . . , αk] to compute the final ranking score as s = (cid:80)k i=1 αisi, where the same weight vector α is used for all examples. We search for each αi over the grid [0.05, 0.95] with steps of 0.05, ensuring that all αi’s sum to 1. We choose values that maximize validation set MRR, then apply them to the test set. Router. For the router-based method, we train a classifier to select a single model out of a set of KG completion models to use for computing ranking scores for a positive example and its associated negatives. The class to be predicted for a particular example corresponds to which model performs best on that example (i.e., gives the best rank), with an additional class for examples where all models perform the same. We explore a number of different clas- sifiers, including logistic regression, decision tree, gradient boosted decision tree (GBDT), and multilayer perceptron (MLP), finding that GBDT and MLP classifiers perform the best. As input to the classifier, we use a diverse set of features computed from each positive example (listed in Table 6) as well as each model’s ranking score for the positive example. Classifiers are trained on the validation set and evaluated on the test set for each dataset. We additionally perform hyperparameter tuning over the following values for each classifier: Logistic regression: • Penalty: L1, L2 • Regularization parameter: 9 values evenly log-spaced between 1e-5 and 1e3 Decision tree: • Max depth: 2, 4, 8 • Learning rate: 1e-1, 1e-2, 1e-3 GBDT: • Number of boosting rounds: 100, 500, 1000 • Max depth: 2, 4, 8 • Learning rate: 1e-1, 1e-2, 1e-3 MLP: • Number of hidden layers: 1, 2 Scientific Language Models for Biomedical Knowledge Base Completion • Hidden layer size: 128, 256 • Batch size: 64, 128, 256 • Learning rate: 1e-1, 1e-2, 1e-3 We perform five-fold cross-validation on the validation set and use validation set accuracy to choose the best set of hyperparameters for each classifier. We use Scikit-Learn [Pedregosa et al., 2011] to implement the logistic regression and MLP classifiers, and XGBoost [Chen and Guestrin, 2016] to implement the decision tree and GBDT classifiers, using default parameters other than the ones listed above. Input-dependent weighted average. The input-dependent weighted average method of integrating KG completion models operates similarly to the global weighted average, except that the set of weights can vary for each positive example and are a function of its feature vector (the same set of weights is used for all negative examples used to rank against each positive example). We train an MLP to output a set of weights that are then used to compute a weighted average of ranking scores for a set of KG completion models. The MLP is trained on the validation set and evaluated on the test set for each dataset. We use the max-margin ranking loss with a margin of 1. In order to compare to the MLP trained as a router, we train the MLP using the Adam optimizer [Kingma and Ba, 2015] for 200 epochs with early stopping on the training loss and a patience of 10 epochs (the default settings for an MLP classifier in Scikit-Learn). 
We perform a hyperparameter search over the following values (matching the values for the MLP router where applicable): • Number of hidden layers: 1, 2 • Hidden layer size: 128, 256 • Batch size: 64, 128, 256 • Learning rate: 1e-1, 1e-2, 1e-4 • Number of negatives (for max-margin loss): 16, 32 We select the best hyperparameters by MRR on a held-out portion of the validation set. Features for integrated models. Both the router and input-dependent weighted av- erage methods of model integration use a function to outputs weights based on a feature vector of an example. A complete list of the features used by each method can be found in Table 6. We also use the ranking score for the positive example from each KG completion model being integrated as additional features. B.2 Inductive Setting B.2.1 Knowledge Graph Embeddings and Language Models For the KGE and LM models, we follow the same training procedure for the inductive splits as for the transductive splits. We perform hyperparameter tuning over the same grids of hyperparameters, periodically evaluate on the validation set and save the checkpoint with the best validation set MRR, and use the set of hyperparameters corresponding to the highest validation set MRR to evaluate on the test set. Nadkarni, Wadden, Beltagy, Smith, Hajishirzi, & Hope entity type relation type head/tail node in-/out-degree head/tail node PageRank Adamic-Adar index of edge edit dist. between head/tail entity names length of text in chars. presence of word “unknown” in name/desc. missing desc. number/ratio of punctuation/numeric chars. tokens-to-words ratio of entity name/desc. Table 6: Complete list of features used by router classifiers. B.2.2 DKRL In addition to the KGE and LM-based methods, we also train DKRL [Xie et al., 2016] for inductive KG completion as another text-based baseline for comparison. DKRL uses a two-layer CNN encoder applied to the word or subword embeddings of an entity’s textual description to construct a fixed-length entity embedding. To score a triple, DKRL combines its entity embeddings constructed from text with a separately-learned relation embedding using the TransE [Bordes et al., 2013] scoring function. The original DKRL model uses a joint scoring function with structure-based and description-based components; we restrict to the description-based component as we are applying DKRL in the inductive setting. We use PubMedBERT subword embeddings at the input layer of the CNN encoder, encode entity names and descriptions, and apply the same multi-task loss as for the LM-based models. To apply the triple classification and relation classification losses, for head and tail entity embeddings h and t, we apply a separate linear layer for each loss to the concatenated vector [h; t; |h − t|], following previous work on models that use a bi-encoder to construct entity or sentence representations [Wang et al., 2021, Reimers and Gurevych, 2019]. We use the same number of training epochs and number of negatives per positive for DKRL as for the LM-based methods on each dataset. We use a batch size of 64, and perform a hyperparameter search over the following values: • Learning rate: 1e-3, 1e-4, 1e-5 • Embedding dimension: 500, 1000, 2000 • Parameter for L2 regularization of embeddings: 0, 1e-3, 1e-2 Appendix C. Additional Results C.1 Transductive Setting, Individual Models Missing entity descriptions. 
Table 7 shows test set MRR for KG-PubMedBERT on each dataset broken down by triples with either both, one, or neither entities having available descriptions. Across datasets, performance clearly degrades when fewer descriptions are available to provide context for the LM to generate a ranking score. Relation-level performance. Figure 4 shows test set MRR broken down by relation for the datasets with multiple relation types (Hetionet and MSI). KG-PubMedBERT performs better on all relation types except compound-side effect for Hetionet, and on the function- function relation for MSI. Scientific Language Models for Biomedical Knowledge Base Completion #entities with desc. in pair None One Both MRR RepoDB Hetionet MSI N/A 59.5 63.7 25.6 43.6 52.6 25.1 25.4 37.3 Table 7: Effect of descriptions on KG-PubMedBERT test set MRR. Figure 4: Test set MRR for the best KGE model compared to KG-PubMedBERT broken down by relation type for Hetionet and MSI. C.2 Transductive Setting, Integrated Models RepoDB Hetionet MSI Global avg. Input-dep. avg. Router 70.4 65.9 70.6 55.8 70.3 59.7 42.1 40.6 48.5 Table 8: Test set MRR for the best pair of a KGE model and KG-PubMedBERT for different methods of model integration. C.3 Inductive Setting Nadkarni, Wadden, Beltagy, Smith, Hajishirzi, & Hope Figure 5: Feature importances for GBDT router for a selection of most important features. Ranking scores output by each model tend to be the most important, with other graph- and text-based features also contributing. Imputation Model with Better Ranking Unseen Entity PubMedBERT eye redness ecchymosis estrone KG-PubMedBERT keratoconjunctivitis KG-PubMedBERT nearest neighbors skin burning skin discomfort gas, thrombophlebitis sensation, vitamin a, methyltestosterone conjunctivitis allergic, otitis externa malnutrition dehydration, anaemia congestive cardiomyopathy diastolic dysfunction, cardiomyopathy PubMedBERT nearest neighbors conjunctivitis, throat sore petechiae, macule estriol, calcitriol enteritis, parotitis meningism, wasting generalized carcinoma breast, hypertrophic cardiomyopathy Table 9: Samples of unseen entities and their nearest neighbors found by KG-PubMedBERT and PubMedBERT, for test set examples in the Hetionet inductive split where the Pub- MedBERT neighbor performs better than the KG-PubMedBERT neighbor (first three) and vice versa (last three). Each LM offers a larger improvement per example when its nearest neighbor is more semantically related to the unseen entity.
ai_researcher
1
Magnetic_resonance_imaging_as_a_tool_for_quality_control_in_extrusion‐based_bioprinting.pdf
Improvement of Printing Quality for Laser-induced Forward Transfer based Laser- Assisted Bioprinting Process using a CFD-based numerical model Jie Qua,b, Chaoran Douc, Ben Xud*, Jianzhi Lic*, Zhonghao Raob,, Andrew Tsine a. Department of Mechanical Engineering, The University of Texas Rio Grande Valley, Edinburg, TX 78539, USA b. School of Electrical and Power Engineering, China University of Mining and Technology, Xuzhou 221116, China c. Department of Manufacturing and Industrial Engineering, The University of Texas Rio Grande Valley, Edinburg, TX 78539, USA d. Department of Mechanical Engineering, Mississippi State University, Mississippi State, MS 39762, USA e. Department of Molecular Science, The University of Texas Rio Grande Valley School of Medicine, Edinburg, TX, 78539, USA Corresponding author Ben Xu: Tel.: +1 (662) 325-5632; Email: [email protected] Jianzhi Li: Tel.: +1 (956) 665-7329; Email: [email protected] Abstract: As one of the three-dimensional (3D) bioprinting techniques with great application potential, (LAB) laser-induced-forward-transfer (LIFT) based laser assisted bioprinting transfers the bioink through a developed jet flow, and the printing quality highly depends on the stability of jet flow regime. To understand the connection between the jet flow and printing outcomes, a Computational Fluid Dynamic (CFD) model was developed for the first time to accurately describe the jet flow regime and provide a guidance for optimal printing process planning. By adopting the printing parameters recommended by the CFD model, the printing quality was greatly improved by forming stable jet regime and organized printing patterns on the substrate, and the size of printed droplet can also be accurately predicted through a static equilibrium model. The ultimate goal of this research is to direct the LIFT-based LAB process and eventually improve the quality of bioprinting. Keywords: Laser assisted bioprinting; Laser Induced Forward Transfer (LIFT); Bioink; Bubble formation and collapse; Jet Regime; Computational Fluid Dynamics (CFD). 1 INTRODUCTION 3D bioprinting is an emerging technology that has been investigated in fields varying from printing of live cells to biosensors fabrication and from stem cell fabrication to artificial organ generation (1-3). 3D bioprinting has gained special momentum in generation of the 3D functional tissues and organs due to its capability of periodic arrangement of various biological materials in a precisely controlled manner (4). As one kind of the 3D bioprinting techniques, laser assisted bioprinting (LAB) can print biological materials with as small as cell-level resolution, therefore by controlling the cell density and organization, LAB potentially holds a great promise to fabricate living tissues or organs with biomimetic physiological functionality (5). LAB is based on the principle of laser induced forward transfer (LIFT), which was first proposed by Bohandy et al. in 1986 (6) as an accurate solid deposition technology with high resolution. LIFT uses a pulsed laser beam focused through a transparent glass/quartz plate onto a donor layer coated on the other side of the plate to eject a tiny volume of the donor material towards a receiving substrate (7). The bioink transfer in LAB process is believed as the key to the formation and growth of a vapor bubble and a jet because of the rapid evaporation caused by the high energy laser pulse (8, 9). LIFT-based LAB has great advantages over other bioprinting technologies. 
These advantages include non-contact printing, high fabrication precision and high adaptability, supporting different cell patterns with good cell viability (~85%) (2). LIFT has similar functionality to droplet-on-demand inkjet printing (nozzle-based), however, since LIFT is a nozzle-free process, it does not suffer from nozzle clogging and compatibility issues between bioink and nozzle’s materials, which provide the possibility to print bioink with a variety of properties (viscosity, and density etc.) (10). Due to these advantages, LIFT-based LAB has drawn attentions from researchers and practitioners for its potential application in printing tissue or organs (11-16). Nevertheless, the main drawback of LIFT-based LAB is also due to its high resolution, resulting in a low flow rate, therefore it may experience some difficulty to accurately position cells on the receiving substrate (17-19). In addition, even though the nozzle-free feature resolves the clogging issue, it in turn has no restrictions to the flow direction and the jet regime, since the bioink transfer process completely depends on the formation of jet flow, therefore if the flow and jet regime cannot be controlled precisely, the process could suffer from deteriorated printing quality. As shown in Fig.1, when the jet flow is not fully developed, no bioink can be transferred from the coated quartz to the receiving substrate. Even if the bioink can be transferred, there are still two scenarios which may affect the printing process: the plume and the splashing cases, which actually will lead to unorganized printing pattern on the substrate with irregular droplet surrounded by many splashes. Those two printing patterns are not acceptable for precise bioprinting and the scattered droplet distribution strongly influences the final printing quality as well as the cell viability. Fig. 1 shows that only the stable jet can achieve controlled printing patterns with organized and circular droplets, therefore this is the only transfer scenario that allows for precise printing with a good printing quality and high cell viability. Consequently, a deep understanding of the jet flow regime is critical to the adoption of LIFT-based LAB process. As agreed in a few investigations reported, a variety of printing parameters could affect the jet flow regime and in turns the printing patterns on the substrate. These parameters include 2 the pulse laser energy intensity (20-22), the focal spot size (23), the liquid layer thickness, material properties (5, 19) and so on. Therefore, it is extremely difficult to theoretically model the formation of jet flow because of its nature of complex multiphysics and multiscale phenomena involved in the LIFT based LAB process. For example shock wave (24), plasma generation (25) and irradiation (26), are reported in the laser-liquid interaction during the LIFT- based bioprinting process. Meanwhile, the laser-liquid interaction occurs in an extremely fast manner with a typical time duration ranging from 10-10s to 10-12s, while the jet development process could take a time period ranging from 10-3s to 10-6s. These multiscale time duration will certainly complicates the attempt of developing accurate mathematical models. As a result, most reported studies required tedious experimental efforts to explore the relationship between the jet flow regime and the final printing outcomes, in order to fully understand the relationship between the process parameters and the formation of a stable jet. Fig. 1. 
Different Jet regime and corresponding printed pattern (17, 19, 22, 27) Computational Fluid Dynamics (CFD) simulation is a very popular approach to predict the formation of jet and bubble in various multiphase transport processes. It could bring a good opportunity for reducing the tedious experimental efforts required in investigation of the LIFT- based LAB process. However, considering the complex multiphysics phenomena at the very beginning stage of LAB, modeling the laser-liquid interaction process from a multiscale point of view in a very concise and consistent way becomes extremely difficult. Through literature review, while there are investigations that attempted to explain thoroughly the laser-liquid interaction mechanism in LAB, most of the current work either ignored the initial bubble forming process, or relied on experimental observations by missing key information in small scales, based on which, assumptions are made. For example, Brown et al. (28) and Kalaitzis et al. (29) chose to experimentally track the interface deformation during the bioprinting, and then utilized the experimental results as the moving boundary condition to model the liquid movement and the jet. This model highly relied on the earlier experimental results, therefore it 3 is only applicable under specific conditions, such as the same energy input, the same liquid layer thickness, and the same liquid properties. The other model is the initial bubble model (30, 31), which assumes that the input laser energy is converted into the internal and kinetic energy of an initial bubble. Most of the published works, which adopted the initial bubble model, chose the properties and dimensions of the initial bubble (such as the size, pressure and temperature) based on their own experiments. However, the laser energy intensity, the donor layer thickness and the position of laser focal point have strong impacts on the formation of jet and bubble (21), therefore it is extremely hard to extend the reported models to explore the LIFT process when these parameters are changed (30-32). Consequently, it is desired to develop a generalized and solid model to determine the properties and dimensions of such an initial bubble, then this generalized model can be incorporated in the CFD simulation in order to precisely model the entire LIFT based LAB process. In the present work, a novel generalized mathematical model was developed for the first time to accurately determine the size, pressure and temperature of the initial bubble based on the energy conservation law, and then a CFD study by incorporating the proposed generalized mathematical model for the initial bubble was performed for the first time to predict the formation of jet flow and the final printing pattern on the receiving substrate. The proposed CFD-directed simulation model was validated and shown its capability of precise prediction of the jet flow behavior. Furthermore, by utilizing the simulation results as parameters input, a static equilibrium model was employed to accurately predict the size of the printed droplet. Meanwhile, a LIFT-based LAB experimental platform was built and utilized to perform more experimental works by altering the printing parameters, where a femtosecond pulse laser with 1040nm wavelength and a maximum pulse lase energy of 40μJ was adopted in this study, as shown in Fig. 2. Deionized water with dye was selected as the liquid layer for all the experimental cases. 
The printing quality with various printing parameters was analyzed in detail using the proposed CFD model. By adopting the printing parameters recommended by the CFD model, the printing quality was greatly improved by forming stable jet regime and organized printing patterns on the receiving substrate, and the size of printed droplet can be accurately predicted through the static equilibrium model. The ultimate goal of this research is to develop a solid connection by utilizing the proposed CFD model to direct the LAB process and improve the printing quality. 4 Fig. 2. Experimental platform of LIFT process RESULTS AND DISCUSSION In this study, we first performed LIFT bioprinting experiments with non-optimize d parameters, such as the liquid layer thickness and pulse laser energy intensity. Not surprisingly, the unstable jet regime was formed so the printing quality was fairly low with unorganized printing outcomes and irregular droplets on the receiving substrate. CFD study was then performed so that the appropriate combinations of printing parameters were identified, and the bioprinting experiments were conducted one more time to verify the predicted results. Eventually, the printing quality was greatly improved by forming stable jet regime and very organized printing patterns on the receiving substrate. First attempt to obtain well-organized printed droplets As shown in Fig. 3, a laser generator (Spirit One 1040-8) was chosen to generate the pulse laser, and the laser intensity distribution satisfies the Gaussian distribution. The laser’s wavelength is 1040nm, its maximum pulse energy is 40μJ, and the pulse duration is 300fs. In the experiment, every laser pulse was reflected by the mirrors and went through the galvanometer and the focusing lens, eventually focused on the ribbon, which is a quartz with liquid layer coated at the bottom. The radius of laser focal spot is 30μm, and the thickness of the quartz is 0.64cm. Deionized water was selected as the liquid layer. To enhance the absorption rate of deionized water, 1% w.t. of graphene solution was added as a dye, which can also introduce an additional benefit of biocompatibility when the actual bioink is used in the printing process. Since the liquid layer thickness was selected from 1μm to 100μm (7), for the first attempt in this study, an median liquid layer thickness was selected as 50μm while the pulse laser energy was varied from 10μJ to 40μJ. Fig. 3 shows the liquid transfer and printing patterns with 50μm thick liquid layer and various pulse laser energies. It is important to note that there are two mirror lines at the top and bottom part of these figures, because the reflection of the two substrates in the figures. To clearly show the printing patterns, the mirror line at the bottom was marked by a light blue dash line. From Fig. 3A-D, the jet flow is separated as two stages: the first stage shows that a thin jet flow came out from the cone-shape structure as marked by the red dash line, and it only needed a short time period to complete the liquid transfer until 58.8μs, as shown in Fig. 3A-D; the second stage demonstrates the development of the cone-shape structure, which can be developed into two sub-stages: 1) the formation of a jet and a single droplet underneath; 2) the collapse of jet to complete the liquid transfer. The second stage took a longer time to complete than the first stage. For the case with pulse laser energy of 10μJ or under, the first stage needed about 176.4μs to be completed (Fig. 3A). 
However, after 176.4μs, since the pulse laser energy was too small to develop the cone-shape well, the second stage liquid transfer process could not be completed. Once the jet collapsed at 176.4μs, the droplet started to move upward instead of downward. With the development of jet flow, the droplet underneath the ribbon tended to move downward due to the remaining momentum from the bubble’s fast expansion, while the cone-shape structure had a tendency to move upward and bounce back to the liquid layer below the ribbon due to the collapse of bubble and the surface tension, and provided a force pointing 5 upward. The opposite movement direction between the droplet and the cone-shape structure eventually led to the break of jet, as shown in Fig. 3A at 235.2μs. For the case with pulse laser energy of 10μJ, the upward momentum from the cone-shape structure was dominated over other effects, therefore it made the droplet bounce back to the liquid layer. In this case, the printed droplet with 10μJ laser energy has the smallest diameter, as shown in Fig. 3E. For jet flow with laser pulse energy of 20μJ and 30μJ, the first stage can be completed before 117.6μs with stable jet regime (Fig. 3B and C). And the second stage was well developed with the droplet nearly touching the substrate. With a bigger energy input, the remaining downward momentum was dominated, so the underneath droplet moved downward after jet broke, therefore the second liquid transfer stage can also be completed. The major difference between the jet flow with 20μJ and 30μJ pulse laser energy was that the second jet was thicker and more liquid was transferred if 30μJ pulse laser energy was adopted. Apparently, both cases can print reasonably round shape droplets, and the diameters were around 187.5μm and 237.5μm, respectively. And the case with 30μJ laser energy input has a bigger droplet area. A B C D E 0μs 58.8μs 117.6μs 176.4μs 235.2μs 294.0μs 352.8μs 411.6μs 882.0μs 940.8μs 999.6μs 1058.4μs 0μs 58.8μs 117.6μs 176.4μs 235.2μs 294.0μs 352.8μs 411.6μs 705.6μs 764.4μs 823.2μs 882.0μs 0μs 58.8μs 117.6μs 176.4μs 235.2μs 294.0μs 352.8μs 411.6μs 470.4μs 529.2μs 588.0μs 0μs 58.8μs 117.6μs 176.4μs 235.2μs 294.0μs 352.8μs 411.6μs 470.4μs 529.2μs 588.0μs Fig. 3. Liquid transfer and printing patterns with 50μm thick liquid layer. (A) Jet flow with 10μJ pulse laser energy. (B) Jet flow with 20μJ pulse laser energy. (C) Jet flow with 30μJ pulse laser energy. (D) Jet flow with 40μJ pulse laser energy. (E) Printing patterns of 50μm thickness liquid layer with different pulse laser energies. However, once the pulse laser energy was further increased, although the two stages of liquid transfer can be completed, the jet still cannot hold a stable cone shape and was turned 6 into a splashing regime, as shown in Fig. 3D. Both the first and second jets broke into multiple tiny droplets and then scattered. The laser energy input was too big for the liquid layer to hold and develop a stable jet flow. Meanwhile, the printed droplet on the substrate with 40μJ showed a chaotic printing pattern, such that a biggest droplet was surrounded by multiple satellite droplets, as shown in Fig. 3E. The diameter of the biggest droplet was around 43.1μm, while the smallest droplet had only about 2.5μm diameter. Apparently, this type of printing pattern was not acceptable for precise LIFT-based bioprinting, because it would completely ruin the structure of printed tissue or organ. 
In summary, from our first attempt of 3D bioprinting using water as the liquid, we can conclude that only a stable jet could result in well-printed outcomes, and the jet regime can predict the printing pattern based on the input laser energy. Nevertheless, a quantitative analysis cannot be developed with such limited information about the jet formation and jet regime, therefore in the next section we will discuss about the proposed CFD model and simulations. Numerical simulation of the development of bubble/jet flow during LIFT process Since the development of bubble/jet flow in the first stage occurs in a wide span of spatial and temporal scales, it is extremely difficult to monitor the printing process and tune the printing parameters in order to improve the printing quality. In addition, the first stage demonstrates most of the underlined features for the entire LIFT based LAB process, such as bubble growth and jet breakage, therefore if the development of bubble/jet flow in the first stage can be controlled precisely, the printing quality will be significantly improved and obtain well-organized printing patterns. CFD simulation is a powerful and efficient tool which can assist the design process by reducing the tedious experimental efforts. By combining CFD and the bioprinting experiment, CFD can predict the unique features of jet and bubble formation in the first stage, and direct the bioprinting process for better printing quality by recommending reasonable printing parameters based on the relationship between the jet regime and the printing patterns on the substrate. Because the development of bubble/jet flow simulation is a multiphase process, the Volume of Fluid (VOF) model was employed to track the liquid-gas interface. The geometry and meshing configuration of the CFD model are shown in Fig. 4A. The computational domain is part of the LIFT ribbon with various thickness, 800μm width liquid layer and 900μm air in length. Only half of the model was meshed and simulated because of the axisymmetric geometry. A structured mesh was used in this case, and the mesh near the boundary was refined. The boundary conditions are also shown in Fig. 4A. The right side of the liquid layer was defined as pressure-inlet while that of air zone was defined as pressure-outlet. Besides the axisymmetric boundary condition at the axis, other boundaries were all defined as “wall”. The parameters of initial bubbles were patched before simulation started. The vapor was set as the ideal gas while the liquid and air were assumed incompressible fluid. Considering the jet flow regime observed in the experiment, the laminar model was selected in the simulation. To validate the current model with the published experimental works, one case with 100μm thick 65%-glycerol layer and 717 mJ/cm2 laser fluence was simulated and compared with the experimental results from literature (17), as shown in Fig. 4D. Fig. 4B shows the simulation results of LIFT process with 100μm 65%-glycerol layer and 7 717 mJ/cm2 laser fluence. It clearly demonstrated the entire development of jet flow, including generation and breakage. Firstly, once the high-energy laser pulse hit the liquid layer, the rapid evaporation of liquid generated a high pressure and high temperature initial vapor bubble. Due to the high pressure vapor inside the bubble, the initial bubble expanded rapidly. Because the quartz can be considered as a rigid wall boundary condition, the initial bubble expanded asymmetrically to a cone shape. 
With the bubble expansion, the high pressure inside the bubble was released and decreased. Once the pressure inside the bubble became lower than the outside atmosphere pressure, the bubble began to collapse. At this time, liquid around the tip of the bubble moved downward due to the remaining momentum from the fast bubble expansion, and then the jet flow was formed at the tip of the bubble. Meanwhile, because the viscous forces and surface tension, a reversed jet inside the bubble was also generated (23), as shown in Fig. 4C. With the development of both jets, the reversed jet reached a much higher velocity than that of the primary jet, for instance, the velocity of the primary jet was about 49m/s, while that of the reversed jet was 87m/s. This phenomenon was because the reversed jet was much smaller than the primary jet, and the pressure inside the bubble was lower than the outside ambient pressure. A C B D 8 Fig. 4. Simulation of jet flow. (A) Simulation model geometry and meshing configuration. (B) Jet flow with 100μm thickness 65% glycerol layer and 717 mJ/cm2 laser fluence. (C) Velocity of jet flow at 8μs. (D) Comparison with experimental results (17). A comparison between the simulations and experimental results was also provided in Fig. 4D, where all the experimental conditions were maintained the same as the simulation. The length of jet in the simulation was slightly longer than that of the experiment, and the relative difference between the simulation and experiment was around 14%. Considering the associated numerical error, the proposed CFD model can be validated in a reasonable range, therefore it is trustworthy for other studies in order to identify the appropriate printing parameters for good printing quality. Since the experimental results already showed that the liquid transfer and printing pattern were unacceptable for 50μm thick liquid layer and 40μJ pulse laser input, cases with different liquid layer thickness (50μm, 100μm, 150μm) with pulse laser energy of 40μJ were studied in this section to obtain an optimized layer thickness. Meanwhile, cases with 100μm thick liquid layer and various pulse laser input (10μJ, 20μJ, 30μJ and 40μJ) were also simulated to study the effect of pulse laser energy. Once all the simulations were completed, in the next section experiments were carried out by adopting the recommended printing parameters from the simulations. The printing parameters used in simulations and experiments are shown in Table.1. Table 1. Parameters for simulations and experiments Pulse laser Pulse laser Pulse laser Pulse laser energy-10μJ energy-20μJ energy-30μJ energy-40μJ Liquid layer thickness-50μm E-1 Liquid layer thickness -100μm E-5 / S-2 Liquid layer thickness -150μm Note: E: experiment; S-simulation. N/A E-2 E-6 / S-3 N/A E-3 E-7 / S-4 N/A E-4 / S-1 E-8 / S-5 E-9 / S-6 The simulation results of LIFT process with 50μm liquid layer and 40μJ (S-1) are shown in Fig. 5A. As discussed before, the initial bubble expanded rapidly at first. However, the liquid layer could not hold the rapid bubble expansion and therefore it was broken at about 0.5μs. Apparently the stable jet could not be formed for this case, therefore it showed a good agreement with the experiment in Fig. 3C. With the breakage of the bubble, the high pressure and high temperature vapor inside were released and then mixed with the ambient. With the same pulse laser energy input, increasing the liquid layer thickness would help to generate a stable jet. As shown in Fig. 
As shown in Fig. 5B and C, when the liquid layer thickness was increased from 50 μm to 100 μm (S-5) and 150 μm (S-6), the bubble first broke open, and the flow then kept developing and formed a regular jet. Because a thicker liquid layer was more capable of holding the vapor bubble, the bubble development was more robust. It is noteworthy that the jet length for S-5 was always greater than that for S-6 at the same instant, as shown in Fig. 5D. The reason is that a thicker liquid layer provides greater flow resistance to slow down the rapid bubble expansion under the same laser energy input. For both S-5 and S-6, the jet length showed a linear relationship with time. The maximum velocities of the jet flow with different liquid layers are shown in Fig. 5E. For S-5, the maximum jet flow velocity reached 157 m/s at 1 μs, while the maximum jet flow velocity was 89.4 m/s for S-6. In addition, during the bubble expansion the maximum velocity decreased until the tip of the jet flow was generated. After that, the velocity increased slightly, at 4 μs for S-5 and at 6 μs for S-6, respectively. This is because the liquid tip was less affected by surface tension once it bulged out from the liquid film, and therefore kept developing into a longer jet. Furthermore, the maximum velocity eventually became stable for each case; for example, the stable maximum velocity of S-5 was about 110 m/s, while it was about 60 m/s for S-6.

Fig. 5. Simulation of jet flow. (A) Jet flow with a 50 μm thick liquid layer and 40 μJ pulse laser energy. (B) Jet flow with a 100 μm thick liquid layer and 40 μJ pulse laser energy. (C) Jet flow with a 150 μm thick liquid layer and 40 μJ pulse laser energy. (D) The length of jet flow with different liquid layer thicknesses. (E) The maximum velocity of jet flow with different liquid layer thicknesses.

The simulation results of a 100 μm liquid layer with various laser energy inputs are shown in Fig. 6. The bubble expansion and jet formation process of S-5 was already discussed, and these processes were very similar for the other cases with smaller laser energy input. Nevertheless, the bubble size, jet length and velocity differed among those cases. At the same simulation instant, the size of the expanded bubble increased with increasing pulse laser energy, and the jet length increased with increasing pulse laser energy as well. For a given pulse laser energy input, the jet length and its time duration showed a linear relationship, but the relationship between the jet length at the same instant and the pulse laser energy was nonlinear (Fig. 6D). The jet velocity also increased with increasing pulse laser energy. As the jet developed, the velocity remained almost constant after 4 μs. The velocity of the stable jet flow with 10 μJ (S-2) was about 25 m/s, while it was around 70 m/s for S-3 with 20 μJ, an increase of about 180%. However, when the pulse energy changed from 20 μJ (S-3) to 30 μJ (S-4), the velocity increased by only 33.3%, which again shows a nonlinear relationship between the velocity and the laser energy input. Fig. 6G shows the mass flow rate versus time for the cases with different laser energies.
The mass flow rate was defined as the amount of liquid moving downward through the initial liquid-air interface per unit time. As seen in Fig. 6G, the mass flow rate decreased as the jet flow developed. Even though the tip of the jet flow maintained a similar velocity, the jet flow as a whole was slowed down by the bubble collapse, and the adhesion force also provided flow resistance. In addition, the relationship between the mass flow rate and the pulse laser energy was also nonlinear (Fig. 6H).

To summarize this section, by adopting the proposed CFD model, cases with various liquid layer thicknesses and laser energies were investigated numerically, and the development of the bubble expansion and jet flow was clearly described. When the liquid layer thickness was increased from 50 μm to 100 μm for the cases with 40 μJ pulse laser energy, the jet developed from the unstable jet regime to the stable jet regime. With increasing pulse laser energy for the cases with a 100 μm thick liquid layer, both the length and the velocity of the jet increased. Based on the simulation results, a stable jet can be obtained with a 100 μm liquid layer over the full range of pulse laser energies from 10 μJ to 40 μJ. In conclusion, for pulse laser energies from 10 μJ to 40 μJ, the CFD simulations recommend a liquid layer thickness of around 100 μm for better printing quality.

Fig. 6. Simulation results of jet flow with different laser energies. (A) Jet flow with a 100 μm thick liquid layer and 10 μJ pulse laser energy. (B) Jet flow with a 100 μm thick liquid layer and 20 μJ pulse laser energy. (C) Jet flow with a 100 μm thick liquid layer and 30 μJ pulse laser energy. (D) Jet flow with a 100 μm thick liquid layer and 40 μJ pulse laser energy. (E) The length of jet flow with different laser energies. (F) The maximum velocity of jet flow with different laser energies. (G) The mass flow rate of jet flow versus time with different laser energies. (H) The mass flow rate of jet flow versus laser energy.

Printed droplets after optimization

In this section, we used the recommended printing parameters to print droplets experimentally and to establish the connection between the size of the printing pattern and the characteristics of the jet flow. The liquid transfer of 150 μm and 100 μm thick water layers with 40 μJ pulse laser energy is shown in Fig. 7A and B. No jet flow or liquid transfer was observed when the liquid layer thickness was 150 μm (Fig. 7A). Compared with the 50 μm liquid layer (Fig. 3), the same pulse laser energy input could not provide adequate pressure to overcome the larger flow resistance. The generated bubble could still expand, but it only formed a peak at 117.6 μs and started to collapse afterwards. At about 400 μs, the upper liquid layer returned to a flat surface. When the liquid layer thickness was 100 μm (Fig. 7B), a complete process comprising both the first and second stages of jet flow was observed. The jet flow in the first stage connected with the substrate at 58.8 μs, while in the second stage it connected with the substrate at about 235.2 μs.
This happened because the laser energy input was large enough to drive the jet flow to a sufficient length, while at the same time the jet with the 100 μm thick liquid layer (E-8 in Table 1) was robust enough not to break during jet development. Later, the linkage between the liquid layer and the substrate became thinner and thinner, and it eventually detached from the top liquid layer between 411.6 μs and 470.4 μs, as shown in Fig. 7B. The broken linkage finally formed a droplet due to surface tension and fell onto the substrate, completing the second stage of liquid transfer.

Fig. 7. Liquid transfer with different liquid layer thicknesses and pulse laser energies (frames from 0 to 705.6 μs at 58.8 μs intervals). (A) Jet flow of a 150 μm thick liquid layer with 40 μJ pulse laser energy. (B) Jet flow of a 100 μm thick liquid layer with 40 μJ pulse laser energy.

More cases with a 100 μm thick liquid layer and different pulse laser energy inputs were investigated experimentally, and the liquid transfer and printing patterns are shown in Fig. 8. As predicted by the CFD studies in the previous section, a stable jet formed for the cases with a 100 μm thick liquid layer and pulse laser energies from 10 μJ to 40 μJ. The test results showed that the jet flow process with the 100 μm thick liquid layer exhibited very similar phenomena to the case with the 50 μm thick liquid layer, as shown in Fig. 8A-C: the connection between the jet and the liquid layer became much thinner while maintaining the liquid transfer, and a separated droplet (marked by the blue dashed circle) formed on top of the primary droplet. The gourd-shaped droplet also formed and can be observed at 235.2 μs in E-5 and at 294.0 μs in E-6 and E-7. However, the gourd-shaped droplet was not detected when the liquid layer thickness was 50 μm. The jet therefore remained more robust for the cases with the thicker liquid layer; for instance, the jet thicknesses in E-2 and E-6 were 33.8 μm and 92.9 μm at 176.4 μs, respectively.

Fig. 8A-C shows the moving trajectory of the separated droplet, marked by the blue dashed circle. For the liquid transfer process with the 50 μm thick liquid layer, the velocity of the separated droplet increased with increasing pulse laser energy, and when the pulse laser energy reached 40 μJ, the jet regime changed from the stable jet to the splashing jet mode. In contrast, the velocity of the separated droplet was not strongly affected by the pulse laser energy input for the case with the 100 μm thick liquid layer (Fig. 8F), probably because the separated droplet was almost static for the stable jet, with an initial velocity of nearly zero. With the assistance of gravity, the separated droplet then fell onto the receiving substrate. Given the short distance between the two substrates and the short time period, the falling velocity was about the same for all cases with different pulse laser energy inputs. However, when the pulse laser energy reached 40 μJ (Fig. 8D), the jet flow could directly connect with the substrate, and no separated droplets were formed. Similarly, the size of the printed droplet on the substrate increased with increasing pulse laser energy.
As shown in Fig. 8G, for the same pulse laser energy, the printed droplet on the substrate was bigger for the case with the 100 μm thick liquid layer than for the case with the 50 μm thick liquid layer. Considering the unstable jet regime of the liquid transfer process with a 50 μm liquid layer at pulse laser energies of 10 μJ and 40 μJ, the droplet sizes of those cases were not typical. For the case with the 100 μm thick liquid layer, the droplet size showed a linear relationship with the pulse laser energy, which confirms the conclusions of Lin et al. (8) and Kattamis et al. (33): their results also indicated that the LIFT process, both with and without an absorption layer, shows a linear relationship between droplet size and laser energy input, and therefore these processes share a similar liquid transfer mechanism.

Fig. 8. Liquid transfer and printing patterns with a 100 μm thick liquid layer (frames at 58.8 μs intervals). (A) Jet flow with 10 μJ pulse laser energy. (B) Jet flow with 20 μJ pulse laser energy. (C) Jet flow with 30 μJ pulse laser energy. (D) Jet flow with 40 μJ pulse laser energy. (E) Printing patterns with different pulse laser energies. (F) The movement velocity of the separated droplet. (G) Printed droplet size for different liquid layer thicknesses and pulse laser energies.

Based on the discussion above, the printing parameters recommended by the CFD simulation were proven to ensure a stable jet regime and to improve the printing quality. Because the initial jet flow significantly affects the printing quality and the size of the printed patterns on the substrate, a quantitative analysis is desired to reveal the relationship between the jet flow and the size of the printing pattern. A regression curve fit and a static equilibrium model were developed in this study to predict the size of the printed droplet using the simulation results as input parameters, and the experimental results were used to verify the prediction. The flow chart of the comparison strategy between simulation and experimental results is shown in Fig. 9. Based on the conclusion of van Dam and Le Clerc (34), the velocity and the volume of the droplet are the two main factors that influence the size of the printing pattern. Since the moving velocity of the jet flow was already obtained from the simulations, the transferred liquid volume can be plotted against the mass flow rate obtained from the simulation, as shown in Fig. 10A. The volume of the transferred liquid and the mass flow rate show a linear relationship, and a regression model can be obtained as V = 1.13×10-6 ṁ, where V is the transferred liquid volume and ṁ is the mass flow rate. The coefficient of this curve-fitting equation, 1.13×10-6, is related to the development time of the jet flow and the distance between the two substrates.
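This curve-fitting step is a one-parameter least-squares fit of a line through the origin. The Python sketch below illustrates it; the (ṁ, V) pairs are hypothetical placeholders standing in for the simulation outputs of the four laser energies, not the actual data of Fig. 10A.

```python
# Illustrative least-squares fit V = k * mdot through the origin.
import numpy as np

mdot = np.array([0.7, 1.1, 1.4, 1.6])    # mass flow rate, g/s (hypothetical)
V = np.array([0.79, 1.24, 1.58, 1.81])   # transferred volume, nL (hypothetical)

k = float(np.dot(mdot, V) / np.dot(mdot, mdot))   # closed-form slope
residual = V - k * mdot
r2 = 1.0 - np.sum(residual**2) / np.sum((V - V.mean())**2)
print(f"V = {k:.3f} * mdot   (R^2 = {r2:.4f})")
# The paper's reported coefficient is 1.13e-6 in its own unit system.
```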
Fig. 9. Flow chart of the comparison strategy between simulation results and experimental results: the mass flow rate from the CFD model is converted, through the curve-fitting equation, into a predicted transferred liquid volume, which is compared with the transferred liquid volume measured experimentally; both volumes are then fed into the static equilibrium model to predict the size of the printed droplet, which is finally compared with the experimentally measured droplet size.

In addition, a mathematical model can be used to predict the maximum size of the printed droplet on the receiving substrate, which is then compared with the experiment, as shown in Fig. 9. Since the droplet on the substrate is in a static state, its size is determined only by the volume, the surface tension of the liquid, and the surface properties of the substrate. Assuming the droplet takes the shape of part of a sphere, the static equilibrium equation is given as Eq. (1) (35), where σ is the surface tension, θ is the contact angle, ρ is the density, r is the radius of the droplet, z is the height of the droplet, R is the radius of the sphere, and l is the arc length of the droplet. The quantities z, R and l are defined as follows: z is obtained by inverting the spherical-cap volume relation V = (πz/6)(3r² + z²) for the droplet height (Eq. (2)), and

R = (r² + z²) / (2z),    (3)

l = R arcsin(r/R),    (4)

where V is the volume of the droplet. Based on the discussion above, the volume can be obtained either from the experimental results or from the prediction calculated with the curve-fitting equation. Eq. (1) was adopted to calculate the size of the printed droplet using the transferred liquid volume from both the experimental results and the predicted results, and the comparison between experiment and simulation is shown in Fig. 10B. Both the simulation and the experimental results showed that the size of the printed droplet increased with increasing pulse laser energy. Meanwhile, the printed droplet sizes calculated from the experimentally measured transferred liquid volume and from the predicted volume both agreed well with the directly measured droplet size, while the simulation results using the curve-fit prediction as input were closer, especially at a pulse laser energy of 40 μJ. Utilizing this static equilibrium model directly connects the size of the printed droplet with the simulation results, as shown in Fig. 10C. Furthermore, the static equilibrium model can be combined with the proposed CFD model to predict the jet flow regime and the size of the printed droplet, providing a valuable guideline for the design of the experimental process. Finally, a well-organized pattern of different letters printed on the receiving substrate is shown in Fig. 10D, where UT stands for the University of Texas and CUMT is the abbreviation of China University of Mining and Technology; this is a successful demonstration of CFD-based improvement of printing quality for the LIFT-based LAB process.
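For a quick order-of-magnitude estimate without solving the full static equilibrium equation (1), one can neglect the gravity term and treat the sessile droplet as a pure spherical cap. The sketch below does this in Python; the transferred volume and the contact angle are assumed illustrative values, so this is a simplified stand-in for the model of Eq. (1), not the paper's calculation.

```python
# Spherical-cap estimate of the printed footprint radius, neglecting gravity:
# V = (pi/3) R^3 (2 - 3 cos(theta) + cos(theta)^3), footprint r = R sin(theta).
import numpy as np

def cap_footprint_radius(V, theta):
    """V in m^3, theta in radians; returns footprint radius r in m."""
    f = 2.0 - 3.0 * np.cos(theta) + np.cos(theta) ** 3   # cap shape factor
    R = (3.0 * V / (np.pi * f)) ** (1.0 / 3.0)           # sphere radius
    return R * np.sin(theta)

V = 1.5e-12                  # 1.5 nL transferred volume (illustrative)
theta = np.deg2rad(60.0)     # assumed water/substrate contact angle
d = 2.0e6 * cap_footprint_radius(V, theta)
print(f"printed droplet diameter ~ {d:.0f} um")
```

With these assumed inputs the estimate lands in the same few-hundred-micrometer range as the measured droplets, which is why the gravity-free cap is a reasonable first check before applying Eq. (1).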
Fig. 10. (A) Prediction of the transferred liquid volume from the mass flow rate of the simulation results. (B) Experimental and simulation results of printed pattern size. (C) Experimental and simulation results of printed pattern size versus mass flow rate. (D) Printed pattern after optimization (UT: University of Texas; CUMT: China University of Mining and Technology).

CONCLUSIONS

The major contribution of this work is the development of a CFD model to guide LIFT-based LAB, for the first time in the bioprinting research community. This model provides an opportunity to quantitatively predict the generation and development of the bubble and jet flow in the LIFT-based LAB process, and ultimately to improve the final printing quality by adopting the appropriate printing parameters recommended by the CFD model. The numerical model was validated against the experimental results, and good agreement was achieved in terms of the size of the printed droplet. By utilizing the proposed CFD model, this study demonstrated a successful example of a well-printed pattern, as shown in Fig. 10D. The key conclusions are as follows:

(1) The liquid layer thickness strongly affects the formation and development of the jet flow. A thin liquid layer cannot maintain the jet flow due to the rapid bubble expansion under a large pulse laser energy input; the jet eventually breaks and reaches the splashing jet regime. Furthermore, the jet cannot form when the liquid layer is too thick.

(2) For all the stable jets investigated in this study, the jet length and the time duration of the jet flow showed a linear relationship, as shown in Fig. 5D. As the jet developed, the velocity of the jet flow remained almost constant. A reversed jet inside the bubble was also observed, caused by the viscous forces and surface tension.

(3) For cases with the same liquid layer thickness, the size of the printed droplet and the velocity and length of the jet flow all increased with increasing pulse laser energy, as shown in Fig. 6 and Fig. 8G.

(4) Utilizing the simulation results, the volume of liquid transferred through the LIFT-based LAB process could be accurately predicted. With the assistance of the static equilibrium model describing the static balance between droplet and substrate, the size of the printed droplet can also be predicted, as shown in Fig. 10B and C.

MATERIALS AND METHODS

Description of experiments

The experimental platform is shown in Fig. 1. An XY stage (Pro115LM, Aerotech) was used to move the substrate up and down to obtain different print patterns. A light source (HL150-A, Fisher Scientific) provided a sharp background, and a high-speed camera (Phantom VEO 410L) was adopted to monitor and record the LIFT printing process. Several high-magnification zoom lenses (Navitar) were used to obtain videos and images at high resolution. The frame rate was set to 57,000 fps and the exposure time was fixed at 3 μs. In addition, a microscope (LEICA MC 170 HD) was used to observe and record the printed droplet patterns on the substrate for further analysis.

Modeling - initial bubble parameters

As shown in Fig. S1, the laser energy distribution in this study was taken as a Gaussian distribution (25, 36, 37),

E(r) = E₀ / (√(2π) σ) exp(−r² / (2σ²)),    (1)

where E(r) is the pulse energy at the position of interest r, E₀ is the total pulse energy, and σ is the spatial standard deviation of the laser beam profile; the ±3σ interval covers 99.7% of the distribution.
Due to the Gaussian distribution of the laser energy, the energy increases toward the center and decreases toward the edge. We assume that a threshold laser fluence exists that defines the laser interaction diameter, and that only the liquid layer inside this interaction area absorbs the laser energy input for phase change and temperature increase. The threshold can be defined by dividing the energy input at the interaction radius r_T by the area of a thin ring around r_T:

F_T = E(r_T) / [π(r_T + Δr)² − π(r_T − Δr)²],    (2)

where F_T is the threshold of the laser interaction fluence, r_T is the laser interaction radius, and Δr is half of the width of the ring near the laser interaction radius. For different types of lasers and liquids, the threshold of the laser interaction fluence will differ. After calculating the laser interaction radius r_T, the energy absorbed by the liquid layer can be calculated by integrating the laser energy distribution from −r_T to r_T,

E_a = ∫ from −r_T to r_T of E(r) dr,    (3)

where E_a is the energy absorbed by the liquid layer.

Fig. S1. Gaussian distribution of the pulse laser energy and the actual interaction area.

Considering the extremely short interaction period between the pulse laser and the liquid, we assume that an initial bubble exists inside the coated liquid layer after the laser interaction with the liquid, and that this initial bubble has the same size as the laser interaction diameter (30, 31). Without considering the effect of pressure change, the latent heat and the sensible heat can be calculated by Eqs. (4) and (5),

E_L = (4/3) π r_T³ ρ_l h_fg,    (4)

E_S = (4/3) π r_T³ ρ_l c_p (T_i − T_e),    (5)

where ρ_l is the density of the liquid layer, h_fg is the latent heat, c_p is the specific heat capacity, and T_i and T_e are the initial temperature of the initial bubble and the environmental temperature, respectively. The sum of the latent heat and the sensible heat should equal E_a, the total energy absorbed by the liquid layer:

E_L + E_S = E_a.    (6)

In addition, the pressure P_i inside the initial bubble can be calculated by Eq. (7), where ρ_v is the density of the vapor at the initial temperature and P_e is the atmospheric pressure.
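The chain of Eqs. (1)-(6) can be illustrated with a short numerical sketch. In the Python code below, the pulse energy matches one experimental case, but the beam width and the interaction radius r_T are assumed values (they are not reported numerically here), and the initial pressure of Eq. (7) is omitted; the sketch is for orientation only.

```python
# Illustrative initial-bubble estimate: absorbed energy E_a from the Gaussian
# profile (Eqs. (1) and (3)), then the energy balance (Eqs. (4)-(6)) for T_i.
import numpy as np
from scipy.integrate import quad

E0 = 40e-6       # total pulse energy, J (40 uJ case)
sgm = 20e-6      # spatial standard deviation of the beam, m (assumed)
r_T = 10e-6      # laser interaction radius from the fluence threshold (assumed)

rho_l = 998.2    # liquid density, kg/m^3 (Table S1, deionized water)
h_fg = 2257.2e3  # latent heat, J/kg
c_p = 4182.0     # specific heat capacity, J/(kg K)
T_e = 293.15     # environmental temperature, K

E_r = lambda r: E0 / (np.sqrt(2.0 * np.pi) * sgm) * np.exp(-r**2 / (2.0 * sgm**2))
E_a, _ = quad(E_r, -r_T, r_T)                 # Eq. (3): absorbed energy

m_b = (4.0 / 3.0) * np.pi * r_T**3 * rho_l    # vaporized mass in the initial bubble
T_i = T_e + (E_a - m_b * h_fg) / (m_b * c_p)  # Eqs. (4)-(6): energy balance
print(f"E_a = {E_a*1e6:.1f} uJ, initial bubble temperature T_i = {T_i:.0f} K")
```

With these assumed inputs, roughly a third of the pulse energy falls inside the interaction radius and the resulting bubble temperature is several hundred kelvin above ambient, consistent with the superheated vapor picture used above.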
CFD Modeling - governing equations

The Rayleigh bubble dynamics model (38) has been widely applied to study the response of the surrounding incompressible flow to the expansion of a single spherical bubble. The governing equation for the bubble expansion within the liquid can be written as

ρ_l (R d²R/dt² + (3/2)(dR/dt)²) = P_i(t) − P_∞(t) − 2σ/R − (4μ/R)(dR/dt),    (8)

where R is the bubble radius, P_i(t) is the pressure inside the bubble, P_∞(t) is the pressure of the hydrogel flow at infinite distance from the bubble, σ is the surface tension and μ is the coefficient of viscosity. Because the growth and development of the bubble and jet flow is a multiphase process, the Volume of Fluid (VOF) model in ANSYS Fluent was employed to track the liquid-gas interface. Considering the short interaction period, the phase change between vapor and liquid was ignored in the current model. The governing equations are as follows.

Energy equation:
∂(ρ c_p T)/∂t + ∇·(ρ c_p v T) = ∇·(k_eff ∇T),    (9)

Momentum equation:
∂(ρv)/∂t + ∇·(ρvv) = −∇p + μ∇²v + ρg + F,    (10)

Continuity equation:
∂ρ/∂t + ∇·(ρv) = 0,    (11)

VOF model equation:
(1/ρ_q)[∂(α_q ρ_q)/∂t + ∇·(α_q ρ_q v_q)] = 0,    (12)

where ρ is the density of the mixture, p is the pressure, k_eff is the effective conductivity, c_p is the heat capacity, μ is the dynamic viscosity, and α is the volume fraction.

Modeling - boundary conditions and properties

Fig. S2. Geometry of the computational domain with boundary conditions and configuration of the mesh.

The dimensions of the computational domain, the boundary conditions and the mesh configuration are shown in Fig. S2. The computational domain includes the ribbon, the liquid layer and air. Only half of the model was meshed and simulated because of its axisymmetric geometry. Structured meshes were used in this study, and the mesh near all boundaries was refined. Because the computational domain is only part of the ribbon, the right side of the liquid was defined as the pressure inlet while the right side of the air zone was defined as the pressure outlet. Besides the axisymmetric boundary condition at the axis, all other boundaries were defined as walls. The parameters of the initial bubble were set before the simulation started. The physical properties of the liquid layer are shown in Table S1; 65%-glycerol and deionized water were used in the simulations.

Table S1. Physical property parameters of the liquid layer

Properties       | Density (kg/m3) | Heat capacity (kJ/(kg·°C)) | Latent heat ΔH (kJ/kg) | Viscosity (kg/(m·s)) | Surface tension (N/m)
65%-glycerol     | 1169.1          | 3.030                      | 1426.1                 | 0.0177               | 0.068
Deionized water  | 998.2           | 4.182                      | 2257.2                 | 0.001003             | 0.0728

Modeling - mesh independence study

A grid independence study was carried out to choose a reasonable mesh count by balancing computational load and numerical accuracy. Fig. S3 compares the maximum liquid velocity at 1 μs among six cases with different mesh counts. The case with 580,000 cells is the most appropriate one, offering reasonable computational load with good numerical accuracy, since its maximum velocity changes by less than 0.5% when the number of grid cells is further increased. Therefore, a grid of 580,000 cells is sufficient, and similar grid sizes were used for all the other CFD cases in this study.

Fig. S3. Grid dependence analysis.
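The acceptance criterion of the grid study can be expressed compactly. In the Python sketch below, the velocity values are qualitative readings in the spirit of Fig. S3, not exact data; only the shape of the procedure (refine until the monitored quantity changes by less than 0.5%) is taken from the text above.

```python
# Illustrative grid-independence check on the maximum liquid velocity at 1 us.
cells = [100_000, 200_000, 340_000, 460_000, 580_000, 700_000]
v_max = [95.0, 96.9, 98.3, 99.2, 99.8, 99.9]   # m/s, hypothetical readings

for coarse, fine, v0, v1 in zip(cells, cells[1:], v_max, v_max[1:]):
    change = abs(v1 - v0) / v0 * 100.0          # percent change on refinement
    print(f"{coarse:>7} -> {fine:>7} cells: {change:.2f}% change")
    if change < 0.5:
        print(f"accept ~{coarse} cells as grid independent")
        break
```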
Prediction of droplet size - static equilibrium equation

Considering the droplet as part of a sphere, a geometric model was established for a differential control volume, where r is the radius of the droplet, z is the height of the droplet, and R is the radius of the sphere, as shown in Fig. S4A. The governing equation of the static equilibrium model for the differential control volume (Fig. S4B and C) is given as Eq. (13). Integrating both sides of Eq. (13) yields the integrated governing equation of the static equilibrium model, Eq. (14), which is the form quoted as Eq. (1) in the main text.

Fig. S4. Geometry of the part-of-sphere droplet and the force analysis.

REFERENCES

1. B. Starly, R. Shirwaiker, "3D Bioprinting Techniques" in 3D Bioprinting and Nanotechnology in Tissue Engineering and Regenerative Medicine (2015), pp. 57-77.
2. S. V. Murphy, A. Atala, 3D bioprinting of tissues and organs. Nature Biotechnology 32, 773-785 (2014).
3. C. Mandrycky, Z. Wang, K. Kim, D. H. Kim, 3D bioprinting for engineering complex tissues. Biotechnology Advances 34, 422-434 (2016).
4. Q. Dasgupta, L. D. Black, A FRESH SLATE for 3D bioprinting. Science 365, 446 (2019).
5. B. Guillotin, A. Souquet, S. Catros, M. Duocastella, B. Pippenger, S. Bellance, R. Bareille, M. Remy, L. Bordenave, J. Amedee, F. Guillemot, Laser assisted bioprinting of engineered tissue with high cell density and microscale organization. Biomaterials 31, 7250-7256 (2010).
6. J. Bohandy, B. F. Kim, F. J. Adrian, Metal deposition from a supported metal film using an excimer laser. Journal of Applied Physics 60, 1538-1539 (1986).
7. P. Serra, A. Piqué, Laser-Induced Forward Transfer: Fundamentals and Applications. Advanced Materials Technologies 4 (2019).
8. Y. Lin, Y. Huang, D. B. Chrisey, Droplet formation in matrix-assisted pulsed-laser evaporation direct writing of glycerol-water solution. Journal of Applied Physics 105 (2009).
9. B. Guillotin, S. Catros, F. Guillemot, "Laser Assisted Bio-printing (LAB) of Cells and Bio-materials Based on Laser Induced Forward Transfer (LIFT)" in Laser Technology in Biomimetics (Biological and Medical Physics, Biomedical Engineering, 2013), chap. 8, pp. 193-209.
10. M. Morales, D. Munoz-Martin, A. Marquez, S. Lauzurica, C. Molpeceres, "Laser-Induced Forward Transfer Techniques and Applications" in Advances in Laser Materials Processing (2018), pp. 339-379.
11. K. C. Hribar, P. Soman, J. Warner, P. Chung, S. Chen, Light-assisted direct-write of 3D functional biomaterials. Lab Chip 14, 268-275 (2014).
12. K. C. Hribar, K. Meggs, J. Liu, W. Zhu, X. Qu, S. Chen, Three-dimensional direct cell patterning in collagen hydrogels with near-infrared femtosecond laser. Scientific Reports 5, 17203 (2015).
13. R. Xiong, Z. Zhang, W. Chai, Y. Huang, D. B. Chrisey, Freeform drop-on-demand laser printing of 3D alginate and cellular constructs. Biofabrication 7, 045011 (2015).
14. A. Sorkio, L. Koch, L. Koivusalo, A. Deiwick, S. Miettinen, B. Chichkov, H. Skottman, Human stem cell based corneal tissue mimicking structures using laser-assisted 3D bioprinting and functional bioinks. Biomaterials 171, 57-71 (2018).
15. V. Keriquel, F. Guillemot, I. Arnault, B. Guillotin, S. Miraux, J. Amédée, J.-C. Fricain, S. Catros, In vivo bioprinting for computer- and robotic-assisted medical intervention: preliminary study in mice. Biofabrication 2, 014101 (2010).
16. V. Keriquel, H. Oliveira, M. Remy, S. Ziane, S. Delmond, B. Rousseau, S. Rey, S. Catros, J. Amedee, F. Guillemot, J. C. Fricain, In situ printing of mesenchymal stromal cells, by laser-assisted bioprinting, for in vivo bone regeneration applications. Scientific Reports 7, 1778 (2017).
17. J. Yan, Y. Huang, C. Xu, D. B. Chrisey, Effects of fluid properties and laser fluence on jet formation during laser direct writing of glycerol solution. Journal of Applied Physics 112 (2012).
18. J. Yan, Y. Huang, D. B. Chrisey, Laser-assisted printing of alginate long tubes and annular constructs. Biofabrication 5, 015002 (2013).
19. Z. Zhang, R. Xiong, R. Mei, Y. Huang, D. B. Chrisey, Time-Resolved Imaging Study of Jetting Dynamics during Laser Printing of Viscoelastic Alginate Solutions. Langmuir 31, 6447-6456 (2015).
20. M. Duocastella, J. M. Fernández-Pradas, P. Serra, J. L. Morenza, Jet formation in the laser forward transfer of liquids. Applied Physics A 93, 453-456 (2008).
21. M. Duocastella, J. M. Fernández-Pradas, J. L. Morenza, P. Serra, Time-resolved imaging of the laser forward transfer of liquids. Journal of Applied Physics 106 (2009).
22. M. Ali, E. Pages, A. Ducom, A. Fontaine, F. Guillemot, Controlling laser-induced jet formation for bioprinting mesenchymal stem cells with high viability and high resolution. Biofabrication 6, 045001 (2014).
23. M. S. Brown, N. T. Kattamis, C. B. Arnold, Time-resolved dynamics of laser-induced micro-jets from thin liquid films. Microfluidics and Nanofluidics 11, 199-207 (2011).
24. J. Noack, A. Vogel, Single-shot spatially resolved characterization of laser-induced shock waves in water. Applied Optics 37, 4092-4099 (1998).
25. J. Noack, A. Vogel, Laser-Induced Plasma Formation in Water at Nanosecond to Femtosecond Time Scales: Calculation of Thresholds, Absorption Coefficients, and Energy Density. IEEE Journal of Quantum Electronics 35, 1156-1167 (1999).
26. O. Baghdassarian, B. Tabbert, G. A. Williams, Luminescence Characteristics of Laser-Induced Bubbles in Water. Physical Review Letters 83, 2437-2440 (1999).
27. Z. Zhang, R. Xiong, D. T. Corr, Y. Huang, Study of Impingement Types and Printing Quality during Laser Printing of Viscoelastic Alginate Solutions. Langmuir 32, 3004-3014 (2016).
28. M. S. Brown, C. F. Brasz, Y. Ventikos, C. B. Arnold, Impulsively actuated jets from thin liquid films for high-resolution printing applications. Journal of Fluid Mechanics 709, 341-370 (2012).
29. A. Kalaitzis, M. Makrygianni, I. Theodorakos, A. Hatziapostolou, S. Melamed, A. Kabla, F. de la Vega, I. Zergioti, Jetting dynamics of Newtonian and non-Newtonian fluids via laser-induced forward transfer: Experimental and simulation studies. Applied Surface Science 465, 136-142 (2019).
30. C. Mezel, A. Souquet, L. Hallo, F. Guillemot, Bioprinting by laser-induced forward transfer for tissue engineering applications: jet formation modeling. Biofabrication 2, 014103 (2010).
31. W. Wang, G. Li, Y. Huang, paper presented at the ASME 2008 International Manufacturing Science and Engineering Conference, Volume 2, 2008.
32. R. Xiong, Z. Zhang, J. Shen, Y. Lin, Y. Huang, D. B. Chrisey, Bubble Formation Modeling During Laser Direct Writing of Glycerol Solutions. Journal of Micro and Nano-Manufacturing 3 (2015).
33. N. T. Kattamis, P. E. Purnick, R. Weiss, C. B. Arnold, Thick film laser induced forward transfer for deposition of thermally and mechanically sensitive materials. Applied Physics Letters 91 (2007).
34. D. B. van Dam, C. Le Clerc, Experimental study of the impact of an ink-jet printed droplet on a solid substrate. Physics of Fluids 16, 3403-3414 (2004).
35. Y. Tao, "Theoretical research on static spreading of droplet impact on horizontal surface", thesis, Dalian University of Technology (2014).
36. J. Ready, Effects of High-Power Laser Radiation (Elsevier, New York, 2012), pp. 67-125.
37. F. Docchio, P. Regondi, M. R. C. Capon, J. Mellerio, Study of the temporal and spatial dynamics of plasmas induced in liquids by nanosecond Nd:YAG laser pulses. 1: Analysis of the plasma starting times. Applied Optics 27, 3661-3668 (1988).
38. M. S. Plesset, A. Prosperetti, Bubble dynamics and cavitation. Annual Review of Fluid Mechanics 9, 145-185 (1977).
Can parallel lives provide a solution to Hardy's paradox?

İnanç Şahin
Department of Physics, Faculty of Sciences, Ankara University, Ankara, Turkey
(arXiv:2009.07633v1 [quant-ph] 22 Aug 2020)

Abstract

Parallel lives is a model which provides an interpretation of quantum theory that is both local and realistic. This model assumes that all quantum fields are composed of point beings called "lives". Lives interact locally and have a memory of their previous interactions. The reduction of the state vector is not included in this model: lives can be divided into different worlds. This feature resembles the many worlds interpretation. However, in the parallel lives model the division of lives into different worlds takes place locally. The parallel lives model is expected to be compatible with special relativity, as the lives propagate at a speed that does not exceed the speed of light and interact locally. On the other hand, it is open to paradoxes based on counterfactual propositions, as it provides a realistic interpretation of quantum theory. In this paper, we confront the parallel lives model with the paradox proposed by Hardy [1]. We show that the parallel lives model cannot overcome the dilemma in Hardy's paradox. We discuss the implications of this confrontation for the special theory of relativity, and speculate a solution that, we believe, fits the spirit of the parallel lives model.

Keywords: Parallel lives model, many worlds interpretation, quantum theory, relativity

I. INTRODUCTION

Parallel lives (PL) is an ontological model that was first proposed by Brassard and Raymond-Robichaud [2, 3] in order to provide a local and realistic interpretation of quantum theory (QT). The details of the PL model have been developed in Ref. [4]. According to PL, all quantum fields are composed of point beings called "lives" moving on continuous world-lines with a speed bounded by the speed of light [4]. Lives can only interact locally, when their world-lines coincide. However, not all lives whose world-lines coincide interact with one another. Lives have a memory of their previous interactions, and this memory determines which lives they will interact with. Lives that do not interact are invisible to each other. We can say that these invisible "lives" are living in different worlds. The network of internal interactions of a very large collection of lives forms a macroscopic system. If a live is hidden relative to one of the lives that make up a macroscopic system, it should also be hidden relative to the other lives in that macroscopic system. (Footnote 1: Here we should note that not all lives in a macroscopic system need to interact with each other, but they must be part of the same network of interactions. The interaction waves propagating through the macroscopic system form a network of interactions, and the memory of a distant live is shared in this way.) Thus, it is possible to have macroscopic systems that live in parallel and are hidden relative to each other. This feature recalls the many worlds interpretation [5, 6]. However, in the many worlds interpretation the entire universe splits into copies, while in PL, lives locally split into relative worlds. When the state vector of a system is reduced to one of the orthogonal terms in it, the lives that make up that system split locally into different relative worlds. Therefore, there is no reduction of the state vector in the PL model; each orthogonal term in the superposition lives parallel in space-time. For instance, let's consider an EPR-type experiment with two spin-1/2 particles in the singlet state, |0,0⟩ = (1/√2)[|↑↓⟩ − |↓↑⟩]. Let A and B be spacelike separated macroscopic observer systems carrying Stern-Gerlach apparatuses.
After the spins become entangled in the singlet state at the midpoint between A and B, one moves to A and the other to B. Then, the lives of the spins and of observers A and B split into relative worlds. In one world the spin is up and the observer measures spin-up, and in the other world the spin is down and the observer measures spin-down. If A↑ (A↓) represents observer A measuring spin-up (spin-down), then the lives of A↑ and A↓ can only interact with the lives of B↓ and B↑, respectively; A↑ and A↓ are hidden with respect to B↑ and B↓, respectively. Therefore, we say that A↑ can interact with B↓, but A↑ and B↓ are living in a world parallel to the world of A↓ and B↑.

It is often thought that Bell's theorem rules out local realistic interpretations of QT. In fact, Bell's theorem rules out local hidden variable theories, not local realistic interpretations of QT [2, 3, 7]. However, this issue is subtle and a detailed review is required. In local hidden variable theories, the result of a measurement is given as a function of hidden variables and locally defined adjustable apparatus parameters [7, 8]. It is also assumed that experimenters have the free will to adjust the apparatus parameters [9] (otherwise, we cannot eliminate the superdeterminism option). Let us denote the measurement result by the function R(λ, a), where λ and a represent the hidden variables and the apparatus parameters, respectively. The existence of the function R(λ, a) tells us that when the values of the parameters λ and a are given, the measurement result is uniquely determined. We will call this property determinism. In the PL model, different possible outcomes of a measurement, and observers observing these results, can live in parallel in different relative worlds. Thus, reality depends on which relative world we live in; there is no single concept of reality. Due to this multiple-reality concept, some authors give up using conventional realism [10]. On the other hand, PL assumes an ontological reality according to which measurement results corresponding to orthogonal terms in the superposition exist in different relative worlds prior to measurement. This view is different from the Copenhagen interpretation, where the ontological reality of the wave function is denied. PL can provide deterministic rules for the behaviors of the lives [10]. If we consider the whole collection of worlds of lives living in parallel, then PL gives a deterministic model. On the other hand, each individual observer living parallel in space-time experiences indeterminism. For example, the observer A performing a spin measurement (see the example at the end of page 2) can find herself in the relative world of A↑ or A↓ after the measurement. But she does not know in advance which relative world she will be in. Since the observers cannot know in advance which one among several possible outcomes will actually occur, the process generated by the rules of PL is completely indeterministic according to observers. Therefore, the measurement results cannot be given as a deterministic function predicted by a local hidden variable theory. In the language of the free will theorem of Conway and Kochen [9], the response of the universe to the measurement is not a function of the information accessible to the particle. The universe makes a free decision in the neighborhood of the particle, and this decision determines in which relative world the
The universe makes a free decision in the neighborhood of the particle and this decision determines in which relative world the 2 Otherwise, we cannot eliminate the superdeterminism option. 3 observer lives.3 Consequently, the locality and reality4 features of the PL model do not conflict with Bell’s theorem. On the other hand, as demonstrated in several studies in the literature, the realistic interpretations of QT are inconsistent with the special theory of relativity [1, 11, 12]. We should note that their arguments are based on counterfactual reasoning. When we consider the results of actual measurements, we do not encounter paradoxes [13]. Nevertheless, if we have a realistic model where the wave function or say the probability distribution of the possible outcomes exists prior to the measurement then counterfactual propositions become somewhat legitimate [14]. Therefore, any model that claims to provide a realistic interpretation of QT must be confronted with counterfactual paradoxes. In this context, we confront PL model with the second paradox in Hardy’s paper [1]. As we will see, in PL some counterfactual propositions become part of the reality in various alternative worlds. This has interesting implications for the theory of relativity, which we will examine. II. REVISITING HARDY’S PARADOX In 1992 Hardy [1] proposed a gedankenexperiment consists of two Mach-Zehnder interfer- ometers, one for positrons and one for electrons Fig.1. The experiment is designed so that u+ and u− paths of these two Mach-Zehnder interferometers overlap. If the positron and electron take u+ and u− paths then they will meet at P and annihilate one another. Pair annihilation is expressed in Hardy’s notation as u+ > | u− > | → | γ > . (1) Using the experimental setup shown in the Fig.1, Hardy first demonstrated an inequality-free version of the Bell’s theorem. Hardy secondly demonstrated that if the ”elements of reality” corresponding to Lorentz-invariant observables are themselves Lorentz invariant, then real- istic interpretations of quantum mechanics are incompatible with special theory of relativity. For the purpose of this paper we will concentrate on his second result. The summary of the reasoning that led him to this conclusion is as follows: Consider three different reference 3 According to weak anthropic principle, the observer is in one of the relative worlds just because she observes the measurement result in that relative world. 4 Unless otherwise stated, reality will be used in the sense of ontological reality. 4 frames: LAB, S+ and S− frames of reference. In LAB frame, the measurements on electron and positron are simultaneous. The relative velocities of S+ and S− frames to LAB frame are so arranged that these measurements are not simultaneous with respect to S+ and S−. According to S+ frame the measurement on the positron occurs before the electron arrives at BS2− and according to S− frame the measurement on the electron occurs before the positron arrives at BS2+. Let’s denote the initial electron-positron states by e− > | | e+ >. After the particles pass point P, but before they reach BS2± the initial state evolves to e− > | | e+ > → 1 2 ( γ > +i | −| u+ > v− > +i | | v+ > | u− > + v+ > | | v− >). (2) Since this state is orthogonal to u− >, according to an observer in the LAB frame positron and electron cannot take u+ and u− paths simultaneously. The beam splitters | | u+ > BS2± perform the following transformations: u± > | → 1 √2 (i | d± > + c± >), | v± > | → 1 √2 (i | c± > + d± >). 
(3)

Using equations (2) and (3), we see that the state vector reduces to |d+⟩|d−⟩ with a probability of 1/16. Hence, in 1/16th of the experiments both the D+ and D− detectors receive signals. Now, let's examine the same experiment according to observers in the S− and S+ frames. According to S−, when the electron passes through BS2− but the positron has not yet reached BS2+, the following state is obtained:

(1/2)(−|γ⟩ − (1/√2)|u+⟩|c−⟩ + (i/√2)|u+⟩|d−⟩ + i√2|v+⟩|c−⟩).    (4)

Here we use (2) and the transformations for |u−⟩ and |v−⟩ in (3). When the electron is detected in D−, the state vector is reduced to

|u+⟩|d−⟩.    (5)

Then, the observer in the S− frame infers that the positron takes the u+ path. On the other hand, according to S+, when the positron passes through BS2+ but the electron has not yet reached BS2−, the following state is obtained:

(1/2)(−|γ⟩ − (1/√2)|c+⟩|u−⟩ + (i/√2)|d+⟩|u−⟩ + i√2|c+⟩|v−⟩).    (6)

Here we use (2) and the transformations for |u+⟩ and |v+⟩ in (3). When the positron is detected in D+, the state vector is reduced to

|d+⟩|u−⟩.    (7)

Then, the observer in the S+ frame infers that the electron takes the u− path. (Footnote 5: Here we should note that the inferences of the observers in the LAB, S+ and S− frames about the particle trajectories (u+ and u−) are counterfactual. They don't make measurements to determine the real paths, but infer these results from the D+ and D− detections via counterfactual reasoning.)

Hardy used EPR's [15] "element of reality" criterion. If a system is in an eigenstate of an operator corresponding to an observable, then we can predict with certainty the result of a measurement of this observable. Therefore, according to EPR's reality criterion, the value of this observable (the eigenvalue corresponding to the system's eigenstate) is an element of reality even if the measurement is not performed. We can define the operators U± = |u±⟩⟨u±|. Since the vectors |u+⟩ and |u−⟩ are eigenvectors of U±, there exist elements of reality associated with the paths u+ and u−. However, as we have shown, the reference frames S+ and S− infer that the electron and positron take the paths u− and u+, respectively. If the elements of reality corresponding to Lorentz-invariant observables are themselves Lorentz invariant, then these inferences must be true for all inertial frames. On the contrary, as shown previously, this is not true for the LAB frame. To summarize very briefly, what Hardy did was to associate counterfactuals about particle paths with elements of reality. He then showed that the elements of reality corresponding to these paths are not Lorentz invariant. As stated in his article, Hardy's result can be applied to any realistic interpretation of QT which assumes that particles have real trajectories. In the PL model, lives move on real trajectories in space-time. Therefore, the confrontation of the PL model with Hardy's paradox can have important consequences.

Before examining Hardy's paradox in the PL model, let's examine the lives of a single photon on a beam splitter and in a Mach-Zehnder interferometer. In Fig. 2 we show a single photon on a 50-50 beam splitter. The incident photon can either be transmitted along path (1) or reflected along path (2). Each path has 50% probability. Assume that an observer performs a measurement using photon detectors to determine the path along which the photon moves. This measurement causes an entanglement between
the photon paths and the measurement apparatus:

|ψ⟩ = (1/√2)|1γ⟩|1m⟩ + (1/√2)|2γ⟩|2m⟩,    (8)

where |1γ⟩ represents the photon state in path (1) and |1m⟩ represents the state of the measurement apparatus measuring a photon in path (1). Similar definitions hold for |2γ⟩ and |2m⟩. Furthermore, we can also say that the observer is entangled with the photon paths. By looking at the result of the measurement, the observer can decide to behave in one way or another. For instance, assume that if the photon takes path (1), then the observer will have lunch; if the photon takes path (2), then she will be on a diet. Thus, we can write

|ψ⟩ = (1/√2)|1γ⟩|1o⟩ + (1/√2)|2γ⟩|2o⟩,    (9)

where the subscript "o" denotes the observer. The description of the experiment within the PL model can be given as follows. The lives of the incident photon are divided into two groups of lives living in the same world. One of them takes path (1) and the other takes path (2). When the lives of the photons moving on paths (1) and (2) meet the detectors, the lives of each detector, and subsequently the lives of the measurement apparatus and the observer, are divided into two different worlds. In one world D1 detects a signal but D2 does not detect any signal; in the other world D1 detects no signal but D2 detects one. Consequently, in one world the observer measures a photon moving on path (1) and in the other world she measures a photon moving on path (2). These two worlds are hidden with respect to each other.

Now, let's consider a single photon in a Mach-Zehnder interferometer (Fig. 3). Due to destructive interference, the detector D2 does not detect any signal. Therefore, in this case the photon paths are not entangled with the measurement apparatus or the observer. Hence, the lives of the measurement apparatus and the observer are not divided into relative worlds. When the initial photon passes through the first beam splitter, its lives are divided into two groups of lives, one going through path (1) and the other going through path (2). These two groups of lives, moving on paths (1) and (2), exist in the same world. At the second beam splitter they interact with each other and produce the usual interference effects.

Finally, let's try to examine Hardy's paradox in the framework of the PL model. In the LAB frame of reference, both of the particles reach the second beam splitters simultaneously.
Hence, the following entangled state is obtained: γ > 1 2 | i 2√2 | − + C − = 0; D− = 0 > | 1 2√2 | − u+ > c− > | | C − = 1; D− = 0 > u+ > d− > | | C − = 0; D− = 1 > + i √2 | v+ > c− > | | C − = 1; D− = 0 > (10) where, | C − = 0, 1; D− = 0, 1 > is the state of the measurement apparatus; 1 represents detection of a particle and 0 represents a null value (no detection). Consequently, lives of the observer and experimental apparatus split into four different worlds, corresponding to orthogonal terms in the superposition (10). Since we restrict ourselves to the situation where D detectors detect signal, we consider the relative world of S− described by the third term in (10). In this relative world, lives of the positron take u+ path and lives moving on paths u− and v+ are hidden. In Fig.5 we show the lives of the experimental apparatus observed in the S− frame. On the other hand, according to an observer in S+ frame of reference, the electron has not yet reached BS2− as soon as the positron passes through BS2+. At this instant, the system is described by the state given in (6). Within a very short time, positron 6 This is evident from equation (2), but it is also conceivable from pair annihilation process at point P . If the particles take paths u+ and u−, then pair annihilation occur. In this case, the positron and electron turn into two photon and do not leave any signal in the detectors D+, D−, C+, C−. If we have additional photon detectors, we can capture photon signals from pair annihilation. However, since we restrict ourselves to the situation where both D+ and D− detectors detect signals, there should be no pair annihilation in the world of the LAB frame. 8 can reach C + and D+. Hence, the following entangled state is obtained: γ > 1 2 | i 2√2 | − + C + = 0; D+ = 0 > | 1 2√2 | − u− > c+ > | | C + = 1; D+ = 0 > u− > d+ > | | C + = 0; D+ = 1 > + i √2 | v− > c+ > | | C + = 1; D+ = 0 > . (11) In the relative world of S+ described by the third term in (11), lives of the electron take u− path and lives moving on paths u+ and v− are hidden. The lives of the experimental apparatus observed in the S+ frame is given in Fig.6. To summarize, the lives of particles in the worlds of different reference frames are different from each other. The lives moving on path u+ are part of the world of the S− frame, but not part of the worlds of the S+ and LAB frames. Similarly, the lives moving on path u− are part of the world of the S+ frame, but not part of the worlds of the S− and LAB frames. However, we should note that actually the lives were there all along. The only thing that changes from one frame of reference to another is whether lives of the particles interact or not with the apparatus. As we have discussed in the introduction, noninteracting lives are hidden, and the observer cannot experience them in her world. The fact that different reference frames live parallel to each other in different worlds seems to fit the logic of PL at first sight. However as we will see, there is a problem we have to overcome. The observer in each reference frame observes not only the experimental apparatus but also the observer in the other reference frame. For instance, let’s denote the lives of the observer in the S− frame of reference observing the measurement results C − = 0 and D− = 1 by OS−(D− = 1). Denote also the lives of the experimental apparatus with C − = 0 and D− = 1 by A(D− = 1). When these two lives meet, they merge to form a bigger set of lives that we will denote as OS−(D− = 1) ⊕ AS−(D− = 1). 
(12) Here, the subscript S− in A represents the configuration of the lives of the apparatus ob- served by OS− (configuration in Fig.5). Let the lives OS−(D− = 1) and AS−(D− = 1) meet the lives of the observer in S+ frame before positron reaches BS2+, then the following set of lives is obtained: OS−(D− = 1) AS−(D− = 1) ⊕ ⊕ OS+(D− = 1)3. (13) where the subscript ”3.” indicates that this describes a ”third-person perspective”: observer in S− frame observes in her world another ”observer” in the S+ frame of reference which 9 she denotes (OS+)3..7 After a while, positron also passes BS2+ and is then detected. The detection of the positron causes the lives of the apparatus and the observers split into relative worlds: In one world we obtain C + = 0, D+ = 1 and in the other world C + = 1, D+ = 0. Since we consider D− = 1, D+ = 1 case, lives of the joint system become OS−(D− = 1; D+ = 1) AS−(D− = 1; D+ = 1) ⊕ ⊕ OS+(D− = 1; D+ = 1)3.. (14) The above expression reflects first-person perspective of the observer OS−.8 In this perspec- tive D− = 1 and D+ = 1 detections occurred due to lives coming from v− and u+ paths (see Fig.5). Therefore, lives moving on paths v− and u+ are part of the history of (14). On the other hand, first-person perspective of the observer OS+ has experienced an other history. According to OS+, D− = 1 and D+ = 1 detections occurred due to lives coming from u− and v+ paths (see Fig.6). In the first-person perspective of the observer OS+, we can write the following world of lives: OS−(D− = 1; D+ = 1)3. ⊕ AS+(D− = 1; D+ = 1) ⊕ OS+(D− = 1; D+ = 1). (15) From the analysis we performed above, we get the following odd-looking result: first-person and third-person perspectives of the same observer belong to different worlds. The observer (OS+)3. in the world of (OS−)1. lives parallel to the world of (OS+)1.. But if quantum laws apply equally to all observers, then (OS+)3. should not observe that the positron is detected before the electron.9 However, this result is incompatible with the relativity of simultaneity: (OS+)3. is moving relative to (OS+)1., and the time order of the detection events should be reversed. Consequently, we encounter a discrepancy between special relativity and the PL model. Nevertheless, we need to say that such a discrepancy does not arise for any interpretation of QT that does not accept the reality of anything other than the measurement outcomes. According to such an interpretation, the paths u−, u+, v− and v+ are just mathematical auxiliary concepts; they are not related to reality. 7 We borrow this terminology from Ref.[16]. However, Ref.[16] used this terminology in the context of algorithmic information theory and did not apply it to relativistic observers. 8 we omit the subscript ”1.” for abbreviation. 9 Otherwise the state vector is reduced to (7), which indicates that electron takes u− path. However, this is erroneous as seen from (14). 10 III. SPECULATIONS ON THEORY OF RELATIVITY If we persist in the realistic interpretations of QT, the discrepancy with the theory of relativity needs to be resolved. One solution to this discrepancy is to modify the theory of relativity by proposing a preferred frame of reference. Such a modification of the theory has been discussed for a long time [17]. 
However, there are obscurities in this approach, such as which criteria should be used to determine the preferred frame of reference.¹⁰ In this paper we will make the following speculation, which we believe offers a solution to the discrepancy and also fits the spirit of the PL model: there is no particular preferred frame of reference, but for each frame there is always a world in which that frame is preferred. The world observed from an observer's first-person perspective is the world where the observer's stationary frame is preferred. Lorentz transformations¹¹ are defined between the first-person perspectives of observers on different inertial frames of reference.

¹⁰ One possible candidate for the preferred frame of reference is the frame in which the cosmic microwave background is isotropic [17, 18]. However, there is no apparent reason why this frame should be the preferred frame of reference.
¹¹ Conventional Lorentz transformations in the symmetrical form.

According to the assumptions above, the lives of each observer split into infinitely many worlds; one of them corresponds to the observer's first-person perspective, and the others correspond to third-person perspectives of some other observers. Suppose that S1, S2, ..., Sn are different inertial reference frames. Then the lives of the observer of each reference frame Si, i ∈ {1, 2, ..., n}, split into n relative worlds. One of them is the world observed in the first-person perspective of the observer in the frame Si. In this world we denote the lives of the observer in Si by (OSi)1.. All other observers are in the third-person perspective and are denoted by (OS1)3., (OS2)3., ..., (OSi−1)3., (OSi+1)3., ..., (OSn)3.. As is known, Lorentz transformations have a symmetrical form, i.e. the transformations Si → Sj and Sj → Si (i ≠ j) have exactly the same form, up to the sign in front of the velocity. This feature implies that we cannot distinguish one frame of reference from another. In our assumptions, a Lorentz transformation from Si to Sj essentially defines a transformation from (OSi)1. to (OSj)1.. (OSi)1. and (OSj)1. live parallel in different worlds, and each is the preferred observer in her own world. We interpret the symmetry feature of Lorentz transformations as the equivalence of the worlds of (OSi)1. and (OSj)1. in defining the laws of nature.

One can then ask about the transformations between observers in the first-person and third-person perspectives, i.e. the transformations (OSi)1. → (OSj)3.. In this case Si is the preferred frame of reference. Therefore, the order of events observed by (OSi)1. determines the physical behavior in Hardy's gedankenexperiment. For instance, if Si coincides with the S− frame of reference, then the detection of the electron takes place before the detection of the positron, and hence the lives of the joint system of observers and the apparatus are given by (14). All other observers in the world of (OSi)1. should observe the same order of detection events. Therefore (OSj)3. observes a variable speed of light, and hence the transformations (OSi)1. ⇆ (OSj)3. do not obey the conventional Lorentz transformation formula. To be precise, assume that the detections in the D+ and D− detectors are synchronized with light pulses from an outer point K. According to (OSi)1., these light pulses propagate with speed c. Then, according to (OSj)3. (i ≠ j), the speeds of these light pulses moving from K to D+ and D− can vary, and their values may no longer be c. The discussion of the explicit forms of these transformations is beyond the purpose of this paper.
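The relativity-of-simultaneity step invoked in the argument above is easy to verify numerically. The following is a minimal sketch (the event coordinates are made up for illustration, not taken from the paper) showing that two spacelike-separated detection events that are simultaneous in one frame acquire opposite time orderings under boosts of opposite sign.

```python
import numpy as np

C = 1.0  # work in units where the speed of light is 1

def boost_t(t, x, v):
    """Time coordinate of event (t, x) in a frame moving with velocity v."""
    gamma = 1.0 / np.sqrt(1.0 - (v / C) ** 2)
    return gamma * (t - v * x / C**2)

# Hypothetical LAB-frame coordinates: the electron detection at D- and the
# positron detection at D+ are simultaneous but spatially separated.
t_Dminus, x_Dminus = 0.0, -1.0
t_Dplus,  x_Dplus  = 0.0, +1.0

for v in (-0.5, +0.5):  # boosts of opposite sign (values are illustrative)
    dt = boost_t(t_Dplus, x_Dplus, v) - boost_t(t_Dminus, x_Dminus, v)
    order = "D- fires first" if dt > 0 else "D+ fires first"
    print(f"v = {v:+.1f}: t(D+) - t(D-) = {dt:+.3f}  ->  {order}")
# The two boosts report opposite time orders for the same pair of detections.
```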
However, we would like to draw attention to the following point: whatever the new transformations are, they may not be valid globally. For instance, the speed of light from the emission event at K to the absorption event at D− may not be equal to the speed of light moving between two other events.¹² Therefore, the transformation used varies depending on which events it is used for. This gives us locally defined transformations. This peculiar situation becomes understandable to some extent if we realize that the world of (OSi)1. emerges as a result of the entanglement of Hardy's experimental setup with (OSi)1.. Accordingly, in this world we can attribute a special meaning to the signal events in the D− and D+ detectors. We can consider some kind of transformation which gives the conventional Lorentz transformation formula for events not associated with Hardy's experimental setup, but gives a new or modified transformation formula for signal events in the D− and D+ detectors. Obviously, this new transformation violates Lorentz symmetry. However, Lorentz symmetry is violated only for events associated with quantum entanglement between the observer and some quantum system. Therefore, we can say that Lorentz symmetry is almost valid.

¹² Even the speeds of light pulses from K to D− and from K to D+ may not be equal.

As was said by Barbour [19], Einstein did not create a theory of clocks and duration from first principles. He avoided ever having to address the physical working of rods and clocks; they were always treated separately as independent entities in both relativity theories. Their properties were not deduced from the inner structure of the theory, but were simply required to accord with the relativity principle [19]. We claim that QT gives the actual physical working of rods and clocks. But we should be open to the idea that the relativity principle may not be absolute, and can be violated for certain events associated with quantum entanglement.

Finally, we want to discuss how we should interpret the non-equivalence of an observer's first-person and third-person perspectives. What exactly does this mean? Does it mean that the observer (OSj)3. in (OSi)1.'s world is an unconscious being, such as a zombie or a robot? This is not what we intend to say. Explaining with the example of Hardy's gedankenexperiment discussed in the previous section, we can say that the measurement performed by (OSi)1. and her conscious perception cause the state vector to collapse.¹³ But this does not mean that (OSj)3. is an unconscious being. It simply means that in (OSi)1.'s world, (OSj)3.'s perception of the measurement result has no effect on the state vector's collapse; all observers in different reference frames respect the order of events and the recorded history that the observer (OSi)1. sees on Hardy's experimental setup. On the other hand, if we repeat or perform another experiment, the lives will split again and (OSj)3. can find herself in the world of her first-person perspective, where her frame of reference is the preferred frame. As soon as this happens, the subscript "3." should be replaced by "1.".

¹³ Of course, there is no state vector collapse in the PL model. But since we think many physicists are more familiar with this terminology, we use the term "collapse" for clarity.

IV. CONCLUSIONS

PL is a model that is expected to be compatible with relativity theory because it includes the local interactions of lives and their motions that do not exceed the speed of light. However, we negated this expectation by showing that the PL model could not overcome the paradox suggested by Hardy.
Our results can also be applied to the many-worlds interpretation, where counterfactual propositions are assumed to be part of reality in different alternative worlds, or to any realistic interpretation of QT that assumes real particle trajectories. But we want to emphasize that there is no conflict between the special theory of relativity and QT for approaches and interpretations that regard state vectors as auxiliary mathematical concepts and do not relate them to reality. Therefore, one way to overcome Hardy's paradox is to adopt such an approach. On the other hand, if we insist on a realistic interpretation as we have just mentioned, we must accept the possibility that Lorentz symmetry is violated. Such a Lorentz symmetry violation can be realized by choosing a preferred frame of reference, as noted in Hardy's original paper [1]. In Section III, we made an interesting speculation which we believe offers a solution to the discrepancy between QT and the special theory of relativity, and also fits the spirit of the PL model.

[1] L. Hardy, "Quantum mechanics, local realistic theories, and Lorentz-invariant realistic theories," Phys. Rev. Lett. 68, 2981-2984 (1992).
[2] G. Brassard and P. Raymond-Robichaud, "Can free will emerge from determinism in quantum theory?," in "Is Science Compatible with Free Will? Exploring Free Will and Consciousness in Light of Quantum Physics and Neuroscience," A. Suarez and P. Adams (Eds.), Chapter 4, pp. 41-61, Springer, 2013 [arXiv:1204.2128 [quant-ph]].
[3] G. Brassard and P. Raymond-Robichaud, "Parallel Lives: A Local-Realistic Interpretation of Nonlocal Boxes," Entropy 21(1), 87 (2019) [arXiv:1709.10016 [quant-ph]].
[4] M. Waegell, "An Ontology of Nature with Local Causality, Parallel Lives, and Many Relative Worlds," Found. Phys. 48, no. 12, 1698-1730 (2018) [arXiv:1707.06324 [quant-ph]].
[5] H. Everett III, "Relative state formulation of quantum mechanics," Rev. Mod. Phys. 29, no. 3, p. 454 (1957).
[6] B. S. DeWitt, Physics Today 23, 9, 30 (1970).
[7] J. S. Bell, "On the Einstein-Podolsky-Rosen paradox," Physics 1, 195-200 (1964).
[8] J. F. Clauser, M. A. Horne, A. Shimony, and R. A. Holt, "Proposed Experiment to Test Local Hidden-Variable Theories," Phys. Rev. Lett. 23, 880 (1969).
[9] J. Conway and S. Kochen, "The Free Will Theorem," Found. Phys. 36, no. 10, 1441-1473 (2006) [arXiv:quant-ph/0604079].
[10] M. Waegell, "Locally Causal and Deterministic Interpretations of Quantum Mechanics: Parallel Lives and Cosmic Inflation," Quantum Stud.: Math. Found. 4, 323-337 (2017) [arXiv:1604.07874 [quant-ph]].
[11] R. Clifton, C. Pagonis and I. Pitowsky, "Relativity, Quantum Mechanics and EPR," Proceedings of the Biennial Meeting of the Philosophy of Science Association, Volume 1, pp. 114-128 (1992).
[12] I. Pitowsky, "The Relativity of Quantum Predictions," Phys. Lett. A 156, 137-139 (1991).
[13] Y. Aharonov et al., "Revisiting Hardy's Paradox: Counterfactual Statements, Real Measurements, Entanglement and Weak Values," Phys. Lett. A 301, 130-138 (2002).
[14] L. Vaidman, "Counterfactuals in Quantum Mechanics," in "Compendium of Quantum Physics," D. Greenberger, K. Hentschel, F. Weinert (Eds.), Springer, 2009.
[15] A. Einstein, B. Podolsky and N. Rosen, "Can Quantum-Mechanical Description of Physical Reality be Considered Complete?," Phys. Rev. 47, 777 (1935).
[16] M. P. Müller, "Law without law: from observer states to physics via algorithmic information theory," arXiv:1712.01826 [quant-ph].
[17] C. M. Will, "Theory and Experiment in Gravitational Physics," Cambridge University Press, Cambridge (1993).
[18] S. R. Coleman and S. L. Glashow, "Cosmic ray and neutrino tests of special relativity," Phys. Lett. B 405, 249-252 (1997) [arXiv:hep-ph/9703240].
[19] J. Barbour, "The End of Time," Oxford University Press, Oxford (1999).

FIG. 1: Scheme of Hardy's gedankenexperiment [1]. BS1+, BS1−, BS2+, BS2− represent beam splitters and M1+, M1−, M2+, M2− represent mirrors. C+, D+, C−, D− are detectors.
FIG. 2: Single photon on a beam splitter.
FIG. 3: Single photon in a Mach-Zehnder interferometer.
FIG. 4: Diagram representing the lives observed in the LAB frame. Dotted lines represent hidden lives living in parallel.
FIG. 5: Diagram representing the lives observed in the S− frame. Dotted lines represent hidden lives living in parallel.
FIG. 6: Diagram representing the lives observed in the S+ frame. Dotted lines represent hidden lives living in parallel.
ai_researcher
1
Development_and_validation_of_Spanish_version_of_FINCODA_an_instrument_for_self-assessment_of_innovation_competence_of_workers_or_candidates_for_Jobs.pdf
arXiv:2110.06461v1 [cs.CL] 13 Oct 2021

FAKE NEWS DETECTION IN SPANISH USING DEEP LEARNING TECHNIQUES

Kevin Martínez-Gallego
Intelligent Information Systems Lab, Universidad de Antioquia
Calle 67 No. 53 - 108, 050010, Medellín, Colombia.
[email protected]

Andrés M. Álvarez-Ortiz
Intelligent Information Systems Lab, Universidad de Antioquia
Calle 67 No. 53 - 108, 050010, Medellín, Colombia.
[email protected]

Julián D. Arias-Londoño
Intelligent Information Systems Lab, Dpt. of Systems Engineering and Computer Science, Universidad de Antioquia
Calle 67 No. 53 - 108, 050010, Medellín, Colombia.
[email protected]

ABSTRACT

This paper addresses the problem of fake news detection in Spanish using Machine Learning techniques. It is fundamentally the same problem tackled for the English language; however, there is not a significant amount of publicly available and adequately labeled fake news in Spanish to effectively train a Machine Learning model, similarly to those proposed for the English language. Therefore, this work explores different training strategies and architectures to establish a baseline for further research in this area. Four datasets were used, two in English and two in Spanish, and four experimental schemes were tested, including a baseline with classical Machine Learning models, trained and validated using a small dataset in Spanish. The remaining schemes include state-of-the-art Deep Learning models trained (or fine-tuned) and validated in English, trained and validated in Spanish, and fitted in English and validated with automatically translated Spanish sentences. The Deep Learning architectures were built on top of different pre-trained Word Embedding representations, including GloVe, ELMo, BERT, and BETO (a BERT version trained on a large corpus in Spanish). According to the results, the best strategy was a combination of a pre-trained BETO model and a Recurrent Neural Network based on LSTM layers, yielding an accuracy of up to 80%; nonetheless, a baseline model using a Random Forest estimator obtained similar outcomes. Additionally, the translation strategy did not yield acceptable results because of the propagation error; a significant difference in model performance was also observed when training in English or Spanish, mainly attributable to the number of samples available for each language.

Keywords Deep Learning · Fake News Detection · Spanish · Supervised Learning · Word Embeddings · Transfer Learning

1 Introduction

In social networks, the proliferation of fake news is a strategy used to manipulate public opinion. A well-known example is the case of Cambridge Analytica, where the "private data of millions of people was used to psychologically manipulate voters in the 2016 US elections, where Donald Trump was elected president. The company not only sent tailored advertising but developed fake news that it then replicated across social networks, blogs and media" [1]. One of the strategies that has begun to be explored to prevent the proliferation of fake news is the use of Artificial Intelligence (AI) techniques and, more precisely, Deep Learning (DL), for their detection and subsequent removal. Most of the work done in this field uses datasets of news written in English as the source of information, composed of properly labeled sentences that are publicly available.
Although fake news is a common problem across different languages, including Spanish, there is not a significant amount of properly labeled fake news in Spanish to effectively train a DL model for fake news detection similar to those proposed for the English language. Indeed, to the best of our knowledge, the first corpus of fake news in Spanish exclusively adapted for such a task was presented as recently as 2019 in [2]; nevertheless, this corpus consists of 971 labeled news items, which is an insufficient number of samples to train a solid DL model from scratch. Therefore, the main objective of this work is to design a Machine Learning (ML) strategy for the detection of fake news in Spanish, based on Transfer Learning techniques and/or machine translation tools, which allows the use of previously trained models, both in English and Spanish.

This paper is organized as follows: Section 2 presents some antecedents on the use of ML and DL models for fake news detection in different languages, with emphasis on English and Spanish; Section 3 presents the pre-processing strategies applied to the texts, as well as the models and embeddings we employed; in Section 4 we present the datasets utilized, the methodology for evaluating the different models, the settings for the experiments carried out, and the outcomes we obtained. Finally, we discuss the results and present the conclusions of this paper in Section 5.

2 Related Work

Automatic fake news detection (FND) is a task that has attracted extensive attention from AI researchers in recent years, as evidenced by the large number of publications in which the problem has been addressed by applying different strategies. Shu et al. [3] report a summary of research works on FND in social networks, analyzing the aspects involved from psychology, social theory, and algorithmic points of view. As in many ML applications, the proposed approaches addressing FND are composed of two stages: feature extraction and model building; the first refers to the numerical representation of news content and related information, while the second proposes the development of a machine learning model to distinguish between fake news and legitimate news. For example, Wu and Liu in [4] assume that fake news is typically manipulated to resemble real news; thus, they propose a classifier based on propagation paths in social networks using Long Short-Term Memory Recurrent Neural Networks (LSTM-RNN) and Embeddings. Although the FND task has traditionally been stated as a bi-class classification problem, in [5] the author presents a dataset in English (the Liar dataset) which is composed of 6 classes: pants-fire, false, barely true, half-true, mostly true, and true. In addition, this author evaluates four classification models following an approach where both meta-data and text are considered; he presents Support Vector Machine (SVM) as the best classical ML model and Convolutional Neural Network (CNN) as the best DL model, which outperformed the other models with an accuracy of 27% on the test set. Using this same dataset, Braşoveanu and Andonie in [6] propose adding a pre-processing stage based on the extraction of semantic features from the text. These authors also evaluate classical ML and DL models, finding the SVM model to be the best in terms of performance (28.4%) for classical ML, and the CapNetLSTM model (64.4%) for DL, which was used in combination with a pre-trained Embeddings model; these results were obtained on the dataset presented in [5].
The authors conclude that employing semantic features significantly improves accuracy in fake news detection; in particular, for DL models, the improvement in accuracy was up to 5-6%. Furthermore, they also highlighted that "the accuracy of the various models greatly varies depending on the data sets and the number of classes involved," a phenomenon we also noticed across this state-of-the-art review.

The previous works tackled the FND task using datasets in English; however, this paper focuses on FND for Spanish. Faustini and Covões propose in [7] an approach using text features that can be generated independently of the news source platform and, as far as possible, independently of the news language under analysis. The authors report competitive results for news in languages belonging to the Germanic, Latin, and Slavic language groups. They used five datasets, and each one was processed with four different Natural Language Processing (NLP) techniques for text representation. Then, experiments were performed with different models, obtaining the best result with the Random Forest and SVM algorithms combined with Bag-of-Words (BoW) as the text representation technique; hence, they got a prediction rate of up to 95% for news in the specified linguistic groups. Additionally, Posadas-Durán et al. [2] address the FND task for Spanish news using different classical ML models: SVM, Random Forest, Logistic Regression, and Boosting; these models were combined with different text pre-processing strategies that allow extracting useful semantic information for the detection task: BoW, Part-of-Speech tags (POS tags), and n-grams, as well as applying Stop Words removal to avoid prepositions and/or punctuation marks in the text. The experiments were carried out using their own dataset¹ (released under a CC-BY-4.0 license). The authors report results of up to 77.28% accuracy for one of the combinations. To the best of our knowledge, no works applying DL models to the Spanish FND task have been published so far.

¹ https://github.com/jpposadas/FakeNewsCorpusSpanish

3 Methods

3.1 Preprocessing steps

In order to obtain consistent results, a data standardization process known as Text Normalization was performed, which, in addition to eliminating non-alphanumeric characters in the text, includes some of the most commonly used techniques in NLP:
• Stop Words: we removed words that, by general agreement, do not contribute to the models' learning process in the context of the problem addressed; for instance, articles and prepositions.
• Stemming: this technique was used to reduce words to their root.
• Tokenization and Padding: as usual in text processing tasks, we performed tokenization and padding, when required, for word and sentence representation.
Subsequently, we compared some of the most common text representation techniques, as sketched below: BoW, which provides the number of occurrences of each word in the text corpus; term frequency-inverse document frequency (tf-idf), which provides a weighted measure of the importance of each term within the text (according to its frequency of occurrence in sentences); and pre-trained Word Embeddings, where words and the semantic relationships among them are represented as vectors. It is worth clarifying that we call Word Embeddings both pre-trained vectors such as word2vec or GloVe, and embeddings obtained from pre-trained models such as ELMo or BERT (presented in subsection 3.2).
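A minimal sketch of this normalization-plus-representation pipeline, assuming Spanish input text and using NLTK and scikit-learn (the library choices and helper names are ours; the paper does not state its implementation):

```python
import re
from nltk.corpus import stopwords          # requires: nltk.download('stopwords')
from nltk.stem import SnowballStemmer
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

stemmer = SnowballStemmer("spanish")
stop_es = set(stopwords.words("spanish"))

def normalize(text: str) -> str:
    """Text Normalization: keep alphanumeric tokens, drop stop words, stem."""
    tokens = re.findall(r"[a-záéíóúñü0-9]+", text.lower())
    return " ".join(stemmer.stem(t) for t in tokens if t not in stop_es)

docs = ["Esta noticia es completamente falsa.",
        "El gobierno anunció nuevas medidas."]
clean = [normalize(d) for d in docs]

bow = CountVectorizer().fit_transform(clean)     # Bag-of-Words counts
tfidf = TfidfVectorizer().fit_transform(clean)   # tf-idf weighted representation
print(bow.shape, tfidf.shape)
```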
3.2 Models

Classical ML models and DL models based on artificial neural networks were used. We employed the ML models intending to create a baseline for comparison purposes; hence, we selected the following: Support Vector Machine (SVM), Random Forest (RF), Gradient Boosting Tree (GBT), and Multi-Layer Perceptron (MLP). For the DL classifiers, besides word embeddings, two types of layers were used: Long Short-Term Memory Recurrent Neural Network (LSTM-RNN) using a many-to-one architecture, and Convolutional Neural Network (CNN). LSTM-RNNs process the input data as sequences of dependent observations, while CNNs can process n-grams through the application of convolutional filters. A schematic of the DL classifiers in combination with an embedding layer is illustrated in Figure 1; this figure shows the arrangement of the aforementioned layers and the different word embeddings we used, which are presented next.

Figure 1: Schematic of DL classifiers in combination with Embedding Layer

In some experiments, we trained the embedding layer as part of the model training process. However, in most cases, we used transfer learning strategies through pre-trained word embeddings. This procedure makes much sense when either there are not many samples available for training or the computational resources are limited. Four word-embedding variants were evaluated:
1. Global Vectors for Word Representation (GloVe): an unsupervised method that captures statistical information from a text corpus; to generate word representations, its training process is based on the spectral decomposition of the co-occurrence matrix [8]. We made use of the 300-dimensional vectors pre-trained on the English Wikipedia corpus, available at [9].
2. Embeddings from a Language Model (ELMo): in contrast to GloVe, which provides a fixed meaning for each word, the representations generated by ELMo are functions of the entire sequence instead of a single word. ELMo encodes words by individual characters, so it allows the same word to have different representative vectors under different contexts [10]. The pre-trained model was downloaded from the public repository TensorFlow Hub [11].
3. Bidirectional Encoder Representations from Transformers (BERT): a language representation model based on another type of model called the Transformer, where instead of strictly analyzing the temporal dependence of the input sequence, all possible combinations of the input sequence are evaluated through an attention layer [12]. It has the advantage that its training process can be performed in parallel, since it does not depend on the temporal condition. For this research, the so-called BERT Base was used, a model with a total of 110 million pre-trained parameters, downloaded from the public repository TensorFlow Hub [13].
4. BETO: this model corresponds to a BERT version trained on a large corpus of text in Spanish instead of English [14]. The size of the model is similar to BERT Base, with approximately 110 million parameters.
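A minimal Keras sketch of the two classifier heads in Figure 1, with a trainable embedding layer; the layer sizes here are placeholders (the values actually explored by the paper are listed in Section 4.2):

```python
from tensorflow import keras
from tensorflow.keras import layers

VOCAB, DIM, MAXLEN = 20000, 300, 500  # placeholder sizes

def build_model(head: str) -> keras.Model:
    """Embedding layer followed by either an LSTM (many-to-one) or a CNN head."""
    inputs = keras.Input(shape=(MAXLEN,))
    x = layers.Embedding(VOCAB, DIM)(inputs)       # trainable-embedding variant
    if head == "lstm":
        x = layers.LSTM(16)(x)                     # last state only (many-to-one)
    else:
        x = layers.Conv1D(16, kernel_size=10, activation="relu")(x)
        x = layers.GlobalMaxPooling1D()(x)
        x = layers.Dense(4, activation="relu")(x)  # additional dense layer
    outputs = layers.Dense(1, activation="sigmoid")(x)  # fake vs. legitimate
    model = keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

build_model("lstm").summary()
```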
4 Experiments and Results

4.1 Datasets

Four free-to-use datasets were chosen for this study. Two of them consist of news in English labeled as fake or real: the Fake and Real News dataset [15] and the News Data Set - Fake OR Real [16]; the other two correspond to news in Spanish, also properly labeled: The Spanish Fake News Corpus [2], containing 971 news items, and Fake News in Spanish [17], consisting of 1600 news items. None of the above datasets had missing or null data, and they are well balanced considering the two classes involved. The English datasets were merged, resulting in a final English corpus comprising 51233 samples, of which 26645 are fake and 24588 are genuine; the same procedure was followed for the individual datasets in Spanish, resulting in a final Spanish corpus consisting of 2571 samples, of which 1280 are fake and 1291 genuine.² Figure 2 shows the corresponding distribution for each resulting dataset. Since the number of samples in Spanish is considerably small for training a DL model from scratch, one of the strategies followed during the experiments consists of evaluating the capacity of a DL model trained with the English corpus to predict fake Spanish news translated into English using the Google translation API.

² In this paper, we refer to the resulting datasets presented in this subsection as follows: the dataset in English, the dataset in Spanish, and the translated dataset.

Figure 2: Sample distribution for the resulting datasets in English and Spanish

4.2 Experimental setup

The experiments were carried out using a Bootstrap (ShuffleSplit) validation methodology with 5 iterations; a code sketch of this protocol is given at the end of this subsection. Depending on each scheme (described below), the partitioning was done into three subsets (train, development, test) or into two subsets (train, test), taking 80% for training and 20% for testing (in the case of partitioning into three subsets, the internal train/development sub-division was 80%/20%, respectively); however, in some experiments the split of the dataset was set to a ratio of 90% train / 10% test. These variations in the partitioning were considered because of the few samples in the final corpus in Spanish; we wanted to try different combinations aiming to keep as many samples for training as possible. Considering the balanced condition of the datasets we used, and the fact that we were addressing a bi-class classification task, we chose accuracy as the performance metric for measuring the generalization ability of the models.

Regarding the execution of the experiments, four schemes were defined in combination with the datasets and the different models. Initially, a baseline was established, training and validating the four classical ML models listed in subsection 3.2 using the dataset in Spanish (first scheme). For the classical ML approaches, the texts were represented using the BoW and tf-idf techniques; this step was carried out to have a reference point for comparison with the DL-based architectures. The subsequent experiments combine different DL models with different word embedding representations (presented in subsection 3.2), varying the datasets used for training and validation. The second scheme uses the dataset in Spanish both to train and validate two vanilla DL architectures based on LSTM and CNN layers, as well as more sophisticated architectures built on top of BERT-type models. Concerning the experiments with the BETO embedding, we tried different values for the number of epochs, and also applied the early stopping strategy considering different values of the tolerance and patience hyperparameters. For its part, the third scheme is similar to the former one but uses the dataset in English instead, so in this case no experiments were performed using the pre-trained BETO model. The last scheme trains the models with the dataset in English and validates with the translated dataset (fourth scheme).
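A minimal sketch of the Bootstrap (ShuffleSplit) protocol described above; the feature matrix X and labels y are stand-ins (variable names and the stand-in data are ours):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import ShuffleSplit

X = np.random.rand(2571, 100)          # stand-in features for the Spanish corpus
y = np.random.randint(0, 2, 2571)      # stand-in fake/genuine labels

scores = []
splitter = ShuffleSplit(n_splits=5, test_size=0.2, random_state=0)  # 5 iters, 80/20
for train_idx, test_idx in splitter.split(X):
    model = RandomForestClassifier(n_estimators=500, max_features=50)
    model.fit(X[train_idx], y[train_idx])
    scores.append(accuracy_score(y[test_idx], model.predict(X[test_idx])))

print(f"accuracy: {np.mean(scores):.3f} +/- {np.std(scores):.3f}")
```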
Moreover, we conducted some experiments where samples from the translated dataset were progressively mixed with the dataset in English during the training phase; then, the remaining portion of the translated dataset was used for validation, i.e., emulating a learning curve.

The following is the collection of hyperparameter values we considered when training and validating the different models (a sketch of the corresponding grid search follows this list). Regarding the ML models for the baseline:
• (SVM) RBF and linear kernels; regularization parameter C: 1e3, 1e-3; kernel coefficient gamma for RBF: 0.1, 1
• (RF and GBT) number of trees: 50, 100, 200, 300, 500; maximum number of features: 50, 100, 200, 300
• (MLP) hidden layers: 1, 2, 3; number of neurons per hidden layer: 10, 50; epochs: 1000, 1500
Furthermore, the combinations of these models were evaluated with BoW and tf-idf representations; removing and not removing Stop Words; applying and not applying Stemming; and considering a maximum vocabulary size of 10000, 20000, 30000, and 40000 words. Similarly, for the DL models we considered:
• LSTM: units in the hidden layer (Units) [this model was only implemented with a single hidden layer], kernel regularizer (KR), recurrent regularizer (RR), dropout (D).
• CNN: number of filters (F), kernel size (KS), number of units for the additional dense layer (Units), kernel regularizer (KR).
It is also worth pointing out that, in order to set the input length for the models, we used a histogram-based approach to determine the most common length (in words) of the news items in the English and Spanish datasets: 1500 and 500 words, respectively.
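A sketch of how such a hyperparameter sweep can be run for the baseline models with scikit-learn, assuming the normalized corpus from Section 3.1; the grid below reproduces only the SVM portion of the values listed above, and the pipeline wiring is our assumption:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import GridSearchCV, ShuffleSplit
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

pipe = Pipeline([
    ("rep", TfidfVectorizer()),   # tf-idf representation with capped vocabulary
    ("clf", SVC()),
])
grid = {
    "rep__max_features": [10000, 20000, 30000, 40000],
    "clf__kernel": ["rbf", "linear"],
    "clf__C": [1e3, 1e-3],
    "clf__gamma": [0.1, 1],       # only used by the RBF kernel
}
search = GridSearchCV(pipe, grid, scoring="accuracy",
                      cv=ShuffleSplit(n_splits=5, test_size=0.2, random_state=0))
# search.fit(clean_texts, labels)  # clean_texts/labels: outputs of Section 3.1
```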
The source code used to carry out the experiments can be found in a publicly accessible repository on GitHub.³

³ https://github.com/kevinmaiden7/Spanish_FakeNewsDetection

4.3 Results

Table 1 shows the best results for each of the models considered during the experiments of the first scheme, described in subsection 4.2, together with the configuration of pre-processing steps that achieved them. The hyperparameter values selected for each model were the following:
• (SVM) RBF kernel; C: 1e3; kernel coefficient gamma: 1
• (RF and GBT) number of trees: 500; maximum number of features: 50
• (MLP) hidden layers: 1; neurons: 10; epochs: 1500

Table 1: Baseline results for the dataset in Spanish
Model  Vocab Size  Stemming  Remove StopWords  Text Representation  test_acc
SVM    10000       NO        YES               tf-idf               0.798
RF     40000       NO        YES               tf-idf               0.802
GBT    40000       YES       NO                BoW                  0.783
MLP    10000       YES       NO                tf-idf               0.794

According to the baseline results, RF in combination with a tf-idf text representation showed the highest accuracy. Subsequently, we performed the experiments with the DL models (LSTM, CNN) in combination with the different types of Word Embeddings, following the second, third, and fourth schemes. From this point on, we permanently removed Stop Words and no longer applied Stemming in the data pre-processing.

Initially, we ran some experiments using a trainable embedding layer; the results are summarized in Table 2, where the hyperparameter values selected for each model were:
• LSTM (Spanish): 16 units; KR and RR equal to 1; D equal to 0
• LSTM (English): 4 units; KR and RR equal to 0.01; D equal to 0
• CNN (Spanish): F equal to 16; KS equal to 10; 4 units; KR equal to 0.01
• CNN (English): F equal to 16; KS equal to 10; 12 units; KR equal to 0

Table 2: Results for DL models with a trainable embedding layer; the column dev_acc shows the accuracy in the development set, std is the standard deviation, and test_acc shows the accuracy in the test set.
Model  Dataset Language  dev_acc  std    test_acc
LSTM   Spanish           0.714    0.026  0.761
LSTM   English           0.95     0.02   0.931
CNN    Spanish           0.73     0.021  0.685
CNN    English           0.984    0.002  0.982

The LSTM and CNN models trained with the English dataset, whose results are shown in Table 2, were also validated with the whole translated dataset, yielding accuracies of 56.7% and 53.2%, respectively.

Next, we performed the experiments using a Transfer Learning approach with the pre-trained 300-feature GloVe embedding layer presented in subsection 3.2; this time, the embedding values were left fixed during the training process, and only the added hidden layers were fine-tuned. Since the GloVe vectors utilized were trained on a corpus in English, these experiments correspond to the third and fourth schemes. The results are summarized in Table 3; the hyperparameter values chosen for each model were the following:
• (LSTM) 8 units; KR and RR equal to 0; D equal to 0.5
• (CNN) F equal to 16; KS equal to 10; 4 units; KR equal to 0

Table 3: Results for DL models using the GloVe embedding with the dataset in English. The column dev_acc shows the accuracy in the development set, std is the standard deviation, and test_acc shows the accuracy in the test set.
Model  dev_acc  std    test_acc
LSTM   0.962    0.006  0.924
CNN    0.974    0.002  0.973

Then, when validating the former models using the translated dataset (fourth scheme), we got accuracy values of 54% and 53.8% for the LSTM and CNN layers, respectively.

At this point, we decided to implement a learning curve in order to evaluate the effect of including data from the translated dataset in a training set composed of the original news in English. The models evaluated included LSTM and CNN layers with both trainable embeddings and GloVe. Five experiments were run for each case, adding 500, 1000, 1500, 2000, and 2500 samples from the translated dataset to the training set, and then validating with the total amount of remaining samples of the translated dataset. Figure 3 shows the results for the CNN model with a trainable embedding, which was the best combination found in terms of accuracy when applying this strategy.

Figure 3: Learning curve results for CNN with trainable embedding

Based on the previous results obtained with GloVe, and to simplify the experimental phase, the best hyperparameters of the LSTM and CNN layers found using this embedding were left fixed for the subsequent experiments applying ELMo and BERT-type embeddings.

Table 4: Results for ELMo, BERT and BETO embeddings
Embedding  Model  Scheme  Epochs  test_acc
ELMo       LSTM   Third   7       0.973
ELMo       CNN    Third   5       0.957
ELMo       CNN    Fourth  7       0.525
BERT       CNN    Third   7       0.957
BERT       CNN    Fourth  7       0.53
BETO       LSTM   Second  25      0.80

Regarding the results with the ELMo embedding, we got high accuracy outcomes for both the LSTM and CNN models in the third scheme, as shown in Table 4; however, for the fourth scheme, the results show a degradation in performance. Additionally, taking into account the trend identified in the aforementioned learning curve approach, we added 2500 samples from the translated dataset to the training set, fitted the CNN layer in combination with the ELMo embedding on this set, and validated with the remaining 71 samples of the translated dataset: we reached an accuracy value of 70%.
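The frozen-embedding transfer-learning setup used in the GloVe experiments above (Table 3) can be sketched as follows; loading of the embedding matrix is schematic, and glove_matrix stands for vectors parsed from the pre-trained GloVe file:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

VOCAB, DIM, MAXLEN = 20000, 300, 1500
glove_matrix = np.zeros((VOCAB, DIM))  # stand-in: fill from the 300-d GloVe vectors

inputs = keras.Input(shape=(MAXLEN,))
x = layers.Embedding(VOCAB, DIM, weights=[glove_matrix],
                     trainable=False)(inputs)   # embedding values stay fixed
x = layers.LSTM(8, dropout=0.5)(x)              # only the added layers are fitted
outputs = layers.Dense(1, activation="sigmoid")(x)
model = keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```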
For the models built on top of the BERT embedding, experiments were only carried out with the CNN layers; as described in Table 4, we again obtained a high accuracy result for the third scheme, whereas such a good level was not achieved for the fourth scheme. Similarly to the procedure followed with the ELMo embedding, 2500 samples from the translated dataset were added to the training set, and we then validated with the remaining 71 samples: in this case we got an accuracy level of 63.4%.

Concerning the BETO embedding, the experiments with this model were framed in the second scheme (training and validating with the dataset in Spanish), since BETO itself is a model trained on a corpus in this language; in this case, we used both LSTM and CNN layers. Table 4 shows the results for this experiment, where it is possible to observe that the LSTM model trained for 25 epochs achieved results of up to 80% accuracy on the test set; this is the best result we reached with DL models for the dataset in Spanish. Figure 4 shows the confusion matrix associated with this architecture for the test set; it is worth highlighting that we used the early stopping strategy, although it did not yield a significant improvement in comparison to the previous results.

Figure 4: (BETO) normalized confusion matrix for LSTM with the dataset in Spanish
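One way to reproduce the BETO + LSTM combination is to feed BETO's contextual token embeddings into an LSTM head. A sketch using the publicly released Hugging Face checkpoint of BETO follows; the checkpoint name, the PyTorch wiring, and the head size are our assumptions, since the paper does not specify its implementation:

```python
import torch
from torch import nn
from transformers import AutoModel, AutoTokenizer

NAME = "dccuchile/bert-base-spanish-wwm-uncased"  # public BETO checkpoint (assumed)
tokenizer = AutoTokenizer.from_pretrained(NAME)
beto = AutoModel.from_pretrained(NAME)

class BetoLSTM(nn.Module):
    """BETO contextual embeddings followed by an LSTM head for binary FND."""
    def __init__(self, hidden=16):
        super().__init__()
        self.beto = beto
        self.lstm = nn.LSTM(self.beto.config.hidden_size, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)

    def forward(self, **enc):
        tokens = self.beto(**enc).last_hidden_state   # (batch, seq, 768)
        _, (h, _) = self.lstm(tokens)                 # final state (many-to-one)
        return torch.sigmoid(self.out(h[-1]))

enc = tokenizer(["Esta noticia es falsa."], return_tensors="pt",
                padding=True, truncation=True, max_length=500)
print(BetoLSTM()(**enc))  # probability of the positive ("fake") class
```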
5 Discussion and Conclusions

Regarding the proposed baseline for the dataset in Spanish (first scheme), RF showed the best performance, reaching an accuracy of 80% using the tf-idf text representation; however, this performance was not statistically different from that obtained with SVM, where a smaller vocabulary size was used. This model outperformed the best result reported in [2], where The Spanish Fake News Corpus dataset was also utilized. Furthermore, we noticed that for this scheme there were no significant differences in the performance of the models when applying Stemming or removing Stop Words, or even when varying the text representation strategy or the vocabulary size.

It is worth highlighting the gap between the number of samples in the resulting datasets for English and Spanish, shown in Figure 2. Since the models used in this research follow a phenomenological approach, they highly depend on the amount of experimental data they are trained on. This was evidenced by the prominent difference in accuracy we obtained in the third and fourth schemes using the GloVe, ELMo, and BERT embeddings. Also, concerning the second scheme, we noticed that the models exhibited a trend of overfitting due to the small number of samples available for training; moreover, we observed that the regularization strategies we employed did not significantly improve the performance.

Regarding the fourth scheme, it is important to underline that the vocabulary present in the translated dataset corresponded to just 60% of that present in the dataset in English; this situation negatively affected the results we obtained for the translation strategy. For this scheme, despite the excellent performance of the models trained and validated with the dataset in English (third scheme), when we validated with the translated dataset the accuracy values were drastically reduced. Consequently, the results of the implemented learning curve indicated a performance improvement (although in different ratios) as more samples were added from the translated dataset to the training set (as shown in Figure 3); this pattern was noticed regardless of the combination of model and embedding layer utilized.

Concerning the different embeddings we used, similar results were obtained for the third and fourth schemes when using GloVe, ELMo, or BERT; in fact, it is noteworthy that, in combination with these different embeddings, the LSTM and CNN layers showed similar results. Furthermore, taking into account that these pre-trained models correspond to the state of the art in NLP, we expected to obtain salient results by mixing portions of the translated dataset with the dataset in English for training; however, due to the small number of samples available in the dataset in Spanish, and the discrepancies resulting from the translation, the predictive capability of the models was limited. Nonetheless, the BETO embedding allowed us to obtain the best result for the dataset in Spanish (and for any validation over news items in Spanish using DL models), considering that its approximately 110 million parameters were pre-trained on a corpus in this language; this enabled us to take full advantage of the Transfer Learning strategy and obtain an outstanding performance of 80% accuracy on the test set, in spite of the small number of samples available for such a deep network architecture. Figure 4 shows the result for the LSTM model combined with this embedding, which corresponds to the best strategy identified for detecting fake news in Spanish using DL techniques: out of the 258 news items used for validation, the model correctly classified 76% of the fake news items and 86% of the legitimate news items, which we consider a good hit ratio; by contrast, the model tends to confuse fake news items with legitimate ones, which corresponds to the main error condition it incurred.

Although the best detection rate achieved by the DL models was similar to that obtained with RF, there is undoubtedly more room for improvement in the case of deep neural network architectures, given the combination with Word Embeddings and more advanced techniques. Thus, in the future, this research could continue toward building a more robust system from the best strategy we found (BETO + LSTM), if a set of labeled news in Spanish with a more representative number of samples becomes available; furthermore, more experiments combining hyperparameter values and network architectures could also be carried out.

References
[1] "5 claves para entender el escándalo de Cambridge Analytica que hizo que Facebook perdiera US 37.000 millones en un día (BBC News Mundo)." https://www.bbc.com/mundo/noticias-43472797, Mar 2018. [Online; accessed 09-Sep-2021].
[2] J.-P. Posadas-Durán, H. Gómez-Adorno, G. Sidorov, and J. J. M. Escobar, "Detection of fake news in a new corpus for the Spanish language," Journal of Intelligent & Fuzzy Systems, vol. 36, no. 5, pp. 4869-4876, 2019.
[3] K. Shu, A. Sliva, S. Wang, J. Tang, and H. Liu, "Fake news detection on social media: A data mining perspective," ACM SIGKDD Explorations Newsletter, vol. 19, no. 1, pp. 22-36, 2017.
[4] L. Wu and H. Liu, "Tracing fake-news footprints: Characterizing social media messages by how they propagate," in Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining, pp. 637-645, 2018.
[5] W. Y. Wang, ""Liar, liar pants on fire": A new benchmark dataset for fake news detection," arXiv preprint arXiv:1705.00648, 2017.
[6] A. M. Braşoveanu and R. Andonie, "Semantic fake news detection: a machine learning perspective," in International Work-Conference on Artificial Neural Networks, pp. 656-667, Springer, 2019.
[7] P. H. A. Faustini and T. F. Covões, "Fake news detection in multiple platforms and languages," Expert Systems with Applications, p. 113503, 2020.
[8] J. Pennington, R. Socher, and C. D. Manning, "GloVe: Global vectors for word representation," in Empirical Methods in Natural Language Processing (EMNLP), pp. 1532-1543, 2014.
[9] J. Pennington, "GloVe: Global vectors for word representation." https://nlp.stanford.edu/projects/glove/, 2014. [Online; accessed 09-Sep-2021].
[10] M. E. Peters, M. Neumann, M. Iyyer, M. Gardner, C. Clark, K. Lee, and L. Zettlemoyer, "Deep contextualized word representations," arXiv preprint arXiv:1802.05365, 2018.
[11] Google, "ELMo - TensorFlow Hub." https://tfhub.dev/google/elmo/3, 2020. [Online; accessed 09-Sep-2021].
[12] J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, "BERT: Pre-training of deep bidirectional transformers for language understanding," arXiv preprint arXiv:1810.04805, 2018.
[13] Google. https://tfhub.dev/tensorflow/bert_en_uncased_L-12_H-768_A-12/2, 2020. [Online; accessed 09-Sep-2021].
[14] J. Cañete, G. Chaperon, R. Fuentes, J.-H. Ho, H. Kang, and J. Pérez, "Spanish pre-trained BERT model and evaluation data," in PML4DC at ICLR 2020, 2020.
[15] C. Bisaillon, "Fake and real news dataset." https://www.kaggle.com/clmentbisaillon/fake-and-real-news-dataset, Mar 2020. [Online; accessed 09-Sep-2021].
[16] V. Ukani, "News Data Set - Fake or Real." https://kaggle.com/vikasukani/news-data-set-fake-news-with-python, Jul 2020. [Online; accessed 09-Sep-2021].
ai_researcher
3
Let's_Be_Self-generated_via_Step_by_Step_A_Curriculum_Learning_Approach_to_Automated_Reasoning_with_Large_Language_Models.pdf
6 9 9 1 g u A 8 2 2 v 2 0 0 7 0 6 9 / n a - t c n u f : v i X r a ON SELF-ADJOINTNESS OF A SCHR ¨ODINGER OPERATOR ON DIFFERENTIAL FORMS Maxim Braverman Abstract. Let M be a complete Riemannian manifold and let Ω•(M) denote the space of differential forms on M. Let d : Ω•(M) → Ω•+1(M) be the exterior dif- ferential operator and let ∆ = dd∗ + d∗d be the Laplacian. We establish a suffi- cient condition for the Schr¨odinger operator H = ∆ + V (x) (where the potential V (x) : Ω•(M) → Ω•(M) is a zero order differential operator) to be self-adjoint. Our result generalizes a theorem by I. Oleinik about self-adjointness of a Schr¨odinger operator which acts on the space of scalar valued functions. 1. Introduction. Suppose M is a complete Riemannian non-compact manifold. We will assume that M is oriented and connected. Let T ∗M denote the cotangent i(T ∗M ) denote the exterior algebra of T ∗M . bundle to M and let We denote by L2Ω•(M ) the space of square integrable complex valued differential V •(T ∗M ) ⊗ C which are square integrable forms on M , i.e. the space of sections of with respect to the scalar product •(T ∗M ) = L V i V (1) hα, βi = α ∧ ∗β, α, β ∈ L2Ω•(M ). ZM Here ∗ denotes the Hodge operator associated to the Riemannian metric on M . Note that L2Ω0(M ) is just the space of square integrable complex valued functions on M . Let d : L2Ω•(M ) → L2Ω•+1(M ) denote the exterior differential and let d∗ be the operator formally adjoint to d with respect to the scalar product (1). Let ∆ = dd∗ + d∗d be the Laplacian and consider the Schr¨odinger operator (2) H = ∆ + V (x) : L2Ω•(M ) → L2Ω•(M ) where the potential V (x) is a measurable section of the bundle End of •(T ∗M ) which belongs to the class L∞ endomorphisms of loc (i.e. such that for any (cid:0) V compact set K ⊂ M there exists a constant CK > 0 such that |V (x)| ≤ CK for almost all x ∈ K). V (cid:1) •(T ∗M ) We denote by H0 the restriction of H on the space Ω• c (M ) of smooth differential forms with compact support. The purpose of this paper is to introduce a sufficient condition on the potential V (x) for operator H0 to be self-adjoint. 1991 Mathematics Subject Classification. Primary: 58G25 Secondary: 35P05. The research was supported by US - Israel Binational Science Foundation grant No. 9400299 Typeset by AMS-TEX 2 MAXIM BRAVERMAN 2. Statement of results. For x, y ∈ M let dist(x, y) denote the Riemannian distance between x and y. Fix a point p ∈ M and set r(x) = dist(x, p). Fix x ∈ M . The Riemannian metric on M defines a scalar product h·, ·ix on •(T ∗M ) ⊗ C. As usual, we write x M ) ⊗ C of the vector bundle •(T ∗ the fiber V (x) ≥ C if V V (3) hV (x) ξ, ξix ≥ C hξ, ξix •(T ∗ for any ξ ∈ endomorphism of V •(T ∗ x M ). x M ) ⊗ C. Note that it follows from (3) that V (x) is a self-adjoint Theorem A. Assume that for almost all x ∈ M the potential V (x) of the operator (2) satisfies the estimate V (4) V (x) ≥ −Q(x), where 1 ≤ Q(x) ≤ ∞ and Q−1/2(x) is a Lipschitz function on M such that (5) | Q−1/2(x) − Q−1/2(y) | ≤ K dist(x, y) for any x, y ∈ M. If for any piecewise smooth curve γ : [0, ∞) → M such that limt→∞ r(γ(t)) = ∞ the integral (6) Q−1/2(x) dγ = ∞ Zγ then the operator H0 is essentially self-adjoint. For the case of a Schr¨odinger operator acting on scalar valued functions this theorem was established by I. Oleinik [O2]. Note that Q(x) may be equal to infinity on a set of positive measure. As a simple consequence of Theorem A we obtain the following Theorem B. 
Suppose that for almost all x ∈ M the potential V (x) satisfies the , where 1 ≤ q ≤ ∞ and q−1/2(t) is a Lipschitz function on estimate V (x) ≥ −q R such that q−1/2(t) dt = ∞. Then the operator H0 is essentially self-adjoint. In particular, if M = Rn and V (x) ≥ −C|x|2 then the operator H0 is essentially r(x) ∞ 0 (cid:1) (cid:0) self-adjoint. R Remark. Theorem A remains true if we replace L2Ω•(M ) by the space of square integrable forms on M with values in a flat Hermitian vector bundle F over M , provided that the Hermitian structure on F is flat. In this case the differential d should be replaced by the covariant differential associated to the flat structure on F . The proof is a verbatim repetition of the proof for the scalar case, cf. bellow. However the notation in the vector valued case is more complicated. 3. Historical remarks. An analogue of Theorem B for the case M = R1 was established by Sears [Se]. B. Levitan [Le] proved the Sears theorem for the Schr¨odinger operator acting on scalar valued functions on M = Rn. F. Rofe- Beketov [RB] extended these results to the case where the potential V (x) can not be estimate by a function depending only on dist(x, p). Many results and references ON SELF-ADJOINTNESS OF A SCHR ¨ODINGER OPERATOR 3 about the essential self-adjointness of Schr¨odinger operators on Rn may be found in [RS]. I. Oleinik [O1,O2] established Theorem A for the Schr¨odinger operator acting on scalar valued functions on a complete Riemannian manifold. Essential self-adjointness of a pure Laplacian (without lower order terms) on differential forms on a complete Riemannian manifold was first stated and proved by M. P. Gaffney [Ga1,Ga2]. A number of related results may be found in [Sh]. In [BFS], Theorem B is established for the case where M is a manifold with cylindrical ends and the potential V (x) ≥ 0. The result is used there to study Witten deformation of the Laplacian on a non-compact manifold. 4. Acknowledgment. I would like to thank M. Shubin for posing the problem and for essential help. I am very grateful to I. Oleinik for pointing out a gap in a preliminary version of the paper and for drawing my attention to the paper [O2]. I am also thankful to M. Farber and V. Matsaev for valuable discussions. 5. The domain of D(H ∗ D(H ∗ of distributions also belongs to L2Ω•(M ). 0 denote the operator adjoin to H0. The domain 0 consists of forms α ∈ L2Ω•(M ) such that Hα understood in the sense 0 ). Let H ∗ 0 ) of H ∗ The operator H0 is symmetric. Hence, to show that its closure is self-adjoint it 0 is symmetric. In other words we is enough to show that the adjoint operator H ∗ have to prove that (7) (Hα ∧ ∗β − α ∧ ∗Hβ) = 0 for any α, β ∈ D(H ∗ 0 ). ZM To prove (7) we need some information about the behavior of differential forms 0 ). The main result of this section is the following lemma, which provides from D(H ∗ us with this information. Lemma 1. If α ∈ D(H ∗ 0 ) then the forms Q−1/2dα, Q−1/2d∗α are square integrable. Remark. 1. By the standard theory of elliptic operators any α ∈ D(H ∗ 0 ) belongs to the Sobolev space H 2 loc. Hence, dα, d∗α are locally square integrable. Thus the lemma provides us with an information about the behavior of the forms from D(H ∗ 0 ) at infinity. 2. For the Schr¨odinger operator on scalar valued functions on Rn an analogous lemma was established in [RB]. The proof was adopted in [O1,O2] to the case of a Riemannian manifold. In our proof we follow rather closely the lines of [O2]. 
However, the fact that we deal with differential forms rather than with scalar valued functions demands a more careful analysis. Proof. Recall that we fixed a point p ∈ M and that for any x ∈ M we denoted by r(x) the Riemannian distance between x and p. It is shown in [O2, Proof of Lemma 1] that for any R > 0, ε > 0 there exist smooth functions rR,ε(x), FR,ε(x) on M which approximate the Lipschitz functions r(x), Q−1/2(x) in the sense that (8) |rR,ε(x) − r(x)| < ε, Q−1/2(x) − ε < FR,ε(x) < (1 + ε) Q−1/2(x), lim ε→0 |drR,ε(x)| ≤ 1, lim ε→0 |dFR,ε(x)| ≤ K, 4 MAXIM BRAVERMAN for any x ∈ r−1 R,ε([0, R + 1]). Here K is the same constant as in (5). Let Ψ : [0, +∞) → [0, 1] be a smooth function which is equal to one when t ≤ 1/2 and which is equal to zero when t ≥ 1. Set (9) ψR,ε(x) = Ψ rR,ε(x) R FR,ε(x) ( 0 (cid:0) (cid:1) rR,ε(x) ≤ R; if outside of the set rR,ε(x) ≤ R. For any R > 0 the functions ψR,ε, ε < 1 vanish outside of the compact set r−1([0, R + 1]). Hence, it follows from (8) and (4) that there exist a constant K1 > 0 not depending on R and a number εR > 0 (which does depend on R) such that (10) |dψR,ε(x)| ≤ K1, ψ2 R,ε(x) ≤ 2, ψ2 R,ε α ∧ ∗V α ≤ 2 kαk2, (cid:12) (cid:12) (cid:12) (cid:12) for any x ∈ M, R > 1, 0 < ε < εR, α ∈ L2Ω•(M ). Here kαk = hα, αi L2-norm of the form α. (cid:12) (cid:12) (cid:12) (cid:12) ZM 1 2 denotes the Functions ψR,ε have compact support. Hence, in view of the remark 1 after the statement of the lemma, the forms ψR,εdα and ψR,εd∗α are square integrable. Assume that α ∈ D(H ∗ 0 ) is a real valued form and set (11) J 2 R,ε = kψR,εdαk2 + kψR,εd∗αk2 = ψR,ε(x)2 dα ∧ ∗dα + d∗α ∧ ∗d∗α . ZM (cid:0) (cid:1) It follows from (8), (9) that to prove the lemma it is enough to show that (12) lim R→∞ lim ε→0 JR,ε < ∞. Let us first rewrite the integrand in (11) in a more convenient form. In the [Wa, §6.1]) d∗α = (−1)|α| ∗−1 d ∗ α calculations bellow we use the equality (cf. where |α| denotes the degree of the differential form α. (13) ψ2 ψ2 R,ε dα∧ ∗dα = d R,ε d∗α∧∗d∗α = (−1)|α|ψ2 ψ2 = −d (cid:0) ψ2 R,εd∗α ∧ ∗α R,εα ∧ ∗dα − 2ψR,εdψR,ε ∧ α ∧ ∗dα + ψ2 R,εα ∧ ∗d∗dα, R,ε d∗α ∧ d ∗ α (cid:1) + 2ψR,εdψR,ε ∧ d∗α ∧ ∗α + ψ2 R,εdd∗α ∧ ∗α. It follows now from (13), (10) and from the Stokes theorem that, if R > 1, ε < εR, then (cid:0) (cid:1) kψR,εdαk2 = ψ2 R,ε dα ∧ ∗dα = −2hdψR,ε ∧ α, ψR,εdαi + hα, ψ2 R,εd∗dαi ≤ 2K1kαk kψR,εdαk + hα, ψ2 R,εd∗dαi, kψR,εd∗αk2 = R,ε d∗α ∧ ∗d∗α = 2hdψR,ε ∧ ψR,εd∗α, αi + hψ2 ψ2 R,εdd∗α, αi ZM ZM ≤ 2K1kψR,εd∗αk kαk + hα, ψ2 R,εdd∗αi. ON SELF-ADJOINTNESS OF A SCHR ¨ODINGER OPERATOR 5 Summing these two equations we obtain (14) J 2 R,ε ≤ 2K1kαk kψR,εdαk + kψR,εd∗αk + hα, ψ2 R,ε∆αi ≤ 4K1kαk JR,ε + (cid:0) ZM ψ2 R,ε (cid:1) α ∧ ∗Hα − α ∧ ∗V α (cid:0) ≤ 4K1kαk JR,ε + 2kαk kHαk + 2kαk2. (cid:1) Here the last inequality follows from (10). It follows from (14) that the set {JR,ε : R > 1, ε < εR} is bounded from above. Hence (12) holds. The proof of the lemma is completed. (cid:3) 6. Proof of Theorem A. We apply a modification of the method used in [RB] suggested by I. Oleinik [O2]. The quantity (15) ρ(x, y) = inf γ Zγ Q−1/2(x) dγ, e where the infimum is taken over all piecewise smooth curves connecting the points x, y ∈ M , is called generalized distance between x and y. It is a symmetric function in x, y which satisfies the triangular inequality. The first metric axiom is not valid in general. Note, however, that (6) implies, that the sets P −1([0, R]) are compact for any R > 0. Recall that in Section 2 we have fixed a point p ∈ M . Set P (x) = ρ(x, p). Then (cf. 
[O2, Lemma 2]) (16) |P (x) − P (y)| ≤ Q−1/2(x) dist(x, y) + e K 2 (dist(x, y))2 for any x, y ∈ M . It follows (cf. smooth function PR,ε(x) which approximates P (x) in the sense that [O2]) that for any R > 0, ε > 0 there exists a (17) e PR,ε(x) − P (x)| ≤ ε, | |d PR,ε(x)| ≤ Q−1/2(x), lim ε→0 for any x ∈ P −1([0, R + 1]). e e Assume that ε < 1 so that P −1 R,ε([0, R]) ⊂ P −1([0, R + 1]). Let us define a piecewise smooth function PR,ε(x) on M by the formula (18) PR,ε(x) = By (17), the inequality e PR,ε(x) ( R e if PR,ε(x) ≤ R; outside the set e PR,ε(x) ≤ R. e (19) |dPR,ε(x)| ≤ Q−1/2(x) lim ε→0 holds almost everywhere on M . 6 MAXIM BRAVERMAN Recall from Section 5, that the statement of Theorem A is equivalent to equality 0 ) and consider the following approximation of the integral (7) (7). Fix α, β ∈ D(H ∗ (20) IR,ε = 1 − PR,ε R ZM (cid:18) (cid:19) (cid:0) Hα ∧ ∗β − α ∧ ∗Hβ = 1 − ZM (cid:18) (cid:1) PR,ε R (cid:19) By the Fatou theorem ([RS, Theorem I.17]), it is enough to show that ∆α ∧ ∗β − α ∧ ∗∆β . (cid:0) (cid:1) (21) lim R→∞ lim ε→0 IR,ε = 0. We will need the following “integration by parts” lemma1 Lemma 2. Let φ : M → R be a smooth function with compact support. Then (22) φ ∆α ∧ ∗β ZM = φ dα ∧ ∗dβ + d∗α ∧ ∗d∗β + dφ ∧ β ∧ ∗dα − d∗α ∧ ∗β ZM (cid:0) for any α, β ∈ D(H ∗ 0 ). ZM (cid:1) (cid:0) (cid:1) Note that, by remark 1 after the statement of Lemma 1, all the integrals in (22) have sense. Proof. Recall that d∗u = (−1)|u| ∗−1 d ∗ u where |u| denotes the degree of the differential form u. Hence, if |u| = |v| − 1, then (23) φdu ∧ ∗v = φu ∧ ∗d∗v − dφ ∧ u ∧ ∗w + d (φu ∧ ∗v) Substituting into (23) first u = d∗α, v = β and then u = β, v = dα we obtain φdd∗α ∧ ∗β = −dφ ∧ d∗α ∧ ∗β + φd∗α ∧ ∗d∗β + d (φd∗α ∧ ∗β), φd∗dα ∧ ∗β = φβ ∧ ∗d∗dα = dφ ∧ β ∧ dα + φdα ∧ ∗dβ − d (φβ ∧ ∗dα). In the last equality we used that u ∧ ∗v = v ∧ ∗u for any differential forms u, v of the same degree. Summing the above equations, integrating over M and using the Stokes theorem we get (22). (cid:3) Using definition (20) of IR,ε and Lemma 2 we obtain (24) IR,ε = 1 R ZM dPR,ε ∧ β ∧ ∗dα − d∗α ∧ ∗β − α ∧ ∗dβ + d∗β ∧ ∗α . (cid:0) Let dµ(x) denote the Riemannian density on M . For any ξ ∈ denote by |ξ| its norm with respect to the scalar product on by the Riemannian structure on M . Then V V (cid:1) k(T ∗M ) ⊗ C we •(T ∗M ) ⊗ C induced (25) |hα, βi| ≤ |α ∧ ∗β| dµ(x) ≤ |α| |β| dµ(x) ≤ kαk kβk ZM ZM 1I learned this lemma from M. Shubin. ON SELF-ADJOINTNESS OF A SCHR ¨ODINGER OPERATOR 7 for any α, β ∈ L2Ω•(M ). Let us estimate the behavior of the right hand side of (24) as ε → 0. For the first term we obtain (26) lim ε→0 1 R (cid:12) (cid:12) (cid:12) (cid:12) ZM dPR,ε ∧ β ∧ ∗dα ≤ |dPR,ε| |d∗α| |β| dµ(x) 1 R lim ε→0 ZM |Q−1/2d∗α| |β| dµ(x) ≤ ≤ 1 R (cid:12) (cid:12) (cid:12) (cid:12) ZM kQ−1/2d∗αk kβk R . In the second inequality in (26) we used the estimate (19). The last inequality in (26) follows from Lemma 1. Analogously, one can estimate the other terms in the right hand side of (24). That proves (21) and Theorem A. (cid:3) References [BS] [BFS] M. Braverman, M. Farber, M. Shubin, The Novikov-Bott inequalities on a manifold with F. A. Berezin, M. A. Shubin, The Schr¨odinger equation, Kluwer, Dordrecht, 1991. a cylindrical ends, In preparation. [Ga1] M. P. Gaffney, The Harmonic operators for exterior differential forms, Proc. Nat. Acad. Sci. USA 37 (1951), 48–50. [Ga2] M. P. Gaffney, A special Stokes’s theorem for complete Riemannian manifolds, Ann. of [Le] [O1] [O2] [RB] [RS] [Se] [Sh] [Wa] Math. 60 (1954), 140–145. B. M. 
[Le] B. M. Levitan, On a theorem of Titchmarsh and Sears, Usp. Math. Nauk 16 (1961), 175-178.
[O1] I. M. Oleinik, On the essential self-adjointness of the Schrödinger operator on a complete Riemannian manifold, Mathematical Notes 54 (1993), 934-939.
[O2] I. M. Oleinik, On the connection of the classical and quantum mechanical completeness of a potential at infinity on complete Riemannian manifolds, Mathematical Notes 55 (1994), 380-386.
[RB] F. S. Rofe-Beketov, Self-adjointness conditions for the Schrödinger operator, Mat. Zametki 8 (1970), 741-751.
[RS] M. Reed, B. Simon, Methods of modern mathematical physics, Vol. I, II, Academic Press, London, 1978.
[Se] D. B. Sears, Note on the uniqueness of Green's functions associated with certain differential equations, Canadian J. Math. 2 (1950), 314-325.
[Sh] M. A. Shubin, Spectral theory of elliptic operators on non-compact manifolds, Astérisque 207 (1992), 37-108.
[Wa] F. W. Warner, Foundations of differentiable manifolds and Lie groups, Graduate Texts in Mathematics, Springer-Verlag, New York, Berlin, Heidelberg, Tokyo, 1983.

School of Mathematical Sciences, Tel-Aviv University, Ramat-Aviv 69978, Israel
E-mail address: [email protected]
ai_researcher
1
Sentinel_Lymph_Node_Mapping_by_Retroperitoneal_vNOTES_for_Uterus-Confined_Malignancies_A_Standardized_10-Step_Approach.pdf
Deep Learning Provides Rapid Screen for Breast Cancer Metastasis with Sentinel Lymph Nodes

Kareem Allam1, Xiaohong Iris Wang1, Songlin Zhang1, Jianmin Ding1, Kevin Chiu2, Karan Saluja1, Amer Wahed1, Hongxia Sun1, Andy N.D. Nguyen1
1Department of Pathology and Laboratory Medicine
2Medical School
University of Texas Health Science Center-Houston, Medical School, Texas 77030

ABSTRACT
Deep learning has been shown to be useful to detect breast cancer metastases by analyzing whole slide images (WSI) of sentinel lymph nodes (SLNs); however, it requires extensive scanning and analysis of all the lymph node slides for each case. Our deep learning study focuses on breast cancer screening with only a small set of image patches from any SLN, to detect changes in the tumor environment and not in the tumor itself. This study involves breast pathologists in our department and uses our in-house breast cancer cases and WSI scanners. We designed a convolutional neural network in the Python language to build a diagnostic model for four diagnostic categories (macrometastasis, micrometastasis, isolated tumor cells, and negative metastasis). SLNs with macrometastasis and micrometastasis are defined as positive cases, while those with isolated tumor cells only or truly negative for metastatic tumor cells are defined as negative cases. We obtained WSIs of Hematoxylin and Eosin-stained slides from 34 cases with near equal distribution across the 4 diagnostic categories, yielding a total of 2720 image patches, of which 2160 (79%) were used for training, 240 (9%) for validation, and 320 (12%) for testing. Interobserver variation was also examined among 3 users. The test results showed excellent diagnostic performance: accuracy (91.15%), sensitivity (77.92%), specificity (92.09%), positive predictive value (90.86%), and negative predictive value (80.66%). No significant variation in results was observed among the 3 observers. This preliminary study provides a proof of concept for incorporating an automated metastatic screen into the digital pathology workflow to augment pathologists' productivity. Our approach is unique since it provides a very rapid screen rather than an exhaustive search for tumor in all fields of all sentinel lymph nodes.

Key Words: Deep Learning, Whole Slide Imaging, Breast Cancer, Sentinel Lymph Nodes, Metastasis, Rapid Screen

Corresponding Author: Andy N.D. Nguyen, MD, MS
Department of Pathology and Laboratory Medicine
University of Texas Health Science Center-Houston, Medical School
6431 Fannin Street MSB 2.292, Houston, Texas 77030
Telephone: (713) 500-5337; Fax: (713) 500-0712
Email: [email protected]

INTRODUCTION
In surgery for a patient with breast cancer, the surgeon finds and removes the first lymph node(s) to which a tumor is likely to spread (called SLNs). To do this, the surgeon injects a radioactive substance and/or a blue dye into the tumor, the area around it, or the area around the nipple. Lymphatic vessels will carry these substances along the same path that the cancer would take; the first lymph nodes that the dye or radioactive substance travels to are the SLNs. The evaluation of breast SLNs is an important component of treatment. Patients with an SLN positive for metastatic cancer will receive more aggressive clinical management, including axillary lymph node dissection. The manual microscopic examination of SLNs is time-consuming and laborious, particularly in cases where the lymph nodes are negative for cancer or contain only small foci of metastatic cancer [1].
SLNs can be grouped into two types: positive for metastasis or negative for metastasis. Those that are positive can show macrometastasis (a tumor region of at least 2.0 mm) or micrometastasis (a tumor region of at least 200 cells or with a size between 0.2 mm and 2.0 mm). Those that are negative can be truly negative or contain isolated tumor cells (ITC) only, i.e., a tumor region of up to 200 cells and/or smaller than 0.2 mm [2]. Potential morphologic features of metastasis include pleomorphic nuclei and features of the tumor microenvironment, including lymphocytic infiltrates in the stroma, the sinus, and follicular hyperplasia [3].

Due to the large number of SLNs to screen for breast cancer metastasis, histopathologic screening often presents a challenge to pathologists. An automated diagnosis using digital images would be helpful to assist the pathologist in daily work. In this study, we investigate how automated screening methods can be combined with microscopic examination by pathologists to achieve better accuracy. We focus on using reactive morphology in non-tumor areas of lymph nodes to predict positive metastasis.

To analyze slides by automated techniques, it is first necessary to scan the slide into the computer's data storage. This process is called whole slide imaging (WSI). Techniques of WSI involve scanning and compressing the images before they are analyzed [4]. WSI offers many advantages, such as ease of slide sharing and image analysis [5].

Previous attempts to digitally classify histologic images were based on specific criteria (such as nuclear shape, nuclear size, texture, etc.) [3,6]; they turned out not to be successful [7]. Attention has turned to machine learning, which can be defined as software algorithms that can learn from and make predictions on data. This gives the software the ability to learn without being explicitly programmed. There are numerous machine learning methods; some examples are decision trees, cluster analysis, support vector machines, random forests, Bayesian networks, regression analysis, and neural networks [7]. Neural networks consist of multiple artificial nodes ("neurons") connected to form a network for prediction/classification [8], a design inspired by biological neural networks. Early generations of neural networks used supervised training, but this has some disadvantages. One disadvantage is that the parameters (such as the strengths of the connections between the neurons) may not converge, leaving no solution. Another is that it may not scale well.

Deep learning is the most recent and most disruptive method of machine learning; it is based on neural networks. Major breakthroughs in deep learning started in 2006. One is unsupervised learning, which allows a network to be fed with raw data (no known outcomes) and discover the representations needed for detection or classification. Another is the use of multiple layers in the network, which allows it to extract high-level and complex data representations and avoid some of the problems of older neural networks. Since such methods perform many operations in parallel, they can be sped up by using graphics processing units (GPUs). Studies have been done to assess the reproducibility of deep learning algorithms by using them to identify the tissue of origin from 512x512 pixel tiles; the performance of the algorithm was better than that of pathologists viewing the same tiles [9].
Deep learning techniques, especially third-generation neural networks called convolutional neural networks (CNN or ConvNet), have quickly become the state of the art in computer vision [10]. The ventral visual pathway is organized as a hierarchical series of four interconnected visual areas called Brodmann areas. Neurons in early areas, such as area V1, respond to comparatively simple visual features of the retinal image, while later areas, such as area V4, respond to increasingly complex visual features. The specialization of receptor cells is incorporated into the design of the CNN as pairs of convolution operators followed by a pooling layer (Figure 1) [11].

Convolution is an operation in image processing that uses filters to modify or detect certain characteristics of an image (such as Smooth, Sharpen, Intensify, Enhance). In CNNs, it is used to extract features of images. Mathematically, a convolution is done by multiplying the pixels' values in an image patch by a filter (kernel) matrix and then adding them up (Figure 2). This operation is also called a "dot product". By moving the filter across the input image, one obtains the final output as a modified, filtered image.

The CNN consists of interleaved convolutional and max-pooling layers and then a final fully connected layer (Figure 1) [12]. The convolutional layers (C) perform 'feature extraction' consecutively, from the image patch to higher-level features. The max-pooling layers (S) reduce image size by subsampling. The last 'fully connected' layers (F) provide the prediction.

Convolutional neural networks have been used to generate heat maps of tumors and tumor-infiltrating lymphocytes [13, 14]. Big companies are analyzing large volumes of data for business analysis and decisions using deep learning technology (Google's search engine, Google Photo, automobile companies with self-driving cars, and IBM's Watson). The application of deep learning to digital pathology imaging has had a promising start; it could impact personalized diagnostics and treatment. Deep learning has also been considered for interpreting and integrating multiple sources of information in pathology (histology, molecular, etc.) [15].

Recent studies have shown promising results in using deep learning to detect breast cancer in whole slide imaging of SLNs (examples: Camelyon16, ICIAR 2018) [16, 17]. However, they require extensive scanning and analysis of all the lymph node slides for each case. We explore how deep learning could be used for breast cancer screening with only a small set of image patches (5 patches) from any SLN. Our goal is to detect changes in the tumor microenvironment and not the tumor itself (Figure 3). Our approach is unique since it provides a very rapid screen rather than an exhaustive search for tumors in all fields of all lymph nodes. We also set out to examine the feasibility of looking at either negative or positive slides (in the uninvolved area) to predict metastasis. The tumor microenvironment has been shown to be important in diagnosing the tumor [18]. We examined three areas of interest (the interfollicular lymphocyte-rich area, the follicles, and the sinus; Fig. 3) to see which is best for predicting metastasis. Previous studies have examined tumor-infiltrating lymphocytes [19, 20]. Interobserver variation was also examined among different users: we assessed variation in predictive results with data obtained by 3 users.

Fig. 1 The CNN deep learning model

Fig. 2 Convolution of input with kernel
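To make the kernel dot-product operation concrete, the following minimal NumPy sketch (an illustration, not code from this study) slides a 3x3 kernel over a grayscale patch. Strictly speaking, this computes cross-correlation, which is what CNN layers actually implement.

```python
import numpy as np

def convolve2d(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Slide the kernel over the image and take the dot product at
    each position (no padding, stride 1), as in a CNN layer."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    output = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            patch = image[i:i + kh, j:j + kw]
            output[i, j] = np.sum(patch * kernel)  # element-wise product, then sum
    return output

# Example: a 3x3 sharpening kernel applied to a random 100x100 "patch"
patch = np.random.rand(100, 100)
sharpen = np.array([[0, -1, 0], [-1, 5, -1], [0, -1, 0]])
print(convolve2d(patch, sharpen).shape)  # (98, 98)
```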
MATERIALS AND METHODS
Our study was approved by the Institutional Review Board at the University of Texas Health Science Center. We obtained WSIs of SLNs using Motic scanners (Motic Easy Scan, Motic Instrument, Richmond, BC, Canada) in the pathology department of the University of Texas-Houston Medical School. The Motic Digital Slide Assistant software (by the same company) was used to view the WSIs. SnagIt (TechSmith Corp, Okemos, Michigan, USA) was used to capture and automatically save image patches (100x100 pixels) in JPEG format.

Our study includes 34 cases with near equal distribution across 4 diagnostic categories:
1. Macrometastasis: 10 cases
2. Micrometastasis: 8 cases
3. Isolated tumor cells (ITC): 6 cases
4. Negative: 10 cases

A positive WSI and a negative WSI were selected from each positive case, and two negative WSIs were selected from each negative case, to obtain a total of 68 WSIs. For each WSI, 40 image patches were obtained, for a total of 68x40 = 2720 image patches. Of the 2720 image patches, 2160 (79%) were used for training the model, 240 (9%) for validation, and 320 (12%) for testing.

We designed the CNN model using the Python language together with the TensorFlow and Keras libraries. The model ran on 64-bit Windows 10 Professional edition. Keras allows for parallel computing using graphics processing units (GPUs) with the Compute Unified Device Architecture (CUDA) by NVIDIA (Santa Clara, CA, USA). The hardware was a 9th Gen Intel® Core™ i7 9700 (8-Core, 12MB Cache, 4.7GHz), 32GB RAM (DDR4 at 2666MHz), and an NVIDIA® GPU (GeForce RTX™ 2070, 8GB GDDR6, 2304 CUDA cores). Our deep learning model used 14 layers, including convolution, max-pooling, and dense layers (Figure 1).
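As an illustration only, a Keras model in the same spirit could look as follows (alternating convolution and max-pooling blocks followed by dense layers, for 100x100 RGB patches and 4 classes). The filter counts and layer sizes below are assumptions for the sketch, not the exact 14-layer architecture used in this study.

```python
from tensorflow import keras
from tensorflow.keras import layers

# Hypothetical 14-layer CNN for 100x100 patches and 4 diagnostic classes.
model = keras.Sequential([
    keras.Input(shape=(100, 100, 3)),
    layers.Conv2D(32, 3, activation="relu"),   # 1
    layers.Conv2D(32, 3, activation="relu"),   # 2
    layers.MaxPooling2D(),                     # 3
    layers.Conv2D(64, 3, activation="relu"),   # 4
    layers.Conv2D(64, 3, activation="relu"),   # 5
    layers.MaxPooling2D(),                     # 6
    layers.Conv2D(128, 3, activation="relu"),  # 7
    layers.Conv2D(128, 3, activation="relu"),  # 8
    layers.MaxPooling2D(),                     # 9
    layers.Flatten(),                          # 10
    layers.Dense(256, activation="relu"),      # 11
    layers.Dropout(0.5),                       # 12
    layers.Dense(64, activation="relu"),       # 13
    layers.Dense(4, activation="softmax"),     # 14 (one unit per diagnostic category)
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```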
RESULTS
We looked at different areas of interest in the WSIs to see which one would be of most predictive value (positive vs. negative metastasis): 1. interfollicular lymphocyte-rich areas, 2. follicles, and 3. the sinus (Figure 3). The preliminary results indicated that areas containing interfollicular lymphocytes are of most predictive value (full results not reported in this article). Subsequently, our study focused on this parameter alone.

The image-by-image accuracy of user 1 was found to be 161/320 = 50.31% (Table 1). When the diagnoses were grouped by rank (i.e., diagnoses 0 and 1 are considered negative, 2 and 3 are considered positive), significantly better accuracy was achieved, at 275/320 = 85.93% (Table 2). For each test case, the predicted diagnosis was combined from the predictions for 5 images (at least 3 or more must agree), a process known as "majority voting" (see examples in Table 3). This led to a higher accuracy of 92.18% (59 sets/64 sets). When majority voting was used, we obtained the data in Table 4. From this, we calculated the accuracy, sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) for user 1:

Accuracy = 59/64 = 92.18%
Sensitivity = 27/(27+5) = 84.4%
Specificity = 32/(32+0) = 100%
PPV = 27/(27+0) = 100%
NPV = 32/(32+5) = 86.5%

In a similar manner, we calculated those values for the other 2 users; the results are tabulated in Table 5. No significant variation was observed among the 3 observers. The average results were as follows: Accuracy = 91.15%, Sensitivity = 77.92%, Specificity = 92.09%, PPV = 90.86%, NPV = 80.66%.

Fig. 3 Areas of interest in WSI

Table 1. Image-by-image accuracy (rows: observed diagnosis; columns: predicted diagnosis)

                          Negative (0)   ITC (1)   Micro Met (2)   Macro Met (3)
Negative (0)                   42           36           2               0
ITC (1)                        20           54           3               3
Micro Met (2)                  12            2          65               1
Macro Met (3)                  22            1          57               0

Accuracy: 161/320 = 50.31%

Table 2. Grouped ranking (rows: observed; columns: predicted)

                                 Negative (0) or ITC (1)   Micro Met (2) or Macro Met (3)
Negative (0) or ITC (1)                   152                            8
Micro Met (2) or Macro Met (3)             37                          123

Accuracy: 275/320 = 85.93%

Table 3. Examples of the majority voting process (each set = 5 images of one case; agreement counted after grouped ranking: 0/1 = negative, 2/3 = positive)

Set 1, observed dx = 3: predicted dx = 1, 0, 2, 0, 2  →  2/5 agree  →  Incorrect
Set 2, observed dx = 2: predicted dx = 2, 2, 0, 0, 2  →  3/5 agree  →  Correct
Set 3, observed dx = 1: predicted dx = 1, 1, 1, 0, 0  →  5/5 agree  →  Correct
Set 4, observed dx = 0: predicted dx = 1, 2, 0, 1, 1  →  4/5 agree  →  Correct

Table 4. Data for majority voting with grouped ranking for user 1

                      Predicted negative   Predicted positive
Observed negative             32                    0
Observed positive              5                   27

Accuracy: 59/64 = 92.18%

Table 5. Accuracy, sensitivity, specificity, PPV, and NPV (%) for all 3 users

User     Specificity   Sensitivity   Accuracy    PPV     NPV
User 1      95.00         76.88        92.19    93.89   80.42
User 2      89.38         78.75        87.50    88.11   80.79
User 3      91.88         78.12        93.75    90.58   80.77
Means       92.09         77.92        91.15    90.86   80.66
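The grouped ranking and majority voting behind Tables 2-4 can be expressed in a few lines of code. The sketch below is an illustration (not the study's code); it reproduces the first example of Table 3.

```python
from collections import Counter

def case_verdict(image_predictions):
    """Grouped ranking: diagnoses 0 (negative) and 1 (ITC) are negative;
    2 (micro) and 3 (macro) are positive. Majority vote over the 5 patches
    of a case (an odd count, so ties cannot occur)."""
    groups = ["positive" if dx >= 2 else "negative" for dx in image_predictions]
    return Counter(groups).most_common(1)[0][0]

# Set 1 from Table 3: observed dx = 3 (positive), predictions 1, 0, 2, 0, 2
print(case_verdict([1, 0, 2, 0, 2]))  # 'negative' -> incorrect for this positive case
```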
DISCUSSION
Deep learning has been shown to be useful for the identification of breast cancer metastases by analyzing whole sections of slide images of SLNs [12, 21]. Our study focuses on breast cancer screening using deep learning with only a small set of image patches from any SLN (positive or negative), to detect changes in the tumor microenvironment and not the tumor itself. Our approach is unique since it provides a very rapid screen rather than an exhaustive search for tumors in all fields of all lymph nodes. We obtained excellent predictive results for cancer metastasis in this study, which provide a proof of concept for incorporating an automated breast cancer metastatic screen into the digital pathology workflow to potentially augment pathologists' productivity. This could have a significant impact on health economics.

Some limitations of this study are: 1. the model was only validated on one hardware platform (Motic scanner); 2. representative images require preselection of lymphocyte-rich areas; 3. there is a lack of explicit diagnostic criteria (inherent to deep learning). Our preliminary study nevertheless provides a proof of concept for incorporating an automated breast cancer screen using digital microscopic images into the pathology workflow to augment pathologists' QA. Future studies will need to (a) include more hardware platforms and many more cases for training and validation, and (b) use automated segmentation of WSIs for lymphocyte-rich areas.

CONCLUSION
We obtained excellent predictive results for cancer metastasis from this study: 91% accuracy, 78% sensitivity, and 92% specificity using a set of 5 random image patches (100x100 pixels) from each test case. There is a potential role for this model in clinical work as a QA tool. If a case is positive by histology, a final diagnosis of metastasis can readily be made. For cases that are negative by histology, our model can be used to screen for metastasis. If the screen is negative, a final diagnosis of negative metastasis can be made; if the screen is positive, the slide can be re-examined to either find the metastases or make a final diagnosis of negative metastasis if none is found. In this way, the model serves as an extra checking step to help detect metastases that would otherwise be missed with manual examination alone.

REFERENCES
1. Wang, Dayong, et al. "Deep learning for identifying metastatic breast cancer." arXiv preprint arXiv:1606.05718 (2016).
2. Maguire, Aoife, and Edi Brogi. "Sentinel lymph nodes for breast carcinoma: a paradigm shift." Archives of Pathology & Laboratory Medicine 140.8 (2016): 791-798.
3. Seidl, Maximilian, et al. "Morphology of immunomodulation in breast cancer tumor draining lymph nodes depends on stage and intrinsic subtype." Scientific Reports 8.1 (2018): 1-12.
4. Zarella, Mark D., et al. "A practical guide to whole slide imaging: a white paper from the Digital Pathology Association." Archives of Pathology & Laboratory Medicine 143.2 (2019): 222-234.
5. Aeffner, Famke, et al. "Introduction to digital image analysis in whole-slide imaging: a white paper from the Digital Pathology Association." Journal of Pathology Informatics 10.1 (2019): 9.
6. Choras, Ryszard S. "Feature extraction for CBIR and biometrics applications." 7th WSEAS International Conference on Applied Computer Science. Vol. 7. 2007.
7. Marsland, Stephen. Machine Learning: An Algorithmic Perspective. Chapman and Hall/CRC, 2011.
8. Mitchell, Tom M. Machine Learning. New York: McGraw-Hill, 1997.
9. Bizzego, Andrea, et al. "Evaluating reproducibility of AI algorithms in digital pathology with DAPPER." PLoS Computational Biology 15.3 (2019): e1006269.
10. Roy, Kaushiki, et al. "Patch-based system for classification of breast histology images using deep learning." Computerized Medical Imaging and Graphics 71 (2019): 90-103.
11. Roy, Sudipta, and Anirban Choudhury. "On the intersection of deep learning and chest radiography: background and prospects." Available at SSRN 3861229 (2019).
12. El Achi, Hanadi, et al. "Automated diagnosis of lymphoma with digital pathology images using deep learning." Annals of Clinical & Laboratory Science 49.2 (2019): 153-160.
13. Le, Han, et al. "Utilizing automated breast cancer detection to identify spatial distributions of tumor-infiltrating lymphocytes in invasive breast cancer." The American Journal of Pathology 190.7 (2020): 1491-1504.
14. Amgad, Mohamed, et al. "Report on computational assessment of tumor infiltrating lymphocytes from the International Immuno-Oncology Biomarker Working Group." NPJ Breast Cancer 6.1 (2020): 1-13.
15. Fuchs, Thomas J., and Joachim M. Buhmann. "Computational pathology: challenges and promises for tissue analysis." Computerized Medical Imaging and Graphics 35.7-8 (2011): 515-530.
16. Rakhlin, Alexander, et al. "Deep convolutional neural networks for breast cancer histology image analysis." International Conference on Image Analysis and Recognition. Springer, Cham, 2018.
17. Kovalev, V., A. Kalinovsky, and V. Liauchuk. "Deep learning in big image data: histology image classification for breast cancer diagnosis." Big Data and Advanced Analytics, Proc. 2nd International Conference, BSUIR, Minsk. 2016.
18. Manoharan, Malini, et al. "A computational approach identifies immunogenic features of prognosis in human cancers." Frontiers in Immunology 9 (2018): 3017.
19. Lu, Zixiao, et al. "Deep-learning-based characterization of tumor-infiltrating lymphocytes in breast cancers from histopathology images and multiomics data." JCO Clinical Cancer Informatics 4 (2020): 480-490.
20. Kim, So-Woon, et al. "Distribution pattern of tumor infiltrating lymphocytes and tumor microenvironment composition as prognostic indicators in anorectal malignant melanoma." Modern Pathology 34.1 (2021): 141-160.
21. Maguire, Aoife, and Edi Brogi. "Sentinel lymph nodes for breast carcinoma: an update on current practice." Histopathology 68.1 (2016): 152-167.
ai_researcher
1
Optimised_Realistic_Test_Input_Generation.pdf
arXiv:2403.15065v1 [cs.SE] 22 Mar 2024

Testing for Fault Diversity in Reinforcement Learning

Quentin Mazouni
Simula Research Laboratory, Oslo, Norway
[email protected]

Helge Spieker
Simula Research Laboratory, Oslo, Norway
[email protected]

Arnaud Gotlieb
Simula Research Laboratory, Oslo, Norway
[email protected]

Mathieu Acher
Univ Rennes, Inria, INSA Rennes, CNRS, IRISA, Rennes, France
[email protected]

ABSTRACT
Reinforcement Learning is the premier technique to approach sequential decision problems, including complex tasks such as driving cars and landing spacecraft. Among the software validation and verification practices, testing for functional fault detection is a convenient way to build trustworthiness in the learned decision model. While recent works seek to maximise the number of detected faults, none consider fault characterisation during the search for more diversity. We argue that policy testing should not find as many failures as possible (e.g., inputs that trigger similar car crashes) but rather aim at revealing as informative and diverse faults as possible in the model. In this paper, we explore the use of quality diversity optimisation to solve the problem of fault diversity in policy testing. Quality diversity (QD) optimisation is a type of evolutionary algorithm to solve hard combinatorial optimisation problems where high-quality diverse solutions are sought. We define and address the underlying challenges of adapting QD optimisation to the test of action policies. Furthermore, we compare classical QD optimisers to state-of-the-art frameworks dedicated to policy testing, both in terms of search efficiency and fault diversity. We show that QD optimisation, while being conceptually simple and generally applicable, finds effectively more diverse faults in the decision model, and conclude that QD-based policy testing is a promising approach.

CCS CONCEPTS
• Software and its engineering → Software verification and validation; • Computing methodologies → Reinforcement learning.

KEYWORDS
Software Testing, Reinforcement Learning, Quality Diversity

ACM Reference Format:
Quentin Mazouni, Helge Spieker, Arnaud Gotlieb, and Mathieu Acher. 2024. Testing for Fault Diversity in Reinforcement Learning. In 5th ACM/IEEE International Conference on Automation of Software Test (AST 2024) (AST '24), April 15-16, 2024, Lisbon, Portugal. ACM, New York, NY, USA, 11 pages. https://doi.org/10.1145/3644032.3644458

1 INTRODUCTION
In the last decade, Reinforcement Learning (RL) combined with neural networks (NNs) was shown to be able to effectively solve complex sequential decision problems in various fields, such as planning, game playing or system control [10, 21, 28].
Deployment of such action policies in real-world applications requires strong software validation and verification practices. Among them, testing, which trades exhaustiveness for efficiency, helps build trust and confidence in the decision model. In that regard, a lot of work on testing learnt policies has recently emerged [14, 18, 29, 31]. Some of it [18, 31] further investigates the test results after the search: for example, Pang et al. [18] retrain the policy, and Zolfagharian et al. [31] extract interpretable rules from a Decision Tree to characterise the fault-revealing inputs. However, none of these works considers how the policy under test solves (or fails) the problem at each test case, nor do they look for diversity. Instead, they all maximise the number of faults found, whatever those faults reveal or mean. Still, fault diversity is important: to improve the model, to assess the range of possible incorrect decisions (policy explainability), and because diversity can also be seen as a test coverage measure, the latter being especially difficult to assess in NN-based policies. That way, stakeholders could accurately assess the safety of the NN-based policy and eventually build trust and trustworthiness.

Fortunately, an evolutionary search technique known as Quality Diversity (QD, or Illumination) addresses this very same issue. Indeed, QD optimisation finds both diverse and high-quality solutions for a given task. To do so, diversity is not computed in the search space but rather in a behaviour space that describes how a solution actually solves the task. Typical QD applications are in robotics, where the objective is to find the best action policies (i.e., that successfully solve the task, like getting out of a maze) while discovering as many behaviourally different policies as possible (e.g., how the robot leaves the maze).

In this paper, we propose to address the problem of fault diversity in policy testing with Quality Diversity. In other words, we investigate whether QD optimisation can find diverse faults in trained policies. In particular, we define and address the underlying challenges of adapting QD optimisation to policy testing. We compare how two existing QD optimisers solve the subsequent policy testing task as a QD problem against a state-of-the-art framework dedicated to policy testing, both in terms of search efficiency and fault diversity. Our results show that QD-based testing finds diverse solutions, without additional test budget cost. The contributions of this paper are thus:

• We propose the first reformulation of policy testing as a Quality Diversity optimisation problem.
• We implement our method with two classical Illumination algorithms.
• We compare the two QD-based frameworks to a SOTA framework dedicated to policy testing on three use-cases.

The rest of the paper is organised as follows. Section 2 describes current policy testing techniques and positions our study. Section 3 introduces Reinforcement Learning for decision making and Quality Diversity optimisation. Section 4 presents the challenges of solving the problem of fault diversity in policy testing within the Quality Diversity framework. Section 5 describes the empirical evaluation of our QD-based policy testing framework, implemented with 2 QD optimisers and compared to a dedicated policy testing technique. We discuss the current limitations in Section 6. Finally, Section 7 concludes the paper and draws some perspectives.
2 RELATED WORK
Policy testing has recently been addressed in numerous ways [15]. In the following, we describe the different testing objectives studied alongside the corresponding testing techniques proposed.

Zolfagharian et al. [31] test RL policies with a genetic algorithm [9], where test cases are individuals of a population. Here, the individuals are episodes and their genes, state-action pairs. Starting from historical data (e.g., training data of the policy tested), the search consists of evolving the population to generate faulty episodes which are likely to be consistent with the policy under test (i.e., the trajectories match the policy's decisions). While this approach lets it avoid executing the policy, it also requires that the resulting faulty episodes be validated with respect to the policy. Lu et al. [14] and Ul Haq et al. [29] investigate active policy testing, which consists of dynamically changing the simulator (which the policy under test interacts with) during executions. As such, the test cases are defined as the sequences of environmental changes applied to the simulator. They both turn the search problem into an RL task, where an agent learns to perturb the simulation to trigger hazardous decisions from the policy. More precisely, Lu et al. [14] address the case of an autonomous driving system (where the possible modelled perturbations include changing weather conditions and the dynamics of pedestrians and vehicles, e.g., their position or velocity), while Ul Haq et al. [29] consider the case of multiple testing requirements (i.e., many-objective search). Tappler et al. [24] look for states that trigger unsafe decisions, called boundary states. The crucial difference with all the other methodologies is that they do not search for those boundary states but, rather, retrieve them from the state space explored by an initial backtracking-based, depth-first search for a solution of the decision-making problem. This approach can thus be computationally expensive (depending on the difficulty of the decision task) and provides no guarantee of finding boundary states.

Fuzzing frameworks have also been recently proposed [4, 18, 22]. Pang et al. [18] consider seeds as initial situations of the decision-making task. Similarly to genetic searches, the search space is explored by mutating used seeds. Even though the search does not look for diversity, it accounts for novelty by maintaining the pool of seeds which produce uncovered state sequences. Precisely, they compute the likelihood of the latter (collected after each execution) with Gaussian Mixture Models [6] and keep a seed only if the likelihood is lower than a defined threshold. As for [22] and [4], they investigate the bug confirmation problem for NN-based action policies, which corresponds to finding avoidable failures. To bypass the oracle problem in such a situation, Steinmetz et al. [22] use heuristics known in classical AI planning, while Eniser et al. [4] rely on metamorphic testing. To do so, the authors design the metamorphic operations around state relaxation, a well-studied concept also taken from the AI planning community. Their idea is that a relaxed version of a given environment should represent an easier problem than the original one. Therefore, the policy under test contains a bug if it solves the original problem but fails to solve its "relaxed" counterpart. Besides, Tian et al. [25] and Pei et al. [19] consider white-box testing of image-input-based NNs.
In their work, the objective is to find behaviour inconsistencies, which is approached as a neuron-coverage-guided greedy search. They differ in their test oracles: Tian et al. [25] use metamorphic testing [1] to check that the model tested outputs the same action for morphed images, while Pei et al. [19] rely on differential testing [16] (i.e., several NNs are simultaneously tested and inconsistencies are detected when their decisions differ).

3 BACKGROUND
In this section, we introduce Reinforcement Learning to approach sequential decision-making, and Quality Diversity optimisation.

3.1 Reinforcement Learning for sequential decision-making
Informally, sequential decision-making refers to tasks that can be solved by a decision model in a step-by-step manner and which account for the dynamics of the environment [5]. This definition is very broad, and in the following we consider tasks that involve a single decision entity. As such, solving a sequential decision-making problem consists of initialising the world (or environment) to a particular starting situation and letting the decision model (or agent) interact with the former (e.g., in simulation) through a step-wise observation-decision-action process until a final state is reached: if the latter is satisfying, the agent has solved the task; otherwise, the agent fails. A typical example is path planning in robotics, where the agent is expected to safely reach a given position from an initial situation. Formally, these kinds of problems are formulated as Markov Decision Processes (MDPs), defined as 4-tuples ⟨𝑆, 𝐴, 𝑅, 𝑇⟩ where:

• 𝑆 is a set of states. Referred to as the observation space, it corresponds to what the agent can know about its environment.
• 𝐴 is the set of actions. Referred to as the action space, it specifies how the agent acts on its environment.
• 𝑅 : 𝑆 × 𝐴 ↦→ ℝ is the reward function. It reflects the agent's performance by associating any state-action pair with a numerical value.
• 𝑇 : 𝑆 × 𝐴 × 𝑆 ↦→ [0, 1] is the transition function, which is a probability distribution over the observation and the action space. It depicts which state the environment will transit to after an action is executed. The function is not known by the agent and governs the environment's dynamics.

Solutions to MDPs are called policies and denoted 𝜋; they are functions mapping from the observation space 𝑆 to the action space 𝐴. A common approach to training policies is Reinforcement Learning, a sub-field of Machine Learning which consists in learning from rewards/penalties [23]. Precisely, RL learns an optimal, or nearly-optimal, policy 𝜋 : 𝑆 × 𝐴 ↦→ [0, 1] that maximises the total expected discounted cumulative reward 𝑅𝑡 = Σ_{𝑡>0} 𝛾^{𝑡−1} 𝑟𝑡, where 0 < 𝛾 ≤ 1 is the discount factor. This parameter controls how the agent takes future rewards into account: a small value encourages the agent to maximise short-term rewards, whereas high values (usually close to 1) lead the agent to focus on maximising long-term rewards. In this work, we consider black-box testing, i.e., without access to the internals of the policy or the simulator, of deterministic decision models, which changes the previous definition to 𝜋 : 𝑆 ↦→ 𝐴. A short sketch of this interaction loop is given below, after which we introduce Quality Diversity.
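A minimal illustration of one episode under the Gymnasium-style API followed by the benchmarks of Section 5 (the environment name is illustrative, and `policy` stands for a trained, deterministic model):

```python
import gymnasium as gym

def run_episode(env, policy, gamma=0.99):
    """Roll out one episode of a deterministic policy pi: S -> A and
    return the discounted cumulative reward sum_{t>0} gamma^(t-1) r_t."""
    state, _ = env.reset(seed=0)  # fixed seed: deterministic execution
    total, discount, done = 0.0, 1.0, False
    while not done:
        action = policy(state)                                       # decision
        state, reward, terminated, truncated, _ = env.step(action)   # action
        total += discount * reward
        discount *= gamma
        done = terminated or truncated
    return total

env = gym.make("LunarLander-v2")
# `policy` would be the model under test, e.g., loaded from Stable-Baselines3.
```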
3.2 Quality Diversity
Informally, Quality Diversity optimisation stems from evolutionary algorithms, but provides a shift in methodology by not only considering the maximisation of a fitness function, but explicitly targeting the discovery of diverse solutions as characterised by their behaviour, i.e., the way a problem is solved. Formally, it assumes the objective function 𝑓 now returns a fitness value 𝑓𝑥 and a behavioural descriptor 𝑏𝑥 for any parameter 𝑥, i.e., 𝑓𝑥, 𝑏𝑥 ← 𝑓(𝑥). As previously mentioned, the behavioural descriptor describes how the solution solves the problem, while the fitness value quantifies how well it is solved. If we assume that we want to maximise the fitness function and define the behaviour space as 𝐵, then the goal of QD optimisation is to find for each point 𝑏 ∈ 𝐵 the parameters 𝑥 ∈ 𝑋 with the maximum fitness value:

∀𝑏 ∈ 𝐵, 𝑥* = arg max_𝑥 𝑓𝑥 | 𝑏𝑥 = 𝑏

The goal of QD is thus to return a collection of behaviourally distinct solutions, whose individuals are the best performers (also called elites) in their local behaviour area (or behaviour niches).

The first method in the QD paradigm is Novelty Search (NS) [11, 12], which entirely abandons the search for the objective function and only explores behavioural novelty. While later developments in QD reconsider the inclusion of the objective function into the search through competition between behaviourally similar solutions of different quality [7, 13], we see a parallel between the pure search for novelty and the search for fault-triggering inputs for policies. In the search for fault-triggering inputs, there is no clear objective function to maximise besides the binary test verdict of a successful or failing episode. There is no indication of the closeness to a failure, even though some existing works introduce surrogate objectives to help guide the search [31]. We therefore consider novelty search as a potentially interesting approach to explore the behaviour space without the necessity for guidance through an objective function.

In its recent variants, QD tends to discover as many diverse behaviours as possible, while improving their elite, i.e., the individual with the highest quality in each niche. The result of QD optimisation is a set of solutions, usually referred to as a collection, archive or container, which is structured through the niches of the behaviour space. During the optimisation process, the collection is filled with any candidate whose behaviour is novel enough, and lets the latter compete with the current collection's elite if its behaviour is deemed to belong to a niche already covered. As such, the collection of any QD algorithm structures its search, since it defines the behavioural neighbourhood of the parameters evaluated (i.e., how to decide whether two solutions have close enough behaviours to belong to the same behavioural area or niche). In the QD literature, one can distinguish two types of collections, namely structured and unstructured ones. In the former case, the behaviour space is discretised with a segmentation pattern into a grid, and each cell represents a niche (or behaviour descriptor location). This approach is for example implemented in the MAP-Elites algorithm [17], one of the most used QD optimisers. Here, the collection is a regular grid and the search aims at filling every cell of that grid with the best possible solution (a minimal sketch of such a grid archive is given below).
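As an illustration (ours, independent of any specific library), a MAP-Elites-style grid container over a two-dimensional behaviour space could look as follows; the number of bins and the behaviour bounds are placeholders:

```python
import numpy as np

class GridArchive:
    """MAP-Elites-style container: one elite per cell of a regular grid
    discretising a 2D behaviour space."""
    def __init__(self, bins=50, low=(0.0, 0.0), high=(1.0, 1.0)):
        self.bins, self.low, self.high = bins, np.array(low), np.array(high)
        self.elites = {}  # cell index -> (solution, fitness, behaviour)

    def _cell(self, behaviour):
        # Map a behaviour descriptor to its grid cell (niche).
        ratio = (np.asarray(behaviour) - self.low) / (self.high - self.low)
        idx = np.clip((ratio * self.bins).astype(int), 0, self.bins - 1)
        return tuple(idx)

    def try_add(self, solution, fitness, behaviour):
        """Add the candidate if its niche is empty or if it beats the
        current elite of that niche (local competition). For the testing
        reformulation of Section 4, where quality is minimised, the
        comparison would be reversed."""
        cell = self._cell(behaviour)
        if cell not in self.elites or fitness > self.elites[cell][1]:
            self.elites[cell] = (solution, fitness, behaviour)
            return True
        return False
```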
On the other hand, unstructured archives do not define niche locations before the optimisation starts; they rely on distance thresholds and/or local densities to assess the behavioural similarities and neighbourhoods between solutions.

Cully and Demiris [2] propose a unified definition of QD optimisation that we follow to introduce the general, high-level optimisation process in QD (see Algorithm 1). In their formulation, algorithms vary depending on (i) the type of container (i.e., how the data is gathered and ordered into a collection); (ii) the selection operator (i.e., how the solutions to be modified in the next generation are selected); and (iii) the type of scores computed for the container and the selection operator to work. Given those parameters and operators, the execution of a QD algorithm follows a 4-step loop until the budget is consumed:

• A new batch of candidates is produced from the individuals selected by the selection operator.
• The candidates are evaluated and their performance and descriptors recorded.
• Each candidate is possibly added to the container (according to the solutions already in the collection).
• The scores maintained by the container are eventually updated (if needed). Common scores are the novelty, the local competition, or the curiosity score. For more information about the different variants studied and the most widely used scores, see [2].

4 QD OPTIMISATION FOR POLICY TESTING
This section describes the challenges in optimising policy testing for fault diversity with Quality Diversity and how we address them. We consider a black-box setting, where neither the 𝑀𝐷𝑃 nor the policy under test 𝜋 can be directly inspected, but only inputs and outputs can be observed. The only assumption we take is that it is possible to instrument the 𝑀𝐷𝑃's simulator with an initial state and random seed.

Algorithm 1: 4-step QD algorithm for Policy Testing
Input: 𝑁: iteration budget; 𝑁init: number of initial iterations
Output: 𝐴: archive of diverse and high-performing solutions
 1: 𝐴 ← ∅  ⊲ empty archive of solutions
 2: for 𝐼 ← 0 to 𝑁 do
 3:   if 𝐼 < 𝑁init then
 4:     ⊲ start with random parents and offspring
 5:     𝑋parents ← random_solutions()
 6:     𝑋offspring ← random_solutions()
 7:   else
 8:     ⊲ 1. select individuals from the archive and/or the previous batch
 9:     𝑋parents ← select(𝐴, 𝑋offspring)
10:     ⊲ 2. create randomly modified copies of 𝑋parents (mutation and/or crossover)
11:     𝑋offspring ← variation(𝑋parents)
12:   end if
13:   for 𝑥 ∈ 𝑋offspring do
14:     ⊲ 3. generic QD: record the behaviour and quality of the candidate,
15:     ⊲    i.e., 𝑏𝑥, 𝑓𝑥 ← 𝑓(𝑥); 𝑥.store_score(𝑏𝑥, 𝑓𝑥); replaced here by:
16:     ⊲ 3*. initialise the 𝑀𝐷𝑃 with 𝑥 and characterise 𝑥 with (1) the behaviour
17:     ⊲     and fitness of 𝜋 and (2) the test oracle result
18:     𝑏𝑥, 𝑓𝑥, 𝑜𝑥 ← Evaluate(𝑀𝐷𝑃, 𝜋, 𝑥)
19:     𝑥.store_score(𝑏𝑥, 𝑓𝑥, 𝑜𝑥)
20:     ⊲ 4. attempt to add the candidate to the archive (local competition)
21:     attempt_to_add(𝑥, 𝐴)
22:     update_scores(parent(𝑥), 𝐴)  ⊲ the parent's scores might be updated
23:   end for
24:   update_scores(𝐴)  ⊲ possibly update the scores of all solutions (e.g., curiosity scores)
25: end for
26: return 𝐴
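The evaluation step at lines 18-19 of Algorithm 1 can be sketched as follows. This is an illustrative adaptation: the `options` reset hook is a hypothetical instrumentation mechanism, and the descriptor/oracle helpers are passed in as callables because their definitions are simulator-specific (cf. Section 5).

```python
def evaluate(env, policy, x, describe, is_failure):
    """Policy-testing evaluation (Algorithm 1, lines 18-19): initialise the
    simulator with candidate input x, roll out the policy under test, and
    return (behaviour descriptor, fitness, oracle verdict)."""
    # Hypothetical hook: the real instrumentation depends on the simulator.
    state, info = env.reset(options={"initial_state": x})
    trajectory, fitness, done = [state], 0.0, False
    while not done:
        action = policy(state)
        state, reward, terminated, truncated, info = env.step(action)
        trajectory.append(state)
        fitness += reward  # quality = accumulated reward of the policy under test
        done = terminated or truncated
    return describe(trajectory), fitness, is_failure(info)
```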
Solution Behaviour. Since we aim to find fault-triggering initial states of the MDP (under which 𝜋 fails), the search space corresponds to the parameter (or input) space of the MDP's simulator. The exact definition of such parameter spaces depends on the implementation of the simulator and the decision-making problem (see Section 5). Therefore, at first glance, the search space does not come with any behavioural definition directly: parameters like gravity or objects' positions do not behave in a specific way. This is not an issue, though, as we can still use the traditional behaviour space definition of QD. Indeed, we similarly want to characterise how the policy under test solves the MDP. The main difference is that in QD, the solutions evaluated are a policy's parameters (since the goal is to find good policies), whereas in our reformulation, 𝜋 is fixed and its behaviour depends on the initial scenario described by the solution evaluated. This allows us to leverage the QD literature's richness of behavioural policy analysis and use already proposed behaviour space definitions (see Section 5) to describe the solutions. That way, finding diverse solutions will exercise as many different policy behaviours as possible and, similarly, the fault-triggering ones will imply diverse hazardous decisions.

Solution Quality. The second challenge in the adaptation of QD for policy testing lies in the definition of the solutions' quality (or fitness). Indeed, in policy testing the quality of a test case (i.e., the evaluation of an execution) boils down to the boolean value of the test oracle (i.e., 𝜋 solves the task or fails), which is not informative enough. We instead define the quality of solutions as the accumulated reward of the policy under test (as in common QD applications, i.e., as if we were searching for policies) and define the optimisation task of QD as minimisation (i.e., accumulated reward minimisation).

Assumptions. In this first work, we fix the randomness effects during the simulation, letting the 𝑀𝐷𝑃, its simulator and all their executions be deterministic. As such, every test input (i.e., solution) generates a single trajectory and thus a single behaviour (and test result).

QD-based Policy Testing. Thanks to the flexibility of Quality Diversity, our proposal to optimise the search for fault-triggering simulator inputs with QD involves only a few changes to the high-level framework proposed by Cully and Demiris [2], as shown in Algorithm 1. The modifications are highlighted at lines 18-19 and mostly consist of changing how solutions are evaluated. First, a solution 𝑥 is used to initialise the 𝑀𝐷𝑃's simulator. Then, we let 𝜋 solve (or fail) the decision task and characterise 𝑥 with (1) the behaviour and fitness of 𝜋 and (2) the test oracle result. After the search, fault-triggering solutions (faults for short) can be retrieved from the archive by filtering it with the test oracle results.

5 EXPERIMENTAL EVALUATION
5.1 Research Questions
To evaluate QD optimisation for fault diversity in policy testing, we conduct experiments to answer three research questions (RQs):

RQ1 How efficient is QD optimisation compared to dedicated policy testing techniques?
RQ2 How does QD optimisation improve diversity?
RQ3 How does the behaviour space definition impact the performance of QD-based policy testing?

Answering the first two research questions will let us assess the benefits and cost of prioritising diversity. Here, we compare the number of faults revealed and their diversity between QD-based testing and testing without consideration of behaviours.
Finally, the last research question investigates the impact of the definition of the behaviour space, a key configuration parameter for QD optimisation. We expect that the number of faults found by QD-based testing will still be competitive and that it will improve fault diversity by accounting for the behaviour of the policy under test.

5.2 Experiments
To answer the RQs, we conduct experiments with three standard environments [27]: Lunar Lander, Bipedal Walker and Taxi. We compare Random Testing with MDPFuzz [18], a recent black-box policy testing technique for MDPs, and two implementations of our QD-based testing framework. The first one uses the QD optimiser MAP-Elites [17], and the second one, Novelty Search [11]. Random Testing will assess the difficulty of the testing task and act as a baseline against which to compare the other methodologies. We choose the policy-dedicated testing framework MDPFuzz since it addresses complex environments (which has not been shown for other frameworks such as STARLA [31]) and drives its fuzzing search towards uncovered state sequences, thus accounting for diversity. Finally, as part of our QD-based testing framework, we study MAP-Elites (ME), since it is one of the first and most conceptually simple Illumination algorithms, while Novelty Search (NS) will let us assess the relevance of accounting for the quality of the executions, as this algorithm emphasises diversity only.

5.2.1 Environments. The three selected environments are commonly used benchmarks in the RL literature.

Lunar Lander. This control problem consists in safely landing a spacecraft. We chose this environment since it has been used in QD optimisation and RL; in particular, a behaviour space has already been studied [26]. The spacecraft always starts at the top centre of the space and, similarly, the landing pad is always at the centre of the ground. The initial situations differ in the shape of the landscape (around the landing pad) and the initial force applied to the spacecraft. The policy controls the main and orientation engines of the spacecraft. Precisely, there are four possible actions: do nothing, fire the left orientation engine, fire the main engine, and fire the right orientation engine.

Bipedal Walker. This problem consists in piloting a 4-joint walker robot across an uneven landscape composed of obstacles like steps, pits, and stumps. We chose this environment to follow the evaluation of MDPFuzz [18] (enabling a fair comparison), but also because behaviour spaces have already been proposed [8]. The impact of these definitions on the results is studied in RQ3. In this problem, the walker always starts at the same position. The initial situations differ in the shape of the landscape (positions of the steps, the stumps and the pits). The action space is continuous. Precisely, the action of the policy is the motor speed values at the 4 joints of the robot, which are located at its hips and knees.

Taxi. In this classical environment, the policy navigates a grid world to pick up passengers and safely drop them off at their destinations [3]. Every test initiates a particular initial situation, i.e., the position of the passenger, the position of the taxi and the passenger's destination. The six actions available to this policy are the next taxi direction (going north, south, east or west) and interactions with the passenger (pick it up, drop it off).
We use a version of the Taxi environment with an enlarged 18x13 map, thus disabling the simple enumeration of all the MDP's possible states that is feasible for the standard 5x5 grid.

5.3 Metrics
To answer RQ1, we measure what we call the test efficiency, i.e., the number of distinct faults found over time. To answer RQ2, we study the diversity of testing (i.e., how a test methodology exercises the policy under test) and the diversity of the faults. We consider two metrics to measure diversity. First, we compare the behaviour coverage, that is, how many behaviours are discovered during testing. To do so, we follow the QD literature and count the number of bins filled in each result archive. This archive corresponds to the regular grid used by MAP-Elites. For fault diversity, only the bins filled by at least one fault-triggering solution are counted. To complement this method-space-based measure, we also analyse diversity with the final states of the simulations. The idea of the final state comparison is to make the result analysis more accessible and not rely only on a comparison in the method space (i.e., behaviour space). Indeed, the definition of the behaviour space is domain-dependent and can therefore vary. Complementing the behaviour space coverage with a final state diversity analysis will thus provide us with conclusions that are not bound to how behaviours are actually computed from the trajectories. We report final state diversity as the average distance of the 3 nearest neighbours in the solution sets, since this metric captures the sparseness of a data set (i.e., whether the set is composed of dense points or the points are smoothly distributed). Similarly to the first metric, fault diversity only considers the final states of fault-revealing executions (failure states). Eventually, to answer RQ3, we compare the effect of four behaviour spaces for the Bipedal Walker use-case on all the metrics mentioned above.

5.4 Implementation
We run all methods with a budget of 5000 tests and an initialisation phase of 1000 for MAP-Elites and MDPFuzz (following the evaluation in [18]). For Novelty Search, we use a population size of 100 and let the search iterate for 50 iterations. The result archives used to collect data rely on a regular grid of 50x50 bins. All the experiments were executed on a Linux machine (Ubuntu 22.04.3 LTS) equipped with an AMD Ryzen 9 3950X 16-Core processor and 32GB of RAM. We accounted for randomness effects by repeating all the experiments with 10 seeds, and we report the median results as well as the first and third quartiles. The source code of the experiments is available online¹.

¹https://github.com/QuentinMaz/QD_Based_Testing_RL

Test oracles. In Lunar Lander, a failure occurs if the lander crashes into the ground or moves outside the viewport. In Bipedal Walker, a failure occurs if the body of the robot collides with the ground. For the Taxi environment, a fault occurs in case of an illegal action (for instance, dropping the passenger off even though the taxi is still empty) or a collision (by moving into a wall).
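These oracles amount to boolean predicates over end-of-episode information. The sketch below is purely illustrative: the field names are invented placeholders, not the benchmarks' actual API.

```python
def lunar_lander_failed(final_info: dict) -> bool:
    # Placeholder fields: assumed to flag a crash or leaving the viewport.
    return final_info["crashed"] or final_info["outside_viewport"]

def bipedal_walker_failed(final_info: dict) -> bool:
    return final_info["hull_contact"]  # the robot's body touched the ground

def taxi_failed(final_info: dict) -> bool:
    # Illegal pick-up/drop-off, or a move into a wall.
    return final_info["illegal_action"] or final_info["hit_wall"]
```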
Input/Parameter sampling and mutation. For the Lunar Lander environment, the shape of the landscape is fixed by our assumption of deterministic environments. As such, the parameter space is the two-dimensional space [−1000, 1000]² that describes the possible initial forces applied to the spacecraft. The mutation operator slightly perturbs the original parameter (clipped, if needed). For the Bipedal Walker use-case, we follow the experimental settings of MDPFuzz [18] for both the parameter space and the mutation operator. Precisely, the parameter space encodes the types of obstacles (flat, pit, steps or stump) of the landscape as 15-size vectors whose values ∈ [0, 1, 2, 3], while the mutation operator randomly changes at least one of those values (clipped if needed). As mentioned in the aforementioned paper, the obstacles are sufficiently far away from each other that they can be passed by an optimal policy 𝜋*, i.e., all the problems are solvable. As for the last use-case, the parameter space encodes the initial positions of the taxi and the passenger as well as the destination; the map is static in this environment. The mutation operator increments or decrements one value (clipped if needed); for instance, the taxi would then start from a cell of the grid world close to the initial one. A sketch of these operators is given at the end of this subsection.

Behaviour Spaces. All the behaviour spaces studied are two-dimensional. For the Lunar Lander use-case, we use the behavioural definition proposed by [26]. It describes how the policy lands the spacecraft, as its horizontal position and vertical velocity when it first touches the ground. If the lander moves outside the viewport, we consider the last values observed. For Bipedal Walker, we follow a previous work that studies policy behaviour for this very same use-case [8]. In particular, they define a set of hand-designed behaviour descriptors (averaged over the observation state sequence) such as Distance, the walker's position relative to the goal; Hull angle, the body's angle of the agent; Torque, the force applied to the agent's hip and knee joints; Jump, which describes when both legs are simultaneously not in contact with the ground; and Hip angle/speed, the angle and speed values of the agent's hip joints, respectively. We define the behaviour space as the pair of the descriptors Distance and Hull angle. The effect of different descriptor pairs as behaviour spaces is addressed in RQ3 (see Subsection 5.7). As for the last use-case, the behaviour is defined as the two-dimensional point that counts the number of actions needed 1) to pick up the passenger and 2) to drop them off afterwards.

Models Under Test. For the Bipedal Walker and Lunar Lander experiments, we use the models freely available in the Stable-Baselines3 repository [20]. For the customised Taxi use-case, we train the agent via Q-Learning [30].

Hyperparameters. We follow the guidelines indicated in [18] to configure the Gaussian Mixture Models used by MDPFuzz. As for Novelty Search, it computes the novelty scores as the average distance of the 3 nearest neighbours, and the novelty threshold for updating its novelty archive has been set to 𝑡 = 0.9 for the Taxi use-case and 𝑡 = 0.005 for Bipedal Walker and Lunar Lander. Similarly to all the previously mentioned experimental parameters, they were obtained in a prior study.
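As announced above, a minimal sketch of the sampling and mutation operators (our illustration; the Gaussian perturbation scale for Lunar Lander is an assumption, since the text only says the parameter is "slightly" perturbed):

```python
import numpy as np

rng = np.random.default_rng()

def mutate_lunar_lander(force, scale=50.0):
    """Slightly perturb the 2D initial force, clipped to [-1000, 1000]^2."""
    return np.clip(force + rng.normal(0.0, scale, size=2), -1000.0, 1000.0)

def mutate_bipedal_walker(obstacles):
    """Randomly change one of the 15 obstacle-type values in {0, 1, 2, 3}."""
    mutant = obstacles.copy()
    i = rng.integers(len(mutant))
    mutant[i] = rng.integers(4)  # 0: flat, 1: pit, 2: steps, 3: stump
    return mutant

def mutate_taxi(params, low, high):
    """Increment or decrement one value (taxi/passenger position or destination),
    clipped to the map bounds."""
    mutant = params.copy()
    i = rng.integers(len(mutant))
    mutant[i] = np.clip(mutant[i] + rng.choice([-1, 1]), low[i], high[i])
    return mutant
```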
5.5 RQ1: Effective Fault Detection

We first investigate the efficiency of the approaches evaluated. Figure 1 shows the evolution of the number of distinct fault-triggering solutions.

Results. All the frameworks evaluated find more faults than Random Testing for the Bipedal Walker use-case. Precisely, at the end of testing, we report an average improvement of 30% and 56% for ME and NS, respectively, while MDPFuzz finds 17% more faults. Greater improvements of QD optimisation are observed for the Lunar Lander use-case (up to 206% and 152%), while MDPFuzz matches the baseline. We note that in the case of NS, the results vary considerably (see the shaded areas in Figure 1). The results for the last use-case show a different picture, though. Here, only ME beats Random Testing (up to 12%), while Novelty Search and MDPFuzz find 23% and 27% fewer faults, respectively.

Analysis. Searching for diversity can impede test efficiency, as Random Testing outperforms Novelty Search in the Taxi experiments. Furthermore, NS is the framework most sensitive to randomness, which is likely caused by its strong dependence on its initial population. It is therefore difficult to recommend as a general optimiser for efficient QD-based policy testing. However, accounting for both quality and diversity lets MAP-Elites systematically beat Random Testing, while showing less sensitivity to its initialisation. Besides, MDPFuzz does not seem to be suited to all the use-cases studied, since it is only able to compete with the efficiency of QD-based policy testing on the Bipedal Walker environment. While disappointing, those results are not completely surprising. Indeed, this framework drives its search with Gaussian Mixture Models (GMMs), whose parameters were studied for several applications, including Bipedal Walker. We therefore suspect that MDPFuzz ends up sharing its results with Random Testing on Lunar Lander because of a suboptimal GMM configuration. As for the Taxi use-case, we recall that MDPFuzz aims at testing policies that solve complex MDPs [18], which is definitely not the case for this toy problem.

Figure 2: Evolution of the behaviour space coverage over time as the number of behaviour niches (bins) illuminated during testing. In the second column, only bins filled by fault-triggering solutions are counted, i.e., faulty behaviours. The lines show the median results over 10 executions, and the shaded areas correspond to the first and third quartiles.

Figure 3: Final state diversity as the average distances of the 3 nearest neighbours. Since their scale depends on the observation space of each use-case, we report the relative performance of the methodologies to Random Testing. The lines show the median results over 10 executions.

Conclusion. The complexity of dedicated policy testing techniques such as MDPFuzz can hurt their efficiency, especially for smaller MDPs, while QD optimisation consistently finds the greatest number of faults in the decision model. Moreover, discarding solution quality in favour of novelty can lead to better, yet unstable, results. QD-based policy testing does not come with poorer efficiency (as we expected) but rather with a significant increase in the number of functional faults found in the model.

5.6 RQ2: Testing and Fault Diversity

Next, we study how QD optimisation improves diversity in terms of behaviours and final states. Figure 2 shows the behaviour and faulty behaviour coverage, and Figure 3 the average distances of the 3 nearest neighbours of the final states. Since the latter are distances between observation states, Figure 3 shows the performance of the methodologies evaluated relative to the Random Testing baseline.
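For reference, the sparseness statistic plotted in Figure 3 can be computed as follows. This is a minimal sketch of ours; we assume Euclidean distance between final observation states.

```python
import numpy as np

def sparseness(final_states, k=3):
    """Average distance to the k nearest neighbours, averaged over all
    final states: higher values indicate a sparser, more diverse set."""
    states = np.asarray(final_states, dtype=float)
    k = min(k, len(states) - 1)
    scores = []
    for i in range(len(states)):
        dists = np.linalg.norm(np.delete(states, i, axis=0) - states[i], axis=1)
        scores.append(np.sort(dists)[:k].mean())
    return float(np.mean(scores))

def relative_sparseness(method_states, random_states, k=3):
    """Performance relative to the Random Testing baseline, as reported."""
    return sparseness(method_states, k) / sparseness(random_states, k)
```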
Results. ME systematically improves behaviour discovery, ranging from 7% (Taxi) to 14% (Lunar Lander). MDPFuzz matches RT's performance for all the use-cases, as Novelty Search does, except for Lunar Lander, for which NS covers up to 20% more behaviour niches (slightly outperforming MAP-Elites). Regarding faulty behaviours, our results only show significant differences to Random Testing for MAP-Elites and Novelty Search on the Lunar Lander experiments. Precisely, they stand out after 2000 iterations and end up with 189% and 138% more faulty behaviours discovered, respectively. Similarly to the previous results, though, we note that NS's performance varies considerably. If we now look at final state diversity, we find mixed results. For Bipedal Walker, we observe that Novelty Search explores around 7.5% more diverse final states than the random baseline throughout testing. For Lunar Lander, none of the testing techniques significantly improves the baseline's results. Actually, the failure state distribution tends to be worse, especially for QD optimisation (up to a 30% decrease). For the last use-case, MDPFuzz outperforms all the other techniques with significant margins, with 40% greater distances in the final states (averaged throughout testing) and up to 50% sparser failure states. Meanwhile, while QD optimisation does not show a significant difference with the baseline for final state diversity, we observe great, yet unstable, improvements in its failure state results (as shown in the second chart of the bottom row in Figure 3). In particular, Novelty Search catches up with MDPFuzz's performance in the last iterations.

Figure 4: Impact of the behaviour space parameter for the Bipedal Walker experiments. The four spaces are different pairs of hand-designed descriptors studied in [8]. Each column displays the results for a behaviour space. From top to bottom: number of faults, number of behaviours and faulty behaviours, final and failure state diversity (FS and FFS) relative to Random Testing. The results are the medians of 10 executions.

Analysis. QD-based policy testing discovers more behaviours than both MDPFuzz and Random Testing. More importantly, in some cases (Lunar Lander), this also applies to fault-triggering solutions. In other words, QD-based policy testing finds more diverse faults in the model under test than dedicated testing techniques. It is interesting to note that the SOTA framework MDPFuzz does not cover fewer behaviours than Random Testing, something that one could have expected. Complementing the diversity measurement of our behavioural evaluation with final state analysis reveals interesting details. Indeed, discovering more behaviours does not translate into a sparser final state distribution, as the two QD-based frameworks either match or only slightly improve on Random Testing. This is especially the case for Lunar Lander, for which both ME and NS significantly improve faulty behaviour coverage (i.e., find diverse faults in the model tested) while showing poor failure state distributions. As such, the use of behaviour spaces helps policy testing, as simply looking at final states is not enough to cover diverse faults in the model.
If we consider the actual descriptors used to define the behaviour spaces, we can explain the lack of a general correlation between increased behaviour and final state discovery. Indeed, none of the descriptors depends solely on the termination of an execution to define the behaviour. For instance, in the Bipedal Walker experiments, the behaviours are averages of some of the observations' features. Interestingly, though, the descriptors defined for the Lunar Lander use-case let us better understand what type of faults every methodology tends to find. In particular, behaviours are based on the first contact of the spacecraft with the ground; but the policy can later fail if the landing was not on the targeted pad (in which case the lander has to be safely glided towards the latter). Therefore, comparing the results of our bi-metric evaluation, one interpretation is that the QD-based frameworks find solutions for which π lands the spacecraft at various positions (i.e., diverse behaviours) but later fails the task by colliding the lander's body with the same edge of the landscape (i.e., dense failure states), while MDPFuzz finds slightly more distinct edges (around 10% more distributed failure states) but the policy actually lands at similar positions (i.e., poor faulty behaviour discovery).

Conclusion. The ability of QD optimisation to find diverse high-quality solutions applies to policy testing, that is, fault-triggering inputs that exercise the model such that it fails with varied behaviours. As found in the previous analysis, discarding solution quality leads to unstable results, which is fixed by more balanced Quality Diversity optimisers such as MAP-Elites.

5.7 RQ3: Behaviour Space Impact

In this last research question, we investigate the effect of different behaviour spaces on the proposed QD-based policy testing framework. As previously mentioned, Gupta et al. [8] define several descriptors to characterise the behaviour of a policy. We use the ones introduced in Subsection 5.4 to define three additional behaviour spaces (as descriptor pairs) and study how the results of our approach can differ. Figure 4 summarises our findings, where each column corresponds to a behaviour space. Note that the first column corresponds to the results found above. In the following, we analyse the impact on each metric.

Test efficiency. ME improves the number of faults found compared to Random Testing regardless of the behaviour space used, though to an extent that can decrease to 11.5% (down from 30%, the first column of Figure 4). However, Novelty Search shows significantly higher sensitivity to the behaviour space. In particular, NS can almost double the number of faults found by Random Testing with the third behaviour definition (90%), but has around 20% poorer performance with an inappropriate space (as shown in the second column of Figure 4). The drastic difference in behaviour sensitivity of the two implementations again lies in the balance between how the quality and diversity of solutions are accounted for. Since MAP-Elites mostly relies on quality (by focusing on mutating its elites), the results are steadier and less sensitive to the exact behaviour definition.
In other words, as long as the behaviour space is able to capture diversity (which is the case here, as all the spaces are based on meaningful, hand-designed descriptors), MAP-Elites seems to be an efficient optimiser for QD-based policy testing. On the other hand, by discarding quality to account only for behaviour novelty, Novelty Search becomes more sensitive to the behaviour space used.

Behaviour Diversity. The number of behaviours discovered by each methodology is for the most part steady across the spaces evaluated and matches the performance of the baseline. We only observe that ME discovers around 8% more behaviours with two of the spaces (first and third columns in Figure 4). As for the diversity of faulty behaviours, we report no significant change compared to our previous findings, that is, all frameworks share similar figures. One can note that MDPFuzz has consistently lower numbers (10% on average over the spaces). While QD-based policy testing never impedes behaviour discovery, its ability to significantly improve on the random baseline depends on the behaviour space. For instance, we find that the behaviour of the policy tested is best captured by the first pair of descriptors.

Final State Diversity. The behaviour space definition does not affect the sparseness of the final states, which is hardly surprising since behaviours are computed as feature averages. Similarly, the relative performance of the methodologies to the baseline does not fluctuate significantly. In particular, Novelty Search shows an 8% to 12% smoother state distribution. Finally, if we only consider failure states (bottom row), we can see that there is no general trend in the results either, with minor fluctuations around the baseline, lower than 11%, throughout testing.

5.8 Summary

Our first and most important conclusion is that QD manages to find diverse faults in the decision model, despite the simplicity of our approach. In particular, it shows that complex, dedicated policy testing techniques such as MDPFuzz are not always needed. Second, we systematically observe a lack of consistency in the results of our framework when optimised with Novelty Search, especially when running close to our set test budget. As such, despite its outstanding performance in some cases, we recommend using our proposal with QD optimisers that account for quality and diversity in more balanced ways. Finally, the selection of the precise behaviour space can boost the performance of QD-based policy testing. We observe this for the Bipedal Walker use-case, where a well-chosen behaviour space substantially increases the number of faults detected. At the same time, the other behaviour spaces are all competitive and do not negatively impact the ability to reveal diverse faults.

6 THREATS TO VALIDITY

In the following, we discuss the limitations of our proposal as well as the threats to our experimental evaluation.

External Threats. A first threat to our evaluation is the policy under test used, which we mitigate by using the same model as previous works [18]. Similarly, there is some inherent bias to the results from the selected use-cases. In that regard, we consider three environments of various natures, two of them having already been studied in the QD and RL literature. Finally, we only evaluate one SOTA policy testing technique, namely MDPFuzz.
Given the space limitation, we decided to prioritise the number of use-cases, since this work primarily introduces Illumination optimisation to policy testing.

Construction Threats. In this first work, we assume deterministic executions with fixed randomness effects. While deterministic executions are a common approach for testing policies and their deployment, the challenge of handling MDP stochasticity lies in the fact that multiple executions of a particular solution would generate different trajectories and thus, possibly, different behaviours. Similarly, some of these executions might fail depending on the selected policy actions or state transitions in the MDP. We acknowledge that this is thus an important challenge in QD-based policy testing, which needs to be addressed in future work. Practically speaking, by fixing the random effects we reduce the search space to a subset of inputs that reveal a fault with the given random seed. We thereby limit the experimental evaluation to a subset of all possibly detectable faults in the policy.

7 CONCLUSION

This work introduces Quality Diversity (QD) for policy testing. QD is a flexible, black-box optimisation framework that optimises a population of individuals by considering both their behaviour, i.e., how they solve a given problem, and their quality, i.e., how well they solve it. We illustrate how to adapt QD to policy testing and propose the first formulation of diversity-oriented policy testing as an Illumination task. Precisely, we characterise test inputs with the behaviour of the policy under test, that is, how it solves or fails the test case. We implement our QD-based testing framework with two commonly used QD optimisers of different paradigms: the elitist MAP-Elites and the divergent Novelty Search algorithms. We perform experiments on three use-cases from the reinforcement learning literature, and compare QD-based testing to state-of-the-art policy testing and random testing as the state of the practice. Our results show that QD optimisation, while being a conceptually straightforward and easy-to-apply approach, not only improves fault diversity but also fault detection. We further assess the impact of the behaviour space definition, which we consider the most decisive parameter of our approach. With this first work, we open a new application area for Quality Diversity.

In future work we will address the inclusion of generic or learned behaviour spaces to reduce the initial effort to set up QD, and the handling of stochastic MDPs in the search space to further guide the search.

ACKNOWLEDGEMENTS

This work is funded by the Norwegian Ministry of Education and Research, the Research Council of Norway under grant number 324674 (AutoCSP), and is part of the RESIST_EA Inria-Simula associate team.

REFERENCES

[1] T.Y. Chen, S.C. Cheung, and S.M. Yiu. 1998. Metamorphic Testing: A New Approach for Generating Next Test Cases. Technical Report. Department of Computer Science, Hong Kong University of Science and Technology.
[2] Antoine Cully and Yiannis Demiris. 2018. Quality and Diversity Optimization: A Unifying Modular Framework. IEEE Transactions on Evolutionary Computation 22, 2 (2018), 245–259. https://doi.org/10.1109/TEVC.2017.2704781
[3] Thomas G. Dietterich. 2000. Hierarchical reinforcement learning with the MAXQ value function decomposition.
Journal of Artificial Intelligence Research 13 (2000), 227–303.
[4] Hasan Ferit Eniser, Timo P. Gros, Valentin Wüstholz, Jörg Hoffmann, and Maria Christakis. 2022. Metamorphic Relations via Relaxations: An Approach to Obtain Oracles for Action-Policy Testing. In Proceedings of the 31st ACM SIGSOFT International Symposium on Software Testing and Analysis. https://doi.org/10.1145/3533767.3534392
[5] Keith Frankish and William M. Ramsey (Eds.). 2014. The Cambridge Handbook of Artificial Intelligence. Cambridge University Press, Cambridge, UK.
[6] D. N. Geary. 2018. Mixture Models: Inference and Applications to Clustering. Journal of the Royal Statistical Society Series A: Statistics in Society 152, 1 (12 2018), 126–127. https://doi.org/10.2307/2982840
[7] Jorge Gomes, Pedro Mariano, and Anders Lyhne Christensen. 2015. Devising Effective Novelty Search Algorithms: A Comprehensive Empirical Study. In Proceedings of the 2015 Annual Conference on Genetic and Evolutionary Computation (Madrid, Spain) (GECCO '15). Association for Computing Machinery, New York, NY, USA, 943–950. https://doi.org/10.1145/2739480.2754736
[8] Vikas Gupta, Nathanael Aubert-Kato, and Leo Cazenille. 2020. Exploring the BipedalWalker Benchmark with MAP-Elites and Curiosity-Driven A3C. In Proceedings of the 2020 Genetic and Evolutionary Computation Conference Companion (Cancún, Mexico) (GECCO '20). Association for Computing Machinery, New York, NY, USA, 79–80. https://doi.org/10.1145/3377929.3389921
[9] John H. Holland. 1992. Genetic Algorithms. Scientific American (1992).
[10] Rushang Karia and Siddharth Srivastava. 2020. Learning Generalized Relational Heuristic Networks for Model-Agnostic Planning. CoRR (2020).
[11] Joel Lehman and Kenneth O. Stanley. 2008. Exploiting Open-Endedness to Solve Problems Through the Search for Novelty. In IEEE Symposium on Artificial Life. https://api.semanticscholar.org/CorpusID:2367605
[12] Joel Lehman and Kenneth O. Stanley. 2011. Abandoning objectives: Evolution through the search for novelty alone. Evolutionary Computation 19, 2 (2011), 189–223.
[13] Joel Lehman and Kenneth O. Stanley. 2011. Evolving a Diversity of Virtual Creatures through Novelty Search and Local Competition. In Proceedings of the 13th Annual Conference on Genetic and Evolutionary Computation (Dublin, Ireland) (GECCO '11). Association for Computing Machinery, New York, NY, USA, 211–218. https://doi.org/10.1145/2001576.2001606
[14] Chengjie Lu, Yize Shi, Huihui Zhang, Man Zhang, Tiexin Wang, Tao Yue, and Shaukat Ali. 2023. Learning Configurations of Operating Environment of Autonomous Vehicles to Maximize their Collisions. IEEE Transactions on Software Engineering (2023). https://doi.org/10.1109/TSE.2022.3150788
[15] Quentin Mazouni, Helge Spieker, Arnaud Gotlieb, and Mathieu Acher. 2023. A Review of Validation and Verification of Neural Network-based Policies for Sequential Decision Making. In Rencontres des Jeunes Chercheurs en Intelligence Artificielle (RJCIA). https://pfia23.icube.unistra.fr/conferences/rjcia/Actes/RJCIA2023_paper_5.pdf
[16] William M. McKeeman. 1998. Differential Testing for Software. Digit. Tech. J. (1998).
[17] Jean-Baptiste Mouret and Jeff Clune. 2015. Illuminating search spaces by mapping elites. arXiv:1504.04909 [cs.AI]
[18] Qi Pang, Yuanyuan Yuan, and Shuai Wang. 2022. MDPFuzz: Testing Models Solving Markov Decision Processes.
In Proceedings of the 31st ACM SIGSOFT International Symposium on Software Testing and Analysis. https://doi.org/10.1145/3533767.3534388
[19] Kexin Pei, Yinzhi Cao, Junfeng Yang, and Suman Jana. 2017. DeepXplore: Automated whitebox testing of deep learning systems. In Proceedings of the 26th Symposium on Operating Systems Principles. 1–18.
[20] Antonin Raffin. 2020. RL Baselines3 Zoo. https://github.com/DLR-RM/rl-baselines3-zoo.
[21] David Silver, Thomas Hubert, Julian Schrittwieser, Ioannis Antonoglou, Matthew Lai, Arthur Guez, Marc Lanctot, Laurent Sifre, Dharshan Kumaran, Thore Graepel, Timothy Lillicrap, Karen Simonyan, and Demis Hassabis. 2018. A general reinforcement learning algorithm that masters chess, shogi, and Go through self-play. Science (2018). https://doi.org/10.1126/science.aar6404
[22] Marcel Steinmetz, Daniel Fišer, Hasan Ferit Eniser, Patrick Ferber, Timo P. Gros, Philippe Heim, Daniel Höller, Xandra Schuler, Valentin Wüstholz, Maria Christakis, and Jörg Hoffmann. 2022. Debugging a Policy: Automatic Action-Policy Testing in AI Planning. Proceedings of the International Conference on Automated Planning and Scheduling (2022). https://doi.org/10.1609/icaps.v32i1.19820
[23] Richard S. Sutton and Andrew G. Barto. 2018. Reinforcement Learning: An Introduction. MIT Press.
[24] Martin Tappler, Filip Cano Córdoba, Bernhard K. Aichernig, and Bettina Könighofer. 2022. Search-Based Testing of Reinforcement Learning. In Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence, IJCAI 2022, Vienna, Austria, 23-29 July 2022, Luc De Raedt (Ed.). ijcai.org, 503–510. https://doi.org/10.24963/IJCAI.2022/72
[25] Yuchi Tian, Kexin Pei, Suman Jana, and Baishakhi Ray. 2018. DeepTest: Automated Testing of Deep-Neural-Network-Driven Autonomous Cars. In Proceedings of the 40th International Conference on Software Engineering. https://doi.org/10.1145/3180155.3180220
[26] Bryon Tjanaka, Sam Sommerer, Nikitas Klapsis, Matthew C. Fontaine, and Stefanos Nikolaidis. 2021. Using CMA-ME to Land a Lunar Lander Like a Space Shuttle. pyribs.org (2021). https://docs.pyribs.org/en/stable/tutorials/lunar_lander.html
[27] Mark Towers, Jordan K. Terry, Ariel Kwiatkowski, John U. Balis, Gianluca de Cola, Tristan Deleu, Manuel Goulão, Andreas Kallinteris, Arjun KG, Markus Krimmel, Rodrigo Perez-Vicente, Andrea Pierré, Sander Schulhoff, Jun Jet Tai, Andrew Tan Jin Shen, and Omar G. Younis. 2023. Gymnasium. https://doi.org/10.5281/zenodo.8127026
[28] Sam Toyer, Sylvie Thiébaux, Felipe Trevizan, and Lexing Xie. 2020. ASNets: Deep Learning for Generalised Planning. Journal of Artificial Intelligence Research 68 (2020).
[29] Fitash Ul Haq, Donghwan Shin, and Lionel C. Briand. 2023. Many-Objective Reinforcement Learning for Online Testing of DNN-Enabled Systems. In 2023 IEEE/ACM 45th International Conference on Software Engineering (ICSE). 1814–1826. https://doi.org/10.1109/ICSE48619.2023.00155
[30] Christopher J. C. H. Watkins and Peter Dayan. 1992. Q-learning. Machine Learning 8, 3 (01 May 1992), 279–292. https://doi.org/10.1007/BF00992698
[31] Amirhossein Zolfagharian, Manel Abdellatif, Lionel C. Briand, Mojtaba Bagherzadeh, and Ramesh S. 2023. A Search-Based Testing Approach for Deep Reinforcement Learning Agents. IEEE Transactions on Software Engineering 49, 7 (2023), 3715–3735. https://doi.org/10.1109/TSE.2023.3269804
ai_researcher
1
R-Bot_An_LLM-based_Query_Rewrite_System.pdf
arXiv:2111.14573v2 [math.RA] 16 Nov 2022

Polyfunctions over Commutative Rings

Ernst Specker¹, Norbert Hungerbühler², and Micha Wasem³

¹Dedicated to the memory of the first author
²Department of Mathematics, ETH Zürich, Rämistrasse 101, 8092 Zürich, Switzerland
³HTA Freiburg, HES-SO University of Applied Sciences and Arts Western Switzerland, Pérolles 80, 1700 Freiburg, Switzerland

November 17, 2022

Abstract

A function f : R → R, where R is a commutative ring with unit element, is called polyfunction if it admits a polynomial representative p ∈ R[x]. Based on this notion we introduce ring invariants which associate to R the numbers s(R) and s(R′; R), where R′ is the subring generated by 1. For the ring R = Z/nZ the invariant s(R) coincides with the number theoretic Smarandache or Kempner function s(n). If every function in a ring R is a polyfunction, then R is a finite field according to the Rédei–Szele theorem, and it holds that s(R) = |R|. However, the condition s(R) = |R| does not imply that every function f : R → R is a polyfunction. We classify all finite commutative rings R with unit element which satisfy s(R) = |R|. For infinite rings R, we obtain a bound on the cardinality of the subring R′ and for s(R′; R) in terms of s(R). In particular we show that |R′| ≤ s(R)!. We also give two new proofs for the Rédei–Szele theorem which are based on our results.

1 Introduction

For a commutative ring R with unit element, a function f : R → R is said to be a polyfunction if there exists a polynomial p ∈ R[x] such that f(x) = p(x) for all x ∈ R (see [11, 9], and also [1, 2] for a discussion of polyfunctions from Zm to Zn). The set of polyfunctions over R equipped with pointwise addition and multiplication forms a subring

G(R) := {f : R → R : ∃p ∈ R[x] ∀x ∈ R, p(x) = f(x)}

of R^R and will be called the ring of polyfunctions over R. The polynomials in R[x] which represent the zero element in G(R) are called null-polynomials (see [13]). If S is a subring of R, then

G(S; R) := {f : R → R : ∃p ∈ S[x] ∀x ∈ R, p(x) = f(x)}

is a natural subring of G(R). In particular, the subring R′ generated by the unit element 1 in R gives rise to the integer polyfunctions G(R′; R). Instead of restricting the ring of allowed coefficients as in the construction for G(S; R), one obtains other rings of polyfunctions by restricting the domain: the ring

{f : S → R : ∃p ∈ R[x] ∀x ∈ S, p(x) = f(x)}

e.g. contains G(R) as a subring.

If S is a subring of R, a characteristic number connected to S and R is the minimal degree m such that the function x ↦ x^m can be represented by a polynomial in S[x] of degree strictly smaller than m. Then, in particular, every function in G(S; R) has a polynomial representative of degree strictly less than m. We set

s(S; R) := min{m ∈ ℕ : ∃p ∈ S[x], deg(p) < m, ∀x ∈ R, p(x) = x^m}

and s(R) := s(R; R) for brevity. We set s(S; R) := ∞ if no function x ↦ x^m can be represented by a polynomial of degree strictly smaller than m. Trivially, we have s(S; R) ≥ s(T; R) ≥ s(R) whenever S ⊂ T are subrings of R. On the other hand, we will see in Section 3 that s(R′; R) is bounded in terms of s(R) whenever s(R) < ∞. Clearly, if two rings R1, R2 are isomorphic, then s(R1) = s(R2) and s(R′1; R1) = s(R′2; R2). In other words, R ↦ s(R) and R ↦ s(R′; R) are ring invariants.

The function s, which associates to a given ring R the number s(R) ∈ ℕ ∪ {∞}, has been introduced in [5] and is called Smarandache function. This naming stems from the fact that for all 2 ≤ n ∈ ℕ, the map n ↦ s(Z/nZ) coincides with the well-known number theoretic Smarandache or Kempner function s (see [5, Theorem 2]) defined by

s(n) := min{k ∈ ℕ : n | k!}    (1)

(see Lucas [8], Neuberg [10] and Kempner [6]). In fact, Legendre had already studied aspects of the function s(n): in [7] he showed that if n = p^µ for some prime p and 1 ≤ µ ∈ ℕ, then s(n) verifies

s(n) = µ(p − 1) + a_0 + a_1 + . . . + a_k,

where the numbers a_i are the digits of s(n) in base p, i.e. s(n) = a_k p^k + . . . + a_0 and 0 ≤ a_i < p. We refer to Dickson [3, p. 263–265] for the history of the function s(n).

In a finite field F, every function is a polyfunction, as a polynomial representative of a function f : F → F is, e.g., given by the Lagrange interpolation polynomial for f. This representation property characterizes finite fields among commutative rings with unit element (see [12]):

Theorem 1 (Rédei, Szele). If R is a commutative ring with unit element then R is a finite field if and only if every function f : R → R can be represented by a polynomial in R[x].

We will include two short alternative proofs of this theorem in Section 4. For finite fields F, one has s(F) = |F|, so in view of Theorem 1, it is natural to ask what can be said about commutative rings R with unit element for which s(R) = |R| holds true. Note that if R is a finite ring, it trivially holds that s(R) ≤ |R|, as the polynomial

p(x) = \prod_{y ∈ R} (x − y)

is a normed null-polynomial of degree |R|. The following theorem (restated below for the reader's convenience as Theorem 3) answers the above question and classifies all finite commutative rings R with unit element that satisfy s(R) = |R|:

Theorem. Let R be a finite commutative ring with unit element. Then, s(R) = |R| holds if and only if R is one of the following:

(a) R is a finite field, or
(b) R is Z4, or
(c) R is the ring ρ with four elements {0, 1, a, 1 + a} with 1 + 1 = 0 and a^2 = 0.

Remarks:

1. The ring ρ is not a field since it has zero divisors, and since it is of characteristic 2, it is not isomorphic to Z4.
2. Observe the similarity between this result and the fact that for n ≥ 2, the usual Smarandache function satisfies s(n) = n if and only if n is prime or n = 4.

Section 2 is devoted to the proof of this theorem. In Section 3 we discuss infinite rings and show that for an infinite commutative ring R with unit element and s(R) < ∞, we obtain an upper bound for |R′| and for s(R′; R) in terms of s(R), where R′ is the subring of R generated by 1. Finally, in Section 4, we give two proofs of Theorem 1: a direct one and one that is based on Theorem 3.

Throughout the article, n ≥ 2 will denote a natural number, Zn = Z/nZ is the ring of integers modulo n, and we write a | b if b is an integer multiple of a.

2 Polyfunctions over Finite Rings

Theorem 1 answered the question when a ring R has the property that every function f : R → R can be represented by a polynomial in R[x]. For finite rings a necessary (but not sufficient) condition for this property to hold is

s(R) = |R|    (2)

(see Theorem 3 below). In this section, we want to address the question for which finite rings equation (2) holds. The first step to answer this is the following proposition:

Proposition 2. If R is a commutative ring with unit element and with zero divisors then either

(a) there exist a, b ∈ R \ {0} with a ≠ b and ab = 0, or
(b) R is Z4, or
(c) R is the ring ρ with four elements {0, 1, a, 1 + a} with 1 + 1 = 0 and a^2 = 0.
Proof. Let us assume that in R the implication holds: if u, v ∈ R \ {0} and uv = 0, then it follows that u = v. Let a ∈ R \ {0} be a zero divisor: a^2 = 0. Thus, if x is an element in R with ax = 0, we have either x = 0 or x = a. Notice that for all u ∈ R we have a(au) = 0, and hence for all u ∈ R

au = 0 or a(u − 1) = 0.

Hence, we have only the four cases u = 0 or u = a or u = 1 or u = 1 + a. If 1 + 1 = a, then R = Z4; if 1 + 1 = 0, then R is the ring ρ in (c). ✷

We can now prove the main result of this section:

Theorem 3. Let R be a finite commutative ring with unit element. Then, s(R) = |R| holds if and only if R is one of the following:

(a) R is a finite field, or
(b) R is Z4, or
(c) R is the ring ρ with four elements {0, 1, a, 1 + a} with 1 + 1 = 0 and a^2 = 0.

Proof. If R is not a field and not Z4 and not the ring ρ, then, according to Proposition 2, R is a ring with a, b ∈ R \ {0} such that ab = 0 and with a ≠ b. Then

(x − a)(x − b) \prod_{z ∈ R \ {a, b, 0}} (x − z)

is a normed null-polynomial of degree |R| − 1. Therefore s(R) < |R|. To prove the opposite direction, we go through the three cases:

(a) If R is a field, then a polynomial of degree n has at most n roots. Hence, s(R) = |R|.
(b) If R is Z4, then (by [5, Theorem 2]) s(Z4) = s(4) = 4 = |Z4|.
(c) If R is the ring ρ with elements {0, 1, a, 1 + a} and with 1 + 1 = 0 and a^2 = 0, we have to prove that s(R) = 4. Assume by contradiction that p(x) ∈ R[x] is a normed null-polynomial of degree 3. Since p(0) = p(1) = 0, p(x) must be of the form p(x) = x(x + 1)(ξ + x). From p(a) = 0 it follows that aξ = 0, and from p(a + 1) = 0 it subsequently follows that a = 0, which is a contradiction. ✷

3 Infinite Rings

In this section R is a commutative ring with unit element and R′ the subring of R which is generated by 1. We will need the following lemma, which is a corollary of [5, Lemma 4, p. 4]:

Lemma 4. For all k, n ∈ ℕ ∪ {0}, k ≤ n, we have

\sum_{j=0}^{n} (-1)^{n-j} \binom{n}{j} j^k = \delta_{kn} \, n!

(with the convention 0^0 := 1).

Proposition 5. If s(R) < ∞ then R′ is a finite ring and |R′| divides s(R)!.

Remark: We notice that s(R) < ∞ may hold even if R is an infinite ring. As an example consider the ring

R = Z2[x1, x2, . . .]/{x1^2, x2^2, . . .}

in which all u ∈ R satisfy the relation u^4 = u^2. On the other hand, if R is finite, we trivially have s(R) ≤ |R|.

Proof of Proposition 5. By assumption, for n = s(R) there exist coefficients a_i ∈ R, i ∈ {0, 1, . . . , n − 1}, such that for all u ∈ R we have

u^n − \sum_{i=0}^{n-1} a_i u^i = 0.    (3)

We denote 1 + 1 + . . . + 1 ∈ R′ (m times) by \bar{m}. Then, by Lemma 4, we have for k ≤ n

\sum_{j=0}^{n} (-1)^{n-j} \binom{n}{j} \bar{j}^k = \delta_{kn} \, \overline{n!}.    (4)

Hence, it follows from (3) that

0 = \sum_{j=0}^{n} (-1)^{n-j} \binom{n}{j} \Big( \bar{j}^n − \sum_{i=0}^{n-1} a_i \bar{j}^i \Big) = \sum_{j=0}^{n} (-1)^{n-j} \binom{n}{j} \bar{j}^n − \sum_{i=0}^{n-1} a_i \sum_{j=0}^{n} (-1)^{n-j} \binom{n}{j} \bar{j}^i = \overline{n!},

where the last equality follows from (4). ✷

Remark: As the example R = Z_{n!} shows, the estimate on the size of R′ emerging from Proposition 5, |R′| ≤ s(R)!, cannot be improved in general.
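The Smarandache–Kempner function from equation (1) is easy to compute, and for R = Z/nZ (where R′ = R and s(R) = s(n)) it illustrates the divisibility in Proposition 5 directly. The following small check is our own verification aid, not part of the paper:

```python
from math import factorial

def kempner(n):
    """s(n) = min{k : n divides k!}, cf. equation (1)."""
    f, k = 1, 1
    while f % n != 0:
        k += 1
        f *= k
    return k

# For R = Z/nZ one has R' = R and s(R) = s(n), so the conclusion of
# Proposition 5, |R'| divides s(R)!, reads n | kempner(n)!, which
# holds by the very definition of kempner.
for n in range(2, 1000):
    assert factorial(kempner(n)) % n == 0

print(kempner(4), kempner(10), kempner(16))  # 4, 5, 6
```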
Lemma 6. If n := s(R) < ∞ then there exists a bound Λ = (n!)^{n(2n)^n} for the cardinality of the orbits of the elements of R, i.e., for all u ∈ R there holds |{u^k : k ∈ ℕ}| ≤ Λ.

Proof. As in the previous proof, we adopt (3). For k ∈ ℕ let

M_k := \{ \prod_{i=0}^{n-1} a_i^{\varepsilon_i} : \varepsilon_i ∈ \{0, 1, \ldots, k\} \},
N_k := \{ \sum_{\mu ∈ M_k} \bar{r}_\mu \mu : r_\mu ∈ \{0, 1, \ldots, n! − 1\} \}.

Observe that |M_k| ≤ (k + 1)^n and |N_k| ≤ (n!)^{|M_k|}. By Proposition 5 it follows that for a, b ∈ N_k, the sum a + b also belongs to N_k. On the other hand, by applying (3) to u = a_j^2, j ∈ {0, 1, . . . , n − 1}, we obtain

a_j^{2n} = \sum_{i=0}^{n-1} a_i a_j^{2i},

and hence N_k = N_{k−1} for k ≥ 2n. It follows for all u ∈ R and all k ∈ ℕ that u^k is of the form

u^k = \sum_{i=0}^{n-1} \mu_i(k) u^i

for certain coefficients \mu_i(k) ∈ N_{2n−1}, and hence |{u^k : k ∈ ℕ}| ≤ |N_{2n−1}|^n ≤ Λ. ✷

Theorem 7. If n := s(R) < ∞ then s(R′; R) ≤ lcm(Λ) + Λ, where Λ = (n!)^{n(2n)^n}.

Remarks:

(a) Here lcm(n) denotes the least common multiple of the numbers in the set {1, 2, . . . , n}.
(b) Since R′ is contained in every subring T (with 1) of R, the given bound also holds for s(T; R).

Proof of Theorem 7. By Lemma 6, there exist for arbitrary u ∈ R integers l < k ≤ Λ + 1 such that u^k = u^l. Thus, we have

u^{lcm(Λ)+Λ} = u^{lcm(Λ)+Λ − (lcm(Λ)/(k−l))·(k−l)} = u^Λ. ✷

We conclude this section with an example of a ring R which has the property that s(R) < s(R′; R).

Example: Let R = Z2[x]/{x^3 + x^4}. The following lemma shows that for this particular ring s(R) ≤ 4.

Lemma 8. For all polynomials P ∈ Z2[x] we have that

xP + (1 + x)P^2 + P^4 ≡ 0 mod (x^3 + x^4).

Proof. We first consider the special case P(x) = x^m. We have to show that

x·x^m + (1 + x)x^{2m} + x^{4m} = x^{m+1} + x^{2m} + x^{2m+1} + x^{4m} ≡ 0 mod (x^3 + x^4).

This is readily checked:

m = 0 : x + 1 + x + 1 ≡ 0 mod (x^3 + x^4)
m = 1 : x^2 + x^2 + x^3 + x^4 ≡ 0 mod (x^3 + x^4)
m ≥ 2 : x^3 + x^3 + x^3 + x^3 ≡ 0 mod (x^3 + x^4)

Now, for arbitrary P, the claim follows by additivity in Z2[x]:

x(P1 + P2) + (1 + x)(P1 + P2)^2 + (P1 + P2)^4 = \sum_{i=1}^{2} \big( xPi + (1 + x)Pi^2 + Pi^4 \big). ✷

Remark: We leave it to the reader to verify that in fact s(R) = 4.

Now, we show that s(R′; R) ≥ 6.

Lemma 9. Let a_i ∈ Z2 be such that \sum_{i=0}^{5} a_i u^i = 0 in R for all u ∈ R. Then a_0 = · · · = a_5 = 0.

Proof. First, by choosing u to be the class of x in R (which we denote by \bar{x}), we obtain

a_0 + a_1 \bar{x} + a_2 \bar{x}^2 + (a_3 + a_4 + a_5) \bar{x}^3 = 0 in R,

and hence we conclude that a_0 = a_1 = a_2 = 0 and a_3 + a_4 + a_5 = 0. Next, we choose u to be the class of 1 + x in R. Observing that

(1 + \bar{x})^3 = 1 + \bar{x} + \bar{x}^2 + \bar{x}^3 in R
(1 + \bar{x})^4 = 1 + \bar{x}^4 = 1 + \bar{x}^3 in R
(1 + \bar{x})^5 = 1 + \bar{x} in R

we have

0 = a_3 u^3 + a_4 u^4 + a_5 u^5 = (a_3 + a_4 + a_5) + (a_3 + a_5) \bar{x} + a_3 \bar{x}^2 + (a_3 + a_4) \bar{x}^3 in R,

which immediately implies that a_3 = a_4 = a_5 = 0. This completes the proof. ✷

Finally we prove that s(R′; R) = 6.

Lemma 10. For all u ∈ R it holds that u^3 + u^4 + u^5 + u^6 = 0 in R.

Proof. Let u be the class of a polynomial P ∈ Z2[x] in R.

First case: P(0) = 0. In this case, we have

P(x) = xQ(x)
P^2(x) ≡ x^2 Q^2(x) mod (x^3 + x^4)
P^3(x) ≡ x^3 Q^3(x) ≡ x^3 Q(1) mod (x^3 + x^4)
P^4(x) ≡ x^4 Q^4(x) ≡ x^3 Q(1) mod (x^3 + x^4)

and hence P^3(x) ≡ P^4(x) mod (x^3 + x^4). This proves the claim in this case.

Second case: P(0) = 1. In this case, we have

P(x) = 1 + xQ(x)
P^2(x) ≡ 1 + x^2 Q^2(x) mod (x^3 + x^4)
P^3(x) ≡ (1 + xQ(x))(1 + x^2 Q^2(x)) ≡ 1 + xQ(x) + x^2 Q^2(x) + x^3 Q(1) mod (x^3 + x^4)
P^4(x) ≡ 1 + x^4 Q^4(x) ≡ 1 + x^3 Q(1) mod (x^3 + x^4)
P^5(x) ≡ (1 + xQ(x))(1 + x^3 Q(1)) ≡ 1 + xQ(x) ≡ P(x) mod (x^3 + x^4)

which allows one to verify the claim easily. ✷
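Since P ↦ P mod (x^3 + x^4) is a ring homomorphism, Lemmas 8 and 10 only need to be checked on the 16 residue classes of R = Z2[x]/(x^3 + x^4). The following brute-force check is our own verification aid; it uses the reduction x^4 = x^3 in R (hence x^k = x^3 for all k ≥ 3):

```python
from itertools import product

# Elements of R = Z2[x]/(x^3 + x^4) as coefficient 4-tuples (1, x, x^2, x^3).
ZERO, ONE = (0, 0, 0, 0), (1, 0, 0, 0)
X, ONE_PLUS_X = (0, 1, 0, 0), (1, 1, 0, 0)

def reduce_(coeffs):
    """Reduce a Z2[x] coefficient list modulo x^3 + x^4: since
    x^4 = x^3 in R, every x^k with k >= 3 collapses onto x^3."""
    out = [0, 0, 0, 0]
    for deg, c in enumerate(coeffs):
        out[min(deg, 3)] ^= c & 1
    return tuple(out)

def add(p, q):
    return tuple(a ^ b for a, b in zip(p, q))

def mul(p, q):
    prod = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            prod[i + j] ^= a & b
    return reduce_(prod)

def power(p, e):
    r = ONE
    for _ in range(e):
        r = mul(r, p)
    return r

for u in product((0, 1), repeat=4):  # all 16 residue classes
    # Lemma 8: x*P + (1 + x)*P^2 + P^4 = 0 in R.
    lhs = add(add(mul(X, u), mul(ONE_PLUS_X, power(u, 2))), power(u, 4))
    assert lhs == ZERO
    # Lemma 10: u^3 + u^4 + u^5 + u^6 = 0 in R.
    total = ZERO
    for e in (3, 4, 5, 6):
        total = add(total, power(u, e))
    assert total == ZERO
print("Lemmas 8 and 10 verified on all 16 elements of R.")
```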
4 Two Alternative Proofs of the Rédei–Szele Theorem

We start with a short direct proof of Theorem 1. Let R be a commutative ring with unit element. One implication is immediate: Assume that R is a finite field and f : R → R. Then the Lagrange interpolation polynomial

p(x) = \sum_{y ∈ R} f(y) p_y(x), where p_y(x) = \prod_{z ∈ R \ {y}} (x − z) \cdot \Big( \prod_{z ∈ R \ {y}} (y − z) \Big)^{-1},

represents f. For the opposite implication, we assume that every function f : R → R can be represented by a polynomial in R[x]. In particular, for the function

f(x) := −1 if x = 0, and 0 if x ≠ 0,

there exists a representing polynomial

\sum_{k=0}^{n} a_k x^k = f(x) for all x ∈ R.

Since a_0 = f(0) = −1, it follows that

x \sum_{k=1}^{n} a_k x^{k-1} = \sum_{k=1}^{n} a_k x^k = 1 for all x ∈ R \ {0},

i.e., \sum_{k=1}^{n} a_k x^{k-1} = x^{-1}. Hence, R is a field. Moreover, for all x ∈ R,

0 = x f(x) = \sum_{k=0}^{n} a_k x^{k+1}.    (5)

The right hand side of (5) is a polynomial of degree n + 1 which (in the field R) has at most n + 1 roots. Hence, |R| ≤ n + 1. ✷

A second alternative proof uses the characterization of the rings for which s(R) = |R| (see Theorem 3). This condition is necessary for the property that all functions from R to R have a polynomial representative. In order to rule out the case R = Z4, we use the following formula from [4, Theorem 6, p. 9]: If p is a prime number and m ∈ ℕ, the number of polyfunctions over Z_{p^m} is given by

Ψ(p^m) := |G(Z_{p^m})| = exp_p\Big( \sum_{k=1}^{m} s(p^k) \Big).

Here s denotes the usual number theoretic Smarandache function (see equation (1)), and exp_p(q) := p^q for better readability. It follows that there are Ψ(4) = Ψ(2^2) = 2^{2+4} = 64 polyfunctions over Z4, but the number of functions from Z4 to Z4 equals 4^4 = 256. The case R = ρ is ruled out by explicit verification that

f(x) = 0 for x ≠ 0, and f(x) = 1 for x = 0,

is not a polyfunction over ρ: Since s(ρ) = 4, it is enough to show that no polynomial p ∈ ρ[x] of degree ≤ 3 represents f. Suppose there is

p(x) = \sum_{k=0}^{3} a_k x^k

representing f. Then p(0) = a_0 = 1 and p(a) = 1 + a_1 a = 0, which implies that a_1 a = 1, which is impossible since a does not have a multiplicative inverse. ✷

References

[1] M. Bhargava: Congruence preservation and polynomial functions from Zn to Zm. Discrete Math. 173 (1997), no. 1–3, 15–21.
[2] Z. Chen: On polynomial functions from Zn to Zm. Discrete Math. 137 (1995), no. 1–3, 137–145.
[3] L. E. Dickson: History of the Theory of Numbers, vol. 1. Carnegie Institution of Washington Publication, 1919.
[4] N. Hungerbühler, E. Specker: A generalization of the Smarandache function to several variables. Integers 6 (2006): Paper A23, 11 p.
[5] N. Hungerbühler, E. Specker, M. Wasem: The Ring of Polyfunctions over Z/nZ. Comm. Algebra, published online: 17 July 2022. https://doi.org/10.1080/00927872.2022.2092628
[6] A. J. Kempner: Concerning the smallest integer m! divisible by a given integer n. Amer. Math. Monthly 25 (1918), 204–210.
[7] A. M. Legendre: Essai sur la théorie des nombres, 2nd edition, Paris: Courcier, 1808.
[8] E. Lucas: Question 288. Mathesis 3 (1883), 232.
[9] G. Mullen, H. Stevens: Polynomial functions (mod m). Acta Math. Hungar. 44 (1984), no. 3–4, 237–241.
[10] J. Neuberg: Solutions de questions proposées, Question 288. Mathesis 7 (1887), 68–69.
[11] L. Rédei, T. Szele: Algebraisch-zahlentheoretische Betrachtungen über Ringe. I. Acta Math. 79 (1947), 291–320.
[12] L. Rédei, T. Szele: Algebraisch-zahlentheoretische Betrachtungen über Ringe. II. Acta Math. 82 (1950), 209–241.
[13] D. Singmaster: On polynomial functions (mod m). J. Number Theory 6 (1974), 345–352.
ai_researcher
2
Do_large_language_models_“understand”_their_knowledge.pdf
Defining Knowledge: Bridging Epistemology and Large Language Models

Constanza Fierro† Ruchira Dhar†‡ Filippos Stamatiou‡ Nicolas Garneau† Anders Søgaard†‡
†Department of Computer Science, University of Copenhagen
‡Center for Philosophy in Artificial Intelligence, University of Copenhagen

arXiv:2410.02499v1 [cs.CL] 3 Oct 2024

Abstract

Knowledge claims are abundant in the literature on large language models (LLMs); but can we say that GPT-4 truly "knows" the Earth is round? To address this question, we review standard definitions of knowledge in epistemology and we formalize interpretations applicable to LLMs. In doing so, we identify inconsistencies and gaps in how current NLP research conceptualizes knowledge with respect to epistemological frameworks. Additionally, we conduct a survey of 100 professional philosophers and computer scientists to compare their preferences in knowledge definitions and their views on whether LLMs can really be said to know. Finally, we suggest evaluation protocols for testing knowledge in accordance with the most relevant definitions.

1 Introduction

NLP researchers have used the term knowledge somewhat haphazardly in the context of large language models (LLMs), e.g., discussing "knowledge contained in language models" (Jiang et al., 2020), their "knowledge gaps" (Feng et al., 2024b), how "LLMs encode knowledge" (Farquhar et al., 2023), and "model's internal knowledge" (Kassner et al., 2023). Petroni et al. (2019) defined an LLM to know a fact if it correctly completes a cloze sentence such as "The capital of Germany is __"; such sentences are typically generated directly from so-called knowledge graphs. Many have evaluated knowledge in this way (Jiang et al., 2020; Paik et al., 2021; Dai et al., 2022; Kassner et al., 2020, 2021a; Keleg and Magdy, 2023, inter alia). However, the predictions of semantically equivalent cloze sentences can be inconsistent1 (Elazar et al., 2021; Kassner and Schütze, 2020; Fierro and Søgaard, 2022), leading us to question the meaningfulness of knowledge claims. Should we then require an LLM to predict correctly all the paraphrases of a given fact to say it knows it? What about related facts? Can we really say that an LLM knows that 'Lionel Messi plays for Inter Miami' if it does not know that 'Lionel Messi resides in Miami'? What, then, are sufficient conditions for saying an LLM knows? Or more generally, can LLMs know anything? That is: Can LLMs have bona fide knowledge?

*Correspondence: Constanza Fierro <[email protected]>, Ruchira Dhar <[email protected]>.
1An LLM may predict Berlin in the above, but Hamburg for "The city which is the capital of Germany is called __".

Figure 1: From our survey (§4): Philosophers and computer scientists prefer different definitions of knowledge.

Whether LLMs know, or in what sense, depends on how knowing is defined. Determining what internal knowledge LLMs possess could have important implications for their trustworthiness, as knowledge modulates our trust in agents (Hardwig, 1991; Pederneschi, 2024). We tend to lose trust in others when they do not appear to know what we consider basic facts. Furthermore, studying knowledge in LLMs could potentially have implications for epistemology itself (Cappelen and Dever, 2021).

Recent works have approached the question of how to define knowledge, considering additional requirements for determining what an LLM knows. Some require correct predictions across paraphrases (De Cao et al., 2021; Zhong et al.,
Some require correct predictions across paraphrases (De Cao et al., 2021; Zhong et al., tb-knowledgej-knowledgeg-knowledgev-knowledgep-knowledgeDefinitions of Knowledge0.00.10.20.30.40.50.6Percentage Agreement(4-5 on Likert scale)Agreement on Definitions of Knowledge by ProfessionPhilosophersComputer Scientists tb-knowledge j-knowledge g-knowledge v-knowledge p-knowledge p is known if and only if p is true, and p is believed+ p is true, p is believed, and p is justified p is known sui generis p is inferred with intellectual virtue p is believed and facilitates correct predictions Philosopher Sartwell (1992) Nozick (2000) Williamson (2005) Zagzebski (1999) Austin (2000) Table 1: Five standard definitions of knowledge in philosophy, i.e., knowledge-that p (where p is a proposition). The naming is arbitrary and motivated by keywords. See Appendix A for formalizations in epistemic modal logic. 2023b), and others additionally require correct pre- dictions on logically derived facts (Kassner et al., 2021b; Cohen et al., 2024). However, so far, NLP research has approached knowledge claims in a somewhat arbitrary manner, driven by what seems to make sense intuitively when discussing knowl- edge. Since philosophy has long tried to define what it means to know, we turn to epistemology to better ground our definitions of knowledge for LLMs. Contributions We survey the most commonly used definitions of knowledge in epistemology, and discuss and formalize how to map these definitions to LLMs. We compare current research of knowl- edge in LLMs to our formal definitions, identi- fying shortcomings in evaluation practices. We present the results of our survey to philosophers and computer scientists about their views on LLMs and knowledge, finding disagreements about when LLMs can be said to know. These disagreements seem to arise from adherence to slightly different definitions of knowledge (Figure 1). Finally, we provide protocols that follow the epistemological definitions for evaluating and testing knowledge in LLMs. We hope that the connection we provide to epistemology can inform better evaluations and claims regarding knowledge in LLMs. 2 Definitions of Knowledge While the NLP research community’s use of the word knowledge has been somewhat unclear, in philosophy there is a long tradition of trying to pin down exactly what is involved in knowledge claims. Knowledge – or propositional knowledge,2 to be precise – is what is at stake when we say that ‘x knows that p’ where x is an entity whose knowledge is under question, and p is a declara- 2Knowledge is not always propositional; there is also what is referred to as knowledge-how, which is related to perfor- mance, i.e., knowing how to perform an action (Ryle, 1949). tive statement.3 But what are the necessary and sufficient conditions for knows here? We review 5 definitions of knowledge (see Table 1),4 and we interpret and formalize a corresponding definition for LLMs. In §3, we discuss if these definitions are used in the LLM literature, and whether evaluating knowledge claims under them is feasible or not. 2.1 True beliefs (tb-knowledge) Sartwell (1992) defines knowledge as a belief that is true, that is ‘x believes that p’ and ‘p is true’. Mary can on this account believe the capital of Ger- many is Hamburg, but since Hamburg is not the capital of Germany (Berlin is), Mary cannot be said to know that the capital of Germany is Hamburg. 
Sartwell argues that there is no need for more re- quirements for what is knowledge, as long as one has a solid definition of belief. A lucky guess does not qualify as knowledge because, in Sartwell’s view, a guess is not a belief. Sartwell (1992) re- quires, in his definition of beliefs, that beliefs are coherent. As Sartwell puts it, “no belief stands in isolation; I cannot have the belief that Goldbach’s conjecture is true and fail to have any related be- liefs. The belief is constituted as a belief within a system of beliefs.” Thus we define, Definition 2.1 (belief). An LLM M believes p ⇐⇒ p is assigned high confidence.5 Definition 2.2 (belief+). Let p, q be propositions. A proposition p is believed+ ⇐⇒ 3If, for example, x= “John” and p=“Berlin is the capital of Germany”, we can say that x knows p, if John knows the fact that Berlin is the capital of Germany. 4We have selected five popular epistemological definitions of knowledge, which are among the most common and formal. However, we acknowledge that other perspectives on episte- mological knowledge exist. Nonetheless, we believe these five definitions can serve as a solid foundation. 5This does not simply refer to the output probability as- signed to the proposition p, as most models could assign fairly high probability to any grammatical sentence, but rather to M assigning high confidence to p relative to other values that p could take. 1. p is believed. 2. ∀q st. p =⇒ q, then q is believed. 3. ∄q st. q is believed ∧ q =⇒ ¬p. That is, p is believed (Def. 2.1), any other proposi- tion that follows logically from p is also believed, and p is consistent with any other proposition that is believed (by the same system).6 Thus, Definition 2.3 (tb-knowledge). An LLM M tb- knows p ⇐⇒ p is true ∧ M believes+ p.7 2.2 Justification (j-knowledge) Nozick (2000) takes another approach and defines knowledge as justified true beliefs,8 with a less strict definition of belief of the sort ‘x thinks that p’ and x has some justification for thinking it.9 Noz- ick (2000) posits that a lucky guess is not knowl- edge because a guess is not justified. Thus, for LLMs: Definition 2.4 (j-knowledge). An LLM M j-knows p ⇐⇒ p is true ∧ M believes p ∧ M (or M ’s inference that p) is partially interpretable (justi- fied).10 2.3 Sui generis (g-knowledge) Williamson (2005) argues for a relativist and primi- tive view of knowledge, where the truthfulness of p is relative to the agent. Knowledge, on this view, is sui generis which is a legal term literally meaning ‘of its own kind’ or ‘unique’. Williamson (2005) argues that we can’t analyze knowledge in terms of other requirements or atomic concepts (belief and justification) because knowledge is the atomic concept, which in effect explains what a belief or a justification is and not the other way around.11 6If I believe in Goldbach’s conjecture (any even number greater than two is the sum of two primes), I have to believe the definition of prime numbers, and I can’t believe 1+1=3. 7Our definitions are semi-formal. In epistemic logic, this would be expressed as (cid:50)sp ⇔ p ∧ (cid:51)+p. See Appendix A, for epistemic logic formalizations of our knowledge definitions. 8The idea that knowledge may require some kind of justifi- cation goes back at least to Plato (Plato, 2019, 187b–201c). In the Theaetetus, the definition of knowledge as true judgement is ultimately rejected, before arguing that some sort of account is necessary for knowledge (Plato, 2019, 201d-210a). 
9E.g., Mary thinks there are five oranges on the table, because she counted them up. There really are five oranges; so Mary knows there are five oranges on the table.
10We take this to mean that M can, possibly via ad-hoc methods, provide a rationale for p (Joshi et al., 2023).
11In his view, a belief is an attempt at knowing: if I believe the tree in front is a Sequoia, then I will act as if I know it. Thus, belief is explained through knowledge and not the reverse.

Definition 2.5 (g-knowledge). An LLM M g-knows p ⇐⇒ M includes p in its knowledge bank.

We discuss below (§3) what, precisely, it means for propositions to be included in an LLM's knowledge bank. The core intuition is that there is something akin to a knowledge box (Fodor, 1985) from which known propositions can be extracted. One extreme version would be if the LLM is its own knowledge box, meaning an LLM g-knows whatever it outputs, but g-knowledge could also be seen as a modular component in LLM architectures.

2.4 Virtue (v-knowledge)

The virtue definition of knowledge became popular in the 1980s (Sosa, 1980; Greco, 1993). Zagzebski (1999) used it to address the challenge from Gettier cases12 to the justified true belief definition, and states that knowledge is belief arising out of acts of intellectual virtue. As Zagzebski (1999) puts it, "virtues are properties of persons. Intellectual virtues are properties of persons that aim at intellectual goods, most specially the truth." An act of virtue is an act in which there is imitation of the behavior of virtuous persons and success in reaching the end for that reason. Therefore, if the end is reached by accident and not as a consequence of the virtuous action, then it is not considered an act of virtue.13 So we need to define that an LLM is behaving in a virtuous way, that is, it is aiming at the truth and arriving at a prediction as a result of this aim. Thus,

Definition 2.6 (v-knowledge). An LLM M v-knows p ⇐⇒ p is true ∧ M believes p ∧ M's cause for believing p is motivated only by truthfulness.

12Gettier (1963) challenged Nozick's definition of knowledge (j-knowledge) by citing a case where justified true belief would not imply knowledge: John sees a sheep in the field and forms the belief that there is a sheep in the field. The sheep that he saw is in fact a dog, but there is a sheep in the field, occluded from John's vision. In this case, John had a true belief, as well as a justification ('I saw it with my own eyes'), but his justification was false, and John really arrived at the right conclusion out of sheer luck (Chisholm et al., 1989).
13E.g., a judge determines by an impeccable procedure, and motivated by justice, that a man is guilty. The judge does everything he ought to do and exhibits all the virtues appropriate in this situation. Nonetheless, for some accidental reason the accused is the wrong man (e.g., the evidence was fabricated). Suppose that the actual killer is secretly switched with the accused man, so the judge ends up sentencing the right man (Zagzebski, 1999). Here, a feature of luck has cancelled out the bad and the end has been reached, but not because of the virtuous act of the judge.

2.5 Predictive accuracy (p-knowledge)

For Austin (2000), to know means to be able to make correct and relevant assertions about the subject in question. If M p-knows p, M believes p, and believing p facilitates correct and relevant predictions. Austin's definition is pragmatic. For him "believing in other persons, in authority and testimony, is an essential part of the act of communicating", and knowledge is the belief that works out over time. Austin (2000) states that knowledge is relevant true belief under deductive closure; that is, if the subject knows p, and believing p implies believing q (with q relevant), then q must be true (and therefore the subject knows q as well). Thus, p facilitates relevant and correct predictions (q). This is similar to tb-knowledge, in which belief+ is epistemically closed; however, in tb-knowledge the closure scopes over all propositions q, not just the relevant ones. Moreover, since the definition is pragmatic, the deductive closure is only probabilistic.

Definition 2.7 (p-knowledge). Let p, q be relevant propositions s.t. believing p =⇒ believing q. Then, an LLM M p-knows p ⇐⇒ M probably tb-knows p ∧ M probably tb-knows q.

Relevance is ambiguous and could be defined as p and q being relevant for each other, i.e., q being relevant for knowing p; or p and q being relevant for performing a target task (see §5).

3 Knowledge in NLP Research

Now, we discuss perspectives from NLP research on what constitutes knowledge, and how these align with the definitions we extracted from the philosophical literature.

tb-knowledge Most knowledge probing work seems to rely (loosely) on tb-knowledge or p-knowledge. Namely, works related to measuring knowledge encoded in LLMs (Petroni et al., 2019; Jiang et al., 2020; Wallat et al., 2020; Roberts et al., 2020; Paik et al., 2021; Dai et al., 2022; Kassner et al., 2020, 2021a; Dhingra et al., 2022; Chalkidis et al., 2023; Keleg and Magdy, 2023; Qi et al., 2023; Fierro et al., 2024b, inter alia), understanding the mechanisms of recall (Dai et al., 2022; Geva et al., 2023; Sharma et al., 2024), knowledge edits (Meng et al., 2022; Hase et al., 2023a; Meng et al., 2023; Wang et al., 2024), and analyses of LLMs' knowledge vs. contextual factual information (Neeman et al., 2023; Yu et al., 2023). These works follow the LAMA protocol (Petroni et al., 2019), where propositions {p} are derived from knowledge graphs,14 and an LLM is said to know p if it predicts p correctly in a fill-in-the-blank statement. Since p is true (from a knowledge graph) and believed (predicted) by the LLM, the LLM is said to know p.15 However, such work fails to address the fact that tb-knowledge relies on p being believed+, or that p-knowledge requires epistemic closure over relevant propositions.16 We discuss how best to evaluate whether an LLM believes+ p in §5.

Some works propose to enhance the LLM with an extra component to ensure more consistent beliefs: a so-called belief bank (Kassner et al., 2021b) or reflex layer (Kassner et al., 2023). This extra component is optimized for consistency via weighted MaxSAT (Park, 2002), and it is used to prompt the model to be consistent with its previously stated beliefs (Kassner et al., 2021b), or it is directly used to determine the system's prediction (Kassner et al., 2023). Both works aim to rely on tb-knowledge, where the extra component approximates belief+.17 However, it is only an approximation, as the extra component is not necessarily fully consistent and the entailed facts are sampled. This approximation would not be a problem if we consider their approach to fall under p-knowledge, although in that case the entailed facts should be selected according to some measure of relevance. Furthermore, Kassner et al. (2023) are slightly inconsistent in how they use the term knowledge, e.g., using "model beliefs" and "models' internal knowledge" interchangeably; if these were the same, then they would be talking about g-knowledge.

14E.g., https://www.wikidata.org/
15Note that under this framework we only need to find one surface form of p for which the LLM predicts it correctly to say that it knows p.
16Knowledge-edit works usually have a mismatch in their definition of knowledge, as they employ true belief (tb-knowledge without belief+) to determine the set of facts that the model knows, but then evaluate the success of an update by measuring correct predictions of paraphrases, thus accounting to some extent for belief+.
17They track consistency and accuracy to compare systems. Consistency measures the approximation of tb-knowledge, while accuracy only accounts for belief (Definition 2.1).

j-knowledge Hase et al. (2023b) adhere to j-knowledge, but they study LLMs' beliefs and not their knowledge, as they argue that "in a traditional view of knowledge as Justified True Belief, it is relatively more difficult to say that an LM knows something rather than believes it". Nonetheless, they align
For him “believing in other persons, in authority and testi- mony, is an essential part of the act of communi- cating”, and knowledge is the belief that works out over time. Austin (2000) states that knowledge is relevant true belief under deductive closure; that is, if the subject knows p, and believing p implies believing q (with q relevant), then q must be true (and therefore the subject knows q as well). Thus, p facilitates relevant and correct predictions (q). This is similar to tb-knowledge, in which belief+ is epistemically closed, however, in tb-knowledge the closure scopes over all propositions q, not just the relevant ones. Moreover, since the definition is pragmatic, the deductive closure is only probabilis- tic. Definition 2.7 (p-knowledge). Let p, q be relevant propositions st. believing p =⇒ believing q. Then, an LLM M p-knows p ⇐⇒ M probably tb-knows p ∧ M probably tb-knows q. Relevance is ambiguous and could be defined as p and q being relevant for each other, i.e., q being relevant for knowing p; or p and q being relevant for performing a target task (see §5). 3 Knowledge in NLP Research Now, we discuss perspectives from NLP research on what constitutes knowledge, and how these align with the definitions we extracted from the philo- sophical literature. tb-knowledge Most knowledge probing work seems to rely (loosely) on tb-knowledge or p- knowledge. Namely, works related to measuring knowledge encoded in LLMs (Petroni et al., 2019; Jiang et al., 2020; Wallat et al., 2020; Roberts et al., 2020; Paik et al., 2021; Dai et al., 2022; Kassner et al., 2020, 2021a; Dhingra et al., 2022; Chalkidis et al., 2023; Keleg and Magdy, 2023; Qi et al., 2023; Fierro et al., 2024b, inter alia), understand- ing the mechanisms of recalling (Dai et al., 2022; Geva et al., 2023; Sharma et al., 2024), knowledge edits (Meng et al., 2022; Hase et al., 2023a; Meng et al., 2023; Wang et al., 2024), and analyses of LLM’s knowledge vs contextual factual informa- tion (Neeman et al., 2023; Yu et al., 2023). These works follow the LAMA protocol (Petroni et al., 2019), where propositions {p} are derived from knowledge graphs,14 and an LLM is said to know p if it predicts p correctly in a fill-in-the-blank state- ment. Since p is true (from a knowledge graph) and believed (predicted) by the LLM, the LLM is said to know p.15 However, such work fails to ad- dress the fact that tb-knowledge relies on p being believed+, or that p-knowledge requires epistemic closure over relevant propositions.16 We discuss how best to evaluate whether an LLM believes+ p in §5. Some works propose to enhance the LLM with an extra component to ensure more consistent be- liefs; a so-called belief bank (Kassner et al., 2021b) or reflex layer (Kassner et al., 2023). This ex- tra component is optimized for consistency via weighted MaxSAT (Park, 2002), and it is used to prompt the model to be consistent to its previous stated beliefs (Kassner et al., 2021b), or it is di- rectly used to determine the system’s prediction (Kassner et al., 2023). Both works aim to rely on tb-knowledge, where the extra component approx- imates belief+.17 However, it is only an approxi- mation as the extra component is not necessarily fully consistent and the entailed facts are sampled. This approximation would not be a problem if we consider their approach to be under p-knowledge, although in that case the entailed facts should be se- lected according to some measure of relevance. Fur- thermore, Kassner et al. 
Furthermore, Kassner et al. (2023) are slightly inconsistent in how they use the term knowledge, e.g., using “model beliefs” and “models’ internal knowledge” interchangeably; if these were the same, then they would be talking about g-knowledge.

j-knowledge Hase et al. (2023b) adhere to j-knowledge, but they study LLMs’ beliefs and not their knowledge, as they argue that “in a traditional view of knowledge as Justified True Belief, it is relatively more difficult to say that an LM knows something rather than believes it”. Nonetheless, they align their experiments with the belief+ definition by measuring belief consistency under paraphrasing and entailment.

A justification for j-knowledge could be provided in different ways, namely, post-hoc attribution to training data using attribution methods (Hampel, 1974; Koh and Liang, 2017; Pruthi et al., 2020; Akyurek et al., 2022), logical derivation with a chain-of-thought mechanism (Wei et al., 2022), generation of factual statements with citations to sources (Gao et al., 2023; Menick et al., 2022; Fierro et al., 2024a), or, potentially, as Jiang et al. (2021) proposed, the probability of a calibrated language model could be used as a justification to differentiate between mere beliefs and knowledge. In any case, the jury is still out on which justification procedures are valid and/or superior, but note that all these methods seem to require partial interpretability.

g-knowledge One extreme interpretation of the knowledge bank in g-knowledge’s definition is relativist and deflationary: an LLM knows p if it asserts p, simply by generating it. This conflates assertion and true knowledge, and as such, beliefs and knowledge. A more interesting interpretation would be to assume that LLMs have distinct memorization strategies for knowledge and learn to induce modular knowledge components. While some LLM researchers have explored memorization components (Dai et al., 2022; Meng et al., 2022), no one has, to the best of our knowledge, identified knowledge components. Some researchers insert devoted knowledge layers (Dai and Huang, 2019; Kassner et al., 2021b, 2023; Feng et al., 2024a; Liu et al., 2024), which could be interpreted as the knowledge box, but it remains to be seen if such layers permit unambiguous extraction of knowledge claims.

v-knowledge If knowledge can only be inferred with intellectual virtue, then the difficulty lies in identifying intellectual virtues for LLMs. How to test for predictions that are acts of intellectual virtue is an open question. However, we could consider using training data attribution methods as proof of such acts. Another promising avenue is mechanistic interpretability, if we could distinguish factual recall (Geva et al., 2023) from guessing (Stoehr et al., 2024) mechanisms. This distinction would relate in interesting ways to the epistemological view of proper functioning (Plantinga, 1993). Yadkori et al. (2024) suggest making such a distinction is feasible for some models.
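As one concrete reading of the recall-versus-guessing test, the sketch below flags a prediction as a likely guess when masking the critical entity in the prompt barely changes the model’s confidence; this is close in spirit to the filtering of Biran et al. (2024) discussed next. The score function here is a toy stand-in: a real implementation would query the LLM for the probability of the answer given the prompt (e.g., summed token log-probabilities).

def score(prompt: str, answer: str) -> float:
    """Toy stand-in for an LM scorer: returns a fake confidence that the
    model would produce `answer` given `prompt`."""
    return 0.9 if "France" in prompt else 0.5

def is_guess(prompt: str, critical: str, answer: str, drop: float = 0.3) -> bool:
    """Flag the fact as guessed if ablating the critical entity from the
    prompt barely hurts the model's confidence in the answer."""
    full = score(prompt, answer)
    ablated = score(prompt.replace(critical, "[MASK]"), answer)
    return (full - ablated) < drop

print(is_guess("The capital of France is", critical="France", answer="Paris"))
# -> False here: confidence drops once "France" is masked, consistent with
#    recall rather than guessing (under this toy scorer).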
In recent works, Biran et al. (2024) address the intellectual virtue condition to some extent by only analyzing the model’s virtue knowledge. They do this by filtering out facts p that the model can correctly predict without using critical components in the input, thereby merely guessing the fact (acting unvirtuously). This is a step in the right direction, but a more in-depth detection of the inner workings of the model is necessary to filter out all the non-virtuous predictions.

Note that if the detection of a virtue act is viewed as a model justification, then it is somewhat unclear what would distinguish j-knowledge from v-knowledge. This is unsurprising, however, since v-knowledge can be seen as an attempt to flesh out what justification turns on (Greco, 1993). As we insist on concrete methodological interpretations, the two definitions of knowledge may coincide.

p-knowledge In the context of editing factual knowledge in LLMs, Zhong et al. (2023a) and Cohen et al. (2024) propose to not only evaluate the modified fact itself, but also to evaluate related facts. For example, if we edit an LLM to predict that Lionel Messi now plays in a different football team, then a successful edit should also modify the league in which he plays and the country where he resides. Such evaluation follows the p-knowledge definition, particularly since they focus on evaluating only logically related facts (i.e., only the relevant ones) that are two hops away from the subject or object in question. This type of evaluation could be directly applied to measure the knowledge of the LLM, not just to assess the update accuracy of edits. The logically related facts to evaluate could also be defined in terms of task relevance. For example, in the context of legal knowledge, Chalkidis et al. (2023) studied the relevance of the knowledge possessed by an LLM for downstream performance in legal classification tasks.

4 Survey Results

To determine how researchers think about knowledge, we turn to our survey of computer scientists and philosophers. We had 105 respondents, out of which 50.4% considered themselves philosophers, 36.2% considered themselves computer scientists, 2.3% both, and 10.5% none of the two.18 Most respondents from computer science reported a better understanding of LLMs compared to philosophers (see Figure 2), while the majority of philosophers reported a better understanding of epistemology, compared to 40% of computer scientists (see Figure 3). See Appendix B for more details.

Figure 2: LLMs understanding of respondents.
Figure 3: Epistemology understanding of respondents.

18 Some considered themselves mathematicians, cognitive scientists, cultural theorists, etc.

4.1 Questions on Knowledge Definitions

We asked our respondents to indicate from 1-5 if they disagree completely (1) or agree completely (5) with statements that verbalized our knowledge definitions. See Figures 1 and 4 for a summary of the results. In brief, philosophers disagreed with tb-knowledge, with 49% selecting 1-2, while the computer scientists agreed more, with 52% selecting 4-5. Philosophers were divided about j-knowledge, with a slight tendency to agree (33.9% chose 1-2 and 47% chose 4-5). Here, they were in some agreement with computer scientists, 57% of whom selected 4-5. Philosophers disagreed strongly with the g-knowledge definition (84% answered 1-2), whereas computer scientists tended to disagree (57% answered 1-2). Everyone seemed to like v-knowledge better, with philosophers selecting 4-5 62% of the time, and computer scientists selecting 4-5 57% of the time. Philosophers disagreed with p-knowledge, since 60% selected 1-2, whereas computer scientists seemed more divided, with 36% choosing 1-2 and 31% choosing 4-5.

Overall, the survey shows that j-knowledge and v-knowledge are the most accepted across the two groups. tb-knowledge has more mixed results.19 The disagreement with p-knowledge is somewhat surprising, since this aligns well with practical evaluation methodologies in the LLM literature.20 On the other hand, there is an agreement among philosophers and computer scientists to reject the g-knowledge definition.

Figure 4: Disagreements on epistemological definitions of knowledge.

19 This could either reflect the philosophers’ knowledge of the challenges to such definitions of knowledge, or it could reflect the fact that we did not discuss the implications of epistemic closure in the survey (for brevity). In the absence of epistemic closure, maybe some philosophers felt inclined to disagree with this definition.
20 One possible explanation was our use of the word “useful” in the survey. This word was intended to convey p-knowledge’s pragmatic flavor, but may have misled some respondents to think that all knowledge has to be directly useful for some user-defined goal.

4.2 General Questions

Can non-human entities know? Both computer scientists and philosophers generally agree that non-human entities can possess knowledge (see Figure 5a). Disagreement within each group is relatively low, with 7% among computer scientists and 22% among philosophers.21

21 This question is intentionally ambiguous, e.g., animals could be considered non-human entities. We aim to find out whether people think differently about LLMs compared to general non-human entities.

Should knowledge be defined differently for humans and non-humans? Computer scientists generally believe that knowledge should be defined differently for humans and non-humans, while philosophers are more divided. Among philosophers, 33% think it should be different, and 30% think it should be the same. Among computer scientists, 44% think it should be different, and 34% think it should be the same (see Figure 5b).

Figure 5: Four of the survey questions and their respective answers. (a) Survey answers to “Can non-human entities know?”. (b) Survey responses on defining global or specific knowledge. (c) Survey results to the question of LLMs having knowledge. (d) Survey results on LLMs being able to have knowledge.

Do LLMs know (empirically, in practice, now)? There is a significant difference in opinion between philosophers and computer scientists.
Philosophers largely disagree, with 54% saying no and only 11% saying yes. In contrast, computer scientists are more divided, with 31% saying no, 34% saying yes, and the remaining respondents undecided or unclear (see Figure 5c). Computer scientists, in other words, evaluate LLM knowledge claims more positively.

Can LLMs know (in theory)? When considering the question theoretically (as opposed to in practice), approval increases in both groups (see Figure 5d). Among philosophers, 24% now say yes and 33% say no, showing a more divided opinion. Among computer scientists, 55% say yes and 21% say no, indicating that most believe LLMs can possess knowledge.

The survey results thus indicate that scholars from both epistemology and computer science think that the notion of knowledge for LLMs is not a trivial one. Despite differences in opinion, two key points emerge: most scholars believe non-humans can possess knowledge, and LLMs have the potential to "know" in some sense.

5 Best Practices

Given our discussion of mapping knowledge definitions to LLMs and the results of our survey, we provide possible protocols for evaluating knowledge of LLMs in relation to each discussed definition.22 We also provide a very simple example to contrast some of the definitions in a more practical manner. We use Llama-3-8B-Instruct23 with greedy decoding for generating completions.24

22 We provide practical examples of how the definitions could be implemented with current research. However, these protocols may change completely in the future as we better understand the inner workings of LLMs and develop new methodologies and algorithms.
23 https://github.com/meta-llama/llama3
24 We use the system prompt: “You are a helpful chatbot that aims to be truthful.”

Protocol for tb-knowledge A protocol for evaluating knowledge of p as per Definition 2.3 would involve evaluating the three conditions for belief+ (Definition 2.2), which can be done by evaluating model confidence in the true statement itself, as well as in all that follows logically from the true statement. The model should, of course, have low confidence in statements that could imply ¬p. Most current work (§3) evaluates model confidence in p, but to assert tb-knowledge in LLMs, we must also evaluate model confidence in all that is implied by p.
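The sketch below spells out this protocol under simplifying assumptions: confidence stands in for a real yes-probability query against the LLM, and the implication sets are tiny hand-written samples (a real test should sample many implied propositions, cf. footnote 25). The numbers are chosen to mirror the platypus example worked through next.

FACTS = {  # toy stand-ins for the model's yes-confidence per question
    "Are platypuses mammals?": 0.95,
    "Do platypuses have hair or fur?": 0.90,
    "Do mammals lay eggs?": 0.05,
    "Do platypuses lay eggs?": 0.97,
}

def confidence(question: str) -> float:
    return FACTS.get(question, 0.5)

def believes(question: str, stance: bool = True, thr: float = 0.8) -> bool:
    """Believing a 'no' statement means a *low* yes-confidence."""
    c = confidence(question)
    return c > thr if stance else c < 1 - thr

def tb_knows(p, implied, contradiction_sets):
    if not believes(p):                                  # (1) believe p
        return False
    if not all(believes(q, s) for q, s in implied):      # (2) epistemic closure
        return False
    for cset in contradiction_sets:                      # (3) nothing believed implies not-p
        if all(believes(q, s) for q, s in cset):
            return False
    return True

print(tb_knows(
    "Are platypuses mammals?",
    implied=[("Do platypuses have hair or fur?", True)],
    contradiction_sets=[[("Do platypuses lay eggs?", True),
                         ("Do mammals lay eggs?", False)]],
))  # -> False: jointly held beliefs imply not-p, matching the example below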
In our small example (Table 2), we evaluate whether Llama-3 knows p = ‘Platypuses are mammals’. We first test model confidence in the answer to ‘Are platypuses mammals?’ being yes. We then evaluate the epistemic closure by evaluating model confidence in facts that follow logically from platypuses being mammals, e.g., ‘Do platypuses have hair or fur?’ For this question, the model has more confidence in the answer yes, they have fur. We now prompt the model ‘Do mammals lay eggs?’, and the model answers no. Its answer to ‘Do platypuses lay eggs?’ is yes. Therefore, the model believes q = ‘Platypuses lay eggs and mammals do not’, which implies ¬p, thus violating condition 3 of the belief+ definition and leading us to conclude that Llama-3 does not tb-know p.25

25 In this example, conditions (2) and (3) have been tested with only one proposition that follows logically, but in reality one should obviously sample from a large enough set of propositions. We have also used greedy decoding, but different approaches to high confidence can be used.

Protocol for j-knowledge If we subscribe to j-knowledge – which many computer scientists do (§4) – then we need a two-part protocol: (1) as in tb-knowledge, the model’s confidence in the true statement should be high; and (2) we must also attribute this belief to training data which unambiguously states p, or to reasoning that justifies how p can be derived from already established propositions.26

In our running example, we obtain a justification by prompting Llama-3 with ‘Are platypus mammals? Please explain step-by-step’, for which the model generates the definition of a mammal, platypus characteristics corresponding to mammals’ features, and explains that platypuses are mammals even though they do not comply with all the mammals’ features (exact answer in Appendix C). By establishing that the intermediate reasoning steps are correct (the characteristics of mammals and platypuses), we can conclude that Llama-3 j-knows p.27

26 See §3 for references to current methodologies of reasoning and training data attribution.
27 We have used chain-of-thought prompting in this example; however, it should be noted that the reasoning steps need to be verified for this to be a valid justification (Golovneva et al., 2023; Jacovi et al., 2024).

Protocol for g-knowledge If by g-knowing p we simply mean the ability to state p, then g-knowledge will not do much work for us. On such an account, knowledge becomes indistinguishable from belief. In line with our discussion in §3, we generally recommend adopting other definitions.

Protocol for v-knowledge The v-knowledge definition seems to be quite popular among both philosophers and computer scientists. In §3, we cited possible interpretations of intellectual virtue in LLMs. Training data reliability assessments could involve attributing the inference of p to training data that contains p, and showing that the model knows this data is reliable, e.g., by using a linear probe to see whether the model successfully distinguishes reliable from unreliable training data. On the other hand, if the model infers p from in-context data that we know is reliable, we need to show that the model is indeed generating the proposition using the provided in-context knowledge, e.g., via mechanistic interpretability (Yu et al., 2023; Wu et al., 2024).
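As one concrete reading of the linear-probe idea, the sketch below trains a probe to separate hidden states of reliable versus unreliable sources. The "hidden states" are synthetic Gaussians so the snippet is self-contained; a real study would extract activations from the LLM for passages whose reliability is independently known.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
d = 64                                        # stand-in hidden-state dimension
reliable = rng.normal(+0.2, 1.0, (500, d))    # states for vetted sources
unreliable = rng.normal(-0.2, 1.0, (500, d))  # states for unvetted sources
X = np.vstack([reliable, unreliable])
y = np.array([1] * 500 + [0] * 500)
idx = rng.permutation(len(y))                 # shuffle before splitting
X, y = X[idx], y[idx]

probe = LogisticRegression(max_iter=1000).fit(X[:800], y[:800])
print(f"held-out probe accuracy: {probe.score(X[800:], y[800:]):.2f}")

High held-out accuracy would only establish that reliability is linearly decodable from the representation; whether the model uses that signal when asserting p is a further (and harder) question.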
Protocol for p-knowledge If knowledge is something that facilitates correct predictions, we need to be able to sample from the set of relevant situations. This is, of course, a familiar challenge to LLM researchers interested in evaluating performance in the wild. We propose to evaluate p-knowledge as we would evaluate tb-knowledge, albeit in a probabilistic setting, and only over the relevant set of implied propositions.28 While computer scientists prefer tb-knowledge over p-knowledge (by some margin; see §4), the definition of p-knowledge seems more in line with current practices in the LLM community. Following the example in Table 2, here we would conclude that Llama-3 p-knows ‘Platypuses are mammals’, as opposed to tb-knowing it: even though the belief that mammals do not lay eggs contradicts p, q is true most of the time.

28 This seems to make the p-knowledge definition strictly weaker than tb-knowledge, with the implication that any model that tb-knows p will also p-know p. This conclusion depends on whether our notion of model usefulness is limited to knowledge. If we can dissociate knowledge performance from task performance and talk about model usefulness only in terms of knowledge, it holds that p-knowledge is strictly weaker than tb-knowledge. If not, we must add the additional requirement that models perform well on the domain they are supposed to be knowledgeable about.
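A minimal sketch of this probabilistic reading, matching the closure formula in Appendix A: p is p-known if it is believed and a sufficiently large fraction (here 0.95, an arbitrary threshold) of sampled relevant implications is also believed. All confidences below are invented for illustration.

def p_knows(p_conf, implied_confs, belief_thr=0.8, closure_thr=0.95):
    """Probabilistic closure: believe p, and believe 'most' relevant q's."""
    if p_conf < belief_thr:
        return False
    believed = sum(c > belief_thr for c in implied_confs)
    return believed / len(implied_confs) >= closure_thr

confs = [0.9] * 19 + [0.45]   # one contradicted implication out of twenty
print(p_knows(0.95, confs))   # True: 19/20 = 0.95 clears the closure bar

On these numbers p-knowledge holds even though a strict tb-knowledge test would fail on the single contradicted implication, mirroring the platypus case above.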
6 Conclusion

In this paper, we reviewed epistemological definitions and formalized interpretations in the context of large language models (LLMs). Then, we examined how existing works in NLP research align with these definitions, highlighting gaps in their interpretations of knowledge. Furthermore, we presented the results of our survey of philosophers and computer scientists, showcasing the different views in terms of definitions of knowledge and whether LLMs can be said to know. Finally, we outlined evaluation protocols for each knowledge definition using existing algorithms and methodologies. We hope that the connection to epistemological definitions of knowledge can inform the evaluation of knowledge in LLMs and can provide a more solid foundation for the necessary tests to determine when an LLM truly knows a fact.

Limitations

We presented five standard definitions of knowledge in philosophy. However, there are more nuances and potentially additional definitions that could apply; nonetheless, we believe these are the most standard and serve as a starting point to ground the evaluations of knowledge in LLMs more formally. Regarding Section 3, there are certainly more works evaluating knowledge in LLMs that could be included. Nonetheless, we included as many as possible and believe these lay out the current landscape of knowledge evaluation. Finally, as stated in the main body, the protocols are practical methodologies that may become irrelevant as more research on LLMs is conducted. However, we included them here to clarify how the definitions can be implemented in practice.

Acknowledgements

We thank our colleagues at the Center for Philosophy in AI and the CoAStaL NLP group for insightful discussions throughout this project. In particular, we would like to thank Daniel Hershcovich, Ilias Chalkidis and Jiaang Li for valuable comments on the final manuscript. This work has been supported by Carlsberg Semper Ardens Advance Grant CF22-1432.

References

Ekin Akyurek, Tolga Bolukbasi, Frederick Liu, Binbin Xiong, Ian Tenney, Jacob Andreas, and Kelvin Guu. 2022. Towards tracing knowledge in language models back to the training data. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 2429–2446, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.

Sergei Artemov. 2008. The logic of justification. Review of Symbolic Logic, 1(4):477–513.

J. L. Austin. 2000. Other minds. In Sven Bernecker and Fred I. Dretske, editors, Knowledge: Readings in Contemporary Epistemology. Oxford University Press.

Eden Biran, Daniela Gottesman, Sohee Yang, Mor Geva, and Amir Globerson. 2024. Hopping too late: Exploring the limitations of large language models on multi-hop queries. arXiv preprint arXiv:2406.12775.

Herman Cappelen and Josh Dever. 2021. Making AI Intelligible: Philosophical Foundations. Oxford University Press, New York, USA.

Ilias Chalkidis, Nicolas Garneau, Catalina Goanta, Daniel Katz, and Anders Søgaard. 2023. LeXFiles and LegalLAMA: Facilitating English multinational legal language model development. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 15513–15535, Toronto, Canada. Association for Computational Linguistics.

Roderick M. Chisholm. 1989. Theory of Knowledge, volume 3. Prentice-Hall, Englewood Cliffs, NJ.

Roi Cohen, Eden Biran, Ori Yoran, Amir Globerson, and Mor Geva. 2024. Evaluating the ripple effects of knowledge editing in language models. Transactions of the Association for Computational Linguistics, 12:283–298.

Damai Dai, Li Dong, Yaru Hao, Zhifang Sui, Baobao Chang, and Furu Wei. 2022. Knowledge neurons in pretrained transformers. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 8493–8502, Dublin, Ireland. Association for Computational Linguistics.

Zeyu Dai and Ruihong Huang. 2019. A regularization approach for incorporating event knowledge and coreference relations into neural discourse parsing. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2976–2987, Hong Kong, China. Association for Computational Linguistics.

Nicola De Cao, Wilker Aziz, and Ivan Titov. 2021. Editing factual knowledge in language models. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 6491–6506, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.

Bhuwan Dhingra, Jeremy R. Cole, Julian Martin Eisenschlos, Daniel Gillick, Jacob Eisenstein, and William W. Cohen. 2022. Time-aware language models as temporal knowledge bases. Transactions of the Association for Computational Linguistics, 10:257–273.

Yanai Elazar, Nora Kassner, Shauli Ravfogel, Abhilasha Ravichander, Eduard Hovy, Hinrich Schütze, and Yoav Goldberg. 2021. Measuring and improving consistency in pretrained language models. Transactions of the Association for Computational Linguistics, 9:1012–1031.

Sebastian Farquhar, Vikrant Varma, Zachary Kenton, Johannes Gasteiger, Vladimir Mikulik, and Rohin Shah. 2023. Challenges with unsupervised LLM knowledge discovery. arXiv preprint arXiv:2312.10029.

Shangbin Feng, Weijia Shi, Yuyang Bai, Vidhisha Balachandran, Tianxing He, and Yulia Tsvetkov. 2024a. Knowledge card: Filling LLMs’ knowledge gaps with plug-in specialized language models. In The Twelfth International Conference on Learning Representations.

Shangbin Feng, Weijia Shi, Yike Wang, Wenxuan Ding, Vidhisha Balachandran, and Yulia Tsvetkov. 2024b. Don’t hallucinate, abstain: Identifying LLM knowledge gaps via multi-LLM collaboration. arXiv preprint arXiv:2402.00367.

Constanza Fierro, Reinald Kim Amplayo, Fantine Huot, Nicola De Cao, Joshua Maynez, Shashi Narayan, and Mirella Lapata. 2024a. Learning to plan and generate text with citations. arXiv preprint arXiv:2404.03381.
Constanza Fierro, Nicolas Garneau, Emanuele Bugliarello, Yova Kementchedjhieva, and Anders Søgaard. 2024b. MuLan: A study of fact mutability in language models.

Constanza Fierro and Anders Søgaard. 2022. Factual consistency of multilingual pretrained language models. In Findings of the Association for Computational Linguistics: ACL 2022, pages 3046–3052, Dublin, Ireland. Association for Computational Linguistics.

Jerry A. Fodor. 1985. Fodor’s guide to mental representation: The intelligent auntie’s vade-mecum. Mind, 94(373):76–100.

Luyu Gao, Zhuyun Dai, Panupong Pasupat, Anthony Chen, Arun Tejasvi Chaganty, Yicheng Fan, Vincent Zhao, Ni Lao, Hongrae Lee, Da-Cheng Juan, and Kelvin Guu. 2023. RARR: Researching and revising what language models say, using language models. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 16477–16508, Toronto, Canada. Association for Computational Linguistics.

Edmund L. Gettier. 1963. Is justified true belief knowledge? Analysis, 23(6):121–123.

Mor Geva, Jasmijn Bastings, Katja Filippova, and Amir Globerson. 2023. Dissecting recall of factual associations in auto-regressive language models. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 12216–12235, Singapore. Association for Computational Linguistics.

Olga Golovneva, Moya Peng Chen, Spencer Poff, Martin Corredor, Luke Zettlemoyer, Maryam Fazel-Zarandi, and Asli Celikyilmaz. 2023. ROSCOE: A suite of metrics for scoring step-by-step reasoning. In The Eleventh International Conference on Learning Representations.

John Greco. 1993. Virtues and vices of virtue epistemology. Canadian Journal of Philosophy, 23(3):413–432.

Frank R. Hampel. 1974. The influence curve and its role in robust estimation. Journal of the American Statistical Association, 69(346):383–393.

John Hardwig. 1991. The role of trust in knowledge. Journal of Philosophy, 88(12):693–708.

Peter Hase, Mohit Bansal, Been Kim, and Asma Ghandeharioun. 2023a. Does localization inform editing? Surprising differences in causality-based localization vs. knowledge editing in language models. In Thirty-seventh Conference on Neural Information Processing Systems.

Peter Hase, Mona Diab, Asli Celikyilmaz, Xian Li, Zornitsa Kozareva, Veselin Stoyanov, Mohit Bansal, and Srinivasan Iyer. 2023b. Methods for measuring, updating, and visualizing factual beliefs in language models. In Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics, pages 2714–2731, Dubrovnik, Croatia. Association for Computational Linguistics.

Alon Jacovi, Yonatan Bitton, Bernd Bohnet, Jonathan Herzig, Or Honovich, Michael Tseng, Michael Collins, Roee Aharoni, and Mor Geva. 2024. A chain-of-thought is as strong as its weakest link: A benchmark for verifiers of reasoning chains. arXiv preprint arXiv:2402.00559.

Zhengbao Jiang, Jun Araki, Haibo Ding, and Graham Neubig. 2021. How can we know when language models know? On the calibration of language models for question answering. Transactions of the Association for Computational Linguistics, 9:962–977.

Zhengbao Jiang, Frank F. Xu, Jun Araki, and Graham Neubig. 2020. How can we know what language models know? Transactions of the Association for Computational Linguistics, 8:423–438.

Brihi Joshi, Ziyi Liu, Sahana Ramnath, Aaron Chan, Zhewei Tong, Shaoliang Nie, Qifan Wang, Yejin Choi, and Xiang Ren. 2023. Are machine rationales (not) useful to humans? Measuring and improving human utility of free-text rationales. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 7103–7128, Toronto, Canada. Association for Computational Linguistics.
Nora Kassner, Philipp Dufter, and Hinrich Schütze. 2021a. Multilingual LAMA: Investigating knowledge in multilingual pretrained language models. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 3250–3258, Online. Association for Computational Linguistics.

Nora Kassner, Benno Krojer, and Hinrich Schütze. 2020. Are pretrained language models symbolic reasoners over knowledge? In Proceedings of the 24th Conference on Computational Natural Language Learning, pages 552–564, Online. Association for Computational Linguistics.

Nora Kassner and Hinrich Schütze. 2020. Negated and misprimed probes for pretrained language models: Birds can talk, but cannot fly. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7811–7818, Online. Association for Computational Linguistics.

Nora Kassner, Oyvind Tafjord, Ashish Sabharwal, Kyle Richardson, Hinrich Schuetze, and Peter Clark. 2023. Language models with rationality. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 14190–14201, Singapore. Association for Computational Linguistics.

Nora Kassner, Oyvind Tafjord, Hinrich Schütze, and Peter Clark. 2021b. BeliefBank: Adding memory to a pre-trained language model for a systematic notion of belief. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 8849–8861, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.

Amr Keleg and Walid Magdy. 2023. DLAMA: A framework for curating culturally diverse facts for probing the knowledge of pretrained language models. In Findings of the Association for Computational Linguistics: ACL 2023, pages 6245–6266, Toronto, Canada. Association for Computational Linguistics.

Pang Wei Koh and Percy Liang. 2017. Understanding black-box predictions via influence functions. In International Conference on Machine Learning, pages 1885–1894. PMLR.

Yanming Liu, Xinyue Peng, Xuhong Zhang, Weihao Liu, Jianwei Yin, Jiannan Cao, and Tianyu Du. 2024. RA-ISF: Learning to answer and understand from retrieval augmentation via iterative self-feedback.

Kevin Meng, David Bau, Alex Andonian, and Yonatan Belinkov. 2022. Locating and editing factual associations in GPT. Advances in Neural Information Processing Systems, 36.

Kevin Meng, Arnab Sen Sharma, Alex J. Andonian, Yonatan Belinkov, and David Bau. 2023. Mass-editing memory in a transformer. In The Eleventh International Conference on Learning Representations.

Jacob Menick, Maja Trebacz, Vladimir Mikulik, John Aslanides, Francis Song, Martin Chadwick, Mia Glaese, Susannah Young, Lucy Campbell-Gillingham, Geoffrey Irving, et al. 2022. Teaching language models to support answers with verified quotes. arXiv preprint arXiv:2203.11147.

Ella Neeman, Roee Aharoni, Or Honovich, Leshem Choshen, Idan Szpektor, and Omri Abend. 2023. DisentQA: Disentangling parametric and contextual knowledge with counterfactual question answering. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 10056–10070, Toronto, Canada. Association for Computational Linguistics.
Robert Nozick. 2000. Knowledge and scepticism. In Sven Bernecker and Fred I. Dretske, editors, Knowledge: Readings in Contemporary Epistemology. Oxford University Press.

Cory Paik, Stéphane Aroca-Ouellette, Alessandro Roncone, and Katharina Kann. 2021. The world of an octopus: How reporting bias influences a language model’s perception of color. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 823–835, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.

James D. Park. 2002. Using weighted MAX-SAT engines to solve MPE. In AAAI/IAAI.

Anna Pederneschi. 2024. An analysis of bias and distrust in social hinge epistemology. Philosophical Psychology, 37(1):258–277.

Fabio Petroni, Tim Rocktäschel, Sebastian Riedel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, and Alexander Miller. 2019. Language models as knowledge bases? In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2463–2473, Hong Kong, China. Association for Computational Linguistics.

Alvin Plantinga. 1993. Warrant and Proper Function. Oxford University Press.

Plato. 2019. Theaetetus. BoD–Books on Demand.

Garima Pruthi, Frederick Liu, Satyen Kale, and Mukund Sundararajan. 2020. Estimating training data influence by tracing gradient descent. Advances in Neural Information Processing Systems, 33:19920–19930.

Jirui Qi, Raquel Fernández, and Arianna Bisazza. 2023. Cross-lingual consistency of factual knowledge in multilingual language models. In The 2023 Conference on Empirical Methods in Natural Language Processing.

Adam Roberts, Colin Raffel, and Noam Shazeer. 2020. How much knowledge can you pack into the parameters of a language model? In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 5418–5426, Online. Association for Computational Linguistics.

Gilbert Ryle. 1949. The Concept of Mind: 60th Anniversary Edition. Hutchinson & Co, New York.

Crispin Sartwell. 1992. Why knowledge is merely true belief. Journal of Philosophy, 89(4):167–180.

Arnab Sen Sharma, David Atkinson, and David Bau. 2024. Locating and editing factual associations in Mamba. In First Conference on Language Modeling.

Ernest Sosa. 1980. The raft and the pyramid: Coherence versus foundations in the theory of knowledge. Midwest Studies in Philosophy, 5(1):3–26.

Niklas Stoehr, Mitchell Gordon, Chiyuan Zhang, and Owen Lewis. 2024. Localizing paragraph memorization in language models. arXiv preprint arXiv:2403.19851.

Alasdair Urquhart. 1972. Semantics for relevant logics. Journal of Symbolic Logic, 37(1):159–169.

Jonas Wallat, Jaspreet Singh, and Avishek Anand. 2020. BERTnesia: Investigating the capture and forgetting of knowledge in BERT. In Proceedings of the Third BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP, pages 174–183, Online. Association for Computational Linguistics.
Jiaan Wang, Yunlong Liang, Zengkui Sun, Yuxuan Cao, Jiarong Xu, and Fandong Meng. 2024. Cross-lingual knowledge editing in large language models. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 11676–11686, Bangkok, Thailand. Association for Computational Linguistics.

Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed H. Chi, Quoc V. Le, and Denny Zhou. 2022. Chain of thought prompting elicits reasoning in large language models. In Advances in Neural Information Processing Systems.

Timothy Williamson. 2005. Knowledge, context, and the agent’s point of view. In Gerhard Preyer and Georg Peter, editors, Contextualism in Philosophy: Knowledge, Meaning, and Truth, pages 91–114. Oxford University Press.

Wenhao Wu, Yizhong Wang, Guangxuan Xiao, Hao Peng, and Yao Fu. 2024. Retrieval head mechanistically explains long-context factuality. arXiv preprint arXiv:2404.15574.

Yasin Abbasi Yadkori, Ilja Kuzborskij, András György, and Csaba Szepesvári. 2024. To believe or not to believe your LLM. arXiv preprint arXiv:2406.02543.

Qinan Yu, Jack Merullo, and Ellie Pavlick. 2023. Characterizing mechanisms for factual recall in language models. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 9924–9959, Singapore. Association for Computational Linguistics.

Linda Zagzebski. 1999. What is knowledge? In John Greco and Ernest Sosa, editors, The Blackwell Guide to Epistemology, pages 92–116. Oxford: Blackwell.

Zexuan Zhong, Zhengxuan Wu, Christopher Manning, Christopher Potts, and Danqi Chen. 2023a. MQuAKE: Assessing knowledge editing in language models via multi-hop questions. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 15686–15702, Singapore. Association for Computational Linguistics.

Zexuan Zhong, Zhengxuan Wu, Christopher D. Manning, Christopher Potts, and Danqi Chen. 2023b. MQuAKE: Assessing knowledge editing in language models via multi-hop questions. arXiv preprint arXiv:2305.14795.

A Epistemic logic

The syntax of standard epistemic logic is defined by:

ϕ ::= p | ¬ϕ | (ϕ ∧ ψ) | □ϕ | ◇ϕ

The veridicality principle (also known as axiom T), that what is known is also true, is expressed as follows: □ϕ → ϕ. We will distinguish between different definitions of knowing by subscripting the modal operators. One standard epistemic logic is the so-called S4 logic, axiomatized as follows:

K: □(ϕ → ψ) → (□ϕ → □ψ)
T: □ϕ → ϕ
4: □ϕ → □□ϕ

Axiom 4 is also called the principle of positive introspection. This is not the only epistemic modal logic on the table, but it suffices for our purposes. We extend S4 in various ways to accommodate the five definitions. Specifically, v-knowledge introduces the concept of virtue, and p-knowledge relies on some notion of empirical risk. The virtue definition of knowledge introduces a new operator that does not satisfy the veridicality principle T.

tb-knowledge A naïve implementation of knowledge as true belief falls out of S4 and the principle called KB1, which goes all the way back to Plato:

□_s ϕ → ◇ϕ

Sartwell (1992), however, relies on an extended notion of belief which we will have to formalize, also. Let us introduce a new operator ◇+ and call the epistemic closure principle for this operator +:

◇+: □_s p → ◇+ p
+: ◇+ p → ((p → q) → ◇+ q)

One way to express that belief is consistent is by the principle: ¬◇⊥

j-knowledge The idea of justified true beliefs calls for so-called justification logic (Artemov, 2008) with justification operators:

□_n ϕ → t : ϕ

Justification logic can be axiomatized in different ways, but these details go beyond our main concerns here.
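As a small sanity check of the S4 machinery, the snippet below evaluates the box operator on a toy Kripke frame (worlds, an accessibility relation, and a valuation, all invented): box(phi) holds at a world iff phi holds at every accessible world, and on a reflexive, transitive frame axiom 4 holds at every world.

WORLDS = {1, 2, 3}
R = {(1, 1), (2, 2), (3, 3), (1, 2), (2, 3), (1, 3)}  # reflexive + transitive
VAL = {1: False, 2: True, 3: True}                    # worlds where phi holds

def box(holds, world):
    """box(phi) at `world`: phi holds at all worlds accessible from it."""
    return all(holds(v) for (u, v) in R if u == world)

phi = lambda w: VAL[w]
box_phi = lambda w: box(phi, w)
box_box_phi = lambda w: box(box_phi, w)

# Axiom 4: box(phi) -> box(box(phi)) should hold at every world.
assert all((not box_phi(w)) or box_box_phi(w) for w in WORLDS)
print("axiom 4 holds on this transitive frame")

Dropping transitivity from R (e.g., removing (1, 3)) breaks the assertion, which is the standard correspondence between axiom 4 and transitive frames.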
g-knowledge If we insist on a sui generis interpretation of knowledge in S4, we would have to introduce a new operator, say †. This operator would have very different properties from the standard epistemic modal logic □-operator. T would not apply. K would apply, and knowledge would still be required to be consistent. It is unclear whether 4 would apply to the †-operator. A minimal axiom system could, perhaps, be something like this:

K: †(ϕ → ψ) → (†ϕ → †ψ)
0: ¬†⊥

We leave further details open.

v-knowledge Virtue reliabilist accounts of knowledge and justification are versions of epistemological externalism. Sosa characterizes an intellectual virtue, very generally, as “a quality bound to help maximize one’s surplus of truth over error” (1991: 225). For most virtue reliabilists, intellectual virtue is what leads to justification, and virtue-based knowledge definitions are therefore often formalized in justification logics:

□_w ϕ → t : ϕ

but with slightly different model-theoretic semantics than □_n.

p-knowledge Definition 2.7 translates into the following in an extended probabilistic version of S4:

□_a p → ◇p ∧ P(◇q | p → q) > 0.95

which is taken to mean that the definition of p-knowing p (□_a p) is that you believe p (◇p), and if q follows from p, then you probably also believe q, i.e., the probability of you believing q (P(◇q | p → q)) is higher than some threshold, say 0.95. This is simply the probabilistic version of tb-knowledge. The definition of p-knowledge also requires pragmatic usefulness. One way to formalize this is in terms of empirical risk on relevant benchmarks. An alternative is formalization through relevance logics (Urquhart, 1972).

B Survey Details

We recruited professional LLM researchers and philosophers through Computer Science and Philosophy mailing lists, research group mailing lists, and points of contact of the authors at other universities. All respondents participated free of charge on a completely voluntary basis. The respondents were informed about the intended use of the survey. The full instructions were:

    The <X> is running a survey about the relationship between knowledge and Large Language Models (LLMs). We are interested in getting as many perspectives as possible, especially from philosophers and computer scientists. The survey should not take more than 5 minutes of your time.

We first ask general questions about the respondent and their knowledge of language models and epistemology (Figure 6). Then, we present an example to motivate the informal knowledge definitions (Figure 7), and we ask respondents to rate the definitions on a Likert scale (Figure 8). Finally, we ask questions related to whether LLMs can be said to know (Figure 9).

Figure 6: First part of questions from our survey.
Figure 7: Example to be used for the informal knowledge definitions.
Figure 8: Second part of questions from the survey, related to the knowledge definitions.
Figure 9: Final part of questions from our survey.

C Protocols Example

See Table 2.

Table 2: Greedy decoding of Llama-3-8B-Instruct given a prompt. The ‘(...)’ means that more text was generated but omitted here due to space limitations.

Prompt: What is a mammal?
Completion: A mammal is (...) key characteristics of mammals include: 1. Hair or fur: Mammals are covered in hair or fur, which helps to regulate their body temperature and protect their skin. (...) 5. Live birth: Mammals give birth to live young, rather than laying eggs like birds or reptiles. (...)

Prompt: Do mammals lay eggs?
Completion: No, mammals do not lay eggs. Mammals are a group of (...)

Prompt: What are platypus?
Completion: The platypus! They’re one of the most fascinating and unique creatures in the animal kingdom. The platypus (Ornithorhynchus anatinus) is a semi-aquatic mammal that is native to eastern Australia (...)

Prompt: Do platypuses have hair or fur?
Completion: Platypuses do not have hair, but they do have a covering of soft, woolly fur. This fur is made up (...)

Prompt: Do platypuses lay eggs?
Completion: Yes, platypuses do lay eggs! In fact, they (...)

Prompt: Are platypuses mammals?
Completion: Yes, platypuses are indeed mammals! They belong to the order Monotremata, which (...)

Prompt: Are platypus mammals? Please explain step-by-step.
Completion: I’d be happy to help clarify whether platypus are mammals or not. Here’s a step-by-step explanation:
1. **Definition of a mammal**: A mammal is a warm-blooded animal that has hair or fur, produces milk to feed its young, and has mammary glands. Mammals also have a specific type of skeleton, brain structure, and dental formula.
2. **Platypus characteristics**: Platypuses have some characteristics that are typical of mammals, such as:
* They are warm-blooded, meaning they regulate their own body temperature.
* They have hair or fur, although it’s not very visible.
* They produce milk to feed their young, just like other mammals.
3. **Unique platypus features**: However, platypuses also have some features that are not typical of mammals:
* They lay eggs instead of giving birth to live young like most other mammals.
* They have a duck-billed snout and webbed feet, which are more similar to those of reptiles and birds.
* They have a venomous spur on their hind leg, which is a characteristic of some reptiles.
4. **Classification**: Based on their characteristics, platypuses are classified as mammals, but they are also considered to be part of a unique group called monotremes. Monotremes are mammals that lay eggs instead of giving birth to live young.
5. **Conclusion**: In conclusion, platypuses are indeed mammals, but they are a special type of mammal that has some characteristics different from those of other mammals. Their ability to lay eggs and have a venomous spur are just a few examples of their unique features.
ai_researcher
1
Three-arm_randomised_controlled_trial_of_an_m-health_app_and_digital_engagement_strategy_for_improving_treatment_adherence_and_reducing_suicidal_ideation_in_young_people_study_protocol.pdf
Phononic frequency comb via three-mode parametric three-wave mixing

Authors: Adarsh Ganesan1, Cuong Do1, Ashwin Seshia1
1. Nanoscience Centre, University of Cambridge, Cambridge, UK

This paper is motivated by the recent demonstration of a three-wave mixing based phononic frequency comb. While the previous experiments have shown the existence of a three-wave mixing pathway in a system of two coupled phonon modes, this work demonstrates a similar pathway in a system of three coupled phonon modes. The paper also presents a number of interesting experimental facts concomitant to the three-mode three-wave mixing based frequency comb observed in a specific micromechanical device. The experimental validation of three-mode three-wave mixing, along with the previous demonstration of two-mode three-wave mixing, points to the ultimate possibility of multimode frequency combs.

Optical frequency combs have significantly transformed modern metrology and molecular spectroscopy [1-2]. Recently, we experimentally demonstrated the existence of such frequency combs in the phononic domain [3] after their theoretical prediction in [4]. While both optical and phononic frequency combs carry similar spectral features, the dynamics describing the respective generation processes are different. Optical frequency combs usually arise through a well-established Kerr nonlinear pathway [4]. However, the pathway describing our more recent phononic frequency combs is nonlinear three-wave mixing [4]. Through this mechanism, a single drive tone intrinsically couples with the eigenfrequency of a phonon mode. Such physical interaction can particularly enable high-precision micro- and nano-mechanical resonant sensors adapted for the direct monitoring of slowly varying intrinsic or extrinsic physical processes.

In the phononic frequency comb demonstrated in [3], the three-wave mixing process is particularly operative in a system of two parametrically coupled phonon modes. However, the preceding theoretical work [4] on frequency combs has actually shown the possibility of frequency combs in systems comprising three and four parametrically coupled phonon modes. Hence, inspired by these mathematical predictions [4], we now experimentally describe the three-wave mixing process even in three-mode parametric resonance [5]. Additionally, this paper presents surprising experimental facts specific to the three-mode three-wave mixing based phononic frequency comb.

For understanding the surprising nature of the three-mode three-wave mixing based phononic frequency comb, we first present a brief discussion of the two-mode three-wave mixing based phononic frequency comb. Hence, we first consider the coupled two-mode dynamics:

\ddot{Q}_i = -\omega_i^2 Q_i - 2\zeta_i \omega_i \dot{Q}_i + \sum_{\tau_1=1}^{2} \sum_{\tau_2=1}^{2} \alpha_{\tau_1 \tau_2} Q_{\tau_1} Q_{\tau_2} + \sum_{\tau_1=1}^{2} \sum_{\tau_2=1}^{2} \sum_{\tau_3=1}^{2} \beta_{\tau_1 \tau_2 \tau_3} Q_{\tau_1} Q_{\tau_2} Q_{\tau_3} + P \cos(\omega_d t); \quad i = 1, 2    (1)

where 𝑃 is the drive level, 𝛼 and 𝛽 are the quadratic and cubic coupling coefficients, and 𝜔𝑖 and 𝜁𝑖 are the natural frequencies and damping coefficients of modes 𝑖 = 1, 2 respectively. Here, when the drive frequency 𝜔𝑑 is close to 𝜔1, the direct excitation of mode 1 takes place. A larger drive level 𝑃 causes a higher displacement 𝑄1, which in turn results in the parametric excitation of mode 2 through the nonlinear term 𝑄1𝑄2. The frequency of this parametrically excited tone is expected to be 𝜔𝑑/2 in the case of two-mode parametric resonance [6]. However, owing to an intrinsic three-wave mixing pathway, the excitation of an 𝜔1/2 tone instead of 𝜔𝑑/2 is observed in our recent demonstration of the phononic frequency comb. Also, this specific three-wave mixing pathway is only operative when 𝜔𝑑 is set outside the dispersion band of the driven phonon mode. Subsequent to this excitation, through higher-order interactions, a frequency comb of spacing |𝜔𝑑 − 𝜔1| is formed about 𝜔1 and 𝜔1/2.
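To see the parametric mechanism of Eq. (1) at work, the sketch below integrates a two-mode system numerically. All parameters are invented and dimensionless (they are not fitted to the device of [3]); one cubic β-term is kept so that the parametric growth saturates. Note that the frequency resolution of this short record cannot separate ωd/2 from ω1/2, which is exactly the fine distinction the three-wave-mixing pathway concerns, so the snippet only illustrates the sub-harmonic excitation itself.

import numpy as np
from scipy.integrate import solve_ivp

w1, w2 = 1.0, 0.50            # mode frequencies (w2 near w1/2 for 2:1 coupling)
z1, z2 = 1e-3, 1e-3           # damping ratios
a, b = 0.1, 1.0               # quadratic coupling and one saturating cubic term
P, wd = 1e-3, 1.002           # drive level; drive slightly detuned above w1

def rhs(t, y):
    q1, v1, q2, v2 = y
    dv1 = -w1**2 * q1 - 2*z1*w1*v1 + a*q2**2 + P*np.cos(wd*t)
    dv2 = -w2**2 * q2 - 2*z2*w2*v2 + a*q1*q2 - b*q2**3  # Q1*Q2 pumps mode 2
    return (v1, dv1, v2, dv2)

t = np.arange(0.0, 6000.0, 0.05)
sol = solve_ivp(rhs, (t[0], t[-1]), [0, 0, 1e-6, 0],
                t_eval=t, method="DOP853", rtol=1e-8, atol=1e-10)
q2 = sol.y[2][t > 3000]                      # discard the transient
spec = np.abs(np.fft.rfft(q2 * np.hanning(q2.size)))
omega = 2*np.pi * np.fft.rfftfreq(q2.size, d=0.05)
print(f"dominant mode-2 line at omega = {omega[np.argmax(spec)]:.3f} "
      f"(sub-harmonic, near wd/2 = {wd/2:.3f})")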
While the concept of three-wave mixing with two-mode parametric resonance is clear, we now turn to three-wave mixing with three-mode parametric resonance. For understanding this, we consider a system of three coupled modes [5]:

\ddot{Q}_i = -\omega_i^2 Q_i - 2\zeta_i \omega_i \dot{Q}_i + \sum_{\tau_1=1}^{3} \sum_{\tau_2=1}^{3} \alpha_{\tau_1 \tau_2} Q_{\tau_1} Q_{\tau_2} + \sum_{\tau_1=1}^{3} \sum_{\tau_2=1}^{3} \sum_{\tau_3=1}^{3} \beta_{\tau_1 \tau_2 \tau_3} Q_{\tau_1} Q_{\tau_2} Q_{\tau_3} + P \cos(\omega_d t); \quad i = 1, 2, 3    (2)

Based on these dynamics, for 𝜔𝑑 ≅ 𝜔1 ≅ (𝜔2 + 𝜔3), the parametric excitation of tones 𝜔𝑥 ≅ 𝜔2 and 𝜔𝑦 ≅ 𝜔3, which satisfy the condition 𝜔𝑥 + 𝜔𝑦 = 𝜔𝑑, is expected. Using the perspective derived from two-mode three-wave mixing, we now outline the three-wave mixing pathway in this system of three coupled modes. In the case of two-mode three-wave mixing, one of the frequencies corresponding to the combs of spacing |𝜔𝑑 − 𝜔1| had to satisfy the condition 2𝜔 = 𝜔1 instead of 2𝜔 = 𝜔𝑑. Hence, a comb 𝜔1/2 ± 𝑛(𝜔𝑑 − 𝜔1) is formed instead of a comb 𝜔𝑑/2 ± 𝑛(𝜔𝑑 − 𝜔1). On similar lines, in the case of three-mode three-wave mixing, one might expect to have two frequencies 𝜔𝑝 and 𝜔𝑞 in the respective frequency combs of modes 2 and 3 that satisfy a frequency matching condition 𝜔𝑝 + 𝜔𝑞 = 𝜔1 instead of 𝜔𝑝 + 𝜔𝑞 = 𝜔𝑑. In the frequency combs 𝜔𝑥 ± 𝑛(𝜔𝑑 − 𝜔1) and 𝜔𝑦 ± 𝑛(𝜔𝑑 − 𝜔1), such a condition 𝜔𝑝 + 𝜔𝑞 = 𝜔1 is satisfied when 𝜔𝑝 = 𝜔𝑥; 𝜔𝑞 = 𝜔𝑦 − (𝜔𝑑 − 𝜔1). Additionally, unlike two-mode three-wave mixing, the frequency condition 𝜔𝑝 + 𝜔𝑞 = 𝜔𝑑 is also satisfied in these frequency combs, when 𝜔𝑝 = 𝜔𝑥; 𝜔𝑞 = 𝜔𝑦.

In order to establish this pathway for comb formation, we experimentally probe a micromechanical device and organise the experimental observations to support the above discussion. (Note: in the experimental sections, unlike in the theory section, the temporal frequency 𝑓 is used instead of the angular frequency 𝜔 = 2𝜋𝑓.)

For studying the three-mode three-wave mixing based frequency comb, the same experimental system that was used to study two-mode three-wave mixing [3] is once again considered, i.e. an AlN-on-Si free-free micro-beam of dimensions 1100 × 350 × 11 𝜇𝑚³ (Figure 1A). A sinusoidal electrical signal is applied through one of the split electrodes patterned on the microstructure, and the output signal, which is extracted via another split electrode, is analysed using an Agilent Infiniium 54830B DSO. The experiments were carried out under ambient pressure and temperature conditions.

Figure 1B shows the frequency spectrum of the output electrical signal when 𝑆𝑖𝑛(𝑓𝑑 = 3.857 𝑀𝐻𝑧) = 15 𝑑𝐵𝑚 is applied. In this spectrum, we can see five thick spectral features b1-b5. The frequency corresponding to b1 is close to the drive frequency 𝑓𝑑 ≅ 𝑓1. While the features b2 and b3 are associated with frequencies 𝑓𝑚 ≅ 𝑓2 and 𝑓𝑛 ≅ 𝑓3 respectively, their sum is seen to be approximately equal to the drive frequency. The final two features b4 and b5 have frequencies 2𝑓𝑚 and 2𝑓𝑛 respectively, and these correspond to the second harmonics of features b2 and b3. To clearly visualize each of these spectral features, zoomed-in images are presented in figures 1b1-1b5. These figures clearly show that b1-b5 correspond to frequency combs of spacing 5.035 𝑘𝐻𝑧 about the frequencies 3.857 𝑀𝐻𝑧 = 𝑓𝑑, 1.791 𝑀𝐻𝑧 ≅ 𝑓2, 2.066 𝑀𝐻𝑧 ≅ 𝑓3, 3.582 𝑀𝐻𝑧 ≅ 2𝑓2 and 4.132 𝑀𝐻𝑧 ≅ 2𝑓3 respectively.
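A quick arithmetic check of these reported values (a sketch; it assumes 𝑓𝑑 > 𝑓1, i.e. 𝑓1 = 𝑓𝑑 − spacing, consistent with the right-side comb discussed below): the parametric pair sums to the drive tone, and the matching condition 𝑓𝑝 + 𝑓𝑞 = 𝑓1 is met inside the combs by 𝑓𝑝 = 𝑓𝑚 and 𝑓𝑞 = 𝑓𝑛 − Δ𝑓.

# All frequencies in MHz; spacing of 5.035 kHz as reported in the text.
fd, fm, fn, df = 3.857, 1.791, 2.066, 0.005035

print(f"fm + fn = {fm + fn:.3f} MHz  (drive fd = {fd:.3f} MHz)")
f1 = fd - df                                     # assumed: drive above mode 1
print(f"fm + (fn - df) = {fm + fn - df:.6f} MHz, f1 = {f1:.6f} MHz")
for centre, label in [(fd, "b1"), (fm, "b2"), (fn, "b3"),
                      (2*fm, "b4"), (2*fn, "b5")]:
    lines = ", ".join(f"{centre + k*df:.4f}" for k in (-1, 0, 1))
    print(f"{label}: comb lines near {lines} MHz")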
The tone corresponding to the drive frequency 𝑓𝑑 can be located in figure 1b1, and the additional spectral lines in the output spectrum arise through the nonlinear three-wave mixing process.

To systematically understand the evolution of the three-mode three-wave mixing based frequency comb, experiments were carried out for a range of drive levels, 4 − 23.8 𝑑𝐵𝑚, and the frequency combs about 𝑓𝑑 ≅ 𝑓1, 𝑓𝑚 ≅ 𝑓2 and 𝑓𝑛 ≅ 𝑓3 were examined. Figure 2 shows the drive level dependence of the frequency combs. The frequency combs about 𝑓𝑑, 𝑓𝑚 and 𝑓𝑛 presented in figures 2A-2C correspond to different phonon modes, and their mode shapes are presented in the figure 2 insets. The vertical line in figure 2A corresponds to 𝑓𝑑, and the additional lines are formed about 𝑓𝑑 with equidistant spacing. Interestingly, this spacing is found to increase with the drive level 𝑆𝑖𝑛. Additionally, at higher drive levels 𝑆𝑖𝑛 ≥ 18 𝑑𝐵𝑚, inter-leaved spectral lines are also formed. This is broadly similar to the case presented in our previous demonstration of three-wave mixing [3]. Corresponding to the frequency generation about 𝑓𝑑, combs are also formed about 𝑓𝑚 ≅ 𝑓2 and 𝑓𝑛 ≅ 𝑓3 (Figures 2B and 2C). However, there are no vertical lines in figures 2B and 2C. This shows that the frequencies 𝑓𝑚 and 𝑓𝑛 corresponding to the two parametrically excited internal modes are also drive level dependent, and such dependences are related to the nonlinear comb generation process. This leads to a drive-level dependence of the comb spacing in the presented pathway.

To understand more about the drive level dependence of the frequency comb, the evolution of the frequencies 𝑓𝑚 and 𝑓𝑛 is investigated. It can be seen from figures 3A and 3B that 𝑓𝑚 and 𝑓𝑛 are not simply drive level dependent, but their dependences are also characterized by peculiar nonlinear functions. Despite this, 𝑓𝑚 + 𝑓𝑛 is always equal to 𝑓𝑑 for any drive level 𝑆𝑖𝑛. The experimental facts presented in figures 3A-3C also confirm that the specific nature of the frequency combs observed in our device is (𝑓𝑚 = 𝑓𝑥) ± 𝑛(𝑓𝑑 − 𝑓1); (𝑓𝑛 = 𝑓𝑦) ± 𝑛(𝑓𝑑 − 𝑓1); 𝑓𝑑 ± 𝑛(𝑓𝑑 − 𝑓1). This frequency comb possesses an equidistant spacing of |𝑓𝑑 − 𝑓1|. Figure 3D shows that the resonant frequency 𝑓1 is drive level dependent, although linearly so. This is similar to the case observed in [3]. This drive level dependence of 𝑓1 thus leads to the drive level dependent comb spacing noted in figure 2A. Figure 4A further validates that the spacings of the frequency combs formed about 𝑓𝑑, 𝑓𝑥 and 𝑓𝑦 are equal.

To assess the relevance of the dispersion band of the driven phonon mode to the frequency comb, the experiments were conducted at different drive frequencies. Despite the significantly lower three-mode parametric resonance threshold within the dispersion band, for instance at 𝑓𝑑 = 3.856 𝑀𝐻𝑧, the frequency comb is only existent outside the dispersion band, specifically on the right side (Figure 4B). The reason for the existence of an asymmetric frequency comb on only one side of the dispersion band, unlike the case presented in [3], can possibly be explained by the asymmetry of the drive frequency dependent parametric excitation threshold (Figure 4B). In the frequency combs observed in our experiments, similar to those presented in [3], the spacing stays the same for different drive frequencies at a specific drive level, in addition to the increase in spacing with drive level for a specific drive frequency. This can also be evidenced from the colour-maps in figure 4B.
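Because every comb inherits the spacing |𝑓𝑑 − 𝑓1|, the linear drive-level dependence of 𝑓1 translates directly into the growing spacing seen in figure 2A. The toy model below makes that explicit; the intercept and slope are invented, since the text only establishes that the trend is linear.

fd = 3.857                       # MHz
f1_0, slope = 3.8555, -1.5e-4    # hypothetical intercept (MHz) and slope (MHz/dBm)
for s_in in (4, 10, 16, 22):     # drive levels in dBm
    f1 = f1_0 + slope * s_in     # linear pull of the mode-1 frequency
    print(f"S_in = {s_in:4.1f} dBm -> spacing |fd - f1| = {abs(fd - f1)*1e3:.2f} kHz")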
Now, we turn to the interesting trend presented in figures 3A-3C. The frequencies 𝑓𝑥 and 𝑓𝑦 shift symmetrically while maintaining the condition 𝑓𝑥 + 𝑓𝑦 = 𝑓𝑑. The magnitude of this shift possesses a peculiar nonlinear relationship with drive level. To establish the relevance of the three-mode three-wave mixing process to this trend, the nature of the drive level dependence of 𝑓𝑥 and 𝑓𝑦 for different drive frequencies is examined. It is possible to clearly note from figures 5A and 5B that the simultaneous downshifts and upshifts in 𝑓𝑥 and 𝑓𝑦 respectively are only observed when the three-mode three-wave mixing is present. Despite these shifts, 𝑓𝑥 + 𝑓𝑦 for all drive levels and drive frequencies is equal to the respective 𝑓𝑑. Further, for the drive frequencies that constitute three-mode three-wave mixing, the respective nonlinear relationships with drive level are both different and non-monotonic. Such relationships may arise through the nonlinear feedback involved in the frequency comb generation process and may also offer room for rigorous theoretical investigation, although this falls outside the scope of the current manuscript. In addition to the peaks and dips in figures 5A and 5B, we can also observe another interesting characteristic associated with the three-mode three-wave mixing. Even in the absence of three-mode three-wave mixing, there exists a nominal relationship between 𝑓𝑥 & 𝑓𝑦 and 𝑓𝑑, which is clear from the planes presented in figures 5A-5C, and any influence of three-wave mixing is only visible above these nominal planes. These planes are parallel to the drive level axis, as the frequencies 𝑓𝑥 and 𝑓𝑦 for a specific 𝑓𝑑 do not vary with the drive level; such drive level dependences are only observed under the influence of three-wave mixing. Figures 5A and 5B show that the slopes of the respective planes are not the same. This suggests that 𝑓𝑦 − 𝑓𝑥 is not constant with 𝑓𝑑, and this specific relationship, along with the comb-induced symmetric frequency shifts, should also be theoretically understood.

This paper thus presents the first experimental demonstration of a phononic frequency comb via three-mode three-wave mixing using a micromechanical resonator. The specific experimental facts associated with this form of frequency comb are also provided. The validated existence of phononic frequency combs via both two-mode three-wave mixing and three-mode three-wave mixing can thus create a general perspective for multi-mode three-wave mixing based frequency combs. Such multi-mode frequency combs can possibly be helpful in distributing frequency combs across multiple segments of the frequency spectrum.

Acknowledgements

Funding from the Cambridge Trusts is gratefully acknowledged.

References

[1] T. Udem, R. Holzwarth, and T. W. Hansch, "Optical frequency metrology," Nature, vol. 416, pp. 233-237, 2002.
[2] J. Ye, Femtosecond optical frequency comb: principle, operation and applications: Springer Science & Business Media, 2005.
[3] A. Ganesan, C. Do, and A. Seshia, "Phononic Frequency Comb via Intrinsic Three-Wave Mixing," Physical Review Letters, vol. 118, p. 033903, 2017.
[4] L. S. Cao, D. X. Qi, R. W. Peng, M. Wang, and P. Schmelcher, "Phononic Frequency Combs through Nonlinear Resonances," Physical Review Letters, vol. 112, p. 075505, 2014.
[5] A. Ganesan, C. Do, and A. Seshia, "Observation of three-mode parametric instability in a micromechanical resonator," Applied Physics Letters, vol. 109, p. 193501, 2016.
Figure 1: Observation of a phononic frequency comb via three-mode three-wave mixing. A: An electrical signal S_in (f_d = 3.857 MHz) is provided to a free-free beam microstructure; B: The frequency spectrum of the output electrical signal S_out; b1-b5: Zoomed views of the spectral features b1-b5 in B, respectively.

Figure 2: Drive level dependence of the frequency comb. A-C: The spectral maps of the output electrical signal S_out around f_d, f_m and f_n, respectively, for drive conditions S_in (f_d1 = 3.86 MHz) = 4-23.8 dBm. The insets show the vibration mode shapes corresponding to the respective frequency combs; red and blue correspond to maximum and minimum displacements.

Figure 3: Drive level dependence of the frequency comb (contd.). A-D: The drive level S_in dependence of f_n, f_m, f_m + f_n and f̃_1, respectively.

Figure 4: Drive frequency dependence of the frequency comb. A: The spacing of the frequency combs around f_x, f_y and f_d for the drive frequency f_d = 3.857 MHz; B: The spacing of the frequency combs for different drive frequencies f_d and drive levels S_in. The colour maps indicate the spacing; the absence of colour (white) indicates the absence of a frequency comb for that drive condition. The dotted black line indicates the parametric excitation threshold; drive levels S_in above this threshold lead to parametric resonance.

Figure 5: Drive frequency dependence of the frequency comb (contd.). A-C: The values of f_x, f_y and f_x + f_y for different drive frequencies f_d and drive levels S_in, respectively. The colour maps indicate the values of these frequencies; the absence of colour (white) indicates the absence of a frequency comb for that drive condition. The sketched planes correspond to the nominal drive frequency dependence of f_x, f_y and f_x + f_y, i.e. in the absence of three-wave mixing. Note: the projections of the 3-D plots on the S_in-f_d plane are also shown for clarity.
ai_researcher
3
Toward_the_Development_of_a_Computer-Assisted_Real-Time_Assessment_of_Ideational_Dynamics_in_Collaborative_Creative_Groups.pdf
arXiv:1907.10904v1 [cs.DC] 25 Jul 2019

Collaborative Heterogeneous Computing on MPSoCs
Extended Abstract*
Siqi Wang

1 INTRODUCTION
With the emerging demand for computations on mobile devices, heterogeneous multi-processor system-on-chips (MPSoCs) are envisioned to dominate the current and future mobile computing landscape. Heterogeneous MPSoCs usually comprise various processing elements, such as general-purpose cores (CPUs) with different performance-power characteristics and application-specific accelerators, examples of which are graphics processing units (GPUs), digital signal processors (DSPs), reconfigurable accelerators (FPGAs, etc.) and the recent neural acceleration engines (NPUs, etc.). Such heterogeneity on the SoC enables delicate matching of computational kernels to the processing elements that are best suited to perform the computation, which leads to substantial improvements in performance and energy efficiency.

The heterogeneity can be broadly classified into performance and functional heterogeneity, and commercial SoCs are trending toward adopting both on the same chip. Performance heterogeneity consists of cores with the same functionality (instruction-set architecture, ISA) but with different power-performance characteristics, an example of which is the ARM big.LITTLE CPU architecture. The difference stems from distinct micro-architectural features such as in-order versus out-of-order cores. The complex cores provide better performance at the cost of higher power consumption, while the simpler cores exhibit low-power behavior with lower performance. Functional heterogeneity features cores with very different functionality (different ISAs) existing on the same die. This heterogeneity takes advantage of particular execution patterns for exceptional speed-ups, to meet performance requirements under stringent power budgets. Under carefully managed exploitation of multiple forms of heterogeneity, heterogeneous MPSoCs present great potential to sustain the performance and power requirements of next-generation mobile computing.

While architectural heterogeneity is promising, software development efforts are required to fully benefit from this architectural advancement [4]. This thesis (extended abstract) presents software development efforts toward efficient exploitation of heterogeneity through intricate mapping of computational kernels, collaborative execution on multiple processing elements, and application-specific techniques. The goal is to embrace the heterogeneity to unleash the full potential of heterogeneous MPSoCs towards high-performance, energy-efficient mobile computing.

*Accepted to ACM SIGDA Ph.D. Forum at Design Automation Conference (DAC) 2019. Siqi Wang is with the Department of Computer Science, School of Computing, National University of Singapore, SG. E-mail: [email protected]

2 EXPLOITATION OF HETEROGENEITY
Functional heterogeneity presents application developers with a diverse choice of processing elements on the same chip. They now have the opportunity and the responsibility to take advantage of the unique characteristics of different processing elements to improve execution performance. However, the matching of computational kernels to processing elements is difficult, as the performance is a complex interplay among the exposed parallelism, the compiler, and the processor architecture.
Furthermore, the application kernel needs to be implemented in different processor-specific languages to measure the performance on each processing element. If the performance of an application on the different processing elements is made available at an early stage, developers can make an informed decision in selecting the most appropriate processing element and then concentrate on processor-specific languages and optimizations. CGPredict [5] is proposed to guide developers in this early design choice without tedious redevelopment efforts. It is an analytical framework that accurately estimates the performance of a computational kernel on an embedded GPU architecture from unoptimized, single-threaded C code.

CGPredict takes a computational kernel in the form of single-threaded C code and generates its execution trace through a Trace Extraction phase. In order to emulate the behavior of a GPU, a Warp Formation phase is introduced to transform the single-threaded trace into its multi-threaded equivalent. CGPredict then extracts computation (compute instructions) and memory access information. The compute cycle count is obtained by mapping compute instructions to GPU instructions in the Computation Analysis stage, while the memory cycle count is obtained through analysis of memory access information, including access patterns and cache behavior, in the Memory Behavior Analysis stage. The results from the two analysis stages complete the execution characteristics needed from the kernel for performance prediction. Lastly, together with hardware architectural parameters obtained from micro-benchmarking, a comprehensive Analytical Prediction Model is engaged to predict the final execution performance using the computation and memory execution characteristics.

CGPredict provides accurate GPU performance estimations from only C code, with 9% error. It also provides insights regarding the characteristics of the kernel and the GPU that influence performance, such as coalescing of memory accesses and shared memory usage. These insights offer opportunities for developers to understand the intrinsic strengths and weaknesses of the architecture in the context of a particular kernel, which can facilitate further code optimizations. Furthermore, CGPredict in conjunction with an existing FPGA performance predictor from C code [6] achieves our objective of making the right choice of processing element (CPU, GPU or FPGA) for a given kernel.

3 CO-EXECUTION ON MOBILE PLATFORM
The ever-increasing processing requirements impose higher pressure on mobile devices with limited processing capability. Executing an application on a single processing element may not sustain the performance requirements, while other processing elements that could potentially be used are not actively contributing. The concurrent co-execution of a single computational kernel on multiple processing elements thus exhibits great potential for achieving additional performance. The design space of co-execution is huge with the exploitation of both performance and functional heterogeneity. In addition, the ability to vary clock frequencies enables a compromise between achievable performance and power consumption, which further extends the design space.
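To make the shape of this design space concrete, the sketch below enumerates workload splits and per-processing-element frequency levels, minimizing energy under a runtime constraint. The rate and power models are illustrative placeholders, not measured platform data or the instrumented framework of [1]:

```python
# Toy exhaustive search over the co-execution design space: a workload split
# between the CPU clusters and the GPU, plus per-PE frequency levels.
from itertools import product

BIG_F = [0.6, 1.0, 1.4, 1.8]          # big-cluster frequencies (GHz, assumed)
LITTLE_F = [0.6, 1.0, 1.4]            # LITTLE-cluster frequencies (GHz)
GPU_F = [0.3, 0.6]                    # GPU frequencies (GHz)
SPLITS = [i / 10 for i in range(11)]  # fraction of work mapped to the GPU

def runtime(split, fb, fl, fg):
    cpu_rate = 2.0 * fb + 1.0 * fl    # toy throughput model (work units/s)
    gpu_rate = 5.0 * fg
    gpu_t = split / gpu_rate if split else 0.0
    return max((1.0 - split) / cpu_rate, gpu_t)   # PEs run in parallel

def energy(split, fb, fl, fg):
    power = 0.8 * fb**3 + 0.3 * fl**3 + 1.2 * fg**3 + 0.5  # toy watts
    return power * runtime(split, fb, fl, fg)

DEADLINE = 0.5  # seconds
feasible = [c for c in product(SPLITS, BIG_F, LITTLE_F, GPU_F)
            if runtime(*c) <= DEADLINE]
best = min(feasible, key=lambda c: energy(*c))
print("split, f_big, f_LITTLE, f_gpu =", best, "energy =", energy(*best))
```

Even this toy version enumerates 11 x 4 x 3 x 2 = 264 configurations per kernel, which hints at why analytical prediction, rather than profiling every point, becomes attractive.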
We show through exhaustive design-space search [1] that by executing a computational kernel simultaneously on all available processing elements (big.LITTLE CPU cores, GPUs), together with suitable voltage-frequency settings for all these cores, as much as 39% energy savings and a 19% improvement in runtime are achieved compared to stand-alone executions. The improvement in runtime gives developers more flexibility in tuning the various voltage-frequency settings to achieve higher performance under given constraints.

On the other hand, the inherent characteristics of mobile systems demand stringent power and thermal budgets compared to server systems, especially because of the lack of active cooling measures on mobile devices. Commercial heterogeneous MPSoCs usually implement operating-system-level thermal management techniques, such as processor frequency throttling, to prevent failure of the chip at high temperatures. Engaging multiple processing elements concurrently may expedite the heating up of the system, necessitating frequency throttling and hence degradation of performance. Therefore, the benefit of co-execution can be compromised by frequency throttling due to thermal issues. We propose OPTiC [2] to anticipate such thermal impact on execution when engaging multiple processing elements for performance optimization.

OPTiC presents a static partitioning strategy to split a computational kernel across CPU and GPU cores for concurrent execution, with the voltage-frequency settings of the cores carefully determined considering thermal effects. OPTiC builds on extensive and comprehensive modeling of power and runtime, resource contention, and thermal behavior. The power and runtime of the CPU and GPU cores at all frequencies are predicted through analytical modeling from one profile run at a sample frequency. The thermal behavior is captured through a thermal throttling model that predicts the occurrence of OS frequency throttling and the resultant runtime under such thermal conditions. From the individual performances, the allocation of the workload and the co-execution performance are predicted through a co-execution model that accounts for thermal frequency throttling and resource contention. The framework then goes through all possible frequency settings and predicts the performance to locate the optimal configuration and workload allocation. While the performance of an application is strongly affected by thermal conditions, OPTiC is able to predict configurations that deliver on average a 14% runtime improvement over stand-alone execution. OPTiC further demonstrates good temperature control with real-life applications: with the configuration predicted by OPTiC, the chip runs much cooler than under the Linux frequency governors.

4 TOWARD MACHINE LEARNING
Lastly, the rise of machine learning applications poses great challenges to mobile platforms. Deploying neural network inference on mobile platforms requires the exploitation of heterogeneity to sustain the performance requirements given limited resources and stringent power budgets. Although dedicated neural accelerators (NPUs, etc.) show exceptional speed-ups for applications such as convolutional neural networks (CNNs), the technique is highly platform-dependent and not applicable to general architectures without the accelerator.
Furthermore, CNNs are more commonly used as building blocks to construct more complex systems. We envision that, in the near future, multiple independent inference sub-tasks will be expected to run concurrently. This requires all available processing elements to run inference engines in parallel. Therefore, it is important to develop general techniques that are applicable to existing heterogeneous MPSoCs on mobile platforms.

Commercial CNN libraries usually engage only one of the processing elements and are often ignorant of co-execution across multiple processing elements. The ARM Compute Library (ARM-CL) provides out-of-the-box support for parallel execution through multi-threading within a CPU cluster, but concurrent co-execution of the big and LITTLE clusters with multi-threading is harmful to performance due to cache-coherence overheads. Thus, kernel-level splitting among processing elements fails to reduce the end-to-end latency or improve the throughput. We present an alternative framework, Pipe-it [3], that employs a pipelined design to split the convolutional layers across processing elements (different CPU clusters) to improve throughput for streaming inference. Here, the two CPU core clusters are divided into multiple sub-core-clusters acting as processing elements that form the pipeline stages, to better match resources and workload. Pipe-it includes an analytical performance model that predicts the performance of a convolutional layer on different configurations (core type, count) from its network structure description. The predicted performance is then used as input to a design-space exploration algorithm that navigates the design space and locates the best-fitting pipeline configuration and the corresponding layer allocation. Pipe-it with the predicted multi-stage pipeline achieves on average a 39% throughput gain compared with execution on a single processing element.

REFERENCES
[1] A. Prakash, S. Wang, AE. Irimiea, and T. Mitra. 2015. Energy-efficient execution of data-parallel applications on heterogeneous mobile platforms. In 2015 33rd IEEE International Conference on Computer Design (ICCD). 208–215. https://doi.org/10.1109/ICCD.2015.7357105
[2] S. Wang, G. Ananthanarayanan, and T. Mitra. 2019. OPTiC: Optimizing Collaborative CPU-GPU Computing on Mobile Devices With Thermal Constraints. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems 38, 3 (March 2019), 393–406. https://doi.org/10.1109/TCAD.2018.2873210
[3] S. Wang, G. Ananthanarayanan, Y. Zeng, N. Goel, A. Pathania, and T. Mitra. 2019. High-Throughput CNN Inference on Embedded ARM big.LITTLE Multi-Core Processors. arXiv preprint arXiv:1903.05898 (2019).
[4] S. Wang, A. Prakash, and T. Mitra. 2018. Software Support for Heterogeneous Computing. In 2018 IEEE Computer Society Annual Symposium on VLSI (ISVLSI). 756–762. https://doi.org/10.1109/ISVLSI.2018.00142
[5] S. Wang, G. Zhong, and T. Mitra. 2017. CGPredict: Embedded GPU Performance Estimation from Single-Threaded Applications. ACM Trans. Embed. Comput. Syst. 16, 5s, Article 146 (Sept. 2017), 22 pages. https://doi.org/10.1145/3126546
[6] G. Zhong, A. Prakash, S. Wang, Y. Liang, T. Mitra, and S. Niar. 2017. Design Space Exploration of FPGA-based Accelerators with Multi-level Parallelism. In Design, Automation & Test in Europe Conference & Exhibition (DATE), 2017. 1141–1146. https://doi.org/10.23919/DATE.2017.7927161
ai_researcher
1
Capabilities_of_GPT-4_on_Medical_Challenge_Problems.pdf
Evaluation of GPT-4o & GPT-4o-mini's Vision Capabilities for Salt Evaporite Identification
Deven B. Dangi, Lawton Chiles High School & Florida State University, Tallahassee, FL
Beni B. Dangi, Florida A&M University, Tallahassee, FL
Oliver Steinbock, Florida State University, Tallahassee, FL

Abstract
Identifying salts from images of their 'stains' has diverse practical applications. While specialized AI models are being developed, this paper explores the potential of OpenAI's state-of-the-art vision models (GPT-4o and GPT-4o-mini) as an immediate solution. Testing with 12 different types of salts, the GPT-4o model achieved 57% accuracy and a 0.52 F1 score, significantly outperforming both random chance (8%) and GPT-4o-mini (11% accuracy). However, GPT-4o-mini also gave significantly biased responses, diminishing the representativeness of its accuracy. The results suggest that current vision models could serve as an interim solution for identifying salts from their stain images.

Introduction
Whether it is forensics experts identifying residues at a crime scene or astronomers examining materials from other planets, the ability to rapidly identify salts from macroscopic images has significant implications for various fields (1, 2). Potential software could enable low-cost, equipment-free salt analysis, benefiting both specialized and general users. Current efforts toward making such an ability a reality are underway and somewhat successful (3), but these methods are not immediately available, and it is unknown how long it will take for them to become viable for general or specialized use. This motivates a possible alternative method for identifying salts through these images: using state-of-the-art large language models (LLMs) with vision capabilities. These models have demonstrated remarkable capabilities in transferring knowledge across domains and an impressive ability to identify certain complex visual patterns without domain-specific fine-tuning (4), suggesting potential applicability to salt crystal morphology analysis.

OpenAI's GPT-4o and GPT-4o-mini models represent current state-of-the-art capabilities in image recognition and analysis, with GPT-4o-mini being a lighter, cost-efficient variant of GPT-4o (4). For cost and time efficiency, the batch processing method offered by OpenAI was preferred, which allows multiple requests to be processed simultaneously at half the cost and with greater granularity (5, 6). Most significantly, it allows control over parameters such as model type, temperature, and seed values, which can help produce outputs that are closer to being deterministic and reproducible, despite the inherently probabilistic nature of these models (7).

This study examines 12 salts (NaCl, KCl, NH₄Cl, Na₂SO₄, K₂SO₄, NH₄NO₃, NaH₂PO₄, NaNO₃, Na₃PO₄, KBr, KNO₃, and RbCl), selected to represent a diverse range of ionic compounds, including both naturally occurring minerals and industrially significant compounds, and evaluates the accuracy and consistency of these models in salt identification. The findings reveal significant variations in identification accuracy across different salt types, with implications for practical applications.

Method
The GPT-4o and GPT-4o-mini models have a wide breadth of knowledge, but their knowledge is not deep enough to recognize, for each type of salt, the macroscopic appearance of the deposits formed by the evaporation of droplets (4).
Thus, training images for each type of salt are included so that the model can learn their appearance; they are sourced from images of laboratory-generated salts on Dr. Steinbock's website, which were created for the related machine-learning identification methods (3, 8). These training images were the same for all trials.

Figure 1: Example images of each type of salt (B-M). Sourced from Batista et al. (3).

The OpenAI API was used specifically for its batch processing method, which allows multiple requests to be submitted simultaneously at lower pricing and with higher token limits compared to regular requests (5, 6). Using the API also ensures every request is independent (stateless), with no memory carrying over between requests (9). This also means the training images the model requires to grasp the differences between salts must be provided in every single request, significantly contributing to the token count and therefore the total cost.

For batch processing, a JSONL file is required, in which every line contains a valid JSON object representing an individual request to the API. Each trial required multiple batches due to file size and token limits per batch. To generate the batches, the 12 folders containing the images of the salts were iterated over. For each salt type, an empty set was created to keep track of already-seen images and avoid duplicate requests within the same trial. A total of 100 images were randomly chosen from that folder to ensure an unbiased representation of each salt type's potential morphological variations, then added to the current batch as individual requests. In total, each trial contains 1,200 requests (12 salts times 100 images), spread across as many batches as necessary.

Each request had a few key parameters: the ID, formed by combining the image's name (a randomly generated, unique 9-digit number) and the name of the image's parent folder, allowing later identification of the image's true identity; the specific model name (either gpt-4o-mini-2024-07-18 or gpt-4o-2024-08-06); a temperature of 0; and a seed of 17 (chosen arbitrarily), the latter two to push the results closer to being deterministic (7). Each request also contained the system prompt shown below:

You are a helpful assistant who is knowledgeable about different types of salt crystals and can identify them from images. You can identify these 12 different salts: NaCl, KCl, NH4Cl, Na2SO4, K2SO4, NH4NO3, NaH2PO4, NaNO3, Na3PO4, KBr, KNO3, and RbCl.

For training, each request had 12 user messages passed in as context, each formed by 5 images and a line of text. Finally, a 13th user message was included in the request, reading, "Identify this salt with just the name." In other words, the models were trained using 12 sets of images labeled with their corresponding salt types, allowing the models to learn the visual characteristics of each salt.

Once all the JSONL files (batches) were created programmatically, they were sent for processing to receive responses from the specified model. For convenience and ease of use for non-experts, the web interface was used for batch processing (https://platform.openai.com/batches), though the underlying logic is the same as using the API. Initially, due to OpenAI's usage tier system, only one batch of relatively small size could be run at a time, but at higher tiers it is possible to run multiple large batches simultaneously without hitting a rate limit.
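A single request line of such a batch file can be constructed as in the minimal sketch below. The folder layout, the helper names, the base64 data-URL encoding, and the exact wording of the per-salt label text are assumptions for illustration; the batch-line fields follow the structure described above.

```python
import base64, json

def image_part(path):
    """Encode a local image as a data-URL content part (assumed PNG files)."""
    with open(path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode()
    return {"type": "image_url", "image_url": {"url": f"data:image/png;base64,{b64}"}}

SYSTEM = ("You are a helpful assistant who is knowledgeable about different types "
          "of salt crystals and can identify them from images. You can identify "
          "these 12 different salts: NaCl, KCl, NH4Cl, Na2SO4, K2SO4, NH4NO3, "
          "NaH2PO4, NaNO3, Na3PO4, KBr, KNO3, and RbCl.")

def build_request(image_path, salt_folder, image_id, training):
    """One JSONL line: 12 labeled training messages plus the test image."""
    messages = [{"role": "system", "content": SYSTEM}]
    for salt, paths in training.items():          # 12 salts x 5 example images
        content = [image_part(p) for p in paths]
        content.append({"type": "text", "text": f"These are images of {salt}."})
        messages.append({"role": "user", "content": content})
    messages.append({"role": "user", "content": [
        image_part(image_path),
        {"type": "text", "text": "Identify this salt with just the name."},
    ]})
    return json.dumps({
        "custom_id": f"{image_id}_{salt_folder}",   # encodes the true label
        "method": "POST",
        "url": "/v1/chat/completions",
        "body": {"model": "gpt-4o-2024-08-06", "temperature": 0,
                 "seed": 17, "messages": messages},
    })
```

Writing one such line per sampled image, 1,200 per trial, yields the JSONL batches submitted above.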
Each batch, once fully processed, outputs a JSONL file that contains the responses to each of the requests within the batch. To analyze the data, the custom ID associated with each request is used to recover the true identity of the imaged salt, and the model's response is also recorded. This is done by searching the response for the first mention of one of the 12 salts, which should be the only salt named in the concise outputs produced by the prompt.

Results
Agreements
First, the results of all models were tested against each other for agreement using Cohen's kappa. Cohen's kappa is a statistical measure used to evaluate the level of agreement between two raters while accounting for the likelihood of agreement occurring by chance (10). Cohen's kappa values range from -1 to 1, with 1 representing perfect agreement and values close to or lower than 0 representing very poor agreement (11). Figure 2 displays the Cohen's kappa values for comparisons between each of the four batch results.

Figure 2: Cohen's kappa between every pair of trials.

The 4o-mini batch 1 and batch 2 trials, compared with each other, had a kappa value of 0.91, signifying high consistency between trials of the same model. The same holds for the full-sized 4o model, whose batch 1 and batch 2 trials achieved an even higher kappa value of 0.96. That each kappa is below 1, i.e., that trials of the same model do not display total agreement, is to be expected: since the batches used different random sets of 100 images from the pool of 500 total images per salt, each batch had slightly different test images, and the model is expected to give different responses. Still, owing to the overall similarity between images of the same salt, Cohen's kappa can still be applied. When comparing the full-sized 4o and the 4o-mini models, however, the kappa values are near 0, indicating virtually no agreement across the models. Clearly, the two models are consistent across their own trials but have minimal agreement with each other's predictions once agreement by chance is accounted for.

Accuracies
Given the consistency within models and the disagreement between them, it is crucial to know which model is more accurate. In terms of raw accuracy, the 4o full-sized model trials show significantly better results, with 57.25% and 57.17% accuracy, respectively, while the 4o-mini model responses had a relatively low 11% and 10.09% accuracy, respectively. Both the mini and full models exceeded the accuracy expected from guessing among 12 salts (8%), though the mini model only exceeds that marker by a marginal amount.

Figure 3: Batch 1 4o-mini model responses.

Looking closer at the responses from the first batch of the mini model, as can be seen in Figure 3, there is a clear bias towards Na₃PO₄ in the model's responses. In fact, the accuracies for every other salt are substantially lower, suggesting that the overall accuracy of the mini model is deceptively high due to its tendency to predict Na₃PO₄. This pattern is reconfirmed in Figure 4, as the second batch of the mini model's responses demonstrates a similar pattern, with approximately 55% of all its predictions being Na₃PO₄. This bias, which is not present in the full-model results, suggests potential limitations in the mini model's visual learning capabilities compared to the 4o model.
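The agreement analysis above can be reproduced with standard tooling; a minimal sketch follows (scikit-learn is an assumed choice, not named in this paper, and the labels are toy values):

```python
from sklearn.metrics import cohen_kappa_score

# Predictions from two trials, aligned by test image.
trial_a = ["NaCl", "KCl", "Na3PO4", "KBr", "RbCl", "NaCl"]
trial_b = ["NaCl", "NaCl", "Na3PO4", "KBr", "Na2SO4", "NaCl"]

# Kappa corrects raw agreement for the agreement expected by chance, so a
# model that nearly always predicts one salt cannot score well against
# another rater simply by luck.
print(cohen_kappa_score(trial_a, trial_b))
```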
Figure 4: Batch 2 4o-mini model responses.

Comparatively, the first batch of the 4o full-sized model lacks this bias, showing much better results overall, with multiple salts at 90%+ accuracy and even 100% on NaH₂PO₄. However, the results are still uneven, as can be seen in Figure 5; in particular, KBr and KCl were very often misidentified as NaCl (98 and 99 times, respectively), leaving their individual accuracies at 2% and 1%. Clearly, there is massive confusion between these three salts for the model, alongside a bias towards NaCl. This may be due to the prevalence of NaCl, one of the most common salts, in the model's original training data. Another major point of confusion was Na₂SO₄: the model often misidentified multiple other salts (RbCl, K₂SO₄, and NH₄Cl especially) as Na₂SO₄.

Figure 5: Batch 1 4o full model responses.

The second batch remains highly consistent with the first, as seen in Figure 6. This is further evidenced by the high Cohen's kappa value and is in part due to the temperature of 0 applied to each request (7), along with the model's superior vision capabilities compared to its mini version. While there are slight numerical differences, these are to be expected given the differences in the randomly chosen test images, and the overall patterns remain the same. The issues from the first batch are still present in the second, alongside similar strengths, maintaining 100% accuracy on NaH₂PO₄ and frequent confusions involving NaCl.

Figure 6: Batch 2 4o full model responses.

With the calculated levels of disagreement between the full and mini models as well as the differences in accuracy, the full model proves clearly superior. In fact, the mini model proves less worthwhile even when prioritizing financial or computational efficiency over accuracy: the models process images differently, with the mini model requiring more computational resources per image (higher token counts), negating the price advantage of the mini model for vision applications (12).

F1 Scores
F1 scores, which provide a balanced measure of precision and recall, were also calculated for each salt type across all models. The macro F1 scores for each model differed slightly from the accuracies: the mini model had F1 scores of 0.0522 and 0.0443 across trials (compared to accuracies of 11% and 10.09%), while the full model had F1 scores of 0.5253 and 0.5263 (compared to accuracies of 57.25% and 57.17%). All 12 classes of salts are equally represented in the provided training data, so the difference between the F1 scores and the accuracies implies an imbalance in responses. This could be due to a similar imbalance in the original training data of the 4o and 4o-mini models, stemming from differences in the prevalence of certain salts, or simply to difficulties in identifying certain salts in particular. This difference between accuracy and F1 scores is especially visible for the mini models, aligning with the earlier observations on the individual trial data. Figure 7 shows the F1 scores for each of the 12 salt types for all the trials. The full model is clearly more capable overall, but it shares a few weaknesses and strengths with the mini model. Firstly, the two most obvious weaknesses in the responses are KBr and KCl, both of which have extremely low F1 scores for both models. Clearly, there is heavy confusion concerning those salts.
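The macro F1 averages the per-class F1 over all 12 salts, which is why a bias towards one salt collapses it even when raw accuracy looks passable. A minimal sketch (again assuming scikit-learn, with a toy 3-class example rather than the study's data):

```python
from sklearn.metrics import accuracy_score, f1_score

truth = ["NaCl", "KCl", "Na3PO4", "KCl", "NaCl", "Na3PO4"]
preds = ["Na3PO4"] * 6   # a degenerate predictor that always answers Na3PO4

# Accuracy is 1/3, but macro F1 is far lower because two of the three
# classes have zero recall, mirroring the mini model's Na3PO4 bias.
print(accuracy_score(truth, preds))             # 0.333...
print(f1_score(truth, preds, average="macro"))  # ~0.17
```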
Beyond that, Figure 7 also elucidates other weaknesses of the models, such as K₂SO₄, NaCl, and RbCl; this could be attributed to class imbalance caused by the models' prior knowledge (especially for common salts like NaCl) or to possible deficiencies in the training data. Many of the salts also appear similar on a macroscopic scale, making them difficult to discern and identify due to the variability of their possible forms.

Figure 7: F1 scores for every salt across all trials.

Conclusions
In lieu of more reliable methods, large language models with vision capabilities are a possible method of identifying the original salt from an image of its evaporite. In testing, between the 4o-mini and full 4o models, the mini model performed drastically worse, proving ineffective even when cost and computing efficiency are prioritized. The full models show considerably higher-than-random accuracy and F1 scores, indicating acceptable performance for initial testing; further improvement may be possible through the fine-tuning feature (13), leaving an opportunity for better results. In general, vision models excel at identifying common objects in images but struggle when it comes to learning from images to identify new things, particularly for images that require granular attention to detail and can be nearly homogeneous between classes (4). However, such confusion is not unexpected and can be improved upon by allowing for greater attention to detail in models. Overall, the full GPT-4o model shows promising results for salt-evaporite analysis, allowing quick imaging and identification of salts when provided with the appropriate training data.

References
(1) Da Silveira Tortolero Araujo Lourenço, M.; Di Maggio, R. M.; Germinario, C.; Grifa, C.; Izzo, F.; Langella, A.; Mercurio, M. Discovering halite traces on a victim's clothing through a forensic geoscience analytical approach: A suspicious case in Italy. Forensic Sci. 2024, 4 (3), 396−408. DOI: 10.3390/forensicsci4030024.
(2) Osterloo, M. M.; Hamilton, V. E.; Bandfield, J. L.; Glotch, T. D.; Baldridge, A. M.; Christensen, P. R.; Tornabene, L. L.; Anderson, F. S. Chloride-bearing materials in the southern highlands of Mars. Science 2008, 319 (5870), 1651−1654. DOI: 10.1126/science.1150690.
(3) Batista, B. C.; Tekle, S. D.; Yan, J.; Dangi, B. B.; Steinbock, O. Chemical composition from photos: Dried solution drops reveal a morphogenetic tree. Proc. Natl. Acad. Sci. U.S.A. 2024, 121 (27). DOI: 10.1073/pnas.2405963121.
(4) Shahriar, S.; Lund, B. D.; Mannuru, N. R.; Arshad, M. A.; Hayawi, K.; Bevara, R. V. K.; Mannuru, A.; Batool, L. Putting GPT-4O to the sword: A comprehensive evaluation of language, vision, speech, and multimodal proficiency. Appl. Sci. 2024, 14 (17), 7782. DOI: 10.3390/app14177782.
(5) Cheng, Z.; Kasai, J.; Yu, T. Batch prompting: Efficient inference with large language model APIs. arXiv 2023. DOI: 10.18653/v1/2023.emnlp-industry.74.
(6) OpenAI. Batch API. https://platform.openai.com/docs/guides/batch (accessed Nov 24, 2024).
(7) Ouyang, S.; Zhang, J. M.; Harman, M.; Wang, M. An empirical study of the non-determinism of ChatGPT in code generation. ACM Trans. Softw. Eng. Methodol. 2024. DOI: 10.1145/3697010.
(8) Steinbock Group. Saltscapes: Images of Salt Crystals Formed by Evaporation. https://www.chem.fsu.edu/~steinbock/saltscapes.php (accessed Nov 30, 2024).
(9) OpenAI. Assistants API Overview (Python SDK).
https://cookbook.openai.com/examples/assistants_api_overview_python (accessed Nov 24, 2024). (10) Warrens, M. J. Five ways to look at Cohen’s kappa. J. Psychol. Psychother. 2015, 5 (4). DOI: 10.4172/2161-0487.1000197. (11) Sim, J.; Wright, C. C. The kappa statistic in reliability studies: Use, interpretation, and sample size requirements. Phys. Ther. 2005, 85 (3), 257−268. DOI: 10.1093/ptj/85.3.257. (12) OpenAI. Pricing. https://openai.com/api/pricing/ (accessed Nov 28, 2024). (13) Wu, E.; Wu, K.; Zou, J. FineTuneBench: How well do commercial fine-tuning APIs infuse knowledge into LLMs? arXiv 2024, arXiv:2411.05059.
ai_researcher
2
A_Statistical_Analysis_of_LLMs'_Self-Evaluation_Using_Proverbs.pdf
Pride and Prejudice: LLM Amplifies Self-Bias in Self-Refinement
Wenda Xu†, Guanglei Zhu‡, Xuandong Zhao†, Liangming Pan†, Lei Li‡, William Yang Wang†
†University of California, Santa Barbara, ‡Carnegie Mellon University
{wendaxu,xuandongzhao,liangmingpan,william}@cs.ucsb.edu, {guanglez,leili}@cs.cmu.edu

arXiv:2402.11436v2 [cs.CL] 18 Jun 2024

Abstract
Recent studies show that large language models (LLMs) improve their performance through self-feedback on certain tasks while degrading on others. We discovered that this discrepancy is due to the LLM's bias in evaluating its own output. In this paper, we formally define the LLM's self-bias – the tendency to favor its own generation – using two statistics. We analyze six LLMs (GPT-4, GPT-3.5, Gemini, LLaMA2, Mixtral and DeepSeek) on translation, constrained text generation, and mathematical reasoning tasks. We find that self-bias is prevalent in all examined LLMs across multiple languages and tasks. Our analysis reveals that while the self-refine pipeline improves the fluency and understandability of model outputs, it further amplifies self-bias. To mitigate such biases, we discover that larger model size and external feedback with accurate assessment can significantly reduce bias in the self-refine pipeline, leading to actual performance improvement in downstream tasks. The code and data are released at https://github.com/xu1998hz/llm_self_bias.

1 Introduction
Large language models (LLMs) have shown strong capabilities in many NLP tasks. While these models still make mistakes, recent studies show that "self-refine" (also known as "self-reflection") is promising for rectifying errors based on the LLM's self-feedback (Madaan et al., 2024; Chen et al., 2024; Shinn et al., 2024; Manakul et al., 2023; Pan et al., 2023). Meanwhile, opposite studies also show that LLMs fail to correct their mistakes and that their performance even gets worse after self-feedback (Huang et al., 2023b). These contradictory results suggest that LLM self-feedback is unreliable. The self-refine procedure relies on the LLM's ability to evaluate the generated text. We hypothesize that if there is a bias during the self-evaluation process, such bias will be amplified during iterative self-refinement. This is consistent with a prior finding that LM-based metrics (e.g. BARTScore) exhibit "narcissism" during self-evaluation, i.e., the metric model favors text generated by the same underlying language model, in the context of summarization tasks (Liu et al., 2023b). However, it remains unclear whether bias exists universally in LLMs across a wide range of tasks. How can such biases be quantified? How does this "narcissism" impact the LLM's self-refinement?

Figure 1: How LLM self-feedback inflates scores compared to human assessment. Bias is the mean difference between LLM and human scores, while skewness (Dskew) measures the asymmetry of their distribution around zero. Non-biased estimation will have Dskew = 0.

In this work, we define "self-bias" as the degree to which an LLM favors its own generation. We propose two principled statistics to estimate self-bias in the LLM's self-refinement procedure. The first measures the degree of inflation in the LLM's self-evaluation compared to the true (human) evaluation. The second measures whether the LLM's self-evaluation is skewed compared to the true estimate. Figure 1 illustrates these two statistics.
We examine self-bias scores on six diverse LLMs, covering four languages across three distinct tasks: machine translation, constrained text generation, and mathematical reasoning. We find that self-bias is universal in self-refine and self-rewarding pipelines, regardless of language and task. This bias causes LLMs to optimize for false positive corrections rather than improving actual output quality. We further investigate the real benefit of self-refine. We find that while the self-refine pipeline improves the fluency and understandability of model outputs, it does not necessarily lead to the intended improvements specified in the prompt. Moreover, LLMs may favor texts that mirror their style, potentially leading to false positive optimization and reduced diversity in text generation. To mitigate self-bias, we propose two solutions: increasing the model size and incorporating external feedback with accurate assessment, thereby directing the LLM towards more accurate self-correction. Our contributions are:
1. We formally define the self-bias of an LLM using two principled estimated statistics.
2. We quantify self-biases for six diverse LLMs and find that self-bias amplifies during self-refine across many languages and tasks.
3. We observe two factors that contribute to self-bias and pinpoint two directions to mitigate it and elicit LLMs' self-correction ability.

2 Related Work
Large Language Model Self-correction. Recent works demonstrate that an LLM can utilize its own feedback signal to refine itself (Madaan et al., 2024; Chen et al., 2024; Shinn et al., 2024). Wang et al. (2023) further proposed to sample diverse reasoning paths and use a majority vote to find the most confident answer. Huang et al. (2023a) leverages self-consistency to further fine-tune the LLM on the most confident reasoning path with diverse instruction formats. On the other hand, an LLM's self-feedback can also be used as a reward signal to further align the LLM to follow instructions (Gulcehre et al., 2023; Yuan et al., 2024).

Despite some demonstrations of performance improvements, most findings indicate that LLMs struggle to rectify their initial mistakes, and their performance even worsens after self-correction (Huang et al., 2023b; Tyen et al., 2023; Ke et al., 2023). This issue arises because the quality of the model's self-generated feedback is bounded by its existing knowledge and abilities (Stechly et al., 2023; Hong et al., 2023). Therefore, internal feedback may not offer any extra advantage for improving the results; it might even steer the model away from the correct answer (Valmeekam et al., 2023). However, prior works offered only empirical observations of this phenomenon, lacking a quantitative analysis. Moreover, prior works focus only on specific tasks, such as reasoning or code generation. In this work, we are the first to quantitatively analyze the self-bias of different LLMs across three tasks and four languages, which provides a novel and generalizable view of the perils of self-refine.

LLMs as Evaluators. Liu et al. (2023a) leverages GPT-4 to evaluate text through chain-of-thought prompting. Fu et al. (2023) leverages GPT-3's sequence likelihood to estimate model performance. Kocmi and Federmann (2023); Xu et al. (2023) designed detailed error schemes for LLMs to output fine-grained error annotations. Despite the popularity of using LLMs as evaluators, Koo et al. (2023) pointed out that LLMs exhibit cognitive biases when evaluating text, misaligning with human preference. Zheng et al. (2023) pointed out that LLMs have verbosity and self-enhancement biases, which make them prefer long and verbose answers and answers generated by themselves. Chang et al. (2023) found that LLMs prefer memorized text over non-memorized text, creating unfair judgments over texts. Deutsch et al. (2022); Liu et al. (2023b) point out that reference-free metrics are inherently biased toward their own outputs. Although the above empirical studies provide valuable insights, they neither give a formal definition to quantify those biases nor connect them to the self-refine framework. In this work, we define and quantify self-bias and provide the first in-depth analysis of its impact on the self-refine pipeline. We analyze potential bias attributions and pinpoint two mitigation directions.

3 Quantifying Self-Bias
This section outlines the approach used to quantify the self-bias exhibited by LLMs in an iterative self-refinement pipeline. We employ statistical bias and distance skewness (Szekely and Móri, 2006) estimation to measure self-bias.

3.1 Iterative Self-Refinement in LLMs
Self-refinement is an inference-time method in which the LLM first generates a response y_i to a given prompt x, and then the same LLM generates feedback f_i based on the candidate output y_i and input x. Based on the feedback f_i, input x, and candidate output y_i, the LLM then generates a refined output r_i. The LLM iterates between the feedback and refinement steps, continuing until it reaches a predetermined number of iterations. At each refinement step, the refined output is accepted only if it demonstrates superior quality compared to the previously generated text, where quality is assessed through self-feedback from the language model itself. At each feedback or refinement step, the LLM only sees the last iteration's generation or feedback, without access to the entire history of outputs or feedback.

3.2 Bias Estimation
We estimate the self-bias of LLMs using the statistical definition of bias. This bias is characterized by the disparity between an LLM's predicted quality score and the expected quality score, as follows:

  Bias(θ̂) = (1/n) Σ_{i=1}^{n} (E[θ̂_i] − θ_i),   (1)

where E[θ̂_i] is the expected LLM quality prediction for sample i, and θ_i denotes the true quality of sample i. Ideally, θ_i should be derived from human annotations, for example multidimensional quality metrics (MQM) human annotations (Freitag et al., 2021) for machine translation, or predefined criteria such as word coverage for constrained text generation (Madaan et al., 2024). The LLM's quality prediction is expected to precisely follow the human annotation procedure or predefined criteria, ensuring consistency between θ and E[θ̂]. When Bias(θ̂) > 0, the LLM assigns a higher quality score to its own sample than the expected quality score. When Bias(θ̂) < 0, the LLM underestimates the sample quality compared to the expected quality score. The larger the value of Bias(θ̂), the more pronounced the LLM's bias toward its own samples.

3.3 Distance Skewness Estimation
In an ideal scenario, an unbiased LLM should have an equal chance of over-estimating and under-estimating text quality (Bias(θ̂) = 0), resulting in a perfectly symmetric distribution when plotting E[θ̂] − θ. However, Bias(θ̂) = 0 does not guarantee a symmetric distribution (in Figure 2, one tail could be long and thin while the other is short and fat, yet they balance out overall). Therefore, we introduce another meta-metric, distance skewness, to measure the asymmetry of the distribution of E[θ̂] − θ. Specifically,

  dSkew_n(X) = 1 − ( Σ_{i,j} ‖x_i − x_j‖ ) / ( Σ_{i,j} ‖x_i + x_j − 2γ‖ ),   (2)

where x_i and x_j are two independent, identically distributed samples drawn from E[θ̂] − θ. dSkew_n(X) measures the asymmetry of X with respect to γ. Distance skewness ranges between 0 and 1. dSkew_n(X) equals 0 if and only if X is diagonally distributed with respect to γ, and equals 1 if and only if X is distributed at a constant on one side of γ. A higher distance skewness indicates a more asymmetric distribution of E[θ̂] − θ. In our experimental setup, we use both bias and distance skewness to measure the model's bias in its quality predictions.

Figure 2: Bias(θ̂) = 0 does not guarantee a symmetric distribution of E[θ̂] − θ. One tail could be long and thin, while the other is short and fat (shown in the right figure). We use distance skewness to measure the asymmetry of the distribution. Therefore, using the two meta-metrics as complements, we can measure the self-bias of an LLM.
BLEURT generates quality scores based on the similarity between candidate and reference translations. To align BLEURT’s score distribution with that of human ratings, we employed quantile mapping (Cannon et al., 2015), yielding a score range from 0 to -25. Although au- tomatic metrics are primarily used, we also conduct modified MQM human evaluations (Freitag et al., 2021) for validation purposes. Our bias estimation ranged from -25 to 25. Details on quantile mapping are provided in the Appendix Section B. Constrained Text Generation. We conducted experiments on commonsense text generation, fol- lowing (Lin et al., 2020). We tested LLMs on 100 examples from the CommonGen Hard dataset. For each testing instance, the large language model (LLM) received approximately 30 concepts and was tasked with generating a fluent and logically sound text. To generate the initial output, we adopted a similar prompt design to that of (Lin et al., 2020). Next, we provided two ICL feedback examples to help the LLM identify missing con- cepts in its initial output. In each feedback example, the LLM was given concept words and the previ- ous generation and asked to indicate any missing concepts. This feedback allowed the LLM to revise its output and generate a text with better coverage of the input concepts. The details of the prompts are included in the Appendix Table 12, 13 and 14. To evaluate the coverage of the generated texts, we adopted the evaluation metric used in (Madaan et al., 2024). This metric uses strict string matching to determine whether each concept word from the input appears in the generated text (metric outputs 1 if all concepts are covered and 0 otherwise). From feedback of LLM’s missing concepts, we assigned a binary score (0 or 1) to each text based on its full coverage of concepts. Since our string-matching metric and LLM feedback score were on the same scale, we were able to compute bias and distance skewness directly. The range of bias estimation is between −1 to 1. Mathematical Reasoning. We conducted exper- iments on mathematical reasoning. We tested LLMs on 100 examples from the MATH testing set (Hendrycks et al., 2021). For each instance, LLM receives a problem statement and generates a step-by-step solution with a final answer. In this task, we use the self-refine pipeline by providing the feedback on the step-by-step solution. In each iteration, the previous solution will be compared against the ground truth answer, outputting 1 if they are matched and 0 otherwise. Therefore, we can directly compute bias and distance skewness. The range of bias estimation is between −1 to 1. The details of the prompts are included in the Appendix Table 11. In addition, we also conducted exper- iments by replacing the self-evaluation (LLM as evaluator) with self-consistency verification (self- consistency as an evaluator) (Huang et al., 2023a). We include those results in the Appendix D. 4.2 Self-Bias Amplification during Iterative Refinement Machine Translation. In Figure 3, we illustrate that all large language models (LLMs) exhibit a Figure 3: Average Bias and Dskew estimations for Yor- En, Jav-En, Arm-En, and Ig-En translations on FLo- res200, with the x-axis showing self-refine steps, re- veal that all LLMs exhibit self-bias, where open-source LLMs exhibit higher levels than GPT-4 and Gemini. self-bias in the self-refine pipeline. 
Notably, open- source LLMs and GPT-3.5-Turbo tend to exhibit higher levels of self-bias throughout iterations than stronger instruction-following LLMs, such as GPT- 4 and Gemini. This suggests that GPT-4 and Gem- ini possess a certain level of capability in resist- ing self-bias. However, despite some robustness demonstrated by GPT-4 and Gemini, we observe a consistent amplification of self-bias through the self-refine pipeline across four language directions, indicating that even these advanced LLMs are sus- ceptible to self-bias amplification. In Figure 4, we illustrate a comparison be- tween GPT-4 and Gemini’s quality assessments of their own outputs and performance measured by reference-based BLEURT over ten iterations. Our findings suggest that the primary reason for the amplification of bias during self-refine itera- tion is that actual performance does not improve through iterations. Instead, GPT-4 and Gemini mistakenly perceive performance improvements in their refined outputs. This discrepancy between the false positive performance measure and the true performance measure grows larger with each iter- ation. The appendix Section C details Gemini’s shift from right-skewed to left-skewed distribution, resulting in a decrease in distance skewness during early iterations and an increase in later ones. Constrained Text Generation. Figure 5 depicts the amplification of self-bias through ten self-refine iterations in constrained text generation for GPT- 3.5-Turbo, GPT-4, and Gemini. Notably, GPT-4 exhibits a higher bias estimation at earlier iterations compared to GPT-3.5-Turbo and Gemini. This can be attributed to GPT-4’s higher coverage ratio at initial generation (approximately 40%) compared Figure 4: GPT-4 and Gemini overestimate improve- ments in self-refined outputs, leading to amplified bias over iterations compared to actual performance mea- sured by BLEURT. to its counterparts (GPT-3.5-Turbo at around 2%). Consequently, GPT-4 struggles to identify a few missing concepts, while GPT-3.5-Turbo and Gem- ini have more coverage issues and can easily iden- tify missing input concepts. As GPT-3.5-Turbo reaches 20% coverage around the 5th iteration, it experiences a significant rise in bias and skewness estimation. It is worth noting that the rate of LLM’s self-estimated improvements is much higher than the true coverage improve- ments. This phenomenon results in a saturation of performance improvements after the 5th iteration for both GPT-4 and GPT-3.5-Turbo. Mathematical Reasoning. Figure 6 illustrates that all large language models (LLMs) exhibit an increase in bias and skewness estimation in the iterative self-refine pipeline. This suggests that LLMs introduce self-biases towards some math solutions during self-refine. Human Evaluation on Bias Estimation. We em- ploy one graduate student to annotate 50 examples from the 0th and 10th iteration of GPT-4, GPT- 3.5-Turbo and Gemini’s outputs at Yor-En, respec- tively. The human rater compares candidate text against reference and labels error location, error type, and severity labels at candidate text. The scoring scheme follows MQM style (Freitag et al., 2021), which matches the scoring range of LLM’s feedback. 
Our human score indicates that all three LLMs have not received measurable improvements via the self-refine pipeline (The raw human scores 0123455051015Average Bias0123450.00.20.40.6Average DskewAverage Bias and Dskew on four languages at Flores200GeminiGPT­4GPT­3.5DeepSeekMOEMistralMOELLaMA2­7B0406FDOH%/(857YV*37 <RU(Q 0406FDOH%/(857YV*HPLQL <RU(Q 0406FDOH%/(857YV*37 -DY(Q 0406FDOH%/(857YV*HPLQL -DY(Q %/(857*37*HPLQL Self-bias Example at GPT-4 Yoruba text: Ní bayii a ni àwon eku oloshu merin ti ko ni dayabetesi telele to ti ni ayabetesi,” o she afikun. Reference English text: "We now have 4-month-old mice that are non-diabetic that used to be diabetic," he added. (Red span indicates a major error and blue span indicates a minor error annotated by GPT-4.) GPT-4’s 1st generation [Human: -11, GPT4: -11, Bias: 0]: "At this point, we have four rats without diabetes that have developed diabetes," he added. GPT-4’s 1st refinement [Human: -12, GPT4: -10, Bias: 2]: "Currently, we have four healthy rats that have developed diabetes," he clarified. GPT-4’s 2nd refinement [Human: -11, GPT4: 0, Bias: 11]: "Presently, we have four non-diabetic rats that have developed diabetes," he elaborated. GPT-4 GPT-3.5-Turbo Gemini Iterations Bias Dskew Bias Dskew Bias Dskew 0th 10th 8.06 14.6 0.452 0.692 19.6 21.9 0.803 0.885 9.62 17.6 0.455 0.766 Table 1: We report human evaluation on GPT-4, GPT- 3.5-Turbo and Gemini’s quality assessment on 0th and 10th iteration of refinement generation at Yor-En. We used Bias and Dskew estimation to demonstrate bias found by human evaluation. All LLMs have signifi- cantly increased self-bias after 10 iterations. Human Evaluation on LLM’s Output Quality. We conducted human evaluation on six LLM’s self-feedback outputs at first and fifth iteration at Yoruba to English translation. For each LLM at each iteration, we annotate 100 samples. In total, we annotate 1200 samples. Specifically, human labor will check whether error annotation in the format of ’xxx’ is a minor xxx error/’xxx’ is a ma- jor xxx error/’xxx’ is a critical xxx error (When LLM outputs an error-free annotations, it can have flexible forms, such ‘None’, ‘No error’, “Perfect translation”). In Table 2, we include format accuracy for all LLMs. We observed that all LLMs have either perfect or nearly perfect format at first and fifth iteration of self-feedback. This is expected as we explicitly provide three in-context examples to con- trol the output format. We found that different LLMs make different format mistakes. For exam- ple, DeepSeekMOE produces one or two garbage outputs and GPT-3.5-Turbo produces two or three free form outputs, like “The machine translation Figure 5: We evaluate the bias and distance skewness of generated texts produced by GPT-4, GPT-3.5-Turbo, and Gemini on the CommonGen dataset, across self- refinement steps. Additionally, we report the coverage of GPT-3.5-Turbo and GPT-4 compared to true concept coverage. We show that the rate of LLM’s self-estimated improvements is much higher than the true coverage improvements, which leads to self-bias amplification. Figure 6: Bias and distance skewness in generated texts from GPT-4, GPT-3.5-Turbo, and Gemini are measured on MATH testing set throughout the self-refinement steps. Results show an increase in bias and skewness of some math solutions during iterative self-refine. are included in the Appendix Table 5, 6 and 7), which is consistent with the BLEURT assessment. 
4.3 What improves after self-refinement?

Self-refinement can improve fluency and understandability, but not quality. We have demonstrated that an LLM with biased feedback can impede the model's self-refine process. This raises a natural question: if an LLM does not improve its generation quality, does it improve in any other aspect throughout the iterative refinement phase? To investigate this, we utilize the learned metric UniEval (Zhong et al., 2022) to measure the LLMs' improvement beyond quality metrics. UniEval, a multidimensional learned metric, estimates various evaluation dimensions, including fluency, understandability, engagement, and more. We focus on two dimensions, fluency and understandability, on which UniEval is not trained with task-specific data. Our results, illustrated in Figure 7, show that GPT-4, GPT-3.5-Turbo, and Gemini consistently exhibit improvements in both fluency and understandability. This suggests an alternative perspective on the self-refine pipeline: while an LLM may not strictly adhere to instruction-following in terms of quality improvements, it can still improve certain intrinsic text qualities, such as fluency and understandability.

Figure 7: We measure the fluency and understandability of GPT-4, GPT-3.5-Turbo, and Gemini's generated texts at Yor-En through self-refine steps. Despite no gains in quality, all LLMs show consistent improvements in fluency and understandability.

LLMs favor texts that follow their style. To explore this propensity, we conducted experiments to investigate whether LLMs display a preference for outputs that align with their generation style. We asked the GPT-4, GPT-3.5-Turbo, and Gemini models to paraphrase external translation outputs. In this prompt, the LLMs aimed not to improve the quality of the translations but rather to rewrite the sentences in their corresponding styles. Using the multilingual translation system Madlad400-10b (Kudugunta et al., 2023), we produced 100 Yoruba-to-English translations. Subsequently, each LLM was instructed to paraphrase the generated sentences. Our findings, shown in Figure 8, reveal that GPT-4 and Gemini have negative self-bias before paraphrasing. However, after paraphrasing, all LLMs show an increased bias towards their paraphrased outputs. This is mainly attributed to a decline in quality performance post-paraphrasing, with the LLMs erroneously perceiving these paraphrased outputs as improvements.

Figure 8: We used Madlad400-10b to produce 100 Yor-En translations and asked GPT-4, GPT-3.5-Turbo, and Gemini to paraphrase the 100 translations. We show the BLEURT and LLM scores before and after paraphrasing. In the lower right of the figure, we show the bias estimates before and after paraphrasing. GPT-4 and Gemini have negative self-bias before paraphrasing. After paraphrasing, all LLMs increase their bias towards their paraphrased outputs.
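A minimal sketch of this probe is given below, assuming the Hugging Face MADLAD-400 checkpoint named in Appendix A (with its "<2en>" target-language prefix, following the model card) and the OpenAI chat API for paraphrasing; the paraphrase prompt wording is our assumption.

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer
from openai import OpenAI

tok = T5Tokenizer.from_pretrained("google/madlad400-10b-mt")
mt = T5ForConditionalGeneration.from_pretrained("google/madlad400-10b-mt", device_map="auto")
client = OpenAI()

def translate_yor_en(yoruba_text: str) -> str:
    # "<2en>" selects English as the target language in MADLAD-400.
    ids = tok("<2en> " + yoruba_text, return_tensors="pt").input_ids.to(mt.device)
    return tok.decode(mt.generate(ids, max_new_tokens=128)[0], skip_special_tokens=True)

def paraphrase(english_text: str, model: str = "gpt-4") -> str:
    # Rewrite in the LLM's own style, without asking it to improve quality.
    msg = f"Paraphrase the sentence in your own words without changing its meaning:\n{english_text}"
    r = client.chat.completions.create(model=model, messages=[{"role": "user", "content": msg}])
    return r.choices[0].message.content.strip()
```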
4.4 Self-Bias is Amplified in the Self-Rewarding Pipeline

In this section, we explore the concept of self-bias in the self-rewarding pipeline, as outlined in (Yuan et al., 2024). The pipeline begins with an instruction fine-tuned large language model (LLM). Initially, we generate k candidate responses for each input provided to the LLM. Next, the same LLM is used as a reward model to identify the best-performing candidate or to rank pairs within the collection of samples. Finally, various training objectives are applied to further train the LLM using the top-performing samples.

To illustrate the potential drawbacks of this pipeline, we carried out experiments on the Yoruba-to-English translation task using three open-source LLMs: Deepseek-MOE, MixtralMOE, and LLaMA2-7B. For each source input, we sampled k candidate responses from each model. Subsequently, we obtained self-feedback scores on these candidates, employing the prompt detailed in Section 4.1, and computed the corresponding self-bias. We varied k across 1, 4, 8, 16, and 32 to examine the influence of sample size on the self-bias within the self-rewarding pipeline. As shown in Table 3, all LLMs display an increase in bias and distance skewness as the sample size increases. This occurs because the LLM has a biased estimation in its self-feedback, and this bias can be amplified when the sample size is increased to find the top-performing candidate according to the self-feedback. Notably, selecting samples from a larger pool, e.g. a sample size of 32, significantly increases this bias compared to selections from a smaller pool, such as a sample size of 4. When the LLM optimizes over these samples, it can further increase its self-bias and generate samples that are biased by its self-feedback.

Sample Size   DeepSeekMOE       MixtralMOE        LLaMA2-7B
              Bias    Dskew     Bias    Dskew     Bias    Dskew
1             14.8    0.735     12.4    0.483     8.75    0.491
4             16.1    0.795     10.1    0.490     14.1    0.580
8             16.7    0.800     13.0    0.610     19.8    0.810
16            18.0    0.830     16.9    0.730     20.7    0.840
32            18.5    0.840     18.5    0.790     20.9    0.850

Table 3: We report Bias and Dskew for DeepSeekMOE, MixtralMOE, and LLaMA2-7B's self-feedback with varying sample size at Yor-En. Our results indicate that both bias and distance skewness tend to increase as the sample size grows larger.
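The two statistics reported throughout can be estimated from paired scores as sketched below. This is a hedged reading: we take "Bias" to be the mean gap between self-assigned and reference-based (quantile-mapped) scores, and distance skewness to follow Szekely and Móri (2006), applied to the per-sample score differences; the paper's exact estimators may differ.

```python
import numpy as np

def bias(self_scores, true_scores):
    """Mean over-estimation of the LLM's own outputs."""
    return float(np.mean(np.asarray(self_scores) - np.asarray(true_scores)))

def distance_skewness(x):
    """dSkew(X) = 1 - E|X - X'| / E|X + X'| for i.i.d. copies X, X';
    it is 0 iff X is diagonally symmetric about 0 (Szekely and Móri, 2006)."""
    x = np.asarray(x, dtype=float)
    diff = np.abs(x[:, None] - x[None, :]).mean()  # sample estimate of E|X - X'|
    summ = np.abs(x[:, None] + x[None, :]).mean()  # sample estimate of E|X + X'|
    return 1.0 - diff / summ if summ > 0 else 0.0
```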
5 Alleviating Self-Bias

External Feedback Reduces Self-Bias. We have demonstrated that self-feedback from a large language model can self-amplify bias with iterative refinement. We aim to answer whether external feedback with low bias can improve the model's generation performance and elicit its self-correction capability. We leverage a reference-based feedback model, InstructScore (Xu et al., 2023), to provide external feedback. InstructScore takes in both the reference and candidate text and outputs fine-grained feedback, including error location, severity label, and error type. To ensure a fair comparison, we parse all outputs into the same format as the self-feedback. Since InstructScore can access the reference text to provide feedback, we regard this external feedback as oracle feedback. However, the models only receive information about error locations, error types, and severity labels; refinement therefore still relies on the LLM's self-correction capability. In Figure 9, we demonstrate that external feedback with accurate assessments can significantly lower the model's bias during iterative refinement (shown in the lower right of the figure: all dotted curves are below the solid curves of the corresponding colors).

Figure 9: Using an external feedback model, we provide external feedback for GPT-4, GPT-3.5-Turbo, and Gemini on the Yoruba-to-English translation task across 5 refinement steps. We compare the models' true performance (measured by BLEURT) against external-feedback-evaluated performance and self-feedback-evaluated performance. Additionally, we plot the bias estimates for the three LLMs, considering both feedback types over 5 iterative refinement steps.

External Feedback Example at GPT-4
Yoruba text: Ní bayii a ni àwon eku oloshu merin ti ko ni dayabetesi telele to ti ni ayabetesi," o she afikun.
Reference English text: "We now have 4-month-old mice that are non-diabetic that used to be diabetic," he added.
(Red span indicates a major error and blue span indicates a minor error annotated by GPT-4.)
GPT-4's 1st generation [Human: -11, InstructScore: -10, Bias: 1]: "At this point, we have four rats without diabetes that have developed diabetes," he added.
GPT-4's 1st refinement [Human: -2, InstructScore: -6, Bias: -4]: "At this point, we have four mice without diabetes that were diabetic," he added.
GPT-4's 2nd refinement [Human: -1, InstructScore: -1, Bias: 0]: "We now have 4-month-old mice that are non-diabetic that were diabetic," he added.

Table 4: This case study demonstrates that external feedback (oracle) from InstructScore (Xu et al., 2023) can maintain low self-bias during iterative self-refinement. By providing accurate error types, error locations, and severity labels, InstructScore effectively elicits GPT-4's self-correction capability and improves its translation quality. Despite InstructScore's oracle-like role (it can access the reference text to make error annotations), it does not provide explicit corrections, requiring GPT-4 to rely on its internal knowledge for corrections.
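One refinement step with external feedback can be sketched as follows; the refinement prompt mirrors the in-context template of Table 10, while the OpenAI call and the get_instructscore_feedback helper (which would wrap the "xu1998hz/InstructScore" checkpoint from Appendix A) are assumptions rather than the paper's actual code.

```python
from openai import OpenAI

client = OpenAI()

def refine_with_feedback(source, translation, feedback, model="gpt-4"):
    prompt = (
        "Please fix all errors. You can rewrite translation if translation is bad.\n"
        f"Source: ```{source}```\n"
        f"Translation: ```{translation}```\n"
        f"Feedback:\n{feedback}\n"
        "Improved translation:"
    )
    r = client.chat.completions.create(model=model,
                                       messages=[{"role": "user", "content": prompt}])
    return r.choices[0].message.content.strip()

# for step in range(5):  # 5 refinement steps, as in Figure 9
#     feedback = get_instructscore_feedback(reference, translation)  # hypothetical helper
#     translation = refine_with_feedback(source, translation, feedback)
```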
Interestingly, both Gemini's and GPT-4's bias estimates improve throughout the refinement process, as the external feedback model can over-penalize low-quality outputs. As refinement proceeds, the external feedback model converges to the BLEURT quality assessment as samples achieve improved quality. Most importantly, we demonstrate that all LLMs with external feedback can elicit their self-correction ability, with consistent BLEURT improvements across self-refine iterations. We include a case study example in Table 4. Our finding of model improvement is consistent with a prior study (Xu et al., 2024), and we further demonstrate that external feedback can significantly reduce self-bias.

Figure 10: We show bias and distance skewness estimates for the LLaMA-2 7B, 13B, and 70B models on Yor-En translation across self-refinement steps. LLMs with larger parameter sizes can have less self-bias.

Larger Models Reduce Self-Bias. In Figure 10, we demonstrate that LLMs with larger parameter sizes can have less self-bias throughout the self-refinement steps. Specifically, we tested the LLaMA2 models with 7B, 13B, and 70B parameters on Yoruba-to-English (Yor-En) translation tasks. Our findings indicate that while the LLaMA2-70B model exhibits self-bias in the earlier iterations, its self-bias begins to plateau after the 5th iteration. In contrast, the 7B and 13B models continue to amplify their self-bias in later iterations. This observation aligns with prior work (Huang et al., 2023a), which posited that larger LLMs possess better self-refinement capabilities. Our study contributes to this discussion from the perspective of self-bias, proposing that larger LLMs are more resilient to self-bias. Consequently, they can assess their own outputs more accurately and possess a greater capacity for self-correction.

6 Conclusion

In this study, we define and quantify self-bias in LLMs with two principled estimated statistics. Our experiments across six LLM families, four languages, and three tasks reveal that self-bias is prevalent in self-refine and self-rewarding pipelines. This biased self-feedback leads to false positive objectives, hindering performance improvements during iterative refinement. Further analysis reveals that while LLMs improve the fluency and understandability of their generated text, they do not necessarily progress in the intended direction, such as improving quality in machine translation or expanding coverage in concept-to-text generation. Instead, LLMs tend to favor texts that adhere to their inherent styles. Finally, our research suggests that larger models are more resistant to self-bias, and that incorporating external feedback significantly reduces bias, leading to performance improvements in LLMs.

Acknowledgements

This work was supported by the National Science Foundation award #2048122. L.L. is partly supported by a gift from Apple Inc. The views expressed are those of the authors and do not reflect the official policy or position of the funding agencies. We thank Yuanjing Wei for conducting the human evaluation in our experiment.

Limitations

In this study, we focus on quantifying the self-bias exhibited by LLMs in the self-refine pipeline. We demonstrate that self-bias is amplified in the self-refine and self-rewarding pipelines and negatively impacts the optimization process.
However, in subsequent research, it would be worthwhile to explore the measurement of bias between different LLMs, as well as the bias that arises when comparing original models and their knowledge-distilled counterparts. The following questions remain open: Does an LLM have more bias towards LLMs that follow the same pretraining procedure, data, or learning objectives? Does an LLM have more bias towards LLMs within the same language model family? Do knowledge-distilled LLMs have more bias towards the original LLMs, such as Vicuna towards GPT-4 or Alpaca towards ChatGPT? We leave these interesting avenues for future research.

Ethical Statement

All the benchmark data that we used during our experiments is publicly available. We ensured that the benchmark data does not contain risky or toxic content. The annotator was compensated fairly and did not disclose any private information during the annotation process. All the open-source models can be accessed online, and all the closed-source models have publicly accessible APIs. The annotators were allowed to label sensitive information if necessary. The annotator is fully aware that the data we collected from them will be used for research purposes. The total human annotation period took six hours, and the annotator was paid above the local minimum wage. We used Mistral Medium, Grammarly, and the ChatGPT API to polish some of our writing.

The findings of this research have far-reaching implications for the broader linguistic and technological communities, particularly in the preservation and revitalization of endangered or low-resource languages. By identifying and mitigating self-bias in large language models (LLMs), this work paves the way for significant improvements in machine translation for languages that are underrepresented in digital platforms and datasets.

The ability to reduce bias in the self-refine pipeline of LLMs can lead to more accurate and nuanced translations, thereby enhancing the quality and accessibility of digital content in low-resource languages. This advancement is critical for preserving the cultural heritage and knowledge embodied in these languages, which are at risk of disappearing. Through improved translation capabilities, communities can more easily access global information in their native languages, fostering educational opportunities and cultural exchange. This contributes to the preservation of linguistic diversity and promotes a more inclusive digital ecosystem.

References

Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. 2023. GPT-4 technical report. arXiv preprint arXiv:2303.08774.

Alex J. Cannon, Stephen R. Sobie, and Trevor Q. Murdock. 2015. Bias correction of GCM precipitation by quantile mapping: How well do methods preserve changes in quantiles and extremes? Journal of Climate, 28(17):6938–6959.

Kent Chang, Mackenzie Cramer, Sandeep Soni, and David Bamman. 2023. Speak, memory: An archaeology of books known to ChatGPT/GPT-4. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 7312–7327, Singapore. Association for Computational Linguistics.

Xinyun Chen, Maxwell Lin, Nathanael Schärli, and Denny Zhou. 2024. Teaching large language models to self-debug. In The Twelfth International Conference on Learning Representations.
Marta R Costa-jussà, James Cross, Onur Çelebi, Maha Elbayad, Kenneth Heafield, Kevin Heffernan, Elahe Kalbassi, Janice Lam, Daniel Licht, Jean Maillard, et al. 2022. No language left behind: Scaling human-centered machine translation. arXiv preprint arXiv:2207.04672.

Damai Dai, Chengqi Deng, Chenggang Zhao, R. X. Xu, Huazuo Gao, Deli Chen, Jiashi Li, Wangding Zeng, Xingkai Yu, Y. Wu, Zhenda Xie, Y. K. Li, Panpan Huang, Fuli Luo, Chong Ruan, Zhifang Sui, and Wenfeng Liang. 2024. Deepseekmoe: Towards ultimate expert specialization in mixture-of-experts language models.

Daniel Deutsch, Rotem Dror, and Dan Roth. 2022. On the limitations of reference-free evaluations of generated text. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 10960–10977, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.

Markus Freitag, George Foster, David Grangier, Viresh Ratnakar, Qijun Tan, and Wolfgang Macherey. 2021. Experts, errors, and context: A large-scale study of human evaluation for machine translation. Transactions of the Association for Computational Linguistics, 9:1460–1474.

Markus Freitag, Ricardo Rei, Nitika Mathur, Chi-kiu Lo, Craig Stewart, Eleftherios Avramidis, Tom Kocmi, George Foster, Alon Lavie, and André F. T. Martins. 2022. Results of WMT22 metrics shared task: Stop using BLEU – neural metrics are better and more robust. In Proceedings of the Seventh Conference on Machine Translation (WMT), pages 46–68, Abu Dhabi, United Arab Emirates (Hybrid). Association for Computational Linguistics.

Jinlan Fu, See-Kiong Ng, Zhengbao Jiang, and Pengfei Liu. 2023. Gptscore: Evaluate as you desire.

Caglar Gulcehre, Tom Le Paine, Srivatsan Srinivasan, Ksenia Konyushkova, Lotte Weerts, Abhishek Sharma, Aditya Siddhant, Alex Ahern, Miaosen Wang, Chenjie Gu, Wolfgang Macherey, Arnaud Doucet, Orhan Firat, and Nando de Freitas. 2023. Reinforced self-training (rest) for language modeling.

Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, and Jacob Steinhardt. 2021. Measuring mathematical problem solving with the MATH dataset. NeurIPS.

Ruixin Hong, Hongming Zhang, Xinyu Pang, Dong Yu, and Changshui Zhang. 2023. A closer look at the self-verification abilities of large language models in logical reasoning. CoRR, abs/2311.07954.

Jiaxin Huang, Shixiang Gu, Le Hou, Yuexin Wu, Xuezhi Wang, Hongkun Yu, and Jiawei Han. 2023a. Large language models can self-improve. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 1051–1068, Singapore. Association for Computational Linguistics.

Jie Huang, Xinyun Chen, Swaroop Mishra, Huaixiu Steven Zheng, Adams Wei Yu, Xinying Song, and Denny Zhou. 2023b. Large language models cannot self-correct reasoning yet. CoRR, abs/2310.01798.

Albert Q. Jiang, Alexandre Sablayrolles, Antoine Roux, Arthur Mensch, Blanche Savary, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Emma Bou Hanna, Florian Bressand, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Lélio Renard Lavaud, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Sandeep Subramanian, Sophia Yang, Szymon Antoniak, Teven Le Scao, Théophile Gervet, Thibaut Lavril, Thomas Wang, Timothée Lacroix, and William El Sayed. 2024. Mixtral of experts.

Pei Ke, Bosi Wen, Zhuoer Feng, Xiao Liu, Xuanyu Lei, Jiale Cheng, Shengyuan Wang, Aohan Zeng, Yuxiao Dong, Hongning Wang, Jie Tang, and Minlie Huang. 2023.
Critiquellm: Scaling llm-as-critic for effective and explainable evaluation of large language model generation. CoRR, abs/2311.18702.

Tom Kocmi, Eleftherios Avramidis, Rachel Bawden, Ondřej Bojar, Anton Dvorkovich, Christian Federmann, Mark Fishel, Markus Freitag, Thamme Gowda, Roman Grundkiewicz, Barry Haddow, Philipp Koehn, Benjamin Marie, Christof Monz, Makoto Morishita, Kenton Murray, Makoto Nagata, Toshiaki Nakazawa, Martin Popel, Maja Popović, and Mariya Shmatova. 2023. Findings of the 2023 conference on machine translation (WMT23): LLMs are here but not quite there yet. In Proceedings of the Eighth Conference on Machine Translation, pages 1–42, Singapore. Association for Computational Linguistics.

Tom Kocmi and Christian Federmann. 2023. GEMBA-MQM: Detecting translation quality error spans with GPT-4. In Proceedings of the Eighth Conference on Machine Translation, pages 768–775, Singapore. Association for Computational Linguistics.

Ryan Koo, Minhwa Lee, Vipul Raheja, Jong Inn Park, Zae Myung Kim, and Dongyeop Kang. 2023. Benchmarking cognitive biases in large language models as evaluators.

Sneha Kudugunta, Isaac Rayburn Caswell, Biao Zhang, Xavier Garcia, Derrick Xin, Aditya Kusupati, Romi Stella, Ankur Bapna, and Orhan Firat. 2023. MADLAD-400: A multilingual and document-level large audited dataset. In Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track.

Bill Yuchen Lin, Wangchunshu Zhou, Ming Shen, Pei Zhou, Chandra Bhagavatula, Yejin Choi, and Xiang Ren. 2020. CommonGen: A constrained text generation challenge for generative commonsense reasoning. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 1823–1840, Online. Association for Computational Linguistics.

Yang Liu, Dan Iter, Yichong Xu, Shuohang Wang, Ruochen Xu, and Chenguang Zhu. 2023a. G-eval: NLG evaluation using GPT-4 with better human alignment. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 2511–2522, Singapore. Association for Computational Linguistics.

Yiqi Liu, Nafise Sadat Moosavi, and Chenghua Lin. 2023b. LLMs as narcissistic evaluators: When ego inflates evaluation scores.

Aman Madaan, Niket Tandon, Prakhar Gupta, Skyler Hallinan, Luyu Gao, Sarah Wiegreffe, Uri Alon, Nouha Dziri, Shrimai Prabhumoye, Yiming Yang, et al. 2024. Self-refine: Iterative refinement with self-feedback. Advances in Neural Information Processing Systems, 36.

Potsawee Manakul, Adian Liusie, and Mark Gales. 2023. SelfCheckGPT: Zero-resource black-box hallucination detection for generative large language models. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 9004–9017, Singapore. Association for Computational Linguistics.
Liangming Pan, Michael Saxon, Wenda Xu, Deepak Nathani, Xinyi Wang, and William Yang Wang. 2023. Automatically correcting large language models: Surveying the landscape of diverse self-correction strategies. CoRR, abs/2308.03188.

Amy Pu, Hyung Won Chung, Ankur P Parikh, Sebastian Gehrmann, and Thibault Sellam. 2021. Learning compact metrics for MT. In Proceedings of EMNLP.

Thibault Sellam, Dipanjan Das, and Ankur P Parikh. 2020. BLEURT: Learning robust metrics for text generation. In Proceedings of ACL.

Noah Shinn, Federico Cassano, Ashwin Gopinath, Karthik Narasimhan, and Shunyu Yao. 2024. Reflexion: Language agents with verbal reinforcement learning. Advances in Neural Information Processing Systems, 36.

Kaya Stechly, Matthew Marquez, and Subbarao Kambhampati. 2023. GPT-4 doesn't know it's wrong: An analysis of iterative prompting for reasoning problems. CoRR, abs/2310.12397.

Gabor Szekely and Tamás Móri. 2006. A characteristic measure of asymmetry and its application for testing diagonal symmetry. Communications in Statistics—Theory and Methods, pages 1633–1639.

Gemini Team, Rohan Anil, Sebastian Borgeaud, Yonghui Wu, Jean-Baptiste Alayrac, Jiahui Yu, Radu Soricut, Johan Schalkwyk, Andrew M Dai, Anja Hauth, et al. 2023. Gemini: A family of highly capable multimodal models. arXiv preprint arXiv:2312.11805.

Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. 2023. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288.

Gladys Tyen, Hassan Mansoor, Peter Chen, Tony Mak, and Victor Carbune. 2023. LLMs cannot find reasoning errors, but can correct them! CoRR, abs/2311.08516.

Karthik Valmeekam, Matthew Marquez, and Subbarao Kambhampati. 2023. Can large language models really improve by self-critiquing their own plans? CoRR, abs/2310.08118.

Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc V Le, Ed H. Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. 2023. Self-consistency improves chain of thought reasoning in language models. In The Eleventh International Conference on Learning Representations.

Wenda Xu, Danqing Wang, Liangming Pan, Zhenqiao Song, Markus Freitag, William Wang, and Lei Li. 2023. INSTRUCTSCORE: Towards explainable text generation evaluation with automatic feedback. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 5967–5994, Singapore. Association for Computational Linguistics.

Wenda Xu, Daniel Deutsch, Mara Finkelstein, Juraj Juraska, Biao Zhang, Zhongtao Liu, William Yang Wang, Lei Li, and Markus Freitag. 2024. LLMRefine: Pinpointing and refining large language models via fine-grained actionable feedback. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics (NAACL) - Findings.

Weizhe Yuan, Richard Yuanzhe Pang, Kyunghyun Cho, Xian Li, Sainbayar Sukhbaatar, Jing Xu, and Jason Weston. 2024. Self-rewarding language models.

Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric Xing, Hao Zhang, Joseph E. Gonzalez, and Ion Stoica. 2023. Judging LLM-as-a-judge with MT-bench and chatbot arena. In Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track.

Ming Zhong, Yang Liu, Da Yin, Yuning Mao, Yizhu Jiao, Pengfei Liu, Chenguang Zhu, Heng Ji, and Jiawei Han. 2022. Towards a unified multi-dimensional evaluator for text generation. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 2023–2038, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.

A Model API/Checkpoints

This section provides pointers to the checkpoints that we used during our experiments. All open-source models are available on the Hugging Face platform. For LLaMA2, we use "meta-llama/Llama-2-(7, 13, 70)b-chat-hf", respectively. For MixtralMOE, we use "mistralai/Mixtral-8x7B-Instruct-v0.1". For DeepSeekMoE, we use "deepseek-ai/deepseek-moe-16b-chat". For InstructScore, we use "xu1998hz/InstructScore". For the translation model Madlad400-10b, we use "google/madlad400-10b-mt". We used GPT-3.5-Turbo and GPT-4 from the OpenAI platform (https://platform.openai.com). We use gemini-pro from the Google Gemini API.
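For reference, any of the open-source checkpoints above can be loaded with standard Hugging Face transformers calls; the generation settings below are assumptions, and the prompt stands in for the in-context templates of Tables 8-10.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-2-7b-chat-hf"  # or the Mixtral / DeepSeekMoE IDs above
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

prompt = ("Below is an instruction that describes a task.\n"
          "### Instruction:\nTranslate Yoruba text into English.\n...")
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128, do_sample=False)
# Decode only the newly generated tokens, not the echoed prompt.
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```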
B Quantile Mapping

While BLEURT (Sellam et al., 2020) correlates highly with human judgments (Freitag et al., 2022), its scale of roughly 0 to 1 is incompatible with the MQM human annotations, which range from -25 to 0. A linear mapping is not feasible, as the BLEURT score is not calibrated to the human score, meaning a BLEURT score of 0.8 does not correspond to -5 in MQM annotations. To address this issue, we employ quantile mapping (Cannon et al., 2015) to transform the BLEURT score into the distribution of human scores. This method involves learning a mapping function that maps the quantiles or percentiles of the predictive distribution to those of the observed distribution. In this case, our predictive distribution is derived from the BLEURT score distribution, while our observed distribution comes from the corresponding human score distribution. We utilize the WMT22 shared metrics task (Freitag et al., 2022) to obtain mapped BLEURT-human scoring pairs. In this shared task, each translation generated by the different translation models is rated by humans using the MQM rating scale. We also run BLEURT on the same set of translations to obtain BLEURT scores, resulting in 28,125 mapped BLEURT-human scoring pairs. We then perform the following steps: 1) Separately sort the data of the two distributions in ascending order. 2) Compute the cumulative distribution function (CDF) for each distribution. 3) Learn an interpolation function that maps the percentiles of the first distribution to the percentiles of the second distribution. 4) Apply the mapping function to the values drawn from the predictive distribution (the BLEURT score distribution) to obtain the corresponding values in the observed distribution (the human MQM score distribution). This process maps the BLEURT score distribution to the human score distribution (from -25 to 0) while preserving the relative ordering of BLEURT scores. In our experiments, we used the latest BLEURT model, the BLEURT-20 checkpoint (Pu et al., 2021), which demonstrates the highest correlation with human judgments among its variants.
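Steps 1-4 amount to matching empirical quantiles; a minimal numpy sketch under that reading is shown below (grid size and interpolation choice are assumptions).

```python
import numpy as np

def fit_quantile_map(bleurt_scores, human_scores, grid=1001):
    """Map BLEURT scores onto the MQM human-score scale via matched quantiles."""
    q = np.linspace(0.0, 1.0, grid)                 # shared CDF levels (steps 1-2)
    b_q = np.quantile(np.asarray(bleurt_scores), q)
    h_q = np.quantile(np.asarray(human_scores), q)
    # Step 3: interpolate between the two quantile curves;
    # step 4: apply the map to new BLEURT scores.
    return lambda x: np.interp(x, b_q, h_q)

# qmap = fit_quantile_map(wmt_bleurt, wmt_mqm)   # the 28,125 WMT22 pairs
# mapped = qmap(0.8)                             # BLEURT 0.8 -> an MQM-scale score
```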
C Gemini's Skewness at Translation

Specifically, in the Javanese-English (Jav-En) language pair, Gemini initially assigns lower quality scores to its output compared to the BLEURT assessments during early iterations, resulting in an underestimation of output performance. This phenomenon accounts for the decrease in distance skewness at the beginning, as the right-skewed distribution becomes more neutral. However, as bias accumulates in later iterations, the distribution shifts towards a left-skewed distribution, leading to an increase in distance skewness.

D Self-consistency Results on Math Reasoning

We slightly modify the self-refine pipeline by replacing the self-evaluation with self-consistency verification (Huang et al., 2023a). Namely, given the initial solution, the LLM generates an additional ten reasoning paths and takes a majority vote over the proposed answers. If the proposed answer is inconsistent with the prior solution, we output a binary score of 0, and the initial answer is replaced by the proposed answer. Otherwise, we output a score of 1, and no change is made to the initial answer. Figure 11 illustrates that all large language models (LLMs) exhibit an increase in bias and skewness estimates in the iterative self-consistency pipeline. This suggests that LLMs introduce self-biases towards certain reasoning paths during self-refine, ultimately leading to a biased ensemble across multiple reasoning paths.

Figure 11: Bias and distance skewness in generated texts from GPT-4, GPT-3.5-Turbo, and Gemini are measured on the MATH testing set throughout the self-refinement steps. Results show an increase in bias and skewness during iterative self-consistency, causing biased ensembles over reasoning paths.

E Additional Results

In Table 5, we include human evaluation results and GPT-4's quality scores for the 0th and 10th iterations of refinement generation at Yoruba-to-English. In Table 6, we include human evaluation and GPT-3.5-Turbo's quality assessments on the 0th and 10th iterations of refinement generation at Yoruba-to-English. In Table 7, we include human evaluation and Gemini's quality assessments on the 0th and 10th iterations of refinement generation. In Figure 12, we include the full bias and distance skewness results for Yor-En, Jav-En, Arm-En, and Ig-En translations on Flores200.

                 Human    GPT-4    Bias    Dskew
0th Iteration    -15.0    -6.92    8.06    0.452
10th Iteration   -15.1    -0.52    14.6    0.692

Table 5: This table presents human evaluation results and GPT-4's quality scores for the 0th and 10th iterations of refinement generation at Yor-En. Bias and Dskew estimates are included to quantify the biases identified through human evaluation.

                 Human    GPT-3.5    Bias    Dskew
0th Iteration    -22.2    -2.61      19.6    0.803
10th Iteration   -21.9    -0.03      21.9    0.885

Table 6: We report human evaluation and GPT-3.5-Turbo's quality assessment on the 0th and 10th iterations of refinement generation at Yor-En.

                 Human    Gemini    Bias    Dskew
0th Iteration    -17.3    -8.92     9.62    0.355
10th Iteration   -18.3    -0.72     17.6    0.766

Table 7: We report human evaluation and Gemini's quality assessment on the 0th and 10th iterations of refinement generation at Yor-En.

Figure 12: Full Bias and Dskew estimates for Yor-En, Jav-En, Arm-En, and Ig-En translations on Flores200, with the x-axis showing self-refine steps, reveal that all LLMs exhibit self-bias, with open-source LLMs exhibiting higher levels than GPT-4 and Gemini.

In-context-learning prompt for LLM's initial generation at translation:
Below is an instruction that describes a task.
### Instruction:
Translate Chinese text into English.
Chinese: 新华时评:把优秀返乡农民工打造成乡村振兴生力军-新华网
### English: Xinhua Commentary: Outstanding returning rural migrant workers can be a rural revitalization army - Xinhuanet
Below is an instruction that describes a task.
### Instruction:
Translate English text into German.
English: You can come back any time as our chat service window is open 24/7
### German: Sie können jederzeit wiederkommen, da unser Chat-Service-Fenster täglich rund um die Uhr geöffnet ist
Below is an instruction that describes a task.
### Instruction:
Translate Yorba text into English.
Yorba: Won da Olori Skwodroni. Dilokrit Pattavee gege bi awako ofururu.
### English: The pilot was identified as Squadron Leader Dilokrit Pattavee.
Below is an instruction that describes a task.
### Instruction:
Translate Yoruba text into English.
Yorba: O ko ago ilekun WiFi, O wi.

Output for translation: Can you please turn off the WiFi, I'm done.
Table 8: These are the in-context-learning translation examples we used to prompt all LLMs across the four language directions at Flores200. In this example, the source is the Yoruba text "O ko ago ilekun WiFi, O wi.". The English output text is from LLaMA2-7B's generation: "Can you please turn off the WiFi, I'm done.".

In-context-learning prompt for LLM's self-feedback at translation:
You are an annotator for the quality of machine translation. Your task is to identify errors and assess the quality of the translation. Based on the source segment and machine translation surrounded with triple backticks, identify error types in the translation and classify them. The categories of errors are: accuracy (addition, mistranslation, omission, untranslated text), fluency (character encoding, grammar, inconsistency, punctuation, register, spelling), locale convention (currency, date, name, telephone, or time format), style (awkward), terminology (inappropriate for context, inconsistent use), non-translation, other, or no-error. Each error is classified as one of three categories: critical, major, and minor. Critical errors inhibit comprehension of the text. Major errors disrupt the flow, but what the text is trying to say is still understandable. Minor errors are technically errors, but do not disrupt the flow or hinder comprehension.
Source: ```大众点评乌鲁木齐家居商场频道为您提供高铁居然之家地址,电话,营业时间等最新商户信息,找装修公司,就上大众点评```
Translation: ```Urumqi Home Furnishing Store Channel provides you with the latest bussiness information such as the address, telephone number, bussiness hours, etc., of high-speed rail, and find a decoration company, and go to the reviews.```
Annotate errors in the translation. MQM annotations:
"of high-speed rail" is a critical accuracy/addition error
"go to the reviews" is a major accuracy/mistranslation error
"etc.," is a minor style/awkward error
Source: ```I do apologise about this, we must gain permission from the account holder to discuss an order with another person, I apologise if this was done previously, however, I would not be able to discuss this with yourself without the account holders permission.```
Translation: ```Ich entschuldige mich dafür, wir müssen die Erlaubnis einholen, um eine Bestellung mit einer anderen Person zu besprechen. Ich entschuldige mich, falls dies zuvor geschehen wäre, aber ohne die Erlaubnis des Kontoinhabers wäre ich nicht in der Lage, dies mit dir involvement.```
Annotate errors in the translation. MQM annotations:
'involvement' is a major accuracy/mistranslation error
'the account holder' is a major accuracy/omission error
'wäre' is a minor fluency/grammar error
'dir' is a minor fluency/register error
Source: ```Talks have resumed in Vienna to try to revive the nuclear pact, with both sides trying to gauge the prospects of success after the latest exchanges in the stop-start negotiations.```
Translation: ```Ve Vídni se ve Vídni obnovily rozhovory o oživení jaderného paktu, přičemže obě partaje se snaží posoudit vyhlídky na úspěch po posledních výměnách v jednáních.```
Annotate errors in the translation.
MQM annotations:
've Vídni' is a major accuracy/addition error
'the stop-start' is a major accuracy/omission error
'partaje' is a minor terminology/inappropriate for context error
Source: ```O ko ago ilekun WiFi, O wi.```
Translation: ```He locked the WiFi door, he said.```
Annotate errors in the translation. MQM annotations:

Output for Feedback: 'He locked the WiFi door' is a critical accuracy/mistranslation error

Table 9: These are the three in-context-learning feedback examples we used to prompt all LLMs across the four language directions at Flores200. In this example, the source is the Yoruba text "O ko ago ilekun WiFi, O wi.". The English candidate text is "He locked the WiFi door" from GPT-4. GPT-4's self-feedback outputs "'He locked the WiFi door' is a critical accuracy/mistranslation error".

In-context-learning prompt for LLM's refinement at translation:
Please fix all errors. You can rewrite translation if translation is bad.
Source: ```大众点评乌鲁木齐家居商场频道为您提供高铁居然之家地址,电话,营业时间等最新商户信息,找装修公司,就上大众点评```
Translation: ```Urumqi Home Furnishing Store Channel provides you with the latest bussiness information such as the address, telephone number, bussiness hours, etc., of high-speed rail, and find a decoration company, and go to the reviews.```
Feedback:
'of high-speed rail' is a critical accuracy/addition error
'go to the reviews' is a major accuracy/mistranslation error
'etc.,' is a minor style/awkward error
Improved Chinese-to-English translation: Dianping Urumqi Renovation and Design Channel will provide you with the address, phone number, operation time and other information of HSR Easyhome, and please come to Dianping if you are looking for a renovation company.
Source: ```I do apologise about this, we must gain permission from the account holder to discuss an order with another person, I apologise if this was done previously, however, I would not be able to discuss this with yourself without the account holders permission.```
Translation: ```Ich entschuldige mich dafür, wir müssen die Erlaubnis einholen, um eine Bestellung mit einer anderen Person zu besprechen. Ich entschuldige mich, falls dies zuvor geschehen wäre, aber ohne die Erlaubnis des Kontoinhabers wäre ich nicht in der Lage, dies mit dir involvement.```
Feedback:
'involvement' is a major accuracy/mistranslation error
'the account holder' is a major accuracy/omission error
'wäre' is a minor fluency/grammar error
'dir' is a minor fluency/register error
Improved English-to-German translation: Ich bitte um Entschuldigung, aber wir benötigen das Einverständnis des Kontoinhabers, um eine Bestellung mit einer anderen Person zu besprechen, falls es schon eingeholt wurde, entschuldige ich mich, aber ich kann dies ohne das Einverständnis des Kontoinhabers nicht mit Ihnen besprechen.
Source: ```Talks have resumed in Vienna to try to revive the nuclear pact, with both sides trying to gauge the prospects of success after the latest exchanges in the stop-start negotiations.```
Translation: ```Ve Vídni se ve Vídni obnovily rozhovory o oživení jaderného paktu, přičemže obě partaje se snaží posoudit vyhlídky na úspěch po posledních výměnách v jednáních.```
Feedback:
've Vídni' is a major accuracy/addition error
'the stop-start' is a major accuracy/omission error
'partaje' is a minor terminology/inappropriate for context error
Improved English-to-Czech translation: Ve Vídni byly obnoveny rozhovory o oživení jaderného paktu a obě strany se snaží odhadnout, jaké jsou vyhlídky na úspěch po posledních výměnách názorů v rámci přerušených jednání.
Source: ```O ko ago ilekun WiFi, O wi.```
Translation: ```He locked the WiFi door, he said.```
Feedback: 'He locked the WiFi door' is a critical accuracy/mistranslation error
Improved Yoruba-to-English translation:

Output for refinement: "He secured the WiFi connection, he said."

Table 10: These are the three in-context-learning refinement examples we used to prompt all LLMs across the four language directions at Flores200. In this example, the source is the Yoruba text "O ko ago ilekun WiFi, O wi.". The English text is GPT-4's output "He locked the WiFi door.". We gathered GPT-4's self-feedback: "'He locked the WiFi door' is a critical accuracy/mistranslation error". GPT-4 outputs a refined translation, "He secured the WiFi connection, he said.", according to the feedback and the prior-step generation.

Prompt for LLM's initial generation for reasoning:
You are a competitive math problem solver. Please generate a step-by-step solution. Your final answer must be enclosed in LaTeX's boxed tag.
How many vertical asymptotes does the graph of y = 2/(x^2 + x - 6) have?

Output for math answer: A vertical asymptote is a vertical line that the graph of a function approaches but never crosses. A function has a vertical asymptote at x = k if it has infinite limits as x approaches k from the left or right. For the rational function y = 2/(x^2 + x - 6), we know that it could have vertical asymptotes wherever the denominator is zero, because division by zero is undefined and yields infinity. Therefore, to find the vertical asymptotes, we solve the equation x^2 + x - 6 = 0. We apply factorization and get (x - 2)(x + 3) = 0. This gives us x = 2 and x = -3. In any case, we need to check if any of these could be holes instead of vertical asymptotes. This would be the case if x = 2 or x = -3 also made the numerator zero. However, the numerator is a constant and won't be zero. Therefore this function has two vertical asymptotes, at x = 2 and x = -3. The answer is \boxed{2}.

Table 11: This is the prompt we used to generate mathematical reasoning. The output in this example comes from GPT-4. We first use this prompt to generate one reasoning path as the initial generation. Then, we sample 10 different reasoning paths and use a majority vote to find the most confident answer. To generate self-feedback, we compare the majority-voted answer and the initial answer. If they match, the feedback indicates that the answer is correct and no change should be made. If they do not match, the feedback indicates that the answer is wrong and the initial answer is replaced by the majority-voted answer. We alternate the above two steps and iteratively refine the reasoning paths.

Concept-to-text prompt for LLM's initial generation:
You are generating text based on specified words.
Do not provide any explanations or text apart from the text output. Please generate a sentence that contains the exact string matches for the following concepts: ['vest', 'snow', 'remove', 'silhouette', 'dunk', 'step', 'item', 'toddler', 'plaster', 'wall', 'piece', 'tool', 'circle', 'pull', 'wrap', 'headphone', 'deck', 'suit', 'press', 'mother', 'pit', 'cupcake', 'priest', 'rink', 'counter', 'rest', 'piano', 'wine', 'serve', 'winter']

Output for Concept-to-Text generation: In the winter, the toddler wearing a vest, pressed a piece of wet plaster against the wall in a circle, while his mother baking a cupcake was removing the snow from the deck with a tool; switching steps to the rhythm of a piano piece echoing from the headphones wrapped around her neck, she caught a glimpse of the priest serving wine, his silhouette dunking a cupcake into a pit at the rink counter, under the restful gaze of a suit-clad man.

Table 12: This is the concept-to-text prompt we used to test all LLMs on CommonGen-Hard. This example contains the output from GPT-4.

Concept-to-text in-context-learning prompt for LLM's self-feedback:
We want to create a sentence that contains all the specified concepts. Please provide feedback on the following sentences. The feedback should list all missing concepts. If all concepts are covered, output 'all covered'.
Concepts: ['dog', 'frisbee', 'catch', 'throw']
Generated Sentence: A dog leaps to catch a thrown frisbee.
Feedback: all covered
Concepts: ['dog', 'frisbee', 'catch', 'throw']
Generated Sentence: Two dogs are throwing frisbees at each other.
Feedback: ['catch']
Concepts: ['vest', 'snow', 'remove', 'silhouette', 'dunk', 'step', 'item', 'toddler', 'plaster', 'wall', 'piece', 'tool', 'circle', 'pull', 'wrap', 'headphone', 'deck', 'suit', 'press', 'mother', 'pit', 'cupcake', 'priest', 'rink', 'counter', 'rest', 'piano', 'wine', 'serve', 'winter']
Generated Sentence: In the winter, the toddler wearing a vest, pressed a piece of wet plaster against the wall in a circle, while his mother baking a cupcake was removing the snow from the deck with a tool; switching steps to the rhythm of a piano piece echoing from the headphones wrapped around her neck, she caught a glimpse of the priest serving wine, his silhouette dunking a cupcake into a pit at the rink counter, under the restful gaze of a suit-clad man.
Feedback:

Output for Concept-to-Text feedback: all covered

Table 13: This is the in-context-learning (ICL) concept-to-text prompt (two ICL examples) we used to generate the LLMs' self-feedback on CommonGen-Hard. This example contains the self-feedback from GPT-4.

Concept-to-text prompt for LLM's refinement:
Concepts: ['fruit', 'motorcycle', 'perform', 'jacket', 'vehicle', 'place', 'mat', 'walk', 'world', 'area', 'kiss', 'mother', 'pass', 'report', 'club', 'axis', 'tricep', 'patient', 'listen', 'owner', 'uniform', 'floor', 'hamburger', 'use', 'wine', 'cross', 'bull', 'sell', 'lawn', 'friend']
Generated sentence: In a world where a fruit can perform like a motorcycle, I found my mother wearing a leather jacket, passing by her vehicle to walk towards the place where she runs a club. Not far from that area, my friend, a patient person, was ready to sell mats, used for gym workouts, especially for triceps. Being the owner of the shop, he always wears a uniform. While I was listening to this report, sitting cross-legged on the floor and munching my hamburger, I missed seeing a bull wandering on the lawn just outside.
Just then, I received an unexpected kiss from my wife who brought me a glass of wine from our cellar which also acts as an axis for all the happenings of my life.
Missing Concepts: ['use', 'lawn']
Please revise generated sentence which covers all missing concepts and all given concepts, with the exact string matches.
New sentence:

Output for Concept-to-Text refinement: In a world where a fruit can perform like a motorcycle, I found my mother wearing a leather jacket, passing by her vehicle to walk towards the place where she runs a club. Not far from that area, my friend, a patient person, was ready to sell mats, used for gym workouts, especially for triceps. Being the owner of the shop, he always wears a uniform. While I was listening to this report, sitting cross-legged on the floor and munching my hamburger, I missed seeing a bull wandering on the lawn just outside. Just then, I received an unexpected kiss from my wife who brought me a glass of wine from our cellar which also acts as an axis for all the happenings of my life.

Table 14: This is the concept-to-text refinement prompt we used to generate the LLM's refinement on CommonGen-Hard. This example contains the refinement from GPT-4.
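The coverage feedback used for CommonGen-Hard reduces to exact substring matching over the concept list; a minimal sketch (taking "exact string matches" literally) is:

```python
def coverage_feedback(concepts, sentence):
    """Return 'all covered' or the list of concepts with no exact string match."""
    missing = [c for c in concepts if c not in sentence]
    return "all covered" if not missing else str(missing)

# coverage_feedback(["dog", "frisbee", "catch", "throw"],
#                   "Two dogs are throwing frisbees at each other.")
# -> "['catch']"
```

Note that plain substring matching accepts inflected forms (e.g. 'used' contains 'use'), so an LLM judge applying a stricter or looser notion of a match, as in Table 14, can disagree with this literal check.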
ai_researcher
2
Explaining_Genetic_Programming_Trees_using_Large_Language_Models.pdf
Explaining Genetic Programming Trees using Large Language Models

Paula Maddigan, Andrew Lensen, Member, IEEE, Bing Xue, Fellow, IEEE

arXiv:2403.03397v1 [cs.NE] 6 Mar 2024

This work was supported by the University Research Fund at Te Herenga Waka–Victoria University of Wellington under grant number 410128/4223. The authors are with the Centre for Data Science and Artificial Intelligence, and the School of Engineering and Computer Science, Victoria University of Wellington, Wellington 6140, New Zealand (e-mail: [email protected]; [email protected]; [email protected]).

Abstract—Genetic programming (GP) has the potential to generate explainable results, especially when used for dimensionality reduction. In this research, we investigate the potential of leveraging eXplainable AI (XAI) and large language models (LLMs) like ChatGPT to improve the interpretability of GP-based non-linear dimensionality reduction. Our study introduces a novel XAI dashboard named GP4NLDR, the first approach to combine state-of-the-art GP with an LLM-powered chatbot to provide comprehensive, user-centred explanations. We showcase the system's ability to provide intuitive and insightful narratives on high-dimensional data reduction processes through case studies. Our study highlights the importance of prompt engineering in eliciting accurate and pertinent responses from LLMs. We also address important considerations around data privacy, hallucinatory outputs, and the rapid advancements in generative AI. Our findings demonstrate the potential of this approach in advancing the explainability of GP algorithms. This opens the door for future research into explaining GP models with LLMs.

Index Terms—Genetic Programming, Non-Linear Dimensionality Reduction, Explainable AI, ChatGPT, Large Language Models

I. INTRODUCTION

GENETIC programming (GP) is a powerful evolutionary computation technique that evolves computer programs to solve complex tasks. Its versatility and ability to automatically discover model structure make it an attractive choice for solving many problems. GP is capable of producing functional mathematical mappings with good predictive accuracy. These symbolic mappings (trees) are a promising approach for enabling eXplainable Artificial Intelligence (XAI) [1].

The field of XAI is at the forefront of current research. It is crucial within sectors such as medical diagnosis and financial risk assessment, where explainability is required to gain trust among stakeholders [2]–[4]. However, even with the symbolic nature of GP, understanding the semantics of a GP model/tree or the meaning of individual features may require expert domain knowledge. Even understanding the functionality of the evolutionary process may lie beyond the comprehension of end-users.

The term end-users is deliberately vague. Different audiences need wildly different explanations, personalised to their background and requirements. Ribera [5] highlighted the importance of approaching XAI from a user-centred perspective. They categorised the targeted audience into three broad groups: developers and AI researchers, domain experts, and lay users. They showed that explanations are multifaceted, requiring different explanations for every user group. For example, vocabulary needs to be adapted to match the comprehension level of each group, by omitting technical terms for lay users and integrating domain-specific terminology when engaging with experts. Humans are also social creatures, who learn through conversation [6].
An explanation delivered through a conversational exchange would allow users to directly request answers suited to their own domain knowledge and technical background, greatly improving the explanation quality.

The proliferation of large language models (LLMs) such as OpenAI's ChatGPT has powered a notable surge in chatbot development, facilitating conversational question-and-answering over a broad range of domains. Therefore, this study introduces an AI-driven chatbot to explain the functionality of GP models/trees. Leveraging LLMs in this way capitalises on a wealth of domain knowledge, aiding in understanding results. When responses do not align with the user's level of understanding, they may seek further clarification through conversation. The inherent nature of LLMs enables users from diverse backgrounds to pose questions about presented findings using the language, vocabulary, and grammar of their preference. Existing studies highlight the multilingual capabilities of LLMs [7] and their comprehension of questions containing grammatical or typographical errors [8].

The versatility of genetic programming makes it applicable to a plethora of tasks in real-world applications, including but not limited to symbolic regression [9], job scheduling [10], classification [11], and feature selection [12]. This paper focuses specifically on improving the explainability of Genetic Programming for Nonlinear Dimensionality Reduction (GP-NLDR) methods. Modern datasets often have thousands or tens of thousands of features, which can only be processed by extremely complex and expensive machine learning approaches [13]–[17]. NLDR methods can greatly reduce the dimensionality (number of features) of a dataset, making the data easier to process and understand. GP-NLDR, unlike traditional NLDR methods, has shown promise in performing explainable NLDR, where the reduced dimensions (embedding) can be directly understood in the context of the original features [17]–[20]. In this paradigm, each new dimension in the embedding is represented by a single GP tree, where the tree takes a subset of original features as its inputs (leaves) and produces a single output (embedding dimension). Despite continued research, GP-NLDR can still produce overly complex trees, which are not explainable to non-experts.

This study proposes GP4NLDR, a web-based dashboard that utilises an LLM-powered chatbot to explain GP-NLDR trees. We opt for a web-based architecture to enhance the accessibility of our research to the diverse audience identified in our study. Leveraging an intuitive graphical user interface with rich visualisations simplifies interaction with the system, contrasting with alternative delivery methods such as command-line processes and code libraries. While we constrain our study's scope to GP-NLDR, our framework is applicable to many GP applications, laying the groundwork for significant advances in explainable GP.

Major Contributions

• This study explores the feasibility of using LLMs such as ChatGPT to provide human-like explainability of GP expressions. It contributes to combining the fields of evolutionary computation and generative AI, a notably scarce approach in existing literature. We demonstrate that our proposed methodology can be extended to other applications of GP.
• Previous work [19]–[21] has presented state-of-the-art techniques for GP-NLDR. This study makes this research accessible by making our custom-built online system GP4NLDR (https://gp4nldr.streamlit.app/) publicly available.
The platform allows users to learn about GP-NLDR by running it on datasets using different fitness functions and run parameters. The GP expressions and trees are viewable together with the run results.
• Our proposed approach incorporates LLM-driven conversational interactions via a chatbot natural language interface. The chatbot is customised through prompt engineering and retrieval augmented generation to help strengthen the understanding of tree expressions and output. The GP4NLDR software interface allows the use of the chatbot with self-generated examples or through pre-loaded case studies.
• Finally, we contribute to the growing body of research highlighting limitations in using LLMs and the impact of hallucinations on XAI, with a unique perspective on these issues within explainable GP.

II. RELATED WORK ON XAI

Recent years have seen the emergence of diverse XAI techniques fostering the explainability of black-box models. Comprehensive analyses [22], [23] present the complexities and nuances of these numerous XAI strategies across broad interdisciplinary domains. Our focus is not to re-visit the extensive list already presented by those authors, but rather to highlight some as illustrative examples supporting the goal of our research. In predictive machine learning models, approaches such as SHAP [24] and LIME [25] provide insights for local and global explainability; Anchors [26] provides a set of rules under which predictions still hold with confidence; and DiCE [27] is used in modelling what-if counterfactuals. Previous studies demonstrate their use in domains such as healthcare [28]–[30] and education [31], [32]. However, these approaches target model developers capable of translating the interpretations into lay terms for communicating to stakeholders. Prior studies [33]–[35] have developed chatbots for end-users to engage in conversational exchanges, enhancing their understanding of these XAI tools' output. However, no studies have utilised groundbreaking large language models such as ChatGPT within this domain.

There is extensive literature that seeks to improve the explainability of GP [1] through approaches such as building smaller trees with bloat control [36] or using fewer features [37]. However, this poses the same challenges as the XAI tools previously discussed, where the output is targeted towards those knowledgeable in these concepts, failing to enable XAI from a user-centred perspective which caters to a broader, non-expert audience.

Communicating the explainability of AI systems has also been explored from a social sciences standpoint. Previous studies [6] highlight how the field of XAI may benefit from incorporating insights from philosophy, cognitive psychology/science, and social psychology to understand how humans define, generate, and evaluate explanations. Their work highlights how XAI may benefit from understanding how decisions are explained to humans and how humans articulate decisions to each other.

The role of natural language in generating explanations has been surveyed in prior studies [38]. The authors concluded that only a handful of recent XAI approaches either considered natural language explanations for end-users or implemented a method capable of generating such explanations. A recent review of works in the emerging field of interpreting LLMs and using them for explanation highlights that LLMs present an opportunity to redefine interpretability across a wide range of applications [39].
A recent study [40] proposes leveraging large language models for the automated analysis of optimisation algorithms within a web-based tool [41] for the generation of search trajectory networks. The authors highlight how this application of LLMs may enhance the user experience of the tool and bridge the knowledge gap for those without prior understanding of the application. However, no previous work has been identified using natural language chatbots to delve deeper into explaining GP expressions, nor their use in the field of NLDR.

Several notable context-based chatbot implementations have recently emerged in other domains, leveraging similar technologies to those implemented in our study. Aisha [42], a library chatbot, uses prompt engineering with a Chroma vector database together with LangChain and ChatGPT to deliver reference and support services to students and faculty through a Streamlit interface. In the medical domain, accGPT [43] is a ChatGPT-based chatbot that provides personalised imaging recommendations supporting clinical decision-making. It leverages LlamaIndex to access information within the American College of Radiology documentation.

Fig. 1: Overview of GP4NLDR Architecture

III. METHODOLOGY

A. GP4NLDR System

In this study, we used Streamlit (https://streamlit.io/), an open-source Python framework, to build an online web-based application, GP4NLDR, hosted on Streamlit Community Cloud (https://gp4nldr.streamlit.app/). The application incorporates existing GP-NLDR code bases from prior works in the field [19]–[21] to perform the NLDR. The process outputs one GP tree for each dimension of the new embedding, together with performance metrics. We then introduce the use of generative pre-trained transformer (GPT) LLMs to facilitate conversational question answering [44], to greatly improve the explainability of the trees found by GP-NLDR. We further developed our approach by incorporating intelligent prompt engineering and pre-initialising the LLM with additional knowledge from existing literature through the use of retrieval augmented generation, which guides it to deliver focused and targeted on-topic responses. We utilise the popular LangChain [45] framework to streamline the integration of LLMs and the workflow components. Fig. 1 depicts an overview of the GP4NLDR architecture.

The system provides a facility to run the GP-NLDR process on a given dataset or to view pre-loaded examples for quick use. After results are generated, the chat feature can be initialised. A written summary of the process is presented as interpreted by the LLM. Then, further dialogue conversations with the chatbot can commence. We elaborate on these stages more comprehensively in the following subsections.

Fig. 2: GP4NLDR System

Fig. 2 depicts the GP4NLDR system showing parameter options and dataset information. For ease of understanding the dataset, the original values are presented, along with the scaled data (scaled using Scikit-learn's MinMaxScaler) used in the dimensionality reduction process. We now discuss the design of each part of the system in turn.
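Before detailing the individual options, the following minimal sketch illustrates how such a front end can be laid out in Streamlit. It is illustrative only: the widget labels mirror the parameter options described next, but the helper run_gp_nldr and its return values are hypothetical stand-ins for the GP-NLDR code bases [19]–[21], not the actual GP4NLDR source.

```python
import streamlit as st

def run_gp_nldr(data, pop, gens, dims, fitness, bloat):
    """Hypothetical stand-in for the GP-NLDR code bases [19]-[21]; returns dummy results."""
    trees = [f"Dim{d}=f{d}" for d in range(dims)]        # one expression per embedding dimension
    fitness_history = [1.0 / (g + 1) for g in range(gens)]  # dummy fitness per generation
    return trees, fitness_history

st.title("GP4NLDR: Explainable GP-based NLDR")

# Sidebar widgets mirroring the run parameters of Section III-A1
pop = st.sidebar.number_input("Population Size", min_value=10, value=100)
gens = st.sidebar.number_input("Number of Generations", min_value=10, value=100)
dims = st.sidebar.selectbox("Final Dimensions", [1, 2, 3])
fitness = st.sidebar.selectbox("Fitness", ["GP-MaL", "GP-MaL-2", "UMAP Cost", "NRMSE"])
bloat = st.sidebar.selectbox("Bloat Control", ["None", "Lexicographic", "Double Tournament", "Tarpeian"])

data = st.file_uploader("Upload a dataset (CSV)")
if data is not None and st.button("Run GP-NLDR"):
    trees, history = run_gp_nldr(data, pop, gens, dims, fitness, bloat)
    for tree in trees:
        st.code(tree)          # tree expression for each embedding dimension
    st.line_chart(history)     # fitness per generation
```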
1) Parameter Options for the NLDR process:
• Population Size: the number of individuals in the population. A larger size may enhance the learning ability but increases computational complexity. A smaller size may lead to insufficient diversity and premature convergence.
• Number of Generations: how many iterations of the algorithm to perform. It requires a balance between allowing the population to evolve towards an optimal solution and avoiding extended computational costs. Monitoring convergence on the fitness plot may help determine a suitable value.
• Final Dimensions: how many dimensions the embedding should contain (i.e., the number of GP trees). Prior knowledge of the data domain or task requirements determines this number. Alternatively, for visualisation of the dataset, three or fewer dimensions would be chosen.
• Fitness: the fitness function measures the quality of the NLDR solution and helps guide the evolutionary process towards a better solution. Available options include GP-MaL [19], GP-MaL-2 (the first objective of [20]), UMAP Cost [21] and NRMSE [21].
• Bloat Control: optional techniques to help reduce the size of GP trees to prevent unnecessary growth, improving performance and tree interpretability. Options include: (1) lexicographic [46], a parsimony pressure method that prefers smaller trees when fitness values are equal (a minimal sketch of this rule follows the list); (2) double tournament [47], which uses two tournaments, one for fitness and one for size, with the selection of which tournament is run first and the probability that a smaller individual is chosen over a larger, more complex one; and (3) Tarpeian [48], which penalises large individuals during evolution according to a provided probability.
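As a rough illustration of the lexicographic option above, the core comparison can be written as follows. This is a minimal sketch of plain lexicographic parsimony pressure [46], assuming fitness is maximised; the bucketed variant configured in our runs (direct bucketing with a bucket value of 2) and the actual library implementation are not shown.

```python
from dataclasses import dataclass

@dataclass
class Individual:
    fitness: float  # assumed to be maximised
    size: int       # total number of nodes in the GP tree

def lexicographic_winner(a: Individual, b: Individual) -> Individual:
    """Prefer the fitter individual; break exact fitness ties by smaller tree size."""
    if a.fitness != b.fitness:
        return a if a.fitness > b.fitness else b
    return a if a.size <= b.size else b

# Equal fitness, so the smaller (more interpretable) tree wins the comparison.
assert lexicographic_winner(Individual(0.9, 35), Individual(0.9, 12)).size == 12
```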
2) Display of NLDR Results: On completion of the GP-NLDR process, the results are displayed for analysis. A summary of parameters is noted, followed by tree expressions and visualisations for each new dimension. The raw embedding result is presented alongside a plot depicting fitness per generation. If the embedding dimensionality is 3-D or lower, a visualisation of the embedding is provided: either as a 3-D rotational plot, a 2-D scatter plot, or a 1-D bar graph. A random forest classifier [49] implemented in Scikit-learn [50] with 10-fold cross-validation is also used to provide an estimated accuracy for both the original dataset and the new embedding, as a proxy of embedding quality (a sketch of this check follows this subsection).
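The embedding-quality proxy amounts to comparing cross-validated classification accuracy in the original and reduced spaces. A minimal sketch follows, using scikit-learn's bundled copy of the Wine data and a single original feature as a stand-in for a GP-produced dimension; the estimator settings are our assumption, as the paper specifies only [49], [50] and 10-fold cross-validation.

```python
from sklearn.datasets import load_wine
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = load_wine(return_X_y=True)   # original 13-feature space
X_embed = X[:, [6]]                 # stand-in embedding; GP4NLDR would evaluate the GP trees instead

clf = RandomForestClassifier(random_state=0)
acc_original = cross_val_score(clf, X, y, cv=10).mean()
acc_embedding = cross_val_score(clf, X_embed, y, cv=10).mean()
print(f"original: {acc_original:.4f}, embedding: {acc_embedding:.4f}")
```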
3) Chatbot: The chat feature is initialised upon entering a valid OpenAI API key, selecting an LLM (e.g., GPT-3.5 or GPT-4) for conversation, and confirming the approximate word limit for responses. The word limit is set to a default of 80 words. Too few words may return insufficient explanations; excess words may prolong response times and introduce verbosity, repetition, or tangential answers.

The pre-engineered prompt (discussed further in Section III-D) and the initial question "Provide an exciting summary of the results" are submitted to the LLM. The LLM returns a brief overview of the results as a starting answer for the conversation. Then, two-way conversational question-and-answering with memory retention begins, utilising retrieval augmented generation when required. At any stage, the results and chat history may be downloaded, allowing for reloading at a later point in time.

4) Pre-loaded Examples: The system provides exploration of previously generated GP-NLDR evolutionary runs, including the case studies presented in this work. The chat feature is available within each example to help further interpret the output. This facility allows for the reproducibility of our research for each GP-NLDR case study presented. Note, however, that LLM responses are sampled from a probability distribution and are therefore effectively non-deterministic: it may not be feasible to achieve identical explanations even though the input prompt remains unchanged. If desired, previously generated results from user experiments may also be reloaded here for further analysis.

B. Large Language Models

The rapid advancement in LLMs throughout this research project opened avenues to investigate the capabilities of both existing and emerging models, including open-source solutions. Following an evaluation of performance and accessibility for the task at hand (the evaluation process lies outside the contribution of this work and as such is not presented), OpenAI's ChatGPT-3.5 model (gpt-3.5-turbo) was adopted as the foundation for the development of GP4NLDR's chat feature. This Chat Generative Pre-trained Transformer model 3.5 is based on the transformer deep learning architecture. It is designed to generate human-like text in response to input questions, and this state-of-the-art language model excels at natural language processing and conversational exchanges.

We used the Python openai library to facilitate an authenticated connection to the OpenAI models, with requests submitted via the API endpoint. We refrained from explicitly including the model version suffix (for example, gpt-3.5-turbo-0613), allowing us to take advantage of continuous model upgrades and therefore ensuring we provide the safest and most capable model version. OpenAI regularly upgrades model versions; thus, for the long-term viability of our research, it was important to mitigate deprecation issues stemming from tying the research to specific model versions. Additional options are provided within the chatbot for using the legacy model GPT-3 and the most recent addition, GPT-4. We use the default LLM model parameter settings, with the temperature set to zero to encourage response consistency. To access the models in the chat function, a valid OpenAI API key is required (available at https://openai.com/).
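A minimal sketch of one conversational turn is shown below, assuming the pre-1.0 openai Python library that was current at the time of writing. GP4NLDR itself routes these calls through LangChain (Section III-E), so this is illustrative rather than the production code; the growing messages list is what provides the memory retention mentioned above.

```python
import openai  # pre-1.0 API; the key is read from the OPENAI_API_KEY environment variable

messages = [{"role": "system", "content": "You are an expert on GP and NLDR. ..."}]  # pre-engineered prompt

def ask(question: str, model: str = "gpt-3.5-turbo") -> str:
    """Submit one chat turn; appending both sides of the exchange retains conversational memory."""
    messages.append({"role": "user", "content": question})
    response = openai.ChatCompletion.create(
        model=model,      # unversioned alias, so OpenAI's continuous upgrades apply automatically
        messages=messages,
        temperature=0,    # default settings otherwise; zero temperature for response consistency
    )
    answer = response["choices"][0]["message"]["content"]
    messages.append({"role": "assistant", "content": answer})
    return answer

print(ask("Provide an exciting summary of the results."))
```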
C. Retrieval Augmented Generation

At the time of writing, ChatGPT-3.5 was trained on data up until the end of September 2021. Consequently, with some research information beyond its reach or in publicly unavailable studies, many recent concepts in the evolving GP field are unknown to the model. Retrieval augmented generation (RAG) [51] is a technique to address this limitation. RAG builds a vector store/database of vector embeddings from relevant documents. By performing vector searching using similarity metrics, relevant information is extracted and injected as contextual background information into the user's prompt. This helps fill knowledge gaps in the model, provides it with recent insights, and presents a cost-effective and dynamic alternative to pre-training or fine-tuning models.

The GP4NLDR processor centres on the articles referenced in previous studies [19]–[21], but can be easily extended to other methods. A vector store of these papers was constructed by generating vector embeddings of the documents, which was then made available to the application. A computationally expensive vector database (one providing full create, read, update, and delete functionality) was not needed for this use case, and so we opted to use FAISS [52], Facebook's AI Similarity Search vector index library (https://faiss.ai/). Given a fixed number of stored articles, with no requirement to add additional files or update existing ones, FAISS is a very efficient and suitable option.

The vector store is integrated into the application chat feature for OpenAI models. During conversational chatting, user-provided questions are analysed against a pre-defined set of keywords: gp-mal, gpmal, gpmal2, gp-mal2, gp-mal-2, tarp, lexi, tourn, umap, nrmse (the keyword list is further customisable in the configuration settings of the application software). In our initial prototypes (without a vector store), using these keywords often returned responses of limited usefulness, even on occasion provoking hallucinations, as these abbreviations are less prevalent within the model training data. When questioned about these keywords in the context of GP through the ChatGPT OpenAI interface, the LLM did not consistently provide accurate responses. Hence, should these keywords be present, RAG is activated, and the FAISS vector index is queried to fetch relevant background information. Upon retrieval, the information is injected into the prompt. For queries outside the keyword list, it is expected that the model maintains enough background knowledge and the prompt is sufficient to acquire an informative response to the query. This process can be seen within Fig. 1.
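The retrieval step can be sketched as follows using the classic LangChain and FAISS APIs that were current at the time of writing; the document paths, chunking parameters, and k are illustrative assumptions rather than GP4NLDR's actual configuration.

```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import FAISS

KEYWORDS = ("gp-mal", "gpmal", "gpmal2", "gp-mal2", "gp-mal-2",
            "tarp", "lexi", "tourn", "umap", "nrmse")

# Build the index once from the reference papers [19]-[21] (file paths are illustrative).
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
texts = [open(path).read() for path in ("gpmal.txt", "gpmal_mo.txt", "gp_umap.txt")]
index = FAISS.from_documents(splitter.create_documents(texts), OpenAIEmbeddings())

def retrieve_context(question: str, k: int = 3) -> str:
    """Return background passages only when a domain keyword triggers RAG."""
    if not any(kw in question.lower() for kw in KEYWORDS):
        return ""  # rely on the model's own background knowledge
    hits = index.similarity_search(question, k=k)
    return "\n\n".join(doc.page_content for doc in hits)
```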
D. Prompt Engineering

Careful consideration was given to our prompt development to elicit informative and consistently reliable responses. Fig. 3 shows the entire initial prompt using an example from the Wine case study presented later in our results. Bolded text represents the automatic injection of content from the specific example.
• Fig. 3(a) establishes the context for the discussion, directing the LLM to focus on genetic programming and non-linear dimensionality reduction.
• Fig. 3(b): the fitness function GP-MaL-2 is not explicitly mentioned within the publications in the vector store. Consequently, we define it explicitly in the prompt.
• Fig. 3(c) explains the operators used in the GP algorithm.
• Fig. 3(d) informs the LLM of the name of the dataset and a summary of the parameters used.
• The dataset features are listed in Fig. 3(e). Should the feature list exceed 40, we replace it with the text "f0 to fn-1 (as a dataset with n features)". This tweak avoids exceeding the token limit for large datasets, such as COIL20, with more than 1000 features.
• Fig. 3(f) provides the LLM with the dataset dimension specifications and resulting expressions.
• Providing the classification accuracy of the original and reduced space in Fig. 3(g) informs the LLM how well the NLDR process performed.
• Specifying the response word count when initialising the LLM allows flexibility in token usage during chatting, with the allowance specified within the prompt in Fig. 3(h).
• Fig. 3(i) guides the LLM further in the expectations for response content, ensuring that information in the prompt is not repeated.
• Should the question contain keywords, background information is retrieved from the vector store and injected in Fig. 3(j).
• Fig. 3(k) requests an initial response from the LLM to provide an overview of the results.
• Fig. 3(l) shows an example initial conversational chat dialogue between the Human and the AI.

Fig. 3: Prompt Example.
"You are an expert on genetic programming (GP) and non-linear dimensionality reduction (NLDR). You are to discuss explaining the results of the GP expressions from NLDR. GP-MaL evolves functional mappings from a high-dimensional space to a lower dimensional space through the use of interpretable trees. GP-MaL-2 is different from GP-MaL. It is the first objective of GP-MaL-MO. The operators used in expressions are: x (multiply), ÷ (divide), max (maximum), min (minimum), + (add), — (subtract), ReLU (relu), sig (sigmoid), if (if), |—| (absolute subtract), |+| (absolute add). The dataset is called Wine. Summary of parameters is Population Size: 100. Generations: 100. Fitness: GP-MaL. Bloat: Lexicographic ... Measure: Total Nodes ... Bucket Method: Direct Bucketing ... Bucket Value: 2. The features are Alcohol (f0), Malic acid (f1), Ash (f2), Alcalinity of ash (f3), Magnesium (f4), Total phenols (f5), Flavanoids (f6), Nonflavanoid phenols (f7), Proanthocyanins (f8), Color intensity (f9), Hue (f10), OD280/OD315 of diluted wines (f11), Proline (f12). Using GP the original 13 features have been reduced to new features in 2 dimensions, using the mapping expressions: Dim0=f6; Dim1=x(sig(ReLU(—(f12, f9))), ReLU(+(|—|(-0.5951, f9), max(-0.8216, f0)))). The accuracy of the original dataset is 0.9833. The accuracy of the new dimensional space is 0.9333. Given the text of the question, it is your job to answer that question in approximately 80 words. There is no requirement for giving background to GP nor NLDR unless asked. The feature expressions are stated in the prompt and do not need restating in the answer. Use feature names rather than abbreviations in the discussion. State the original features that are important in the GP expressions for each dimension, and those that are less important. Try to give an explanation about why the features are very important or less important in the context of the dataset. For context here is background related to the question: {...}
Human: Provide an exciting summary of the results.
AI: The GP-MaL algorithm successfully reduced the original 13 features of the Wine dataset to 2 new dimensions. In the first dimension, the feature expression includes the Flavanoids (f6) feature, which suggests that this feature is important in distinguishing between different wine samples. In the second dimension, the expression includes the Proline (f12), Color intensity (f9), and Alcohol (f0) features, indicating their significance in the dataset. The other features are less important in the context of the dataset, as they are not included in the GP expressions.
Human: explain the 2nd dimension further
AI:"
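A simplified sketch of how this template can be assembled from run results is given below; the wording is abridged, and the function and its arguments are illustrative rather than the GP4NLDR source.

```python
def build_prompt(dataset, params, features, expressions, acc_orig, acc_new, words=80, context=""):
    """Assemble the Fig. 3 initial prompt; `features` maps ids ("f0") to names ("Alcohol")."""
    if len(features) <= 40:
        feature_list = ", ".join(f"{name} ({fid})" for fid, name in features.items())
    else:  # token-limit tweak from Fig. 3(e) for large datasets such as COIL20
        feature_list = f"f0 to f{len(features) - 1} (as a dataset with {len(features)} features)"
    prompt = (
        "You are an expert on genetic programming (GP) and non-linear "
        "dimensionality reduction (NLDR). ... "                                 # Fig. 3(a)-(c), abridged
        f"The dataset is called {dataset}. Summary of parameters is {params}. "  # Fig. 3(d)
        f"The features are {feature_list}. "                                     # Fig. 3(e)
        f"Using GP the original {len(features)} features have been reduced to "
        f"{len(expressions)} dimensions, using the mapping expressions: "
        + "; ".join(expressions) + ". "                                          # Fig. 3(f)
        f"The accuracy of the original dataset is {acc_orig}. "
        f"The accuracy of the new dimensional space is {acc_new}. "              # Fig. 3(g)
        f"Answer each question in approximately {words} words. "                 # Fig. 3(h)
    )
    if context:  # Fig. 3(j): background retrieved by RAG for keyword questions
        prompt += f"For context here is background related to the question: {context}"
    return prompt
```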
E. LangChain

LangChain [45] provides a modular framework for building applications powered by LLMs. The toolkit offers flexibility for integrating a diverse range of LLM model variants. Its versatile structure and functionality facilitated the integration of RAG into our application workflow. Preserving conversational memory within chatbots is paramount, and LangChain seamlessly facilitated the memory retention process.

F. GP4NLDR Evaluation

We demonstrate the capabilities of GP4NLDR and chat interactions over three case studies. The first two case studies are based on the Wine [53] and Dermatology [54] datasets, which contain meaningful feature names. The final case study uses the larger COIL-20 [55] dataset with 1024 features, lacking descriptive feature names. These examples demonstrate the system's behaviour across a range of different parameter options:
• A small dataset with 13 features and 178 instances through to a larger dataset of 1024 features and 1,440 instances.
• Different fitness functions (GP-MaL and GP-MaL-2).
• Reducing to two or three final dimensions.
• The use of lexicographic bloat control compared to no bloat control.
• From 100 generations through to 1000 generations.

Furthermore, these case studies investigate the calibre of chatbot responses in the following situations:
• Supplying descriptive feature names compared to sequentially allocating non-descriptive feature names, which may limit background information.
• Using keywords such as gpmal to engage with RAG.
• Subjective questioning, for instance, querying how good results are.
• Asking questions using terminology not identical but similar to feature names in the dataset.
• Probing the importance of features.
• The multilingual capabilities of LLMs.

In presenting the evaluation of each case study, we showcase subsections of the system results for illustration while depicting the complete interface in the Appendix. It is not feasible to demonstrate all possible parameter settings and scenarios; a curated selection has been chosen, emphasising those deemed most meaningful in showcasing our research results. We pose questions in a manner that aligns with lay users. This demographic of users stands to benefit most from our study, as they more typically rely on an intermediary party to translate existing ML explainability tools into summary text.

The results are evaluated manually by comparing the correctness of the generated chatbot responses to the results depicted in the GP expression trees. In addition, we manually evaluate and discuss the quality and accuracy of the responses to queries unrelated to the trees, which more specifically target dataset domain, GP, or NLDR questions. As the work presented in this study is the first to use LLMs to provide explanations of GP trees, difficulty lies in benchmarking our approach and providing measurable metrics of accuracy. Our developed system (https://gp4nldr.streamlit.app/) has been made publicly available for further experimentation and testing. The presented case study examples are viewable within the application and may be further analysed using the chatbot. However, we note, as touched on earlier, that generating identically worded responses from subsequent questioning using the same prompt may not be achievable due to the inherent nature of LLMs. In this work we perceive this as an advantage, imparting a sense of personalised responses to the user rather than generic explanations.

IV. RESULTS

A. Wine Case Study

The Wine dataset (https://archive.ics.uci.edu/dataset/109/wine) consists of 13 features and 178 instances detailing the chemical analysis of three types of Italian wine. The GP4NLDR process is run using the GP-MaL fitness function, reducing the Wine dataset to two dimensions after 100 generations using a population size of 100 and lexicographic bloat control.

Fig. 4: Wine Case Study Trees. (a) First Dimension; (b) Second Dimension.
Fig. 5: Wine Case Study Plots.
Fig. 6: Wine Case Study Chat.

The first embedding dimension (GP tree) shown in Fig. 4(a) is a single node, representing the Flavanoids feature. The second tree, shown in Fig. 4(b), utilises the Proline, Color intensity, and Alcohol features. The fitness plot in Fig. 5 shows the function converges quickly, and the three classes are easily distinguishable within the 2-D embedding plot.

Initiating the chatbot allows further investigation into the results. Including the feature names within the dataset structure feeds the LLM additional context when considering why some features are present in the new dimensional space while others remain absent. The overview initially generated is illustrated in Fig. 6(a). A brief discussion of the dimensionality reduction is given, together with noting the features present in each dimension and deemed important. In Fig. 6(b), when asked to "explain the 2nd dimension further", the LLM expands the justification for the inclusion of each feature by providing definitions of the features and their relationship to the dataset. When questioning the LLM about a specific feature ("what is hue?") in Fig. 6(c), the LLM gives an overview of its definition followed by its contribution to the results. In this example, Hue was not part of the embedding and hence not deemed as important. Supplying the accuracy of the embeddings when classified by a random forest algorithm may elicit somewhat subjective opinions from the LLM when asked if it is a "good" reduction. Fig. 6(d) shows the LLM believes this example is "effective", with "a slight decrease in accuracy" from 0.9833 to 0.9333.
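To make the evolved mapping concrete, the two Wine expressions can be transcribed into plain code using the operator glossary from the Fig. 3 prompt (x multiply, — subtract, |—| absolute subtract). This transcription is ours, for illustration; GP4NLDR renders the trees directly.

```python
import math

def relu(v):
    return max(0.0, v)

def sig(v):
    return 1.0 / (1.0 + math.exp(-v))

def wine_embedding(f):
    """Map one Wine instance (f[0]=Alcohol, ..., f[12]=Proline) to the 2-D embedding."""
    dim0 = f[6]  # Dim0 = f6 (Flavanoids)
    # Dim1 = x(sig(ReLU(—(f12, f9))), ReLU(+(|—|(-0.5951, f9), max(-0.8216, f0))))
    dim1 = sig(relu(f[12] - f[9])) * relu(abs(-0.5951 - f[9]) + max(-0.8216, f[0]))
    return dim0, dim1
```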
B. Dermatology Case Study

The Dermatology dataset (https://archive.ics.uci.edu/dataset/33/dermatology) with 34 features classifies the type of erythemato-squamous disease into six classes (psoriasis, seborrheic dermatitis, lichen planus, pityriasis rosea, chronic dermatitis, and pityriasis rubra pilaris). Twelve features are clinical evaluations, with a further 22 histopathological features from skin samples. There are 358 instances in total. The GP4NLDR process is run using the GP-MaL-2 fitness function, reducing the Dermatology dataset to three dimensions over 200 generations using a population size of 100 and lexicographic bloat control.

Fig. 7: Dermatology Case Study Plot.
Fig. 8: Dermatology Case Study Chat.

Fig. 7 illustrates the 3-D plot representing the reduced embedding. The three new dimensions each use between four and six features. Once more, including the feature names within the dataset structure has assisted the chatbot in providing contextual conversational exchanges. When asked "What if I am really old and itchy?", the LLM maps the word "old" to the feature age and the word "itchy" to the feature itching. Fig. 8(a) shows it subsequently responds that neither of these features appears in the tree expressions, and hence they have less influence or relevance in determining the type of skin condition.

In asking "Is GP-MaL better than GP-MaL-2?" in Fig. 8(b), the LLM notes it has not been supplied with information detailing specific comparisons between the two functions. This response may be perceived as uninformative; nonetheless, it shows that clever prompt engineering can help deter hallucinations. Throughout prompt development, experiments demonstrated the LLM's susceptibility to generating inaccurate information. Fig. 9 illustrates this concern by asking the question "Explain what GP-Mal is" through the online ChatGPT Web Interface (https://chat.openai.com/). ChatGPT invents the definition "Generative Pretrained Transformer for Malicious Software" and endeavours to discuss it convincingly. This example is testament to the mitigation of such hallucinatory outcomes by incorporating tailored prompt engineering.

Fig. 9: ChatGPT Web Interface (https://chat.openai.com/).

C. COIL-20 Case Study

The Columbia Object Image Library (COIL-20) dataset (https://www.cs.columbia.edu/CAVE/software/softlib/coil-20.php) consists of gray-scale images of 20 objects. For each object, a photographed image was captured every 5 degrees as it was rotated 360 degrees on a motorised turntable, giving 72 images for each object. Each of these images represents one row in the dataset, giving 1,440 rows in total for the 20 objects. The original 128x128 pixel image size is cropped to 32x32 pixels, producing a 1024-dimensional feature vector for each example. The absence of feature names necessitates generically assigned attribute labels f0 to f1023. The full results in the supplementary material illustrate the output of the NLDR process, which produces somewhat large trees. The fitness plot depicts the gradual improvement in the function over the 1000 generations. In reducing the dimensions from 1024 to 2, accuracy has decreased from 0.9868 to 0.6375.

Fig. 10: COIL Case Study Chat.

Using the chatbot, we ask "What makes a feature important?". In Fig. 10(a), the LLM informs us that important features have a significant impact on the mapping from the high-dimensional space to the low-dimensional space. It continues to explain that importance is determined by the feature's appearance and usage in the expressions: frequently used features, or those having a strong influence on the mapping, are considered important.

To further explore the importance of features in the dataset, we ask the LLM in Fig. 10(b) to name the ten most significant features. Initially, it assesses the first dimension. f509 appears twice, with one instance high up in the tree, having a significant influence on the outcome. f602 also appears high up in the tree, presents itself four times, and is on both sides of the root max node. f423 is featured in the next level, and f583 appears five times, with two instances in the subsequent level. Moving to the second dimension, f56, with six occurrences, has two of these instances high up in the tree on both sides of the root max node. f778, with four occurrences, sits alongside f56 on the third level of the tree. f1022, f814, f157, and f770 complete the top 10.

To highlight the multilingual capabilities of the chatbot, we ask it in French "Est-ce une bonne réduction?", meaning "Is this a good reduction?". The chatbot also responds in French, as shown in Fig. 10(c). Using Google Translate, we can translate this back to English, which gives us: "Reducing 1024 features to 2 dimensions using GP-MaL-2 is not very good. The precision of the reduced dimensional space is only 0.6375, which is significantly lower than the precision of the original dataset (0.9868). This suggests that reduced features may not capture enough information or discriminate well between different classes or patterns in the COIL20 dataset. It is possible that the mapping expressions used in GP-MaL-2 did not effectively capture the underlying data structure". The multilingual ability of LLMs is a significant opportunity for making advances in AI accessible to a wider audience.
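The importance ranking above rests on how often (and how high in the tree) each terminal appears. A quick, illustrative way to sanity-check the occurrence counts is to scan the expression strings; the example expression below is hypothetical, and this check is ours rather than a GP4NLDR component.

```python
import re
from collections import Counter

def feature_counts(expression: str) -> Counter:
    """Count occurrences of each terminal (f0, f1, ...) in a GP expression string."""
    return Counter(re.findall(r"\bf\d+\b", expression))

dim0 = "max(f509 - f602, f602 * abs(f423 - f583))"  # hypothetical stand-in expression
print(feature_counts(dim0).most_common())  # [('f602', 2), ('f509', 1), ('f423', 1), ('f583', 1)]
```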
V. DISCUSSION

The experiments in this study confirm the effectiveness of an XAI dashboard in communicating the results of GP-NLDR. Leveraging LLMs such as ChatGPT effectively contributes to user-centred explanations through conversational chatbot technology. Employing AI-powered web-based applications such as GP4NLDR draws on the latest cutting-edge research, delivering state-of-the-art tools to individuals. In this section, we discuss several of our findings in more detail. We believe that aspects of this discussion could be very useful in guiding the development of methods integrating GP and LLMs.

Prompt engineering is a dynamic and evolving field requiring careful crafting to steer models towards relevant and accurate responses. Recently, it has gained significant attention due to its pivotal role in shaping the behaviour of LLMs. The trend towards formalising prompt structures has given rise to defined prompt techniques such as zero-shot, few-shot, chain-of-thought, tree-of-thought, and more. In this work, we adopt a combination of techniques. Structured prompting can effectively maintain a uniform tone in chatbot responses; however, in our setting, this is not of paramount concern. Our developed prompt, although slightly verbose and unstructured, introduces novelty and diversity, enabling the LLM to craft its responses creatively if desired. Avoiding explicitly requesting a fixed response structure, such as bullet points, sentences, paragraphs, or abbreviations, contributes to enhancing engagement with the chatbot. Furthermore, in targeting a user-centred approach, we do not seek to impose restrictions on response style, which may potentially hinder ingenuity and interest when generating explanations. However, allowing the user an option to indicate their level of comprehension may facilitate a more tailored response tone, which could be addressed in future work.

Data privacy concerns within LLM-powered applications continue to be at the forefront of discussions in research and industry. End-users interacting with AI applications should seek reassurance in knowing the confidentiality and security of their data is maintained, especially sensitive and personal information. In this work, we demonstrated within the prompt template that no raw data is transmitted to the LLM, only the dataset name and feature list. Nonetheless, this does not prevent the user from entering sensitive information and transmitting it voluntarily.

Hallucinations are a growing concern in developing applications integrating LLMs. Throughout the development of this work, we witnessed entirely fictional information returned from the models following questioning. To reduce the incidence of hallucinations, we integrated our tailored prompt template with retrieval augmented generation (RAG). This generally addressed the problem, but, unfortunately, no robust solution has yet been identified to circumvent these situations. Throughout the development of the prompt, we encountered guardrails imparting superfluous advice not pertinent to providing further explanations. Ongoing research in LLMs is expected to address this. We also note other recent concerns arising with the use of LLMs in applications, such as adversarial attacks [56] and bias within the models and their benchmarks [57]. It is not within the scope of this work to delve into these issues further, but we acknowledge these challenges are of ongoing concern and necessitate further research.

Rapid advancements are frequently seen in the fast-moving domain of generative AI. During the preparation of this paper, recent announcements such as ChatGPT Enterprise have been regularly released. This edition boasts an extended token limit of 32k (four times the current capacity), enterprise-grade privacy and security, and the expansion of model knowledge through integration with company data (https://openai.com/blog/introducing-chatgpt-enterprise). Within our research setting, an increased token limit may enable the complete list of feature names for higher-dimensional datasets to be included in the LLM prompt, eliminating the need for truncation. Occasionally, exceptionally lengthy GP expressions may surpass the current token limit; an extended token limit may be advantageous in such scenarios. With the addition of extra privacy measures, the prompt could be enriched to include a subset of dataset rows; supplementing the model with this information may enhance the conversational explanations. The facility to integrate company data would be an ideal alternative to using RAG. These innovations will continue to address ongoing concerns in developing AI-driven applications.

Future work could delve deeper into the use of LLMs for explaining GP expressions in other fundamental machine learning tasks. Exploring other retrieval methods and/or alternative vector store approaches has the potential to further improve the efficacy of our framework. In addition, exploring alternative architectures to the LangChain framework used in this study may offer further avenues for harnessing LLMs. Our work touched on the feasibility of leveraging other open-source LLMs; further development of tailored prompt templates and consideration of fine-tuning these models could be advantageous in assessing their performance in comparison with ChatGPT. Exploring the extension of chat parameters, such as explicitly targeting different audiences, may improve user experience and the understanding of explanations. To more rigorously validate our research, future work will include human evaluation of the explanations [58]. Through user-group experiences we may assess the quality of results on a larger scale and endeavour to provide measurable benchmarks for use in subsequent research within this domain.

VI. CONCLUSION

This study presented a novel dashboard application to explain the results of GP-based nonlinear dimensionality reduction. Our proposed approach cohesively incorporates a variety of techniques, including a user interface, visualisation, a large language model chatbot, retrieval augmented generation, and prompt engineering, to provide a system that greatly improves the explainability of GP. This is the first study of its kind encapsulating these elements within a unified system, spanning the domains of evolutionary computation and generative AI. We presented three robust case studies to highlight the usability of our research in this field. Incorporating a chatbot built on groundbreaking LLM techniques provides significant improvements to the explainability of GP expressions, with potential implications for the wider GP community. Furthermore, we have highlighted how leveraging LLMs for conversation provides a user-centred approach accommodating the needs of a diverse audience. Our work has contributed to the gap in research around leveraging generative AI in explainable evolutionary computation.

REFERENCES

[1] Y. Mei, Q. Chen, A. Lensen, B. Xue, and M. Zhang, "Explainable artificial intelligence by genetic programming: A survey," IEEE Transactions on Evolutionary Computation, vol. 27, no. 3, pp. 621–641, 2023.
[2] T. P. Quinn, S. Jacobs, M. Senadeera, V. Le, and S. Coghlan, "The three ghosts of medical AI: Can the black-box present deliver?" Artificial Intelligence in Medicine, vol. 124, p. 102158, 2022.
[3] Z. Salahuddin, H. C. Woodruff, A. Chatterjee, and P. Lambin, "Transparency of deep neural networks for medical image analysis: A review of interpretability methods," Computers in Biology and Medicine, vol. 140, p. 105111, 2022.
[4] O. Kuiper, M. van den Berg, J. van der Burgt, and S. Leijnen, "Exploring explainable AI in the financial sector: Perspectives of banks and supervisory authorities," in Artificial Intelligence and Machine Learning, L. A. Leiva, C. Pruski, R. Markovich, A. Najjar, and C. Schommer, Eds. Cham: Springer International Publishing, 2022, pp. 105–119.
[5] M. Ribera and A. Lapedriza, "Can we do better explanations? A proposal of user-centered explainable AI," in Intelligent User Interfaces (IUI) Workshops, 2019.
[6] T. Miller, "Explanation in artificial intelligence: Insights from the social sciences," Artificial Intelligence, vol. 267, pp. 1–38, 2019.
[7] P. Maddigan and T. Susnjak, "Chat2vis: Fine-tuning data visualisations using multilingual natural language text and pre-trained large language models," arXiv preprint arXiv:2303.14292, 2023.
[8] ——, "Chat2vis: Generating data visualizations via natural language using ChatGPT, Codex and GPT-3 large language models," IEEE Access, vol. 11, pp. 45181–45193, 2023.
[9] C. Haider, F. O. de Franca, B. Burlacu, F. Bachinger, G. Kronberger, and M. Affenzeller, Shape-constrained Symbolic Regression: Real-World Applications in Magnetization, Extrusion and Data Validation. Singapore: Springer Nature Singapore, 2024, pp. 225–240.
[10] S. Nguyen, D. Thiruvady, Y. Sun, and M. Zhang, "Genetic-based constraint programming for resource constrained job scheduling," arXiv preprint arXiv:2402.00459, 2024.
[11] Q. Fan, Y. Bi, B. Xue, and M. Zhang, "A genetic programming-based method for image classification with small training data," Knowledge-Based Systems, vol. 283, p. 111188, 2024.
[12] Q. U. Ain, B. Xue, H. Al-Sahaf, and M. Zhang, "Skin cancer detection with multimodal data: A feature selection approach using genetic programming," in Data Science and Machine Learning, D. Benavides-Prado, S. Erfani, P. Fournier-Viger, Y. L. Boo, and Y. S. Koh, Eds. Singapore: Springer Nature Singapore, 2024, pp. 254–269.
[13] L. Wu, L. Yuan, G. Zhao, H. Lin, and S. Z. Li, "Deep clustering and visualization for end-to-end high-dimensional data analysis," IEEE Transactions on Neural Networks and Learning Systems, pp. 1–12, 2022.
[14] Q. V. Nguyen, M. Lin Huang, and S. Simoff, "Enhancing scatter-plots with start-plots for visualising multi-dimensional data," in 2020 24th International Conference Information Visualisation (IV), 2020, pp. 80–85.
[15] S. J. Fernstad, A. Macquisten, J. Berrington, N. Embleton, and C. Stewart, "Quality metrics to guide visual analysis of high dimensional genomics data," in EuroVis Workshop on Visual Analytics (EuroVA), C. Turkay and K. Vrotsou, Eds. The Eurographics Association, 2020.
[16] A. Agrawal and C. McComb, "Comparing strategies for visualizing the high-dimensional exploration behavior of CPS design agents," in 2022 IEEE Workshop on Design Automation for CPS and IoT (DESTION), 2022, pp. 64–69.
[17] A. Lensen, B. Xue, and M. Zhang, "Genetic programming for evolving a front of interpretable models for data visualization," IEEE Transactions on Cybernetics, vol. 51, no. 11, pp. 5468–5482, 2021.
[18] T. Uriot, M. Virgolin, T. Alderliesten, and P. A. N. Bosman, "On genetic programming representations and fitness functions for interpretable dimensionality reduction," in Proceedings of the Genetic and Evolutionary Computation Conference. ACM, 2022.
[19] A. Lensen, B. Xue, and M. Zhang, "Can genetic programming do manifold learning too?" in Proceedings of the European Conference on Genetic Programming (EuroGP), Lecture Notes in Computer Science. Springer International Publishing, 2019, vol. 11451, pp. 114–130.
[20] A. Lensen, M. Zhang, and B. Xue, "Multi-objective genetic programming for manifold learning: balancing quality and dimensionality," Genetic Programming and Evolvable Machines, vol. 21, no. 3, pp. 399–431, 2020.
[21] F. Schofield and A. Lensen, "Using genetic programming to find functional mappings for UMAP embeddings," in 2021 IEEE Congress on Evolutionary Computation (CEC), 2021, pp. 704–711.
[22] L. Longo, M. Brcic, F. Cabitza, J. Choi, R. Confalonieri, J. Del Ser, R. Guidotti, Y. Hayashi, F. Herrera, A. Holzinger, R. Jiang, H. Khosravi, F. Lecue, G. Malgieri, A. Páez, W. Samek, J. Schneider, T. Speith, and S. Stumpf, "Explainable artificial intelligence (XAI) 2.0: A manifesto of open challenges and interdisciplinary research directions," Information Fusion, vol. 106, p. 102301, 2024.
[23] S. Ali, T. Abuhmed, S. El-Sappagh, K. Muhammad, J. M. Alonso-Moral, R. Confalonieri, R. Guidotti, J. Del Ser, N. Díaz-Rodríguez, and F. Herrera, "Explainable artificial intelligence (XAI): What we know and what is left to attain trustworthy artificial intelligence," Information Fusion, vol. 99, p. 101805, 2023.
[24] S. M. Lundberg and S.-I. Lee, "A unified approach to interpreting model predictions," in Proceedings of the 31st International Conference on Neural Information Processing Systems, ser. NIPS'17. Curran Associates, 2017, pp. 4768–4777.
[25] M. T. Ribeiro, S. Singh, and C. Guestrin, ""Why should I trust you?" Explaining the predictions of any classifier," in Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. Association for Computing Machinery, 2016, pp. 1135–1144.
[26] J. Klaise, A. Van Looveren, G. Vacanti, and A. Coca, "Alibi Explain: Algorithms for explaining machine learning models," Journal of Machine Learning Research, vol. 22, no. 181, pp. 1–7, 2021.
[27] R. K. Mothilal, A. Sharma, and C. Tan, "Explaining machine learning classifiers through diverse counterfactual explanations," in Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, 2020, pp. 607–617.
[28] M. Venturini, I. Van Keilegom, W. De Corte, and C. Vens, "Predicting time-to-intubation after critical care admission using machine learning and cured fraction information," Artificial Intelligence in Medicine, vol. 150, p. 102817, 2024.
[29] P. Maddigan and T. Susnjak, "Forecasting patient demand at urgent care clinics using machine learning," arXiv preprint arXiv:2205.13067, 2022.
[30] I. Hussain and R. Jany, "Interpreting stroke-impaired electromyography patterns through explainable artificial intelligence," Sensors, vol. 24, no. 5, 2024.
[31] F. Afrin, M. Hamilton, and C. Thevathyan, "Exploring counterfactual explanations for predicting student success," in Computational Science – ICCS 2023, J. Mikyška, C. de Mulatier, M. Paszynski, V. V. Krzhizhanovskaya, J. J. Dongarra, and P. M. Sloot, Eds. Cham: Springer, 2023, pp. 413–420.
[32] N. R. Raji, R. M. S. Kumar, and C. L. Biji, "Explainable machine learning prediction for the academic performance of deaf scholars," IEEE Access, vol. 12, pp. 23595–23612, 2024.
[33] M. Kuźba and P. Biecek, "What would you ask the machine learning model? Identification of user needs for model explanations based on human-model conversations," in ECML PKDD 2020 Workshops. Springer International Publishing, 2020, pp. 447–459.
[34] M. Guimaraes, J. Baptista, and M. Sousa, "A conversational interface for interacting with machine learning models," 2022.
[35] V. B. Nguyen, J. Schlötterer, and C. Seifert, "Explaining machine learning models in natural conversations: Towards a conversational XAI agent," arXiv preprint arXiv:2209.02552, 2022.
[36] S. Luke and L. Panait, "A comparison of bloat control methods for genetic programming," Evolutionary Computation, vol. 14, no. 3, pp. 309–344, 2006.
[37] B. Tran, B. Xue, and M. Zhang, "Genetic programming for feature construction and selection in classification on high-dimensional data," Memetic Computing, vol. 8, no. 1, pp. 3–15, 2016.
[38] E. Cambria, L. Malandri, F. Mercorio, M. Mezzanzanica, and N. Nobani, "A survey on XAI and natural language explanations," Information Processing & Management, vol. 60, no. 1, p. 103111, 2023.
[39] C. Singh, J. P. Inala, M. Galley, R. Caruana, and J. Gao, "Rethinking interpretability in the era of large language models," arXiv preprint arXiv:2402.01761, 2024.
[40] C. C. Sartori, C. Blum, and G. Ochoa, "Large language models for the automated analysis of optimization algorithms," arXiv preprint arXiv:2402.08472, 2024.
[41] C. Chacón Sartori, C. Blum, and G. Ochoa, "STNWeb: A new visualization tool for analyzing optimization algorithms," Software Impacts, vol. 17, p. 100558, 2023.
[42] Y. Lappalainen and N. Narayanan, "Aisha: A custom AI library chatbot using the ChatGPT API," Journal of Web Librarianship, vol. 17, no. 3, pp. 37–58, 2023.
[43] A. Rau, S. Rau, D. Zöller, A. Fink, H. Tran, C. Wilpert, J. Nattenmüller, J. Neubauer, F. Bamberg, M. Reisert, and M. F. Russe, "A context-based chatbot surpasses radiologists and generic ChatGPT in following the ACR appropriateness guidelines," Radiology, vol. 308, no. 1, p. e230970, 2023.
[44] M. Zaib, W. E. Zhang, Q. Z. Sheng, A. Mahmood, and Y. Zhang, "Conversational question answering: A survey," Knowledge and Information Systems, vol. 64, pp. 3151–3195, 2022.
[45] H. Chase, "LangChain," Oct. 2022. [Online]. Available: https://github.com/hwchase17/langchain
[46] S. Luke and L. Panait, "Lexicographic parsimony pressure," in Proceedings of the 4th Annual Conference on Genetic and Evolutionary Computation, 2002, pp. 829–836.
[47] ——, "Fighting bloat with nonparametric parsimony pressure," in Parallel Problem Solving from Nature — PPSN VII. Springer, 2002, pp. 411–421.
[48] R. Poli, "A simple but theoretically-motivated method to control bloat in genetic programming," in Genetic Programming. Springer, 2003, pp. 204–217.
[49] L. Breiman, "Random forests," Machine Learning, vol. 45, no. 1, pp. 5–32, 2001.
[50] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay, "Scikit-learn: Machine learning in Python," Journal of Machine Learning Research, vol. 12, pp. 2825–2830, 2011.
[51] P. Lewis, E. Perez, A. Piktus, F. Petroni, V. Karpukhin, N. Goyal, H. Küttler, M. Lewis, W.-t. Yih, T. Rocktäschel, S. Riedel, and D. Kiela, "Retrieval-augmented generation for knowledge-intensive NLP tasks," in Advances in Neural Information Processing Systems, H. Larochelle, M. Ranzato, R. Hadsell, M. Balcan, and H. Lin, Eds., vol. 33. Curran Associates, Inc., 2020, pp. 9459–9474.
[52] J. Johnson, M. Douze, and H. Jégou, "Billion-scale similarity search with GPUs," IEEE Transactions on Big Data, vol. 7, no. 3, pp. 535–547, 2019.
[53] S. Aeberhard and M. Forina, "Wine," UCI Machine Learning Repository, 1991.
[54] N. Ilter and H. Guvenir, "Dermatology," UCI Machine Learning Repository, 1998.
[55] S. A. Nene, S. K. Nayar, and H. Murase, "Columbia Object Image Library (COIL-20)," Department of Computer Science, Columbia University, Tech. Rep. CUCS-005-96, February 1996.
[56] A. Zhou, B. Li, and H. Wang, "Robust prompt optimization for defending language models against jailbreaking attacks," arXiv preprint arXiv:2401.17263, 2024.
[57] T. R. McIntosh, T. Susnjak, T. Liu, P. Watters, and M. N. Halgamuge, "Inadequacies of large language model benchmarks in the era of generative artificial intelligence," arXiv preprint arXiv:2402.09880, 2024.
[58] C. van der Lee, A. Gatt, E. van Miltenburg, S. Wubben, and E. Krahmer, "Best practices for the human evaluation of automatically generated text," in Proceedings of the 12th International Conference on Natural Language Generation, K. van Deemter, C. Lin, and H. Takamura, Eds. Tokyo, Japan: Association for Computational Linguistics, 2019, pp. 355–368.

Appendix

Fig. 11: Wine Case Study Results from GP4NLDR
Fig. 12: Wine Case Study Chat Conversation Examples
Fig. 13: Dermatology Case Study Results from GP4NLDR
Fig. 14: Dermatology Case Study Chat Conversation Examples
Fig. 15: COIL20 Case Study Results from GP4NLDR
Fig. 16: COIL20 Case Study Chat Conversation Examples
KNOWLEDGEDISTILLATIONFROMLANGUAGEMODELTOACOUSTICMODEL:AHIERARCHICALMULTI-TASKLEARNINGAPPROACHMun-HakLeeandJoon-HyukChangDepartmentofElectronicsEngineeringHanyangUniversity,Seoul,RepublicofKoreaABSTRACTTheremarkableperformanceofthepre-trainedlanguagemodel(LM)usingself-supervisedlearninghasledtoamajorparadigmshiftinthestudyofnaturallanguageprocessing.Inlinewiththesechanges,leveragingtheperformanceofspeechrecognitionsystemswithmassivedeeplearning-basedLMsisamajortopicofspeechrecognitionresearch.AmongthevariousmethodsofapplyingLMstospeechrecognitionsys-tems,inthispaper,wefocusonacross-modalknowledgedis-tillationmethodthattransfersknowledgebetweentwotypesofdeepneuralnetworkswithdifferentmodalities.Wepro-poseanacousticmodelstructurewithmultipleauxiliaryout-putlayersforcross-modaldistillationanddemonstratethattheproposedmethodeffectivelycompensatesfortheshort-comingsoftheexistinglabel-interpolation-baseddistillationmethod.Inaddition,weextendtheproposedmethodtoahi-erarchicaldistillationmethodusingLMstrainedindifferentunits(senones,monophones,andsubwords)andrevealtheef-fectivenessofthehierarchicaldistillationmethodthroughanablationstudy.IndexTerms—automaticspeechrecognition,knowledgedistillation,multi-tasklearning,cross-modaldistillation,lan-guagemodel,acousticmodel1.INTRODUCTIONHumanstransmitlanguageinformationthroughvoicesig-nals.Automaticspeechrecognition(ASR)isatechnologythatextractslanguageinformationinthespokenvoicesignal,facilitatingnaturalcommunicationbetweenhumansandma-chines.Theacousticmodel(AM)isamajorcomponentofthespeechrecognitionsystemandplaysaroleinestimatingmeaningfulrecognitionunits(words,subwords,phonemes,letters,etc.)fromagivenspeechsignal.TheAMlearnsthecorrelationbetweenspeechandtextusingahuman-taggedspeech-transcriptionpaireddataset.However,ifsufficienttrainingdataarenotsecuredoradomainismismatchedbe-tweenthetrainingandtestdatasets,theperformanceofthespeechrecognitionsystemisdegraded,aconsiderableobsta-cletothestableoperationofthespeechrecognitionsystem.ThankstoXYZagencyforfunding.Onewaytoovercomethislimitationistousealanguagemodel(LM).TheLMlearnstheconditionalprobabilitydis-tributionofthewordsequencesusingalargeunannotatedcorpusandhelpsthespeechrecognitionsystemtrainedonlywithlimitedpaireddatatomodelunseenwordsorsentenceswell.Recently,thedeeplearning-basedLM[1,2]hassuc-cessfullyovercomethelong-termdependencyproblemandsparsityproblem,whichareshortcomingsofthestatisticalLMs,andlarge-scaleLMspre-trainedwithself-supervisedlearningframeworks[3]haveachievedremarkablesuccessinthefieldofnaturallanguageprocessing.Therefore,makingfulluseoftheremarkableperformanceofdeeplearning-basedLMsinspeechrecognitionsystemsisamajorresearchdirectionforspeechrecognitiontechnology.Fromshallowfusion[4]tore-scoringmethods[5,6],vari-ousstudieshavebeenconductedtoapplydeeplearning-basedLMstoASRsystems.Amongthesevariousresearchtopics,wefocusonmethodsusingknowledgedistillation[7,8,9].Theknowledgedistillationmethodhasadvantagesinthatitdoesnotincreasethecomputationinthedecodingprocessofthespeechrecognitionsystemandcanbecombinedwithshal-lowfusionorre-scoringmethodstofurtherenhancerecogni-tionperformance.Inthispaper,weproposeanovelknowledgedistilla-tionmethodbasedonmulti-tasklearningandapplytheproposedmethodtotheAMofthehiddenMarkovmodel(HMM)-basedhybridspeechrecognitionsystem[10]andtheattention-basedsequence-to-sequence(seq2seq)speechrecognitionmodel[11].Theproposedmethodoutperformstheexistinglabel-interpolation-basedknowledgedistillationintheseq2seqspeechrecognitionsystem[8,9]andhastheadvantageofoperatingmorestab
lyatvarioushyper-parametersettings.Unliketheoriginalknowledgedistil-lation,themulti-tasklearning-basedapproachcantransferbetweenneuralnetworkswithdifferentoutputunits(suchasAM/LMinHMM-basedspeechrecognitionsystems).Usingthesefeatures,weproposeahierarchicaldistillationmethodthattransferstheknowledgeofmultipleLMswithdifferentoutputunitstoanHMM-basedAM,effectivelyenhancingtheclassificationperformanceoftheAM.Tothebestofourknowledge,thisisthefirstattempttoconductknowledgedistillationfrompre-trainedLMstoHMM-basedAMs.arXiv:2110.10429v1 [cs.LG] 20 Oct 2021 Fig.1.SchematicdiagramofhierarchicaldistillationmethodappliedtoDNN-HMM-basedhybridspeechrecognitionsys-tem.Instage1,wetrainLMswithdifferentoutputunits(senones,monophones,andsubwords)throughmanualtok-enizationandG2Pprocesses,andinstage2,wetransfertheknowledgeofthetrainedLMstoasingleacousticmodel.2.BACKGROUND2.1.KnowledgedistillationAmodelensembleofmultiplemodelsexhibitshighergen-eralizationperformancethanasinglemodel[12].[7]pro-posedaknowledgedistillationmethodthatcantransfertheperformanceofamassiveensemblemodeltoalightermodel.In[7],theknowledgeoftheteachermodelisassumedtobecompressedinitsoutputdistribution,andthestudentmodelistrainedtomimictheoutputoftheteachermodel.There-fore,thismethodisalsocalledthestudent-teachermethod.Thelearninglossofthestudentnetworkforknowledgedis-tillationisasfollows:LKD=KXi=1D(exp(vi/T)ΣKjexp(vj/T),expuiΣKjexpuj),(1)whereDisadistancemeasure,(v,u)denotetheunnormal-izedoutputofthe(teacher,student)models,Krepresentsthenumberofclasses,andTisthetemperaturevalue.2.2.Cross-modaldistillationwiththelanguagemodelCross-modaldistillationisausefultechniquetoalleviatethechronicdatashortageproblemofdeeplearninginthatitcantransferknowledgelearnedwithrichmodalitiestoothermodalitiesprovidedwithonlylimitedlabeleddata[13].Inthischapter,webrieflyintroducethelearnspellingfromteachers(LST)method,across-modaldistillationmethodproposedby[8].TheLSTmethodtransferstheknowledgeoftheteacherLMtotheseq2seqAMsharingthesameoutputunitusingthelossfunctionbelow:LLST=λLCE+(1−λ)LKD,(2)Fig.2.Deduplication:Severalconsecutiveoverlappingla-belsaredistributedintheforcedalignmentgeneratedthroughtheGaussianMixture-HiddenMarkovModel-basedspeechrecognitionsystem(GMM-HMM-basedASR).Weremovetheoverlappedlabelsthroughthededuplicationalgorithm(thisalgorithmalsoincludesmatchingtheunitsofforcedalignmentandLMoutputs).Rearrangement:Therearrange-mentalgorithmalignsthelistofposteriordistributionsoftheLMone-to-onewiththeforcedalignmentbeforethededupli-cationprocess.whereLCEisthecross-entropylossbetweenstudentnetworkoutputandtruelabelandλ∈[0,1]denotesthetuningfactor.IftheKullback–Leiblerdivergence(KLD)isusedasadis-tancemeasureforLKD,theaboveequationisequivalenttocalculatingtheKLDbetweenthenewtargetdistributiongen-eratedbyinterpolatingtwolabels(softlabelandhardlabel)andtheoutputofthestudentnetwork,wherethesoftlabelisthenormalizedteachermodeloutput.Thefollowingequationcanexpressthis:LLST=−KXi=1ˆPilogexpuiΣKjexpuj,(3)ˆPi=λYi+(1−λ)exp(vi/T)ΣKjexp(vj/T),whereYisaone-hotencodedtruelabeldistribution.There-fore,thismethodisreferredtoaslabelinterpolation-basedknowledgedistillation[8].Thismethodconciselyintegratessupervisedlearninglossandknowledgedistillationlossintoone,butithasclearlimitations.First,forlabelinterpola-tion,theteacherandthestudentmodelsmustsharethesameoutputunit.Thisconstraintlimitsthealgorithmscalability;forexample,itisimpossibletodistillknowledgebetweenneuralnetswithdifferentoutputs,suchastheLMandAMofanHMM-basedspeechrecognitionsystem.Second,la-belsmoothingisperformedinth
elabelinterpolationprocess,whichcancauseanunder-confidenceprobleminthesecondandthirdbestclassesofthenetworkoutput.Wedealwiththeseissuesinmoredetailintheappendix[14].Third,theLSTmethodhastwohyper-parameters(T,λ)toadjustthesoftlabelsharpnessandinterpolationratio,andthenetworkperformancerespondssensitivelyaccordingtothesettingof thetwotuningfactors.Thus,whentraininganetworkwithanewdataset,themethodmustgothroughahyper-parametersearchingprocessthattakessubstantialtimeandcomputation.Theproposedmethodseparatesthetwotasksofsupervisedlearningandknowledgedistillation,solvingtheproblemsoftheinterpolation-basedknowledgedistillationmethod.3.PROPOSEDMETHOD3.1.Amulti-tasklearningapproachforknowledgedistil-lationTheproposedmulti-tasklearning-basedknowledgedistil-lationmethodisdesignedtocompensatefortheshortcom-ingsofthelabelinterpolation-basedknowledgedistillationmethod.AsillustratedinFigure1,theproposedAMconsistsofsharedencodinglayers(2D-convolutionneuralnetwork(CNN)+transformer),twolinearlayersforsupervisedlearn-ing,andtwoauxiliarylinearlayersforknowledgedistillation.Therefore,theproposedmodelhastwooutputs,andeachout-putapproximatesadifferenttargetdistribution(hardandsoftlabels):Lproposed=−KXi=1(λYilogexpuSLiΣKjexpuSLj+(1−λ)exp(vi/T)ΣKjexp(vj/T)logexpuKDiΣKjexpuKDj)(4)whereuKDistheunnormalizedoutputforknowledgedis-tillationanduSLdenotestheunnormalizedoutputforsuper-visedlearning.Wesummarizetheadvantagesoftheproposedmulti-tasklearning-basedknowledgedistillationmethodasfollows:1.TheLSTmethodhastheconstraintthattheteacherandstudentsharethesameoutputunit.Theproposedmethodisfreefromtheseconstraintsandisabletousepre-trainedLMwithvariousoutputunits.2.ItispossibletotransfertheknowledgeofLMswithdifferentoutputunitstooneAM,andknowledgedistil-lationusingvariousintermediatelevelunitsimprovesAMperformance.3.TheLSTmethodhastwohyper-parameters(T,λ),andtherecognitionperformancevariesgreatlyaccordingtothechangeinhyper-parameters.Theproposedmethodworksstablyinmosthyper-parametersettings.4.TheLSTmodelistrainedbytargetingthesmoothedlabelgeneratedthroughlabelinterpolation,amplify-ingthecalibrationerrorforthesecondandthirdbestclassesoftheAM.Inaddition,thecalibrationerroroftheAMleadstothedeteriorationofbeamsearchdecodingperformance[15,16].Theproposedmethodsolvesthisproblembyseparatingknowledgedistilla-tionandsupervisedlearningtasks.3.2.Hierarchicalknowledgedistillationthroughmulti-tasklearningAlanguagehasahierarchicalstructurecomposedofsen-tences,words,andcharacters.[17,18]usedthishierarchicalstructuretoimprovetheclassificationperformanceofAMs.Takingadvantageofthemulti-tasklearningapproach,wetransfertheknowledgeofmultipleLMswithdifferentoutputunits(monophonesandsubwords)toasingledeepneuralnetwork(DNN)HMM-basedAMwithasenone(decision-tree-basedtri-phone)astheoutputunit.Twotypesofempir-icaltechniquesareneededtoproceedwiththishierarchicaldistillation.Thefirstisamanualtokenizingandgraphemetophoneme(G2P)algorithm.Throughthis,wecanreplacetheunannotatedcorpuswithasmallerunitofinterest,andtheforcedalignmentcanbereplacedwiththedesiredrecognitionunitthroughinversetransformation.ThesecondistoaligntheLMposteriordistributionandspeechfeaturearrange-mentsothattheDNNcanbetrained.AsdepictedinFigure2,wecreateaframe-wiseLMposteriorfromforcedalign-mentthroughtwosteps(deduplicationandrearrangement)andtransfertheLMknowledgetoaDNN-HMM-basedAMusingthismethod.4.EXPERIMENTALSETUP4.1.DatasetsWeconductedtheexperimentsusingtheLibriSpeechdataset.TheLibriSpeechdatasetconsistsof460hofthecleantrainingset,500hofthemorechallengingtrainingset,andseparatevalidatio
3.2. Hierarchical knowledge distillation through multi-task learning

A language has a hierarchical structure composed of sentences, words, and characters. [17, 18] used this hierarchical structure to improve the classification performance of AMs. Taking advantage of the multi-task learning approach, we transfer the knowledge of multiple LMs with different output units (monophones and subwords) to a single deep neural network (DNN)-HMM-based AM with a senone (decision-tree-based tri-phone) as the output unit. Two types of empirical techniques are needed to proceed with this hierarchical distillation. The first is a manual tokenizing and grapheme-to-phoneme (G2P) algorithm. Through this, we can replace the unannotated corpus with a smaller unit of interest, and the forced alignment can be replaced with the desired recognition unit through inverse transformation. The second is to align the LM posterior distribution and the speech feature arrangement so that the DNN can be trained. As depicted in Figure 2, we create a frame-wise LM posterior from forced alignment through two steps (deduplication and rearrangement) and transfer the LM knowledge to a DNN-HMM-based AM using this method.
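As an illustration of the two-step alignment, here is a minimal sketch of how a frame-wise LM posterior could be constructed from a forced alignment; the function names and the array-based interface are our assumptions, not the paper's implementation.

```python
import itertools
import numpy as np

def framewise_lm_posterior(frame_labels, lm_posterior_fn):
    """Align LM posteriors with the frame rate of the speech features.

    frame_labels: per-frame unit IDs from forced alignment, e.g. [5, 5, 2, 2, 2, 9]
    lm_posterior_fn: hypothetical callable mapping a deduplicated unit
        sequence to one posterior vector per unit (shape [n_units, vocab]).
    """
    # Step 1 (deduplication): collapse runs of identical frame labels into
    # a unit-level sequence, remembering how many frames each unit spanned.
    units, run_lengths = [], []
    for unit, run in itertools.groupby(frame_labels):
        units.append(unit)
        run_lengths.append(sum(1 for _ in run))
    posteriors = lm_posterior_fn(units)                # [len(units), vocab]
    # Step 2 (rearrangement): repeat each unit's posterior over its frames,
    # restoring the original frame rate so the DNN can be trained against it.
    return np.repeat(posteriors, run_lengths, axis=0)  # [n_frames, vocab]
```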
4. EXPERIMENTAL SETUP

4.1. Datasets

We conducted the experiments using the LibriSpeech dataset. The LibriSpeech dataset consists of 460 h of the clean training set, 500 h of the more challenging training set, and separate validation and test sets. In all experiments, we trained the ASR system using only 100 h of the training set. The test set of LibriSpeech is divided into "clean" and "other". We used the text corpus with about 40 million sentences from the LibriSpeech dataset for all LM training.

4.2. Model architecture

For the end-to-end (E2E) ASR experiment, we used an attention-based seq2seq structure. The encoder and decoder of the seq2seq network consist of a transformer (12 and 6 layers, respectively), and two 2D-CNN-based subsampling layers are added to the encoder input. For E2E ASR, we used 5,000 subwords [19] as output units for both the LM and the seq2seq AM. For LM training, we used a simple network structure in which four layers of long short-term memory were stacked after the embedding layer. As the acoustic model of the DNN-HMM-based hybrid speech recognition system, we used a DNN structure in which 12 transformer layers and two linear output layers were stacked on top of a 2D-CNN of two layers. We created forced alignments for DNN training with the trained GMM-HMM model, and the forced alignment consists of 4,160 senones. We additionally used a tri-gram-based external LM in the decoding process of the experiment using the hybrid speech recognition system [20]. The hybrid and seq2seq ASR systems were trained using the 80-dimensional filterbank feature, and SpecAugment was applied to ensure stable performance [21]. The speech recognition systems we used in the experiments are based on the code of ESPnet and Kaldi [22, 23].

Fig. 3. Effect of varying the interpolation factor λ on the LibriSpeech test-clean. We quantitatively compare the interpolation-based knowledge distillation method (LST) with the proposed method.

5. RESULTS AND ANALYSIS

5.1. Learn spelling from teacher vs. the proposed method

This chapter compares the differences between the interpolation-based knowledge distillation method and the proposed multi-task learning approach. We used a seq2seq model with similar settings to [8] for the experiments, and let both the teacher network (LM) and the student network (seq2seq) share the same output unit (5,000 subwords). As the first experiment, we diversified the interpolation factor (λ) in the loss function of (3). We fixed the T value to 5.0, and the experimental results are presented at the top of Figure 3. Second, we added two task-specific layers to the seq2seq model decoder and performed knowledge distillation using the loss function of (4). We experimented by changing the value of λ while fixing the value of T at 1.0. The experiments confirmed that the method of (3) is highly dependent on the λ value, and in some experiments it has lower performance than the baseline model trained only with supervised learning. However, the proposed loss function (4) exhibited more stable performance across parameter settings and better performance than the interpolation method in all experiments.

Table 1. AM classification accuracy (ACC) and word error rate (WER) for the hybrid ASR system. We show the results of an ablation study using three types of LMs with different output units.

LM unit (senone / phone / subword)   ACC (%)   WER (%)
- / - / -                            83.23     7.98
X / - / -                            84.26     7.85
- / X / -                            84.41     7.51
- / - / X                            84.60     7.38
X / X / -                            84.44     7.43
X / X / X                            84.85     7.26

5.2. Hierarchical knowledge distillation through multi-task learning

In this chapter, we experimented with a hierarchical distillation method that transfers the knowledge of several LMs with different outputs to a single AM using multiple auxiliary layers. We trained three LMs of the same structure with different output units. The first LM shares the same output unit (4,160 senones) as the AM. The second LM was trained with 41 monophones (phones) as the output unit, and the last LM was trained with 5,000 subwords [19] as the output unit. In the knowledge distillation experiments using only one LM, the experiment using the subword unit demonstrated the highest relative performance gain, of about 7.5%. In the hierarchical distillation experiments using multiple LMs, the experiment using all three LMs had the highest relative performance gain, of 9%. We list these results in Table 1. These experimental results are consistent with previous studies in that adding an auxiliary task using the hierarchical structure of speech improves the classification performance of the AMs [17, 18].

6. CONCLUSION

In this study, we proposed a new acoustic model training method that combines multi-task learning and knowledge distillation. We experimentally demonstrated that the proposed method compensates for the weaknesses of the interpolation-based knowledge distillation method. In addition, we proposed a hierarchical distillation method using the hierarchical structure of speech, reducing the relative error rate of the speech recognition system by 9%. The knowledge distillation algorithm proposed in this study has a strong advantage: distributed pre-trained LMs can be used regardless of the output unit if an appropriate manual tokenizing/G2P algorithm is available. We plan to actively use this advantage to conduct knowledge distillation experiments with larger LMs, such as BERT [3], in the future.

7. REFERENCES

[1] Yoshua Bengio, Réjean Ducharme, Pascal Vincent, and Christian Janvin, "A neural probabilistic language model," The Journal of Machine Learning Research, vol. 3, pp. 1137–1155, 2003.
[2] Tomas Mikolov, Martin Karafiát, Lukas Burget, Jan Cernocký, and Sanjeev Khudanpur, "Recurrent neural network based language model," 2010.
[3] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova, "BERT: Pre-training of deep bidirectional transformers for language understanding," CoRR, vol. abs/1810.04805, 2018.
[4] Ronan Collobert, Awni Hannun, and Gabriel Synnaeve, "A fully differentiable beam search decoder," in International Conference on Machine Learning, 2019.
[5] Martin Sundermeyer, Hermann Ney, and Ralf Schlüter, "From feedforward to recurrent LSTM neural networks for language modeling," IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 23, no. 3, pp. 517–529, 2015.
[6] Xunying Liu, Xie Chen, Yongqiang Wang, Mark J. F. Gales, and Philip C. Woodland, "Two efficient lattice rescoring methods using recurrent neural network language models," IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 24, no. 8, pp. 1438–1449, 2016.
[7] Geoffrey Hinton, Oriol Vinyals, and Jeff Dean, "Distilling the knowledge in a neural network," in Proceedings of the NIPS Deep Learning and Representation Learning Workshop, 2015.
[8] Ye Bai, Jiangyan Yi, Jianhua Tao, Zhengkun Tian, and Zhengqi Wen, "Learn spelling from teachers: Transferring knowledge from language models to sequence-to-sequence speech recognition," INTERSPEECH, 2019.
[9] Hayato Futami, Hirofumi Inaguma, Sei Ueno, Masato Mimura, Shinsuke Sakai, and Tatsuya Kawahara, "Distilling the knowledge of BERT for sequence-to-sequence ASR," INTERSPEECH, 2020.
[10] Alex Graves, Abdel-rahman Mohamed, and Geoffrey Hinton, "Speech recognition with deep recurrent neural networks," in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), IEEE, 2013.
[11] Dzmitry Bahdanau, Jan Chorowski, Dmitriy Serdyuk, Philemon Brakel, and Yoshua Bengio, "End-to-end attention-based large vocabulary speech recognition," in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), IEEE, 2016.
[12] Thomas G. Dietterich, "Ensemble methods in machine learning," in International Workshop on Multiple Classifier Systems, Springer, 2000, pp. 1–15.
[13] Won Ik Cho, Donghyun Kwak, Ji Won Yoon, and Nam Soo Kim, "Speech to text adaptation: Towards an efficient cross-modal distillation," INTERSPEECH, 2020.
[14] Mun-Hak Lee and Joon-Hyuk Chang, "Knowledge distillation from language model to acoustic model (appendix)," arXiv preprint, 2021.
[15] Jan Chorowski and Navdeep Jaitly, "Towards better decoding and language model integration in sequence to sequence models," INTERSPEECH, 2017.
[16] Mun-Hak Lee and Joon-Hyuk Chang, "Deep neural network calibration for E2E speech recognition system," INTERSPEECH, 2021.
[17] Kalpesh Krishna, Shubham Toshniwal, and Karen Livescu, "Hierarchical multitask learning for CTC-based speech recognition," arXiv preprint arXiv:1807.06234, 2018.
[18] Shubham Toshniwal, Hao Tang, Liang Lu, and Karen Livescu, "Multitask learning with low-level auxiliary tasks for encoder-decoder based speech recognition," INTERSPEECH, 2017.
[19] Taku Kudo and John Richardson, "SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing," in Proceedings of the Conference on Empirical Methods in Natural Language Processing, 2018.
[20] Andreas Stolcke, "SRILM — an extensible language modeling toolkit," in International Conference on Spoken Language Processing, 2002.
[21] Daniel S. Park, William Chan, Yu Zhang, Chung-Cheng Chiu, Barret Zoph, Ekin D. Cubuk, and Quoc V. Le, "SpecAugment: A simple data augmentation method for automatic speech recognition," INTERSPEECH, 2019.
[22] Shinji Watanabe, Takaaki Hori, Shigeki Karita, Tomoki Hayashi, Jiro Nishitoba, Yuya Unno, Nelson Enrique Yalta Soplin, Jahn Heymann, Matthew Wiesner, Nanxin Chen, et al., "ESPnet: End-to-end speech processing toolkit," INTERSPEECH, 2018.
[23] Daniel Povey et al., "The Kaldi speech recognition toolkit," in IEEE 2011 Workshop on Automatic Speech Recognition and Understanding (ASRU), 2011.

KNOWLEDGE DISTILLATION FROM LANGUAGE MODEL TO ACOUSTIC MODEL: A HIERARCHICAL MULTI-TASK LEARNING APPROACH (APPENDIX)

Mun-Hak Lee and Joon-Hyuk Chang
Department of Electronics Engineering, Hanyang University, Seoul, Republic of Korea
(Thanks to XYZ agency for funding.)

1. EXPECTED CALIBRATION ERROR

The purpose of model calibration is to ensure that the output probability distribution of the network accurately reflects the probability of correct answers for each class. Therefore, if a well-calibrated model is used, we can grasp not only the prediction result but also the accuracy of the prediction. A perfectly calibrated network satisfies the following equation:

P(Y = i \mid \hat{p}(X) = p) = p_i \quad \text{for } i = 1, \ldots, k

where p = (p_1, \ldots, p_k), Y is the true label, and \hat{p} is the output probability distribution of the k-class classification model. The most representative calibration measure is the expected calibration error (ECE) in Eq. (1):

\mathrm{ECE} = \sum_{i=1}^{b} \frac{|B_i|}{n} \, \big| \mathrm{acc}(B_i) - \mathrm{conf}(B_i) \big| \qquad (1)

\mathrm{acc}(B_i) = \frac{1}{|B_i|} \sum_{m \in B_i} \mathbf{1}(\hat{y}_m = y_m), \qquad \mathrm{conf}(B_i) = \frac{1}{|B_i|} \sum_{m \in B_i} \hat{p}_m,

where b is the number of bins and n is the total number of data points. ECE measures the difference between accuracy and confidence (the probability value for the one-best class) per bin. It is also important to determine a suitable binning method for the calibration measurements. For this, we used a method of generating bins by sorting the classification results in a mini-batch according to confidence scores. Alignment was performed once for each class and once for each mini-batch. If binning is performed in this way, the variance within each bin is minimized; this helps to identify calibration errors for each confidence value.
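For reference, a minimal NumPy sketch of Eq. (1) is given below; the equal-mass, confidence-sorted binning mirrors the description above, though the exact per-class/per-mini-batch alignment used in the paper is not reproduced here.

```python
import numpy as np

def expected_calibration_error(probs, labels, n_bins=15):
    """ECE of Eq. (1) for an [n, k] matrix of predicted distributions."""
    conf = probs.max(axis=1)                     # confidence of the 1-best class
    correct = (probs.argmax(axis=1) == labels).astype(float)
    # Sort by confidence and split into (roughly) equal-mass bins, which
    # minimizes the within-bin variance as described above.
    order = np.argsort(conf)
    n = len(labels)
    ece = 0.0
    for bin_idx in np.array_split(order, n_bins):
        if bin_idx.size == 0:
            continue
        acc_b = correct[bin_idx].mean()          # acc(B_i)
        conf_b = conf[bin_idx].mean()            # conf(B_i)
        ece += (bin_idx.size / n) * abs(acc_b - conf_b)
    return ece
```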
2. CALIBRATION METHODS

2.1. Label smoothing

The method that is often used to solve overconfidence in speech recognition systems is label smoothing. Label smoothing uses the target vector smoothed through the following equation for network training, and prevents the network from generating excessively large output values for one class. Many previous studies have shown that label smoothing is helpful for calibrating neural networks [?, ?, ?].

y_{\mathrm{smooth}} = y_{\mathrm{1hot}} - \epsilon \big( y_{\mathrm{1hot}} - \tfrac{1}{k} \, y_{\mathrm{ones}} \big),

where y_{\mathrm{ones}} = (1, \ldots, 1), y_{\mathrm{1hot}} are the one-hot target vectors, k is the number of classes, and \epsilon \in [0, 1].

2.2. Temperature scaling

Temperature scaling adjusts the sharpness of the output by dividing the unnormalized output of the network by the temperature value (t). The t value is trained in a direction that minimizes the loss on the validation set, and the parameters of the classification network are kept fixed during this process [?]. In the decoding process of the speech recognition system, a graph is searched by summing the probability distributions of independently trained modules as follows, and each t value is also trained independently for each module:

\hat{W} = \arg\max_{W} \Big\{ \tfrac{1}{t_1} \log P(X \mid W) + \tfrac{1}{t_2} \log P(W) \Big\} \qquad (2)

where t_m is a scalar value, W is the word sequence, and X is a feature.

3. LABEL SMOOTHING VS TEMPERATURE SCALING

Label smoothing and temperature scaling are both widely used calibration methods. Both methods have in common that they adjust the output distribution of the model using a single scalar value (T and ε, respectively). However, while label smoothing is applied during model training, temperature scaling is a post-hoc calibration method that adjusts the output of an already trained model. Many existing speech recognition papers have shown that label smoothing reduces the calibration error of neural networks [?, ?, ?]. However, we show in Figure 1 that the label smoothing method reduces the 1-best class calibration error of the network while amplifying the calibration error for the 2nd and 3rd best classes. The calibration error of these 2nd and 3rd best classes can hinder the performance of beam search decoding, which combines multiple probabilistic models such as language models and acoustic models [?, ?].

Fig. 1. Confidence is the posterior probability value for the Nth best class, and the calibration error is the difference between the average accuracy and the average confidence for each bin [?]. (a) A graph comparing the bin-wise averaged confidence and accuracy of the top 3 classes of a transformer AM calibrated using the temperature scaling method. We can see that the calibration error is relatively small. (b) The case calibrated using the label smoothing method: label smoothing only generates a well-calibrated posterior probability for the 1-best class (top), but tends to be under-confident for the 2nd and 3rd best (middle, bottom) classes. This is because the label smoothing method gives a small probability value to all trivial classes. This calibration error causes a problem in the beam search decoding stage [?, ?]. We created the graph by dividing LibriSpeech test-other into 15 bins in total.
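To summarize the contrast, here are minimal NumPy sketches of both calibration methods; the ε and t values are illustrative, and in practice t would be fitted on a validation set as described in Section 2.2.

```python
import numpy as np

def smooth_labels(y_onehot, eps=0.1):
    """Label smoothing (training-time): move eps of the probability mass
    from the one-hot target toward a uniform distribution over k classes."""
    k = y_onehot.shape[-1]
    return y_onehot - eps * (y_onehot - np.ones_like(y_onehot) / k)

def temperature_scale(logits, t=1.5):
    """Temperature scaling (post-hoc): soften the logits of an already
    trained model by a scalar t before normalizing."""
    z = logits / t
    z = z - z.max(axis=-1, keepdims=True)   # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)
```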
ai_researcher
3
Generating_Better_Items_for_Cognitive_Assessments_Using_Large_Language_Models.pdf
The creative psychometric item generator: a framework for item generation and validation using large language models

Antonio Laverghetta Jr.¹, Simone Luchini¹, Averie Linell², Roni Reiter-Palmon² and Roger Beaty¹

¹Department of Psychology, The Pennsylvania State University, 201 Old Main, University Park, Pennsylvania, USA
²Department of Psychology, University of Nebraska at Omaha, 6001 Dodge Street, Omaha, Nebraska, USA

Abstract
Increasingly, large language models (LLMs) are being used to automate workplace processes requiring a high degree of creativity. While much prior work has examined the creativity of LLMs, there has been little research on whether they can generate valid creativity assessments for humans, despite the increasingly central role of creativity in modern economies. We develop a psychometrically inspired framework for creating test items (questions) for a classic free-response creativity test: the creative problem-solving (CPS) task. Our framework, the creative psychometric item generator (CPIG), uses a mixture of LLM-based item generators and evaluators to iteratively develop new prompts for writing CPS items, such that items from later iterations will elicit more creative responses from test takers. We find strong empirical evidence that CPIG generates valid and reliable items and that this effect is not attributable to known biases in the evaluation process.

Keywords
automated item generation, prompt engineering, artificial intelligence

CREAI 2024: Workshop on Artificial Intelligence and Creativity, Santiago de Compostela (Spain), 19-24 October, 2024
[email protected] (A. L. Jr.)
© 2024 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).

1. Introduction

Figure 1: Overview of CPIG. From a base instruction, we prompt an LLM to generate CPS items, which are, in turn, completed by other LLMs. We give each LLM response generator a distinct profile to increase variability in the originality of their solutions. These responses are scored with an originality model developed by [1], and a subset of the generated items with highly original responses are selected to include in the prompt for the next round of item generation. This figure was designed using images from Flaticon.com.

Creativity is considered one of the primary factors that determine individual [2] and organizational [3] success in the modern economy. This is due to improved automation of routine tasks [4],
With the introduction of modern large language models (LLMs) [6, 7] the ability of AI to automatically develop novel creativity tests appears increasingly plausible [8], and LLMs are already being used to automatically generate items measuring a variety of cognitive skills [9, 10, 11]. Applying similar ideas in creativity assessment could provide a method to generate valid and reliable creativity tests at scale, which would be beneficial for assessing creativity in both humans and AI. However, doing so may also be contentious for some, given the broader debate on whether AI can be creative. Despite some evidence pointing towards AI creativity, whether AI-generated ideas are truly novel remains a hotly debated topic [12, 13]. Some research suggests that using LLMs may lower the diversity of ideas produced over time, resulting in reduced collective novelty [14, 15]. Public perception of the creativity of AI also remains mixed; humans tend to view creative works produced by AI as less novel than those produced by other humans [14], and this could be problematic if humans become aware that they are being given AI-generated creativity tests. Broader research in social psychology has found that LLMs produce highly similar responses to questions regarding political orientation, moral philosophy, and other complex constructs that usually exhibit high variability in humans [16]. Collectively, these results point to a diminished diversity of thought in LLMs, which has important implications for whether and how LLMs should be used to automate creativity assessment. How can we employ LLMs in designing items for measuring creativity without comprising the validity of any conclusions drawn from such items? We approach this from a psychometric perspective, which is both a field dedicated to measuring psychological constructs in humans and the source of a rich body of work measuring similar constructs in AI [17, 18, 19]. When measuring a construct like creativity, psychometrics requires that any measurement be both valid and reliable — it must accurately measure the intended construct and give consistent results over repeated measurements. Accomplishing this involves developing tests whose items accurately measure the construct, which historically was done by human experts. Can we use LLMs to generate high-quality items for measuring creativity? If so, this would be invaluable not only for the study of human creativity but it might also allow us to measure creativity more accurately in LLMs, which would be a boon for assessing AI creativity. Nevertheless, no prior work has investigated whether LLMs can automatically generate creativity assessments. In this paper, we develop a framework to extend item generation into the creativity domain: the creative psychometric item generator (CPIG). CPIG relies on structured prompting and psychometrically based exemplar selection to generate items for a creative problem-solving task (CPS), an influential test of creativity [20]. Our framework is iterative and allows us to continuously refine the same item based on automated validity metrics until reaching a desired level of quality. While other works have explored how to use LLMs to solve [21] and generate [22] CPS-like items, none to our knowledge has examined how to generate psychometrically rigorous assessments of creativity. We find that CPIG generated items are just as valid and reliable as those written by humans. 
Remarkably, LLM solutions to CPIG items also appear to become more original over successive rounds of generation, suggesting a possible method to boost the creativity of generative AI via carefully designed items. We make the following contributions:

1. We develop CPIG, a new framework for generating creativity items using LLMs.¹
2. Through a series of experiments, we confirm that CPIG-generated items are just as valid as those written by humans, and that our metrics for validity are robust to known biases in the scoring process.

¹Code and supplementary materials will be provided at: https://osf.io/umnk5/

2. Background

Creativity is thought to comprise multiple facets, including originality (the novelty of an idea) and effectiveness (how useful or relevant the idea is), among others [23]. Past work has demonstrated that human judgments of originality are an effective predictor of the creativity of ideas [23]. As such, the value of a creativity test rests on its capacity to elicit many original responses [24]. To measure originality, researchers historically relied on human judgments performed by trained raters — a method called the Consensual Assessment Technique (CAT) [25]. In the CAT, human raters are instructed to read a series of ideas and assess their originality on a Likert scale. Although effective, human scoring is not efficient, as the recruitment and training of human raters is often costly and prone to errors. More recently, automated creativity assessment tools have been developed, including finetuning LLMs to predict human creativity ratings [1]. Highly accurate models have been reported, often matching or surpassing the agreement between human raters, which makes it practical to evaluate the quality of creative responses at scale.

From a psychometric perspective, measuring an individual's creativity requires developing structured tasks to evaluate how well they can produce ideas that are both original and high quality. We focus on a CPS task as the basis for our experiments. In this task, a participant is given a scenario involving a dilemma to be solved (e.g., a coworker's roommate is causing problems at work, and it may put both of their jobs at risk), and they must produce a creative solution to this dilemma [1]. Scenarios are ambiguous by design, with many possible solutions, and reflect creative thinking in day-to-day settings. We focus on this CPS task due to its popularity as a creativity test and the availability of automated and psychometrically validated models for assessing the originality of CPS responses [1]. However, because many creative tasks can be evaluated in terms of originality, our methods are extensible to other tasks that can be automatically scored.

3. The architecture of CPIG

We take a psychometric approach to generating CPS items, inspired by recent work on automatically generating psychometrically valid test items [11, 9, 17]. We use LLMs to act as item generators to write the items, item response generators to create human-like solutions to the items, and item scorers to score the originality of LLM responses using psychometrically validated metrics. We hypothesize that originality in item responses provides a proxy for item quality: items with high quality should enable more creative responses and will tend to elicit better originality scores on average than those that are of lower quality. Optimizing for originality thus provides a way to generate higher quality items that can better tap the creative potential of subjects.
Figure 1 shows an overview of CPIG.

3.1. Item generation

Automatically generating valid CPS items is a non-trivial task, as the items must describe sufficiently complex scenarios to allow a wide variety of responses while also being sufficiently ambiguous that no single solution is canonically more "correct" than the others. Furthermore, we also want scenarios to describe a wide range of situations to avoid generating an item pool revolving around a narrow range of topics. We thus develop a multi-stage prompting method.²

First, before any runs of CPIG, we prompt gpt-3.5-turbo to generate lists of words, where each list contains three names, a place, and an action (e.g., "Mark", "beach", "Amy", "Lucas", "swimming"). The goal behind this step is to make the item generation task more concrete; rather than prompting the item generator LLMs to design scenarios without any additional context, we instead use the word lists as criteria that must be satisfied (e.g., the final scenario must contain all the names from the word list). This is meant both to simplify generation by breaking it down into multiple steps and to help maximize diversity in scenario content by using different word lists to ensure no two item generation prompts are the same. We have gpt-3.5-turbo generate ten word lists at once to help eliminate redundant lists and query the model five times to generate 50 lists in total. We set the max number of tokens to 2048 and the temperature to 1.0, leaving other parameters at their defaults. We use this process to generate lists covering a wide variety of semantic content, which we manually checked to confirm they obeyed the specified format. We use these word lists throughout all trials of CPIG.

We use these word lists in the item generation prompt, where we instruct item generator LLMs to design CPS items using the contents of the word list provided. We provide LLMs with generation guidelines and examples of CPS items written by experts. For each trial, we attempt to generate one scenario for each word list. However, the generated items may fail basic validity checks for a variety of reasons, so to mitigate this, we develop a list of rules to drop generations that are likely low quality (see the sketch after this list):

1. We compute item readability using Flesch's reading ease [26] and drop scenarios with scores lower than 45 (considered very difficult to read). We note that this metric requires a minimum string length to compute, so we also require that scenarios be at least 140 tokens long. We use the NLTK word tokenizer to ensure a consistent token count.³
2. From preliminary trials, we find that LLMs sometimes generate scenarios with priming effects, steering participants toward specific solutions. Examples of this include generating a list of possible solutions or setting up the scenario as a dichotomy ("Should I do X or Y?"). Based on the content of such scenarios, we developed a list of strings that indicate possible priming and drop scenarios that contain any such string. Specifically, we drop scenarios containing "on the one hand," "on the other hand," "dilemma," "must navigate," "must decide," "has to decide," and "is torn between." We do not claim that this list is comprehensive, but we found that it eliminated most priming in generated scenarios.
3. To prevent LLMs from generating irrelevant content after the scenario, we instruct them to always generate "I am finished with this scenario." at the end. We drop scenarios that lack this string.

²All prompts used throughout CPIG are listed in the supplementary material.
³https://www.nltk.org/api/nltk.tokenize.word_tokenize
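The sketch below illustrates the three drop rules; the use of the textstat package for Flesch's reading ease is our own choice, as the paper does not name its readability implementation.

```python
from nltk.tokenize import word_tokenize
import textstat  # our choice of readability library; the paper does not name one

PRIMING_STRINGS = [
    "on the one hand", "on the other hand", "dilemma", "must navigate",
    "must decide", "has to decide", "is torn between",
]
TERMINATOR = "I am finished with this scenario."

def passes_quality_checks(scenario: str) -> bool:
    """Apply the three drop rules: length/readability, priming, terminator."""
    if len(word_tokenize(scenario)) < 140:            # rule 1: minimum length
        return False
    if textstat.flesch_reading_ease(scenario) < 45:   # rule 1: very hard to read
        return False
    lowered = scenario.lower()
    if any(s in lowered for s in PRIMING_STRINGS):    # rule 2: likely priming
        return False
    return scenario.rstrip().endswith(TERMINATOR)     # rule 3: termination string
```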
Importantly, our goal behind this quality control was not to identify every possible error that might occur in the items, as we expect human experts will make the final decision about which items to include in a creativity assessment [9]. Rather, we use it to reduce the number of items that need to be examined by eliminating those that are unlikely to be valid. We attempt to generate a scenario a maximum of 10 times for each word list and drop the list if the LLM fails to generate a valid scenario on all attempts. We strip extra newlines and whitespace surrounding the scenario, and text after the termination string (including the string itself).

3.2. Item response generation

Once we have LLM-generated items, we must evaluate whether they elicit creative responses. LLMs have proven adept at modeling psychometric data [19] and are competent as human simulacra for sociological modeling [27], so we use LLMs to generate synthetic responses to each item. A potential challenge here is that the item response generator LLMs may suggest similar solutions to the same item [14]. We account for this by adopting several prompting styles meant to increase the variation in the LLM responses: a baseline prompt where the LLM is asked to provide a creative solution to the item (with no further context), a demographic prompt where the LLM is provided demographic data about a hypothetical participant that it is meant to simulate while responding (e.g., "You are a Hispanic woman who works in real estate"), and a psychometric prompt where we replace the prior demographic data with statements sourced from psychometric inventories strongly correlated with creative performance.

For demographic and psychometric prompts, we construct a pool of participant creativity profiles to draw from, based on responses to prior creativity studies [1]. These responses include differing occupations and responses to psychometric assessments, which we reason would increase the variability in the output of the item response generator LLMs. We provide demographic data in the prompt using either a variable format (e.g., "You are an Asian man") or as demographically relevant names. Demographic variables, including name, ethnicity, and gender, were taken from the New York City Health Department 2016 census of baby names,⁴ and last names specifically were taken from the Decennial Census Survey⁵ from the United States Census Bureau. We selected the three most common first and last names associated with each demographic variable for a total of 20 first names and 20 last names. We extract data for the psychometric prompts from a series of validated scales measuring constructs related to creativity. We employed scales tapping creative self-efficacy [28], creativity anxiety [29], creative mindset [30], openness to experience [31], tolerance for ambiguity [32], cynicism [33], and the RIASEC interest types [34].

In each prompting style, the model is provided a CPS item after the task instructions and the demographic/psychometric profile (if applicable), and we process the generated response by removing extra newlines and whitespace. Because response generation is comparatively a much simpler task than item generation, we do not include additional content validity checks. We generate between 10 and 20 responses for each item. For the demographic and psychometric prompts, we sample a participant profile at random each time.

⁴https://www.nyc.gov/site/doh/index.page
⁵https://www.census.gov/programs-surveys/decennial-census.html
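A minimal sketch of how the three response-generation prompting styles could be assembled is shown below; the profile fields and instruction wording are our own illustration, since the exact prompts are deferred to the supplementary material.

```python
import random

# Hypothetical profile pool; field names and contents are illustrative only.
PROFILES = [
    {"demographic": "You are a Hispanic woman who works in real estate.",
     "psychometric": "You strongly agree that you trust your ability "
                     "to solve problems creatively."},
]

def build_response_prompt(item: str, style: str = "baseline") -> str:
    """Assemble an item-response prompt in one of the three styles."""
    instructions = "Provide a creative solution to the following scenario."
    context = ""
    if style in ("demographic", "psychometric"):
        profile = random.choice(PROFILES)   # profiles are sampled at random
        context = profile[style] + " "
    return f"{context}{instructions}\n\nScenario: {item}\n\nSolution:"
```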
3.3. Item scoring and selection

Each LLM-generated item response is then scored using the methodology developed by [1], which trained roberta-base [35] to predict mean originality scores of responses to CPS items. Specifically, this model was trained on a dataset annotated by experts for originality, who scored each response using a five-point Likert scale. They used a test set comprising originality scores for CPS items not seen during training and obtained a 0.41 Pearson correlation with human ratings. We use this model to score the originality of each CPIG item, which we use to select k items to include as exemplars in the next round of item generation. We develop several shot selection strategies for choosing exemplars, which we discuss below. Additionally, we include a baseline that simply chooses k items at random.

3.3.1. Greedy

This approach simply selects the k items with the highest originality scores. Specifically, we take the mean of the originality scores of all the responses per item and sort the resulting scores to select the k items with the highest scores.

3.3.2. Constraint satisfaction

A challenge with the greedy approach is that it may choose highly similar items if they all score high on originality. Indeed, we found in preliminary trials that cosine similarity scores between all pairs of the k items tend to increase over iterations, sometimes drastically. To address this, we develop another shot selection method that instead finds a set of k items that maximize originality and minimize similarity, which we treat as a constraint satisfaction problem. For each iteration of CPIG, we have a set of exemplars I from the prior iteration⁶ with a mean originality score I_o and a mean semantic similarity I_v (the mean cosine similarity scores between all pairs of items in I). Additionally, we include thresholds δ_o and δ_v that define a tolerance above I_v and below I_o for the new set of exemplars. We then search for a set η of size k from the generated item pool at the current iteration that satisfies:

\eta_o > I_o \;\lor\; I_o - \eta_o \le \delta_o \qquad (1)

\eta_v < I_v \;\lor\; \eta_v - I_v \le \delta_v \qquad (2)

We use Sentence Transformers [36] and all-MiniLM-L6-v2 to compute I_v and η_v, and we search for all matching η across all unique combinations of size k from the item pool. We return the η with the highest originality score; further details on this method and the chosen values for δ are provided in the supplementary material.

⁶We still employ the greedy approach for the first iteration, as we don't yet have values to compare against.
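The search over subsets can be written as a brute-force scan over all size-k combinations, as in the sketch below; the tolerance values and the tie-breaking by highest mean originality are our reading of the description, with the exact δ settings deferred to the supplementary material.

```python
from itertools import combinations
import numpy as np

def select_exemplars(orig, sim, I_o, I_v, k=4, delta_o=0.05, delta_v=0.05):
    """Return indices of a size-k subset satisfying constraints (1)-(2).

    orig: [n] array of mean originality scores per item
    sim:  [n, n] pairwise cosine similarity matrix
    I_o, I_v: mean originality / similarity of the previous exemplar set
    delta_o, delta_v: tolerances (illustrative values; see supplementary)
    """
    best, best_o = None, -np.inf
    for idx in combinations(range(len(orig)), k):
        eta_o = orig[list(idx)].mean()
        eta_v = np.mean([sim[a, b] for a, b in combinations(idx, 2)])
        ok_o = eta_o > I_o or (I_o - eta_o) <= delta_o   # constraint (1)
        ok_v = eta_v < I_v or (eta_v - I_v) <= delta_v   # constraint (2)
        if ok_o and ok_v and eta_o > best_o:             # keep the best eta
            best, best_o = idx, eta_o
    return best
```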
Figure 2: Mean originality scores from each item generator on the first and last rounds, for all trials that did not use random shot selection. Error bars are standard deviations in scores. Higher values indicate more original item responses, on average.

3.4. Implementation details

We implement CPIG using LangChain⁷ and utilize a variety of chat-based open-source and commercial LLMs, including LLama-2 (7b, 13b, and 70b) [37], Vicuna-1.5 (7b and 13b) [38], and Claude-3-haiku.⁸ All open-source models are implemented using Transformers [39]. We set the temperature to 1.0 across all trials to increase variation in the generated items and responses while leaving other text generation parameters at their defaults. We select four items to use as exemplars for all shot selection methods, to ensure item generation prompts do not become too long and because we find this is sufficient to ensure variation in item content. We cap item generation to a maximum of 768 tokens and item response generation to 350 tokens, as responses to CPS items tend to be much shorter than the items themselves. We run each CPIG trial for five iterations, using three random seeds for every hyperparameter combination. We use the same LLM for item generation and item response generation for each open-source model trial and use LLama-7b for response generation when using Claude-3-haiku for item generation. We provide a table listing all trials in the supplementary materials. We run experiments on three Nvidia RTX A6000 GPUs with 49GB of video memory each. We apply 4-bit quantization to all supported models.

⁷https://www.langchain.com/
⁸https://www.anthropic.com/news/claude-3-family

Figure 3: Pearson correlation between item response length and originality score. Length is calculated using the NLTK word tokenizer.

4. Results

We present a comprehensive picture of how effective the different components of CPIG are at generating items that maximize the originality of the output from item response generator LLMs. This includes both ablations on the effect of the different prompting strategies and shot selection methods, as well as human review of the quality of the generated items. For any ablation that requires computing semantic similarity, we use Sentence Transformers [36] and all-MiniLM-L6-v2 as the embedding model. All density plots employ kernel density estimation [40].

4.1. Originality of LLM responses

Figure 2 shows originality scores for all runs that do not use random shot selection, broken down by model type. Critically, regardless of the item generator, CPIG consistently improves originality scores of responses by the last round of item generation, in some cases more than doubling the score compared to the first round. The difference in mean scores was significant in t-tests for both demographic (p << 0.001) and psychometric (p << 0.001) prompting styles and hence remains regardless of the specific prompting strategy used for item response generation. This demonstrates that CPIG-generated items can elicit more creative responses from the item response generator LLMs. However, a potential confound when scoring originality is that the metric is influenced by the length of the response, with longer solutions typically being scored as more original [1]. We find that LLM responses are, on average, much longer than those of humans, leaving open the possibility that the increase in originality is driven purely by more elaboration in the response. We check for this by computing the Pearson correlation between response length and originality for every generation model and the items generated on the last round (not including random shot selection). Results are shown in Figure 3.
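This check amounts to a simple correlation; a minimal sketch with SciPy, assuming aligned lists of responses and scores, is:

```python
from nltk.tokenize import word_tokenize
from scipy.stats import pearsonr

def length_originality_correlation(responses, scores):
    """Pearson correlation between NLTK token count and originality score."""
    lengths = [len(word_tokenize(r)) for r in responses]
    return pearsonr(lengths, scores)   # returns (correlation, p-value)
```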
(a) Distributions of originality scores, broken down by item response prompting strategy. As a point of comparison, we also plot the originality scores of the human participants used to train the scoring model from [1], but note that they are not given the same items generated by CPIG. (b) Cosine similarity scores between all pairs of items from the last round of generation, for both greedy shot selection and constraint satisfaction. Figure 5: Distributions of originality (a) and similarity (b) scores, broken down by prompt types and shot selection strategy. 4.2. Relationship between originality and similarity While improvements in response originality denote an increase in item quality, it remains unclear whether the item generator LLMs converge onto a few similar yet high-quality scenarios or how these variables relate to each other in the generated item pool. We explore this by plotting a joint histogram of originality and similarity scores9 for all generated items, broken down by shot selection method, in Figure 4. Darker cells in this figure indicate a higher frequency of a particular originality-similarity combination. We observe that random shot selection obtains the worst combination of results: not only are most items low on originality, but the distribution also peaks the highest on similarity. Both greedy shot selection and constraint satisfaction achieve lower similarity and higher originality and do so consistently. As the originality of items produced using these strategies increases, their similarity scores remain generally static, indicating that improvements in originality do not come at the expense of more redundant items. One notable trend is that greedy shot selection seems to have lower similarity scores on average despite constraint satisfaction being designed to minimize similarity. However, for this figure, we dropped all items whose similarity is above 0.95 to any other item to make computing the joint histogram more manageable. In Figure 5, we graph the univariate histogram of cosine similarity scores for both greedy and constraint satisfaction, and this time, include all the items that are generated in the last round. Although both methods generate some item pairs with cosine similarities of 1.0, there are many more such items for greedy shot selection, indicating a much larger fraction of extremely similar item content. Interestingly, greedy also peaks at a higher density than constraint satisfaction towards the lower end of the distribution. This likely reflects the balancing act required for constraint satisfaction; selecting items to maximize originality may sometimes require increases in similarity, though the method still succeeds in eliminating most duplicate content. 4.3. Effect of item response prompting style Humans typically exhibit high variability in the originality of their responses to CPS items [1]. The different item response prompting strategies we develop are meant to induce a similar degree of variation, and we examine how effective they are in Figure 5. Compared to the no-context baseline — where the item response generator LLMs are simply instructed to answer the item — both demographic and psychometric prompting strategies exhibit higher variance and heavier tails in the originality distribution, better reflecting the trends from human participants. Both curves still have lower variance than humans and much higher peaks in originality scores, so it appears there remains headroom for alignment between LLM and human psychometric properties. 
4.3. Effect of item response prompting style

Humans typically exhibit high variability in the originality of their responses to CPS items [1]. The different item response prompting strategies we develop are meant to induce a similar degree of variation, and we examine how effective they are in Figure 5. Compared to the no-context baseline — where the item response generator LLMs are simply instructed to answer the item — both demographic and psychometric prompting strategies exhibit higher variance and heavier tails in the originality distribution, better reflecting the trends from human participants. Both curves still have lower variance than humans and much higher peaks in originality scores, so it appears there remains headroom for alignment between LLM and human psychometric properties. The main challenge here again relates to elaboration in the response; while human participants often give short solutions, LLMs tend to provide very elaborate responses that embed multiple solutions simultaneously. Fully overcoming this challenge requires more sophisticated prompting and perhaps additional finetuning on human responses to align with our preferences for this task, but we leave this to future work.

4.4. Human content review

The prior results demonstrate that, with carefully chosen prompts and few-shot exemplars, CPIG can generate items that elicit more original responses from LLM test takers. But is this trend due to improvements in item quality or some other artifact of the generation process? We explore this by recruiting human annotators to rate the quality of the CPIG items. We recruited five annotators with prior experience in rating for creativity studies. Annotators rated each item in terms of its complexity and difficulty, where we define complexity as how many demands were present in the item and difficulty as how many of those demands directly compete with each other, such that a solution that attempts to solve one might come at the expense of another. We define demands as any relevant information in the scenario that could be used to construct a creative solution. Demands could include challenges to overcome in the scenario or resource constraints, among many others. We selected these facets to cover the most important factors to rate to ensure content validity in the items, based on our expertise in creativity assessment and preliminary examinations of the items generated by CPIG. Both facets were rated on a five-point Likert scale, with one being too simple/easy, five being too complex/difficult, and three having the right amount of complexity/difficulty. This scale allowed us to account for both extremes of item content; items that are too complex or difficult might cause human participants to give up prematurely, while items that are too simplistic or easy are unlikely to require much creativity to solve.

We designed a rubric that annotators used to rate each item, including definitions for complexity and difficulty. The annotators were first shown the rubric and allowed to ask any questions they had about the task. Then, together with one of the authors, the annotators rated ten practice items. Finally, the annotators, in combination with two of the authors, rated the remaining items via a missing data approach, where annotators only rated a subset of the CPIG items. This approach allowed us to achieve maximum coverage of all items while limiting rating time and making the annotation workload manageable. Each annotator rated between 200 and 245 LLM-written items, including items from the first and last round of CPIG. Annotators were only provided the text of each item and were blinded to all other related details. For instance, annotators were not informed of which items belonged to which round of CPIG. We obtained intraclass correlations of 0.52 for complexity and 0.49 for difficulty, for absolute agreement on the average ratings, indicating modest rater agreement.¹⁰ We plot in Figure 6 the distributions of complexity and difficulty scores from the items from the first and last rounds.

¹⁰This was expected, as rating creativity can be highly subjective, so it is challenging to achieve stronger rater agreement.
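For reference, the reported intraclass correlations could be computed as sketched below with the pingouin package (our choice of library; the paper does not name one). Note that the missing-data rating design would in practice require a complete or imputed item-by-rater matrix.

```python
import pandas as pd
import pingouin as pg  # our choice of library for ICC

def icc_absolute_agreement(ratings: pd.DataFrame) -> float:
    """ICC for absolute agreement on average ratings (ICC2k).

    ratings: long-format frame with columns 'item', 'rater', 'score',
    assumed complete for this illustration.
    """
    table = pg.intraclass_corr(data=ratings, targets="item",
                               raters="rater", ratings="score")
    return float(table.loc[table["Type"] == "ICC2k", "ICC"].iloc[0])
```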
Figure 6: Mean complexity (a) and difficulty (b) scores from round one compared against round five. A rating of three indicates ideal complexity/difficulty.

For complexity, we see a definite improvement in round five, with a much larger fraction of items achieving the ideal complexity level than was present in round one. Trends are more static for difficulty, as the distributions are quite similar to each other, especially at the ideal difficulty level. Collectively, the content review indicated that CPIG items are generally of high quality and that later iterations result in definite improvements for at least some facets of item quality.

We include two items generated by LLama-13b in Table 1, both using the same word list. While even items generated in the first round exhibit many desirable qualities, we see key improvements over iterations. Although the round one item (top row in the table) sets up what could be a complex scenario, it remains unclear what the exact problem is other than that Noah is being asked to do "extra work" for a customer. The round five scenario (bottom row) makes this clear: a new family is causing problems by stealing plants. This scenario also introduces added complexity by including new characters with interwoven relationships, hence adding more competing demands that need to be considered. The scenario is still not perfect, as not all the information appears especially relevant, but overall it does appear to be both more original and of higher quality.

Table 1: Example items generated by LLama-13b from the first (top row) and last (bottom row) iterations of item generation, using the same word list.

CPS item (round one): Noah is a 25-year-old man who works as a gardener at a local gardening company. He is known for his excellent gardening skills and is always willing to help his customers find what they need. Recently, a new customer named Lily has started coming to the gardening company, and she has been causing problems for Noah. Lily is always asking Noah to do extra work for her, and she has been making him feel overwhelmed. Noah is starting to feel frustrated and is unsure of what to do. He does not want to lose his job, but he also does not want to work in an environment that is so tense and hostile. He is considering talking to his manager about the situation, but he is worried that it could backfire on him. He is unsure of what to do.

CPS item (round five): Noah is a 35-year-old man who lives in a small town with his wife Lily and their 5-year-old son James. Noah is a skilled gardener and spends most of his time tending to the plants in the community garden. Lily is a painter and spends most of her time in her studio, but she also helps out in the garden when she can. James loves spending time in the garden with his parents and is always eager to help out. Recently, a new family moved into town and they have been causing problems for Noah and Lily. The new family, the Smiths, have been stealing plants from the community garden and selling them at the local farmer's market. Noah and Lily are not sure what to do about the situation. They do not want to confront the Smiths directly, but they also do not want to lose their plants. They are considering asking James's teacher, Ms. Johnson, for help. Ms. Johnson is a kind and fair person, but she is also a close friend of the Smiths. Noah and Lily are not sure if Ms. Johnson will be willing to help them or if she will be biased towards the Smiths. They are also worried that if they do ask Ms. Johnson for help, it could cause problems for James in school. They are at a loss for what to do.
5. Related work

5.1. Psychometric AI

Psychometric analysis of language models has seen growing interest in NLP research [11, 19, 41, 18, 42, 43]. Measurement models from psychometrics provide a strong test bed for evaluating language understanding in LLMs [18], making psychometrics a valuable tool for building better NLP test sets. However, LLMs are also valuable for modeling psychometric properties exhibited by humans on both cognitive [19] and non-cognitive [10] assessments, spurring interest in how LLMs might model human response data more broadly [44]. One rapidly growing research area is automated item generation, where LLMs are used to create new test items for standardized assessments with little or no human intervention [9, 11]. Several works have proposed frameworks similar to ours, where multiple LLMs are used to iteratively generate and evaluate new test items [45, 17]. However, this research has focused almost entirely on generating multiple-choice items, where the range of possible responses is inherently restricted. Additionally, the constructs targeted by such frameworks are either purely cognitive (with an objectively correct answer) or non-cognitive (open to interpretation based on individual differences). Creativity does not neatly fit into either mold: there is an aspect of "correctness" when judging CPS responses, as the goal is to present a viable solution, yet how solutions are compared against each other in terms of originality is often open to rater interpretation [46]. Our work thus moves psychometric AI in a new direction to examine constructs outside the narrow scope explored in prior work.

5.2. Prompt engineering for psychometric assessment

An often-overlooked aspect of AI-based test development is prompt engineering: the process of developing prompts for LLMs that yield strong performance on the task of interest. Many studies rely on manual prompt tuning to adapt LLMs to a specific cognitive or psychometric task, which has allowed for the successful replication of many classic results from cognitive psychology [47] and has yielded high-quality items for various assessments [10]. A typical design pattern for such prompts is to use a format that aligns closely with how the actual task is presented to humans, as if to simulate an experimental session [44]. However, greater care must be taken in the prompt design than might be necessary for other applications, as LLMs appear susceptible to more biases in task instructions than humans [48]. A starting point for addressing this could be to employ methods for prompt optimization, which have been widely successful in improving the performance of LLMs on NLP tasks [49]. These techniques, while powerful, typically rely on information-theoretic metrics for assessing prompt quality, often resulting in uninterpretable prompts [50]. A few works have explored how to create prompt optimization methods employing psychometrics as optimization targets, by combining LLM item generators with discriminative models trained to predict item alignment with a target construct [45] or by incorporating standard metrics for reliability and validity to assess the quality of an LLM's generations [11, 17]. Even in these cases, the prompt itself usually remains static. CPIG provides a structured method for prompt mutation via the selection of exemplars that demonstrate evidence of validity on the task of interest.

6. Conclusion

We propose CPIG, a framework for generating creativity items using LLMs.
By combining state-of-the-art models for response scoring with methods for item generation, we find that CPIG can generate items that improve the originality of LLM responses over time, which in turn points to increased creativity in their solutions. This trend is not attributable to known biases in the scoring model, and human raters find CPIG items to be high quality. While our results are promising, our analysis also has limitations. In developing CPIG, we focused primarily on originality as the metric to optimize. While originality is a crucial facet of creativity, it is just one metric for judging creative outputs. Depending on the context, other metrics, such as an output's quality or relevance, may be more important to evaluate, and future work should extend our framework to optimize multiple criteria simultaneously. The quality of the generated items depends directly on the item evaluation, which was accomplished through automated scoring that, while effective, is not without limitations [1]. Developing more robust evaluations requires layering multiple quality control checks on top of each other, perhaps by employing separate LLM judges to rate the quality of the items directly and provide structured feedback on how to improve the items. Though we performed a content review on the CPIG items, it remains unclear how effective they would be when administered to human participants without conducting more studies. As such, we caution against using the items from CPIG until they have undergone more extensive review. Finally, we must acknowledge biases in the LLMs, which may have influenced item generation. The data for our scoring model was curated using raters from a Western background [1], making the possibility of bias even more likely. Addressing this requires curating originality scores representing a more diverse slate of cultural views and developing bias mitigation strategies during item generation to ensure the evaluation remains fair.

Acknowledgments

The research described herein was sponsored by the U.S. Army Research Institute for the Behavioral and Social Sciences, Department of the Army (Contract No. W911NF-23-C-0040 P00001). The views expressed in this article are those of the authors and do not reflect the official policy or position of the Department of the Army, DoD, or the U.S. Government.

References

[1] S. Luchini, N. T. Maliakkal, P. V. DiStefano, J. D. Patterson, R. Beaty, R. Reiter-Palmon, Automatic scoring of creative problem-solving with large language models: A comparison of originality and quality ratings (2023).
[2] C. Makó, M. Illéssy, Automation, creativity, and the future of work in Europe: A comparison between the old and new member states with a special focus on Hungary, MTA Társadalomtudományi Kutatóközpont Kisebbségkutató Intézet (2020).
[3] W. Tsegaye, Q. Su, M. Malik, The antecedent impact of culture and economic growth on nations' creativity and innovation capability, Creativity Research Journal 31 (2019) 215–222.
[4] M. Chui, J. Manyika, M. Miremadi, Four fundamentals of workplace automation, McKinsey Quarterly 29 (2015) 1–9.
[5] T. M. Amabile, Creativity, artificial intelligence, and a world of surprises, Academy of Management Discoveries 6 (2020) 351–354.
[6] R. Bommasani, D. A. Hudson, E. Adeli, R. Altman, S. Arora, S. von Arx, M. S. Bernstein, J. Bohg, A. Bosselut, E. Brunskill, et al., On the opportunities and risks of foundation models, arXiv preprint arXiv:2108.07258 (2021).
[7] T. Brown, B. Mann, N. Ryder, M. Subbiah, J. D. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell, et al., Language models are few-shot learners, Advances in Neural Information Processing Systems 33 (2020) 1877–1901.
[8] J. Rafner, R. E. Beaty, J. C. Kaufman, T. Lubart, J. Sherson, Creativity in the age of generative AI, Nature Human Behaviour 7 (2023) 1836–1838.
[9] A. A. von Davier, A. Runge, Y. Park, Y. Attali, J. Church, G. LaFlair, The item factory, Machine Learning, Natural Language Processing, and Psychometrics (2024) 1.
[10] P. Lee, S. Fyffe, M. Son, Z. Jia, Z. Yao, A paradigm shift from "human writing" to "machine generation" in personality test development: An application of state-of-the-art natural language processing, Journal of Business and Psychology 38 (2023) 163–190.
[11] A. Laverghetta Jr., J. Licato, Generating better items for cognitive assessments using large language models, in: E. Kochmar, J. Burstein, A. Horbach, R. Laarmann-Quante, N. Madnani, A. Tack, V. Yaneva, Z. Yuan, T. Zesch (Eds.), Proceedings of the 18th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2023), Association for Computational Linguistics, Toronto, Canada, 2023, pp. 414–428.
[12] S. Sæbø, H. Brovold, On the stochastics of human and artificial creativity, arXiv preprint arXiv:2403.06996 (2024).
[13] G. Franceschelli, M. Musolesi, On the creativity of large language models, arXiv preprint arXiv:2304.00008 (2023).
[14] B. R. Anderson, J. H. Shah, M. Kreminski, Homogenization effects of large language models on human creative ideation, arXiv preprint arXiv:2402.01536 (2024).
[15] A. R. Doshi, O. Hauser, Generative artificial intelligence enhances creativity, Available at SSRN (2023).
[16] P. S. Park, P. Schoenegger, C. Zhu, Diminished diversity-of-thought in a standard large language model, Behavior Research Methods (2024) 1–17.
[17] Y. Attali, A. Runge, G. T. LaFlair, K. Yancey, S. Goodwin, Y. Park, A. A. von Davier, The interactive reading task: Transformer-based automatic item generation, Frontiers in Artificial Intelligence 5 (2022) 903077.
[18] C. Vania, P. M. Htut, W. Huang, D. Mungra, R. Y. Pang, J. Phang, H. Liu, K. Cho, S. Bowman, Comparing test sets with item response theory, in: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), 2021, pp. 1141–1158.
[19] A. Laverghetta Jr., A. Nighojkar, J. Mirzakhalov, J. Licato, Can transformer language models predict psychometric properties?, in: Proceedings of *SEM 2021: The Tenth Joint Conference on Lexical and Computational Semantics, 2021, pp. 12–25.
[20] R. Reiter-Palmon, M. Y. Illies, L. Kobe Cross, C. Buboltz, T. Nimps, Creativity and domain specificity: The effect of task type on multiple indexes of creative problem-solving, Psychology of Aesthetics, Creativity, and the Arts 3 (2009) 73.
[21] S. R. Rick, G. Giacomelli, H. Wen, R. J. Laubacher, N. Taubenslag, J. L. Heyman, M. S. Knicker, Y. Jeddi, H. Maier, S. Dwyer, et al., Supermind Ideator: Exploring generative AI to support creative problem-solving, arXiv preprint arXiv:2311.01937 (2023).
[22] Y. Tian, A. Ravichander, L. Qin, R. L. Bras, R. Marjieh, N. Peng, Y. Choi, T. L. Griffiths, F. Brahman, MacGyver: Are large language models creative problem solvers?, arXiv preprint arXiv:2311.09682 (2023).
[23] J. Diedrich, M. Benedek, E. Jauk, A. C. Neubauer, Are creative ideas novel and useful?, Psychology of Aesthetics, Creativity, and the Arts 9 (2015) 35.
A. Runco, G. J. Jaeger, The standard definition of creativity, Creativity Research Journal 24 (2012) 92–96. [25] P. J. Silvia, B. P. Winterstein, J. T. Willse, C. M. Barona, J. T. Cram, K. I. Hess, J. L. Martinez, C. A. Richard, Assessing creativity with divergent thinking tasks: exploring the reliability and validity of new subjective scoring methods., Psychology of Aesthetics, Creativity, and the Arts 2 (2008) 68. [26] J. Kincaid, R. P. Fishburne Jr, R. L. Rogers, B. S. Chissom, N. T. T. C. M. T. R. Branch, Derivation of new readability formulas (automated readability index, fog count and flesch reading ease formula) for navy enlisted personnel (1975). [27] S. Sun, E. Lee, D. Nan, X. Zhao, W. Lee, B. J. Jansen, J. H. Kim, Random silicon sampling: Simulating human sub-population opinion using a large language model based on group-level demographic information, arXiv preprint arXiv:2402.18144 (2024). [28] M. Karwowski, Did curiosity kill the cat? relationship between trait curiosity, creative self-efficacy and creative personal identity, Europe’s Journal of Psychology 8 (2012) 547–558. [29] R. J. Daker, R. A. Cortes, I. M. Lyons, A. E. Green, Creativity anxiety: Evidence for anxiety that is specific to creative thinking, from stem to the arts., Journal of Experimental Psychology: General 149 (2020) 42. [30] M. Karwowski, Creative mindsets: Measurement, correlates, consequences., Psychology of Aesthetics, Creativity, and the Arts 8 (2014) 62. [31] C. G. DeYoung, L. C. Quilty, J. B. Peterson, J. R. Gray, Openness to experience, intellect, and cognitive ability, Journal of personality assessment 96 (2014) 46–52. [32] A. Furnham, T. Ribchester, Tolerance of ambiguity: A review of the concept, its measure- ment and applications, Current Psychology 14 (1995) 179–199. [33] K. S. Mitchell, R. Reiter-Palmon, Malevolent creativity: personality, process, and the larger creativity field, in: Creativity and Morality, Elsevier, 2023, pp. 47–68. [34] P. I. Armstrong, S. X. Day, J. P. McVay, J. Rounds, Holland’s riasec model as an integrative framework for individual differences., Journal of Counseling Psychology 55 (2008) 1. [35] Y. Liu, M. Ott, N. Goyal, J. Du, M. Joshi, D. Chen, O. Levy, M. Lewis, L. Zettlemoyer, V. Stoyanov, Roberta: A robustly optimized bert pretraining approach, arXiv preprint arXiv:1907.11692 (2019). [36] N. Reimers, I. Gurevych, Sentence-bert: Sentence embeddings using siamese bert- in: Proceedings of the 2019 Conference on Empirical Methods in Natu- networks, ral Language Processing, Association for Computational Linguistics, 2019. URL: http: //arxiv.org/abs/1908.10084. [37] H. Touvron, L. Martin, K. Stone, P. Albert, A. Almahairi, Y. Babaei, N. Bashlykov, S. Batra, P. Bhargava, S. Bhosale, et al., Llama 2: Open foundation and fine-tuned chat models, arXiv preprint arXiv:2307.09288 (2023). [38] W.-L. Chiang, Z. Li, Z. Lin, Y. Sheng, Z. Wu, H. Zhang, L. Zheng, S. Zhuang, Y. Zhuang, J. E. Gonzalez, I. Stoica, E. P. Xing, Vicuna: An open-source chatbot impressing gpt-4 with 90%* chatgpt quality, 2023. [39] T. Wolf, L. Debut, V. Sanh, J. Chaumond, C. Delangue, A. Moi, P. Cistac, T. Rault, R. Louf, M. Funtowicz, J. Davison, S. Shleifer, P. von Platen, C. Ma, Y. Jernite, J. Plu, C. Xu, T. Le Scao, S. Gugger, M. Drame, Q. Lhoest, A. Rush, Transformers: State-of-the-art in: Q. Liu, D. 
Schlangen (Eds.), Proceedings of the 2020 natural language processing, Conference on Empirical Methods in Natural Language Processing: System Demonstrations, Association for Computational Linguistics, Online, 2020, pp. 38–45. [40] E. Parzen, On estimation of a probability density function and mode, The annals of mathematical statistics 33 (1962) 1065–1076. [41] A. Laverghetta Jr, A. Nighojkar, J. Mirzakhalov, J. Licato, Predicting human psycho- metric properties using computational language models, in: The Annual Meeting of the Psychometric Society, Springer, 2021, pp. 151–169. [42] Y. Li, Y. Huang, H. Wang, X. Zhang, J. Zou, L. Sun, Quantifying ai psychology: A psychometrics benchmark for large language models, arXiv preprint arXiv:2406.17675 (2024). [43] J. He-Yueya, W. A. Ma, K. Gandhi, B. W. Domingue, E. Brunskill, N. D. Goodman, Psychometric alignment: Capturing human knowledge distributions via language models, arXiv preprint arXiv:2407.15645 (2024). [44] M. Tavast, A. Kunnari, P. Hämäläinen, Language models can generate human-like self- reports of emotion, in: 27th International Conference on Intelligent User Interfaces, 2022, pp. 69–72. [45] I. Hernandez, W. Nie, The ai-ip: Minimizing the guesswork of personality scale item development through artificial intelligence, Personnel Psychology 76 (2023) 1011–1035. [46] M. Benedek, C. Mühlmann, E. Jauk, A. C. Neubauer, Assessment of divergent thinking by means of the subjective top-scoring method: Effects of the number of top-ideas and time-on-task on reliability and validity., Psychology of Aesthetics, Creativity, and the Arts 7 (2013) 341. [47] A. Ushio, L. Espinosa Anke, S. Schockaert, J. Camacho-Collados, BERT is to NLP what AlexNet is to CV: Can pre-trained language models identify analogies?, in: C. Zong, F. Xia, W. Li, R. Navigli (Eds.), Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), Association for Computational Linguistics, Online, 2021, pp. 3609–3624. [48] A. Gupta, X. Song, G. Anumanchipalli, Investigating the applicability of self-assessment tests for personality measurement of large language models, arXiv preprint arXiv:2309.08163 (2023). [49] Y. Zhou, A. I. Muresanu, Z. Han, K. Paster, S. Pitis, H. Chan, J. Ba, Large language models are human-level prompt engineers, arXiv preprint arXiv:2211.01910 (2022). [50] P. Liu, W. Yuan, J. Fu, Z. Jiang, H. Hayashi, G. Neubig, Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing, ACM Computing Surveys 55 (2023) 1–35.
ai_researcher
1
Design_of_a_homelessness-focused_suicide_prevention_program.pdf
4 2 0 2 r a M 9 1 ] Y C . s c [ 1 v 9 9 5 2 1 . 3 0 4 2 : v i X r a Preventing Eviction-Caused Homelessness through ML-Informed Distribution of Rental Assistance Catalina Vajiac* 1, Arun Frey*2, Joachim Baumann*3,4, Abigail Smith*5, Kasun Amarasinghe1, Alice Lai1, Kit Rodolfa2, Rayid Ghani1 1Carnegie Mellon University, 2Stanford University, 3University of Zurich, 4Zurich University of Applied Sciences, 5NORC at the University of Chicago [email protected], [email protected], [email protected], [email protected], {kamarasi, alicelai, ghani}@andrew.cmu.edu, [email protected] Abstract Rental assistance programs provide individuals with finan- cial assistance to prevent housing instabilities caused by evic- tions and avert homelessness. Since these programs operate under resource constraints, they must decide who to priori- tize. Typically, funding is distributed by a reactive or first- come-first serve allocation process that does not systemati- cally consider risk of future homelessness. We partnered with Allegheny County, PA to explore a proactive allocation ap- proach that prioritizes individuals facing eviction based on their risk of future homelessness. Our ML system that uses state and county administrative data to accurately identify in- dividuals in need of support outperforms simpler prioritiza- tion approaches by at least 20% while being fair and equitable across race and gender. Furthermore, our approach would identify 28% of individuals who are overlooked by the cur- rent process and end up homeless. Beyond improvements to the rental assistance program in Allegheny County, this study can inform the development of evidence-based decision sup- port tools in similar contexts, including lessons about data needs, model design, evaluation, and field validation. 1 Introduction Homelessness remains a pervasive and pressing issue across the United States. In January 2022, more than 500k individ- uals experienced homelessness on a single night (de Sousa et al. 2022). Rising eviction rates and the lack of affordable housing contribute to this problem: as the gap between hous- ing costs and income levels continues to grow, an increasing number of households struggle to pay rent, eventually facing eviction and, in some cases, homelessness. To curb rates of homelessness, policymakers are increas- ingly seeking to reduce the rate of entry into homelessness (Culhane, Metraux, and Byrne 2011). One popular preven- tion strategy is rental assistance programs, which provide temporary financial assistance to individuals facing eviction *These authors contributed equally. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. to keep them stably housed. Experimental evidence sug- gests that such assistance is effective at reducing subse- quent homelessness: using quasi-random variation in fund- ing availability of a rental assistance program in Chicago, it was shown that individuals who called when funding was available were 76% less likely to become homeless in the subsequent 6 months compared to those who called when funding was unavailable (Evans, Sullivan, and Wallskog 2016). A similar program in NYC randomized the provision of financial assistance to households and reduced average days in shelter from 32 nights among the control group to 10 nights among the treatment group in the 2 years follow- ing funding (Rolston, Geyer, and Locke 2013). 
For rental assistance programs to be a viable homeless- ness prevention strategy, they need to be effective (i.e. pre- vent individuals from falling into homelessness), efficient (i.e. target individuals at risk of falling into homelessness) (Burt 2007), and equitable. Currently, however, these pro- grams prioritize individuals based on simple heuristics (e.g. first come, first served) instead of their likelihood of future homelessness. In addition, they place the burden of applying for funding on the individuals facing eviction and overlook those who need help but do not apply. The administrative process can create long delays, so that even eligible tenants often remain on a waitlist for multiple months and can end up evicted before they receive assistance. In this paper, we describe our collaboration with the Al- legheny County Department of Human Services (ACDHS) to improve the allocation of rental assistance by prioritizing individuals with the highest likelihood of falling into home- lessness. We combine rich county and state administrative data with court records to predict homelessness among all individuals currently facing eviction, regardless of whether they contacted the county for financial support. Using data from January 2012 through August 2023, we develop a se- ries of machine learning (ML) models that predict a tenant’s need for homelessness services in the next 12 months, al- lowing ACDHS to proactively prioritize rental assistance to the most vulnerable tenants. In particular, we make the fol- lowing contributions: • Enabling a proactive approach: We consider a signifi- cantly higher proportion of residents, namely all tenants in the county facing eviction, instead of only those who call the county for help. Our models identify 28% of peo- ple who are overlooked by the current process and end up homeless. By shifting the burden away from those impacted, this proactive approach also reduces adminis- trative effort and is more likely to provide preventative rental assistance to tenants before their eviction. • Implementing need-based prioritization: Our models identify individuals in need of future homelessness sup- port services with at least 20% improvement over simpler baselines, and are 10x better than random selection while also being fair and equitable. • Field validation: We conduct a shadow mode deploy- ment to validate our models on new data, mitigating the risk of leakage in the future. Additionally, we are plan- ning a randomized control trial to compare our proposed solution to the current process and to evaluate the ef- fectiveness of rental assistance in preventing entry into homelessness among targeted individuals. • Lessons learned: We reflect on pitfalls and successes that may inform AI researchers seeking to ethically de- sign predictive decision support tools in other contexts. Reproducibility. Our code is available at https://github. com/dssg/acdhs housing public. 2 Ethical Considerations Since we are informing the allocation of scarce and critical resources to vulnerable people who could become homeless, we bring into focus the ethical considerations that informed and were embedded in every phase of the scoping, design, and development of our approach. Equitable outcomes through the allocation of resources. The use of AI in real-world contexts can perpetuate systemic biases such as racial disparities (Chen, Joshi, and Ghassemi 2020). 
To mitigate this risk, we carefully designed the scope of our work and our formulation and analyzed our model results to guard against bias against certain demographic groups, i.e. race and gender (see Section 5.3). Our field trial will also test the equity in the impact of the downstream de- cisions made using our system. Transparency and interpretability. Our goal is to de- velop a system that supports and informs social workers in making better decisions that lead to improved, more eq- uitable outcomes. To this end, we train models that are more explicit about which features they learn from (see Section 4.3) and analyze which features end up most pre- dictive of future homelessness (see Section 5.4). Our so- lution is designed to augment the current process, support- ing rather than excluding social workers from the decision- making loop and providing them with additional information beyond predictions to help them make better-informed deci- sions. We also consulted impacted community members in the design and validation process. Privacy concerns of using sensitive data. The collection of personal information always comes with privacy con- cerns. In our case, particularly because ACDHS connects administrative data across different facets of residents’ lives, we want to be sure that the use of this data offers significant enough improvements to the community, particularly people potentially facing homelessness. In Sections 5 and 6, we ex- tensively evaluate our models with historical data, and a field trial to ensure that this work improves outcomes. We are cur- rently soliciting feedback from groups potentially impacted by the system through a community engagement process to better assess tradeoffs before conducting a validation trial. 3 Current Approaches We first consider how ACDHS currently prioritizes residents for rental assistance and highlight relevant prior research. 3.1 ACDHS’ Current Process The current process for obtaining rental assistance in Al- legheny County is illustrative of how such programs are usu- ally managed throughout the United States. Tenants with an eviction notice can contact the Allegheny County Link helpline to request rental assistance, or they may be referred to the rental assistance program by a mediation program or a housing assistance organization. Applicants are pre- screened according to available funding and county-level el- igibility requirements, including income and the amount of rent owed. Tenants are then placed on a waitlist and will be considered on a first-come-first-served basis. When the ten- ant reaches the front of the list, a social worker examines their funding request. If they are deemed eligible and can provide the required documentation, they will receive a pay- ment covering the rent owed. There are many issues with this reactive process, and the county is seeking our help in improving it. First, it puts heavy logistical strain on tenants facing eviction, who have to know to contact the helpline or other housing stability or- ganizations to apply for assistance, and must be able to prove their eligibility with documentation. This requirement ex- cludes all individuals at risk of homelessness due to eviction who do not proactively seek help. For those who apply, the substantial delay between application and receipt of fund- ing means that many tenants have already been evicted by the time they are considered for assistance, or their rental debt has substantially increased. 
Since eligibility require- ments change over time and are not always easily accessible, the current decision system is opaque to those seeking assis- tance. Finally, rental assistance is not distributed according to who has the most need, but instead to those who know to call the helpline, get through the waitlist in time, and meet the eligibility requirements before being evicted. 3.2 Related Work Previous work has aimed to predict future homelessness to identify those most in need and more efficiently allocate homelessness prevention resources (Fowler et al. 2019). In New York City, re-evaluation of an existing homelessness prevention service resulted in a model based on 15 risk fac- tors that would have outperformed human judgment in se- lecting at-risk individuals (Shinn et al. 2013). In another work focusing on those who were previously homeless, ad- ministrative data was used to better predict future home- lessness within two years to best match people with pro- grams, though this also disadvantaged other service recip- ients (Kube, Das, and Fowler 2019). Because of these trade- offs, full automation of resource allocation is not recom- mended: human decision-makers should still be in control. Work by the California Policy Lab predicted the risk residents in Los Angeles of homelessness among all County (Wachter et al. 2019). Several factors were found to be indicative of future homelessness, including repeated and recent interactions with government agencies or county ser- vices and prior mental health crises. Their predictive model achieves 29% precision for all residents. Research Gaps. To our knowledge, no previous work has predicted future homelessness among those who are facing eviction. he California Policy Lab’s work likely suffers from label bias due to the low percentage of homeless individu- als utilizing county services (see Section 7), while our focus on informing a concrete action, rental assistance, allows our system to fit into the existing support infrastructure. 4 Predictive Approach We develop ML models to facilitate need-based prioritiza- tion for proactive distribution of rental assistance. One limi- tation we encounter with this type of data is that it is condi- tioned on past rental assistance allocations and offers no in- sights into the counterfactual outcomes. As a result, our ap- proach resembles an augmentation of the existing process. This is pertinent not only for the technical implementation but also for the interpretation of our results (e.g., in Sec- tion 5.2, we concentrate on the outcomes for vulnerable in- dividuals who are not served under the current practice) and the conceptualization of the overall project, which is cen- tered on social impact. and equity. In Section 6, we outline the transition from this stage to a fully deployed system. 4.1 Problem Formulation We formulate our task as a binary classification problem: identifying tenants that will interact with homelessness ser- vices within 12 months of the date of prediction. At each prediction date, we consider individuals who have an active eviction filing against them within the past four months and are not currently homeless.1 From this cohort, our models identify the 100 individuals with the highest chance of inter- acting with homelessness services in the next 12 months. We select 100 individuals since this corresponds to the monthly intervention capacity of ACDHS. Specific inclusion and ex- clusion criteria are detailed in Figure 1. 
4.2 Data and Feature Engineering ACDHS collects and combines administrative data from var- ious county and state-level programs, including information 1More details on the demographic composition of this group are provided in Supplementary Materials B. Figure 1: Inclusion/exclusion criteria for homelessness and eviction. Indicators for homelessness in ACDHS data in- clude clients’ interactions with homelessness services, such as staying in a shelter. We also include clients enrolled in rehousing programs that have not moved in as of the predic- tion date, as being homeless is a prerequisite for enrollment. on individuals who have previous eviction filings, homeless- ness spells, interactions with mental, behavioral, and phys- ical health institutions, address changes, or who have been enrolled in a variety of other ACDHS and state programs. For each client at each analysis date, we generate ∼7000 features based on the data sources described in Supplemen- tary Materials A. The features can be classified as follows: Demographics and Event features. For each tenant, we include the total number of interactions with each ACDHS and state program, total number of evictions, and total num- ber of physical and mental health visits. We also include cat- egorical features, such as the particular type of physical or mental health visit and demographic features (race, gender, and age). Temporal aggregation features. For each data source, we generate several temporal features to capture the dynamic nature of the process and assess how frequently an individ- ual interacts with state and local services within the last 3 & 6 months, and 1, 2, 3, 4, and 5 years. For example, us- ing the eviction data, we generate the following features per individual: number of days since most recent eviction; num- ber of evictions in the specified time period; sum, min, max, average rent owed in eviction cases; min, max, average inter- arrival times between evictions. 4.3 Model Training and Validation We use various supervised classification methods to predict entry into homelessness: logistic regression (LR), decision trees (DT), random forests (RF), Adaboost, Light Gradi- ent Boosting Model (LGBM), and XG boost using Scikit- Learn (Pedregosa et al. 2011) and the hyperparameter grid specified in Supplementary Materials C. To most closely re- produce the context in which our models will be deployed, we use temporal validation (Hoptroff 1993), generating dif- ferent training and evaluation matrices for each analysis date. For example, when evaluating the efficacy of predict- ing homelessness as of January 2019, we train models on feature label pairs using data up to January 2019, and eval- uate them based on how many people become homeless be- tween January 2019 and January 2020 (see Supplementary Materials D). Temporal validation requires that no informa- tion beyond a certain date is used to evaluate an algorithm to avoid data leakage, i.e., predicting the future using data from the future, which can be challenging when combining real-world data from disparate sources that are updated at different frequencies (see Section 8 for a full discussion). 4.4 Baseline Models We compare the ML models to several simple baselines that either attempt to approximate ACDHS’ current system of allocating rental assistance, or that provide simple improve- ments on the current status quo. 
Baselines represent simple heuristics that do not require implementing an ML model (i.e., ranking individuals based on a single attribute) and are, therefore, much easier to deploy and understand. B1. Previous homelessness. Prior homelessness is a strong indicator of future homelessness (Glendening and Shinn 2017). With this in mind, ACDHS could prioritize clients by the last date they interacted with any homelessness service. Here, a more recent date would imply a tenant is at higher risk of future homelessness. B2. Baserate. If ACDHS were to randomly select individu- als to give rental assistance, the precision of the approach would be equal to the proportion of individuals in our co- hort who become homeless in the next year, around 2% during our period of analysis. B3. Earliest OFP. As an approximation of the current pro- cess’ first-come-first-serve waitlist, we look for tenants who have been waiting for the longest since an Order for Possession (OFP) has been granted, which allows the landlord to evict the tenant. Other baselines were omitted from these results due to poor performance (see Supplementary Materials E for a full list). 5 Key Findings We trained and validated over 5000 model variants to select the 100 individuals with the highest score, i.e. the highest risk of falling into homelessness within the next 12 months. The model selection metrics are measured at top-k, where k = 100. 5.1 Predictive Models Are More Efficient and Effective than Heuristic Baselines We evaluate the performance of predictive models based on their ability to improve the efficiency (i.e., ensuring assis- tance goes to those who are most in need) and effectiveness (i.e., maximizing the reach to people who would otherwise become homeless) of the allocation of rental assistance re- sources while satisfying the equity constraints. Efficiency and effectiveness are measured by precision@100 and re- call@100 respectively. For each model, we calculate these metrics across all temporal validation splits and explore a broad hyperparameter space for each model type. Table 1 summarizes the metrics over time. We primarily focus on the average values of both metrics as we aim to select mod- els that will generalize well to the future. Based on guid- ance from our partners at ACDHS, we exclude validation Figure 2: LR and RF outperform all baselines: Preci- sion@100 over time shows that out of all baselines, B1: Pre- viously Homeless performs best, but LR and RF perform bet- ter for all splits outside of the moratorium. splits that coincide with the COVID-19 eviction moratorium period when measuring average precision2. Figure 2 shows precision@100 over time for the best hyperparameter con- figuration for each model type. We observe that RF and LR perform best and outperform simple heuristic baselines with respect to precision@100, showing a ∼20% improvement over the best heuristic, B1, and performing 10x better than our approximation of the current allocation process, B3. Precision@100 Avg Min Max Recall@100 Avg Min Max Model type RF LGBM LR DT XGBoost 0.20 0.14 0.25 0.18 0.12 0.23 0.20 0.14 0.29 0.16 0.08 0.21 0.18 0.13 0.23 B1: Prev. HL 0.15 0.08 0.21 B2: Baserate 0.02 0.01 0.04 B3: Early OFP 0.03 0.00 0.05 0.22 0.16 0.34 0.19 0.14 0.33 0.22 0.17 0.33 0.18 0.09 0.30 0.20 0.16 0.29 0.17 0.09 0.30 0.02 0.02 0.05 0.03 0.00 0.07 Table 1: RF and LR outperform: Precision and recall over time for the model types considered show that our models outperform even the best baseline, B1: Previously Homeless. 
5.2 Predictive Models Identify People Who Are Overlooked by the Current Process The risk in relying on tenants to proactively apply for rental assistance is that the people most in need will not receive it and will end up homeless. In each cohort, about 75 people become homeless within a year of their eviction filing. No- tably, a majority of these people (50 on average) do not ap- ply for help. This missed group—those who interacted with the homelessness system within 12 months of the predic- tion date, did not apply for help, and did not receive rental 2During this time, there are very few people facing eviction due to the moratorium on evictions during COVID-19 so our cohort becomes very small. See Supplementary Materials F. assistance—illustrates the limitations of the current reactive allocation process. In Figure 3, we examine what percentage of the missed group is found by each model, comparing one of our best ML models (RF) to the baselines B1: Previously Home- less and B2: Baserate. B2 finds 4% of this group on aver- age, meaning that proactive outreach to a list of 100 people from this random approach would find about 4 people every month who would be overlooked by the current rental as- sistance practice and would end up homeless. Both B1 and RF reach a substantially larger group of residents, 23% for B1 and 28% for RF on average, potentially providing rental assistance to an additional 10–20 people from the missed group in each cohort every month. Figure 3: Percentage found of missed group, the individu- als who become homeless having not contacted Allegheny County Link or received rental assistance. 5.3 Predictive Models Can Promote Fairness and Equity Efficiency and effectiveness metrics do not reveal whether allocating resources according to these models would be biased against vulnerable groups (Barocas, Hardt, and Narayanan 2023). Since we are informing the allocation of a scarce resource, we use equality of opportunity (Hardt, Price, and Srebro 2016) as the fairness principle, which is captured using the true positive rate (TPR) of vulnerable subgroups in the top 100. We consider the following groups: Race. Black individuals are at higher risk of falling into homelessness in Allegheny County. Out of those that will become homeless in our validation cohorts, on average, 60% were Black, even though Black individuals make up only 14% of the county’s population (U.S. Census Bu- reau 2022). To mitigate these disproportionate impacts, ML models should have a higher TPR compared to white indi- viduals (i.e., serving a higher proportion of Black individu- als who are actually at risk of homelessness). We calculate P (D=1|Y =1,A=black) P (D=1|Y =1,A=white) where D, Y, and A represent the de- cision, outcome, and attribute of interest (e.g., race) respec- tively, finding average recall ratios of 1.34 and 1.14 respec- tively across temporal splits, indicating that our models meet our fairness considerations for race. Gender. For gender, we consider men and women since data on non-binary and transgender individuals is not well-documented in ACDHS data. We calculate P (D=1|Y =1,A=f emale) P (D=1|Y =1,A=male) for our best models and see that RF and LR models are slightly under-serving women, with aver- age TPR ratios of 0.9 and 0.87 respectively. This issue needs to be considered further in the resource allocation process. 
5.4 Prior Homelessness and Mental Health Crises Contribute to Homelessness Risk We find that previous use of homelessness services and inter- actions with mental and behavioral health services are con- sistently predictive of future homelessness spells across all validation temporal splits. Most predictive features include: Previous homelessness service utilization. The number, duration, and recency of past homelessness spells are highly predictive of future homelessness. Past referrals to home- lessness services, emergency shelter utilization, and public housing utilization are identified as highly predictive. Inter- estingly, the most predictive feature was the number of days since the last homelessness spell (baseline B1). We also compared the characteristics of the top 100 indi- viduals versus the rest of the tenants. We found that for the validation split starting on Sept. 1, 2021, the top 100 individ- uals were 34x more likely to have been in an emergency shel- ter in their lifetime, 31x more likely to have been homeless, and likely to have spent 28x as many days in homelessness compared to other tenants. Mental and behavioral health service interaction. Men- tal and behavioral health events and related service utiliza- tion are highly predictive of future homelessness. In the best performing RF, the number of days since the last mental health or behavioral health crisis event, the duration of men- tal health service utilization, the number of times one used mental health services, and the number of mental health cri- sis events were highly predictive features. Compared to the rest of the cohort, those in the top 100 were 100x more likely to have had a mental health crisis event in the last three years and 34x more likely to have had one in their lifetime, likely to have had 28x as many days utilizing mental health services, and likely to have had 24x more behavioral health events in their lifetime. 5.5 First-Time Homelessness is Harder to Predict About half of the people in a cohort who become home- less have not previously been homeless, but we find that our models rely on features about previous homelessness spells, resulting in lower recall for people experiencing first-time homelessness. Our models perform substantially better on people with a history of homelessness: RF’s average recall (excluding the eviction moratorium) is 55% for people who have experienced previous homelessness but only 4% for people experiencing first-time homelessness. We are explor- ing 1) building separate models, and 2) adding additional data sources to better predict future homelessness based on whether or not the person has previously been homeless. Interestingly, the group who receives rental assistance through the existing practice appears to mostly be people without a history of homelessness: for the validation split starting on Sept. 1, 2021, over 70% of people in our co- hort who apply for help and over 80% of people who receive rental assistance have not been homeless before. 5.6 On False Positives: Models Identify Vulnerable Individuals Out of those facing eviction in Allegheny County, about 2% end up interacting with homelessness services within 12 months. With such heavy class imbalance, it is inevitable that we have false positives in our top-k, i.e. recommend giv- ing rental assistance to those who will not become homeless within 12 months. Who are these false positives? Are they vulnerable to other adverse outcomes, including homeless- ness beyond 12 months or other crises? 
In Figure 4, we investigate the top 100 picked by the RF model as of May 1, 2019 as it provides a sufficient tempo- ral gap from the present to observe long-term outcomes. We see that even though the predictions are “wrong” about who will interact with homeless services within 12 months, they highlight people vulnerable to homelessness and other ad- verse outcomes and in need of assistance. Figure 4: False positives are still vulnerable: among these tenants, we see that homeless service utilization and mental or behavioral health crises are common beyond 1 year. 6 Field Validation Before deployment, we validate our results in two consec- utive stages: first, shadow mode deployment (SMD) along- side the current process to validate our solution on live data, and second, a planned randomized control trial (RCT) to compare our proposed solution to the status quo and assess the effectiveness of the treatment (rental assistance). 6.1 Shadow Mode Deployment (SMD) To validate our model against real-time data with no chance of data leakage (see Supplementary Materials A), we use our model to make predictions in real time, while the decision- making process continues to use the current system. We used the best-performing model that met our equity goals as of September 1, 2022, and trained it on all data available at the time to produce a list of 100 individuals at risk of falling into homelessness within 12 months. We then compared this list to the current rental assistance distribution process. Between September 2022 and August 2023, 22 of the 100 individuals on the predicted list made use of homelessness services, confirming our model’s precision@100 of ∼0.20. There is little overlap between individuals targeted by the current approach and those that would have been targeted us- ing our predictive model: among our list of 100 individuals, only 12 received rental assistance.3 Importantly, our model would have resulted in proactive assistance to 17 people who are missed by the current system, i.e. to those who did not reach out for assistance, did not receive rental assistance, and ended up falling into homelessness within a year. 6.2 Randomized Control Trial (RCT) To accurately assess the efficiency of both the current and the ML-based process, we must compare rates of homeless- ness among individuals who received rental assistance to the counterfactual case where such individuals would not have received assistance. Similarly, to assess the effective- ness of rental assistance in reducing homelessness, we must compare rates of homelessness between those that did and did not receive assistance, all else being equal. Such causal identification is not possible through historical observational data. To compare the impact of the current and proposed allo- cation process, we are planning a randomized control trial (RCT). In principle, an RCT should: 1. select two sets of eligible candidates, one using the cur- rent approach and one using the algorithmic approach, 2. randomly assign k% individuals to a treatment and (1 − k)% to a control group for each candidate set. To compare the efficiency of our new system to that of the current system, it would then be possible to compare homelessness rates among individuals selected by both ap- proaches who were assigned to the control group (those who did not receive rental assistance), i.e. P (Y = 1|DM L = 0) − P (Y = 1|DC = 0), where DM L = 0 and DC = 0 denotes random assignment to the control group in the al- gorithmic (DM L) and current list (DC). 
Similarly, to as- sess the effectiveness of rental assistance at reducing home- lessness, it would be possible to compare rates of home- lessness among those in the treatment to rates of home- lessness among those assigned to the control group (i.e. P (Y = 1|DM L = 1) − P (Y = 1|DM L = 0) and P (Y = 1|DC = 1) − P (Y = 1|DC = 0)).4 There are ethical challenges involved with randomizing the allocation of rental assistance. In particular, this RCT would entail not providing assistance to eligible people who applied for funding and who would have received it in the absence of the RCT. Instead, we could rely on a quasi- random assignment of individuals to the control group, as some happen to call the helpline on days when the waitlist is full or funding is not available (see Evans, Sullivan, and Wallskog 2016). This would allow us to construct a coun- terfactual group of eligible individuals who do not receive funding without interfering with the current process. We are currently discussing the RCT design options with ACDHS and plan to finalize them soon. 3Of those 12, 4 used homelessness services within 12 months. Assuming rental assistance is effective, our SMD precision@100 of 0.22 represents a lower bound, but this will have to be tested in our RCT (see Section 6.2). 4Note that the effectiveness of the treatment may vary between people targeted by the current process and those targeted by the ML-based solution, and even different ranges of risk scores be- tween the two. These potential differences should also be assessed. 7 Limitations Eligibility Constraints. We decided not to consider eligi- bility requirements for the following reasons. There are mul- tiple sources of eligibility requirements—those tied to fund- ing sources and the internal ones set by ACDHS—which change over time and are not well-documented. Not only is it impossible to encode historical eligibility requirements given the lack of data, but we also want to alert ACDHS to individuals with the most need, regardless of their eligibility. Label Bias. Our labels are likely biased due to outcome measurement errors, which are common in applying pre- dictive modeling approaches to observational data (Cos- ton et al. 2020; Jacobs and Wallach 2021). Utilization of homelessness services is just a proxy, as not all who fall into homelessness use these services. Individuals who are couch surfing, sleeping in their cars, or unsheltered on the street will not be captured by our outcome definition (label). In Allegheny County, the large majority of homeless indi- viduals use homelessness services, such as emergency shel- ters,5 at some point during their homelessness spell, particu- larly in winter. Studies conducted in warmer climates, how- ever, are likely to systematically miss those who never make use of any homelessness services (e.g. Wachter et al. 2019). By selecting 12 months as our label period, we guarantee that our label captures shelter use during the winter months. 8 Lessons Learned this process, we’ve learned several Throughout lessons about how to most effectively use AI methods in a real- world, resource-constrained context, many of which gener- alize beyond the allocation of rental assistance. Scoping: the underrated first step. What problem actu- ally needs solving? What will actually be used in practice? While these questions may seem like the obvious first step, a common question instead asked by many is “what are in- teresting research problems we can explore with this data?,” regardless of their real-world impact. 
We spent months scop- ing (Data Science for Social Good 2023) the project with domain experts focusing on societal goals and actions we can inform until ACDHS’ specific needs were clear. Only then did we formulate that need as a modeling problem and explore ML-based approaches. Data leakage: the secret deceptor. What data will be available at the prediction date, when decisions are made? Real-world data is messy: columns are populated on differ- ent dates and updated at different intervals. Unwittingly us- ing data that was updated after the prediction date can artifi- cially inflate performance but data leakage can be difficult to detect. Early on, we discovered that the feature denoting if a resident’s age was imputed was a strong indicator of future homelessness. Upon further investigation, we found that the source containing age was updated in place, meaning that 5ACDHS’ most recent point in time count, where workers man- ually count all homeless individuals in the county on one day, found that 83% of homeless individuals were staying in shelters that day (Allegheny County Department of Human Services 2023). if a person had interacted with a program that recorded age after the prediction date, it would be reflected in our fea- ture. It became a signal for homelessness because they had interacted with social services after the prediction date. Un- knowingly, we had run into data leakage. Field trials: the reality check for ML’s social impact. Often, a field trial is necessary to construct counterfactual outcomes and compare the effectiveness and efficiency of the proposed solution to that of the current process. How- ever, randomization can be ethically challenging as it in- volves not providing help to otherwise eligible individuals in need. In such cases, researchers should aim to construct counterfactual groups in a manner that minimizes interfer- ence with the current allocation process if feasible. Evaluation for real-world impact. Metric selection and validation techniques require careful consideration of the real-world problem. Standard evaluation methods, such as k-fold CV and AUC, may not accurately capture the success of a model for the given task. We started by defining our specific goals, e.g. efficiency, effectiveness, and equity, and then found the appropriate metric for each goal. In resource- constrained policy problems like ours, precision@k repre- sents an appropriate efficiency metric, while AUC or F1 scores are less meaningful. Communicating to policymakers. Policymakers may not have the technical expertise to decipher ML models or met- rics. We need to assist them in making informed decisions, e.g. by providing intuitive explanations of metrics and a palette of model options with their associated policy trade- offs. While this can be time-consuming, the alternative can lead to model misuse or output misinterpretation. 9 Conclusion We have shown that predictive modeling can improve upon the prioritization of rental assistance to tenants facing evic- tion in Allegheny County to reduce the rate of entry into homelessness. Our novel approach has four main contribu- tions, providing (a) need-based prioritization of rental assis- tance (b) in a proactive manner, which is at least 20% more effective than simpler baselines while being equitable. We (c) validated our models by deploying them as a shadow model and are designing an RCT that is being discussed with ACDHS. 
We also (d) included some lessons learned that other AI researchers can use to ethically design predic- tive support tools. After the RCT, this process is expected to be used in practice, improving the effectiveness of alloca- tion of rental assistance and equity in outcomes in Allegheny County. Acknowledgements This work was started as part of the 2022 Data Science for Social Good (DSSG) Fellowship at Carnegie Mellon Uni- versity and partly funded by a grant from the Richard King Mellon Foundation. We thank Adolfo De Unanue for his valuable input during DSSG. We also thank our partners at ACDHS, particularly Rachel Rue and Justine Galbraith, for their help throughout this project. Pedregosa, F.; Varoquaux, G.; Gramfort, A.; Michel, V.; Thirion, B.; Grisel, O.; Blondel, M.; Prettenhofer, P.; Weiss, R.; Dubourg, V.; Vanderplas, J.; Passos, A.; Cournapeau, D.; Brucher, M.; Perrot, M.; and Duchesnay, E. 2011. Scikit- learn: Machine Learning in Python. Journal of Machine Learning Research, 12: 2825–2830. Rolston, H.; Geyer, J.; and Locke, G. 2013. Evaluation of the Homebase Community Prevention Program. NYC De- partment of Homeless Services. Shinn, M.; Greer, A. L.; Bainbridge, J.; Kwon, J.; and Zuiderveen, S. 2013. Efficient Targeting of Homelessness Prevention Services for Families. American Journal of Pub- lic Health, 103(S2): S324–S330. U.S. Census Bureau. 2022. 2022 Census. U.S. Department of Commerce. Wachter, T. V.; Bertrand, M.; Pollack, H.; Rountree, J.; and Blackwell, B. 2019. Predicting and Preventing Homeless- ness in Los Angeles. References Allegheny County Department of Human Services. 2023. Point-In Time Count of People Experiencing Home- lessness. https://analytics.alleghenycounty.us/wp-content/ uploads/2023/05/23-ACDHS-04-PIT-Brief v7.pdf. Barocas, S.; Hardt, M.; and Narayanan, A. 2023. Fairness and Machine Learning: Limitations and Opportunities. MIT Press. Burt, M. R. 2007. Homelessness: Prevention Strategies and Effectiveness. Nova Science Publishers. Chen, I. Y.; Joshi, S.; and Ghassemi, M. 2020. Treat- ing health disparities with artificial intelligence. Nature medicine, 26(1): 16–17. Coston, A.; Mishler, A.; Kennedy, E. H.; and Chouldechova, A. 2020. Counterfactual Risk Assessments, Evaluation, and Fairness. In Proceedings of the 2020 Conference on Fair- ness, Accountability, and Transparency, 582–593. Culhane, D. P.; Metraux, S.; and Byrne, T. 2011. A Prevention-Centered Approach to Homelessness Assistance: A Paradigm Shift? Housing Policy Debate, 21(2): 295–315. Data Science for Social Good. 2023. Data Science Project Scoping Guide. https://www.datasciencepublicpolicy. org/our-work/tools-guides/data-science-project-scoping- guide/. de Sousa, T.; Andrichik, A.; Cuellar, M.; Marson, J.; Prestera, E.; Rush, K.; and Abt Associates. 2022. The 2022 Annual Homelessness Assessment Report (AHAR to Congress) Part 1: Point-In-Time Estimates of Homelessness, December 2022. The U.S. Department of Housing and Ur- ban Development, Office of Community Planning and De- velopment, 1–108. Evans, W. N.; Sullivan, J. X.; and Wallskog, M. 2016. The impact of homelessness prevention programs on homeless- ness. Science, 353(6300): 694–699. Fowler, P. J.; Hovmand, P. S.; Marcal, K. E.; and Das, S. 2019. Solving Homelessness from a Complex Systems Per- spective: Insights for Prevention Responses. Annual Review of Public Health, 40(1): 465–486. Glendening, Z.; and Shinn, M. 2017. Risk Models for Re- turns to Housing Instability Among Families Experiencing Homelessness. 
Cityscape (Washington, D.C.), 19(3): 309– 330. Hardt, M.; Price, E.; and Srebro, N. 2016. Equality of op- portunity in supervised learning. In Advances in Neural In- formation Processing Systems, 3323–3331. Hoptroff, R. G. 1993. The Principles and Practice of Time Series Forecasting and Business Modelling Using Neural Nets. Neural Comput. Appl., 1(1): 59–66. Jacobs, A. Z.; and Wallach, H. 2021. Measurement and Fair- ness. In Proceedings of the 2021 ACM Conference on Fair- ness, Accountability, and Transparency, 375–385. Kube, A.; Das, S.; and Fowler, P. J. 2019. Allocating In- terventions Based on Predicted Outcomes: A Case Study on Homelessness Services. Proceedings of the AAAI Confer- ence on Artificial Intelligence, 33(01): 622–629. A Data Sources Table 2 shows the data sources that were used for feature generation. Data from the court system on eviction was linked using a unique identifier to other county data, including demographics, enrollment in county/state programs, housing and homelessness services, and mental/behavioral health interactions. The dataset contains interactions between January 2012 and August 2023. Data Type Info Entry Date Information Demographics most recent interaction ➢ gender, birthdate of individual, and (frequency of) address changes Evictions filing date hearing date OFP date ➢ dollar amount claimed to be owed according to landlord ➢ who won the case (tenant or landlord) ➢ how much tenant owes landlord ➢ whether an OFP has been filed Program Interactions enrollment date termination date ➢ program type: i.e. Medicaid, Food Assistance, Homeless Shelter, Medical Assistance Transportation, or other similar programs offered by ACDHS ➢ when client is no longer enrolled in the program Public Housing Mental & Behavioral Health enrollment date move-in date address change date ➢ housing service type: i.e. Section 8 Voucher, Rapid Rehousing, or similar ➢ when and where client was rehoused (can be long after enrollment date) ➢ when and where client moves to a new location interaction start date interaction end date diagnosis date ➢ interaction type: i.e. walk-in, crisis, or hospital stay ➢ when person left (only relevant for multi-day stays) ➢ type of official diagnosis: i.e major depression, bipolar disorder, etc Physical Health (ER) interaction start date interaction end date ➢ when person visited the ER ➢ when person left (only relevant for multi-day stays) Children, Youth, and Families (CYF) interaction start date interaction end date ➢ interaction type: i.e. child moved to foster care or group home ➢ when the child moved to a different service or “aged out” (turned 18) Table 2: Sources of data used for feature generation, as well as the dates at which we consider each piece of information to be known for temporal validation. Mitigating Data Leakage. Since we are using temporal validation, we need to ensure that, if we are evaluating an algorithm with data known up to a certain date, we do not use any information that was not known up to that date. Otherwise, we run the risk that the ”leakage“ of information from the future affects past results. For example, if we were to train a model on data up to January 1 2019 and a client had an eviction in December 2018 but an OFP in February, we must make sure we do not use any information about that individual’s OFP in our training data. At first, this may not seem like a difficult task, but it can prove tricky with real-world, messy data. For each column given to us by ACDHS, when did they know that data by? 
Do they update that data daily, weekly, or even monthly? Considering these questions is crucial to ensure that our models do not appear to perform better than they would when actually deployed, in case we were inadvertently using information only known in the future. For this reason, we not only explain the type of information provided by ACDHS in Table 2, but also specify which date we know that information by. As shown in Table 2, most data is associated with a specific interaction, allowing us to generate a temporal history for each client. However, this is not the case for demographic information, as this is continuously updated without keeping track of old entries. Consequently, the dataset only reveals what demographic information was known about an individual at the time of the most recent interaction. Certain demographic information, such as race, are more likely to be null for individuals with few interactions with ACDHS. Using this information is problematic for temporal validation since it contains important information from the future (i.e., whether an individual had many interactions with ACDHS up to the last day in the dataset. This, in turn, could falsify the performance evaluation results as individuals falling into homelessness interact more with ACDHS, on average. To avoid this type of data leakage, we made sure to only use demographic information that is collected during every interaction. B Demographic Composition of Cohort Table 3 describes the demographic composition of the last pre-pandemic cohort as of January 1, 2019. In this cohort (as in others), Women and African Americans are disproportionately at risk of facing eviction, and, conditional on facing eviction, are at higher risk of future homelessness. Fourteen percent of the population in Allegheny County is African American (U.S. Census Bureau 2022), yet this share jumps to 55% among those facing eviction and to 59% among those facing eviction that fall into homelessness the following year. Those with a history of homelessness are also more likely to reenter homelessness in the future: while 6% of individuals in the cohort had been homeless in the past, this share jumps to 37% among those who end up in homelessness in the following year. Characteristics Total Becomes homeless Female African American Has been homeless 56.5% 55.1% 6.0% (N=4036) No 56.4% 55.0% 5.4% (N=3966) Yes 61.4% 58.6% 37.1% (N=70) Table 3: Demographic composition of cohort as of January 1, 2019 across key demographic groups of interest and label out- comes. C Parameter Grid Table 4 shows the different parameter values that were used for the models. Models with each possible combination of these hyperparameters were trained and tested for each date of analysis. D Temporal Validation Figure 5 visualizes the temporal validation splits. The results of splits 1 – 18 are reported inSection 5. The results of the shadow mode deployment are reported in Section 6.1. For each temporal split, we generate feature label pairs that span the entire timespan of the training set. As an example, we visualize this for the split 10 in Figure 6. The split 10 of the temporal validation corresponds to the model development as of January 1 2019. Thus, the most recent label timespan considered in the training data matrix is between January 1 2018 and January 1 2019. However, individuals who only made use of homelessness services before January 1 2018 are not labeled positively in this label timespan. 
Therefore, we additionally consider label timespans of 12 months, going back in 3 months intervals until January 1 2013. This is needed to exploit the data available in the training set while respecting the temporal flow of events in the past. E Additional Baselines In addition to the baselines mentioned in Section 4.4, we also tried a few others. These were omitted from the paper due to their poor performance. B4. Age at first interaction. This baseline sorts by the age at which the individual first was enrolled in an ACDHS program. The younger the individual at their first interaction, the more likely they are to fall into homelessness. B5. Age at first adult interaction. Some individuals are involved in child welfare or foster care services from a young age. This baseline extends B4 by only considering ACDHS program involvement once the individual is an adult (18 years of age). B6. Days since current filing. Similar to B1: Current process, this baseline instead sorts individuals by the date of their current eviction filing (not their OFP date), with earlier dates being considered as more likely to fall into homelessness. B7. Days since last program involvement. This baseline assumes that individuals who recently interacted with non- homelessness ACDHS services are more vulnerable, and therefore more likely to fall into homelessness. B8. Number of distinct programs. This baseline sorts individuals by the number of distinct ACDHS programs they have been involved in throughout their lifetime, with more programs indicating that the individual is more vulnerable and therefore more likely to fall into homelessness. Model Name Parameter Values Logistic Regression C 0.001, 0.01, 0.1, 1 Penalty L1, L2 Decision Tree Max depth 1, 2, 5, 10, no limit Min samples split 2, 10 Random Forest Number of estimators 1000, 5000, 10000 Max depth 5, 10, 25, 50 Min samples split Min samples leaf 10, 100 10, 100 Light GBM Boosting type dart Number of estimators 100, 300, 500 # leaves 31 Max depth 10, 100 XG Boost Booster gbtree Learning rate 0.01, 0.1 Number of estimators 100, 300 Max depth 5, 10, 40 Table 4: Grid search parameters for model selection Figure 5: Temporal validation 20122013201420152016201720182019202020212022Split 1Split 2Split 3Split 4Split 5Split 6Split 7Split 8Split 9Split 10Split 11Split 12training setevaluation setstart of the COVID-19 pandemic2023Split 13Split 14Split 15Split 16Split 17Split 18Shadow mode deploymentstart of the randomized control trial202420252026 Figure 6: Feature label pair generation in one temporal validation split (here we represent split 10 as an example) B9. Number of program involvement spells. Since an individual can be enrolled in the same ACDHS program multiple times throughout their lifetime, this baseline extends B9 by considering the distinct number of times an individual has been involved in any ACDHS program, with more involvement indicating the individual is more likely to fall into homelessness. B10. Total days in program involvement. Similar to the previous two baselines, this baseline sorts individuals by the total number of days they have been involved in any ACDHS service, with more involvement indicating the individual is more likely to fall into homelessness. Figure 7 shows how these baselines perform compared to our selected baselines B1 — B3. We see that generally, B2: Previous Homelessness performs better than other baselines. 
B1: Current Process and B3: Baserate also do not perform well, but they were included in the main results because B1 most closely emulates ACDHS’ current process and B3 shows how well random allocation would perform.

Figure 7: Performance of all attempted baselines: the grey lines show the performance of B4–B10, which were omitted from the main paper results since they all perform worse than B2: Previous Homelessness.

F Effect of the COVID-19 pandemic on cohort size and positive label prevalence

Figure 8a shows that, due to the eviction moratorium, the number of cases drops drastically in the year 2020. However, after the moratorium, the number of eviction cases rises almost to pre-pandemic levels in the year 2022. As can be seen in Figure 8b, the cohort size also changes accordingly in this time period – as does the number of positive labels in those cohorts, see Figure 8c.

G Field Trial Schematic Design

Figure 8: (a) Number of eviction cases filed per month, (b) number of individuals in the cohort as of a particular date of analysis, and (c) number of individuals that become homeless in the next 12 months as of a date of analysis.

Figure 9: Schematic drawing of RCT design
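Returning to the model grid in Table 4: the following is a minimal sketch of how such a hyperparameter grid can be enumerated with scikit-learn's ParameterGrid. Only a subset of the Table 4 values is shown, and the surrounding training loop is assumed rather than reproduced.

```python
from sklearn.model_selection import ParameterGrid

# Subset of the Table 4 grid, for logistic regression and random forest.
grids = {
    "logistic_regression": {"C": [0.001, 0.01, 0.1, 1], "penalty": ["l1", "l2"]},
    "random_forest": {
        "n_estimators": [1000, 5000, 10000],
        "max_depth": [5, 10, 25, 50],
        "min_samples_split": [10, 100],
        "min_samples_leaf": [10, 100],
    },
}

for model_name, grid in grids.items():
    for params in ParameterGrid(grid):
        # In the paper's setup, each combination is trained and evaluated
        # once per date of analysis; here we only print the configurations.
        print(model_name, params)
```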
ai_researcher
1
Incorporating_Flexibility_In_Real_Estate_Financial_Feasibility_Analysis.pdf
Article
Two Strategies for Boreal Forestry with Goodwill in Capitalization
Petri P. Kärenlampi 1*
1 Lehtoi Research, Finland
* Correspondence: [email protected]

Abstract: Two strategies for boreal forestry with goodwill in estate capitalization are introduced. A strategy focusing on Real Estate (RE) is financially superior to Timber Sales (TS). The feasibility of the RE requires the presence of forest land end users in the real estate market, like insurance companies or investment trusts, and the periodic boundary condition does not apply. Commercial thinnings do not enter the RE strategy in a stand-level discussion. However, they may appear in estates with a variable age structure and enable an extension of stand rotation times.

Keywords: estate market; timber sales; capital return rate; expected value; periodic boundary conditions.

1. Introduction

For decades, forest estates have been lucrative investments [1,2,3,4,5,6,7,8,9]. Timber sales proceeds have developed conservatively [10,11,12,13], but there has been a significant development in the valuation of estates [2,5,8,9]. The development of estate valuation probably has been related to declining market interest rates, impairing yields from interest-bearing instruments [14,15]. It is thus suspected that the inflated capitalizations are due to factors external to the forestry business [cf. 16,17,18,19,20]. Also, vertical integration within the forestry sector may induce a valuation premium for forest estates [21,2]. In addition, private-equity timberland often appears as a favorable component in diversified portfolios [22,2,23,24]. An ownership change has been related to the increased estate valuation. In North America and in the Nordic Countries, institutions concentrating on the business of investing have purchased timberland from forest products companies [25,21,1,2]. Forestry institutions, rather than private individuals, have recently dominated the estate market [26,9,7]. Some investment firms have included carbon sequestration in their business strategies [27,28]. However, enhanced carbon sequestration generally induces a deficiency in the gained financial benefit [29,30,31,32,33]. Forestry investments have been recently analyzed in terms of financial economics. However, private-equity timberland returns are poorly explained by the Capital Asset Pricing Model (CAPM) [25,2], even if stumpage prices appear to support timberland returns [34]. Improving investor sentiment impairs timberland returns [35]. Arbitrage Pricing Theory (APT) is a complicated approach, including an intuitive selection of explaining factors [25,2]. The increased capitalization affects the financial return in operative forestry: greater valuation necessarily reduces the return on invested capital. The greater valuations may also affect which management practices are feasible. Changes in valuations also may contribute to the financial deficit induced by enhanced carbon sequestration, biodiversity advancement, or recreational modifications. Instead of merely referring to average market prices of forest estates, valuations in terms of tangible and intangible value components appearing on forest stands and estates have been recently discussed: trees, land, amortized investments, and eventual goodwill values [36]. Observations have indicated that a proportional goodwill is close to reality within the Nordic Region, however resulting in continuity problems [36]. There is some theoretical justification for the appearance of the premium [36].
It has been found that goodwill value deteriorates along with harvesting. Such deterioration however can be at least partially avoided by exploiting the real estate market, instead of merely the timber market [36]. In the remaining part of this paper, we will first review the financial theory, including the consequences of inflated capitalization. A hybrid Equation is developed, allowing but not forcing commercial thinnings. Two strategies are introduced, one focusing on entering the Real Estate market (RE), and the other on Timber Sales (TS), the former applying the hybrid Equation. Then, experimental materials are described. Financial analysis is implemented for the performance of the two strategies, and results are discussed.

2. Materials and Methods

2.1. Financial considerations

We apply a procedure first mentioned in the literature in 1967, but applied only recently [37,38,39,40,41,42,43,32,33,36]. Instead of discounting revenues, the capital return rate achieved as relative value increment at different stages of forest stand development is weighted by current capitalization, and integrated. The capital return rate is the relative time change rate of value. We choose to write

r(t) = \frac{d\kappa/dt}{K(t)} \qquad (1)

where \kappa in the numerator considers value growth, operative expenses, interests, and amortizations, but neglects investments and withdrawals. In other words, it is the change of capitalization on an economic profit/loss basis. K in the denominator gives capitalization on a balance sheet basis, being directly affected by any investment or withdrawal. Technically, K in the denominator is the sum of assets bound on the property: bare land value, the value of trees, and the non-amortized value of investments. In addition, intangible assets may appear. The pricing of forest estates may include goodwill value.

The momentary definition appearing in Eq. (1) provides a highly simplified description of the capital return rate. In reality, there is variability due to several factors. Enterprises often contain businesses distributed to a variety of production lines, geographic areas, and markets. In addition, quantities appearing in Eq. (1) are not necessarily completely known but may contain probabilistic scatter. Correspondingly, the expected value of capital return rate and valuation can be written, by definition,

\langle r(t) \rangle = \frac{\int p_{d\kappa/dt}\, \frac{d\kappa}{dt}\; d\!\left(\frac{d\kappa}{dt}\right)}{\int p_{K}\, K(t)\; dK} = \frac{\int p_{d\kappa/dt}\, r(t)\, K(t)\; d\!\left(\frac{d\kappa}{dt}\right)}{\int p_{K}\, K(t)\; dK} \qquad (2)

where p_i corresponds to the probability density of quantity i.

Let us then discuss the determination of capital return rate in the case of a real estate firm benefiting from the growth of multiannual plant stands of varying ages. Conducting a change of variables in Eq. (2) results in

\langle r(t) \rangle = \frac{\int p_a(t)\, \frac{d\kappa}{dt}(a,t)\; da}{\int p_a(t)\, K(a,t)\; da} = \frac{\int p_a(t)\, r(a,t)\, K(a,t)\; da}{\int p_a(t)\, K(a,t)\; da} \qquad (3)

where a refers to stand age. Eq. (3) is a significant simplification of Eq. (2) since all probability densities now concern the variability of stand age. However, even Eq. (3) can be simplified further. In Eq. (3), the probability density of stand age is a function of time, and correspondingly the capital return rate, as well as the estate value, evolve in time. A significant simplification would occur if the quantities appearing on the right-hand side of Eqs. (2) and (3) would not depend on time.
Within forestry, such a situation would be denoted "normal forest principle", corresponding to evenly distributed stand age determining relevant stand properties [44].

\langle r \rangle = \frac{\int \frac{d\kappa}{dt}(a)\; da}{\int K(a)\; da} = \frac{\int r(a)\, K(a)\; da}{\int K(a)\; da} \qquad (4)

The "normal forest principle" is rather useful when considering silvicultural practices, but seldom applies to the valuation of real-life real estate firms, with generally non-uniform stand age distribution. However, it has recently been shown [32] that the principle is not necessary for the simplification of Eq. (3) into (4). This happens by focusing on a single stand, instead of an entire estate or enterprise, and considering that time proceeds linearly. Then, the probability density function p(a) is constant within an interval [0, \tau]. Correspondingly, it has vanished from Eq. (4).

Application of Eqs. (1) to (4) does require knowledge of an amortization schedule. Here, regeneration expenses are capitalized at the time of regeneration and amortized at the end of any rotation [43].

By definition, inflation of capitalization corresponds to the emergence of a surplus in the capitalization K appearing in the denominator of Eqs. (1) to (4). Simultaneously, the value change rate d\kappa/dt in the numerator may or may not become affected. Before discussing the details of inflated capitalization, a periodic boundary condition is given as

\oint_a^{a+\tau} \frac{dK}{dt}\; dt = 0 \qquad (5)

where \tau is rotation age. On the other hand, the value growth rate sums up as free cash flow as

\oint_a^{a+\tau} \frac{d\kappa}{dt}\; dt = \oint_a^{a+\tau} \frac{dC}{dt}\; dt \qquad (6)

where dC/dt refers to the rate of free cash flow from timber sales.

Let us then discuss a few possible manifestations of inflated capitalization. First, one must recognize that the free cash flow is due to sales of products and services and is not directly affected by inflation of estate capitalization. Secondly, it is found from Eqs. (1) to (4) that provided the capitalization K and the value change rate d\kappa/dt are affected similarly, the capital return rate is invariant, and does not trigger changes in management practices. Then, however, Eq. (6) is apparently violated. It must be complemented as

\oint_a^{a+\tau} \frac{d\kappa}{dt}\; dt = \oint_a^{a+\tau} \frac{dC}{dt}\; dt + \oint_a^{a+\tau} \frac{dD}{dt}\; dt \qquad (7)

where dD/dt refers to the rate of intangible market premium. The intangible market premium however can be liquidized only on the real estate market, not on the timber market. Unless the real estate market is exploited, the closed integral vanishes under periodic boundary conditions,

\oint_a^{a+\tau} \frac{dD}{dt}\; dt = 0 \qquad (8)

Further, the change rate of capitalization can be decomposed as

\frac{dK}{dt} = \frac{d\kappa}{dt} - \frac{dV}{dt} + \frac{dI}{dt} = \frac{dN}{dt} - \frac{dV}{dt} + \frac{dI}{dt} - \frac{dA}{dt} \qquad (9)

where dN/dt is the net value growth rate, dV/dt is net cash flow, dI/dt is the rate of capitalized investments, and dA/dt is the rate of amortizations; the second equality uses d\kappa/dt = dN/dt - dA/dt.

Considering a scaling factor (1+u) for the net value growth rate dN/dt and combining it with the boundary condition (5) results as

\oint_a^{a+\tau} \frac{dV}{dt}\; dt = (1+u)\oint_a^{a+\tau} \frac{dN}{dt}\; dt + \oint_a^{a+\tau} \frac{dI}{dt}\; dt - \oint_a^{a+\tau} \frac{dA}{dt}\; dt \qquad (10)

and

\oint_a^{a+\tau} \frac{dC}{dt}\; dt = (1+u)\oint_a^{a+\tau} \frac{dN}{dt}\; dt - \oint_a^{a+\tau} \frac{dA}{dt}\; dt \qquad (11)

It is found that as long as the capitalization premium u is zero, Eq. (11) coincides with Eq. (6). Let us then consider the scaling factor (1+u) for the net value growth rate within the capital return rate:

r' = \frac{\oint_a^{a+\tau} \frac{d\kappa}{dt}\; dt + u\left[\oint_a^{a+\tau} \frac{dN}{dt}\; dt - \oint_a^{a+\tau} \frac{dA}{dt}\; dt - \oint_a^{a+\tau} \frac{dC}{dt}\; dt\right]}{(1+u)\oint_a^{a+\tau} K\; dt} = \frac{(1+u)\oint_a^{a+\tau} \frac{d\kappa}{dt}\; dt - u\oint_a^{a+\tau} \frac{dC}{dt}\; dt}{(1+u)\oint_a^{a+\tau} K\; dt} \qquad (12)

Provided the value growth sums up as free cash flow according to Eq. (6), Eq. (12) converts into

r' = \frac{\oint_a^{a+\tau} \frac{d\kappa}{dt}\; dt}{(1+u)\oint_a^{a+\tau} K\; dt} \qquad (13)

In other words, the inflated capitalization simply scales Eq. (4) by a factor of 1/(1+u).

However, there might be a possibility that Eq. (6) would become violated. This also would correspond to a violation of the periodic boundary condition (5): if the amount of harvesting does not sum up to the accumulated net growth, the capitalization is no longer periodic. Departure from periodicity means that the integration can no longer be started from an arbitrary time as in Eq. (12). Instead, the value increment rate must be integrated from the establishment of a stand to maturity. In the absence of periodicity, Eq. (12) must be rewritten as

r' = \frac{\int_0^{\tau} \frac{d\kappa}{dt}\; dt + u\left[\int_0^{\tau} \frac{dN}{dt}\; dt - \int_0^{\tau} \frac{dC}{dt}\; dt\right]}{(1+u)\int_0^{\tau} K\; dt} = \frac{(1+u)\int_0^{\tau} \frac{d\kappa}{dt}\; dt + u\left[\int_0^{\tau} \frac{dA}{dt}\; dt - \int_0^{\tau} \frac{dC}{dt}\; dt\right]}{(1+u)\int_0^{\tau} K\; dt} \qquad (14)

where the second form again uses d\kappa/dt = dN/dt - dA/dt. If the net cash flow approaches zero in the absence of harvesting, Eq. (14) turns into

r' = \frac{\int_0^{\tau} \frac{d\kappa}{dt}\; dt + \frac{u}{1+u}\int_0^{\tau} \frac{dA}{dt}\; dt}{\int_0^{\tau} K\; dt} \qquad (15)

Eq. (15) shows that there is no linear scaling in the capital return rate, in relation to Eq. (4). Instead, there is a slight increment in capital return rate since the goodwill in capitalization applies to the net growth rate but not to amortizations. It is found from Eq. (14) that the creation of free cash flow deteriorates the return rate of capital. In other words, it reduces the capitalization premium in the numerator of Eq. (14), resulting ultimately in the linear scaling appearing in Eq. (13). Harvesting deteriorates value, but if harvesting does not sum up to the accumulated net growth, the periodic boundary conditions are violated. This phenomenon is here denoted as a continuity problem of value creation.

Two different forestry strategies are here applied. Firstly, only the timber market is entered, and consequently, the deterioration of goodwill value along with harvesting according to Eq. (8) is accepted. Consequently, the capital return rate is given by Eq. (13). This strategy is here denoted as the "Timber Sales" – strategy (TS). Secondly, the real estate market is entered at stand maturity. This corresponds to abandoning the periodic boundary conditions (5) and (6) within the financial perspective of the operating agent. Then, also Eq. (8) would become abandoned. Correspondingly, Eq. (14) can be applied without reference to Eqs. (5) and (6), but it does not necessarily lead to Eq. (15) – goodwill deterioration along with eventual thinnings is accepted if necessary for maximizing the total return. This strategy is here denoted as the "Real Estate" – strategy (RE). Again, depending on whether harvesting is excluded in Eq. (14), the outcome may or may not approach Eq. (15). Importantly, the maturity age possibly can be controlled through thinnings, at the expense of a capital return rate deficiency.

2.2. The two datasets applied

Two different sets of initial conditions, described in four earlier investigations, are applied, together forming 16 different sets of initial conditions [45,42,43,32]. Firstly, a group of nine setups was created, containing three tree species and three initial sapling densities [43]. The idea was to apply the inventory-based growth model as early in stand development as it is applicable, to avoid approximations of stand development not grounded on the inventory-based growth model [46].
Applying the growth model this early also allowed an investigation of a wide range of stand densities, as well as a comprehensive description of the application of three tree species. The exact initial conditions here equal the ones recommended in [43], appearing there in Figures 8 and 9. Secondly, seven wooded, commercially unthinned stands in Vihtari, Eastern Finland, were observed at the age of 30 to 45 years. The total stem count varied from 1655 to 2451 per hectare. A visual quality approximation was implemented. The number of stems deemed suitable for growing further varied from 1050 to 1687 per hectare. The basal area of the acceptable-quality trees varied from 28 to 40 m²/ha, in all cases dominated by spruce (Picea abies) trees. The two strategies discussed above are applied to both datasets. A proportional goodwill (1+u) = (1+1/2) is applied according to Eqs. (13) and (14). The inflation factor is somewhat arbitrary, but it is based on recent observations [5,7,9], including very recent observations by the author: large, productive forest estates appear to change owners at 150% of the fair forestry value determined by professionals. In addition to the two market strategies, eventual thinning restrictions are discussed.

3. Results

Figs. 1, 2, and 3 show the expected value of the capital return rate within stands of three tree species where the growth model is applied as early as applicable. For any of the three tree species, commercial thinnings do not enter into the Real Estate – strategy. Correspondingly, Eq. (14) coincides with Eq. (15), and thinning restrictions are irrelevant. In the absence of thinning restrictions, stands to be harvested according to the Timber Sales – strategy (Eq. (13)) do enter thinnings, with one exception. It is found that the achievable capital return rate is on the order of 50% greater in the Real Estate strategy, corresponding to shorter rotation ages.

Figure 1. The expected value of capital return rate on pine (Pinus sylvestris) stands of different initial sapling densities, as a function of rotation age, when the growth model is applied as early as applicable, along with the Real Estate – strategy (Eq. (14)), and the Timber Sales – strategy (Eq. (13)).

Figure 2. The expected value of capital return rate on spruce (Picea Abies) stands of different initial sapling densities, as a function of rotation age, when the growth model is applied as early as applicable, along with the Real Estate – strategy (Eq. (14)), and the Timber Sales – strategy (Eq. (13)).

Figure 3. The expected value of capital return rate on birch (Betula pendula) stands of different initial sapling densities, as a function of rotation age, when the growth model is applied as early as applicable, along with the Real Estate – strategy (Eq. (14)), and the Timber Sales – strategy (Eq. (13)).

Figs. 4 and 5 show the expected value of the capital return rate within seven stands first observed at the age of 30 to 45 years, in the presence of inflated capitalization and eventual thinning restrictions. The capital return rate according to Eq. (14) within stands prepared for sale (Real Estate – strategy, Fig. 4) is consistently greater than that for stands to be harvested (Timber Sales – strategy, Fig. 5) according to Eq. (13). No commercial thinnings are entered according to the RE strategy. Correspondingly, Eq. (14) coincides with Eq. (15), and thinning restrictions are irrelevant. In the absence of thinning restrictions, stands to be harvested according to the TS strategy (Eq. (13)) always enter thinnings.
The rotation times according to Eq. (15) are consistently shorter, and capital return rates on the order of 50% greater.

Figure 4. The expected value of capital return rate, as a function of rotation age, when the growth model is applied to seven observed wooded stands, along with the Real Estate – strategy (Eq. (14)). Thinnings do not enter, and Eq. (14) coincides with Eq. (15). The numbers in legends identify stands and observation plots.

Figure 5. The expected value of capital return rate, as a function of rotation age, when the growth model is applied to seven observed wooded stands, along with the Timber Sales – strategy (Eq. (13)). Thinnings do enter in all cases. The numbers in legends identify stands and observation plots.

4. Discussion

Two strategies for boreal forestry have been discussed at stand level. The Timber Sales – strategy can be naturally applied at stand level. The Real Estate – strategy however should be applied at the estate level. Correspondingly, the stand-level treatment is somewhat problematic and would apply as such only in even-aged estates. In the case of uneven-aged estates, robustness against varying rotation ages at the stand level would be beneficial. Another issue is that the age structure of an estate is not necessarily the only factor contributing to the timing of entering the real estate market – a major timing contributor may be family reasons, possibly related to the transfer of property to the next generation.

Let us discuss the robustness of the Real Estate – strategy regarding rotation ages. It might be of interest to clarify the possibility of extending the rotation times within the RE strategy. It would allow older stands within an estate to be held while younger stands mature. On the other hand, such a possibility would allow selecting a suitable time for entering the real estate market, not necessarily directly due to estate age structure. A possibility for extending rotation ages within the RE strategy might be the application of commercial thinnings. Then, Eq. (14) would no longer coincide with Eq. (15) as the net cash flow does not sum up to zero. In Figs. 1 to 4, the suitable rotation ages vary from 40 to 60 years, without thinnings. It is of interest how the rotation ages possibly could be extended by 20 years without any major loss of the expected value of capital return rate on the stand level. It is found from Figs. 6 to 8 that in stands where the growth model is applied as applicable, thinnings from above can extend rotations with a minor loss in the financial return, and there is a significant difference between this modified RE strategy and the corresponding outcome of the TS strategy. The same partially applies to stands first observed at age 30 to 45 years in Fig. 9. However, in the case of fertile stands reaching the maximum financial return in Fig. 4 soon after the time of stand observation, a greater loss from the extension of the rotation time is found in Fig. 9. This possibly indicates that from the viewpoint of extending the rotation time, these stands are, at the time of observation, overdue for the best timing of thinning.

While the results of this paper indicate that, in the presence of proportional goodwill in estate prices, large-scale harvesting becomes non-profitable within the Real Estate – strategy, a question arises regarding the necessity of climate change mitigation arrangements. The answer appears to be two-fold. Eventual carbon rent would change the thinning practices appearing in Figs.
6 to 9. On the other hand, timberland end users, like insurance companies and investment trusts, probably will retain the Timber Sales – strategy. Correspondingly, the operations of such institutions remain as subjects for mitigation programs [47,32,33,41]. Figure 6. The expected value of capital return rate on pine (Pinus sylvestris) stands of different initial sapling densities, as a function of rotation age, when the growth model is applied as early as applicable, along with the Real Estate – strategy (Eq. (14)), and the Timber Sales – strategy (Eq. (13)). Thinnings have been introduced into the RE strategy to extend the feasible rotations by 20 years. Figure 7. The expected value of capital return rate on spruce (Picea Abies) stands of different initial sapling densities, as a function of rotation age, when the growth model is applied as early as applicable, along with the Real Estate – strategy (Eq. (14)), and the Timber Sales – strategy (Eq. (13)). Thinnings have been introduced into the RE strategy to extend the feasible rotations by 20 years. Figure 8. The expected value of capital return rate on birch (Betula pendula) stands of different initial sapling densities, as a function of rotation age, when the growth model is applied as early as applicable, along with the Real Estate – strategy (Eq. (14)), and the Timber Sales – strategy (Eq. (13)). Thinnings have been introduced into the RE strategy to extend the feasible rotations by 20 years. Figure 9. The expected value of capital return rate, as a function of rotation age, when the growth model is applied to seven observed wooded stands, along with the Real Estate – strategy (Eq. (14)). The numbers in legends identify stands and observation plots. Thinnings have been introduced into the RE strategy to extend the feasible rotations by 20 years. 5. Conclusions Two strategies for boreal forestry with goodwill in estate capitalization were introduced. The strategy focusing on Real Estate (RE) was financially superior to Timber Sales (TS). The feasibility of the RE requires the presence of forest land end users in the real estate market, like insurance companies or investment trusts, and the periodic boundary condition does not apply. Commercial thinnings did not enter the RE strategy in a stand-level discussion. However, they may appear in estates with a variable age structure and enable an extension of stand rotation times. Funding: This research was partially funded by Niemi Foundation. Data Availability Statement: Datasets used have been introduced in earlier papers referenced above. Conflicts of Interest: The author declares no conflict of interest. References 1. Viitala, E.-J. Stora Enson ja Metsäliitto-konsernin metsien omistusjärjestelyt vuosina 2001–2005. Metsätieteen aikakauskirja 3/2010, 239–260. 2. Mei, B. Timberland investments in the United States: a review and prospects. For. Policy Econ. 2019, 109, 101998. https://doi.org/10.1016/j.forpol.2019.101998. 3. Stora Enso finalises the restructuring of Bergvik Skog’s forests holdings. STORA ENSO OYJ INVESTOR NEWS 31 May 2019 at 12.15 EEST. https://www.storaenso.com/en/newsroom/regulatory-and-investor- releases/2019/5/stora-enso-finalises-the-restructuring-of-bergvik-skogs-forests-holdings 4. STORA ENSO SÄLJER SKOGSINNEHAV I HALLAND OM CA 940 MLN KR (Direkt). 2020-12-18 14:23. https://www.avanza.se/placera/telegram/2020/12/18/stora-enso-saljer-skogsinnehav-i-halland-om-ca-940- mln-kr.html?fbclid=IwAR3BzZw0M4FTsM-a-aE2Fszqwj5Ms-R-iqFrcc9FF5CA6VUUzhl4hH7ClmE 5. 
Tilastotietoa kiinteistökaupoista. https://khr.maanmittauslaitos.fi/tilastopalvelu/rest/API/kiinteistokauppojen- tilastopalvelu.html#t443g4_x_2020_x_Maakunta 6. Myytävien metsäkiinteistöjen määrä pomppasi touko-kesäkuussa – hinnat nousevat nyt nopeammin kuin kertaakaan vuoden 2007 jälkeen. Suomen Sijoitusmetsät Oy, STT Info. https://www.sttinfo.fi/tiedote/myytavien-metsakiinteistojen-maara-pomppasi-touko-kesakuussa-hinnat- nousevat-nyt-nopeammin-kuin-kertaakaan-vuoden-2007-jalkeen?publisherId=69818737&releaseId=69918046 7. Liljeroos, H. Hannun hintaseuranta: Metsätilakaupan sesonki alkanut. Metsälehti 11.5.2021. https://www.metsalehti.fi/artikkelit/hannun-hintaseuranta-metsatilakaupan-sesonki-alkanut/#c8b05b17 8. Liljeroos, H. Hannun hintaseuranta. Hannun hintaseuranta: Metsätilamarkkinoiden yleiskatsaus 2021 – hehtaarihinnoissa voimakasta nousua. Metsälehti 12.1.2022. https://www.metsalehti.fi/artikkelit/hannun- hintaseuranta-metsatilamarkkinoiden-yleiskatsaus-2021-hehtaarihinnoissa-voimakasta-nousua/#c8b05b17 9. Kymmenen prosentin haamuraja rikki – metsäkiinteistöjen hinnat nousivat ennätyksellisen paljon vuonna 2021. Suomen Sijoitusmetsät Oy, STT Info. https://www.sttinfo.fi/tiedote/kymmenen-prosentin-haamuraja- rikki-metsakiinteistojen-hinnat-nousivat-ennatyksellisen-paljon-vuonna- 2021?publisherId=69818737&releaseId=69929002 10. Cubbage, F.; Kanieski, B.; Rubilar, R.; , Bussoni, A.; Morales Olmos, V.; , Balmelli, G.; Mac Donagh, P.; Lord, R.;, Hernández, C.; Zhang, P.; Huang, J.; Korhonen, J.; Yao, R.; Peter Hall, P.; Del La Torre, R.; Diaz- Balteiro, L.; Carrero, O.; Monges, E.; Thi Thu, H.T.; Frey, G.; Howard, M.; Chavet, M.; Mochan, S.; Afonso Hoeflich, V.; Chudy, R.; Maass, D.; Chizmar, S.; Abt, R.; Global timber investments, 2005 to 2017. Forest Policy and Economics, 2020, 112, 102082, https://doi.org/10.1016/j.forpol.2019.102082. 11. Chudy, R.P.; Hagler, R.W. Dynamics of global roundwood prices – Cointegration analysis Forest Policy and Economics, 2020, 115, 102155. 12. Luke tilastopalvelu. https://www.luke.fi/avoin-tieto/tilastopalvelu/ Accessed Jul 17, 2021. 13. Pra, A.; Masiero, M.; Barreiro, S.; Tomé, M.; Martinez De Arano, I.; Orradre, G.; Onaindia, A.; Brotto, L.; Pettenella, D. Forest plantations in Southwestern Europe: A comparative trend analysis on investment returns, markets and policies. Forest Policy and Economics, 2019, 109, 102000. 14. U.S. Department of the Treasury. https://home.treasury.gov/ 15. European Central Bank. https://www.ecb.europa.eu/home/html/index.en.html 16. Pearse, P.H.; DISTORTIONS IN THE MARKET FOR FOREST LAND. Forestry Chronicle, 1965, 41(4), 406-418. DOI: 10.5558/tfc41406-4 17. Vrooman, D.H. An Empirical Analysis of Determinants of Land Values in the Adirondack Park. The American Journal of Economics and Sociology, 1978, 37(2), 165-177. 18. Aronsson, T.; Carlén, O. The determinants of forest land prices: an empirical analysis. Canadian Journal of Forest Research, 2000, 30(4). https://doi.org/10.1139/x99-250. 19. Snyder, S.A.; Kilgore, M.A.; Hudson, R.; Donnay. J. Determinants of forest land prices in northern Minnesota: A hedonic pricing approach. FOR. SCI. 2007, 53(1), 25–36. 20. Zhang, D.; Meng, L.; Polyakov, M. Determinants of the Prices of Bare Forestland and Premerchantable Timber Stands: A Spatial Hedonic Study. Forest Science, 2013, 59(4), 400–406. https://doi.org/10.5849/forsci.12-014 21. Korhonen, J.; Zhang, Y.; Toppinen, A. Examining timberland ownership and control strategies in the global forest sector. For. Policy Econ. 
2016, 70, 39–46. 22. Restrepo, H.; Zhang, W.; Mei, B. The time-varying role of timberland in long-term, mixed-asset portfolios under the mean conditional value-at-risk framework. Forest Policy and Economics, 2020, 113, 102136. 23. Mei, B.; Clutter, M.L. Return and information transmission of public and private timberland markets in the United States. Forest Policy and Economics 2020, 113, 102092. 24. Busby, G.M.; Binkley, C.D.; Chudy, R.P. Constructing optimal global timberland investment portfolios. Forest Policy and Economics 2020, 111, 102083. 25. Yao, W.; Mei, B.; Clutter, M.L. Pricing Timberland Assets in the United States by the Arbitrage Pricing Theory, Forest Science 2014, 60(5), 943–952. https://doi.org/10.5849/forsci.13-023 26. Metsäkiinteistökauppa jäljessä edellisvuodesta – ennusteen mukaan hinnat nousevat tänä vuonna lähes yhdeksän prosenttia. Suomen Sijoitusmetsät Oy, STT Info. https://www.sttinfo.fi/tiedote/metsakiinteistokauppa-jaljessa-edellisvuodesta-ennusteen-mukaan-hinnat-nousevat-tana-vuonna-lahes-yhdeksan-prosenttia?publisherId=69818737&releaseId=69912289 27. Dasos reports on carbon impact. https://www.dasos.fi/dasos-reports-on-carbon-impact/ 28. New Evli Impact Forest Fund I aims to mitigate climate change by achieving positive carbon effects. https://www.evli.com/en/news/new-evli-impact-forest-fund 29. Murray, B.C. Economics of Forest Carbon Sequestration as a Climate Change Mitigation Strategy. Reference Module in Earth Systems and Environmental Sciences, Encyclopedia of Energy, Natural Resource, and Environmental Economics Volume 1, 2013, Pages 41-47. 30. Sohngen, B. An Analysis of Forestry Carbon Sequestration as a Response to Climate Change. COPENHAGEN CONSENSUS ON CLIMATE, https://aede.osu.edu/sites/aede/files/publication_files/Analysis%20of%20Forestry%20Carbon.pdf 31. Li, R.; Sohngen, B.; Tian, X. Efficiency of forest carbon policies at intensive and extensive margins. American Journal of Agricultural Economics, 2021, 1–25. https://doi.org/10.1111/ajae.12281 32. Kärenlampi, P.P. Two Sets of Initial Conditions on Boreal Forest Carbon Storage Economics. PLOS Clim, 2022, 1(2), e0000008. https://doi.org/10.1371/journal.pclm.0000008 33. Kärenlampi, P.P. Capitalization and Capital Return in Boreal Carbon Forestry. Earth, 2022, 3, 204-227. https://doi.org/10.3390/earth3010014 34. Liao, X.; Zhang, Y.; Sun, C. Investments in timberland and softwood timber as parts of portfolio selection in the United States: a cointegration analysis and capital asset pricing model. For. Sci. 2009, 55, 471–479. 35. Yao, W.; Cheng, B.; Mei, B. Investor sentiment and timberland investment returns. For. Prod. J. 2016, 66, 147–154. 36. Kärenlampi, P.P. Two Manifestations of Market Premium in the Capitalization of Carbon Forest Estates. Energies 2022, 15, 3212. https://doi.org/10.3390/en15093212 37. Speidel, G. Forstliche Betreibswirtschaftslehre, 2nd ed.; Verlag Paul Parey: Hamburg, Germany, 1967, 226p. (In German) 38. Speidel, G. Planung in Forstbetrieb, 2nd ed.; Verlag Paul Parey: Hamburg, Germany, 1972; 270p. (In German) 39. Kärenlampi, P.P. State-space approach to capital return in nonlinear growth processes. Agric. Finance Rev. 2019, 79, 508–518. doi:10.1108/afr-07-2018-0055. 40. Kärenlampi, P.P. Estate-Level Economics of Carbon Storage and Sequestration.
Forests 2020, 11(6), 643; https://doi.org/10.3390/f11060643. 41. Kärenlampi, P.P. The Effect of Empirical Log Yield Observations on Carbon Storage Economics. Forests 2020, 11, 1312. 42. Kärenlampi, P.P. Diversity of Carbon Storage Economics in Fertile Boreal Spruce (Picea Abies) Estates. Sustainability 2021, 13, 560. https://www.mdpi.com/2071-1050/13/2/560 43. Kärenlampi, P.P. Capital return rate and carbon storage on forest estates of three boreal tree species. Sustainability 2021, 13(12), 6675. https://doi.org/10.3390/su13126675 44. Leslie, A.J. A REVIEW OF THE CONCEPT OF THE NORMAL FOREST. Aust. For. 1966, 30, 139–147, doi:10.1080/00049158.1966.10675407. 45. Kärenlampi, P.P. Harvesting Design by Capital Return. Forests 2019, 10, 283, doi:10.3390/f10030283. 46. Bollandsås, O.M.; Buongiorno, J.; Gobakken, T. Predicting the growth of stands of trees of mixed species and size: A matrix model for Norway. Scand. J. For. Res. 2008, 23, 167–178, doi:10.1080/02827580801995315. 47. Lintunen, J.; Laturi, J.; Uusivuori, J. How should a forest carbon rent policy be implemented? For. Policy Econ. 2016, 69, 31–39. doi:10.1016/j.forpol.2016.04.005.
ai_researcher
1
Cross-domain_NER_with_Generated_Task-Oriented_Knowledge_An_Empirical_Study_from_Information_Density_Perspective.pdf
arXiv:1712.06840v1 [cs.DM] 19 Dec 2017

On Fan-Crossing Graphs

Franz J. Brandenburg

94030 Passau, Germany [email protected]

Abstract. A fan is a set of edges with a single common endpoint. A graph is fan-crossing if it admits a drawing in the plane so that each edge is crossed by edges of a fan. It is fan-planar if, in addition, the common endpoint is on the same side of the crossed edge. A graph is adjacency-crossing if it admits a drawing so that crossing edges are adjacent. Then it excludes independent crossings which are crossings by edges with no common endpoint. Adjacency-crossing allows triangle-crossings in which an edge crosses the edges of a triangle, which is excluded in fan-crossing graphs. We show that every adjacency-crossing graph is fan-crossing. Thus triangle-crossings can be avoided. On the other hand, there are fan-crossing graphs that are not fan-planar, whereas for every fan-crossing graph there is a fan-planar graph on the same set of vertices and with the same number of edges. Hence, fan-crossing and fan-planar graphs are different, but they do not differ in their density with at most 5n − 10 edges for graphs of size n.

1 Introduction

Graphs with or without special patterns for edge crossings are an important topic in Topological Graph Theory, Graph Drawing, and Computational Geometry. Particular patterns are no crossings, single crossings, fans, independent edges, or no three pairwise crossing edges. A fan is a set of edges with a single common endpoint. In complement, edges are independent if they do not share a common endpoint. Important graph classes have been defined in this way, including the planar, 1-planar [12,13], fan-planar [4,5,11], fan-crossing free [9], and quasi-planar graphs [3]. A first order logic definition of these and other graph classes is given in [6]. These definitions are motivated by the need for classes of non-planar graphs from real world applications, and a negative correlation between edge crossings and the readability of graph drawings by human users. The aforementioned graph classes aim to meet both requirements. We consider undirected graphs G = (V, E) with finite sets of vertices V and edges E that are simple both in a graph theoretic and in a topological sense. Thus we do not admit multiple edges and self-loops, and we exclude multiple crossings of two edges and crossings among adjacent edges. A drawing of a graph G is a mapping of G into the plane so that the vertices are mapped to distinct points and each edge is mapped to a Jordan arc between the endpoints. Two edges cross if their Jordan arcs intersect in a point other than an endpoint. Crossings subdivide an edge into uncrossed pieces, called edge segments, whose endpoints are vertices or crossing points. An edge is uncrossed if and only if it consists of a single edge segment. A drawn graph is called a topological graph. In other works, a topological graph is called an embedding which is the class of topologically equivalent drawings. An embedding defines a rotation system which is the cyclic sequence of edges incident to each vertex. A drawn graph partitions the plane into topologically connected regions, called faces. The unbounded region is called the outer face. The boundary of each face consists of a cyclic sequence of edge segments. It is commonly specified by the sequence of vertices and crossing points of the edge segments. The subgraph of a graph G induced by a subset U of vertices is denoted by G[U].
It inherits its embedding from an embedding of G, from which all vertices not in U and all edges with at most one endpoint in U are removed.

Fig. 1. (a) A fan-crossing and (b) an independent crossing or fan-crossing free

An edge e has a fan-crossing if the crossing edges form a fan, as in Fig. 1(a), and an independent crossing if the crossing edges are independent, see Fig. 1(b). Fan-crossings are also known as radial (k, 1) grid crossings and independent crossings as grid crossings [1]. Independent crossings are excluded if and only if adjacency-crossings are allowed in which two edges are adjacent if they both cross an edge [6]. Fan-planar graphs were introduced by Kaufmann and Ueckerdt [11], who imposed a special restriction, called configuration II. It is shown in Fig. 2(a). Let e, f and g be three edges in a drawing so that e is crossed by f and g, and f and g share a common vertex t. Then they form configuration II if one endpoint of e is inside a cycle through t with segments of e, f and g, and the other endpoint of e is outside this cycle. If e = {u, v} is oriented from u (left) to v (right) and f and g are oriented away from t, then f and g cross e from different directions. Configuration II admits triangle-crossings in which an edge crosses the edges of a triangle, see Fig. 2(b). Observe that a triangle-crossing is the only configuration in which an edge is crossed by edges that do not form a fan and that are not independent. A graph is fan-crossing free if it admits a drawing without fan-crossings [9]. A graph is fan-crossing if it admits a drawing in which each crossing is a fan-crossing, and adjacency-crossing if it can be drawn so that each edge is crossed by edges that are adjacent. Then independent crossings are excluded.

Fig. 2. (a) Configuration II in which edge e = {u, v} is crossed by edges {t, x} and {t, y} and x and y are on opposite sides of e and (b) edge e = {u, v} crosses a triangle. The shaded regions represent subgraphs which shall prohibit another routing of e. Similar regions could be added to (a), as in Fig. 12.

As stated in [6], adjacency-crossing is complementary to independent crossing, but the graph classes are not complementary and both properly include the 1-planar graphs. A graph is fan-planar if it avoids independent crossings and configuration II [11]. Observe the subtle differences between adjacency-crossing, fan-crossing, and fan-planar graphs, which each exclude independent crossings, and in addition exclude triangle-crossings and configuration II, respectively. Kaufmann and Ueckerdt [11] observed that configuration II cannot occur in straight-line drawings, so that every straight-line adjacency-crossing drawing is fan-planar. They proved that fan-planar graphs of size n have at most 5n − 10 edges and posed the density of adjacency-crossing graphs as an open problem. The density defines an upper bound a·n on the number of edges in graphs of size n. We show that triangle-crossings can be avoided by an edge rerouting, and that configuration II can be restricted to a special case. Moreover, the allowance or exclusion of configuration II has no impact on the density, which answers the above question. In particular, we prove the following:

1. Every adjacency-crossing graph is fan-crossing. Thus triangle-crossings can be avoided.
2. There are fan-crossing graphs that are not fan-planar. Thus configuration II is essential.
3.
For every fan-crossing graph G there is a fan-planar graph G′ on the same set of vertices and with (at least) the same number of edges. Thus fan-crossing graphs of size n have at most 5n − 10 edges.

We prove that triangle-crossings can be avoided by an edge rerouting in Section 2 and study configuration II in Section 3. We conclude in Section 4 with some open problems on fan-crossing graphs.

2 Triangle-Crossings

In this section, all embeddings E(G) are adjacency-crossing or equivalently they exclude independent crossings. We consider triangle-crossings and show that they can be avoided by an edge rerouting. A rerouted edge is denoted by ˜e if e is the original one. More formally, we transform an adjacency-crossing embedding E(G) into an adjacency-crossing embedding ˜E(G) which differs from E(G) in the embedding of the rerouted edges such that ˜e does not cross a particular triangle if e crosses that triangle. For convenience, we assume that triangle-crossings are in a standard configuration, in which a triangle ∆ = (a, b, c) is crossed by edges e1, . . . , ek for some k ≥ 1 that cross each edge of ∆. We call each ei a triangle-crossing edge of ∆. These edges are incident to a common vertex u if k ≥ 2. We assume that a triangle-crossing edge e = {u, v} crosses {a, c}, {b, c} and {a, b} in this order and that u is outside ∆. Then v must be inside ∆. All other cases are similar exchanging inside and outside and the order in which the edges of ∆ are crossed. We need some further notation. Let fan(v) denote a subset of edges incident to vertex v that cross a particular edge. This is a generic definition. If the crossed edge is given, then fan(v) can be retrieved from the embedding E(G). In general, fan(v) does not contain all edges incident to v. A sector is a subsequence of edges of fan(v) properly between two edges {v, s} and {v, t} in clockwise order. An edge e is covered by a vertex v if e is crossed by at least two edges incident to v so that fan(v) has at least two elements. Let cover(v) denote the set of edges covered by v. Note that uncrossed edges and edges that are crossed only once are not covered. If an edge e is crossed by an edge g = {u, v}, then e is a candidate for cover(u) or cover(v) and e ∉ cover(w) for any other vertex w ≠ u, v except if e crosses a triangle. In fact, an edge e = {u, v} is triangle-crossing if and only if {e} = cover(x) ∩ cover(y) for vertices x ≠ y. To see this, observe that e ∈ cover(x) for x = a, b, c if e crosses a triangle ∆ = (a, b, c). Conversely, if e is crossed by edges {a, w1}, {a, w2} and {b, w3} with a ≠ b and w1 ≠ w2, then w1 = w3 and w2 = b (up to renaming) if there are no independent crossings. Triangle crossings are special. If an edge e crosses a triangle ∆, then e cannot be crossed by any edge other than the edges of ∆. In particular, e cannot cross another triangle or another triangle-crossing edge. But an edge may be part of two triangle-crossings, as a common edge of two crossed triangles, as shown in Fig. 3(a), or as a triangle-crossing edge of one triangle and an edge of another triangle, as shown in Fig. 3(b), and both configurations can be combined. A particular example is K5, which has five embeddings [10], see Fig. 4. The one of Fig. 4(e) has a triangle-crossing. If it is a part of an adjacency-crossing embedding, then we show that it can be transformed into the embedding of Fig. 4(c) by rerouting an edge of the crossed triangle.
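To keep this crossing taxonomy straight, here is a minimal sketch that classifies the set of edges crossing a given edge as a fan, as independent, or as neither (the triangle-crossing pattern just described). The data layout and function name are our own illustrative choices, not taken from the paper.

```python
from itertools import combinations

# Each edge is a frozenset of its two endpoints; crossing_edges lists the
# edges that cross edge e in some fixed drawing (illustrative example data).
def classify(e, crossing_edges):
    """Classify the crossing pattern on edge e: 'fan', 'independent', or 'other'."""
    if len(crossing_edges) <= 1:
        return "fan"                       # a single crossing edge is a trivial fan
    common = frozenset.intersection(*crossing_edges)
    if common:
        return "fan"                       # all crossing edges share an endpoint
    if all(not (f & g) for f, g in combinations(crossing_edges, 2)):
        return "independent"               # pairwise disjoint crossing edges
    return "other"                         # e.g. the edges of a crossed triangle

e = frozenset({"u", "v"})
fan = [frozenset({"t", "x"}), frozenset({"t", "y"})]          # a fan at t
triangle = [frozenset(p) for p in (("a", "b"), ("b", "c"), ("a", "c"))]
print(classify(e, fan))        # -> fan
print(classify(e, triangle))   # -> other (a triangle-crossing)
```

The "other" branch can only fire for triangle-crossings, since, as observed above, a triangle-crossing is the only configuration in which the crossing edges neither form a fan nor are independent.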
Fig. 3. Two crossed triangles sharing (a) an edge or (b) an edge and a triangle-crossing edge.

Fig. 4. All non-isomorphic embeddings of K5 [10] with two drawings. Only (a) is 1-planar and fan-crossing free, (b), (c), and (d) are fan-planar and (e) is adjacency-crossing and has a triangle crossing with the triangle-crossing edge drawn red. Our rerouting transforms (e) into (c) and reroutes and straightens the curved edge.

In return, the edges of ∆ can only be crossed by edges of fan(u) or fan(v) if e = {u, v} is a triangle-crossing edge of ∆. They are covered by u if there are at least two triangle-crossing edges incident to u. In addition, there may be edges that cross only one or two edges of ∆. These are incident to u or v and they are incident to u if there are at least two triangle-crossing edges incident to u. We assume a standard configuration and classify crossing edges by the sequence of crossed edges, as stated in Table 1. Suppose that u is outside ∆. Then the other endpoint of g = {u, w} is inside ∆ if g is a needle, a hook, or a triangle-crossing edge, and w is outside ∆ if g is an arrow or a sickle, see Fig. 5(a). An a-arrow and an a-sickle are covered by a, since they are crossed by at least two edges of fan(a). Similarly, a c-arrow and a c-sickle are covered by c. A needle g may be covered by a or by c and there is a preference for a (c) if g is before (after) any triangle-crossing edge according to the order of crossing points on {a, c} from a to c. Otherwise, there is an instance of configuration II, as shown in Fig. 9(a). Accordingly, an a-hook may be covered by a or by b and the crossing edges are on or inside ∆ if it is covered by b, since the triangle-crossing edges prevent edges from b outside ∆ that cross a-hooks.
If there are at least two triangle-crossing edges, then there is a vertex u abeicejubuca 8 F. J. Brandenburg so that X = f an(u). By our assumption, u is outside ∆ and {a, c} is crossed first. All other cases are similar. Classify the edges according to Table 1. Choose a clockwise triangle-crossing edge ei and a counterclockwise triangle-crossing edge ej, and assume that ei precedes ej in clockwise order at u. The other case is similar. Partition the set of needles so that N1, N2 and N3 are the sets of needles before ei, between ei and ej, and after ej in clockwise order at u. Then N3 < ej < N2 < ei < N1 according to the order (of the crossing points) on {a, c}. Accordingly, partition the set of counterclockwise triangle-crossing edges into CCl and CCr, where CCl comprises the edges before ei and CCr = CC − CCl is the set of edges after ei, and partition the set C into the sets of edges to the left and right of ej. Then edges of ∆ are crossed by the edges of X = N1 ∪N2 ∪N3 ∪Ha ∪Hc ∪Aa ∪ Ac ∪ La ∪ Lc ∪ C ∪ CC. Some of these sets may be empty. The edges from these sets are unordered at u. In particular, edges of C and CC may alternate, needles may appear anywhere, whereas c-hooks and c-sickles precede triangle-crossing edges which precede a-hooks and a-sickles. We sort the edges of X in clockwise order at u and reroute them along ei and ej in the following order: Sc < N1 < CCl < Hc < Ac < CCr < N2 < Cr < Aa < Ha < Cl < N3 < Sa. Two edges in a set are ordered by the crossing points with edges of ∆ so that adjacent edges do not cross one another. The edges of Sc and N1 are routed along ei from u to the crossing point of ei and {a, c}, where they make a left turn and follow {a, c}. Then the rerouted edge ˜g follows the original g so that ˜g crosses {a, c} if g is a needle. An edge ˜g first follows ei to the crossing point with {b, c} if g ∈ Hc ∪ CCl ∪ Ac ∪ CCr, then it follows {b, c} and finally g. If g ∈ Hc ∪ CCl, then ˜g makes a left turn and a right turn for edges in CCr. Accordingly, edges ˜g make a left or right turn and cross {b, c} if g is an arrow. An edge ˜g may follow ei or ej from u to {a, c} or adopts the route of g if g ∈ N2 is a needle between the chosen triangle-crossing edges ei and ej. Similarly, edges of Cr, Aa, Cl, N3 and Sa are routed along ej from u to the crossing point with {a, b} and {a, c}, respectively, then along one of these edges, and finally along the original edge. For an illustration see Fig. 5. The rerouting saves many crossings. Only arrows cross two edges of ∆, and needles, hooks and triangle-crossing edges cross {a, c}. In fact, each rerouted edge is crossed by a subset of edges crossing the original one, except if the edge is a hook. This is due to the fact that triangle-crossing edges are only crossed by the edges of the triangle. Hence, there are (uncrossed) segments from u to {a, c} and from {a, c} to {b, c} and {a, b}, respectively. In the final part, ˜g coincides with g and adopts the edge crossings from g. In consequence, ˜g crosses only {a, c} if g is a triangle-crossing edge. If g is a c-hook, then the crossing with edge {b, c} is replaced by a crossing with {a, c} and crossings with edges of f an(c) outside ∆ are avoided. The replacement is feasible. A c-hook cannot be covered by b, since a further crossing edge {b, d} must cross a clockwise triangle-crossing edge, which is excluded. Hence, ˜g is crossed by edges of f an(c), and each edge h crossing ˜g is in f an(u). 
Similarly, edge {a, b} can be replaced by {a, c} at a- On Fan-Crossing Graphs 9 hooks. The other rerouted edges adopt the crossings from the final part, so that new triangle-crossings cannot be introduced. Topological simplicity is preserved, since the bundle of edges is well-ordered, and two edges cross at most once, since there are segments from u to {a, c} and between {a, c} and {b, c} and {a, b}, respectively. In consequence, triangle-crossings of ∆ are avoided, there are no new triangle- (cid:117)(cid:116) crossings, and the obtained embedding is adjacency-crossing. The rerouting technique of Lemma 1 widely changes the order of the edges of f an(u) and it avoids many crossings. It is possible to restrict the rerouting to triangle-crossing edges so that they cross only a single edge of the triangle. Therefore consider two consecutive crossing points of clockwise triangle crossing edges or c-arrows and {b, c}, and reroute the counterclockwise crossing edges crossing {b, c} in the sector along one of the bounding edges. Accordingly, pro- ceed with clockwise triangle-crossing edges and sectors of {a, b}. Thereby hooks, sickles and arrows remain unchanged. From now on, we assume that all triangle-crossing edges cross clockwise. We wish to reroute them along an a-arrow, a-hook or a-sickle if such an edge exists. This is doable, but we must take a detour if the edge is covered by b or c. Lemma 2. Suppose there is an adjacency crossing embedding E(G) and a trian- gle ∆ is crossed by clockwise triangle-crossing edges. If there are an a-hook, an a-arrow or an a-sickle, then some edges are rerouted so that ˜g crosses only one edge of ∆ if g is a triangle-crossing edge of ∆, and there are no new triangle- crossings. Proof. Our target is edge {a, b} of ∆ = (a, b, c), where the crossing edges are ordered from a to the left to b. Then a-hooks and a-sickles are to the left of all triangle-crossing edges, whereas a-arrows are interspersed. Edge {a, b} is covered by u. Let f = {u, w} be the rightmost edge among all a-hooks, a-arrows, and a- sickles. First, if f is an a-hook, then reroute all edges g crossing {a, b} to the right of f in a bundle from u to {a, b} along the outside of f , see Fig. 6(a). Since f is rightmost, edge g is triangle-crossing. Then ˜g makes a right turn and follows {a, b} and finally it follows g. Thereby, ˜g crosses {a, b}. Let F be the set of edges in the sector between {a, b} and {a, c} that cross f , i.e., outside ∆. Then ˜g is crossed by the edges of F and also by {a, b}. Each crossing edge is in f an(a) and is uncovered or covered by u. It cannot be covered by the other endpoint w of f , since w is inside ∆ and any edge {w, w(cid:48)} crossing an edge {a, d} ∈ F must cross {a, b}, {a, c} or a triangle-crossing edge, which is excluded, since it enforces an independent crossing. Thus ˜g is only crossed by edges of f an(a), and ˜g can be added to the fan of edges of f an(u) that cross such edges. Hence, all introduced crossings are fan-crossings, as Fig. 6(b) shows. We would like to proceed accordingly if f is an a-sickle and reroute triangle- crossing edges along the outside of f from u to {a, b}. However, f may be crossed 10 F. J. Brandenburg by edges {a, d} that are covered by w, as shown in Fig. 7(a). Then a rerouted edge along f introduces an independent crossing. We take another path. Let the a-sickle f = {u, w} cross {a, b} in p1 and {a, c} in p2, see Fig. 7(a). Let H be the set of edges that cross {a, c} between the first triangle-crossing edge e1 and f including f . 
Now we reroute all edges h ∈ H and all triangle- crossing edges g so that they first follow e1 from u to {a, c}, then {a, c}, where the edges ˜h branch off and and follow h. If g is a triangle-crossing edge, then ˜g crosses {a, c} at p2, and then follows f, {a, b}, and finally g, see Fig. 7(b). The rerouted edges are uncrossed from u to their crossing point with {a, c}. Hence, each edge ˜h is crossed by a subset of edges that cross h for h ∈ H. Let F be the set of edges crossing f in the sector between p1 and p2. Since f is covered by a, these edges are incident to a. Now ˜g is crossed by {a, c} and by the edges of F if g is triangle-crossing, so that ˜g is crossed by edges of f an(a). Each edge h ∈ F is in f an(u), since it crosses f = {u, w} and it cannot be covered by w. Otherwise, it must be crossed by another edge {w, w(cid:48)}. However, w is outside ∆ and {w, w(cid:48)} must cross {a, c} or {a, b} or a triangle-crossing edge, which introduces an independent crossing. Hence, ˜g can be added to the fan of edges at u that cross h so that there is a fan-crossing. We proceed similarly if f = {u, w} is an a-arrow, see Fig. 8. Reroute all edges g that cross {a, c} to the right of the leftmost triangle-crossing edge e1 including e1. Then g is triangle-crossing or an a-arrow. Route ˜g from u to {a, c} along the first edge that crosses {a, c} and is covered by c, then along {a, c} to the crossing point with f , then along f and finally along g. Then there is a segment from u to the crossing with {a, c}. In the sector between {a, c} and {a, b}, ˜g is crossed by the edges of f an(a) that cross f in this sector. If g is a triangle-crossing edge, then ˜g is not crossed by further edges, whereas ˜g adopts the crossings with further edges incident to a outside ∆ if g is an a-arrow. Now, ˜g is crossed by a subset of edges that cross g if g is an a-arrow, since f is the rightmost a-arrow. If g is a triangle-crossing edge, then the edges crossing ˜g are incident to a, and each crossing edge is incident to u. It cannot be incident to or covered by the other endpoint e of f , since w is outside ∆ and the edges crossing ˜g are inside, and and no further edge {w, w(cid:48)} with w(cid:48) (cid:54)= u can cross {a, b}, {a, c}, or a triangle-crossing edge. Hence, there is a fan-crossing, ˜g crosses only one edge of ∆ if g is triangle-crossing, and there are no new triangle- (cid:117)(cid:116) crossings. The existence of an a-hook, a-sickle or a-arrow implies that edge {a, b} is covered by u. By symmetry, we can reroute all triangle-crossing edges, if there are a-hooks, a-sickles or a-arrows from the viewpoint of vertex v inside ∆. Then {a, c} is covered by v. For example, an arrow from v first crosses {a, b} and then {b, c} so that vertex b is enclosed and triangle-crossing edges are rerouted along the outer side of the arrow. It remains to consider the case without such edges. Then there are only triangle-crossing edges, needles (from u and from v), c-hooks, c-arrows, and c-sickles. Lemma 3. Suppose there is an adjacency crossing embedding E(G) and a tri- angle ∆ = (a, b, c) is crossed by clockwise triangle-crossing edges. If there are no On Fan-Crossing Graphs 11 (a) (b) Fig. 6. (a) An a-hook (drawn blue and dashed) and triangle-crossing edges which (b) are rerouted along the a-hook. (a) (b) Fig. 7. An a-sickle and triangle-crossing edges (a) before and (b) after the edge rerout- ing. (a) (b) Fig. 8. An a-arrow and triangle-crossing edges (a) before and (b) after the edge rerout- ing. 
a-hooks, a-arrows and a-sickles, and edges {a, c} and {b, c} are not covered by v, then edge ℓ = {a, b} can be rerouted so that ˜ℓ does not cross the rerouted edge, and there are no new triangle-crossings. Similarly, reroute {a, c} if {b, c} is not covered by u and there are no a-hooks, a-arrows and a-sickles from the viewpoint of v.

Proof. Besides one or more clockwise triangle-crossing edges there are only needles, c-hooks, c-arrows and c-sickles. We cannot route the triangle-crossing edges along the edges of ∆, since vertices a and b may be incident to "fat edges", which are explained in Section 3 and prevent a bypass. Therefore, we reroute {a, b}. Similarly, we reroute {a, c} if {a, b} and {b, c} are not covered by u, and both ways may be possible.

If {u, b} is an edge of G, then it crosses {a, c} and we take f = {u, b}; otherwise let f be the first edge crossing both {a, c} in p1 and {b, c} in p2. Then f is covered by c and is a triangle-crossing edge or a c-arrow. There is a segment from u to p1, from p1 to p2, and from p2 to b. Other edges incident to c cannot cross f, since f is triangle-crossing or is protected from c by a triangle-crossing edge, and the final part along {b, c} is uncrossed, because f is the first edge crossing {b, c} from b. Reroute ℓ = {a, b} so that ˜ℓ first follows {a, c} from a to p1, then f to p2, and finally {b, c} to b. If f = {u, b}, then p2 and b coincide.

Let N be the set of edges crossing {a, c} in the segment from a to p1. Then N consists of needles, so that N = Nc ∪ Na, where a needle n ∈ Nc is covered by c and a needle n ∈ Na is uncovered or covered by a. The needles in Nc cross {a, c} before the needles of Na. In fact, if an edge {x, y} other than {a, c} crosses a needle n ∈ N, then {x, y} is outside ∆ if n ∈ Nc. If {x, y} crosses n inside ∆, then n ∈ Na, since further edges incident to c cannot enter the interior of ∆ below the triangle-crossing edges.

Now ˜ℓ is crossed by the edges of N. Note that there are no crossings of ˜ℓ in the second part along f and in the third part along {b, c}. Since the edges of N are incident to a, ˜ℓ is crossed by edges of fan(a). In return, consider an edge h crossing some needle n = {u, w} ∈ N. Then n may be covered by a or by c, so that h = {a, d} or h = {c, d}. If h is not covered by c, we are done, since we can add ˜ℓ = {a, b} to the fan of edges of fan(a) crossing n. However, there is a conflict if n is covered by c, as shown in Fig. 9(a). Then there are needles {u, w1}, . . . , {u, ws} and edges {c, z1}, . . . , {c, zt} for some s, t ≥ 1 so that each {u, wi} is crossed by some {c, zj}. We resolve the conflict by rerouting the needles in advance, so that needles of Nc are no longer covered by c, see Fig. 9(b). Reroute each needle ˜n from u to p1 along f, then along {a, c}, and finally along n. Then there is a segment from u to the crossing point with {a, c}, so that ˜n is only crossed by a subset of the edges that cross n. Thereafter, there are no needles covered by c, and we are done. ⊓⊔

We can now show that triangle-crossings can be avoided.

Theorem 1. Every adjacency-crossing graph is fan-crossing.

Fig. 9. A triangle-crossing (a) with a needle covered by vertex c that introduces configuration II and (b) an edge rerouting that avoids triangle-crossing edges.

Proof.
Let E(G) be an adjacency-crossing embedding of a graph G and suppose that there are triangle-crossings. We remove them one after another and first consider all triangles with triangle-crossing edges in both directions (Lemma 1), then the triangles with a-hooks, a-arrows or a-sickles (Lemma 2), and finally those without such edges (Lemma 3). Each step removes a crossed triangle and does not introduce new ones. Hence, the resulting embedding is fan-crossing. ⊓⊔

3 Fan-Crossing and Fan-Planar Graphs

In this section, we assume that embeddings are fan-crossing, so that independent crossings and triangle-crossings are excluded. Fan-planar embeddings also exclude configuration II [11]. An instance of configuration II consists of the fan-crossing embedding of a subgraph C induced by the vertices of an edge e = {u, v} and of all edges {t, w} crossing e, where e is crossed from both sides, as shown in Fig. 2(a). We call e the base and its crossing edges the fan of C, denoted fan(C). Since e is crossed from both sides, it is crossed at least twice, and therefore it is covered by t. It may be crossed by more than two edges. Hence, an edge is the base of at most one configuration, but a base may be in the fan of another configuration. Each edge g of fan(C) is uncovered or is covered by exactly one of u and v. It may cross several base edges, so that it is part of several configurations. An edge of fan(C) is said to be straight if it crosses e from the left and curved if it crosses e from the right. Then an instance of configuration II has at least a straight and a curved edge. Moreover, exactly one of u and v is inside a cycle with edge segments of a curved edge, the base, and a straight edge. For convenience, we assume that u is inside the cycle and curved edges are left curves. Right curves enclose v, and both left and right curves are possible. However, if there are left and right curves, then curves in one direction can be rerouted.

For convenience, we augment the embedding and assume that for every instance C of configuration II there are edges {t, u} and {t, v}. If these edges do not exist, they can be added. Therefore, route {u, t} along the first left curve f from u to the first crossing point with an edge g of fan(u) and then along g. Then f is uncovered or covered by u and {t, u} is uncrossed, or f is covered by v and {t, u} is covered by v or is uncovered. Accordingly, {t, v} follows the rightmost edge crossing e and the first crossed edge of fan(v). The case with right curves is similar. Hence, we can assume that there is a triangle ∆ = (t, u, v) associated with C.

There are some cases in which configuration II can be avoided by an edge rerouting. A special one has been used in Lemma 3, in which the straight edge is crossed by a triangle-crossing edge. However, there is a case in which configuration II is unavoidable.

Lemma 4. If a straight edge s of an instance C of configuration II is uncovered or is covered by u, then the left curves g to the left of s can be rerouted so that ˜g does not cross the base. The edge rerouting does not introduce new instances of configuration II.

Proof. We reroute each edge g to the left of s so that ˜g first follows s from t to the crossing point with the first edge f of fan(u) that crosses both g and s. Then ˜g follows f and finally g. If g is a straight edge, then f = {u, v}, which is crossed. See Fig. 10 for an illustration.
If g is a left curve, then ˜g is only crossed by the edges of fan(u) that cross s in the sector between {u, t} and f, and by the edges that cross g in the sector from f to the endpoint. All edges are in fan(u), and {u, v} is not crossed by ˜g. Each edge h that is crossed by ˜g is crossed only once, since f is the first edge crossing g and s. If h ∈ fan(u) is crossed by ˜g and g and h do not cross, then h crosses s and h is a straight edge for ˜g. If there is a curved edge {u, w} crossing ˜g, then {u, w} is also a curved edge for s. Hence, ˜g can be added to that instance of configuration II. If g is a straight edge, then ˜g is crossed by a subset of the edges that cross g, since each edge of fan(u) crossing s in the sector between {u, t} and {u, v} must cross g. Hence there are no more edge crossings and no new instances of configuration II. ⊓⊔

In consequence, we can remove instances of configuration II in which there are left curves, right curves and straight edges, since Lemma 4 either applies to the left or to the right curves. Lemma 4 cannot be used if left curves are to the right of straight edges, since the left curves may be covered by v and the straight edges by u. Then configuration II may be unavoidable, using a construction similar to the one of Theorem 2.

A left curve g = {t, x} is semi-covered by u if it is only crossed by an edge {u, w} in the sector between {u, t} and {u, v}. Thus the crossing edge is inside the triangle ∆ = (t, u, v). Accordingly, a straight edge h = {t, y} is semi-covered by v if each edge {v, w} with w ≠ u crosses h in the sector between {v, t} and {v, u}, i.e., outside ∆. A semi-covered edge is covered, but not conversely. A covered left curve that is not semi-covered is crossed by edges of fan(u) in the sector between {t, v} and {t, u} in clockwise order, i.e., outside the triangle (t, u, v). Similarly, a semi-covered straight edge may be crossed by edges of fan(v) inside the triangle. Thus a semi-covered left curve consists of a segment from u to the crossing with {u, v}, and a semi-covered straight edge is uncrossed inside ∆. These segments are good for routing other edges.

Fig. 10. An instance of configuration II with (a) a straight edge s covered by u and left curves to its left and (b) rerouting the edges crossing {u, v} to the left of s.

Lemma 5. If there is a semi-covered straight (curved) edge, then all curved (straight) edges can be rerouted such that they do not cross the base, so that configuration II is avoided.

Proof. We proceed as in Lemmas 1 and 2 and reroute all straight and curved edges in a bundle along the semi-covered edge f from t to the base {u, v}, where they make a left or right turn, follow the base, and finally follow their original course. If f is straight (curved), then the curved (straight) edges do not cross the base. Each rerouted edge ˜g is only crossed by a subset of the edges that cross g, since the first part of ˜g is uncrossed until it meets g. ⊓⊔

Next, we construct a graph M in which configuration II is unavoidable. Graph M has fat and ordinary edges. A fat edge consists of K7. In fan-crossing graphs, a fat edge plays the role of an edge in planar graphs: it is impermeable to any other fat or ordinary edge. This observation is due to Binucci et al. [5], who proved the following:

Lemma 6. For every fan-crossing embedding of K7 and every pair of vertices u and v there is a path of segments in which at least one endpoint is a crossing point.
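As a brief operational aside (ours, not part of the paper), the fan-crossing condition manipulated throughout these proofs, namely that all edges crossing a given edge share a common endpoint, can be checked mechanically for a finite crossing description; the dictionary encoding below is an assumption of this sketch.

```python
# Illustrative aside, not part of the paper's argument: the fan-crossing
# condition requires that all edges crossing a given edge share a common
# endpoint. The dictionary encoding of an embedding's crossings is an
# assumption made for this sketch.
def is_fan_crossing(crossings):
    """crossings: maps each edge (a 2-tuple of vertices) to the list of
    edges that cross it."""
    for edge, crossers in crossings.items():
        if len(crossers) < 2:
            continue                      # a single crossing is always a fan
        common = set(crossers[0])
        for other in crossers[1:]:
            common &= set(other)
        if not common:
            return False                  # independent or triangle-crossing
    return True

# A fan at u: both crossing edges of (a, b) are incident to u.
assert is_fan_crossing({("a", "b"): [("u", "x"), ("u", "y")]})
# A triangle-crossing edge g crosses all three edges of a triangle (a, b, c);
# these are pairwise adjacent but share no common vertex, so the check fails.
assert not is_fan_crossing({("u", "g"): [("a", "b"), ("b", "c"), ("a", "c")]})
```

The second example is exactly the gap between adjacency-crossing and fan-crossing that Theorem 1 closes: pairwise adjacency of the crossing edges does not force a common endpoint when a triangle is crossed.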
Thus, by Lemma 6, each pair of vertices is connected if the uncrossed edges are removed. There are (at least) three fan-crossing embeddings of K7 with K5 as in Figs. 4(a–c) and two vertices in the outer face, see Fig. 11. The embeddings in Figs. 4(d) and 4(e) cannot be extended to a fan-crossing embedding of K7 by adding two vertices in the outer face.

Fig. 11. Different fan-crossing embeddings of K7 that are obtained from different embeddings of K5 by adding two vertices in the outer face.

Theorem 2. There are fan-crossing graphs that are not fan-planar. In other words, configuration II is unavoidable.

Proof. Consider graph M from Fig. 12 with fat edges representing K7 and ordinary ones. Up to the embedding of the fat edges, graph M has a unique fan-crossing embedding. This is due to the following fact. There is a fixed outer frame consisting of two 5-cycles with vertices U = {t′, v′, y′, a′, b′, t, v, y, a, b} and fat edges. If fat edges are contracted to edges or regarded as such, this subgraph is planar and 3-connected, and as such has a unique planar embedding. By a similar reasoning, M[U] has a fixed fan-crossing embedding up to the embeddings of K7. There are two disjoint 5-cycles, since fat edges do not admit a penetration by any other edge. Hence, the edges {t, y} and {b, v} must be routed inside a face of the embedding of M[U], and they cross.

Consider the subgraph M[t, s, u, w, x, z] restricted to fat edges. Since vertex t is in the outer frame, it admits four fan-crossing embeddings with outer face (t, u, x, w, z), (t, u, x, z), (t, u, s), and (t, s, z), respectively. But the edges {u, a}, {u, b}, {v, w} and {v, z} exclude the latter three embeddings, since the edges on the outer cycle are fat edges and do not admit any penetration by another edge. Edge {u, a} cannot cross {t, y}, since the latter is crossed by {v, z}. Hence, {t, y} is crossed by {w, v} and {z, v}. Finally, edge {t, x} must cross {u, w}. It cannot cross {v, z} without introducing an independent crossing. Hence, it must cross {u, a}, {u, b}, {u, v} and {u, w}. Modulo the embeddings of K7, every fan-crossing embedding is as shown in Fig. 12, in which {u, v} is crossed by {t, x} from the right and by {t, y} from the left, and thus is configuration II. Hence, graph M is fan-crossing and not fan-planar. ⊓⊔

Theorems 1 and 2 solve a problem of my recent paper on beyond-planar graphs [6]. Let FAN-PLANAR, FAN-CROSSING, and ADJ-CROSSING denote the classes of fan-planar, fan-crossing, and adjacency-crossing graphs. Then Theorems 1 and 2 show:

Fig. 12. Graph M with fat edges representing K7 and an unavoidable configuration II.

Corollary 1. FAN-PLANAR ⊂ FAN-CROSSING = ADJ-CROSSING.

Kaufmann and Ueckerdt [11] have shown that fan-planar graphs of size n have at most 5n − 10 edges, and they posed the density of fan-crossing and adjacency-crossing graphs as an open problem.

Theorem 3. For every adjacency-crossing graph G there is a fan-planar graph G′ on the same set of vertices and with the same number of edges.

Proof. By Theorem 1 we can restrict ourselves to fan-crossing graphs. Let E(G) be a fan-crossing embedding of G and suppose there is an instance of configuration II in which the base {u, v} is crossed by {t, x} from the right and by {t, y} from the left, or vice-versa. Augment E(G) and add edges {u, w} if they are fan-crossing and do not cross both {t, x} and {t, y}, and similarly, add {v, w}.
Consider the cyclic order of edges or neighbors of u and v starting at {u, v} in clockwise order. Let a and b be the vertices encountered first. Vertices a and b exist, since a precedes x and b precedes y, where x = a or b = y are possible. Then a and b are both incident to both u and v, and there are two faces f1 and f2 containing a common segment of {u, v} and a and b, respectively, on either side of {u, v}. Otherwise, further edges can be added that are routed close to {u, v} and are crossed either by edges of fan(t) that are covered by u or by v.

We claim that there is no edge {a, b} in E(G). To see this, observe that the base is covered by t, so that {a, b} cannot cross {u, v}. Note that if x = a and b = y and {a, b} crosses {u, v}, then {u, v} is a triangle-crossing edge of the triangle (t, a, b), which is excluded. Edge {a, b} crosses neither {t, x} nor {t, y}. If a, b are distinct from x, y, then there is an independent crossing of {t, x} and {t, y}, respectively, by {a, b} and {u, v}. If a = x, then {t, x} and {x, b} are adjacent and do not cross, and {x, b} and {u, v} independently cross {t, y} if b ≠ y, and for b = y, {x, y} and {t, y} cannot cross as adjacent edges.

However, after a removal of the base {u, v}, vertices a and b are in a common face and can be connected by an uncrossed edge {a, b}, which clearly cannot be part of another instance of configuration II. Hence, we can successively remove all instances of configuration II and every time replace the base edge by a new uncrossed edge. ⊓⊔

In consequence, we solve an open problem of Kaufmann and Ueckerdt [11] on the density of fan-planar graphs and show that configuration II has no impact on the density.

Corollary 2. Adjacency-crossing and fan-crossing graphs have at most 5n − 10 edges.

4 Conclusion

We extended the study of fan-planar graphs initiated by Kaufmann and Ueckerdt [11] and continued in [4, 5], and clarified the situation around fan-crossings. We proved that triangle-crossings can be avoided, whereas configuration II is essential for graphs but not for their density. Thereby, we solved a problem by Kaufmann and Ueckerdt [11] on the density of adjacency-crossing graphs.

Recently, progress has been made on problems for 1-planar graphs [12] that are still open for fan-crossing graphs, such as (1) sparsest fan-crossing graphs, i.e., maximal graphs with as few edges as possible [8], or (2) recognizing specialized fan-crossing graphs, such as optimal fan-crossing graphs with 5n − 10 edges [7]. In addition, non-simple topological graphs with multiple edge crossings and crossings among adjacent edges have been studied [2], and they may differ from the simple ones, as is known for quasi-planar graphs [3]. Non-simple fan-crossing graphs have not yet been studied.

5 Acknowledgements

I wish to thank Christian Bachmaier for the discussions on fan-crossing graphs and his valuable suggestions.

References

1. E. Ackerman, J. Fox, J. Pach, and A. Suk. On grids in topological graphs. Comput. Geom., 47(7):710–723, 2014.
2. E. Ackerman and G. Tardos. On the maximum number of edges in quasi-planar graphs. J. Comb. Theory, Ser. A, 114(3):563–571, 2007.
3. P. K. Agarwal, B. Aronov, J. Pach, R. Pollack, and M. Sharir. Quasi-planar graphs have a linear number of edges. Combinatorica, 17(1):1–9, 1997.
4. M. A. Bekos, S. Cornelsen, L. Grilli, S. Hong, and M. Kaufmann. On the recognition of fan-planar and maximal outer-fan-planar graphs. Algorithmica, 79(2):401–427, 2017.
5. C. Binucci, E. Di Giacomo, W. Didimo, F. Montecchiani, M. Patrignani, A. Symvonis, and I. G. Tollis. Fan-planarity: Properties and complexity. Theor. Comput. Sci., 589:76–86, 2015.
6. F. J. Brandenburg. A first order logic definition of beyond-planar graphs. J. Graph Algorithms Appl., 2017. Accepted for publication.
7. F. J. Brandenburg. Recognizing optimal 1-planar graphs in linear time. Algorithmica, published online October 2016, doi:10.1007/s00453-016-0226-8.
8. F. J. Brandenburg, D. Eppstein, A. Gleißner, M. T. Goodrich, K. Hanauer, and J. Reislhuber. On the density of maximal 1-planar graphs. In M. van Kreveld and B. Speckmann, editors, GD 2012, volume 7704 of LNCS, pages 327–338. Springer, 2013.
9. O. Cheong, S. Har-Peled, H. Kim, and H. Kim. On the number of edges of fan-crossing free graphs. Algorithmica, 73(4):673–695, 2015.
10. H. Harborth and I. Mengersen. Drawings of the complete graph with maximum number of crossings. Congressus Numerantium, 88:225–228, 1992.
11. M. Kaufmann and T. Ueckerdt. The density of fan-planar graphs. CoRR, abs/1403.6184, 2014.
12. S. G. Kobourov, G. Liotta, and F. Montecchiani. An annotated bibliography on 1-planarity. Computer Science Review, 25:49–67, 2017.
13. G. Ringel. Ein Sechsfarbenproblem auf der Kugel. Abh. aus dem Math. Seminar der Univ. Hamburg, 29:107–117, 1965.
arXiv:2402.11453v3 [cs.CL] 19 Mar 2024

MatPlotAgent: Method and Evaluation for LLM-Based Agentic Scientific Data Visualization

Zhiyu Yang*2 Zihan Zhou*3 Shuo Wang†1 Xin Cong1 Xu Han1 Yukun Yan1 Zhenghao Liu4 Zhixing Tan5 Pengyuan Liu2 Dong Yu2 Zhiyuan Liu†1 Xiaodong Shi3 Maosong Sun1
1Tsinghua University 2Beijing Language and Culture University 3Xiamen University 4Northeastern University, China 5Zhongguancun Laboratory, Beijing, China

Abstract

Scientific data visualization plays a crucial role in research by enabling the direct display of complex information and assisting researchers in identifying implicit patterns. Despite its importance, the use of Large Language Models (LLMs) for scientific data visualization remains rather unexplored. In this study, we introduce MatPlotAgent, an efficient model-agnostic LLM agent framework designed to automate scientific data visualization tasks. Leveraging the capabilities of both code LLMs and multi-modal LLMs, MatPlotAgent consists of three core modules: query understanding, code generation with iterative debugging, and a visual feedback mechanism for error correction. To address the lack of benchmarks in this field, we present MatPlotBench, a high-quality benchmark consisting of 100 human-verified test cases. Additionally, we introduce a scoring approach that utilizes GPT-4V for automatic evaluation. Experimental results demonstrate that MatPlotAgent can improve the performance of various LLMs, including both commercial and open-source models. Furthermore, the proposed evaluation method shows a strong correlation with human-annotated scores.1

1 Introduction

A picture is worth a thousand words. Data visualization is an essential process in scientific research, facilitating the more direct conveyance of complex information and aiding researchers in uncovering implicit patterns. There are many advanced toolkits, such as Matplotlib2 and Origin3, that can help researchers plot various types of figures for complex data distributions. However, transforming raw data into informative and easy-to-understand visualizations is still time-consuming and labor-intensive. Before the invention of large language models (LLMs) (OpenAI, 2023), automating this process with AI models was almost impossible.

With large-scale parameters and extensive training data, LLMs have demonstrated remarkable capabilities in a wide range of complex tasks, including reasoning (Wei et al., 2022; Kojima et al., 2022a; Yao et al., 2023a), mathematics (Yu et al., 2024; Luo et al., 2023a; Azerbayev et al., 2024; Shao et al., 2024) and coding (Rozière et al., 2024; Luo et al., 2023b; Guo et al., 2024; Wei et al., 2023). This breakthrough has unlocked new opportunities for utilizing LLMs as autonomous agents in a diverse range of practical scenarios, such as web browsing (Nakano et al., 2021; Yao et al., 2022; Qin et al., 2023; Zhou et al., 2023; Deng et al., 2023; Yao et al., 2023b; Xie et al., 2023), social simulations (Park et al., 2023; Xu et al., 2023; Chen et al., 2024a; Wang et al., 2023), tool utilization (Qin et al., 2024; Schick et al., 2023; Liu et al., 2024; Li et al., 2023a; Lu et al., 2023; Qian et al., 2023b; Shinn et al., 2023), and software development (Qian et al., 2023a).

* Equal contribution. † Corresponding authors.
1 MatPlotAgent and MatPlotBench are publicly available at https://github.com/thunlp/MatPlotAgent.
2 https://matplotlib.org
3 https://www.originlab.com
Using LLMs to enhance human productivity in specialized areas is now a key research focus with great potential. Recent advancements in LLM-based agents inspire us to explore the utilization of LLMs for scientific data visualization, a realm that remains rather unexplored in existing studies. A closely related line of research is text-to-image generation (Ramesh et al., 2021; Saharia et al., 2022), where diffusion models (Rombach et al., 2022) have shown great potential in generating various types of images. However, existing text-to-image generation methods predominantly focus on artistic expression, potentially misaligning with the needs of scientific data visualization, where clarity and precision in conveying information are the most important principles. This work aims to automatically generate figures with precise information.

Figure 1: Examples in the proposed MatPlotBench. Given the raw data and user queries, the AI agent is expected to generate a figure accordingly. We only display partial raw data and user queries due to space limitations.

We propose leveraging modern code LLMs and multi-modal LLMs to develop scientific data visualization agents that can significantly enhance human efficiency. The resulting MatPlotAgent4 is comprised of three modules: (1) the query understanding module, which can thoroughly understand user-provided requirements; (2) the code generation module with iterative debugging capabilities, which uses code to precisely preprocess raw data and generate figures; and (3) the visual feedback module, which possesses visual perceptual abilities to find errors in the plotted draft and provide visual feedback to the code generation module to rectify the errors. Our method is model-agnostic and can be driven by any code LLM and multi-modal LLM. Through experiments, we find MatPlotAgent can work with both closed-source LLMs (e.g., GPT-4 (OpenAI, 2023)) and open-source LLMs (e.g., Magicoder (Wei et al., 2023)).

4 This name is in homage to the well-known Matplotlib.

Another critical challenge in the field of automatic scientific data visualization is the absence of benchmarks for evaluation purposes. To address this issue, we introduce a meticulously crafted benchmark called MatPlotBench to quantitatively evaluate the approaches involved. Specifically, MatPlotBench contains 100 carefully hand-crafted test examples, each of which contains a user query, the corresponding input data, and a ground-truth figure verified by human experts. We believe that high-quality test sets play a crucial role in driving advancements in the field.

To facilitate automatic quantitative evaluation, we also design a scoring mechanism based on GPT-4V (OpenAI, 2023), one of the strongest multi-modal LLMs, which can effectively understand text and figures. Specifically, GPT-4V is prompted to produce a score between 0 and 100 based on the ground-truth figure and the one generated by AI models. Additionally, we conduct human evaluation and estimate the correlation coefficient between human-annotated scores and the automatically calculated scores. The results reveal a strong correlation between the automatic score and the human-annotated score, thus affirming the reliability of the scoring mechanism.

In summary, our contributions can be listed as follows:

• We introduce MatPlotBench to enable automatic quantitative evaluation of AI methods designed for scientific data visualization.
Through comparison with human evaluation, we observe that MatPlotBench can effectively capture the performance of AI approaches in this cutting-edge task.

• We propose an effective and generalizable LLM agent framework, MatPlotAgent, that can improve the performance of a wide range of LLMs based on the newly proposed visual feedback mechanism.

[Figure 1 content: four example (user query, raw data, output figure) triples: (A) a 2D scatter plot with K-Means clustering of protein consumption in 24 European countries, (B) a phase diagram of water with a logarithmic pressure scale, (C) a 3D waterfall plot of time, frequency, and amplitude data, and (D) a chord diagram of mobile phone brand switching behavior.]
3.1 Data Collection Principles To enhance the quality of MatPlot- Bench, we adhere to the following principles for data collection: (1) Covering diverse types: encom- passing a broad range of plot types, including not only the most commonly used but also rare but use- ful ones; (2) Containing representative instances: ensuring that the test examples reflect the represen- tative features of scientific data visualization, such as varying data complexity; and (3) Balancing easy and challenging problems: including problems of varying levels of difficulty in the benchmark. Selecting Original Examples In accordance with the principles outlined above, we first select some original examples from reputable online sci- entific data visualization forums. These examples are carefully selected from the Matplotlib Gallery and OriginLab GraphGallery, encompassing di- verse and representative instances with varying lev- els of difficulty. Specifically, we select 1 or 2 exam- ples from every section in the Matplotlib Gallery, covering bars, lines, markers, pie charts, polar plots, contour plots, statistics plots, 3D plots, text anno- tations, radar charts, shapes, scales, axes, spines, subplots, and so on. We also seek more advanced test examples from the OriginLab GraphGallery, focusing on those that are more aesthetically ap- pealing or complex, such as Sankey diagrams, sun- burst charts, radial plots, chord diagrams, stream- plots, and others. Finally, 75 original examples come from the Matplotlib Gallery and the 25 other original examples come from the OriginLab Graph- Gallery. Subsequently, these examples undergo several modifications to become the final test cases in MatPlotBench. Preliminary Query Generation Based on the selected original examples, we use LLMs to gener- ate preliminary queries, which are then revised by humans. For original examples from the Matplotlib Gallery, we use GPT-4 to convert the code in each original example into preliminary queries. For the examples from the OriginLab GraphGallery, there are only images. We thus use GPT-4V to convert each image into a preliminary query. Data Replacement Based on these preliminary queries, we begin data replacement for examples from the Matplotlib Gallery due to the observed phenomenon of memorization by GPT-4. In this process, we replace the original data points with newly generated ones, while keeping other factors such as the plot type unchanged. For examples from OriginLab, we find that the data is inherently complex, and even GPT-4 does not exhibit memo- rization with these examples. As a result, we only perform data replacement for Matplotlib examples. Human Modification After completing the data replacement process, we engage human annotators to refine the preliminary queries. These annota- tors are tasked with correcting errors, eliminating ambiguity, and adding any omitted essential infor- mation. Each annotator involved has a minimum of three years of experience in coding and NLP. Furthermore, each query undergoes refinement by two independent human annotators. Updating Ground-Truth Figures After obtain- ing the human-annotated queries, as the data in Matplotlib examples are altered, we cannot di- rectly use the images in the original example as the ground truth. To this end, we manually wrote code to plot the ground truth for the Matplotlib examples. For examples from OriginLab, as the data remains unaltered, we extract the images from their website to serve as the ground truth. 
Human Verification After obtaining the queries and their corresponding ground truths, we performed a final round of manual verification. Three NLP researchers were asked to conduct this verification. In this round, the focus is mainly on checking whether the user queries and the ground truths are well aligned. The researchers meticulously checked each element in the ground-truth image and looked for the corresponding descriptions in the user query. Ill-described elements and those missing clarifications are corrected. Redundant and incorrect descriptions are removed. This process results in 100 high-quality (query, raw data, ground-truth figure) triples, which comprise our final benchmark.

3.2 Automatic Quantitative Evaluation

To ease the burden of manual evaluation and broaden the applicability of our benchmark for research purposes, we suggest employing GPT-4V, a cutting-edge multi-modal LLM, to conduct automatic evaluations on our proposed benchmark. We carefully prompt GPT-4V to give a score from 0 to 100 on model-generated visualizations, using the corresponding ground truths as the reference. The prompt is shown in Figure 6 in the Appendix.

Correlation with Human Evaluation To assess the reliability of GPT-4V as an automatic evaluator for scientific visualizations, we calculate the correlation between the automatic scores and human-evaluated scores. Specifically, we employ GPT-3.5 and GPT-4 to generate figures on MatPlotBench, and then conduct both automatic and human evaluation for the generated figures. For each model, we iteratively sample a subset that consists of n examples from the total benchmark, and then calculate the average score of both automatic and human evaluation. This process repeats k times and we get k data points for each type of evaluation, which can be represented by A = {a1, · · · , ak} and H = {h1, · · · , hk}. Here ai denotes the average automatic score on the i-th randomly sampled subset, and hi represents the average human-evaluated score on the same subset. n and k are set to 25 and 100, respectively.

Figure 2: Correlation between the proposed automatic evaluation mechanism and human evaluation.

We utilize the statistical functions provided by scipy5 to compute the Pearson correlation coefficient r and the corresponding p-value p. For GPT-4, we obtain r = 0.876 and p = 7.41e-33, while for GPT-3.5, the values are r = 0.836 and p = 2.67e-27. Figure 2 shows the data points for GPT-4. Given that r > 0.8 and p < 0.05, we conclude that the automatic evaluation scores are strongly correlated with human evaluation results. This demonstrates the reliability of the proposed scoring mechanism in assessing the quality of model-generated figures on MatPlotBench.

5 https://docs.scipy.org/doc/scipy/reference/stats.html
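For concreteness, the correlation computation described above can be reproduced along the following lines. This is our minimal sketch, not released code: the per-example score arrays, the sampling loop, and the seed are assumptions about the described setup.

```python
# Minimal sketch of the subset-resampling correlation check (Section 3.2).
# auto_scores / human_scores hold per-example scores on the 100 benchmark
# items; only scipy.stats.pearsonr and the values n=25, k=100 come from
# the paper, the rest is assumed.
import random
from scipy.stats import pearsonr

def subset_correlation(auto_scores, human_scores, n=25, k=100, seed=0):
    rng = random.Random(seed)
    indices = range(len(auto_scores))
    a_means, h_means = [], []
    for _ in range(k):                       # k randomly sampled subsets
        idx = rng.sample(indices, n)         # a subset of n examples
        a_means.append(sum(auto_scores[i] for i in idx) / n)
        h_means.append(sum(human_scores[i] for i in idx) / n)
    return pearsonr(a_means, h_means)        # returns (r, p-value)
```

pearsonr returns the coefficient r and the p-value p that are compared against the thresholds r > 0.8 and p < 0.05 above.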
Specifically, this module is based on the involved code LLM, which is prompted to give 5https://docs.scipy.org/doc/scipy/reference/ stats.html Figure 3: Workflow of MatPlotAgent: The query expansion module converts the user query into detailed multi-step instructions. These instructions are then passed to the code agent, which generates the plotting code. The visual agent provides informative feedback based on the current draft, guiding the refinement of the figure. detailed instructions on how to use code to fulfill the requirement specified by the user, including what libraries to import, what library functions to call, how to set the parameters in each function cor- rectly, how to prepare the data, how to manipulate the data, and so on. 4.2 Code Agent The code agent is the core component in MatPlotA- gent, responsible for generating the code to plot fig- ures. Given detailed instructions from the query ex- pansion module, the code agent first generates the code using appropriate libraries and functions. To improve the success rate of the generated code, we also employ the self-debugging mechanism (Chen et al., 2024b), which helps the involved code LLM iteratively identify and correct bugs in the code. To prevent an infinite loop, we set the maximum iterations of self-debugging to 3. Similar to humans, who need to repeatedly refine the figure based on current drafts, we also introduce a visual feedback mechanism. This mechanism em- ploys multi-modal LLMs to provide suggestions to improve the figure and better fulfill the user’s queries. These suggestions, which we call visual feedback, are then provided to the code agent to further improve the code. Our experiments in Sec- tion 5.2 demonstrate that MatPlotAgent is compat- ible with several modern code LLMs, including both some well-known closed-source models and some open-source models. 4.3 Visual Agent The major difference between MatPlotAgent and previous LLM-based coding agents (Qian et al., 2023a; Chen et al., 2024b) is that we take the visual signal into account, which is important in scientific data visualization. Some errors or weaknesses may be difficult to identify in the code but become ap- parent when observing the output figure through “eyes”. The visual agent is the “eyes” for MatPlotA- gent, while the aforementioned code agent acts as the “hands” for MatPlotAgent. Specifically, the visual agent is powered by multi-modal LLMs. We introduce several guiding principles for the visual agent, including verifying whether the figure aligns with the provided data, and enhancing the colors or labels to improve the figure’s informativeness. Based on the principles, the user query, and the current draft of the figure, the visual agent generates some suggestions to re- fine to figure. These suggestions serve as feedback for the code agent to refine the code. Experimental results in Section 5.4 show that our visual feedback mechanism can significantly improve the quality of the plotted figures. 
User QueryGenerateascatterplotwithpolarprojection.Theplotshouldhave200pointswiththeirpositionsandcolorsdeterminedbyrandomvalues.Theareaofeachpointshouldbeproportionaltothesquareofitsradialdistancefromtheorigin,andthecolorshouldbedeterminedbyitsanglefromthepositivex-axis.Query ExpansionPrinciplesYoushouldunderstandwhatthequery'srequirementsare,andoutputstepbystep,detailedinstructionsonhowtousepythoncodetofulfilltheserequirements.Includewhatlibrariestoimport,whatlibraryfunctionstocall,howtosettheparametersineachfunctioncorrectly,howtopreparethedata,howtomanipulatethedatasothatitbecomesappropriateforlaterfunctionstocalletc.hue=theta/(2*np.pi)saturation=np.ones_like(hue)value=np.ones_like(hue)scatter=ax.scatter(theta,r,s=area,c=(hue,saturation,value),cmap='hsv')Code Agenthue=theta/(2*np.pi)saturation=np.ones_like(hue)value=np.ones_like(hue)colors=list(zip(hue,saturation,value))rgb_colors=hsv_to_rgb(colors)scatter=ax.scatter(theta,r,s=area,c=rgb_colors)Self-DebuggingBug FixedCode GenerationVisual AgentMatch Type and Data: Ensure that the plot and data match the requests.Customize: Adapt colors and labels to meet the user's requirements.Adjust and Improve: Resolve discrepancies and improve visual quality.PrinciplesFeedbackEnhanceVisibility:Scaleuppointsizeswithafactor,like100,forbettervisibility.Final ResultBuggy Code Model GPT-4 GPT-3.5 Magicoder-S-DS-6.7B (Wei et al., 2023) Deepseek-coder-6.7B-instruct (Guo et al., 2024) CodeLlama-34B-Instruct (Rozière et al., 2024) Deepseek-coder-33B-instruct (Guo et al., 2024) WizardCoder-Python-33B-V1.1 (Luo et al., 2023b) Direct Decod. Zero-Shot MatPlotAgent CoT w/ GPT-4V 48.86 38.03 38.49 31.53 16.54 30.88 36.94 45.42 37.14 37.95 29.16 12.40 36.10 35.81 −3.44 −0.89 −0.54 −2.37 −4.14 +5.22 −1.13 61.16 47.51 51.70 39.45 14.18 32.18 45.96 +12.30 +9.48 +13.21 +7.92 −2.36 +1.30 +9.02 Table 1: Performance of different LLMs on MatPlotBench. For each model, improvements over the direct decoding are highlighted in red, while results worse than that of the direct decoding are highlighted in blue. Model Direct Decod. w/ Gemini Pro Vision MatPlotAgent • Direct decoding: given the query, the model directly generates the plotting code. GPT-4 GPT-3.5 48.86 38.03 56.73 43.48 +7.87 +5.45 Table 2: Performance of GPT-4 and GPT-3.5 using Gem- ini Pro Vision as visual agent on MatPlotBench. 5 Experiments 5.1 Setup Models Since the proposed MatPlotAgent is model-agnostic, we can employ various LLMs in this framework. The code LLMs we use in our ex- periments include GPT-4, GPT-3.5, Magicoder-S- DS-6.7B (Wei et al., 2023), Deepseek-coder-6.7B- instruct (Guo et al., 2024), Deepseek-coder-33B- instruct (Guo et al., 2024), WizardCoder-Python- 33B-V1.1 (Luo et al., 2023b), and CodeLlama-34B- Instruct (Rozière et al., 2024). The decoding tem- perature is set to 0.0 for all the involved code LLMs. For GPT-4 and GPT-3.5, we use the API provided by OpenAI6. For the other five open-source LLMs, we use vLLM (Kwon et al., 2023) for model infer- ence. For the visual agent, we utilize GPT-4V (Ope- nAI, 2023) and Gemini Pro Vision (Google, 2023), two representative multi-modal LLMs. We leave the exploration of using open-source multi-modal LLMs to power the visual agent for future work. Evaluation We evaluate the involved methods on MatPlotBench, using the proposed automatic scor- ing mechanism that is shown reliable in Section 3.2. 
For each code LLM, we evaluate its performance in three ways: 6https://openai.com/product • Zero-Shot Chain-of-thought (Kojima et al., 2022b): the model is prompted to inference with the zero-shot CoT mechanism. • MatPlotAgent: the model is equipped with the proposed MatPlotAgent framework, driv- ing the query expansion module and the code agent, as illustrated in Section 4. 5.2 Main Results Table 1 presents the results of different methods on the scientific data visualization task. In the direct decoding setting, GPT-4 achieves the high- est score of 48.86. Surprisingly, the open-source model Magicoder-S-DS-6.7B (Wei et al., 2023) achieves the second-best performance, surpassing models with substantially larger parameter sizes, such as WizardCoder-Python-33B-V1.1. The results also suggest that the zero-shot CoT mechanism does not effectively enhance the per- formance of many recent code LLMs. Zero-shot CoT only improves the results of Deepseek-coder- 33B-instruct (Guo et al., 2024) from 30.88 to 36.10. Conversely, for other models, implementing zero- shot CoT results in poorer performance. For ex- ample, when zero-shot CoT is applied, the perfor- mance of GPT-4 drops to 45.42, which is lower than the direct decoding result of 48.86. From Table 1, we find the proposed MatPlotA- gent can improve the plotting capabilities of sev- eral models. For GPT-4 and GPT-3.5, MatPlotA- gent leads to significant improvements of 12.30 and 9.48, respectively. For the other five open-source LLMs, MatPlotAgent improves the performance of four models. With MatPlotAgent, the open-source Model Accuracy of Code Execution Results (%) Visualization-Hard Visualization-Easy Average GPT-4 + MatPlotAgent w/o Visual Feedback 66.7 72.6 66.7 60.8 68.4 65.8 63.8 70.5 66.3 Table 3: Effect of MatPlotAgent on the visualization subset of the Qwen-Agent Code Interpreter benchmark. Magicoder-S-DS-6.7B model even surpasses GPT- 4 with direct decoding (51.70 vs. 48.86), showcas- ing the effectiveness of our method. To investigate the generalizability of MatPlotA- gent across various multi-modal LLMs, we present the results of employing Gemini Pro Vision as the visual agent in Table 2. We observe considerable improvements of 7.87 and 5.45, respectively, over the direct decoding baseline. This evidence fur- ther demonstrates the model-agnostic characteristic of our approach, leveraging various multi-modal LLMs to achieve enhanced performance. 5.3 Results on Qwen-Agent Code Interpreter Benchmark In Table 3, we detail the performance of MatPlotA- gent on the visualization subset of the Qwen-Agent Code Interpreter Benchmark7, which was recently published. According to their GitHub repository, GPT-4 achieved scores of 66.7 and 60.8 on the Visualization-Hard and Visualization-Easy subsets, respectively. Utilizing MatPlotAgent, we attained higher scores of 72.62 and 68.35 on these subsets. When the visual feedback mechanism is disabled, MatPlotAgent reached scores of 66.67 and 65.82, reconfirming the necessity of visual feedback. 5.4 Ablation Study Compared to previous LLM-based coding agents (Qian et al., 2023a; Chen et al., 2024b), the major contribution of the work lies in the newly proposed visual feedback mechanism, expected to leverage visual signals to enhance the quality of the output figure. To gain a deeper understanding of the impact of the visual feedback mechanism, we conduct both qualitative and quantitative analyses in this section. 
Figure 4 presents examples plotted by LLMs both with and without the visual feedback mech- anism. We observe a clear improvement in the 7https://github.com/QwenLM/Qwen-Agent/tree/ main/benchmark Model Direct Decod. MatPlotAgent w/o Visual Feedback GPT-4 GPT-3.5 48.86 61.16 53.44 38.03 47.51 41.57 Table 4: Effect of the visual feedback mechanism (GPT- 4V visual agent). quality of the output figure with the visual feed- back. For example, in case C, the text in the figure is jumbled, but this issue is resolved with the assis- tance of visual feedback. It is important to note that the visual agent does not reference the ground-truth figure when generating feedback; it only examines the draft plotted by the model. Table 4 also presents quantitative results of the visual feedback mecha- nism, indicating that the absence of visual feedback would result in significantly poorer outcomes for both GPT-4 and GPT-3.5. This reaffirms the impor- tance of visual signals in the task of scientific data visualization. 5.5 Case Study We present output figures in Figure 5. The first example is relatively simple, correctly plotted by GPT-4 augmented with MatPlotAgent. The sec- ond example is more challenging; while GPT-4 and Magicoder-S-DS-6.7B can generate a draft, both omit some elements. The third example is the most difficult, where none of the three mod- els can produce the correct result. These results indicate that the proposed MatPlotBench poses a significant challenge for current LLMs. Even the state-of-the-art LLM, GPT-4, equipped with Mat- PlotAgent, fails in some cases. We believe this benchmark will be effective not only for evaluating AI systems in scientific data visualization but also for assessing general capabilities such as coding and visual perception. Figure 4: Examples to illustrate the effect of visual feedback. To investigate the effect of the visual feedback mechanism on different models, we display the outputs of two representative LLMs. Case A, B, and C are generated by GPT-4. Case D is generated by Magicoder-S-DS-6.7B. 6 Related Work Code LLMs Since the release of Codex (Chen et al., 2021), many closed- and open-source code LLMs have been published, pushing the bound- aries of LLMs’ capabilities to write functional code. Early open-source efforts include SantaCoder (Al- lal et al., 2023) and StarCoder (Li et al., 2023b). More recently, the Code Llama (Rozière et al., 2024) series is released, including models of vary- ing sizes. DeepSeekCoder (Guo et al., 2024), a series of open-source code models ranging in size from 1.3B to 33B, has also garnered significant attention for its impressive performance on general coding benchmarks. Wei et al. (2023) introduce a novel data augmentation method for automati- cally creating high-quality fine-tuning data. The resulting Magicoder model surpasses a wide array of open-source code LLMs in performance. LLM Agents Recently, a wide range of LLM- based agent frameworks is proposed to explore LLMs’ potential in real-world scenarios (Nakano et al., 2021; Yao et al., 2022; Qin et al., 2023; Zhou et al., 2023). OpenAgents (Xie et al., 2023) pro- posed an open platform that leverages LLM agents in everyday situation by employing a Data Agent, a Plugins Agent, and a Web Agent. Park et al. (2023) proposed an interactive simulation of human behav- ior in which software agents emulate realistic hu- man actions and interactions through computation. 
Voyager (Wang et al., 2023) introduced the first LLM-driven autonomous agent in Minecraft, designed to perpetually explore the environment, master various skills, and uncover new insights independently, without any human guidance. ChatDev (Qian et al., 2023a) proposed creating a virtual, chat-driven software development enterprise that follows the traditional waterfall methodology. In this study, we explore the capabilities of LLM-based agents in the task of scientific data visualization, a critical and practical area for contemporary researchers.

7 Conclusion

We propose to assess and enhance the capabilities of modern LLMs for scientific data visualization, a multifaceted task demanding coding and visual skills. We begin with the creation of MatPlotBench, a rigorous benchmark supporting automated quantitative evaluation that strongly aligns with human assessment.
Carr, Jan Leike, Josh Achiam, Vedant Misra, Evan Morikawa, Alec Radford, Matthew Knight, Miles Brundage, Mira Murati, Katie Mayer, Peter Welinder, Bob McGrew, Dario Amodei, Sam McCandlish, Ilya Sutskever, and Wojciech Zaremba. 2021. Evaluating large language models trained on code. GroundTruthGPT-4GPT-3.5Magicoder-S-DS-6.7B Weize Chen, Yusheng Su, Jingwei Zuo, Cheng Yang, Chenfei Yuan, Chi-Min Chan, Heyang Yu, Yaxi Lu, Yi-Hsin Hung, Chen Qian, Yujia Qin, Xin Cong, Ruobing Xie, Zhiyuan Liu, Maosong Sun, and Jie Zhou. 2024a. Agentverse: Facilitating multi-agent collaboration and exploring emergent behaviors. In The Twelfth International Conference on Learning Representations. Xinyun Chen, Maxwell Lin, Nathanael Schärli, and Denny Zhou. 2024b. Teaching large language mod- els to self-debug. In The Twelfth International Con- ference on Learning Representations. Xiang Deng, Yu Gu, Boyuan Zheng, Shijie Chen, Samuel Stevens, Boshi Wang, Huan Sun, and Yu Su. 2023. Mind2web: Towards a generalist agent for the web. In Thirty-seventh Conference on Neural Information Processing Systems Datasets and Bench- marks Track. Gemini Team Google. 2023. Gemini: A family of highly capable multimodal models. Daya Guo, Qihao Zhu, Dejian Yang, Zhenda Xie, Kai Dong, Wentao Zhang, Guanting Chen, Xiao Bi, Y. Wu, Y. K. Li, Fuli Luo, Yingfei Xiong, and Wen- feng Liang. 2024. Deepseek-coder: When the large language model meets programming – the rise of code intelligence. Takeshi Kojima, Shixiang (Shane) Gu, Machel Reid, Yu- taka Matsuo, and Yusuke Iwasawa. 2022a. Large lan- guage models are zero-shot reasoners. In Advances in Neural Information Processing Systems, volume 35, pages 22199–22213. Takeshi Kojima, Shixiang (Shane) Gu, Machel Reid, Yu- taka Matsuo, and Yusuke Iwasawa. 2022b. Large lan- guage models are zero-shot reasoners. In Advances in Neural Information Processing Systems. Woosuk Kwon, Zhuohan Li, Siyuan Zhuang, Ying Sheng, Lianmin Zheng, Cody Hao Yu, Joseph E. Gonzalez, Hao Zhang, and Ion Stoica. 2023. Effi- cient memory management for large language model serving with pagedattention. In Proceedings of the ACM SIGOPS 29th Symposium on Operating Systems Principles. Yuhang Lai, Chengxi Li, Yiming Wang, Tianyi Zhang, Ruiqi Zhong, Luke Zettlemoyer, Wen-Tau Yih, Daniel Fried, Sida Wang, and Tao Yu. 2023. DS- 1000: A natural and reliable benchmark for data sci- ence code generation. In Proceedings of the 40th International Conference on Machine Learning. Guohao Li, Hasan Abed Al Kader Hammoud, Hani Itani, Dmitrii Khizbullin, and Bernard Ghanem. 2023a. CAMEL: Communicative agents for ”mind” exploration of large language model society. In Thirty-seventh Conference on Neural Information Processing Systems. 
Raymond Li, Loubna Ben allal, Yangtian Zi, Niklas Muennighoff, Denis Kocetkov, Chenghao Mou, Marc Marone, Christopher Akiki, Jia LI, Jenny Chim, Qian Liu, Evgenii Zheltonozhskii, Terry Yue Zhuo, Thomas Wang, Olivier Dehaene, Joel Lamy-Poirier, Joao Monteiro, Nicolas Gontier, Ming-Ho Yee, Lo- gesh Kumar Umapathi, Jian Zhu, Ben Lipkin, Muh- tasham Oblokulov, Zhiruo Wang, Rudra Murthy, Ja- son T Stillerman, Siva Sankalp Patel, Dmitry Ab- ulkhanov, Marco Zocca, Manan Dey, Zhihan Zhang, Urvashi Bhattacharyya, Wenhao Yu, Sasha Luccioni, Paulo Villegas, Fedor Zhdanov, Tony Lee, Nadav Timor, Jennifer Ding, Claire S Schlesinger, Hailey Schoelkopf, Jan Ebert, Tri Dao, Mayank Mishra, Alex Gu, Carolyn Jane Anderson, Brendan Dolan- Gavitt, Danish Contractor, Siva Reddy, Daniel Fried, Dzmitry Bahdanau, Yacine Jernite, Carlos Muñoz Ferrandis, Sean Hughes, Thomas Wolf, Arjun Guha, Leandro Von Werra, and Harm de Vries. 2023b. Star- coder: may the source be with you! Transactions on Machine Learning Research. Reproducibility Certifi- cation. Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Ao- han Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, and Jie Tang. 2024. Agent- bench: Evaluating LLMs as agents. In The Twelfth International Conference on Learning Representa- tions. Pan Lu, Baolin Peng, Hao Cheng, Michel Galley, Kai- Wei Chang, Ying Nian Wu, Song-Chun Zhu, and Jianfeng Gao. 2023. Chameleon: Plug-and-play compositional reasoning with large language models. In Thirty-seventh Conference on Neural Information Processing Systems. Haipeng Luo, Qingfeng Sun, Can Xu, Pu Zhao, Jian- guang Lou, Chongyang Tao, Xiubo Geng, Qingwei Lin, Shifeng Chen, and Dongmei Zhang. 2023a. Wiz- ardmath: Empowering mathematical reasoning for large language models via reinforced evol-instruct. arXiv preprint arXiv:2308.09583. Ziyang Luo, Can Xu, Pu Zhao, Qingfeng Sun, Xiubo Geng, Wenxiang Hu, Chongyang Tao, Jing Ma, Qing- wei Lin, and Daxin Jiang. 2023b. Wizardcoder: Empowering code large language models with evol- instruct. Reiichiro Nakano, Jacob Hilton, Suchir Balaji, Jeff Wu, Ouyang Long, Christina Kim, Christopher Hesse, Shantanu Jain, Vineet Kosaraju, William Saunders, Xu Jiang, Karl Cobbe, Tyna Eloundou, Gretchen Krueger, Kevin Button, Matthew Knight, Benjamin Chess, and John Schulman. 2021. Webgpt: Browser- assisted question-answering with human feedback. ArXiv, abs/2112.09332. OpenAI. 2023. Gpt-4 technical report. Joon Sung Park, Joseph O’Brien, Carrie Jun Cai, Mered- ith Ringel Morris, Percy Liang, and Michael S. Bern- stein. 2023. Generative agents: Interactive simulacra of human behavior. In Proceedings of the 36th An- nual ACM Symposium on User Interface Software and Technology, UIST ’23, New York, NY, USA. Association for Computing Machinery. Chen Qian, Xin Cong, Wei Liu, Cheng Yang, Weize Chen, Yusheng Su, Yufan Dang, Jiahao Li, Juyuan Xu, Dahai Li, Zhiyuan Liu, and Maosong Sun. 2023a. Communicative agents for software development. Cheng Qian, Chi Han, Yi Fung, Yujia Qin, Zhiyuan Liu, and Heng Ji. 2023b. CREATOR: Tool creation for disentangling abstract and concrete reasoning of large language models. In Findings of the Associa- tion for Computational Linguistics: EMNLP 2023, pages 6922–6939, Singapore. Association for Com- putational Linguistics. 
Yujia Qin, Zihan Cai, Dian Jin, Lan Yan, Shihao Liang, Kunlun Zhu, Yankai Lin, Xu Han, Ning Ding, Huadong Wang, Ruobing Xie, Fanchao Qi, Zhiyuan Liu, Maosong Sun, and Jie Zhou. 2023. WebCPM: Interactive web search for Chinese long-form ques- tion answering. In Proceedings of the 61st Annual Meeting of the Association for Computational Lin- guistics (Volume 1: Long Papers), pages 8968–8988, Toronto, Canada. Association for Computational Lin- guistics. Yujia Qin, Shihao Liang, Yining Ye, Kunlun Zhu, Lan Yan, Yaxi Lu, Yankai Lin, Xin Cong, Xiangru Tang, Bill Qian, Sihan Zhao, Lauren Hong, Runchu Tian, Ruobing Xie, Jie Zhou, Mark Gerstein, dahai li, Zhiyuan Liu, and Maosong Sun. 2024. ToolLLM: Facilitating large language models to master 16000+ real-world APIs. In The Twelfth International Con- ference on Learning Representations. Aditya Ramesh, Mikhail Pavlov, Gabriel Goh, Scott Gray, Chelsea Voss, Alec Radford, Mark Chen, and Ilya Sutskever. 2021. Zero-shot text-to-image gener- ation. Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. 2022. High- resolution image synthesis with latent diffusion mod- els. Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Romain Sauvestre, Tal Remez, Jérémy Rapin, Artyom Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Mar- tin, Nicolas Usunier, Thomas Scialom, and Gabriel Synnaeve. 2024. Code llama: Open foundation mod- els for code. Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily Denton, Seyed Kamyar Seyed Ghasemipour, Burcu Karagol Ayan, S. Sara Mahdavi, Rapha Gontijo Lopes, Tim Salimans, Jonathan Ho, David J Fleet, and Mohammad Norouzi. 2022. Pho- torealistic text-to-image diffusion models with deep language understanding. Timo Schick, Jane Dwivedi-Yu, Roberto Dessi, Roberta Raileanu, Maria Lomeli, Eric Hambro, Luke Zettle- moyer, Nicola Cancedda, and Thomas Scialom. 2023. Toolformer: Language models can teach themselves to use tools. In Thirty-seventh Conference on Neural Information Processing Systems. Zhihong Shao, Peiyi Wang, Qihao Zhu, Runxin Xu, Junxiao Song, Mingchuan Zhang, Y. K. Li, Y. Wu, and Daya Guo. 2024. Deepseekmath: Pushing the limits of mathematical reasoning in open language models. Noah Shinn, Federico Cassano, Ashwin Gopinath, Karthik R Narasimhan, and Shunyu Yao. 2023. Re- flexion: language agents with verbal reinforcement learning. In Thirty-seventh Conference on Neural Information Processing Systems. Guanzhi Wang, Yuqi Xie, Yunfan Jiang, Ajay Man- dlekar, Chaowei Xiao, Yuke Zhu, Linxi (Jim) Fan, and Anima Anandkumar. 2023. Voyager: An open- ended embodied agent with large language models. ArXiv, abs/2305.16291. Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, brian ichter, Fei Xia, Ed Chi, Quoc V Le, and Denny Zhou. 2022. Chain-of-thought prompt- ing elicits reasoning in large language models. In Advances in Neural Information Processing Systems, volume 35, pages 24824–24837. Curran Associates, Inc. Yuxiang Wei, Zhe Wang, Jiawei Liu, Yifeng Ding, and Lingming Zhang. 2023. Magicoder: Source code is all you need. Tianbao Xie, Fan Zhou, Zhoujun Cheng, Peng Shi, Lu- oxuan Weng, Yitao Liu, Toh Jing Hua, Junning Zhao, Qian Liu, Che Liu, Leo Z. Liu, Yiheng Xu, Hongjin Su, Dongchan Shin, Caiming Xiong, and Tao Yu. 2023. Openagents: An open platform for language agents in the wild. 
A Detailed Prompts

To better understand MatPlotBench and MatPlotAgent, we list the prompts for automatic evaluation and the three modules in MatPlotAgent, including the query expansion module, the code agent, and the visual agent.

A.1 Evaluation Prompts

The automatic evaluation prompt primarily requires GPT-4V to provide a score between 0 and 100 for the model-generated plot, with reference to the ground truth plot.

You are an excellent judge at evaluating visualization plots between a model-generated plot and the ground truth. You will be giving scores on how well it matches the ground truth plot. The generated plot will be given to you as the first figure. If the first figure is blank, that means the code failed to generate a figure. Another plot will be given to you as the second figure, which is the desired outcome of the user query, meaning it is the ground truth for you to reference. Please compare the two figures head to head and rate them. Suppose the second figure has a score of 100; rate the first figure on a scale from 0 to 100. Scoring should be carried out regarding the plot correctness: Compare closely between the generated plot and the ground truth; the more resemblance the generated plot has compared to the ground truth, the higher the score. The score should be proportionate to the resemblance between the two plots. In some rare occurrences, see if the data points are generated randomly according to the query; if so, the generated plot may not perfectly match the ground truth, but it is correct nonetheless. Only rate the first figure; the second figure is only for reference. If the first figure is blank, that means the code failed to generate a figure; give a score of 0 on the plot correctness. After scoring from the above aspect, please give a final score. The final score is preceded by the [FINAL SCORE] token. For example, [FINAL SCORE]: 40.

Figure 6: Automatic evaluation prompt for GPT-4V.

A.2 Prompts for MatPlotAgent

The query expansion prompt mainly requires LLMs to generate step-by-step, detailed instructions on how to use Python code to fulfill the requirements specified by users, as shown in Figure 7.

For the code agent, there are two prompts, one for the code generation process and one for the self-debugging mechanism. The code generation prompt mainly requires LLMs to generate executable code according to the user query to plot and save the output figure, as shown in Figure 8. The self-debugging prompt mainly requires LLMs to correct the buggy code according to the error message from a Python interpreter, as displayed in Figure 9.

The visual agent prompt mainly requires multimodal LLMs to first understand the user query and analyze the draft plot, and then generate the visual feedback to refine the draft, as shown in Figure 10.

SYSTEM PROMPT: According to the user query, expand and solidify the query into a step by step detailed instruction (or comment) on how to write Python code to fulfill the user query's requirements. Import the appropriate libraries. Pinpoint the correct library functions to call and set each parameter in every function call accordingly.
USER PROMPT: Here is the user query: [User Query]: """ {{query}} """ You should understand what the query's requirements are, and output step by step, detailed instructions on how to use Python code to fulfill these requirements. Include what libraries to import, what library functions to call, how to set the parameters in each function correctly, how to prepare the data, how to manipulate the data so that it becomes appropriate for later functions to call, etc. Make sure the code is executable and correctly generates the desired output in the user query.

Figure 7: The query expansion prompt in MatPlotAgent.

SYSTEM PROMPT: You are a cutting-edge super capable code generation LLM. You will be given a natural language query; generate runnable Python code to satisfy all the requirements in the query. You can use any Python library you want. When you complete a plot, remember to save it to a png file.
USER PROMPT: Here is the query: """ {{query}} """ If the query requires data manipulation from a csv file, process the data from the csv file and draw the plot in one piece of code. When you complete a plot, remember to save it to a png file. The file name should be """{{file_name}}""".

Figure 8: The code generation prompt in MatPlotAgent.

USER PROMPT: There are some errors in the code you gave: {{error_message}} Please correct the errors. Then give the complete code and don't omit anything even though you have given it in the above code.

Figure 9: The self-debugging prompt in MatPlotAgent.

SYSTEM PROMPT: Given a user query and an image of the current plot, please determine whether the plot has faithfully followed the user query. Your task is to provide instruction to make sure the plot has strictly completed the requirements of the query. Please output a detailed step by step instruction on how to use Python code to enhance the plot.
USER PROMPT: Here is the user query: [Query]: """ {{query}} """ Carefully read and analyze the user query to understand the requirements. Check if the plot aligns with the user query in terms of data selection, plot type, and any specific customization. Look at the provided image of the plot. Assess the plot type, the data it represents, labels, titles, colors, and any other visual elements. Compare these elements with the requirements specified in the user query. Note any differences between the user query requirements and the current plot. Based on the identified discrepancies, provide step-by-step instructions on how to modify the Python code to meet the user query requirements. Suggest improvements for better visualization practices, such as clarity, readability, and aesthetics, while ensuring the primary focus is on meeting the user's specified requirements. Remember to save the plot to a png file. The file name should be """{{file_name}}""".

Figure 10: Prompt for the visual agent.

B Human Evaluation Details

We engage human annotators from computer science departments at various universities via social media. They are compensated for their work at a rate slightly higher than the prevailing market rate. All human annotators involved are informed that the collected data will be used solely for academic research purposes, and their personal information will not be disclosed.

B.1 Evaluation Guide for Human Annotators

Figure 11 gives detailed instructions for human annotators when scoring the model-generated plots.

Evaluation Guide

Plot Correctness (0-100 points)
• Exact Match (90-100 points): The generated plot is nearly identical to the ground truth, with only minor, negligible differences.
• High Resemblance (70-89 points): The generated plot closely resembles the ground truth with some small but noticeable differences in data representation or styling.
• Moderate Resemblance (50-69 points): The generated plot has a moderate level of similarity to the ground truth, but there are several noticeable differences that impact the plot's accuracy or interpretation.
• Low Resemblance (30-49 points): The generated plot shares some similarities with the ground truth but has significant differences that change the overall message or interpretation of the data.
• Poor Match (10-29 points): The generated plot has very little in common with the ground truth, with major discrepancies in data representation.
• No Resemblance (1-9 points): The generated plot is completely different from the ground truth, with no discernible similarities in data representation.
• Failure to Generate (0 points): The first figure is blank, indicating a failure to generate any plot.

Special Considerations
• In cases where the generated plot includes random data points that are correct in the context of the query, the plot should be evaluated for its correctness based on the query's intent, not solely on its visual match to the ground truth.

[FINAL SCORE]: XX

Figure 11: Evaluation guide for human annotators when scoring the model-generated plots.
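To make the interplay of the code agent's two prompts concrete, the following is a minimal Python sketch of a generate-and-debug loop in the spirit of Figures 8 and 9. It is our own illustration, not the MatPlotAgent codebase: call_llm, run_code, and MAX_DEBUG_ROUNDS are hypothetical names, and the retry budget is an assumption.

```python
import subprocess
import sys
import tempfile
from typing import Optional

MAX_DEBUG_ROUNDS = 3  # assumed retry budget; not specified in the text above


def call_llm(system_prompt: str, user_prompt: str) -> str:
    """Placeholder for an LLM API call that returns generated Python code."""
    raise NotImplementedError


def run_code(code: str) -> Optional[str]:
    """Run the generated code in a subprocess; return stderr on failure, None on success."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    result = subprocess.run([sys.executable, path], capture_output=True, text=True)
    return result.stderr if result.returncode != 0 else None


def code_agent(query: str, file_name: str) -> str:
    """Generate plotting code (Figure 8 prompt), then self-debug (Figure 9 prompt)."""
    system = "You are a cutting-edge super capable code generation LLM. ..."
    user = f'Here is the query:\n"""\n{query}\n"""\nThe file name should be """{file_name}""".'
    code = call_llm(system, user)
    for _ in range(MAX_DEBUG_ROUNDS):
        error = run_code(code)
        if error is None:
            break  # draft executed; the visual agent would refine it next
        # Fill the self-debugging prompt (Figure 9) with the interpreter's error message.
        code = call_llm(
            system,
            f"There are some errors in the code you gave:\n{error}\n"
            "Please correct the errors. Then give the complete code and don't omit "
            "anything even though you have given it in the above code.",
        )
    return code
```

Wiring the loop this way makes the self-debugging mechanism purely message-driven: the only feedback the model receives between rounds is the interpreter's error text, exactly as the Figure 9 prompt suggests.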
ai_researcher
1
Intelligent_Pilot_Advisory_System_The_journey_from_ideation_to_an_early_system_design_of_an_AI-based_decision_support_system_for_airline_flight_decks.pdf
Verifying Aircraft Collision Avoidance Neural Networks Through Linear Approximations of Safe Regions

Kyle D. Julian* and Mykel J. Kochenderfer
Stanford University, Stanford, CA 94305

Shivam Sharma* and Jean-Baptiste Jeannin
University of Michigan, Ann Arbor, MI 48109

*Equal Contribution
Copyright © 2019, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
arXiv:1903.00762v1 [cs.SY] 2 Mar 2019

Abstract
The next generation of aircraft collision avoidance systems frame the problem as a Markov decision process and use dynamic programming to optimize the alerting logic. The resulting system uses a large lookup table to determine advisories given to pilots, but these tables can grow very large. To enable the system to operate on limited hardware, prior work investigated compressing the table using a deep neural network. However, ensuring that the neural network reliably issues safe advisories is important for certification. This work defines linearized regions where each advisory can be safely provided, allowing Reluplex, a neural network verification tool, to check if unsafe advisories are ever issued. A notional collision avoidance policy is generated and used to train a neural network representation. The neural networks are checked for unsafe advisories, resulting in the discovery of thousands of unsafe counterexamples.

Introduction
Over the last decade, neural network representations have become popular in decision-making systems for a variety of domains. Neural networks are state-of-the-art for image recognition systems (Simonyan and Zisserman 2015; He et al. 2016) and can learn to play games at super-human levels (Mnih et al. 2015; Silver et al. 2016). In these domains, a mistake by the neural network may have minor consequences; however, neural networks can also be used in safety-critical systems where a failure could be catastrophic. For example, neural networks have been used to steer autonomous cars given images (Bojarski et al. 2016) and guide unmanned aircraft to waypoints (Julian and Kochenderfer 2017). If a neural network steers an autonomous car off the road or directs an aircraft into an obstacle, the result could be expensive or lead to loss of life. In order for neural networks to be used for such applications, confidence must be established in their safe operation.

In the last few years, new research has resulted in tools to verify safety properties of neural networks. One tool, Reluplex, uses a Satisfiability Modulo Theories solver and extends the simplex method for neural networks with rectified linear unit (ReLU) activation functions to determine whether any input in a specified input region produces outputs with a desired property (Katz et al. 2017). Another approach defines neural network verification as a reachability problem that can be solved using a mixed integer linear program formulation (Lomuscio and Maganti 2017). Furthermore, a tool known as AI2 uses an overapproximation of the neural network to quickly verify safety properties of neural networks (Gehr et al. 2018). These tools enable network properties to be rapidly verified, but more work is needed to develop properties that will ensure safe operation of neural network systems.

This work focuses on the verification of neural networks used for aircraft collision avoidance.
We created a highly simplified aircraft collision avoidance policy that uses vertical maneuvers, computed with value iteration (Egorov et al. 2017). This policy, which we call VerticalCAS, is loosely based on an early prototype of the next-generation airborne collision avoidance system for commercial aircraft, ACAS Xa (Kochenderfer 2015). Although VerticalCAS is not the ACAS Xa system that will be flown on real aircraft, VerticalCAS serves as a simple and open-source collision avoidance policy that can be used in the development of desirable properties. These properties should also hold for other vertical collision avoidance systems. After generating a collision avoidance policy, a neural network is trained to represent the original discrete policy (Julian et al. 2016).

Previous work has developed equations to verify the safety of the tabular collision avoidance policy by defining "safeable" regions for each advisory (Jeannin et al. 2017). In order to verify these properties for the neural network representation, the equations are linearized to enable the use of the linear program solvers used by Reluplex. This paper describes the linearization process, introduces a new variable τ, the time to loss of horizontal separation, and describes the formulation and verification of "safeable" regions for neural networks.

VerticalCAS
The VerticalCAS collision avoidance system used throughout this paper is inspired by ACAS Xa, which frames aircraft collision avoidance as a Markov decision process (MDP) (Kochenderfer 2015). The ACAS Xa system is the successor to the current Traffic alert and Collision Avoidance System (TCAS) and provides pilots with advisories to change their vertical rate to prevent a possible near mid-air collision (NMAC). An NMAC is defined as an intruder aircraft coming inside the ownship puck, which is described in Fig. 1 as a region hp = 100 ft above and below, and rp = 500 ft radially around, the ownship aircraft (the aircraft where the collision avoidance system is installed).

VerticalCAS has 5 inputs which describe the system's state:
1. h (ft): altitude of intruder relative to ownship
2. vO (ft/s): ownship vertical climb rate
3. vI (ft/s): intruder vertical climb rate
4. aprev: previous advisory
5. τ (sec): time to loss of horizontal separation

The first 3 inputs are spatial and velocity quantities that are described in Fig. 1. Relative altitude h varies from −8000 ft to 8000 ft, and the aircraft climb rates vary from −100 ft/s to 100 ft/s.

Previous advisory (aprev) dictates which advisories VerticalCAS can issue given the most recent advisory. This restricts the network from issuing conflicting advisories, such as strong ascend or descend advisories immediately after a clear-of-conflict advisory, which can be confusing to pilots.

Time to loss of horizontal separation (τ) is the time until the horizontal separation between the intruder and ownship is less than rp. A more explicit definition of τ is

τ = (r − rp) / rv    (1)

where r is the horizontal separation between the ownship and intruder, and rv is the relative horizontal velocity between the two aircraft.
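As a small illustration of Eq. (1), the following Python sketch (a hypothetical helper, not part of the VerticalCAS code) computes τ, with a guard for the degenerate case rv → 0, in which the aircraft never close horizontally:

```python
def time_to_loss_of_separation(r: float, r_p: float = 500.0, r_v: float = 1.0) -> float:
    """Compute tau = (r - r_p) / r_v from Eq. (1).

    r   : current horizontal separation between ownship and intruder (ft)
    r_p : puck radius, 500 ft in VerticalCAS
    r_v : relative horizontal closing speed (ft/s)

    As r_v -> 0 the aircraft never close horizontally, so tau diverges;
    returning infinity here is an assumption made for this sketch.
    """
    if r_v <= 0.0:
        return float("inf")
    return (r - r_p) / r_v
```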
Markov Decision Process Policy
VerticalCAS is computed using local approximation value iteration as implemented by the Julia package POMDPs.jl (Egorov et al. 2017). The states, dynamics, rewards, and advisories reflect an early prototype of the ACAS Xa system described by Kochenderfer (2015). Each state s ∈ S represents a discrete encounter geometry between the ownship and intruder aircraft and has five dimensions, which are the inputs outlined above.

The system issues a new advisory a every ε seconds, and there are nine possible advisories, as described in Table 1, where g is Earth's sea-level gravitational acceleration (Jeannin et al. 2017). Each vertical advisory is defined by a target velocity vlo and sign w. If w = 1, the ownship can assume a velocity in the range [vlo, +∞), and if w = −1 the ownship can assume a velocity in the range (−∞, vlo]. In addition to the advisory (w, vlo), the ownship has to accelerate at least alo until it is in the acceptable velocity range defined by the issued advisory.

Table 1: VerticalCAS advisories

| Advisory | Description | Vertical Range (Min, Max) [ft/min] | Strength alo | Sign w | Advisory vlo [ft/min] |
| COC | Clear of Conflict | (−∞, +∞) | g/4 | N/A | N/A |
| DNC | Do Not Climb | (−∞, 0] | g/4 | −1 | 0 |
| DND | Do Not Descend | [0, +∞) | g/4 | +1 | 0 |
| DES1500 | Descend at least 1500 ft/min | (−∞, −1500] | g/4 | −1 | −1500 |
| CL1500 | Climb at least 1500 ft/min | [+1500, +∞) | g/4 | +1 | +1500 |
| SDES1500 | Strengthen Descend to at least 1500 ft/min | (−∞, −1500] | g/3 | −1 | −1500 |
| SCL1500 | Strengthen Climb to at least 1500 ft/min | [+1500, +∞) | g/3 | +1 | +1500 |
| SDES2500 | Strengthen Descend to at least 2500 ft/min | (−∞, −2500] | g/3 | −1 | −2500 |
| SCL2500 | Strengthen Climb to at least 2500 ft/min | [+2500, +∞) | g/3 | +1 | +2500 |

The transition model T(s, a, s′) and reward model r(s, a) used for vertical collision avoidance are explained in previous work (Kochenderfer 2015). Local approximation value iteration is used to compute the state-action values, Q(s, a), such that the finite-horizon Bellman equation holds for all states and actions:

Q(s, a) = r(s, a) + Σ_{s′} max_{a′} T(s, a, s′) Q(s′, a′)    (2)

Because s′ might not be exactly one of the discrete states s ∈ S, multilinear interpolation is used to compute Q(s′, a′). After computing the Q values using local approximation value iteration, the advisory associated with the highest Q value for a given state is the best advisory and is issued by the system. In addition, because the computed policy tends to advise Clear of Conflict in cases where an NMAC is imminent and unavoidable, the advisory at time τ = 6 is used in situations where τ < 6.

Figure 1: Three input variables of the VerticalCAS neural network (vertical ownship velocity vO, vertical separation h, vertical intruder velocity vI) and the ownship puck defined by rp and hp, in an ownship-centered coordinate frame.

Neural Network Representation
Storing the MDP policy with fine resolution in a table format can require large amounts of storage space, which may prevent implementation on limited avionics hardware. One approach to compressing the policy representation approximates the policy using a neural network through the use of supervised learning and an asymmetric loss function, which encourages the neural network to simultaneously approximate the Q-values and the highest-scoring advisory (Julian et al. 2016).

One network was trained for each previous advisory aprev, resulting in nine fully connected neural networks using six hidden layers of 45 hidden units each. Each hidden layer uses rectified linear unit (ReLU) activation, which is defined as ReLU(x) = max(0, x) (Dahl, Sainath, and Hinton 2013). Each network uses the remaining four state variables as inputs and outputs a value associated with each possible advisory. Each neural network was trained for 200 epochs using AdaMax optimization (Kingma and Ba 2015) implemented in Keras (Chollet 2015) with the Theano backend (Theano Development Team 2016), which requires an hour to train on an NVidia Titan X GPU.

Figure 2 plots the advisory the system would give to the ownship if the intruder were at each location in the plot. In this scenario, the ownship is climbing while the intruder is maintaining a constant altitude. If the intruder is approaching the ownship from above, the system alerts the ownship to stop climbing (DNC) or descend (DES1500) in order to prevent an NMAC. If the intruder is a little below the ownship, the system advises the pilot to continue climbing (CL1500). In other locations, a collision is not imminent and the system alerts clear-of-conflict (COC). The neural network representation is a smooth approximation of the original table policy. Although the network appears to represent the table well, verification is needed to ensure that the neural network alerts safely at all times.

Figure 2: Example policy plots for a climbing ownship and level-flying intruder using the MDP table (top) and neural network (bottom).
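The Bellman backup in Eq. (2) can be illustrated with a minimal tabular value-iteration sketch in NumPy. The sizes, random transition and reward models, and the number of sweeps below are illustrative placeholders, not the POMDPs.jl implementation (which additionally uses local function approximation and multilinear interpolation over a continuous state space):

```python
import numpy as np

# Illustrative sizes: S discrete states, A = 9 advisories.
S, A = 1000, 9
rng = np.random.default_rng(0)

# T[s, a, s'] transition probabilities and r[s, a] rewards would come from the
# encounter dynamics and reward model; random placeholders are used here.
T = rng.dirichlet(np.ones(S), size=(S, A))  # each T[s, a, :] sums to 1
r = rng.normal(size=(S, A))

Q = np.zeros((S, A))
for _ in range(50):  # finite-horizon sweeps
    # Bellman backup: Q(s,a) = r(s,a) + sum_{s'} max_{a'} T(s,a,s') Q(s',a').
    # Since T does not depend on a', max_{a'} can be taken on Q first.
    V = Q.max(axis=1)      # V(s') = max_{a'} Q(s', a')
    Q = r + T @ V          # contract over s'; result has shape (S, A)

policy = Q.argmax(axis=1)  # best advisory per discrete state
```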
If the intruder is approach- ing the ownship from above, the system alerts the ownship to Advisory Description Table 1: VerticalCAS advisories Vertical Range (Min, Max) [ft/min] COC DNC DND DES1500 CL1500 SDES1500 SCL1500 SDES2500 SCL2500 Clear of Conflict Do Not Climb Do Not Descend Descend at least 1500 ft/min Climb at least 1500 ft/min Strengthen Descend to at least 1500 ft/min Strengthen Climb to at least 1500 ft/min Strengthen Descend to at least 2500 ft/min Strengthen Climb to at least 2500 ft/min (−∞, +∞) (−∞, 0] [0, +∞) (−∞, −1500] [+1500, +∞) (−∞, −1500] [+1500, +∞) (−∞, −2500] [+2500, +∞) Strength alo g/4 g/4 g/4 g/4 g/4 g/3 g/3 g/3 g/3 Sign w N/A −1 +1 −1 +1 −1 +1 −1 +1 Advisory vlo [ft/min] N/A 0 0 −1500 +1500 −1500 +1500 −2500 +2500 DNC DES1500 CL1500 COC 10 20 30 40 stop climbing (DNC) or descend (DES1500) in order to pre- vent an NMAC. If the intruder is a little below the ownship, the system advises the pilot to continue climbing (CL1500). In other locations, a collision is not imminent and the sys- tem alerts clear-of-conflict (COC). The neural network rep- resentation is a smooth approximation of the original table policy. Although the network appears to represent the table well, verification is needed to ensure that the neural network alerts safely at all times. Safe Regions The safe region is defined as the region in space where an intruder aircraft will be safe (i.e. will not enter the ownship puck), given the ownship aircraft is following a single advi- sory (shown in Fig. 3). The safe region is described by the ownship travelling along a nominal trajectory. This nominal trajectory is described by the ownship following an advisory exactly, i.e., if the advisory issued allows a range of veloc- ities [1500, +∞), the nominal trajectory will be defined by the ownship assuming a velocity of 1500 ft/min. From our earlier definition of τ = r−rp , τ = 0 of the nominal trajec- rv tory is at r = rp. The nominal trajectory of the ownship is simply a parabolic trajectory due to constant vertical acceleration. The trajectory can be written as follows (Jeannin et al. 2017): hn = (cid:40) alo 2 τ 2 + vOτ, vloτ − (vlo−vO)2 2alo if 0 ≤ τ < vlo−vO if vlo−vO alo ≤ τ alo , (3) 1,000 500 0 −500 −1,000 0 1,000 500 0 −500 ) t f ( h ) t f ( h −1,000 0 10 20 τ (sec) 30 40 Figure 2: Example policy plots for a climbing ownship and level-flying intruder using the MDP table (top) and neural network (bottom) where hn is the altitude of the ownship in a coordinate frame centered in the starting position of the ownship Fig. 1 (Note: the subscript n denotes the nominal trajectory). The piece- wise Eq. (3) describes the dynamics when the ownship ve- locity is less than vlo and when the ownship velocity is greater than vlo. Once the ownship climb rate reaches vlo, the aircraft is compliant with the advisory and continues climbing with no vertical acceleration. For the example of CL1500, the safe region will be the region below the ‘puck’ of the ownship flying along the CL1500 nominal trajectory. If an intruder is in this region below the ownship, it will be safe from collision for an own- ship following the CL1500 advisory. Therefore, this region is defined as the safe region for a particular advisory. 
Figure 3: Intruder in the safe region for ownship advisory CL1500 and vO < 0.

Figure 4: Intruder in the worst-case safe region for the ownship advisory CL1500 and vO < 0.

Safe regions have to be representable in terms of network variables in order to define a search space in the state space of the network. Representing safety bounds solely in terms of the five network variables poses some challenges, which are discussed in the next section.

One limitation of using safe regions to verify safety properties is that safe regions assume the ownship follows a single advisory throughout the encounter. In reality, multiple advisories can be issued during an encounter, giving the system an opportunity to change the advisory. Therefore, an advisory that was initially unsafe can be made safe with a change later in the encounter. Safeable regions, described below, build on the safe region concept and tackle this shortcoming.

Worst-Case Scenario Approach
An NMAC is defined as an intruder aircraft coming inside the ownship puck, as depicted in Fig. 1. The network is trying to prevent NMACs, so the safe region is described by this puck around the ownship aircraft.

In Fig. 3, when the ownship is descending, the safe region bounds are described by the 'back' of the ownship puck (Jeannin et al. 2017), where the 'back' of the ownship puck can be represented as τback = τ − 2rp/rv. Here rp is a known constant, but rv is unknown because it is not an input to the network. Thus, a worst-case approximation must be made to define the safe region bounds described by the back of the ownship puck.

At τ = 0 the horizontal separation between the intruder and ownship is rp. After this point, horizontal separation between the ownship puck and the intruder will not be regained until t = 2rp/rv seconds in the future, when the intruder crosses the back of the ownship puck. In the worst case, rv → 0, and horizontal separation may never be regained. As a result, the intruder must be at an altitude that is safeable for all time t > τ.

The relative horizontal velocity of the two aircraft, rv, effectively dictates the width of the ownship puck in τ-space. The worst-case safe region bound should include all other unsafe regions, which is achieved as rv → 0, i.e., when the ownship puck is infinitely wide. The worst-case safe region bounds can be seen in Fig. 4. Using this approach, a worst-case safe region bound can be described where ΩUnsafe ⊆ ΩUnsafe(worst case), i.e., all possible unsafe regions are subsets of the worst-case-scenario unsafe region.

Safeable Regions
Safeable regions are defined as regions which are currently safe or that can be made safe in the future. A safeable region is constructed by assuming two worst-case trajectories of an aircraft complying with an advisory for time ε (VerticalCAS issues a new advisory every ε seconds). After ε, these two trajectories represent the two extreme positions of an ownship that complies with the initial advisory. From this point, the strongest reversing and strengthening advisories that VerticalCAS can issue are considered. If either of these advisories prevents a collision, then the intruder is in a safeable region. As a result, a collision with an intruder in the safeable region can always be avoided.
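Under the worst-case assumption rv → 0, checking whether an intruder altitude lies in the safe region reduces to comparing it against the puck around the nominal trajectory for all t ≥ τ. The following is a minimal sketch building on the nominal_altitude function above; the default parameter values (vlo = 25 ft/s, i.e., 1500 ft/min, and alo = g/4 ≈ 8.05 ft/s²), the finite horizon, and the time step are illustrative assumptions:

```python
import numpy as np

H_P = 100.0  # puck half-height (ft)


def in_worst_case_safe_region(h_int: float, tau: float, v_o: float,
                              v_lo: float = 25.0, a_lo: float = 8.05,
                              horizon: float = 40.0, dt: float = 0.1) -> bool:
    """Check whether a level-flying intruder at relative altitude h_int lies in
    the worst-case safe region for a climb advisory (w = +1).

    Because the worst case assumes r_v -> 0, horizontal separation is never
    regained, so the intruder must stay below the bottom of the puck,
    h_n(t) - h_p, for all t >= tau. A finite time horizon is used here as a
    practical truncation for this sketch.
    """
    times = np.arange(max(tau, 0.0), horizon, dt)
    lower_puck = np.array([nominal_altitude(t, v_o, v_lo, a_lo) for t in times]) - H_P
    return bool(np.all(h_int < lower_puck))
```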
For example, as seen in Figure 5, if the intruder is located as shown, the system can safely issue a CL1500 advisory, because a strong reversal at the next time step will ensure that the ownship descends before reaching the intruder. A more detailed explanation of safeable regions is provided in (Jeannin et al. 2017). If the system always gives safeable advisories whenever possible, then an intruder beginning in the safeable region will always be avoided.

Figure 5: Safeable region with strengthen and reversal alerts issued at τ = ε.

Figure 6: All unsafeable regions and the region that is unsafeable for all advisories.

As a result, ensuring safety when using the neural network system requires checking for any instances when the neural network gives an unsafeable advisory while a safeable advisory exists. To generate these regions, the region that is unsafeable for all advisories must be computed, which can be done by generating the intersection over all possible advisories, as illustrated in Fig. 6. The region to verify is shown in red in Fig. 7, because this region is unsafeable for CL1500 but would be safeable for another advisory, such as SCL2500 or SDES2500. The next section describes how these safeable regions are adapted for use with the Reluplex neural network verification tool.

Checking Safeable with Reluplex
Reluplex extends the simplex method to verify neural network properties by representing neural networks, activation functions, and constraints as piecewise linear equations. Linear bounds are placed on the input variables to define the search region, and the output variables are constrained such that the advisory of interest must be associated with the largest-valued output from the network. Reluplex systematically searches for an input that satisfies both the input and output constraints (Katz et al. 2017). The red unsafeable region in Fig. 7 is nonlinear and non-convex, so the region cannot be verified using Reluplex in its current form.

Figure 7: Safeable and unsafeable regions for the CL1500 advisory.

There are three adjustments made to the safeable regions to prepare them for Reluplex. First, the safeable regions are functions of the ownship's initial climb rate, which can vary from −100 ft/s to 100 ft/s. In order to avoid verifying every possible region generated by all floating-point values of ownship climb rate, the regions are generated assuming a small range of climb rates instead of a single climb rate. To generate the safeable boundaries, the upper and lower trajectories are generated assuming the worst-case initial climb rate. As a result, the unsafeable boundaries grow outwards, as seen in Fig. 8, which shows the safeable region boundaries for different ranges of climb rates.

Figure 8: Safeable regions for different initial climb rates for the ownship.

Next, the safeable regions are linearized so that the boundaries can be represented in Reluplex. The linearization over-approximates the unsafeable region by approximating the quadratic bounds as a piecewise linear function. The approximation uses either an inner approximation connecting points on the curve, or an outer approximation using line segments tangent to the curve. The type of approximation is chosen so as to over-approximate the unsafeable region. Figure 9 shows the linearization of the safeable region that over-approximates the unsafeable region.

Figure 9: Over-approximation of the search region.

Lastly, the region checked by Reluplex is split into small slices that are defined by a lower and an upper bound on τ, as well as a single linear lower bound on h and a single linear upper bound on h. Because the neural network uses τ = 6 for inputs where τ < 6, the τ bounds are adjusted to ensure the network is evaluated at τ = 6 for inputs where τ < 6. Each small slice is checked as a separate query with Reluplex. A satisfiable set of inputs found by Reluplex represents a counterexample, i.e., a set of network inputs that produces an unsafeable advisory when a safeable advisory exists. Because Reluplex is sound and complete, if Reluplex cannot find a counterexample for a query, then no counterexample exists.

Results
To verify the unsafeable regions in all of the neural networks, each of the nine neural networks associated with one of the previous advisories is evaluated for all allowed advisories. Using a ΔvO of 2 ft/s, there are 100 velocity ranges to verify. After slicing up each unsafeable region into small regions with linear bounds, a total of 42,032 separate queries were generated and evaluated with Reluplex, which required 11 hours when using 9 independent threads. Each query was run with ε = 1 second (in all the figures ε = 3 seconds, purely for illustration purposes). As a result, 3,957 counterexamples were discovered, about 9.4% of all queries. A table of when these counterexamples occurred is shown in Table 2, where N/A marks advisories that are not allowed given the previous advisory. Most counterexamples occur for the COC advisory, but many counterexamples exist for other advisories as well.

Visualizing the counterexamples in the form of a heat map allows for analysis of the network's performance. Fig. 10 plots all the counterexamples found by Reluplex for advisories issued after a clear-of-conflict advisory. No counterexamples are found in the white region in the middle of the plot, because this region is unsafeable for all advisories and is omitted from the search region, as illustrated in Fig. 7. The lighter points represent a higher probability density of counterexamples. The figure illustrates that counterexamples are most prevalent at around τ = 7 s. This information can be useful for tweaking networks to perform safely. Also, Fig. 10 shows rough vertical stripes, which is due to the preference of Reluplex to return SAT points that occur along the boundary of a region rather than somewhere in the middle of a region.

Figure 10: Heat map of counterexamples for aprev: Clear of Conflict.
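To make the linearization and slicing adjustments concrete, the following Python sketch builds piecewise-linear secant segments over a trajectory bound (using the nominal_altitude sketch from above) and packages each slice as a plain-Python query record. This is our own illustration: the SliceQuery structure and the example bound values are hypothetical, and the actual Reluplex input format differs.

```python
from dataclasses import dataclass


@dataclass
class SliceQuery:
    """One Reluplex-style query: a box in tau with linear altitude bounds."""
    tau_lo: float
    tau_hi: float
    h_line_lo: tuple  # (slope, intercept): lower bound h >= m * tau + b
    h_line_hi: tuple  # (slope, intercept): upper bound h <= m * tau + b
    advisory: str     # advisory whose network output must be maximal


def secant_segments(f, tau_lo, tau_hi, n):
    """Piecewise-linear approximation of f on [tau_lo, tau_hi] using n secants.

    For a convex f, secants lie above the curve; for a concave f, below.
    Choosing secants versus tangents per segment is what allows the
    unsafeable region to be over-approximated, as described in the text.
    """
    taus = [tau_lo + i * (tau_hi - tau_lo) / n for i in range(n + 1)]
    segments = []
    for t0, t1 in zip(taus[:-1], taus[1:]):
        m = (f(t1) - f(t0)) / (t1 - t0)
        b = f(t0) - m * t0
        segments.append((t0, t1, m, b))
    return segments


# Example: linearize the lower puck boundary of a CL1500 nominal trajectory
# into per-slice queries (bound values here are purely illustrative).
queries = [
    SliceQuery(t0, t1, (m, b - 100.0), (0.0, 1e4), "CL1500")
    for t0, t1, m, b in secant_segments(
        lambda t: nominal_altitude(t, v_o=-30.0, v_lo=25.0, a_lo=8.05),
        tau_lo=0.0, tau_hi=10.0, n=8)
]
```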
In addition to the 42, 032 separate queries run that are summarized in Table 2, we ran 143, 048 queries on 25 in- Table 2: Number of counterexamples discovered with Reluplex Previous Advisory COC DNC DND DES1500 CL1500 Current Advisory SDES1500 SCL1500 SDES2500 SCL2500 COC DNC DND DES1500 CL1500 SDES1500 SCL1500 SDES2500 SCL2500 400 200 ) t f ( h 0 359 438 249 284 223 281 238 324 209 28 30 0 0 0 0 0 0 0 0 0 17 1 0 0 3 0 12 48 40 133 1 0 0 0 0 0 21 47 50 0 0 0 0 0 0 N/A N/A N/A 65 117 26 53 12 52 N/A N/A N/A 76 21 6 66 1 15 N/A N/A N/A N/A N/A 32 43 89 48 N/A N/A N/A N/A N/A 65 51 25 58 CL1500 Unsafeable Region DES1500 Unsafeable Region COC Unsafeable Region DES1500 COC CL1500 ) t f ( h 100 0 −100 DES1500 CL1500 0 2 4 τ (sec) 6 0 1 2 4 3 τ (sec) 5 6 Figure 11: Unsafeable region for COC containing a coun- terexample Figure 12: Unsafeable regions for CL1500 and DES1500 with DES1500 counterexample dependent threads to study the effect of the linearization ap- proximation on the number of counterexamples generated. All advisories were checked for aprev = COC, linearized with line segment lengths of τ = 0.125, 0.25, 0.5, 1.0, and 2.0 sec- onds for both under and over-approximation. All linear seg- ments were split into small regions of the same size so that the number of regions generated remained the same for all cases. Neither the method of linearization nor the level of discretization had any affect on the number of counterex- amples found. For each level of discretization 1, 476 coun- terexamples were found for both the under-approximation and over-approximation method. This is most likely due to the fact that counterexamples are usually found around lin- ear parts of the safeable bounds, so finer linearization had no impact. Some of these counterexamples are informative, and visu- alizing the policy at these points reveals problems that need to be addressed. For example, Fig. 11 shows the unsafeable region for COC, which extends into a large area of COC. Given that the unsafeable region appears at low τ and h, a collision is imminent, and COC is not safe to give. This in- formation can be used to refine the policy and network to discourage COC advisories in these situations. Many counterexamples are found at the boundary be- tween two alerting regions. As shown in Figure 12, the re- gions being checked for DES1500 and CL1500 meet at a point. In order to avoid any counterexamples, the boundary between DES1500 and CL1500 must pass exactly through the point that divides the two unsafe regions. However, be- cause the neural network is an approximation, the boundary is a little off, and a counterexample is discovered. In addi- tion, no other advisory is safeable around the meeting point, so there is no other advisory the network could give to avoid a counterexample. Requiring the network to change advi- sories at an exact point in order to prove safety is too strict, so more work is needed to relax this requirement while still guaranteeing safety. Conclusions and Future Work After generating collision avoidance networks, linear safe- able regions were defined for all possible advisories. The safeable regions define when an advisory can be made safe in the future, so that advisory is safe to give in the safe- able region. If the system always gives safeable advisories when possible, then safety is guaranteed assuming the in- truder begins in the safeable region. The safeable regions were checked with Reluplex, resulting in the discovery of thousands of counterexamples. 
The counterexamples can be used to refine the neural networks to improve safety.

A primary issue with proving safety using safeable regions is the hard safety requirement imposed on neural networks. The safeable property requires that the boundary between advisories given by the neural network pass through an exact point in the state space. In reality, no neural network will be able to satisfy such a hard requirement in all situations.

To overcome this challenge, we have been exploring an extension to safeable, which we call safeable2. A safeable2 region is defined as a region that is safeable by at least two advisories. Verifying safety with safeable2 removes the hard requirement of the neural network having to switch advisories at a single point, and instead allows a small region within which to switch advisories. In addition, safeable2 omits a small region of uncertain behavior (the region that is safeable by only a single advisory) around the unsafeable region, where a lot of counterexamples are found. It will be interesting to explore the implications of using safeable2 to verify safety and whether this method eliminates spurious counterexamples to safe operation. Furthermore, future work will model pilot delay to ensure safety can be guaranteed with realistic pilot compliance.

References
[Bojarski et al. 2016] Bojarski, M.; Del Testa, D.; Dworakowski, D.; Firner, B.; Flepp, B.; Goyal, P.; Jackel, L. D.; Monfort, M.; Muller, U.; Zhang, J.; et al. 2016. End to end learning for self-driving cars. arXiv preprint arXiv:1604.07316.
[Chollet 2015] Chollet, F. 2015. Keras: Deep learning library for Theano and TensorFlow.
[Dahl, Sainath, and Hinton 2013] Dahl, G. E.; Sainath, T. N.; and Hinton, G. E. 2013. Improving deep neural networks for LVCSR using rectified linear units and dropout. In International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 8609–8613. IEEE.
[Egorov et al. 2017] Egorov, M.; Sunberg, Z. N.; Balaban, E.; Wheeler, T. A.; Gupta, J. K.; and Kochenderfer, M. J. 2017. POMDPs.jl: A framework for sequential decision making under uncertainty. Journal of Machine Learning Research 18(26):1–5.
[Gehr et al. 2018] Gehr, T.; Mirman, M.; Drachsler-Cohen, D.; Tsankov, P.; Chaudhuri, S.; and Vechev, M. 2018. AI2: Safety and robustness certification of neural networks with abstract interpretation. In IEEE Symposium on Security and Privacy (SP).
[He et al. 2016] He, K.; Zhang, X.; Ren, S.; and Sun, J. 2016. Deep residual learning for image recognition. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), 770–778.
[Jeannin et al. 2017] Jeannin, J.-B.; Ghorbal, K.; Kouskoulas, Y.; Schmidt, A.; Gardner, R.; Mitsch, S.; and Platzer, A. 2017. A formally verified hybrid system for safe advisories in the next-generation airborne collision avoidance system. International Journal on Software Tools for Technology Transfer 19(6):717–741.
[Julian and Kochenderfer 2017] Julian, K. D., and Kochenderfer, M. J. 2017. Neural network guidance for UAVs. In AIAA Guidance, Navigation, and Control Conference, 1743.
[Julian et al. 2016] Julian, K. D.; Lopez, J.; Brush, J. S.; Owen, M. P.; and Kochenderfer, M. J. 2016. Policy compression for aircraft collision avoidance systems. In Digital Avionics Systems Conference (DASC), 1–10. IEEE.
[Katz et al. 2017] Katz, G.; Barrett, C.; Dill, D. L.; Julian, K.; and Kochenderfer, M. J. 2017. Reluplex: An efficient SMT solver for verifying deep neural networks.
In International Conference on Computer Aided Verification, 97–117. Springer.
[Kingma and Ba 2015] Kingma, D., and Ba, J. 2015. Adam: A method for stochastic optimization. In International Conference on Learning Representations.
[Kochenderfer 2015] Kochenderfer, M. J. 2015. Decision Making Under Uncertainty: Theory and Application. MIT Press.
[Lomuscio and Maganti 2017] Lomuscio, A., and Maganti, L. 2017. An approach to reachability analysis for feed-forward ReLU neural networks. arXiv preprint arXiv:1706.07351.
[Mnih et al. 2015] Mnih, V.; Kavukcuoglu, K.; Silver, D.; Rusu, A. A.; Veness, J.; Bellemare, M. G.; Graves, A.; Riedmiller, M.; Fidjeland, A. K.; Ostrovski, G.; et al. 2015. Human-level control through deep reinforcement learning. Nature 518(7540):529.
[Silver et al. 2016] Silver, D.; Huang, A.; Maddison, C. J.; Guez, A.; Sifre, L.; Van Den Driessche, G.; Schrittwieser, J.; Antonoglou, I.; Panneershelvam, V.; Lanctot, M.; et al. 2016. Mastering the game of Go with deep neural networks and tree search. Nature 529(7587):484.
[Simonyan and Zisserman 2015] Simonyan, K., and Zisserman, A. 2015. Very deep convolutional networks for large-scale image recognition. In International Conference on Learning Representations.
[Theano Development Team 2016] Theano Development Team. 2016. Theano: A Python framework for fast computation of mathematical expressions. arXiv preprint arXiv:1605.02688.
ai_researcher
7
The_ScholarNet_and_Artificial_Intelligence_(AI)_Supervisor_in_Material_Science_Research.pdf
‘ The NOMAD Artificial-Intelligence Toolkit: Turning materials-science data into knowledge and understanding Luigi Sbail`o1,2∗, ´Ad´am Fekete1, Luca M. Ghiringhelli1,2∗, and Matthias Scheffler2 1Physics Department and IRIS Adlershof of the Humboldt-Universit¨at zu Berlin, Germany 2The NOMAD Laboratory at the Fritz Haber Institute of the Max-Planck-Gesellschaft and IRIS Adlershof of the Humboldt-Universit¨at zu Berlin, Germany; ∗email: [email protected]; [email protected]; (Dated: November 10, 2022) We present the Novel-Materials-Discovery (NOMAD) Artificial-Intelligence (AI) Toolkit, a web- browser-based infrastructure for the interactive AI-based analysis of materials-science findable, ac- cessible, interoperable, and reusable (FAIR) data. The AI Toolkit readily operates on the FAIR data stored in the central server of the NOMAD Archive, the largest database of materials-science data worldwide, as well as locally stored, users’ owned data. The NOMAD Oasis, a local, stand alone server can be also used to run the AI Toolkit. By using Jupyter notebooks that run in a web-browser, the NOMAD data can be queried and accessed; data mining, machine learning, and other AI techniques can be then applied to analyse them. This infrastructure brings the concept of reproducibility in materials science to the next level, by allowing researchers to share not only the data contributing to their scientific publications, but also all the developed methods and ana- lytics tools. Besides reproducing published results, users of the NOMAD AI toolkit can modify the Jupyter notebooks towards their own research work. I. INTRODUCTION Data-centric science has been identified as the 4th paradigm of scientific research. We observe that the nov- elty introduced by this paradigm is two-fold. First, the creation of large, interconnected databases of scientific data, which are more and more expected to comply with the so-called FAIR principles [1] of scientific data man- i.e., data and related meta- agement and stewardship: data need to be findable, accessible, interoperable, and reusable (or repurposable, or recyclable). The second aspect is the massive use of artificial-intelligence (AI) al- gorithms, applied to scientific data, in order to find pat- terns and trends that would be hard if possible at all to identify by unassisted human observation and intuition. Materials science has taken up in the last few years both aspects. Databases, in particular from compu- tational materials science, have been created via high- throughput screening initiatives, mainly boosted by the US Materials-Genome Initiative, starting in the early 2010’s, e.g., AFLOW [2], the Materials Project [3], and OQMD [4]. At the end of 2014, the NOMAD (Novel Materials Discovery) Laboratory has launched the NO- MAD Repository & Archive [5–7], the first FAIR stor- age infrastructure for computational materials-science data. NOMAD’s servers and storage are hosted by the Max Planck Computing and Data Facility (MPCDF) in Garching (Germany). The NOMAD Repository stores, as of today, input and output files from more than 50 different atomistic (ab initio and molecular mechanics) codes. It totals more than 100 million total-energy cal- culations, uploaded by various materials scientists from their local storage or from other public databases. 
The NOMAD Archive stores the same information, but con- verted, normalized, and characterized by means of a metadata schema, the NOMAD Metainfo [8], which al- lows for the labeling of most of the data in a code- independent representation. The translation from the content of raw input and output files into the code- independent NOMAD Metainfo format makes the data ready for AI analysis. Besides the above mentioned databases, other platforms for the open-access storage and access of materials science data appeared in recent years, such as the Materials Data Facility [9; 10] and Materials Cloud [11]. Furthermore, many groups have been storing their materials science data on Zenodo ([12]), and provided the digital object identifier (DOI) to openly access them in publications. The peculiarity of the NOMAD Repository & Archive is in the fact that users upload the full input and out- put files from their calculations into the Repository and then such information is mapped onto the Archive, which (other) users can access via a unified API. Materials science has embraced also the second aspect of the 4th paradigm, i.e., AI-driven analysis. The appli- cations of AI to materials science span two main classes of methods. One is the modeling of potential-energy sur- faces (PES) by means of statistical models that promise to yield ab initio accuracy at a fraction of the evaluation time [13–18] (if the CPU time necessary to produce the training data set is not considered). The other class is the so-called materials informatics, i.e., the statistical model- ing of materials aimed at predicting their physical, often technologically relevant properties [19–24], by knowing limited input information about them, often just their stoichiometry. The latter aims at identifying the mini- mal set of descriptors (the materials’ genes) that correlate with properties of interest. This aspect, together with the 2 2 0 2 v o N 9 ] i c s - l r t m . t a m - d n o c [ 2 v 6 8 6 5 1 . 5 0 2 2 : v i X r a observation that only a very small amount of the almost infinite number of possible materials is known today, may lead to the identification of undiscovered materials that have properties (conductivity, plasticity, elasticity, etc.) superior to the known ones. The NOMAD CoE has recognized the importance of enabling the AI analysis of the stored FAIR data and has launched the NOMAD AI Toolkit. This web-based infrastructure allows users to run in a web-browser com- putational notebooks (i.e., interactive documents that freely mix code, results, graphics, and text, supported by a suitable virtual environment) for performing complex queries and AI-based exploratory analysis and predictive modeling on the data contained in the NOMAD Archive. In this respect, the AI Toolkit pushes to the next, neces- sary step the concept of FAIR data, by recognizing that the most promising purpose of the FAIR principles is en- abling AI analysis of the stored data. As a mnemonic, the next step in FAIR data starts by upgrading its meaning to: Findable and AI-Ready data [25]. The mission of the NOMAD AI Toolkit is three-fold: • Providing an API and libraries for accessing and analysing the NOMAD Archive data via state-of-the- art (and beyond) AI tools. • Providing a set of shallow-learning-curve tutorials from the hands-on introduction to the mastering of AI tech- niques. • Maintaining a community-driven growing collection of computational notebooks, each dedicated to an AI-based materials-science publication. 
By providing both the annotated data and the scripts for their analysis, students and scholars worldwide are enable to retrace all the steps that the original researchers followed to reach publication-level results. Further- more, the users can modify the existing notebooks and quickly checks alternative ideas. The data science community has introduced several plat- forms for performing AI-based analysis of scientific data, typically by providing rich libraries for machine-learning and artificial intelligence and often offering users online resources for running notebooks. General-purpose frame- works such as Binder [26] and Google Colab [27], as well as materials-science dedicated frameworks such as nanoHUB [28], pyIron [29], AiidaLab [30], and MatBench [31] are the most used by the community. In all these cases, a big effort is devoted to education via online and in-person tutorials. The main specificity of the NOMAD AI toolkit is in connecting within the same infrastructure the data, as stored in the NOMAD Archive, to their AI analysis. Moreover, as detailed below, users have in the same environment all available AI tools as well as access to the NOMAD data, without need to install anything. This paper is structured as follows. In section II, we describe the technology of the AI Toolkit. In sections III and IV, we describe two exemplary notebooks. One note- book is a tutorial introduction to the interactive querying and exploratory analysis of the NOMAD Archive data. 2 FIG. 1. Home page of the NOMAD Artificial-Intelligence Toolkit, showcasing its three purposes: Querying (and ana- lyzing) the content of the NOMAD Archive, providing tuto- rials for AI tools, and accessing the AI workflow of published work. The fourth access point, get to work, is for experienced users, who can create and manage their own workspace. The other notebook demonstrates the possibility to re- port publication-level materials science results [32], while enabling the users to put their hands on the workflow, by modifying the input parameters and observing the im- pact of their interventions. II. RESULTS Technology We provide a user-friendly infrastructure to apply the latest AI developments and the most popular machine- learning methods to materials-science data. The NO- MAD AI Toolkit aims to facilitate the deployment of sophisticated AI algorithms by means of an intuitive in- terface that is accessible from a webpage. In this way, AI-powered methodologies are transferred to materials science. In fact, the most recent advances in AI are usually available as software stored on web repositories. However, these need to be installed in a local environ- ment which requires specific bindings and environment variables. Such an installation can be a tedious process, which limits the diffusion of these computational meth- ods, and also brings in the problem of reproducibility of published results. The NOMAD AI Toolkit offers a solution to this, by providing the software, that we in- stall and maintain, in an environment that is accessible directly from the web. Docker[33] allows to install software in a container that is isolated from the host machine where it is running. In the NOMAD AI Toolkit we maintain such a container, installing therein software that has been used to produce recently published results and taking care of the version- ing of all required packages. Jupyter notebooks are then used inside the container to interact with the underlying computational engine. 
Interactions include the execution of code, displaying the results of computations, and writ- ing comments or explanations by using markup language. We opted for Jupyter notebooks because such interac- tivity is ideal for combining computation and analysis of the results in a single framework. The kernel of the notebooks, i.e. the computational engine that runs the code, is set to read Python. Python has built-in support for scientific computing as the SciPy ecosystem and it is highly extensible, because it allows to wrap codes written in compiled languages such as C or C++. This techno- logical infrastructure is built using JupyterHub[34] and deploys servers that are orchestrated by Kubernetes on computing facilities offered by the MPCDF in Garching, Germany. Users of the AI Toolkit can currently run their analyses on up to 8 CPU cores, with up to 10 GB RAM. A key feature of the NOMAD AI Toolkit is that we allow users to create, modify and store computa- tional notebooks where original AI workflows are de- veloped. From the ‘Get to work’ button accessible at https://nomad-lab.eu/aitoolkit, registered users are redirected to a personal space, where we provide 10 GB of cloud storage and where work can also be saved. Jupyter notebooks, which are created inside the ‘work’ directory in the users’ personal space, are stored on our servers and can be accessed and edited over time. These notebooks are placed in the NOMAD AI Toolkit environment, which means that all software and methods demonstrated in other tutorials can be deployed therein. The versatility of Jupyter notebooks in fact facilitates an interactive and instantaneous combination of different methods. This is useful if one aims to, e.g., combine different methods available in the NOMAD AI Toolkit in an original man- ner, or to deploy a specific algorithm to a dataset that is retrieved from the NOMAD Archive. The original note- book, which is developed in the ‘work’ directory, might then lead to a publication and the notebook be added to the ‘Published results’ section of the AI Toolkit. Contributing The NOMAD AI Toolkit aims to promote reproducibil- ity of published results. Researchers working in the field of AI applied to materials science are invited to share their software and install it in the NOMAD AI Toolkit. The shared software can be used in citeable Jupyter note- books, which are accessible online, to reproduce results that have been recently published in scientific journals. Sharing software and methods in a user-friendly infras- tructure such as the NOMAD AI Toolkit can also pro- mote the visibility of research and boost interdisciplinary collaborations. All Jupyter notebooks currently available in the NO- MAD AI Toolkit are located in the same Docker con- tainer, thus allowing transferability of methods and pipelines between different notebooks. This also implies that software employed is constrained to be installed us- ing the same package versions for each notebook. How- ever, to facilitate a faster and more robust integration of external contributions to the NOMAD AI Toolkit, we allow the creation of separated Docker containers which can have their own versioning. Having a separate Docker container for a notebook allows to minimize maintenance 3 of the notebook, and it avoids further updates when e.g. package versions are updated in the main Docker con- tainer. Contributing to the NOMAD AI Toolkit is straightfor- ward, and consists of the following steps: • Data must be uploaded to the NOMAD Archive and Repository. 
Contributing to the NOMAD AI Toolkit is straightforward and consists of the following steps:

• Data must be uploaded to the NOMAD Archive and Repository, either to the public server (https://nomad-lab.eu/prod/rae/gui/uploads) or to the local, self-contained variant (see Sec. II).
• Software needs to be installed in the base image of the NOMAD AI Toolkit.
• The whole workflow of a (published) project, from importing the data to generating the results, has to be placed in a Jupyter notebook. The package(s) and notebook are then uploaded to GitLab in a public repository (https://gitlab.mpcdf.mpg.de/nomad-lab/analytics), where the back-end code is stored.
• A DOI is generated for the notebook, which is versioned in GitLab. In the spirit of, e.g., Cornell University's arXiv.org, the latest version of the notebook is linked to the DOI, but all previous versions are maintained.

Researchers interested in contributing to the NOMAD AI Toolkit are invited to contact us for further details.

Data-management policy

For maintenance reasons, NOMAD keeps anonymous access logs for API calls for a limited amount of time. However, those logs are not associated with NOMAD users; in fact, users do not need to provide authentication to use the NOMAD APIs. We also note that the query commands used for extracting the data that are analyzed in a given notebook are part of the notebook itself, and hence stored. This guarantees the reproducibility of the AI analysis, as the same query commands will always yield the same outcome, e.g., the same data points for the AI analysis. Publicly shared notebooks on the AI-Toolkit platform are required to adopt the Apache License, Version 2.0. Finally, we note that the overall NOMAD infrastructure, including the AI Toolkit, will be maintained for at least 10 years after the last data upload.

AI Toolkit App

In addition to the web-based toolkit, we also maintain an App that allows one to deploy the NOMAD AI Toolkit environment [35] on a local machine. This App employs the same graphical user interface (GUI) as the online version; in particular, the user accesses it via a normal web browser. However, the browser does not need to have access to the web and can therefore run behind firewalls. Software and methods installed in the NOMAD AI Toolkit then use the user's own computational resources. This can be useful when calculations are particularly demanding, and also when AI methods are applied to private data that should not be exposed to the web. Through the local App, both the data on the NOMAD server and locally stored data can be accessed. The latter access is supported by the NOMAD OASIS, the stand-alone version of the NOMAD infrastructure [36].

Querying the NOMAD Archive and performing AI modeling on retrieved data

The NOMAD AI Toolkit features the tutorial notebook 'Querying the archive and performing Artificial Intelligence modeling' [37] (also accessible from the 'Query the archive' button at https://nomad-lab.eu/aitoolkit), which demonstrates all steps required to perform an AI analysis on data stored in the NOMAD Archive. These steps are the following: (i) querying the data by using the RESTful API (see below) that is built on the NOMAD Metainfo; (ii) loading the needed AI packages, including the library of features that are used to fingerprint the data points (materials) in the AI analysis; (iii) performing the AI training and visualizing the results. The NOMAD Laboratory has developed the NOMAD Python package, which includes a client module to query the Archive using the NOMAD API. All functionalities of the NOMAD Repository and Archive are offered through a RESTful API, i.e. an API that uses HTTP methods to access data. In other words, each item in the Archive (typically a JSON data file) is reachable via a URL accessible from any web browser.
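As a minimal illustration of this REST principle, an Archive entry can be fetched as JSON with the standard requests library. The base URL and entry identifier below are assumptions made for illustration; the actual endpoint paths should be taken from the current NOMAD API documentation.

```python
import requests

BASE = "https://nomad-lab.eu/prod/rae/api"  # assumed base URL, for illustration only

# Fetch a single archive entry as JSON; 'SOME_ENTRY_ID' is a placeholder.
response = requests.get(f"{BASE}/archive/SOME_ENTRY_ID", timeout=30)
response.raise_for_status()
entry = response.json()  # a nested dict mirroring the NOMAD Metainfo sections

print(sorted(entry.keys()))  # inspect the top-level sections of the entry
```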
In the example notebook [37], we use the NOMAD Python client library to retrieve ternary compounds containing oxygen. We also request that the ab initio calculations were carried out with the VASP code, using exchange-correlation (xc) functionals from the generalized-gradient-approximation (GGA) family. In addition, to ensure that the calculations have converged, we also require that the energy difference during geometry optimization has converged. As of April 2022, this query retrieves almost 8,000 entries, which are the results of simulations carried out at different laboratories. We emphasize that in this notebook we show how data of heterogeneous origin can be used consistently for machine-learning analyses. Here, we target the atomic density, which is obtained from a geometrically converged DFT calculation.

The client module in the NOMAD Python package establishes a client-server connection in a so-called lazy manner, i.e. data are not fetched all at once, but with an iterative query. Entries are then retrieved iteratively, and each entry gives access to the data and metadata relative to the simulation results that have been uploaded. In this example, the queried materials are composed of three different elements, where one of the elements is required to be oxygen. From each entry of the query, we retrieve the converged value of the atomic density and the name and stoichiometric ratio of the other two chemical elements. During the query, we use the atomic-features library (see below) to add other atomic features to the dataframe that is built with the retrieved data. Before discussing the actual analysis performed in the notebook, let us briefly comment on the NOMAD Metainfo and the libraries of input (atomic) features.

The NOMAD Metainfo

The NOMAD API accesses the data in the NOMAD Archive, which are organized by means of the NOMAD Metainfo, presented in Refs. [8] and [38]. Here, we mention that it is a hierarchical and modular schema, where each piece of information contained in an input/output file of an atomistic simulation code has its own metadata entry. The metadata are organized in sections (akin to tables in a relational database), such as System, containing information on the geometry and composition of the simulated system, and Method, containing information on the physical model (e.g., type of xc functional, type of relativistic treatment, and basis set). Crucially, each item in any section (a column in the relational-database analogy, where each data object is a row) has a unique name. Such a name (e.g., 'atoms', which is a list of the atomic symbols of all chemical species present in a simulation cell) is associated with values that can be searched via the API. In practice, one can search all compounds containing oxygen by specifying query={'atoms': ['O']} as argument of the query_archive() function, which is the backbone of the NOMAD API.
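A hedged sketch of such a lazy, iterative query is given below. The text above only fixes the function name query_archive() and the query dictionary; the import path and the iteration protocol shown here are assumptions for illustration and should be checked against the NOMAD Python package documentation.

```python
# Assumed import path; only the function name is specified in the text.
from nomad.client import query_archive

query = {"atoms": ["O"]}  # all compounds containing oxygen, as in the text

for entry in query_archive(query=query):
    # Entries are fetched lazily, batch by batch. Each entry exposes the data
    # and metadata of one uploaded simulation; the available attributes are
    # defined by the NOMAD Metainfo (System, Method, ...).
    print(entry)
    break  # in this sketch, inspect a single entry and stop
```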
Libraries of input features

Together with the materials data, the other important piece of information for an AI analysis is the representation of each data point. A possible choice, useful for exploratory analysis but also for the training of predictive models, is to represent the atoms in the simulation cell by means of their periodic-table properties (also called atomic features), e.g., atomic number, row and column in the periodic table, ionic or covalent radii, and electronegativity. In order to facilitate access to these features, we maintain the atomic-collections library, containing features for all atoms in the periodic table (up to Z = 100), calculated via DFT with a selection of xc functionals. Furthermore, we have also installed the matminer package [39], a recently introduced rich library of atomic properties from calculations and experiments. In this way, all atomic properties defined in the various sources are available within the toolkit environment.

Example of exploratory analysis: Clustering

We now proceed with the discussion of the showcase notebook, which performs an unsupervised-learning analysis called clustering. The evolutionary human ability to recognize patterns in empirical data has led to the most disparate scientific findings, from, e.g., Kepler's laws to the Lorenz attractor. However, finding patterns in highly multidimensional data requires automated tools. Here, we would like to understand whether the data retrieved from the NOMAD Archive can be grouped into clusters of data that share a similar representation, where data points within the same cluster are similar to each other while being different from data points belonging to other clusters. The notion of similarity in the discussed unsupervised-learning task is strictly related to the representation of the data, here a set of atomic properties of the constituent material.

A plethora of different clustering algorithms has been developed in recent years, each with different ideal applications (see, e.g., our tutorial notebook introducing the most popular clustering algorithms [40]). Among the various algorithms currently available, we chose a recent one, briefly outlined below, that stands out for its simplicity, the quality of its results, and its robustness.

The clustering algorithm employed in this notebook is hierarchical density-based spatial clustering of applications with noise (HDBSCAN) [41], a recent extension of the popular DBSCAN algorithm [42]. As a density-based algorithm, HDBSCAN relies on the idea that clusters are islands of high-density points separated by a sea of low-density points. The data points in the low-density regions are labeled as 'outliers' and are not associated with any cluster. Outlier identification is at the core of the HDBSCAN algorithm, which uses the mutual reachability distance, i.e. a specific distance metric that distorts the space so as to "push" outliers away from the high-density regions. Cluster definition is to some extent subtle, as many different partitions are acceptable. One of the main challenges is represented by nested clusters, where it is not always trivial to decide whether a relatively large cluster should be decomposed into several subclusters, or whether a unique supercluster should be kept instead. The HDBSCAN algorithm performs a hierarchical exploration that evaluates possible subdivisions of the data into clusters. Initially, for high values of the distance threshold, there is only one large cluster that includes all points. As the threshold is lowered, the cluster can eventually split into smaller subclusters. The algorithm automatically decides whether to split a supercluster, and this decision is based on how robust, with respect to further divisions, the new subclusters would be. If, for example, after a cluster division many other splittings would shortly follow while lowering the threshold distance, then the larger supercluster is kept; if, otherwise, the subclusters do not immediately face further subdivisions, they are selected instead of the large supercluster.
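A minimal usage sketch with the hdbscan package [41] follows; the feature matrix and the min_cluster_size value are placeholders, and in the notebook the rows would be the materials retrieved from the NOMAD Archive.

```python
import numpy as np
import hdbscan  # reference implementation of HDBSCAN [41]

# X: one row per material, one column per atomic feature (placeholder data).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))

clusterer = hdbscan.HDBSCAN(min_cluster_size=10)  # smallest cluster we accept
labels = clusterer.fit_predict(X)  # label -1 marks points classified as outliers

print("clusters found:", labels.max() + 1)
print("outliers:", int(np.sum(labels == -1)))
```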
Dimension reduction: the Visualizer

The NOMAD AI Toolkit also comes with a Visualizer, a package which allows a straightforward analysis of tabulated data that contain materials structures, and which is optimized for data retrieved from the NOMAD Archive. The Visualizer is built using the Plotly package [43], which allows the creation of an interactive map, whose usability is improved using ipywidgets. The map shows, with distinct colors, the different clusters of materials, which were embedded into a two-dimensional plane using the dimension-reduction algorithm t-SNE [44]. We would like to remark that the axes of this embedding do not have a meaning and cannot be expressed as a global function of the features spanning the original space. This embedding algorithm, like many nonlinear embedding algorithms, finds a low-dimensional representation where pairwise distances between data points are preserved, which makes it possible to visualize clusters of points in a two-dimensional plot.

FIG. 2. Snapshot of the Visualizer in the 'Querying the Archive and performing Artificial Intelligence modeling' notebook. The visualization of a two-dimensional map allows one to identify subsets (in AI nomenclature: clusters) of materials with similar properties. Two windows at the bottom of the map allow one to view the structures of the compounds in the map. Clicking a point shows the structure of the selected material. Ticking the box on top of the windows selects which one of the two windows is used for the next visualization. The two windows have different types of symbols (here, crosses) to mark the position on the map. It is also possible to display a specific material chosen from the Compound text box to show its structure and its position on the map, which is then labelled with a cross. In this figure, two compounds are visualized, and it is possible to spot the positions of the materials on the map.

Clicking on any of the points in the map displays the atomic structure of the material in one of the windows at the bottom of the map. The position of the compound that is displayed is marked with a cross on the map. There are two different display windows to facilitate the comparison of different structures, and the window for the next visualization is selected with a tick box on top of the Visualizer. By clicking 'Display', the structure of the material and its position on the map are shown. We also provide some plotting utilities to generate high-quality plots. Controls for fine-tuning the printing quality and appearance are displayed by clicking the 'For a high-quality print ...' button.

FIG. 3. An example of a high-quality plot that can be produced using the Visualizer. The 'Toggle on/off plot appearance utils' button displays a number of controls that can be used to modify and generate the plots. It is possible to change the resolution, the file format, the color palette for the markers, the text format and size, and the markers' size.
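The mechanics of such a map can be sketched in a few lines with scikit-learn's t-SNE and Plotly. This is our own minimal illustration, not the Visualizer's actual code; the feature matrix and cluster labels are placeholders.

```python
import numpy as np
import plotly.express as px
from sklearn.manifold import TSNE

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 8))          # placeholder feature matrix
labels = rng.integers(0, 3, size=200)  # placeholder cluster labels

# Nonlinear 2-D embedding; the resulting axes carry no physical meaning.
XY = TSNE(n_components=2, init="pca", random_state=0).fit_transform(X)

fig = px.scatter(x=XY[:, 0], y=XY[:, 1], color=labels.astype(str),
                 labels={"x": "D1", "y": "D2", "color": "cluster"})
fig.show()  # interactive map; hovering and clicking are handled by Plotly
```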
Discovering of new topological insulators: ap- plication of SISSO to alloyed tetradymites As a second, complementary example, we discuss a notebook that addresses an analysis of topological semiconductors[32]. The employed AI method is SISSO (sure-independent screening combined with sparsifying operator [23]), which combines symbolic regression with compressed sensing. In practice, for a given target prop- erty of a class of materials, SISSO identifies a low- dimensional descriptor, out of a huge number of candi- dates (billions, or more). The candidate descriptors, the materials genes, are constructed as algebraic expressions, by combining mathematical operators (e.g., sums, prod- ucts, exponentials, powers) with basic physical quanti- ties, called primary features. These features are prop- erties of the materials, or their constituents (e.g., the atomic species in the material’s composition), that are (much) easier to evaluate (or measure) than the target properties that are modeled by using the SISSO-selected features as input and with the mathematical relationship identified as well by SISSO. In Ref. [32], the materials’ property of interest was the classification between topo- logical vs trivial insulators. The addressed class of materials was the tetradymites family, i.e., materials with the general chemical formula AB − LM N , where the cations A, B ∈ {As, Sb, Bi} and the anions L, M, N ∈ {S, Se, Te}, and a trigonal (R3m) symmetry. Some of these materials are known to be topo- logical insulators and the data-driven task was to predict the classification into topological vs trivial insulators of all possible such materials, just by knowing their for- mula, by using as training data a set of 152 tetradymites for which the topological invariant Z2 is calculated via DFT for the optimized geometries. In the notebook ‘Discovery of new topological insula- tors in alloyed tetradymites’ [45] , we invite the user to interactively reproduce the results of Ref. 32, namely the materials property map as shown in Fig. 5. The map is FIG. 4. Graphical input interface for the SISSO training of tetradymite-materials classification, taken from the ‘Discov- ery of new topological insulators in alloyed tetradymites’ note- book. FIG. 5. Interactive map of tetradymite materials, as produced with the AI-Toolkit visualizer. The topological (trivial) insu- lator training points are marked in red (blue). All materials falling in the convex hulls delimited by the dashed line en- veloping the red (blue) points are predicted to be topological (trivial) insulators. The axes, D1 and D2 are the components of the descriptor identified by SISSO, in terms of analytical function of the selected input parameters (see Ref. [32] and the notebook [45] for more details). obtained within the notebook, after selecting as input settings the same primary features and other SISSO pa- rameters as used for the publication. In Fig. 4, we show a snapshot of the input widget, where users can select features, operators, and SISSO parameters according to their preference and test alternative results. When click- ing ‘Run’, the SISSO code is running within the container created for the user at the NOMAD server. In the note- book, the map as shown in Fig. 5 is managed by the same Visualizer as described in Section III for the query-and- analyse notebook. This means that by mouse hovering the chemical formula of the compound represented by the marker is shown in a tooltip. 
In summary, with the notebook 'Discovery of new topological insulators in alloyed tetradymites' we provide an interactive, complementary support to Ref. [32]: the user can reproduce the results of the paper starting from the same input, using the same code, and going as far as re-obtaining exactly the same main result plot (except for the different graphical style). Beyond what can be found in the paper, the user can change the input settings of the SISSO learning, explore the results by changing the visualization settings, and browse the structures of the single data points. The user can also use the notebook as a template and start from other data, retrieved from the NOMAD Archive, to perform an analysis with the same method.

III. DISCUSSION

We presented the NOMAD AI Toolkit, a web-browser-based platform for performing AI analysis of materials-science data, both online, on NOMAD servers, and locally on one's own computational resources, even behind firewalls. The purpose of the AI Toolkit is to provide the tools for exploiting the Findable and AI-Ready (FAIR) materials-science data that are contained in the NOMAD Repository and Archive, as well as in several other databases in the field. The platform provides integrated access, via Jupyter notebooks, to state-of-the-art AI methods and concepts. Hands-on tutorials with a shallow learning curve are provided, in the form of interactive Jupyter notebooks, for all the available tools. A particular focus is on the reproducibility of AI-based workflows associated with high-profile publications: the AI Toolkit offers a selection of notebooks demonstrating such workflows, so that users can understand step by step what was done in publications and readily modify and adapt the workflows to their own needs. We hope this example can be an inspiration to augment future publications with similar hands-on notebooks. This will allow for enhanced reproducibility of data-driven materials-science papers and dampen the learning curve for newcomers to the field. The community is invited to contribute more notebooks in order to share cutting-edge knowledge in an efficient and scientifically robust way.

IV. DATA AVAILABILITY

Data used in this study are openly accessible on the NOMAD Artificial-Intelligence Toolkit at https://nomad-lab.eu/aitoolkit.

V. CODE AVAILABILITY

Codes used in this study are openly accessible on the NOMAD Artificial-Intelligence Toolkit at https://nomad-lab.eu/aitoolkit; see in particular Refs. [37] and [45] for the codes (notebooks) of the specific examples discussed in this paper.

ACKNOWLEDGEMENTS

We would like to acknowledge Fawzi Mohammed, Angelo Ziletti, Markus Scheidgen, and Lauri Himanen for inspiring discussions.
This work received funding from the European Union's Horizon 2020 research and innovation program under grant agreement No. 951786 (NOMAD CoE), the ERC Advanced Grant TEC1P (No. 740233), and the German Research Foundation (DFG) through the NFDI consortium 'FAIRmat', project 460197019.

1 Wilkinson, M. et al. The FAIR guiding principles for scientific data management and stewardship. Sci. Data 3, 1–9 (2016).
2 Curtarolo, S. et al. AFLOWLIB.ORG: A distributed materials properties repository from high-throughput ab initio calculations. Comput. Mater. Sci. 58, 227–235 (2012).
3 Jain, A. et al. Commentary: The Materials Project: A materials genome approach to accelerating materials innovation. APL Mater. 1, 011002 (2013).
4 Saal, J. E., Kirklin, S., Aykol, M., Meredig, B. & Wolverton, C. Materials design and discovery with high-throughput density functional theory: the open quantum materials database (OQMD). JOM 65, 1501–1509 (2013).
5 Draxl, C. & Scheffler, M. NOMAD: The FAIR concept for big data-driven materials science. MRS Bull. 43, 676–682 (2018).
6 Draxl, C. & Scheffler, M. The NOMAD laboratory: from data sharing to artificial intelligence. J. Phys. Mater. 2, 036001 (2019).
7 Draxl, C. & Scheffler, M. Big Data-Driven Materials Science and Its FAIR Data Infrastructure, 49–73 (Springer, 2020).
8 Ghiringhelli, L. M. et al. Towards efficient data exchange and sharing for big-data driven materials science: metadata and data formats. NPJ Comput. Mater. 3, 1–9 (2017).
9 Blaiszik, B. et al. The Materials Data Facility: data services to advance materials science research. JOM 68, 2045–2052 (2016).
10 Blaiszik, B. et al. A data ecosystem to support machine learning in materials science. MRS Commun. 9, 1125–1133 (2019).
11 Talirz, L. et al. Materials Cloud, a platform for open computational science. Sci. Data 7, 1–12 (2020).
12 European Organization For Nuclear Research & OpenAIRE. Zenodo (2013). URL https://www.zenodo.org/.
13 Lorenz, S., Groß, A. & Scheffler, M. Representing high-dimensional potential-energy surfaces for reactions at surfaces by neural networks. Chem. Phys. Lett. 395, 210–215 (2004).
14 Behler, J. & Parrinello, M. Generalized neural-network representation of high-dimensional potential-energy surfaces. Phys. Rev. Lett. 98, 146401 (2007).
15 Bartók, A. P., Payne, M. C., Kondor, R. & Csányi, G. Gaussian approximation potentials: The accuracy of quantum mechanics, without the electrons. Phys. Rev. Lett. 104, 136403 (2010).
16 Bartók, A. P., Kondor, R. & Csányi, G. On representing chemical environments. Phys. Rev. B 87, 184115 (2013).
17 Schütt, K. T., Arbabzadah, F., Chmiela, S., Müller, K. R. & Tkatchenko, A. Quantum-chemical insights from deep tensor neural networks. Nat. Commun. 8, 1–8 (2017).
18 Xie, T. & Grossman, J. C. Crystal graph convolutional neural networks for an accurate and interpretable prediction of material properties. Phys. Rev. Lett. 120, 145301 (2018).
19 Rajan, K. Materials informatics. Mater. Today 8, 38–45 (2005).
20 Pilania, G., Wang, C., Jiang, X., Rajasekaran, S. & Ramprasad, R. Accelerating materials property predictions using machine learning. Sci. Rep. 3, 1–6 (2013).
21 Ghiringhelli, L. M., Vybiral, J., Levchenko, S. V., Draxl, C. & Scheffler, M. Big data of materials science: critical role of the descriptor. Phys. Rev. Lett. 114, 105503 (2015).
22 Isayev, O. et al. Materials cartography: representing and mining materials space using structural and electronic fingerprints. Chem. Mater. 27, 735–743 (2015).
23 Ouyang, R., Curtarolo, S., Ahmetcik, E., Scheffler, M. & Ghiringhelli, L. M. SISSO: A compressed-sensing method for identifying the best low-dimensional descriptor in an immensity of offered candidates. Phys. Rev. Mater. 2, 083802 (2018).
24 Jha, D. et al. ElemNet: Deep learning the chemistry of materials from only elemental composition. Sci. Rep. 8, 1–13 (2018).
25 Scheffler, M. et al. FAIR data enabling new horizons for materials research. Nature 604, 635–642 (2022).
26 Ragan-Kelley, B. et al. Binder 2.0: reproducible, interactive, sharable environments for science at scale. In Proceedings of the 17th Python in Science Conference, 113–120 (F. Akici, D. Lippa, D. Niederhut, and M. Pacer, eds., 2018).
27 Google Research. Google Colaboratory (2018). https://colab.research.google.com/.
28 Klimeck, G., McLennan, M., Brophy, S. P., Adams III, G. B. & Lundstrom, M. S. nanoHUB.org: Advancing education and research in nanotechnology. Comput. Sci. Eng. 10, 17–23 (2008). https://nanohub.org/.
29 Janssen, J. et al. pyiron: An integrated development environment for computational materials science. Comput. Mater. Sci. 163, 24–36 (2019).
30 Yakutovich, A. V. et al. AiiDAlab: an ecosystem for developing, executing, and sharing scientific workflows. Comput. Mater. Sci. 188, 110165 (2021).
31 Dunn, A., Wang, Q., Ganose, A., Dopp, D. & Jain, A. Benchmarking materials property prediction methods: the Matbench test set and Automatminer reference algorithm. NPJ Comput. Mater. 6, 1–10 (2020).
32 Cao, G. et al. Artificial intelligence for high-throughput discovery of topological insulators: The example of alloyed tetradymites. Phys. Rev. Mater. 4, 034204 (2020).
33 https://www.docker.com/.
34 https://jupyter.org/hub.
35 Sbailò, L., Ghiringhelli, L. M. & Scheffler, M. (2022). https://gitlab.mpcdf.mpg.de/nomad-lab/aitoolkit-app.
36 https://www.nomad-coe.eu/about-oasis.
37 Sbailò, L., Ghiringhelli, L. M. & Scheffler, M. AI-Toolkit notebook (2022). https://nomad-lab.eu/aitutorials/query_nomad_archive.
38 Ghiringhelli, L. M. et al. Shared metadata for data-centric materials science. Preprint at https://arxiv.org/abs/2205.14774 (2022).
39 Ward, L. et al. Matminer: An open source toolkit for materials data mining. Comput. Mater. Sci. 152, 60–69 (2018).
40 Sbailò, L. & Ghiringhelli, L. M. AI-Toolkit notebook (2021). https://nomad-lab.eu/aitutorials/clustering_tutorial.
41 McInnes, L., Healy, J. & Astels, S. hdbscan: Hierarchical density based clustering. J. Open Source Softw. 2 (2017).
42 Ester, M., Kriegel, H.-P., Sander, J. & Xu, X. A density-based algorithm for discovering clusters in large spatial databases with noise. In Proceedings of the Second International Conference on Knowledge Discovery and Data Mining, KDD'96, 226–231 (AAAI Press, 1996).
43 Plotly Technologies Inc. Collaborative data science. Montréal, QC (2015). https://plot.ly.
44 van der Maaten, L. & Hinton, G. Visualizing data using t-SNE. J. Mach. Learn. Res. 9, 2579–2605 (2008).
45 Sbailò, L. et al. AI-Toolkit notebook (2020). https://nomad-lab.eu/aitutorials/tetradymite_prm2020.
Learning a No-Reference Quality Assessment Model of Enhanced Images with Big Data

Ke Gu, Dacheng Tao, Junfei Qiao, and Weisi Lin

arXiv:1904.08632v1 [cs.CV] 18 Apr 2019

Abstract—In this paper we investigate the problem of image quality assessment (IQA) and enhancement via machine learning. This issue has long attracted a wide range of attention in the computational intelligence and image processing communities, since, for many practical applications, e.g. object detection and recognition, raw images usually need to be appropriately enhanced to raise the visual quality (e.g. visibility and contrast). In fact, proper enhancement can noticeably improve the quality of input images, even beyond that of the originally captured images, which are generally thought to be of the best quality. In this work, we present two main contributions. The first contribution is to develop a new no-reference (NR) IQA model. Given an image, our quality measure first extracts 17 features through analysis of contrast, sharpness, brightness and more, and then yields a measure of visual quality using a regression module, which is learned with big-data training samples that are much bigger than the size of relevant image datasets. Results of experiments on nine datasets validate the superiority and efficiency of our blind metric compared with typical state-of-the-art full-, reduced- and no-reference IQA methods. The second contribution is that a robust image enhancement framework is established based on quality optimization. For an input image, under the guidance of the proposed NR-IQA measure, we conduct histogram modification to successively rectify image brightness and contrast to a proper level. Thorough tests demonstrate that our framework can well enhance natural images, low-contrast images, low-light images and dehazed images. The source code will be released at https://sites.google.com/site/guke198701/publications.

Index Terms—Image quality assessment (IQA), no-reference (NR)/blind, enhancement, learning, big data

This work was supported in part by Australian Research Council Projects FT-130101457, DP-140102164, LP-150100671, Singapore MoE Tier 1 Project M4011379 and RG141/14. K. Gu and J.-F. Qiao are with the Beijing Key Laboratory of Computational Intelligence and Intelligent System, BJUT Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China (e-mail: [email protected]; [email protected]). D. Tao is with the School of Information Technologies and the Faculty of Engineering and Information Technologies, University of Sydney, J12/318 Cleveland St, Darlington NSW 2008, Australia (e-mail: [email protected]). W. Lin is with the School of Computer Science and Engineering, Nanyang Technological University, Singapore, 639798 (e-mail: [email protected]). © 20XX IEEE. Personal use of this material is permitted.
Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.

I. INTRODUCTION

PHOTOS captured via cameras/smart phones or created by computers always require post-processing towards better visualization and enhanced utility in various application scenarios, e.g. object detection and recognition. One of the main goals of such post-processing operations is to raise the image quality, such as visibility, contrast and brightness. Therefore, how to obtain a well-designed image quality assessment (IQA) metric for faithfully predicting the quality of enhanced images, which can even be used to optimize and improve enhancement methods, becomes a highly substantial and beneficial task.

Traditional IQA research is mainly devoted to gauging commonly seen artifacts, for example, Gaussian blur, noise, and JPEG/JPEG2000 compression. One type of IQA study is subjective assessment, focusing on building image quality databases, e.g. LIVE [1], MDID2013 [2] and VDID2014 [3]. Via a carefully prepared testing setting, the organizers invite a sufficient number of inexperienced observers to rank testing images in a randomized presentation order, and then yield the final mean opinion scores (MOSs) by averaging all the valid observers' scores after some necessary post-processing procedures such as outlier screening. The other type of IQA exploration concentrates on objective assessment. Typical objective IQA approaches are developed using mathematical models, neural networks [4] and learning systems [5] to approximate real human judgements of image quality. Subjective and objective assessments are both important, and they play complementary roles. The former provides benchmark results, with which a good objective metric is expected to correlate closely. Yet subjective assessment usually costs dearly and consumes much time, and thus cannot be used in real-time and in-service systems. Resorting to the powerful computational ability of computers, objective metrics can serve to evaluate image quality in practical application scenarios, such as enhancement [6] and tone-mapping [53], replacing human beings to some extent.

The last few years have witnessed an explosive growth of objective visual quality assessment. Based on the accessibility of the reference source images to be compared with during the experiments, objective IQA approaches can be classified into three categories, i.e. full-reference (FR) IQA [5, 7, 8, 9, 10, 11], reduced-reference (RR) IQA [12, 13, 14, 15, 16], and no-reference (NR)/blind IQA [17, 18, 19, 20, 21]. Using popular large-size image databases, e.g. LIVE, TID2008 [22], CSIQ [23] and TID2013 [24], most of the above-mentioned IQA models have been proved to achieve fairly high performance in accordance with subjective assessment. The majority of current blind IQA methods are based on two steps, namely feature extraction and an SVR-based regression module. In these NR-IQA algorithms, most effort is devoted to exploring valid features that simulate the perceptual characteristics of human eyes for estimating visual quality. With a considerable number of effective features developed, a growing body of researchers has turned to advanced neural networks and learning systems, e.g. the general regression neural network [4], multiple kernel learning [25], deep belief nets [26, 27] and pairwise learning-to-rank approaches [28], for the purpose of better approaching the ability of human eyes to group perceptual features and thereby more reliably inferring the overall quality score.

Fig. 1: Illustration of enhanced images: (a)-(b) natural image and its enhanced version [30]; (c)-(d) night image and its enhanced version [31]; (e)-(f) haze image and its dehazed image [32]; (g)-(h) natural image and its enhanced one by histogram equalization.

The majority of the IQA approaches described above are largely limited to commonly encountered artifacts. But with the development of compression, transmission and restoration technologies during the last few decades, the above-mentioned artifacts might not be the leading factor of image quality any more. In comparison, IQA of enhancement very possibly plays a more significant role, since enhancement technologies are able to generate better images, even outperforming the originally captured images, which are usually thought to have the optimal quality. Unfortunately, the aforesaid IQA methods fail on this problem, because most of them directly or indirectly suppose that original natural images, or images that conform to the statistical regularities observed in natural images [29], have the best quality, and hence they cannot correctly judge the quality of properly enhanced images [30]. Appropriate image enhancement technologies can raise the visual quality, as exemplified in Figs. 1(a)-(f), while improper technologies degrade the quality, as shown in Figs. 1(g)-(h). So accurately assessing the quality of enhanced images, and judging whether an enhancement is proper or not, have aroused much attention from researchers during recent years. Gu et al. first systematically studied this issue; they built up the CID2013 and CCID2014 databases dedicated to image contrast change, and meanwhile proposed RR-IQA techniques based on phase congruency and information statistics of the image histogram [30, 33]. Another RR-IQA algorithm was devised by taking account of the fact that properly enhanced images should simultaneously exhibit entropy increment and saliency preservation [34]. Very lately, Wang et al. put forward a FR quality metric by adaptively representing the structure of each local patch. To be specific, this approach decomposes each image patch into three components, mean intensity, signal strength and signal structure, followed by separately measuring their perceptual distortions, which are then merged into one score [35].

As for most enhanced images, we are unable to obtain the associated original references. The aforesaid FR- and RR-IQA measures are unable to work in this situation, and therefore blind/NR algorithms are eagerly required. Not long ago, Fang et al. proposed a dedicated blind quality metric based on the natural scene statistics (NSS) regulation, which involves the mean, standard deviation, skewness, kurtosis and entropy [36]. One major limitation of this blind metric is that natural images are considered to be of the highest quality. Also, this metric overlooks significant influencing factors, e.g. colorfulness and local sharpness. In [37], Chen et al. used a concatenation of the GIST descriptor [38] and color motion [39] as 521-dimensional features before conducting a regression module to derive the final quality measure. Despite promising performance, using such high-dimensional features easily introduces overfitting, and there is a lack of definite connections and analyses between the used features and IQA of enhancement.

In this paper we propose a novel two-step framework for a blind image quality measure of enhanced images (BIQME). Contrast is defined to be the difference in luminance or color that makes an object (or its representation in an image or display) distinguishable [40]. Compared with luminance contrast, which reflects the variations in luminance, color contrast also includes the variations in saturation and hue. Based on this concern, in the first step, we comprehensively consider five influencing factors, which consist of the contrast, sharpness, brightness, colorfulness and naturalness of images, and extract a total of 17 features. A high-quality image should have comparatively large contrast and sharpness, so that more details are highlighted. For these two types of features, we use a modified entropy, contrast energy and the log-energy of wavelet subbands. Besides, proper brightness and colorfulness usually give the whole image a broader dynamic range, which is beneficial for displaying details as well. The last concern is the naturalness, which a good-looking image is expected to possess. This work uses the classical NSS model [29] and the recently released dark channel prior (DCP) [32] to estimate the naturalness of images. In the second step, we focus our attention on learning the regression module from the features extracted above. Differing from current works which just use a small number of training data [17, 18, 25, 26, 28, 36], we have gathered beyond 100,000 enhanced images (much larger than the size of related image databases) as big-data training samples, with their corresponding objective quality scores, derived by a newly designed high-accuracy FR-IQA model, serving as training labels to learn the regression module of the proposed NR quality metric. There is no overlap between the 100,000 training images and the testing images in enhancement-related quality databases. Comparative tests confirm the superior performance and low computational cost of our measure relative to state-of-the-art FR-, RR- and NR-IQA methods. In view of this efficacy and efficiency, our IQA model serves as an optimization criterion to guide a histogram modification technology for enhancing images. The proposed enhancement method is shown to raise the visual quality of natural images, low-contrast images, low-light images and dehazed images.

TABLE I: Summary of extracted features for IQA of enhancement.

In comparison to previous works, five contributions of this paper are summarized below: 1) to the best of our knowledge, this work is the first opinion-unaware¹ blind IQA metric for image enhancement; 2) we establish a novel IQA framework based on five influencing factors concerning enhancement; 3) a huge number of 100,000 training data are employed to build our BIQME metric, compared with only hundreds of training samples used in current NR-IQA models; 4) our blind metric performs better than most recently developed FR-, RR- and NR-IQA techniques on relevant image databases; 5) a new robust image enhancement technology is explored based on BIQME optimization.

¹ Generally, opinion-aware metrics need training images labeled by subjective quality scores, while opinion-unaware methods require neither human scoring procedures nor such human-labeled training images. Opinion-unaware metrics usually have more potential for good generalization ability.

The remainder of this paper is organized as follows: In Section II, we propose the blind BIQME method as well as a modified FR IQA model. In Section III, thorough experiments verify the superiority and efficiency of our BIQME metric in contrast to modern FR-, RR- and NR-IQA measures. Section IV presents the quality-optimized robust image enhancement approach. Section V concludes the whole paper.

II. NO-REFERENCE QUALITY METRIC

The design philosophy of our blind BIQME metric lies in five influencing factors, namely the contrast, sharpness, brightness, colorfulness and naturalness of images; the corresponding 17 features in total are extracted accordingly. Afterwards, a regression module, which is learned via a huge number of training data, is used to fuse the aforementioned 17 features for inferring the ultimate quality score.

A. Feature Extraction

Contrast is the leading factor that decides the effect of image enhancement. Information entropy is a classical and frequently used measurement of image contrast. Entropy is a global measurement, which characterizes the average amount of information contained in an image. In general, a greater entropy means that an image is of larger contrast and thereby of better visual quality. We take the two images shown in Figs. 1(c)-(d) as an example. It is quite obvious that image (c) with entropy 6.9 is visually worse than image (d) with entropy 7.6. Due to its limited processing capacity, the human brain is inclined to prioritize attention to the regions that store more perceptual information. The phase congruency (PC) principle unveils that, as opposed to the Fourier amplitude, the Fourier phase contains a higher amount of perceptual information [41]. Subsequently, it has been further demonstrated that mammals extract features preferentially at the areas where the Fourier components are maximal in phase [42]. Hence we deploy a simple but biologically plausible PC model to detect and identify features in an image [4, 43] and thus compute a PC-based entropy.

More specifically, similar to [30], we denote by M^e_n and M^o_n the even- and odd-symmetric filters on scale n. These two filters are constructed based on the log-Gabor function, because of its ability to maintain the DC component and encode natural images [8]. In this work, we deploy the 2-D log-Gabor function defined by

G(ω, o_k) = exp[−(log(ω/ω₀))² / (2σ_r²)] · exp[−(o − o_k)² / (2σ_o²)]

where o_k = kπ/K, ω is the center frequency of the filters, σ_r controls the bandwidth of the filters, k = {0, 1, ..., K−1} indexes the filter's orientation angle, K is the number of orientations, and σ_o decides the angular bandwidth of the filters. By adjusting ω and o_k, we accordingly generate the odd- and even-symmetric filters M^o_n and M^e_n, and further generate a quadrature pair for an image signal s. At position j on scale n, each quadrature pair yields a response vector [e_n(j), o_n(j)] = [s(j) ∗ M^e_n, s(j) ∗ M^o_n], whose amplitude is A_n(j) = sqrt(e_n(j)² + o_n(j)²). Let F(j) = Σ_n e_n(j) and H(j) = Σ_n o_n(j). PC is defined as PC(j) = U(j) / (ε + Σ_n A_n(j)), where U(j) = sqrt(F²(j) + H²(j)) and ε is a very small number to avoid division by zero. By simplification, PC can be computed by

PC(j) = Σ_n W(j) ⌊A_n(j) · Δθ_n(j) − T_n⌋ / (ε + Σ_n A_n(j))   (1)

where ⌊·⌋ is a threshold operator that removes negative results by setting them to zero, and T_n estimates the noise level. Δθ_n(j) = cos[θ_n(j) − θ̄(j)] − |sin[θ_n(j) − θ̄(j)]| is exploited to gauge the deviations in phase, with θ̄(j) defined as the mean phase at j.
W(j) = (1 + exp[(u − t(j)) v])⁻¹ is a weighting function, with t(j) = (1/N) Σ_n A_n(j) (A_max(j) + ε)⁻¹ the normalized spread of the filter responses; u offers a cut-off value below which PC values are penalized, and v is a gain parameter that controls the sharpness of the cutoff. As such, the PC-based entropy is defined by

E_pc = − Σ_{i=0}^{255} P_i(s_pc) · log P_i(s_pc)   (2)

where s_pc is constituted by the pixels in s that correspond to the 40% largest values in the detected PC map. The second measurement is contrast energy, which estimates the perceived local contrast of an image [44]. The reason behind using it lies in its computational simplicity and particularly its contrast-aware attributes [45]. We apply Gaussian second-order derivative filters to decompose an image. All filter responses are adjusted with rectification and divisive normalization to model the process of nonlinear contrast-gain control in the visual cortex [46]. Similar to [47], we compute the contrast energy on three channels:

CE_f = α · Y(s_f) / (Y(s_f) + α · θ) − φ_f   (3)

where Y(s_f) = sqrt((s_f ∗ f_h)² + (s_f ∗ f_v)²), and f = {gr, yb, rg} indexes the three channels of s, with gr = 0.299R + 0.587G + 0.114B, yb = 0.5(R + G) − B and rg = R − G [48]. For the parameters, α = max[Y(s_f)], θ governs the contrast gain, and φ_f is a threshold applied to constrain the noise. f_h and f_v separately stand for the horizontal and vertical second-order derivatives of the Gaussian function. Hence the contrast-related features are defined as F_ct = {E_pc, CE_gr, CE_yb, CE_rg}.

Sharpness is another influencing factor of comparable importance to image contrast. Contrary to contrast, which in our work captures the global sensation, sharpness reflects local variations. Intuitively speaking, fine details of a photo are usually resolvable in sharp regions, such as edges and object boundaries. In application scenarios, many professional photographers try to raise the perceived sharpness of a photo to a considerably high level. Typical solutions include using high-resolution cameras and resorting to post-processing techniques such as retouching [49]. Actually, recent years have seen quite a few works dedicated to sharpness assessment [50, 51, 52]. According to [51], we choose an efficient and effective measurement, the log-energy of wavelet subbands. More concretely, we first use the 9/7 DWT filters to decompose a grayscale image into three levels, namely {LL₃, LH_l, HL_l, HH_l | l = 1, 2, 3}. Considering the fact that high-sharpness images generally contain more high-frequency details, we then compute the log-energy of each wavelet subband at each decomposition level:

LE_{k,l} = log₁₀[1 + (1/K_l) Σ_i k_l²(i)]   (4)

where i stands for the pixel index, k is LH, HL, and HH, respectively, and K_l is the total number of DWT coefficients at level l. Lastly, the log-energy at each decomposition level is calculated by

LE_l = [(LE_{LH,l} + LE_{HL,l})/2 + w · LE_{HH,l}] / (1 + w)   (5)

where the parameter w is set to 4 to impose larger weights on the HH subbands. Here we merely take the 2nd and 3rd levels into consideration, since they involve more sharp details, and results illustrate that adding the 1st level does not bring performance gain to our BIQME model. The sharpness-related features are thus defined as F_s = {LE₂, LE₃}.

Brightness highly affects the effect of image enhancement, since on the one hand appropriate image brightness can give an image a broader dynamic range, and on the other hand it may contain semantic information, for example providing scene information (daylight seaside, dark-night seabed, and more). In this regard, we characterize image brightness with a simple strategy, following a recent work on IQA of tone-mapping operators [53]. In particular, we hypothesize that proper brightness should help an image display more details, regardless of whether they lie in dark or bright regions. That is to say, no matter whether the luminance intensity is held, increased or decreased, a well-enhanced image is capable of preserving much information. Guided by this, we first create a set of intermediate images by raising/reducing the original brightness of an image:

s_i = max(min(m_i · s, t_u), t_l)   (6)

where m_i indicates the multiplier index to be discussed later; t_l and t_u are the lower and upper bounds; max and min are applied to restrain the image signal to the range [t_l, t_u]. In this paper we temporarily consider only 8-bit images and therefore set t_l and t_u to 0 and 255, respectively. It is clear that, as the luminance intensity varies like this, image details will be removed. Hence we next measure how fast the details disappear. Various kinds of measurements can be leveraged for this, such as the mean, variance, entropy, the nonsymmetric K-L divergence, and the symmetric J-S divergence. According to the observations shown in [53], the information entropy of the aforesaid intermediate images can effectively discriminate two photos that are captured under well-exposed and badly exposed (including over-exposed and under-exposed) conditions, respectively. Indeed, even for two properly exposed photos, this strategy is also effective for judging their relative quality. Accordingly, this paper deploys the entropy of luminance-varying images to deduce whether an image has suitable brightness or not. Facing the choice of the multiplier index m_i, more indices help to achieve greater performance yet harm the computation speed. So we find a good balance between efficacy and efficiency by just using six entropy values {E_m1, E_m2, ..., E_m6}, which are measured with m = {n, 1/n | n = 3.5, 5.5, 7.5}. It deserves emphasis that, different from [53], we do not include the entropy of the image s itself, because the similar measure E_pc has already been taken into consideration. As stated above, we define the brightness-related features as F_b = {E_m1, E_m2, E_m3, E_m4, E_m5, E_m6}.

Colorfulness has a function akin to that of brightness, offering a color image a wider dynamic range and thereby showing more details and information relative to a grayscale image. To quantify image colorfulness, we first introduce color saturation, which represents the colorfulness of a color compared with its own luminance. Here we simply compute the global mean of the saturation channel after transforming an image into the HSV color space:

S = (1/M) Σ_{i=1}^{M} T_{X→S}[s(i)]   (7)

where T_{X→S} stands for a transformation function that converts an X-type image (e.g. an RGB image) into the saturation channel, and M indicates the number of pixels in s. The second measurement stems from a classical research work dedicated to measuring colourfulness in natural images [48]. In fact, several well-designed colour appearance models can predict the perception of colourfulness, but they only work validly for simple blocks on a uniform background.
As for the measurement of the global colourfulness of natural scene images, there had been no dedicated study. Through key-feature extraction and a psychophysical category scaling experiment, Hasler et al. contributed a practical metric to estimate the overall image colourfulness, which correlates highly with human perception [48]. In more detail, four key features are first extracted, consisting of the means and variances of the yb and rg channels (μ_yb, σ²_yb, μ_rg and σ²_rg). Then the metric is defined by

C = sqrt(σ²_yb + σ²_rg) + κ · sqrt(μ²_yb + μ²_rg)   (8)

where κ is a parameter that rectifies the relative significance of the two terms, in order to better match subjective opinions. Experimental results show that the optimal value of κ is 0.3. The colorfulness-related features are therefore defined as F_cl = {S, C}.

Naturalness is an intrinsic attribute of a natural image, representing commonness shared by the majority of natural images, e.g. the NSS regulation applied in [17, 18]. Generally speaking, violating this regulation means that an image looks unnatural and is thus of low visual quality. Nonetheless, as mentioned above, a natural image can acquire better quality via proper enhancement. So image naturalness is mainly used here to penalize over-enhancement, which usually seriously devastates the naturalness of a visual signal. Our first consideration is the typical and frequently used NSS model [17, 18, 29]. Specifically, we begin by preprocessing an image via local mean removal and divisive normalization:

s*(i) = (s(i) − μ(i)) / (σ(i) + ε)   (9)

where μ(i) and σ(i) are the local mean and standard deviation at the i-th pixel, and ε is a positive constant. For a natural image, the normalized pixel values tend towards a Gaussian-like appearance, while artifacts change this shape; for instance, Gaussian blur generates a more Laplacian appearance. The generalized Gaussian distribution (GGD) with zero mean was found to capture the behavior of the coefficients of (9); it is defined by

f(x; ν, σ²) = ν / (2β Γ(1/ν)) · exp(−(|x|/β)^ν)   (10)

where β = σ sqrt(Γ(1/ν) / Γ(3/ν)) and Γ(a) = ∫₀^∞ t^{a−1} e^{−t} dt for a > 0. The parameter ν controls the shape of the distribution, while σ² is the variance of the distribution. We therefore collect ν and σ² as two features. The other measurement of naturalness is the recently found DCP prior [32], which shows that, in most non-sky areas, at least one color channel tends towards zero, that is,

s_dark(i) = min_{k∈{R,G,B}} s_k(i)   (11)

where k = {R, G, B} indexes the RGB channels. Apparently, s_dark has the definite bounds [0, 255], or [0, 1] for a normalized image divided by 255. We merely compute the overall mean of the dark channel s_dark as a naturalness measurement S_d. The naturalness-related features are thus defined as F_n = {ν, σ², S_d}.

To summarize, on the basis of five aspects of consideration, comprising the contrast, sharpness, brightness, colorfulness and naturalness of images, we elaborately extract a total of 17 features. For the readers' convenience, all the features described above are listed in Table I.
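To make the feature definitions concrete, the sketch below computes the two colorfulness features of eqs. (7)-(8) and the dark-channel naturalness feature of eq. (11) with NumPy. This is our own minimal illustration of the formulas, not the authors' released code.

```python
import numpy as np

def colorfulness_features(rgb):
    """S of eq. (7) (mean HSV saturation) and C of eq. (8) with kappa = 0.3."""
    r, g, b = (rgb[..., c].astype(np.float64) for c in range(3))
    # HSV saturation computed directly: S = 1 - min/max (0 where max == 0).
    mx = rgb.max(axis=-1).astype(np.float64)
    mn = rgb.min(axis=-1).astype(np.float64)
    sat = np.where(mx > 0, 1.0 - mn / np.maximum(mx, 1e-12), 0.0)
    S = sat.mean()
    yb, rg_ch = 0.5 * (r + g) - b, r - g  # opponent channels of [48]
    C = np.sqrt(yb.var() + rg_ch.var()) + 0.3 * np.sqrt(yb.mean()**2 + rg_ch.mean()**2)
    return S, C

def dark_channel_mean(rgb):
    """S_d: mean of the per-pixel dark channel of eq. (11), for rgb in [0, 1]."""
    return rgb.min(axis=-1).mean()

rgb = np.random.default_rng(3).random((64, 64, 3))  # placeholder image in [0, 1]
print(colorfulness_features(rgb), dark_channel_mean(rgb))
```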
B. Quality Prediction

So far we have obtained the enhancement-related features, whose effectiveness will be discussed in Section III. These features, however, cannot offer a straightforward impression of how good the quality of an enhanced image is. In this situation, a regression module converting the 17 features into one quality score becomes desirable. The linear weighting combination is a simple and commonly used scheme. However, in order to integrate 17 features, at least 16 weights are required. Facing such a high-dimensional space of weights, it is difficult to seek robust and reasonable parameter values. Another way to integrate the features is to take advantage of dimensionality-reduction tools, such as PCA and LLE [54]. But the extracted features play different roles in assessing the quality of enhanced images and, furthermore, they are of different dimensions. This renders the use of dimensionality reduction a tough road.

Recently, a new strategy has been proposed for finding the regression module in blind IQA designs [55]. To be more specific, in order to overcome the issue of overfitting, more than 100,000 images are utilized as training samples to learn the regression module in our blind BIQME metric. Note that, in classical IQA research, the median performance indices across 1,000 iterations of a random 80% train / 20% test procedure on a certain database are usually reported [17, 18, 25, 26], or the leave-one-out cross-validation methodology is adopted [36, 49], for the purpose of verifying the effectiveness of the features. Of course we also exploit these two manners to verify the superiority of our enhancement-aware features in Section III. Nonetheless, due to the limited visual scenes and only hundreds of images included in existing databases, these two manners readily cause overfitting when learning the regression module. So we deployed a valid strategy similar to that used in [56]. We first collected 1,642 images, containing 1,242 natural scene images coming from the Berkeley database [57] and high-quality subsets of the PQD database [58], as well as 400 screen content images captured by ourselves with a screenshot tool². These 1,642 original images are absolutely content-independent of those in all the testing databases used in this research. Next we simulated enhanced images with eight typical global-based enhancement technologies, akin to those employed in the CCID2014 database [30], creating 60 enhanced images for each original image. Including the 1,642 original images, we eventually produce 100,162 images (much bigger than the size of the largest testing database, CCID2014, which consists of 655 images) as training data.

² We will release the 400 screen content images online soon.
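The eight enhancement technologies follow [30]; as a hedged illustration of how such a training set can be populated, the sketch below generates brightness/contrast-altered variants with simple global curves. The transfer functions here are generic stand-ins, not the exact ones used by the authors.

```python
import numpy as np

def global_enhance(image, gamma, gain):
    """Apply a simple global brightness/contrast curve to an 8-bit image.

    Illustrative stand-in for the eight global enhancement technologies
    of [30]; the actual transfer functions differ.
    """
    x = image.astype(np.float64) / 255.0
    y = gain * np.power(x, gamma)  # gamma shifts brightness, gain scales contrast
    return np.clip(255.0 * y, 0, 255).astype(np.uint8)

rng = np.random.default_rng(0)
original = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # placeholder image

# 60 enhanced variants per original, over a grid of (hypothetical) curve parameters.
variants = [global_enhance(original, g, c)
            for g in np.linspace(0.4, 2.5, 10)
            for c in np.linspace(0.8, 1.2, 6)]
print(len(variants))  # 60
```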
We use the C-PCQI scores of the 100,162 training images in place of human opinion ratings. With the training set prepared, the well-known support vector regression (SVR) is employed to learn the regression module of the proposed BIQME metric [59]. In general, traditional deep learning tools are not appropriate here, since only 17 features are extracted. It deserves mention, however, that a recent work applied parallel computation of low-level features followed by deep-learning-based regression [60]; this strategy will be considered in our future work. Consider a training dataset D = {(x1, y1), ..., (xr, yr)}, where xi and yi, i = 1, ..., r, denote the feature vector of f01-f17 in Table I and the target output, i.e., the C-PCQI score of the i-th training image. Given parameters t > 0 and p > 0, the standard form of SVR can be expressed as

minimize_{w, δ, b, b′}  (1/2)·‖w‖² + t·Σ_{i=1}^{r} (b_i + b′_i)
s.t.  wᵀφ(x_i) + δ − y_i ≤ p + b_i,
      y_i − wᵀφ(x_i) − δ ≤ p + b′_i,
      b_i, b′_i ≥ 0,  i = 1, ..., r    (14)

where K(x_i, x_j) = φ(x_i)ᵀφ(x_j) is the kernel function, set here to the Radial Basis Function (RBF) kernel K(x_i, x_j) = exp(−k·‖x_i − x_j‖²). Based on the training samples, our goal is to determine the parameters t, p and k and thereby obtain the associated regression module.
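The regression module of Eq. (14) can be learned with any ε-SVR implementation; a minimal sketch with scikit-learn follows, where the paper's t, p and k map to the C, epsilon and gamma arguments (the data and parameter values below are placeholders, not the trained ones).

```python
import numpy as np
from sklearn.svm import SVR

X = np.random.rand(1000, 17)  # stand-in for the 100,162 x 17 feature matrix (f01-f17)
y = np.random.rand(1000)      # stand-in for the C-PCQI training labels

model = SVR(kernel="rbf", C=1.0, epsilon=0.1, gamma=0.1)  # t, p, k -> C, epsilon, gamma
model.fit(X, y)
quality = model.predict(X[:5])  # predicted quality scores for five images
```

In practice the three hyperparameters would be selected by a grid search on held-out data.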
Finally, we also compare the proposed strategy with model distillation, a recently proposed concept in deep learning. Once a cumbersome model has been trained, a different kind of training called "distillation" can be used to transfer the knowledge from the cumbersome model to a small model that is more suitable for deployment [61]. Compared with model distillation, the proposed strategy is closer to a data-fitting adaptation: we deploy a high-performance FR-IQA model, which can approximate the "ground truth", to supervise the features and derive an NR-IQA model from big-data training samples.

III. EXPERIMENTAL RESULTS AND DISCUSSIONS

In this section we evaluate and compare the performance of the proposed blind BIQME metric with up to 16 state-of-the-art IQA approaches on nine enhancement-related databases.

A. Experimental Setup

Quality Measures. Recent years have seen numerous IQA measures, most of which achieve high prediction accuracy while consuming little implementation time. In this research, we choose the following four types of methods. The first type includes FSIM [8], LTG [9], VSI [10], and PSIM [11], which are all FR metrics with superior performance on popular databases. The second type consists of two RR-IQA models, RRED [15] and FTQM [16]. The third type contains BRISQUE [17], NFERM [18], NIQE [19], IL-NIQE [20] and BQMS [55], which assess the visual quality of images without access to the original references. The last type consists of the FR C-PCQI, RR RIQMC [30], RR QMC [34], blind FANG [36], and blind GISTCM [37], which are dedicated to enhanced-image IQA tasks.

Testing Datasets. To the best of our knowledge, there exist nine main relevant subjective image databases. The first two are the CID2013 and CCID2014 databases [33, 30], constructed particularly for quality evaluation of contrast-changed images at Shanghai Jiao Tong University during 2013-2014. The two databases encompass 400 and 655 images produced by six and eight contrast alteration technologies, respectively. The second group is composed of the four contrast enhancement-related subsets of the TID2008, CSIQ, TID2013 and SIQAD databases [22, 23, 24, 62], which contain 200, 116, 250 and 140 images, respectively. The last three subsets were completed at Peking University in 2013 [37]; each includes 500 images, separately generated by enhancing hazy, underwater and low-light images. Interested readers are directed to [22, 23, 24, 30, 33, 37, 62] for detailed information on the nine datasets used in our work.

Performance Benchmarking. In general, three representative evaluation metrics are used for correlation performance measurement and comparison in most IQA studies. The first is the Spearman rank-order correlation coefficient (SRC), a non-parametric test (i.e., one that does not rely on any assumption about the distributions of the two variables) of the degree of association between two variables from the angle of prediction monotonicity. The second is another non-parametric monotonicity index, the Kendall rank-order correlation coefficient (KRC), which evaluates the strength of dependence between two variables; compared with SRC, KRC has stricter demands, for example, both variables must be ordinal. The third criterion is the Pearson linear correlation coefficient (PLC), which estimates the prediction accuracy between two variables. It should be stressed that the nonlinearity of objective quality scores must be removed using a regression function before computing the PLC index. Two typical regression functions are the four-parameter function

g(q) = (τ1 − τ2) / (1 + exp(−(q − τ3)/τ4)) + τ2    (15)

and the five-parameter function

g(q) = τ1·(0.5 − 1/(1 + exp[τ2(q − τ3)])) + τ4·q + τ5    (16)

where q and g(q) are the vectors of raw objective quality scores and of converted scores after the nonlinear regression of (15) or (16); we use curve fitting to compute the values of the model parameters {τ1, ..., τ4} or {τ1, ..., τ5}. This paper adopts the five-parameter logistic function. For all three performance evaluation criteria, a PLC, SRC or KRC value approaching one indicates superior agreement with human opinion ratings.
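The PLC computation described above can be sketched as follows: fit the five-parameter logistic of Eq. (16) to map objective scores onto the subjective scale, then correlate. The initial-guess heuristic is ours.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import pearsonr, spearmanr, kendalltau

def logistic5(q, t1, t2, t3, t4, t5):
    """Five-parameter logistic mapping of Eq. (16)."""
    return t1 * (0.5 - 1.0 / (1.0 + np.exp(t2 * (q - t3)))) + t4 * q + t5

def evaluate(obj, mos):
    """SRC/KRC on raw scores; PLC after the nonlinear regression."""
    p0 = [np.ptp(mos), 0.1, float(np.mean(obj)), 0.1, float(np.mean(mos))]
    params, _ = curve_fit(logistic5, obj, mos, p0=p0, maxfev=10000)
    plc = pearsonr(logistic5(obj, *params), mos)[0]
    return plc, spearmanr(obj, mos)[0], kendalltau(obj, mos)[0]
```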
B. Performance Results

Effectiveness of Features. We deploy two tests to measure the effectiveness of the features. First, inspired by [17, 18, 25], each testing dataset was randomly separated into two teams based on image scenes. Taking the TID2008 subset as an example, Team 1 contains 160 training images corresponding to 20 original images and Team 2 contains 40 testing images corresponding to the remaining 5 original images. Using the 17 extracted features, the regression module is trained on the 80% of the data in Team 1 and employed to conduct performance evaluations on the 20% of the data in Team 2.

Fig. 2: Performance of BIQME (proposed), FANG [36], NFERM [18] and BRISQUE [17] metrics on the CID2013, CCID2014, TID2008, CSIQ, TID2013 and SIQAD datasets. Blue, red and green bars respectively represent the PLC, SRC and KRC indices.

Fig. 3: Scatter plots of BIQME (proposed) and FANG [36] using a leave-one-out cross-validation experiment on six datasets.

TABLE II: SRC comparison on the haze, underwater and low-light subsets ("Length" is the number of features used by each metric).

Model          Length  Haze    Underwater  Low light
BIQME (Pro.)   17      0.7290  0.8171      0.9123
BRISQUE [17]   36      0.4179  0.4781      0.4461
NFERM [18]     23      0.4988  0.6334      0.7925
FANG [36]      5       0.5196  0.1467      0.8316
GISTCM [37]    521     0.6302  0.7858      0.9155

TABLE III: Performance comparison of 14 state-of-the-art IQA measures (SRC / PLC / KRC per dataset); the top metric in each type is highlighted in the original paper.

Model    Type  CID2013 [33]            CCID2014 [30]           TID2008 [22]            CSIQ [23]
FSIM     FR    0.8486 0.8574 0.6663    0.7658 0.8201 0.5707    0.4403 0.6880 0.3348    0.9420 0.9378 0.7883
LTG      FR    0.8605 0.8656 0.6723    0.7901 0.8384 0.5938    0.4655 0.6795 0.3285    0.9414 0.9560 0.7880
VSI      FR    0.8506 0.8571 0.6579    0.7734 0.8209 0.5736    0.4571 0.6819 0.3450    0.9504 0.9532 0.8096
PSIM     FR    0.8541 0.8604 0.6666    0.8004 0.8386 0.6038    0.4573 0.6106 0.3202    0.9336 0.9447 0.7718
C-PCQI   FR    0.9260 0.9247 0.7586    0.8754 0.8885 0.6858    0.8782 0.9061 0.7016    0.9394 0.9454 0.7820
RRED     RR    0.7218 0.7295 0.5254    0.6595 0.7064 0.4677    0.2320 0.5278 0.1693    0.9382 0.9415 0.7838
FTQM     RR    0.8047 0.8164 0.6125    0.7292 0.7885 0.5330    0.3006 0.6845 0.1854    0.9532 0.9552 0.8129
RIQMC    RR    0.9005 0.8995 0.7162    0.8465 0.8726 0.6507    0.8095 0.8585 0.6224    0.9579 0.9652 0.8279
QMC      RR    0.9340 0.9309 0.7713    0.8722 0.8960 0.6872    0.7340 0.7688 0.5520    0.9554 0.9622 0.8207
NIQE     NR    0.3929 0.4648 0.2709    0.3655 0.4694 0.2494    0.0223 0.0979 0.0187    0.2444 0.3019 0.1613
IL-NIQE  NR    0.5273 0.5682 0.3708    0.5121 0.5764 0.3590    0.1833 0.2244 0.1223    0.5005 0.5468 0.3510
BQMS     NR    0.4624 0.5733 0.3196    0.4381 0.5742 0.3039    0.1539 0.2450 0.1024    0.3178 0.3259 0.2241
FANG     NR    0.8006 0.7904 0.5893    0.7822 0.7890 0.5684    0.2666 0.2737 0.1785    0.1870 0.1762 0.1175
BIQME    NR    0.9023 0.9004 0.7223    0.8309 0.8588 0.6305    0.6980 0.7476 0.5123    0.7851 0.8129 0.5980

Model    Type  TID2013 [24]            SIQAD [62]              Direct mean             Weighted mean
FSIM     FR    0.4413 0.6819 0.3588    0.7150 0.8222 0.5328    0.6921 0.8012 0.5419    0.7091 0.8019 0.5468
LTG      FR    0.4639 0.6749 0.3458    0.6539 0.7820 0.4773    0.6959 0.7994 0.5343    0.7221 0.8066 0.5498
VSI      FR    0.4643 0.6785 0.3705    0.6461 0.7734 0.4728    0.6903 0.7942 0.5382    0.7127 0.7981 0.5455
PSIM     FR    0.4542 0.6092 0.3347    0.5864 0.7098 0.4146    0.6810 0.7622 0.5186    0.7162 0.7819 0.5437
C-PCQI   FR    0.8805 0.9175 0.7074    0.7447 0.8127 0.5624    0.8740 0.8991 0.6996    0.8817 0.9006 0.7037
RRED     RR    0.3068 0.5606 0.2419    0.5601 0.7347 0.3942    0.5697 0.7001 0.4304    0.5855 0.6884 0.4299
FTQM     RR    0.6095 0.7697 0.4685    0.6976 0.8216 0.5205    0.6825 0.8060 0.5221    0.6929 0.7940 0.5199
RIQMC    RR    0.8044 0.8651 0.6178    0.4506 0.5479 0.3139    0.7949 0.8348 0.6248    0.8244 0.8563 0.6426
QMC      RR    0.7153 0.7713 0.5364    0.2485 0.2610 0.1653    0.7432 0.7650 0.5888    0.8042 0.8256 0.6369
NIQE     NR    0.0788 0.0985 0.0522    0.1607 0.1364 0.1137    0.2108 0.2615 0.1444    0.2678 0.3360 0.1835
IL-NIQE  NR    0.1517 0.2275 0.1030    0.2491 0.3044 0.1786    0.3540 0.4080 0.2475    0.4054 0.4615 0.2836
BQMS     NR    0.1885 0.2514 0.1259    0.2450 0.3146 0.1642    0.3010 0.3807 0.2067    0.3526 0.4538 0.2429
FANG     NR    0.2675 0.2941 0.1742    0.1904 0.2768 0.1324    0.4157 0.4334 0.2934    0.5685 0.5794 0.4085
BIQME    NR    0.6444 0.7259 0.4693    0.6783 0.7860 0.4954    0.7565 0.8053 0.5713    0.7904 0.8279 0.6022

TABLE IV: Mean implementation time over all 655 images in the CCID2014 database.

Model     FSIM   LTG    VSI    PSIM   C-PCQI  RRED   FTQM
Time (s)  0.675  0.045  0.294  0.065  0.373   1.536  0.592
Model     RIQMC  QMC    NIQE   IL-NIQE BQMS   FANG   BIQME
Time (s)  0.867  0.010  0.450  3.064   90.72  0.693  0.906
This random 80% train-20% test procedure is repeated 1,000 times, and the median performance measures across the 1,000 iterations are reported for comparison. We apply this test to the former six datasets and list the results in Fig. 2. Three representative NR-IQA measures, BRISQUE, NFERM and FANG, satisfy the requirements of this experiment, so we also include them and report their results in Fig. 2. On the last three subsets, which consist of dehazed images, enhanced underwater images and enhanced low-light images, we perform the same experiment as used in [37]; SRC results are given in Table II. One can see that the proposed BIQME metric, despite using few features, attains encouraging performance, especially for contrast-changed images and enhanced hazy images.

The second test exploits leave-one-out cross-validation, akin to [49], for evaluating and comparing the effectiveness of features. More concretely, we again take the TID2008 subset to briefly illustrate how the leave-one-out cross-validation experiment is carried out. For the 8 testing images associated with one particular original image, we learn the regression module with the other 192 training images associated with the remaining 24 original images, and then predict the quality scores of the 8 images above. Repeating this, we obtain quality measures for all 200 images in the TID2008 subset, and likewise for the entire image sets of the other datasets. This paper compares only our BIQME algorithm and the recently devised FANG metric dedicated to IQA of contrast adjustment, because in most conditions they outperform the others. We choose the CID2013, CCID2014, TID2008, CSIQ, TID2013 and SIQAD datasets, which meet the requirements for conducting leave-one-out cross-validation. The results are illustrated as scatter plots in Fig. 3; for convenient comparison, we also label the numerical results on each scatter plot. As seen, of the two blind IQA metrics, the proposed BIQME generates more reliable quality predictions, i.e., the sample points lie closer to the black diagonal lines (indicating perfect performance), and it is consistently and largely superior to FANG.
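The group-wise leave-one-out protocol just described (hold out all enhanced versions of one original image) can be sketched with scikit-learn; the SVR regressor and the array shapes are illustrative stand-ins.

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.svm import SVR

X, y = np.random.rand(200, 17), np.random.rand(200)  # e.g., the TID2008 subset
groups = np.repeat(np.arange(25), 8)  # index of the original image per sample

preds = np.empty_like(y)
for tr, te in LeaveOneGroupOut().split(X, y, groups):
    preds[te] = SVR(kernel="rbf").fit(X[tr], y[tr]).predict(X[te])
```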
Performance Comparison. Most existing NR metrics focus on exploring new effective features rather than a complete IQA model. Despite the 80% train-20% test procedure and the leave-one-out cross-validation described in the last subsection, such performance measures are not entirely fair, since using only hundreds of training samples to learn the regression module is likely to introduce overfitting. Moreover, the training and testing data all come from commonly used datasets that include limited image scenes, which substantially confines practical application to a broad scope of visual scenes. In contrast to the opinion-aware blind metrics above, a few opinion-unaware NR-IQA models have been designed upon the NSS regularity [19, 20]; their modules are trained using about 100 natural images. This paper introduces another strategy, using a huge amount of training data to learn the regression module, as described in Section II-B, which renders the proposed BIQME an opinion-unaware IQA metric rather than merely 17 enhancement-related features. Accordingly, we conduct a performance comparison with opinion-unaware FR, RR and NR quality measures. Apart from our BIQME method, we mainly consider the following 13 state-of-the-art IQA models: 1) FR FSIM [8], LTG [9], VSI [10], PSIM [11], and C-PCQI; 2) RR RRED [15], FTQM [16], RIQMC [30], and QMC [34]; 3) NR NIQE [19], IL-NIQE [20], BQMS [55], and FANG [36] (for a fair comparison, we deploy the same method and training data as in BIQME to learn the regression module of FANG). The results on the six datasets are given in Table III, with the best-performing metric in each type highlighted. Four conclusions can be drawn. First, our BIQME metric is clearly superior to the other NR-IQA models tested, whether general-purpose (NIQE and IL-NIQE) or distortion-specific (BQMS and FANG). Second, BIQME approaches the performance of FR C-PCQI and RR RIQMC, which are devised specifically for IQA of contrast alteration with the whole or part of the reference image available, particularly on the large CID2013 and CCID2014 databases. Third, we find, somewhat surprisingly, that the BIQME metric works effectively on the SIQAD subset; in other words, BIQME is also fit for assessing the quality of enhanced screen content images. Fourth, compared with the opinion-unaware NIQE and IL-NIQE methods, which assume that natural images are of optimal quality, the proposed opinion-unaware BIQME metric delivers much better performance, suggesting another strategy for the exploration of opinion-unaware IQA algorithms.

Two mean performance results are also included in Table III. Defining the mean index as ξ̄ = Σ_i ξi·πi / Σ_i πi, where i = {1, 2, ..., 6} indexes the testing datasets, ξi is the performance index on each dataset and πi is its weight, the direct mean performance sets all weights to one, while the weighted mean performance assigns each weight πi as the number of images in the corresponding testing dataset. One can observe that our blind BIQME technique outclasses all the general-purpose FR-, RR- and NR-IQA methods on average.

In addition to the numerical results, scatter plots of objective IQA scores against subjective opinions are shown for straightforward comparison in Fig. 4, in which the red lines are the curves fitted with the five-parameter logistic function and the black dashed lines are the 95% confidence intervals. Besides our NR algorithm, we include seven competing quality metrics, FR LTG and VSI, RR RIQMC and QMC, and NR IL-NIQE, BQMS and FANG, on the large-scale CCID2014 database. It is evident that, compared with the seven IQA approaches considered, our NR BIQME model exhibits impressive convergence and monotonicity, noticeably better than the blind IL-NIQE, BQMS and FANG metrics.

Fig. 4: Scatter plots of MOS vs. FR LTG, VSI, RR RIQMC, QMC, and blind IL-NIQE, BQMS, FANG, BIQME on the CCID2014 database. The red lines are curves fitted with the five-parameter logistic function and the black dashed lines are 95% confidence intervals.

Runtime Measure. A good IQA model should have high computational efficiency and low implementation time. We therefore measure the runtime of the 14 tested IQA methods on all 655 images in the CCID2014 database. This experiment is carried out in MATLAB 2015 on a desktop computer with a 3.20 GHz CPU and 16 GB of internal memory. Table IV lists the mean runtime of each IQA metric.
The proposed BIQME measure, despite computing its features serially, consumes less than one second to assess a 768 × 576 image. In fact, each type of feature is extracted independently of the others, and some features of the same type (e.g., the brightness-related features) can also be calculated separately when our algorithm runs, so a parallel computing strategy could reduce the runtime considerably.

IV. QUALITY-BASED IMAGE ENHANCEMENT

Among the numerous IQA methods, the majority stop at predicting the quality score of an image and do not serve to optimize or instruct post-processing techniques towards visual quality improvement. Our BIQME metric, owing to its high performance and efficiency, is fit for guiding image enhancement technologies. Moreover, BIQME works without original references, which makes it applicable to many kinds of images, as opposed to some recent works that are only available for enhancing natural images [63, 34]. We therefore develop a robust BIQME-optimized image enhancement method (BOIEM). In the BOIEM algorithm, we primarily take image brightness and contrast into account and adjust them to a proper level. Enlightened by the RICE enhancement method in [34], a two-step framework is constructed. In the first step, we improve two recent enhancement methods, AGCWD [31] and RICE [34], to successively rectify image brightness and contrast. AGCWD weights the probability density function (PDF) of an image by

PDF′(z) = PDFmax · ((PDF(z) − PDFmin)/(PDFmax − PDFmin))^λb    (17)

where z = {zmin, zmin + 1, ..., zmax}; PDFmin and PDFmax respectively indicate the minimum and maximum values of the PDF; and λb is a weight parameter. The weighted PDF is then used to compute the cumulative distribution function (CDF),

CDF′(z) = Σ_{h=0}^{z} PDF′(h) / Σ PDF′    (18)

which produces the enhanced image

T(z) = 255 · (z/255)^(1 − CDF′(z)).    (19)

In [31], the weight parameter λb is empirically assigned a constant value, but it was found that this value sometimes leads to over-enhancement, making the processed images excessively brilliant [30]. RICE offers a more complete histogram modification framework to be optimized by a quality metric. In RICE, it is hypothesized that the ideal histogram of a properly enhanced image tends to have a uniform PDF, to stay close to the original histogram, and to exhibit positively skewed statistics that elevate the surface quality [64]. Based on this hypothesis, an optimization function was established:

h̃ = argmin_h ‖h − hi‖ + λe·‖h − he‖ + λs·‖h − hs‖    (20)

where hi, he and hs are the histograms of uniform distribution, the original distribution and positively skewed statistics, and λe and λs are weighting parameters to be determined. Through some simplifications, an analytical solution was derived:

h̃ = (hi + λe·he + λs·hs) / (1 + λe + λs).    (21)

Given the output histogram h̃, histogram matching and quality-optimized techniques are used to enhance images. Note that the two weights λe and λs are adaptively determined by a quality metric over three pairs of parameter candidates, and the RICE algorithm is therefore good at enhancing natural images.
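A minimal sketch of the first-step AGCWD brightness adjustment (Eqs. 17-19) for a uint8 intensity image follows; it assumes a non-flat histogram and is illustrative rather than the reference implementation.

```python
import numpy as np

def agcwd(img, lam_b=0.5):
    pdf = np.bincount(img.ravel(), minlength=256) / img.size
    # Eq. 17: weight the PDF between its min and max with exponent lambda_b.
    pdf_w = pdf.max() * ((pdf - pdf.min()) / (pdf.max() - pdf.min())) ** lam_b
    cdf_w = np.cumsum(pdf_w) / pdf_w.sum()                    # Eq. 18
    lut = 255.0 * (np.arange(256) / 255.0) ** (1.0 - cdf_w)   # Eq. 19
    return lut[img].astype(np.uint8)
```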
Nonetheless, RICE fails for other types of images, such as low-light images, because it does not adjust brightness and, moreover, it requires reference images in the quality-based optimization. In the design of our BOIEM model, a cascade of the modified AGCWD and RICE is utilized, with the parameters λb, λs and λe to be decided in the first step. The proposed blind BIQME algorithm is then used to optimize these three parameters:

(λb, λs, λe) = argmax_{λb, λs, λe} QB(TR[TA(s, λb), λs, λe])    (22)

where QB, TR and TA denote BIQME, RICE and AGCWD, respectively. Thereafter, we use these parameters to enhance images. Extensive experiments showed that images enhanced by optimizing the three parameters simultaneously and by separately optimizing first λb and then λs and λe look almost the same. So, following the speed-up strategy applied in [34], BOIEM conducts only six BIQME evaluations for optimization: the first three enumerate the candidates {0.3, 0.5, 0.7} to pick the best λb for brightness rectification, and the latter three pick the optimal λs and λe from the candidates given in [34] for contrast improvement. With the selected parameters, we finally generate the enhanced images.

Through careful rectification of brightness and contrast and quality-guided optimization, the proposed BOIEM model can enhance natural images, low-contrast images, low-light images and dehazed images well. Part of the results are illustrated in Fig. 5, where images outlined with red, green, orange and blue rectangles are natural images, low-contrast images, low-light images and dehazed images [32], respectively; more results can be found in the supplementary file. Two recently developed enhancement techniques, AGCWD [31] and RICE [34], are included for comparison in Fig. 5. Using a fixed weighting number λb, AGCWD often introduces over-brightness, especially for natural images that already have appropriate luminance; furthermore, AGCWD lacks a contrast-gain procedure, which makes details hard to discern. As seen in the third column, RICE shows a good ability to enhance natural images, as if erasing a curtain of fog from the photos. Yet RICE is helpless for low-light images, very possibly because there is no luminance alteration term in (21) and because the IQA-based optimization regards the input image as a high-quality natural image when further improving its visual quality. By systematically combining these two good enhancement technologies with the high-performance blind BIQME algorithm for parameter optimization, the proposed BOIEM algorithm, as shown in the rightmost column of Fig. 5, is able to enhance natural, low-contrast, low-light and dehazed images well, giving them suitable brightness and contrast and displaying more details.

Fig. 5: Comparison of image enhancement technologies (columns: Original, AGCWD [31], RICE [34], BOIEM (Pro.)) on natural images, low-contrast images, low-light images and dehazed images.
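The six-evaluation search of Eq. (22) can be sketched as below; biqme_score, agcwd and rice stand for the trained metric and the two enhancement steps, and the (λs, λe) candidate pairs are placeholders for those given in [34].

```python
def boiem_enhance(img, biqme_score, agcwd, rice):
    # Step 1: pick lambda_b from {0.3, 0.5, 0.7} by maximizing the BIQME score.
    lam_b = max((0.3, 0.5, 0.7), key=lambda lb: biqme_score(agcwd(img, lb)))
    bright = agcwd(img, lam_b)
    # Step 2: pick (lambda_s, lambda_e) from three candidate pairs the same way.
    candidates = [(0.5, 0.5), (1.0, 0.5), (2.0, 0.5)]  # placeholder pairs
    lam_s, lam_e = max(candidates, key=lambda se: biqme_score(rice(bright, *se)))
    return rice(bright, lam_s, lam_e)
```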
V. CONCLUSION

In this paper we have constructed a general framework for the quality assessment of enhanced images and applied it to robust enhancement technologies. For an enhanced image, we take five influencing factors into consideration, namely image contrast, sharpness, brightness, colorfulness and naturalness, with 17 associated features to blindly predict its visual quality. Thorough experiments using three categories of performance comparison strategies on nine relevant image datasets demonstrate that the proposed BIQME metric is remarkably superior to NR-IQA methods of the same type. In comparison with FR and RR algorithms, our BIQME metric performs better than general-purpose FR- and RR-IQA methods, though slightly worse than the FR and RR quality measures dedicated to IQA of contrast change. It is worth stressing that, on the one hand, each type of feature used in BIQME is independent of the others, so parallel computing could be introduced to increase its computational efficiency to some extent, and on the other hand, our IQA framework is flexible in admitting novel features to achieve higher performance. With the blind BIQME metric for optimization, we have devised a framework that rectifies image brightness and contrast successively to properly enhance natural images, low-contrast images, low-light images and dehazed images. It is worth mentioning that incorporating more procedures, such as image haze removal, would make our enhancement framework more universal. Visual saliency is an intrinsic attribute of the human visual system, which suggests a possible future work of applying saliency detection methods to modify the brightness-, sharpness- and colorfulness-related features for better performance. Compared with existing opinion-unaware NR-IQA methods, our IQA framework provides a new strategy for the design of opinion-unaware blind quality measures, particularly for complicated distortions such as image dehazing. Another line of future work might convert or extend our framework to blind IQA tasks for denoising, deblurring and super-resolution, with new relevant features injected.

REFERENCES

[1] H. R. Sheikh, Z. Wang, L. Cormack, and A. C. Bovik, "LIVE image quality assessment Database Release 2," 2006, online at: http://live.ece.utexas.edu/research/quality
[2] K. Gu, G. Zhai, X. Yang, and W. Zhang, "Hybrid no-reference quality metric for singly and multiply distorted images," IEEE Trans. Broadcasting, vol. 60, no. 3, pp. 555-567, Sept. 2014.
[3] K. Gu, M. Liu, G. Zhai, X. Yang, and W. Zhang, "Quality assessment considering viewing distance and image resolution," IEEE Trans. Broadcasting, vol. 61, no. 3, pp. 520-531, Sept. 2015.
[4] C. Li, A. C. Bovik, and X. Wu, "Blind image quality assessment using a general regression neural network," IEEE Trans. Neural Networks, vol. 22, no. 5, pp. 793-799, May 2011.
[5] M. Narwaria and W. Lin, "Objective image quality assessment based on support vector regression," IEEE Trans. Neural Networks, vol. 21, no. 3, pp. 515-519, Mar. 2010.
[6] S. Wang, K. Gu, S. Ma, W. Lin, X. Liu, and W. Gao, "Guided image contrast enhancement based on retrieved images in cloud," IEEE Trans. Multimedia, vol. 18, no. 2, pp. 219-232, Feb. 2016.
[7] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, "Image quality assessment: From error visibility to structural similarity," IEEE Trans. Image Process., vol. 13, no. 4, pp. 600-612, Apr. 2004.
[8] L. Zhang, L. Zhang, X. Mou, and D. Zhang, "FSIM: A feature similarity index for image quality assessment," IEEE Trans. Image Process., vol. 20, no. 8, pp. 2378-2386, Aug. 2011.
[9] K. Gu, G. Zhai, X. Yang, and W. Zhang, "An efficient color image quality metric with local-tuned-global model," in Proc. IEEE Int. Conf. Image Process., pp. 506-510, Oct. 2014.
[10] L. Zhang, Y. Shen, and H. Li, "VSI: A visual saliency induced index for perceptual image quality assessment," IEEE Trans. Image Process., vol. 23, no. 10, pp. 4270-4281, Oct. 2014.
[11] K. Gu, L. Li, H. Lu, X. Min, and W. Lin, "A fast reliable image quality predictor by fusing micro- and macro-structures," IEEE Trans. Ind. Electron., 2017, to appear.
[12] X. Gao, W. Lu, X. Li, and D. Tao, "Wavelet-based contourlet in quality evaluation of digital images," Neurocomputing, vol. 72, no. 1, pp. 378-385, Dec. 2008.
[13] D. Tao, X. Li, W. Lu, and X. Gao, "Reduced-reference IQA in contourlet domain," IEEE Trans. Syst., Man, Cybern. B, Cybern., vol. 39, no. 6, pp. 1623-1627, Dec. 2009.
[14] X. Gao, W. Lu, D. Tao, and X. Li, "Image quality assessment based on multiscale geometric analysis," IEEE Trans. Image Process., vol. 18, no. 7, pp. 1409-1423, Jul. 2009.
[15] R. Soundararajan and A. C. Bovik, "RRED indices: Reduced reference entropic differencing for image quality assessment," IEEE Trans. Image Process., vol. 21, no. 2, pp. 517-526, Feb. 2012.
[16] M. Narwaria, W. Lin, I. V. McLoughlin, S. Emmanuel, and L. T. Chia, "Fourier transform-based scalable image quality measure," IEEE Trans. Image Process., vol. 21, no. 8, pp. 3364-3377, Aug. 2012.
[17] A. Mittal, A. K. Moorthy, and A. C. Bovik, "No-reference image quality assessment in the spatial domain," IEEE Trans. Image Process., vol. 21, no. 12, pp. 4695-4708, Dec. 2012.
[18] K. Gu, G. Zhai, X. Yang, and W. Zhang, "Using free energy principle for blind image quality assessment," IEEE Trans. Multimedia, vol. 17, no. 1, pp. 50-63, Jan. 2015.
[19] A. Mittal, R. Soundararajan, and A. C. Bovik, "Making a 'completely blind' image quality analyzer," IEEE Sig. Process. Lett., vol. 22, no. 3, pp. 209-212, Mar. 2013.
[20] L. Zhang, L. Zhang, and A. C. Bovik, "A feature-enriched completely blind image quality evaluator," IEEE Trans. Image Process., vol. 24, no. 8, pp. 2579-2591, Aug. 2015.
[21] R. A. Manap, L. Shao, and A. F. Frangi, "Non-parametric quality assessment of natural images," IEEE Multimedia, 2016, in press.
[22] N. Ponomarenko et al., "TID2008 - A database for evaluation of full-reference visual quality assessment metrics," Advances of Modern Radioelectronics, vol. 10, pp. 30-45, 2009.
[23] E. C. Larson and D. M. Chandler, "Most apparent distortion: Full-reference image quality assessment and the role of strategy," J. Electr. Imag., vol. 19, no. 1, Mar. 2010, online at: http://vision.okstate.edu/csiq
[24] N. Ponomarenko et al., "Image database TID2013: Peculiarities, results and perspectives," Sig. Process.: Image Commun., vol. 30, pp. 57-77, Jan. 2015.
[25] X. Gao, F. Gao, D. Tao, and X. Li, "Universal blind image quality assessment metrics via natural scene statistics and multiple kernel learning," IEEE Trans. Neural Netw. Learning Syst., vol. 24, no. 12, pp. 2013-2026, Dec. 2013.
[26] W. Hou, X. Gao, D. Tao, and X. Li, "Blind image quality assessment via deep learning," IEEE Trans. Neural Netw. Learning Syst., vol. 26, no. 6, pp. 1275-1286, Jun. 2015.
[27] F. Shao, W. Tian, W. Lin, G. Jiang, and Q. Dai, "Toward a blind deep quality evaluator for stereoscopic images based on monocular and binocular interactions," IEEE Trans. Image Process., vol. 25, no. 5, pp. 2059-2074, Mar. 2016.
[28] F. Gao, D. Tao, X. Gao, and X. Li, "Learning to rank for blind image quality assessment," IEEE Trans. Neural Netw. Learning Syst., vol. 26, no. 10, pp. 2275-2290, Oct. 2015.
[29] D. L. Ruderman, "The statistics of natural images," Netw. Comput. Neural Syst., vol. 5, no. 4, pp. 517-548, 1994.
[30] K. Gu, G. Zhai, W. Lin, and M. Liu, "The analysis of image contrast: From quality assessment to automatic enhancement," IEEE Trans. Cybernetics, vol. 46, no. 1, pp. 284-297, Jan. 2016.
[31] S.-C. Huang, F.-C. Cheng, and Y.-S. Chiu, "Efficient contrast enhancement using adaptive gamma correction with weighting distribution," IEEE Trans. Image Process., vol. 22, no. 3, pp. 1032-1041, Mar. 2013.
[32] K. He, J. Sun, and X. Tang, "Single image haze removal using dark channel prior," IEEE Trans. Pattern Anal. Mach. Intell., vol. 33, no. 12, pp. 2341-2353, Dec. 2011.
[33] K. Gu, G. Zhai, X. Yang, W. Zhang, and M. Liu, "Subjective and objective quality assessment for images with contrast change," in Proc. IEEE Int. Conf. Image Process., pp. 383-387, Sep. 2013.
[34] K. Gu, G. Zhai, X. Yang, W. Zhang, and C. W. Chen, "Automatic contrast enhancement technology with saliency preservation," IEEE Trans. Circuits Syst. Video Technol., vol. 25, no. 9, pp. 1480-1494, Sept. 2015.
[35] S. Wang, K. Ma, H. Yeganeh, Z. Wang, and W. Lin, "A patch-structure representation method for quality assessment of contrast changed images," IEEE Sig. Process. Lett., vol. 22, no. 7, pp. 2387-2390, Dec. 2015.
[36] Y. Fang, K. Ma, Z. Wang, W. Lin, Z. Fang, and G. Zhai, "No-reference quality assessment of contrast-distorted images based on natural scene statistics," IEEE Sig. Process. Lett., vol. 22, no. 7, pp. 838-842, Jul. 2015.
[37] Z. Chen, T. Jiang, and Y. Tian, "Quality assessment for comparing image enhancement algorithms," in Proc. IEEE Conf. Comput. Vis. and Pattern Recognit., pp. 3003-3010, Jun. 2014.
[38] A. Oliva and A. Torralba, "Modeling the shape of the scene: A holistic representation of the spatial envelope," Int. J. Comput. Vis., vol. 42, no. 3, pp. 145-175, May 2001.
[39] M. A. Stricker and M. Orengo, "Similarity of color images," in IS&T/SPIE's Symposium on Electronic Imaging: Science & Technology, International Society for Optics and Photonics, pp. 381-392, Mar. 1995.
[40] https://en.wikipedia.org/wiki/Contrast_(vision)
[41] A. V. Oppenheim and J. S. Lim, "The importance of phase in signals," Proc. IEEE, vol. 69, no. 5, pp. 529-541, Nov. 1981.
[42] M. C. Morrone, J. Ross, D. C. Burr, and R. Owens, "Mach bands are phase dependent," Nature, vol. 324, no. 6049, pp. 250-253, Nov. 1986.
[43] P. Kovesi, "Image features from phase congruency," Videre: J. Comp. Vis. Res., vol. 69, no. 3, pp. 1-26, 1999.
[44] I. I. A. Groen, S. Ghebreab, H. Prins, V. A. F. Lamme, and H. S. Scholte, "From image statistics to scene gist: Evoked neural activity reveals transition from low-level natural image structure to scene category," J. Neurosci., vol. 33, no. 48, pp. 18814-18824, Nov. 2013.
[45] H. S. Scholte, S. Ghebreab, L. Waldorp, A. W. Smeulders, and V. A. Lamme, "Brain responses strongly correlate with Weibull image statistics when processing natural images," J. Vis., vol. 9, no. 4, pp. 1-15, Apr. 2009.
[46] D. J. Heeger, "Normalization of cell responses in cat striate cortex," Vis. Neurosci., vol. 9, no. 2, pp. 181-197, 1992.
[47] L. K. Choi, J. You, and A. C. Bovik, "Referenceless prediction of perceptual fog density and perceptual image defogging," IEEE Trans. Image Process., vol. 24, no. 11, pp. 3888-3901, Nov. 2015.
[48] D. Hasler and S. E. Suesstrunk, "Measuring colorfulness in natural images," Proc. SPIE, vol. 5007, pp. 87-95, Jun. 2003.
[49] E. Kee and H. Farid, "A perceptual metric for photo retouching," Proc. Natl. Acad. Sci. USA (PNAS), vol. 108, no. 50, pp. 19907-19912, Dec. 2011.
[50] C. Vu, T. Phan, and D. Chandler, "S3: A spectral and spatial measure of local perceived sharpness in natural images," IEEE Trans. Image Process., vol. 21, no. 3, pp. 934-945, Mar. 2012.
[51] P. V. Vu and D. M. Chandler, "A fast wavelet-based algorithm for global and local image sharpness estimation," IEEE Sig. Process. Lett., vol. 19, no. 7, pp. 423-426, Jul. 2012.
[52] K. Gu, G. Zhai, W. Lin, X. Yang, and W. Zhang, "No-reference image sharpness assessment in autoregressive parameter space," IEEE Trans. Image Process., vol. 24, no. 10, pp. 3218-3231, Oct. 2015.
[53] K. Gu, S. Wang, G. Zhai, S. Ma, X. Yang, W. Lin, W. Zhang, and W. Gao, "Blind quality assessment of tone-mapped images via analysis of information, naturalness and structure," IEEE Trans. Multimedia, vol. 18, no. 3, pp. 432-443, Mar. 2016.
[54] S. T. Roweis and L. K. Saul, "Nonlinear dimensionality reduction by locally linear embedding," Science, vol. 290, no. 5500, pp. 2323-2326, 2000.
[55] K. Gu, G. Zhai, W. Lin, X. Yang, and W. Zhang, "Learning a blind quality evaluation engine of screen content images," Neurocomputing, vol. 196, pp. 140-149, Jul. 2016.
[56] R. Datta, D. Joshi, J. Li, and J. Z. Wang, "Studying aesthetics in photographic images using a computational approach," in Proc. Eur. Conf. Comput. Vis., pp. 288-301, May 2006.
[57] D. Martin, C. Fowlkes, D. Tal, and J. Malik, "A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics," in Proc. IEEE Int. Conf. Comput. Vis., pp. 416-423, 2001.
[58] X. Tang, W. Luo, and X. Wang, "Content-based photo quality assessment," IEEE Trans. Multimedia, vol. 15, no. 8, pp. 1930-1943, Dec. 2013.
[59] C.-C. Chang and C.-J. Lin, "LIBSVM: A library for support vector machines," ACM Trans. Intelligent Systems and Technology, vol. 2, no. 3, 2011, online at: http://www.csie.ntu.edu.tw/~cjlin/libsvm
[60] Z. Wang, S. Chang, F. Dolcos, D. Beck, D. Liu, and T. S. Huang, "Brain-inspired deep networks for image aesthetics assessment," arXiv preprint arXiv:1601.04155, 2016.
[61] G. Hinton, O. Vinyals, and J. Dean, "Distilling the knowledge in a neural network," arXiv preprint arXiv:1503.02531, 2015.
[62] H. Yang, Y. Fang, W. Lin, and Z. Wang, "Subjective quality assessment of screen content images," in Proc. IEEE International Workshop on Quality of Multimedia Experience, pp. 257-262, Sept. 2014.
[63] T. Arici, S. Dikbas, and Y. Altunbasak, "A histogram modification framework and its application for image contrast enhancement," IEEE Trans. Image Process., vol. 18, no. 9, pp. 1921-1935, Sep. 2009.
[64] I. Motoyoshi, S. Nishida, L. Sharan, and E. H. Adelson, "Image statistics and the perception of surface qualities," Nature, vol. 447, pp. 206-209, May 2007.
Title: Precursor recommendation for inorganic synthesis by machine learning materials similarity from scientific literature

Authors: Tanjin He1,2, Haoyan Huo1,2, Christopher J. Bartel1,2,3, Zheren Wang1,2, Kevin Cruse1,2, Gerbrand Ceder1,2*

Affiliations
1Department of Materials Science and Engineering, University of California, Berkeley, CA 94720, USA
2Materials Sciences Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA
3Department of Chemical Engineering and Materials Science, University of Minnesota, Minneapolis, MN 55455, USA
*Correspondence to: [email protected]

arXiv:2302.02303v2 [cond-mat.mtrl-sci] 19 May 2023

Abstract
Synthesis prediction is a key accelerator for the rapid design of advanced materials. However, determining synthesis variables such as the choice of precursor materials is challenging for inorganic materials because the sequence of reactions during heating is not well understood. In this work, we use a knowledge base of 29,900 solid-state synthesis recipes, text-mined from the scientific literature, to automatically learn which precursors to recommend for the synthesis of a novel target material. The data-driven approach learns the chemical similarity of materials and refers the synthesis of a new target to precedent synthesis procedures of similar materials, mimicking human synthesis design. When proposing five precursor sets for each of 2,654 unseen test target materials, the recommendation strategy achieves a success rate of at least 82%. Our approach captures decades of heuristic synthesis data in a mathematical form, making it accessible for use in recommendation engines and autonomous laboratories.

Short title
AI learning inorganic synthesis from literature

Teaser
Decades of heuristic data from the literature are automatically captured for guiding successful synthesis of inorganic materials.

MAIN TEXT

Introduction
Predictive synthesis is a grand challenge that would accelerate the discovery of advanced inorganic materials (1). The complexity of synthesis mainly originates from the interactions of many design variables, including the diversity of precursor candidates for each element in the target material (oxides, hydroxides, carbonates, etc.), the experimental conditions (temperature, atmosphere, etc.), and the chronological organization of operations (mixing, firing, reducing, etc.). Properly selecting the combination of experimental variables is crucial and demanding for successful synthesis (2–4). Here, we focus on the rational design of precursor combinations for solid-state synthesis, a widely used approach to create inorganic materials.

Because of the lack of a general theory for how phases evolve during heating, synthesis design is mostly driven by heuristics and basic chemical insights. Unlike the success of retrosynthesis and automated design for organic materials based on the conservation and transformation of functional groups (5–7), the mechanisms underlying inorganic solid-state synthesis are not well understood (6, 8–10). Here, we define a recipe to be any structured information about a target material, including the precursors, operations, conditions, and other experimental details. Experimental researchers usually approach a new inorganic synthesis by manually looking up similar materials in the literature and repurposing precedent recipes for a novel material.
However, deciding what materials are similar and thus where to look is often driven by intuition and limited by individuals' personal experience in specific chemical spaces, hindering the ability to rapidly design syntheses for new chemistries. With the emergence of large-scale materials synthesis datasets from text-mining efforts (11–14), it is becoming possible to statistically learn the similarity of materials and the correlation of their synthesis variables in a more systematic and quantitative fashion, and to provide such tools as a guide to scientists when approaching the synthesis of novel compounds.
(B) An example of precursor recommendation for Y2FeSbO7 by referring to the synthesis of FeSbO4. 5 Results We begin with statistical insights from solid-state synthesis experiments reported in 24,304 pa- pers (11) to better understand the problem of precursor selection (Section “Problem of precursor selection”). Because a universal model for solid-state synthesis has not yet been established, we use a data-driven method to recommend potential precursor sets for the given target material (Fig. 1). The recommendation pipeline consists of three steps: (i) an encoding model to digitize the target material as well as known materials in the knowledge base (Section “Materials encod- ing for precursor selection”), (ii) similarity query based on the materials encoding to identify a reference material that is most similar to the target (Section “Similarity of target materials”), and (iii) recipe completion to (a) compile the precursors referred from the reference material and (b) add any possibly missed precursors if element conservation is not achieved using conditional predictions based on referred precursors (Section “Recommendation of precursor materials”). Problem of precursor selection In the solid-state synthesis of inorganic materials, precursor selection plays a crucial role in governing the synthesis pathway by yielding intermediates that may lead to the desired material or alternative phases (2–4). For each metal/metalloid element, one precursor is often used predominantly over all others, which we denote as the common precursor (22). However, in a solid-state synthesis dataset of 33,343 experimental recipes extracted from 24,304 materials science papers (11), we find that approximately half of the target materials were synthesized using at least one uncommon precursor. Fig. 2A presents the fraction of targets in the text-mined dataset (11) that can be achieved as one increases the number of available precursors. The precursors on the x-axis are ordered by the relative frequency with which they are used to bring a specific element into a synthesis target. Uncommon precursors may be used for a variety of reasons including synthetic 6 constraints (e.g., temperature and time), purity, morphology, and anthropogenic factors (2, 22, 23). In addition, a probability analysis of the text-mined dataset indicates that precursors for dif- ferent chemical elements are not randomly combined. The joint probability to select a specific precursor pair (Ai, Bi) can be compared to the marginal probability to select Ai for element Elea and Bi for Eleb. If the choices of Ai and Bi are independent, the joint probability should equal the product of the marginal probabilities, namely, P (Ai, Bi) = P (Ai)P (Bi). However, inspec- tion of 6,472 pairs of precursors from our text-mined dataset (Fig. 2B) reveals that many show a strong dependency on each other (i.e., P (Ai, Bi) deviating significantly from P (Ai)P (Bi)). A well-known example is that nitrates such as Ba(NO3)2 and Ce(NO3)3 tend to be used to- gether, likely because of their solubility and applicability for solution processing (e.g. slurry preparation). Unfortunately, these decisions regarding dependencies of precursors are usually empirical and hard to standardize. Machine learning is a possible solution to ingest the heuris- tics that underlie such selections. 7 A B Fig. 2. Usage of precursors in solid-state synthesis. (A) Fraction of targets that can be syn- thesized with limited number of available precursors. 
The precursors are ordered by relative frequency per metal/metalloid element. Precursors for 62 elements are considered. A target is included if at least one reported reaction for that target was performed with the available precur- sors. (B) Pairwise dependency of precursors Ai and Bi characterized by P (Ai,Bi) P (Ai)P (Bi) . Probability is estimated from the frequency of occurrence in the solid-state synthesis dataset. The value of log10 P (Ai,Bi) P (Ai)P (Bi) is zero when Ai and Bi are independent, positive when Ai and Bi tends to be used in the same experiment more frequently than P (Ai)P (Bi), negative otherwise. Materials encoding for precursor selection Our precursor recommendation model for the synthesis of a novel target will mimic the hu- man approach of trying to identify similar target materials for which successful synthesis re- actions are known. To find similar materials, digital processing requires an encoding model that transforms any arbitrary inorganic material into a numerical vector. For organic synthesis, structural fingerprinting such as Morgan2Feat (24) is a good choice (25) because it is natural to track the conservation and change of functional groups in organic reactions, but the concept 8 of functional groups is not applicable to inorganic synthesis. Chemical formulas of inorganic solids have been represented using a variety of approaches (e.g., Magpie (26, 27), Roost (28), CrabNet (29)). However, these representations are typically used as inputs to predict thermody- namic or electronic properties of materials. Here, we attempt to directly incorporate synthesis information into the representation of a material with arbitrary composition. Local text-based encodings such as Word2Vec (30, 31) and FastText (17) are able to capture contextual infor- mation from the materials science literature, of which synthesis information is a part; however, they are not applicable to unseen materials when the materials text (sub)strings are not in the vocabulary or when the materials are not in the predefined composition space. For example, Pei et al. (31) computed the similarity of high-entropy alloys as the average similarity of el- ement strings by assuming the elements are present in equal proportions in the material (e.g., CoCrFeNiV). However, this approach is not applicable to unseen materials different from such composition template, and consequently would not be practical in our work on synthesis of diverse inorganic materials. Substitution modeling can evaluate similarity of precursors by as- sessing the viability of substituting one precursor with another while retaining the same target, but it cannot be used to identify analogues for new target materials (22). In this work, we pro- pose a synthesis context-based encoding model utilizing the idea that target materials produced with similar synthesis variables are similar. Analogous to how language models (19–21) pre-train word representations by predicting context for each word, we use a self-supervised representation learning model to encode arbi- trary materials by predicting precursors for each target material, which we refer to as Precur- sorSelector encoding (Fig. 3A). The upstream part is an encoder where properties of the target material are projected into a latent space as the encoded vector representation. In principle, any intrinsic materials property could be included at this step. Here, we use only composition for simplification. 
The downstream part consists of multiple tasks where the encoded vector is used 9 as the input to predict different variables related to precursor selection. Here, we use a masked precursor completion (MPC) task (Fig. 3B) to capture (i) the correlation between the target and precursors and (ii) the dependency between different precursors in the same experiment. For each target material and corresponding precursors in the training set, we randomly mask part of the precursors and use the remaining precursors as a condition to predict the complete precursor set. We also add a task of reconstructing the chemical composition to conserve the compositional information of the target material. The downstream task part is designed to be extensible; other synthesis variables such as operations and conditions can be incorporated by adding corresponding prediction tasks in a similar fashion. By training the entire neural net- work, the encoded vectors for target materials with similar precursors are automatically dragged closer to each other in the latent space because that reduces the overall prediction error. This PrecursorSelector encoding thus takes the correlation induced by precursor selection and serves as a useful metric to measure similarity of target materials in syntheses. 10 Fig. 3. Representation learning to encode precursor information for target materials. (A) Multi-task network structure to encode the target material in the upstream and to predict the complete precursor set, chemical composition, and more synthesis variables in the downstream. x and u represent the composition and encoded vector of the target material, respectively. pi represents the ith precursor in a predefined ordered precursor list. Dense layers are used in each layer unless specified differently. (B) Submodel of multi-label classification for the masked precursor completion (MPC) task. Part of the precursors are randomly masked; the remaining precursors (marked as “Y”) are used as a condition to predict the probabilities of other precur- sors for the target material. The probabilities corresponding to the complete precursors (marked as “Y”) are expected to be higher than that of unused precursors (marked as “N”). The atten- tion block gproj (32) is used to aggregate the target vector and conditional precursors. The final classification layer gcls and the embedding matrix for conditional precursors share the same weights. σ represents the sigmoid function. 11 To demonstrate that the neural network is able to learn precursor information, we present the results of the MPC task (Fig. 3B) for LaAlO3 as an example (Table 1). LaAlO3 is a ternary ma- terial that normally requires two precursors (one to deliver each cation, La and Al). In this test, we masked one precursor and asked the model to predict the complete precursor set. For the same target conditioned with different partial precursors, the predicted probabilities of precur- sors strongly depend on the given precursor and agree with some rules of thumb for precursor selection. When the partial precursors are oxides such as La2O3 or Al2O3, the most probable precursors are predicted to be oxides for the other element, i.e., Al2O3 for La2O3 and La2O3 for Al2O3 (33). When the partial precursors are nitrates such as La(NO3)3 or Al(NO3)3, nitrates for the other element are prompted with higher probabilities, i.e., Al(NO3)3 for La(NO3)3 and La(NO3)3 for Al(NO3)3 (34). 
If both precursors are masked, oxides rank first in the prediction because the common precursors for elements La and Al are La2O3 and Al2O3, respectively. The simple successful prediction shows our PrecursorSelector encoding model is able to learn the correlation between the target and precursors in different contexts of synthesis without explicit input of chemical rules about synthesis. In addition, the use of different precursors suggests various synthetic routes may lead to the same target material. When a practical preference for a particular route exists, the framework we introduce in this work can be extended to include more constraints, such as synthesis type, temperature, morphology, particle size, and cost of precursors, by learning from pertinent datasets (23, 35, 36). 12 Table 1. MPC conditioned on different partial precursors for the same target material LaAlO3. The predicted complete precursors are the ones with the highest probabilities (bold). The term “N/A” denotes the absence of partial precursors, i.e., all precursors are masked in the MPC task. Partial precursors Probability to use different precursors (output) (condition) La2O3 Al2O3 La(NO3)3 Al(NO3)3 La2(CO3)3 Al(OH)3 La2O3 Al2O3 La(NO3)3 Al(NO3)3 N/A 0.75 0.72 0.60 0.62 0.70 0.71 0.73 0.59 0.58 0.69 0.58 0.58 0.64 0.65 0.59 0.57 0.57 0.63 0.65 0.58 0.57 0.58 0.61 0.62 0.59 0.57 0.56 0.61 0.60 0.59 Similarity of target materials Similarity establishes a link between a novel material to synthesize and the known materials in the knowledge base because it is reasonable to assume similar target materials share similar synthesis variables in experiments. Although the understanding of similarity is generally based on heuristics, the PrecursorSelector encoding introduced in Section “Materials encoding for precursor selection” provides a meaningful representation for quantified similarity analysis. Dedicated to precursor prediction in this study, we define the similarity of two target materials as the similarity of the precursors used in their respective syntheses. Although precursors for a new target material are not known in advance, the PrecursorSelector encoding serves as a proxy reflecting the potential precursors to use. In that latent space, we can take the cosine similarity (19, 20, 30) of the PrecursorSelector encoding as a measure of the similarity (Sim) of 13 two target materials x1 and x2: Sim(x1, x2) ∼ cos(f (x1), f (x2)), (1) where f is the encoder part of the PrecursorSelector model transforming the composition of the target material x into the encoded target vector (Fig. 3A). To demonstrate that the similarity estimated from PrecursorSelector encoding is reason- able, we show typical materials with different levels of similarity to an example target material NaZr2(PO4)3 (Table 2). The most similar materials are the ones with the same elements such as Zr-containing phosphates and other sodium super ionic conductor (NASICON) materials. The similarity decreases slightly as additional elements are introduced (e.g., Na3Zr1.9Ti0.1Si2PO12) or when one element is substituted (e.g., LiZr2(PO4)3). When the phosphate groups are re- placed with another anion, the similarity decreases further, with oxides having generally mild similarity to the phosphate NaZr2(PO4)3. The similarity decreases even further for compounds with no anion (e.g., intermetallics) and for non-oxygen anions (e.g., chalcogenides). 
To demonstrate that the similarity estimated from PrecursorSelector encoding is reasonable, we show typical materials with different levels of similarity to an example target material NaZr2(PO4)3 (Table 2). The most similar materials are the ones with the same elements, such as Zr-containing phosphates and other sodium super ionic conductor (NASICON) materials. The similarity decreases slightly as additional elements are introduced (e.g., Na3Zr1.9Ti0.1Si2PO12) or when one element is substituted (e.g., LiZr2(PO4)3). When the phosphate groups are replaced with another anion, the similarity decreases further, with oxides having generally mild similarity to the phosphate NaZr2(PO4)3. The similarity decreases even further for compounds with no anion (e.g., intermetallics) and for non-oxygen anions (e.g., chalcogenides). This finding agrees with our experimental experience that, when seeking a reference material, researchers will usually refer to compositions in the same chemical system or to cases where some elements are substituted. It is also worth noting that our quantitative similarity is purely a data-driven abstraction from the literature and uses no external chemical knowledge.

Table 2. Different levels of similarity between NaZr2(PO4)3 and materials in the knowledge base.

Target                        Similarity    Target                Similarity
Zr3(PO4)4                     0.946         Li1.8ZrO3             0.701
Na3Zr2Si2PO12                 0.929         NaNbO3                0.600
Na3Zr1.8Ge0.2Si2PO12          0.921         Li2Mg2(MoO4)3         0.500
Na3Ca0.1Zr1.9Si2PO11.9        0.908         Sr2Ce2Ti5O16          0.400
Na3Zr1.9Ti0.1Si2PO12          0.900         Ga0.75Al0.25FeO3      0.300
LiZr2(PO4)3                   0.896         Cu2Te                 0.200
NaLa(PO3)4                    0.874         Ni60Fe30Mn10          0.100
Sr0.125Ca0.375Zr2(PO4)3       0.852         AgCrSe2               0.000
Na5Cu2(PO4)3                  0.830         Zn0.1Cd0.9Cr2S4       -0.099
LiGe2(PO4)3                   0.796         Cr2AlC                -0.202

To better understand the similarity, we conducted a relationship analysis (19, 20, 30) by visualizing four groups of target materials synthesized using one shared precursor and one distinct precursor (Fig. 4). For example, the syntheses of YCuO2, Ba3Y4O9, and Ti3Y2O9 share Y2O3 as a precursor and separately use CuO, BaCO3, and TiO2. The three other groups share the precursors In2O3, Al2O3, and Fe2O3, respectively. To separate the effect of the precursor variation, we align the origins of the target vectors by first projecting each target vector to the same vector space as the precursors and then subtracting the vector of the shared precursor, providing a difference vector showing the relationship between the target material and the shared precursor (more details in Section "Representation learning for similarity of materials"). Next, we plot the top two principal components (37) of these difference vectors in a two-dimensional plane. The difference vectors are automatically separated into three clusters according to the varied precursor, representing three types of relationships, "react with BaCO3", "react with CuO", and "react with TiO2", respectively. For example, Ba3Y4O9 is to Y2O3 as BaAl2O4 is to Al2O3 (i.e., Ba3Y4O9 − Y2O3 ≈ BaAl2O4 − Al2O3) because both syntheses use BaCO3. The consistency between this automatic clustering and the chemical intuition again affirms the efficacy of using PrecursorSelector encoding as a similarity metric.

Fig. 4. Relationships between targets and their shared precursors. Four groups of target materials are synthesized each using one shared precursor shown at the origin (Y2O3, In2O3, Al2O3, or Fe2O3) and one distinct precursor shown as the edge (BaCO3, CuO, or TiO2). The relationship of "react with another precursor" is visualized as the first two principal components of the difference vector between the target and the shared precursor, gproj(f(x)) − pi. The origin points corresponding to different precursors pi are jittered for clarity.
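A sketch of the relationship analysis behind Fig. 4 follows: difference vectors between projected targets and their shared precursor are reduced to two principal components via SVD. The target and precursor vectors below are random stand-ins for the trained gproj(f(x)) and pi, used only so the snippet runs.

# Sketch of the Fig. 4 analysis: subtract each group's shared-precursor vector
# from the projected target vectors, then plot the top two principal components
# of the resulting difference vectors.
import numpy as np

rng = np.random.default_rng(1)
target_vecs = rng.normal(size=(12, 32))      # stand-in for g_proj(f(x)), 12 targets
shared_prec = rng.normal(size=(4, 32))       # stand-ins for Y2O3, In2O3, Al2O3, Fe2O3
group = np.repeat(np.arange(4), 3)           # which shared precursor each target uses

diff = target_vecs - shared_prec[group]      # g_proj(f(x)) - p_i

# PCA via SVD of the centered difference vectors
centered = diff - diff.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
coords = centered @ vt[:2].T                 # top two principal components
print(coords.shape)                          # (12, 2) -> points for the 2-D plot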
Recommendation of precursor materials

With the capability of measuring similarity, a natural solution to precursor selection is to replicate the literature-based approach used by experimental researchers. Given a novel material to synthesize, we initialize our recommendation by first proposing a recipe consisting of common precursors for each metal/metalloid element in the target material, because this is likely to be the first attempt in a lab. Then, we encode the novel target material and the known target materials in the knowledge base using the PrecursorSelector encoding model from Section "Materials encoding for precursor selection" and calculate the similarity between the novel target and each known material with Eq. 1. We rank the known materials based on their similarity to the target such that a reference material can be identified that is the most similar to the novel target. When the precursors used in the synthesis of the reference material cannot cover all elements of the target, we use MPC in Fig. 3B to predict the missing precursors. For example, for Y2FeSbO7 (Fig. 1B), the most similar material in the knowledge base is FeSbO4. It is reasonable to assume that the precursors Fe2O3 and Sb2O5 used in the synthesis of FeSbO4 (38) can also be used to synthesize Y2FeSbO7. Because the Y source is missing, MPC finds that Y2O3 is likely to fit with Fe2O3 and Sb2O5 for the synthesis of Y2FeSbO7, ending up as a complete precursor set (Fe2O3, Sb2O5, and Y2O3) (39). Multiple attempts of recommendation are feasible by moving down the list of known materials ranked to be most similar to the novel target.
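The recommendation loop just described can be summarized in a short, runnable toy sketch. The element-overlap similarity, the canned common-precursor lookup, and the two-entry knowledge base below are stand-ins for Eq. 1, the MPC model, and the full knowledge base; only the control flow (rank known targets by similarity, then fill uncovered elements) mirrors the pipeline.

# Toy sketch of the similarity-ranked recommendation loop.  Stubs: shared-element
# overlap replaces Eq. 1, and a common-precursor lookup replaces the MPC model.
COMMON = {"Y": "Y2O3", "Fe": "Fe2O3", "Sb": "Sb2O5", "Al": "Al2O3"}  # assumed table
SOURCE = {"Fe2O3": "Fe", "Sb2O5": "Sb", "Y2O3": "Y", "Al2O3": "Al"}

knowledge_base = {                        # known target elements -> literature precursors
    ("Fe", "Sb", "O"): {"Fe2O3", "Sb2O5"},
    ("Y", "Al", "O"): {"Y2O3", "Al2O3"},
}

def similarity(a, b):                     # stand-in for Eq. 1
    return len(set(a) & set(b)) / len(set(a) | set(b))

def recommend(target_elements, n_attempts=5):
    ranked = sorted(knowledge_base,
                    key=lambda k: similarity(target_elements, k), reverse=True)
    proposals = []
    for known in ranked[:n_attempts]:
        precursors = set(knowledge_base[known])
        covered = {SOURCE[p] for p in precursors}
        missing = (set(target_elements) - {"O"}) - covered
        precursors |= {COMMON[e] for e in missing}   # stand-in for MPC filling
        proposals.append(precursors)
    return proposals

# Y2FeSbO7 example from the text: reference FeSbO4, then the Y source is added.
print(recommend(("Y", "Fe", "Sb", "O"))[0])   # {'Fe2O3', 'Sb2O5', 'Y2O3'} (order may vary)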
To evaluate our recommendation pipeline, we conduct a validation (Fig. 5) using the 33,343 synthesis recipes text-mined from the scientific literature. Using the knowledge base of 24,034 materials reported by the year 2014, we predict precursors for 2,654 test target materials newly reported from 2017 to 2020 (more details in Section "Data preparation"). Because multiple precursors exist for each element, the number of possible precursor combinations increases combinatorially with the number of elements present in the target material. A good precursor prediction algorithm is expected to select, from hundreds of possible precursor combinations, those that have a higher probability of success. For each test material, we attempt to propose five different precursor sets. For each attempt, we calculate the percentage of test materials being successfully synthesized, where success means at least one set of proposed precursors has been observed in previous experiments. The first attempt defaults to the most common precursors, which leads to a 36% success rate; the similarity-based reference already increases the success rate to 73% at the second attempt. Within five attempts, the success rate of our recommendation pipeline using PrecursorSelector encoding is 82%, comparable to the performance of recommendations for organic synthesis (25). We note that, as defined here, "success" will be underestimated because some suggested precursor sets may actually lead to successful target synthesis even though they may not have been tried (and therefore do not appear in the data).

We also establish a baseline model ("Most frequent" in Fig. 5) that ranks precursor sets based on the product of frequencies with which different precursors are used in the literature (more details in Section "Baseline models"). This baseline simulates the typical early stage of the trial-and-error process, where researchers grid-search different combinations of precursors matching elements present in the target material without knowledge of the dependency of precursors (Fig. 2B). The success rate of this baseline is 58% within five attempts. Our recommendation pipeline performs better because the dependency of precursors is more easily captured when the combination of precursors is sourced from a previously used successful recipe for a similar target. Through in situ diffraction studies of synthesis (2–4), it is now better understood that some precursor sets do not lead to the target material because they form intermediate phases which consume much of the overall reaction energy, thereby leaving a low driving force to form the target. It is likely that our literature-informed precursor prediction approach implicitly captures some of this reactivity and pathway information, resulting in a higher prediction power than random selection or selection based on how common a precursor is.

In addition, we compare with three other baseline models ("Magpie encoding", "FastText encoding", and "Raw composition" in Fig. 5) using the same recommendation strategy but different encoding methods (more details in Section "Baseline models"). Magpie encoding (26, 27) is a set of attributes computed using the fraction of elements in a material, including stoichiometric attributes, elemental property statistics, electronic structure attributes, and ionic compound attributes. Precursor recommendation with Magpie encoding achieves a success rate of 68% within five attempts; it performs reasonably well because these properties reflect the material composition, and materials with close compositions generally tend to be similar. Similarly, precursor recommendation directly with the raw material composition achieves a success rate of 66% within five attempts. FastText encoding (17) utilizes the FastText model (40) to capture information about the co-occurrences of context words around material formulas/names in the literature. However, only 1,985 test materials can be digitized with FastText encoding due to the conflict between the limited vocabulary of n-grams and the variety of float numbers in material formulas. The success rate using FastText encoding is 56% within five attempts. Overall, the recommendation with PrecursorSelector encoding performs substantially better because the Magpie and FastText encodings are more generic and not dedicated to predictive synthesis. The PrecursorSelector encoding and MPC capture the correlation between synthesis variables and known target materials, which better extends to novel materials.

Fig. 5. Performance of various precursor prediction algorithms. For each of the 2,654 test target materials, the algorithm attempts to propose n (1 ≤ n ≤ 5, the x-axis) precursor sets. The y-axis shows the success rate, i.e., that at least one of the n proposed precursor sets is observed in previous experimental records. PrecursorSelector encoding: this work. Magpie encoding/FastText encoding/Raw composition: similar recommendation pipeline to this work but using the Magpie representation (26, 27)/FastText representation (17)/the raw material composition. Most frequent: select precursors by frequency.
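For clarity, here is a sketch of the top-n success-rate metric plotted in Fig. 5: a test target counts as a success at attempt n if at least one of its first n proposed precursor sets matches a literature-reported set. The two-target example data are invented for illustration.

# Sketch of the top-n success-rate metric of Fig. 5.
def success_rate(proposals_per_target, reported_sets_per_target, n):
    hits = 0
    for proposals, reported in zip(proposals_per_target, reported_sets_per_target):
        if any(frozenset(p) in reported for p in proposals[:n]):
            hits += 1
    return hits / len(proposals_per_target)

# Tiny worked example: two test targets, up to two attempts each.
proposals = [[{"La2O3", "Al2O3"}, {"La(NO3)3", "Al(NO3)3"}],
             [{"BaO", "TiO2"}]]
reported = [{frozenset({"La(NO3)3", "Al(NO3)3"})},   # only the nitrate route reported
            {frozenset({"BaCO3", "TiO2"})}]
print(success_rate(proposals, reported, 1))  # 0.0
print(success_rate(proposals, reported, 2))  # 0.5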
Discussion

Because of its heuristic nature, the decades of synthesis knowledge established in the literature are challenging to capture. By establishing a materials similarity measure that is a natural handle on chemical knowledge, and by leveraging a large-scale dataset of precedent synthesis recipes, our similarity-based recommendation strategy mimics human synthesis design and succeeds in precursor selection. The incorporation of precursor information into materials representations (Fig. 3) leads to a quantitative similarity metric that successfully reproduces a known precursor set 82% of the time in five attempts or less (Fig. 5). We discuss the strengths and weaknesses of this recommendation algorithm and its generalizability to broader synthesis prediction problems.

In this work, materials similarity is learned through an automatic feature extraction process mapping a target material to the combination of precursors. While learning the usage of precursors, useful chemical knowledge for synthesis practice is accordingly embedded in the PrecursorSelector encoding. The first level of knowledge about materials similarity is based on composition. For example, to synthesize Li7La3Nb2O13, PrecursorSelector encoding finds Li5La3Nb2O12 as a reference target material (Table 3) because their difference in composition is only one Li2O unit. PrecursorSelector encoding also reflects the consideration of valence in synthesis. Although it is not necessary to keep the valence in the precursor the same as that in the target, a precursor with valence states similar to the target is frequently used in practical synthesis (22). For example, to synthesize NaGa4.6Mn0.01Zn1.69Si5.5O20.1 (41), MnCO3 was used as the Mn source because the valence state of Mn is 2+ in both the target and the precursor. PrecursorSelector encoding finds Mn0.24Zn1.76SiO4 similar to NaGa4.6Mn0.01Zn1.69Si5.5O20.1 because the valence state of Mn is also 2+ in Mn0.24Zn1.76SiO4, despite NaGa4.6Mn0.01Zn1.69Si5.5O20.1 containing large fractions of Na and Ga while Mn0.24Zn1.76SiO4 does not. Our algorithm also captures the similarity of syntheses between compounds which have one element substituted. For example, PrecursorSelector encoding refers to CaZnSO for synthesizing SrZnSO because the elements Ca and Sr are regarded as similar. While such knowledge may appear obvious to the trained chemist, our approach enables it to be automatically extracted and condensed into a vectorized representation (Fig. 3), making it thereby available in a mathematical form, convenient for use in recommendation engines or automated labs (42).

Because of this customized synthesis similarity of materials and our precursor recommendation pipeline, we are able not only to recommend trivial solutions for target synthesis, such as the use of common precursors, but also to deal with more challenging situations. One typical scenario is the adoption of uncommon precursors. For example, Lalère et al. (43) used NaH2PO4 as the source of Na and P to synthesize Na3TiV(PO4)3, while the common precursors for Na and P are Na2CO3 and NH4H2PO4, respectively. It is not apparent from the composition of Na3TiV(PO4)3 that the uncommon precursor NaH2PO4 is needed. However, the similarity-based recommendation pipeline successfully predicts the use of NaH2PO4 by referring to a similar material, Na3V2(PO4)3 (44). A plausible reason for the choice of NaH2PO4 for Na3TiV(PO4)3 can also be inferred from the synthesis of Na3V2(PO4)3. Feng et al. (44) reported that NaH2PO4 was used to implement a one-pot solid-state synthesis of Na3V2(PO4)3, while Fang et al. (45) reported that a reductive agent and additional complex operations are needed when using Na2CO3 and NH4H2PO4. Similar outcomes may also apply to the synthesis of Na3TiV(PO4)3. A second example is the successful precursor recommendation for the target compound GdLu(MoO4)3. Instead of the common precursor MoO3, a less common precursor, (NH4)6Mo7O24, was adopted as the Mo source (46). The use of (NH4)6Mo7O24 may facilitate the mixing of different ions in the synthesis of GdLu(MoO4)3.
The adoption of uncommon precursors also provides clues in underexplored chemical spaces such as mixed-anion compounds (47). Taking the pentanary oxynitride material BaYSi2O5N (48) as an example, the five-component system, including multiple anions, implies that many precursor combinations can potentially yield the target phase, including oxides, nitrides, carbonates, etc. Our recommendation pipeline correctly identifies that a combination of SiO2 and Si3N4 facilitates the formation of BaYSi2O5N by referring to a quaternary oxynitride material, YSiO2N (49). Another challenging situation is that multiple precursors may be used for the same element. Usually, only one precursor is used for each metal/metalloid element in the target material, but exceptions do exist. For example, CuO and CuCl2 were used as the Cu source in the synthesis of Cu3Yb(SeO3)2O2Cl (50). Through analogy to Cu4Se5O12Cl2 (51), the recommended precursor set includes both CuO and CuCl2. Moreover, it is possible to predict multiple correct precursor sets by referring to multiple similar target materials. For example, two different sets of precursors for LiMn0.5Fe0.5PO4 were reported by Zhuang et al. (52) and Wang et al. (53). The recommendation pipeline predicts both by repurposing the precursor sets for LiMn0.8Fe0.2PO4 (54) and LiMn0.9Fe0.1PO4 (55).

The recommendation of precursors presented here is still imperfect. The engine we present is inherently limited by the knowledge base it is trained on, thereby biasing recommendations toward what has been done previously and lacking creativity for unprecedented combinations of precursors. For example, the metals Co and Te were used in the synthesis of Li3CoTeO6 (56), but no similar materials in the knowledge base use the combination of Co and Te as precursors. Another example is that SrCO3 and SrSO4 were used in the synthesis of Sr4Al6SO16 (57). Although the recommendation pipeline is, in principle, able to predict multiple precursors for the same element, a similar case using both SrCO3 and SrSO4 as the Sr source is not found in the knowledge base. Both examples end up as mispredictions. This situation could be improved as more data from text mining and high-throughput experiments (42) are added to the knowledge base. Furthermore, the success rate of the recommendation strategy may be underestimated in some cases. For example, BaO is predicted as the Ba source for synthesizing Ca7.5Ba1.5Bi(VO4)7, while BaCO3 is used in the reported synthesis (58). Given the slight difference between BaO and BaCO3, BaO may actually be suitable.

Besides the prediction of precursors, the similarity-based recommendation framework is a potential step toward general synthesis prediction. The same strategy can be extended to the recommendation of more synthesis variables, such as operations, device setups, and experimental conditions, by adding corresponding prediction tasks to the downstream part of the multi-task network (Fig. 3) for similarity measurement. For example, we may infer that a reduced atmosphere is necessary for synthesizing Na3TiV(PO4)3 (43) because it is used in the synthesis of the similar material Na3V2(PO4)3 (44). Moreover, synthesis constraints such as the type of synthesis method, temperature, morphology of the target material, particle size, and cost can be added as conditions of synthesis prediction. For example, we may integrate our previous work on synthesis temperature prediction to prioritize the predicted precursors within an expected temperature regime.
Our automated algorithm, mimicking the human design process for the synthesis of a new target, provides a practical solution to query decades of heuristic synthesis data in recommendation engines and autonomous laboratories.

Table 3. Representative successful and failed examples for precursor prediction using the similarity-based recommendation pipeline in this study.

Successful
Target | Reference target(s) | Expected precursors | Error in recommendation
Li7La3Nb2O13 (59) | Li5La3Nb2O12 (60) | LiOH, La2O3, Nb2O5 | N/A
NaGa4.6Mn0.01Zn1.69Si5.5O20.1 (41) | Mn0.24Zn1.76SiO4 (61) | MnCO3, Na2CO3, Ga2O3, SiO2, ZnO | N/A
SrZnSO (62) | CaZnSO (63) | SrCO3, ZnS | N/A
Na3TiV(PO4)3 (43) | Na3V2(PO4)3 (44) | NaH2PO4, NH4VO3, TiO2 | N/A
GdLu(MoO4)3 (46) | Gd2(MoO4)3 (64) | (NH4)6Mo7O24, Lu2O3, Gd2O3 | N/A
BaYSi2O5N (48) | YSiO2N (49) | Si3N4, SiO2, BaCO3, Y2O3 | N/A
Cu3Yb(SeO3)2O2Cl (50) | Cu4Se5O12Cl2 (51) | CuO, CuCl2, SeO2, Yb2O3 | N/A
LiMn0.5Fe0.5PO4 (52, 53) | LiMn0.8Fe0.2PO4 (54), LiMn0.9Fe0.1PO4 (55) | MnCO3, FeC2O4, LiH2PO4; Mn(CH3COO)2, FeC2O4, LiH2PO4 | N/A

Failed
Target | Reference target(s) | Expected precursors | Error in recommendation
Li3CoTeO6 (56) | LiCoO2 (65) | Co, Te, Li2CO3 | Co3O4, TeO2, LiOH
Sr4Al6SO16 (57) | SrAl2O4 (66) | SrCO3, SrSO4, Al(OH)3 | SrCO3, H2SO4, Al(OH)3
Ca7.5Ba1.5Bi(VO4)7 (58) | Bi3Ca9V11O41 (67) | BaCO3, NH4VO3, CaCO3, Bi2O3 | BaO, NH4VO3, CaCO3, Bi2O3

Materials and Methods

Representation learning for similarity of materials

The neural network consists of an encoder part for encoding target materials and a task part for predicting variables related to precursor selection. The encoder part f is a three-layer fully connected submodel transforming the composition of the target material x into a 32-dimensional target vector u = f(x). The input composition is an array with 83 units giving the fraction of each element. The reduced dimension of the encoded target vector is inspired by the bottleneck architecture of autoencoders (68). By limiting the dimension of the encoded vector, the network is forced to learn a more compact and efficient representation of the input data, which is more appropriate for the precursor selection-related downstream tasks (69). The task part uses different network architectures for different prediction tasks, including precursor completion and composition recovery in this work. The masked precursor completion (MPC) task replaces part of the precursors with a placeholder "[MASK]" (21) at random and uses the remaining precursors as a condition to predict the complete precursor set for the target material, which is formulated as a multi-label classification problem (70). An attention block gproj (32) is used to aggregate the target vector and the vectors for conditional precursors into a projected vector v = gproj(u; p1, p2, ...) with dimensionality of 32. Then, v is passed to the precursor classification layer represented by a 417 × 32 matrix P, each row of which is the 32-dimensional vector representation of a potentially used precursor pi. To avoid having too many neural network weights to learn, the precursor completion task only considers the 417 precursors used in at least five reactions in the knowledge base. The probability to use each precursor is given by sigmoid(pi⊤v), allowing non-exclusive prediction of multiple precursors (70). Here, v acts as a probe corresponding to the target material projected into the precursor space and is used to search for pi's with similar vector representations via a dot product. The conditional precursors input to gproj share the same trainable vector representations as the pi's.
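A numeric sketch of this scoring step, with random stand-ins for the learned matrix P and the projected vector v:

# v probes the precursor space; sigmoid(p_i^T v) gives each candidate's
# probability.  P is the 417 x 32 classification matrix whose rows double as
# the trainable embeddings of the conditional precursors (shared weights).
import numpy as np

rng = np.random.default_rng(2)
P = rng.normal(scale=0.3, size=(417, 32))   # stand-in for the learned matrix
v = rng.normal(size=32)                     # stand-in for g_proj(u; p1, p2, ...)

probs = 1.0 / (1.0 + np.exp(-(P @ v)))      # sigmoid(p_i^T v), one per candidate
top5 = np.argsort(probs)[::-1][:5]
print(top5, np.round(probs[top5], 2))       # indices of the 5 most probable precursors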
Circle loss (71) is used because of its benefits in capturing the dependency between different labels in multi-label classification and deep feature learning. The composition recovery task is a two-layer fully connected submodel decoding the target vector u back to the chemical composition x, similar to the mechanism of autoencoders (68, 72). Mean squared error loss is used because it is the standard choice for regression. More tasks predicting other synthesis variables, such as operations and conditions, can be appended in a similar fashion. To combine the loss functions in this multi-task neural network, an adaptive loss (73) is used to automatically weigh the different losses by considering the homoscedastic uncertainty of each task.

Baseline models

"Most frequent". This baseline model ranks precursor sets based on an empirical joint probability without considering the dependency of precursors (Fig. 2B). Assuming that the choices of precursors are independent of each other, the joint probability of selecting a specific set of precursors can be estimated as the product of their marginal probabilities. For each metal/metalloid element, different precursors can be used as the source. The marginal probability of using a precursor is estimated as the relative frequency of using that precursor over all precursors contributing the same metal/metalloid element. For example, the precursor set ranked in first place is always the combination of common precursors for each metal/metalloid element in the target material, which is also typically the first attempt in the lab.

"Magpie encoding". This baseline model uses the same recommendation strategy as Fig. 1, except that the similarity is calculated using Magpie encoding (26, 27). The composition of each target material is converted into a vector consisting of 132 statistical quantities such as the average and standard deviation of various elemental properties. The cosine similarity is used, as shown in Eq. 1. When the precursors from the reference target material cannot cover all elements of the novel target, the common precursors for the missing elements are supplemented, because MPC (Fig. 3B) is only trained for PrecursorSelector encoding.

"FastText encoding". Similar to the "Magpie encoding" baseline, this baseline model uses the same recommendation strategy as Fig. 1, except that the similarity is calculated using FastText encoding (17). The formula of each target material is converted into a 100-dimensional vector using the FastText model trained on materials science papers (17). The total number of target materials tested in this baseline model is 1,985 instead of 2,654 because some n-grams, such as certain float numbers corresponding to the amounts of elements, are not in the vocabulary.

"Raw composition". Similar to the "Magpie encoding" baseline, this baseline model uses the same recommendation strategy as Fig. 1, except that the similarity is calculated as the cosine similarity of the raw material compositions. The formula of each target material is converted into an 83-dimensional vector corresponding to the fraction of each element.
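Of the four baselines above, the "Most frequent" ranking is easily made concrete. A minimal sketch follows; the precursor frequencies are invented for illustration.

# "Most frequent" baseline sketch: rank precursor sets by the product of the
# marginal literature frequencies of their members (independence assumed).
from itertools import product

freq = {"Ba": {"BaCO3": 0.8, "BaO": 0.2},       # illustrative frequencies
        "Ti": {"TiO2": 0.9, "Ti": 0.1}}

def ranked_sets(elements):
    combos = []
    for choice in product(*(freq[e].items() for e in elements)):
        names = tuple(name for name, _ in choice)
        score = 1.0
        for _, p in choice:
            score *= p                           # joint probability under independence
        combos.append((score, names))
    return sorted(combos, reverse=True)

for score, names in ranked_sets(["Ba", "Ti"]):
    print(f"{score:.2f}", names)
# 0.72 ('BaCO3', 'TiO2')  <- the common precursors rank first, as in the text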
Data preparation

In total, 33,343 inorganic solid-state synthesis recipes extracted from 24,304 materials science papers (11) were used in this work. Because some material strings (e.g., Ba1−xSrxTiO3) extracted from the literature contain variables corresponding to different amounts of elements, we substituted these variables with their values from the text to ensure that a material in any reaction corresponds to only one composition, resulting in 49,924 expanded reactions and 28,598 target materials. An ideal test of the generalizability and applicability of this method would be to synthesize many entirely new materials using recommended precursors. In the absence of extensive new synthesis experiments, we designed a robust test to simulate precursor recommendation for target materials that are new to the trained model. We split the data based on the year of publication, i.e., the training set (or knowledge base) for reactions published by 2014, the validation set for reactions in 2015 and 2016, and the test set for reactions from 2017 to 2020. In addition, to avoid data leakage, where the synthesis of the same material can be reported again in a more recent year, we placed reactions for target materials with the same prototype formula in the same data set as the earliest record. The prototype formula was defined as the formula corresponding to a family of materials including (1) the formula itself, (2) formulas derived from a small amount (< 0.3) of substitution (e.g., Ca0.2La0.8MnO3 for the prototype formula LaMnO3), and (3) formulas able to be coarse-grained by rounding the amount of elements to one decimal place (e.g., Ba1.001La0.004TiO3 for the prototype formula BaTiO3). In the end, the number of reactions in the training/validation/test set was 44,736/2,254/2,934, from 29,900/1,451/1,992 original recipes. The number of target materials in the training/validation/test set was 24,304/1,910/2,654, respectively.

References

1. J. C. Hemminger, J. Sarrao, G. Crabtree, G. Flemming, M. Ratner, Challenges at the frontiers of matter and energy: Transformative opportunities for discovery science, Tech. rep., USDOE Office of Science (SC) (United States) (2015).

2. A. Miura, C. J. Bartel, Y. Goto, Y. Mizuguchi, C. Moriyoshi, Y. Kuroiwa, Y. Wang, T. Yaguchi, M. Shirai, M. Nagao, et al., Observing and modeling the sequential pairwise reactions that drive solid-state ceramic synthesis. Advanced Materials 33, 2100312 (2021).

3. M. Bianchini, J. Wang, R. J. Clément, B. Ouyang, P. Xiao, D. Kitchaev, T. Shi, Y. Zhang, Y. Wang, H. Kim, et al., The interplay between thermodynamics and kinetics in the solid-state synthesis of layered oxides. Nature Materials 19, 1088–1095 (2020).

4. Z. Jiang, A. Ramanathan, D. P. Shoemaker, In situ identification of kinetic factors that expedite inorganic crystal formation and discovery. Journal of Materials Chemistry C 5, 5709–5717 (2017).

5. E. Corey, Robert Robinson Lecture. Retrosynthetic thinking—essentials and examples. Chemical Society Reviews 17, 111–133 (1988).

6. A. Stein, S. W. Keller, T. E. Mallouk, Turning down the heat: Design and mechanism in solid-state synthesis. Science 259, 1558–1564 (1993).

7. M. H. Segler, M. Preuss, M. P. Waller, Planning chemical syntheses with deep neural networks and symbolic AI. Nature 555, 604–610 (2018).

8. J. R. Chamorro, T. M. McQueen, Progress toward solid state synthesis by design. Accounts of Chemical Research 51, 2918–2925 (2018).

9. H. Kohlmann, Looking into the black box of solid-state synthesis. European Journal of Inorganic Chemistry 2019, 4174–4180 (2019).

10. H. Schäfer, Preparative solid state chemistry: the present position. Angewandte Chemie International Edition in English 10, 43–50 (1971).
11. O. Kononova, H. Huo, T. He, Z. Rong, T. Botari, W. Sun, V. Tshitoyan, G. Ceder, Text-mined dataset of inorganic materials synthesis recipes. Scientific Data 6, 1–11 (2019).

12. E. Kim, K. Huang, A. Tomala, S. Matthews, E. Strubell, A. Saunders, A. McCallum, E. Olivetti, Machine-learned and codified synthesis parameters of oxide materials. Scientific Data 4, 1–9 (2017).

13. M. C. Swain, J. M. Cole, ChemDataExtractor: a toolkit for automated extraction of chemical information from the scientific literature. Journal of Chemical Information and Modeling 56, 1894–1904 (2016).

14. A. M. Hiszpanski, B. Gallagher, K. Chellappan, P. Li, S. Liu, H. Kim, J. Han, B. Kailkhura, D. J. Buttler, T. Y.-J. Han, Nanomaterial synthesis insights from machine learning of scientific articles by extracting, structuring, and visualizing knowledge. Journal of Chemical Information and Modeling 60, 2876–2887 (2020).

15. M. Aykol, J. H. Montoya, J. Hummelshøj, Rational solid-state synthesis routes for inorganic materials. Journal of the American Chemical Society 143, 9244–9259 (2021).

16. M. J. McDermott, S. S. Dwaraknath, K. A. Persson, A graph-based network for predicting chemical reaction pathways in solid-state materials synthesis. Nature Communications 12, 1–12 (2021).

17. E. Kim, Z. Jensen, A. van Grootel, K. Huang, M. Staib, S. Mysore, H.-S. Chang, E. Strubell, A. McCallum, S. Jegelka, et al., Inorganic materials synthesis planning with literature-trained neural networks. Journal of Chemical Information and Modeling 60, 1194–1201 (2020).

18. H. Huo, C. J. Bartel, T. He, A. Trewartha, A. Dunn, B. Ouyang, A. Jain, G. Ceder, Machine-learning rationalization and prediction of solid-state synthesis conditions. Chemistry of Materials 34, 7323–7336 (2022).

19. T. Mikolov, K. Chen, G. Corrado, J. Dean, Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013).

20. T. Mikolov, I. Sutskever, K. Chen, G. S. Corrado, J. Dean, Distributed representations of words and phrases and their compositionality. Advances in Neural Information Processing Systems 26 (2013).

21. J. Devlin, M.-W. Chang, K. Lee, K. Toutanova, BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018).

22. T. He, W. Sun, H. Huo, O. Kononova, Z. Rong, V. Tshitoyan, T. Botari, G. Ceder, Similarity of precursors in solid-state synthesis as text-mined from scientific literature. Chemistry of Materials 32, 7861–7873 (2020).

23. X. Jia, A. Lynch, Y. Huang, M. Danielson, I. Lang'at, A. Milder, A. E. Ruby, H. Wang, S. A. Friedler, A. J. Norquist, et al., Anthropogenic biases in chemical reaction data hinder exploratory inorganic synthesis. Nature 573, 251–255 (2019).

24. D. Rogers, M. Hahn, Extended-connectivity fingerprints. Journal of Chemical Information and Modeling 50, 742–754 (2010).

25. C. W. Coley, L. Rogers, W. H. Green, K. F. Jensen, Computer-assisted retrosynthesis based on molecular similarity. ACS Central Science 3, 1237–1245 (2017).

26. L. Ward, A. Agrawal, A. Choudhary, C. Wolverton, A general-purpose machine learning framework for predicting properties of inorganic materials. npj Computational Materials 2, 1–7 (2016).

27. L. Ward, A. Dunn, A. Faghaninia, N. E. Zimmermann, S. Bajaj, Q. Wang, J. Montoya, J. Chen, K. Bystrom, M. Dylla, et al., Matminer: An open source toolkit for materials data mining. Computational Materials Science 152, 60–69 (2018).
28. R. E. Goodall, A. A. Lee, Predicting materials properties without crystal structure: Deep representation learning from stoichiometry. Nature Communications 11, 1–9 (2020).

29. A. Y.-T. Wang, S. K. Kauwe, R. J. Murdock, T. D. Sparks, Compositionally restricted attention-based network for materials property predictions. npj Computational Materials 7, 1–10 (2021).

30. V. Tshitoyan, J. Dagdelen, L. Weston, A. Dunn, Z. Rong, O. Kononova, K. A. Persson, G. Ceder, A. Jain, Unsupervised word embeddings capture latent knowledge from materials science literature. Nature 571, 95–98 (2019).

31. Z. Pei, J. Yin, P. K. Liaw, D. Raabe, Toward the design of ultrahigh-entropy alloys via mining six million texts. Nature Communications 14, 54 (2023).

32. A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, I. Polosukhin, Attention is all you need. Advances in Neural Information Processing Systems 30 (2017).

33. Z.-y. Mao, Y.-c. Zhu, Q.-n. Fei, D.-j. Wang, Investigation of 515 nm green-light emission for full color emission LaAlO3 phosphor with varied valence Eu. Journal of Luminescence 131, 1048–1051 (2011).

34. E. Mendoza-Mendoza, K. P. Padmasree, S. M. Montemayor, A. F. Fuentes, Molten salts synthesis and electrical properties of Sr- and/or Mg-doped perovskite-type LaAlO3 powders. Journal of Materials Science 47, 6076–6085 (2012).

35. Z. Wang, O. Kononova, K. Cruse, T. He, H. Huo, Y. Fei, Y. Zeng, Y. Sun, Z. Cai, W. Sun, et al., Dataset of solution-based inorganic materials synthesis procedures extracted from the scientific literature. Scientific Data 9, 1–11 (2022).

36. K. Cruse, A. Trewartha, S. Lee, Z. Wang, H. Huo, T. He, O. Kononova, A. Jain, G. Ceder, Text-mined dataset of gold nanoparticle synthesis procedures, morphologies, and size entities. Scientific Data 9, 234 (2022).

37. K. Pearson, LIII. On lines and planes of closest fit to systems of points in space. The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science 2, 559–572 (1901).

38. E. Zvereva, O. Savelieva, Y. D. Titov, M. Evstigneeva, V. Nalbandyan, C. Kao, J.-Y. Lin, I. Presniakov, A. Sobolev, S. Ibragimov, et al., A new layered triangular antiferromagnet Li4FeSbO6: Spin order, field-induced transitions and anomalous critical behavior. Dalton Transactions 42, 1550–1566 (2013).

39. J. Luan, L. Zhang, K. Ma, Y. Li, Z. Zou, Preparation and property characterization of new Y2FeSbO7 and In2FeSbO7 photocatalysts. Solid State Sciences 13, 185–194 (2011).

40. P. Bojanowski, E. Grave, A. Joulin, T. Mikolov, Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics 5, 135–146 (2017).

41. S. Lv, B. Shanmugavelu, Y. Wang, Q. Mao, Y. Zhao, Y. Yu, J. Hao, Q. Zhang, J. Qiu, S. Zhou, Transition metal doped smart glass with pressure and temperature sensitive luminescence. Advanced Optical Materials 6, 1800881 (2018).

42. N. J. Szymanski, Y. Zeng, H. Huo, C. J. Bartel, H. Kim, G. Ceder, Toward autonomous design and synthesis of novel inorganic materials. Materials Horizons 8, 2169–2198 (2021).

43. F. Lalère, V. Seznec, M. Courty, J. Chotard, C. Masquelier, Coupled X-ray diffraction and electrochemical studies of the mixed Ti/V-containing NASICON: Na2TiV(PO4)3. Journal of Materials Chemistry A 6, 6654–6659 (2018).

44. P. Feng, W. Wang, K. Wang, S. Cheng, K. Jiang, Na3V2(PO4)3/C synthesized by a facile solid-phase method assisted with agarose as a high-performance cathode for sodium-ion batteries. Journal of Materials Chemistry A 5, 10261–10268 (2017).
45. Y. Fang, L. Xiao, X. Ai, Y. Cao, H. Yang, Hierarchical carbon framework wrapped Na3V2(PO4)3 as a superior high-rate and extended lifespan cathode for sodium-ion batteries. Advanced Materials 27, 5895–5900 (2015).

46. B. Wang, X. Li, Q. Zeng, G. Yang, J. Luo, X. He, Y. Chen, Efficiently enhanced photoluminescence in Eu3+-doped Lu2(MoO4)3 by Gd3+ substituting. Materials Research Bulletin 100, 97–101 (2018).

47. H. Kageyama, K. Hayashi, K. Maeda, J. P. Attfield, Z. Hiroi, J. M. Rondinelli, K. R. Poeppelmeier, Expanding frontiers in materials chemistry and physics with multiple anions. Nature Communications 9, 772 (2018).

48. T. Yasunaga, M. Kobayashi, K. Hongo, K. Fujii, S. Yamamoto, R. Maezono, M. Yashima, M. Mitsuishi, H. Kato, M. Kakihana, Synthesis of Ba1−xSrxYSi2O5N and discussion based on structure analysis and DFT calculation. Journal of Solid State Chemistry 276, 266–271 (2019).

49. Y. Kitagawa, J. Ueda, M. G. Brik, S. Tanabe, Intense hypersensitive luminescence of Eu3+-doped YSiO2N oxynitride with near-UV excitation. Optical Materials 83, 111–117 (2018).

50. M. Markina, K. Zakharov, E. Ovchenkov, P. Berdonosov, V. Dolgikh, E. Kuznetsova, A. Olenev, S. Klimin, M. Kashchenko, I. Budkin, et al., Interplay of rare-earth and transition-metal subsystems in Cu3Yb(SeO3)2O2Cl. Physical Review B 96, 134422 (2017).

51. D. Zhang, H. Berger, R. K. Kremer, D. Wulferding, P. Lemmens, M. Johnsson, Synthesis, crystal structure, and magnetic properties of the copper selenite chloride Cu5(SeO3)4Cl2. Inorganic Chemistry 49, 9683–9688 (2010).

52. H. Zhuang, Y. Bao, Y. Nie, Y. Qian, Y. Deng, G. Chen, Synergistic effect of composite carbon source and simple pre-calcining process on significantly enhanced electrochemical performance of porous LiFe0.5Mn0.5PO4/C agglomerations. Electrochimica Acta 314, 102–114 (2019).

53. L. Wang, Y. Li, J. Wu, F. Liang, K. Zhang, R. Xu, H. Wan, Y. Dai, Y. Yao, Synthesis mechanism and characterization of LiMn0.5Fe0.5PO4/C composite cathode material for lithium-ion batteries. Journal of Alloys and Compounds 839, 155653 (2020).

54. Q.-Q. Zou, G.-N. Zhu, Y.-Y. Xia, Preparation of carbon-coated LiFe0.2Mn0.8PO4 cathode material and its application in a novel battery with Li4Ti5O12 anode. Journal of Power Sources 206, 222–229 (2012).

55. H. Yi, C. Hu, H. Fang, B. Yang, Y. Yao, W. Ma, Y. Dai, Optimized electrochemical performance of LiMn0.9Fe0.1−xMgxPO4/C for lithium ion batteries. Electrochimica Acta 56, 4052–4057 (2011).

56. G. Heymann, E. Selb, M. Kogler, T. Götsch, E.-M. Köck, S. Penner, M. Tribus, O. Janka, Li3Co1.06(1)TeO6: synthesis, single-crystal structure and physical properties of a new tellurate compound with Co(II)/Co(III) mixed valence and orthogonally oriented Li-ion channels. Dalton Transactions 46, 12663–12674 (2017).

57. J. S. Ndzila, S. Liu, G. Jing, J. Wu, L. Saruchera, S. Wang, Z. Ye, Regulation of Fe3+-doped Sr4Al6SO16 crystalline structure. Journal of Solid State Chemistry 288, 121415 (2020).

58. N. G. Dorbakov, V. V. Titkov, S. Y. Stefanovich, O. V. Baryshnikova, V. A. Morozov, A. A. Belik, B. I. Lazoryak, Barium-induced effects on structure and properties of β-Ca3(PO4)2-type Ca9Bi(VO4)7. Journal of Alloys and Compounds 793, 56–64 (2019).

59. H. Peng, X. Luan, L. Li, Y. Zhang, Y. Zou, Synthesis and ion conductivity of Li7La3Nb2O13 ceramics with cubic garnet-type structure. Journal of The Electrochemical Society 164, A1192 (2017).
60. L. van Wüllen, T. Echelmeyer, H.-W. Meyer, D. Wilmer, The mechanism of Li-ion transport in the garnet Li5La3Nb2O12. Physical Chemistry Chemical Physics 9, 3298–3303 (2007).

61. K. Park, H. Lim, S. Park, G. Deressa, J. Kim, Strong blue absorption of green Zn2SiO4:Mn2+ phosphor by doping heavy Mn2+ concentrations. Chemical Physics Letters 636, 141–145 (2015).

62. C. Chen, Y. Zhuang, D. Tu, X. Wang, C. Pan, R.-J. Xie, Creating visible-to-near-infrared mechanoluminescence in mixed-anion compounds SrZn2S2O and SrZnSO. Nano Energy 68, 104329 (2020).

63. C. Duan, A. Delsing, H. Hintzen, Photoluminescence properties of novel red-emitting Mn2+-activated MZnOS (M = Ca, Ba) phosphors. Chemistry of Materials 21, 1010–1016 (2009).

64. J. Thirumalai, R. Krishnan, I. Shameem Banu, R. Chandramohan, Controlled synthesis, formation mechanism and luminescence properties of novel 3-dimensional Gd2(MoO4)3:Eu3+ nanostructures. Journal of Materials Science: Materials in Electronics 24, 253–259 (2013).

65. R. Alcantara, J. Jumas, P. Lavela, J. Olivier-Fourcade, C. Pérez-Vicente, J. Tirado, X-ray diffraction, 57Fe Mössbauer and step potential electrochemical spectroscopy study of LiFeyCo1−yO2 compounds. Journal of Power Sources 81, 547–553 (1999).

66. Y. Zhu, J. Zeng, W. Li, L. Xu, Q. Guan, Y. Liu, Encapsulation of strontium aluminate phosphors to enhance water resistance and luminescence. Applied Surface Science 255, 7580–7585 (2009).

67. I. Radosavljevic, J. A. Howard, A. W. Sleight, J. S. Evans, Synthesis and structure of Bi3Ca9V11O41. Journal of Materials Chemistry 10, 2091–2095 (2000).

68. Y. Bengio, A. Courville, P. Vincent, Representation learning: A review and new perspectives. IEEE Transactions on Pattern Analysis and Machine Intelligence 35, 1798–1828 (2013).

69. M. Tschannen, O. Bachem, M. Lucic, Recent advances in autoencoder-based representation learning. arXiv preprint arXiv:1812.05069 (2018).

70. F. Herrera, F. Charte, A. J. Rivera, M. J. d. Jesus, Multilabel Classification (Springer, 2016), pp. 17–31.

71. Y. Sun, C. Cheng, Y. Zhang, C. Zhang, L. Zheng, Z. Wang, Y. Wei, Circle loss: A unified perspective of pair similarity optimization. arXiv preprint arXiv:2002.10857 (2020).

72. G. E. Hinton, R. R. Salakhutdinov, Reducing the dimensionality of data with neural networks. Science 313, 504–507 (2006).

73. A. Kendall, Y. Gal, R. Cipolla, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2018), pp. 7482–7491.

74. Y. Liu, M. Ott, N. Goyal, J. Du, M. Joshi, D. Chen, O. Levy, M. Lewis, L. Zettlemoyer, V. Stoyanov, RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692 (2019).

75. D. P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014).

76. L. Prechelt, Neural Networks: Tricks of the Trade (Springer, 2002), pp. 55–69.

Acknowledgements

The authors thank Prof. Wenhao Sun, Dr. Anubhav Jain, Prof. Elsa Olivetti, and Dr. Olga Kononova for valuable discussions.

Funding: The U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences, Materials Sciences and Engineering Division (DE-AC02-05-CH11231, D2S2 program KCD2S2). The Assistant Secretary of Energy Efficiency and Renewable Energy, Vehicle Technologies Office, U.S. Department of Energy (DE-AC02-05CH11231). The National Science Foundation (DMR-1922372).
Savio computational cluster resource provided by the Berkeley Research Computing program at the University of California, Berkeley (supported by the UC Berkeley Chancellor, Vice Chancellor for Research, and Chief Information Officer).

Author Contributions: Conceptualization: TH, GC. Methodology: TH, HH, CJB, ZW, KC. Investigation: TH. Visualization: TH. Supervision: GC. Writing—original draft: TH, GC. Writing—review & editing: TH, HH, CJB, ZW, KC, GC.

Competing Interests: The authors declare that they have no competing interests.

Data and materials availability: All data needed to evaluate the conclusions in the paper are present in the paper and/or the Supplementary Materials. The code for the similarity-based synthesis recommendation algorithm and the data supporting the findings of this study are available at the Dryad repository https://doi.org/10.6078/D1XD96 and the GitHub repository https://github.com/CederGroupHub/SynthesisSimilarity.
ai_researcher
1
Development_of_a_Threat_Assessment_Framework_Applicable_to_Dual_Use_Biotechnology_Results_of_a_Study_to_Determine_the_Feasibility_Applicability_and_Potential_Design_of_a_Threat_Assessment_Framework_Concept.pdf
arXiv:1709.01149v2 [physics.pop-ph] 12 Dec 2018

Biotechnology and the lifetime of technical civilizations

John G. Sotos, MD∗

November 28, 2018

Abstract

The number of people able to end Earth's technical civilization has heretofore been small. Emerging dual-use technologies, such as biotechnology, may give similar power to thousands or millions of individuals. To quantitatively investigate the ramifications of such a marked shift on the survival of both terrestrial and extraterrestrial technical civilizations, this paper presents a two-parameter model for civilizational lifespans, i.e. the quantity L in Drake's equation for the number of communicating extraterrestrial civilizations. One parameter characterizes the population lethality of a civilization's biotechnology and the other characterizes the civilization's psychosociology. L is demonstrated to be less than the inverse of the product of these two parameters. Using empiric data from PubMed to inform the biotechnology parameter, the model predicts human civilization's median survival time as decades to centuries, even with optimistic psychosociological parameter values, thereby positioning biotechnology as a proximate threat to human civilization. For an ensemble of civilizations having some median calculated survival time, the model predicts that, after 80 times that duration, only one in 10^24 civilizations will survive – a tempo and degree of winnowing compatible with Hanson's "Great Filter." Thus, assuming that civilizations universally develop advanced biotechnology, before they become vigorous interstellar colonizers, the model provides a resolution to the Fermi paradox.

∗Air Division, Joint Forces Headquarters, California National Guard, Sacramento, CA 95826. The views expressed are those of the author and do not necessarily reflect the official policy or position of the California Military Department, the Air Force, the Department of Defense, or the U.S. Government.
1 Introduction

In 1961 Drake introduced a multi-parameter equation to estimate the number of civilizations in the galaxy capable of interstellar communication∗ (Drake 1961). Soon after, von Hoerner, Shklovskii, and Sagan (von Hoerner 1961) (Shklovskii & Sagan 1966) concluded that the equation's precision depended principally on its parameter L – the mean lifetime of a communicating civilization – because L's value was uncertain over several orders of magnitude. While subsequent advances in astrophysics have improved the precision of several parameters in the Drake equation (Burchell 2006) (Frank & Sullivan 2016) (Vakoch & Dowd 2015), L remains highly uncertain (Oliver & Billingham 1971) (Ambartsumian & Sagan 1973) (Billingham et al. 1979) (Duncan 1991) (Schenkel 1999) (Kompanichenko 2000) (Rubin 2001) (Forgan 2009) (Maccone 2010).

The apparent absence of communicating civilizations (Webb 2015) in our planet-rich galaxy (Cassan et al. 2012) underscores the possibility that such civilizations have short L (Webb 2015) (Bostrom & Cirkovic 2011), potentially due to factors exogenous to the civilization (e.g., nearby supernovae) and/or endogenous to the civilization (e.g., self-destruction).

On Earth, control of endogenous factors that could destroy civilization – namely, Malthusian resource exhaustion, nuclear weapons, and environmental corruption – has until now rested with the very few persons who command large nuclear arsenals or steer the largest national economies. However, emerging technologies could change this. For example, biotechnology (President's Council of Advisors on Science and Technology 2016) and nanotechnology (Drexler 1987) offer the prospect of self-replicating elements able to spread autonomously and calamitously worldwide, at low cost and without heavy industrial machinery. Ultimately, thousands of individuals – having varying levels of impulse control – could wield such technologies.

Intuition suggests danger rises as potentially civilization-ending technology ("CE technology") becomes more widely distributed, but quantitative analyses of this effect in the context of Drake's L are rare. At the extreme of technology diffusion, Cooper (Cooper 2013) modeled an entire population of 10^10 individuals (growing at 2% annually), each with a 10^−7 annual probability of unleashing a biological agent causing 50% mortality (with a 25% standard deviation). He found a mean span of L = 8000 years before extinction, defined as a population less than 4000.

This article generalizes Cooper's work. It develops a simple two-parameter mathematical model for L that applies to most scenarios of disseminated CE technology and is mathematically indifferent to specific CE technologies. For reasons summarized below, however, biotechnology may be regarded as a universal CE technology.

∗For brevity, "civilization" in this paper refers to a civilization capable of interstellar communication, and the "lifespan" or "lifetime" of a civilization is the span of time during which it is able to communicate. Thus, the "death," "silencing," or "ending" of a civilization are synonymous.

2 Biotechnology's potential to end civilizations

On Earth, microbial pandemics have ended non-technical civilizations (McNeill 1976). Antimicrobial drugs mitigate such risks only partially. Advisors to the President of the United States have already warned that biotechnology's rapid progress may soon make possible engineered microorganisms that hold "serious potential for destructive use by both states and technically-competent individuals with access to modern laboratory facilities" (President's Council of Advisors on Science and Technology 2016). Indeed, small research groups engineered proof-of-principle demonstrations years ago (Jackson et al. 2001) (Herfst et al. 2012) (Imai et al. 2012), while recent history provides a precedent not only for a laboratory-preserved organism causing a worldwide pandemic∗ (Wertheim 2010) (Rozo & Gronvall 2015), but also for the organism's descendants circulating for 30 years in the global population (Zimmer & Burke 2009). Looking forward, medical research initiatives such as the Cancer Moonshot (National Cancer Institute 2018) may, if successful, seed thousands of hospitals with exquisitely targetable cell-killing biotechnology that could, in principle, be adapted and aimed at any genetically defined target, not just cancer cells.

∗This pandemic miserably sickened the author in early 1978.

Any technically-capable intelligence produced by evolution likely shares this susceptibility. "Genetic" processes, defined here as those that pass information to build a succeeding generation or direct the self's use of sustaining energy, are required for evolution (Farnsworth et al. 2013). Assuming that no process can be perfect, imperfections in genetic processes equate to "genetic diseases," and will spur any intelligence having self-preservation drives to develop genetic manipulation technology to ameliorate those diseases.
Given this motivation to alter genetic processes, plus the biological certainty that genetic processes respond to environmental inputs (e.g. food shortages), plus a general technical capacity to control environments ever more precisely, the eventual appearance of biotechnology may be expected. Cooper (Cooper 2013) expects that civilizations will typically develop biotechnology and spaceflight approximately simultaneously.

Biotechnology is inescapably threatening because it is inherently dual-use (Watson et al. 2018): curing genetic disease enables causing genetic disease. Cooper (Cooper 2013) uses Cohen's theorem (Cohen 1987) to assert that, under any reasonable model of computing (applied here to bio-molecular computing), no algorithm ("medical treatment") can stop every possible piece of invasive self-replicating software. Whether Cohen's theorem strictly applies or not, the truism that defensive technology generally lags offensive is relevant.

Of course, any civilization can walk away from any technology. But, because other widely available technologies with civilization-ending potential, e.g., nanotechnology, lack the a priori universal desirability of biotechnology, only biotechnology will herein be further discussed.

3 Model and Results

The baseline model assumes that all communicating technical civilizations either continue communicating forever, or go silent involuntarily due to some action arising within each civilization. Two parameters model the lifespan of such civilizations: E, the number of entities (individuals, coalitions, nation-states, etc.) in the civilization who control a means to end civilization (i.e., render it uncommunicative), and P, the uniform probability per annum per entity that an entity will trigger its civilization-ending means. Entities act independently, and civilization is assumed to end with the first trigger. The simplest model for the probability, C(y), that the civilization will still be communicative after y years, under constant E and P, is:

C(y) = (1 − P)^(Ey)    (1)

Solving Equation 1 for y:

y = ln C(y) / [E ln(1 − P)]    (2)

Borrowing the abbreviation LD50 from pharmacology, where it indicates the median lethal dose of a substance, it is here re-conceptualized as "lethal duration 50" to indicate the number of years, under a given E and P, before civilization's accumulated probability of being uncommunicative, 1 − C(y), is 50%. Substituting C(y) = 1 − 0.50 into Equation 2 yields:

LD50 = ln(1 − 0.50) / [E ln(1 − P)]    (3)
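A direct numeric reading of Equations 1–3 follows; the E and P values below are arbitrary illustrations, not estimates.

# Survival probability after y years (Eq. 1) and the LD horizon (Eq. 3,
# generalized to any accumulated silencing probability), for a chosen E and P.
import math

def C(y, E, P):
    return (1.0 - P) ** (E * y)            # Equation 1

def LD(fraction, E, P):
    return math.log(1.0 - fraction) / (E * math.log(1.0 - P))  # Eq. 3 generalized

E, P = 100, 1e-4
print(round(LD(0.50, E, P), 1))            # ~69.3 years, close to 0.7/(E*P) = 70
print(round(C(69.3, E, P), 3))             # ~0.5, consistency check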
Remarkably, the time required to reach infinitesimal survival rates, e.g., 10−24, is less than two orders of magnitude larger than the median civilizational survival time, LD50. Figure 2 plots the relationship between E and LD50 for several P , and illustrates the approximation LD50 ≈ 0.7/(E × P ), derived in Equation A3 of the Mathematical Appendix. To calculate the mean lifespan, it is more intuitive to first calculate the number of communicating civilizations, N (w), that exist at the end of a time window extending from year y = 0 to y = w. Assuming that zero civilizations existed at y = 0, and that communicating civilizations were born at a constant rate of B per year throughout 5 01234567Time, t,(in multiples of LD50)020406080100% of Civilizations Not Communicating, 100(1-C(t)) LD05 LD50 LD90 LD95 LD99Civilization Silencing Curve01020304050607080Time, t, in multiples of LD5010010−310−610−910−1210−1510−1810−2110−24Fraction of Civilizations Still Communicating = C(t) LD50 LD99 LD99.9999 LD99.9999999 LD99.9999999999Civilization Survival Curve Figure 2: Technology diffusion (E) and psychosociology (P ) determine civi- lizational lifespan (LD50). E is the number of entities who control a means to end civilization. P is the probability per annum per entity that the entity will trigger its civilization-ending means. Given a constant E and P , LD50 is the median number of years before civilization is expected to end. E and LD50 have an inverse linear relationship for any P . 6 100101102103104105106107108109E: Number of Entities100101102103104105106107108109LD50 (years)P=0.01P=0.001P=0.0001P=1e-05P=1e-06P=1e-07P=1e-08P=1e-09 the time window, the Mathematical Appendix shows: N (w) = B (cid:90) w 0 C(y) dy = B Sw − 1 ln S where S = (1 − P )E (is A5) (is A8) ≈ B EP (cid:20) for w ≥ 10 EP (cid:21) and small P (is A10) Figure 3 plots the exact form of N (w) from Equation A8, for multiple w and EP when B = 1. It shows with reasonable precision that N (w) ≤ B/(EP ) for any w. The parameter L in the Drake equation is reformulated herein to L(w), the mean lifespan for civilizations born during a time window of duration w. This transforms the Drake equation to: N (w) = B L(w) (is A12) Thus, L(w) = N (w) when B = 1, and so Figure 3 is also a plot of L(w). Per Figure 3, L(w) increases with w. However, its maximum value, at any time, is constrained. Assuming all civilizations have identical E and identical P : L(w)max < 1 EP [for all w] (is A15) Combining these two formulae and defining N as “N (w) for all w” yields the Drake equation as an inequality: or, hewing to its classical form (Drake 1961): N < B EP N < R∗fpneflfifc EP (4) (5) Because the model addresses only endogenous involuntary silencings, adding consid- eration of other causes for silencings would merely reinforce this inequality. To produce near-term risk estimates for Earth, a PubMed search informed the value of E, as follows. With the assumption of a civilization-ending technology based 7 Figure 3: Civilizations and time. For six different values of E × P , the plot shows two equivalent quantities for time windows of various durations w: (a) L(w) = mean lifetime of communicating civilizations over time, and (b) N (w) = number of communicating civilizations over time when B = 1. For both quantities, a constant B is assumed. Zero civilizations exist at time w = 0. Equation A15 mandates L(w) < 1/(EP ) for all w. Per Equation A9, L(w) grows substantially until a near- steady-state is reached at about w = 10/(EP ) years. 
An arbitrary-precision software package (Johansson et al. 2014) used Equation A8 to calculate N(w) and L(w).

To produce near-term risk estimates for Earth, a PubMed search informed the value of E, as follows. With the assumption of a civilization-ending technology based on some yet-to-be-described genetic technique, the number of people authoring scientific articles indexed under "genetic techniques" (one of PubMed's ≈27,000 standard index terms) can be used to estimate the number of people capable of exploiting such a technique, thereby serving as a proxy for E. Thus, the PubMed search

genetic techniques[mh] AND "2008/01/01"[PDAT]:"2015/12/31"[PDAT]

performed on August 10, 2017, yielded 594,458 publications in the most recent eight-year span of complete bibliographic coverage. After eliminating non-scientific publications (of type letter, comment, news, interview, etc.) 585,004 remained, which carried 1,555,661 unique author names. Of these authors, approximately 179,765 appeared on five or more publications. This number is a maximum because some authors publish under more than one name.

Models employing non-constant E and P are possible. The simplest posits that E grows as population might: a fixed percent per year. If, over y years, E grows this way from some initial value E0, with the growth continuously compounded, then:

E_y = E0 e^(ry)    (6)

where r is the growth factor (e.g. 0.02 for 2% annual growth) and e = 2.71828.... Unfortunately, the unbounded exponential term renders this "growth model" nonsensical for even moderately large y. Still, some insights can emerge for short time horizons, as detailed in Figure 4, which is based on Equation A18 in the Appendix. Unsurprisingly, a growing E yields an LD50 significantly smaller than is calculated from a constant E.

4 Discussion of Model

Unless explicitly noted, all discussion refers to the baseline model in which E is constant.

Equation 1 provides the probability, C(y), that a civilization survives endogenous involuntary silencing threats until some L = y. The lethal durations, LD50 et al., are probabilistic statements of this L. Because the model terminates upon the first use of a civilization-ending technology, more complicated models, such as the Poisson distribution, are not required. Thus, the model is simple, but not unreasonably so. With only two parameters, however, it is important to understand their inherent assumptions.

Figure 4: Drop in LD50 when E grows. The horizontal axis corresponds to LD50 values calculated from Equation 3 and a constant E and P. If, however, E is not constant, and instead grows at a fixed percentage annually (five growth rates are shown), then LD50 shrinks to the corresponding value on the vertical axis, according to Equation A19. So, for example, an LD50 of 600 years derived from Figure 2 would be revised to approximately 190 years if E grew by 1% annually. To signal wariness about exponential explosion, each solid line changes to a dotted line when the number of entities has increased a million-fold (i.e., E_y/E0 ≥ 10^6).
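The growth-model correction of Equation A19 is a one-liner; the sketch below (my code, not the author's) reproduces the caption's worked example:

import math

def ld50_growing_E(ld50_constant, r):
    """Eq. A19: LD50 under exponentially growing E, from the constant-E LD50."""
    return math.log(1.0 + r * ld50_constant) / r

# Figure 4 caption example: a constant-E LD50 of 600 years, with E growing 1%/yr,
# shrinks to ln(1 + 0.01*600)/0.01 = ln(7)/0.01 ~ 195 years ("approximately 190" above).
print(f"{ld50_growing_E(600, 0.01):.0f} years")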
Model Discussion: E and P (and B)

In broad terms, E characterizes a CE technology and its availability, while P characterizes the psychology and sociology of the entities who possess the technology. Although loss of interstellar communicativeness is equated to the end of civilization, other endpoints (e.g., complete extinction) could be substituted. The only criteria are consistency of the endpoint, independence of the entities, and termination of the model upon first triggering.

Numerous subtleties attend the definitions of E and P.

First, E applies to any CE technology, be it nuclear, nano-, bio-, or another. The CE technology never fails to end civilization once triggered. The effects of "near miss" extinction events on population and psychology are ignored.

Second, E includes only entities that possess (or can acquire) the "full stack" of CE technology. That is, they must have the capability to make or otherwise obtain the weapon, and to deliver it in quantities that render the civilization uncommunicative. So, for example, even though designs for nuclear weapons are comparatively well known (Phillips 1978), E for Earth remains only ≈ 2 (representing the leaders of the United States and Russia).∗ The self-propagating nature of biological weapons would simplify, but not eliminate, the delivery challenge.

∗ E would be slightly higher if additional other leaders could end civilization via climate change.

Third, to the extent that machine intelligences possess CE technology, they could also be counted in E. (Exemplar: "SkyNet" from the Terminator movies.)

Fourth, E reflects a balance between offensive and defensive technologies. Thus, developing and readying defensive technology offers a straightforward, albeit challenging, path to markedly decrease E.

Fifth, P is the sum across all reasons, intended or not, that an entity might trigger the CE technology. Most are psychosocial, e.g., greed, hate, stupidity, folly, gullibility, power-lust, mental illness, ineptitude, non-fail-safe design, etc. The Bulletin of the Atomic Scientists' "doomsday clock" (Anonymous 2002) has similarities to P.

Sixth, the model assumes constant E and P throughout the time window of interest. This is unlikely to occur in a real civilization, given the dynamics of offensive/defensive technologies, population, sociopolitical stability, and technology diffusion. Simple model extensions would have E and P vary over time, or sum across subpopulations of entities each with their own E_i and P_i, or sum across multiple CE technologies each with their own E_j and P_j.†

† The model would become complex to the extent that interaction terms would be needed to model a single entity having access to multiple CE technologies. However, modeling the technologies separately and then choosing the most pessimistic outcome would likely suffice.

Unlike Cooper (Cooper 2013), population growth – and concomitant growth in E – is omitted from the baseline model because all realistic non-zero growth rates become nonsensical when compounded (exponentiated) over eons. Over short time frames, the effect of a growing E can be reasonably equated to a speed-up in time. For example, when E is constant and the model reaches some state at year y, a situation in which E is growing by 2% annually will attain the same state significantly earlier, at year 50 ln(1 + y/50), according to Equation A19.
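The subpopulation extension suggested in the sixth point is itself a one-line change — exposures multiply across independent groups. A minimal sketch (my construction, with arbitrary illustrative numbers); it also numerically illustrates the next paragraph's claim that spreading P around a mean lowers survival relative to a uniform P:

import math

def C_subpops(y, groups):
    """Survival with independent subpopulations; groups = [(E_i, P_i), ...].
    Exposures multiply: C(y) = prod_i (1 - P_i)^(E_i * y)."""
    return math.prod((1.0 - P) ** (E * y) for E, P in groups)

uniform = [(10, 0.01)]                    # 10 entities, all at the mean P = 0.01
spread  = [(5, 0.019), (5, 0.001)]        # same mean P, unevenly distributed
print(f"{C_subpops(50, uniform):.5f}  {C_subpops(50, spread):.5f}")
# ~0.00656 vs ~0.00643: the spread case survives slightly less well.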
The model's flexibility could be improved – at the cost of great mathematical complexity – by assigning probability distributions to E and P and convolving them. However, models that assume a distribution around some mean value for P (denoted Pmean) will yield lower values for C(y) and LD50 than the present model, because of the positive exponent in the definition of C(y). Thus, this model's dispiritingly low values for LD50 nevertheless represent a civilization's best-case outcome for a given Pmean. This is most obviously appreciated in the edge case where a single entity has its P = 1, for example, an entity who acquires the skills of a CE technology specifically to end civilization. As soon as a single qualified entity has P = 1, then the overall civilizational P is also 1, and LD50 (in fact, all LDx) is zero.

Civilizations spanning multiple planets should be treated as multiple civilizations, each modeled separately with their own E and P. Modeling them as a single civilization assumes all the planets' civilizations die from one attack – an unnecessarily stringent requirement. Of course, P might change on planets that see a sister planet destroy itself.

Although colonization would imply a non-constant B, the model would still apply so long as B is less than some constant Bmax. Using Bmax in the model would provide an upper bound for N(w). Geometrically increasing B would require re-working the model, but the barrenness of the galaxy mitigates this possibility: Tipler (Tipler 1980) and others (Webb 2015) (Jones 1981) (Armstrong & Sandberg 2013) note that a single civilization colonizing at even moderate rates of geometric increase would fill the galaxy in only a few million years, and we do not observe a full galaxy.

Furthermore, assuming that the technology of interstellar colonization is far more daunting than biotechnology, and that the self-preservation drives of individual intelligences far exceed any elective desire to migrate off-planet, it is reasonable to expect that, as a rule, civilizations will develop and use sophisticated biotechnology before dispersing themselves on other planets (Cooper 2013). Thus, the experience of 20th century Earth is likely typical, i.e., the progress of medicine and public health in the era antedating genetic biotechnology creates a population explosion, so that civilization consists of a large, dense, mobile population on a single home world at the time that potentially CE biotechnology is developed. Because such ecological conditions are conducive to the spread of communicable agents, it is reasonable to hypothesize that all planetary civilizations will face existential threats from contagious micro-organisms – whether engineered or not – before they become vigorous interstellar colonizers (Cooper 2013).

The model could also apply to civilizations based on networked machine intelligences when epidemic malware is a possibility. Because diversity among evolution-produced organisms would likely be higher than among designed software, building CE technology against machine intelligences could be comparatively easy.

Model Discussion: Stability

It may be argued that a potential CE technology cannot exist for long time spans without a defensive technology being developed, i.e., that E cannot exceed zero for thousands, millions, or billions of years. Several considerations weaken this proposition, especially as relates to biotechnology. These considerations are illustrative, and necessarily speculative. Future biotechnological progress will elucidate the extent to which they hold.

First, reliance on a single CE technology is not required. Instead, multiple CE technologies may exist serially, each enabling a multitude of different attacks, with each attack requiring a different defense. This is akin to the inventory of "zero day exploits" that present-day entities accumulate to penetrate computer systems.
Second, a long period of E > 0 can be viewed as the concatenation of shorter time periods having E_i > 0, where each E_i derives from a separate CE attack possibility that is eventually countered by a defense tailored to that attack. For example, if the frailties of life allow for a million different attacks,∗ and it takes one year to tailor a defensive technology for each, then E > 0 for w = 10^6 years. If no periods of E = 0 were interspersed between the E_i > 0 periods, then the time window w would equal elapsed time in the universe. In scenarios having interspersed E_i = 0 periods, elapsed time would exceed window duration.

∗ Even simple viruses have profound combinatorial reserve. Influenza A, for example, with its genome of ≈14,000 nucleotides, has ≈880 million combinatorial two-nucleotide variants and ≈12 trillion three-nucleotide variants (Perelson et al. 2012). Though only a sliver of these would yield functionally and/or immunologically distinct viruses, the numerator explodes exponentially. It is a tall order to devise anti-influenza A technologies that are 100% effective against all possible variants.

Third, mere development of defensive technology is not sufficient. The technology must be fully fielded. That is, unless widespread pre-exposure vaccination is possible, an attack must be detected, the agent(s) characterized, and the remedy developed, tested, manufactured (perhaps in billions of doses), distributed, and administered – all of which must succeed before the attack can take root in the population. This is a formidable challenge requiring multiple sub-technologies in the near term, or a single future technology that is currently indistinguishable from magic.

Fourth, defensive technology may be impossible on first principles. For example, every known life form adapts its gene expression to its environment. An offensive technology whose only defense necessitated extinguishing this genetic responsiveness would seem unobtainable.

Fifth, mere possession of defensive technology is not sufficient – timely and correct decisions to activate defenses on a civilizational scale must also occur. Thus, a civilization's decision-making process, be it political, machine-based, or other, is also a target for CE technologies. This means E has a small psychosociological component.∗ Decentralized decision-making, such that every individual intelligence possessed counter-CE technology and independently decided when and if to self-medicate, would require a level of trust in the population that no government on earth has so far developed.

∗ Alternatively, the model could divide P into P_offense and P_defense.

Sixth, generalizing the above scenario, CE technologies need not be highly lethal. To sustain itself, a densely populated world may rely on critical infrastructure and/or heavily optimized industrial processes. Direct or indirect disruption of these essential functions could cause sufficient social chaos to render a civilization uncommunicative.

Finally, if EP is large throughout the universe, then the model does not have to apply for millions or billions of years. For example, if E = 10^3 and P = 10^−3 then LD50 ≈ 0.7 years and the probability of surviving to 25 years is < 10^−9.

5 Discussion of Results

Results Discussion: Earth

From Equation A3, achieving LD50 ≥ 1000 years requires EP ≤ 7 × 10^−4. Thus, with E = 2 today, P ≤ .00035 is required.
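The policy arithmetic of this subsection inverts Equation 3: given E and a target lethal duration, what P must the civilization maintain? A minimal sketch (mine):

import math

def required_P(target_years, E, x=0.50):
    """Largest P (per entity per year) consistent with LD_x >= target_years at a given E.
    Inverts Eq. 3: LD_x = ln(1-x) / (E * ln(1-P))."""
    return 1.0 - math.exp(math.log(1.0 - x) / (E * target_years))

# LD50 >= 1000 years with E = 2 requires P <= ~3.5e-4 (equivalently EP <= ~7e-4, Eq. A3):
print(f"{required_P(1000, E=2):.2e}")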
Given the pace of biotechnology's progress, plus the irresistible pressure to continue that progress for universally-desired medical purposes, plus the dual-use potential of the technology, plus its potential worldwide reach, many humans could soon have the capacity to end Earth's technical civilization, driving E ≫ 2. In a recent eight-year span, more than 1.5 million people participated in the "genetic techniques" enterprise at a level sufficient to warrant authorship on a scientific article. Almost 180,000 of them authored five or more such articles. The number actually engineering artificial organisms today is certainly far smaller, but clearly a large reservoir of hands-on molecular genetics competence already exists on Earth.

Although LD50 has been our focus, planning with lower thresholds (Suskind 2006), e.g., LD05 (≈ LD50/13.5) or LD01 (≈ LD50/70), would mitigate unanticipated rapid rises in E or P. For example, comparing a CE technology's LD01 to the anticipated time needed to develop defensive counter-technology might drive policy makers to speed such development.

Given the PubMed authorship numbers, a few new biotechnological innovations could reasonably and quickly raise E to 10^4. If so, and P = 10^−7, then LD01 ≈ 10 years. If E became larger, LD01 would become smaller. The short LD01 time span is concerning, given today's comparatively slow pace of antimicrobial innovation (the common cold and many other infections remain incurable and without vaccinations), and strongly argues that defensive technology development must be expanded and must occur simultaneously with any therapeutic (offensive) development.

An especially concerning scenario arises if, someday, hospitals employ people who routinely write patient-specific molecular-genetic programs and package them into replicating viruses that are therapeutically administered to patients, especially cancer patients. If the world attained the European Union's per capita hospital density,∗ this could mean two hundred thousand hospitals employing perhaps 1 million people who might genetically engineer viruses every workday. Should techniques emerge for a highly communicable therapeutic virus – against which vaccination would be refused, as that would preclude future cancer therapy – and E reached 10^6, then attaining an LD01 of just 10 years would require P < 10^−9, perhaps an impossibility, given human nature.

∗ In 2004, 15,000 hospitals (European Hospital and Healthcare Federation 2009) were serving 500 million people (Organisation for Economic Cooperation and Development / European Union 2016). Likely, few emerging health systems will follow an American model.
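These near-term numbers are straightforward to reproduce (an illustrative check, mine, using the scenario values from the text):

import math

def LD(x, E, P):
    """Eq. 3 generalized: years until the accumulated silencing probability reaches x."""
    return math.log(1.0 - x) / (E * math.log(1.0 - P))

print(f"LD01(E=1e4, P=1e-7) = {LD(0.01, 1e4, 1e-7):.1f} years")    # ~10 years
# Holding LD01 at 10 years with E = 1e6 forces P below roughly 1e-9:
P_max = 1.0 - math.exp(math.log(0.99) / (1e6 * 10))
print(f"required P: {P_max:.1e}")                                  # ~1.0e-09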
Results Discussion: Drake Equation

By simulating an ensemble of civilizations, the present model challenges Burchell's assertion (Burchell 2006) that L in the Drake equation is "not truly estimable [estimatable] without observation of a set of societies." Although estimating P based on first principles cannot be done for extraterrestrial civilizations, estimating E and the product EP may be tractable within the assumptions of the model, as follows.

Lower-bound estimates for E would derive from deep understanding of the genetic mechanisms of life – all possible mechanisms, not just DNA/RNA – and from the possibilities of biotechnology as applied to those mechanisms. Thus, estimates of E would derive from understanding the gamut of intelligence-compatible biologies, an understanding that smart human biochemists could perhaps achieve ex nihilo, without interstellar travel or communication. Machine intelligences would have analogous considerations. The existence of other CE technologies might increase E further.

Because of Equation 4, EP can be constrained by searching for extraterrestrial intelligence (SETI). With B increasingly well understood, constraining N in Equation 4 constrains EP. Thus, if SETI efforts someday yielded a conclusion such as "We estimate that no more than Nx communicating civilizations exist," then EP < B/Nx. If both EP and E can be estimated, then the value of P is constrained. It is interesting to note that, given its dependence on psychological factors, possessing a constraint or estimate of P would be a first step toward a quantitative epidemiology of alien psychologies.

The model applies so long as opportunities to deploy civilization-ending means predate the ability to counter all such attacks (and accidents). That is, whenever E > 0, Equation 3 produces a finite value for LD50 and civilization is at risk, assuming P > 0. Whether any measures could achieve P = 0, short of pervasive and perfect surveillance of entities, is unknown.

The model's low values for lifespan, L(w), have implications for SETI strategy. If geometrically increasing interstellar colonization circumvents short civilizational lifespan, then, all other factors being equal, communicating civilizations would be longest-lived where such colonization is easiest, e.g. where the time and/or energy required to move between habitable planets is smallest. This consideration adds to existing reasons why SETI might target zones of densely collected habitable planets (Turnbull & Tarter 2003).

Results Discussion: the Fermi Paradox and the Great Filter

To date, in a visible universe of ≈ 10^24 stars and their planets, only Earth shows evidence of intelligent life. This apparent paradox, noted by Enrico Fermi and others (Webb 2015), could be explained by a "Great Filter" that all but prevents communicating civilizations from forming or surviving (Hanson n.d.). The Great Filter may be technological in origin if "(a) virtually all sufficiently advanced civilizations eventually discover it and (b) its discovery leads almost universally to existential disaster" (Bostrom 2008).

Most remarkably, the present model supplies the quantitative 24 orders-of-magnitude winnowing required of a Great Filter, reducing it to a two-orders-of-magnitude multiplication. For example, if E = 10^6 and (optimistically) P = 10^−9, then LD50 ≈ 700 years, and LD100[1−C(y)] ≈ 80 LD50 ≈ 56,000 years when C(y) = 10^−24. That is, for this E and P, we expect only one civilization in 10^24 to still be communicating after 56,000 years, and even a galactically-short 100,000-year lifespan is effectively impossible because only one in 10^42 civilizations remains communicative.
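The 24-orders-of-magnitude winnowing follows directly from Equation 1; a quick verification sketch (mine, using the E and P given above):

import math

E, P = 10 ** 6, 1e-9
ld50 = math.log(0.5) / (E * math.log(1.0 - P))
print(f"LD50 = {ld50:.0f} years")                     # ~693 years, i.e. ~700
log10_C = E * 56_000 * math.log10(1.0 - P)            # log10 of surviving fraction C(t)
print(f"C(56,000 yr) = 10^{log10_C:.1f}")             # ~10^-24: one civilization in 10^24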
Overall, therefore, I would advise advanced technical civilizations to optimize not on megascale computation (Sandberg et al. 2017) nor engineering (Dyson 1960) nor energetics (Kardashev 1964), but on defense from individually-possessable self-replicating existential threats, such as microbes or nanomachines.

Acknowledgments: I am grateful for the support and wise counsel of Jennifer Esposito, Mike Morton, Barry Hayes, and, of course, Tanya Roth. However, all errors are the author's responsibility. My thanks go to the anonymous reviewer for spurring development of Figure 4.

Mathematical Appendix

Math 1: ln(1 − P) = −P as P → 0

To solve

f(P) = ln(1 − P) / P

at P = 0, we observe that f(0) evaluates to 0/0, making the expression indeterminate. However, it also means L'Hôpital's rule applies in the second step below:

lim_{P→0} f(P) = lim_{P→0} ln(1 − P)/P = [lim_{P→0} d(ln(1 − P))/dP] / [lim_{P→0} d(P)/dP] = lim_{P→0} (−1/(1 − P)) / lim_{P→0} (1) = −1/1

Hence:

lim_{P→0} ln(1 − P)/P = −1

So, when P → 0 we can use:

ln(1 − P) = −P    (A1)

For our purposes this approximation is excellent, viz. ln(1 − 0.1) = −0.105 and ln(1 − 0.001) = −0.0010005.

Math 2: LD50 as P → 0

We start with Equation 3 defining LD50, then simultaneously take the limit and substitute Equation A1 into it:

LD50 = ln(1 − .50) / (E × ln(1 − P))    (A2)

lim_{P→0} LD50 = ln(1 − .50) / (E × (−P)) = ln 2 / (E × P) ≈ 0.7 / (E × P)    (A3)

Thinking solely in terms of exponents: LD50 ≈ 0.7 × 10^−(log10 E + log10 P).

Math 3: N(w) – Exact

Recall from Equation 1 that C(y) is the fraction of civilizations still communicating y years after their birth. Here, however, the notion of time changes a bit. First, define:

B(y) = number of new civilizations born in year y
N(y) = number of communicating civilizations existing in year y

Next, assume we are interested in a window of time in the galaxy's history running from year 0 to year w, where no civilizations were present at y = 0. We want to know the number of communicating civilizations that exist at the end of the window, i.e. at time w. To be considered alive at year w, any civilization born in some year y will have to communicate for w − y more years. Thus:

N(w) = B(0) C(w) + B(1) C(w − 1) + B(2) C(w − 2) + ... + B(w) C(0)    (A4)

Assuming B(y) is a constant (having units: civ year^−1):

N(w) = B Σ_{y=0}^{w} C(y)

We can replace summation with integration:

N(w) = B ∫₀ʷ C(y) dy    (A5)

To solve for N(w), assuming all civilizations have the same E and P, we define:

S = (1 − P)^E    (A6)

Substituting the above into Equation 1 yields:

C(y) = S^y    (A7)

Then substituting Equation A7 into Equation A5:

N(w) = B ∫₀ʷ S^y dy = B [S^y / ln S] evaluated from 0 to w = B (S^w/ln S − S^0/ln S)

This yields the exact form of N(w):

N(w) = B (S^w − 1)/ln S,  where S = (1 − P)^E    (A8)

Math 4: N(w) – As P → 0 and w → ∞

In many scenarios for N(w), P → 0 and/or w → ∞. We here derive an approximation for such conditions. First, expand the exact definition of N(w) in Equation A8:

N(w) = B (((1 − P)^E)^w − 1) / ln((1 − P)^E) = B ((1 − P)^(Ew) − 1) / (E ln(1 − P))

Now substitute with the results of Equation A1, namely ln(1 − P) = −P when P is small and, consequently, (1 − P) = e^−P:

lim_{P→0} N(w) = B (e^(−PEw) − 1)/(E(−P)) = B (1 − e^(−PEw))/(EP)

As w becomes large, e^(−PEw) → 0. Thus:

N(w) = B/(EP)  as P → 0 and w → ∞    (A9)

Using Figure 3, which was calculated using the exact form of N(w) in Equation A8, we observe the approximate value-range of w for which the limit of Equation A9 holds:

N(w) ≈ B/(EP)  [for w ≥ 10/(EP) and small P]    (A10)
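The quality of approximation A1 — and hence of A3 and A10 — is easy to probe numerically (a quick check, mine):

import math

for P in (0.1, 1e-3, 1e-6):
    print(f"P={P:<8}  ln(1-P)={math.log(1 - P):.7f}   -P={-P}")
# Matches the text: ln(1-0.1) = -0.105..., ln(1-0.001) = -0.0010005

E, P = 100, 1e-5
exact = math.log(0.5) / (E * math.log(1 - P))
print(f"exact LD50 = {exact:.2f}   vs   0.7/(EP) = {0.7 / (E * P):.2f}")   # Eq. A3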
Math 5: L(w)

As many others have noted, the Drake equation can be reduced to a two-parameter form:

N = B × L    (A11)

where N is the number of communicating civilizations, B is the birth rate of communicating civilizations, and L is the mean lifetime of all birthed civilizations. Applying this to our approach of examining time windows having constant B, we can rewrite Equation A11 as:

N(w) = B × L(w)    (A12)

where L(w) is the mean lifetime of a civilization born during the time window that extends from 0 to w. Rearranging Equation A12 and then substituting from Equation A8 yields:

L(w) = N(w)/B = (S^w − 1)/ln S = ((1 − P)^(Ew) − 1) / ln((1 − P)^E)    (A13)

To derive a simple approximation for L(w), recall from Equation A10 that N(w) ≈ B/(EP). It is immediately apparent from Equation A12 that:

L(w) ≈ 1/(EP)  [for w ≥ 10/(EP) and small P]    (A14)

Finally, the ratio of Equation A2 to Equation A14 is noteworthy:

LD50 / L(w) ≈ ln 2 ≈ 0.7  [for w ≥ 10/(EP) and small P]

Math 6: Maximum L(w)

To find the w where L(w) is maximal, we set the derivative of the definition of L(w) (from Equation A13) to zero:

0 = d/dw [(S^w − 1)/ln S] = (1/ln S) S^w ln S = S^w

Given 0 < S < 1, then S^w = 0 at w = ∞. So, using Equations A13 and A1:

L(w)max = L(∞) = (S^∞ − 1)/ln S = −1/ln S = −1/(E ln(1 − P)) = 1/(EP) as P → 0

Seeking to show L(w)max < 1/(EP) for all P, we begin by observing:

x − (1/2)x² + (1/3)x³ − ... < x  [for x < 0]

The Mercator series is:

x − (1/2)x² + (1/3)x³ − ... = ln(1 + x)  [for −1 < x ≤ 1]

We combine the two preceding formulae into an inequality, then set x = −P:

ln(1 + x) < x  [for −1 < x < 0]
ln(1 − P) < −P  [for −1 < −P < 0]

Substituting back into the definition of L(w)max gives, for 0 < P < 1:

L(w)max = L(∞) = −1/(E ln(1 − P)) < 1/(EP)    (A15)

Math 7: Model of E Growing Over Time

We wish to model E growing over time and apply the model to time spans that do not cause exponential explosion. First, recall Equation 1, the basis for the baseline "Constant-E" model:

C(y) = (1 − P)^(Ey)    (1)

The exponential term Ey has units entity-years, and signifies the total exposure of the civilization to destruction events. Renaming E to E0 to reinforce its constant nature in the Constant-E model, we can write:

Exposure[ConstantModel]_y = E0 y    (A16)

Similarly, we can calculate exposure for a model in which E grows with time. Equation 6 defined E_y as growing from an initial value of E0 at an annual rate of r over a period of y years, with the growth continuously compounded:

E_y = E0 e^(ry)    (6)

In this "Growing-E" model, the civilization's exposure to destruction events is:

Exposure[GrowthModel]_y = ∫₀ʸ E0 e^(rt) dt = E0 [e^(rt)/r] evaluated from 0 to y = E0 (e^(ry) − 1)/r    (A17)

We can equate the two exposures from the right-hand sides of Equations A16 and A17, taking care to distinguish the two different y:

E0 y1 = E0 (e^(r y2) − 1)/r

This equation says that a civilization's destruction-exposure after y1 years as calculated by the constant model equals the exposure after y2 years as calculated by the growth model. Continuing, we can cancel the E0 terms and express y2 in terms of y1:

y1 = (e^(r y2) − 1)/r
y2 = ln(1 + r y1)/r    (A18)

Analytically, Equation A18 provides a shortcut for converting results from the constant-E model to the growing-E model. For example, we can define the LD50 for the growing-E model as:

LD50[G] = ln(1 + r LD50[C])/r    (A19)

where the [G] and [C] indicate the growing-E model and constant-E model, respectively. See Figure 4. Although Equation A18 does not have an explicit exponential term, it must still be applied carefully because it implicitly assumes that the number of entities can grow exponentially without limit, per Equation 6. Alternative growing-E models may be derived, e.g. using linear growth.

References

Ambartsumian, V. A. & Sagan, C. (1973), Prospect, in C. Sagan, ed., 'Communication with Extraterrestrial Intelligence', MIT Press, Cambridge, MA, pp. 1–7.
Anonymous (2002), 'The history of the Bulletin clock', Bulletin of the Atomic Scientists 58(2), 36–37. URL: http://dx.doi.org/10.1080/00963402.2002.11460553

Armstrong, S. & Sandberg, A. (2013), 'Eternity in six hours: Intergalactic spreading of intelligent life and sharpening the Fermi paradox', Acta Astronautica 89, 1–13. URL: http://www.sciencedirect.com/science/article/pii/S0094576513001148

Billingham, J., Oliver, B. M. & Wolfe, J. H. (1979), 'A review of the theory of interstellar communication', Acta Astronautica 6(1), 47–57.

Bostrom, N. (2008), 'Where are they? Why I hope the search for extraterrestrial life finds nothing', Technology Review pp. 72–77. URL: https://www.technologyreview.com/s/409936/where-are-they/

Bostrom, N. & Cirkovic, M. M. (2011), Global Catastrophic Risks, Oxford University Press, Oxford.

Burchell, M. J. (2006), 'W(h)ither the Drake equation?', International Journal of Astrobiology 5(3), 243–250.

Cassan, A., Kubas, D., Beaulieu, J.-P., Dominik, M., Horne, K., Greenhill, J., Wambsganss, J., Menzies, J., Williams, A., Jorgensen, U. G., Udalski, A., Bennett, D. P., Albrow, M. D., Batista, V., Brillant, S., Caldwell, J. A. R., Cole, A., Coutures, C., Cook, K. H., Dieters, S., Prester, D. D., Donatowicz, J., Fouque, P., Hill, K., Kains, N., Kane, S., Marquette, J.-B., Martin, R., Pollard, K. R., Sahu, K. C., Vinter, C., Warren, D., Watson, B., Zub, M., Sumi, T., Szymanski, M. K., Kubiak, M., Poleski, R., Soszynski, I., Ulaczyk, K., Pietrzynski, G. & Wyrzykowski, L. (2012), 'One or more bound planets per Milky Way star from microlensing observations', Nature 481(7380), 167–169. URL: http://dx.doi.org/10.1038/nature10684

Cohen, F. (1987), 'Computer viruses: Theory and experiments', Computers & Security 6, 22–35.

Cooper, J. (2013), 'Bioterrorism and the Fermi Paradox', International Journal of Astrobiology 12(2), 144–148.

Drake, F. D. (1961), 'Discussion of Space Science Board, National Academy of Sciences Conference on Extraterrestrial Intelligent Life'. Green Bank, West Virginia.

Drexler, K. E. (1987), Engines of Creation, Anchor, New York. Pages 172–173.

Duncan, R. C. (1991), The life-expectancy of industrial civilization, in 'System Dynamics '91: Proceedings of the 1991 International System Dynamics Conference, Bangkok, Thailand, August 27 through 30, 1991', pp. 173–181.

Dyson, F. J. (1960), 'Search for artificial stellar sources of infrared radiation', Science 131(3414), 1667–1668. URL: http://science.sciencemag.org/content/131/3414/1667

European Hospital and Healthcare Federation (2009), Hospitals in the 27 Member States of the European Union, Dexia Editions, Paris. Page 47. URL: http://www.hope.be/wp-content/uploads/2015/11/79_2009_OTHER_Hospitals-in-27-Member-States-of-the-European-Union-eng.pdf

Farnsworth, K. D., Nelson, J. & Gershenson, C. (2013), 'Living is information processing: From molecules to global systems', Acta Biotheoretica 61(2), 203–222. URL: https://doi.org/10.1007/s10441-013-9179-3

Forgan, D. (2009), 'A numerical testbed for hypotheses of extraterrestrial life and intelligence', International Journal of Astrobiology 8(2), 121–131.

Frank, A. & Sullivan, W. T. (2016), 'A new empirical constraint on the prevalence of technological species in the universe', Astrobiology 16(5), 359–362.

Hanson, R. (n.d.), 'The great filter – are we almost past it?', Downloaded from: http://mason.gmu.edu/~rhanson/greatfilter.html. Accessed June 12, 2017.

Herfst, S., Schrauwen, E. J. A., Linster, M., Chutinimitkul, S., de Wit, E., Munster, V. J., Sorrell, E. M., Bestebroer, T. M., Burke, D. F., Smith, D. J., Rimmelzwaan, G. F., Osterhaus, A. D. M. E. & Fouchier, R. A. M. (2012), 'Airborne transmission of influenza A/H5N1 virus between ferrets', Science 336(6088), 1534–1541. URL: http://science.sciencemag.org/content/336/6088/1534

Imai, M., Watanabe, T., Hatta, M., Das, S. C., Ozawa, M., Shinya, K., Zhong, G., Hanson, A., Katsura, H., Watanabe, S., Li, C., Kawakami, E., Yamada, S., Kiso, M., Suzuki, Y., Maher, E. A., Neumann, G. & Kawaoka, Y. (2012), 'Experimental adaptation of an influenza H5 HA confers respiratory droplet transmission to a reassortant H5 HA/H1N1 virus in ferrets', Nature 486(7403), 420–428. URL: http://dx.doi.org/10.1038/nature10831

Jackson, R. J., Ramsay, A. J., Christensen, C. D., Beaton, S., Hall, D. F. & Ramshaw, I. A. (2001), 'Expression of mouse interleukin-4 by a recombinant ectromelia virus suppresses cytolytic lymphocyte responses and overcomes genetic resistance to mousepox', Journal of Virology 75(3), 1205–1210. URL: http://jvi.asm.org/content/75/3/1205.abstract

Johansson, F. et al. (2014), mpmath: a Python library for arbitrary-precision floating-point arithmetic (version 0.19). http://mpmath.org/.

Jones, E. M. (1981), 'Discrete calculations of interstellar migration and settlement', Icarus 46(3), 328–336. URL: http://www.sciencedirect.com/science/article/pii/0019103581901366

Kardashev, N. S. (1964), 'Transmission of Information by Extraterrestrial Civilizations.', Soviet Astronomy 8(2), 217–221.

Kompanichenko, V. N. (2000), Average lifetime of an intelligent civilization estimated on its global cycle, in 'Bioastronomy 99: A New Era in the Search for Life', Vol. 213 of Astronomical Society of the Pacific Conference Series, pp. 437–440.

Maccone, C. (2010), 'The Statistical Drake Equation', Acta Astronautica 67, 1366–1383.

McNeill, W. H. (1976), Plagues and Peoples, Anchor, Garden City, NY. Pages 94, 113ff, 152, 180, 186.

National Cancer Institute (2018), 'Cancer moonshot'. https://www.cancer.gov/research/key-initiatives/moonshot-cancer-initiative.

Oliver, B. M. & Billingham, J. (1971), Life in the universe, in 'Project Cyclops: A Design Study of a System for Detecting Extraterrestrial Intelligent Life', Stanford / NASA / Ames Research Center. NASA report CR 114445, pp. 3–28. URL: https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/19730010095.pdf

Organisation for Economic Cooperation and Development / European Union (2016), Health at a Glance: Europe 2016, OECD Publishing, Paris. Page 193. URL: http://dx.doi.org/10.1787/9789264265592-en

Perelson, A. S., Rong, L. & Hayden, F. G. (2012), 'Combination antiviral therapy for influenza: Predictions from modeling of human infections', The Journal of Infectious Diseases 205(11), 1642–1645. URL: http://dx.doi.org/10.1093/infdis/jis265

Phillips, J. A. (1978), Mushroom: The Story of the A-bomb Kid, Morrow, New York.

President's Council of Advisors on Science and Technology (2016), 'Action needed to protect against biological attack'. https://obamawhitehouse.archives.gov/blog/2016/11/15/pcast-letter-president-action-needed-protect-against-biological-attack.

Rozo, M. & Gronvall, G. K. (2015), 'The reemergent 1977 H1N1 strain and the gain-of-function debate', mBio 6(4), e01013–15.

Rubin, C. T. (2001), 'L factor: hope and fear in the search for extraterrestrial intelligence', Proc. SPIE 4273, 230–239. URL: http://dx.doi.org/10.1117/12.435379

Sandberg, A., Armstrong, S. & Cirkovic, M. M. (2017), 'That is not dead which can eternal lie: the aestivation hypothesis for resolving Fermi's paradox'. arXiv:1705.03394. URL: https://arxiv.org/abs/1705.03394

Schenkel, P. (1999), 'The Nature of ETI, Its Longevity and Likely Interest in Mankind – The Human Analogy Re-Examined', Journal of the British Interplanetary Society 52, 13–18.

Shklovskii, I. S. & Sagan, C. (1966), Intelligent Life in the Universe, Delta, New York, pp. 409–418.

Suskind, R. (2006), The One Percent Doctrine, Simon and Schuster, New York, p. 62.

Tipler, F. J. (1980), 'Extraterrestrial intelligent beings do not exist', Q J R Astro Soc 21, 267–281.

Turnbull, M. C. & Tarter, J. C. (2003), 'Target selection for SETI. II. Tycho-2 dwarfs, old open clusters, and the nearest 100 stars', Astrophysical Journal Supplement Series 149, 423–436.

Vakoch, D. A. & Dowd, M. F. (2015), The Drake Equation: Estimating the Prevalence of Extraterrestrial Life through the Ages, Cambridge University Press, New York.

von Hoerner, S. (1961), 'The search for signals from other civilizations', Science 134, 1839–1843.

Watson, C., Sell, T. K., Watson, M., Rivers, C., Hurtado, C., Shearer, M. P., Geleta, A. & Inglesby, T. (2018), Technologies to Address Global Catastrophic Biological Risks, Johns Hopkins University Bloomberg School of Public Health / Center for Health Security, Baltimore, p. 8. URL: http://www.centerforhealthsecurity.org/our-work/pubs_archive/pubs-pdfs/2018/181009-gcbr-tech-report.pdf

Webb, S. (2015), If the Universe Is Teeming with Aliens ... Where is Everybody?, 2 edn, Springer, Cham, Switzerland.

Wertheim, J. O. (2010), 'The re-emergence of H1N1 influenza virus in 1977: A cautionary tale for estimating divergence times using biologically unrealistic sampling dates', PLOS ONE 5(6), 1–4. URL: https://doi.org/10.1371/journal.pone.0011184

Zimmer, S. M. & Burke, D. S. (2009), 'Historical perspective – emergence of influenza A (H1N1) viruses', New England Journal of Medicine 361(3), 279–285. PMID: 19564632. URL: http://dx.doi.org/10.1056/NEJMra0904322
ai_researcher
3
Reason_for_Future_Act_for_Now_A_Principled_Framework_for_Autonomous_LLM_Agents_with_Provable_Sample_Efficiency.pdf
A Survey of Table Reasoning with Large Language Models

Xuanliang Zhang, Dingzirui Wang, Longxu Dou, Qingfu Zhu, Wanxiang Che
Research Center for Social Computing and Information Retrieval
Harbin Institute of Technology, China
{xuanliangzhang, dzrwang, lxdou, qfzhu, car}@ir.hit.edu.cn

Abstract

Table reasoning aims to generate the answer corresponding to a question, following the user requirement, based on the provided table and, optionally, a text description of the table, effectively improving the efficiency of obtaining information. Recently, using Large Language Models (LLMs) has become the mainstream method for table reasoning, because it not only significantly reduces the annotation cost but also exceeds the performance of previous methods. However, existing research still lacks a summary of LLM-based table reasoning works. Due to this lack of research, questions about which techniques can improve table reasoning performance in the era of LLMs, why LLMs excel at table reasoning, and how to enhance table reasoning abilities in the future remain largely unexplored. This gap significantly limits progress in research. To answer the above questions and advance table reasoning research with LLMs, we present this survey to analyze existing research, inspiring future work. In this paper, we analyze the mainstream techniques used to improve table reasoning performance in the LLM era¹, and the advantages of LLMs compared to pre-LLMs for solving table reasoning. We provide research directions from both the improvement of existing methods and the expansion of practical applications to inspire future research.

1 Introduction

Table reasoning, which significantly improves the efficiency of obtaining and processing data from massive amounts of tables, plays an important role in the study of Natural Language Processing (NLP) [Jin et al., 2022]. An illustration of the table reasoning task is shown in Figure 1. Given one or more tables, this task requires the model to generate results corresponding to the given question, as required by users (e.g., table QA [Pasupat and Liang, 2015], table fact verification [Chen et al., 2020]).

¹ We summarize the detailed resources of the current research in https://github.com/zhxlia/Awesome-TableReasoning-LLM-Survey.

Figure 1: The illustration of various table reasoning tasks. (The figure shows an example table of cyclists with columns Rank, Cyclist, Team, and Time, an accompanying text description, and example question-answer pairs for table QA, table fact verification, table-to-text, and text-to-SQL.)

In the past, research in table reasoning has gone through several phases: rule-based, neural network-based, and pre-trained language model-based [Jin et al., 2022], which we call the pre-LLM era. Recent research [Zhao et al., 2023b] has shown that Large Language Models (LLMs) exhibit compelling performance across NLP tasks, in particular dramatically reducing annotation requirements, which we call the LLM era. Consequently, there have been many works applying LLMs to the table reasoning task to reduce overhead and outperform the methods of the pre-LLM era, which has become the current mainstream method.

However, there is currently a lack of summary analysis of table reasoning works with LLMs, so how to improve performance is still under exploration, which limits existing research to a certain extent. Besides, the table reasoning surveys of the pre-LLM era are not suitable for the LLM era.
Since some mainstream techniques of the pre-LLM era, such as changing the model structure and designing pre-training tasks [Jin et al., 2022], are not suitable for LLMs in table reasoning, while LLM-based methods focus more on designing prompts or pipelines [Zhao et al., 2023b], this paper summarizes the existing works on table reasoning with LLMs to shed light on future research. In detail, we focus on three questions of table reasoning: 1. What techniques can improve table reasoning performance in the LLM era; 2. Why LLMs excel at table reasoning; 3. How to enhance table reasoning ability in the future. The structure of this survey is shown in Figure 2.

Figure 2: The structure overview of our paper, taking the most representative works as an example. (The figure organizes the survey into three branches: the techniques that can improve table reasoning performance in the LLM era (§3), covering supervised fine-tuning, result ensemble, in-context learning, instruction design, and step-by-step reasoning, with representative works such as TableLlama, Lever, DAIL-SQL, Binder, and Chain-of-Table; why LLMs excel at table reasoning (§4); and how to enhance table reasoning ability in the future (§5), covering both improving performance and expanding applications such as multi-modal, agent, dialogue, and retrieval-augmented generation settings.)

Regarding the first topic, to better adapt to table reasoning research in the LLM era, we introduce the mainstream techniques and detailed methods of table reasoning in the LLM era in §3. Specifically, we categorize existing works according to the different techniques that they utilize and detail them respectively. Considering the second topic, we explore why LLMs show superior performance on table reasoning tasks in §4. We compare the best performance of pre-LLMs and LLMs on different benchmarks and show that LLMs consistently surpass pre-LLMs on the table reasoning task.
Then, we discuss the advantages of LLMs in solving the table reasoning task based on the two inherent challenges of the task. Regarding the third topic, we discuss the potential future directions of table reasoning in §5. To promote table reasoning research and better apply table reasoning to real-life scenarios, we separately analyze how to further improve table reasoning performance and explore how to adapt table reasoning to practical applications.

2 Background

2.1 Paper Selection Criteria

To ensure the selected papers are highly related to the survey, the papers should meet the following criteria: 1. Each question in the task that the paper aims to solve must be related to at least one table. 2. The method proposed in the paper is required to reason with or fine-tune LLMs.

2.2 Task Definition

As the basis for subsequent analysis, in this section, we present the definition of the table reasoning task. In the table reasoning task, the input consists of the table, an optional text description, and the question tailored to the user requirement for various tasks (e.g., table QA, table fact verification, table-to-text, and text-to-SQL), and the output is the answer.

2.3 Benchmarks

To help researchers understand the existing application scenarios of table reasoning in detail, we introduce four mainstream table reasoning tasks, which more than 90% of the selected papers address, covering text generation, entailment, and semantic parsing. An illustration of the four tasks is shown in Figure 1. Although most works solving table reasoning tasks with LLMs do not need fine-tuning data, they still rely on labeled data to validate performance. Therefore, in this subsection, we also provide the most-used validation benchmark for each task as an example and summarize the related resources in https://github.com/zhxlia/Awesome-TableReasoning-LLM-Survey:

• Table QA: The table QA task is to answer a question according to a table [Pasupat and Liang, 2015]. WikiTableQuestions [Pasupat and Liang, 2015] serves as the initial benchmark in the table QA task, which has open-domain tables accompanied by complex questions.

• Table Fact Verification: The table fact verification task aims to verify whether a textual hypothesis is entailed or refuted based on the evidence tables [Chen et al., 2020]. TabFact [Chen et al., 2020], as the first benchmark in the table fact verification task, features large-scale cross-domain table data and complex reasoning requirements.

• Table-to-Text: The table-to-text task is to generate a natural language description corresponding to the given question with a table [Nan et al., 2022]. Different from the table QA task, which only generates several spans, table-to-text requires the answer to be a paragraph. FeTaQA [Nan et al., 2022] requires the model to generate a free-form answer to the question, with large-scale and high-quality data.

• Text-to-SQL: Text-to-SQL aims to convert a textual question over a database into executable structured query language (SQL). Spider [Yu et al., 2018] is the first multi-domain, multi-table benchmark for the text-to-SQL task.

3 What Techniques Can Improve Table Reasoning Performance in the LLM Era

There are significant differences between the model abilities of the pre-LLM era and the LLM era, leading to a change in the mainstream techniques [Zhao et al., 2023b].
To help research better transition from the pre-LLM era to the LLM era, in this section, we discuss the mainstream techniques in the LLM era from two aspects: 1. the techniques following the pre-LLM era (§3.1) and 2. the techniques unique to the LLM era (§3.2). We categorize the table reasoning methods into five categories based on the techniques they use, which are shown in Figure 3. Then, we introduce the methods and highlight the changes in each technique, aiming to understand how to utilize the mainstream techniques in the LLM era.

3.1 Mainstream Techniques Following pre-LLMs

Despite the considerable change in research brought about by LLMs, many pre-LLM techniques can still be applied to LLMs. Therefore, we introduce the mainstream techniques following the pre-LLM era in this subsection.

Supervised Fine-Tuning

Supervised fine-tuning refers to fine-tuning the LLM with annotated data to enhance its table reasoning capability. Since some open-source small-scale LLMs are weak at solving table tasks [Zhang et al., 2023b] and have a relatively low cost of fine-tuning, researchers utilize the supervised fine-tuning technique to enhance their performance.

Existing works on supervised fine-tuning of LLMs for table reasoning are of two types: 1. leveraging pre-existing or manually labeled data, and 2. leveraging distilled data generated by LLMs. Focusing on pre-existing or manually labeled data, to better complete the table reasoning task, TableGPT [Zha et al., 2023] fine-tunes the LLM by constructing instruction datasets. Considering the lack of generalization of previous work, TableLlama [Zhang et al., 2023b] constructs the training data by selecting representative table task datasets. Noting that annotating SQL data is too challenging, APEL [Zhong et al., 2023] proposes a method to annotate SQL, which generates the database according to the schema and judges SQL correctness based on execution results.

Focusing on distilled data, [Yang et al., 2023] observes that the performance of open-source models lags behind that of LLMs on table-to-text tasks; this work therefore utilizes the LLM as a teacher model to distill rationales and table descriptions, and fine-tunes the open-source model with the distilled data. Besides, HELLaMA [Bian et al., 2023], concerned that some models cannot locate the evidence based on the inputs, obtains training data predicting where the labeled description would be located by using other LLMs, and then fine-tunes models.

Pre-existing or manually labeled data and distilled data embody the two main ways of obtaining training data in the LLM era. Pre-existing datasets are generally of high quality, but more limited to certain domains and tasks, whereas distilled data is less restricted but faces the problem of low quality. Therefore, how to significantly enhance the quality of model-distilled data with as little manual intervention as possible is an urgent issue to be studied.

Highlight: The supervised fine-tuning methods of the pre-LLM era, limited by model capabilities, cannot bring about generalization on unseen tasks [Xie et al., 2022]. In contrast, in the LLM era, researchers design instruction-based and multi-task data to fine-tune the model, enhancing the table reasoning ability to generalize to different tasks, even tasks that are not seen in the training phase.
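As a concrete illustration of the instruction-based data construction described above, the sketch below serializes one table-QA example into an instruction-tuning record; the serialization format and field names are generic assumptions of ours, not those of any specific surveyed system:

import json

def serialize_table(header, rows):
    """Flatten a table into a pipe-separated text block an LLM can read."""
    lines = [" | ".join(header)]
    lines += [" | ".join(str(c) for c in row) for row in rows]
    return "\n".join(lines)

def build_record(instruction, header, rows, question, answer):
    """One supervised fine-tuning example in a generic instruction format."""
    return {
        "instruction": instruction,
        "input": f"Table:\n{serialize_table(header, rows)}\nQuestion: {question}",
        "output": answer,
    }

record = build_record(
    "Answer the question based on the given table.",
    ["Rank", "Cyclist", "Team"],
    [[1, "Davide Rebellin (ITA)", "Gerolsteiner"], [2, "David Moncoutié (FRA)", "Cofidis"]],
    "Show the team of the cyclist whose rank is 1.",
    "Gerolsteiner",
)
print(json.dumps(record, ensure_ascii=False, indent=2))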
Result Ensemble

Result ensemble denotes improving table reasoning ability by selecting the most suitable answer from multiple results generated by the LLM. Since the models of both the pre-LLM era and the LLM era can fail to maintain correct results in the face of slight disturbances (e.g., random number seeds, meaningless words in questions), leading to performance degradation [Ni et al., 2023], researchers utilize the technique of result ensemble following the pre-LLM era.

Existing methods of result ensemble in the LLM era mainly focus on two problems: 1. how to obtain diverse results for a question, and 2. how to select the correct result among the multiple results. Considering the work on obtaining diverse results, SQLPrompt [Sun et al., 2023] notes that the low diversity of results with a fixed prompt and model causes results to concentrate on specific incorrect answers, so it proposes to generate results with multiple prompts and models. Regarding the work on selecting the correct result, Lever [Ni et al., 2023] trains a verifier to score each generated answer and selects the result with the highest score as the answer. To select the correct query from multiple candidate SQL queries, [Li and Xie, 2024] proposes to construct test cases by generating new databases and using the LLM to predict the execution results, so that the test cases can distinguish all SQL queries with different execution results.

These methods of solving the two problems can enhance the ensemble performance independently. Therefore, these two problems can be focused on together to further improve the table reasoning performance of the LLM.

Highlight: Compared with pre-LLM methods, LLMs can generate more diverse results in more and simpler ways. For example, LLMs can obtain diverse results by only changing the instruction without changing the question, while pre-LLM methods have to ensure that the instructions of fine-tuning and inference are aligned [Gan et al., 2021].

3.2 Mainstream Techniques Unique to LLMs

In the LLM era, in addition to the mainstream techniques following the pre-LLM era, there are also techniques unique to LLMs due to the emergence phenomenon [Zhao et al., 2023b]. We present three typical emergent abilities mentioned in previous research [Zhao et al., 2023b].

In-context Learning

In-context learning refers to making the model generate the expected answer by using a suitable natural language instruction and multiple demonstrations (that is, the prompt), without requiring additional training or gradient updates [Zhao et al., 2023b]. Since LLM performance is significantly affected by the prompt, researchers utilize the in-context learning technique by designing prompts to solve the table reasoning task directly.

Figure 3: The mainstream techniques that can be utilized to improve table reasoning performance in the LLM era.

Regarding the works utilizing the in-context learning ability in the table reasoning task, [Chen, 2023] is the first to explore and demonstrate that LLMs can reason about tables with in-context learning. ODIS [Chang and Fosler-Lussier, 2023] observes that in-domain demonstrations can improve model performance, so it synthesizes in-domain SQL based on SQL similarity. To address the challenge of demonstration selection, DAIL-SQL [Gao et al., 2023] and [Nan et al., 2023b] select demonstrations based on masked question similarity and SQL similarity, respectively.
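Similarity-based demonstration selection of this kind reduces to nearest-neighbor search over a pool of labeled examples. A minimal sketch follows (the embedding function is a stand-in for any sentence encoder; this is our illustration, not any surveyed paper's implementation):

import numpy as np

def select_demonstrations(question_vec, pool, k=4):
    """Pick the k pool examples whose pre-computed question embeddings are most
    cosine-similar to the target question embedding."""
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    ranked = sorted(pool, key=lambda ex: cos(question_vec, ex["embedding"]), reverse=True)
    return ranked[:k]

# pool: [{"question": ..., "sql": ..., "embedding": np.ndarray}, ...] built offline;
# the selected examples are concatenated into the prompt ahead of the new question.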
To better parse complex tables, [Zhao et al., 2023a] proposes to decode the table cells as tuples that contain rich information. TAP4LLM [Sui et al., 2023] notices that tables can contain noisy and ambiguous information; it therefore decomposes the table and then augments the sub-tables. Auto-CoT [Zhang et al., 2023a] finds that existing rationale annotation methods consume intensive resources, so it uses a rule-based schema-linking method to generate rationales.

Highlight: Because the models of the pre-LLM era can only learn fixed types of prompts through fine-tuning, it is hard to flexibly adjust prompts to enhance their reasoning performance [Xie et al., 2022]. Due to the in-context learning capability, LLMs can use various prompts that suit different questions without further fine-tuning, which greatly reduces labeling overhead while enhancing performance.

Instruction Design

Instruction design denotes utilizing LLMs to solve tasks that are unseen during the training phase by designing the instruction description, exploiting the instruction following ability of LLMs [Zhao et al., 2023b]. In the table reasoning task, researchers utilize the instruction design technique to solve the task indirectly by instructing the LLM to complete multiple decomposed sub-tasks, which can be novel and require the model to learn through instructions. Existing works using instruction design for table reasoning with LLMs focus on two types of methods: 1. based on modular decomposition, and 2. based on tool using.

Researchers find that it is easier to complete decomposed sub-tasks than to complete the whole table reasoning task [Pourreza and Rafiei, 2023], and LLMs can generalize to different sub-tasks using the instruction following ability, thereby improving performance on the table reasoning task through modular decomposition. Both DATER [Ye et al., 2023] and DIN-SQL [Pourreza and Rafiei, 2023] note that decomposing table reasoning can effectively facilitate multi-step inference; thus, they design pipelines for the table reasoning task to reduce the difficulty of inference. TableQAKit [Lei et al., 2023] identifies that table QA tasks face different data and task forms, hindering the ease of research, so it divides the table QA task into a configuration module, a data module, a model module, and an evaluation module. In the open-domain setting, CRUSH4SQL [Kothyari et al., 2023], OpenTab [Anonymous, 2023], and DB-GPT [Xue et al., 2024] decompose the task into two distinct phases, retrieving and reasoning, to alleviate the increased difficulty caused by extraneous irrelevant information. DBCopilot [Wang et al., 2023b] notices that retrieval can suffer from diverse expressions and vocabulary mismatch, so the task is decomposed into first generating the question-relevant schema instead of retrieving, and then reasoning. MAC-SQL [Wang et al., 2023a] finds that the limited context window, single-pass generation, and the lack of verification result in poor performance, so the task is modularly decomposed into three modules to solve these problems.

Facing the decomposed sub-tasks of table reasoning, the LLM, despite maintaining acceptable performance on most sub-tasks, does not excel at solving all of them (e.g., retrieval, numerical reasoning) [Cao et al., 2023], so researchers instruct the LLM to invoke diverse tools to solve some sub-tasks, which is the method of tool using.
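To make "tool using" concrete before the specific systems that follow, here is a minimal executor for a few predefined table operations an LLM could be instructed to emit (the operation names and dispatch format are invented for illustration, not drawn from any surveyed system):

def filter_rows(table, column, value):
    """Keep rows whose cell in `column` equals `value`."""
    idx = table["header"].index(column)
    rows = [r for r in table["rows"] if str(r[idx]) == str(value)]
    return {"header": table["header"], "rows": rows}

def select_columns(table, columns):
    """Project the table onto the named columns."""
    idxs = [table["header"].index(c) for c in columns]
    return {"header": columns, "rows": [[r[i] for i in idxs] for r in table["rows"]]}

TOOLS = {"filter_rows": filter_rows, "select_columns": select_columns}

def run_tool_call(table, name, **kwargs):
    """Dispatch a tool call parsed from the model's output onto the current table."""
    return TOOLS[name](table, **kwargs)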
StructGPT [Jiang et al., 2023] observes that the amount of structured data is too large to input to the model, so it provides different interfaces to extract multiple types of data, and the model obtains valid data by calling the appropriate interfaces. [Nan et al., 2023a], to explore and evaluate the action and reasoning capacities of LLMs, proposes the long-form database question answering task, where LLMs need to decide on an interaction strategy by reasoning and then generate interaction commands to invoke an external model. To extend model capabilities across various Table QA tasks, [Cao et al., 2023] enables querying knowledge and performing extra tabular operations by calling other LLM APIs. Also, some works focus on making tools and then employing them. Binder [Cheng et al., 2023], noting that existing neural-symbolic works are model- and language-specific and require large training data, proposes to utilize the LLM to parse the sub-questions that are not translatable into the target program, such as SQL, and then invoke the LLM to solve those sub-questions. Recognizing the challenge of automatically transforming an arbitrary table in response to the question, ReAcTable [Zhang et al., 2023c] proposes to leverage the LLM to generate a sequence of functions, which are then executed to produce intermediate tables, ultimately yielding the answer.

In summary, the methods of modular decomposition and tool using can be used together. Specifically, when solving the task with multiple modules, each module can enhance performance by employing tools. For example, in the retrieval module, we can use programs to filter out the rows unrelated to the user question.

Highlight: Pre-LLMs do not have the instruction-following capability due to their weak generalization, so researchers had to train separate models for each sub-task when using modular decomposition to solve table reasoning tasks [Dou et al., 2023]. It is also hard to flexibly use or make diverse tools in the pre-LLM era [Zhao et al., 2023b]. In contrast, LLMs can achieve superior performance without being individually fine-tuned for each sub-task or tool, saving training overhead.

Step-by-Step Reasoning
Step-by-step reasoning indicates solving complex reasoning tasks by employing prompt mechanisms that incorporate intermediate reasoning stages; the term refers to the technique and the capability at the same time [Zhao et al., 2023b]. Step-by-step reasoning, which requires the LLM to decompose a complex question into multiple simpler sub-questions, differs from modular decomposition, in which researchers break the task down into widely different sub-tasks. MURMUR [Saha et al., 2023] notices that prompting the LLM to reason step by step lacks explicit conditions between reasoning steps, so it proposes to first select the potentially correct models at each step and then select the best model based on a scoring model. Chain-of-Table [Wang et al., 2024], to reduce the difficulty of single-hop reasoning, provides predefined table operations, from which the LLM selects and executes one operation at each step.
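The step-wise operation-selection idea can be illustrated with a minimal, hypothetical sketch in the spirit of Chain-of-Table; the `llm` chooser, its decision format, and the small fixed operation pool are illustrative assumptions, not the cited paper's exact interface.

```python
import pandas as pd

# A small, fixed pool of table operations the model may apply, one per step.
OPERATIONS = {
    "select_columns": lambda df, cols: df[cols],
    "filter_rows":    lambda df, col, val: df[df[col] == val],
    "sort_by":        lambda df, col: df.sort_values(col),
}

def chain_of_operations(df, question, llm, max_steps=4):
    for _ in range(max_steps):
        prompt = (f"Table:\n{df.to_string(index=False)}\n"
                  f"Question: {question}\n"
                  f"Pick one of {list(OPERATIONS)} with arguments, "
                  f"or return a final answer string if the table suffices.")
        decision = llm(prompt)          # e.g. {"op": "filter_rows", "args": [...]}
        if isinstance(decision, str):   # the model judged the table sufficient
            return decision
        df = OPERATIONS[decision["op"]](df, *decision["args"])
    return df                           # fall back to the reduced table
```

Each iteration shrinks or reshapes the table, so every single hop of reasoning is performed over a simpler intermediate table than the original.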
Highlight: Methods of the pre-LLM era do not have the capability for step-by-step reasoning, so it is difficult for them to improve performance on complex table reasoning by leveraging it. In contrast, LLMs can decompose the reasoning into multiple steps, where the hardness of each step is lower than that of the full question, thereby decreasing the complexity of table reasoning.

Figure 4: The research trend using different techniques over months. #Paper denotes the number of papers.

3.3 Comparison
Comparison of Technique Proportion
To analyze the research trend of existing studies on table reasoning with LLMs, we count the number of papers known to us that use each technique, as shown in Figure 4. From the figure, it can be seen that studying instruction design and in-context learning is more popular in the table reasoning task than studying step-by-step reasoning and result ensemble. Works on step-by-step reasoning and result ensemble account for a relatively small proportion because these techniques can easily be applied to many different tasks, so fewer researchers focus solely on table reasoning with them; we discuss this in detail in §5. On the contrary, the instruction design and in-context learning techniques require methods designed specifically for the table reasoning task and have a lower time overhead than supervised fine-tuning, so works on instruction design and in-context learning are the most common among table reasoning studies.

Comparison of Technique Performance
To analyze the most effective techniques, and thereby identify promising research directions, we collect in Table 1 the highest scores achieved by LLM methods using the different mainstream techniques on different benchmarks. It can be seen that instruction design and step-by-step reasoning improve the table reasoning capability of LLMs consistently across different tasks, which we discuss in detail in §4. In addition, the consistency of the performance improvement across different tasks shows that the abilities required by different table reasoning tasks are highly consistent, calling for high generalization of LLMs. It is worth noting that in-context learning achieves the best performance on the text-to-SQL task because SQL has a simpler syntax compared with natural language, so with the same number of demonstrations the text-to-SQL task can cover more types of user questions, attracting more attention than other table reasoning tasks for in-context learning.

Mainstream Techniques  | WikiTQ†                    | TabFact                    | FeTaQA                     | Spider
Supervised Fine-Tuning | 0.32 [Zhang et al., 2023b] | 0.83 [Zhang et al., 2023b] | 0.67 [Bian et al., 2023]   | -
Result Ensemble        | 0.66 [Ni et al., 2023]     | -                          | -                          | -
In-Context Learning    | -                          | 0.60 [Sui et al., 2023]    | -                          | 0.87 [Gao et al., 2023]
Instruction Design     | 0.68 [Zhang et al., 2023c] | 0.93 [Ye et al., 2023]     | 0.71 [Zhang et al., 2023c] | 0.85 [Pourreza and Rafiei, 2023]
Step-by-Step Reasoning | 0.67 [Wang et al., 2024]   | 0.87 [Wang et al., 2024]   | 0.66 [Wang et al., 2024]   | -

Table 1: The best results on different benchmarks under each mainstream technique. † refers to WikiTableQuestions.
The evaluation metric for WikiTableQuestions/TabFact/FeTaQA/Spider is accuracy/accuracy/ROUGE-1/execution accuracy.

Benchmarks | pre-LLM                    | LLM
WikiTQ†    | 0.63 [Jiang et al., 2022]  | 0.68 [Zhang et al., 2023c]
TabFact    | 0.85 [Zhao and Yang, 2022] | 0.93 [Ye et al., 2023]
FeTaQA     | 0.65 [Xie et al., 2022]    | 0.71 [Zhang et al., 2023c]
Spider     | 0.80 [Li et al., 2023]     | 0.87 [Gao et al., 2023]

Table 2: The best performance of pre-LLM and LLM methods on different benchmarks. † denotes WikiTableQuestions. The evaluation metric for WikiTableQuestions/TabFact/FeTaQA/Spider is accuracy/accuracy/ROUGE-1/execution accuracy.

4 Why LLMs Excel at Table Reasoning
LLMs surpass pre-LLMs in table reasoning (Table 2) through the methods in §3. We analyze the key insights behind this from the perspectives of structure understanding and schema linking, which are two main challenges of table reasoning [Yin et al., 2020].

4.1 Instruction Following Ability Benefits Structure Understanding
Structure understanding means understanding the table schema (e.g., columns, rows) and the relationships among its elements, which provides the key evidence and necessary context information for decoding [Yin et al., 2020]. Compared with pre-LLMs, LLMs solve the challenge of structure understanding better, mainly due to their instruction-following ability. For example, the code parsing ability brought by instruction following can benefit table understanding, because both require recognizing hierarchical structure from plain input (e.g., a linearized table into a structured table, contextualized code into structured code) [Cao et al., 2023].

4.2 Step-by-Step Reasoning Ability Benefits Schema Linking
Schema linking refers to aligning the entities mentioned in the question with the entities in the tables [Yin et al., 2020]. Compared with pre-LLMs, LLMs have stronger schema-linking capabilities, mainly owing to their step-by-step reasoning ability. Specifically, LLMs can simplify the linking from sentence level to span level by decomposing the complete question and table and filtering out the irrelevant context [Pourreza and Rafiei, 2023].

5 How to Enhance Table Reasoning Ability in the Future
To promote table reasoning research in the LLM era and apply table reasoning to actual scenarios, we discuss future research directions in this section, covering both enhancing table reasoning performance and expanding practical applications.

5.1 Improving Table Reasoning Performance
Although existing LLM-based methods have significantly improved performance compared with the pre-LLM era, a thorough solution to the table reasoning task remains out of reach. Therefore, in this subsection, we analyze the shortcomings and possible improvements of existing works on the table reasoning task under each category in §3.

Supervised Fine-Tuning: Establishing Diverse Training Data
Due to the strong generalization of LLMs, researchers should construct diverse data covering multiple table tasks when performing supervised fine-tuning of LLMs, so as to improve the overall performance on table reasoning tasks. As discussed in §3.1, current methods based on pre-existing or manually labeled data simply mix diverse data from different table tasks as training data to fine-tune the LLMs. However, the proportion of different tasks in the training data has a significant impact on model performance. Future work should balance the diverse training data from multiple tasks in different proportions to explore the optimal mixture for optimizing the table reasoning capabilities of fine-tuned LLMs (a minimal mixing sketch is given below).
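The following is a minimal, hypothetical sketch of proportion-weighted mixing of multi-task fine-tuning data, intended only to make the proposed direction concrete; the task names and candidate weights are illustrative assumptions, not recommendations from any cited work.

```python
import random

def mix_training_data(datasets, proportions, n_samples, seed=0):
    # datasets: {task_name: [examples]}; proportions: {task_name: weight}.
    # Draw a fine-tuning mixture whose task ratios follow `proportions`,
    # so different ratios can be compared against downstream performance.
    rng = random.Random(seed)
    total = sum(proportions.values())
    mixture = []
    for task, weight in proportions.items():
        k = round(n_samples * weight / total)
        mixture.extend(rng.choices(datasets[task], k=k))
    rng.shuffle(mixture)
    return mixture

# Example sweep over a few candidate ratios of Table QA vs. text-to-SQL data:
# for w in (0.25, 0.5, 0.75):
#     data = mix_training_data(corpora, {"table_qa": w, "text_to_sql": 1 - w}, 10000)
```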
Apart from labeling data, existing methods based on distilled data only focus on certain features or specific tasks, resulting in a lack of diversity in the distilled data, so the table reasoning performance of the model cannot be comprehensively improved by fine-tuning on it. Therefore, it is worth exploring how to distill diverse data for different tasks to improve the comprehensive ability and generalization of LLMs on table reasoning tasks.

Result Ensemble: Sampling Results More Efficiently
To obtain the correct answer after ensembling, researchers should focus on how to sample effectively in the space of possible results. The main purpose of obtaining multiple results is to widen the sampling space so that the correct answer can be sampled multiple times. However, existing works do not consider changing the demonstrations in the prompt to improve the correctness of the results, even though the impact of the demonstrations on the table reasoning performance of LLMs is significant. Future work should vary the demonstrations to sample results that are more likely to be correct. Current studies on selecting the correct answer rely only on the final result and do not take into account that the number of candidate results grows exponentially with the number of reasoning steps, making it difficult to sample the correct answer in such an exponentially large search space. Future work should narrow the search space by selecting the correct reasoning path at each step, and then select the correct answer based on the searched path [Xie et al., 2023].

In-Context Learning: Optimizing Prompts Automatically
Since the in-context learning performance of LLMs relies heavily on prompts, researchers should focus on how to automatically optimize prompts for table reasoning based on the question. Existing prompt design research on single-step reasoning compares candidate prompts drawn from a limited range of human-labeled instructions and examples, so the resulting performance improvement is also limited. To design better prompts, future work should automatically generate and optimize the prompt based on the questions and tables.

Instruction Design: Automatically Refining Design with Verification
Following the discussion in §3.2, how to make fuller use of the instruction-following capability to reduce the difficulty of each table reasoning question deserves the attention of researchers. Current methods of modular decomposition require manually decomposing the task into different modules in advance. However, such a decomposition applies only to a certain table task, while a fixed decomposition applicable to all table tasks is too general and does not reduce the difficulty of reasoning well. Therefore, rather than specifying the decomposition for a particular table task, future work should automatically decompose the task according to the question, which would suit all table tasks without human involvement and greatly reduce the difficulty of single-step reasoning. For the methods of tool using, current works do not notice that the process of invoking tools may introduce extra errors into the table reasoning process. Future work should include a tool verification process that prompts the LLM to revise the tools, ensuring that they can be applied correctly in the table reasoning task and thereby enhancing accuracy.
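A minimal, hypothetical sketch of the verify-and-revise loop proposed above: the LLM drafts a tool, the draft is exercised on test inputs, and failures are fed back for revision. The `llm` function, the test-case format, and the use of `exec` to materialize the draft are illustrative assumptions only; a real system would sandbox execution.

```python
def make_verified_tool(spec, test_cases, llm, max_rounds=3):
    # spec: natural-language description of the tool to synthesize.
    # test_cases: [(args, expected)] pairs used to verify the draft tool.
    code = llm(f"Write a Python function `tool` that {spec}.")
    for _ in range(max_rounds):
        namespace, failures = {}, []
        try:
            exec(code, namespace)                 # materialize the draft tool
            for args, expected in test_cases:
                got = namespace["tool"](*args)
                if got != expected:
                    failures.append((args, expected, got))
        except Exception as err:
            failures = [("<crash>", None, repr(err))]
        if not failures:
            return namespace["tool"]              # verified: safe to employ
        code = llm(f"Revise this function; it failed on {failures}:\n{code}")
    raise RuntimeError("tool could not be verified")
```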
Step-by-Step Reasoning: Mitigating the Error Cascade in Multi-Step Reasoning
Existing studies on step-by-step reasoning do not consider the error cascade problem in table reasoning, where erroneous intermediate results lead to errors in subsequent reasoning. The Tree-of-Thought (ToT) prompting method [Yao et al., 2023] alleviates this problem by maintaining multiple possible intermediate steps during multi-step reasoning, so how to apply ToT to table reasoning tasks deserves future attention.

5.2 Expanding Application
In this subsection, we analyze the requirements of table reasoning tasks in real-life scenarios and propose corresponding directions for future expansion.

Multi-Modal: Enhancing the Alignment between Image Tables and Questions
The multi-modal setting requires the model to perform automated comprehension, classification, and extraction of information from textual, visual, and other forms of evidence. Because many tables are stored as images in actual scenarios, and direct Optical Character Recognition (OCR) causes information loss due to recognition errors, visual models need to be combined with LLMs to better understand and reason about image tables. To better align visual information with natural language questions, future research can explore structures that align entities in questions with headers in image tables, thereby enhancing the semantic alignment between images and text.

Agent: Cooperating with More Diverse and Suitable Table Agents
An agent denotes an entity equipped with the capabilities to perceive the surrounding environment, engage in decision-making processes, and execute actions based on these decisions [Xi et al., 2023]. In real scenarios, when an LLM faces complex table reasoning problems that are difficult to solve alone, it can cooperate with other agents, such as code interpreters and search engines. Because different agents are suitable for solving different tasks and bring different performance changes to the same task, future research can enhance cooperation with agents by exploring more diverse agents suited to the different table tasks that arise in actual scenarios [Cao et al., 2023].

Dialogue: Backtracking the Sub-tables in Multi-turn Interaction
Dialogue systems are designed to converse with humans as assistants through conversational interactions. When interacting with users, problems such as incorrect model results and ambiguous questions can arise, which require multiple turns to correct. However, in the LLM era, few researchers pay attention to the table reasoning task under multi-turn dialogue; therefore, it is necessary to explore table reasoning with dialogues. The model needs to focus on the sub-tables related to the user question, especially when facing huge tables [Ye et al., 2023]. During multiple turns of dialogue, the question-related sub-tables constantly change; future work should therefore study how to backtrack over the decomposed sub-tables to recover all of the relevant information, preventing the latest sub-table from lacking the information required in the current turn [Yao et al., 2023].

Retrieval-Augmented Generation: Injecting Knowledge Related to the Entity
Retrieval-Augmented Generation (RAG) denotes retrieving reasoning-related information from a large number of documents before reasoning [Gao et al., 2024].
Since the table reasoning task often faces knowledge-intensive scenarios in applications, where the in-domain knowledge of the LLM is not sufficient to solve the task, future work should focus on enhancing table reasoning capabilities by retrieving knowledge. In table reasoning tasks, it can be challenging for LLMs to understand the meaning of some entities in the table, which lowers answer accuracy [Guo et al., 2019]. To address this challenge, future research should detect the unknown entities in the table and inject corresponding knowledge related to such entities.

6 Conclusion
In this paper, we summarize existing research on table reasoning with LLMs. In the LLM era, the supervised fine-tuning and result ensemble methods inherited from the pre-LLM era remain effective. Besides, the in-context learning, instruction following, and step-by-step reasoning techniques unique to the LLM era can also be used to improve table reasoning performance. Moreover, LLMs surpass pre-LLMs in table reasoning tasks because of their instruction-following and step-by-step reasoning capabilities. To inspire future research, we explore potential directions for improving table reasoning performance, and we also explore four directions for extending real applications. Finally, we summarize the current resources on table reasoning on GitHub and will continue to update them.

References
[Anonymous, 2023] Anonymous. Opentab: Advancing large language models as open-domain table reasoners. In Proc. of ICLR, 2023.
[Bian et al., 2023] Junyi Bian, Xiaolei Qin, Wuhe Zou, Mengzuo Huang, and Weidong Zhang. Hellama: Llama-based table to text generation by highlighting the important evidence, 2023.
[Cao et al., 2023] Yihan Cao, Shuyi Chen, Ryan Liu, Zhiruo Wang, and Daniel Fried. API-assisted code generation for question answering on varied table structures. In Proc. of EMNLP, 2023.
[Chang and Fosler-Lussier, 2023] Shuaichen Chang and Eric Fosler-Lussier. Selective demonstrations for cross-domain text-to-SQL. In Proc. of EMNLP Findings, 2023.
[Chen et al., 2020] Wenhu Chen, Hongmin Wang, Jianshu Chen, Yunkai Zhang, Hong Wang, Shiyang Li, Xiyou Zhou, and William Yang Wang. Tabfact: A large-scale dataset for table-based fact verification. In Proc. of ICLR, 2020.
[Chen, 2023] Wenhu Chen. Large language models are few(1)-shot table reasoners. In Proc. of ACL Findings, 2023.
[Cheng et al., 2023] Zhoujun Cheng, Tianbao Xie, Peng Shi, Chengzu Li, Rahul Nadkarni, Yushi Hu, Caiming Xiong, Dragomir Radev, Mari Ostendorf, Luke Zettlemoyer, Noah A. Smith, and Tao Yu. Binding language models in symbolic languages. In Proc. of ICLR, 2023.
[Dou et al., 2023] Longxu Dou, Yan Gao, Mingyang Pan, Dingzirui Wang, Wanxiang Che, Jian-Guang Lou, and Dechen Zhan. Unisar: A unified structure-aware autoregressive language model for text-to-sql semantic parsing. International Journal of Machine Learning and Cybernetics, 2023.
[Gan et al., 2021] Yujian Gan, Xinyun Chen, Qiuping Huang, Matthew Purver, John R. Woodward, Jinxia Xie, and Pengsheng Huang. Towards robustness of text-to-SQL models against synonym substitution. In Proc. of ACL, 2021.
[Gao et al., 2023] Dawei Gao, Haibin Wang, Yaliang Li, Xiuyu Sun, Yichen Qian, Bolin Ding, and Jingren Zhou. Text-to-sql empowered by large language models: A benchmark evaluation, 2023.
[Gao et al., 2024] Yunfan Gao, Yun Xiong, Xinyu Gao, Kangxiang Jia, Jinliu Pan, Yuxi Bi, Yi Dai, Jiawei Sun, Qianyu Guo, Meng Wang, and Haofen Wang.
Retrieval-augmented generation for large language models: A survey, 2024.
[Guo et al., 2019] Jiaqi Guo, Zecheng Zhan, Yan Gao, Yan Xiao, Jian-Guang Lou, Ting Liu, and Dongmei Zhang. Towards complex text-to-SQL in cross-domain database with intermediate representation. In Proc. of ACL, 2019.
[Jiang et al., 2022] Zhengbao Jiang, Yi Mao, Pengcheng He, Graham Neubig, and Weizhu Chen. OmniTab: Pretraining with natural and synthetic data for few-shot table-based question answering. In Proc. of NAACL, 2022.
[Jiang et al., 2023] Jinhao Jiang, Kun Zhou, Zican Dong, Keming Ye, Xin Zhao, and Ji-Rong Wen. StructGPT: A general framework for large language model to reason over structured data. In Proc. of EMNLP, 2023.
[Jin et al., 2022] Nengzheng Jin, Joanna Siebert, Dongfang Li, and Qingcai Chen. A survey on table question answering: Recent advances, 2022.
[Kothyari et al., 2023] Mayank Kothyari, Dhruva Dhingra, Sunita Sarawagi, and Soumen Chakrabarti. CRUSH4SQL: Collective retrieval using schema hallucination for Text2SQL. In Proc. of EMNLP, 2023.
[Lei et al., 2023] Fangyu Lei, Tongxu Luo, Pengqi Yang, Weihao Liu, Hanwen Liu, Jiahe Lei, Yiming Huang, Yifan Wei, Shizhu He, Jun Zhao, and Kang Liu. Tableqakit: A comprehensive and practical toolkit for table-based question answering, 2023.
[Li and Xie, 2024] Zhenwen Li and Tao Xie. Using llm to select the right sql query from candidates, 2024.
[Li et al., 2023] Haoyang Li, Jing Zhang, Cuiping Li, and Hong Chen. Resdsql: Decoupling schema linking and skeleton parsing for text-to-sql. In Proc. of AAAI, 2023.
[Nan et al., 2022] Linyong Nan, Chiachun Hsieh, Ziming Mao, Xi Victoria Lin, Neha Verma, Rui Zhang, Wojciech Kryściński, Hailey Schoelkopf, Riley Kong, Xiangru Tang, Mutethia Mutuma, Ben Rosand, Isabel Trindade, Renusree Bandaru, Jacob Cunningham, Caiming Xiong, and Dragomir Radev. FeTaQA: Free-form table question answering. Transactions of the Association for Computational Linguistics, 2022.
[Nan et al., 2023a] Linyong Nan, Ellen Zhang, Weijin Zou, Yilun Zhao, Wenfei Zhou, and Arman Cohan. On evaluating the integration of reasoning and action in llm agents with database question answering, 2023.
[Nan et al., 2023b] Linyong Nan, Yilun Zhao, Weijin Zou, Narutatsu Ri, Jaesung Tae, Ellen Zhang, Arman Cohan, and Dragomir Radev. Enhancing text-to-SQL capabilities of large language models: A study on prompt design strategies. In Proc. of EMNLP Findings, 2023.
[Ni et al., 2023] Ansong Ni, Srini Iyer, Dragomir Radev, Ves Stoyanov, Wen-tau Yih, Sida I Wang, and Xi Victoria Lin. Lever: Learning to verify language-to-code generation with execution. In Proc. of ICML, 2023.
[Pasupat and Liang, 2015] Panupong Pasupat and Percy Liang. Compositional semantic parsing on semi-structured tables. In Proc. of ACL, 2015.
[Pourreza and Rafiei, 2023] Mohammadreza Pourreza and Davood Rafiei. Din-sql: Decomposed in-context learning of text-to-sql with self-correction. CoRR, 2023.
[Saha et al., 2023] Swarnadeep Saha, Xinyan Yu, Mohit Bansal, Ramakanth Pasunuru, and Asli Celikyilmaz. MURMUR: Modular multi-step reasoning for semi-structured data-to-text generation. In Proc. of ACL Findings, 2023.
[Sui et al., 2023] Yuan Sui, Jiaru Zou, Mengyu Zhou, Xinyi He, Lun Du, Shi Han, and Dongmei Zhang. Tap4llm: Table provider on sampling, augmenting, and packing semi-structured data for large language model reasoning, 2023.
[Sun et al., 2023] Ruoxi Sun, Sercan Arik, Rajarishi Sinha, Hootan Nakhost, Hanjun Dai, Pengcheng Yin, and Tomas Pfister.
SQLPrompt: In-context text-to-SQL with minimal labeled data. In Proc. of EMNLP Findings, 2023.
[Wang et al., 2023a] Bing Wang, Changyu Ren, Jian Yang, Xinnian Liang, Jiaqi Bai, Qian-Wen Zhang, Zhao Yan, and Zhoujun Li. Mac-sql: Multi-agent collaboration for text-to-sql, 2023.
[Wang et al., 2023b] Tianshu Wang, Hongyu Lin, Xianpei Han, Le Sun, Xiaoyang Chen, Hao Wang, and Zhenyu Zeng. Dbcopilot: Scaling natural language querying to massive databases, 2023.
[Wang et al., 2024] Zilong Wang, Hao Zhang, Chun-Liang Li, Julian Martin Eisenschlos, Vincent Perot, Zifeng Wang, Lesly Miculicich, Yasuhisa Fujii, Jingbo Shang, Chen-Yu Lee, and Tomas Pfister. Chain-of-table: Evolving tables in the reasoning chain for table understanding, 2024.
[Xi et al., 2023] Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, and Tao Gui. The rise and potential of large language model based agents: A survey, 2023.
[Xie et al., 2022] Tianbao Xie, Chen Henry Wu, Peng Shi, Ruiqi Zhong, Torsten Scholak, Michihiro Yasunaga, Chien-Sheng Wu, Ming Zhong, Pengcheng Yin, Sida I. Wang, Victor Zhong, Bailin Wang, Chengzu Li, Connor Boyle, Ansong Ni, Ziyu Yao, Dragomir Radev, Caiming Xiong, Lingpeng Kong, Rui Zhang, Noah A. Smith, Luke Zettlemoyer, and Tao Yu. UnifiedSKG: Unifying and multi-tasking structured knowledge grounding with text-to-text language models. In Proc. of EMNLP, 2022.
[Xie et al., 2023] Yuxi Xie, Kenji Kawaguchi, Yiran Zhao, Xu Zhao, Min-Yen Kan, Junxian He, and Qizhe Xie. Self-evaluation guided beam search for reasoning. In Proc. of NeurIPS, 2023.
[Xue et al., 2024] Siqiao Xue, Caigao Jiang, Wenhui Shi, Fangyin Cheng, Keting Chen, Hongjun Yang, Zhiping Zhang, Jianshan He, Hongyang Zhang, Ganglin Wei, Wang Zhao, Fan Zhou, Danrui Qi, Hong Yi, Shaodong Liu, and Faqiang Chen. Db-gpt: Empowering database interactions with private large language models, 2024.
[Yang et al., 2023] Bohao Yang, Chen Tang, Kun Zhao, Chenghao Xiao, and Chenghua Lin. Effective distillation of table-based reasoning ability from llms, 2023.
[Yao et al., 2023] Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Thomas L. Griffiths, Yuan Cao, and Karthik R Narasimhan. Tree of thoughts: Deliberate problem solving with large language models. In Proc. of NeurIPS, 2023.
[Ye et al., 2023] Yunhu Ye, Binyuan Hui, Min Yang, Binhua Li, Fei Huang, and Yongbin Li. Large language models are versatile decomposers: Decomposing evidence and questions for table-based reasoning. In Proc. of SIGIR, 2023.
[Yin et al., 2020] Pengcheng Yin, Graham Neubig, Wen-tau Yih, and Sebastian Riedel. TaBERT: Pretraining for joint understanding of textual and tabular data. In Proc. of ACL, 2020.
[Yu et al., 2018] Tao Yu, Rui Zhang, Kai Yang, Michihiro Yasunaga, Dongxu Wang, Zifan Li, James Ma, Irene Li, Qingning Yao, Shanelle Roman, Zilin Zhang, and Dragomir Radev. Spider: A large-scale human-labeled dataset for complex and cross-domain semantic parsing and text-to-SQL task. In Proc. of EMNLP, 2018.
[Zha et al., 2023] Liangyu Zha, Junlin Zhou, Liyao Li, Rui Wang, Qingyi Huang, Saisai Yang, Jing Yuan, Changbao Su, Xiang Li, Aofeng Su, Tao Zhang, Chen Zhou, Kaizhe Shou, Miao Wang, Wufang Zhu, Guoshan Lu, Chao Ye, Yali Ye, Wentao Ye, Yiming Zhang, Xinglong Deng, Jie Xu, Haobo Wang, Gang Chen, and Junbo Zhao. Tablegpt: Towards unifying tables, natural language and commands into one gpt, 2023.
[Zhang et al., 2023a] Hanchong Zhang, Ruisheng Cao, Lu Chen, Hongshen Xu, and Kai Yu. Act-sql: In-context learning for text-to-sql with automatically-generated chain-of-thought, 2023.
[Zhang et al., 2023b] Tianshu Zhang, Xiang Yue, Yifei Li, and Huan Sun. Tablellama: Towards open large generalist models for tables, 2023.
[Zhang et al., 2023c] Yunjia Zhang, Jordan Henkel, Avrilia Floratou, Joyce Cahoon, Shaleen Deep, and Jignesh M. Patel. Reactable: Enhancing react for table question answering, 2023.
[Zhao and Yang, 2022] Guangzhen Zhao and Peng Yang. Table-based fact verification with self-labeled keypoint alignment. In Proc. of COLING, 2022.
[Zhao et al., 2023a] Bowen Zhao, Changkai Ji, Yuejie Zhang, Wen He, Yingwen Wang, Qing Wang, Rui Feng, and Xiaobo Zhang. Large language models are complex table parsers. In Proc. of EMNLP, 2023.
[Zhao et al., 2023b] Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang, Xiaolei Wang, Yupeng Hou, Yingqian Min, Beichen Zhang, Junjie Zhang, Zican Dong, Yifan Du, Chen Yang, Yushuo Chen, Zhipeng Chen, Jinhao Jiang, Ruiyang Ren, Yifan Li, Xinyu Tang, Zikang Liu, Peiyu Liu, Jian-Yun Nie, and Ji-Rong Wen. A survey of large language models. arXiv preprint arXiv:2303.18223, 2023.
[Zhong et al., 2023] Ruiqi Zhong, Charlie Snell, Dan Klein, and Jason Eisner. Non-programmers can label programs indirectly via active examples: A case study with text-to-SQL. In Proc. of EMNLP, 2023.
A Principal-Agent Model of Systems Engineering Processes with Application to Satellite Design
Salar Safarkhani†, Vikranth Reddy Kattakuri†, Ilias Bilionis†∗, Jitesh Panchal†
†School of Mechanical Engineering, Purdue University, West Lafayette, Indiana 47907-2088
∗Corresponding author, Email: [email protected]
arXiv:1903.06979v1 [cs.MA] 16 Mar 2019

Abstract—We present a principal-agent model of a one-shot, shallow, systems engineering process. The process is "one-shot" in the sense that decisions are made during one time step and that they are final. The term "shallow" refers to a one-layer hierarchy of the process. Specifically, we assume that the systems engineer has already decomposed the problem in subsystems, and that each subsystem is assigned to a different subsystem engineer. Each subsystem engineer works independently to maximize their own expected payoff. The goal of the systems engineer is to maximize the system-level payoff by incentivizing the subsystem engineers. We restrict our attention to requirement-based system-level payoffs, i.e., the systems engineer makes a profit only if all the design requirements are met. We illustrate the model using the design of an Earth-orbiting satellite system where the systems engineer determines the optimum incentive structures and requirements for two subsystems: the propulsion subsystem and the power subsystem. The model enables the analysis of a systems engineer's decisions about optimal passed-down requirements and incentives for sub-system engineers under different levels of task difficulty and associated costs. Sample results, for the case of risk-neutral systems and subsystems engineers, show that it is not always in the best interest of the systems engineer to pass down the true requirements. As expected, the model predicts that for small to moderate task uncertainties the optimal requirements are higher than the true ones, effectively eliminating the probability of failure for the systems engineer. In contrast, the model predicts that for large task uncertainties the optimal requirements should be smaller than the true ones in order to lure the subsystem engineers into participation.

I. INTRODUCTION
The ubiquitous problem of schedule and cost over-runs during the development of large-scale complex systems is well documented within the systems engineering literature [1]. Various remedies have been proposed to address these unsustainable trends, including better methods and tools for managing complexity, better incentive mechanisms, and the transition from document-based systems engineering to model-based systems engineering (MBSE). The research community has related these trends to the fundamental way in which systems engineering processes are carried out. Requirements engineering, which is one of the foundational processes within systems engineering, has been identified as a key source of the inefficiency. Collopy, for example, argues that the use of requirements in systems engineering is an ineffective way of coordinating between the systems engineer and the subsystem engineers [2]. Therefore, there is a need within systems engineering to model and analyze the requirements engineering process.
There have been few efforts addressing this need. By modeling systems engineering processes as multi-disciplinary design optimization problems, Collopy et al.
show that using requirements within systems engineering processes creates design trade conflicts among different subsystems, resulting in dead losses within the system [2]. Collopy and Hollingsworth [1] espouse the use of value-driven design (VDD) as a better alternative to requirements engineering, wherein objectives for extensive attributes are passed down instead of requirements. Their model is applicable to settings where the incentives of the subsystem engineers are well aligned with the objectives of the systems engineers. This assumption may be valid when both the systems engineers and the subsystem engineers are within the same organization. If, on the other hand, the subsystem engineers are independent decision makers with private information, driven by their own objectives, their model is inappropriate and the superiority of VDD is not clear.
To model realistic systems engineering processes, there is a need to model the interactive decisions of self-interested actors using game theory. Vermillion and Malak [3] take initial steps in that direction by modeling the interactions between a systems engineer and subsystem engineers using principal-agent models. They adapt the generalized principal-agent model to the situation of a systems engineer delegating work to subsystem engineers as a one-shot game. Their adaptation is primarily focused on incorporating behavioral aspects, such as deviations from expected utility maximization, within the general principal-agent model.
While incorporating behavioral aspects within principal-agent models is an important step forward, we believe that there is still a lack of models that account for unique aspects of systems engineering, specifically, the information available to systems engineers, the state of technology, the uncertainty in the ability to achieve specific outcomes, and the level of difficulty of the tasks. To address this gap, we develop a principal-agent model [4] of a simple systems engineering process in which decisions are made once and all involved individuals have their own private interests. Note that our model is an oversimplification of real systems engineering processes, which are iterative and in which information and outcomes flow back and forth between the systems engineer and subsystem engineers until a final decision is made. Our model should be considered a first step towards modeling full-fledged systems engineering processes. Using our framework, we study the optimal mechanisms within the class of requirement-based incentives. We illustrate the model using a satellite design case study with two subsystems: power and propulsion. Specifically, we show how historical data can be used to infer the parameters of our process model for this case study.
The paper is organized as follows. We start Sec. II by describing our systems engineering process model in general terms and casting the selection of subsystem incentives as a mechanism design problem. In Sec. II.A, we justify some assumptions about the nature of the subsystem engineers which simplify the model. In Sec. II.B, we focus the discussion on the class of requirement-based incentives for all agents. In Sec. II.C and II.D, we non-dimensionalize the equations and describe how the parameters can be inferred from readily available historical data. In Sec. III, we apply the model to the design of a spacecraft, taking into account two subsystems (power and propulsion). Finally, in Sec. IV we present our numerical study and in Sec. V our conclusions.
II. MODELING A SINGLE-SHOT, SHALLOW SYSTEMS ENGINEERING PROCESS
We consider a model of a single-shot (evolving in one time step), shallow (considering one-layer interactions between a systems engineer and multiple subsystem engineers) systems engineering process. The systems engineer (SE) has already decomposed the problem into N subsystems. The SE assigns the design of each subsystem to a subsystem engineer (sSE). Each sSE designs independently to maximize their own expected payoff, and returns the design outcome back to the SE. The goal of the SE is to incentivize the sSEs to produce subsystem designs that maximize the expected system-level payoff by choosing appropriate contracts. We start by formulating the problem of optimal subsystem contract design in its full generality. Then, we make simplifying assumptions about the form of the sSEs' quality and utility functions, and we study the optimality of requirement-based contracts.

Let i = 1, . . . , N be a label indexing the sSEs. The i-th sSE chooses a normalized effort level $e_i \in [0, 1]$. This measures the percentage of the maximum effort that the sSE can allocate to this specific project within a predefined time framework, e.g., a fiscal year. The units of the effort depend on the nature of the sSE. If the sSE is an individual who works for the same organization as the SE, then the effort $e_i$ could be measured in terms of the percentage of working time that the individual dedicates to the project. Alternatively, if the sSE is an external contractor, e.g., another company, then effort could be measured in terms of the percentage of the available yearly resources that the contractor dedicates to this particular project. We denote the cost of effort to the sSE as $c_i(e_i)$. In economic terms, $c_i(e_i)$ is an opportunity cost, i.e., the monetary gain the sSE could receive, but forfeits, to participate in this particular project.

Let $(\Omega, \mathcal{F}, \mathbb{P})$ be a probability space associated with random states of nature. We model the design qualities that the i-th sSE can produce as a stochastic process $q_i(e_i, \omega)$. That is, the quality function $q_i(e_i, \omega)$ gives the design quality that the i-th sSE can achieve by choosing an effort level $e_i$ if the state of nature is $\omega \in \Omega$. The quality function is normalized so that zero corresponds to the quality of the current state-of-the-art. Note that we have deliberately chosen to ignore the dependence of $q_i(e_i, \omega)$ on any private information. In other words, we assume that the form of the quality function is common knowledge.

Each of the sSEs enters a contract with the SE. The contracts describe transfer functions, $t_i(q_i)$, which specify the transfer of monetary funds from the SE to the sSE contingent on the quality of the design that the sSE produces. Therefore, the payoff to the sSE is
$\pi_i(e_i, \omega) = t_i(q_i(e_i, \omega)) - c_i(e_i)$. (1)
We assume that the sSE selects an optimal effort level ex ante, i.e., before they observe the future state of nature $\omega$. If we further assume that the sSE is risk-neutral, then rationality implies that they should select their effort level by maximizing their expected payoff:
$e_i^*(t_i(\cdot)) = \arg\max_{e_i \in [0,1]} \mathbb{E}_\omega[\pi_i(e_i, \omega)]$. (2)
Note the dependence on the transfer function.

To ensure that the sSEs are willing to participate in this project, their optimal expected payoff must be positive. Otherwise, the sSEs have no incentive to be part of the project, as their expected monetary benefit is smaller than their opportunity cost. Therefore, the SE must choose transfer functions that enforce the participation constraints
$\mathbb{E}_\omega[\pi_i(e_i^*(t_i(\cdot)), \omega)] \geq 0$, (3)
for i = 1, . . . , N.

The SE obtains from each sSE the following design qualities:
$q_i^*(t_i(\cdot), \omega) = q_i(e_i^*(t_i(\cdot)), \omega)$. (4)
If $V(q_1, \ldots, q_N)$ is the net present value of any cash flows that result from a system with subsystem qualities $q_1, \ldots, q_N$, then the total payoff to the SE is
$\pi(t(\cdot), \omega) = V(q_1^*(t_1(\cdot), \omega), \ldots, q_N^*(t_N(\cdot), \omega)) - \sum_{i=1}^{N} t_i(q_i^*(t_i(\cdot), \omega))$, (5)
where we define $t(\cdot) = (t_1(\cdot), \ldots, t_N(\cdot))$. Assuming that the SE makes ex-ante, risk-neutral decisions, they must select transfer functions that solve
$t^*(\cdot) = \arg\max_{t(\cdot)} \mathbb{E}_\omega[\pi(t(\cdot), \omega)]$, (6)
subject to the N participation constraints defined in Eq. (3). Of course, if the optimal expected payoff $\mathbb{E}_\omega[\pi(t^*(\cdot), \omega)]$ is negative, then the SE does not initiate the project in the first place. In what follows, we study this mechanism design problem by making specific assumptions about the form of the SE value function, the sSEs' quality functions and costs, and the form of the possible transfer functions.

A. Assumptions about the subsystem engineers
The random field $q_i(e_i, \omega)$ captures the common state of knowledge about what is technologically possible in the design quality of subsystem i. Using the Karhunen-Loève expansion [5], the random field $q_i(e_i, \omega)$ can be written as
$q_i(e_i, \omega) = q_i^0(e_i) + \sum_{k=1}^{\infty} \sqrt{\lambda_{ik}}\, \xi_{ik}(\omega)\, \phi_{ik}(e_i)$, (7)
where $q_i^0(e_i)$ is the mean of the random field, $\lambda_{ik}$ and $\phi_{ik}(e_i)$ are the eigenvalues and eigenvectors of its covariance function, respectively, and the random variables $\xi_{ik}$ are zero-mean, unit-variance, and uncorrelated. Assuming stationarity of the process, these quantities can, in principle, be estimated statistically from historical data of marginal investments versus increases in product quality. As a first approximation, we truncate the series at k = 1, keeping only the largest eigenvalue:
$q_i(e_i, \omega) \approx q_i^0(e_i) + \sqrt{\lambda_{i1}}\, \xi_{i1}(\omega)\, \phi_{i1}(e_i)$.
Furthermore, we approximate the zero-mean, unit-variance random variable $\xi_{i1}$ by a standard normal random variable $\xi$ (the standard normal is the maximum entropy distribution with zero mean and unit variance). We also assume that the first eigenvector is approximately constant, $\phi_{i1}(e_i) \approx \text{const}$, and we introduce the new variable $\sigma_i = \sqrt{\lambda_{i1}}\, \phi_{i1}$. Without loss of generality, we may take $\sigma_i > 0$. Finally, we take the first-order Taylor expansion of the mean, $q_i^0(e_i) = a_i e_i + O(e_i^2)$, recalling that we scale the quality so that zero corresponds to the current state-of-the-art, which can be delivered without any effort. This is reasonable since we are considering a one-shot systems engineering process which, necessarily, takes place in a limited amount of time. For larger timescales, we would expect $q_i(e_i, \omega)$ to be curved: concave for mature technologies, and convex followed by concave for emerging technologies. Furthermore, we take $a_i > 0$, since more effort can only lead to increased design quality. To summarize, we model the quality as
$q_i(e_i, \omega) = a_i e_i + \sigma_i \xi(\omega)$, (8)
where $a_i, \sigma_i > 0$ and $\xi \sim N(0, 1)$.

The parameter $a_i$ depends on the skills of the sSE as well as on the maturity of the underlying technology. A skillful sSE produces a higher increase in quality over the state-of-the-art than a less skilled sSE. Therefore, keeping the maturity of the technology fixed, $a_i$ is expected to grow as the skills of the sSE improve. Similarly, keeping the skills of the sSE fixed, $a_i$ decreases as a function of the maturity of the underlying technology: the more mature the underlying technology is, the more difficult it becomes to obtain a given increase in quality. The parameter $\sigma_i$ behaves in exactly the opposite way. A skillful sSE produces design qualities that vary less, so $\sigma_i$ decreases as skill improves. On the other hand, we expect that subsystem designs that depend on mature technologies are more predictable, so $\sigma_i$ decreases as technological maturity increases.
This is equivalent to least squares estimate for Ai: (14) ˆAi = arg min Ai Si(cid:88) s=1 [Qi0 + Ai(Iis − Ii0) − Qis]2 , (19) subject to the participation constraints: (cid:19) i (ti (·; ψi)) − ψi3 (cid:18) aie∗ ψi1 + ψi2Φ σi − cie∗ i (ti (·; ψi)) ≥ 0, (15) and the bounds: ψik ≥ 0. (16) for k = 1, 2, 3 and i = 1, . . . , N . In practice, the optimal solution is always within the following bounds: 0 ≤ ψi1, ψi2 ≤ 2ci, and ri − 3σi < ψi3 < ri + 3σi. (17) We solve this problem using the constrained optimization by linear approximation (COBYLA) algorithm [8] as imple- mented in [7]. C. Non-dimensionalization of the equations Without loss of generality, we can pick all the subsys- tem requirements to be ri = 1. This can be achieved by appropriately scaling all ai’s and all σi’s. In other words, the quality function of each subsystem will be measured in units of the corresponding business-imposed requirement. Then, the inverse of the coefficient ai, can be interpreted as the effort that needed to meet the subsystem requirement for sSE if there is no uncertainty. We will consider two levels of ai’s corresponding to different levels of subsystem design difficulty: (i) (hard) ai = 1.5, (ii) (easy) ai = 2. Having scaled the quality function in this way, the variance parameter σi can also be interpreted as the amount of uncertainty in the quality of the final design as a percentage of the requirement. We will consider three levels of uncertainty: (i) (low) σi = 0.05, (ii) (moderate) σi = 0.1, and (iii) (high) σi = 0.2. Finally, also without loss of generality, we may set V0 = 1. This can be achieved by appropriately scaling the transfer functions and the opportunity costs of all sSEs. It amounts to measuring all monetary quantities, in terms of the SE’s maximum value. For the opportunity costs of the sSEs, we will consider two levels: (i) (low) ci = 0.01, (ii) (high) ci = 0.05. D. Extracting subsystem parameters from historical data To study decision making in the context of a real ap- plication, one needs to extract all parameters, ai, ci, σi, . . . , from historical data. To this end, let Qi denote the quality of the i-th subsystem in physical units, and Ii the cumulative investment per firm on this technology. Historical data, say and to setting Σi equal to the mean residual square error: ˆΣi = 1 Si Si(cid:88) s=1 [Qi0 + A∗ i (Iis − Ii0) − Qis]2 . (20) Now, let Qir be the required quality for subsystem i in physical units. The scaled quality of a subsystem qi, can be defined as: qi = Qi − Qi0 Qir − Qi0 , (21) with this definition, we get qi = 0 for the state-of-the-art, and qi = 1 for the requirement. Substituting Eq. (18) in Eq. (21) while making use of the maximum likelihood estimates for Ai and Σi, we get qi = ˆAi Qir − Qi0 (Ii − Ii0) + ˆΣi Qir − Qi0 ξi(ω). (22) From this equation, we see that the uncertain parameter of our previous discussions, can be obtained from . (23) σi = ˆΣi Qir − Qi0 Finally, let Ti and Ci represent the time for which the ith sSE is to be hired and the cost of the engineer per unit time, respectively. Ti is just the duration of the systems engineering process we consider. The opportunity cost Ci can be read inferred from the balance sheets of publicly traded firms related to the technology. We can now define the effort variable ei as: ei = Ii − Ii0 TiCi . From this and Eq. (22), we see that ai is given by: ai = TiCi ˆAi Qir − Qi0 . III. ILLUSTRATIVE EXAMPLE: SPACECRAFT DESIGN PROBLEM A. 
Spacecraft systems design During the initial proposal phase of satellite development for scientific applications, the principal investigator puts forward an estimate of how the goals of the project will be achieved through engineering means. These project goals concern the 4 (24) (25) the heart overarching science objectives of the mission which corre- spond to the instrument design at the satellite. However, in the process of successfully launching a scientific instrument into space, all the necessary components to power the instruments, actuate the spacecraft, and transmit data must also be included in the satellite payload. These mission requirements are translated to specific functional requirements for each subsystem of the spacecraft. Typically, a spacecraft consists of seven main sub- systems [10], namely, electrical power subsystem (EPS), propulsion, attitude determination and control (ADC), on- board processing, telemetry, tracking and command (TT&C), structures and thermal subsystems. For the proposed study, we will focus our attention on two subsystems (N = 2): EPS and propulsion. Simplifying the analysis, we assume that the design of these subsystems will be assigned to two (subsystem) engineers in a one-shot fashion. Note that, the systems engineering process of the spacecraft is an iterative process and the information and results are exchanged and flow back and forth between the SE and sSEs in each iteration. Our model is a very crude approximation of reality. The goal of the SE is to optimally incentivize the sSEs to produce subsystem designs that meet the mission’s requirements. B. Electrical power subsystem The EPS is designed and configured to perform several key functions, the primary being a continuous and reliable source of peak and average electrical power for the life of the mission. It consists of a power source, energy storage, power conversion/distribution and power regulations and con- trol equipment. Typically, for earth orbiting satellites, one employs solar photo-voltaic arrays as a primary energy source and batteries as secondary power storage units. Silicon solar cells are the most commonly used photo-voltaic cells for space applications because of their low cost and high availability. In this study, the design quality of interest for the EPS sSE is chosen to be the solar cell efficiency, i.e., Q1 is the efficiency of Si-based solar photo-voltaic cells expressed as a percentage value, and I1 is the average cumulative investment in solar cell research per firm. To estimate the relationship between Q1 and I1, we first considered the global trends of commercial Si module efficiencies and cumulative investments in solar-cell research over 2001–2008. Over this period, a total of hundred solar cell research and development companies received venture capital (VC) funding [11]. Based on this information, we assume that, on average, thirteen companies are participating in solar cell R&D every year. The variation of commercial crystalline-Si module effi- ciency (Q1 (%) ) with time is taken from [12]. The trend of cumulative global VC investment per firm (I1 (millions USD)) in crystalline Si-cell technology over time is obtained from [11]. In 2008, the state-of-the-art was Q10 = 19% and the cumulative investment per firm was I10 = 102.4 million USD. A maximum likelihood fit of the parameters in Eq. (18) results in a regression coefficient of A1 = 0.035% per million USD, and standard deviation Σ1 = 0.15%. We can visualize the data and the maximum likelihood fit in Fig. 
Fig. 1: Spacecraft case study (EPS subsystem): Historical data (2001–2008) of solar cell efficiencies vs cumulative investment per firm. The solid line and the shaded area correspond to the maximum likelihood fit of a linear regression model and the corresponding 95% prediction intervals, respectively.

The cost parameter $C_1$ is estimated by considering the total pay towards employee salaries over the number of employees in R&D jobs in the global PV industry. According to the statistics [12], around 2,320 employees were involved in such jobs by the end of 2008. The value of $C_1$ is the median salary of a solar cell development engineer, which is approximately equal to 100,000 (0.1 million) USD based on data from [13]. Substituting the values of $Q_{10}$, $C_1$, and $A_1$ in Eq. (25) yields the following relationship between the parameters $Q_{1r}$, $T_1$, and $a_1$:
$a_1 = \frac{0.0035\, T_1}{Q_{1r} - 19}$. (26)
The variation of the required quality (efficiency) for sSE-1 ($Q_{1r}$, in %) with respect to the time for which one person from the sSE firm is to be hired ($T_1$, in years) is shown in Fig. 2, for two different levels of EPS design difficulty.

Fig. 2: Spacecraft case study (EPS subsystem): Variation of Q1r (percentage efficiency) w.r.t. T1 (time in years).

C. Propulsion subsystem
Space propulsion systems essentially provide the thrust to lift the launch vehicle along with its payload from the launch pad and place the payload into low-Earth orbit. They assist payload transfer between lower and higher orbits or into trajectories based on the type of mission. Finally, they provide thrust for attitude control and orbit corrections [10]. Based on mission profiles, performance requirements for propulsion systems include thrust, total impulse, and duty cycle specifications, for which the specific impulse and propellant density are the key parameters.
Chemical combustion systems, which are the most common systems for space applications, can be divided into three basic categories: liquid, solid, and hybrid. The terminology refers
IV. RESULTS

We present numerical examples of one-shot SE processes with two identical sSEs (N = 2). We investigate four different cases consisting of all possible combinations of two difficulty levels (easy (ai = 2) vs. hard (ai = 1.5)) and two sSE opportunity cost levels (low (ci = 0.01) vs. high (ci = 0.05)). For each case, we consider three scenarios with different uncertainty levels: σ = 0.05, 0.1, and 0.2. For each scenario, we obtain the optimal contract ψ∗ by solving the mechanism design problem of Sec. II within the class of requirement-based incentives. Finally, we study the sensitivity of the SE's expected payoff to the passed-down requirement by varying the value of ψ13 in the range [0, 2]. All numerical results can be found in Fig. 5 and its captions. Specifically, Figs. 5a, 5b, 5c, and 5d include the results for a hard-task–low-cost sSE, hard-task–high-cost sSE, easy-task–low-cost sSE, and easy-task–high-cost sSE, respectively. The optimal contracts of each scenario can be read from the captions of the associated subfigures.

Fig. 3: Spacecraft case study (propulsion subsystem): historical data (1979–1988) of specific impulse of solid mono-propellants vs. cumulative investment per firm. The solid line and the shaded area correspond to the maximum likelihood fit of a linear regression model and the corresponding 95% prediction intervals, respectively.

Fig. 4: Spacecraft case study (propulsion subsystem): variation of Q2r (specific impulse) w.r.t. T2 (time in years).

We start by commenting on the properties of the payment amounts ψ∗i1 and ψ∗i2. We will refer to ψ∗i1 as the participation payment, i.e., the fixed payment made to the sSE independently of the design outcome, and to ψ∗i2 as the bonus payment, i.e., the payment made to the sSE if the passed-down requirements are met in our one-shot systems engineering model. First, we observe that across all the scenarios the participation payment increases as uncertainty grows. This makes sense, since the sSE is expected to ask for a higher certain gain to accept a higher-risk task. Counter-intuitively, the bonus payment is independent of the uncertainty level. Second, for fixed opportunity costs, the participation payment decreases with increasing task difficulty while the bonus payment behaves in the opposite way (it increases with increasing task difficulty). This means that for harder tasks the SE hedges themselves by shifting some of the participation payment to the bonus. Finally, as the opportunity cost increases, both the participation and bonus payments increase.

Let us now focus on the optimal passed-down requirements (ψ∗i3). Across all scenarios, the optimal passed-down requirement increases as the uncertainty becomes larger. This is an example of the SE attempting to increase the probability of the actual requirement being met as uncertainty increases. Moreover, we observe that the optimal passed-down requirement is independent of the opportunity cost. This is due to the fact that the SE value function is requirement-based. Furthermore, we observe that for easy tasks (captions of Figs. 5c and 5d) the optimal passed-down requirement is always greater than the actual requirement ri = 1.
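The sensitivity study can be reproduced in miniature once one commits to a concrete model. The sketch below is only illustrative: it assumes the sSE's quality is linear in effort, q = a·e + σξ with ξ ~ N(0, 1) (so larger a means an easier task), that effort costs c·e, and that the SE earns a unit value when the actual requirement r = 1 is met. The exact payoff functions are defined in Sec. II of the paper; the payment constants below mirror the Fig. 5a caption (hard task, low cost, σ = 0.10).

```python
# Sketch: sensitivity of the SE's expected payoff to the passed-down
# requirement psi3, for a requirement-based contract (psi1, psi2, psi3).
# The payoff forms are assumptions for illustration only (see lead-in).
import numpy as np
from scipy.stats import norm

a, c, sigma, r = 1.5, 0.01, 0.10, 1.0        # difficulty, opp. cost, noise, actual req.
psi1, psi2 = 0.001, 0.007                     # participation and bonus payments

def best_effort(psi3, grid=np.linspace(0.0, 2.0, 2001)):
    """sSE best response: maximize psi1 + psi2 * P(q >= psi3) - c*e over effort e."""
    payoff = psi1 + psi2 * norm.sf((psi3 - a * grid) / sigma) - c * grid
    return grid[np.argmax(payoff)]

for psi3 in np.linspace(0.0, 2.0, 9):
    e_star = best_effort(psi3)
    p_req = norm.sf((r - a * e_star) / sigma)      # P(actual requirement met)
    p_bonus = norm.sf((psi3 - a * e_star) / sigma)  # P(bonus is paid)
    se_payoff = p_req - psi1 - psi2 * p_bonus
    print(f"psi3={psi3:.2f}  e*={e_star:.3f}  E[SE payoff]={se_payoff:.3f}")
```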
This result can be intuitively understood as follows. Since the task is easy, the sSE will almost surely reach the actual requirement with sufficient effort, even in the presence of significant uncertainties. By setting the requirement threshold higher than the actual one, the SE forces the sSE to exert more effort, effectively increasing the probability of success to near certainty. In contrast, for hard tasks (Figs. 5a and 5b) the optimal passed-down requirements are lower than the actual system requirement. Since the task is hard, there is a high probability that the sSE may not be able to achieve a very high passed-down requirement. To lure the sSEs into participating, the SE needs to lower the passed-down requirements below the actual threshold. At first glance, this may look like the SE will not be able to gain a positive expected payoff. However, due to uncertainty in the final outcome, there is a significant probability that the sSE will produce a better-than-requested result.

Naturally, we observe that the expected SE payoff increases as the opportunity costs go down and decreases as the task difficulty increases. For easy tasks, the expected SE payoff decreases as the uncertainty increases (Figs. 5c and 5d). On the contrary, for hard tasks the expected SE payoff increases as a function of the uncertainty (Figs. 5a and 5b). As discussed in the previous paragraph, this makes sense for a hard task: an increase in uncertainty makes it more likely that the actual requirement is met.

V. CONCLUSION

We presented a principal-agent model of a one-shot, shallow systems engineering process. We assumed that all agents are risk-neutral and, thus, maximize their expected payoff. We modeled the quality function of subsystem engineers as a linear function of effort plus Gaussian noise. Using a spacecraft design case study, we demonstrated how the parameters of our model can be estimated from historical data. Finally, we posed the optimal mechanism design problem within the class of requirement-based incentives.

Our one-shot model of the systems engineering process challenges the intuitive belief that one should ask for higher requirements as the design task becomes more difficult. Our model predicted that for a hard task, the optimal passed-down requirement should be less than the actual requirement. The reason is that in this way the sSE is lured into participation, while the SE may still meet the requirement because the design outcome may actually be better than anticipated. Our result does not mean that this common belief is wrong. After all, ours is a very simple model, capturing only one iteration of the systems engineering process. This result may change if the quality function is not linear or if the design quality noise becomes skewed. Even if the modeling choices were spot on, the model is, at the present stage, too simple to be truly descriptive. There may be mechanisms beyond the ones included in our model that incentivize the SE to ask for a requirement higher than what our model predicts. One reason is that they may underestimate the difficulty of the task. Another reason is that they may want to hedge themselves against dishonest behavior of sSEs; e.g., the sSE may put a design in their back pocket for later use. Third, the systems engineering process may take place iteratively, and asking for a higher requirement may be an effective way to probe what is possible. From the perspective of the sSE, why would they accept to participate in a hard task with an exceedingly high requirement?
Of course, they may also underestimate the difficulty of the task. Alternatively, they may be offered a participation payment that is high enough that they do not care that it is impossible to meet the passed-down requirement. Finally, they may believe that they will be able to renegotiate the contract in the future, especially if the SE has a history of doing so. All these intricacies, and many more, are not captured by our model. They are topics of ongoing research towards a theoretical foundation of systems engineering design that accounts for human behavior.

ACKNOWLEDGMENT

This material is based upon work supported by the National Science Foundation under Grant No. 1728165.

(a) Hard (ai = 1.5) – low cost (ci = 0.01). σ = 0.05: ψ1 = (0.000, 0.007, ψ13), ψ2 = (0.000, 0.007, 0.945); σ = 0.10: ψ1 = (0.001, 0.007, ψ13), ψ2 = (0.001, 0.007, 0.953); σ = 0.20: ψ1 = (0.002, 0.007, ψ13), ψ2 = (0.002, 0.007, 0.968).
(b) Hard (ai = 1.5) – high cost (ci = 0.05). σ = 0.05: ψ1 = (0.004, 0.033, ψ13), ψ2 = (0.004, 0.033, 0.945); σ = 0.10: ψ1 = (0.007, 0.033, ψ13), ψ2 = (0.007, 0.033, 0.953); σ = 0.20: ψ1 = (0.012, 0.033, ψ13), ψ2 = (0.012, 0.033, 0.968).
(c) Easy (ai = 2.0) – low cost (ci = 0.01). σ = 0.05: ψ1 = (0.001, 0.005, ψ13), ψ2 = (0.001, 0.005, 1.087); σ = 0.10: ψ1 = (0.002, 0.005, ψ13), ψ2 = (0.002, 0.005, 1.205); σ = 0.20: ψ1 = (0.003, 0.005, ψ13), ψ2 = (0.003, 0.005, 1.221).
(d) Easy (ai = 2.0) – high cost (ci = 0.05). σ = 0.05: ψ1 = (0.005, 0.025, ψ13), ψ2 = (0.005, 0.025, 1.087); σ = 0.10: ψ1 = (0.009, 0.025, ψ13), ψ2 = (0.009, 0.025, 1.142); σ = 0.20: ψ1 = (0.014, 0.025, ψ13), ψ2 = (0.014, 0.025, 1.221).
Fig. 5: The SE's expected payoff vs. passed-down requirement (ψ13) for several types of sSE.

REFERENCES
[1] P. D. Collopy and P. M. Hollingsworth, "Value-Driven Design," Journal of Aircraft, vol. 48, no. 3, pp. 749–759, 2011. [Online]. Available: https://doi.org/10.2514/1.C000311
[2] P. Collopy, "Adverse Impact of Extensive Attribute Requirements on the Design of Complex Systems," in 7th AIAA ATIO Conf, 2nd CEIAT Int'l Conf on Innov and Integr in Aero Sciences, 17th LTA Systems Tech Conf; followed by 2nd TEOS Forum. American Institute of Aeronautics and Astronautics. DOI: 10.2514/6.2007-7820. [Online]. Available: https://arc.aiaa.org/doi/abs/10.2514/6.2007-7820
[3] S. D. Vermillion and R. J. Malak, "Using a Principal-Agent Model to Investigate Delegation in Systems Engineering," no. 57052, p. V01BT02A046, 2015. [Online]. Available: http://dx.doi.org/10.1115/DETC2015-47778
[4] J. Meluso and J. Austin-Breneman, "Gaming the System: An Agent-Based Model of Estimation Strategies and Their Effects on System Performance," p. V02AT03A050, Aug. 2017. [Online]. Available: http://dx.doi.org/10.1115/DETC2017-68202
[5] R. Ghanem, Stochastic Finite Elements: A Spectral Approach. New York: Springer-Verlag, 1991.
[6] R. P. Brent, Algorithms for Minimization Without Derivatives. Mineola, N.Y.: Dover Publications, Apr. 2013.
[7] E. Jones, T. Oliphant, P. Peterson, and others, SciPy: Open source scientific tools for Python, 2001. [Online]. Available: http://www.scipy.org/
[8] S. Lucidi and M. Sciandrone, "On the Global Convergence of Derivative-Free Methods for Unconstrained Optimization," SIAM Journal on Optimization, vol. 13, no. 1, pp. 97–116, Jan. 2002. [Online]. Available: http://epubs.siam.org/doi/abs/10.1137/S1052623497330392
[9] M. E. Tipping and C. M. Bishop, "Probabilistic Principal Component Analysis," Journal of the Royal Statistical Society: Series B (Statistical Methodology), vol. 61, no. 3, pp. 611–622, Jan. 1999. [Online]. Available: http://onlinelibrary.wiley.com/doi/10.1111/1467-9868.00196/abstract
[10] J. R. Wertz, D. F. Everett, and J. J. Puschell, Space Mission Engineering: The New SMAD. Hawthorne, CA: Microcosm Press, 2011.
[11] "A Historical Analysis (2000–2007) of Investment in Solar Energy Technologies | Renewable Energy Project Finance." [Online]. Available: https://financere.nrel.gov/finance/content/historical-analysis-investment-solar-energy-technologies-2000-2007
[12] S. Price, R. Margolis, G. Barbose, J. Bartlett, K. Cory, T. Couture, J. DeCesaro, P. Denholm, E. Drury, M. Frickel, C. Hemmeline, T. Mendelsohn, S. Ong, A. Pak, L. Poole, C. Peterman, P. Schwabe, A. Soni, B. Speer, R. Wiser, J. Zuboy, and T. James, "2008 solar technologies market report."
[13] "Solar Engineer Salaries." [Online]. Available: https://www.environmentalscience.org/career/solar-engineer
[14] "Solid." [Online]. Available: http://www.astronautix.com/s/solid.html
[15] "NASA Historical Data Books." [Online]. Available: https://history.nasa.gov/SP-4012/vol6/cover6.html
[16] "Propulsion Engineer Salaries." [Online]. Available: https://www.paysa.com/salaries/space-exploration-technologies--dragon-propulsion-engineer--hawthorne,-ca
ai_researcher
2
Frame_Representation_Hypothesis_Multi-Token_LLM_Interpretability_and_Concept-Guided_Text_Generation.pdf
Improved Speech Representations with Multi-Target Autoregressive Predictive Coding

Yu-An Chung, James Glass
Computer Science and Artificial Intelligence Laboratory
Massachusetts Institute of Technology
Cambridge, MA 02139, USA
{andyyuan,glass}@mit.edu

Abstract
Training objectives based on predictive coding have recently been shown to be very effective at learning meaningful representations from unlabeled speech. One example is Autoregressive Predictive Coding (Chung et al., 2019), which trains an autoregressive RNN to generate an unseen future frame given a context such as recent past frames. The basic hypothesis of these approaches is that hidden states that can accurately predict future frames are a useful representation for many downstream tasks. In this paper we extend this hypothesis and aim to enrich the information encoded in the hidden states by training the model to make more accurate future predictions. We propose an auxiliary objective that serves as a regularization to improve generalization of the future frame prediction task. Experimental results on phonetic classification, speech recognition, and speech translation not only support the hypothesis, but also demonstrate the effectiveness of our approach in learning representations that contain richer phonetic content.

1 Introduction
Unsupervised speech representation learning, which aims to learn a function that transforms surface features, such as audio waveforms or spectrograms, to higher-level representations using only unlabeled speech, has received great attention recently (Baevski et al., 2019, 2020; Liu et al., 2020; Song et al., 2019; Jiang et al., 2019; Schneider et al., 2019; Chorowski et al., 2019; Pascual et al., 2019; Oord et al., 2018; Kamper, 2019; Chen et al., 2018; Chung and Glass, 2018; Chung et al., 2018; Milde and Biemann, 2018; Chung et al., 2016; Hsu et al., 2017). A large portion of these approaches leverage self-supervised training, where the learning target is generated from the input itself, and thus a model can be trained in a supervised manner.

Chung et al. (2019) propose a method called Autoregressive Predictive Coding (APC), which trains an RNN to predict a future frame that is n steps ahead of the current position given a context such as the past frames. The training target can be easily generated by right-shifting the input by n steps. Their intuition is that the model is required to produce a good summarization of the past and encode such knowledge in the hidden states so as to accomplish the objective. After training, the RNN hidden states are taken as the learned representations, and are shown to contain speech information such as phonetic and speaker content that are useful in a variety of speech tasks (Chung and Glass, 2020).

Following their intuition, in this work we aim to improve the generalization of the future frame prediction task by adding an auxiliary objective that serves as a regularization. We empirically demonstrate the effectiveness of our approach in making more accurate future predictions, and confirm that such improvement leads to a representation that contains richer phonetic content.

The rest of the paper is organized as follows. We start with a brief review of APC in Section 2. We then introduce our approach in Section 3. Experiments and analysis are presented in Section 4, followed by our conclusions in Section 5.
2 Autoregressive Predictive Coding
Given a context of a speech signal represented as a sequence of acoustic feature vectors (x1, x2, . . . , xt), the objective of Autoregressive Predictive Coding (APC) is to use the context to infer a future frame xt+n that is n steps ahead of xt. Let x = (x1, x2, . . . , xN) denote a full utterance, where N is the sequence length; APC incorporates an RNN to process each frame xt sequentially and update its hidden state ht accordingly. For t = 1, . . . , N − n, the RNN produces an output yt = W · ht, where W is an affinity matrix that maps ht back to the dimensionality of xt. The model is trained by minimizing the frame-wise L1 loss between the predicted sequence (y1, y2, . . . , yN−n) and the target sequence (x1+n, x2+n, . . . , xN):

Lf(x) = Σ_{t=1}^{N−n} |xt+n − yt|.  (1)

When n = 1, one can view APC as an acoustic version of a neural LM (NLM) (Mikolov et al., 2010) by thinking of each acoustic frame as a token embedding, as they both use a recurrent encoder and aim to predict information about the future. A major difference between NLM and APC is that NLM infers tokens from a closed set, while APC predicts frames of real values.

Once an APC model is trained, given an utterance (x1, x2, . . . , xN), we follow Chung et al. (2019) and take the output of the last RNN layer (h1, h2, . . . , hN) as its extracted features.

Figure 1: Overview of our method. Lf is the original APC objective that aims to predict xt+n given a context (x1, x2, . . . , xt) with an autoregressive RNN. Our method first samples an anchor position, assuming it is time step t. Next, we build an auxiliary loss Lr that computes Lf of a past sequence (xt−s, xt−s+1, . . . , xt−s+ℓ−1) (see Section 3.1 for definitions of s and ℓ), using an auxiliary RNN (dotted line area). In this example, (n, s, ℓ) = (1, 4, 3). In practice, we can sample multiple anchor positions, and averaging over all of them gives us the final Lr.

3 Proposed Methodology
Our goal is to make APC's prediction of xt+n given ht more accurate. In Section 4 we will show this leads to a representation that contains richer phonetic content.

3.1 Remembering more from the past
An overview of our method is depicted in Figure 1. We propose an auxiliary loss Lr to improve the generalization of the main objective Lf (Equation 1). The idea of Lr is to refresh the current hidden state ht with the knowledge learned in the past. At time step t, we first sample a past sequence pt = (xt−s, xt−s+1, . . . , xt−s+ℓ−1), where s is how far the start of this sequence is from t and ℓ is the length of pt. We then employ an auxiliary RNN, denoted as RNNaux, to perform predictive coding as defined in Equation 1, conditioning on ht. Specifically, we initialize the hidden state of RNNaux with ht, and optimize it along with the corresponding Waux using Lf(pt), which equals Σ_{t′=t−s}^{t−s+ℓ−1} |xt′+n − yt′|. Such a process reminds ht of what has been learned in ht−s, ht−s+1, . . . , ht−s+ℓ−1.

For a training utterance x = (x1, x2, . . . , xN), we select each frame with probability P as an anchor position. Assume we end up with M anchor positions: a1, a2, . . . , aM. Each am defines a sequence pam = (xam−s, xam−s+1, . . . , xam−s+ℓ−1) before xam, which we use to compute Lf(pam). Averaging over all anchor positions gives the final auxiliary loss Lr:

Lr(x) = (1/M) Σ_{m=1}^{M} Lf(pam).  (2)
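A minimal PyTorch sketch of the combined objective is given below. It compresses the paper's setup (3-layer residual GRUs, per-utterance anchor re-sampling) into single-layer GRUs, and all shape and naming choices are illustrative.

```python
# Sketch (PyTorch): the APC objective of Eq. (1) plus the auxiliary
# past-reconstruction loss of Eq. (2), combined as in Eq. (3).
import torch
import torch.nn as nn

class MultiTargetAPC(nn.Module):
    def __init__(self, feat_dim=80, hidden=512):
        super().__init__()
        self.rnn = nn.GRU(feat_dim, hidden, batch_first=True)
        self.W = nn.Linear(hidden, feat_dim)       # maps h_t back to x-space
        self.rnn_aux = nn.GRU(feat_dim, hidden, batch_first=True)
        self.W_aux = nn.Linear(hidden, feat_dim)

    def forward(self, x, n=5, s=7, l=3, lam=0.1, p_anchor=0.15):
        # x: (B, N, feat_dim); assumes l <= s so all index ranges are valid.
        B, N, _ = x.shape
        h, _ = self.rnn(x)                          # (B, N, hidden)
        # Main loss L_f (Eq. 1): L1 between W*h_t and x_{t+n}.
        loss_f = (x[:, n:] - self.W(h[:, :-n])).abs().mean()
        # Auxiliary loss L_r (Eq. 2): for sampled anchors t, re-run predictive
        # coding on the past sequence, with RNN_aux initialized from h_t.
        losses_r = []
        anchors = [t for t in range(s + l, N - n) if torch.rand(1) < p_anchor]
        for t in anchors:
            past = x[:, t - s : t - s + l]          # (B, l, feat_dim)
            h0 = h[:, t].unsqueeze(0).contiguous()  # init hidden state with h_t
            h_aux, _ = self.rnn_aux(past, h0)
            target = x[:, t - s + n : t - s + l + n]
            losses_r.append((target - self.W_aux(h_aux)).abs().mean())
        loss_r = torch.stack(losses_r).mean() if losses_r else x.new_zeros(())
        return loss_f + lam * loss_r                # Eq. (3)
```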
The final APC objective combines Equations 1 and 2 with a balancing coefficient λ:

Lm(x) = Lf(x) + λ Lr(x).  (3)

We re-sample the anchor positions for each x during each training iteration, while they all share the same RNNaux and Waux.

4 Experiments
We demonstrate the effectiveness of Lr in helping optimize Lf, and investigate how the improvement is reflected in the learned representations.

(a) Lr (auxiliary objective, Equation 2); (b) Lf (main objective, Equation 1).
Figure 2: Validation loss of Lr (left) and Lf (right) on LibriSpeech dev-clean when training APC using different (n, s, ℓ) combinations. Each bar of the same color represents one (s, ℓ) combination. We use (−, −) to denote an APC optimized only with Lf. Bars are grouped by their n's, with different (s, ℓ) combinations within each group.

4.1 Setup
We follow Chung et al. (2019) and use the audio portion of the LibriSpeech (Panayotov et al., 2015) train-clean-360 subset, which contains 360 hours of read speech produced by 921 speakers, for training APC. The input features are 80-dimensional log Mel spectrograms, i.e., xt ∈ R80. Both RNN and RNNaux are 3-layer, 512-dim unidirectional GRU (Cho et al., 2014) networks with residual connections between two consecutive layers (Wu et al., 2016). Therefore, W, Waux ∈ R512×80. λ is set to 0.1 and the sampling probability P is set to 0.15; that is, each frame has a 15% chance of being selected as an anchor position. P and λ are selected based on the validation loss of Lf on a small data split. All models are trained for 100 epochs using Adam (Kingma and Ba, 2015) with a batch size of 32 and a learning rate of 10−3.

4.2 Effect of Lr
We first validate whether augmenting Lr improves Lf. As a recap, n is the number of time steps ahead of the current position t in Lf, and s and ℓ denote the start and length, respectively, of a past sequence before t used to build Lr. We consider (n, s, ℓ) ∈ {1, 3, 5, 7, 9} × {7, 14, 20} × {3, 7}. Note that each phone has an average duration of about 7 frames. Figures 2a and 2b present Lr (before multiplying by λ) and Lf of the considered APC variants on the LibriSpeech dev-clean subset, respectively.

We start by analyzing Figure 2a. Note that Lr does not exist for (−, −) and is set to 0 in the figure. We see that under the same n, the performance of Lr is mainly decided by how far (s) the past sequence is from the current position rather than the length (ℓ) to generate: when we keep ℓ fixed and increase s from 7 (red) and 14 (green) to 20 (blue), we observe the loss surge as well.

From Figure 2b, we have the following findings. For a small n, the improvement in Lf brought by Lr is minor. By comparing (−, −) with the other bars, we see that when n ≤ 3, which is smaller than half of the average phone duration (7 frames), adding Lr does not lower Lf by much. We speculate that when n ≤ 3, the xt+n to be inferred is usually within the same phone as xt, making the task not challenging enough to force the model to leverage more past information. Lr becomes useful when n gets larger. We see that when n is close to or exceeds the average phone duration (n ≥ 5), an evident reduction in Lf after adding Lr is observed, which validates the effectiveness of Lr in assisting with the optimization of Lf.
When n = 9, the improvement is not as large as when n = 5 or 7. One possible explanation is that xt+9 has become almost independent from the previous context ht and is hence less predictable. By observing the validation loss, we have shown that Lr indeed helps generalize Lf.

4.3 Learned representation analysis
Next, we want to examine whether an improvement in Lf leads to a representation that encodes more useful information. Speech signals encompass a rich set of acoustic and linguistic properties. Here we will only focus on analyzing the phonetic content contained in a representation, and leave other properties such as speaker for future work.

Table 1: Phonetic classification results using different types of features as input to a linear logistic regression classifier. The classifier aims to correctly classify each frame into one of the 48 phone categories. Frame error rates (↓) are reported. Given a time shift w ∈ {0, ±5, ±10, ±15}, the classifier is asked to predict the phone identity of xt+w given xt.

Feature / Time shift   −15    −10    −5     0      +5     +10    +15
log Mel                83.3   80.3   67.6   49.9   65.5   77.9   82.7
APC trained with Lf (Equation 1)
  n = 1                56.1   45.8   36.1   33.7   56.5   73.7   81.6
  n = 3                50.8   41.8   34.8   33.4   56.0   73.5   81.1
  n = 5                48.7   38.2   32.5   31.9   54.8   73.0   80.5
  n = 7                44.6   38.6   32.9   32.1   56.3   73.8   80.4
  n = 9                51.0   41.8   35.7   36.9   58.4   74.6   81.0
APC trained with Lm (Equation 3)
  n = 1                50.6   42.2   35.1   33.1   54.4   73.4   81.4
  n = 3                46.4   38.0   34.1   32.4   54.1   71.4   80.5
  n = 5                41.8   35.1   29.8   28.1   49.6   64.6   76.8
  n = 7                39.8   33.8   28.7   27.8   46.8   60.6   74.4
  n = 9                42.3   35.3   30.3   29.7   50.0   63.3   76.6

We use phonetic classification on TIMIT (Garofolo et al., 1993) as the probing task to analyze the learned representations. The corpus contains 3696, 400, and 192 utterances in the train, validation, and test sets, respectively. For each n ∈ {1, 3, 5, 7, 9}, we pick the (s, ℓ) combination that has the lowest validation loss. As described in Section 2, we take the output of the last RNN layer as the extracted features, and provide them to a linear logistic regression classifier that aims to correctly classify each frame into one of the 48 phone categories. During evaluation, we follow the protocol of (Lee and Hon, 1989) and collapse the predictions to 39 categories. We report the frame error rate (FER) on the test set, which indicates how much phonetic content is contained in the representations. We also conduct experiments on the task of predicting xt−w and xt+w given xt for w ∈ {5, 10, 15}. This examines how contextualized ht is, that is, how much information about the past and future is encoded in the current feature ht. We simply shift the labels in the dataset by {±5, ±10, ±15} and retrain the classifier. We keep the pre-trained APC RNN fixed for all runs. Results are shown in Table 1.

We emphasize that our hyperparameters are chosen based on Lf and are never selected based on their performance on any downstream task, including phonetic classification, speech recognition, and speech translation, to be presented next. Tuning hyperparameters towards a downstream task defeats the purpose of unsupervised learning.

Phonetic classification. We first study the standard phonetic classification results, shown in the column where the time shift is 0. We see that APC features, regardless of the objective (Lf or Lm), achieve lower FER than log Mel features, showing that the phonetic information contained in the surface features has been transformed into a more accessible form (defined as how linearly separable they are).
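A minimal sketch of this probing protocol is shown below. The features and labels are random stand-ins for the frozen APC features and TIMIT phone labels, and the train/test handling is simplified relative to the standard TIMIT splits.

```python
# Sketch: frame-level linear probe for phonetic classification, with a
# configurable time shift w. Features/labels below are placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

def probe_frame_error_rate(feats, labels, w=0, split=0.8):
    """Train a linear classifier to predict the phone at t+w from the
    feature at t; return the frame error rate (FER)."""
    if w > 0:
        X, y = feats[:-w], labels[w:]      # predict a future phone
    elif w < 0:
        X, y = feats[-w:], labels[:w]      # predict a past phone
    else:
        X, y = feats, labels
    k = int(split * len(X))
    clf = LogisticRegression(max_iter=1000).fit(X[:k], y[:k])
    return 1.0 - clf.score(X[k:], y[k:])

rng = np.random.default_rng(0)
feats = rng.normal(size=(5000, 512))       # stand-in for APC features h_t
labels = rng.integers(0, 48, size=5000)    # stand-in for 48 phone classes
for w in (-15, -10, -5, 0, 5, 10, 15):
    print(f"shift {w:+d}: FER = {probe_frame_error_rate(feats, labels, w):.3f}")
```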
Additionally, we see that APC features learned by Lm outperform those learned by Lf across all n. For n ≥ 5, where there is a noticeable improvement in future prediction after adding Lr, as shown in Figure 2b, the improvement in phonetic classification is also larger than when n ≤ 3. Such an outcome suggests that APC models that are better at predicting the future do learn representations that contain richer phonetic content. It is also interesting that when using Lf, the best result occurs at n = 5 (31.9), while with Lm it is n = 7 that achieves the lowest FER (27.8).

Predicting the past or future. We see that it is harder to predict the nearby phone identities from a log Mel frame, and the FER gets higher further away from the center frame. An APC feature ht contains more information about its past than its future. The result matches our intuition, as the RNN generates ht conditioning on hi for i < t, and thus their information is naturally encoded in ht. Furthermore, we observe a consistent improvement in both directions by changing Lf to Lm across all n and time shifts. This confirms the use of Lr, which requires the current hidden state ht to recall what has been learned in previous hidden states, so that more information about the past is encoded in ht. The improvement also suggests that an RNN can forget past information when training only with Lf, and that adding Lr alleviates this problem.

4.4 Speech recognition and translation
The above phonetic classification experiments are meant for analyzing the phonetic properties of a representation. Finally, we apply the representations learned by Lm to automatic speech recognition (ASR) and speech translation (ST) and show their superiority over those learned by Lf. We follow the exact setup in Chung and Glass (2020). For ASR, we use the Wall Street Journal corpus (Paul and Baker, 1992), use si284 for training, and report the word error rate (WER) on dev93. For ST, we use the LibriSpeech En-Fr corpus (Kocabiyikoglu et al., 2018), which aims to translate English speech to French text, and report the BLEU score (Papineni et al., 2002). For both tasks, the downstream model is an end-to-end, sequence-to-sequence RNN with attention (Chorowski et al., 2015). We compare different input features to the same model. Results, shown in Table 2, demonstrate that the improvement in predictive coding brought by Lr not only provides representations that contain richer phonetic content, but is also useful in real-world speech applications.¹

Table 2: Automatic speech recognition (ASR) and speech translation (ST) results using different types of features as input to a seq2seq with attention model. Word error rates (WER, ↓) and BLEU scores (↑) are reported for the two tasks, respectively.

Feature      ASR (WER ↓)   ST (BLEU ↑)
log Mel      18.3          12.9
APC w/ Lf    15.2          13.8
APC w/ Lm    14.2          14.5

¹According to Chung and Glass (2020), when using a Transformer architecture (Vaswani et al., 2017; Liu et al., 2018) as the autoregressive model, representations learned with Lf can achieve a WER of 13.7 on ASR and a BLEU score of 14.3 on ST.

5 Conclusions
We improve the generalization of Autoregressive Predictive Coding by multi-target training of future prediction Lf and past memory reconstruction Lr, where the latter serves as a regularization. Through phonetic classification, we find that the representations learned with our approach contain richer phonetic content than the original representations, and achieve better performance on speech recognition and speech translation.

References
Alexei Baevski, Steffen Schneider, and Michael Auli. 2020. vq-wav2vec: Self-supervised learning of dis- crete speech representations. In ICLR. Yi-Chen Chen, Sung-Feng Huang, Chia-Hao Shen, Hung-Yi Lee, and Lin-Shan Lee. 2018. Phonetic- and-semantic embedding of spoken words with ap- plications in spoken content retrieval. In SLT. Kyunghyun Cho, Bart van Merrienboer, Dzmitry Bah- danau, and Yoshua Bengio. 2014. On the properties of neural machine translation: Encoder-decoder ap- In Workshop on Syntax, Semantics and proaches. Structure in Statistical Translation. Jan Chorowski, Dzmitry Bahdanau, Dmitriy Serdyuk, and Yoshua Bengio. 2015. In Kyunghyun Cho, Attention-based models for speech recognition. NIPS. Jan Chorowski, Ron Weiss, Samy Bengio, and A¨aron van den Oord. 2019. Unsupervised speech rep- resentation learning using wavenet autoencoders. IEEE/ACM Transactions on Audio, Speech, and Lan- guage Processing, 27(12):2041–2053. Yu-An Chung and James Glass. 2018. Speech2vec: A sequence-to-sequence framework for learning word embeddings from speech. In Interspeech. Yu-An Chung and James Glass. 2020. Generative pre- training for speech with autoregressive predictive coding. In ICASSP. Yu-An Chung, Wei-Ning Hsu, Hao Tang, and James Glass. 2019. An unsupervised autoregressive model for speech representation learning. In Interspeech. Yu-An Chung, Wei-Hung Weng, Schrasing Tong, and James Glass. 2018. Unsupervised cross-modal alignment of speech and text embedding spaces. In NeurIPS. 1According to Chung and Glass (2020), when using a Transformer architecture (Vaswani et al., 2017; Liu et al., 2018) as the autoregressive model, representations learned with Lf can achieve a WER of 13.7 on ASR and a BLEU score of 14.3 on ST. Yu-An Chung, Chao-Chung Wu, Chia-Hao Shen, Hung-Yi Lee, and Lin-Shan Lee. 2016. Audio word2vec: Unsupervised learning of audio segment representations using sequence-to-sequence autoen- coder. In Interspeech. Santiago Pascual, Mirco Ravanelli, Joan Serr`a, Anto- nio Bonafonte, and Yoshua Bengio. 2019. Learning problem-agnostic speech representations from mul- tiple self-supervised tasks. In Interspeech. Douglas Paul and Janet Baker. 1992. The design for the wall street journal-based CSR corpus. In Speech and Natural Language Workshop. Steffen Schneider, Alexei Baevski, Ronan Collobert, and Michael Auli. 2019. wav2vec: Unsupervised pre-training for speech recognition. In Interspeech. Xingchen Song, Guangsen Wang, Zhiyong Wu, Yi- heng Huang, Dan Su, et al. 2019. Speech- XLNet: Unsupervised acoustic model pretrain- arXiv preprint ing for self-attention networks. arXiv:1910.10387. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NIPS. Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc Le, Mohammad Norouzi, et al. 2016. Google’s neu- ral machine translation system: Bridging the gap arXiv between human and machine translation. preprint arXiv:1609.08144. John Garofolo, Lori Lamel, William Fisher, Jonathan Fiscus, David Pallett, and Nancy Dahlgren. 1993. DARPA TIMIT acoustic-phonetic continuous speech corpus. Technical Report NISTIR 4930, NIST. Wei-Ning Hsu, Yu Zhang, and James Glass. 2017. Unsupervised learning of disentangled and inter- pretable representations from sequential data. In NIPS. Dongwei Jiang, Xiaoning Lei, Wubo Li, Ne Luo, Yux- uan Hu, et al. 2019. Improving Transformer-based speech recognition using unsupervised pre-training. arXiv preprint arXiv:1910.09932. 
Herman Kamper. 2019. Truly unsupervised acoustic word embeddings using weak top-down constraints in encoder-decoder models. In ICASSP. Diederik Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In ICLR. Ali Kocabiyikoglu, Laurent Besacier, and Olivier Kraif. 2018. Augmenting LibriSpeech with French transla- tions: A multimodal corpus for direct speech trans- lation evaluation. In LREC. Kai-Fu Lee and Hsiao-Wuen Hon. 1989. Speaker- independent phone recognition using hidden markov IEEE Transactions on Acoustics, Speech, models. and Signal Processing, 37(11):1641–1648. Andy Liu, Shu-Wen Yang, Po-Han Chi, Po-Chun Hsu, and Hung-Yi Lee. 2020. Mockingjay: Unsuper- vised speech representation learning with deep bidi- rectional Transformer encoders. In ICASSP. Peter Liu, Mohammad Saleh, Etienne Pot, Ben Goodrich, Ryan Sepassi, Lukasz Kaiser, and Noam Shazeer. 2018. Generating wikipedia by summariz- ing long sequences. In ICLR. Tom´aˇs Mikolov, Martin Karafi´at, Luk´aˇs Burget, Jan ˇCernock`y, and Sanjeev Khudanpur. 2010. Recur- In In- rent neural network based language model. terspeech. Benjamin Milde and Chris Biemann. 2018. Unspeech: Unsupervised speech context embeddings. In Inter- speech. Aaron van den Oord, Yazhe Li, and Oriol Vinyals. 2018. Representation learning with contrastive pre- dictive coding. arXiv preprint arXiv:1807.03748. Vassil Panayotov, Guoguo Chen, Daniel Povey, and Sanjeev Khudanpur. 2015. Librispeech: An ASR corpus based on public domain audio books. In ICASSP. Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. BLEU: A method for automatic evaluation of machine translation. In ACL.
ai_researcher
2
NewsInterview_a_Dataset_and_a_Playground_to_Evaluate_LLMs'_Ground_Gap_via_Informational_Interviews.pdf
NewsInterview: a Dataset and a Playground to Evaluate LLMs' Grounding Gap via Informational Interviews

Michael Lu1*, Sriya Kalyan1, Hyundong Cho2, Weiyan Shi3, Jonathan May2, and Alexander Spangher2*
1University of California, Berkeley
2University of Southern California
3Northeastern University
[email protected]

Abstract
Large Language Models (LLMs) have demonstrated impressive capabilities in generating coherent text but often struggle with grounding language and strategic dialogue. To address this gap, we focus on journalistic interviews, a domain rich in grounding communication and abundant in data. We curate a dataset of 40,000 two-person informational interviews from NPR and CNN, and reveal that LLMs are significantly less likely than human interviewers to use acknowledgements and to pivot to higher-level questions. Realizing that a fundamental deficit exists in multi-turn planning and strategic thinking, we develop a realistic simulated environment, incorporating source personas and persuasive elements, in order to facilitate the development of agents with longer-horizon rewards. Our experiments show that while source LLMs mimic human behavior in information sharing, interviewer LLMs struggle with recognizing when questions are answered and engaging persuasively, leading to suboptimal information extraction across model size and capability. These findings underscore the need for enhancing LLMs' strategic dialogue capabilities.

1 Introduction
Recent research has shown that LLMs struggle to engage in emotional (Shaikh et al., 2024) or strategic (Wongkamjan et al., 2024) dialogue. For example, Shaikh et al. (2024) examined LLM-generated responses to dialogues and found fewer occurrences of "grounding language" (Clark, 1996), like acknowledgements or affirmations, that humans typically use to foster comfort and trust. This can impede an LLM's ability to serve in a variety of situations: e.g., education (Kasneci et al., 2023), mental health (Carlbring et al., 2023), or conflict resolution (Argyle et al., 2023). However, prior efforts to ameliorate such gaps face limitations: existing large datasets (1k–10k transcripts) are generated via crowdsourcing and are inherently unnatural (Rashkin et al., 2019; Wang et al., 2019; Liu et al., 2021). More natural transcripts, of educational (Caines et al., 2020) or therapeutic environments (Gratch et al., 2014), are difficult to collect due to privacy concerns (Casey, 2004) and are small-scale (100–1k transcripts).

In this work, we directly address these limitations by focusing on an area where grounding communication is required but plentiful data exist: journalistic interviews. Journalistic, or informational, interviews are typically conducted between an "interviewer" and a "source", and the goal is to obtain information. Sources are often anxious or unclear (Harcup, 2015), and human interviewers are constantly evaluating: (1) Are my questions getting fully addressed? (2) Do I need to more effectively engage or persuade a source (Sedorkin, 2015)? This makes news interviews an ideal setting to observe grounding, strategic, and persuasive dialogues.

*Co-first author contributions: Michael Lu did much of the implementation for discourse schemes and game design, as well as ideating the game idea. Alexander Spangher was the primary writer; he was the advisor for Michael and Sriya, and he performed the analysis in Section 4.2.
To study how LLMs perform in journalistic contexts, we start by collecting interview transcripts from two major news sources: National Public Radio (NPR) and Cable News Network (CNN), filtering to over 40,000 dyadic informational interviews. Next, we show that LLMs in news interview settings suffer from the same lack of grounding as in other dialogue settings (Shaikh et al., 2024). We find that significant discourse differences exist in the kinds of questions asked by LLMs and human interviewers: for example, LLMs are 50% less likely to make acknowledgements, and 30% less likely to pivot back to higher-level questions, than humans.

Next, we turn to a more fundamental question: why grounding? According to Cialdini (2009), grounding exists for a purpose: to influence an outcome (e.g., in therapeutic environments, the grounded patient is more open and makes more progress (Bohart, 1999); in educational environments, the comfortable student is more engaged and learns more (Brown, 1994); these are all usually borne out over many conversational turns). Observing the effects of grounding language in terms of objective outcomes might be a more effective way to reason about, and ultimately train, empathetic agents and improve long-horizon strategic dialogue. Motivated by these observations, we develop a realistic game environment to serve as a playground: in this simulation, LLMs play the role of the interviewer and the source. The goal for the interviewer is to obtain the maximal amount of information from the source in a limited number of questions.

In order to induce the need for grounding communication, we design different personas for sources (e.g., anxious, clueless, dominating), each with different communication patterns. We also add a responsiveness to strategic dialogue: sources will only return information if they are persuaded in a manner befitting their personas (Harcup, 2015; Sedorkin, 2015). We find that our environment is realistic: source-LLMs correlate significantly with humans in their ability to identify persuasion (r = .43, p < .0001). However, interviewer-LLMs struggle to both recognize when questions are answered and actively persuade the source, resulting in suboptimal information extraction.

In summary, our contributions are:
• We release a high-quality dataset of 40,000 two-person informational interviews from NPR and CNN. This dataset addresses the scarcity of large-scale dialogue data necessary for studying grounding communication.
• We perform a detailed discourse analysis comparing LLM-generated dialogues with human interviewers, identifying significant differences in the use of grounding language and question types.
• We develop a game environment to test and improve dialogue agents in informational interviews, which we call NewsInterview. Our findings indicate this is a realistic setting but highlight the challenges LLM interviewers face in engaging in persuasive dialogue.

We release our code and dataset at https://github.com/alex2awesome/news-interview-question-generation.

(a) % of Discourse types throughout human interviews. Human journalists use different discourse roles across the interview, including gradually more Acknowledging statements, increasing from 5% at the start to over 20% by the end.
(b) % of Discourse types of LLM responses in interviews. LLMs display an increasing likelihood of asking opinion or broadening questions over the course of an interview and a lower likelihood of returning to outline-level questions.
Figure 1: Comparison of discourse types across interviews (turn 1, usually a greeting, is excluded). The LLM is shown the first k − 1 turns of a human interview and asked to generate the next question.
2 Dataset Processing
In this section, we describe the collection and processing of our dataset.

2.1 Data Collection
We aggregate, clean, and condense multiple publicly available datasets of interview transcripts from National Public Radio (NPR) and Cable News Network (CNN) in order to build a high-quality interview dataset of 45k transcripts. These transcripts are published records of live interviews conducted between a journalist and sources invited on the program. They provide a rich resource for analyzing natural language interactions in informational interviews.

We understand that "being persuaded", "being made comfortable" and "being acknowledged" are all separate forms of grounding, some more active than others. However, we use "persuasion" as a short-hand encompassing all categories.

2.2 Data Filtering for Interview Analysis
We want to focus on one-on-one informational interviews between a journalist and a single source.
https://huggingface.co/meta-llama/ Meta-Llama-3.1-70B-Instruct (Touvron et al., 2023) using the vLLM framework (Kwon et al., 2023) Manual validation on 50 interviews showed this method correctly identified roles in > 98% of cases. We include full prompt examples for all three variations in Appendix C.5. All question-generation experiments are 3.2 Evaluating LLM Counterfactuals To analyze how similar LLM questions, gt are to human questions, ht, we perform two analyses: Consistency Analysis: We aim to assess how similar gt is to ht across different comparison cat- egories Saha et al. (2024), specifically: Informa- tional consistency (i.e. gt and ht seek similar in- formational objectives); Motivational, (i.e. similar outcomes); Style, (i.e. similar tone); Contextual consistency (i.e. similar appropriateness given the context); Discourse consistency (i.e. similar pur- poses in the overall conversation). Putting these together, we assess an Exact match. We ask an LLM, GPT-4o, to perform this assessment and man- ually inspect its outputs and reasoning threads. Discourse Analysis: We aim to assess whether gt plays a similar function as ht does. We de- velop a schema to describe the role of each ques- tion. This schema includes the following elements: Follow-up Question (e.g. “Can you tell us more?”), Outline-Level Question (e.g. “Moving on, can we discuss the next event?”), Acknowledgement Statement (e.g., “I see, that sounds scary.”), Opin- ion/Speculation (e.g. “What do you think will hap- pen?”), Broadening Question (e.g. “How does this fit into the broader trend?”), Verification Question (e.g. “ So to confirm...”) and Challenge Question (e.g. “These dates don’t line up.”). See Table 5 in the Appendix for definitions of each discourse role. 3.3 Findings Our analysis yielded several key findings. Insight #1: Acknowledgement statements are virtually absent from all LLM variations. As shown in Figure 2, grounding gaps exist in jour- nalistic interviewing similar to those observed by (Shaikh et al., 2024). While human journalistic interviewers tend to make Acknowledgement state- ments in about 9% of their utterances, all prompt- ing variations that we experimented with made conducted using Llama-3.1-70b. To generate our discourse schema, we asked two journal- ists to analyze 50 interview transcripts. One had 8 years of experience in newsrooms, the other was an undergraduate stu- dent studying journalism. We held three conferencing sessions to develop the schema. Then, we blindly annotated 10 inter- views, achieving a κ = .6. Given our schema, we then asked an LLM to classify discourse roles in sentences. The prompt contains the interview context, (q1, a1)...(qt−1, at−1), and current question qt. To validate the LLM’s labeling accuracy, we had the professional journalist label 10 additional inter- views as ground-truth and scored the LLM’s assignments. The LLM scored a .8 f1 score. 3 Exact Match Info. Motivation Style Discourse Context Baseline-LLM Chain-of-Thought (CoT) LLM w. Outline Outline-CoT 3.9% 4.5% 3.7% 3.6% 4.4% 3.6% 3.8% 3.9% 4.7% 11.9% 5.2% 12.8% 9.6% 4.1% 8.3% 4.3% Human 8.2% 17.5% 35.4% 40.2% 36.2% 37.0% 36.2% 29.9% 54.5% 53.0% 56.9% 46.6% 43.1% 60.3% Table 1: Alignment of LLM-Generated Questions with Original Interview questions. We give an LLM, Llama-3.1-70b, the prior k − 1 turns in an interview and prompt it to ask the next question. 
We measure the percentage of times this question aligns to a question asked by a human at the same point in the interview across six dimensions: Exact Match (nearly exactly the same as the original utterance), Information (relevant factual content), Motivation (same motivation as the original question), Style (alignment with tone and phrasing), Discourse (structural role within the interview), and Context (incorporation of contextual knowledge). The prompting strategies compared are Baseline-LLM, Chain-of-Thought (CoT), LLM with an Outline, and Outline-CoT; and, we conduct a human baseline trial with a former professional journalist. Figure 2: Distribution of Discourse Roles in Questions, Across Different Prompting Strategies. We compare the proportions of discourse roles of questions (e.g. “Follow-up”, “Acknowledgement”, etc.) generated by (a) human journalists, (b) Baseline-LLM (Llama-3.1-70b) (c) LLM prompted with an Outline and (d) with Chain-of-Thought (CoT). Acknowledgement statements, which often build empathy, are significantly underrepresented in all LLM prompting approaches, compared to human-generated questions (see appendix for Outline-CoT). close to zero acknowledging statements. This lack of acknowledgement is paired with not mirroring the source’s speaking style; human journalists, as shown in Appendix C.6, bring character and voice. Insight #2: LLMs do not engage in strategic multi-turn questioning . Even in settings where LLMs are exposed to interview outlines, they are still undirected in their questions. As shown in Figure 2, LLMs are significantly more likely to ask follow-up questions than humans across all prompting variations. Introducing chain-of-thought and outline variations increases the rate at which the LLM asks outline-level questions. However, the rate remains significantly below human levels. Additionally, they are also more likely to ask either Opinion questions or Broadening questions. In fact, in Figure 1b, we observe that LLMs tend to ask increasing amounts of Opinion Questions and Broadening Questions over time, which humans do not. As can be seen in Table 8, these questions can be vague and open-ended and reflect a lack of planning. Together, these findings suggest an inability to direct an interview in a desired direction and engage in multi-turn planning. Insight #3: LLMs are capable of understanding context, but fail in all other categories of similar- ity to humans. Comparing the content and style of LLM interviews to human interviews in Table 1, we note that, overall, LLMs are broadly dissimilar to humans in style, motivation and information- seeking. One area where the LLMs succeed, rela- tively, is understanding the context of the interview beforehand. This is not a new observation – much recent work, e.g. in dialogue-tracking, has found LLMs to perform well (Ou et al., 2024). The fact that LLMs can preserve context over multiple turns and do not drift away from the topic indicates that models might one day be able to engage in multi- turn goal-oriented dialogue, given the right reward signals and learning environment. Taken together, these findings suggest that jour- nalistic dialogue is suitable for studying effective communication patterns, and also highlight signifi- cant gaps in current language modeling objectives. 4 Figure 3: Walkthrough of the LLM Interviewer-Agent Process. 
To set up the interview, the interviewer agent is given a set of high-level objectives, similar to a journalist’s pre-interview notes, while the source is given a persona and a set of relevant facts. In each turn, the interviewer asks a question (Step 1). The source determines what information to reveal based on relevance and comfort level (Step 2a). Depending on the source’s comfort, a subset of relevant information is randomly selected for the response (Step 2b). The source then crafts a reply aligned with their persona (Step 2c). The reward given to the interview agent, at the end of k turns, is the number of information items extracted from the source. While LLMs can generate contextually relevant questions, they lack both an emotional and connec- tive drive as well as the strategic planning exhibited by human interviewers. 4 NewsInterview: An Interview Game As shown, LLM counterfactual questions exhibit several shortcomings: they are less likely to ac- knowledge the interviewee and focus excessively on follow-up questions. However, questions re- main: do they lack strategic multi-turn planning? In human dialogue, grounding exists ultimately strategic purposes (Cialdini, 2009). Our goal for the remainder of the paper is to set up a realis- tic simulated game-environment with a delayed reward signal to test this. 4.1 Game Design Overview We first introduce our game on a high level, explain our game design, as illustrated in Figure 3, and then describe our implementation. We draw on two journalism textbooks: Interviewing: A Guide for Journalists and Writers, which explains how to conduct effective interviews and speak to reluctant, defensive, or poor-explaining sources (Sedorkin, 2015); and Journalism: Principles and Practice, which describes how to build source trust (Harcup, 2015). gorithm 1. The “player” in our game plays the role of an interviewer and is able to ask questions to a source, based on the conversational history and the interview objectives (the Interviewer() step). The source is given a set of informational items, and assesses whether any of these items are relevant to the question (the getRelevantInfo() step); they then decide how persuaded or comfort- able they are based on the conversational history (the getPersuasionLevel() step); based on this, we determine the subset of relevant items they re- turn (the getItemsToReturn()). They respond with these items. The reward, obtained at the end of the game, is that amount of information items the source disclosed. 4.2 Gameplay Design We first start by describing our data processing, and then we will describe the functions introduced in Algorithm 1 in more detail. Dataset Preparation for Simulation To prepare our dataset for use in the simulated game environ- ment, we group together: (1) source responses and ask an LLM to summarize a set of specific infor- mational items and (2) interviewer questions and ask an LLM to summarize them into a set of high- level objectives. 
The sources’ informational items mimic the knowledge a source likely had going Our gameplay proceeds in a loop, shown in Al- Llama-3.1-70b 5 Algorithm 1 Gameplay Input Interviewer objectives o, Source Informational Items I, Source persona ϕ, K turns Output Reward R 1: Initialize: Reward R ← 0, Conversation History c ← [], Used items u ← {} 2: for i ∈ 1, ...K do ▷ Step 1: Interviewer Question Generation 3: 4: 5: 6: 7: q = Interviewer(c, o) ▷ Step 2: Source’s Response Generation r =getRelevantInfoItems(I, u, q) p =getPersuasionLevel(c) f =getItemsToReturn(r, p) a =Source(q, c, f, p, ϕ) ▷ Update Variables u ← u ∪ f , c ← c ⊕ [q, a], R ← R + |f | 8: 9: end for into the interview and the interviewer’s objectives represent the agendas they had prior to the conver- sation. Both of these summaries are represented in Figure 3 as Given, and are designed to give the interviewer-LLM and the source-LLM a basis for communication. For further examples of both, see Tables 7 and 6 in the Appendix. Source Design: Personas and Persuasion Now, we introduce the design of the source. We focus at- tention on this construction to build a robust game environment that accurately mimics human interac- tions. To make gameplay varied and challenging, we draw from Sedorkin (2015) to design eight dif- ferent personas: Anxious, Avoidant, Adversarial, Defensive, Straightforward, Poor Explainer, Domi- nating and Clueless. To see detailed descriptions of each persona, as well as example responses that the persona might give, see Table 3. Different personas give us the ability to study how interviewers per- form in a wider array of scenarios and are designed to more carefully capture different challenges that journalists face. The following three functions, in sequence, power our gameplay: getRelevantInfoItems → getPersuasionLevel → getItemsToReturn. The first, getRelevantInfoItems, takes the inter- viewer’s question and determines which of the sources’ information items are most relevant; it is simply a retrieval function that we implement using an LLM. getPersuasionLevel is a function that determines the selected source’s level of com- Manual evaluation confirms that these information items were present in the initial interview, are concise and non- overlapping. Manual validation with professional journalists confirms that these outlines reasonably capture what a journalist might prepare before an interview and do not leak information. (a) Rewards of gpt-4o from playing against sources of differ- ent persona types. (b) Average level of persuasion, from gpt-4o, towards the different persona types in our evaluation. Figure 4: Comparison of gpt-4o’s performance across different persona types: extraction rewards and persua- sion levels. The Adversarial type is by far the hardest to extract information from, however, it is easier to per- suade. This could be because the LLMs are the most thrown off by adversarial sources. fort or persuasion (on a 5 point scale) in the current conversation. getItemsToReturn is a stochastic engine: it randomly selects, based on the persua- sion level, the number of relevant information items to return: the more persuaded a source is, the more likely they are to return more information. 
The persuadability component of our gameplay increases the need for multi-turn strategy: because persuasion is assessed with reference to the entire interview, the interviewer gets more reward overall for spending words (or even whole turns) making the source feel comfortable early in the interview, rather than directly asking for information. However, is it sound for the source-LLM to assess its own level of persuasion? As recent research has found, LLMs are poor detectors of when they are being persuaded (Sakurai and Miyao, 2024) and can even unknowingly persuade themselves (Zeng et al., 2024). Furthermore, persuadability varies from person to person (Wang et al., 2019; Hirsh et al., 2012). Luckily, source persuasion is a well-studied topic in journalism. As a starting point, we draw from Sedorkin (2015) and carefully design prompts asking an LLM to rate the persuasiveness of a prior conversation. Different source personas, according to Sedorkin (2015), are persuaded by different communication patterns: e.g., Anxious sources are distrustful of journalists; they are usually persuaded by phrases like "I will be as fair as possible". We tailor different persuasive phrases for different simulated personas. We also experiment with varying lengths of interview context to include in the getPersuasionLevel prompt. For a full list of our persuasion approaches, see Table 4.

Model            Hardest (Full Game)   Intermediate (sans Persuasion)   Easiest (sans Info Withholding)
gpt-4o-mini      49.3%                 47.5%                            84.7%
gpt-4o           50.4%                 49.8%                            84.2%
Llama-3.1-70b    42.6%                 45.5%                            80.1%
Llama-3.1-8b     42.4%                 48.3%                            74.9%

Table 2: Performance of LLMs as Interviewers, with Ablations. Percentage of information items extracted (Reward %) in each interview by different language models (gpt-4o-mini, gpt-4o, Llama-3.1-70b, and Llama-3.1-8b) across three conditions: (1) Hardest: the full game, with information dependent on persuasion and persona. (2) Intermediate: an ablation removing the sources' responsiveness to persuasion. (3) Easiest: an ablation removing the random withholding of information (i.e., a source returns all relevant information items at each turn). We observe, perhaps unsurprisingly, that removing the source's ability to withhold information (Intermediate → Easiest) drastically increases the reward % at the end of the game. The removal of persuasion strategies has a smaller effect, with some models showing marginal gains (e.g., Llama-3.1-8b) and others slight losses (e.g., gpt-4o). This indicates that vanilla LLMs are poorly suited to this persuasion task.

We confirm the validity of LLM-identified persuasion levels by setting up human trials. In these trials, the human plays as a source: the human receives the source's information items and is asked questions by the interviewer. After each question, the human rates their own level of persuasion on a 5-point scale, and we calculate the agreement between the human's self-rating and the LLM's assessment. After 72 trials, we report r = .43 at p < .0001, a statistically significant moderate correlation (if we exclude "Adversarial Sources", our correlation is r = .68 at p < 1e-9, indicating that persuasiveness evaluation in this context is relatively objective). With these steps, we can simulate persuadability relatively well at the turn level. We invite future work on more realistic simulation methods.
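A minimal sketch of how getPersuasionLevel could be prompted is shown below. The persona-specific cue strings mirror Table 4, but the exact prompt wording, the context window default, and the fallback value are our assumptions; the paper experiments with varying context lengths rather than fixing one.

```python
# Sketch of getPersuasionLevel: ask an LLM to rate comfort/persuasion 1-5.
import re
from typing import Callable, List

PERSUASION_CUES = {
    "Anxious": 'reassuring phrases like "I will be as fair as possible."',
    "Adversarial": "thorough research, persistence, and fact-based questions.",
    # ... one entry per persona, following Table 4.
}

def get_persuasion_level(history: List[str], persona: str,
                         llm: Callable[[str], str], window: int = 6) -> int:
    context = "\n".join(history[-window:])  # vary `window` to vary context length
    prompt = (f"You are a {persona} source, persuaded by "
              f"{PERSUASION_CUES.get(persona, 'fair, respectful questioning.')}\n"
              f"Conversation so far:\n{context}\n"
              "On a scale of 1-5, how persuaded/comfortable do you feel? "
              "Answer with a single digit.")
    match = re.search(r"[1-5]", llm(prompt))
    return int(match.group()) if match else 3  # fall back to a neutral rating
```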
Figure 5: Comparison of rewards over time for language models. (a) Average reward across conversational turns. (b) Percentage (%) of reward, by total reward, over time. For all language models, the per-turn reward declines over time. However, this is not due to the interviewer "maxing out" the reward, as total reward increases nearly linearly across conversational turns.

Finally, based on the assessed persuasion level (1–5) of the conversation, we implement getItemsToReturn. This function takes in all relevant information items and randomly draws from a Beta distribution to determine what percentage of relevant information items to return. We choose 5 different parameterizations per persona, each corresponding to a different persuasion level. As can be seen in Figure 3, we choose these parameterizations such that the more persuaded a source is, the more left-skewed the distribution is. Each persona has a slightly different parameterization, reflecting that some personas need less persuasion (e.g., Dominating) while others do not drastically change how much information they return even with more persuasion (e.g., Poor Explainer). See Figure 6 in the Appendix for the Beta distributions for each source.

Source and Interviewer Responses To implement the conversational components of our game, we stick with a minimalist setup, as LLMs have been well-observed to be proficient at basic dialogue given sufficient structure (Ou et al., 2024). For Source, we provide the source with the information items to return, a description of its persona, and its prior assessed comfort/persuasion level. For Interviewer, we implement a prompting-based setup and leave more involved designs for future work.

4.3 Game Simulation Results

We run our simulation for 200 interviews with four models as the interviewer: gpt-4o, gpt-4o-mini, Llama-3.1-70b and Llama-3.1-8b. For the source LLM, we used gpt-4o across all personas. Table 2 compares the performance of LLMs across three conditions: the full game, a version without persuasion, and a version where sources do not withhold information. In the full game, where sources' responsiveness depends on persuasion and persona, the gpt-4o model performs the best, at 50.4%. However, when persuasion is removed, performance only marginally changes across all models (e.g., Llama-3.1-70b reaches 45.5%, while gpt-4o remains stable at 49.8%), indicating that other aspects of the game (i.e., inferring which information the source has withheld) also pose a challenge. In the easiest condition, where no information withholding occurs, all models perform significantly better, with reward percentages reaching over 80%, showing that withholding strategies are a major obstacle for current LLMs.

Figure 4a highlights the performance of gpt-4o across different source personas. The model achieves the highest information extraction from straightforward personas, while adversarial and defensive personas are the most challenging. Despite being harder to extract information from, adversarial sources are easier to persuade, as shown in Figure 4b.

Figure 5a explores how the reward (information extraction) changes over the course of an interview. The results show a declining trend in reward per conversational turn. However, the total reward accumulated over time (Figure 5b) increases almost linearly, showing that the LLMs continue to extract information, albeit at a slower rate. Together, these findings highlight the limitations of current LLMs in engaging with persuasive and strategic multi-turn interviews.
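The stochastic getItemsToReturn engine described above can be sketched as follows. The (a, b) Beta parameterizations here are illustrative placeholders only; the paper's actual per-persona parameterizations are plotted in Figure 6.

```python
# Sketch of getItemsToReturn: per persona, five Beta parameterizations
# (one per persuasion level) determine the fraction of relevant items revealed.
import random

BETA_PARAMS = {
    # persuasion level:   1       2       3       4       5
    "Anxious":        [(1, 9), (2, 8), (3, 7), (6, 4), (8, 2)],
    "Dominating":     [(3, 7), (5, 5), (7, 3), (8, 2), (9, 1)],  # needs less persuasion
    "Poor Explainer": [(2, 8), (2, 7), (3, 7), (3, 6), (4, 6)],  # barely changes
}

def get_items_to_return(relevant, persuasion, persona, rng=random):
    a, b = BETA_PARAMS[persona][persuasion - 1]
    fraction = rng.betavariate(a, b)        # more persuaded -> mass shifts toward 1
    k = round(fraction * len(relevant))
    return rng.sample(relevant, k)          # random subset of the relevant items
```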
While larger models like gpt-4o outperform smaller ones, they still exhibit significant gaps in persuasion and adaptive questioning, particularly when dealing with difficult personas.

5 Discussion

Our results from Sections 3.3 and 4.3, taken together, show that news interview transcripts are a valuable real-world dataset for learning about persuasion, grounding dialogue and multi-turn strategy. Our work builds off previous work identifying grounding gaps in language models (Shaikh et al., 2024) and an emerging thread in gameplay (Wongkamjan et al., 2024; Liu et al., 2024) by extending both directions into the news domain, where real-world data is plentiful.

In Section 3.3, we show that grounding persuasive dialogue exists in human interviewers, which LLMs fail to mimic. In Section 4.3 we show that LLMs struggle to extract information from sources embodying a variety of personas. Interestingly, particular personas, like adversarial or avoidant sources, are noticeably more difficult for the LLM to extract information from. Figure 4b, for instance, might suggest that the LLM models are more responsive to engagement tactics when sources display hostility rather than indifference or avoidance. In the context of prior work (Liu et al., 2024; Chawla et al., 2021), which tends to examine all interactions from a singular persona, these distinctions can point to further investigations illuminating why certain personas are harder than others.

The ultimate solution for instilling a propensity towards multi-turn strategies and grounding language in LLMs, we feel, is to incorporate longer-range reward signals into the training process (Li et al., 2016). After all, grounding exists for a purpose: to influence an outcome borne out over many conversational turns (Clark, 1996; Cialdini, 2009) (e.g., in therapeutic environments, the patient is more open and makes more progress (Bohart, 1999); in educational environments the student is more engaged and learns more (Brown, 1994)).

Our NewsInterview effort provides a game-playing framework to provide such a reward and test such strategies in a real-world scenario. This game-playing environment is complicated: throughout an interview, the LLM has to reason about what information it already has, what information the source likely has, and how to persuade the source to divulge this information. Each of these components is likely challenging for present-day LLMs. Figure 5a suggests that LLMs struggle to maintain effective information extraction as the conversation progresses, possibly due to difficulties in adjusting their questioning strategies over time. Furthermore, we see only marginal gains with larger model sizes, suggesting that scale alone may not make meaningful progress.

Despite the considerable challenges this game imposes, NewsInterview is still simpler than other game-playing environments: agent-based strategy games typically require the agent to develop a strategy while other players are also developing strategies (Chawla et al., 2021; Liu et al., 2024; Wongkamjan et al., 2024). In NewsInterview, the source is not typically trying to persuade the journalist, simply deciding how much information to divulge. These components, taken together, suggest that NewsInterview is an ideal real-world starting place to incorporate multi-turn strategic dialogue into LLMs.
We imagine that agents built off of this training environment will one day be able to: (1) identify persona characteristics easily in other game-players, (2) develop strategy books, similar to those humans use (Sedorkin, 2015; Harcup, 2015), and (3) reason about early-vs-late stage conversational tactics. These agents can not only help in practical tasks (e.g., by automating interviewing or by providing students an environment to learn in) but also give us more fundamental insights into the strategic approaches that work well.

Our work has to be viewed in light of the following fundamental limitations. getPersuasionLevel serves as the core component in our game design that affects the strategy an agent can learn and deploy; thus, any flaw in this function can lead us to a suboptimal agentic strategy. Despite observing high correlations between human and LLM assessments of persuasion, there could be many common scenarios we do not observe. For instance, there may be pathological degenerate cases: if, for instance, getPersuasionLevel returns a persuasion level of 5 for agents that say "thank you", then agents will learn to say this at every turn, and will not develop any other strategy. We implemented our personas using prompts outlining what different personas would say and what they would be persuaded by, as described in Section 4.2, under the expectation that the source-LLM would be able to generalize. Table 2 does indicate that the interviewer-agent did not simply overfit to certain words. However, if, in future work, we observe collapsing interviewer-agent behavior, especially in the presence of trained approaches, we must be prepared to do more work to validate or improve getPersuasionLevel.

Additionally, future work could also focus on introducing additional reward signals into the gameplay. Some pieces of information are more important than others. By analyzing how much time is spent discussing each of a source's pieces of information, we might reward the agent more for the more important extractions. Another reward signal could be the quotability of the response. Often, an interviewer will seek to obtain specific, memorable lines to write about throughout the interview. There is a line of research that tries to predict which parts of a text or speech are the most quotable (Bendersky and Smith, 2012; Lee et al., 2016; MacLaughlin et al., 2018). We believe both of these signals might be an interesting lens through which to improve NewsInterview.

6 Related Work

Recent research on large language models (LLMs) has highlighted a lack of grounding language and strategic dialogue (Shaikh et al., 2024; Wongkamjan et al., 2024). Grounding communication, which involves the use of acknowledgments and affirmations to foster understanding and trust (Clark, 1996), is essential in various conversational settings such as education (Kasneci et al., 2023), mental health (Carlbring et al., 2023), and conflict resolution (Argyle et al., 2023). Prior efforts to study and improve LLMs' grounding dialogue capabilities have faced limitations due to the scarcity of large-scale, naturalistic datasets (Kasneci et al., 2023). Existing datasets are either generated via crowdsourcing, which can result in unnatural dialogues (Rashkin et al., 2019; Wang et al., 2019; Liu et al., 2021), or are small-scale due to privacy concerns, as in educational (Caines et al., 2020) and therapeutic settings (Gratch et al., 2014; Casey, 2004). Grounding dialogue exists to foster longer-term goals.
LLMs have been shown to lack multi-turn strategic thinking and planning, which are critical for effective dialogue (Chawla et al., 2023). While some studies have explored the use of game-playing environments to improve these aspects (Li et al., 2016; Wongkamjan et al., 2024), and the development of 'gamified' persona-based subjects has a long history in dialogue agent research (Colby, 1981), there is still a need for realistic datasets and simulations that can facilitate the development of agents with longer-horizon rewards.

Research has increasingly focused on using game-playing environments to train agents in strategic dialogue. Lewis et al. (2017) introduced a negotiation task where agents learn to negotiate over resources through dialogue, demonstrating the potential for agents to develop complex negotiation strategies. Similarly, He et al. (2018) proposed a decoupled policy optimization approach for strategic dialogues in games. More recent works have extended strategic dialogue to complex, multi-agent environments. Gray et al. (2020) developed agents capable of playing the game Diplomacy at a human level by engaging in strategic planning and negotiation with other agents. Perolat et al. (2022) introduced a model for playing Stratego, showcasing multi-agent reinforcement learning in adversarial settings. Chawla et al. (2021) presented CASINO, a contextualized dialogue dataset designed for building negotiation agents, emphasizing the importance of context and strategy in dialogue systems. These works highlight the effectiveness of simulated game environments in training agents for strategic multi-turn dialogues, underscoring the potential of such approaches in enhancing LLMs' strategic dialogue capabilities.

Despite these advancements, several gaps remain. Most existing models focus on task-specific dialogues with singular personas and lack the flexibility to handle a variety of conversational contexts and topics. Another gap is the scarcity of large-scale, naturalistic datasets to create a broader variety of settings for these games, hampering the development of more generalized and robust models (Liu et al., 2021). Addressing these gaps is crucial for advancing the development of agents capable of engaging in sophisticated strategic dialogues akin to human interactions.

Our work addresses these gaps by introducing a large-scale dataset of 40,000 two-person informational interviews from NPR and CNN, providing a rich resource for studying grounding communication in a naturalistic setting. Additionally, we develop a realistic simulated environment, NewsInterview, which incorporates source personas and persuasive elements to evaluate and improve LLMs' strategic dialogue capabilities.

7 Conclusion

In this paper, we have introduced a high-quality dataset of 40,000 two-person informational interviews from NPR and CNN, addressing the scarcity of large-scale dialogue data necessary for studying grounding communication. Our detailed discourse analysis reveals significant differences between LLM-generated dialogues and human interviewers, particularly in the use of grounding language and question types. Motivated by the observation that long-term objectives guide turn-level grounding, we develop a realistic game environment, NewsInterview, to test and improve dialogue agents in informational interviews.
Our experiments demonstrate that while source LLMs can mimic human behavior in information sharing, interviewer LLMs struggle with recognizing when questions are answered and engaging persuasively, leading to suboptimal information extraction. These findings underscore the need for enhancing LLMs' strategic dialogue capabilities, and we believe that our dataset and simulation environment provide valuable resources for future research in this area.

8 Limitations

8.1 Privacy and Ethical Considerations

All data used in this study are publicly available and do not contain personally identifiable information beyond what has already been made public by the news organizations. We adhere to ethical guidelines for data use and ensure that our processing respects the rights and privacy of individuals involved as well as the news organizations that collected this data. Since the dataset we create is derived from interviews that have already been published in academic settings, we do not believe we are infringing upon the copyright of the news organizations this data originally belonged to. Aside from ownership questions, there are still inherent risks in the use of real-world interview data. Some interviews might involve sensitive topics, and the ethical implications of using such data for model evaluation warrant careful consideration.

8.2 Reproducibility

All experiments are conducted using publicly available models and datasets. Part of our simulation does rely on high-performing language models, and for this we used gpt-4o. This brings us into territory where we are inherently not reproducible, as closed models can be changed without notice. However, we believe we are not out of the norm in the academic community in our usage.

8.3 Simulated Environment Limitations and Risks

The simulated game-playing environment used to evaluate the LLM agents is a simplification of real-world interviewing processes. We might be inducing a bias in agents that could perpetuate and ultimately lead development in the wrong direction. Or, we also might be opening up a sandbox for potential dual use. The design of our game, to extract information from sources, might one day be used to persuade users to divulge sensitive information.

8.4 Annotators

We worked with multiple professional journalists throughout the summer who were either colleagues or students who signed up to work with us. They were all volunteering their time and efforts to help with the research.

8.5 Computational Resources

All experiments were run either with OpenAI resources (we spent a total of $300 running simulations) or open source Llama-3.1-70b models. These models were run on the university cluster, which consisted of either 4xA40 or 2xA100 NVIDIA GPUs.

References

Lisa P Argyle, Ethan Busby, Joshua Gubler, Chris Bail, Thomas Howe, Christopher Rytting, and David Wingate. 2023. AI chat assistants can improve conversations about divisive topics. arXiv preprint arXiv:2302.07268.

Michael Bendersky and David A Smith. 2012. A dictionary of wisdom and wit: Learning to extract quotable phrases. In Proceedings of the NAACL-HLT 2012 Workshop on Computational Linguistics for Literature, pages 69–77.

AC Bohart. 1999. How clients make therapy work. American Psychological Association.

AL Brown. 1994. Guided discovery in a community of learners. Classroom Lessons: Integrating Cognitive Theory and Classroom Practice. Bradford Books.
Andrew Caines, Helen Yannakoudakis, Helena Edmondson, Helen Allen, Pascual Pérez-Paredes, Bill Byrne, and Paula Buttery. 2020. The teacher-student chatroom corpus. In Proceedings of the 9th Workshop on NLP for Computer Assisted Language Learning, pages 10–20.

Per Carlbring, Heather Hadjistavropoulos, Annet Kleiboer, and Gerhard Andersson. 2023. A new era in internet interventions: The advent of ChatGPT and AI-assisted therapist guidance. Internet Interventions, 32.

Dympna Casey. 2004. Challenges of collecting data in the clinical setting. NT Research, 9(2):131–141.

Kushal Chawla, Jaysa Ramirez, Rene Clever, Gale Lucas, Jonathan May, and Jonathan Gratch. 2021. CaSiNo: A corpus of campsite negotiation dialogues for automatic negotiation systems. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3167–3185.

Kushal Chawla, Weiyan Shi, Jingwen Zhang, Gale Lucas, Zhou Yu, and Jonathan Gratch. 2023. Social influence dialogue systems: A survey of datasets and models for social influence tasks. In Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics, pages 750–766.

Robert B Cialdini. 2009. Influence: Science and Practice, volume 4.

Herbert H Clark. 1996. Using Language. Cambridge University Press.

Kenneth Mark Colby. 1981. Modeling a paranoid mind. Behavioral and Brain Sciences, 4(4):515–534.

Jonathan Gratch, Ron Artstein, Gale Lucas, Giota Stratou, Stefan Scherer, Angela Nazarian, Rachel Wood, Jill Boberg, David DeVault, Stacy Marsella, David Traum, Skip Rizzo, and Louis-Philippe Morency. 2014. The distress analysis interview corpus of human and computer interviews. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14), pages 3123–3128, Reykjavik, Iceland. European Language Resources Association (ELRA).

Jonathan Gray, Adam Lerer, Anton Bakhtin, and Noam Brown. 2020. Human-level performance in no-press diplomacy via equilibrium search. arXiv preprint arXiv:2010.02923.

Tony Harcup. 2015. Journalism: Principles and Practice, 3rd edition. SAGE Publications, London, UK.

He He, Derek Chen, Anusha Balakrishnan, and Percy Liang. 2018. Decoupling strategy and generation in negotiation dialogues. In 2018 Conference on Empirical Methods in Natural Language Processing, EMNLP 2018, pages 2333–2343. Association for Computational Linguistics.

Jacob B Hirsh, Sonia K Kang, and Galen V Bodenhausen. 2012. Personalized persuasion: Tailoring persuasive appeals to recipients' personality traits. Psychological Science, 23(6):578–581.

Enkelejda Kasneci, Kathrin Seßler, Stefan Küchemann, Maria Bannert, Daryna Dementieva, Frank Fischer, Urs Gasser, Georg Groh, Stephan Günnemann, Eyke Hüllermeier, et al. 2023. ChatGPT for good? On opportunities and challenges of large language models for education. Learning and Individual Differences, 103:102274.

Woosuk Kwon, Zhuohan Li, Siyuan Zhuang, Ying Sheng, Lianmin Zheng, Cody Hao Yu, Joseph Gonzalez, Hao Zhang, and Ion Stoica. 2023. Efficient memory management for large language model serving with PagedAttention. In Proceedings of the 29th Symposium on Operating Systems Principles, pages 611–626.

Hanbit Lee, Yeonchan Ahn, Haejun Lee, Seungdo Ha, and Sang-goo Lee. 2016. Quote recommendation in dialogue using deep neural network.
In Proceedings of the 39th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 957–960.

Mike Lewis, Denis Yarats, Yann N Dauphin, Devi Parikh, and Dhruv Batra. 2017. Deal or no deal? End-to-end learning for negotiation dialogues. arXiv preprint arXiv:1706.05125.

Jiwei Li, Will Monroe, Alan Ritter, Dan Jurafsky, Michel Galley, and Jianfeng Gao. 2016. Deep reinforcement learning for dialogue generation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1192–1202.

Siyang Liu, Chujie Zheng, Orianna Demasi, Sahand Sabour, Yu Li, Zhou Yu, Yong Jiang, and Minlie Huang. 2021. Towards emotional support dialog systems. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 3469–3483.

Ziyi Liu, Abhishek Anand, Pei Zhou, Jen-tse Huang, and Jieyu Zhao. 2024. InterIntent: Investigating social intelligence of LLMs via intention understanding in an interactive game context. arXiv preprint arXiv:2406.12203.

Ansel MacLaughlin, John Wihbey, and David Smith. 2018. Predicting news coverage of scientific articles. In Proceedings of the International AAAI Conference on Web and Social Media, volume 12.

Bodhisattwa Prasad Majumder, Shuyang Li, Jianmo Ni, and Julian McAuley. 2020. Interview: A large-scale open-source corpus of media dialog. arXiv preprint arXiv:2004.03090.

Jiao Ou, Junda Lu, Che Liu, Yihong Tang, Fuzheng Zhang, Di Zhang, and Kun Gai. 2024. DialogBench: Evaluating LLMs as human-like dialogue systems. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 6137–6170.

Julien Perolat, Bart De Vylder, Daniel Hennes, Eugene Tarassov, Florian Strub, Vincent de Boer, Paul Muller, Jerome T Connor, Neil Burch, Thomas Anthony, et al. 2022. Mastering the game of Stratego with model-free multiagent reinforcement learning. Science, 378(6623):990–996.

Hannah Rashkin, Eric Michael Smith, Margaret Li, and Y-Lan Boureau. 2019. Towards empathetic open-domain conversation models: A new benchmark and dataset. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5370–5381, Florence, Italy. Association for Computational Linguistics.

Swarnadeep Saha, Omer Levy, Asli Celikyilmaz, Mohit Bansal, Jason Weston, and Xian Li. 2024. Branch-solve-merge improves large language model evaluation and generation. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 8345–8363.

Hiromasa Sakurai and Yusuke Miyao. 2024. Evaluating intention detection capability of large language models in persuasive dialogues. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1635–1657, Bangkok, Thailand. Association for Computational Linguistics.

Gail Sedorkin. 2015. Interviewing: A Guide for Journalists and Writers, 4th edition. Allen & Unwin.

Omar Shaikh, Kristina Gligorić, Ashna Khetan, Matthias Gerstgrasser, Diyi Yang, and Dan Jurafsky. 2024. Grounding gaps in language model generations. In Proceedings of the 2024 Conference of
the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 6279–6296.

Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. 2023. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288.

Xuewei Wang, Weiyan Shi, Richard Kim, Yoojung Oh, Sijia Yang, Jingwen Zhang, and Zhou Yu. 2019. Persuasion for good: Towards a personalized persuasive dialogue system for social good. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5635–5649.

Wichayaporn Wongkamjan, Feng Gu, Yanze Wang, Ulf Hermjakob, Jonathan May, Brandon M Stewart, Jonathan K Kummerfeld, Denis Peskoff, and Jordan Lee Boyd-Graber. 2024. More victories, less cooperation: Assessing Cicero's diplomacy play. arXiv preprint arXiv:2406.04643.

Yi Zeng, Hongpeng Lin, Jingwen Zhang, Diyi Yang, Ruoxi Jia, and Weiyan Shi. 2024. How Johnny can persuade LLMs to jailbreak them: Rethinking persuasion to challenge AI safety by humanizing LLMs. arXiv preprint arXiv:2401.06373.

Chenguang Zhu, Yang Liu, Jie Mei, and Michael Zeng. 2021. MediaSum: A large-scale media interview dataset for dialogue summarization. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5927–5934.

A Additional Details on Source Prompts

B Discourse Definitions

We give discourse definitions for our 8 discourse categories in Table 5. We developed these definitions between 3 annotators: one was a former professional journalist, another a journalism undergraduate student, and the third a computer science undergraduate student. We conferenced 3 times, examining over 50 interviews, sorting questions into categories, and expanding these categories until we were reliably able to label new questions. We calculated an inter-annotator agreement of κ = .6 between the annotators on a shared set of 10 interviews. Then, we used an LLM, Llama-3.1-70b, to label discourse on the entire interview. The former journalist manually evaluated the LLM's performance during a blind trial and had an agreement of κ = .8 with the LLM.

C Data Preprocessing

C.1 Data Used
• utterances-2sp.csv from the NPR-Media dataset (Majumder et al., 2020)
• episodes.csv from the NPR-Media dataset (Majumder et al., 2020)
• news_dialogue.json from the MediaSum dataset (Zhu et al., 2021)

C.2 Initial Sizes of the Data
• There are 1,240,112 rows and 7 columns in utterances-2sp.csv.
• There are 105,848 rows and 4 columns in episodes.csv.
• There are 3,199,858 rows and 4 columns in utterances.csv.
• There are 23,714 transcripts in the NPR-Media dataset (from utterances-2sp.csv).
• There are 463,596 transcripts in the MediaSum dataset.

C.3 Process

C.3.1 NPR-Media
• Began by combining episodes.csv with utterances-2sp.csv to add more information about each episode (title, date, etc.).
• Filtered out based on keywords: ["Sunday Puzzle", "Traffic", "Puzzle", "Advertisement", "Sponsor", "Commentary"]
  – 37 interviews were filtered out, reducing to 23,676 transcripts.
• Helper functions used:
  – count_unique_episodes(df) allows the user to count the number of unique episodes within a dataset.
  – filter_interviews(merged_df) allows the user to filter out the dataset using certain keywords.
  – find_removed_episodes(df_before, df_after) allows the user to identify the episodes that were removed from the dataset.
  – print_episode(df, episode_number) prints a specified episode from a specified dataset.
  – print_episode_pretty(df, episode_number) prints a specified episode from a specified dataset in a readable format.
  – find_removed_episodes(df_before, df_after) returns a list of the episodes that were removed in a filtering step.
• Converted the dataset to the MediaSum format for easy prompt processing.
• Downloaded the dataset (grouped and ungrouped) as JSON and CSV.

C.3.2 MediaSum
• Began by deleting interviews from MediaSum that are already in NPR-Media.
  – 8 interviews were filtered out, reducing to 463,588 transcripts.
• Filtered out episodes that had more than two unique speakers in the middle 70% of the transcript.
  – 396,610 interviews were filtered out, reducing to 66,978 transcripts.
• Filtered out episodes that were too short.
  – 19,059 interviews were filtered out, reducing to 47,919 transcripts.

Table 3: Source personas that we created, with descriptions and example responses.
Anxious: Unsure if they should be doing the interview, often expresses doubt. Example: "I'm not sure if I should be saying this, I should speak to my manager."
Avoidant: Brief, deflects questions, avoids detail, and changes subjects. Example: "Actually, one of the main issues was the supply chain, but we've sorted that out."
Adversarial: Hostile, challenges the interviewer, provides confrontational replies. Example: "Maybe if you did your job well, you'd understand the data. I'm not here to educate you."
Defensive: Protects reputation, feels criticized, gives overly detailed explanations. Example: "One area where costs increased was in material prices, which were out of our control."
Straightforward: Clear, direct, and willing to provide detailed information. Example: "Additionally, we ran out of funding midway through the project."
Poor Explainer: Struggles to explain clearly, rambles, or provides convoluted answers. Example: "Uh, well, I guess the supply chain was part of it, but, uh, that's only one part of the story..."
Dominating: Controls the conversation, gives lengthy or off-topic answers. Example: "Costs were high, but at my suggestion we brought in the best experts worldwide."
Clueless: Confused and uncertain, often unsure of the topic. Example: "Oh, right, the delays... yeah, maybe it was the, uh, supply issues? I'm not too sure..."

Table 4: Persuasion techniques that we compiled for different source types. These manners and styles of speaking were informed by examples given in Harcup (2015) and Sedorkin (2015) that sources with different personality types find the most persuasive.
Anxious: Responds well to empathetic, reassuring, and patient conversations. Encouraging, non-threatening language builds comfort. Examples: "I will be as fair as possible.", "I appreciate your honesty.", "If you're not comfortable now, I can come back later."
Avoidant: Prefers non-obtrusive small talk, short questions, and space to respond. Open-ended, light prompts work well. Examples: "And that happened when?", "I imagine there's more to the story.", "Ah I see."
Adversarial: Responds to thorough research, persistence, and fact-based questions. Repeated questioning elicits responses. Examples: "Our records indicate...", "Just to be clear, are you saying...?", "Earlier you stated..."
Defensive: Engages with non-confrontational and validating conversations. Neutral language reduces defensiveness. Examples: "I see why you made that choice.", "We can work together.", "It's understandable."
Straightforward: Prefers direct and transparent conversations. Efficiency and brevity are key. Examples: "Let's get to the solution.", "What were the key points, in your view?"
Poor Explainer: Responds well to structured, patient conversations. Simple clarifying questions and validation help communication. Examples: "Explain that part again in smaller steps.", "I understand, keep going.", "Take your time."
Dominating: Engages when their expertise is acknowledged. Validation and offering control builds rapport. Examples: "I'd love your take.", "You have experience, what do you suggest?", "Your insights are valuable."
Clueless: Guided, simple questions with firm direction are effective. Breaking down complex topics increases confidence. Examples: "Tell me what you're thinking.", "It's okay to be unsure.", "Start with something simple."

• Helper functions used:
  – print_row_by_title(df, title) allows users to print an episode using its title.
  – print_row_by_id(df, id) allows users to print an episode using its id.
  – filter_episodes_2sp(df) filters out transcripts with more than 2 unique speakers in the middle 70%.
  – filter_by_utt_length(df) filters out transcripts with 10 or fewer strings in utt.
  – find_removed_episodes(df_before, df_after) lets users see which episodes were removed.
• Downloaded the dataset as JSON and CSV.

C.4 Final Sizes of the Data
• There are 23,676 transcripts in the NPR-Media dataset (from utterances-2sp.csv).
• There are 47,919 transcripts in the MediaSum dataset.
• There are 71,598 transcripts in the combined dataset.
• There are 45,848 transcripts in the final dataset.
• Our dataset started at 487,310 transcripts and now has 45,848 transcripts.

C.5 LLM Preprocessing Prompt

Prompt to filter out transcripts that were not informational interviews:

Analyze this interview transcript that is in the form of a dialogue: (dialogue) By reading through the dialogue, identify if this transcript is an informational interview between 2 people. Look for questions and make sure this is an interview, not a Q&A game. The interviewer should be asking questions, not engaging in a back-and-forth conversation with the interviewee. After analyzing, your final answer of just 'YES' or 'NO' should be in brackets.
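Applying this filtering prompt to each transcript might look like the following sketch. The prompt template mirrors the C.5 text above verbatim; the `llm` completion wrapper and the bracket-parsing convention are our assumptions about the surrounding plumbing.

```python
# Sketch of the C.5 filtering pass over candidate transcripts.
FILTER_PROMPT = """Analyze this interview transcript that is in the form of a dialogue:

{dialogue}

By reading through the dialogue, identify if this transcript is an informational
interview between 2 people. Look for questions and make sure this is an interview,
not a Q&A game. The interviewer should be asking questions, not engaging in a
back-and-forth conversation with the interviewee. After analyzing, your final
answer of just 'YES' or 'NO' should be in brackets."""

def is_informational_interview(dialogue: str, llm) -> bool:
    reply = llm(FILTER_PROMPT.format(dialogue=dialogue))
    return "[YES]" in reply.upper()  # parse the bracketed final answer

def filter_transcripts(transcripts, llm):
    return [t for t in transcripts if is_informational_interview(t, llm)]
```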
C.6 Examples of Interviews

NPR-85

FARAI CHIDEYA, host: Tony, I guess there will always be some kind of history made every day.
TONY COX, host: You know, some of it good. Some of it, not so good.
FARAI CHIDEYA, host: And while some of it is well-publicized, sometimes, notable history goes under the radar.
TONY COX, host: Now, that's true.
FARAI CHIDEYA, host: I'm thinking of your interview with Mable John.
TONY COX, host: Oh, yeah. Now, this is a woman with an interesting past.
Ms. MABLE JOHN (Singer): (Singing) My name is Mable and don't you think I ain't able.
TONY COX, host: The 77-year-old Louisiana native has been a top R&B singer, a successful novelist, a pastor, an activist and a movie actor, and I found out that Mable John is full of stories like the one about the time she met record mogul Berry Gordy before Motown was even Motown.
Ms. MABLE JOHN (Singer): (Singing) That you're leaving.
Ms. MABLE JOHN (Singer): How I met Berry? That was at a barber shop on (unintelligible) that was near the fine show bar, and at that time men were wearing process. Process is the (unintelligible). And I was dating a guy that was one of those process operators in the Chesterfield lounge and barbershop, and Berry was coming and getting his hair done. I was coaching choirs for my church. And my boyfriend introduced me to Berry Gordy because Berry said he was a songwriter and he was going to have a lot of people recording his songs. And my boyfriend said you need to stop doing all of this work for the church free, and that Berry Gordy do something with you so you can get paid. So he introduced me to Berry Gordy.
TONY COX, host: Now, tell us the story. We're going to skip around a little bit.
Ms. MABLE JOHN (Singer): Okay.
TONY COX, host: When you and Berry Gordy connected, as Motown was just becoming a company, a record company, you are the first female to record on a label, the Tamla label.
Ms. MABLE JOHN (Singer): Yes.
TONY COX, host: Before Motown.
Ms. MABLE JOHN (Singer): The first single female artist, because Claudette Robinson was a part of what become the Miracles, and he was managing them along with me.
TONY COX, host: Right.
Ms. MABLE JOHN (Singer): So I was the first single female artist to be signed to Tamla, which is a part of the Motown family.
TONY COX, host: When you think about that now, how do you feel about looking at that as a historic moment?
Ms. MABLE JOHN (Singer): No one could have bought that time. God had to give it to me.
Ms. MABLE JOHN (Singer): (Singing) Hey. Hey.
TONY COX, host: I understand that you were rehearsing one day and these three young girls came in and interrupted your rehearsal.
Ms. MABLE JOHN (Singer): The girls that we know now as The Supremes. They came into a rehearsal that I was doing with Berry Gordy because he played also for me, played piano for me. We were there rehearsing and these girls came in and I didn't quite remember everything that was said that day because it's been so long. But Mary Wilson of The Supremes remembered, when she was writing her book, to say that when she first walked into Motown, the three of them walked in, and my question to Berry Gordy was, why are they walking in on my rehearsal, because all of our rehearsals were private.
Ms. MABLE JOHN (Singer): (Singing) It takes a more than 'em flashy old money and I wink from the corner of your eye. I don't want no big line calls, (unintelligible) caviar. Oh no, true love baby can be found 'cause you take a look around.
TONY COX, host: Talking about faith. Your career at Motown never really took off, and after some few years, you decided to go to Memphis, where you joined the Stax label and hooked up with Porter and Isaac Hayes. And then, it was long after that that you had a million seller.
Ms. MABLE JOHN (Singer): Right. Well, Motown, Berry Gordy, they were all along with God and my parents a part of my future. So Motown was my beginning. It was one that was different from everywhere else I've ever been. But I think it was a necessary one to make the transition for me from Motown to Stax.
TONY COX, host: Now, your big song at Stax, one of your - the biggest of your songs was...
Ms. MABLE JOHN (Singer): The biggest of all songs.
TONY COX, host: "Your Good Thing is About to End."
Ms. MABLE JOHN (Singer): "...Is About to End." Right. Right.
TONY COX, host: See, I'm old enough to have remembered that song.
Ms. MABLE JOHN (Singer): Well, that's good. That makes me feel you don't have to be very old to remember that.
Ms. MABLE JOHN (Singer): (Singing) I don't have to beg you to hold me 'cause somebody else will. You don't have to love me when I want it, 'cause somebody else will.
Ms. MABLE JOHN (Singer): It was a story that I needed to tell because of a bad marriage. And at Stax, they would allow you to be yourself. Everybody participated in whatever success you're going to have, everybody, including the drummer.
TONY COX, host: Really? Tell me about your family. And I'm switching to that for a reason because you were one of 10 children, right?
Ms. MABLE JOHN (Singer): The oldest...
TONY COX, host: The oldest of 10.
Ms. MABLE JOHN (Singer): ...of 10 children.
TONY COX, host: And you happen to have a little brother, a baby brother who was a big time performer, Little Willie John.
Ms. MABLE JOHN (Singer): Yes. Little Willie John. William Edward John. Now, when I got with Willy that was another education.
Ms. MABLE JOHN (Singer): Because he said my name is Little Willie John. It might be William Edward John to you, and you're my sister and I love you. But if you're not good, I'm going to send you home.
TONY COX, host: Obviously, you are good.
Ms. MABLE JOHN (Singer): Well, he let me stay.
Ms. MABLE JOHN (Singer): (Singing) You have all the love that I've got. Even ice melts to water and gets hot. Look out, your good thing is about to come to an end. Your real good thing...
TONY COX, host: You were the leader of the Raelettes for a dozen years.
Ms. MABLE JOHN (Singer): Yes.
TONY COX, host: Traveling all over with and without Ray Charles.
Ms. MABLE JOHN (Singer): With and without Ray Charles. Yes.
TONY COX, host: In the movie, "Ray," I had looked in the credits to see if there were someone who played you...
Ms. MABLE JOHN (Singer): No.
TONY COX, host: ...since you have been a Raelette for so long, and I saw that there wasn't one.
Ms. MABLE JOHN (Singer): No.
TONY COX, host: And is there a reason for that?
Ms. MABLE JOHN (Singer): Well, it was the years before I came.
TONY COX, host: Okay.
Ms. MABLE JOHN (Singer): And I tell everybody that asks me, the best of his life were the years after the movie. When I came to work with him, he sat me down and told me all about his beginning, told me all about things that ticks him off and things that excite him, what he was looking for and how he wanted it. And I knew that being with him would finish me in this industry...
TONY COX, host: Now, when he...
Ms. MABLE JOHN (Singer): ...because he was at the top - complete me.
TONY COX, host: Okay.
Ms. MABLE JOHN (Singer): So that I could work for any audience, sing any kind of songs. Remember now, at the beginning I thought I could only sing gospel. With Berry Gordy, I found out I could sing the blues. I went to Stax and I find out I could sing love songs. I got with Ray Charles and we sang country - everything. And we could play to any audience. I wanted to sing what was in my heart to everybody that loves music, and Ray Charles was the place for me to be, to do that.
TONY COX, host: So the Raelettes - would you say that was the highlight of your career?
Ms. MABLE JOHN (Singer): It was a highlight. It was a highlight because I learned things about myself, about my career, about the industry. I was able to set up my own publishing companies and production companies because of the knowledge that I gained with and from Ray Charles.
TONY COX, host: And after all of that, Mable John, your career did not stop. It has gone on into movies, into - you've written a couple of novels.
Ms. MABLE JOHN (Singer): Excuse me. I just finished the third.
TONY COX, host: Oh, number three. You've done three novels. You're a minister.
Ms. MABLE JOHN (Singer): Yes.
TONY COX, host: And you started a church.
Ms. MABLE JOHN (Singer): Yes.
TONY COX, host: And you help the homeless.
Ms. MABLE JOHN (Singer): Yes.
TONY COX, host: And you're a grandmother.
Ms. MABLE JOHN (Singer): A great-grandmother.
TONY COX, host: And a great-grandmother. How is it possible for one person to do all of those things and to do them as successfully as you have?
Ms. MABLE JOHN (Singer): It's all God. Some days, when people are telling me how busy I am. And when I sit down to think about it, I get tired.
Ms. MABLE JOHN (Singer): So I don't. I don't go there. I just get up every morning and I thank God for the activity of that day. And I have to thank a woman that's no longer with us, Ms. Billie Holiday, because that's the voice that I hear in my ear still to this day. I worked with her two weeks before she passed. And she said to me, Honey - because I was frightened out of my wits - you can make it if you remember. Always know when you have done or given enough. Not to be afraid and have guts enough to say I quit.
Ms. MABLE JOHN (Singer): (Singing) Even ice melts to water and gets hot...
TONY COX, host: It's so nice talking with you. Thank you for coming in.
Ms. MABLE JOHN (Singer): I thank you.
Ms. MABLE JOHN (Singer): (Singing) Your good thing is about to come to an end. Your real good thing...
FARAI CHIDEYA, host: That was NPR's Tony Cox with singer, author and actor Mable John. Look for Mable John in the upcoming John Sayles film, "Honeydripper."
Ms. MABLE JOHN (Singer): (Singing) Getting myself back together.
FARAI CHIDEYA, host: That's our show for today, and thank you for sharing your time with us. To listen to the show or subscribe to our podcast, visit our Web site, nprnewsandnotes.org. No spaces, just nprnewsandnotes.org.
FARAI CHIDEYA, host: To join the conversation or sign up for our newsletter, visit our blog at nprnewsandviews.org. NEWS & NOTES was created by NPR News and the African-American Public Radio consortium. Tomorrow, a reporter shares Donda West's last interview.
FARAI CHIDEYA, host: I'm Farai Chideya. This is NEWS & NOTES.

Table 5: Discourse types in informational interviews. We developed these definitions manually between three annotators by examining 50 different interviews.
Starting/Ending Remarks: Initiates or concludes the interview. Often not in the form of a question.
Acknowledgement Statement: Affirms the interviewee, often by explicitly recognizing their previous response. Builds rapport, demonstrates active listening. Typically induces the source to engage in greater openness.
Follow-Up Question: Digs deeper into a topic, seeks elaboration, or re-phrases a previous question to keep the discussion focused.
Verification Question: Confirms the accuracy of a statement, fact, event, observation or assumption.
Topic-Transition Question: Shifts the conversation to a new subject, usually an outline-level goal that the journalist prepared before the interview.
Opinion/Speculation Question: Solicits the interviewee's views or predictions, revealing biases or insights.
Challenge Question: Tests the interviewee's position, argument, or credibility, often provoking thought or debate.
Broadening Question: Expands the scope of discussion, encouraging the interviewee to consider broader contexts or new perspectives.

Table 6: Interview outline objectives given to an interviewing game-playing agent.
Processed Outline
Source biography: Senior news analyst with expertise in politics and history.
Interview context: A president's final year in office and potential changes in policy.
Objective 1: Presidential legacy
Objective 2: Foreign policy shifts
Objective 3: Domestic policy changes
Objective 4: Potential surprises

Table 7: Information items for a source in our game-playing environment, extracted from an interview featuring Bill Dudley, former President of the New York Federal Reserve.
Information item #1: The economy is growing above trend pace, with job growth of 150,000-200,000 a month, which is higher than what's sustainable in the long run.
Information item #2: The Fed will likely continue to raise interest rates, as the economy is growing and financial conditions are still very accommodative.
Information item #3: The neutral rate is probably higher than where we are right now, but it's not a precise number, and the Fed needs to curtail monetary policy further.
Information item #4: The dot-plot is just a forecast and should not be taken as a commitment; it's subject to change as new information becomes available.
Information item #5: The stock market will likely face tougher going in 2019, with slower earnings growth and higher interest rates.
Information item #6: The economy's capacity to continue growing is a concern, as there aren't enough workers to sustain above-trend growth for more than another year or so.

Table 8: List of interview questions generated in a counterfactual setting by an LLM interviewer. The questions are generated after observing the previous t human conversational turns.
What do you think about the changing dynamics of your neighborhood and how it affects your sense of community and belonging?
What specific factors do you think are contributing to the increasing rates of HIV/AIDS among African-American women in Washington D.C.?
How do parents with HIV/AIDS typically cope with the fear of not being there for their children, and what are the emotional and psychological implications of this fear on their mental health?
What specific steps are you taking to mitigate the impact of the Salmonella outbreak on your business, and do you think this will have a lasting effect on the tomato industry as a whole?
What about the potential impact of this rate drop on the overall housing market, and do you think it could lead to a rebound in housing prices or a continued decline?
What are some practical steps parents can take to help their teenagers prepare for the job market and make the most of their summer?

Figure 6: Beta distributions for various interview personas: (a) Anxious, (b) Straightforward, (c) Poor Explainer, (d) Clueless, (e) Dominating, (f) Avoidant.
FFRob: Leveraging Symbolic Planning for Efficient Task and Motion Planning

Journal Title XX(X):1–35
© The Author(s) 2016
Reprints and permission: sagepub.co.uk/journalsPermissions.nav
DOI: 10.1177/ToBeAssigned
www.sagepub.com/

Caelan Reed Garrett¹, Tomás Lozano-Pérez¹, and Leslie Pack Kaelbling¹

¹MIT CSAIL, USA
Corresponding author: Caelan Reed Garrett, Computer Science and Artificial Intelligence Laboratory, 32 Vassar Street, Cambridge, MA 02139 USA. Email: [email protected]

arXiv:1608.01335v2 [cs.RO] 1 Dec 2017

Abstract
Mobile manipulation problems involving many objects are challenging to solve due to the high dimensionality and multi-modality of their hybrid configuration spaces. Planners that perform a purely geometric search are prohibitively slow for solving these problems because they are unable to factor the configuration space. Symbolic task planners can efficiently construct plans involving many variables but cannot represent the geometric and kinematic constraints required in manipulation. We present the FFROB algorithm for solving task and motion planning problems. First, we introduce Extended Action Specification (EAS) as a general purpose planning representation that supports arbitrary predicates as conditions. We adapt existing heuristic search ideas for solving STRIPS planning problems, particularly delete-relaxations, to solve EAS problem instances. We then apply the EAS representation and planners to manipulation problems resulting in FFROB. FFROB iteratively discretizes task and motion planning problems using batch sampling of manipulation primitives and a multi-query roadmap structure that can be conditionalized to evaluate reachability under different placements of movable objects. This structure enables the EAS planner to efficiently compute heuristics that incorporate geometric and kinematic planning constraints to give a tight estimate of the distance to the goal. Additionally, we show FFROB is probabilistically complete and has finite expected runtime. Finally, we empirically demonstrate FFROB's effectiveness on complex and diverse task and motion planning tasks including rearrangement planning and navigation among movable objects.

Keywords
task and motion planning, manipulation planning, AI reasoning

1 Introduction

A long-standing goal in robotics is to develop robots that can operate autonomously in unstructured human environments. Recent hardware innovations have made mobile manipulator robots increasingly affordable, and sensing innovations provide unprecedented sensory bandwidth and accuracy. Progress in algorithms for navigation and motion planning has enabled some basic forms of mobile manipulation, which combine actuation of a robot's base and end-effectors to move objects in the world. However, mobile manipulation is primarily restricted to picking and placing objects on relatively uncluttered surfaces. Planning for mobile manipulation problems involving cluttered environments and multiple manipulation primitives still presents substantial challenges.

Researchers in artificial intelligence planning (Ghallab et al. 2004) have been tackling problems that require long sequences of actions and large discrete state-spaces. However, these symbolic "task-level" planners do not naturally encompass the detailed geometric and kinematic considerations that robot motion planning requires. The original Shakey and STRIPS robot system (Fikes and Nilsson 1971; Nilsson 1984), from which many of these symbolic planners evolved, managed to plan for an actual robot by working in a domain where all legal symbolic plans were effectively executable.
This required the ability to represent symbolically a sufficient set of conditions to guarantee the success of the steps in the plan. Compactly encoding success conditions using typical symbolic representations is not generally possible in realistic manipulation domains because the geometric and kinematic constraints are significant.

Consider a simple manipulation domain where a variety of objects are placed on a table and the robot's task is to collect some subset of the objects and pack them in a box. The basic robot operations are to pick up an object and place it somewhere else; in addition, the robot can move its base in order to reach a distant object. Note that, in general, to pick a distant object or place an object at a distant location, the robot will have to move other objects out of the way. Which objects need moving depends on their shapes, the shape of the robot, where the robot's base is placed and what path it follows to the object. When an object is moved, the choice of where to place it requires similar considerations.

The key observation is that constructing a valid symbolic plan requires access to a characterization of the connectivity of the underlying free configuration space (for the robot and all the movable objects). We cannot efficiently maintain a representation of this connectivity with a set of static assertions updated by symbolic actions; determining how the connectivity of the underlying free space changes requires geometric computation.

Whereas classic robot motion planning requires a search in the robot configuration space, manipulation planning requires a search in the combined configuration space of the robot and all the movable objects in the world. Achieving a manipulation goal requires choosing which object to move when, which grasps and intermediate placements to use, etc. Manipulation planning remains challenging because it is notoriously difficult to work in a high-dimensional space and make a long sequence of intertwined decisions. Existing manipulation planning algorithms (Siméon et al. 2004; Cambon et al. 2009; Hauser and Latombe 2009; Hauser and Ng-Thow-Hing 2011; Dogar and Srinivasa 2012; Barry et al. 2013) take substantial time to plan operations involving relatively few objects. Without any search guidance, these algorithms must explore a large fraction of the configuration space to find a plan. And the size of the configuration space grows exponentially in the number of moveable objects in the world. Constructing such plans generally requires some methods for partitioning the problem and for effective search guidance. Therefore, we seek to integrate the capabilities of a task planner and a manipulation planner to produce an efficient mobile manipulation planning algorithm.

Figure 1. A task and motion planning problem requiring cooking dinner. The robot must obtain two green cabbages from the shelves, clean them on the dishwasher, cook them on the microwave, and serve them. Additionally, the robot must organize the dirty cups on the table, clean them, and set the table.

1.1 Approach

The primary contribution of this paper is FFROB, an efficient and probabilistically complete algorithm for fully integrated task and motion planning. This paper is an extended and revised version of a conference paper by Garrett et al. (2014). We model task and motion planning as symbolic planning where the conditions of actions are complex predicates involving geometric and kinematic constraints. We adapt efficient existing heuristic search algorithms for solving traditional symbolic planning problems to solve task and motion planning problems. The key computational benefit of the approach is that it is able to incorporate geometric and kinematic constraints in the heuristic to strongly guide the search.

To start, we formally identify a subclass of task and motion planning problems, pick-place-move (PPM) problems, that will be our focus. We later show how FFROB can be easily extended to solve more general task and motion planning problems involving additional symbolic inferences or manipulation primitives. We introduce Extended Action Specification (EAS), a new symbolic planning representation that supports complex conditions. Although this representation is not specific to task and motion planning or even robotics problems, our primary application of it will be to PPM problems discretized using sampling. EAS is able to represent actions with complex conditions much more concisely than a traditional symbolic planning representation. Additionally, EAS allows specification of predicate evaluation functions to efficiently test conditions. In the context of PPM problems, we give a method for quickly evaluating reachability predicates using dynamic programming and collision check caching. Following this, we give our extension of relaxed planning heuristics, particularly the FastForward (FF) heuristic of Hoffmann and Nebel (2001), to the EAS planning representation.

In order to frame task and motion planning as symbolic planning in a finite domain, we repeatedly discretize the planning problem. This involves batch sampling a set of placement poses and grasp transforms to identify the pick and place actions. Then, we construct a roadmap of robot configurations to give an approximation of the robot's free configuration space. This roadmap is instrumental in enabling efficient evaluation of reachability predicates that arise when the robot seeks to move to a new configuration.

We prove completeness results for FFROB by identifying a class of non-degenerate PPM problems and proving FFROB will solve them with finite expected runtime. Finally, we perform experiments on challenging manipulation problems and explore the effect of various planner configurations on their performance. We demonstrate that FFROB can solve a broad class of feasible task and motion planning problems that involve navigating among and rearranging moveable objects.

2 Related Work

This work draws from existing approaches to manipulation planning and to task and motion planning as well as ideas from the artificial intelligence symbolic planning literature. Our focus will be on showing how ideas originally developed for symbolic planning can be adapted to continuous-action domains to more efficiently solve high-dimensional task and motion planning problems.

2.1 Manipulation Planning

In manipulation planning, the goal is not just to move the robot without collision, as in classical motion planning, but also to operate on the objects in the world. This problem was addressed from the earliest days of algorithmic motion planning, for example by Lozano-Pérez (1981), Lozano-Pérez et al. (1987), and Wilfong (1988).
The modern treatment of this problem, involving continuous grasps as well as continuous placements, was pioneered by Alami et al. (1990, 1994), who introduced the manipulation graph. This graph breaks the problem of one robot moving one object in a potentially complex environment into several problems of moving between connected components of the combined configuration space where each component shares the same grasp. A solution is an alternating sequence of transit paths, in which the robot is not grasping an object, and transfer paths, in which it is. Siméon et al. (2004) expanded this work to more realistic settings by using probabilistic roadmaps. They looked at manipulations necessitating multiple regrasps. Their approach uses the manipulation graph to identify a high-level sequence of transit and transfer paths and then performs the motion planning required to achieve them.

Stilman and Kuffner (2006) and Stilman et al. (2007) address a version of manipulation planning called navigation among movable obstacles (NAMO), where the robot must reach a specified location among a field of movable obstacles. In order to solve monotonic NAMO instances, instances requiring at most one pick and place per object, they plan backwards from the goal and use swept volumes to determine, recursively, which additional objects must be moved. Van Den Berg et al. (2009) developed a probabilistically complete algorithm for NAMO. However, this algorithm assumes that one can fully characterize the connected components of the configuration space of the robot at each planning step; this is computationally prohibitive for robotic configuration spaces with more than two dimensions.

Hauser and Latombe (2009) and Hauser and Ng-Thow-Hing (2011) identified a generalization of manipulation planning as hybrid planning, that is, planning for systems with multiple (possibly infinitely many) modes, representing different constraint sub-manifolds of the configuration space. In a robotics domain, for example, modes are characterized by a grasp on a particular object and the placements of movable objects. The key insight is that, as in the manipulation graph, one can conceptualize the planning process as alternating between moving in a single mode, where the constraints are constant (e.g., moving through free space with a grasped object), and switching between modes (e.g., grasping a new object). So, solving these problems requires being able to plan within a single mode and identifying configurations where modes can change, which is generally specific to the task. Hauser provided a probabilistically complete algorithm that solves problems of this type assuming that effective single-mode planners and mode-transition samplers are available. However, a pure uni-directional sampling-based algorithm has trouble solving high-dimensional problems.

Barry et al. (2013) defined a bidirectional rapidly-exploring random tree (RRT) search of the combined configuration space. Importantly, the individual moves of this algorithm consider complete plans to reach the goal, ignoring obstacles, ensuring that suggested motions have some chance of being on the path to the goal. They also investigated a two-layer hierarchy, where the higher level plans only for the manipulated object (without the robot), with the objective of identifying relevant mode transitions to guide the full planner. This planner was limited to domains with one movable object and had running times on the order of minutes.
Krontiris and Bekris (2015, 2016) provided an algorithm for rearrangement planning: a special instance of pick-and-place planning where all objects have explicit goal poses. Their method constructs a Probabilistic Roadmap (PRM) (Kavraki and Latombe 1998) in the combined configuration space similar to that of Barry et al. (2013). It samples arrangements of the objects and uses a greedy local planner based on the NAMO algorithm of Stilman and Kuffner (2006) to connect an existing PRM vertex with the sampled object arrangement. The use of the PRM was able to recover completeness for problems that could not be solved by just the greedy planner. However, the lack of search guidance forces the planner to explore a large number of object arrangements.

Garrett et al. (2015) introduced the Hybrid Backward-Forward (HBF) algorithm for hybrid planning problems. HBF uses a backward search to produce successors and distance estimates for states in a forward search. HBF was applied to manipulation problems involving robot primitives for picking, placing, and pushing. In contrast with FFROB's batch action sampling, HBF samples action primitives while simultaneously searching through the state-space.

King et al. investigated rearrangement planning with both object-centric motions, actions involving a particular object, and robot-centric motions, actions not involving any particular objects. Most of the presented literature involves only object-centric motions. Planning with robot-centric motions can enable complex manipulations such as multi-object pushing and whole arm manipulation. Using these primitives can result in much shorter and more natural plans than using object-centric motions alone. Like Barry et al. (2013), they generalize the RRT algorithm to plan with both of these motions. Their algorithm also lacks the strong search guidance needed to effectively plan for problems with many objects and long horizons.

2.2 Symbolic Planning

The artificial intelligence (AI) planning community has largely adopted heuristic state-space search methods for solving symbolic planning problems. These planning problems are, by and large, discrete problems that are represented using the Planning Domain Definition Language (PDDL) as proposed by McDermott et al. (1998). Bonet and Geffner (1999, 2001) popularized these state-space search methods by showing how domain-independent heuristics could be derived automatically, by manipulating the conditions and effects of the actions. The key idea is to define a relaxed version of the planning problem where the actions do not have any "delete" effects, that is, no previously achieved result becomes false. This is an easier problem to solve than the original problem, and the solution can be used to provide an approximation to the actual cost of solving a problem. They identified two heuristics, hadd and hmax, that derive their estimates from a relaxed version of the plan graph (Blum and Furst 1997). They each can be computed in polynomial time by taking the sum or max of the costs of achieving individual terms in a conjunctive goal respectively. The hmax heuristic is admissible, while hadd is inadmissible but more informed (it tends to be closer to the true cost in practice, providing more effective guidance). The FastForward (FF) planning system of Hoffmann and Nebel (2001) introduced the hff heuristic, which explicitly extracts a plan from the relaxed plan graph and uses the plan's cost as its value.
By avoiding double-counting actions that achieve several conditions, hff is generally a tighter estimate of the cost to reach the goal than hadd and hmax. Importantly, the resulting relaxed plans can also be used to suggest useful actions to consider at the current state, which can reduce the branching factor of the search.

The AI planning community has also investigated planning in hybrid domains with simple continuous dynamics. Coles et al. (2013) gave a heuristic for numerical planning that combines a mixed integer program with a relaxed plan graph to create a hybrid heuristic that is able to more strongly guide the search by using fewer approximations. This adaptation of a relaxed plan graph, albeit in a different way and for a different problem, is similar in spirit to the inclusion of geometric inferences in FFROB's relaxed plan graph.

More generally, Planning Modulo Theories (PMT) (Gregory et al. 2012) is a framework for using arbitrary first-order logical theories in planning problems. This formulation, inspired by SAT Modulo Theories (SMT), was designed to have wide expressivity and unify the representation for many existing planning types. Gregory et al. also gave a heuristic search algorithm for solving PMT problems. Its heuristic is an extension of hmax. The resulting planner is able to solve many problems that cannot be modeled with PDDL. It even outperforms some algorithms operating on PDDL problems because it plans using a more compact representation by allowing complex conditions. Our planning representation and algorithms are similar to PMT applied to problems with arbitrary propositional conditions. However, our framework allows for custom evaluation of complex conditions and supports additional heuristics that are more effective than hmax.

2.3 Task and Motion Planning

There have been a number of approaches to integrating discrete task planning and continuous motion planning in recent years. The pioneering Asymov system (Cambon et al. 2009) conducts an interleaved search at the symbolic and geometric levels. They carefully consider the consequences of using non-terminating probabilistic algorithms for the geometric planning, allocating computation time among the multiple geometric planning problems that are generated by the symbolic planner. The process can be viewed as using the task planner as a heuristic to guide the motion planning search. However, since the task-level planner is ignoring geometry, its value as a heuristic is quite limited. The work of Plaku and Hager (2010) is similar in approach.

A natural extension to the classic symbolic planning paradigm is to introduce "computed predicates" (also known as "semantic attachments"); that is, predicates whose truth value is established not via assertion but by calling an external program that operates on a geometric representation of the state (Dornhege et al. 2009, 2013). A motion planner can serve to implement such a predicate, determining the reachability of one configuration from another. A difficulty with this approach, however, is that calling a motion planner is generally expensive. This leads to a desire to minimize the set of object placements considered to limit the branching factor of the search. Considering only a sparse set of placements may limit the generality of the planner. Additionally, computed predicates are ignored during heuristic computation.
This leads to a heuristic that is uninformed about geometric considerations and may result in considerable inefficiency due to heuristic plateaus. The work of Erdem et al. (2011) is similar in approach to Dornhege et al. (2009), augmenting a task planner that is based on explicit causal reasoning with the ability to check for the existence of paths for the robot.

Lagriffoul et al. (2012, 2014) interleave the symbolic and geometric searches and focus on limiting the amount of geometric backtracking. They generate a set of approximate linear constraints imposed by the program under consideration, e.g., from grasp and placement choices, and use linear programming to compute a valid assignment or determine that one does not exist. This method is particularly successful in domains, such as stacking objects, in which constraints from many steps of the plan affect geometric choices. Although their approach is able to efficiently decide if a task-level plan is geometrically feasible, it is unable to inform the task-level search, which may result in attempting many infeasible plans.

Pandey et al. (2012) and de Silva et al. (2013) use HTNs instead of generative task planning. Their system can backtrack over choices made by the geometric module, allowing more freedom to the geometric planning than in the approach of Dornhege et al. (2009). In addition, they use a cascaded approach to computing difficult applicability conditions: they first test quick-to-evaluate approximations of accessibility predicates, so that the planning is only attempted in situations in which it might plausibly succeed.

In the HPN approach of Kaelbling and Lozano-Pérez (2011), a regression-based symbolic planner uses generators, which perform fast approximate motion planning, to select geometric parameters, such as configurations and paths, for the actions. Reasoning backward using regression allows the goal to significantly bias the actions that are considered.

Srivastava et al. (2014) offer a novel control structure that avoids computing expensive condition values in many cases by assuming a favorable default valuation of the condition elements; if those default valuations prove to be erroneous, then it is discovered in the process of performing geometric planning to instantiate the associated geometric action. In that case, symbolic planning is repeated after adding updated valuations. This approach requires the ability to diagnose why a motion plan is not possible in a given state, which can be challenging, in general.

Lozano-Pérez and Kaelbling (2014) leverage constraint satisfaction problem (CSP) solvers for task and motion planning. Their approach performs a discrete search in the space of plan skeletons and uses a CSP solver to determine if a valid set of action parameters completes the plan skeleton. Dantam et al. (2016) extend this approach by more generally formulating task and motion planning as a satisfiability modulo theories (SMT) problem. They use an incremental constraint solver to add motion constraints to the task-level logical formula when a candidate task plan is found. Upon failure, they iteratively increase the plan depth and motion planning timeouts, which results in a probabilistically complete algorithm.

Toussaint (2015) formulates task and motion planning as a logic-geometric program, a non-linear constrained optimization problem augmented with a logic and knowledge base.
He introduces three approximations that make solving the problem more tractable by sequentially optimizing the final state, transfer configurations, and motion trajectories. His experiments apply the technique to maximizing the height of a stable structure constructed from a set of objects.

All of these approaches, although they have varying degrees of integration of the symbolic and geometric planning, generally lack a true integrated search that allows the geometric details to affect the focus of the symbolic planning. FFROB develops such an integrated search, provides methods for performing it efficiently, and shows that it results in significant computational savings.

3 Problem Formulation

We start by modeling robotic planning domains that involve a single manipulator on a mobile base in an environment with moveable rigid objects. We focus on this specific domain because it is the subject of our experiments; however, the general formulation has broader applicability and can be extended to different domains involving, for instance, several manipulators or additional symbolic fluents. We call this class of problems pick-place-move (PPM) problems. We assume that the environment is fully observable and that actions have deterministic effects.

Definition 1. A PPM domain D = ⟨Q, {(P o1, Go1), ..., (P om, Gom)}⟩ is specified by a robot configuration space Q as well as a space of placement surfaces P oi and a space of grasps Goi for each of the m moveable objects oi.

P oi is the union of poses where object oi can legally be placed, such as poses supported by tops of tables or floors. Goi contains a set of grasps, which may be discrete or continuously infinite depending on the geometry of the robot and oi. We assume that Q and each P oi take into account collisions with any fixed obstacles or joint limits, so values in each space are collision-free when no moveable objects are in the environment. We will not consider stacking domains where P oi could contain surfaces on top of other objects. FFROB can be extended to solve stacking problems by sampling sets of object poses that form a structurally sound stack. This formulation encompasses pick-and-place planning, rearrangement planning, and navigating among moveable objects (NAMO). We will later show that additional symbolic values can be easily incorporated into the domain to plan for tasks like cooking meals.

In a PPM domain D, we can represent the state of the system using a set of variables V = {vr, vh, vo1, ..., vom}. Each variable va has a domain of possible values Da. A state s = {vr = q, vh = g, vo1 = po1, ..., vom = pom} is an assignment of values to the variables. These variables along with their domains and values are as follows:

• vr is the robot configuration variable. The robot configuration domain is just Dr = Q. Each configuration q ∈ Dr specifies the pose of the base as well as the joint angles of the manipulator.

• vh is the robot holding variable. The robot holding domain is Dh = Go1 ∪ ... ∪ Gom ∪ {None}. For g ∈ Dh, g = None indicates the robot is not holding anything. Otherwise, g = (o, γ) indicates the robot is holding object o with a grasp transform γ relating the robot's end-effector pose and the object pose.

• voi is the object oi pose variable for object label i ∈ (1, ..., m). The object pose domain is Doi = P oi ∪ {None}. For poi ∈ Doi, poi = None indicates that object oi is not placed. Otherwise, poi = (x, y, z, θ) is a four-dimensional pose (we assume that the object is resting on a stable face on a static horizontal surface).
We assume that the world is quasi-static and the robot can only hold a single object. When the robot is holding an object oi, the object pose can be determined using poi = q × TRANSFORM(g) where TRANSFORM(g) = γ. As such, it is redundant to explicitly update the pose of oi when it is in the hand, so we let poi = None for simplicity. A state is legal if there are no collisions among the robot, held object, and the placed objects.

Definition 2. A PPM problem Π = ⟨s0, S∗⟩ in a PPM domain D is specified by an initial state s0 and a set of goal states S∗.

The initial state s0 = {vr = q0, vh = goh0_0, vo1 = po1_0, ..., vom = pom_0} must be a legal state of the system. For simplicity, we will assume that S∗ can be represented as the conjunction of goal sets for individual variables rather than logical predicates. We make this restriction because our manipulation experiments only involve goals that can be expressed in this form, and introducing arbitrary goal predicates will complicate the theoretical analysis. Thus, S∗ = {vr ∈ Q∗, vh ∈ Goh∗, vo1 ∈ Po1∗, ..., vom ∈ Pom∗} defines a set of legal states in the Cartesian product Q∗ × Goh∗ × Po1∗ × ... × Pom∗, where Q∗ ⊆ Dr, Goh∗ ⊆ Dh, and Poi∗ ⊆ Doi for i ∈ [m]. If the goal set is left unspecified for a variable, it defaults to the full variable domain.

4 The FFRob Algorithm Overview

At the highest level of abstraction, FFROB iteratively alternates between a sampling phase and a planning phase until it is able to find a solution. The sampling phase discretizes the PPM problem by creating symbolic actions from a finite sampled set of poses, grasps, and configurations. The planning phase performs a discrete search to decide whether a solution exists. If the discrete search fails to find a solution, the process repeats with a larger set of samples.

The pseudocode for FFROB is presented in figure 2. FFROB's inputs are a PPM domain D and problem Π, and its output is a solution plan. The procedure begins by initializing a set of sampling parameters θ that govern the number of samples to produce using INITIAL-PARAMETERS. In the sampling phase, SAMPLE-DISCRETIZATION (figure 9) discretizes the PPM problem by sampling a specified number of configurations, poses, and grasps determined by θ. SAMPLE-DISCRETIZATION returns a symbolic planning representation of the goal C∗ and actions A in the current discretization of the problem. In the planning phase, SEARCH (figure 5) performs a discrete search using C∗ and A. FFROB immediately terminates if it finds a solution. Otherwise, θ is increased using INCREMENT-PARAMETERS, and this process repeats.

The majority of this paper is dedicated to the implementation of the two key subroutines: SAMPLE-DISCRETIZATION and SEARCH. Section 8 discusses the discretization created by SAMPLE-DISCRETIZATION. Sections 5, 6, and 7 are concerned with efficient search algorithms that implement SEARCH.

5 Symbolic Planning Representation

We will encode robot actions that pick up and place objects as well as move the robot in the style of traditional AI planning action descriptions such as those shown in figure 3. In these actions, q, p, and γ are continuous variables that range over robot configurations, object poses, and grasp transforms, respectively.
Because there are infinitely many values of these variables and therefore infinitely many actions, we assume we have sampled a finite set of these values during a pre-processing phase, resulting in a finite set of actions.

FFROB(D, Π):
  θ = INITIAL-PARAMETERS()
  while True:
    C∗, A = SAMPLE-DISCRETIZATION(D, Π; θ)
    plan = SEARCH(⟨s0, C∗, A⟩; ...)
    if plan ≠ None:
      return plan
    INCREMENT-PARAMETERS(θ)

Figure 2. The FFROB algorithm.

5.1 Extended Action Specification

We model discretized PPM problems using a representation that extends Simple Action Specification (SAS+) (Bäckström and Nebel 1995). SAS+ is expressively equivalent to STRIPS (Bäckström and Nebel 1995) without action parameters. The key difference is that it supports variables with discrete domains instead of only propositional domains. A generic state s in SAS+ is an assignment of values to a finite set of variables V. For PPM problems, the SAS+ variables are the same as the variables described in section 3. Thus, a discretized PPM system state is a legal SAS+ state. For more general task and motion planning problems, the state may have additional variables such as categorical variables vdi for each object that represent the cleaned or cooked status of object oi, where Ddi = {None, Cleaned, Cooked}.

SAS+ requires conditions and effects to be simple assignments of individual values to a subset of the variables.

Definition 3. A simple condition c ≡ [v = x] is a restriction that a state have value x for variable v.

Definition 4. A simple effect e ≡ v ← x is an assignment of a value x to variable v.

Definition 5. A partial state C = {c1, ..., ck} is a set of conditions. A partial state defines a set of states that satisfy its conditions. The goal of a planning problem is a partial state.

Definition 6. An action a = ⟨C, E⟩ is a pair where C is a set of simple conditions and E is a set of simple effects.

Even with finite domains for all the variables, there is a difficulty with determining when the robot can perform an action. In particular, MOVE actions have a REACHABLE condition which is true if the robot can safely move from q to q′. To concisely model and effectively plan for PPM problems, we need a more expressive representation that allows us to evaluate, for example, whether there exists a path between two robot configurations that does not collide with placed objects. We extend SAS+ by allowing conditions to be logical formulas defined on the values of the variables. We call the resulting planning representation the extended action specification (EAS). This representation is also generic; however, we will focus on its application to PPM planning.

Definition 7. A condition c ≡ f(vi1, ..., vik) is a restriction that a state has values for variables vi1, ..., vik that satisfy a predicate f.

Definition 8. A predicate f is a finite boolean combination of simple conditions.

An example predicate is f(vo1, vo2) ≡ [vo1 = p1] ∨ [vo2 = p2], which is true when o1 is currently at pose p1 or o2 is currently at pose p2. Let s(v) give the value of variable v in state s.

Definition 9. A condition c holds in a state if it evaluates to true given the values of the state's variables: HOLDS(c, s) ≡ f(s(vi1), ..., s(vik)).

To concisely represent conditions sharing a common template form, we use parameterized conditions, functions from a set of parameters to a condition.
The following parameterized conditions are relevant in discretized PPM problems. We use ∀ and ∃ only to compactly denote conjunctions and disjunctions over elements of our finite domains.

The parameterized condition INREG(oi, R) has parameters composed of an object oi and a region R ⊆ P oi in its pose space. INREG(oi, R) is true if s(voi) ∈ R, i.e. the current placement of oi is contained within R. However, to express INREG(oi, R) as a predicate, we evaluate it in the following way.

HOLDS(INREG(oi, R), s) ≡ ∃p ∈ Doi ∩ R. [s(voi) = p].

The parameterized condition REACHABLE(q, q′, (V, E)) has parameters composed of an initial robot configuration q, a final robot configuration q′, and a discretized roadmap of robot movements (V, E). REACHABLE(q, q′, (V, E)) is true if there is a collision-free path in (V, E) between q and q′, considering the positions of all fixed and movable objects as well as the object the robot might be holding and the grasp in which it is held.

HOLDS(REACHABLE(q, q′, (V, E)), s) ≡ ∃(e1, ..., ek) ∈ PATHS(q, q′; (V, E)). ∀e ∈ (e1, ..., ek).
  ([s(vh) = None] ∨ ¬ROBOT-GRASP-C(e.τ, s(vh)))
  ∧ ∀i ∈ [m]. ([s(voi) = None] ∨ (¬ROBOT-OBJ-C(e.τ, (oi, s(voi))) ∧ ([s(vh) = None] ∨ ¬GRASP-OBJ-C(e.τ, s(vh), (oi, s(voi)))))).

Each path (e1, ..., ek) on (V, E) is composed of edges e, which will each have their own trajectory e.τ for moving between the incoming and outgoing vertices. Let the predicate ROBOT-GRASP-C be true if the robot collides with the object it is holding, vh, as it moves along configuration trajectory e.τ. Similarly, let ROBOT-OBJ-C be true if oi at pose voi collides with the robot along e.τ, and GRASP-OBJ-C be true if oi at pose voi collides with the grasped object vh along e.τ. We assume that the roadmap (V, E) is free of self-collisions or collisions with fixed obstacles, as checked during its discretization. Although REACHABLE is rather complicated, it still is a boolean combination of simple conditions. In section 5.3, we provide a way to avoid constructing this predicate by instead directly evaluating it using an external procedure.

Definition 10. An action a is applicable in a state s if all of a's conditions hold in s: APPLICABLE(a, s) ≡ ∀c ∈ a.C. HOLDS(c, s).

Definition 11. For s, a such that APPLICABLE(a, s), a can be applied to s to produce a successor state:

APPLY(a, s) =
  v = x     if (v ← x) ∈ a.E
  v = s(v)  otherwise

It is often more compact to represent actions in parameterized form as action schemas. An action schema is an action with typed parameters, standing for the set of actions arising from all instantiations of the parameters over the appropriate type domains. The PICK and PLACE action schemas in figure 3 have parameters composed of a pose p, object oi, grasp γ, and robot configuration q. The MOVE action schema has parameters composed of two configurations q, q′ and a roadmap (V, E).

PICK(p, (oi, γ), q):
  pre: [vr = q], [vh = None], [voi = p]
  eff: vh ← (oi, γ), voi ← None

PLACE(p, (oi, γ), q):
  pre: [vr = q], [vh = (oi, γ)], [voi = None]
  eff: vh ← None, voi ← p

MOVE(q, q′, (V, E)):
  pre: [vr = q], REACHABLE(q, q′, (V, E))
  eff: vr ← q′

Figure 3. Pick, place, and move action schemas.

Although we focus on PPM problems using these actions, we could easily define other action schemas to solve more general task and motion planning problems. For example, the CLEAN and COOK action schemas in figure 4 are useful for modeling a cooking task. The constants Roi_clean ⊆ P oi and Roi_cook ⊆ P oi are sets of poses where oi can be cleaned and cooked respectively.

CLEAN(oi):
  pre: [vdi = None], INREG(oi, Roi_clean)
  eff: vdi ← Cleaned

COOK(oi):
  pre: [vdi = Cleaned], INREG(oi, Roi_cook)
  eff: vdi ← Cooked

Figure 4. Additional clean and cook action schemas.
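To make the EAS machinery concrete, the following is a minimal Python sketch of ground actions, applicability (definition 10), and application (definition 11). All class and variable names here are ours, chosen for illustration; this is a sketch, not the paper's implementation.

# Minimal sketch of EAS-style ground actions over dict states.
class Action:
    def __init__(self, name, conditions, effects):
        self.name = name
        self.conditions = conditions  # list of predicates: state -> bool
        self.effects = effects        # dict: variable -> new value

def applicable(action, state):
    # Definition 10: every condition must hold in the state.
    return all(cond(state) for cond in action.conditions)

def apply_action(action, state):
    # Definition 11: effects overwrite values; other variables persist.
    successor = dict(state)
    successor.update(action.effects)
    return successor

# A PPM-like state: robot configuration, held object, one object pose.
state = {'vr': 'q0', 'vh': None, 'vo1': 'p0'}

# PICK(p0, (o1, gamma1), q0) as a ground action. A complex condition
# such as REACHABLE would be another callable that queries a roadmap.
pick_o1 = Action('pick-o1',
                 conditions=[lambda s: s['vr'] == 'q0',
                             lambda s: s['vh'] is None,
                             lambda s: s['vo1'] == 'p0'],
                 effects={'vh': ('o1', 'gamma1'), 'vo1': None})

if applicable(pick_o1, state):
    state = apply_action(pick_o1, state)
print(state)  # {'vr': 'q0', 'vh': ('o1', 'gamma1'), 'vo1': None}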
Definition 12. An EAS planning problem ⟨s0, C∗, A⟩ is specified by an initial state s0, goal partial state C∗, and a set of actions A.

Definition 13. A finite sequence of actions (a1, a2, ..., an) ∈ A × A × ... is a solution to a planning problem if and only if the corresponding sequence of states (s0, s1, ..., sn), starting from s0 and recursively constructed using si = APPLY(ai, si−1), satisfies ∀i ∈ [n]. APPLICABLE(ai, si−1) and ∀c ∈ C∗. HOLDS(c, sn).

5.2 Relaxed Evaluation

In section 7.1, it will be algorithmically advantageous to evaluate conditions in the context of relaxed planning. Relaxed planning is an approximation of standard symbolic planning which ignores delete effects (Bonet and Geffner 2001). Central to relaxed SAS+ planning is the notion of a relaxed state.

Definition 14. A relaxed state s+ = {v1 = X1, ..., vn = Xn} is a generalized state in which each vi can simultaneously take on all values in a set Xi ⊆ Di where Xi ≠ ∅.

A relaxed state represents a set of states formed by all combinations of the relaxed state's values. Specifically, relaxed states can take on simultaneous values because an action in a relaxed planning problem never removes a value from the relaxed state. Thus, instead of replacing the values of variables, an action's effects add the new variable values to the relaxed state. Relaxed states are equivalent to discrete, abstracted states from Planning Modulo Theories (Gregory et al. 2012). Every state is a relaxed state; however, the converse is not true.

A condition c holds in a relaxed state s+ if there exists an assignment of values in Xi to each variable vi such that the condition evaluates to true. Let T(c, s+) be true if such an assignment exists; then we define: HOLDS+(c, s+) ≡ T(c, s+). Similarly, let F(c, s+) be true if there exists an assignment of values in Xi to each variable vi such that c evaluates to false. Because the domain of each variable is finite, any condition can be expressed as a Boolean combination of atomic variable assignments [v = x]. This allows us to define T(c, s+) by using recursion on its structure. Because it is possible for both T(c, s+) and F(c, s+) to hold simultaneously, we provide a recursive definition for F(c, s+) as well. T and F are related in their respective recursions by negation.

T(c, s+) =
  T(c1, s+) ∧ T(c2, s+)  if c ≡ c1 ∧ c2
  T(c1, s+) ∨ T(c2, s+)  if c ≡ c1 ∨ c2
  F(c′, s+)              if c ≡ ¬c′
  xi ∈ s+(vi)            if c ≡ [vi = xi]

F(c, s+) =
  F(c1, s+) ∨ F(c2, s+)  if c ≡ c1 ∧ c2
  F(c1, s+) ∧ F(c2, s+)  if c ≡ c1 ∨ c2
  T(c′, s+)              if c ≡ ¬c′
  s+(vi) ≠ {xi}          if c ≡ [vi = xi]

The difference between the relaxed HOLDS+ and the standard HOLDS is that at the atomic level, the relaxed HOLDS+ can choose between several values of each Xi to make c true, while the standard HOLDS only has a single value of each xi. As a consequence, when a relaxed state s+ is also a standard state, HOLDS+(c, s) = HOLDS(c, s).
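As a rough illustration, the T/F recursion can be rendered in a few lines of Python, with formulas encoded as nested tuples and relaxed states as dictionaries mapping variables to sets of values (an encoding chosen here for brevity, not taken from the paper):

# Sketch of relaxed condition evaluation (the T/F recursion).
# Formulas: ('and', c1, c2), ('or', c1, c2), ('not', c), ('eq', var, val).

def T(c, s_plus):
    tag = c[0]
    if tag == 'and': return T(c[1], s_plus) and T(c[2], s_plus)
    if tag == 'or':  return T(c[1], s_plus) or T(c[2], s_plus)
    if tag == 'not': return F(c[1], s_plus)
    _, var, val = c                      # atomic case: [var = val]
    return val in s_plus[var]            # some assignment makes c true

def F(c, s_plus):
    tag = c[0]
    if tag == 'and': return F(c[1], s_plus) or F(c[2], s_plus)
    if tag == 'or':  return F(c[1], s_plus) and F(c[2], s_plus)
    if tag == 'not': return T(c[1], s_plus)
    _, var, val = c
    return s_plus[var] != {val}          # some assignment makes c false

# A relaxed state where vo1 has accumulated two poses simultaneously.
s_plus = {'vo1': {'p0', 'p1'}, 'vh': {None}}
cond = ('or', ('eq', 'vo1', 'p1'), ('eq', 'vh', 'g1'))
print(T(cond, s_plus), F(cond, s_plus))  # True True

The final line prints True True, showing that T(c, s+) and F(c, s+) can indeed hold simultaneously in a relaxed state.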
This relaxed state condition evaluation can be seen as implementing the "satisfies" interface in Planning Modulo Theories (Gregory et al. 2012) for arbitrary logical theories over discrete variables.

It will be useful for the planning heuristics in section 7.1 to identify a variable assignment in the relaxed state that achieves the condition. When HOLDS+(c, s+) is true, let ACHIEVERS+(c, s+) be a similar recursive function that uses bookkeeping to identify a variable assignment that makes the condition true. ACHIEVERS+(c, s+) = TA(c, s+). Let TA(c, s+) return a set of variable values in s+ that allow c to be true; and let FA(c, s+) be the analogous set of variable values that allow c to be false:

TA(c, s+) =
  TA(c1, s+) ∪ TA(c2, s+)        if c ≡ c1 ∧ c2
  ANY(TA(c1, s+), TA(c2, s+))    if c ≡ c1 ∨ c2
  FA(c′, s+)                     if c ≡ ¬c′
  {vi ← xi}                      if c ≡ [vi = xi]

FA(c, s+) =
  ANY(FA(c1, s+), FA(c2, s+))    if c ≡ c1 ∧ c2
  FA(c1, s+) ∪ FA(c2, s+)        if c ≡ c1 ∨ c2
  TA(c′, s+)                     if c ≡ ¬c′
  {vi ← ANY(s+(vi) \ {xi})}      if c ≡ [vi = xi]

The procedure ANY(x) arbitrarily selects an element from set x. Although there may be many combinations that satisfy the predicate because of ANY, our algorithms just require that a single, arbitrary satisficing assignment be returned. However, the strength of these heuristics may vary depending on the assignment.

5.3 Condition Tests

Some conditions are computationally expensive to evaluate naively using HOLDS. Consider the REACHABLE condition. Its truth value is affected by all the state variables, because the connected components of the configuration space change as the grasp and object poses change, thus affecting reachability. Holding an object changes the "shape" of the robot and therefore what configurations it may move between. Even more significantly, the poses of all the placed objects define additional obstacles that the robot must not collide with. For a roadmap discretization of the configuration space (V, E), the REACHABLE condition involves quantification over the possibly exponential number of simple paths in the discretized configuration space between q and q′. Additionally, some conditions share a significant amount of logical structure. Consider the set of REACHABLE conditions that share the same start configuration q.

Thus, we allow conditions to optionally be evaluated by a test, a procedure TEST({c1, ..., cn}, s) which can simultaneously evaluate the predicates of conditions {c1, ..., cn}. For the REACHABLE condition, we specify a test that uses dynamic programming to test whether a collision-free path exists given the current state. This test evaluates all REACHABLE conditions with the same start configuration q at once by considering paths from q. If no TEST procedure is specified, the planner defaults to the previously described methods for testing if the condition holds in standard and relaxed states.

In contrast with the semantic attachments strategy of Dornhege et al. (2009), we additionally require that tests support evaluating whether relaxed states satisfy the conditions. This additional requirement allows the planner to include conditions evaluated by tests in heuristics that strongly guide the search. Once again, a test for relaxed states is also correct for standard states, so it is sufficient to just implement TEST+({c1, ..., cn}; s+). This procedure must also replace the function of ACHIEVERS+(c, s+). Thus, in order to evaluate several conditions at once, TEST returns the subset of the conditions that are true, paired with their achievers.
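The core of such a test can be sketched as a single traversal from q that respects the currently blocked edges. The following toy Python version returns every configuration reachable from q, answering all REACHABLE(q, q′) queries in one pass; the paper's figure 18 additionally performs lazy collision checking, achiever bookkeeping, and memoization, none of which is shown here.

# Sketch of a batched reachability test: one search from q answers
# every REACHABLE(q, q') condition sharing the start configuration q.
from collections import deque

def test_reachable(q, roadmap, blocked):
    # roadmap: dict mapping a vertex to its neighbor vertices.
    # blocked: set of directed edges (u, v) whose trajectories collide
    # given the current object placements and grasp.
    reached = {q}
    frontier = deque([q])
    while frontier:
        u = frontier.popleft()
        for v in roadmap[u]:
            if v not in reached and (u, v) not in blocked:
                reached.add(v)
                frontier.append(v)
    return reached

roadmap = {'q0': ['q1'], 'q1': ['q0', 'q2'],
           'q2': ['q1', 'q3'], 'q3': ['q2']}
blocked = {('q2', 'q3'), ('q3', 'q2')}  # e.g., a placed object in the way
reachable = test_reachable('q0', roadmap, blocked)
print('q2' in reachable, 'q3' in reachable)  # True False

In the real implementation, membership in blocked would be computed lazily by collision checking edge trajectories and cached across queries.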
An example implementation of TEST+ that simply uses the default HOLDS+ and ACHIEVERS+ is the following:

TEST+({c1, ..., cn}; s+) ≡ {⟨ci, ACHIEVERS+(ci, s+)⟩ | ci ∈ {c1, ..., cn}. HOLDS+(ci; s+)}

In the case of REACHABLE, the procedure TEST-REACHABLE in figure 18 uses dynamic programming as well as lazy collision checking to simultaneously evaluate all REACHABLE predicates from the same start configuration q. For each edge, a set of achievers that enable the edge to be traversed without collision is stored. Then, the full set of achievers for each end configuration q′ is computed by tracing back a path and taking the union of the edge achievers on the path. As a result, TEST-REACHABLE+ is much more efficient than quantifying over the exponential number of paths in the roadmap (V, E) for each end configuration q′. In practice, our TEST-REACHABLE implementation additionally memoizes the last reachable subgraph in order to avoid repeating computation for sequential evaluations in the same relaxed planning problem.

SEARCH(⟨s0, C∗, A⟩; EXTRACT, PROCESS; H, ACTIONS):
  Q = QUEUE(STATEN(s0, 0, H(s0), None))
  while not EMPTY(Q):
    n = EXTRACT(Q)
    if ∀c ∈ C∗. HOLDS(c, n.s):
      return RETRACE-PLAN(n)
    for a ∈ ACTIONS(n.s, A):
      if APPLICABLE(a, n.s):
        s′ = APPLY(a, n.s)
        PROCESS(Q, s′, n; H)
  return None

Figure 5. The primary search control procedure.

6 Search Algorithms

We now review existing search algorithms that can be directly applied to EAS planning problems with no modification. The generic heuristic search procedure SEARCH is in figure 5. The SEARCH procedure has as arguments an EAS planning problem ⟨s0, C∗, A⟩, EXTRACT and PROCESS procedures which alter the search control, and H heuristic and ACTIONS successor procedures which give a heuristic cost and generate the action successors respectively. Depending on the behavior of the EXTRACT and PROCESS procedures, SEARCH can implement many standard search control structures, including depth-first, breadth-first, uniform cost, A∗, greedy best-first, and hill-climbing searches. Critical to many of these strategies is a heuristic function, which maps a state s in the search to an estimate of the cost to reach a goal state from s. We will assume that each of our actions has unit cost; however, these procedures can be easily altered when costs can be any nonnegative number. Appendix A describes the standard best-first and deferred best-first control structures used to implement EXTRACT and PROCESS.
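For concreteness, here is a minimal Python sketch of one common instantiation of this control structure, a greedy best-first search in which EXTRACT pops the state with the lowest heuristic value and PROCESS scores and pushes successors (an illustrative reduction of figure 5, with names of our choosing):

# Sketch of SEARCH specialized to greedy best-first search.
import heapq

def best_first_search(s0, goal_test, actions, applicable, apply_action, H):
    queue = [(H(s0), 0, s0, None)]  # (priority, tiebreak, state, parent)
    counter = 1
    visited = set()
    while queue:
        _, _, state, node = heapq.heappop(queue)        # EXTRACT
        key = tuple(sorted(state.items()))
        if key in visited:
            continue
        visited.add(key)
        node = (state, node)
        if goal_test(state):
            return retrace_plan(node)                   # RETRACE-PLAN
        for a in actions(state):
            if applicable(a, state):
                s2 = apply_action(a, state)             # PROCESS
                heapq.heappush(queue, (H(s2), counter, s2, node))
                counter += 1
    return None

def retrace_plan(node):
    # Follow parent pointers back to s0 and reverse.
    states = []
    while node is not None:
        state, node = node
        states.append(state)
    return list(reversed(states))

This skeleton composes directly with the Action, applicable, and apply_action sketches given earlier; swapping the priority for a depth or cost counter recovers depth-first or uniform-cost behavior.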
7.1 Relaxed Planning Many modern domain-independent heuristics are based on solving approximate planning problems and using their solution cost as an estimate of the cost to the goal (Hoffmann and Nebel 2001; Helmert 2006). Although this requires more computation per search node than UNSATISFIED-GOALS, it pays off in the search because the improved estimates significantly reduce the number of states explored. One influential planning approximation is the delete-relaxation Hoffmann and Nebel (2001). As introduced in section 5.2, in relaxed planning problems, actions add each effect value to the state without removing the previous value. This leads to relaxed states in which a variable can take on multiple values simultaneously. A solution to a relaxed planning problem is a relaxed plan, an applicable sequence of actions that results in a relaxed state that satisfies the goal conditions. Relaxed planning problems can be solved in polynomial time, making this approximation attractive. However, the problem of producing a minimum length relaxed plan is NP-hard Hoffmann and Nebel (2001). Prepared using sagej.cls Garrett et al. 11 a∈A a.C s+ = {v : ∅ | v ∈ V} effs, conds, acts = { }, { }, { } COMPUTE-COSTS(s, C∗, A; COMB) : 1 C = C∗ ∪ (cid:83) 2 3 4 Q = QUEUE({EFFN(v ← s(v), 0, None) | ∀v ∈ V}) 5 while not EMPTY(Q): 6 7 8 9 10 11 12 13 14 n = POP-MIN(Q, n (cid:55)→ n.cost) if n.e ∈ effs: continue effs[n.e] = n s+(n.e.v) = s+(n.e.v) ∪ {n.e.x} for c ∈ C if HOLDS+(c, s+): conds[c] = CONDN(c, ACHIEVERS(c, s+)) C = C \ {c} acts[C∗] = GOALN(C∗, COMB(C∗; effs, if C∗ ⊆ conds: conds)) 15 16 17 18 19 20 21 return None return effs, conds, acts for a ∈ A if a.C ⊆ conds: acts[a] = ACTN(a, COMB(a.C; effs, conds)) A = A \ {a} for e ∈ a.E if e /∈ effs: PUSH(Q, EFFN(e, acts[a].cost + 1, a)) Figure 6. Method for computing relaxed planning costs. Despite this, several relaxed planning heuristics have been shown to give effective estimates of the distance to the goal. We will mention two of them and finally focus on the FF heuristic in particular. Each of these heuristics can be expressed using a common subroutine, COMPUTE-COSTS, which is shown in figure 6. The delete-relaxation allows relaxed planning to be understood as a search in the space of effects rather than the space of full states. This is similar to a search on a hyper-graph where vertices are effects and hyper-edges are actions. At heart, COMPUTE-COSTS is a version of Dijkstra’s algorithm generalized for these hyper-graphs. But propagating shortest path costs in a hyper-graph differs from the traditional Dijkstra’s algorithm because each hyper-edge may require reaching several vertices at a time. And there are several methods to combine the costs of reaching these vertices to produce a single cost for reaching the hyper-edge. Thus, COMPUTE-COSTS requires the meta-parameter COMB which specifies the method for combining these costs. Specifically, COMB(C; eff , con) combines the costs of satisfying a set of conditions C given the currently satisfiable effects and conditions in effs and conds. Therefore, costs are not defined as the length of the shortest plan to reach a vertex. However, they are still a measure of the relaxed plan difficulty of reaching an Prepared using sagej.cls effect, condition, or action. The choice of COMB will tailor COMPUTE-COSTS for each heuristic. Figure 6 gives the pseudocode for COMPUTE-COSTS. The inputs are a state s, a goal partial state C∗, and a set of action instances A. 
The outputs are the effs, conds, and acts maps which compose a search tree within the hyper- graph by recording cost and back pointer information. The maps are composed of effect nodes, condition nodes, and action nodes respectively. Effect nodes EFFN store an effect comprised of a variable and value, the cost at which it was first produced, and the action which achieves them. Condition nodes CONDN store a condition and a set of effects which satisfy the condition. Action nodes ACTN store an action and the cost at which the action’s conditions were first satisfied. The COMPUTE-COSTS procedure starts by computing the set of unachieved conditions C, initializing the relaxed state s+, and initializing effs, conds, and acts. It maintains a priority queue Q of effects starting with each effect v ← s(v) in the current state s. On each iteration, it pops an EFFN node n from the priority queue based on its cost n.cost and adds n to effs and the effect n.e = (v ← x) to s+ if n has not already been reached. Each unachieved condition c ∈ C (cid:48) then is tested to see if it can now be satisfied by the addition of n.e. Although not displayed in the pseudocode, this is where TEST procedures, which compute the truth value of several conditions at once, are also evaluated. If a condition is not satisfied, a set of effects that satisfy c is returned by ACHIEVERS and recorded. If each goal condition is contained in conds, the search terminates because each goal is reachable. Here, the goal is recorded as a goal node GOALN which is an ACTN with no effects. A heuristic value can then be obtained using acts[C∗].cost. If the goal is not reached, we process each newly reachable and unused action a by computing the cost of achieving a’s conditions using COMB. Finally, we independently push each of a’s unprocessed effects to the queue along with a back pointer to a. This differs from existing methods for relaxed planning in SAS+ problems because conditions are allowed to be complex Boolean formulas in EAS. There may be many assignments of variables, and therefore combinations of effects, that satisfy a condition. Thus, COMPUTE-COSTS explicitly searches over all the unsatisfied conditions upon reaching a new effect to determine whether any condition is now achievable. If so, it records a satisfying assignment using ACHIEVERS as described in section 5.2. We now describe the intuition behind COMPUTE- COSTS in discretized PPM problems. As COMPUTE-COSTS the application of each PICK, PLACE, and progresses, MOVE action can be thought of creating a copy of the manipulated object or robot at the new grasp, pose, or configuration respectively. This is because the manipulated 12 Journal Title XX(X) i.e. interfere with each other; object and robot can be at many poses, grasps, and configurations simultaneously in a relaxed state. Moreover, these copies do not the robot can select which of the currently available values of its configuration, its grasp, and the object poses will allow for it to perform an action. Actions that have the same cost can be viewed as being performable in parallel. Thus, the robot simultaneously tries all actions that can be performed after a relaxed plan of length 0, then a relaxed plan of length 1, and so on. In terms of reachability, it removes geometric constraints by removing an object from the universe when it is first picked up and never putting it back, and by assuming the hand remains empty (if it was not already) after the first PLACE action. 
Thus, the set of satisfied REACHABLE(q, q(cid:48), (V, E)) conditions becomes increasingly larger as the procedure progresses. Figure 7 shows the progression of objects that have greater than one pose in the relaxed state. As more objects can be picked and essentially removed from consideration, more and more reachable conditions become true. Finally, although COMPUTE-COSTS runs in polynomial time in the size of the EAS, collision checks typically dominate the runtime in PPM problems. Even if a heuristic significantly reduces the size of the main search, it might result in a net increase in computation time if it is itself too slow to compute. Because we cache the results of collision checks, in practice, executing COMPUTE-COSTS is quite fast and the heuristic functions it enables substantially reduce the number of states explored and, indeed, the total computation time. 7.2 The HSP Heuristics The first two heuristics we can obtain are hadd and hmax which are computed by using ADD-COMB and MAX-COMB for COMB respectively. As a reminder, C is a partial state and eff , con are maps of effect and condition nodes. The intention of COMB is to score the difficultly of achieving C using the costs already provided in eff and con. The hadd heuristic Bonet and Geffner (2001) returns the sum of the costs for the effects that satisfy each condition. ADD-COMB(C; eff , con) = (cid:88) (cid:88) c∈C e∈con[c].E eff [e].cost This heuristic is optimistic, in the sense that if the delete effects were taken into account, it might take more steps to achieve each individual goal from the starting state; it is also pessimistic, in the sense that the actions necessary to achieve multiple goal conditions might be “shared.” An admissible heuristic, hmax Bonet and Geffner (2001), is obtained by taking the maximum of the costs of the goal literals, rather than the sum. But hmax is found in practice to offer weaker guidance. Prepared using sagej.cls MAX-COMB(C; eff , con) = max c∈C max e∈con[c].E eff [e].cost 7.3 The FF Heuristic The FF heuristic hff Hoffmann and Nebel (2001) derives its heuristic cost from the length of a relaxed plan. The relaxed plan is computed by first calculating either hmax or hadd using COMPUTE-COSTS to produce an ordering of effects and actions as represented by effs, conds, acts. Then, EXTRACT-RELAXED-PLAN performs a backwards pass to identify a relaxed plan from effs, conds, acts. In the event where hmax is used to produce the ordering, effs, conds, acts represent a relaxed plan graph (RPG). An RPG is a sequence of alternating layers of effects and actions. The first layer consists of all effects that are true in the starting state. Action layer i contains all actions whose conditions are present and simultaneously achievable in effect layer i. Effect layer i + 1 contains all effects that are possibly achievable after i actions. The depth of an effect or action in the RPG is equal to its hmax cost (assuming unit action costs). In the event where hadd is used to produce the ordering, the resulting structure is not semantically an RPG; however, the resulting structure functions akin to an RPG and can even lead to tighter heuristic estimates. FF performs an efficient backward-chaining pass in the RPG to determine which actions, if they could be performed in parallel without delete effects, would be necessary to achieve the goal conditions. An important advantage of the FF heuristic is that it does not over-count actions if one action achieves multiple effects. 
The EXTRACT-RELAXED-PLAN procedure in figure 8 extracts a relaxed plan from the RPG. The plan extraction procedure greedily processes each layer backwards from i = (n, ..., 1), starting with the set of effects that achieve the goal conditions C∗. For each effect e ∈ goals[i] identified as a goal on the ith layer of the RPG, it seeks the “cheapest” action a∗ that can achieve it using EASIEST. The minimizing a∗ is added to the relaxed plan plan, e(cid:48) and any other effects achieved by a on the current layer are removed from goals, and the conditions a∗.C are processed and their satisfying effects are added to goals. This process continues until each layer is processed. Once finished, the FF heuristic returns the number of actions in plan. Our formulation of EXTRACT-RELAXED-PLAN is very to the original FF EXTRACT-RELAXED-PLAN similar algorithm. The modification to support EAS planning problems is simply performed, given that the effect achievers have been computed for each condition, by replacing conditions with the effects that satisfy them. Our metric for choosing the cheapest action is different from the original formulation, though. The original easiest- action metric uses the hadd cost of each action which Garrett et al. 13 Figure 7. Visualization of compute-costs for a PPM problem requiring picking up the blue block. Each object o is made transparent at the level when None ∈ s+(o). RPG levels 0, 1, and 2 are in the top row. RPG levels 3 and 4 are in the bottom row. EASIEST(C, goals; eff , con) = (cid:88) (cid:88) c∈C e∈con[c].E e /∈goals eff [e].cost goals = {i : ∅ | ∀i ∈ [n]} for c ∈ C∗ for e ∈ conds[c].E: goals[effs[e].cost] ∪ = {e} EXTRACT-RELAXED-PLAN(s, C∗, effs, conds, acts) : 1 n = acts[(cid:104)C∗, ∅(cid:105)].cost 2 3 4 5 plan = ∅ 6 7 8 Ae = {a ∈ acts | e ∈ a.E for i ∈ (n, ..., 1): for e ∈ goals[i] : 9 a = argmina∈Ae EASIEST(a.C, goals; eff , acts[a].cost = effs[e].cost − 1} conds) 10 11 12 13 14 15 return plan plan ∪ = {a∗} for c ∈ a∗.C for e(cid:48) ∈ conds[c].E: goals[effs[e(cid:48)].cost] ∪ = {e(cid:48)} for e(cid:48) ∈ a∗.E: goals[effs[e(cid:48)].cost] \ = {e(cid:48)} Figure 8. Method for extracting a relaxed plan. is separately computed before EXTRACT-RELAXED-PLAN. Our EASIEST procedure uses the original cost and additionally discounts the cost of goals that have already Prepared using sagej.cls been identified by addition to goals. The intuition for this change is that actions that do not add many new goals are greedily good choices. As our results in section 11 show, hff has the best performance in our PPM experiments. Our intuition behind why hff performs well for PPM problems comes from its backwards pass. There are often several ways to achieve a PPM goal. For example, consider the set of grasps and approach trajectories to pick up an object. Many of these approaches involve paths that require moving a different set of objects. And many of these paths may be performable on the same layer of the RPG. This is typically the case when an object is surrounded by an approximately even distribution of objects on several of its sides. The backwards pass, particularly through its greedy discounting, will frequently select the approach that will require moving the fewest of additional objects given the choices it has already made. Thus, the resulting relaxed plan usually reuses goals and gives a better estimate of the optimal cost to the goal. 7.4 Helpful Actions We now describe the implementation of the ACTIONS procedure used by the heuristic search algorithm in figure 5. 
The simplest implementation is just to return the full set of actions A. However, we allow the optional specification of ACTIONS to be a function that can more efficiently return the set of applicable actions. For PPM problems, we compute the set of reachable configurations all at the 14 Journal Title XX(X) same time using a procedure similar to TEST-REACHABLE in order to determine the applicable move actions. Additionally, because hff computes a relaxed plan, we can use the resulting plans to prune and order a set of helpful actions for a given state. Helpful action pruning strategies reduce the choice of the next action to those that were identified to be achieve goals on the relaxed plan via the heuristic computation. Additionally, they can order the application of these actions such that actions deemed more helpful are attempted first. Both of these methods can reduce the effective branching factor and the number of future heuristic computations needed in the search. But these pruning methods sacrifice completeness at the expense of strongly improved search performance. Completeness can be recovered though by using multiple priority queues as introduced by Helmert (2006). We use a version of the helpful actions strategy that returns the first actions followed by the first goal achievers. The first actions are actions on a relaxed plan that are immediately performable. The first goal achievers are actions that have an effect on the first layer of a relaxed plan. Thus, the first actions are a subset of the first goal achievers. 8 Discretization In order to use the EAS representation and planners, we still have to discretize a PPM problem in a pre-processing phase by sampling a finite number of actions from each action schema. Definition 15. A discretized PPM domain D = (cid:104)(V, E), {(P o1, Go1 )..., (P om, Gom)}(cid:105) specified by a robot configuration roadmap (V ⊆ Q, E ⊆ V × V ) as well as a finite sets of placements P oi ⊆ P oi and grasps Go1 ⊆ Goi for each of the m moveable objects oi. is In figure 9, the procedure SAMPLE-DISCRETIZATION both samples a discretized PPM problem and produces an EAS specification of the goal and actions for the discretized problem. First, the goal set of states S∗ is converted into a set of predicates C∗ using CONVERT-GOAL. Sets of PICK and PLACE actions are sampled using S-PICK-PLACE according to the current parameter vector θ. A set of MOVE actions is constructed using S-MOVE by creating a roadmap that connects the start configuration q0, sampled goal configurations, Q∗, and PICK and PLACE configurations. Finally, SAMPLE-DISCRETIZATION returns C∗ as well as the combined set of actions A. The rest of this section gives the details of S-PICK- PLACE and S-MOVE as well as the procedure to evaluate complex REACHABLE predicates. We start by sampling the PICK and PLACE action schemas. Recall that PICK and PLACE have a condition that the robot be at a specific configuration in order to execute the pick or place. These configurations will serve as target configurations when Prepared using sagej.cls SAMPLE-DISCRETIZATION(D, Π; θ) : 1 C∗ = CONVERT-GOAL(S∗) 2 Apick, Aplace= S-PICK-PLACE(D, Π; θ) 3 Amove = S-MOVE(Q, q0, Q∗, Apick, Aplace; θ) 4 A = Apick ∪ Aplace ∪ Amove 5 return C∗, A Figure 9. Procedure for sampling a discretized PPM problem and producing an EAS problem specification. sampling the roadmap (V, E). MOVE actions are created from pairs of these target configurations. 
Sampling PICK and PLACE actions has a deep connection to sampling the modes of the system (Hauser and Latombe 2009; Hauser and Ng-Thow-Hing 2011). Specifically, the collection of poses and grasps will define the set of transit and transfer modes reachable from s0. The MOVE actions then represent discretized motion plans within a mode. We will revisit this idea later in the theoretical analysis of FFROB in section 10.

The number of samples chosen for each sampling procedure depends on a parameter vector θ. As previously stated, θ will be iteratively increased in the event that not enough samples were chosen to find a solution. Additionally, to test whether this sampled set of poses, grasps, and configurations could possibly contain a plan, we can compute the heuristic value of the starting state s0 using the actions derived from the samples, as described in section 7.3. If it is infinite, meaning that the goal is unreachable even under extremely optimistic assumptions, then we return to this procedure and draw a new set of samples. Note that although a finite heuristic value is necessary for a plan to exist, it is not sufficient. Thus, the search algorithm may report that the set of samples is not sufficient to solve the problem even though s0 had a finite heuristic cost. However, in our experiments, this finite heuristic test can save a significant amount of time because computing a heuristic cost is much less expensive than solving the EAS planning problem.

Finally, evaluating REACHABLE still requires performing many expensive collision checks in the context of many different robot grasps and poses of the objects. We address this problem by using a shared roadmap data structure called a conditional reachability graph (CRG). The CRG is a graph (V, E), related to a PRM (Kavraki et al. 1996), that answers reachability queries, conditioned on the poses of the objects and the robot's grasp, by lazily computing answers on demand and caching the results to speed up future queries.

Figure 10. Sampled placements for the blue and green blocks.

Figure 11. The inverse reachability database transformed relative to the grasp transform.

8.1 Pick and Place Actions

Figure 12 gives the procedure for sampling action instances from the PICK and PLACE action schemas. For each object, this procedure first produces poses and grasps useful for the problem by using the initial poses and grasps, sampling goal poses and grasps (if available), and uniformly sampling each space. For our implementation of S-POSES and S-GRASPS, we sample values using uniform rejection sampling (rejecting, for example, poses that collide with fixed obstacles); however, this can be done in other ways, such as by choosing evenly spaced samples. Figure 10 shows sampled placements in red for the blue and green blocks. Additional placements are sampled for the blue and green goal regions.

Once a pose p and a grasp with transform γ are sampled, S-PICK-PLACE uses S-IK to sample valid configurations of the robot base and manipulator that reach the end-effector pose p × γ⁻¹. We implement S-IK by first sampling base poses from a precomputed inverse reachability database that are near the end-effector pose, as shown in figure 11. Then, we sample collision-free, analytical inverse kinematics (IK) solutions using IKFast (Diankov 2010). Finally, each pose, grasp, and configuration tuple is used as the arguments of a PICK and a PLACE action instance. These actions are added to the respective sets of EAS actions, Apick and Aplace.

S-PICK-PLACE(D, Π; θ):
1   Apick, Aplace = ∅, ∅
2   for i ∈ [m]:
3     P^{oi} = {p_0^{oi}} ∪ S-POSES(P^{oi}; θ) ∪ S-POSES(P_∗^{oi}; θ)
4     G^{oi} = S-GRASPS(G^{oi}; θ)
5     if i = h0: G^{oi} ∪= {g_0^{oh0}}
6     if i = h∗: G^{oi} ∪= S-GRASPS(G_∗^{oh∗}; θ)
7     for (p, g) ∈ P^{oi} × G^{oi}:
8       γ = TRANSFORM(g)
9       for q ∈ S-IK(Q, p × γ⁻¹; θ):
10        Apick ∪= {PICK(p, (oi, γ), q)}
11        Aplace ∪= {PLACE(p, (oi, γ), q)}
12  return Apick, Aplace

Figure 12. Procedure for sampling the PICK and PLACE actions.
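S-POSES, as described above, is uniform rejection sampling; a minimal Python sketch follows, assuming a rectangular placement region and an abstract collision oracle, both hypothetical stand-ins for the paper's geometry.

# Sketch of S-POSES-style uniform rejection sampling of placements.
import math, random

def sample_poses(n, bounds, collides_with_fixed, max_tries=10000):
    x_min, x_max, y_min, y_max = bounds   # assumed rectangular region
    poses, tries = [], 0
    while len(poses) < n and tries < max_tries:
        tries += 1
        pose = (random.uniform(x_min, x_max),
                random.uniform(y_min, y_max),
                random.uniform(0.0, 2 * math.pi))   # planar yaw
        if not collides_with_fixed(pose):           # reject colliding samples
            poses.append(pose)
    return poses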
8.2 Conditional Reachability Graph

Using the PICK and PLACE robot configurations as targets, we now sample robot movements and trajectories that will reach these pick and place configurations. In the process, we will create a conditional reachability graph (CRG). The CRG is a partial representation of the connectivity of the space of sampled configurations, conditioned on the placements of movable objects as well as on what is in the robot's hand. It efficiently allows us to evaluate REACHABLE(q1, q2, (V, E)) predicates. It is similar in spirit to the roadmaps of Leven and Hutchinson (2002) in that it is designed to support solving multiple motion-planning queries in closely related environments.

Formally, it is a graph (V, E) where the vertices V are a set of robot configurations q ∈ V. The edges E are triplets e = ⟨q, q′, τ⟩ where q and q′ are a pair of vertices and τ is a trajectory that connects q and q′. Each edge is also annotated with an initially empty map of validation conditions of the form e.valid[⟨ρ, g⟩] = b, where b = True if the edge is traversable for a placement of object ρ and grasp g when there are no other objects in the world; otherwise, b = False. There are three cases for a ⟨ρ, g⟩ pair:

• ⟨(o, p), None⟩: safe for the robot to traverse e when object o is placed at pose p and the robot is not holding any object.
• ⟨None, (o′, γ)⟩: safe for the robot to traverse e when no object is placed and the robot is holding object o′ with grasp transform γ.
• ⟨(o, p), (o′, γ)⟩: safe for the robot to traverse e when object o is placed at pose p and the robot is holding object o′ with grasp transform γ.

The validation conditions on the edges are not pre-computed; they will be computed lazily, on demand, and cached in this data structure. These conditions are separated in this way in order to maximize the amount of collision caching used. Note that the procedure for determining e.valid[⟨(o, p), (o′, γ)⟩] does not need to compute whether the robot collides with either o or o′, because those conditions will already have been computed and stored in ⟨(o, p), None⟩ and ⟨None, (o′, γ)⟩ respectively. However, it will still need to compute whether (o, p) collides with (o′, γ).

Figure 13 depicts a cartoon CRG. The bottom figure is conditioned on a movable object, which temporarily removes three edges from the traversable roadmap.

Figure 13. An unconditioned CRG and the same CRG conditioned on the placement of a single movable object.
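The lazy, cached edge validation can be captured in a few lines of Python; the collision oracle is an assumed callable, and the dictionary keys mirror the three ⟨ρ, g⟩ cases above.

# Sketch of a CRG edge with lazily computed, cached validation conditions.
class CRGEdge:
    def __init__(self, q1, q2, traj):
        self.q1, self.q2, self.traj = q1, q2, traj
        self.valid = {}   # (placement, grasp) -> bool, filled on demand
        self.ach = set()  # achieving values recorded during relaxed queries

    def is_valid(self, placement, grasp, collision_oracle):
        # collision_oracle(traj, placement, grasp) is an assumed predicate.
        key = (placement, grasp)
        if key not in self.valid:         # compute once, cache forever
            self.valid[key] = not collision_oracle(self.traj, placement, grasp)
        return self.valid[key]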
8.3 Constructing the CRG

The CRG is initialized in the pre-processing phase when sampling the MOVE actions. The S-MOVE procedure is outlined in figure 16. The initial configuration and goal configurations sampled using S-CONFIGS are used to initialize the CRG. It then calls the S-APPROACHES procedure, which generates trajectories for moving near each PICK and PLACE configuration. Let PARAMETERS(a) give the tuple of parameters for action instance a. For each PICK and PLACE configuration q, S-APPROACHES samples a nearby configuration q′ using S-NEARBY-CONFIG for the purpose of approaching q. In our implementation, we do this by concatenating the previous base pose with a predetermined manipulator configuration used when carrying objects. Then it calls S-APPR-TRAJ, which samples trajectories τ between q and q′. For our implementation, we call RRT-Connect (Kuffner and LaValle 2000) between q and q′ in the configuration space of just the manipulator (because we use the same base pose). We disallow trajectories that either collide with fixed obstacles while holding o at grasp g or collide with o at pose p when moving with an empty hand, in order to use the trajectory for both PICK and PLACE actions. Our implementation assumes the robot is holonomic, so trajectories are reversible. For non-holonomic robots, a separate trajectory from q′ to q must be sampled. An edge following the trajectory and an edge following the reversed trajectory are then added to the CRG.

Next, S-MOVE calls the S-ROADMAP procedure, which attempts to connect the configurations in the CRG. The procedure described in figure 16 is for a fixed-degree PRM. It samples additional roadmap configurations using S-CONFIGS to attempt to connect the roadmap. For each configuration in the roadmap, S-ROADMAP attempts to connect the configuration to its nearest neighbors in V given by NEAREST-NEIGHBORS. It uses S-TRAJ to linearly interpolate between configurations. The number of additional configurations to sample and the desired degree of the PRM are given by the parameter vector θ. Figure 14 shows a sampled CRG for a NAMO problem.

Figure 14. Full CRG visualized using end-effector poses.

In cases where domain-dependent information is available, it may make sense to use a different S-MOVE procedure. For example, for problems where objects can only be placed on tables, it is wasteful to create a dense roadmap for moving between tables, because placed objects cannot possibly affect the validity of edges at a certain distance away from the tables. In our experiments involving objects that can only be placed on tables, we use a version of S-ROADMAP that is sparse away from tables while still dense near tables by creating a star-graph of trajectories that connect an arbitrary reference configuration (such as q0) to configurations near each table, as shown in figure 15.

Finally, S-MOVE creates a MOVE action instance for pairs of configurations in V that are connected in the roadmap, as tested by a standard breadth-first search using PATH(q, q′; (V, E)) (without considering any placed or held objects). In practice, we only create MOVE actions between start and goal configurations as well as configurations at which the robot can perform a PICK or PLACE, because the robot will never need to stop at an intermediate configuration. Lastly, S-MOVE returns the set of move actions Amove.
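A minimal Python sketch of the fixed-degree nearest-neighbor connection step follows, using a Euclidean metric and a straight-line local planner as stand-ins for the paper's S-TRAJ.

# Sketch of fixed-degree PRM connection (cf. S-ROADMAP): connect each
# vertex to its k nearest neighbors with linearly interpolated edges.
import math

def connect_roadmap(vertices, k, segment_is_free):
    edges = set()
    for q in vertices:
        neighbors = sorted((v for v in vertices if v != q),
                           key=lambda v: math.dist(q, v))[:k]
        for q2 in neighbors:
            if segment_is_free(q, q2):   # assumed straight-line validity test
                edges.add((q, q2))
                edges.add((q2, q))       # holonomic robot: edges reversible
    return edges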
S-APPROACHES(Q, (V, E), Apick, Aplace; θ):
1   for a ∈ (Apick ∪ Aplace):
2     (p, g, q) = PARAMETERS(a)
3     for q′ ∈ S-NEARBY-CONFIG(Q, q; SCH(a), p, g; θ):
4       V ∪= {q, q′}
5       for τ ∈ S-APPR-TRAJ(Q, q, q′; SCH(a), p, g; θ):
6         E ∪= {(q, q′, τ), (q′, q, REVERSE(τ))}

S-ROADMAP(Q, (V, E); θ):
1   V ∪= S-CONFIGS(Q; θ)
2   for q ∈ V:
3     for q′ ∈ NEAREST-NEIGHBORS(V, q; θ):
4       for τ ∈ S-TRAJ(Q, q, q′; θ):
5         E ∪= {(q, q′, τ), (q′, q, REVERSE(τ))}

S-MOVE(Q, q0, Q∗, Apick, Aplace; θ):
1   (V, E) = ({q0} ∪ S-CONFIGS(Q∗; θ), ∅)
2   S-APPROACHES(Q, (V, E), Apick, Aplace; θ)
3   S-ROADMAP(Q, (V, E); θ)
4   Amove = {MOVE(q, q′, (V, E)) | (q, q′) ∈ V × V, PATH(q, q′; (V, E)) ≠ None}
5   return Amove

Figure 15. A star-graph CRG visualized using end-effector poses.

Figure 16. Procedures for constructing the CRG.

8.4 Querying the CRG

Now that we have a CRG, we can use it to test whether REACHABLE conditions hold in a relaxed state s+, as shown in TEST-REACHABLE in figure 18. Recall that in each relaxed state s+, each object can simultaneously be missing entirely, in multiple poses, and in multiple grasps. Similarly, the hand can hold several objects while also remaining empty. We need to determine whether there is some simultaneous assignment of all these variables, using the values present in s+, that allows a legal path from a start q to q′ ∈ V. The test constructs a subgraph (V, E′) of the CRG that consists only of the edges that are each independently valid for some choice of object poses and robot grasps from s+. Additionally, each edge e is augmented with a temporary set of these pose and grasp achieving values e.ach that collectively allow collision-free traversal of the edge. The test then searches that graph to see if configuration q′ is reachable from q. It calls two subprocedures: TEST-GRASP and TEST-OBJ. Figure 17 displays the set of reachable CRG configurations from the current robot configuration, given the placements of the movable objects, for a NAMO problem.

Figure 17. The reachable CRG. Green edges are on the BFS tree. Blue edges have reachable vertices but were not selected for the BFS tree. Red edges have at least one unreachable vertex.

The procedure TEST-GRASP checks, given an edge e, the relaxed state s+, and a placement ρ (which may be None), whether the edge is valid for some grasp g in the relaxed state. If None is a grasp in s+, TEST-GRASP can immediately report success because the robot can navigate the edge without colliding with the fixed obstacles (assuming the robot itself does not collide with ρ). Otherwise, it processes each grasp in s+. If ⟨ρ, g⟩ has not already been cached as a validation condition, it is computed. If ρ = None, the procedure computes whether the robot moving along e.τ with grasp g either causes a self-collision or g collides with fixed obstacles using ROBOT-GRASP-C. Otherwise, when ρ = (o, p), the procedure computes whether the grasp g moving along e.τ causes a collision with object o at pose p using GRASP-OBJ-C. Then, it can check e.valid to obtain the cached answer to the query. If TEST-GRASP finds a satisfying grasp, it returns True. Otherwise, it returns False.
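TEST-GRASP's control flow translates directly into Python; the sketch below reuses the valid/ach fields from the earlier CRGEdge sketch, represents the relaxed hand variable as a set of grasps (possibly containing None), and assumes the two collision predicates as callables.

# Sketch of TEST-GRASP: is the edge traversable for some grasp in the
# relaxed state, optionally against placement rho? Results are cached.
def test_grasp(hand_values, edge, rho, robot_grasp_collides, grasp_obj_collides):
    if None in hand_values:               # empty hand always suffices
        edge.ach.add(("hand", None))
        return True
    for g in hand_values:
        key = (rho, g)
        if key not in edge.valid:         # lazy, cached collision check
            if rho is None:
                edge.valid[key] = not robot_grasp_collides(edge.traj, g)
            else:
                edge.valid[key] = not grasp_obj_collides(edge.traj, g, rho)
        if edge.valid[key]:
            edge.ach.add(("hand", g))
            return True
    return False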
Similarly, the procedure TEST-OBJ computes, for an edge e, the relaxed state s+, and an object o, whether the edge is valid for some pose p in the relaxed state. If None is a pose for object o in s+, the procedure can immediately report success because the robot can navigate the edge when o is not placed. Otherwise, it processes each pose for o in s+. If ⟨(o, p), None⟩ has not already been cached as a validation condition, it is computed. The procedure computes whether the robot moving along e.τ causes a collision with o at pose p using ROBOT-OBJ-C. It then checks e.valid to obtain the cached answer to the query and calls TEST-GRASP with placement ρ = (o, p) to see if this pose admits a legal grasp. If TEST-OBJ finds a satisfying pose, it returns True. Otherwise, it returns False.

TEST-GRASP(s+, e, ρ):
1   if None ∈ s+(vh):
2     e.ach ∪= {vh ← None}
3     return True
4   for g ∈ s+(vh):
5     if ⟨ρ, g⟩ ∉ e.valid:
6       e.valid[⟨ρ, g⟩] = (ρ = None and not ROBOT-GRASP-C(e.τ, g)) or (ρ ≠ None and not GRASP-OBJ-C(e.τ, g, ρ))
7     if e.valid[⟨ρ, g⟩]:
8       e.ach ∪= {vh ← g}
9       return True
10  return False

TEST-OBJ(s+, e, o):
1   if None ∈ s+(vo):
2     e.ach ∪= {o ← None}
3     return True
4   for p ∈ s+(vo):
5     if ⟨(o, p), None⟩ ∉ e.valid:
6       e.valid[⟨(o, p), None⟩] = not ROBOT-OBJ-C(e.τ, (o, p))
7     if e.valid[⟨(o, p), None⟩] and TEST-GRASP(s+, e, (o, p)):
8       e.ach ∪= {o ← p}
9       return True
10  return False

TEST-REACHABLE({c1, ..., cn}, s+; (V, E)):
1   q = c1.q
2   E′ = E
3   for e ∈ E:
4     if not TEST-GRASP(s+, e, None):
5       E′ = E′ \ {e}
6       continue
7     for i ∈ [m]:
8       if not TEST-OBJ(s+, e, oi):
9         E′ = E′ \ {e}
10        break
11  (N, T) = BFS-TREE(q; (V, E′))
12  return {⟨ci, TRACE-ACH(ci.q′; (N, T))⟩ | ci.q′ ∈ N}

Figure 18. Procedure for querying the CRG.

Many of these checks can be done very efficiently using simple bounding-box computations. Additionally, the CRG caches the polyhedral structure of the robot along the trajectory, which can speed up later collision checks along the same edge by not having to place the robot a second time. Within TEST-REACHABLE, if there is not a valid grasp and pose for each object that allows safe passage across an edge, the edge is removed from the subgraph E′. Finally, BFS-TREE(q; (V, E′)) performs a standard breadth-first search through the subgraph to construct a BFS tree (N, T). TEST-REACHABLE then returns the subset of {c1, ..., cn} whose end configurations q′ are in (N, T). Additionally, a set of achiever variable values is computed using TRACE-ACH(ci.q′; (N, T)) by tracing a path to q and taking the union of e.ach for the visited edges.

Recall that for standard non-relaxed states, each object can only be at one pose at a time and the robot can hold at most one object, so the loops over poses s+(voi) and grasps s+(vh) will only process a single pose or grasp respectively. This test is also efficient for relaxed states that arise during the relaxed planning process. As soon as an object oi is picked up, it will have None ∈ s+(voi), and no further collision checks using object oi will be required. Additionally, before a new object pose or grasp can be added to the relaxed state, a None pose or grasp must be added from a PICK or PLACE action respectively. Thus, s+(voi) and s+(vh) will either contain only a single pose or grasp or they will contain None, which expedites the test. Finally, our implementation of TEST-REACHABLE actually tests the validity of the graph lazily while performing the search. We also exploit the fact that we are only interested in the connected component of the CRG that includes the current robot configuration, which further increases efficiency. Moreover, for sequential evaluations for the same heuristic, TEST-REACHABLE expands on the previously reachable subgraph to avoid reevaluating traversable edges.

9 Review of sPRM Theoretical Analysis

To motivate our FFROB theoretical analysis, we review the theoretical analysis for the simplified Probabilistic Roadmap (sPRM) (Kavraki and Latombe 1998) over the class of robustly feasible motion planning problems. Two desirable properties for sampling-based motion planning algorithms are probabilistic completeness and exponential convergence. Exponential convergence implies probabilistic completeness.

Definition 16. An algorithm is probabilistically complete over a class of problems if and only if the probability that the algorithm halts and returns a solution is one in the limit as the number of time steps goes to infinity.

Definition 17. An algorithm is exponentially convergent over a class of problems if and only if the probability that the algorithm has not terminated and returned a solution decreases exponentially in the number of time steps.
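The relationship between the two properties can be checked numerically with a toy sampler: if each iteration independently hits a fixed target region with probability σ, the probability of still failing after n iterations is (1 − σ)^n ≤ e^{−σn}, which decays exponentially. The constants below are illustrative only.

# Toy Monte-Carlo check of exponential convergence for a sampler that
# hits its target with probability sigma per iteration (illustrative only).
import math, random

sigma, n_max, trials = 0.05, 200, 10000
fail_counts = [0] * (n_max + 1)
for _ in range(trials):
    first_hit = next((n for n in range(1, n_max + 1)
                      if random.random() < sigma), n_max + 1)
    for n in range(1, min(first_hit, n_max + 1)):
        fail_counts[n] += 1       # still unsolved after n iterations
for n in (10, 50, 100):
    print(n, fail_counts[n] / trials, math.exp(-sigma * n))  # empirical vs bound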
The objective of a single-query motion planning problem is to find a collision-free trajectory in a d-dimensional configuration space Q ⊆ R^d between a start configuration q0 ∈ Q and a goal configuration q∗ ∈ Q. As is typical in the analysis of motion planning algorithms, we restrict our analysis to Euclidean configuration spaces. We will call configurations and trajectories collision-free if all of their configurations are contained within Q.

9.1 Robust Feasibility

First, we identify a class of robustly feasible motion planning problems which have a nonzero volume of solutions. The robustness restriction is necessary because sampling-based algorithms have zero probability of generating samples on any particular lower-dimensional sub-manifold of Q. Having a nonzero volume of solutions ensures that the solutions are not constrained to such a sub-manifold. We give a definition of a robustly feasible motion planning problem that mixes the ideas of clearance (Kavraki et al. 1998) and ε-goodness (Kavraki et al. 1995) in order to classify some motion planning problems where q0 or q∗ is on the boundary of Q as robustly feasible.

We will use ||x|| as the Euclidean norm on points x ∈ R^d. Let τ : [0, L] → Q be a trajectory of length L from q0 to q∗ such that q0 = τ(0) and q∗ = τ(L). Let χ(τ; Q) give the clearance of τ, the minimum distance from a configuration on τ to the boundary of Q:

χ(τ; Q) = inf_{t ∈ [0,L]} inf_{x ∈ ∂Q} ||τ(t) − x||.

Definition 18. A motion planning problem is robustly feasible if there exists a trajectory τ from q0+ ∈ Q to q∗+ ∈ Q and δ > 0 such that:
• χ(τ; Q) ≥ δ and:
• ∀q ∈ Q such that ||q − q0+|| ≤ δ/2, ∀t ∈ [0, 1]: (1 − t)q0 + tq ∈ Q and:
• ∀q′ ∈ Q such that ||q′ − q∗+|| ≤ δ/2, ∀t ∈ [0, 1]: (1 − t)q′ + tq∗ ∈ Q.

This definition asserts that there is a trajectory with nonzero clearance such that the neighborhoods around its start and end configurations can "see" (admit a linear path to) q0 and q∗ respectively. This implies that both q0 and q∗ are ε-good for some ε > 0. This is a weaker condition than the assertion that there exists a direct trajectory between q0 and q∗ with nonzero clearance. The latter assertion would disqualify motion planning problems where the start or goal are themselves on the boundary of Q, even if a non-negligible volume of Q could see them. These kinds of motion planning problems are prevalent when grasping objects, making our definition useful when analyzing PPM problems. Figure 19 displays a motion planning problem that is robustly feasible under our definition of the term.

Figure 19. A robustly feasible motion planning problem.
9.2 Probabilistic Completeness

The following theorem states that for any robustly feasible motion planning problem, there exists a finite sequence of d-spheres with nonzero volume for which the linear interpolation of any collection of configurations covering the spheres is a solution to the problem. Let B(q, r) be the d-sphere centered at q with radius r.

Theorem 1. For any robustly feasible motion planning problem, there exists a sequence of k + 1 d-spheres (B0, B1, ..., Bk), where k = ⌈2L/δ⌉, centered at τ(Li/k) for i ∈ {0, ..., k} and each with radius δ/2, such that any trajectory τ′ linearly interpolated from (q0, q0, q1, ..., qk, q∗), where qi ∈ Bi for i ∈ {0, ..., k}, is a collision-free solution to the motion planning problem.

We give this proof, which is largely the same as a proof by Kavraki et al. (1998), in the appendix. This theorem directly reduces motion planning to a sampling problem by identifying these spheres. Thus, sampling-based algorithms, such as PRMs, can produce solutions to robustly feasible motion planning problems by sampling the space until generating samples within these spheres.

Again, we consider completeness properties for the sPRM, a mathematically tractable variant of a PRM. The sPRM starts its roadmap with V = {q0, q∗} and, on each iteration, uniformly at random samples Q and connects each sampled configuration q to all existing roadmap configurations q′ such that ∀t ∈ [0, 1], (1 − t)q + tq′ ∈ Q, i.e., the linearly interpolated path between q and q′ is collision-free. Now we prove the main theorem, which is slightly modified from that of Kavraki et al. (1998). Let µ(Q) be the d-dimensional Lebesgue measure on Q ⊆ Q.

Theorem 2. The sPRM algorithm is probabilistically complete and exponentially convergent over the class of robustly feasible motion planning problems.

Proof. Consider any robustly feasible motion planning problem. By theorem 1, there exists a sequence of k + 1 d-spheres with radius δ/2, where k = ⌈2L/δ⌉, such that any collection of k + 1 configuration samples that covers the spheres forms a collision-free, linearly interpolated trajectory. We will construct an upper bound on the probability that, after taking n samples, the sPRM has not covered all of the spheres. We will assume that the sPRM is able to directly sample Q by, for instance, sampling a hyperrectangle subset of R^d that contains Q and rejecting samples not contained in Q.

We begin by defining σ to be the probability that a random sample is inside some particular sphere (and note that this probability is equal for all the spheres). It is equal to the ratio of the measure of the sphere to the measure of the free configuration space Q:

Pr[sample is in Bi] = µ(B(τ(Li/k), δ/2)) / µ(Q) = σ.   (1)

Note that σ ∈ [0, 1] is a constant with respect to n. Now, the probability that all n samples are outside sphere i is (1 − σ)^n, which is bounded above by e^{−σn}. The algorithm will fail if, for any sphere, all n samples are outside it; we bound this probability using the union bound:

Pr[⋃_{i=0}^{k} all n samples are outside Bi] ≤ Σ_{i=0}^{k} e^{−σn}   (2)
                                            = ⌈2L/δ + 1⌉ e^{−σn}.   (3)

Note that lim_{n→∞} ⌈2L/δ + 1⌉ e^{−σn} = 0. Because the probability of failure decreases exponentially in n, the sPRM algorithm is exponentially convergent, which implies it is probabilistically complete.
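Plugging illustrative constants into the bound from equations (2)–(3) shows how quickly it decays; all numbers below are arbitrary stand-ins with no connection to a real problem instance.

# Evaluate the sPRM failure bound ceil(2L/delta + 1) * exp(-sigma * n)
# from equations (2)-(3) for illustrative (made-up) constants.
import math

def sprm_failure_bound(L, delta, sigma, n):
    return math.ceil(2 * L / delta + 1) * math.exp(-sigma * n)

for n in (100, 500, 1000):
    print(n, sprm_failure_bound(L=10.0, delta=0.5, sigma=0.01, n=n))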
10 FFROB Theoretical Analysis

The probabilistic completeness and exponential convergence proofs for FFROB build on the ideas from the sPRM proofs. The high-level structure of the argument is the same. We first identify a class of robustly feasible PPM problems that have a nonzero volume of solutions. This is more complicated than in the pure motion planning case because we must reason not only about volumes of trajectories but also about volumes of placements, grasps, and inverse kinematic solutions. Then, we describe a simplified version of FFROB. Finally, we show that this version of FFROB is probabilistically complete and exponentially convergent.

Because samples of configurations, poses, and grasps come from different domains, we need a definition of volume relative to each particular space. For the rest of the proof, let µ(E; X) generically be a measure on subsets E of the set X that assigns finite, nonzero measure to X. This could be a counting measure for discrete X, a Lebesgue measure of appropriate dimensionality for Euclidean manifolds, a Haar measure for rotation groups, and so on. Furthermore, it will be useful to define a normalized version of this measure, ˜µ, where

˜µ(E; X) = µ(E; X) / µ(X; X).

When picking and placing objects, we must frequently reason about the set of robot configurations that can reach a particular end-effector transformation. This set of inverse kinematics solutions is frequently a low-dimensional sub-manifold in R^d, which cannot always be sampled reliably. We denote the set of inverse kinematics solutions for a manipulator transform t as:

Qik(t) = {q ∈ Q | END-EFFECTOR(q) = t}.

We assume that we can randomly sample from Qik(t) with probability density bounded away from zero by some κ > 0. This could be done by, for example, obtaining an analytical representation of the inverse kinematic solutions and sampling free parameters.
A transit mode is legal if there are no collisions among the placed objects. A transit mode constrains the robot to move in Qψi ⊆ Q, the subset of Q that does not collide with the placed objects. , {poj i ohi Definition 20. A transfer mode φi = (cid:104)g | ∀j ∈ i ohi [m], j (cid:54)= hi}(cid:105) has parameters consisting of a grasp g i ∈ Gohi for an object ohi and the poses for each other object poj i ∈ P oj . A transfer mode is legal if there are no collisions among the placed objects. A transfer mode constrains the robot to move in Qφi ⊆ Q, the subset of Q where neither ohi it nor object ohi , held at grasp g , collide with the placed i objects nor ohi intersects with the robot. In both modes, the poses of the non-held objects are fixed and constrain legal movements of the robot. Additionally, in transfer modes, the held object must remain at the same grasp transform relative to the robot’s end-effector, effectively altering the geometry of the robot. The state of the system can always be derived from just the current transit or transfer mode and the current robot configuration. As such, when the current mode is fixed, we can reason about the state of the system by just reasoning in the space of robot configurations subject to the mode. The problem of planning robot movements subject to a mode is called a mode-constrained motion planning problem. This is simply a standard motion planning problem with an additional mode input ω which defines the operable state-space Qω ⊆ Q of the problem. A mode- constrained motion planning problem is robustly feasible when the corresponding motion planning problem in the restricted configuration space Qω is robustly feasible. Prepared using sagej.cls Clearly, the sPRM is both probabilistically complete and exponentially convergent for robustly feasible mode- constrained motion planning problems by applying the exact same arguments from theorems 1 and 2. 10.1.2 Multi-Mode-Constrained Motion Planning A robot must usually move in a sequence of modes in order to solve manipulation problems. Two modes are adjacent if the intersection between their operable system state-spaces is nonempty. Two unique transit modes cannot be adjacent because at least one object pose must differ between them, and the robot must grasp the object in order to move it between poses. Similarly, two unique transfer modes cannot be adjacent because an object pose or the grasp differs between them. In either case, the robot must change the current grasp to move the object or obtain a new grasp. Transit and transfer modes, however, can be adjacent. Definition 21. A transit mode ψi and transfer mode φj are adjacent if and only if poa i = poa j ∀a ∈ [m], hj (cid:54)= a, and Qψi ∩ Qφj (cid:54)= ∅. In adjacent modes, the poses of the objects that are not grasped in the transfer mode are fixed between the two modes (reflecting that the robot can only manipulate a single object at a time) and the set of robot configurations that can move between the modes is nonempty. The system can perform a mode switch between two adjacent modes when it is at a state in the intersection of the two operable state-spaces. For transit and transfer modes, this is equivalent to a robot being at a configuration q ∈ Qψi ∩ Qφj . Mode switches from ψi to φj are PICK actions. Conversely, mode switches from φj to ψi are PLACE actions. Both of these actions involve object ohj and grasp ohj ohj g from the transfer mode as well as pose p from the j i transit mode. 
Precisely, the set of PICK actions A_{ψi}^{φj} that can switch from ψi to φj is

A_{ψi}^{φj} = {PICK(ohj, p_i^{ohj}, g_j^{ohj}, q) | q ∈ Q_{ψi} ∩ Q_{φj}},

and the set of PLACE actions A_{φj}^{ψi} that can switch from φj to ψi is

A_{φj}^{ψi} = {PLACE(ohj, p_i^{ohj}, g_j^{ohj}, q) | q ∈ Q_{φj} ∩ Q_{ψi}}.

Define t(ψi, φj) = p_i^{ohj} × TRANSFORM(g_j^{ohj})⁻¹ as the end-effector transform for grasping object ohj at pose p_i^{ohj} with grasp g_j^{ohj}. For notational simplicity, assume that the arguments to t are unordered, so t(φj, ψi) = t(ψi, φj). Q_{ψi} ∩ Q_{φj} ⊆ Qik(t(ψi, φj)) is the collision-free subset of the inverse kinematic solutions for the end-effector transform at the mode switch. These inverse kinematic configurations will serve as targets for motion planning to move between modes.

A sequence of k legal, adjacent transit and transfer modes (ω1, ω2, ..., ωk) is a mode sequence. A PPM mode sequence has k − 1 mode switches, i.e., k − 1 PICK and PLACE actions. We will generically refer to the ith mode as ωi. A mode sequence, together with whether ω1 is a transit or transfer mode, allows us to determine the type of any other mode ωi depending on whether i is odd or even.

We will now look at multi-mode-constrained motion planning problems from a start configuration q0 ∈ Q_{ω1} to a set of end configurations Q∗ ⊆ Q_{ωk} through a fixed sequence of modes. These problems can be reduced to a sequence of k mode-constrained motion planning problems. However, there is a complication: the start q0_i and goal q∗_i for the ith mode-constrained motion planning problem (with the exception of the first and last problems) are not given. These must be chosen from the intersection of the neighboring modes such that q0_i ∈ Q_{ωi−1} ∩ Q_{ωi} and q∗_i ∈ Q_{ωi} ∩ Q_{ωi+1}. The individual mode motion plans must connect continuously such that q0_i = q∗_{i−1}. Thus, the problem requires choosing these target configurations as well as finding mode motion plans that connect between them. A multi-mode-constrained motion planning problem is robustly feasible when there exists a sequence of sets of target configurations with nonzero measure such that any mode-constrained motion planning problem between pairwise targets is robustly feasible.

Definition 22. A multi-mode-constrained motion planning problem is robustly feasible if for a mode sequence (ω1, ω2, ..., ωk) there exists a sequence of sets of configurations (Q0, Q1, ..., Qk) and ε > 0 such that:
• Q0 = {q0} and:
• ∀i ∈ [k − 1], Qi ⊆ Qik(t(ωi, ωi+1)) and ˜µ(Qi; Qik(t(ωi, ωi+1))) ≥ ε and:
• Qk ⊆ Q∗ and ˜µ(Qk; Q∗) ≥ ε and:
• ∀i ∈ [k], ∀q0_i ∈ Qi−1, ∀q∗_i ∈ Qi, the ωi mode-constrained motion planning problem from q0_i to q∗_i is robustly feasible.

10.1.3 PPM Planning A multi-mode-constrained motion planning problem is easier than a PPM problem because the mode sequence is given as an input. We now return to full PPM problems that require selection of a mode sequence in addition to finding a multi-mode motion plan. The start mode must be ω1 = ⟨g_0^{oh0}, {p_0^{o1}, ..., p_0^{om}}⟩. However, the goal mode can be any mode ωk such that g_k^{ohk} ∈ G_∗^{oh∗} and ∀j ∈ [m], p_k^{oj} ∈ P_∗^{oj}.

Suppose now we must determine the sequence of modes. Starting from ω1 and for each newly chosen mode, each legal mode switch from the last mode ωi to a new mode ωi+1 can be described by a single parameter. Between a transit mode ωi = ψi and a new transfer mode, the object poses, with the exception of a grasped object, remain constant. Thus, the new transfer mode ωi+1 = φi+1 can be inferred from just ψi and the specification of a free parameter g_{i+1}^{ohi+1} for the resulting grasp. Similarly, between a transfer and transit mode, the object poses, with the exception of a grasped object, also remain constant, leaving a free parameter p_{i+1}^{ohi} for the new pose of ohi to give the new mode. Thus, a sequence of modes starting from ω1 can be completely described by an alternating sequence of (k − 1) poses and grasps such as (g_2^{oh2}, p_3^{oh2}, ..., p_k^{ohk−1}). It is sufficient to choose these poses and grasps to identify a mode sequence.

In order for a PPM problem to be robustly feasible, there must be a nonzero volume of mode sequences that admit robust multi-mode motion plans. As previously suggested, the set of length-k mode sequences is contained in the space Θ formed by the Cartesian product of the k − 1 alternating pose and grasp parameter domains P^{oi} and G^{oj}. Suppose that we are considering mode sequences starting and ending with transit modes where the transfer modes interact with the prescribed sequence of objects (oh2, oh4, ..., ohk−1). The space containing the set of valid mode sequences is Θ = G^{oh2} × P^{oh2} × ... × P^{ohk−1}. We can define a set of mode sequences θ = G_2^{oh2} × P_3^{oh2} × ... × P_k^{ohk−1} by a set of values for each parameter such that θ ⊆ Θ. Let G_j^{ohj} (at the even, grasp positions of the sequence) and P_i^{ohi−1} (at the odd, pose positions) refer to these sets of grasps and poses that together define a collection of mode sequences from a prescribed sequence of transit and transfer modes.

We could measure the volume of these mode sequences by just taking the product of each ˜µ(P_i^{ohi−1}; P^{ohi−1}) and ˜µ(G_j^{ohj}; G^{ohj}). However, the goal of these mode sequences is to reach a goal mode. This requires choosing grasps and poses within G_∗^{oh∗} and P_∗^{oi} for i ∈ [m] respectively. A PPM problem may specify a set of goal poses or grasps that is considerably smaller or has lower dimensionality than the full space of these values. For example, a problem could specify a goal set P_∗^{oi} = {p_∗^{oi}} as a single pose, where clearly ˜µ(P_∗^{oi}; P^{oi}) = 0. Yet, we still expect some of these problems to be robustly feasible because an algorithm could intentionally sample the goal set P_∗^{oi} in addition to the full domain P^{oi}. Thus, we will define the measure of P_i^{ohi−1} and G_j^{ohj} as follows:

˜µ(P_i^{ohi−1}) = ˜µ(P_i^{ohi−1}; P_∗^{ohi−1}) if P_i^{ohi−1} ⊆ P_∗^{ohi−1}, and ˜µ(P_i^{ohi−1}; P^{ohi−1}) otherwise.

˜µ(G_j^{ohj}) = ˜µ(G_j^{ohj}; G_∗^{oh∗}) if ohj = oh∗ and G_j^{ohj} ⊆ G_∗^{oh∗}, and ˜µ(G_j^{ohj}; G^{ohj}) otherwise.

Intuitively, these measures are taken with respect to the set of goal values when the set to be measured is a subset of the goal values. Otherwise, the measures are taken with respect to the full domain of values. Finally, we arrive at the definition of a robustly feasible PPM problem.

Definition 23. A PPM problem Π is robustly feasible if there exists a set of length-k mode sequences θ = {(ω1, ω2, ..., ωk), (ω′1, ω′2, ..., ω′k), ...} with transfer modes involving a common sequence of objects (..., o∗hi, o∗hi+2, ...) such that:
• ∀(ω1, ω2, ..., ωk) ∈ θ:
  – ω1 = ⟨g_0^{oh0}, {p_0^{o1}, ..., p_0^{om}}⟩.
  – ohi = o∗hi if ωi is a transfer mode.
  – g_k^{ohk} ∈ G_∗^{oh∗} and ∀j ∈ [m], p_k^{oj} ∈ P_∗^{oj}.
  – The multi-mode motion planning problem from q0 to Q∗ using (ω1, ω2, ..., ωk) is robustly feasible.
• For the ith set of transfer modes {ωi, ω′i, ...}, ˜µ(P_i^{ohi−1}) > 0.
• For the jth set of transit modes {ωj, ω′j, ...}, ˜µ(G_j^{ohj}) > 0.
Intuitively, a robustly feasible PPM problem has non-negligible volumes of sequential poses and grasps such that any choice of these poses and grasps allows a sequence of robust motion plans that can connect them and solve the problem. The non-negligible volumes of poses relate to the robustness definition given by Van Den Berg et al. (2009). Their definition says a problem is robust if there exists a sequence of poses that could be simultaneously perturbed by a small amount ∆ without changing the feasibility of the resulting motion planning problems. Relating this to our definition, ∆ is the radius of an n-sphere that is centered at each pose in the sequence, where each n-sphere denotes a volume of safe poses.

Our robustness condition forbids overly constrained problems where solution mode sequences are restricted to a lower-dimensional sub-manifold of the mode sequence parameter space. Such problems require special samplers that are able to produce tuples of values, possibly for different types of parameters, that satisfy a constraint among them. Specifying such a sampler for every combination of parameters is intractable because the number of combinations grows exponentially.

10.1.4 Example Consider a physical interpretation of robustness for placements. Suppose all modes are chosen from the set of mode sequences apart from the ith transfer mode. This consequently fixes the sequence of placements for all but the ith placement. The robustness condition asserts that, for any combination of fixed placements, any choice of the ith placement from P_i^{ohi−1}, and therefore any choice of the ith transfer mode, will not collide with the existing placements. This allows placements to be chosen independently with respect to each other.

Figure 20. Illustration of robust feasibility applied to placements. The goal poses P_∗^{o1} and P_∗^{o2} for the blue pentagons o1, o2 are poses contained in the grey region. The top scenario shows candidate P_{k−1}^{o1} and P_k^{o2} with nonzero areas. The bottom scenario only admits P_{k−1}^{o1} and P_k^{o2} that are lines, giving each zero area.

Figure 20 gives an example of a robust and a non-robust problem involving goal poses. The top problem is robust because P_{k−1}^{o1} and P_k^{o2} are both balls of poses with nonzero volume, meaning that o1 and o2 could be at any pair of poses from these sets respectively and satisfy the goal without collision. The bottom problem is not robust because P_{k−1}^{o1} and P_k^{o2} are lines, indicating that the objects can only be moved up and down to result in a collision-free solution.

10.2 Probabilistic Completeness

Now that we have defined a robustly feasible PPM problem, we can return to analyzing FFROB. Recall that on each iteration, FFROB generates a finite number of samples of poses, grasps, and configurations and then performs a discrete search to determine if this set of samples is sufficient to solve the PPM problem. Upon the discrete search failing to find a plan, FFROB increases the number of samples on the next iteration. The algorithm terminates if and when the discrete search finds a plan. For the analysis, we assume that samples x are drawn from a space X uniformly at random, as denoted by x ∼ X.

10.2.1 Convergence in Iterations Let n represent the number of sampling iterations.
Let P^{oi}(n), G^{oi}(n), and Q^{oi}(n) be the sets of sampled poses, grasps, and inverse kinematic configurations involving object oi after n sampling iterations. For i ∈ [m], P^{oi}(0) = {p_0^{oi}} and G^{oi}(0) = ∅ if p_0^{oi} ≠ None; otherwise, P^{oi}(0) = ∅ and G^{oi}(0) = {g_0^{oi}}. However, Q^{oi}(0) = ∅ for all i ∈ [m]. Additionally, let (V(n), E(n)) define the vertices and edges of the CRG after n sampling iterations, where V(0) = ∅ and E(0) = ∅.

On the nth sampling iteration, for all i ∈ [m]:

• P^{oi}(n) = P^{oi}(n − 1) ∪ {p^{oi}, p_∗^{oi}} where p^{oi} ∼ P^{oi} and p_∗^{oi} ∼ P_∗^{oi}.
• G^{oi}(n) = G^{oi}(n − 1) ∪ {g^{oi}, g_∗^{oi}} where g^{oi} ∼ G^{oi} and, when oi = oh∗, g_∗^{oi} ∼ G_∗^{oh∗}.
• Q^{oi}(n) = Q^{oi}(n − 1) ∪ {q ∼ Qik(p × TRANSFORM(g)⁻¹) | p ∈ P^{oi}(n), g ∈ G^{oi}(n)}.

On each iteration and for each object, FFROB samples a pose, a goal pose, a grasp, and a goal grasp (if the object has a holding goal). Additionally, it samples an inverse kinematics solution for each existing pair of poses and grasps. Each pose and grasp pair will continue to have new inverse kinematic solutions sampled as n increases. FFROB will remain probabilistically complete and exponentially convergent as long as the number of samples on iteration n is bounded by some polynomial in n.

To simplify the analysis, we will consider a simplified version of the CRG that is an sPRM in Q. As stated in section 9, the sPRM is a roadmap that always attempts to connect every pair of sampled configurations, which leads to an exponentially convergent motion planning algorithm. We only construct one roadmap, as opposed to growing a separate roadmap for each grasp or combination of object placements. Although grasping an object changes the geometry of the robot, it only decreases the operable subset of Q. The same is true for placing objects. Thus, the CRG grown in the full configuration space Q with the hand empty and no placed objects will be sufficient to also capture paths in different transit and transfer modes.

On each sampling iteration, the inverse kinematics solutions that were previously sampled are added to the CRG. A single free-space robot configuration q and a single goal robot configuration q∗ are also sampled and added to the CRG. Although in practice the CRG is constructed using larger sample batches, sampling individual configurations simplifies the analysis without loss of generality. Then, the set of edges is constructed for every possible pair of vertices that do not have collisions:

• V(n) = V(n − 1) ∪ Q^{o1}(n) ∪ ... ∪ Q^{om}(n) ∪ {q, q∗} where q ∼ Q and q∗ ∼ Q∗.
• E(n) = {(v, v′), (v′, v) | v ∈ V, v′ ∈ V, COLLISION-FREE(v, v′; Q)}.

After the samples are selected for the nth iteration, the actions passed to the discrete planner are the following:

• Pick actions for i ∈ [m]: A^i_pick = {PICK(oi, p, g, q) | p ∈ P^{oi}(n), g ∈ G^{oi}(n), q ∈ Q^{oi}(n) ∩ Qik(p × TRANSFORM(g)⁻¹)}.
• Place actions for i ∈ [m]: A^i_place = {PLACE(oi, p, g, q) | p ∈ P^{oi}(n), g ∈ G^{oi}(n), q ∈ Q^{oi}(n) ∩ Qik(p × TRANSFORM(g)⁻¹)}.
• Move actions: Amove = {MOVE(q, q′, (V, E)) | q ∈ V, q′ ∈ V, q ≠ q′}.

Finally, because the heuristic search algorithms that work in practice do not improve symbolic planning's worst-case complexity, we simplify FFROB such that it performs an unguided search. We start by identifying the convergence rate as a function of the number of sampling iterations. We then perform a change of variables to show that it also converges exponentially in the number of time steps.
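The per-iteration bookkeeping above can be sketched in a few lines of Python; the samplers are assumed callables returning single draws, and the sets are plain Python sets.

# Sketch of one FFROB sampling iteration for a single object (the update
# rules above). Every sampler is an assumed callable; 'goal_grasp' is
# only present for the object named in a holding goal.
def sampling_iteration(P, G, Q_ik, samplers):
    P |= {samplers["pose"](), samplers["goal_pose"]()}
    G |= {samplers["grasp"]()}
    if "goal_grasp" in samplers:
        G |= {samplers["goal_grasp"]()}
    for p in P:                        # one fresh IK sample per (pose, grasp)
        for g in G:
            q = samplers["ik"](p, g)   # draw from Q_ik(p x TRANSFORM(g)^-1)
            if q is not None:
                Q_ik.add(q)
    return P, G, Q_ik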
Theorem 3. For any robustly feasible PPM problem Π, the probability that FFROB fails to find a solution decreases exponentially as the number of sampling iterations n → ∞.

Proof. Recall that because FFROB's discrete search is complete, it will find a solution if a sufficient set of pose, grasp, and configuration samples is chosen. FFROB will fail to find a solution by the nth sampling iteration if and only if the set of samples on the nth iteration does not contain a solution. We first divide this sampling failure into the disjunction of three disjoint failure events: an insufficient set of mode samples; an insufficient set of mode switch inverse kinematic samples, assuming a sufficient set of mode samples; and an insufficient set of CRG configurations, assuming sufficient sets of mode and mode switch samples.

Let N_FFRob be a nonnegative random variable for the number of iterations before FFROB finds a plan given a robustly feasible PPM problem. Let N_mode, N_switch, and N_CRG be nonnegative random variables for the number of sampling iterations before FFROB has produced a sufficient set of mode, mode switch, and CRG samples respectively. The probability that FFROB fails to find a plan is the sum of the probabilities of the three disjoint failure events:

Pr[N_FFRob > n] = Pr[N_mode > n] + Pr[N_mode ≤ n, N_switch > n] + Pr[N_mode, N_switch ≤ n, N_CRG > n].

We start with Pr[N_mode > n]. Recall that a robustly feasible PPM problem has a non-negligible volume of mode sequences, which can be represented by an alternating sequence of (k − 1) sets of pose and grasp samples. Let P_i^{ohi−1} and G_j^{ohj} be these sets of samples. The probability of a mode sampling failure is bounded by the sum of the probabilities of a pose and a grasp sampling failure:

Pr[N_mode > n] ≤ Pr[N_pose > n] + Pr[N_grasp > n].   (4)

Because, on each iteration, FFROB independently, uniformly at random samples a pose from both the full domain and the goal for each object, the probability that a new sample is in a set of poses P_i^{ohi−1} is equal to the measure of P_i^{ohi−1} using the relative measure ˜µ:

Pr[p^{ohi−1} ∈ P_i^{ohi−1}] = ˜µ(P_i^{ohi−1}).

For simplicity, define ρ_p = min_i Pr[p^{ohi−1} ∈ P_i^{ohi−1}]. Using the union bound, the probability of a pose failure is less than the sum of the probabilities that all n pose samples land outside each P_i^{ohi−1}:

Pr[N_pose > n] = Pr[⋁_i (P^{ohi−1}(n) ∩ P_i^{ohi−1} = ∅)]
≤ Σ_i Pr[P^{ohi−1}(n) ∩ P_i^{ohi−1} = ∅]
≤ Σ_i (1 − Pr[p^{ohi−1} ∈ P_i^{ohi−1}])^n
≤ (k − 1)(1 − ρ_p)^n ≤ ke^{−ρ_p n}.

Similarly, for a set of grasps G_j^{ohj}:

Pr[g^{ohj} ∈ G_j^{ohj}] = ˜µ(G_j^{ohj}).

Analogously, define ρ_g = min_j Pr[g^{ohj} ∈ G_j^{ohj}] as the minimum probability of a grasp from each set in the sequence. Then,

Pr[N_grasp > n] = Pr[⋁_j (G^{ohj}(n) ∩ G_j^{ohj} = ∅)]
≤ Σ_j Pr[G^{ohj}(n) ∩ G_j^{ohj} = ∅]
≤ Σ_j (1 − Pr[g^{ohj} ∈ G_j^{ohj}])^n
≤ (k − 1)(1 − ρ_g)^n ≤ ke^{−ρ_g n}.

Moving on to Pr[N_mode ≤ n, N_switch > n], recall that FFROB samples an inverse kinematic configuration for all grasp and pose pairs on each iteration. Thus, the events for obtaining successful mode samples and successful mode switch samples are not independent. Consider the conditional probability Pr[N_switch > n | N_mode = i]. Given N_mode = i, FFROB will have (n − i) iterations to generate the mode switches. To simplify the analysis, we upper bound Pr[N_mode ≤ n, N_switch > n] by splitting the sampling into two failure cases, [N_mode > n/2] and [N_mode ≤ n/2, N_switch > n], which allows each case to be analyzed independently:

Pr[N_mode ≤ n, N_switch > n] ≤ Pr[N_mode > n/2] + Pr[N_mode ≤ n/2, N_switch > n].

We already have a bound for Pr[N_mode > n/2] from equation 4. What remains is to bound Pr[N_mode ≤ n/2, N_switch > n]. Each grasp and pose mode sample may be a seed for up to two inverse kinematic configurations. Without conditioning on the event that a full set of successful mode samples has been generated, it may be the case that what would be a successful inverse kinematic sample for a particular pose and grasp pair could still be a failure if it turned out that no full mode sequence using the pose and grasp was generated. Let Qi be the set of inverse kinematics solutions associated with the sampled mode sequence.
Because the set of samples induces a robustly feasible multi-mode-constrained motion planning problem, there exists ε > 0 such that ˜µ(Qi) ≥ ε ∀i ∈ {2, ..., k}. Recall that FFROB samples inverse kinematic configurations with probability density bounded away from zero by κ > 0. We will assume ε and κ are the minimum volumes and densities respectively across any of the mode sequences in θ. Thus, Pr[q ∈ Qi] ≥ κε ∀i ∈ {2, ..., k}.

Pr[N_mode ≤ n/2, N_switch > n]
= Σ_{i=1}^{n/2} Pr[N_mode = i] Pr[N_switch > n | N_mode = i]
≤ Pr[N_switch > n | N_mode = n/2] Σ_{i=1}^{n/2} Pr[N_mode = i]
≤ Pr[N_switch > n | N_mode = n/2]
= Pr[⋁_{i=2}^{k} (Q(n) ∩ Qi = ∅)]
≤ Σ_{i=2}^{k} Pr[Q(n) ∩ Qi = ∅]
≤ Σ_{i=2}^{k} (1 − Pr[q ∈ Qi])^{n/2}
≤ (k − 1)(1 − κε)^{n/2} ≤ ke^{−κεn/2}.   (5)

Finally, when analyzing Pr[N_mode ≤ n, N_switch ≤ n, N_CRG > n], recall that the CRG samples a roadmap configuration per iteration, independently of any of the other created samples. Thus, we do not have the dependence complication we faced when sampling mode switches. We can upper bound the probability of the conjunction with the probability of the conditional:

Pr[N_mode ≤ n, N_switch ≤ n, N_CRG > n] ≤ Pr[N_CRG > n | N_mode, N_switch ≤ n].

A successful set of mode switches will give rise to (k − 1) robustly feasible mode-constrained motion planning problems. Although the CRG does not know a priori which motion planning problems its samples will help solve, no part of its sampling is dependent on the targets because the samples are uniformly drawn from the unconstrained configuration space Q. The samples created could be useful for any of the motion planning problems that arise. Let L and δ > 0 be the largest plan length and smallest plan clearance across any of the robust motion planning problems. Using the union bound and theorem 2,

Pr[N_CRG > n | N_mode, N_switch ≤ n] ≤ k⌈2L/δ + 1⌉e^{−σn}.   (6)

Combining our bounds on the different types of failures according to equations 4, 5, and 6, we have:

Pr[N_FFRob > n] ≤ Pr[N_mode > n/2] + Pr[N_switch > n | N_mode = n/2] + Pr[N_CRG > n | N_mode ≤ n, N_switch ≤ n]
≤ ke^{−ρ_p n/2} + ke^{−ρ_g n/2} + ke^{−κεn/2} + k⌈2L/δ + 1⌉e^{−σn}
= k(e^{−ρ_p n/2} + e^{−ρ_g n/2} + e^{−κεn/2} + ⌈2L/δ + 1⌉e^{−σn}).

Thus, the probability that FFROB fails decreases exponentially in n.
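The combined bound can be evaluated numerically; the constants below are illustrative stand-ins with no connection to any real problem instance.

# Evaluate the combined FFROB failure bound derived above for made-up
# illustrative constants.
import math

def ffrob_failure_bound(k, rho_p, rho_g, kappa_eps, sigma, L, delta, n):
    return k * (math.exp(-rho_p * n / 2)
                + math.exp(-rho_g * n / 2)
                + math.exp(-kappa_eps * n / 2)
                + math.ceil(2 * L / delta + 1) * math.exp(-sigma * n))

for n in (200, 1000, 5000):
    print(n, ffrob_failure_bound(k=7, rho_p=0.02, rho_g=0.02,
                                 kappa_eps=0.01, sigma=0.005,
                                 L=10.0, delta=0.5, n=n))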
10.2.2 Convergence in Runtime While we have proven that FFROB converges exponentially in the number of sampling iterations, this does not necessarily imply that it converges exponentially in runtime. For example, if the number of operations between each sampling iteration grows exponentially in n, an algorithm that converges exponentially in n would only converge polynomially in the runtime t: for an example where t = e^n, Pr[failure] ≤ e^{−n} = e^{−ln t} = 1/t.

Recall that symbolic planning is known to be PSPACE-Complete (Bylander 1994). However, the discrete search will actually run in time polynomial in the number of samples because the addition of samples only increases the size of a variable's domain. The number of variables in the state-space |V| is fixed per problem based on the number of movable objects m. Thus, the size of the state-space grows only polynomially in the number of samples. The following lemma gives a very loose upper bound on the runtime of the discrete search on the nth sampling iteration.

Lemma 1. The running time of the discrete search for a PPM problem with m objects on the nth sampling iteration is O(m⁴n^{m+9}).

Proof. Our symbolic planning problem can be viewed as a directed graph search problem where the vertices V′ are possible states and the edges E′ are actions performed from a specific state. A discrete search, such as Dijkstra's algorithm, can find the optimal solution to graph search problems in O(|E′| + |V′| log |V′|). We compute |V′| by analyzing the size of the state-space S(n) = V′(n) after n iterations. Given our sampling strategy, we have the following quantities of samples after the nth iteration:

• Poses for object oi: |P^{oi}(n)| = O(n)
• Grasps for object oi: |G^{oi}(n)| = O(n)
• Robot inverse kinematic configurations: |Q(n)| ≤ Σ_{i=1}^{m} Σ_{j=1}^{n} |P^{oi}(j)||G^{oi}(j)| = O(mn³)
• Robot configurations: |V(n)| = O(n) + |Q(n)| = O(mn³)

In each state, the robot either has nothing in its hand with all the objects placed, or the robot is holding an object, so all objects but one are placed. Additionally, each state uses a single robot configuration. The size of S is upper bounded by the combinations of samples for these two types of states:

|S(n)| ≤ |V(n)| (∏_{i=1}^{m} |P^{oi}(n)| + Σ_{i=1}^{m} |G^{oi}(n)| ∏_{j≠i} |P^{oj}(n)|)
= O(mn³(n^m + m(n · n^{m−1}))) = O(m²n^{m+3}).

Next, we compute |E′| by first calculating the number of possible actions given these samples:

• Move actions: |Amove(n)| ≤ |V(n)|² = O(m²n⁶)
• Pick actions: |Apick(n)| = |Q(n)| = O(mn³)
• Place actions: |Aplace(n)| = |Q(n)| = O(mn³)

|A(n)| = |Amove(n)| + |Apick(n)| + |Aplace(n)| = O(m²n⁶).

From |S(n)| and |A(n)|, we can obtain a loose upper bound on the number of edges in the state-space graph: |E′(n)| = |S(n)||A(n)| = O(m⁴n^{m+9}). The number of edges dominates the Dijkstra runtime; thus the discrete search runs in O(m⁴n^{m+9}).

This analysis was meant to prove that the discrete search runs in polynomial time in n.
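The counting in the proof can be replayed numerically; the function below treats the asymptotic bounds as exact counts (dropping hidden constants), so the numbers are illustrative orders of magnitude only.

# Replay the counting from lemma 1 for concrete m and n, treating each
# O(.) bound as an exact count (illustrative only).
def lemma1_counts(m, n):
    poses = grasps = n                # |P^oi(n)|, |G^oi(n)|
    ik = m * n**3                     # |Q(n)| = O(m n^3)
    V = n + ik                        # roadmap configurations
    states = V * (poses**m + m * grasps * poses**(m - 1))   # |S(n)|
    actions = V**2 + 2 * ik           # MOVE + PICK + PLACE
    return states, actions, states * actions                # ~ |E'(n)|

print(lemma1_counts(m=3, n=10))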
We can compute the change of variables between iterations n and runtime t using lemma 1. Each iteration takes O(m) + O(m) + O(mn2) = O(mn2) time steps to produce the new samples before running the discrete search. This is clearly dominated by the planning time. t(n) = n (cid:88) i=1 O(m4im+9) n (cid:88) = m4 O(im+9) t=1 = O(m4nm+10) ≤ Cm4nm+10 for some C and ∀n ≥ n0 Inverting this mapping gives: n(t) ≥ (cid:16) n (cid:17)1/(m+10) Cm4 . Let C = min(ρp/2, ρg/2, κ(cid:15)/2, σ) and TF F Rob be a nonnegative random variable for the number time steps before FFROB finds a solution. Pr[TF F Rob > t] ≤ k (cid:109) e−C( n Km4 )1/(m+10) (cid:108) 2L/δ + 4 (cid:16) e−n1/(m+10) (cid:17) = O Again, m is fixed per the problem which slows but does not stop the exponential convergence. 10.2.3 Corollaries We arrive at the following corollaries. Corollary 1. FFROB has both a finite expected runtime and finite variance in runtime. Proof. This follows from the result that an exponentially convergent algorithm has a finite expected runtime and finite variance in runtime (Hauser and Latombe 2010). If FFROB is modified such that it continues to increase its set of samples even after finding a solution, it will converge to a solution that is in the minimal length robust set of mode sequences. This means it will find a solution that uses the fewest PICK and PLACE mode switches out of the set of solutions that are in a robust set of mode sequences. Moreover, this convergence will also be exponential. Prepared using sagej.cls The probability that FFROB has not Corollary 2. identified a solution contained in the minimal length robust set of mode sequences decreases exponentially in the number of time steps. Proof. By theorem 4, the probability that FFROB will not have samples corresponding to a solution for any robust set of mode sequences decreases exponentially as t → ∞. On each iteration, Dijkstra’s algorithm will find the optimal plan in terms of the number of PICK and PLACE actions (we will let MOVE actions have a cost of zero while PICK and PLACE have unit cost). As soon as samples that admit a minimal length mode sequence have been generated, Dijkstra’s algorithm will continue to produce a plan with this mode sequence length forever because additional samples could only produce a solution with a smaller length. FFROB will not terminate or declare when it has found an optimal solution, but it with high probability will eventually find a solution with minimal mode-sequence length in finite expected runtime. Additionally, the discrete search could be another optimal search algorithm instead of Dijkstra, such as A* with an admissible heuristic, that may have better practical performance. In symbolic planning, a cost sensitive version of hmax is guaranteed to be admissible. As an additional practical aside, once a candidate solution is found, the discrete search need not visit states which have summed path cost and admissible heuristic cost that exceeds the cost of the previously optimal plan. While this also does not affect the theoretical analysis, it will prune the search space in practice. 11 Experiments We experimentally evaluated seven configurations of FFROB using eight problems spanning rearrangement planning, NAMO, nonmonotonic planning, and task and motion planning. 11.1 Problems Problems 1-1 & 1-2 are simple rearrangement problems inspired by Krontiris and Bekris (2015). Each block has a specified goal configuration as represented by the color gradient. The robot may use a single top grasp. 
11 Experiments

We experimentally evaluated seven configurations of FFROB using eight problems spanning rearrangement planning, NAMO, nonmonotonic planning, and task and motion planning.

11.1 Problems

Problems 1-1 & 1-2 are simple rearrangement problems inspired by Krontiris and Bekris (2015). Each block has a specified goal configuration as represented by the color gradient. The robot may use a single top grasp. Problem 1-1 has two rows of four blocks. Problem 1-2 has two rows of eight blocks and is displayed in figure 21.

Problems 2-1 & 2-2 combine NAMO with tabletop pick-and-place. They require the robot to move the two blue blocks from the right table to the left table and return to its initial configuration. In order to reach the blue blocks, the robot must first move several red pillars out of the way to clear a path for its base. The wall of pillars is composed of two segments where the top segment is thinner than the bottom segment. This is designed to test whether the heuristic can identify actions that lead to shorter plans that only move the top pillars. Problem 2-1 has a top segment of width one and a bottom segment of width two. Problem 2-2, displayed in figure 22, has a top segment of width two and a bottom segment of width three.

Figure 21. The second state and last state on a plan for problem 1-2.

Figure 22. The second state and last state on a plan for problem 2-2.

Problems 3-1 & 3-2 are examples of highly nonmonotonic problems, problems that require undoing goal conditions along solutions. The robot must move the green blocks from the left table to a corresponding position on the right table. Both the initial and goal poses are blocked by four blue and cyan blocks respectively. Critically, the blue and cyan blocks have goal conditions to remain in their initial poses. This is the source of the nonmonotonicity, as the robot must undo several goals by moving the blue and cyan blocks in order to solve the problem. Problem 3-1 includes only one green block. Problem 3-2, displayed in figure 23, includes three green blocks.

Problem 4 in figure 24 is from Srivastava et al. (2014). The robot must retrieve the red cylinder from within the cluttered table of cyan cylinders. The goal conditions are for the robot to be holding the red cylinder at its initial configuration.

Problem 5 in figure 25 is a task and motion planning problem in a cooking domain. The robot must retrieve two heads of cabbage (green blocks) from the shelves, clean the heads, cook the heads, and place them on the plates. Turnips (pink blocks) are present on the shelves and must be moved to reach the cabbage. However, the turnips must be returned to the shelves if moved. The robot must also wash the cups (blue and cyan blocks) and set the table using the blue cups. The cyan cup is not needed for dinner. This problem requires the purely symbolic literals Cleaned and Cooked as well as actions CLEAN and COOK, which are shown in figure 4. The top of the dishwasher is used to clean food and cups. The top of the microwave is used to cook food. Objects become transparent when cleaned or cooked.

Figure 23. The second state and last state on a plan for problem 3-2.

Figure 24. The second state and last state on a plan for problem 4.

Figure 25. The second state and last state on a plan for problem 5.

11.2 Heuristics

We experimented with several versions of FFROB using different heuristics. For all heuristics, as previously described, we automatically generate new samples if the FFROB heuristic is infinite before planning because a finite heuristic value is a necessary condition for feasibility. The following heuristics are compared in the experiments:

1. H0: The heuristic returns 0.

2. HGoals: This calls UNSATISFIED-GOALS to return the number of unsatisfied goal conditions.
3. HFF: This is the original FF heuristic based only on simple conditions. It explicitly ignores complex conditions, namely reachability conditions. This heuristic represents the semantic attachments strategy of Dornhege et al. (2009).

4. HMaxRob: This uses MAX-COMB when performing COMPUTE-COSTS to produce its estimate.

5. HAddRob: This uses ADD-COMB when performing COMPUTE-COSTS to produce its estimate.

6. HFFRob: This finds a relaxed plan using the ADD-COMB version of COMPUTE-COSTS and EXTRACT-RELAXED-PLAN.

7. HFFRob, HA: This first finds a relaxed plan using the h_add version of COMPUTE-COSTS and EXTRACT-RELAXED-PLAN, and then computes goal-achieving helpful actions based on that plan.

Figure 26. A bar graph of the overall success rate across all problems per algorithm.

We use deferred best-first search, described in appendix A.2, as the search control for our experiments. Deferred best-first search typically outperforms standard best-first search because of its lazy evaluation of successors.
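The difference between the MAX-COMB and ADD-COMB variants above comes down to how the costs of a set of conditions are combined. A minimal sketch of the two combination rules (the function names and signatures here are ours, not the paper's pseudocode):

def max_comb(condition_costs):
    # h_max-style combination: a set of conditions costs as much as its
    # most expensive member (admissible, but weakly informative).
    return max(condition_costs, default=0)

def add_comb(condition_costs):
    # h_add-style combination: treat conditions as independent and sum
    # their costs (inadmissible, but far more informative in practice).
    return sum(condition_costs)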
11.3 Implementation

We implemented FFROB in Python using the OpenRAVE robotics framework (Diankov and Kuffner 2008) for simulation. We use a simulated PR2 robot with a mobile base and a single active manipulator. Thus, there are 10 active degrees of freedom. We used Open Dynamics Engine (Smith 2005) for collision checks. Additionally, we employ bounding boxes and cache collision checks in order to reduce the collision-checking overhead.

The sampling parameters θ used in our experiments were chosen relatively arbitrarily and are fixed across all problems and heuristics. In practice, we do not increase the sampling parameter sizes upon a sampling failure. We restrict the robot to four side grasps per object except on problems 1-1 & 1-2, where we use a single top grasp. We randomly sample 25 general poses per object type in addition to 5 poses per specified symbolic region. We bias the general poses to be collision-free in the start state. We attempt to sample a single grasp trajectory per grasp and object pose. We enforce timeouts of 30 iterations for S-PICK-PLACE due to inverse reachability, inverse kinematics, or motion planning failures. We increase this number to 50 for objects that have an explicit goal condition. We reuse placement trajectories for objects with similar geometries. We impose a pruning rule that dynamically limits the number of successor PLACE actions considered that do not achieve a goal to 5. This allows a large number of placements to be created for constrained problems without greatly increasing the branching factor. The approach trajectories sampled using S-APPR-TRAJ control only the robot manipulator, and the roadmap trajectories sampled using S-TRAJ control only the robot base. We use a star-graph CRG for all problems except problems 2-1 & 2-2, where moveable objects are located on the floor. For these problems, the CRG is a fixed-degree PRM with a degree of 4 and 50 sampled intermediate roadmap configurations.

11.4 Results

Table 1 displays the results of the experiments. Each problem and algorithm combination was simulated over 50 trials. Each simulation had a timeout of 300 seconds. Simulations were performed on a single Intel Xeon E5 v3 2.5GHz processor. Each entry in the table reports the success rate (%) and the mean runtime (sec) over solved instances. The runtimes measure the full algorithm runtime, including the sampling, collision-checking, pre-processing, and discrete planning. We observed that the data often has large runtime outliers, causing the mean runtime to be larger than the median runtime. Runtime entries with a dash (-) indicate that the algorithm did not solve any of the simulations for that problem.

Table 1. Experiment results over 50 trials. Each cell reports success rate % / mean runtime (sec).

P   | HGoals   | HFF      | HMaxRob  | HAddRob  | HFFRob   | HFFRob,HA
1-1 | 100 / 5  | 100 / 2  | 0 / -    | 100 / 4  | 100 / 4  | 100 / 2
1-2 | 98 / 27  | 96 / 35  | 0 / -    | 98 / 26  | 100 / 44 | 100 / 23
2-1 | 40 / 78  | 80 / 73  | 94 / 74  | 98 / 34  | 94 / 33  | 96 / 26
2-2 | 0 / -    | 12 / 182 | 46 / 115 | 62 / 91  | 82 / 60  | 76 / 50
3-1 | 20 / 245 | 92 / 109 | 0 / -    | 90 / 104 | 88 / 57  | 96 / 19
3-2 | 0 / -    | 0 / -    | 0 / -    | 0 / -    | 0 / -    | 72 / 135
4   | 0 / -    | 0 / -    | 30 / 97  | 46 / 126 | 78 / 38  | 74 / 31
5   | 0 / -    | 0 / -    | 0 / -    | 74 / 56  | 84 / 137 | 76 / 44

Figure 26 displays the overall percent of solved instances across all problems for each algorithm. H0 was unable to solve any of the problems and is excluded from the results. HFFRob, HA gave the best performance in both success rate and runtime. Helpful actions improved the performance of HFFRob, HA over HFFRob. HMaxRob performed worse than HGoals and HFF although it was not strictly worse across all problem instances. In particular, HMaxRob performed poorly on problems 1-1, 1-2, and 3-1 because it requires about the same amount of overhead to evaluate as HFFRob while providing a much less informative heuristic estimate. HFF performed worse than HFFRob, indicating that reachability information is vital for producing an informative heuristic for geometrically non-trivial task and motion planning problems.

Problem 1-2 can be compared to the grid @ tabletop (14 objects) problem of Krontiris and Bekris (2015). All but 1 of our algorithms were able to solve it with an above 95 percent success ratio in less than 40 seconds. The fact that HGoals was able to solve problems 1-1 and 1-2 indicates that they require little heuristic guidance. Because each object has an explicit goal, HGoals corresponds well with the actual distance to the goal, which is approximately twice HGoals. In the event that HGoals must move an object more than once along the plan, it will rely on brute-force search to reach a lower heuristic value. Thus, we observed that HGoals required more expansions to solve a problem than HFFRob but was able to perform each expansion more quickly because of the lower heuristic overhead.

Problems 2-1 & 2-2 demonstrate that FFROB can solve both NAMO and traditional pick-and-place problems using the same algorithm. Problems 3-1 & 3-2 proved the most difficult because of their large amount of nonmonotonicity. Even in discrete settings, nonmonotonicity causes FF to produce poor heuristic estimates because of its delete-relaxation. However, our best algorithm manages to solve problem 3-2, demonstrating the ability to tackle problems with these elements. The results of problem 4 suggest that strong heuristics are particularly necessary in problems with large branching factors resulting from large numbers of movable objects. Problem 5 demonstrates that FFROB is able to quickly solve a long-horizon, real-world problem involving symbolic actions, cluttered environments, and nonmonotonic requirements.

For the problems we posed, we noticed that FFROB never performs more than one iteration of sampling and planning. Verifying that the discretized planning problem does not have a solution is computationally expensive because the worst-case complexity grows exponentially in the number of moveable objects. In practice, terminating the search after a finite amount of time to generate new samples will result in better performance. Generally, if the discrete search using HFFRob, HA is given a feasible set of samples, it will identify a solution before a short timeout for non-adversarial problems. Thus, it can be advantageous to restart with a new set of samples even in cases where the original set of samples would have been sufficient.
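A sketch of that practical variant (our illustration; sample_initial, sample_more, and discrete_search are hypothetical helper names, not the actual implementation): alternate between growing the sample set and running a time-limited discrete search.

def ffrob_with_restarts(problem, theta, search_timeout):
    # Illustrative outer loop: grow the sample set whenever the
    # discrete search exceeds its timeout without finding a plan.
    samples = sample_initial(problem, theta)        # hypothetical helper
    while True:
        plan = discrete_search(problem, samples,    # hypothetical helper
                               timeout=search_timeout)
        if plan is not None:
            return plan
        samples |= sample_more(problem, theta)      # hypothetical helper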
12 Conclusion

FFROB is a probabilistically complete algorithm for task and motion planning. It uses EAS, a generalized action representation, to model discretized planning problems with complex conditions. We adapt the FF heuristic to EAS problems in order to provide strong heuristic guidance for these problems. FFROB iteratively discretizes a task and motion planning problem and runs an EAS planner to evaluate if the current set of samples is sufficient for a solution. This leads to a probabilistically complete algorithm, as FFROB will, with probability approaching one, generate a set of samples that contains a solution. In our results, we show that including geometric information in a heuristic is critical for the resulting search algorithm to efficiently solve manipulation problems involving interesting geometric constraints. Additionally, we show that a single algorithm can efficiently solve a diverse set of task and motion planning problems.

Future work includes analytically and empirically investigating the quality of solutions returned by FFROB with respect to costs. In corollary 2, we briefly remark that FFROB, with an optimal implementation of SEARCH, will produce a solution no longer than the minimum length robustly feasible solution in finite time. This is the setting where PICK and PLACE actions have unit costs while MOVE actions have zero cost. This can be easily extended to problems in which PICK, PLACE, MOVE, and other actions have arbitrary, nonnegative fixed costs. A more interesting case is when MOVE actions have costs dependent on the control effort required to move between their start and end configurations. This requires identifying a class of robustly optimal PPM problems and investigating whether sampling-based algorithms are asymptotically optimal, i.e. whether they with high probability converge to an optimal solution (Karaman and Frazzoli 2011). We suspect that FFROB would be asymptotically optimal for a similar set of conditions as those presented in section 10.

Because FFROB samples continuous values independently of its search, it often produces many unnecessary samples. This is undesirable because sampling can be time intensive and a large number of samples can slow SEARCH. For example, in figure 24, FFROB samples poses, grasps, configurations, and trajectories for each cylinder on the table although only a few are required to reach the red cylinder. Future work involves using the planning to guide the sampling, as done by Garrett et al. (2015). Additional future work involves applying FFROB to different manipulation tasks or, more generally, planning domains involving continuous variables. For domains involving stacking, where there are many possible stacking combinations, the combinatorial growth in possible samples may overwhelm FFROB. In that case, a careful sampling strategy would be needed.

13 Acknowledgements

We gratefully acknowledge support from NSF grants 1122374, 1420927, and 1523767, from ONR grant N00014-14-1-0486, and from ARO grant W911NF1410433. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of our sponsors.

Appendix A: Search

A.1 Best-First Search

Best-first search extracts the element in the queue that minimizes a cost function f(n). Common cost functions include the path cost f(n) ≡ n.g (Dijkstra's algorithm), the sum of path cost and heuristic cost f(n) ≡ n.g + n.h (A*), a weighted sum of path cost and heuristic cost f(n; w) ≡ (1 − w) n.g + w n.h (weighted A*), and the heuristic cost f(n) ≡ n.h (greedy best-first).

BFS-EXTRACT(Q, f)
1 return POP-MIN(Q, f)

BFS-PROCESS(Q, s', n; H)
1 PUSH(Q, STATEN(s', n.g + 1, H(s'), n))

Figure 27. Best-first search extract and process procedures.
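These four cost functions are all instances of one generic best-first loop parameterized by f. A minimal sketch (ours, not the paper's pseudocode; successors is assumed to yield (state, cost) pairs):

import heapq, itertools

def best_first_search(start, successors, h, f, is_goal):
    # f(g, h) selects the strategy:
    #   lambda g, h: g                 -> Dijkstra's algorithm
    #   lambda g, h: g + h             -> A*
    #   lambda g, h: (1 - w)*g + w*h   -> weighted A*
    #   lambda g, h: h                 -> greedy best-first
    tie = itertools.count()
    queue = [(f(0, h(start)), next(tie), 0, start)]
    closed = set()
    while queue:
        _, _, g, state = heapq.heappop(queue)
        if state in closed:
            continue
        closed.add(state)
        if is_goal(state):
            return g
        for nxt, cost in successors(state):
            if nxt not in closed:
                heapq.heappush(queue,
                               (f(g + cost, h(nxt)), next(tie), g + cost, nxt))
    return None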
A.2 Deferred Best-First Search

Deferred best-first search (also called lazy greedy search) is a variant of standard best-first search (Helmert 2006). It defers the evaluation of successor states until they are extracted from the queue in order to reduce the number of heuristic evaluations. Successor states temporarily use their parent's heuristic cost while in the queue. The intuition behind this approach is that successors of a state s are often added to the queue in an order given by helpful actions such that states believed to be closer to the goal are processed first. If an extracted state s' has a low heuristic cost, its own successors will temporarily have that cost when they are added to the queue. Thus, the full set of successors of s will likely not be processed because the search greedily proceeds down the subtree rooted at s'. The EXTRACT procedure is the same as for standard best-first search, but the PROCESS procedure is slightly modified to use the parent state's heuristic cost.

DBFS-PROCESS(Q, s', n; H)
1 PUSH(Q, STATEN(s', n.g + 1, H(n.s), n))

Figure 28. Deferred best-first search process procedure.

Appendix B: Review of sPRM Analysis

B.1 Proof of Theorem 1

Theorem 5. For any robustly feasible motion planning problem, there exists a sequence of k + 1 d-spheres (B_0, B_1, ..., B_k), where k = ⌈2L/δ⌉, centered at τ(Li/k) for i ∈ {0, ..., k}, each with radius δ/2, such that any trajectory τ' linearly interpolated from (q_0, q̄_0, q̄_1, ..., q̄_k, q*), where q̄_i ∈ B_i for i ∈ {0, ..., k}, is a collision-free solution to the motion planning problem.

Proof. First consider the following lemma.

Lemma 2. B_{i+1} ⊆ B(τ(Li/k), δ)

Proof. By our construction, using x ≤ ⌈x⌉,

‖τ(L(i+1)/k) − τ(Li/k)‖ ≤ L(i+1)/k − Li/k = L/k ≤ L/(2L/δ) = δ/2.

Now for any q̄_{i+1} ∈ B_{i+1}, by the triangle inequality,

‖q̄_{i+1} − τ(Li/k)‖ ≤ ‖q̄_{i+1} − τ(L(i+1)/k)‖ + ‖τ(L(i+1)/k) − τ(Li/k)‖ ≤ δ/2 + δ/2 = δ.

Thus, each q̄_{i+1} is contained in B(τ(Li/k), δ), which implies B_{i+1} ⊆ B(τ(Li/k), δ).

Because τ has at least δ clearance from obstacles, all points within B(τ(t), δ) for t ∈ [0, L] are collision-free. Moreover, line segments between any two points in B(τ(t), δ) for t ∈ [0, L] are collision-free because they are contained in B(τ(t), δ) by convexity of the d-sphere. Using lemma 2, the line segment between any q̄_i ∈ B_i and any q̄_{i+1} ∈ B_{i+1} is collision-free because both q̄_i and q̄_{i+1} are contained within B(τ(Li/k), δ). By applying this to all i, i + 1, we see that the linearly interpolated trajectory (q̄_0, q̄_1, q̄_2, ..., q̄_k) is thus collision-free. Finally, each segment from q_0 to q̄_0 ∈ B_0 or from q* to q̄_k ∈ B_k is collision-free by the problem being robustly feasible. So, the linearly interpolated trajectory (q_0, q̄_0, q̄_1, q̄_2, ..., q̄_k, q*) is also collision-free.
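A small numerical illustration of this construction (ours, not from the paper; tau is assumed to be an arc-length-parameterized trajectory returning configurations as NumPy arrays): sample one configuration inside each radius-δ/2 ball along τ.

import numpy as np

def ball_waypoints(tau, L, delta, rng=np.random.default_rng(0)):
    # Illustrative sketch of Theorem 5's construction: one sample per
    # ball B_i of radius delta/2 centered at tau(L*i/k), k = ceil(2L/delta).
    k = int(np.ceil(2 * L / delta))
    waypoints = []
    for i in range(k + 1):
        center = tau(L * i / k)
        d = rng.normal(size=center.shape)
        d /= np.linalg.norm(d)                                  # random direction
        r = (delta / 2) * rng.random() ** (1.0 / center.size)   # uniform in ball
        waypoints.append(center + r * d)
    return waypoints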
References

Alami R, Laumond JP and Siméon T (1994) Two manipulation planning algorithms. In: Workshop on Algorithmic Foundations of Robotics (WAFR).

Alami R, Siméon T and Laumond JP (1990) A geometrical approach to planning manipulation tasks: the case of discrete placements and grasps. In: International Symposium of Robotic Research (ISRR).

Bäckström C and Nebel B (1995) Complexity results for SAS+ planning. Computational Intelligence 11(4): 625–655.

Barry J, Kaelbling LP and Lozano-Pérez T (2013) A hierarchical approach to manipulation with diverse actions. In: IEEE International Conference on Robotics and Automation (ICRA). IEEE, pp. 1799–1806.

Blum AL and Furst ML (1997) Fast planning through planning graph analysis. Artificial Intelligence 90(1): 281–300.

Bonet B and Geffner H (1999) Planning as heuristic search: New results. In: Proc. of 5th European Conf. on Planning (ECP). pp. 360–372.

Bonet B and Geffner H (2001) Planning as heuristic search. Artificial Intelligence 129(1): 5–33.

Bylander T (1994) The computational complexity of propositional STRIPS planning. Artificial Intelligence 69(1): 165–204.

Cambon S, Alami R and Gravot F (2009) A hybrid approach to intricate motion, manipulation and task planning. International Journal of Robotics Research (IJRR) 28.

Coles A, Coles A, Fox M and Long D (2013) A hybrid LP-RPG heuristic for modelling numeric resource flows in planning. Journal of Artificial Intelligence Research (JAIR) 46(1): 343–412.

Dantam NT, Kingston Z, Chaudhuri S and Kavraki LE (2016) Incremental task and motion planning: A constraint-based approach. In: Robotics: Science and Systems.

de Silva L, Pandey AK, Gharbi M and Alami R (2013) Towards combining HTN planning and geometric task planning. In: RSS Workshop on Combined Robot Motion Planning and AI Planning for Practical Applications.

Diankov R (2010) Automated construction of robotic manipulation programs. PhD Thesis, Robotics Institute, Carnegie Mellon University.

Diankov R and Kuffner J (2008) OpenRAVE: A planning architecture for autonomous robotics. Technical Report CMU-RI-TR-08-34, Robotics Institute, Carnegie Mellon University.

Dogar MR and Srinivasa SS (2012) A planning framework for non-prehensile manipulation under clutter and uncertainty. Autonomous Robots 33(3): 217–236.

Dornhege C, Eyerich P, Keller T, Trüg S, Brenner M and Nebel B (2009) Semantic attachments for domain-independent planning systems. In: International Conference on Automated Planning and Scheduling (ICAPS). AAAI Press, pp. 114–121.

Dornhege C, Hertle A and Nebel B (2013) Lazy evaluation and subsumption caching for search-based integrated task and motion planning. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) Workshop on AI-based robotics.

Erdem E, Haspalamutgil K, Palaz C, Patoglu V and Uras T (2011) Combining high-level causal reasoning with low-level geometric reasoning and motion planning for robotic manipulation. In: IEEE International Conference on Robotics and Automation (ICRA).

Fikes RE and Nilsson NJ (1971) STRIPS: A new approach to the application of theorem proving to problem solving. Artificial Intelligence 2: 189–208.
Garrett CR, Lozano-Pérez T and Kaelbling LP (2014) FFRob: An efficient heuristic for task and motion planning. In: Workshop on the Algorithmic Foundations of Robotics (WAFR).

Garrett CR, Lozano-Pérez T and Kaelbling LP (2015) Backward-forward search for manipulation planning. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). URL http://lis.csail.mit.edu/pubs/garrett-iros15.pdf.

Ghallab M, Nau DS and Traverso P (2004) Automated Planning: Theory and Practice. Elsevier.

Gregory P, Long D, Fox M and Beck JC (2012) Planning modulo theories: Extending the planning paradigm. In: International Conference on Automated Planning and Scheduling (ICAPS).

Hauser K and Latombe J (2009) Integrating task and PRM motion planning: Dealing with many infeasible motion planning queries. In: International Conference on Automated Planning and Scheduling (ICAPS) Workshop on Bridging the Gap between Task and Motion Planning.

Hauser K and Latombe JC (2010) Multi-modal motion planning in non-expansive spaces. International Journal of Robotics Research (IJRR) 29: 897–915.

Hauser K and Ng-Thow-Hing V (2011) Randomized multi-modal motion planning for a humanoid robot manipulation task. International Journal of Robotics Research (IJRR) 30(6): 676–698.

Helmert M (2006) The Fast Downward planning system. Journal of Artificial Intelligence Research (JAIR) 26: 191–246.

Hoffmann J and Nebel B (2001) The FF planning system: Fast plan generation through heuristic search. Journal of Artificial Intelligence Research (JAIR) 14: 253–302.

Kaelbling LP and Lozano-Pérez T (2011) Hierarchical planning in the now. In: IEEE International Conference on Robotics and Automation (ICRA).

Karaman S and Frazzoli E (2011) Sampling-based algorithms for optimal motion planning. International Journal of Robotics Research (IJRR) 30(7): 846–894.

Kavraki LE, Kolountzakis MN and Latombe JC (1998) Analysis of probabilistic roadmaps for path planning. IEEE Transactions on Robotics and Automation 14(1): 166–171.

Kavraki LE and Latombe JC (1998) Probabilistic roadmaps for robot path planning. Practical Motion Planning in Robotics: Current Approaches and Future Directions.

Kavraki LE, Latombe JC, Motwani R and Raghavan P (1995) Randomized query processing in robot path planning. In: Proceedings of the twenty-seventh annual ACM symposium on Theory of computing. ACM, pp. 353–362.

Kavraki LE, Svestka P, Latombe JC and Overmars MH (1996) Probabilistic roadmaps for path planning in high-dimensional configuration spaces. IEEE Transactions on Robotics and Automation 12(4): 566–580.

King J, Cognetti M and Srinivasa S (????) Rearrangement planning using object-centric and robot-centric action spaces. In: IEEE International Conference on Robotics and Automation.

Krontiris A and Bekris KE (2015) Dealing with difficult instances of object rearrangement. In: Robotics: Science and Systems (RSS). Rome, Italy. URL http://www.cs.rutgers.edu/~kb572/pubs/Krontiris_Bekris_rearrangement_RSS2015.pdf.

Krontiris A and Bekris KE (2016) Efficiently solving general rearrangement tasks: A fast extension primitive for an incremental sampling-based planner. In: International Conference on Robotics and Automation (ICRA). Stockholm, Sweden. URL http://www.cs.rutgers.edu/~kb572/pubs/fast_object_rearrangement.pdf.

Kuffner JJ Jr and LaValle SM (2000) RRT-Connect: An efficient approach to single-query path planning. In: IEEE International Conference on Robotics and Automation (ICRA).
Lagriffoul F, Dimitrov D, Bidot J, Saffiotti A and Karlsson L (2014) Efficiently combining task and motion planning using geometric constraints. International Journal of Robotics Research (IJRR): 0278364914545811.

Lagriffoul F, Dimitrov D, Saffiotti A and Karlsson L (2012) Constraint propagation on interval bounds for dealing with geometric backtracking. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).

Leven P and Hutchinson S (2002) A framework for real-time path planning in changing environments. International Journal of Robotics Research (IJRR) 21(12): 999–1030.

Lozano-Pérez T (1981) Automatic planning of manipulator transfer movements. IEEE Transactions on Systems, Man, and Cybernetics 11: 681–698.

Lozano-Pérez T, Jones JL, Mazer E, O'Donnell PA, Grimson WEL, Tournassoud P and Lanusse A (1987) Handey: A robot system that recognizes, plans, and manipulates. In: IEEE International Conference on Robotics and Automation (ICRA).

Lozano-Pérez T and Kaelbling LP (2014) A constraint-based method for solving sequential manipulation planning problems. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, pp. 3684–3691.

McDermott D, Ghallab M, Howe A, Knoblock C, Ram A, Veloso M, Weld D and Wilkins D (1998) PDDL: The planning domain definition language. Technical report, Yale Center for Computational Vision and Control.

Nilsson NJ (1984) Shakey the robot. Technical Report 323, Artificial Intelligence Center, SRI International, Menlo Park, California.

Pandey AK, Saut JP, Sidobre D and Alami R (2012) Towards planning human-robot interactive manipulation tasks: Task dependent and human oriented autonomous selection of grasp and placement. In: RAS/EMBS International Conference on Biomedical Robotics and Biomechatronics.

Plaku E and Hager G (2010) Sampling-based motion planning with symbolic, geometric, and differential constraints. In: IEEE International Conference on Robotics and Automation (ICRA).

Siméon T, Laumond JP, Cortés J and Sahbani A (2004) Manipulation planning with probabilistic roadmaps. International Journal of Robotics Research (IJRR) 23(7-8): 729–746.

Smith R (2005) Open Dynamics Engine.

Srivastava S, Fang E, Riano L, Chitnis R, Russell S and Abbeel P (2014) Combined task and motion planning through an extensible planner-independent interface layer. In: IEEE International Conference on Robotics and Automation (ICRA).

Stilman M and Kuffner JJ (2006) Planning among movable obstacles with artificial constraints. In: Workshop on Algorithmic Foundations of Robotics (WAFR).

Stilman M, Schamburek JU, Kuffner JJ and Asfour T (2007) Manipulation planning among movable obstacles. In: IEEE International Conference on Robotics and Automation (ICRA).

Toussaint M (2015) Logic-geometric programming: an optimization-based approach to combined task and motion planning. In: AAAI Conference on Artificial Intelligence. AAAI Press, pp. 1930–1936.

Van Den Berg J, Stilman M, Kuffner J, Lin M and Manocha D (2009) Path planning among movable obstacles: a probabilistically complete approach. In: Algorithmic Foundation of Robotics VIII. Springer, pp. 599–614.

Wilfong GT (1988) Motion planning in the presence of movable obstacles. In: Symposium on Computational Geometry. pp. 279–288.
ai_researcher
1
Resilience_Evaluation_of_Urban_Bus-Subway_Traffic_Networks_for_Potential_Applications_in_IoT-Based_Smart_Transportation.pdf
PLEA 2024 WROCŁAW (Re)thinking Resilience

Understanding the Transit Gap: A Comparative Study of On-Demand Bus Services and Urban Climate Resilience in South End, Charlotte, NC and Avondale, Chattanooga, TN

SANAZ SADAT HOSSEINI,1 BABAK RAHIMI ARDABILI,2 MONA AZARBAYJANI,3 SRINIVAS PULUGURTHA,1 HAMED TABKHI4
1Department of Civil and Environmental Engineering, University of North Carolina at Charlotte, Charlotte, NC, USA
2Department of Public Policy, University of North Carolina at Charlotte, Charlotte, NC, USA
3School of Architecture, University of North Carolina at Charlotte, Charlotte, NC, USA
4Department of Electrical and Computer Engineering, University of North Carolina at Charlotte, Charlotte, NC, USA

ABSTRACT: Urban design significantly impacts sustainability, particularly in the context of public transit efficiency and carbon emissions reduction. This study explores two neighborhoods with distinct urban designs: South End, Charlotte, NC, featuring a dynamic mixed-use urban design pattern, and Avondale, Chattanooga, TN, with a residential suburban grid layout. Using the TRANSIT-GYM tool, we assess the impact of increased bus utilization in these different urban settings on traffic and CO2 emissions. Our results highlight the critical role of urban design and planning in transit system efficiency. In South End, the mixed-use design led to more substantial emission reductions, indicating that urban layout can significantly influence public transit outcomes. Tailored strategies that consider the unique urban design elements are essential for climate resilience. Notably, doubling bus utilization decreased daily emissions by 10.18% in South End and 8.13% in Avondale, with a corresponding reduction in overall traffic. A target of 50% bus utilization saw emissions drop by 21.45% in South End and 14.50% in Avondale. At an idealistic goal of 70% bus utilization, South End and Avondale witnessed emission reductions of 37.22% and 27.80%, respectively. These insights are crucial for urban designers and policymakers in developing sustainable urban landscapes.

KEYWORDS: Urban Design Patterns, Urban Transportation, Demand-Responsive Bus Services, CO2 Emissions, Climate Resilience

1. INTRODUCTION

Urban design pattern significantly shapes transportation behaviors, influencing cities to either promote or hinder public transit. Pedestrian-friendly environments with walkable and bikeable designs lead to reduced car ownership and increased reliance on public transportation. Conversely, many post-World War II American cities adopted car-dependent designs, leading to a decline in public transit use, notably bus transit. Main challenges plague US urban design and public bus transit, including urban sprawl, reduced bus ridership, rising costs, fleet limitations, inadequate bus stop coverage, and safety concerns, contributing to a shift toward private vehicles and consequently increased carbon emissions. By mid-2022, ridership had fallen to 62% of pre-pandemic levels, with some cities, like Charlotte, experiencing a 75% drop between 2014 and 2022, particularly in post-WWII car-dependent cities (Fig. 1) [1-4]. Moreover, the broader availability of personal cars, driven by affordable ownership and low gas prices, contributes to a national decline in public transportation usage, leading to heightened traffic congestion, increased CO2 emissions, and negative environmental impacts, especially affecting underserved communities relying on older, high-emission vehicles.
Traditional bus transit systems struggle to adapt to the dynamic nature of modern urban life. Our proposed solution integrates urban design principles with technology, envisioning a demand-responsive public bus system dynamically matching riders with available buses based on real-time demand [5].

Figure 1: Bus ridership reduction relative to 2010.

We aim to unlock the potential of bus transit, creating an appealing ecosystem for a diverse range of commuters, including underserved communities and overlooked middle-class residents from various urban settings in cities. This study conducts a comparative analysis of two case studies, South End in Charlotte, NC and Avondale in Chattanooga, TN, utilizing the TRANSIT-GYM simulation tool to model potential impacts of this bus system with increased utilization in areas with different urban design layouts. Preliminary results show that pedestrian-friendly areas like South End exhibit less transit inequality, with simulations indicating reduced traffic congestion and CO2 emissions. Doubling bus usage reduces daily emissions in South End by 10.18% and Avondale by 8.13%, with overall traffic decreasing by 12.06% in South End and 8.90% in Avondale. At 50% bus utilization, South End sees a 21.45% drop in daily emissions, while Avondale experiences a 14.50% decrease. A 70% bus utilization target results in a 37.22% emissions reduction for South End and a 27.80% decrease for Avondale. The outcomes of these simulations offer a roadmap for how municipalities might approach urban planning in the future, ensuring that it is both efficient and environmentally conscious.

The main contributions of this paper are as follows:
1. Utilizing the TRANSIT-GYM simulation tool to quantify the benefits of increasing bus ridership through the hypothetical demand-responsive bus system and its effects on traffic reduction and CO2 emissions in different urban settings.
2. Investigating how different urban design patterns affect transit behaviour, CO2 emissions, and the feasibility of on-demand bus systems in South End and Avondale.
3. Highlighting the transformative potential of increasing bus ridership in enhancing climate resilience, reducing CO2 emissions, and addressing transportation disparities caused by different urban design patterns.

The paper is structured as follows: Section 2 reviews research on urban design's impact on public transit and sustainability, Section 3 explains the selection of South End and Avondale for impact assessment, Section 4 presents simulation results, and Sections 5 and 6 discuss the implications and future research directions.

2. LITERATURE REVIEW

Recent literature emphasizes the intricate relationship between urban design and public transit efficiency to enhance urban sustainability, particularly through pedestrian-friendly designs that address the 'transit gap.' TRANSIT-GYM, developed by R. Sun et al., serves as a dynamic tool optimizing public transit operations and energy costs, facilitating efficient and equitable planning [6]. Sen et al.'s BTE-Sim offers a simulation environment for rapid modeling and optimization of public transportation systems, empowering urban planners to enhance network performance and cost-effectiveness [7]. Tao Ji et al. stress the urgency of resilient transportation infrastructure in urban areas amidst climate change challenges [8, 9].
Winkler et al.'s study critically evaluates challenges linked to reducing urban transport emissions, underscoring the necessity for comprehensive policies targeting carbon reduction in city-level emissions [10]. Jing et al.'s research delves into the intricate relationship between public transport development and carbon emissions reduction, unveiling an inverted U-shaped curve. Their findings suggest measures such as green infrastructure, government-market coordination, and energy transformation to effectively mitigate carbon emissions [11].

These studies and tools highlight the importance of simulation and AI in creating efficient, demand-responsive transit systems, improving urban resilience, lowering emissions, and informing future research on transit improvements' impacts.

Our research adds to the current body of knowledge by conducting empirical analyses and simulations in South End, Charlotte and Avondale, Chattanooga. We focus on how transitioning from private cars to public transit influences CO2 emissions and transportation efficiency in these different urban settings. This study presents two key research questions that drive our investigation and findings.
1. How significantly will the scenario reduce CO2 emissions and ease traffic in the case study areas?
2. How do each area's urban design and street layouts influence the simulation results?

3. METHODOLOGY

In this section, we explain the rationale behind selecting Charlotte and Chattanooga, discuss data sources and the simulation process, and describe the scenarios under consideration.

3.1 Selection of Case Study Areas

Our study, targeting South End in Charlotte, NC and Avondale in Chattanooga, TN, aims to deliver accurate and comprehensive results by examining these cities' distinct urban designs, bus ridership patterns, and their effects on transit disparities and CO2 emissions. These areas were specifically chosen for their contrasting urban environments and substantial populations, providing a rich context for analysing the impact of urban design and transportation systems on CO2 emissions and community dynamics.

Charlotte, NC, displays the "New South" model with its post-World War II expansion, featuring radial main streets, car-centric suburbs, and denser central pedestrian-friendly areas with grid streets. Chattanooga, TN, shaped by its geography and history, embraces modern urban planning, like the Complete Streets Ordinance, to create a diverse environment accommodating various modes of transportation and prioritizing neighbourhood safety and interaction.

Charlotte's South End has evolved from a manufacturing district to a dynamic urban area with a blend of modern and historic elements, a comprehensive public transit system, and pedestrian-friendly spaces. Despite this, its bus usage remains low, like Avondale, and our study will analyse bus lines '5', '16', '19', '35', and '41x' [12-16]. In contrast, Avondale, a suburban Chattanooga neighborhood with mid-20th-century roots, presents a mix of older housing in a grid street pattern, facing challenges in carpool reliance and minimal bus usage due to socioeconomic factors. Our study includes bus lines 10A, 10C, and 10G in this area [17-18]. These two neighbourhoods, each with unique characteristics, are crucial for comprehending the environmental consequences of urban planning in their respective cities.

3.2 Simulation and Data Sources

Utilizing the TRANSIT-GYM tool and SUMO software [6], our study replicates traffic dynamics, transportation frameworks, and urban mobility scenarios.
Multiple steps were undertaken to create SUMO scenario files for the selected areas and their bus operations, involving crafting trip definitions, establishing vehicle configurations, and generating SUMO and GUI files. This approach aids urban transit agencies in assessing the energy implications of different transportation decisions [6]. Data for simulations, sourced from various channels [7], included:

▪ Map Data from Open Street Maps, processed through SUMO's NETCONVERT to create SUMO-compatible road networks.
▪ A detailed list of public transit vehicle characteristics was compiled for SUMO's vehicle type definition file.
▪ The latest General Transit Feed Specification (GTFS) data (as of November 25th, 2023) provided real-time route details, bus stop locations, and schedules essential for transit simulation setup [19, 20].
▪ Origin-Destination (OD) data and Traffic Analysis Zones (TAZ) files were used to record trips and indicate traffic demand between zones. POLYCONVERT converted TAZ files to SUMO TAZs, outlining regions and corresponding edges, crucial for generating personal plans and vehicle trips in SUMO.

3.3 Bus Transit Simulation Workflow

The bus transit simulation workflow, as shown in Fig. 2, follows a structured process using SUMO tools. Initiated by collecting OD Demand matrix data, TAZ files, vehicle parameters, and GTFS, a Domain-Specific Modelling Language (DSML) program interprets these inputs, generating XML files detailing person trips, vehicle types, bus trips, and bus stop locations. Real-time interaction with the optimized network XML file and GTFS data is facilitated by the Traffic Control Interface (TraCI), which accurately positions bus stops throughout the simulation. The NETEDIT tool, a SUMO suite component, serves as an interactive graphical editor for creating and modifying road network maps for simulations. XML files representing daily non-bus traffic, integrated bus and person routes, and detailed edge-based network information are incorporated, along with average daily traffic calculations based on Annual Average Daily Traffic (AADT) and the Traffic Count Database System (TCDS) [21, 22]. These files, along with a SUMO-readable network file, amalgamate into the simulation configuration. Upon execution, the simulation assesses bus operations, providing key performance metrics such as arrival times, wait times, and load factors, offering a comprehensive view of the transport system's efficiency [6].

Figure 2: Bus transit simulation workflow with TRANSIT-GYM tool [6].
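While our emission results come from the TRANSIT-GYM pipeline above, the step-level accounting it builds on can be sketched directly with SUMO's TraCI interface. The following is an illustrative sketch only (the configuration file name is hypothetical, and this is not the TRANSIT-GYM internals):

import traci  # SUMO's Traffic Control Interface, shipped with SUMO

traci.start(["sumo", "-c", "scenario.sumocfg"])  # hypothetical config path
total_co2_mg = 0.0
while traci.simulation.getMinExpectedNumber() > 0:
    traci.simulationStep()
    for veh in traci.vehicle.getIDList():
        # CO2 emission in mg/s during the last step (mg per step at the
        # default 1 s step length)
        total_co2_mg += traci.vehicle.getCO2Emission(veh)
traci.close()
print(f"Total CO2: {total_co2_mg / 1e9:.2f} tonnes")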
3.4 Scenario Selection

To assess the impact of On-Demand Bus service and bus ridership improvements on CO2 emissions in two different urban areas, our study compared three scenarios against a baseline, which simulated each case study area's current street network and existing traffic patterns. The baseline reflected CO2 emissions with the current bus system and ridership rate unchanged. In the first scenario, we doubled the current bus utilization within a day by transitioning private car users to a demand-responsive transit system. This resulted in a 12.06% decrease in traffic in South End and an 8.90% decrease in Avondale compared to the baseline. For the second scenario, envisioning a 50% bus utilization (our proposed demand-responsive system's target), we projected a 21.13% decrease in traffic in South End and an 18.40% decrease in Avondale. The third scenario assumed an idealistic 70% bus utilization, leading to approximately a 34.41% decrease in traffic in South End and a 29.29% decrease in Avondale. The Results section delves into CO2 emissions, examining the impact of these changes in bus trip utilization across scenarios.

4. RESULTS

In this section, the preliminary results of the simulations under different scenarios are presented.

4.1 Statistical Analysis

Table 1 compares car and bus utilization across a baseline and three increased bus utilization scenarios in South End, Charlotte. The baseline scenario captures current traffic, including the number of cars, buses, and passengers. The first scenario leads to a reduction of approximately 4386 car trips, which increases the average number of bus passengers from 6.36 to 12.72. In the baseline scenario in Avondale, Chattanooga, as shown in Table 2, 982 unique person IDs using 173 bus line trips were detected from 5 am to 9 pm. The first scenario in Avondale causes 660 fewer cars, which increases average bus occupancy from 5.7 to 11.40 passengers, resulting in an 8.90% decrease in traffic congestion without exceeding the current 173 bus line trips. The second scenario, for both areas, leads to a significant drop in car trips: in Avondale, car trips decreased by 1363.6 trips, and in South End, from the initial number of 35335 to 28685. The third scenario leads to even fewer car trips in both areas. The necessary bus line trips for both areas remain under the existing number, indicating an enhanced bus system efficiency and no requirement for additional services.

4.2 Simulation Results

In this section, we present the results of the simulations in each scenario, following the steps explained in the methodology section. Figure 3 depicts the daily CO2 emissions for South End, Charlotte, NC, measured from 5 AM to 9 PM across four different traffic scenarios. The base scenario peaks at 6:34 PM with total emissions of 2931.17 tonnes. When simulating scenario 1, the peak emissions occur earlier at 5:23 PM, with a total of 2632.77 tonnes, indicating a slight decrease in emissions. Scenario 2 shifts the peak to 5:33 PM with total emissions of 2302.41 tonnes. Finally, scenario 3 leads to total emissions of 1840.25 tonnes, with the peak at 7:16 PM.

Figure 3: Daily CO2 emissions over time (hours) from 5 AM to 9 PM for different scenarios for South End, Charlotte, NC - with peak emission for the base scenario at 6:34 PM, scenario 1 at 5:23 PM, scenario 2 at 5:33 PM, and scenario 3 at 7:16 PM.

Figure 4 illustrates the daily CO2 emissions in Avondale, Chattanooga, TN, from 5 AM to 9 PM across four different scenarios: the baseline, doubled bus utilization (scenario 1), 50% bus utilization (scenario 2), and 70% bus utilization (scenario 3). The baseline scenario peaks at 4:55 PM with 118.72 tonnes of emissions. Scenario 1 peaks at 2:40 PM with emissions slightly lower at 109.07 tonnes. Scenario 2 peaks at 2:26 PM with emissions lower than the above scenarios at 101.50 tonnes. Finally, scenario 3 has the lowest emissions at 85.71 tonnes. The peaks show the times of day with the highest CO2 emissions for each scenario. As cars are reduced and buses are used more, emissions decrease.
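A back-of-envelope sketch of how the scenario inputs translate into car-trip reductions (this is our own illustration; the 1.5 riders per removed car trip is an assumed conversion factor, chosen because it roughly reproduces the ~4386 fewer car trips reported for scenario 1 in South End):

def scenario_effect(base_bus_riders, multiplier, base_traffic,
                    riders_per_removed_car=1.5):   # assumed factor
    # Extra bus riders implied by the utilization multiplier, and the
    # resulting share of daily car trips removed from the network.
    shifted = base_bus_riders * (multiplier - 1)
    removed_cars = shifted / riders_per_removed_car
    return removed_cars, removed_cars / base_traffic

print(scenario_effect(6585, 2, 36370))  # South End, scenario 1: ~4390 trips (~12%)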
Table 1: Statistical comparison of different scenarios' calculations for South End.

Scenario      | Current Bus Utilization | Increase Rate of Utilization | New Bus Utilization | Current Person Trips Using Buses | Total Passengers Requiring Bus Services | Reduction in Car Trips | Total Average Traffic After Reduction
Base Scenario | 18.17% | 1X    | 18.17% | 6585 | 6585    | 0      | 36370
Scenario 1    | 18.17% | 2X    | 36.34% | 6585 | 13165.2 | 12.41% | 31983
Scenario 2    | 18.17% | 3.07X | 50%    | 6585 | 18112.5 | 21.75% | 28685
Scenario 3    | 18.17% | 4.29X | 70%    | 6585 | 25357.5 | 35.41% | 23855

Table 2: Statistical comparison of different scenarios' calculations for Avondale.

Scenario      | Current Bus Utilization | Increase Rate of Utilization | New Bus Utilization | Current Person Trips Using Buses | Total Passengers Requiring Bus Services | Reduction in Car Trips | Total Average Traffic After Reduction
Base Scenario | 16.28% | 1X    | 16.28% | 982 | 982    | 0      | 7412
Scenario 1    | 16.28% | 2X    | 32.56% | 982 | 1972   | 9.12%  | 6752
Scenario 2    | 16.28% | 3.07X | 50%    | 982 | 3027.5 | 18.84% | 6048
Scenario 3    | 16.28% | 4.29X | 70%    | 982 | 4238.5 | 30%    | 5241

In South End, the mixed-use, pedestrian-friendly environment reduces reliance on private vehicles, aiding in emission reduction. Yet, this benefit is offset by congestion issues arising from its high-density urban pattern. Avondale, with its grid pattern and residential focus, faces challenges of longer trips and higher traffic due to limited local destinations.

Figure 4: Daily CO2 emissions over time (hours) from 5 AM to 9 PM for different scenarios for Avondale, Chattanooga, TN - with peak emission for the base scenario at 4:55 PM, scenario 1 at 2:40 PM, scenario 2 at 2:26 PM, and scenario 3 at 4:33 PM.

The variance in emission trends between the two urban areas is influenced by their distinct urban designs. In South End, a mixed-use area, emission patterns show a steady rise from early morning, peaking around 8 AM, indicative of diverse activities starting the workday, leading to consistent emissions above 5 tonnes post 8 AM. This reflects the area's blend of residential, commercial, and leisure activities, contributing to a steady flow of traffic. In contrast, a primarily residential area shows more pronounced emission fluctuations, aligning with typical suburban patterns. These fluctuations are driven by peak commute hours, with sharp peaks during morning and evening rush hours, illustrating how urban design, whether mixed-use or residential, affects traffic patterns and, consequently, emissions.
Our findings emphasize the need for holistic urban design strategies that balance density, accessibility, and environmental impact, crucial for effective city-wide CO2 emission reduction and climate resilience. local amenities incorporating 6. CONCLUSION AND FUTURE WORK intricately in particular through the The aim of this research is to illustrate how urban transit effectiveness and design affects public lens of sustainability, demand-responsive bus services in South End, Charlotte, NC, and Avondale, Chattanooga, TN. It reveals how South End's mixed-use layout and dynamic land use enhance the benefits of increased bus utilization, resulting in greater CO2 emissions reductions than Avondale. As a result, urban design and transit solutions are intertwined, advocating context-specific solutions. While the study relies on simulations and may not capture all real- world complexities, it paves the way for future empirical research in diverse urban settings to solidify these findings. From a policy standpoint, this study the need reinforces for comprehensive urban planning. In such planning, demand-responsive transit systems must be seamlessly integrated with broader urban policies, including zoning, housing, and environmental concerns. An integrated approach is needed to address broader challenges, such as social equity and environmental sustainability. The research sets the stage for multi-disciplinary exploration, emphasizing the importance of a holistic approach to urban development transit enhancements with overarching urban development that aligns public objectives, thereby fostering a sustainable, equitable, and resilient urban future. ACKNOWLEDGEMENTS We are grateful to the Charlotte Area Transit System (CATS) for providing crucial data. We thank everyone who contributed to the development of TRANSIT-GYM tool, particularly Professor Abhishek Dubey and his team at Vanderbilt University for their exceptional assistance. Our special to Professor Srinivas Pulugurtha from UNC Charlotte and his invaluable assistance. Moreover, we appreciate the financial support provided by UNC Charlotte's School of Data Science and College of Engineering. laboratory thanks their for in and Practice, the United ridership declined REFERENCES 1. Erhardt, G. D., Hoque, J. M., Goyal, V., Berrebi, S., Brakewood, C., & Watkins, K. E. (2022). Why has public transit States? Transportation Research Part A: Policy and Practice, 161, 68–87. https://doi.org/10.1016/J.TRA.2022.04.006 2. Product Details R47302. (n.d.). Retrieved July 30, 2023, fromhttps://crsreports.congress.gov/product/details?prodc ode=R47302 3. Berrebi, S. J., Joshi, S., & Watkins, K. E. (2021). On bus ridership and frequency. Transportation Research Part A: Policy 140–154. 148, https://doi.org/10.1016/J.TRA.2021.03.005 4. Hosseini, S.S., Azarbayjani, M., Lawrence, J., Tabkhi, H. (2023). Towards Understanding the Benefits and Challenges of Demand Responsive Public Transit-A Case Study in the City of Charlotte, NC. arXiv preprint arXiv:2304.06467. https://doi.org/10.48550/arXiv.2304.06467. 5. Rashvand, N., Hosseini, S.S., Azarbayjani, M., Tabkhi, H. (2023). Real-Time Bus Arrival Prediction: A Deep Learning Approach for Enhanced Urban Mobility. arXiv preprint arXiv:2303.15495. https://doi.org/10.48550/arXiv.2303.15495. 6. Sun, R., Gui, R., Neema, H., Chen, Y., Ugirumurera, J., Severino, J., Pugliese, P., Laszka, A., & Dubey, A. (2021). TRANSIT-GYM: A Simulation and Evaluation Engine for Analysis of Bus Transit Systems. 
Proceedings - 2021 IEEE International Conference on Smart Computing (SMARTCOMP 2021), 69–76. https://doi.org/10.1109/SMARTCOMP52413.2021.00030
7. Sen, R., Tran, T., Khaleghian, S., Pugliese, P., Sartipi, M., Neema, H., & Dubey, A. (2022). BTE-Sim: Fast Simulation Environment For Public Transportation. Proceedings - 2022 IEEE International Conference on Big Data (Big Data 2022), 2886–2894. https://doi.org/10.1109/BIGDATA55660.2022.10020973
8. Ji, T., Yao, Y., Dou, Y., Deng, S., Yu, S., Zhu, Y., & Liao, H. (2022). The Impact of Climate Change on Urban Transportation Resilience to Compound Extreme Events. Sustainability 2022, Vol. 14, Page 3880, 14(7), 3880. https://doi.org/10.3390/SU14073880
9. Climate Change Impact on Urban Transportation Resilience | Encyclopedia MDPI. (n.d.). Retrieved December 8, 2023, from https://encyclopedia.pub/entry/22953
10. Winkler, L., Pearce, D., Nelson, J., & Babacan, O. (2023). The effect of sustainable mobility transition policies on cumulative urban transport emissions and energy demand. Nature Communications 2023 14:1, 14(1), 1–14. https://doi.org/10.1038/s41467-023-37728-x
11. Jing, Q. L., Liu, H. Z., Yu, W. Q., & He, X. (2022). The Impact of Public Transportation on Carbon Emissions—From the Perspective of Energy Consumption. Sustainability 2022, Vol. 14, Page 6248, 14(10), 6248. https://doi.org/10.3390/SU14106248
12. The South End: Living the Life on Charlotte's Blue Line - RentCafe rental blog. (n.d.). Retrieved December 8, 2023, from https://www.rentcafe.com/blog/apartment-search-2/neighborhood-guides/the-south-end-living-the-life-on-charlottes-blue-line/
13. Charlotte, North Carolina Neighborhoods - December 2023. (n.d.). Retrieved December 8, 2023, from https://www.zipdatamaps.com/neighborhoods/north-carolina/city/map-of-charlotte-neighborhoods
14. The Demographic Statistical Atlas of the United States - Statistical Atlas. (n.d.). Retrieved December 8, 2023, from https://statisticalatlas.com/place/North-Carolina/Charlotte/Population
15. CNU31: Charlotte, NC. South End Spaces. (n.d.). Retrieved from https://www.micnu.org/post/cnu31-charlotte-nc-south-end-spaces
16. Hosseini, S.S., Azarbayjani, M., Tabkhi, H. (2023). Community-Driven Approach for Smart On-Demand Public Transit in Charlotte in Underserved Communities - Pilot Study for User Acceptance and Early Data Collection. ARCC 2023 International Conference - The Research-Design Interface, pp. 25-32. https://www.arcc-arch.org/wp-content/uploads/2023/09/ARCC2023ProceedingsFINAL-PW.pdf
17. Avondale Chattanooga, TN 37406, Neighborhood Profile - NeighborhoodScout. (n.d.). Retrieved December 8, 2023, from https://www.neighborhoodscout.com/tn/chattanooga/avondale
18. The Demographic Statistical Atlas of the United States - Statistical Atlas. (n.d.). Retrieved December 8, 2023, from https://statisticalatlas.com/place/Tennessee/Chattanooga/Population
19. Transitland • Charlotte Area Transit System (CATS) • GTFS feed details: f-dnq-charlotteareatransitsystem. (n.d.). Retrieved from https://www.transit.land/feeds/f-dnq-charlotteareatransitsystem
20. Transitland • Chattanooga Area Regional Transportation Authority (CARTA) • GTFS feed details: f-dn5r-carta. (n.d.). Retrieved December 8, 2023, from https://www.transit.land/feeds/f-dn5r-carta
21. ArcGIS Web Application. (n.d.). Retrieved December 8, 2023, from https://www.arcgis.com/apps/webappviewer/index.html?id=964881960f0549de8c3583bf46ef5ed4
22. Traffic Count Database System (TCDS). (n.d.). Retrieved December 8, 2023, from https://tdot.public.ms2soft.com/tcds/tsearch.asp?loc=Tdot&mod=TCDS
ai_researcher
4
Network_for_Knowledge_Organization_(NEKO)_an_AI_knowledge_mining_workflow_for_synthetic_biology_research.pdf
Neko: a Library for Exploring Neuromorphic Learning Rules

Zixuan Zhao, University of Chicago
Nathan Wycoff, Virginia Tech
Neil Getty, Argonne National Laboratory
Rick Stevens, Argonne National Laboratory & University of Chicago
Fangfang Xia, Argonne National Laboratory & University of Chicago

arXiv:2105.00324v2 [cs.LG] 13 Aug 2021

Figure 1: Neko overview. Key components in the neuromorphic learning library.

ABSTRACT
The field of neuromorphic computing is in a period of active exploration. While many tools have been developed to simulate neuronal dynamics or convert deep networks to spiking models, general software libraries for learning rules remain underexplored. This is partly due to the diverse, challenging nature of efforts to design new learning rules, which range from encoding methods to gradient approximations, from population approaches that mimic the Bayesian brain to constrained learning algorithms deployed on memristor crossbars. To address this gap, we present Neko, a modular, extensible library with a focus on aiding the design of new learning algorithms. We demonstrate the utility of Neko in three exemplar cases: online local learning, probabilistic learning, and analog on-device learning. Our results show that Neko can replicate the state-of-the-art algorithms and, in one case, lead to significant outperformance in accuracy and speed. Further, it offers tools including gradient comparison that can help develop new algorithmic variants. Neko is an open source Python library that supports PyTorch and TensorFlow backends.

CCS CONCEPTS
• Computing methodologies → Machine learning algorithms; • Hardware → Neural systems.

Publication rights licensed to ACM. ACM acknowledges that this contribution was authored or co-authored by an employee, contractor or affiliate of the United States government. As such, the Government retains a nonexclusive, royalty-free right to publish or reproduce this article, or to allow others to do so, for Government purposes only.
ICONS '21, July 27–29, 2021, PREPRINT
© 2021 Copyright held by the owner/author(s). Publication rights licensed to ACM.
ACM ISBN 978-1-4503-8691-3/21/07...$15.00
https://doi.org/10.1145/3477145.3477155

KEYWORDS
Neuromorphic computing, learning rules, approximate gradients, Bayesian inference, Manhattan rule, open-source library

ACM Reference Format:
Zixuan Zhao, Nathan Wycoff, Neil Getty, Rick Stevens, and Fangfang Xia. 2021. Neko: a Library for Exploring Neuromorphic Learning Rules. In PREPRINT. ACM, New York, NY, USA, 5 pages. https://doi.org/10.1145/3477145.3477155

1 INTRODUCTION
Deep learning is the prevailing paradigm for machine learning. Over the course of its meteoric rise, its many differences from human learning have become increasingly clear. Chief among these are gaps in data efficiency, robustness, generalizability, and energy efficiency — all unlikely to narrow with growing computation power alone. This has motivated a renewed search for brain-inspired learning algorithms. However, the current software infrastructure needs improvement to support productive exploration.

Two common choices today for designing novel learning algorithms are TensorFlow [1] and PyTorch [32]. These general deep learning frameworks provide powerful abstractions for calculating gradients and building deep neural networks, but there is no intermediate layer between these two levels. For high-level development, backpropagation is the only learning algorithm offered and is in fact coupled with the training process. Software in neuromorphic computing, on the other hand, has traditionally focused more on simulating neurons and spiking neural networks [6, 8, 16, 41], interfacing with neuromorphic hardware [11, 28, 35, 39], and converting pre-trained deep learning models to spiking neural networks for inference [36, 37]. Learning has not been a key part of these libraries. The few supported learning rules such as spike-timing-dependent plasticity are not competitive on large problems. As a result, new learning algorithms are developed in independent codebases that are not easily reusable.
For high-level development, backpropagation is the only learning algorithm offered and is in fact coupled with the training process. Software in neuromorphic computing, on the other hand, has traditionally focused more on simulating neurons and spiking neural networks [6, 8, 16, 41], interfacing with neuromorphic hardware [11, 28, 35, 39], and converting pre-trained deep learning models to spiking neural networks for inference [36, 37]. Learning has not been a key part of these libraries. The few supported learning rules such as spike-timing-dependent plasticity are not competitive on large problems. As a result, new learning algorithms are developed in independent codebases that are not easily reusable.
In this work, we present Neko, a software library under active development for exploring learning rules. We build on the popular autograd frameworks, and our goal is to implement key building blocks to boost researcher productivity. By decoupling the learning rules from the training process, we aim to provide an abstraction model that enables mixing and matching of various design ideas. To arrive at the right abstraction level, we need to sample a wide range of learning algorithm research. Below are the three directions and exemplars we have prioritized in this initial code release.
The first class of learning rules are gradient-based methods. They approximate backpropagation with various levels of biological plausibility [3, 24, 26, 27, 29, 31, 38, 40, 45]. From this category, we study the e-prop algorithm [7] in detail and provide a complete reimplementation. The second direction is based on the hypothesis that the brain keeps track of probabilistic distributions over weights and rewards [2, 10]. This line of exploration may offer important clues towards achieving learning efficiency and robustness in the face of uncertainty. We develop a sampling-based learning rule on spiking neural networks (SNN). The third class is concerned with hardware constraints on plasticity mechanisms. For this class, we include the classic example of Manhattan rule training for memristive crossbar circuits. In all three exemplars, we seek consistent implementation in the Neko library.
2 LIBRARY DESIGN
The Neko library is designed to be modular, extensible, and easy to use. Users can select from a collection of neuron models and encoding methods to build a spiking or regular artificial neural network, and train it with one of the implemented learning rules. Alternatively, they could supply their own networks from PyTorch or Keras [9] or develop new learning algorithms based on the provided intrinsics. The following code snippet provides an example of solving MNIST [23] with the e-prop algorithm on a recurrent network of 128 hidden adaptive leaky integrate-and-fire (ALIF) neurons.

    from neko.backend import pytorch_backend as backend

    rsnn = ALIF(128, 10, backend, task_type='classification')
    model = Evaluator(rsnn, loss='categorical_crossentropy',
                      metrics=['accuracy', 'firing_rate'])
    learning_rule = Eprop(model, mode='symmetric')
    trainer = Trainer(learning_rule)
    trainer.train(x_train, y_train, epochs=30)

Listing 1: Train an SNN model of ALIF neurons with e-prop.
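As a usage note on Listing 1, the lines below illustrate the kind of one-line changes the library's decoupling is meant to enable; the 'random' and 'adaptive' e-prop modes are described later in this paper, while the TensorFlow backend module name is an assumption by analogy with pytorch_backend, not a verified path.

    # Replace the learning algorithm by changing a single line:
    learning_rule = Eprop(model, mode='random')   # or mode='adaptive'
    # Backends are switched at the import site (module name assumed):
    from neko.backend import tensorflow_backend as backend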
The training process illustrated in this example can be broken down into a series of high-level Neko modules: the layer includes pre-implemented recurrent SNNs and adaptors for existing Keras and PyTorch models; the evaluator associates a model with a loss function and optional metrics; the learning rule implements back- propagation and a growing list of neuromorphic learning rules; and the trainer handles training logistics as well as special logic to apply multiple learning rules for gradient comparison between models. Besides these core components, auxiliary modules include the data loader, spike encoder, optimizer, and functions for loss, activation, and pseudo-derivatives calculations. To help users define custom algorithms, Neko also provides a unified API for accessing frequently used features in Tensor- Flow and PyTorch such as low-level tensor operations. Switching the backend is straightforward. This feature can detect occasional framework-dependent behavior and is useful for code verification and performance analysis. The multi-backend support is reminis- cent of the earlier Keras framework. However, Neko is different in that it provides more fine-grained abstraction layers such that users can replace the learning algorithm by changing a single line of code. Taken together, these features also simplify the process of porting code to hardware accelerators, since implementing a backend for the hardware is sufficient to run all models in Neko on it. 3 USE CASES In this section, we present results on the three representative learn- ing rules introduced earlier. We also provide gradient analysis as an example of Neko’s cross-cutting utilities that we are building to help design, debug, and compare new learning algorithms. 3.1 Credit assignment with local signals A key mystery in the brain is how it implements credit assignment. The standard backpropagation through time (BPTT) algorithm is unrealistic as we cannot expect a biological neuron to be aware of all past synaptic strengths. Bellec et al. [7] proposed e-prop, a local online learning algorithm for recurrent SNNs. The method exploits the mathematical formula of BPTT, deriving an approximation which only requires a recursive accumulative eligibility trace and a local learning signal. These properties make the algorithm one step closer to biologically realistic on-chip learning. In Neko, we implemented full-featured e-prop algorithms includ- ing the three variants: symmetric, random, and adaptive. Whereas the paper manually derived the e-prop formulas for some networks, we took a different approach: separating the model from the learn- ing rules. In the layer module, the regular recurrent neural networks and recurrent SNNs, with leaky integrate-and-fire (LIF) or ALIF neurons, were all defined as standard models. Meanwhile, they inherited from an Epropable class, which defined general symbolic gradient formulas according to recurrent cell dynamics. Specifying this extra information was all it took to perform e-prop, and in a network-agnostic way. This design enabled the error-prone formula derivation to be automated. It also sped up experiments with new network architectures or e-prop variants. We compared the Neko implementation of e-prop to the original implementation on the TIMIT benchmark [15] for framewise speech recognition. The authors reported the results on a hybrid network of 100 ALIF and 300 LIF neurons [7]. In our experiment, we used an ALIF-only network of 200 neurons and otherwise kept the setup identical. 
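Before turning to the results, here is a minimal sketch of the model/learning-rule separation just described. This is a generic illustration of the pattern only; every name below, other than the Epropable class mentioned above, is hypothetical rather than Neko's actual API.

    class Epropable:
        # Contract: a recurrent cell exposes the local partial derivatives
        # of its dynamics, so the e-prop rule can accumulate the eligibility
        # trace without a hand-derived, network-specific formula.
        def cell_dynamics_grads(self, state, inputs):
            raise NotImplementedError

    class MyRecurrentCell(Epropable):
        def cell_dynamics_grads(self, state, inputs):
            # Supply the partials of the state update w.r.t. weights and
            # previous state; e-prop itself needs no re-derivation.
            ...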
We report close reproduction accuracy in Fig. 2. Notably, Neko's error rate dropped by 27%, after tuning regularization and batch size, while keeping the firing rate low at 10 Hz. To the best of our knowledge, this is the best SNN accuracy obtained with a local learning rule, which in fact reaches the level of an LSTM baseline trained with the precise gradients from BPTT ([7] Fig. S4). Additionally, Neko is faster (training time from Nvidia V100) and convenient for iterative development.
Figure 2: TIMIT results. We reproduce e-prop accuracy on speech recognition in Neko with a smaller network. Neko is faster with slight tuning and reduces error by 27% to reach the nonspiking baseline performance of a BPTT-trained LSTM model.
3.2 Probabilistic learning
Bayesian statistics has captured much attention in the computational neuroscience community, both as an explanation for neural behavior [22] as well as a means of performing inference in neural networks. In Neko, we develop a Hybrid Monte Carlo, or HMC [30], algorithm to perform Bayesian inference on spiking neural networks based on Metropolis-adjusted Langevin diffusion [34]. Fundamentally, HMC algorithms are simply Metropolis-Hastings samplers [19] where the proposal distribution is based on the gradient. Though spiking neurons are non-differentiable by definition, surrogate gradients can be defined by considering smoothed versions of the spiking activation function [31]. State of the art learning algorithms for spiking neurons have used these surrogate gradients successfully, and we also find success in deploying them in HMC to form our proposal. In fact, this two-stage approach is especially appealing for spiking neurons, since the theoretical underpinnings of HMC place only very weak restrictions on what the proposal direction should be, and certainly do not require an exact gradient. Thus, from a theoretical perspective, running our algorithm for sufficiently long will result in a sample from our true posterior. Empirically, of course, it is not practical to explore the entire nonconvex, high-dimensional posterior. We therefore verify our implementation numerically.
The MNIST-1D [18] data is a derivative of the popular MNIST dataset of handwritten digits which transforms the image recognition problem into a sequence learning problem (see Figure 3, left). We train a spiking neural network with 1,000 hidden neurons using our proposed HMC algorithm(1), and record the posterior mean as well as uncertainty for the train set examples. As shown in Figure 3 (right), we find that the model displays significantly more uncertainty on test examples for which its best guess was incorrect than when it was correct. This validates our algorithm, as we would like errors to be associated with high uncertainty.
(1) Using an adaptive step size [5] with a diffusion standard deviation of 0.01 scaled by the norm of the surrogate gradient, which was obtained via standard backpropagation.
Figure 3: Uncertainty Quantification. Left: An example input representing the number 3 for the MNIST-1D data. Right: Posterior uncertainty among test examples which were correctly versus incorrectly predicted. Uncertainty is higher when errors are made.
As future work, we intend to compare HMC and other MCMC algorithms to other probabilistic learning approaches such as Variational Bayes [17] and Monte Carlo Dropout [14] within the Neko framework.
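To make the proposal mechanism concrete, here is a minimal, self-contained sketch of one Metropolis-adjusted Langevin step of the general kind described above, written against a generic log-posterior and surrogate gradient. All function names are placeholders, and the step-size scaling follows the footnote only loosely; this is not Neko's implementation.

    import numpy as np

    def mala_step(theta, log_post, surrogate_grad, sigma=0.01, rng=None):
        """One Metropolis-adjusted Langevin proposal/accept step."""
        rng = rng or np.random.default_rng()
        g = surrogate_grad(theta)
        tau = sigma / (np.linalg.norm(g) + 1e-12)      # gradient-norm scaling
        mean_f = theta + 0.5 * tau * g
        prop = mean_f + np.sqrt(tau) * rng.standard_normal(theta.shape)
        # Hastings correction: reverse proposal uses its own step size,
        # so the Gaussian normalization terms must be included.
        g_p = surrogate_grad(prop)
        tau_b = sigma / (np.linalg.norm(g_p) + 1e-12)
        mean_b = prop + 0.5 * tau_b * g_p
        lq_f = -np.sum((prop - mean_f)**2) / (2*tau) - 0.5*theta.size*np.log(tau)
        lq_b = -np.sum((theta - mean_b)**2) / (2*tau_b) - 0.5*theta.size*np.log(tau_b)
        log_alpha = log_post(prop) - log_post(theta) + lq_b - lq_f
        return prop if np.log(rng.uniform()) < log_alpha else theta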
3.3 Analog neural network training
Memristors have emerged as a new platform for neuromorphic learning [20, 42]. These devices represent the synapse weights in the tunable conductance states of large crossbar architectures. Compared with digital implementations of neural networks, these analog circuits offer promising advantages in parallel processing, in-situ learning, and energy efficiency [13, 25]. However, they also place constraints on how the weights can be updated.
A classic way to train these networks is with the Manhattan rule learning algorithm [44]. Although training with backpropagation on device is theoretically possible, the time consumption of tuning individual weights with a feedback algorithm can be prohibitive, especially for larger scale neural networks [4]. As an alternative, the Manhattan rule simply updates network weights by a fixed amount according to the sign of the gradients, where the actual change magnitude may depend on the state of the material. This learning rule has been applied successfully to simple machine learning benchmarks in simulated or fully hardware-implemented analog neural networks [43].
Neko implements a family of Manhattan rules to simulate the training process. It includes the basic algorithm and an extended version that supports a specified range of material conductance constraints. Because these learning rules do not have special requirements for the network architecture, users can directly supply existing Keras and PyTorch models with Neko's adaptors. Our preliminary results show that both the simple Manhattan rule and the constrained version could reach 96% accuracy on the MNIST dataset with a simple 2-layer (64, 32 neurons) multi-layer perceptron, which is 2% lower than backpropagation.
Table 1: Testing two classification exemplars using temporal spike encoding schemes
Encoding    Surgery(1)  ECG(2)
None        0.563       0.675
TC          0.685       0.813
SF          0.687       0.620
MW          0.699       0.763
Benchmark   0.766       0.811
(1) A surgery kinematic dataset measuring the positions and orientations of surgical instruments during labeled simulated exercises. Data available upon request. (2) A public ECG heartbeat categorization dataset [21] subsampled for class balance.
3.4 Gradient comparison analysis
Many learning rules depend on gradients explicitly or implicitly. Yet, gradient estimates are not intuitive to developers. Debugging learning rules sometimes requires noticing the subtle differences in gradient estimates and following their trends over the course of training. In Neko, we have designed a gradient comparison tool that can enumerate the gradients or weight changes for multiple learning rules with the same model state and input data. It can also track this information batch by batch. Visualizing this information can help inspect approximation quality differences caused by algorithm tweaks and identify equivalence in formula transformations. Outside the context of debugging, the change in gradient estimates throughout the training process can also reveal potential biases and other properties of the learning algorithm.
Figure 4: Gradient analysis tool. This example illustrates the differences in approximate gradients among e-prop variants for training MNIST: (top) a snapshot of the distributions of gradient deviations, (bottom) how the gradient deviations change over time.
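As a concrete illustration of the kind of per-batch statistic such a comparison can report, here is a minimal sketch; the function is illustrative only and is not Neko's API.

    import numpy as np

    def gradient_deviation(g_ref, g_approx):
        """Relative deviation and cosine alignment between a reference
        gradient (e.g., from BPTT) and an approximation (e.g., an e-prop
        variant), both flattened to 1-D arrays for one batch."""
        rel = np.linalg.norm(g_approx - g_ref) / (np.linalg.norm(g_ref) + 1e-12)
        cos = (g_approx @ g_ref) / (np.linalg.norm(g_approx)
                                    * np.linalg.norm(g_ref) + 1e-12)
        return rel, cos
    # Tracking (rel, cos) batch by batch exposes bias and variance trends
    # like those plotted in Figure 4.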
The gradient comparison tool is made possible by Neko's separation of the learning algorithm and trainer module. It is implemented as a special trainer that takes multiple learning rules and clones of the same model. While the primary model follows the usual training process, the others' parameters are synced with the primary at each training step, and the weight changes are saved. The equivalence of gradient changes and weight changes can be established using the built-in naive optimizer which applies gradients directly without learning rate.
Gradient analysis offers insights into how learning rules behave relative to each other and backpropagation. Fig. 4 illustrates this with an example of training spiking MNIST models with three variants of e-prop. While symmetric e-prop was the best at gradient approximation, the relationship between random and adaptive versions was somewhat unexpected. The adaptive version produced gradients with larger deviation and bias, which could explain its weaker performance on the benchmark (not shown).
4 SUPPORTING UTILITIES
To further enable neuromorphic centric exploration, we integrate the SpikeCoding toolbox [12] which enables simple encoding of continuous value sequences into spikes with nearly a dozen algorithms. We present experimental results (Table 1) on two temporal data applications using three encoding schemes [33]:
• Temporal contrast (TC) encoding compares the absolute value of a signal with a threshold derived by the derivative and standard deviation of the full sequence multiplied by a tunable parameter.
• Step-forward (SF) encoding generates positive/negative spikes by comparing values in a sequence to a moving baseline plus a tunable threshold, which is initially the first value of the sequence and updated each spike.
• Moving window (MW) encoding uses a similar moving baseline and threshold to determine spiking but which is set to the mean of values in a tunable time window.
All models were trained with e-prop learning except for the Benchmark RNN model trained with BPTT. While we note that there was often a sizable decrease in accuracy using these encodings, the sparsity of the input signal was significantly increased. Spike encodings may enable the use and development of learning algorithms more suited to or dependent on event-based input.
5 CONCLUSIONS
We presented the design of a coding library for researching learning algorithms. Through three examples, we demonstrated its capability and ease of use in diverse scenarios. Our reference implementations introduced a new state-of-the-art in local temporal credit assignment with SNNs, a sampling-based learning rule for estimating weight and prediction posteriors, as well as simulations for constrained training of analog neural networks on memristive hardware. Additionally, we showed a cross-cutting example to support learning rule inspection with gradient comparison analysis.
Two directions emerge for future work. First, we will extend learning rules to complex neuron models (e.g., dendritic computation, structured neurons) and network architecture. Second, we will port learning algorithms to emerging hardware platforms. Both processes will be facilitated by the abstraction of learning algorithms and the multi-backend support in the Neko library(2).
(2) https://github.com/cortical-team/neko
ACKNOWLEDGMENTS
We thank Sihong Wang and Shilei Dai for helpful discussions.
This work is partially supported by Laboratory Directed Research and Development (LDRD) funding from Argonne National Laboratory, provided by the Director, Office of Science, of the U.S. Department of Energy under Contract No. DE-AC02-06CH11357. REFERENCES [1] Martín Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Geoffrey Irving, Michael Isard, et al. 2016. Tensorflow: A system for large-scale machine learning. In 12th {USENIX} symposium on operating systems design and implementation ({OSDI} 16). 265–283. [2] Laurence Aitchison, Jannes Jegminat, Jorge Aurelio Menendez, Jean-Pascal Pfister, Alexandre Pouget, and Peter E Latham. 2021. Synaptic plasticity as Bayesian inference. Nature Neuroscience 24, 4 (2021), 565–571. [3] Mohamed Akrout, Collin Wilson, Peter C Humphreys, Timothy Lillicrap, and Douglas Tweed. 2019. Deep learning without weight transport. arXiv preprint arXiv:1904.05391 (2019). [4] Fabien Alibart, Ligang Gao, Brian D Hoskins, and Dmitri B Strukov. 2012. High precision tuning of state for memristive devices by adaptable variation-tolerant algorithm. Nanotechnology 23, 7 (Jan 2012), 075201. https://doi.org/10.1088/0957- 4484/23/7/075201 [5] Christophe Andrieu and Johannes Thoms. 2008. A tutorial on adaptive MCMC. Statistics and Computing 18, 4 (01 Dec 2008), 343–373. https://doi.org/10.1007/ s11222-008-9110-y [6] Trevor Bekolay, James Bergstra, Eric Hunsberger, Travis DeWolf, Terrence C Stewart, Daniel Rasmussen, Xuan Choo, Aaron Voelker, and Chris Eliasmith. 2014. Nengo: a Python tool for building large-scale functional brain models. Frontiers in Neuroinformatics 7 (2014), 48. [7] Guillaume Bellec, Franz Scherr, Anand Subramoney, Elias Hajek, Darjan Salaj, Robert Legenstein, and Wolfgang Maass. 2020. A solution to the learning dilemma for recurrent networks of spiking neurons. Nature Communications 11, 1 (2020), 1–15. [8] Nicholas T Carnevale and Michael L Hines. 2006. The NEURON book. Cambridge University Press. [9] François Chollet et al. 2015. Keras. https://keras.io. [10] Will Dabney, Zeb Kurth-Nelson, Naoshige Uchida, Clara Kwon Starkweather, Demis Hassabis, Rémi Munos, and Matthew Botvinick. 2020. A distributional code for value in dopamine-based reinforcement learning. Nature 577, 7792 (2020), 671–675. [11] Andrew P Davison, Daniel Brüderle, Jochen M Eppler, Jens Kremkow, Eilif Muller, Dejan Pecevski, Laurent Perrinet, and Pierre Yger. 2009. PyNN: a common interface for neuronal network simulators. Frontiers in Neuroinformatics 2 (2009), 11. [12] Julien Dupeyroux. 2021. A toolbox for neuromorphic sensing in robotics. arXiv:2103.02751 [cs.RO] [13] Elliot J Fuller, Scott T Keene, Armantas Melianas, Zhongrui Wang, Sapan Agarwal, Yiyang Li, Yaakov Tuchman, Conrad D James, Matthew J Marinella, J Joshua Yang, et al. 2019. Parallel programming of an ionic floating-gate memory array for scalable neuromorphic computing. Science 364, 6440 (2019), 570–574. [14] Yarin Gal and Zoubin Ghahramani. 2016. Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning. In Proceedings of The 33rd International Conference on Machine Learning (Proceedings of Machine Learning Research, Vol. 48), Maria Florina Balcan and Kilian Q. Weinberger (Eds.). PMLR, New York, New York, USA, 1050–1059. http://proceedings.mlr.press/v48/gal16. html [15] J. Garofolo, Lori Lamel, W. Fisher, Jonathan Fiscus, D. Pallett, N. Dahlgren, and V. Zue. 1992. TIMIT Acoustic-phonetic Continuous Speech Corpus. 
Linguistic Data Consortium (11 1992). [16] Marc-Oliver Gewaltig and Markus Diesmann. 2007. Nest (neural simulation tool). Scholarpedia 2, 4 (2007), 1430. [17] Alex Graves. 2011. Practical Variational Inference for Neural Networks. In Proceedings of the 24th International Conference on Neural Information Processing Systems (Granada, Spain) (NIPS’11). Curran Associates Inc., Red Hook, NY, USA, 2348–2356. [18] Sam Greydanus. 2020. Scaling down Deep Learning. arXiv:2011.14439 [cs.LG] [19] Peter D. Hoff. 2009. A First Course in Bayesian Statistical Methods (1st ed.). Springer Publishing Company, Incorporated. [20] Miao Hu, Hai Li, Yiran Chen, Qing Wu, Garrett S Rose, and Richard W Linderman. 2014. Memristor crossbar-based neuromorphic computing system: A case study. IEEE Transactions on Neural Networks and Learning Systems 25, 10 (2014), 1864– 1878. [21] Mohammad Kachuee, Shayan Fazeli, and Majid Sarrafzadeh. 2018. Ecg heartbeat classification: A deep transferable representation. In 2018 IEEE International Conference on Healthcare Informatics (ICHI). IEEE, 443–444. [22] David C. Knill and Alexandre Pouget. 2004. The Bayesian brain: the role of uncertainty in neural coding and computation. Trends in Neurosciences 27, 12 (01 Dec 2004), 712–719. https://doi.org/10.1016/j.tins.2004.10.007 [23] Yann LeCun. 1998. The MNIST database of handwritten digits. http://yann. lecun. com/exdb/mnist/ (1998). [24] Jun Haeng Lee, Tobi Delbruck, and Michael Pfeiffer. 2016. Training deep spiking neural networks using backpropagation. Frontiers in Neuroscience 10 (2016), 508. [25] Can Li, Daniel Belkin, Yunning Li, Peng Yan, Miao Hu, Ning Ge, Hao Jiang, Eric Montgomery, Peng Lin, Zhongrui Wang, et al. 2018. Efficient and self-adaptive in-situ learning in multilayer memristor neural networks. Nature communications 9, 1 (2018), 1–8. [26] Timothy P Lillicrap, Daniel Cownden, Douglas B Tweed, and Colin J Akerman. 2016. Random synaptic feedback weights support error backpropagation for deep learning. Nature communications 7, 1 (2016), 1–10. [27] Timothy P Lillicrap, Adam Santoro, Luke Marris, Colin J Akerman, and Geoffrey Hinton. 2020. Backpropagation and the brain. Nature Reviews Neuroscience 21, 6 (2020), 335–346. [28] Chit-Kwan Lin, Andreas Wild, Gautham N Chinya, Yongqiang Cao, Mike Davies, Daniel M Lavery, and Hong Wang. 2018. Programming spiking neural networks on Intel’s Loihi. Computer 51, 3 (2018), 52–61. [29] Owen Marschall, Kyunghyun Cho, and Cristina Savin. 2020. A unified framework of online learning algorithms for training recurrent neural networks. Journal of Machine Learning Research 21, 135 (2020), 1–34. [30] Radford M. Neal. 2011. MCMC Using Hamiltonian Dynamics. CRC Press. https: //doi.org/10.1201/b10905-7 [31] Emre O Neftci, Hesham Mostafa, and Friedemann Zenke. 2019. Surrogate gradi- ent learning in spiking neural networks: Bringing the power of gradient-based optimization to spiking neural networks. IEEE Signal Processing Magazine 36, 6 (2019), 51–63. [32] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. [n.d.]. PyTorch: An imperative style, high-performance deep learning library. arXiv preprint arXiv:1912.01703 ([n. d.]). [33] Balint Petro, Nikola Kasabov, and Rita M. Kiss. 2020. Selection and Optimiza- tion of Temporal Spike Encoding Methods for Spiking Neural Networks. IEEE Transactions on Neural Networks and Learning Systems 31, 2 (Feb. 2020), 358–370. 
https://doi.org/10.1109/tnnls.2019.2906158 [34] P. J. Rossky, J. D. Doll, and H. L. Friedman. 1978. Brownian dynamics as smart Monte Carlo simulation. The Journal of Chemical Physics 69, 10 (1978), 4628–4633. https://doi.org/10.1063/1.436415 arXiv:https://doi.org/10.1063/1.436415 [35] Bodo Rueckauer, Connor Bybee, Ralf Goettsche, Yashwardhan Singh, Joyesh Mishra, and Andreas Wild. 2021. NxTF: An API and Compiler for Deep Spiking Neural Networks on Intel Loihi. arXiv preprint arXiv:2101.04261 (2021). [36] Bodo Rueckauer and Shih-Chii Liu. 2018. Conversion of analog to spiking neural networks using sparse temporal coding. In 2018 IEEE International Symposium on Circuits and Systems (ISCAS). IEEE, 1–5. [37] Bodo Rueckauer, Iulia-Alexandra Lungu, Yuhuang Hu, Michael Pfeiffer, and Shih-Chii Liu. 2017. Conversion of continuous-valued deep networks to efficient event-driven networks for image classification. Frontiers in Neuroscience 11 (2017), 682. [38] João Sacramento, Rui Ponte Costa, Yoshua Bengio, and Walter Senn. 2018. Den- dritic cortical microcircuits approximate the backpropagation algorithm. arXiv preprint arXiv:1810.11393 (2018). [39] Jun Sawada, Filipp Akopyan, Andrew S Cassidy, Brian Taba, Michael V Debole, Pallab Datta, Rodrigo Alvarez-Icaza, Arnon Amir, John V Arthur, Alexander Andreopoulos, et al. 2016. Truenorth ecosystem for brain-inspired computing: scalable systems, software, and applications. In SC’16: Proceedings of the Inter- national Conference for High Performance Computing, Networking, Storage and Analysis. IEEE, 130–141. [40] Andrew Sornborger, Louis Tao, Jordan Snyder, and Anatoly Zlotnik. 2019. A pulse- gated, neural implementation of the backpropagation algorithm. In Proceedings of the 7th Annual Neuro-inspired Computational Elements Workshop. 1–9. [41] Marcel Stimberg, Romain Brette, and Dan FM Goodman. 2019. Brian 2, an intuitive and efficient neural simulator. eLife 8 (2019), e47314. [42] Andy Thomas. 2013. Memristor-based neural networks. Journal of Physics D: Applied Physics 46, 9 (2013), 093001. [43] Peng Yao, Huaqiang Wu, Bin Gao, Jianshi Tang, Qingtian Zhang, Wenqiang Zhang, J Joshua Yang, and He Qian. 2020. Fully hardware-implemented memristor convolutional neural network. Nature 577, 7792 (2020), 641–646. [44] Elham Zamanidoost, Farnood M. Bayat, Dmitri Strukov, and Irina Kataeva. 2015. Manhattan rule training for memristive crossbar circuit pattern classifiers. In 2015 IEEE 9th International Symposium on Intelligent Signal Processing (WISP) Proceedings. 1–6. https://doi.org/10.1109/WISP.2015.7139171 [45] Friedemann Zenke and Surya Ganguli. 2018. Superspike: Supervised learning in multilayer spiking neural networks. Neural computation 30, 6 (2018), 1514–1541.
ai_researcher
2
PAFFA_Premeditated_Actions_For_Fast_Agents.pdf
PAFFA: Premeditated Actions For Fast Agents
Shambhavi Krishna(1,2), Zheng Chen(2), Vaibhav Kumar(2), Xiaojiang Huang(2), Yingjie Li(2), Fan Yang(2), Xiang Li(2)
(1) University of Massachusetts Amherst & (2) Amazon Alexa AI
shambhavikri@umass, zgchen@amazon
arXiv:2412.07958v1 [cs.AI] 10 Dec 2024
Abstract
Modern AI assistants have made significant progress in natural language understanding and API/tool integration, with emerging efforts to incorporate diverse interfaces (such as Web interfaces) for enhanced scalability and functionality. However, current approaches that heavily rely on repeated LLM-driven HTML parsing are computationally expensive and error-prone, particularly when handling dynamic web interfaces and multi-step tasks. To overcome these challenges, we introduce PAFFA (Premeditated Actions For Fast Agents), a framework designed to enhance web interaction capabilities through an Action API Library of reusable, verified browser interaction functions. By pre-computing interaction patterns and employing two core methodologies — "Dist-Map" for task-agnostic element distillation and "Unravel" for incremental page-wise exploration — PAFFA reduces inference calls by 87% while maintaining robust performance even as website structures evolve. This framework accelerates multi-page task execution and offers a scalable solution to advance autonomous web agent research.
1 Introduction
The rapid growth of AI systems capable of autonomously navigating increasingly complex tasks has opened up new opportunities for enabling AI to interact with intricate interfaces, such as web environments. While AI assistants have demonstrated remarkable proficiency in natural language processing and specialized API integration, fluid interaction with web interfaces remains a significant challenge, limiting their broader applicability and adoption.
Current approaches to web interaction face three core challenges: efficiency, reliability, and scalability.
• Efficiency: Existing web agents rely heavily on repeated Large Language Model (LLM) inference calls for HTML parsing and action decisions. Each interaction requires fresh parsing and context understanding, leading to high computational overhead. This becomes particularly problematic in multi-step tasks where each action requires multiple inference calls.
• Reliability: Web interfaces are inherently dynamic, with elements changing positions and structures being updated in real-time. Content indexing systems become outdated quickly, while direct HTML parsing approaches are vulnerable to structural changes, leading to cascading errors in task execution.
• Scalability: Existing solutions often rely on either website-specific implementations or comprehensive HTML understanding for each task. Custom API development is resource-intensive and difficult to maintain, while universal HTML parsing approaches struggle with the diversity of web implementations.
The research community has attempted to address these challenges through various approaches. Early frameworks like MiniWoB++ (Liu et al., 2018) introduced controlled environments for basic web tasks but failed to capture real-world complexity. Mind2Web (Deng et al., 2023) advanced this by extending to real-world websites, yet still faced challenges with dynamic content handling and state management. Multimodal approaches, exemplified by WebVoyager (He et al., 2024), enhanced web understanding by integrating visual and textual elements.
However, these systems operate within what we term the "short-horizon task paradigm"—they process each action independently, requiring separate LLM inference calls for HTML parsing and action selection at each step. This paradigm's limitations become increasingly apparent as task complexity grows. Navigation-focused frameworks like FLIN (Mazumder and Riva, 2020) and WebLINX (Lù et al., 2024) have advanced natural language-based web navigation. However, their reliance on continuous HTML parsing and step-by-step LLM inference creates computational bottlenecks in production environments, particularly for complex multi-step tasks.
To address these limitations, we introduce PAFFA (Premeditated Actions For Fast Agents). With this framework, we bring improved, usable web agent capabilities to AI Assistants by creating an Action API Library of reusable browser interactions. By pre-computing Action APIs, PAFFA eliminates repetitive HTML parsing and reduces inference calls, thereby improving both efficiency and scalability. Our approach comprises two key methodologies:
• Dist-Map: A task-agnostic element distillation process that abstracts interactive elements across web pages, enabling reusable functions independent of specific tasks.
• Unravel: An incremental page-wise exploration method that handles dynamic content and multi-step tasks while minimizing context length issues.
Through these methodologies, PAFFA achieves an 87% reduction in inference calls compared to current approaches while maintaining robust performance across evolving website structures. This significantly accelerates multi-page task execution and enhances AI assistant scalability. Our main contributions are:
• A novel Action API Library that pre-computes and persists interaction patterns for web interfaces, reducing reliance on repeated HTML parsing and LLM inference
• Two complementary methodologies—Dist-Map and Unravel—that improve efficiency and adaptability for web interactions
• A verification framework ensuring API reliability and robustness in dynamic environments
• Comprehensive empirical evaluation on the Mind2Web benchmark, demonstrating superior performance (element accuracy: 0.74 vs. 0.56, step accuracy: 0.57 vs. 0.50) and reduced computational overhead
The remainder of this paper is organized as follows: Section 2 reviews related work in web agents and automation. Section 3 details the Action API Library construction and our core methodologies. Section 4 presents our results. Section 5 analyzes our findings and discusses limitations.
2 Related Works
The development of web agents has evolved across task automation, action planning, and efficiency optimization dimensions, with recent advances in foundation models and multimodal approaches accelerating progress.
2.1 Foundation Models and Web Interaction Frameworks
Early web automation frameworks like MiniWoB++ (Liu et al., 2018) provided controlled environments for basic web tasks, but failed to capture real-world complexity. Mind2Web (Deng et al., 2023) advanced the field by incorporating real-world websites, though challenges remain in handling dynamic content. Pre-trained models have shown particular promise, with bidirectional models like HTML-T5 achieving state-of-the-art results in document parsing (Li et al., 2021).
2.2 Multimodal and Vision-Based Approaches
Recent work has explored multimodal interactions for web navigation.
WebVoyager (He et al., 2024) leverages multimodal models for understanding both visual and textual elements, while Pix2Act (Shaw et al., 2023) demonstrates success in screenshot parsing and behavioral cloning using Monte Carlo Tree Search.
2.3 Navigation and Planning Systems
Frameworks like FLIN (Mazumder and Riva, 2020) and WebLINX (Lù et al., 2024) have advanced natural language-based navigation, though their step-by-step planning mechanisms create efficiency bottlenecks. Recent work (Gur et al., 2023) has shown promise in task decomposition and multi-step interactions, while MindSearch (Ma et al., 2023) introduces graph-based planning strategies.
2.4 Action Abstraction and Evaluation
Current approaches face challenges in balancing accuracy with computational efficiency, requiring repeated HTML parsing and LLM inference. Mind2Web-Live (Pan et al., 2024) introduces progress-aware evaluation allowing multiple valid paths to task completion. While frameworks have attempted to create reusable components, most focus on low-level actions. Recent work in self-experience supervision (Gur et al., 2023) and frameworks like TPTU-v2 (Kong et al., 2023) explore promising directions in tool use and planning, though gaps remain in developing flexible, high-level action APIs.
3 Methodology
3.1 Overview
Contemporary web agent architectures face two fundamental constraints that limit their effectiveness: context length limitations and cross-website generalization capabilities.
The first constraint stems from the inherent limitations of LLMs when processing HTML documents (Zhou et al., 2023; Deng et al., 2023). Modern websites present very complex and highly variable DOM structures, making comprehensive context processing expensive; this is particularly acute in multi-page tasks, where cumulative context across pages dramatically increases computational overhead and reduces efficiency.
The second constraint emerges from the dynamic nature of web interfaces, as documented by (Pan et al., 2024), which demonstrates that web interfaces and interaction patterns undergo frequent modifications over time. This is further supported by baseline performance metrics from Mind2Web (Table 1), which show significantly degraded performance when models encounter previously unseen websites (Cross-Website split), indicating that models struggle more with novel website structures than with new task domains which may be similar to training.
Our framework, PAFFA, addresses these constraints while minimizing operational overhead from multiple LLM inference calls. The implementation begins with the creation of task-specific Python scripts (Figure 1) derived from the Mind2Web dataset (see Appendix A.2 for particulars on the dataset used). These scripts utilize Selenium WebDriver (sel) to simulate user interactions across different browsers through programmatic control of UI elements. All prompting operations in our system employ Sonnet 3.5-v2 as the base LLM.
Figure 1: Creating task-specific scripts.
The framework then implements a hierarchical organization of tasks within each website domain (Figure 2). This organization enables the development of consolidated APIs that can handle both observed and novel tasks through parameterization. These grouped APIs collectively form an Action API Library for each website (Figure 3). During deployment, this library eliminates the need for long-context HTML document parsing with LLMs (Figure 4). Instead, the system operates as a tool-use and planning agent, mapping user requests to appropriate API calls and executing them directly in the browser.
Figure 2: Grouping tasks solvable by same API.
Figure 3: Creating APIs per group.
We explore two distinct methodological approaches for implementing this workflow: Dist-Map and Unravel. While both approaches address the challenge of efficient web interaction, they serve complementary purposes. Dist-Map optimizes for reusability and maintenance through its task-agnostic approach, making it ideal for stable, frequently-used interaction patterns. Unravel excels at handling novel or complex interactions by breaking them into manageable chunks, providing better resilience to website changes and edge cases.
Figure 4: Using Action API Library.
The framework selects between these approaches based on task characteristics and historical performance data. These methods represent different strategies for balancing context management with execution efficiency, which we detail in the following sections.
3.2 Dist-Map: Distillation then Mapping
Developed as our first methodological approach for task-specific script generation (Figure 1), Dist-Map builds upon established research in HTML document understanding (Gur et al., 2023; Zheng et al., 2023). While addressing the challenge of processing multiple HTML pages within a unified context, we introduce task-agnostic HTML document understanding to enable persistent actions across semantically similar tasks.
Dist-Map operates in two phases: element distillation and script generation. The distillation phase creates distilled functions that encapsulate DOM selectors and their functionalities, organized hierarchically by page-level operations. This systematic reduction in context enables focused task planning rather than requiring comprehensive page interpretation. By implementing task-agnostic distillation, the system can operate on any website without prior knowledge of specific annotated interactions, significantly reducing dataset dependence.
Our experimental observations led to the implementation of a verification mechanism for ensuring DOM selector and attribute correctness, as initial distilled outputs may not achieve perfect accuracy. This verification process identifies non-interactive extracted functions and generates corrections using the Sonnet language model, leveraging:
1. Language models' high precision in identifying and correcting specific inaccuracies
2. The tractability of correcting a limited set of incorrect elements compared to comprehensive element identification
The script generation phase employs contextual mapping, where Sonnet maps each task to its required distilled element files. For instance, an airline check-in task selectively incorporates elements from homepage and check-in pages while excluding unrelated flight booking interfaces. This targeted approach enables precise context management and focused task execution, utilizing a two-step self-reflective prompting system to generate Selenium-based Python code with established best practices.
Through this methodology, Dist-Map enables the creation of task-specific scripts that form the foundation of our Action API Library (Figures 2, 3), particularly excelling at handling complex multi-page interactions while maintaining minimal context requirements.
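To illustrate the shape of these artifacts, the sketch below shows a hypothetical distilled element function and a task-specific script composed from it. The selectors, element IDs, function names, and URL are invented for illustration and are not taken from the generated library; only the Selenium WebDriver usage itself is standard.

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    # Distilled, task-agnostic element function (hypothetical):
    def open_check_in(driver):
        """Homepage: navigate to the check-in form."""
        driver.find_element(By.CSS_SELECTOR, "a#checkin-tab").click()

    # Task-specific script composed from distilled functions:
    def check_in(driver, last_name, confirmation_code):
        driver.get("https://www.example-airline.com")
        open_check_in(driver)
        driver.find_element(By.ID, "lastName").send_keys(last_name)
        driver.find_element(By.ID, "confCode").send_keys(confirmation_code)
        driver.find_element(By.CSS_SELECTOR, "button[type=submit]").click()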
3.3 Unravel: Solve in chunks
While implementing Dist-Map, we identified a significant limitation: the task-invariant element distillation process does not achieve perfect accuracy in identifying task-essential element selectors, even with corrective measures in place. This observation led to the development of Unravel, our second methodological approach to handling multi-page complex tasks.
Instead of employing comprehensive selector distillation, Unravel implements a chunked execution strategy, decomposing tasks into smaller units that can be processed using individual webpages. This approach addresses the context length constraint by enabling the LLM to process one HTML page at a time, while directly extracting the necessary selectors for each subtask.
A key innovation of Unravel is its exploration-based architecture. When presented with a live website, the system can incorporate new task types not covered by existing APIs without requiring additional annotations. This capability extends to cross-website task adaptation - for instance, a booking task initially developed for United Airlines can be adapted for use on Delta Airlines' website. By enabling task exploration across similar websites, Unravel significantly enhances the system's generalization potential and versatility.
The method demonstrates sophisticated planning capabilities despite its chunked execution approach. Our analysis of model outputs reveals that while processing individual chunks, the system maintains awareness of future steps and develops preliminary plans for subsequent actions. This forward-looking behavior ensures coherent task execution across multiple pages.
A notable technical advantage of Unravel is its implementation of robust selector handling through multi-step try-except blocks in the generated code. This enhanced fallback mechanism is made possible by providing the model with complete page context rather than a constrained set of interactions. The approach significantly improves the system's resilience to website structure variations and dynamic content changes.
3.4 Library Creation
Once the scripts are created, we turn to creating an Action API Library by assimilating content from all related scripts. This involves two reasoning-based steps. First, the model must create logical groupings for the tasks of a website, with each group containing one or more tasks and designed to have as much overlap as possible for web execution on that page (Figure 2). Next, for each group, we direct the model (with a 2-step self-corrective reasoning prompt) to generate an API that can adapt to execute every step in the corresponding scripts (see Figure 3). That is, within a task group, the generated API would, with the right parameters, solve the task exactly like the original script would have. This prompt ensures that the parameterization of necessary and optional arguments is clear and well-documented, so that grounding requests to APIs in the downstream step (Figure 4) is easier as well.
4 Results
In order to measure task completeness and compare to Mind2Web, there are two metrics: 'Element Accuracy', i.e., the percentage of interacted elements that are correct, and 'Step Accuracy', i.e., the percentage of task steps with the correct combination of element and action taken on that element. Baseline comparisons can be viewed across several data splits: cross-task (test tasks are new, but the websites were seen during training) and cross-website (test websites were not seen during training).
We also consider two domains: Airlines and Shopping.
During evaluation, element accuracy and step accuracy (both macro-averaged across websites) are expected to exactly match the annotation (marked 'Exact' in Tables 1, 2). However, qualitative analysis finds that many tasks actually have multiple correct 'paths' to completion, so those inexact paths should also be credited, along with annotation irregularities. Using these findings, we perform a human re-evaluation, marked 'Inexact' in Tables 1, 2.

Table 1: Element Accuracies: MindAct vs. PAFFA
Dataset Split     | DeB   | T5 B  | T5 L  | T5 XL | S3.5  | Dist-Map Ex | Dist-Map Inex | Unravel Ex | Unravel Inex
Airlines          | 0.306 | 0.477 | 0.562 | 0.626 | 0.492 | 0.618 | 0.67  | 0.75 | 0.76
Air.+Shop.-Task   | 0.285 | 0.428 | 0.508 | 0.566 | 0.418 | 0.699 | 0.78  | 0.67 | 0.701
Shop.-Cross Task  | 0.202 | 0.273 | 0.333 | 0.357 | 0.202 | 0.779 | 0.89  | 0.59 | 0.642
Shop.-Task+CW     | 0.283 | 0.372 | 0.45  | 0.472 | 0.316 | 0.74  | 0.86  | 0.61 | 0.67
Shop.-Cross-Web   | 0.354 | 0.510 | 0.552 | 0.552 | 0.395 | 0.7   | 0.83  | 0.62 | 0.72
Air.+All Shop.    | 0.298 | 0.449 | 0.526 | 0.562 | 0.422 | 0.699 | 0.79  | 0.65 | 0.74
(DeB, T5 B/L/XL, and S3.5 are baselines run in the MindAct framework; Ex/Inex denote exact and inexact scoring.)

Table 2: Step Accuracy: MindAct vs. PAFFA
Dataset Split     | T5 B  | T5 L  | T5 XL | S3.5  | Dist-Map Ex | Dist-Map Inex | Unravel Ex | Unravel Inex
Airlines          | 0.462 | 0.564 | 0.616 | 0.467 | 0.35  | 0.42  | 0.38 | 0.58
Air.+Shop.-Task   | 0.421 | 0.503 | 0.548 | 0.38  | 0.35  | 0.525 | 0.29 | 0.55
Shop.-Cross Task  | 0.276 | 0.331 | 0.338 | 0.173 | 0.35  | 0.63  | 0.2  | 0.52
Shop.-Task+CW     | 0.286 | 0.366 | 0.384 | 0.231 | 0.344 | 0.60  | 0.29 | 0.56
Shop.-Cross-Web   | 0.357 | 0.385 | 0.362 | 0.249 | 0.33  | 0.56  | 0.36 | 0.58
Air.+All Shop.    | 0.409 | 0.482 | 0.500 | 0.357 | 0.34  | 0.57  | 0.32 | 0.57

Table 3: After Setup Usage Per Task/Request
          | # Tokens | Calls | Total Tokens
MindAct   | 1,565    | 126   | 197,190
PAFFA     | 25,000   | 1     | 25,000

4.1 Finetuning versus Prompting SOTA
Another key point to acknowledge during baseline comparisons is that MindAct uses a fine-tuned combination of DeBERTa (He et al., 2021) (86M) and FLAN-T5 (Chung et al., 2022) (the XL variant has 3B parameters). Sonnet 3.5, however, is currently the SOTA model and is much larger, and we use it with zero-shot prompting. In order to understand how much of the model performance may be attributed to Sonnet 3.5, we also run Sonnet in the MindAct framework with 3-shot examples (the Mind2Web authors follow the same three-shot strategy for GPT 3.5 and GPT-4 in their paper). These results are compiled and shown in Tables 1, 2. We can see that despite using Sonnet 3.5 in MindAct with a 3-shot prompt, the PAFFA framework methods outperform the former. In particu-
Further, this calculation does not take into account the time/tokens for ranking candidate elements from DeBERTa for each page in real time, further increasing costs for MindAct. 5 Discussion and Conclusion Through PAFFA (Premeditated Actions For Fast Agents), we introduce a framework that signifi- cantly advances web agent capabilities through an Action API library of reusable functions. Our two 2This is with k=50 as is preferred in the Mind2Web paper. 3Number of tokens is estimated as number of characters/4. 4For about 30 unique Action APIs in the Library. core methodologies—Dist-Map’s task-agnostic el- ement distillation and Unravel’s incremental page- wise exploration—work in tandem to address the fundamental challenges of efficiency, reliability, and scalability in web automation. Empirical evaluations on the Mind2Web bench- mark demonstrate substantial improvements over existing approaches, with element accuracy increas- ing from 56% to 74% and step accuracy from 50% to 57%. Most significantly, PAFFA achieves an 87% reduction in inference tokens while main- taining robust performance across different data splits, including challenging cross-website scenar- ios. This improvement in efficiency, coupled with consistent performance, positions PAFFA as a prac- tical solution for production environments. Despite these advances, we acknowledge sev- eral limitations: the reliance on human evalua- tion given multiple valid task completion paths, the need for robust verification modules, and the cur- rent scope of test datasets. These challenges, how- ever, point to promising future research directions, including automated API grouping and grounding methods, integration with broader AI assistant ca- pabilities, enhanced verification mechanisms, and automated API maintenance for evolving websites. The Action API Library concept represents a fun- damental shift in approach, moving from repetitive HTML parsing to pre-computed, reusable actions. By significantly reducing computational overhead while maintaining adaptability to dynamic web- sites, PAFFA provides a scalable foundation for ad- vancing autonomous web agent research and prac- tical deployment. 6 References Webdriver. Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Yunxuan Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Al- bert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdh- ery, Alex Castro-Ros, Marie Pellat, Kevin Robinson, Dasha Valter, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Zhao, Yanping Huang, Andrew Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Ja- cob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, and Jason Wei. 2022. Scaling instruction-finetuned language models. Preprint, arXiv:2210.11416. Xiang Deng, Yu Gu, Boyuan Zheng, Shijie Chen, Samuel Stevens, Boshi Wang, Huan Sun, and Yu Su. 2023. Mind2web: Towards a generalist agent for the web. Preprint, arXiv:2306.06070. Izzeddin Gur, Hiroki Furuta, Austin Huang, Mustafa Safdari, Yutaka Matsuo, Douglas Eck, and Aleksan- dra Faust. 2023. A real-world webagent with plan- ning, long context understanding, and program syn- thesis. arXiv preprint arXiv:2307.12856. Hongliang He, Wenlin Yao, Kaixin Ma, Wenhao Yu, Yong Dai, Hongming Zhang, Zhenzhong Lan, and Dong Yu. 2024. Webvoyager: Building an end-to- end web agent with large multimodal models. arXiv preprint arXiv:2401.13919. Pengcheng He, Xiaodong Liu, Jianfeng Gao, and Weizhu Chen. 2021. Decoding- enhanced bert with disentangled attention. 
Preprint, arXiv:2006.03654. Deberta: Yilun Kong, Jingqing Ruan, Yihong Chen, Bin Zhang, Tianpeng Bao, Shiwei Shi, Guoqing Du, Xiaoru Hu, Hangyu Mao, Ziyue Li, et al. 2023. Tptu-v2: Boost- ing task planning and tool usage of large language model-based agents in real-world systems. arXiv preprint arXiv:2311.11315. Junlong Li, Yiheng Xu, Lei Cui, and Furu Wei. 2021. Markuplm: Pre-training of text and markup language for visually-rich document understanding. arXiv preprint arXiv:2110.08518. Evan Zheran Liu, Kelvin Guu, Panupong Pasupat, Tian- lin Shi, and Percy Liang. 2018. Reinforcement learn- ing on web interfaces using workflow-guided explo- ration. arXiv preprint arXiv:1802.08802. Xing Han Lù, Zdenˇek Kasner, and Siva Reddy. 2024. Weblinx: Real-world website navigation with multi- turn dialogue. Preprint, arXiv:2402.05930. Kaixin Ma, Hongming Zhang, Hongwei Wang, Xiao- man Pan, Wenhao Yu, and Dong Yu. 2023. Laser: Llm agent with state-space exploration for web navi- gation. arXiv preprint arXiv:2309.08172. Sahisnu Mazumder and Oriana Riva. 2020. Flin: A flexible natural language interface for web navigation. arXiv preprint arXiv:2010.12844. Yichen Pan, Dehan Kong, Sida Zhou, Cheng Cui, Yifei Leng, Bing Jiang, Hangyu Liu, Yanyi Shang, Shuyan Zhou, Tongshuang Wu, et al. 2024. Webcanvas: Benchmarking web agents in online environments. arXiv preprint arXiv:2406.12373. Peter Shaw, Mandar Joshi, James Cohan, Jonathan Be- rant, Panupong Pasupat, Hexiang Hu, Urvashi Khan- delwal, Kenton Lee, and Kristina N Toutanova. 2023. From pixels to ui actions: Learning to follow in- structions via graphical user interfaces. Advances in Neural Information Processing Systems, 36:34354– 34370. Longtao Zheng, Rundong Wang, and Bo An. 2023. Leveraging few-shot exemplars for arXiv preprint Synapse: human-level computer control. arXiv:2306.07863. Shuyan Zhou, Frank F Xu, Hao Zhu, Xuhui Zhou, Robert Lo, Abishek Sridhar, Xianyi Cheng, Tianyue Ou, Yonatan Bisk, Daniel Fried, et al. 2023. We- barena: A realistic web environment for building au- tonomous agents. arXiv preprint arXiv:2307.13854. A Appendix A.1 Common Prior Workflow Existing solutions iteratively call the LLM after ev- ery single action, which is inefficient, and does not take into account planning that Agents can lever- age to understand the web like a human would, by constructing viable workflows and following those action sequences. Previously executed tasks should inform similar future tasks. A.2 Dataset We evaluate our framework on the Mind2Web benchmark (Deng et al., 2023), which provides an- notated actions for diverse tasks across real-world websites. This dataset’s key advantage is its use of complex, real-world websites rather than simplified environments like Mini-WoB++ (Liu et al., 2018), enabling more realistic evaluation of generalization capabilities. For our evaluation, we extract a focused subset of Travel-Airlines domain and Shopping-General Cross-Task splits. We additionally include the Cross-Website split of Shopping-General to assess generalization in our no-train setting (see Appendix A.3 for more details). From each website’s train- ing data, we extract ’unique web pages’ - pages with distinct HTML structures whose content may change dynamically (e.g., a homepage’s structure 7 Figure 5: Common workflow of existing solutions like MindAct (Deng et al., 2023). 8 remains constant while its content varies with date and location). 
We note two key limitations of the benchmark: lack of interactivity compared to toy environments, and its evaluation methodology requiring exact matching of user actions. As the dataset expects models to map HTML documents, previous ac- tions, and tasks to single-step actions, it presents a potential disadvantage for our approach, which aims to avoid task-specific implementations and live HTML parsing at test time. A.3 Dataset Details The three datasets are: 1. Cross-Task Split for Travel-Airlines : contains 31 test tasks, across 7 websites : American Airlines, Delta, Jetblue, Kayak, Qatarairways, Ryanair, United. 2. Cross-Task Split for Shopping-General : con- tains 8 test tasks, across 3 websites : Amazon, Target, Instacart. 3. Cross-Website Split for Shopping-General : contains 17 test tasks, for one website : Google Shopping. Note : There are no Cross-Website test cases for Travel-Airlines. 9
ai_researcher
1
Sharing_the_Cost_of_Success_A_Game_for_Evaluating_and_Learning_Collaborative_Multi-Agent_Instruction_Giving_and_Following_Policies.pdf
Costing Generated Runtime Execution Plans for Large-Scale Machine Learning Programs
Matthias Boehm, IBM Research – Almaden; San Jose, CA, USA; [email protected]
November 16, 2021
arXiv:1503.06384v1 [cs.DC] 22 Mar 2015
ABSTRACT
Declarative large-scale machine learning (ML) aims at the specification of ML algorithms in a high-level language and automatic generation of hybrid runtime execution plans ranging from single node, in-memory computations to distributed computations on MapReduce (MR) or similar frameworks like Spark. The compilation of large-scale ML programs exhibits many opportunities for automatic optimization. Advanced cost-based optimization techniques require—as a fundamental precondition—an accurate cost model for evaluating the impact of optimization decisions. In this paper, we share insights into a simple and robust yet accurate technique for costing alternative runtime execution plans of ML programs. Our cost model relies on generating and costing runtime plans in order to automatically reflect all successive optimization phases. Costing runtime plans also captures control flow structures such as loops and branches, and a variety of cost factors like IO, latency, and computation costs. Finally, we linearize all these cost factors into a single measure of expected execution time. Within SystemML, this cost model is leveraged by several advanced optimizers like resource optimization and global data flow optimization. We share our lessons learned in order to provide foundations for the optimization of ML programs.
1. INTRODUCTION
State-of-the-art systems for large-scale ML aim at declarative ML with high-level languages including linear algebra, statistical functions, and ML-specific constructs. This declarative approach allows users to write their custom ML algorithms once, independent of the underlying runtime framework, data or cluster characteristics. These high-level ML programs are then automatically optimized and compiled into hybrid in-memory and distributed runtime plans. The major advantages of such a high-level language are the full flexibility to specify new or customize existing ML algorithms, physical data independence of the underlying data representation (e.g., dense/sparse, formats, matrix blocking), and both efficiency and scalability via automatic cost-based optimization. There are many high impact optimization opportunities like static and dynamic algebraic rewrites, matrix multiplication chain optimization, decisions between single node and distributed plans, or alternative physical operators. However, any cost-based optimization technique requires an accurate cost model for evaluating alternative plans or quantifying the impact of optimization decisions.
Cost Model Requirements: There are several important requirements on such a cost model for optimizing large-scale ML programs which originate from potentially distributed runtime plans and ML program characteristics.
• Analytical Cost Model (R1): We need an analytical cost model in order to cost alternative runtime plans. The potentially large number of alternative plans prohibits a model relying on previous or sample runs.
• Diverse Cost Factors (R2): Large-scale ML programs exhibit several orthogonal cost factors which all can turn into bottlenecks. This includes IO, latency, and computation costs. Simple cost models like the sum of intermediate result sizes cannot capture all.
• Resource Awareness (R3): The optimization of ML programs is sensitive to available memory and parallelism. Hence, our cost model needs to be aware of cluster characteristics and resource configurations.
• Complex Control Flow (R4): ML programs often contain deep control flow structures of loops, branches, and function calls. Our cost model needs to be able to cost arbitrary complex programs.
In this paper, we share a simple and robust technique of costing generated runtime plans which is the result of several lessons we have learned applying earlier cost model versions in real-world use cases of SystemML [1, 2, 3].
Example ML Program for Linear Regression: As our running example, we use a simplified version of a closed-form linear regression algorithm. Its conciseness makes it feasible to present generated runtime plans, which are rarely shown in the literature. The following DML script (w/ R-like syntax) solves an ordinary least square problem y = Xβ.

1: X = read($1);
2: y = read($2);
3: intercept = $3; lambda = 0.001;
4: if( intercept == 1 ) {
5:    ones = matrix(1, nrow(X), 1);
6:    X = append(X, ones);
7: }
8: I = matrix(1, ncol(X), 1);
9: A = t(X) %*% X + diag(I)*lambda;
10: b = t(X) %*% y;
11: beta = solve(A, b);
12: write(beta, $4);

In detail, we read two matrices X and y from HDFS, where we append a column of 1s to X if we are asked to compute
• Resource Awareness (R3): The optimization of ML programs is sensitive to available memory and parallelism. Hence, our cost model needs to be aware of cluster characteristics and resource configurations.
• Complex Control Flow (R4): ML programs often contain deep control flow structures of loops, branches, and function calls. Our cost model needs to be able to cost arbitrarily complex programs.

In this paper, we share a simple and robust technique of costing generated runtime plans, which is the result of several lessons we have learned applying earlier cost model versions in real-world use cases of SystemML [1, 2, 3].

Example ML Program for Linear Regression: As our running example, we use a simplified version of a closed-form linear regression algorithm. Its conciseness makes it feasible to present generated runtime plans, which are rarely shown in the literature. The following DML script (w/ R-like syntax) solves an ordinary least squares problem y = Xβ.

1: X = read($1);
2: y = read($2);
3: intercept = $3; lambda = 0.001;
4: if( intercept == 1 ) {
5:    ones = matrix(1, nrow(X), 1);
6:    X = append(X, ones);
7: }
8: I = matrix(1, ncol(X), 1);
9: A = t(X) %*% X + diag(I)*lambda;
10: b = t(X) %*% y;
11: beta = solve(A, b);
12: write(beta, $4);

In detail, we read two matrices X and y from HDFS, where we append a column of 1s to X if we are asked to compute the model intercept. The core computation of this ML program (lines 9-11) then constructs and solves a linear system of equations with regularization λ. The size of the intermediate results A and b is determined by the number of features. Finally, we write the model coefficients β to HDFS.

In the rest of this paper, we discuss runtime plans generated by SystemML for different input sizes and cluster characteristics as well as the costing of these generated plans. Selected details of the entire compilation chain are described in SystemML's architecture [3], SystemML's optimizer [1], and SystemML's parfor optimizer for task-parallel ML programs [2]. We leverage SystemML's text-based EXPLAIN tool that allows us to capture plans at different compilation levels like HOPs and runtime plans during initial compilation, as well as HOPs and runtime plans during recompilation.

2. GENERATING RUNTIME PLANS
In this section, we discuss the basics of generating runtime plans in SystemML. All examples are created on a 1+6 node cluster, i.e., one head node of 2x4 Intel E5530 @ 2.40 GHz-2.66 GHz with hyper-threading enabled and 64 GB RAM, as well as 6 nodes of 2x6 Intel E5-2440 @ 2.40 GHz-2.90 GHz with hyper-threading enabled, 96 GB RAM, 12x2 TB disk storage, and 10Gb Ethernet. We used Hadoop 2.2.0 and a static cluster configuration with 2 GB max/initial JVM heap size for the client and map/reduce tasks. Our HDFS capacity was 107 TB (11 disks per node), and we used an HDFS block size of 128 MB. Finally, our default configurations of SystemML are 12 reducers (2x number of nodes) and a memory budget ratio of 70% of the max heap size.

Scenarios of Different Input Sizes: We use scenarios of different input sizes in order to show the effect on runtime plan generation. Table 1 gives an overview of five scenarios ranging from very small to very large use cases.

Table 1: Overview of Scenarios of Input Sizes.
Scenario         X                  y              Input Size
Linreg DS, XS    10^4 x 10^3        10^4 x 1       80 MB
Linreg DS, XL1   10^8 x 10^3        10^8 x 1       800 GB
Linreg DS, XL2   10^8 x 2*10^3      10^8 x 1       1.6 TB
Linreg DS, XL3   2*10^8 x 10^3      2*10^8 x 1     1.6 TB
Linreg DS, XL4   2*10^8 x 2*10^3    2*10^8 x 1     3.2 TB
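As a quick sanity check, the "Input Size" column of Table 1 follows directly from dense double-precision storage at 8 bytes per cell (binary block adds only minor metadata overhead). The following snippet is our own illustration, not code from the paper:

import sys

scenarios = {
    "XS":  (10**4, 10**3),         # X: 10^4 x 10^3
    "XL1": (10**8, 10**3),
    "XL2": (10**8, 2 * 10**3),
    "XL3": (2 * 10**8, 10**3),
    "XL4": (2 * 10**8, 2 * 10**3),
}
for name, (m, n) in scenarios.items():
    x_bytes = m * n * 8            # dense matrix X
    y_bytes = m * 1 * 8            # dense column vector y
    print(f"{name}: ~{(x_bytes + y_bytes) / 1e9:.1f} GB")
# XS: ~0.1 GB (80 MB for X alone), XL1: ~800 GB,
# XL2/XL3: ~1.6 TB, XL4: ~3.2 TB, matching the table.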
In detail, this table shows the input dimensions of X and y as well as the input data size in binary block format. Note that we use fully dense data sets, where the number of non-zeros is equal to the number of matrix cells. In the following, we discuss generated runtime plans of selected scenarios.

Example HOP DAG (Scenario XS): First of all, we have a look at generated HOP DAGs for our example ML program, which allows a natural transition from the script level to the level of runtime plans. We use scenario XS with input sizes of X: 10^4 x 10^3 (80 MB, dense, binary block) and y: 10^4 x 1 (1 MB, dense) as well as an intercept of 0. Figure 1 shows the HOP EXPLAIN output (after HOP rewrites, computation of memory estimates, and execution type selection).

# Memory Budget local/remote = 1434MB/1434MB/1434MB
# Degree of Parallelism (vcores) local/remote = 24/144/72
PROGRAM
--MAIN PROGRAM
----GENERIC (lines 1-3) [recompile=false]
------(10) PRead X [1e4,1e3,1e3,1e3,1e7] [76MB] CP
------(11) TWrite X (10) [1e4,1e3,1e3,1e3,1e7] [76MB] CP
------(21) PRead y [1e4,1,1e3,1e3,1e4] [0MB] CP
------(22) TWrite y (21) [1e4,1,1e3,1e3,1e4] [0MB] CP
------(24) TWrite intercept [0,0,-1,-1,-1] [0MB] CP
------(26) TWrite lambda [0,0,-1,-1,-1] [0MB] CP
----GENERIC (lines 8-12) [recompile=false]
------(42) TRead X [1e4,1e3,1e3,1e3,1e7] [76MB] CP
------(52) r(t) (42) [1e3,1e4,1e3,1e3,1e7] [153MB] CP
------(53) ba(+*) (52,42) [1e3,1e3,1e3,1e3,-1] [168MB] CP
------(50) u(ncol) (42) [0,0,-1,-1,-1] [0MB] CP
------(71) dg(rand) (50) [1e3,1,1e3,1e3,1e3] [0MB] CP
------(54) r(diag) (71) [1e3,1e3,1e3,1e3,1e3] [0MB] CP
------(57) b(+) (53,54) [1e3,1e3,1e3,1e3,-1] [15MB] CP
------(43) TRead y [1e4,1,1e3,1e3,1e4] [0MB] CP
------(59) ba(+*) (52,43) [1e3,1,1e3,1e3,-1] [76MB] CP
------(60) b(solve) (57,59) [1e3,1,1e3,1e3,-1] [15MB] CP
------(66) PWrite beta (60) [1e3,1,-1,-1,-1] [0MB] CP

Figure 1: Example HOP DAG, Scenario XS. This program has two program blocks, w/ one HOP DAG per block. Every HOP shows its ID, operation, child IDs, output sizes (number of rows/columns, row/column block sizes, number of non-zeros), operation memory estimate, and selected execution type.

There are several noteworthy modifications compared to the original script. First, after constant folding, the branch condition (lines 4-7) became constant and hence was removed accordingly. Second, multiple rewrites transformed the expression diag(matrix(1,...))*lambda into diag(matrix(lambda,...)), which prevents one unnecessary intermediate. Third, we propagated the input dimension sizes over the entire program and computed the individual operation memory estimates (input, intermediate, and output memory requirements) accordingly. Obviously, for sparse input data, this is more challenging. Fourth, according to these memory estimates and the given memory budgets (local, remote map/reduce), we selected the execution type CP (control program), i.e., pure single-node, in-memory operations for all HOPs. Apart from persistent/transient reads/writes, the HOP DAG contains operators for transpose (r(t)), matrix multiplication (ba(+*)), matrix construction (dg(rand)), vector-to-diagonal matrix (r(diag)), element-wise binary addition (b(+)), and solving a linear system of equations (b(solve)). This program of HOP DAGs is then compiled over LOP DAGs into a runtime program of executable program blocks and instructions.

Example Runtime Program (Scenario XS): Given the described program of HOP DAGs, we can now discuss runtime plan generation.
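To make the memory-estimate-driven execution type selection concrete, the following sketch is our own illustration (the function names are ours, not SystemML's API; the 1,434 MB budget and the worst-case dense estimate mirror the configuration above):

def mem_estimate_mb(rows, cols, sparsity=1.0):
    """Worst-case in-memory matrix size in MB (dense: 8 bytes per cell)."""
    if sparsity == 1.0:
        return rows * cols * 8 / 1e6
    # sparse: roughly 12 bytes per non-zero plus row pointers (an assumption)
    return (rows * cols * sparsity * 12 + rows * 4) / 1e6

def select_exec_type(op_mem_mb, local_budget_mb=1434):
    """CP (single-node, in-memory) if the operation fits into the local
    memory budget (70% of the 2 GB heap here); otherwise MR."""
    return "CP" if op_mem_mb <= local_budget_mb else "MR"

# HOP 53 (X^T X) in scenario XS has a 168 MB estimate -> CP:
print(select_exec_type(168))                             # CP
# The same operation in scenario XL1 exceeds the budget -> MR:
print(select_exec_type(mem_estimate_mb(10**8, 10**3)))   # MR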
We first look at the small scenario XS (80 MB) due to its simple translation. Figure 2 shows the generated runtime plan, where we also see additional optimizer choices. First, for X^T X (HOP 53), we selected the physical operator tsmm (transpose-self matrix multiply) in order to exploit the unary input characteristic and the known result symmetry, which allows us to do only half the computation. Second, we applied a specific HOP-LOP rewrite, transforming X^T y (HOP 59) into (y^T X)^T in order to prevent the transpose of X. This is done during LOP construction because it exhibits additional memory constraints, which we will discuss later in more detail. Note that we also compile size information into the runtime plan in order to provide operations with all available meta data.

PROGRAM ( size CP/MR = 34/0 )
--MAIN PROGRAM
----GENERIC (lines 1-3) [recompile=false]
------CP createvar pREADX ./mboehm/cost/X false binaryblock 10000 1000 1000 1000 10000000
------CP createvar pREADy ./mboehm/cost/y false binaryblock 10000 1 1000 1000 10000
------CP assignvar 0.SCALAR.INT.true intercept.SCALAR.INT
------CP assignvar 0.0010.SCALAR.DOUBLE.true lambda.SCALAR.DOUBLE
------CP cpvar pREADX X
------CP cpvar pREADy y
----GENERIC (lines 8-12) [recompile=false]
------CP createvar _mVar2 scratch_space//_p4140352_9.1.70.96//_t0/temp1 true binaryblock 1000 1000 1000 1000 -1
------CP tsmm X.MATRIX.DOUBLE _mVar2.MATRIX.DOUBLE LEFT
------CP createvar _mVar3 scratch_space//_p4140352_9.1.70.96//_t0/temp2 true binaryblock 1000 1 1000 1000 1000
------CP rand 1000 1 1000 1000 0.0010 0.0010 1.0 -1 uniform _mVar3.MATRIX.DOUBLE
------CP createvar _mVar4 scratch_space//_p4140352_9.1.70.96//_t0/temp3 true binaryblock 1 10000 1000 1000 10000
------CP r' y.MATRIX.DOUBLE _mVar4.MATRIX.DOUBLE
------CP createvar _mVar5 scratch_space//_p4140352_9.1.70.96//_t0/temp4 true binaryblock 1000 1000 1000 1000 1000
------CP rdiag _mVar3.MATRIX.DOUBLE _mVar5.MATRIX.DOUBLE
------CP createvar _mVar6 scratch_space//_p4140352_9.1.70.96//_t0/temp5 true binaryblock 1 1000 1000 1000 -1
------CP ba+* _mVar4.MATRIX.DOUBLE X.MATRIX.DOUBLE _mVar6.MATRIX.DOUBLE
------CP createvar _mVar7 scratch_space//_p4140352_9.1.70.96//_t0/temp6 true binaryblock 1000 1000 1000 1000 -1
------CP + _mVar2.MATRIX.DOUBLE _mVar5.MATRIX.DOUBLE _mVar7.MATRIX.DOUBLE
------CP createvar _mVar8 scratch_space//_p4140352_9.1.70.96//_t0/temp7 true binaryblock 1000 1 1000 1000 -1
------CP r' _mVar6.MATRIX.DOUBLE _mVar8.MATRIX.DOUBLE
------CP createvar _mVar9 scratch_space//_p4140352_9.1.70.96//_t0/temp8 true binaryblock 1000 1 1000 1000 -1
------CP solve _mVar7.MATRIX.DOUBLE _mVar8.MATRIX.DOUBLE _mVar9.MATRIX.DOUBLE
------CP write _mVar9.MATRIX.DOUBLE ./mboehm/cost/b.SCALAR.STRING.true textcell.SCALAR.STRING.true

Figure 2: Example Runtime Plan, Scenario XS (same structure and characteristics as described for Figure 3).

Example Runtime Program (Scenario XL1): We now also discuss a larger scenario XL1 (800 GB). For this scenario, the memory estimates of HOPs 52, 53, and 59 are >1 TB, which is larger than the local memory budget of 1,434 MB, and hence we select the execution type MR for these operators.
Figure 3 shows the generated runtime plan, which accordingly includes a generated MR-job instruction:

PROGRAM ( size CP/MR = 29/1 )
--MAIN PROGRAM
----GENERIC (lines 1-3) [recompile=false]
------CP createvar pREADX ./mboehm/cost/X false binaryblock 100000000 1000 1000 1000 100000000000
------CP createvar pREADy ./mboehm/cost/y false binaryblock 100000000 1 1000 1000 100000000
------CP assignvar 0.SCALAR.INT.true intercept.SCALAR.INT
------CP assignvar 0.0010.SCALAR.DOUBLE.true lambda.SCALAR.DOUBLE
------CP cpvar pREADX X
------CP cpvar pREADy y
----GENERIC (lines 8-12) [recompile=true]
------CP createvar _mVar2 scratch_space//_p4149973_9.1.70.96//_t0/temp1 true binaryblock 1000 1 1000 1000 1000
------CP rand 1000 1 1000 1000 0.0010 0.0010 1.0 -1 uniform _mVar2.MATRIX.DOUBLE
------CP createvar _mVar3 scratch_space//_p4149973_9.1.70.96//_t0/temp2 true binaryblock 100000000 1 1000 1000 100000000
------CP partition y.MATRIX.DOUBLE _mVar3.MATRIX.DOUBLE ROW_BLOCK_WISE_N
------CP createvar _mVar4 scratch_space//_p4149973_9.1.70.96//_t0/temp3 true binaryblock 1000 1000 1000 1000 1000
------CP rdiag _mVar2.MATRIX.DOUBLE _mVar4.MATRIX.DOUBLE
------CP createvar _mVar5 scratch_space//_p4149973_9.1.70.96//_t0/temp4 true binaryblock 1000 1000 1000 1000 -1
------CP createvar _mVar6 scratch_space//_p4149973_9.1.70.96//_t0/temp5 true binaryblock 1000 1 1000 1000 -1
------MR-Job[
----------  jobtype        = GMR
----------  input labels   = [X, _mVar3]
----------  recReader inst =
----------  rand inst      =
----------  mapper inst    = MR tsmm 0.MATRIX.DOUBLE 2.MATRIX.DOUBLE LEFT,
----------                   MR r' 0.MATRIX.DOUBLE 3.MATRIX.DOUBLE,
----------                   MR mapmm 3.MATRIX.DOUBLE 1.MATRIX.DOUBLE 4.MATRIX.DOUBLE RIGHT_PART false
----------  shuffle inst   =
----------  agg inst       = MR ak+ 2.MATRIX.DOUBLE 5.MATRIX.DOUBLE true NONE,
----------                   MR ak+ 4.MATRIX.DOUBLE 6.MATRIX.DOUBLE true NONE
----------  other inst     =
----------  output labels  = [_mVar5, _mVar6]
----------  result indices = ,5,6
----------  num reducers   = 12
----------  replication    = 1 ]
------CP createvar _mVar7 scratch_space//_p4149973_9.1.70.96//_t0/temp6 true binaryblock 1000 1000 1000 1000 -1
------CP + _mVar5.MATRIX.DOUBLE _mVar4.MATRIX.DOUBLE _mVar7.MATRIX.DOUBLE
------CP createvar _mVar8 scratch_space//_p4149973_9.1.70.96//_t0/temp7 true binaryblock 1000 1 1000 1000 -1
------CP solve _mVar7.MATRIX.DOUBLE _mVar6.MATRIX.DOUBLE _mVar8.MATRIX.DOUBLE
------CP write _mVar8.MATRIX.DOUBLE ./mboehm/cost/b.SCALAR.STRING.true textcell.SCALAR.STRING.true

Figure 3: Example Runtime Plan, Scenario XL1 (simplified runtime plan, where we removed rmvar (remove variable) instructions which follow directly after the last usage of related intermediates; instructions show their execution type, operation code, input variables, output variable, and instruction-specific arguments).

There are again several interesting decisions being made here. First, we generated a hybrid runtime plan, where only operations on X are scheduled to MR while all other operations remain in CP. Second, we see important operator selection decisions. For X^T X (HOP 53), we again selected a tsmm MR operator but with final aggregation (ak+, aggregate kahan plus [4]) in order to aggregate partial mapper results. This aggregation instruction is transparently used in the combiner as well. For X^T y (HOP 59), we selected a so-called mapmm (broadcast matrix multiplication), which broadcasts the smaller input through distributed cache. Similar to tsmm, we also have a final aggregation for this operator.
Third, in contrast to scenario XS, we did not apply the (y^T X)^T rewrite and hence also execute the transpose as an MR instruction. The reason for this is that the new transpose of y would exceed the local memory budget and hence spawn an individual MR job with related latency. Fourth, we see that our piggybacking algorithm (which packs MR operations into a minimal number of MR jobs) was able to pack all these operations into a single MR job, which (1) shares the scan of X and (2) prevents the materialization of X^T. Fifth, we decided for a CP partitioning operation of the broadcast y in order to reduce unnecessarily large costs for reading y into every task (w/o partitioning and w/o JVM reuse, we would read 800 MB per task input split of 128 MB). Partitions (of 32 MB) are read on demand but never evicted to prevent repeated partition reads.

Discussion of Further Runtime Plans (Scenarios XL2/XL3/XL4): We now discuss the even larger scenarios XL2-XL4, which all require the optimizer to generate runtime plans that exhibit very different characteristics than XL1. First, in scenario XL2, X has 2,000 columns, which is larger than the configured block size of 1,000. This prevents the optimizer from selecting a map-side tsmm operator because it requires seeing entire rows of the input matrix. We select a cpmm operator [3] instead, which requires two MR jobs. This implies that we have to shuffle X and that we have a smaller degree of parallelism for the matrix multiplication. Piggybacking now also replicates the transpose of X into both jobs in order to prevent materializing the intermediate X^T. Second, for scenario XL3, X and y have 2 · 10^8 rows. This means that y is already 1.6 GB, which is larger than the given map-task memory budget of 1,434 MB, and hence we generate a cpmm instead of the mapmm. Similar to scenario XL2, this leads to three MR jobs. Note that this decision is very sensitive to the cluster configuration (the memory budget of map tasks in this case), and there are many operators that exhibit similar memory or block size constraints. Third, scenario XL4 combines the characteristics of XL2 and XL3, which leads to cpmm operators for both matrix multiplications, but piggybacking again generated just three MR jobs because both aggregations are packed into a shared job.

To summarize, even for a very simple script, we see major plan changes for different data sizes and cluster characteristics. Optimization decisions of several compilation steps affect each other and contribute to the final runtime plan. The bottom line is that only generated runtime plans include all required information to evaluate cost factors like IO, latency, and computation costs. It is important to note that generating runtime plans from HOP DAGs is rather efficient (<0.5 ms for common DAG sizes), which makes the generation and costing of runtime plans feasible.
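The matrix-multiplication operator selection that drives these plan changes can be summarized as a simple decision rule over block size and memory budgets. The sketch below is our own illustration of that rule (the thresholds match the cluster configuration above; the function names are ours, not SystemML's API):

def select_mm_operator(m, n, block_size=1000, map_budget_mb=1434):
    """Illustrative operator choice for t(X) %*% X / t(X) %*% y on MR."""
    vector_mb = m * 8 / 1e6           # dense column vector to broadcast
    if n > block_size:
        # map-side tsmm needs entire rows within a single block
        return "cpmm"                 # shuffle-based, multiple MR jobs
    if vector_mb > map_budget_mb:
        # broadcast input no longer fits into the map-task budget
        return "cpmm"
    return "tsmm/mapmm"               # map-side operators, single GMR job

print(select_mm_operator(10**8, 10**3))        # XL1 -> tsmm/mapmm
print(select_mm_operator(10**8, 2 * 10**3))    # XL2 -> cpmm (block size)
print(select_mm_operator(2 * 10**8, 10**3))    # XL3 -> cpmm (1.6 GB > budget)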
3. COSTING RUNTIME PLANS
In this section, we now discuss how to cost generated runtime plans, which automatically reflects all optimization phases. Given a runtime plan P (with size information), we use a white-box cost model to compute the cost C(P, cc) as the estimated execution time of P given the cluster configuration cc. This time-based model allows us to linearize IO, latency, and computation costs into a single cost measure (see R2). In contrast to related work on MR job tuning, it also gives us an analytical cost model for entire ML programs (see R1 and R4) because it does not rely on profiling runs, and the runtime plan covers the entire control flow as well. Finally, our approach is also aware of available resources (see R3) because the compiler already respects all memory constraints when generating runtime plans, and we explicitly take the degree of parallelism into account.

3.1 Basic Notation
Before we can describe the actual cost estimator skeleton, we need some basic notation. The runtime plan P consists of a hierarchy of program blocks b_i ∈ B and instructions inst_i ∈ I. A matrix X is described by size information of rows m, columns n, and sparsity s. We define s = nnz(X)/(m · n), where nnz denotes the number of non-zero values. This information allows us to compute size estimates of in-memory matrices $\hat{M}(X)$ and serialized matrices $\hat{M}'(X)$ (e.g., on local disk or HDFS). Furthermore, let k_l, k_m, and k_r denote the degree of parallelism of the local control program, available map slots, and available reduce slots, respectively. In case of YARN clusters, we correct k_m and k_r according to the available virtual cores and memory resources of the cluster. Finally, let $\hat{T}(P)$ denote the estimated execution time of runtime plan P, which is eventually used as cost measure with $C(P, cc) = \hat{T}(P)$.

3.2 Cost Estimator Skeleton
The skeleton of our cost estimator recursively scans the runtime plan in execution order and tracks live variables including their sizes and in-memory state. During this single pass over the runtime program, we also compute time estimates per instruction and aggregate these estimates according to the program structure.

Tracking Live Variable States: Tracking sizes and in-memory state of variables is a fundamental precondition for costing individual instructions. We start with an empty symbol table. While costing the runtime plan, we maintain live variable statistics in this table. First, for each createvar (creates a meta data handle for a matrix variable), cpvar (binds a variable to a variable name), rmvar (removes a variable), and data generating instructions like rand or seq, we accordingly modify our live variable statistics (e.g., size information). Second, we also maintain the in-memory state of variables. Persistent read inputs and MR job outputs are known to be on HDFS, while all in-memory instructions change the state of their inputs and output to in-memory. This state maintenance allows us to correctly reflect required IO costs. For example, if a persistent dataset is used by two in-memory instructions, only the first instruction will pay the costs of reading the input. This approach also allows us to reason about hybrid runtime plans of CP/MR instructions, where intermediates are exchanged via HDFS.

Time Estimate Aggregation over Control Flow: Finally, we aggregate time estimates as we recursively iterate over the program structure. Similar to statistics aggregation in the parfor optimizer for task-parallel ML programs [2], we aggregate the time estimate of a program block b over its children c(b) (predicates, included program blocks, instructions) due to their sequential execution with:

\hat{T}(b) = w_b \sum_{\forall c_i \in c(b)} \hat{T}(c_i), \qquad
w_b = \begin{cases}
  \lceil \hat{N}/k \rceil & \text{parfor} \\
  \hat{N}                 & \text{for, while} \\
  1/|c(b)|                & \text{if} \\
  1                       & \text{otherwise}
\end{cases} \tag{1}

For conditional branches, the aggregate is a weighted sum of time estimates for the individual branches.
For loops, we scale the time aggregate by the number of iterations; if the number of iterations is unknown (e.g., for while loops), we use a constant $\hat{N}$ which at least reflects that the body is executed multiple times. Note that we use additional corrections in order to account for overestimated read costs in loops, where only the first iteration reads persistent inputs. Furthermore, we also maintain function call stacks in order to prevent cycles when costing recursive functions.

This cost estimator skeleton allows the costing of arbitrarily complex runtime plans including control flow structures. The actual time estimation problem then boils down to estimating the execution time of a single instruction given the size and in-memory state of its input and output variables.
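The following sketch is our simplified rendering of this skeleton (names and the symbol-table layout are ours; SystemML's actual implementation differs). It illustrates the single recursive pass with live-variable state tracking and the aggregation of Equation (1):

import math

N_HAT = 10  # constant for loops with unknown iteration count (an assumption)

def cost_instruction(inst, symtab):
    """Charge read IO only if an input is not yet in memory, then mark
    inputs/output as in-memory (live-variable state tracking)."""
    io = 0.0
    for v in inst["inputs"]:
        var = symtab.setdefault(v, {"io_time": 0.0, "in_mem": False})
        if not var["in_mem"]:
            io += var["io_time"]          # pay the HDFS read exactly once
            var["in_mem"] = True
    symtab[inst["output"]] = {"io_time": 0.0, "in_mem": True}
    return io + inst["compute_time"]

def cost_block(block, symtab, k=1):
    """Recursively aggregate time estimates according to Eq. (1)."""
    total = sum(cost_block(c, symtab, k) if "children" in c
                else cost_instruction(c, symtab)
                for c in block["children"])
    kind = block.get("type", "generic")
    if kind == "parfor":
        return math.ceil(N_HAT / k) * total
    if kind in ("for", "while"):
        return N_HAT * total
    if kind == "if":
        return total / len(block["children"])   # weighted branches
    return total

# A persistent input pays its read cost only for the first consumer:
symtab = {"X": {"io_time": 0.53, "in_mem": False}}
prog = {"type": "generic", "children": [
    {"inputs": ["X"], "output": "A", "compute_time": 2.5},    # e.g., tsmm
    {"inputs": ["X"], "output": "b", "compute_time": 0.005},  # e.g., ba+*
]}
print(cost_block(prog, symtab))   # 0.53 + 2.5 + 0.005 = 3.035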
3.3 Time Estimates of Instructions
In general, we compute the time estimate of an instruction as the sum of latency, IO, and computation time based on its input and output statistics. Earlier versions of this cost model [2] relied on profiled and trained cost functions for individual instructions. In contrast, we now use a white-box cost model based on IO bandwidth multipliers and operation-specific floating point operations in order to remove the need for cluster-specific profiling runs.

Costing CP Instructions: The time estimate of a CP instruction consists of IO and compute time. We estimate IO time based on variable state, size, format, and default format-specific IO bandwidths. If the state of an input is in-memory, then there is no IO time; otherwise, we compute the IO time via the serialized, format-specific size $\hat{M}'(X)$ of this input. For example, given a 10^4 x 10^3 dense matrix in binary block format, we get $\hat{M}'(X)$ = 80 MB; by weighting this with the single-threaded read bandwidth for binary block (150 MB/s), we get an IO time of 0.53 s. Compute time is estimated as the maximum of main memory IO (computed via main memory bandwidth multipliers) and instruction-specific models of required floating point operations. For example, let us use the tsmm (transpose-self matrix multiplication) instruction for X^T X that we introduced earlier. Its floating point requirements are estimated as follows:

\text{FLOP}(\text{tsmm}_{\text{left}}) = \begin{cases}
  \text{MMD}_{\text{corr}} \cdot m \cdot n^2 \cdot s   & \text{dense} \\
  \text{MMS}_{\text{corr}} \cdot m \cdot n^2 \cdot s^2 & \text{sparse}
\end{cases} \tag{2}

Finally, we convert the required FLOPs into expected execution time assuming 1 FLOP per cycle. For example, for X: 10^4 x 10^3, MMD_corr = 0.5 (operation-specific correction), and a 2 GHz processor, we get $\hat{T}(inst) = 0.5 \cdot 10^{10} / (2 \cdot 10^9) = 2.5\,s$. Note that our cost model consists of dozens of these white-box cost functions for all existing instructions.

Costing MR-Job Instructions: The time estimate of an MR-job instruction is more complex. It consists of job and task latency, write times for in-memory variable export, map task read, compute, and write times, shuffle time, as well as reduce task read, compute, and write times. The individual IO times and computation times are estimated similarly to CP instructions, but weighted with the degree of parallelism of map/reduce tasks. Note that costing needs to take the structure of the MR job into account. For example, consider a map-only job with a single mapmm instruction without final aggregation for Xv. This job will incur job and task latency as well as map read costs for X and v, the matrix-vector computation costs, and finally the map result write costs. The sum of these map-side costs is divided by the effective degree of parallelism, which is computed via a scaled minimum of k_m (available parallelism) and the number of tasks ($\hat{M}'(X)$ divided by the HDFS block size). On YARN clusters, we also take the CP/MR memory resources into account when computing the degree of parallelism.
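A minimal sketch of these two time estimates follows; it is our own illustration, where the bandwidths, correction factor, and worked numbers come from the examples above and everything else is assumed for exposition:

import math

READ_BW_MB_S = 150.0      # single-threaded binary-block read bandwidth
CLOCK_HZ = 2e9            # 2 GHz, assuming 1 FLOP per cycle

def cp_tsmm_time(m, n, s=1.0, in_mem=False, mmd_corr=0.5):
    """IO + compute estimate for a dense CP tsmm (Eq. (2), dense case)."""
    io = 0.0 if in_mem else (m * n * 8 / 1e6) / READ_BW_MB_S
    compute = mmd_corr * m * n**2 * s / CLOCK_HZ
    return io, compute

print(cp_tsmm_time(10**4, 10**3))   # (~0.53 s IO, 2.5 s compute)

def effective_parallelism(serialized_mb, k_m, hdfs_block_mb=128):
    """Effective map parallelism: min of available slots and #tasks."""
    n_tasks = math.ceil(serialized_mb / hdfs_block_mb)
    return min(k_m, n_tasks)

# Scenario XL1: 800 GB of X yields thousands of input splits, but only
# 144 map slots, so the map-side costs are divided by 144.
print(effective_parallelism(800_000, k_m=144))   # 144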
3.4 Examples of Runtime Plan Costing
Putting it all together, we now revisit the example runtime plans from Section 2 and discuss their costing in detail.

Example Plan Costing (Scenario XS): Figure 4 shows a simplified runtime plan for scenario XS (80 MB) with annotated costs.

PROGRAM                                  # total cost C=3.31s
--MAIN PROGRAM                           # C=3.31s
----GENERIC (lines 1-3)                  # C=2.8E-8s
------CP createvar pREADX binaryblock    # C=[0s, 4.7E-9s]
------CP createvar pREADy binaryblock    # C=[0s, 4.7E-9s]
------CP assignvar intercept             # C=[0s, 4.7E-9s]
------CP assignvar lambda                # C=[0s, 4.7E-9s]
------CP cpvar pREADX X                  # C=[0s, 4.7E-9s]
------CP cpvar pREADy y                  # C=[0s, 4.7E-9s]
----GENERIC (lines 8-12)                 # C=3.31s
------CP createvar _mVar2                # C=[0s, 4.7E-9s]
------CP tsmm X _mVar2 LEFT              # C=[0.51s, 2.32s]
------CP createvar _mVar3                # C=[0s, 4.7E-9s]
------CP rand 1000 1 _mVar3              # C=[0s, 3.7E-6s]
------CP createvar _mVar4                # C=[0s, 4.7E-9s]
------CP r' y _mVar4                     # C=[5E-4s, 5E-6s]
------CP createvar _mVar5                # C=[0s, 4.7E-9s]
------CP rdiag _mVar3 _mVar5             # C=[0s, 4.7E-7s]
------CP createvar _mVar6                # C=[0s, 4.7E-9s]
------CP ba+* _mVar4 X _mVar6            # C=[0s, 0.00465s]
------CP createvar _mVar7                # C=[0s, 4.7E-9s]
------CP + _mVar2 _mVar5 _mVar7          # C=[0s, 4.7E-4s]
------CP createvar _mVar8                # C=[0s, 4.7E-9s]
------CP r' _mVar6 _mVar8                # C=[0s, 4.7E-7s]
------CP createvar _mVar9                # C=[0s, 4.7E-9s]
------CP solve _mVar7 _mVar8 _mVar9      # C=[0s, 0.466s]
------CP write _mVar9 textcell           # C=[1E-6s, 2E-4s]

Figure 4: Simplified Plan Scenario XS w/ Costs.

Due to the simple program structure, the total plan execution time of 3.31 s is computed as the plain sum of all instruction costs (which we show as a breakdown of IO and compute time). There are a couple of interesting observations to make. First, the instruction that uses a persistent input first pays the related IO costs (e.g., tsmm and r'), while subsequent operations on the same data (e.g., ba+*) only account for compute time. Second, we see that the computation time for tsmm dominates the total execution time. The following heavy hitters are the initial read of X as well as the computation costs of solve.

Example Plan Costing (Scenario XL1): As stated before, costing plans that include MR-job instructions is more challenging than pure CP runtime plans. Figure 5 shows the simplified runtime plan of scenario XL1 (800 GB) with annotated costs.

PROGRAM                                  # total cost C=606.9s
--MAIN PROGRAM                           # C=606.9s
----GENERIC (lines 1-3)                  # C=2.8E-8s
------CP createvar pREADX binaryblock    # C=[0s, 4.7E-9s]
------CP createvar pREADy binaryblock    # C=[0s, 4.7E-9s]
------CP assignvar intercept             # C=[0s, 4.7E-9s]
------CP assignvar lambda                # C=[0s, 4.7E-9s]
------CP cpvar pREADX X                  # C=[0s, 4.7E-9s]
------CP cpvar pREADy y                  # C=[0s, 4.7E-9s]
----GENERIC (lines 8-12)                 # C=606.9s
------CP createvar _mVar2 binaryblock    # C=[0s, 4.7E-9s]
------CP rand 1000 1 _mVar2              # C=[0s, 3.7E-6s]
------CP createvar _mVar3                # C=[0s, 4.7E-9s]
------CP partition y _mVar3              # C=[10.2s, 6.4s]
------CP createvar _mVar4                # C=[0s, 4.7E-9s]
------CP rdiag _mVar2 _mVar4             # C=[0s, 4.7E-7s]
------CP createvar _mVar5                # C=[0s, 4.7E-9s]
------CP createvar _mVar6                # C=[0s, 4.7E-9s]
------MR-Job[                            # nmap=5967 nred=1, C=[589.8s]
----------jobtype = GMR                  #   latency=[144.5s]
----------inputs  = [X, _mVar3]          #   hdfsread=[70.7s]
----------map     = MR tsmm 0 2,         #   mapexec=[324.7s]
----------          MR r' 0 3,           #   dcread=[12.6s]
----------          MR mapmm 3 1 4       #   shuffle=[19.7s]
----------shuffle =                      #   redexec=[11.1s]
----------agg     = MR ak+ 2 5,          #   hdfswrite=[0.1s]
----------          MR ak+ 4 6
----------outputs = [_mVar5, _mVar6]
----------ret ix  = ,5,6
----------repl    = 1 ]
------CP createvar _mVar7                # C=[0s, 4.7E-9s]
------CP + _mVar5 _mVar4 _mVar7          # C=[0.05s, 5E-4s]
------CP createvar _mVar8                # C=[0s, 4.7E-9s]
------CP solve _mVar7 _mVar6 _mVar8      # C=[5E-5s, 0.466s]
------CP write _mVar8 textcell           # C=[1E-6s, 2E-4s]

Figure 5: Simplified Plan Scenario XL1 w/ Costs.

In comparison to scenario XS, there are many additional cost factors. First, cost estimates of CP instructions automatically adapt to the increased data sizes and additional operators. For example, now the partition instruction pays the 10.2 s costs for the initial read of y. Second, the total execution time of 606.9 s is dominated by the costs of 589.8 s for the generated MR job. Several cost factors contribute to this estimate. The total estimated latency includes 20 s job latency plus 1.5 s task latency for each map/reduce task, normalized by the effective map and reduce degree of parallelism. Furthermore, the HDFS read costs reflect reading all map inputs, again normalized by the effective degree of parallelism. The major cost factor of this compute-intensive job, however, is the map compute time, which is dominated by tsmm. Additional cost factors include the read from distributed cache for the partitioned broadcast in mapmm, shuffle IO time, reduce compute time (for the final aggregations), and the final HDFS write of A and b. Here, the shuffle time captures map write, actual shuffle, and reduce write/read. Third, despite the same remaining instructions as in scenario XS, we see slightly different costs (e.g., for + and solve) because, by tracking the in-memory state of variables, we automatically take hybrid runtime plans with data exchange over HDFS into account as well.

Regarding cost model accuracy, in both examples, the estimated costs were within 2x of the actual execution time. Due to simplifying assumptions and fundamental limitations, this is not given in general. However, this cost model allows for reasonable cost comparisons of complex ML programs without the need for profiling or sample runs.
3.5 Limitations
The presented cost model works very well in practice. However, there are also fundamental limitations.

Unknown Size Information: Despite techniques for propagating size information of dimensions and sparsity [1], there do exist cases where we are not able to determine the sizes of intermediates during initial compilation. In this case, the compiler falls back to conservative but scalable plans in order to ensure plan validity. However, apart from MR job latency, we cannot fully infer IO and computation costs of the affected operators in those cases, which potentially leads to large underestimation. This issue is commonly addressed by making the optimizer, using the cost model, aware of unknowns, which can often even be used for pruning.

Buffer Pool Behavior: Our cost model only partially considers buffer pool evictions, which may contribute to the overall program costs. In order to fully address this, we would need a white-box model of the buffer pool eviction algorithm and extend the tracking of live variables. For the sake of simplicity, we currently view the buffer pool as a black box and only consider its total size. In practice, this is acceptable since buffer pool evictions usually account for a small fraction of the total execution time.

Unknown Conditional Control Flow: Many ML programs contain conditional control flow in terms of loops with an unknown number of iterations, branches, and recursive function calls. Especially for convergence-based ML algorithms, the number of iterations until convergence is generally unknown. Our heuristic of predefined constants can clearly fail there, but it at least reflects that the loop body is executed repeatedly. This already allows for optimization techniques like code motion or caching decisions. There is also existing work on estimating the number of iterations until convergence, which is an interesting direction for future extensions.

4. CONCLUSIONS
To summarize, our simple and robust cost model allows the costing of generated runtime plans for ML programs. This model automatically reflects all optimization decisions of the entire compilation chain. Most importantly, it provides an analytical cost model for alternative plans without the need for profiling or sample runs. It also captures all relevant cost factors, is aware of data and cluster characteristics, and can be used for arbitrarily complex ML programs.

5. REFERENCES
[1] M. Boehm, D. R. Burdick, A. V. Evfimievski, B. Reinwald, F. R. Reiss, P. Sen, S. Tatikonda, and Y. Tian. SystemML's Optimizer: Plan Generation for Large-Scale Machine Learning Programs. IEEE Data Eng. Bull., 37(3), 2014.
[2] M. Boehm, S. Tatikonda, B. Reinwald, P. Sen, Y. Tian, D. Burdick, and S. Vaithyanathan. Hybrid Parallelization Strategies for Large-Scale Machine Learning in SystemML. PVLDB, 7(7), 2014.
[3] A. Ghoting, R. Krishnamurthy, E. P. D. Pednault, B. Reinwald, V. Sindhwani, S. Tatikonda, Y. Tian, and S. Vaithyanathan. SystemML: Declarative Machine Learning on MapReduce. In ICDE, 2011.
[4] Y. Tian, S. Tatikonda, and B. Reinwald. Scalable and Numerically Stable Descriptive Statistics in SystemML. In ICDE, 2012.
ai_researcher
1
Information_Categorization_for_Canopy_Mapping_using_Quality_Control_(QC)_Tool_–_Affinity_Diagram_(KJ_Method).pdf
arXiv:2412.07156v1 [eess.IV] 10 Dec 2024

QCResUNet: Joint Subject-level and Voxel-level Segmentation Quality Prediction
Peijie Qiu [a], Satrajit Chakrabarty [b], Phuc Nguyen [c], Soumyendu Sekhar Ghosh [b], Aristeidis Sotiras [a,d,*]
email: [email protected]
*Corresponding author
[a] Mallinckrodt Institute of Radiology, Washington University School of Medicine, St. Louis, MO, USA
[b] Department of Electrical and Systems Engineering, Washington University in St. Louis, St. Louis, MO, USA
[c] Department of Biomedical Engineering, University of Cincinnati, Cincinnati, OH, USA
[d] Institute for Informatics, Data Science & Biostatistics, Washington University School of Medicine, St. Louis, MO, USA

Abstract
Deep learning has made significant strides in automated brain tumor segmentation from magnetic resonance imaging (MRI) scans in recent years. However, the reliability of these tools is hampered by the presence of poor-quality segmentation outliers, particularly in out-of-distribution samples, making their implementation in clinical practice difficult. Therefore, there is a need for quality control (QC) to screen the quality of the segmentation results. Although numerous automatic QC methods have been developed for segmentation quality screening, most were designed for cardiac MRI segmentation, which involves a single modality and a single tissue type. Furthermore, most prior works only provided subject-level predictions of segmentation quality and did not identify erroneous parts of a segmentation that may require refinement. To address these limitations, we proposed a novel multi-task deep learning architecture, termed QCResUNet, which produces subject-level segmentation-quality measures as well as voxel-level segmentation error maps for each available tissue class. To validate the effectiveness of the proposed method, we conducted experiments assessing its performance on evaluating the quality of two distinct segmentation tasks. First, we aimed to assess the quality of brain tumor segmentation results. For this task, we performed experiments on one internal (Brain Tumor Segmentation (BraTS) Challenge 2021, n = 1,251) and two external datasets (BraTS Challenge 2023 in Sub-Saharan Africa Patient Population (BraTS-SSA), n = 40; Washington University School of Medicine (WUSM), n = 175). Specifically, we first performed a three-fold cross-validation on the internal dataset using segmentations generated by different methods at various quality levels, followed by an evaluation on the external datasets. Second, we aimed to evaluate the segmentation quality of cardiac Magnetic Resonance Imaging (MRI) data from the Automated Cardiac Diagnosis Challenge (ACDC, n = 100). The proposed method achieved high performance in predicting subject-level segmentation-quality metrics and accurately identifying segmentation errors on a voxel basis. This has the potential to be used to guide human-in-the-loop feedback to improve segmentations in clinical settings.

1 Introduction
Medical image segmentation plays an indispensable role for accurate diagnosis, monitoring, treatment planning, and population studies of diseases in modern medicine by enabling precise delineation of anatomical structures and pathological regions (Garcia-Garcia et al., 2017; Litjens et al., 2017). In particular, precise segmentation of healthy and abnormal anatomy into multiple classes using magnetic resonance imaging (MRI) is crucial in this process.
Recently, deep learning-based methods have achieved state-of-the-art performance in automated segmentation tasks including brain tumor (Ronneberger et al., 2015; Kamnitsas et al., 2017; Isensee et al., 2018a,b; Baid et al., 2021) and cardiac (Tran, 2016; Khened et al., 2019; Zhou et al., 2021) MRI segmentation. However, deep neural networks are sensitive to the data distribution. This makes them prone to reduced performance when applied to out-of-distribution MRI scans due to variations in acquisition protocols, contrast, image quality, etc. Therefore, quality control (QC) is necessary to thoroughly assess segmentations before they are used for clinical purposes or large-scale research studies. QC tools are required to detect severe segmentation failures on a per-case basis, pinpoint areas needing segmentation refinement at the voxel level, and provide a quality measure for downstream analyses. Previously proposed QC methods fall into four main categories: uncertainty estimation-based, generative model-based, reverse classification accuracy (RCA)-based, and regression-based.

The first category of automated segmentation QC methods operates on the premise that high uncertainty is reflective of poor-quality segmentations (Ng et al., 2018; Albà et al., 2018; Sander et al., 2020; Ng et al., 2020; Bai et al., 2018; Roy et al., 2018; Jungo et al., 2020; Mehta et al., 2022). Accordingly, most studies focused on developing uncertainty measures to aggregate voxel-wise uncertainty as a proxy for segmentation-quality measures, such as the Dice Similarity Coefficient (DSC). However, most developed proxy measures (Ng et al., 2018; Roy et al., 2018; Jungo et al., 2020) have not demonstrated a strong correlation with DSC. Moreover, non-negligible errors in uncertainty estimation were observed at the voxel level, resulting in unreliable subject-level uncertainty aggregation (Jungo et al., 2020). Additionally, these methods can only be applied to deep learning-based models, and thus cannot be used to assess the quality of segmentations obtained by other methods.

The second category of automated QC methods is based on the assumption that there is a relationship between image intensities and tissue labels. Along that direction, Grady et al. (2012) proposed a Gaussian mixture model with hand-crafted features (i.e., geometric features, intensity features, gradient features, and ratio features) to characterize the segmentation quality and detect segmentation failures. Wang et al. (2020b) proposed a variational autoencoder (VAE) to learn the latent representation of pairs of images and their ground-truth segmentations. During inference, the encoder is frozen while the decoder is refined for a given image-segmentation pair to produce a surrogate segmentation. The subject-level DSC is then computed between the query segmentation and the surrogate segmentation. The tuning of the decoder is required for each query image, which may be computationally expensive and time-consuming. Leveraging advances in the image-to-image translation task, Li et al. (2022) proposed a generative adversarial network (GAN) to generate an informative reference image conditioned on the query segmentation mask that needs to be assessed. An auxiliary network (i.e., a difference investigator) is then trained to predict image-level and pixel-level quality by taking the raw input image and the generated reference image as inputs.
These approaches have only been validated on cardiac MRI segmentation QC, which involves only a single modality and segmentations with regular shapes. Translating them to the more complex brain tumor segmentation QC scenario is difficult due to the presence of multiple modalities and intratumoral tissue heterogeneities, which are challenging to model.

The third category of automated segmentation QC methods is built upon the reverse classification accuracy (RCA) framework (Valindria et al., 2017). This was initially designed for whole-body multi-organ MRI segmentation QC and was later applied to cardiac MRI datasets (Robinson et al., 2019). The RCA framework involves: (i) choosing a reference dataset with known ground-truth segmentations, (ii) training a segmenter using a query image-segmentation pair, (iii) using the trained segmenter to segment the images in the reference dataset, and (iv) estimating DSC as the maximum DSC achieved by the trained segmenter on the reference dataset. Although effective for whole-body multi-organ MRI segmentation and cardiac MRI segmentation QC, RCA's reliance on a representative reference dataset and image registration poses challenges for brain tumors. The large variability in brain tumors makes it hard for a reference dataset to be representative. Different from whole-body multi-organ and cardiac segmentation, which have consistent shapes and appearances, the heterogeneous appearance and phenotypes of brain tumors complicate the establishment of correspondences for the tumorous areas between different subjects. Lastly, despite predicting subject-level DSC, the RCA framework lacks segmentation error localization at the voxel level.

The fourth category of segmentation QC methods is regression-based methods that directly predict the subject-level DSC. Early attempts employed Support Vector Machine regression in combination with hand-crafted features to detect cardiac MRI segmentation failures (Kohlberger et al., 2012; Albà et al., 2018). Instead of using hand-crafted features, Robinson et al. (2018) proposed a convolutional neural network (CNN) regressor to automatically extract features from a large cardiac MRI segmentation dataset to predict DSC. Besides predicting DSC, Kofler et al. (2022) proposed a holistic rating to approximate how expert neuroradiologists classify high-quality and poor-quality segmentations. However, manually deriving holistic ratings by clinical experts is laborious and prone to inter-rater variability, hindering its applicability to large-scale datasets. Despite the fact that satisfactory performance was achieved in predicting subject-level segmentation metrics by these regression-based methods, voxel-level localization of segmentation errors is unavailable.

Although numerous efforts have been devoted to automated segmentation QC, the majority of the studies have focused on whole-body multi-organ MRI segmentation and cardiac MRI segmentation QC. Additionally, most of these approaches have not assessed out-of-distribution generalization. Importantly, there has been limited research on segmentation QC specifically for brain tumor MRI segmentation. The heterogeneous and complex appearances of brain tumors, which vary in location, size, and shape, contribute to the increased challenge of QC in brain tumor MRI segmentation.
In addition, most prior QC approaches focused on predicting subject-level DSC (Jungo et al., 2020; Robinson et al., 2018, 2019; Valindria et al., 2017; Li et al., 2022) and do not consider the quality of the segmentation contour. However, DSC and contour measurements (e.g., normalized surface dice (NSD)) are equally important to comprehensively assess segmentation quality (Maier-Hein et al., 2024). Furthermore, it is crucial to address reliable voxel-level, tissue-specific segmentation error localization. This localization is vital not only for auditing purposes but also for radiologists to prioritize cases that require manual refinement. By incorporating this component into quality control measures, it can enhance the accuracy and effectiveness of the overall segmentation process. However, this aspect has been largely overlooked in previous studies. One exception is the work proposed by Li et al. (2022). However, this approach was only validated on cardiac segmentation QC with limited datasets. In addition, this method can only provide a binary segmentation error mask and is unable to identify voxel-level segmentation errors for different tissue classes.

To address these limitations, we proposed a novel deep learning model, termed QCResUNet, to jointly predict segmentation-quality metrics at the subject level and localize segmentation errors across different tissue classes at the voxel level. This work extends our previous preliminary work (Qiu et al., 2023) in several ways. First, we extended the previous work to predict subject-level DSC and NSD as well as a collection of binary segmentation error maps, each corresponding to a different tissue class, to provide a more comprehensive evaluation of segmentation quality. To achieve this goal, we further proposed an attention-based segmentation error map aggregation mechanism to better delineate segmentation errors for different tissue classes. Second, we conducted rigorous evaluations to validate the generalizability of the proposed QCResUNet by performing a three-fold cross-validation on the internal dataset and evaluating the proposed method on out-of-distribution datasets with MRI scans of varying image quality and segmentations produced by different methods. Third, we further examined the generalizability of the proposed QCResUNet by evaluating its ability to assess the segmentation quality of cardiac MRI. Fourth, we extensively evaluated the performance of the proposed method in comparison with several state-of-the-art segmentation QC methods, including RCA-based (Valindria et al., 2017) and uncertainty-based methods (Jungo et al., 2020). Fifth, we performed an in-depth explainability analysis to elucidate the reasons behind the proposed method's superior performance compared to other competing approaches.

The main contributions of this work are fourfold:
1. We proposed a multi-task learning framework named QCResUNet to simultaneously predict the DSC and NSD at the subject level and localize segmentation errors at the voxel level.
2. We proposed an attention-based mechanism to aggregate the segmentation error map that better handles voxel-level segmentation error prediction for different tissue classes.
3. We extensively evaluated the performance of the proposed model using both internal and external testing for the brain tumor segmentation QC task. Internal training, validation, and testing were performed through a three-fold cross-validation on 1,251 cases from the Brain Tumor Segmentation challenge (BraTS 2021).
External testing was performed on independent datasets from the Washington University School of Medicine (WUSM) and the BraTS Challenge 2023 in Sub-Saharan Africa Patient Population (BraTS-SSA). Our results demonstrated that the proposed model can generalize well on out-of-distribution cases from different brain tumor datasets that have been segmented by various methods.
4. The proposed model was also evaluated on the cardiac MRI segmentation QC task, demonstrating its potential for application in a broader range of QC tasks beyond brain tumor segmentation QC.

2 Materials and methods

2.1 Dataset
In this study, we used three datasets for evaluation of the QC performance on brain tumor segmentation. First, we used pre-operative multimodal MRI scans with gliomas of all grades (WHO Central Nervous System grades 2-4) from the BraTS 2021 challenge training dataset (n = 1,251). The BraTS dataset is a heterogeneous dataset consisting of cases from 23 different sites with various levels of quality and protocols. The BraTS dataset was used for training, validation, and internal testing. Additionally, we used two datasets that are not included in the BraTS 2021 dataset (i.e., the BraTS-SSA dataset and the WUSM dataset) for external testing, allowing for an unbiased assessment of the generalizability of the proposed method. The BraTS-SSA dataset (n = 40) (Adewole et al., 2023) is an extension of the original BraTS 2021 dataset with patients from Sub-Saharan Africa, which includes lower-quality MRI scans (e.g., poor image contrast and resolution) as well as unique characteristics of gliomas (i.e., suspected higher rates of gliomatosis cerebri). The WUSM dataset (n = 175) was obtained from the retrospective health records of the Washington University School of Medicine (WUSM), with a waiver of consent, in accordance with the Health Insurance Portability and Accountability Act, as approved by the Institutional Review Board (IRB) of WUSM (IRB no. PA18-1113).

Figure 1. Data generation for the brain tumor segmentation QC task: (a) The histogram of the DSC distribution in the generated BraTS training dataset before and after applying the resampling strategy. (b) Visual examples of the generated segmentation dataset ranging from low-quality to high-quality.

Each subject in all datasets comprised four modalities, viz. pre-contrast T1-weighted (T1), T2-weighted (T2), post-contrast T1-weighted (T1c), and Fluid-attenuated inversion recovery (FLAIR). In addition, multi-class tumor segmentation masks annotated by experts were also available. Segmentation masks delineated the enhancing tumor (ET), non-enhancing tumor core (NCR), and edema (ED) classes. Following standard BraTS procedures, we combined the binary ET, NCR, and ED segmentation masks to delineate the whole tumor (WT), tumor core (TC), and enhancing tumor. The WT mask consists of all tumor tissue classes (i.e., ET, NCR, and ED), while the TC mask comprises the ET and NCR tissue classes.

Scans from the BraTS training and BraTS-SSA datasets were already registered to the SRI24 anatomical atlas (Rohlfing et al., 2010), resampled to 1-mm^3 isotropic resolution, and skull-stripped. For consistency, raw MRI scans from WUSM were pre-processed following the same protocol using the Integrative Imaging Informatics for Cancer Research: Workflow Automation for Neuro-oncology (Chakrabarty et al., 2022) framework. Subsequently, we z-scored all the skull-stripped scans in the BraTS datasets and the WUSM dataset on a per-scan basis.
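For clarity, the tissue-class composition and per-scan normalization described above can be expressed in a few lines. This is our own sketch, not code released with the paper; the function names are ours, and the 1/2/4 label convention is the standard BraTS encoding assumed here:

import numpy as np

def compose_tumor_masks(label_map: np.ndarray):
    """Derive WT/TC/ET binary masks from a multi-class BraTS label map
    (assumed convention: 1 = NCR, 2 = ED, 4 = ET)."""
    et = (label_map == 4)
    tc = et | (label_map == 1)      # TC = ET + NCR
    wt = tc | (label_map == 2)      # WT = ET + NCR + ED
    return wt, tc, et

def zscore_per_scan(scan: np.ndarray, brain_mask: np.ndarray):
    """Per-scan z-scoring restricted to the skull-stripped brain region."""
    vals = scan[brain_mask > 0]
    return (scan - vals.mean()) / (vals.std() + 1e-8)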
Finally, scans from the entire dataset were cropped to exclude background regions and then zero-padded to a common dimension of 160 x 192 x 160 using the nnUNet preprocessing pipeline (Isensee et al., 2018b).

In evaluating the QC performance on the cardiac segmentation task, we used the Automated Cardiac Diagnosis Challenge (ACDC) dataset (Bernard et al., 2018), which consists of 100 subjects (200 volumes). Each volume in the ACDC dataset is associated with a multi-class segmentation mask delineating the left ventricle (LV), myocardium (Myo), and right ventricle (RV). Lastly, each volume was cropped and zero-padded to a common dimension of 16 x 16 x 160 using the nnUNet pipeline (Isensee et al., 2018b).

2.2 Segmentation Dataset Generation
The majority of previous regression-based methods were trained and validated on datasets limited in both sample size and heterogeneity. To address this limitation, we constructed an extensive dataset consisting of segmentation results of diverse quality levels. This enabled us to capture the variability of segmentations
The erroneous segmentations along with the ground truth segmentations were subsequently used to esti- mate the DSC, NSD, and the ground truth segmen- tation error map (SEM) that were used to train the proposed network. The SEM was computed as the dif- ference between a query segmentation (i.e., one of the erroneous segmentations we produced) and the corre- sponding ground truth. Examples of the generated seg- mentations can be found in Fig. 1(b). For the cardiac segmentation task, we employed a Table 1. The parameters of image transforma- tions used in SegGen. Transformation Rotation Scaling Translation Deformation Parameters [−15◦, 15◦] scales=[0.85, 1.25] moves=[−20, 20] displacements=[-20, 20] Probability 0.5 0.5 0.5 0.5 procedure similar to the one used in generating seg- mentation for the brain tumor task. However, unlike brain tumor MRI, cardiac MRI involves only a sin- gle modality. Consequently, we trained one nnUNet model and one nnFormer model, sampling the segmen- tation along the training routines at various iterations. We also generated segmentation results using SegGen during model training. However, due to the technical difficulty of training DeepMedic on the ACDC dataset (i.e., one axis in the ACDC dataset has a significantly smaller input size), we could not include any segmen- tation produced by DeepMedic for the cardiac segmen- tation QC task. Despite our efforts to sample across different con- vergence stages by adopting a small learning rate, we observed that the deep learning models could success- fully segment the majority of the cases. As a conse- quence, the resulting dataset was distributed unevenly across the different levels of quality, comprising mostly of higher quality segmentation results (see Fig. 1(a)). Following the resampling strategy in Robinson et al. (2018), we randomly selected a subset of samples from each bin of the DSC histogram. This ensured that the number of segmentations from each bin was equal to the lowest count-per-bin value (ns) across the dis- tribution for all the segmentation datasets (refer to Fig. 1(a)). For this purpose, we divided DSC val- ues into 10 evenly spanned bins ranging from 0 to 1. We sampled cases from these bins such that we en- sured that all bins contained an equal number of cases. For the training set, we did not manually select the DSC samples. Instead, at each training iteration, we randomly sampled segmentations that were evenly dis- tributed across 10 bins to avoid imbalanced DSC values in each training batch. We kept this process stochastic to let the model see as many samples as possible with- out skewing its exposure to either low DSC or high DSC samples. This means we randomly sampled ns segmen- tations for bins that contained more than ns samples at each training iteration. Conversely, during the eval- uation phase, we opted for a deterministic resampling 5 Figure 2. (a) The proposed QCResUNet model is a U-shaped neural network that takes as input m imaging modalities ({M1, M2, · · · , Mm}) and a multi-class segmentation mask to be evaluated (Squery). It generates three outputs: the subject-level segmentation-quality metrics DSC (DSCpred) and NSD (NSDpred) as well as a collection of C binary voxel-level segmentation error maps {SEMtissuec}C c=1, each for each tissue class. In the case of brain tumor segmentation task, which is demonstrated as an example in this figure, QCResUNet takes as input four imaging modalities ({M1, M2, M3, M4}) and a query segmentation Squery that delineates WT, TC, and ET. 
Figure 2. (a) The proposed QCResUNet model is a U-shaped neural network that takes as input m imaging modalities ({M1, M2, ..., Mm}) and a multi-class segmentation mask to be evaluated (Squery). It generates three outputs: the subject-level segmentation-quality metrics DSC (DSCpred) and NSD (NSDpred), as well as a collection of C binary voxel-level segmentation error maps {SEM_tissue_c}_{c=1}^{C}, one for each tissue class. In the case of the brain tumor segmentation task, which is shown as an example in this figure, QCResUNet takes as input four imaging modalities ({M1, M2, M3, M4}) and a query segmentation Squery that delineates WT, TC, and ET. It produces subject-level DSC and NSD, along with three binary SEMs corresponding to the segmentation error masks for the WT, TC, and ET tissue classes. Note that the depicted image sizes as well as the number and size of convolution filters are specific to the brain tumor segmentation task. Subpanels (b), (c), and (d) depict the Residual Block employed in the encoder of QCResUNet, the Convolutional Block used in its decoder, and the Efficient Channel Attention (ECA) used for multi-class SEM aggregation, respectively. Abbreviations: Conv3D = 3D convolutional layer; GAP = global average pooling; FC = fully connected layer; LeakyReLU = leaky rectified linear unit; One-hot Encoding = one-hot encoding of the multi-class Squery into a collection of binary masks.

2.3 QCResUNet

The proposed 3D U-shaped QCResUNet (Fig. 2(a)) aims to automatically evaluate the quality of a query multi-class segmentation mask by predicting subject-level segmentation quality measures as well as identifying voxel-level segmentation error maps for each tissue class (SEMs). Without loss of generality, QCResUNet takes as input m imaging modalities, represented by {M1, M2, ..., Mm} (e.g., m = 4 in the brain tumor segmentation QC case), and the query multi-class segmentation mask (Squery). The outputs of QCResUNet include two complementary subject-level segmentation quality measures (DSC and NSD) and a collection of binary voxel-level SEMs, each delineating the segmentation error corresponding to a specific tissue class in Squery. We would like to point out that we followed the standard definition of DSC for calculating the subject-level DSC for multi-class segmentation masks (see Appendix A).

2.3.1 Network Design

The U-shaped QCResUNet is designed to perform both regression and segmentation tasks. Therefore, the proposed QCResUNet consists of three parts that are trained end-to-end: (i) a ResNet-34 encoder for DSC and NSD prediction; (ii) a decoder architecture for predicting the multi-class SEM; and (iii) an attention-based SEM aggregation module.

Encoder: For the purpose of predicting the subject-level DSC, we adopted a ResNet-34 (He et al., 2016) architecture as part of the encoding path of our network, which can capture semantically rich features that are important for accurately characterizing segmentation quality. While retaining the main structure of the 2D ResNet-34 in He et al. (2016), we made the following modifications to account for the 3D nature of the input data. First, all 2D convolutional and pooling layers were replaced by 3D counterparts (see Fig. 2(b)). Second, we replaced all batch normalization blocks (Ioffe and Szegedy, 2015) with instance normalization blocks (Ulyanov et al., 2016) to cater to the small batch size during 3D model training. Third, to prevent overfitting, spatial dropout (Tompson et al., 2015) was added to each residual block with a probability of 0.3 to randomly zero out channels in the feature map (see Fig. 2(b)).
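A minimal PyTorch sketch of the resulting 3D residual block (Fig. 2(b)) is given below; the 1 × 1 × 1 projection on the skip path when the spatial size or channel count changes is a standard ResNet detail that we assume here.

```python
# Sketch of the modified 3D residual block: 3D convolutions, instance
# normalization, and spatial dropout (p = 0.3), following Fig. 2(b).
import torch
import torch.nn as nn

class ResidualBlock3D(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, stride: int = 1):
        super().__init__()
        self.conv1 = nn.Conv3d(in_ch, out_ch, 3, stride=stride, padding=1, bias=False)
        self.norm1 = nn.InstanceNorm3d(out_ch)
        self.conv2 = nn.Conv3d(out_ch, out_ch, 3, stride=1, padding=1, bias=False)
        self.norm2 = nn.InstanceNorm3d(out_ch)
        self.drop = nn.Dropout3d(p=0.3)   # spatial dropout: zeroes whole channels
        self.relu = nn.ReLU(inplace=True)
        self.proj = None                  # assumed projection on the skip path
        if stride != 1 or in_ch != out_ch:
            self.proj = nn.Sequential(
                nn.Conv3d(in_ch, out_ch, 1, stride=stride, bias=False),
                nn.InstanceNorm3d(out_ch),
            )

    def forward(self, x):
        identity = x if self.proj is None else self.proj(x)
        out = self.relu(self.norm1(self.conv1(x)))
        out = self.drop(self.norm2(self.conv2(out)))
        return self.relu(out + identity)
```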
Decoder: To estimate a segmentation error map with high spatial resolution and accurate localization information, we designed a decoder architecture. The decoder takes the low-resolution contextual features of segmentation quality that were extracted by the encoder and transfers them to a high-resolution multi-class SEM. This was accomplished by first upsampling the input feature map by a factor of two using nearest-neighbor interpolation, followed by a 1 × 1 × 1 convolutional layer. Next, we concatenated the upsampled feature maps with the corresponding encoder level's features using skip connections to facilitate information flow from the encoder to the decoder. This was followed by two convolutional blocks that reduced the number of feature maps back to the original value before concatenation. Each convolutional block comprised a 3 × 3 × 3 convolutional layer, followed by an instance normalization layer and a Leaky Rectified Linear Unit (LeakyReLU) activation function (Maas et al., 2013) (see Fig. 2(c)).

Importantly, we used the middle-level semantics in the intermediate block (Block 3 in Fig. 2(a)) of the ResNet encoder as the input to the decoder. The rationale behind this was that the last block (Block 4 in Fig. 2(a)) of the ResNet encoder contains features that are specific to the DSC prediction task. In contrast, the middle-level features are likely to constitute a more universal semantic representation that characterizes the segmentation error (Ahn et al., 2019).

Attention-based SEM aggregation: To better predict the segmentation error maps corresponding to different tissue classes (e.g., [SEM_WT, SEM_TC, SEM_ET] in the brain tumor segmentation QC task), we propose an attention-based SEM aggregation mechanism. This is because each tissue class in the query segmentation mask (Squery) contributes differently to the predicted SEM. Specifically, we leveraged efficient channel attention (ECA) (Wang et al., 2020a) to model the correlation between the features in the last layer of the decoder and the one-hot encoded query segmentation mask. A 1 × 1 × 1 convolutional layer was then applied to the output of the ECA module to predict tissue-level SEMs (Fig. 2(a)). Specifically, the channel attention was computed by feeding the pooled input feature (after the global average pooling layer) to a 1D convolutional layer followed by a sigmoid activation (Fig. 2(d)); a code sketch of this module is given after Section 2.3.2.

2.3.2 Network configuration

The overall model configuration used for both the brain tumor and cardiac segmentation QC tasks was identical to the one outlined in Fig. 2. The only exception is that we used different downsampling/upsampling rates for the brain tumor and cardiac cases to accommodate differences in their respective input sizes. Specifically, the downsampling/upsampling rates used for the brain tumor case were [[2, 2, 2], [2, 2, 2], [2, 2, 2], [2, 2, 2]], where each [2, 2, 2] represents the downsampling/upsampling rates for the axial, sagittal, and coronal axes at each stage. In contrast, the downsampling/upsampling rates used for the cardiac case were [[1, 2, 2], [1, 2, 2], [1, 2, 2], [2, 2, 2]]. This is because the first axis in the ACDC dataset is ten times smaller than the other two axes.
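Below is a minimal PyTorch sketch of the ECA mechanism described in Section 2.3.1, adapted to 3D feature maps. The 1D-convolution kernel size (here 3) is an assumption; in QCResUNet the module would be applied to the concatenation of the decoder's last-layer features and the one-hot encoded Squery, followed by the final 1 × 1 × 1 convolution that outputs the C tissue-level SEMs.

```python
# Sketch of efficient channel attention (ECA) for 3D inputs (Fig. 2(d)):
# global average pooling -> 1D convolution across channels -> sigmoid gating.
import torch
import torch.nn as nn

class ECA3D(nn.Module):
    def __init__(self, k_size: int = 3):
        super().__init__()
        self.conv = nn.Conv1d(1, 1, kernel_size=k_size, padding=k_size // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):                          # x: (B, C, D, H, W)
        y = x.mean(dim=(2, 3, 4))                  # global average pooling -> (B, C)
        y = self.conv(y.unsqueeze(1)).squeeze(1)   # 1D conv across the channel axis
        w = self.sigmoid(y)[:, :, None, None, None]
        return x * w                               # channel-wise re-weighting
```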
2.3.3 Multi-task learning objective

The proposed QCResUNet was jointly trained for predicting DSC, NSD, and SEM by optimizing the combination of a mean absolute error (MAE) loss, a DSC loss, and a cross-entropy (CE) loss. The DSC and NSD prediction task was trained by minimizing the MAE loss (L_MAE):

  L_MAE = \frac{1}{N} \sum_{n=1}^{N} \left\{ \left| DSC_{gt}^{(n)} - DSC_{pred}^{(n)} \right| + \left| NSD_{gt}^{(n)} - NSD_{pred}^{(n)} \right| \right\},

where N is the total number of samples in the training dataset and n indexes each sample. The MAE loss (L_MAE) quantifies the dissimilarity between the ground-truth DSC/NSD (DSC_gt/NSD_gt) and the predicted DSC/NSD (DSC_pred/NSD_pred). Compared to the mean squared error loss commonly used for regression tasks, the MAE loss has been shown to be less sensitive to outliers (Qi et al., 2020). This enhances the robustness of QCResUNet to outliers.

The voxel-level segmentation error prediction task was trained by optimizing both the DSC and CE losses. Though there are many variants of the Dice loss, we opted for the one proposed in (Drozdzal et al., 2016) due to its wide success in a variety of medical image segmentation tasks:

  L_DSC = -\frac{1}{V} \sum_{c=1}^{C} \frac{2 \cdot \sum_{v=1}^{V} SEM_{gt}^{(c,v)} \cdot SEM_{pred}^{(c,v)}}{\sum_{v=1}^{V} SEM_{gt}^{(c,v)} + \sum_{v=1}^{V} SEM_{pred}^{(c,v)}},

where V is the total number of voxels in a batch and v indexes each voxel; C is the total number of tissue classes and c indexes each class. Here, SEM_gt^{(c)} and SEM_pred^{(c)} refer to the ground truth of the SEM for tissue class c and its probabilistic prediction generated from the decoder's sigmoid output, respectively. The CE loss is defined as

  L_CE = -\frac{1}{V} \sum_{v=1}^{V} \frac{1}{C} \sum_{c=1}^{C} SEM_{gt}^{(c,v)} \log SEM_{pred}^{(c,v)}.

The average of the Dice loss and the CE loss is taken over the total number of voxels (V) present in a batch. To balance the loss for the subject-level DSC and NSD prediction task and the voxel-level SEM prediction task, we combined the three losses in a weighted fashion. The final objective function is given as

  L_final = L_MAE + \lambda \, (L_DSC + L_CE),

where \lambda is the loss balance parameter.
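For illustration, a PyTorch sketch of this objective is given below. We write the CE term as binary cross-entropy over the per-class sigmoid outputs, which is one natural reading of the definition above, and the normalization constants are simplified relative to the stated formulas; tensor shapes are assumptions.

```python
# Sketch of the multi-task loss. Assumed shapes: dsc_pred/nsd_pred (N,),
# sem_pred per-class sigmoid probabilities (N, C, D, H, W), sem_gt binary
# ground-truth error maps of the same shape.
import torch

def qc_loss(dsc_pred, dsc_gt, nsd_pred, nsd_gt, sem_pred, sem_gt, lam=1.0, eps=1e-6):
    # subject-level MAE term for DSC and NSD
    l_mae = (dsc_pred - dsc_gt).abs().mean() + (nsd_pred - nsd_gt).abs().mean()
    # soft-Dice term, computed per class and averaged
    inter = (sem_pred * sem_gt).sum(dim=(0, 2, 3, 4))
    denom = sem_pred.sum(dim=(0, 2, 3, 4)) + sem_gt.sum(dim=(0, 2, 3, 4))
    l_dsc = -(2.0 * inter / (denom + eps)).mean()
    # binary cross-entropy over the per-class sigmoid outputs
    l_ce = -(sem_gt * torch.log(sem_pred.clamp_min(eps))
             + (1 - sem_gt) * torch.log((1 - sem_pred).clamp_min(eps))).mean()
    return l_mae + lam * (l_dsc + l_ce)
```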
3 Experimental Validation

We evaluated the proposed method on both brain tumor and cardiac MRI segmentation QC tasks to demonstrate its effectiveness and generalizability across different datasets and tasks.

3.1 Brain tumor MRI segmentation QC task

We performed three-fold cross-validation to validate the effectiveness of the proposed method for the brain tumor segmentation QC task. For this purpose, we held out an internal testing set with 251 subjects. In each iteration of the three-fold cross-validation, the remaining BraTS dataset was randomly partitioned into training and validation subsets with 667 and 333 subjects, respectively. The application of the previously described four-step approach and resampling (see Section 2.2) resulted in 98,048 training segmentation samples, 28,800 validation segmentation samples, as well as 39,063 internal testing segmentations (36,144 from nnUNet and nnFormer, 2,919 from DeepMedic). Segmentations generated by DeepMedic were used only in the testing set, because DeepMedic outputs were not seen during training and validation. The BraTS-SSA and WUSM datasets were used as external testing datasets to validate the generalizability of the proposed method. By employing the previously described four-step approach (see Section 2.2), we extended the original BraTS-SSA and WUSM datasets to include 6,040 and 26,425 segmentations, respectively. After applying the resampling in Section 2.2, the internal BraTS testing, BraTS-SSA, and WUSM datasets were resampled to include 4,753, 1,204, and 2,693 segmentations, respectively.

We performed hyperparameter tuning to determine two critical hyperparameters of the proposed method: the initial learning rate and the loss balance parameter λ. Hyperparameter tuning was carried out by performing a Bandit-based Bayesian optimization on the training and validation datasets, implemented using the Raytune framework (Liaw et al., 2018). Specifically, we sampled the learning rate from the range [1 × 10−5, 2 × 10−4] following a log-uniform distribution. Additionally, the loss balance parameter λ was chosen from the set {0.1, 1, 2}. The optimal balance parameter λ for QCResUNet was determined to be 1. However, we did not observe a significant difference when varying λ from 0.1 to 1 and 2 (see Fig. S1). Although the optimal learning rate varied from model to model (refer to Table 2), it generally fell within the scale of 1 × 10−4, as we used a relatively small batch size of 4. We direct the readers to Appendix C for the detailed learning curves of the hyperparameter tuning using Raytune. A sketch of the search setup follows.
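```python
# Sketch of the search space and scheduler, assuming the classic Ray Tune API
# (Liaw et al., 2018); the training function and sample count are placeholders.
from ray import tune
from ray.tune.schedulers import ASHAScheduler

def train_qcresunet(config):
    # placeholder: train with config["lr"] and config["lam"], then report the
    # validation Pearson r back to the scheduler
    tune.report(pearson_r=0.0)

search_space = {
    "lr": tune.loguniform(1e-5, 2e-4),   # log-uniform learning-rate range
    "lam": tune.choice([0.1, 1, 2]),     # loss balance parameter
}

analysis = tune.run(
    train_qcresunet,
    config=search_space,
    num_samples=20,                      # illustrative number of trials
    scheduler=ASHAScheduler(metric="pearson_r", mode="max"),
)
```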
Results reported hereafter were obtained using the above optimal hyperparameters. For both the internal (BraTS) and external (BraTS-SSA and WUSM) testing sets, we reported the results after performing a model ensemble technique for the proposed method and all baseline methods. This involved averaging the results from the three models for DSC and NSD prediction and performing majority voting for the SEM prediction. The mean (± standard deviation) of the MAEs and DSC_SEM were calculated over all subjects.

Table 2. The optimal hyperparameter settings for the different models (i.e., UNet, ResNet-34, ResNet-50, QCResUNet) obtained from performing a random search using Raytune. Please refer to Appendix C for the details of the Raytune hyperparameter tuning.

  Method                              Learning rate   Balance parameter (λ)
  UNet                                0.8 × 10−4      -
  ResNet-34                           1.0 × 10−4      -
  ResNet-50 (Robinson et al., 2018)   0.9 × 10−4      -
  QCResUNet                           2.1 × 10−4      1.0

3.2 Cardiac MRI segmentation QC task

In the cardiac MRI segmentation QC task, we used the ACDC dataset for training, validation, and testing. The ACDC dataset was partitioned into training, validation, and testing subsets with a ratio of 7:1:2, following the partitioning rule outlined in (Zhou et al., 2021; Chen et al., 2021; Cao et al., 2022) to avoid overlap of subjects. This results in a training set consisting of 140 volumes, a validation set with 20 volumes, and a testing set containing 40 volumes. By employing the same four-step approach as in the brain tumor QC task, the original ACDC training, validation, and testing subsets were extended to include 4,900, 640, and 1,280 segmentations, respectively. The testing set was then resampled to include 1,011 segmentations. We used the same hyperparameters from the brain tumor segmentation QC task to train the model for the cardiac segmentation QC task. As in the brain tumor segmentation QC task, we also reported the mean (± standard deviation) of the MAEs and DSC_SEM over all subjects.

3.3 Baseline methods

To evaluate the effectiveness of the proposed model, we compared its performance with five baseline models: (i) the RCA method (Robinson et al., 2017); (ii) an uncertainty estimation (UE) based method (Jungo et al., 2020); and three regression-based methods: (iii) a UNet model (Ronneberger et al., 2015); (iv) a ResNet-34 model (He et al., 2016); and (v) a ResNet-50 model (He et al., 2016; Robinson et al., 2018).

The selection of the three regression-based methods (UNet, ResNet-34, ResNet-50) as baselines was based on their architectural resemblance to the proposed approach. To ensure a fair comparison, the residual blocks in ResNet-34 and ResNet-50 were identical to those used in the proposed QCResUNet. As the UNet model typically outputs a segmentation mask of the same dimensions as the input, we added an average pooling layer after its final feature map, followed by a fully connected layer, to enable the prediction of a single DSC value.

The RCA and UE-based methods are considered state-of-the-art in the segmentation QC literature. RCA for brain tumor segmentation QC was implemented using the DeepMedic framework as the segmentation method, adapted following the protocol described by Valindria et al. (2017). Specifically, we reduced the number of feature maps in each layer by one third compared to the default setting of DeepMedic. We also reduced the number of feature maps in the last fully connected layer from 150 to 45. We opted for this approach instead of the atlas-based segmentation approach because establishing spatial correspondences between query and reference images in tumorous areas can be challenging due to the high spatial and phenotypic heterogeneity of brain tumors. In contrast, RCA for cardiac MRI segmentation QC was implemented using the atlas-based segmentation method, following the protocols in (Robinson et al., 2019). This is because registration correspondences between query and reference are easier to establish in healthy anatomy. For the selection of the reference dataset, we randomly selected 100 samples from the training dataset following a previously well-validated protocol (Robinson et al., 2019).

The implementation of the UE-based method followed the protocol in (Jungo et al., 2020), where the uncertainty map was calibrated by evaluating the uncertainty-error (UE) overlap. The UE overlap measures the overlap between the binarized uncertainty map and the corresponding SEM (see Appendix B.1 for details). We chose the UE calibration proposed by Jungo et al. (2020) instead of other calibration methods (Wen et al., 2020; Ashukha et al., 2020; Mehta et al., 2022) because it produces a binary mask by thresholding the uncertainty map, which can be directly compared to the SEM generated by the proposed QCResUNet. The subject-level DSC was computed by applying a random forest to 102 extracted radiomics features (see Appendix B.2).

3.4 Evaluation metrics

We evaluated the segmentation QC performance at both the subject level and the voxel level. At the subject level, we evaluated the precision of the prediction of the segmentation quality metrics based on the MAE and the Pearson correlation coefficient r. The MAE measures the amount of error in the prediction. r measures the linear correlation between the predicted DSC/NSD value (DSC_pred/NSD_pred) and the ground-truth DSC/NSD (DSC_gt/NSD_gt); it ranges from −1 (perfectly negatively correlated) to +1 (perfectly positively correlated).

At the voxel level, the performance was evaluated based on the DSC (DSC_SEM^(c)) between the predicted SEM (SEM_pred^(c)) and the ground-truth SEM (SEM_gt^(c)) for each tissue class c. Here, we report the average DSC_SEM across all C tissue classes. We performed a paired t-test to compare the DSC predicted by QCResUNet with those predicted by the corresponding baselines. The resulting p-value (P) was reported as a measure to determine whether there was a statistically significant difference between the performance of the proposed method and the baseline methods. The significance level was set a priori to P < 0.05.
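The following sketch shows how these subject-level metrics and the paired t-test could be computed with NumPy and SciPy; array names are ours.

```python
# Subject-level evaluation: MAE, Pearson r, and a paired t-test between the
# per-subject errors of two methods.
import numpy as np
from scipy import stats

def subject_level_metrics(pred: np.ndarray, gt: np.ndarray):
    """Return (mean MAE, std of MAE, Pearson r) over all subjects."""
    mae = np.abs(pred - gt)            # per-subject absolute errors
    r, _ = stats.pearsonr(pred, gt)    # linear correlation with the ground truth
    return mae.mean(), mae.std(), r

def paired_test(err_proposed: np.ndarray, err_baseline: np.ndarray) -> float:
    """Paired t-test on per-subject errors; returns the p-value."""
    return stats.ttest_rel(err_proposed, err_baseline).pvalue
```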
Table 3. The subject-level QC performance on the brain tumor MRI segmentation task, evaluated on the internal BraTS testing dataset as well as the independent BraTS-SSA and WUSM datasets, with segmentations generated by nnUNet, nnFormer, and DeepMedic. The best metrics within each column are highlighted (here marked with *; P < 0.05 with a paired t-test against all baseline methods).

Internal BraTS [testing], nnUNet+nnFormer:
  Method                            NSD r↑   NSD MAE↓         DSC r↑   DSC MAE↓
  RCA (Valindria et al., 2017)      0.662    0.255 ± 0.165    0.624    0.315 ± 0.227
  UE-based (Jungo et al., 2020)     0.817    0.102 ± 0.095    0.784    0.144 ± 0.112
  UNet                              0.939    0.069 ± 0.053    0.944    0.077 ± 0.063
  ResNet-34                         0.944    0.070 ± 0.054    0.955    0.072 ± 0.057
  ResNet-50 (Robinson et al., 2018) 0.943    0.069 ± 0.052    0.960    0.070 ± 0.053
  QCResUNet                         0.958*   0.056 ± 0.043*   0.968*   0.064 ± 0.048*

Internal BraTS [testing], DeepMedic:
  Method      NSD r↑   NSD MAE↓         DSC r↑   DSC MAE↓
  RCA         0.631    0.272 ± 0.171    0.660    0.329 ± 0.215
  UE-based    0.744    0.113 ± 0.083    0.762    0.149 ± 0.114
  UNet        0.902    0.087 ± 0.054    0.939    0.080 ± 0.069
  ResNet-34   0.916    0.086 ± 0.065    0.954    0.074 ± 0.059
  ResNet-50   0.917    0.084 ± 0.061    0.947    0.074 ± 0.056
  QCResUNet   0.937*   0.074 ± 0.050*   0.962*   0.062 ± 0.047*

External BraTS-SSA, nnUNet+nnFormer+DeepMedic:
  Method      NSD r↑   NSD MAE↓         DSC r↑   DSC MAE↓
  RCA         0.690    0.260 ± 0.160    0.604    0.302 ± 0.227
  UE-based    0.627    0.149 ± 0.114    0.703    0.165 ± 0.122
  UNet        0.946    0.061 ± 0.050    0.931    0.079 ± 0.074
  ResNet-34   0.950    0.067 ± 0.052    0.943    0.076 ± 0.068
  ResNet-50   0.946    0.066 ± 0.049    0.941    0.077 ± 0.068
  QCResUNet   0.954*   0.057 ± 0.044*   0.964*   0.060 ± 0.049*

External WUSM, nnUNet+nnFormer+DeepMedic:
  Method      NSD r↑   NSD MAE↓         DSC r↑   DSC MAE↓
  RCA         0.736    0.242 ± 0.158    0.504    0.308 ± 0.246
  UE-based    0.591    0.170 ± 0.119    0.617    0.187 ± 0.141
  UNet        0.903    0.080 ± 0.073    0.885    0.102 ± 0.108
  ResNet-34   0.894    0.094 ± 0.082    0.888    0.103 ± 0.107
  ResNet-50   0.895    0.094 ± 0.082    0.895    0.103 ± 0.107
  QCResUNet   0.920*   0.075 ± 0.062*   0.912*   0.087 ± 0.097*

*: P < 0.05, with a paired t-test against all baseline methods.

Table 4. The subject-level QC performance on the cardiac MRI segmentation task, evaluated on the internal ACDC testing set with segmentations produced by nnUNet and nnFormer. The best metrics within each column are marked with *.

ACDC [testing]:
  Method      NSD r↑   NSD MAE↓         DSC r↑   DSC MAE↓
  RCA         0.760    0.185 ± 0.151    0.808    0.291 ± 0.136
  UE-based    0.827    0.088 ± 0.064    0.859    0.085 ± 0.063
  UNet        0.884    0.071 ± 0.060    0.930    0.059 ± 0.053
  ResNet-34   0.891    0.073 ± 0.061    0.931    0.059 ± 0.053
  ResNet-50   0.883    0.076 ± 0.059    0.938    0.061 ± 0.049
  QCResUNet   0.914*   0.070 ± 0.047*   0.955*   0.057 ± 0.040*

*: P < 0.05, with a paired t-test against all baseline methods.

3.5 Implementation details

Training procedures were the same for both tasks, using the segmentation results produced by nnUNet, nnFormer, and SegGen. Training was carried out using the Adam optimizer (Kingma and Ba, 2014) with a batch size of 4 for 100 epochs. Training started with the optimal initial learning rate determined by the hyperparameter search, which was exponentially decayed by a factor of 0.9 per epoch until it reached 1 × 10−6. We applied an L2 weight decay of 1 × 10−4 to all trainable layers. To prevent overfitting, various data augmentations, including random rotation, scaling, mirroring, Gaussian noise, and gamma intensity correction, were applied during training. All model training was performed on NVIDIA Tesla A100 GPUs. By default, we used a mix of single-precision (FP32) and half-precision (FP16) tensors during training to reduce training time and memory consumption. The proposed method was implemented using PyTorch v1.12.1. All the code and pre-trained models developed for this work will be available at https://github.com/sotiraslab/QCResUNet. A condensed sketch of this training configuration is given below.
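```python
# Sketch of the training configuration in Section 3.5; `model`, `loader`, and
# `qc_loss` are placeholders, and the batching of model inputs is assumed.
import torch

def train(model, loader, qc_loss, epochs=100, lr=2.1e-4):
    optimizer = torch.optim.Adam(model.parameters(), lr=lr, weight_decay=1e-4)
    scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.9)
    scaler = torch.cuda.amp.GradScaler()            # FP16/FP32 mixed precision
    for _ in range(epochs):
        for images, seg_query, targets in loader:
            optimizer.zero_grad()
            with torch.cuda.amp.autocast():         # forward pass in mixed precision
                outputs = model(images, seg_query)
                loss = qc_loss(outputs, targets)
            scaler.scale(loss).backward()
            scaler.step(optimizer)
            scaler.update()
        if optimizer.param_groups[0]["lr"] > 1e-6:  # decay by 0.9 per epoch, floor 1e-6
            scheduler.step()
```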
Figure 3. Scatter plots of the ground truth (x-axis) and the predicted DSC (y-axis) for the proposed method and all baseline methods on the internal BraTS testing, external BraTS-SSA, and WUSM datasets, and the internal ACDC testing set (rows). Results for (a) RCA; (b) UE-based; (c) UNet; (d) ResNet-34; (e) ResNet-50; and (f) QCResUNet are reported in different columns. The proposed method generalized well to external datasets and consistently showed superior performance compared to all baseline methods. The UE-based method showed the worst performance compared to all other methods.

4 Results

4.1 Evaluation of subject-level QC performance

4.1.1 Brain tumor MRI segmentation QC task

The proposed model performed well in subject-level DSC and NSD prediction across all three brain tumor datasets (Table 3). Specifically, on the BraTS internal testing set with segmentations generated by nnUNet and nnFormer, the proposed method achieved small mean MAEs of 0.056 and 0.064 for NSD and DSC prediction, respectively. The predicted NSD and DSC also showed a strong correlation with the corresponding ground truth, achieving Pearson r values of 0.958 and 0.968, respectively. Importantly, the proposed method generalized well to segmentations produced by a different method (i.e., DeepMedic) that had not been used during training, achieving average MAEs of 0.074 and 0.062 for NSD and DSC prediction. Similarly, the predicted NSD and DSC showed a strong correlation with their ground truth, with Pearson r values of 0.937 and 0.962, respectively.

Critically, our method also generalized well to the completely unseen external BraTS-SSA and WUSM datasets. On the BraTS-SSA dataset, the proposed method achieved MAEs of 0.057 and 0.060 for NSD and DSC prediction, respectively. In addition, the Pearson r between the predicted segmentation quality measures and their ground-truth values demonstrated a high correlation (NSD r = 0.954; DSC r = 0.964). Despite this dataset containing MRI scans with varying image quality and tumor characteristics, the proposed method still generalized well to BraTS-SSA. However, there was a slight drop in performance on the WUSM dataset, with Pearson r values of 0.920 and 0.912 for NSD and DSC predictions, respectively. The MAEs for the NSD and DSC prediction on the WUSM dataset were 0.075 and 0.087, respectively. We conjectured this might be attributed to domain shift due to differences in data acquisition and preprocessing, and to the variability of shape and structure in brain tumors.

Figure 4. Scatter plots of the ground truth (x-axis) and the predicted NSD (y-axis) for the proposed method and all baseline methods on the internal BraTS testing, external BraTS-SSA, and WUSM datasets, and the internal ACDC testing set (rows). Results for (a) RCA; (b) UE-based; (c) UNet; (d) ResNet-34; (e) ResNet-50; and (f) QCResUNet are reported in different columns. The proposed method generalized well to external datasets and consistently showed superior performance compared to all baseline methods. The UE-based method showed the worst performance compared to all other methods.
Importantly, the proposed method outperformed all baseline methods in the NSD and DSC prediction tasks. Compared to the three regression-based QC methods (i.e., UNet, ResNet-34, ResNet-50), QCResUNet improved on the second-best method by 0.8% for NSD prediction and 1.5% for DSC prediction in terms of Pearson r values on the BraTS internal testing set. On the external BraTS-SSA and WUSM datasets, QCResUNet outperformed the second-best method by an average of 1.3% and 1.9% in terms of Pearson r, respectively. A paired t-test confirmed that this improvement was statistically significant compared to all three regression-based baseline methods (Table 3). The proposed approach also exhibited more evenly distributed DSC prediction errors across different quality levels compared to all the baseline methods (refer to Fig. 3 and Fig. 4), demonstrating a smaller standard deviation in MAE (see Table 3).

The proposed method demonstrated a strong performance gain over the state-of-the-art RCA and UE-based QC methods (Table 3). The performance of the RCA and UE-based methods after hyperparameter tuning was in line with previous works (Robinson et al., 2017; Jungo et al., 2020). On the internal BraTS testing set, the proposed method improved the average Pearson r of NSD predictions by 48.5% and of DSC predictions by 22.1% compared to RCA and UE-based QC, respectively. A similar trend was observed on the external datasets, with the proposed method improving Pearson r by an average of 50.9% and 48.2% compared to RCA and UE-based QC, respectively.

Table 5. Comparison of the voxel-level segmentation QC performance of the proposed method to baseline methods in terms of DSC_SEM. DSC_SEM for the brain tumor MRI segmentation QC task is computed as the average of DSC_SEM^WT, DSC_SEM^TC, and DSC_SEM^ET, while DSC_SEM for the cardiac MRI segmentation QC task is computed as the average of DSC_SEM^LV, DSC_SEM^Myo, and DSC_SEM^RV. The best metrics within each column are marked with *.

                                Brain tumor segmentation QC                          Cardiac segmentation QC
  Method                        BraTS [testing]   BraTS-SSA        WUSM              ACDC [testing]
  UE-based (Jungo et al., 2020) 0.360 ± 0.223     0.309 ± 0.106    0.283 ± 0.150     0.460 ± 0.038
  QCResUNet w/o attn.           0.633 ± 0.190     0.641 ± 0.052    0.618 ± 0.100     0.613 ± 0.116
  QCResUNet w/o regr.           0.746 ± 0.164     0.685 ± 0.115    0.674 ± 0.076     0.668 ± 0.091
  QCResUNet                     0.769 ± 0.131*    0.702 ± 0.088*   0.684 ± 0.073*    0.703 ± 0.082*

*: P < 0.05, with a paired t-test against all baseline methods. QCResUNet w/o attn. refers to QCResUNet without the proposed attention-based SEM aggregation. QCResUNet w/o regr. indicates QCResUNet without performing subject-level QC of DSC and NSD. QCResUNet represents the proposed QCResUNet that utilizes attention-based SEM aggregation and performs both subject-level and voxel-level QC prediction.

Figure 5. The distribution of the DSC (DSC_SEM) between the predicted segmentation error map and the corresponding ground truth for (a) the brain tumor segmentation QC task and (b) the cardiac segmentation QC task. The proposed QCResUNet accurately localized segmentation errors in terms of DSC_SEM for all tissue classes across all datasets (refer to Section 4.2 for a detailed discussion), demonstrating good generalization.
Moreover, the proposed method achieved a significant reduction in the average MAE of predicting NSD and DSC compared to the RCA and UE-based methods across all results, viz. internal testing (0.292, 0.126 vs. 0.064), BraTS-SSA (0.281, 0.157 vs. 0.059), and WUSM (0.275, 0.179 vs. 0.081).

4.1.2 Cardiac MRI segmentation QC task

Similar to the brain tumor segmentation QC task, the proposed method achieved good performance on the cardiac segmentation QC task (see Table 4). Specifically, the proposed method achieved average MAEs of 0.070 and 0.057 for NSD and DSC predictions, respectively. The predicted NSD and DSC demonstrated a strong correlation with the corresponding ground truth, with Pearson r values of 0.914 and 0.955, respectively. The proposed QCResUNet also outperformed all baseline methods in the cardiac segmentation QC task. QCResUNet improved on the second-best regression-based method by 2.6% and 1.8% in Pearson r for NSD and DSC predictions, respectively. In addition, QCResUNet significantly outperformed the RCA and UE-based QC methods by an average of 10.2% and 10.9%. Lastly, the proposed method achieved a significant reduction in MAEs compared to all baselines (Table 4), aligning well with the ground-truth segmentation quality measures (see Fig. 3 and Fig. 4).

4.2 Evaluation of segmentation error localization

The proposed QCResUNet achieved good voxel-level segmentation error localization in terms of DSC_SEM for both the brain tumor and cardiac segmentation QC tasks (see Table 5). Specifically, QCResUNet achieved an average DSC_SEM of 0.769 on the BraTS internal testing set, 0.702 on the BraTS-SSA dataset, and 0.684 on the WUSM dataset in the brain tumor segmentation QC task. Despite a slight performance drop on the external datasets, the results remain of high quality: inter-rater agreement studies in glioma segmentation (Visser et al., 2019) indicate that an overlap measure above 0.7 signifies a good segmentation result, and our out-of-distribution DSC_SEM results were on average around 0.7, despite the increased difficulty of the task. In addition, QCResUNet achieved an average DSC_SEM of 0.703 in the cardiac segmentation QC task. The detailed distributions of the tissue-level DSC_SEM (i.e., [DSC_SEM^WT, DSC_SEM^TC, DSC_SEM^ET] in brain tumor segmentation and [DSC_SEM^LV, DSC_SEM^Myo, DSC_SEM^RV] in cardiac segmentation) are shown in Fig. 5.

Compared to the UE-based QC method, QCResUNet improved error localization significantly, by an average of 127.5% in terms of average DSC_SEM on the brain tumor segmentation QC task (see Table 5). Similarly, QCResUNet significantly outperformed the UE-based QC method by 52.8% in the cardiac segmentation QC task for segmentation error localization (Table 5).

Visual demonstration. Overall, the proposed QCResUNet achieved reliable localization of tissue-specific segmentation failures at the voxel level for different levels of segmentation quality (see Fig. 6(a)).
This is a unique feature of the proposed approach, as none of the regression-based baseline methods can offer error localization. However, we observed that the performance of segmentation error localization might drop when the query segmentation is of high quality (Fig. 6(b)). This performance drop likely arises from the inherent challenge of detecting small errors at the boundaries of high-quality segmentations. However, these cases are less critical to correct, as the query segmentation has already achieved good quality. Therefore, this drop in performance has a minor impact on the potential clinical applicability of the proposed approach.

4.3 Ablation analysis

We performed ablation studies to validate the effectiveness of the proposed multi-task learning strategy and attention-based SEM aggregation mechanism. Although sharing the same network structure for subject-level segmentation quality prediction, QCResUNet outperformed ResNet-34 in both the brain tumor and cardiac segmentation tasks (Table 3 and Table 4). In addition, we found that the performance of voxel-level segmentation error prediction deteriorated when removing the subject-level prediction task (see Table 5). This demonstrates the effectiveness of the proposed multi-task learning framework for segmentation QC tasks.

In the voxel-level segmentation error localization task, incorporating the proposed attention-based SEM aggregation mechanism improved performance by an average of 13.9% for the brain tumor segmentation QC task and 14.7% for the cardiac segmentation QC task. We conjecture that the improvement introduced by the proposed attention-based SEM aggregation can be attributed to the fact that the features in the last layer of the decoder contribute differently to the prediction of each tissue class's error segmentation mask.

4.4 Explainability analysis

We hypothesized that the performance gain of the proposed QCResUNet compared to the UNet, ResNet-34, and ResNet-50 baselines is due to joint training under both subject-level and voxel-level supervision. One potential explanation is that the joint training allowed the proposed QCResUNet to effectively localize segmentation errors, as suggested by the attention maps produced by Gradient-weighted Class Activation Mapping (Grad-CAM) (Selvaraju et al., 2017) (see Fig. 7). We observed that the CAMs obtained from ResNet-34 and ResNet-50 did not focus on the areas of segmentation errors. Although ResNet-50 has more parameters and depth, it did not demonstrate a significant improvement over ResNet-34, indicating that these factors alone do not improve QC performance. The UNet CAM showed a coarse localization of the difference between the ground truth and the predicted segmentation. In contrast to the baseline methods, the QCResUNet CAM was well localized on segmentation errors.

5 Discussion and conclusions

In this work, we proposed QCResUNet, a novel 3D CNN architecture designed for automated QC of multi-class tissue segmentation in MRI scans. To the best of our knowledge, this is the first study to provide reliable simultaneous subject-level segmentation quality predictions and voxel-level identification of segmentation errors for different tissue classes. The results suggest that the proposed method is a promising approach for large-scale automated segmentation QC and for guiding clinicians' feedback in refining segmentation results.

A key feature of the proposed method is the multi-task objective.
This enabled the proposed method to focus on regions where errors have occurred, leading to improved performance. This is supported by the following observations. First, we observed that the CAM of the QCResUNet encoder focused more on the regions where the segmentation error occurred compared to ResNet-34 and ResNet-50 (refer to Fig. 7). Second, we observed that the supervision from the segmentation error prediction task can in turn guide the DSC and NSD prediction task to prioritize these error-prone regions. As suggested by the CAMs, ResNet-34 and ResNet-50 achieved more accurate DSC and NSD prediction than the UNet, while the UNet performed better in segmentation error localization. A possible reason for this is that the final average pooling layer in the UNet treats each element in the last feature map equally, ignoring the actual size of tumors. In contrast, the average pooling in the embedded feature space of the ResNet-based methods operates on abstracted quality feature maps that preserve this information, which resulted in better predictive performance. The joint optimization of both subject-level and voxel-level predictions allows the proposed QCResUNet to combine the advantages of both the ResNet-based models and the UNet. As a consequence, QCResUNet can simultaneously localize segmentation errors and assess the overall quality of the segmentation.

Importantly, the proposed method exhibited high generalizability when applied to unseen data, surpassing other state-of-the-art segmentation QC methods. This was particularly true for the RCA and UE-based methods in the brain tumor segmentation QC task, which exhibited poor performance when assessing the quality of segmentation results obtained using segmentation methods different from the ones used to generate their training data. The poor generalizability of the RCA method was mainly due to the inherent difficulty of obtaining a representative reference dataset for brain tumor segmentation, which is subject to significant variability. Such variability may violate the underlying assumption of the RCA method that there is at least one sample in the reference dataset that can be successfully segmented given a query image-segmentation pair (Robinson et al., 2017; Valindria et al., 2017), an assumption that is only valid when dealing with healthy anatomies. In the cardiac segmentation QC task, which involves healthy anatomy, the RCA method achieved better performance than in the brain tumor case when implemented with the atlas-based segmentation method. Similar to RCA, the UE-based QC did not perform well in subject-level quality prediction or in localizing segmentation errors in the brain tumor case. This may be attributed to the inherent variability of uncertainty maps produced by various segmentation methods on datasets with different image quality, tumor characteristics, etc. In the cardiac segmentation QC task, which involves healthy anatomy and less variability, the UE-based method showed better performance. Additionally, consistent with the findings in Jungo et al. (2020), we found that the UE-based method offers limited segmentation error localization. This limitation further hinders its ability to generalize effectively. Furthermore, the UE-based method can only be used to assess segmentations obtained from deep learning models.
Unless the deep learning segmentation method directly outputs an estimate of voxel-wise uncertainty, test-time estimation of uncertainty (e.g., using MCDropout (Gal and Ghahramani, 2016)) requires access to the model architecture and weights, which may not be possible for models deployed in clinical settings.

The proposed work is not without limitations. First, though the proposed method has better generalizability than other segmentation QC methods, it may be affected by domain shift issues, which are common to all deep learning methods. As a consequence, translating the proposed method to clinical practice will require monitoring of its performance to ensure reliable results. In addition, techniques that can enhance the robustness and generalizability of the proposed method to domain shifts (Ganin et al., 2016; Carlucci et al., 2019) are worth investigating in future work. Second, a key requirement of the proposed method is the availability of all four modalities. Although handling missing modalities would enhance the applicability of the proposed method in diverse clinical settings, it is a non-trivial problem. The integration of existing techniques for handling missing modalities (Dorent et al., 2019; Shen and Gao, 2019; Wang et al., 2023) into the proposed QCResUNet would significantly increase the computational burden of training, because these methods necessitate separate sets of encoders and decoders for each input modality, resulting in a computational load at least quadruple that of the current QCResUNet. In future work, we will explore efficient strategies to allow the model to handle missing modalities. Third, the proposed method was only validated on segmentation tasks involving a single object. Although we have shown that the proposed method generalizes well across different segmentation tasks that include multiple tissues, more experiments are needed to evaluate how this method performs in the presence of multiple objects (e.g., multiple lesions). Lastly, this work focused on predicting DSC and NSD as metrics to summarize segmentation quality. However, it is important to recognize that these metrics have limitations and may not be suitable for all applications. While we demonstrated the proposed method's ability to handle multiple metrics, additional research will be required to tailor the method to predicting the most appropriate metric for specific tasks (Maier-Hein et al., 2024).

To conclude, we developed QCResUNet for automated brain tumor and cardiac segmentation QC. Our proposed method is able to reliably assess segmentation quality at the subject level while accurately identifying tissue-specific segmentation errors at the voxel level. Through multi-task learning under subject-level and voxel-level supervision, we achieved strong performance in both prediction tasks. Training the network on a large-scale dataset, which comprised segmentation results from various methods and at different levels of quality, allowed the proposed method to generalize well to unseen data. A key characteristic of the proposed method is that it is agnostic to the method used to generate the segmentation. This makes it versatile for evaluating the quality of segmentation results generated by different methods.
A unique characteristic of the proposed method is its ability to accurately pinpoint the location of tissue-specific segmentation errors, thus potentially facilitating the integration of human input for refining automatically generated segmentations in clinical settings. This, in turn, has the potential to enhance clinical workflows.

References

Adewole, M., Rudie, J.D., Gbdamosi, A., Toyobo, O., Raymond, C., Zhang, D., Omidiji, O., Akinola, R., Suwaid, M.A., Emegoakor, A., et al., 2023. The brain tumor segmentation (brats) challenge 2023: Glioma segmentation in sub-saharan africa patient population (brats-africa). ArXiv.

Ahn, J., Cho, S., Kwak, S., 2019. Weakly supervised learning of instance segmentation with inter-pixel relations, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 2209–2218.

Albà, X., Lekadir, K., Pereañez, M., Medrano-Gracia, P., Young, A.A., Frangi, A.F., 2018. Automatic initialization and quality control of large-scale cardiac mri segmentations. Medical Image Analysis 43, 129–141.

Ashukha, A., Lyzhov, A., Molchanov, D., Vetrov, D., 2020. Pitfalls of in-domain uncertainty estimation and ensembling in deep learning. arXiv preprint arXiv:2002.06470.

Bai, W., Sinclair, M., Tarroni, G., Oktay, O., Rajchl, M., Vaillant, G., Lee, A.M., Aung, N., Lukaschuk, E., Sanghvi, M.M., et al., 2018. Automated cardiovascular magnetic resonance image analysis with fully convolutional networks. Journal of Cardiovascular Magnetic Resonance 20, 1–12.

Baid, U., Ghodasara, S., Mohan, S., Bilello, M., Calabrese, E., Colak, E., Farahani, K., Kalpathy-Cramer, J., Kitamura, F.C., Pati, S., Prevedello, L.M., Rudie, J.D., Sako, C., Shinohara, R.T., Bergquist, T., Chai, R., Eddy, J., Elliott, J., Reade, W., Schaffter, T., Yu, T., Zheng, J., Moawad, A.W., Coelho, L.O., McDonnell, O., Miller, E., Moron, F.E., Oswood, M.C., Shih, R.Y., Siakallis, L., Bronstein, Y., Mason, J.R., Miller, A.F., Choudhary, G., Agarwal, A., Besada, C.H., Derakhshan, J.J., Diogo, M.C., Do-Dai, D.D., Farage, L., Go, J.L., Hadi, M., Hill, V.B., Iv, M., Joyner, D., Lincoln, C., Lotan, E., Miyakoshi, A., Sanchez-Montano, M., Nath, J., Nguyen, X.V., Nicolas-Jilwan, M., Jimenez, J.O., Ozturk, K., Petrovic, B.D., Shah, C., Shah, L.M., Sharma, M., Simsek, O., Singh, A.K., Soman, S., Statsevych, V., Weinberg, B.D., Young, R.J., Ikuta, I., Agarwal, A.K., Cambron, S.C., Silbergleit, R., Dusoi, A., Postma, A.A., Letourneau-Guillon, L., Perez-Carrillo, G.J.G., Saha, A., Soni, N., Zaharchuk, G., Zohrabian, V.M., Chen, Y., Cekic, M.M., Rahman, A., Small, J.E., Sethi, V., Davatzikos, C., Mongan, J., Hess, C., Cha, S., Villanueva-Meyer, J., Freymann, J.B., Kirby, J.S., Wiestler, B., Crivellaro, P., Colen, R.R., Kotrotsou, A., Marcus, D., Milchenko, M., Nazeri, A., Fathallah-Shaykh, H., Wiest, R., Jakab, A., Weber, M.A., Mahajan, A., Menze, B., Flanders, A.E., Bakas, S., 2021. The rsna-asnr-miccai brats 2021 benchmark on brain tumor segmentation and radiogenomic classification.

Bernard, O., Lalande, A., Zotti, C., Cervenansky, F., Yang, X., Heng, P.A., Cetin, I., Lekadir, K., Camara, O., Ballester, M.A.G., et al., 2018. Deep learning techniques for automatic mri cardiac multi-structures segmentation and diagnosis: is the problem solved? IEEE Transactions on Medical Imaging 37, 2514–2525.
Cao, H., Wang, Y., Chen, J., Jiang, D., Zhang, X., Tian, Q., Wang, M., 2022. Swin-unet: Unet-like pure transformer for medical image segmentation, in: European Conference on Computer Vision, Springer. pp. 205–218.

Carlucci, F.M., D'Innocente, A., Bucci, S., Caputo, B., Tommasi, T., 2019. Domain generalization by solving jigsaw puzzles, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).

Chakrabarty, S., Abidi, S.A., Mousa, M., Mokkarala, M., Hren, I., Yadav, D., Kelsey, M., LaMontagne, P., Wood, J., Adams, M., Su, Y., Thorpe, S., Chung, C., Sotiras, A., Marcus, D.S., 2022. Integrative imaging informatics for cancer research: Workflow automation for neuro-oncology (i3cr-wano).

Chen, J., Lu, Y., Yu, Q., Luo, X., Adeli, E., Wang, Y., Lu, L., Yuille, A.L., Zhou, Y., 2021. Transunet: Transformers make strong encoders for medical image segmentation. arXiv preprint arXiv:2102.04306.

Dorent, R., Joutard, S., Modat, M., Ourselin, S., Vercauteren, T., 2019. Hetero-modal variational encoder-decoder for joint modality completion and segmentation, in: Medical Image Computing and Computer Assisted Intervention – MICCAI 2019: 22nd International Conference, Shenzhen, China, October 13–17, 2019, Proceedings, Part II 22, Springer. pp. 74–82.

Drozdzal, M., Vorontsov, E., Chartrand, G., Kadoury, S., Pal, C., 2016. The importance of skip connections in biomedical image segmentation, in: International Workshop on Deep Learning in Medical Image Analysis, International Workshop on Large-Scale Annotation of Biomedical Data and Expert Label Synthesis, Springer. pp. 179–187.

Gal, Y., Ghahramani, Z., 2016. Dropout as a bayesian approximation: Representing model uncertainty in deep learning, 1050–1059.

Ganin, Y., Ustinova, E., Ajakan, H., Germain, P., Larochelle, H., Laviolette, F., March, M., Lempitsky, V., 2016. Domain-adversarial training of neural networks. Journal of Machine Learning Research 17, 1–35.

Garcia-Garcia, A., Orts-Escolano, S., Oprea, S., Villena-Martinez, V., Garcia-Rodriguez, J., 2017. A review on deep learning techniques applied to semantic segmentation. arXiv preprint arXiv:1704.06857.

Grady, L., Singh, V., Kohlberger, T., Alvino, C., Bahlmann, C., 2012. Automatic segmentation of unknown objects, with application to baggage security, 430–444.

He, K., Zhang, X., Ren, S., Sun, J., 2016. Deep residual learning for image recognition, 770–778.

Ioffe, S., Szegedy, C., 2015. Batch normalization: Accelerating deep network training by reducing internal covariate shift. International Conference on Machine Learning, 448–456.

Isensee, F., Kickingereder, P., Wick, W., Bendszus, M., Maier-Hein, K.H., 2018a. Brain tumor segmentation and radiomics survival prediction: Contribution to the brats 2017 challenge, 287–297.

Isensee, F., Petersen, J., Klein, A., Zimmerer, D., Jaeger, P.F., Kohl, S., Wasserthal, J., Koehler, G., Norajitra, T., Wirkert, S., Maier-Hein, K.H., 2018b. nnu-net: Self-adapting framework for u-net-based medical image segmentation.

Jungo, A., Balsiger, F., Reyes, M., 2020. Analyzing the quality and challenges of uncertainty estimations for brain tumor segmentation. Frontiers in Neuroscience, 282.

Kamnitsas, K., Ledig, C., Newcombe, V.F., Simpson, J.P., Kane, A.D., Menon, D.K., Rueckert, D., Glocker, B., 2017. Efficient multi-scale 3d cnn with fully connected crf for accurate brain lesion segmentation. Medical Image Analysis 36, 61–78.

Khened, M., Kollerathu, V.A., Krishnamurthi, G., 2019. Fully convolutional multi-scale residual densenets for cardiac segmentation and automated cardiac diagnosis using ensemble of classifiers. Medical Image Analysis 51, 21–45.

Kingma, D.P., Ba, J., 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
Kofler, F., Ezhov, I., Fidon, L., Horvath, I., de la Rosa, E., LaMaster, J., Li, H., Finck, T., Shit, S., Paetzold, J., et al., 2022. Deep quality estimation: Creating surrogate models for human quality ratings. arXiv preprint arXiv:2205.10355.

Kohlberger, T., Singh, V., Alvino, C., Bahlmann, C., Grady, L., 2012. Evaluating segmentation error without ground truth, in: Ayache, N., Delingette, H., Golland, P., Mori, K. (Eds.), Medical Image Computing and Computer-Assisted Intervention – MICCAI 2012, Springer Berlin Heidelberg, Berlin, Heidelberg. pp. 528–536.

Li, K., Yu, L., Heng, P.A., 2022. Towards reliable cardiac image segmentation: Assessing image-level and pixel-level segmentation quality via self-reflective references. Medical Image Analysis 78, 102426.

Li, L., Jamieson, K., DeSalvo, G., Rostamizadeh, A., Talwalkar, A., 2018. Hyperband: A novel bandit-based approach to hyperparameter optimization. Journal of Machine Learning Research 18, 1–52.

Li, L., Jamieson, K., Rostamizadeh, A., Gonina, E., Ben-Tzur, J., Hardt, M., Recht, B., Talwalkar, A., 2020. A system for massively parallel hyperparameter tuning. Proceedings of Machine Learning and Systems 2, 230–246.

Liaw, R., Liang, E., Nishihara, R., Moritz, P., Gonzalez, J.E., Stoica, I., 2018. Tune: A research platform for distributed model selection and training. arXiv preprint arXiv:1807.05118.

Litjens, G., Kooi, T., Bejnordi, B.E., Setio, A.A.A., Ciompi, F., Ghafoorian, M., Van Der Laak, J.A., Van Ginneken, B., Sánchez, C.I., 2017. A survey on deep learning in medical image analysis. Medical Image Analysis 42, 60–88.

Maas, A.L., Hannun, A.Y., Ng, A.Y., et al., 2013. Rectifier nonlinearities improve neural network acoustic models, in: Proc. ICML, Atlanta, Georgia, USA. p. 3.

Maier-Hein, L., Reinke, A., Godau, P., Tizabi, M.D., Buettner, F., Christodoulou, E., Glocker, B., Isensee, F., Kleesiek, J., Kozubek, M., et al., 2024. Metrics reloaded: recommendations for image analysis validation. Nature Methods, 1–18.

Mehta, R., Filos, A., Baid, U., Sako, C., McKinley, R., Rebsamen, M., Dätwyler, K., Meier, R., Radojewski, P., Murugesan, G.K., et al., 2022. Qu-brats: Miccai brats 2020 challenge on quantifying uncertainty in brain tumor segmentation: analysis of ranking scores and benchmarking results. The Journal of Machine Learning for Biomedical Imaging 2022.

Ng, M., Guo, F., Biswas, L., Petersen, S.E., Piechnik, S.K., Neubauer, S., Wright, G., 2020. Estimating uncertainty in neural networks for cardiac mri segmentation: a benchmark study. arXiv preprint arXiv:2012.15772.

Ng, M., Guo, F., Biswas, L., Wright, G.A., 2018. Estimating uncertainty in neural networks for segmentation quality control, 3–6.

Qi, J., Du, J., Siniscalchi, M., Ma, X., Lee, C.H., 2020. On mean absolute error for deep neural network based vector-to-vector regression. IEEE Signal Processing Letters PP.

Qiu, P., Chakrabarty, S., Nguyen, P., Ghosh, S.S., Sotiras, A., 2023. Qcresunet: Joint subject-level and voxel-level prediction of segmentation quality, in: International Conference on Medical Image Computing and Computer-Assisted Intervention, Springer. pp. 173–182.
Robinson, R., Oktay, O., Bai, W., Valindria, V.V., Sanghvi, M.M., Aung, N., Paiva, J.M., Zemrak, F., Fung, K., Lukaschuk, E., et al., 2018. Real-time prediction of segmentation quality, 578–585.

Robinson, R., Valindria, V.V., Bai, W., Oktay, O., Kainz, B., Suzuki, H., Sanghvi, M.M., Aung, N., Paiva, J.M., Zemrak, F., et al., 2019. Automated quality control in image segmentation: application to the uk biobank cardiovascular magnetic resonance imaging study. Journal of Cardiovascular Magnetic Resonance 21, 1–14.

Robinson, R., Valindria, V.V., Bai, W., Suzuki, H., Matthews, P.M., Page, C., Rueckert, D., Glocker, B., 2017. Automatic quality control of cardiac mri segmentation in large-scale population imaging, 720–727.

Rohlfing, T., Zahr, N.M., Sullivan, E.V., Pfefferbaum, A., 2010. The sri24 multichannel atlas of normal adult human brain structure. Human Brain Mapping 31, 798–819.

Ronneberger, O., Fischer, P., Brox, T., 2015. U-net: Convolutional networks for biomedical image segmentation, 234–241.

Roy, A.G., Conjeti, S., Navab, N., Wachinger, C., 2018. Inherent brain segmentation quality control from fully convnet monte carlo sampling, 664–672.

Sander, J., de Vos, B.D., Išgum, I., 2020. Automatic segmentation with detection of local segmentation failures in cardiac mri. Scientific Reports 10, 1–19.

Selvaraju, R.R., Cogswell, M., Das, A., Vedantam, R., Parikh, D., Batra, D., 2017. Grad-cam: Visual explanations from deep networks via gradient-based localization, in: Proceedings of the IEEE International Conference on Computer Vision, pp. 618–626.

Shen, Y., Gao, M., 2019. Brain tumor segmentation on mri with missing modalities, in: Information Processing in Medical Imaging: 26th International Conference, IPMI 2019, Hong Kong, China, June 2–7, 2019, Proceedings 26, Springer. pp. 417–428.

Tompson, J., Goroshin, R., Jain, A., LeCun, Y., Bregler, C., 2015. Efficient object localization using convolutional networks. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 648–656.

Tran, P.V., 2016. A fully convolutional neural network for cardiac segmentation in short-axis mri. arXiv preprint arXiv:1604.00494.

Ulyanov, D., Vedaldi, A., Lempitsky, V., 2016. Instance normalization: The missing ingredient for fast stylization. arXiv preprint arXiv:1607.08022.

Valindria, V.V., Lavdas, I., Bai, W., Kamnitsas, K., Aboagye, E.O., Rockall, A.G., Rueckert, D., Glocker, B., 2017. Reverse classification accuracy: predicting segmentation performance in the absence of ground truth. IEEE Transactions on Medical Imaging 36, 1597–1606.

Visser, M., Müller, D., van Duijn, R., Smits, M., Verburg, N., Hendriks, E., Nabuurs, R., Bot, J., Eijgelaar, R., Witte, M., et al., 2019. Inter-rater agreement in glioma segmentations on longitudinal mri. NeuroImage: Clinical 22, 101727.

Wang, H., Chen, Y., Ma, C., Avery, J., Hull, L., Carneiro, G., 2023. Multi-modal learning with missing modality via shared-specific feature modelling, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 15878–15887.

Wang, Q., Wu, B., Zhu, P., Li, P., Zuo, W., Hu, Q., 2020a. Eca-net: Efficient channel attention for deep convolutional neural networks, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 11534–11542.

Wang, S., Tarroni, G., Qin, C., Mo, Y., Dai, C., Chen, C., Glocker, B., Guo, Y., Rueckert, D., Bai, W., 2020b. Deep generative model-based quality control for cardiac mri segmentation, 88–97.

Wen, Y., Tran, D., Ba, J., 2020. Batchensemble: an alternative approach to efficient ensemble and lifelong learning. arXiv preprint arXiv:2002.06715.
Zhou, H.Y., Guo, J., Zhang, Y., Yu, L., Wang, L., Yu, Y., 2021. nnformer: Interleaved transformer for volumetric segmentation. arXiv preprint arXiv:2109.03201.

A Subject-level multi-class DSC

We calculated the subject-level DSC across all tissue classes using the standard definition:

  DSC = \frac{2\,TP}{2\,TP + FP + FN},   (1)

where TP, FP, and FN denote the numbers of true positive, false positive, and false negative voxels, respectively. For a given location in the multi-class query segmentation mask (Squery), a predicted foreground label is considered a true positive if and only if it matches the corresponding ground-truth label. Similarly, a predicted background label is considered a true negative if and only if it matches the true background. Otherwise, if a background voxel is incorrectly predicted as foreground, it is counted as a false positive; a foreground voxel incorrectly predicted as background is counted as a false negative.

The rationale for selecting this multi-class DSC as a subject-level quality measure is to provide a single value that summarizes the overall quality of the segmentation. This is beneficial for streamlined image-driven analysis, as it allows us to filter out low-quality segmentation cases using a single threshold.

B Uncertainty estimation-based QC

The uncertainty estimation-based QC was implemented using the Monte Carlo Dropout (MCDropout) approach (Gal and Ghahramani, 2016), given its success and popularity. MCDropout was applied to the main convolution blocks in the nnUNet and DeepMedic, as well as to the self-attention block in the nnFormer, following the protocol in (Jungo et al., 2020) with a drop rate of p = 0.5. After running the inference of the model T times, the uncertainty map of a segmentation can be computed as

  UMap_{v,c} = \frac{1}{T} \sum_{t=1}^{T} P^{(t)}_{v,c},   (2)

where v and c denote the voxel index and the tumor tissue type, and P^{(t)}_{v,c} denotes the probability map obtained from the softmax layer of a trained segmentation model. T was set empirically to 50 to strike a balance between the number of Monte Carlo samples drawn and the computational complexity.

B.1 Uncertainty-Error Overlap

We used the uncertainty-error (UE) overlap proposed by Jungo et al. (2020) to calibrate the uncertainty map. The UE overlap measures the overlap between the uncertainty map (UMap) and the ground-truth segmentation error map (SEM_gt) following the DSC formula:

  UE = \frac{2\,|UMap \cap SEM_{gt}|}{|UMap| + |SEM_{gt}|},

where |·| denotes the cardinality of the set of voxels of each binary map. To compute the UE overlap, the uncertainty map UMap needs to be thresholded into a binary mask. Following the protocol in Jungo et al. (2020), the optimal threshold value was determined by examining different thresholds on the validation set in increments of 0.05, ranging from 0.05 to 0.95. The optimal threshold values for the different datasets are shown in Table 1.

Table 1. The optimal threshold values used for calibrating the uncertainty map across different datasets. The threshold values for brain tumor segmentation QC apply to the [WT, TC, ET] tissue classes, while in the cardiac case the values pertain to the [LV, Myo, RV] tissue classes.

  Model       BraTS [testing]      BraTS-SSA            WUSM                 ACDC [testing]
  nnUNet      [0.10, 0.10, 0.10]   [0.10, 0.10, 0.15]   [0.15, 0.10, 0.15]   [0.10, 0.10, 0.10]
  nnFormer    [0.10, 0.10, 0.30]   [0.30, 0.35, 0.35]   [0.40, 0.30, 0.40]   [0.10, 0.15, 0.15]
  DeepMedic   [0.05, 0.05, 0.05]   [0.25, 0.25, 0.20]   [0.05, 0.05, 0.10]   -
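A minimal PyTorch sketch combining Eq. (2) with the UE overlap is given below; the segmentation model and threshold are placeholders, and keeping the network in train() mode is one common way to activate dropout at inference time.

```python
# Sketch of MCDropout-based uncertainty estimation (Eq. 2) and the UE overlap.
import torch

@torch.no_grad()
def mc_dropout_umap(model, x, T: int = 50):
    """Average T stochastic softmax outputs into an uncertainty map (Eq. 2)."""
    model.train()                      # keep dropout layers active at inference
    probs = torch.stack([torch.softmax(model(x), dim=1) for _ in range(T)])
    return probs.mean(dim=0)           # UMap: (B, C, D, H, W)

def ue_overlap(umap_c, sem_gt, threshold: float, eps: float = 1e-6):
    """Dice-style overlap between the thresholded UMap and the binary SEM_gt."""
    u = (umap_c > threshold).float()
    inter = (u * sem_gt).sum()
    return (2 * inter / (u.sum() + sem_gt.sum() + eps)).item()
```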
              Brain tumor segmentation QC                                   Cardiac segmentation QC
Threshold     BraTS [testing]       BraTS-SSA            WUSM               ACDC [testing]
nnUNet        [0.10, 0.10, 0.10]    [0.10, 0.10, 0.30]   [0.05, 0.05, 0.05] [0.10, 0.10, 0.10]
nnFormer      [0.10, 0.10, 0.15]    [0.30, 0.35, 0.35]   [0.25, 0.25, 0.20] [0.10, 0.15, 0.15]
DeepMedic     [0.15, 0.10, 0.15]    [0.40, 0.30, 0.40]   [0.05, 0.05, 0.10] -

B.2 Subject-level DSC prediction

Following Jungo et al. (2020), the voxel-level uncertainty map was aggregated to subject-level Dice Similarity Coefficient (DSC) and Normalized Surface Dice (NSD) predictions by using a two-step approach. First, 102 radiomics features were automatically extracted from the uncertainty map using the PyRadiomics¹ package. Second, we trained a random forest regressor using the scikit-learn² package in the training dataset to predict the subject-level DSC and NSD.

C Hyper-parameter tuning results

The hyperparameter tuning was carried out using the Raytune³ package. Specifically, we utilized a bandit-based approach for efficient resource allocation that allowed us to dedicate more resources to the more promising hyperparameter combinations (Li et al., 2018, 2020). The learning curve for Raytune hyperparameter tuning in the BraTS validation set is shown in Fig. S1. The optimal hyperparameter combinations determined by Raytune can be found in Table 2.
¹https://pyradiomics.readthedocs.io
²https://scikit-learn.org/stable/
³https://docs.ray.io/en/latest/index.html

D Voxel-level multi-class segmentation error mask visualization

Here, we provide the details on how to obtain the visualization in Fig. 6. The voxel-level SEM prediction from the proposed QCResUNet is a set of binary masks (e.g., in the brain tumor segmentation task, these masks delineate the whole tumor (WT), tumor core (TC), and enhancing tumor (ET), respectively). For the purpose of visualization, we combined the binary WT, TC, and ET masks into a multi-class mask consisting of ET, NCR, and ED classes. Specifically, the ED class was constructed from those voxels whose values were one in the WT binary mask and zero in the TC binary mask. The NCR class was constructed from those voxels whose values were one in the TC binary mask and zero in the ET binary mask. The ET class was constructed from those voxels whose values were one in the ET binary mask.

Figure 6. Examples showcasing the performance of the proposed method versus baseline methods on the brain tumor segmentation QC task: (a) high-quality segmentation error localization and (b) low-quality segmentation error localization. The color bar in the figure indicates the intensity of the uncertainty map (UMap). We observed that the proposed method showed better segmentation error localization than the uncertainty map. The error localization was better when dealing with low-quality query segmentations in contrast to higher-quality ones. This may be attributed to the fact that detecting few errors at the boundaries of high-quality segmentations is challenging. We kindly direct the readers to D for details on how the visualization in this figure was obtained.
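As a concrete reference for the Appendix D combination rule, the following is a minimal NumPy sketch; the function name and the integer label coding are our own illustrative choices, not from the paper:

```python
import numpy as np

def combine_binary_masks(wt: np.ndarray, tc: np.ndarray, et: np.ndarray) -> np.ndarray:
    """Combine binary WT/TC/ET masks into a multi-class mask.
    Label coding (illustrative): 0 = background, 1 = NCR, 2 = ED, 3 = ET."""
    multi = np.zeros_like(wt, dtype=np.uint8)
    # ED: voxels inside the whole tumor but outside the tumor core.
    multi[(wt == 1) & (tc == 0)] = 2
    # NCR: voxels inside the tumor core but outside the enhancing tumor.
    multi[(tc == 1) & (et == 0)] = 1
    # ET: voxels of the enhancing-tumor mask.
    multi[et == 1] = 3
    return multi
```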
[Figure 6 panels: for each example, the ground-truth and predicted DSC/NSD values are annotated alongside the ground truth S_gt, the query segmentation S_query, the error maps SEM_gt and SEM_QCResUNet, and the uncertainty map UMap; panel groups are (a) high-quality and (b) low-quality segmentation error localization.]

Figure 7. Examples of the class activation maps produced by Grad-CAM for different methods. The CAMs of UNet, ResNet-34, and ResNet-50 were generated based on the last convolutional feature maps. The CAMs of the proposed QCResUNet were generated from the last convolutional feature map of the ResNet encoder. We observed that the proposed QCResUNet showed superior localization performance in comparison to all other baselines in terms of the CAM. This may be attributed to the multi-task learning framework of the proposed method (see detailed discussion in Section 4.4).

Figure S1. The learning curve of hyper-parameter tuning on the BraTS validation set, where the x-axis is the number of epochs and the y-axis is the average Pearson r for predicting DSC and NSD. Each line (differentiated by colors) in the subpanel figures indicates the Pearson r for a specific experimental trial with a particular combination of hyper-parameters. [Panel annotations: optimal Pearson r of 0.942 (UNet), 0.954 (ResNet-34), 0.952 (ResNet-50), and 0.966 (QCResNet); DSC_SEM: 0.813.]
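The Grad-CAM maps referenced in Figure 7 follow the standard recipe of Selvaraju et al. (2017); below is a minimal 2-D PyTorch sketch of that recipe. The paper's networks operate on 3-D volumes and this is not the authors' code; all names and the assumed model output shape (1, num_classes) are illustrative:

```python
import torch
import torch.nn.functional as F

def grad_cam(model, layer, x, class_idx):
    """Plain Grad-CAM: weight a layer's feature maps by the spatially
    averaged gradients of the target class score, then ReLU the sum."""
    feats, grads = [], []
    h1 = layer.register_forward_hook(lambda m, i, o: feats.append(o))
    h2 = layer.register_full_backward_hook(lambda m, gi, go: grads.append(go[0]))
    try:
        score = model(x)[0, class_idx]      # assumes logits of shape (1, C)
        model.zero_grad()
        score.backward()
        fmap, grad = feats[0], grads[0]     # (1, C, H, W)
        weights = grad.mean(dim=(2, 3), keepdim=True)
        cam = F.relu((weights * fmap).sum(dim=1, keepdim=True))
        cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear",
                            align_corners=False)
        return (cam / cam.max().clamp_min(1e-8)).squeeze()
    finally:
        h1.remove(); h2.remove()
```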
ai_researcher
7
Scaling_Scientific_Knowledge_Discovery_with_Neuro-Symbolic_AI_and_Large_Language_Models.pdf
arXiv:2302.06852v1 [cs.AI] 14 Feb 2023

Using Artificial Intelligence to aid Scientific Discovery of Climate Tipping Points

Jennifer Sleeman,1 David Chung,1 Chace Ashcraft,1 Jay Brett,1 Anand Gnanadesikan,2 Yannis Kevrekidis,2 Marisa Hughes,1 Thomas Haine,2 Marie-Aude Pradal,2 Renske Gelderloos,2 Caroline Tang,3 Anshu Saksena,1 Larry White1
1 Johns Hopkins University Applied Physics Laboratory, 11100 Johns Hopkins Road, Laurel, Maryland 20723
2 Johns Hopkins University, 3400 N. Charles St., Baltimore, MD 21218-2683
3 Duke University, Duke University Box 90586, Durham, NC 27708
{jennifer.sleeman,david.chung,chace.ashcraft,jay.brett,marisa.hughes,anshu.saksena,larry.white}@jhuapl.edu, {gnanades,yannisk,thomas.haine,mpradal1,rgelder2}@jhu.edu, [email protected]

Abstract
We propose a hybrid Artificial Intelligence (AI) climate modeling approach that enables climate modelers in scientific discovery using a climate-targeted simulation methodology based on a novel combination of deep neural networks and mathematical methods for modeling dynamical systems. The simulations are grounded by a neuro-symbolic language that both enables question answering of what is learned by the AI methods and provides a means of explainability. We describe how this methodology can be applied to the discovery of climate tipping points and, in particular, the collapse of the Atlantic Meridional Overturning Circulation (AMOC). We show how this methodology is able to predict AMOC collapse with a high degree of accuracy using a surrogate climate model for ocean interaction. We also show preliminary results of neuro-symbolic method performance when translating between natural language questions and symbolically learned representations. Our AI methodology shows promising early results, potentially enabling faster climate tipping point related research that would otherwise be computationally infeasible.

Introduction
Climate change and its global effects can no longer be ignored. The urgency to both understand and find ways to mitigate climate effects has become an increasing focus of research, driven by the increase in extreme events including wildfires, heat waves, and extreme flooding. As part of this conversation, climate tipping points are a topic of growing interest, as these tipping points represent states at which large, abrupt, irreversible changes occur in the environment that could result in devastating and accelerated global change. Worryingly, the mechanisms, likelihood, and potential impacts of tipping points are not fully understood. The Intergovernmental Panel for Climate Change summarized some of the major factors related to climate tipping points in a special report (Pörtner et al. 2019), which highlights the risks to lands, oceans, food sources, and human health. In a recently published report by Lenton et al. (2019), 15 different tipping points are described as being currently "active." For example, the melting of the Greenland ice sheet is occurring at an unprecedented rate and could reach a tipping point at 1.5°C of warming (Pörtner et al. 2019; Lenton et al. 2019).
Unfortunately, studying tipping points is challenged by the fact that their occurrence in climate models depends on numerous physical processes that are governed by poorly constrained parameters. Exploring the entire state space spanned by these parameters is computationally infeasible in the full general circulation models used for climate projection.
Climate researchers need better ways to direct their attention to scenarios that simulate the present-day world with good fidelity, but are also closer to a tipping point than the current generation of models. We show how AI can be used to support tipping point discovery using the collapse of the Atlantic Meridional Overturning Circulation (AMOC) as a use case.

Background–The AMOC
The AMOC is an important element of the climate system, as it is central to how heat and freshwater are transported (Buckley and Marshall 2016). Often called the conveyor belt of the ocean, its circulation pattern involves warm salty upper-ocean water flowing into the North Atlantic, cooling, and sinking into the deep. It has such a significant effect on the regulation of the Earth's climate (Zhang et al. 2019) that small changes in sea surface temperatures can have large global climate effects. Some evidence suggests that the AMOC has slowed down, although the issue is intensely debated. Climate models project that the AMOC will weaken in the 21st century, and some climate models with ultrahigh resolution in the ocean suggest the AMOC might collapse (Thornalley et al. 2018; Jackson and Wood 2018).
In recent articles and published papers, it has been speculated that a full collapse of the AMOC could have long-term effects on food insecurity (Benton 2020), sea level rise (Bakker 2022), and Arctic-related effects (Liu and Fedorov 2022).

Related Work
There has been a long debate on whether deep learning could be used to replace numerical weather/climate models (Schultz et al. 2021), but many small successes in applying deep learning to focused climate- and weather-related problems have demonstrated promise (Rasp, Pritchard, and Gentine 2018; Reichstein et al. 2019; Singh et al. 2021). In this study, we focused on how deep learning could be used for the discovery of climate tipping points by recommending parameters for climate model runs that would induce tipping, which is less explored due to the computational challenges of modeling climate tipping points using traditional methods. However, related work that explored using deep learning for early warning signal detection includes work by Bury et al. (2021) and Deb et al. (2022), both of whom developed systems using Long Short-Term Memory (LSTM) networks trained on the dynamics to predict tipping points, focusing on behavior near the tipping point by finding critical slowing patterns. Though these methods are related to the bifurcation work included herein, we focus on the larger problem of building hybrid AI climate models that leverage these outputs.
On the specific topic of the AMOC, a variety of simplified dynamical frameworks have been used for insight into the dynamics and sensitivity of the overturning (Johnson et al. 2019). The development of those frameworks can be said to begin with Stommel (1961), demonstrating the bistability of the AMOC, followed more recently by Gnanadesikan (1999), who added Southern Ocean wind and eddy processes. This was expanded by Johnson, Marshall, and Sproson (2007) to include prognostic equations for temperature and salinity, and by Jones and Cessi (2016) to include the Pacific basin. Finally, Gnanadesikan, Kelson, and Sten (2018a) expanded from the Johnson, Marshall, and Sproson (2007) model to include lateral tracer mixing. Each of these models has different simplifying assumptions, but all have dynamics that are similar to observations in the AMOC-on state.
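To make the bistability at the heart of this line of work concrete, the following is a deliberately simplified, Stommel-flavored toy of our own (not any of the cited models): the steady overturning q balances a fixed thermal contrast against a freshwater-driven salinity contrast, q = 1 − η/(λ + |q|), where η is the freshwater forcing and λ a relaxation constant:

```python
import numpy as np

def steady_overturning(eta, lam=0.2, q_grid=np.linspace(-1.0, 3.0, 40001)):
    """Approximate the roots q of q = 1 - eta / (lam + |q|) via sign changes.
    q > 0: thermally driven "on" state; q < 0: collapsed "off" state."""
    residual = q_grid - (1.0 - eta / (lam + np.abs(q_grid)))
    idx = np.where(np.diff(np.sign(residual)) != 0)[0]
    return q_grid[idx]

for eta in (0.1, 0.25, 0.4):  # weak, intermediate, strong freshwater forcing
    print(f"eta={eta}: equilibria near {steady_overturning(eta)}")
```

For weak forcing only the "on" equilibrium exists; for intermediate forcing "on" and "off" equilibria coexist (with an unstable root between them); for strong forcing only the collapsed state survives. This is the hysteresis structure that the box models cited above elaborate.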
The Hybrid AI Climate Modeling Methodology
The Hybrid AI Climate modeling methodology includes an AI simulation based on a Generative Adversarial Network (GAN) (Goodfellow et al. 2014) that explores different climate models to learn how to invoke climate tipping point scenarios using a surrogate model and a bifurcation (Dijkstra 2019) method. The bifurcation method identifies areas in state space where abrupt changes in state occur, i.e., tipping points. Training the GAN involves an interaction with a neuro-symbolic method as shown in Figure 1. The neuro-symbolic method learns how to translate questions that a climate modeler would ask of the model into "programs" that could then be run by the GAN, and translates "imagined" models that the GAN generates into natural language questions that could be understood by a climate modeler. This unique approach to learning provides two key advantages: 1.) it enables explainability that is human understood - an important requirement among scientific researchers, and 2.) it provides a way to direct climate researchers to areas in the search space that are roughly where the tipping points may live for in-depth climate modeling. Our method is built to be generalizable, as the questions are based on an ontological representation of the climate domain and the surrogate model is supplied by the climate modeler. The GAN and the bifurcation method are not specific to any domain and can be described as a general machinery for discovery.

Figure 1: Learning to Translate Questions into Programs and Programs into Questions.

Multi-Generator Tipping Point Generative Adversarial Network
Building on previous work that used multiple generators for stabilizing GAN training (Hoang et al. 2018), we explored using multiple generators to exploit the regions in state space where tipping points occur. The multi-generator tipping point GAN (TIP-GAN) is built as a novel adversarial game involving a set of generators and a discriminator. The generators learn to generate climate model configurations that will result in a climate tipping point. The discriminator is trained to learn which generator is generating the model configurations and which model configurations lead to a tipping point. A custom loss function is used for this setup, which includes learning to predict a collapse or not and learning which generator generated the model configurations. In this setup we assume the discriminator is asking the surrogate climate model to provide the answer as to whether a tipping point occurred or not. For the AMOC, the tipping point explored is the collapse of the AMOC.

Figure 2: Learning to Translate Questions into Programs and Programs into Questions.

Figure 3: The Four Box Model.

Knowledge-Guided Neuro-symbolic Learning
To support hybrid AI climate modeling, we use a set of neuro-symbolic deep architectures to enable a translation between what is learned by TIP-GAN and climate modeler-generated natural language questions. The inclusion of a neuro-symbolic layer in this system enables us to take complicated questions that a climate modeler may ask during the scientific exploration process, and use the AI simulated environment to get an answer to those questions that will provide the climate modeler with an area in the search space that should be further explored using traditional climate modeling techniques. This provides the climate modeler with a way to tackle the discovery of climate tipping points that
would otherwise be impossible to find without a brute force approach.
Building on the early effort in (Yi et al. 2019), we have developed a translation methodology that converts natural language into program-style symbolic representations to structurally represent natural language questions. The programs developed are used to capture questions pertaining to parameter changes that could cause a tipping point to occur. The generators of TIP-GAN randomly generate perturbed model configurations to invoke climate tipping points. They generate these perturbations in the form of programs that are then run using the surrogate model. These programs, using the trained neuro-symbolic translation architectures, are translated into natural language questions with associated answers obtained by the generators through their interactions with the discriminator.
In Figure 2 we show that the proposed neuro-symbolic translation network is a triangular model that includes a question encoder, a question decoder, a program decoder, and a program encoder. It is bidirectional in that it translates from questions to programs and from programs to questions. A word embedding and a word positional embedding are shared across networks and are used to support the translations. The text encoder network encodes text into this shared space. The decoder network decodes encodings into questions and into programs. Another encoder network encodes programs into text. The TIP-GAN works at a vector level processed by the climate model, and its perturbed model configurations are converted from vectors to programs and then programs to questions in natural form.

Experimental Setup
In this section we describe the experimental setup for this work, which includes using a Four Box model (Gnanadesikan, Kelson, and Sten 2018b) as the surrogate model in the AI simulation. This Four Box model was created specifically to study the behavior of the AMOC overturning and potential collapse states. We set up the TIP-GAN to perturb parameters for this Four Box model. We use the neuro-symbolic trained translators to learn to translate from programs that are generated from the GAN's perturbations into natural language questions. This is performed while the GAN is training. After the GAN is trained, we translate natural language questions to programs that the TIP-GAN can run on its latent space (trained model).

Data and the Surrogate Model
Climate models, such as those modeling the AMOC, can be approximated using simple box models (Levermann and Fürst 2010). Box models reduce the number of system parameters but aim to retain the essential dynamics that characterize AMOC tipping points. We used the Gnanadesikan, Kelson, and Sten (2018b) four-box model shown in Figure 3, which includes boxes for the deep ocean, the surface Southern Ocean, the surface low latitudes, and the surface North Atlantic/Arctic Oceans. The model is developed in Matlab. The AMOC strength is represented by the mass transport variable Mn, which depends on the time-dependent density difference between the low- and high-northern-latitude boxes and the depth of the low-latitude box Dlow. The AMOC is "on" when mass is removed from the low-latitude box and "off" when mass is recycled to the low latitudes. Dlow is determined by a mass balance equation which is affected by the magnitude of the wind-driven upwelling in the Southern Ocean, Mek, which modulates the conversion of dense deep water to light surface water.
Atmospheric freshwater fluxes Fnw and Fsw act to make the high-latitude boxes lighter, while heat fluxes have the reverse effect. In the experiments reported here, Mn is monitored while the other variables are manually perturbed to change within their given ranges. There are nine equations in this model: temperatures and salinities in all four boxes and Dlow are predicted as the model is run over time. The AMOC tipping point is plotted in terms of the overturning transport Mn as a function of the freshwater flux Fnw. As the climate warms, Fnw is expected to increase and to reduce the density difference between low and high latitudes. The extent to which increasing Fnw can collapse the overturning (and to which reducing it can restart the overturning) will depend on the magnitude of Mek as well as the initial value of Dlow, as illustrated in Fig. 4.
We developed Python code to recreate the Four Box model and to enable us to build a large dataset of model configurations with initial values for parameters over ranges of acceptable values, and labeled outcomes indicating AMOC on or off states for machine learning training and evaluation. We verified that we were able to recreate the same AMOC collapses as in the original model using Python tools, shown in Figure 4.

Figure 4: Recreated Collapses Using Python Generated Tools for Machine Learning Dataset Creation from the Four Box Model. As the Southern Ocean upwelling flux Mek becomes larger, so does the magnitude of the overturning Mn. The value of Fnw required to collapse the model increases as Dlow or Mek increase.

Parameter Name   Parameter Description                        Bounds
Dlow0            Initial low latitude pycnocline depth (m)    [100.0, 400.0]
Mek              Ekman flux from the southern ocean (Sv)      [15, 35]
Fnw              Fresh water flux in North (Sv)               [0.05, 1.55]

Table 1: Parameters that were perturbed for the Uncertainty Experiment.

TIP-GAN
We set up three experiments using the Four box model data for training the GAN. We focused on perturbation of the three bounded parameters shown in Table 1. Each experiment included generators perturbing one of the variables. All other variables were held constant. The full model configuration is shown in Figure 5.
We trained the TIP-GAN using equally-weighted generators and a shutoff classification cross-entropy loss function. The TIP-GAN was run for approximately 250 epochs, and we ran the experiments for each n ∈ [1, 2, 4], where n represents the number of generators. Data was augmented for uniform sampling from a 3-D space. The distribution of collapse vs. non-collapse samples was 743/413. We then used the trained TIP-GAN to generate samples that either resulted in AMOC collapse or non-collapse.

Figure 5: Four box model experimental configuration replicated in TIP-GAN.

Training used a batch size of 64 and a learning rate of 0.001. We evaluated the performance of our bi-directional method by evaluating text-to-text translations, text-to-program translations, and program-to-text translations. These three tasks can be further distinguished by the length of each question, which we measure in the number of tokens. Since we include beginning-of-sentence (BOS) and end-of-sentence (EOS) tokens with each question, the shortest sequences we analyze consist of seven tokens, and the longest consist of 13.
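Returning to the Four Box dataset generation described earlier in this section, the sampling-and-labeling loop reduces to a few lines. In the hedged sketch below, run_four_box stands in for the authors' Python port of the box model (not shown here), and the shutoff criterion is illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
BOUNDS = {"Dlow0": (100.0, 400.0),  # initial pycnocline depth (m), Table 1
          "Mek": (15.0, 35.0),      # Southern Ocean Ekman flux (Sv)
          "Fnw": (0.05, 1.55)}      # northern freshwater flux (Sv)

def run_four_box(Dlow0, Mek, Fnw):
    """Placeholder for the authors' four-box model integration; it should
    return the final overturning transport Mn at the end of the run."""
    raise NotImplementedError

dataset = []
for _ in range(1156):  # 743 collapse + 413 non-collapse samples in the paper
    cfg = {k: rng.uniform(lo, hi) for k, (lo, hi) in BOUNDS.items()}
    Mn_final = run_four_box(**cfg)
    # Label "off" when the overturning has shut down (illustrative criterion).
    dataset.append({**cfg, "amoc_off": Mn_final <= 0.0})
```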
We also built a custom dataset based on a a select set of questions and programs related to AMOC collapse from the Four box model which includes a single question rep- resented in natural language as “If [parameter x] is set to value [y], does the AMOC collapse within [amount of time t]?” There are more than 20 parameters that may be con- sidered in the box model that have large possible ranges of values. Similarly, the value of t could extend infinitely. The resulting dataset consisted of 1,066 question program sam- ples. Using the bidirectional model trained on CLEVR data, we performed transfer learning using the training data gen- erated from the four box model. The program for this model took the form of: ChangeSign(box model(SetTo(...)), Mn) where the ellipses denotes the various box model param- eters and their desired values. Using this approach, we eval- uated the performance of the translation architectures based on training the neuro-symbolic translation networks using transfer learning. Neuro-symbolic Learning We setup two experiments, one using a small subset of CLEVR data consisting of questions that are 11 tokens or less when tokenized. This resulted in 59,307 samples for training dataset and 12,698 samples for testing. Program sequences could be as long as 43 tokens. We trained our model using the Adam optimizer. We trained for two epochs Early Results We share early experimental results for TIP-GAN and the neuro-symbolic learners. TIP-GAN Results Early discriminator performance in classifying configura- tions as collapse or no collapse are shown in Table 2. The Precision Recall 1 Generator 2 Generators 4 Generators 1.0 0.993 0.929 1.0 1.0 1.0 F-measure 1.0 0.997 0.963 Table 2: Test Classification Results. Generator 1 Generator 2 Generator 3 Generator 4 1 Generator 2 Generators 4 Generators 0.854 0.992 0.982 0.998 0.986 0.972 1 Table 3: Fraction of samples that resulted in collapse. high F-measure scores indicates that the discriminator was able to accuracy classify AMOC collapse from non-collapse runs for a held-out test set. Increasing the number of gener- ators decreased the performance slightly. We observed this is because the discriminator tends to incorrectly classify a larger fraction of real samples as synthetic as the number of generators increases. After training the GAN, we generated 500 samples. From these samples we observed that the generators tend to favor exploring areas of shut-offs as shown in Table 3. Though the training data had some minor imbalance, these results are compelling. Neuro-symbolic Results The overall accuracy (token for token) across all tasks (text- to-text translations, text-to-program translations, and pro- gram to text translations) was approximately 70%. Of the three tasks, the highest accuracy was achieved performing the text-to-text translation, while program-to-test translation achieved the lowest. In addition to measuring accuracy, we also used a normal- ized Levenshtein distance (Yujian and Bo 2007) to measure performance. With accuracy, a translation prediction would be considered incorrect if it was not an exact match to the ground truth. In some cases, the prediction is off by a space, or by a repeated word. In other cases the prediction is wrong because it chose a word that was synonymous with what was expected. 
Levenshtein distance measures the number of sub- stitutions from one string to another, and although it cannot be used to account for synonyms, it can be a more accu- rate measure for understanding how close the prediction is to the ground truth. Future measurements will include se- mantic similarity-based measurements. As shown in Figure 6, the Levenshtein distance perfor- mance was consistent with the accuracy of each task. Text- to-text had the best performance for the 11 token model and Program-to-Text had the worst performance. The Cumula- tive Distribution Function (CDF) of the normalized Leven- shtein distance for the 11 Token Model is shown in Figure 7. For the Four box model question program dataset using transfer learning, we performed an overfit evaluation where the train and test set were equal. The model achieved a text- to-text accuracy of 99.9%, at text-to-program accuracy of Figure 6: Measuring Performance of Neuro-symbolic Trans- lations using Levenshtein Distance for the 11 Token Model. Figure 7: Cumulative Distribution Function (CDF) of the Normalized Levenshtein Distance for the 11 Token Model. 99.8%, and a program-to-text accuracy of 100%. The scores were similar for the Levenshtein distance. The results by se- quence length are shown in Figure 8. Due to the size of the dataset when we performed train test splits on this data, there was not a sufficient amount of samples to enable generaliza- tion. In addition to questions generated from natural language, we also tried translating GAN-based output. We constructed an appropriate program for the parameters varied by the GAN during it’s exploration. We tested each combination of the three parameters and the two questions, and the model translated them with 100% accuracy. Some example pro- grams are as follows: • ChangeSign(box model(SetTo(M ek,28496768)),M n) • ChangeSign(box model(SetTo(Fwn,638758), SetTo(D low0,288)),M n) Though the question and program structure for the AMOC-specific neuro-symbolic translations is simplistic, it is a first attempt to learn the domain-specific question pro- gram translation. These results, in addition to the more ex- Benton, T. G. 2020. Running AMOC in the farming econ- omy. Nature Food, 1(1): 22–23. Buckley, M. W.; and Marshall, J. 2016. Observations, infer- ences, and mechanisms of the Atlantic Meridional Overturn- ing Circulation: A review. Reviews of Geophysics, 54(1): 5–63. Bury, T. M.; Sujith, R.; Pavithran, I.; Scheffer, M.; Lenton, T. M.; Anand, M.; and Bauch, C. T. 2021. Deep learning for early warning signals of tipping points. Proceedings of the National Academy of Sciences, 118(39): e2106140118. Deb, S.; Sidheekh, S.; Clements, C. F.; Krishnan, N. C.; and Dutta, P. S. 2022. Machine learning methods trained on sim- ple models can predict critical transitions in complex natural systems. Royal Society Open Science, 9(2): 211475. Dijkstra, H. A. 2019. Numerical bifurcation methods ap- plied to climate models: analysis beyond simulation. Non- linear Processes in Geophysics, 26(4): 359–369. Gnanadesikan, A. 1999. A simple predictive model for the structure of the oceanic pycnocline. Science, 283(5410): 2077–2079. Gnanadesikan, A.; Kelson, R.; and Sten, M. 2018a. Flux cor- rection and overturning stability: Insights from a dynamical box model. Journal of Climate, 31(22): 9335–9350. Gnanadesikan, A.; Kelson, R.; and Sten, M. 2018b. Flux correction and overturning stability: Insights from a dynam- ical box model. Journal of Climate, 31(22): 9335–9350. 
Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; and Bengio, Y. 2014. Generative adversarial nets. Advances in neural in- formation processing systems, 27. Hoang, Q.; Nguyen, T. D.; Le, T.; and Phung, D. 2018. MGAN: Training generative adversarial nets with multiple generators. In International conference on learning repre- sentations. Jackson, L.; and Wood, R. 2018. Hysteresis and resilience of the AMOC in an eddy-permitting GCM. Geophysical Research Letters, 45(16): 8547–8556. Johnson, H. L.; Cessi, P.; Marshall, D. P.; Schloesser, F.; and Spall, M. A. 2019. Recent contributions of theory to our understanding of the Atlantic meridional overturning circu- lation. Journal of Geophysical Research: Oceans, 124(8): 5376–5399. Johnson, H. L.; Marshall, D. P.; and Sproson, D. A. 2007. Reconciling theories of a mechanically driven meridional overturning circulation with thermohaline forcing and mul- tiple equilibria. Climate Dynamics, 29(7): 821–836. Jones, C. S.; and Cessi, P. 2016. Interbasin transport of the meridional overturning circulation. Journal of Physical Oceanography, 46(4): 1157–1169. Lenton, T. M.; Rockstr¨om, J.; Gaffney, O.; Rahmstorf, S.; Richardson, K.; Steffen, W.; and Schellnhuber, H. J. 2019. Climate tipping points—too risky to bet against. Levermann, A.; and F¨urst, J. J. 2010. Atlantic pycnocline theory scrutinized using a coupled climate model. Geophys- ical research letters, 37(14). Figure 8: AMOC Tipping Point Problem Translations. tensive CLEVR results are encouraging. We are currently expanding the questions and programs to be more realistic. For example, one of the questions currently being learned is If I increase the Ekman flux by some value, will overturning increase?. We are also building an ontology to support the neuro-symbolic language. Future Work and Conclusions We show the early results of a hybrid AI climate modeling methodology. The novel GAN architecture which includes multiple generators is able to accurately predict AMOC col- lapse and non-collapse for a dataset generated from a Four box model using three parameter perturbations. Increasing the number of generators showed that the generators had a tendency to focus on the areas where collapse is likely to occur. Our current efforts are to advance the underlying bi- furcation methods and to use large global models that are calibrated to the Four box model so we could continue to build datasets for training the GAN. In addition, early results showed our neuro-symbolic translation architectures can ac- curately translate between natural language questions and programs using the CLEVR dataset. When we applied this to a small set of tightly coupled AMOC questions, we showed transfer learning was a viable option for training our archi- tectures on AMOC-specific questions. These results were very early however and there were simply not enough ques- tions and program variety to achieve good generalization. However, our second generation dataset includes a much larger set of questions and programs. The goal of having the neuro-symbolic representations is both to provide a way for climate researchers to ask questions of what is learned by the GAN and for explainability. Future work will include more specific questions pertaining to the AMOC and a more ad- vanced grammar for the neuro-symbolic language. We have also begun developing an underlying ontology to support this language. References Bakker, P. 2022. Ocean sensitivity to freshwater. 
Nature Climate Change, 12(5): 419–420.
Liu, W.; and Fedorov, A. 2022. Interaction between Arctic sea ice and the Atlantic meridional overturning circulation in a warming climate. Climate Dynamics, 58(5): 1811–1827.
Pörtner, H.-O.; Roberts, D. C.; Masson-Delmotte, V.; Zhai, P.; Tignor, M.; Poloczanska, E.; and Weyer, N. 2019. The ocean and cryosphere in a changing climate. IPCC Special Report on the Ocean and Cryosphere in a Changing Climate.
Rasp, S.; Pritchard, M. S.; and Gentine, P. 2018. Deep learning to represent subgrid processes in climate models. Proceedings of the National Academy of Sciences, 115(39): 9684–9689.
Reichstein, M.; Camps-Valls, G.; Stevens, B.; Jung, M.; Denzler, J.; Carvalhais, N.; et al. 2019. Deep learning and process understanding for data-driven Earth system science. Nature, 566(7743): 195–204.
Schultz, M. G.; Betancourt, C.; Gong, B.; Kleinert, F.; Langguth, M.; Leufen, L. H.; Mozaffari, A.; and Stadtler, S. 2021. Can deep learning beat numerical weather prediction? Philosophical Transactions of the Royal Society A, 379(2194): 20200097.
Singh, M.; Kumar, B.; Rao, S.; Gill, S. S.; Chattopadhyay, R.; Nanjundiah, R. S.; and Niyogi, D. 2021. Deep learning for improved global precipitation in numerical weather prediction systems. arXiv preprint arXiv:2106.12045.
Stommel, H. 1961. Thermohaline convection with two stable regimes of flow. Tellus, 13(2): 224–230.
Thornalley, D. J.; Oppo, D. W.; Ortega, P.; Robson, J. I.; Brierley, C. M.; Davis, R.; Hall, I. R.; Moffa-Sanchez, P.; Rose, N. L.; Spooner, P. T.; et al. 2018. Anomalously weak Labrador Sea convection and Atlantic overturning during the past 150 years. Nature, 556(7700): 227–230.
Yi, K.; Gan, C.; Li, Y.; Kohli, P.; Wu, J.; Torralba, A.; and Tenenbaum, J. B. 2019. CLEVRER: Collision events for video representation and reasoning. arXiv preprint arXiv:1910.01442.
Yujian, L.; and Bo, L. 2007. A normalized Levenshtein distance metric. IEEE Transactions on Pattern Analysis and Machine Intelligence, 29(6): 1091–1095.
Zhang, R.; Sutton, R.; Danabasoglu, G.; Kwon, Y.-O.; Marsh, R.; Yeager, S. G.; Amrhein, D. E.; and Little, C. M. 2019. A review of the role of the Atlantic meridional overturning circulation in Atlantic multidecadal variability and associated climate impacts. Reviews of Geophysics, 57(2): 316–375.

Acknowledgments
Approved for public release; distribution is unlimited. This material is based upon work supported by the Defense Advanced Research Projects Agency (DARPA) under Agreement No. HR00112290032.
ai_researcher
1
Exploration_and_Communication_for_Partially_Observable_Collaborative_Multi-Agent_Reinforcement_Learning.pdf
Cooperative and Collaborative Multi-Task Semantic Communication for Distributed Sources
Ahmad Halimi Razlighi, Maximilian H. V. Tillmann, Edgar Beck, Carsten Bockelmann, and Armin Dekorsy
Department of Communications Engineering, University of Bremen, Germany
E-mails: {halimi, tillmann, beck, bockelmann, dekorsy}@ant.uni-bremen.de

arXiv:2411.02150v1 [eess.SP] 4 Nov 2024

Abstract—In this paper, we explore a multi-task semantic communication (SemCom) system for distributed sources, extending the existing focus on collaborative single-task execution. We build on the cooperative multi-task processing introduced in [1], which divides the encoder into a common unit (CU) and multiple specific units (SUs). While earlier studies in multi-task SemCom focused on full observation settings, our research explores a more realistic case where only distributed partial observations are available, such as in a production line monitored by multiple sensing nodes. To address this, we propose a SemCom system that supports multi-task processing through cooperation on the transmitter side via a split structure and collaboration on the receiver side. We have used an information-theoretic perspective with variational approximations for our end-to-end data-driven approach. Simulation results demonstrate that the proposed cooperative and collaborative multi-task (CCMT) SemCom system significantly improves task execution accuracy, particularly in complex datasets, if the noise introduced from the communication channel is not limiting the task performance too much. Our findings contribute to a more general SemCom framework capable of handling distributed sources and multiple tasks simultaneously, advancing the applicability of SemCom systems in real-world scenarios.
Index Terms—Semantic communication, cooperation, collaboration, multi-tasking, infomax, deep learning.

This work was supported in part by the German Ministry of Education and Research (BMBF) under Grant 16KISK016 (Open6GGub) and the German Research Foundation (DFG) under grant 500260669 (SCIL).
This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible.

Fig. 1: An example of cooperative and collaborative multi-tasking for distributed sources.

I. INTRODUCTION
Recent breakthroughs in artificial intelligence, particularly in deep learning (DL) and end-to-end (E2E) communication technologies, have led to the rise of semantic communication (SemCom) [2]. It has attracted significant attention, being recognized as a critical enabler for the sixth generation (6G) of wireless communication networks. SemCom is expected to play a key role in supporting a wide range of innovative applications that will define 6G connectivity and beyond [3].
In contrast to conventional communication systems, which are designed based on Shannon's information theory and focus on the accurate transmission of symbols, SemCom prioritizes understanding the meaning and goals behind transmitted information. SemCom operates at the second level of communication, the semantic level, where the goal is to convey the desired meaning rather than ensuring exact bit-level accuracy [4]. By surpassing the traditional focus on the precise transmission of bits, SemCom is well-suited for emerging applications, such as the industrial internet and autonomous systems, where successful task execution is prioritized over the exact reconstruction of transmitted data at the receiver.
Research into SemCom has explored five main approaches, with four detailed in [5] and a fifth inspired by Weaver's extension of Shannon's theory to include the semantic level [6]. These approaches are:
• Classical approach: Quantifies semantic information using logical probability.
• Knowledge graph (KG) approach: Represents semantics through structured KGs.
• Machine learning (ML) approach: Encodes semantics within learned model parameters.
• Significance approach: Focuses on timing as a key component of semantic meaning.
• Information theory approach: Extends Shannon's framework to address semantic-level communication.
Recent works in SemCom primarily focus on two research directions: data reconstruction and task execution. Initial investigations into data recovery were led by [7] and [8], which utilized ML techniques to reconstruct diverse data sources such as text, speech, and images. Building on these foundational works, [9] and [10] have extended the focus to explore concepts like communication efficiency in SemCom. In addition, systems dealing with structured data have been examined through the KG approach to enhance data recovery [11].
In task-oriented SemCom, the focus shifts to executing intelligent tasks at the receivers. Most research in this area
By surpassing the traditional focus on the precise transmission of bits, SemCom is well-suited for emerging This work was supported in part by the German Ministry of Education and Research (BMBF) under Grant 16KISK016 (Open6GGub) and the German Research Foundation (DFG) under grant 500260669 (SCIL). • Knowledge graph (KG) approach: Represents semantics through structured KGs. • Machine learning (ML) approach: Encodes semantics within learned model parameters. • Significance approach: Focuses on timing as a key com- ponent of semantic meaning. • Information theory approach: Extends Shannon’s frame- work to address semantic-level communication. Recent works in SemCom primarily focus on two research directions: data reconstruction and task execution. Initial in- vestigations into data recovery were led by [7] and [8], which utilized ML techniques to reconstruct diverse data sources such as text, speech, and images. Building on these foundational works, [9] and [10] have extended the focus to explore concepts like communication efficiency in SemCom. In addition, systems dealing with structured data have been examined through the KG approach to enhance data recovery [11]. In task-oriented SemCom, the focus shifts to executing intelligent tasks at the receivers. Most research in this area This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible. has concentrated on single-task scenarios. For example, [12] developed a communication scheme using the information bot- tleneck framework, which encodes information while adapting to dynamic channel conditions. Moreover, some works considered more realistic scenarios in which the source is distributed, for instance, [13] studied information encoding for collaborative distributed relevant feature extraction to fulfill a task. [14] also offered a frame- work for collaborative retrieval of the message using multiple received semantic information. To address practical communication scenarios, SemCom systems must be capable of handling multiple tasks simul- taneously. Early efforts, such as [15] and [16], explored non- cooperative methods where each task operates on its respective dataset independently. Conversely, recent works like [17], [18], and [19] studied joint multi-tasking using established ML approaches and architecture. While the prior works on multi-tasking in SemCom have fo- cused solely on ML approaches, [1] introduced an information- theoretic analysis of the problem. This study proposed a split structure for the semantic encoder, dividing the semantic encoder into a common unit (CU) and multiple specific units (SUs), to enable cooperative processing of various tasks. The split structure can perform multiple tasks based on a single observation. In this work, we have examined a more applicable scenario, in which the full observation is not accessible, however, we have different distributed views of our main observation. This is illustrated in Fig. 1, where the full observation is the whole view of the production line but our sensing nodes provide partial observations from different positions. In distributed cases, where the source/observation is dis- tributed, the task cannot be executed depending on a single sensing node alone. Thus, collaborative systems have been studied to perform a single task, as seen in [14] and [13], where multiple nodes collaborate to execute their shared task. 
However, our research expands this to multi-task scenarios, applying the split structure from [1] to bring cooperation, which takes place on the transmitter side, together with collaboration on the receiver side. Therefore, we have contributed to a more general SemCom system capable of cooperatively and collaboratively executing multiple tasks by proposing a cooperative and collaborative multi-task (CCMT) SemCom architecture. In our data-driven approach, we have tailored semantic communication to multi-task processing for distributed sources and formulated it through information theory using variational approximations. Key contributions include:
• Combining the cooperative multi-task process, enabled by the split structure, with the collaborative process of the distributed observations.
• Considering different training methods and channel conditions for the proposed CCMT architecture.
• Demonstrating the effectiveness of the CCMT system by showing enhancements in task execution over various scenarios, specifically when dealing with more challenging datasets.

Fig. 2: Probabilistic graphical modeling of the proposed semantic source [1].

II. SYSTEM MODEL
This section introduces our probabilistic modeling of the proposed CCMT model. Furthermore, we formulate an information-theoretic optimization problem that aims to optimize the joint execution of multiple tasks in an E2E manner.

A. System Probabilistic Modeling
Fig. 2 illustrates our interpretation of the semantic source as discussed in [1]. Such a definition enables the simultaneous extraction of multiple semantic variables based on a single observation and addresses multiple tasks. It consists of N semantic variables, denoted by z = [z_1, z_2, ..., z_N], lying behind an observation S, and the given tasks specify one or multiple semantics to be of interest. The tuple (z, S) is defined as the semantic source, fully described by the probability distribution p(z, S).
In this study, we focus on a more practical scenario of the distributed setting, where instead of the full observation, multiple partial views of the data are available. These partial observations, denoted as S_1, ..., S_K, each contain information about some or all semantic variables.
In [1], it was demonstrated that when semantic variables share statistical relationships, a split semantic encoder, comprising a CU and multiple SUs, enables cooperative SemCom, significantly improving performance in multi-task cases by utilizing common information. In realistic scenarios, sensing nodes only access partial observations of the source, and collaborative approaches are required to perform tasks based on these distributed views.
To integrate cooperative multi-tasking with collaborative handling of distributed data, our proposed system model consists of K observations available at K sensing nodes and N semantic variables, each associated with a unique task. As illustrated in Fig. 3, at each sensing node the CU encoder first extracts the common relevant information from its observation. Next, N × K SU encoders extract and transmit task-specific information to their respective decoders for collaborative decoding. To fulfill each task, the corresponding receiver needs the K transmitted pieces of information extracted and transmitted by the assigned SUs from each observation. Since we consider the execution of two tasks, two SUs are required at each sensing node. Thus, we show the output of the SU encoders as x_1.1, x_1.2, ..., x_K.1, x_K.2, and their noise-corrupted versions received at the corresponding decoders are indicated by x̂_1.1, x̂_1.2, ..., x̂_K.1, x̂_K.2.
Our approach incorporates wireless channels between encoders and decoders, employing the additive white Gaussian
, xK.1, xK.2, and their noise-corrupted version received at the corresponding decoders are indicated by ˆx1.1, ˆx1.2, . . . , ˆxK.1, ˆxK.1. Our approach incorporates wireless channels between en- coders and decoders, employing the additive white Gaussian Semantic Source (z, S) Sensing node 1 SU 1.1 SU 1.2 obs. S1 CU1 Sensing node 4 obs. S4 CU4 SU 4.1 SU 4.2 ... ... ... ... Channel 1 Decoder Task 1 Channel 2 Decoder Task 2 Thus, the objective is to maximize the mutual information between the channel outputs ˆx(1:K).i of the corresponding SUs, and the semantic variables zi. Expanding the mutual in- formation in (2) as discussed in detail in [1], the approximated objective function is derived like: LCCMT(θ, Φ) = N (cid:88) i=1 I(zi; ˆx(1:K).i) ≈ E pCU θ (c1:K |S1:K ) (cid:34) N (cid:88) (cid:26) i=1 (cid:20) Ep(S1:K ,zi) Fig. 3: An illustration of the proposed CCMT system model for distributed partial observations for N = 2 and K = 4. E Φ (ˆx(1:K).i|c1:K )[ log p(zi|ˆx(1:K).i)] pSU (cid:21) (cid:27)(cid:21) . noise (AWGN) channel. As shown generally in Fig. 4, the Markov representation of our system model for the i-th semantic variable is outlined for ∀k ∈ {1, . . . , K} as follows. p(ˆzi, ˆxk.i, xk.i, ck|Sk) = pDeci(ˆzi|ˆxk.i) pChannelk (ˆxk.i|xk.i) pSUk.i (xk.i|ck) pCUk (ck|Sk). (1) In (1), pCUk (ck|Sk) defines the CU of the k-th sensing node, which extracts the common relevant information available in the k-th observation amongst all tasks. The corresponding SU for i-th semantic variable at k-th sensing node is de- scribed by pSUk.i (xk.i|ck) extracting task-specific information and providing xk.i as the channel input. The corresponding decoder is then specified by pDeci(ˆzi|ˆxk.i), where ˆxk.i ∈ Rmi, ∀k ∈ {1, . . . , K} is the received information passed through the AWGN channel and modeled like ˆxk.i = xk.i + n, where n ∼ N (0mi, σ2 i Imi), and mi is the size of the encoded task- specific information or the number of channel uses. (z1, . . . , zN ) S1 c1 SK cK x1.1 x1.2 xK.1 xK.2 ˆx1.1 ˆx1.2 ˆxK.1 ˆxK.2 ˆz1 ˆzN Fig. 4: Markov representation of the CCMT system. B. Optimization Problem We formulate an optimization problem by adopting the information maximization principle together with the E2E learning method, as follows. [ pCUk (ck|Sk)⋆, pSUk (xk|ck)⋆] K k=1 = arg max pCUk (ck|Sk), pSUk (xk|ck). N (cid:88) i=1 I(zi; ˆx(1:K).i). (2) (3) To derive the objective function on (3), we have employed the variational method, which is a way to approximate in- tractable computations based on some adjustable parameters, like weights in NNs [20]. The technique is widely used in machine learning, e.g., [21], and also in task-oriented communications, e.g., [12], [13], and [14]. Thus, our pos- terior distributions, {pCUk (ck|Sk)}K are approximated by NN parameters of θ = {θk}K Φ = {ϕk}K k=1 and {pSUk (ˆxk|ck)}K k=1 k=1 and k=1 respectively. As shown in (3), by considering the channel outputs we aim to emphasize the role of joint semantic and channel that coding performed by our SUs. Employing the fact (ˆxk|ck) = (cid:82) pSUk pSUk (xk|ck) pChannelk (ˆxk|xk) dxk, we try to ϕk ϕk (ˆxk|ck). 
optimize p_SU_{φ_k}(x̂_k | c_k).
Regarding the i-th decoder in (3), p_Dec_i(z_i | x̂_(1:K).i) can be fully determined using the known distributions and the underlying probabilistic relationship in (1) as:

p_Dec_i(z_i | x̂_k.i) = ( ∫ p_SU_{φ_k}(x̂_k.i | c_k) p_CU_{θ_k}(c_k | S_k) p(S_k, z_i) dS_k dc_k ) / p(x̂_k.i);  (4)

however, due to the high-dimensional integrals, (4) becomes intractable and we need to follow the variational approximation technique, resulting in the following:

L_CCMT_approx(θ, Φ, Ψ) = Σ_{i=1}^{N} I(z_i; x̂_(1:K).i) ≈ E_{p_CU_θ(c_(1:K) | S_(1:K))} [ Σ_{i=1}^{N} E_{p(S_(1:K), z_i)} [ E_{p_SU_Φ(x̂_(1:K).i | c_(1:K))} [ log p_Dec_{ψ_i}(z_i | x̂_(1:K).i) ] ] ],  (5)

where in (5), Ψ = {ψ_i}_{i=1}^{N} represents the NN parameters approximating the true distribution of the decoders. To obtain the empirical estimate of the above objective function, we approximate the expectations using Monte Carlo sampling, assuming the existence of a dataset {S^(j), z_1^(j), ..., z_N^(j)}_{j=1}^{J}, where J represents the batch size of the dataset [22].
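Concretely, this Monte Carlo training objective can be sketched in a few lines of PyTorch-style code. Everything below (the module containers, the concatenation of the K received signals per task, and the use of cross-entropy as the negative log-likelihood of categorical task variables) is an illustrative assumption, not the authors' released implementation:

```python
import torch
import torch.nn.functional as F

def ccmt_loss(cu_encoders, su_encoders, decoders, S, z, sigma, T=1):
    """One-batch Monte Carlo estimate of the negative variational objective.

    cu_encoders[k]    : CU of sensing node k, maps S[k] -> c_k
    su_encoders[k][i] : SU of node k for task i, maps c_k -> x_{k.i}
    decoders[i]       : decoder of task i; consumes the concatenated noisy
                        channel outputs of all K nodes and returns logits
    """
    c = [cu(S[k]) for k, cu in enumerate(cu_encoders)]   # common features
    loss = 0.0
    for i, dec in enumerate(decoders):                   # one decoder per task
        x = torch.cat([su_encoders[k][i](c[k]) for k in range(len(c))], dim=-1)
        for _ in range(T):                               # T channel draws
            x_hat = x + sigma * torch.randn_like(x)      # AWGN channel
            loss = loss + F.cross_entropy(dec(x_hat), z[i]) / T
    return loss
```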
The final layer of each encoder normalizes the output power across the channel uses to average the output power for each transmission to one over each NN training batch. The signal-to-noise ratio (SNR) of the AWGN channel is defined as SNR := 1/σ2 is the noise i , where σ2 i 1The code is available at https://github.com/ahmadhalimi95/CCMT All CNN layers are 3 × 3 convolution filters with stride 1 and zero padding to keep the input and output dimensions the same. As the MNIST dataset consists of black and white 28 by 28 pixels, the input images for each of the four agents are 14 by 14 pixels. The learning rate is set to 10−4, but for the last 30 epochs reduced by a factor of 0.1 per ten epochs. For the case with image rotation without CU, the learning rate is reduced by a factor of 0.1 for the last 200, last 100, and last 50 epochs each. Output size No. of param. 7 × 7 × c1 3 × 3 × c2 3 × 3 × c3 m1 3 × 3 × c3 m2 Encoder Layers for CCMT (for task 1 and 2) CU: CNN layer, ReLU, max-pool CU: CNN layer, ReLU, max-pool SUtask1: CNN layer, ReLU SUtask1: FC, power normalization SUtask2: CNN layer, ReLU SUtask2: FC, power normalization Encoder Layers for STC (for task i = 1, 2 ) SUtaski:CNN layer, ReLU, max-pool SUtaski:CNN layer, ReLU, max-pool SUtaski: CNN layer, ReLU, SUtaski: FC, power normalization Decoder Decoder task 1: FC, Tanh Decoder task 1: FC, Sigmoid Decoder task 2: FC, Tanh Decoder task 2: FC, Softmax 7 × 7 × k1 3 × 3 × k2 3 × 3 × k3 mi 16 1 16 10 (9 + 1)c1 (9c1 + 1)c2 (9c2 + 1)c3 (9c3 + 1)m1 (9c2 + 1)c3 (9c3 + 1)m2 (9 + 1)k1 (9k1 + 1)k2 (9k2 + 1)k3 (9k3 + 1)mi (m1 + 1)16 16 + 1 (m2 + 1)16 (16 + 1)10 power of the zero mean i.i.d. Gaussian noise vector of each channel n ∈ Rmi. For the simulations, 60 000 training and 10 000 validation data samples are used, the results are shown for the validation dataset, and the results of all simulations are averaged over 25 independent iterations. B. Training Scenarios In Fig. 5 the task execution error rate of task 1 and task 2 over the number of epochs for different SNRs is shown. It is worth mentioning that the SNR is uniformly distributed for all ranges, and specifically for Fig. 5, the evaluation SNR range is the same as the training SNR range. In this figure, the proposed CCMT architecture is compared to the STC architecture when both are trained for 500 epochs for an SNR range from 9 to 11 dB. It can be seen that the CCMT outperforms the STC in task execution error rate. To further investigate the joint semantic and channel coding performance of the SUs to deal with different channel condi- tions, the CCMT is first trained for a wide SNR range from −10 to 20 dB for 250 epochs, causing the CU to be generalized and then the CU is frozen and the SUs are further trained for the smaller target SNR range of 9 to 11 dB for additional 250 epochs. It can be seen in Fig. 5 that this approach, named “CCMT-generalized-CU”, performs equally, or even slightly better than the case where the whole CCMT is trained for a small SNR range. The validation SNR for all results in Fig. 5 is set to 10 dB. We conclude that the generalization of the CU has the advantage that only retraining of the SUs is required to deal with different channel conditions for task 1 and task 2. Therefore, we use the CCMT-generalized-CU, where the CU is trained for 250 epochs for an SNR range of −10 to 20 dB, and . . e t a r r o r r e n o i t u c e x e k s a T 10−1 10−2 STC CCMT CCMT-generalized-CU . 
Fig. 5: Task execution error rate over the number of training epochs of task 1 and task 2 for different training scenarios. (Plot omitted; curves for STC, CCMT, and CCMT-generalized-CU.)

C. Impact of Different Channel Conditions

Further, the CCMT and STC are compared for different SNR values, in Fig. 6a without and in Fig. 6b with image rotation, where each partial observation is individually rotated by an angle uniformly distributed between ±30°. For this simulation, multiple SU models are trained for specific SNR ranges for the CCMT-generalized-CU and the STC, where in the evaluation process the best model is selected for each SNR. These are indicated by "multi-model (best selected)" for both cases in Fig. 6. In total, 11 models are trained, with SNR ranges in dB: [−12, −10], [−9, −7], [−6, −4], [−3, −1], [0, 2], [3, 5], [6, 8], [9, 11], [12, 14], [15, 17], and [18, 20]. Moreover, the multi-model cases are compared with the "single-model" cases, where only a single model is trained for the SNR range of −10 to 20 dB for the CCMT and the STC.

Fig. 6a shows that the multi-model cases (CCMT-generalized-CU and STC) outperform the single-model cases (CCMT and STC) over the whole SNR range for task 1 and for higher SNR values for task 2. Next, the CCMT outperforms the STC in both the single- and multi-model cases for higher SNR values. However, for lower SNR values, the CCMT and STC achieve almost the same task execution error rate, as the gain in performance from the CU in cooperative task processing is minimal compared to the errors introduced by the poor channel conditions.

Fig. 6: Task execution error rate over the SNR (a) without image rotation, and (b) with image rotation. (Plots omitted; legend: STC single-model, CCMT single-model, STC multi-model (best selected), CCMT-generalized-CU multi-model (best selected).)

Compared to Fig. 6a, Fig. 6b shows larger gaps in task error rate between the CCMT and STC architectures for both tasks in both the single- and multi-model cases. This indicates that the CU's cooperative processing of multiple tasks is more advantageous for more challenging datasets.

D. Impact of Different NN Sizes

Finally, the CCMT-generalized-CU and STC architectures are compared for different numbers of NN parameters. The task execution error rates are shown in Fig. 7 over the number of parameters of the CNN layers of each sensing unit for the dataset with rotation. For this simulation, we consider different channel conditions for task 1 and task 2: for task 1 the validation SNR is 5 dB with an SNR training range from 4 to 6 dB, and for task 2 the validation SNR is 10 dB with an SNR training range from 9 to 11 dB.

Fig. 7: Task execution error rate over the number of trainable CNN layer parameters. For task 1 the SNR is 5 dB, and for task 2 the SNR is 10 dB. (Plot omitted.)
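The "multi-model (best selected)" curves in Figs. 6 and 7 follow a simple selection rule over the per-range models; a minimal Python sketch of that evaluation logic (ours; the `evaluate` callback is a hypothetical stand-in) is:

```python
SNR_RANGES_DB = [(-12, -10), (-9, -7), (-6, -4), (-3, -1), (0, 2),
                 (3, 5), (6, 8), (9, 11), (12, 14), (15, 17), (18, 20)]

def best_selected_error(models, eval_snr_db, evaluate):
    """`models` holds one SU model trained per SNR range in SNR_RANGES_DB;
    `evaluate(model, snr_db)` returns its task execution error rate.
    For each evaluation SNR, the best-performing model is reported."""
    return min(evaluate(m, eval_snr_db) for m in models)
```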
The number of convolution filters is increased from $c_1 = 4$, $c_2 = 2$, $c_3 = 2$ and $k_1 = 2$, $k_2 = 2$, $k_3 = 2$, resulting in 190 and 192 parameters, to $c_1 = 14$, $c_2 = 13$, $c_3 = 8$ and $k_1 = 11$, $k_2 = 10$, $k_3 = 8$, resulting in 3679 and 3676 parameters, for the CCMT and the STC, respectively. The number of convolution filters in the final CNN layer, $c_3$ and $k_3$, is always the same for the CCMT and STC. We note that CCMT and STC are trained for 500 epochs in total for each case.

It can be seen that the CCMT saves a significant amount of computing resources compared to the STC. For example, for task 2, an error rate of about 0.12 requires about 1900 NN parameters for the STC, while the CCMT requires only about 611. This is illustrated by the dashed line in Fig. 7. Moreover, we observe that, in general, increasing the number of parameters decreases the error rate in all cases, and the gap between the CCMT and STC stays relatively constant over the investigated parameter range.

IV. CONCLUSION

We introduced the CCMT architecture based on an information-theoretic perspective, combining the cooperative processing of multiple tasks with the collaborative processing of distributed observations. We considered different training methods and channel conditions. Simulation results showed that the proposed CCMT architecture lowers the task execution error rate compared to the STC approach, specifically for more challenging datasets and better channel conditions. Finally, it was shown that the advantage of the CCMT holds for different NN sizes.

REFERENCES

[1] A. Halimi Razlighi, C. Bockelmann, and A. Dekorsy, "Semantic communication for cooperative multi-task processing over wireless networks," IEEE Wireless Communications Letters, vol. 13, no. 10, pp. 2867–2871, 2024.
[2] "Beyond transmitting bits: Context, semantics, and task-oriented communications," IEEE Journal on Selected Areas in Communications, vol. 41, pp. 5–41, 11 2022.
[3] W. Tong and G. Y. Li, "Nine challenges in artificial intelligence and wireless communications for 6G," IEEE Wireless Communications, vol. 29, no. 4, pp. 140–145, 2022.
[4] M. Sana and E. C. Strinati, "Learning semantics: An opportunity for effective 6G communications," Institute of Electrical and Electronics Engineers Inc., 2022, pp. 631–636.
[5] D. Wheeler and B. Natarajan, "Engineering semantic communication: A survey," IEEE Access, vol. 11, pp. 13 965–13 995, 2023.
[6] W. Weaver, "Recent contributions to the mathematical theory of communication," ETC: A Review of General Semantics, pp. 261–281, 1953.
[7] H. Xie, Z. Qin, G. Y. Li, and B. H. Juang, "Deep learning enabled semantic communication systems," IEEE Transactions on Signal Processing, vol. 69, pp. 2663–2675, 2021.
[8] H. Xie and Z. Qin, "A lite distributed semantic communication system for internet of things," IEEE Journal on Selected Areas in Communications, vol. 39, pp. 142–153, 1 2021.
[9] L. Yan, Z. Qin, R. Zhang, Y. Li, and G. Y. Li, "Resource allocation for text semantic communications," IEEE Wireless Communications Letters, vol. 11, pp. 1394–1398, 7 2022.
[10] H. Tong, Z. Yang, S. Wang, Y. Hu, W. Saad, and C. Yin, "Federated learning based audio semantic communication over wireless networks," Institute of Electrical and Electronics Engineers Inc., 2021.
[11] Y. Wang, M. Chen, W. Saad, T. Luo, S. Cui, and H. V. Poor, "Performance optimization for semantic communications: An attention-based learning approach," in 2021 IEEE Global Communications Conference (GLOBECOM), 2021, pp. 1–6.
[12] J. Shao, Y. Mao, and J. Zhang, "Learning task-oriented communication for edge inference: An information bottleneck approach," IEEE Journal on Selected Areas in Communications, vol. 40, no. 1, pp. 197–211, 2022.
[13] J. Shao, Y. Mao, and J. Zhang, "Task-oriented communication for multidevice cooperative edge inference," IEEE Transactions on Wireless Communications, vol. 22, no. 1, pp. 73–87, 2023.
[14] E. Beck, C. Bockelmann, and A. Dekorsy, "Semantic information recovery in wireless networks," Sensors, vol. 23, p. 6347, 7 2023. [Online]. Available: https://www.mdpi.com/1424-8220/23/14/6347
[15] H. Xie, Z. Qin, X. Tao, and K. B. Letaief, "Task-oriented multi-user semantic communications," IEEE Journal on Selected Areas in Communications, vol. 40, no. 9, pp. 2584–2597, 2022.
[16] G. He, S. Cui, Y. Dai, and T. Jiang, "Learning task-oriented channel allocation for multi-agent communication," IEEE Transactions on Vehicular Technology, vol. 71, no. 11, pp. 12 016–12 029, 2022.
[17] Y. Sheng, F. Li, L. Liang, and S. Jin, "A multi-task semantic communication system for natural language processing," in 2022 IEEE 96th Vehicular Technology Conference (VTC2022-Fall), 2022, pp. 1–5.
[18] Y. E. Sagduyu, T. Erpek, A. Yener, and S. Ulukus, "Multi-receiver task-oriented communications via multi-task deep learning," in 2023 IEEE Future Networks World Forum (FNWF), 2023, pp. 1–6.
[19] M. Gong, S. Wang, and S. Bi, "A scalable multi-device semantic communication system for multi-task execution," in GLOBECOM 2023 – 2023 IEEE Global Communications Conference. IEEE, 2023, pp. 2227–2232.
[20] D. P. Kingma and M. Welling, "Auto-encoding variational Bayes," arXiv preprint arXiv:1312.6114, 2013.
[21] A. A. Alemi, I. Fischer, J. V. Dillon, and K. Murphy, "Deep variational information bottleneck," arXiv preprint arXiv:1612.00410, 2016.
[22] C. M. Bishop, Pattern Recognition and Machine Learning. Springer, 2006.
[23] L. Deng, "The MNIST database of handwritten digit images for machine learning research," IEEE Signal Processing Magazine, vol. 29, no. 6, pp. 141–142, 2012.
ai_researcher
2
Unlocking_AI_Creativity_A_Multi-Agent_Approach_with_CrewAI.pdf
arXiv:2411.12527v2 [cs.HC] 24 Nov 2024

Human-AI Co-Creativity: Exploring Synergies Across Levels of Creative Collaboration

Jennifer Haase
Weizenbaum Institute and Humboldt University
Berlin, Germany
[email protected]

Sebastian Pokutta
TU Berlin and Zuse Institute Berlin
Berlin, Germany
[email protected]

November 2024

1 Introduction

Integrating generative AI into creative work signifies a profound shift in how humans engage with digital tools to create. We are entering an era where AI systems do more than support human creativity: they actively participate in co-creative processes, which we refer to as Human-AI Co-Creativity (Colton and Wiggins, 2012; Serbanescu and Nack, 2023). Some creative tasks can now be fully automated, which becomes evident, for example, with the generative fill function in Photoshop (see also Adobe Firefly), code generation in IT (Tian et al., 2023), or character design in video games (Janson et al., 2023). These examples demonstrate generative AI's potential to enhance human creativity, which some argue is the current limit of existing generative AI tools (e.g., Marrone et al. 2024). However, we argue that Human-AI Co-Creativity has the potential to enhance human creative capabilities through the integration of (generative) AI tools, systems, and agents far beyond what is currently common for (non-enhanced) human creativity. This paradigm shift demands a deeper understanding of these co-creative interactions, associated challenges, and the requirements for (generative) AI augmentation (Melville et al., 2023).

Improving individual human creative skills and performance is one of the cornerstones of creativity research, with various techniques and manipulation methods being tested (Haase et al., 2023b; Sio and Lortie-Forgues, 2024). As human lives increasingly shift into the digital realm, these techniques are naturally becoming increasingly digital as well (Bereczki and Kárpáti, 2021; Rafner et al., 2023). Generative AI tools bring a whole new level and potential of competence increase (Rafner et al., 2023), offering "human-like" communication skills while at the same time providing much-improved, beyond-human knowledge and information processing skills (see, e.g., GPT-4, OpenAI 2023), at least in certain respects. As with all forms of digitization, there is a risk of losing skills versus the chance of gaining more efficiency and output quality through digital support (Parasuraman et al., 2000). In the context of creative work, the maximum benefit of AI will be derived where its focus is human-centric and it is designed to enhance, rather than replace, human creativity (Anantrasirichai and Bull, 2022).

"It's not a human move. I've never seen a human play this move. So beautiful."
— Fan Hui, then-European champion, commenting on a game between AlphaGo and Lee Sedol

However, the potential for genuine AI creativity emerged much earlier, with a striking example being DeepMind's AlphaGo defeating world champion Lee Sedol in Go in 2016. AlphaGo first learned from historical match data, then honed its skills by playing millions of games against itself as well as against human experts. This event is often regarded as a cornerstone in recognizing AI's creative capabilities, which, in hindsight, turn out not to be merely isolated anomalies but precursors of the broader creative possibilities that AI systems offer.
Coincidentally, these human players also significantly improved their own proficiency at Go while training the AlphaGo system; see Metz (2016) for a detailed account. We consider this a prime example of the human creative advancement achieved through training and working with AI engines, i.e., the interactions with AI systems have a lasting impact on the user in terms of creative improvement, beyond the time of interaction.

Integrating generative AI tools into creative processes presents an opportunity to advance human creative performances collaboratively. By focusing on augmenting rather than replacing human creativity, these tools can help overcome the limitations of traditional methods and push the boundaries of what is creatively possible. In this chapter, we will discuss the evolution of creativity support through digital tools, moving from simple digital aids to partially automated tools, culminating in collaboration between humans and generative AI tools. First, we elaborate on the "inherent" creative potential of (generative) AI tools, which we posit to be a requirement for actual co-creativity. Then, we differentiate between different forms of digital tool support. By presenting concrete examples from mathematics for varying levels of human-AI co-creative interactions, we will illustrate how the co-creative process with generative AI can significantly advance the creative outcome, achieving new results often with creative twists beyond previously known approaches and, due to their high irregularity, unlikely to be found by human creativity alone.

2 Creativity of Generative AI tools

For a system to be considered autonomously creative, it must possess the potential for creative action, such as generating novel ideas or solutions independently, without human intervention (Jennings, 2010). This then points to the question of the inherent creativity of generative AI tools. Machine learning serves as the cornerstone for such a form of creativity, providing the capability for algorithms to learn, adapt, and respond in a manner that can be deemed "intelligent", and thus, potentially, creative (Mateja and Heinzl, 2021).

However, the debate surrounding the "true" creativity of technical systems transcends scientific inquiry and becomes a philosophical debate about appearing vs. being. This discourse revolves around the potential limitations of generative AI, with some viewpoints suggesting that AI's reliance on pre-existing data would confine it to only displaying "incremental creativity", thus questioning the depth and authenticity of its creative output (Boden, 2009; Cropley and Cropley, 2023). Particularly in non-scientific literature, there is a prevalent notion that only humans, with their unique capacity for emotions and empathy, could exhibit true creativity (Joshi, 2022; White, 2023). This perspective is echoed by Runco (2023), who suggests that the process of creativity in AI, being fundamentally different from the human approach, can only result in what could be termed "artificial creativity". We do not share such notions of diminishing the creative output from artificial agents. As we move from the philosophical to the practical, we can see empirical evidence for significantly increased creativity both in the output of (generative) AI tools and agents and in human output produced in collaboration with generative AI tools.
Large language models (LLMs), for example, are specifically designed to balance factual precision with creative expression, incorporating elements of flexibility and randomness that allow generating content perceived as original and inventive (Sinha et al., 2023). These models leverage vast datasets and complex algorithms to synthesize information in novel ways, resulting in outputs that emulate human-like creativity and demonstrate the potential for independent creative thought within specific domains (Rafner et al., 2023). Empirical studies further support the inherent creativity of AI systems. Standardized creativity tests, traditionally used to measure human creativity, have been adapted to evaluate the outputs of generative AI. The results are striking, with AI-generated content sometimes matching or even exceeding human performance in tasks that measure everyday originality and elaboration (Gilhooly, 2023; Guzik et al., 2023; Haase and Hanel, 2023). Moreover, AI-generated outputs have proven convincing enough in practical scenarios to even fool experts as to whether content was created by humans or AI (e.g., with scientific abstracts, Else 2023; with artificially generated art, Haase et al. 2023a), one of the most substantial possible benchmarks. This evidence underscores the argument that generative AI tools possess inherent creativity, characterized by their ability to autonomously produce novel and valuable output and pass the test of being indistinguishable from human output.

3 From digital tools to AI

Throughout history, tools have been essential to human creativity. Naturally, since the advent of computers, this creative work has increasingly moved into the digital domain. For example, every text editor enables and supports creative writing. While some tools transfer the creative task into the digital, others are designed to engage more actively in the creative process (cf. Table 1). We categorize such digital tools into four distinct types. The first is a Digital Pen akin to creative support systems (CSS), which aid human creativity without directly contributing creative input, just like a painting program provides a digital brush to an artist (Shneiderman, 2007). The second type is an AI Task Specialist, which is an independent AI system (often a generative one) that operates autonomously without human intervention (apart from the initial input). Examples include non-deterministic algorithms that generate art via generative adversarial neural networks (Hitsuwari et al., 2023) or algorithms that advance game development (Almeida et al., 2023). The third type is a Creative Assistant, a generative AI tool that supports and enhances various aspects of a human-driven creative process, often in an interactive way. Current generations of LLMs, such as ChatGPT, Gemini, or Llama, are prime examples of that category. Users can flexibly use such tools to support their brainstorming tasks (e.g., Fui-Hoon Nah et al. 2023) or concrete problem-solving tasks such as coding (e.g., Dell'Aversana 2023). The fourth level, most pertinent to this discussion, is co-creative systems, which we dub AI Co-Creators. Here, humans and (generative) AI tools collaborate, each contributing to the creative process. Ideally, such a system adapts flexibly to the user's needs, can solve complex, open-ended problems, and contributes input in a measurable and meaningful way to the co-creative process with the human user.
The four levels indicate the degree of interaction between the user and the tool, depending on how creatively competent and potentially autonomous the tool can act. To demonstrate the varying levels of AI-human interaction in creative processes, we turn to examples from the field of mathematics. We chose mathematics because it allows for objective evaluation of creativity in terms of newness and usefulness; this is in contrast to "subjective disciplines" where a direct attribution of usefulness can sometimes be difficult. Although often perceived as rigid, mathematics is inherently creative, demanding innovative approaches to solve complex problems and develop elegant proofs. The study of creativity itself draws from mathematical insights, as evidenced by Wallas (1926), whose model of the creative process is rooted in earlier work by mathematicians like Poincaré and Newman (1908) and echoed in Hadamard's later contributions (1954).

In the following, we will present the four levels of human-tool interaction, with three examples for Levels 2–4 from mathematics demonstrating Human-AI Co-Creativity at various complexity levels. For Level 1, the Digital Pen, basically every general-purpose collaboration tool, like email, Slack, Discord, or Github, would be an example of how researchers communicate and coordinate their creative work. We deem this rather known and familiar to the reader and, for the sake of brevity, do not provide further examples. For the other examples, we will briefly describe the underlying mathematical problem for the non-expert. We apologize to the expert readers for the simplification here, which is necessary to keep the exposition on point and not to deviate into technical details. Moreover, we focus on three examples from the second author's research. We stress that this might add a particular anecdotal component to the discussion. Indeed, there is a vast body of work in mathematics using AI systems on various levels to achieve new results.

Table 1: Four levels of human-tool interaction

Level 1: Digital Pen
  Description: Digital tool that facilitates the conversion of traditional creative processes into digital formats.
  Example: Classical CSS.
  Tool contribution: Digitalizing creative work, improving knowledge transfer and communication.
  Breakdown of contribution: Basic assistance in digitalizing traditional creative content.

Level 2: AI Task Specialist
  Description: AI tool that augments creative tasks, operating with structured guidance and user input.
  Example: Generative Autofill by Adobe Firefly.
  Tool contribution: Automation of creativity based on strong guardrails and user prompting.
  Breakdown of contribution: Moderate augmentation in specific creative tasks.

Level 3: AI Assistant
  Description: Generative AI tool that enhances everyday creativity, working within the scope of its training data and user prompts.
  Example: Current LLMs like GPT-4 or Midjourney.
  Tool contribution: Creative on an everyday-creativity level, limited to training data; based on user prompting.
  Breakdown of contribution: Significant enhancement in shaping the final creative product.

Level 4: AI Co-Creator
  Description: Generative AI tool that generates original ideas and engages in creative dialogue, adapting within set ethical and creative boundaries.
  Example: Domain-specific examples exist.
  Tool contribution: Equal collaborator; original and useful contribution to a shared creative process; argues with a user; based on meta-calibration and intent within broader guardrails.
  Breakdown of contribution: Synergistic partnership with equal input on creative outcomes.
However, it also provides us with a higher degree of introspection into the creative process that is usually unavailable, as the focus is on reporting results and not processes.

3.1 Level 1: Digital Pen

The first level represents the traditional approach of how information systems have long supported humans in their creative processes, with CSS evolving from simple digital tools to complex systems that offer collaborative support and process guidance (Müller-Wienbergen et al., 2011; Voigt, 2014). These systems have transitioned from mimicking traditional tools to providing process support by integrating advanced knowledge and communication management features (Frich et al., 2018; Voigt, 2014). Such tools digitalize and simplify individual or group processes, support the collection, editing, and visualization of human-generated ideas (Olszak and Kisielnicki, 2018; Voigt, 2014), but do not address the essence of the creative process itself. Although effective in facilitating creativity, these systems remain tools rather than active contributors to the creative process.

Only with tools integrating some form of (generative) AI can some degree of inherent creativity be assumed to emerge; otherwise, no such entity can contribute to the creative process. AI has the potential to process information, aggregate knowledge, and generalize beyond its training data, with the possibility of exceeding human competencies and capacities. The idea of CSS, being support systems for the idea generation process, has so far only been realized in a relatively weak form. However, with the advent of artificial intelligence, a paradigm shift, similar to what has been observed in other disciplines, is emerging: machine-learning algorithms in AI systems can create content and, with that, potentially creative output (Seidel et al., 2020). These content-creation functions can either be used to substitute parts of the originally human-only creative process (Level 2) or to support and augment various aspects of the creative process (Level 3).

3.2 Level 2: AI Task Specialist

In Level 2 interactions, the human defines the creative problem by specifying parameters and constraints, while the AI performs complex computations at a scale and speed unattainable by the human alone. The AI serves as a highly efficient tool, extending the human's creative capacity by executing tasks that would otherwise limit exploration due to their complexity or resource constraints. The human remains the primary source of creative insight, with the AI operating within clearly defined boundaries. This interaction is characterized by a high degree of human control over the creative outcome, with AI functioning as an enhancer of human capabilities.

Advancements in rapid and efficient data processing, as seen in tools like Adobe Firefly, exemplify the capabilities of Level 2 systems. These systems enable quick information generation, such as visual auto-fill functions, where AI can extend or substitute parts of a picture with generated content, allowing the user to iterate faster and explore a broader range of ideas. While such tools demonstrate an inherent, albeit rudimentary, form of creativity by generating new and potentially useful content, their creativity is largely incremental, as described by Cropley and Cropley (2023). The user's interaction remains limited to a specific creative task, and the AI operates under restricted parameters, offering only partial creative autonomy.
Math example: New Bell inequalities

A central question in quantum physics, particularly quantum mechanics, is to decide whether a given state exhibits quantum behavior or is just a classical state in disguise. Strongly related to this question are, for example, the central questions for several of today's quantum computer designs: are they actually quantum computers or just classical ones in complicated designs? To prove that a state is genuinely non-classical, physicists typically devise a series of clever measurements that exhibit behavior that cannot be explained with classical physics; there are also ways of proving that a state is classical, via so-called local models. This approach, and the associated concept of non-locality, has been central to establishing the existence of quantum effects, dating back to the famous work of Bell (1964) that resolved the Einstein-Podolsky-Rosen paradox by providing measurements (so-called Bell inequalities) proving that the experiment of Einstein et al. (1935) exhibits true quantum entanglement and associated quantum effects. However, once the states that need to be analyzed become more complex, and might even live in a very low dimension, the required insight into the underlying structure of physics and the necessary creative design of such measurements is tough to achieve. In Designolle et al. (2023), an AI system was devised, predominantly relying on so-called Frank-Wolfe methods (Braun et al., 2023), to support the user in the effort to devise new measurement strategies for complex states. Here, to compute new Bell inequalities for previously unstudied states, the human user specifies the state and all other system parameters, and the AI system then performs a large and complex series of computations (typically weeks on high-performance compute clusters) to compute a series of measurements and the associated (new) Bell inequality. The user then verifies this inequality via straightforward calculations.

All creative input in this example comes from the researcher, with the AI system providing highly specialized computations at extreme speed and scale. The AI augments the user's creative capabilities by enabling large-scale exploration but does not generate creative output beyond the predefined task specification. Designolle et al. (2023) were able to derive a wide range of new Bell inequalities for many important scenarios.
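To give a flavor of what such a measurement-based test looks like in the simplest setting, the following Python sketch (our illustration; it does not reproduce the Frank-Wolfe machinery of Designolle et al. 2023) evaluates the classic CHSH form of a Bell inequality for a two-qubit singlet state. Any classical local model satisfies |S| ≤ 2, while suitable quantum measurements reach 2√2:

```python
import numpy as np

def chsh_value(a, a2, b, b2):
    """CHSH expression S = E(a,b) + E(a,b') + E(a',b) - E(a',b') for a singlet
    state, whose correlations are E(x, y) = -cos(x - y) for measurement angles."""
    E = lambda x, y: -np.cos(x - y)
    return E(a, b) + E(a, b2) + E(a2, b) - E(a2, b2)

# Optimal angles give |S| = 2 * sqrt(2) ~ 2.828, violating the classical bound of 2.
print(abs(chsh_value(0.0, np.pi / 2, np.pi / 4, -np.pi / 4)))
```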
For example, the breakthrough of GPT version 3.5 from OpenAI, along with its wider acceptance, occurred when an intuitive chat-based conversational front-end was introduced; a form of unhobbling (essentially re- moving the handbrakes of highly potent models, Aschenbrenner 2024). However, current LLMs are designed with specific data sources and generalization capabil- ities, which, while robust, are guided by carefully implemented restrictions and guardrails. These measures, though occasionally limiting, are essential to ensur- ing the responsible and ethical use of AI, ultimately enhancing the safety and reliability of the creative process. In addition, hallucinations of factual wrong content are common for LLMs (Jesson et al., 2024), which, however, might not be as relevant for the generation of new creative output compared to the more mundane generation of factually correct essays or reports. It might help you be- come a great artist, but not necessarily in your homework assignment. In fact, hallucinations might even improve their creative potential to some extent. Math example: New Ramsey Multiplicity Bounds A central challenge in graph theory (a graph consisting of nodes and edges) is to understand how often specific subgraphs, like cliques (“everyone knows everyone”) or independent sets (“no one knows anyone”), can appear within larger graphs. This problem is closely tied to classical questions posed by Erd˝os, which have driven much of the research in this area. For instance, determining the frequency of cliques of four or five nodes in larger structures is crucial for understanding the broader behavior of graphs. Researchers often rely on sophis- ticated mathematical tools and intricate constructions to tackle these questions. In Parczyk et al. (2024), an AI system was designed to resolve a longstanding problem about the minimum number of independent sets of size four in graphs where the largest complete subgraph has at most four nodes. The obtained con- structions with sizes of around 800 nodes and more are usually beyond what can be achieved with ad-hoc methods. The AI system designed for this task in Parczyk et al. (2024) employs ad- vanced search heuristics to discover new constructions. Here, the creative po- tential is already shared between the human and the AI system. While the user specifies the requirements for the type of construction needed, the AI sys- tem delivers the actual construction. The correctness of the construction can then be verified by the human. However, the power of the interaction between humans and AI systems goes beyond mere constructions. It also reveals that op- timal constructions are stable and repeatable, giving insight into the underlying structure. 8 3.4 Level 4: The AI Co-Creator At Level 4, Human-AI Co-Creativity represents a fusion of human creativity with advanced AI capabilities, where both entities contribute significantly to a shared creative product (Davis, 2013). In such systems, the inputs and outputs of humans and AI blend seamlessly, resulting in a synergistic creative process that transcends traditional boundaries of human or machine creativity. This co-creative dynamic fundamentally alters the nature of the creative process by positioning the AI not merely as a tool but as an active participant—an ”equal”—in the creative process. 
Like traditional co-creativity among humans, effective Human-AI collaboration relies on shared goals, diverse perspectives, and extensive communication, ensuring that the strengths of both human cre- ativity and AI are fully leveraged (Paulus et al., 2012). At this level, AI and humans operate in true co-creative synergy. The AI is capable of independently generating creative outputs—such as new, highly non- intuitive solutions—that go beyond the scope of human preconceptions. The human and AI continuously interact, with the AI generating novel solutions based on minimal input and the human refining and integrating these into the broader creative context. In this form of interaction, AI becomes an equal cre- ative partner, contributing original and meaningful input that the human alone may not achieve. This level represents the full realization of Human-AI Co- Creativity, where both entities’ contributions are equally essential for creative breakthroughs. In this co-creative process, the role of human creators is elevated, requiring them to possess not only creative skills but also a deep understanding of how to effectively interact with AI co-creators. Human creators must be adept at framing creative problems in ways that are compatible with AI’s strengths, ensuring that the AI’s contributions align with the creative goals. Additionally, human creators need to evaluate and refine the partial results generated by the AI, applying principles such as the MAYa principle (Most Advanced Yet accessible), which, in turn, is based on the well-known MAYA principle (Most Advanced Yet Acceptable; see, e.g., Hekkert et al. 2003), to ensure that the AI’s outputs are novel yet accessible to the human user. The principles of interaction in Human-AI Co-Creativity are critical to the success of the collaboration. Shneiderman (2020) argues that human-centered AI should be designed to support and enhance human activities, including cre- ativity. He proposes several key concepts to guide the development of these systems: First, maintaining a balance between human oversight and automated operations is essential. This ensures that, while AI provides substantial creative contributions, humans retain control over the final output, preserving the in- tegrity of the creative process. Second, AI co-creators should be designed to augment human capabilities, acting as powerful agents that enhance creativ- ity rather than merely mimicking human skills. Thus, at this advanced level of co-creativity, AI becomes a fully integrated creative partner, contributing ideas that would not emerge through human effort alone. 9 (a) 9-coloring of the plane (b) 8-coloring of the plane (c) 7-coloring of the plane (d) 7-coloring of the plane (alternative) Figure 1: Known colorings of the plane Math example: New Colorings of the Plane A central question in combinatorial geometry is the Hadwiger-Nelson problem, which asks for the minimum number of colors required to color the points of a plane so that no two points at a unit distance share the same color. This number, known as the chromatic number of the plane, has intrigued mathematicians for decades; see Soifer (2024) for an overview. Recent advancements in this area focus on extending the continuum of valid distances for six colors of the plane. For this purpose, researchers have to construct colorings of the plane with the required properties; see, e.g., Figure 1 for a few examples of colorings of the plane. 
New colorings that go beyond those presented in Figure 1 are very hard to find and require a high degree of ingenuity and creativity. There has not been any significant progress for the last 30 years. Then, in recent work in Mundinger et al. (2024), two new six-colorings that avoid monochromatic pairs of points at a unit distance for the first five colors and another specified distance d for the sixth color were presented, which were obtained through a customized AI approach. While not entirely a Level 4 system yet, due to its particular purpose, in contrast to the previously mentioned examples, the generative AI system only gets the requirements that a correct coloring needs to satisfy as an input. Then, the system is trained to explore and identify new colorings and to construct and evaluate new colorings efficiently. This led to the discovery of the two aforementioned new six-colorings satisfying the modified requirement regarding the sixth color, significantly expanding the known range for these colorings. Moreover, the obtained colorings (see Figure 2) are highly non-intuitive and creative, breaking the highly symmetric patterns of previous colorings found by humans via trial-and-error, intelligent guessing, and ad-hoc approaches (cf. Figure 1). As before and customary in mathematics, the obtained colorings were then verified and post-processed by a human.

Figure 2: Two new 6-colorings obtained via Human-AI Co-Creativity. (Panels: (a) 0.354 ≤ d ≤ 0.553, (b) 0.418 ≤ d ≤ 0.657.)

4 Discussion

The implications of AI in creative work are multifaceted and far-reaching. As Cremer et al. (2023) outline, AI might take several plausible paths to disrupt creative work. Firstly, AI could lead to an explosion of AI-assisted innovation, enhancing human creativity without necessarily replacing it. This democratization of innovation is exemplified by tools like GitHub's Copilot, which aids in coding by providing real-time suggestions that augment human efforts (Cambon et al., 2023; Eapen et al., 2023). Secondly, there is the potential for AI to monopolize creativity in specific fields, such as game design, where AI-generated art increasingly replaces human designers (Christofferson et al., 2023). Lastly,
Unlike traditional CSS, which facilitates the creative process primarily through knowledge processing and communication, generative AI systems possess the unique capacity to generate creative output independently. This marks a proactive step in the co-creative process, suggesting that AI can contribute in previously unimaginable ways. However, this potential comes with challenges. A central question that mir- rors debates about intelligence concerns the system boundaries we draw around creativity. Just as we ask, “What is intelligent?” we must also ask, “What is creative?”. Is it the human using the tools, the tools themselves, or the syner- getic combination of both? This question is critical because it determines how we assess the creativity of outputs in human-AI collaboration. If creativity is seen as emerging solely from the human, then AI’s role is merely supportive. If, however, creativity is understood as a product of the combined efforts of humans and AI, then the co-creative process must be evaluated on its own terms, ac- knowledging the unique contributions of each entity. As humans use co-creative agents more intensely for their creative work, the risk of over-reliance on AI should not be overlooked. While AI can generate novel ideas and solutions that may not emerge from human creativity alone, there is a danger that excessive dependence on AI could undermine the unique aspects of human creativity, such as emotional depth, moral reasoning, and contextual awareness. This potential over-reliance emphasizes the importance of designing AI systems that support and amplify human creativity rather than diminish it. In conclusion, integrating AI into creative work comes with scaling opportu- nities that are unheard of for creative advancements. The future of Human-AI Co-Creativity will hinge on balancing the enhancement, rather than substitu- tion, of human creativity. Moving forward, the development of AI systems should focus on fostering collaboration rather than competition, enabling a harmonious fusion of human and machine creativity that pushes the boundaries of what is creatively possible. The concrete examples from the math field show us what is already possible in concise domains. Following the logic of the growth of gen- 12 erative AI tools in terms of efficiency, competencies, and generalizability, such co-creative efforts are expected to be possible in other domains soon. Acknowledgments Jennifer Haase’s work was supported by the German Federal Ministry of Edu- cation and Research (BMBF), grant number 16DII133 (Weizenbaum-Institute). Part of this work was conducted while Sebastian Pokutta was visiting Tokyo University via a JSPS International Research Fellowship. The authors would like to thank Christoph Spiegel for providing images of the colorings and Thomas Grisold for helpful comments on an early draft which significantly improved the exposition. References Almeida, P., Carvalho, V., and Sim˜oes, A. (2023). Reinforcement Learning Ap- plied to AI Bots in First-Person Shooters: A Systematic Review. Algorithms, 16(7):323. Number: 7 Publisher: Multidisciplinary Digital Publishing Insti- tute. Anantrasirichai, N. and Bull, D. (2022). Artificial intelligence in the creative industries: a review. Artificial Intelligence Review, 55(1):589–656. Aschenbrenner, L. (2024). Situational Awareness - The Decade Ahead. https://www.forourposterity.com/situational-awareness-the-decade-ahead/. Bell, J. S. (1964). On the Einstein Podolsky Rosen paradox. 
Physics Physique Fizika, 1(3):195–200.

Bellaiche, L., Shahi, R., Turpin, M. H., Ragnhildstveit, A., Sprockett, S., Barr, N., Christensen, A., and Seli, P. (2023). Humans versus AI: whether and why we prefer human-created compared to AI-created artwork. Cognitive Research: Principles and Implications, 8(1):42.

Bereczki, E. O. and Kárpáti, A. (2021). Technology-enhanced creativity: A multiple case study of digital technology-integration expert teachers' beliefs and practices. Thinking Skills and Creativity, 39:100791.

Boden, M. A. (2009). Computer Models of Creativity. AI Magazine, 30(3):23–23.

Braun, G., Carderera, A., Combettes, C. W., Hassani, H., Karbasi, A., Mokhtari, A., and Pokutta, S. (2023). Conditional Gradient Methods. arXiv:2211.14103 [math].

Cain, S. (2024). First artwork painted by humanoid robot to sell at auction fetches $1m. The Guardian.

Cambon, A., Hecht, B., Edelman, B., Ngwe, D., Jaffe, S., Heger, A., Vorvoreanu, M., Peng, S., Hofman, J., Farach, A., Bermejo-Cano, M., Knudsen, E., Bono, J., Sanghavi, H., Spatharioti, S., Rothschild, D., Goldstein, D. G., Kalliamvakou, E., Cihon, P., Demirer, M., Schwarz, M., and Teevan, J. (2023). Early LLM-based Tools for Enterprise Information Workers Likely Provide Meaningful Boosts to Productivity. Published by Microsoft.

Christofferson, A., James, A., Rowland, T., and Rey, I. (2023). How Will Generative AI Change the Video Game Industry?

Colton, S. and Wiggins, G. A. (2012). Computational creativity: The final frontier? In Ecai, volume 12, pages 21–26. Montpelier.

Cremer, D. D., Bianzino, N. M., and Falk, B. (2023). How Generative AI Could Disrupt Creative Work. Harvard Business Review.

Cromwell, J. R., Harvey, J.-F., Haase, J., and Gardner, H. K. (2023). Discovering Where ChatGPT Can Create Value for Your Company. Harvard Business Review.

Cropley, D. and Cropley, A. (2023). Creativity and the Cyber Shock: The Ultimate Paradox. The Journal of Creative Behavior.

Davis, N. (2013). Human-Computer Co-Creativity: Blending Human and Computational Creativity. Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment, 9(6):9–12.

Dell'Aversana, P. (2023). GPT-3: a new cooperation scenario between humans and machines. Benefits and limitations of GPT-3 as a coding virtual assistant.

Designolle, S., Iommazzo, G., Besançon, M., Knebel, S., Gelß, P., and Pokutta, S. (2023). Improved local models and new Bell inequalities via Frank-Wolfe algorithms. Physical Review Research, 5(4):043059.

Doshi, A. R. and Hauser, O. P. (2024). Generative AI enhances individual creativity but reduces the collective diversity of novel content. Science Advances, 10(28):eadn5290.

Eapen, T. T., Finkenstadt, D. J., Folk, J., and Venkataswamy, L. (2023). How Generative AI Can Augment Human Creativity. Harvard Business Review.

Einstein, A., Podolsky, B., and Rosen, N. (1935). Can Quantum-Mechanical Description of Physical Reality Be Considered Complete? Physical Review, 47(10):777–780.

Else, H. (2023). Abstracts written by ChatGPT fool scientists. Nature.

Frich, J., Mose Biskjaer, M., and Dalsgaard, P. (2018). Twenty Years of Creativity Research in Human-Computer Interaction: Current State and Future Directions. In Proceedings of the 2018 Designing Interactive Systems Conference, DIS '18, pages 1235–1257, New York, NY, USA. Association for Computing Machinery.

Fui-Hoon Nah, F., Zheng, R., Cai, J., Siau, K., and Chen, L. (2023).
Generative AI and ChatGPT: Applications, challenges, and AI-human collaboration. Journal of Information Technology Case and Application Research, 25(3):277–304.

Gilhooly, K. (2023). AI vs humans in the AUT: simulations to LLMs. Journal of Creativity.

Guzik, E. E., Byrge, C., and Gilde, C. (2023). The originality of machines: AI takes the Torrance Test. Journal of Creativity, 33(3).

Haase, J., Djurica, D., and Mendling, J. (2023a). The Art of Inspiring Creativity: Exploring the Unique Impact of AI-generated Images. In AMCIS 2023 Proceedings.

Haase, J. and Hanel, P. H. P. (2023). Artificial muses: Generative artificial intelligence chatbots have risen to human-level creativity. Journal of Creativity, 33(3):100066.

Haase, J., Hanel, P. H. P., and Gronau, N. (2023b). Creativity enhancement methods for adults: A meta-analysis. Psychology of Aesthetics, Creativity, and the Arts.

Hadamard, J. (1954). An essay on the psychology of invention in the mathematical field. Courier Corporation.

Hekkert, P., Snelders, D., and Van Wieringen, P. C. W. (2003). 'Most advanced, yet acceptable': Typicality and novelty as joint predictors of aesthetic preference in industrial design. British Journal of Psychology, 94(1):111–124.

Heyman, J. L., Rick, S. R., Giacomelli, G., Wen, H., Laubacher, R., Taubenslag, N., Knicker, M., Jeddi, Y., Ragupathy, P., Curhan, J., and Malone, T. (2024). Supermind Ideator: How Scaffolding Human-AI Collaboration Can Increase Creativity. In Proceedings of the ACM Collective Intelligence Conference, CI '24, pages 18–28, New York, NY, USA. Association for Computing Machinery.

Hitsuwari, J., Ueda, Y., Yun, W., and Nomura, M. (2023). Does human–AI collaboration lead to more creative art? Aesthetic evaluation of human-made and AI-generated haiku poetry. Computers in Human Behavior, 139.

Janson, A., Schmidt-Kraepelin, M., Schöbel, S., and Sunyaev, A. (2023). Special Issue Editorial: Adaptive and Intelligent Gamification Design. AIS Transactions on Human-Computer Interaction, 15(2):136–145.

Jennings, K. E. (2010). Developing Creativity: Artificial Barriers in Artificial Intelligence. Minds and Machines, 20(4):489–501.

Jesson, A., Beltran-Velez, N., Chu, Q., Karlekar, S., Kossen, J., Gal, Y., Cunningham, J. P., and Blei, D. (2024). Estimating the Hallucination Rate of Generative AI. arXiv:2406.07457 [cs, stat].

Joshi, N. (2022). Can AI Emulate Human Creativity? Forbes.

Liu, J., Xia, C. S., Wang, Y., and Zhang, L. (2023). Is Your Code Generated by ChatGPT Really Correct? Rigorous Evaluation of Large Language Models for Code Generation. Advances in Neural Information Processing Systems, 36:21558–21572.

Marrone, R., Cropley, D., and Medeiros, K. (2024). How Does Narrow AI Impact Human Creativity? Creativity Research Journal, 0(0):1–11.

Mateja, D. and Heinzl, A. (2021). Towards Machine Learning as an Enabler of Computational Creativity. IEEE Transactions on Artificial Intelligence, 2(6):460–475.

Melville, N. P., Robert, L., and Xiao, X. (2023). Putting humans back in the loop: An affordance conceptualization of the 4th industrial revolution. Information Systems Journal, 33(4):733–757.

Metz, C. (2016). The Sadness and Beauty of Watching Google's AI Play Go. Wired.

Mundinger, K., Pokutta, S., Spiegel, C., and Zimmer, M. (2024). Extending the Continuum of Six-Colorings. Geombinatorics Quarterly.
Müller-Wienbergen, F., Müller, O., Seidel, S., and Becker, J. (2011). Leaving the Beaten Tracks in Creative Work – A Design Theory for Systems that Support Convergent and Divergent Thinking. Journal of the Association for Information Systems, 12(11).

Olszak, C. M. and Kisielnicki, J. (2018). A conceptual framework of information systems for organizational creativity support. Lessons from empirical investigations. Information Systems Management, 35(1):29–48.

OpenAI (2023). ChatGPT-4.

Parasuraman, R., Sheridan, T., and Wickens, C. (2000). A model for types and levels of human interaction with automation. IEEE Transactions on Systems, Man, and Cybernetics - Part A: Systems and Humans, 30(3):286–297.

Parczyk, O., Pokutta, S., Spiegel, C., and Szabó, T. (2024). New Ramsey Multiplicity Bounds and Search Heuristics. Foundations of Computational Mathematics.

Paulus, P. B., Dzindolet, M., and Kohn, N. W. (2012). Chapter 14 – Collaborative Creativity: Group Creativity and Team Innovation. In Mumford, M. D., editor, Handbook of Organizational Creativity, pages 327–357. Academic Press, San Diego.

Poincaré, H. and Newman, J. (1908). Mathematical creation. Scientific Work and Creativity: Advice from the Masters, 1:177–183.

Rafner, J., Beaty, R. E., Kaufman, J. C., Lubart, T., and Sherson, J. (2023). Creativity in the age of generative AI. Nature Human Behaviour, 7(11):1836–1838.

Ragot, M., Martin, N., and Cojean, S. (2020). AI-generated vs. Human Artworks. A Perception Bias Towards Artificial Intelligence? In Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems, CHI EA '20, pages 1–10, New York, NY, USA. Association for Computing Machinery.

Runco, M. A. (2023). AI Can Only Produce Artificial Creativity. Journal of Creativity.

Seidel, S., Berente, N., Lindberg, A., Lyytinen, K., Martinez, B., and Nickerson, J. V. (2020). Artificial Intelligence and Video Game Creation: A Framework for the New Logic of Autonomous Design. Journal of Digital Social Research, 2(3):126–157.

Serbanescu, A. and Nack, F. (2023). Human-AI system co-creativity for building narrative worlds. IASDR Conference Series.

Shneiderman, B. (2007). Creativity support tools: accelerating discovery and innovation. Communications of the ACM, 50(12):20–32.

Shneiderman, B. (2020). Bridging the Gap Between Ethics and Practice: Guidelines for Reliable, Safe, and Trustworthy Human-centered AI Systems. ACM Transactions on Interactive Intelligent Systems, 10(4):1–31.

Sinha, R., Song, Z., and Zhou, T. (2023). A Mathematical Abstraction for Balancing the Trade-off Between Creativity and Reality in Large Language Models. arXiv:2306.02295 [cs].

Sio, U. N. and Lortie-Forgues, H. (2024). The impact of creativity training on creative performance: A meta-analytic review and critical evaluation of 5 decades of creativity training studies. Psychological Bulletin, 150(5):554–585.

Soifer, A. (2024). The New Mathematical Coloring Book: Mathematics of Coloring and the Colorful Life of Its Creators. Springer US, New York, NY.

Tian, H., Lu, W., Li, T. O., Tang, X., Cheung, S.-C., Klein, J., and Bissyandé, T. F. (2023). Is ChatGPT the Ultimate Programming Assistant – How far is it? arXiv:2304.11938 [cs].

Voigt, M. (2014). Improving Design of Systems Supporting Creativity-intensive Processes – A Cross-industry Focus Group Evaluation. Communications of the Association for Information Systems, 34:24.

Wallas, G.
(1926). The art of thought. London: J. Cape.

White, C. (2023). Opinion: Artificial intelligence can't reproduce the wonders of original human creativity. The Star.
ai_researcher
4
Improving_language_models_by_retrieving_from_trillions_of_tokens.pdf
Improving language models by retrieving from trillions of tokens

Sebastian Borgeaud†, Arthur Mensch†, Jordan Hoffmann†, Trevor Cai, Eliza Rutherford, Katie Millican, George van den Driessche, Jean-Baptiste Lespiau, Bogdan Damoc, Aidan Clark, Diego de Las Casas, Aurelia Guy, Jacob Menick, Roman Ring, Tom Hennigan, Saffron Huang, Loren Maggiore, Chris Jones, Albin Cassirer, Andy Brock, Michela Paganini, Geoffrey Irving, Oriol Vinyals, Simon Osindero, Karen Simonyan, Jack W. Rae‡, Erich Elsen‡ and Laurent Sifre†,‡

All authors from DeepMind; †Equal contributions, ‡Equal senior authorship. Corresponding authors: {sborgeaud|amensch|jordanhoffmann|sifre}@deepmind.com

arXiv:2112.04426v3 [cs.CL] 7 Feb 2022

We enhance auto-regressive language models by conditioning on document chunks retrieved from a large corpus, based on local similarity with preceding tokens. With a 2 trillion token database, our Retrieval-Enhanced Transformer (Retro) obtains comparable performance to GPT-3 and Jurassic-1 on the Pile, despite using 25× fewer parameters. After fine-tuning, Retro performance translates to downstream knowledge-intensive tasks such as question answering. Retro combines a frozen Bert retriever, a differentiable encoder and a chunked cross-attention mechanism to predict tokens based on an order of magnitude more data than what is typically consumed during training. We typically train Retro from scratch, yet can also rapidly Retrofit pre-trained transformers with retrieval and still achieve good performance. Our work opens up new avenues for improving language models through explicit memory at unprecedented scale.

1. Introduction

Language modelling (LM) is an unsupervised task that consists of modelling the probability of text, usually by factorising it into conditional next-token predictions $p(x_1, \ldots, x_n) = \prod_i p(x_i \mid x_{<i})$. Neural networks have proven to be powerful language models, first in the form of recurrent architectures (Graves, 2013; Jozefowicz et al., 2016; Mikolov et al., 2010) and more recently in the form of Transformers (Vaswani et al., 2017), that use attention to contextualise the past. Large performance improvements have come from increasing the amount of data, training compute, or model parameters. Transformers have been scaled from 100 million parameter models in seminal work to over a hundred billion parameters (Brown et al., 2020; Radford et al., 2019) in the last two years, which has led to models that do very well on a wide array of tasks in a zero- or few-shot formulation. Increasing model size predictably improves performance on a wide range of downstream tasks (Kaplan et al., 2020). The benefits of increasing the number of parameters come from two factors: additional computations at training and inference time, and increased memorization of the training data.

In this work, we endeavor to decouple these, by exploring efficient means of augmenting language models with a massive-scale memory without significantly increasing computations. Specifically, we suggest retrieval from a large text database as a complementary path to scaling language models. Instead of increasing the size of the model and training on more data, we equip models with the ability to directly access a large database to perform predictions: a semi-parametric approach. At a high level, our Retrieval Transformer (Retro) model splits the input sequence into chunks and retrieves text similar to the previous chunk to improve the predictions in the current chunk.
Existing retrieval for language modelling work only considers small transformers (100 million parameters) and databases of limited size (up to billions of tokens) (Guu et al., 2020; Khandelwal et al., 2020; Lewis et al., 2020; Yogatama et al., 2021). To our knowledge, our work is the first to show the benefits of scaling the retrieval database to trillions of tokens for large parametric language models.

Corresponding authors: {sborgeaud|amensch|jordanhoffmann|sifre}@deepmind.com

Figure 1 | Scaling of Retro. The performance gain of our retrieval models remains constant with model scale (left), and is comparable to multiplying the parametric model size by ∼ 10×. The gain increases with the size of the retrieval database (middle) and the number of retrieved neighbours (right) on the C4 validation set, when using up to 40 neighbours. Past this, performance begins to degrade, perhaps due to the reduced quality of the additional neighbours. At evaluation Retro can be used without retrieval data (Retro[Off]), bringing limited performance degradation compared to baseline transformers.

Our main contributions are the following.
• We introduce Retro, a retrieval-enhanced autoregressive language model (§2.2). We use a chunked cross-attention module to incorporate the retrieved text (§2.4), with time complexity linear in the amount of retrieved data. We show that retrieving based on a pre-trained frozen Bert model (§2.3) works at scale, removing the need for training and updating a retriever network.
• We show that our method scales well with model size and database size (Fig. 1): Retro provides a constant gain for models ranging from 150M to 7B parameters, and Retro can be improved at evaluation time by increasing the database size and the number of retrieved neighbours. Our largest model obtains state-of-the-art results on a range of downstream evaluation datasets including Wikitext103 (Merity et al., 2017) and the Pile (Gao et al., 2020) (§4). We show that Retro can be fine-tuned to achieve competitive performance on downstream tasks such as question answering (§4.3).
• We propose an evaluation aware of proximity of test documents with the training set (§2.6), addressing the problem of test set leakage (Lee et al., 2021). This is relevant for all language models, and especially for retrieval-enhanced models since they have direct access to the training dataset during evaluation. Using this methodology, we show that the performance of Retro comes from both explicit neighbour copying and general knowledge extraction (§4.4).

2. Method

We design our retrieval-enhanced architecture to be capable of retrieving from a database with trillions of tokens. For this purpose, we retrieve at the level of contiguous token chunks instead of individual tokens, which reduces storage and computation requirements by a large linear factor. Our method first constructs a key-value database, where values store raw chunks of text tokens and keys are frozen Bert embeddings (Devlin et al., 2019). We use a frozen model to avoid having to periodically re-compute embeddings over the entire database during training. Each training sequence is then split into chunks, which are augmented with their k-nearest neighbours retrieved from the database. An encoder-decoder architecture integrates retrieval chunks into the model's predictions. We summarize the Retro architecture in Fig. 2, and detail it in this section.
We end the section by introducing a new methodology to evaluate language models when an evaluation set is partially present in the training set.

Figure 2 | Retro architecture. Left: simplified version where a sequence of length n = 12 is split into l = 3 chunks of size m = 4. For each chunk, we retrieve k = 2 neighbours of r = 5 tokens each. The retrieval pathway is shown on top. Right: details of the interactions in the Cca operator. Causality is maintained as neighbours of the first chunk only affect the last token of the first chunk and tokens from the second chunk.

2.1. Training dataset

We use a multi-lingual version of MassiveText (Rae et al., 2021) for both training and retrieval data. The dataset consists of text documents from multiple sources and multiple languages totalling over 5 trillion tokens (detailed in Table 1). Sequences are sampled from subsets of the training data, with sampling weights given in the right-most column of Table 1. We tokenize the dataset using SentencePiece (Kudo and Richardson, 2018) with a vocabulary of 128,000 tokens. During training (unless otherwise specified), we retrieve from 600B tokens from the training data. The training retrieval database is made of the same subsets as the training data, in proportions that match the training sampling frequencies. During evaluation the retrieval database consists of the full union of these datasets, with the exception of books, for which we use a sub-sample of 4%. The evaluation retrieval database thus contains 1.75T tokens. To limit test set leakage, we compute the 13-gram Jaccard similarity between train and test documents using the MinHash scheme and remove all training documents with high similarity (0.8 or higher) to a validation or test set document. Additionally, we remove all validation and test articles from Wikitext103 (Merity et al., 2017) from our Wikipedia training data.

2.2. Retrieval-enhanced autoregressive token models

Our approach uses retrieval as a way to augment input examples at the granularity of small chunks of tokens. Formally, we consider sequences of integer tokens in 𝕍 = [1, v], obtained using a text tokenizer¹. We split each n-token-long example X = (x_1, . . . , x_n) into a sequence of l chunks (C_1, . . . , C_l) of size m = n/l, i.e. C_1 ≜ (x_1, . . . , x_m), . . . , C_l ≜ (x_{n−m+1}, . . . , x_n) ∈ 𝕍^m. We use n = 2048 and m = 64. We augment each chunk C_u with a set Ret_D(C_u) of k neighbours from the database D. Ret_D (or Ret for brevity) is a non-trainable operator specified in §2.3. Token likelihoods are provided by a model, parameterized by θ, that takes as input both previous tokens and their retrieved neighbours.

¹We use the notation [1, v] ≜ {1, . . . , v} throughout the text.
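To make the chunking step concrete, the following is a minimal sketch (ours, not from the paper) of the sequence-splitting just described, assuming token IDs are held in a NumPy array:

import numpy as np

def split_into_chunks(tokens: np.ndarray, m: int = 64) -> np.ndarray:
    """Split an n-token example X = (x_1, ..., x_n) into l = n / m contiguous
    chunks C_1, ..., C_l of size m, as in Section 2.2 (n = 2048, m = 64)."""
    n = tokens.shape[0]
    assert n % m == 0, "n must be a multiple of the chunk size m"
    return tokens.reshape(n // m, m)

# A 2048-token sequence yields l = 32 chunks of 64 tokens each.
chunks = split_into_chunks(np.arange(2048), m=64)
print(chunks.shape)  # (32, 64)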
This defines the following retrieval-enhanced sequence log-likelihood:

L(X | θ, D) ≜ ∑_{u=1}^{l} ∑_{i=1}^{m} ℓ_θ( x_{(u−1)m+i} | (x_j)_{j<(u−1)m+i}, (Ret_D(C_{u′}))_{u′<u} ).    (1)

We set Ret(C_1) = ∅, namely the likelihood of tokens from the first chunk does not depend on any retrieval data. This likelihood definition preserves autoregressivity: the probability of the i-th token of the u-th chunk, x_{(u−1)m+i}, only depends on previously seen tokens (x_j)_{1≤j<(u−1)m+i} and on the data retrieved from the previous chunks (Ret(C_{u′}))_{u′<u}. We can therefore directly sample with log-probability ℓ, where sampling within the chunk C_u is conditioned on the neighbours (Ret(C_{u′}))_{u′<u}. This makes retrieval-enhanced models directly comparable with the largest language models that are evaluated by sampling.

2.3. Nearest neighbour retrieval

Retrieval neighbours. Our database consists of a key-value memory. Each value consists of two contiguous chunks of tokens which we denote [N, F], where N is the neighbour chunk which is used to compute the key, and F is its continuation in the original document. The corresponding key is the Bert embedding of N, averaged over time, that we denote Bert(N). For each chunk C, we retrieve its approximate k-nearest neighbours from our key-value database using the L2 distance on BERT embeddings d(C, N) = ||Bert(C) − Bert(N)||²₂. The model receives the corresponding values Ret(C) ≜ ([N¹, F¹], . . . , [Nᵏ, Fᵏ]). Both neighbour chunks and their continuations provide meaningful improvements, as illustrated in our ablation study (Appendix D). We use a length of 64 for both Nʲ and Fʲ, thus Ret(C) has a shape of k × r with r = 128. To avoid retrieving the chunk C_{u+1} in the retrieval set Ret(C_u), which would break causality during training, we filter out neighbours originating from the same document as the training sequence X.

For a database of T elements, we can query the approximate nearest neighbours in O(log T) time. We use the SCaNN library (Guo et al., 2020) to achieve this. This means that we can query our 2 trillion token database in 10 ms whilst evaluating or sampling from the model; this expense is amortized over a chunk length. Performing retrieval on-the-fly is too slow to keep up with the training calculations—we leverage the frozen aspect of the embedding operator Bert to precompute all approximate nearest neighbours and save the results as part of the data. In Fig. 9 in the Appendix, we show results where we only retrieve neighbours within Wikipedia. We find that neighbours tend to come from 2-3 links away from a given article whereas random articles are more than 5 links apart.

Table 1 | MassiveText. The last column indicates the sampling weight during training. The multilingual subsets include documents in 10 languages. The full breakdown is given in §A.1.

Source    | Token count (M) | Documents (M) | Multilingual | Sampling frequency
Web       | 977,563         | 1,208         | Yes          | 55%
Books     | 3,423,740       | 20            | No           | 25%
News      | 236,918         | 398           | No           | 10%
Wikipedia | 13,288          | 23            | Yes          | 5%
GitHub    | 374,952         | 143           | No           | 5%

2.4. Retro model architecture

Our model relies on an encoder-decoder transformer architecture, integrating the retrieved data through a cross-attention mechanism as introduced in Vaswani et al. (2017). First, the retrieved tokens Ret(C) are fed into an encoder Transformer, which computes the encoded neighbours set E.
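Before continuing with the decoder, a toy sketch of the §2.3 key-value database may help fix ideas. Everything here is our own illustration: the embed stand-in replaces the frozen, time-averaged Bert embedding, and brute-force L2 search replaces SCaNN, so only the data layout and the [N, F] value format mirror the paper:

import numpy as np

def embed(chunk):
    # Stand-in for the frozen, time-averaged Bert embedding Bert(N); a real
    # implementation would run a frozen BERT and mean-pool over time.
    rng = np.random.default_rng(hash(tuple(chunk)) % (2**32))
    return rng.standard_normal(128)

class ChunkDatabase:
    """Toy key-value memory: each value is a pair [N, F] (neighbour chunk and
    its continuation); each key is embed(N). Brute-force L2 search stands in
    for the approximate nearest-neighbour search done with SCaNN."""

    def __init__(self, documents, m=64):
        self.values, keys = [], []
        for doc in documents:
            for s in range(0, len(doc) - 2 * m + 1, m):
                N, F = doc[s:s + m], doc[s + m:s + 2 * m]
                self.values.append((N, F))
                keys.append(embed(N))
        self.keys = np.stack(keys)

    def retrieve(self, chunk, k=2):
        # Squared L2 distance d(C, N) = ||embed(C) - embed(N)||^2.
        d = np.sum((self.keys - embed(chunk)) ** 2, axis=1)
        return [self.values[i] for i in np.argsort(d)[:k]]

db = ChunkDatabase([list(range(512)), list(range(1000, 1512))])
neighbours = db.retrieve(list(range(64)), k=2)  # k pairs [N, F], r = 128 tokens each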
Denoting the intermediate activations by H, our transformer decoder then interleaves Retro-blocks Retro(H, E) and standard Transformer blocks Lm(H) (the hyperparameter P ⊆ [1, L] determines at which layers we use a Retro-block). These blocks are built from three different residual operators with signature ℝ^{n×d} → ℝ^{n×d}: a fully-connected layer Ffw, the standard sequence-level self-attention layer Attn, and a chunked cross-attention layer Cca(·, E) that incorporates information from the retrieval encoder:

Retro(H, E) ≜ Ffw(Cca(Attn(H), E)),   and   Lm(H) ≜ Ffw(Attn(H))    (2)

Since Ffw, Attn and Cca are all autoregressive operators whose output at position i only depends on (h_j)_{j≤i}, any succession of Retro and Lm layers, followed by a token classification head, defines an autoregressive log-likelihood (1). An overview of the model architecture is given in Algorithm 1 and in Fig. 2. We next describe the retrieval encoder and the chunked cross-attention layer in more detail, and explain how to sample from Retro.

Encoding retrieval neighbours. For each chunk C_u, the k retrieval neighbours Ret(C_u) are fed into a bi-directional transformer Encoder, yielding the outputs E^j_u ≜ Encoder(Ret(C_u)^j, H_u) ∈ ℝ^{r×d′}, where j ∈ [1, k] indexes each neighbour. The retrieval encoder is a non-causal transformer. It is conditioned on H_u, the activations of chunk C_u, through cross-attention layers; this allows the representations of the retrieval encoder to be modulated by the retrieving chunk in a differentiable way. More precisely, the encoding of the jth neighbour of the uth chunk, Ret(C_u)^j, depends on the attended activation H_u ≜ (h_{(u−1)m+i})_{i∈[1,m]} ∈ ℝ^{m×d} of chunk C_u at layer min(P). All neighbours for all chunks are encoded in parallel, yielding a full encoded set E ≜ (E^j_u)_{u∈[1,l], j∈[1,k]} ∈ ℝ^{l×k×r×d′}. We denote E_u ∈ ℝ^{k×r×d′} as the encoded neighbours for chunk u ∈ [1, l].

Chunked cross-attention. To perform the Cca operation, we first split a given intermediate activation H ∈ ℝ^{n×d} into l−1 attending chunks (H⁺_u ≜ (h_{um+i−1})_{i∈[1,m]} ∈ ℝ^{m×d})_{u∈[1,l−1]}, as depicted on the right of Fig. 2. H⁺_u holds the intermediary embeddings of the last token in chunk C_u and of the first m−1 tokens in C_{u+1}². We compute the cross-attention between H⁺_u and E_u—the encoded retrieval set obtained from chunk C_u. Attention is computed across time and across neighbours simultaneously, as we merge the neighbour and time dimensions of E_u before applying cross-attention. Since there is a notion of alignment between data chunks and retrieval neighbours, we use relative positional encodings as described in §B.1.2.

We concatenate the l−1 outputs of the per-chunk cross-attentions (each of shape m × d) across time, and properly pad the result; we thus form the output activation Cca(H, E) ∈ ℝ^{n×d}. Formally, for each chunk C_u and for each token i ∈ [1, m] we set

Cca(H, E)_{um+i−1} ≜ Ca(h_{um+i−1}, E_u),    (3)

²The last token of chunk C_u is the first to be able to access the retrieved content E_u while maintaining autoregressivity in (1). Hence, there is a one token overlap between chunk C_u ≜ (x_{(u−1)m+i})_{i∈[1,m]} and the corresponding attending chunk C⁺_u ≜ (x_{um+i−1})_{i∈[1,m]}.

Algorithm 1: Overview of Retro model architecture.
Hyperparam: P and P_enc, indices of layers with cross-attention in the decoder and encoder respectively
Hyperparam: L and L_enc, number of decoder layers and number of encoder layers.
Input: X ∈ 𝕍^n: sequence of tokens. (Ret(C_u))_{1≤u≤l}: the retrieved neighbours
Output: O ∈ ℝ^{n×|𝕍|}: the output logits

def Encoder(Ret(C_u)_{1≤u≤l}, H):
    (H_u)_{u∈[1,l]} ← Split(H)
    for j ∈ [1, k], u ∈ [1, l] do   // Encoder shared across neighbours and chunks
        E^j_u = Emb_enc(Ret(C_u)^j)   // May be shared with the decoder Emb
        for p′ ∈ [1, L_enc] do
            E^j_u ← Attn_enc(E^j_u)   // Bi-directional attention
            if p′ ∈ P_enc then
                E^j_u ← Ca_enc(E^j_u, H_u)
            E^j_u ← Ffw_enc(E^j_u)
    return E

H ← Emb(X)
for p ∈ [1, L] do
    H ← Attn(H)   // Causal attention
    if p = min(P) then
        // The neighbour Encoder is conditioned with the decoder activations
        // of the last layer before the first cross-attention
        E = Encoder(Ret(C_u)_{1≤u≤l}, H)
    if p ∈ P then
        H ← Cca(H, E)
    H ← Ffw(H)
O ← Read(H)

where Ca is the cross-attention residual operator over time-concatenated encoded neighbours. We recall that this operator is defined in its simplest version by three parameter matrices K ∈ ℝ^{d×c}, Q ∈ ℝ^{d×c} and V ∈ ℝ^{d×d}. For all h ∈ ℝ^d and Y ∈ ℝ^{T×d}, we define

Ca(h, Y) ≜ softmax(Y K Qᵀ h) Y V,    (4)

where the softmax is performed on the second dimension and all products are matrix products. We use multi-head cross-attention, and add positional encodings to the softmax (see §B.1.2). The first m − 1 tokens cannot attend to any neighbour of a previous chunk; at these positions, we define Cca as the identity, setting Cca(H, E)_j ≜ h_j for all tokens j ∈ [1, m − 1]. Finally, the last token h_{lm} attends to the last retrieval set E_l and we set h_{lm} ≜ Ca(h_{lm}, E_l) (not shown in Fig. 2). Listing 1 contains a simplified implementation of Cca. Note that chunked cross-attention is autoregressive: the output of Cca at position i depends on the sequence of tokens from 0 to i that is input to Cca. With Retro models, even though each Cca cross-attention attends only to the neighbours of the preceding chunk Ret(C_{u−1}), the dependencies over previous neighbours are propagated via the self-attention operations. The activations of the ith token in the uth chunk therefore potentially depend upon the set of all previous neighbours Ret(C_{u′})_{u′<u}, without incurring the quadratic cost of cross attending to that set.

Sampling. When sampling, at the end of a chunk C_u, we use SCaNN to retrieve neighbours Ret(C_u), based on the embedding Bert(C_u). The encoded neighbours E_u = Encoder(Ret(C_u)) are then used to condition the generation of the next chunk C_{u+1}, which we do incrementally: overall the cost of sampling is thus quadratic in the size of the sampled sequence, as when sampling from regular Transformers; the added cost of retrieval is linear in the number of chunks l, and is negligible compared to the token sampling cost in practice.
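Since Listing 1 is not reproduced in this excerpt, here is our own minimal single-head NumPy sketch of the Ca operator of Eq. (4), with the residual connection made explicit and omitting the multi-head structure and relative positional encodings of §B.1.2:

import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def ca(h, Y, K, Q, V):
    """Eq. (4): Ca(h, Y) = softmax(Y K Q^T h) Y V, for h in R^d and the
    time-concatenated encoded neighbours Y in R^{T x d}; K, Q are d x c and
    V is d x d. The residual add makes this a residual operator."""
    logits = (Y @ K) @ (Q.T @ h)          # shape (T,): one logit per neighbour token
    return h + softmax(logits) @ (Y @ V)  # attend over time, project back to R^d

d, c, T = 16, 8, 128                      # T = k * r merged neighbour/time positions
rng = np.random.default_rng(0)
h, Y = rng.standard_normal(d), rng.standard_normal((T, d))
K, Q, V = (rng.standard_normal(s) for s in [(d, c), (d, c), (d, d)])
out = ca(h, Y, K, Q, V)                   # Cca applies this per attending chunk H_u^+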
2.5. Baseline Transformer architecture

We use a transformer (Vaswani et al., 2017) similar to the one described in Radford et al. (2019), with some minimal changes: we replace LayerNorm with RMSNorm (Zhang and Sennrich, 2019) and use relative position encodings (Dai et al., 2019). As baselines, we train retrieval-free transformers with 132M, 368M, 1.3B and 7.0B parameters (embedding matrices are excluded from parameter counts). The hyperparameters we used are detailed in Table 2. All retrieval models use the same size encoder for the retrieval data, with d′ = 896 and 2 layers, which roughly adds 19M parameters. The encoder uses relative positional encodings. The retrieval models contain one Retro-block every 3 blocks, starting from layer 6. For our smallest model, Cca is applied in layers 6, 9 and 12 of the main pathway and also once for query conditioning in the encoder, which adds an additional 12M parameters. The relative number of extra parameters reduces as we increase the baseline model size. All models are implemented using JAX (Bradbury et al., 2018) and Haiku (Hennigan et al., 2020).

Table 2 | Number of parameters for our baseline and Retro models, excluding embeddings, along with the corresponding hyperparameters.

Baseline parameters | Retro         | d     | d_ffw  | # heads | Head size | # layers
132M                | 172M (+30%)   | 896   | 3,584  | 16      | 64        | 12
368M                | 425M (+15%)   | 1,536 | 6,144  | 12      | 128       | 12
1,309M              | 1,451M (+11%) | 2,048 | 8,192  | 16      | 128       | 24
6,982M              | 7,532M (+8%)  | 4,096 | 16,384 | 32      | 128       | 32

2.6. Quantifying dataset leakage exploitation

Retro models may arguably benefit more easily from evaluation dataset leakage, i.e. the fact that we evaluate on data that were also present in the training set. To better understand how retrieval improves language modelling performance, we therefore quantify evaluation likelihood as a function of the overlap between the evaluation and training datasets. The following approach can be used with any language model, and depends only on the frozen retriever system presented in §2.3. We split the evaluation sequences (X_i)_i into chunks of length m ≤ 64, and we see the training data as a set of chunks C. For each evaluation chunk C ∈ C, we retrieve the 10 closest neighbours (of length up to 128) in the training data. We then compute the longest token substring common to both the evaluation chunk and its neighbours. This gives a number s ∈ [0, m]. The value r(C) = s/m, ranging from 0 (chunk never seen) to 1 (chunk entirely seen), gives a reliable indication of how much overlap there is between the evaluation chunk and the training data. For a given model, we then obtain the log-likelihood ℓ(C) of each chunk C, and the number of bytes N(C) it encodes. We then consider the filtered bits-per-bytes of the model:

∀ α ∈ [0, 1]:   C_α ≜ {C ∈ C : r(C) ≤ α},   bpb(α) ≜ ∑_{C∈C_α} ℓ(C) / ∑_{C∈C_α} N(C),    (5)

which correspond to the bits-per-bytes on the set of chunks that overlap less than α% with the training chunks. Note that the full evaluation bits-per-bytes performance is recovered by bpb(1). The function bpb(·) allows us to evaluate the impact of evaluation leakage over predictive performance: for low α, bpb(α) gives an indication on how the model performs on chunks that are entirely new; the slope of bpb(·) shows how much the model exploits evaluation leakage.
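A small sketch of this leakage metric follows; it is ours, assumes chunks are token-ID lists with per-chunk losses already available, and uses difflib as a stand-in for whatever longest-common-substring routine was actually used:

from difflib import SequenceMatcher

def overlap_ratio(eval_chunk, neighbours):
    """r(C) = s / m from Section 2.6: the longest token substring shared with
    any retrieved neighbour, normalised by the chunk length m."""
    m = len(eval_chunk)
    s = max((SequenceMatcher(None, eval_chunk, nb)
             .find_longest_match(0, m, 0, len(nb)).size for nb in neighbours),
            default=0)
    return s / m

def filtered_bpb(chunks, alpha):
    """bpb(alpha) of Eq. (5): bits-per-byte restricted to chunks with
    r(C) <= alpha; `chunks` holds (r, loss_bits, n_bytes) triples."""
    kept = [(bits, nb) for r, bits, nb in chunks if r <= alpha]
    return sum(bits for bits, _ in kept) / sum(nb for _, nb in kept)

r = overlap_ratio([1, 2, 3, 4], [[9, 2, 3, 8], [7, 7]])  # shared run (2, 3) -> 0.5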
3. Related Work

We first review existing work on using retrieval for language modelling, and compare Retro to these works (see Table 3). As we train Retro models on a large dataset containing a substantial section of the internet, our work raises potential privacy, safety, and fairness issues that we then review.

3.1. Retrieval for language modelling

Brants et al. (2007) show that scaling the training data to trillions of tokens improves the machine translation performance of n-gram models. More recently, GPT-2 (Radford et al., 2019), GPT-3 (Brown et al., 2020), and Jurassic-1 (Lieber et al., 2021) show that scaling up language models leads to massive improvements on many downstream tasks. At the same time, Carlini et al. (2021) demonstrate that large-scale language models can perfectly memorise parts of their training data, suggesting that enhancing models with retrieval may lead to further improvements. However, significant leakage between train and test datasets (Lee et al., 2021; Lewis et al., 2021) makes comparing and evaluating large models trained on large datasets difficult, especially once retrieval capabilities over the training dataset are added.

Historically, information retrieval for text relies on inverted index matching such as TF-IDF and BM25 (Robertson and Zaragoza, 2009). Foundational work uses latent topic modelling approaches like LDA (Blei et al., 2003) to identify relevant neighbours (Wei and Croft, 2006). Work in machine translation such as Zhang et al. (2018) and Gu et al. (2018) retrieves translation pairs based on edit distance between source sentences and guides the translation output using the closest retrieved target sentences. The retrieval database may also be structured—for example, Ahn et al. (2016) use a symbolic knowledge graph to improve an RNN language model.

With the success of deep learning, retrieving systems have partly switched to dense learned representations based on a neural network's activations. Continuous cache (Grave et al., 2017) adds probability mass to tokens for which previous activations resemble the current activation vector, extending the model's context to the local history. kNN-LM (Khandelwal et al., 2020) applies this idea to transformers and extends the retrieval database to English Wikipedia, resulting in substantial improvements on Wikitext103 evaluation. Continuous cache and kNN-LM do not modify the underlying neural-network models, but interpolate at inference between the language model's output and distributions computed from retrieved tokens. These methods can therefore be plugged into any model without additional training, although this limits the model's ability to reason about the retrieved text. Spalm (Yogatama et al., 2021) addresses this limitation by adding an extra gating network to post-process the retrieved data; yet most of the network is unaffected by the retrieval during inference.

Table 3 | Comparison of Retro with existing retrieval approaches.

                 | # Retrieval tokens | Granularity | Retriever training   | Retrieval integration
Continuous Cache | O(10³)             | Token       | Frozen (LSTM)        | Add to probs
kNN-LM           | O(10⁹)             | Token       | Frozen (Transformer) | Add to probs
Spalm            | O(10⁹)             | Token       | Frozen (Transformer) | Gated logits
Dpr              | O(10⁹)             | Prompt      | Contrastive proxy    | Extractive QA
Realm            | O(10⁹)             | Prompt      | End-to-End           | Prepend to prompt
RAG              | O(10⁹)             | Prompt      | Fine-tuned Dpr       | Cross-attention
FiD              | O(10⁹)             | Prompt      | Frozen Dpr           | Cross-attention
Emdr2            | O(10⁹)             | Prompt      | End-to-End (EM)      | Cross-attention
Retro (ours)     | O(10¹²)            | Chunk       | Frozen (Bert)        | Chunked cross-attention

The retrieval representations may be trained directly instead of relying on a pre-trained model—retriever systems have been developed for this purpose, primarily on open-domain question answering.
For example, Dpr (Karpukhin et al., 2020) trains two Bert models (for queries and keys respectively) using a contrastive loss to align the representations of a question and of its answers. Lee et al. (2019) use an inverse cloze task to find semantic representations of passages for retrieval. These works differ from continuous cache and kNN-LM in that they embed passages (or chunks) of text together, as opposed to each token individually. The retriever network is trained in isolation from the downstream task that uses the retrieval data. This potential issue is specifically addressed by Realm (Guu et al., 2020), which trains the retrieval system end-to-end to maximize the final training cross-entropy. This comes with the extra complexity of searching the database during training and periodically updating the embedding table, severely limiting the scale at which it can operate. RAG (Lewis et al., 2020) and FiD (Izacard and Grave, 2021) build upon Dpr to set the state of the art on question answering benchmarks by training encoder-decoder transformer models. More recently, Emdr2 (Sachan et al., 2021) extends FiD by using an expectation-maximization algorithm to train the retriever end-to-end and achieves state of the art results compared to similarly sized models. In the open-domain dialogue setting, BlenderBot 2.0 (Komeili et al., 2021) learns to issue textual internet queries, outperforming dense retrieval methods when evaluated on a task measuring how close model responses are to those of humans. This involves collecting a dataset of human dialogues with associated search queries, which limits the scalability of this approach. Hashemi et al. (2020) introduce the Guided Transformer, a modified Transformer similar to Retro, for document retrieval and clarifying question selection. Although effective on question answering and other tasks with strong conditioning, none of these methods are designed to model arbitrary text sequences, in contrast with Retro.

Retro shares components with kNN-LM and Dpr in that it uses frozen retrieval representations. Retro models longer sequences than QA examples; this requires reasoning at a sub-sequence level and retrieving different documents for the different chunks of a sequence. Similar to FiD, Retro processes the retrieved neighbours separately in the encoder, and assembles them in the chunked cross-attention. This differs from e.g. Realm, which prepends retrieved documents to the prompt. Using chunks allows for repeated retrieval whilst generating a sequence, as opposed to retrieving only once based on the prompt alone. Furthermore, retrieval is done during the whole pre-training process in Retro, and is not simply plugged in to solve a certain downstream task. Finally, previous methods based on dense query vectors use small models and retrieval datasets with less than 3B tokens (English Wikipedia). Table 3 summarizes the differences between Retro and existing approaches.

3.2. Privacy, safety and fairness

Bender et al. (2021); Weidinger et al. (2021) highlight several dangers of large language models. Those stem from their ability to memorise training data, their high training cost, the static nature of their training data (Lazaridou et al., 2021), their tendency to amplify inherent biases in the training data, and their ability to generate toxic language (Gehman et al., 2020).
In this section we inspect these dangers, focusing on how retrieval augmented language models may exacerbate or mitigate them.

Large language models can perfectly memorise parts of their training data (Carlini et al., 2021). When coupled with large training datasets gathered from the web or other sources, this has clear privacy and safety implications. Retrieval models such as Retro that have access to the entire training dataset during inference exacerbate these privacy issues by being able to directly copy training data. However, retrieval systems offer a path towards mitigating these concerns via obliteration of the retrievable data at inference time. In addition, differential privacy training (Abadi et al., 2016) of retrieval models could guarantee that no private information is stored in the model weights, while individualisation on private data could be achieved by updating the retrieval database at inference time.

Due to their high training cost, re-training large language models regularly to incorporate new data, languages, and norms is prohibitively expensive. To keep retrieval models up-to-date, it may be sufficient to update the retrieval database, which is orders of magnitude cheaper than re-training a model from scratch. In addition to the benefits of updating models in terms of fairness and bias, simply training large language models has a significant energy cost (Schwartz et al., 2020; Strubell et al., 2019). Retrieval mechanisms offer a path to reducing the compute requirements needed to train and update language models that reach a certain performance.

Large language models are prone to generating toxic outputs, as shown in Gehman et al. (2020). Bender et al. (2021); Jo and Gebru (2020) advocate for the importance of better training data curation and documentation. Additionally, if portions of the training data are found to be eliciting biased or toxic outputs after training, retrieval allows for some correction, as the offending retrieval data can be retroactively filtered. However, it is also the case that without careful analysis and intervention, retrieval models may exacerbate biases that are present in the training data. Retrieval models can also add a further source of bias through the selection mechanism for retrieval documents. Further work in this area is required to better understand how retrieval affects the bias and toxicity of the model outputs. Finally, samples from large models are difficult to interpret, making mitigating these issues all the more challenging (Belinkov et al., 2020; Jain and Wallace, 2019). Retrieval provides more insights into the outputs of a model, as one can directly visualise or modify the neighbours that are being used. The examples in Tables 6, 7, 20 and 21 illustrate how retrieval makes language models more factual and interpretable by providing more transparent outputs.

4. Results

We first report results on language modelling benchmarks. Second, we show how to Retrofit pre-trained Transformer language models into retrieval models with few additional FLOPs. Next, we report Retro results on question answering. Finally, we report evaluation metrics with leakage filtering, to better understand the source of the gains with retrieval.

4.1. Language modelling

Datasets. We evaluate our models on C4 (Raffel et al., 2020), Wikitext103 (Merity et al., 2017), Curation Corpus (Curation, 2020), Lambada (Paperno et al., 2016) and the Pile (Gao et al., 2020).
We also evaluate on a set of manually selected Wikipedia articles that were added or heavily edited in September 2021, months after our pre-training and retrieval dataset was collected (details are given in §A.2). We construct the dataset with articles from the "future" and manually remove new articles that strongly overlap documents in our training data. This guarantees that the evaluation documents are not leaked in our training data.

Figure 3 | Scaling with respect to model size. (a) LAMBADA top-1 accuracy. (b) Evaluation loss on Curation Corpus. (c) Perplexity on Wikitext103 valid. (d) Bits-per-byte on selected Wikipedia articles from September 2021.

For C4, Wikitext103, the Pile, and our Wikipedia dataset we evaluate the language modelling performance on entire documents and measure the bits-per-byte (bpb). We favour bits-per-byte over loss as it is tokenizer agnostic. We evaluate with a sequence length of 2048 tokens but use a stride of 1024 within documents to mitigate boundary effects. On Curation Corpus we concatenate the article, the "TL;DR:" string, and the summary, but only evaluate the bpb on the summary. For Lambada we evaluate the accuracy on the last word, using greedy generation.

Model scaling. In Fig. 1 (left) and Fig. 3 we show the language modelling performance as we scale models from 150 million to 7 billion (non-embedding) parameters. We see that on all datasets, Retro outperforms the baseline at all model sizes. Furthermore, we observe that improvements do not diminish as we scale the models. The performance is dataset dependent, with the largest gains on Wikitext103 and C4. Wikipedia articles and other web pages are similar to Wikitext103 documents, even if not exact copies (§4.4); we thus obtain dramatic improvements on Wikitext103, as our retrieval model is able to directly exploit these overlaps. The smallest gains are for Curation Corpus, where Retro only slightly outperforms the baseline. This is expected as Curation Corpus summaries are designed to only contain information from the source article and are not included in our retrieval database. On our "future" Wikipedia September 2021 dataset, we also observe consistent gains for all model sizes.

Data scaling. Fig. 1 (middle) shows how scaling the retrieval database at evaluation improves the language modelling performance. We observe dramatic gains as the retrieval data is increased from Wikipedia (4 billion tokens) to all of MassiveText (1.7T tokens). Fig. 1 (right) shows how performance scales as we increase the number of retrieved chunks. Despite being only trained with 2 neighbours, we see consistent improvements for all models when the number of neighbours is increased from 1 to 10. Furthermore, we observe that larger models are able to better utilise more neighbours: the 172M model improves with up to 10 neighbours, whereas the 7B model improves with up to 40 neighbours.

The Pile. We evaluate our 7B models on the Pile test sets³ and compare against the 178B parameter Jurassic-1 (Lieber et al., 2021) model and the 280B parameter Gopher (Rae et al., 2021) model. We do not compare against GPT-3 as it is outperformed by Jurassic-1 and Gopher on almost all subsets. Fig. 4 shows the relative improvements in bits-per-byte over our 7B transformer baseline for our

³Due to legal and ethical concerns relating to their use, we exclude the Enron Emails and the Youtube Subtitles datasets.
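As an aside on the bits-per-byte metric used throughout this section, the conversion from summed log-loss to bpb can be sketched as follows (our own shorthand, assuming the loss is accumulated in nats over whole documents):

import math

def bits_per_byte(total_nll_nats: float, document: str) -> float:
    """Tokenizer-agnostic bits-per-byte: total negative log-likelihood in nats,
    converted to bits and normalised by the document's UTF-8 byte count."""
    return total_nll_nats / (math.log(2) * len(document.encode("utf-8")))

# E.g. a 10,000-byte document scored at 5,000 nats total loss:
print(bits_per_byte(5000.0, "x" * 10000))  # ~0.72 bpb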
Figure 4 | The Pile: Comparison of our 7B baseline against Jurassic-1, Gopher, and Retro. We observe that the retrieval model outperforms the baseline on all test sets and outperforms Jurassic-1 on a majority of them, despite being over an order of magnitude smaller.

7.5B Retro model, Jurassic-1 and Gopher. Jurassic-1 outperforms the baseline on all datasets except for books, likely due to the inclusion of books in our training data. Gopher and Retro outperform the baseline on all test sets. Overall, Retro 7.5B outperforms Jurassic-1 and Gopher on a majority of the test sets. On the dm_mathematics and ubuntu_irc subsets, our Retro model does not outperform our 7B baseline and underperforms Jurassic-1. We hypothesise that the retrieved neighbours on these datasets are not helpful, due to a combination of what is in our retrieval dataset and the efficacy of the nearest-neighbour search.

Wikitext103. To validate our approach in a controlled setting, we compare our method with kNN-LM (Khandelwal et al., 2020) on the Wikitext103 dataset in Table 4. We train a baseline transformer on the training set of Wikitext103. This transformer has 24 layers, 1024 hidden units, 16 heads and a key size of 64, as in Baevski and Auli (2019). Our baseline does not have adaptive input, and our tokenizer has an open vocabulary, unlike Baevski and Auli (2019), which makes our baseline perplexities a bit higher. The full experiment details and hyperparameters are given in §C.2 and Table 11.

Table 4 | Perplexities on Wikitext103. When using the Wikipedia dataset for retrieval, Retro performs similarly to our implementation of kNN-LM. As we scale the retrieval dataset, Retro performs much better. The perplexities for retrieving from full MassiveText are quite low, which is partly due to partial overlap with Wikitext103 not caught by our deduplication.

Model                                      | Retrieval Set      | #Database tokens | #Database keys | Valid | Test
Adaptive Inputs (Baevski and Auli, 2019)   | -                  | -                | -              | 17.96 | 18.65
Spalm (Yogatama et al., 2021)              | Wikipedia          | 3B               | 3B             | 17.20 | 17.60
kNN-LM (Khandelwal et al., 2020)           | Wikipedia          | 3B               | 3B             | 16.06 | 16.12
Megatron (Shoeybi et al., 2019)            | -                  | -                | -              | -     | 10.81
Baseline transformer (ours)                | -                  | -                | -              | 21.53 | 22.96
kNN-LM (ours)                              | Wikipedia          | 4B               | 4B             | 18.52 | 19.54
Retro                                      | Wikipedia          | 4B               | 0.06B          | 18.46 | 18.97
Retro                                      | C4                 | 174B             | 2.9B           | 12.87 | 10.23
Retro                                      | MassiveText (1%)   | 18B              | 0.8B           | 18.92 | 20.33
Retro                                      | MassiveText (10%)  | 179B             | 4B             | 13.54 | 14.95
Retro                                      | MassiveText (100%) | 1792B            | 28B            | 3.21  | 3.92

We re-implement kNN-LM with our tokenizer and baseline transformer to produce embeddings of size 1024 for every token in Wikitext103. kNN-LM has probabilities p_{kNN-LM} = λ p_{kNN} + (1 − λ) p_{Lm} with p_{kNN}(n_k) ∝ exp(−α d_k). We tune λ = 0.118 and α = 0.00785 on the validation set (Fig. 7)
and report performance for these hyperparameters on both the validation and test set. We fine-tune our baseline transformer into a Retro model (Fig. 7), using the Wikitext103 training data and retrieving from Wikipedia with 2 neighbours. We only train the new weights, as explained in §4.2, and share the embedding weights between the encoder and the main pathway. This is necessary for Wikitext103, which is quite small, as training Retro from scratch in this setting leads to over-fitting. We evaluate the fine-tuned Retro model with different retrieval sets. We use 10 neighbours at evaluation for both Retro and kNN-LM. When retrieving from Wikipedia, we obtain results comparable to our kNN-LM implementation. Furthermore, scaling the retrieval database to MassiveText yields dramatic improvements, though this is partly due to leakage (see §4.4). For reproducibility, we also include results when retrieving from C4, which are close to previous state-of-the-art and comparable to using 10% of MassiveText. It is worth noting that kNN-LM requires 1024 floats for every token in the retrieval dataset, totalling 15 terabytes (TB) for the 4 billion tokens in Wikipedia. kNN-LM and other token-level retrieval approaches therefore don't scale to retrieval databases with trillions of tokens such as MassiveText. In comparison, Retro only requires 215 GB to index our Wikipedia dataset, and 93 TB for MassiveText. Inspecting the number of retrieval database entries in Table 4 makes it clear why retrieving at the chunk level is necessary when scaling to datasets with trillions of tokens.

4.2. Retro-fitting baseline models

We extend baseline models into Retro models by freezing the pre-trained weights and training only chunked cross-attention and neighbour encoder parameters (less than 10% of weights for the 7B model) in Fig. 5. This offers an efficient alternative path to enhance transformers with retrieval, requiring only 6 million sequences (3% of the pre-training sequences that we used). Additionally, by only training the new weights we ensure that when evaluated without retrieval, the original model performance is exactly maintained. Retrofitting models quickly surpasses the performance of baseline models and even achieves performance close to that of Retro models trained from scratch. The experiment hyperparameters are given in §C.3.

4.3. Question answering

We fine-tune our retrieval models on the Natural Questions (Kwiatkowski et al., 2019) dataset to demonstrate that our retrieval pathway can be used to inject information from arbitrary data sources. We use the version⁴ provided by Izacard and Grave (2021), which is augmented with the retrieved passages from Dpr (Karpukhin et al., 2020). We fine-tune all the weights of our 7.5B pre-trained Retro model for 25,000 steps, using the top 20 retrieved passages. We format the data as "question: {question} \n answer: {answer}", and left pad the data such that "answer:" coincides with the end of the first chunk of 64 tokens and thus aligns with the first retrieving chunk. The model has access to the question via the previous tokens in the sequence, as well as the top 20 DPR Wikipedia passages and their titles via the chunked cross-attention mechanism.

⁴https://github.com/facebookresearch/FiD
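A sketch of the left-padding trick just described follows; the tokenizer interface (encode, pad_id) is hypothetical, and only the alignment arithmetic reflects the paper:

def format_qa_example(question, answer, tokenizer, m=64):
    """Left-pad 'question: {question} \n answer:' so that 'answer:' ends
    exactly at the first m-token chunk boundary (Section 4.3), aligning the
    answer with the first retrieving chunk."""
    prompt_ids = tokenizer.encode(f"question: {question} \n answer:")
    pad = (-len(prompt_ids)) % m   # tokens needed to reach a multiple of m
    return [tokenizer.pad_id] * pad + prompt_ids + tokenizer.encode(f" {answer}")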
Figure 5 | Retro-fitting a baseline transformer. Any transformer can be fine-tuned into a retrieval-enhanced transformer by randomly initializing and training only the chunked cross-attention and retrieval encoder weights. Fine-tuning in this way quickly recovers and surpasses the non-retrieval performance, and almost achieves the same performance as training a retrieval model from scratch (shown by the arrow on the right-hand side of each plot). We find good performance Retro-fitting our models when training on only 3% of the number of tokens seen during pre-training.

The exact match scores are shown in Table 5 and the full fine-tuning details are given in §C.4. Our method is competitive with previous approaches such as Realm, RAG and Dpr, but underperforms the more recent FiD. In contrast with this work, we find that increasing the number of neighbours past 20 does not improve Retro performance on this task. We hypothesise that the encoder-decoder structure of T5—the base model in FiD—and the T5 pre-training objective leads to a model that relies more on the encoder output than Retro, which is important in the QA setting. To compete with T5-finetuned models, future work should consider ways of forcing Retro to rely further on the retrieval encoder output when producing tokens.

Table 5 | Question answering results. Exact match accuracy on Natural Questions.

Model                                 | Test Accuracy
Realm (Guu et al., 2020)              | 40.4
Dpr (Karpukhin et al., 2020)          | 41.5
RAG (Lewis et al., 2020)              | 44.5
Emdr2 (Sachan et al., 2021)           | 52.5
FiD (Izacard and Grave, 2021)         | 51.4
FiD + Distill. (Izacard et al., 2020) | 54.7
Baseline 7B (closed book)             | 30.4
Retro 7.5B (DPR retrieval)            | 45.5

4.4. Relating retrieval performance to dataset leakage

We report the filtered eval losses as detailed in §2.6 on C4, Curation Corpus and Wikitext103 in Fig. 6. On C4 and Wikitext103, for which there is leakage into the training set, the slope is negative for both baseline models and Retro models. Retro models exploit leakage more strongly than baseline models, as indicated by the more negative slope. This is due to their explicit ability to copy-paste existing training chunks to predict leaked evaluation chunks (see a qualitative example of this model behavior on a Wikitext103 article in Table 19).

Figure 6 | Performance vs. longest common retrieval substring. Evaluation loss as a function of allowed longest common substring between evaluation data chunks and their nearest neighbours. Retrieval still helps when considering chunks with no more than 8 contiguous tokens overlapping with training dataset chunks.

On Curation Corpus, retrieval provides a constant offset, which is expected as there is by design no leakage between Curation Corpus and the training dataset. On the other hand, Retro outperforms baseline models at all leakage levels, down to α = 12.5%. At this level, the loss is computed on chunks with less than 8 contiguous tokens shared with the closest matching chunk in the training dataset—this is a reasonable level of overlap at which we consider that there is no local leakage. Retrieval thus improves predictions on both chunks that are syntactically similar to chunks in the training set, and on chunks that are syntactically different from all training chunks. This points toward a non-trivial Retro capacity of generalizing based on both model parameters and retrieval database. Similar results are found on the Pile dataset (see Fig. 12, §F.3).
4.5. Using Retro for sampling

We show examples of samples obtained using the 7.5B Retro model in Table 6, Table 7 and Appendix E. For each chunk (the first one being the prompt), we juxtapose sampled chunks C_u with retrieved neighbours Ret(C_u). To give an indication of local overlap, we colour each sampled token in chunk C_u based on the length of the longest common prefix (LCP) found in the retrieved chunks Ret(C_{u−1}). Similarly, we colour the retrieved chunks based on the LCP in the sampled chunk. For the sample in Table 6, for which we chose the prompt, we observe that the retrieved chunks influence the sample as there are overlaps between the sampled tokens and neighbour tokens. Overall, retrieval reduces hallucinations (in line with the findings of Shuster et al. (2021)) and makes the model more knowledgeable, when comparing with samples produced with retrieval disabled. In the sample in Table 7, the model recognises that the prompt is the beginning of the first scene of Hamlet and leverages retrieval data to continue it with only a few mistakes. We provide further examples in Appendix E, including examples from the evaluation sets, as well as the detailed procedure used for colouring the tables.

5. Conclusion

We present Retrieval-Enhanced Transformers (Retro), a method for modelling arbitrary text sequences whilst retrieving from databases with trillions of tokens—scaling the data available to models by an order of magnitude compared to what is typically consumed during training. Retro model gains do not diminish for models with up to at least 7B parameters, and correspond to non-retrieval models with 10× more parameters on certain datasets. On Wikitext103 and the Pile, Retro outperforms previous models trained on large scale datasets. We also show that Retro is competitive on retrieval-intensive downstream tasks such as question answering. Retro models are flexible and can be used without retrieval at evaluation and still achieve comparable performance to baseline models. Conversely, baseline models can be rapidly fine-tuned into Retro models to obtain nearly the same performance as if trained from scratch. Careful analysis shows that only a modest fraction of the gains obtained by Retro are due to test set leakage. In general, we caution for such leakage in large-scale language datasets and suggest further work in better understanding the role of test set leakage in the performance of large-scale language models. Overall, our work demonstrates at an unprecedented scale that semi-parametric approaches can provide an orthogonal, more efficient approach than raw parameter scaling as we seek to build more powerful language models.

Acknowledgements

We would like to thank Nikolai Grigorev, Marc'aurelio Ranzato, Cyprien de Masson d'Autume, Po-Sen Huang, Johannes Welbl, Lisa Anne Hendricks, Ethan Perez, Jeff Stanway, Eric Noland, Gregory Wayne, John Jumper, Julian Schrittwieser, Lorrayne Bennett, Devang Agrawal, Dani Yogatama, Susannah Young, Nando de Freitas, Demis Hassabis, and Koray Kavukcuoglu for their help, advice and reviews. Additionally, we would like to thank Zonglin Li, David Simcha, and the ScaNN developers for their help.
16 Improving language models by retrieving from trillions of tokens Table 6 | Sample - Beavers are interesting animals. The Retro[Of f] sample quickly diverges to other animals while the Retro[On] sample tends to stay focused on the beaver topic due to neighbour conditioning. Prompt and sample of Retro[Off ] Beavers are interesting animals that live near rivers. They build Prompt and sample of Retro[On] colored by LCP with Ret (𝐶𝑢−1) LCP = 0, 1, 2, 3,4,(cid:62) 5 Beavers are interesting animals that live near rivers. They build dams to create ponds. Frogs are am phibians, so they can live in both l and and water. They have great camou flage to hide from predators. The G olden Retriever, or Golden, is a ver y big dog. They are very strong, and have a thick coat to help them live in their houses called beaver dams in the riverbeds. They also live on lan d.Beavers use their strong teeth an d strong jaws to cut down trees and branches to build their homes. They also use their teeth and jaws to che w up food. Beavers use their big, fl at tails to swim. They use cold areas. A kangaroo is an anima l that is very large. It can jump ve ry high. They can run very fast. A meerkat is a small animal that lives in Africa. They live in colonies of up to a hundred of them. They can c limb trees very easily. their sharp teeth to gnaw at rocks and cut trees. They use their flat t ails to steer when they are swimming .In the winter, beavers have to bui ld a house called a beaver lodge. Th ey build a house that is very strong . The walls are made of twigs. The r oof is made A mouse is a small mammal that lives on land. It is a very good climber and it can run very fast. Penguins are birds that live on Antarctica. T hey have a thick coat to keep them w arm. Rabbits are small animals that live in the ground. They of branches and other tree parts. T hey also use their strong jaws to cu t trees. They bring them to their ho use. They also use their sharp teeth to chew up the tree parts. They use their flat tails to swim to the top of their house. Then they use their teeth and jaws to chew up the tree [𝑁1 𝑢 , 𝐹1 𝑢 ] colored by LCP with 𝐶𝑢+1 [𝑁2 𝑢 , 𝐹2 𝑢 ] colored by LCP with 𝐶𝑢+1 LCP = 0, 1, 2, 3,4,(cid:62) 5 .Beavers build their lodges in pon ds they have created in wooded areas .Like many things in nature, there is a connection between creatures i n the wild.Beaver ponds cause tree s to drown, but the dead trees attra ct the great blue heron, which often return year after year. Over time, a beaver pond can attract more than 50 nests in a colony, called a rooke ry.An example of this can be found in the large pond off Bradford Road at Carter Fields near the Boxford l ine.Chris Leahy, an expert with th e Massachusetts Audubon Society who wrote , then they mean that you are very b usy. Beavers swim easily in streams, picking up rocks and sticks to buil d their dams. They gnaw at trees wit h their big front teeth to cut them down. Then they use parts of the tre es to build their houses.Beavers ar e clever builders. They know exactly what they need to build their beave r dams. They use mud from the stream to make their dams stay together. T hey use their tails to pat down the mud.Beavers put a snug room at the top of their dams for their babies. They store their food underwater. Be avers eat the bark from the , then they mean that you are very b usy. Beavers swim easily in streams, picking up rocks and sticks to buil d their dams. They gnaw at trees wit h their big front teeth to cut them down. 
[Tail of the preceding sample table, a reading-comprehension passage about beavers ("Engineers of the Pond"): the prompt, sample, and retrieved-neighbour columns are colour-coded by LCP (LCP = 0, 1, 2, 3, 4, ≥ 5) and are not reproducible in this extraction.]

Table 7 | Sample - Hamlet, Act 1, Scene 1. The Retro[Off] sample has correct syntax but is hallucinated, and ends with repetition of one character (FRANCISCO Approach me not). The Retro[On] sample is the correct continuation of the original text, and is robust to formatting differences between our prompt and the retrieved data.

[Four colour-coded text columns are not reproducible in this extraction: the prompt and sample of Retro[Off]; the prompt and sample of Retro[On], coloured by LCP with $\mathrm{Ret}(C_{u-1})$; and the retrieved neighbours $[N_u^1, F_u^1]$ and $[N_u^2, F_u^2]$, coloured by LCP with $C_{u+1}$ (LCP = 0, 1, 2, 3, 4, ≥ 5).]

References

M. Abadi, A. Chu, I. Goodfellow, H. B. McMahan, I. Mironov, K. Talwar, and L. Zhang. Deep learning with differential privacy. In ACM SIGSAC Conference on Computer and Communications Security, 2016.

S. Ahn, H. Choi, T. Pärnamaa, and Y. Bengio. A neural knowledge language model. arXiv preprint arXiv:1608.00318, 2016.

A. Baevski and M. Auli. Adaptive input representations for neural language modeling. In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=ByxZX20qFQ.

Y. Belinkov, S. Gehrmann, and E. Pavlick. Interpretability and analysis in neural NLP. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: Tutorial Abstracts, pages 1-5, Online, July 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.acl-tutorials.1. URL https://aclanthology.org/2020.acl-tutorials.1.

E. M. Bender, T. Gebru, A. McMillan-Major, and S. Shmitchell. On the dangers of stochastic parrots: Can language models be too big? In ACM Conference on Fairness, Accountability, and Transparency, 2021.

D. M. Blei, A. Y. Ng, and M. I. Jordan. Latent Dirichlet Allocation. Journal of Machine Learning Research, 3(Jan):993-1022, 2003. URL https://jmlr.csail.mit.edu/papers/v3/blei03a.html.

J. Bradbury, R. Frostig, P. Hawkins, M. J. Johnson, C. Leary, D. Maclaurin, G. Necula, A. Paszke, J. V. der Plas, S. Wanderman-Milne, and Q. Zhang. JAX: composable transformations of Python+NumPy programs, 2018. URL http://github.com/google/jax.

T. Brants, A. C. Popat, P. Xu, F. J. Och, and J. Dean. Large language models in machine translation. In Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 858-867, 2007.

T. Brown, B. Mann, N. Ryder, M. Subbiah, J. D. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell, S. Agarwal, A. Herbert-Voss, G. Krueger, T. Henighan, R. Child, A. Ramesh, D. Ziegler, J. Wu, C. Winter, C. Hesse, M. Chen, E. Sigler, M. Litwin, S. Gray, B. Chess, J. Clark, C. Berner, S. McCandlish, A. Radford, I. Sutskever, and D. Amodei. Language models are few-shot learners. In Advances in Neural Information Processing Systems, 2020. URL https://proceedings.neurips.cc/paper/2020/file/1457c0d6bfcb4967418bfb8ac142f64a-Paper.pdf.

N. Carlini, F. Tramer, E. Wallace, M. Jagielski, A. Herbert-Voss, K. Lee, A. Roberts, T. Brown, D. Song, U. Erlingsson, A. Oprea, and C. Raffel. Extracting training data from large language models. Preprint, 2021.

C. Consonni, D. Laniado, and A. Montresor. WikiLinkGraphs: A complete, longitudinal and multi-language dataset of the Wikipedia link networks. In AAAI International Conference on Web and Social Media, volume 13, 2019.

Curation. Curation corpus base, 2020.

Z. Dai, Z. Yang, Y. Yang, J. Carbonell, Q. Le, and R. Salakhutdinov. Transformer-XL: Attentive language models beyond a fixed-length context. In Annual Meeting of the Association for Computational Linguistics, July 2019. URL https://aclanthology.org/P19-1285.

J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In Conference of the North American Chapter of the Association for Computational Linguistics, June 2019. URL https://aclanthology.org/N19-1423.
L. Gao, S. Biderman, S. Black, L. Golding, T. Hoppe, C. Foster, J. Phang, H. He, A. Thite, N. Nabeshima, S. Presser, and C. Leahy. The Pile: An 800GB dataset of diverse text for language modeling. arXiv preprint arXiv:2101.00027, 2020.

S. Gehman, S. Gururangan, M. Sap, Y. Choi, and N. A. Smith. RealToxicityPrompts: Evaluating neural toxic degeneration in language models. In Conference on Empirical Methods in Natural Language Processing, Nov. 2020. URL https://aclanthology.org/2020.findings-emnlp.301.

E. Grave, A. Joulin, and N. Usunier. Improving neural language models with a continuous cache. In International Conference on Learning Representations, 2017. URL https://openreview.net/forum?id=B184E5qee.

A. Graves. Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850, 2013.

J. Gu, Y. Wang, K. Cho, and V. O. Li. Search engine guided neural machine translation. In AAAI Conference on Artificial Intelligence, 2018.

R. Guo, P. Sun, E. Lindgren, Q. Geng, D. Simcha, F. Chern, and S. Kumar. Accelerating large-scale inference with anisotropic vector quantization. In International Conference on Machine Learning, 2020. URL https://arxiv.org/abs/1908.10396.

K. Guu, K. Lee, Z. Tung, P. Pasupat, and M. Chang. Retrieval augmented language model pre-training. In International Conference on Machine Learning, 2020.

H. Hashemi, H. Zamani, and W. B. Croft. Guided transformer: Leveraging multiple external sources for representation learning in conversational search. In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 1131-1140, 2020.

T. Hennigan, T. Cai, T. Norman, and I. Babuschkin. Haiku: Sonnet for JAX, 2020. URL http://github.com/deepmind/dm-haiku.

G. Izacard and E. Grave. Leveraging passage retrieval with generative models for open domain question answering. In Conference of the European Chapter of the Association for Computational Linguistics, Apr. 2021. URL https://aclanthology.org/2021.eacl-main.74.

G. Izacard, F. Petroni, L. Hosseini, N. De Cao, S. Riedel, and E. Grave. A memory efficient baseline for open domain question answering. arXiv preprint arXiv:2012.15156, 2020.

S. Jain and B. C. Wallace. Attention is not Explanation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3543-3556, Minneapolis, Minnesota, June 2019. Association for Computational Linguistics. doi: 10.18653/v1/N19-1357. URL https://aclanthology.org/N19-1357.

E. S. Jo and T. Gebru. Lessons from archives: Strategies for collecting sociocultural data in machine learning. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, pages 306-316, 2020.

R. Jozefowicz, O. Vinyals, M. Schuster, N. Shazeer, and Y. Wu. Exploring the limits of language modeling. arXiv preprint arXiv:1602.02410, 2016.

J. Kaplan, S. McCandlish, T. Henighan, T. B. Brown, B. Chess, R. Child, S. Gray, A. Radford, J. Wu, and D. Amodei. Scaling laws for neural language models. CoRR, 2020. URL https://arxiv.org/abs/2001.08361.

V. Karpukhin, B. Oguz, S. Min, P. Lewis, L. Wu, S. Edunov, D. Chen, and W.-t. Yih. Dense passage retrieval for open-domain question answering. In Conference on Empirical Methods in Natural Language Processing, Nov. 2020. URL https://aclanthology.org/2020.emnlp-main.550.
U. Khandelwal, O. Levy, D. Jurafsky, L. Zettlemoyer, and M. Lewis. Generalization through memorization: Nearest neighbor language models. In International Conference on Learning Representations, 2020. URL https://openreview.net/forum?id=HklBjCEKvH.

M. Komeili, K. Shuster, and J. Weston. Internet-augmented dialogue generation. arXiv preprint arXiv:2107.07566, 2021.

T. Kudo and J. Richardson. SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing. arXiv preprint arXiv:1808.06226, 2018.

T. Kwiatkowski, J. Palomaki, O. Redfield, M. Collins, A. Parikh, C. Alberti, D. Epstein, I. Polosukhin, M. Kelcey, J. Devlin, K. Lee, K. N. Toutanova, L. Jones, M.-W. Chang, A. Dai, J. Uszkoreit, Q. Le, and S. Petrov. Natural Questions: a benchmark for question answering research. Transactions of the Association of Computational Linguistics, 7:452-466, Mar. 2019. URL https://aclanthology.org/Q19-1026.

A. Lazaridou, A. Kuncoro, E. Gribovskaya, D. Agrawal, A. Liska, T. Terzi, M. Gimenez, C. de Masson d'Autume, S. Ruder, D. Yogatama, K. Cao, T. Kociský, S. Young, and P. Blunsom. Pitfalls of static language modelling. CoRR, 2021. URL https://arxiv.org/abs/2102.01951.

K. Lee, M.-W. Chang, and K. Toutanova. Latent Retrieval for Weakly Supervised Open Domain Question Answering. In Annual Meeting of the Association for Computational Linguistics, June 2019. URL http://arxiv.org/abs/1906.00300.

K. Lee, D. Ippolito, A. Nystrom, C. Zhang, D. Eck, C. Callison-Burch, and N. Carlini. Deduplicating training data makes language models better. arXiv preprint arXiv:2107.06499, 2021.

P. Lewis, E. Perez, A. Piktus, F. Petroni, V. Karpukhin, N. Goyal, H. Küttler, M. Lewis, W.-t. Yih, T. Rocktäschel, S. Riedel, and D. Kiela. Retrieval-augmented generation for knowledge-intensive NLP tasks. In Advances in Neural Information Processing Systems, 2020. URL https://proceedings.neurips.cc/paper/2020/file/6b493230205f780e1bc26945df7481e5-Paper.pdf.

P. Lewis, P. Stenetorp, and S. Riedel. Question and answer test-train overlap in open-domain question answering datasets. In Conference of the European Chapter of the Association for Computational Linguistics, Apr. 2021. URL https://aclanthology.org/2021.eacl-main.86.

O. Lieber, O. Sharir, B. Lenz, and Y. Shoham. Jurassic-1: Technical details and evaluation. White Paper, AI21 Labs, 2021.

I. Loshchilov and F. Hutter. Decoupled weight decay regularization. In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=Bkg6RiCqY7.

S. Merity, C. Xiong, J. Bradbury, and R. Socher. Pointer sentinel mixture models. In International Conference on Learning Representations, 2017. URL https://openreview.net/forum?id=Byj72udxe.

T. Mikolov, M. Karafiát, L. Burget, J. Černocký, and S. Khudanpur. Recurrent neural network based language model. Interspeech, 2(3):1045-1048, 2010.

D. Paperno, G. Kruszewski, A. Lazaridou, N. Q. Pham, R. Bernardi, S. Pezzelle, M. Baroni, G. Boleda, and R. Fernández. The LAMBADA dataset: Word prediction requiring a broad discourse context. In Annual Meeting of the Association for Computational Linguistics, Aug. 2016. URL https://aclanthology.org/P16-1144.

A. Radford, J. Wu, R. Child, D. Luan, D. Amodei, and I. Sutskever. Language models are unsupervised multitask learners. Preprint, 2019.
J. Rae, S. Borgeaud, T. Cai, K. Millican, J. Hoffmann, F. Song, J. Aslanides, S. Henderson, R. Ring, S. Young, E. Rutherford, T. Hennigan, J. Menick, A. Cassirer, R. Powell, G. van den Driessche, L. A. Hendricks, M. Rauh, P.-S. Huang, A. Glaese, J. Welbl, S. Dathathri, S. Huang, J. Uesato, J. Mellor, I. Higgins, A. Creswell, N. McAleese, A. Wu, E. Elsen, S. Jayakumar, E. Buchatskaya, D. Budden, E. Sutherland, K. Simonyan, M. Paganini, L. Sifre, L. Martens, X. L. Li, A. Kuncoro, A. Nematzadeh, E. Gribovskaya, D. Donato, A. Lazaridou, A. Mensch, J.-B. Lespiau, M. Tsimpoukelli, N. Grigorev, D. Fritz, T. Sottiaux, M. Pajarskas, T. Pohlen, Z. Gong, D. Toyama, C. de Masson d'Autume, Y. Li, T. Terzi, V. Mikulik, I. Babuschkin, A. Clark, D. de Las Casas, A. Guy, J. Bradbury, M. Johnson, B. Hechtman, L. Weidinger, I. Gabriel, W. Isaac, E. Lockhart, S. Osindero, L. Rimell, C. Dyer, O. Vinyals, K. Ayoub, J. Stanway, L. Bennett, D. Hassabis, K. Kavukcuoglu, and G. Irving. Scaling language models: Methods, analysis & insights from training Gopher. arXiv submission, 2021.

C. Raffel, N. Shazeer, A. Roberts, K. Lee, S. Narang, M. Matena, Y. Zhou, W. Li, and P. J. Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21(140):1-67, 2020. URL http://jmlr.org/papers/v21/20-074.html.

S. Rajbhandari, J. Rasley, O. Ruwase, and Y. He. ZeRO: Memory optimizations toward training trillion parameter models. In IEEE International Conference for High Performance Computing, Networking, Storage and Analysis, 2020.

S. Robertson and H. Zaragoza. The probabilistic relevance framework: BM25 and beyond. Foundations and Trends in Information Retrieval, 3:333-389, Jan. 2009.

D. S. Sachan, S. Reddy, W. Hamilton, C. Dyer, and D. Yogatama. End-to-end training of multi-document reader and retriever for open-domain question answering. arXiv preprint arXiv:2106.05346, 2021.

R. Schwartz, J. Dodge, N. A. Smith, and O. Etzioni. Green AI. Communications of the Association for Computing Machinery, 63(12):54-63, Nov. 2020.

M. Shoeybi, M. Patwary, R. Puri, P. LeGresley, J. Casper, and B. Catanzaro. Megatron-LM: Training multi-billion parameter language models using model parallelism. CoRR, 2019. URL http://arxiv.org/abs/1909.08053.

K. Shuster, S. Poff, M. Chen, D. Kiela, and J. Weston. Retrieval augmentation reduces hallucination in conversation. arXiv:2104.07567 [cs], Apr. 2021. URL http://arxiv.org/abs/2104.07567.

E. Strubell, A. Ganesh, and A. McCallum. Energy and policy considerations for deep learning in NLP. In Association for Computational Linguistics, July 2019. URL https://aclanthology.org/P19-1355.

A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin. Attention is all you need. In Advances in Neural Information Processing Systems, 2017. URL https://proceedings.neurips.cc/paper/2017/file/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf.

X. Wei and W. B. Croft. LDA-based document models for ad-hoc retrieval. In ACM SIGIR International Conference on Research and Development in Information Retrieval, 2006. URL http://portal.acm.org/citation.cfm?doid=1148170.1148204.

L. Weidinger, I. Gabriel, C. Griffin, M. Rauh, J. Uesato, J. Mellor, W. Isaac, P.-S. Huang, L. A. Hendricks, M. Cheng, B. Balle, J. Haas, C. Biles, L. Rimell, W. Hawkins, M. Glaese, A. Kasirzadeh, Z. Kenton, S. Brown, A. Birhane, T. Stepleton, G. Irving, and S. Legassick. Ethical and social risks of harm from language models. arXiv submission, 2021.
D. Yogatama, C. de Masson d'Autume, and L. Kong. Adaptive semiparametric language models. Transactions of the Association for Computational Linguistics, 9:362-373, 2021.

B. Zhang and R. Sennrich. Root mean square layer normalization. In Advances in Neural Information Processing Systems, 2019. URL https://proceedings.neurips.cc/paper/2019/file/1e8a19426224ca89e83cef47f1e7f53b-Paper.pdf.

J. Zhang, M. Utiyama, E. Sumita, G. Neubig, and S. Nakamura. Guiding neural machine translation with retrieved translation pieces. In Conference of the North American Chapter of the Association for Computational Linguistics, 2018.

A. Datasets

We provide a full description of MassiveText and of our extract of recent Wikipedia articles.

A.1. Full description of MassiveText

The full breakdown of MassiveText by source and language is given in Table 8. For a full description and analysis of MassiveText, see Rae et al. (2021).

Source      Language   Token count (M)   Documents        Sampling weight
Web         En         483,002           604,938,816      0.314
Web         Ru         103,954           93,004,882       0.033
Web         Es         95,762            126,893,286      0.033
Web         Zh         95,152            121,813,451      0.033
Web         Fr         59,450            76,612,205       0.033
Web         De         57,546            77,242,640       0.033
Web         Pt         44,561            62,524,362       0.033
Web         It         35,255            42,565,093       0.033
Web         Sw         2,246             1,971,234        0.0044
Web         Ur         631               455,429          0.0011
Books       En         3,423,740         20,472,632       0.25
News        En         236,918           397,852,713      0.1
Wikipedia   En         3,977             6,267,214        0.0285
Wikipedia   De         2,155             3,307,818        0.003
Wikipedia   Fr         1,783             2,310,040        0.003
Wikipedia   Ru         1,411             2,767,039        0.003
Wikipedia   Es         1,270             2,885,013        0.003
Wikipedia   It         1,071             2,014,291        0.003
Wikipedia   Zh         927               1,654,772        0.003
Wikipedia   Pt         614               1,423,335        0.003
Wikipedia   Ur         61                344,811          0.0001
Wikipedia   Sw         15                58,090           0.0004
Github      -          374,952           142,881,832      0.05
Total       -          5,026,463         1,792,260,998    1

Table 8 | MassiveText dataset. The final column indicates the sampling weight for each dataset during training. For the retrieval database, the entire dataset is used, with the exception of books, for which we use a sub-sample of 4%.

A.2. Wikipedia September 2021

We create an evaluation dataset consisting of 23 Wikipedia articles that were added or heavily edited in September 2021, after we collected our training dataset. In addition, we filter out articles that rely too heavily on templated content, using the method detailed in §2.6 to identify articles with chunks that have a high overlap with their neighbours. Fig. 10 shows that little overlap remains between our test dataset and the retrieved neighbours from the training dataset. The full list of included articles is given in Table 9.

Table 9 | Full set of articles included in our Wikipedia Sept. 2021 evaluation dataset.

Megan Rohrer
Emma Raducanu
Ambra Sabatini
WhyDonate
The Juggernaut (company)
Angela Diaz
2020 Summer Paralympics
2021 Afghan protests
Rexh Xhakli
Julia Laskin
Cuijk
Ghoubet Wind Power Station
Aakashavaani
Junior Eurovision Song Contest 2021
Pavilion Bukit Jalil
Blake Desjarlais
2021 All-Ireland Senior Football Championship Final
Drift-barrier hypothesis
Venomics
Great Circle (novel)
Hurricane Ida
2021 Montenegrin episcopal enthronement protests
At War With the Silverfish

We first parse articles using mwparserfromhell (https://github.com/earwig/mwparserfromhell). We then remove sections with the following titles: "references", "external links", "sources", "further reading", "see also", "citations", and "note". In the remaining sections, we remove Wikilinks and remove the following templates: "reflist", "notelist", "notelist-ua", "notelist-lr", "notelist-ur", and "notelist-lg". We also exclude objects with the "ref" or "table" tag and clean the remaining text with the strip_code function. Finally, we concatenate the title and all the sections and use \n\n to delimit them.
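A minimal sketch of this preprocessing is given below. It is not the authors' actual pipeline: it assumes mwparserfromhell's standard section and filtering API, and the exact parsing flags are illustrative.

import mwparserfromhell

EXCLUDED_SECTIONS = {"references", "external links", "sources",
                     "further reading", "see also", "citations", "note"}
EXCLUDED_TEMPLATES = {"reflist", "notelist", "notelist-ua", "notelist-lr",
                      "notelist-ur", "notelist-lg"}

def clean_article(title, raw_wikitext):
    # Parse the article and keep only prose sections, as described above.
    parsed = mwparserfromhell.parse(raw_wikitext)
    kept = []
    for section in parsed.get_sections(flat=True, include_lead=True,
                                       include_headings=True):
        headings = section.filter_headings()
        if headings and str(headings[0].title).strip().lower() in EXCLUDED_SECTIONS:
            continue
        # Drop excluded templates and "ref"/"table" tags before stripping markup.
        for template in section.filter_templates():
            if str(template.name).strip().lower() in EXCLUDED_TEMPLATES:
                section.remove(template)
        for tag in section.filter_tags(
                matches=lambda node: str(node.tag).lower() in ("ref", "table")):
            section.remove(tag)
        # strip_code also strips the remaining Wikilinks' markup.
        kept.append(section.strip_code().strip())
    # Concatenate the title and all sections, delimited by \n\n.
    return "\n\n".join([title] + kept)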
B. Details on the retrieval architecture

We give details on the Retro architecture, and on the fine-tuning procedure we use for Retrofitting existing language models.

B.1. Retro architecture and implementation

B.1.1. Feed-forward architecture

As mentioned in the main text, the overall encoder-decoder architecture is fully feed-forward. We start with a sequence $X \in \mathbb{V}^n = (C_u)_{1 \leq u \leq l}$ and its pre-computed neighbours $(\mathrm{Ret}(C_u))_{1 \leq u \leq l}$, and return logits in $\mathbb{R}^{n \times |\mathbb{V}|}$. Along with the Attn, Ffw, Cca and Ca operators introduced in the main text, we define the decoder embedding layer $\mathrm{Emb}: \mathbb{V}^n \to \mathbb{R}^{n \times d}$, the Split operator that extracts chunked intermediary embeddings, $\mathrm{Split}(H) \triangleq (H_u)_{1 \leq u \leq l} \in \mathbb{R}^{l \times m \times d}$, and the read-out layer $\mathrm{Read}: \mathbb{R}^{n \times d} \to \mathbb{R}^{n \times |\mathbb{V}|}$. We then describe the forward pass in Algorithm 1. In addition to the usual Transformer ones, the Retro architecture hyperparameters involve the layer indices $P_{\mathrm{enc}}$ and $P$, at which the encoder and the decoder perform cross-attention.

B.1.2. Relative positional encoding in the chunked cross-attention layer

The Ca operator uses relative positional logits, which are computed from a specific relative distance separating data tokens from retrieval tokens. Indeed, we expect any retrieval neighbour $\mathrm{Ret}(C_u)^j$ and the chunk $C_u$ to be relatively well aligned, and assume that they start at the same position. Therefore, when computing $\mathrm{Ca}(H_u^+, E_u)$, we set the distance between the data token $i \in [1, l]$ of chunk $C_u^+$ and the retrieval token $i' \in [1, 2l]$ of $\mathrm{Ret}(C_u)^j$ to be

$d(i, i') \triangleq i - i' + l - 1.$  (6)

When computing the encoder cross-attentions $\mathrm{Ca}(\mathrm{Ret}(C_u)^j, H_u)$, we set the distance between the retrieval token $i' \in [1, 2l]$ and the data token $i \in [1, l]$ to be

$d_{\mathrm{enc}}(i', i) \triangleq i' - i.$  (7)

Positional logits are obtained as a linear transform of a cosine vector computed from $(d(i, i'))_{i,i'}$, and are added to content logits, as in a regular self-attention block.

B.1.3. Chunked cross-attention implementation

Our implementation of the Cca operator, shown in Listing 1, is based on a vectorized application of a cross-attention layer. For simplicity, we omit the multi-head attention logic and use the simplest Q, K, V attention. We omit the relative positional logits computation, described above.

B.1.4. Optional sharing of embedding matrices

We use disjoint embeddings for the encoder and decoder by default, which allows us to use a different dimensionality for the encoder (typically kept at $d_{\mathrm{Enc}} = 896$) and for the decoder (that we scale up to $d = 8192$). It is possible to share the embeddings, with little difference in training, as we show in the ablation section.

B.2. Baseline to Retro model fine-tuning

As shown in Fig. 5, we found that we were able to take a pre-trained baseline transformer and add Retro through fine-tuning. In all cases, we froze all weights from pre-training and freshly initialised the retrieval encoder and cross-attention weights. In all cases, the cross-attention is added every third layer starting at layer six. The learning rate for the three smaller models was set to 2 × 10⁻⁴, and half that for the largest model. We experimented with allowing the entire model to resume training during fine-tuning, but consistently found that the best approach was to freeze the pre-trained model. This kept the retrieval-off performance frozen, whereas when all weights were tuned the retrieval-off performance would degrade.
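A minimal sketch of this freezing scheme (not the actual training code): mask the optimiser so that only the freshly initialised weights receive updates. The module-name markers used to select the new weights below are hypothetical, and the weight decay value is reused from §C.1 for illustration only.

import jax
import optax

def make_retrofit_optimiser(params, learning_rate):
    # Haiku-style params: {module_name: {param_name: array}}.
    # "retrieval_encoder" / "cross_attention" are hypothetical markers for
    # the freshly initialised Retro weights; everything else stays frozen.
    def is_new(module_name):
        return ("retrieval_encoder" in module_name
                or "cross_attention" in module_name)

    mask = {name: jax.tree_util.tree_map(lambda _: is_new(name), leaves)
            for name, leaves in params.items()}
    return optax.masked(optax.adamw(learning_rate, weight_decay=0.1), mask)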
C. Training details and hyperparameters

We provide the hyperparameters used in the various experiments of §4.

C.1. Language model pre-training

In Table 10, we show the hyperparameters of the different models we train. In all cases, we train for 419,430,400,000 training tokens. The three smaller models are trained with a batch size of 256, and the largest model is trained with a batch size of 1024. The minimum learning rate is set to 0.1 times the maximum learning rate, which is shown in Table 10. The learning rate is decayed using a cosine cycle length that matches the total number of training tokens. All models are trained using AdamW (Loshchilov and Hutter, 2019) with a weight decay parameter of 0.1. The learning rate linearly increases from 10⁻⁷ to the maximum learning rate over the first 750 steps of training. All models use ZeRO to shard the optimiser state (Rajbhandari et al., 2020). Additional infrastructure details can be found in Rae et al. (2021).
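This warmup-plus-cosine schedule maps directly onto optax; below is a sketch, in which the sequence length used to convert the token budget into steps is an assumption.

import optax

TOTAL_TOKENS = 419_430_400_000
BATCH_SIZE = 256     # the three smaller models
SEQ_LEN = 2048       # assumption: tokens per training sequence
MAX_LR = 2e-4        # from Table 10

steps = TOTAL_TOKENS // (BATCH_SIZE * SEQ_LEN)
schedule = optax.warmup_cosine_decay_schedule(
    init_value=1e-7,         # linear ramp starts at 10^-7
    peak_value=MAX_LR,
    warmup_steps=750,        # over the first 750 steps
    decay_steps=steps,       # cosine cycle matches the total training tokens
    end_value=0.1 * MAX_LR,  # minimum LR is 0.1 times the maximum
)
optimiser = optax.adamw(learning_rate=schedule, weight_decay=0.1)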
Listing 1 | Jax implementation of the chunked cross attention, simplified.

import jax
import jax.numpy as jnp

n = 128      # Sequence length
m = 16       # Chunk length
r = 32       # Retrieval length
k = 4        # Number of neighbours
d = 16       # Embedding size
l = n // m   # Number of chunks

# Parameters
Q = jnp.zeros((d, d))
K = jnp.zeros((d, d))
V = jnp.zeros((d, d))

def relative_positional_encodings(attending_length, attended_length):
    # Classical relative positional encodings
    ...

def cross_attention(chunk, neighbour):
    m, d = chunk.shape
    r, d = neighbour.shape
    queries = chunk @ Q
    keys = neighbour @ K
    logits = queries @ keys.T
    values = neighbour @ V
    return logits, values

def multi_neighbour_cross_attention(chunk, neighbours):
    m, d = chunk.shape
    k, r, d = neighbours.shape
    logits, values = jnp.vectorize(
        cross_attention,
        signature='(m,d),(r,d)->(m,r),(r,d)')(chunk, neighbours)
    assert logits.shape == (k, m, r)
    assert values.shape == (k, r, d)
    logits += relative_positional_encodings(m, r)[None, :, :]
    logits = jnp.moveaxis(logits, 0, -1).reshape((m, r * k))
    values = jnp.moveaxis(values, 0, 1).reshape((r * k, d))
    return jax.nn.softmax(logits) @ values

def multi_chunk_cross_attention(observation, neighbours):
    attending_chunks = jnp.pad(observation[m-1:],
                               ((0, m - 1), (0, 0)),
                               mode='constant').reshape(l, m, d)
    chunked_output = jnp.vectorize(
        multi_neighbour_cross_attention,
        signature='(m,d),(k,r,d)->(m,d)')(attending_chunks, neighbours)
    assert chunked_output.shape == (l, m, d)
    output = jnp.pad(chunked_output.reshape(n, d),
                     ((m - 1, 0), (0, 0)),
                     mode='constant')[:n]
    return output

observation = jnp.zeros((n, d))        # Input
neighbours = jnp.zeros((l, k, r, d))
h = multi_chunk_cross_attention(observation, neighbours)
assert h.shape == (n, d)               # Output

Table 10 | Retro model hyperparameters, along with the size of the decoder.

Baseline  d_model  d_ffw   # heads  Head size  # layers  P                 P_Enc  Max LR
247M      896      3584    16       64         12        [6, 9, 12]        [1]    2 × 10⁻⁴
564M      1536     6144    12       128        12        [6, 9, 12]        [1]    2 × 10⁻⁴
1,574M    2048     8192    16       128        24        [9, 12, ..., 24]  [1]    2 × 10⁻⁴
7,505M    4096     16384   32       128        32        [9, 12, ..., 32]  [1]    1 × 10⁻⁴

Table 11 | Hyperparameters for the Wikitext103 experiments presented in Table 4. We use the same learning rate schedule for the baseline and the Retro-fitting. For Retro-fitting, we reset the schedule, i.e. the schedule starts from step 0, not from step 35,000.

Model          Number of layers           18
               d                          1024
               d_Ffw                      4096
               Key size                   64
               Value size                 64
               Number of heads            16
Training data  Dataset                    Wikitext103train
               Sequence length            3072
               Batch size                 128
               Tokenizer vocabulary size  128,000
Optimisation   Optimiser                  Adam
               Adam's β1                  0.9
               Adam's β2                  0.95
               Adam's ε                   1e-8
               Dropout rate               0.25
Schedule       Learning rate start        1e-7
               Learning rate max          2.5e-4
               Learning rate min          2e-5
               Warmup steps               4,000
               Cosine cycle steps         100,000
Evaluation     Overlapping proportion     87.5%

C.2. Wikitext103 comparison

We provide more details on our Wikitext103 results presented in §4.1 and Table 4. We train a baseline transformer on the Wikitext103 training set with the hyperparameters presented in Table 11. The learning rate ramps linearly from 1 × 10⁻⁷ to 2.5 × 10⁻⁴ in the first 4,000 steps, then decays to 2 × 10⁻⁵ at 100,000 steps using a cosine schedule. The baseline checkpoint at step 35,000 has the lowest perplexity on Wikitext103 valid, of 21.58, for an overlapping proportion of 75% (sliding window evaluation that only uses probabilities for tokens that have at least 75% of the sequence length of context, when available). We use this checkpoint for all our baseline and kNN-LM numbers reported in Table 4, except that Table 4 reports for an overlapping proportion of 87.5%, which slightly lowers the perplexity of our baseline to 21.53 on Wikitext103 valid.

We also use the 35,000-step baseline checkpoint as initialization for a Retrofit, which otherwise uses the same optimiser and schedule hyperparameters but only trains the new retrieval weights, as explained in §4.2. Our best Retrofit checkpoint has a Wikitext103 valid perplexity of 18.46, when retrieving from Wikipedia. We use this Retro checkpoint in Table 4 for all other retrieval sets. The evaluation curves for our baseline and Retrofit are shown in Fig. 7 (left). In this particular case, because Wikitext103 is quite small, training a Retro model from scratch led to weaker results than the baseline, at least when retrieving from Wikipedia, as we could not find an effective way to mitigate the increased over-fitting due to the additional weights of Retro.

We also re-implement kNN-LM using the same tokenizer and dataset that we use for our baseline and Retrofitting experiments. kNN-LM has probabilities $p_{k\mathrm{NN\text{-}LM}} = \lambda p_{\mathrm{LM}} + (1 - \lambda) p_{k\mathrm{NN}}$ with $p_{k\mathrm{NN}}(n_k) \propto \exp(-\alpha d_k)$. To tune $\lambda$ and $\alpha$, we begin with $\alpha = 0.0012$, which corresponds to the inverse of the standard deviation of the norm of the embeddings that we use as keys and queries for kNN-LM. We find the best $\lambda = 0.118$. We then find the best $\alpha = 0.00785$ for that value of $\lambda$. Fig. 7, center and right, respectively show the perplexity of kNN-LM as a function of $\lambda$ and $\alpha$.

Figure 7 | Wikitext103 valid perplexities. Left: Baseline and Retrofit (initialized from baseline's checkpoint at 35,000 steps) perplexities as a function of training steps. Center and right: kNN-LM perplexity as a function of $\lambda$ (for $\alpha = 0.0012$) and $\alpha$ (for $\lambda = 0.12$), respectively.
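The interpolation above is simple to express directly; here is a sketch using the tuned values, where scattering the normalised neighbour weights onto the vocabulary is one standard way of forming $p_{k\mathrm{NN}}$.

import jax
import jax.numpy as jnp

def knn_lm_probs(p_lm, neighbour_distances, neighbour_token_ids,
                 vocab_size, lam=0.118, alpha=0.00785):
    # p_kNN(n_k) is proportional to exp(-alpha * d_k): normalise over the
    # retrieved items, then scatter onto the next-token vocabulary.
    weights = jax.nn.softmax(-alpha * neighbour_distances)
    p_knn = jnp.zeros(vocab_size).at[neighbour_token_ids].add(weights)
    return lam * p_lm + (1.0 - lam) * p_knn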
C.3. Retrofitting baseline models experiments

In Table 12, we give the hyperparameters used for Retrofitting the models on MassiveText.

Table 12 | Hyperparameters for the Retrofitting experiments

Model  Layers with Retro-block (P)  Learning rate           Batch size
172M   Every 3rd from 6             2 × 10⁻⁴ → 2 × 10⁻⁵     256
425M   Every 3rd from 6             2 × 10⁻⁴ → 2 × 10⁻⁵     256
1.5B   Every 3rd from 6             2 × 10⁻⁴ → 2 × 10⁻⁵     256
7.5B   Every 3rd from 6             1 × 10⁻⁴ → 1 × 10⁻⁵     256

C.4. Question answering experiments

We fine-tune our 7.5B Retro model for 25,000 steps, using a batch size of 128, a learning rate cosine scheduled from 10⁻⁶ to 10⁻⁷, with a linear ramp of 750 steps. We use dropout in the decoder only, as it performs better than using dropout in both the encoder and the decoder. Each neighbour is formatted as title: {title}, source: {source}. We use the top 20 neighbours from Dpr when training and evaluating.

Table 13 | Performance of Retro for different variants. Model performance on the C4 evaluation set, measured in bits-per-bytes, for a 247M parameter model trained with a 157 billion token schedule.

Ablation group            Ablation                     C4 eval bpb
Model                     Retro                        0.822
                          No query conditioning        0.829
                          No CA positional encodings   0.826
                          Shared embeddings            0.823
                          6-layer encoder              0.821
Retrieval values          Neighbours N                 0.950
                          Continuations F              0.895
                          No retrieval                 0.987
Training neighbours       1 training neighbour         0.858
                          4 training neighbours        0.847
Cross attention position  CA top layer (1/12)          0.827
                          CA mid layer (6/12)          0.823
                          CA top layer (12/12)         0.831
                          CA all layers                0.860
                          CA every 3 from 1            0.823

D. Model ablations

We validate important design choices by evaluating what happens when we do not include them. We use the 247M parameter model for all experiments, and we train on a compressed 157 billion token schedule for all ablation experiments. We describe results relative to the default settings presented in the main text and recalled here. We report C4 evaluation loss at the end of the training process, and also compare how the evaluation loss decreases versus the training time, measured relative to the baseline training time. Results are reported in Fig. 8 and Table 13.

Using relative encodings in cross-attention. Using relative encodings in cross-attention, as described in §B.1.2, provides a pure improvement both in the number of steps needed to reach a given performance and in computational efficiency.

Conditioning the encoder on the previous chunk. Conditioning the encoder on the previous chunk's intermediate embeddings, as described in §B.1.1, provides a pure improvement both in terms of number of steps and computational efficiency.

Sharing embeddings. Sharing embeddings across the encoder and the decoder does not affect performance. This motivates using separate embeddings, as it allows us to have a narrower encoder than decoder as we scale up the decoder size.

Attending neighbours and their continuation. Retro models are trained by attending, for a given chunk, to both the neighbours of the preceding chunk and their continuation in time. We measure how training and evaluating Retro models on neighbours only and their continuation only affects performance.
Overall, attending to neighbours only provides 22% of the performance improvement due to retrieval in Retro, while attending to the future of the neighbours gives 56% of the performance. Attending to both neighbours and their continuation is the most efficient choice both in terms of final performance and training efficiency.

Figure 8 | Computational efficiency for different variants. We report the training curves plotting C4 evaluation bits-per-bytes against time, relative to the time taken to train the baseline Retro model. Overall, our design choices are optimal in terms of computational efficiency.

Training a deeper encoder. All models in the text use a relatively small Retro encoder. We experimented with a 3× deeper encoder. We found that this resulted in a tiny decrease in loss (0.15%) at the cost of a larger training time (+20%). Overall, using a shallow encoder is the best choice in terms of training efficiency.

Training with multiple neighbours. We measure the effect of training on a single retrieved neighbour, as well as training on 4 neighbours (Retro uses 2 neighbours in training). Training on a single neighbour results in a large decrease in performance, while training on 4 neighbours does not give a substantial performance improvement at the end of training, but induces a large computational overhead. Overall, we find that using 2 neighbours is the best choice in terms of training efficiency. Furthermore, evaluation can be done with additional neighbours.

Frequency of cross-attention. We measure how the frequency of cross-attention in the decoder affects performance. Overall, attending only once at the top or the bottom layer is a bad choice, while attending once on a mid-depth layer is relatively sound. We choose to have cross-attention every 3 layers, as this provides a good trade-off between performance and run-time.

E. Qualitative experiments

We illustrate the usage of Retro models by looking at the perplexity of evaluation samples and by producing samples autoregressively.

E.1. Inspecting neighbours and perplexities on evaluation data

To build an intuition of what kind of information is leveraged by Retro models, we suggest having a closer look at a few evaluation documents and the corresponding retrieved data in Tables 16, 17, 18 and 19. In these tables, the 4 rows correspond to the first 4 chunks of the documents. The left-most column shows the chunk $C_u$ from the document being evaluated, where each token is coloured by the negative cross-entropy loss difference $L_{\mathrm{Retro[Off]}} - L_{\mathrm{Retro}}$; a positive value, coloured in yellow, indicates that Retro performs better when it has access to neighbours data. The second column also shows the evaluated chunk $C_u$, but where each token $i$ is coloured by the length of the longest common prefix (LCP) with the preceding neighbours, i.e. the largest integer $j$ such that the prefix $(x_{i-j-1}, \ldots, x_i)$ also appears in $\mathrm{Ret}(C_{u-1})$. Conversely, columns three and four show the first two neighbours and their continuation, respectively $[N_u^1, F_u^1]$ and $[N_u^2, F_u^2]$, coloured by LCP with the subsequent chunk $C_{u+1}$. LCP colouring helps to visually identify where the evaluated document overlaps the retrieved data. Note that the first chunk, $C_1$, in the second column is not coloured, as it does not have any preceding neighbours to compute LCP with. Similarly, we do not show the neighbours of the fourth chunk, as these are not used to condition any of the first four chunks.
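The LCP colouring can be computed by a brute-force pass over the retrieved tokens. The sketch below simplifies the indexing convention of the definition above, and caps lengths at 5 to match the ≥ 5 colour bucket.

def lcp_lengths(chunk, neighbours, max_len=5):
    # Index every n-gram (n <= max_len) occurring in the retrieved neighbours.
    ngrams = set()
    for neighbour in neighbours:
        for n in range(1, max_len + 1):
            for start in range(len(neighbour) - n + 1):
                ngrams.add(tuple(neighbour[start:start + n]))
    # For each position i, find the longest token suffix ending at i
    # that also occurs contiguously in some neighbour.
    lengths = []
    for i in range(len(chunk)):
        j = 0
        while j < max_len and i - j >= 0 and tuple(chunk[i - j:i + 1]) in ngrams:
            j += 1
        lengths.append(j)
    return lengths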
Our qualitative analysis exhibits two major behaviours. First, we observe that sometimes specific facts in $C_u$ can be extracted from the preceding neighbours $\mathrm{Ret}(C_{u-1})$, and that this can correspond to a significant reduction in loss from the Retro model for the corresponding tokens. Some examples of such behaviour include the journal name Publishers Weekly in Table 16, the football team name Tyrone in Table 17, or the event dates 25 August to 6 September 2020 in Table 18. In these three examples, the evaluated data consists of recent Wikipedia articles written in September 2021, after we built our retrieval dataset (see §A.2). Yet, relevant information to predict this new data was available in the pre-existing retrieval data, and the Retro model seems to be able to correctly leverage it.

On the other hand, we also observe that some of the evaluation data can partially leak into our training and retrieval data, despite the use of deduplication. Retro can dramatically exploit such leakage. Table 19 illustrates this behaviour, where the chunks $C_2$ and $C_3$ largely overlap $\mathrm{Ret}(C_1)$ and $\mathrm{Ret}(C_2)$ respectively, up to small formatting differences, which leads to much lower Retro loss for all the corresponding tokens. Fig. 6 shows that it is possible to quantify how much of the Retro loss reduction is due to each of these two behaviours, by filtering out evaluation chunks that overlap with the retrieval set.

E.2. Inspecting samples

We can follow the same procedure as above on samples generated using Retro models, in order to better understand where retrieval data had an influence on sampling. We show examples of samples obtained using the 7.5B Retro model in Tables 6, 7, 20 and 21.

E.3. Neighbour quantification

To quantify a notion of distance between the source document and the retrieved chunks, we can ask the distance between source articles when retrieving only from Wikipedia. Consonni et al. (2019) provides a Wikipedia link dataset which, for each article, contains a list of neighbouring articles. Using this, we construct a directed graph and compute the distance from one page to another. In Fig. 9 we compute the link-distance between training sequences and the retrieved neighbours. We find that retrieved documents tend to be from articles that are quite close to the article containing the target. Furthermore, we find that on average the distance increases with rank, suggesting that our neighbours are both useful and that the order is reasonable. This provides confidence for our larger-scale experiments, where document distance is less well defined.

Figure 9 | Wikipedia link-distance between retrieved articles. For each sequence-chunk combination, we compute the link distance between the target and the top-5 neighbours using only Wikipedia. The rank shows the relative neighbour distance, where rank 1 is the first neighbour and rank 5 is the fifth. The different colours represent link distance. Because we do not retrieve from the same document, 1 is the smallest value. We find, on average, the distance between random articles with a path between them is over 5.0.
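Computing these distances amounts to a breadth-first search over the directed link graph; a sketch follows, in which building the adjacency map from the link dataset is assumed to be done elsewhere.

from collections import deque

def link_distance(adjacency, source, target, max_depth=6):
    # Directed BFS distance between two articles; `adjacency` maps an
    # article title to the titles it links to. Returns None if no path
    # of length <= max_depth exists.
    if source == target:
        return 0
    frontier, seen = deque([(source, 0)]), {source}
    while frontier:
        node, depth = frontier.popleft()
        if depth == max_depth:
            continue
        for nxt in adjacency.get(node, ()):
            if nxt == target:
                return depth + 1
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, depth + 1))
    return None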
F. Complementary quantitative results

We report tables corresponding to quantitative figures of the main text, as well as further filtered language model results on the Pile.

F.1. Main text datasets

We report the performance of Retro and baseline models, measured in bits-per-bytes on the evaluation sets, in Table 14.

Table 14 | Full results for the main language modelling datasets. First three sets of rows correspond to Fig. 1, last set of rows to Fig. 3.

[Table body not reliably recoverable from this extraction: per-model results for Baseline, Retro[Off] and Retro[On] at 172M, 425M, 1.5B and 7.5B parameters, covering C4 evaluation bpb (overall, by retrieval-database size from 2B to 900B tokens, and by number of retrieved neighbours k = 1 to 100), LAMBADA accuracy, Curation Corpus bpb, Wikitext103 perplexity, and Wikipedia Sept. 2021 bpb.]

F.2. The Pile

In Fig. 4, we compare Retro against Jurassic-1 (Lieber et al., 2021). The full bits-per-bytes results are reported in Table 15.

F.3. Filtered results

Distribution of leaked chunks in our main evaluation sets. We evaluate leakage between the evaluation sets and the training set by measuring the proportion of evaluation chunks with a certain overlap $r(C)$. We show histograms in Fig. 10. We can see that C4 has some slight overlaps between train and evaluation. Similarly, chunks of Wikitext103 appear in the training set, despite our having removed the actual Wikitext103 evaluation documents from the training set. On the other hand, our Wikipedia September 2021 dataset shows almost no leakage (the data being original documents that did not exist at training data creation), and neither does Curation Corpus.
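Given per-chunk overlaps $r(C)$ (whose exact definition is given in §2.6 of the main text and not reproduced here), the filtered metrics are a straightforward masked aggregate; a sketch, assuming the per-chunk quantities are precomputed NumPy arrays:

import numpy as np

def filtered_bpb(chunk_bits, chunk_bytes, chunk_overlaps, max_overlap):
    # Bits-per-byte restricted to evaluation chunks whose overlap with the
    # training data does not exceed max_overlap (e.g. 0.125 for 12.5%).
    keep = chunk_overlaps <= max_overlap
    return chunk_bits[keep].sum() / chunk_bytes[keep].sum()

# One value per threshold traces out a filtered curve:
# curve = [filtered_bpb(bits, nbytes, overlaps, a) for a in (0.125, 0.25, 0.5, 1.0)]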
Filtered results on the Pile. We report the chunk overlap distribution and filtered performance curves on the Pile in Fig. 12 and Fig. 11, respectively. The qualitative interpretation of the filtered curves is the same: Retro models exploit leakage more, but the performance improvement they provide remains significant even on original chunks that have not been observed in the training set.

Table 15 | Full results on the Pile, measured in bits-per-bytes. Jurassic-1 and GPT-3 numbers are taken from Lieber et al. (2021). Gopher numbers are taken from Rae et al. (2021).

Subset              7B Baseline (Ours)  GPT-3  Jurassic-1  Gopher  7.5B Retro
arxiv               0.742               0.838  0.680       0.641   0.714
books3              0.792               0.802  0.835       0.706   0.653
dm_mathematics      1.177               1.371  1.037       1.135   1.164
freelaw             0.576               0.612  0.514       0.506   0.499
github              0.420               0.645  0.358       0.367   0.199
gutenberg_pg_19     0.803               1.163  0.890       0.652   0.400
hackernews          0.971               0.975  0.869       0.888   0.860
nih_exporter        0.650               0.612  0.590       0.590   0.635
opensubtitles       0.974               0.932  0.879       0.894   0.930
philpapers          0.760               0.723  0.742       0.682   0.699
pile_cc             0.771               0.698  0.669       0.688   0.626
pubmed_abstracts    0.639               0.625  0.587       0.578   0.542
pubmed_central      0.588               0.690  0.579       0.512   0.419
stackexchange       0.714               0.773  0.655       0.638   0.624
ubuntu_irc          1.200               0.946  0.857       1.081   1.178
uspto_backgrounds   0.603               0.566  0.537       0.545   0.583

Figure 10 | Distribution of the overlap between evaluation and train chunks for C4, Curation Corpus, Wikitext103 and Wikipedia Sept. 2021.

Table 16 | Great Circle (novel), from Wikipedia September 2021.
The article is about a recent novel, and chunks $C_3$ and $C_4$ are specifically about its reception. The name Publishers Weekly of the journal that reviewed the novel appears both in the neighbours $[N_3^1, F_3^1]$, $[N_3^2, F_3^2]$ of chunk $C_3$ and in the subsequent chunk $C_4$, where the loss for those tokens is significantly reduced by Retro.

[Colour-coded sample table: the columns show $C_u$ coloured by the loss difference $L_{\mathrm{Retro[Off]}} - L_{\mathrm{Retro}}$ (≤ −0.5, = 0, ≥ 0.5), $C_u$ coloured by LCP with $\mathrm{Ret}(C_{u-1})$, and the retrieved neighbours $[N_u^1, F_u^1]$ and $[N_u^2, F_u^2]$ coloured by LCP with $C_{u+1}$ (LCP = 0, 1, 2, 3, 4, ≥ 5); the coloured text is not reproducible in this extraction.]

Table 17 | All-Ireland Senior Football Championship Final, from Wikipedia September 2021. The name of the team Tyrone appears both in the second neighbour $[N_1^2, F_1^2]$ of chunk $C_1$ and in the subsequent chunk $C_2$, where the loss for those tokens is significantly reduced by Retro.
[Colour-coded sample table for Table 17, in the same format as Table 16; the coloured text is not reproducible in this extraction.]

Table 18 | 2020 Summer Paralympics, from Wikipedia September 2021. The original dates of the event, 25 August to 6 September 2020, appear both in the neighbours $[N_1^1, F_1^1]$, $[N_1^2, F_1^2]$ of chunk $C_1$ and in the subsequent chunk $C_2$, where the loss for those tokens is significantly reduced by Retro. Interestingly, in this case, the neighbours were written at a time when the event had not yet been postponed.
1], [𝑁 2 1 , 𝐹2 1 , 𝐹1 𝐶𝑢 colored by loss difference 𝐿Retro[Off] − 𝐿Retro (cid:54) −0.5, = 0, (cid:62) 0.5 2020 Summer ParalympicsThe , brand ed as the Tokyo 2020 Paralympic Game s, was an international multi-sport parasports event held from 24 August to 5 September 2021 in Tokyo, Japan . They were the 16th Summer Paralymp ic Games as organized by the Interna tional Paralympic Committee (IPC). 𝐶𝑢 colored by LCP with Re t (𝐶𝑢−1) LCP = 0, 1, 2, 3,4,(cid:62) 5 2020 Summer Paralympics The , brand ed as the Tokyo 2020 Paralympic Game s, was an international multi-sport parasports event held from 24 August to 5 September 2021 in Tokyo, Japan . They were the 16th Summer Paralymp ic Games as organized by the Interna tional Paralympic Committee (IPC). Originally scheduled to take place f rom 25 August to 6 September 2020, i n March 2020 both the 2020 Summer Ol ympics and Paralympics were postpone d by one year due to the COVID-19 pa ndemic, with the rescheduled Games s till referred to as Tokyo 2020 for m arketing and branding purposes. As with the Olympics, the Games were la rgely held behind Originally scheduled to take place f rom 25 August to 6 September 2020, i n March 2020 both the 2020 Summer Ol ympics and Paralympics were postpone d by one year due to the COVID-19 pa ndemic, with the rescheduled Games s till referred to as Tokyo 2020 for m arketing and branding purposes. As with the Olympics, the Games were la rgely held behind closed doors with no outside specta tors due to a state of emergency in the Greater Tokyo Area and other pre fectures. The Games were the second Summer Paralympics hosted by Tokyo s ince 1964, and the third Paralympics held in Japan overall since the 199 8 Winter Paralympics in Nagano. Th e Games featured closed doors with no outside specta tors due to a state of emergency in the Greater Tokyo Area and other pre fectures. The Games were the second Summer Paralympics hosted by Tokyo s ince 1964, and the third Paralympics held in Japan overall since the 199 8 Winter Paralympics in Nagano. Th e Games featured 539 medal events in 22 sports, with badminton and taekwondo both making their Paralympic debut to replace f ootball 7-a-side and sailing. China topped the medal table for the fifth consecutive Paralympics, with 96 go lds and 207 total medals. Great Brit ain finished second for the ninth t ime, 539 medal events in 22 sports, with badminton and taekwondo both making their Paralympic debut to replace f ootball 7-a-side and sailing. China topped the medal table for the fifth consecutive Paralympics, with 96 go lds and 207 total medals. Great Brit ain finished second for the ninth t ime, 𝑢 , 𝐹1 [𝑁1 LCP = 0, 1, 2, 3,4,(cid:62) 5 𝑢 ] colored by LCP with 𝐶𝑢+1 𝑢 , 𝐹2 [𝑁2 LCP = 0, 1, 2, 3,4,(cid:62) 5 𝑢 ] colored by LCP with 𝐶𝑢+1 pics Games.* The 2020 Summer Paraly mpics are an upcoming major internat ional multi-sport event for athletes with disabilities governed by the I nternational Paralympic Committee. S cheduled as the 16th Summer Paralymp ic Games, it is planned to be held i n Tokyo, Japan from 25 August to 6 S eptember 2020.3. 
2019 BWF Para-Bad minton World Championships- The 20 19 BWF Para-Badminton World Champion ships was held from 20 to 25 August 2019 in Basel, Switzerland.- Men’s event: Gold Medal: Pramod Bhagat in Singles SL3 Event and Pramod Bhagat and Manoj once submitted.This process was u ndertaken following the postponement of the Tokyo 2020 Games due to the COVID-19 pandemic, with both the Oly mpics and Paralympics pushed back a year.Now, the Tokyo 2020 Olympics are scheduled for July 23 to August 8 while the Paralympics are due to f ollow from August 24 to September 5. The refund process is separate for ticketholders outside of Japan, who purchased tickets through authorise d ticket resellers (ATR).Each ATR has its own individual refund proced ure.Early figures from the refund process for the Tokyo 2020 Olympics stated that around 18 per cent has been rescheduled to May 1-4 bec ause of travel restrictions under th e current state of emergency in Toky o and other 10 prefectures across Ja pan.The Tokyo 2020 organizing comm ittee announced that the first of 18 test events for the Olympic and Par alympic Games will involve wheelchai r rugby, which will be held in Yoyog i National Stadium from April 3 to 4 .The FINA Diving World Cup will fo llow from April 18 to 23 at the Toky o Aquatics Centre, which will also s erve as an Olympic qualifying event. The spread of the COVID-19 pandemi c has slowed down in Tokyo three wee ks after the Japanese capital entere d a state of emergency on 2020 Summer ParalympicsThe are an upcoming major international multi- sport event for athletes with disabi lities governed by the International Paralympic Committee. Scheduled as the 16th Summer Paralympic Games, th ey are scheduled to be held in Tokyo , Japan between 24 August and 5 Sept ember 2021. Originally due to take p lace between 25 August and 6 Septemb er 2020. On 24 March 2020, the IOC a nd the Tokyo Organizing Committee of ficially announced that the 2020 Sum mer Olympics and 2020 Summer Paralym pics would be postponed to 2021, due to the COVID-19 pandemic, marking t he first time that the Paralympics h as been postponed. They will still b e publicly marketed as Olympiad, have now been postponed a nd rescheduled for 23 July to 8 Augu st 2021 in Tokyo, Japan. The Games were postponed in March 2020 as a re sult of the worldwide Covid-19 pande mic, although they will still keep t he name Tokyo 2020 for marketing and branding purposes. This will be th e first time the Olympic Games have been postponed rather than cancelled . Olympic Games, when Tokyo became th e first city in Asia to host the Oly mpic and Paralympic Games, but unfor tunately strong winds made it an imp ossible task this time around.Memb ers of the Tokyo Organising Committe e of the Olympic and Paralympic Game s (Tokyo 2020), Tokyo Metropolitan G overnment officials, Tokyo 2020 Torc h Relay Official Ambassadors and rep resentatives from Miyagi Prefecture joined the arrival ceremony.FLAME OF RECOVERYThe Olympic flame will now be put on display at various loc ations in the Tohoku region, to high light the message of hope in the are as worst affected by the 2011 Great East Japan Earthqu 40 Improving language models by retrieving from trillions of tokens Table 19 | Daniel Radcliffe, from Wikitext103Valid, retrieval data from c4. The chunks 𝐶2 and 𝐶3 are almost entirely retrieved from neighbours [𝑁1, 𝐹1] and [𝑁2, 𝐹2] respectively, up to formatting differences, which dramatically reduces the loss for these tokens. 
This example illustrates that when training data leaks into evaluation sets despite deduplication, our Retro model can directly exploit this leakage. 𝐶𝑢 colored by loss difference 𝐿Retro[Off] − 𝐿Retro (cid:54) −0.5, = 0, (cid:62) 0.5 = Daniel Radcliffe =Daniel Jacob R adcliffe ( born 23 July 1989 ) is an English actor who rose to prominenc e as the title character in the Harr y Potter film series. He made his ac ting debut at 10 years of age in BBC One’s 1999 television film David Co pperfield, followed by his cinematic debut 𝐶𝑢 colored by LCP with Re t (𝐶𝑢−1) LCP = 0, 1, 2, 3,4,(cid:62) 5 = Daniel Radcliffe = Daniel Jacob R adcliffe ( born 23 July 1989 ) is an English actor who rose to prominenc e as the title character in the Harr y Potter film series. He made his ac ting debut at 10 years of age in BBC One’s 1999 television film David Co pperfield, followed by his cinematic debut in 2001’s The Tailor of Panama. At age 11, he was cast as Harry Potter in the first Harry Potter film, and starred in the series for 10 years u ntil the release of the eighth and f inal film in 2011.Radcliffe began to branch out to stage acting in 200 7, starring in the London and New in 2001’s The Tailor of Panama. At age 11, he was cast as Harry Potter in the first Harry Potter film, and starred in the series for 10 years u ntil the release of the eighth and f inal film in 2011.Radcliffe began to branch out to stage acting in 200 7, starring in the London and New York productions of Equus, and in t he 2011 Broadway revival of the musi cal How to Succeed in Business Witho ut Really Trying. He starred in the 2012 horror film The Woman in Black, and played beat poet Allen Ginsberg in the 2013 independent film Kill Y our <unk>.He has contributed to ma ny charities, York productions of Equus, and in t he 2011 Broadway revival of the musi cal How to Succeed in Business Witho ut Really Trying. He starred in the 2012 horror film The Woman in Black, and played beat poet Allen Ginsberg in the 2013 independent film Kill Y our <unk>.He has contributed to ma ny charities, including <unk> Hospice Care for Ch ildren, and The Trevor Project for s uicide prevention among LGBTQ youth, which gave him its Hero Award in 20 11.= = Early life = =Radcliffe w as born in West London, England. He is the only child of Alan George Rad cliffe, a literary agent, and including <unk> Hospice Care for Ch ildren, and The Trevor Project for s uicide prevention among LGBTQ youth, which gave him its Hero Award in 20 11.= = Early life = =Radcliffe w as born in West London, England. He is the only child of Alan George Rad cliffe, a literary agent, and 𝑢 , 𝐹1 [𝑁1 LCP = 0, 1, 2, 3,4,(cid:62) 5 𝑢 ] colored by LCP with 𝐶𝑢+1 𝑢 , 𝐹2 [𝑁2 LCP = 0, 1, 2, 3,4,(cid:62) 5 𝑢 ] colored by LCP with 𝐶𝑢+1 Daniel Jacob Radcliffe (born 23 July 1989) is an English actor who rose to prominence as the title character in the Harry Potter film series. He made his acting debut at 10 years o f age in BBC One’s 1999 television f ilm David Copperfield, followed by h is cinematic debut in 2001’s The Tai lor of Panama. At age 11, he was cas t as Harry Potter in the first Harry Potter film, and starred in the ser ies for 10 years until the release o f the eighth and final film in 2011. Radcliffe began to branch out to s tage acting in 2007, starring in the London and New York productions of Equus, and in 2001’s The Tailor of Panama. 
At age 11, he was cast as Harry Potter in the first Harry Potter film, and starred in the series for 10 years u ntil the release of the eighth and f inal film in 2011.Radcliffe began to branch out to stage acting in 200 7, starring in the London and New Yo rk productions of Equus, and in the 2011 Broadway revival of the musical How to Succeed in Business Without Really Trying. He starred in the 201 2 horror film The Woman in Black, an d played beat poet Allen Ginsberg in the 2013 independent film Kill Your Darlings.He has contributed to ma ny charities York productions of Equus, and in t he 2011 Broadway revival of the musi cal How to Succeed in Business Witho ut Really Trying. He starred in the 2012 horror film The Woman in Black, and played beat poet Allen Ginsberg in the 2013 independent film Kill Y our Darlings.He has contributed to many charities, including Demelza H ouse Children’s Hospice and The Trev or Project. He also made public serv ice announcements for the latter. In 2011, he was awarded the Trevor Pro ject’s "Hero Award."Sources disagr ee about Radcliffe’s personal wealth ; he was reported to have earned £1 million for the first Harry Potter Daniel Jacob Radcliffe (born 23 July 1989) is an English actor who rose to prominence as the title character in the Harry Potter film series. He made his acting debut at 10 years o f age in BBC One’s 1999 television m ovie David Copperfield, followed by his film debut in 2001’s The Tailor of Panama. At age 11, he was cast as Harry Potter in the first Harry Pot ter film, and starred in the series for 10 years until the release of th e eighth and final film in 2011. Rad cliffe began to branch out to stage acting in 2007, starring in the Lond on and New York productions of Equus , and in the of Panama. At age 11, he was cast a s Harry Potter in the first Harry Po tter film, and starred in the series for 10 years until the release of t he eighth and final film in 2011.R adcliffe began to branch out to stag e acting in 2007, starring in the Lo ndon and New York productions of Equ us, and in the 2011 Broadway revival of the musical How to Succeed in Bu siness Without Really Trying. He sta rred in the 2012 horror film The Wom an in Black, and played beat poet Al len Ginsberg in the 2013 independent film Kill Your Darlings. He has con tributed to many charities, includin g Demelza House Children’s in the 2011 Broadway revival of the musical How to Succeed in Business Without Really Trying. He starred in the 2012 horror film The Woman in B lack, and played beat poet Allen Gin sberg in the 2013 independent film K ill Your Darlings. He has contribute d to many charities, including Demel za House Children’s Hospice and The Trevor Project. He also made public service announcements for the latter . In 2011, he was awarded the Trevor Project’s "Hero Award." 41 Improving language models by retrieving from trillions of tokens Table 20 | Sample - Déclaration des droits de l’homme: Article premier. The Retro[Off] sample has correct syntax and is almost plausible but is hallucinated. The Retro[On] sample is correctly copied from neighbour data, and robustly re-formated according to our prompt. Prompt and sample of Retro[Off ] Article premier - Les hommes naissent et demeurent libres e t égaux en droits. Les distinc tions sociales ne peuvent être fondées que sur l’utilité com mune. Article 2. 
- Le but de toute association politique e st la conservation des droits naturels et Prompt and sample of Re tro[On] colored by LCP with Re t (𝐶𝑢−1) LCP = 0, 1, 2, 3,4,(cid:62) 5 Article premier - Les hommes naissent et demeurent libres e t égaux en droits. Les distinc tions sociales ne peuvent être fondées que sur l’utilité com mune. Article 2. - Le but de toute association politique e st la conservation des droits naturels et imprescriptibles de l’homme, et par conséquent la garantie à chacun des droits suivants : Article 3. - La propriété est un droit inviolable et sa cré. Toute personne a le droit de procéder à sa propre cons ervation. Article 4. - Le imprescriptibles de l’homme. Ces droits sont la liberté, la propriété, la sûreté et la ré sistance à l’oppression.Arti cle 3.- Le principe de toute souveraineté réside essentiel lement dans la nation. Nul cor ps, nul individu ne peut exerc er d’autorité qui n but de toute association est la défense des droits de l’hom me et du citoyen. Tout citoye n a le droit de participer à l a direction des affaires publi ques. Article 5. - L’impuni té n’a jamais été et ne sera j amais une fin en elle-même. L’ imp ’en émane expressément.Artic le 4.- La liberté consiste à pouvoir faire tout ce qui ne nuit pas à autrui : ainsi, l’e xercice des droits naturels de chaque homme n’a de bornes qu e celles qui assurent aux autr es membres de la société la jo uissance de ces mêmes unité, comme le despotisme, s ’est toujours révélée être un instrument d’oppression. La ty rannie qui s’est élue juge su prême de la conscience des aut res ne peut être jugée. La jus tice se trouve dans la consci ence de chaque citoyen, droits. Ces bornes ne peuvent être déterminées que par la l oi.Article 5.- La loi n’a le droit de défendre que les a ctions nuisibles à la société. Tout ce qui n’est pas défendu par la loi ne peut être empêc hé, et nul ne peut être [𝑁1 𝑢 , 𝐹1 𝑢 ] colored by LCP with 𝐶𝑢+1 [𝑁2 𝑢 , 𝐹2 𝑢 ] colored by LCP with 𝐶𝑢+1 LCP = 0, 1, 2, 3,4,(cid:62) 5 de l’homme et du citoyen.Ar ticle 1erLes hommes naissent et demeurent libres et égaux en droits. Les distinctions so ciales ne peuvent être fondées que sur l’utilité commune.A rticle 2Le but de toute asso ciation politique est la conse rvation des droits naturels et imprescriptibles de l’homme. Ces droits sont la liberté, la propriété, la sûreté, et la r ésistance à l’oppression.Art icle 3Le principe de toute s ouveraineté réside essentielle ment dans la nation. Nul corps , nul individu ne peut exercer d’autorité qui n’en criptibles del’homme. Ces dro its sont la liberté, la propri été, la sûretéet la résistanc e à l’oppression.Article 3 - Le principe de toute souverai neté résideessentiellement da ns la Nation. Nul corps, nul i ndividu nepeut exercer d’auto rité qui n’en émane expresséme nt.Article 4 - La liberté co nsiste à pouvoir faire tout ce quine nuit pas à autrui : ai nsi, l’exercice des droits nat urelsde chaque homme n’a de b ornes que celles qui assurent auxautres membres de la socié té la jouissance de mane expressément.Article 4 - La liberté consiste à pouvoi r faire tout ce qui ne nuit pa s à autrui : ainsi, l’exercice des droits naturels de chaque homme n’a de bornes que celle s qui assurent aux autres memb res de la société la jouissanc e de ces mêmes droits. Ces bor nes ne peuvent être déterminée s que par la loi.Article 5 - La loi n’a le droit de défend re que les actions nuisibles à la société. 
Tout ce qui n’est pas défendu par la loi ne peu t être empêché, et nul ne peut être contraint à faire ce qu’ elle n LCP = 0, 1, 2, 3,4,(cid:62) 5 Les hommes naissent et demeur ent libres et égaux en droits. Les distinctions sociales ne peuvent être fondées que sur l ’utilité commune.Art. 2. - Le but de toute association po litique est la conservation de s droits naturels et imprescri ptibles de l’Homme. Ces droits sont la liberté, la propriété , la sûreté, et la résistance à l’oppression.Art. 3. -Le principe de toute Souverainet é réside essentiellement dans la Nation. Nul corps, nul indi vidu ne peut exercer d’autorit é qui n’en émane expressément. Art et imprescriptibles de l’homm e. Ces droits sont la liberté, la propriété, la sûreté et la résistance à l’oppression.A rticle 3 - Le principe de tout e souveraineté réside essentie llement dans la Nation. Nul co rps, nul individu ne peut exer cer d’autorité qui n’en émane expressément.Article 4 - La liberté consiste à pouvoir fai re tout ce qui ne nuit pas à a utrui : ainsi, l’exercice des droits naturels de chaque homm e n’a de bornes que celles qui assurent aux autres membres d e la société la jouissance de ces mêmes droits. Ces bornes mane expressément.Article 4 - La liberté consiste à pouvoi r faire tout ce qui ne nuit pa s à autrui : ainsi, l’exercice des droits naturels de chaque homme n’a de bornes que celle s qui assurent aux autres memb res de la société la jouissanc e de ces mêmes droits. Ces bor nes ne peuvent être déterminée s que par la loi.Article 5 - La loi n’a le droit de défend re que les actions nuisibles à la société. Tout ce qui n’est pas défendu par la loi ne peu t être empêché, et nul ne peut être contraint à faire ce qu’ elle n 42 Improving language models by retrieving from trillions of tokens Table 21 | Sample - Decimals of 𝜋. The Retro[Of f] sample quickly diverges two digits after the end of the prompt whereas Retro[On] correctly outputs a large number of 𝜋 digits, directly copied from the neighbours data. Prompt and sample of Retro[Off ] Pi = 3. 1415926535 8979323846 2643383279 5028841971 69399375 10 5820974944 5923078164 06286 20899 8628034825 3421170679 Prompt and sample of Retro[On ] colored by LCP with Ret (𝐶𝑢−1) LCP = 0, 1, 2, 3,4,(cid:62) 5 Pi = 3. 
1415926535 8979323846 2643383279 5028841971 69399375 10 5820974944 5923078164 06286 20899 8628034825 3421170679 8294049602 8988496069 9858349 065 9873246379 9644789435 8628 730709 6540159079 5944069810 5 992965913 7095378412 69378359 8214808651 3282306647 0938446 095 5058223172 53594081284811 174502 8410270193 8521105559 6 446229489 5493038196442881097 5 6659334461 284 10 6940372045 7088679512 85612 30857 9046461290 9276642155 56 54603269 5656128798 6366475705 6294954741 5886335339 57657 7564823 3786783165 2712019091 4564856692 3460348610 45432664 82 1339360726 024914127372458 70066 0631558817 4881520920 96 28292540 91715 76345 5770886953 7988876910 79 66169745 6493974637 6345801550 6663542854 6333764630 6356284 271 7885339804 5672434 364367892590360 0113305305 48 82046652 1384146951 9415116094 3305727036 5759591953 0921861 173 8193261179 31051185480744 623799 6274 [𝑁1 𝑢 , 𝐹1 𝑢 ] colored by LCP with 𝐶𝑢+1 [𝑁2 𝑢 , 𝐹2 𝑢 ] colored by LCP with 𝐶𝑢+1 LCP = 0, 1, 2, 3,4,(cid:62) 5 “1415926535 8979323846 26433 83279 5028841971 693993751058 20974944 5923078164 0628620899 8628034825 34211706798214808 651 3282306647 0938446095 5058 223172 53594081284811174502 8 410270193 8521105559 644622948 9 54930381964428810975 665933 4461 2847564823 3786783 651 3282306647 0938446095 5058 223172 5359408128 4811174502 8410270193 8521105559 64462294 89 54930381964428810975 66593 34461 2847564823 3786783165 27 12019091 4564856692 346034861 0 4543266482 1339360726 024914 12737245870066 0631558817 488 1520920 9628292540 91715364 23 3786783165 2712019091 4564 856692 3460348610 4543266482 1 339360726 0249141273724587006 6 0631558817 4881520920 962829 2540 9171536436 7892590360 01 13305305 4882046652 1384146951 94151160943305727036 5759591 953 0921861173 8193261179 3105 118548 0744623799 627495 LCP = 0, 1, 2, 3,4,(cid:62) 5 46 2643383279 5028841971 69399 37510 5820974944 592307816406 28620899 8628034825 3421170679 8214808651 3282306647 0938446 095 50582231725359408128 4811 174502 8410270193 8521105559 6 446229489 5493038196 442881097 56659334461 2847564823 378678 3165 2712019091 4564856692 346 0 47 0938446095 5058223172 53594 081284811174502 8410270193 85 21105559 6446229489 5493038196 4428810975 6659334461 2847564 823 3786783165 27120190914564 856692 3460348610 4543266482 1 339360726 0249141273724587006 6 0631558817 4881520920 962829 2540 91715364367892590360 165 27120190914564856692 3460 348610 4543266482 1339360726 0 2491412737245870066 063155881 7 4881520920 9628292540 917153 64367892590360 0113305305 488 2046652 1384146951 9415116094 3305727036 5759591953 09218611 73 8193261179 310511854807446 23799 6274956735 1885752724 89 1227 43
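The "colored by LCP" panels in the tables above measure token-level overlap between a chunk and its retrieved neighbours. The exact computation is not spelled out in this excerpt, so the sketch below encodes one natural reading as an assumption: a token's LCP value is the largest n such that the n-gram ending at that token also occurs contiguously in the neighbour. The function name `lcp_lengths` is our own, not from the paper.

```python
from typing import List

def lcp_lengths(chunk: List[str], neighbour: List[str]) -> List[int]:
    """For each token in `chunk`, return the largest n such that the n-gram
    ending at that token also occurs contiguously in `neighbour`.
    Tokens with large values are those the model can copy from retrieval."""
    # Collect all n-grams of the neighbour once, for fast lookup.
    ngrams = set()
    for i in range(len(neighbour)):
        for j in range(i + 1, len(neighbour) + 1):
            ngrams.add(tuple(neighbour[i:j]))
    lengths = []
    for i in range(len(chunk)):
        n = 0
        while n < i + 1 and tuple(chunk[i - n : i + 1]) in ngrams:
            n += 1
        lengths.append(n)
    return lengths

# Tokens that extend a long overlap with the neighbour get LCP >= 5,
# the darkest color bucket in the tables above.
print(lcp_lengths("a b c d".split(), "x b c d y".split()))  # [0, 1, 2, 3]
```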
ai_researcher
3
Retrieval-Augmented_Generation_for_Knowledge-Intensive_NLP_Tasks.pdf
Phrase Retrieval Learns Passage Retrieval, Too

Jinhyuk Lee1,2∗ Alexander Wettig1 Danqi Chen1
Department of Computer Science, Princeton University1
Department of Computer Science and Engineering, Korea University2
{jinhyuklee,awettig,danqic}@cs.princeton.edu

arXiv:2109.08133v1 [cs.CL] 16 Sep 2021

Abstract

Dense retrieval methods have shown great promise over sparse retrieval methods in a range of NLP problems. Among them, dense phrase retrieval—the most fine-grained retrieval unit—is appealing because phrases can be directly used as the output for question answering and slot filling tasks.1 In this work, we follow the intuition that retrieving phrases naturally entails retrieving larger text blocks and study whether phrase retrieval can serve as the basis for coarse-level retrieval including passages and documents. We first observe that a dense phrase-retrieval system, without any retraining, already achieves better passage retrieval accuracy (+3-5% in top-5 accuracy) compared to passage retrievers, which also helps achieve superior end-to-end QA performance with fewer passages. Then, we provide an interpretation for why phrase-level supervision helps learn better fine-grained entailment compared to passage-level supervision, and also show that phrase retrieval can be improved to achieve competitive performance in document-retrieval tasks such as entity linking and knowledge-grounded dialogue. Finally, we demonstrate how phrase filtering and vector quantization can reduce the size of our index by 4-10x, making dense phrase retrieval a practical and versatile solution in multi-granularity retrieval.2

1 Introduction

Dense retrieval aims to retrieve relevant contexts from a large corpus, by learning dense representations of queries and text segments. Recently, dense retrieval of passages (Lee et al., 2019; Karpukhin et al., 2020; Xiong et al., 2021) has been shown to outperform traditional sparse retrieval methods such as TF-IDF and BM25 in a range of knowledge-intensive NLP tasks (Petroni et al., 2021), including open-domain question answering (QA) (Chen et al., 2017), entity linking (Wu et al., 2020), and knowledge-grounded dialogue (Dinan et al., 2019).

*This work was done when JL worked as a visiting research scholar at Princeton University.
1Following previous work (Seo et al., 2018, 2019), the term phrase denotes any contiguous text segment up to L words, which is not necessarily a linguistic phrase (see Section 2).
2Our code and models are available at https://github.com/princeton-nlp/DensePhrases.

Figure 1: Comparison of passage representations from DPR (Karpukhin et al., 2020) and DensePhrases (Lee et al., 2021). Unlike using a single vector for each passage, DensePhrases represents each passage with multiple phrase vectors and the score of a passage can be computed by the maximum score of phrases within it.

One natural design choice of these dense retrieval methods is the retrieval unit. For instance, the dense passage retriever (DPR) (Karpukhin et al., 2020) encodes a fixed-size text block of 100 words as the basic retrieval unit. On the other extreme, recent work (Seo et al., 2019; Lee et al., 2021) demonstrates that phrases can be used as a retrieval unit. In particular, Lee et al. (2021) show that learning dense representations of phrases alone can achieve competitive performance in a number of open-domain QA and slot filling tasks.
This is particularly appealing since the phrases can directly serve as the output, without relying on an additional reader model to process text passages.

In this work, we draw on an intuitive motivation that every single phrase is embedded within a larger text context and ask the following question: If a retriever is able to locate phrases, can we directly make use of it for passage and even document retrieval as well? We formulate phrase-based passage retrieval, in which the score of a passage is determined by the maximum score of phrases within it (see Figure 1 for an illustration). By evaluating DensePhrases (Lee et al., 2021) on popular QA datasets, we observe that it achieves competitive or even better passage retrieval accuracy compared to DPR, without any re-training or modification to the original model (Table 1). The gains are especially pronounced for top-k accuracy when k is smaller (e.g., 5), which also helps achieve strong open-domain QA accuracy with a much smaller number of passages as input to a generative reader model (Izacard and Grave, 2021b).

To better understand the nature of dense retrieval methods, we carefully analyze the training objectives of phrase and passage retrieval methods. While the in-batch negative losses in both models encourage them to retrieve topically relevant passages, we find that phrase-level supervision in DensePhrases provides a stronger training signal than using hard negatives from BM25, and helps DensePhrases retrieve correct phrases, and hence passages. Following this positive finding, we further explore whether phrase retrieval can be extended to retrieval of coarser granularities, or other NLP tasks. Through fine-tuning of the query encoder with document-level supervision, we are able to obtain competitive performance on entity linking (Hoffart et al., 2011) and knowledge-grounded dialogue retrieval (Dinan et al., 2019) in the KILT benchmark (Petroni et al., 2021).

Finally, we draw connections to multi-vector passage encoding models (Khattab and Zaharia, 2020; Luan et al., 2021), where phrase retrieval models can be viewed as learning a dynamic set of vectors for each passage. We show that a simple phrase filtering strategy learned from QA datasets gives us a control over the trade-off between the number of vectors per passage and the retrieval accuracy. Since phrase retrievers encode a larger number of vectors, we also propose a quantization-aware fine-tuning method based on Optimized Product Quantization (Ge et al., 2013), reducing the size of the phrase index from 307GB to 69GB (or under 30GB with more aggressive phrase filtering) for full English Wikipedia, without any performance degradation. This matches the index size of passage retrievers and makes dense phrase retrieval a practical and versatile solution for multi-granularity retrieval.

2 Background

Passage retrieval Given a set of documents D, passage retrieval aims to provide a set of relevant passages for a question q. Typically, each document in D is segmented into a set of disjoint passages and we denote the entire set of passages in D as P = {p1, . . . , pM}, where each passage can be a natural paragraph or a fixed-length text block. A passage retriever is designed to return top-k passages Pk ⊂ P with the goal of retrieving passages that are relevant to the question. In open-domain QA, passages are considered relevant if they contain answers to the question. However, many other knowledge-intensive NLP tasks (e.g., knowledge-grounded dialogue) provide human-annotated evidence passages or documents.
While traditional passage retrieval models rely on sparse representations such as BM25 (Robertson and Zaragoza, 2009), recent methods show promising results with dense representations of passages and questions, and enable retrieving passages that may have low lexical overlap with questions. Specifically, Karpukhin et al. (2020) introduce DPR that has a passage encoder Ep(·) and a question encoder Eq(·) trained on QA datasets and retrieves passages by using the inner product as a similarity function between a passage and a question:

    f(p, q) = Ep(p)⊤ Eq(q).    (1)

For open-domain QA where a system is required to provide an exact answer string a, the retrieved top k passages Pk are subsequently fed into a reading comprehension model such as a BERT model (Devlin et al., 2019), and this is called the retriever-reader approach (Chen et al., 2017).

Phrase retrieval While passage retrievers require another reader model to find an answer, Seo et al. (2019) introduce the phrase retrieval approach that encodes phrases in each document and performs similarity search over all phrase vectors to directly locate the answer. Following previous work (Seo et al., 2018, 2019), we use the term 'phrase' to denote any contiguous text segment up to L words (including single words), which is not necessarily a linguistic phrase, and we take phrases up to length L = 20. Given a phrase s(p) from a passage p, their similarity function f is computed as:

    f(s(p), q) = Es(s(p))⊤ Eq(q),    (2)

where Es(·) and Eq(·) denote the phrase encoder and the question encoder, respectively. Since this formulates open-domain QA purely as a maximum inner product search (MIPS), it can drastically improve end-to-end efficiency. While previous work (Seo et al., 2019; Lee et al., 2020) relied on a combination of dense and sparse vectors, Lee et al. (2021) demonstrate that dense representations of phrases alone are sufficient to close the performance gap with retriever-reader systems. For more details on how phrase representations are learned, we refer interested readers to Lee et al. (2021).
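Both Eq. (1) and Eq. (2) reduce retrieval to an inner product between independently encoded vectors, which is what makes offline indexing and MIPS possible. The following is a minimal sketch of this scoring scheme; the helper names `encode_question`, `encode_passage`, and `encode_phrases`, the dimension, and the random vectors are illustrative assumptions standing in for the actual BERT/SpanBERT-based encoders.

```python
import numpy as np

# Hypothetical encoders standing in for E_q, E_p, and E_s; in the real
# systems these are trained BERT/SpanBERT models producing d-dim vectors.
d = 768
rng = np.random.default_rng(0)

def encode_question(q: str) -> np.ndarray:   # E_q(q)
    return rng.standard_normal(d)

def encode_passage(p: str) -> np.ndarray:    # E_p(p): one vector per passage (DPR)
    return rng.standard_normal(d)

def encode_phrases(p: str) -> np.ndarray:    # E_s(s(p)): one vector per phrase
    n_phrases = 5
    return rng.standard_normal((n_phrases, d))

q_vec = encode_question("who sings don't stand so close to me")

# Eq. (1): the passage score is a single inner product.
passage_score = encode_passage("passage text ...") @ q_vec

# Eq. (2): every phrase in the passage gets its own inner-product score,
# and the top-scoring phrase can directly serve as the answer.
phrase_scores = encode_phrases("passage text ...") @ q_vec
best_phrase_score = phrase_scores.max()
```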
Note that, since the cost of score aggregation is negligible, the in- ference speed of phrase-based passage retrieval is the same as for phrase retrieval, which is shown to be efficient in Lee et al. (2021). In this section, we evaluate the passage retrieval performance (Eq. (3)) and also how phrase-based passage retrieval can contribute to end-to-end open-domain QA. 3In most cases, retrieving 2k phrases is sufficient for ob- taining k unique passages. If not, we try 4k and so on. 3.1 Experiment: Passage Retrieval Datasets We use two open-domain QA datasets: Natural Questions (Kwiatkowski et al., 2019) and TriviaQA (Joshi et al., 2017), following the stan- dard train/dev/test splits for the open-domain QA evaluation. For both models, we use the 2018-12- 20 Wikipedia snapshot. To provide a fair com- parison, we use Wikipedia articles pre-processed for DPR, which are split into 21-million text blocks and each text block has exactly 100 words. Note that while DPR is trained in this setting, DensePhrases is trained with natural paragraphs.4 Models For DPR, we use publicly available check- points5 trained on each dataset (DPR♦) or multiple QA datasets (DPR♠), which we find to perform slightly better than the ones reported in Karpukhin et al. (2020). For DensePhrases, we train it on Nat- ural Questions (DensePhrases♦) or multiple QA datasets (DensePhrases♠) with the code provided by the authors.6 Note that we do not make any modification to the architecture or training methods of DensePhrases and achieve similar open-domain QA accuracy as reported. For phrase-based passage retrieval, we compute Eq. (3) with DensePhrases and return top k passages. Metrics Following previous work on passage re- trieval for open-domain QA, we measure the top-k passage retrieval accuracy (Top-k), which denotes the proportion of questions whose top k retrieved passages contain at least one of the gold answers. 4We expect DensePhrases to achieve even higher perfor- mance if it is re-trained with 100-word text blocks. We leave it for future investigation. 5https://github.com/facebookresearch/DPR. 6DPR♠ is trained on NaturalQuestions, TriviaQA, Curat- edTREC (Baudiš and Šediv`y, 2015), and WebQuestions (Be- rant et al., 2013). DensePhrases♠ additionally includes SQuAD (Rajpurkar et al., 2016), although it does not con- tribute to Natural Questions and TriviaQA much. To further characterize the behavior of each system, we also include the following evaluation metrics: mean reciprocal rank at k (MRR@k) and precision at k (P@k). MRR@k is the average reciprocal rank of the first relevant passage (that contains an answer) in the top k passages. Higher MRR@k means relevant passages appear at higher ranks. Meanwhile, P@k is the average proportion of rele- vant passages in the top k passages. Higher P@k denotes that a larger proportion of top k passages contains the answers. Results As shown in Table 1, DensePhrases achieves competitive passage retrieval accuracy with DPR, while having a clear advantage on top-1 or top-5 accuracy for both Natural Ques- tions (+6.9% Top-1) and TriviaQA (+8.1% Top-1). Although the top-20 (and top-100, which is not shown) accuracy is similar across different models, MRR@20 and P@20 reveal interesting aspects of DensePhrases—it ranks relevant passages higher and provides a larger number of correct passages. Our results suggest that DensePhrases can also re- trieve passages very accurately, even though it was not explicitly trained for that purpose. 
For the rest of the paper, we mainly compare the DPR♠ and DensePhrases♠ models, which were both trained on multiple QA datasets. 3.2 Experiment: Open-domain QA Recently, Izacard and Grave (2021b) proposed the Fusion-in-Decoder (FiD) approach where they feed top 100 passages from DPR into a generative model T5 (Raffel et al., 2020) and achieve the state-of- the-art on open-domain QA benchmarks. Since their generative model computes the hidden states of all tokens in 100 passages, it requires large GPU memory and Izacard and Grave (2021b) used 64 Tesla V100 32GB for training. In this section, we use our phrase-based passage retrieval with DensePhrases to replace DPR in FiD and see if we can use a much smaller number of pas- sages to achieve comparable performance, which can greatly reduce the computational requirements. We train our model with 4 24GB RTX GPUs for training T5-base, which are more affordable with academic budgets. Note that training T5-base with 5 or 10 passages can also be done with 11GB GPUs. We keep all the hyperparameters the same as in Izacard and Grave (2021b).7 7We also accumulate gradients for 16 steps to match the effective batch size of the original work. Model NaturalQ TriviaQA Dev Test Test ORQA (Lee et al., 2019) REALM (Guu et al., 2020) DPR (reader: BERT-base) DensePhrases - - - - 33.3 40.4 41.5 41.3 FiD with DPR (Izacard and Grave, 2021b) Reader: T5-base k = 5 k = 10 k = 25 k = 50 k = 100 37.8 42.3 45.3 45.7 46.5 FiD with DensePhrases (ours) Reader: T5-base k = 5 k = 10 k = 25 k = 50 44.2 45.5 46.4 47.2 - - - - 48.2 45.9 45.9 47.2 47.9 45.0 - 56.8 53.5 - - - - 65.0 59.5 61.0 63.4 64.5 Table 2: Open-domain QA results. We report exact match (EM) of each model by feeding top k passages into a T5-base model. DensePhrases can greatly reduce the computational cost of running generative reader models while having competitive performance. Results As shown in Table 2, using DensePhrases as a passage retriever achieves competitive per- formance to DPR-based FiD and significantly improves upon the performance of original DensePhrases (NQ = 41.3 EM without a reader). Its better retrieval quality at top-k for smaller k in- deed translates to better open-domain QA accuracy, achieving +6.4% gain compared to DPR-based FiD when k = 5. To obtain similar performance with using 100 passages in FiD, DensePhrases needs fewer passages (k = 25 or 50), which can fit in GPUs with smaller RAM. 4 A Unified View of Dense Retrieval As shown in the previous section, phrase-based passage retrieval is able to achieve competitive passage retrieval accuracy, despite that the mod- els were not explicitly trained for that. In this section, we compare the training objectives of DPR and DensePhrases in detail and explain how DensePhrases learns passage retrieval. 4.1 Training Objectives Both DPR and DensePhrases set out to learn a sim- ilarity function f between a passage or phrase and a question. Passages and phrases differ primarily in characteristic length, so we refer to either as Figure 2: Comparison of training objectives of DPR and DensePhrases. While both models use in-batch negatives, DensePhrases use in-passage negatives (phrases) compared to BM25 hard-negative passages in DPR. Note that each phrase in DensePhrases can directly serve as an answer to open-domain questions. a retrieval unit x.8 DPR and DensePhrases both adopt a dual-encoder approach with inner product similarity as shown in Eq. 
(1) and (2), and they are initialized with BERT (Devlin et al., 2019) and SpanBERT (Joshi et al., 2020), respectively. These dual-encoder models are then trained with a negative log-likelihood loss for discriminating positive retrieval units from negative ones: L = − log ef (x+,q) ef (x+,q) + (cid:80) x−∈X − , (4) ef (x−,q) where x+ is the positive phrase or passage corre- sponding to question q, and X − is a set of negative examples. The choice of negatives is critical in this setting and both DPR and DensePhrases make important adjustments. In-batch negatives In-batch negatives are a com- mon way to define X −, since they are available at no extra cost when encoding a mini-batch of exam- ples. Specifically, in a mini-batch of B examples, we can add B − 1 in-batch negatives for each posi- tive example. Since each mini-batch is randomly sampled from the set of all training passages, in- batch negative passages are usually topically nega- tive, i.e., models can discriminate between x+ and X − based on their topic only. Hard negatives Although topic-related features are useful in identifying broadly relevant passages, they often lack the precision to locate the exact passage containing the answer in a large corpus. 8Note that phrases may overlap, whereas passages are usually disjoint segments with each other. Karpukhin et al. (2020) propose to use additional hard negatives which have a high BM25 lexical overlap with a given question but do not contain the answer. These hard negatives are likely to share a similar topic and encourage DPR to learn more fine- grained features to rank x+ over the hard negatives. Figure 2 (left) shows an illustrating example. In-passage negatives While DPR is limited to use positive passages x+ which contain the an- swer, DensePhrases is trained to predict that the positive phrase x+ is the answer. Thus, the fine- grained structure of phrases allows for another source of negatives, in-passage negatives. In par- ticular, DensePhrases augments the set of nega- tives X − to encompass all phrases within the same passage that do not express the answer.9 See Fig- ure 2 (right) for an example. We hypothesize that these in-passage negatives achieve a similar effect as DPR’s hard negatives: They require the model to go beyond simple topic modeling since they share not only the same topic but also the same context. Our phrase-based passage retriever might benefit from this phrase-level supervision, which has al- ready been shown to be useful in the context of distilling knowledge from reader to retriever (Izac- ard and Grave, 2021a; Yang and Seo, 2020). 4.2 Topical vs. Hard Negatives To address our hypothesis, we would like to study how these different types of negatives used by DPR and DensePhrases affect their reliance on topical 9Technically, DensePhrases treats start and end representa- tions of phrases independently and use start (or end) represen- tations other than the positive one as negatives. 
(cid:17)(cid:42)(cid:46)(cid:36)(cid:47)(cid:36)(cid:49)(cid:32)(cid:5)(cid:17)(cid:19)(cid:475)(cid:4)(cid:13)(cid:20)(cid:476)(cid:1)(cid:5)(cid:42)(cid:41)(cid:453)(cid:47)(cid:1)(cid:20)(cid:47)(cid:28)(cid:41)(cid:31)(cid:1)(cid:20)(cid:42)(cid:1)(cid:4)(cid:39)(cid:42)(cid:46)(cid:32)(cid:1)(cid:47)(cid:42)(cid:1)(cid:14)(cid:32)(cid:1)(cid:475)(cid:20)(cid:6)(cid:17)(cid:476)(cid:1)(cid:454)(cid:5)(cid:42)(cid:41)(cid:453)(cid:47)(cid:1)(cid:20)(cid:47)(cid:28)(cid:41)(cid:31)(cid:1)(cid:20)(cid:42)(cid:1)(cid:4)(cid:39)(cid:42)(cid:46)(cid:32)(cid:1)(cid:47)(cid:42)(cid:1)(cid:14)(cid:32)(cid:454)(cid:1)(cid:36)(cid:46)(cid:1)(cid:28)(cid:1)(cid:35)(cid:36)(cid:47)(cid:1)(cid:46)(cid:42)(cid:41)(cid:34)(cid:1)(cid:29)(cid:52)(cid:1)(cid:47)(cid:35)(cid:32)(cid:1)(cid:3)(cid:45)(cid:36)(cid:47)(cid:36)(cid:46)(cid:35)(cid:1)(cid:45)(cid:42)(cid:30)(cid:38)(cid:1)(cid:29)(cid:28)(cid:41)(cid:31)(cid:1)(cid:47)(cid:35)(cid:32)(cid:1)(cid:17)(cid:42)(cid:39)(cid:36)(cid:30)(cid:32)(cid:443)(cid:1)(cid:45)(cid:32)(cid:39)(cid:32)(cid:28)(cid:46)(cid:32)(cid:31)(cid:1)(cid:36)(cid:41)(cid:1)(cid:20)(cid:32)(cid:43)(cid:47)(cid:32)(cid:40)(cid:29)(cid:32)(cid:45)(cid:1)(cid:442)(cid:442)(cid:442)(cid:17)(cid:42)(cid:46)(cid:36)(cid:47)(cid:36)(cid:49)(cid:32)(cid:5)(cid:32)(cid:41)(cid:46)(cid:32)(cid:17)(cid:35)(cid:45)(cid:28)(cid:46)(cid:32)(cid:46)(cid:475)(cid:4)(cid:13)(cid:20)(cid:476)(cid:1)(cid:5)(cid:42)(cid:41)(cid:453)(cid:47)(cid:1)(cid:20)(cid:47)(cid:28)(cid:41)(cid:31)(cid:1)(cid:20)(cid:42)(cid:1)(cid:4)(cid:39)(cid:42)(cid:46)(cid:32)(cid:1)(cid:47)(cid:42)(cid:1)(cid:14)(cid:32)(cid:1)(cid:475)(cid:20)(cid:6)(cid:17)(cid:476)(cid:1)(cid:454)(cid:5)(cid:42)(cid:41)(cid:453)(cid:47)(cid:1)(cid:20)(cid:47)(cid:28)(cid:41)(cid:31)(cid:1)(cid:20)(cid:42)(cid:1)(cid:4)(cid:39)(cid:42)(cid:46)(cid:32)(cid:1)(cid:47)(cid:42)(cid:1)(cid:14)(cid:32)(cid:454)(cid:1)(cid:36)(cid:46)(cid:1)(cid:28)(cid:1)(cid:35)(cid:36)(cid:47)(cid:1)(cid:46)(cid:42)(cid:41)(cid:34)(cid:1)(cid:29)(cid:52)(cid:1)(cid:47)(cid:35)(cid:32)(cid:1)(cid:3)(cid:45)(cid:36)(cid:47)(cid:36)(cid:46)(cid:35)(cid:1)(cid:45)(cid:42)(cid:30)(cid:38)(cid:1)(cid:29)(cid:28)(cid:41)(cid:31)(cid:1)(cid:47)(cid:35)(cid:32)(cid:1)(cid:17)(cid:42)(cid:39)(cid:36)(cid:30)(cid:32)(cid:443)(cid:1)(cid:45)(cid:32)(cid:39)(cid:32)(cid:28)(cid:46)(cid:32)(cid:31)(cid:1)(cid:36)(cid:41)(cid:1)(cid:20)(cid:32)(cid:43)(cid:47)(cid:32)(cid:40)(cid:29)(cid:32)(cid:45)(cid:1)(cid:442)(cid:442)(cid:442)(cid:10)(cid:41)(cid:465)(cid:29)(cid:28)(cid:47)(cid:30)(cid:35)(cid:1)(cid:15)(cid:32)(cid:34)(cid:28)(cid:47)(cid:36)(cid:49)(cid:32)(cid:475)(cid:4)(cid:13)(cid:20)(cid:476)(cid:1)(cid:3)(cid:28)(cid:45)(cid:28)(cid:30)(cid:38)(cid:1)(cid:16)(cid:29)(cid:28)(cid:40)(cid:28)(cid:1)(cid:475)(cid:20)(cid:6)(cid:17)(cid:476)(cid:1)(cid:3)(cid:28)(cid:45)(cid:28)(cid:30)(cid:38)(cid:1)(cid:9)(cid:48)(cid:46)(cid:46)(cid:32)(cid:36)(cid:41)(cid:1)(cid:16)(cid:29)(cid:28)(cid:40)(cid:28)(cid:1)(cid:36)(cid:46)(cid:1)(cid:28)(cid:41)(cid:1)(cid:2)(cid:40)(cid:32)(cid:45)(cid:36)(cid:30)(cid:28)(cid:41)(cid:1)(cid:43)(cid:42)(cid:39)(cid:36)(cid:47)(cid:36)(cid:30)(cid:36)(cid:28)(cid:41)(cid:1)(cid:28)(cid:41)(cid:31)(cid:1)(cid:28)(cid:47)(cid:47)(cid:42)(cid:45)(cid:41)(cid:32)(cid:52)(cid:1)(cid:50)(cid:35)(cid:42)(cid:1)(cid:46)(cid:32)(cid:45)(cid:49)(cid:32)(cid:31)(cid:1)(cid:28)(cid:46)(cid:1)(cid:47)(cid:35)(cid:32)(cid:1)(cid:426)(cid:426)(cid:47)(cid:35)(cid:1)(cid:43)(cid:45)(cid:32)(cid:46)(cid:36)(cid:31)(cid:32)(cid:41)(cid:
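Eq. (4) with in-batch negatives is typically implemented as a cross-entropy loss over a B×B similarity matrix, with the positives on the diagonal. The sketch below is a generic PyTorch illustration of that pattern under our own naming, not the DPR or DensePhrases training code.

```python
import torch
import torch.nn.functional as F

def in_batch_nll(q_vecs: torch.Tensor, unit_vecs: torch.Tensor) -> torch.Tensor:
    """Eq. (4) with in-batch negatives.

    q_vecs:    (B, d) question vectors E_q(q).
    unit_vecs: (B, d) vectors of the positive retrieval units (passages for
               DPR, phrases for DensePhrases); unit i is the positive for
               question i, and the other B-1 rows act as its negatives.
    """
    sims = q_vecs @ unit_vecs.T             # (B, B) inner products f(x, q)
    targets = torch.arange(q_vecs.size(0))  # positives sit on the diagonal
    return F.cross_entropy(sims, targets)   # = -log softmax over each row

# Hard negatives (BM25 passages) or in-passage negatives (other phrases in
# the gold passage) simply append extra columns to `sims`, e.g.:
# sims = torch.cat([sims, q_vecs @ extra_neg_vecs.T], dim=1)
```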
4.2 Topical vs. Hard Negatives

To address our hypothesis, we would like to study how these different types of negatives used by DPR and DensePhrases affect their reliance on topical and fine-grained entailment cues. We characterize their passage retrieval based on two metrics (losses): Ltopic and Lhard. We use Eq. (4) to define both Ltopic and Lhard, but use different sets of negatives X−. For Ltopic, X− contains passages that are topically different from the gold passage—in practice, we randomly sample passages from English Wikipedia. For Lhard, X− uses negatives containing topically similar passages, such that Lhard estimates how accurately models locate a passage that contains the exact answer among topically similar passages. From a positive passage paired with a question, we create a single hard negative by removing the sentence that contains the answer.10 In our analysis, both metrics are estimated on the Natural Questions development set, which provides a set of questions and (gold) positive passages.

10While Lhard with this type of hard negatives might favor DensePhrases, using BM25 hard negatives for Lhard would favor DPR since DPR was directly trained on BM25 hard negatives. Nonetheless, we observed similar trends in Lhard regardless of the choice of hard negatives.

Figure 3: Comparison of DPR and DensePhrases on NQ (dev) with Ltopic and Lhard. Starting from each model trained with in-batch negatives (in-batch), we show the effect of using hard negatives (+BM25), in-passage negatives (+in-passage), as well as training on multiple QA datasets (+multi. dataset). The x-axis is in log-scale for better visualization. For both metrics, lower numbers are better.

Results Figure 3 shows the comparison of DPR and DensePhrases trained on NQ with the two losses. For DensePhrases, we compute the passage score using f̃(p, q) as described in Eq. (3). First, we observe that in-batch negatives are highly effective at reducing Ltopic, as DensePhrases trained with only in-passage negatives has a relatively high Ltopic. Furthermore, we observe that using in-passage negatives in DensePhrases (+in-passage) significantly lowers Lhard, even lower than DPR that uses BM25 hard negatives (+BM25). Using multiple datasets (+multi. dataset) further improves Lhard for both models. DPR has generally better (lower) Ltopic than DensePhrases, which might be due to the smaller training batch size of DensePhrases (hence a smaller number of in-batch negatives) compared to DPR. The results suggest that DensePhrases relies less on topical features and is better at retrieving passages based on fine-grained entailment cues. This might contribute to the better ranking of the retrieved passages in Table 1, where DensePhrases shows better MRR@20 and P@20 while top-20 accuracy is similar.

Hard negatives for DensePhrases? We test two different kinds of hard negatives in DensePhrases to see whether its performance can further improve in the presence of in-passage negatives. For each training question, we mine for a hard negative passage, either by BM25 similarity or by finding another passage that contains the gold-answer phrase, but possibly with a wrong context. Then we use all phrases from the hard negative passage as additional hard negatives in X− along with the existing in-passage negatives. As shown in Table 3, DensePhrases obtains no substantial improvements from additional hard negatives, indicating that in-passage negatives are already highly effective at producing good phrase (or passage) representations.

Type                    D = {p}   D = Dsmall
DensePhrases            71.8      61.3
  + BM25 neg.           71.8      60.6
  + Same-phrase neg.    72.1      60.9

Table 3: Effect of using hard negatives in DensePhrases on the NQ development set. We report EM when a single gold passage is given (D = {p}) or 6K passages are given by gathering all the gold passages from the NQ development set (D = Dsmall). The two hard negatives do not give any noticeable improvement in DensePhrases.
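The Lhard probe above needs one hard negative per example, built by deleting the answer-bearing sentence from the gold passage. Below is a minimal sketch of that construction; the naive period-based sentence splitter is our stand-in assumption for whatever segmentation was actually used, and the example passage is invented for illustration.

```python
def make_hard_negative(passage: str, answer: str) -> str:
    """Remove the sentence containing `answer` from `passage`, yielding a
    topically identical passage that no longer supports the answer."""
    sentences = [s.strip() for s in passage.split(".") if s.strip()]
    kept = [s for s in sentences if answer not in s]
    return ". ".join(kept) + "."

passage = ("\"Don't Stand So Close to Me\" is a hit song by the Police. "
           "It was released in September 1980.")
print(make_hard_negative(passage, "the Police"))
# -> It was released in September 1980.
```

Scoring the gold passage against this negative with Eq. (4) then isolates fine-grained entailment from topical matching, since both passages share the same topic and context.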
For knowledge-grounded dialogue, we use Wizard of Wikipedia (WoW) (Dinan et al., 2019), where each query consists of a conversation history, and the generated utterances should be grounded in one of the Wikipedia articles. We follow the KILT guidelines and evaluate the document (i.e., Wikipedia article) retrieval performance of our models given each query. We use R-precision, the proportion of successfully retrieved pages in the top R results, where R is the number of distinct pages in the provenance set. However, in the tasks considered, R-precision is equivalent to precision@1, since each question is annotated with only one document.

Models — DensePhrases is trained with the original query-side fine-tuning loss (denoted as L_phrase) or with L_doc as described in Eq. (5). When DensePhrases is trained with L_phrase, it labels any phrase that matches the title of the gold document as positive. After training, DensePhrases returns the document that contains the top passage. For baseline retrieval methods, we report the performance of TF-IDF and DPR from Petroni et al. (2021). We also include a multi-task version of DPR and DensePhrases, which uses the entire KILT training datasets.11 While not our main focus of comparison, we also report the performance of other baselines from Petroni et al. (2021), which use generative models (e.g., RAG (Lewis et al., 2020)) or task-specific models (e.g., BLINK (Wu et al., 2020), which has additional entity-linking pre-training). Note that these methods use additional components, such as a generative model or a cross-encoder model, on top of retrieval models.

Model | AY2 | WnWi | WnCw | WoW
Retriever Only
TF-IDF | 3.7 | 0.2 | 2.1 | 49.0
DPR | 1.8 | 0.3 | 0.5 | 25.5
DensePhrases-L_phrase | 7.7 | 12.5 | 6.4 | -
DensePhrases-L_doc | 61.6 | 32.1 | 37.4 | 47.0
DPR♣ | 26.5 | 4.9 | 1.9 | 41.1
DensePhrases-L_doc♣ | 68.4 | 47.5 | 47.5 | 55.7
Retriever + Additional Components
RAG | 72.6 | 48.1 | 47.6 | 57.8
BLINK + flair | 81.5 | 80.2 | 68.8 | -

Table 4: Results on the KILT test set (entity linking: AY2, WnWi, WnCw; dialogue: WoW). We report page-level R-precision on each task, which is equivalent to precision@1 on these datasets. ♣: multi-task models.

Results — Table 4 shows the results on three entity linking tasks and a knowledge-grounded dialogue task. On all tasks, we find that DensePhrases with L_doc performs much better than DensePhrases with L_phrase and also matches the performance of RAG, which uses an additional large generative model to generate the document titles. Using L_phrase does very poorly, since it focuses on phrase-level entailment rather than document-level relevance. Compared to the multi-task version of DPR (i.e., DPR♣), DensePhrases-L_doc♣ can be easily adapted to non-QA tasks like entity linking and generalizes better on tasks without training sets (WnWi, WnCw).

11 We follow the same steps described in Petroni et al. (2021) for training the multi-task version of DensePhrases.

6 DensePhrases as a Multi-Vector Passage Encoder

In this section, we demonstrate that DensePhrases can be interpreted as a multi-vector passage encoder, which has recently been shown to be very effective for passage retrieval (Luan et al., 2021; Khattab and Zaharia, 2020). Since this type of multi-vector encoding model requires a large disk footprint, we show that we can control the number of vectors per passage (and hence the index size) through filtering. We also introduce quantization techniques to build more efficient phrase retrieval models without a significant performance drop.
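Since Eq. (5) is reused below for quantization-aware query-side fine-tuning, it may help to see it spelled out. The following is a minimal PyTorch-style sketch of the document-level loss; the tensor layout and the `doc_ids`/`gold_docs` bookkeeping are our own illustrative assumptions, not the released implementation:

```python
import torch

def doc_level_loss(scores, doc_ids, gold_docs):
    """Sketch of L_doc in Eq. (5).

    scores:    (k,) tensor of phrase scores f(s, q) for the top-k phrases in S~(q).
    doc_ids:   list of k source-document ids, d(s) for each phrase s.
    gold_docs: set of annotated gold document ids D* for the question q.
    Assumes at least one retrieved phrase comes from a gold document.
    """
    # Mask of phrases whose source document is in the gold set D*.
    positive = torch.tensor([d in gold_docs for d in doc_ids])
    log_probs = torch.log_softmax(scores, dim=0)   # log( e^f / sum e^f )
    # Negative log of the total probability mass on phrases from gold documents.
    return -torch.logsumexp(log_probs[positive], dim=0)

# Toy usage: 4 retrieved phrases, two of them from gold document 7.
loss = doc_level_loss(torch.tensor([2.1, 0.3, 1.5, -0.2]),
                      doc_ids=[7, 12, 7, 3], gold_docs={7})
```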
6.1 Multi-Vector Encodings

Since we represent passages not by a single vector, but by a set of phrase vectors (decomposed as token-level start and end vectors, see Lee et al. (2021)), we notice similarities to previous work, which addresses the capacity limitations of dense, fixed-length passage encodings. While these approaches store a fixed number of vectors per passage (Luan et al., 2021; Humeau et al., 2020) or all token-level vectors (Khattab and Zaharia, 2020), phrase retrieval models store a dynamic number of phrase vectors per passage, where many phrases are filtered by a model trained on QA datasets. Specifically, Lee et al. (2021) train a binary classifier (or a phrase filter) to filter phrases based on their phrase representations. This phrase filter is supervised by the answer annotations in QA datasets, hence it denotes candidate answer phrases. In our experiment, we tune the filter threshold to control the number of vectors per passage for passage retrieval.

6.2 Efficient Phrase Retrieval

The multi-vector encoding models, as well as ours, are prohibitively large, since they contain multiple vector representations for every passage in the entire corpus. We introduce a vector quantization-based method that can safely reduce the size of our phrase index without performance degradation.

Optimized product quantization — We use Product Quantization (PQ) (Jegou et al., 2010), where the original vector space is decomposed into the Cartesian product of subspaces. Using PQ, the memory usage of N d-dimensional centroid vectors reduces from N·d to N^{1/M}·d with M subspaces, while each database vector requires log2 N bits. Among different variants of PQ, we use Optimized Product Quantization (OPQ) (Ge et al., 2013), which learns an orthogonal matrix R to better decompose the original vector space. See Ge et al. (2013) for more details on OPQ.

Quantization-aware training — While this type of aggressive vector quantization can significantly reduce memory usage, it often comes at the cost of performance degradation due to the quantization loss. To mitigate this problem, we use quantization-aware query-side fine-tuning, motivated by the recent successes of quantization-aware training (Jacob et al., 2018). Specifically, during query-side fine-tuning, we reconstruct the phrase vectors using the trained (optimized) product quantizer, which are then used to minimize Eq. (5).

Figure 4: Top-5 passage retrieval accuracy on Natural Questions (dev) for different index sizes of DensePhrases. The index size (GB) and the average number of saved vectors per passage (# vec / p) are controlled by the filtering threshold τ. For instance, # vec / p reduces from 28.0 to 5.1 with a higher τ, which also reduces the index size from 69GB to 23GB. OPQ: Optimized Product Quantization (Ge et al., 2013).

6.3 Experimental Results

In Figure 4, we present the top-5 passage retrieval accuracy with respect to the size of the phrase index in DensePhrases. First, applying OPQ can reduce the index size of DensePhrases from 307GB to 69GB, while the top-5 retrieval accuracy is poor without quantization-aware query-side fine-tuning.
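To give a sense of what such an OPQ-compressed index looks like in practice, here is a small sketch using the faiss library; the dimension, data, and factory string are illustrative choices, not the paper's actual configuration:

```python
import faiss
import numpy as np

d = 128                                        # vector dimension (illustrative)
vecs = np.random.rand(100_000, d).astype("float32")

# "OPQ16,PQ16": learn an OPQ rotation, then product-quantize into
# M = 16 subspaces with 8-bit codebooks -> 16 bytes per stored vector,
# instead of d * 4 = 512 bytes for the raw float32 vector.
index = faiss.index_factory(d, "OPQ16,PQ16")
index.train(vecs)                              # learn rotation + codebooks
index.add(vecs)                                # store compressed codes only

scores, ids = index.search(vecs[:3], 5)        # nearest-neighbor search on codes
```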
Furthermore, by tuning the threshold τ for the phrase filter, the number of vectors per passage (# vec / p) can be reduced without hurting the performance significantly. The performance improves with a larger number of vectors per passage, which aligns with the findings of multi-vector encoding models (Khattab and Zaharia, 2020; Luan et al., 2021). Our results show that having 8.8 vectors per passage in DensePhrases gives similar retrieval accuracy to DPR.

7 Related Work

Text retrieval has a long history in information retrieval, either for serving relevant information to users directly or for feeding it to computationally expensive downstream systems. While traditional research has focused on designing heuristics, such as sparse vector models like TF-IDF and BM25, it has recently become an active area of interest for machine learning researchers. This was precipitated by the emergence of open-domain QA as a standard problem setting (Chen et al., 2017) and the spread of the retriever-reader paradigm (Yang et al., 2019; Nie et al., 2019). The interest has spread to include a more diverse set of downstream tasks, such as fact checking (Thorne et al., 2018), entity linking (Wu et al., 2020) or dialogue generation (Dinan et al., 2019), where the problems require access to large corpora or knowledge sources. Recently, REALM (Guu et al., 2020) and RAG (retrieval-augmented generation) (Lewis et al., 2020) have been proposed as general-purpose pre-trained models with explicit access to world knowledge through the retriever. There has also been a line of work to integrate text retrieval with structured knowledge graphs (Sun et al., 2018, 2019; Min et al., 2020). We refer to Lin et al. (2020) for a comprehensive overview of neural text retrieval methods.

8 Conclusion

In this paper, we show that phrase retrieval models also learn passage retrieval without any modification. By drawing connections between the objectives of DPR and DensePhrases, we provide a better understanding of how phrase retrieval learns passage retrieval, which is also supported by several empirical evaluations on multiple benchmarks. Specifically, phrase-based passage retrieval has better retrieval quality on top-k passages when k is small, and this translates to an efficient use of passages for open-domain QA. We also show that DensePhrases can be fine-tuned for more coarse-grained retrieval units, serving as a basis for any retrieval unit. We plan to further evaluate phrase-based passage retrieval on standard information retrieval tasks such as MS MARCO.

Acknowledgements

We thank Chris Sciavolino, Xingcheng Yao, the members of the Princeton NLP group, and the anonymous reviewers for helpful discussion and valuable feedback. This research is supported by the James Mi ’91 Research Innovation Fund for Data Science and gifts from Apple and Amazon. It was also supported in part by the ICT Creative Consilience program (IITP-2021-0-01819) supervised by the IITP (Institute for Information & communications Technology Planning & Evaluation) and the National Research Foundation of Korea (NRF-2020R1A2C3010638).

Ethical Considerations

Models introduced in our work often use question answering datasets such as Natural Questions to build phrase or passage representations.
Some of the datasets, like SQuAD, are created from a small number of popular Wikipedia articles, and hence could make our model biased towards a small number of topics. We hope that inventing an alternative training method that properly regularizes our model could mitigate this problem. Although our efforts have been made to reduce the computational cost of retrieval models, using passage retrieval models as external knowledge bases will inevitably increase the resource requirements for future experiments. Further efforts should be made to make retrieval more affordable for independent researchers.

References

Petr Baudiš and Jan Šedivý. 2015. Modeling of the question answering task in the YodaQA system. In International Conference of the Cross-Language Evaluation Forum for European Languages.

Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic parsing on Freebase from question-answer pairs. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1533–1544, Seattle, Washington, USA. Association for Computational Linguistics.

Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading Wikipedia to answer open-domain questions. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1870–1879, Vancouver, Canada. Association for Computational Linguistics.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.

Emily Dinan, Stephen Roller, Kurt Shuster, Angela Fan, Michael Auli, and Jason Weston. 2019. Wizard of Wikipedia: Knowledge-powered conversational agents. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net.

Tiezheng Ge, Kaiming He, Qifa Ke, and Jian Sun. 2013. Optimized product quantization. IEEE Transactions on Pattern Analysis and Machine Intelligence, 36(4):744–755.

Zhaochen Guo and Denilson Barbosa. 2018. Robust named entity disambiguation with random walks. Semantic Web, 9(4):459–479.

Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, and Ming-Wei Chang. 2020. REALM: Retrieval-augmented language model pre-training. In International Conference on Machine Learning.

Johannes Hoffart, Mohamed Amir Yosef, Ilaria Bordino, Hagen Fürstenau, Manfred Pinkal, Marc Spaniol, Bilyana Taneva, Stefan Thater, and Gerhard Weikum. 2011. Robust disambiguation of named entities in text. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 782–792, Edinburgh, Scotland, UK. Association for Computational Linguistics.

Samuel Humeau, Kurt Shuster, Marie-Anne Lachaux, and Jason Weston. 2020. Poly-encoders: Architectures and pre-training strategies for fast and accurate multi-sentence scoring. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.

Gautier Izacard and Edouard Grave. 2021a. Distilling knowledge from reader to retriever for question answering. In International Conference on Learning Representations.

Gautier Izacard and Edouard Grave. 2021b. Leveraging passage retrieval with generative models for open domain question answering. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 874–880, Online. Association for Computational Linguistics.
Benoit Jacob, Skirmantas Kligys, Bo Chen, Menglong Zhu, Matthew Tang, Andrew G. Howard, Hartwig Adam, and Dmitry Kalenichenko. 2018. Quantization and training of neural networks for efficient integer-arithmetic-only inference. In 2018 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2018, Salt Lake City, UT, USA, June 18-22, 2018, pages 2704–2713. IEEE Computer Society.

Herve Jegou, Matthijs Douze, and Cordelia Schmid. 2010. Product quantization for nearest neighbor search. IEEE Transactions on Pattern Analysis and Machine Intelligence, 33(1):117–128.

Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S. Weld, Luke Zettlemoyer, and Omer Levy. 2020. SpanBERT: Improving pre-training by representing and predicting spans. Transactions of the Association for Computational Linguistics, 8:64–77.

Mandar Joshi, Eunsol Choi, Daniel Weld, and Luke Zettlemoyer. 2017. TriviaQA: A large scale distantly supervised challenge dataset for reading comprehension. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1601–1611, Vancouver, Canada. Association for Computational Linguistics.

Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for open-domain question answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6769–6781, Online. Association for Computational Linguistics.

Omar Khattab and Matei Zaharia. 2020. ColBERT: Efficient and effective passage search via contextualized late interaction over BERT. In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR 2020, Virtual Event, China, July 25-30, 2020, pages 39–48. ACM.

Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, Kristina Toutanova, Llion Jones, Matthew Kelcey, Ming-Wei Chang, Andrew M. Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. 2019. Natural questions: A benchmark for question answering research. Transactions of the Association for Computational Linguistics, 7:452–466.

Jinhyuk Lee, Minjoon Seo, Hannaneh Hajishirzi, and Jaewoo Kang. 2020. Contextualized sparse representations for real-time open-domain question answering. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 912–919, Online. Association for Computational Linguistics.

Jinhyuk Lee, Mujeen Sung, Jaewoo Kang, and Danqi Chen. 2021. Learning dense representations of phrases at scale. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 6634–6647, Online. Association for Computational Linguistics.

Kenton Lee, Ming-Wei Chang, and Kristina Toutanova. 2019. Latent retrieval for weakly supervised open domain question answering.
In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6086–6096, Florence, Italy. Association for Computational Linguistics.

Patrick S. H. Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, and Douwe Kiela. 2020. Retrieval-augmented generation for knowledge-intensive NLP tasks. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual.

Jimmy Lin, Rodrigo Nogueira, and Andrew Yates. 2020. Pretrained transformers for text ranking: BERT and beyond. arXiv preprint arXiv:2010.06467.

Yi Luan, Jacob Eisenstein, Kristina Toutanova, and Michael Collins. 2021. Sparse, dense, and attentional representations for text retrieval. Transactions of the Association for Computational Linguistics, 9:329–345.

Sewon Min, Danqi Chen, Luke Zettlemoyer, and Hannaneh Hajishirzi. 2020. Knowledge guided text retrieval and reading for open domain question answering. ArXiv preprint, abs/1911.03868.

Yixin Nie, Songhe Wang, and Mohit Bansal. 2019. Revealing the importance of semantic retrieval for machine reading at scale. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2553–2566, Hong Kong, China. Association for Computational Linguistics.

Fabio Petroni, Aleksandra Piktus, Angela Fan, Patrick Lewis, Majid Yazdani, Nicola De Cao, James Thorne, Yacine Jernite, Vladimir Karpukhin, Jean Maillard, Vassilis Plachouras, Tim Rocktäschel, and Sebastian Riedel. 2021. KILT: a benchmark for knowledge intensive language tasks. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2523–2544, Online. Association for Computational Linguistics.

Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21:1–67.

Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383–2392, Austin, Texas. Association for Computational Linguistics.

Stephen Robertson and Hugo Zaragoza. 2009. The probabilistic relevance framework: BM25 and beyond. Foundations and Trends® in Information Retrieval, 3(4):333–389.

Minjoon Seo, Tom Kwiatkowski, Ankur Parikh, Ali Farhadi, and Hannaneh Hajishirzi. 2018. Phrase-indexed question answering: A new challenge for scalable document comprehension. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 559–564, Brussels, Belgium. Association for Computational Linguistics.

Minjoon Seo, Jinhyuk Lee, Tom Kwiatkowski, Ankur Parikh, Ali Farhadi, and Hannaneh Hajishirzi. 2019. Real-time open-domain question answering with dense-sparse phrase index. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4430–4441, Florence, Italy. Association for Computational Linguistics.

Haitian Sun, Tania Bedrax-Weiss, and William Cohen. 2019.
PullNet: Open domain question answering with iterative retrieval on knowledge bases and text. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2380–2390, Hong Kong, China. Association for Computational Linguistics.

Haitian Sun, Bhuwan Dhingra, Manzil Zaheer, Kathryn Mazaitis, Ruslan Salakhutdinov, and William Cohen. 2018. Open domain question answering using early fusion of knowledge bases and text. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4231–4242, Brussels, Belgium. Association for Computational Linguistics.

James Thorne, Andreas Vlachos, Christos Christodoulopoulos, and Arpit Mittal. 2018. FEVER: a large-scale dataset for fact extraction and VERification. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 809–819, New Orleans, Louisiana. Association for Computational Linguistics.

Ledell Wu, Fabio Petroni, Martin Josifoski, Sebastian Riedel, and Luke Zettlemoyer. 2020. Scalable zero-shot entity linking with dense entity retrieval. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6397–6407, Online. Association for Computational Linguistics.

Lee Xiong, Chenyan Xiong, Ye Li, Kwok-Fung Tang, Jialin Liu, Paul N. Bennett, Junaid Ahmed, and Arnold Overwijk. 2021. Approximate nearest neighbor negative contrastive learning for dense text retrieval. In International Conference on Learning Representations.

Sohee Yang and Minjoon Seo. 2020. Is retriever merely an approximator of reader? ArXiv preprint, abs/2010.10999.

Wei Yang, Yuqing Xie, Aileen Lin, Xingyu Li, Luchen Tan, Kun Xiong, Ming Li, and Jimmy Lin. 2019. End-to-end open-domain question answering with BERTserini. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations), pages 72–77, Minneapolis, Minnesota. Association for Computational Linguistics.
Scaling Laws for Fact Memorization of Large Language Models

Xingyu Lu1*, Xiaonan Li1*, Qinyuan Cheng1, Kai Ding2, Xuanjing Huang1, Xipeng Qiu1†
1Fudan University, 2INTSIG
[email protected], {lixn20, xpqiu}@fudan.edu.cn
* Equal Contribution  † Corresponding Author

Abstract

Fact knowledge memorization is crucial for Large Language Models (LLM) to generate factual and reliable responses. However, the behaviors of LLM fact memorization remain under-explored. In this paper, we analyze the scaling laws for LLMs' fact knowledge and LLMs' behaviors of memorizing different types of facts. We find that LLMs' fact knowledge capacity has a linear and a negative exponential law relationship with model size and training epochs, respectively. Estimated by the built scaling law, memorizing the whole of Wikidata's facts requires training an LLM with 1000B non-embed parameters for 100 epochs, suggesting that using LLMs to memorize all public facts is almost implausible in a general pre-training setting. Meanwhile, we find that LLMs can generalize on unseen fact knowledge, and its scaling law is similar to that of general pre-training. Additionally, we analyze the compatibility and preference of LLMs' fact memorization. For compatibility, we find that LLMs struggle with memorizing redundant facts in a unified way. Only when correlated facts have the same direction and structure can the LLM memorize them compatibly. This shows the inefficiency of LLM memorization for redundant facts. For preference, the LLM pays more attention to memorizing more frequent and difficult facts, and subsequent facts can overwrite the memorization of prior facts, which significantly hinders low-frequency fact memorization. Our findings reveal the capacity and characteristics of LLMs' fact knowledge learning, which provide directions for LLMs' fact knowledge augmentation.

Figure 1: The fact capacity of LLMs with different sizes on Wikidata, under 100 training epochs. According to the predicted scaling law, memorizing all Wikidata triples (15B) requires 1000B non-embed parameters.

1 Introduction

Large Language Models (LLM) have demonstrated remarkable abilities over a wide range of tasks (OpenAI, 2023; Touvron et al., 2023; Reid et al., 2024; Bai et al., 2023a; DeepSeek-AI, 2024; Cai et al., 2024; Sun et al., 2024). However, LLMs are prone to generating non-factual and fabricated content, which is usually called "hallucination" (Zhang et al., 2023; Huang et al., 2023; Rawte et al., 2023) and undermines LLMs' reliability.

LLMs' factual responses rely heavily on fact memorization. Specifically, the LLM memorizes fact knowledge during pre-training, and the subsequent fine-tuning enables it to extract the corresponding fact knowledge for a given instruction (Zhu and Li, 2023). If the base LLM does not memorize specific knowledge, it will be challenging for the fine-tuned LLM to correctly answer the corresponding question (Ren et al., 2024). Additionally, fine-tuning with unmemorized fact knowledge even encourages LLMs' hallucination (Lin et al., 2024; Gekhman et al., 2024). Despite the critical role of fact memorization, the behaviors of LLM fact memorization remain largely under-explored. Previous work usually analyzes pre-trained LLMs' various abilities through the loss on unstructured text (Kaplan et al., 2020; Hoffmann et al., 2022a), and this is hard to relate to LLMs' fact memorization for two reasons: 1.
The composition of the pre-training corpus is highly complicated, and fact knowledge appears in it in a mixed and uneven manner, which makes it hard to accurately quantify the fact knowledge in massive pre-training data. 2. The widely used metric, loss, cannot directly measure LLM fact memorization, since not all tokens are fact-related.

This paper makes progress in quantitatively analyzing LLM fact memorization behaviors, including the scaling laws and behaviors of memorizing different types of facts. We focus on the memorization of atomic facts to facilitate accurately quantifying the number of facts and the memorization accuracy. We define atomic fact knowledge as a (key, attribute, value) triple, e.g., (SpaceX, CEO, Elon Musk), following Allen-Zhu and Li (2024). Given a key and an attribute, if the LLM correctly predicts the corresponding value, we consider it to have memorized this fact. In this way, we can accurately quantify the number of facts and whether the LLM fully memorizes a specific fact, which facilitates a more accurate quantitative analysis of LLM fact memorization behaviors.

Based on this setting, we analyze LLMs' fact memorization behaviors on massive facts from a large real-world information table. Specifically, we analyze the fact memorization scaling law of LLMs and LLMs' behaviors of memorizing different types of fact knowledge, including the following research questions (RQ):

RQ1: How does an LLM's fact knowledge capacity scale with its size and training epochs? We define the fact knowledge capacity as the maximum fact triple quantity that the LLM can accurately memorize. We find that an LLM's fact capacity scales linearly with its size under the same training epochs. Additionally, we find that the number of training epochs required for LLMs to memorize fact knowledge is significantly larger than one, and this leads to a higher training cost than general knowledge learning in pre-training. Increasing training epochs initially increases the LLM's fact capacity, which then reaches saturation, exhibiting a negative exponential law trend. Additionally, we extend our experiments to Wikidata and the results exhibit a consistent trend, as shown in Figure 1. According to the scaling law, under 100 training epochs, memorizing all of Wikidata's fact triples requires about 1000B non-embed parameters, which seems very costly. These results indicate the necessity of supplementing LLMs' fact knowledge with external information, like Retrieval-Augmented Generation (RAG) (Guu et al., 2020; Gao et al., 2024; Asai et al., 2023; Arivazhagan et al., 2023; Li et al., 2024; Min et al., 2023; Shi et al., 2023).

RQ2: Can LLMs efficiently memorize redundant facts? Many facts are derivable and thus redundant. For example, "Ivanka is Trump's daughter" can be derived from "Trump is Ivanka's father". We analyze whether LLMs can efficiently memorize redundant facts, i.e., whether LLMs can save memorization capacity when simultaneously memorizing redundant facts. We find that LLMs struggle with efficiently memorizing redundant information. In general cases, when memorizing redundant and non-redundant information of the same scale, the LLM exhibits a similar memorization rate.
Only under specialized conditions, e.g., when the correlated facts have the same direction and structure, can the LLM memorize them efficiently. These results demonstrate LLMs' inefficiency in redundant fact memorization. Since massive redundant facts can appear in pre-training data in various forms, this indicates that it is not cost-effective to use LLMs' parameters to store fact knowledge, and using a non-parametric method, like RAG, can be more efficient.

RQ3: What influences an LLM's memorization preference for different types of fact knowledge? During pre-training, LLMs meet various facts and only memorize portions of them. We analyze LLMs' fact memorization preference in three aspects: frequency, difficulty and memorization order. We find that LLMs pay more attention to memorizing more frequent and difficult facts. Additionally, when an LLM memorizes two types of facts sequentially, the subsequent facts significantly overwrite the memorization of the prior facts. These findings further explain LLMs' inferior memorization of low-frequency facts, since such facts appear infrequently during the pre-training process and thus can easily be overwritten by subsequent pre-training knowledge.

Beyond fact memorization, we also analyze an interesting topic of fact knowledge generalization:

RQ4: Can LLMs generalize on unseen fact knowledge? What is the relation between fact memorization and generalization? Surprisingly, we find that the LLM can generalize on unseen facts to a certain level, and its scaling law is highly similar to the common pre-training LLM scaling law (Kaplan et al., 2020). The generalization accuracy is determined by the type of fact, and some types of facts exhibit high generalizability, suggesting the potential of improving LLMs' factuality by adaptively leveraging fact generalization. Meanwhile, we find a qualitative relation between fact memorization and generalization: for the same type of fact, the easier it is for the LLM to memorize, the better the LLM generalizes on the unseen set. This indicates that both LLM fact memorization and generalization are based on the correlation between input and output (Geirhos et al., 2020). If there is a stronger correlation between the input and output of one type of fact, it will be easier for the LLM to memorize and learn this type of fact knowledge in a unified manner. Conversely, if the correlation is minimal, the LLM needs to memorize facts individually and can hardly generalize on unseen ones.

We summarize our contributions as follows: 1) To the best of our knowledge, this paper is the first to quantitatively analyze LLMs' scaling laws and behaviors of fact memorization on massive real-world facts. 2) Our findings reveal the capacity and characteristics of LLMs' fact knowledge learning. These results show that LLMs are highly inefficient at fact memorization from multiple perspectives, which suggests leveraging non-parametric methods, e.g., RAG, to enhance the fact knowledge of LLMs. 3) We find that LLMs can generalize on unseen facts and that different types of facts show different generalizability, which indicates the potential of improving LLMs' factuality by adaptively leveraging LLMs' fact generalization. 4) We release our code to facilitate future research.1

2 Preliminary

In this paper, we focus on the quantitative analysis of LLMs' atomic fact knowledge memorization, and we introduce the experiment setup as follows.
Atomic Fact Knowledge Memorization — We define atomic fact knowledge as a (key, attribute, value) triple, e.g., (SpaceX, CEO, Elon Musk), and we cast fact memorization as a triple-value prediction task. Specifically, for a fact triple (k, a, v), we use the cross-entropy loss to train the LLM to predict the value from (k, a) as:

p = LLM(template_a(k, a)), (1)

where k and a are the key's name and attribute name, and template_a is the natural-language template of the attribute, which makes the LLM's input more coherent and realistic. We adopt one template per attribute for simplicity. Our pilot experiments show that varying the number of templates leads to consistent results, as shown in Appendix A. Since we focus on fact memorization, we use the same input for training and inference.

After training on facts D = {(k_i, a_i, v_i)}_{i=1..|D|}, we evaluate the LLM's Memorization Rate (MR) as:

MR(D) = average_{i=1..|D|}(EM(p_i, v_i)), (2)

where EM means exact match, and p_i and v_i are the i-th fact's prediction and value. In this way, we can use the memorization rate to accurately quantify the portion of facts the LLM has memorized.

Field | Description | Example
Company* | company name | Tiktok Co., Ltd.
Credit-No | social credit number | 91110105MA...
Operator | legal representative | Lidong Zhang
Start-Date | founding date | 2003.11.2
Title | representative title | Executive Director
Type | company type | Co., Ltd.
Register-Capital | registered capital | ¥10^5
Longitude | company longitude | 116.497976
... | ... | ...

Table 1: Company information table, which has 22 fields and 10M lines. "Company*" is the primary key. Information on all fields is shown in Appendix B.

Dataset — This paper mainly conducts experiments on massive facts from a large real-world company information table, which is provided by a commercial data company, INTSIG.2 The table contains various attributes of massive companies, and we use facts like (Company, Attribute, Value) for the experiments. The involved facts come from the real world and are of diverse types, and thus closely mirror the various facts encountered during pre-training. We show the table's statistics and a sample row in Table 1. Additionally, experiments on Wikidata also show consistent trends (Section 3).

Implementation Details — We mainly use the model architecture and tokenizer of Qwen (Bai et al., 2023b). We conduct experiments over various sizes of LLMs from 20M to 0.5B. We mainly train LLMs' fact memorization from scratch, and we show the results on pre-trained LLMs in Appendix C. For the specific hyper-parameters of each model size and overall implementation details, please refer to Appendix D.

1 https://github.com/StarLooo/Scaling_Law_LLM_Fact_Memorization
2 INTSIG is a leading company in intelligent document recognition. https://www.intsig.com/

3 Fact Capacity Scaling Laws

Exploratory Experiment — First, we observe the same LLM's memorization rate over varying numbers of training facts under the same training epochs. We show the results in Figure 2. We see that the memorization rate significantly decreases with an increasing number of facts. This initially shows that there is a memorization capacity upper limit for an LLM with a given size and number of training epochs. In this section, we explore the scaling laws of LLMs' fact capacity.

Figure 2: LLMs' memorization rate under different numbers of training facts.

(a) 50 Epochs (b) 200 Epochs
Figure 3: The relation between LLMs' fact capacity and their model sizes, under fixed training epochs.
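As a concrete illustration of the evaluation protocol in Eqs. (1)–(2), the following is a minimal sketch using a HuggingFace-style causal LM; the template wordings and model name are placeholders, not the paper's exact setup:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-0.5B")  # placeholder model
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen1.5-0.5B")

# One template per attribute, as in Eq. (1); the wordings are illustrative.
TEMPLATES = {"CEO": "The CEO of {k} is", "Longitude": "The longitude of {k} is"}

def memorization_rate(facts):
    """facts: list of (key, attribute, value) triples. Returns MR as in Eq. (2)."""
    hits = 0
    for k, a, v in facts:
        prompt = TEMPLATES[a].format(k=k)
        inputs = tokenizer(prompt, return_tensors="pt")
        out = model.generate(**inputs, max_new_tokens=16, do_sample=False)
        # Decode only the newly generated tokens and compare by exact match.
        pred = tokenizer.decode(out[0][inputs["input_ids"].shape[1]:],
                                skip_special_tokens=True).strip()
        hits += int(pred == v)          # EM(p_i, v_i)
    return hits / len(facts)

print(memorization_rate([("SpaceX", "CEO", "Elon Musk")]))
```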
We define the fact capacity as the maximum fact quantity that the LLM can accurately memorize:

C = max(|D|) s.t. MR(D) > ϕ%, (3)

where D is the set of training facts, a list of randomly sampled facts from all facts, and ϕ% denotes an MR close to 100%. In the experiments, we set ϕ% to 95% and enumerate Ds of varying sizes to find the maximum |D| for which MR(D) lies in [ϕ%, (ϕ + 1)%].

Scaling Law of Fact Capacity and Model Size — We plot the fact capacities of varying model sizes from 30M to 0.5B, under fixed training epochs, in Figure 3 (the 20M model fails to reach 95% MR at these epochs). We find that the LLM's fact capacity scales linearly with model size. Meanwhile, we find that the line fitted from points of small model sizes (non-embed parameters ≤ 38M) extrapolates well to the large model size of 0.5B (308M non-embed parameters ≈ 8 × 38M), which shows the robustness of the linear scaling law.

Scaling Law of Fact Capacity and Epochs — We plot the same LLM's fact capacities under varying training epochs in Figure 4. We find that with increasing training epochs, the LLM's fact capacity significantly increases at the beginning and then approaches saturation at about 1000 epochs, and we use a negative exponential law to fit the trend:

C = C* − αE · exp(−βE · Epoch), (4)

where C* is the LLM's fact capacity saturation as the number of epochs approaches infinity, and αE and βE are constants. We further train the LLM on a fact quantity that is 1.1 times C* and find that the LLM fails to accurately memorize all of those training facts within 3000 epochs (almost saturated), which verifies the effectiveness of the negative exponential law fit. Additionally, for small numbers of training epochs, e.g., < 35, the LLM can memorize almost no facts accurately, which shows that the cost of fact memorization is significantly higher than that of general knowledge learning through pre-training, which usually requires only one epoch (Cai et al., 2024). This result indicates that it is challenging for the LLM to memorize low-frequency fact knowledge in pre-training, and up-sampling those facts can be a potential solution.

(a) 44M Model (b) 69M Model
Figure 4: The relation between LLMs' fact capacity and training epochs, under fixed model size.

Experiments on Wikidata — We extend our experiments to Wikidata. Specifically, we use the fact triples from Wikidata as the training facts and plot the relation between the fact capacity and the LLM's size in Figure 1. We find that the results on Wikidata also show a linear relation between fact capacity and model size, which demonstrates the generality of the linear capacity–parameter scaling. According to the fitted line, we estimate that it requires an LLM with 1000B non-embed parameters to fully memorize all of Wikidata's fact triples (about 15B)3 under 100 training epochs, which seems costly. Since Wikidata's fact knowledge is only a subset of all public facts, our analysis indicates that it is very challenging for an LLM to memorize all public fact knowledge at common LLM sizes and in the common pre-training setting, which shows the necessity of enhancing LLMs' fact knowledge with external information, e.g., RAG (Gao et al., 2024).
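The two fits above are straightforward to reproduce; the following sketch fits the linear law and the negative exponential law of Eq. (4) with scipy (the data arrays are made-up placeholders, not the paper's measurements):

```python
import numpy as np
from scipy.optimize import curve_fit

# Placeholder observations: (non-embed params, capacity) and (epochs, capacity).
params   = np.array([18e6, 27e6, 38e6, 60e6])        # model sizes (illustrative)
cap_size = np.array([0.6e6, 0.9e6, 1.3e6, 2.0e6])    # capacities at fixed epochs

epochs    = np.array([50, 100, 200, 400, 800, 1000])
cap_epoch = np.array([0.2e6, 0.45e6, 0.8e6, 1.2e6, 1.5e6, 1.55e6])

# Linear law: C = a * N + b (capacity vs. non-embed parameters).
a, b = np.polyfit(params, cap_size, deg=1)

# Negative exponential law, Eq. (4): C = C* - alpha_E * exp(-beta_E * epoch).
def neg_exp(epoch, c_star, alpha_e, beta_e):
    return c_star - alpha_e * np.exp(-beta_e * epoch)

(c_star, alpha_e, beta_e), _ = curve_fit(
    neg_exp, epochs, cap_epoch, p0=(cap_epoch.max(), cap_epoch.max(), 1e-3))

print(f"linear slope: {a:.3g} facts/param; fitted saturation C*: {c_star:.3g}")
```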
3 https://www.Wikidata.org/wiki/Property:P10209

(a) Company → Credit-No (b) Company → Operator (c) Company → Register-No
Figure 5: LLMs' memorization of the same facts with different directions, where "*" means facts are from another group of keys. The right side shows the learning curves.

4 Redundant Fact Memorization

In this section, we explore whether LLMs can efficiently memorize redundant facts, i.e., whether LLMs can save memorization capacity when simultaneously memorizing redundant facts. Specifically, we conduct experiments on three types of redundant facts: 1) the forward and reverse versions of the same fact; 2) correlated facts of the same key; 3) single-hop facts and their derivable multi-hop facts. Additionally, we analyze whether learning abstract abilities occupies the fact memorization capacity. We set the training epochs to 1000 to make the LLM's memorization saturated, unless otherwise specified.

The Same Fact of Different Directions — In this section, we analyze whether the LLM can efficiently memorize the forward and reverse versions of the same facts. The forward fact is predicting the value based on the company name and attribute, as in Eq. (1). The reverse fact is predicting the company based on the attribute's value (Berglund et al., 2024; Allen-Zhu and Li, 2023). We select three highly reversible attributes, "Operator", "Credit-No" and "Register-No", for the experiment. Specifically, we compare the memorization rates of the following three groups: 1) separately memorizing the forward or reverse version of the same facts; 2) simultaneously memorizing the forward and reverse versions of the same facts (redundant); 3) simultaneously memorizing the forward facts and the reverse version of another set of facts (non-redundant). The number of facts in each direction is the same, and thus the memorization load of groups 2 and 3 is consistent. We show the results for the 41M model in Figure 5. We also plot the corresponding learning curves in Figure 5, which show that the LLMs' fact memorization is almost saturated. The results for the 30M model are shown in Appendix E and exhibit similar trends.

Figure 6: Memorization of correlated facts, where "*" means that facts are from another group of keys.

We see that simultaneously memorizing facts of different directions leads to a significantly lower MR than separately memorizing them, and the MR of simultaneous memorization is lower than half of that of separate memorization. This shows that the LLM does not memorize them compatibly; memorizing different directions of the same fact even causes them to conflict with each other. Meanwhile, memorizing different directions of the same group of facts (redundant) has a similar memorization rate to memorizing different groups of facts (non-redundant).
These results show that when the LLM memorizes the same facts in different directions, it seems to memorize them separately, as if memorizing independent facts, which reflects the inefficiency of LLM memorization for the same facts with different directions (Golovneva et al., 2024). Since massive facts can be described in different directions, these results indicate that LLMs' parametric knowledge is not efficient for fact memorization.

Correlated Facts of the Same Key — In this section, we analyze whether the LLM can efficiently memorize correlated facts of the same key, e.g., a company's type and its type code. Specifically, we select two combinations of correlated attributes for the analysis and additionally adopt two unrelated combinations as a comparison. For each combination, we compare the memorization rates of the following three groups: 1) individually memorizing facts of a single attribute; 2) simultaneously memorizing facts of two attributes for the same companies (if the attributes are correlated, these facts will be redundant); 3) simultaneously memorizing one attribute's facts for one group of companies and another attribute's facts for another group of companies (non-redundant). The number of facts for each attribute is the same, and thus the memorization load of groups 2 and 3 is consistent.

The results are shown in Figure 6. We find that simultaneously memorizing correlated attributes leads to a higher memorization rate than separate memorization, which shows that LLMs can efficiently memorize one key's correlated attributes, and correlated fact memorization can facilitate the memorization of the individual facts. Meanwhile, for the unrelated attributes, simultaneously memorizing them leads to a decreased memorization rate, which shows that whether an LLM can compatibly memorize one key's facts highly depends on the correlation of those facts. While it is hard to inject new correlated knowledge into LLMs (Allen-Zhu and Li, 2023), these results indicate the potential of additionally memorizing correlated facts in pre-training, since they can be memorized compatibly.

Derivable Multi-hop Fact — In this section, we analyze whether the LLM can efficiently memorize derivable facts. For example, when the LLM memorizes the longitude of two companies, can it additionally memorize their longitude gap efficiently? We explore this question on facts about the attributes "Longitude" and "Start-Date", and choose their gaps as derivable 2-hop facts. For 2-hop facts of one attribute, given two different keys, we train the LLM to predict the value gap of this attribute.
Specifically, we compare the memorization rates of the following three groups: 1) separately memorizing single-hop facts and their derivable 2-hop facts; 2) simultaneously memorizing single-hop facts and their derivable 2-hop facts (redundant); 3) simultaneously memorizing single-hop facts and 2-hop facts derived from another set of single-hop facts (non-redundant). We control the numbers of 1-hop facts and 2-hop facts to be equal. The results are shown in Figure 7. We find that group 2 leads to a significantly lower memorization rate than group 1, which shows that the memorization of derivable 2-hop facts is not compatible with the corresponding 1-hop facts. Additionally, the memorization rate of group 2 is similar to that of group 3. This shows that when the LLM memorizes single-hop facts and their derivable facts, it seems to memorize them separately, as it would memorize irrelevant facts. This reflects the inefficiency of LLM memorization for derivable facts, which limits the LLM's fact capacity for the massive derivable facts in pre-training corpora (Ju et al., 2024).

(a) Longitude (b) Start-Date
Figure 7: LLM memorization of derivable facts, where "*" means that facts are from another group of keys.

(a) Longitude & SNLI (b) Longitude & Amazon-CLS
Figure 8: The influence of abstract ability learning on LLM fact memorization.

Fact Memorization Meets Abstract Ability Learning — We explore whether abstract ability learning occupies LLMs' fact memorization capacity. Specifically, we compare the fact MR or test accuracy of two groups: 1) separately learning fact knowledge and abstract abilities; 2) simultaneously learning fact knowledge and abstract abilities. We use SNLI (MacCartney and Manning, 2008) and Amazon Sentiment Analysis (McAuley and Leskovec, 2013) for abstract ability learning. The frequency of facts and abstract ability examples is the same. The results are shown in Figure 8. We see that additionally learning abstract abilities decreases the fact memorization rates. Meanwhile, the incorporation of fact knowledge slightly hurts the classification tasks' test accuracy. These results indicate that fact memorization and abstract ability learning influence each other and jointly occupy the LLM's knowledge capacity, which further exacerbates the challenge of LLMs memorizing facts during pre-training.

5 Fact Memorization Preference of LLMs

We analyze LLMs' fact memorization preference in three aspects: frequency, difficulty and memorization order. Since this section focuses on preference, we select unrelated facts for the experiments. Specifically, we use the combination of facts from the company information table and a specialized subset of Wikidata facts (Book → Author).

Frequency — We compare the respective memorization rates when simultaneously memorizing two attributes under different frequencies. The results on "Longitude & Author" and "Operator & Author" are shown in Figure 9.
We see that the higher frequency leads to a significantly higher memorization rate and inhibits the memorization of low-frequency facts (Mallen et al., 2023). This indicates the importance of increasing the frequency of low-frequency facts in the pre-training corpus to facilitate LLMs' memorization of them. However, since facts in the pre-training corpus usually appear in a complicated and mixed manner, it is non-trivial to separately control their respective frequencies, which further increases the challenge for LLMs to memorize low-frequency facts.

(a) Longitude & Author (b) Operator & Author
Figure 9: The effect of frequency on fact memorization.

The influence of memorization order (Longitude & Author):
Training Facts | Longitude MR | Author MR
Longitude | 20.9 | -
Author | - | 76.1
Longitude⇒Author | 0 | 13.1
Author⇒Longitude | 17.7 | 0

(a) Longitude & Credit-No (b) Longitude & Operator
Figure 10: The effect of difficulty on fact memorization.

Difficulty — We compare the respective memorization rates of three groups: 1) using an LLM of size 2N to simultaneously memorize facts of two attributes with different memorization difficulties; 2) and 3) using an LLM of size N to separately memorize the facts of each attribute. Across the groups, the number of facts for each attribute is the same, and thus the average fact capacity for each attribute is consistent. In this way, we can observe the LLM's preference when simultaneously memorizing two attributes. The results on "Longitude & Credit-No" and "Longitude & Operator" are shown in Figure 10. In separate memorization, the memorization rates of the attributes "Credit-No" and "Operator" are lower, and thus they are harder to memorize. Compared with separate memorization, under simultaneous memorization the memorization rate of the difficult facts increases while that of the easy facts decreases. This shows that when LLMs memorize different types of facts, they tend to pay more attention to the facts that are harder to memorize.

Training Facts | Credit-No MR | Operator MR
Credit-No | 30.7 | -
Operator | - | 38.9
Credit-No⇒Operator | 0 | 32.6
Operator⇒Credit-No | 20.6 | 0.1

Table 2: The influence of memorization order. "A⇒B" means memorizing A before B.

Memorization Order — We compare the memorization rates of memorizing facts of two attributes in different memorization orders. The results are shown in Table 2. We find that the memorization rate of the earlier facts decreases to almost zero, and the subsequently memorized facts almost entirely refresh the LLM's fact memorization. This indicates a potential reason for LLMs' inferior memorization of low-frequency facts: some of them may only appear in the early stage of the pre-training process and are almost overwritten by subsequent pre-training knowledge. Additionally, the MR of the subsequent facts is lower than under individual memorization, which demonstrates the importance of evenly distributing various types of facts across the pre-training process.

(a) 44M (b) 97M
Figure 11: LLMs' fact generalization loss across different numbers of training facts.

6 Fact Generalization of LLMs

Beyond fact memorization, we explore an interesting question: can LLMs generalize on unseen fact knowledge?
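One concrete way to operationalize this question is to hold out entire keys, so that every evaluation fact concerns a company never seen in training; a minimal sketch (the split ratio and example triples are invented for illustration):

```python
import random

def split_by_key(facts, test_ratio=0.1, seed=0):
    """Split (key, attribute, value) triples so that no test key occurs in training."""
    keys = sorted({k for k, _, _ in facts})
    random.Random(seed).shuffle(keys)
    test_keys = set(keys[:int(len(keys) * test_ratio)])
    train = [f for f in facts if f[0] not in test_keys]
    test  = [f for f in facts if f[0] in test_keys]
    return train, test

facts = [("Tiktok Co., Ltd.", "Longitude", "116.497976"),
         ("SpaceX", "CEO", "Elon Musk"),
         ("SpaceX", "Start-Date", "2002.3.14")]
train_facts, test_facts = split_by_key(facts, test_ratio=0.34)
```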
Specifically, we train the LLM to memorize facts of a group of keys and test it on unseen keys' facts. We show each attribute's generalization accuracy (exact match) for the 44M model in Figure 12. We also test the 30M model and observe similar trends (see Appendix F). We observe that facts for most of the attributes have a generalization accuracy greater than zero, which indicates that LLMs can generalize on unseen fact knowledge to a certain level.

To analyze why the LLM can generalize on fact knowledge, we conduct a case study on facts of three attributes and show the cases in Appendix G. We find that LLMs' fact generalization depends on the correlation between input (key) and output (value) (Geirhos et al., 2020). For a specific type of fact (attribute), a higher correlation between the key and value leads to higher generalization accuracy. For example, the LLM may correctly predict an unseen company's longitude if the company name contains a region name and the training dataset contains the longitudes of companies with the same region name. Or it can roughly estimate the company's register-capital according to the company size indicated by the company name, e.g., "Fruit shop" → (¥10^4 ∼ ¥10^5) or "Investment company" → (¥10^7 ∼ ¥10^9). Meanwhile, different types of facts have different generalizability. For those facts with obvious patterns, the LLM can achieve reliable generalization. For those attributes with weak correlation, although the LLM does not know the facts exactly, it can identify the rough range of the facts. These findings suggest the potential of adaptively leveraging LLMs' fact generalization: 1. selectively leveraging the generalization of highly generalizable facts; 2. if the LLM does not exactly know the whole fact, it can respond with part of the fact, e.g., a rough range, to make its response more informative and thus helpful.

Figure 12: Memorization and generalization over facts of different types, for the 44M LLM trained on 10M facts.

Fact Generalization Scaling Law — Additionally, we analyze the scaling law of LLMs' fact knowledge generalization. Specifically, we plot the LLMs' loss values on test fact knowledge under different training fact quantities, following Kaplan et al. (2020). The results are shown in Figure 11. We find that the test loss on fact generalization also follows the power law (Kaplan et al., 2020):

L(D) = D_c · D^{α_D}, (5)

where D is the number of training facts, and D_c and α_D are constants. This trend is similar to that of general pre-training (Kaplan et al., 2020), which indicates that LLMs follow a similar mechanism in learning factual knowledge as they do in learning general knowledge during pre-training (OpenAI, 2023).

Relation between Fact Memorization and Generalization — We plot the memorization rate and generalization accuracy for each type of fact in Figure 12. We find that the generalization accuracy of one type of fact highly correlates with its memorization rate.
Relation between Fact Memorization and Generalization  We plot the memorization rate and generalization accuracy for each type of fact in Figure 12. We find that the generalization accuracy of one type of fact highly correlates with its memorization rate: for a given type of fact, a higher memorization rate leads to higher generalization accuracy. This indicates that both LLM fact memorization and generalization are based on the correlation between input and output (Geirhos et al., 2020). If there is a stronger correlation between the input and output, it is easier for the LLM to memorize and learn that type of fact knowledge in a unified manner. If the correlation is minimal, the LLM needs to memorize facts individually and can hardly generalize to unseen ones.

7 Related Work

Understanding the scaling behaviors of LLMs is important for decisions about key design choices of LLMs, e.g., model size or pre-training data (Kaplan et al., 2020; Gao et al., 2022; Clark et al., 2022). Most of the existing work focuses on the scenario of general pre-training or downstream tasks. Kaplan et al. (2020) observe power-law relationships between LLM perplexity and the size of the LLM and dataset. Hoffmann et al. (2022b) explore the optimal token quantity and LLM size for pre-training under a specified compute budget and find that the LLM size and training tokens should be scaled equally for compute-optimal LLM training. Besides pre-training, researchers find that the performance of downstream tasks can be predicted from the LLM size and training data scale (Hernandez et al., 2021; Ghorbani et al., 2021; Isik et al., 2024). Different from these works, our paper specifically focuses on the scaling laws of LLMs' fact memorization and on LLMs' behaviors when memorizing different types of facts, which are critical for LLMs' factual responses.

Allen-Zhu and Li (2024), concurrently with our work, explore scaling laws of LLMs' memorization on synthetic facts. Our work differs in several ways: 1. We analyze LLMs' fact memorization on real-world facts, while they use randomly generated facts, which have a non-negligible gap with real-world facts. According to our findings, we conclude that memorizing all of Wikidata's facts requires 1000B non-embed parameters, which indicates that using an LLM to memorize all public facts is hardly plausible. 2. We additionally analyze LLMs' behaviors when learning fact knowledge from several aspects, including compatibility, preference, and generalization, which further provides directions for the fact-knowledge augmentation of LLMs.

8 Conclusion

We analyze LLMs' fact memorization behaviors, and our main conclusions are as follows: 1) Fact capacity has a linear relationship with model size and a negative exponential relationship with training epochs. According to the resulting scaling law, we estimate that memorizing all of Wikidata's fact triples requires training an LLM with 1000B non-embed parameters for 100 epochs, which seems very costly. 2) LLMs struggle to memorize redundant facts efficiently; only redundant facts with the same direction and structure can be memorized in a unified manner. 3) The LLM prefers memorizing more frequent and more difficult facts. 4) LLMs can generalize to unseen fact knowledge, and its scaling law is similar to that of general pre-training.
Limitations

We list the limitations of this paper as follows:

• Since this paper focuses on fact knowledge memorization, each atomic fact individually forms a training example, and we keep the same inputs for the training and inference stages. This has a small gap with the pre-training setting, which usually uses unstructured text and concatenates short sentences into large chunks for training efficiency. We regard the exploration of facts in unstructured text as future work.

• As shown in Figure 4, fact memorization requires hundreds of training epochs, which leads to significant computational costs. Limited by computational resources, the maximum LLM size used in our experiments is 0.5B. We regard the exploration of larger scales as future work.

Ethics Statement

In this paper, we use public fact information for experiments, including a real-world company information table and Wikidata fact triples. The company information table is provided by a commercial data company, and we obtained its permission to conduct this research. Meanwhile, the trained models are only for analytical research on LLM fact memorization and will not be made public.

References

Zeyuan Allen-Zhu and Yuanzhi Li. 2023. Physics of language models: Part 3.2, knowledge manipulation. Preprint, arXiv:2309.14402.

Zeyuan Allen-Zhu and Yuanzhi Li. 2024. Physics of language models: Part 3.3, knowledge capacity scaling laws. Preprint, arXiv:2404.05405.

Manoj Ghuhan Arivazhagan, Lan Liu, Peng Qi, Xinchi Chen, William Yang Wang, and Zhiheng Huang. 2023. Hybrid hierarchical retrieval for open-domain question answering. In Findings of the Association for Computational Linguistics: ACL 2023, pages 10680–10689, Toronto, Canada. Association for Computational Linguistics.

Akari Asai, Zeqiu Wu, Yizhong Wang, Avirup Sil, and Hannaneh Hajishirzi. 2023. Self-RAG: Learning to retrieve, generate, and critique through self-reflection. Preprint, arXiv:2310.11511.

Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, and Tianhang Zhu. 2023a. Qwen technical report. Preprint, arXiv:2309.16609.

Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, and Tianhang Zhu. 2023b. Qwen technical report. Preprint, arXiv:2309.16609.

Lukas Berglund, Meg Tong, Max Kaufmann, Mikita Balesni, Asa Cooper Stickland, Tomasz Korbak, and Owain Evans. 2024. The reversal curse: LLMs trained on "A is B" fail to learn "B is A". Preprint, arXiv:2309.12288.
Zheng Cai, Maosong Cao, Haojiong Chen, Kai Chen, Keyu Chen, Xin Chen, Xun Chen, Zehui Chen, Zhi Chen, Pei Chu, Xiaoyi Dong, Haodong Duan, Qi Fan, Zhaoye Fei, Yang Gao, Jiaye Ge, Chenya Gu, Yuzhe Gu, Tao Gui, Aijia Guo, Qipeng Guo, Conghui He, Yingfan Hu, Ting Huang, Tao Jiang, Penglong Jiao, Zhenjiang Jin, Zhikai Lei, Jiaxing Li, Jingwen Li, Linyang Li, Shuaibin Li, Wei Li, Yining Li, Hongwei Liu, Jiangning Liu, Jiawei Hong, Kaiwen Liu, Kuikun Liu, Xiaoran Liu, Chengqi Lv, Haijun Lv, Kai Lv, Li Ma, Runyuan Ma, Zerun Ma, Wenchang Ning, Linke Ouyang, Jiantao Qiu, Yuan Qu, Fukai Shang, Yunfan Shao, Demin Song, Zifan Song, Zhihao Sui, Peng Sun, Yu Sun, Huanze Tang, Bin Wang, Guoteng Wang, Jiaqi Wang, Jiayu Wang, Rui Wang, Yudong Wang, Ziyi Wang, Xingjian Wei, Qizhen Weng, Fan Wu, Yingtong Xiong, Chao Xu, Ruiliang Xu, Hang Yan, Yirong Yan, Xiaogui Yang, Haochen Ye, Huaiyuan Ying, Jia Yu, Jing Yu, Yuhang Zang, Chuyu Zhang, Li Zhang, Pan Zhang, Peng Zhang, Ruijie Zhang, Shuo Zhang, Songyang Zhang, Wenjian Zhang, Wenwei Zhang, Xingcheng Zhang, Xinyue Zhang, Hui Zhao, Qian Zhao, Xiaomeng Zhao, Fengzhe Zhou, Zaida Zhou, Jingming Zhuo, Yicheng Zou, Xipeng Qiu, Yu Qiao, and Dahua Lin. 2024. InternLM2 technical report. Preprint, arXiv:2403.17297.

Aidan Clark, Diego de las Casas, Aurelia Guy, Arthur Mensch, Michela Paganini, Jordan Hoffmann, Bogdan Damoc, Blake Hechtman, Trevor Cai, Sebastian Borgeaud, George van den Driessche, Eliza Rutherford, Tom Hennigan, Matthew Johnson, Katie Millican, Albin Cassirer, Chris Jones, Elena Buchatskaya, David Budden, Laurent Sifre, Simon Osindero, Oriol Vinyals, Jack Rae, Erich Elsen, Koray Kavukcuoglu, and Karen Simonyan. 2022. Unified scaling laws for routed language models. Preprint, arXiv:2202.01169.

DeepSeek-AI. 2024. DeepSeek-V2: A strong, economical, and efficient mixture-of-experts language model. Preprint, arXiv:2405.04434.

Leo Gao, John Schulman, and Jacob Hilton. 2022. Scaling laws for reward model overoptimization. Preprint, arXiv:2210.10760.

Yunfan Gao, Yun Xiong, Xinyu Gao, Kangxiang Jia, Jinliu Pan, Yuxi Bi, Yi Dai, Jiawei Sun, Meng Wang, and Haofen Wang. 2024. Retrieval-augmented generation for large language models: A survey. Preprint, arXiv:2312.10997.

Robert Geirhos, Jörn-Henrik Jacobsen, Claudio Michaelis, Richard Zemel, Wieland Brendel, Matthias Bethge, and Felix A. Wichmann. 2020. Shortcut learning in deep neural networks. Nature Machine Intelligence, 2(11):665–673.

Zorik Gekhman, Gal Yona, Roee Aharoni, Matan Eyal, Amir Feder, Roi Reichart, and Jonathan Herzig. 2024. Does fine-tuning LLMs on new knowledge encourage hallucinations? Preprint, arXiv:2405.05904.

Behrooz Ghorbani, Orhan Firat, Markus Freitag, Ankur Bapna, Maxim Krikun, Xavier Garcia, Ciprian Chelba, and Colin Cherry. 2021. Scaling laws for neural machine translation. Preprint, arXiv:2109.07740.

Olga Golovneva, Zeyuan Allen-Zhu, Jason Weston, and Sainbayar Sukhbaatar. 2024. Reverse training to nurse the reversal curse. Preprint, arXiv:2403.13799.

Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, and Ming-Wei Chang. 2020. REALM: Retrieval-augmented language model pre-training. Preprint, arXiv:2002.08909.

Danny Hernandez, Jared Kaplan, Tom Henighan, and Sam McCandlish. 2021. Scaling laws for transfer. Preprint, arXiv:2102.01293.
Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, Tom Hennigan, Eric Noland, Katie Millican, George van den Driessche, Bogdan Damoc, Aurelia Guy, Simon Osindero, Karen Simonyan, Erich Elsen, Jack W. Rae, Oriol Vinyals, and Laurent Sifre. 2022a. Training compute-optimal large language models. Preprint, arXiv:2203.15556.

Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, Tom Hennigan, Eric Noland, Katie Millican, George van den Driessche, Bogdan Damoc, Aurelia Guy, Simon Osindero, Karen Simonyan, Erich Elsen, Jack W. Rae, Oriol Vinyals, and Laurent Sifre. 2022b. Training compute-optimal large language models. Preprint, arXiv:2203.15556.

Lei Huang, Weijiang Yu, Weitao Ma, Weihong Zhong, Zhangyin Feng, Haotian Wang, Qianglong Chen, Weihua Peng, Xiaocheng Feng, Bing Qin, and Ting Liu. 2023. A survey on hallucination in large language models: Principles, taxonomy, challenges, and open questions. Preprint, arXiv:2311.05232.

Berivan Isik, Natalia Ponomareva, Hussein Hazimeh, Dimitris Paparas, Sergei Vassilvitskii, and Sanmi Koyejo. 2024. Scaling laws for downstream task performance of large language models. Preprint, arXiv:2402.04177.

Tianjie Ju, Yijin Chen, Xinwei Yuan, Zhuosheng Zhang, Wei Du, Yubin Zheng, and Gongshen Liu. 2024. Investigating multi-hop factual shortcuts in knowledge editing of large language models. Preprint, arXiv:2402.11900.

Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B. Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. 2020. Scaling laws for neural language models. Preprint, arXiv:2001.08361.

Diederik P. Kingma and Jimmy Ba. 2017. Adam: A method for stochastic optimization. Preprint, arXiv:1412.6980.

Xiaonan Li, Changtai Zhu, Linyang Li, Zhangyue Yin, Tianxiang Sun, and Xipeng Qiu. 2024. LLatrieval: LLM-verified retrieval for verifiable generation. Preprint, arXiv:2311.07838.

Sheng-Chieh Lin, Luyu Gao, Barlas Oguz, Wenhan Xiong, Jimmy Lin, Wen tau Yih, and Xilun Chen. 2024. FLAME: Factuality-aware alignment for large language models. Preprint, arXiv:2405.01525.

Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. Preprint, arXiv:1711.05101.
Bill MacCartney and Christopher D. Manning. 2008. Modeling semantic containment and exclusion in natural language inference. In Proceedings of the 22nd International Conference on Computational Linguistics (Coling 2008), pages 521–528, Manchester, UK. Coling 2008 Organizing Committee.

Alex Mallen, Akari Asai, Victor Zhong, Rajarshi Das, Daniel Khashabi, and Hannaneh Hajishirzi. 2023. When not to trust language models: Investigating effectiveness of parametric and non-parametric memories. Preprint, arXiv:2212.10511.

Julian J. McAuley and Jure Leskovec. 2013. Hidden factors and hidden topics: understanding rating dimensions with review text. In Seventh ACM Conference on Recommender Systems, RecSys '13, Hong Kong, China, October 12-16, 2013, pages 165–172. ACM.

Sewon Min, Suchin Gururangan, Eric Wallace, Hannaneh Hajishirzi, Noah A. Smith, and Luke Zettlemoyer. 2023. SILO language models: Isolating legal risk in a nonparametric datastore. Preprint, arXiv:2308.04430.

OpenAI. 2023. GPT-4 technical report.

Vipula Rawte, Amit Sheth, and Amitava Das. 2023. A survey of hallucination in large foundation models. Preprint, arXiv:2309.05922.

Machel Reid, Nikolay Savinov, Denis Teplyashin, Dmitry Lepikhin, Timothy P. Lillicrap, Jean-Baptiste Alayrac, Radu Soricut, Angeliki Lazaridou, Orhan Firat, Julian Schrittwieser, Ioannis Antonoglou, Rohan Anil, Sebastian Borgeaud, Andrew M. Dai, Katie Millican, Ethan Dyer, Mia Glaese, Thibault Sottiaux, Benjamin Lee, Fabio Viola, Malcolm Reynolds, Yuanzhong Xu, James Molloy, Jilin Chen, Michael Isard, Paul Barham, Tom Hennigan, Ross McIlroy, Melvin Johnson, Johan Schalkwyk, Eli Collins, Eliza Rutherford, Erica Moreira, Kareem Ayoub, Megha Goel, Clemens Meyer, Gregory Thornton, Zhen Yang, Henryk Michalewski, Zaheer Abbas, Nathan Schucher, Ankesh Anand, Richard Ives, James Keeling, Karel Lenc, Salem Haykal, Siamak Shakeri, Pranav Shyam, Aakanksha Chowdhery, Roman Ring, Stephen Spencer, Eren Sezener, et al. 2024. Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context. CoRR, abs/2403.05530.

Mengjie Ren, Boxi Cao, Hongyu Lin, Cao Liu, Xianpei Han, Ke Zeng, Guanglu Wan, Xunliang Cai, and Le Sun. 2024. Learning or self-aligning? Rethinking instruction fine-tuning. Preprint, arXiv:2402.18243.

Weijia Shi, Sewon Min, Michihiro Yasunaga, Minjoon Seo, Rich James, Mike Lewis, Luke Zettlemoyer, and Wen tau Yih. 2023. REPLUG: Retrieval-augmented black-box language models. Preprint, arXiv:2301.12652.

Tianxiang Sun, Xiaotian Zhang, Zhengfu He, Peng Li, Qinyuan Cheng, Xiangyang Liu, Hang Yan, Yunfan Shao, Qiong Tang, Shiduo Zhang, Xingjian Zhao, Ke Chen, Yining Zheng, Zhejian Zhou, Ruixiao Li, Jun Zhan, Yunhua Zhou, Linyang Li, Xiaogui Yang, Lingling Wu, Zhangyue Yin, Xuanjing Huang, Yu-Gang Jiang, and Xipeng Qiu. 2024. MOSS: An open conversational large language model. Machine Intelligence Research.

Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton-Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurélien Rodriguez, Robert Stojnic, Sergey Edunov, and Thomas Scialom. 2023. Llama 2: Open foundation and fine-tuned chat models. CoRR, abs/2307.09288.

Yue Zhang, Yafu Li, Leyang Cui, Deng Cai, Lemao Liu, Tingchen Fu, Xinting Huang, Enbo Zhao, Yu Zhang, Yulong Chen, Longyue Wang, Anh Tuan Luu, Wei Bi, Freda Shi, and Shuming Shi. 2023. Siren's song in the AI ocean: A survey on hallucination in large language models. CoRR, abs/2309.01219.

Zeyuan Allen Zhu and Yuanzhi Li. 2023. Physics of language models: Part 3.1, knowledge storage and extraction. CoRR, abs/2309.14316.
A The Effect of Template Quantity

In this section, we analyze the influence of template quantity on the memorization rate. Specifically, we observe the memorization rate of the same 200K facts under various numbers of templates, using a 30M model. The results are shown in Figure 13. We see that the memorization rates over different numbers of templates remain at a consistent level. Even when the paraphrase quantity increases to 32, the memorization rate of a specific attribute's facts only decreases to, at the lowest, 75% of the original. Therefore, the number of templates does not significantly influence the LLM's fact memorization.

Figure 13: Memorization on different numbers of templates. (a) Longitude. (b) Start-Date. (c) Operator. (d) Credit-No.

B Attributes of Table

We list the information and average length of all fields of the used large information table in Table 4.

C The Effect of Pre-training on Fact Memorization

We compare fact knowledge learning from scratch with learning from pre-trained checkpoints. The results are shown in Table 5. We find that pre-trained initialization leads to higher generalization accuracy and a consistent memorization rate on the training data. These results show that pre-trained knowledge barely influences the LLM's memorization of new fact knowledge and can improve the LLM's generalization to unseen facts. We leave further analyses of the influence of pre-training on fact generalization as future work.

D Implementation Details

In this section, we first introduce the general implementation details (model, training, dataset) and then introduce the details of the specific settings for each individual experiment.

D.1 Model Details

We mainly use the model architecture and tokenizer of Qwen-1.5 (Bai et al., 2023b). For more details of Qwen, we refer the reader to the original paper (Bai et al., 2023b). We mainly set the hyper-parameters for each LLM size according to the following aspects: 1) The aspect ratio, which is the ratio of the hidden size to the number of layers, should be maintained at a moderate value. Following conventional design practices, we control the aspect ratio within the range of 128/3 (as adopted by Qwen-1.5-0.5B) to 128 (as adopted by Qwen-1.5-7B). 2) The intermediate size should be approximately 8/3 times the hidden size and divisible by 128. We provide the detailed hyper-parameters of the model architectures in Table 6.
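These two design rules are easy to check mechanically. The sketch below screens candidate configurations against them; the bounds 128/3 and 128 and the 8/3 multiplier come from the text above, while the ±128 rounding tolerance is an assumption of this sketch:

```python
def valid_config(num_layers: int, hidden: int, intermediate: int) -> bool:
    """Check the two hyper-parameter rules described in D.1."""
    # Rule 1: aspect ratio (hidden size / number of layers) in [128/3, 128].
    aspect = hidden / num_layers
    if not (128 / 3 <= aspect <= 128):
        return False
    # Rule 2: intermediate size ~ (8/3) * hidden and divisible by 128.
    target = 8 * hidden / 3
    return intermediate % 128 == 0 and abs(intermediate - target) <= 128

# The rows of Table 6 satisfy both rules, e.g. the 97M model:
assert valid_config(num_layers=6, hidden=512, intermediate=1408)
```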
D.2 Training Details

We configure the global batch size as 512 and employ the AdamW optimizer (Kingma and Ba, 2017; Loshchilov and Hutter, 2019). In exploratory experiments, we find that LLMs of different sizes are highly sensitive to the learning rate, and thus we search for the best learning rate for each LLM size and each dataset to achieve the optimal memorization rate under a small number of training epochs. Meanwhile, we adopt a cosine learning rate scheduler. We list the learning rates for each model size in Table 6. We observe that the optimal learning rates differ between the company information table and Wikidata, with the latter requiring a higher learning rate. Most of these experiments are conducted using either 8 NVIDIA RTX 3090s or 4 NVIDIA A800-80GB GPUs, with BFloat16 mixed-precision training. The training speeds of models of different sizes are given in Table 3.

Model Identifier | Training Speed
20M   | 800
30M   | 700
41M   | 650
44M   | 600
69M   | 500
97M   | 400
116M  | 300
200M  | 225
0.5B  | 100

Table 3: Training speeds (triples per second per GPU) of models of different sizes, measured on NVIDIA RTX 3090.

D.3 Dataset Details

In this paper, we conduct experiments on fact triples from a large real-world company information table and Wikidata [4]. The company information table is provided by a commercial data company, and we obtained its permission to conduct this research. For Wikidata, we follow a public GitHub repository [5] to get all of its fact triples. The facts in the company information table are in Chinese, and Wikidata's facts are in English. For the keys and entities in the company information table and Wikidata, we use their natural-language names instead of the original keys (uids) to closely mirror the facts in pre-training data.

[4] https://www.wikidata.org/wiki/Wikidata:Introduction
[5] https://github.com/neelguha/simple-wikidata-db

D.4 Details on Scaling Law of Fact Capacity and Model Size

We conduct a series of experiments using various model sizes ranging from 30M to 0.5B (20M fails to reach 95% MR at these epochs), while keeping the training epochs fixed. We use the number of non-embed parameters to measure model size, following Kaplan et al. (2020). When randomly sampling facts, we first randomly sample keys and then use the facts of all of these keys' attributes, to keep the fact-type distributions consistent. In these experiments, we use |D| ∗ MR(D) to measure the fact capacity more accurately, since the memorization rate may vary slightly for each D. The objective is to investigate the relationship between fact capacity and model size. Specifically, we utilize the results from models with fewer than 200M parameters to establish a scaling-law formula. We then validate the extrapolation by employing models with 200M and 0.5B parameters on both the company information table and Wikidata. For Wikidata, we set the fixed number of training epochs to 100, while for the company information table, we use 50 and 200 epochs. In this way, we can observe the results across different datasets and epochs, which makes our results and the resulting scaling law more robust. The templates we use for the company information table are listed in Table 7. For Wikidata, since it contains a tremendous number of fact types (relations), it is costly to design an individual template for each type, and thus we use a unified template: "For this entity, ⟨E⟩, the entity forming the relationship '⟨R⟩' is:".
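Given the measured (non-embed parameters, fact capacity) pairs, the linear scaling law can be fit and extrapolated in a few lines. The sketch below mirrors the procedure described here; every numeric value in it, including the Wikidata triple count, is a placeholder for illustration and not a measurement from the paper:

```python
import numpy as np

# Hypothetical (non-embed parameters, measured fact capacity) pairs,
# where capacity is |D| * MR(D) at fixed training epochs.
params = np.array([0.6e6, 1.3e6, 2.6e6, 5.1e6, 10.6e6, 19.3e6])
capacity = np.array([0.6e5, 1.3e5, 2.7e5, 5.3e5, 10.8e5, 19.6e5])  # placeholders

slope, intercept = np.polyfit(params, capacity, deg=1)

# Extrapolate: non-embed parameters needed to hold a target number of facts.
target_facts = 15e9  # assumed count, for illustration only
needed_params = (target_facts - intercept) / slope
print(f"~{needed_params / 1e9:.0f}B non-embed parameters")
```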
D.5 Details on Scaling Law of Fact Capacity and Epochs

To explore the relationship between fact capacity and memorization epochs, we conduct experiments using different quantities of fact triples from the company information table and train 44M/69M models to memorize these triples. To ensure the convergence of the loss and the memorization rate, we set the maximum number of memorization epochs to 1000 or even more. To save computational cost, we manually stop the training once the model achieves a sufficiently high memorization rate (>95%). For each quantity of triples, we identify the first epoch at which the model attains a memorization rate higher than 95%, and we use this triple quantity as the memorized fact capacity at that epoch. After collecting these data points, we fit a negative exponential curve, as shown in Figure 4. Furthermore, we observe that for quantities of triples exceeding the fact-capacity saturation point, the model is unable to achieve a memorization rate higher than 95% within 3000 training epochs, which almost reaches saturation.
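One way to fit such a curve is with scipy. The exact functional form used for Figure 4 is not restated here, so the saturating parameterization below, along with all data values, should be treated as an assumption of this sketch:

```python
import numpy as np
from scipy.optimize import curve_fit

def capacity(epochs, c_max, tau):
    # Saturating negative exponential: capacity -> c_max as epochs grow.
    return c_max * (1.0 - np.exp(-epochs / tau))

# Hypothetical (first epoch reaching >95% MR, triple quantity) data points.
epochs = np.array([100.0, 200.0, 400.0, 700.0, 1000.0])
caps = np.array([1.1e5, 1.8e5, 2.6e5, 3.0e5, 3.1e5])  # placeholders

(c_max, tau), _ = curve_fit(capacity, epochs, caps, p0=(3e5, 300.0))
print(f"saturation capacity ~ {c_max:.3g} facts (tau ~ {tau:.0f} epochs)")
```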
D.6 Details on Redundant Fact Memorization

All of this section's experiments are conducted for 1000 epochs to ensure that the model's memorization reaches saturation.

For experiments on memorization of the same facts with different directions, we employ a 30M model and a 41M model to increase model-size diversity; the triple quantity for each fact direction is 100K for the 30M model and 200K for the 41M model. The templates employed for the forward and reverse versions of fact knowledge can be found in Table 8.

For experiments on memorization of correlated facts with the same key, we employ a 20M model. To prevent the LLM from fully memorizing all facts (which would push the groups' memorization rates to 100% and make them hard to distinguish), the number of triples for each attribute is set differently: 400K triples for the attributes in group 2 (Title, Title-Code, Type, and Type-Code), and 100K triples for the attributes in group 3 (Longitude, Start-Date, and Operator).

For experiments on memorization of derivable multi-hop facts, we employ a 30M model. The number of triples used for single-hop and two-hop knowledge is set to 200K. The templates employed for two-hop knowledge can be found in Table 9.

For experiments on mixed training of fact memorization and abstract-ability learning, we employ a 30M model trained on a combined dataset comprising 200K fact-memorization triples and 200K samples from either the SNLI or the Amazon-CLS train split. The templates utilized for SNLI and Amazon-CLS can be found in Table 10.

D.7 Details on Fact Memorization Preference

Since this section focuses on memorization-preference analysis, we select a combination of an attribute (Longitude or Operator) from the company information table and an attribute (Author) from Wikidata, to avoid correlation between the facts. The model used for these experiments has a size of 30M, and the quantity of each type of fact knowledge (not triples) is set to 100K. To manipulate frequency, we evenly up-sample one type of fact knowledge, thereby increasing its triple quantity.

For experiments investigating the difficulty preference of LLMs in memorization, we employ both a 30M model and a 41M model. The number of non-embed parameters in the 41M model is approximately twice that of the 30M model. In each experiment, we utilize 100K fact triples for each attribute (Longitude, Operator, and Credit-No).

For experiments investigating the memorization-order preference of LLMs, we load a pre-trained checkpoint of a 30M model that has already been trained to memorize 200K fact triples of one attribute. We then continue training this model to memorize an additional 200K fact triples of another attribute. This allows us to observe the influence of memorization order on the model's performance.

E Supplement Experiment for Memorizing the Same Facts with Different Directions

We also analyze memorizing facts with different directions on the 30M model with 100K training facts; the results are shown in Figure 14 and exhibit the same trend as the experiments with the 41M model.

Figure 14: LLMs' memorization of the same facts with different directions, on the 30M model with 100K facts of each direction, where "*" means the facts are from another group of keys. The right side shows the learning curves. (a) Company → Credit-No. (b) Company → Operator. (c) Company → Register-No.

F Supplement Experiment for the Relation between Fact Memorization and Generalization

We also analyze the relation between fact memorization and generalization on the 30M model with 10M training facts; the results are shown in Figure 15 and exhibit the same trend as the experiments with the 44M model in Figure 12.

Figure 15: Memorization and generalization over facts of different types, on the 30M LLM trained with 10M facts.

G Generalization Case Studies

We show case studies of "Company→Longitude", "Company→Capital", and "Company→Type" in Table 11, Table 12, and Table 13, respectively. The involved fact information is all publicly available from official websites.

Company→Longitude  We find that LLMs may correctly predict the longitude of an unseen company according to the region name in the company name. Since the training facts contain the longitude of a company in "Zhangjiajie", the LLM can identify the association between "Zhangjiajie" and the longitude, and thus use this association to predict another company in "Zhangjiajie". However, such an association may lead to a wrong prediction, because the region name can only coarsely determine the rough range of the longitude. If an unseen company is very close to a training company with the same region name, the prediction will probably be correct; otherwise, the prediction may be wrong.

Company→Capital  Similar to "Company→Longitude", the LLM can determine a rough range for an unseen company's registered capital. The LLM can roughly estimate a company's register-capital according to the company size indicated by the company name, e.g., "Fruit shop" → (¥10^4 ∼ ¥10^5) or "Investment company" → (¥10^7 ∼ ¥10^9).
For a spe- cific type of fact (attribute), the higher correlation between the key and value leads to higher gener- alization accuracy. For example, the LLM may correctly predict an unseen company’s longitude if the company name contains a region name and the training dataset contains the longitude of compa- nies with the same region name. Or it can roughly estimate the company’s register-capital according to company size indicated by the company name, e.g., “Fruit shop”→ (¥104 ∼ ¥105) or “Investment company”→ (¥107 ∼ ¥109). Meanwhile, differ- ent types of facts have different generalizability. For those facts with obvious patterns, the LLM can achieve reliable generalization. These suggest the potential of adaptively leveraging LLM’s fact generalization: 1. selectively leveraging general- ization of those highly generalizable facts; 2. if the LLM does not exactly know the whole fact, it can response with a part of the fact, e.g., a rough range, to make its response more informative and thus helpful. Credit-NoOrg-NoRegister-NoOperatorTerm-StartStart-DateCheck-DateGD-LongitudeGD-LatitudeLongitudeLatitudeRegister-CapRegister-OrgDistrict-CodeTypeType-CodeStatusStatus-CodeTitleTitle-CodeOperator-Type020406080100MR or ACCTrain MRGeneralization ACC Field Description Example Avg Tokens Company (Primary Key) Credit-No Operator Start-Date Title Type Longitude Latitude Register-No Organization-No Type-Code Title-Code Term-Start Check-Date Register-Capital Register-Org Operator-type Status Status-Code GD-longitude GD-latitude District-Code company name social credit number legal representative founding date representative title company type company longitude company latitude company registration number company organization number company type code representative title code start date of the business term incorporation date registered capital registration authority the type of legal representative company status company status code company longitude on Amap company latitude on Amap company district code Tiktok Co., Ltd. 91110105MA... Lidong Zhang 2003.11.2 Executive Director Co., Ltd. 116.497976 39.928384 4310271000119 707414389 2190 490A-Person in Charge 2003.11.15 2006.12.04 ¥105 Shanghai AIC Individual Open 0003 116.498 39.928 430182 Table 4: All fields of the used large information table. 13.2 13.5 2.7 10.0 1.7 7.6 15.1 14.5 14.5 6.5 4.0 6.6 8.7 9.7 4.2 6.1 1.0 7.5 4.0 9.8 8.9 6.0 Initialization Training Facts MR Generalization ACC Qwen-1.5-base-0.5B Random 4.3M 4.3M 100 100 32.64 30.43 Table 5: The comparison between pre-trained and random initialization, on 0.5B model. Model Identifier 20M 30M 41M 44M 69M 97M 116M 200M 0.5B All Parameters Non-Embed Parameters Number of Layers Hidden Size Intermediate Size Attention Heads LR on Company Information Table LR on Wikidata 5.1M 10.6M 19.3M 38.6M 20.1M 30.5M 41.5M 44.0M 69.0M 97.1M 116.4M 201.6M 0.5B 0.6M 85.0M 308M 3 128 384 4 2.0e-3 3.0e-3 1.3M 3 192 512 4 1.0e-3 2.0e-3 2.6M 3 256 768 8 1.0e-3 2.0e-3 24 768 2048 12 2.5e-4 5.0e-4 6 512 1408 8 5.0e-4 7.5e-4 12 512 1408 8 4.0e-4 7.5e-4 6 384 1024 8 5.0e-4 1.0e-3 6 256 768 8 7.5e-4 1.5e-3 24 1024 2816 16 1.5e-4 3.0e-4 Table 6: Hyper-parameters of LLMs with different sizes. 
Attribute | Template
Credit-No | 在企业基本信息表中,公司:“⟨C⟩”的“社会信用号”为: (In the company information table, the "Credit-No" of the company "⟨C⟩" is:)
Operator | 在企业基本信息表中,公司:“⟨C⟩”的“法定代表人”为: (In the company information table, the "Operator" of the company "⟨C⟩" is:)
Start-Date | 在企业基本信息表中,公司:“⟨C⟩”的“成立日期”为: (In the company information table, the "Start-Date" of the company "⟨C⟩" is:)
Title | 在企业基本信息表中,公司:“⟨C⟩”的“公司代表人职务”为: (In the company information table, the "Title" of the company "⟨C⟩" is:)
Type | 在企业基本信息表中,公司:“⟨C⟩”的“企业类型”为: (In the company information table, the "Type" of the company "⟨C⟩" is:)
Longitude | 在企业基本信息表中,公司:“⟨C⟩”的“经度”为: (In the company information table, the "Longitude" of the company "⟨C⟩" is:)
Latitude | 在企业基本信息表中,公司:“⟨C⟩”的“纬度”为: (In the company information table, the "Latitude" of the company "⟨C⟩" is:)
Register-No | 在企业基本信息表中,公司:“⟨C⟩”的“注册号”为: (In the company information table, the "Register-No" of the company "⟨C⟩" is:)
Organization-No | 在企业基本信息表中,公司:“⟨C⟩”的“组织机构号”为: (In the company information table, the "Organization-No" of the company "⟨C⟩" is:)
Type-Code | 在企业基本信息表中,公司:“⟨C⟩”的“企业类型代码”为: (In the company information table, the "Type-Code" of the company "⟨C⟩" is:)
Title-Code | 在企业基本信息表中,公司:“⟨C⟩”的“代表人类型代码”为: (In the company information table, the "Title-Code" of the company "⟨C⟩" is:)
Term-Start | 在企业基本信息表中,公司:“⟨C⟩”的“经营期限起始日期”为: (In the company information table, the "Term-Start" of the company "⟨C⟩" is:)
Check-Date | 在企业基本信息表中,公司:“⟨C⟩”的“核准日期”为: (In the company information table, the "Check-Date" of the company "⟨C⟩" is:)
Register-Capital | 在企业基本信息表中,公司:“⟨C⟩”的“注册资本”为: (In the company information table, the "Register-Capital" of the company "⟨C⟩" is:)
Register-Org | 在企业基本信息表中,公司:“⟨C⟩”的“登记机关”为: (In the company information table, the "Register-Org" of the company "⟨C⟩" is:)
Operator-type | 在企业基本信息表中,公司:“⟨C⟩”的“代表人类型代码”为: (In the company information table, the "Operator-type" of the company "⟨C⟩" is:)
Status | 在企业基本信息表中,公司:“⟨C⟩”的“状态”为: (In the company information table, the "Status" of the company "⟨C⟩" is:)
Status-Code | 在企业基本信息表中,公司:“⟨C⟩”的“企业状态码”为: (In the company information table, the "Status-Code" of the company "⟨C⟩" is:)
GD-Longitude | 在企业基本信息表中,公司:“⟨C⟩”在高德地图上的“经度”为: (In the company information table, the "GD-Longitude" of the company "⟨C⟩" is:)
GD-Latitude | 在企业基本信息表中,公司:“⟨C⟩”在高德地图上的“纬度”为: (In the company information table, the "GD-Latitude" of the company "⟨C⟩" is:)
District-Code | 在企业基本信息表中,公司:“⟨C⟩”的“区域码”为: (In the company information table, the "District-Code" of the company "⟨C⟩" is:)

Table 7: Templates of each attribute for memorizing the company information table.

Attribute | Direction | Template
Credit-No | Forward | 在企业基本信息表中,公司:“⟨C⟩”的“社会信用号”为: (In the company information table, the "Credit-No" of the company "⟨C⟩" is:)
Credit-No | Reverse | 在企业基本信息表中,“社会信用号”是“⟨CNo⟩”的公司为: (In the company information table, the company with the "Credit-No" as ⟨CNo⟩ is:)
Operator | Forward | 在企业基本信息表中,公司:“⟨C⟩”的“法定代表人”为: (In the company information table, the "Operator" of the company "⟨C⟩" is:)
Operator | Reverse | 在企业基本信息表中,“法定代表人”是“⟨Op⟩”的公司为: (In the company information table, the company with the "Operator" as ⟨Op⟩ is:)
Register-No | Forward | 在企业基本信息表中,公司:“⟨C⟩”的“注册号”为: (In the company information table, the "Register-No" of the company "⟨C⟩" is:)
Register-No | Reverse | 在企业基本信息表中,“注册号”是“⟨RNo⟩”的公司为: (In the company information table, the company with the "Register-No" as ⟨RNo⟩ is:)

Table 8: Templates of the forward and reverse versions of fact knowledge memorization.
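Turning a fact triple into a training example is then just string substitution into such templates. A minimal sketch using the unified Wikidata template from D.4; the function and variable names here are illustrative, not from the paper's code:

```python
WIKIDATA_TEMPLATE = ("For this entity, {entity}, the entity forming "
                     "the relationship '{relation}' is:")

def make_example(entity: str, relation: str, value: str) -> dict:
    """Render one atomic fact as a (prompt, target) training example."""
    prompt = WIKIDATA_TEMPLATE.format(entity=entity, relation=relation)
    return {"prompt": prompt, "target": " " + value}

print(make_example("The Old Man and the Sea", "Author", "Ernest Hemingway"))
```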
Attribute | Template
Longitude | 在企业基本信息表中,“⟨CA⟩”与“⟨CB⟩”在“经度”上的差值为: (In the company information table, the difference in "Longitude" between "⟨CA⟩" and "⟨CB⟩" is:)
Start-Date | 在企业基本信息表中,“⟨CA⟩”与“⟨CB⟩”的“成立日期”相差: (In the company information table, the difference in "Start-Date" between "⟨CA⟩" and "⟨CB⟩" is:)

Table 9: Templates of derivable two-hop fact knowledge memorization.

Dataset | Template
SNLI | Premise: ⟨Premise⟩ \n Hypothesis: ⟨Hypothesis⟩ \n The relation between the premise and the hypothesis is:
Amazon-CLS | What is the rating of the following amazon review: \n review title: ⟨Title⟩ \n review content: ⟨Content⟩ \n rating:

Table 10: Templates of abstract ability learning.

Train/Test | Company | Prediction | Gold
Train | Zhangjiajie Natural Agriculture Development Co., Ltd. | 110.4939900034971 | 110.4939900034971
Test | Zhangjiajie Jiahao Construction Engineering Co., Ltd. | 110.4939900034971 | 110.4939900034971
Test | Zhangjiajie Changtu Construction Co., Ltd. | 110.4939900034971 | 110.490945197
Test | Zhangjiajie Yiming Life Supermarket Co., Ltd. | 110.4939900034971 | 110.48127269239656

Table 11: Case study on fact generalization for Company→Longitude.

Train/Test | Company | Prediction | Gold
Train | Jishou City Fruit Shop | ¥10^4 | ¥10^4
Test | Yongzhou City Handsome Sister Fruit Shop | ¥10^4 | ¥10^4
Test | Yueyang City Chenghong Fruit Shop | ¥10^4 | ¥10^5
Train | Beijing Guojintan Asset Management Co., Ltd. | ¥10^8 | ¥10^8
Test | Hunan Diamond Financing Guarantee Co., Ltd. | ¥10^8 | ¥10^8
Test | Xiangtan Urban Development Investment and Management Group Co., Ltd. | ¥10^8 | ¥10^9

Table 12: Case study on fact generalization for Company→Register-Capital.

Train/Test | Company | Prediction | Gold
Test | Changsha Yuyun Real Estate Brokerage Co., Ltd. | Co., Ltd. | Co., Ltd.
Test | Beijing Shenchen Information Technology Co., Ltd. | Co., Ltd. | Co., Ltd.
Test | Hunan Chuangneng Investment Co., Ltd. | Co., Ltd. | Co., Ltd.
Test | Hunan Zhongtie Travel Agency Co., Ltd. Lusong Branch | LLC Branch | LLC Branch
Test | Yueyang Jiulong Supermarket Co., Ltd. Nanhu Branch | LLC Branch | LLC Branch
Test | Changsha Tongshan Department Store | Sole Proprietorship | Sole Proprietorship
Test | Xiangcheng Hotel, Taoyuan County | Sole Proprietorship | Sole Proprietorship

Table 13: Case study on fact generalization for Company→Type.
ai_researcher
1
Why_Do_You_Need_to_Invest_in_Quality_Control_Tools_Smart_Analytical_Six_Sigma_DMAIC_Model_Continuous_Improvement_on_Measuring_Teacher’s_Performance.pdf
A report from CDT Research

Do You See What I See? Capabilities and Limits of Automated Multimedia Content Analysis

Carey Shenkman, Dhanaraj Thakur, Emma Llansó

May 2021

The Center for Democracy & Technology (CDT) is a 25-year-old 501(c)3 nonpartisan nonprofit organization working to promote democratic values by shaping technology policy and architecture. The organization is headquartered in Washington, D.C. and has a Europe Office in Brussels, Belgium.

CAREY SHENKMAN
Carey Shenkman is an independent consultant and human rights attorney.

DHANARAJ THAKUR
Dhanaraj Thakur is the Research Director at CDT, where he leads research that advances human rights and civil liberties online.

EMMA LLANSÓ
Emma Llansó is the Director of CDT's Free Expression Project, where she leads CDT's work to promote laws and policies that support Internet users' free expression rights in the United States, Europe, and around the world.

WITH CONTRIBUTIONS BY
DeVan Hankerson, Hannah Quay-de la Vallee, Samir Jain, and Tim Hoagland.

ACKNOWLEDGEMENTS
We thank Robin Burke for his feedback on sections of this paper. We also thank the various experts from academia, industry, and civil society that we interviewed and who helped inform the analysis in this paper. This work is made possible through a grant from the John S. and James L. Knight Foundation.

Suggested Citation: Shenkman, C., Thakur, D., Llansó, E. (2021) Do You See What I See? Capabilities and Limits of Automated Multimedia Content Analysis. Center for Democracy & Technology.

This report is licensed under a Creative Commons Attribution-Sharealike 4.0 International License.
Deep Learning as the Foundation for Predictive Models onvolutional Neural Networks Predictive Models - omputer Vision Models for ontent Analysis Image lassification (Image Level Predictions) Object Detection (Object/Bounding Box Level Predictions) Semantic Segmentation and Instance Segmentation Scene Understanding Object Tracking Action Recognition and 3D Pose Estimation Issues with Live Video Predictive Models - omputer Audition Models for ontent Analysis References 35 37 37 39 40 41 41 42 43 43 45 45 47 50 50 51 51 52 53 54 Do You See What I See? Capabilities and Limits of Automated Multimedia Content Analysis 6 Executive Summary T he ever-increasing amount of user-generated content online has led, in recent years, to an expansion in research and investment in automated content analysis tools. Scrutiny of automated content analysis has accelerated during the COVID-19 pandemic, as social networking services have placed a greater reliance on these tools due to concerns about health risks to their moderation staff from in-person work. At the same time, there are important policy debates around the world about how to improve content moderation while protecting free expression and privacy. In order to advance these debates, we need to understand the potential role of automated content analysis tools. This paper explains the capabilities and limitations of tools for analyzing online multimedia content and highlights the potential risks of using these tools at scale without accounting for their limitations. It focuses on two main categories of tools: matching models and computer prediction models. Matching models include cryptographic and perceptual hashing, which compare user-generated content with existing and known content. Predictive models (including computer vision and computer audition) are machine learning techniques that aim to identify characteristics of new or previously unknown content. These tools are most useful under certain conditions:(cid:9) • Matching models are generally well-suited for analyzing known, existing images, audio, and video, particularly where the same content tends to be circulated repeatedly. ૫(cid:9) Perceptual hashing is almost always better-suited to matching items that feature slight variations, which may occur either naturally or from attempts to circumvent detection. • Predictive models can be well-suited to analyzing content for which ample and comprehensive training data is available. They may also perform well in identifying objective features in multimedia. Examples may include whether multimedia contains clear nudity, blood, or discrete objects. ૫(cid:9) Analysis of static images is much more straightforward than(cid:9) video analysis. ૫(cid:9) Analysis of audio often involves a two-step process of transcription followed by analysis of the transcribed text.(cid:9) Do You See What I See? Capabilities and Limits of Automated Multimedia Content Analysis CDT Research Executive Summary 7 Even in these scenarios, automated multimedia content analysis tools have many limitations. And those limitations become even more evident when the tools are used in more challenging settings. Any applications of these tools should consider at least five potential limitations:(cid:9) 1. Robustness State-of-the-art automated analysis tools that perform well in controlled settings struggle to analyze new, previously unseen types of multimedia. Automated models are repeatedly shown to fail in situations they have never encountered in their design or training. 
Robustness of the tools underlying automated content analysis—or the ability to not be fooled by minor distortions in data—is a constant and unsolved problem. Some challenges for automated analysis are due to natural occurrences (such as a photograph taken at a slightly different angle from a reference photo). But in a social media analysis setting, many challenges are deliberately caused by efforts to slip past detection. These can include anything from watermarks, to subtle rotations or blurs, to sophisticated methods such as deepfakes which create synthetic, realistic-seeming videos to harass or spread disinformation. Machine learning models struggle with these cases because circumvention efforts are constantly evolving, and models may be over-optimized for the examples with which they are created or trained. They may not generalize performance well to novel data. This is akin to memorizing answers to specific questions before a test without actually understanding the underlying concepts. 2. Data Quality Decisions based on automated content analysis risk amplifying biases present in the real world. Machine learning algorithms rely on enormous amounts of training data, which can include large databases of photos, audio, and videos. It is well documented that datasets are susceptible to both intended and unintended biases. How specific concepts are represented in images, videos, and audio may be prone to biases on the basis of race, gender, culture, ability, and more. Multimedia sampled randomly from real-world data can likewise propagate real-world biases. For example, existing news coverage of “terrorist propaganda” often perpetuates racial and religious biases. This can lead to problematic asymmetries as to what automated models identify as “terrorist” images. While some methods exist for attempting to mitigate these biases at the machine learning level, they are far from sufficient. Moreover, efforts to “clean” datasets to address some kinds of risks can actually introduce other forms of bias into the training data. Do You See What I See? Capabilities and Limits of Automated Multimedia Content Analysis 8 3. Lack of Context Automated tools perform poorly when tasked with decisions requiring appreciation of context. While some types of content analysis may be relatively straightforward, the task of understanding user-generated content is typically rife with ambiguity and subjective judgment calls. Certain types of content are easier to classify without context—i.e. there may be wider consensus on what constitutes gore, violence, and nudity versus what is sexually suggestive or hateful. And even then, for instance, artistic representations and commentary may contain nudity or violence but be permitted on a given service when depicted in those contexts. The same content shared by one person in a particular setting, such as photos of baked goods, may have entirely different implications in another where those baked goods are a photo selling illicit drugs. Machines are ill-suited to make contextual assessments or apply the nuanced ethical standards that may be necessary for any given decision. 4. Measurability Generalized claims of accuracy typically do not represent the actual multitude of metrics for model performance. Real-world impacts of automated analysis decisions may be difficult or impossible to measure without knowing all the content a system fails to properly analyze. 
For this and other reasons, metrics that convey reliability in the content analysis space, such as “99.9% accuracy,” are typically practically meaningless. For example, some forms of harmful content, such as terrorist propaganda, can comprise a very small percentage of multimedia content. An algorithm that merely labels every piece of content “not extreme” could technically be “accurate” at least 99.9% of the time. But it would be right for entirely the wrong reasons. Moreover, even if a model predicted the right result 999 out of 1000 times, the one wrong result might have extremely harmful impacts at a scale of millions or billions of pieces of content. Metrics of positive model performance may also be self-selective. They may result from optimization to a specific dataset that is not generalizable to real-world problems. Better measures than “accuracy” are metrics such as precision and recall, which capture false negative and false positive rates. 5. Explainability It is dificult to understand the steps automated tools take in reaching conclusions, although there is no “one-size-fits-all” approach to explainability. State-of-the-art machine learning tools, by default, cannot be “opened up” to get a plain-spoken explanation of why they reached a decision they did. These tools utilize large neural networks which may have up to millions or billions of interrelated parameters involved in learning and producing outputs. While the inputs and outputs of these systems may be understood by humans, comprehending the intermediate steps, including how an automated analysis system makes decisions or weighs various features, is a daunting technical task, and these intermediate steps typically do not translate into the kinds of judgments a human would make. Research efforts are being made to promote explainability, the Do You See What I See? Capabilities and Limits of Automated Multimedia Content Analysis CDT Research Executive Summary 9 ability to map the operations of machine judgment onto concepts that can be understood by humans. Explainability has important implications for developing trust in these systems and for preventing disparate impacts across various groups, as well as identifying opportunities for redress. At the same time, explainability may vary depending on whether what needs to be known involves the factors in a singular decision, or the structural characteristics of a network as a whole. While there are many important and useful advances being made in the capabilities of machine learning techniques to analyze content, policymakers, technology companies, journalists, advocates, and other stakeholders need to understand the limitations of these tools. A failure to account for these limitations in the design and implementation of these techniques will lead to detrimental impacts on the rights of people affected by automated analysis and decision making. For example, a tool with limited robustness can be circumvented and fail to identify abusive content. Poor data quality can lead to machine learning models that perpetuate or even exacerbate existing biases in society, and can yield outputs with a disparate impact across different demographics. Insufficient understanding of context can lead to overbroad limits on speech and inaccurate labeling of speakers as violent, criminal, and abusive. 
Poor measures of the accuracy of automated techniques can lead to a flawed understanding of their effectiveness and use, which can lead to an over-reliance on automation and inhibit the introduction of necessary safeguards. Finally, limited explainability can restrict the options for remedying both individual errors and systematic issues, which is particularly important where these tools are part of key decision-making systems.(cid:9) Large scale use of the types of automated content analysis tools described in this paper will only amplify their limitations and associated risks. As a result, such tools should seldom be used in isolation; if they are used, it should only be as part of more comprehensive systems that incorporate human review and other opportunities for intervention. Design of such systems requires an accurate understanding of the underlying tools being used and their limitations. Policymakers must also be versed in the limitations of automated analysis tools to avoid promulgating statutes or regulations based on incorrect assumptions about their capabilities. For example, legislators should not pass laws about content moderation that are premised on the ability of automated analysis tools to perform moderation tasks at scale, and automated content filtering should never be required by law. More generally, policies that do not account for the limitations discussed here risk normalizing an uncritical view of the efficacy of these tools. This can undermine important and needed public dialogue about what problems machine learning or “artificial intelligence” can – and cannot – help us solve. ■ Do You See What I See? Capabilities and Limits of Automated Multimedia Content Analysis 10 Introduction T he sheer scale of uploaded user-generated content (UGC) has increased dramatically in recent years, leading to an explosion in the research about and use of automated techniques to analyze and moderate it (Cambridge Consultants, 2019). The COVID-19 pandemic triggered a “massive experiment” in algorithmic content moderation, driven in large part by social distancing requirements that meant the human workforces in charge of content analysis were sent home (Faddoul, 2020; Llansó, 2020b; Matsakis & Martineau, 2020). “It did not go well,” reported Politico months later, observing the shortcomings of automated tools that simultaneously led to dramatically higher numbers of takedowns, while properly detecting far less questionable content. “Nobody appreciated the content moderators until they were gone”(Scott & Kayali, 2020).(cid:9) The Center for Democracy and Technology (CDT) has closely followed the use of automated content analysis tools, both analyzing their potential value and their implications for human rights and freedom of expression. In 2017, CDT explained the limitations of natural language processing (NLP) tools for analyzing the text of social media posts and other online content in order to better help civil society, industry, and policymakers understand the available tools, as well as the strengths and weaknesses of using them (Duarte et al., 2017). In this study, we provide an accessible technical explanation of tools for analyzing multimedia content — images, audio, and video, as well as live streamed video. This study hence focuses on a subset of analysis tools that present unique technical challenges. 
Policymakers worldwide are increasingly calling on social media companies to identify and restrict text, photos, and videos that involve illegal, harmful, or false information (Browning, 2020; Wong, 2020). Many services are voluntarily incorporating automation into their content moderation systems, and government agencies are also exploring the use of automated content analysis. Countries around the world are also proposing legal mandates for companies to filter content or to respond to takedown orders within very short time frames, which apply significant pressure on these companies to employ automation. Understanding these tools, and their capabilities and limitations when used in connection with multimedia, is crucial for stakeholders to Do You See What I See? Capabilities and Limits of Automated Multimedia Content Analysis CDT Research Introduction 11 make informed choices: users engaged with and affected by social media; companies weighing appropriate technologies and safeguards; policymakers determining whether to enact laws and regulations that require, prohibit, or regulate the use of automated analysis tools; and civil society and journalists seeking to understand the implications of automated tools for content analysis. The first part of this paper discusses tools that are used for automated analysis of multimedia content. The second part of the paper discusses five limitations of these tools that policymakers and developers should understand when considering the role these tools may play in the analysis and moderation of user-generated content:(cid:9) 1. Robustness. State-of-the-art automated analysis tools that perform well in controlled settings struggle to analyze new, previously unseen types of multimedia.(cid:9) 2. Data Quality. Decisions based on automated multimedia content analysis risk amplifying biases present in the real world.(cid:9) 3. Lack of Context. Automated tools perform poorly when tasked with decisions requiring judgment or appreciation of context. 4. Measurability. Generalized claims of accuracy typically do not represent the actual multitude of metrics for model performance. 5. Explainability. It is difficult to understand the steps automated tools take in reaching conclusions, although there is no “one-size-fits-all” approach to explainability. This paper concludes with a discussion of the implications and risks of these limitations for relevant stakeholders, including civil society, industry, and policymakers. ■ Countries around the world are also proposing legal mandates for companies to filter content or to respond to takedown orders within very short time frames, which apply significant pressure on these companies to employ automation. Understanding these tools, and their capabilities and limitations when used in connection with multimedia, is crucial for stakeholders to make informed choices. - Do You See What I See? Capabilities and Limits of Automated Multimedia Content Analysis 12 I. Tools for Automated Multimedia Content Analysis C ontent analysis requires perception, recognition, and judgment. But human visual and auditory perception has had hundreds of thousands of years to evolve, and involves judgments that are the result of years of education and socializing. Imagine that you were asked to identify whether a picture contained a dog, but you had never seen a dog, or any animal, before in your life. How would you go about this task? Perhaps you are shown photos of dogs and learn to recognize them. 
You could recognize when you were shown the same photo twice. But this does not help you evaluate new photos. You could learn that an animal that stands on four legs and has a tail is a dog. But this rule will not help you get the right answer if you are shown a photo of a horse. You may end up developing a very clear understanding of what a "dog" is, but unless you had additional specific training, you may not be able to differentiate labradors from golden retrievers. We can take for granted the years of learning that have enabled us to make these determinations. These are just some of the challenges that we are asking computers to address through machine learning.

Machine learning (ML) is a process by which a system parses data to extract characteristics, relationships, and correlations from it, without being programmed to do so, and then applies those understandings to analyze other data. The notion of machine learning dates back to 1952, but modern processing power has exponentially increased its potential. "Machine learning is a thing-labeler, essentially" (Kozyrkov, 2018). A subfield of machine learning called deep learning has accelerated progress and been the center of focus in the last several years for methods in vision and audition.

A machine can generally make identifications, or label things, in one of two ways: by matching, or recognizing something as identical or sufficiently similar to something it has seen before; or by prediction, recognizing the nature of something based on the machine's prior learning. The latter category gives rise to the fields of computer vision and computer audition, which respectively study how computers might achieve high-level understanding from images or videos, and audio. In the following sections, this paper will explore these concepts in depth. Gorwa et al. (2020) point out that some technologies combine matching and predictive techniques, such as facial recognition technology that identifies matches of previously identified faces and also attempts to learn characteristics of faces in general.
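To make the "thing-labeler" idea concrete, the following minimal sketch shows the basic supervised-learning pattern of fitting a model to labeled examples and then asking it to label new data. It uses Python with the scikit-learn library; the animal features, labels, and values are invented purely for illustration.

```python
# A minimal sketch of machine learning as a "thing-labeler": the model is
# never given explicit rules; it infers them from labeled examples.
from sklearn.tree import DecisionTreeClassifier

# Invented training data: [height_cm, weight_kg] for two kinds of animals.
X_train = [[25, 4], [30, 6], [28, 5],      # label 0: "cat"
           [60, 25], [55, 22], [70, 30]]   # label 1: "dog"
y_train = [0, 0, 0, 1, 1, 1]

model = DecisionTreeClassifier().fit(X_train, y_train)

# The fitted model labels a new, previously unseen example.
print(model.predict([[65, 28]]))  # -> [1], i.e. "dog"
```

As the dog-recognition thought experiment above suggests, a model trained this way can only generalize from whatever regularities its training examples happen to contain; shown a horse, it would still answer "cat" or "dog."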
A. Matching Models for Multimedia Content Analysis

The simplest approach to content analysis is matching. A matching algorithm seeks to answer: "Have I seen this image, audio, or video before?" It enables an operator to compare a piece of UGC to a pre-existing library of content. Matching could hypothetically take place by comparing every pixel or bit of an image or video to another in order to find a perfect match, or listening to every fraction of a second of audio. But this would be computationally intensive, unfeasible, and easy to circumvent. Instead, content can be reduced to simpler representations to make comparison more efficient and flexible.

One way to do this is by hashing, which creates digital fingerprints of content in order to produce a significantly more compact and manageable object for comparison, while maintaining semantic accuracy. Similar to the way a person can be identified using a physical fingerprint, so too can pieces of digital content be identified by their corresponding digital fingerprints (Singh, 2019). Hash functions can be cryptographic or perceptual.

Cryptographic hashing uses a cryptographic function to generate a random hash fingerprint, which is extremely sensitive to change. For example, changing the shade of one pixel in a high-resolution photo would produce a distinct cryptographic hash. This can be highly effective in authenticating known content without alterations. These same cryptographic functions are what encode encrypted messages. There, they also serve as a guarantee that not a single bit (or letter or word) in the message has been changed.

Perceptual hashing, on the other hand, seeks to determine not whether two pieces of content are identical, but whether they are "alike enough"—i.e. practically identical. Perceptual hashing methods utilize algorithms to better comprehend the nature of a piece of content so that minor changes cannot fool the system. The operator of a system that uses perceptual hashes can set a threshold to determine what degree of difference between hashes is allowed to still consider them matches. Some specific implementations of perceptual hash algorithms include the detection of child sexual abuse material (CSAM), terrorist propaganda, and copyrighted content (see the Appendix for a more detailed description of these techniques and how they are applied in content analysis).

One important property of hashing is that it requires knowing in advance the content to be identified. To use human fingerprints to identify humans, one needs a database of the fingerprints of known individuals to reference against. Similarly, to use image hashes to detect unwanted content, operators must have a database of reference content to be matched against.

There are at least two ways to assess the effectiveness of an algorithmic technique like a hashing algorithm:

• How robust is the function to natural or adversarial interference? A method may be able to better resist some forms of manipulations and distortions (including geometric, chromatic, noise, or blur) than others.
• How discriminative is the function? Discrimination represents the ability to correctly differentiate between distinct pieces of content. To be usable, an algorithm needs to avoid reporting matches of distinct content—i.e. avoid false positives (Martínez et al., 2018).

Highly robust models have a low rate of false negatives, and highly discriminative models have a low rate of false positives. Related concepts are a model's positive predictive value, known as precision, and true positive rate, known as recall (Drmic et al., 2017). Precision is the ratio of true positive results to all positives predicted by the model (including false positives). It is an important measure to use in instances where the cost of a false positive is high. Recall refers to the ratio of true positives to all actual positives in the sample (including false negatives). The recall of a model is important in instances where the cost of a false negative is high. Thus, depending on the context, either precision or recall may be the more relevant measure.
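A minimal sketch in Python makes these two metrics concrete; the counts below are invented for illustration.

```python
# Precision and recall computed from raw counts of a matching system's output.
def precision(tp: int, fp: int) -> float:
    # Of everything the system flagged as a match, how much truly was?
    return tp / (tp + fp)

def recall(tp: int, fn: int) -> float:
    # Of everything that truly was a match, how much did the system flag?
    return tp / (tp + fn)

# Invented counts: the system flags 120 items, of which 90 are true matches
# (TP) and 30 are false alarms (FP); it also misses 10 real matches (FN).
tp, fp, fn = 90, 30, 10
print(f"precision = {precision(tp, fp):.2f}")  # 0.75: cost of false positives
print(f"recall    = {recall(tp, fn):.2f}")     # 0.90: cost of false negatives
```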
STRENGTHS, WEAKNESSES, AND CONSIDERATIONS OF MATCHING MODELS

The most significant characteristic of matching for multimedia content analysis is that it will only consider matches to content that is already contained in a reference database. Thus, it cannot be used for new content—beyond minor manipulations of existing content—that has not been provided for in the database. One problem with this approach arises where content in the reference database is objectionable or even illicit, which means that maintaining those references may raise ethical or legal concerns. Matching technologies are most effective for categories of content in which already-known multimedia tends to be shared repeatedly. Depending on how matching algorithms are deployed, they may be designed in a way that prioritizes the minimization of false positives (Du et al., 2020). Comparing hashes also may not require significant computational resources (Engstrom & Feamster, 2017). Further, notwithstanding concerns regarding transparency of hash databases, the decision-making process of matching algorithms is relatively straightforward and explainable.

PhotoDNA, developed by Microsoft, is presently the most widespread perceptual matching method for countering child sexual abuse material (CSAM). Some of its main advantages are its low false-positive detection rate, its relatively low computational costs, and its resistance against reverse-engineering attacks to identify individuals in images (Nadeem et al., 2019; Pereira et al., 2020). Various other perceptual hashing methods have shown adaptability to recognizing multimedia in varied settings. For instance, Echoprint (an open-source fingerprinting library utilized by Echo Nest, a subsidiary of Spotify) is flexible enough to identify remixes, live versions, and sometimes covers of music (Brinkman et al., 2016). However, changes and "noise" of various forms can still present challenges for state-of-the-art algorithms. Google utilizes machine learning in its Now Playing audio fingerprint, which can be used to identify ambient music, though its effectiveness is dependent on the type of noise in the background environment (Agüera y Arcas et al., 2017; Lyon, 2018). Facebook's open-source PDQ and TMK+PDQF algorithms, for image and video hashing respectively, both perform strongly against certain types of changes, like minor or imperceptible changes in content, but struggle with more deliberate or major changes like the addition of watermarks (Dalins et al., 2019).

An important consideration in utilizing matching-based systems is their general inability to assess context. The same pieces of content that are objectionable in one context may have significant expressive and public interest value in a different setting, such as in art, academic or journalistic work, or human rights commentary. In the area of copyright, "fair use" is a recognized allowance for the dissemination of copyrighted content. However, YouTube's Content ID, which allows rights holders to create fingerprints of their multimedia content, has generated controversy among legal scholars for the tension it creates with safe harbors in copyright law such as fair use. "The inability to recognize fair use is an issue inherent in automated filters like the Content ID system" (Solomon, 2015, p. 21; see also Bartholomew (2014) and Trendacosta (2020)). These systems, known as automated identification and remediation systems (AIRS), are challenged by matching technology's failure to ascertain context (Zernay & Hagemann, 2017).
Similarly, using hashes to identify and block content that may, in some contexts, be deemed "terrorist propaganda" can fail to account for situations where such content is being shared to document human rights abuses or to provide journalistic coverage (Human Rights Watch, 2020).

The way that hash databases are designed, maintained, and implemented can also amplify (or mitigate) the risks of hash-based analysis systems. For example, a hash database might be a shared resource to which multiple services have access, such as the database of child sexual abuse material maintained by the National Center for Missing and Exploited Children (NCMEC) or the database for terrorist propaganda content maintained by the Global Internet Forum to Counter Terrorism (GIFCT). Facebook also maintains the membership-based ThreatExchange API, a clearinghouse for security-related information, and shares obtained hashes with GIFCT (Pham, 2019).

In the case of the GIFCT, each participating company may individually nominate content for inclusion in the database. Without clear parameters, the standards applied by each company, and thus to individual pieces of hashed content, may vary widely (Llansó, 2016). As CDT has highlighted previously, the practice of sharing definitions of prohibited content carries multiple risks, including promoting cross-platform censorship and imposing de facto standards of speech online (Llansó, 2020c; see also Douek (2020) and Radsch (2020)).

Hash databases may also present case-specific security risks. For instance, they can be susceptible to hash collision attacks or hash poisoning, wherein an attacker reverse engineers an image from a given hash and deliberately introduces content to generate false positives (Dolhansky & Ferrer, 2020). If that content becomes included in the underlying database, it could allow outsiders to blacklist content for a variety of malicious objectives. The susceptibility of a hash function to such attacks depends in part on its predictability. In practice, this type of attack may generally require some knowledge of or access to the hash function.

KEY TAKEAWAYS REGARDING MATCHING MODELS

• Matching models are well-suited for analyzing known, existing images, audio, and video.
• There are two main types of hashing methods, cryptographic and perceptual. Of the two, perceptual hashing is almost always better-suited to content analysis applications, which generally involve matching items that feature slight variations that may occur either naturally or from attempts to circumvent detection.
• Two metrics for measuring perceptual hashing are robustness and discrimination. Robustness refers to the ability of an algorithm to ignore changes that are perceptually irrelevant, i.e. minor changes that do not impact what a human would ultimately see. Discrimination refers to the ability to distinguish images or other content that is actually different.
• Key existing use cases for matching-based analysis are instances where the content is known and tends to be circulated repeatedly. These include child exploitation materials, terrorist propaganda, and copyrighted multimedia.
• Matching models require the maintenance of a database of images, audio, or video to which content can be compared. Where the material reflected in a hash database is objectionable or even illicit, maintaining those reference files may raise ethical or legal concerns; without those reference files, however, it is impossible to verify the contents of the hash database.

B. Predictive Models for Multimedia Content Analysis

Unlike matching algorithms, which attempt to authenticate a piece of content by assessing its similarity to an existing and known piece of content, predictive models aim to identify characteristics of a new and previously unseen piece of content. To do this, a model must be able to generalize the attributes of whatever it seeks to classify. Prediction is a problem tackled in the areas of computer vision and computer audition. (See the Appendix to this report for a more detailed discussion of each of these techniques.)

Computer vision refers to techniques used to address a range of tasks in content analysis, including analyzing shapes, textures, colors, spatial arrangement, and static and temporal relationships. Examples of computer vision tools include:

Classifiers. These are algorithms that predict what an image contains. Image classifiers are one of the simpler computer vision tools, and they are among the most common in the multimedia content analysis space (Batra, 2019). A very basic example of a classifier would be one that predicts whether an image contains a cat or a dog. While classifiers may perform well in many domains, they are susceptible to external forms of visual interference and distortions. Classifiers may also be fooled by images that look very similar to one another but represent different objects, such as chihuahuas and blueberry muffins, or sheepdogs and mops.

Figure 1. Visually similar images: chihuahuas and blueberry muffins, or sheepdogs and mops. Source: https://twitter.com/teenybiscuit/status/707670947830968320 (Accessed March 2021).

Object detectors. These tools go beyond classifiers by localizing one or more objects in an image and classifying those objects. The output of a detector is typically a location, denoted by a "bounding box," and the class of the object. Importantly, detectors can come in many forms, and often feature trade-offs depending on the desire for speed (e.g., measured as frames per second, or FPS) or accuracy (often calculated as a form of precision or, as described above, the proportion of all positive predictions that are true positive). For example, the use of lower resolution images can result in higher FPS rates, but lower average precision (Huang et al., 2017).
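A minimal sketch of the first of the tools above, an image classifier, is shown below. It uses Python with PyTorch and torchvision, and assumes torchvision 0.13 or later; the image path is a placeholder, and the model shown is a generic ImageNet classifier rather than any platform's moderation system.

```python
# A minimal sketch of image classification with a pretrained model.
import torch
from torchvision import models, transforms
from PIL import Image

# Standard ImageNet preprocessing: resize, crop, convert, normalize.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

model = models.resnet50(weights="DEFAULT")   # weights learned from ImageNet
model.eval()

img = Image.open("example.jpg").convert("RGB")   # placeholder file name
batch = preprocess(img).unsqueeze(0)             # shape: [1, 3, 224, 224]

with torch.no_grad():
    probs = torch.softmax(model(batch), dim=1)   # scores over 1,000 classes

top_prob, top_class = probs.max(dim=1)
print(f"predicted class index {top_class.item()}, "
      f"confidence {top_prob.item():.2f}")
```

Note that the output is only a label and a confidence score; everything else discussed in this paper (context, intent, provenance) is outside what such a model computes.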
Semantic Segmentation and Instance Segmentation. Segmentation tasks are important for content analysis because they are the building blocks for parsing relationships between objects in images or video. Semantic segmentation seeks to be more granular than object detection, by assigning a class label to each individual pixel in an image. Instance segmentation seeks to be even more precise and identify individual object boundaries. (Note that in instance segmentation, two adjacent dogs are differentiated. In semantic segmentation, these would be the same color and not differentiated.)

Figure 2. Comparing segmentation, classification, and detection. Source: http://cs231n.stanford.edu/slides/2017/cs231n_2017_lecture11.pdf#page=53 (Accessed May 2021).

Scene understanding. These tools seek to comprehend a scene by considering the geometric and semantic relationships of its contents. Scene understanding algorithms have important applications in content analysis as they piece together the larger correlations between individual objects. For example, an image containing "fire" might be a campfire, or it could be a natural disaster or violent scene. Scene understanding is a compound task that involves a number of the above tasks.

Object tracking. This involves following the location of a given object over time in either pre-recorded video or a live stream. Video understanding is a significantly more difficult task than identification of objects in static images because it involves a temporal dimension (i.e., the order of the images matters).

Computer audition focuses on audio content. Some of the techniques used in computer vision are relevant to computer audition, but they are typically conducted on spectrograms (graphic frequency depictions) of audio, rather than on images. Hence, tasks of audio classification are often analogous to their image counterparts. For example, computational auditory scene recognition (CASR) involves predicting the environment in which an audio signal is received. Note that where audio involves humans speaking, speech will often first be transcribed to a text form and integrated with natural language processing (NLP) methods. NLP methods and their strengths and limitations are covered in detail in CDT's previous Mixed Messages report (Duarte et al., 2017).

Modern research in these fields involves the study of deep learning models, and specifically convolutional neural networks (CNNs). These models utilize artificial intelligence that is "trained" to learn patterns based on large training datasets (also see the Appendix for a more detailed description of these models and techniques). This implies that the efficacy of these tools is dependent in part on the quality and size of the data sets used for training the model. For example, some forms of content, such as nudity or gore, may have exponentially more publicly available media to train on than "terrorist" content.
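The spectrogram step that makes image-style tools applicable to audio can be sketched in a few lines of Python, using NumPy and SciPy; the "recording" here is a synthetic tone plus noise rather than real audio.

```python
# A minimal sketch of turning audio into a spectrogram: a 2-D time-frequency
# "image" that CNN-style models can classify much as they classify photos.
import numpy as np
from scipy import signal

sr = 16_000                                   # sample rate in Hz
t = np.linspace(0, 2.0, 2 * sr, endpoint=False)
# Synthetic stand-in for a recording: a 440 Hz tone plus background noise.
audio = np.sin(2 * np.pi * 440 * t) + 0.3 * np.random.randn(t.size)

freqs, times, spec = signal.spectrogram(audio, fs=sr, nperseg=512)
log_spec = 10 * np.log10(spec + 1e-10)        # log scale, as in dB spectrograms

# The result is an image-like array of frequency bins by time frames;
# an audio classifier would take this array as its input.
print(log_spec.shape)
```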
STRENGTHS, WEAKNESSES, AND CONSIDERATIONS OF PREDICTIVE MODELS

The results of individual predictive models are tailored to a variety of specific contexts, and are constantly evolving. This often makes it impractical to make specific claims regarding specific models, especially without adequate insights as to how those models operate and how they arrive at a given output, prediction, or decision. Developing these insights addresses the problem of explainability, which is detailed later in this paper. However, several claims can be made about predictive models generally.

Predictive models typically perform better at the "building block" tasks such as classification and object detection. Simpler and more objective questions, such as determining whether an image contains full nudity, blood, or a specific object like a weapon, may see fairly strong performance. Conversely, the more complex a computer perception problem, the greater the challenge it presents. Action recognition, such as detecting violence or first-person egocentric action (i.e. GoPro footage), is difficult to deploy in a widespread manner (Kazakos et al., 2019; Perez et al., 2019). A broad classification of various tasks might look as follows:

Table 1. Levels of difficulty associated with particular tasks.

Simpler:
• Identifying whether a given image contains objects (i.e. contraband, symbols);
• Identifying objective qualities (blood, clear nudity);
• Transcribing speech that is clearly spoken in a common language.

More Difficult:
• Differentiating objects that are very similar;
• Overcoming attempts to circumvent detection;
• Understanding speech in a low-quality recording or with background noise;
• Identifying what is happening in a scene.

Very Difficult:
• Complex action recognition;
• Live video analysis;
• Understanding subjective context.

Perhaps the biggest general weakness of predictive models, as with the matching models covered earlier, is that they struggle to be robust, or able to predictably handle changes in inputs that occur either naturally or as a result of circumvention efforts. Object detection and tracking tasks struggle with occlusion, or partial or complete covering of one object by another object (Asad et al., 2020). Researchers have tried to develop various methods to improve robustness, though addressing robustness is a bigger problem than simply solving the puzzle embedded in any particular dataset (Cavey et al., 2020; Fawzi et al., 2018). Solving a single dataset is akin to memorizing the specific answers for an exam but failing to understand and apply the actual concepts learned. The challenge, as one researcher concluded, is that "the human vision system is robust in ways that existing computer vision systems are not" (Hendrycks & Dietterich, 2019, p. 1).

For example, computer vision models may not use the same strategy to recognize objects as humans do. Research suggests that humans place more emphasis on an object's shape when it comes to classification, while convolutional neural networks (CNNs) are more likely to rely on texture and color, and that CNNs that learn classification based on shape-based representation may be more robust (Geirhos et al., 2018). These results were echoed by researchers at NVIDIA, who observed, more generally, that gaps exist between how machines attempt to grasp patterns and how humans recognize concepts (Nie et al., 2020). However, these differences are not always obvious, and research between Stanford and Google concluded that "people who build and interact with tools for computer vision, especially those without extensive training in machine learning, often have a mental model of computer vision models as similar to human vision.
Our findings contribute to a body of work showing that this view is actually far from correct" (Hendrycks et al., 2020; Hermann et al., 2019, p. 9). To help provide more objective measures of robustness, Hendrycks presented several benchmarks, adding corruptions and perturbations to the popular ImageNet database (see https://github.com/hendrycks/robustness, accessed March 2021).

KEY TAKEAWAYS REGARDING PREDICTIVE MODELS

• Predictive models for multimedia analysis rely on artificial intelligence and learn to recognize patterns in the underlying images, video, or audio that they are trained on. As a result, these models aim to identify characteristics and features of content.
• Predictive models may perform well in identifying objective features in multimedia. Examples may include whether multimedia contains clear nudity, blood, or discrete objects.
• At the same time, predictive models face challenges in considering context. Attempts to capture context are under development. Models are highly dependent on the quality and amount of the data they are trained on.
• Some predictive analysis tasks are considerably more difficult than others. Analysis of static images is much more straightforward than video analysis. Automated real-time video content analysis is highly challenging, both computationally and conceptually. Asking a computer to recognize if an image contains full nudity is thus completely different from asking whether a video depicts a hateful demonstration.
• For different computer perception tasks, various techniques may be available and appropriate depending on specific demands. Some techniques come with tradeoffs—i.e. expecting faster results may come at the expense of accuracy. ■

II. Five Limitations of Tools Used for Automated Content Analysis of Multimedia

While automated tools present some promising use cases when implemented with proper safeguards, their limitations must also be considered in any potential application. This is particularly important in use cases that may have widespread impacts on freedom of expression or user safety when these tools are deployed at scale. We argue that policymakers and developers should understand these limitations when considering what role these tools may play in the analysis of user-generated content.

A. Robustness

Automated content analysis tools struggle to handle changes in inputs that occur either naturally or as a result of circumvention efforts—in other words, they are not robust. "There are no fixes for the fundamental brittleness of deep neural networks," argued one Google AI engineer in 2019 (Heaven, 2019, p. 164). Indeed, the fragility of AI-based prediction systems is well-accepted in the machine learning space.
The previous sections of this report have examined the ways in which both matching and predictive models struggle with robustness.

CIRCUMVENTION EFFORTS

Robustness against circumvention efforts is a recurring problem for multimedia analysis tools. Circumvention efforts do not require inside access to or knowledge of a model to be successful (attacks without such access are known as "black-box" adversarial attacks). For instance, one 2018 study demonstrated this by perturbing images to convince the Google Cloud Vision API that skiers in the images were dogs (Ilyas et al., 2018). Some of these specific issues in Google's Cloud Vision API have since been mitigated, but the underlying issues in robustness leading up to them remain. Similarly, "adversarial patches," which can be scrambling patterns on clothing or handheld items (the person on the right in Figure 4), have been demonstrated to fool computer vision-based person detection, in this case automated surveillance cameras.

Figure 3. Perturbed images implying skiers as dogs. Source: Ilyas, A., et al. (2018).

Figure 4. Demonstration of "adversarial patches." Source: Thys, S., et al. (2019).

Similar results have been achieved in safety-critical settings, where researchers found that slight real-world perturbations to stop signs could trick a computer into thinking the signs said "yield" or, worse, indicated speed limits of 45, 70, or 100 (Eykholt et al., 2018). While many of those specific issues have been mitigated, they reveal the stakes of failure (Lu et al., 2017).

Some research has asserted that generating synthetic data alone is not enough to improve robustness. Rather, training on diverse real-world data is what really makes a difference in a model's results, because many real-world examples are "adversarial" by nature in ways that may be difficult to duplicate synthetically (Taori et al., 2020). In other words, very tricky real-world examples can easily degrade classifier performance (Hendrycks et al., 2021).

Figure 5. Examples of naturally-occurring adversarial images. Source: Hendrycks et al. (2021).
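Many deliberate perturbation attacks follow the same gradient-based recipe. The sketch below shows one classic version, the fast gradient sign method, as a generic PyTorch illustration; it is not the specific attack used in the studies cited above, and "model" stands in for any differentiable image classifier.

```python
# A minimal sketch of the fast gradient sign method (FGSM): nudge each pixel
# a tiny step in whichever direction most increases the model's loss.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, true_label, epsilon=0.03):
    """Return a copy of `image` perturbed to degrade the model's prediction."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()                            # gradients w.r.t. the pixels
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()    # keep pixels in a valid range

# Usage sketch: a perturbation of +/- 0.03 per pixel is nearly invisible to a
# person, yet can flip a non-robust classifier's output entirely.
# adv_batch = fgsm_perturb(model, image_batch, labels)
```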
EVOLVING THREATS SUCH AS DEEPFAKES

Circumvention efforts are constantly evolving, and some, such as deepfakes, present advanced challenges for automated systems. Even as tools are modified to address problems, such as the example in Figure 3 of Google's Cloud Vision API improving to address the perturbation of images, circumvention efforts are similarly evolving and becoming more sophisticated. One example is the case of deepfakes – synthetic manipulations of identities and expressions that may make it appear as if an individual's face is on another's body, speaking or doing things that they never did. For example, in one video, former U.S. President Barack Obama appeared to say numerous expletives in a public address (Mack, 2018). In fact, producer Jordan Peele was projecting his own words onto an AI-animated version of President Obama to warn of the dangers of deepfakes. There are some legitimate use cases of the technologies underlying deepfakes, in fields like movie production, game design, or improving the quality of real-time video streams (Vincent, 2020). But when weaponized, they have the potential to cause serious reputational harm and spread disinformation. Deepfakes have been used, for instance, to project the faces of female celebrities onto pornographic videos, or to perpetuate gender-based violence against non-celebrities (Romano, 2018).

Deepfake detection is presently a major industry priority and challenge. Efforts such as the Deepfake Detection Challenge (DFDC) dataset were created within industry to encourage research into mitigation methods. Some proposals involve extracting visual and temporal features from faces (Montserrat et al., 2020). Another promising method, proposed by the developer of perceptual hashing methods like PhotoDNA and eGlyph, offers a biometric-based forensic technique for detecting face-swap deepfakes. The approach utilizes CNNs to learn facial and head cues, recognizing that individuals often communicate not only with their faces but also with head movements that can be learned by computers (Agarwal et al., 2020). Responses like these continue to try to stay a step ahead of adversarial methods that are constantly adapting.

Figure 6. The expressions on the left of former President Barack Obama are actually projections of expressions by Jordan Peele. Source: Mack, D. (2018).

B. Data Quality

Decisions based on automated multimedia content analysis risk amplifying biases present in the real world; this is often due to poor data quality. Training deep neural networks involves exposing them to massive amounts of data. However, numerous steps must be taken to ensure that training data does not serve to amplify biases. Visual and auditory components of multimedia create even more opportunities for bias than those present in text.

DATASETS PROPAGATE UNDERLYING BIASES

It is well-accepted that datasets have real biases. As one expert observed, datasets are "a look in the mirror . . . they reflect the inequalities of our society" (Condliffe, 2019). Biases manifest in multimedia analysis tools. Amazon Rekognition's facial recognition technology, in a test conducted by the ACLU, mistakenly matched 28 members of Congress with a mugshot database and identified them as having been arrested for crimes (Snow, 2018). More troubling, the false matches disproportionately included Congressional members of color, including the late civil rights legend Rep. John Lewis (D-Ga.). Twitter's AI-generated photo previews faced scrutiny when they appeared to favor white faces over those of persons of color (Lyons, 2020). And reporting by the Washington Post found that "smart speakers" such as Alexa or Google Assistant showed significant disparities in understanding people with different regional accents in the United States (Harwell, 2018). AI systems do not learn biases in a vacuum, and issues are quite often traced back to deficiencies in training data. Once biases are encoded, they are often hard to detect.
Researchers have proposed classifier benchmarks that are demographically and phenotypically balanced, as well as deliberate metrics for subgroups such as "darker females," "darker males," "lighter females," and "lighter males" (Buolamwini & Gebru, 2018). The causes of bias in data are numerous; as Microsoft has noted, "Most real datasets have hidden biases" (Microsoft, n.d.). Though beyond the scope of this paper, we note the current research into novel bias mitigation methods, such as rethinking crowdsourced microtasking services like Mechanical Turk (MTurk), which are often relied upon to provide data labeling (Barbosa & Chen, 2019).

But even such efforts may adopt binary views of race or gender, and contribute to trans erasure (West et al., 2019). Efforts like the Inclusive Images dataset attempt to correct for "amerocentric and eurocentric" biases present in popular datasets like ImageNet and Open Images by including more balanced representation of images from countries in Africa, Asia, and South America (Shankar et al., 2017). (For more details on the types of biases found in ImageNet, see Parbhu and Birhane (2020) and Steed and Caliskan (2021).) The nature of photography itself may mean prediction tools will always struggle with racial bias, Zoé Samudzi argues, due to "color films' historical non-registry of black people" (Samudzi, 2019). Potential biases in data sets are not just limited to skin color but extend to virtually any other characteristic. For instance, a content analysis tool that is only trained on western weddings may associate "wedding dress" with a western presentation of what a bride looks like (i.e., a white formal gown), instead of non-western wedding dresses (Wiggers, 2018).

INSUFFICIENT DATA AVAILABLE IN THE REAL WORLD

Some categories of content do not have sufficient data available in the real world and present training data problems by their very nature. There is much more multimedia content of nudity than of gun-based violence because "thankfully, we don't have a lot of examples of real people shooting other people," said Yann LeCun, Facebook's chief AI scientist (Kahn, 2019). So-called "terrorist" content presents another challenge. Despite prominent coverage in the news media, from a data science perspective there simply is not much publicly available multimedia from designated terrorist organizations to train models on (though there are recent emerging efforts to attempt to mitigate this). One such effort, the Terrorist Content Analytics Platform (TCAP), is a project by a key partner of GIFCT to produce a "transparency-centred terrorist content tool" that seeks, as a stated goal, to include civil society in all stages of the process (Tech Against Terrorism, 2020; https://www.terrorismanalytics.org/). This scarcity is a serious and recurring problem in data science known as class imbalance, where some classes have a significantly higher number of examples in the training set than others (Kushwaha, 2019). These issues can exist at a global scale, where lack of data may reflect variances in digital footprints around the world, or they can exist locally, based on relative access to technology within communities.
Yuille and Liu argue that data in the real world is combinatorially large, and it is difficult for any dataset, no matter how large, to truly represent the complexity of real life (Yuille & Liu, 2019).

DATA-LEVEL BIAS MITIGATION ONLY GOES SO FAR

Some strategies to address potential bias in datasets (for example, because of class imbalance) attempt to oversample, replicating and synthesizing new samples from an underrepresented class of data types (Buda et al., 2018). Google released a framework called "MinDiff" to attempt to mitigate bias by optimizing a sample of data based on certain fairness metrics (Prost et al., 2019).

Figure 7. An illustration of a GAN. Source: https://developers.google.com/machine-learning/gan/gan_structure.

Tools known as generative adversarial networks (GANs – see the Appendix for a more detailed explanation) may be used to generate synthetic data, or data that can mimic real world data based on certain parameters (Goodfellow et al., 2014). GANs consist of two competing neural networks, akin to a boxing match between a generator and a discriminator that are both improving at their respective tasks by checking one another. An example of a GAN would involve a generator network creating fake images, which are then fed into the discriminator network, which must determine, based on its training on real images, whether or not the images it is fed are fake. Those labels are then compared to the "ground truth." The discriminator learns from its mistakes, while the generator adapts based on what successfully fooled the discriminator, to present more sophisticated fakes. Two respective feedback loops allow both models to improve. The result in this case is a synthetic data set that can be used, for example, to address a prior lack of real world data (Q. Wang et al., 2019) or to improve the accuracy of a predictive model when applied from one context to another (Sankaranarayanan et al., 2018).
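The two-network feedback loop just described can be sketched compactly in PyTorch. The tiny fully-connected networks and two-dimensional "data" below are invented for illustration; real image GANs use far larger convolutional networks.

```python
# A minimal sketch of the GAN feedback loop: a generator and a discriminator
# improve by competing against each other.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))  # generator
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))  # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def real_batch(n=64):
    # Stand-in for real training data: points clustered around (2, 2).
    return torch.randn(n, 2) * 0.5 + 2.0

for step in range(2000):
    # 1) Train the discriminator to tell real samples from generated ones.
    real = real_batch()
    fake = G(torch.randn(64, 8)).detach()      # detach: don't update G here
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 2) Train the generator to produce samples the discriminator calls real.
    fake = G(torch.randn(64, 8))
    g_loss = bce(D(fake), torch.ones(64, 1))   # generator "wins" if D says real
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

# After training, G(noise) yields synthetic samples resembling the real data.
```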
However, training on synthetic data may present risks of overfitting, or conforming too closely to one training set in a way that does not generalize well. This is because the models used to generate the synthetic data are vulnerable to the same limitations described here, including a lack of robustness, a lack of context, and so on. Thus, even actively correcting for biases in the training data may be insufficient. For example, West, Whittaker, and Crawford of the AI Now Institute warn that "[t]he histories of 'race science' are a grim reminder that race and gender classification based on appearance is scientifically flawed and easily abused" (West et al., 2019, p. 3). Similarly, Julia Powles calls attempts to address bias a distraction: "Bias is a social problem, and seeking to solve it within the logic of automation is always going to be inadequate" (Powles, 2018).

C. Lack of Context

Automated tools perform poorly when tasked with decisions requiring judgment or appreciation of cultural, linguistic, social, historical, or other context. These tools often struggle to appreciate the same contexts that humans can readily recognize. Tasks that may seem simple for a human may often be highly context-dependent and thus present significant challenges for algorithmic models. For example, TikTok videos showing an adult kissing a young child could be incredibly harmful, or completely benign (as in the case of a parent tucking a child into bed). Similarly, images or videos of identical activities in summer or winter may be treated differently by a model purely because people may wear fewer clothes in warm weather.

As described earlier, matching models can at best identify two pieces of content (e.g., two images) that are identical (cryptographic hashing) or practically identical (perceptual hashing), but not the context in which either is used. Similarly, prediction models are currently more accurate at image classification and object detection than at more compound tasks, such as scene understanding, where context is relevant. Thus, for example, they may be able to identify nudity, but not make a judgment about whether that nudity is occurring in the context of artistic expression or abuse. In general, the approaches that matching and predictive models currently use to identify patterns differ from how humans recognize concepts, particularly with regard to the importance of context.

Companies are stepping up efforts to incorporate context in analysis tools, but many of these holistic methods are still in their early stages and face many challenges. Facebook is currently researching identification of hateful memes, where images and text that may seem benign in isolation may cause harm when combined in a meme (DrivenData, 2020). Facebook also implements tools called Whole Post Integrity Embeddings (WPIE) to understand content across modalities. It has utilized WPIE to ascertain context to determine when user posts and accompanying photos are attempting to sell illicit drugs (Facebook, 2019). These algorithms attempt to parse whether a "full batch" of "special treats" accompanied by a picture of baked goods refers to Rice Krispies squares or edibles. On Twitch, automated methods may be used to produce highlight reels of noteworthy moments in video game streams. One method proposed jointly considering emotion detection of a streamer's facial expressions, game scene analysis, and audio stream analysis (Ringer & Nicolaou, 2018). Violence detection has seen recent research in weakly-supervised multimodal methods utilizing audio cues (Wu et al., 2020). Some research has proposed models to identify broadcasters of adult content on social live streaming services by combining image recognition with follower characteristics, username, and other features (Lykousas et al., 2018). While these techniques are being researched, they are just scratching the surface of genuine context-appreciation by machines.
D. Measurability

Generalized claims of accuracy typically do not represent the actual multitude of metrics for model performance; there are many ways to approach measurement of the performance of automated systems. However, "accuracy" in isolation is generally an unhelpful metric; in most instances it is meaningless. There are several reasons for this. First, the degree of "accuracy" may be a function of the class imbalance problem, wherein some forms of harmful content are sparse by nature. Therefore, a predictive model that simply says everything is "not terrorist propaganda" will be accurate 99.9% or more of the time, but it would be right for entirely the wrong reasons (the sketch at the end of this section illustrates the point). Second, even if a model predicts the right result 999 out of 1000 times, that one wrong result can have extremely harmful impacts. This is particularly the case when wrong results have high stakes for either freedom of expression or safety. Third, metrics of positive model performance may also be self-selective. Better metrics for measuring predictive models include their precision and recall, as discussed earlier. However, use of any metric, including precision and recall, nonetheless runs into challenges with sparse data.

Another important consideration in content analysis is the sheer scale of content. 99.9% performance may actually be quite bad if the remaining 0.1% means that tens or hundreds of thousands of pieces of user content are falsely flagged or acted upon incorrectly at scale. These scales raise the stakes of user impact and implications for freedom of expression (Spoerri, 2019). Indeed, e-mail service providers consider any false positive rate higher than approximately 0.1% too high in the use case of spam filters, due to possible limitations on speech.

One means of comparing various models is standardized datasets, called benchmarks. For example, computer vision researchers may lack a common data set to test different models for particular forms of robustness. Researchers can thus create benchmarks by taking a commonly used data set and adding a standard set of changes or corruptions, allowing others to test different models for robustness using the modified data set (see for example Mu & Gilmer, 2019). But benchmarks, too, should be scrutinized, and performance on them may highlight specific strengths that do not translate to the real world (see for example Northcutt et al., 2021).
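A minimal sketch in Python makes the accuracy trap concrete; the volumes and rates below are invented, chosen only to mirror the 99.9% example above.

```python
# A minimal sketch of why raw "accuracy" misleads under class imbalance.
# Invented numbers: 1 in 1,000 items is actually "terrorist propaganda."
n_items = 100_000
n_positive = 100                      # truly harmful items
n_negative = n_items - n_positive

# A useless model that labels *everything* "not propaganda":
true_negatives = n_negative          # every benign item counts as "correct"
false_negatives = n_positive         # every harmful item is missed

accuracy = true_negatives / n_items  # 0.999 -> looks excellent
recall = 0 / n_positive              # 0.0   -> catches nothing at all
print(f"accuracy = {accuracy:.3f}, recall = {recall:.1f}")

# And the mirror-image problem of scale: even a model with a 0.1% false
# positive rate wrongly flags roughly 100 benign items per 100,000 uploads.
print(f"items falsely flagged at 0.1% FPR: {0.001 * n_negative:.0f}")
```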
E. Explainability

It is often difficult to understand the steps automated tools take in reaching conclusions; many types of machine learning techniques resist easy explainability. Some of the largest neural networks used by industry leaders utilize billions of interrelated parameters (Ray, 2020). Neural networks are complex and non-linear, and do not necessarily "show their work," which makes it very difficult to understand how they operate, what features they use to make decisions, and how various decisions are weighted and why (Eilertsen et al., 2020). The "black-box" nature of AI systems is compounded in content analysis, particularly moderation, because moderation itself has "long been a famously opaque and secretive process," as one researcher notes (Gorwa et al., 2020). The lack of transparency in automated decision-making can be exacerbated when commercial intellectual property rights are claimed as a barrier to disclosure. Further, it may become more difficult to ascertain the potential human rights harms of content takedowns where initial flagging decisions are made by automated systems that lack transparency and clarity in the reasons for the takedowns (Gorwa et al., 2020).

TOOLS AND TECHNIQUES FOR EXPLAINABLE AI

Explainability tools seek to illuminate why a particular algorithm reached the conclusion it did. These tools operate in a variety of ways, but generally attempt to highlight key features that were weighted as part of a specific output. For example, an explainability tool for an image classifier may highlight ears, whiskers, and a tail as elements that contributed to a conclusion that an image contains a cat. One approach employs heatmaps to display key regions of an image supporting classification predictions (Karlinsky et al., 2020). Numerous industry players have produced tools to interpret AI inferences, including Facebook's Captum, which is open source, and Fiddler's Explainable AI. IBM has an AI Explainability 360 toolkit, which is also open source, and Microsoft released InterpretML.

Knowing why a model reaches predictions is important because it may reach the right prediction but for the wrong reasons (Samek, 2020). In one study, the researcher discovered that an algorithm to predict a person's age from a photo had learned, for whatever reason, to correlate age with not smiling. In other words, the model believed that the elderly do not laugh (Lapuschkin et al., 2017). These mistakes are difficult to catch without explainability and auditing. Several current techniques exist for getting an AI system to produce cues as to why it produced certain outputs. These include perturbation techniques (modifying features to test their importance for achieving a result), surrogate/sampling methods (approximating predictions locally), and structural techniques (analyzing the inner structure of the network) (Samek, 2020).
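Of these, perturbation techniques are the simplest to illustrate. The sketch below builds a crude occlusion heatmap in PyTorch: it slides a gray patch across an image and records how much the model's confidence drops at each position. It is a generic illustration, not any vendor's tool; "model" stands in for any image classifier.

```python
# A minimal sketch of a perturbation-based explanation: occlude regions of an
# image and measure the drop in the model's confidence for its predicted class.
import torch

def occlusion_heatmap(model, image, target_class, patch=32, stride=16):
    """image: tensor of shape [1, 3, H, W]; returns a 2-D importance map."""
    model.eval()
    with torch.no_grad():
        base = torch.softmax(model(image), dim=1)[0, target_class].item()
        _, _, H, W = image.shape
        heat = torch.zeros((H - patch) // stride + 1, (W - patch) // stride + 1)
        for i, y in enumerate(range(0, H - patch + 1, stride)):
            for j, x in enumerate(range(0, W - patch + 1, stride)):
                occluded = image.clone()
                occluded[:, :, y:y + patch, x:x + patch] = 0.5  # gray patch
                p = torch.softmax(model(occluded), dim=1)[0, target_class].item()
                heat[i, j] = base - p   # a big drop marks an important region
    return heat

# Usage sketch: for a "cat" prediction, the hottest cells often sit over the
# ears, whiskers, and tail, the kinds of cues described in the text above.
# heat = occlusion_heatmap(model, image, target_class=predicted_idx)
```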
DIFFERENT EXPLAINABILITY APPROACHES FOR DIFFERENT STAKEHOLDERS

Explainability may mean different things in different contexts. The use case for a developer, who may need to understand the structure or limitations of an algorithm, generally is different from that of an end user, who may wish to simply understand why a specific image was analyzed in a particular way. Thus, there is no "one-size-fits-all" approach to explainability (Hind, 2019), and different settings may require different types of explanations. One proposal by IBM attempts to articulate what these different forms of explainability could look like (Arya et al., 2019). Their taxonomy for explainability includes:

• Directly interpretable explanations (a simple model with decision rules that can be understood by the user) and post hoc interpretable explanations (e.g., for a CNN) that require an associated model to provide explanations.
• Global explanations that cover how the entire model operates, and local explanations that explain a specific prediction/output.
• Static explanations that do not change, and interactive explanations that provide more depth or further explanation in response to new user requests (e.g., via a dialogue).

In general, explainability remains a nascent research subject and there is much more work to be done. The National Institute of Standards and Technology (NIST) has proposed draft key principles to guide the development of explainability (Phillips et al., 2020). These include: (1) providing an explanation (i.e., evidence or a reason) for the output; (2) making sure that the explanation is understandable to each user (which may require providing different explanations for different users); (3) ensuring that the explanation accurately describes how the model arrived at a given output (which is different from the model precision described earlier); and (4) outlining the limits of the model's use, or the cases it was not designed for. ■

III. Conclusion: Capabilities, Limitations, and Risks

The Internet has ushered in an explosion of digital speech and communication in the form of text, images, video, and audio created and uploaded by people across the globe. The scale of this content overwhelms our capacities for evaluation, and since the earliest days of the Internet, different forms of automation have been employed to help us filter, sort, rank, and otherwise analyze user-generated content. Advances in machine learning techniques have raised the prospect of much more sophisticated analysis, where "artificial intelligence" grows to approximate, or even surpass, human understanding of the meaning and context of all of this content.

Today's machine learning techniques for analyzing content at scale represent significant technological advancements – and, as discussed earlier, tools employing these techniques can be useful in a variety of scenarios, including content moderation, research, fraud detection, improving accessibility of media files, and more. But these techniques also possess real limitations that affect their utility for different tasks. They struggle to parse context and extract nuanced meaning, and are often vulnerable to evasion and circumvention. Some of these limits might be addressed by future technological advances. But many of the limitations are inherent to the technologies themselves. Social networking services, civil society, governments, and others need to consider these limitations (e.g., robustness, data quality, lack of context, measurability, and explainability) when considering whether and how to use automated tools to address the complex problems of large-scale content analysis.
A failure to address these limitations in the design and implementation of these tools will lead to detrimental impacts on the rights of people affected by automated analysis and decision making. For example, a tool with limited robustness can be easy to circumvent and can fail to identify abusive content, including "deepfakes" or other manipulated media (Chesney & Citron, 2019). Poor data quality can lead to machine learning models that perpetuate existing biases in society (Buolamwini & Gebru, 2018) and that yield outputs with a disparate impact across different demographics, exacerbating inequality (Burton-Harris & Mayor, 2020). Insufficient understanding of context can lead to overbroad limits on speech and wholly inaccurate labeling of speakers as violent, criminal, or abusive (Llansó, 2020a). Poor measures of the accuracy of automated techniques can lead to a flawed understanding of their effectiveness and use, which can lead to over-reliance on automation and inhibit the introduction of necessary safeguards. Finally, limited explainability can restrict the options for remedying both individual errors and systematic issues, which is particularly important where these tools are part of key decision-making systems.

Large-scale use of the automated content analysis tools described in this paper will only amplify their limitations and associated risks. Discussions of the potential benefits of automated content analysis should not, therefore, understate the critical role of human review or the importance of structuring systems with opportunities for intervention (Duarte et al., 2017). Technology companies that create and employ tools for automated content analysis should include opportunities for human review and intervention throughout the design and implementation processes, as part of a set of safeguards to identify and mitigate adverse human rights impacts of their technology. Policymakers must also take into account the limitations of automated analysis tools before promulgating laws or regulations that require or assume their use. For example, laws that impose extensive content moderation obligations on social media platforms that handle millions of pieces of content on a daily basis may explicitly or implicitly rely on the assumption that automated content analysis tools will make that possible. But that assumption carries with it all the limitations of those tools, which may result in errors and harms that should be weighed in the policymaking process. Indeed, automated content filtering should never be required by law. Policymakers should not embrace or normalize an uncritical view of the efficacy of these tools (Gorwa et al., 2020). This can undermine important and needed public dialogue about what problems machine learning or "artificial intelligence" can – and cannot – help us solve. ■

IV. Appendix: Automated Multimedia Content Analysis Techniques

Figure 8. An example of how small changes in input data can lead to very different results in cryptographic hashing. This graphic has been recreated, and is based on one by Rosenbaum, K. (2017, June 26). Cryptographic Hashes and Bitcoin, Grokking Bitcoin, Manning Publications.
Matching Models - Cryptographic Hashing

Cryptographic hashing functions create a string of numbers and letters called a hash, which almost uniquely identifies a file. Similar cryptographic functions are also used to encrypt data in applications like e-mail, Signal or WhatsApp texts, or certain file storage mediums, and are meant to assure recipients of the authenticity of a message or file, down to the last bit. Cryptographic hashing uses a cryptographic function to generate a random hash fingerprint. The cryptographic component makes these functions generally “non-smooth” and extremely sensitive to change. This means even minuscule alterations in the input data will drastically change the resulting hash. For example, changing the shade of one pixel in a high-resolution photo would produce a distinct cryptographic hash. Cryptographic functions are also highly collision-resistant, meaning different pieces of content will produce very different hashes, so the likelihood of two different pieces of content producing the same hash (or “colliding”) is incredibly low (Engstrom & Feamster, 2017).

Cryptographic hashing is highly effective in authenticating known content without alterations. This leads to its primary drawback in its use for automated content analysis, which is its lack of robustness, meaning it is not resistant to minor data distortion. Substantially identical pieces of content may hash very differently. This is particularly problematic in use cases that are adversarial in nature—i.e., an attacker tries to circumvent a hash-based filter and modifies content such that it produces a different hash. Alterations might also occur naturally, simply through the routine transfer of data, which may utilize compression to save bandwidth and space. Most modern content sharing systems apply some form of post-processing which would, by nature, change the bits of the file and thus the output of the cryptographic hash. Ideally, a matching system would be input invariant, which means that small alterations in input would produce little or no change in the hash.
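To make this sensitivity concrete, here is a minimal sketch using Python’s standard hashlib library. The byte strings are placeholders standing in for image files, not the actual cat image from Figure 8; the point is that a one-byte change produces an unrecognizably different digest.

```python
import hashlib

# Placeholder bytes standing in for two copies of an image that
# differ in a single byte (say, one pixel channel nudged by one).
original = b"cat-image-bytes" + bytes([0x10])
modified = b"cat-image-bytes" + bytes([0x11])

# MD5 is used here only to mirror Figure 8; it is considered broken
# for security purposes, and SHA-256 would be preferred in practice.
print(hashlib.md5(original).hexdigest())
print(hashlib.md5(modified).hexdigest())  # shares nothing recognizable
```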
Matching Models - Perceptual Hashing

Perceptual hashing seeks to determine not whether two pieces of content are identical, but whether they are “alike enough”—i.e., practically identical. For example, if you were shown two photos of the same person, except one single hair in one photo were a slightly different shade, you would likely not notice this minuscule change and consider the photos the same. However, a cryptographic hashing method would consider these completely different. The goal of a perceptual hashing method would be to recognize these as fundamentally the same photo.

Perceptual hashing methods aim to better comprehend the nature of a piece of content so that the machine cannot be fooled by imperceptible or non-meaningful changes, such as rotations, resizing, orientation flips, noise, delays in audio or video, or watermarking. Some of these changes might be naturally occurring; others may be human-designed efforts to circumvent detection.

Perceptual hashing methods involve various methods of pre-processing content, hashing it, and using metrics to compare how alike two pieces of hashed content are. A threshold can be set to determine what degree of difference between hashes is allowed to still consider them matches. Modern perceptual hashing methods apply a range of techniques, including different approaches to create hash fingerprints. Some of those methods include ones based on invariant features, local feature points, dimension reduction, and statistics features (Du et al., 2020). For example, by applying a grid and analyzing relationships among pixels in each square, the hash comparison is able to recognize the underlying similarity of the images (see Figure 9).

Figure 9. An overview of a hash comparison process of two versions of the same photo but with different levels of color saturation (grid build, hash calculation, hash comparison, similarity degree). Source: Souza et al. (2018). https://www.researchgate.net/profile/Veronica-Teichrieb/publication/325521472_Generating_an_Album_with_the_Best_Media_Using_Computer_Vision/links/5b2179a6458515270fc6da3e/Generating-an-Album-with-the-Best-Media-Using-Computer-Vision.pdf.

Perceptual hashing methods offer more flexibility than their cryptographic counterparts. For instance, they can be capable of identifying content that is hidden within other pieces of content, such as a video that is masked within another video (Langston, 2018). In order to evolve as attackers evolve, perceptual hashing functions may utilize techniques like deep learning and convolutional neural networks (discussed in more detail in Box 1) in order to adaptively identify manipulation methods and features. Such methods have shown promise, with the ability to distinguish between substantively distinct images, while also not being fooled by superficial changes (Jiang & Pang, 2018).

Some specific implementations of perceptual hash algorithms include systems designed to detect child sexual abuse material (CSAM), terrorist propaganda, and copyrighted content. (See, for example, the tools used by Pornhub to detect CSAM and non-consensual content: https://help.pornhub.com/hc/en-us/articles/1260803955549-Transparency-Report/. Accessed April 2021.)

CHILD SEXUAL ABUSE MATERIAL (CSAM)

Perceptual hashing has been the primary technology utilized to mitigate the spread of CSAM, since the same materials are often repeatedly shared, and databases of offending content are maintained by institutions like the National Center for Missing and Exploited Children (NCMEC) and its international analogue, the International Centre for Missing & Exploited Children (ICMEC) (Lee et al., 2020).

PhotoDNA, developed by Microsoft, is presently the most widespread perceptual matching method for countering CSAM. At a high level, it works by first converting a full-resolution color image to grayscale, then downsizing it to 400 x 400 pixels. A filter is applied, the image is partitioned, and then measurements are extracted onto feature vectors which are compared using a distance metric. PhotoDNA for video applies a similar method to certain video “key frames” (Langston, 2018). More specific information about the PhotoDNA algorithm and the NCMEC database are not publicly available, due to concerns that attackers would use that information to circumvent these protections; however, this lack of transparency also closes off avenues for independent audits and review.
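PhotoDNA’s specifics are not public, but the general pattern of fingerprinting and then comparing under a distance threshold can be illustrated with a far simpler perceptual hash. The sketch below implements a basic “average hash” using the Pillow imaging library; the file names and the 5-bit threshold are illustrative assumptions, not values from PhotoDNA or any deployed system.

```python
from PIL import Image

def average_hash(path, size=8):
    """Tiny perceptual hash: grayscale, shrink to size x size,
    then record whether each pixel is brighter than the mean."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    return [1 if p > mean else 0 for p in pixels]

def hamming(a, b):
    """Number of differing bits between two fingerprints."""
    return sum(x != y for x, y in zip(a, b))

# Illustrative threshold: a small Hamming distance means
# "practically identical"; the exact cutoff is a policy choice.
h1 = average_hash("photo.jpg")
h2 = average_hash("photo_recompressed.jpg")
print("match" if hamming(h1, h2) <= 5 else "no match")
```

The design choice that matters is the comparison metric: unlike a cryptographic hash, nearby inputs produce nearby fingerprints, so similarity becomes a tunable threshold rather than an exact match.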
Facebook has open-sourced its PDQ and TMK+PDQF algorithms for image- and video-matching, respectively (Davis & Rosen, 2019). PDQ, based on an algorithm called pHash, stores and compares the outputs of 16 x 16 transformations of images. Other perceptual applications in CSAM include CSAI Match, a proprietary hash-matching technology developed by YouTube, which is utilized by Adobe, Tumblr, and Reddit. Google released an open-source Content Safety API, an AI-powered tool grading the severity of disturbing images, with the Internet Watch Foundation (Todorovic & Chaudhuri, 2018). New methods propose purely metadata-based analysis (meaning they work without examining the actual content of a file) using file paths, which could augment perceptual hashing methods in the fight against CSAM (Pereira et al., 2020). In practice, companies may use a combination of these and other automated tools to detect CSAM on their networks.

TERRORIST PROPAGANDA

The Global Internet Forum to Counter Terrorism (GIFCT), a consortium founded by Facebook, Microsoft, Twitter, and YouTube and now operating as an independent entity, maintains a shared industry hash database of what they view as terrorist and violent extremist content. Individual companies that are members of the consortium may, depending on the nature of their participation, contribute content they deem to include terrorist propaganda to be catalogued in the shared database. This shared database is not available for independent review or audit. (Joining the consortium involves signing an NDA, MOU, and obtaining licenses to use hashing techniques. As a result, the technical workings of the shared industry hash database (SIHD) are not publicly known. See https://gifct.org/. Accessed March 2021.)

eGLYPH is another hashing algorithm for terrorist content created by Hany Farid, the same researcher who developed the original PhotoDNA technology. eGLYPH operates very similarly to PhotoDNA, involving the grayscale conversion of images and down-sizing to 400 x 400 fixed resolution. The algorithm can be used to find videos as well, by filtering out redundant frames to reduce length and file size, and then producing arbitrary-length hashes which can be compared using a “longest common substring.” Longest common substring is a technique for comparing how similar two alphanumeric strings are by finding the longest contiguous stretch the two strings have in common. For example, consider the random strings “982tiu3hhuiuh” and “293rr928iu3hhu2tiu.” These strings have the common substrings “2tiu” and “iu3hhu.” Because “iu3hhu” is the longer of these substrings at six characters long, that is the strongest point of similarity and thus the string used to score the strength of similarity between the two longer strings. This approach can also be used to compare audio files (Counter Extremism Project, 2018; Greenemeier, 2017).
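The longest common substring itself is a textbook dynamic-programming exercise. The sketch below is a generic implementation applied to the example strings above, not eGLYPH’s actual code.

```python
def longest_common_substring(a, b):
    """Classic dynamic program: table[i][j] holds the length of the
    longest common suffix of a[:i] and b[:j]."""
    best_len, best_end = 0, 0
    table = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            if a[i - 1] == b[j - 1]:
                table[i][j] = table[i - 1][j - 1] + 1
                if table[i][j] > best_len:
                    best_len, best_end = table[i][j], i
    return a[best_end - best_len:best_end]

print(longest_common_substring("982tiu3hhuiuh", "293rr928iu3hhu2tiu"))
# -> "iu3hhu", the six-character strongest point of similarity
```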
COPYRIGHTED CONTENT

Copyright-enforcement tools seek to match user-uploaded content to instances of known, copyrighted content. Perceptual methods are often useful in these efforts, since pirated content might add modifications or watermarks to avoid identification. An example of one tool is the Echoprint API, an open-source fingerprinting library utilized by Echo Nest, a subsidiary of Spotify (Ellis & Whitman, 2013). Echoprint contains three components: 1) a code/fingerprint generator; 2) a query server that stores codes to match against; and 3) codes themselves that are used to match against the fingerprints of any given audio files. Specifically, Echoprint creates time/hash pairs based on relative timing between beat-like onsets, and identifies pieces of audio via these pairs. The fingerprint is based on the relative locations of these onsets (Welcome to Echoprint, n.d.).
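Echoprint’s full pipeline is more involved, but the core idea of hashing relative onset timings can be sketched as follows. This is a simplification that assumes onset times (in seconds) have already been extracted by an upstream detector; it is not the actual Echoprint code.

```python
import hashlib

def onset_fingerprint(onset_times, fanout=2):
    """Emit time/hash pairs from the relative spacing of beat-like
    onsets: each onset is paired with the gaps to its next few
    successors, and the gap pattern is hashed into a short code."""
    pairs = []
    for i, t0 in enumerate(onset_times):
        for j in range(1, fanout + 1):
            if i + j + 1 < len(onset_times):
                gap_a = round(onset_times[i + j] - t0, 2)
                gap_b = round(onset_times[i + j + 1] - t0, 2)
                code = hashlib.md5(f"{gap_a}:{gap_b}".encode()).hexdigest()[:8]
                pairs.append((round(t0, 2), code))
    return pairs

# Matching two recordings then reduces to counting shared codes whose
# timestamps differ by a consistent offset.
print(onset_fingerprint([0.0, 0.48, 0.97, 1.51, 2.02, 2.49]))
```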
Another example of a similar fingerprinting technology is YouTube’s Content ID, which allows rights holders themselves to create fingerprints of their multimedia (Engstrom & Feamster, 2017). (Content ID was originally licensed by YouTube in 2006 from Audible Magic; after Google acquired YouTube, it acquired a trademark for “Content ID,” after which Audible Magic sued Google over use of the term (Sanchez, 2017).) The company Audible Magic produces matching systems utilized by major entertainment studios (Universal Music Group, Warner Bros., Sony, and Disney), as well as platforms such as Facebook, Soundcloud, Twitch, and Tumblr. Audible Magic holds numerous patents in perceptual fingerprinting and automated content recognition, including methods for creating unique audio signatures via segmentation. While its methods are proprietary, those patents indicate that it utilizes principles analogous to segmentation and fingerprinting of spectrograms (visual representations of a spectrum of frequencies in a piece of audio). For more background on the audio fingerprinting techniques mentioned here, see Haitsma & Kalker (2003).

OTHER APPLICATIONS

Matching algorithms may appear in any case where an organization wants to blocklist content and flag that content when it appears. For instance, online social matchmaking services like OkCupid have utilized perceptual hashing algorithms to scan for re-uploads of banned profiles (Jablons, 2017). Facebook, too, utilizes a large-scale matching infrastructure called SimSearchNet/SimSearchNet++ on “every image uploaded to Instagram and Facebook” to scan against an existing curated database of “misinformation,” including COVID-19 misinformation (Facebook, 2020). Amazon utilizes audio fingerprinting to prevent mentions of the word “Alexa” in advertisements from mistakenly triggering Alexa devices and resulting in negative customer experiences (Rodehorst, 2019).

Box 1. Deep Learning as the Foundation for Predictive Models

Deep learning is an attempt to solve complex computational problems by replicating the structure of the human brain. The result is a set of structures called artificial neural networks (ANNs) that can “learn” from very large quantities of data. The basic function of ANNs is to ascertain features from inputs. For example, an ANN may learn what features of an image represent a flower, by analyzing millions of images of flowers and non-flowers. ANNs contain layers of functions (called “nodes” or “neurons”) which perform various operations on the data that they are fed. The “deep” of deep learning refers to a network having many, many layers, most of which are hidden. ANNs are an umbrella category and can contain many different types of neural networks.

Think of ANNs (very roughly) like an incredibly large imaginary car factory, larger than any currently on earth, where upwards of millions of workers process smaller components of a very complex car. Assembly of the finished product will typically be broken down into a multitude of sub-tasks. Teams of workers with specialized skills build upon the output of other workers within dedicated teams and may connect with other teams as needed. The outputs of these steps may not, by themselves, look anything like the finished product, much as an ignition coil may not be immediately recognizable as a car part (even to regular users of cars). During the process, tasks and workflow may also be shifted in real-time to make the process more efficient. Thus, someone walking through this factory would likely find it impossible to grasp the immensity of the process or the relationships between various teams and processes.

ANNs can be structured in a variety of ways. One type of ANN, a fully-connected neural network, is good at making classification decisions of simple data. This means each node in a layer is connected to all the nodes in the next layer. However, fully-connected networks suffer from computational inefficiency because they are dense. If the input were a 1,000 x 1,000-pixel image and the first layer contained 1,000 nodes, this would lead to 1 billion parameters after just the first layer, a number which will increase dramatically with dozens or hundreds of layers, or if color channels are added to the image being evaluated (Elgendy, 2020). This huge number of parameters leads to high computing time, unwieldiness, and overfitting, making ANNs alone ill-suited for computer vision and audition tasks. Another type of neural network, called a convolutional neural network (CNN), seeks to address this issue.

CONVOLUTIONAL NEURAL NETWORKS

Convolutional neural networks (CNNs) underlie the current most popular method for modern predictive models for content analysis. They utilize locally connected layers to attempt to simplify inputs to smaller representations before making classification decisions. Instead of each node being connected to every node in the previous layer and considering the entire input, nodes in a CNN consider smaller windows of the input. CNNs utilize convolutional layers, which act like windows (or “kernels”) sliding over the input data to extract salient features by applying various filters. These filters perform specialized operations such as edge, contour, or corner detection for images (these operations reduce spatial dimension and resolution of the image). Then a pooling operation is performed where the results of the high-level features are combined. Like ANNs, CNNs have input layers, output layers, and hidden layers. Predictive models that utilize CNNs often incorporate fully-connected layers or recurrent layers for stages of their analysis.
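As a toy illustration of the pattern just described (convolutional windows, a nonlinearity, pooling, then a fully-connected classifier over the flattened features), here is a minimal PyTorch sketch; the layer sizes and the four example classes are arbitrary choices, not taken from any production system.

```python
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    """Convolution -> ReLU -> pooling, twice, then a small
    fully-connected classifier over the flattened feature map."""
    def __init__(self, num_classes=4):  # e.g., flower/cup/car/tree
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # sliding filters
            nn.ReLU(),
            nn.MaxPool2d(2),                             # pooling shrinks the map
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x):                 # x: batch of 3x64x64 images
        x = self.features(x)
        x = torch.flatten(x, 1)           # "flatten" onto a long vector
        return self.classifier(x)

logits = TinyCNN()(torch.randn(1, 3, 64, 64))
print(logits.shape)  # torch.Size([1, 4]): one score per class
```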
Here’s a walkthrough of a CNN process. Suppose a CNN is used to try to identify that an input image contains a flower. First, CNN layers will apply filters across the image to create a feature map. This means the first layers will extract very rudimentary features like edges and blobs. As these features are combined, an early layer may result in recognizing a rough outline of a flower. Another layer of features may identify a petal, stem, or leaf by their outlines, colors, and textures. Pooling layers simplify the outputs of these various feature maps. They are then “flattened” onto a long vector which expresses the data in a simplified format. These simplified outputs then can be analyzed by fully-connected layers (thus the more computationally expensive part of the calculation is now being done on a much smaller, less expensive, input) which will generate a prediction whether the image contains a flower. This prediction is based on the data and the model’s training on images containing flowers.

Figure 10. This graphic has been recreated, and based on an illustration of a CNN by MathWorks. Source: Learn About Convolutional Neural Networks. (2020). MathWorks. Retrieved December 17, 2020 from https://www.mathworks.com/help/deeplearning/ug/introduction-to-convolutional-neural-networks.html. The figure shows an input image passing through repeated convolution, ReLU (rectified linear unit), and pooling stages, with successive filters picking up light and dark, then simple shapes, then complex shapes, then shapes that can be used to define a flower, followed by fully-connected layers and a softmax that outputs a categorical probability distribution over the classes (flower, cup, car, tree).

Predictive Models - Computer Vision Models for Content Analysis

Computer vision attempts to solve a multitude of sub-problems, using techniques such as deep-learning models and CNNs. Vision problems involve a complex suite of “building block” tasks from analyzing shapes, textures, colors, spatial arrangement, and static and temporal relationships. Computer vision technology is rapidly evolving with the potential to be "the greatest disruptive innovation in a generation" (McBride, 2020). Examples of various computer vision tasks are summarized below, although this list is non-exhaustive:

Computer Vision Task | Function | Sample Output
Classification | Identifies what is in an image, without determining object location in the image. | Image contains at least one person, a hate symbol, and a sign, with a particular degree of confidence.
Object Detection | Identifies the classification and locations of objects in an image via bounding boxes. | A box-shaped region in an image contains a person, another box-region contains another person, another box contains a sign, and another box contains a hate symbol.
Semantic Segmentation | Identifies, at a pixel-level outline, what space in the image belongs to what categories of objects. | The parts of the image that are perceived as people are shaded one color, parts of the image that are signs are another color, and the hate symbol is another.
Instance Segmentation | Identifies objects using a pixel-level outline, differentiating distinct copies of the same object. | The individual people, sign, and symbol are different colors.
Scene Understanding | Identifies what is generally happening in a scene using geometric and content cues. | The scene depicts a person protesting with a sign containing a hate symbol.
Action Recognition | Identifies, using physical cues, what actions are being taken. | The person is holding the sign. The person is yelling.
Object Tracking | Identifies, in a video, where an object moves over time. | The person is swinging the sign back and forth.
3D Pose Estimation | Identifies, using joint positions, what physical action a person is taking. | The person holding the sign is making offensive gestures.

Table 2. Examples of various computer vision tasks summarized.

IMAGE CLASSIFICATION (IMAGE LEVEL PREDICTIONS)

A classifier is a computer vision algorithm that indicates what an image contains. Image classifiers are one of the simpler computer vision tools, and they are ubiquitous and among the most common in the multimedia content analysis space (Batra, 2019). A current popular classifier is called ResNet-50, which is a CNN that contains fifty layers, is pre-trained on a million images from the ImageNet database, and can classify according to 1000 object categories.
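A sketch of how such a classifier is typically invoked, using the pretrained ResNet-50 weights packaged with torchvision (the photo file name is a placeholder; the 1,000 output classes are ImageNet’s categories, and the reported number is the confidence score discussed just below):

```python
import torch
from torchvision import models, transforms
from PIL import Image

# Standard ImageNet preprocessing expected by the pretrained weights.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# torchvision >= 0.13; older versions used pretrained=True instead.
model = models.resnet50(weights="IMAGENET1K_V1")
model.eval()

image = preprocess(Image.open("photo.jpg").convert("RGB")).unsqueeze(0)
with torch.no_grad():
    probs = torch.softmax(model(image), dim=1)[0]

conf, idx = probs.max(0)
print(f"predicted ImageNet class {idx.item()} "
      f"with confidence {conf.item():.2f}")
```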
Classification indicates what predefined categories of objects occur in data. A very basic example of a classifier would be one that predicts whether or not an image contains a cat or dog. “Prediction” is a term of art used since outputs are typically accompanied by a confidence score, which indicates the degree of certainty with which the algorithm has made its prediction (one can also think of it as a “guess”).

Classifiers can achieve state-of-the-art performance across many domains. But they are brittle, meaning they are susceptible to external forms of visual interference and distortions called perturbations (Stock et al., 2020). Perturbations might include anything from changes to the intensity of single pixels in an image, to image-wide changes such as noise or blur. These perturbations may be environmental, a product of imperfect image capture techniques, or the result of deliberate efforts to fool an image recognition process.

Figure 11. Illustration of image perturbations, showing fifteen corruption types: Gaussian noise, shot noise, impulse noise, defocus blur, frosted glass, motion blur, zoom blur, snow, frost, fog, brightness, contrast, elastic, pixelate, and JPEG. Source: (Hendrycks & Dietterich, 2019).
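A minimal sketch of one such perturbation, additive Gaussian noise (the first corruption shown in Figure 11); the severity value is an arbitrary illustration:

```python
import numpy as np

def gaussian_noise(image, sigma=0.1):
    """Add zero-mean Gaussian noise to an image scaled to [0, 1].
    Higher sigma means a more severe corruption; the content looks
    unchanged to a human eye at low severities, yet a brittle
    classifier's prediction can flip."""
    noisy = image + np.random.normal(0.0, sigma, image.shape)
    return np.clip(noisy, 0.0, 1.0)

image = np.random.rand(224, 224, 3)      # stand-in for a real photo
corrupted = gaussian_noise(image, sigma=0.08)
print(np.abs(corrupted - image).mean())  # small average pixel change
```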
Classifiers may also be fooled by images that look very similar to one another but represent different objects, such as chihuahuas and blueberry muffins, or sheepdogs and mops (per our previous example in Figure 1).

Figure 1. Visually similar images: chihuahuas and blueberry muffins, or sheepdogs and mops. Source: https://twitter.com/teenybiscuit/status/707670947830968320 (Accessed March 2021).

OBJECT DETECTION (OBJECT/BOUNDING BOX LEVEL PREDICTIONS)

While classifiers merely identify what is in an image, object detectors take on a more complex task, which is localizing one or more objects in an image and classifying those objects. Many industry content analysis tools utilize object detectors. For instance, the Amazon Rekognition Content Moderation API, for images and videos, is a deep-learning based detector. It assigns labels to objects in photos including adult content, violence, weapons, visually disturbing content, as well as drugs, alcohol, tobacco, hate symbols, and gestures, all with associated confidence scores (Amazon Web Services, 2020). Google Cloud’s Vision API similarly utilizes detectors to identify explicit content and various objects and expressions. Specialized detectors may be used in content analysis, from gunshot detectors, to blood detectors and others.

The output of a detector is typically a location, denoted by a “bounding box,” and the class of the object. An object detection algorithm generally begins by proposing regions of interest (ROIs) and then conducting classification tasks, as discussed earlier, on those individual regions. Since several ROIs might initially cover an object, a process called “non-maximum suppression” is utilized to narrow down which ROI most closely frames a given object (a code sketch of this step appears below).

Figure 12. Sample outputs of image classifiers versus detectors: a classifier labels the whole image “cat,” while a detector outputs boxes labeled “cat, dog, dog.” This graphic has been recreated, and based on an illustration by Hulstaert, L. (2018, April 19). A Beginner’s Guide to Object Detection, Datacamp. Retrieved December 17, 2020 from https://www.datacamp.com/community/tutorials/object-detection-guide.

Figure 13. Examples of a proposal process for regions of interest (ROIs). The light green boxes would be the output ROIs because, of all the boxes, they contain the most of a given dog. This graphic has been recreated, and based on an illustration by Chanel, V.S. (2017, September 18). Selective Search for Object Detection (C++/Python), Learn OpenCV. Retrieved from https://www.learnopencv.com/selective-search-for-object-detection-cpp-python/.

For content analysis, detectors may be desirable when the location in an image is relevant for determining its nature. For example, state-of-the-art detectors are being trained to recognize natural disasters such as earthquakes and flash floods, or other emergencies like accidents. These detectors could be used on social media to learn correlations between these events and posting metrics to better respond to emergencies (Weber et al., 2020). Object detection is crucial in analysis of video, which relies on understanding location and movement over time.
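The non-maximum suppression step described above can be sketched in a few lines. Boxes are (x1, y1, x2, y2) tuples with confidence scores; the 0.5 overlap threshold is a common but arbitrary choice.

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def non_max_suppression(boxes, scores, thresh=0.5):
    """Keep the highest-scoring box; drop any box overlapping it
    by more than thresh; repeat with the remainder."""
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) <= thresh]
    return keep

# Two overlapping ROIs around the same dog collapse to the
# better-scoring one; the distant box survives.
rois = [(10, 10, 60, 60), (12, 12, 58, 62), (100, 100, 150, 150)]
print(non_max_suppression(rois, [0.9, 0.75, 0.8]))  # -> [0, 2]
```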
Two main evaluation metrics are used to measure the performance of object detectors. Detection speed is evaluated in frames per second (FPS), and network precision is measured via mean average precision (mAP) (Elgendy, 2020).

Research shows that detectors generally perform better against efforts to circumvent them than classifiers — “fooling a detector is a very different business from fooling a classifier” (Lu et al., 2017, p. 9). This is because detectors consider a multitude of ROIs around an image, and apply a classification algorithm to each of these. Any circumvention effort must fool all of these boxes, rather than simply one.

Importantly, detectors can come in many forms, and often feature trade-offs depending on the desire for speed or accuracy. Three of the most popular algorithms for object detection are called R-CNN, SSD (Single Shot Detector), and YOLO (You Only Look Once). R-CNN is the least sophisticated of the three. It first uses a selective search algorithm to identify the most likely regions where the object exists, runs each proposed region separately through the CNN to compute its features, and then uses a classifier to determine what the object is. These steps partly explain why the use of R-CNN architectures is slow and computationally expensive. For this reason they are called multi-stage detectors. SSD and YOLO attempt to address the multi-stage issue by being “one shot”—in other words, convolutional layers simultaneously predict whether ROIs contain an object while also conducting the classification step. These detectors are considerably faster, and thus are often used in real-time video or camera applications (Redmon & Farhadi, 2018). However, they tend to be more prone to mistakes than multi-stage detectors. Improvements to R-CNN include removing the need to analyze each region proposal separately (Fast R-CNN) and replacing the selective search algorithm (Faster R-CNN), both of which made computation faster (see Girshick, 2015; S. Ren et al., 2016).

Figure 2. Differences between computer vision tasks: semantic segmentation (labels “grass, cat, tree, sky”; no objects, just pixels), classification + localization (“cat”; single object), object detection (“dog, dog, cat”; multiple objects), and instance segmentation (“dog, dog, cat”; multiple objects). Note that for instance segmentation, the two adjacent dogs are differentiated. In semantic segmentation, these would be the same color and not differentiated. Source: http://cs231n.stanford.edu/slides/2017/cs231n_2017_lecture11.pdf#page=53 (Accessed May 2021).

SEMANTIC SEGMENTATION AND INSTANCE SEGMENTATION

Segmentation tasks are important for content analysis because they are the building blocks for parsing relationships between objects in images or video. Semantic segmentation seeks to be more granular than detection, by assigning a class label to each individual pixel in an image. Instance segmentation seeks to be even more precise and identify individual object boundaries. A popular technique for this is called Mask R-CNN, which is an extension of Faster R-CNN for object detection. It works by generating bounding boxes and then adding a step to produce “masks” or object outlines (Mittal, 2019). Video instance segmentation takes this further, where individually segmented instances are then linked and tracked over an entire sequence.
For instance, researchers at Facebook developed an approach to instance segmentation to track objects in video sequences using a method called MaskProp. The MaskProp technique predicts clip-level instances in order to simultaneously classify, segment, and track object instances in video sequences, and is billed as more robust against motion blur and object occlusions in videos (Bertasius & Torresani, 2020). Other state-of-the-art methods in panoptic segmentation seek to merge both semantic and instance segmentation into one task (Kirillov et al., 2019).

The simplest architecture for semantic segmentation is the Fully-Convolutional Net (FCN), an encoder-decoder process. In FCN, an input image is down-sampled to a smaller size through a series of convolutions (the encoder), and then that encoded output is up-sampled. Up-sampling can occur via processes such as bilinear interpolation or transpose-convolutions (Long et al., 2015). The encoding process may, however, lead to artifacts and poor boundary resolution. More modern architectures include multi-scale models like the Pyramid Scene Parsing Network (PSPNet), which performs multiple convolution operations of varying dimensions (hence the “pyramid” title) (Zhao et al., 2017).

SCENE UNDERSTANDING

Scene understanding seeks to comprehend a scene by considering the geometric and semantic relationships of its contents (Naseer et al., 2019). Scene understanding algorithms have important applications in content analysis, as they piece together the larger correlations between individual objects. For example, an image containing “fire” might be a campfire or it could be a natural disaster or violent scene. An image containing “blood” might be a gruesome image, or it may be an educational photo of a surgery. Researchers from UCLA utilized scene understanding and visual sentiment analysis to develop a visual model to recognize protesters, describe their activities, and estimate the level of perceived violence in the image (Won et al., 2017). They identified that emotions such as anger and fear were often correlated with perceived violence, and implemented object detection of labels such as signs, photos, fire, law enforcement, children, and flags.

Scene understanding is a compound task that involves a number of the aforementioned “building block” tasks. Hence a scene understanding algorithm is not simply one algorithm but involves the application of a number of CNNs: classification; object detection; segmentation; monocular depth estimation; pose estimation; and/or sentiment analysis, among others.

OBJECT TRACKING

The task of object tracking in either pre-recorded video or a live stream means following the location of a given object over time. To imagine the difficulty of this, picture being with a friend in a busy crowd. Consider the steps the brain must take to watch a friend moving through the crowd and not lose sight of them. This involves identifying individual humans in the crowd, recognizing the friend among the other humans, and differentiating the friend (or perhaps only one or more features or perspectives of the friend due to obscuration). At some moments the friend may be close or far (Asad et al., 2020).
Multiple objects, lighting discrepancies, or temporary disappearances from view are just some of the problems tracking algorithms may face (Nixon & Aguado, 2019).

Video understanding is a significantly more difficult task than identification of objects in static images because it involves a temporal dimension. This dimension creates dependencies between various points in time (i.e., the order matters). An example of this is the act of climbing up a ladder, which can appear to be climbing down if an algorithm gets the frame-order wrong. Examples of tasks that may need to occur in video are object tracking, video object segmentation, video prediction, and pose estimation. Many current video analysis tools will approximate videos using specific frames. The Microsoft Azure content moderation system, for instance, divides content into differing “shots” and identifies specific key frames on which to run a static image analysis on whether that image is inappropriate or prohibited content.

Object tracking is utilized for a variety of use cases, such as following the motion of humans or vehicles. One key representation benefitting tracking and motion estimation is optical flow, or the pixel-level correspondence between images. These can help ascertain and differentiate forms of movement. Traditionally, classical methods infer optical flow by minimizing what is called a “loss function.” Modern methods utilize unsupervised learning to circumvent the need for labels. These approaches are advantageous because they yield faster results and improved performance. Examples of these approaches include OAFlow and DDFlow (Jonschkowski et al., 2020).
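Dense optical flow is easy to experiment with directly; the sketch below uses OpenCV’s implementation of Farneback’s algorithm on two consecutive grayscale frames (the frame file names are placeholders, and the parameter values are OpenCV’s commonly cited defaults rather than tuned choices):

```python
import cv2
import numpy as np

# Two consecutive frames of a video, loaded as grayscale arrays.
prev_frame = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
next_frame = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

# Dense optical flow: one (dx, dy) displacement per pixel.
# Positional arguments: pyramid scale, levels, window size,
# iterations, poly_n, poly_sigma, flags.
flow = cv2.calcOpticalFlowFarneback(
    prev_frame, next_frame, None, 0.5, 3, 15, 3, 5, 1.2, 0)

# Per-pixel motion magnitude; large values flag moving regions,
# which a tracker can then follow from frame to frame.
magnitude = np.linalg.norm(flow, axis=2)
print("mean motion (pixels):", magnitude.mean())
```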
Facebook executives, for example, reportedly said that AI may still be years away from being able to moderate live video at scale (Kahn, 2019).(cid:9) However, current technologies do apply forms of live object detection. Self-driving cars must understand objects in real time (Chin et al., 2019). Even so, these technologies are typically applying detection of objects, which is a much simpler task than parsing context about whether a scene contains violence.(cid:9) Do You See What I See? Capabilities and Limits of Automated Multimedia Content Analysis CDT Research Appendix - Automated Multimedia Content Analysis Techniques 53 Predictive Models - Computer Audition Models for Content Analysis Computer audition seeks to understand audio content. Where audio involves humans speaking, speech will often first be transcribed to a text form and analyzed with natural language processing (NLP) methods. This may compound errors that are misheard (such as if “porn” is misheard as “born,” potentially changing an analyzed context). Google’s “AI Autobahn” combines its Natural Language API and Jigsaw’s Perspective APIs to first do speech-to-text analysis, then apply textual sentiment and toxicity analysis. NLP methods and their strengths and limitations are covered in detail in CDT’s previous Mixed Messages report (Duarte et al., 2017). Deep learning applications for computer audition mirror many of the use cases in computer vision. However, they are typically conducted on spectrograms (graphic frequency depictions) of audio, rather than on images. State-of-the-art image classification techniques are also capable of achieving positive results on audio classification tasks (Hershey et al., 2017). Tasks of audio classification are often analogous to their image counterparts. Scene recognition, for example, has an audio counterpart (computational auditory scene recognition, or CASR) (Petetin et al., 2015). The foundational “cats and dogs” image classification task even has an audio counterpart for barks and meows (Takahashi et al., 2016).(cid:9) Some unique challenges presented in computer audition include mitigating noise, data variations, and language biases. Isolating salient audio from noise is the subject of current research, which is attempting to isolate sources of audio in mixed-audio recordings (Gfeller et al., 2020). Sound samples themselves may be inconsistent, with varied loudness, sample quality, and time durations (Saska et al., 2019). Some algorithms exist for noise reduction, including spectral noise gating, which aims to eliminate consistent background noise by “gating” out any noise that falls in a certain frequency range. This can help eliminate certain types of consistent background noise, like eliminating the frequencies of a coffee grinder from a recording of ambient sounds in a coffee shop. This could be useful, for example, in a tool that is trying to identify the song playing over the coffee shop’s loudspeakers. But gating out the coffee-grinder frequencies could also affect, for example, the ability of a matching algorithm to identify a song that uses those same frequencies. Finally, automatic speech recognition (ASR) is challenged by the fact speech can occur in many different languages, accents, or dialects. Different recognition models may be trained on “high resource languages” (languages for which many data resources exist) versus “low resource languages” (for which there are few data resources available). 
Finally, automatic speech recognition (ASR) is challenged by the fact that speech can occur in many different languages, accents, or dialects. Different recognition models may be trained on “high resource languages” (languages for which many data resources exist) versus “low resource languages” (for which there are few data resources available). Many near-extinct languages, dialects, or primarily oral languages have not generated electronic data (C. Wang et al., 2020). These considerations present challenges for the widespread application of computer audition tools for predictive applications. ■

References

Agarwal, S., El-Gaaly, T., Farid, H., & Lim, S.-N. (2020). Detecting Deep-Fake Videos from Appearance and Behavior. ArXiv:2004.14491 [Cs, Eess]. http://arxiv.org/abs/2004.14491.
Agüera y Arcas, B., Gfeller, B., Guo, R., Kilgour, K., Kumar, S., Lyon, J., Odell, J., Ritter, M., Roblek, D., & Sharifi, M. (2017). Now Playing: Continuous low-power music recognition. ArXiv Preprint ArXiv:1711.10958.
Amazon Web Services. (2020, October 12). Amazon Rekognition adds support for six new content moderation categories. Amazon Web Services, Inc. https://aws.amazon.com/about-aws/whats-new/2020/10/amazon-rekognition-adds-support-for-six-new-content-moderation-categories/.
Amazon Web Services. (2021). Amazon Rekognition—Developer Guide. Amazon Web Services. https://docs.aws.amazon.com/rekognition/latest/dg/rekognition-dg.pdf#moderation.
Arya, V., Bellamy, R. K., Chen, P.-Y., Dhurandhar, A., Hind, M., Hoffman, S. C., Houde, S., Liao, Q. V., Luss, R., & Mojsilović, A. (2019). One explanation does not fit all: A toolkit and taxonomy of ai explainability techniques. ArXiv Preprint ArXiv:1909.03012.
Asad, H., Shrimali, V. R., & Singh, N. (2020). The Computer Vision Workshop | Packt. Packt. https://www.packtpub.com/product/the-computer-vision-workshop/9781800201774.
Audible Magic. (n.d.). Patents. Audible Magic. Retrieved April 23, 2021, from https://www.audiblemagic.com/patents/.
Barbosa, N. M., & Chen, M. (2019). Rehumanized Crowdsourcing: A Labeling Framework Addressing Bias and Ethics in Machine Learning. Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, 1–12. https://doi.org/10.1145/3290605.3300773.
Bartholomew, T. B. (2014). The Death of Fair Use in Cyberspace: Youtube and the Problem with Content ID. Duke Law & Technology Review, 13, 66.
Batra, K. (2019, June 5). Introduction to Computer Vision and Building Applications That Can See. https://www.youtube.com/watch?v=L2B6_s3UvZA.
Bertasius, G., & Torresani, L. (2020). Classifying, Segmenting, and Tracking Object Instances in Video with Mask Propagation. ArXiv:1912.04573 [Cs]. http://arxiv.org/abs/1912.04573.
Brinkman, C., Fragkiadakis, M., & Bos, X. (2016). Online music recognition: The Echoprint system. https://staas.home.xs4all.nl/t/swtr/documents/wt2015_echoprint.pdf.
Browning, K. (2020, November 17). Zuckerberg and Dorsey Face Harsh Questioning From Lawmakers. The New York Times. https://www.nytimes.com/live/2020/11/17/technology/twitter-facebook-hearings.
Buda, M., Maki, A., & Mazurowski, M. A. (2018). A systematic study of the class imbalance problem in convolutional neural networks. Neural Networks, 106, 249–259. https://doi.org/10.1016/j.neunet.2018.07.011.
Buolamwini, J., & Gebru, T. (2018). Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. Conference on Fairness, Accountability and Transparency, 77–91. http://proceedings.mlr.press/v81/buolamwini18a.html.
Burton-Harris, V., & Mayor, P. (2020, June 24). Wrongfully Arrested Because Face Recognition Can’t Tell Black People Apart. American Civil Liberties Union. https://www.aclu.org/news/privacy-technology/wrongfully-arrested-because-face-recognition-cant-tell-black-people-apart/.
Cambridge Consultants. (2019). Use of AI in online content moderation. Cambridge Consultants. https://www.ofcom.org.uk/research-and-data/internet-and-on-demand-research/online-content-moderation.
Cavey, T., Dolan, A., & Stock, J. (2020). Strategies for Robust Image Classification. ArXiv Preprint ArXiv:2004.03452.
Chesney, B., & Citron, D. (2019). Deep fakes: A looming challenge for privacy, democracy, and national security. Calif. L. Rev., 107, 1753.
Chin, T.-W., Ding, R., & Marculescu, D. (2019). AdaScale: Towards Real-time Video Object Detection Using Adaptive Scaling. ArXiv:1902.02910 [Cs]. http://arxiv.org/abs/1902.02910.
Condliffe, J. (2019, November 15). The Week in Tech: Algorithmic Bias Is Bad. Uncovering It Is Good. The New York Times. https://www.nytimes.com/2019/11/15/technology/algorithmic-ai-bias.html.
Counter Extremism Project. (2018). CEP Report: YouTube’s Ongoing Failure to Remove ISIS Content. Counter Extremism Project. https://www.counterextremism.com/sites/default/files/eGLYPH_web_crawler_white_paper_July_2018.pdf.
Dalins, J., Wilson, C., & Boudry, D. (2019). PDQ & TMK+ PDQF–A Test Drive of Facebook’s Perceptual Hashing Algorithms. ArXiv Preprint ArXiv:1912.07745.
Davis, A., & Rosen, G. (2019, August 1). Open-Sourcing Photo- and Video-Matching Technology to Make the Internet Safer. Facebook. https://about.fb.com/news/2019/08/open-source-photo-video-matching/.
Dolhansky, B., & Ferrer, C. C. (2020). Adversarial collision attacks on image hashing functions. ArXiv:2011.09473 [Cs]. http://arxiv.org/abs/2011.09473.
Douek, E. (2020, February 11). The Rise of Content Cartels. https://knightcolumbia.org/content/the-rise-of-content-cartels.
DrivenData. (2020). Hateful Memes: Phase 2. DrivenData. https://www.drivendata.org/competitions/70/hateful-memes-phase-2/page/267/.
Drmic, A., Silic, M., Delac, G., Vladimir, K., & Kurdija, A. S. (2017). Evaluating robustness of perceptual image hashing algorithms. 2017 40th International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO), 995–1000. https://doi.org/10.23919/MIPRO.2017.7973569.
Du, L., Ho, A. T. S., & Cong, R. (2020). Perceptual hashing for image authentication: A survey. Signal Processing: Image Communication, 81, 115713. https://doi.org/10.1016/j.image.2019.115713.
Duarte, N., Llansó, E., & Loup, A. C. (2017). Mixed Messages? The Limits of Automated Social Media Content Analysis. Center for Democracy & Technology. https://cdt.org/wp-content/uploads/2017/11/2017-11-13-Mixed-Messages-Paper.pdf.
Eilertsen, G., Jönsson, D., Ropinski, T., Unger, J., & Ynnerman, A. (2020). Classifying the classifier: Dissecting the weight space of neural networks. ArXiv:2002.05688v1 [Cs.CV]. https://arxiv.org/abs/2002.05688v1.
Elgendy, M. (2020). Deep Learning for Vision Systems. Manning Publications.
Ellis, D., & Whitman, B. (2013). Musical fingerprinting based on onset intervals (United States Patent No. US8586847B2). https://patents.google.com/patent/US8586847B2/en.
Engstrom, E., & Feamster, N. (2017). The Limits of Filtering: A Look at the Functionality & Shortcomings of Content Detection Tools. Engine. https://www.engine.is/the-limits-of-filtering.
Eykholt, K., Evtimov, I., Fernandes, E., Li, B., Rahmati, A., Xiao, C., Prakash, A., Kohno, T., & Song, D. (2018). Robust Physical-World Attacks on Deep Learning Models. ArXiv:1707.08945 [Cs]. http://arxiv.org/abs/1707.08945.
Facebook. (2019, November 13). Community Standards Report. https://ai.facebook.com/blog/community-standards-report/.
Facebook. (2020, May 12). Using AI to detect COVID-19 misinformation and exploitative content. Facebook AI. https://ai.facebook.com/blog/using-ai-to-detect-covid-19-misinformation-and-exploitative-content/.
Faddoul, M. (2020, April 28). COVID-19 is triggering a massive experiment in algorithmic content moderation. Brookings. https://www.brookings.edu/techstream/covid-19-is-triggering-a-massive-experiment-in-algorithmic-content-moderation/.
Fawzi, A., Fawzi, H., & Fawzi, O. (2018). Adversarial vulnerability for any classifier. ArXiv Preprint ArXiv:1802.08686.
Geirhos, R., Rubisch, P., Michaelis, C., Bethge, M., Wichmann, F. A., & Brendel, W. (2018). ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness. ArXiv Preprint ArXiv:1811.12231. https://arxiv.org/pdf/1811.12231.
Gfeller, B., Roblek, D., & Tagliasacchi, M. (2020). One-shot conditional audio filtering of arbitrary sounds. ArXiv:2011.02421 [Eess]. http://arxiv.org/abs/2011.02421.
GIFCT. (n.d.). Joint Tech Innovation. GIFCT. Retrieved April 23, 2021, from https://gifct.org/joint-tech-innovation/#row-hash.
Ginosar, S., Bar, A., Kohavi, G., Chan, C., Owens, A., & Malik, J. (2019). Learning Individual Styles of Conversational Gesture. ArXiv:1906.04160 [Cs, Eess]. http://arxiv.org/abs/1906.04160.
Girshick, R. (2015). Fast R-CNN. 2015 IEEE International Conference on Computer Vision (ICCV), 1440–1448. https://doi.org/10.1109/ICCV.2015.169.
Goodfellow, I. J., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., & Bengio, Y. (2014). Generative Adversarial Networks. ArXiv:1406.2661v1 [Stat.ML]. https://arxiv.org/abs/1406.2661v1.
Google. (n.d.). Fighting child sexual abuse online. Retrieved April 23, 2021, from https://protectingchildren.google/intl/en/.
Google Cloud. (n.d.). Vision AI. Google Cloud. Retrieved April 23, 2021, from https://cloud.google.com/vision.
Google Cloud. (2021). Categorizing audio content using machine learning. Google Cloud. https://cloud.google.com/architecture/categorizing-audio-files-using-ml.
Gorwa, R., Binns, R., & Katzenbach, C. (2020). Algorithmic content moderation: Technical and political challenges in the automation of platform governance. Big Data & Society, 7(1). https://journals.sagepub.com/doi/full/10.1177/2053951719897945.
Greenemeier, L. (2017). When Hatred Goes Viral: Inside Social Media’s Efforts to Combat Terrorism—Scientific American. Scientific American, 316(5). https://www.scientificamerican.com/article/when-hatred-goes-viral-inside-social-medias-efforts-to-combat-terrorism/.
Haitsma, J., & Kalker, T. (2003). A Highly Robust Audio Fingerprinting System With an Efficient Search Strategy. Journal of New Music Research, 32(2), 211–221. https://doi.org/10.1076/jnmr.32.2.211.16746.
Harwell, D. (2018, July 19). The accent gap: How Amazon’s and Google’s smart speakers leave certain voices behind. Washington Post. https://www.washingtonpost.com/graphics/2018/business/alexa-does-not-understand-your-accent/.
Heaven, D. (2019). Why deep-learning AIs are so easy to fool. Nature, 574(7777), 163–166. https://doi.org/10.1038/d41586-019-03013-5.
Hendrycks, D., Basart, S., Mu, N., Kadavath, S., Wang, F., Dorundo, E., Desai, R., Zhu, T., Parajuli, S., Guo, M., Song, D., Steinhardt, J., & Gilmer, J. (2020). The Many Faces of Robustness: A Critical Analysis of Out-of-Distribution Generalization. ArXiv:2006.16241. http://arxiv.org/abs/2006.16241.
Hendrycks, D., & Dietterich, T. (2019). Benchmarking neural network robustness to common corruptions and perturbations. ArXiv Preprint ArXiv:1903.12261. https://arxiv.org/abs/1903.12261.
Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., & Song, D. (2021). Natural Adversarial Examples. ArXiv:1907.07174 [Cs, Stat]. http://arxiv.org/abs/1907.07174.
Hermann, K. L., Chen, T., & Kornblith, S. (2019). The origins and prevalence of texture bias in convolutional neural networks. ArXiv Preprint ArXiv:1911.09071. https://arxiv.org/pdf/1911.09071.
Hershey, S., Chaudhuri, S., Ellis, D. P. W., Gemmeke, J. F., Jansen, A., Moore, R. C., Plakal, M., Platt, D., Saurous, R. A., Seybold, B., Slaney, M., Weiss, R. J., & Wilson, K. (2017). CNN Architectures for Large-Scale Audio Classification. ArXiv:1609.09430 [Cs, Stat]. http://arxiv.org/abs/1609.09430.
Hind, M. (2019). Explaining explainable AI. XRDS: Crossroads, The ACM Magazine for Students, 25(3), 16–19. https://doi.org/10.1145/3313096.
Huang, J., Rathod, V., Sun, C., Zhu, M., Korattikara, A., Fathi, A., Fischer, I., Wojna, Z., Song, Y., & Guadarrama, S. (2017). Speed/accuracy trade-offs for modern convolutional object detectors. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 7310–7311.
Human Rights Watch. (2020). “Video Unavailable”: Social Media Platforms Remove Evidence of War Crimes. Human Rights Watch. https://www.hrw.org/report/2020/09/10/video-unavailable/social-media-platforms-remove-evidence-war-crimes.
Ilyas, A., Engstrom, L., Athalye, A., & Lin, J. (2018). Black-box Adversarial Attacks with Limited Queries and Information. ArXiv:1804.08598. http://arxiv.org/abs/1804.08598.
Jablons, Z. (2017, May 31). Evaluating Perceptual Image Hashes at OkCupid. OkCupid Engineering Blog. https://tech.okcupid.com/evaluating-perceptual-image-hashes-at-okcupid-e98a3e74aa3a.
Jiang, C., & Pang, Y. (2018). Perceptual image hashing based on a deep convolution neural network for content authentication. Journal of Electronic Imaging, 27(4). https://doi.org/10.1117/1.JEI.27.4.043055.
Jonschkowski, R., Stone, A., Barron, J. T., Gordon, A., Konolige, K., & Angelova, A. (2020). What Matters in Unsupervised Optical Flow. ArXiv:2006.04902 [Cs, Eess]. http://arxiv.org/abs/2006.04902.
Kahn, J. (2019, May 17). Facebook Executive Warns AI Video Screening Still a Long Way Off—Bloomberg. https://www.bloomberg.com/news/articles/2019-05-17/facebook-executive-warns-ai-video-screening-still-a-long-way-off.
Karlinsky, L., Shtok, J., Alfassy, A., Lichtenstein, M., Harary, S., Schwartz, E., Doveh, S., Sattigeri, P., Feris, R., & Bronstein, A. (2020). StarNet: Towards weakly supervised few-shot detection and explainable few-shot classification. ArXiv Preprint ArXiv:2003.06798.
Kazakos, E., Nagrani, A., Zisserman, A., & Damen, D. (2019). EPIC-Fusion: Audio-Visual Temporal Binding for Egocentric Action Recognition. 2019 IEEE/CVF International Conference on Computer Vision (ICCV), 5491–5500. https://doi.org/10.1109/ICCV.2019.00559.
Kirillov, A., He, K., Girshick, R., Rother, C., & Dollár, P. (2019). Panoptic Segmentation. ArXiv:1801.00868 [Cs]. http://arxiv.org/abs/1801.00868.
Kozyrkov, C. (2018, May 24). The simplest explanation of machine learning you’ll ever read | Hacker Noon. Hackernoon. https://hackernoon.com/the-simplest-explanation-of-machine-learning-youll-ever-read-bebc0700047c.
Kushwaha, A. (2019, August 24). Solving Class imbalance problem in CNN. Medium. https://medium.com/x8-the-ai-community/solving-class-imbalance-problem-in-cnn-9c7a5231c478.
Langston, J. (2018, September 12). How PhotoDNA for Video is being used to fight online child exploitation. Microsoft - On the Issues. https://news.microsoft.com/on-the-issues/2018/09/12/how-photodna-for-video-is-being-used-to-fight-online-child-exploitation/.
Lapuschkin, S., Binder, A., Müller, K.-R., & Samek, W. (2017). Understanding and Comparing Deep Neural Networks for Age and Gender Classification. ArXiv:1708.07689v1 [Stat.ML]. https://arxiv.org/abs/1708.07689v1.
Lee, H.-E., Ermakova, T., Ververis, V., & Fabian, B. (2020). Detecting child sexual abuse material: A comprehensive survey. Forensic Science International: Digital Investigation, 34, 301022. https://doi.org/10.1016/j.fsidi.2020.301022.
Llansó, E. (2016, December 6). Takedown Collaboration by Private Companies Creates Troubling Precedent. Center for Democracy and Technology. https://cdt.org/insights/takedown-collaboration-by-private-companies-creates-troubling-precedent/.
Llansó, E. (2020a). No amount of “AI” in content moderation will solve filtering’s prior-restraint problem. Big Data & Society, 7(1). https://doi.org/10.1177/2053951720920686.
Llansó, E. (2020b, April 22). COVID-19 Content Moderation Research Letter—In English, Spanish, & Arabic. Center for Democracy and Technology. https://cdt.org/insights/covid-19-content-moderation-research-letter/.
Llansó, E. (2020c, August 21). Content Moderation Knowledge Sharing Shouldn’t Be A Backdoor To Cross-Platform Censorship. Techdirt. https://www.techdirt.com/articles/20200820/08564545152/content-moderation-knowledge-sharing-shouldnt-be-backdoor-to-cross-platform-censorship.shtml.
Long, J., Shelhamer, E., & Darrell, T. (2015). Fully Convolutional Networks for Semantic Segmentation. ArXiv:1411.4038 [Cs]. http://arxiv.org/abs/1411.4038.
Lu, J., Sibai, H., Fabry, E., & Forsyth, D. (2017). Standard detectors aren’t (currently) fooled by physical adversarial stop signs. ArXiv:1710.03337 [Cs]. http://arxiv.org/abs/1710.03337.
Lykousas, N., Gómez, V., & Patsakis, C. (2018). Adult content in Social Live Streaming Services: Characterizing deviant users and relationships. ArXiv:1806.10577v1 [Cs.SI]. https://arxiv.org/abs/1806.10577v1.
Lyon, J. (2018, September 14). Google’s Next Generation Music Recognition. Google AI Blog. http://ai.googleblog.com/2018/09/googles-next-generation-music.html.
Lyons, K. (2020, September 20). Twitter is looking into why its photo preview appears to favor white faces over Black faces. The Verge. https://www.theverge.com/2020/9/20/21447998/twitter-photo-preview-white-black-faces.
Mack, D. (2018, April 17). This PSA About Fake News From Barack Obama Is Not What It Appears. BuzzFeed News. https://www.buzzfeednews.com/article/davidmack/obama-fake-news-jordan-peele-psa-video-buzzfeed.
Martínez, S., Gérard, S., & Cabot, J. (2018). Robust Hashing for Models. Proceedings of the 21th ACM/IEEE International Conference on Model Driven Engineering Languages and Systems, 312–322. https://doi.org/10.1145/3239372.3239405.
Proceedings of the 21th ACM/IEEE International Conference on Model Driven Engineering Languages and Systems, 312–322. https://doi.(cid:9) org/10.1145/3239372.3239405. Matsakis, L., & Martineau, P. (2020, March 18). Coronavirus Disrupts Social Media’s First Line of Defense. Wired. https://www.wired.com/story/coronavirus-social-media-automated-content-moderation/. McBride, S. (2020, August 14). Introducing The Third Great Computing Superpower. Forbes. https://www.(cid:9) forbes.com/sites/stephenmcbride1/2020/08/14/introducing-the-third-great-computing-superpower/. Do You See What I See? Capabilities and Limits of Automated Multimedia Content Analysis CDT Research References 59 Microsoft. (n.d.). Intelligible, Interpretable, and Transparent Machine Learning—Microsoft Research. Retrieved March 21, 2021, from https://www.microsoft.com/en-us/research/project/intelligible-interpretable-and- transparent-machine-learning/. Microsoft Azure. (n.d.). Azure Content Moderator—Content Filtering Software. Retrieved April 23, 2021, from https://azure.microsoft.com/en-us/services/cognitive-services/content-moderator/. Mittal, A. (2019, June 17). Instance segmentation using Mask R-CNN. Medium. https://aditi-mittal.medium.(cid:9) com/instance-segmentation-using-mask-r-cnn-7f77bdd46abd. Montserrat, D. M., Hao, H., Yarlagadda, S. K., Baireddy, S., Shao, R., Horváth, J., Bartusiak, E., Yang, J., Güera, D., Zhu, F., & Delp, E. J. (2020). Deepfakes Detection with Automatic Face Weighting. ArXiv:2004.12027 [Cs, Eess]. http://arxiv.org/abs/2004.12027. Mu, N., & Gilmer, J. (2019). Mnist-c: A robustness benchmark for computer vision. ArXiv Preprint ArXiv:1906.02337. https://arxiv.org/pdf/1906.02337. Nadeem, M. S., Franqueira, V. N. L., & Zhai, X. (2019). Privacy verification of photoDNA based on machine learning. In W. Ren, L. Wang, K. K. R. Choo, & F. Xhafa (Eds.), Security and privacy for big data, cloud computing and applications (pp. 263-280.). The Institution of Engineering and Technology (IET). https://derby.openrepository.com/handle/10545/624203. Naseer, M., Khan, S. H., & Porikli, F. (2019). Indoor Scene Understanding in 2.5/3D for Autonomous Agents: A Survey. IEEE Access, 7, 1859–1887. https://doi.org/10.1109/ACCESS.2018.2886133. Nie, W., Yu, Z., Mao, L., Patel, A. B., Zhu, Y., & Anandkumar, A. (2020). BONGARD-LOGO: A New Benchmark for Human-Level Concept Learning and Reasoning. ArXiv Preprint ArXiv:2010.00763. https://arxiv.org/pdf/2010.00763. Nixon, M., & Aguado, A. (2019). Feature extraction and image processing for computer vision. Academic press. Northcutt, C. G., Athalye, A., & Mueller, J. (2021). Pervasive Label Errors in Test Sets Destabilize Machine Learning Benchmarks. ArXiv:2103.14749 [Cs, Stat]. http://arxiv.org/abs/2103.14749. Pavllo, D., Feichtenhofer, C., Grangier, D., & Auli, M. (2019). 3D human pose estimation in video with temporal convolutions and semi-supervised training. ArXiv:1811.11742 [Cs]. http://arxiv.org/abs/1811.11742. Peixoto, B., Lavi, B., Martin, J. P. P., Avila, S., Dias, Z., & Rocha, A. (2019). Toward Subjective Violence Detection in Videos. ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 8276–8280. https://doi.org/10.1109/ICASSP.2019.8682833. Pereira, M., Dodhia, R., & Brown, R. (2020). Metadata-based detection of child sexual abuse material. ArXiv Preprint ArXiv:2010.02387. https://arxiv.org/pdf/2010.02387.pdf. Perez, M., Kot, A. C., & Rocha, A. (2019). Detection of real-world fights in surveillance videos. 
ICASSP 2019- 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2662–2666.(cid:9) Petetin, Y., Laroche, C., & Mayoue, A. (2015). Deep neural networks for audio scene recognition. 2015 23rd European Signal Processing Conference (EUSIPCO), 125–129. https://doi.org/10.1109/(cid:9) EUSIPCO.2015.7362358. Pham, L. (2019, June 12). GIFCT Webinar Reveals Little About Use of Hashing Database. Counter Extremism Project. https://www.counterextremism.com/blog/gifct-webinar-reveals-little-about-use-hashing-database. Phillips, P. J., Hahn, A. C., Fontana, P. C., Broniatowski, D. A., & Przybocki, M. A. (2020). Four Principles of Explainable Artificial Intelligence (Draft). NIST Interagency/Internal Report (NISTIR) - 8312-Draft. https://www.nist.gov/publications/four-principles-explainable-artificial-intelligence-draft. Powles, J. (2018, December 7). The Seductive Diversion of ‘Solving’ Bias in Artificial Intelligence. https://onezero.(cid:9) medium.com/the-seductive-diversion-of-solving-bias-in-artificial-intelligence-890df5e5ef53. Do You See What I See? Capabilities and Limits of Automated Multimedia Content Analysis 60 Prabhu, V. U., & Birhane, A. (2020). Large image datasets: A pyrrhic win for computer vision? ArXiv:2006.16923 [Cs, Stat]. http://arxiv.org/abs/2006.16923. Prost, F., Qian, H., Chen, Q., Chi, E. H., Chen, J., & Beutel, A. (2019). Toward a better trade-off between performance and fairness with kernel-based distribution matching. ArXiv:1910.11779 [Cs, Stat]. http://(cid:9) arxiv.org/abs/1910.11779. Radsch, C. (2020, September 30). GIFCT: Possibly the Most Important Acronym You’ve Never Heard Of. Just Security. https://www.justsecurity.org/72603/gifct-possibly-the-most-important-acronym-youve-never- heard-of/. Ray, T. (2020, February 10). AI on steroids: Much bigger neural nets to come with new hardware, say Bengio, Hinton, and LeCun. ZDNet. https://www.zdnet.com/article/ai-on-steroids-much-bigger-neural-nets-to-come- with-new-hardware-say-bengio-hinton-lecun/. Redmon, J., & Farhadi, A. (2018). YOLOv3: An Incremental Improvement. ArXiv:1804.02767 [Cs]. http://arxiv.(cid:9) org/abs/1804.02767. Ren, S., He, K., Girshick, R., & Sun, J. (2016). Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. ArXiv:1506.01497 [Cs]. http://arxiv.org/abs/1506.01497. Ringer, C., & Nicolaou, M. A. (2018). Deep Unsupervised Multi-View Detection of Video Game Stream Highlights. ArXiv:1807.09715v1 [Cs.CV]. https://arxiv.org/abs/1807.09715v1. Rodehorst, M. (2019, February 1). Why Alexa won’t wake up when she hears her name in Amazon’s Super Bowl ad. Amazon Science. https://www.amazon.science/blog/why-alexa-wont-wake-up-when-she-hears-her- name-in-amazons-super-bowl-ad. Romano, A. (2018, January 31). Why Reddit’s face-swapping celebrity porn craze is a harbinger of dystopia. Vox. https://www.vox.com/2018/1/31/16932264/reddit-celebrity-porn-face-swapping-dystopia. Samek, W. (2020, February 12). Explainable AI - Methods, Applications & Recent Developments. https://www.(cid:9) youtube.com/watch?v=AFC8yWzypss. Samudzi, Z. (2019, February 11). Bots Are Terrible at Recognizing Black Faces. Let’s Keep it That Way. https://(cid:9) www.thedailybeast.com/bots-are-terrible-at-recognizing-black-faces-lets-keep-it-that-way. Sanchez, D. (2017, January 12). Will Google and YouTube Be Forced To Pull Content ID? Digital Music News. https://www.digitalmusicnews.com/2017/01/12/google-youtube-audible-magic-content-id/. Sankaranarayanan, S., Balaji, Y., Jain, A., Lim, S. 
N., & Chellappa, R. (2018). Learning from synthetic data: Addressing domain shift for semantic segmentation. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 3752–3761.(cid:9) Saska, C., DiValerio, M., & Molter, M. (2019, December 14). Building an Audio Classifier. https://medium.(cid:9) com/@anonyomous.ut.grad.student/building-an-audio-classifier-f7c4603aa989. Scott, M., & Kayali, L. (2020, October 21). What happened when humans stopped managing social media content. POLITICO. https://www.politico.eu/article/facebook-content-moderation-automation/. Shankar, S., Halpern, Y., Breck, E., Atwood, J., Wilson, J., & Sculley, D. (2017). No Classification without Representation: Assessing Geodiversity Issues in Open Data Sets for the Developing World. ArXiv:1711.08536 [Stat]. http://arxiv.org/abs/1711.08536. Singh, S. (2019). Everything in Moderation- An Analysis of How Internet Platforms Are Using Artificial Intelligence to Moderate User-Generated Content. New America. http://newamerica.org/oti/reports/(cid:9) everything-moderation-analysis-how-internet-platforms-are-using-artificial-intelligence-moderate-user- generated-content/. Do You See What I See? Capabilities and Limits of Automated Multimedia Content Analysis CDT Research References 61 Snow, J. (2018, July 26). Amazon’s Face Recognition Falsely Matched 28 Members of Congress With Mugshots. American Civil Liberties Union. https://www.aclu.org/blog/privacy-technology/surveillance- technologies/amazons-face-recognition-falsely-matched-28. Solomon, L. (2015). Fair Users or Content Abusers: The Automatic Flagging of Non-Infringing Videos by Content ID on Youtube. Hofstra Law Review, 44, 237. Souza, T., Lima, J. P., Teichrieb, V., Nascimento, C., da Silva, F. Q. B., Santos, A. L. M., & Pinho, H. (2018). Generating an Album with the Best Media Using Computer Vision. In A. Marcus & W. Wang (Eds.), Design, User Experience, and Usability: Designing Interactions (pp. 338–352). Springer International Publishing. https://doi.org/10.1007/978-3-319-91803-7_25. Spoerri, T. (2019). On upload-filters and other competitive advantages for Big Tech Companies under Article 17 of the directive on copyright in the digital single market. J. Intell. Prop. Info. Tech. & Elec. Com. L., 10, 173. Steed, R., & Caliskan, A. (2021). Image Representations Learned With Unsupervised Pre-Training Contain Human-like Biases. Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 701–713. https://doi.org/10.1145/3442188.3445932. Stock, J., Dolan, A., & Cavey, T. (2020). Strategies for Robust Image Classification. ArXiv:2004.03452v2 [Cs.CV]. http://arxiv.org/abs/2004.03452. Takahashi, N., Gygli, M., Pfister, B., & Van Gool, L. (2016). Deep Convolutional Neural Networks and Data Augmentation for Acoustic Event Detection. ArXiv:1604.07160 [Cs]. http://arxiv.org/abs/1604.07160. Taori, R., Dave, A., Shankar, V., Carlini, N., Recht, B., & Schmidt, L. (2020). Measuring Robustness to Natural Distribution Shifts in Image Classification. ArXiv:2007.00644 [Cs, Stat]. http://arxiv.org/(cid:9) abs/2007.00644. Tech Against Terrorism. (2020, November 11). The Terrorist Content Analytics Platform and Transparency by Design. VOX - Pol. https://www.voxpol.eu/the-terrorist-content-analytics-platform-and-transparency-by- design/. Thys, S., Van Ranst, W., & Goedemé, T. (2019). Fooling automated surveillance cameras: Adversarial patches to attack person detection. ArXiv:1904.08653 [Cs]. http://arxiv.org/abs/1904.08653. 
Todorovic, N., & Chaudhuri, A. (2018, September 3). Using AI to help organizations detect and report child sexual abuse material online. Google. https://blog.google/around-the-globe/google-europe/using-ai-help- organizations-detect-and-report-child-sexual-abuse-material-online/. Trendacosta, K. (2020). Unfiltered: How YouTube’s Content ID Discourages Fair Use and Dictates What We See Online. Electronic Frontier Foundation. https://www.eff.org/document/unfiltered-how-youtubes- content-id-discourages-fair-use-and-dictates-what-we-see-online. Vincent, J. (2020, October 5). Nvidia says its AI can fix some of the biggest problems in video calls—The Verge. https://www.theverge.com/2020/10/5/21502003/nvidia-ai-videoconferencing-maxine-platform-face- gaze-alignment-gans-compression-resolution. Wang, C., Pino, J., & Gu, J. (2020). Improving Cross-Lingual Transfer Learning for End-to-End Speech Recognition with Speech Translation. ArXiv:2006.05474 [Cs, Eess]. http://arxiv.org/abs/2006.05474. Wang, Q., Gao, J., Lin, W., & Yuan, Y. (2019). Learning from synthetic data for crowd counting in the wild. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 8198–8207. http://(cid:9) openaccess.thecvf.com/content_CVPR_2019/papers/Wang_Learning_From_Synthetic_Data_for_ Crowd_Counting_in_the_Wild_CVPR_2019_paper.pdf. Do You See What I See? Capabilities and Limits of Automated Multimedia Content Analysis 62 Weber, E., Marzo, N., Papadopoulos, D. P., Biswas, A., Lapedriza, A., Ofli, F., Imran, M., & Torralba, A. (2020). Detecting natural disasters, damage, and incidents in the wild. ArXiv:2008.09188 [Cs]. http://arxiv.org/(cid:9) abs/2008.09188. Welcome to Echoprint. (n.d.). Retrieved April 23, 2021, from https://echoprint.tumblr.com/how. West, S. M., Whittaker, M., & Crawford, K. (2019). DISCRIMINATING SYSTEMS - Gender, Race, and Power in AI (p. 33). AI Now Institute. https://ainowinstitute.org/discriminatingsystems.pdf. Wiggers, K. (2018, December 2). Google’s Inclusive Images Competition spurs development of less biased image classification AI. VentureBeat. https://venturebeat.com/2018/12/02/googles-inclusive-images- competition-spurs-development-of-less-biased-image-classification-ai/. Won, D., Steinert-Threlkeld, Z. C., & Joo, J. (2017). Protest Activity Detection and Perceived Violence Estimation from Social Media Images. Proceedings of the 25th ACM International Conference on Multimedia, 786–794. https://doi.org/10.1145/3123266.3123282. Wong, Q. (2020, November 19). Facebook’s AI is flagging more hate speech before you report it. CNET. https://(cid:9) www.cnet.com/news/facebooks-ai-is-flagging-more-hate-speech-before-you-report-it/. Wu, P., Liu, J., Shi, Y., Sun, Y., Shao, F., Wu, Z., & Yang, Z. (2020). Not only Look, but also Listen: Learning Multimodal Violence Detection under Weak Supervision. ArXiv:2007.04687v2 [Cs.CV]. https://arxiv.(cid:9) org/abs/2007.04687v2. Yuille, A. L., & Liu, C. (2019, February 9). Limitations of Deep Learning for Vision, and How We Might Fix Them. The Gradient. https://thegradient.pub/the-limitations-of-visual-deep-learning-and-how-we- might-fix-them/. Zernay, R., & Hagemann, R. (2017). ACES in the Hole? Automated Copyright Enforcement Systems and the Future of Copyright Law. The Niskanen Center. Zhao, H., Shi, J., Qi, X., Wang, X., & Jia, J. (2017). Pyramid Scene Parsing Network. ArXiv:1612.01105 [Cs]. http://arxiv.org/abs/1612.01105. Do You See What I See? 
Capabilities and Limits of Automated Multimedia Content Analysis CDT Research cdt.org II cdt.org/contact Center for Democracy & Technology 1401 K Street NW, Suite 200 Washington, D.C. 20005 202-637-9800 @CenDemTech ccJ.t CENTERFOR DEMOCRACY !II & TECHNOLOGY
ai_researcher
1
Contemporary_Human_Resource_Management_Practices_and_Diversity_in_Changing_Business_Environments.pdf
Human Resource Development and the Internet of Things

Robert M. Yawson*
School of Business, Quinnipiac University, 275 Mount Carmel Ave, Hamden, CT 06518
ORCID iD: https://orcid.org/0000-0001-6215-434
[email protected]

Daniel Woldeab
College of Individualized Studies, Metropolitan State University, St. Paul, MN
[email protected]

Emmanuel Osafo
Youth and Families Program Unit, Washington State University Extension, Washington State University, WA
ORCID iD: https://orcid.org/0000-0002-5385-3635
[email protected]

*Corresponding Author

Copyright © 2018 Robert M. Yawson, Emmanuel Osafo & Daniel Woldeab. Proceedings of the 25th Annual Academy of Human Resource Development International Research Conference in the Americas, Richmond, VA, USA, February 14–17, 2018. DOI: 10.31124/advance.9638417.v1

Abstract

The Internet of Things (IoT) is affecting national innovation ecosystems, the approach of organizations to innovation, and how they create and capture value in everyday business activities. The IoT is disruptive, and it will change the manner in which human resources are developed and managed, calling for a new and adaptive human resource development approach. The classical Internet communication form is human-human. The prospect of the IoT is that every object will have a unique means of identification and can be addressed, so that every object can be connected. Communication forms will expand from human-human alone to human-human, human-thing, and thing-thing. This will bring a new challenge to how Human Resource Development (HRD) is practiced. This paper provides an overview of the Internet of Things and conceptualizes the role of HRD in the age of the Internet of Things.

Keywords: Analytics, Internet of Things, Human Resource, Workforce, Disruptive Technology

Human Resource Development and the Internet of Things

Since the launch of the World Wide Web in the early 1990s, the Internet has impacted the way we live and work at the "speed of light." Society is facing yet another wave of Internet technologies that will have a big impact on the way we live and work. This phenomenon, known popularly as the Internet of Things (IoT), presents a situation where data generation is the order of the day: every human interaction, whether with living or non-living things, generates some form of data, making the workplace a data-driven environment. The changing pace of technology presents a challenge to predictions of human interaction and how work is conducted. The IoT has the potential to make a fundamental shift in the way we interact with our surroundings. "It is suggested that we can see the Internet as enabling the human social environment, as well as an ever-increasing array of Internet-enabled devices, to function as literal body parts" (Smart, 2017, p. 360). Human Resource Development (HRD) is in a distinctive position to prepare the workforce for this new way of working and to utilize the big data generated by the Internet of Things. The IoT has the potential to fundamentally shift the way we interact with our surroundings (Manyika et al., 2015).
The capability to monitor and manage objects in the physical world electronically makes it possible to bring data-driven decision-making to new realms of human resource development—to augment the performance of systems and processes, save time for people and businesses, and improve quality of life (Manyika et al., 2015).

Human Resource Development (HRD) involves developing people with a focus on improving knowledge, skills, and abilities (KSAs) to guide organizations, create a long-term vision, develop strategy, staff the organization, communicate, motivate people toward the vision, and support improved productivity. HRD is targeted across levels of abstraction: individuals, teams, organizations, communities, and fields of policy and practice (Yawson, 2017). All of these levels and foci of HRD are being impacted by the emergence of the IoT. As Bennett (2014) aptly described:

The field of HRD is at a historic point in which we can demonstrate value and relevance to the modern, technology-enabled organization. Many in the field of HRD have sought a balance between the needs of the individual and the needs of the collective for learning and performance. Both management and HRD needs are often embedded in the same virtual systems, but HRD has been late to incorporate technology strategically in practice and in academic preparation programs. (p. 275)

It is no longer business as usual for HRD professionals. As a result of the emergence of the IoT, the world is experiencing significant, largely economic and sociotechnical, changes. These changes are more than jargon, cliché, and hyperbole; they are effecting major transformations (Yawson & Greiman, 2014). These transformations will impact how human resources are developed, and we need to be able to forecast their effects (Yawson & Greiman, 2017). To produce such forecasts, HRD needs to become more predictive and adaptable: to develop the ability to understand how human capital systems, organizations, and national innovation ecosystems will behave in the future that the IoT brings. The classical Internet has radically altered the way we access information and has profoundly transformed the way we think, act, and remember (Smart, Heersmink, & Clowes, 2017). With the IoT, every aspect of our cognitive and epistemic endeavors, whether individual or collective, will be undertaken with some involvement of the Internet (Smart et al., 2017). Relative to this influence, it makes sense to see the IoT as fully becoming an important part of the "cognitively-potent extra-organismic environment in which our biological brains are now situated" (Smart et al., 2017, p. 255). The IoT can, therefore, be seen as a form of cognitive ecology that shapes our thinking and other socially transmitted ideas. Given the emergence, momentum, and prospects of the IoT, the objective of this research is to discuss the impact that advances in the IoT will have on HRD research and practice, and the role HRD should play in addressing the impact of the IoT on human resources in organizations.

Research Question

There is a need for an optimal balance of modern core skills, such as agility, collaboration, cognitive flexibility, creativity, and organizational development.
It all comes down to educating and preparing human resources across levels of abstraction of individuals, teams, organizations, communities, and fields of policy and practice to absorb the big data that come from the IoT. Given this need, arising from the momentum and emergence of the IoT, what role is there for HRD as a field of study and practice to ensure success and relevance in this new era?

Research Methodology

To achieve the objective of this study, an integrated review of the literature was performed. The reviewed literature included journal articles, conference papers, edited volumes, and reports from several respected think tanks. Given that the IoT is still emerging, a wide range of sources, including the gray literature, was consulted for a comprehensive review of the topic. Given the nascency and fluidity of the IoT as a phenomenon, reviewing only peer-reviewed scholarly articles that make a specific theoretical contribution to the IoT would have yielded a very limited review. Despite the significant work done by members of the Virtual HRD Special Interest Group of the Academy of Human Resource Development, most of that work is nascent and restricted to works published in Advances in Developing Human Resources (Bennett, 2010; McWhorter, 2010; Nafukho, Graham, & Muyia, 2010).

Relevant literature was identified by querying scholarly databases for the terms: Internet of Things, IoT, Web of Things, Internet of Everything, Internet of Objects, Embedded Intelligence, Connected Devices and Technology Omnipotent, Cyber-Physical Systems, Pervasive Computing, Ubiquitous Computing, Machine-to-Machine, Human-Computer Interaction, and Ambient Intelligence. Each of these terms was searched in combination with the term Human Resource (see the illustrative sketch below). Returned results were downloaded, uploaded to the Mendeley citation software, and further screened using the following terms: Human Capacity and Human Resource. The scholarly databases queried included: Web of Science, ABI/INFORM Global, Academic Search Premier, ACM Digital Library, Applied Science & Technology Full Text (EBSCO), IEEE Xplore, ScienceDirect, and Google Scholar. The resulting 47 pertinent articles were reviewed for the study.

The Internet of Things

The concept of combining computers, sensors, and networks to monitor and control devices is not new; it has been around for several decades. However, the recent confluence of key technologies such as microelectronics, nanotechnology, biotechnology, cognitive sciences, synthetic biology, and information and communication technologies (ICT), together with market trends, is ushering in a new reality for the IoT. The IoT promises to usher in a revolutionary, fully interconnected "smart" world, with relationships between objects and their environment, and between objects and people, becoming more tightly intertwined (Rose, Eldridge, & Chapin, 2015). The vista of the IoT as a pervasive array of devices bound to the Internet will fundamentally change how people think about what it means to be "online" (Rose et al., 2015). Technically, the IoT is not the result of a single novel technology; instead, several complementary technical developments and innovations provide systemic capabilities that help to bridge the gap between the virtual and physical world (Mattern & Floerkemeier, 2010).
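Returning briefly to the methodology described above: the following is a minimal, hypothetical sketch of how the reported query-combination and screening steps could be scripted. It is not the authors' actual tooling; the helper names and the record format are illustrative assumptions, and only the search terms themselves come from the methodology.

```python
from itertools import product

# Search terms reported in the methodology section.
IOT_TERMS = [
    "Internet of Things", "IoT", "Web of Things", "Internet of Everything",
    "Internet of Objects", "Embedded Intelligence", "Connected Devices",
    "Cyber-Physical Systems", "Pervasive Computing", "Ubiquitous Computing",
    "Machine-to-Machine", "Human-Computer Interaction", "Ambient Intelligence",
]
COMBINED_WITH = ["Human Resource"]                       # each IoT term is paired with this term
SCREENING_TERMS = ["Human Capacity", "Human Resource"]   # second-pass filter

def build_queries():
    """Return one boolean query string per (IoT term, pairing term) combination."""
    return [f'"{iot}" AND "{hr}"' for iot, hr in product(IOT_TERMS, COMBINED_WITH)]

def screen(records):
    """Keep only records whose title or abstract mentions a screening term."""
    kept = []
    for rec in records:  # rec is assumed to be a dict like {"title": ..., "abstract": ...}
        text = (rec.get("title", "") + " " + rec.get("abstract", "")).lower()
        if any(term.lower() in text for term in SCREENING_TERMS):
            kept.append(rec)
    return kept

if __name__ == "__main__":
    for q in build_queries():
        print(q)  # each query would be issued against Web of Science, IEEE Xplore, etc.
```

Each printed query would be run manually or via a database's own search interface; the `screen` pass then mirrors the second-stage filtering described above.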
The Episteme and Taxonomy of the Internet of Things

Kevin Ashton is credited with coining the term "Internet of Things" in a 1999 presentation on supply chain management (Ashton, 2009; Gubbi, Buyya, Marusic, & Palaniswami, 2013). Since then, the Internet of Things (IoT) has emerged as a new paradigm aimed at providing solutions for the integration, communication, data consumption, and analysis of smart devices (Khodadadi, Dastjerdi, & Buyya, 2017). While the term "Internet of Things" is relatively new, the concept of joining computers and networks to monitor and control devices has been around for decades (Rose et al., 2015). The story of the IoT can be traced back to the 19th century, when the electromagnetic telegraph was created by Baron Schilling in Russia (Borisovai, 2009). Figure 1 is an illustrative flow diagram depicting the history and evolution of the IoT. The evolution saw major landmarks, including innovations, significant events, and thoughts and predictions from pioneers and thinkers. In an interview with Colliers Magazine in 1926, Nikola Tesla stated:

When wireless is perfectly applied, the whole earth will be converted into a huge brain, which in fact it is, all things being particles of a real and rhythmic whole, and the instruments through which we shall be able to do this will be amazingly simple compared with our present telephone. A man will be able to carry one in his vest pocket. (Kennedy, 1926, Webpage)

Figure 1: The History and Evolution of the Internet of Things. [Timeline figure, 1832–2014: from Baron Schilling's electromagnetic telegraph (1832) and Samuel Morse's first public telegraph message (1844), through Tesla's 1926 wireless prediction, Steinbuch's 1966 forecast of embedded computers, ARPANET (1969), Tim Berners-Lee's World Wide Web, Ashton's coining of the term "Internet of Things" (1999), large-scale RFID deployment by the US DoD and Wal-Mart (2002–2004), the IPv6 public launch, to large IoT educational and marketing initiatives by Cisco, IBM, and Ericsson (2014).] Source: © Authors

In his 1964 book, Understanding Media, Marshall McLuhan stated that "... by means of electric media, we set up a dynamic by which all previous technologies - including cities - will be translated into information systems" (McLuhan, 1964). In 1966, Karl Steinbuch, a German computer science pioneer, also predicted that "In a few decades time, computers will be interwoven into almost every industrial product" (Mattern & Floerkemeier, 2010, p. 242).

The World Wide Web (Web 1.0), a network of linked HTML documents that resided on top of the Internet architecture, characterized the early days of the classical Internet, the Internet as we know it today. This network of static HTML pages progressively evolved into Web 2.0, a term describing the use of World Wide Web technology and web design that enabled creativity, secure information sharing, collaboration, and functionality on the web. With Web 2.0, two-way communication became ubiquitous and allowed user participation, collaboration, and interaction (Whitmore, Agarwal, & Da Xu, 2015). Web 2.0 technologies include social networking services, electronic messaging services, blogs, and wikis—technologies that have become indispensable to modern social interaction as well as to global business. While Web 2.0 currently dominates the Internet, the Semantic Web, or Web 3.0, has been emerging: a technology that makes marked-up web content understandable by machines, allowing machines and search engines to behave more intelligently (Whitmore et al., 2015). Marking up web content in standardized formats would allow machines to process and share data on their own, without the need for human mediation (Whitmore et al., 2015). Alongside developments in Internet technologies, technologies in sensor networks, Near
Alan Turing in his article Computing Machinery and Intelligence in the Oxford Mind Journal Karl Steinbuch predicted that "In a few decades time, computers will be interwoven into almost every industrial product" Beginnings of TCP/IP Tim Berners-Lee proposes the World Wide Web The first web page was created by Tim Berners-Lee Steve Mann creates WearCam Paul Saffo's prescient article "Sensors: The Next Wave of Infotech Innovation" Scott Brave, Andrew Dahley, and Hiroshi Ishii developed inTouch at MIT The term Internet of Things is coined by Kevin Ashton The Ambient Orb created by David Rose and others in a spin-off from the MIT Media Lab is released RFID is deployed on a massive scale by the US DoD in their Savi program and Wal-Mart in the commercial world 2002 - 2004 The UN’s ITU published its first report on IoT Recognition by the EU and the First European IoT conference is held The IPSO Alliance launched to promote the use of IP in networks of “Smart Objects” and to enable the IoT US NIC listed the IoT as one of the 6 “Disruptive Civil Technologies” with potential impacts on US interests out to 2025 IPv6 public launch - The new protocol allows for approximately 340 undecillion addresses Creation of the IoT-GSI - unified approach for development of technical standards enabling the IoT on a global scale 2005 2008 2010 2011 The FCC voted 5-0 to approve opening the use of the ‘white space’ spectrum The IoT was born according to Cisco’s Business Solutions Group Chinese Premier Wen Jiabao calls the IoT a key industry for China and calls for major investments in IoT The term was added to the 2011 annual Gartner Hype Cycle IoT hits the Hype Cycle's "Peak of Inflated Expectations" 2014 Cisco, IBM, Ericsson produce large educational and marketing initiatives on IoT Source: © Authors DOI: 10.31124/advance.9638417.v1 HRD AND THE INTERNET OF THINGS 9 In his 1964 book, Understanding Media, Marshall McLuhan stated that "....by means of electric media, we set up a dynamic by which all previous technologies - including cities - will be translated into information systems" (McLuhan, 1964). In 1966, Karl Steinbuch, a German computer science pioneer also predicted that "In a few decades time, computers will be interwoven into almost every industrial product" (Mattern & Floerkemeier, 2010, p. 242) The World Wide Web (Web 1.0) - a network of linked HTML documents that resided on top of the Internet architecture - characterized the early days of the Classical Internet – the Internet as we know it today. This network of static HTML pages progressively evolved into Web 2.0, a term describing the use of World Wide Web technology and web design that enabled creativity, secure information sharing, collaboration and functionality of the web. With Web 2.0, two-way communication became ubiquitous and allowed user participation, collaboration, and interaction (Whitmore, Agarwal, & Da Xu, 2015). Web 2.0 technologies include social networking services, electronic messaging services, blogs, and wikis— technologies that have become indispensable to modern social interaction as well as for global business. While Web 2.0 currently dominates the Internet, there has been the emergence of Semantic Web or Web 3.0. A technology that makes markup web content understandable by machines, allowing machines and search engines to behave more intelligently (Whitmore et al., 2015). 
Marking up web content in standardized formats would allow machines to process and share data on their own, without the need for human mediation (Whitmore et al., 2015). Alongside developments in the Internet technologies, technologies in Sensor Networks, Near DOI: 10.31124/advance.9638417.v1 HRD AND THE INTERNET OF THINGS 10 Field Communication using RFID tags, synthetic biology, biotechnology, cognitive sciences, and nanotechnology have also been evolving. Convergence of Web 2.0, Web 3.0, and these technologies, has led to a paradigm being referred to as the Internet of Things (IoT). IoT is maturing and continues to be the latest, a most hyped concept in the IT world. It was added to the 2011 annual Gartner Hype Cycle that tracks technology life-cycles from "technology trigger" to "plateau of productivity,” and it hit the Hype Cycle's "Peak of Inflated Expectations" in 2014. As of August 2017, the term IoT was still at the “Peak of Inflated Expectations". Gartner’s Information Technology Hype Cycle (Gubbi et al., 2013) is popularly known for representing emergence, adoption, maturity, and impact on applications of specific technologies (Ferguson, 2002). It was forecasted in 2012 that IoT would take between 5-10 years for market adoption and every indication now is evident it was predicted right. Riggins and Wamba, (2015) grouped the level of IoT adoption through Big Data analytics usage to the following categories: 1) Society level where IoT mainly influences and improves government services by reducing cost and increasing government transparency and accountability, 2) Industry level in which manufacturing, emergency services, retailing, and education have been studied as examples, 3) Organizational level in which IoT can bring the same type of benefits as those mentioned in society level, 4) Individual-level where daily life improvements and individual efficiency and productivity growth are marked as IoT benefits. DOI: 10.31124/advance.9638417.v1 HRD AND THE INTERNET OF THINGS 11 The IoT has been referred to with different terminologies, but the objective of IoT is same in the broad sense (Madakam, Ramaswamy, & Tripathi, 2015). The taxonomical labels of IoT include Internet of Everything, Web of Things, Internet of Objects, Embedded Intelligence, Connected Devices and Technology Omnipotent, Omniscient and Omnipresent. In addition to these taxonomical labels, IoT has also been variously described as follows (Madakam et al., 2015): • Cyber-Physical Systems: Integrations of computation and physical processes, in which bringing the real and virtual worlds together. • Pervasive Computing: A computer environment in which virtually every object has processing power with wireless or wired connections to a global network • Ubiquitous Computing or Calm technology: Where technology becomes virtually invisible in our lives • Machine-to-Machine Interaction: Means no human intervention while devices are communicating end-to-end • Human-Computer Interaction: Involves the study, planning, and design of interaction between people and computers • Ambient Intelligence: It is a developing technology that will increasingly make our everyday environment sensitive and responsive. There are varying definitions of IoT, and there is not a standard one agreed to by all. However, there is a common understanding of what it is and its prospects. 
“What all of the definitions have in common is the idea that the first version of the Internet was about data created by people, while the next version is about data created by things” (Madakam, DOI: 10.31124/advance.9638417.v1 HRD AND THE INTERNET OF THINGS 12 Ramaswamy, & Tripathi, 2015, p.165). The thing in IoT can be a person with a heart monitor implant, a farm animal with a biochip transponder, a crop with a nanochip for precision agriculture, an automobile that has built-in sensors to alert the driver when the tire pressure is low—or any other natural or man-made object that can be assigned an IP address, and provided with the ability to transfer data over a network (Shin, 2014). Fundamentally, the IOT can be described as a global network which facilitates the communication between human-to-human, human-to-things, and things-to-things, which is anything in the world by providing a unique identity to every object (Aggarwal & Das, 2012). Madakam et al., (2015) define IoT as “An open and comprehensive network of intelligent objects that can auto-organize, share information, data, and resources, react and act in the face of situations and changes in the environment” (p. 165) Figure 2. The Quantum of Internet of things Source: Mobile Analytics/Big Data by Joel Comm. Creative Commons License DOI: 10.31124/advance.9638417.v1 HRD AND THE INTERNET OF THINGS 13 The Internet of Things is an emerging technological concept of sociotechnical and economic significance. Applications of IoT is a fast-growing segment of business communities worldwide, empowering industries globally by transforming their operation and giving them faster and more efficient ways of doing business. Consumer products, medical devices, pharmaceutical products, agrifood products, durable goods, cars and trucks, industrial and utility components, sensors, and other everyday objects are being linked with Internet connectivity and powerful data analytic capabilities that promise to transform the way we work, live, and play (Rose et al., 2015). In addition to interconnectivity and interoperability, the quantum of Internet of things is also very significant. Joel Comm (2017) has predicted how quantum entanglement will impact how everyday business is conducted. Figure 3 is an illustration of the quantum IoT. The Quantum Internet of Things enables new ways of monitoring and managing all the “moving parts” that make up a business. In their report “Unlocking the Potential of the Internet of Things’’, the McKinsey Global Institute describes the broad range of potential applications regarding “settings” where IoT is expected to create value for industry and users (Manyika et al., 2015). “Some of the most promising uses are in health care, infrastructure, and public-sector services—helping society tackle some of its greatest challenges” (Manyika et al., 2013, p. 51). The Internet of Things is still in early stages of adoption, but it already has a wide variety of uses, and the portfolio of applications is expanding daily. Indeed, in a world suffused with smart devices, it is not only our homes and workplaces that are changing but our way of life as figure 2 aptly depicts. However, smart objects are only the first step of an evolutionary process of IoT. There is a generational evolution from objects DOI: 10.31124/advance.9638417.v1 HRD AND THE INTERNET OF THINGS 14 with a certain degree of smartness to objects with an actual social consciousness (Atzori, Iera, & Morabito, 2014). 
In analogy with the human evolution from homo sapiens to homo agens used in economic and sociological constructs, Atzori et al. (2014) used a similar evolutionary path from a res sapiens (smart object) to what they called res agens (an acting object), “which is able to translate the awareness of causal relationships — the basis of knowledge of change and evolution of its environment — into actions” (Atzori et al., 2014, p. 98). Figure 3. Main features of the identified three categories of IoT objects Capability to communicate in human social networks Awareness of the environment Capability of building their own social network Res Sapiens Res agens Res Socialis Increased interoperability with external systems Interactivity with the surrounding environment Pseudo-social behavior with neighbors Proficiency in building added-value complex services through collaboration in the object social network Source: Adapted from Atzori et al. (2014)©Authors, 2018 Atzori et al., (2014) went further on their analogical evolution ladder toward a new type of object that can be considered as res socialis (i.e., social object). The term they described as an object that is part of and acts in a social community of objects and devices (which, in this case, is a social IoT). The features of the three categories of IoT objects identified by Atzori et al. (2014) are illustrated in Figure 3. The disruptive nature of IoT technologies and the fast-growing applications is evident in industries from manufacturing to pharmaceuticals and from health care, to our smart homes and workplaces (Atzori et al., 2014; Hoontrakul, 2018; Xu, He, & Li, 2014). Even livestock industries are using these technologies to keep track of their precious assets (Gao & Bai, 2014). DOI: 10.31124/advance.9638417.v1 HRD AND THE INTERNET OF THINGS 15 In the age of IoT, all of these widely diverse industries are gaining greater efficiency and exponential growth at rapid rates. Undeniably, these changes in the workplace will have a great impact on education in general and higher education in particular. At the very least, such developments and evolutions in the workplace demand a more educated population. While DeMillo (2011) pointed out that technological advancements, in general, have the potential to be challenging to higher education institutions, Tianbo (2012) asserted that IoT is encouraging changes in higher education and has the potential to create more intelligent systems in these institutions. However, higher education institutions not only have to think about the transformative effects of IoT’s emergence in our society, but are also obligated to rethink how best to educate the coming generation (Kortuem, Bandara, Smith, Richards, & Petre, 2013). Crucially, there is also “need for an education provision that can empower a new generation of a digital citizen who can understand both technologies that underpin the Internet of Things, as well as the societal impacts of widespread adaption of these technologies” (Kortuem et al., 2013, p. 53). Anticipated impact of IoT on the Internet and economy are staggering, with some projecting as many as 100 billion connected IoT devices and a global economic impact of more than $11 trillion by 2025 (Rose et al., 2015). This is having very significant impact on organizations and human resources, and the role HRD will play may be dramatically altered. 
Discussions The disruptive potential of IoT could drive profound changes across many dimensions— in the lives of individuals, in business, and across the global economy. The IoT is such a sweeping concept that presents a challenge about how to imagine all the possible ways in DOI: 10.31124/advance.9638417.v1 HRD AND THE INTERNET OF THINGS 16 which it will affect businesses, economies, and society (Manyika et al., 2013). However, it is undeniable that it will have a significant impact and all aspect our lives and the way human resources should be developed. Since 2010, when a special issue on Virtual HRD (VHRD) was published in the Advances in Human Resource Development, there has been an increasing stream, albeit, slowly the literature on technology-enabled HRD. However, there has not been any serious discussion or research as to the role HRD will play in the age of IoT and the impact of IoT on the practice of HRD as a profession in the mainstream HRD discussions. It is, therefore, gratifying to learn that the keynote speaker for the 2018 Academy of Human Resource Development International Research Conference is a recognized expert and thought- leader in the adult development arena and will be discussing “Re-thinking learning and HRD for the Age of Artificial Intelligence.” As, authors we share the same view as the Keynote Speaker [Pat McLagan] that “learners’ capabilities, confidence, and self-image in learning need a radical change to thrive in a world of proliferating resources, information overload, artificial intelligence, and many other paradigm-shifting forces.”(AHRD, 2018, p.1) Bennett, (2014) defines Virtual Human Resource Development (VHRD), as a “media-rich and culturally relevant webbed environment that strategically improves expertise, performance, innovation, and community-building through formal and informal learning” (p.265). While we agree with this definition, we contend that there should not be anything like VHRD. While this may sound controversial, our reason is that in the age of IoT the entirety of HRD including its connected yet disparate areas of scholarship like Strategic HRD, Critical HRD, International/Global HRD, etc., are all covered in the description of VHRD provided by Bennett, (2014). We can no longer see VHRD as a separate area of study under HRD but the new DOI: 10.31124/advance.9638417.v1 HRD AND THE INTERNET OF THINGS 17 paradigm of HRD or the evolution of HRD. Bennett and Bierema, (2010) stated that “VHRD can be viewed as a living system because of the interactivity, learning, and development that occurs through its enabling technologies.” We argue that the entirety of HRD in an IoT era should be viewed as such. HRD professionals need to identify potentially disruptive technologies, and carefully consider their potential before these technologies begin to exert their disruptive powers in the workplace and society. The impacts of IoT on HRD are considerable. The advances in IoT create a demand for new sets of skills, and as working adults assume these jobs, they need to be retrained and reskilled. Training and preparing the human capital needed to fill the high demand of high-tech jobs is going to be a considerable undertaking, which makes the implications for HRD enormous. During the last few decades, millions of individuals globally have been raised from poverty into the middle class, which means they not only need but demand access to higher education (Kortuem et al., 2017). 
Hence, in the age of IoT, Adult Education and HRD are uniquely positioned to provide the education and training needed not only by the workforce of today but also that of tomorrow, which will face increasingly high-tech and shifting demands in the workplace. We envisage the impact of IoT on HRD and the role HRD should play in maximizing the benefits and challenges of IoT under three dimensions of IoT applications to: • Inform – gathering information through sensors to inform policy and HRD decision making, research and practice. DOI: 10.31124/advance.9638417.v1 HRD AND THE INTERNET OF THINGS 18 • Automate – design and develop activities by allocating a function to a system or by supervising the fulfillment of activity through an IoT device that can generate Big Data for effective predictions for the future. • Transform – redesigning learning and development processes to moderate the distractions of the IoT. Implications for HRD Although Internet of Things provides HRD with tremendous opportunities for growth, it is not without challenges. In addition to its potential for enormous economic impact, the IoT will affect the performance of a range of organizations and individuals. As every aspect of our lives becomes ever more connected, thousands of discrete data points are created by just a handful of individuals on any given day. This provides an environment conducive to hackers and cyber criminals trying to gather sensitive employee and consumer information. When the emergence of new technology outpaces security developments, the likelihood that IoT can cause security and privacy breaches for HR practitioners is great. After all, consumer data is one of the most precious assets of any organization, and assuring the security and privacy of this data in the age of IoT is imperative. Schramm (2014)noted that IoT influence on HR is wide-reaching, from how data about workers are gathered and analyzed, to recruitment and employees’ safety. Schramm (2014) also asserted that the data results gained through IoT today would help inform and influence HR in the future. Hence, HR practitioners should carefully balance the gains and opportunities that IoT presents, with the potential security privacy issues relating to employee and other data. DOI: 10.31124/advance.9638417.v1 HRD AND THE INTERNET OF THINGS 19 As technology shapes and amplifies our culture, and therefore our lives, it is vital for HR practitioners to understand and address the implications that IoT can have in the context of different cultures. The workforce of today is global. As such, HR/HRD practitioners should also understand the implications of different symbols and meanings, and how differently they can be perceived and interpreted by various cultures. There are important implications for all stakeholders—consumers, IoT user companies, technology suppliers, policymakers, and employees. Catching up with the velocity at which the IoT affect work design and task performance can be challenging, but taking no action will result in virtual death. As much as the emergence of IoT has presented a challenge to relatively new and emerging fields like HRD it has as well created an opportunity for the ingenious to take a leadership role and announce their relevance in a new way. We perceive two main challenges that the IoT presents to HRD: 1. 
1. How to absorb the big data generated through the IoT – If existing HRD systems struggle to absorb actionable workforce analytics data to understand how history informs the future, the enormity of the big data generated by the IoT can subvert the field if immediate action is not taken. HRD can create an enabling environment for research in an IoT workplace by developing a cutting-edge database that enables continuous interaction among HRD scholars around the globe about their current research, aside from what is presented at conferences. This strategy will help HRD scholars identify others in the field whose research aligns with their thoughts and research ideas, and serve as a catalyst for collaboration and the exchange of ideas, for enriched scholarly work that makes a greater impact on society. Furthermore, data from such a database can help students of HRD and emerging scholars stay informed about ongoing trends in the field and save them from needless uncertainty about the relevance of their theses in their academic and scholarly pursuits.

2. How HRD can use the IoT to connect workforce development to the people analytics principle – The use of technology and statistics to collect and analyze data to help management make informed decisions on talent acquisition and development is becoming more challenging as virtual training and assessment become increasingly popular. Some people can fake their presence at such training and assessment sessions or cheat with technology. This calls for HRD professionals to build a strong network to help generate comprehensive talent data across a variety of locations and issues to help improve our work. Professionals can share information constantly and make recommendations to other professionals through a well-developed database and HRD collaboratories (Yawson, 2009). LinkedIn and other organizations have taken the lead, but HRD as a field can improve upon their idea by having professionals in different locations contribute to one another's work.

Conclusion

The emergence of the IoT is undisputed and, as a phenomenon, irresistible (Botterman, 2015). The IoT is disruptive, and it is driven by societal needs and economic opportunities: by demand pull and supply push. It is enabled by many different strands of technology innovation and application development in domains as varied as synthetic biology, biotechnology, cognitive sciences, and nanotechnology. Instances are already seen in almost every area; the influence of IoT devices, services, and architectures may rapidly become pervasive. These different forces certainly produce transitory conflicts of interest, gaps, and distortions for which trade-offs need to be made (Botterman, 2015). Whether we as HRD professionals recognize and respond to the IoT as a "thing in itself" will greatly influence the effectiveness of our ability to exploit and eventually resolve these tensions.

The IoT poses profound challenges to HRD as a field of study. Many stem from the ways in which it is likely to affect and even disrupt areas of either traditional HRD research and practice (performance improvement, training and development, leadership and career development, workplace learning, etc.) or new frontiers of HRD such as Knowledge Management, Critical HRD, and International/Global HRD.
For example, in the area of Critical HRD (CHRD), policy challenges arising from the IoT itself will affect social justice, gender, and other related issues in the CHRD domain. In International/Global HRD, some of the issues will be similar to the experience of other emergent technologies, especially those with the potential to transform public services and national innovation ecosystems. These include the need for suitable forms of cross-cultural understanding (when things of different geographic and cultural orientation are communicating with each other), access to skills, and fair and efficient market access. They also include organizational capital and human resource needs, "in particular for business, entrepreneurial, technological and societal knowledge available to new and existing enterprises moving into this area or building new businesses with the aid of IoT capabilities" (Botterman, 2015, p. 26). Underpinning these is the need for predictive and adaptable HRD research and practice capable of providing the right mix of certainty and flexibility.

References

Aggarwal, R., & Das, M. L. (2012). RFID security in the context of "internet of things." In Proceedings of the First International Conference on Security of Internet of Things - SecurIT '12 (pp. 51–56). New York, NY: ACM Press. https://doi.org/10.1145/2490428.2490435
AHRD. (2018). 2018 Conference Keynote Speaker Announced. 2018 AHRD International Research Conference. St. Paul, MN: Academy of Human Resource Development. Retrieved from http://www.ahrd.org/?page=2018ConfCentral
Ashton, K. (2009). That "Internet of Things" Thing. RFiD Journal, 4986.
Atzori, L., Iera, A., & Morabito, G. (2014). From "smart objects" to "social objects": The next evolutionary step of the internet of things. IEEE Communications Magazine, 52(1), 97–105. https://doi.org/10.1109/MCOM.2014.6710070
Bennett, E. E. (2010). The coming paradigm shift: Synthesis and future directions for virtual HRD. Advances in Developing Human Resources, 12(6), 728–741. https://doi.org/10.1177/1523422310394796
Bennett, E. E. (2014). Introducing new perspectives on virtual human resource development. Advances in Developing Human Resources, 16(3), 263–280. https://doi.org/10.1177/1523422314532091
Bennett, E. E., & Bierema, L. L. (2010). The ecology of virtual human resource development. Advances in Developing Human Resources, 12(6), 632–647. https://doi.org/10.1177/1523422310394789
Borisovai, N. A. (2009). Shilling's pioneering contribution to practical telegraphy. In IEEE EUROCON 2009 (pp. 1105–1109). St. Petersburg, Russia: IEEE. https://doi.org/10.1109/EURCON.2009.5167773
Comm, J. (2017). Quantum Internet of Things. Retrieved September 2, 2017, from https://twitter.com/joelcomm
DeMillo, R. A. (2011). Abelard to Apple: The fate of American colleges and universities. Cambridge, MA: MIT Press.
Ferguson, T. (2002). Have your objects call my object. Harvard Business Review, June, 1–7.
Gao, L., & Bai, X. (2014). A unified perspective on the factors influencing consumer acceptance of internet of things technology. Asia Pacific Journal of Marketing and Logistics, 26(2), 211–231.
https://doi.org/10.1108/APJML-06-2013-0061
Gubbi, J., Buyya, R., Marusic, S., & Palaniswami, M. (2013). Internet of Things (IoT): A vision, architectural elements, and future directions. Future Generation Computer Systems, 29(7), 1645–1660. https://doi.org/10.1016/j.future.2013.01.010
Hoontrakul, P. (2018). Economic transformation and business opportunities in Asia. Cham: Springer International Publishing. https://doi.org/10.1007/978-3-319-58928-2
Kennedy, J. B. (1926, January). When woman is boss: An interview with Nikola Tesla. Colliers. Retrieved from http://www.tfcbooks.com/tesla/1926-01-30.htm
Khodadadi, F., Dastjerdi, A. V., & Buyya, R. (2017). Internet of Things: An overview (A. Jamalipour, H. Nikookar, & M. Ruggieri, Eds.). Geneva, Switzerland. Retrieved from http://dblp.uni-trier.de/rec/bib/journals/corr/KhodadadiDB17
Kortuem, G., Bandara, A. K., Smith, N., Richards, M., & Petre, M. (2013). Educating the Internet-of-Things generation. Computer, 46(2), 53–61. https://doi.org/10.1109/MC.2012.390
Botterman, M. (2015). D5.2 – Policy paper on IoT future technologies: Opening towards a new reality. The European Union.
Madakam, S., Ramaswamy, R., & Tripathi, S. (2015). Internet of Things (IoT): A literature review. Journal of Computer and Communications, 3(5), 164–173. https://doi.org/10.4236/jcc.2015.35021
Manyika, J., Chui, M., Bisson, P., Woetzel, J., Dobbs, R., Bughin, J., & Aharon, D. (2015). The Internet of Things: Mapping the value beyond the hype. McKinsey Global Institute. https://doi.org/10.1007/978-3-319-05029-4_7
Manyika, J., Chui, M., Bughin, J., Dobbs, R., Bisson, P., & Marrs, A. (2013). Disruptive technologies: Advances that will transform life, business, and the global economy. New York: McKinsey Global Institute.
Mattern, F., & Floerkemeier, C. (2010). From the internet of computers to the internet of things. Lecture Notes in Computer Science, 6462 LNCS, 242–259. https://doi.org/10.1007/978-3-642-17226-7_15
McLuhan, M. (1964). Understanding media: The extensions of man (W. T. Gordon, Ed.) (First ed.). New York: McGraw-Hill.
McWhorter, R. R. (2010). Exploring the emergence of virtual human resource development. Advances in Developing Human Resources, 12(6), 623–631. https://doi.org/10.1177/1523422310395367
Nafukho, F. M., Graham, C. M., & Muyia, H. M. A. (2010). Harnessing and optimal utilization of human capital in virtual workplace environments. Advances in Developing Human Resources, 12(6), 648–664. https://doi.org/10.1177/1523422310394791
Riggins, F. J., & Wamba, S. F. (2015). Research directions on the adoption, usage, and impact of the Internet of Things through the use of big data analytics. In 2015 48th Hawaii International Conference on System Sciences (pp. 1531–1540). Hawaii: IEEE. https://doi.org/10.1109/HICSS.2015.186
Rose, K., Eldridge, S., & Chapin, L. (2015). The Internet of Things: An overview. Understanding the issues and challenges of a more connected world (C. Marsan, Ed.). Geneva, Switzerland: The Internet Society (ISOC).
Schramm, J. (2014). Internet of things poses HR challenges: "Getting engaged." HR Magazine, 49(2), 44–51.
Shin, D. (2014). A socio-technical framework for Internet-of-Things design: A human-centered design for the Internet of Things. Telematics and Informatics, 31(4), 519–531.
https://doi.org/10.1016/j.tele.2014.02.003
Smart, P. (2017). Situating machine intelligence within the cognitive ecology of the Internet. Minds and Machines, 27(2), 357–380. https://doi.org/10.1007/s11023-016-9416-z
Smart, P., Heersmink, R., & Clowes, R. W. (2017). The cognitive ecology of the Internet. In Cognition Beyond the Brain (pp. 251–282). Cham: Springer International Publishing. https://doi.org/10.1007/978-3-319-49115-8_13
Tianbo, Z. (2012). The Internet of Things promoting higher education revolution. In 2012 Fourth International Conference on Multimedia Information Networking and Security (pp. 790–793). Guangzhou, China: IEEE. https://doi.org/10.1109/MINES.2012.231
Whitmore, A., Agarwal, A., & Da Xu, L. (2015). The Internet of Things—A survey of topics and trends. Information Systems Frontiers, 17(2), 261–274. https://doi.org/10.1007/s10796-014-9489-2
Xu, L. Da, He, W., & Li, S. (2014). Internet of Things in industries: A survey. IEEE Transactions on Industrial Informatics, 10(4), 2233–2243. https://doi.org/10.1109/TII.2014.2300753
Yawson, R. M. (2009). The ecological system of innovation: A new architectural framework for a functional evidence-based platform for science and innovation policy. In K. R. E. Huizingh, S. Conn, M. Torkkeli, & I. Bitran (Eds.), Future of Innovation: Proceedings of the XX ISPIM 2009 Conference (pp. 1–16). Vienna, Austria: Wiley Education. https://doi.org/10.2139/ssrn.1417676
Yawson, R. M. (2017). Leadership development in South Africa. In A. Ardichvili & D. Khalil (Eds.), Leadership Development in Emerging Markets Economies (1st ed., pp. 93–109). New York: Palgrave Macmillan. https://doi.org/10.1057/978-1-137-58003-0_6
Yawson, R. M., & Greiman, B. C. (2014). Stakeholder analysis as a tool for systems approach research in HRD. In J. Gedro, D. D. Chapman, & K. Guerdat (Eds.), Leading Human Resource Development through Research. Proceedings of the 21st Annual AHRD International Research Conference in the Americas (pp. 1–28). Houston, TX: Academy of Human Resource Development.
Yawson, R. M., & Greiman, B. C. (2017). Strategic flexibility analysis of agrifood nanotechnology skill needs identification. Technological Forecasting and Social Change, 118(C), 184–194. https://doi.org/10.1016/j.techfore.2017.02.019
ai_researcher
1
Involving_End-Users_in_Game_Based_Ideation_A_Case_Study_in_Hospital_Logistics.pdf
Deep Speaker Verification: Do We Need End to End?

Dong Wang†, Lantian Li†, Zhiyuan Tang†, Thomas Fang Zheng†
† Center for Speech and Language Technologies, Research Institute of Information Technology, Department of Computer Science and Technology, Tsinghua University, China

arXiv:1706.07859v1 [cs.SD] 22 Jun 2017

D.W. and L.L. are joint first authors with equal contribution.

Abstract—End-to-end learning treats the entire system as a whole adaptable black box, which, if sufficient data are available, may learn a system that works very well for the target task. This principle has recently been applied to several prototype research efforts on speaker verification (SV), where the feature learning and classifier are learned together with an objective function that is consistent with the evaluation metric. An opposite approach to end-to-end is feature learning, which firstly trains a feature learning model, and then constructs a back-end classifier separately to perform SV. Recently, both approaches achieved significant performance gains on SV, mainly attributed to the smart utilization of deep neural networks. However, the two approaches have not been carefully compared, and their respective advantages have not been well discussed. In this paper, we compare the end-to-end and feature learning approaches on a text-independent SV task. Our experiments on a dataset sampled from the Fisher database and involving 5,000 speakers demonstrated that the feature learning approach outperformed the end-to-end approach. This is strong support for the feature learning approach, at least with data and computation resources similar to ours.

I. Introduction

Speaker verification (SV) is an important biometric recognition technology and has gained great popularity in a wide range of applications, such as access control, transaction authentication, forensics and personalization. After decades of research, SV has gained significant performance improvement, and has been deployed in some practical applications [1], [2], [3], [4]. However, SV is still a very challenging task, mainly attributed to the large uncertainty caused by the complex convolution of various speech factors, especially phone content and channel [5].

Most of the existing successful SV approaches rely on probabilistic models to factorize speech signals into factors related to speakers and other variations, especially the phone content. A classical probabilistic model is the Gaussian mixture model-universal background model (GMM-UBM) [6], where the speaker factor is assumed to be an additive component to the phone variation (represented by Gaussian components). This model was extended to a low-rank formulation, leading to the joint factor analysis (JFA) model [7] and its 'simplified' version, the famous i-vector model [8]. To further improve speaker-related discrimination, various discriminative back-end models have been proposed, e.g., metric learning [9], linear discriminant analysis (LDA) [8] and its probabilistic version, PLDA [10]. A DNN-based i-vector model was also proposed [11], [12], where a phonetic deep neural network (DNN) is used to enhance the factorization for speaker factors by providing phonetic information.

Recently, the deep learning approach has gained much attention in the SV research. Different from the probabilistic methods, these deep SV methods utilize various DNN structures to learn speaker features.
This can be regarded as a neural-based speech factorization, which is deep, non-linear and non-Gaussian. The initial work towards the deep SV was proposed by Ehsan and colleagues [13]. They constructed a DNN model with 496 speakers in the training set as the targets. The frame-level features were read from the activations of the last hidden layer, and the utterance-level representations (called 'd-vector') were obtained by averaging over the frame-level features. In evaluation, the decision score was computed as a simple cosine distance between the d-vectors of the enrollment utterance and the test utterance. These preliminary results triggered much interest in deep SV. Many researchers quickly noticed that the inferior performance of this approach compared to the counterpart i-vector model might be caused by the naive back-end model, i.e., the frame averaging and the cosine-based scoring. An 'end-to-end approach' was developed that learns the back-end model together with the feature learning [14], [15], [16], [17]. Another approach focuses on learning speaker features, leaving the back-end model as a separate component. The idea is that if the feature learning is strong enough, the back-end model issue will be naturally solved. Our group followed this direction, and found that a simple CT-DNN structure can learn speaker features very well [18].

These two deep SV approaches, end-to-end and feature learning, however, have not been carefully compared. In this paper, we present a comparative experimental study of the two deep SV approaches. Based on a training database consisting of 5,000 speakers, we found that the feature learning approach performs consistently better than the end-to-end approach. This result is strong support for the feature learning approach, at least in conditions similar to our experiment.

The rest of this paper is organized as follows. Section II presents the two deep SV learning approaches in detail. The comparative experiments are presented in Section III, and Section IV concludes the paper.

II. Deep speaker verification models

This section presents the model structures of the feature learning approach and the end-to-end approach used in our study. The former was proposed by our group [18], and the latter was proposed by Snyder et al. [16].

A. Feature learning model

The DNN structure of the feature learning system is illustrated in Fig. 1. It consists of a convolutional (CN) component and a time-delay (TD) component, connected by a bottleneck layer. The frame-level speech features are read from the last hidden layer (feature layer). More details can be found in [18].

To perform SV, a simple back-end model is constructed, which consists of a simple average pooling that averages the frame-level speaker features to utterance-level representations, denoted by 'd-vectors', and a scoring scheme based on the cosine distance between the d-vectors of the enrollment and test utterances.
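To make this back-end concrete, the following is a minimal NumPy sketch of average pooling and cosine scoring. It is an illustration rather than the authors' Kaldi-based code; the function names are hypothetical and the input is assumed to be the frame-level features read from the DNN feature layer.

import numpy as np

def d_vector(frame_features):
    # frame_features: (num_frames, feat_dim) frame-level speaker features.
    # Average pooling yields the utterance-level d-vector.
    return frame_features.mean(axis=0)

def cosine_score(enroll, test):
    # Cosine scoring between enrollment and test d-vectors; the small
    # epsilon guards against division by zero for degenerate inputs.
    return float(np.dot(enroll, test) /
                 (np.linalg.norm(enroll) * np.linalg.norm(test) + 1e-12))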
B. End-to-end model

The end-to-end DNN model proposed by Snyder et al. [16] is used in our study. A particular reason we choose this model is that it has been tested on text-independent tasks. The model structure is illustrated in Fig. 2. The input is a pair of utterances (in the form of feature sequences) sampled from the training data, where the two utterances may be from the same speaker or from different speakers, labelled by 0 and 1 respectively. The DNN structure consists of an embedding component and a scoring component (back-end).

The embedding component converts an input utterance to a speaker embedding. The utterance is first propagated through three time-delay network-in-network (NIN) layers [19]. Each NIN component is composed of a stack of three rectified linear units connected by affine transformations; it maps the 150-dimensional input to a 1,000-dimensional space, then projects the output to a 500-dimensional space. The output of the third NIN layer is aggregated by a temporal pooling layer, by which the statistics of the input utterance are derived. Finally, the statistics are propagated to another NIN layer and a linear affine layer, producing the speaker embedding. Note that in this work, we only use the mean vector as the statistics, as it performed the best in our experiments.

The back-end scoring component estimates the probability that the two input utterances, represented by their embeddings x and y, belong to the same speaker. It is essentially a bi-linear projection followed by a logistic sigmoid function, formulated as follows:

Pr(x, y) = 1 / (1 + e^(-L(x, y)))    (1)

where L(x, y) = x^T y - x^T S x - y^T S y + b.

By this DNN structure, the objective function of the training is simply the cross entropy between the prediction of the network and the ground-truth of the training samples (pairs of utterances), formulated as:

E = - Σ_{(x,y) ∈ P_same} ln(Pr(x, y)) - K Σ_{(x,y) ∈ P_diff} ln(1 - Pr(x, y))    (2)

where P_same and P_diff represent the set of same-speaker pairs and different-speaker pairs, respectively. Since there are many more pairs in P_diff than in P_same, a constant hyper-parameter K is introduced to balance the contribution of each set.
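As a concrete reading of Eqs. (1) and (2), the sketch below implements the bi-linear score and the weighted pair cross entropy in NumPy. It is a minimal illustration, assuming the parameters S, b and the weight K are given; in the actual system S and b are trained jointly with the embedding network, and the function names are hypothetical.

import numpy as np

def pair_score(x, y, S, b):
    # L(x, y) = x^T y - x^T S x - y^T S y + b   (Eq. 1)
    return x @ y - x @ S @ x - y @ S @ y + b

def pair_probability(x, y, S, b):
    # Pr(x, y) = sigmoid(L(x, y)): probability that the pair is same-speaker.
    return 1.0 / (1.0 + np.exp(-pair_score(x, y, S, b)))

def pair_cross_entropy(same_pairs, diff_pairs, S, b, K):
    # E = -sum_same ln Pr - K * sum_diff ln(1 - Pr)   (Eq. 2)
    # K down-weights the far more numerous different-speaker pairs.
    e = -sum(np.log(pair_probability(x, y, S, b)) for x, y in same_pairs)
    e -= K * sum(np.log(1.0 - pair_probability(x, y, S, b)) for x, y in diff_pairs)
    return e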
C. Comparison of feature learning and end-to-end

The two deep SV approaches are fundamentally different in multiple aspects. A thorough comparison helps understand their respective advantages.

• Difference in model structure. The end-to-end model involves both speaker embedding (front-end) and scoring (back-end), and the two components are trained jointly as an integrated network. The feature learning model, in contrast, involves only the front-end.

• Difference in training objectives. The training objective of the end-to-end model is to directly determine if a pair of utterances are from the same speaker or different speakers. The feature learning model, instead, aims to discriminate between the speakers in the training set. Obviously, the end-to-end objective is more consistent with the SV task.

• Difference in training scheme. The end-to-end model is trained in a pair-wise style, which heavily relies on the quality and quantity of the sampled pairs. The feature learning model is trained in a one-hot style, for which a single training example triggers a much stronger error signal through the softmax function. This suggests that training the feature learning model could be easier than training the end-to-end model, and requires less data and computation.

• Difference in generalizability. The end-to-end approach is purely task-oriented, and the resultant system can perform SV only; the feature learning approach, instead, learns intermediate representations that can be used in a broad range of applications, such as speech signal factorization [20] and speaker-dependent text-to-speech synthesis.

As a summary, the end-to-end model is theoretically optimal for SV, but the training could be difficult. The feature learning approach is the opposite. Which approach is better in practical usage is therefore an open question.

III. Experiments

In this section, we first present the database and the settings of the different systems, then report the performance results. Experiments were also conducted to analyze the factors that caused the different performance of the two deep SV systems. All the experiments were conducted with the Kaldi toolkit [21].

Fig. 1. The DNN structure of the deep feature learning system [18].
Fig. 2. The DNN structure of the end-to-end system [16].

A. Database

Our experiments were conducted with the Fisher database. The training set and the evaluation set are presented as follows.

• Training set: It consists of 2,500 male and 2,500 female speakers, with 95,167 utterances randomly selected from the Fisher database; each speaker has about 20 utterances, totalling about 120 seconds in length. This dataset was used for training the i-vector system, the LDA model, the PLDA model, and the DNNs of the two deep SV systems.

• Evaluation set: It consists of 500 male and 500 female speakers randomly selected from the Fisher database. There is no overlap between the speakers of the training set and the evaluation set. We set two test conditions: a short-enrollment condition (C(4-4)) and a long-enrollment condition (C(40-4)), for which the duration of the enrollment utterances is 4 seconds and 40 seconds, respectively. More details of the two test conditions are presented in Table I.

The trials in the test are either female-female or male-male, and the results are reported on the pool of all the trials.

TABLE I: Data profile of the test conditions.

Test condition                      C(4-4)   C(40-4)
No. of enrollment utterances        82k      10k
No. of test utterances              82k      73k
Avg. duration of enrollment utt.    4s       40s
Avg. duration of test utt.          4s       4s
No. of target trials                3.5k     73k
No. of non-target trials            82M      36M

B. Model settings

1) I-vector system: The i-vector system was built as a baseline for comparison. The raw feature involved 19-dimensional MFCCs plus the log energy. This raw feature was augmented by its first- and second-order derivatives, resulting in a 60-dimensional feature vector. This feature was used by the i-vector model. The UBM was composed of 2,048 Gaussian components, and the dimensionality of the i-vector space was 400. The dimensionality of the LDA projection space was set to 150. Prior to the PLDA scoring, the i-vectors were centered and length normalized. The entire system was trained using the Kaldi SRE08 recipe.
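For readers who want to reproduce the 60-dimensional raw feature (19 MFCCs plus log energy, with first- and second-order derivatives), the sketch below assembles it with librosa rather than Kaldi. It is an approximation for illustration only: librosa's default framing and filterbank differ from Kaldi's, so the values will not match the paper's front-end exactly, and the function name is hypothetical.

import numpy as np
import librosa

def ivector_raw_features(wav_path):
    y, sr = librosa.load(wav_path, sr=None)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=19)        # (19, T)
    log_e = np.log(librosa.feature.rms(y=y) ** 2 + 1e-10)     # (1, T) log frame energy
    base = np.vstack([mfcc, log_e])                           # (20, T)
    feats = np.vstack([base,
                       librosa.feature.delta(base, order=1),  # first-order derivatives
                       librosa.feature.delta(base, order=2)]) # second-order derivatives
    return feats.T                                            # (T, 60)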
2) Deep feature learning system: The deep feature learning system was constructed based on the DNN structure shown in Fig. 1. The input feature was 40-dimensional Fbanks, with a symmetric 4-frame window to splice the neighboring frames, resulting in 9 frames in total. With two time-delay hidden layers, the size of the effective context window is 20 frames. The number of output units was 5,000, corresponding to the number of speakers in the training data. The speaker features were extracted from the last hidden layer (the feature layer in Fig. 1), and the utterance-level d-vectors were derived by averaging the frame-level features. The three scoring metrics used by the i-vector system were also used by the d-vector system, including cosine distance, LDA and PLDA.

3) End-to-end system: The end-to-end system was constructed based on the DNN architecture shown in Fig. 2. The input feature was 40-dimensional Fbanks, with a symmetric 1-frame window to splice the neighboring frames, resulting in 3 frames in total. With three time-delay hidden layers, the size of the effective context window is 17 frames. The training samples were organized as pairs of feature chunks, which may be either same-speaker or different-speaker. Each mini-batch involves N same-speaker pairs and N(N - 1) different-speaker pairs. Limited by the GPU memory, N was set to 64 in our experiment. The number of frames in a feature chunk was a random variable sampled from a log-uniform distribution, ranging from 50 to 300. The feature dimensions before and after the temporal pooling are both 150 (note that the statistics produced by the temporal pooling are just the mean vector), and the dimensionality of the speaker embedding was 200. In evaluation, the enrollment and test utterances were fed to the neural model simultaneously, and the decision scores were then obtained from the output of the network. We used the recipe published by the author of [16], and tried our best to optimize the system by tuning the configurations, including the chunk size and the batch size. The settings mentioned above are the optimal values we found in the system tuning.

C. Experimental results

TABLE II: EER (%) results of the three SV systems.

System         Scoring   C(4-4)   C(40-4)
i-vector       Cosine    16.96    4.81
i-vector       LDA       10.95    3.30
i-vector       PLDA      8.84     3.39
Deep feature   Cosine    10.31    4.01
Deep feature   LDA       7.86     2.39
Deep feature   PLDA      13.01    5.24
End-to-end     -         9.85     4.59

The results of the three SV systems in terms of equal error rate (EER%) are reported in Table II.
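The EER is the operating point where the false acceptance rate equals the false rejection rate. As a reference for how such numbers can be computed from trial scores, here is a common scikit-learn-based sketch; it is a generic illustration, not the scoring code used in the paper.

import numpy as np
from sklearn.metrics import roc_curve

def equal_error_rate(scores, labels):
    # scores: higher means more likely same-speaker;
    # labels: 1 for target (same-speaker) trials, 0 for non-target trials.
    fpr, tpr, _ = roc_curve(labels, scores)
    fnr = 1.0 - tpr
    idx = np.nanargmin(np.abs(fnr - fpr))        # point where FAR ~ FRR
    return 100.0 * (fpr[idx] + fnr[idx]) / 2.0   # EER in percent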
Firstly, we compare the three SV systems with their own best configurations, i.e., i-vector + PLDA and deep feature + LDA. It can be seen that the deep feature system performs the best, and the end-to-end system is inferior compared to the other two systems. The relatively better performance of the feature learning system compared to the i-vector system has been reported in our previous paper [18]. The inferior performance of the end-to-end system compared to the i-vector system with limited training data is also consistent with the results reported by Snyder et al. [16], who found that the end-to-end model did not beat the best i-vector model (i-vector + PLDA) when the training set contained 5,000 speakers.

Another observation is that the LDA model plays an important role for the feature learning system. At first glance, this seems unreasonable, as the feature is already speaker-discriminative, and the discriminative DNN training should have superseded LDA. However, more careful analysis shows that LDA normalizes the within-class variation, which is important for SV but is not the goal of the DNN training. From this perspective, LDA can be regarded as a better back-end model that learns an SV-oriented decision strategy. The importance of an SV back-end model for d-vector systems was also noticed by Heigold et al. [14], who reported that a score normalization (t-norm) is important to improve the performance of a d-vector system, although their study was based on a text-dependent task. T-norm plays a similar role as LDA in normalizing within-class variation.

The probabilistic version of LDA (PLDA), however, does not provide any contribution (actually it hurts the performance). The failure of PLDA has been reported in our previous work [18]. A possible reason is that the mean d-vector of each speaker does not follow a Gaussian prior, so it cannot be well modeled by the PLDA model.

These results demonstrated that although the end-to-end model is highly SV-oriented, it is not easy to take full advantage of this model due to the difficulties in model training. In our experiments, we indeed found that the end-to-end training is rather difficult: it requires careful tuning, otherwise the training may diverge, and much attention has to be paid to the training pair preparation, e.g., the number of training pairs in each iteration and the number of frames in each training pair. We also observed a non-linear dynamics during end-to-end training, i.e., the objective function is stuck at a value for quite a long time, and then suddenly drops dramatically. This, from another perspective, demonstrates the difficulty of the end-to-end training.

IV. Conclusions

This paper studied two deep speaker verification models. One is the end-to-end neural model and the other is the deep feature learning model. Our experimental results showed that the two deep speaker models achieved comparable or even better performance than the i-vector/PLDA model. When comparing them with each other, we found that the feature learning model performs better than the end-to-end model, although the latter is assumed to be more consistent with the SV task. From these experiments, it seems that end-to-end learning is not very suitable for SV, at least with data and computation resources similar to our experiment. Many questions remain open, e.g., how will the two approaches perform as the training set grows? How can the respective advantages of the two approaches be combined to construct a more powerful deep speaker model? All of these need careful investigation.

Acknowledgment

This work was supported by the National Natural Science Foundation of China under Grant No. 61371136 / 61633013 and the National Basic Research Program (973 Program) of China under Grant No. 2013CB329302.

References

[1] J. P. Campbell, "Speaker recognition: A tutorial," Proceedings of the IEEE, vol. 85, no. 9, pp. 1437–1462, 1997.
[2] D. A. Reynolds, "An overview of automatic speaker recognition technology," in Acoustics, Speech, and Signal Processing (ICASSP), 2002 IEEE International Conference on, vol. 4. IEEE, 2002, pp. IV–4072.
[3] T. Kinnunen and H. Li, "An overview of text-independent speaker recognition: From features to supervectors," Speech Communication, vol. 52, no. 1, pp. 12–40, 2010.
[4] J. H. Hansen and T. Hasan, "Speaker recognition by machines and humans: A tutorial review," IEEE Signal Processing Magazine, vol. 32, no. 6, pp. 74–99, 2015.
[5] T. F. Zheng, Q. Jin, L. Li, J. Wang, and F. Bie, "An overview of robustness related issues in speaker recognition," in Asia-Pacific Signal and Information Processing Association, 2014 Annual Summit and Conference (APSIPA). IEEE, 2014, pp. 1–10.
[6] D. Reynolds, T. Quatieri, and R. Dunn, "Speaker verification using adapted Gaussian mixture models," Digital Signal Processing, vol. 10, no. 1, pp. 19–41, 2000.
[7] P. Kenny, G. Boulianne, P. Ouellet, and P. Dumouchel, "Joint factor analysis versus eigenchannels in speaker recognition," IEEE Transactions on Audio, Speech, and Language Processing, vol. 15, pp. 1435–1447, 2007.
[8] N. Dehak, P. J. Kenny, R. Dehak, P. Dumouchel, and P. Ouellet, "Front-end factor analysis for speaker verification," IEEE Transactions on Audio, Speech, and Language Processing, vol. 19, no. 4, pp. 788–798, 2011.
[9] M. Schultz and T. Joachims, "Learning a distance metric from relative comparisons," in Advances in Neural Information Processing Systems, 2004, pp. 41–48.
[10] S. Ioffe, "Probabilistic linear discriminant analysis," Computer Vision - ECCV 2006, Springer Berlin Heidelberg, pp. 531–542, 2006.
[11] P. Kenny, V. Gupta, T. Stafylakis, P. Ouellet, and J. Alam, "Deep neural networks for extracting Baum-Welch statistics for speaker recognition," Odyssey, 2014.
[12] Y. Lei, N. Scheffer, L. Ferrer, and M. McLaren, "A novel scheme for speaker recognition using a phonetically-aware deep neural network," in Acoustics, Speech and Signal Processing (ICASSP), 2014 IEEE International Conference on. IEEE, 2014, pp. 1695–1699.
[13] V. Ehsan, L. Xin, M. Erik, L. M. Ignacio, and G.-D. Javier, "Deep neural networks for small footprint text-dependent speaker verification," in Acoustics, Speech and Signal Processing (ICASSP), 2014 IEEE International Conference on, vol. 28, no. 4, 2014, pp. 357–366.
[14] G. Heigold, I. Moreno, S. Bengio, and N. Shazeer, "End-to-end text-dependent speaker verification," in Acoustics, Speech and Signal Processing (ICASSP), 2016 IEEE International Conference on. IEEE, 2016, pp. 5115–5119.
[15] S.-X. Zhang, Z. Chen, Y. Zhao, J. Li, and Y. Gong, "End-to-end attention based text-dependent speaker verification," arXiv preprint arXiv:1701.00562, 2017.
[16] D. Snyder, P. Ghahremani, D. Povey, D. Garcia-Romero, Y. Carmiel, and S. Khudanpur, "Deep neural network-based speaker embeddings for end-to-end speaker verification," in SLT'2016, 2016.
[17] C. Li, X. Ma, B. Jiang, X. Li, X. Zhang, X. Liu, Y. Cao, A. Kannan, and Z. Zhu, "Deep speaker: an end-to-end neural speaker embedding system," arXiv preprint arXiv:1705.02304, 2017.
[18] L. Li, Y. Chen, Y. Shi, Z. Tang, and D. Wang, "Deep speaker feature learning for text-independent speaker verification," arXiv preprint arXiv:1705.03670, 2017.
[19] P. Ghahremani, V. Manohar, D. Povey, and S. Khudanpur, "Acoustic modelling from the signal domain using CNNs," Interspeech 2016, pp. 3434–3438, 2016.
[20] D. Wang, L. Li, Y. Shi, Y. Chen, and Z. Tang, "Deep factorization for speech signal," arXiv preprint arXiv:1706.01777, 2017.
[21] D. Povey, A. Ghoshal, G. Boulianne, L. Burget, O. Glembek, N. Goel, M. Hannemann, P. Motlicek, Y. Qian, P. Schwarz et al., "The Kaldi speech recognition toolkit," in IEEE 2011 Workshop on Automatic Speech Recognition and Understanding, no. EPFL-CONF-192584. IEEE Signal Processing Society, 2011.
ai_researcher
1
How_can_scientists_and_designers_find_ways_of_working_together_A_case_study_of_playful_learning_to_co-design_visual_interpretations_of_immunology_concepts.pdf
arXiv:1909.03486v1 [cs.CY] 8 Sep 2019

How Data Scientists Work Together With Domain Experts in Scientific Collaborations: To Find The Right Answer Or To Ask The Right Question?

YAOLI MAO∗, Columbia University
DAKUO WANG∗, IBM Research
MICHAEL MULLER, IBM Research
KUSH R. VARSHNEY, IBM Research
IOANA BALDINI, IBM Research
CASEY DUGAN, IBM Research
ALEKSANDRA MOJSILOVIĆ, IBM Research

In recent years there has been an increasing trend in which data scientists and domain experts work together to tackle complex scientific questions. However, such collaborations often face challenges. In this paper, we aim to decipher this collaboration complexity through a semi-structured interview study with 22 interviewees from teams of bio-medical scientists collaborating with data scientists. In the analysis, we adopt the Olsons' four-dimensions framework proposed in Distance Matters to code interview transcripts. Our findings suggest that besides the glitches in the collaboration readiness, technology readiness, and coupling of work dimensions, the tensions that exist in the common ground building process influence the collaboration outcomes, and then persist in the actual collaboration process. In contrast to prior works' general account of building a high level of common ground, the breakdown of content common ground together with the strengthening of process common ground in this process is more beneficial for scientific discovery. We discuss why that is and what the design suggestions are, and conclude the paper with future directions and limitations.

CCS Concepts: • Human-centered computing → Empirical studies in collaborative and social computing.

Additional Key Words and Phrases: Data science, Open science, Scientific discovery, Bio-medical science, Interdisciplinary collaboration, Data-centric collaboration, Common-ground, AutoAI

ACM Reference Format: Yaoli Mao, Dakuo Wang, Michael Muller, Kush R. Varshney, Ioana Baldini, Casey Dugan, and Aleksandra Mojsilović. 2019. How Data Scientists Work Together With Domain Experts in Scientific Collaborations: To Find The Right Answer Or To Ask The Right Question?. Proc. ACM Hum.-Comput. Interact. 3, GROUP, Article 237 (December 2019), 23 pages. https://doi.org/10.1145/3361118

∗Equal contributions from the first author Yaoli Mao and the corresponding author Dakuo Wang: [email protected]. Part of this work was done during Yaoli's internship at IBM Research.

Authors' addresses: Yaoli Mao, [email protected], Columbia University, 525 West 120th Street, New York, New York, 10027; Dakuo Wang, [email protected], IBM Research, 1101 Kitchawan Road, Yorktown Heights, New York, 10598; Michael Muller, IBM Research; Kush R. Varshney, IBM Research; Ioana Baldini, IBM Research; Casey Dugan, IBM Research; Aleksandra Mojsilović, IBM Research.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]. © 2019 Copyright held by the owner/author(s). Publication rights licensed to ACM.
2573-0142/2019/12-ART237 $15.00 https://doi.org/10.1145/3361118

1 INTRODUCTION

Thanks to the advancement of Information Technology and Cloud Computing infrastructure in recent years, a huge amount of data has been generated in the scientific discovery process [16, 25] and is shared more broadly [12]. For example, the European Organization for Nuclear Research (CERN) generated 70 petabytes of data from particle physics experiments in their Large Hadron Collider (LHC) in 2017 alone, and they distributed and processed the data in laboratories around the world [16]. GenBank in the Human Genome Project (HGP) released 212,260,377 sequences of human genome data in February 2019 [32]. Such huge and complex data collection in scientific projects has gone beyond the analytic capability of a local research team in a single expertise domain, and calls for new ways of conducting scientific research.

The Open Science movement started in recent years has transformed traditional science research practices to embrace more openness and reproducibility [76, 102]. It advocates for transparency and accessibility in knowledge, data, tools, analytic processes, and interdisciplinary collaboration in the scientific discovery process [95]. Because of the data-centric nature, most open science projects attract data scientists to collaborate with the domain experts. In this paper, we do not make fine-grain distinctions of data workers [60], so we denote all these data experts who often have no prior domain knowledge as "data scientists". Many of these interdisciplinary collaborations have shown promising progress in solving hard scientific problems. For example, Critical Assessment of Structure Prediction (CASP), a biannual competition aimed at predicting the 3D structure of proteins, has attracted tens of thousands of models submitted by approximately 100 research groups worldwide and granted the top prize to a data science research team, Google DeepMind's AlphaFold [15]. The success of these interdisciplinary collaborations is also appealing to Human-Computer Interaction (HCI) researchers, and a few papers have been published in recent years (e.g., offline data hackathons for civic issues [41], or online data challenges such as on Kaggle.com [14]).

However, besides these aforementioned success stories, there are also turbulences in these collaborations. Even in the case study reporting a successful offline data hackathon event, Hou and Wang [41] described a tension between the NPOs' expectations (domain experts) and the data volunteers' expectations (data scientists), which they described as a "dual goal" dilemma. In the more general open science and cyberinfrastructure contexts, tensions and challenges are not rarely seen, and have been attributed to the interdisciplinary nature of the team [94], related motivational factors [84] and cultural differences [9], the remote and cross-culture team structure [54, 57], the data-centric practice [79], or the lack of technology and infrastructure support [66].

These tensions are not new in the Computer-Supported Cooperative Work (CSCW) field. In their landmark paper "Distance Matters" 20 years ago [65], Olson and Olson developed a coherent framework to describe what makes a collaboration successful or not. It has four dimensions: Common Ground, Coupling of Work, Collaboration Readiness, and Technology Readiness.
Though they were primarily looking at remote, not necessarily data-centric, scientific collaborations at that time (which they referred to as collaboratories [103]), their framework has been proven to be effective in analyzing more general collaborations beyond the "remote" settings [43, 64, 67–69].

In this paper, we continue this line of research on analyzing interdisciplinary collaborations using the Olsons' framework. We focus on data science projects and we use the bio-medical scientific domain as a case study. Bio-medical research has been one of the most active fields to embrace the open science movement, because bio-medical projects often curate and integrate many and large data sets. Yet, the data-centric projects in this domain also experienced unique challenges, partially because human lives are at stake and mistakes in analyzing data or interpreting results could lead to catastrophic consequences.

We aim to systematically explore the unique challenges that exist in the collaborations between data scientists and bio-medical scientists. Thus, we conducted this semi-structured interview study with 22 data scientists and bio-medical scientists who are involved in various open science collaborations. We have no intention to test the applicability of the Olsons' framework; rather, we use it as an analytic lens to guide our coding of the interview transcripts. Specifically, the research question is: What are the challenges in collaborations between data scientists and domain experts (i.e., bio-medical scientists) in data-centric open science projects, along each of the four dimensions in the Olsons' framework (Common Ground, Coupling of Work, Technology Readiness and Collaboration Readiness)?

2 RELATED WORK

2.1 The Olsons' Framework and Remote Scientific Collaborations

Olson and Olson's framework for remote scientific collaboration [65] brings together four major concepts that are critical to successful distributed scientific collaborations. The first concept, the coupling of the work (or the nature of work), refers to the structure and organization of the work. Ambiguous and tightly coupled tasks require higher interdependencies among collaborators and should be modularized to the same location, unlike tasks in loosely coupled collaborations. The second concept, common ground [37], refers to how much common knowledge and awareness [28] collaborators have about the task and about each other. The third concept, collaboration readiness, refers to those aspects by which collaborators are motivated and willing to collaborate with each other, trust each other, as well as align their goals together. The fourth concept, technology readiness, concerns difficulties in adopting and adapting supporting technologies to fit with collaborators' current use habits and infrastructures.

Previous HCI studies have used this framework to examine distributed collaborations and to design features to support those collaborations in various fields. One exemplar research work was in the international HIV/AIDS research field [66].
This study investigated two collaborations in South Africa with case studies, and found that successful collaborations were subject to limited collaboration readiness, imbalanced technology readiness to adopt and learn advanced tools across different geographic locations, as well as inadequate bandwidth and an unstable network infrastructure. A more recent study re-examined this framework in globally-distributed software development teams [10]. They examined four ethnographic cases of international software development using comparative analysis to explore if distance still mattered with the rapid development of collaboration technologies and people's growing familiarity and experience with these technologies and remote work over the last decade. Their findings highlighted common ground and collaboration readiness as critical factors for data- and programming-intensive collaborations, and also indicated that collaborators in this context had much higher technology readiness, and that they preferred closely coupled work even when working remotely with each other.

In this work, we extend the Olsons' analysis of interdisciplinary collaboration to a new genre: data-centric open science projects. Thus, we use this framework to guide our coding of the interview data, and pay particular attention to what aspects may mismatch with the Olsons' "best practices of collaboration" suggestions (e.g., successful teams should have a high level of common ground).

2.2 Teams and Infrastructures in Data-Centric Open Science Projects

Building upon the advanced computing tools and high-speed networks for collaboration and sharing resources of e-science (i.e., supporting collaborative research via electronic networks, also known as e-research, cyberinfrastructure, e-infrastructure, the Grid, etc.) [43], the open science initiatives advocate for open access to, communication around, as well as contribution to huge amounts of data sets, analytic tools, work practices and processes [81]. In this context, novel forms of teams and ways of collaboration have emerged over recent years, transforming a mere data and resource sharing base towards ecosystem-like communities of communication, practices and contributions [12]. Accordingly, teams come in small and big sizes, are highly distributed geographically, and self-organize across traditional disciplines in the greater research community over time.

One example of the new team collaboration form is the PRO-ACT database [2], which was initially developed to pool and integrate different data sources (clinical trials, patient records, medical and family history) relating to Amyotrophic Lateral Sclerosis (ALS). In addition to integrating these data, PRO-ACT has launched two crowdsourcing competitions to the public since 2012, utilizing its database to promote ALS computational research [77]. According to the official statistics, the 2012 competition attracted 1073 solvers from 64 countries, and the 2015 one drew in 288 participants and 80 final submissions by 32 teams from 15 countries within a period of three months. The best winning algorithms outperformed methods designed by challenge organizers as well as predictions by ALS clinicians, leading to major research publications [52].

One example of new ways of working is adopting Jupyter Notebook [50]. It allows interactive coding, visualizations, as well as building code narratives in the same UI [80]. Many extensions built on top of the Jupyter Notebook system have significantly improved data scientists' efficiency, such as the Voyager project [104] for data wrangling tasks [45] that replicates the Trifacta [90] capabilities. GitHub [24] is another popular code sharing and code version control platform. It supports various types of user access so that a user can set the data to be public or private. Many data scientists use it to host their code (often in Jupyter Notebooks) and manage projects [80]. Furthermore, components of machine learning and artificial intelligence have also entered the picture, in collaboration with human experts in the research fields [33]. Building on open code sharing with common standards, open analytics platforms such as OpenML [93] help users to quickly search for relevant analytical methods and reuse previous code in the community. Data analyses can be automatically processed and annotated in dataflow pipelines covering how data is loaded, pre-processed, transformed, and analyzed, and thus can promote mutual learning opportunities for human experts [73, 74]. Very recently, DataRobot [26], Google [34], H2O [39], and IBM [30] have each released a new AutoML solution, which aims to automatically finish low-level, simple machine learning tasks so that data scientists can save some time and focus more on the higher-level tasks.

Novel forms of teams and ways of collaboration in the open science context can bring new opportunities and challenges at various steps of the data-centric collaboration process, including retrieving, preparing, and interpreting data [60], selecting methods for analysis [72], and evaluating the correctness of results [46]. Hou and Wang [41] studied the data science process in an offline Civic Data Hackathon event. Through observation and interview research methods, they found that broker theory is applicable to explain the tensions of collaboration between the NPO stakeholders and the data workers. Hill and his colleagues [40] looked at the common collaboration barriers, such as communication challenges, between multiple stakeholders, and they found that non-expert collaborators have to treat the data science process as a black box, due to the lack of timely communication.

However, the above-mentioned studies either focus on only a subset of steps of the data-centric collaboration workflow (e.g., on data sharing [8]), or on building a system or feature for a particular data science task (e.g., for data wrangling only [104]). The one study that tried to provide a systematic account of the whole process failed to generalize its findings to the different forms of projects (e.g., [41] only looked at small teams in a data hackathon, and their unit of analysis always consisted of data volunteers working with NPOs). Thus, in this paper, we contribute a comprehensive understanding of the collaborations in data-centric open science projects, and we cover both small and large teams in data-centric collaborations.

2.3 Interdisciplinary Collaboration Teams

Olson and Olson's framework for remote collaboration mostly addresses homogeneous teams with similar expertise or experience
(i.e., software engineers, or bio-medical HIV/AIDS researchers in the examples above), without a direct focus on heterogeneous teams with diverse experts. In our context, the data-centric open science projects often consist of interdisciplinary teams with distinct expertise and roles, including data scientists as the analytics experts and bio-medical scientists as the domain content experts. These teams have been a research focus in various research domains including HCI (e.g., [7]) and cognitive science (e.g., [27, 35, 42]). Despite some common understandings shared within the teams, a substantial portion of the domain knowledge and task understanding is distributed among different experts within the teams [35]. Team performance depends on how diverse knowledge is shared and integrated [27, 92]. What to share and how much to share have always been critical issues yielding mixed results. On the one hand, groups should be fully informed of different and unique perspectives in order to discover an optimal solution (and thus the more the better). Stasser and Titus [87] found that in group decision-making, even though each person has unique knowledge, group members have the propensity to discuss already shared information rather than novel, unshared information. This is known as the "shared information bias" [86] and often prevents the group from finding the alternative solution, usually an ideal or optimal one [29, 58]. On the other hand, comprehensive information sharing has pooling, exchange and integration costs and is inefficient. Gorman [36] argued that each individual does not need to become fully versed in the others' expertise domains; they only need to share a language sufficient to facilitate and evaluate team work.

In this section, we review related theories and research that address sharing and integrating diversities in interdisciplinary teams. We start with the third space theory that advocates pooling different perspectives in a separate common zone, and move on to the common ground theory that supports integration and management of differences.

2.3.1 Third Space and Hybridity. When collaborators from different disciplines work with each other, there is often a "boundary" between the two disciplines or communities. HCI researchers have proposed various theories to explain this phenomenon, and these theories have guided system design in supporting it. One notable theory that fits our context the most is the "third space" that exists "at the boundary of two disciplines" [61, 89]. Note that this concept is different from the "third place" concept in [63]. It emerges from Bhabha's critique of colonialism, where he described that a zone of "hybridity" between two distinct cultures often came into existence spontaneously [6]. If each distinct culture was a "space," then the zone of hybridity, combining attributes of each culture, became something new, a "third space" that separated but also mixed those cultures.

Warr [100] extended this notion into interaction between different disciplines, suggesting preserving the situated nature of each participant's own world while creating a common space for resolving differences. Muller and Druin [61] advocated the deliberate construction of a third space as part of the democratic agenda of participatory design. According to them, a third space is usually not "owned" by anyone, and subsequently diverse voices can speak and be heard in such a hybrid environment, where people can compare, negotiate, and integrate goals, perspectives and vocabularies, as well as discuss shared meanings and protocols. In line with this notion, they argued that in addition to building common ground across disciplines, differences should be adequately examined ("the mutual validation of diverse perspectives") and become mutual learning opportunities [11].

Within HCI, this concept of "hybridity" has been mostly used in the participatory design literature, where users and designers work together across each others' disciplines to embark on a journey of negotiation, shared construction and collective discovery. We argue that the data scientists and the bio-medical scientists in a collaboration in our context also construct a third space. As such, we expect that their behavior and their motivation in that space may differ from what they had before stepping into that zone. If so, we know various effective techniques to study and to support the collaborations in this space (e.g., spaces and places, narrative structures, games, and prototypes [61]), thus we may be able to transfer these existing techniques to our context.

2.3.2 Common Ground: Content and Process. With richly distributed diverse knowledge, perspectives and roles in interdisciplinary teams, common ground is required to close the gaps between differences and in turn enable sharing and communication more efficiently [5]. This is especially important for teams of diverse experts collaborating on complex problems such as scientific research.

Common ground originally stems from the concept of grounding in the language and communication literature [17] and has been extensively discussed in studies of Computer-Mediated Communication (CMC) [59]. It is defined as the sum of mutual, common, or joint knowledge, beliefs, suppositions and protocols shared between people when they are engaged in communications, and it is incrementally built on the history of joint actions between communicators. In CSCW, where communication becomes part of and instrumental to work activity, common ground is distinguished between two types of coordination: content and process [18], which further delineates the Olsons' general notion of common ground. Content common ground depends on an abundant shared understanding of the subject and focus of work (know-that), while process common ground depends on a shared understanding, as well as a continual updating, of the rules, procedures, timing and manner by which the interaction will be conducted (know-how). Convertino and his colleagues studied the development of both types of common ground in an emergency management planning task that involved small teams of diverse experts. Their findings indicated that process common ground increased over time, with decreasing information queries and strategy discussions about how to organize activities; in contrast, content common ground was created and tested through concept clarification and revision [20]. Furthermore, to coordinate multiple roles within teams, they suggest that a multiple-view approach, which differentiates a shared team view from role-specific details, enables teams to filter out detailed differences, construct team strategies, and learn serendipitously about knowledge and expertise within the team [19], which lends support to our previous account of the third space in interdisciplinary teams.

In our interdisciplinary teams, there is a natural distinction between content domain expertise (i.e., bio-medical experts) and analytics process expertise (i.e., data scientists) when they come into collaboration with each other. We argue that the delineation of content and process common ground exists in these bio-medical research collaborations. Moreover, what content and process common ground contain may differ from the aforementioned communication and emergency management scenarios, which usually have a better-defined shared purpose and sometimes shared conventions and procedures as well. Additionally, over the time course, content and process common ground will also develop in different ways for both parties within teams and will need different support.

3 METHODS

3.1 Participants

Participants were recruited through snowball sampling via recruiting emails. Snowball sampling has the major advantage of efficiently locating targeted participants with adequate research expertise, who may be remote. As bio-medical scientists are not common informants in HCI studies, it is hard to find many of them locally. We also acknowledge the limitations of snowball sampling, such as selection bias [4], and we include more discussion in the Limitation section. In total, 22 informants from 2 large enterprises (12 out of 22) and 10 research institutions (10 out of 22) in the U.S. were interviewed, reporting on a variety of 26 research projects (see Table 1). Among them, 16 identified themselves with a major role of being a data scientist in the project, 6 with a role of being a bio-medical scientist, and a few of them had a secondary role as a project manager or organizer. We have more data scientists due to the fact that, as participants reported, in real-world practice, one bio-medical scientist often worked with multiple data scientists, or a small domain expert panel consulted with a crowd of data scientists. The informants were quite experienced, as they reported having on average 5 years of experience working in their expert domain (ranging from 3 years to 19 years). The projects they reported also covered a wide range of topics and team structures (from small teams with local and remote collaborators to large crowdsourcing collaborations). More details about informants and projects can be found in Table 1. Throughout this paper, data scientists will be denoted as "DS", bio-medical scientists as "BMS".

3.2 Semi-structured Interview

Semi-structured interviews were conducted during a 3-month period in the summer of 2017 as the main research method for this study, including 19 face-to-face interviews and 3 remote interviews using Skype audio chat and telephone. All the interviews were recorded and later transcribed into text.
We asked the informants why they collaborated with the other domain, what data sets and tools they used, how they analyzed the data, how they communicated with each other, what outcomes they achieved. In particular, we encouraged them to recall their experience from one recent project, and we followed their storytelling with prompt questions. During the interview, informants were also asked to provide artifacts, such as source links to data sets, team meeting notes, project agendas, working documents, data analysis results and publications, presentation slides, questions and answers in community forums, and so on. 3.3 Data Analysis and Verification The interview transcripts were first segmented into four dimensions of Olson’s framework (common ground, coupling of work, collaboration readiness, and technology readiness) as well as specified on content versus process sub-dimensions in the common ground dimension using a deductive coding approach [22]. And then for each dimension, an inductive coding [13] was conducted to discover salient themes regarding data, tools, processes and people. Two coders iteratively coded the transcripts and discussed descriptive memos about emerging themes from the data, and then developed axial codes that captured relationships within and across dimensions. New codes were added when necessary until theoretical saturation [23]. In the end, the two coders cross-checked and compared their codes. If there was a disagreement, they revisited and discussed the theoretical framework and transcripts, and then made decisions about whether to keep the codes or disapprove and toss them out. Proc. ACM Hum.-Comput. Interact., Vol. 3, No. GROUP, Article 237. Publication date: December 2019. Table 1. Informants' Roles, Projects, Team Structures, Initial Goals, and What Really HappenedInformantsRolesProjects Team Structures ¹Initial Goals ²What Really Happened ²I1BMSP1: genetic analysis of cancerS, C, RFindAFindAI2BMS (DS)P2: cancer mammography predictionL, RFindAFindAI3BMS (manager)P3: real-time stress predictionS, C, RFindAAskQI4BMS (manager)P4: cancer medication adherenceS, C, RFindAAskQI5BMS (manager) P5: ALS prediction and stratification; P2: cancer mammography predictionL, RFindA (P5); FindA (P2)FindA then AskQ (P5); FindA (P2)I6BMS (manager)P6: Multiple Sclerosis databaseL, RAskQAskQI7DSP7: cancer precision diagnosis and personalized treatmentL, RFindAFindA then AskQI8DSP8: lung cancer predictionL, RFindAFindAI9DSP9: sepsis progression and mortality predictionS, C, RFindAFindAI10DSP4: cancer medication adherence; P10: sentiment analysis of breast cancerS, C, RFindA (P4); AskQ (P10)AskQ (P4); AskQ (P10)I11DSP11: genetic analysis of cancerS, C, RFindAFindAI12DSP12: genetic analysis of children's asthmaS, C, RFindAFindAI13DSP13: multi-stage medical treatment effectiveness; P14: medical ontologies for Zika virus detectionS, C, RFindA (P13); AskQ (P14)FindA (P13); AskQ (P14)I14DSP15: time series modelingS, C, RFindAFindA then AskQI15DSP16: Huntington's disease stage classificationS, C, RFindAFindA then AskQI16DSP17: Huntington's disease progressionS, C, RFindAFindA then AskQI17DSP18: molecular structures of olfactionL, RFindAFindAI18DSP19: open discoveries of disease diagnosisS, C, RAskQAskQI19DSP20: causal modeling of opioid addiction; P21: prognostic sepsis modelingS, C, RFindA (P20);FindA (P21)AskQ (P20);FindA then AskQ (P21)I20DSP22: genomic tumor mutations; P23: prognostics of breast cancerS, C, R (P22); L, R (P23)FindAFindA then AskQI21DS (manager)P24: treatment 
4 FINDINGS
Guided by the Olsons' framework, we organize the findings in the following order: coupling of work, collaboration readiness, technology readiness, and common ground.
4.1 The Coupling of Work
The coupling of work, as introduced in the related work section, is often related to the nature of the project topic. The projects reported by informants cover a wide range of topics, from fundamental scientific research, such as exploring the causes of disease with cell or animal experiments, to translational and applied research that aims to develop new diagnostics, treatments, and other related applications.
4.1.1 Common Workflow. Despite the variety of project topics, most of the reported projects follow a common high-level workflow. Figure 1 shows an ideal, trouble-free process. The bio-medical scientists collected or curated a data set, asked a research question, and discussed it with the data scientists. The bio-medical research question was then translated into a data science question, and a solution to the latter was implemented in modeling algorithms by the data scientists. There was a final evaluation step in which the data scientists synced result interpretation and model evaluation with the bio-medical scientists. Notably, this workflow of formulating bio-medical questions, translating them into DS questions, implementing algorithms, and evaluating and sometimes revising the research questions is non-divisible and highly iterative (see the Common Ground section for more results).
"We brainstorm together and propose in the slack channel whenever someone has some new idea to test, try different models and quickly ITERATE prototypes in experiments to see if ideas work." (I3, BMS, P3)
Fig. 1. A simplified ideal version of the common workflow
4.1.2 Team Structure and Coupling of Work. In terms of organizational structure, all the small-group teams are managed by a researcher in the team, while in large crowdsourcing collaboration projects, a management team of organizers or project managers was responsible for structuring, monitoring, consulting, and managing sub-teams along the process. The small teams often work in a closely-coupled style, and the common understanding of how to facilitate closely-coupled work is also applicable here. For example, timely communication and coordination were pointed out as essential for the success of these collaborations.
"... we have a lot of iterations, in deep, frequent conversations...we have weekly video meetings and frequent email checkups." (I19, DS, P20)
In large-scale crowdsourcing collaborations, the aforementioned management team often helps to divide the bio-medical research question into various sub-questions, so that the multiple sub-teams in the large collaboration can each focus on one clearly specified problem space at a time, and can collaborate with other sub-teams in a loosely coupled manner.
Additionally, the management team also makes efforts to regulate the proper level of coupling throughout the process to clarify questions and engage participants.
"We also track forum questions [from other sub-teams], and provide feedback to clarify if anything [is] unclear about our data or questions...[we have] as well as webinar coaching sessions and expert advisory boards to engage participants [from other sub-teams] in learning." (I22, DS, P5)
4.2 Collaboration Readiness
Collaboration readiness refers to the collaborators' willingness and engagement level in a collaboration. Informants were asked why they collaborate, how they start to collaborate, and how their involvement proceeds over the process. From their answers, we can extract and identify the commonality of motivations in each of the two stakeholder groups (BMS, DS). We are also interested in whether their motivation and level of engagement in the project remain the same as the project proceeds. We leave the findings about the mismatch of motivations and engagements to the Common Ground section (see Section 4.4.4).
4.2.1 Challenges of Maintaining Motivation. At the beginning, people are all motivated to collaborate, because reciprocal skills and resources served as "a natural attraction for collaborations" in the data-centric bio-medical projects (I12, DS, P12). However, these motivations and engagement levels of different experts in a team keep changing dynamically over time. Informants in small teams reported the tendency for their project to soon become heavily dependent on a few core members to manage the progress and divide the work, which can be very frustrating and reduce motivation and engagement in continuing the project.
"I sometimes feel others are too much dependent on me [as both project manager and domain expert]...The team can be paralyzed...stagnant without moving forward." (I4, BMS)
In comparison, sub-teams in large crowdsourcing collaborations do not suffer from heavy managerial overhead in the short term, thanks to the separate management teams. However, these informants reported challenges in sustaining motivation over the longer term. These projects usually last 3 to 4 months. For many informants, it is a one-time deal. These collaborations rarely develop into a next collaboration, especially if their solution did not come out as a winner of the internal competition or yield a concrete publication as the final credit. The short life span (a few months) of collaborations in these large crowdsourcing projects stands in stark contrast to the traditional bio-medical research project's long life cycle (years and decades).
"Only the top winners have the opportunity to collaborate on publications after the challenge ... It is difficult to navigate to find collaborators in [large-crowd] challenge as we barely know each other." (I8, DS, P8)
4.2.2 Reward Attribution and Over-Competing with Other Teams. Being the first to find the best result, inherent in the nature of scientific research, encourages a culture of competition, which was also reported by many informants. Sometimes it prevents collaborations from scaling up and thus limits innovative scientific discoveries. For small teams, it is obvious that the researchers in one team are competing with other teams, so they do not want to share data, processes, or tools with other teams in the research community.
"We are not comfortable with sharing data or analyses before publication...[even if you share,] your work will not necessarily be acknowledged." (I10, DS, P10) In large crowdsourcing collaborations that involved multiple sub-teams, over-competition is also seen as a main factor prohibiting real scientific discovery. The leaderboard type of evaluation, where each sub-team could submit a solution and all the solutions are ranked using a test data set with one metric (e.g., prediction accuracy), is problematic for scientific discovery. It motivates every Proc. ACM Hum.-Comput. Interact., Vol. 3, No. GROUP, Article 237. Publication date: December 2019. Data Scientists and Domain Experts in Scientific Collaborations 237:11 team to work towards a higher ranking on the leaderboard, instead of focusing on the Bio-medical scientific discovery (e.g., whether DS results are meaningful to the current BMS question), or to find new insights from the data outside the given DS question space. After all, scientific discovery is not only about finding incremental improvements as the right answer, it is also about asking the right and sometime disruptive questions inspired by the data. "everyone copies and tweaks the best solution a bit to win a little, there is very limited innovation...but full of repetitive solutions." (I5, BMS) 4.3 Technology Readiness Informants reported usages of various technologies in the research process, supporting both content and progress common ground. And these technologies could be categorized into: Co-Editing systems, Communication systems, Co-Creation systems with version control, Data and code repositories, and Expertise systems (see Table 2). Co-Editing systems include Google Docs, Google Sheets and some other online editors, which informants used to plan or moderate project progress, and to organize project descriptions or progress summaries; Communication systems such as Slack, emails, and Skype are always useful for exchanging information quickly and tracking discussion threads; Git version control systems can help with organizing the data and code, and they are often integrated with a shared Data or Code repository system; and finally the expertise system consists of domain-specific knowledge (e.g., bio-medical ontology) where the DS collaborators can learn and query. The challenges with teams’ technology readiness are intertwined with the collaborator’s back- grounds (being a DS or BMS), and are dynamically changing over time. Proc. ACM Hum.-Comput. Interact., Vol. 3, No. GROUP, Article 237. Publication date: December 2019. 237:12 Mao and Wang, et al. Information Needs and Tool Preferences. Informants in different roles reported different 4.3.1 information needs, which resulted in different preferences over technology selections. BMSs have a focus on transparency and interpretability regarding the BMS problem, the data, the general process and the results, whereas DSs prioritize the performance, generalizability and efficiency of the DS model. "It would be helpful to see a written documentation of pre-processing and if any transformations, any alternative methods considered or compared...These decision points can be seen clearly...lead to a trustworthy result interpretation." (I4, BMS, P4) "I would like to search for previous examples with similar data structures more effi- ciently... 
4.3 Technology Readiness
Informants reported using various technologies in the research process, supporting both content and process common ground. These technologies can be categorized as: Co-Editing systems, Communication systems, Co-Creation systems with version control, Data and code repositories, and Expertise systems (see Table 2). Co-Editing systems include Google Docs, Google Sheets, and some other online editors, which informants used to plan or moderate project progress and to organize project descriptions or progress summaries. Communication systems such as Slack, email, and Skype are always useful for exchanging information quickly and tracking discussion threads. Git version control systems can help with organizing data and code, and they are often integrated with a shared data or code repository system. Finally, expertise systems consist of domain-specific knowledge resources (e.g., bio-medical ontologies) from which the DS collaborators can learn and query. The challenges with teams' technology readiness are intertwined with the collaborators' backgrounds (being a DS or BMS) and change dynamically over time.
4.3.1 Information Needs and Tool Preferences. Informants in different roles reported different information needs, which resulted in different preferences over technology selections. BMSs focus on transparency and interpretability regarding the BMS problem, the data, the general process, and the results, whereas DSs prioritize the performance, generalizability, and efficiency of the DS model.
"It would be helpful to see a written documentation of pre-processing and if any transformations, any alternative methods considered or compared...These decision points can be seen clearly...lead to a trustworthy result interpretation." (I4, BMS, P4)
"I would like to search for previous examples with similar data structures more efficiently... I also hope to extend my model built on the asthma data set as a recipe to cancer and other disease domains" (I12, DS, P12)
Secondly, informants' personal habits and social norms from their respective backgrounds also lead to different tool preferences. When the two backgrounds work together, they tend to find overlapping tools that both parties can handle. This often results in the team selecting the most familiar tools that all members are comfortable with, rather than trying out more advanced new tools. When asked why commonly used DS technologies, such as Jupyter notebooks and other cloud platforms, were not used in the team, informants explained that "persuasion cost is high" (I10, DS) and "training takes time" (I3, BMS). One BMS informant (I5), who also served as an organizer in a large crowdsourcing project, reported that he once tried to unify the selection of programming tools for all the sub-teams (a particular version of Python and a runtime environment), and that decision significantly reduced the sub-teams' engagement and outcomes.
"We specified everyone to use python and provide written documentation using specified format in one challenge... but participation rate was much lower compared to previous challenges...And we never ask to use a unified tool again." (I5, BMS, P5)
4.3.2 Fragmented Information. Informants struggled a lot with information fragmented across the different systems and tools in a research project, especially in small-group collaborations where there is no specialized management role for tracking and synthesizing information from tools used for different purposes and at different stages. This becomes more difficult when the two types of common ground are managed sometimes with the same tools and at other times with different tools over the course of the process.
"we conduct analysis on local computers using our preferred coding tools and languages, use google docs to summarize project progress internally, present slides to share progress with other stakeholders, shoot quick thoughts to each other in emails or slack messages ..." (I9, DS, P9)
4.4 Common Ground
Most informants reported that the major challenge in their collaboration was establishing the common ground at the beginning of the project and maintaining it throughout the process.
4.4.1 Formulating the Initial BMS and DS Research Questions. Common ground in formulating research questions at the very beginning means that BMSs and DSs work together to define a bio-medical domain-specific research question and transform it into a computable DS question. In small teams, this is less challenging than in large crowdsourcing projects because, as mentioned above, the collaborators in small teams are quite motivated to work together at the beginning. The two research questions (the BMS one and the DS one) converge. The BMSs believe that they want to find an answer to the BMS question, and the DSs believe that they interpret what the BMSs want into a DS question and that their job is to find an answer to the DS question.
"We are working on classification of disease progressive stages...We understand from our [BMS] collaborators ... many rare disease lack proper measurement metrics to [indicate whether it is] cured or improved, and thus we have to see how clusters emerge from the data [before building classification model]."
(I15, DS, P16)
In large crowdsourcing projects, it is more complicated, with more ground to align. Often an expert panel consisting of organizers, BMSs, and DSs is assembled to propose problems that are meaningful and impactful for BMS, as well as feasible and time-wise manageable for DS. Sometimes this expert panel even needs to conduct dry runs in which they simulate a team working on the project to confirm that the question is resolvable within a period of time. I5 (BMS) and I22 (DS) have served on such a panel; they reported that planning such a large crowdsourcing collaboration could take months.
"In question formulation, we involve different disciplines to ask proper questions. We consult a pool of experts to ensure the problem is important and feasible as well as clear to operate on." (I5, BMS)
"when designing a data challenge, we would arrange a dry run internally, with 1 or 2 people proposing and running 2 or 3 algorithms individually, this serves as a baseline for participants" (I22, DS)
4.4.2 New Research Questions Emerge During the Project Process. It may not be a surprise to readers that scientific research questions keep evolving quickly as a project progresses, but it was definitely a surprise and a frustration to some of our informants. Many of them reported starting with one particular question and ending up with "a set of totally different questions" (I19, DS, P20), or sometimes "better questions" (I3, BMS, P3). In small-group collaborations, new questions emerge more frequently throughout the project process, while in large-crowd collaborations, new questions often emerge at the end and point to future research directions.
"our question evolves from what is addiction, to a set of very different questions like what is overdose, to what is abuse, to what is dependence? [This] depends on the ground truth we actually have from the data ... we later decide to focus on morphine and hypothesize about differences between natural versus herbal ones and synthesized." (I19, DS, P20)
Sometimes the evolved research question is a better question, and the "right question" (I3, BMS, P3) to ask compared to the original one. Thus, finding an answer to the original question becomes less important.
"we started out to ask what is stress, which context causes stress, how to measure stress ... over time we decided to focus on disease-related stress and how to build applications to monitor and design interventions...a much better question...more impactful." (I3, BMS, P3)
Overall, such evolution of questions breaks the initial common ground and requires dynamically building new common ground. In the earlier stage, the BMSs thought they wanted to find the right answer, and the DSs agreed to find the right answer to the initial DS question. Then, as the project unfolds, the BMSs may or may not realize that their true interest has changed from finding the right answer to finding the right question by asking more possible questions. Sometimes, if this change is not clearly expressed, the common ground is broken. A low level of common ground, though to some extent good for allowing scientific discoveries to evolve over time, causes confusion in the team.
In I9 (DS)'s case, the initial problem raised by their BMS colleague was to design and arrange treatment resources for sepsis patients, and this was transformed into a DS problem of predicting patients' life span before mortality. As the DSs worked on defining mortality and dealing with missing information in the data set, the BMSs came up with more questions regarding types and progression of disease severity, which would allow them to focus on understanding patients at different stages with various symptoms. As the DS commented, "We were lost in which model to build and which outcome we should focus on..." (I9, DS, P9)
4.4.3 Obscure Data. Two reasons were stated as causes of such evolution of research questions: obscure data and BMSs' intention to "ask the right question". The open science context provided much easier access to raw data curated and collected by other researchers in the community, but did not necessarily guarantee easy understanding of the data. Ambiguity, bias, and potentially missing information in bio-medical variables are particularly troublesome. Contextual information, such as medical practices, clinical trial routines, regulations, and direct impacts on patients, does not come with the meta-data or protocols but is essential for making sense of the data and asking the right question. This is a critical issue for both small-group and large-crowdsourcing projects.
"I have to check across a lot of sources to clarify the implications and rule out ambiguity and biases, including standard diagnosis codes like ICD-9, pharmacy diagnosis, enrollment insurance types, typical patient demographics specific to the disease." (I4, BMS, P4)
DSs reported how important it was for BMSs to communicate the "data structure" with them for understanding features and relationships in the data sets. But there lacks a consistent definition and understanding of the data structure, and a common language to communicate and discuss it. "[Such information] is a hidden knowledge, a sense, and mostly gained from experience and becomes your routine" (I7, DS, P7). From project to project, data structures appear in different forms, jargons, and routines, and are a composite concept of experiential knowledge containing:
"data types and distributions like if cross-sectional or longitudinal or matrix and if there's seasonality or skew; whether there is a clear binary or continuous outcome for analysis or it is high dimensional multivariate data" (I13, DS, P13)
Difficulties in communicating data structures could lead to further challenges in evaluating the methods and interpreting the results, and cause BMSs' frustration and distrust around this "big black box", as quoted from I4 (BMS, P4) and I17 (DS, P18).
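As an illustration of what a communicable "data structure" summary could look like, the sketch below collects the properties I13 enumerates into a single shareable artifact. It is a hypothetical helper of our own (the function name and returned fields are not from any tool informants used), assuming the data set is a pandas data frame.

```python
# A hypothetical sketch (not a tool informants used) that summarizes the
# "data structure" properties I13 lists: design, outcome type, skew, and
# missingness, so a DS and a BMS have one shared artifact to discuss.
import pandas as pd

def profile_data_structure(df, outcome, id_col=None):
    """Summarize the 'data structure' of a data set in one dict."""
    y = df[outcome]
    numeric_outcome = pd.api.types.is_numeric_dtype(y)
    # Longitudinal data repeats the same subject id across rows;
    # cross-sectional data has one row per subject.
    longitudinal = id_col is not None and df[id_col].duplicated().any()
    if y.nunique() == 2:
        outcome_type = "binary"
    elif numeric_outcome:
        outcome_type = "continuous"
    else:
        outcome_type = "categorical"
    return {
        "n_rows": len(df),
        "n_features": df.shape[1] - 1,
        "design": "longitudinal" if longitudinal else "cross-sectional",
        "outcome_type": outcome_type,
        "outcome_skew": float(y.skew()) if numeric_outcome else None,
        "missing_rate": float(df.isna().mean().mean()),
        "high_dimensional": df.shape[1] > len(df),  # more features than rows
    }
```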
4.4.4 Ask The Right Question. What are alternative ways to ask questions? BMS informants often reported their intention to ask the right question by posing more alternative research questions besides the initial one. They are also frustrated that they do not know whether the translated DS question is a good one or not. DSs are trained to abstract and simplify a realistic problem into an analyzable and computable one; thus, BMS problems are most often translated into prediction problems, in which an outcome is well-defined, the models and algorithms are "mature and well developed", and the evaluation is standardized by "a mathematical loss function" (I20, DS, P22&P23).
"In our bio-medical training, alternative hypotheses are important ways to conduct research. I conducted a lot of literature review to understand what has been established and what is the gap in reasoning. However, when translating a bio-medical question into a data science one, I often wonder what are alternatives. The process seems to be very intransparent." (I4, BMS, P4)
DSs' prone-to-predict tendency can be explained by the different interests and evaluation criteria valued and rewarded in the BMS and DS fields. And it adds to the misalignments in the common ground, as DSs become partially instrumental to BMS. BMSs are mostly interested in results that are meaningful for interpretation and useful for interventions; DSs are driven by developing competitive, innovative, and sophisticated methods, such as ones "no one has tried before" (I12, DS, P12), that "beat existing methods in accuracy" (I8, DS, P8), or "complex mathematical models" (I15, DS).
"discovery [instead of prediction] that can be useful to provide actionable insights for high-stake life or death issues ... we are always reproducing predictive models with higher predictive capabilities in the field. However, bio-medical problems rarely have a clear outcome to make predictions... we are more interested in what intervention can be done rather than whether a prediction is accurate." (I4, BMS, P4)
In small project teams, this prone-to-predict tendency seems more severe, while in large crowdsourcing collaborations, the wisdom of the crowd is able to pool diverse perspectives and considerations to look at the same problem.
"At later stage of the data challenge, an ensemble method, which is a linear combination, was applied to aggregate across the winning teams' individual models, to learn from different focuses and merits in different approaches and for discovering new insight." (I20, DS, P23)
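The linear-combination ensemble I20 describes can be sketched in a few lines: the final prediction is a weighted average of the winning teams' individual predictions. The models and weights below are placeholders, since informants did not report the actual weighting scheme; we assume scikit-learn-style classifiers with a `predict_proba` method.

```python
# A minimal sketch of the linear-combination ensemble I20 describes. Models
# and weights are placeholders; the challenge's actual weighting scheme is
# not reported by our informants.
import numpy as np

def ensemble_predict(models, weights, X):
    """Linear combination of the individual models' predicted probabilities."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                                    # normalize to sum to 1
    preds = np.stack([m.predict_proba(X)[:, 1] for m in models])
    return w @ preds                                   # weighted average per sample
```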
5 DISCUSSION
5.1 Successful Collaborations In Scientific Discovery
These reported open science projects are characterized by their team sizes (small or big scale), distinct complementary interdisciplinary nature (bio-medical as the content domain and data science as the solution domain), tight collaboration process (rather than simple resource sharing), as well as the long-term and transformational nature of scientific discovery. Using the Olsons' four-dimension framework for successful distributed collaborations in scientific research (coupling of work, common ground, collaboration readiness, and technology readiness), we organize our results according to this framework and focus mostly on the common ground dimension as the major challenge. Particularly in the diverse contexts of data-centric open science projects, we take into account the contrasts between small and big teams and the dynamically evolving nature of scientific discovery.
5.1.1 Coupling of Work, Collaboration Readiness, and Technology Readiness. Our findings regarding the coupling of work echo what the Olsons' framework suggests: the tight coupling within small teams requires timely communication and coordination. Loose coupling was a pre-requisite for successful distributed collaborations, such as in the large-scale crowdsourcing projects. However, most of these open science projects were non-divisible and highly iterative, which made assigning modular work to each location and setting up routines impossible. Similar to the result from a previous study [10], tight coupling under proper management was not challenged by remote technologies but rather helped to enhance common ground and collaboration readiness. Our findings suggest that collaboration readiness is challenging within small project teams as well as in sub-teams of large-scale projects. In the Olsons' original framework, collaboration readiness was seen as how team members were motivated to engage with each other. However, in these reported open science projects, more aspects of organizational structure came into play, including dependence between different expertise within teams, relationships between teams, as well as changes over the time dimension.
Similar to previous research on domain experts collaborating with computer scientists in cyberinfrastructure [55] or in civic data hackathons [41], each party comes in with a different research agenda, which has been analyzed as "the dual-goal dilemma". This tension also exists between BMSs and DSs in our study, and manifests itself in the tension between asking the right question versus finding the answer in common ground. It is important to carefully weigh both sides' interests in the organizational structure of the team; otherwise, one side will become "merely" instrumental as consultants and implementers to the other [3]. We can also learn from existing successful experiences. The introduction of a broker role to serve as the bridge between domain experts and data scientists, translating one stakeholder's goals to the other, proved useful in civic data hackathons [41] and large-scale collaborations [70, 75, 101]. Thus, we expect to see smoother and more successful collaborations if someone in the collaboration can play the broker role. In terms of technology readiness, informants reported a wide range of tools, from co-editing systems to communication systems, and the reported use practices are consistent with prior literature (e.g., [96, 99]) and thus are not listed. At the same time, BMSs and DSs have different information needs and tool preferences, and when they come into collaboration as a team, they usually choose the tools most familiar to all members (mostly aligning with the BMSs' comfort level) rather than trying out new advanced tools. This is similar to prior findings on co-editing technologies [98], and a National Science Foundation report warned that if domain experts are weighted too heavily in the organization, procurement of existing technologies will be much overemphasized compared to development or adoption of new technologies [3]. Furthermore, our informants also expressed concerns about managing multiple tools as well as trying out new tools to meet the needs of quickly-evolving common ground. In particular, tool interoperability between team members and across the research process was critical. Compared to project management in general workplaces, managing interdisciplinary research projects can be more difficult due to their ambiguous and ever-evolving nature, and to the lack of awareness of and resources allocated to management [49]. Thus, training in project management and new tool adoption may be helpful.
5.1.2 Ever-Evolving Common Ground and Better Scientific Discovery. We found that common ground continued to be a key issue for both small and large-scale project teams in open science.
In our findings, a "third space" [61] naturally came into being when BMS and DS started collaborating. In this shared common space, separate from each of their own domains, BMS and DS initiated a concrete common ground of what the BMS and DS research questions are, building dialogues and terms around the "data structure" with hybrid languages and training from their distinct domains, negotiating tools shared by the entire group, as well as showing promise in constructing new understandings of the initial problem. In particular, the boundaries between BMS and DS in this "third space" continued to blur, and thus new possibilities of asking questions emerged. Many informants in our study reported unexpected turns in their research projects, starting from one question and ending by answering another, better question or coming up with more alternative questions. This echoes Convertino's previous findings in group emergency management [20]. In both cases, process common ground regarding know-how seems to keep increasing through joint activities within the team, while content common ground keeps being re-articulated, broken, and revised throughout the process. Different from teams in general workplaces, which are driven efficiently towards clear business goals and specific performance evaluations that match one optimal solution to a well-specified problem [65], this differentiation of content and process common ground, and their development and interaction with each other over time, becomes more salient and critical. In our context of bio-medical research collaboration, the content common ground takes the form of research questions encapsulating a complicated composite of variables and relationships in ambiguous data sets, and of the training and nature of bio-medical research to "ask the right question" as well as research goals, in contrast to the new concepts and terms of the emergency management context [20]. This is consistent with the account that scientific discovery teams operate on the foundation of alternative explanations and different voices, exploring and ruling out many possibilities rather than exploiting a set of existing successful solutions [83]. Moreover, it is the increasing process common ground that, in fact, makes the breaking and updating of content common ground possible. Specifically, the need for a new communication protocol around what is "the right question" rises over the research process. Further effort is needed to recognize changes in both types of common ground from both the BMS and DS communities. Failing to do so may cause confusion, low productivity, and less ideal scientific discovery. For example, teams could get confused about what the current content common ground is without the support of increasing process common ground, get "frozen" with the established content common ground [51], "seized" by shared information bias [86], and settle on "premature consensus" or "early closure" [47] around less optimal questions or solutions instead of advancing to the next stage of scientific discovery.
In order to examine the validity of this preliminary finding and understand the detailed needs of BMS and DS, further research is necessary to devise measurements for both content and process common ground specific to bio-medical research collaboration in the wild, compared to controlled experimental settings [20, 21].
5.2 Principles for Technology Design
From our findings, the biggest challenge in open science projects seemed to be the quickly-evolving common ground, with the purpose of advancing scientific discovery by asking the right question instead of finding answers within a constrained space. It affects the other three dimensions in the Olsons' framework: coupling of work, collaboration readiness, and technology readiness. It is also related to the theme of integration of heterogeneity from the seven common themes for designing and researching current and future e-Research cyberinfrastructures, articulated by Ribes and Lee in a theoretical summary [78]. We refer to the related literature and discuss principles and potential designs to address this issue. For both small-group and large-crowdsourcing collaborations, asking the right question depends on steadily developing process common ground in terms of conventions and procedures, while constantly re-establishing content common ground through more and better questions as the research focus. Consistent with the "third space" in interdisciplinary collaborations, a multiple-view approach that differentiates a shared team view from role-specific details has been found to be effective for group tasks [19]. In terms of what is to be shared in the common view, two principles are suggested here. Firstly, a divergent-to-convergent two-stage path [71] can help structure the tightly coupled communication in the "third space". This path starts from pooling and sharing different perspectives for more questions, and proceeds to comparing and evaluating for better questions. The communication systems reported in Table 2 may be further improved to support and keep track of this divergent-to-convergent model by explicitly enabling users to brainstorm ideas, then summarize them, and later evaluate the different ideas. Secondly, it would be helpful to differentiate the two types of common ground, as they develop differently over time and affect team members and their roles differently. For example, team members could see not only the current status of shared objects, but also the changes in historical states [38], similar to how today's co-editing systems (Table 2) integrate with version control systems [97], raising awareness of changes over time in separate views for common content knowledge and process protocols.
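As a minimal sketch of this second principle, the hypothetical store below keeps separate, append-only revision histories for content and process common ground, so a team member can inspect not just the current state of a shared object but how it changed over time [38]. The class and field names are our own illustration, not an existing system.

```python
# A minimal sketch (our own illustration, not an existing system) of keeping
# separate, versioned views for the two types of common ground.
from dataclasses import dataclass, field
from datetime import datetime
from typing import List

@dataclass
class Revision:
    author: str          # e.g., "I3 (BMS)" or "team"
    timestamp: datetime
    text: str            # state of the shared object at this revision

@dataclass
class CommonGroundStore:
    content: List[Revision] = field(default_factory=list)   # e.g., the current research question
    process: List[Revision] = field(default_factory=list)   # e.g., conventions and procedures

    def update(self, view, author, text):
        """Append a new revision instead of overwriting, preserving history."""
        getattr(self, view).append(Revision(author, datetime.now(), text))

    def current(self, view):
        return getattr(self, view)[-1].text if getattr(self, view) else ""

# Usage: the research question (content) is revised while procedures (process) accrue.
store = CommonGroundStore()
store.update("content", "I3 (BMS)", "What is stress?")
store.update("content", "I3 (BMS)", "How can disease-related stress be monitored?")
store.update("process", "team", "Weekly video meetings; ideas proposed in Slack first.")
```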
5.3 Project Management Guideline
On the other hand, a non-technical solution may complement the technical ones for small teams without a specialized project manager that face challenges in managing fragmented and repetitive information or maintaining collaboration readiness over time. This might be due to the lack of awareness, expertise, and resource allocation for project management compared to the conduct of the science research [68]. Training workshops could be helpful for researchers to learn about good team leadership, facilitation, and process management. Leveraging existing technology, a shared vocabulary wiki page and data documentation could help the DSs and BMSs stay in sync on their understanding and collaboration awareness: what questions the BMSs are interested in right now, and what questions the DSs are working on. Furthermore, specialized project management tools with interoperability across other tools could be developed to address this issue.
5.4 AI as a Partner in the Future of Data-Centric Scientific Discovery
We have seen a gap between the BMS and DS in our study in the sense of asking questions, translating the BM question into a correct DS question, and interpreting the DS results. BMSs sometimes distrust the results, and DSs sometimes have different priorities in methodologies and solutions that might over-simplify the question. More importantly, as shown in our results, BMSs need an iterative loop with many redundant DS attempts to be inspired by the data, the models, and the results generated by DSs. These differences, if not properly shared, communicated, and integrated within the group, could become hidden biases that hold back the progress of scientific discovery. The work of Tversky and Kahneman [44, 91] argues that people, even scientists and data scientists who are professionals in analyzing data, have trouble thinking statistically and reasoning about data. This contributes to the growing reproducibility crisis of recent years, in which the results of many scientific studies are difficult or impossible to replicate in subsequent investigations [76, 85]. It can have a significant impact on judgments and decisions around data and can even reverse decisions. It has been a robust phenomenon in the bio-medical field, affecting diagnosis, treatment and lifesaving, and medical resource allocation and management [1, 31, 53]. In recent years we have seen fast and vast research efforts in using one special group of machine learning techniques to design another machine learning algorithm [56, 62]. In particular, AutoML (automated machine learning) refers to a type of technology that requires only minimal user effort: after uploading the data set and specifying the target and the DS method type (e.g., regression or binary classification), the AI can automatically generate new features, select features, search alternative models, and tune the models' parameters to reach an optimal solution (often quantified by an accuracy metric) [48]. With these systems, non-data-scientist users like the BMSs in this paper may now have the capability to directly build machine learning models around their domain-specific research questions. In a potential AI-human collaboration future, BMSs and DSs can leverage AutoML systems to quickly generate many ways of asking questions (including predictions and open discoveries) at different stages of the research process, and the machine may make less biased judgments despite the DSs' or BMSs' competing interests. AutoML may never fully liberate the human DSs, but we expect it could work as a partner in human DS teams (e.g., as conversational agents, as illustrated in [82]) and help the BMSs in this right-question formulation process. Certainly this is hard to achieve, because in addition to technical development, many non-technical aspects (e.g., anthropomorphism [88]) need to be taken into account. But we choose to work toward this future because it is hard.
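To ground the description of what AutoML automates, here is a deliberately simplified sketch of the model search-and-tuning loop, written with scikit-learn. Real AutoML systems [48] also generate and select features automatically, which is omitted here; the candidate models and parameter grids are arbitrary choices for illustration, not any product's search space.

```python
# A deliberately simplified sketch of the search loop an AutoML system
# automates: try alternative models and parameters, keep the best by one
# accuracy-style metric. Feature generation/selection [48] is omitted.
from sklearn.model_selection import GridSearchCV
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

def tiny_automl(X, y):
    """Search alternative models and hyperparameters; return the best one."""
    candidates = [
        (LogisticRegression(max_iter=1000), {"C": [0.1, 1.0, 10.0]}),
        (RandomForestClassifier(), {"n_estimators": [100, 300],
                                    "max_depth": [None, 5]}),
    ]
    best_score, best_model = -1.0, None
    for model, grid in candidates:
        search = GridSearchCV(model, grid, cv=5, scoring="accuracy")
        search.fit(X, y)
        if search.best_score_ > best_score:
            best_score, best_model = search.best_score_, search.best_estimator_
    return best_model, best_score
```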
6 LIMITATIONS
One limitation of this study is the snowball sampling method, which might introduce selection bias [4]. The informants within reach of our social network might be more active than average in participating in open science collaborations and report more positive experiences. Additionally, all our informants are based in the U.S., which does not necessarily represent the diverse cultural differences and wide range of geographical distances found in open science collaborations. The semi-structured interview method is also limited in relying on informants' self-reports, which are subjective, single-sided, and probably over-simplified. In order to understand the details of the dynamic interaction between experts from different disciplines, it is important to design specific measurements for both content and process common ground, to observe contextual interaction within teams in real scenarios, and to conduct longitudinal case studies that track their processes along the research pipeline. Lastly, we picked bio-medical research as our target domain, and it is yet to be studied how these challenges vary for other domains involved in data-centric collaborations in open science, such as physics, geology, and psychology.
7 CONCLUSION
This work reports the challenges that emerged from scientific collaborations between data scientists and bio-medical scientists through interviews with 22 participants. Our study contributes to the existing literature by providing a systematic account of different stakeholders' practices in scientific collaborations. In particular, we differentiate content common ground versus process common ground as a finer-grained level of the common ground concept. We discovered that scientific collaborations require constant breaking of the content common ground while accumulating process common ground, in contrast to most decision-making or problem-solving scenarios, where one decision or solution is the final product. Our results shed light on better practices for future interdisciplinary scientific collaborations, and the system design suggestions are also valuable and actionable for developers and designers who are building data analytic tools and cloud sharing platforms.
ACKNOWLEDGEMENT
We thank all the interviewees who shared their research stories and resources. This work was conducted under the auspices of the IBM Science for Social Good initiative.
REFERENCES
[1] Katrina Armstrong, J Sanford Schwartz, Genevieve Fitzgerald, Mary Putt, and Peter A Ubel. 2002. Effect of framing as gain versus loss on understanding and hypothetical treatment choices: survival and mortality curves. Medical Decision Making 22, 1 (2002), 76–83.
[2] Nazem Atassi, James Berry, Amy Shui, Neta Zach, Alexander Sherman, Ervin Sinani, Jason Walker, Igor Katsovskiy, David Schoenfeld, and Merit Cudkowicz. 2014. The PRO-ACT Database: Design, Initial Analyses, and Predictive Features. Neurology 83, 19 (2014), 1719–1725.
[3] Daniel E Atkins, Kelvin K Droegemeier, Stuart I Feldman, Hector Garcia-Molina, Michael L Klein, David G Messerschmitt, Paul Messina, Jeremiah P Ostriker, and Margaret H Wright. 2003. Revolutionizing science and engineering through cyberinfrastructure. Report of the National Science Foundation blue-ribbon advisory panel on cyberinfrastructure 1 (2003).
[4] Rowland Atkinson and John Flint. 2001. Accessing hidden and hard-to-reach populations: Snowball research strategies.
Social research update 33, 1 (2001), 1–4.
[5] Pieter J Beers, Henny PA Boshuizen, Paul A Kirschner, and Wim H Gijselaers. 2006. Common ground, complex problems and decision making. Group Decision and Negotiation 15, 6 (2006), 529–556.
[6] H Bhabha. 1994. The Location of Culture. London: Routledge.
[7] Matthew J Bietz, Steve Abrams, Dan M Cooper, Kathleen R Stevens, Frank Puga, Darpan I Patel, Gary M Olson, and Judith S Olson. 2012. Improving the odds through the Collaboration Success Wizard. Translational Behavioral Medicine 2, 4 (2012), 480–486.
[8] Jeremy P Birnholtz and Matthew J Bietz. 2003. Data at work: supporting sharing in science and engineering. In Proceedings of the 2003 International ACM SIGGROUP Conference on Supporting Group Work. ACM, 339–348.
[9] Jeremy P Birnholtz and Thomas A Finholt. 2013. Cultural challenges to leadership in cyberinfrastructure development. Leadership at a Distance: Research in Technologically-Supported Work (2013), 195.
[10] Pernille Bjørn, Morten Esbensen, Rasmus Eskild Jensen, and Stina Matthiesen. 2014. Does Distance Still Matter? Revisiting the CSCW Fundamentals on Distributed Collaboration. ACM Transactions on Computer-Human Interaction 21, 5 (Nov. 2014), 1–26. https://doi.org/10.1145/2670534
[11] Susanne Bødker, Pelle Ehn, Joergen Knudsen, Morten Kyng, and Kim Madsen. 1988. Computer support for cooperative design. In Proceedings of the 1988 ACM Conference on Computer-Supported Cooperative Work. ACM, 377–394.
[12] Nathan Bos, Ann Zimmerman, Judith Olson, Jude Yew, Jason Yerkie, Erik Dahl, and Gary Olson. 2007. From Shared Databases to Communities of Practice: A Taxonomy of Collaboratories. Journal of Computer-Mediated Communication 12, 2 (2007), 652–672.
[13] Richard E Boyatzis. 1998. Transforming qualitative information: Thematic analysis and code development. Sage.
[14] Jennifer Carpenter. 2011. May the best analyst win.
[15] CASP13. 2018. 13th Community Wide Experiment on the Critical Assessment of Techniques for Protein Structure Prediction. http://predictioncenter.org/casp13/
[16] CERN. 2018. CERN Annual report 2017. https://cds.cern.ch/record/2624296/files/18030409_CERN_rapport_2017EN.pdf
[17] Herbert H Clark. 1996. Using language. Cambridge University Press.
[18] Herbert H Clark, Susan E Brennan, et al. 1991. Grounding in communication. Perspectives on Socially Shared Cognition 13, 1991 (1991), 127–149.
[19] Gregorio Convertino, Craig H Ganoe, Wendy A Schafer, Beth Yost, and John M Carroll. 2005. A multiple view approach to support common ground in distributed and synchronous geo-collaboration. In Coordinated and Multiple Views in Exploratory Visualization (CMV'05). IEEE, 121–132.
[20] Gregorio Convertino, Helena M Mentis, Mary Beth Rosson, John M Carroll, Aleksandra Slavkovic, and Craig H Ganoe. 2008. Articulating common ground in cooperative work: content and process. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, 1637–1646.
[21] Gregorio Convertino, Helena M Mentis, Alex YW Ting, Mary Beth Rosson, and John M Carroll. 2007. How does common ground increase? In Proceedings of the 2007 International ACM Conference on Supporting Group Work. ACM, 225–228.
[22] Benjamin F Crabtree and William L Miller. 1999. Doing qualitative research. Sage Publications.
[23] John W Creswell and J David Creswell. 2017.
Research design: Qualitative, quantitative, and mixed methods approaches. Sage Publications.
[24] Laura Dabbish, Colleen Stuart, Jason Tsay, and Jim Herbsleb. 2012. Social Coding in GitHub: Transparency and Collaboration in an Open Software Repository. In Proceedings of the ACM 2012 Conference on Computer Supported Cooperative Work. ACM, 1277–1286.
[25] data.gov. 2019. data.gov datasets. https://catalog.data.gov/dataset
[26] datarobot. 2019. datarobot. https://www.datarobot.com/
[27] Sharon J Derry, Christian D Schunn, and Morton Ann Gernsbacher. 2014. Interdisciplinary collaboration: An emerging cognitive science. Psychology Press.
[28] Paul Dourish and Victoria Bellotti. 1992. Awareness and coordination in shared workspaces. In CSCW, Vol. 92. 107–114.
[29] Klaus Fiedler, Peter Juslin, et al. 2006. Information sampling and adaptive cognition. Cambridge University Press.
[30] Greg Filla. 2018. What's New with Watson Machine Learning? https://medium.com/ibm-watson/whats-new-with-watson-machine-learning-4de86aa1469d
[31] Eyal Gamliel and Eyal Peer. 2010. Attribute framing affects the perceived fairness of health care allocation principles. Judgment and Decision Making 5, 1 (2010), 11.
[32] GenBank. 2019. GenBank Statistics. https://www.ncbi.nlm.nih.gov/genbank/statistics/
[33] Yolanda Gil, Mark Greaves, James Hendler, and Haym Hirsh. 2014. Amplify scientific discovery with artificial intelligence. Science 346, 6206 (2014), 171–172.
[34] Google. 2019. Cloud AutoML. https://cloud.google.com/automl/
[35] Michael E Gorman. 2002. Expanding the trading zones for convergent technologies. Converging Technologies for Improving Human Performance (2002), 424.
[36] Michael E Gorman. 2008. Scientific and technological expertise. Journal of Psychology of Science and Technology (2008).
[37] Barbara Gray. 1989. Collaborating: Finding common ground for multiparty problems. (1989).
[38] Saul Greenberg. 1990. Sharing views and interactions with single-user applications. In ACM SIGOIS Bulletin, Vol. 11. ACM, 227–237.
[39] H2O. 2019. H2O.ai. https://www.h2o.ai/
[40] Charles Hill, Rachel Bellamy, Thomas Erickson, and Margaret Burnett. 2016. Trials and Tribulations of Developers of Intelligent Systems: A Field Study. In Visual Languages and Human-Centric Computing (VL/HCC), 2016 IEEE Symposium On. IEEE, 162–170.
[41] Youyang Hou and Dakuo Wang. 2017. Hacking with NPOs: collaborative analytics and broker roles in civic data hackathons. Proceedings of the ACM on Human-Computer Interaction 1, CSCW (2017), 53.
[42] Edwin Hutchins. 1995. How a cockpit remembers its speeds. Cognitive Science 19, 3 (1995), 265–288.
[43] Marina Jirotka, Charlotte P Lee, and Gary M Olson. 2013. Supporting scientific collaboration: Methods, tools and concepts. Computer Supported Cooperative Work (CSCW) 22, 4-6 (2013), 667–715.
[44] Daniel Kahneman. 2011. Thinking fast and slow. Allen Lane.
[45] Sean Kandel, Andreas Paepcke, Joseph Hellerstein, and Jeffrey Heer. 2011. Wrangler: Interactive visual specification of data transformation scripts. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, 3363–3372.
[46] Eser Kandogan, Aruna Balakrishnan, Eben M. Haber, and Jeffrey S. Pierce. 2014. From Data to Insight: Work Practices of Analysts in the Enterprise. IEEE Computer Graphics and Applications 34, 5 (Sept. 2014), 42–50. https://doi.org/10.1109/MCG.2014.62
[47] Norbert L Kerr and R Scott Tindale. 2004. Group performance and decision making. Annu. Rev. Psychol. 55 (2004), 623–655.
[48] Udayan Khurana, Deepak Turaga, Horst Samulowitz, and Srinivasan Parthasrathy. 2016. Cognito: Automated feature engineering for supervised learning. In 2016 IEEE 16th International Conference on Data Mining Workshops (ICDMW). IEEE, 1304–1307.
[49] Bradley L Kirkman, Cristina B Gibson, and Kwanghyun Kim. 2012. Across borders and technologies: Advancements in virtual teams research. In The Oxford Handbook of Organizational Psychology, Volume 2.
[50] Thomas Kluyver, Benjamin Ragan-Kelley, Fernando Pérez, Brian E Granger, Matthias Bussonnier, Jonathan Frederic, Kyle Kelley, Jessica B Hamrick, Jason Grout, Sylvain Corlay, et al. 2016. Jupyter Notebooks - a publishing format for reproducible computational workflows. In ELPUB. 87–90.
[51] Arie W Kruglanski and DM Webster. 1996. Motivated closing of the mind: Its cognitive and social effects. Psychological Review 103, 2 (1996), 263–283.
[52] Robert Küffner, Neta Zach, Raquel Norel, Johann Hawe, David Schoenfeld, Liuxia Wang, Guang Li, Lilly Fang, Lester Mackey, Orla Hardiman, et al. 2015. Crowdsourced analysis of clinical trial data to predict amyotrophic lateral sclerosis progression. Nature Biotechnology 33, 1 (2015), 51.
[53] Anton Kühberger. 1998. The influence of framing on risky decisions: A meta-analysis. Organizational Behavior and Human Decision Processes 75, 1 (1998), 23–55.
[54] Katherine A. Lawrence. 2006. Walking the Tightrope: The Balancing Acts of a Large e-Research Project. Computer Supported Cooperative Work (CSCW) 15, 4 (Oct. 2006), 385–411. https://doi.org/10.1007/s10606-006-9025-0
[55] Charlotte P Lee, Matthew J Bietz, and Alexander Thayer. 2010. Research-driven stakeholders in cyberinfrastructure use and development. In 2010 International Symposium on Collaborative Technologies and Systems. IEEE, 163–172.
[56] Sijia Liu, Parikshit Ram, Djallel Bouneffouf, Deepak Vijaykeerthy, Gregory Bramble, Horst Samulowitz, Dakuo Wang, Andrew R Conn, and Alexander Gray. 2019. A Formal Method for AutoML via ADMM. arXiv:1905.00424
[57] Airong Luo, Dick Ng'ambi, and Ted Hanss. 2010. Towards building a productive, scalable and sustainable collaboration model for open educational resources. In Proceedings of the 16th ACM International Conference on Supporting Group Work. ACM, 273–282.
[58] Jessica R Mesmer-Magnus and Leslie A DeChurch. 2009. Information sharing and team performance: A meta-analysis. Journal of Applied Psychology 94, 2 (2009), 535.
[59] Andrew Monk. 2003. Common ground in electronically mediated communication: Clark's theory of language use. HCI Models, Theories, and Frameworks: Toward a Multidisciplinary Science (2003), 265–289.
[60] Michael Muller, Ingrid Lange, Dakuo Wang, David Piorkowski, Jason Tsay, Q Vera Liao, Casey Dugan, and Thomas Erickson. 2019. How Data Science Workers Work with Data: Discovery, Capture, Curation, Design, Creation. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. ACM, 126.
[61] Michael J Muller and Allison Druin. 2010. Participatory design: The third space in HCI. In J. Jacko and A. Sears (Eds.), Handbook of HCI (2010).
[62] Fatemeh Nargesian, Horst Samulowitz, Udayan Khurana, Elias B Khalil, and Deepak S Turaga. 2017. Learning Feature Engineering for Classification. In IJCAI. 2529–2535.
[63] Ramon Oldenburg and Dennis Brissett. 1982. The third place. Qualitative Sociology 5, 4 (1982), 265–284.
[64] Gary M Olson and J Olson. 2016. Converging on theory from four sides. Theory Development in the Information Sciences. Ed. D. Sonnenwald. Univ. of Texas, Austin (2016), 87–100.
[65] Gary M Olson and Judith S Olson. 2000. Distance matters. Human–Computer Interaction 15, 2-3 (2000), 139–178.
[66] Gary M Olson, Stephanie Teasley, Matthew J Bietz, and Derrick L Cogburn. 2002. Collaboratories to support distributed science: the example of international HIV/AIDS research. In Proceedings of the 2002 Annual Research Conference of the South African Institute of Computer Scientists and Information Technologists on Enablement through Technology. South African Institute for Computer Scientists and Information Technologists, 44–51.
[67] Gary M Olson, Ann Zimmerman, and Nathan Bos. 2008. Scientific collaboration on the Internet. The MIT Press.
[68] Judith S Olson and Gary M Olson. 2013. Working together apart: Collaboration over the internet. Synthesis Lectures on Human-Centered Informatics 6, 5 (2013), 1–151.
[69] Judith S Olson, Dakuo Wang, Gary M Olson, and Jingwen Zhang. 2017. How people write together now: Beginning the investigation with advanced undergraduates in a project course. ACM Transactions on Computer-Human Interaction (TOCHI) 24, 1 (2017), 4.
[70] Andreas Paepcke. 1996. Information needs in technical work settings and their implications for the design of computer tools. Computer Supported Cooperative Work (CSCW) 5, 1 (1996), 63–92.
[71] Susannah BF Paletz and Christian D Schunn. 2010. A social-cognitive framework of multidisciplinary team innovation. Topics in Cognitive Science 2, 1 (2010), 73–95.
[72] Kayur Patel, James Fogarty, James A. Landay, and Beverly L. Harrison. 2008. Examining Difficulties Software Developers Encounter in the Adoption of Statistical Machine Learning. In AAAI. 1563–1566.
[73] Evan Patterson, Ioana Baldini, Aleksandra Mojsilovic, and Kush R Varshney. 2018. Semantic Representation of Data Science Programs. In IJCAI. 5847–5849.
[74] Evan Patterson, Robert McBurney, Holly Schmidt, Ioana Baldini, A Mojsilović, and Kush R Varshney. 2017. Dataflow representation of data analyses: Toward a platform for collaborative data science. IBM Journal of Research and Development 61, 6 (2017), 9–1.
[75] Suzanne D Pawlowski and Daniel Robey. 2004. Bridging user organizations: Knowledge brokering and the work of information technology professionals. MIS Quarterly (2004), 645–672.
[76] Roger Peng. 2015. The reproducibility crisis in science: A statistical counterattack. Significance 12, 3 (2015), 30–32.
[77] ProACT. 2015. The DREAM Phil Bowen ALS Prediction Prize4Life Challenge, The DREAM ALS Stratification Prize4Life Challenge. https://nctu.partners.org/ProACT/Document/DisplayLatest/3
[78] David Ribes and Charlotte P Lee. 2010. Sociotechnical studies of cyberinfrastructure and e-research: Current themes and future trajectories. Computer Supported Cooperative Work (CSCW) 19, 3-4 (2010), 231–244.
[79] Betsy Rolland and Charlotte P Lee. 2013. Beyond trust and reliability: reusing data in collaborative cancer epidemiology research. In Proceedings of the 2013 Conference on Computer Supported Cooperative Work. ACM, 435–444.
[80] Adam Rule, Aurélien Tabard, and James D Hollan. 2018. Exploration and explanation in computational notebooks. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. ACM, 32.
[81] Ralph Schroeder. 2007. e-Research Infrastructures and Open Science: Towards a New System of Knowledge Production? Prometheus 25, 1 (2007), 1–17.
[82] Ameneh Shamekhi, Q Vera Liao, Dakuo Wang, Rachel KE Bellamy, and Thomas Erickson. 2018. Face Value? Exploring the effects of embodiment for a group facilitation agent. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. ACM, 391.
[83] Klaas Sijtsma. 2016. Playing with data or how to discourage questionable research practices and stimulate researchers to do things right. Psychometrika 81, 1 (2016), 1–15.
[84] BF Spencer Jr, Randal Butler, Kathleen Ricker, Doru Marcusiu, Thomas A Finholt, Ian Foster, Carl Kesselman, and Jeremy P Birnholtz. 2008. 18 NEESgrid: Lessons Learned for Future Cyberinfrastructure Development. Scientific Collaboration on the Internet (2008), 331.
[85] John Staddon. 2017. Scientific Method: How Science Works, Fails to Work, and Pretends to Work. Routledge.
[86] Garold Stasser and William Titus. 1985. Pooling of unshared information in group decision making: Biased information sampling during discussion. Journal of Personality and Social Psychology 48, 6 (1985), 1467.
[87] Dennis D Stewart and Garold Stasser. 1998. The sampling of critical, unshared information in decision-making groups: the role of an informed minority. European Journal of Social Psychology 28, 1 (1998), 95–113.
[88] Haodan Tan, Dakuo Wang, and Selma Sabanovic. 2018. Projecting Life Onto Robots: The Effects of Cultural Factors and Design Type on Multi-Level Evaluations of Robot Anthropomorphism. In 2018 27th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN). IEEE, 129–136.
[89] John Thackara. 2000. Edge effects: the design challenge of the pervasive interface. In CHI'00 Extended Abstracts on Human Factors in Computing Systems. ACM, 199–200.
[90] Trifacta. 2019. Trifacta. https://www.trifacta.com/
[91] Amos Tversky and Daniel Kahneman. 1974. Judgment under uncertainty: Heuristics and biases. Science 185, 4157 (1974), 1124–1131.
[92] Daan Van Knippenberg and Michaela C Schippers. 2007. Work group diversity. Annual Review of Psychology 58 (2007).
[93] Joaquin Vanschoren, Jan N. Van Rijn, Bernd Bischl, and Luis Torgo. 2014. OpenML: Networked Science in Machine Learning. ACM SIGKDD Explorations Newsletter 15, 2 (2014), 49–60.
[94] Theresa Velden. 2013. Explaining Field Differences in Openness and Sharing in Scientific Communities. In Proceedings of the 2013 Conference on Computer Supported Cooperative Work. ACM, 445–458.
[95] Rubén Vicente-Sáez and Clara Martínez-Fuentes. 2018. Open Science now: A systematic literature review for an integrated definition. Journal of Business Research 88 (2018), 428–436.
[96] Dakuo Wang. 2016. How people write together now: Exploring and supporting today's computer-supported collaborative writing. In Proceedings of the 19th ACM Conference on Computer Supported Cooperative Work and Social Computing Companion. ACM, 175–179.
[97] Dakuo Wang, Judith S Olson, Jingwen Zhang, Trung Nguyen, and Gary M Olson. 2015. DocuViz: visualizing collaborative writing. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems. ACM, 1865–1874.
Why users do not want to write together when they are writing together: Users’ rationales for today’s collaborative writing practices. Proceedings of the ACM on Human-Computer Interaction 1, CSCW (2017), 107. [99] Dakuo Wang, Haoyu Wang, Mo Yu, Zahra Ashktorab, and Ming Tan. 2019. Slack Channels Ecology in Enterprises: How Employees Collaborate Through Group Chat. arXiv preprint arXiv:1906.01756 (2019). [100] Andrew Warr. 2006. Situated and distributed design. In NordiCHI Workshop on Distributed Participatory Design, Oslo, Norway. Citeseer. [101] Etienne Wenger. 2010. Communities of practice and social learning systems: the career of a concept. In Social learning systems and communities of practice. Springer, 179–198. [102] Michael Woelfle, Piero Olliaro, and Matthew H Todd. 2011. Open science is a research accelerator. Nature Chemistry 3, 10 (2011), 745. [103] William A Wulf. 1989. The national collaboratory-A white paper. (1989). [104] Ji Zhang. 2018. JupyterLab_Voyager: a Data Visualization Enhancement in JupyterLab. (2018). Proc. ACM Hum.-Comput. Interact., Vol. 3, No. GROUP, Article 237. Publication date: December 2019.
ai_researcher
8
AUTOGEN_A_Personalized_Large_Language_Model_for_Academic_Enhancement—Ethics_and_Proof_of_Principle.pdf
AUTOGEN STUDIO: A No-Code Developer Tool for Building and Debugging Multi-Agent Systems

Victor Dibia, Jingya Chen, Gagan Bansal, Suff Syed, Adam Fourney, Erkang Zhu, Chi Wang, Saleema Amershi
Microsoft Research, Redmond, United States
{victordibia, jingyachen, gaganbansal, suffsyed, adam.fourney, erkang.zhu, chiw, samershi}@microsoft.com

Abstract
Multi-agent systems, where multiple agents (generative AI models + tools) collaborate, are emerging as an effective pattern for solving long-running, complex tasks in numerous domains. However, specifying their parameters (such as models, tools, and orchestration mechanisms, etc.) and debugging them remains challenging for most developers. To address this challenge, we present AUTOGEN STUDIO, a no-code developer tool for rapidly prototyping, debugging, and evaluating multi-agent workflows built upon the AUTOGEN framework. AUTOGEN STUDIO offers a web interface and a Python API for representing LLM-enabled agents using a declarative (JSON-based) specification. It provides an intuitive drag-and-drop UI for agent workflow specification, interactive evaluation and debugging of workflows, and a gallery of reusable agent components. We highlight four design principles for no-code multi-agent developer tools and contribute an open-source implementation.¹

¹https://github.com/microsoft/autogen/tree/autogenstudio/samples/apps/autogen-studio

1 Introduction
When combined with the ability to act (e.g., using tools), generative AI models function as agents, enabling complex problem-solving capabilities. Importantly, recent research has shown that transitioning from prescribed (fixed) agent pipelines to a multi-agent setup with autonomous capabilities can result in desirable behaviors such as improved factuality and reasoning (Du et al., 2023), as well as divergent thinking (Liang et al., 2023). These observations have driven the development of application frameworks such as AutoGen (Wu et al., 2023), CAMEL (Li et al., 2024), and TaskWeaver (Qiao et al., 2023), which simplify the process of crafting multi-agent applications expressed as Python code.

Figure 1: AUTOGEN STUDIO provides a drag-and-drop UI where models, skills/tools, and memory components can be defined, attached to agents, and agents attached to workflows.

However, while multi-agent applications advance our capacity to solve complex problems, they also introduce new challenges. For example, developers must now configure a large number of parameters for these systems, including defining agents (e.g., the model to use, prompts, tools or skills available to the agent, the number of action steps an agent can take, task termination conditions, etc.) and communication and orchestration mechanisms, i.e., the order or sequence in which agents act as they collaborate on a task. Additionally, developers need to debug and make sense of complex agent interactions to extract signals for system improvement. All of these factors can create significant barriers to entry and make the multi-agent design process tedious and error-prone. To address these challenges, we have developed AUTOGEN STUDIO, a tool for rapidly prototyping, debugging, and evaluating MULTI-AGENT workflows. Our contributions are highlighted as follows:

• AUTOGEN STUDIO - a developer-focused tool (UI and backend Web and Python API) for declaratively specifying and debugging (human-in-the-loop and non-interactive) MULTI-AGENT workflows.
AUTOGEN STUDIO provides a novel drag-and-drop experience (Figure 1) for rapidly authoring complex MULTI-AGENT workflows, tools for profiling/debugging agent sessions, and a gallery of reusable/shareable MULTI-AGENT components.

• We introduce profiling capabilities with visualizations of messages/actions by agents and metrics (costs, tool invocations, and tool output status) for debugging MULTI-AGENT workflows.

• Based on our experience building and supporting AUTOGEN STUDIO as an open-source tool with a significant user base (over 200K downloads within a 5-month period), we outline emerging design patterns for MULTI-AGENT developer tooling and future research directions.

To the best of our knowledge, AUTOGEN STUDIO is the first open-source project to explore a no-code interface for autonomous MULTI-AGENT application development, providing a suitable platform for research and practice in MULTI-AGENT developer tooling.

2 Related Work

2.1 Agents (LLMs + Tools)
Generative AI models face limitations, including hallucination (generating content not grounded in fact) and limited performance on reasoning tasks or novel out-of-distribution problems. To address these issues, practice has shifted towards agentic implementations where models are given access to tools to act and augment their performance (Mialon et al., 2023). Agentic implementations, such as ReAct (Yao et al., 2022), explore a Reason-and-Act paradigm that uses LLMs to generate both reasoning traces and task-specific actions in an interleaved manner. As part of this process, developers have explored frameworks that build prescriptive pipelines interleaving models and tools (e.g., LIDA (Dibia, 2023), LangChain (Chase, 2022)). However, as tasks become more complex, requiring lengthy context and the ability to independently adapt to dynamic problem spaces, predefined pipelines demonstrate limited performance (Liu et al., 2024). This limitation has led to the exploration of more flexible and adaptive agent architectures.

2.2 MULTI-AGENT Frameworks
Several frameworks have been proposed to provide abstractions for creating such applications. AutoGen (Wu et al., 2023) is an open-source, extensible framework that allows developers to build large MULTI-AGENT applications. CAMEL (Li et al., 2024) is designed to facilitate autonomous cooperation among communicative agents through role-playing, using inception prompting to guide chat agents toward task completion while aligning with human intentions. OS-Copilot (Wu et al., 2024) introduces a framework for building generalist agents capable of interfacing with comprehensive elements in an operating system, including the web, code terminals, files, multimedia, and various third-party applications. It explores the use of a dedicated planner module, a configurator, and an executor, as well as the concept of tools (Python functions or calls to API endpoints) and skills (tools that can be learned and reused on the fly).
Multi-Agent Core Concepts
1. Model: Generative AI model used to drive core agent behaviors.
2. Skills/Tools: Code or APIs used to address specific tasks.
3. Memory: Short-term (e.g., lists) or long-term (vector databases) storage used to save and recall information.
4. Agent: A configuration that ties together the model, skills, memory components, and behaviors.
5. Workflow: A configuration of a set of agents and how they interact to address tasks (e.g., the order or sequence in which agents act, task planning, termination conditions, etc.).

Collectively, these tools support a set of core capabilities: the definition of agent parameters, such as generative AI models, skills/tools, or memory, and agent workflows, i.e., specifications of how these agents can collaborate. However, most of these frameworks primarily support a code-first representation of agent workflows, which presents a high barrier to entry and slows rapid prototyping. They also do not provide tools or metrics for agent debugging and evaluation. Additionally, they lack structured, reusable templates to bootstrap or accelerate the agent workflow creation process. AUTOGEN STUDIO addresses these limitations by providing a visual interface to declaratively define and visualize agent workflows, test and evaluate these workflows, and offer templates for common MULTI-AGENT tasks to streamline development. While this work is built on the AUTOGEN open-source library (Wu et al., 2023) and inherits its core abstractions for representing agents, the proposed design patterns for no-code developer tools are intended to apply to all MULTI-AGENT frameworks.

3 Design Goals
AUTOGEN STUDIO is designed to enhance the MULTI-AGENT developer experience by focusing on three core objectives:

Rapid Prototyping: Provide a playground where developers can quickly specify agent configurations and compose these agents into effective multi-agent workflows.

Developer Tooling: Offer tools designed to help developers understand and debug agent behaviors, facilitating the improvement of multi-agent systems.

Reusable Templates: Present a gallery of reusable, shareable templates to bootstrap agent workflow creation. This approach aims to establish shared standards and best practices for MULTI-AGENT system development, promoting wider adoption and implementation of MULTI-AGENT solutions.

4 System Design
AUTOGEN STUDIO is implemented across two high-level components: a frontend user interface (UI) and a backend API (web, Python, and command line). It can be installed via the PyPI package manager (listing 1).

pip install autogenstudio
autogenstudio ui --port 8081

listing 1: AUTOGEN STUDIO can be installed from PyPI (pip) and the UI launched from the command line.

4.1 User Interface
The frontend web interface in AUTOGEN STUDIO is built using React and implements three main views that support several key functionalities. The build view enables users to author (define-and-compose) multi-agent workflows. The playground view allows for interactive task execution and workflow debugging, with options to export and deploy. The gallery view facilitates the reuse and sharing of agent artifact templates.

4.1.1 Building Workflows
The build view in the UI (see Figure 1) offers a define-and-compose experience, allowing developers to declaratively define low-level components and iteratively compose them into a workflow. For instance, users can define configurations for models, skills/tools (represented as Python functions addressing specific tasks), or memory stores (e.g., documents organized in a vector database). Each entity is saved in a database for use across interface interactions.
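To give a feel for what such a declarative specification might look like, the sketch below models a minimal workflow as a Python dict. The field names here are illustrative assumptions on our part and do not reproduce AUTOGEN STUDIO's exact JSON schema; the skill names echo the ones used in Appendix A.

# Illustrative sketch only: field names are assumptions, not AUTOGEN STUDIO's schema.
workflow_spec = {
    "name": "book_generation",
    "type": "autonomous_chat",  # alternative: "sequential_chat"
    "initiator": {"type": "userproxy", "name": "user_proxy"},
    "receiver": {
        "type": "assistant",
        "name": "content_agent",
        "model": {"model": "gpt-4", "api_key": "<API_KEY>"},
        "skills": ["generate_images", "generate_pdfs"],
        "system_message": "Generate content for each page of the book.",
    },
    "termination": {"max_turns": 10},
}

Because such a specification is plain data, it can be saved, versioned, and dragged between UI components, which is precisely what the define-and-compose experience described above builds on.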
Subsequently, users can define an agent, attaching models, skills, and memory to it. Several default agent templates are provided following AUTOGEN abstractions: a UserProxy agent (has a code execution tool by default), an AssistantAgent (has a generative AI model by default), and a GroupChat agent (an abstraction container for defining a list of agents and how they interact). Finally, workflows can be defined, with existing agents attached to these workflows. The default workflow patterns supported are autonomous chat (agents exchange messages and actions across conversation turns until a termination condition is met) and sequential chat (a sequence of agents is defined; each agent processes its input in order and passes on a summary of its output to the next agent). The workflow composition process is further enhanced by supporting drag-and-drop interaction, e.g., skills/models can be dragged onto agents and agents into workflows.

4.1.2 Testing and Debugging Workflows
Workflows can be tested in situ in the build view, or explored more systematically within the playground view. The playground view allows users to create sessions, attach workflows to a session, and run tasks (single-shot or multi-turn). Sessions can be shared (to illustrate workflow performance), and multiple sessions can be compared. AUTOGEN STUDIO provides two features to support debugging. First, it provides an observe view where, as tasks progress, messages and actions performed by agents are streamed to the interface, and all generated artifacts are displayed (e.g., files such as images, code, and documents). Second, a post-hoc profiler view is provided where a set of metrics is visualized for each task addressed by a workflow: the total number of messages exchanged, costs (generative AI model tokens consumed and dollar costs), how often agents use tools, and the status of tool use (success or failure), for each agent.

Figure 2: AUTOGEN STUDIO provides a backend API (web, Python, CLI) and a UI which implements a playground (shown), build, and gallery view. In the playground view, users can run tasks in a session based on a workflow. Users can also observe actions taken by agents, reviewing agent messages and metrics based on a profiler module.

4.1.3 Deploying Workflows
AUTOGEN STUDIO enables users to export workflows as a JSON configuration file. An exported workflow can be seamlessly integrated into any Python application (listing 2), executed as an API endpoint using the AUTOGEN STUDIO command line interface (Figure 2a), or wrapped in a Docker container for large-scale deployment on various platforms (Azure, GCP, Amazon, etc.).

from autogenstudio import WorkflowManager
wm = WorkflowManager("workflow.json")
wm.run(message="What is the height of the Eiffel Tower")

listing 2: Workflows can be imported in Python apps.
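For illustration, a minimal hand-rolled equivalent of serving an exported workflow over HTTP, using FastAPI (the same library the backend uses), might look as follows. The /run route and request payload shape are our assumptions, not AUTOGEN STUDIO's actual endpoint schema; only WorkflowManager and its run(message=...) call are taken from listing 2.

# Minimal sketch of serving an exported workflow over HTTP.
# The route and payload are illustrative assumptions.
from fastapi import FastAPI
from pydantic import BaseModel
from autogenstudio import WorkflowManager

app = FastAPI()
wm = WorkflowManager("workflow.json")  # workflow exported from the UI

class TaskRequest(BaseModel):
    message: str

@app.post("/run")
def run_task(req: TaskRequest):
    result = wm.run(message=req.message)  # execute the multi-agent workflow
    return {"result": str(result)}

# Run with, e.g.: uvicorn app:app --port 8000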
4.1.4 Template Gallery
The UI also features a gallery view: a repository of components (skills, models, agents, workflows) that users can import, extend, and reuse in their own workflows. Since each component specification is declarative (JSON), users can also easily export, version, and reshare them.

4.2 Backend API - Web, Python, and Command Line
The backend API comprises three main components: a web API, a Python API, and a command-line interface. The web API consists of REST endpoints built using the FastAPI library (https://fastapi.tiangolo.com/), supporting HTTP GET, POST, and DELETE methods. These endpoints interact with several key classes: a DBManager performs CRUD (Create, Read, Update, Delete) operations on various entities such as skills, models, agents, memory, workflows, and sessions. The WorkflowManager class handles the ingestion of declarative agent workflows, converts them into AUTOGEN agent objects, and executes tasks (see listing 2). A Profiler class parses agent messages to compute metrics. When a user initiates a task within a session, the system retrieves the session history, instantiates agents based on their serialized representations from the database, executes the task, streams intermediate messages to the UI via websocket, and returns the final results. AUTOGEN STUDIO also provides a command-line interface with utilities for launching the bundled UI and running exported workflows as API endpoints.
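To illustrate the kind of computation the Profiler performs, the sketch below aggregates per-agent metrics of the sort shown in the profiler view (message counts, token and dollar costs, tool-call outcomes). The message record fields ("sender", "tokens", "cost", "tool_calls") are assumptions for illustration and do not mirror AUTOGEN's actual message format.

from collections import defaultdict

def profile_messages(messages):
    # Aggregate per-agent counts, token/dollar costs, and tool-call outcomes.
    # Field names are illustrative, not AUTOGEN's real message schema.
    stats = defaultdict(lambda: {"messages": 0, "tokens": 0, "cost": 0.0,
                                 "tool_calls": 0, "tool_failures": 0})
    for msg in messages:
        agent = stats[msg["sender"]]
        agent["messages"] += 1
        agent["tokens"] += msg.get("tokens", 0)
        agent["cost"] += msg.get("cost", 0.0)
        for call in msg.get("tool_calls", []):
            agent["tool_calls"] += 1
            agent["tool_failures"] += int(call.get("status") != "success")
    return dict(stats)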
 

5 Usage and Evaluation
In this project, we have adopted an in-situ, iterative evaluation approach. Since its release on GitHub (5 months), the AUTOGEN STUDIO package has been installed over 200K times and has been iteratively improved based on feedback from usage (> 135 GitHub issues). Issues highlighted several user pain points that were subsequently addressed, including: (a) challenges in defining, persisting, and reusing components, resolved by implementing a database layer; (b) difficulties in authoring components, resolved by supporting automated tool generation from descriptions and integrating an IDE for editing tools; (c) frustrations caused by components failing during end-to-end tests, addressed by incorporating a test button for components (e.g., models) and workflows in the build view. Figure 3 displays a plot of all AUTOGEN STUDIO issues. Each point represents an issue, based on an embedding of its text (title + body) using OpenAI's text-embedding-3-large model. The embeddings were reduced to two dimensions using UMAP, clustered with K-Means (k = 8), and cluster labels were generated using GPT-4 (grounded on 10 samples from its centroid). Finally, in Appendix A, we demonstrate how AUTOGEN STUDIO can effectively be used to support an engineer persona in rapidly prototyping, testing, and iteratively debugging a MULTI-AGENT workflow, and deploying it as an API endpoint to address a concrete task (generating books).

Figure 3: Plot of GitHub issues (n = 8 clusters) from the AUTOGEN STUDIO repo. User feedback ranged from support with workflow authoring tools (e.g., the ability to configure and test models) to general installation.
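The issue-clustering pipeline just described can be sketched in a few lines. The package choices (openai, umap-learn, scikit-learn) are our assumptions about a natural implementation of the described steps, and the GPT-4 labeling step is omitted.

import numpy as np
import umap  # from the umap-learn package
from openai import OpenAI
from sklearn.cluster import KMeans

def cluster_issues(issue_texts, k=8):
    # Embed issue text (title + body), project to 2-D, then cluster.
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    resp = client.embeddings.create(model="text-embedding-3-large",
                                    input=issue_texts)
    vectors = np.array([item.embedding for item in resp.data])
    coords = umap.UMAP(n_components=2, random_state=0).fit_transform(vectors)
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(coords)
    return coords, labels  # 2-D position and cluster id per issue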
6 Emerging Design Patterns and Research Directions
In the following section, we outline some of the high-level emerging patterns which we hope can help inform the design of no-code interfaces for building next-generation multi-agent applications.

6.1 Define-and-Compose Workflows
Allow users to author workflows by defining components and composing them (via drag-and-drop actions) into multi-agent workflows.

A multi-agent system can have a wide array of parameters to configure. We have found that selecting the right visual presentation of the workflow is key to helping users understand what parameters to configure (discovery) and how to configure them. Specifically, we have found that a define-and-compose workflow, where entities are first defined and persisted independently and then ultimately composed into multi-agent workflows, provides a good developer experience. This includes providing tools to support authoring entities, e.g., the ability to define and test models, an IDE for generating/editing tools (code), and a canvas-based visual layout of workflows with drag-and-drop interaction for associating entities in the workflow.

6.2 Debugging and Sensemaking Tools
Provide robust tools to help users debug, interpret, and rationalize the behavior and outputs of multi-agent systems.

Multi-agent workflows can be brittle and fail for multiple reasons, ranging from improperly configured models to poor instructions for agents, improper tool configuration for agents, or termination conditions. A critical request has been for tools to help users debug and make sense of agent responses.

6.3 Export and Deployment
Enable seamless export and deployment of multi-agent workflows to various platforms and environments.

While a no-code tool like AUTOGEN STUDIO enables rapid iteration and demonstration of workflows, the natural progression for most use cases is that developers want to replicate the same outcomes, but integrated as parts of their core applications. This stage requires seamless export and deployment of multi-agent workflows to various platforms and environments.

6.4 Collaboration and Sharing
Facilitate user collaboration on multi-agent workflow development and allow easy sharing of creations within the community.

Collaboration and sharing are key to accelerating innovation and improving multi-agent systems. By enabling users to collaborate on workflow development, share their creations, and build upon each other's work, a more dynamic and innovative development environment can be cultivated. Tools and features that support real-time collaboration, version control, and seamless sharing of workflows and components are essential to foster a community-driven approach. Additionally, offering a repository or gallery where users can publish and share their workflows, skills, and agents promotes communal learning and innovation.
7 Future Research Directions
While we have explored early implementations of the design requirements mentioned above, our efforts in building AUTOGEN STUDIO have also identified several important future research areas and associated research questions.

• Offline Evaluation Tools: This encompasses questions such as: how can we measure the performance, reliability, and reusability of agents across tasks? How can we better understand their strengths and limitations? How can we explore alternative scenarios and outcomes? And how can we compare different agent architectures and collaboration protocols?

• Understanding and quantifying the impact of multi-agent system design decisions: These questions include determining the optimal number and composition of agents for a given problem, the best way to distribute responsibilities and coordinate actions among agents, and the trade-offs between centralized and decentralized control or between homogeneous and heterogeneous agents.

• Optimizing multi-agent systems: Research directions here include the dynamic generation of agents based on task requirements and available resources, tuning workflow configurations to achieve the best performance, and adapting agent teams to changing environments and user preferences. Furthermore, how can we leverage human oversight and feedback to improve agent reliability, task performance, and safety?

8 Conclusion
This paper introduced AUTOGEN STUDIO, a no-code developer tool for rapidly prototyping, debugging, and evaluating multi-agent workflows. Key features include a drag-and-drop interface for agent workflow composition, interactive debugging capabilities, and a gallery of reusable agent components. Through widespread adoption, we identified emerging design patterns for multi-agent developer tooling: a define-and-compose approach to authoring workflows, debugging tools to make sense of agent behaviors, tools to enable deployment, and collaborative sharing features. AUTOGEN STUDIO lowers the barrier to entry for multi-agent application development, potentially accelerating innovation in the field. Finally, we outline future research directions, including developing offline evaluation tools, ablation studies to quantify the impact of MULTI-AGENT system design decisions, and methods for optimizing multi-agent systems.

9 Ethics Statement
AUTOGEN STUDIO is designed to provide a no-code environment for rapidly prototyping and testing multi-agent workflows. Our goal is to responsibly advance research and practice in solving problems with multiple agents and to develop tools that contribute to human well-being. Along with AUTOGEN, AUTOGEN STUDIO is committed to implementing features that promote safe and reliable outcomes. For example, AUTOGEN STUDIO offers profiling tools to make sense of agent actions and safeguards, such as support for Docker environments for code execution. This feature helps ensure that agents operate within controlled and secure environments, reducing the risk of unintended or harmful actions. For more information on our approach to responsible AI in AutoGen, please refer to the transparency FAQs here. Finally, AUTOGEN STUDIO is not production-ready, i.e., it does not focus on implementing authentication and other security measures that are required for production-ready deployments.

Acknowledgements
We would like to thank members of the open-source software (OSS) community and the AI Frontiers organization at Microsoft Research for discussions and feedback along the way. Specifically, we would like to thank Piali Choudhury, Ahmed Awadallah, Robin Moeur, Jack Gerrits, Robert Barber, Grace Proebsting, Michel Pahud, Qingyun Wu, Harsha Nori, and others for feedback and comments.

References
Harrison Chase. 2022. LangChain. GitHub.
Victor Dibia. 2023. LIDA: A tool for automatic generation of grammar-agnostic visualizations and infographics using large language models. arXiv preprint arXiv:2303.02927.
Yilun Du, Shuang Li, Antonio Torralba, Joshua B Tenenbaum, and Igor Mordatch. 2023. Improving factuality and reasoning in language models through multiagent debate. arXiv preprint arXiv:2305.14325.
Guohao Li, Hasan Hammoud, Hani Itani, Dmitrii Khizbullin, and Bernard Ghanem. 2024. CAMEL: Communicative agents for "mind" exploration of large language model society. Advances in Neural Information Processing Systems, 36.
Tian Liang, Zhiwei He, Wenxiang Jiao, Xing Wang, Yan Wang, Rui Wang, Yujiu Yang, Zhaopeng Tu, and Shuming Shi. 2023. Encouraging divergent thinking in large language models through multi-agent debate. arXiv preprint arXiv:2305.19118.
Nelson F Liu, Kevin Lin, John Hewitt, Ashwin Paranjape, Michele Bevilacqua, Fabio Petroni, and Percy Liang. 2024. Lost in the middle: How language models use long contexts. Transactions of the Association for Computational Linguistics, 12:157–173.
Grégoire Mialon, Roberto Dessì, Maria Lomeli, Christoforos Nalmpantis, Ram Pasunuru, Roberta Raileanu, Baptiste Rozière, Timo Schick, Jane Dwivedi-Yu, Asli Celikyilmaz, et al. 2023. Augmented language models: a survey. arXiv preprint arXiv:2302.07842.
Bo Qiao, Liqun Li, Xu Zhang, Shilin He, Yu Kang, Chaoyun Zhang, Fangkai Yang, Hang Dong, Jue Zhang, Lu Wang, et al. 2023. TaskWeaver: A code-first agent framework. arXiv preprint arXiv:2311.17541.
Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Beibin Li, Erkang Zhu, Li Jiang, Xiaoyun Zhang, Shaokun Zhang, Jiale Liu, Ahmed Hassan Awadallah, Ryen W White, Doug Burger, and Chi Wang. 2023. AutoGen: Enabling next-gen LLM applications via multi-agent conversation framework. arXiv.
Zhiyong Wu, Chengcheng Han, Zichen Ding, Zhenmin Weng, Zhoumianze Liu, Shunyu Yao, Tao Yu, and Lingpeng Kong. 2024. OS-Copilot: Towards generalist computer agents with self-improvement. arXiv preprint arXiv:2402.07456.
Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, and Yuan Cao. 2022. ReAct: Synergizing reasoning and acting in language models. arXiv preprint arXiv:2210.03629.

A Jack the Software Engineer Persona Use Case
Jack is a junior software engineer who has recently joined SoftwareCon. As part of his tasks, he is required to create an application that can generate a variety of short books. The initial version should focus on generating children's books (ages 5-8 years old) based on a given query (e.g., create a book for kids on how the sun works), with the expectation of being generalized to support other generic tasks. Jack has heard about a MULTI-AGENT approach to building systems that can address a variety of tasks through autonomous collaboration between agents. To explore this approach, he begins by perusing the AUTOGEN STUDIO documentation, installs it, launches the UI, and performs the following steps:

A.1 Step 1: Define and Compose a Workflow
Jack starts with the Build view, where he reviews the default skills that come with AUTOGEN STUDIO. He sees that there are two relevant skills: generate_pdfs and generate_images. He verifies that he has the appropriate API keys for the generate_images skill. Next, he creates a GPT-3.5 model and adds an API key. Following best practices, Jack knows that the basic agent team with AUTOGEN consists of a UserProxyAgent that can execute code and an AssistantAgent that can solve tasks as well as write code or call available tools/skills. He creates both of these agents; for his AssistantAgent, he ensures that he attaches the model he created previously and also attaches both skills. Jack moves on to the workflow tab and creates a new autonomous chat workflow, where he specifies the UserProxyAgent as the initiator and his AssistantAgent as the receiver.

A.2 Step 2: Test and Iterate
Within the workflow tab, Jack tests the workflow immediately and quickly observes a few issues. Using the profiler tool and the visualization of messages exchanged by the agents, he notices that there seem to be quality issues with the content of the book: namely, the AssistantAgent seems to generate very short messages, and hence the book pages contain only 2 sentences per page, whereas the requirements state that the kids are slightly older and can read much longer text. To remedy these issues, Jack takes two actions. First, he attempts to extend the base instructions of his AssistantAgent, but still doesn't get pages with more than 3 sentences across interactive tests. He recalls that using more agents can help separate focus and improve task performance. He then switches to creating 4 agents: a UserProxy, a ContentAssistant with detailed instructions on generating the content for each page, a QualityAssuranceAssistant to verify the pages meet parameters, and an ImageGeneratorAssistant focused on generating images for the book. He then creates a GroupChat agent and adds his list of agents to it. Next, he creates a new workflow where the receiver is the GroupChat agent and tests the application across a few tries. Jack is satisfied with the results, as full-page stories are now generated correctly. In addition, Jack is concerned about costs but can easily use the observe message button to explore duration, tokens used by agents, tool/skill use, and LLM dollar costs for each task run.

A.3 Step 3: Export and Share
At this point, Jack has two final tasks: he wants to share his work with colleagues for feedback and then provide an API they can prototype with. AUTOGEN STUDIO makes sharing easy. First, Jack can simply export and share a link to successful sessions. Second, he can also download his workflow and share it with colleagues, saving it in a version control system like Git. Third, he can spin up an API endpoint where the agents can respond to task requests using the CLI command 'autogenstudio serve --port 8000'. He can also spin up a Docker container using the AUTOGEN STUDIO serve command and scale it on any platform of his choice (Azure, AWS, GCP, Hugging Face).
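As a recap of this deployment step, the commands below combine the serve invocations that appear elsewhere in the paper (listing 1 and the architecture figure); pairing the --workflow and --port flags in a single call is our assumption based on those separate usages, not a documented example.

autogenstudio serve --workflow=workflow.json --port 8000

Colleagues can then send task requests to the resulting endpoint without installing or configuring the workflow themselves.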
ai_researcher
4
A_Mathematical_Model_to_Enhance_Creativity_in_Generative_AI_Systems.pdf
Human-AI Co-Creativity: Exploring Synergies Across Levels of Creative Collaboration

Jennifer Haase, Weizenbaum Institute and Humboldt University, Berlin, Germany, [email protected]
Sebastian Pokutta, TU Berlin and Zuse Institute Berlin, Berlin, Germany, [email protected]

November 2024

1 Introduction
Integrating generative AI into creative work signifies a profound shift in how humans engage with digital tools to create. We are entering an era where AI systems do more than support human creativity: they actively participate in co-creative processes, which we refer to as Human-AI Co-Creativity (Colton and Wiggins, 2012; Serbanescu and Nack, 2023). Some creative tasks can now be fully automated, which becomes evident, for example, with the generative fill function in Photoshop (see also Adobe Firefly), code generation in IT (Tian et al., 2023), or character design in video games (Janson et al., 2023). These examples demonstrate generative AI's potential to enhance human creativity, which some argue is the current limit of existing generative AI tools (e.g., Marrone et al. 2024). However, we argue that Human-AI Co-Creativity has the potential to enhance human creative capabilities through the integration of (generative) AI tools, systems, and agents far beyond what is currently common for (non-enhanced) human creativity. This paradigm shift demands a deeper understanding of these co-creative interactions, the associated challenges, and the requirements for (generative) AI augmentation (Melville et al., 2023).

Improving individual human creative skills and performance is one of the cornerstones of creativity research, with various techniques and manipulation methods being tested (Haase et al., 2023b; Sio and Lortie-Forgues, 2024). As human lives increasingly shift into the digital realm, these techniques are naturally becoming increasingly digital as well (Bereczki and Kárpáti, 2021; Rafner et al., 2023).
Coincidentally, these human players also significantly improved their own proficiency at Go while training the AlphaGo system; see Metz (2016) for a detailed account. We consider this a prime example for the human creative advancement achieved through training and working with AI engines, i.e., the interactions with AI system have a lasting impact on the user in terms of creative improvement, beyond the times of interactions. Integrating generative AI tools into creative processes presents an opportu- nity to advance human creative performances collaboratively. By focusing on augmenting rather than replacing human creativity, these tools can help over- come the limitations of traditional methods and push the boundaries of what is creatively possible. In this chapter, we will discuss the evolution of creativ- ity support through digital tools, moving from simple digital aids to partially automated tools, culminating in collaboration between humans and generative AI tools. First, we elaborate on the “inherent” creative potential of (generative) AI tools, which we posit to be a requirement for actual co-creativity. Then, we differentiate between different forms of digital tool support. By presenting concrete examples from mathematics for varying levels of human-AI co-creative interactions, we will illustrate how the co-creative process with generative AI can significantly advance the creative outcome, achieving new results often with creative twists beyond previously known approaches and, due to their high ir- regularity, unlikely to be found by human creativity alone. 2 2 Creativity of Generative AI tools For a system to be considered autonomously creative, it must possess the poten- tial for creative action, such as generating novel ideas or solutions independently without human intervention (Jennings, 2010). This then points to the question of inherent creativity of generative AI tools. Machine learning serves as the cor- nerstone for such a form of creativity, providing the capability for algorithms to learn, adapt, and respond in a manner that can be deemed “intelligent”—and thus, potentially, creative (Mateja and Heinzl, 2021). However, the debate surrounding the “true” creativity of technical systems transcends scientific inquiry and becomes a philosophical debate about appear- ing vs. being. This discourse revolves around the potential limitations of genera- tive AI, with some viewpoints suggesting that AI’s reliance on pre-existing data would confine it to only displaying “incremental creativity”, thus questioning the depth and authenticity of its creative output (Boden, 2009; Cropley and Crop- ley, 2023). Particularly in non-scientific literature, there is a prevalent notion that only humans with their unique capacity for emotions and empathy could exhibit true creativity (Joshi, 2022; White, 2023). This perspective is echoed by Runco (2023), who suggests that the process of creativity in AI, being funda- mentally different from the human approach, can only result in what could be termed “artificial creativity”. We do not share such notions of diminishing the creative output from artificial agents. As we move from the philosophical to the practical, we can see empirical evidence for significantly increased creativity in (generative) AI tools and agents output and human output in collaboration with generative AI tools. 
Large language models (LLMs), for example, are specifically designed to balance factual precision with creative expression, incorporating el- ements of flexibility and randomness that allow generating content perceived as original and inventive (Sinha et al., 2023). These models leverage vast datasets and complex algorithms to synthesize information in novel ways, resulting in outputs that emulate human-like creativity and demonstrate the potential for independent creative thought within specific domains (Rafner et al., 2023). Empirical studies further support the inherent creativity of AI systems. Stan- dardized creativity tests, traditionally used to measure human creativity, have been adapted to evaluate the outputs of generative AI. The results are striking, with AI-generated content sometimes matching or even exceeding human per- formance in tasks that measure everyday originality and elaboration (Gilhooly, 2023; Guzik et al., 2023; Haase and Hanel, 2023). Moreover, AI-generated out- puts have proven so convincing in practical scenarios to even fool experts in whether content was created by humans or AI (e.g., with scientific abstract, Else 2023; with artificially generated art, Haase et al. 2023a), one of the most substantial possible benchmarks. This evidence underscores the argument that generative AI tools possess inherent creativity, characterized by their ability to autonomously produce novel and valuable output and pass the test of being indistinguishable from human output. 3 3 From digital tools to AI Throughout history, tools have been essential to human creativity. Naturally, since the advent of computers, this creative work has increasingly moved into the digital domain. For example, every text editor enables and supports creative writing. While some tools transfer the creative task into the digital, others are designed to engage more actively in the creative process (cf. Table 1). We cate- gorize such digital tools into four distinct types. The first is a Digital Pen akin to creative support systems (CSS), which aid human creativity without directly contributing creative input, just like a painting program provides a digital brush to an artist (Shneiderman, 2007). The second type is AI Task Specialist, which is an independent AI system (often a generative one) that operates autonomously without human intervention (apart from the initial input). Examples include non-deterministic algorithms that generate art via generative adversarial neural networks (Hitsuwari et al., 2023) or algorithms that advance game development (Almeida et al., 2023). The third type is a Creative Assistant, a generative AI tool that supports and enhances various aspects of a human-driven creative process, often in an interactive way. Current generations of LLMs, such as, e.g., ChatGPT, Gemini, or Llama, are prime examples of that category. Users can flexibly use such tools to support their brainstorming tasks (e.g., Fui-Hoon Nah et al. 2023) or concrete problem-solving tasks such as coding (e.g., Dell’Aversana 2023). The fourth level, as most pertinent to this discussion, is co-creative sys- tems, which we dub AI Co-Creators. Here, humans and (generative) AI tools collaborate, each contributing to the creative process. Ideally, such a system adapts flexibly to the user’s needs, can solve complex, open-ended problems and contributes input in a measurable and meaningful way to the co-creative process with the human user. 
The four levels indicate the degree of interaction between the user and the tool, depending on how creatively competent and potentially autonomous the tool can act. To demonstrate the varying levels of AI-human interaction in cre- ative processes, we turn to examples from the field of mathematics. We chose mathematics because it allows for objective evaluation of creativity in terms of newness and usefulness, this is in contrast to “subjective disciplines” where a direct attribution of usefulness can sometimes be difficult. Although often perceived as rigid, mathematics is inherently creative, demanding innovative approaches to solve complex problems and develop elegant proofs. The study of creativity itself draws from mathematical insights, as evidenced by Wallas (1926), whose model of the creative process is rooted in earlier work by math- ematicians like Poincar´e and Newman (1908) and echoed in Hadamard’s later contributions (1954). In the following, we will present the four levels of human-tool interaction, with three examples for levels 2-4 of mathematics demonstrating Human-AI Co- Creativity on various complexity levels. For Level 1, the Digital Pen, basically every general-purpose collaboration tool, like email, Slack, Discord, or Github, would be an example of how researchers communicate and coordinate their creative work. We deem this rather known and familiar to the reader and, for the 4 Level of AI integration Level 1: Digital Pen Description Digital tool that facilitates the conversion traditional of pro- creative cesses into digital formats Level 2: AI Task Spe- cialist AI tool that augments cre- tasks, ative operating with structured guidance user input and Level 3: AI Assistant Generative AI tool enhances everyday creativity, working within of the scope its training data and user prompts Level 4: AI Creator Co- Generative AI that tool generates orig- ideas and inal in engages creative dia- logue, adapting within set ethi- cal and creative boundaries Example Classical CSS Generative Autofill Adobe Firefly by Current LLMs like GPT-4 or Midjourney domain- specific amples exist ex- Tool- contribution Digitalizing creative work, improving knowledge transfer and communication Automation of creativity based on strong guardrails and user prompting Creative on ev- eryday creativ- ity level, lim- ited to training data; based on user prompting Equal collabo- rator, original and useful con- tribution to a shared creative process; argues with a user; based on meta- calibration and intent within broader guardrails Breakdown of contribu- tion as- in Basic sistance digitalizing traditional cre- ative content Moderate aug- in mentation specific cre- ative tasks Significant enhancement in shaping the final creative product Synergistic partnership with equal in- put on creative outcomes Table 1: Four levels of human-tool interaction sake of brevity, do not provide further examples. For the other examples, we will briefly describe the underlying mathematical problem for the non-expert. We apologize to the expert readers for the simplification here, which is necessary to keep the exposition on point and not to deviate into technical details. Moreover, we focus on the three examples from the second author’s research. We stress that this might add a particular anecdotal component to the discussion. Indeed, there is a vast body of work in mathematics using AI systems on various levels to achieve new results. 
However, it also provides us with a higher degree of introspection into the creative process that is usually unavailable as the focus is on reporting results and not processes. 5 3.1 Level 1: Digital Pen The first level represents the traditional approach of how information systems have long supported humans in their creative processes, with CSS evolving from simple digital tools to complex systems that offer collaborative support and pro- cess guidance (M¨uller-Wienbergen et al., 2011; Voigt, 2014). These systems have transitioned from mimicking traditional tools to providing process support by in- tegrating advanced knowledge and communication management features (Frich et al., 2018; Voigt, 2014). Such tools digitalize and simplify individual or group processes, support the collection, editing, and visualization of human-generated ideas (Olszak and Kisielnicki, 2018; Voigt, 2014) but do not address the essence of the creative process itself. Although effective in facilitating creativity, these systems remain tools rather than active contributors to the creative process. Only with tools integrating some form of (generative) AI can some degree of inherent creativity be assumed to emerge; otherwise, no such entity can con- tribute to the creative process. AI has the potential to process information, aggregate knowledge, and generalize beyond its training data with the possi- bility of exceeding human competencies and capacities. The idea of CSS, being support systems for the idea generation process, has so far only been realized in a relatively weak form. However, with the advent of artificial intelligence, a paradigm shift, similar to what has been observed in other disciplines, is emerg- ing: Machine-learning algorithms in AI systems can create content and, with that, potentially creative output (Seidel et al., 2020). These content-creation functions can either be used to substitute parts of the originally human-only creative process (Level 2) or support and augment various aspects of the cre- ative process (Level 3). 3.2 Level 2: AI Task Specialist In Level 2 interactions, the human defines the creative problem by specifying pa- rameters and constraints, while the AI performs complex computations at a scale and speed unattainable by the human alone. The AI serves as a highly efficient tool, extending the human’s creative capacity by executing tasks that would otherwise limit exploration due to their complexity or resource constraints. The human remains the primary source of creative insight, with the AI operating within clearly defined boundaries. This interaction is characterized by a high degree of human control over the creative outcome, with AI functioning as an enhancer of human capabilities. Advancements in rapid and efficient data processing, as seen in tools like Adobe Firefly, exemplify the capabilities of Level 2 systems. These systems enable quick information generation, such as visual auto-fill functions, where AI can extend or substitute parts of a picture with generated content, allowing the user to iterate faster and explore a broader range of ideas. While such tools demonstrate an inherent, albeit rudimentary, form of creativity by generating new and potentially useful content, their creativity is largely incremental, as described by Cropley and Cropley (2023). The user’s interaction remains limited 6 to a specific creative task, and the AI operates under restricted parameters, offering only partial creative autonomy. 
Math example: New Bell inequalities A central question in quantum physics, particularly quantum mechanics, is to decide whether a given state exhibits quantum behavior or is just a classical state in disguise. Strongly related to this question are, for example, the central questions for several of today’s quantum computer designs: Are they actually quantum computers or just classical ones in complicated designs? To prove that a state is genuinely non-classical, typically, physicists devise a series of clever measurements that exhibit behavior that cannot be explained with classical physics; there are also ways of proving that a state is classical via so-called lo- cal models. This approach and the associated concept of non-locality has been central to establishing the existence of quantum effects dating back to the fa- mous work of Bell (1964) that resolved the Einstein-Podolsky-Rosen paradox by providing measurements (so-called Bell inequalities) that proved that the experiment of Einstein et al. (1935) exhibits true quantum entanglement and associated quantum effects. However, once the states that need to be analyzed become more complex and might even be in a very low dimension, the required insight into the underlying structure of physics and the necessary creative design of such measurements is tough to achieve. In Designolle et al. (2023), an AI sys- tem was devised, predominantly relying on the so-called Frank-Wolfe methods (Braun et al., 2023), to support the user in his effort to devise new measure- ment strategies for complex states. Here, to compute new Bell inequalities for previously unstudied states, the human user specifies the state and all other system parameters, and the AI system then performs a large and complex se- ries of computations (typically weeks on high-performance compute clusters) to compute a series of measurements and the associated (new) Bell inequality. The user then verifies this inequality via straightforward calculations. All creative input in this example comes from the researcher, with the AI sys- tem providing highly specialized computations at extreme speed and scale. The AI augments the user’s creative capabilities by enabling large-scale exploration but does not generate creative output beyond the predefined task specification. Designolle et al. (2023) were able to derive a wide range of new Bell inequalities for many important scenarios. 3.3 Level 3: AI Assistant Level 3 systems, the development of generative AI tools such as ChatGPT and, more broadly, General Pretrained Transformers (GPTs), stable diffusion mod- els, and others, their general applicability allows users to receive broader and more personalized support for their own creative challenges: GPTs are GPTs (General Purpose Technologies). The current generation of LLMs like GPT-4o, Gemini, Claude, and others are perceived as competent enough to support hu- mans in a wide range of creative tasks (e.g., for coding, Liu et al. 2023; story 7 writing, Doshi and Hauser 2024; problem-solving Heyman et al. 2024). Here, the level of creativity that can be achieved is human-limited, as the challenge lies in understanding and leveraging the potential of the underlying competencies of the tool (e.g., for ChatGPT, Cromwell et al. 2023). This stresses a significant point: the capabilities of generative AI must be made usable for humans, i.e., it is about interfacing. 
For example, the breakthrough of GPT version 3.5 from OpenAI, along with its wider acceptance, occurred when an intuitive chat-based conversational front-end was introduced; a form of unhobbling (essentially re- moving the handbrakes of highly potent models, Aschenbrenner 2024). However, current LLMs are designed with specific data sources and generalization capabil- ities, which, while robust, are guided by carefully implemented restrictions and guardrails. These measures, though occasionally limiting, are essential to ensur- ing the responsible and ethical use of AI, ultimately enhancing the safety and reliability of the creative process. In addition, hallucinations of factual wrong content are common for LLMs (Jesson et al., 2024), which, however, might not be as relevant for the generation of new creative output compared to the more mundane generation of factually correct essays or reports. It might help you be- come a great artist, but not necessarily in your homework assignment. In fact, hallucinations might even improve their creative potential to some extent. Math example: New Ramsey Multiplicity Bounds A central challenge in graph theory (a graph consisting of nodes and edges) is to understand how often specific subgraphs, like cliques (“everyone knows everyone”) or independent sets (“no one knows anyone”), can appear within larger graphs. This problem is closely tied to classical questions posed by Erd˝os, which have driven much of the research in this area. For instance, determining the frequency of cliques of four or five nodes in larger structures is crucial for understanding the broader behavior of graphs. Researchers often rely on sophis- ticated mathematical tools and intricate constructions to tackle these questions. In Parczyk et al. (2024), an AI system was designed to resolve a longstanding problem about the minimum number of independent sets of size four in graphs where the largest complete subgraph has at most four nodes. The obtained con- structions with sizes of around 800 nodes and more are usually beyond what can be achieved with ad-hoc methods. The AI system designed for this task in Parczyk et al. (2024) employs ad- vanced search heuristics to discover new constructions. Here, the creative po- tential is already shared between the human and the AI system. While the user specifies the requirements for the type of construction needed, the AI sys- tem delivers the actual construction. The correctness of the construction can then be verified by the human. However, the power of the interaction between humans and AI systems goes beyond mere constructions. It also reveals that op- timal constructions are stable and repeatable, giving insight into the underlying structure. 8 3.4 Level 4: The AI Co-Creator At Level 4, Human-AI Co-Creativity represents a fusion of human creativity with advanced AI capabilities, where both entities contribute significantly to a shared creative product (Davis, 2013). In such systems, the inputs and outputs of humans and AI blend seamlessly, resulting in a synergistic creative process that transcends traditional boundaries of human or machine creativity. This co-creative dynamic fundamentally alters the nature of the creative process by positioning the AI not merely as a tool but as an active participant—an ”equal”—in the creative process. 
3.4 Level 4: The AI Co-Creator

At Level 4, Human-AI Co-Creativity represents a fusion of human creativity with advanced AI capabilities, where both entities contribute significantly to a shared creative product (Davis, 2013). In such systems, the inputs and outputs of humans and AI blend seamlessly, resulting in a synergistic creative process that transcends traditional boundaries of human or machine creativity. This co-creative dynamic fundamentally alters the nature of the creative process by positioning the AI not merely as a tool but as an active participant, an "equal", in the creative process. Like traditional co-creativity among humans, effective Human-AI collaboration relies on shared goals, diverse perspectives, and extensive communication, ensuring that the strengths of both human creativity and AI are fully leveraged (Paulus et al., 2012).

At this level, AI and humans operate in true co-creative synergy. The AI is capable of independently generating creative outputs, such as new, highly non-intuitive solutions, that go beyond the scope of human preconceptions. The human and AI continuously interact, with the AI generating novel solutions based on minimal input and the human refining and integrating these into the broader creative context. In this form of interaction, AI becomes an equal creative partner, contributing original and meaningful input that the human alone may not achieve. This level represents the full realization of Human-AI Co-Creativity, where both entities' contributions are equally essential for creative breakthroughs.

In this co-creative process, the role of human creators is elevated, requiring them to possess not only creative skills but also a deep understanding of how to effectively interact with AI co-creators. Human creators must be adept at framing creative problems in ways that are compatible with AI's strengths, ensuring that the AI's contributions align with the creative goals. Additionally, human creators need to evaluate and refine the partial results generated by the AI, applying principles such as the MAYa principle (Most Advanced Yet accessible), which, in turn, is based on the well-known MAYA principle (Most Advanced Yet Acceptable; see, e.g., Hekkert et al. 2003), to ensure that the AI's outputs are novel yet accessible to the human user.

The principles of interaction in Human-AI Co-Creativity are critical to the success of the collaboration. Shneiderman (2020) argues that human-centered AI should be designed to support and enhance human activities, including creativity. He proposes several key concepts to guide the development of these systems: First, maintaining a balance between human oversight and automated operations is essential. This ensures that, while AI provides substantial creative contributions, humans retain control over the final output, preserving the integrity of the creative process. Second, AI co-creators should be designed to augment human capabilities, acting as powerful agents that enhance creativity rather than merely mimicking human skills. Thus, at this advanced level of co-creativity, AI becomes a fully integrated creative partner, contributing ideas that would not emerge through human effort alone.

Figure 1: Known colorings of the plane. (a) 9-coloring of the plane; (b) 8-coloring of the plane; (c) 7-coloring of the plane; (d) 7-coloring of the plane (alternative).

Math example: New Colorings of the Plane

A central question in combinatorial geometry is the Hadwiger-Nelson problem, which asks for the minimum number of colors required to color the points of the plane so that no two points at unit distance share the same color. This number, known as the chromatic number of the plane, has intrigued mathematicians for decades; see Soifer (2024) for an overview. Recent advancements in this area focus on extending the continuum of valid distances for six-colorings of the plane. For this purpose, researchers have to construct colorings of the plane with the required properties; see, e.g., Figure 1 for a few examples of colorings of the plane.
New colorings that go beyond those presented in Figure 1 are very hard to find and require a high degree of ingenuity and creativity; there had been no significant progress for the last 30 years. Then, in recent work, Mundinger et al. (2024) presented two new six-colorings that avoid monochromatic pairs of points at unit distance for the first five colors, and at another specified distance d for the sixth color; these were obtained through a customized AI approach. While not entirely a Level 4 system yet, due to its particular purpose, in contrast to the previously mentioned examples the generative AI system only receives as input the requirements that a correct coloring needs to satisfy. The system is then trained to explore and identify new colorings and to construct and evaluate new colorings efficiently. This led to the discovery of the two aforementioned new six-colorings satisfying the modified requirement regarding the sixth color, significantly expanding the known range for these colorings. Moreover, the obtained colorings (see Figure 2) are highly non-intuitive and creative, breaking the highly symmetric patterns of previous colorings found by humans via trial-and-error, intelligent guessing, and ad-hoc approaches (cf. Figure 1). As before, and as is customary in mathematics, the obtained colorings were then verified and post-processed by a human.

Figure 2: Two new 6-colorings obtained via Human-AI Co-Creativity. (a) 0.354 ≤ d ≤ 0.553; (b) 0.418 ≤ d ≤ 0.657.
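Verifying a candidate coloring against its defining property is conceptually simple, and a randomized spot-check fits in a few lines of Python. The sketch below assumes a hypothetical interface color_of(x, y) returning a color index in {0, ..., 5}; it samples random pairs of points at the critical distances and reports any monochromatic violation. This is a sanity check, not a proof; the actual verification in such work is a rigorous geometric argument.

    import math, random

    def spot_check_coloring(color_of, d, samples=100000, seed=0):
        # Colors 0-4 must avoid monochromatic pairs at distance 1;
        # color 5 must avoid monochromatic pairs at distance d.
        rng = random.Random(seed)
        for _ in range(samples):
            x, y = rng.uniform(-10, 10), rng.uniform(-10, 10)
            c = color_of(x, y)
            dist = d if c == 5 else 1.0
            theta = rng.uniform(0.0, 2.0 * math.pi)
            if color_of(x + dist * math.cos(theta), y + dist * math.sin(theta)) == c:
                return False        # found a forbidden monochromatic pair
        return True                 # no violation found among the sampled pairs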
4 Discussion

The implications of AI in creative work are multifaceted and far-reaching. As Cremer et al. (2023) outline, AI might take several plausible paths to disrupt creative work. Firstly, AI could lead to an explosion of AI-assisted innovation, enhancing human creativity without necessarily replacing it. This democratization of innovation is exemplified by tools like GitHub's Copilot, which aids in coding by providing real-time suggestions that augment human efforts (Cambon et al., 2023; Eapen et al., 2023). Secondly, there is the potential for AI to monopolize creativity in specific fields, such as game design, where AI-generated art increasingly replaces human designers (Christofferson et al., 2023). Lastly, a scenario may emerge where "human-made" creativity commands a premium, preserving a competitive edge over AI-generated content. This preference for human involvement has been noted in experiments where human-generated works were received more positively when a human label was added than when they were tagged with an AI label (Bellaiche et al., 2023; Ragot et al., 2020); however, an AI-generated portrait of Alan Turing recently sold for $1.08 million (Cain, 2024), suggesting the opposite. On top of that, we propose another kind: the fusion of human and generative AI competencies to reach new levels of achievement. As AI's capabilities continue to grow, its involvement in creative endeavors is set for further expansion and diversification. The examples from mathematics demonstrate that AI is no longer merely a tool but a collaborator in generating novel solutions. Moving forward, the challenge will be to strike the right balance: leveraging AI's immense potential without undermining the unique contributions of human creativity, ensuring that the synergy between human intuition and AI's capabilities leads to unprecedented creative achievements. Realizing this equilibrium is essential to ensure that AI is a complement to and enhancer of human creativity rather than a substitute.

Unlike traditional CSS, which facilitate the creative process primarily through knowledge processing and communication, generative AI systems possess the unique capacity to generate creative output independently. This marks a proactive step in the co-creative process, suggesting that AI can contribute in previously unimaginable ways. However, this potential comes with challenges. A central question that mirrors debates about intelligence concerns the system boundaries we draw around creativity. Just as we ask "What is intelligent?", we must also ask "What is creative?". Is it the human using the tools, the tools themselves, or the synergetic combination of both? This question is critical because it determines how we assess the creativity of outputs in human-AI collaboration. If creativity is seen as emerging solely from the human, then AI's role is merely supportive. If, however, creativity is understood as a product of the combined efforts of humans and AI, then the co-creative process must be evaluated on its own terms, acknowledging the unique contributions of each entity. As humans use co-creative agents more intensely for their creative work, the risk of over-reliance on AI should not be overlooked. While AI can generate novel ideas and solutions that may not emerge from human creativity alone, there is a danger that excessive dependence on AI could undermine the unique aspects of human creativity, such as emotional depth, moral reasoning, and contextual awareness. This potential over-reliance emphasizes the importance of designing AI systems that support and amplify human creativity rather than diminish it.

In conclusion, integrating AI into creative work comes with scaling opportunities that are unheard of for creative advancements. The future of Human-AI Co-Creativity will hinge on balancing the enhancement, rather than the substitution, of human creativity. Moving forward, the development of AI systems should focus on fostering collaboration rather than competition, enabling a harmonious fusion of human and machine creativity that pushes the boundaries of what is creatively possible. The concrete examples from the field of mathematics show us what is already possible in concise domains. Following the logic of the growth of generative AI tools in terms of efficiency, competencies, and generalizability, such co-creative efforts are expected to become possible in other domains soon.

Acknowledgments

Jennifer Haase's work was supported by the German Federal Ministry of Education and Research (BMBF), grant number 16DII133 (Weizenbaum-Institute). Part of this work was conducted while Sebastian Pokutta was visiting Tokyo University via a JSPS International Research Fellowship. The authors would like to thank Christoph Spiegel for providing images of the colorings and Thomas Grisold for helpful comments on an early draft, which significantly improved the exposition.

References

Almeida, P., Carvalho, V., and Simões, A. (2023). Reinforcement Learning Applied to AI Bots in First-Person Shooters: A Systematic Review. Algorithms, 16(7):323.

Anantrasirichai, N. and Bull, D. (2022). Artificial intelligence in the creative industries: a review. Artificial Intelligence Review, 55(1):589–656.

Aschenbrenner, L. (2024). Situational Awareness - The Decade Ahead. https://www.forourposterity.com/situational-awareness-the-decade-ahead/.

Bell, J. S. (1964). On the Einstein Podolsky Rosen paradox. Physics Physique Fizika, 1(3):195–200.
Bellaiche, L., Shahi, R., Turpin, M. H., Ragnhildstveit, A., Sprockett, S., Barr, N., Christensen, A., and Seli, P. (2023). Humans versus AI: whether and why we prefer human-created compared to AI-created artwork. Cognitive Research: Principles and Implications, 8(1):42.

Bereczki, E. O. and Kárpáti, A. (2021). Technology-enhanced creativity: A multiple case study of digital technology-integration expert teachers' beliefs and practices. Thinking Skills and Creativity, 39:100791.

Boden, M. A. (2009). Computer Models of Creativity. AI Magazine, 30(3):23–23.

Braun, G., Carderera, A., Combettes, C. W., Hassani, H., Karbasi, A., Mokhtari, A., and Pokutta, S. (2023). Conditional Gradient Methods. arXiv:2211.14103 [math].

Cain, S. (2024). First artwork painted by humanoid robot to sell at auction fetches $1m. The Guardian.

Cambon, A., Hecht, B., Edelman, B., Ngwe, D., Jaffe, S., Heger, A., Vorvoreanu, M., Peng, S., Hofman, J., Farach, A., Bermejo-Cano, M., Knudsen, E., Bono, J., Sanghavi, H., Spatharioti, S., Rothschild, D., Goldstein, D. G., Kalliamvakou, E., Cihon, P., Demirer, M., Schwarz, M., and Teevan, J. (2023). Early LLM-based Tools for Enterprise Information Workers Likely Provide Meaningful Boosts to Productivity. Published by Microsoft.

Christofferson, A., James, A., Rowland, T., and Rey, I. (2023). How Will Generative AI Change the Video Game Industry?

Colton, S. and Wiggins, G. A. (2012). Computational creativity: The final frontier? In ECAI, volume 12, pages 21–26, Montpellier.

Cremer, D. D., Bianzino, N. M., and Falk, B. (2023). How Generative AI Could Disrupt Creative Work. Harvard Business Review.

Cromwell, J. R., Harvey, J.-F., Haase, J., and Gardner, H. K. (2023). Discovering Where ChatGPT Can Create Value for Your Company. Harvard Business Review.

Cropley, D. and Cropley, A. (2023). Creativity and the Cyber Shock: The Ultimate Paradox. The Journal of Creative Behavior.

Davis, N. (2013). Human-Computer Co-Creativity: Blending Human and Computational Creativity. Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment, 9(6):9–12.

Dell'Aversana, P. (2023). GPT-3: a new cooperation scenario between humans and machines. Benefits and limitations of GPT-3 as a coding virtual assistant.

Designolle, S., Iommazzo, G., Besançon, M., Knebel, S., Gelß, P., and Pokutta, S. (2023). Improved local models and new Bell inequalities via Frank-Wolfe algorithms. Physical Review Research, 5(4):043059.

Doshi, A. R. and Hauser, O. P. (2024). Generative AI enhances individual creativity but reduces the collective diversity of novel content. Science Advances, 10(28):eadn5290.

Eapen, T. T., Finkenstadt, D. J., Folk, J., and Venkataswamy, L. (2023). How Generative AI Can Augment Human Creativity. Harvard Business Review.

Einstein, A., Podolsky, B., and Rosen, N. (1935). Can Quantum-Mechanical Description of Physical Reality Be Considered Complete? Physical Review, 47(10):777–780.

Else, H. (2023). Abstracts written by ChatGPT fool scientists. Nature.

Frich, J., Mose Biskjaer, M., and Dalsgaard, P. (2018). Twenty Years of Creativity Research in Human-Computer Interaction: Current State and Future Directions. In Proceedings of the 2018 Designing Interactive Systems Conference, DIS '18, pages 1235–1257, New York, NY, USA. Association for Computing Machinery.

Fui-Hoon Nah, F., Zheng, R., Cai, J., Siau, K., and Chen, L. (2023). Generative AI and ChatGPT: Applications, challenges, and AI-human collaboration. Journal of Information Technology Case and Application Research, 25(3):277–304.
Gilhooly, K. (2023). AI vs humans in the AUT: simulations to LLMs. Journal of Creativity.

Guzik, E. E., Byrge, C., and Gilde, C. (2023). The originality of machines: AI takes the Torrance Test. Journal of Creativity, 33(3).

Haase, J., Djurica, D., and Mendling, J. (2023a). The Art of Inspiring Creativity: Exploring the Unique Impact of AI-generated Images. In AMCIS 2023 Proceedings.

Haase, J. and Hanel, P. H. P. (2023). Artificial muses: Generative artificial intelligence chatbots have risen to human-level creativity. Journal of Creativity, 33(3):100066.

Haase, J., Hanel, P. H. P., and Gronau, N. (2023b). Creativity enhancement methods for adults: A meta-analysis. Psychology of Aesthetics, Creativity, and the Arts.

Hadamard, J. (1954). An essay on the psychology of invention in the mathematical field. Courier Corporation.

Hekkert, P., Snelders, D., and Van Wieringen, P. C. W. (2003). 'Most advanced, yet acceptable': Typicality and novelty as joint predictors of aesthetic preference in industrial design. British Journal of Psychology, 94(1):111–124.

Heyman, J. L., Rick, S. R., Giacomelli, G., Wen, H., Laubacher, R., Taubenslag, N., Knicker, M., Jeddi, Y., Ragupathy, P., Curhan, J., and Malone, T. (2024). Supermind Ideator: How Scaffolding Human-AI Collaboration Can Increase Creativity. In Proceedings of the ACM Collective Intelligence Conference, CI '24, pages 18–28, New York, NY, USA. Association for Computing Machinery.

Hitsuwari, J., Ueda, Y., Yun, W., and Nomura, M. (2023). Does human–AI collaboration lead to more creative art? Aesthetic evaluation of human-made and AI-generated haiku poetry. Computers in Human Behavior, 139.

Janson, A., Schmidt-Kraepelin, M., Schöbel, S., and Sunyaev, A. (2023). Special Issue Editorial: Adaptive and Intelligent Gamification Design. AIS Transactions on Human-Computer Interaction, 15(2):136–145.

Jennings, K. E. (2010). Developing Creativity: Artificial Barriers in Artificial Intelligence. Minds and Machines, 20(4):489–501.

Jesson, A., Beltran-Velez, N., Chu, Q., Karlekar, S., Kossen, J., Gal, Y., Cunningham, J. P., and Blei, D. (2024). Estimating the Hallucination Rate of Generative AI. arXiv:2406.07457 [cs, stat].

Joshi, N. (2022). Can AI Emulate Human Creativity? Forbes.

Liu, J., Xia, C. S., Wang, Y., and Zhang, L. (2023). Is Your Code Generated by ChatGPT Really Correct? Rigorous Evaluation of Large Language Models for Code Generation. Advances in Neural Information Processing Systems, 36:21558–21572.

Marrone, R., Cropley, D., and Medeiros, K. (2024). How Does Narrow AI Impact Human Creativity? Creativity Research Journal, 0(0):1–11.

Mateja, D. and Heinzl, A. (2021). Towards Machine Learning as an Enabler of Computational Creativity. IEEE Transactions on Artificial Intelligence, 2(6):460–475.

Melville, N. P., Robert, L., and Xiao, X. (2023). Putting humans back in the loop: An affordance conceptualization of the 4th industrial revolution. Information Systems Journal, 33(4):733–757.

Metz, C. (2016). The Sadness and Beauty of Watching Google's AI Play Go. Wired.

Mundinger, K., Pokutta, S., Spiegel, C., and Zimmer, M. (2024). Extending the Continuum of Six-Colorings. Geombinatorics Quarterly.
Müller-Wienbergen, F., Müller, O., Seidel, S., and Becker, J. (2011). Leaving the Beaten Tracks in Creative Work – A Design Theory for Systems that Support Convergent and Divergent Thinking. Journal of the Association for Information Systems, 12(11).

Olszak, C. M. and Kisielnicki, J. (2018). A conceptual framework of information systems for organizational creativity support. Lessons from empirical investigations. Information Systems Management, 35(1):29–48.

OpenAI (2023). ChatGPT-4.

Parasuraman, R., Sheridan, T., and Wickens, C. (2000). A model for types and levels of human interaction with automation. IEEE Transactions on Systems, Man, and Cybernetics - Part A: Systems and Humans, 30(3):286–297.

Parczyk, O., Pokutta, S., Spiegel, C., and Szabó, T. (2024). New Ramsey Multiplicity Bounds and Search Heuristics. Foundations of Computational Mathematics.

Paulus, P. B., Dzindolet, M., and Kohn, N. W. (2012). Chapter 14 – Collaborative Creativity: Group Creativity and Team Innovation. In Mumford, M. D., editor, Handbook of Organizational Creativity, pages 327–357. Academic Press, San Diego.

Poincaré, H. and Newman, J. (1908). Mathematical creation. Scientific Work and Creativity: Advice from the Masters, 1:177–183.

Rafner, J., Beaty, R. E., Kaufman, J. C., Lubart, T., and Sherson, J. (2023). Creativity in the age of generative AI. Nature Human Behaviour, 7(11):1836–1838.

Ragot, M., Martin, N., and Cojean, S. (2020). AI-generated vs. Human Artworks. A Perception Bias Towards Artificial Intelligence? In Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems, CHI EA '20, pages 1–10, New York, NY, USA. Association for Computing Machinery.

Runco, M. A. (2023). AI Can Only Produce Artificial Creativity. Journal of Creativity.

Seidel, S., Berente, N., Lindberg, A., Lyytinen, K., Martinez, B., and Nickerson, J. V. (2020). Artificial Intelligence and Video Game Creation: A Framework for the New Logic of Autonomous Design. Journal of Digital Social Research, 2(3):126–157.

Serbanescu, A. and Nack, F. (2023). Human-AI system co-creativity for building narrative worlds. IASDR Conference Series.

Shneiderman, B. (2007). Creativity support tools: accelerating discovery and innovation. Communications of the ACM, 50(12):20–32.

Shneiderman, B. (2020). Bridging the Gap Between Ethics and Practice: Guidelines for Reliable, Safe, and Trustworthy Human-centered AI Systems. ACM Transactions on Interactive Intelligent Systems, 10(4):1–31.

Sinha, R., Song, Z., and Zhou, T. (2023). A Mathematical Abstraction for Balancing the Trade-off Between Creativity and Reality in Large Language Models. arXiv:2306.02295 [cs].

Sio, U. N. and Lortie-Forgues, H. (2024). The impact of creativity training on creative performance: A meta-analytic review and critical evaluation of 5 decades of creativity training studies. Psychological Bulletin, 150(5):554–585.

Soifer, A. (2024). The New Mathematical Coloring Book: Mathematics of Coloring and the Colorful Life of Its Creators. Springer US, New York, NY.

Tian, H., Lu, W., Li, T. O., Tang, X., Cheung, S.-C., Klein, J., and Bissyandé, T. F. (2023). Is ChatGPT the Ultimate Programming Assistant – How far is it? arXiv:2304.11938 [cs].

Voigt, M. (2014). Improving Design of Systems Supporting Creativity-intensive Processes – A Cross-industry Focus Group Evaluation. Communications of the Association for Information Systems, 34:24.

Wallas, G. (1926). The art of thought. J. Cape, London.
White, C. (2023). Opinion: Artificial intelligence can't reproduce the wonders of original human creativity. The Star.
ai_researcher
2
ConceptSearch_Towards_Efficient_Program_Search_Using_LLMs_for_Abstraction_and_Reasoning_Corpus_(ARC).pdf
Pre-print of paper accepted at AAAI

ConceptSearch: Towards Efficient Program Search Using LLMs for Abstraction and Reasoning Corpus (ARC)

Kartik Singhal, Gautam Shroff
IIIT Delhi, India
{kartik21259, gautam.shroff}@iiitd.ac.in *

arXiv:2412.07322v2 [cs.LG] 11 Dec 2024

Abstract

The Abstraction and Reasoning Corpus (ARC) poses a significant challenge to artificial intelligence, demanding broad generalization and few-shot learning capabilities that remain elusive for current deep learning methods, including large language models (LLMs). While LLMs excel in program synthesis, their direct application to ARC yields limited success. To address this, we introduce ConceptSearch, a novel function-search algorithm that leverages LLMs for program generation and employs a concept-based scoring method to guide the search efficiently. Unlike simplistic pixel-based metrics like Hamming distance, ConceptSearch evaluates programs on their ability to capture the underlying transformation concept reflected in the input-output examples. We explore three scoring functions: Hamming distance, a CNN-based scoring function, and an LLM-based natural language scoring function. Experimental results demonstrate the effectiveness of ConceptSearch, achieving a significant performance improvement over direct prompting with GPT-4. Moreover, our novel concept-based scoring exhibits up to 30% greater efficiency compared to Hamming distance, measured in terms of the number of iterations required to reach the correct solution. These findings highlight the potential of LLM-driven program search when integrated with concept-based guidance for tackling challenging generalization problems like ARC.

Code: https://github.com/kksinghal/concept-search

1 Introduction

The Abstraction and Reasoning Corpus (ARC) constitutes a significant benchmark in artificial intelligence, specifically designed to evaluate the development of general-purpose intelligence (Chollet 2019). In contrast to other benchmarks that often prioritize pattern recognition or domain-specific expertise, ARC emphasizes fundamental cognitive skills, including abstraction, reasoning, and generalization. The corpus comprises a set of analogy puzzles, each presenting a series of input-output pairs (typically 2-4) that embody a latent transformation rule or concept. The central challenge lies in inferring this underlying transformation rule and subsequently applying it to previously unseen test input.

*This work was conducted while Kartik Singhal was an intern and Gautam Shroff was employed at TCS Research.

Figure 1: Three sample ARC tasks, easily solvable by humans, yet unsolved by our proposed method as well as the GPT-4 baseline (Xu et al. 2024).

The examples consist of an "input grid" and an "output grid," each featuring 10 symbols (visualized as unique colors), with sizes ranging from 1×1 to 30×30. To solve an evaluation task, a test-taker uses the provided training examples and the input grid of the test example to construct the output grid from scratch, determining its dimensions and symbol placement. Success is binary, achieved by correctly predicting the output grid for all test examples in a task, with up to three attempts. An intelligent system's performance on ARC is the fraction of tasks it successfully solves, measuring "developer-aware generalization," with no prior knowledge of evaluation-set tasks assumed.
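The all-or-nothing success criterion is easy to state in code. The following minimal Python sketch assumes grids are represented as nested lists of integers, a representation chosen here purely for illustration:

    def solves_task(predicted_outputs, true_outputs):
        # ARC success is binary: every predicted test grid must match exactly,
        # in both dimensions and symbol placement.
        if len(predicted_outputs) != len(true_outputs):
            return False
        return all(p == t for p, t in zip(predicted_outputs, true_outputs))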
Even though humans can solve most of the tasks, to the best of our knowledge no current machine learning techniques, including deep learning, are well-suited to tackle the ARC benchmark (see Figure 1). This is due to ARC's emphasis on broad generalization and few-shot learning, coupled with the fact that each task is unique; the evaluation set therefore consists of concepts that are not present in the training set.

Since purely neural approaches, including large language models (LLMs), often fail to produce the correct output grid in an end-to-end manner (Mitchell, Palmarini, and Moskvichev 2023; Bober-Irizar and Banerjee 2024; Xu et al. 2024), most of the current top approaches frame it as a program-synthesis problem. This strategy avoids the black-box nature of neural models and allows for high expressivity and the use of search-based methods. Instead of using a highly open-ended programming language such as Python, these methods employ hand-crafted languages, known as Domain-Specific Languages (DSLs). A DSL is designed to ensure that all ARC tasks can be solved using it, while being abstract and generic by defining a small number of versatile primitives, each applicable to numerous ARC tasks.

Most approaches can be broadly classified into three categories: brute-force search, neural-guided search, and LLM-based techniques. Brute-force search and neural-guided search-based methods require a carefully hand-crafted DSL, but can still scale poorly to complex problems due to combinatorial complexity. LLM-based techniques aim to either generate the output grid directly or generate a program that transforms the input grids to output grids, without any feedback loop.

(Greenblatt 2024) has demonstrated that sampling a large number of programs using GPT-4o leads to impressive performance on the ARC-AGI benchmark, exhibiting a scaling law between the number of samples generated and the number of tasks solved. Although this approach is computationally demanding, it highlights the potential of LLMs to generate solution programs for these tasks.

FunSearch (Romera-Paredes et al. 2023) proposed a function-search algorithm for problems in the mathematical sciences, utilizing an LLM to iteratively evolve the programs within its database. At each iteration, the LLM takes two programs sampled from the database, ranked according to a predefined scoring function. Taking inspiration from these sampled programs and leveraging their relative scores as indicators of proximity to the desired solution, the LLM generates a new, potentially improved program. This iterative process, driven by LLM-based program evolution, aims to converge towards increasingly accurate solutions.

FunSearch is suited to problems with an efficient evaluator for determining success and rich scoring feedback quantifying improvements, instead of a binary signal. For ARC-AGI, the evaluator is simply a pixel-wise comparison between the predicted output grid and the solution output grid. The challenge is the scoring function. In our problem, success is a binary measure: whether all the pixels in the predicted output grid match the true output grid. So, we need to develop a scoring function that can provide rich and useful signals to the LLM to guide the search.

A trivial scoring function is the Hamming distance between the predicted output grid and the true output grid, that is, the number of pixels not matching between the two grids, normalized by the size of the grid.
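Concretely, this baseline scorer might be implemented as follows (a minimal sketch; treating a shape mismatch as maximal distance is our own assumption, since the definition above does not spell that case out):

    import numpy as np

    def hamming_score(pred, target):
        # Normalized Hamming distance between two ARC grids (lower is better).
        pred, target = np.asarray(pred), np.asarray(target)
        if pred.shape != target.shape:
            return 1.0                  # assumption: maximal distance on shape mismatch
        return float(np.sum(pred != target)) / target.size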
However, relying solely on Hamming distance to evaluate programs can be misleading, as superficial resemblance can hide major differences in a program's logic and functionality. Even though visual similarity and a low Hamming distance between the grids might suggest resemblance, the underlying program functions used to generate them could differ significantly from the true solution program. Therefore, we need our scoring function to capture the concept or logic of the transformation.

Our work focuses on employing FunSearch on ARC-AGI and introduces two novel concept-based scoring functions integrated with a feedback loop. These scoring functions leverage two distinct modalities: one based on vision and the other on natural language. The results demonstrate a substantial improvement in task-solving performance using FunSearch, increasing the number of successfully solved tasks from 13/50 to 25/50. Our concept-based scoring function further improves task success to 29/50 while significantly enhancing the efficiency of the function search, by ∼30% compared to using the Hamming distance.

2 Related Works

Brute-force search. The winner of the Kaggle ARC Challenge 2020 (Icecuber 2020) implements a DSL with 142 hand-crafted unary functions on grids. At runtime, the functions are greedily composed on the input grids, with the resulting 'pieces' stored in a directed acyclic graph (DAG). Finally, a solver combines pieces from the DAG to get as close as possible to matching the training examples.

(Xu, Khalil, and Sanner 2022) implements a constraint-guided search by converting ARC grids into an object-graph representation and operating on these representations using a graph-based DSL. However, it only works when input and output grids have the same size.

(Ainooson et al. 2023) proposed a DSL called Visual Imagery Reasoning Language (VIMRL), based on core knowledge of objectness, for multi-level search-based program synthesis, allowing flexible use of the core knowledge to address ARC tasks.

Neural-guided search. (Bober-Irizar and Banerjee 2024) adapt the DreamCoder framework (Ellis et al. 2020), designed to grow interpretable and generalizable knowledge through wake-sleep Bayesian program learning consisting of iterative phases to improve program synthesis, without the abstraction phase of growing the underlying library of code primitives.

HYSYNTH (Barke et al. 2024) uses an LLM to generate sample programs and learn a probabilistic context-free grammar (PCFG). This learned PCFG is then used to guide a bottom-up search for program synthesis.

LLM-based approaches. (Bober-Irizar and Banerjee 2024) compare multiple LLMs on ARC by directly predicting the output grid from the provided demonstration examples. The results reveal a substantial enhancement of ∼2x in performance from GPT-3.5 to GPT-4.

(Mitchell, Palmarini, and Moskvichev 2023) evaluate the performance of both text-only and multimodal GPT-4 on the ConceptARC benchmark (Moskvichev, Odouard, and Mitchell 2023) and conclude that neither version of GPT-4 has developed robust abstraction abilities at human-like levels.

Hypothesis search (Wang et al. 2024) aims to decouple program generation into explicit hypothesis generation in natural language (NL) and program generation from the NL hypothesis. Natural language offers abstract but ambiguous representations; programmatic hypotheses, though detailed and verifiable, may distract language models. Their results show that explicit hypothesis formation significantly outperforms direct prompting.
Figure 2: Flowchart of the function-search algorithm, illustrating how programs in program database P are evolved using scoring function S in the context of the Abstraction and Reasoning Corpus.

(Xu et al. 2024) shows that the main issue is the LLM's difficulty in maintaining "object cohesion" across ARC image grids, due to the grids' two-dimensional nature. To address this, they introduced the 1D-ARC benchmark, a set of ARC-like tasks represented as a single line of text. It was found that GPT performs better on 1D-ARC tasks than on standard ARC tasks. Using the object-centric graph abstractions from (Xu, Khalil, and Sanner 2022) for the input-output grids significantly improves GPT-4's performance.

Code-Iteration (Butt et al. 2024) aims to learn a code-generation policy (a T5 model) through hindsight relabelling, which involves learning from its own generated programs for inputs and the realised outputs of those programs. This is coupled with growing the DSL primitives through a mutation algorithm.

3 Methodology

A task τ = [(I_{1:n}, O_{1:n}), (I_{t,1:n′}, O_{t,1:n′})] consists of a set of n demonstration examples (I_{1:n}, O_{1:n}) exhibiting a common latent transformation. The goal is to infer that transformation and apply it to each of the n′ test input grids I_{t,1:n′} to obtain the corresponding test output grids O_{t,1:n′}.

3.1 ConceptSearch

The algorithm begins with a program database P containing initial programs. The objective is to generate candidate solutions using a pre-trained LLM, leveraging two programs from P for in-context learning; these are selected using a scoring function S based on a similarity measure. Each newly generated program is evaluated to determine whether it solves the task τ; otherwise, it is added back to the program database. By iterating through this process, the program database "evolves" into a repository of new knowledge, and the LLM eventually generates the solution program. To make the search faster, I (= 5) independent experiments, called islands, are run in parallel. Each island has its own program database P_i, initialized with the top-2 programs of the program database P according to the scoring function S.

Figure 3: Compact version of the prompt used in the program-generation step, with two in-context program examples.

We adopt ARC-DSL (Hodel 2024) as our Domain-Specific Language, together with its provided program solvers for the training tasks to initialise the program database. At each program-generation step in island i, two programs f_1 and f_2 are sampled from P_i using a probability distribution based on the scoring function S (more details on S in later sections). The prompt consists of the initial problem context, the input grids I_{1:n} of the demonstration examples, the similar programs f_1 and f_2 along with their similarity scores, their realised outputs f_1(I_{1:n}) and f_2(I_{1:n}), and finally the desired outputs O_{1:n} of the task (see Figure 3). Similarity scores indicate which program is nearer to the solution, thereby offering more guidance to the LLM. Additionally, the program definitions of ARC-DSL in Python are provided for a better understanding of each function.
Algorithm 1: ConceptSearch algorithm
Input: task (I_{1:n}, O_{1:n}), (I_{t,1:n′}, O_{t,1:n′})
1: S = ScoringFunction()
2: program_db = ProgramDatabase()
3: for iteration in range(island_iterations) do
4:   fi1, fi2 = program_db.get_2_closest_fs(I_{1:n}, O_{1:n}, S)
5:   island = Island(fi1, fi2)
6:   for step in range(program_generation_steps) do
7:     f1, f2 = island.sample_2_closest_fs(I_{1:n}, O_{1:n}, S)
8:     f = gen_program(f1, f2, I_{1:n}, O_{1:n}, S)
9:     (syntactically_correct, error) = run(f)
10:    if syntactically_correct(f) then
11:      island.add_program(f)
12:      if evaluate(f, I_{1:n}, O_{1:n}) == 1 then
13:        if evaluate(f, I_{t,1:n′}, O_{t,1:n′}) == 1 then
14:          return "found solution"
15:        else if num_attempts ≥ 3 then
16:          return "solution not found"
17:        end if
18:      end if
19:    else
20:      island.add_syntax_error_to_next_prompt(error)
21:    end if
22:  end for
23:  program_db.add_newly_found_programs(island)
24: end for
25: return "solution not found"

In each of these program-generation steps, 5 different programs are generated in a single response. All of these generated programs are run with the training grids I_{1:n} as input and are evaluated by comparing dimensions and pixel equivalence with O_{1:n}. In case of a syntax error, a feedback loop adds the traceback error to the prompt of the next iteration, to potentially fix that syntax and generate the intended syntactically correct program. If a program is found that generates correct output grids for all demonstration examples, that program is submitted for evaluation on the test input-output grids (I_{t,1:n′}, O_{t,1:n′}). If it is a success, then the task is solved; otherwise, the algorithm continues for a maximum of 3 evaluation attempts on the test grids.

These function-generation steps are iterated 10 times, after which all the new programs in P_{1:I} are added to P and all islands are reinitialized, to allow sharing of the knowledge gained across multiple islands. This process of island iteration is performed twice. These numbers are chosen for a reasonable compute cost and can be scaled up for better performance (Greenblatt 2024). In our work, the maximum number of API calls to the LLM for a single task is given by I × program-generation-steps × island-iterations = 5 × 10 × 2 = 100.

We hypothesize that the quality of the programs provided in context directly influences the efficiency of the search process, and show that the choice of scoring function S is fundamental to solving complex tasks in reasonable compute and time, rather than sampling a large number of programs.

Our objective is to create a function that maps input-output grids into a rich embedding space, effectively encoding the transformation conceptually. This will assist LLMs in selecting the appropriate DSL functions for the transformation demonstrated in the examples. A scoring function will then calculate the Euclidean distance between the latent vectors of the transformations in the demonstration examples and that of the given program. Therefore, the score will always be non-negative, and the program with the smallest score will be the one most closely aligned with the transformation demonstrated in the examples.

For sampling programs using the scores, the probability of choosing a program f is calculated as:

$$p(f) = \frac{e^{-S(f)}}{\sum_{f_i \in P} e^{-S(f_i)}} \qquad (1)$$
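A direct transcription of Eq. (1) into Python might look as follows (a sketch; shifting the scores by their minimum leaves the distribution unchanged and is only added here to avoid numerical underflow):

    import math, random

    def sample_program(programs, scores, rng=None):
        # Probability of choosing f is exp(-S(f)) / sum_i exp(-S(f_i)), as in Eq. (1).
        rng = rng or random.Random(0)
        m = min(scores)
        weights = [math.exp(-(s - m)) for s in scores]
        return rng.choices(programs, weights=weights, k=1)[0]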
3.2 CNN-based Scoring Function

For demonstration examples (I_{1:n}, O_{1:n}) and a given program f, we first compute the realized outputs f(I_{1:n}). This yields two sets of transformations: the input-output grid pairs from the demonstration examples, (I_{1:n}, O_{1:n}), and the input-output pairs produced by the program, (I_{1:n}, f(I_{1:n})). Our goal is to compute the distance between these two sets of transformations.

To train a neural network capable of capturing the concept of a transformation, we utilize the Concept-ARC dataset (Moskvichev, Odouard, and Mitchell 2023). This dataset classifies various tasks into 16 distinct classes, with each class containing 11 manually created ARC tasks. In total, the dataset comprises 176 tasks, systematically categorized based on specific concepts.

Our neural-network architecture is inspired by (Bober-Irizar and Banerjee 2024) for handling variable-sized grids (see Figure 4). The minimum grid size is 1×1; therefore, the convolution layers keep the grid size the same. A dilated convolution layer (Yu and Koltun 2016) is also used, which creates gaps between sampled pixels, mimicking a convolution on a downsampled image.

Given the variability in grid sizes, it is necessary to use an aggregator function for each channel to generate a feature vector of constant size that is independent of the grid size. This ensures that grids of different sizes can be represented in the same feature space. Specifically, we compute the minimum, maximum, and mean of each channel, then concatenate these values to form a flattened feature vector.

For each input-output pair in the task grids, the pair is passed through the model. The mean of the per-pair feature vectors, after performing a difference operation, is calculated. The final feature vector z, which integrates information from all input-output pairs, effectively represents the underlying transformation rule (see Algorithm 2). This resultant vector is then processed through the classification and projection layers.

Figure 4: Model architecture trained with classification loss and contrastive loss for learning meaningful task representations. The grid size can be as small as 1×1, and cell occupancy is one-hot encoded into 10 channels, denoting 9 different colours and one for the cell being empty.

This model F is trained using a dual-loss approach, incorporating both a cross-entropy loss and a contrastive loss (triplet-margin loss) (Khosla et al. 2021), to classify tasks into the 16 distinct concept classes. The contrastive loss is applied on the projection layer and ensures that samples within the same class are closer together in the feature space, while samples from different classes are pushed farther apart.

Algorithm 2: Inference of the transformation embedding vector from task grids using the CNN-based model (Figure 4)
Input: D = (I_{1:n}, O_{1:n}), a set of input-output grid pairs representing a latent transformation
Output: A transformation embedding vector z ∈ R^{192}
1: D_d ← ∅ {initialize an empty list of difference vectors}
2: for (I_i ∈ R^{10×h_i×w_i}, O_i ∈ R^{10×h_o×w_o}) in D do
3:   F_I ← f(I_i) ∈ R^{64×h_i×w_i}; F_O ← f(O_i) ∈ R^{64×h_o×w_o} {feature extraction}
4:   v_I ← [min(F_I), max(F_I), mean(F_I)] ∈ R^{192}; v_O ← [min(F_O), max(F_O), mean(F_O)] ∈ R^{192} {channel-wise aggregation along h × w}
5:   d_i ← v_I − v_O {difference vector}
6:   D_d ← D_d ∪ {d_i}
7: end for
8: z ← (1/n) Σ_{d_i ∈ D_d} d_i {embedding generation}
9: return z

The feature vector z = F((I, O)), the mean of the difference vectors, serves as the embedding vector, allowing a meaningful representation of the transformation concept.

The number of model parameters is kept small, due to the small size of the Concept-ARC dataset (176 samples), to avoid overfitting. For better generalization, data augmentation techniques were used, such as rotation and transposition of both input and output grids, and random permutation of colors across all the task examples. Since the dataset is small, we use k-fold cross-validation (k = 5) for hyperparameter tuning and take the mean of the feature vectors obtained from each of the 5 models.

The similarity score between the transformation underlying the demonstration examples (I_{1:n}, O_{1:n}) and a given program f is computed as:

$$S_{CNN}((I_{1:n}, O_{1:n}), f) = S_{CNN}((I_{1:n}, O_{1:n}), (I_{1:n}, f(I_{1:n}))) = \lVert F(I_{1:n}, O_{1:n}) - F(I_{1:n}, f(I_{1:n})) \rVert \qquad (2)$$

Figure 5: Comparing CNN-based and LLM-based scoring: one extracts features via a CNN, while the other leverages an LLM for a natural language hypothesis, which is then converted to a feature embedding using a SentenceTransformer.

Figure 6: Compact version of the prompt used in the hypothesis-generation step for generating a natural language description of the transformation underlying a given program, with N in-context examples.

3.3 LLM-based Natural Language Scoring Function

In the previous scoring function, the feature extractor was a convolutional neural network. In this scoring function, we would like to use a pre-trained LLM for feature extraction. Therefore, for each transformation (I_{1:n}, O_{1:n}), we generate a transformation hypothesis h_0 in natural language (NL) using an LLM. A text embedding model then maps this natural language hypothesis into a rich embedding, which can again be used for calculating the Euclidean distance between two transformations.

LARC (Acquaviva et al. 2023) is a dataset consisting of natural language descriptions of how to solve each task, provided by human participants. These descriptions are only available for 354 tasks in the training set, namely those where a human participant was successfully able to solve the task using a natural language description of the transformation provided by another human participant. For the rest of the programs, and for each new LLM-generated program added to the program database, a description is generated using a pre-trained LLM as a completion task, by providing existing programs in the program database and their descriptions for in-context learning (see Figure 6). Additionally, the program definitions of ARC-DSL in Python are provided for a better understanding of each function.

For solving a task with demonstration examples (I_{1:n}, O_{1:n}), our objective is to find a natural language hypothesis that corresponds to these examples. This hypothesis will serve as a reference point from which we calculate the distance to other candidate hypotheses. To generate this "goal" hypothesis h_0, we employ a completion task. We select the top-10 programs f_i from our program database using the CNN-based scoring function (Section 3.2). For these selected programs, we provide their demonstration examples (I_{1:n}, f_i(I_{1:n})) along with their corresponding descriptions for in-context learning (ICL). The LLM then completes the description for the task demonstration examples (I_{1:n}, O_{1:n}) (see Figure 7). To manage the context length effectively, we limit the in-context learning to 10 examples.

Figure 7: Compact version of the prompt used in the goal-hypothesis generation step for generating a natural language description of the transformation underlying input-output grids, with ten in-context examples.

Since we know that deep learning methods, including LLMs, don't work well for ARC tasks (Wang et al. 2024), the natural language descriptions generated by LLMs may not be accurate. For this reason, we generate a unique "goal" hypothesis for each of the islands, to increase the odds of finding the solution within the given iterations; this is not possible with the CNN-based scoring function.

At this stage, we have natural language hypotheses: h_0 for the desired transformation (the goal hypothesis), and h_f for each of the programs in the program database. To derive a feature vector for a natural language hypothesis, we fine-tune a SentenceTransformer F (all-mpnet-base-v2) (Reimers and Gurevych 2019) on the ConceptARC dataset with a contrastive loss (BatchAllTripletLoss). This fine-tuning with contrastive loss ensures that conceptually similar hypotheses are positioned closer to each other in the embedding space, while conceptually different hypotheses are pushed further apart.

The similarity score between the transformation underlying the goal hypothesis h_0 and a program description h_f is

$$S_{LLM}(h_0, h_f) = \lVert F(h_0) - F(h_f) \rVert \qquad (3)$$
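Both concept-based scorers thus reduce to a Euclidean distance between embeddings and differ only in the feature extractor. The following Python sketch illustrates both under stated assumptions: "cnn" is a hypothetical module mapping a one-hot (10, H, W) grid to (64, H, W) features (the paper's trained model is not reproduced here), and the off-the-shelf all-mpnet-base-v2 checkpoint stands in for the paper's contrastively fine-tuned one.

    import torch
    import numpy as np
    from sentence_transformers import SentenceTransformer

    def aggregate(features):
        # Channel-wise min/max/mean over the spatial dims (Algorithm 2): (64, H, W) -> (192,)
        flat = features.flatten(1)
        return torch.cat([flat.min(dim=1).values, flat.max(dim=1).values, flat.mean(dim=1)])

    def s_cnn(cnn, demo_pairs, program_pairs):
        # Eq. (2): distance between the mean difference-vector embeddings of two pair sets.
        def embed(pairs):
            diffs = [aggregate(cnn(I)) - aggregate(cnn(O)) for I, O in pairs]
            return torch.stack(diffs).mean(dim=0)
        return float(torch.linalg.norm(embed(demo_pairs) - embed(program_pairs)))

    encoder = SentenceTransformer("all-mpnet-base-v2")

    def s_llm(goal_hypothesis, program_hypothesis):
        # Eq. (3): distance between natural language hypothesis embeddings.
        e0, ef = encoder.encode([goal_hypothesis, program_hypothesis])
        return float(np.linalg.norm(e0 - ef))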
In our experiments, we utilize Gemini 1.5 Pro for both program generation and goal hypothesis generation. This choice of using Gemini is made after manually experiment- ing with multiple ARC tasks using different LLMs. For the hypothesis generation of program definitions, we employ Gemini 1.5 Flash, which allows access to a higher number of tokens per minute. FunSearch (Romera-Paredes et al. 2023) does not use much context about the problem, it should be viewed as a generator of diverse, syntactically correct programs with occasionally interesting ideas, which eventually solves the problem at hand. In our case, we provide the exact problem context; therefore, the generation is strongly conditioned. With temperature 0, it does not lead to much changes to the code upon iteration. Therefore, for program generation, the temperature is set to 1 for generating creative solutions. In our work, we utilize 5 islands with 10 program it- erations each. The islands are reset twice, resulting in a maximum number of program generations per task calcu- lated as I × program-generation-steps × island-iterations = 5 × 10 × 2 = 100. During each API call for program gener- ation, we prompt the LLM to generate 5 new programs with different logic. This strategy allows us to generate a larger number of programs while keeping the inference cost rela- tively low. Consequently, the maximum number of programs generated for a single task is 100 × 5 = 500. This approach ensures a diverse set of candidate programs for each task, enhancing the likelihood of finding optimal solutions. In in- stances where the LLM’s response does not match the ex- pected output, such as failing to generate any program code, the same prompt is repeated until at least one program code is produced. We evaluate our method using three different scoring functions: Hamming distance, CNN-based scoring, and LLM-based natural language scoring. The results are com- pared in Table 1 and 2. The Hamming distance between two grids is calculated as the number of mismatched pixels normalized by the grid size. Our findings reveal a substan- tial improvement in performance when transitioning from direct-grid-based prompting to a function-search algorithm with Hamming distance. Specifically, task success increased from 13/50 to 25/50. This improvement shows that the Figure 7: Compact version of prompt used in goal hypothe- sis generation step for generating natural language descrip- tion of transformation underlying input-output grids with ten in-context examples and each new LLM-generated program added to the program database, a description is generated using a pre-trained LLM as a completion task by providing existing programs in the program database and their descriptions for in-context learn- ing (see Figure 6). Additionally, the program definitions of ARC-DSL in Python are provided for a better understanding of each function. For solving a task with demonstration examples (I1:n, O1:n), our objective is to find a natural language hy- pothesis that corresponds to these examples. This hypothe- sis will serve as a reference point from which we calculate the distance to other candidate hypotheses. To generate this ”goal” hypothesis h0, we employ a completion task. We select the top-10 programs fi from our program database using a CNN-based scoring function (section 3.2). For these selected programs, we provide their demonstration examples (I1:n, fi(I1:n)) along with their corresponding de- scriptions for in-context learning (ICL). 
The LLM then com- pletes the description for the task demonstration examples (I1:n, O1:n) (see Figure 7). To manage the context length effectively, we limit the in-context learning to 10 examples. Since we know that deep learning methods don’t work well for ARC tasks including LLMs (Wang et al. 2024), the natural language descriptions generated by LLMs may not be accurate. For this reason, we generate a unique ”goal” hy- pothesis for each of the islands to increase the odds of find- ing the solution within given iterations, which is not possible with a CNN-based scoring function. At this stage, we have natural language hypotheses, h0 for the desired transformation (goal hypothesis) and hf for each of the programs in the program database. To derive a fea- ture vector for a natural language hypothesis, we fine-tune a SentenceTransformer F (all-mpnet-base-v2) (Reimers and Gurevych 2019) using the ConceptARC dataset with con- trastive loss (BatchAllTripletLoss). This fine-tuning process with contrastive loss ensures that conceptually similar hy- Method (Xu et al. 2024) - GPT-4 Ours - Hamming distance Ours - CNN-based Ours - LLM-based Accuracy Mean iters 13/50 25/50 25/50 29/50 - 3.70 2.80 2.05 Table 1: Comparison of our approach to direct-grid based prompting with GPT-4. Accuracy is the percentage of tasks solved in the evaluation set. Mean iterations is the average number of program-iterations taken to find the solution per task, considering only the tasks solved by all the methods LLM-based CNN-based Hamming distance CNN-based 24.7 10.3 29.3 - Table 2: Efficiency (%) improvement of the scoring function in each column compared to the scoring function in the cor- responding row based on the number of program-iterations taken to find the solution for tasks solved by both methods. function-search algorithm is effective in guiding the LLM towards the solution. However, this scoring function may be misleading in some cases. For instance, in transformations where changes from the input to the output grid are minimal, the Hamming distance is low but program may fail to capture the com- plexity of the actual transformation and may be far from the solution program. Knowing that function-search is effective in solving these tasks, the objective with concept-based scor- ing functions is to make search more efficient with more ef- fective guidance. The efficiency is compared based on the number of iterations it took to arrive at the solution. For the two concept-based scoring functions, the main difference lies in the feature extraction process from the demonstration examples. The CNN-based scoring function utilizes a convolutional neural network for feature extrac- tion. In contrast, the LLM-based method extracts features in the form of natural language hypotheses using an LLM. These natural language hypotheses are then mapped to a rich feature space using a text-embedding model. The performance remained consistent when using a CNN- based scoring function, achieving 25/50. However, the search was 29.3% more efficient using CNN-based scoring function over Hamming distance, considering the intersec- tion of tasks solved by both of them. An improvement in task success was observed with the LLM-based NL scoring function, which achieved a score of 29/50 with 10.3% more efficient search over CNN-based scoring function. This sug- gests that LLMs have better feature extraction capabilities and overall effectiveness in handling ARC tasks. 
A natural question arises regarding whether the perfor- mance can be further improved by combining CNN and LLM-based scoring functions. However, it was found that there is no task that the CNN-based scoring function solves that the LLM-based scoring function does not. Therefore, combining both scoring functions may not enable additional tasks to be solved; whether it improves efficiency remains to be seen. 5 Discussion Even though our approach of function-search has signif- icantly improved performance for ARC, we are still far from solving this problem. The performance of end-to-end deep learning methods even with extensive training does not yield decent results. Code-It (Butt et al. 2024) achieves only 14.75% on the evaluation set. This is related to the feature extraction capabilities of the current deep learning methods. Our CNN-based classifier also achieved only ∼40% accu- racy on the Concept-ARC dataset (Moskvichev, Odouard, and Mitchell 2023). Due to poor feature extraction methods, we have resorted to search-based methods guided by these approximate feature extractors. Effective feedback mechanisms are crucial for directing the search process in the right direction. In our approach, the scoring function served as one form of feedback alongside information on syntax errors. The scoring function provides feedback by evaluating in-context examples and assigning relative scores to guide the search. To strengthen the feed- back signals, it is crucial to include information about the specific issues with in-context programs. Also, each step in the program generation process currently operates indepen- dently, which can lead to the LLM repeating previous mis- takes. The only experience the system receives is derived from the evolved in-context examples. By integrating a de- tailed feedback mechanism and leveraging accumulated ex- periences, we may be able to enhance the search. One approach tested in conjunction with function-search for addressing complex tasks in a step-by-step manner was problem-iteration, though it did not produce successful re- sults. This method involved performing several iterations of code-iterations and then resetting the problem using the best program f from the program database, as evaluated by a scoring function. In this process, the new input grid is de- rived as f (I1:n), while the output grid remains unchanged. Essentially, this approach aims to solve part of the problem and then reframe the remaining portion as a new problem. However, it was observed that the best program f frequently lost crucial information from the original input grid, which was necessary for reaching the desired output grid. 6 Conclusion In this paper, we proposed a novel function-search algo- rithm incorporating a concept-based scoring method to en- hance search efficiency using large language models (LLMs) for Abstraction and Reasoning Corpus (ARC). Our method achieves a notable improvement, with task performance reaching 58% compared to the baseline’s direct-grid ap- proach, which only achieves 26%, when evaluated on a set of 50 tasks using GPT-4. Furthermore, our concept-based scoring function demonstrates up to 30% greater efficiency than Hamming distance, as measured by the number of code iterations needed to reach the correct solution. This advance- ment highlights the effectiveness of our LLM-based search strategy, which avoids the high costs associated with sam- pling a large number of solutions (Greenblatt 2024). Xu, Y.; Li, W.; Vaezipoor, P.; Sanner, S.; and Khalil, E. B. 
References

Acquaviva, S.; Pu, Y.; Kryven, M.; Sechopoulos, T.; Wong, C.; Ecanow, G. E.; Nye, M.; Tessler, M. H.; and Tenenbaum, J. B. 2023. Communicating Natural Programs to Humans and Machines. arXiv:2106.07824.

Ainooson, J.; Sanyal, D.; Michelson, J. P.; Yang, Y.; and Kunda, M. 2023. A Neurodiversity-Inspired Solver for the Abstraction & Reasoning Corpus (ARC) Using Visual Imagery and Program Synthesis. arXiv:2302.09425.

Barke, S.; Gonzalez, E. A.; Kasibatla, S. R.; Berg-Kirkpatrick, T.; and Polikarpova, N. 2024. HYSYNTH: Context-Free LLM Approximation for Guiding Program Synthesis. arXiv:2405.15880.

Bober-Irizar, M.; and Banerjee, S. 2024. Neural networks for abstraction and reasoning: Towards broad generalization in machines. arXiv:2402.03507.

Butt, N.; Manczak, B.; Wiggers, A.; Rainone, C.; Zhang, D. W.; Defferrard, M.; and Cohen, T. 2024. CodeIt: Self-Improving Language Models with Prioritized Hindsight Replay. arXiv:2402.04858.

Chollet, F. 2019. On the Measure of Intelligence. arXiv:1911.01547.

Ellis, K.; Wong, C.; Nye, M.; Sable-Meyer, M.; Cary, L.; Morales, L.; Hewitt, L.; Solar-Lezama, A.; and Tenenbaum, J. B. 2020. DreamCoder: Growing generalizable, interpretable knowledge with wake-sleep Bayesian program learning. arXiv:2006.08381.

Greenblatt, R. 2024. Getting 50% (SoTA) on ARC-AGI with GPT-4o.

Hodel, M. 2024. ARC-DSL.

Icecuber. 2020. Winner of ARC Challenge.

Khosla, P.; Teterwak, P.; Wang, C.; Sarna, A.; Tian, Y.; Isola, P.; Maschinot, A.; Liu, C.; and Krishnan, D. 2021. Supervised Contrastive Learning. arXiv:2004.11362.

Mitchell, M.; Palmarini, A. B.; and Moskvichev, A. 2023. Comparing Humans, GPT-4, and GPT-4V On Abstraction and Reasoning Tasks. arXiv:2311.09247.

Moskvichev, A.; Odouard, V. V.; and Mitchell, M. 2023. The ConceptARC Benchmark: Evaluating Understanding and Generalization in the ARC Domain. arXiv:2305.07141.

Reimers, N.; and Gurevych, I. 2019. Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics.

Romera-Paredes, B.; Barekatain, M.; Novikov, A.; Balog, M.; Kumar, M. P.; Dupont, E.; Ruiz, F. J. R.; Ellenberg, J. S.; Wang, P.; Fawzi, O.; Kohli, P.; and Fawzi, A. 2023. Mathematical discoveries from program search with large language models. Nature, 625(7995): 468–475.

Wang, R.; Zelikman, E.; Poesia, G.; Pu, Y.; Haber, N.; and Goodman, N. D. 2024. Hypothesis Search: Inductive Reasoning with Language Models. arXiv:2309.05660.

Xu, Y.; Khalil, E. B.; and Sanner, S. 2022. Graphs, Constraints, and Search for the Abstraction and Reasoning Corpus. arXiv:2210.09880.

Xu, Y.; Li, W.; Vaezipoor, P.; Sanner, S.; and Khalil, E. B. 2024. LLMs and the Abstraction and Reasoning Corpus: Successes, Failures, and the Importance of Object-based Representations. arXiv:2305.18354.

Yu, F.; and Koltun, V. 2016. Multi-Scale Context Aggregation by Dilated Convolutions. arXiv:1511.07122.
ai_researcher
2
Machine_Learning_in_Tissue_Engineering.pdf
Received: 14 Jul 2023 | Revised: 25 Aug 2023 | Accepted: 27 Aug 2023
DOI: xxx/xxxx

ARTICLE TYPE

arXiv:2309.00837v1 [cs.LG] 2 Sep 2023

Autonomous Soft Tissue Retraction Using Demonstration-Guided Reinforcement Learning

Amritpal Singh1, Wenqi Shi2, May D Wang3

1College of Computing, Georgia Institute of Technology, Georgia, USA
2Electrical and Computer Engineering, Georgia Institute of Technology, Georgia, USA
3Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology, Georgia, USA

Correspondence: Dr. May D. Wang, Email: [email protected]

This work is accepted in MICCAI 2023 - Augmented Environments for Computer Assisted Interventions (AE-CAI) workshop.

Abstract
In the context of surgery, robots can provide substantial assistance by performing small, repetitive tasks such as suturing, needle exchange, and tissue retraction, thereby enabling surgeons to concentrate on more complex aspects of the procedure. However, existing surgical task learning mainly pertains to rigid body interactions, whereas the advancement towards more sophisticated surgical robots necessitates the manipulation of soft bodies. Previous work focused on tissue phantoms for soft tissue task learning, which can be expensive and can be an entry barrier to research. Simulation environments present a safe and efficient way to learn surgical tasks before their application to actual tissue. In this study, we create a Robot Operating System (ROS)-compatible physics simulation environment with support for both rigid and soft body interactions within surgical tasks. Furthermore, we investigate the soft tissue interactions facilitated by the patient-side manipulator of the DaVinci surgical robot. Leveraging the PyBullet physics engine, we simulate kinematics and establish anchor points to guide the robotic arm when manipulating soft tissue. Using demonstration-guided reinforcement learning (RL) algorithms, we investigate their performance in comparison to traditional reinforcement learning algorithms. Our in silico trials demonstrate a proof-of-concept for autonomous surgical soft tissue retraction. The results corroborate the feasibility of learning soft body manipulation through the application of reinforcement learning agents. This work lays the foundation for future research into the development and refinement of surgical robots capable of managing both rigid and soft tissue interactions. Code is available at https://github.com/amritpal-001/tissue_retract.

KEYWORDS
Computer-assisted surgery, Surgical task learning, Reinforcement learning, Robotic surgery, Surgical simulation, Soft tissue retraction

Abbreviations: DG-RL = demonstration-guided reinforcement learning; STL = surgical task learning; PSM = patient-side manipulator; ROS = Robot Operating System; RL = reinforcement learning; MDP = Markov Decision Process; DDPG = deep deterministic policy gradient.

1 INTRODUCTION

Over the past two decades, robotic surgery has revolutionized the field of surgical procedures. There is a growing literature on unmet surgical needs, the disparity in surgical access 1,2, and surgeon fatigue 3. The introduction of robotic surgery, combined with a growing digital footprint, has made it increasingly feasible to assist or automate specific subtasks. Robotic surgical systems are capable of supporting surgeons in performing repetitive tasks such as suturing and handover of needles. This reduces the overall duration of surgical procedures and enhances their efficiency. Simulations with physics engines such as PyBullet 5 and MuJoCo 4 have been effectively utilized to build accurate representations of surgical robots, to train algorithms, and to subsequently validate them in real-world scenarios 7,23,15.
Such simulation environments offer a risk-free platform to develop these systems, eliminating the potential for harm to both the robotic arm and the tissue. Most of the current research on surgical task learning is mainly confined to rigid-body interactions, such as needle grasping, gauze transfer, and needle handover 8,12. However, real-world surgical procedures involve the continuous manipulation of soft bodies, which underscores the necessity of mastering soft body manipulation to advance surgical systems 10. Unlike rigid body simulation, soft bodies can change in shape or length. Though the relative distance between points will change, soft bodies retain their shape to some degree. Unlike rigid bodies, soft bodies introduce an elastic pull or recoil toward their original position.

Murali et al. 21, Attanasio et al. 13, and Saeidi et al. 22 learn soft tissue manipulation using tissue phantoms, real tissue, or cadavers. This can be expensive and can be a barrier to entry into soft tissue manipulation research. Phantom tissue fabrication can have a high cost and low temporal stability, and it involves laborious development methodologies 18,11. As an alternative, soft tissue simulations offer a promising avenue for developing such systems prior to real-world testing. Tagliabue et al. 7 introduced UnityFlexML, a Unity-based system designed to interact with the patient-side manipulator (PSM) through soft body interactions. Although the use of tissue phantoms or real tissue can yield robust results, these resources can be prohibitively expensive and not always readily available. Advantageously, soft tissue simulations enable precise measurement of exerted forces, allowing agents to be trained with minimal or no tissue damage, a feature that can prove challenging when using tissue phantoms. With advances in physics engines, simulations can also allow for domain randomization to vary tissue color, size, location, tissue elasticity, and gravity. As such, simulations offer a valuable and scalable tool in advancing the field of surgical robotics.

Pioneering work in soft tissue manipulation has harnessed the capabilities of reinforcement learning algorithms 7. Despite their potential, reinforcement learning algorithms can encounter difficulties due to the vast exploration space of surgical tasks. The challenge here lies in the inherent complexity of the surgical environment, as well as the precise control required for such tasks. Leveraging demonstration data may offer a viable solution for mitigating this issue, by substantially reducing the exploration space and guiding the learning process.

In the present study, we discuss the complexities of soft tissue interactions with DaVinci robots and explore the use of a simulation environment to learn soft tissue manipulation tasks. To achieve this, we generate a simulation environment supporting soft and rigid body interactions and train agents to perform tissue retraction tasks. We formulate a rule-based policy to generate demonstration data to guide the training of reinforcement learning algorithms.
In summary, our primary contributions of this work are threefold:

• We establish a ROS-compatible physics simulation that replicates soft and rigid interactions for the DaVinci robot, particularly tailored to the tissue retraction task. We further explore the increased complexity of combined soft- and rigid-body tasks compared to purely rigid-body interactions. In this context, we frame the task as a Markov Decision Process (MDP), elucidating the intricacies of such interactions;
• We construct a rule-based policy for generating task-guidance data, subsequently employing these data to train reinforcement learning agents with demonstration guidance;
• We evaluate and compare the performance of demonstration-guided reinforcement learning agents with their traditional counterparts, including an ablation study on how performance depends on the number of demonstrations available.

2 RELATED WORKS

The advent of surgical assistive robots has led to a series of pioneering works that have sketched a comprehensive roadmap for surgical autonomy 10. This transformative concept holds the promise of mitigating surgeon fatigue, reducing operation duration, and facilitating supervised autonomy in telesurgery. Several robotic systems have emerged, each demonstrating partial autonomy and catering to specific surgical procedures. The CyberKnife system from Accuray Inc 17,24 is one such example, utilized predominantly for tumor radiosurgery. Another system, TSolution One, developed by THINK Surgical 20, is dedicated to rigid bone tissue surgery. ARTAS, a product of Restoration Robotics 6, presents the application of robotics in hair restoration. Each of these systems serves as a testament to the tangible benefits of incorporating robotic systems into surgical procedures, paving the way for the progression toward higher surgical autonomy.

Surgical simulation has surged in prominence, finding application in surgical training and the facilitation of task learning. By offering an affordable training modality coupled with the prospect of robust domain randomization, simulation underscores an effective approach to the instruction and enhancement of surgical skills. The introduction of 3D models, such as the dVRK-VREP simulator 16, has spurred a substantial exploration of in situ surgical task learning. These investigations cover a diverse array of tasks, including many derived from the Fundamentals of Laparoscopic Surgery (FLS). The advent of the AMBF platform 25 has further enriched this field, providing dynamic environments to facilitate interactions between rigid entities and medical robots. Complementary developments include the Surrol library by Xu et al. 8, specifically designed for rigid body surgical tasks. These include but are not limited to gauze retrieval, needle pickup, and needle retrieval, thereby broadening the range of simulated surgical tasks that can be practiced and mastered. However, soft tissue manipulation tasks like tissue retraction are a predominant portion of surgical procedures to explore and reach the region of interest (e.g., tumor, gall bladder).

FIGURE 1 Overview of tasks and pipelines for algorithm training using the environment and demonstration data for TissueRetract. (The figure also lists each task's observations: robot joint angles and velocity, tissue anchor coordinates, hidden tissue coordinates, and, for Tasks II and III, target position coordinates. Task descriptions: Task I retracts the tissue y cm from the center; Task II retracts the tissue center to a point y; Task III retracts the tissue from the left/center/right to a point y.)
The task of tissue retraction demands careful gripping and pulling of tissue to enhance the visibility of obscured regions without inducing tissue damage. This manipulation of soft tissues is decidedly challenging due to the need for sophisticated tissue tracking and precise planning within the context of dynamically deformable environments. The complexity of the task is further compounded by the requirements for high maneuverability, visual constraints, and the need for repeatability of movements. Such challenges highlight the need for advanced methodologies and tools to effectively navigate these complex surgical procedures.

For learning soft tissue manipulation, Attanasio et al. use deep neural networks to analyze phantom images along with procedural algorithms and a perpendicular retraction gesture, resulting in a 25% increase in background area after retraction. Murali et al. use learning by observation (LBO), an IL method that uses human demonstrations, followed by a finite state machine (FSM). Saeidi et al. use their Smart Tissue Autonomous Robot (STAR) system for anastomosis in phantoms and in-vivo intestinal tissues. To test these algorithms, Murali et al. 21 use viscoelastic tissue phantoms in conjunction with the Da Vinci Research Kit (dVRK) 19, which facilitates the learning of multilateral tissue cutting. Similarly, Attanasio et al. 13 investigated the use of Thiel embalmed cadavers for automating tissue retraction in minimally invasive surgery. Saeidi et al. 22 conceived a system for autonomous robotic laparoscopic surgery tailored for intestinal anastomosis, using phantom intestinal tissues and genuine porcine tissue. As discussed above, soft tissue simulations such as UnityFlexML 7 offer a promising and scalable alternative for developing such systems before real-world testing.

3 PRELIMINARIES

In reinforcement learning (RL), agents learn to explore an environment according to received feedback signals known as rewards. Reward shaping is a manual task that requires expert knowledge and task-specific fine-tuning, whereas exploration in a sparse reward setting is usually challenging. Deep deterministic policy gradient (DDPG) 26 is an off-policy RL algorithm that mitigates this concern by simultaneously learning the Q-function and a policy, using the former to learn the latter. The Q-function Q(s, a), or action-value function, gives the expected return of taking action a in state s. Since the Q-function is differentiable with respect to the action space for environments with continuous action spaces, this allows gradient-based learning of the policy. In DDPG, for any set D of transitions (st, at, rt, st+1, d) collected, the training objective of the agent is to learn the policy µθ(s) whose actions maximize Qϕ(s, a), a ∼ µθ(s), as well as the Q-function:

maxθ Es∼D, a∼µθ(s) [Qϕ(s, a)].   (1)
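To make Eq. (1) concrete, the following is a minimal PyTorch sketch of the resulting DDPG policy update; the network sizes and the randomly generated batch are illustrative, not the configuration used in this paper.

import torch
import torch.nn as nn

state_dim, action_dim = 8, 3
actor = nn.Sequential(nn.Linear(state_dim, 128), nn.ReLU(),
                      nn.Linear(128, action_dim), nn.Tanh())
critic = nn.Sequential(nn.Linear(state_dim + action_dim, 128), nn.ReLU(),
                       nn.Linear(128, 1))
actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-3)

states = torch.randn(64, state_dim)           # batch of states sampled from D
actions = actor(states)                        # a ~ mu_theta(s)
q_values = critic(torch.cat([states, actions], dim=-1))
actor_loss = -q_values.mean()                  # maximize E[Q_phi(s, mu_theta(s))]
actor_opt.zero_grad()
actor_loss.backward()
actor_opt.step()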
Q-function Q(s,a) or action value function gives the expected return on taking an arbitrary action at state s. Since the Q function is differentiable with respect to action space for environments with continuous action spaces, this allows gradient-based learning of policy. In DDPG, for any D set of transitions (st, a, r, st+1, d) collected, the training objective of the agent is to learn the policy µθ(s) whose actions maximize Qϕ(s, a), a ∼ µθ(s) and the Q function as follows: max θ E s∼D, a∼µθ(s) [Qϕ(s, a)]. (1) Imitation learning (IL), also known as learning from demonstration, is an approach to learning behavior from given demon- stration data. Instead of manually designing reward functions, the reward function is learned from demonstrations. Imitation learning algorithms can learn via supervised learning to mimic demonstrations (behavior cloning) or directly decode rewards from demonstrations to learn policy (inverse reinforcement learning). 27 IL struggles to learn robust policies for changing envi- ronment parameters, and joint optimization of reward and policy makes it difficult to train. Soft Q imitation learning (SQIL) is a model-free off-policy algorithm that can solve for both discrete and continuous action spaces 27. Demonstration-guided reinforcement learning (DG-RL) combines imitation learning with reinforcement learning. These methods can provide one or more of the following: better initialization of the agent using BC, augment demonstrations into replay buffer to improve the signal or augment environment reward with demonstration guide reward. 28,29,12. DDGPBC 28 algorithm was developed to address the exploration in environments with sparse rewards, by using demonstrations to successfully learn long-horizon, multi-step robotics tasks with continuous control. Cycle-of-learning (CoL) 29 is an actor-critic-based method that transitions from behavior cloning to reinforcement learning. It combines pre-training and joint loss functions to learn both value and policy functions from demonstrations. Demonstration-guided exploration (DEX) 12 is a DG-RL method that improves potential overestimation over previous actor-critic methods and augments environment rewards with a behavior gap between agent and expert policy. Several new algorithms have been proposed for each of these categories. For proof of concept, we limit our work to these algorithms only. 4 METHODS In this section, we describe the TissueRetract environment and formulate the tissue retraction task as a Markov decision process. We discuss success criteria and the implementation of reinforcement learning algorithms used. Figure 1 shows a graphic overview of the training pipeline and components of the environment tasks. 4.1 Environment We propose the TissueRetract environment as a new benchmark for surgical soft tissue manipulation tasks. In this work, we simulate a patient-side manipulator (PSM) robot as a rigid body with 6 degrees of freedom (DOF) and the tissue as a soft body, with hidden tissue lying underneath. We define anchor points to guide the robotic arm to hold the soft tissue. To stabilize the soft tissue, it is fixed at all corners. To succeed at a task, the agent needs to learn to hold the tissue from anchor points and pull it until success criteria are met. Our simulation environment is ROS-compatible and follows the OpenAI gym structure. We use 3D models from dVRK-VREP simulator 16, and PyBullet physics engine 5 for the simulation of rigid and soft body physics. 
We introduce three tasks with varying levels of difficulty. Two important variables considered for tissue retraction are the tissue retraction distance and the anchor site for tissue holding. We design these three tasks with a stepwise increase in the variability of one of these two variables. Task I requires vertical manipulation of soft tissue using an anchor point until a distance less than the threshold is reached. Task II requires soft tissue manipulation of anchor points given target instructions until a distance less than the distance threshold is reached. Task III involves manipulation based on one of multiple anchor points until the distance threshold is reached. The robot location is randomly sampled from a uniform distribution. Tissue location coordinates (X, Y) are sampled independently from a uniform distribution as X ∼ U(xmin, xmax), Y ∼ U(ymin, ymax), where U is a uniform distribution and [xmin, xmax, ymin, ymax] are the x and y boundaries of the robot's workspace projected onto a 2D surface. Figure 2 shows the components of the environment and a pictorial representation of the three tasks.

FIGURE 2 Overview of tasks in the TissueRetract environment. Left: Components of the simulation. a = DaVinci PSM arm, b = target point, c = hidden tissue, d = soft tissue. Middle Top: Task I - pull tissue from center. Middle Center: Task II - pull tissue to target point x from center. Middle Bottom: Task III - pull tissue to target point x from center/left/right. Right Top: Physics simulation engines allow for domain randomization in tissue color, texture, size, location, and tissue elasticity. Right Bottom: Segmentation mask of components.

4.2 Problem Formulation

We frame our task as a Markov Decision Process (MDP) where an agent learns by interacting with the environment. An MDP is defined as a tuple ⟨S, A, R, P, γ⟩, where S indicates the state space, A indicates the action space of the agent, R : S × A → ℝ is the reward function given the states and actions, P indicates the probability function of state transitions S × A → S, and γ is the discount factor to penalize long action sequences. At each time step t, the agent takes action at ∼ µθ(st) based on the learned policy µθ and the current agent state st. With the current state st and the action at, the agent transitions to the next state st+1 according to the transition probability function P(st+1|st, at) and generates the corresponding reward rt = R(st, at). The agent's experience Et = (st, at, rt, st+1), from the beginning until episode completion, is stored in a replay buffer DA. The episode is completed either on successful completion of the task or on reaching the time horizon. The demonstration data is stored as experience in another buffer DE. Depending on the agent, it can learn from DA, DE, or both. The agent's goal is to learn the policy µ : S → A that maximizes the expected reward over a time horizon.

Since reward engineering requires hand tailoring and is non-scalable, we use a sparse reward, where the agent only receives a binary reward indicating whether the task is completed or not. We define distance-based success criteria that require the tissue to be lifted to within the error tolerance of the desired height. We use the success rate as a metric to compare algorithms. The success rate is defined as the ratio of successful task completions to the number of attempts.
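A minimal sketch of this distance-based success criterion and the sparse binary reward could look as follows; the tolerance value and the 0/-1 encoding of the binary reward are assumptions for illustration.

import numpy as np

TOLERANCE = 0.01   # assumed error tolerance around the desired height (m)

def is_success(anchor_pos, target_pos):
    # Distance-based success criterion.
    return np.linalg.norm(np.asarray(anchor_pos) - np.asarray(target_pos)) < TOLERANCE

def sparse_reward(anchor_pos, target_pos):
    # The agent only observes whether the task is completed or not.
    return 0.0 if is_success(anchor_pos, target_pos) else -1.0

def success_rate(successes, attempts):
    return successes / attempts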
4.3 Algorithms

We explore traditional reinforcement learning algorithms (DDPG 26), imitation learning algorithms (SQIL 27), and demonstration-guided RL algorithms (DDPGBC 28, CoL 29, and DEX 12). Each algorithm is trained for a total of 50,000 episodes. For the IL and DG-RL methods, we restrict the demonstration count to 100. We conducted our experiments on a single Intel i7-10750H CPU with an NVIDIA GeForce GTX 1650 Ti. In addition, we performed an ablation study at 25, 50, and 100 demonstrations to compare the effect of the number of demonstrations on the IL and DG-RL algorithms.

TABLE 1 Comparison of success rate percentage over 50 evaluation episodes using DDPG, SQIL, DDPGBC, CoL, and DEX for tasks I, II, and III. We fixed the demonstration count at 100 and trained all algorithms with random seeds of 1, 2, and 3 to calculate a 95% confidence interval for the success rate percentage. DDPG achieved success rates comparable to or higher than the DG-RL algorithms (whose average success rates are 85, 84, and 66 on tasks I, II, and III). The SQIL baseline fails on all three tasks.

Task     | RL: DDPG | IL: SQIL | DG-RL: DDPGBC | DG-RL: CoL | DG-RL: DEX
Task I   | 83±0.7   | 4±7      | 95±3          | 94±6       | 66±32
Task II  | 85±11    | 1±1      | 86±5          | 86±9       | 82±12
Task III | 80±6     | 6±4      | 65±5          | 76±4       | 58±8

TABLE 2 Comparison of success rate percentage over 50 evaluation episodes using 25, 50, and 100 demonstrations on task III. We trained all algorithms with random seeds of 1, 2, and 3 to calculate a 95% confidence interval for the success rate percentage. Demonstration-guided RL algorithms (DDPGBC, CoL, and DEX) benefit from an increase in the number of demonstrations available, with improvement in either the average or the bounds of the success rate. SQIL benefits slightly, without any significant gains in performance.

Method | 25 demos | 50 demos | 100 demos
SQIL   | 5±1      | 5±1      | 6±4
CoL    | 50±7     | 60±18    | 76±4
DDPGBC | 62±7     | 62±7     | 65±5
DEX    | 34±10    | 58±16    | 58±8

For the actor and critic networks in DDPG, we use three fully connected layers, each of 128 dimensions, with ReLU activation. We use hindsight experience replay with a future sampling strategy, a buffer size of 50k observations, a discount factor of 0.99, and a learning rate of 0.001, along with the Adam optimizer for model optimization. The episode horizon is set to 50 time steps. For SQIL, we initialize the demonstration data with reward 0 and, as the agent interacts with the environment, add new experiences with a reward of -1 to the replay buffer. For the DG-RL algorithms, we use DDPG as the base and add demonstration data to the replay buffer during each minibatch. For DDPGBC, we use a behavior cloning loss along with a Q-filter. In CoL, we augment the behavior cloning loss with the actor Q-loss from DDPG. Similar to the original work, we implement DEX on top of DDPG with four fully connected layers of 256 dimensions each.

At the start of each simulation, the PSM arm, soft tissue, and hidden tissue locations are randomly chosen from a uniform distribution. During the evaluation, we run each algorithm for 50 episodes, each with a horizon of 50 timesteps. Similar to Huang et al. 12, the evaluation is repeated 3 times with random seeds of 1, 2, and 3. The final average success rate, along with a 95% confidence interval, is reported. We also compare the effect of the demonstration count on the success rate percentage. For this, we calculate a 95% confidence interval for the success rate percentage over 50 episodes with varying seed values for the IL and DG-RL algorithms when trained on 25, 50, and 100 demonstrations. Finally, we also perform a manual visual inspection of success and failure cases across the model training journey and look for patterns in behavior emergence. For this, we sample agent evaluations after every 10,000 training episodes, manually inspect the cause of failure, and classify the causes. We discuss this further in section 5.2.
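The sketch below illustrates the network architecture and the SQIL reward convention stated above. It interprets "three fully connected layers, each of 128 dimensions" as three 128-unit hidden layers plus an output head, and the observation/action sizes are placeholders; both are assumptions.

import torch.nn as nn

def mlp(in_dim, out_dim):
    # Three 128-unit hidden layers with ReLU (output head assumed separate).
    return nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(),
                         nn.Linear(128, 128), nn.ReLU(),
                         nn.Linear(128, 128), nn.ReLU(),
                         nn.Linear(128, out_dim))

obs_dim, act_dim = 19, 5           # placeholder sizes
actor = mlp(obs_dim, act_dim)
critic = mlp(obs_dim + act_dim, 1)

# SQIL-style replay-buffer initialization following the convention above:
# demonstration transitions get reward 0; agent transitions get reward -1.
demonstrations = []                # (s_t, a_t, s_{t+1}) tuples from the demos
replay_buffer = [(s, a, 0.0, s2) for (s, a, s2) in demonstrations]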
4.4 Demonstration Data Generation

For demonstration guidance, we generate rule-based demonstration data. We define the anchor point, the PSM, and the soft tissue with respect to the frame of origin. We further break the task down into a sequence of four position checkpoints: approaching the tissue, holding the tissue, retracting the tissue until the threshold is met, and finally maintaining the retracted position. Using inverse kinematics, we derive the action required to move the agent to each checkpoint. The agent experience is stored as (st, at, st+1) per time step. This can be replaced by real data collected from surgical procedures.
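A minimal sketch of this rule-based demonstration policy, assuming PyBullet's inverse-kinematics call and placeholder arm identifiers and offsets, could look like this:

import pybullet as p

def checkpoints(anchor_pos, retract_height):
    above = (anchor_pos[0], anchor_pos[1], anchor_pos[2] + 0.03)
    lifted = (anchor_pos[0], anchor_pos[1], anchor_pos[2] + retract_height)
    return [above,        # 1. approach the tissue
            anchor_pos,   # 2. hold the tissue at the anchor point
            lifted,       # 3. retract until the height threshold is met
            lifted]       # 4. maintain the retracted position

def demo_actions(arm_id, ee_link, anchor_pos, retract_height=0.05):
    actions = []
    for target in checkpoints(anchor_pos, retract_height):
        # Inverse kinematics gives the joint targets that reach the checkpoint.
        joint_targets = p.calculateInverseKinematics(arm_id, ee_link, target)
        actions.append(joint_targets)
    return actions  # replayed and stored as (s_t, a_t, s_{t+1}) transitions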
FIGURE 3 Performance curves for three tasks using DDPG, SQIL, DDPGBC, CoL, and DEX. Top Left: Evaluation success rate on task I, grouped by algorithm used. Top Right: Evaluation success rate on task II, grouped by algorithm used. Bottom Left: Evaluation success rate on task III, grouped by algorithm used. Bottom Right: Average success rate of all 5 algorithms, grouped by number of demonstrations.

5 RESULTS AND DISCUSSION

5.1 Agent Performance

Figure 3 shows the changes in the success rate as model training progresses. In our initial results, we achieved average success rate percentages of 69, 68, and 57 on tasks I, II, and III, respectively. SQIL remains an outlier, with a near-zero success rate across all tasks. This is possibly due to the large exploration space of the task, which may not be coverable by learning from observations alone. Excluding SQIL from the calculations, we achieved average success rate percentages of 85, 85, and 70 on tasks I, II, and III, respectively. DDPG achieved success rates comparable to or higher than the DG-RL algorithms (whose average success rates are 85, 84, and 66 on tasks I, II, and III). This shows that pure exploration in RL algorithms can compete with or surpass the demonstration-guided exploration of DG-RL algorithms. Figure 4 shows a snapshot of three successful post-training episodes with varying anchor sites, and the phases of task completion, namely: initialization, approaching, grip, pull, and maintaining retraction. Initialization is the first phase, where all environment variables are set up. In the second phase, the agent approaches the anchor site. The third phase involves aligning the robot, followed by gripping the tissue. In the fourth phase, the agent pulls the tissue, and in the final state it maintains the retraction. The performance of all five algorithms on all three tasks is shown in Table 1.

FIGURE 4 Case studies of success from task III, with varying anchor points. Case 1: middle anchor point; Case 2: left anchor point; Case 3: right anchor point. After successful training, the agent starts by approaching the tissue, grasping the anchor point, and pulling it until retraction is complete.

5.2 Initial Behavior and Failure Cases

We investigate the evolution of agent behavior by sampling episodes as training progresses. The agent initially learns to explore the region and approach the tissue, then grips the tissue at the anchor points, and finally learns to retract. These patterns of behavior emergence can explain the causes of failure cases after training. We observe three common failure scenarios: (a) improper grasp, where the anchor point moves with the soft tissue, making grasping difficult; (b) tissue distortion, where the agent mishandles the robot, resulting in tissue distortion and possible damage; and (c) loss of tissue grip, due to recoil on stretching the soft tissue. This can also be due to sudden retraction by the agent, leading to loss of tissue grip. These behaviors can possibly be improved by training the agent for longer time steps, increasing the demonstration count, or introducing a specific negative reward for tissue damage. Figure 5 shows a snapshot of failed episodes after model training. Case 1 of Figure 5 shows failure in tissue gripping as the tissue slips away due to its elasticity. Case 2 shows loss of tissue grip during retraction, followed by success in reattempting to grip the tissue. Case 3 shows a similar scenario where the agent fails on a similar reattempt. More in-depth analysis is required to understand the correlation between the causes of failure and their emergence in the behavior sequence during learning.

FIGURE 5 Case studies with common abnormal variants from task III. The agent learns to reattempt cases of grip loss. Case 1: task fails as tissue slips away due to tissue elasticity; Case 2: grip loss, success on reattempt; Case 3: grip loss, failure on reattempt.

5.3 Effect of Number of Demonstrations

Table 2 shows the ablation results for the number of demonstrations for the DG-RL and IL algorithms. Since DDPG does not use the demonstration data, we only discuss the remaining algorithms. In general, increasing the demonstration count has a positive effect on agent performance, resulting in an increased success rate percentage or a narrowing of the performance bounds over several runs. Demonstration-guided RL algorithms (DDPGBC, CoL, and DEX) benefit from an increase in the number of demonstrations available, with average success rates improving by 10 and 6 percent on the addition of 25 and 50 further demonstrations, respectively. SQIL benefits slightly, without any significant gains in performance. The bottom-right subfigure of Figure 3 shows the algorithms' average success rate. It is interesting to note that, despite changes in magnitude, the positions of the peaks are fairly similar across the numbers of demonstrations provided. This possibly hints that the number of demonstrations affects the final success rate percentage but does not impact the rate of learning new tasks.
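For reference, the per-seed aggregation used in these tables can be sketched as below. The normal-approximation interval is an assumption, as the paper does not specify its exact confidence-interval formula, and the per-seed values are invented for the example.

import statistics

def mean_ci95(success_rates):
    m = statistics.mean(success_rates)
    s = statistics.stdev(success_rates)
    half_width = 1.96 * s / len(success_rates) ** 0.5  # assumed normal approximation
    return m, half_width

per_seed = [74, 78, 76]            # success-rate % from seeds 1, 2, 3 (example values)
mean, hw = mean_ci95(per_seed)
print(f"{mean:.0f} ± {hw:.0f}")    # prints "76 ± 2"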
5.4 Limitations and Future Works

There are several limitations to the existing work. For a more comprehensive analysis, it would be beneficial to conduct longer training runs. In our experiments, we find that pure exploration in RL algorithms can compete with or surpass demonstration-guided exploration in DG-RL algorithms; further experimentation is required to assess this in depth. Another dimension of analysis is required to understand the correlation between the causes of failure and their emergence in the behavior sequence during learning. Although the present study intentionally eschewed explicit reward engineering, inducing certain preferred traits in agent behavior may require customized reward engineering. For example, training an RL agent to expedite soft tissue manipulation, follow the shortest path, or eliminate tissue damage can require the integration of additional information within both the observation and reward structures provided to the agent. Such a direction presents an intriguing avenue for future exploration. Furthermore, additional testing is required to validate the robustness of the transfer from simulation to reality. The scope of this work could also be expanded to incorporate vision-based control, surgical mesh fixation, or a multitask learning framework. These potential extensions of our work could further enhance the capabilities of surgical soft tissue manipulation, enriching its applicability to surgical robotics.

6 CONCLUSION

In conclusion, this work presents a proof-of-concept study of soft tissue simulation with rigid body interactions, demonstrating a meaningful advance in the simulation of surgical tasks. By employing the PyBullet physics engine, we replicate the kinematics of the patient-side manipulator to simulate soft and rigid body interactions. Additionally, through the use of demonstration guidance, we train reinforcement learning agents to master the task. After training, the agents were able to execute all three tasks with high success rates. Our research provides an innovative approach for autonomous soft surgical tissue retraction. In addition, it introduces a comprehensive framework for the in silico learning of surgical tasks with soft tissue manipulation. This modality of in silico training, followed by sim-to-real transfer, has the potential to significantly broaden access to soft tissue manipulation research. It also emerges as a practical and expedient approach to rapid prototyping of automated surgical procedures.

CODE AVAILABILITY

The code is available at https://github.com/amritpal-001/tissue_retract.

REFERENCES

1. Myles PS, Haller G. Global distribution of access to surgical services. The Lancet. 2010;376(9746):1027–1028. doi: 10.1016/S0140-6736(10)60520-X
2. Debas HT. The Emergence and Future of Global Surgery in the United States. JAMA Surgery. 2015;150(9):833–834. doi: 10.1001/jamasurg.2015.0898
3. Sturm L, Dawson D, Vaughan R, et al. Effects of fatigue on surgeon performance and surgical outcomes: a systematic review. ANZ Journal of Surgery. 2011;81(7-8):502–509. doi: 10.1111/j.1445-2197.2010.05642.x
4. Todorov E, Erez T, Tassa Y. MuJoCo: A physics engine for model-based control. In: 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems. 2012:5026-5033. doi: 10.1109/IROS.2012.6386109
5. Coumans E, Bai Y. PyBullet, a Python module for physics simulation for games, robotics and machine learning. http://pybullet.org; 2016–2022.
6. ARTAS. 510(k) Premarket Notification. Accessed July 14, 2023. https://www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfpmn/pmn.cfm?ID=K173358
7. Tagliabue E, Pore A, Dall'Alba D, Magnabosco E, Piccinelli M, Fiorini P. Soft Tissue Simulation Environment to Learn Manipulation Tasks in Autonomous Robotic Surgery. In: 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). 2020:3261-3266. doi: 10.1109/IROS45743.2020.9341710
8. Xu J, Li B, Lu B, Liu YH, Dou Q, Heng PA. SurRoL: An Open-source Reinforcement Learning Centered and dVRK Compatible Platform for Surgical Robot Learning. Published online August 30, 2021. doi: 10.48550/arXiv.2108.13035
9. Performance and Capability Assessment in Surgical Subtask Automation. Sensors. 2022;22(7):2501. Accessed July 2, 2023. https://www.mdpi.com/1424-8220/22/7/2501
10. Singh A. Roadmap to Autonomous Surgery – A Framework to Surgical Autonomy. Published online May 26, 2022. doi: 10.48550/arXiv.2206.10516
11. Cabrelli LC, Pelissari PIBGB, Deana AM, Carneiro AAO, Pavan TZ. Stable phantom materials for ultrasound and optical imaging. Phys Med Biol. 2016;62(2):432. doi: 10.1088/1361-6560/62/2/432. Accessed August 23, 2023. https://iopscience.iop.org/article/10.1088/1361-6560/62/2/432
12. Huang T, Chen K, Li B, Liu YH, Dou Q. Demonstration-Guided Reinforcement Learning with Efficient Exploration for Task Automation of Surgical Robot. 2023. arXiv:2302.09772 [cs]
13. Attanasio A, Scaglioni B, Leonetti M, et al. Autonomous Tissue Retraction in Robotic Assisted Minimally Invasive Surgery – A Feasibility Study. IEEE Robotics and Automation Letters. 2020;5(4):6528-6535. doi: 10.1109/LRA.2020.3013914
14. Bendikas R, Modugno V, Kanoulas D, Vasconcelos F, Stoyanov D. Learning Needle Pick-and-Place Without Expert Demonstrations. IEEE Robotics and Automation Letters. 2023;8(6):3326-3333. doi: 10.1109/LRA.2023.3266720
15. D'Ettorre C, Zirino S, Dei NN, Stilli A, De Momi E, Stoyanov D. Learning intraoperative organ manipulation with context-based reinforcement learning. Int J CARS. 2022;17(8):1419-1427. doi: 10.1007/s11548-022-02630-2
16. Fontanelli GA, Selvaggio M, Ferro M, Ficuciello F, Vendittelli M, Siciliano B. A V-REP Simulator for the da Vinci Research Kit Robotic Platform. In: 2018 7th IEEE International Conference on Biomedical Robotics and Biomechatronics (Biorob). 2018:1056-1061. doi: 10.1109/BIOROB.2018.8487187
17. Fuller DB, Crabtree T, Kane BL, et al. High Dose "HDR-Like" Prostate SBRT: PSA 10-Year Results From a Mature, Multi-Institutional Clinical Trial. Frontiers in Oncology. 2022;12. Accessed July 14, 2023. https://www.frontiersin.org/articles/10.3389/fonc.2022.935310
18. Hernandez-Quintanar L, Rodriguez-Salvador M. Discovering new 3D bioprinting applications: Analyzing the case of optical tissue phantoms. Int J Bioprint. 2018;5(1):178. doi: 10.18063/IJB.v5i1.178
19. Kazanzides P, Chen Z, Deguet A, Fischer GS, Taylor RH, DiMaio SP. An open-source research kit for the da Vinci® Surgical System. In: 2014 IEEE International Conference on Robotics and Automation (ICRA). 2014:6434-6439. doi: 10.1109/ICRA.2014.6907809
20. Liow MHL, Chin PL, Pang HN, Tay DKJ, Yeo SJ. THINK Surgical TSolution-One® (Robodoc) total knee arthroplasty. SICOT-J. 2017;3:63. doi: 10.1051/sicotj/2017052
21. Murali A, Sen S, Kehoe B, et al. Learning by observation for surgical subtasks: Multilateral cutting of 3D viscoelastic and 2D Orthotropic Tissue Phantoms. In: 2015 IEEE International Conference on Robotics and Automation (ICRA). 2015:1202-1209. doi: 10.1109/ICRA.2015.7139344
22. Saeidi H, Opfermann JD, Kam M, et al.
Autonomous robotic laparoscopic surgery for intestinal anastomosis. Science Robotics. 2022;7(62):eabj2908. doi: 10.1126/scirobotics.abj2908
23. Shin C, Ferguson PW, Pedram SA, Ma J, Dutson EP, Rosen J. Autonomous Tissue Manipulation via Surgical Robot Using Learning Based Model Predictive Control. In: 2019 International Conference on Robotics and Automation (ICRA). 2019:3875-3881. doi: 10.1109/ICRA.2019.8794159
24. Tree AC, Ostler P, van der Voet H, et al. Intensity-modulated radiotherapy versus stereotactic body radiotherapy for prostate cancer (PACE-B): 2-year toxicity results from an open-label, randomised, phase 3, non-inferiority trial. The Lancet Oncology. 2022;23(10):1308-1320. doi: 10.1016/S1470-2045(22)00517-4
25. Varier VM, Rajamani DK, Tavakkolmoghaddam F, Munawar A, Fischer GS. AMBF-RL: A real-time simulation based Reinforcement Learning toolkit for Medical Robotics. In: 2022 International Symposium on Medical Robotics (ISMR). 2022:1-8. doi: 10.1109/ISMR48347.2022.9807609
26. Lillicrap TP, Hunt JJ, Pritzel A, et al. Continuous control with deep reinforcement learning. 2019. arXiv:1509.02971 [cs, stat]
27. Reddy S, Dragan AD, Levine S. SQIL: Imitation Learning via Reinforcement Learning with Sparse Rewards. 2020.
28. Nair A, McGrew B, Andrychowicz M, Zaremba W, Abbeel P. Overcoming Exploration in Reinforcement Learning with Demonstrations. 2018. arXiv:1709.10089 [cs]
29. Goecks VG, Gremillion GM, Lawhern VJ, Valasek J, Waytowich NR. Integrating Behavior Cloning and Reinforcement Learning for Improved Performance in Dense and Sparse Reward Environments. 2020. arXiv:1910.04281 [cs, stat]
ai_researcher
3
An_LLM-based_Knowledge_Synthesis_and_Scientific_Reasoning_Framework_for_Biomedical_Discovery.pdf
arXiv:2406.10300v1 [cs.SE] 13 Jun 2024

Large Language Models as Software Components: A Taxonomy for LLM-Integrated Applications

Irene Weber
Kempten University of Applied Sciences, Germany
[email protected]

Abstract

Large Language Models (LLMs) have become widely adopted recently. Research explores their use both as autonomous agents and as tools for software engineering. LLM-integrated applications, on the other hand, are software systems that leverage an LLM to perform tasks that would otherwise be impossible or require significant coding effort. While LLM-integrated application engineering is emerging as a new discipline, its terminology, concepts and methods need to be established. This study provides a taxonomy for LLM-integrated applications, offering a framework for analyzing and describing these systems. It also demonstrates various ways to utilize LLMs in applications, as well as options for implementing such integrations.

Following established methods, we analyze a sample of recent LLM-integrated applications to identify relevant dimensions. We evaluate the taxonomy by applying it to additional cases. This review shows that applications integrate LLMs in numerous ways for various purposes. Frequently, they comprise multiple LLM integrations, which we term "LLM components". To gain a clear understanding of an application's architecture, we examine each LLM component separately. We identify thirteen dimensions along which to characterize an LLM component, including the LLM skills leveraged, the format of the output, and more. LLM-integrated applications are described as combinations of their LLM components. We suggest a concise representation using feature vectors for visualization.

The taxonomy is effective for describing LLM-integrated applications. It can contribute to theory building in the nascent field of LLM-integrated application engineering and aid in developing such systems. Researchers and practitioners explore numerous creative ways to leverage LLMs in applications. Though challenges persist, integrating LLMs may revolutionize the way software systems are built.

Keywords: large language model, LLM-integrated, taxonomy, copilot, architecture, AI agent, LLM component

1. Introduction

Large Language Models (LLMs) have significantly impacted various sectors of economy and society [47]. Due to their proficiency in text understanding, creative work, communication, knowledge work, and code writing, they have been adopted in numerous fields, such as medicine, law, marketing, education, human resources, etc.

Public discussions often focus on the ethical aspects and societal consequences of these systems [36, 39]. Meanwhile, research investigates Artificial General Intelligences and autonomous AI agents that can use services, data sources, and other tools, and collaborate to solve complex tasks [11, 62, 57, 21]. In addition, LLMs offer many opportunities to enhance software systems. They enable natural language interaction [59], automate complex tasks [19], and provide supportive collaboration, as seen with recent LLM-based assistant products often branded as "copilots"1.

1 E.g., https://docs.github.com/en/copilot, https://copilot.cloud.microsoft/en-us/copilot-excel, https://www.salesforce.com/einsteincopilot
This contrasts with current software engineering research, which views LLMs as tools for software development rather than as software components [14, 22], and with the considerable body of research examining LLMs as au- tonomous agents within multiagent systems [21]. Software systems that invoke an LLM and process its output are referred to as “LLM-integrated appli- cations”, “LLM-integrated systems”, “LLM-based ap- plications”, etc. [32, 13, 57]. LLMs are versatile, mul- tipurpose tools capable of providing functionalities that would otherwise be unfeasible or require sub- stantial development efforts [15, 24]. By significantly expediting system development, they have the poten- tial to revolutionize not only the way users interact with technology, but also the fundamental processes of software development. LLM-integrated applications engineering is emerging as a research field. E.g., [10] proposes LLM Sys- tems Engineering (LLM-SE) as a novel discipline, and [44, 8, 7] discuss experiences and challenges that de- velopers of such systems encounter in practice. This study develops a taxonomy that provides a structured framework for categorizing and analyzing LLM-integrated applications across various domains. To develop and evaluate the taxonomy, we collected a sample of LLM-integrated applications, concentrat- ing on technical and industrial domains. These ap- plications showcase a broad range of opportunities to leverage LLMs, often integrating LLMs in mul- tiple ways for distinct purposes. In developing the taxonomy, we found that examining each of these in- tegrations, termed “LLM components”, separately is crucial for a clear understanding of an application’s architecture. The taxonomy adopts an original architectural per- spective, focusing on how the application interacts with the LLM while abstracting from the specifics of application domains. For researchers, the taxon- omy contributes to shape a common understanding and terminology, thus aiding theory building in this emerging domain [29, 50, 18]. For practitioners, the taxonomy provides inspiration for potential uses of LLMs in applications, presents design options, and helps identify challenges and approaches to address them. Objectives. In this study, a taxonomy is understood as a set of dimensions divided into characteristics. The objective is to identify dimensions that are useful for categorizing the integration of LLMs in applica- tions from an architectural perspective. To be most effective, the taxonomy should be easy to understand and apply, yet distinctive enough to uncover the es- sential aspects. Additionally, we aim to develop a visual representation tailored to the taxonomy’s in- tended purposes. Overview. The following section 2 provides back- ground on LLMs and introduces relevant concepts. Section 3 presents an overview of related work. The study design adheres to a Design Science Research approach [46]. We apply established methods for tax- onomy design [42, 48] as described in Section 4. This section also presents the sample of LLM-integrated applications used for this study. The developed tax- onomy is presented, demonstrated and formally eval- uated in section 5. In section 6, we discuss its usabil- ity and usefulness. Section 7 summarizes the contri- butions, addresses limitations, and concludes. 2. Large Language Models 2.1. 
2.1. Background

State-of-the-art LLMs such as GPT-3.5, GPT-4, Llama, PALM2, etc., are artificial neural networks consisting of neurons, i.e., very simple processing units, that are organized in layers and connected by weighted links. Training a neural network means adapting these weights such that the neural network shows a certain desired behavior. Specifically, an LLM is trained to predict the likelihoods of pieces of text, termed tokens, to occur as continuations of a given text presented as input to the LLM. This input is referred to as prompt. The prompt combined with the produced output constitutes the context of an LLM. It may comprise more than 100k tokens in state-of-the-art LLMs2. Still, its length is limited and determines the maximum size of prompts and outputs that an LLM is capable of processing and generating at a time.

2 https://platform.openai.com/docs/models

Training of an LLM optimizes its parameters such that its computed likelihoods align with real text examples. The training data is a vast body of text snippets extracted, processed, and curated from sources such as Wikipedia, Github code repositories, common websites, books, or news archives. An LLM trained on massive examples is termed a foundation model or pre-trained model. During training, an LLM not only learns to produce correct language but also absorbs and stores information and factual knowledge. However, it is well known that LLMs frequently pick up biases, leading to ethical problems. They may also produce factually incorrect outputs that sound plausible and convincing, termed hallucinations.

Recent findings show that LLMs can be applied to a wide range of tasks by appropriately formulating prompts. Different prompt patterns succeed in different tasks. Basic approaches rely on instructing the LLM to solve a task described or explained in the prompt. In few-shot prompting (also known as few-shot learning), the prompt is augmented with example input-output pairs illustrating how to solve the task, e.g., the requested output format. The number of examples can vary. Prompting with one example is called one-shot prompting, while prompting without any examples is called zero-shot prompting. One-shot and few-shot prompting fall under the broader category of in-context learning. Prompt patterns such as chain-of-thought and thinking-aloud aim to elicit advanced reasoning capabilities from LLMs.

As effective prompts are crucial for unlocking the diverse capabilities of an LLM, the discipline of prompt engineering is evolving, focusing on the systematic design and management of prompts [66, 9, 53, 31].
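As a simple illustration of these prompt patterns, a few-shot prompt can be assembled programmatically. The task, template, and examples in the following Python sketch are illustrative only:

def few_shot_prompt(instruction, examples, new_input):
    parts = [instruction]
    for x, y in examples:                      # in-context input-output pairs
        parts.append(f"Input: {x}\nOutput: {y}")
    parts.append(f"Input: {new_input}\nOutput:")
    return "\n\n".join(parts)

prompt = few_shot_prompt(
    "Classify the sentiment of each review as positive or negative.",
    [("Great battery life.", "positive"), ("Broke after a week.", "negative")],
    "Does exactly what it promises.",
)
# Zero-shot: pass examples=[]; one-shot: pass a single example pair.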
This paper defines a particular software compo- nent that accomplishes this as an LLM-based software component or, simply, LLM component. An LLM- integrated application can comprise several LLM components. The study develops a taxonomy for LLM components. LLM-integrated applications are described as combinations of their LLM components. 3. Related Work With the recent progress in generative AI and LLMs, the interest in these techniques has increased, and numerous surveys have been published, providing an extensive overview of technical aspects of LLMs [72], reviewing LLMs as tools for software engineering [22], and discussing the technical challenges of applying LLMs across various fields [25]. Further studies ad- dress the regulatory and ethical aspects of Genera- tive AI and ChatGPT, with a particular focus on AI-human collaboration [41], and Augmented Lan- guage Models (ALMs), which are LLMs that enhance 3 their capabilities by querying tools such as APIs, databases, and web search engines [38]. Taxomonies related to LLMs include a taxonomy for prompts designed to solve complex tasks [49] and a taxonomy of methods for cost-effectively invoking a remote LLM [60]. A comparative analysis of stud- ies on applications of ChatGPT is provided by [27], whereas LLMs are compared based on their applica- tion domains and the tasks they solve in [20]. Most closely related to the taxonomy developed here is a taxonomy for LLM-powered multiagent architectures [21] which focuses on autonomous agents with less technical detail. Taxonomies of applications of AI in enterprises [48] and applications of generative AI, in- cluding but not limited to LLMs [52], are developed using methods similar to those in our study. Several taxonomies in the field of conversational agents and task-oriented dialog (TOD) systems ad- dress system architecture [1, 40, 12, 3]. However, they omit detailed coverage of the integration of generative language models. 4. Methods We constructed the taxonomy following established guidelines [42, 48, 29], drawing from a sample of LLM-integrated applications. These applications are detailed in section 4.1. 4.1. Development Taxonomy. We derived an initial taxonomy from the standard architecture of conversational assistants de- scribed in [3], guided by the idea that conversational assistants are essentially “chatbots with tools”, i.e., language-operated user interfaces that interact with external systems. This approach proved unsuccessful. The second version was based on the classical three- tier software architecture, and then extended over several development cycles. By repeatedly apply- ing the evolving taxonomy to the example instances, we identified dimensions and characteristics using an “empirical-to-conceptual” approach. When new di- mensions emerged, additional characteristics were de- rived in a “conceptual-to-empirical” manner. After five major refinement cycles, the set of dimensions and characteristics solidified. In the subsequent eval- uation phase, we applied the taxonomy to a new set of example instances that were not considered while constructing the taxonomy. As the dimensions and characteristics remained stable, the taxonomy was considered complete. In the final phase, we refined the wording and visual format of the taxonomy. Visualization. Developing a taxonomy involves cre- ating a representation that effectively supports its intended purpose [29]. 
Taxonomies can be repre- sented in various formats, with morphological boxes [54, 55] or radar charts [21] being well-established approaches. We evaluated morphological boxes, be- cause they effectively position categorized instances within the design space. However, we found that they make it difficult to perceive a group of categorized in- stances as a whole since they occupy a large display area. This drawback is significant for our purposes, as LLM-integrated applications often comprise mul- tiple LLM components. Therefore, we developed a more condensed visualization of the taxonomy based on feature vectors. Example instances. We searched for instances of LLM-integrated applications for taxonomy develop- ment that should meet the following criteria: • The application aims for real-world use rather than focusing on research only (such as testbeds for experiments or proofs-of-concept). It demon- strates efforts towards practical usability and ad- dresses challenges encountered in real-world sce- narios. • The application’s architecture, particularly its LLM components, is described in sufficient de- tail for analysis. • The sample of instances covers a diverse range of architectures. • The example instances are situated within indus- trial or technical domains, as we aim to focus on LLM-integrated applications beyond well-known fields like law, medicine, marketing, human re- sources, and education. 4 The search revealed a predominance of theoretical re- search on LLM-integrated applications while papers focusing on practically applied systems were scarce. Searching non-scientific websites uncovered commer- cially advertised AI-powered applications, but their internal workings were typically undisclosed, and reli- able evaluations were lacking. Furthermore, the het- erogeneous terminology and concepts in this emerg- literature ing field make a comprehensive formal search unfeasible. Instead, by repeatedly search- ing Google Scholar and non-scientific websites using terms “LLM-integrated applications”, “LLM-powered applications”, “LLM-enhanced system”, “LLM” and “tools”, along similar variants, we selected six suitable instances. Some of them integrate LLMs in multiple ways, totaling eleven distinct LLM components. For a thorough evaluation, we selected new instances using relaxed criteria, including those intended for research. Additionally, we included a real-world ex- ample lacking explicit documentation to broaden the diversity of our sample and assess the taxonomy’s coverage. Within the five selected instances, we iden- tified ten LLM components. 4.2. Sample of LLM-integrated applications Table 1 gives an overview of the sample. Names of ap- plications and LLM components are uniformly writ- ten as one CamelCase word and typeset in small caps, deviating from the format chosen by the respective authors. LowCode. LowCode is a web-based application consisting of a prompt-definition section and a di- alogue section. The prompt-definition section sup- ports the design of prompts for complex tasks, such as composing extensive essays, writing resumes for job applications or acting as a hotel service chatbot [5]. In the dialogue section, users converse with an LLM to complete the complex task based on the de- fined prompt. LowCode comprises two LLM components termed Planning and Executing. Planning operates in the prompt-definition section, where a user roughly describes a complex task, and Planning designs a workflow for solving it. 
The prompt-definition section offers a low-code development environment where the LLM-generated workflow is visualized as a graphical flowchart, allowing a user to edit and adjust the logic of the flow and the contents of its steps. For instance, in essay-writing scenarios, this involves inserting additional sections, rearranging sections, and refining the contents of sections. Once approved by the user, LowCode translates the modified workflow back into natural language and incorporates it into a prompt for Executing. In the dialogue section, users converse in interactive, multi-turn dialogues with Executing. As defined in the prompt, it acts as an assistant for tasks such as writing an essay or resume, or as a hotel service chatbot. While the idea of the LLM planning a workflow might suggest using the LLM for application control, LowCode Planning actually serves as a prompt generator that supports developing prompts for complex tasks.

Honeycomb. Honeycomb is an observability platform collecting data from software applications in distributed environments for monitoring. Users define queries to retrieve information about the observed software systems through Honeycomb's Query Builder UI. The recently added LLM-based QueryAssistant allows users to articulate inquiries in plain English, such as "slow endpoints by status code" or "which service has the highest latency?" The QueryAssistant converts these into queries in Honeycomb's format, which users can execute and manually refine [7, 8].

MyCrunchGpt. MyCrunchGpt acts as an expert system within the engineering domain, specifically for airfoil design and calculations in fluid mechanics. These tasks require complex workflows comprising several steps such as preparing data, parameterizing tools, and evaluating results, using various software systems and tools. The aim of MyCrunchGpt is to facilitate the definition of these workflows and automate their execution [28].

MyCrunchGpt offers a web interface featuring a dialogue window for inputting commands in plain English, along with separate windows displaying the output and results of software tools invoked by MyCrunchGpt in the backend.

Table 1: Example instances selected for development (top 6) and evaluation (bottom 5)

Application        References  LLM components
Honeycomb          [7, 8]      QueryAssistant
LowCode            [5], [35]   Planning, Executing
MyCrunchGpt        [28]        DesignAssistant, SettingsEditor, DomainExpert
MatrixProduction   [69]        Manager, Operator
WorkplaceRobot     [37]        TaskPlanning
AutoDroid          [64]        TaskExecutor, MemoryGenerator
ProgPrompt         [51]        ActionPlanning, ScenarioFeedback
FactoryAssistants  [26]        QuestionAnswering
SgpTod             [71]        DstPrompter, PolicyPrompter
TruckPlatoon       [70]        Reporting
ExcelCopilot       [16, 44]    ActionExecutor, Advisor, IntentDetector, Explainer

MyCrunchGpt relies on predefined workflows, not supporting deviations or cycles. By appending a specific instruction to the dialogue history in the prompt for each step of the workflow, it uses the LLM as a smart parser to extract parameters for APIs and backend tools from user input. APIs and tools are called in the predefined order [28, p. 56].

MyCrunchGpt is still in development. The paper [28] explains the domain as well as the integration of the LLM, but does not fully detail the implementation of the latter. Still, MyCrunchGpt illustrates innovative applications of an LLM in a technical domain.
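The "smart parser" use of the LLM described above can be sketched as follows. This is a minimal illustration of the pattern, not MyCrunchGpt's actual code; the llm_complete client and the parameter schema are hypothetical:

    import json

    def llm_complete(prompt: str) -> str:
        """Stand-in for any LLM completion API; hypothetical."""
        raise NotImplementedError

    # Step-specific instruction appended to the dialogue history.
    EXTRACT_INSTRUCTION = (
        "Extract the parameters for the next tool call from the conversation. "
        'Reply with JSON only, using the keys "airfoil_id" (string), '
        '"mach" (number), and "angle_of_attack" (number). '
        "Use null for any parameter the user has not provided yet."
    )

    def extract_tool_parameters(dialogue_history: str) -> dict:
        """Use the LLM as a smart parser for backend-tool parameters."""
        prompt = dialogue_history + "\n" + EXTRACT_INSTRUCTION
        raw = llm_complete(prompt)
        params = json.loads(raw)  # fails if the LLM did not return valid JSON
        missing = [key for key, value in params.items() if value is None]
        if missing:
            # An assistant in the style of the DesignAssistant would now
            # ask the user for the missing values instead of calling the tool.
            raise ValueError(f"missing parameters: {missing}")
        return params

The design point is that the application, not the LLM, decides which tool to call next; the LLM only fills in the parameter slots from free-form dialogue.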
We categorize three LLM components solving tasks within MyCrunchGpt: a DesignAssistant guiding users through workflows and requesting parameters for function and API calls; a SettingsEditor updating a JSON file with settings for a backend software tool; and a DomainExpert which helps evaluating results by comparing them to related results, e.g., existing airfoil designs, which it derives from its trained knowledge.

MatrixProduction. MatrixProduction employs an LLM for controlling a matrix production system [69]. While in a classical line production setup, workstations are arranged linearly and the manufacturing steps follow a fixed sequence, matrix production is oriented towards greater flexibility. Autonomous transport vehicles carry materials and intermediate products to workstations, termed automation modules, each offering a spectrum of manufacturing skills that it can contribute to the production process. Compared to line production, matrix production is highly adaptable and can manufacture a variety of personalized products with full automation. This requires intelligent production management to (a) create workplans that orchestrate and schedule the automation modules' skills, and (b) program the involved automation modules such that they execute the required processing steps.

MatrixProduction incorporates two LLM components: Manager creates workplans as sequences of skills (a), while Operator generates programs for the involved automation modules (b).

MatrixProduction prompts Manager and Operator to provide textual explanations in addition to the required sequences of skills or automation module programs. The LLM output is processed by a parser before being used to control the physical systems. Manager relies on built-in production-specific knowledge of the LLM such as "a hole is produced by drilling".

Noteworthy in this approach is its tight integration into the system landscape of Industry 4.0. The few-shot Manager and Operator prompts are generated automatically using Asset Administration Shells, which are standardized, technology-independent data repositories storing digital twins of manufacturing assets for use in Industry 4.0 [2].

WorkplaceRobot. An experimental robot system is enhanced with LLM-based task planning in [37]. The robot operates in a workplace environment featuring a desk and several objects. It has previously been trained to execute basic operations expressed in natural language such as "open the drawer" or "take the pink object and place it in the drawer". LLM-based task planning enables the robot to perform more complex orders like "tidy up the work area and turn off all the lights". To this end, an LLM is prompted to generate a sequence of basic operations that accomplish the complex order.

Although the robot expects operations phrased in natural language, the LLM is prompted with a Python coding task. For instance, the basic operation "turn on the green light" corresponds to a Python command push_button('green'). The prompt for the LLM includes several examples, each consisting of a description of an environment state, a complex order formatted as a comment, and a sequence of Python robot commands that accomplish the complex order. When invoking the LLM to generate the Python program for a new order, the prompt is augmented with a description of the environment's current state and the new order as a comment.

The Python code produced by the LLM is translated back to a sequence of basic operations in natural language.
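The prompting scheme just described can be sketched as follows. This is a minimal sketch under stated assumptions: the example environment states, orders, operation names, and the llm_complete client are invented for illustration and are not taken from the cited system:

    def llm_complete(prompt: str) -> str:
        """Stand-in for any LLM completion API; hypothetical."""
        raise NotImplementedError

    # Fixed few-shot examples: environment state and order as comments,
    # followed by Python robot commands that fulfill the order.
    FEW_SHOT_EXAMPLES = """\
    # Environment: drawer closed, pink object on desk, green light off.
    # Order: tidy up the desk and turn on the green light.
    open_drawer()
    pick_and_place('pink object', 'drawer')
    push_button('green')
    """

    def build_planning_prompt(environment_state: str, order: str) -> str:
        """Append the current environment state and the new order
        (formatted as comments) to the few-shot examples."""
        return (
            FEW_SHOT_EXAMPLES
            + f"# Environment: {environment_state}\n"
            + f"# Order: {order}\n"
        )

    def plan_basic_operations(environment_state: str, order: str) -> list[str]:
        """Invoke the LLM and split its Python output into one basic
        operation per line, ready to be mapped back to natural language."""
        code = llm_complete(build_planning_prompt(environment_state, order))
        return [line for line in code.splitlines() if line.strip()]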
When the robot executes these operations, there is no feedback about successful completion. Rather, the system assumes that all basic operations require a fixed number of timesteps to complete.

AutoDroid. The goal of mobile task automation is hands-free user interaction for smartphones through voice commands. AutoDroid is a voice control system for smartphones that can automatically execute complex orders such as "remind me to do laundry on May 11th" or "delete the last photo I took" [64, 65]. Such complex orders are fulfilled by performing sequences of basic operations in an Android app, such as "scroll down, then press button x" in the calendar app. AutoDroid employs an LLM component TaskExecutor to plan these sequences of operations. The challenge is that the next operation to execute depends on the current state of the Android app, which continuously changes as the app is operated. AutoDroid solves this by invoking the TaskExecutor repeatedly after each app operation with the prompt comprising the updated state of the Graphical User Interface (GUI) along with the user's complex order.

Before executing irrevocable operations, such as permanently deleting data or calling a contact, AutoDroid prompts the user to confirm or adjust the operation. TaskExecutor is instructed to include a "confirmation needed" hint in its output for such operations.

The prompt for TaskExecutor comprises an extract from a knowledge base which is built automatically in an offline learning phase as follows: In a first step, a "UI Automator" (which is not an LLM component) automatically and randomly operates the GUI elements of an Android app to generate a UI Transition Graph (UTG). The UTG has GUI states as nodes and the possible transitions between GUI states as edges. As next steps, AutoDroid invokes two LLM components referred to as MemoryGenerators to analyze the UTG.

The first MemoryGenerator is prompted repeatedly for each GUI state in the UTG. Its task is to explain the functionality of the GUI elements. Besides instructions and examples of the table format desired as output, its prompt includes an HTML representation of the GUI state, the GUI actions preceding this state, and the GUI element operated next. Its output consists of tuples explaining the functionality of a GUI element by naming the derived functionality (e.g., "delete all the events in the calendar app") and the GUI states and GUI element actions involved. Similarly, the second MemoryGenerator is prompted to output a table listing GUI states and explanations of their functions. These tables constitute AutoDroid's knowledge base.

ProgPrompt. ProgPrompt [51] is an approach to LLM-based robot task planning similar to WorkplaceRobot. Its robot is controlled by Python code and works in a real and a simulated household environment.

ProgPrompt comprises two LLM components. ActionPlanning generates Python scripts for tasks such as "microwave salmon" using basic operations like grab('salmon'), open('microwave'), and putin('salmon', 'microwave'), notably without considering the current state of the environment. To establish a feedback loop with the environment, ActionPlanning adds assert statements. These statements verify the preconditions of basic operations and trigger remedial actions when preconditions are not met. For instance, a script for "microwave salmon" comprises the following code fragment:

    assert('microwave' is 'opened')
    else: open('microwave')
    putin('salmon', 'microwave')

When operating in the simulated environment, ProgPrompt can verify an assert statement through its second LLM component, ScenarioFeedback. Prompted with the current state of the environment and the assert statement, ScenarioFeedback evaluates it and outputs True or False.

FactoryAssistants. FactoryAssistants advise workers on troubleshooting production line issues in two manufacturing domains: detergent production and textile production [26]. The assistants leverage domain knowledge from FAQs and documented problem cases to answer user queries. The required domain knowledge is provided as a part of the prompt.

SgpTod. SgpTod employs an LLM to implement a chatbot, specifically, a task-oriented dialogue (TOD) system [71]. TOD systems are also known as conversational assistants. In contrast to open-domain dialogue (ODD) systems, which engage users in goalless conversations, they are designed for assisting users in specific tasks.

In general, TOD systems require the following components [3]: Natural Language Understanding (NLU), analyzing the user's input to classify intents and extract entities; Dialogue Management (DM) for deciding on a system action that is appropriate in a given dialogue state (e.g., ask for more information or invoke a hotel booking service); and Natural Language Generation (NLG) for producing a response that the TOD system can present to the user. Intent classification, also known as intent detection, matches free-text user input to one of several tasks a TOD system can perform (e.g., book a hotel). Entity extraction isolates situational values, called entities, from the user input (e.g., the town and the date of the hotel booking). The TOD system may require several dialogue turns to elicit all necessary entities from the user. In TOD research, the system's internal representation of the user's intentions and the entity values is commonly referred to as its "belief state". For example, in the restaurant search domain, the belief state may include attribute-value pairs like cuisine:Indian and pricerange:medium.

SgpTod is a multi-domain TOD system, concurrently handling multiple task domains found in standard TOD evaluation datasets, such as recommending restaurants or finding taxis. Similar to other experimental TOD systems [23], SgpTod accesses a database that stores information from the task domains, such as available hotels and restaurants.

SgpTod comprises two LLM components, called DstPrompter and PolicyPrompter, that are both invoked in every dialogue turn between SgpTod and the user. The DstPrompter handles the NLU aspect, analyzing the user's input and populating the system's belief state. Its output is an SQL query suited to extract the database entries that match the current belief state. Upon retrieving the database entries, SgpTod invokes its PolicyPrompter, which covers both DM and NLG. Prompted with the dialogue history and the database entries retrieved, it produces a two-part output: a natural language response for NLG and a system action for DM.

TruckPlatoon. The concept of truck platooning means that trucks travel closely together for better fuel efficiency and traffic flow. TruckPlatoon comprises an algorithmic control loop which autonomously maintains a consistent distance between trucks.
It invokes an LLM to generate natural-language reports on the platoon's performance and stability from measurements tracked by the control algorithm, providing easily understandable information for engineers involved in monitoring and optimizing the truck platooning system.

ExcelCopilot. ExcelCopilot is an example of a recent trend where software companies integrate LLM-based assistants, often termed "copilots", into their products [44]. These copilots not only provide textual guidance but also perform actions within the software environment, constituting a distinctive type of LLM-integrated application. We chose ExcelCopilot as an example for evaluating our taxonomy. Since its implementation is undisclosed, we infer its architecture from indirect sources, including a screencast and a report on insights and experiences from copilot developers [16, 44]. This inferred architecture may deviate from the actual implementation.

ExcelCopilot is accessible in a task bar alongside the Excel worksheet. It features buttons with context-dependent suggestions of actions and a text box for users to type in commands in natural language. ExcelCopilot only works with data tables, so its initial suggestion is to convert the active worksheet's data into a data table. Copilot functions activate when a data table or part of it is selected. It then presents buttons for four top-level tasks: "add formula columns", "highlight", "sort and filter", and "analyze". The "analyze" button triggers the copilot to display more buttons, e.g., one that generates a pivot chart from the selected data. ExcelCopilot can also add a formula column to the data table and explain the formula in plain language.

When a user inputs a free-text command, ExcelCopilot may communicate its inability to fulfill it. This consistently occurs with commands requiring multiple steps, indicating that ExcelCopilot lacks a planning LLM component as seen in, for example, MatrixProduction. This observation, along with its mention in [44], suggests that ExcelCopilot employs an intent detection-skill routing architecture. This architecture includes an LLM component that maps free-text user commands to potential intents and then delegates to other LLM components tasked with generating actions to fulfill those intents. Accordingly, ExcelCopilot comprises several types of LLM components:

• Several distinct ActionExecutors generate code for specific application actions, such as creating a pivot table, designing a worksheet formula, inserting a diagram, and so on.

• An Advisor suggests meaningful next actions. Its outputs serve to derive button captions and prompts for ActionExecutors.

• When a user inputs a free-text command, the IntentDetector is invoked to determine and trigger a suitable ActionExecutor. The IntentDetector communicates its actions to users and informs them when it cannot devise a suitable action.

• The Explainer generates natural language explanations of formulae designed by ExcelCopilot. It is unclear whether, under the hood, the ActionExecutor is generating both the formula and the explanation, or if two separate LLM components are being invoked. We assume the latter, i.e., that a separate Explainer LLM component exists.

While users interact repeatedly with ExcelCopilot, each interaction adheres to a single-turn pattern, with the user providing a command and ExcelCopilot executing it [44].
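The inferred intent detection-skill routing architecture can be sketched as follows. This is a speculative sketch only, since ExcelCopilot's implementation is undisclosed; the intent names (loosely following the four top-level tasks), the prompts, and the llm_complete client are assumptions:

    def llm_complete(prompt: str) -> str:
        """Stand-in for any LLM completion API; hypothetical."""
        raise NotImplementedError

    INTENTS = ("add_formula_column", "highlight", "sort_and_filter", "analyze")

    def detect_intent(command: str) -> str | None:
        """IntentDetector: map a free-text command to one intent, or None."""
        prompt = (
            f"Classify the command into one of the intents {list(INTENTS)}. "
            "Reply with the intent name only, or NONE if no intent fits.\n"
            f"Command: {command}"
        )
        answer = llm_complete(prompt).strip()
        return answer if answer in INTENTS else None

    def run_action_executor(intent: str, command: str) -> str:
        """ActionExecutor: an intent-specific prompt generates the code
        (or action description) that fulfills the command."""
        return llm_complete(
            f"You implement the '{intent}' action for a spreadsheet "
            f"assistant. Generate the action code for: {command}"
        )

    def handle_command(command: str) -> str:
        intent = detect_intent(command)
        if intent is None:
            # The copilot informs the user that it cannot devise an action.
            return "Sorry, I cannot do that."
        return run_action_executor(intent, command)

Note that there is no planning stage: each command is routed to exactly one ActionExecutor, which matches the observed single-turn interaction pattern.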
5. A Taxonomy for LLM Components and LLM-Integrated Applications

When developing the taxonomy, it emerged that analyzing an LLM-integrated application should begin with identifying and describing its distinct LLM components. Analyzing each LLM component separately helps capture details and provides a clear understanding of how the application utilizes LLM capabilities. The LLM-integrated application can then be described as a combination of the LLM components it employs.

Table 2: Dimensions and characteristics of the taxonomy. Codes of characteristics are printed in uppercase. "Meta" means "metadimension". "MuEx" means "mutual exclusiveness".

Meta        Dimension    Characteristics                                    MuEx
Invocation  Interaction  App, Command, Dialog                               enforced
Invocation  Frequency    Single, Iterative                                  yes
Function    Logic        cAlculate, Control                                 yes
Function    UI           none, Input, Output, Both                          yes
Function    Data         none, Read, Write, Both                            yes
Prompt      Instruction  none, User, LLM, Program                           enforced
Prompt      State        none, User, LLM, Program                           enforced
Prompt      Task         none, User, LLM, Program                           yes
Prompt      Check        none, User, LLM, Program                           enforced
Skills      Skills       reWrite, Create, conVerse, Inform, Reason, Plan    no
Output      Format       FreeText, Item, Code, Structure                    no
Output      Revision     none, User, LLM, Program                           enforced
Output      Consumer     User, LLM, Program, Engine                         enforced

5.1. Overview and demonstration

The taxonomy identifies 13 dimensions for LLM components, grouped into five metadimensions as shown in table 2. It comprises both dimensions with genuinely mutually exclusive characteristics and those with non-exclusive characteristics. For dimensions related to the technical integration of LLMs within applications, mutual exclusiveness is enforced. Given the open nature of software architecture, the integration of LLMs allows for significant diversity. In practice, LLM components may show multiple characteristics within these dimensions. Nonetheless, the taxonomy requires categorizing each component with a predominant characteristic, enforcing a necessary level of abstraction to effectively organize and structure the domain.

We applied the taxonomy to categorize each of the example instances described in section 4.2. The results are depicted in figure 1. The dimensions and their characteristics are detailed and illustrated with examples in section 5.2.

The taxonomy visualizes an LLM component by a feature vector comprising binary as well as multi-valued features. Non-mutually exclusive dimensions are represented by a set of binary features. The remaining dimensions are encoded as n-valued features where n denotes the number of characteristics. For compactness, we use one-letter codes of the characteristics as feature values in the visualizations. In table 2, these codes are printed in upper case in the respective characteristic's name.

A feature vector representing an LLM component is visualized in one line. For dimensions with non-mutually exclusive characteristics, all possible codes are listed, with the applicable ones marked. The remaining dimensions are represented by the code of the applicable characteristic, with the characteristic none shown as an empty cell. We shade feature values with different tones to support visual perception. LLM components within the same application are grouped together, visualizing an LLM-integrated application in a tabular format.

5.2. Dimensions and characteristics

5.2.1. Invocation dimensions

Two Invocation dimensions address the way the LLM is invoked within the application.
Interaction describes how the user interacts with the LLM, with three characteristics:

App: Users never converse with the LLM directly in natural language; rather, the application invokes the LLM automatically. E.g., users do not interact directly with ExcelCopilot ActionExecutor or with MatrixProduction Operator.

Command: Users input single natural language commands. E.g., users interact with AutoDroid TaskExecutor through single natural language commands.

Dialog: Users engage in multi-turn dialogues with the LLM component to achieve a use goal. E.g., users repeatedly prompt LowCode Executing or MyCrunchGpt DesignAssistant in multi-turn dialogues to obtain an essay or an airfoil design, respectively.

Figure 1: Categorized example instances. See table 2 for a legend. ∗, 2: multiple LLM components.

Frequency addresses how often the application invokes a specific LLM component to fulfill a goal:

Single: A single invocation of an LLM component is sufficient to produce the result. E.g., in MyCrunchGpt, the application internally invokes distinct LLM components once for each user input by injecting varying prompt instructions.

Iterative: The LLM component is invoked repeatedly to produce the result. E.g., AutoDroid TaskExecutor is invoked multiple times to fulfill a command with an updated environment description in the State prompt; LowCode Executing is repeatedly prompted by the user to achieve the use goal while the application updates the dialogue history.

5.2.2. Function dimensions

The Function dimensions are derived from the classical three-tier software architecture model which segregates an application into three distinct layers: presentation, logic and data [17]. The presentation layer implements the UI. On the input side, it allows users to enter data and commands that control the application. On the output side, it presents information and provides feedback on the execution of commands.
The logic layer holds the code that directly realizes the core objectives and processes of an application, such as processing data, performing calculations, and making decisions. The data layer of an application manages the reading and writing of data from and to persistent data storage. Due to its versatility, an LLM component can simultaneously implement functionality for all three layers. The taxonomy addresses this with three Function dimensions.

UI indicates whether an LLM component contributes significantly to the user interface of an application, avoiding the need to implement graphical UI controls or display elements:

none: No UI functionality is realized by the LLM. E.g., in ExcelCopilot, the LLM does not replace any UI elements.

Input: Input UI is (partially) implemented by the LLM. E.g., in MatrixProduction Manager, users input their order in natural language, obviating a product configuration GUI.

Output: Output UI is (partially) implemented by the LLM. E.g., in TruckPlatoon, the output generated by the LLM component can replace a data cockpit with gauges and other visuals displaying numerical data.

Both: Input and output UI are (partially) implemented by the LLM. E.g., in MyCrunchGpt, the DesignAssistant provides a convenient conversational interface for parameterization of APIs and tools and feedback on missing values, which otherwise might require a complex GUI.

Logic indicates whether the LLM component determines the control flow of the application. It discerns two characteristics:

cAlculate: The output does not significantly impact the control flow of the application, i.e., the output is processed like data. E.g., MyCrunchGpt SettingsEditor modifies a JSON file, replacing a programmed function; MyCrunchGpt DesignAssistant asks the user for parameters, but the sequence of calling APIs and tools follows a predefined workflow; the workflow computed by LowCode Planning is displayed without influencing the application's control flow.

Control: The output of the LLM is used for controlling the application. E.g., the plans generated by MatrixProduction Manager serve to schedule and activate production modules; the actions proposed by AutoDroid TaskExecutor are actually executed and determine how the control flow of the app proceeds.

Since an LLM invocation always computes a result, cAlculate is interpreted as "calculate only", making cAlculate and Control mutually exclusive.

Data addresses whether the LLM contributes to reading or writing persistent data:

none: The LLM does not contribute to reading or writing persistent data. This characteristic applies to most sample instances.

Read: The LLM is applied for reading from a persistent data store. E.g., SgpTod DstPrompter generates SQL queries which the application executes; Honeycomb QueryAssistant devises analytical database queries.

Write and Both: No LLM component among the samples generates database queries for creating or updating persistent data.

5.2.3. Prompt-related dimensions

Integrating an LLM into an application poses specific requirements for prompts, such as the need for prompts to reliably elicit output in the requested form [68]. While a broad range of prompt patterns have been identified and investigated [66], there is still a lack of research on successful prompt patterns specifically for LLM-integrated applications, on which this taxonomy could build. Developing prompt taxonomies is a challenging research endeavor in itself [49] and is beyond the scope of this research.
Therefore, the taxonomy does not define a dimension with specific prompt patterns as characteristics, but rather focuses on how the application generates the prompt for an LLM component from a technical perspective.

Prompts generally consist of several parts with distinct purposes, generated by different mechanisms. Although many authors explore the concepts, a common terminology has yet to be established. This is illustrated in table 3, showing terms from an ad-hoc selection of recent papers addressing prompt generation in applications. In the table, italics indicate that the authors refrain from introducing an abstract term and instead use a domain-specific description. The term "examples" indicates a one-shot or few-shot prompt pattern. The terms that are adopted for the taxonomy are underlined.

The taxonomy distinguishes three prompt parts referred to as Prompt Instruction, Prompt State, and Prompt Task. These parts can occur in any order, potentially interleaved, and some parts may be absent.

• Instruction is the part of a prompt that outlines how to solve the task. Defined during LLM component development, it remains static throughout an application's lifespan.

• State is the situation-dependent part of the prompt that is created dynamically every time the LLM is invoked. The taxonomy opts for the term State instead of "context" in order to avoid confusion with the "LLM context" as explained in section 2. The State may include the current dialogue history, an extract of a knowledge base needed specifically for the current LLM invocation, or a state or scene description, etc.

• Task is the part of the prompt conveying the task to solve in a specific invocation.

Prompt Instruction, State and Task describe the origins of the prompt parts by uniform characteristics:

none: The prompt part is not present. E.g., ProgPrompt ActionPlanning has no State prompt, nor does LowCode Planning (except the dialogue history when planning a subprocess). Instruction and Task prompt parts are present in all sample instances.

User: The user phrases the prompt part. E.g., the Task for ExcelCopilot IntentDetector or for LowCode Planning is phrased by the user. There are no sample instances where the user provides the Instruction or State prompt parts.

LLM: The prompt part is generated by an LLM. E.g., LowCode Planning generates the State for LowCode Executing, and ExcelCopilot IntentDetector generates the Task for ExcelCopilot ActionExecutors.

Program: Application code generates the prompt part. E.g., AutoDroid programmatically generates the State and the Task parts for its MemoryGenerators in the knowledge base building phase.

The Prompt Instruction dimension is always generated by Program. While a user and possibly an LLM have defined this prompt part during application development, this falls outside the scope of this taxonomy. Therefore, the Prompt Instruction dimension is not discriminating and categorizes all cases as Program. It is retained in the taxonomy for completeness and better understandability.

Prompt Check describes whether the application employs a review mechanism to control and modify the prompt before invoking the LLM. The same characteristics as for the prompt parts are applicable:

none: The prompt is used without check.

User: The user checks and revises the prompt.

LLM: Another LLM component checks or revises the prompt.

Program: The application comprises code to check or revise the prompt.
E.g., AutoDroid removes personal data, such as names, to ensure privacy before invoking the TaskExecutor; Honeycomb QueryAssistant incorporates a coded mechanism against prompt injection attacks.

Table 3: Terms used for prompt parts. Expressions specific to a domain are printed in italics; "examples" indicates a one-shot or few-shot prompt pattern. Terms adopted for the taxonomy are underlined.

Sources: [72], [34], [32], [45], [45], [37], [5], [5], [69], [26]
Terms for Instruction: task description + examples; instruction prompt; predefined prompt; prompt template + examples; examples; prompt context, i.e., examples; education prompt; education prompt; role and goal + instruction + examples; predefined system instruction + domain-specific information
Terms for State: DB schema; environment state, scene description; dialogue history; dialogue history + provided workflow; context; query results from knowledge graph
Terms for Task: test instance; data prompt; user prompt; user input; question; SQL query result; input task commands; user input; task prompt (circumscribed); current task; the user's request

Most example instances omit prompt checks. There are no examples where a Check is performed by a User or an LLM.

5.2.4. Skills dimensions

The Skills dimension captures the types of LLM capabilities that an application utilizes. It is designed as a dimension with six non-mutually exclusive characteristics, decomposing Skills into six specific capabilities:

reWrite: The LLM edits or transforms data or text, such as rephrasing, summarizing, reformatting, correcting, or replacing values. E.g., MyCrunchGpt SettingsEditor replaces values in JSON files; TruckPlatoon converts measurements into textual explanations.

Create: The LLM generates novel output. E.g., LowCode Executing generates substantial bodies of text for tasks like essay writing.

conVerse: The application relies on the LLM's capability to engage in purposeful dialogues with humans. E.g., MyCrunchGpt DesignAssistant asks users for missing parameters; SgpTod PolicyPrompter decides how to react to user inputs and formulates chatbot responses.

Inform: The application depends on knowledge that the LLM has acquired during its training, unlike applications that provide all necessary information within the prompt. E.g., MyCrunchGpt DomainExpert provides expert knowledge on airfoil designs; MatrixProduction relies on built-in knowledge of production processes, such as "a hole is produced by drilling"; LowCode Executing uses its learned knowledge for tasks like essay writing.

Reason: The LLM draws conclusions or makes logical inferences. E.g., FormulaExplainer in ExcelCopilot explains the effects of Excel functions in formulas; AutoDroid MemoryGenerators explain the effects of GUI elements in Android apps.

Plan: The LLM designs a detailed method or course of action to achieve a specific goal. E.g., AutoDroid TaskExecutor and WorkplaceRobot TaskPlanning devise action plans to achieve goals.

The Plan and Reason characteristics are interrelated, as planning also requires reasoning. The intended handling of these characteristics is to categorize an LLM component as Plan only and understand Plan as implicitly subsuming Reason.

The effectiveness of LLMs as components of software applications relies on their commonsense knowledge and their ability to correctly interpret and handle a broad variety of text inputs, including instructions, examples, and code. It is reasonable to assume that a fundamental capability, which might be termed Understand, is leveraged by every LLM component.
As it is not distinctive, the taxonomy does not list it explicitly in the Skills dimension.

Applying this taxonomy dimension requires users to determine which skills are most relevant and worth highlighting in an LLM component. Given the versatility of LLMs, reducing the focus to a few predominant skills is necessary to make categorizations distinctive and expressive.

5.2.5. Output-related dimensions

Output Format characterizes the format of the LLM's output. As an output may consist of several parts in diverse formats, this dimension is designed as non-mutually exclusive, same as the Skills dimension. It distinguishes four characteristics that are distinctive and well discernible:

FreeText: unstructured natural language text output. E.g., TruckPlatoon and MyCrunchGpt DomainExpert generate text output in natural language; MatrixProduction Manager and MatrixProduction Operator produce FreeText explanations complementing output in custom formats to be parsed by the application.

Item: a single text item from a predefined set of items, such as a class in a classification task. E.g., ProgPrompt ScenarioFeedback outputs either True or False.

Code: source code or other highly formalized output that the LLM has learned during its training, such as a programming language, XML, or JSON. E.g., AutoDroid TaskExecutor produces code to steer an Android app; MyCrunchGpt SettingsEditor outputs JSON.

Structure: structured, formalized output adhering to a custom format. E.g., LowCode Planning outputs text in a format that can be displayed as a flow chart; MatrixProduction Manager and Operator produce output in custom formats combined with FreeText explanations.

Output Revision indicates whether the application checks or revises the LLM-generated output before utilization. These characteristics and their interpretations mirror those in the Prompt Check dimension:

none: There is no revision of the LLM output.

User: The user revises the LLM output. E.g., the user improves the plan generated by LowCode Planning.

LLM: A further LLM component checks or revises the output of the LLM component under consideration.

Program: Programmed code checks or revises the LLM output. E.g., Honeycomb QueryAssistant corrects the query produced by the LLM before executing it [7].

There are no instances in the sample set where another LLM revises or checks the output of the LLM. Most sample applications do not check or revise the LLM's output, though several of them parse and transform it. The purpose of the Output Revision dimension is to indicate whether the application includes control or correction mechanisms, rather than just parsing it.

Output Consumer addresses the way of utilizing the LLM output:

User signifies that the LLM output is presented to a human user. E.g., the text output of TruckPlatoon is intended for humans, as well as the output of MyCrunchGpt DomainExpert.

LLM indicates that the output serves as a prompt part in a further LLM invocation. E.g., the knowledge base entries generated by an AutoDroid MemoryGenerator become part of the prompt for AutoDroid TaskExecutor; the plan output by LowCode Planning serves as a part of the prompt for LowCode Executing.

Program describes instances where the LLM output is consumed and processed further by a software component of the application. E.g., the output of MatrixProduction Manager is handled by software systems (including a Manufacturing Execution System) which use it to compute prompts for other LLM components.
Engine covers scenarios where the LLM output is intended for execution on a runtime engine. E.g., the SQL query generated by SgpTod DstPrompter is processed by a SQL interpreter; a part of the output of MatrixProduction Operator is executed by automation modules.

Although applications may parse and transform the LLM output before use, the Output Consumer dimension is meant to identify the ultimate consumer, such as an execution engine, rather than an intermediary parser or transformation code. When applications divide the LLM output into parts for different consumers, users applying the taxonomy need to determine which consumer is most relevant, since this dimension is designed to be mutually exclusive.

5.3. Evaluation

Figure 2 displays the number of occurrences of characteristics within the example instances. It must be noted, however, that these do not reflect actual frequencies, as similar LLM components within the same application are aggregated together, indicated by the symbols ∗ and 2 in figure 1. Furthermore, ExcelCopilot likely includes occurrences of Prompt Check and Output Revision which are not counted due to insufficient system documentation.

We evaluate the taxonomy against commonly accepted quality criteria: comprehensiveness, robustness, conciseness, mutual exclusiveness, explanatory power, and extensibility [58, 42]. The taxonomy encompasses all example instances, including those that were not considered during its development. This demonstrates comprehensiveness. As figure 1 shows, all example instances have unique categorizations, supporting the taxonomy's robustness. This not only indicates that the dimensions and characteristics are distinctive for the domain, but also highlights the wide variety possible in this field. Conciseness demands that the taxonomy uses the minimum number of dimensions and characteristics. The taxonomy gains conciseness by identifying relatively few and abstract characteristics within each dimension. However, it does not adhere to the related subcriterion that each characteristic must be present in at least one investigated instance [54]. Unoccupied characteristics are retained for dimensions whose characteristics were derived conceptually, specifically, for the Prompt dimensions, the Output Revision dimension, and the Data Function dimension, enhancing the taxonomy's ability to illustrate design options and inspire novel uses for LLM integrations in applications. Some dimensions are constructed in parallel, sharing common sets of characteristics. While this affects conciseness, it makes the taxonomy easier to understand and apply. As is often seen in taxonomy development [54], we deliberately waived the requirement for mutual exclusiveness for some dimensions, specifically the Output Format and Skills dimensions. In the context of this taxonomy, these can equivalently be understood as a set of four and six binary dimensions respectively, each divided into characteristics "yes" and "no". However, framing them as a single dimension with non-mutually exclusive characteristics seems more intuitive.

Metadimensions structure the taxonomy, and most of the characteristics are illustrated through examples. These measures are recognized for enhancing the explanatory power of a taxonomy [58].

The taxonomy's flat structure allows for the easy addition of dimensions and characteristics, indicating that its extensibility is good.
Potential extensions and further aspects of the taxonomy, including its usefulness and ease of use, are discussed in section 6.

We visualize the taxonomy (or, strictly speaking, categorized instances) in a compact form using feature vectors with characteristics abbreviated to single-letter codes. This approach has a drawback, as it requires referencing a legend. Additionally, non-applicable characteristics in mutually exclusive dimensions are not visible, which means the design space is not completely shown. However, the compactness of the representation allows LLM components within a common application to be grouped closely, so that an LLM-integrated application can be perceived as a unit without appearing convoluted. This is a significant advantage for our purposes.

6. Discussion

The discussion first focuses on the taxonomy's applicability and ease of use before considering its overall usefulness.

Figure 2: Occurrences of characteristics in the sample set of LLM-integrated applications.

6.1. Applicability and ease of use

The taxonomy was effectively applied to LLM-integrated applications based on research papers, source code, blog posts, recorded software demonstrations, and developer experiences. The analysis of LowCode revealed it to be a prompt definition tool combined with an LLM-based chatbot, which deviates from the strict definition of an LLM-integrated application. Still, the taxonomy provided an effective categorization and led to a clear understanding of the system's architecture.

Obviously, the ease of categorization depends on the clarity and comprehensiveness of the available information, which varies across analyzed systems. Analyzing applications of LLMs in novel and uncommon domains can be challenging. While these papers present inspiring and innovative ideas for LLM integration, such as MyCrunchGpt and TruckPlatoon, they may prioritize explaining the application area and struggle to detail the technical aspects of the LLM integration. A taxonomy for LLM-integrated applications can guide and facilitate the writing process and lead to more standardized and comparable descriptions.

Applying the taxonomy is often more straightforward for research-focused systems. Omitting the complexities required for real-world applications, such as prompt checks and output revisions, their architectures are simpler and easier to describe. A taxonomy can point out such omissions.

A fundamental challenge in applying the taxonomy arises from the inherent versatility of LLMs, which allows to define LLM components serving multiple purposes. This is exemplified by SgpTod PolicyPrompter, where the prompt is designed to produce a structure with two distinct outcomes (a class label and a chatbot response), and similarly by MatrixProduction, as detailed in section 4.2. Drawing an analogy to "function overloading" in classical programming, such LLM components can be termed "overloaded LLM components".
A taxonomy can handle overloaded LLM components in several ways: (1) define more dimensions as non-mutually exclusive, (2) label overloaded LLM components as "overloaded" without a more detailed categorization, or (3) categorize them by their predominant purpose or output. While the first approach allows for the most precise categorization, it complicates the taxonomy. Moreover, it will likely result in nearly all characteristics being marked for some LLM components, which is ultimately not helpful. The second approach simplifies categorization but sacrifices much detail. Our taxonomy adopts the third approach, enforcing simplification and abstraction in descriptions of overloaded LLM components while retaining essential detail. The taxonomy can easily be extended to include approach (2) as an additional binary dimension.

6.2. Usefulness

The search for instances of LLM-integrated applications uncovered activities across various domains. Substantial research involving LLM integrations, often driven by theoretical interests, is notable in robot task planning [37, 51, 61, 33, 63] and in the TOD field [23, 71, 4, 6, 56]. Research exploring LLM potentials from a more practical perspective can be found in novel domains, such as industrial production [69, 26] and other technical areas [28, 70]. Furthermore, developers of commercial LLM-based applications are beginning to communicate their efforts and challenges [44, 7]. The taxonomy has been applied to example instances from these and additional areas. This demonstrates its potential as a common, unified framework for describing LLM-integrated applications, facilitating the comparison and sharing of development knowledge between researchers and practitioners across various domains.

When applying the taxonomy to the example instances, it proved to be effective and useful as an analytical lens. Descriptions of LLM-integrated applications commonly explain background information and details of the application domain in addition to its LLM integration. When used as an analytical lens, the taxonomy quickly directs the analysis towards the aspects of LLM integration, abstracting from the specificities of the domain.

The taxonomy describes how LLM capabilities can be leveraged in software systems, offers inspiration for LLM-based functions, and outlines options for their implementation as follows. The Skills dimension outlines the range of capabilities an LLM can contribute to an application through a concise set of characteristics, while the Function dimension suggests potential uses, further supported by the Interaction dimension. The Output Format dimension indicates options for encoding the output of an LLM in formats beyond plain text, making it processable by software. The Output Consumer dimension illustrates the diverse ways to utilize or act upon LLM output. Thus, the taxonomy, as intended, spans a design space for LLM integrations.

The sampled LLM-integrated applications showcase the creativity of researchers and developers in applying and exploiting the potentials of LLMs, ranging from straightforward solutions (e.g., TruckPlatoon) to highly sophisticated and technically complex ones (e.g., AutoDroid). When using the taxonomy to inspire innovative uses of LLMs, we recommend supplementing it with descriptions of example applications to enhance its illustrativeness. The characteristics of the Skills dimension are derived pragmatically from the investigated example instances.
While they do not claim to be exhaustive or deeply rooted in LLM theory or cognitive science, they add relevant details to the categorizations and illustrate design options and potentials for using LLMs as software components.

It emerged as a key insight of this research that, rather than analyzing an LLM-integrated application as a whole, analysis should start with the identification and description of its distinct LLM components. This is essential for gaining a clear understanding of how the application utilizes the capabilities of LLMs. The LLM-integrated application then manifests as a combination of its LLM components. As shown in figure 1, the visualization effectively displays both the quantity and the variety of LLM components in an LLM-integrated application.

LLM components interact through prompt chaining, where one LLM component's output feeds into another's input [67]. When an LLM-integrated application involves such an interaction, the taxonomy represents it as an LLM characteristic within a Prompt dimension. The taxonomy can capture the variance in these interactions. For instance, in AutoDroid TaskExecutor and LowCode Executing, the LLM characteristic appears in the Prompt State dimension, because their prompt components (knowledge base excerpts and prompt definition, respectively) are generated by other LLM components in a preparatory stage. In contrast, the LLM characteristic appears in the Prompt Task dimension for MatrixProduction Operator, because its prompt part is generated individually by the MatrixProduction Manager almost immediately before use.

Taxonomy dimensions that cover entire LLM-integrated applications may be useful. Given their complexity, these dimensions should be designed based on a broader range of examples, which will only become available as more LLM-integrated applications are developed and their architectures disclosed in the future. Extensions to the taxonomy could also include dimensions for describing the structure of prompts in more detail, as well as dimensions addressing characteristics of the language models used.

Table 4: LLM usage in the sample instances. "Evals" indicates evaluations of various LLMs.

Application        Used or best LLM   Evals  Comments
Honeycomb          GPT-3.5            yes    GPT-4 far too slow then
LowCode            GPT-3.5-turbo
MyCrunchGpt        GPT-3.5                   awaiting the publication of GPT-4
MatrixProduction   text-davinci-003
WorkplaceRobot     GPT-3
AutoDroid          GPT-4              yes    GPT-4 best for tasks requiring many steps
ProgPrompt         GPT-3              yes    CODEX better, but access limits prohibitive
FactoryAssistants  GPT-3.5
SgpTod             GPT-3.5            yes    GPT-3.5 best more often than others combined
TruckPlatoon       GPT-3.5-turbo
ExcelCopilot       N/A                       combined LLMs in Copilot for Microsoft 365 [43]

7. Conclusion

This paper investigates the use of LLMs as software components. Its perspective differs from current software engineering research, which investigates LLMs as tools for software development [14, 22], and from research examining LLMs as autonomous agents [11, 62, 57, 21]. This paper defines the concept of an LLM component as a software component that realizes its functionality by invoking an LLM. While LLM components implicitly appear in various works, termed, for example, "prompters", "prompted LLM", "prompt module", or "module" [30, 71, 6, 7], to our knowledge, this concept has not yet been formalized or systematically investigated.
The main contribution of this study is a taxonomy for the analysis and description of LLM components, extending to LLM-integrated applications by characterizing them as combinations of LLM components. In addition to the dimensions and characteristics of the taxonomy, the study contributes a taxonomy visualization based on feature vectors, which is more compact than the established visualizations such as morphological boxes [55] or radar charts. It represents an LLM-integrated application as one visual entity in a tabular format, with its LLM components displayed as rows.

The taxonomy was constructed using established methods, based on a set of example instances, and evaluated with a new set of example instances. The combined samples exhibit broad variation along the identified dimensions. For some instances, information was not available, necessitating speculative interpretation. However, since the sample is used for identifying options rather than quantitative analysis, this issue and the representativeness of the sample are not primary concerns. The evaluation was conducted by the developer of the taxonomy, consistent with recent related work [21, 52, 48]. Using a new sample for evaluation strengthens the validity of the results.

A further significant contribution of the paper is a systematic overview of a sample of LLM-integrated applications across various industrial and technical domains, illustrating a spectrum of conceptual ideas and implementation options.

As the examples show, LLM components can replace traditionally coded functions in software systems and enable novel use cases. However, practical challenges persist. Developers report that new software engineering methods are required, e.g., for managing prompts as software assets and for testing and monitoring applications. For instance, the costs of LLM invocations prohibit the extensive automated testing that is standard in software development practice [44, 7]. Challenges also arise from the inherent indeterminism and uncontrollability of LLMs. Small variations in prompts can lead to differences in outputs, while automated output processing in LLM-integrated applications requires the output to adhere to a specified format.

Furthermore, the deployment mode of LLMs, whether local (on the same hardware as the application) or remote, managed privately or offered as Language-Models-as-a-Service (LMaaS), has impact on performance and usability. Table 4 gives an overview of the LLMs used in our sample of applications. Where papers report evaluations of multiple LLMs, the table displays the chosen or best-performing LLM. Although not representative, the table provides some insights. LMaaS dominates, likely due to its convenience, but more importantly, due to the superior performance of the provided LLMs.

Concerns regarding LMaaS include privacy, as sensitive data might be transmitted to the LLM through the prompt [64], and service quality, i.e., reliability, availability, and costs. Costs typically depend on the quantity of processed tokens. This quantity also affects latency, which denotes the processing time of an LLM invocation. A further important factor for latency is the size of the LLM, with larger models being slower [7].

When building LLM-based applications for real-world use, the reliability and availability of an LMaaS are crucial.
Availability depends not only on the technical stability of the service, but also on factors such as increased latency during high-usage periods or usage restrictions imposed by the provider of an LMaaS, as reported for ProgPrompt [51]. Beyond technical aspects, the reliability of an LMaaS also encompasses its behavior. For instance, providers might modify a model to enhance its security, potentially impacting applications that rely on it.

Despite practical challenges, integrating LLMs into systems has the potential to alter the way software is constructed and the types of systems that can be realized. Prompts are central to the functioning of LLM components, which pose specific requirements such as strict format adherence. Therefore, an important direction for future research will be prompt engineering specifically tailored for LLM-integrated applications.

In future work, the taxonomy will be extended to distinguish finer-grained parts of prompts, allowing a more detailed description and comparison of prompts and related experimental results. Initial studies share results on the format-following behavior of LLMs [68] as a subtopic of instruction-following [73], derived with synthetic benchmark data. It is necessary to complement their results with experiments using data and tasks from real application development projects because, in the early stages of this field, synthetic benchmarks may fail to cover relevant aspects within the wide range of possible options. Another crucial research direction involves exploring how LLM characteristics correspond to specific tasks, such as determining the optimal LLM size for intent detection tasks. The taxonomy developed in this study can systematize such experiments and their outcomes. Additionally, it provides a structured framework for delineating design choices in LLM components, making it a valuable addition to future training materials.

Acknowledgements

Special thanks to Antonia Weber and Constantin Weber for proofreading and providing insightful and constructive comments.

References

[1] Eleni Adamopoulou and Lefteris Moussiades. An Overview of Chatbot Technology. In Ilias Maglogiannis, Lazaros Iliadis, and Elias Pimenidis, editors, Artificial Intelligence Applications and Innovations, IFIP Advances in Information and Communication Technology, pages 373–383, Cham, 2020. Springer International Publishing. doi:10.1007/978-3-030-49186-4_31.

[2] Sebastian Bader, Erich Barnstedt, Heinz Bedenbender, Bernd Berres, Meik Billmann, and Marko Ristin. Details of the asset administration shell - part 1: The exchange of information between partners in the value chain of Industrie 4.0 (version 3.0 RC02). Working Paper, Berlin: Federal Ministry for Economic Affairs and Climate Action (BMWK), 2022. doi:10.21256/zhaw-27075.

[3] Marcos Baez, Florian Daniel, Fabio Casati, and Boualem Benatallah. Chatbot integration in few patterns. IEEE Internet Computing, pages 1–1, 2020. doi:10.1109/MIC.2020.3024605.

[4] Tom Bocklisch, Thomas Werkmeister, Daksh Varshneya, and Alan Nichol. Task-Oriented Dialogue with In-Context Learning. (arXiv:2402.12234), February 2024. doi:10.48550/arXiv.2402.12234.

[5] Yuzhe Cai, Shaoguang Mao, Wenshan Wu, Zehua Wang, Yaobo Liang, Tao Ge, Chenfei Wu, Wang You, Ting Song, Yan Xia, Jonathan Tien, and Nan Duan. Low-code LLM: Visual Programming over LLMs. (arXiv:2304.08103), April 2023. doi:10.48550/arXiv.2304.08103.

[6] Lang Cao.
[7] Phillip Carter. All the Hard Stuff Nobody Talks About When Building Products with LLMs. Honeycomb, May 2023. https://www.honeycomb.io/blog/hard-stuff-nobody-talks-about-llm.

[8] Phillip Carter. So We Shipped an AI Product. Did It Work? Honeycomb, October 2023. https://www.honeycomb.io/blog/we-shipped-ai-product.

[9] Banghao Chen, Zhaofeng Zhang, Nicolas Langrené, and Shengxin Zhu. Unleashing the potential of prompt engineering in Large Language Models: A comprehensive review. (arXiv:2310.14735), October 2023. doi:10.48550/arXiv.2310.14735.

[10] Wang Chen, Yan-yi Liu, Tie-zheng Guo, Da-peng Li, Tao He, Li Zhi, Qing-wen Yang, Hui-han Wang, and Ying-you Wen. Systems engineering issues for industry applications of large language model. Applied Soft Computing, 151:111165, January 2024. doi:10.1016/j.asoc.2023.111165.

[11] Yuheng Cheng, Ceyao Zhang, Zhengwen Zhang, Xiangrui Meng, Sirui Hong, Wenhao Li, Zihao Wang, Zekai Wang, Feng Yin, Junhua Zhao, and Xiuqiang He. Exploring Large Language Model based Intelligent Agents: Definitions, Methods, and Prospects. (arXiv:2401.03428), January 2024. doi:10.48550/arXiv.2401.03428.

[12] Silvia Colabianchi, Andrea Tedeschi, and Francesco Costantino. Human-technology integration with industrial conversational agents: A conceptual architecture and a taxonomy for manufacturing. Journal of Industrial Information Integration, 35:100510, October 2023. doi:10.1016/j.jii.2023.100510.

[13] Jonathan Evertz, Merlin Chlosta, Lea Schönherr, and Thorsten Eisenhofer. Whispers in the Machine: Confidentiality in LLM-integrated Systems. (arXiv:2402.06922), February 2024. doi:10.48550/arXiv.2402.06922.

[14] Angela Fan, Beliz Gokkaya, Mark Harman, Mitya Lyubarskiy, Shubho Sengupta, Shin Yoo, and Jie M. Zhang. Large Language Models for Software Engineering: Survey and Open Problems. (arXiv:2310.03533), November 2023. doi:10.48550/arXiv.2310.03533.

[15] Wenqi Fan, Zihuai Zhao, Jiatong Li, Yunqing Liu, Xiaowei Mei, Yiqi Wang, Zhen Wen, Fei Wang, Xiangyu Zhao, Jiliang Tang, and Qing Li. Recommender Systems in the Era of Large Language Models (LLMs). (arXiv:2307.02046), August 2023. doi:10.48550/arXiv.2307.02046.

[16] David Fortin. Microsoft Copilot in Excel: What It Can and Can't Do. YouTube, January 2024. https://www.youtube.com/watch?v=-fsu9IXMZvo.

[17] Martin Fowler. Patterns of Enterprise Application Architecture. 2002. ISBN 978-0-321-12742-6.

[18] Shirley Gregor. The nature of theory in information systems. MIS Quarterly, pages 611–642, 2006. doi:10.2307/25148742.

[19] Yanchu Guan, Dong Wang, Zhixuan Chu, Shiyu Wang, Feiyue Ni, Ruihua Song, Longfei Li, Jinjie Gu, and Chenyi Zhuang. Intelligent Virtual Assistants with LLM-based Process Automation. (arXiv:2312.06677), December 2023. doi:10.48550/arXiv.2312.06677.

[20] Muhammad Usman Hadi, Qasem Al Tashi, Rizwan Qureshi, Abbas Shah, Amgad Muneer, Muhammad Irfan, Anas Zafar, Muhammad Bilal Shaikh, Naveed Akhtar, Jia Wu, and Seyedali Mirjalili. Large Language Models: A Comprehensive Survey of its Applications, Challenges, Limitations, and Future Prospects, September 2023. doi:10.36227/techrxiv.23589741.v3.

[21] Thorsten Händler. A Taxonomy for Autonomous LLM-Powered Multi-Agent Architectures. In Proceedings of the 15th International Joint Conference on Knowledge Discovery, Knowledge Engineering and Knowledge Management, pages 85–98, Rome, Italy, 2023.
SCITEPRESS - Science and Technology Publications. doi:10.5220/0012239100003598.

[22] Xinyi Hou, Yanjie Zhao, Yue Liu, Zhou Yang, Kailong Wang, Li Li, Xiapu Luo, David Lo, John Grundy, and Haoyu Wang. Large Language Models for Software Engineering: A Systematic Literature Review. (arXiv:2308.10620), September 2023. doi:10.48550/arXiv.2308.10620.

[23] Vojtěch Hudeček and Ondrej Dusek. Are Large Language Models All You Need for Task-Oriented Dialogue? In Svetlana Stoyanchev, Shafiq Joty, David Schlangen, Ondrej Dusek, Casey Kennington, and Malihe Alikhani, editors, Proceedings of the 24th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 216–228, Prague, Czechia, September 2023. Association for Computational Linguistics. doi:10.18653/v1/2023.sigdial-1.21.

[24] Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M. Bran, Stefan Bringuier, Catherine L. Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel Rodriques, Jacob Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, K. Schmidt, Ian Foster, Andrew White, and Ben Blaiszik. 14 examples of how LLMs can transform materials science and chemistry: A reflection on a large language model hackathon. Digital Discovery, 2(5):1233–1250, 2023. doi:10.1039/D3DD00113J.

[25] Jean Kaddour, Joshua Harris, Maximilian Mozes, Herbie Bradley, Roberta Raileanu, and Robert McHardy. Challenges and Applications of Large Language Models, July 2023. doi:10.48550/arXiv.2307.10169.

[26] Samuel Kernan Freire, Mina Foosherian, Chaofan Wang, and Evangelos Niforatos. Harnessing Large Language Models for Cognitive Assistants in Factories. In Proceedings of the 5th International Conference on Conversational User Interfaces, CUI '23, pages 1–6, New York, NY, USA, July 2023. Association for Computing Machinery. doi:10.1145/3571884.3604313.

[27] Anis Koubaa, Wadii Boulila, Lahouari Ghouti, Ayyub Alzahem, and Shahid Latif. Exploring ChatGPT Capabilities and Limitations: A Survey. IEEE Access, 11:118698–118721, 2023. doi:10.1109/ACCESS.2023.3326474.

[28] Varun Kumar, Leonard Gleyzer, Adar Kahana, Khemraj Shukla, and George Em Karniadakis. MyCrunchGPT: A LLM Assisted Framework for Scientific Machine Learning. Journal of Machine Learning for Modeling and Computing, 4(4), 2023. doi:10.1615/JMachLearnModelComput.2023049518.

[29] Dennis Kundisch, Jan Muntermann, Anna Maria Oberländer, Daniel Rau, Maximilian Röglinger, Thorsten Schoormann, and Daniel Szopinski. An Update for Taxonomy Designers. Business & Information Systems Engineering, 64(4):421–439, August 2022. doi:10.1007/s12599-021-00723-x.

[30] Gibbeum Lee, Volker Hartmann, Jongho Park, Dimitris Papailiopoulos, and Kangwook Lee. Prompted LLMs as chatbot modules for long open-domain conversation. In Anna Rogers, Jordan Boyd-Graber, and Naoaki Okazaki, editors, Findings of the Association for Computational Linguistics: ACL 2023, pages 4536–4554, Toronto, Canada, July 2023.
Association for Computational Linguistics. doi:10.18653/v1/2023.findings-acl.277.

[31] Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, and Graham Neubig. Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing. ACM Computing Surveys, 55(9):195:1–195:35, January 2023. doi:10.1145/3560815.

[32] Yi Liu, Gelei Deng, Yuekang Li, Kailong Wang, Tianwei Zhang, Yepang Liu, Haoyu Wang, Yan Zheng, and Yang Liu. Prompt Injection attack against LLM-integrated Applications, June 2023. doi:10.48550/arXiv.2306.05499.

[33] Yuchen Liu, Luigi Palmieri, Sebastian Koch, Ilche Georgievski, and Marco Aiello. DELTA: Decomposed Efficient Long-Term Robot Task Planning using Large Language Models. (arXiv:2404.03275), April 2024. doi:10.48550/arXiv.2404.03275.

[34] Yupei Liu, Yuqi Jia, Runpeng Geng, Jinyuan Jia, and Neil Zhenqiang Gong. Prompt Injection Attacks and Defenses in LLM-Integrated Applications. (arXiv:2310.12815), October 2023. doi:10.48550/arXiv.2310.12815.

[35] Shaoguang Mao, Qiufeng Yin, Yuzhe Cai, and Dan Qiao. LowCodeLLM. https://github.com/chenfei-wu/TaskMatrix/tree/main/LowCodeLLM, May 2023.

[36] Scott McLean, Gemma J. M. Read, Jason Thompson, Chris Baber, Neville A. Stanton, and Paul M. Salmon. The risks associated with Artificial General Intelligence: A systematic review. Journal of Experimental & Theoretical Artificial Intelligence, 35(5):649–663, July 2023. doi:10.1080/0952813X.2021.1964003.

[37] Oier Mees, Jessica Borja-Diaz, and Wolfram Burgard. Grounding Language with Visual Affordances over Unstructured Data. In 2023 IEEE International Conference on Robotics and Automation (ICRA), pages 11576–11582, London, United Kingdom, May 2023. IEEE. doi:10.1109/ICRA48891.2023.10160396.

[38] Grégoire Mialon, Roberto Dessì, Maria Lomeli, Christoforos Nalmpantis, Ram Pasunuru, Roberta Raileanu, Baptiste Rozière, Timo Schick, Jane Dwivedi-Yu, Asli Celikyilmaz, Edouard Grave, Yann LeCun, and Thomas Scialom. Augmented Language Models: A Survey, February 2023. doi:10.48550/arXiv.2302.07842.

[39] Melanie Mitchell. Debates on the nature of artificial general intelligence. Science, 383(6689):eado7069, March 2024. doi:10.1126/science.ado7069.

[40] Quim Motger, Xavier Franch, and Jordi Marco. Software-Based Dialogue Systems: Survey, Taxonomy, and Challenges. ACM Computing Surveys, 55(5):91:1–91:42, December 2022. doi:10.1145/3527450.

[41] Fiona Fui-Hoon Nah, Ruilin Zheng, Jingyuan Cai, Keng Siau, and Langtao Chen. Generative AI and ChatGPT: Applications, challenges, and AI-human collaboration. Journal of Information Technology Case and Application Research, 25(3):277–304, July 2023. doi:10.1080/15228053.2023.2233814.

[42] Robert C. Nickerson, Upkar Varshney, and Jan Muntermann. A method for taxonomy development and its application in information systems. European Journal of Information Systems, 22(3):336–359, May 2013. doi:10.1057/ejis.2012.26.

[43] Camille Pack, Cern McAtee, Samantha Robertson, Dan Brown, Aditi Srivastava, and Kweku Ako-Adjei. Microsoft Copilot for Microsoft 365 overview. https://learn.microsoft.com/en-us/copilot/microsoft-365/microsoft-365-copilot-overview, March 2024.

[44] Chris Parnin, Gustavo Soares, Rahul Pandita, Sumit Gulwani, Jessica Rich, and Austin Z. Henley. Building Your Own Product Copilot: Challenges, Opportunities, and Needs. (arXiv:2312.14231), December 2023. doi:10.48550/arXiv.2312.14231.
[45] Rodrigo Pedro, Daniel Castro, Paulo Carreira, and Nuno Santos. From Prompt Injections to SQL Injection Attacks: How Protected is Your LLM-Integrated Web Application? (arXiv:2308.01990), August 2023. doi:10.48550/arXiv.2308.01990.

[46] Ken Peffers, Tuure Tuunanen, Marcus A. Rothenberger, and Samir Chatterjee. A Design Science Research Methodology for Information Systems Research. Journal of Management Information Systems, 24(3):45–77, December 2007. ISSN 0742-1222, 1557-928X. doi:10.2753/MIS0742-1222240302.

[47] Mohaimenul Azam Khan Raiaan, Md. Saddam Hossain Mukta, Kaniz Fatema, Nur Mohammad Fahad, Sadman Sakib, Most Marufatul Jannat Mim, Jubaer Ahmad, Mohammed Eunus Ali, and Sami Azam. A Review on Large Language Models: Architectures, Applications, Taxonomies, Open Issues and Challenges. IEEE Access, 12:26839–26874, 2024. doi:10.1109/ACCESS.2024.3365742.

[48] Jack Daniel Rittelmeyer and Kurt Sandkuhl. Morphological Box for AI Solutions: Evaluation and Refinement with a Taxonomy Development Method. In Knut Hinkelmann, Francisco J. López-Pellicer, and Andrea Polini, editors, Perspectives in Business Informatics Research, Lecture Notes in Business Information Processing, pages 145–157, Cham, 2023. Springer Nature Switzerland. doi:10.1007/978-3-031-43126-5_11.

[49] Shubhra Kanti Karmaker Santu and Dongji Feng. TELeR: A General Taxonomy of LLM Prompts for Benchmarking Complex Tasks. (arXiv:2305.11430), October 2023. doi:10.48550/arXiv.2305.11430.

[50] Thorsten Schoormann, Frederik Möller, and Daniel Szopinski. Exploring Purposes of Using Taxonomies. In Proceedings of the International Conference on Wirtschaftsinformatik (WI), Nuernberg, Germany, February 2022.

[51] Ishika Singh, Valts Blukis, Arsalan Mousavian, Ankit Goyal, Danfei Xu, Jonathan Tremblay, Dieter Fox, Jesse Thomason, and Animesh Garg. ProgPrompt: Generating Situated Robot Task Plans using Large Language Models. In 2023 IEEE International Conference on Robotics and Automation (ICRA), pages 11523–11530, London, United Kingdom, May 2023. IEEE. doi:10.1109/ICRA48891.2023.10161317.

[52] Gero Strobel, Leonardo Banh, Frederik Möller, and Thorsten Schoormann. Exploring Generative Artificial Intelligence: A Taxonomy and Types. In Proceedings of the 57th Hawaii International Conference on System Sciences, Honolulu, Hawaii, January 2024. https://hdl.handle.net/10125/106930.

[53] Hendrik Strobelt, Albert Webson, Victor Sanh, Benjamin Hoover, Johanna Beyer, Hanspeter Pfister, and Alexander M. Rush. Interactive and Visual Prompt Engineering for Ad-hoc Task Adaptation With Large Language Models. IEEE Transactions on Visualization and Computer Graphics, pages 1–11, 2022. doi:10.1109/TVCG.2022.3209479.

[54] Daniel Szopinski, Thorsten Schoormann, and Dennis Kundisch. Criteria as a Prelude for Guiding Taxonomy Evaluation. In Proceedings of the 53rd Hawaii International Conference on System Sciences, 2020. https://hdl.handle.net/10125/64364.

[55] Daniel Szopinski, Thorsten Schoormann, and Dennis Kundisch. Visualize different: Towards researching the fit between taxonomy visualizations and taxonomy tasks. In Tagungsband der 15. Internationalen Tagung Wirtschaftsinformatik (WI 2020), Potsdam, 2020. doi:10.30844/wi_2020_k9-szopinski.

[56] Manisha Thakkar and Nitin Pise. Unified Approach for Scalable Task-Oriented Dialogue System. International Journal of Advanced Computer Science and Applications, 15(4), 2024. doi:10.14569/IJACSA.2024.01504108.
[57] Oguzhan Topsakal and Tahir Cetin Akinci. Creating Large Language Model Applications Utilizing Langchain: A Primer on Developing LLM Apps Fast. In International Conference on Applied Engineering and Natural Sciences, volume 1, pages 1050–1056, 2023.

[58] Michael Unterkalmsteiner and Waleed Adbeen. A compendium and evaluation of taxonomy quality attributes. Expert Systems, 40(1):e13098, 2023. doi:10.1111/exsy.13098.

[59] Bryan Wang, Gang Li, and Yang Li. Enabling Conversational Interaction with Mobile UI using Large Language Models. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, CHI '23, pages 1–17, New York, NY, USA, April 2023. Association for Computing Machinery. doi:10.1145/3544548.3580895.

[60] Can Wang, Bolin Zhang, Dianbo Sui, Zhiying Tu, Xiaoyu Liu, and Jiabao Kang. A Survey on Effective Invocation Methods of Massive LLM Services. (arXiv:2402.03408), February 2024. doi:10.48550/arXiv.2402.03408.

[61] Jun Wang, Guocheng He, and Yiannis Kantaros. Safe Task Planning for Language-Instructed Multi-Robot Systems using Conformal Prediction. (arXiv:2402.15368), February 2024. doi:10.48550/arXiv.2402.15368.

[62] Lei Wang, Chen Ma, Xueyang Feng, Zeyu Zhang, Hao Yang, Jingsen Zhang, Zhiyuan Chen, Jiakai Tang, Xu Chen, Yankai Lin, Wayne Xin Zhao, Zhewei Wei, and Jirong Wen. A survey on large language model based autonomous agents. Frontiers of Computer Science, 18(6):186345, March 2024. doi:10.1007/s11704-024-40231-1.

[63] Shu Wang, Muzhi Han, Ziyuan Jiao, Zeyu Zhang, Ying Nian Wu, Song-Chun Zhu, and Hangxin Liu. LLM3: Large Language Model-based Task and Motion Planning with Motion Failure Reasoning. (arXiv:2403.11552), March 2024. doi:10.48550/arXiv.2403.11552.

[64] Hao Wen, Yuanchun Li, Guohong Liu, Shanhui Zhao, Tao Yu, Toby Jia-Jun Li, Shiqi Jiang, Yunhao Liu, Yaqin Zhang, and Yunxin Liu. Empowering LLM to use Smartphone for Intelligent Task Automation. (arXiv:2308.15272), September 2023. doi:10.48550/arXiv.2308.15272.

[65] Hao Wen, Yuanchun Li, and Sean KiteFlyKid. MobileLLM/AutoDroid. Mobile LLM, January 2024. https://github.com/MobileLLM/AutoDroid.

[66] Jules White, Quchen Fu, Sam Hays, Michael Sandborn, Carlos Olea, Henry Gilbert, Ashraf Elnashar, Jesse Spencer-Smith, and Douglas C. Schmidt. A Prompt Pattern Catalog to Enhance Prompt Engineering with ChatGPT. (arXiv:2302.11382), February 2023. doi:10.48550/arXiv.2302.11382.

[67] Tongshuang Wu, Michael Terry, and Carrie Jun Cai. AI Chains: Transparent and Controllable Human-AI Interaction by Chaining Large Language Model Prompts. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, CHI '22, pages 1–22, New York, NY, USA, April 2022. Association for Computing Machinery. doi:10.1145/3491102.3517582.

[68] Congying Xia, Chen Xing, Jiangshu Du, Xinyi Yang, Yihao Feng, Ran Xu, Wenpeng Yin, and Caiming Xiong. FOFO: A Benchmark to Evaluate LLMs' Format-Following Capability. (arXiv:2402.18667), February 2024. doi:10.48550/arXiv.2402.18667.

[69] Yuchen Xia, Manthan Shenoy, Nasser Jazdi, and Michael Weyrich. Towards autonomous system: Flexible modular production system enhanced with large language model agents. In 2023 IEEE 28th International Conference on Emerging Technologies and Factory Automation (ETFA), pages 1–8, 2023. doi:10.1109/ETFA54631.2023.10275362.
[70] I. de Zarzà, J. de Curtò, Gemma Roig, and Carlos T. Calafate. LLM Adaptive PID Control for B5G Truck Platooning Systems. Sensors, 23(13):5899, January 2023. doi:10.3390/s23135899.

[71] Xiaoying Zhang, Baolin Peng, Kun Li, Jingyan Zhou, and Helen Meng. SGP-TOD: Building Task Bots Effortlessly via Schema-Guided LLM Prompting. (arXiv:2305.09067), May 2023. doi:10.48550/arXiv.2305.09067.

[72] Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang, Xiaolei Wang, Yupeng Hou, Yingqian Min, Beichen Zhang, Junjie Zhang, Zican Dong, Yifan Du, Chen Yang, Yushuo Chen, Zhipeng Chen, Jinhao Jiang, Ruiyang Ren, Yifan Li, Xinyu Tang, Zikang Liu, Peiyu Liu, Jian-Yun Nie, and Ji-Rong Wen. A Survey of Large Language Models. (arXiv:2303.18223), May 2023. doi:10.48550/arXiv.2303.18223.

[73] Jeffrey Zhou, Tianjian Lu, Swaroop Mishra, Siddhartha Brahma, Sujoy Basu, Yi Luan, Denny Zhou, and Le Hou. Instruction-Following Evaluation for Large Language Models. (arXiv:2311.07911), November 2023. doi:10.48550/arXiv.2311.07911.
ai_researcher
3
REL_Working_out_is_all_you_need.pdf
arXiv:1701.00206v1 [math.LO] 1 Jan 2017

Relative e-spectra and relative closures for families of theories*

Sergey V. Sudoplatov†

Abstract

We define the notions of relative e-spectra, with respect to E-operators, relative closures, and relative generating sets. We study properties connected with relative e-spectra and relative generating sets.

Key words: E-operator, combination of theories, relative e-spectrum, disjoint families of theories, relative closure, relative generating set.

We continue to study structural properties of combinations of structures and their theories [1, 2, 3], generalizing the notions of e-spectra, closures and generating sets to relative ones. Properties of relative e-spectra and relative generating sets are investigated.

*Mathematics Subject Classification: 03C30, 03C50, 54A05. The research is partially supported by the Grants Council (under RF President) for State Aid of Leading Scientific Schools (grant NSh-6848.2016.1) and by Committee of Science in Education and Science Ministry of the Republic of Kazakhstan (Grant No. 0830/GF4).
†[email protected]

1 Preliminaries

Throughout the paper we use the following terminology from [1, 2].

Let $P = (P_i)_{i \in I}$ be a family of nonempty unary predicates, and $(\mathcal{A}_i)_{i \in I}$ a family of structures such that $P_i$ is the universe of $\mathcal{A}_i$, $i \in I$, and the symbols $P_i$ are disjoint with the languages of the structures $\mathcal{A}_j$, $j \in I$. The structure $\mathcal{A}_P \rightleftharpoons \bigcup_{i \in I} \mathcal{A}_i$ expanded by the predicates $P_i$ is the $P$-union of the structures $\mathcal{A}_i$, and the operator mapping $(\mathcal{A}_i)_{i \in I}$ to $\mathcal{A}_P$ is the $P$-operator. The structure $\mathcal{A}_P$ is called the $P$-combination of the structures $\mathcal{A}_i$ and denoted by $\mathrm{Comb}_P(\mathcal{A}_i)_{i \in I}$ if $\mathcal{A}_i = (\mathcal{A}_P \upharpoonright A_i) \upharpoonright \Sigma(\mathcal{A}_i)$, $i \in I$. Structures $\mathcal{A}'$, which are elementary equivalent to $\mathrm{Comb}_P(\mathcal{A}_i)_{i \in I}$, will also be considered as $P$-combinations.

Clearly, all structures $\mathcal{A}' \equiv \mathrm{Comb}_P(\mathcal{A}_i)_{i \in I}$ are represented as unions of their restrictions $\mathcal{A}'_i = (\mathcal{A}' \upharpoonright P_i) \upharpoonright \Sigma(\mathcal{A}_i)$ if and only if the set $p_\infty(x) = \{\neg P_i(x) \mid i \in I\}$ is inconsistent. If $\mathcal{A}' \neq \mathrm{Comb}_P(\mathcal{A}'_i)_{i \in I}$, we write $\mathcal{A}' = \mathrm{Comb}_P(\mathcal{A}'_i)_{i \in I \cup \{\infty\}}$, where $\mathcal{A}'_\infty = \mathcal{A}' \upharpoonright \bigcap_{i \in I} \overline{P_i}$, maybe applying Morleyzation. Moreover, we write $\mathrm{Comb}_P(\mathcal{A}_i)_{i \in I \cup \{\infty\}}$ for $\mathrm{Comb}_P(\mathcal{A}_i)_{i \in I}$ with the empty structure $\mathcal{A}_\infty$.

Note that if all predicates $P_i$ are disjoint, a structure $\mathcal{A}_P$ is a $P$-combination and a disjoint union of the structures $\mathcal{A}_i$. In this case the $P$-combination $\mathcal{A}_P$ is called disjoint. Clearly, for any disjoint $P$-combination $\mathcal{A}_P$, $\mathrm{Th}(\mathcal{A}_P) = \mathrm{Th}(\mathcal{A}'_P)$, where $\mathcal{A}'_P$ is obtained from $\mathcal{A}_P$ by replacing the $\mathcal{A}_i$ with pairwise disjoint $\mathcal{A}'_i \equiv \mathcal{A}_i$, $i \in I$. Thus, in this case, similarly to structures, the $P$-operator works for the theories $T_i = \mathrm{Th}(\mathcal{A}_i)$, producing the theory $T_P = \mathrm{Th}(\mathcal{A}_P)$, the $P$-combination of the $T_i$, which is denoted by $\mathrm{Comb}_P(T_i)_{i \in I}$.

For an equivalence relation $E$ replacing the disjoint predicates $P_i$ by $E$-classes we get the structure $\mathcal{A}_E$, the $E$-union of the structures $\mathcal{A}_i$. In this case the operator mapping $(\mathcal{A}_i)_{i \in I}$ to $\mathcal{A}_E$ is the $E$-operator. The structure $\mathcal{A}_E$ is also called the $E$-combination of the structures $\mathcal{A}_i$ and denoted by $\mathrm{Comb}_E(\mathcal{A}_i)_{i \in I}$; here $\mathcal{A}_i = (\mathcal{A}_E \upharpoonright A_i) \upharpoonright \Sigma(\mathcal{A}_i)$, $i \in I$. Similarly, structures $\mathcal{A}'$ which are elementary equivalent to $\mathcal{A}_E$ are denoted by $\mathrm{Comb}_E(\mathcal{A}'_j)_{j \in J}$, where the $\mathcal{A}'_j$ are restrictions of $\mathcal{A}'$ to its $E$-classes. The $E$-operator works for the theories $T_i = \mathrm{Th}(\mathcal{A}_i)$, producing the theory $T_E = \mathrm{Th}(\mathcal{A}_E)$, the $E$-combination of the $T_i$, which is denoted by $\mathrm{Comb}_E(T_i)_{i \in I}$ or by $\mathrm{Comb}_E(\mathcal{T})$, where $\mathcal{T} = \{T_i \mid i \in I\}$.

Clearly, $\mathcal{A}' \equiv \mathcal{A}_P$ realizing $p_\infty(x)$ is not elementary embeddable into $\mathcal{A}_P$ and cannot be represented as a disjoint $P$-combination of $\mathcal{A}'_i \equiv \mathcal{A}_i$, $i \in I$. At the same time, there are $E$-combinations such that all $\mathcal{A}' \equiv \mathcal{A}_E$ can be represented as $E$-combinations of some $\mathcal{A}'_j \equiv \mathcal{A}_i$. We call this representability of $\mathcal{A}'$ the $E$-representability.

If there is $\mathcal{A}' \equiv \mathcal{A}_E$ which is not $E$-representable, we have $E'$-representability, replacing $E$ by $E'$ such that $E'$ is obtained from $E$ by adding equivalence classes with models for all theories $T$, where $T$ is a theory of a restriction $\mathcal{B}$ of a structure $\mathcal{A}' \equiv \mathcal{A}_E$ to some $E$-class and $\mathcal{B}$ is not elementary equivalent to the structures $\mathcal{A}_i$. The resulting structure $\mathcal{A}_{E'}$ (with the $E'$-representability) is an e-completion, or e-saturation, of $\mathcal{A}_E$. The structure $\mathcal{A}_{E'}$ itself is called e-complete, or e-saturated, or e-universal, or e-largest.

For a structure $\mathcal{A}_E$ the number of new structures with respect to the structures $\mathcal{A}_i$, i.e., of the structures $\mathcal{B}$ which are pairwise elementary non-equivalent and elementary non-equivalent to the structures $\mathcal{A}_i$, is called the e-spectrum of $\mathcal{A}_E$ and denoted by $e\text{-}\mathrm{Sp}(\mathcal{A}_E)$. The value $\sup\{e\text{-}\mathrm{Sp}(\mathcal{A}') \mid \mathcal{A}' \equiv \mathcal{A}_E\}$ is called the e-spectrum of the theory $\mathrm{Th}(\mathcal{A}_E)$ and denoted by $e\text{-}\mathrm{Sp}(\mathrm{Th}(\mathcal{A}_E))$.

If $\mathcal{A}_E$ does not have $E$-classes $\mathcal{A}_i$ which can be removed, with all $E$-classes $\mathcal{A}_j \equiv \mathcal{A}_i$, preserving the theory $\mathrm{Th}(\mathcal{A}_E)$, then $\mathcal{A}_E$ is called e-prime, or e-minimal.

For a structure $\mathcal{A}' \equiv \mathcal{A}_E$ we denote by $\mathrm{TH}(\mathcal{A}')$ the set of all theories $\mathrm{Th}(\mathcal{A}_i)$ of $E$-classes $\mathcal{A}_i$ in $\mathcal{A}'$. By definition, an e-minimal structure $\mathcal{A}'$ consists of $E$-classes with a minimal set $\mathrm{TH}(\mathcal{A}')$. If $\mathrm{TH}(\mathcal{A}')$ is the least for models of $\mathrm{Th}(\mathcal{A}')$ then $\mathcal{A}'$ is called e-least.

Definition [2]. Let $\overline{\mathcal{T}}$ be the class of all complete elementary theories of relational languages. For a set $\mathcal{T} \subset \overline{\mathcal{T}}$ we denote by $\mathrm{Cl}_E(\mathcal{T})$ the set of all theories $\mathrm{Th}(\mathcal{A})$, where $\mathcal{A}$ is a structure of some $E$-class in $\mathcal{A}' \equiv \mathcal{A}_E$, $\mathcal{A}_E = \mathrm{Comb}_E(\mathcal{A}_i)_{i \in I}$, $\mathrm{Th}(\mathcal{A}_i) \in \mathcal{T}$. As usual, if $\mathcal{T} = \mathrm{Cl}_E(\mathcal{T})$ then $\mathcal{T}$ is said to be E-closed.

The operator $\mathrm{Cl}_E$ of $E$-closure can be naturally extended to classes $\mathcal{T} \subset \overline{\mathcal{T}}$ as follows: $\mathrm{Cl}_E(\mathcal{T})$ is the union of all $\mathrm{Cl}_E(\mathcal{T}_0)$ for subsets $\mathcal{T}_0 \subseteq \mathcal{T}$.

For a set $\mathcal{T} \subset \overline{\mathcal{T}}$ of theories in a language $\Sigma$ and for a sentence $\varphi$ with $\Sigma(\varphi) \subseteq \Sigma$ we denote by $\mathcal{T}_\varphi$ the set $\{T \in \mathcal{T} \mid \varphi \in T\}$.

Proposition 1.1 [2]. If $\mathcal{T} \subset \overline{\mathcal{T}}$ is an infinite set and $T \in \overline{\mathcal{T}} \setminus \mathcal{T}$ then $T \in \mathrm{Cl}_E(\mathcal{T})$ (i.e., $T$ is an accumulation point for $\mathcal{T}$ with respect to the $E$-closure $\mathrm{Cl}_E$) if and only if for any formula $\varphi \in T$ the set $\mathcal{T}_\varphi$ is infinite.

Theorem 1.2 [2]. For any sets $\mathcal{T}_0, \mathcal{T}_1 \subset \overline{\mathcal{T}}$, $\mathrm{Cl}_E(\mathcal{T}_0 \cup \mathcal{T}_1) = \mathrm{Cl}_E(\mathcal{T}_0) \cup \mathrm{Cl}_E(\mathcal{T}_1)$.

Definition [2]. Let $\mathcal{T}_0$ be a closed set in a topological space $(\overline{\mathcal{T}}, \mathcal{O}_E(\overline{\mathcal{T}}))$, where $\mathcal{O}_E(\overline{\mathcal{T}}) = \{\overline{\mathcal{T}} \setminus \mathrm{Cl}_E(\mathcal{T}') \mid \mathcal{T}' \subseteq \overline{\mathcal{T}}\}$. A subset $\mathcal{T}'_0 \subseteq \mathcal{T}_0$ is said to be generating if $\mathcal{T}_0 = \mathrm{Cl}_E(\mathcal{T}'_0)$. The generating set $\mathcal{T}'_0$ (for $\mathcal{T}_0$) is minimal if $\mathcal{T}'_0$ does not contain proper generating subsets. A minimal generating set $\mathcal{T}'_0$ is least if $\mathcal{T}'_0$ is contained in each generating set for $\mathcal{T}_0$.

Theorem 1.3 [2]. If $\mathcal{T}'_0$ is a generating set for an $E$-closed set $\mathcal{T}_0$ then the following conditions are equivalent:
(1) $\mathcal{T}'_0$ is the least generating set for $\mathcal{T}_0$;
(2) $\mathcal{T}'_0$ is a minimal generating set for $\mathcal{T}_0$;
(3) any theory in $\mathcal{T}'_0$ is isolated by some set $(\mathcal{T}'_0)_\varphi$, i.e., for any $T \in \mathcal{T}'_0$ there is $\varphi \in T$ such that $(\mathcal{T}'_0)_\varphi = \{T\}$;
(4) any theory in $\mathcal{T}'_0$ is isolated by some set $(\mathcal{T}_0)_\varphi$, i.e., for any $T \in \mathcal{T}'_0$ there is $\varphi \in T$ such that $(\mathcal{T}_0)_\varphi = \{T\}$.

2 Relative e-spectra and their properties

Definition. For a structure $\mathcal{A}_E$ and a class $\mathbf{K}$ of structures, the number of new structures with respect to the structures $\mathcal{A}_i$ and to the class $\mathbf{K}$, i.e., of the structures $\mathcal{B}$ forming $E$-classes of models of $\mathrm{Th}(\mathcal{A}_E)$ such that the $\mathcal{B}$ are pairwise elementary non-equivalent and elementary non-equivalent to the structures $\mathcal{A}_i$ in $\mathcal{A}_E$ as well as to the structures in $\mathbf{K}$, is called the relative e-spectrum of $\mathcal{A}_E$ with respect to $\mathbf{K}$ and denoted by $e_{\mathbf{K}}\text{-}\mathrm{Sp}(\mathcal{A}_E)$. The value $\sup\{e_{\mathbf{K}}\text{-}\mathrm{Sp}(\mathcal{A}') \mid \mathcal{A}' \equiv \mathcal{A}_E\}$ is called the relative e-spectrum of the theory $\mathrm{Th}(\mathcal{A}_E)$ with respect to $\mathbf{K}$ and denoted by $e_{\mathbf{K}}\text{-}\mathrm{Sp}(\mathrm{Th}(\mathcal{A}_E))$.

Similarly, for a class $\mathcal{T}$ of theories and for a theory $T = \mathrm{Th}(\mathcal{A}_E)$ we denote by $e_{\mathcal{T}}\text{-}\mathrm{Sp}(T)$ the value $e_{\mathbf{K}}\text{-}\mathrm{Sp}(T)$, where $\mathbf{K} = \mathbf{K}(\mathcal{T})$ is the class of all structures, each of which is a model of a theory in $\mathcal{T}$. The value $e_{\mathcal{T}}\text{-}\mathrm{Sp}(T)$ is called the relative e-spectrum of the theory $T$ with respect to $\mathcal{T}$.

Remark 2.1. 1. The class $\mathbf{K}(\mathcal{T})$ in the definition above can be replaced by any subclass $\mathbf{K}' \subseteq \mathbf{K}(\mathcal{T})$ such that any structure in $\mathbf{K}(\mathcal{T})$ is elementary equivalent to a structure in $\mathbf{K}'$.
2. If $\mathbf{K}_1 \subseteq \mathbf{K}_2$ then $e_{\mathbf{K}_1}\text{-}\mathrm{Sp}(T) \geq e_{\mathbf{K}_2}\text{-}\mathrm{Sp}(T)$, and if $\mathcal{T}_1 \subseteq \mathcal{T}_2$ then $e_{\mathcal{T}_1}\text{-}\mathrm{Sp}(T) \geq e_{\mathcal{T}_2}\text{-}\mathrm{Sp}(T)$.
3. The value $e_{\mathcal{T}}\text{-}\mathrm{Sp}(T)$ is equal to the supremum of $|\mathcal{T}_1 \setminus \mathcal{T}_0|$ for theories of $E$-classes of models of $T$ such that $\mathcal{T}_1$ consists of all these theories and $\mathcal{T}_0 \subseteq \mathcal{T}_1$ with $\mathrm{Cl}_E(\mathcal{T}_0) = \mathcal{T}_1$.

Definition. Two theories $T_1$ and $T_2$ of a language $\Sigma$ are disjoint modulo $\Sigma_0$, where $\Sigma_0 \subseteq \Sigma$, or $\Sigma_0$-disjoint, if $T_1$ and $T_2$ do not have common nonempty predicates for $\Sigma \setminus \Sigma_0$. If $T_1$ and $T_2$ are $\emptyset$-disjoint, these theories are called simply disjoint.

Families $\mathcal{T}_j$, $j \in J$, of theories in the language $\Sigma$ are disjoint modulo $\Sigma_0$, or $\Sigma_0$-disjoint, if $T_{j_1}$ and $T_{j_2}$ are $\Sigma_0$-disjoint for any $T_{j_1} \in \mathcal{T}_{j_1}$, $T_{j_2} \in \mathcal{T}_{j_2}$, $j_1 \neq j_2$. If $T_{j_1}$ and $T_{j_2}$ are disjoint for any $T_{j_1} \in \mathcal{T}_{j_1}$, $T_{j_2} \in \mathcal{T}_{j_2}$, $j_1 \neq j_2$, then the families $\mathcal{T}_j$, $j \in J$, are disjoint too.

The following properties are obvious.
1. Any families of theories in a language $\Sigma$ are $\Sigma$-disjoint.
2. (Monotony) If $\Sigma_0 \subseteq \Sigma_1 \subseteq \Sigma$ then families disjoint modulo $\Sigma_0$, in the language $\Sigma$, are disjoint modulo $\Sigma_1$.
3. (Monotony) If families $\mathcal{T}_{j_1}$ and $\mathcal{T}_{j_2}$ are $\Sigma_0$-disjoint then any subfamilies $\mathcal{T}'_{j_1} \subseteq \mathcal{T}_{j_1}$ and $\mathcal{T}'_{j_2} \subseteq \mathcal{T}_{j_2}$ are $\Sigma_0$-disjoint too.

Below we denote by $\mathbf{K}_\Sigma$ the class of all structures in languages containing $\Sigma$ such that all predicates outside $\Sigma$ are empty. Similarly we denote by $\mathcal{T}_\Sigma$ the class of all theories of structures in $\mathbf{K}_\Sigma$.

Theorem 2.2 (Relative additivity for e-spectra). If $\mathcal{T}_j$, $j \in J$, are $\Sigma_0$-disjoint families then for the $E$-combination $T = \mathrm{Comb}_E(T_i)_{i \in I}$ of $\{T_i \mid i \in I\} = \bigcup_{j \in J} \mathcal{T}_j$ and for the $E$-combinations $T_j = \mathrm{Comb}_E(\mathcal{T}_j)$, $j \in J$,

$$e_{\mathcal{T}_{\Sigma_0}}\text{-}\mathrm{Sp}(T) = \sum_{j \in J} e_{\mathcal{T}_{\Sigma_0}}\text{-}\mathrm{Sp}(T_j). \qquad (1)$$

Proof. Denote by $\mathcal{T}$ the set of theories of $E$-classes of models of $T$. Since the families $\mathcal{T}_j$ are $\Sigma_0$-disjoint, applying Proposition 1.1 we have that a theory $T^*$ belongs to $\mathrm{Cl}_E(\mathcal{T}^*)$, where $\mathcal{T}^* \subseteq \mathcal{T}$, if and only if one of the following conditions holds:
1) $T^* \in \mathcal{T}^*$;
2) for any formula $\varphi \in T^*$ without predicate symbols in $\Sigma \setminus \Sigma_0$, or with predicate symbols in $\Sigma \setminus \Sigma_0$ and saying that the corresponding predicates are empty, there are infinitely many theories $T \in \mathcal{T}^*$ containing $\varphi$;
3) for any formula $\varphi \in T^*$ saying that some predicates in $\Sigma \setminus \Sigma_0$ which are used in $\varphi$ are nonempty, there are infinitely many theories $T \in \mathcal{T}^* \cap \mathcal{T}_j$, for some $j$, containing $\varphi$; moreover, the theories $T$ belong to the unique $\mathcal{T}_j$.

Indeed, taking a formula $\varphi$ in the language $\Sigma$ we have finitely many symbols $R_1, \ldots, R_n$ in $\Sigma \setminus \Sigma_0$ used in $\varphi$. Considering formulas $\psi_k$ saying that the $R_k$ are nonempty, $k = 1, \ldots, n$, we get finitely many possibilities for $\chi_{\delta_1,\ldots,\delta_n} \rightleftharpoons \varphi \wedge \bigwedge_{k=1}^{n} \psi_k^{\delta_k}$, $\delta_k \in \{0,1\}$. Since $\varphi$ is equivalent to $\bigvee_{\delta_1,\ldots,\delta_n} \chi_{\delta_1,\ldots,\delta_n}$ and only subdisjunctions with positive $\psi_k$ related to the fixed $\mathcal{T}_j$ hold, we can divide the disjunction into disjoint parts related to the $\mathcal{T}_j$. Since for $\varphi$ there are finitely many related $\mathcal{T}_j$, we have finitely many cases for $\varphi$, each of which is related to a fixed $\mathcal{T}_j$. These cases are described in Item 3. Item 2 deals with formulas in the language $\Sigma_0$ and with formulas for the empty part in $\Sigma \setminus \Sigma_0$. In particular, by Proposition 1.1 these formulas define $\mathrm{Cl}_E(\mathcal{T}^*) \cap \mathcal{T}_{\Sigma_0}$.

Using Items 1–3 we have for $\mathcal{T}^*$ that a theory $T^*$ belongs to $\mathcal{T}^* \setminus \mathcal{T}_{\Sigma_0}$ if and only if $T^*$ belongs to $(\mathcal{T}^* \cap \mathcal{T}_j) \setminus \mathcal{T}_{\Sigma_0}$ for a unique $j \in J$. Thus the theories witnessing the value $e_{\mathcal{T}_{\Sigma_0}}\text{-}\mathrm{Sp}(T)$ are divided into disjoint parts witnessing the values $e_{\mathcal{T}_{\Sigma_0}}\text{-}\mathrm{Sp}(T_j)$. Thus the equality (1) holds. ✷

Remark 2.3. Having positive ComLim [1], the equality (1) can fail if the families $\mathcal{T}_j$ are not $\Sigma_0$-disjoint, even for finite sets $J$ of indexes, producing

$$e_{\mathcal{T}_{\Sigma_0}}\text{-}\mathrm{Sp}(T') < \sum_{j \in J} e_{\mathcal{T}_{\Sigma_0}}\text{-}\mathrm{Sp}(T_j) \qquad (2)$$

for appropriate $T'$.

Theorem 2.2 immediately implies

Corollary 2.4. If $\mathcal{T}_j$, $j \in J$, are disjoint then for the $E$-combination $T = \mathrm{Comb}_E(T_i)_{i \in I}$ of $\{T_i \mid i \in I\} = \bigcup_{j \in J} \mathcal{T}_j$ and for the $E$-combinations $T_j = \mathrm{Comb}_E(\mathcal{T}_j)$, $j \in J$,

$$e_{\mathcal{T}_\emptyset}\text{-}\mathrm{Sp}(T) = \sum_{j \in J} e_{\mathcal{T}_\emptyset}\text{-}\mathrm{Sp}(T_j). \qquad (3)$$

Definition. The theory $T$ in Theorem 2.2 is called the $\Sigma_0$-disjoint $E$-union of the theories $T_j$, $j \in J$, and the theory $T$ in Corollary 2.4 is the disjoint $E$-union of the theories $T_j$, $j \in J$.

Remark 2.5. Additivity (1) and, in particular, (3) can fail without the indexes $\mathcal{T}_{\Sigma_0}$. Indeed, it is possible to find $\mathcal{T}_j$ with $e\text{-}\mathrm{Sp}(\mathcal{T}_j) = 0$ (for instance, with finite $\mathcal{T}_j$) while $e\text{-}\mathrm{Sp}(T)$ can be positive. Take, for example, disjoint singletons $\mathcal{T}_n = \{T_n\}$, $n \in \omega \setminus \{0\}$, such that $T_n$ has $n$-element models. We have $e\text{-}\mathrm{Sp}(\mathcal{T}_n) = 0$ for each $n$ while $e\text{-}\mathrm{Sp}(T) = 1$, since the theory $T_\infty \in \mathcal{T}_\emptyset$ with infinite models belongs to $\mathrm{Cl}_E(\{T_n \mid n \in \omega \setminus \{0\}\})$. Thus, for disjoint families $\mathcal{T}_j$, $j \in J$, the equality

$$e\text{-}\mathrm{Sp}(T) = \sum_{j \in J} e\text{-}\mathrm{Sp}(T_j) \qquad (4)$$

can fail. Moreover, producing the effect above for definable subsets in models of the $T_j$, we get $e_{\mathcal{T}_{\Sigma_0}}\text{-}\mathrm{Sp}(T) > \sum_{j \in J} e_{\mathcal{T}_{\Sigma_0}}\text{-}\mathrm{Sp}(T_j)$.

At the same time, by Corollary 2.4 (respectively, by Theorem 2.2) the equality (4) holds for ($\Sigma_0$-)disjoint families $\mathcal{T}_j$, $j \in J$, if $J$ is finite and each $\mathcal{T}_j$ does not generate theories in $\mathcal{T}_\emptyset$ (in $\mathcal{T}_{\Sigma_0}$).

Applying the equality (3) we take an $E$-combination $T_0$ with $e_{\mathcal{T}_\emptyset}\text{-}\mathrm{Sp}(T_0) = \lambda$. Furthermore we consider disjoint copies $T_j$, $j \in J$, of $T_0$. Combining the $E$-classes of all $T_j$ we obtain a theory $T$ such that if $J$ is finite then $e_{\mathcal{T}_\emptyset}\text{-}\mathrm{Sp}(T) = |J| \cdot \lambda$. We have the same formula if $|J| \geq \omega$ and $\lambda > 0$ since, in this case, the $E$-closure for theories of $E$-classes of models of $T$ consists of theories of $E$-classes for the theories $T_j$ as well as some theories in $\mathcal{T}_\emptyset$. If the $E$-classes have a fixed finite or only infinite cardinalities, this theory has models whose cardinalities (finite or countable) are equal to the (either finite or countable) cardinality of models of $T_j$.

Similarly, having theories $T_\lambda$ of languages $\Sigma$ with cardinalities $|\Sigma| = \lambda + 1$ and with $e\text{-}\mathrm{Sp}(T_0) = \lambda > 0$ [1, Proposition 4.3], and taking $E$-combinations with their disjoint copies, we get

Proposition 2.6. For any positive cardinality $\lambda$ there is a theory $T$ such that the $E$-classes of models of $T$ form copies $T_j$, $j \in J$, of some $E$-combination $T_0$ with a language $\Sigma$ of cardinality $\lambda + 1$, with $e_{\mathcal{T}_\emptyset}\text{-}\mathrm{Sp}(T_0) = \lambda$, and $e_{\mathcal{T}_\emptyset}\text{-}\mathrm{Sp}(T) = |J| \cdot \lambda$.

Remark 2.7. Since there are required theories $T_0$ which do not generate $E$-classes for $\mathcal{T}_\emptyset$, Proposition 2.6 can be reformulated without the index $\mathcal{T}_\emptyset$.

Remark 2.8. Extending the $\Sigma_0$-disjoint $E$-union $T$ by definable bijections linking $E$-classes, we can lose the additivity (1). Indeed, adding, for instance, bijections $f_{jk}$ witnessing isomorphisms for models of disjoint copies $\mathcal{T}_j$ and $\mathcal{T}_k$, we have $e_{\mathcal{T}_\emptyset}\text{-}\mathrm{Sp}(T_j)$ instead of $e_{\mathcal{T}_\emptyset}\text{-}\mathrm{Sp}(T_j) + e_{\mathcal{T}_\emptyset}\text{-}\mathrm{Sp}(T_k)$. Thus, bijections $f_{jk}$ allow to vary $e_{\mathcal{T}_\emptyset}\text{-}\mathrm{Sp}(T)$ from $\lambda$ to $|J| \cdot \lambda$ in terms of Proposition 2.6. Thus the equality (1) can fail again, producing (2) for appropriate $T'$.

3 Families of theories with(out) least generating sets

Below we apply Theorem 1.3, characterizing the existence of e-least generating sets, to $\Sigma_0$-disjoint families of theories.

The following natural questions arise:

Question 1. When is the existence of the least generating sets for the families $\mathcal{T}_j$, $j \in J$, equivalent to the existence of the least generating set for the family $\bigcup_{j \in J} \mathcal{T}_j$?

Question 2. Is it true that under the conditions of Theorem 2.2 the existence of the least generating sets for the families $\mathcal{T}_j$, $j \in J$, is equivalent to the existence of the least generating set for the family $\bigcup_{j \in J} \mathcal{T}_j$?

Considering Question 2, we note below that the property of the (non)existence of least generating sets is not preserved under expansions and extensions of families of theories.

Proposition 3.1. Any $E$-closed family $\mathcal{T}_0$ of theories in a language $\Sigma_0$ can be transformed to an $E$-closed family $\mathcal{T}'_0$ in a language $\Sigma'_0 \supseteq \Sigma_0$ such that $\mathcal{T}'_0$ consists of expansions of theories in $\mathcal{T}_0$ and $\mathcal{T}'_0$ has the least generating set.

Proof. Forming $\Sigma'_0$ it suffices to take new predicate symbols $R_{T_0}$, $T_0 \in \mathcal{T}_0$, such that $R_{T_0} \neq \emptyset$ for interpretations in the models of the expansion $T'_0$ of $T_0$ and $R_{T_0} = \emptyset$ for interpretations in the models of the expansion $T'_1$ of $T_1 \neq T_0$. Each formula $\exists \bar{x}\, R_{T_0}(\bar{x})$ isolates $T'_0$, and thus $\mathcal{T}'_0$ has the least generating set in view of Theorem 1.3. ✷

The existence of families $\mathcal{T}_0$ without least generating sets implies

Corollary 3.2. The property of non-existence of least generating sets is not preserved under expansions of theories.

Remark 3.3. The expansion $\mathcal{T}'_0$ of $\mathcal{T}_0$ in the proof of Proposition 3.1 produces discrete topologies for the sets of theories in $\mathcal{T}_0 \cup \mathcal{T}'_0$. In fact, for this purpose it suffices to isolate finite sets in $\mathcal{T}_0$ since any two distinct elements $T_0, T_1 \in \mathcal{T}_0$ are separated by formulas $\varphi$ such that $\varphi \in T_i$ and $\neg\varphi \in T_{1-i}$, $i = 0, 1$.

Note also that these operators of discretization transform the given set $\mathcal{T}_0$ to a set $\mathcal{T}'_0$ with identical $\mathrm{Cl}_E$. Clearly, if a set $\mathcal{T}_0$ has the discrete topology, it cannot be expanded to a set without the least generating set. At the same time, there are expansions that transform sets with the least generating sets to sets without the least generating sets.

Indeed, take the Example in [3, Remark 3] with countably many disjoint copies $\mathcal{F}_q$, $q \in \mathbb{Q}$, of linearly ordered sets isomorphic to $\langle \omega, \leq \rangle$, and order the limits $J_q = \lim \mathcal{F}_q$ by the ordinary dense order on $\mathbb{Q}$ such that $\{J_q \mid q \in \mathbb{Q}\}$ is densely ordered. We have a dense interval $\{J_q \mid q \in \mathbb{Q}\}$ whereas the set $\bigcup\{\mathcal{F}_q \mid q \in \mathbb{Q}\}$ forms the least generating set $\mathcal{T}_0$ of theories for $\mathrm{Cl}_E(\mathcal{T}_0)$. Now we expand the LU-theories for $\mathcal{F}_q$ and $J_q$ by a new predicate symbol $R$ such that $R$ is empty for all theories corresponding to $\mathcal{F}_q$ and $\forall \bar{x}\, R(\bar{x})$ is satisfied for all theories corresponding to $J_q$. The predicate $R$ separates the set of theories for the $J_q$ with respect to $\mathrm{Cl}_E$. At the same time the theories for the $J_q$ form a dense interval, producing a set without the least generating set in view of [3, Theorem 2]. Thus, we get the following

Proposition 3.4. There is an $E$-closed family $\mathcal{T}_0$ of theories in a language $\Sigma_0$ and with the least generating set, which can be transformed to an $E$-closed family $\mathcal{T}'_0$ in a language $\Sigma'_0 \supseteq \Sigma_0$ such that $\mathcal{T}'_0$ consists of expansions of theories in $\mathcal{T}_0$ and $\mathcal{T}'_0$ does not have the least generating set.

Corollary 3.5. The property of existence of least generating sets is not preserved under expansions of theories.

Remark 3.6. Adding the predicate $R$ which separates the theories for the $J_q$ from the theories for the $\mathcal{F}_q$, we get a copy for each $J_q$ containing empty $R$. This effect is based on the property that, separating an accumulation point $J_q$ for $\mathcal{F}_q$, we get a new accumulation point preserving formulas in the initial language. Introducing the predicate $R$ together with the discretization for the $\mathcal{F}_q$, the $E$-closures do not generate new theories.

Proposition 3.7. Any family $\mathcal{T}_0$ of theories in a language $\Sigma$, with infinitely many empty predicates for all theories in $\mathcal{T}_0$, can be extended to a family $\mathcal{T}'_0$ in the language $\Sigma$ such that $\mathcal{T}'_0$ does not have the least generating set.

Proof. Let $\Sigma_0 \subseteq \Sigma$ consist of predicate symbols which are empty for all theories in $\mathcal{T}_0$. Now we consider a family $\mathcal{T}_1$ of LU-theories such that all these theories have empty predicates for $\Sigma \setminus \Sigma_0$ and, using $\Sigma_0$ as for [3, Theorem 2], $\mathcal{T}_1$ does not have the least generating set, forming a dense interval. The family $\mathcal{T}'_0 = \mathcal{T}_0 \,\dot\cup\, \mathcal{T}_1$ extends $\mathcal{T}_0$ and does not have the least generating set since for any $\mathcal{T}''_0 \subseteq \mathcal{T}'_0$, $\mathrm{Cl}_E(\mathcal{T}''_0) = \mathrm{Cl}_E(\mathcal{T}''_0 \cap \mathcal{T}_0) \,\dot\cup\, \mathrm{Cl}_E(\mathcal{T}''_0 \cap \mathcal{T}_1)$. ✷

Corollary 3.8. The property of existence of least generating sets is not preserved under extensions of sets of theories.

In view of Theorem 1.3 any family consisting of all theories in a given infinite language both does not have the least generating set and does not have a proper extension in the given language. Thus there are families of theories without least generating sets and without extensions having least generating sets. At the same time the following proposition holds.

Proposition 3.9. There is an $E$-closed family $\mathcal{T}_0$ of theories in a language $\Sigma$ and without the least generating set such that $\mathcal{T}_0$ can be extended to an $E$-closed family $\mathcal{T}'_0$ in the language $\Sigma$ and with the least generating set.

Proof. It suffices to take the Example in [3, Remark 3] that we used for the proof of Proposition 3.4. The theories for $\{J_q \mid q \in \mathbb{Q}\}$ form a family without the least generating set, whereas an extension of this family by the theories for the $\mathcal{F}_q$ has the least generating set. ✷

Corollary 3.10. The property of non-existence of least generating sets is not preserved under extensions of sets of theories.

Remark 3.11. If an extension of an $E$-closed family $\mathcal{T}_0$ of theories transforms $\mathcal{T}_0$ with the least generating set to an $E$-closed family $\mathcal{T}'_0$ without the least generating set then, in view of Theorem 1.3, having the generating set in $\mathcal{T}_0$ consisting of isolated points, we lose this property for $\mathcal{T}'_0$. If an extension of an $E$-closed family $\mathcal{T}_0$ of theories transforms $\mathcal{T}_0$ without the least generating set to an $E$-closed family $\mathcal{T}'_0$ with the least generating set then, again in view of Theorem 1.3, we add a set of isolated theories to $\mathcal{T}_0$ generating all theories in $\mathcal{T}'_0$.

Now we return to Questions 1 and 2. Clearly, for any set $\mathcal{T}$ of theories, $\mathrm{Cl}_E(\mathcal{T} \cap \mathcal{T}_{\Sigma_0}) \subset \mathcal{T}_{\Sigma_0}$. Therefore $\mathrm{Cl}_E(\mathcal{T})$ and each of its generating sets are divided into parts: in $\mathcal{T}_{\Sigma_0}$ and disjoint with $\mathcal{T}_{\Sigma_0}$. Since the $\mathcal{T}_j$, $j \in J$, are disjoint with respect to $\mathcal{T}_{\Sigma_0}$, each $\mathcal{T}_j$ has the least generating set if and only if both $\mathcal{T}_j \cap \mathcal{T}_{\Sigma_0}$ and $\mathcal{T}_j \setminus \mathcal{T}_{\Sigma_0}$ have the least generating sets. Since under the conditions of Theorem 2.2 the sets $\mathcal{T}_j \setminus \mathcal{T}_{\Sigma_0}$ are disjoint, $j \in J$, we have the following proposition answering Question 1.

Proposition 3.12. The set $\bigcup_{j \in J} \mathcal{T}_j$ has the least generating set if and only if $\left(\bigcup_{j \in J} \mathcal{T}_j\right) \cap \mathcal{T}_{\Sigma_0}$ has the least generating set and each $\mathcal{T}_j \setminus \mathcal{T}_{\Sigma_0}$ has the least generating set.

Since $\left(\bigcup_{j \in J} \mathcal{T}_j\right) \cap \mathcal{T}_{\Sigma_0}$ can be an arbitrary extension of each $\mathcal{T}_j \cap \mathcal{T}_{\Sigma_0}$, Propositions 3.7 and 3.12 imply the following corollary answering Question 2.

Corollary 3.13. For any infinite language $\Sigma_0$ there are $\Sigma_0$-disjoint families $\mathcal{T}_j$, $j \in J$, with the least generating sets such that $\bigcup_{j \in J} \mathcal{T}_j$ does not have the least generating set.

4 Relative closures and relative least generating sets

Definition. Let $\mathcal{T}$ be a class of theories. For a set $\mathcal{T}_0 \subset \overline{\mathcal{T}}$ we denote by $\mathrm{Cl}_{E,\mathcal{T}}(\mathcal{T}_0)$ the set $\mathrm{Cl}_E(\mathcal{T}_0) \setminus \mathcal{T}$. The set $\mathrm{Cl}_{E,\mathcal{T}}(\mathcal{T}_0)$ is called the relative $E$-closure of the set $\mathcal{T}_0$ with respect to $\mathcal{T}$, or the $\mathcal{T}$-relative $E$-closure. If $\mathcal{T}_0 \setminus \mathcal{T} = \mathrm{Cl}_{E,\mathcal{T}}(\mathcal{T}_0)$ then $\mathcal{T}_0$ is said to be (relatively) $E$-closed with respect to $\mathcal{T}$, or $\mathcal{T}$-relatively $E$-closed.

Let $\mathcal{T}_0$ be a closed set in a topological space $(\overline{\mathcal{T}}, \mathcal{O}_E(\overline{\mathcal{T}}))$, where $\mathcal{O}_E(\overline{\mathcal{T}}) = \{\overline{\mathcal{T}} \setminus \mathrm{Cl}_E(\mathcal{T}') \mid \mathcal{T}' \subseteq \overline{\mathcal{T}}\}$. A subset $\mathcal{T}'_0 \subseteq \mathcal{T}_0$ is said to be generating with respect to $\mathcal{T}$, or $\mathcal{T}$-relatively generating, if $\mathcal{T}_0 \setminus \mathcal{T} = \mathrm{Cl}_{E,\mathcal{T}}(\mathcal{T}'_0)$. The $\mathcal{T}$-relatively generating set $\mathcal{T}'_0$ (for $\mathcal{T}_0$) is $\mathcal{T}$-minimal if $\mathcal{T}'_0 \setminus \mathcal{T}$ does not contain proper subsets $\mathcal{T}''_0$ such that $\mathcal{T}_0 \setminus \mathcal{T} = \mathrm{Cl}_{E,\mathcal{T}}((\mathcal{T}'_0 \cap \mathcal{T}) \cup \mathcal{T}''_0)$. A $\mathcal{T}$-minimal $\mathcal{T}$-relatively generating set $\mathcal{T}'_0$ is $\mathcal{T}$-least if $\mathcal{T}'_0 \setminus \mathcal{T}$ is contained in $\mathcal{T}''_0$ for each $\mathcal{T}$-relatively generating set $\mathcal{T}''_0$ for $\mathcal{T}_0$.

Remark 4.1. Note that, for $\mathcal{T}$-least generating sets $\mathcal{T}'_0$, in general we can say that the $\mathcal{T}'_0$ are uniquely defined only with respect to $\mathcal{T}$. Moreover, since $\mathrm{Cl}_E(\mathcal{T}_0 \cup \mathcal{T}_1) = \mathrm{Cl}_E(\mathcal{T}_0) \cup \mathrm{Cl}_E(\mathcal{T}_1)$ for any sets $\mathcal{T}_0, \mathcal{T}_1 \subset \overline{\mathcal{T}}$ by Theorem 1.2, then for $E$-closed $\mathcal{T}$, $\mathrm{Cl}_E(\mathcal{T}'_0 \cup \mathcal{T}) = \mathrm{Cl}_E(\mathcal{T}'_0) \cup \mathcal{T}$, and $\mathcal{T}'_0$ is a $\mathcal{T}$-least generating set if and only if $\mathcal{T}'_0 \cup \mathcal{T}'$ is a $\mathcal{T}$-least generating set for some (any) $\mathcal{T}' \subseteq \mathcal{T}$, as well as if and only if $\mathcal{T}'_0 \setminus \mathcal{T}$ is a $\mathcal{T}$-least generating set.

The following theorem generalizes Theorem 1.3.

Theorem 4.2. If $\mathcal{T}$ is an $E$-closed set and $\mathcal{T}'_0$ is a $\mathcal{T}$-relatively generating set for an $E$-closed set $\mathcal{T}_0$ then the following conditions are equivalent:
(1) $\mathcal{T}'_0$ is the $\mathcal{T}$-least generating set for $\mathcal{T}_0$;
(2) $\mathcal{T}'_0$ is a $\mathcal{T}$-minimal generating set for $\mathcal{T}_0$;
(3) any theory in $\mathcal{T}'_0 \setminus \mathcal{T}$ is isolated by some set $(\mathcal{T}'_0 \cup \mathcal{T})_\varphi$;
(4) any theory in $\mathcal{T}'_0 \setminus \mathcal{T}$ is isolated by some set $(\mathcal{T}_0 \cup \mathcal{T})_\varphi$;
(5) any theory in $\mathcal{T}'_0 \setminus \mathcal{T}$ is isolated by some set $(\mathcal{T}'_0)_\varphi$;
(6) any theory in $\mathcal{T}'_0 \setminus \mathcal{T}$ is isolated by some set $(\mathcal{T}_0)_\varphi$.

Proof. (1) ⇒ (2) and (4) ⇒ (3) are obvious.

(2) ⇒ (1). Assume that $\mathcal{T}'_0$ is $\mathcal{T}$-minimal but not $\mathcal{T}$-least. Then there is a $\mathcal{T}$-relatively generating set $\mathcal{T}''_0$ such that $\mathcal{T}'_0 \setminus (\mathcal{T}''_0 \cup \mathcal{T}) \neq \emptyset$ and $\mathcal{T}''_0 \setminus (\mathcal{T}'_0 \cup \mathcal{T}) \neq \emptyset$. Take $T \in \mathcal{T}'_0 \setminus (\mathcal{T}''_0 \cup \mathcal{T})$.

We assert that $T \in \mathrm{Cl}_E(\mathcal{T}'_0 \setminus (\{T\} \cup \mathcal{T}))$, i.e., $T$ is an accumulation point of $\mathcal{T}'_0 \setminus (\{T\} \cup \mathcal{T})$. Indeed, since $\mathcal{T}''_0 \setminus (\mathcal{T}'_0 \cup \mathcal{T}) \neq \emptyset$ and $\mathrm{Cl}_E(\mathcal{T}''_0 \cup \mathcal{T}) = (\mathcal{T}_0 \setminus \mathcal{T}) \cup \mathcal{T}$ (using that $\mathcal{T}$ is $E$-closed), then by [2, Proposition 1, (3)] (that every finite set $\mathcal{T} \subset \overline{\mathcal{T}}$ is $E$-closed), $\mathcal{T}'_0 \setminus \mathcal{T}$ is infinite, and by Proposition 1.1 it suffices to prove that for any $\varphi \in T$ the set $(\mathcal{T}'_0 \setminus (\{T\} \cup \mathcal{T}))_\varphi$ is infinite. Assume on the contrary that for some $\varphi \in T$, $(\mathcal{T}'_0 \setminus (\{T\} \cup \mathcal{T}))_\varphi$ is finite. Then $(\mathcal{T}'_0 \setminus \mathcal{T})_\varphi$ is finite and, moreover, as $\mathcal{T}'_0$ is $\mathcal{T}$-relatively generating for $\mathcal{T}_0$, by Proposition 1.1, $(\mathcal{T}_0 \setminus \mathcal{T})_\varphi$ is finite, too. So $(\mathcal{T}''_0 \setminus \mathcal{T})_\varphi$ is finite and, again by Proposition 1.1, $T$ does not belong to $\mathrm{Cl}_E(\mathcal{T}''_0 \cup \mathcal{T})$, contradicting $\mathrm{Cl}_E(\mathcal{T}''_0 \cup \mathcal{T}) = (\mathcal{T}_0 \setminus \mathcal{T}) \cup \mathcal{T}$.

Since $T \in \mathrm{Cl}_E(\mathcal{T}'_0 \setminus (\{T\} \cup \mathcal{T}))$, the set $\mathcal{T}'_0 \setminus \{T\}$ is also generating for $\mathcal{T}_0$, contradicting the $\mathcal{T}$-minimality of $\mathcal{T}'_0$.

(2) ⇒ (3). If $\mathcal{T}'_0 \setminus \mathcal{T}$ is finite then, by Proposition 2.1(3), $\mathcal{T}'_0 \setminus \mathcal{T} = \mathcal{T}_0 \setminus \mathcal{T}$. Since $\mathcal{T}_0 \setminus \mathcal{T}$ is finite and $\mathcal{T}$ is $E$-closed, for any $T \in \mathcal{T}_0 \setminus \mathcal{T}$ there is a formula $\varphi \in T$ negating all theories in $(\mathcal{T}_0 \setminus \{T\}) \cup \mathcal{T}$. Therefore, $(\mathcal{T}'_0 \cup \mathcal{T})_\varphi$ is a singleton containing $T$ and thus $(\mathcal{T}'_0 \cup \mathcal{T})_\varphi = (\mathcal{T}_0 \cup \mathcal{T})_\varphi$ isolates $T$.

Now let $\mathcal{T}'_0 \setminus \mathcal{T}$ be infinite. Assume that some $T \in \mathcal{T}'_0 \setminus \mathcal{T}$ is not isolated by the sets $(\mathcal{T}'_0 \cup \mathcal{T})_\varphi$. It implies that for any $\varphi \in T$, $((\mathcal{T}'_0 \setminus \{T\}) \cup \mathcal{T})_\varphi$ is infinite. Using Proposition 1.1 and the condition that $\mathcal{T}$ is $E$-closed, we obtain $T \in \mathrm{Cl}_{E,\mathcal{T}}(\mathcal{T}'_0 \setminus \{T\})$, contradicting the $\mathcal{T}$-minimality of $\mathcal{T}'_0$.

(3) ⇒ (2). Assume that any theory $T$ in $\mathcal{T}'_0 \setminus \mathcal{T}$ is isolated by some set $(\mathcal{T}'_0 \cup \mathcal{T})_\varphi$. By Proposition 1.1 it implies that $T \notin \mathrm{Cl}_E((\mathcal{T}'_0 \setminus \{T\}) \cup \mathcal{T})$. Thus, $\mathcal{T}'_0$ is a $\mathcal{T}$-minimal generating set for $\mathcal{T}_0$.

(3) ⇒ (4) is obvious for finite $\mathcal{T}'_0 \setminus \mathcal{T}$. If $\mathcal{T}'_0 \setminus \mathcal{T}$ is infinite and any theory $T$ in $\mathcal{T}'_0 \setminus \mathcal{T}$ is isolated by some set $(\mathcal{T}'_0 \cup \mathcal{T})_\varphi$, then $T$ is isolated by the set $(\mathcal{T}_0 \cup \mathcal{T})_\varphi$, since otherwise, using Proposition 1.1 and the properties that $\mathcal{T}$ is $E$-closed and $\mathcal{T}'_0$ generates $\mathcal{T}_0$, there are infinitely many theories in $\mathcal{T}'_0$ containing $\varphi$, which contradicts the equality $|(\mathcal{T}'_0 \cup \mathcal{T})_\varphi| = 1$.

(3) ⇔ (5) and (4) ⇔ (6) hold since $\mathcal{T}$ is $E$-closed. ✷

Corollary 4.3. If $\mathcal{T}_j$, $j \in J$, are $\Sigma_0$-disjoint families then $\bigcup_{j \in J} \mathcal{T}_j$ has a $\mathcal{T}_{\Sigma_0}$-least generating set if and only if each $\mathcal{T}_j$ has a $\mathcal{T}_{\Sigma_0}$-least generating set. Moreover, if $\bigcup_{j \in J} \mathcal{T}_j$ has a $\mathcal{T}_{\Sigma_0}$-least generating set $\mathcal{T}_0$ then $\mathcal{T}_0 \setminus \mathcal{T}_{\Sigma_0}$ can be represented as a disjoint union of $\mathcal{T}_{\Sigma_0}$-least generating sets for the $\mathcal{T}_j$.

Proof. Using Theorem 4.2 it suffices to note that $\mathcal{T}_{\Sigma_0}$ is $E$-closed and that $\mathcal{T}_0 \setminus \mathcal{T}_{\Sigma_0}$ consists of isolated points, each of which is related to exactly one set $\mathcal{T}_j$. ✷

Clearly, any subset of a $\mathcal{T}$-least generating set is again a $\mathcal{T}$-least generating set (for its $E$-closure). At the same time, the property "to be a $\mathcal{T}$-least generating set" is preserved under finite extensions of generating sets $\mathcal{T}'_0$ disjoint with $\mathrm{Cl}_E(\mathcal{T}'_0)$:

Proposition 4.4. If $\mathcal{T}$ is an $E$-closed set, $\mathcal{T}'_0$ is a $\mathcal{T}$-relatively generating set for an $E$-closed set $\mathcal{T}_0$, and $\mathcal{T}_f$ is a finite subset of $\overline{\mathcal{T}}$ disjoint with $\mathcal{T}_0$, then the following conditions are equivalent:
(1) $\mathcal{T}'_0$ is the $\mathcal{T}$-least generating set for $\mathcal{T}_0$;
(2) $\mathcal{T}'_0 \cup (\mathcal{T}_f \setminus \mathcal{T}_0)$ is the $\mathcal{T}$-least generating set for the $E$-closed set $\mathcal{T}_0 \cup \mathcal{T}_f$.

Proof. (1) ⇒ (2). If $\mathcal{T}'_0$ is a $\mathcal{T}$-least generating set for $\mathcal{T}_0$ then by Theorem 4.2 each theory $T$ in $\mathcal{T}'_0 \setminus \mathcal{T}$ is isolated by some formula $\varphi_T$. Since $\mathcal{T}_f$ is finite, each theory $T$ in $(\mathcal{T}'_0 \cup (\mathcal{T}_f \setminus \mathcal{T}_0)) \setminus \mathcal{T}$ is isolated by some formula $\psi_T$. Again by Theorem 4.2, $\mathcal{T}'_0 \cup \mathcal{T}_f$ is the $\mathcal{T}$-least generating set for $\mathcal{T}_0 \cup \mathcal{T}_f$, which is $E$-closed in view of Theorem 1.2.

(2) ⇒ (1) is obvious. ✷

Theorem 4.5 (Decomposition Theorem). For any $E$-closed sets $\mathcal{T}$ and $\mathcal{T}'$ of a language $\Sigma$ there is a $\mathcal{T}$-relatively generating set $\mathcal{T}'_0 \cup \mathcal{T}'_1$ for $\mathcal{T}'$ which is disjoint with $\mathcal{T}$ and satisfies the following conditions:
(1) $|\mathcal{T}'_0 \cup \mathcal{T}'_1| \leq \max\{|\Sigma|, \omega\}$;
(2) $\mathcal{T}'_0$ is the least generating set for its $E$-closure $\mathrm{Cl}_E(\mathcal{T}'_0)$;
(3) $\mathrm{Cl}_E(\mathcal{T}'_0) \cap \mathcal{T}'_1 = \emptyset$;
(4) $\mathcal{T}'_1$ is either empty or infinite and does not have infinite subsets satisfying (2).

Proof. We denote by $\mathcal{T}'_0$ the set of isolated points in $\mathcal{T}' \setminus \mathcal{T}$ and by $\mathcal{T}'_1$ a subset of $\mathcal{T}' \setminus (\mathcal{T} \cup \mathrm{Cl}_E(\mathcal{T}'_0))$ with cardinality $\leq \max\{|\Sigma|, \omega\}$ such that each sentence belonging to a theory in $\mathcal{T}' \setminus (\mathcal{T} \cup \mathrm{Cl}_E(\mathcal{T}'_0))$ belongs to a theory in $\mathcal{T}'_1$. Note that $|\mathcal{T}'_0|$ is bounded by the number of sentences in the language $\Sigma$, i.e., $|\mathcal{T}'_0| \leq \max\{|\Sigma|, \omega\}$, too. Thus condition (1) holds and $\mathcal{T}'_0 \cup \mathcal{T}'_1$ is a $\mathcal{T}$-relatively generating set for $\mathcal{T}'$ in view of Proposition 1.1.

By Theorem 4.2, $\mathcal{T}'_0$ is the least generating set for $\mathrm{Cl}_E(\mathcal{T}'_0)$. Therefore condition (2) holds. Now (3) and (4) are satisfied since $\mathcal{T}'_1$ is separated from $\mathrm{Cl}_E(\mathcal{T}'_0)$ and does not have isolated points. ✷

Theorem 4.6. If $T$ is an $E$-combination of some theories $T_i$, $i \in I$, $\mathcal{T}$ is an $E$-closed set of theories, and $|e_{\mathcal{T}}\text{-}\mathrm{Sp}(T)| < 2^\omega$, then $\mathrm{Cl}_E(\mathcal{T} \cup \{T_i \mid i \in I\})$ has the $\mathcal{T}$-least generating set.

Proof. By Theorem 4.2 we have to show that $\mathcal{T}' \rightleftharpoons \{T_i \mid i \in I\} \setminus \mathcal{T}$ has a generating set, modulo $\mathcal{T}$, of theories $T_i$ being isolated points. Assume the contrary. Then we have sets $\mathcal{T}'_0$ and $\mathcal{T}'_1$ in terms of Theorem 4.5, where $|\mathcal{T}'_0 \cup \mathcal{T}'_1| \leq \max\{|\Sigma|, \omega\}$ and $\mathcal{T}'_1$ is infinite. Thus $T$ has a model $\mathcal{M}$ all of whose $E$-classes satisfy theories in $\mathcal{T}'_0 \cup \mathcal{T}'_1$.

Then we can construct a 2-tree [4] of sentences $\varphi_\delta$, where the $\delta$ are $\{0,1\}$-tuples, $\{\varphi_{\delta\hat{\,}0}, \varphi_{\delta\hat{\,}1}\}$ are inconsistent and $\varphi_\delta \equiv \varphi_{\delta\hat{\,}0} \vee \varphi_{\delta\hat{\,}1}$, such that all $(\mathcal{T}'_1)_{\varphi_\delta}$ are infinite. Moreover, taking negations of formulas isolating theories in $\mathcal{T}'_1$ and applying Proposition 1.1, we can assume that for each $f \in 2^\omega$ the sequence of formulas $\varphi_{\langle f(0),\ldots,f(n)\rangle}$, $n \in \omega$, is contained in a theory belonging to $\mathrm{Cl}_E(\mathcal{T}'_1)$. Thus $|\mathrm{Cl}_E(\mathcal{T}'_1)| \geq 2^\omega$, producing, by $\mathcal{M}$, $|e_{\mathcal{T}}\text{-}\mathrm{Sp}(T)| \geq 2^\omega$, which contradicts the assumption $|e_{\mathcal{T}}\text{-}\mathrm{Sp}(T)| < 2^\omega$. ✷

The following example shows that, in Theorem 4.6, the condition $|e_{\mathcal{T}}\text{-}\mathrm{Sp}(T)| < 2^\omega$ and the existence of the $\mathcal{T}$-least generating set are not equivalent.

Example 4.7. Let $\Sigma$ be a language with predicates $P_i$, $Q_j$, $i, j \in \omega$, of the same arity (it suffices to take the arity 0). Now we consider a countable set of language uniform theories $T_i$ [3] such that a unique $P_i$ is satisfied and the $Q_j$ are satisfied independently, for the set $\mathcal{T} = \{T_i \mid i \in \omega\}$. All theories $T_i$ are isolated in $\mathrm{Cl}_E(\mathcal{T})$ by the formulas $\exists \bar{x}\, P_i(\bar{x})$. Hence, $\mathcal{T}$ is the least generating set for $\mathrm{Cl}_E(\mathcal{T})$. At the same time $|\mathrm{Cl}_E(\mathcal{T})| = 2^\omega$, witnessed by theories with empty predicates $P_i$ and independently satisfied $Q_j$. Thus $|e_{\mathcal{T}}\text{-}\mathrm{Sp}(T)| = 2^\omega$ for the theory $T$ being the $E$-combination of the $T_i$, $i \in \omega$. ✷

References

[1] Sudoplatov S. V. Combinations of structures. — arXiv:1601.00041v1 [math.LO]. — 2016. — 19 p.

[2] Sudoplatov S. V. Closures and generating sets related to combinations of structures // Reports of Irkutsk State University. Series "Mathematics". — 2016. — Vol. 16. — P. 131–144.

[3] Sudoplatov S. V. Families of language uniform theories and their generating sets // Reports of Irkutsk State University. Series "Mathematics". — 2016. — Vol. 17. — P. 62–76.

[4] Handbook of mathematical logic / ed. J. Barwise. — Moscow : Nauka, 1982. — Vol. 1. Model Theory. — 392 p. [in Russian]
ai_researcher
1
Talking_Trucks_Decentralized_Collaborative_Multi-Agent_Order_Scheduling_for_Self-Organizing_Logistics.pdf
Mining Truck Platooning Patterns Through Massive Trajectory Data

Xiaolei Ma, Enze Huo, Haiyang Yu, and Honghai Li

Abstract—Truck platooning refers to a series of trucks driving in close proximity via communication technologies, and it is considered one of the most implementable systems of connected and automated vehicles, bringing huge energy savings and safety improvements. Properly planning platoons and evaluating the potential of truck platooning are crucial to trucking companies and transportation authorities. This study proposes a series of data mining approaches to learn spontaneous truck platooning patterns from massive trajectories. An enhanced map matching algorithm is developed to identify truck headings by using digital map data, followed by an adaptive spatial clustering algorithm to detect trucks' instantaneous co-moving sets. These sets are then aggregated to find the network-wide maximum platoon duration and size through frequent itemset mining for computational efficiency. We leverage real GPS data collected from truck fleet management systems in Liaoning Province, China, to evaluate platooning performance and successfully extract spatiotemporal platooning patterns. Results show that approximately 36% of spontaneous truck platoons can be coordinated by speed adjustment without changing routes and schedules. The average platooning distance and duration ratios for these platooned trucks are 9.6% and 9.9%, respectively, leading to a 2.8% reduction in total fuel consumption. We also distinguish the optimal platooning periods and space headways for national freeways and trunk roads, and prioritize the road segments with high possibilities of truck platooning. The derived results are reproducible, providing useful policy implications and operational strategies for large-scale truck platoon planning and roadside infrastructure construction.

Index Terms—energy consumption, spontaneous pattern, trajectory mining, truck platooning

This paper is supported by the National Natural Science Foundation of China (52072017). (Corresponding author: Xiaolei Ma) Xiaolei Ma, Enze Huo, Haiyang Yu and Honghai Li are with the Beijing Key Laboratory for Cooperative Vehicle Infrastructure System and Safety Control, School of Transportation Science and Engineering, Beihang University, Beijing 100191, China, and also with the Beijing Advanced Innovation Center for Big Data and Brain Computing, Beihang University, Beijing 100191, China (e-mail: [email protected]; [email protected]; [email protected]; [email protected]).

I. INTRODUCTION

The freight sector accounts for 60% of cargoes, and this trend continues to increase rapidly [1]. However, long-distance and high-volume logistics operations consume a large amount of fuel. In the European Union, the energy consumption of road freight transportation is 27% among all transport modes, contributing 20% of total carbon emissions [2]. Truck platooning is considered one of the most promising cutting-edge technologies to alleviate energy consumption and environmental pollution, and refers to a group of trucks traveling with relatively short headways for a long period via telecommunication [3], [4]. Truck platooning can effectively reduce the aerodynamic resistance between leading and following vehicles, and is thus able to lower the total energy consumption of trucks [5]. Truck platooning can also improve road safety and enhance capacity aside from offering energy savings [6], [7], [8]. With the abovementioned features, truck platooning is undoubtedly a hot topic.

Theoretically speaking, truck platooning can be viewed as a leader-following consensus problem. In the past decades, a myriad of control theory-based approaches have been proposed. Zhang et al. [9] considered the leader-follower consensus of a multiagent system with communication capability and energy constraints, and modeled the system using a Markovian approach and Lyapunov stability theory. Zhang et al.
[10] investigated the issue of leader-following consensus in a linear and Lipschitz nonlinear multiagent system with limited information. Wen et al. [11] proposed a neural network-based leader-follower consensus scheme. Tan et al. [12] developed a distributed event-triggered impulsive control method to study the leader-following consensus issue. You et al. [13] studied the leader-following consensus problem in the environment of a high-order nonlinear multiagent system, and used a self-triggered and dynamic output feedback control scheme. Yue and Meng [14] extended the leader-following consensus issue into a cooperative set problem, where multiple agents can be aggregated into a desired set. They proposed an approximate projection algorithm and proved its convergence conditions. Liu et al. [15] emphasized the synchronization of a mobile agent platoon, where each agent's speed and heading is different. Petrillo et al. [16] further designed a control mechanism to resist cyberattacks in a connected and autonomous vehicle (CAV) platoon. The above solid theoretical work sheds substantial light on the cybernetic principles of truck platooning but still focuses on simulation rather than empirical validation.

Bhoopalam et al. [17] divided truck platooning into three categories: scheduled platooning, real-time platooning, and opportunistic platooning. The former two tend to be applied to regional planning with prior intervention and coordination, and are also known as planned platooning, whereas the latter more likely happens in a realistic environment where trucks that are in close proximity are informed ad hoc about platooning. For planned platooning, logistics schedules, origin/destination information (i.e., locations and times), and delay tolerance are often acquired in advance to minimize network-level fuel consumption or maximize the platoon size. However, industrial resistance and practical factors hinder the wide implementation of planned platooning. Logistics operators are reluctant to change their delivery schedules and routes to cater to platooning with other companies [18]. Compared with passenger vehicles, trucks' estimated arrival times are more uncertain due to road congestion, overload inspection, and driving hour limitations, making network-wide truck platooning optimization, an NP-hard problem, impossible to solve in a finite time frame. Fortunately, high-fidelity truck GPS data from freight fleet management systems reproduce trucks' actual spatiotemporal footprints and can be utilized to coordinate trucks to form platoons by identifying leaders and followers from massive trajectories. Such adjustments do not require prior planning, and truck platoons can be spontaneously formed by slightly adjusting speeds on certain road segments rather than changing routes. As indicated by Liang et al.
[19], the energy savings induced by truck platooning can be counteracted by the energy consumed by detouring to seek platooning opportunities. In addition, as the national road network density increases, investing in and constructing roadside units on every segment is not realistic for facilitating V2V communications. Highlighting road segments with high platooning possibilities will be beneficial to prioritize the existing roadside facilities to be upgraded for truck platooning.

Therefore, this study aims to develop a framework to mine massive truck trajectories to identify possible truck platoon patterns. The proposed trajectory mining procedure is to detect leaders and followers within a short distance for a given time period. The actually identified patterns reduce the computational complexity of truck platoon planning by emphasizing the speed coordination between leading and following trucks in a large-scale network. The entire framework is composed of four steps. In the first step, a modified map matching algorithm considering relative driving direction is proposed to map each truck's locations onto the OpenStreetMap (OSM) network. We further recognize the instantaneous co-driving sets with the same directions and close spacing by using an enhanced ordering points to identify the clustering structure (OPTICS) algorithm. Among the generated sets, the spontaneous truck platooning patterns are found based on frequent itemset mining in depth-first spanning trees. In the final step, a fuel consumption estimation model is adopted to quantify the energy savings of the identified truck platoons, where all parameters are calibrated from either field data or existing literature. We systematically evaluate the platooning potentials and performance by using several metrics.

To summarize, the primary contributions of this study are twofold:

(1) A series of trajectory mining approaches is proposed to identify spontaneous truck platooning patterns. To be specific, an enhanced OPTICS algorithm is proposed to detect instantaneous co-driving sets from millions of truck GPS points. This algorithm replaces the original Euclidean distance with the following distance between trucks and considers driving direction, roadway junctions, and the angle of the reachability distance sequence, thus improving the detection rate of sub-centralized and speed-coordinated truck fleets. The largest truck set with the longest co-driving time period is extracted from the instantaneous co-driving sets of the entire day by using a frequent itemset mining algorithm; a minimal sketch of the co-driving set detection step is shown after this list.

(2) On the basis of massive truck trajectory data in Liaoning Province, China, we answer how many trucks could be coupled as platoons and how much fuel savings could be achieved for a specific day. We also prioritize those road segments with high platooning potential, which can provide solid policy support to upgrade existing roadside facilities for automating freight transportation.
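The following Python sketch illustrates the instantaneous co-driving set detection step described above. It is a simplified illustration, not the authors' implementation: it groups map-matched GPS points observed in the same time slice by a following-distance threshold and a heading-difference threshold, both of which are hypothetical parameters chosen here only for demonstration.

```python
import math
from itertools import combinations

# Hypothetical thresholds for illustration only; the paper calibrates its
# own parameters (e.g., adaptive reachability distances in OPTICS).
MAX_FOLLOWING_DISTANCE_M = 100.0   # cf. the 100 m flock radius in [32]
MAX_HEADING_DIFF_DEG = 15.0

def heading_diff(h1: float, h2: float) -> float:
    """Smallest absolute difference between two headings in degrees."""
    d = abs(h1 - h2) % 360.0
    return min(d, 360.0 - d)

def co_driving_sets(trucks: dict[str, tuple[float, float, float]]) -> list[set[str]]:
    """Group trucks observed at one timestamp into instantaneous
    co-driving sets: pairs closer than the following-distance threshold
    and heading the same way are linked, and linked pairs are merged
    into sets by single-linkage chaining (union-find).
    trucks maps truck_id -> (x_meters, y_meters, heading_degrees)."""
    parent = {tid: tid for tid in trucks}

    def find(t):
        while parent[t] != t:
            parent[t] = parent[parent[t]]  # path compression
            t = parent[t]
        return t

    for a, b in combinations(trucks, 2):
        (xa, ya, ha), (xb, yb, hb) = trucks[a], trucks[b]
        dist = math.hypot(xa - xb, ya - yb)
        if dist <= MAX_FOLLOWING_DISTANCE_M and heading_diff(ha, hb) <= MAX_HEADING_DIFF_DEG:
            parent[find(a)] = find(b)  # union the two trucks' sets

    groups: dict[str, set[str]] = {}
    for tid in trucks:
        groups.setdefault(find(tid), set()).add(tid)
    return [g for g in groups.values() if len(g) >= 2]

# Example: three trucks on the same heading, one oncoming truck nearby.
snapshot = {
    "T1": (0.0, 0.0, 90.0),
    "T2": (60.0, 0.0, 92.0),
    "T3": (130.0, 0.0, 91.0),   # beyond T1's threshold, but chained via T2
    "T4": (20.0, 5.0, 270.0),   # close, but opposite direction
}
print(co_driving_sets(snapshot))  # [{'T1', 'T2', 'T3'}]
```

The paper's enhanced OPTICS algorithm refines this idea by using the along-road following distance instead of the straight-line distance, which avoids mistakenly grouping trucks that travel on parallel segments or meet at junctions.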
Section 4 provides the detailed data collection efforts and several platooning performance indicators. Section 5 analyzes one-day truck trajectory data collected in Liaoning Province, China, and evaluates the platooning performance and energy savings by using the proposed spontaneous platooning pattern mining approaches. The spatiotemporal distribution of truck platoons is visualized on a map. Section 6 presents the conclusions and envisions future directions.

II. LITERATURE REVIEW

Depending on whether trip planning is involved, truck platooning can be divided into scheduled platoon planning and unscheduled platoon planning [17]. On the basis of the timeliness of trip announcements, scheduled platoon planning can be further divided into static platoon planning and real-time platoon planning. Unscheduled platoon planning is also referred to as opportunistic or spontaneous platooning, which is the emphasis of this study. In the subsequent literature review, we systematically discuss the methodological advances of both scheduled and unscheduled platoon planning.

For scheduled static platoon planning, truckers can decide whether to deviate from their planned routes for platooning, and many practical restrictions are in place for forming platoons. According to whether preplanned routes may be changed, scheduled platoon planning methods can be divided into two categories. For platooning with fixed routes, Van de Hoef [20] determined the trucks that can adjust their speeds for platooning while ensuring on-time arrivals, and optimal speed profiles are given for each truck. Zhang et al. [21] considered travel time uncertainty and analyzed two scenarios in which two trucks either travel along the same route or meet and then diverge. Liang et al. [22] discussed a specific speed coordination approach for two trucks on the same route. Farokhi and Johansson [23] investigated the influence of truck platooning on congestion pricing and on the speed of non-platooning trucks, and discussed the potential of platooning multiple trucks on a single route. Meisen et al. [24] introduced sequential mining methods to find the maximum overlapping routes on the basis of each truck's fixed path. With the late-arrival issue left out of consideration, truck platoon arrangements are proposed based on the length of overlapping segments, waiting time cost, and fuel savings, which dramatically improves optimization efficiency. For dynamic route problems, where trucks can change their original routes for platooning, Larson et al. [25] introduced the concept of controllers located at various road junctions; the local controllers perceive the position, speed, and OD information of incoming trucks and coordinate their speeds for platooning. Larsson et al. [26] modeled platoon optimization as a graph routing problem and introduced a heuristic approach to solve it. Larson et al. [27] and Nourmohammadzadeh and Hartmann [28] used mixed-integer programming and a genetic algorithm to form platoons of 20–25 trucks at the state level. However, because the problem is NP-hard, the number of trucks that can be optimized is relatively small, so these methods cannot be implemented for real-time coordination in a large-scale freight network. The uncertainties associated with overloading enforcement inspection and temporary maintenance bring additional challenges to the truck platoon planning issue [18].
For real-time platoon planning, dynamic programming is a common approach in most studies. The entire optimization process can be repeated whenever a new event is detected, for example, when freight plans are adjusted or trucks miss a platooning opportunity at some point [29], [20], [25]. Adler et al. [30] imagined a situation in which multiple trucks arrive at a particular station according to a Poisson process and are platooned to the same destination. Two platooning policies are compared: trucks are grouped to leave at a certain timestamp or when a predefined platoon size is reached. Results show that regulating the platoon size saves more energy than regulating the time period. It can be expected to take many years, or even a decade, for academia to comprehensively organize and optimize real-time truck platooning with real-time information.

For unscheduled platoon planning, the purpose is to identify a collection of trucks that meet by chance and spontaneously travel within a short space headway for a given time period. Logistics operators in Portugal have conducted a series of interviews with truck drivers, confirming that on roadways, especially highways, many groups of unacquainted trucks accidentally meet and travel together for a long distance until they are separated by speed or route differences [18]. The key to a successful unscheduled truck platooning strategy is to determine instantaneous co-driving sets. To extract spontaneous platooning patterns, existing studies focus on iterative approaches that identify platoon form-up times and durations. For the recognition of instantaneous co-driving sets, three main research directions exist. The first utilizes density-based clustering approaches such as density-based spatial clustering of applications with noise (DBSCAN), mainly represented by Jeung et al. [31]. The second research area is flock search area identification. Liang et al. [32] regarded coupled trucks within a 100 m flock of the current truck and traveling on the same segment as an instantaneous co-driving set, while Larson et al. [33] applied a search radius ranging from 0.5 to 5 km to estimate the coordination potential for spontaneous truck platooning in a transportation network in Germany. On the basis of Liang et al. [32], Shein et al. [34] combined flock sets with a central distance less than the search radius at each timestamp; the enhanced algorithm mitigates the insufficient adaptability of a fixed radius and avoids the excessive density connection caused by density clustering algorithms. The ideal solution should comprehensively consider the heading and distance differences between trucks for flock recognition, as in Andersson et al. [35].

The abovementioned approaches cannot effectively adapt to realistic road topology. Radius search methods are unable to identify truck grouping behavior at road junctions and on parallel segments, where trucks travel at relatively short lateral distances; mistakenly clustering trucks on different road segments into the same platoon through fixed-radius searching is inevitable. The majority of iterative approaches apply a fixed radius to find co-driving trucks and cannot address the issue of dynamic spacing between trucks. Shein et al. [34] alleviated this issue to some extent but still could not eliminate the errors caused by the choice of initial trucks. In addition, almost all bidirectional trunk roads are represented as single lines in OSM.
Thus, the approach proposed by Liang et al. [32] has a high possibility of recognizing trucks driving in opposite directions as instantaneous co-driving sets.

The current research on truck platooning does not consider freight network design to reap the maximum benefits of platooning. A platoon requires the support of roadside units; thus, logistics operators can better plan communication facilities by visualizing spontaneous truck platoon patterns and by highlighting the segments with high platooning potential [17].

The focus of this study is spontaneous truck platoon planning based on massive trajectory data. We replace the geographical distance with the vehicle following distance to address the single-line representation issue in a digital map. An adaptive clustering algorithm is proposed to further refine instantaneous co-driving sets and alleviate excessive density connections. Truck platoon patterns are extracted from the instantaneous co-driving sets by using frequent itemset mining approaches. Finally, a series of policy recommendations is made to prioritize roadside communication infrastructure so as to enhance the effectiveness of truck platooning with a minimum transportation facility investment.

III. METHODOLOGY

We consider a road network on which multiple trucks travel, each reporting latitude, longitude, and timestamp. The problem is to find how many trucks can be coordinated with each other and grouped into platoons as time evolves. This problem is similar to the issue of companion or flock discovery, where a companion or flock refers to a group of moving objects (e.g., animals or pedestrians) traveling together sporadically. Unlike animals or people, who move in space and time flexibly, truckers have stricter schedules to follow and more fixed routes to travel. The problem becomes very complicated when hundreds of millions of points concurrently scatter over a large-scale network under both spatial and temporal constraints. We aim to tackle this issue by using a series of data mining approaches.

In the following, a computational framework is presented to find possible truck platoons by using massive trajectory data and digital maps. The framework is composed of three key steps. In the first step, we map each truck's locations onto the corresponding road segments and also infer the truck's heading and its following distance to the leading truck. We then develop a density-based clustering algorithm to detect the co-driving sets at each timestamp. This algorithm can effectively group trucks traveling on China's national expressways, which commonly run in parallel with trunk roads and are hard to distinguish. The final step is to aggregate all sets over multiple time steps and find the most typical platoon patterns from numerous combinations. We apply depth-first spanning trees with adequate pruning rules to lower the computational burden. The mined potential platoons are then used to estimate fuel consumption.

A. Map Matching with Heading

The map matching procedure aims to accurately link each truck's latitude and longitude with a road segment given the high positioning errors of GPS devices. Highways in China can be categorized into national-level expressways and trunk roads, whose speed limits are 100 and 60 km/h, respectively. Trucks occasionally travel on trunk roads to avoid tolls on expressways. In most cases, trunk roads and expressways run in parallel and intersect with each other. In addition, trunk roads are represented by single lines in digital maps, which makes it difficult to separate opposite traffic by relying on truck trajectories only. Thus, distinguishing whether a truck travels on a trunk road or an expressway, with the correct direction, is not an easy task. Moreover, to identify co-movement patterns, we are more concerned with the interaction between multiple truck trajectories than with locating a single truck trajectory. Therefore, it is necessary to identify the driving direction of each truck relative to its matched segment at each moment and to determine the following distance among multiple trucks.

Newson and Krumm [36] introduced the hidden Markov process into map matching, and Lou et al. [37] pointed out that the transition probability should also consider the difference between the speed calculated from GPS data and the speed calculated from matched points. The relationship between positioning points and their corresponding segments can be regarded as observations of the hidden states, while the observation probability is determined by the perpendicular distance between a positioning point and the candidate segments within a given search radius. Therefore, determining the matched points and segments can be solved by dynamic programming influenced by both spatial and temporal factors.

In OSM, each roadway is composed of segments with start and end nodes, and the default road direction runs from the start node to the end node. However, most trunk roads allow trucks to travel in both directions but are represented by single lines. Moreover, these segments are not regularly connected from end to end; as a result, the default road direction is not fixed along each trunk road. The traditional HMM-based map matching algorithm cannot distinguish the driving direction relative to the matched segment direction. To improve the accuracy of instantaneous co-driving sets, each truck's heading on its corresponding segment needs to be identified.

To address this issue, we extend the traditional HMM-based algorithm by incorporating the four scenarios in Figure 1, in which each truck's heading and the segment direction are intertwined. In Figure 1, $z_{t-1}$ and $z_t$ represent the truck's positioning points at timestamps $t-1$ and $t$, respectively; $s_{t-1}$ and $s_t$ indicate the matched segments of $z_{t-1}$ and $z_t$; and $p(s_{t-1}, z_{t-1})$ and $p(s_t, z_t)$ are the candidate points on $s_{t-1}$ and $s_t$. A dashed line connects the two segments to represent the virtual shortest network route. We aim to identify the heading of each truck GPS point relative to the matched segment direction: $dir = 0$ implies that the truck travels along the segment direction, while $dir = -1$ implies that the truck travels against the segment direction.

Fig. 1. Relationship between truck heading and segment direction.

The four scenarios are demonstrated in Figure 1, where the red line indicates the truck's actual heading, and $ToNode$ and $FromNode$ represent the nodes of the matched segment that the truck is heading to and from, respectively. We iteratively calculate the transition probability of each scenario and select the scenario with the maximum probability, by which the truck's heading relative to its matched segment at each timestamp can be determined.
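To make the direction-aware matching concrete, the sketch below implements a minimal Viterbi pass over candidate (segment, direction) states, in the spirit of [36]. It is an illustrative reconstruction rather than the authors' implementation: the Gaussian emission model, the exponential transition penalty, and both parameter values are assumptions, and the route-distance callback stands in for a real shortest-path query on the OSM graph.

```python
import math

SIGMA_GPS = 20.0  # assumed GPS noise std-dev (m)
BETA = 50.0       # assumed transition-decay parameter (m)

def emission_lp(perp_dist_m):
    # Gaussian emission: closer candidate projections are more likely.
    return -0.5 * (perp_dist_m / SIGMA_GPS) ** 2

def transition_lp(route_dist_m, gap_m):
    # Penalize mismatch between network route length and straight-line gap;
    # routes computed against the wrong heading come out much longer.
    return -abs(route_dist_m - gap_m) / BETA

def match_with_heading(steps, route_dist):
    """steps[i] = {"cands": [((seg_id, dir), perp_dist_m), ...],
                   "gap_m": straight-line distance to the previous fix}.
    Each segment appears twice among the candidates, once with dir=0 and
    once with dir=-1. route_dist(prev_state, state) returns the network
    distance between the two candidate projections.
    Returns the most likely (seg_id, dir) sequence."""
    score = {s: emission_lp(d) for s, d in steps[0]["cands"]}
    back = {s: [s] for s in score}
    for step in steps[1:]:
        new_score, new_back = {}, {}
        for state, perp in step["cands"]:
            prev, lp = max(((p, score[p]
                             + transition_lp(route_dist(p, state),
                                             step["gap_m"]))
                            for p in score), key=lambda t: t[1])
            new_score[state] = lp + emission_lp(perp)
            new_back[state] = back[prev] + [state]
        score, back = new_score, new_back
    return back[max(score, key=score.get)]
```

Doubling each candidate segment into a $dir = 0$ and a $dir = -1$ state lets the transition term, whose route distances depend on the assumed heading, resolve the travel direction; skipping a malfunctioning fix simply removes that step from the sequence and enlarges the gap to the next one.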
Notably, low-quality GPS points severely deteriorate the accuracy of the map matching algorithm. To avoid their influence on the transition probability calculation, malfunctioning records can be skipped, and previous truck locations several timestamps earlier can be utilized as reference data to jointly determine the truck's driving direction.

B. Instantaneous Co-driving Set Generation

On the basis of the truck headings identified in the map matching process, we can further extract the instantaneous co-driving sets in the freight network at each moment. A so-called instantaneous co-driving set refers to a set of trucks with the same headings that closely drive with each other and are densely distributed on road segments at a certain time. However, characteristics of the transportation network, such as parallel-line construction, complex intersections, and the lack of double-line descriptions, bring challenges to truck co-driving set mining. Taking the Liaoning Province data extracted from OSM as an example, the shortest distance between national freeway G202 and trunk road G15 is only 30 m, and these two routes continue to run in parallel. Thus, conventional fixed-radius searching approaches cannot reliably recognize co-moving trucks traveling on the same route.

As an improved version of density-based clustering, the OPTICS algorithm adapts to changeable distances between individual points, which the DBSCAN algorithm does not. The dataset is ordered while the cluster hierarchy is kept for a given parameter pool. Unlike DBSCAN, which directly provides the clustering result of a dataset, OPTICS first outputs the order in which the objects are processed together with information such as core distance and reachability distance. The requirement for a point $p$ to be a core point is that there are $MinPts$ points in its $\varepsilon$-neighborhood, and the core distance $cd(p)$ defines the minimum radius that classifies it as a core point. For a core point $p$, the reachability distance $rd(p, q)$ is the maximum of the core distance $cd(p)$ and the distance between the two points.

However, the OPTICS algorithm cannot be directly utilized to recognize trucks' instantaneous co-driving sets because the original Euclidean distance ignores driving direction. When detecting multiple trucks' co-movements, each truck's direction needs to be captured. Hence, we introduce the concept of following distance into the OPTICS algorithm, yielding the adaptive OPTICS (A-OPTICS) algorithm. To simplify the problem, we extract the locations of the $N$ online trucks at timestamp $t$ and use $o_p$ to represent their matched locations. Considering two trucks' locations $o_1$ and $o_2$ at timestamp $t$, the lengths of their matched segments are $Seglen_{o_1}$ and $Seglen_{o_2}$, respectively; the truck headings relative to the segment directions are $dir_{o_1}$ and $dir_{o_2}$; and $r_{o_1}$ and $r_{o_2}$ represent the locations of the matched points on each segment as ratios of the total segment length. On the basis of the map matching results in Section 3.1, the start and end nodes of the two matched segments, $FromNode_{o_1}$, $ToNode_{o_1}$ and $FromNode_{o_2}$, $ToNode_{o_2}$, can be identified. Therefore, the following distance $FD(o_1, o_2)$ can be calculated as shown in Appendix B, representing the geographic distance between the following and leading trucks considering their relative headings. The co-driving relation is mutual; thus, the following distance is symmetric, that is, $FD(o_1, o_2) = FD(o_2, o_1)$.

Therefore, the two-dimensional Euclidean distance in the original OPTICS algorithm is replaced by the one-dimensional following distance $FD(o_1, o_2)$, which better depicts the co-driving relationship among multiple trucks. If there are more than $MinPts$ trucks whose following distances to a given truck are lower than $\varepsilon$ kilometers, then this truck is considered a core object. The neighborhood distance $\varepsilon$ is set to 1 km to allow sufficient space for speed coordination, and $MinPts$ is set to 2, the minimum number needed to form a platoon. Therefore, for any given truck $o_p$, the core distance is the minimum following distance that makes the truck a core object under the given $\varepsilon$ and $MinPts$, as shown in Equation (1):

$$cd(o_p)=\begin{cases}\text{UNDEFINED}, & \text{if } |N_{\varepsilon}(o_p)| < MinPts\\ FD\!\left(o_p,\, N_{\varepsilon}^{MinPts}(o_p)\right), & \text{otherwise}\end{cases}\tag{1}$$

where $N_{\varepsilon}(o_p)$ denotes the set of trucks whose following distances to $o_p$ are lower than $\varepsilon$, and $N_{\varepsilon}^{MinPts}(o_p)$ denotes the $MinPts$-th nearest of these trucks to $o_p$ by following distance; the core distance satisfies $cd(o_p) \le \varepsilon$ when $o_p$ is a core object. For any other truck $o_q$, the reachability distance of $o_q$ from $o_p$ is defined in Equation (2) under the given $\varepsilon$ and $MinPts$. Following the OPTICS algorithm, the order in which the trucks are processed at each timestamp is obtained, and the calculated reachability distance is denoted as $o_i.R$:

$$rd(o_q, o_p)=\begin{cases}\text{UNDEFINED}, & \text{if } |N_{\varepsilon}(o_p)| < MinPts\\ \max\!\left(cd(o_p),\, FD(o_p, o_q)\right), & \text{otherwise}\end{cases}\tag{2}$$

The enhanced algorithm is still based on density clustering; therefore, trucks that are density-connected to the core objects but visibly far away from each other could still be recognized as the same set. Figure 2 visualizes such an uneven distribution of trucks' instantaneous locations on a single-line roadway.

Fig. 2. Example of wrongly detected co-driving truck sets.

With the introduction of the vehicle following and reachability distances, the initial instantaneous co-driving set can be successfully found, as denoted by the dashed red circle in Figure 2. However, within the circle, $o_2$ and $o_8$ are sparsely distributed and should not be classified into a platoon with the remaining trucks. Fortunately, the variation tendency and rate of the reachability distance along the clustering order can address this issue [38]. In Figure 3, to quantify the variation trend of the reachability distance, the gap between any two consecutive trucks in the ordering is quantified as $\Delta$, and two vectors $\overrightarrow{o_y o_x}$ and $\overrightarrow{o_y o_z}$ are constructed for any given truck $y$ and its two neighboring trucks $x$ and $z$. The angle $\theta_{y.R}$ describes the variation tendency of the reachability distance at truck $y$, while the rate indicator $\Lambda_{y.R}$ is combined with $\theta_{y.R}$ to determine the boundary of the desired set in the ordering; they can be computed by Equations (3) and (4):

$$\theta_{y.R}=\left\langle \overrightarrow{o_y o_x},\, \overrightarrow{o_y o_z}\right\rangle=\arccos\!\left(\frac{-\Delta^2+(x.R-y.R)(z.R-y.R)}{\left\|\overrightarrow{o_y o_x}\right\|\cdot\left\|\overrightarrow{o_y o_z}\right\|}\right)\tag{3}$$

$$\Lambda_{y.R}=\begin{vmatrix}-\Delta & \Delta\\ x.R-y.R & z.R-y.R\end{vmatrix}\tag{4}$$

Fig. 3. The reachability distance diagram in the OPTICS algorithm.

In simple terms, a smaller $\theta_{y.R}$ indicates that truck $y$ is located toward the edge of a centralized desired subset (i.e., a truck platoon), and $\Lambda_{y.R}$ can further assist in determining the first and last trucks within the desired subset. The gap parameter $\Delta$ and the threshold values of $\theta_{y.R}$ and $\Lambda_{y.R}$ can be properly set based on the parameters $\varepsilon$ and $MinPts$.
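As a concrete illustration of Equations (1) and (2), the sketch below reimplements the OPTICS ordering pass with a pluggable following-distance function in place of the Euclidean distance. It is a minimal reconstruction under assumed data structures (hashable, sortable truck IDs and a symmetric fd(a, b) in kilometers), not the authors' full A-OPTICS, which additionally applies the boundary rules of Equations (3) and (4).

```python
import heapq

INF = float("inf")

def optics_order(trucks, fd, eps=1.0, min_pts=2):
    """Returns the processing order and each truck's reachability
    distance o.R (INF encodes UNDEFINED)."""
    def neighbors(p):
        return [q for q in trucks if q != p and fd(p, q) < eps]

    def core_distance(p, nbrs):
        if len(nbrs) < min_pts:          # Eq. (1), UNDEFINED branch
            return None
        return sorted(fd(p, q) for q in nbrs)[min_pts - 1]

    reach = {p: INF for p in trucks}
    order, seen = [], set()
    for start in trucks:
        if start in seen:
            continue
        heap = [(INF, start)]
        while heap:
            _, p = heapq.heappop(heap)
            if p in seen:
                continue
            seen.add(p)
            order.append(p)
            nbrs = neighbors(p)
            cd = core_distance(p, nbrs)
            if cd is None:
                continue
            for q in nbrs:
                if q in seen:
                    continue
                rd = max(cd, fd(p, q))   # Eq. (2)
                if rd < reach[q]:
                    reach[q] = rd
                    heapq.heappush(heap, (rd, q))
    return order, reach
```

Running this once per snapshot yields the ordering and the $o_i.R$ values on which the boundary detection below operates.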
Given that the neighborhood distance $\varepsilon$ is set to 1 km and the minimum number of following trucks to 2, the optimal gap parameter $\Delta$ is 0.5, and the thresholds of $\theta_{y.R}$ and $\Lambda_{y.R}$ are $150^{\circ}$ and 0, respectively. Moreover, the reachability distance threshold is 1, so a truck that is neither an outlier nor a core object is assigned the value 1.01. All of the above parameters were calibrated by extensive sensitivity analysis. We compute the variation tendency $\theta_{y.R}$ and the reachability distance rate $\Lambda_{y.R}$ to improve the truck co-driving set of Figure 2; the resulting values are listed in Table I. A truck with $\theta_{y.R} < 150^{\circ}$ and $\Lambda_{y.R} > 0$ is either the front of a platoon or the outlier nearest to the platoon. A truck with $\theta_{y.R} < 150^{\circ}$ and $\Lambda_{y.R} < 0$ is either the end of a platoon or the second truck in the platoon. On the basis of these two rules, we can extract the first and last trucks of a centralized set from the initial OPTICS clustering results. Detailed pseudocode is provided in Appendix A.

TABLE I
CALCULATED PARAMETERS TO FORM UP A TRUCK PLATOON IN FIGURE 2

            o1    o2    o3     o4     o5     o6     o7     o8     o9    o10
y.R         Inf   1.01  0.849  0.141  0.071  0.057  0.057  0.905  1.01  Inf
θ_y.R       -     163   142    133    174    178    120    131    -     -
Λ_y.R       -     0.08  0.27   -0.31  -0.03  -0.01  -0.42  0.38   -     -
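The boundary rules translate directly into code. The sketch below (illustrative only; the gap and angle threshold follow the calibration above, and trucks with undefined neighbors are skipped for simplicity) evaluates Equations (3) and (4) along the reachability ordering; for the interior trucks of the Figure 2 example it reproduces the Table I values.

```python
import math

DELTA, THETA_MAX = 0.5, 150.0   # calibrated gap and angle threshold

def theta_lambda(x_R, y_R, z_R, delta=DELTA):
    """Eqs. (3)-(4) at truck y, given its ordering neighbors x and z."""
    vx, vz = (-delta, x_R - y_R), (delta, z_R - y_R)
    cos_t = ((vx[0] * vz[0] + vx[1] * vz[1])
             / (math.hypot(*vx) * math.hypot(*vz)))
    theta = math.degrees(math.acos(max(-1.0, min(1.0, cos_t))))
    lam = vx[0] * vz[1] - vx[1] * vz[0]   # 2x2 determinant of Eq. (4)
    return theta, lam

def flag_boundaries(R):
    """R: reachability values in processing order (inf = undefined).
    Flags platoon fronts/ends per the two threshold rules."""
    flags = []
    for i in range(1, len(R) - 1):
        if math.isinf(R[i - 1]) or math.isinf(R[i + 1]):
            continue
        theta, lam = theta_lambda(R[i - 1], R[i], R[i + 1])
        if theta < THETA_MAX:
            flags.append((i, "front/outlier" if lam > 0 else "end/second"))
    return flags

# For the Fig. 2 ordering, theta_lambda(0.849, 0.141, 0.071)
# returns (133.2, -0.32), matching the o4 column of Table I.
```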
C. Spontaneous Platoon Pattern Mining

For a spontaneous platoon pattern $P(O, T)$, all trucks in the truck set $O$ are located in the same instantaneous co-driving set within the timestamp set $T$. Therefore, spontaneous platoon pattern mining aims to find the most frequent truck sets from a collection of instantaneous co-driving sets over multiple timestamps. A mined set must be time-satisfied and scale-satisfied, as determined by two parameters, the shortest co-driving time length $min_t$ and the minimum number of co-driving trucks $min_o$; that is, $|O| \ge min_o$ and $|T| \ge min_t$, as shown in Figure 4.

Fig. 4. Schematic of spontaneous platoon pattern mining.

In freight operation, the following distance between trucks can exceed 1 km owing to speed differences caused by weigh-in-motion detection and roadside inspection, leading to the interruption of a spontaneous platoon. In this study, $min_t$ is set to 2 timesteps (i.e., 30 seconds), and $min_o$ is set to 2 trucks. To avoid a large number of redundant mining results and to eliminate invalid subsets covered by supersets, the concepts of time closure and size closure are introduced; a valid spontaneous platoon pattern is one that satisfies both closures.

Time Closure: For any spontaneous platoon pattern $P(O, T)$, if no pattern $P(O', T')$ satisfies $O = O'$ and $T \subset T'$, then $P(O, T)$ is called time-closed.

Size Closure: For any spontaneous platoon pattern $P(O, T)$, if no pattern $P(O', T')$ satisfies $O \subset O'$ and $T = T'$, then $P(O, T)$ is called size-closed.

For a given set of trucks $O$, the maximum co-driving time set $T$ can be uniquely determined, but not vice versa. Therefore, the mining task applies depth-first search (DFS) over a spanning tree of truck sets to extract the maximum time set $T_{max}(O)$ over which the given trucks belong to the same instantaneous co-driving set.

The pruning and selection processes are mainly based on the maximum time set $T_{max}(O)$ and can be divided into the following four categories.

Logical Pruning Rule: According to the search order of the depth-first spanning tree, if the difference between the truck index $o_{i_j}$ of the leaf node directly connected to the root and the total number of trucks is less than $min_o$, then the leaf node should be pruned.

A Priori-like Pruning Rule: If the truck set $O$ is not time-satisfied, that is, $|T_{max}(O)| < min_t$, then the maximum time set of each of its supersets $O'$ is also insufficient, that is, $|T_{max}(O')| < min_t$. None of the child nodes can be time-satisfied; therefore, the leaf node where the current truck set $O$ is located should be pruned.

Subset Pruning Rule: According to the search order of the depth-first spanning tree, if an already found spontaneous platoon pattern $P(O, T)$ and the current pattern $P(O', T')$ satisfy $O \subset O'$ and $T' = T$, then the truck set $O'$ and its supersets are not time-closed and thus cannot form valid spontaneous platoon patterns, and the leaf node where the current truck set $O'$ is located should be pruned.

Marginal Removal Rule: According to the search order of the depth-first spanning tree, each time the subtree under the root node completes its depth search, the nodes that pass the three pruning rules above should undergo the following marginal removal rule. For a spontaneous platoon pattern $P(O, T)$ with truck set $O = \{o_{i_1}, o_{i_2}, \ldots, o_{i_j}\}$ $(i_1 < i_2 < \cdots < i_j)$, if adding any truck $o_{i_k}$ $(k > j)$ yields a pattern $P(O', T')$ satisfying $T' = T$, then the current pattern $P(O, T)$ should be removed because it does not satisfy size closure.
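The search procedure can be summarized by the sketch below, an illustrative reconstruction rather than the authors' implementation: it enumerates truck sets in depth-first index order, keeps the a-priori-like pruning on $|T_{max}(O)| < min_t$, and approximates the remaining rules with a final closure filter. The exhaustive enumeration is exponential and is shown only to make the definitions concrete.

```python
def mine_platoons(snapshots, min_o=2, min_t=2):
    """snapshots[t]: list of frozensets (instantaneous co-driving sets
    at timestamp t). Returns closed spontaneous patterns (O, T)."""
    def t_max(trucks):
        # timestamps at which all given trucks share one co-driving set
        return frozenset(t for t, sets in enumerate(snapshots)
                         if any(trucks <= s for s in sets))

    universe = sorted({o for sets in snapshots for s in sets for o in s})
    found = []

    def dfs(trucks, start):
        T = t_max(trucks)
        if trucks and len(T) < min_t:      # a-priori-like pruning:
            return                         # no superset can recover time
        if len(trucks) >= min_o:
            found.append((trucks, T))
        for i in range(start, len(universe)):
            dfs(trucks | {universe[i]}, i + 1)

    dfs(frozenset(), 0)
    # keep only time- and size-closed patterns
    return [(O, T) for O, T in found
            if not any((O == O2 and T < T2) or (O < O2 and T == T2)
                       for O2, T2 in found if (O, T) != (O2, T2))]
```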
D. Fuel Consumption Estimation

One of the primary objectives of this study is to evaluate the energy savings incurred by truck platooning. Thus, we apply the longitudinal vehicle model proposed by Liang et al. [22], based on Newton's second law of motion, to calculate the traction force and then calculate the instantaneous fuel consumption with a fuel consumption model [39], [40], [41]. Truck trajectory data include latitude, longitude, and altitude information and can be used to compute the instantaneous vehicle speed and acceleration rate, which are the leading factors influencing truck fuel consumption.

At any given timestamp $t$, the acceleration rate $a_t$ of a truck with total weight $m$ is mainly determined by the engine and braking force $F_v(t)$, the rolling resistance force $F_{roll}(\cdot)$, the gravitational force $F_{gravity}(\cdot)$, and the air drag force $F_{airdrag}(\cdot)$. $F_{roll}(\cdot)$ and $F_{gravity}(\cdot)$ are affected by the rolling resistance coefficient $c_r$, the truck weight $m$, the gravitational coefficient $g$, and the road slope $\alpha_t$ determined by the truck's instantaneous location $z_t$, while $F_{airdrag}(\cdot)$ is influenced by the air density $\rho$, the truck's frontal area $A$, the air drag coefficient $c_d$, the instantaneous speed $v_t$, and the reduction coefficient $\Phi$. The reduction coefficient $\Phi$ is less than 1 only when the truck is in a platoon; otherwise, it is set to 1. The relationship between the instantaneous acceleration and the forces is expressed in Equation (5):

$$m a_t = F_v(t) - F_{airdrag}(v_t) - F_{roll}(\alpha_t) - F_{gravity}(\alpha_t) = F_v(t) - \tfrac{1}{2}\rho A c_d v_t^2 \Phi - m g c_r \cos(\alpha_t) - m g \sin(\alpha_t)\tag{5}$$

Assuming that the heat gained from burning the fuel is proportionately converted into the energy required to propel a truck, the instantaneous fuel consumption $fc$ (ml/s) can be expressed as Equation (6), where $\bar{\eta}_{eng}$ is the mean engine combustion efficiency, $\rho_d$ is the energy conversion constant of the fuel, and $\Psi$ is the conversion factor from liquid volume to mass. $\delta$ is an indicator that prevents fuel consumption from being counted when the truck is braking; that is, $\delta$ equals 1 when $F_v(t) \ge 0$ and 0 otherwise:

$$fc = \delta\, \frac{F_v(t)\, v_t}{\Psi\, \bar{\eta}_{eng}\, \rho_d}\tag{6}$$

Assuming that the acceleration rate of the truck is constant during each 15-second interval $\Delta t$, the total fuel consumption of the truck during the interval can be expressed as Equation (7):

$$fc_{\Delta t} = \int_{t}^{t+\Delta t} \delta\, \frac{F_v(t)\, v_t}{\Psi\, \bar{\eta}_{eng}\, \rho_d}\, dt\tag{7}$$

The driving profile, i.e., the changes in speed and altitude of each truck, can be acquired from the trajectory data. The instantaneous acceleration rate $a_t$ at moment $t$ can be computed from two consecutive GPS points through Equation (8):

$$a_t = \frac{v_{t+\Delta t}-v_t}{\Delta t}\tag{8}$$

The instantaneous road slope $\alpha_t$ at moment $t$ can be computed from two consecutive GPS points through Equation (9):

$$\alpha_t = \arctan\!\left(\frac{h_{t+\Delta t}-h_t}{d(z_{t+\Delta t}, z_t)}\right)\tag{9}$$

where $h_t$ represents the instantaneous altitude at moment $t$ for each truck, and $d(z_{t+\Delta t}, z_t)$ represents the network distance between the two positioning points.

Table II lists the calibrated parameters of the truck fuel consumption estimation model. Each truck's driving profile (e.g., speed, acceleration rate, and road slope) is computed from the actual trajectory data. The instantaneous axle load data for Liaoning Province within the same year are used to estimate the average total truck weight. The remaining parameters are taken from the existing literature with identical or similar settings to ours.

TABLE II
PARAMETERS FOR THE FUEL CONSUMPTION MODEL

Parameter   Unit    Value     Description and Source
ρ           kg/m3   1.29      Air density; Liang et al. 2014
A           m2      10.26     Truck frontal area; Liang et al. 2014
c_d         -       0.6       Air drag coefficient; Liang et al. 2014
v_t         m/s     -         Derived from trajectory data
Φ           -       1         When driving alone
                    0.92      Leading truck; Lu et al. 2011
                    0.72      Following truck; Janssen et al. 2015
m           kg      26,800    Axle load data in Liaoning Province
g           m/s2    9.8       Gravitational coefficient
c_r         -       0.007     Rolling resistance coefficient; Liang et al. 2014
α_t         -       -         Computed from trajectory data
δ           -       1         When F_v(t) ≥ 0
                    0         When F_v(t) < 0
Ψ           g/ml    0.737     Franceschetti et al. 2013
η̄_eng       -       0.4       Industry average level
ρ_d         J/g     44,000    Franceschetti et al. 2013
a_t         m/s2    -         Computed from trajectory data
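For illustration, the sketch below wires Equations (5)-(9) together with the Table II values; it is a minimal reconstruction under the stated constant-acceleration assumption, with module-level constants standing in for the calibrated parameters.

```python
import math

# Table II values (module constants for brevity)
RHO, AREA, C_D, C_R = 1.29, 10.26, 0.6, 0.007
G, PSI, ETA_ENG, RHO_D = 9.8, 0.737, 0.4, 44_000.0
M = 26_800.0
PHI = {"alone": 1.0, "leader": 0.92, "follower": 0.72}

def traction_force(v, a, slope, role="alone"):
    """Eq. (5) solved for F_v(t); slope in radians."""
    f_air = 0.5 * RHO * AREA * C_D * v * v * PHI[role]
    f_roll = M * G * C_R * math.cos(slope)
    f_grav = M * G * math.sin(slope)
    return M * a + f_air + f_roll + f_grav

def fuel_rate(v, a, slope, role="alone"):
    """Eq. (6): instantaneous consumption in ml/s; delta=0 while braking."""
    f_v = traction_force(v, a, slope, role)
    return f_v * v / (PSI * ETA_ENG * RHO_D) if f_v >= 0 else 0.0

def interval_fuel(v0, v1, h0, h1, dist_m, dt=15.0, role="alone"):
    """Eq. (7) over one 15 s interval; Eqs. (8)-(9) recover a_t and the
    slope from two consecutive GPS fixes (simple midpoint quadrature)."""
    a = (v1 - v0) / dt                       # Eq. (8)
    slope = math.atan((h1 - h0) / dist_m)    # Eq. (9)
    n = 100
    return sum(fuel_rate(v0 + a * dt * (i + 0.5) / n, a, slope, role)
               for i in range(n)) * dt / n
```

Comparing interval_fuel(...) with role="follower" against role="alone" over a platooned trajectory gives the per-interval saving attributable to the reduced air drag coefficient $\Phi$.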
IV. DATA DESCRIPTION AND PLATOONING PERFORMANCE MEASURES

A. Data Description

In China, heavy-duty vehicles (HDVs) must be monitored in real time through fleet management systems governed by the transportation authorities. We obtained massive trajectory data from the HDVs registered in Liaoning Province on April 10, 2018. The dataset contains 26,405 trucks with 28 million positioning records, of which 20,358 trucks were involved in freight activity. Approximately 77.34% of the trucks routinely deliver goods within Liaoning Province, while the remaining trucks perform province-to-province freight activities.

GPS devices are equipped on each truck and transmit each truck's position information, including truck ID, timestamp, longitude, latitude, altitude, speed, and direction. However, due to signal loss incurred by overhead obstruction and communication failure, the reporting frequency is not stable, ranging from 10 s to 60 s in most cases. A certain number of trucks suffer from missing location reports for short time periods, which can be properly recovered by the map matching algorithm proposed in Section 3.1. The digital map data come from OSM as of December 2019 and include the expressway and trunk road networks of China. As mentioned in Section 3.1, most trunk road segments are represented by single lines, although trucks can travel along them in both directions. Figure 5 presents the studied region with the truck GPS data.

Fig. 5. Truck trajectory distribution in Liaoning Province, China.

B. Platooning Performance Measures

To evaluate the effectiveness of the proposed approaches, we utilize the instantaneous co-driving ratio $ICR_t$, defined as the ratio between the number of mined co-driving trucks and the total number of available trucks $N_t^{total}$ on the national highway system at a certain timestamp $t$. Supposing there are $S_t$ co-driving sets in total, with $n_i$ trucks platooned within set $i$, $ICR_t$ can be expressed as

$$ICR_t=\frac{\sum_{i}^{S_t} n_i}{N_t^{total}}\tag{10}$$

The space headway between two consecutive coupled trucks is another critical indicator for assessing the potential of platooning; it is defined as the average following distance across all instantaneous co-driving sets on a specific road segment at a certain timestamp $t$ (see Equation (11)):

$$ICH_t=\frac{\sum_{i}^{S_t}\sum_{j}^{n_i-1} h_{ij}}{\sum_{i}^{S_t} n_i}\tag{11}$$

where $ICH_t$ is the instantaneous co-driving headway, and $h_{ij}$ represents the space headway of the $j$-th vehicle within the $i$-th co-driving set.

The average number of participating trucks in each co-driving set is an additional indicator for understanding the co-driving phenomenon; it is defined as the instantaneous co-driving size $ICS_t$ (see Equation (12)):

$$ICS_t=\frac{\sum_{i}^{S_t} n_i}{S_t}\tag{12}$$

From a macro-level view, aggregating all instantaneous co-driving sets allows us to identify trucks' spatial and temporal platooning patterns. Therefore, we propose two indicators to measure platooning effectiveness in a large freight network. For each individual truck, the percentages of platooning time and distance can be computed and then aggregated into the average platooning time and distance ratios over all trucks, as defined in Equations (13) and (14):

$$PDR=\frac{\sum_{j}^{K} PD_j}{\sum_{j}^{K} D_j}\tag{13}$$

$$PTR=\frac{\sum_{j}^{K} PT_j}{\sum_{j}^{K} T_j}\tag{14}$$

where $PDR$ and $PTR$ are the average platooning distance and time ratios, respectively; $PD_j$ and $PT_j$ represent the platooned travel distance and time of truck $j$; $D_j$ and $T_j$ represent the total travel distance and time of truck $j$; and $K$ indicates the total number of available trucks during the entire day.
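A direct transcription of Equations (10)-(14), shown here only as an illustrative sketch over plain Python containers, makes the bookkeeping explicit:

```python
def icr(sets, n_total):
    """Eq. (10): share of platooned trucks among available trucks."""
    return sum(len(s) for s in sets) / n_total

def ich(headways, sets):
    """Eq. (11): headways[i] holds the n_i - 1 gaps within set i (m)."""
    return sum(sum(h) for h in headways) / sum(len(s) for s in sets)

def ics(sets):
    """Eq. (12): average number of trucks per co-driving set."""
    return sum(len(s) for s in sets) / len(sets)

def ratio(part, total):
    """Eqs. (13)-(14): PDR with distances, PTR with times."""
    return sum(part) / sum(total)

# e.g. two sets of 3 and 2 trucks among 50 available trucks:
# icr([{1, 2, 3}, {4, 5}], 50) -> 0.1; ics(...) -> 2.5
```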
V. RESULT ANALYSIS

A. Instantaneous Co-driving Set Detection Result

We first apply the map matching algorithm of Section 3.1 to map all trajectory data onto the freight network of Liaoning Province and then identify the instantaneous co-driving sets at each timestamp by using the proposed A-OPTICS algorithm. As presented in Figure 6, instantaneous truck platooning does not emerge merely occasionally; it can be witnessed on major national highways and on road segments around key logistics hubs. The instantaneous co-driving sets can be considered prior signals for proactive truck platoon planning strategies, e.g., speed coordination and route adjustment: truckers may be informed by fleet management systems to accelerate or decelerate for platoon formation in their next actions.

Fig. 6. Spatial distribution of instantaneous co-driving sets.

We further compute ICR and ICH by using the trajectory data of April 10, 2018. Figure 7 demonstrates the temporal distribution of the instantaneous co-driving sets. Similar to urban traffic congestion patterns, the total number of available trucks with valid records presents a double-peak phenomenon, as shown in Figure 7(a). In contrast, the proportion of trucks on the national highways, including national trunk roads and expressways, remains stable at around 50% throughout the entire day, which means that most trucks prefer highways over urban arterial road networks to save time. We divide the trucks on national highways into two categories that exhibit opposite patterns: trucks on trunk roads and trucks on expressways. Specifically, trucks on trunk roads tend to deliver goods at night, whereas more trucks on expressways can be observed during daytime.

Fig. 7. Temporal distribution of instantaneous co-driving sets: (a) proportion of trucks on national highways; (b) proportions of trucks on trunk roads and expressways; (c) ICR and ICS distribution on expressways; (d) ICR and ICS distribution on trunk roads.

With the improvement of illumination conditions during daytime, the ICR values of national trunk roads and expressways present a similar trend: more platooned trucks can be seen during daytime than during nighttime, especially on trunk roads, as presented in Figures 7(c) and 7(d). ICR and ICS values rise rapidly before the morning peak hours (i.e., 5 a.m. to 7 a.m.) on both national trunk roads and expressways. One noticeable discrepancy between trucks on trunk roads and those on expressways is the ICR value during nighttime: the ICR value of trunk roads becomes much higher than that of expressways, indicating that trucks are more likely to travel together sporadically on trunk roads. A possible explanation is that there are fewer overloading inspection stations on trunk roads than on expressways and that the inspection process is less strict during nighttime than during daytime, leading more trucks to travel on trunk roads at night. The size of most truck platoons (i.e., the ICS value) throughout the entire day is 2. The ICS value of trunk roads is slightly higher than that of expressways, implying that more trucks couple with each other during each trip on trunk roads.

By aggregating the 15-second intervals into 5-minute intervals, we can further calculate the average headway between two consecutive platooned trucks (i.e., ICH), shown in Figure 8. The ICH value during daytime is generally more stable than that at nighttime, remaining around 215 and 200 m for expressways and trunk roads, respectively. We find that during nighttime, trucks traveling on trunk roads stay closer to each other than those traveling on expressways. This is probably due to the higher speed limit on expressways and the presence of more private vehicles at night: when overtaking between trucks and private vehicles occurs, a likely result is the interruption of truck platoons and the fluctuation of headways, and this phenomenon becomes more severe during nighttime because of the poor illumination conditions. In contrast, trucks on trunk roads tend to be more cautious during nighttime due to the lower speed limit and fewer private vehicles, leading to a relatively stable headway (i.e., ICH) on trunk roads. Considering the small proportion of trucks driving on the expressways and the erratic headway fluctuation, performing platoon coordination among trucks traveling on expressways from 7 p.m. to 5 a.m. is not suggested.

Fig. 8. Temporal distribution of ICH on national trunk roads and expressways.
Under the speed adjustment approach proposed by Liang et al. [22], trucks with relatively small headways can coordinate into a platoon within a relatively short time. From the massive freight trajectory data, the optimal time for trucks in the same instantaneous co-driving set to coordinate into a platoon can be inferred. For national expressways, the best coordination time is between 5 a.m. and 7 p.m., when the ICR values are relatively high and the ICH values are stable at about 215 m. For national trunk roads, all timeframes can be considered for truck platooning owing to the relatively stable ICH and considerable ICR values throughout the entire day.

B. Spontaneous Platoon Pattern Analysis

We then apply the spontaneous platoon pattern mining algorithm of Section 3.3 to extract the largest truck sets with the longest co-driving time periods and analyze the spatiotemporal features and fuel-saving potential of freight platooning in Liaoning Province. A typical case of a valid spontaneous platoon pattern is shown in Figure 9. Three trucks meet coincidentally and participate in the platoon for more than 350 km, with an average instantaneous co-driving headway of around 200 m. The upper-left subgraph presents the zoomed positioning points of the three trucks' trajectories at consecutive timestamps, while the upper-middle subgraph presents the temporal change in the ICR value of the spontaneous platoon pattern during the entire day. The spontaneous platoon is interrupted at 9:51 a.m. because one of the trucks switches from the G1 national expressway to the G102 trunk road. However, the three trucks finally meet again by chance on Tianjin's expressway and continue to spontaneously travel as a group for a while. This example illustrates that the proposed algorithms can well capture trucks' accidental meeting behaviors and identify spontaneous platooning patterns in a large-scale freight network.

Fig. 9. Typical case of a valid spontaneous platoon pattern with three trucks.

To further study the features of the mined spontaneous platoon patterns, we depict the platooning times and distances in Figure 10. Nearly 30% of trucks are platooned for at least 10 minutes and travel as a group for at least 10 km. As early as 2013, Liang et al. [19] revealed that when two trucks travel on the same route with a relatively short headway, a platoon can be formed via speed coordination for fuel savings as long as their overlapping distance is more than 17 times their headway. This theoretical foundation can be combined with the spontaneous platoon patterns mined in this study to estimate fuel savings. From the micro-level perspective, 52.54% of the trucks on national highways (i.e., both expressways and trunk roads) co-drive with other trucks for longer or shorter durations, and the average platooning distance ratio (PDR) and average platooning time ratio (PTR) of these platooned trucks in Liaoning Province are 9.645% and 9.943%, respectively. By jointly considering the average truck headway and the overlapping travel distance among all mined platooning patterns, 35.83% of the patterns can undergo speed coordination for platooning, resulting in a potential 2.767% reduction in total fuel consumption for trucks on national highways on April 10, 2018. This ratio of potential fuel savings is similar to that reported by Muratori et al. [42], where 4.2% fuel savings were achieved through truck platooning.
Fig. 10. Characteristics of spontaneous truck platoon patterns: (a) platooning time distribution; (b) platooning distance distribution.

Common sense suggests that trucks traveling long distances on national highways are more likely to be platooned, with higher energy saving potential, and our findings directly support this assumption. We further outline the relationship between truck daily hauling distance and the platooning indicators (i.e., PDR and PTR) in Figure 11. In general, a longer daily hauling distance implies a higher opportunity for platoon coordination. However, the PDR and PTR values gradually decrease when the daily hauling distance exceeds 1000 km because trucks with relatively long hauling distances tend to participate in interprovincial deliveries, leading to lower platooning performance measures. This issue can be further examined by analyzing nationwide truck trajectory data.

Fig. 11. Spontaneous truck platooning performance with different daily hauling distances.

Spontaneous truck platoon patterns exhibit a certain spatial regularity. Therefore, installing roadside communication units on the segments where platooned trucks routinely travel is more cost efficient than network-wide infrastructure construction. We summarize and visualize the occurrences of truck platoons in Figure 12; the segments with high platooning potential are marked in red. Most of these segments are adjacent to industrial manufacturers and port logistics. The segments with the highest possibility of platooning lie on the trunk roads G228 and G305, which are the major corridors of Panjin Port, one of the largest ports in northern China. More spontaneous platooning patterns are observed in the Panjin–Jinzhou section of the G1 expressway and the Yingkou–Dalian section of the G15 expressway. The G1 section connects Jinzhou, the largest logistics center of Liaoning Province, with Panjin, the base of the petroleum and petrochemical industry. The G15 expressway facilitates the cargo flow between the two major harbor cities of Liaoning Province (i.e., Yingkou and Dalian). As a result of the radiation effect of Yingkou Port, a high proportion of spontaneous platoon patterns occur on the G102 trunk road, which serves as a juncture of the G1 national expressway. Accordingly, the site selection of marshalling stations for assembling truck platoons can be prioritized based on the spatial distribution of spontaneous platoon patterns.

Fig. 12. Road segments with spontaneous truck platooning potential in Liaoning Province.

VI. CONCLUSIONS

In the past decade, truck platoon planning has shifted from theoretical derivation toward closed-road tests. Truck platooning can significantly reduce fuel consumption and air pollution. To reveal the potential of platooning in large-scale freight networks, a series of data mining approaches is proposed to mine spontaneous platooning patterns from massive truck trajectory data. A map matching approach with heading detection is developed to identify the relative direction of a truck traveling on a single-line bidirectional road segment. An enhanced OPTICS algorithm is further proposed to recognize the
instantaneous co-driving sets at each timestamp. This algorithm overcomes the issue of mistaken clustering by replacing the conventional two-dimensional Euclidean distance with the one-dimensional vehicle following distance, and it detects the correct convoy relationships among multiple trucks by considering the changes in reachability distances. We further aggregate multiple instantaneous co-driving sets and adopt frequent itemset mining with pruning rules to find spontaneous platoon patterns that satisfy platoon size and duration constraints. The fuel savings due to truck platooning are then computed by using a well-established energy consumption estimation model.

To validate the effectiveness of the proposed truck platoon pattern mining approaches, we leverage the extensive truck trajectory data of 26,405 trucks with 28.3345 million GPS records registered in Liaoning Province, China, on April 10, 2018. By matching against the OSM digital map data, we find that at least 52.54% of trucks spontaneously coupled with each other for a while on a given segment, and the average platooning distance and time ratios are 9.645% and 9.943%, respectively. Among the spontaneous platoon patterns, 35.83% of the patterns need proper speed coordination for platooning, and the resulting energy savings could be as high as 2.767%. The majority of trucks prefer either national freeways or trunk roads. Thus, we further calculate the instantaneous co-driving ratio, size, and headway for both road types and find that trucks are more likely to be platooned during daytime than during nighttime, with an average platooning headway of around 200 m. The co-driving ratio of trunk roads is higher than that of national freeways, probably because of the lower speed limits on trunk roads and the smaller number of private vehicles at night, which cause less interference with truck platoons. We also reveal that the average platooning distance and time ratios are positively related to each truck's daily haul distance when that distance is less than 1000 km but decline for longer distances because such trucks may undertake interprovincial deliveries. The road segments with high platooning potential are highlighted on the map; most spontaneous truck platoons are found in locations adjacent to ports and logistics hubs.

On the basis of the mined patterns, the following policy implications can be made to better guide the implementation of autonomous truck platooning:

(1) Platooning strategies could be time dependent. Speed coordination can be executed from 5 a.m. to 7 p.m. for trucks traveling on national freeways, whereas trucks on trunk roads can be platooned throughout the day.

(2) The typical space headway for spontaneous truck platoon planning is 200 m. Thus, trucks may be equipped with medium-range communication devices to coordinate with a following or leading truck for platooning.

(3) Almost half of the trucks traveling in Liaoning Province can be readily platooned through speed coordination without changing their schedules and routes.

(4) Investment priority should be given to roadside communication infrastructure in areas where truck platoons are more likely to occur. Taking Liaoning Province as an example, the G228 and G305 national freeways linking to Yingkou Port, the Panjin–Jinzhou section of the G1 expressway, and the Yingkou–Dalian section of the G15 expressway should be considered for installing V2X communication equipment.
Future research efforts can be made in the following aspects. Long-distance travel for interprovincial deliveries should be investigated to assess truck platooning potential. The efficiency of the A-OPTICS algorithm for instantaneous co-driving set detection is not yet satisfactory, and the algorithm needs to be parallelized. A future study should also focus on optimizing speed coordination and schedule alterations on the basis of the mined spontaneous truck platooning patterns.

APPENDIX

A. Pseudocode for the Enhanced OPTICS Algorithm

1) Algorithm FindValley

Attached Fig. 1. Function FindValley.

2) Function Adaptive Recognition

Attached Fig. 2. Function Adaptive Recognition.

3) Algorithm A-OPTICS

Attached Fig. 3. Algorithm A-OPTICS.

B. Calculation Approach for Following Distance

To compute the following distance between two trucks, whether they are in a following relationship first needs to be determined. An intuitive idea is to use the differences in the positioning and heading data of the two trucks at consecutive timestamps for a comprehensive judgment. However, due to the inherent error of the recorded headings and the variety of road layouts, no single threshold can judge the following relationship accurately. Fortunately, the matched segment $Segment_{o_i}$ of truck $o_i$, combined with its relative driving direction $dir_{o_i}$ and the headed node $ToNode_{o_i}$, can help in accurately recognizing the following relationship. Here, $r_{o_i}$ is the ratio of the length from the segment's start node to the matched location $o_i$ to the total segment length.

The most straightforward situation is that the two trucks are traveling on the same segment at a given moment, so the relative driving directions directly determine whether they are in a following relationship, and the following distance can be calculated accordingly. When the relative driving directions are the same, the following distance $FD(o_1, o_2)$ is determined by the difference between their locations $r_{o_1}$ and $r_{o_2}$ combined with the segment length, as shown in Attached Table I. When the relative driving directions are opposite, the trucks cannot be following each other at this moment, and the following distance is set to infinity.

However, the segments that make up the roads in the digital map differ greatly in length. A road of length $\varepsilon$ can be represented by single lines, dual lines, or a combination of both, as shown in Attached Tables II to IV.

ATTACHED TABLE I
FOLLOWING DISTANCE FOR TRUCKS ON THE SAME SEGMENT

Scenarios 1 and 3 (dir_o1 ≠ dir_o2):  FD(o1, o2) = +∞
Scenarios 2 and 4 (dir_o1 = dir_o2):  FD(o1, o2) = Seglen · |r_o1 − r_o2|

ATTACHED TABLE II
FOLLOWING DISTANCE FOR TRUCKS ON DIFFERENT SINGLE SEGMENTS

Scenario 1: Segment_o1 and Segment_o2 each lie on both ETEpath(o1, o2) and ETEpath(o2, o1)
            →  FD(o1, o2) = +∞
Scenario 2: only Segment_o1 lies on ETEpath(o1, o2) and ETEpath(o2, o1)
            →  FD(o1, o2) = ETEdis(o2, o1) + Edgelen_o2 · θ_o2 − Edgelen_o1 · θ_o1
Scenario 3: neither Segment_o1 nor Segment_o2 lies on ETEpath(o1, o2) or ETEpath(o2, o1)
            →  FD(o1, o2) = +∞
Scenario 4: only Segment_o2 lies on ETEpath(o1, o2) and ETEpath(o2, o1)
            →  FD(o1, o2) = ETEdis(o1, o2) + Edgelen_o1 · θ_o1 − Edgelen_o2 · θ_o2
As a result, not all pairs of considered co-driving trucks will be located on the same segment, which poses a challenge to recognizing the following relationship and determining the following distance. To reduce the calculation time, only pairs of trucks whose geographical distance is lower than $\varepsilon$ are examined. When two trucks are not located on the same segment, we introduce the end-to-end route $ETEpath(o_1, o_2)$ and the end-to-end distance $ETEdis(o_1, o_2)$, defined as follows.

End-to-end Route $ETEpath(o_1, o_2)$: the end-to-end route for truck $o_1$ to truck $o_2$ is defined as the shortest network route from $ToNode_{o_1}$ to $ToNode_{o_2}$ in the national road network consisting of national freeways and trunk roads.

End-to-end Distance $ETEdis(o_1, o_2)$: the end-to-end distance for truck $o_1$ to truck $o_2$ is defined as the length of the end-to-end route $ETEpath(o_1, o_2)$.

Considering that U-turns are not allowed for trucks on the national road network, the catch-up distance of truck $o_1$ to truck $o_2$ can be calculated as $CD(o_1, o_2) = Dis(o_1, ToNode_{o_1}) + ETEdis(o_1, o_2) + Dis(o_2, ToNode_{o_2})$, and the following distance between the two trucks can be calculated as $FD(o_1, o_2) = \min(CD(o_1, o_2), CD(o_2, o_1))$. The following distance is symmetric, that is, $FD(o_1, o_2) = FD(o_2, o_1)$.

ATTACHED TABLE III
FOLLOWING DISTANCE FOR TRUCKS ON DIFFERENT DUAL SEGMENTS

Scenario 1: ETEdis(o1, o2) ≫ ε and ETEdis(o2, o1) ≫ ε
            →  FD(o1, o2) = +∞
Scenario 2: ETEdis(o2, o1) ≫ ε only
            →  FD(o1, o2) = ETEdis(o1, o2) + Edgelen_o1 · θ_o1 − Edgelen_o2 · θ_o2

Scenarios 3 and 4 mirror Scenarios 1 and 2 with the roles of o1 and o2 exchanged.

ATTACHED TABLE IV
FOLLOWING DISTANCE FOR TRUCKS ON DIFFERENT SEGMENT TYPES

Scenario 1: ETEdis(o2, o1) ≫ ε, Segment_o2 ∈ ETEpath(o1, o2), Segment_o1 ∉ ETEpath(o1, o2)
            →  FD(o1, o2) = ETEdis(o1, o2) + Edgelen_o1 · θ_o1 − Edgelen_o2 · θ_o2
Scenario 2: ETEdis(o1, o2) ≫ ε, Segment_o2 ∉ ETEpath(o2, o1), Segment_o1 ∉ ETEpath(o2, o1)
            →  FD(o1, o2) = +∞
Scenario 3: ETEdis(o2, o1) ≫ ε, Segment_o2 ∈ ETEpath(o1, o2), Segment_o1 ∈ ETEpath(o1, o2)
            →  FD(o1, o2) = +∞
Scenario 4: ETEdis(o1, o2) ≫ ε, Segment_o2 ∉ ETEpath(o2, o1), Segment_o1 ∈ ETEpath(o2, o1)
            →  FD(o1, o2) = ETEdis(o2, o1) + Edgelen_o2 · θ_o2 − Edgelen_o1 · θ_o1

In Attached Tables II to IV, $\theta_{o_i}$ represents the ratio of the remaining distance that truck $o_i$ travels in its current driving direction toward its heading node $ToNode_{o_i}$ to the length of the current segment, which can be obtained from Attached Equation (1):

$$\theta_{o_i}=\begin{cases}1-r_{o_i}, & dir_{o_i}=0\\ r_{o_i}, & dir_{o_i}=-1\end{cases}\tag{1}$$

Attached Table II covers all the possibilities for determining whether two trucks close to each other are co-driving on a trunk road represented by single bidirectional segments. The dotted lines in the original figures refer to the segments that connect $Segment_{o_1}$ and $Segment_{o_2}$, on which the trucks are located. Obviously, the catch-up distance $CD(o_1, o_2)$ of two trucks driving face to face or back to back may be smaller than $\varepsilon$ because of the two-way movement on bidirectional segments, thus resulting in misidentification of the following relationship, as in scenarios 1 and 2 of Attached Table II. Fortunately, the end-to-end route $ETEpath(o_1, o_2)$ can effectively assist in eliminating such misidentification.
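A compact version of the following-distance logic, covering the same-segment rule of Attached Table I and the catch-up computation for different segments, might look as follows. It is a sketch under assumed, precomputed lookup tables, and it omits the ETEpath membership screening of Attached Tables II-IV that filters out non-following configurations.

```python
INF = float("inf")

def catch_up(o1, o2, ete_dis, dis_to_tonode):
    """CD(o1,o2) = Dis(o1,ToNode_o1) + ETEdis(o1,o2) + Dis(o2,ToNode_o2)."""
    return dis_to_tonode[o1] + ete_dis[(o1, o2)] + dis_to_tonode[o2]

def following_distance(o1, o2, same_seg, seg_len, d, r,
                       ete_dis, dis_to_tonode):
    """d[o]: relative direction (0 or -1); r[o]: position ratio on the
    matched segment; ete_dis and dis_to_tonode are assumed lookups."""
    if same_seg:                               # Attached Table I rule
        return seg_len * abs(r[o1] - r[o2]) if d[o1] == d[o2] else INF
    return min(catch_up(o1, o2, ete_dis, dis_to_tonode),
               catch_up(o2, o1, ete_dis, dis_to_tonode))
```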
As shown in Attached Table II, when the two trucks are in a following relationship, the end-to-end routes $ETEpath(o_1, o_2)$ and $ETEpath(o_2, o_1)$ each contain, and only contain, the segment where the leading truck is located. Therefore, the following distance is calculated when two trucks satisfy this feature; otherwise, it is set to infinity.

Attached Table III covers all the possibilities for determining whether two trucks close to each other are co-driving on a national freeway represented by dual unidirectional segments. When two trucks are not in a following relationship, the end-to-end distances $ETEdis(o_1, o_2)$ and $ETEdis(o_2, o_1)$ are both significantly higher than $\varepsilon$. In contrast, only one of the two end-to-end distances is significantly higher than $\varepsilon$ when the trucks are co-driving together. Therefore, the following distance of paired trucks that satisfy this feature is calculated, while the following distance of all other pairs is directly set to infinity. The conclusion also holds when the positions of $o_1$ and $o_2$ are exchanged.

In general, the conversion between roads represented by single bidirectional segments and roads represented by dual unidirectional segments occurs at the junctions of national freeways and trunk roads. Therefore, an essential step is to discuss the four possibilities for determining whether two trucks are co-driving when they are located on bidirectional and unidirectional segments, respectively. Because the setting is the national road network, the end-to-end distance and end-to-end route exhibit unique features when the paired trucks are co-driving together: the leading truck's end-to-end distance is significantly higher than $\varepsilon$, while the end-to-end route of the following truck contains the segment where the leading truck is located but not its own segment. These features are shown in scenarios 1 and 4 of Attached Table IV. Paired trucks that satisfy these features are co-driving together, and their following distance needs to be calculated.

REFERENCES

[1] OECD, ITF Transport Outlook 2013: Funding Transport. Paris: OECD Publishing, 2013.
[2] A. Schroten, G. Warringa, and M. Bles, "Marginal abatement cost curves for heavy duty vehicles," Delft, 2012.
[3] C. Bergenhem, H. Pettersson, E. Coelingh, C. Englund, S. Shladover, and S. Tsugawa, "Overview of platooning systems," 2012.
[4] J. Patten, B. McAuliffe, W. Mayda, and B. Tanguay, "Review of aerodynamic drag reduction devices for heavy trucks and buses," Ottawa, 2012.
[5] A. Davila, E. Aramburu, and A. Freixas, "Making the best out of aerodynamics: Platoons," SAE Technical Paper, vol. 2, 2013, doi: 10.4271/2013-01-0767.
[6] B. van Arem, C. J. G. van Driel, and R. Visser, "The impact of cooperative adaptive cruise control on traffic-flow characteristics," IEEE Trans. Intell. Transp. Syst., vol. 7, no. 4, pp. 429–436, Dec. 2006, doi: 10.1109/TITS.2006.884615.
[7] J. Lioris, R. Pedarsani, F. Y. Tascikaraoglu, and P. Varaiya, "Platoons of connected vehicles can double throughput in urban roads," Transp. Res. Part C Emerg. Technol., vol. 77, pp. 292–305, Apr. 2017, doi: 10.1016/j.trc.2017.01.023.
[8] F. Zhu and S. V. Ukkusuri, "Modeling the proactive driving behavior of connected vehicles: A cell-based simulation approach," Comput.-Aided Civ. Infrastruct. Eng., vol. 33, no. 4, pp. 262–281, Apr. 2018, doi: 10.1111/mice.12289.
[9] D. Zhang, Z. Xu, D. Srinivasan, and L. Yu,
Xiaolei Ma is an associate professor in the School of Transportation Science and Engineering at Beihang University, China. He received his Ph.D. degree from the University of Washington, Seattle, in 2013. His research areas mainly lie in public transit operation and planning, shared mobility modeling, and transportation big data analytics. To date, he has published over 100 peer-reviewed journal articles and conference papers. Dr. Ma serves as an associate editor of IEEE Transactions on Intelligent Transportation Systems and IET Intelligent Transport Systems, and is an editorial board member of Transportation Research Part C. He is also a member of the TRB Artificial Intelligence and Advanced Computing Applications Committee. Dr. Ma has received several academic awards, including the Young Elite Scientist Sponsorship Program of the China Association for Science and Technology, the Beijing Outstanding Youth Program, and the Beijing Nova Program.
Enze Huo is currently pursuing the M.S. degree with the Beijing Key Laboratory for Cooperative Vehicle Infrastructure Systems and Safety Control, School of Transportation Science and Engineering, Beihang University, Beijing, China. His research interests include traffic data mining and freight platoon optimization.
Haiyang Yu is an associate professor of the School of Transportation Science and Engineering at Beihang University. He received his Ph.D. degree in traffic environment and safety technology from Jilin University in 2012. His research interests include traffic big data, state characteristic information extraction and expression for road networks, and traffic control and simulation.
Honghai Li is currently pursuing the Ph.D. degree with the Beijing Key Laboratory for Cooperative Vehicle Infrastructure Systems and Safety Control, School of Transportation Science and Engineering, Beihang University, Beijing, China. His research interests include traffic data analytics and intelligent transportation systems.
ai_researcher
2
Generative_Agents_Interactive_Simulacra_of_Human_Behavior.pdf
arXiv:2406.14228v2 [cs.AI] 11 Jul 2024
EVOAGENT: Towards Automatic Multi-Agent Generation via Evolutionary Algorithms
Siyu Yuan1∗, Kaitao Song2∗†, Jiangjie Chen1, Xu Tan2, Dongsheng Li2, Deqing Yang1†
Fudan University1, Microsoft Research Asia2
[email protected], {kaitaosong, xuta, dongsli}@microsoft.com
{jjchen19,yangdeqing}@fudan.edu.cn
https://evo-agent.github.io
∗ The first two authors have equal contributions. This work was done when the first author was an intern at Microsoft Research Asia. † Corresponding authors. Preprint. Under review.
Abstract
The rise of powerful large language models (LLMs) has spurred a new trend of building LLM-based autonomous agents for solving complex tasks, especially multi-agent systems. Despite the remarkable progress, we notice that existing works are heavily dependent on human-designed frameworks, which greatly limits the functional scope and scalability of agent systems. How to automatically extend a specialized agent to a multi-agent system to improve task-solving capability still remains a significant challenge. In this paper, we introduce EVOAGENT, a generic method to automatically extend expert agents to multi-agent systems via evolutionary algorithms, thereby improving the effectiveness of LLM-based agents in solving tasks. Specifically, we consider an existing agent framework as the initial individual and then apply a series of evolutionary operators (e.g., mutation, crossover, selection, etc.) to generate multiple agents with diverse agent settings. EVOAGENT can be generalized to any LLM-based agent framework, and can automatically extend the existing agent framework to a multi-agent system without any extra human design. Experimental results across various tasks show that EVOAGENT can automatically generate multiple expert agents and significantly enhance the task-solving capabilities of LLM-based agents.
1 Introduction
Recently, large language models (LLMs) [1, 2, 3, 4] have shown remarkable capabilities in solving language understanding, reasoning, and generation tasks. Based on the foundation of LLMs, many research works [5, 6, 7, 8, 9, 10, 11] have discovered that, by empowering agents with multiple advanced skills (e.g., planning, tools, memory, and so on), we can develop more powerful autonomous agents to solve more challenging tasks. Therefore, how to design and leverage LLM-based autonomous agents to tackle more diverse and complex real-world applications has attracted enormous interest.
Generally, many real-world scenarios are complex, encompassing a variety of challenging tasks that are beyond the capability of a single agent. To address this point, we notice that human society is composed of vast numbers of individuals, each possessing their own unique characteristics. By selecting, orchestrating, and cooperating with different individuals, humans can form efficient teams to handle complicated missions in the real world. Therefore, there has been an increasing trend to develop multi-agent collaboration frameworks (e.g., MetaGPT [10], AutoGen [12], Camel [13], Generative Agents [11]) to simulate human behaviors for solving complex tasks. By developing a series of expert agents with diverse settings, multi-agent systems enable us to reveal emergent abilities among multiple agents and synergize their specialized expertise to achieve superior performance, akin to simulating human populations.
Nevertheless, it is worthy noting that, in most of (multi)-agent frameworks, their designs heavily depend on handcrafted settings, including character roles, task scopes, skills, and prompt settings. Although we admit that meticulous human design is quite useful for instructing LLM-based agents to understand tasks, it also limits scaling up the number of agents to further improve performance due to expensive human labor. Considering the increasing popularity of LLM-based autonomous agents, how to create a generic agent generation paradigm to automatically build multi-agent systems has emerged as a critical challenge. In this paper, we introduce a novel method, EVOAGENT, that formulates agent generation as the evolutionary processing [14] in human society. Specifically, to align human society, each agent can be considered as individuals that can procreate its population across successive generations. Motivated by this mechanism, we can simulate such a human behavior to automatically generate multiple agents based on any pre-defined agents. Therefore, EVOAGENT can be considered as a one-shot agent generation method that starts from a specialized agent as the initial agent, and then considers its settings (e.g., role, skills, prompts, and so on) as the variables to be evolved. With a series operation of EAs (e.g., selection, crossover, mutation), EVOAGENT can automatically create multiple evolutionary agents based on the initial specialized agent. Moreover, EVOAGENT is not limited to the infrastructure of agent frameworks, as it is a generic multi-agent generation method. Thus, it can be applied to any agent framework and expanded to multi-agent systems without any extra human effort. We conduct experiments on multiple datasets, including knowledge-based question answering and multi-modal reasoning (§ 4.1), interactive scientific solving (§ 4.2) and real-world complex planning (§ 4.3). Experimental results indicate that EVOAGENT can generate multiple agents with diverse skills and harness their capabilities to consistently improve model performance in different scenarios. Besides, to validate the scalability of EVOAGENT in creating massive agents, we also apply our method to some conversational scenarios (e.g., debate), and the results also indicate the potential of EVOAGENT in generating multiple diverse agents. Overall, the contributions of this paper can be summarized as below: • We introduce EVOAGENT, a simple and generic multi-agent generation method to improve the effectiveness of LLM-based agents in solving tasks. EVOAGENT can automatically generate new expert agents and is applicable to any agent framework. • We formulate the agent generation processing as an evolutionary pipeline, that encompasses multiple operators (e.g., selection, crossover, mutation) to generate agent population without additional human supervision. • We conduct extensive experiments on various tasks and demonstrate the effectiveness, scalability, and generality of our EVOAGENT. Particularly, EVOAGENT can significantly enhance the perfor- mance of LLM-based agents in both challenging open-world scenarios and complex real-world planning by generating more specialized agents. 2 Related Work LLM-based Autonomous Agents With the emergence of powerful large language models [1, 2, 3, 4], many researchers have endeavored to develop advanced autonomous agents [5, 6, 7] empowered by multiple high-level LLM skills (e.g., personas [11, 15, 16], planning [9, 17, 18, 19], tool [8, 6, 20, 21] and memory [22, 23]). 
Some of them also extend agent frameworks to multi-agent collaboration (e.g., MetaGPT [24], Generative Agents [11], AutoGen [12], Camel [13], AgentVerse [25], and so on) by designing multiple specific roles. These systems demonstrate satisfactory performance in addressing massive, challenging tasks. However, it is worth noting that most of the popular agent frameworks rely heavily on handcrafted designs. The abundant human effort necessitated by these systems also limits the adaptability and flexibility of agents in handling unexpected challenges [26, 27, 25, 10]. In this paper, we propose EVOAGENT, a method that can be applied to any LLM-based agent framework and easily extends it to a multi-agent system. By using EA, our method allows us to iteratively generate and optimize multiple agents with diverse settings.
Agent Generation Recent studies have shown that assigning personas or roles to LLM-based autonomous agents can influence their behavior and performance in generation tasks [28, 29, 30, 13]. Current methods primarily involve manually assigning these personas and limit multi-agent collaboration to single or fixed roles, which requires significant human effort and hinders generalization [13, 12, 24, 10]. To address this, frameworks such as AgentVerse [25] and AutoAgents [31] have been proposed to automatically generate unlimited agents for collaborative task completion. However, these methods still depend heavily on human-designed interventions, which limits their scalability and functionality. For example, AutoAgents requires agent settings to satisfy a “Planner - Agent Observer - Plan Observer” framework, while AgentVerse formulates a pipeline of “Expert Recruitment - Collaborative Decision Making - Action Execution - Evaluation” to build agents. These architectures also limit the task scope of the designed agents. In contrast, EVOAGENT can automatically turn current agent frameworks into multi-agent systems with high-quality generated expert agents by using EAs, and is flexible and adaptable to various agent frameworks.
3 Method
Generally, human society comprises a broad spectrum of individuals from diverse cultures, encompassing multiple generations. To solve specific tasks, human society usually involves many expert individuals and aggregates their specialized expertise to achieve a better answer.
Algorithm 1: Multi-Agent Generation with Evolutionary Algorithm
Require: Initial agent A(0,0); population size N per iteration; number of iterations T; quality-check module LLMQuality(·); evolutionary operations EvoCrossover(·), EvoMutation(·), and EvoUpdate(·)
Input: Initial result R0 derived from A(0,0)
Output: Final result RT
1: for t = 1 to T do
2:   Crossover: update the settings of the parent agents based on their generated results and the initial agent: {A′(0,t−1), A′(1,t−1), ..., A′(N−1,t−1)} ← EvoCrossover({R(0,t−1), R(1,t−1), ..., R(N−1,t−1)}, A(0,0));
3:   Mutation: generate N′ (N′ > N) child agents based on the parent agents and the initial agent: {A(0,t), A(1,t), ..., A(N′−1,t)} ← EvoMutation({A′(0,t−1), A′(1,t−1), ..., A′(N−1,t−1)}, A(0,0));
4:   Selection: select N high-quality agents with the quality-check module: {A(0,t), ..., A(N−1,t)} ← LLMQuality({A(0,t), A(1,t), ..., A(N′−1,t)});
5:   Result Update: generate new results from the selected agents: {R(0,t), R(1,t), ..., R(N−1,t)} ← {A(0,t), A(1,t), ..., A(N−1,t)};
6:   Integrate the results, akin to natural selection: Rt ← EvoUpdate({R(0,t), R(1,t), ..., R(N−1,t)}, Rt−1);
7: end for
8: return RT
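As a concrete reading of Algorithm 1 above, the sketch below outlines one possible implementation of the crossover-mutation-selection-update loop; the step-by-step pipeline itself is described next. The llm() helper, the Agent container, and all prompt wording are hypothetical placeholders rather than the authors' released code.

```python
from dataclasses import dataclass

@dataclass
class Agent:
    role: str
    skills: str  # natural-language skill / system-prompt description

def llm(prompt: str) -> str:
    raise NotImplementedError  # plug in any chat-completion backend here

def solve(agent: Agent, task: str) -> str:
    return llm(f"You are {agent.role}. Skills: {agent.skills}\nTask: {task}")

def evo_agent(initial: Agent, task: str, n: int = 1, iterations: int = 3) -> str:
    parents, result = [initial], solve(initial, task)
    for _ in range(iterations):
        # Crossover: revise the parents' settings in light of their results.
        revised = [Agent(p.role, llm(f"Result: {solve(p, task)}\n"
                                     f"Improve these skills: {p.skills}"))
                   for p in parents]
        # Mutation: derive N' > N child agents that differ from their parents.
        children = [Agent(llm(f"Propose an expert role unlike {p.role}"),
                          llm(f"Write skills distinct from: {p.skills}"))
                    for p in revised for _ in range(n + 1)]
        # Selection: an LLM quality check keeps at most n novel, useful children.
        kept = [c for c in children
                if "yes" in llm(f"Is {c.role} useful and novel? yes/no").lower()][:n]
        # Result update: integrate the new candidate results with the previous one.
        merged = "\n".join(solve(c, task) for c in kept)
        result = llm(f"Merge into one improved answer:\n{result}\n{merged}")
        parents += kept  # evolved agents also join the next generation
    return result
```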
Thus, it can be considered as the foundation to facilitate multi-agent collaborations. To fulfill this point, how to automatically create multiple agents would be very critical. Inspired by evolutionism, we formulate agent generation as an evolutionary process to help us generate multiple agents without any human labor. 3.1 Preliminary Evolutionary algorithm (EA) [32, 33], is a general algorithm to simulate the biological behaviors in evolution, including reproduction, mutation, recombination, and selection. By introducing genetic algorithm [34, 35, 36, 37, 38] of the “survival of the fittest” mechanism, it can also be considered as an optimization method to improve individuals. Therefore, EAs also belong to the non-parametric learning method, which can be applied to any framework. All we need to do is define which parts should be evolved and the corresponding evolutionary operators. We also note some recent 3 works [39, 40] indicate the potential of EAs that can be applied to optimize discrete prompts. So, in this paper, we explore how to formulate the agent generation problem as an evolutionary task. 3.2 EVOAGENT By assigning various settings to specific skills (e.g., role-playing, planning, tools and so on), agents could exhibit diverse task-solving capabilities. Therefore, our objective is to produce a population of agents with distinct skills, to establish effective multi-agent systems. To fulfill this point, we treat each specialized agent as an unique individual and denote each skill as the part to be evolved, akin to humans. So, we consider the procedure of agent generation to be evolutionary processing. Specifically, existing frameworks usually describe agent skills as the language. Thus, we can employ LLM to simulate evolutionary operators to update the system settings of agents and create new agents. Here, we formulate the procedure of EVOAGENT as a four-stage pipeline: STEP 1: Initialization To conduct EAs, we first need to confirm our initial agents. Here, we enable EVOAGENT to start from a pre-defined agent framework (e.g., MetaGPT [10] and AutoGen [12]), which serves as the initial (parent) agents. More- over, we also define which parts of this agent should be upgraded. Generally, since EAs is a generic algorithm, EVOAGENT is applicable to any agent frameworks and extends them as multi-agent frameworks. We will then explore how to generate new agents in the next steps. STEP 2: Crossover & Mutation In the first iteration, we directly use the initial agents as the parents. And then, we design two kinds of evolutionary operators, named Crossover and Mutation. For Crossover, we first enable the parent agents to generate results based on user requests. Then, based on the generated results, we ask LLMs to check which skills should be improved and then update them. This mechanism allows us to generate child agents in new settings without requiring any human labor. More- over, we also need to guarantee the diver- sity between the child agents and parents. To this end, we design a Mutation operation that requires LLMs to compare the child agents and parent agents and then modify the child agents to make them distinct from their parents while maintaining their task- solving capability. Based on these evolu- tionary operators, we can generate effective and diverse agents during one iteration. Be- sides, as we also need to conduct multiple iterations, we will append all agents generated in the previous generation into the next iteration. 
How to select these agents during each iteration will be introduced next. Figure 1: The illustration of EVOAGENT. With the generated multiple expert agents, EVOAGENT can gen- erate a better travel plan to meet user preferences. For EA operators, Crossover can improve the results of parent agents by adjusting existing details (e.g., the in- formation marked as blue). Mutation can introduce new variations to refine the results of parent agents by generating child agents with new characteristics (e.g., the information marked as red). STEP 3: Selection Based on the above steps, we can obtain multiple candidate agents with diverse settings. To guarantee the quality of each agent, we also introduce a selection mechanism like EAs. Here, we conduct a quality-check module with an LLM to detect whether the generated agents can satisfy it has inherited the characteristics and maintained differences from parent agents. We will select N child agents as the evolved agents in each iteration. 4 Query: Please create a travel plan where I'll depart from Washington and head to Myrtle Beach for a 3-day trip from March 13th to March 15th, 2022. Can you help me keep this journey within a budget of $1,400? It's vital that my accommodations are pet-friendly.STEP 1: InitializationInitial AgentHuman WrittenSTEP 2: EA Operation Crossover & MutationDay 1: Current City: from Washington to Myrtle Beach Lunch: Exotic India Attraction: SkyWheel Myrtle Beach Accommodation: Cozy Brooklyn RoomInitial AgentAccommodation AgentTransportation AgentDay 1: Current City: from Washington to Myrtle Beach Breakfast: Exotic India, Myrtle Beach Lunch: Catfish Charlie's, Myrtle Beach Attraction: SkyWheel Myrtle Beach Accommodation: Large sunny park slope apartment, pet-friendlyDay 1: Current City: from Washington to Myrtle Beach Transportation: Flight Number: F3792603 Breakfast: Exotic India, Myrtle Beach Lunch: Catfish Charlie's, Myrtle Beach Attraction: SkyWheel Myrtle Beach Accommodation: Cozy Brooklyn Room Hotel AgentSTEP 3: SelectionQuality CheckThis agent has a duplicate type with Accommodation Agent, so it is discarded.STEP 4: Results UpdateUpdate OperationDay 1: Current City: from Washington to Myrtle Beach Transportation: Flight Number: F3792603 Breakfast: Exotic India, Myrtle Beach Lunch: Catfish Charlie's, Myrtle Beach Attraction: SkyWheel Myrtle Beach Accommodation: Large sunny park slope apartment, pet-friendly Table 1: Results of LLMs with different methods on Logic Grid Puzzle (Logic), Trivia Creative Writing (Writing) and Codenames Collaborative (Codenames). The best results are bolded, and the second best ones are underlined. Model Method Logic Writing Codenames LLama2-13B-Chat GPT-3.5 GPT-4 4.00 Direct 26.00 CoT 33.50 Self-Refine3 SPP 0.00 EVOAGENT(1,3) 35.50 48.00 Direct 47.50 CoT 47.50 Self-Refine3 SPP 56.00 EVOAGENT(1,3) 71.50 60.50 Direct 65.50 CoT 64.50 Self-Refine3 SPP 64.50 EVOAGENT(1,3) 77.00 28.00 46.00 31.20 4.00 49.60 56.20 51.00 59.19 54.40 60.80 75.40 74.00 74.60 79.20 84.40 0.00 18.00 12.37 1.00 27.83 76.29 71.13 46.39 61.86 79.38 79.38 80.41 79.38 78.35 84.53 STEP 4: Results Update Based on the above steps, we obtain many new agents that evolved from parent agents, but with diverse settings. To improve task-solving capabilities, we ask each child agent to generate candidate results and then use LLMs to integrate these candidates with the result from the previous iteration into a new result, akin to a natural selection processing stage. 
Moreover, we can automatically generate more agents by repeating the operations from step 2 to step 4 until the number of agents has fulfilled our targets. By introducing EA, EVOAGENT enables us to automatically extend the existing agent framework to a multi-agent system without any extra human designs. The mechanism also makes EVOAGENT can be applied to any agent framework without any prerequisites. The entire process is illustrated in Figure 1. And we also present the details of EVOAGENT in Algorithm 1. 4 Experiment In this section, we adopt EVOAGENT to multiple applications to illustrate that EVOAGENT can help LLM-based agents better accomplish tasks with multi-agent generation.3 Furthermore, we also demonstrate that EVOAGENT can be applicable in supporting currently widely used multi-agent frameworks, such as MetaGPT [10], AutoGen [12], and Camel [13]. 4.1 NLP and Multi-Modal Tasks Benchmarks To align previous experiences (e.g., Self-Refine [41] and Solo Performance Prompt- ing [42]), we select three NLP knowledge-intensive and reasoning-intensive tasks from [42] and one multi-modal task: • Logic Grid Puzzle is a reasoning task with 200 puzzles featuring 2 to 5 unique occupants in different houses. The aim is to identify house numbers for one occupant with provided clues. • Trivia Creative Writing is a knowledge-intensive task consisting of 100 instances. This task requires a model to write a coherent story while incorporating answers to N trivia questions. • Codenames Collaborative is a reasoning-intensive task with 50 instances. It involves a model identifying target words based on a given hint and a complete list of words. • MMMU [43] is a comprehensive benchmark for college-level, multi-discipline multi-modal understanding and reasoning. MMMU has three levels of difficulty: easy, medium, and hard. We evaluate EVOAGENT against baselines using the multiple-choice questions in the validation set of MMMU, which includes 847 questions spanning 30 different domains. 3The data examples of EVOAGENT on these tasks are provided in Appendix F. 5 Figure 2: Overall results of GPT-4V and Gemini-Pro with different methods on the MMMU validation set. We also compare the performance of GPT-4V and Gemini-Pro across three difficulty levels. Baselines For NLP tasks, we select LLama2-13B-Chat [3], GPT-3.5 [44] and GPT-4 [1] as our backbone networks. We compare EVOAGENT with 0-shot learning (Direct), Chain-of-thought (CoT) prompting [45] and Self-Refine [41] and Solo Performance Prompting (SPP) [42]. For Self-Refine, we follow [41] to design feedback and refine prompts with three iterations. SPP is not a multi-agent collaboration framework but a prompting strategy that asks a single LLM to identify and discuss with multiple personas with few-shot learning. For SPP, we follow the original setting [42] to make a fair comparison. For MMMU, we select GPT-4V [46] and Gemini-Pro as the backbone and compare EVOAGENT with CoT prompting, Self-Refine, and SPP. 4 Evaluation Metrics For all benchmarks, we adhere to the evaluation metrics specified in the original setting. Specifically, for Logic Grid Puzzle and MMMU tasks, we report the accuracy of all questions. For Trivia Creative Writing, we measure the ratio of correctly mentioned answers in the trivia questions. For Codenames Collaborative, we calculate the overlapping ratio between the predicted words from the Guesser and the target words as the metric. 
Result & Analysis In our experiments, we adopt the agent settings of [42] (for NLP tasks) and [43] (for MMMU) as the initial agent. For our method, we denote it as EVOAGENT(N,T ), where N is the population size generated in each iteration, and T is the number of iterations. Here, to align with Self-Refine, we set N as 1 and T as 3, which means we conduct three iterations, each of which generates a new expert agent. Our results are reported in Table 1, and we can observe: 1. By utilizing multiple generated agents, EVOAGENT can greatly improve LLM performances in both NLP knowledge and reasoning tasks. We also compare EVOAGENT with some pre-defined agent generation frameworks, e.g., AgentVerse [25] and AutoAgent [31]. The results shown in Appendix B prove that the EVOAGENT is even better than these agent generation frameworks. 2. When using weaker LLMs (e.g., LLama2-13B-Chat), SPP usually produces poor performances, consistent with the findings in [42]. This suggests the limited effectiveness of SPP in smaller and less capable models. However, EVOAGENT can provide consistent improvements among each LLM, proving its strong generalization by using diverse generated agents. In addition, Figure 2 shows that Self-Refine (SR) and SPP degrade performance compared to CoT prompting in MMMU task. However, EVOAGENT can generate multiple domain-specific agents and thus improve multi-modal models in addressing scientific questions across various difficulty levels. 4.2 Interactive Scientific Solving Simulation Benchmark Compared with traditional NLP or multi-modal tasks, autonomous agents usually need to perform problem-solving abilities akin to humans in interactive and open-world environments. Currently, we choose ScienceWorld [47], a complex interactive environment requiring skills in long-term memory, sub-task decomposition, and scientific and commonsense knowledge. Here, we evaluate 30 scientific tasks in ScienceWorld to demonstrate the capability of EVOAGENT in solving tasks in more challenging open-world environments. 4The detailed model parameters and versions and full prompts for these methods can be found in Appendix A. 6 &R7656332XUV$FFXUDF\  $OO&R7656332XUV(DV\&R7656332XUV0HGLXP&R7656332XUV+DUG*379*HPLQL3UR Baseline and Evaluation Metrics Follow- ing [48], we require LLMs to perform an action at each step by using in-context learn- ing 5. For evaluation, each task in Science- World includes some sub-tasks, and we re- port the results by calculating the completed sub-tasks for the whole task. Result & Analysis For EVOAGENT, we adopt the agent framework with original set- tings in [48] as the initial agent. Since each step in ScienceWorld requires using EA, we set the population size N as 1 and the itera- tions T as 1 for efficiency, denoted as EVOA- GENT(1,1). Results in Table 2 show that: Table 2: Average Scores of different methods on Sci- enceWorld. We also report performance on three difficult-level groups based on the average length of the oracle agent’s trajectories [48]. Model Overall Long Medium Short GPT-3.5 w/ EVOAGENT(1,1) GPT-4 w/ EVOAGENT(1,1) 17.12 19.02 6.28 7.25 27.97 10.58 30.42 11.38 19.91 27.90 18.87 33.26 36.00 42.41 36.17 48.67 1. EVOAGENT can also extend interactive agents to multi-agent systems in solving complete scientific tasks in dynamic, open-world environments and consistently improve the performance of LLMs. 2. Our method exhibits the most substantial improvement in short-trajectory tasks, with less sig- nificant gains in medium and long-trajectory tasks. 
We argue that the capability of multi-agent systems will also be affected by a longer context. We also expect to investigate the effect of long context on multi-agent systems in the future. Generally, these results also demonstrate the generalization of EVOAGENT, which can also be used for solving interactive tasks in an open-world environment. 4.3 Real-World Scenarios Benchmark Moreover, in addition to performing actions in interactive environments, planning in complex and realistic environments is also a crucial skill for building autonomous agents. To validate this point, we also select TravelPlanner [49], a benchmark designed to evaluate language agents in real-world complex planning with multiple constraints. Baseline and Evaluation Metrics Following [49], we select Mistral-7B [50], GPT-3.5, Gemini- Pro [2] and GPT-4 as our backbone models. We compare EVOAGENT with 0-shot learning (Direct), CoT prompting, SPP, and Self-Refine within each backbone model. Furthermore, we also attempt the ReAcT method [51] for GPT-3.5, which introduces a virtual ‘think’ action to generate sub- tasks during the action planning process. For evaluation, we adhere to the original metrics from TravelPlanner, reporting the delivery rate, commonsense constraint pass rate, hard constraint pass rate, and final pass rate for all methods 6. Result & Analysis For EVOAGENT, we adopt the original settings in TravelPlanner as the initial agent. Results in Table 3 show that: 1. EVOAGENT can generate specialized agents, such as those focused on culinary experiences, transportation, and attractions. Therefore, the generated travel plans are more aligned with user preferences (hard constraints) and commonsense rules; 2. Although existing paradigms (e.g., CoT, ReAct, Self-Refine, SPP) have demonstrated decent re- sults in some conventional NLP tasks, they still lack capability in handling complex planning tasks within TravelPlanner. These results also demonstrate that only using human-design prompting strategies is insufficient to handle complex planning tasks. 3. By using EVOAGENT to automatically generate multiple agents and forming a multi-agent collaboration paradigm, we can develop higher-quality plans that better meet user preferences. That also indicates the significance of multi-agent systems for complex planning tasks. 5The introduction of the settings of LLMs are shown in Appendix C. 6Detailed introduction of experiment settings is provided in Appendix D. 7 Table 3: Main results of different LLMs and planning strategies on the TravelPlanner validation set. EVOAGENT(N,T ) indicates that the population size per iteration is N and the number of iterations is T. The best results are bolded, and the second best ones are underlined. 
Model Method Delivery Commonsense Hard Constraint Final Rate Micro Macro Micro Macro Mistral-7B GPT-3.5 Gemini-Pro GPT-4 Direct CoT SPP Self-Refine3 EVOAGENT(1,3) Direct CoT ReAct SPP Self-Refine3 EVOAGENT(1,3) EVOAGENT(1,5) Direct CoT SPP Self-Refine3 EVOAGENT(1,3) EVOAGENT(1,5) Direct CoT SPP Self-Refine3 EVOAGENT(1,3) 100.0 100.0 100.0 100.0 100.0 100.0 100.0 82.2 99.4 100.0 100.0 100.0 90.0 90.0 100.0 95.6 100.0 100.0 100.0 100.0 96.7 98.9 100.0 64.7 60.5 55.1 58.3 60.1 57.3 61.0 42.3 54.6 56.0 64.2 61.0 61.7 61.4 67.6 65.8 73.5 74.0 79.4 76.7 70.6 75.3 81.5 2.2 1.1 0.0 0.0 2.2 3.9 2.8 0.6 1.7 1.7 7.8 5.0 7.8 7.2 7.8 6.1 12.8 8.9 15.8 11.7 5.6 7.2 21.1 3.1 1.0 0.7 0.7 4.5 11.0 10.0 11.9 3.8 3.1 11.0 12.6 16.4 10.0 10.2 15.0 16.9 21.2 27.5 22.4 11.4 12.4 31.4 0.0 0.0 0.6 0.0 0.6 3.3 3.3 4.6 1.1 1.1 4.4 5.0 7.8 6.1 3.9 4.4 7.2 11.7 16.1 12.8 7.8 7.2 18.9 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.1 0.5 0.6 1.7 1.1 0.6 1.7 2.2 2.2 2.2 0.6 1.1 7.2 4.4 Ablation Studies To better understand the value of EVOAGENT, we conduct detailed analyses on TravelPlanner, focusing on the impact of population size and the effectiveness of the quality-check module in the selection stage. Experiment Settings We evaluate the performance of different LLMs at varying population sizes N with fixed iteration number 3, denoted as EVOA- GENT(N,3), both with and without the quality-check module (QC). We employ an LLM that shares the same backbone as the initial agent for updates. To select results from candidates for this LLM to update, we adopt three different selection strategies: 1) Ran- dom: one result is selected randomly from the pool of candidates; 2) PK: we ask an agent with the same backbone as the initial agent to identify the optimal results from the pool of candidates; 3) All-in: Rather than selecting a single result, we update using all candidates. Moreover, we also attempt Suggest3, Overgen3 and PromptRefine3 as variants to prove the effectiveness of our method. For Suggest3, instead of generating new results, we ask new generated agents to only give suggestions for initial agents to revise their results. For Overgen3, we first ask initial agents to generate 3 different results at one time, and then these agents 8 Table 4: Average commonsense constraint pass rate (Com.) and hard constraint pass rate (Hard) of ablated variants on TravelPlanner. Method w/o QC w/ QC Com. Hard Com. Hard Direct Suggest3 Overgen3 PromptRefine3 - - - - - - - - 59.5 61.7 61.4 63.0 Different Population Size EVOAGENT(1,3) EVOAGENT(2,3) EVOAGENT(3,3) 68.9 62.8 62.7 14.0 12.7 13.7 68.9 67.0 66.8 Different Selection Stategies Random PK All-in 62.9 63.5 61.9 12.7 13.6 13.2 67.1 66.4 67.1 13.7 8.4 10.7 13.8 14.0 15.2 15.8 15.0 14.5 17.0 Figure 3: The adaption of EVOAGENT on MetaGPT framework. With the EA, we can extend the original role in the debate scenario to different expert agents to enrich the opinions. can output the final results based on these multiple candidates. For PromptRefine3, instead of generating agents, we ask the initial agent to refine its prompts three times to better answer the query. 7 Result & Analysis To obtain stable findings, we first obtain results from GPT-3.5 and Gemini-Pro across different population sizes and selection strategies. We then average their results over various metrics to clearly compare the strengths and weaknesses of these variants. 
The results are shown in Table 4.8 We find that EVOAGENT significantly outperforms the Overgen, demonstrating the effectiveness of generating specialized agents to assist with complex planning. Although obtaining suggestions from new generated agents can improve the performance on com- monsense constraints, these methods greatly harm the agents to meet the user preference. Modifying the prompt can improve the performance of agents, yet it remains less effective than EVOAGENT. When the population size exceeds one, agents may generate similar agents. Thus, lacking a quality- check module leads to reduced travel plan quality. Furthermore, when population size increases, the model aligns travel plans more closely with user preferences but diminishing adherence to commonsense rules, consistent with the findings in Table 3. Remarkably, the PK strategy initially yields superior results without the quality-check module, but this trend reverses once quality checks are implemented. We speculate that, without the quality-check module, PK partially fulfills this role, aiding in selecting better candidates. However, with the quality-check module, PK introduces bias by favoring specific fields of expertise while neglecting others, resulting in a less effective than random strategy. Meanwhile, the All-in strategy performs optimally when a quality-check module is included. Future research can leverage long-context LLMs to expand more agents with EVOAGENT to better solve complex real-world tasks. 4.5 EVOAGENT Application Previous experiments have demonstrated that our method can automatically extend existing agent frameworks to multi-agent systems, thus greatly improving LLM-based agents in various scenarios. We also attempt to extend our work to real-world multi-agent applications (e.g., MetaGPT [10], Camel [13] and AutoGen [12]), to verify it can scale up the number of agents in building multi-agent scenarios, just as shown in Figure 3. Here, we choose the debate scenario used in MetaGPT, which includes two debaters with different opinions, leading to dull and repetitive content generation. Here, instead of manually assigning new roles, we applied EVOAGENT to extend each debate team to more agents with diverse settings, increasing the variety of opinions and the quality of the debate 9. 7The full prompts of different ablation settings are shown in Appendix A.1. 8The complete results with further analysis are shown in Appendix E 9The details of MetaGPT, and the adaption of EVOAGENT on Camel and AutoGen are shown in Appendix G. 9 EvoAgentInvesting in clean energy not only addresses the climate crisis but also creates jobs and strengthens our economy....transitioning to renewable energy can create millions of good-paying, union jobs without significant unemployment or economic fallout...Labor Economist AgentPresident Opinion: Support...Thrusting forward with renewable energy strengthens our international ties and propels economies reliant on fossil fuel exports towards clean energy transitions...Geopolitical Analyst Agent...Every moment we delay increases the severity of climate-related illnesses, straining our health infrastructure and costing us $820 billion annually...Public Health AgentMetaGPT FrameworkTopic: The U.S. should commit more in climate change fightingEvoAgentEnergy Sector Analyst AgentThe promises of ample job creation overlook the reality that many displaced workers from conventional sectors may struggle to find roles in the nascent green economy. 
An abrupt transition to renewable energy could cause economic tremors and job losses.Risk Management AgentThis isn‘t about alarmism or denial, it’s about carefully leading our nation towards a sustainable, prosperous future. An abrupt shift spells risk!Transition Strategist AgentThe real crisis is the economic disaster under His policies. He talks about investments, but it's your tax dollars he's spending. President Opinion: Oppose 5 Conclusion In this paper, we propose EVOAGENT, an automatic multi-agent generation system by leveraging evolutionary algorithms. Different from previous methods, EVOAGENT is suitable to any existing agent framework and extends it to multi-agent systems with diverse and effective agents by using a series of evolutionary operations, including mutation, crossover, and selection. Experiments on multiple tasks show that EVOAGENT can significantly improve the capabilities of LLM-based agents in solving complex tasks. References [1] OpenAI. GPT-4 technical report. CoRR, abs/2303.08774, 2023. [2] Gemini Team. Gemini: A family of highly capable multimodal models, 2023. [3] Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiao- qing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, and Thomas Scialom. Llama 2: Open foundation and fine-tuned chat models. CoRR, abs/2307.09288, 2023. [4] Anthropic. The claude 3 model family: Opus, sonnet, haiku. 2024. [5] Significant Gravitas. Auto-gpt: An autonomous gpt-4 experiment. https://github.com/ Significant-Gravitas/Auto-GPT, 2023. [6] Yongliang Shen, Kaitao Song, Xu Tan, Dongsheng Li, Weiming Lu, and Yueting Zhuang. Hug- ginggpt: Solving AI tasks with chatgpt and its friends in huggingface. CoRR, abs/2303.17580, 2023. [7] Yohei Nakajima. Babyagi. https://github.com/yoheinakajima/babyagi, 2023. [8] Timo Schick, Jane Dwivedi-Yu, Roberto Dessì, Roberta Raileanu, Maria Lomeli, Luke Zettle- moyer, Nicola Cancedda, and Thomas Scialom. Toolformer: Language models can teach themselves to use tools. CoRR, abs/2302.04761, 2023. [9] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed H. Chi, Quoc V. Le, and Denny Zhou. Chain-of-thought prompting elicits reasoning in large language models. In NeurIPS, 2022. [10] Sirui Hong, Mingchen Zhuge, Jonathan Chen, Xiawu Zheng, Yuheng Cheng, Jinlin Wang, Ceyao Zhang, Zili Wang, Steven Ka Shing Yau, Zijuan Lin, Liyang Zhou, Chenyu Ran, Lingfeng Xiao, Chenglin Wu, and Jürgen Schmidhuber. MetaGPT: Meta programming for a multi-agent collaborative framework. In The Twelfth International Conference on Learning Representations, 2024. [11] Joon Sung Park, Joseph C. 
O’Brien, Carrie Jun Cai, Meredith Ringel Morris, Percy Liang, and Michael S. Bernstein. Generative agents: Interactive simulacra of human behavior. In Proceedings of the 36th Annual ACM Symposium on User Interface Software and Technology, UIST 2023, San Francisco, CA, USA, 29 October 2023- 1 November 2023, pages 2:1–2:22. ACM, 2023. 10 [12] Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Shaokun Zhang, Erkang Zhu, Beibin Li, Li Jiang, Xiaoyun Zhang, and Chi Wang. Autogen: Enabling next-gen llm applications via multi-agent conversation framework. arXiv preprint arXiv:2308.08155, 2023. [13] Guohao Li, Hasan Abed Al Kader Hammoud, Hani Itani, Dmitrii Khizbullin, and Bernard Ghanem. CAMEL: Communicative agents for ”mind” exploration of large language model society. In Thirty-seventh Conference on Neural Information Processing Systems, 2023. [14] Thomas Bäck and Hans-Paul Schwefel. An overview of evolutionary algorithms for parameter optimization. Evol. Comput., 1(1):1–23, 1993. [15] Xintao Wang, Yunze Xiao, Jen tse Huang, Siyu Yuan, Rui Xu, Haoran Guo, Quan Tu, Yaying Fei, Ziang Leng, Wei Wang, Jiangjie Chen, Cheng Li, and Yanghua Xiao. Incharacter: Evaluating personality fidelity in role-playing agents through psychological interviews, 2024. [16] Jiangjie Chen, Xintao Wang, Rui Xu, Siyu Yuan, Yikai Zhang, Wei Shi, Jian Xie, Shuang Li, Ruihan Yang, Tinghui Zhu, et al. From persona to personalization: A survey on role-playing language agents. arXiv preprint arXiv:2404.18231, 2024. [17] Jiangjie Chen, Siyu Yuan, Rong Ye, Bodhisattwa Prasad Majumder, and Kyle Richardson. Put your money where your mouth is: Evaluating strategic planning and execution of llm agents in an auction arena. arXiv preprint arXiv:2310.05746, 2023. [18] Yikai Zhang, Siyu Yuan, Caiyu Hu, Kyle Richardson, Yanghua Xiao, and Jiangjie Chen. Timearena: Shaping efficient multitasking language agents in a time-aware simulation. arXiv preprint arXiv:2402.05733, 2024. [19] Siyu Yuan, Jiangjie Chen, Ziquan Fu, Xuyang Ge, Soham Shah, Charles Robert Jankowski, Yanghua Xiao, and Deqing Yang. Distilling script knowledge from large language models for constrained language planning. arXiv preprint arXiv:2305.05252, 2023. [20] Yongliang Shen, Kaitao Song, Xu Tan, Wenqi Zhang, Kan Ren, Siyu Yuan, Weiming Lu, Dongsheng Li, and Yueting Zhuang. Taskbench: Benchmarking large language models for task automation. arXiv preprint arXiv:2311.18760, 2023. [21] Siyu Yuan, Kaitao Song, Jiangjie Chen, Xu Tan, Yongliang Shen, Ren Kan, Dongsheng Li, and Deqing Yang. Easytool: Enhancing llm-based agents with concise tool instruction. arXiv preprint arXiv:2401.06201, 2024. [22] Jason Weston, Sumit Chopra, and Antoine Bordes. Memory networks. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings, 2015. [23] Noah Shinn, Beck Labash, and Ashwin Gopinath. Reflexion: an autonomous agent with dynamic memory and self-reflection. arXiv preprint arXiv:2303.11366, 2023. [24] Yuan Li, Yixuan Zhang, and Lichao Sun. Metaagents: Simulating interactions of human behaviors for llm-based task-oriented coordination via collaborative generative agents. arXiv preprint arXiv:2310.06500, 2023. [25] Weize Chen, Yusheng Su, Jingwei Zuo, Cheng Yang, Chenfei Yuan, Chi-Min Chan, Heyang Yu, Yaxi Lu, Yi-Hsin Hung, Chen Qian, Yujia Qin, Xin Cong, Ruobing Xie, Zhiyuan Liu, Maosong Sun, and Jie Zhou. Agentverse: Facilitating multi-agent collaboration and exploring emergent behaviors. 
In The Twelfth International Conference on Learning Representations, 2024. [26] Chen Qian, Xin Cong, Cheng Yang, Weize Chen, Yusheng Su, Juyuan Xu, Zhiyuan Liu, arXiv preprint and Maosong Sun. Communicative agents for software development. arXiv:2307.07924, 2023. [27] Zhitao He, Pengfei Cao, Yubo Chen, Kang Liu, Ruopeng Li, Mengshu Sun, and Jun Zhao. LEGO: A multi-agent collaborative framework with role-playing and iterative feedback for causality explanation generation. In Houda Bouamor, Juan Pino, and Kalika Bali, editors, Findings of the Association for Computational Linguistics: EMNLP 2023, pages 9142–9163, Singapore, December 2023. Association for Computational Linguistics. 11 [28] Benfeng Xu, An Yang, Junyang Lin, Quan Wang, Chang Zhou, Yongdong Zhang, and Zhendong Mao. Expertprompting: Instructing large language models to be distinguished experts. arXiv preprint arXiv:2305.14688, 2023. [29] Ameet Deshpande, Vishvak Murahari, Tanmay Rajpurohit, Ashwin Kalyan, and Karthik Narasimhan. Toxicity in chatgpt: Analyzing persona-assigned language models. In Houda Bouamor, Juan Pino, and Kalika Bali, editors, Findings of the Association for Computational Linguistics: EMNLP 2023, pages 1236–1270, Singapore, December 2023. Association for Computational Linguistics. [30] Joon Sung Park, Joseph O’Brien, Carrie Jun Cai, Meredith Ringel Morris, Percy Liang, and Michael S. Bernstein. Generative agents: Interactive simulacra of human behavior. In Proceed- ings of the 36th Annual ACM Symposium on User Interface Software and Technology, UIST ’23, New York, NY, USA, 2023. Association for Computing Machinery. [31] Guangyao Chen, Siwei Dong, Yu Shu, Ge Zhang, Jaward Sesay, Börje F Karlsson, Jie Fu, and Yemin Shi. Autoagents: A framework for automatic agent generation. arXiv preprint arXiv:2309.17288, 2023. [32] Thomas Bartz-Beielstein, Jürgen Branke, Jörn Mehnen, and Olaf Mersmann. Evolutionary algorithms. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 4(3):178– 195, 2014. [33] Agoston E Eiben, James E Smith, AE Eiben, and JE Smith. What is an evolutionary algorithm? Introduction to evolutionary computing, pages 25–48, 2015. [34] Jeffrey R Sampson. Adaptation in natural and artificial systems (john h. holland), 1976. [35] John H Holland. Adaptation in natural and artificial systems: an introductory analysis with applications to biology, control, and artificial intelligence. MIT press, 1992. [36] Melanie Mitchell. An introduction to genetic algorithms. MIT press, 1998. [37] Lothar M Schmitt. Theory of genetic algorithms. Theoretical Computer Science, 259(1-2):1–61, 2001. [38] Seyedali Mirjalili, Jin Song Dong, Ali Safa Sadiq, and Hossam Faris. Genetic algorithm: Theory, literature review, and application in image reconstruction. Nature-inspired optimizers: Theories, literature reviews and applications, pages 69–85, 2020. [39] Qingyan Guo, Rui Wang, Junliang Guo, Bei Li, Kaitao Song, Xu Tan, Guoqing Liu, Jiang Bian, and Yujiu Yang. Connecting large language models with evolutionary algorithms yields powerful prompt optimizers. CoRR, abs/2309.08532, 2023. [40] Angelica Chen, David Dohan, and David R. So. Evoprompting: Language models for code-level neural architecture search. In Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023, 2023. 
[41] Aman Madaan, Niket Tandon, Prakhar Gupta, Skyler Hallinan, Luyu Gao, Sarah Wiegreffe, Uri Alon, Nouha Dziri, Shrimai Prabhumoye, Yiming Yang, Shashank Gupta, Bodhisattwa Prasad Majumder, Katherine Hermann, Sean Welleck, Amir Yazdanbakhsh, and Peter Clark. Self-refine: Iterative refinement with self-feedback. In Thirty-seventh Conference on Neural Information Processing Systems, 2023. [42] Zhenhailong Wang, Shaoguang Mao, Wenshan Wu, Tao Ge, Furu Wei, and Heng Ji. Unleashing cognitive synergy in large language models: A task-solving agent through multi-persona self-collaboration. arXiv preprint arXiv:2307.05300, 2023. [43] Xiang Yue, Yuansheng Ni, Kai Zhang, Tianyu Zheng, Ruoqi Liu, Ge Zhang, Samuel Stevens, Dongfu Jiang, Weiming Ren, Yuxuan Sun, et al. Mmmu: A massive multi-discipline multimodal understanding and reasoning benchmark for expert agi. arXiv preprint arXiv:2311.16502, 2023. 12 [44] OpenAI. Chatgpt, 2022. [45] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, brian ichter, Fei Xia, Ed H. Chi, Quoc V Le, and Denny Zhou. Chain of thought prompting elicits reasoning in large language models. In Alice H. Oh, Alekh Agarwal, Danielle Belgrave, and Kyunghyun Cho, editors, Advances in Neural Information Processing Systems, 2022. [46] Zhengyuan Yang, Linjie Li, Kevin Lin, Jianfeng Wang, Chung-Ching Lin, Zicheng Liu, and Lijuan Wang. The dawn of lmms: Preliminary explorations with gpt-4v (ision). arXiv preprint arXiv:2309.17421, 9(1):1, 2023. [47] Ruoyao Wang, Peter Jansen, Marc-Alexandre Côté, and Prithviraj Ammanabrolu. Science- World: Is your agent smarter than a 5th grader? In Yoav Goldberg, Zornitsa Kozareva, and Yue Zhang, editors, Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 11279–11298, Abu Dhabi, United Arab Emirates, December 2022. Association for Computational Linguistics. [48] Bill Yuchen Lin, Yicheng Fu, Karina Yang, Faeze Brahman, Shiyu Huang, Chandra Bhagavatula, Prithviraj Ammanabrolu, Yejin Choi, and Xiang Ren. Swiftsage: A generative agent with fast and slow thinking for complex interactive tasks. In Thirty-seventh Conference on Neural Information Processing Systems, 2023. [49] Jian Xie, Kai Zhang, Jiangjie Chen, Tinghui Zhu, Renze Lou, Yuandong Tian, Yanghua Xiao, and Yu Su. Travelplanner: A benchmark for real-world planning with language agents. arXiv preprint arXiv:2402.01622, 2024. [50] Albert Q. Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, Lélio Renard Lavaud, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, and William El Sayed. Mistral 7b, 2023. [51] Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik R Narasimhan, and Yuan Cao. React: Synergizing reasoning and acting in language models. In The Eleventh International Conference on Learning Representations, 2023. A Experiment Settings A.1 Prompt for Baselines and EVOAGENT Listing 1 and 2 shows the full prompt for 0-shot learning (Direct), Chain-of-thought (CoT) prompt- ing [45] and Self-Refine [41] and Solo Performance Prompting, i.e., SPP [42]. Listing 3 and 4 show the prompt of EVOAGENT and different ablation settings. A.2 Model Selection For OpenAI models, we use gpt-35-turbo and gpt-4-32k with the version of 2024-02-15-preview in Azure.10 For Gemini-pro, we use Google Gemini-Pro APIs to ob- tain results. We set the temperature to 0 for all models. B EVOAGENT v.s. 
B EVOAGENT vs. Human-Designed Agent Frameworks

AgentVerse [25] and AutoAgents [31] are frameworks designed to automatically generate an unlimited number of agents for collaborative tasks. Despite this automation, they still rely on human-designed interventions. AutoAgents requires agent settings to satisfy a “Planner - Agent Observer - Plan Observer” framework, while AgentVerse formulates a pipeline of “Expert Recruitment - Collaborative Decision Making - Action Execution - Evaluation” to build agents. We argue that these human-designed architectures limit their scalability and functionality.

In the original papers, AgentVerse [25] and AutoAgents [31] also conduct experiments on the Logic Grid Puzzle task and the Trivia Creative Writing task, respectively. We follow their experimental settings and compare them with our method. As demonstrated in Table 5, EVOAGENT outperforms both AgentVerse and AutoAgents, highlighting the effectiveness and generality of EVOAGENT.

Table 5: Comparison of EVOAGENT with human-designed agent frameworks on the Logic Grid Puzzle and Trivia Creative Writing tasks.

Framework  | Logic | Writing
AgentVerse | 77.00 | -
AutoAgents | -     | 66.50
EVOAGENT   | 84.40 | 82.00

C Experimental Details of ScienceWorld

Following [48], we adopt the REACT [51] method for each LLM, which introduces a virtual ‘think’ action. This action allows LLMs to generate subgoals during the action planning process.

D Evaluation Details of TravelPlanner

Grounded in travel planning, a real-world use case that inherently involves various constraints such as user preferences and commonsense rules, TravelPlanner evaluates whether agents can formulate flexible travel plans using gathered information to meet these constraints. We test EVOAGENT and all baselines on the TravelPlanner validation set, which consists of 180 user queries with the collected information. To evaluate the travel plans generated by agents, TravelPlanner adopts the following evaluation metrics:

• Delivery Rate: Assesses if agents can complete a plan within a limited number of steps (30 in our experimental setting). Failures are due to dead loops, numerous failed attempts, or exceeding the step limit.
• Commonsense Constraint Pass Rate: Evaluates if an agent can incorporate commonsense into its plan.
• Hard Constraint Pass Rate: Measures if a plan meets all explicit hard constraints in the query, testing the agent’s ability to adapt to diverse user preferences.
• Final Pass Rate: Indicates the proportion of viable plans that meet all criteria, reflecting the agent’s proficiency in creating practical plans.

Furthermore, TravelPlanner uses micro and macro strategies to assess the Commonsense and Hard Constraint Pass Rates; a small worked sketch follows this paragraph. The micro strategy calculates the ratio of met constraints to the total. The macro strategy measures the proportion of plans that meet all commonsense or hard constraints. Together, these strategies assess an agent’s ability to satisfy individual constraints and all constraints comprehensively.
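To make the two aggregation strategies concrete, the sketch below computes micro and macro pass rates from per-plan constraint results; the data layout (a list of boolean lists) is an assumption for illustration, not TravelPlanner's actual evaluation code.

# Sketch (assumed data layout, not TravelPlanner's evaluator): each plan is a
# list of booleans, one per constraint, True if that constraint is satisfied.
def micro_pass_rate(plans: list[list[bool]]) -> float:
    """Ratio of satisfied constraints to all constraints, pooled over plans."""
    passed = sum(sum(p) for p in plans)
    total = sum(len(p) for p in plans)
    return passed / total

def macro_pass_rate(plans: list[list[bool]]) -> float:
    """Proportion of plans that satisfy every one of their constraints."""
    return sum(all(p) for p in plans) / len(plans)

# Example: 2 plans with 3 constraints each.
plans = [[True, True, False], [True, True, True]]
print(micro_pass_rate(plans))  # 5/6 ~ 0.833
print(macro_pass_rate(plans))  # 1/2 = 0.5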
E More Analysis of Ablation Studies

The complete results of the ablation studies on TravelPlanner are shown in Table 6. These results indicate that the absence of the quality-check module significantly lowers the delivery pass rate when the All-in strategy is applied. To explore the reasons, we revisited the results and discovered that unsuitable agents sometimes create overly lengthy travel plans that fail to meet the criteria. For example, the model might erroneously assign a nutritionist to devise travel plans, resulting in excessively detailed meal arrangements and nutritional breakdowns. The input length then surpasses the context window of the LLMs, preventing the final result generation.

Table 6: Comparison of different population selection strategies for LLMs on TravelPlanner. The best results are bolded, and the second best ones are underlined.

Model      | Strategy | Method        | w/o Quality Check       | w/ Quality Check
           |          |               | Delivery | Com. | Hard  | Delivery | Com. | Hard
GPT-3.5    | -        | Direct        | -        | -    | -     | 100.0    | 57.3 | 11.0
GPT-3.5    | -        | Suggest3      | -        | -    | -     | 100.0    | 57.5 | 5.7
GPT-3.5    | -        | Overgen3      | -        | -    | -     | 98.3     | 56.3 | 9.0
GPT-3.5    | -        | PromptRefine3 | -        | -    | -     | 100.0    | 61.2 | 11.0
GPT-3.5    | -        | EVOAGENT(1,3) | 100.0    | 64.2 | 11.0  | 100.0    | 64.2 | 11.0
GPT-3.5    | Random   | EVOAGENT(2,3) | 100.0    | 59.4 | 10.2  | 100.0    | 65.4 | 13.8
GPT-3.5    | Random   | EVOAGENT(3,3) | 98.9     | 59.2 | 11.4  | 100.0    | 65.8 | 14.0
GPT-3.5    | PK       | EVOAGENT(2,3) | 99.4     | 59.4 | 7.1   | 100.0    | 66.0 | 11.7
GPT-3.5    | PK       | EVOAGENT(3,3) | 98.9     | 58.5 | 11.2  | 100.0    | 61.3 | 12.4
GPT-3.5    | All-in   | EVOAGENT(2,3) | 97.2     | 59.4 | 10.0  | 100.0    | 64.2 | 15.5
GPT-3.5    | All-in   | EVOAGENT(3,3) | 93.3     | 56.0 | 8.3   | 100.0    | 65.2 | 12.6
Gemini-Pro | -        | Direct        | -        | -    | -     | 90.0     | 61.7 | 16.4
Gemini-Pro | -        | Suggest3      | -        | -    | -     | 100.0    | 65.8 | 11.0
Gemini-Pro | -        | Overgen3      | -        | -    | -     | 100.0    | 66.5 | 12.4
Gemini-Pro | -        | PromptRefine3 | -        | -    | -     | 96.7     | 64.9 | 16.7
Gemini-Pro | -        | EVOAGENT(1,3) | 100.0    | 73.5 | 16.9  | 100.0    | 73.5 | 16.9
Gemini-Pro | Random   | EVOAGENT(2,3) | 96.7     | 65.9 | 13.1  | 99.4     | 67.3 | 14.0
Gemini-Pro | Random   | EVOAGENT(3,3) | 97.2     | 67.0 | 16.0  | 100.0    | 70.0 | 18.1
Gemini-Pro | PK       | EVOAGENT(2,3) | 97.2     | 67.4 | 19.0  | 99.4     | 69.8 | 17.1
Gemini-Pro | PK       | EVOAGENT(3,3) | 97.2     | 68.5 | 17.1  | 99.4     | 68.4 | 16.7
Gemini-Pro | All-in   | EVOAGENT(2,3) | 95.0     | 65.1 | 16.7  | 99.4     | 69.0 | 19.0
Gemini-Pro | All-in   | EVOAGENT(3,3) | 95.0     | 66.9 | 17.9  | 100.0    | 70.1 | 20.7

Moreover, we also conduct experiments on the Trivia Creative Writing task to investigate the impact of the number of iterations on model performance in traditional NLP tasks. As shown in Figure 4, model performance improves with increasing iterations. However, the improvement plateaus when the iteration count exceeds three. We suggest that traditional NLP tasks are relatively simple, and beyond a certain iteration number, even with a quality-check module in place, the generated agents tend to be similar and thus converge.

Figure 4: The performance of GPT-3.5 with EVOAGENT under different iterations on the Trivia Creative Writing task (x-axis: iteration; y-axis: answer ratio).

F Examples of EVOAGENT

F.1 EVOAGENT Examples of NLP reasoning and knowledge tasks

Listings 5, 6 and 7 present some multi-agent generation examples produced by GPT-4-based EVOAGENT in Logic Grid Puzzle, Trivia Creative Writing and Codenames Collaborative for a better understanding.

F.2 EVOAGENT Examples of MMMU

Listing 8 presents some multi-agent generation examples produced by GPT-4-based EVOAGENT on the MMMU dataset for a better understanding.

F.3 EVOAGENT Examples of ScienceWorld

Listing 9 presents some multi-agent generation examples produced by GPT-4-based EVOAGENT in ScienceWorld for a better understanding.

F.4 EVOAGENT Examples of TravelPlanner

Listing 10 presents some multi-agent generation examples produced by GPT-4-based EVOAGENT in TravelPlanner for a better understanding.

Figure 5: The adaptation of EVOAGENT to the Camel and AutoGen frameworks.

G Examples of EVOAGENT’s Adaptation to Multi-agent Collaboration Frameworks

G.1 EVOAGENT for MetaGPT

MetaGPT [10] is a meta-programming framework that enhances LLM-based multi-agent collaborations by integrating efficient human workflows. It employs an assembly-line approach to assign diverse roles to agents, effectively simplifying complex tasks into manageable subtasks that multiple agents can execute collaboratively.
As shown in Figure 3, instead of manually creating roles, EVOAGENT can be used to automatically generate specialized agents for effective collaboration.

G.2 EVOAGENT for Camel

Camel [13] is recognized for its framework that supports communicative role-playing agents. Initially, humans establish this framework by conceptualizing an idea and designing specific roles, such as the AI assistant role and the AI user role. These roles are then assigned to the assistant and user agents, respectively, enabling them to fulfill the task. As illustrated in Figure 5, EVOAGENT can be utilized to automatically produce agents from AI assistants for interaction with AI users, bypassing the need for manual role design.

G.3 EVOAGENT for AutoGen

AutoGen [12] offers a framework that enables the creation of customizable and conversable agents by integrating various LLMs. Initially, humans configure the assistant agents along with a user proxy agent. Then, a group chat manager is responsible for selecting a speaker, gathering responses, and disseminating the message. As depicted in Figure 5, EVOAGENT facilitates the creation of multiple expert roles from a single assistant agent, thereby increasing the number of agents in group chats without the need for manual design.

Listing 1: Instruction templates for 0-shot learning (Direct), Chain-of-thought (CoT) prompting and the Self-Refine method

Direct Method:
{question}
Answer:

CoT Method:
{question}
You need to give reasons first and then give the answer.
Answer:

Self-Refine Method:
Step One: Feedback Generation:
You are a helpful assistant that provides feedback on {task}
{question}
This is the answer from a student: {answer}.
Please do not refine the answer but give some insightful suggestions for the student to help him better answer the question.
Suggestion:

Step Two: Result Refine:
{question}
This is your answer: {answer}
There is the suggestion from an assistant:
Suggestion: {feedback}
Now you can refine your answer with his suggestion to better answer the question. Keep in mind that his suggestion may not be correct, so critically decide whether to accept his response or stick with your original one. You need to give reasons first and then give the answer.
Revised Answer:
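As a complement to the templates in Listing 1, the sketch below shows how the two Self-Refine steps could be chained programmatically; the llm callable is a hypothetical stand-in for any of the models in A.2, and the abbreviated template strings paraphrase Listing 1.

# Sketch of the two-step Self-Refine loop from Listing 1 (not the authors'
# code). `llm` is a hypothetical callable: prompt string in, completion out.
FEEDBACK_TMPL = (
    "You are a helpful assistant that provides feedback on {task}\n{question}\n"
    "This is the answer from a student: {answer}.\n"
    "Please do not refine the answer but give some insightful suggestions "
    "for the student to help him better answer the question.\nSuggestion:"
)
REFINE_TMPL = (
    "{question}\nThis is your answer: {answer}\n"
    "There is the suggestion from an assistant:\nSuggestion: {feedback}\n"
    "Now you can refine your answer with his suggestion to better answer "
    "the question.\nRevised Answer:"
)

def self_refine(llm, task: str, question: str, n_rounds: int = 1) -> str:
    answer = llm(f"{question}\nAnswer:")  # initial (Direct) answer
    for _ in range(n_rounds):
        # Step one: generate feedback on the current answer.
        fb = llm(FEEDBACK_TMPL.format(task=task, question=question, answer=answer))
        # Step two: revise the answer using that feedback.
        answer = llm(REFINE_TMPL.format(question=question, answer=answer, feedback=fb))
    return answer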
Listing 2: Instruction templates for the SPP method

When faced with a task, begin by identifying the participants who will contribute to solving the task. Then, initiate a multi-round collaboration process until a final solution is reached. The participants will give critical comments and detailed suggestions whenever necessary.

Here are some examples:
---
Example Task 1: Use numbers and basic arithmetic operations (+ - * /) to obtain 24. You need to use all numbers, and each number can only be used once.
Input: 6 12 1 1

Participants: AI Assistant (you); Math Expert

Start collaboration!

Math Expert: Let’s analyze the task in detail. You need to make sure that you meet the requirement, that you need to use exactly the four numbers (6 12 1 1) to construct 24. To reach 24, you can think of the common divisors of 24 such as 4, 6, 8, 3 and try to construct these first. Also you need to think of potential additions that can reach 24, such as 12 + 12.
AI Assistant (you): Thanks for the hints! Here’s one initial solution: (12 / (1 + 1)) * 6 = 24
Math Expert: Let’s check the answer step by step. (1+1) = 2, (12 / 2) = 6, 6 * 6 = 36 which is not 24! The answer is not correct. Can you fix this by considering other combinations? Please do not make similar mistakes.
AI Assistant (you): Thanks for pointing out the mistake. Here is a revised solution considering 24 can also be reached by 3 * 8: (6 + 1 + 1) * (12 / 4) = 24.
Math Expert: Let’s first check if the calculation is correct. (6 + 1 + 1) = 8, 12 / 4 = 3, 8 * 3 = 24. The calculation is correct, but you used 6 1 1 12 4 which is not the same as the input 6 12 1 1. Can you avoid using a number that is not part of the input?
AI Assistant (you): You are right, here is a revised solution considering 24 can be reached by 12 + 12 and without using any additional numbers: 6 * (1 - 1) + 12 = 24.
Math Expert: Let’s check the answer again. 1 - 1 = 0, 6 * 0 = 0, 0 + 12 = 12. I believe you are very close, here is a hint: try to change the “1 - 1” to “1 + 1”.
AI Assistant (you): Sure, here is the corrected answer: 6 * (1+1) + 12 = 24
Math Expert: Let’s verify the solution. 1 + 1 = 2, 6 * 2 = 12, 12 + 12 = 24. You used 1 1 6 12 which is identical to the input 6 12 1 1. Everything looks good!

Finish collaboration!

Final answer: 6 * (1 + 1) + 12 = 24
---
{question}

Listing 3: Instruction templates for EVOAGENT

Crossover and Mutation:
{question}
This is your result: {answer}
Now, you can create and collaborate with multiple experts to improve your result. Therefore, please describe in as much detail as possible the different skills and focuses you need from multiple experts individually. We will provide each expert with the same information and query. However, please note that each profession has its own specialization, so you can assign each expert to just one sub-task to ensure a more refined response. We will relay their responses to you in turn, allowing you to reorganize them into a better answer. Please note that the description should be narrated in the second person, for example: You are a XXX.
These are the descriptions of the experts you have created before for this task: {description}
Therefore, please remember you should not repeatedly create the same experts as described above. Now, you can give the description for a new expert (Please note that only be one, do not give multiple at one time):

Quality Check:
{question}
We employ multiple experts to answer this query. The following is a second-person introduction to the experts we have hired: {description_ls}
Now, we will hire a new expert to help better respond to the user query. Here is a second-person description of the new expert: {description}
Please evaluate the new expert based on the following criteria to decide whether they should be retained or not:
1. The new expert is distinct and does not duplicate any previously hired experts.
2. Based on the new expert’s description, determine if they can effectively assist in answering users’ questions.
Give the reason first and then give the choice. If retaining, please reply with: Retain. If discarding, please reply with: Discard.

Result Update:
{question}
This is your result: {old_answer}
You invite an expert whose description is: {description}
This expert also gives his answer based on his own professional knowledge: {new_answer}.
Now you can refine your result with his answer to better answer the question.
Keep in mind that his answer may not be correct, so critically decide whether to accept his response or stick with your original one.
Revised Answer:

Listing 4: Instruction templates of different ablation settings in EVOAGENT

PK:
{question}
We invite {n} experts. They give the results based on their own professional knowledge. Here are second-person descriptions of these experts with their answers: {select}
Now you should help us select the best result which can meet the query. You need to give reasons first and then give the answer with the format: "Final Answer: Expert #XX"

All-in:
{question}
This is your answer: {old_answer}.
Furthermore, you also invite {n} experts. They also give answers based on their own professional knowledge. Here are second-person descriptions of these experts with their answers: {description_ls}
Now you can refine your answer with these answers to better meet the query.

Suggest:
{Expert_Agent_description}
{question}
This is the result from an AI assistant: {answer}.
Please do not refine the plan but give some insightful suggestions for the AI assistant to help it better meet the user’s query.
Suggestion:

OverGen:
{question}
Please generate three different results at one time for the user to choose from. The format can be:
Result #1:
Result #2:
Result #3:
Three Different Candidate Results:

PromptRefine:
{question}
This is the result from an AI assistant, whose description is "{original_description}": {answer}.
Please do not refine the result but refine the description of the AI assistant to help it better answer the user’s query. Please note that the description should be narrated in the second person, for example: You are a XXX.
Description:
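Putting Listing 3 together, the following sketch outlines one plausible EVOAGENT driver loop built from the crossover/mutation, quality-check, and result-update prompts; the llm callable is a hypothetical stand-in, the template constants abbreviate Listing 3, and EVOAGENT(i, n) in Table 6 appears to denote i iterations with n experts per iteration.

# Sketch of the evolutionary loop implied by Listing 3 (not the authors' code).
# `llm` is a hypothetical prompt -> completion callable; the templates below
# are abbreviated stand-ins for the full Listing 3 prompts.
CROSSOVER_TMPL = ("{question}\nThis is your result: {answer}\n"
                  "Experts created before: {description}\n"
                  "Now, give the description for one new expert:")
QUALITY_TMPL = ("{question}\nHired experts: {description_ls}\n"
                "New expert: {description}\nReply with: Retain or Discard.")
UPDATE_TMPL = ("{question}\nThis is your result: {old_answer}\n"
               "Expert: {description}\nExpert answer: {new_answer}\n"
               "Revised Answer:")

def evoagent(llm, question: str, n_iterations: int, n_experts: int) -> str:
    answer = llm(f"{question}\nAnswer:")  # initial answer from the parent agent
    experts: list[str] = []               # retained expert descriptions
    for _ in range(n_iterations):
        for _ in range(n_experts):
            # Crossover & mutation: propose a new, non-duplicate expert.
            desc = llm(CROSSOVER_TMPL.format(
                question=question, answer=answer, description="\n".join(experts)))
            # Quality check: keep the expert only if judged distinct and useful.
            verdict = llm(QUALITY_TMPL.format(
                question=question, description_ls="\n".join(experts), description=desc))
            if "Retain" not in verdict:
                continue
            experts.append(desc)
            expert_answer = llm(f"{desc}\n{question}\nAnswer:")
            # Result update: merge the expert's answer into the current one.
            answer = llm(UPDATE_TMPL.format(
                question=question, old_answer=answer,
                description=desc, new_answer=expert_answer))
    return answer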
Listing 5: An example of multi-agent generation from EVOAGENT for Logic Grid Puzzle

Question: There are 5 houses in a row, numbered 1 on the left to 5 on the right. There is one person living in each house. The people in these houses have different characteristics:
- Each person has a different kind of car: one owns an SUV, one owns a sedan, one drives a convertible, one drives a minivan, and one drives a truck
- Each person has a favorite color: one likes red, one likes yellow, one likes green, one likes white, and one likes blue
- Each person is wearing different shoes: one is wearing running shoes, one is wearing boots, one is wearing high heels, one is wearing sandals, and one is wearing loafers

Clue(s):
1. There is one house between where the person who is wearing loafers lives and where the person who drives a minivan lives.
2. The person who drives a truck does not live in the fourth house.
3. The person who drives a truck lives directly left of the person who is wearing sandals.
4. The person who likes red lives somewhere to the right of the person who likes yellow.
5. The person who is wearing loafers lives somewhere to the left of the person who drives a truck.
6. The person who likes blue lives in the fifth house.
7. There are two houses between where the person who owns a sedan lives and where the person who is wearing running shoes lives.
8. The person who is wearing boots lives somewhere to the left of the person who drives a truck.
9. The person who likes white lives in the fourth house.
10. There is one house between where the person who likes green lives and where the person who drives a truck lives.
11. The person who owns an SUV does not live in the first house.

What is the number of the house where the person who is wearing boots lives? choice: 2, choice: 5, choice: 3, choice: 1, choice: 4.

EvoAgent:
Expert Agent #1 Description: You are a logician, specializing in logical reasoning and problem-solving. Your task is to analyze the given clues and use deductive reasoning to determine the correct sequence of houses, their occupants, and their characteristics. You will need to consider each clue carefully, understanding how it relates to the others and how it narrows down the possibilities. Your focus should be on ensuring the logical consistency of the solution, making sure that all clues are satisfied without contradiction.

Reason: From clues 6 and 9, we know that the person who likes blue lives in the fifth house and the person who likes white lives in the fourth house. From clue 4, the person who likes red lives somewhere to the right of the person who likes yellow. Since the fourth and fifth houses are occupied by the people who like white and blue, the person who likes red must live in the third house and the person who likes yellow must live in the first or second house. From clue 10, there is one house between where the person who likes green lives and where the person who drives a truck lives. Since the third, fourth, and fifth houses are occupied by the people who like red, white, and blue, the person who likes green must live in the first or second house. Therefore, the person who likes yellow must live in the first house and the person who likes green must live in the second house. From clue 11, the person who owns an SUV does not live in the first house. So, the person who owns an SUV must live in the second, third, or fourth house. From clue 2, the person who drives a truck does not live in the fourth house. So, the person who drives a truck must live in the second or third house. From clue 3, the person who drives a truck lives directly left of the person who is wearing sandals. So, the person who drives a truck must live in the second house and the person who is wearing sandals must live in the third house. From clue 8, the person who is wearing boots lives somewhere to the left of the person who drives a truck. So, the person who is wearing boots must live in the first house.

Answer: choice: 1

Listing 6: An example of multi-agent generation from EVOAGENT for Trivia Creative Writing

Write a short and coherent story about Zootopia that incorporates the answers to the following 5 questions:
1. Following success at the 1979 election, whose party conference speech included the lines ‘you turn if you want to, the lady’s not for turning’?
2. Which snooker player was simply known as ‘The Grinder’?
3. Which former West Indian fast bowler was known as ‘Whispering Death’?
4. Which Eastenders character had a dog called Willy?
5. In which year did the first episode of Minder take place?

EvoAgent:
Expert Agent #1 Description: You are a sports historian with a deep understanding of snooker and cricket from the 1970s and 1980s. You are familiar with the careers and playing styles of Cliff Thorburn, known as ‘The Grinder’, and Michael Holding, known as ‘Whispering Death’. Your role in this task will be to provide context and insight into these sports figures and their significance in the story about Zootopia.

Expert Agent #2 Description: You are a political historian with a deep understanding of the political climate and key figures of the 1970s and 1980s, particularly in the United Kingdom. You have extensive knowledge about Margaret Thatcher, her policies, and her impact on society.
Your role in this task will be to provide context and insight into the significance of the reference to Thatcher’s famous speech in the story about Zootopia.

Story: In the vibrant city of Zootopia, the year was 1979, a time of significant change and excitement. The city had just elected its first female mayor, a lioness named Margaret Thatcher, known for her unwavering determination and strong leadership. Her victory speech at the party conference echoed the famous line from the human world’s Margaret Thatcher, “You turn if you want to, the lady’s not for turning.” This phrase resonated with the citizens of Zootopia, who admired her steadfastness and determination.
In the heart of the city, a meticulous badger known as ‘The Grinder’ ran a popular snooker club. His nickname was a tribute to the legendary human snooker player Cliff Thorburn, known for his relentless and meticulous style of play. The Grinder’s snooker club was a hub of social activity, where animals from all walks of life came to unwind and engage in friendly competition.
Meanwhile, a new recruit had joined the Zootopia Police Department, a cheetah named Michael Holding. Known as ‘Whispering Death’ during his cricketing days, a moniker borrowed from the former West Indian fast bowler, he was now a respected figure in the community, using his speed and stealth to uphold the law.
In another part of the city, a popular soap opera was being filmed. The show was called “Eastenders of Zootopia,” and one of the beloved characters was a rabbit named Ethel Skinner, who had a dog named Willy. This was a charming reference to the human Eastenders character of the same name, symbolizing the enduring friendships that were a cornerstone of Zootopian society.
The year 1979 was also significant as it marked the airing of the first episode of the Zootopian version of the human show “Minder”. The show was a reflection of the city’s vibrant and diverse culture, much like the original British series. It was a testament to the city’s ability to adapt and evolve, while still holding onto its rich history and traditions.
In conclusion, Zootopia was a melting pot of cultures and histories, much like the television shows and characters it referenced. These references served as a bridge between the past and the present, reminding the citizens of their roots while encouraging them to embrace the future.

Evaluation: correct_count=5/5

Listing 7: An example of multi-agent generation from EVOAGENT for Codenames Collaborative

Try to identify the 3 words best associated with the word "fruit" from the following word list: ["bowl", "einstein", "banana", "brain", "judge", "groom", "crusader", "stamp", "peach", "halloween", "troll", "slipper", "earthquake", "tip", "sun", "quarter", "pad", "nude", "helmet", "laundry", "wedding", "mirror", "sleep", "minotaur", "wish"].

EvoAgent:
Expert Agent #1 Description: You are a cultural anthropologist with a focus on symbolism and rituals. Your expertise in understanding how different objects, including fruits, are used and symbolized in various cultures will be beneficial in this task. You can provide insights into how certain words might be associated with "fruit" from a cultural or symbolic perspective. Your focus will be on the cultural and symbolic aspects of the task.

Reason:
1. ’Banana’: This is a type of fruit, rich in potassium and dietary fiber. It is commonly consumed raw and is used in a variety of dishes like desserts, smoothies, and baked goods.
2.
’Peach’: This is also a type of fruit, known for its sweet and juicy nature. Peaches are rich in vitamins A and C and are often eaten raw or used in dishes like pies, jams, and salads.
3. ’Bowl’: This word is associated with fruit in the context of a fruit bowl, which is a common way to store and display various types of fruit. A bowl of fruit can also be a healthy snack or dessert option.
Answer: Banana, Peach, Bowl

Expert Agent #2 Description: You are a nutritionist with a deep understanding of different types of food, including fruits. Your knowledge about the nutritional value, common uses, and cultural significance of various fruits will be helpful in this task. You can provide insights into how certain words might be associated with "fruit" from a dietary or nutritional perspective. Your focus will be on the nutritional and dietary aspects of the task.

Reason:
1. ’Banana’: This is a direct association as a banana is a type of fruit. In many cultures, bananas symbolize fertility and prosperity. In Hinduism, for example, bananas are considered a sacred fruit and are used in religious rituals.
2. ’Peach’: Similar to banana, a peach is also a type of fruit. In Chinese culture, peaches are symbols of immortality and unity. They are often used in traditional rituals and ceremonies.
3. ’Bowl’: A bowl is often used to hold fruits. In many cultures, a bowl of fruit is a symbol of abundance, fertility, and prosperity. It is often used in rituals and ceremonies as an offering or a centerpiece.
Answer: banana, peach, bowl

Final Answer: After considering the expert’s input, I find that their reasoning aligns with my initial thoughts. The words ’banana’ and ’peach’ are indeed types of fruit, and their nutritional value and common uses further strengthen their association with the word ’fruit’. The word ’bowl’ is also associated with fruit in the context of a fruit bowl, which is a common way to store and display various types of fruit. Therefore, I agree with the expert’s choices. Final Answer: banana, peach, bowl.

Listing 8: An example of multi-agent generation from EVOAGENT for the MMMU dataset

Question: <image 1> The elliptical orbit of a planet around the Sun is shown on the diagram. Which of the following statements is true?
A. the eccentricity of the orbit is less than zero
B. the eccentricity of the orbit is greater than 1
C. the sun might be at point C
D. the sun might be at point D
E. the sun might be at point B (Correct)

CoT:
Reason: In an elliptical orbit, the Sun is located at one of the two foci of the ellipse. The eccentricity of an orbit is a measure of how much the orbit deviates from a perfect circle, with 0 being a perfect circle and 1 being a parabola. Since the orbit shown is an ellipse, the eccentricity must be between 0 and 1. Points A and E are the farthest points from the center of the ellipse, so the Sun cannot be located at either of those points. Point B is at the center of the ellipse, so the Sun cannot be located there either. This leaves points C and D as possible locations for the Sun. However, since the Sun must be at one of the foci and point D is closer to the center of the ellipse than point C, it is more likely that the Sun is located at point C.
Answer: C. the sun might be at point C

EvoAgent:
Expert Agent #1 Description: You are a mathematician with a specialization in geometry, particularly in the study of conic sections, which include ellipses. Your expertise allows you to understand the properties and definitions of ellipses, including the concept of foci.
You can provide a mathematical perspective on why the Sun must be located at one of the foci of the elliptical orbit and not at any other point. Your knowledge will help explain the geometric reasons behind the positioning of the Sun in relation to the planet’s orbit, and why the given options in the statements do not align with the mathematical definition of an ellipse.

Reason: An ellipse is defined as the set of all points for which the sum of the distances to two fixed points, called foci, is constant. The Sun, being one of the massive bodies in the solar system, must be located at one of the foci of the elliptical orbit of a planet due to the gravitational forces that govern the motion of the planets. Option E is correct because the Sun might be at point B, which is one of the foci of the ellipse. The other focus would be at a point symmetrically opposite to B along the major axis.
Answer: E. the sun might be at point B.

Expert Agent #2 Description: You are an astronomer with a specialization in observational astronomy. Your expertise lies in the practical observation and data collection of celestial bodies, including the use of telescopes and other instruments to study the positions and movements of planets and stars. You have a deep understanding of how the apparent motion of celestial bodies can be used to infer their actual motion in space. You can provide insight into how observations of planetary orbits, such as the one depicted in the diagram, have historically been used to deduce the laws of planetary motion and the positions of stars like the Sun. Your knowledge will help explain how practical observations support the theoretical understanding that the Sun must be located at one of the foci of the elliptical orbit.

Reason: The eccentricity of an orbit is a measure of how much it deviates from being a perfect circle. It ranges from 0 (a perfect circle) to 1 (a parabola). An eccentricity less than zero or greater than 1 is not possible for an elliptical orbit. Therefore, options A and B are incorrect. According to Kepler’s laws of planetary motion, the Sun must be located at one of the foci of the elliptical orbit. Point B is located at a position that could be one of the foci of the ellipse.
Answer: E. the sun might be at point B.

Listing 9: An example of multi-agent generation from EVOAGENT for ScienceWorld

Task 1: Your task is to determine if a metal fork is electrically conductive. The metal fork is located around the kitchen. First, focus on the metal fork. If conductive, place it in the blue box. If nonconductive, place it in the green box.
EvoAgent:
Expert Agent Description: You are a thermodynamics expert. Your expertise in heat transfer and temperature measurement is crucial for this task. You understand how to properly use a thermometer and interpret its readings. You also have knowledge about the properties of various substances, including their typical temperature ranges. Your task is to guide the assistant in using the thermometer to accurately measure the temperature of the unknown substance B.

Task 2: Your task is to boil gallium. For compounds without a boiling point, combusting the substance is also acceptable. First, focus on the substance. Then, take actions that will cause it to change its state of matter.
EvoAgent:
Expert Agent Description: You are a metallurgist with a deep understanding of the properties and behaviors of metals, including gallium.
Your expertise includes knowledge of the melting and boiling points of various metals, as well as the processes and equipment needed to heat them to these temperatures. You are also familiar with safety procedures and precautions when handling and heating metals. Your task is to provide detailed instructions on how to safely and effectively boil gallium in the foundry.

Task 3: Your task is to change the state of matter of gallium. First, focus on the substance. Then, take actions that will cause it to change its state of matter.
EvoAgent:
Expert Agent Description: You are a chemist with a specialization in the properties and behaviors of substances. You have a deep understanding of the state changes of matter, specifically how substances like gallium can transition from solid to liquid or gas. You are familiar with the conditions required for these changes to occur, such as temperature and pressure. You also have knowledge of the equipment and procedures used in a foundry, which will be useful in this task. Your role is to provide guidance on how to manipulate the conditions in the foundry to achieve the desired state change in gallium.

Listing 10: An example of multi-agent generation from EVOAGENT for the TravelPlanner dataset

Question: Can you help me with a travel plan departing from Buffalo to Atlanta for a duration of 3 days, specifically from March 2nd to March 4th, 2022? I plan to travel alone and my planned budget for the trip is around $1,100.

EvoAgent:
Expert Agent #1 Description: You are a culinary expert with a deep understanding of food culture in Atlanta. Your expertise includes knowledge of the best local restaurants, popular dishes, and dietary restrictions. You can provide recommendations for breakfast, lunch, and dinner spots that offer a unique taste of Atlanta’s food scene. Your suggestions should consider the traveler’s budget and the proximity of the restaurants to the attractions and accommodation listed in the plan.

Expert Agent #2 Description: You are a travel logistics expert with extensive knowledge of transportation options and costs between Buffalo and Atlanta. Your expertise includes understanding flight schedules, costs, and durations, as well as alternative transportation methods such as self-driving and taxi services. You can provide the most cost-effective and time-efficient transportation options for the traveler, considering their budget and the travel dates provided in the plan.

Expert Agent #3 Description: You are a local tourism expert with comprehensive knowledge of Atlanta’s attractions. Your expertise includes understanding the city’s popular tourist spots, historical sites, museums, parks, and unique local experiences. You can provide recommendations for attractions that the traveler can visit during their stay, considering their budget and the proximity of these attractions to the traveler’s accommodation and meal spots. Your suggestions should also take into account the operating hours of these attractions to ensure they align with the traveler’s schedule.

Answer:
Day 1:
Current City: from Buffalo to Atlanta
Transportation: Flight Number: F3502691, from Buffalo to Atlanta, Departure Time: 18:48, Arrival Time: 20:51
Breakfast: Not applicable as the flight is in the evening.
Attraction: Not applicable as the flight is in the evening.
Lunch: Not applicable as the flight is in the evening.
Dinner: Saffron, Atlanta
Accommodation: Spacious private room close St. Barnabas Hospital, Atlanta

Day 2:
Current City: Atlanta
Transportation: -
Breakfast: Adda, Atlanta
Attraction: Atlanta Botanical Garden, Atlanta; World of Coca-Cola, Atlanta.
Lunch: Baba Au Rhum, Atlanta
Dinner: Asian Bistro, Atlanta
Accommodation: Spacious private room close St. Barnabas Hospital, Atlanta

Day 3:
Current City: from Atlanta to Buffalo
Transportation: Flight Number: F3500648, from Atlanta to Buffalo, Departure Time: 21:24, Arrival Time: 23:26
Breakfast: Chef Style, Atlanta
Attraction: Georgia Aquarium, Atlanta; Martin Luther King, Jr. National Historical Park, Atlanta.
Lunch: Pizza Central, Atlanta
Dinner: Daawat-e-Kashmir, Atlanta
Accommodation: -
Exploring Graph Based Approaches for Author Name Disambiguation

Chetanya Rastogi* (Stanford University, [email protected])
Prabhat Agarwal* (Stanford University, [email protected])
Shreya Singh* (Stanford University, [email protected])
*All authors contributed equally to this research.

arXiv:2312.08388v1 [cs.SI] 12 Dec 2023

Figure 1: The ever expanding author publication network

ABSTRACT
In many applications, such as scientific literature management, researcher search, and social network analysis, Name Disambiguation (aiming at disambiguating WhoIsWho) has been a challenging problem. In addition, the growth of scientific literature makes the problem more difficult and urgent. Although name disambiguation has been extensively studied in academia and industry, the problem has not been solved well due to the clutter of data and the complexity of the same-name scenario. In this work, we aim to explore models that can perform the task of name disambiguation using the network structure that is intrinsic to the problem, and we present an analysis of these models.

1 INTRODUCTION
Online academic search systems (such as Microsoft Academic Graph, Google Scholar, DBLP, and AMiner) host a large number of research papers and have become important and popular platforms for academic communication and paper search. However, due to the limitations of the paper assignment algorithm, many papers are assigned to the wrong authors. In addition, these academic platforms are collecting a large number of new papers every day (AMiner has about 130,000,000 author profiles and more than 200,000,000 papers) [29]. Therefore, how to accurately and quickly assign papers to existing author profiles, and how to maintain the consistency of author profiles, is an urgent problem to be solved for current online academic systems and platforms.

In our project, we aim to implement author name disambiguation techniques to disambiguate profiles of authors with similar names and affiliations. We study the problem from a network perspective in which researchers communicate with one another by means of their publications. The network is modeled as a bipartite graph containing two types of nodes, viz. author nodes and paper nodes. Each edge in the graph represents an author’s contribution to a paper. We believe that this inherent structure can encapsulate many implicit and intrinsic features that are otherwise impossible to capture using bibliometric data alone.

2 RELATED WORK
The problem of Author Name Disambiguation has been of interest to researchers for a long time. [9] formulates it in the paradigm of supervised learning and makes use of various features associated with a publication, including title, co-authors, and conference, to correctly associate a publication with a specific author by learning the linkage function between the publication and the author. The authors make use of two different datasets, one from DBLP and the other collected from the web, and test two different classification algorithms on both datasets. However, the authors do not take into account the implicit network structure that lies in the dataset. [26] furthers the task and provides an extensive study on choosing a minimal subset of features, by means of a random forest classifier, that can identify the correct author entity linked with a particular publication. The authors also introduce a new dataset called Medline, which is specific to researchers in biomedical science.
Once again, the authors of these works not only ignore the underlying graph structure but also restrict the work to a particular domain, which constrains the problem to a very narrow dataset. Another important shortcoming of both of the above works is that the authors know beforehand how many clusters they need to identify for a particular name. This challenge is tackled by [17, 25], which investigate a dynamic approach for estimating the number of people associated with a particular name. They propose a novel approach of framing the problem as a Markov Random Field and try to make use of the underlying graph structure by defining similarity both in terms of the content of the publications and in terms of the relationships between them via co-authors. Similarly, [29] also addresses learning the cluster size dynamically and quantifying similarity in terms of the graph structure.

In [15], Ma et al. propose a novel AND (Author Name Disambiguation) approach which tries to disambiguate the authorship of research papers. As the population grows, some people will inevitably share some personal features at different levels (like names and affiliations). This poses a huge challenge for many applications like information retrieval and academic network analysis. The dataset used by the authors in this work is the AMiner dataset, which is a heterogeneous academic network consisting of multiple entities (i.e., author, paper, venue, topic) as well as relationships (i.e., writing, publishing, collaborating and affiliations). To solve the problem of name disambiguation, the authors propose a meta-path channel based heterogeneous network representation learning method (Mech-RL), wherein node embeddings are learned from the whole heterogeneous graph instead of breaking it down into simpler subgraphs.

The node (paper) embeddings [23] are learned at two levels: they are initialized from textual features (Doc2Vec embeddings) and further optimized using relational features (from the meta-paths in which they appear). Once each entity (here, paper) is represented through its low-dimensional embedding, the task is reduced to a clustering task where each cluster contains the papers belonging to a unique person. Another thing to be noted is that in this approach, the authors solve the name disambiguation problem without considering the private information of the researchers. The experimental results based on the AMiner dataset show that Mech-RL obtains better results compared to other state-of-the-art author disambiguation methods [20, 22, 28].

2.1 Bibliometric approaches to tracking international scientific migration
Scientific migration/mobility is a well-studied topic in sociology. With the availability of large-scale bibliometric data online (Scopus, MAG, DBLP, etc.), many studies have been done to quantify scientific mobility on large datasets. Since academic network data is noisy and has missing data, people have used different methods to address the concerns of name disambiguation, geo-tagging, etc. Hadiji et al. [7, 12] use compressed label propagation to infer missing geo-tags of author-paper pairs retrieved from online bibliographies like ACM, DBLP, etc. Robinson et al. [19] used the name disambiguation method from [2] to augment the data. Moed et al. [16] used Scopus data to circumvent these noises. All these studies then use statistical measures to model different aspects of migration for each author, considered separately over the period covered by the study.
Hadiji et al. estimate the distribution of the move propensity of an author, whereas Moed et al. analyze the relative migration index for 17 country pairs. Robinson et al. introduce a new taxonomy to account for different mobility patterns rather than just migration, classify each author into one of the classes based on their affiliation history, and present an analysis of the different mobility classes for different countries.

2.2 Characterizing evolution of graphs over time
In [4], Domenico et al. try to understand the dynamics of an academic network to determine the flow of authors’ research interests, which they refer to as “the knowledge diaspora”. They use the Microsoft Academic Graph [21, 24] and the SCImago [14] classification to categorize each paper under different areas of knowledge and study temporal snapshots to identify a growing or falling interest in a particular area of study. By studying this question from a network perspective and modeling it as a multi-layer network, the authors formulate a quantitative metric to indicate the “attractiveness” of a topic through time and are able to relate the metric to the corresponding historical or political events during that time. Furthermore, the authors also provide a metric to quantify whether a particular area of study is serving as a “source” (supplying other areas with researchers) or a “sink” (attracting researchers towards higher trans-disciplinary and multidisciplinary research).

Dynamic network analysis [10] is a sub-field of network science aiming at representing and studying the behavior of systems constituted of interacting or related objects evolving through time. While there is substantial work on macroscopic (graph-level) and mesoscopic (community-level) analysis of such networks, microscopic analytic methods are less studied [1]. In [18], Orman et al. introduce the concept of neighborhood events as a measure to characterize node behavior across time steps. They also present a parallelizable algorithm to detect such events efficiently and show that this event-sequence characterization can be used to analyze global trends in the network as well as to characterize individual nodes:

(1) Node characterization: They cluster nodes based on the count of events in a time slice t_i and are able to identify clusters of stable nodes and active nodes. Moreover, they observe that these clusters have different most-frequent event sequences.
(2) Global trends: They used frequent pattern mining to identify certain trends among the nodes at the level of the network and found that the Enron trends reflect the routine of sporadically sending/receiving emails, whereas those of LastFM and DBLP describe a similar life cycle for ego-components: creation, growth, and decline.

3 DATA
3.1 Author Name Disambiguation
We utilise a dataset hosted as part of a competition called OAG-WhoIsWho Track 1 [3]. The organizers provide three different datasets for training, validation and testing of models but provide the ground-truth labels only for the train set. Therefore, to test our models and provide quantitative metrics for our methods, we utilise only the training set, which we now refer to as the “entire” set.

Task Description: Given a set of papers whose authors share one same name, the task is to return different clusters of papers. Each cluster has one author, and different clusters have different authors, although they have the same name. A minimal sketch of this input/output interface is shown below.
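The following sketch makes the task interface concrete under assumed (hypothetical) data structures: a mapping from an ambiguous name to its candidate paper ids, and a pluggable clustering function returning disjoint groups of paper ids; none of these names come from the competition's actual API.

# Minimal sketch of the disambiguation task interface (assumed structures,
# not the competition's API). Input: papers sharing one author name;
# output: disjoint clusters, one per underlying author profile.
from typing import Callable

def disambiguate(
    papers_by_name: dict[str, list[str]],                # name -> paper ids
    cluster_fn: Callable[[list[str]], list[list[str]]],  # papers -> clusters
) -> dict[str, list[list[str]]]:
    return {name: cluster_fn(papers) for name, papers in papers_by_name.items()}

# Example: two authors named "j_smith" should ideally yield two clusters.
papers_by_name = {"j_smith": ["p1", "p2", "p3", "p4"]}
naive = lambda papers: [papers]  # trivial baseline: one cluster per name
print(disambiguate(papers_by_name, naive))  # {'j_smith': [['p1', 'p2', 'p3', 'p4']]}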
Data Description: The dataset consists of two sets of information: the list of publications for each author name, and the metadata of the publications. The format and fields of the publication metadata are described in Table 1. Additionally, the train data contains the publications of each author name clustered by author profile, which is the required output of the task. Initial data exploration of the metadata showed that the data is very noisy and has many typos and wrong entries, which makes it non-consumable in its raw form. Therefore, we pre-process and augment the data as described in the next section.

Table 1: Description of the fields in the paper data

Field        | Type            | Meaning            | Example
id           | string          | Paper ID           | 53e9ab9eb7602d970354a97e
title        | string          | Paper Title        | Data mining: concepts and techniques
authors.name | string          | Authors            | Jiawei Han
author.org   | string          | Organization       | department of computer science university of illinois at urbana champaign
venue        | string          | Conference/Journal | Inteligencia Artificial, Revista Iberoamericana de Inteligencia Artificial
year         | int             | Publication year   | 2000
keywords     | list of strings | Key words          | [“data mining”, “structured data”, “world wide web”, “social network”, “relational data”]
abstract     | string          | Abstract           | Our ability to generate...

We run our experiments under both the supervised and the unsupervised learning paradigm. To allow for fast and feasible experimentation, we sample 20 names at random from the entire set, on which we train and evaluate our methods. For unsupervised methods, we use the complete sampled dataset for training as well as evaluation, while for supervised learning methods the sampled dataset is split into train, validation and test sets with 15, 2, and 3 names respectively. To verify that the randomly sampled set is a valid placeholder for the entire set, we compare different attributes of the graphs generated by both and observe similar distributions. The data summary comparing the statistics of the sampled set with the entire set is shown in Table 2. The data summary for the train, validation and test sets is shown in Table 3.

Table 2: Statistics for the Sampled set and the Entire set

Parameter                   | Entire Set | Sampled Set
Distinct author names       | 221        | 20
Distinct author profiles    | 22839      | 1945
# of publications           | 203184     | 16788
# of connected components   | 22105      | 1927
Largest Connected Component | 16339      | 1048

Table 3: Train, validation and test data summary

Dataset    | # of publications | # of author names | # of author profiles | avg. publications per author profile
Train      | 10966             | 15                | 1347                 | 8.14
Validation | 1833              | 2                 | 271                  | 6.76
Test       | 4055              | 3                 | 327                  | 12.4

3.2 Data pre-processing and summary
3.2.1 Size of the dataset. The number of publications, distinct author names and author profiles is shown in Table 3, and Figure 3 shows the distribution of the number of publications across all author profiles in the sampled dataset. There are on average 103.34 distinct author profiles for each author name in the entire dataset. The distribution of author profile counts per author name is shown in Fig. 2.

Figure 2: Author name frequency distribution by profile count

Figure 3: Cluster size distribution (number of publications per author profile)

3.2.2 Conference and Journals. Academic conferences are symposiums which researchers attend to present their findings and hear about the latest work in their field of interest. In Fig. 4, we illustrate the frequency distribution of the top 20 conferences/journals (by publication count) in which authors have published their work, plotted on the x-axis, with the publication count on the y-axis. From this data, we can see vividly that conferences and journals in Applied Mechanics/Materials, Applied Physics and Bioinformatics are popular among the authors. This is validated by the keyword frequency distribution graph in Fig. 5 too, where we see the top keywords pertaining to topics in these very fields.

Figure 4: Paper frequency distribution by Conference/Journal

3.2.3 Keywords. Effective keywords of an article portray an accurate representation of what an author wants to publish. Many a time, at first glance, we look at the topic, keywords and abstract to get an idea about the research context of a publication. Fig. 5 illustrates the frequency distribution of the top 20 keywords (by count) in the publications of the training dataset, with the selected keywords on the x-axis and their counts plotted on the y-axis.
This primarily gives us an idea about the different genres/topics of research in which authors have published their work. It can be seen that many of the publications contain keywords pertaining to the domains of materials research, applied mechanics and bioinformatics.

Figure 5: Keyword frequency distribution

3.2.4 Year. In Fig. 6, we illustrate the publication frequency distribution by year. We see that our dataset consists of publications from 1995 to 2019, with a majority of them being published between 2007 and 2017. This emphasizes the recency of the dataset and its better robustness to the present scenario. We plot years on the x-axis and the publication count of that year on the y-axis.

Figure 6: Publication frequency distribution by year

3.2.5 Author name and affiliation. In Fig. 7, we illustrate the distribution of author profile counts against the count of distinct organizations. More formally, we record on the y-axis the number of author profiles that have been affiliated with the corresponding number of distinct organizations on the x-axis. The graph in Fig. 7 shows that many author profiles have switched across organizations in their career, which in turn strengthens the claim that many authors move across different organizations/places to cater to their research interests.

Figure 7: Author profile frequency distribution by organization count

4 METHODS
4.1 Problem Formulation
We formulate the problem of author name disambiguation as finding similarity between nodes in a bipartite graph. Given a set of publications and their respective co-authors, we construct a bipartite graph as shown in Fig. 8.
Formally, given a set of publications P, we construct a bipartite graph G as follows:

G = (U, V, E)
U = {a | a ∈ p.authors and p ∈ P}
V = P
E = {(a, p) | a ∈ p.authors and p ∈ P}

Now, we define the task of author name disambiguation as clustering the nodes with the same author name in U based on some node similarity function AUTHOR_SIM. The clustering algorithm is shown in Algorithm 1. We analyse the behavior of various node similarity functions AUTHOR_SIM based on random walks (Section 4.4), node embeddings (Section 4.5), and graph convolutional networks (Section 4.6).

Figure 8: Bipartite graph of author nodes and publications

Algorithm 1: Clustering author-org nodes based on a similarity function
Input: P: set of publications (partitioned by author name) to cluster according to author profiles; AUTHOR_SIM: author node similarity function
Output: Clusters of publications according to author profile
1 Clusters = {}
2 foreach p ∈ P do
3     Find the cluster in Clusters with the greatest similarity s to the author of interest in p, using the similarity function AUTHOR_SIM
4     If s > θ, add p to the maximum-similarity cluster; else create a new cluster
5 end
6 return Clusters

4.2 Evaluation
The evaluation metric used in the task is the macro-averaged F1 score, defined as below:

PairwisePrecision = #PairsCorrectlyPredictedToSameAuthor / #TotalPairsPredictedToSameAuthor
PairwiseRecall = #PairsCorrectlyPredictedToSameAuthor / #TotalPairsToSameAuthor
PairwiseF1 = (2 × PairwisePrecision × PairwiseRecall) / (PairwisePrecision + PairwiseRecall)

4.3 Text based similarity (Baseline)
We implement two baseline methods and study the performance of our system with respect to them. Both baselines are described below, with their performance summarised in Table 4; a runnable sketch of the clustering loop with the second baseline plugged in follows this section.

(1) ClusterByName: For the first baseline we combine all the authors with the same name under a single author profile. Formally, we define the AUTHOR_SIM function as follows:
AUTHOR_SIM(a1, a2) ← a1.name = a2.name
This provides a lower bound on any system’s overall performance, as it is likely that authors with the same name are the same person (entity) and highly unlikely (though not impossible, since people use different forms of their names) that authors with different names represent the same person (entity). As expected, the precision is quite low and the recall of the system is very high in this case.

(2) ClusterByNameAndOrg: For the second baseline, we further fine-grained the author profiles with respect to their affiliation and name combined. Due to the highly noisy data, instead of doing a perfect match on the organization name, we make use of the Jaro-Winkler similarity metric to match organizations. Formally, we define the AUTHOR_SIM function as follows:
AUTHOR_SIM(a1, a2) = (a1.name == a2.name) & (jaro(a1.org, a2.org) > 0.9)
Since it is highly unlikely that authors with the same name are affiliated with the same organization, this baseline ensures that we do not cluster different author profiles together, but it runs the risk of creating multiple profiles for a single author. The performance of the system degraded with this baseline, which implies that authors frequently change affiliation over their lifetime; this is validated by the author affiliation statistic in Fig. 7.
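Below is a compact sketch of Algorithm 1 with the ClusterByNameAndOrg similarity plugged in as AUTHOR_SIM; the jellyfish dependency (for Jaro-Winkler) and the simplified author-mention record layout are assumptions for illustration, not the authors' implementation.

# Sketch of Algorithm 1 with the ClusterByNameAndOrg baseline as AUTHOR_SIM
# (illustrative; assumes `pip install jellyfish` and a simplified record
# layout where each author mention is {"name": ..., "org": ...}).
import jellyfish

def author_sim(a1: dict, a2: dict) -> float:
    """Name must match exactly; org is matched fuzzily via Jaro-Winkler."""
    if a1["name"] != a2["name"]:
        return 0.0
    return jellyfish.jaro_winkler_similarity(a1["org"], a2["org"])

def cluster(mentions: list[dict], theta: float = 0.9) -> list[list[dict]]:
    """Greedy clustering loop of Algorithm 1 over author mentions."""
    clusters: list[list[dict]] = []
    for m in mentions:
        best, best_sim = None, 0.0
        for c in clusters:  # find the existing cluster with greatest similarity
            s = max(author_sim(m, q) for q in c)
            if s > best_sim:
                best, best_sim = c, s
        if best is not None and best_sim > theta:
            best.append(m)          # add to the maximum-similarity cluster
        else:
            clusters.append([m])    # otherwise create a new cluster
    return clusters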
4.4 Random Walk based similarity Random walk with restart (RWR) provides a good relevance score between two nodes in a graph, and it has been successfully used in numerous settings, like automatic captioning of images, generaliza- tions to the “connection subgraphs”, personalized PageRank, and many more. Hence we use a slightly modified version of the RWR algorithm shown in algorithm 2 to find and merge similar author nodes. In our version, the similar nodes are merged on the go after each random walk so that further iterations can benefit from the re- sults of previous iterations. Formally, we define the 𝐴𝑈𝑇 𝐻𝑂𝑅_𝑆𝐼𝑀 function as follows: 𝐴𝑈𝑇 𝐻𝑂𝑅_𝑆𝐼𝑀 (𝑎1, 𝑎2) = 𝑎1.𝑛𝑎𝑚𝑒 == 𝑎2.𝑛𝑎𝑚𝑒 & 𝑅𝑊 𝑅𝑉 𝑖𝑠𝑖𝑡𝐶𝑜𝑢𝑛𝑡 (𝑎1, 𝑎2) > 0 Unlike the other similarity functions explored, in this method we update the graph online as we find similar nodes using the RWR visit count. 4.5 Transductive embedding based similarity Node Embeddings have been successful in many graph classification and clustering tasks and hence we explore both transductive and inductive embedding methods to define the author node similarity function. Inductive learning methods are described in next section Algorithm 2: Random walk based node merging Input: 𝐺: Bipartite graph of authors and publications, 𝛼: restart probability, 𝑁 : max number of epochs, 𝑊 : Random walk length, 𝑇 : Threshold of visit count for merge Output: G with disambiguated nodes merged 1 for 𝑒𝑝𝑜𝑐ℎ ← 1 to 𝑁 do 2 foreach 𝑎𝑢𝑡ℎ𝑜𝑟 𝑁𝑜𝑑𝑒 ∈ 𝐺 .𝑉 do 3 4 5 6 7 8 9 10 11 12 13 14 15 16 visitCount ← {} 𝑠𝑡𝑎𝑟𝑡𝑁𝑜𝑑𝑒 ← 𝑎𝑢𝑡ℎ𝑜𝑟 𝑁𝑜𝑑𝑒 𝑙 ← 0 while 𝑙 < 𝑊 do if 𝑟𝑎𝑛𝑑𝑜𝑚 < 𝑎𝑙𝑝ℎ𝑎 then 𝑎𝑢𝑡ℎ𝑜𝑟 𝑁𝑜𝑑𝑒 ← 𝑠𝑡𝑎𝑟𝑡𝑁𝑜𝑑𝑒 end else sample a random neighbor pubNode of 𝑎𝑢𝑡ℎ𝑜𝑟 𝑁𝑜𝑑𝑒 sample a random neighbor coAuthorNode of pubNode 𝑎𝑢𝑡ℎ𝑜𝑟 𝑁𝑜𝑑𝑒 ← 𝑐𝑜𝐴𝑢𝑡ℎ𝑜𝑟 𝑁𝑜𝑑𝑒 end 𝑣𝑖𝑠𝑖𝑡𝐶𝑜𝑢𝑛𝑡 [𝑎𝑢𝑡ℎ𝑜𝑟 𝑁𝑜𝑑𝑒]+ = 1 end Merge 𝑠𝑡𝑎𝑟𝑡𝑁𝑜𝑑𝑒 with nodes with 𝑣𝑖𝑠𝑖𝑡𝐶𝑜𝑢𝑛𝑡 [𝑛𝑜𝑑𝑒] > 𝑇 if same is similar. end 17 18 end 6 under the graph convolution methods. In this section, we describe a popular transductive embedding method Node2Vec. Node2Vec framework learns low-dimensional representations for nodes in a graph by optimizing a neighborhood preserving ob- jective. The objective is flexible, and the algorithm accommodates for various definitions of network neighborhoods by simulating biased random walks. The two main user-defined hyperparameters 𝑝 and 𝑞 stand for the return and in-out hyperparameters respec- tively. The return parameter 𝑝 controls the probability of the walk staying inwards, revisiting the nodes again (exploitation); whereas the inout parameter 𝑞 controls the probability of the walk going farther out to explore other nodes (exploration). In our approach, we run the Node2Vec algorithm on the bipartite graph G defined in section 4.1. In our setting, we run Node2Vec with the length of the walks set at 10, number of epochs set at 20 and p and q parameters set at 1. After running the Node2Vec algorithm, we derive the node embed- dings 𝐸𝑀𝐵 of all the paper and author nodes of our graph. Then we define the 𝐴𝑈𝑇 𝐻𝑂𝑅_𝑆𝐼𝑀 function as follows: 𝐴𝑈𝑇 𝐻𝑂𝑅_𝑆𝐼 𝑀 (𝑎1, 𝑎2) = 𝑎1.𝑛𝑎𝑚𝑒 == 𝑎2.𝑛𝑎𝑚𝑒 & 𝑐𝑜𝑠𝑖𝑛𝑒 (𝐸𝑀𝐵(𝑎1), 𝐸𝑀𝐵(𝑎2)) > 𝜃 where 𝜃 is a user defined threshold. The intuition behind using Node2Vec embeddings to express the author and paper nodes is that Node2Vec leverages the inbuilt graphical properties of a graph by running multiple random walks across the nodes. Hence, running Node2Vec on the given Bipartite graph will result in placing similar author (author-organization) nodes together in the walks. 
4.5 Transductive embedding based similarity
Node embeddings have been successful in many graph classification and clustering tasks, and hence we explore both transductive and inductive embedding methods to define the author node similarity function. Inductive learning methods are described in the next section under the graph convolution methods. In this section, we describe a popular transductive embedding method, Node2Vec.

The Node2Vec framework learns low-dimensional representations for nodes in a graph by optimizing a neighborhood-preserving objective. The objective is flexible, and the algorithm accommodates various definitions of network neighborhoods by simulating biased random walks. The two main user-defined hyperparameters, p and q, are the return and in-out hyperparameters, respectively. The return parameter p controls the probability of the walk staying inwards and revisiting nodes (exploitation), whereas the in-out parameter q controls the probability of the walk going farther out to explore other nodes (exploration).

In our approach, we run the Node2Vec algorithm on the bipartite graph G defined in section 4.1, with the walk length set to 10, the number of epochs set to 20, and both p and q set to 1. After running the Node2Vec algorithm, we derive the node embeddings EMB of all the paper and author nodes of our graph. Then we define the AUTHOR_SIM function as follows:

AUTHOR_SIM(a1, a2) = a1.name == a2.name & cosine(EMB(a1), EMB(a2)) > θ

where θ is a user-defined threshold. The intuition behind using Node2Vec embeddings to represent the author and paper nodes is that Node2Vec leverages the inherent graphical properties of the graph by running multiple random walks across the nodes. Hence, running Node2Vec on the given bipartite graph places similar author (author-organization) nodes together in the walks. The author (author-organization) nodes that occur together in multiple random walks eventually obtain very similar embeddings by optimizing the Node2Vec loss function. Intuitively, this means that author (author-organization) nodes with similar embeddings might belong to the same author and hence should be coalesced together, conditioned on some defined clustering threshold. We tabulate the results from this approach using different values of the threshold in Table 6.

4.6 Graph Neural Networks
Graph neural networks (GNNs) have been very successful in a variety of graph and node classification tasks, as they can learn a representation of a node that incorporates both graph structure and node features. Hence, we use GNNs to learn the AUTHOR_SIM function in both supervised and unsupervised settings. The initial node features used in these methods are described below.

4.6.1 Features
To take into account the meta-information of the publications, we make use of the various fields that accurately identify a publication. As mentioned in section 3.2, we analyse all the fields and define the following features, which are used in all the graph neural network based approaches:

• Title: Titles convey very precise and specific information that is unique to each publication. To incorporate the information contained in the title, we generate 100-dimensional Doc2Vec embeddings [13], trained over the entire corpus of titles.
• Abstract: Like titles, abstracts contain crucial information, but at a higher level; in some sense they convey the broader area to which the publication is related. As with titles, we generate 100-dimensional Doc2Vec embeddings [13], trained over the entire corpus of abstracts.
• Year: We generate a standardized year number for each paper with respect to the starting year observed in the year distribution of the training corpus. We then use the standardized year number directly as a feature.
• Organization: Inspired by Name2Vec [6], we generate 100-dimensional embeddings of the organization using Doc2Vec, where each organization is represented as a document of character bigrams and trained over the whole corpus.

To summarize, we generate separate Doc2Vec embeddings [13] for the abstract and title fields, each in a 100-dimensional space. To account for the activity of an author in the temporal space, we make use of the year field and standardize it with respect to a starting year. Similarly, we embed the org field in a 100-dimensional space using Name2Vec [5].

We also experimented with two different aggregation methods for combining feature information across nodes: in the first, we projected each individual feature into a latent space and then combined them (sum), whereas in the second we first combined all the features (concatenate) and then projected the combined feature vector into a latent space. In our experiments, the latter approach (concatenate and project) performed better, and hence we report all results using this method.
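The two aggregation strategies can be contrasted with a small numpy sketch. This is a minimal illustration under stated assumptions: the random matrices stand in for learned projection weights, the random vectors stand in for Doc2Vec outputs, and the dimensions and year scaling are illustrative.

import numpy as np

rng = np.random.default_rng(0)
D_IN, D_LATENT = 100, 64  # per-field feature size and latent size (illustrative)

# toy per-field features for one node (stand-ins for Doc2Vec vectors)
title_emb = rng.normal(size=D_IN)
abstract_emb = rng.normal(size=D_IN)
org_emb = rng.normal(size=D_IN)
year_feat = np.array([(2015 - 1990) / 30.0])  # standardized year (toy scaling)

# Strategy 1: project each field into the latent space, then sum
W_t = rng.normal(size=(D_LATENT, D_IN))
W_a = rng.normal(size=(D_LATENT, D_IN))
W_o = rng.normal(size=(D_LATENT, D_IN))
W_y = rng.normal(size=(D_LATENT, 1))
h_sum = W_t @ title_emb + W_a @ abstract_emb + W_o @ org_emb + W_y @ year_feat

# Strategy 2 (reported above as better): concatenate all fields, then project
x = np.concatenate([title_emb, abstract_emb, org_emb, year_feat])
W = rng.normal(size=(D_LATENT, x.shape[0]))
h_concat = W @ x

print(h_sum.shape, h_concat.shape)  # both (64,)

The concatenate-and-project variant lets the single projection learn cross-field interactions, which is one plausible reason it performed better in the experiments reported above.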
4.6.2 Unsupervised Similarity Function
In the unsupervised setting, given the author-publication bipartite graph G, we want to learn embeddings for the nodes such that nodes close in the graph are more similar than those far away. The hypothesis is that this will lead to node representations in which nodes belonging to the same author profile have similar embeddings, both when they are in a close neighborhood and when they are in different components. Formally, given the initial node features x, we calculate the embeddings z as follows:

h^0 = x
h^l = GNN(h^(l-1))
z = h^L

We use the GNN described in PinSage [27] with neighborhood sampling so that the method can be applied to large academic graphs. To train the model to learn similar embeddings for nodes in close vicinity and dissimilar embeddings for faraway nodes, we use the following max-margin (hinge) loss:

L = max(0, z_src · z_dst_neg − z_src · z_dst + δ)    (1)

where (src, dst) is a positive (nearby) pair, dst_neg is a sampled faraway negative node, and δ is the margin. Now, we define the AUTHOR_SIM function as follows:

AUTHOR_SIM(a1, a2) = a1.name == a2.name & z_a1 · z_a2 > θ

where θ is a user-defined threshold. The results of the model are shown in Table 4.
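A minimal PyTorch sketch of the max-margin objective in equation (1) follows. It assumes the embeddings come out of a PinSage-style GNN encoder; the batch shape and margin value are illustrative, not taken from the paper.

import torch
import torch.nn.functional as F

def max_margin_loss(z_src, z_dst, z_dst_neg, delta=0.5):
    # Hinge / max-margin loss over embedding dot products: push each
    # positive (src, dst) pair to score at least delta above the
    # corresponding sampled negative (src, dst_neg) pair.
    pos = (z_src * z_dst).sum(dim=-1)      # dot products for positive pairs
    neg = (z_src * z_dst_neg).sum(dim=-1)  # dot products for negatives
    return F.relu(neg - pos + delta).mean()

# toy batch of 4 embeddings of size 16
z_src, z_dst, z_neg = (torch.randn(4, 16) for _ in range(3))
print(max_margin_loss(z_src, z_dst, z_neg))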
Method                            pP     pR     pF
ClusterByName (name)              0.14   1.00   0.25
ClusterByNameAndOrg (name, org)   0.25   0.12   0.17
RWR-Merge                         0.30   0.17   0.22
Node2Vec                          0.27   0.11   0.16
Supervised-PinSage                0.22   0.20   0.24
Supervised-GraphSage              0.14   0.14   0.23
Supervised-GCN                    0.26   0.27   0.24
Supervised-MLP                    0.72   0.28   0.18
Unsupervised-PinSage              0.12   0.80   0.21

Table 4: Performance of baseline methods

Author name    Merged author-org nodes
m_giffels      (m_giffels, cern); (m_giffels, rwth)
t_dahms        (t_dahms, laboratoire leprinceringuet ecole polytechnique); (t_dahms, cern european organization for nuclear research)
e_yildirim     (e_yildirim, enrico fermi institute); (e_yildirim, desy)
m_k_jha        (m_k_jha, purdue university); (m_k_jha, university of puerto rico)

Table 5: Sample clusters produced by RWR-Merge

4.6.3 GNN based Supervised Similarity Function
In the supervised setting, we first create a dataset of pairs of author nodes, consisting of pairs that are similar (belonging to the same author profile) and pairs that are dissimilar (belonging to different author profiles with the same or different names). We then use a Siamese network F with a negative log-likelihood loss to learn the weights of the network (shown in Figure 9). We then define the AUTHOR_SIM function as follows:

AUTHOR_SIM(a1, a2) = a1.name == a2.name & F(a1, a2) > θ

where θ is a user-defined threshold. We use different variations of the GNN architecture, the results of which are shown in Table 4.

Figure 9: Architecture of the network for supervised training

4.6.4 MLP based Supervised Similarity Function
To explore the effect of using graph features in finding similar nodes, we also train a model with only fully connected layers instead of the GNN layers in the above network. The results of the model are shown in Table 4.

5 RESULTS AND ANALYSIS
5.1 RWR-Merge
Table 5 shows some of the sample nodes that were correctly identified and merged together using just the network structure. Since we only merge nodes if both nodes have the same name, this method was expected to perform better on the pairwise precision metric, without any guarantees on recall.

5.1.1 Error Analysis
We expected this method to have high pairwise precision but did not get the desired results, as shown in Table 4. On careful inspection of the merged nodes, we found that the precision suffered because a high number of author nodes did not have org information associated with them. This might have led to a wrong initialization at the beginning of the algorithm, as two different authors with the same name but no org were already provided to the algorithm as a single identity. Moreover, as the graph was highly disconnected, the algorithm had no chance of merging two nodes that were split across two or more connected components. This can be seen in the low pairwise recall values.

Similarity threshold   pP     pR     pF1
0.0                    0.14   0.97   0.25
0.5                    0.27   0.11   0.16
0.8                    0.27   0.11   0.15
0.95                   0.26   0.10   0.15

Table 6: Node2Vec evaluation on different clustering thresholds

5.2 Node2Vec
Table 6 tabulates the precision, recall, and F1 scores for the different clustering thresholds used to cluster the Node2Vec vectors, as explained in Section 4.5. The precision, recall, and F1 scores are calculated according to the evaluation metrics defined in Section 4.2.

5.2.1 Error Analysis
One of the samples classified correctly by the Node2Vec model (a true positive) is the author name 'Alessandro Giuliani', for whom two distinct author profiles are identified: the first containing the papers with IDs '5HWAan4P' ('A recursive network approach can identify constitutive regulatory circuits in gene expression data') and 'I7KqbI7a' ('Medical Data Analysis, Third International Symposium, ISMDA 2002, Rome, Italy, October 8-11, 2002, Proceedings'), and the second containing the papers with IDs 'tznTWpXP' ('Multifractal characterization of protein contact networks') and 'fbcIaJuu' ('A generative model for protein contact networks'). Among the false positive examples is the author name 'Yan Liang', for whom we cluster the papers with IDs 'zwzfkKwL' ('Track initiation algorithm based on Hough transform and clustering'), 'rZGsx5cX' ('Space-time linear dispersion codes based on optimal algorithms'), and 'rl9MJQHl' ('Coordinative stock management system for permissible storage in VMI pattern') under the same author profile.

5.3 Unsupervised GNN embeddings
The results for the unsupervised PinSage algorithm are shown in Table 4. This method had very high pairwise recall at the expense of pairwise precision. The method did extremely well in identifying similar author nodes that were scattered across multiple connected components of the bipartite graph but failed to capture fine-grained distinctions between different author nodes with the same name. GCN [11], GraphSage [8], and PinSage [27] perform similarly on this task, which is an interesting line to explore in the future.

5.3.1 Error Analysis
The unsupervised version of PinSage overcame the problem of RWR, as it defines a similarity function that can assign non-trivial values to any two nodes in the entire graph. Due to this, the algorithm successfully identified similar nodes across multiple connected components on the basis of graph structure (node degree, egonet, etc). However, the algorithm could not discriminate between distinct author profiles due to the lack of supervision. This was evident from the fact that the final clustering obtained from this method had 2 clusters for each author name: one with high degree (many publications) and the other with low degree.

5.4 Supervised GNN/MLP embeddings
We used different GNN architectures to study the variation of performance depending on the network. We also conducted an ablation study using only FC layers over the node features to study the effect of incorporating the features of neighbors. The results of the different experiments are shown in Table 4. We expected the MLP to perform poorly compared to the GNN layers and expected the supervised setting to perform better than the unsupervised model above.
5.4.1 Error Analysis
Since in the MLP architecture we train only on author node features, i.e., the embedding of the organization, we expect the results to be similar to the baseline model in which we clustered nodes based on name and organization, and this is indeed the case, as can be seen from Table 4.

Compared to the unsupervised GNN embeddings, the supervised architecture is expected to perform better, as the labels are directly fed into the system, allowing the network to learn that nodes in different components can also be similar and hence biasing the network to look more at node features such as the abstract and title of the publications. We observe that the different GNN architectures, such as GCN, GraphSage, and PinSage, perform comparably on this task. We have also observed that while the embedding similarities are quite skewed in the unsupervised setting, rendering the model immune to threshold variation, the performance of the supervised model is dependent on the threshold, giving a knob to tune recall and precision as required.

6 CONCLUSION
In this paper, we have thoroughly analysed the dataset hosted as part of the Open Academic Graph WhoIsWho Track 1 and have implemented various techniques to specifically address the author name disambiguation problem. Formally, we first represent the dataset as a bipartite graph containing two types of nodes: authors and papers. We then define different flavours of the author similarity function to cluster the author nodes with the same author name together. We experiment with (1) text based similarity, (2) random walk based similarity, (3) transductive embedding based similarity, and (4) graph neural network methods, and record results for each. We conduct extensive quantitative and qualitative analysis of our dataset and graph, run several offline experiments with different combinations of graph-based approaches and author similarity functions, and report the results. We observe that random walk based methods have high precision but low recall (as we cluster nodes conservatively), whereas embedding based methods in general have low precision and high recall (due to nodes across components being clustered together).

7 FUTURE WORK
We applied several architectures and learning paradigms to the problem of author name disambiguation and did a rigorous error analysis of these methods. Based on the results, one straightforward extension is to combine the RWR method with other supervised learning techniques, since RWR can provide a good starting point by aggregating some nodes, which can result in high accuracy and low training time for these networks. Another area to focus on is the tuning of the hyper-parameters to achieve a model of optimum performance, as the models have shown significant promise in the initial experiments we conducted.

ACKNOWLEDGMENTS
We would like to thank Michele Catasta for guidance and consistent help throughout the course of the project and for the generous Google compute credits.

REFERENCES
[1] Charu Aggarwal and Karthik Subbian. 2014. Evolutionary network analysis: A survey. ACM Computing Surveys (CSUR) 47, 1 (2014), 10.
[2] Emiel Caron and Nees Jan van Eck. 2014. Large scale author name disambiguation using rule-based scoring and clustering. In Proceedings of the 19th International Conference on Science and Technology Indicators. CWTS-Leiden University, Leiden, 79–86.
[3] Bo Chen. 2019. OAG-WhoIsWho Track 1. https://www.biendata.com/competition/aminer2019/.
[4] Manlio De Domenico, Elisa Omodei, and Alex Arenas. 2016. Quantifying the diaspora of knowledge in the last century. Applied Network Science 1, 1 (2016), 15.
[5] Jeremy Foxcroft, Adrian d'Alessandro, and Luiza Antonie. 2019. Name2Vec: Personal names embeddings. In Advances in Artificial Intelligence, Marie-Jean Meurs and Frank Rudzicz (Eds.). Springer International Publishing, Cham, 505–510.
[6] Jeremy Foxcroft, Adrian d'Alessandro, and Luiza Antonie. 2019. Name2Vec: Personal names embeddings. In Canadian Conference on Artificial Intelligence. Springer, 505–510.
[7] Fabian Hadiji, Martin Mladenov, Christian Bauckhage, and Kristian Kersting. 2015. Computer science on the move: Inferring migration regularities from the web via compressed label propagation. In Twenty-Fourth International Joint Conference on Artificial Intelligence.
[8] Will Hamilton, Zhitao Ying, and Jure Leskovec. 2017. Inductive representation learning on large graphs. In Advances in Neural Information Processing Systems. 1024–1034.
[9] Hui Han, Lee Giles, Hongyuan Zha, Cheng Li, and Kostas Tsioutsiouliklis. 2004. Two supervised learning approaches for name disambiguation in author citations. In Proceedings of the 4th ACM/IEEE-CS Joint Conference on Digital Libraries (JCDL '04). ACM, New York, NY, USA, 296–305. https://doi.org/10.1145/996350.996419
[10] Akshat Jindal, Shreya Singh, and Soham Gadgil. 2023. Classification for everyone: Building geography agnostic models for fairer recognition. arXiv preprint arXiv:2312.02957 (2023).
[11] Thomas N Kipf and Max Welling. 2016. Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907 (2016).
[12] SDGV Akanksha Kumari and Shreya Singh. 2017. Parallelization of alphabeta pruning algorithm for enhancing the two player games. Int. J. Advances Electronics Comput. Sci 4 (2017), 74–81.
[13] Quoc Le and Tomas Mikolov. 2014. Distributed representations of sentences and documents. In International Conference on Machine Learning. 1188–1196.
[14] MultiMedia LLC. [n.d.]. MS Windows NT kernel description. https://www.scimagojr.com/
[15] Xiao Ma, Ranran Wang, and Yin Zhang. 2019. Author name disambiguation in heterogeneous academic networks. In International Conference on Web Information Systems and Applications. Springer, 126–137.
[16] Henk F Moed and Gali Halevi. 2014. A bibliometric approach to tracking international scientific migration. Scientometrics 101, 3 (2014), 1987–2001.
[17] G Mohammed Abdulla, Shreya Singh, and Sumit Borar. 2019. Shop your right size: A system for recommending sizes for fashion products. In Companion Proceedings of The 2019 World Wide Web Conference (WWW '19). Association for Computing Machinery, New York, NY, USA, 327–334. https://doi.org/10.1145/3308560.3316599
[18] Günce Keziban Orman, Vincent Labatut, and Ahmet Teoman Naskali. 2017. Exploring the evolution of node neighborhoods in dynamic networks. Physica A: Statistical Mechanics and its Applications 482 (2017), 375–391.
[19] Nicolás Robinson-Garcia, Cassidy R Sugimoto, Dakota Murray, Alfredo Yegros-Yegros, Vincent Larivière, and Rodrigo Costas. 2019. The many faces of mobility: Using bibliometric data to measure the movement of scientists. Journal of Informetrics 13, 1 (2019), 50–63.
[20] Christian Schulz, Amin Mazloumian, Alexander M Petersen, Orion Penner, and Dirk Helbing. 2014. Exploiting citation networks for large-scale author name disambiguation. EPJ Data Science 3, 1 (2014), 11.
[21] Abraham Gerard Sebastian, Shreya Singh, P. B. T. Manikanta, T. S. Ashwin, and G. Ram Mohana Reddy. 2019. Multimodal group activity state detection for classroom response system using convolutional neural networks. In Recent Findings in Intelligent Computing Techniques, Pankaj Kumar Sa, Sambit Bakshi, Ioannis K. Hatzilygeroudis, and Manmath Narayan Sahoo (Eds.). Springer Singapore, Singapore, 245–251.
[22] Loveperteek Singh, Shreya Singh, Sagar Arora, and Sumit Borar. 2019. One embedding to do them all. arXiv preprint arXiv:1906.12120 (2019).
[23] Shreya Singh, G Mohammed Abdulla, Sumit Borar, and Sagar Arora. 2018. Footwear size recommendation system. arXiv preprint arXiv:1806.11423 (2018).
[24] Arnab Sinha, Zhihong Shen, Yang Song, Hao Ma, Darrin Eide, Bo-june Paul Hsu, and Kuansan Wang. 2015. An overview of Microsoft Academic Service (MAS) and applications. In Proceedings of the 24th International Conference on World Wide Web. ACM, 243–246.
[25] Jie Tang, Alvis CM Fong, Bo Wang, and Jing Zhang. 2011. A unified probabilistic framework for name disambiguation in digital library. IEEE Transactions on Knowledge and Data Engineering 24, 6 (2011), 975–987.
[26] Pucktada Treeratpituk and C Lee Giles. 2009. Disambiguating authors in academic publications using random forests. In Proceedings of the 9th ACM/IEEE-CS Joint Conference on Digital Libraries. ACM, 39–48.
[27] Rex Ying, Ruining He, Kaifeng Chen, Pong Eksombatchai, William L Hamilton, and Jure Leskovec. 2018. Graph convolutional neural networks for web-scale recommender systems. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. ACM, 974–983.
[28] Baichuan Zhang, Tanay Kumar Saha, and Mohammad Al Hasan. 2014. Name disambiguation from link data in a collaboration graph. In 2014 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM 2014). IEEE, 81–84.
[29] Yutao Zhang, Fanjin Zhang, Peiran Yao, and Jie Tang. 2018. Name disambiguation in AMiner: Clustering, maintenance, and human in the loop. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (KDD '18). ACM, New York, NY, USA, 1002–1011. https://doi.org/10.1145/3219819.3219859
THE OPPORTUNITIES AND RISKS OF LARGE LANGUAGE MODELS IN MENTAL HEALTH∗

Hannah R Lawrence, Google via Magnit, Folsom, CA, United States, [email protected]
Renee A Schneider, Google, Mountain View, CA, United States
Susan B Rubin, Google via Magnit, Folsom, CA, United States
Maja J Matarić, Google, Mountain View, CA, United States
Daniel J McDuff, Google, Mountain View, CA, United States
Megan Jones Bell, Google, Mountain View, CA, United States, [email protected]

ABSTRACT
Global rates of mental health concerns are rising, and there is increasing realization that existing models of mental health care will not adequately expand to meet the demand. With the emergence of large language models (LLMs) has come great optimism regarding their promise to create novel, large-scale solutions to support mental health. Despite their nascence, LLMs have already been applied to mental health–related tasks. In this paper, we summarize the extant literature on efforts to use LLMs to provide mental health education, assessment, and intervention and highlight key opportunities for positive impact in each area. We then highlight risks associated with LLMs' application to mental health and encourage the adoption of strategies to mitigate these risks. The urgent need for mental health support must be balanced with responsible development, testing, and deployment of mental health LLMs. It is especially critical to ensure that mental health LLMs are fine-tuned for mental health, enhance mental health equity, and adhere to ethical standards and that people, including those with lived experience with mental health concerns, are involved in all stages from development through deployment. Prioritizing these efforts will minimize potential harms to mental health and maximize the likelihood that LLMs will positively impact mental health globally.

Keywords artificial intelligence · AI · generative AI · large language models · mental health · mental health education · language model · mental health care · health equity · ethical · development · deployment

∗Citation: Lawrence HR, Schneider RA, Rubin SB, Matarić MJ, McDuff DJ, Jones Bell M. The Opportunities and Risks of Large Language Models in Mental Health. JMIR Ment Health 2024;11:e59479. doi: 10.2196/59479

1 Introduction
Globally, half of all individuals will experience a mental health disorder in their lifetimes [1], and at any given point, 1 in 8 people are experiencing a mental health concern [2]. Despite greater attention provided in recent years to mental health, the rate of mental health concerns has increased [2,3], and access to mental health care has not expanded to adequately meet the demand [4]. In the United States alone, the average time between the onset of mental health symptoms and treatment is 11 years [5], and nearly half of the global population lives in regions with a shortage of mental health professionals [2].

To overcome inadequate access to effective and equitable mental health care, large-scale solutions are needed. The emergence of large language models (LLMs) brings hope regarding their application to mental health and their potential to provide such solutions due to their relevance to mental health education, assessment, and intervention. LLMs are artificial intelligence models trained using extensive data sets to predict language sequences [6]. By leveraging huge neural architectures, LLMs can organize complex and abstract concepts.
This enables them to identify, translate, predict, and generate new content. LLMs can be fine-tuned for specific domains (eg, mental health) and enable interactions in natural language, as do many mental health assessments and interventions, highlighting the enormous potential they have to revolutionize mental health care. In this paper, we first summarize the research done to date applying LLMs to mental health. Then, we highlight key opportunities and risks associated with mental health LLMs and put forth suggested risk mitigation strategies. Finally, we make recommendations for the responsible use of LLMs in the mental health domain.

2 Applications of LLMs to Mental Health

2.1 Overview
Initial tests of LLMs' capabilities across mental health education, assessment, and intervention are promising. When considering this literature base, which we review next, it is important to first distinguish between general-purpose, consumer LLMs (eg, ChatGPT [OpenAI] and Gemini [Google]) and domain-specific LLMs (eg, Med-LM [Google]). General-purpose LLMs are trained on large corpora of text and are designed to perform a wide range of tasks. Domain-specific LLMs, on the other hand, typically build upon general-purpose LLMs through various strategies of fine-tuning with curated data to complete tasks within an area of focus. Given that general-purpose LLMs are largely trained with unrestricted text, they risk generating inaccurate, biased, stigmatizing, and harmful information about mental health. Developers of domain-specific LLMs can mitigate some of this risk by incorporating strategies during fine-tuning and evaluation such as using high-quality evidence-based information and attribution techniques [7], but it remains difficult to remove all possible risk from LLM-generated content. Given these important distinctions, in the paper that follows we clarify when findings are specific to general-purpose versus domain-specific LLMs where possible.

2.2 Education
One area of opportunity for LLMs in the mental health domain is to provide education about mental health (see Figure 1) [8]. Although lagging behind the success of LLMs in the medical domain [9], there is evidence that LLMs are capable of generating accurate, helpful, and immediate mental health information. The psychological support with LLM (Psy-LLM), for example, is a domain-specific LLM designed to answer mental health questions [10]. Psy-LLM was pretrained on a data set of psychology articles, question-answer pairs from psychologists, and content crawled from social media platforms. The model achieved moderate levels of helpfulness, fluency, relevance to the question asked, and logic, based on human ratings of Psy-LLM responses. The abilities of general-purpose LLMs to answer questions about mental health have also been evaluated. Sezgin et al [11] compared Google Search, GPT-4 (using ChatGPT), and LaMDA (using Bard [Google DeepMind]) responses to questions about postpartum depression relative to responses from an American College of Obstetricians and Gynecologists (ACOG) frequently asked questions document. Board-certified human physicians rated ChatGPT responses as more in line with ACOG responses than Bard or Google Search responses, and on average, ChatGPT responses were rated at near ceiling for clinical accuracy, scoring a 3.93 out of a possible 4. Importantly, however, general-purpose LLMs differ in their policies regarding the generation of medical or mental health advice.
Bard's accuracy ratings were impacted by Bard's policy to advise consulting a health care provider when asked questions about mental health. This practice protects individuals from potential harm, though such responses received lower ratings of quality in this study.

LLM-generated answers to mental health questions may not be comparable to human-generated answers, however. It is critical for LLMs to meet or exceed human performance in order for LLMs to be trusted and to ease the demand for human providers. In the case of Psy-LLM and ChatGPT, there is evidence that responses to mental health and substance use questions fall short of human-generated responses in dimensions such as accuracy, quality, and alignment with evidence-based practice (EBP) [10,12].

Another way that LLMs may serve to educate is to support provider training. Barish et al [13] used ChatGPT to generate content and associated learning objectives for an online learning platform for behavioral health professionals. Researchers compared the time providers needed to write their own content versus the time needed to edit ChatGPT-generated content, finding that using ChatGPT improved provider efficiency by 37.5 percent. LLMs can also be leveraged to train providers to optimize interactions with their patients. As two examples, Chan and Li [14] developed a chatbot trained to mimic a patient capable of describing their mental health symptoms in colloquial terms, and Sharma et al [15] used artificial intelligence to coach peer support providers to increase empathetic responding. These approaches illustrate ways that LLMs can support provider training and potentially enhance provider efficacy without providers becoming reliant on LLMs for in-the-moment critical thinking or decision-making.

Figure 1: Potential opportunities for LLMs in mental health education. CBT: cognitive behavioral therapy; EST: empirically supported treatment; LLM: large language model.

2.3 Assessment
A second function of LLMs within the domain of mental health is to assess mental health symptoms, identify diagnoses, and track changes in mental well-being (see Figure 2). LLMs can at times predict mental health symptoms and diagnoses accurately. Ji et al [16] initially developed two domain-specific models, MentalBERT and MentalRoBERTa, pretrained on mental health information. Compared with existing models pretrained in different domains, specifically clinical notes and biomedicine, MentalBERT and MentalRoBERTa were generally better able to detect depression and suicidal ideation from social media posts (notably, these results were achieved with Bidirectional Encoder Representations From Transformers [BERT]-based models that represent early-generation LLMs, with newer models and architectures demonstrating potential for even more advanced capabilities). LLMs such as Mental-Alpaca, a mental health domain–specific LLM, Med-PaLM 2, a medical domain–specific LLM, and ChatGPT, which is general-purpose, have also been shown to screen for possible depressive symptoms and suicide risk, with varying degrees of accuracy [17-20]. When it comes to predicting mental health diagnoses specifically, there is evidence that Med-PaLM 2 can do so accurately.
When presented with a series of case studies from the American Psychiatric Association book of DSM-5 (Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition) case examples [21], Med-PaLM 2 predicted the correct diagnosis 77.5 percent of the time, and performance increased to 92.5 percent when asked to specify the correct diagnostic category (eg, depressive disorder vs major depressive disorder) [20]. Similarly, when PaLM 2 was fine-tuned with medical domain data and optimized for differential diagnosis, the model was able to generate more appropriate and comprehensive lists of diagnoses than specialist medical doctors in response to challenging case studies, some of which involved psychiatric diagnoses [22].

LLM-predicted assessments do not, however, always match those of human mental health clinicians, suggesting that more work is needed before LLMs can engage in assessment without human oversight. In one study [23], four iterations of a case vignette [24] were presented to ChatGPT. Each vignette varied in levels of perceived burdensomeness and thwarted belongingness—two primary risk factors for suicide [25,26]. ChatGPT appropriately determined that the risk for suicidal ideation and suicide attempts was highest for the vignette with both high perceived burdensomeness and high thwarted belongingness, but it predicted lower suicide risk overall than did mental health professionals who reviewed the same vignettes. Med-PaLM 2 also at times does not achieve human clinician-level performance. The model predicted more severe posttraumatic stress disorder symptoms than human clinicians from clinical interview data, classified possible cases of posttraumatic stress disorder with high specificity (0.98) but low sensitivity (0.30), and the model only correctly predicted whether a case example had a comorbid diagnosis or diagnostic modifier 20 percent of the time [20].

In all the efforts described thus far, LLMs had been provided with information about symptoms and tasked with determining whether those symptoms indicated a possible mental health concern or diagnosis. LLMs also may be leveraged to ask the questions needed to screen for a mental health concern or to predict a mental health diagnosis. Chan and Li [14] developed a chatbot trained to engage in mental health assessment with patients. Compared with human psychiatrists, the chatbot displayed more empathy and asked more thorough questions about some symptoms (eg, sleep), but was less likely to rule out associated conditions.
Caution is warranted when using chatbots to deliver mental health interventions. To date, chatbots are not effective in treating all types of mental health distress [36] and at times have difficulty personalizing interventions [38], forget information (eg, that they had talked with someone previously) [37], and provide nontherapeutic and iatrogenic advice including encouraging substance use, dieting, and weight loss [40,42,43]. Also concerning is that chatbots do not consistently or adequately respond to suicide risk, at times being dismissive and neglecting to provide crisis resources or referals to human providers [38,44]. 5 JMIR MENTAL HEALTH Figure 3: Potential opportunities for LLMs in mental health intervention. CBT: cognitive behavioral therapy; EBP: evidence-based practice; LLM: large language model. 3 Risks Associated With Mental Health LLMs 3.1 Overview To maximize the positive impact of LLMs on mental health, LLM development, testing, and deployment must be done ethically and responsibly (see Textbox 1). This requires identification and evaluation of risks, taking preemptive steps to mitigate risks, and establishing plans to monitor for ongoing or new and unexpected risks [45,46]. It is also important to recognize that the risks associated with the use of LLMs for mental health support may differ across education, assessment, and intervention (see Table 1). Here, we highlight primary risks that largely cut across uses of LLMs for mental health-related tasks and identify potential steps that can be taken to mitigate these risks. 6 JMIR MENTAL HEALTH Figure 4: Textbox1: Recommendations for responsible use of LLMs to support mental health. Risk Perpetuate inequalities, disparities, and stigma Unethical provision of mental health services Mental health ed- ucation Mental health as- sessment Mental health in- tervention Medium Higher Higher Lower Practice beyond the boundaries of competence Neglect to obtain informed consent Fail to preserve confidentiality or privacy Build and maintaininappropriate Lower levels of trust Lower Lower Lack reliability Generate inaccurate or iatrogenic output Lack transparency or explainability Neglect to involve humans Lower Medium Lower Lower Higher Higher Higher Medium Higher Higher Medium Medium Higher Higher Higher Higher Higher Higher Medium Higher Table 1: Potential risks to people when LLMs engage in mental health education, assessment, and intervention. 7 JMIR MENTAL HEALTH 3.2 Perpetuating Inequalities, Disparities, and Stigma There exists the risk that LLMs perpetuate inequities and stigma, further widening mental health disparities [47]. Mental health concerns are highly stigmatized [48], and there are disparities in who is at risk for mental health concerns, in who is diagnosed with mental health disorders, and with which mental health disorders people are diagnosed [49-51]. There are also inequities in who receives mental health care [52,53]. Much of the publicly available information and discourse about mental health contains inaccurate and stigmatizing information about mental health, and the existing research literature on mental health largely represents the perspectives of people who are White, are educated, are of high socioeconomic status, and speak English [54]. Far less information is available about the etiology of mental health concerns and effective assessments and interventions for populations that have been pushed to the margins. 
Training LLMs on existing data without appropriate safeguards and thoughtful human supervision and evaluation can, therefore, lead to problematic generation of biased content and disparate model performance for different groups [45,55-57] (of note, however, there is some evidence that clinicians perceive less bias in LLM-generated responses [58] relative to clinician-generated responses, suggesting that LLMs may have the potential to reduce bias compared to human clinicians). LLMs should disseminate accurate, destigmatizing information about mental health and be trained to identify and combat stigma and discrimination. To do so, models need to be fine-tuned and evaluated for the mental health domain. Training models with data representative of the diverse populations being served is helpful, but new types of bias, such as semantic biases, may arise in LLMs [59]. Opportunities to train models to identify and exclude toxic and discriminatory language should be explored, both during the training of the underlying foundation models and during the domain-specific fine-tuning (see Keeling [59] for a discussion of the trade-offs of data filtration in this context) [45]. If LLMs perform differently for different groups or generate problematic or stigmatizing language during testing, additional model fine-tuning is required prior to deployment. Individuals developing LLMs should be transparent about the limitations of the training data, the approaches to data filtration and fine-tuning, and the populations for whom LLM performance has not been sufficiently demonstrated. There is also hope that LLMs can be scaled to increase people’s access to mental health information, assessment, and treatment. LLMs have the potential to support delivery of mental health interventions in regions where access to mental health providers is limited and where significant barriers (eg, cost) exist. They can additionally help to personalize treatments to better fit people’s unique preferences, interests, identities, and language, hopefully improving treatment outcomes. LLMs may support increased access through more direct provision of mental health services, or LLMs can aid the expansion of the mental health workforce, training novice providers and community members in EBP at scale. There will undoubtedly be challenges in implementing and scaling LLMs globally. Revising and testing implementation frameworks for this new and evolving context and engagement in thoughtful public health and industry partnerships could all increase the likelihood that when mental health LLMs are scaled globally, implementation is sustained and best supports the populations most in need. 3.3 Failing to Provide Mental Health Services Ethically A second risk is that LLMs will engage in unethical practices. When human mental health providers behave unethically, harm is done to patients and public trust is eroded [60]. LLMs will similarly do harm if they are not designed and implemented in consideration of and are not consistent with relevant ethical principles and standards when operating in the domain of mental health. Core ethical principles in the health care context include beneficence, nonmaleficence, justice, and autonomy [61]. Next, we highlight additional standards of ethical professional conduct that should apply when LLMs engage in mental health service provision (see the American Psychological Association Ethical Principles of Psychologists and Code of Conduct for parallel ethical principles and standards). 
LLMs should operate within the boundaries of their competence and only engage in mental health tasks they have rigorously been proven to accomplish well. LLM developers should clearly communicate the limits and relevant evaluation results of LLMs, education should be provided to individuals about when it is and is not appropriate to use LLMs, and LLMs should withhold output when they are not competent in a task. LLM competence should be assessed and maintained over time. When competence is lacking in a certain domain, the LLM should no longer be deployed until the needed competence is gained (eg, via retraining and fine-tuning models with human validation). Individuals should provide informed consent when interacting with mental health LLMs. They should be fully informed about the nature of mental health services they will receive and what role LLMs will have in that service. Information presented to individuals to help make decisions about consent should be understandable and include the possible risks and benefits of engaging with LLMs. Individuals should have the ability to choose not to consent to the use of LLMs in the direct provision of their mental health care, as well as the ability to withdraw their consent and opt out of the use of LLMs even if consent was initially given. As LLMs become further integrated into health care contexts, care should be 8 JMIR MENTAL HEALTH taken to ensure that clients’ decisions to opt out of LLM involvement or to confine LLM involvement to less direct (eg, administrative) tasks do not limit their access to mental health care. Confidentiality should be protected when individuals interact with LLMs to support their mental health. Individuals should be clearly informed about expectations for confidentiality. This should include information about the limits of confidentiality (eg, in the case of imminent risk for suicide), the foreseeable uses of information generated through engagement with LLMs, where and how their data are stored, and whether it is possible to delete their data. Policies related to data security should be strict and in line with relevant mental health data protection regulations [34]. Solutions such as developing on-device storage that does not require transmission of personal data [62] or systems with robust cloud-based encryption, pursuing LLMs that support compliance with relevant data protection laws (eg, Health Insurance Portability and Accountability Act [HIPAA]), and responsibly aggregating and deidentifying mental health data to fine-tune and test models all help to protect confidentiality. Human mental health providers establish trusting relationships with those with whom they work and are obligated to ensure that the nature of the trusting provider-patient relationship does not lead to exploitation or harm. Appropriate trust is built through effective mental health assessment and treatment and, perhaps even more crucially, ethical practice. Trust should be evaluated through feedback from individuals engaged with LLMs. If and when trust is broken, this should be acknowledged and work should be done to repair trust. On the other hand, people may trust LLMs more than is warranted because of LLMs’ ability to produce humanlike natural language and to be trained to express emotion and empathy (this may especially be the case for individuals experiencing mental health concerns such as anxiety [63]) [64]. 
Unearned trust can have consequences, leading people to disclose personal information or trust content generated by LLMs even when it is not accurate. Education should be provided about the limits of LLMs and individuals should be cautioned against blanket trust in these models. 3.4 Insufficient Reliability A third risk is that LLMs will not generate reliable or consistent output. When prompted to complete the same task or provide an answer to the same question multiple times, LLMs at times produce different responses [46,65]. Varied and creative output is a benefit of LLMs; however, the underlying response should be consistent even when articulated in different ways. Take for example an LLM repeatedly presented with a client’s description of depressive symptoms. The LLM should reliably reach the conclusion that the client meets the criteria for major depressive disorder even if this diagnostic conclusion is communicated to the client using different phrasing. Issues of low reliability of LLMs can erode trust and increase the possibility of harm, including leading some individuals to be misdiagnosed or to pursue treatments that are not best suited to their mental health concern. LLM reliability should be measured and enhanced. Prompting approaches may help to improve LLM reliability. Self-consistency [66] and ensemble refinement [9] are strategies that sample multiple model answers to arrive at a more consistent response, improving model reliability [9]. Grounding models in data other than linguistic descriptions of symptoms (eg, objective behavioral or physiological signals) is another way of reducing variability in LLM performance, as words alone may not fully capture all of the necessary information to complete a given mental health task [67]. Finally, LLMs should not be deployed until they exceed prespecified thresholds of adequate reliability. 3.5 Inaccuracy LLMs risk producing inaccurate information about mental health [46,68]. If LLMs are trained on data that contain inaccurate or outdated information, iatrogenic treatment options, or biased representations of mental health, that information can be reproduced by LLMs [45]. An additional consideration is that accuracy of LLM outputs has multiple dimensions and is not as simple to evaluate as answers to multiple-choice questions. Accuracy can be a function of how factual an answer is, how specific it is, or how devoid of irrelevant information it is. Generating inaccurate mental health information may be more damaging than no information, especially when it may be difficult for an individual to detect inaccuracies or inconsistencies (eg, about a complex mental health diagnosis). Standards for accuracy should be defined a priori and should be high. When thresholds for LLM accuracy are not met, the risk of harm is too high and LLMs should not generate output. The accuracy of LLMs depends on the quality of data the model is trained and fine-tuned on [47,69,70]. LLMs should be adapted to the domain of mental health; models fine-tuned on mental health data perform better than models trained on non-domain-specific data [42] or general medical domains [16]. When data are limited, it is recommended that smaller but more variable data sets be prioritized over a larger single data set [19]). Training data should be highly curated, be grounded in authoritative and trusted sources, be specific to evidence-based health care, and represent diverse populations [46,58]. 
In mental health, the nature of consensus is continuing to evolve, and the amount of data available is continuing to increase, which should be taken into account when considering whether to further fine-tune models. Strategies such as implementing a Retrieval 9 JMIR MENTAL HEALTH Augmented Generation system, in which LLMs are given access to an external database of up-to-date, quality-verified information to incorporate in the generation process, may help to improve accuracy and enable links to sources while also maintaining access to updated information. Accuracy of LLMs should be monitored over time to ensure that model accuracy improves and does not deteriorate with new information [45]. Measuring the accuracy of mental health LLMs is complex. It is not sufficient for models to merely outperform previous models. Rather, performance of LLMs should be compared with the performance of human clinicians, both of which should be compared against gold-standard, evidence-based care. When LLMs are tasked with mental health evaluation, their ability to predict scores on reliable and valid mental health assessments should be tested, and LLMs should meet human clinician performance in diagnostic accuracy. When LLMs are tasked with aiding mental health intervention delivery, their ability to detect, support, and engage in EBP is critical. Additional criteria to consider when evaluating the accuracy of LLMs include the level of agreement between human clinicians and LLMs, metrics of effect size rather than only statistical significance, and the balance of sensitivity and specificity in making diagnostic predictions. LLMs should communicate confidence in the accuracy of generated output and limit or withhold output when confidence is lacking [58]. As an example, Med-PaLM 2’s accuracy improved when results were weighted based on confidence scores and when a cutoff threshold was set for confidence [20]. Communicating confidence in generated output and withholding output when confidence is low both help to enhance transparency and trust in LLMs’ ability to perform on mental health tasks and to limit potential harms associated with generating inaccurate information. Prompt fine-tuning can boost LLM accuracy [9,19,58]. When applied to mental health, instruction fine-tuning improved performance of Mental-Alpaca relative to zero-shot and few-shot prompting and allowed Mental-Alpaca to reach a performance level across multiple mental health tasks (eg, identifying stress and classifying individuals as depressed or not based on Reddit posts) similar to that of Mental-RoBERTa, a task-specific model [19]. Prompting to concentrate on the emotional clues in text was also shown to improve ChatGPT performance on a variety of mental health-related tasks [71]. Conversely, however, instruction prompt fine-tuning can also increase inaccurate or inappropriate content [55]; thus, LLMs should continue to be evaluated for accuracy at all stages of prompt tuning. 3.6 Lack of Transparency and Explainability LLMs risk generating output without being able to explain how they came to the decisions they did or without being able to identify the source of information used to generate the output [72]. There remains much that is not known about how LLMs generate reasoning for their responses and how sensitive these reasons are to context and prompting. 
It should be apparent when information is generated using LLMs, how LLMs were developed and tested, and whether LLMs are general-purpose or fine-tuned for the domain of mental health [46,58,68]. Additional steps to enhance transparency include explicitly telling individuals to exercise caution when interpreting or acting on LLM output and being clear about the bounds of LLMs’ competence [39]. Explainability, one aspect of transparency, was identified as a key priority by individuals engaged in mental health LLMs [39]. If asked to explain why they decided on a mental health diagnostic prediction or intervention, LLMs should explain what information was used to come to that decision. ChatGPT has been shown to be able to explain why an individual was classified as experiencing stress or depressive symptoms [71], and Med-PaLM 2 communicated why it predicted a particular symptom score and diagnosis [20]. Although LLMs are capable of producing plausible explanations through techniques such as chain-of-thought reasoning [73], more research is needed to ensure that explanations are internally consistent. Explainability is perhaps especially beneficial in the domain of mental health, as part of mental health assessment and intervention is communicating results of an evaluation or justification for an intervention to patients. 3.7 Neglecting to Involve Humans There are risks associated with LLMs providing anonymous mental health services. Unlike mental health apps, where content can be highly curated, the content generated by LLMs is unpredictable. This makes interacting with LLMs more engaging, more appealing, and perhaps also more humanlike. However, it also increases the risk that LLMs may produce harmful or nontherapeutic content when tasked with independently providing mental health services. Legal and regulatory frameworks are needed to protect individuals’ safety and mental health when interacting with LLMs, as well as to clarify clinician liability when using LLMs to support their work or to clarify the liability of individuals and companies who develop these LLMs. There are ongoing discussions regarding the regulation of LLMs in medicine [74-76] that can inform how LLMs can support mental health while limiting the potential for harm and liability. Humans should be actively involved in all stages of mental health LLM development, testing, and deployment. For mental health LLMs to be effective, rigorous, and ongoing, human supervision and input are needed (see Figure 4) 10 JMIR MENTAL HEALTH [46]. Reinforcement learning through human feedback can improve model accuracy and uncover problematic LLM responses [14,42]. This feedback should be obtained from individuals who reflect the diverse populations the LLM aims to help, including members of the public, patients, and human clinicians [9,14,34,58,68,77,78]. Their input should be leveraged to identify and correct biases, to ensure generated content is inclusive, culturally appropriate, and accurate, and to reduce the likelihood of harm. Particularly important is prioritizing the perspectives of individuals at heightened risk for mental health concerns (eg, sexual and gender minorities) and individuals with lived experience with mental health concerns. These individuals should play a central role in co-defining the role LLMs will play in mental health care and in co-designing tools that leverage LLMs. Practically, use cases should focus on opportunities to support and augment provider care. 
As just one example, LLMs may have a role in suggesting language used in clinical notes, but clinicians should have the final say in whether they adopt those suggestions or not. Figure 5: Examples of human involvement across all stages of LLM development through deployment and evaluation. LLM: large language model. 4 Conclusions The need for mental health services is pressing, and the potential of LLMs to expand access to information about mental health and to mental health care is great. LLMs are advancing rapidly and have been applied across mental health education, assessment, and intervention. Especially promising is the potential for LLMs to provide mental health education and assessment—tasks that are well aligned with LLM strengths. LLMs have made exceptional progress in related tasks such as answering medical questions and assessing medical conditions, reaching and in some cases exceeding the performance of human clinicians. Greater caution is warranted when applying LLMs to mental health intervention, but there is also cause for optimism that LLMs could eventually help to support or augment human provision of mental health treatments. Additional research is needed in testing LLMs’ ability to deliver or train providers in empirically supported treatments, to responsibly adapt approaches for youth and marginalized populations, to build appropriate rapport, and to detect risk for high-acuity mental health concerns for progress to be made in these areas. Critical to effectively engaging in mental health care tasks is fine-tuning LLMs specifically for the domain of mental health and the prioritization of equity, safety, EBP, and confidentiality. No widely used, general-purpose LLM has been fine-tuned for mental health, trained on evidence-based mental health content, or sufficiently tested on mental health-related tasks. When LLMs are developed specifically for mental health, tested to ensure adherence with EBP, and aligned with the goals of people with lived experience with mental health concerns and those who have expertise in mental health care, there is great hope that they will expand access to evidence-based mental health information and 11 JMIR MENTAL HEALTH services. Investing in developing, testing, and deploying mental health LLMs responsibly has the potential to finally reverse rising global mental health rates and to improve the mental health of the millions of people in need of mental health support. 5 Acknowledgments We acknowledge and thank Michael Howell, MD, MPH; Bakul Patel, MSEE, MBA; Matthew Thompson, DPhil, MPH; Joseph Dooley, MPA; and David Steiner, MD, PhD, for reviewing and providing helpful feedback on this paper. 6 Conflicts of Interest RAS, MJM, DJM, and MJB are employees of Google and receive monetary compensation from Google and equity in Google’s parent company, Alphabet. HRL and SBR are employees of, and receive compensation from, Magnit and are contracted for work at Google. In addition, MJB is a shareholder in Meeno Technologies, Inc and The Orange Dot (Headspace Health). RAS is a shareholder in Lyra Health and Trek Health, and she consults with Understood. References [1] McGrath JJ, Al-Hamzawi A, Alonso J, et al. Age of onset and cumulative risk of mental disorders: a cross-national analysis of population surveys from 29 countries. Lancet Psychiatry. 2023;10(9):668-681. [doi: 10.1016/S2215- 0366(23)00193-1] [Medline: 37531964] [2] World Health Organization. World mental health report: transforming mental health for all. 2022. 
[3] Agency for Healthcare Research and Quality. 2022 National Healthcare Quality and Disparities Report. 2022. URL: https://www.ahrq.gov/research/findings/nhqrdr/nhqdr22/index.html [Accessed 2024-07-18]
[4] Wainberg ML, Scorza P, Shultz JM, et al. Challenges and opportunities in global mental health: a research-to-practice perspective. Curr Psychiatry Rep. 2017;19(5):28. [doi: 10.1007/s11920-017-0780-z] [Medline: 28425023]
[5] National Alliance on Mental Illness. Mental health by the numbers. 2023. URL: https://www.nami.org/about-mental-illness/mental-health-by-the-numbers/ [Accessed 2024-07-29]
[6] Thirunavukarasu AJ, Ting DSJ, Elangovan K, Gutierrez L, Tan TF, Ting DSW. Large language models in medicine. Nat Med. 2023;29(8):1930-1940. [doi: 10.1038/s41591-023-02448-8] [Medline: 37460753]
[7] Gao Y, Xiong Y, Gao X, et al. Retrieval-augmented generation for large language models: a survey. arXiv. Preprint posted online on Dec 18, 2023. [doi: 10.48550/arXiv.2312.10997]
[8] Ayers JW, Poliak A, Dredze M, et al. Comparing physician and artificial intelligence chatbot responses to patient questions posted to a public social media forum. JAMA Intern Med. 2023;183(6):589-596. [doi: 10.1001/jamainternmed.2023.1838] [Medline: 37115527]
[9] Singhal K, Tu T, Gottweis J, et al. Towards expert-level medical question answering with large language models. arXiv. Preprint posted online on May 16, 2023. [doi: 10.48550/arXiv.2305.09617]
[10] Lai T, Shi Y, Du Z, et al. Psy-LLM: scaling up global mental health psychological services with AI-based large language models. arXiv. Preprint posted online on Sep 1, 2023. [doi: 10.48550/arXiv.2307.11991]
[11] Sezgin E, Chekeni F, Lee J, Keim S. Clinical accuracy of large language models and Google search responses to postpartum depression questions: cross-sectional study. J Med Internet Res. Sep 11, 2023;25:e49240. [doi: 10.2196/49240] [Medline: 37695668]
[12] Spallek S, Birrell L, Kershaw S, Devine EK, Thornton L. Can we use ChatGPT for mental health and substance use education? Examining its quality and potential harms. JMIR Med Educ. Nov 30, 2023;9:e51243. [doi: 10.2196/51243] [Medline: 38032714]
[13] Barish G, Marlotte L, Drayton M, Mogil C, Lester P. Automatically enriching content for a behavioral health learning management system: a first look. Presented at: The 9th World Congress on Electrical Engineering and Computer Systems and Science; Aug 3-5, 2023; London, United Kingdom. [doi: 10.11159/cist23.125]
[14] Chan C, Li F. Developing a natural language-based AI-chatbot for social work training: an illustrative case study. China J Soc Work. May 4, 2023;16(2):121-136. [doi: 10.1080/17525098.2023.2176901]
[15] Sharma A, Lin IW, Miner AS, Atkins DC, Althoff T. Human–AI collaboration enables more empathic conversations in text-based peer-to-peer mental health support. Nat Mach Intell. 2023;5(1):46-57. [doi: 10.1038/s42256-022-00593-2]
[16] Ji S, Zhang T, Ansari L, Fu J, Tiwari P, Cambria E. MentalBERT: publicly available pretrained language models for mental healthcare. arXiv. Preprint posted online on Oct 29, 2021. [doi: 10.48550/arXiv.2110.15621]
[17] Lamichhane B. Evaluation of ChatGPT for NLP-based mental health applications. arXiv. Preprint posted online on Mar 28, 2023. [doi: 10.48550/arXiv.2303.15727]
[18] Amin MM, Cambria E, Schuller BW. Will affective computing emerge from foundation models and general AI? A first evaluation on ChatGPT. arXiv. Preprint posted online on Mar 3, 2023. [doi: 10.48550/arXiv.2303.03186]
[19] Xu X, Yao B, Dong Y, et al. Mental-LLM: leveraging large language models for mental health prediction via online text data. arXiv. Preprint posted online on Aug 16, 2023. [doi: 10.48550/arXiv.2307.14385]
[20] Galatzer-Levy IR, McDuff D, Natarajan V, Karthikesalingam A, Malgaroli M. The capability of large language models to measure psychiatric functioning. arXiv. Preprint posted online on Aug 3, 2023. [doi: 10.48550/arXiv.2308.01834]
[21] Barnhill JW. DSM-5-TR® clinical cases. Psychiatry Online. URL: https://dsm.psychiatryonline.org/doi/book/10.1176/appi.books.9781615375295 [Accessed 2024-07-18]
[22] McDuff D, Schaekermann M, Tu T, et al. Towards accurate differential diagnosis with large language models. arXiv. Preprint posted online on Nov 30, 2023. [doi: 10.48550/arXiv.2312.00164]
[23] Elyoseph Z, Levkovich I. Beyond human expertise: the promise and limitations of ChatGPT in suicide risk assessment. Front Psychiatry. Aug 2023;14:1213141. [doi: 10.3389/fpsyt.2023.1213141] [Medline: 37593450]
[24] Levi-Belz Y, Gamliel E. The effect of perceived burdensomeness and thwarted belongingness on therapists' assessment of patients' suicide risk. Psychother Res. Jul 2016;26(4):436-445. [doi: 10.1080/10503307.2015.1013161] [Medline: 25751580]
[25] Joiner T. Why People Die by Suicide. Harvard University Press; 2007.
[26] Van Orden KA, Witte TK, Cukrowicz KC, Braithwaite SR, Selby EA, Joiner TE. The interpersonal theory of suicide. Psychol Rev. Apr 2010;117(2):575-600. [doi: 10.1037/a0018697] [Medline: 20438238]
[27] Darcy A, Beaudette A, Chiauzzi E, et al. Anatomy of a Woebot® (WB001): agent guided CBT for women with postpartum depression. Expert Rev Med Devices. Apr 2022;19(4):287-301. Retracted in: Expert Rev Med Devices. 2023;20(11):989. [doi: 10.1080/17434440.2023.2267389] [Medline: 37801290]
[28] Inkster B, Sarda S, Subramanian V. An empathy-driven, conversational artificial intelligence agent (Wysa) for digital mental well-being: real-world data evaluation mixed-methods study. JMIR Mhealth Uhealth. Nov 23, 2018;6(11):e12106. [doi: 10.2196/12106] [Medline: 30470676]
[29] Fulmer R, Joerin A, Gentile B, Lakerink L, Rauws M. Using psychological artificial intelligence (Tess) to relieve symptoms of depression and anxiety: randomized controlled trial. JMIR Ment Health. Dec 13, 2018;5(4):e64. [doi: 10.2196/mental.9782] [Medline: 30545815]
[30] Murphy M, Templin J. Our story. Replika. 2021. URL: https://replika.ai/about/story [Accessed 2023-08-20]
[31] Kim H, Yang H, Shin D, Lee JH. Design principles and architecture of a second language learning chatbot. Lang Learn Technol. 2022;26:1-18. URL: https://scholarspace.manoa.hawaii.edu/server/api/core/bitstreams/b3aa08a8-579d-4bf6-b94a-05c2ff67351a/content [Accessed 2024-07-18]
[32] Wilbourne P, Dexter G, Shoup D. Research driven: Sibly and the transformation of mental health and wellness. Presented at: Proceedings of the 12th EAI International Conference on Pervasive Computing Technologies for Healthcare; May 21-24, 2018:389-391; New York, NY. [doi: 10.1145/3240925.3240932]
[33] Denecke K, Abd-Alrazaq A, Househ M. Artificial intelligence for chatbots in mental health: opportunities and challenges. In: Househ M, Borycki E, Kushniruk A, editors. Multiple Perspectives on Artificial Intelligence in Healthcare: Opportunities and Challenges. Springer International Publishing; 2021:115-128. [doi: 10.1007/978-3-030-67303-1]
[34] Omarov B, Zhumanov Z, Gumar A, Kuntunova L. Artificial intelligence enabled mobile chatbot psychologist using AIML and cognitive behavioral therapy. IJACSA. 2023;14(6). [doi: 10.14569/IJACSA.2023.0140616]
[35] Pham KT, Nabizadeh A, Selek S. Artificial intelligence and chatbots in psychiatry. Psychiatr Q. Mar 2022;93(1):249-253. [doi: 10.1007/s11126-022-09973-8] [Medline: 35212940]
[36] Abd-Alrazaq AA, Rababeh A, Alajlani M, Bewick BM, Househ M. Effectiveness and safety of using chatbots to improve mental health: systematic review and meta-analysis. J Med Internet Res. Jul 2020;22(7):e16021. [doi: 10.2196/16021] [Medline: 32673216]
[37] Brocki L, Dyer GC, Gładka A, Chung NC. Deep learning mental health dialogue system. Presented at: 2023 IEEE International Conference on Big Data and Smart Computing (BigComp); Feb 13-16, 2023:395-398; Jeju, Korea.
[38] Martinengo L, Lum E, Car J. Evaluation of chatbot-delivered interventions for self-management of depression: content analysis. J Affect Disord. Dec 2022;319:598-607. [doi: 10.1016/j.jad.2022.09.028] [Medline: 36150405]
[39] You Y, Tsai CH, Li Y, Ma F, Heron C, Gui X. Beyond self-diagnosis: how a chatbot-based symptom checker should respond. ACM Trans Comput-Hum Interact. Aug 31, 2023;30(4):1-44. [doi: 10.1145/3589959]
[40] Ma Z, Mei Y, Su Z. Understanding the benefits and challenges of using large language model-based conversational agents for mental well-being support. AMIA Annu Symp Proc. Jan 11, 2024;2023:1105-1114. [Medline: 38222348]
[41] Lee J, Lee JG, Lee D. Influence of rapport and social presence with an AI psychotherapy chatbot on users' self-disclosure. SSRN. Preprint posted online on Mar 22, 2022. [doi: 10.2139/ssrn.4063508]
[42] Das A, Selek S, Warner AR, et al. Conversational bots for psychotherapy: a study of generative transformer models using domain-specific dialogues. In: Demner-Fushman D, Cohen KB, Ananiadou S, Tsujii J, editors. Proceedings of the 21st Workshop on Biomedical Language Processing. Association for Computational Linguistics; 2022:285-297. [doi: 10.18653/v1/2022.bionlp-1.27]
[43] Demner-Fushman D, Ananiadou S, Cohen KB, editors. The 22nd Workshop on Biomedical Natural Language Processing and BioNLP Shared Tasks. Association for Computational Linguistics; 2023.
[44] Heston TF. Evaluating risk progression in mental health chatbots using escalating prompts. medRxiv. Preprint posted online on Sep 12, 2023. [doi: 10.1101/2023.09.10.23295321]
[45] Weidinger L, Mellor J, Rauh M, et al. Ethical and social risks of harm from language models. arXiv. Preprint posted online on Dec 8, 2021. [doi: 10.48550/arXiv.2112.04359]
[46] Harrer S. Attention is not all you need: the complicated case of ethically using large language models in healthcare and medicine. eBioMedicine. Apr 2023;90:104512. [doi: 10.1016/j.ebiom.2023.104512] [Medline: 36924620]
[47] Koutsouleris N, Hauser TU, Skvortsova V, De Choudhury M. From promise to practice: towards the realisation of AI-informed mental health care. Lancet Digit Health. Nov 2022;4(11):e829-e840. [doi: 10.1016/S2589-7500(22)00153-4]
[48] Sickel AE, Seacat JD, Nabors NA. Mental health stigma update: a review of consequences. Adv Ment Health. Dec 2014;12(3):202-215. [doi: 10.1080/18374905.2014.11081898]
[49] Alegría M, Green JG, McLaughlin KA, Loder S. Disparities in child and adolescent mental health and mental health services in the U.S. William T. Grant Foundation. Mar 2015. URL: https://wtgrantfoundation.org/wp-content/uploads/2015/09/Disparities-in-Child-and-Adolescent-Mental-Health.pdf [Accessed 2024-07-18]
[50] Primm AB, Vasquez MJT, Mays RA. The role of public health in addressing racial and ethnic disparities in mental health and mental illness. Prev Chronic Dis. Jan 2010;7(1):A20. [Medline: 20040235]
[51] Schwartz RC, Blankenship DM. Racial disparities in psychotic disorder diagnosis: a review of empirical literature. World J Psychiatry. Dec 2014;4(4):133. [doi: 10.5498/wjp.v4.i4.133] [Medline: 25540728]
[52] McGuire TG, Miranda J. New evidence regarding racial and ethnic disparities in mental health: policy implications. Health Aff (Millwood). Mar 2008;27(2):393-403. [doi: 10.1377/hlthaff.27.2.393] [Medline: 18332495]
[53] Snowden LR, Cheung FK. Use of inpatient mental health services by members of ethnic minority groups. Am Psychol. Mar 1990;45(3):347-355. [doi: 10.1037//0003-066x.45.3.347] [Medline: 2310083]
[54] Henrich J, Heine SJ, Norenzayan A. Beyond WEIRD: towards a broad-based behavioral science. Behav Brain Sci. Jun 2010;33(2-3):111-135. [doi: 10.1017/S0140525X10000725]
[55] Lin I, Njoo L, Field A, et al. Gendered mental health stigma in masked language models. arXiv. Preprint posted online on Oct 27, 2022. [doi: 10.48550/arXiv.2210.15144]
[56] Liu Y, et al. Trustworthy LLMs: a survey and guideline for evaluating large language models' alignment. arXiv. Preprint posted online on Aug 10, 2023. [doi: 10.48550/arXiv.2308.05374]
[57] Straw I, Callison-Burch C. Artificial intelligence in mental health and the biases of language based models. PLoS One. Dec 2020;15(12):e0240376. [doi: 10.1371/journal.pone.0240376] [Medline: 33332380]
[58] Singhal K, Azizi S, Tu T, et al. Large language models encode clinical knowledge. Nature. Aug 2023;620(7972):172-180. [doi: 10.1038/s41586-023-06291-2] [Medline: 37438534]
[59] Keeling G. Algorithmic bias, generalist models, and clinical medicine. arXiv. Preprint posted online on May 6, 2023. [doi: 10.48550/arXiv.2305.04008]
[60] Koocher GP, Keith-Spiegel P. Ethics in Psychology and the Mental Health Professions: Standards and Cases. Oxford University Press; 2008.
[61] Varkey B. Principles of clinical ethics and their application to practice. Med Princ Pract. Feb 2021;30(1):17-28. [doi: 10.1159/000509119] [Medline: 32498071]
[62] Rajagopal A, Nirmala V, Andrew J, Arun M. Novel AI to avert the mental health crisis in COVID-19: novel application of GPT2 in cognitive behaviour therapy. Research Square. Preprint posted online on Apr 1, 2021. [doi: 10.21203/rs.3.rs-382748/v1]
[63] Gratch J, Lucas G. Rapport between humans and socially interactive agents. In: Lugrin B, Pelachaud C, Traum D, editors. The Handbook on Socially Interactive Agents: 20 Years of Research on Embodied Conversational Agents, Intelligent Virtual Agents, and Social Robotics Volume 1: Methods, Behavior, Cognition. Association for Computing Machinery; 2021:433-462. [doi: 10.1145/3477322.3477335]
[64] McDuff D, Czerwinski M. Designing emotionally sentient agents. Commun ACM. Nov 20, 2018;61(12):74-83. [doi: 10.1145/3186591]
[65] Lundin RM, Berk M, Østergaard SD. ChatGPT on ECT: can large language models support psychoeducation? J ECT. Sep 1, 2023;39(3):130-133. [doi: 10.1097/YCT.0000000000000941] [Medline: 37310145]
[66] Wang B, Min S, Deng X, et al. Towards understanding chain-of-thought prompting: an empirical study of what matters. arXiv. Preprint posted online on Dec 20, 2022. [doi: 10.48550/arXiv.2212.10001]
[67] Glenberg AM, Havas D, Becker R, Rinck M. Grounding language in bodily states: the case for emotion. In: Grounding Cognition: The Role of Perception and Action in Memory, Language, and Thinking. Cambridge University Press; 2005:115-128. [doi: 10.1017/CBO9780511499968]
[68] Zhong Y, Chen YJ, Zhou Y, Lyu YAH, Yin JJ, Gao YJ. The artificial intelligence large language models and neuropsychiatry practice and research ethic. Asian J Psychiatr. Jun 2023;84:103577. [doi: 10.1016/j.ajp.2023.103577] [Medline: 37019020]
[69] Gilbert S, Harvey H, Melvin T, Vollebregt E, Wicks P. Large language model AI chatbots require approval as medical devices. Nat Med. Oct 2023;29(10):2396-2398. [doi: 10.1038/s41591-023-02412-6] [Medline: 37391665]
[70] Singh OP. Artificial intelligence in the era of ChatGPT: opportunities and challenges in mental health care. Indian J Psychiatry. Mar 2023;65(3):297-298. [doi: 10.4103/indianjpsychiatry.indianjpsychiatry_112_23] [Medline: 37204980]
[71] Yang K, Ji S, Zhang T, Xie Q, Kuang Z, Ananiadou S. Towards interpretable mental health analysis with large language models. arXiv. Preprint posted online on Oct 11, 2023. [doi: 10.48550/arXiv.2304.03347]
[72] Balasubramaniam N, Kauppinen M, Rannisto A, Hiekkanen K, Kujala S. Transparency and explainability of AI systems: from ethical guidelines to requirements. Inf Softw Technol. Jul 2023;159:107197. [doi: 10.1016/j.infsof.2023.107197]
[73] Wei J, Wang X, Schuurmans D, et al. Chain-of-thought prompting elicits reasoning in large language models. In: Koyejo S, Mohamed S, Agarwal A, editors. Advances in Neural Information Processing Systems. Vol 35. Curran Associates, Inc; 2022:24824-24837.
[74] Meskó B, Topol EJ. The imperative for regulatory oversight of large language models (or generative AI) in healthcare. NPJ Digit Med. Jul 6, 2023;6(1):120. [doi: 10.1038/s41746-023-00873-0] [Medline: 37414860]
[75] Ong JCL, Chang SYH, William W, et al. Ethical and regulatory challenges of large language models in medicine. Lancet Digit Health. Jun 2024;6(6):e428-e432. [doi: 10.1016/S2589-7500(24)00061-X] [Medline: 38658283]
[76] Minssen T, Vayena E, Cohen IG. The challenges for regulating medical use of ChatGPT and other large language models. JAMA. Jul 25, 2023;330(4):315-316. [doi: 10.1001/jama.2023.9651] [Medline: 37410482]
[77] van Heerden AC, Pozuelo JR, Kohrt BA. Global mental health services and the impact of artificial intelligence-powered large language models. JAMA Psychiatry. Jul 1, 2023;80(7):662-664. [doi: 10.1001/jamapsychiatry.2023.1253] [Medline: 37195694]
[78] Cabrera J, Loyola MS, Magaña I, Rojas R. Ethical dilemmas, mental health, artificial intelligence, and LLM-based chatbots. In: Bioinformatics and Biomedical Engineering. Springer Nature Switzerland; 2023:313-326. [doi: 10.1007/978-3-031-34960-7_2]
ai_researcher
1
Large_Language_Models_in_Design_and_Manufacturing.pdf
How Can Large Language Models Help Humans in Design And Manufacturing?

LIANE MAKATURA, MICHAEL FOSHEY, and BOHAN WANG, MIT, USA
FELIX HÄHNLEIN, University of Washington, USA
PINGCHUAN MA, BOLEI DENG, and MEGAN TJANDRASUWITA, MIT, USA
ANDREW SPIELBERG, Harvard University, USA
CRYSTAL ELAINE OWENS, PETER YICHEN CHEN, and ALLAN ZHAO, MIT, USA
AMY ZHU, University of Washington, USA
WIL J NORTON, EDWARD GU, JOSHUA JACOB, and YIFEI LI, MIT, USA
ADRIANA SCHULZ, University of Washington, USA
WOJCIECH MATUSIK, MIT, USA

The advancement of Large Language Models (LLMs), including GPT-4, provides exciting new opportunities for generative design. We investigate the application of this tool across the entire design and manufacturing workflow. Specifically, we scrutinize the utility of LLMs in tasks such as: converting a text-based prompt into a design specification, transforming a design into manufacturing instructions, producing a design space and design variations, computing the performance of a design, and searching for designs predicated on performance. Through a series of examples, we highlight both the benefits and the limitations of the current LLMs. By exposing these limitations, we aspire to catalyze the continued improvement and progression of these models.

CCS Concepts: • Computing methodologies → Modeling and simulation; Spatial and physical reasoning; • Human-centered computing → Natural language interfaces; Text input.

Additional Key Words and Phrases: Large Language Models, GPT-4, computational design, computational fabrication, CAD, CAM, design for manufacturing, simulation, inverse design

1 INTRODUCTION

Advances in computational design and manufacturing (CDaM) have already permeated and transformed numerous industries, including aerospace, architecture, electronics, dental, and digital media, among others. Nevertheless, the full potential of the CDaM workflow is still limited by a number of barriers, such as the extensive domain-specific knowledge that is often required to use CDaM software packages or integrate CDaM solutions into existing workflows. Generative AI tools such as Large Language Models (LLMs) have the potential to remove these barriers, by expediting the CDaM process and providing an intuitive, unified, and user-friendly interface that connects each stage of the pipeline. However, to date, generative AI and LLMs have predominantly been applied to non-engineering domains. In this study, we show how these tools can also be used to develop new design and manufacturing workflows.
Authors' addresses: Liane Makatura, [email protected]; Michael Foshey, [email protected]; Bohan Wang, [email protected], MIT, 77 Massachusetts Ave, Cambridge, MA, 02139, USA; Felix Hähnlein, [email protected], University of Washington, 1410 NE Campus Parkway, Seattle, WA, 98195, USA; Pingchuan Ma, [email protected]; Bolei Deng, [email protected]; Megan Tjandrasuwita, [email protected], MIT, 77 Massachusetts Ave, Cambridge, MA, 02139, USA; Andrew Spielberg, [email protected], Harvard University, Massachusetts Hall, Cambridge, MA, 02138, USA; Crystal Elaine Owens, [email protected]; Peter Yichen Chen, [email protected]; Allan Zhao, [email protected], MIT, 77 Massachusetts Ave, Cambridge, MA, 02139, USA; Amy Zhu, [email protected], University of Washington, 1410 NE Campus Parkway, Seattle, WA, 98195, USA; Wil J Norton, [email protected]; Edward Gu, [email protected]; Joshua Jacob, [email protected]; Yifei Li, [email protected], MIT, 77 Massachusetts Ave, Cambridge, MA, 02139, USA; Adriana Schulz, [email protected], University of Washington, 1410 NE Campus Parkway, Seattle, WA, 98195, USA; Wojciech Matusik, [email protected], MIT, 77 Massachusetts Ave, Cambridge, MA, 02139, USA.

arXiv:2307.14377v1 [cs.CL] 25 Jul 2023

Our analysis examines the standard CDaM workflow to identify opportunities for LLM-driven automation or acceleration. Specifically, we break the CDaM workflow into five phases, and then assess whether and how the efficiency and quality of each phase could be improved by integrating LLMs. The components under investigation include (1) generating a design, (2) constructing a design space and design variations, (3) preparing designs for manufacturing, (4) evaluating a design's performance, and (5) discovering high-performing designs based on a given performance and design space. Although it is feasible to create specialized LLMs for design and manufacturing, we demonstrate the opportunities offered by generic, pre-trained models. To this end, we conduct all of our experiments using GPT-4 [26]¹, a state-of-the-art general-purpose LLM. Our GPT-4-augmented CDaM workflows demonstrate how LLMs could be used to simplify and expedite the design and production of complex objects. Our analysis also showcases how LLMs can leverage existing solvers, algorithms, tools, and visualizers to synthesize an integrated workflow. Finally, our work demonstrates current limitations of GPT-4 in the context of design and manufacturing, which naturally suggests a series of potential improvements for future LLMs and LLM-augmented workflows.

¹We use the OpenAI ChatGPT interface to interact with the GPT-4 versions released between May 24, 2023 and July 19, 2023.

2 BACKGROUND & RELATED WORK

To contextualize our work, we briefly describe the state of the art for generative LLMs and various aspects of CDaM.

2.1 LLMs for Generative Modeling

Large Language Models (LLMs) have garnered significant interest in the research community and beyond, as a result of both their already-demonstrated generative capabilities and their seemingly unbounded promise. Although these models are recognized primarily for their influence on text generation [31], their reach has been extended to impact various other domains, including image generation [32], music generation [9], motion generation [16], code generation [6], 3D model creation [19], and robotic control [23]. Notable foundational models include OpenAI's GPT series, ranging from GPT-2 to the more recent GPT-4 [26]. These models have showcased progressive improvements in fluency, coherence, and generalization capabilities.
Meta AI's LLaMa model has further extended the reach of LLMs by demonstrating proficiency in both text and image synthesis [36]. The Falcon LLM [29], trained exclusively on properly filtered and deduplicated web data, has exhibited comparable performance to models trained on meticulously curated datasets. These models have been utilized in conjunction with Reinforcement Learning from Human Feedback (RLHF) to improve the quality of the generated content [27]. This is done by incorporating human feedback into the training process, where humans rate the quality of the generated outputs and provide examples of ideal outputs for a given input [7]. In parallel, domain-specific LLMs have also been trained for performance within a specific subject area. For example, ProtGPT2 specializes in predicting protein folding structures [14], while Codex has been specifically tailored to understand and generate code [6]. In this work, we investigate the generative capabilities of generic, pre-trained LLMs within CDaM.

2.2 Computational Design and Manufacturing

The CDaM workflow is often decomposed into a series of steps including (1) representing a design, (2) representing and exploring a design space, (3) preparing a design for manufacturing, (4) computing the performance of a design, and (5) finding a design with optimal performance. For each phase, we provide a brief overview of the relevant work, with a focus on aspects that offer the best opportunities for LLM integration.

Design Representations. The cornerstone of computational design is the capacity to digitally represent and manipulate the salient aspects of a given design – such as geometry, articulated joints, material composition, etc. There are many ways to represent such aspects, but we focus on design representations that are compact, understandable, and editable. For example, modern CAD systems represent a shape as a sequence of operations such as 2D sketches, extrusions and Boolean operations [38]. These can be represented as compact programs written in domain specific languages (DSLs) such as OnShape's FeatureScript [2]. Designs can also be represented compactly as a graph [30, 40], in which the nodes typically represent individual components, while edges represent multi-component interactions. Such graphs have been used to efficiently and hierarchically represent CAD models [10], robots [41], metamaterials [21], architecture [24], and chemical molecules [15]. To represent even more complex designs – such as a quadcopter with a physical design and a software controller – multiple DSLs may be used simultaneously. For example, the copter's physical design may be encoded using CAD, while its software is coded using a control-specific DSL.

Design Space Representations. A design space represents an entire family of designs – rather than a single instantiation – which allows for design exploration, customization, and performance-driven design optimization. One of the most popular design space representations is parametric design, in which a few exposed parameters are used to control a design. This is commonly used in CAD systems, where e.g. a bookshelf may be parametrized by its height, width, depth, and number of shelves.
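As a concrete illustration of such a parametric design space (our own hypothetical sketch, not tied to any particular CAD system), a bookshelf family can be captured by a handful of exposed parameters from which all part dimensions are derived:

# Illustrative sketch of a parametric design space: a bookshelf family
# controlled by a few exposed parameters (names and defaults are
# hypothetical). Each parameter assignment yields one member of the family.
from dataclasses import dataclass

@dataclass
class BookshelfParams:
    height: float            # overall height, in inches
    width: float             # overall width, in inches
    depth: float             # overall depth, in inches
    num_shelves: int         # number of interior shelves
    thickness: float = 0.5   # stock thickness, in inches

def instantiate(p: BookshelfParams) -> dict:
    """Derive all part dimensions from the exposed parameters."""
    inner_width = p.width - 2 * p.thickness
    return {
        "side_panels": [(p.depth, p.height, p.thickness)] * 2,
        "top_bottom": [(inner_width, p.depth, p.thickness)] * 2,
        "shelves": [(inner_width, p.depth, p.thickness)] * p.num_shelves,
    }

parts = instantiate(BookshelfParams(height=72, width=48, depth=12, num_shelves=3))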
Another popular option is formal languages such as L-systems [33] or shape-grammars [28, 34], which generate design variations by manipulating a set of terminal and non-terminal symbols according to given rewrite rules (a minimal sketch of such a rewrite system appears after Fig. 1). Formal languages have been used in domains such as architecture [24], robotics [41], and chemistry [15].

Design for Manufacturing. Design for Manufacturing (DfM) is a planning process used to generate designs that can be fabricated with maximal efficiency and minimal cost. One prominent aspect of this is Computer-Aided Manufacturing (CAM), which transforms a digital design into a viable fabrication plan for some manufacturing process, such as 3D printing, 3- or 5-axis CNC milling, or sheet-metal stretching. CAM also extends to multi-process representations such as STEP-NC, which abstracts away from machine-specific G-code in favor of tool-type-specific machining operations that are interpretable on different hardware. Because all of these fabrication plans can also be described as a program in some DSL, CAM can be interpreted as a translation from a design DSL to a manufacturing-oriented DSL. DfM also includes many other aspects, such as selecting an appropriate manufacturing method, optimizing manufacturing process parameters [13], sourcing parts and materials, or modifying a design in light of manufacturing constraints [18].

Performance Prediction. Before manufacturing a design, engineers typically want to understand its predicted performance. For example, automobile engineers may wish to evaluate and iteratively refine a candidate design's efficiency, safety, and aesthetics. To do this, engineers frequently make use of numerical simulation methods such as general-purpose finite element analysis (FEA) [11] or more domain-specific approaches for e.g. acoustics [25], robotics [12], and electromagnetism [35]. Commercial CAD systems (e.g., Autodesk [5] and Dassault Systèmes [8]) integrate simulation into their ecosystem. Since engineers are primarily interested in the performance of the design's manufactured counterpart, it is crucial to minimize the gap between an object's performance in simulation versus reality.

Performance Optimization. Given a design space and a way to predict performance, it is natural to seek designs that perform best with respect to a particular metric. Although this search could be performed via manual trial and error, it is more efficient and effective to use automated exploration tools. One process known as inverse design can automatically search (or optimize) over a given design space to find a design that exhibits some target performance [20]. Inverse design has already been applied to many problem domains. For example, a parametric design space can be searched for designs that have the best value of a simulated metric [39]. Topology optimization has been applied to problems such as minimum compliance. In addition, designs can be optimized for metrics such as weight, cost, and manufacturing time.

Fig. 1. Opportunities for LLM Integration within the CDaM Workflow. Each technical section of our paper covers opportunities for LLM integration in one of the tasks depicted above: text to design, text/design to design space, bi-directional design for manufacturing, design to performance, and inverse design (from performance and design space to an optimized design).
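The rewrite-rule mechanism behind L-systems and shape grammars is compact enough to show directly. The following minimal sketch (with symbols and a production rule of our own choosing, purely for illustration) expands a non-terminal 'S' into a family of ever-longer designs:

# Minimal L-system sketch (illustrative; symbols and rules are hypothetical):
# design variations emerge by repeatedly rewriting non-terminal symbols
# according to production rules.
rules = {"S": "S+B"}  # a unit 'S' grows by appending a bay 'B' at each step

def rewrite(axiom: str, steps: int) -> str:
    s = axiom
    for _ in range(steps):
        # Apply every applicable rule in parallel; other symbols are terminal.
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

print(rewrite("S", 3))  # -> "S+B+B+B": three derivation steps of the grammar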
3 OVERVIEW

The fundamental aim of this study is to conduct an in-depth exploration of the opportunities and challenges of applying contemporary LLMs within the landscape of the CDaM workflow described in Section 2.2. Driven by this objective, we propose a thorough and wide-ranging exploration that is independent of any predefined or proposed framework. To apply LLMs coherently across such diverse tasks, we leverage the insight that all building blocks in the CDaM workflow (design, design spaces, manufacturing instructions, and performance metrics) can be represented by compact programs. Thus, at a high level, every phase of the CDaM workflow can be seen as a translation layer between an input DSL and an output DSL. The fact that LLMs excel at such symbolic manipulations suggests that LLMs have the potential to address these tasks while simultaneously leveraging and improving upon our traditional solutions. To achieve comprehensive coverage and uncover the different facets of LLM-assisted CDaM, we have undertaken an extensive suite of experiments, incorporating a broad variety of design representations, manufacturing processes, and performance metrics. These are detailed further in Section 3.2.

3.1 Methodology

Our methodology is crafted to provide a comprehensive inspection of the opportunities for and efficacy of various interfaces between GPT-4 and the CDaM workflow. We investigate each of the five stages of the design and manufacturing pipeline individually. As illustrated in Figure 1, these stages include: design generation (Section 4), design space generation (Section 5), design for manufacturing (Section 6), performance prediction (Section 7), and inverse design (Section 8). In each of these stages, we pose fundamental questions about ways in which GPT-4 may offer some benefit, and then conduct a series of experiments to answer these questions. For each query, we investigate aspects such as (1) strategies for engineering effective prompts, (2) strategies for integrating human feedback, expertise, or preferences into the LLM-assisted design process, and (3) tasks that GPT-4 can accomplish natively versus tasks that are better completed by asking GPT-4 to leverage external tools.

After a detailed examination of each stage, we sought to understand the implications of incorporating GPT-4 within an end-to-end CDaM process. To this end, we designed and fabricated two practical examples (a cabinet and a quadcopter) with GPT-4's support. The end-to-end design process for each example is detailed in Section 9.

Category | Code | Title | Summary
Capabilities | C.1 | Extensive Knowledge Base in Des.&Mfg. | GPT-4 has a broad knowledge of design and mfg. considerations
Capabilities | C.2 | Iteration Support | GPT-4 attempts (and often succeeds) to iterate and rectify errors when prompted
Capabilities | C.3 | Modularity Support | GPT-4 can reuse or adapt previous/provided designs or solutions
Limitations | L.1 | Reasoning Challenges | GPT-4 struggles with spatial reasoning, analytical reasoning, and computations
Limitations | L.2 | Correctness and Verification | GPT-4 produces inaccurate results or justifications for its solutions
Limitations | L.3 | Scalability | GPT-4 struggles to respect multiple requests concurrently
Limitations | L.4 | Iterative Editing | GPT-4 forgets/introduces errors when modifying previously-generated designs
Dualisms | D.1 | Context Information | GPT-4's performance depends on the amount of context provided
Dualisms | D.2 | Unprompted Responses | GPT-4 makes inferences/suggestions beyond what is specified in the prompt
Table 1. GPT-4's key properties for CDaM. To facilitate discussion of GPT-4's applicability for design and manufacturing (Des.&Mfg.), we have identified 9 key observations about GPT-4 that persist across several aspects of the CDaM workflow. This includes 3 powerful capabilities, 4 limitations, and 2 dualisms (so named because they may manifest either as an opportunity or a drawback, depending on the context). We use these observations to frame our discussions about GPT-4's suitability for each stage of the CDaM workflow.

Beyond these individual questions, our comprehensive investigation has also exposed several key insights about GPT-4's general capabilities and limitations with respect to CDaM. We have also observed a group of properties that we term 'dualisms', because they may manifest either as an opportunity or a drawback, depending on the situation. Our findings are summarized in Table 1, with a full description in Section 10.1. To emphasize the pervasive nature of these properties, we also use these labels as a framework for our discussions and takeaways at the end of each section. Specifically, we draw on each section's findings and examples in order to illustrate the manifestation and impact of various properties in Table 1 throughout the CDaM workflow.

3.2 Scope of Evaluation

To conduct a holistic survey of GPT-4-assisted CDaM, our experiments span a number of different design domains (Section 3.2.1), performance metrics (Section 3.2.2) and manufacturing methods (Section 3.2.3). Here, we briefly describe each domain of interest, along with the specific challenges they pose and the sort of representative, transferable insight we hope to glean by studying each domain in connection with LLMs.

3.2.1 Target Design Domains. Our experiments are concentrated in three main design domains: 2D vector graphic design, 3D parametric modeling, and articulated robotics problems.

Vector graphics use a series of text-based commands to represent paths and areas that form a given design. Vector image formats are an important part of CDaM, as they can be used as both a design specification and a manufacturing specification for e.g. laser cutters. Despite their simplicity, vector graphics can represent a wide range of 2D and 3D objects, such as artistic engravings or flat-pack furniture. We examine LLMs' capacity to generate two popular vector formats: SVG and DXF. These formats present several challenges: they contain boilerplate formatting that GPT-4 may struggle to reproduce; it may be difficult to lay out individual pieces on the canvas; and finally, it may be difficult to decompose higher-dimensional designs into 2D. Thus, vector graphics will test GPT-4's spatial reasoning and ability to respect highly-constrained syntax, either on its own or with the use of external libraries.

Parametric modeling languages generate 3D geometry through a sequence of constructive instructions. The term "parametric modeling" reflects how each constructive operator exposes a set of parameters, such as the radius of a circle. We explore two distinct approaches that are powerful, widely-used, and well-documented online. The first is rooted in classic Constructive Solid Geometry (CSG), which constructs shapes by successively deploying boolean operations (union, intersection, subtraction) over basic shapes or primitives (such as cuboids, spheres, cylinders, and so forth) that can undergo transformations such as translations, rotations, and scaling. The CSG
approach is intended to test the global spatial reasoning capacity of GPT-4, as every CSG operation/transformation occurs w.r.t. a shared coordinate space. The second representation relies on the contemporary B-rep format used by modern CAD systems. Here, geometry is built through a sequence of operations like sketching, extruding, and filleting. Each operation in this context is parametric and uses references to previously created geometry to e.g., select a plane for a sketch design or select a sketch for an extrusion. Sketch-based CAD will test GPT-4's ability to effectively switch between and reason over multiple relative, local coordinate frames.

Robotics offers a particularly rich design domain, as GPT-4 must coordinate a set of articulated and actuated geometries to form complex objects such as open chain robot arms, wheeled robots, copters/UAVs, and robot grippers. Robotics representations must describe not only the high-level geometry of each part, but also their properties and relationships – including the joints between parts, the degrees of freedom that those joints exhibit, and dynamics information such as the inertia of a given part. Several existing formats support these tasks, but we primarily use the XML-based language known as the Universal Robot Description Format (URDF). We also investigate the use of a more general graph-based robot representation. These formats test GPT-4's ability to simultaneously reason about multiple aspects of design, such as static geometric bodies and dynamic articulation constraints.
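For intuition, a graph-based robot representation of the kind we investigate might encode parts as nodes and joints as edges. The following sketch is our own hypothetical encoding (it is not URDF syntax, and the field names are placeholders):

# Hypothetical graph-based robot representation (illustrative; not URDF):
# nodes carry part geometry and dynamics, edges carry joint information.
robot = {
    "parts": {
        "base":    {"shape": "box",      "size": (0.3, 0.3, 0.1),   "mass": 2.0},
        "link1":   {"shape": "cylinder", "radius": 0.03, "length": 0.25, "mass": 0.5},
        "gripper": {"shape": "box",      "size": (0.08, 0.05, 0.05), "mass": 0.2},
    },
    "joints": [
        {"parent": "base",  "child": "link1",   "type": "revolute",
         "axis": (0, 0, 1), "limits": (-3.14, 3.14)},
        {"parent": "link1", "child": "gripper", "type": "fixed"},
    ],
}

# Static geometry (parts) and dynamic articulation (joints) live in one
# structure, which is exactly the kind of joint reasoning these formats
# demand of GPT-4.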
3.2.2 Target Performance Domains. Diverse performance domains within engineering design require evaluation of aspects such as structural and material properties, mechanical integrity, geometry-based functionality, materials use, electromechanical integration, and subjective features. The results of such evaluation allow us to (dis)qualify a design for use, and to use the evaluation to further understand and improve the design. Using GPT-4, we focus on assessing mechanical and structural properties by generating first-order analysis equations for input designs of standard objects like chairs, cabinets, and a quadcopter, which tests the ability of GPT-4 to sufficiently understand a given input design in text form or through a DSL and to evaluate criteria for functionality and failure. Mechanical properties assessed include weight, size, load capacity, storage capacity, and stability. Analyses of electromechanical functionality include battery life and quadcopter travel distance. Further use of GPT-4 aims to streamline the computationally intensive process of Finite Element Analysis (FEA), a crucial tool for understanding structural behavior in detail under various conditions; we apply this to the case of a load on a set of chairs.

In addition to these technical aspects, our investigation extends into the subjective domains of sustainability and aesthetics, which cannot be strictly quantified. The inherent complexity and qualitative nature of these areas present unique challenges in evaluation. While it is well-known that computational systems can compute quantitative features, machine learning systems are becoming more sophisticated in artistic domains, and so we seek to leverage the capacity of LLMs for lexical analysis to aid more holistically in the more ambiguous realms of the design process, and to find its limits. For example, could an LLM reasonably address whether a piece of furniture of a given size is "large", or whether a shoe of a given design is "comfortable", or can it only handle classically quantifiable features? Can it even help us to reason more objectively about what aspects delineate these properties? To this end, we test evaluation of subjective domains and use GPT-4 to generate a scoring system and functions for quantifying the sustainability of a chair, the classification of chairs based on categories of aesthetic influence, and the appropriate distribution of a set of chairs into a set of rooms in a house, among other examples.

We further combine these performance metric evaluations with the principles of inverse design. Inverse design entails setting desired performance attributes and employing computational methodologies to deduce design parameters that satisfy these attributes, both by generating areas for improvement within a design domain and by testing the effects of implementing improvements suggested by GPT-4 or target design goals of our own interest, as well as selecting appropriate methods of optimization. In this case, given a design/decision space for an object, we use GPT-4 to generate and implement methods to computationally improve or optimize qualifying designs to satisfy designated performance goals. This methodical approach evaluates whether LLMs can apply constructive logic for design enhancement and innovation.

3.2.3 Target Manufacturing Domains. Leveraging language models like GPT-4 in a DfM context can yield more consistent and scalable decision-making, potentially augmenting human expertise and reducing our reliance on CAD software usage. Potential applications of GPT-4 include the selection of optimal manufacturing techniques, suggestion of design modifications that would enable easier production, identification of potential suppliers, and creation of manufacturing instructions. This approach aims to alleviate many of the bottlenecks caused by designers' lack of knowledge and experience in DfM.

In a set of experiments, we explored GPT-4's capabilities across various tasks. Firstly, GPT-4 was used to identify the optimal manufacturing process for a given part, considering factors such as part geometry, material, production volume, and tolerance requirements. Next, GPT-4 was tasked with optimizing a component design for CNC machining. Given the geometry of the component, GPT-4 identified potential manufacturing difficulties and modified the design to address these. We also leveraged GPT-4's extensive dataset knowledge to identify parts needed for manufacturing. In addition to these, GPT-4 was used to create manufacturing instructions for both additive and subtractive design processes. Additive design can be challenging due to the need for spatial reasoning, precision, and meticulous planning, and often requires many iterations. We explored the generation of fabrication instructions using subtractive manufacturing techniques for a cabinet design. We also investigated GPT-4's potential in generating machine-readable instructions for robot assembly tasks and converting those into human-readable standard operating procedures. This allowed for effective communication and collaboration between robots and human operators.

4 TEXT-TO-DESIGN

For our first line of inquiry, we explore the extent to which GPT-4 is able to generate designs across a variety of domains.
Even within the specific context of manufacturable design, the concept of a "design" is quite broad, and exists at many scales. For example, we may want to specify a single self-contained part, or a sizable hierarchical assembly containing several levels of sub-assemblies and/or other individual component modules. Such assemblies may be completely customized/self-contained, with all parts designed simultaneously, or they may be hybrid designs that integrate existing, pre-manufactured elements such as brackets or motors. In many cases, our target design tasks also include dynamic considerations such as assembly mating or articulated joints. Although these complex tasks may initially seem out-of-scope for lexical models such as LLMs, there are many modeling and design paradigms that can be expressed in terms of potentially-LLM-compatible language. To guide our exploration of GPT-4's ability to interface with each of these models, we pose the following questions:

• Q1 Can GPT-4 generate a meaningful design when provided with a high-level description of the goal and a given modeling language?
• Q2 To what extent is the user able to control the designs created by GPT-4? Is GPT-4 able to interpret and respect user-defined constraints, such as spatial relationships between objects or integration of standard pre-fabricated parts?
• Q3 Is GPT-4 able to incorporate high-level abstractions used by human designers, such as modular (de)composition?

4.1 Simple, self-contained designs from high-level input (Q1)

To explore GPT-4's capacity for design, we first test its ability to do one- (or few-) shot generation of an object from a minimal high-level text description as input. Ideally, we would like to understand GPT-4's ability to complete design tasks independent of any particular modeling paradigm. However, it is not immediately clear how much dependence there may be on the specific representation that is chosen, because the variation in possible language-based modeling paradigms is significant. Some languages are very general and versatile, with a wide variety of features and capabilities, while others may be highly-specialized for a specific set of tasks or outcomes. Similarly, some languages are well-established with plentiful online documentation or examples, while others may be custom-defined, poorly documented, or otherwise underrepresented in GPT-4's training repository. Finally, some languages are fairly streamlined, while others may be syntactically complex and/or require the use/coordination of many modules. Each possibility offers unique capabilities and challenges. Thus, we set out to test a wide variety of them, in an effort to determine LLMs' ability to use each representation; whether there are any conclusions that seem to span across different representations; and whether any particular representations seem uniquely well- or poorly-suited for LLM integration.

4.1.1 Vector Graphics with SVG/DXF. Our initial focus in the design domain is on 2D vector graphics. Vector formats such as SVGs or DXFs are prevalently utilized in manufacturing processes, like those for laser or waterjet cutting. The goal of our investigation was to ascertain whether GPT-4 could empower designers to transform their text directly into these vector formats. To evaluate this, we conducted experiments to determine if GPT-4 is capable of generating a valid SVG file and converting the design into DXF format.
The primary aim of our experiment was to design an SVG file for a cabinet, with predetermined dimensions, to be constructed from 1/2 inch plywood. This implies that the thickness of each wall, a preset parameter, is 0.5 inches. The experimental setup involved the design of a cabinet comprising three shelves, with overall dimensions measuring 6 feet in height, 1 foot in depth, and 4 feet in width. A crucial aspect of the investigation was to see if GPT-4 could accurately account for this wall thickness during the design of the cabinet, appropriately adjusting the dimensions of its various components. GPT-4 was able to design the specified cabinet and subsequently generated a Python script to create an SVG file reflecting the cabinet's layout. The script considered the necessary clearances for the thickness and accurately positioned the side panels, top and bottom panels, shelves, and back panel. Moreover, it factored in the prescribed spacing between parts and leveraged 'svgwrite' to generate the SVG file. The resulting SVG file provided a visual depiction of the cabinet's design. We also replicated the experiment to create a DXF file, where GPT-4 utilized 'ezdxf' to generate the file. The results of these experiments are depicted in Figure 2. In conclusion, GPT-4 demonstrated its capability to employ the APIs for generating the vector file in the correct format without any simplifications. Nevertheless, it was necessary to perform several iterations to ensure GPT-4 did not cause any overlap among the cabinet parts.

User prompt: I would like to build a cabinet; the cabinet will have 3 shelves 6 feet high, 1 foot deep, and 4 feet wide. Please ensure that you account for the thickness of the wood. I want to use 1/2 inch plywood. Can you label the parts? Create a script that creates an SVG file. Use lines to depict the edges of the parts.

import svgwrite
(... omitted by authors: dimension conversion from feet to inches ...)
dwg = svgwrite.Drawing('cabinet.svg', profile='tiny')
(... omitted by authors: variable setup ...)
# Side panels
for i in range(2):
    dwg.add(dwg.rect((cumulative_width, 0), side_panel_dims))
    cumulative_width += side_panel_dims[0] + space
    max_height = max(max_height, side_panel_dims[1])
(... omitted by authors: similar for loop to write the top and bottom panels ...)
# Reset cumulative_width for next row and update max_height
cumulative_width = 0
max_height += space
(... omitted by authors: for loop to write the shelves ...)
(... omitted by authors: reset cumulative width and height variables, then write back panel ...)
dwg.save()

User prompt: Can you use the same method you used in this to generate the placement? Except generate a DXF rather than an SVG?

import ezdxf
(... omitted by authors: variable setup and dimension conversion from feet to inches ...)
doc = ezdxf.new("R2010")  # Create a new DXF document
msp = doc.modelspace()  # Create a new layout
(... omitted by authors: variable setup ...)
# Side panels
for i in range(2):
    msp.add_lwpolyline([
        (cumulative_width, 0),
        (cumulative_width + side_panel_dims[0], 0),
        (cumulative_width + side_panel_dims[0], side_panel_dims[1]),
        (cumulative_width, side_panel_dims[1]),
        (cumulative_width, 0),
    ])
    cumulative_width += side_panel_dims[0] + space
    max_height = max(max_height, side_panel_dims[1])
(... omitted by authors: similar for loop to write the top and bottom panels ...)
(... omitted by authors: reset cumulative width and height variables, then for loop to write the shelves ...)
(... omitted by authors: reset cumulative width and height variables, then write back panel ...)
doc.saveas("cabinet.dxf")  # Save the DXF file

Fig. 2. The prompts used for generating files in the format of SVG and DXF using GPT-4. The visualizations of the graphs are shown on the right side. It is clear that GPT-4 can accomplish this task after several iterations.

4.1.2 CSG with self-defined primitives. The next design domain we are investigating is CSG. As outlined in Sec. 3.2.1, CSG languages generally operate by building up a collection of primitives that have been altered or combined via linear transformations and Boolean operations. Because the associated design logic can be quite complex, it was not immediately clear that GPT-4 should be able to generate designs using these languages. Thus, to progressively test GPT-4's modeling capabilities, we begin by exploring a very simple, custom CSG language based on a single primitive: a box. Boxes are one of the most common primitives seen in manufacturing. Moreover, many shapes can be considered as a combination of boxes with different sizes. Given the simplicity of a box, and of any shape formed from boxes, we would like to see if GPT-4 is able to generate designs of this kind, such as tables and chairs.

Our initial approach to this task is performed in 2D. We provide a function, foo(x, y, w, h), which forms a box of dimensions w × h centred at the point (x, y). We subsequently employ this function to generate letters composed of axis-aligned bars, such as 'F' and 'E'. During the testing phase, we observed that while the system understands the requirement of 2D boxes, it struggles with their accurate placement. A particularly prominent issue is the collision problem. More specifically, the GPT-4 system fails to determine whether two boxes are overlapping or whether there is a vacant space between them. This issue is observable when creating letters like 'T' and 'E'. Using three to five targeted prompts enabled GPT-4 to ascertain the correct positions. However, these prompts had to be granular and often involved providing the direct solution. The outcomes of these attempts are demonstrated in Figure 3. Interestingly, after addressing this issue, GPT-4 appears to retain the corrections. This is evidenced by its successful generation of the new letters 'F' and 'L' in a single attempt. These letters share a similar structure to 'T' and 'E', and the results can be seen in Figure 3.

Our next step involved venturing into 3D, which holds more practical value. Analogous to the 2D scenarios, we inform GPT-4 of a pre-established function, box(x, y, z, w, h, d), which generates a 3D box of dimensions w × h × d centred at the 3D coordinates (x, y, z). We then tested if GPT-4 could write a program to produce a simple box of specified dimensions, for instance, 100 × 100 × 40, utilizing function 'box'. GPT-4 successfully accomplished this task, and the resulting text explanation illustrates its understanding of the box concept and the usage of our predefined function. Next, we presented a more complex challenge: having GPT-4 design a simple table, typically consisting of four legs and a tabletop in the real world. We posed the question of whether GPT-4 could craft a program to generate such a table with a provided size using solely our box function. The output text explanation revealed that GPT-4 accurately comprehends the structure of a basic table; a sketch of the kind of five-box program we had in mind follows.
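The following is our own illustrative sketch of such a program (not GPT-4's verbatim output), using the 'box' signature defined above. The stub implementation, the default dimensions, and the axis convention (w, h, d as extents along x, y, z, with z pointing up) are all assumptions made so the sketch runs standalone.

# Illustrative five-box table program (our sketch, not GPT-4's output).
# Assumptions: box(x, y, z, w, h, d) places a w*h*d box centred at (x, y, z),
# with w, h, d taken as extents along x, y, z respectively, and z pointing up.
boxes = []

def box(x, y, z, w, h, d):
    """Stand-in for the pre-established primitive; records each cuboid."""
    boxes.append((x, y, z, w, h, d))

def table(width=100.0, depth=100.0, height=75.0, top=5.0, leg=8.0):
    # Tabletop: its centre sits top/2 below the overall height.
    box(0, 0, height - top / 2, width, depth, top)
    # Legs: inset so their outer faces are flush with the table edges,
    # spanning from the floor to the underside of the tabletop.
    leg_height = height - top
    for sx in (-1, 1):
        for sy in (-1, 1):
            box(sx * (width - leg) / 2, sy * (depth - leg) / 2,
                leg_height / 2, leg, leg, leg_height)

table()
assert len(boxes) == 5  # four legs plus the tabletop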
Given that we only provide the overall table size, GPT-4 lacks information about individual leg lengths or tabletop thickness. Yet, it was able to identify these missing parameters and make reasonable assumptions. Consequently, GPT-4 succeeded in writing a program to represent the table by creating five boxes using our predefined function. Upon visualizing the 3D table, however, the relative positioning of each pair of boxes was not always accurate. We noticed that the tabletop appeared to be suspended in the air, not in contact with the legs, as shown in Figure 4. This difficulty, also observed in our 2D tests (Figure 3), pertains to GPT-4's understanding of mathematical concepts. In this instance, we expedited the process by directly providing GPT-4 with the solution. We indicated the necessary translations for the misplaced boxes, acknowledging that it would take several prompts to rectify the issue otherwise. After correcting the floating tabletop, the table appeared as intended, as demonstrated in Figure 4. Therefore, creating a table only required two prompts, significantly streamlining the procedure for generating a basic table.

Once we had successfully generated the table, our next, more challenging goal was to design a few accompanying chairs. We tasked GPT-4 with creating a chair compatible with the table, using only our predefined function. Similar to its approach with the table, GPT-4 successfully deduced the basic structure of a simple chair, comprising the seat, four legs, and a backrest. Unlike the table instance, we didn't observe any 'floating' issues in this scenario. It appears that GPT-4 might have indeed gleaned some insights from previous experiences, as we also observed when creating 2D letters. After we rectified the letters 'T' and 'E', there were no issues with the remaining letters. Additionally, GPT-4 demonstrated comprehension of the concept of compatibility by outputting a chair of an appropriate size. However, it was not successful in all aspects, as depicted in Figure 5. We attempted to correct the backrest but were unable to do so. As a result, we had to manually adjust the position, directing GPT-4 to the specific lines that needed modification to correct the structure. The final result can be seen in Figure 5. We believe the root of these issues lies in GPT-4's struggles to comprehend geometric concepts, a difficulty also observed in previous examples. Despite these hurdles, the process for creating a basic table and chairs has been considerably simplified.

Fig. 3. Failed and Successful Cases of Letter Creation Using GPT-4. The solid square is the origin of the 2D coordinate system.

Fig. 4. Failed and Successful Cases of Table Creation Using GPT-4. The table consists of five parts: 4 legs and a tabletop. Although GPT-4 successfully gives a correct composition of the table, GPT-4 outputs a floating tabletop without any human intervention.

Fig. 5. Failed and Successful Cases of Chair Creation Using GPT-4. GPT-4 successfully gives a correct composition of the chair. In the incorrect version (left), the dimensions of the backrest are wrong, and its orientation appears incorrect.

Our final objective was to position four identical chairs around the table. Although theoretically feasible without invoking rotation, GPT-4 failed to generate the chairs with the correct orientations.
We believe this failure stems from the same root cause we have encountered previously, namely, GPT-4's difficulty in handling mathematical and geometric concepts. Creating four chairs with correct orientations without the support of rotation entails complex geometric transformations: GPT-4 must comprehend that a box rotated 90 degrees around its center is equivalent to a swap of its width and depth dimensions. To alleviate this issue, we expanded our 'box' function to include an additional input argument, 'angle', corresponding to a rotation angle around the vertical axis. With this extension, GPT-4 was able to create a program using solely the 'box' function that successfully positioned four chairs around the table with correct orientations, as displayed in Figure 5. We surmise that the introduction of 'angle' considerably simplifies the logic behind chair placement, enabling GPT-4 to create such a program.

In conclusion, GPT-4 exhibits a strong understanding of posed questions and excels at analyzing requested objects to determine their composition. However, it demonstrates a weakness in handling geometric and mathematical concepts. While it can provide nearly accurate solutions when mathematics is involved, it struggles to comprehend the underlying mathematical principles and, as a result, cannot independently correct math-related issues when they arise.

4.1.3 CSG with PyVista. Building on GPT-4's success generating CSG-like models with boxes, we set out to explore GPT-4's capacity to use a larger suite of primitives. For this, we used an existing 3D visualization library, PyVista, which allows us to create and place a variety of 3D primitives such as spheres and cones. Thanks to the library's documentation, GPT-4 is able to automatically assemble a functional Python program using PyVista's primitive functions.

Fig. 6. Aquatic Creatures Generated by GPT-4. (a) Generic fish; (b) goldfish; (c) manta ray; (d) loach. GPT-4 successfully generated variations of aquatic creatures automatically using primitives from the PyVista package in Python.

We asked GPT-4 to use PyVista's primitives to model several variations of a fish, including specific bio-inspirations such as a goldfish, a manta ray, and a loach (Figure 6). GPT-4 successfully selected and scaled an appropriate set of primitives for each example, and provided sound bio-inspired rationale for its decisions. In particular, although most of the fish are composed using a sphere for the body, GPT-4 intuits that a loach would be most effectively approximated by using two cones for the body to give it an elongated shape. One area in which GPT-4 struggled was the determination of the primitives' orientations. It often produced results that indicated an internal confusion of some of the axes, or an otherwise flawed approximation of the orientation required to achieve a desired effect. After engaging in a dialogue with GPT-4, it was able to rectify the orientations of the primitives to more closely resemble the target creatures. While promising, these tests reiterate GPT-4's seemingly limited capacity to account for local coordinate frames.
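The assemblies GPT-4 produced were in the following style; this snippet is a minimal illustration only (the primitive choices and dimensions GPT-4 selected varied between runs):

import pyvista as pv

# A generic fish assembled from PyVista primitives (illustrative
# dimensions; a sketch, not GPT-4's verbatim output).
body = pv.ParametricEllipsoid(2.0, 0.8, 1.0)              # elongated body
tail = pv.Cone(center=(-2.5, 0.0, 0.0), direction=(1.0, 0.0, 0.0),
               radius=0.6, height=1.2)                    # tail fin
dorsal = pv.Cone(center=(0.0, 0.0, 1.2), direction=(0.0, 0.0, -1.0),
                 radius=0.3, height=0.8)                  # dorsal fin

plotter = pv.Plotter()
for part in (body, tail, dorsal):
    plotter.add_mesh(part, color="orange")
plotter.show()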
4.1.4 CSG with OpenJSCAD. To explore a full-fledged approach for LLM-aided CSG, we test GPT-4's ability to generate meaningful designs using the open-source JavaScript-based CSG library, OpenJSCAD [3]. OpenJSCAD has extensive documentation available online, and we found that GPT-4 natively possesses a good grasp of the API, its components, and the required code structure. In particular, it understood that it needed to import each function from the corresponding modules, and that it needed to define and export a function named main. For our experiments, we provided GPT-4 with access to the full API, and generally allowed it to select the appropriate primitives and operations without user interference.

To test GPT-4's design abilities, we asked it to design a simple cabinet with one shelf, as shown in Figure 7. GPT-4 reliably selects and instantiates the required primitives, along with intuitive naming conventions and structure within the OpenJSCAD code. GPT-4's initial orientation of the parts was also generally reasonable, but the specific positioning of each part was often incorrect. Despite multiple attempts, GPT-4 was unable to generate any fully correct cabinet in a single shot with no subsequent user intervention. Moreover, GPT-4 frequently produced highly disparate results from one run to the next. Even when using an identical prompt in fresh chat environments, GPT-4's responses varied widely in terms of their overall code structure, design accuracy, and the specific errors or oversights made. Figure 8 shows one example of a drastically different design process, even when seeded with the same initial prompt as Figure 7.

Throughout our experiments, we found that GPT-4 encountered a few common pitfalls when generating designs in OpenJSCAD. Occasionally, GPT-4 made small syntactic errors such as generating incorrect boilerplate, importing functions from incorrect modules, or making "typos" in API calls – e.g., trying to import from the boolean module rather than the correct booleans module, or calling the cube() function with parameters that were intended to generate a cuboid(). In an attempt to avoid these pitfalls, we created a small list of "hints"/"reminders" for best practices when working with OpenJSCAD; this short list was always passed in alongside our initial prompt. See Appendix A.1 for a full listing of these reminders. Although these reminders seemed to help mitigate these issues, we were unable to eradicate them entirely. However, GPT-4 can easily correct the majority of these issues when they are pointed out by the user. Often, the process of correcting an issue through prompts and responses was faster than adjusting the code manually, making LLMs a useful design partner.

One pervasive issue that proved more difficult to correct was GPT-4's trouble positioning the primitives in 3D space. In particular, GPT-4 frequently seemed to forget that OpenJSCAD positions elements relative to the center of a given primitive, rather than an external point on the primitive (e.g., the lower-left corner). GPT-4's arrangements were frequently incorrect due to this issue. When GPT-4 is reminded of this convention, it does generally alter the design, but it is not always able to correct the issue. If sufficiently many rounds of local edits proved unable to address the alignment issues, we found that it was generally more effective to direct GPT-4 to disregard all existing measurements and re-derive the elements' positions from scratch (see Figure 8). Overall, we find that GPT-4 is able to generate reasonable OpenJSCAD models from high-level input. However, the design specifications that emerge on the first attempt are rarely fully correct, so users should expect to engage in some amount of corrective feedback or iteration in order to attain the desired result.
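The center-based convention is easy to state as arithmetic. The following helper is a minimal Python sketch of the layout math that appears in the OpenJSCAD listings of Figure 7 (it is not part of the library): to place a board so that one of its faces, rather than its center, lands at a target coordinate, the translation must be corrected by half the board's thickness.

# Center-based placement: the z-coordinate of a board's center such
# that its bottom face sits flush at -total_height/2.
def center_for_flush_bottom(total_height, thickness):
    return -total_height / 2 + thickness / 2

# Example: a 0.75 in thick bottom board in a 30 in tall cabinet whose
# origin is at the cabinet's center.
z_bottom = center_for_flush_bottom(30, 0.75)   # -14.625, not -15.0

This mirrors the translate([0, 0, -height/2 + thickness/2], ...) calls in Figure 7, which is exactly the step GPT-4 repeatedly got wrong.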
Let's use OpenJSCAD to design a cabinet ... with 3/4" wood sheets, and final exterior dimensions of 30 inches tall, 20 inches wide, and 18 inches deep. The stationary part of the cabinet should be comprised of 6 boards: bottom, top, back, two sides, and one shelf centered inside the cabinet. (... omitted by authors: OpenJSCAD modeling hints; see Appendix A.1 ...)

(... omitted by authors: import statements and variable setup ...)
pieces.push(colorize([1, 0, 0], translate([0, 0, -height/2 + thickness/2], cuboid({size: [width, depth, thickness]}))))  // bottom
pieces.push(colorize([0, 1, 0], translate([0, 0, height/2 - thickness/2], cuboid({size: [width, depth, thickness]}))))  // top
pieces.push(colorize([0, 0, 1], translate([0, -depth/2 + thickness/2, 0], cuboid({size: [width, thickness, height]}))))  // back
pieces.push(colorize([1, 1, 0], translate([-width/2 + thickness/2, 0, 0], cuboid({size: [thickness, depth, height]}))))  // left
pieces.push(colorize([0, 1, 1], translate([width/2 - thickness/2, 0, 0], cuboid({size: [thickness, depth, height]}))))  // right
pieces.push(colorize([1, 0, 1], translate([0, -thickness/2, 0], cuboid({size: [innerWidth, thickness, innerDepth]}))))  // shelf
(... omitted by authors: rest of design ...)

This is a great first guess! However, there are two issues. First, the shelf is oriented vertically, rather than horizontally. Second, there are many cases where the boards are protruding into one another. (... omitted by authors: description of the desired relative extent of each board ...)

(... omitted by authors: preceding code was unchanged ...)
pieces.push(colorize([0, 0, 1], translate([0, -depth/2 + thickness/2, 0], cuboid({size: [width, thickness, innerHeight]}))))  // back
pieces.push(colorize([1, 1, 0], translate([-width/2 + thickness/2, thickness/2, 0], cuboid({size: [thickness, innerDepth, innerHeight]}))))  // left
pieces.push(colorize([0, 1, 1], translate([width/2 - thickness/2, thickness/2, 0], cuboid({size: [thickness, innerDepth, innerHeight]}))))  // right
pieces.push(colorize([1, 0, 1], translate([0, 0, 0], cuboid({size: [innerWidth, innerDepth, thickness]}))))  // shelf
(... omitted by authors: subsequent code was unchanged ...)

Fig. 7. GPT-4's cabinet design process. Simple cabinet design, from initial prompt to the final fabricable result, in which all elements have the correct orientation and no components overlap. GPT-4 reached the final result in 8 messages (4 each of prompt/response).

Fig. 8. Alternate cabinet design process by GPT-4. From left to right, these images show a second example of GPT-4's design process for the simple cabinet, beginning from the same prompt as Figure 7. This example highlights the widely varying paths that GPT-4 may take to construct a design. GPT-4 reached the final result after 16 messages (8 each of prompt/response). Up until the fourth displayed result, the user prompts' phrasing focused on improving upon the previous result. However, after failing to make progress, we asked GPT-4 to disregard the previous attempts and recompute the elements' positions from scratch; this quickly resulted in a valid cabinet, as shown in the final image.
4.1.5 Sketch-based CAD with Onshape. Another popular method for 3D shape modeling comes from contemporary computer-aided design (CAD) software. Rather than directly constructing and modifying solid primitives (as in the CSG approaches discussed above), modern parametric CAD systems generally work by lifting planar sketches into 3D and subsequently modifying the 3D geometry. These sketches are placed on planes, which can be offset construction planes or planar faces of the current 3D model. The selected sketching plane serves as a local coordinate system in which the sketch primitives are defined. In graphical user interfaces, this change of coordinate systems is accounted for by letting the user easily align their camera view to a top-down view onto the sketch plane. This change of view effectively comes back to drawing sketches in 2D, removing the cognitive burden of having to think about sketches in 3D. Despite the lack of graphical assistance, we want to investigate whether GPT-4 is able to design objects using a sketch-based modeling language. However, since graphical assistance is very prevalent in this modeling paradigm, CAD models are mostly constructed via a GUI rather than via textual programming, even though textual APIs exist, e.g., Onshape's Featurescript [2]. Therefore, documentation and examples are less available than for the modeling paradigms from the previous sections. Indeed, GPT-4 performs poorly when trying to generate Featurescript code directly, which is why we decided to provide a simplified DSL. For our experiments, we constructed a single prompt containing the following DSL description: our DSL exposes two operators, createSketch and extrude, and two sketch primitives, circle and rectangle. Additionally, we provide a construction example, written in this language, of a single-leg round table. Lastly, we also add some hints about how to write the program, e.g., to explicitly use design variables and to write syntactically correct Python. All of the output designs generated by GPT-4 in this section are automatically translated into Onshape PartStudios. The full prompt can be found in the supplemental material.

Our first task is the design of a chair with 4 legs, a rectangular seat and a rectangular back, see Fig. 9. We asked GPT-4 to perform this task several times and observed the following.
• The design sometimes includes cylindrical legs, sometimes rectangular legs.
• The design is always constructed in a single direction, the 𝑍 direction. Our input example of the round table only used the 𝑍 direction to select sketching planes, but the description of our language documented the use of other plane directions.
• We observe mainly two types of designs: (i) designs which are constructed in both the negative and positive 𝑍 direction starting from the seat, see the first answer in Fig. 9, and (ii) designs which start from a leg, see the second response in Fig. 9.
We observe that the first type of design has a higher chance of being correct, whereas the second type fails more often. The failures are due to changes in the coordinate system. For example, when selecting the top plane of the first leg as a sketch plane for the seat, the sketch plane's origin will be in the center of the leg. GPT-4 will often ignore this or won't be able to account for it when pointed out. Conversely, when starting with the seat and choosing the lower seat plane as a sketch plane for the legs, it can specify the leg sketch coordinates in global coordinates, since the global origin coincides with the seat's origin. The same is true for the backrest. From this test, we can observe that GPT-4 seems to have difficulty translating the coordinate system's origin on the XY plane.

Can you design a chair with 4 legs, a rectangular seat and a rectangular back?

(... omitted by authors: variable setup ...)
seat_sketch = createSketch(rectangle(0, 0, seat_length, seat_width), XY_PLANE)
seat_solid = extrude(seat_sketch, seat_thickness)
leg_sketch_1 = createSketch(circle(-3.5, -3.5, leg_radius), cap(seat_solid, "min_z"))
leg_solid_1 = extrude(leg_sketch_1, leg_length)
(... omitted by authors: rest of design ...)

(... omitted by authors: variable setup ...)
leg_sketch_1 = createSketch(circle(-3.5, -3.5, leg_radius), XY_PLANE)
leg_solid_1 = extrude(leg_sketch_1, leg_length)
// other legs
seat_sketch = createSketch(rectangle(0, 0, seat_length, seat_width), cap(leg_solid_1, "max_z"))
seat_solid = extrude(seat_sketch, seat_thickness)
(... omitted by authors: rest of design ...)

Fig. 9. Different Chair Design Strategies. GPT-4 designs a chair in two different ways. When starting with the seat, it does not have to change coordinate systems when specifying the sketch coordinates of the legs. However, sketching a seat on the top face of one of the legs requires a coordinate change, which GPT-4 does not manage to do.
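To make the DSL's semantics concrete, the following is a minimal Python stand-in for the operators named above (the full prompt lives in the supplemental material; this toy is an assumption-laden sketch that only handles planes whose normal is the +z axis):

XY_PLANE = ((0.0, 0.0, 0.0), (0.0, 0.0, 1.0))   # (origin, unit normal)

def rectangle(cx, cy, length, width):
    return ("rectangle", cx, cy, length, width)

def circle(cx, cy, radius):
    return ("circle", cx, cy, radius)

def createSketch(primitive, plane):
    return {"primitive": primitive, "plane": plane}

def extrude(sketch, distance):
    (ox, oy, oz), n = sketch["plane"]
    top_origin = (ox + n[0] * distance, oy + n[1] * distance,
                  oz + n[2] * distance)
    return {"sketch": sketch,
            # Face planes that a later createSketch call can select via
            # cap(); note that their origins are generally *not* the
            # global origin, which is the coordinate change GPT-4 mishandles.
            "faces": {"max_z": (top_origin, n),
                      "min_z": ((ox, oy, oz), tuple(-c for c in n))}}

def cap(solid, face):
    return solid["faces"][face]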
Next, we want to see if GPT-4 can account for rotating sketch planes. To test this, we ask it to design a car. GPT-4 always suggests a simple car shape, composed of 4 cylindrical wheels and a rectangular car body, see Fig. 10. The difficulty with this shape is that the cylinder sketches of the wheels have to be extruded on the side planes of the car body. There are a couple of different modeling strategies to achieve this, but we observe that GPT-4 has difficulty coming up with these designs without any further indication. Instead, it often extrudes the car body along its height, starting from the ground plane, and then places the wheel circles on the bottom plane of the car, which is also the ground plane. This has the effect that the car wheels will be extruded vertically. Although we were able to correct this design in an iterative prompt-based fashion, we had little success engineering the initial prompt in such a way that we could effectively prevent this behavior. Note that intuitively placing wheels at the bottom of a car body makes sense, and that without any graphical feedback, humans could easily make the same mistake. From this test, we can observe that GPT-4 struggles to rotationally change coordinate systems.

To address this, we changed our design language description to allow GPT-4 to specify sketch primitive coordinates directly in a single global coordinate system. Now, a sketch primitive center takes as input three coordinates, which we project in post-processing directly onto the selected sketch plane. The extrude direction is still defined by the sketch plane's normal vector. This means that GPT-4 no longer has to take coordinate translations into account. We observe that this change in the DSL led to a higher success rate in generated designs, see the second answer in Fig. 10.

Can you design a car?

(... omitted by authors: variable setup ...)
body_sketch = createSketch(rectangle(0, 0, bodyLength, bodyWidth), XY_PLANE)
body_solid = extrude(body_sketch, bodyHeight)
wheel_sketch1 = createSketch(circle(-bodyLength/2, -bodyWidth/2, wheelRadius), cap(body_solid, "min_z"))
wheel_solid1 = extrude(wheel_sketch1, wheelWidth)
(... omitted by authors: other wheels ...)

(... omitted by authors: variable setup ...)
body_sketch = createSketch(rectangle(0, 0, 0, bodyLength, bodyWidth), XY_PLANE)
body_solid = extrude(body_sketch, bodyHeight)
wheel_sketch1 = createSketch(circle(-bodyLength/4, -bodyWidth/2 - wheelWidth/2, 0, wheelRadius), cap(body_solid, "min_y"))
wheel_solid1 = extrude(wheel_sketch1, wheelWidth)
(... omitted by authors: other wheels ...)

Fig. 10. Local vs Global Coordinate Systems. GPT-4 does not manage to rotate the wheel sketch planes. If we allow it to express sketch primitives in global coordinates and select a plane to orient the extrude operation, it places them correctly.
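The post-processing step mentioned above is a standard point-to-plane projection; a minimal sketch of it in Python, with illustrative values, is:

def project_to_plane(p, origin, normal):
    # Orthogonal projection of point p onto the plane (origin, unit normal).
    d = sum((pi - oi) * ni for pi, oi, ni in zip(p, origin, normal))
    return tuple(pi - d * ni for pi, ni in zip(p, normal))

# Example: a wheel center given in global coordinates, projected onto
# the car body's "min_y" side plane (plane y = -1, outward normal -y).
wheel_center = project_to_plane((-2.0, -1.25, 0.0),
                                (0.0, -1.0, 0.0), (0.0, -1.0, 0.0))
# -> (-2.0, -1.0, 0.0); the extrude then sweeps along the plane normal.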
In conclusion, GPT-4 is able to design models in a sketch-based parametric CAD framework. However, it is not successful at changing coordinate systems. In this case, our backup strategy is to use a single global coordinate system. One possible future direction is to let GPT-4 communicate with a geometric solver and create a feedback loop.

4.1.6 URDF. The Universal Robot Description Format (URDF) is a common XML-based language for describing articulated structures in robotics. URDF files specify a robot's structure (including both visual and collision geometry), joint locations, and dynamics information. The URDF format appears well-suited for potential LLM design because it is human-readable and heavily documented online.

Open Chain Robot Arms. Initially, we asked GPT-4 to generate simple open chain robots (commonly called "arms") with a particular number of links. However, when we used the word "arm" to prompt GPT-4 to generate a robot, GPT-4 was unable to determine that the links should connect at the ends. Most often, GPT-4 placed the joints such that each link revolved about its center, and the links were not connected to each other (Figure 11, initial prompt). As shown in the subsequent prompts of Figure 11, to achieve an arm with two connected links, it was necessary to describe both the joint position relative to the link ("the joint origin must be half the link's length past the link origin", rather than "the joint origin should be at the end of the link") as well as the joint axis ("a revolute joint about the x axis"). Given this prompt pattern, GPT-4 was easily able to generate proper N-link robots.

Generate a urdf for a two link robot. The end of the second link should connect to the end of the first link with a revolute joint. The origin of the joint should be at the end of the first link.

<joint name="joint1" type="revolute">
  <axis xyz="0 0 1" />
  <parent link="link1" /><child link="link2" />
  <origin xyz="1 0 0" rpy="0 0 0" />
</joint>
<link name="link1"><visual><geometry><box size="0.1 1 0.1" /></geometry></visual></link>
<link name="link2"><visual><geometry><box size="0.1 1 0.1" /></geometry></visual></link>

urdf link origins are in the center of the link. To make a joint origin at the end of a link, it must be half the link's length past the link origin. Generate a urdf for a 2 link robot. The end of the second link should connect to the end of the first link with a revolute joint. The origin of the joint should be at the end of the first link. Both links should be lined up along the z axis.

(... omitted by authors: URDF description of an appropriately connected two-link robot ...)

urdf link origins are in the center of the link. To make a joint origin at the end of a link, it must be half the link's length past the link origin. Generate a urdf for a three link robot. The end of the second link should connect to the end of the first link with a revolute joint about the x axis; the origin of the first joint should be at the end of the first link. The other end of the second link should connect to the end of the third link via a revolute joint about the x axis with an origin at the end of the third link. All three links should be lined up along the z axis.

(... omitted by authors: URDF description of an appropriately connected three-link robot ...)

Fig. 11. GPT-4's Process for Designing N-Link Robot Arms. GPT-4 is unable to correctly interpret the initial high-level specification, but when provided with more explicit detail, GPT-4 is able to generate correct URDF specifications for N-link robots.

Wheeled Robots. Next, we asked GPT-4 to generate wheeled robots composed of N wheels attached to a central rectangular platform. A proper design of this type must have wheels that (1) are aligned to share an axis of rotation normal to and through the center of their circular faces; (2) have circular faces displaced along said axis of rotation; and (3) contact, but do not intersect, either side of the center platform. The combination of non-intersection and geometry relation constraints proved challenging for GPT-4, which seems to exhibit limited geometric reasoning. Initially, we tried to specify these using language-based constraints (i.e., "the wheels should touch, but not intersect, either side of the platform"). These proved ineffective, as shown in Figure 12 (middle). To overcome these challenges, we crafted prompts with very explicit numeric constraints (i.e., "wheels should be offset on the global y axis by half the width of the platform plus half the height of the wheel cylinder"). This style of prompt successfully generated a viable result, as shown in Figure 12 (right). As in the case of robot arms, we find that GPT-4 is immediately able to generalize a successful two-wheeled design into a four-wheeled robot. We achieve this by asking for a duplicate, shifted version of the existing wheel configuration, as shown in Figure 13. However, we were unable to directly generate a successful four-wheel robot; in general, we found that as the number of constraints in a prompt increases, it becomes increasingly likely that GPT-4 will ignore any individual constraint. Thus, rather than directly requesting a four-wheeled robot in a single prompt, we found greater success by first generating a two-wheeled robot and then prompting GPT-4 to modify the URDF by adding the additional wheels.
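The numeric constraint that finally worked is simple to write down; the following is a minimal sketch of that arithmetic (the function name is invented for illustration; the numeric values come from the URDF in Figure 13):

# Each wheel center is offset along the global y axis by half the
# platform width plus half the wheel cylinder's height (axial length).
def wheel_y_offsets(platform_width, wheel_length):
    off = platform_width / 2 + wheel_length / 2
    return (-off, +off)          # left wheel, right wheel

# With the values from the URDF in Fig. 13: 0.1/2 + 0.01/2 = 0.055.
print(wheel_y_offsets(0.1, 0.01))   # (-0.055, 0.055)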
Fig. 12. Two-Wheeled Robot Attempts by GPT-4. GPT-4's attempts to generate a two-wheeled robot with varying types of prompt constraints on geometry. Left: no constraints. Center: language constraints. Right: numeric constraints.

I want you to generate a urdf for a robot with two wheels. Follow the below steps: 1. generate a center rectangular platform 2. add two cylinders for wheels. The wheels should be oriented so that the global y axis is normal to the center of their circular faces, and should rotate about their cylinder's z axis 3.
The wheels should be offset on the global y axis so that they are on either side of the center rectangular platform. They should be offset by half the width of the platform plus half the height of the wheel cylinder.

(Author remark: The above generates a two-wheel robot. The following prompt extends this to create a four-wheeled robot.)

Now, would you be able to modify this urdf to be a four wheel robot? There should be two more wheels, in an identical configuration to the first two. Shift back two of the wheels on the x axis to be at the back of the platform. Also move the front two wheels to the front of the platform by shifting them forwards on the x axis

(Author remark: inertial and collision parameters were also generated correctly, but are omitted below for brevity.)

<!-- Center rectangular platform link -->
<link name="base_link"><visual><geometry>
  <box size="0.1 0.1 0.02" />
</geometry></visual></link>
<!-- Front left wheel link -->
<link name="front_left_wheel"><visual><geometry>
  <cylinder radius="0.05" length="0.01" />
</geometry></visual></link>
(... omitted by authors: other wheels ...)
<!-- Front left wheel joint -->
<joint name="front_left_wheel_joint" type="continuous">
  <parent link="base_link" /><child link="front_left_wheel" />
  <origin xyz="0.05 0.055 0" rpy="1.570796 0 0" /><axis xyz="0 0 1" />
</joint>
(... omitted by authors: other wheel joints ...)

Fig. 13. URDF of a Four-Wheeled Robot. A correct four-wheeled robot generated by GPT-4 by modifying a previously generated URDF for a two-wheeled robot.

Robot Grippers. To test the effectiveness of our iterative, multi-prompt approach for building robots of increasing complexity, we seeded GPT-4 with a successful two-link open chain URDF, then asked it to modify this design into a collection of multi-finger robot grippers. As shown in Figure 14, we were able to build two-, four-, and five-finger grippers using a sequence of prompts to add features and change proportions. To create a two-finger gripper, we asked GPT-4 to use two of the previously generated two-link open chain robots as fingers, separated by a distance equal to half the height of the finger, and connected by a rectangular platform on the base. The four-finger gripper was similarly derived from the two-link arm by specifying that the hand should consist of four two-link robots right next to each other on a rectangular platform. To specify a five-finger hand, we requested a rectangular link that hinges as a base for the thumb, then prompted GPT-4 to add another finger on that link and to adjust the hand proportions.

Fig. 14. URDF Grippers Generated by GPT-4. Left: two-fingered gripper. Center: four-fingered gripper. Right: five-finger hand.

4.1.7 Graph-based DSL. While designing an entire robot end-to-end using LLMs may not be feasible, we find that GPT-4 has the ability to reason about the spatial layout of robot components. These spatial layouts are naturally represented as graphs where the nodes are components and the edges are connections between them. Unlike URDF, this representation is more general and is applicable in domains outside of simulation. To generate robot design graphs using GPT-4, we first need a text-based graph representation. Our first approach involved asking GPT-4 to output the popular GraphViz format.
While convenient, this format makes it difficult for GPT-4 to provide metadata for each part (such as motor torque or size) in a format usable by downstream applications. Instead, we take advantage of GPT-4's ability to generate Python code that conforms to a provided domain-specific language (DSL). The full DSL is detailed in Appendix A.4. When prompted with a small DSL embedded in Python, GPT-4 is able to write code that selects and places robot components at a high level of abstraction. By supplying a function that translates components in three-dimensional space, we can extract GPT-4's concept of each component's position relative to the others. In this example, we ask GPT-4 to generate a humanoid robot using the provided functions. GPT-4 makes appropriate calls to add_link to create nodes in the design graph, add_joint to create edges between them, and translate to establish their relative positions. We manually implement the functions described in the prompt in order to visualize the resulting robot topology. The arms are positioned beside the torso, the legs are positioned below, and the head rests on top, as expected for a humanoid robot. We saw similar success when asking GPT-4 to construct a snake robot, car robot, and scorpion robot. When requesting a robot dog, however, GPT-4 only adds two legs initially. Specifying a "robot dog with four legs" was necessary to obtain the expected behavior. We also encountered difficulties when attempting to obtain a more detailed design for the robot dog. Asking for a "robot dog with four legs, two links per leg" produced a graph with two nodes per leg, but GPT-4 did not position them relative to each other.

(... omitted by authors: problem setup with available Python functions ...)
Write a function to construct a humanoid robot.

To construct a humanoid robot, we can define the different parts or links of the robot, such as the head, torso, arms, and legs. We'll then connect these links together using joints. Here's an example Python function to construct a basic humanoid robot:

def construct_humanoid_robot():
    # Create links for the head, torso, arms, and legs
    head_link = add_link("head")
    (... omitted by authors: adding other parts ...)
    # Connect the head to the torso
    add_joint(torso_link, head_link)
    (... omitted by authors: connecting other parts ...)
    # Translate the links to their respective positions
    translate(head_link, "up")
    translate(left_arm_link, "left")
    (... omitted by authors: positioning other parts ...)

(... omitted by authors: summary of code, disclaimer ...)

Fig. 15. Graph of Humanoid Robot. Graph generated by GPT-4 describing the high-level components of a humanoid robot as nodes and the connections between them as edges.
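A minimal stand-in implementation sufficient to run code of this shape might look as follows (a sketch only; the actual DSL implementation is detailed in Appendix A.4, and the offset vocabulary here is assumed):

links, joints, positions = [], [], {}
OFFSETS = {"up": (0, 0, 1), "down": (0, 0, -1),
           "left": (-1, 0, 0), "right": (1, 0, 0)}

def add_link(name):
    links.append(name)               # node in the design graph
    positions[name] = (0, 0, 0)
    return name

def add_joint(parent, child):
    joints.append((parent, child))   # edge in the design graph

def translate(link, direction):
    dx, dy, dz = OFFSETS[direction]
    x, y, z = positions[link]
    positions[link] = (x + dx, y + dy, z + dz)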
4.1.8 Summary Discussion. In light of these experiments, we conclude that GPT-4 is capable of generating designs based on high-level text input, even across a wide variety of representations and problem domains. We note that several of GPT-4's capabilities and limitations remain consistent independent of the representation. For example, in all cases, GPT-4 is able to generate sensible, well-structured code with semantically meaningful variables and comments. Moreover, independent of the representation or the problem domain, GPT-4 consistently shows superior performance with respect to the high-level, discrete elements of a problem (e.g., identifying the correct type and quantity of each primitive/operation) as opposed to the lower-level continuous parameter assignments (e.g., correctly positioning the primitives relative to one another). A more detailed discussion of capabilities, limitations, and opportunities will follow in Section 4.4. For now, we rely on the similarities between the various representations to justify a reduced scope for our future experiments. In particular, moving forward, we study each question with respect to only a subset of the design representations and domains introduced above.

4.2 Interpreting and Respecting User Control (Q2)
The above examples demonstrate GPT-4's ability to generate a design based on very high-level semantic input. However, we also wanted to test its ability to generate designs that adhere to a specific user-given intent. This section also tests whether GPT-4 is able to overcome its own potential biases induced by the training data, in order to generate something that truly adheres to a user's specified constraints – whether or not those constraints match the "common" form of a given design target. In particular, we choose to study whether GPT-4 is able to (1) understand and respect semantically meaningful spatial constraints, and (2) incorporate specific pre-fabricated elements into a design.

4.2.1 Spatial Constraints. Through the general experiments above, GPT-4 has already shown some capacity to respect high-level spatial constraints, such as a design element's absolute size or its position relative to another element of the design. GPT-4's compliance with such requests was frequently flawed at the outset, but the results were generally workable after some amount of interactive feedback. This section aims to explore the types of constraints GPT-4 is able to natively understand, and how we might best interact with GPT-4 in order to improve the chance of successful compliance with such constraints.

Fig. 16. Building a cabinet with a door. GPT-4's attempt to build a cabinet similar to that from Section 4.1.4, with the addition of a simple door (orange) that has a handle (dark grey) on the right-hand side. GPT-4 quickly fixes the position of the cabinet's primary pieces (e.g., the yellow and cyan side panels), but it struggles to correct the door. GPT-4 must be iteratively prompted to fix the door orientation, the relative door placement, and the handle's placement and protrusion into the door. GPT-4 is able to arrive at a suitable design after several iterations of user feedback.

As an initial experiment, we explored whether GPT-4 is able to construct a version of the previous cabinet design that includes a door and a handle (see Figure 19). We started from a fresh chat, and provided GPT-4 with a prompt similar to the one described in Section 4.1.4, asking for a cabinet to be built from scratch. However, this time, we also requested a door at the front of the cabinet, with a handle on the right-hand side of its outward-facing face. As shown in Figure 16, GPT-4 initially struggled to position several of the cabinet elements – particularly the side panels and the door.
Although GPT-4 corrected the position of the side boards immediately, it continued to have trouble placing the door, which was oriented incorrectly relative to the rest of the design. When reminded that the door should be oriented vertically, GPT-4 was able to comply with the request, but the corrected position was still not fully suitable, as the door coincided with the cabinet's side panel. After another reminder that the door should reside at the front of the cabinet, with the handle on the right so it could be attached with hinges on the left, GPT-4 was able to place the door correctly. However, the handle remained ill-positioned: it was located on the left-hand side and was protruding into the door panel. After 2 additional prompts, GPT-4 was able to correct the handle's position to the right-hand side. To correct the protrusion issues, GPT-4 needed 3 more prompts. During these iterations, GPT-4 moved the handle fully to the inside of the door; it needed an explicit reminder that the handle should be placed on the outside of the door.

With a fresh GPT-4 session, we also tried providing the previous OpenJSCAD specification of the cabinet as part of our input prompt, then asking GPT-4 to modify the existing design such that it contained a door and a handle, as before. Despite the different starting points, GPT-4 followed a similar trajectory, as shown in Figure 17: the door was initially aligned incorrectly, as it coincided with one of the side panels; after 1 prompt, GPT-4 was able to correct the door placement. However, despite GPT-4's explicit assertion that the handle is also placed on the right side of the door's exterior face, the handle remained on the left. Finally, after another prompt, GPT-4 was able to correct the handle position such that it was on the right rather than the left.

The way in which GPT-4 dealt with the under-specified handle request also proved interesting. In Figure 16, GPT-4 opted for an additional cuboid that would be unioned into the final design. By contrast, in Figure 17, GPT-4 opted to create the handle by subtracting a small cuboid from the door panel. In still other examples, GPT-4 refused to add the handle, and instead offered the following disclaimer: Note that the handle for the door is not included in this script, as its size, shape, and position would depend on additional details not provided. This would likely require additional modules, such as cylinder from @jscad/primitives, and might be added as an eighth component in the main function.

These interactions provide a promising basis for interactive user control of the design, but the process is somewhat tedious at the moment, as GPT-4 requires very explicit instructions about the design or correction intent. The addition of highly detailed user constraints also seems to confuse GPT-4 to an extent, as it seems to "forget" the larger context of the design in the process, so it must be frequently reminded.
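Many of these iterations amount to checking simple spatial predicates that GPT-4 cannot reliably verify itself (a point we return to in Section 4.4, L.2). Such checks are easy to script externally; the following is a minimal sketch for axis-aligned parts given as (center, size) triples, the convention used by the CSG code above (the example dimensions are illustrative):

def overlaps(center_a, size_a, center_b, size_b, eps=1e-9):
    # Two axis-aligned boxes overlap iff their extents overlap on every axis.
    for c_a, s_a, c_b, s_b in zip(center_a, size_a, center_b, size_b):
        if abs(c_a - c_b) >= (s_a + s_b) / 2 - eps:
            return False   # separated (or merely touching) on this axis
    return True

# A door coinciding with a side panel is flagged as a protrusion...
print(overlaps((0, 0, 0), (1, 18, 30), (0, 0, 0), (1, 18, 30)))        # True
# ...while a door flush against the cabinet front is not.
print(overlaps((0, 9.5, 0), (20, 1, 30), (0, -9.5, 0), (20, 1, 30)))   # False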
Fig. 17. Adding a door to an existing cabinet. We provide GPT-4 with the initial cabinet design from Section 4.1.4 (semi-transparent blue), then ask it to add a door (orange) with a handle on the right-hand side. Despite beginning from a largely complete model, GPT-4 still has difficulty placing the door and handle correctly.

Fig. 18. GPT-4's Attempts to Create a Proxy for an L-bracket. Left: image of the desired pre-fabricated part, to which GPT-4 was provided a link. Right, top: GPT-4's attempt to design a proxy based on the knowledge it gleaned from the provided product webpage, with iterative high-level user feedback. Although GPT-4 identified the primary structures (two cuboids for the L and a cylinder for the peg), it was unable to arrive at a proper design in this manner. Right, bottom: GPT-4's process for designing a proxy for the part from scratch, with explicit user guidance about the structure and its dimensions.

4.2.2 Incorporating pre-fabricated elements. It is also common to design an object around specific pre-manufactured elements, such as hinges, brackets, or motors. We explore the possibility of using GPT-4 to source the parts in Section 6.3 – at that time, we explore whether GPT-4 can identify the required part categories, provide options, and/or select a set of options that are compatible with one another and the intended overall design. For now, we assume that the user has a specific (set of) part(s) in mind that they would like to incorporate into their design. We then investigate whether, given these components, GPT-4 is able to (1) build a reasonable proxy of this design, then (2) effectively use it as a module within a larger assembly.

Cabinet with Standard Hardware. To make the cabinet design more stable, a designer may wish to include extra support brackets. Many pre-fabricated variations of these brackets exist, and they are inexpensive and readily available. Given this, it does not make sense to design or manufacture these parts via GPT-4. Rather, we would like to incorporate instances of a pre-fabricated version. To do this, GPT-4 must first build a proxy of the part, place the proxies throughout the design appropriately, and adjust the remaining elements of the design to accommodate these components.

For our first experiment, we chose to incorporate the Prime-Line 1/4 in. Nickel-Plated Shelf Support Pegs from Home Depot into our design. We provided GPT-4 with a URL to this part's listing on the Home Depot website, which contained a text description of the item and the schematic diagram pictured in Figure 18 (left). We then asked GPT-4 to build a simple geometric proxy that we could incorporate into our design as a placeholder. As shown in Figure 18 (right, top), GPT-4 was able to infer and generate the appropriate primitives (one cylinder for the peg and two cuboids for the L bracket). However, it was not able to correctly scale, orient, or position the elements. In an effort to test GPT-4's understanding of the structure, we asked it to describe the structure in its own words. Although it gave a reasonable description of the bracket, there was little improvement in the result when it was asked to improve the script accordingly.
Thus, even with several iterations of user feedback, GPT-4 was unable to construct this shape from high-level third-party (URL) or user input. Ultimately, we had to provide GPT-4 with an explicit description of the structure that we wanted. Moreover, we found that even with an explicit description, GPT-4 was unable to generate the correct shape when provided with all directions at once. Instead, we had to create the shape in an iterative fashion, beginning with the L bracket and then adding in the peg, as shown in Figure 18 (right, bottom). Eventually, it was able to generate the structure and consolidate the instructions into a high-level module called createBracketWithPeg, as desired.

We then provided the module createBracketWithPeg as an input to GPT-4, and asked it to incorporate these structures into the design, as detailed in Figure 19. In particular, we asked for four brackets under each shelf, with the pegs protruding into the cabinet's side walls, the back face of the bracket's vertical leg in contact with (but not protruding into) the side wall, and the top face of the bracket's horizontal leg in contact with (but not protruding into) the bottom face of the shelf. We initially tried to complete this experiment in a single continuous chat that (1) designed the cabinet, (2) designed the L-bracket, and then (3) incorporated the brackets into the cabinet. However, we found that after the extended discussion regarding the L-bracket design, GPT-4 seemed to have completely forgotten its cabinet specification. Despite multiple prompts, it was unable to recover the previous design. Instead, we directly provided GPT-4 with the L-bracket module and its prior cabinet design, and then asked for a modification. This approach was far more successful. Overall, we found that GPT-4 was able to instantiate the correct number of brackets, but it struggled to rotate and position them appropriately. After several user prompts, GPT-4 was able to successfully place the brackets in their locations. Finally, we asked GPT-4 to adjust the shelf in order to (1) not protrude into the brackets, and (2) incorporate some additional allowance so the shelf could easily fit between the supporting brackets in a physical assembly. GPT-4 was able to complete these requests without issue.

Now, we are going to try and integrate these bracket supports into our cabinet design from before. You can directly use the createBracketWithPeg() function, without regenerating it each time. Now, please go back to the most recent cabinet design, and add 4 of these brackets underneath the middle shelf. There should be two brackets supporting the left side of the shelf, and two brackets supporting the right side of the shelf. Each bracket should be positioned such that the top of the horizontal face is in contact with the bottom of the shelf; the back of the vertical segment is in contact with the innermost side of the cabinet's side wall; and the peg protrudes into the side wall of the cabinet.

(... omitted by authors: 33 total messages, used to iteratively position the brackets; process summarized visually below ...)

Excellent! The bracket positions are suitable now. Now, we will move onto the shelf: at the moment, the shelf is protruding into the vertical part of the brackets. We need to reduce the width of the shelf in the left-right direction such that it leaves space for the brackets. Can you adjust the shelf size accordingly?

Fig. 19. Process for Integrating L-brackets (red) into an Existing Cabinet Design (semi-transparent blue) Using GPT-4. It takes 34 messages to position the brackets appropriately (17 each of prompt/response), but once this is done, GPT-4 is able to efficiently generate a modified shelf (pink) to accommodate the placed brackets (6 messages; 3 each of prompt/response).

Fig. 20. A Quadcopter Designed with the Aid of GPT-4. The motors are colored in red. The propellers are in yellow. The battery is in dark gray. The frame is in blue. The dark yellow box is the controller and the green box is the receiver.
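As a rough indication of what the consolidated module encodes, a bracket-with-peg proxy reduces to three primitives. The sketch below is an illustrative guess at that geometry: the function name mirrors createBracketWithPeg, but the conventions and every dimension here are invented, and the actual module was produced interactively in OpenJSCAD.

def bracket_with_peg(leg=1.0, width=0.75, thickness=0.1,
                     peg_radius=0.125, peg_length=0.3):
    # Parts as (kind, center, size) records, matching Figure 18:
    # two cuboids form the L; one cylinder forms the peg.
    vertical = ("cuboid", (0, -thickness / 2, leg / 2),
                (width, thickness, leg))          # leg against the side wall
    horizontal = ("cuboid", (0, leg / 2, -thickness / 2),
                  (width, leg, thickness))        # leg under the shelf
    peg = ("cylinder", (0, -thickness - peg_length / 2, leg / 2),
           (peg_radius, peg_length))              # protrudes into the wall
    return [vertical, horizontal, peg]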
Overall, although GPT-4 initially struggled to build a proxy of the pre-fabricated part we had in mind, GPT-4 seemed quite capable of incorporating the completed proxy into a given design, as desired.

Quadcopter. Designing a quadcopter involves integrating pre-built elements like the motor, propeller, and battery. Detailed sourcing of these parts will be addressed in a later section (Section 6.3). Once these components are sourced, the frame must be designed to accommodate their dimensions. We'll explore how GPT-4 can assist with this task. However, enabling GPT-4 to accurately represent these parts isn't straightforward. To simplify the task, parts are represented as either a box of dimensions 𝑤 × ℎ × 𝑑 or a cylinder with radius 𝑟 and height ℎ. GPT-4 can handle these representations well, as demonstrated in Section 4.1.2. Rather than having a single function which creates a primitive and translates it as in Section 4.1.2, we introduce three functions for ease of design: createBox(w, h, d), createCylinder(r, h), and place(item, x, y, z, a). The first two functions generate a box or a cylinder at the origin (0, 0, 0), while the third rotates and moves the item to the desired coordinates. Subsequently, we task GPT-4 with creating a design that integrates these parts using only the above functions. The primary element GPT-4 must design is the frame, which should hold the selected components. Initially, GPT-4 produced a correct textual design, but struggled with the geometric representation, similar to Section 4.1.2. It understood the quadcopter structure, but had issues with part positioning and orientation (Figure 20(a)). Problems included incorrect frame orientation and part intersections. By guiding GPT-4 in correcting these issues, we achieved a near-correct quadcopter design (Figure 20(b)). The initial frame design wasn't practical because it was directly attached to the motor cylinder and insufficient to hold components like the battery, controller, and signal receiver. To address this, we asked GPT-4 to incrementally implement specific solutions, such as adding a cylinder base under each motor and a box body to reinforce the frame bars and house the remaining parts. After minor adjustments, we arrived at a valid design, which will undergo further testing in a simulator or real-world conditions (Figure 20(c)). Throughout the design process, GPT-4 demonstrated proficiency in textual design analysis but struggled with mathematical and physical concepts such as collision and structural integrity. Thus, human guidance remains crucial in these areas.

4.3 Incorporating Abstractions such as Modular/Hierarchical Designs (Q3)
As we have seen from previous examples, GPT-4 is inclined to use some abstractions, like variables, by default. It is also clear that GPT-4 is well suited to the use of modular or hierarchical design, as in the case of the pre-fabricated L-brackets, of which it was able to instantiate several copies and distribute them throughout a design. However, there are often instances where a user might want to impose their own specific modules – for example, a certain hierarchical grouping may facilitate easier debugging or cleaner code. To test GPT-4's abilities in this area, we revisit the cabinet example, and try to modify it such that it contains multiple shelves.
Because we have already incorporated pre-fabricated brackets, this modification is non-trivial, as GPT-4 must instantiate and position the appropriate number of shelves and all associated support brackets. We began by directly asking GPT-4 to make this modification on top of the existing code, by generating two evenly spaced shelves within the cabinet instead of one. GPT-4 correctly identifies the elements which must be duplicated, and it instantiates the correct number of them. However, it is unable to correctly adjust the position of each module; after the initial request, neither the shelves nor the brackets were in reasonable locations. It took 4 additional user prompts to correct the relative positions of these components. After this correction, GPT-4 did seem able to generalize its logic directly to generate cabinets with a varying number of shelves. However, the code itself is fairly convoluted.

To avoid these issues, it may be more natural to consider a shelf with its appropriate supporting brackets as a single module. This way, the entire "subassembly" could be instantiated and positioned as a unit on future calls. We asked GPT-4 to implement this plan by requesting the creation of a module named supportedShelves(), which instantiates and appropriately positions a shelf and its associated support brackets within the design. Then, we asked GPT-4 to refactor the original script such that it used the new module to generate a cabinet with two evenly spaced shelves. The initial response had a minor compilation error, a shelf tolerance issue, and a bracket alignment issue, as before, but each of these issues was immediately corrected after a single user prompt.

Overall, the approaches resulting from both experiments seem equally effective and flexible once they have been fine-tuned. Thus, we conclude that GPT-4 is able to effectively create and use modules, whether they are explicit (e.g., in the form of a function, as in the second experiment) or implicit (e.g., in the form of a for-loop, as in the first experiment). However, it seems as if the explicit module made it slightly easier for GPT-4 to reason about a challenging alignment problem. Moreover, it is useful to know that users can effectively request this kind of hierarchical refactoring, as most human programmers/designers would generally find it easier to reason over a function in this scenario.

4.4 Discussion
In this section, we elaborate on the key capabilities (C), limitations (L), and dualisms (D) previously outlined, particularly as they relate to the domain of text-to-design.

C.1 Extensive Knowledge Base in Design and Manufacturing: Within the text-to-design space, GPT-4 exhibited proficiency in supporting high-level structure and discrete composition. For instance, GPT-4 consistently generated the correct primitives (type and quantity) for a given task, regardless of the specific design language it was using. GPT-4 also demonstrated a capacity for interpreting and auto-completing under-specified prompts, as in the case of the CSG table example, where GPT-4 inferred and provided reasonable values for a set of missing parameters (see Section 4.1.2). Finally, GPT-4 generated readable, explainable, and maintainable code that contained descriptive variable names and comments, along with appropriate modularity and other high-level structural elements.
C.2 Iteration Support: Even when GPT-4 did not immediately arrive at a suitable design solution, it often succeeded in rectifying errors after a reasonably small number of user interactions. For example, it was able to successfully adjust the placement of the cabinet handle after a handful of additional prompts. The ability to engage in iterative design is also very helpful when building up complex structures such as the wheeled robot from Section 4.1.6 or the L-bracket proxy discussed in Section 4.2.2, because users can start with a simple prompt, then iteratively increase the complexity to arrive at a suitable result.

C.3 Modularity Support: GPT-4 effectively incorporates modules and hierarchical structures, using natural language as a powerful tool for conceptualization and orientation.

L.1 Reasoning Challenges: Spatial reasoning posed a significant challenge for GPT-4. Well-crafted domain-specific languages (DSLs) may be able to mitigate this issue. We noted specific difficulties with constructive solid geometry (CSG) due to the computational requirements for object placement. Sketch-and-extrude languages that utilize reference points can minimize this challenge to an extent, as they offload the computation to reference resolution. This approach is effective for simpler designs but falters when managing complex sequences of transformations. As discussed in the sketch-based car example from Section 4.1.5, we found that DSLs that balance the benefits of reference-based language with global positioning information may be more effective. GPT-4's lack of spatial awareness also created difficulties with constraint handling, such as when GPT-4 was asked to ensure that elements were non-overlapping. We found that iterative refinements and careful prompting often provided a workaround for these issues. For example, GPT-4 typically failed to respect "non-overlapping" constraints, but it generally responded well to the instruction that some element should be "in contact with (but not protruding into)" another element.

L.2 Correctness and Verification: GPT-4 is not able to reliably verify its own output, and it frequently makes contradictory claims. For example, when asked to place a handle on the right side of the cabinet structure, GPT-4 frequently placed the handle on the left-hand side of the cabinet, then immediately declared its design a success, because the handle was on the right, as requested. This seems to suggest that external verification tools may be helpful, particularly in cases where the contradictions are less obvious.

L.3 Scalability: GPT-4's success seems to decline as the number of simultaneous requests increases. For example, it is best to issue 1-2 constraints or correct 1-2 issues at a time, rather than trying to issue several constraints or correct several issues at once. Similarly, GPT-4 encountered challenges when interpreting high-level information to build proxies for more complex designs all at once; instead, the models must be built iteratively, with gradually increasing complexity. This iterative modeling was most effective when the user provided explicit instructions about both the aspects that should change and the aspects that should remain unaltered (either because they are already correct, or because they will be addressed later). Despite GPT-4's initial difficulty creating complex models, GPT-4 is able to effectively use and combine existing modules to create more intricate models.
L.4 Iterative Editing: As discussed in Section 4.2.2, GPT-4 seems to exhibit limited memory and attention span. In particular, it often "forgets" things from previous messages. We address this by occasionally reminding GPT-4 of its previous input/output, either by asking it to summarize a previous interaction/finding, or by explicitly including a prior result as a starting point in our prompt.

D.2 Unprompted Responses: GPT-4 is frequently able to recognize and address under-specified problem statements. For example, in the CSG table specification (Section 4.1.2), GPT-4 correctly inferred the need to assign a tabletop thickness value. Similarly, when augmenting the cabinet with a door and a handle in Section 4.2.1, GPT-4 responded with several distinct approaches for handle design. This can be powerful, as it may alert the user to parameters or variations which may otherwise have gone overlooked; then, users have an explicit opportunity to consider and refine the specification accordingly. Moreover, it allows users to undertake a design process and begin receiving feedback without first needing to craft a perfect specification or prompt. However, if GPT-4 confidently hallucinates a particular solution to an under-specified aspect of a design problem – rather than explicitly prompting the user to consider a range of options – it may limit and/or bias their exploration in unexpected ways.

5 TEXT-TO-DESIGN-SPACE
A design is a sequence of construction operations which take input values and which modify the current state of the design. These input values can be represented directly as numbers. For example, in Fig. 21 (left), the design of a 3D gear is constructed by directly using 3D coordinates and dimensions. While this representation has the merit of being direct, without any references to previous code, it does not expose the degrees of freedom of a design.

difference() {
  union() {
    translate([0, 0, -10/2]) cylinder(10, 50, 50);
    for (i = [0:20-1])
      rotate([0, 0, i*360/20])
        translate([50, 0, 0])
          cube([5*2, 5, 10], true);
  }
  cube([10, 10, 10*2], true);
}

difference() {
  union() {
    translate([0, 0, -gear_thickness/2])
      cylinder(gear_thickness, gear_rad, gear_rad);
    for (i = [0:tooth_count-1])
      rotate([0, 0, i*360/tooth_count])
        translate([gear_rad, 0, 0])
          cube([tooth_prot*2, tooth_width, gear_thickness], true);
  }
  cube([center_hole_width, center_hole_width, gear_thickness*2], true);
}

Fig. 21. Gear Design Space. The same gear (Top left) can be constructed with different design representations. Here, we have an OpenSCAD design (Bottom left) and a parametric OpenSCAD design (Bottom right) where the degrees of freedom (Top right) have been exposed via design variables.

To modify the thickness of the gear, we have to modify several input values at once to obtain the desired 3D model. The introduction of design parameters in Fig. 21 (right) makes this change easier: only a single variable, namely gear_thickness, needs to be modified. We call this representation a parametric design. Note that design parameters can be continuous or discrete, e.g., gear_thickness or tooth_count, respectively. To explore different design variations, either manually or automatically, having a parametric design is not enough. We still need to know which specific values we can assign to the design parameters. For this, we introduce lower and upper bounds for each design parameter. Each design parameter can take any value within its specific bounds.
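As a concrete illustration of such bounds, a bounded parametric design can be sampled as follows (a minimal sketch; the bound values below are illustrative assumptions, not values produced by GPT-4):

import random

# The gear's design space: parameter -> (lower bound, upper bound).
# Integer bounds mark discrete parameters; all values are illustrative.
bounds = {
    "gear_rad":          (20.0, 80.0),
    "gear_thickness":    (5.0, 20.0),
    "tooth_count":       (8, 40),
    "tooth_width":       (2.0, 10.0),
    "tooth_prot":        (2.0, 10.0),
    "center_hole_width": (4.0, 20.0),
}

def sample_design(bounds):
    design = {}
    for name, (lo, hi) in bounds.items():
        if isinstance(lo, int):
            design[name] = random.randint(lo, hi)   # discrete parameter
        else:
            design[name] = random.uniform(lo, hi)   # continuous parameter
    return design

print(sample_design(bounds))   # one random design variation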
Together, a parametric design and parameter bounds define a design space, which is the set of all possible design variations. Design spaces are an important tool to understand what a design can accommodate. This is important for both the manual and automatic optimization of designs. With this in mind, we want to investigate the following questions:

• Q1 Can GPT-4 create a design space from text?
• Q2 Can GPT-4 create a design space from an existing design?
• Q3 Can GPT-4 create a design space from multiple designs?
• Q4 Can GPT-4 explore a given design space?

For each of these questions, we want to find out what is currently possible and what seems to be beyond its capabilities.

Can you design a chair with 4 legs, a rectangular seat and a rectangular back?

(... omitted by authors: variable setup and legs construction ...)
# Create the seat
seat_sketch = createSketch(rectangle(0, 0, 16, 18), XY_PLANE)
seat_solid = extrude(seat_sketch, seat_height)
# Create the back
back_sketch = createSketch(rectangle(0, 8.5, 16, 1), cap(seat_solid, "max_z"))
back_solid = extrude(back_sketch, back_height)

Can you design a chair with 4 legs, a rectangular seat and a rectangular back with explicitly using exposed high-level parameters?

(... omitted by authors: variable setup and legs construction ...)
# Create the seat
seat_sketch = createSketch(rectangle(0, 0, seat_length, seat_width), XY_PLANE)
seat_solid = extrude(seat_sketch, seat_height)
# Create the back
back_sketch = createSketch(rectangle(0, seat_width/2 + back_thickness/2, seat_length, back_thickness), cap(seat_solid, "max_z"))
back_solid = extrude(back_sketch, back_height)

Fig. 22. Advantage of Parametric Modeling. Without using explicit variables, GPT-4 does the computation for sketch coordinates by itself and is more likely to confuse variables and to make mistakes.

5.1 Generating a Design Space from Text (Q1)

In Sec. 4, we showed that GPT-4 is capable of generating designs. The next step towards generating a design space is to test if it can also generate parametric designs. To enforce the generation of parametric designs in our prompts, we ask it to explicitly use high-level design parameters and to use as few variables as possible. It should be noted that GPT-4 often introduces variables to improve readability by itself, without explicitly being asked to do so. However, we found that including this instruction in our prompts always resulted in parametric designs. We also notice that when asking for a parametric design rather than a plain design of the same object, there are generally fewer mistakes in the reuse of certain dimensions. For example, in Fig. 22, GPT-4 at first positions the backrest on top of the seat using the correct numerical values, but not for the correct dimensions. When asked for a parametric design, in contrast, the width and length suffixes in the parameter names seem to be more consistently associated with the corresponding 3D axis.

To generate a design space, we need parameter bounds. When asked for lower and upper bounds for parameters, GPT-4 proposes bounds that are based on typical proportions of the designed object. This implies that the scale is often arbitrary but that bounds are semantically reasonable relative to each other. For example, when asked to design a parametric car with exposed parameter bounds, GPT-4 returns lower and upper bounds and arguments for these bounds in terms of inequalities, see Fig. 23. According to GPT-4, the width of the car body should be less than the length but larger than the height, and the radius of the cylindrical wheels should be less than the height of the car's body so the wheels don't exceed the height of the body. These constraints between design parameters can also be queried in the form of actual inequalities, which is useful for downstream optimization when combined with parameter bounds.

Fig. 23. Car Parameter Bounds. GPT-4 generates semantically based parameter bounds and constraints based on this simplified car design.
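Written out as predicates, the constraints GPT-4 stated for Fig. 23 take the following form. This is an illustrative sketch with assumed parameter names, of the kind a downstream optimizer could use to reject invalid parameter settings.

def car_constraints_ok(length, width, height, wheel_radius):
    """Inequality constraints between design parameters, as stated by GPT-4."""
    return (height < width < length      # body: wider than tall, longer than wide
            and wheel_radius < height)   # wheels must not exceed the body height

print(car_constraints_ok(length=4.5, width=1.9, height=1.5, wheel_radius=0.35))  # True
print(car_constraints_ok(length=4.5, width=1.9, height=1.5, wheel_radius=1.8))   # False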
However, these bounds are based on semantic knowledge about the object and not on the geometric design sequence. For example, for a pen holder, the angle of a rotated cylinder will get a lower bound of 45° to prevent any pen from falling out, but not to prevent the 3D object from creating unwanted intersections with other parts. Constraints in real-world design sequences often need to also consider purely geometric aspects of a design.

5.2 Generating a Design Space from an Existing Design (Q2)

Given the current limitations of creating designs and design spaces from text prompts alone, it is interesting to understand how GPT-4 can create design spaces from existing designs made by human designers. Just like regular code, input designs for GPT-4 can vary in the quality of semantic annotations and comments about what is being constructed. For all of these inputs, we are interested in how easy it is for GPT-4 to create a design space, i.e., a parametric design with parameter bounds. We investigate how helpful semantic context is to parametrize designs. For the prompts of the following experiments, we have found that we more consistently get a good parametrization when we specify that GPT-4 should expose high-level design parameters while using as few variables as possible, and that it should keep the same program structure and the resulting input values to modeling functions. These constraints prevent it from slightly modifying operator input values to extract fewer design parameters.

First, when given a design with no semantic context, we observe that GPT-4 exposes design parameters based on equivalence between numerical values and based on which design operators these values were used in. For example, in Fig. 24, it introduces a variable cube_size which replaces the value 19 that was used for both the chair's width and length. For the mug in Fig. 25, we can observe that the exposed variables also stay close to their original usage for a given geometric operator.

Second, we repeat the previous experiment with additional semantic context. Providing GPT-4 with the name of the object that is being modeled proves useful for generating a parametric design. We can see that now, design parameters get exposed which are semantically more useful for modifying the design. For example, the cylinder radii in Fig. 25 get replaced by a parameter mug_wall_thickness which controls the thickness of the mug by considering both radii jointly. Also, some ambiguity caused by numerical equivalence can be resolved to produce more useful parametrizations.
In Fig. 24, the cube_size from the previous parametrization without any semantic context gets disentangled into a length and a width parameter, allowing more control over the shape. This might prove especially useful in this case, since all the slats are associated with the chair's width and not its length.

Can you expose high-level design parameters for this openscad design? Use as few variables as possible

union() {
    difference() {
        cube([19, 19, 15 + 2]);
        (... omitted by authors: intermediate construction operations ...)
        for (i = [0:5-1])
            (... omitted by authors: rest design ...)

// High-level design parameters
cube_size = 19;
cube_height = 15;
extra_height = 2;
num_layers = 5;
// Other parameters ...

union() {
    difference() {
        cube([cube_size, cube_size, cube_height + extra_height]);
        (... omitted by authors: intermediate construction operations ...)
        for (i = [0:num_layers-1])
            (... omitted by authors: rest design ...)

Can you expose high-level design parameters for this openscad design of a chair with slats? Use as few variables as possible

length = 19;     // Length of the chair
width = 19;      // Width of the chair
height = 15;     // Height of the chair base
slat_count = 5;  // Number of slats

union() {
    difference() {
        cube([length, width, height + thickness]);
        (... omitted by authors: intermediate construction operations ...)
        for (i = [0:slat_count-1])
            (... omitted by authors: rest design ...)

Fig. 24. Semantic Cues for Chair Design Parameters. When asked to parametrize a chair design, GPT-4 gives different parametrizations with or without a context cue.

Once parametrized, we can complete the design space by asking for parameter bounds, see Fig. 26. Again, notice how these bounds are somewhat arbitrary and not based on the 3D design sequence. While these results are encouraging, GPT-4 is easily confused by the final effect of a series of geometric transformations. An example of this is the generated parameter handle_thickness in Fig. 25, which actually modifies the 𝑦 position of the handle. Once again, it is limited by cases where geometric computation prevails over semantic reasoning.

5.3 Creating a Design Space from Multiple Designs (Q3)

Design spaces based on a single design are useful to explore the family of possible shapes generated by varying the design parameters.

Can you expose high-level design parameters for this openscad design? Use as few variables as possible

union() {
    difference() {
        cylinder(30, 13.5, 13.5);
        translate([0, 0, 1.5]) cylinder(30, 12, 12);
    }
    (... omitted by authors: rest design ...)

To expose high-level design parameters, we can identify the key variables that control the size and position of these shapes. Here's a version of your code with these parameters exposed:

// High-level design parameters
heightOfMainCylinder = 30;
outerRadiusOfMainCylinder = 13.5;
innerRadiusOfMainCylinder = 12;
(... omitted by authors: other parameters ...)

union() {
    difference() {
        cylinder(heightOfMainCylinder, outerRadiusOfMainCylinder, outerRadiusOfMainCylinder);
        translate([0, 0, 1.5]) cylinder(heightOfMainCylinder, innerRadiusOfMainCylinder, innerRadiusOfMainCylinder);
    }
    (... omitted by authors: rest design ...)

Can you expose high-level design parameters for this openscad design of a mug? Use as few variables as possible

Sure, let's define some high-level design parameters for this mug model.

module mug(mug_height = 30, mug_outer_diameter = 13.5, mug_wall_thickness = 1.5,
           handle_thickness = 1.5, handle_length = 10, handle_height = 4) {
    union() {
        difference() {
            cylinder(mug_height, mug_outer_diameter, mug_outer_diameter);
            translate([0, 0, mug_wall_thickness])
                cylinder(mug_height, mug_outer_diameter - mug_wall_thickness*2,
                         mug_outer_diameter - mug_wall_thickness*2);
        }
        (... omitted by authors: rest design ...)
Fig. 25. Semantic Cues for Mug Design Parameters. When asked to parametrize a mug design, GPT-4 gives different parametrizations with or without a context cue.

However, a designer might sometimes want to make more structural changes, inspired by another design of the same object class; that is, they may want to interpolate between two designs. Interpolating two designs can be difficult to achieve, and a number of difficult questions arise: Are the two designs modeled in a similar way? Do they have the same dimensions and, if not, how do you match the dimensions between two sub-designs? Do you have to add extra operations to combine two parts? Can you actually extract a subpart of an object from a design? If you cannot exactly extract a sub-design, can you design something which is inspired by two design sequences? How do you accurately refer to two sub-designs in a text prompt? To investigate if GPT-4 can help with design interpolation, we test three different design scenarios. All of the designs were presented to GPT-4 in our sketch-based parametric CAD DSL, explained in Sec. 4.

Can you give bounds for each design parameter?

Bounds for each parameter could be something like this: parameter handle_thickness: This could range from 0.5 (for a thin handle) to 3 (for a thick handle). So the bounds are [0.5, 3]. (... omitted by authors: explanation of other bounds ...) Note that these bounds are somewhat arbitrary and based on common mug sizes. You could adjust them based on your specific needs or the scale of your 3D printing.

Fig. 26. Mug Design Parameter Bounds and Constraints. GPT-4 gives parameter bounds based on common mug sizes.

Fig. 27. Chair Interpolation. Left: Input chair design with cylindrical legs. Middle: Input chair design with splats. Right: Interpolated output chair design with cylindrical legs and splats.

First, we present it with two chairs which are modeled similarly, but the first chair has cylindrical legs and the second chair has a backrest with splats, see Fig. 27. In our prompt, we ask if it can mix these two designs to create a chair with cylindrical legs and splats in the back. The result can be seen in Fig. 27 (right). It should be noted that variables in the code are descriptive, e.g. leg4_solid and splat_3_sketch, which helps provide semantic cues. Also, in our designs, the first half of the code describes the construction of the seat and the legs, and the second half describes the construction of the backrest. This means that mixing these two designs comes down to replacing the second half of the first design with the second half of the second design.

Next, we present GPT-4 with two designs of a temple involving a different number of pillars, one with 4 pillars and one with 10 pillars on each side, see Fig. 28. In our prompt, we ask it to design a temple with steps, a roof, and 6 pillars on the left and right side. For this, GPT-4 has to find how these pillars have been modeled and how to model a varying number of pillars, given the two input examples.

Fig. 28. Temple Design Interpolation. Left: Input temple design with 4 pillars on each side. Middle: Input temple design with 10 pillars on each side. Right: Interpolated output temple design with 6 pillars on each side.

Fig. 29. Bike Design Interpolation. Left: Input bicycle design. Middle: Input quad-bike design. Right: Interpolated output tricycle design.
The code for the design of the pillars did not contain any looping structures or variables, and it was spread more widely throughout the program than in the chair example, making the task more challenging. Despite these challenges, GPT-4 manages to extract the construction logic of the pillars and introduces variables and a looping structure to place them correctly, see Fig. 28 (right). Note that we mentioned the steps and the roof in the prompt. We have noticed that without this reminder, it would solely focus on the construction of the pillars and forget about the rest of the design.

Our last test is structurally more challenging. We present GPT-4 with a design of a bicycle and a design of a quad-bike, see Fig. 29. The two designs differ not only by the number of wheels in the front and the back, but also by the construction of the bike forks. In the case of the bicycle, the fork surrounds the wheel; in the case of the quad-bike, the wheels are connected by a horizontal bar to the vertical bar of the frame. This makes the mixing of sub-designs more complex. And indeed, when asked to design a tricycle, GPT-4 reasons correctly about the number of wheels in the front and the back, and where to find these structures. It also adjusts the size of the quad-bike's vertical bar such that the two back wheels and the front wheel are on the same plane. This was not the case for the quad-bike and the bicycle in the input designs. But it does not succeed at extracting the complete fork from the bicycle design, as can be seen in Fig. 29 (right). Note that this experiment was performed via a single prompt, and GPT-4 would likely be able to copy the missing part via further interaction with the user.

We find these examples promising, as they show how GPT-4 manages to combine its general knowledge about part relationships and its coding abilities. One of the observed limitations is the ability to extract long sub-sequences and to detect which other parts are still important for plausible interpolation.

This is an openscad design for a lego brick:

module lego_brick(brick_length=30, brick_width=6, brick_height=1) {
    union() {
        cube([brick_length, brick_width, brick_height]);
        for (i = [0:brick_length/3-1])
            for (j = [0:brick_width/3-1])
                translate([1.5 + 3*i, 1.5 + 3*j, brick_height - 1/2])
                    cylinder(1, 1, 1);
    }
}

Can you give parameter bounds (in form of intervals) and parameter constraints (in form of inequalities)?

Parameter bounds could be like this:
brick_length: [3, Infinity) - In this design, the length of each stud is represented as 3 units, therefore the minimum is 3. // ...
brick_width: [3, Infinity) (... omitted by authors: explanation of bounds ...)
brick_height: [3, Infinity) (... omitted by authors: explanation of bounds ...)
Parameter constraints in the form of inequalities: (... omitted by authors: other constraints ...)
brick_length % 3 = 0 - The length of the Lego brick should be a multiple of 3 to make sure studs are equally spaced and the brick ends with the complete stud.

Fig. 30. Lego Design Space. We ask GPT-4 if it can generate parameter bounds and constraints for a parametric Lego brick design.

5.4 Exploration of a Given Design Space (Q4)

A design space is conceptually useful to reliably generate variations of a given design. However, coming up with parameters which represent meaningful design variations can be a time-consuming iterative process. To investigate if GPT-4 can help with this task, we perform the following experiment. We present it with a parametric design of a Lego brick, see Fig. 30. Then, we ask it to generate parameter bounds and parameter constraints. Interestingly, GPT-4 generated the non-trivial constraint that the length and width of the brick should be multiples of 3. We ask it to use the design space to come up with 10 different parameter settings which correspond to meaningful Lego bricks. Finally, it should give each variation a name, see Fig. 31. We can observe that the proposed parameter settings respect the previously generated bounds and constraints, and that they lead to distinct 3D models, for which it generates plausible semantic labels.
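Such a constrained exploration can also be reproduced programmatically. The following is a minimal sketch, not GPT-4's output: it samples the Lego-brick design space of Fig. 30 while respecting the multiple-of-3 constraint; the finite caps on the quoted [3, Infinity) bounds and the height range are our own assumptions.

import random

def sample_brick(rng, cap=30):
    """Draw one parameter setting that satisfies the bounds and constraints."""
    return {
        "brick_length": rng.choice(range(3, cap + 1, 3)),  # multiple of 3, >= 3
        "brick_width": rng.choice(range(3, cap + 1, 3)),   # multiple of 3, >= 3
        "brick_height": rng.choice(range(1, 11)),          # assumed height range
    }

rng = random.Random(0)
variations = [sample_brick(rng) for _ in range(10)]  # ten settings, as in Fig. 31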
5.5 Discussion

In this section, we summarize the key capabilities (C), limitations (L), and dualisms (D) specific to the creation and manipulation of design spaces.

C.1 Extensive Knowledge Base in Design and Manufacturing: We observe that we can leverage GPT-4's semantic knowledge base to create parameters, bounds, and constraints for text-based designs and already existing designs. Additionally, GPT-4 can be useful for finding semantically meaningful design variations in a given design space.

C.3 Modularity Support: We observe that GPT-4 can interpolate existing designs by extracting and adapting sub-designs based on their program representations. Interestingly, even when designs are not presented in a modular fashion, it tries to recognize and abstract sub-modules in input designs.

Fig. 31. Lego Design Space Exploration. GPT-4 generates 10 different design variations for the parametric initial design. The label under each model corresponds to the name given by GPT-4.

L.1 Reasoning Challenges: The design spaces created by GPT-4 are based both on semantic knowledge and on code interpretation. However, GPT-4 does not take geometric considerations into account, such as intersecting or non-connecting parts. As a result, generated parameter bounds can create non-valid geometry, and it has proven difficult to make GPT-4 correct these. However, in general, the generation of valid parameter bounds and constraints is a difficult problem for which mainly approximations have been proposed [22].

L.3 Scalability: The interpolation task revealed that GPT-4 has limited capabilities to infer what parts of a design should be linked to a semantic part specified in a prompt. One promising future direction to manage increasingly complex designs is to make them increasingly modular by adding intermediate levels of abstraction.

D.1 Context Information: We observe that the generation of correct parametric designs and the reparametrization of already existing designs can be improved by providing semantic cues, such as the name of the modeled object. As seen in Sec. 4, GPT-4 creates designs which contain a lot of semantic information, and it generally performs even better when using meaningful variable names. Leveraging this aspect in the generation of design spaces and throughout other aspects of the design process should prove extremely useful.

6 DESIGN-FOR-MANUFACTURING

The utilization of LLMs in the context of Design for Manufacturing (DfM) provides a broad range of applications that have the potential to enhance the design and manufacturing process of different parts and assemblies.
One useful application of LLMs involves leveraging their pattern identification and language interpretation capabilities to imitate a manufacturing expertise bank that can be tapped into during various parts of the design and manufacturing stages. Furthermore, because LLMs such as GPT-4 have the ability to create programs and to find and interpret patterns in text, they can potentially be used to generate and alter design and manufacturing files.

Currently, DfM is often accomplished by human expertise with the aid of CAD software. Engineers and designers review design plans and use their industry experience to suggest alterations that would improve manufacturability. The CAD software then allows these alterations to be modeled. The replacement of human manufacturing knowledge with GPT-4 in this context could streamline the design for manufacturing process, offering more consistent, scalable, and efficient decision-making, which is not limited by individual human capacity.

[Fig. 31 variation labels: Long Brick, Thin Brick, Wide Flat Panel, Initial design, Classic Brick, Tall Brick, Flat Panel, Large Cube, Small Cube, Long Flat Panel, Wide Brick]

Fig. 32. Integration of GPT-4 into Design for Manufacturing (DfM): GPT-4 can be used to augment the DfM process when designing a part.

In this section, we propose multiple ways that this new manufacturing expertise bank could be used in design and manufacturing, as shown in Figure 32. GPT-4 can be used to select optimal manufacturing techniques based on a part's features. Furthermore, it can propose and implement modifications to a design to improve its manufacturability, ultimately leading to more efficient production processes. Additionally, this idea can be extended to part sourcing by leveraging the model's reasoning capabilities to identify potential suppliers based on the part's desired function and performance. Finally, it could be used to develop manufacturing instructions for various processes. To understand GPT-4's ability to alter designs based on manufacturing/sourcing constraints, we pose the following questions:

• Q1 Given a part geometry, production run, and other desired outcomes, can GPT-4 select optimal manufacturing processes?
• Q2 Given a manufacturing process, can GPT-4 directly suggest and make design alterations to a part's file based on constraints driven by the process capabilities?
• Q3 Given a desired functionality and geometric specifications, can an LLM find a source for a part that fits those specifications?
• Q4 Given a design, can an LLM create a set of manufacturing and assembly instructions?

6.1 Finding an Optimal Manufacturing Process (Q1)

To test these capabilities, we tasked GPT-4 with advising on identifying an optimal manufacturing process for a part with the geometry shown in Figure 33. We tested it with four different cases where, in each case, the geometry, material, tolerance requirements, and quantity were varied. We described the part's geometry as an OpenJSCAD file. Finally, given a set of priorities, we tasked GPT-4 with selecting an optimal manufacturing process. In Case 4, we provided a finite list of manufacturing processes to evaluate the effectiveness of the selection process under the constraint of a limited set of options. The goal was to determine how well the process could choose the appropriate manufacturing processes to meet the specified priorities. GPT-4 was successful at selecting an optimal manufacturing process for three out of the four cases.
For cases one, two, and four, GPT-4 selected the optimal process, as approved by an expert. However, in case three, shown in Figure 34, GPT-4 suggested an injection molding process, which is not suitable for processing a Polytetrafluoroethylene (PTFE) material. In all cases, GPT-4 initially only provided a range of manufacturing options; it required additional prompts to arrive at the optimal manufacturing process selection.

[Fig. 32 diagram labels: DfM Expert; Manufacturing Expertise Bank; CAD/CAM File Generation; Design Domain (Functionality, Material, Geometry, Performance); Manufacturing Domain (Instructions, Sourcing, Manufacturing Process); Design Specification; Design Alterations; Manufacturing Instructions; sample G-code: G90 G21 M3 S1000 G1 X10 Y10 Z-5 F200]

Fig. 33. Using GPT-4 to select optimal manufacturing process: Inputs and selected optimal manufacturing processes using GPT-4.

Case 1 - Quantity: 1000; Priorities for selection: 1. Cost, 2. Ease of finding supplier; Material: 6061 Aluminum; Tolerance: ±0.015" all dimensions, +0.003"/-0" internal pocket; Optimal process: CNC machining.
Case 2 - Quantity: 1; Priorities for selection: 1. Tolerance, 2. Ease of finding supplier; Material: Glass; Tolerance: ±0.015" all dimensions; Optimal process: Waterjet cutting.
Case 3 - Quantity: 10000; Priorities for selection: 1. Cost, 2. Ease of finding supplier; Material: PTFE; Tolerance: none given; Optimal process: Injection molding.
Case 4 - Quantity: 1; Priorities for selection: 1. As strong as possible; Material: select material (Aluminum or Stainless Steel); Equipment available: FDM 3D printer, laser cutter, CNC mill, hand tools; Tolerance: ±0.015" all dimensions; Optimal process: CNC machining.

... You are trying to select an optimal set of manufacturing process to make the part. ...The parts must be manufactured to the tolerance of plus or minus 0.015 inches. You need to make 10000 part out of PTFE. ... priorities for selecting a manufacturing process are the following: 1. Cost 2. Ease of finding multiple suppliers.

Given the description of the part provided, the best method to produce this component from PTFE (Polytetrafluoroethylene) in high volume, with a tight tolerance of +/- 0.015 inches would be injection molding. ... Regarding the material choice, PTFE is a common material for injection molding due to its high melting point, excellent chemical resistance, and excellent electrical insulating properties. However, it's worth noting that PTFE can be challenging to mold due to its high melt viscosity and tendency to warp as it cools. Proper mold design and processing conditions are critical to achieving the desired part dimensions and properties. ...

Fig. 34. Prompt and Response for Selecting an Optimal Manufacturing Process for a PTFE Part. GPT-4 gives the response that PTFE is commonly injection molded, which is incorrect. Typically, it is not suggested that PTFE be injection molded.

6.2 Design Alterations for Manufacturability (Q2)

In this section, we assessed GPT-4's capability to enhance designs for better manufacturing optimization. To accomplish this, we included the text of an OpenJSCAD file in the prompt, allowing GPT-4 to analyze and modify it accordingly. Our focus in this case was on the CNC machining of a 10-inch diameter disk, which involved creating bolt holes along the edge and a central blind square pocket. We included two intentional features in the geometry that would be difficult to machine.
As depicted in Figure 35, the process began with GPT-4 identifying any manufacturing complexities within the design features. Since an LLM interprets text, GPT-4 interprets the text of the OpenJSCAD file, rather than the rendered geometry that humans interpret once the file is compiled. After GPT-4 identified any complexities, we instructed it to adjust the geometry of the OpenJSCAD file to address the challenging aspects by directly changing the text of the file. Although GPT-4 accomplished these tasks with a moderate degree of success, there were a few inaccuracies. Firstly, GPT-4 correctly identified two potential machining issues: the small radius of the internal pocket and the thin wall at the pocket base. However, it also misunderstood a number of geometric features described in the OpenJSCAD file. These include perceiving holes on a curved surface and anticipating an undercut from the pocket. These misinterpretations might be attributed to GPT-4's reliance on the text of the OpenJSCAD file for feature identification, as some features become more visible once the file is compiled into a geometric representation. After we pointed out these interpretation errors to GPT-4, it was able to correct its analysis but introduced another mistake: GPT-4 incorrectly stated that the bolt holes presented machining difficulties and inquired about additional information regarding the machining area. Once provided with the necessary details, GPT-4 independently rectified its mistake about the bolt holes. GPT-4 was also aware of potential issues with the size of the part and the machining area of the CNC machine. Furthermore, it was able to compute whether there was a potential issue. In the final stage, GPT-4 was asked to modify the OpenJSCAD file to address the manufacturing concerns. It improved the wall thickness from 0.02" to 0.04", making it machinable. Given the additional specification of utilizing a 1/4" endmill, GPT-4 also adeptly adjusted the internal pocket's radius to better accommodate this tooling requirement.

Fig. 35. Using an LLM to Find Difficult-to-Manufacture Features and Directly Adjust the Geometry. GPT-4 is given the task to identify any features that are difficult to machine, along with an OpenJSCAD file. After a number of iterations, it identifies a set of features. We then prompt it to make changes to the OpenJSCAD file to reduce the machining difficulty. [Diagram labels: Unoptimized Geometry; Difficult to Machine Features: Small Radius (0.05"), Thin Wall Thickness (0.02"); Desired Process: CNC Machining; Adjusted Geometry: 0.125", 0.04"]
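The rules GPT-4 applied here can also be captured in a conventional script, which is one way to double-check its analysis. Below is an illustrative sketch (not the authors' tool) encoding the two checks relevant to this example: an internal pocket corner must be reachable by the endmill, and the pocket floor must not be too thin. The minimum-wall value is an assumption.

TOOL_DIAMETER = 0.25  # inches; a 1/4" endmill
MIN_WALL = 0.04       # inches; assumed minimum machinable wall thickness

def check_pocket(corner_radius, floor_thickness):
    """Return a list of machinability issues for a blind pocket."""
    issues = []
    if corner_radius < TOOL_DIAMETER / 2:
        issues.append("internal corner radius %.3f\" is below the tool radius %.3f\""
                      % (corner_radius, TOOL_DIAMETER / 2))
    if floor_thickness < MIN_WALL:
        issues.append("pocket floor %.3f\" is thinner than the %.3f\" minimum"
                      % (floor_thickness, MIN_WALL))
    return issues

print(check_pocket(0.05, 0.02))   # both original features are flagged
print(check_pocket(0.125, 0.04))  # the adjusted geometry passes: []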
6.3 Part Sourcing (Q3)

The massive dataset backing LLMs contains some specialized knowledge about parts needed for manufacturing. Consequently, we posit that LLMs can be useful for reasoning about these parts, from identifying the correct part names to describing necessary properties for their functionality.

Cabinet part sourcing. As part of generating the design and fabrication instructions for our cabinet, we asked GPT-4 to find appropriate shelf brackets for the shelf within the cabinet, starting from a concrete design specification in OpenJSCAD. In each iteration, GPT-4 provided several suggestions as links to products on Home Depot, with a short sentence differentiating them. Numbers in the part descriptions were inaccurate: one bracket pair held up to 300 lbs, but GPT-4 claimed it could hold 1000. Another pair was a "heavy-duty option that can support up to 500 lbs. when properly installed.", but could actually hold 1300 lbs. Otherwise, the short descriptions were true, and all described parts could plausibly serve as shelf brackets. Figure 36 shows the presumed brackets suggested. Overall, we found success for this relatively simple use case.

Copter part sourcing. We also asked for help sourcing parts when designing the quadcopter example. First, we asked for a parts list that would encompass everything needed for the design. GPT-4 compiled a list including batteries, frames, propellers, transmitters and receivers, electronic speed controllers, etc. We found that the list was comprehensive and accurate. Next, we tried narrowing down the response from a list of parts to a list of specific parts with more tailored guidance for each use case. Asking for a range of numerical specifications (e.g. specific amperages for batteries) produced correct and sensible numerical estimates for parts. Specifying that the copter should be able to hold a weight of 10 kg for 10 minutes yielded a list of very large and powerful parts. Specifying an indoor copter led to smaller and more lightweight part suggestions. Pushing GPT-4 beyond specification resulted in errors. Asking for specific names of part listings or parts and manufacturers, as in Figure 37, tended to result in lists with incompatible parts, or in naming parts that do not exist. Iterating on the errors with GPT-4, as seen in our follow-up question in Figure 37, produced correct new parts. Though asking GPT-4 to produce the names of real-world parts was unsuccessful, we still found impressive results in its comprehensiveness and ability to form fairly specific and accurate part lists. GPT-4 was also able to dispense meaningful advice on ensuring parts were compatible, even though it was unable to generate parts lists satisfying compatibility itself. We believe that GPT-4 can be a useful guide for delivering domain-specific knowledge and providing complete parts lists, but that precise numbers and specs should be cross-referenced before being used.

Geometry-based part sourcing. McMaster-Carr is a deep compendium of knowledge for hardware parts, with geometric information and even CAD models available for many items. McMaster-Carr already has a "search by geometry" feature, so we wanted to know if we could perform higher-level searches that involve both context and geometry. First, we tried describing specific scenarios and asking GPT-4 for search terms that would procure us the correct part. Asking for a nut to be used in a tight space without room for a wrench and submerged in saltwater produced two appropriate results, "316 Stainless Steel Wing Nuts" and "316 Stainless Steel Knurled-Head Thumb Nuts", where the correct form and material were identified. Asking for a tamper-proof nut also produced the correct search, "316 stainless steel tamper-resistant nut". Next, we tried a more open-ended geometric compatibility scenario by asking for parts for an at-home carbonation system (Figure 38). We also then asked it for a comprehensive Bill of Materials. It seemed as though all parts were compatible, at least geometrically; we suspect this is because the items in the domain are standardized for compatibility, McMaster-Carr's dataset is quite rich, and there is great availability of each part across varying sizes.
Fig. 36. Using GPT-4 to Suggest Shelf Brackets for a Cabinet. Top: On its first attempt, GPT-4 suggested five bracket products that were too large for the scale of the project, on the scale of tens of inches rather than less than an inch. Bottom: After adjusting for our feedback, it suggested four new, appropriately-sized brackets. One bracket "lock[s] the wire shelf to the vertical [ShelfTrack] Wall Standards to create a sturdy storage space", whereas another was "not technically a shelf bracket... could be used to provide support underneath a shelf", and yet another was "plastic supports that are designed to be unnoticeable". Several of the links were either broken (still available on Home Depot at another URL) or no longer available on the Home Depot store (but available on e.g. Lowes, Walmart, or Amazon), and seven of the nine suggestions were still available and discoverable after a Google search of the product name. For the two that were not available (highlighted with red question marks), one was the "Everbilt 5x3 black corner brace", for which there were two similar brackets that were 5x1 and 3x3, and another (the "Everbilt 20 lb. Plastic Invisible Shelf Support Bracket") did not seem to exist as labelled, though there was a product found that fit this description.

Parts of mechanisms. Part of what makes GPT-4 a compelling tool for design is its simple user interface. A user might interact with GPT-4 by describing a desired functionality and asking what parts would be necessary to achieve it. For example, we described a hypothetical bar cart with two features: a lower shelf with rails, and a tabletop where a portion could be folded down for compact storage. We asked GPT-4 to tell us the name of the fold-down tabletop mechanism and to recommend a part that could be used to build it. It correctly identified the function as a drop-leaf mechanism, explained that since the drop-leaf would be 20x15 inches, the mechanism should be at least 15 inches long, and named steel or brass as appropriate materials. It was also able to generate a specific search term for the part. However, GPT-4 did not recommend a particular type of mechanism in terms of how it moved or functioned. We asked it to list the different sub-types and their use cases, which it did successfully, naming and differentiating a swing arm bracket, a slide-out support, a hinged bracket support, support bars, and a rule joint. We were able to find examples of four of the types, but the support bars, which were described as "lengths of wood or metal that are stored separately and inserted into brackets on the table and leaf to hold it in place", did not seem to exist under that terminology or perhaps at all. We then asked it to recommend a type for our use case, and it recommended the swing arm or hinged bracket supports.

Which components do I need to build an indoor drone that can fly at 30 mph? Prefer easily available components if possible. List the manufacturer, model, quantity, and Amazon search term for each component.

Building an indoor drone that can fly at 30 mph requires several components.
Here's a list of the necessary components along with their specifications and Amazon search terms:

Frame: Manufacturer: Hobbypower / Model: Hobbypower F450 / Quantity: 1 / Amazon search term: Hobbypower F450 frame kit
Flight Controller: Manufacturer: Holybro / Model: Kakute F7 Mini / Quantity: 1 / Amazon search term: Holybro Kakute F7 Mini flight controller
Motors: Manufacturer: EMAX / Model: EMAX RS2205S / Quantity: 4 / Amazon search term: EMAX RS2205S motors
Electronic Speed Controllers (ESCs): Manufacturer: Hobbypower / Model: Hobbypower SimonK 30A / Quantity: 4 / Amazon search term: Hobbypower SimonK 30A ESC
Propellers: Manufacturer: DALPROP / Model: DALPROP T5045C / Quantity: 4 pairs (8 propellers) / Amazon search term: DALPROP T5045C propellers
Battery: Manufacturer: Tattu / Model: Tattu 4S 1300mAh / Quantity: 1 / Amazon search term: Tattu 4S 1300mAh LiPo battery
Radio Transmitter and Receiver: Manufacturer: FrSky / Model: FrSky Taranis X9D Plus / Quantity: 1 / Amazon search term: FrSky Taranis X9D Plus transmitter and receiver
Power Distribution Board (PDB): Manufacturer: Matek Systems / Model: Matek PDB-XT60 / Quantity: 1 / Amazon search term: Matek PDB-XT60 power distribution board
Propeller Guards (optional but recommended for indoor use): Manufacturer: Crazepony / Model: Crazepony 2 Sets 65mm / Quantity: 1 set / Amazon search term: Crazepony 65mm propeller guards

I already have a radio transmitter, please suggest a receiver only

Certainly! If you already have a radio transmitter and only need a receiver, you can consider the following option:
Receiver: Manufacturer: FrSky / Model: FrSky XSR / Quantity: 1 / Amazon search term: FrSky XSR receiver
Please note that the FrSky XSR receiver is compatible with the FrSky Taranis X9D Plus transmitter, which you mentioned having.

Fig. 37. Using GPT-4 to Suggest Amazon Parts for a Copter. GPT-4 produced a list of specific parts, but made factual errors. For example, it listed Crazepony propeller guards on the parts list, but Crazepony does not sell propeller guards.

After iterating, we noted four major inaccuracies within our final results, out of ten identified parts. Two errors were that a certain part did not exist from the listed manufacturer; one was a redundancy where listed parts were already included in the purchase of another part (the flight controller kit recommended by GPT-4 included a power distribution board, so there was no need to purchase one separately); and one was an item that GPT-4 believed to be a transmitter and receiver, but was only a transmitter. A subsequent attempt to point this out rectified the issue, and GPT-4 recommended a receiver correctly. We ordered a modified version of the parts list where the four errors were fixed.

We also tried a loom example, where we asked GPT-4 to provide a fabrication plan for a 4-shaft table loom, and asked about the name of the mechanism that lifts and lowers the heddle frames and the names of specific parts that make up this mechanism. In general, it was accurate, but GPT-4 sometimes erroneously named components that only pertained to countermarche looms or floor looms instead of table loom-specific parts. We speculate that this could be due to a dearth of literature on loom construction in GPT-4's training dataset.

Our examples show potential in using LLMs to identify and source parts, with major caveats. We note a recurring theme of GPT-4's ability to produce programs that generate valid programs or instructions to verify validity, and its inability to apply those rules to its own output.
Hi ChatGPT. I'm trying to make an at-home carbonation system and I'd like to find the hose, fittings, and pressure gauge on McMaster-Carr. Can you help me? The jar I'm using has a 5 gallon capacity.

Hello, I'm glad you've asked for help with this project! To create a DIY carbonation system, you'll need a few items. ... Here are some McMaster-Carr parts that could help:
Regulator: McMaster-Carr has a variety of regulators. An example is Part Number 5182K56 - a two-stage, brass, compressed gas regulator with CGA 320 inlet connection.
Pressure Relief Valve: For example, Part Number 48435K72, brass pressure-relief valve, set at 60 psi, which can be adjusted.
Tubing: Part Number 5648K35 can be an option. It's a firm tubing, clear, 1/4" ID, 3/8" OD.
CGA320 adapter for tubing: You'll need to adapt the CGA320 connection on the regulator to fit the tubing. Part Number 50785K51, a brass barbed tube fitting, can work for this.
Bulkhead fitting for the jar lid: For this, you can use Part Number 50785K191, a brass threaded-tube-through-wall fitting with a tight seal.
Diffusion stone: You can use a standard 0.5 micron diffusion stone. While this isn't available on McMaster-Carr, it can be found at various homebrew stores.
Remember to ensure all parts are compatible with each other (size, material, pressure rating) and appropriate for use with food and drink.

Can you give me a bill of materials (BOM) for this project? ... Can you write it as a multi-level BOM, including all the assemblies, sub-assemblies, and be in this format: Assembly Level - Quantity - Manufacturing or Sourcing information - Part number - Name - Price

Absolutely, here's a Bill of Materials (BOM) for your project. Please note that the exact quantities will depend on your specific setup and you might need to adjust as needed: (... content omitted by authors ...)

Thank you! Can you make that slightly more condensed by removing the quantity number, price, and assembly level, since they appear redundat[sic]?

Of course, here's a more condensed version:
Carbonation System - Manufacturing/Sourcing Information: Self-assembled
  Regulator Connection - Manufacturing/Sourcing Information: Self-assembled
    Part 1: Sourcing Information: McMaster-Carr, Part Number: 5182K56, Name: Two-stage, brass, compressed gas regulator with CGA 320 inlet connection
    Part 2: Sourcing Information: McMaster-Carr, Part Number: 50785K51, Name: Brass Barbed Tube Fitting for CGA320 connection
    Part 3: Sourcing Information: McMaster-Carr, Part Number: 5648K35, Name: Firm Tubing, Clear, 1/4" ID, 3/8" OD
  Jar Connection - Manufacturing/Sourcing Information: Self-assembled
    Part 1: Sourcing Information: McMaster-Carr, Part Number: 48435K72, Name: Brass pressure-relief valve
    Part 2: Sourcing Information: McMaster-Carr, Part Number: 50785K191, Name: Brass Threaded-Tube-Through-Wall Fitting
  Diffusion Setup - Manufacturing/Sourcing Information: Self-assembled
    Part 1: Sourcing Information: Homebrew store, Name: 0.5 micron diffusion stone
This BOM is simplified and omits quantities and prices, which should be determined according to your specific setup and checked directly with suppliers. It also does not cover smaller items such as clamps or the CO2 tank, nor optional or safety items like check valves or non-return valves.

Fig. 38. Using GPT-4 to Suggest Geometrically-Compatible Parts from McMaster-Carr. GPT-4 suggested part numbers for the regulator (top left), hose adapter (top right), pressure relief valve (middle), hose (bottom left), and hose fitting (bottom right).
All part descriptions were sufficiently detailed such that they described a single part in the database after accounting for the hose inner and outer diameter (prescribed by GPT-4), except for the regulator, which left a choice between a neoprene and a stainless steel diaphragm. All part numbers were incorrect, except the part number for the pressure relief valve, which was surprisingly correct. GPT-4 then produced a comprehensive multi-level BOM.

In general, we found that we could ask for general, pointed, and precise guidance with great success, but asking for product names or specific items often resulted in incompatible or nonexistent parts lists. Furthermore, the best results were produced in the simpler and more common domains, or when the domain we were querying had very rich information, as was the case with McMaster-Carr. We believe that GPT-4 is useful for making comprehensive checklists, and can lend domain expertise and suggestions, so long as all information can be checked or cross-referenced. Since GPT-4 can interface across many levels of jargon, experts may derive the most value from its use currently, given that they are best able to make common-sense checks over the output. For non-domain experts, GPT-4 delivers very convincing, confident information that can be incorrect. LLMs are poised to become a powerful "design for everyone" tool, but more verification steps are needed to guide novice users.

6.4 Creating Manufacturing Instructions (Q4)

Computer-Aided Manufacturing (CAM) is a technology that utilizes software to generate manufacturing instructions from digital design files. It plays a vital role in the efficient and accurate translation of design concepts into tangible products. CAM bridges the digital design and the physical manufacturing stages, enabling seamless communication and translation of design specifications into machine-readable instructions. CAM encompasses a range of techniques and tools that leverage computer systems to automate various manufacturing processes, including planning, toolpath generation, and machine control. By utilizing CAM, manufacturers can streamline production, improve precision, and enhance overall efficiency. In this section, we delve into the creation of machine-readable and human-readable manufacturing instructions with the help of GPT-4 and open-source CAM software. Specifically, we explore additive, subtractive, and assembly manufacturing processes, highlighting the capabilities and challenges associated with each approach.

6.4.1 Additive. Additive design, often employed in the realm of 3D printing, can be time-consuming and labor-intensive, requiring spatial reasoning, precision, and multiple iterations. We posit that GPT-4 will improve this process, as it comprehends complex specifications in natural language, generates designs efficiently, simulates outcomes, and explores innovative possibilities from diverse sources, optimizing functionality and aesthetics. We first try to directly use GPT-4 to generate the G-code from a natural language description. However, due to the complexity and length of G-code, GPT-4 fails to output complete code that precisely models the specified shape, as shown in Figure 39. To overcome this, we have developed a two-stage approach.

Stage I. We transform the concept expressed in natural language into an intermediate 3D shape representation using triangle meshes.
This choice provides compact and comprehensive representations, capturing intricate details accurately. Leveraging the Python library trimesh, we effectively manage and process the shape data extracted from the natural language input (Figure 40).

Stage II. We translate this intermediate representation into G-code, customized for the specific hardware configurations at hand. This critical step demands deep domain expertise in fabrication processes, which is why we rely on slic3r [4], a professional G-code generation software. Through Python integration, we interface directly with slic3r, ensuring the production of high-quality G-code that precisely guides the manufacturing process. In Figure 41, we visualize the output G-code using Repetier [1], a manufacturing tool, to validate the fabrication pipeline.

Throughout the entire pipeline, the cohesive communication between these modular components is facilitated by the powerful capabilities of GPT-4. As an advanced language model, GPT-4 maintains a seamless conversation, ensuring a smooth flow of data and instructions across the various stages of the pipeline. By harnessing the potential of GPT-4, we optimize the pipeline, achieving enhanced efficiency and superior additive manufacturing outcomes.

The object I want to print is a simple solid 3D ball with radius of 1. Generate a sample g-code with a simple slicing for it in a virtual printer.

G-code generation is complex and it is heavily dependent on the specifics of the 3D printer being used, the material properties, the specific layer thickness, the nozzle size, print speed, and more. Hence, it's a process that should be done using a slicing software. However, to provide you with an illustrative example, I'll share a simple segment of G-code that would be used in a very rudimentary and unoptimized way to print a sphere layer by layer. Please note, this is highly simplified, does not account for most variables in the 3D printing process (such as extruder temperature, cooling, retractions for reducing stringing, print speeds, layer adhesion etc.), and would not result in a functional 3D print if used directly.

; Setup G-code
G90 ; Set to absolute positioning
G21 ; Set units to millimeters
G28 ; Home all axes
G92 E0 ; Zero the extruder
G92 E0 ; Zero the extruder again
G92 E0 ; Zero the extruder a third time
G92 E0 ; Zero the extruder a fourth time
...

Fig. 39. Failure Case When GPT-4 Directly Generates G-code. GPT-4 fails to output complete code that precisely models the specified shape on the right.

6.4.2 Subtractive. Subtractive manufacturing is a widely used technique that involves removing material from a workpiece to create the desired shape or form. This process is commonly employed in various industries, including woodworking and metal fabrication. Leveraging the power of GPT-4, we explore how this approach can be enhanced and streamlined to achieve optimal results. To demonstrate the design-to-subtractive-manufacturing process, we focus on the previously designed cabinet (Figure 7) and employ a laser cutter and wood pieces for fabrication. Specifically, our goal is to translate the OpenJSCAD design into precise manufacturing instructions. To tackle this task, we simply provide GPT-4 with the OpenJSCAD code and request the generation of laser cutting patterns as DXF files. GPT-4 showcases an understanding of the cabinet's fundamental geometry relationships and topological structure.
It recognizes that the 3D cabinet comprises various 2D boards, including top and bottom boards, a shelf board, side boards, and back boards (Figure 42). However, GPT-4 encounters challenges when accurately determining the dimensions of the 2D cutting patterns based on the given 3D geometry input. Some inaccuracies arise, such as confusion between the cabinet's depth and the board thickness, resulting in overly thin side boards. Additionally, distinguishing between height and width in the 3D context presents difficulties, leading to back boards that are too short. Lastly, GPT-4 struggles with precise hole positioning (Figure 42). To address these errors, human intervention becomes essential in explicitly identifying the issues and proposing potential solutions (Figure 43). After a round of communication, GPT-4 successfully generates the correct DXF files for laser cutting. To ensure their validity, these files were verified by human experts.

6.4.3 Assembly. We conducted an experiment to explore the potential of GPT-4 in generating assembly instructions that are both machine-readable for robots and human-readable as standard operating procedures. The experiment focused on assembling a wooden box using a specific set of tools and materials. In Figure 44, we presented the prompt for generating machine-readable instructions, which involved creating a set of functions to specify different tasks for the robot and generating corresponding sequences to execute those tasks. Since the functions were designed to be system-agnostic, the response from GPT-4 printed the actions performed by the robot. Subsequently, we prompted GPT-4 to generate a standard operating procedure to convert the machine-readable instructions into human-readable text. This procedure provides a detailed description of the assembly process, enabling humans to follow along and understand the steps involved. By generating both machine-readable and human-readable instructions, we sought to assess the versatility and applicability of GPT-4 in facilitating effective communication and collaboration between robots and human operators in assembly tasks.

Task: generate Python code based on the following rules. Input: a few functions of box(x,y,z,w,h,d) which creates a box whose center is (x,y,z) and size is (w,h,d) and their parameters. Output: the combined `.stl` file that represents all these boxes using triangle mesh. Constraints: use library `trimesh`. write a function that can support general purposes. it is not required to provide an example.

Sure, below is a Python function named `create_box_stl` which takes in a list of box parameters and produces an STL file using the `trimesh` library.

import trimesh

def create_box_stl(boxes, output_file):
    '''
    Creates an STL file that represents all the boxes specified by the parameters.

    Parameters:
    boxes : List of tuples
        Each tuple corresponds to one box and contains the parameters
        (x, y, z, w, h, d) in that order.
    output_file : str
        The path of the STL file to be output.
    '''
    meshes = []
    for box in boxes:
        # Extract box parameters
        x, y, z, w, h, d = box
        # Create box mesh
        mesh = trimesh.creation.box(extents=[w, h, d])
        # Move box to correct position
        mesh.apply_translation([x, y, z])
        # Add box mesh to list
        meshes.append(mesh)
    # Combine all box meshes into a single mesh
    combined = trimesh.util.concatenate(meshes)
    # Save as STL
    combined.export(output_file, file_type='stl')

Fig. 40. Additive Manufacturing Stage I. From natural language to intermediate 3D shape representation.
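For completeness, the two stages can be chained as follows. This usage sketch is our own illustration, not part of the quoted GPT-4 output; the box list, file names, and slic3r settings are assumed values (the setting keys correspond to slic3r command-line options), and generate_gcode is the function shown in Fig. 41 below.

# Stage I: natural-language concept, already reduced to a list of boxes
boxes = [(0, 0, 0, 20, 20, 2), (0, 0, 6, 2, 2, 10)]  # (x, y, z, w, h, d) each
create_box_stl(boxes, "design.stl")

# Stage II: slice the intermediate mesh into machine-readable G-code
slic3r_settings = {"layer-height": 0.2, "nozzle-diameter": 0.4}
generate_gcode("design.stl", slic3r_settings, "design.gcode")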
By generating both machine-readable and human-readable instructions, we sought to assess the versatility and applicability of GPT-4 in facilitating effective communication and collaboration between robots and human operators in assembly tasks. 6.5 Discussion In this section, we elaborate on the key capabilities (C), limitations (L), and opportunities (O) previously outlined, particularly as they relate to the domain of text-to-design. C.1 Extensive Knowledge Base in Design and Manufacturing: We have discovered that GPT-4 possesses an understanding of various manufacturing processes and their capabilities including CNC machining, injection molding, additive manufacturing, and laser cutting. Moreover, it is able to apply this knowledge to various 46 • Makatura et al. Task: generate python code based on the following rules. Input: the mesh from the previous code block and all hardware-related parameters that `slic3r` accepts. Output: the g-code generated based on the mesh. Constraints: use the command-line tool `slic3r` and use `subprocess` to call it. assume `slic3r` can be directly called by the `slic3r` command. we use python3 here. Here’s a Python function `generate_gcode` which takes a `.stl` file path (output from the previous function) and a dictionary of Slic3r settings as input, and uses the `subprocess` module to run Slic3r from the command line and generate G-code: import subprocess def generate_gcode( stl_file_path , slic3r_settings , output_file_path ) : ''' Generate G−code from the specified STL file using Slic3r . Parameters: stl_file_path : str The path of the input STL file . : dict slic3r_settings A dictionary where the keys are Slic3r setting names and the values are the corresponding settings . output_file_path : str The path of the output G−code file . ''' # Prepare Slic3r command slic3r_cmd = [ " slic3r " , # Add Slic3r settings to command for setting , value in slic3r_settings . items () : stl_file_path , "−−output", output_file_path ] slic3r_cmd . append("−−" + setting ) slic3r_cmd . append(str ( value ) ) # Run Slic3r subprocess . run(slic3r_cmd , check=True) Fig. 41. Additive Manufacturing Stage II. From intermediate 3D shape representation to machine-readable G-Code. Generate dxf files for laser cutting wood pieces based on the jscode of the cabinet. top_bottom_board.dxf, This script will create four DXF files: shelf_board.dxf, back_board.dxf, side_board.dxf. You’ll find these files in the same directory where you run this script. Fig. 42. Subtractive Manufacturing: GPT-4 Generates DXF Files using the 3D OpenJSCAD Code of a Designed Cabinet. The generated DXF patterns are displayed on the right, with green indicating correct patterns and red highlighting incorrect patterns. problems in design for manufacturing. Although it is not consistently accurate, it can utilize this knowledge to offer suggestions about what is the best manufacturing practice to use, if certain geometric features will be hard to produce. Moreover, because GPT-4 has the ability to generate code, it can be utilized to modify geometry directly and generate manufacturing files based on supplied files. How Can Large Language Models Help Humans in Design And Manufacturing? 
There are a few problems in the generated dxf files: 1) board_thickness should be the thickness of the wood piece, not the width of the side board; the side board should be installed on the side of the cabinet; 2) Two holes on the side board should be horizontally distributed at the same height to support the shelf board; 3) The current back board is too short; it should have a height closer to the total height of the cabinet. Please regenerate the parts that are incorrect.

You're right, the width of the side board should be the cabinet's depth minus twice the thickness of the board. And the holes should be distributed horizontally. The backboard should indeed have dimensions according to the width and height of the cabinet, not its depth.

Fig. 43. Subtractive Manufacturing: Human Intervention Necessary to Rectify Initial Pattern Inaccuracies. GPT-4 effectively addressed the errors based on user-provided correction messages.

Additionally, we have discovered that GPT-4 possesses the capability to search for parts that fulfill a desired functionality as described to it. This allows it to be used to source parts based on a description, geometry, functionality, and performance.

C.2 Iteration Support: GPT-4 also possesses the ability to perform iterative debugging when creating and modifying files required for manufacturing. This enables the opportunity to iterate when prompts are not ideal for generating the desired outcome or when GPT-4 generates something incorrect.

L.1 Reasoning Challenges: Our observations indicate that GPT-4 exhibits constraints in quantitative reasoning. For instance, when tasked with generating manufacturing instructions, GPT-4 struggled to accurately perform basic calculations for tool path placements. However, this limitation can be mitigated by employing symbolic computations within a script. A case in point: we achieved accurate DXF file generation by designing a script to produce the file, instead of having GPT-4 generate the file directly.

L.2 Correctness and Verification: We have found that GPT-4 will provide incorrect information about manufacturing processes in some cases. For example, when selecting a manufacturing process, it proposed injection molding as an optimal manufacturing process for a PTFE part, which is incorrect. We have not found a general solution in this work to prevent GPT-4 from giving incorrect information.
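To make the script-based route from L.1 concrete, a minimal sketch using the ezdxf library (the library choice, board dimensions, and hole spacing are our assumptions for illustration, not the script from the case study):

import ezdxf

def side_board_dxf(height, depth, hole_r, hole_y, filename):
    # Write a rectangular side board with two horizontally spaced
    # shelf-support holes at the same height (per the Fig. 43 correction).
    doc = ezdxf.new()
    msp = doc.modelspace()
    # Board outline as a closed polyline
    msp.add_lwpolyline([(0, 0), (depth, 0), (depth, height), (0, height)],
                       close=True)
    # Two holes at the same y (height), distributed horizontally
    for x in (depth * 0.25, depth * 0.75):
        msp.add_circle((x, hole_y), hole_r)
    doc.saveas(filename)

side_board_dxf(height=760, depth=300, hole_r=2.5, hole_y=380,
               filename="side_board.dxf")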
7 DESIGN-TO-PERFORMANCE

To assess the suitability of a particular design, it is common to evaluate performance metrics based on features of the design, such as geometry and materials used. Common metrics include mechanical performance, dynamic functionality, or adherence to geometric restrictions. It is common to compute performance with respect to an individual criterion or multiple criteria. The purpose of this evaluation can be to form a set of quantitative metrics to describe the design further, as a foundation for numerical optimization, or to verify whether a design meets given specifications. This performance assessment can result in a single quantitative result or an array of results. A more complex design evaluation can further classify or compare between designs in order to enable further optimization or to select a final part for production. Within the range of performance evaluation, there are objective, semi-subjective, and subjective criteria that all contribute to the final design performance.

Objective criteria include quantitative features that are calculable or measurable, including features such as object weight, size, load capacity, impact resistance, speed, battery life, vibration resistance, and price. Semi-subjective criteria include features that are generally agreed upon but require some insight or estimation to evaluate. Such criteria may be evaluated by proxy measurements, and may vary based on the evaluator, the culture, or the use case; examples include ergonomics, product lifespan, sustainability, portability, safety, and accessibility. Subjective criteria include features that may differ markedly based on the evaluator, such as comfort, aesthetics, customer satisfaction, novelty, and value.

With this in mind, we aim to answer the following pair of questions:
• Q1 Can GPT-4 evaluate the performance of an input design that is consistent with classical, objective metrics?
• Q2 Can GPT-4 support performance evaluation in ways not possible with classical approaches, such as using semi-subjective and subjective metrics?

This section describes the current abilities of GPT-4 and identifies best practices, limitations, and full failures in its capabilities to address each of these questions through the use of several examples per question. Evaluations were tested using different input styles (e.g., method of design description) and requested output forms (e.g., direct classification or function creation). Demonstrative examples are shown in Figure 45. We did not test all combinations of design style and output requests but focused on key comparisons and types. In particular, to address Q1 we focused on comparing text-based designs (DS1) and generic designs (DS2), comparing output requests for direct evaluation in a text response (RF1) and evaluation by the creation of a function (RF2), and comparing code-based designs described with salient semantics (DS3) and no semantics (DS4). To address Q2 with more subjective features, we also tested output requests for categorization (RF3) along with ranking and pairwise comparisons between designs, and separately used scoring (RF4) with varying levels of complexity.

Fig. 45. Variations on Design Input Style (DS) and Requested Output Form (RF). Markings show variations that lead to reasonable responses (green check mark), moderate or unsophisticated responses (yellow tilde), and poor responses (red ex).

Create a Python script for a household robot that assembles a wooden box with specific dimensions (5x5x5 inches with a thickness of 0.5 inches). The robot has to perform a set of operations including applying wood glue along certain lines, clamping the pieces together in a specific order, drilling a hole in the center of the box with a 1/8 inch drill bit, and finally placing a #8 size, 1.5 inch long wood screw into the drilled hole. The robot should be able to move to a specific location before each operation and should print a statement describing the action it has performed, including the coordinates where the action took place. The operations should be ordered as follows: glue and clamp the bottom panel, left and right sides, back, and finally the top. Then, it should change the drill bit, drill a hole in the center of the box, change the screw, and place the screw in the hole. Specify the tools in the script including 'wood glue', 'power drill with 1/8 inch drill bit', 'bar clamp', and '#8 size, 1.5 inch long wood screw'.

# The dimensions of the panels
width, height, depth, thickness = 5, 5, 5, 0.5

# Function to apply glue
def apply_glue(start, end):
    # Move to start point
    go_to_location(start)
    pick_up('wood glue')
    # Apply glue while moving to end point
    go_to_location(end)
    use_item('wood glue')
    return "Wood glue applied along line from ({}, {}, {}) to ({}, {}, {})".format(*start, *end)

...

# Gluing and clamping the panels in the correct order
print(apply_glue((0, 0, 0), (width, 0, 0)))             # Bottom Panel
print(place_clamp(width/2, 0, 0))                       # Bottom Panel
print(apply_glue((0, 0, 0), (0, height, 0)))            # Side Panel (Left)
print(place_clamp(0, height/2, 0))                      # Side Panel (Left)
print(apply_glue((0, height, 0), (width, height, 0)))   # Side Panel (Right)
print(place_clamp(width, height/2, 0))                  # Side Panel (Right)
print(apply_glue((width, 0, 0), (width, 0, depth)))     # Back Panel
print(place_clamp(width/2, height, 0))                  # Back Panel
print(apply_glue((0, 0, depth), (width, height, 0)))    # Top Panel
print(place_clamp(width/2, 0, depth))                   # Top Panel

# Drilling holes and placing screws
change_drill_bit('1/8 inch drill bit')
print(drill(width/2, height/2, thickness))              # Center of the box
change_screw('#8 size, 1.5 inch long wood screw')
print(place_screw(width/2, height/2, thickness))        # Center of the box

Can you convert these instructions into a human-readable form as a standard operating procedure?

Preparation: ...
Gluing and Assembly of Panels:
Applying Glue: Apply wood glue in a thin, consistent line along the edges where the panels will join. This includes the edges of the bottom panel and both side panels.
Assembly: Start by placing the bottom panel flat on your work surface. Align each side panel perpendicularly with the edges of the bottom panel, creating a U shape. Make sure the corners are square and the edges flush. Then, apply glue to the edges of the back panel and align it with the bottom and side panels. Finally, apply glue to the top panel and align it with the side panels and back panel.
Clamping: After each panel is placed, secure it with a bar clamp. Ensure clamps are tight, but not too tight to avoid bowing the wood. Leave the glue to dry according to the manufacturer's instructions.
Drilling Pilot Holes:
Drill Bit Installation: Replace the existing bit in your power drill with a 1/8 inch drill bit.
Drilling: Drill pilot holes at all eight corners where the panels join together. The pilot holes should be centered on the edge and go through both pieces of wood. This will help prevent the wood from splitting when the screws are inserted.
Screwing in Screws:
Screw Selection: Use #8 size, 1.5-inch long wood screws. These screws are robust enough to secure the box structure.
Screwing: Screw them into the pilot holes at each corner to secure the box structure. Make sure each screw is flush with or slightly recessed into the wood surface.
...

Fig. 44. Instructions for Assembling a Wooden Box using Specific Tools and Dimensions. This figure depicts a Python script for a household robot that assembles a wooden box with specific dimensions. The robot follows a sequence of operations using tools such as wood glue, a power drill with a 1/8 inch bit, a bar clamp, and a #8 size, 1.5 inch long wood screw.

7.1 Objective Evaluation (Q1)

Once a design or design space has been created, a typical design process proceeds by evaluating basic geometric features such as size, weight, and strength of the object.
In effect, this answers the question: does the item do what it was created to do? Most typically, certain features need to satisfy functional requirements in order to be suitable designs.

7.1.1 Mechanical properties. Here, we focus on analyzing the mechanical integrity of (1) a chair and (2) a cabinet. We began with a simple input design in text form (DS1) and a request for direct evaluation in calculated form (RF1), with an additional binary output asking whether a chair of a given design could support a given load. The specific prompt is included in Figure 46. GPT-4 immediately demonstrated the capacity to handle ambiguity well, assuming a type of wood (oak) and producing numerical material properties for that material when both were unspecified. It made and stated further assumptions about load and failure types. It evaluated the failure point by comparing the yield stress to compressive stress, computed as one quarter of the applied load over the cross-section of a chair leg. This is included in the chat snippet shown in Figure 46. However, in text form it outputted 94,692.2 Pa, while direct evaluation of the equation it listed in the output gives 94,937.7 Pa; thus, GPT-4 occasionally failed to perform basic correct in-line arithmetic or algebra. Although the number is only off by a small amount in this case, it can sporadically differ by much greater magnitudes. Along with the evaluation, it included discussion of other, more sophisticated considerations for failure, such as the type of connection between the legs and the seat. Also, upon repeating the same prompt, GPT-4 would vary whether it included self-weight in the load analysis and whether it evaluated uniform weight or only one leg, leading to small variations in results.

When asking for a function to evaluate chair failure (RF2), GPT-4 successfully generated Python code to evaluate whether a chair will break due to excessive compressive stress on the legs, using the same formula as described in the text exchange (RF1). GPT-4 was able to readily add multiple types of failure without error, also incorporating bending failure of the seat and excessive stress on the back using simple beam bending and structural mechanics equations. This multi-part failure assessment is included in Figure 46. It further automatically generated a function that could intake a parametric chair design with sensible feature parameters like leg_cross_sectional_area, seat_thickness and seat_material_bending_strength, allowing versatile use of this evaluation. When generating the function, it continued to handle ambiguity by making assumptions, including that the load would be distributed across all four legs, centered and uniform on the seat, and that the load on the back of the chair would be one third of the total weight. In the case of writing the function (RF2), as compared to text evaluation (RF1), it did not explicitly list all of the assumptions; rather, they had to be interpreted based on the equations used.

GPT-4 also incorporated several small errors and oversights in both cases. For instance, when generating a function to evaluate seat bending failure, it treated the seat as a simply supported cantilever beam, and assumed that the chair would break along the width (separating front from back) rather than along the length or at an angle to the base.
It also assumed that the bending stress on the back was evaluated as the load over the total area of the back, rather than at the connection surface of the back to the seat of the chair. However, as these functions were identified, they could be further refined by iterated discourse with GPT-4 to produce a more correct function. In a comparison of these two output form requests, RF1 and RF2, functional evaluation was easier to read, more accurate, and able to be implemented for a variety of input designs, but directly incorporated more assumptions into equations. During both types of evaluation, GPT-4 actively reported on potential causes of error in the evaluation, such as how the chair legs were attached to the seat. It consistently overlooked potential causes of failure such as buckling of the legs unless specifically prompted.

We found GPT-4 to adequately assess most basic mechanical properties of interest. Some properties relying on an understanding of the spatial arrangement of chair components were not able to be adequately assessed. GPT-4 had significant trouble generating a suitable evaluation of stability, and failed entirely to calculate a reasonable center of gravity for an input design despite many attempts. The closest attempt used the simple assumption that the center of gravity would be in the center of the chair seat. However, other complex physical properties were readily assessed. GPT-4 generated first-order code to assess the failure of a chair upon impact with a spherical projectile, with no difference in quality of the computation compared to static mechanical properties.

Given this chair made of wood, that has 4 legs that are 1 meter long (... content omitted by authors ...) can it support 100kg? Please give a final yes or no answer.

To determine whether the chair can support 100kg, we need to consider the strength of the wood used to construct the chair, as it determines the load-bearing capacity. However, this information was not provided. Nonetheless, let's assume that the wood used is Oak, which has a fairly high compressive strength of around 37 MPa or 37,000,000 Pascals. (... content omitted by authors ...) If we consider each leg bears 1/4 of the total weight, each leg will support 25kg. The pressure (force per unit area) on each leg then is Force/Area = (25kg * 9.8 m/s^2) / 0.00258064 m^2 = 94,692.2 Pascals or 94.7 kPa, which is well within the compressive strength of Oak, so the legs should hold up. (... content omitted by authors ...) Thus, based on these calculations and assumptions about the distribution of weight and the type of wood used, the answer is Yes, the chair can support 100kg.

Give me a function that can evaluate whether a chair of a typical design will break under a certain applied weight. Please consider multiple types of failure.

def will_chair_break(weight, leg_cross_sectional_area, leg_material_strength,
                     seat_thickness, seat_length, seat_width,
                     seat_material_bending_strength, back_height, back_width,
                     back_material_strength):
    # Calculate the load on each leg
    load_per_leg = weight / 4
    # Calculate the stress on each leg
    stress_on_leg = load_per_leg / leg_cross_sectional_area
    # Check if the stress on each leg exceeds the strength of the leg material
    if stress_on_leg > leg_material_strength:
        return True  # The chair will break
    (... omitted by authors: code continues on to assess seat and back bending ...)

Fig. 46. Comparing Text and Function-based Performance Evaluation of Chair Mechanics. When using text-based analysis, GPT-4 evaluates stresses one-by-one with more mistakes in computation and fewer features able to be assessed at once compared to function-based analysis.
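A hypothetical invocation of the Figure 46 function in SI units (all values here are ours for illustration; a 2 in x 2 in oak leg gives the 0.00258 m^2 cross-section used in the text exchange above):

# Illustrative call; weight is the applied load in newtons, strengths in Pa.
broke = will_chair_break(
    weight=100 * 9.8,                   # 100 kg load
    leg_cross_sectional_area=0.00258,   # m^2 (about 2 in x 2 in)
    leg_material_strength=37e6,         # compressive strength of oak
    seat_thickness=0.02, seat_length=0.45, seat_width=0.45,
    seat_material_bending_strength=97e6,
    back_height=0.40, back_width=0.45, back_material_strength=97e6)
print(broke)  # False, assuming the elided seat/back checks also pass:
              # ~95 kPa per leg is far below oak's 37 MPa strength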
To evaluate GPT-4's performance on code-based input (DS3 and DS4), we provided GPT-4 with an OpenJSCAD chair specification. When the parameters and parts of the chair were clearly-named salient features (DS3) like backThickness, leg1, chairSeat, and chairBack, GPT-4 was readily able to recognize the item as a chair and analyze desired properties, such as the breaking load of the seat. However, when we used identically-structured code with variable and object names that had been obscured (DS4), it could not recognize parts of the item to assess properties, for example to locate the seat or synonyms of the seat. This was true whether the names had been slightly obscured (e.g., as XZ_height, stick1, platform, and barrier, respectively) or entirely obscured (e.g., as Q, A1, B1 and so on). When asked about the design in the two obscured forms, GPT-4 guessed that the final item was a table with a narrow bookshelf and exhibited poor interpretation of the design and parts. Even when GPT-4 was challenged, it claimed that it could not be a chair because the back was not connected appropriately to the chair seat; this was an incorrect interpretation of the code, again indicating poor spatial reasoning. In a second case, when an input design for a cabinet (DS3) had one variable named shelfAllowance (used to slightly reduce the shelf width for easy assembly), GPT-4 erroneously assumed that this indicated the number of shelves. These results reinforce the idea that LLMs perform based on semantics, and that a design without clear descriptive words becomes much less manageable, causing DS4 to generally fail.

The evaluation process was repeated with DS3 and RF2 for the OpenJSCAD design of a cabinet as a box with shelves, a door, and a handle. From the inputted design, GPT-4 was prompted to create functions to evaluate a set of criteria: storage capacity, load capacity, material cost, and, for a more ambiguous feature, accessibility for a person in a wheelchair. Storage capacity was computed as the total volume enclosed by the cabinet, excluding shelves, as expected. In assessing load capacity, GPT-4 used the "sagulator" formula, a standard estimation found online for carpentry. However, GPT-4's implementation gave strange results, and GPT-4 was unable to provide a more correct form of the equation. For price, GPT-4 computed the volume of the cabinet walls and a cost per volume. Finally, to address accessibility, it estimated height and depth ranges that would be beneficial, assigning a higher accessibility score to shorter and deeper cabinets. However, it did not provide a source for the height and depth ranges that it scored more highly. This points to a potential limitation in the use of GPT-4 and LLMs for this kind of analysis: the source material for equations and standards of analysis may be unknown or even intentionally anonymized. Even when the equations are the standard first-order textbook equations per topic, they are almost always unreferenced. When different standards exist, across different countries or for different use cases, much more refinement would be needed to use GPT-4 to assess the mechanical integrity of a design.
In addition, these equations often work well for objects of a typical design, but for edge cases or unusual designs they would miss key failure modes, such as the buckling of a table with very slender legs or the excessive bending of a chair made from rubber. In a particularly apparent example of this type of failure (i.e., creating functions based on pattern-matching rather than judicious observation of likely failures), GPT-4 was asked over a series of iterations to help write code to render a spoon with sizes within a set of ranges in OpenJSCAD, then to assess ergonomics, which it evaluated based on dimensions. Finally, we requested GPT-4 to create a function to compute the spoon's breaking strength. Since it had been inadvertently primed by the long preceding discussion of spoon geometry, it proposed a strength evaluation using the basic heuristic of whether the spoon is within a standard size range (Figure 47). GPT-4 had to be prompted specifically for a yield analysis before offering a mechanics-based equation. At that point, it continued to handle ambiguity well and chose a most likely breaking point (the point between the handle and spoon scoop). But for a novice design engineer who might have assumed GPT-4's initial output was sound, this bold proposition of an unreasonable strength analysis on the first pass, without further explanation, causes some alarm. This serves as a reminder to not rely on GPT-4 alone without external validation of every step.

When assessing designs in text form (DS1, RF1) at an abstract level, GPT-4 was found to readily identify problems and present a sophisticated discussion of problem areas and considerations for the particular design in question and the metrics being considered. As such, we propose that a workflow for rigorous performance evaluation using GPT-4 should begin with a text-based discussion of the design (DS1 or DS2 with RF1) to understand the relevant features, with no other preceding text in that chat, followed by the development of equations with enough sophistication for the use case, presented in the form of functions for rapid assessment of an input design (RF2). This workflow is depicted in Figure 48, along with additional steps to ideally validate the final result. If an input design of a specific type was used, whether OpenJSCAD or another DSL, the form of the input was also provided using well-named variables, with each iteration of the chat requesting new code, to ensure the variable names did not mutate over time as they otherwise would.

There was a failure of GPT-4 to suggest refinements to the performance codes without specific prompting. For example, there are simple differences in von Mises, Tresca, and Mohr-Coulomb yield criteria for evaluating material failure under applied stress; however, GPT-4 would simply default to the most common, von Mises, without comment. It would regularly object that the analysis function was an oversimplification; additionally, it would assert that for proper evaluation, more features should be evaluated, more sophisticated tools such as FEA should be used, and structural analysis should be validated by a licensed professional engineer, especially for designs in which factor of safety is a concern.
These are all valid points: despite GPT-4's very large internal knowledge, it pattern-matches and does not reason at a level to generate the most correct or sophisticated analysis, and will tend to generate more simple rather than more complex equation-based analysis unless specifically walked through refining the code. However, it is capable of more sophisticated text-based discussion, which is why we have found that beginning with text and proceeding to functions provides a more effective workflow, as in Figure 48.

(... omitted by authors: Much prior discussion on appropriate spoon sizes and ergonomic factors ...)

Please create a function to evaluate whether a spoon of a given set of dimensions, made of metal, is mechanically strong enough to use.

function isSpoonStrong(spoonParameters) {
  const minThickness = 0.4; // Minimum thickness for the spoon material, in cm
  // Check that the handle dimensions are within reasonable limits
  if (spoonParameters.handleLength < 10 || spoonParameters.handleLength > 30 ||
      spoonParameters.handleWidth < 1 || spoonParameters.handleWidth > 3 ||
      spoonParameters.handleThickness < minThickness) {
    return false;
  }
  (... omitted by authors: Similar second analysis on spoon head dimensions ...)
  // If all the checks pass, the spoon is considered to be mechanically strong enough
  return true;
}

Please create a function using mechanical analysis (ex. yielding criterion) to evaluate whether a spoon of a given set of dimensions, made of metal, is mechanically strong enough to use.

function isSpoonMechanicallyStrong(spoonParameters) {
  (... omitted by authors: geometry, lever arm calculations ...)
  // Calculate bending stress at the neck
  const bendingStress = bendingMoment * y / I; // in N/mm^2 or MPa
  // true if the spoon is mechanically strong enough, otherwise false
  return bendingStress <= yieldStrengthStainlessSteel;
}

Fig. 47. Chat History Errors and Correction for Spoon Mechanics. When analyzing mechanics of a spoon after discussing dimensions in the preceding chat, GPT-4 generated a poor heuristic for spoon breaking from geometry alone; with very specific correction in the same chat, it recovered.

7.1.2 Quadcopter. We next explored the assessment of a dynamic electronic device, a quadcopter, as an example of using the workflow of Figure 48. GPT-4 was provided with specifications for the quadcopter that included battery voltage, battery capacity, total weight, and the dimensions of the copter (DS1). We prompted it to generate functions that evaluated the maximum amount of time the copter could hover in the air, the maximum distance it could travel, and the maximum vertical or horizontal velocity and acceleration with which it could travel (RF2). From the provided physical parameters, GPT-4 was able to generate equations to calculate the copter's inertial tensor, voltage-torque relation, and other kinematics and dynamics. We also independently asked GPT-4 to generate the physical parameters that would be needed to calculate such metrics, and it came up with the following: maximum thrust, total copter weight, battery capacity, aerodynamic characteristics (e.g., drag coefficient, rotor size, blade design), responsiveness and efficiency of the control system of the copter, additional payload, environmental conditions, and operational constraints. Although these parameters are all highly relevant, GPT-4's output lacked many crucial considerations without explicit prompting in text form.
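To give a flavor of the first-order estimators GPT-4 produced here, a hypothetical hover-time sketch (the function name, the 80% usable-capacity margin, and the grams-per-watt efficiency figure are our illustrative assumptions, not GPT-4's verbatim output):

def estimate_hover_time(battery_voltage_v, battery_capacity_mah,
                        total_weight_kg, hover_efficiency_g_per_w=8.0):
    # First-order hover time in minutes from battery energy and hover power.
    energy_wh = battery_voltage_v * battery_capacity_mah / 1000.0
    usable_wh = 0.8 * energy_wh  # keep a 20% discharge margin
    hover_power_w = (total_weight_kg * 1000.0) / hover_efficiency_g_per_w
    return 60.0 * usable_wh / hover_power_w

# e.g., the 3S 2200mAh 11.1V pack mentioned below, lifting a 1.2 kg copter:
print(estimate_hover_time(11.1, 2200, 1.2))  # ~7.8 minutes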
In this evaluation, GPT-4 did not initially include the constraint that the voltage of the controller needed to stay constant, even though this would be obvious to someone familiar with the domain of knowledge. This means that seemingly "obvious" considerations need to be explicitly included in the prompt in order for a feasible output to be generated. When asked to include this constraint, GPT-4 was able to understand the underlying reasons for the constraint, stating that a constant voltage is mandatory for the stability and accuracy of the flight controller. Through this exploration, we also determined that GPT-4 is able to successfully suggest a product and evaluate the copter based on specific batteries from a particular seller, such as HobbyKing LiPo batteries (e.g., 3S 2200mAh 11.1V).

Fig. 48. Suggested Performance Workflow. Performance analysis proceeds smoothly when GPT-4 first discusses the design and tradeoffs in text form, then creates methods to assess performance, before applying them to the design in question, with iterations within and between sections as needed.

GPT-4 seems to lack basic spatial intuition of what a copter should look like if the prompt only includes the dimensions of the entire copter rather than the dimensions of individual parts. It would hence incorrectly assume that the shape of the copter was a uniform convex solid such as a cylinder or rectangular prism, simplifying and limiting the possible analysis significantly. Thus, we would need to incorporate GPT-4's geometric design of the copter's frame, where the dimensions of all components are known, to properly assess aerodynamic performance. And, as with our prior trials assessing chair and cabinet designs, GPT-4 repeatedly failed to calculate center of gravity or stability metrics, even when given sufficient detail about the design and much iterated discussion.

For the most part, GPT-4 was able to perform the correct arithmetic operations using its own performance functions. But because the generated functions lack complete real-world considerations, it is best to compare GPT-4's calculated performance results with what is observed in simulation. We find that these performance functions are a reasonable approximator of copter performance in simulation. The LLM recognizes that the reliability of these results is directly dependent on the accuracy of the inputs, and additional inputs or conditions such as motor efficiency and aerodynamics need to be included in the prompt to match the real copter.

7.1.3 Finite element analysis. To investigate the computational performance analysis capabilities of GPT-4, and to build on the first-order mechanical calculations already done, we challenged it to develop a comprehensive framework for advanced performance analysis and structural evaluation using the finite element method (FEM). The primary focus was determining the likelihood of a chair breaking when subjected to external forces. Figure 49 lists the response and final code generated by GPT-4. With the application of FEM through the external library FEniCS, GPT-4 evaluates the von Mises stress, a crucial parameter in material failure prediction. By comparing this stress with the yield strength of the material, one could assess if the chair would fail under the applied load. For the development of the code, substantial back-and-forth iteration was required to create successful code due to its overall complexity.
One helpful point for gradually increasing complexity was to create code for a 2D example before asking GPT-4 to create a 3D version. In spite of these challenges, GPT-4 was highly efficient and successful in formulating a precise solution using the FEniCS library, an advanced tool for numerical solutions of PDEs. Not only did GPT-4 integrate the library into the Python code correctly, but it also applied a wide variety of FEniCS features, including defining material properties and boundary conditions and solving the problem using FEM. Caution must be taken, as GPT-4 occasionally suggests libraries and functions that do not exist. However, with correction it quickly recovers and suggests valid options. The stress distribution visualization in Figure 49 is performed on the chair previously designed by GPT-4 in Figure 9 and is the output of GPT-4's code rendered in Paraview (which GPT-4 also gives assistance to use), as well as on a chair mesh found from other sources. The result reveals a susceptibility to high stress at the back attachment section of the chair design proposed by GPT-4, as seen in Figure 9. This observation underscores the potential for future enhancements in this object's design.

Beyond code generation, GPT-4 also lends support in the local installation of these external libraries, such as FEniCS, so users can run the generated code. This assistance proves invaluable for practitioners who may have limited familiarity with these libraries, which are initially suggested by GPT-4 itself. Notably, studies have delved into the potential of GPT-4 to generate code integrating other external libraries, like OpenFOAM, for the purpose of performing computational performance analysis [17]. It's worth noting that GPT-4's capabilities in utilizing these libraries have certain limitations. It can only harness some of the basic features of FEniCS and struggles with more specific, custom usages of the library, such as applying complex loading conditions. Furthermore, GPT-4 assumes homogeneous material properties for the chair, an oversimplification that doesn't align with the more diverse characteristics found in real-world materials. Moreover, the training date cutoff for GPT-4 means that sometimes only older functions or libraries may be used, without current updates.

7.2 Subjective Evaluation (Q2)

Subjective properties have a higher dependence on lexical input, making their evaluation using LLMs an intriguing proposition. We began with an assessment to compare the use of semantics for assessing subjective properties via 3 output forms: categorization or labeling (RF3), pairwise comparison, and overall ranking. We generated a simple parametric 4-legged chair with a back, then input eight versions with different leg lengths, seat widths, and back heights into GPT-4 (DS1). GPT-4 was then asked three similar queries: (1) assign to each chair a label of "large," "medium," or "small" (RF3); (2) rank all chairs from largest to smallest; and (3) in a pairwise comparison, indicate if a given chair was larger or smaller than another. Each of these inputs was given independently, so as not to influence the different tests based on prior answers in the chat dialogue. In each case, GPT-4 assigned the same overall ranking. Figure 50 shows the chairs rendered in ranked order including the labels for categorization, using a combined implicit consideration of seat area, back height, and leg height.
In a similar query, spoons of different handle length and thickness, and scoop length, width, and curvature were compared, with similar results. In that case, GPT-4 elected to compare spoons by the length of the scoop alone, handling the ambiguity of the question by making a decision about what single quantity mattered most. When handling higher levels of ambiguity, e.g., assigning comfort levels to shoes described in text input, GPT-4 sometimes refused to answer. To bypass this, we determined that it was essential to ask GPT-4 directly to give an output of a certain kind, such as classification into set categories. For instance, the question "Is this shoe comfortable?" would raise objections, a non-answer, and a discussion of what contributes to general shoe comfort. We could circumvent this by asking "Please state whether this shoe is likely very comfortable, comfortable, uncomfortable, or very uncomfortable, and provide one sentence of justification." Despite its continued objections, GPT-4's responses were usually reasonably justified, noting aspects like use of breathable material, adjustability of laces, shock absorption, and traction of the sole. These results indicate that the semantics of the type of assessment (ranking, categorization, or scoring) do not have a large influence on the final result of subjective analysis, as long as some type is chosen.

Give me one open-source FEM library.

Sure, one open-source FEM (Finite Element Method) library is "FEniCS".

Can you please use Python code to implement FEM linear elasticity to determine if the chair will break? i.e., if the stress will violate the yield constraint? Assume the bottom is fixed and a constant downward load is applied at the top. You can use FEniCS. The problem is in 3D. You can assume a simple mesh.

from fenics import *
from ufl import nabla_div

V = VectorFunctionSpace(mesh, 'P', 1)

# Define boundary condition
(... omitted by authors: boundary details ...)
bc = DirichletBC(V, Constant((0, 0, 0)), boundary_bottom)

# Define strain and stress
def epsilon(u):
    return 0.5*(nabla_grad(u) + nabla_grad(u).T)

def sigma(u):
    (... omitted by authors: modulus definition ...)
    mu = 1     # shear modulus
    lmbda = 1  # first Lame's parameter
    return lmbda*nabla_div(u)*Identity(3) + 2*mu*epsilon(u)

# Define variational problem
u = TrialFunction(V)
d = TestFunction(V)
f = Constant((0, 0, -10))  # body force
T = Constant((0, 0, 0))    # traction
a = inner(sigma(u), epsilon(d))*dx
L = dot(f, d)*dx + dot(T, d)*ds

# Compute solution
u = Function(V)
solve(a == L, u, bc)

# Compute Von Mises stress
s = sigma(u) - (1./3)*tr(sigma(u))*Identity(3)  # 3 for 3D
von_Mises = abs(sqrt(3./2*inner(s, s)))
(... omitted by authors: von_Mises details, setting the yield stress ...)

# Check if the yield stress is violated
yield_violation = Von_Mises.vector().get_local() > yield_stress
if any(yield_violation):
    print("Yield constraint is violated.")
else:
    print("Yield constraint is not violated.")
(... omitted by authors: saving results to file ...)

Fig. 49. Finite Element Analysis for Chair Mechanics. GPT-4 determines if the chair will break by performing FEM-based stress analysis. In the images, the top chair analyzed is designed by GPT-4 in Figure 9, with von Mises stress displayed from blue (zero) to a maximal value (red). A second chair with a mesh from an external source is analyzed using the same code.
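The elided "von_Mises details" step in Figure 49 is typically a projection of the UFL stress expression onto a scalar function space, so that nodal values exist for the yield check; a sketch following the standard FEniCS elasticity tutorial (our reconstruction with an illustrative yield stress, not the paper's verbatim code):

# Project the von Mises expression onto P1 so nodal values can be read out.
V0 = FunctionSpace(mesh, 'P', 1)
Von_Mises = project(von_Mises, V0)  # field used in the yield check above
yield_stress = 37e6                 # Pa; an illustrative value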
I have 10 chairs. 1. Chair with 12 inches x 12 inches seat area, 4 feet tall, and 2 feet back. 2. Chair with 6 inches x 24 inches seat area, 2 feet tall, and 1.5 feet back. (... content omitted by authors ...) Please rank these in order of size and give one sentence of overall justification.

Here's the ranking from largest to smallest, based on seat area, height, and back height:
Chair with 36 inches x 36 inches seat area, 3 feet tall, and 3 feet back.
Chair with 28 inches x 34 inches seat area, 3.5 feet tall, and 3 feet back.
(... content omitted by authors ...)
Justification: The ranking is based on the combination of seat area, overall height, and back height, with chairs having larger seat areas and higher overall dimensions ranked higher. Despite the higher overall height, the chair with a 12 inches x 12 inches seat area (chair #1) is ranked lower due to its small seat area. The chair with a 6 inches x 24 inches seat area (chair #10) is ranked last due to its narrow seat and shorter overall height.

Fig. 50. Categorization and Ranking of Chairs by Size. Evaluation by GPT-4 of a series of chairs in ranked order of size from largest to smallest, left to right, and independently sorted into categories, "large", "medium", and "small".

However, certain prompt structures may be required to avoid refusals to answer, and the simplest prompt structure to ensure this was to ask for a specific kind of output response.

7.2.1 Semi-subjective evaluation of sustainability. To challenge GPT-4 to evaluate subjective criteria dependent on more abstract input parameters, we asked it to create a list of key criteria that go into evaluating sustainability, and to evaluate chair designs based on these criteria, scoring each category from one to ten (RF4). Given GPT-4's limited understanding of numerically-specified meshes or spatial arrangements, we used text-based information (DS1) for commercial chairs from Ikea and Home Depot. GPT-4 was unable to access this information on its own when prompted with product names, so for this test case, the text from product pages was pasted into the GPT-4 chat dialogue. This information included each chair's name, a text description of its design, material, and appearance, and some numerical information such as dimensions, weight, and cost (Figure 51). Upon requesting the evaluated score for sustainability metrics, it outputted seemingly reasonable numbers with justification based on the text description. The justifications for each property score were generally reasonable but rarely entirely correct. For example, the remark in Figure 51 for modular_design about swapping seat shells was a misinterpretation of the product description: chairs with different seat shell colors were available for purchase, but a single chair could not swap shells. In addition, for this example and most other tests, GPT-4 refrained from assigning high scores (9-10/10) or low scores (1-3/10) within each category, which likely contributed to errors. A further function generated by GPT-4 readily combined the individual property scores into an overall sustainability score for a given input design.

I'm looking at the Karlpetter chair from Ikea. The text description says: KARLPETTER Chair, Gunnared medium gray/Sefast black. $75.00. This upholstered chair provides a cozy layer of comfort while working, crafting or dining. Choose your favorite color to give your room a personal look. Product details: You decide the style of your chair. The seat shell is available in different colors, and the underframe SEFAST is available in white, black and chrome-plated colors. Easy assembly with only 4 screws. You can revitalize with a new look at home by swapping the seat shell. SEFAST underframe works well with LIDÅS or SIGTRYGG seat shells, sold separately. (... content omitted by authors ...) This chair has been tested for home use and meets the requirements for durability and safety, set forth in the following standards: EN 12520 and EN 1022. Please use this text description to provide a quantitative ranking of the chair according to the chair_properties input vector above.

karlpetter_properties = {
    'use_of_natural_materials': 5,   # There's use of wood veneer, but also a lot of synthetic materials
    'simplicity_of_design': 8,       # The chair is relatively simple in design and requires only four screws to assemble
    'modular_design': 7,             # The chair allows for swapping of seat shells, indicating some level of modularity
    'use_of_recycled_materials': 8,  # The polyester used is at least 90% recycled
    'design_for_disassembly': 5,     # Not explicitly stated, but some level of disassembly seems possible (swapping of seat shells)
    'durability': 8                  # The chair has been tested for durability and safety according to EN standards
}

Provide a ranking for the Karlpetter, Klinten, and Voxlov chairs using the descriptions shared (... content omitted by authors ...)

Note that these scores are subjective and based on the specific descriptions provided. The scores are on a scale from 0 to 10 with 10 being the highest or most representative of the category.
Karlpetter Chair:
Variety and Appropriateness of Material Choices: 6 (Uses a blend of fabric, wood veneer, and steel)
Degree of Decorative Complexity: 3 (The description doesn't indicate intricate decorative elements)
Evidence of Handcrafting: 2 (No mention of handcrafted elements)
Extent of Ornamental Detailing: 2 (Simple and functional design, minimal ornamentation)
Deviation from Standard Proportions: 4 (Appears to have standard proportions for a dining chair)
Upholstery Use and Quality: 7 (Upholstered seat shell with foam filling)
Material Innovation and Diversity: 6 (Combination of different materials but nothing groundbreaking)
(... content omitted by authors ...)
Remember, these scores are not absolute. They're best estimates based on the given descriptions and could vary with more precise information or based on individual interpretations of the chair designs.

Fig. 51. Evaluation of Chair Aesthetics. Evaluation by GPT-4 to score chairs based on a text description of the chair, using metrics of sustainability and aesthetics by historical periods of influence. Results of chair evaluation by style showing reasonable (green check mark), middling (yellow tilde), and unreasonable (red ex) classifications.
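A sketch of what such a score-combining function might look like, applied to the karlpetter_properties dictionary from Figure 51 (the equal default weights and the function name are our illustrative choices, not GPT-4's actual output):

# Hypothetical re-creation of the combining step described above.
def overall_sustainability_score(properties, weights=None):
    # Combine 1-10 property scores into a single weighted average.
    if weights is None:
        weights = {name: 1.0 for name in properties}
    total = sum(weights[name] * score for name, score in properties.items())
    return total / sum(weights.values())

print(overall_sustainability_score(karlpetter_properties))  # ~6.8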
7.2.2 Fully subjective aesthetic evaluation. To evaluate the aesthetic design of an item, the physical appearance must be known, so again the listings from product pages were used as the input data. When prompted to create a function to evaluate aesthetics in general, GPT-4 refused, noting that it is "highly subjective and can vary greatly depending on individual tastes and preferences", and wrote a function in python with a rather simple subfunction for aesthetics: # Here we'll use a provided aesthetic score.

In a more carefully curated prompting setup, a range of historical periods was identified that influence chair design, including Egyptian, Greek and Roman, Renaissance, Bauhaus (a semi-minimalist German-inspired design including rounded features), and Minimalist. GPT-4 identified criteria to differentiate between these historical styles based on seven properties: material choice, decorative complexity, evidence of handcrafting, extent of ornamentation, deviation from standard proportions, upholstery use and quality, and material innovation. Based on these categories, GPT-4 evaluated each historical period and chair, and created a function to use the scores to categorize the style of each chair (a sketch of such a classifier follows below). A selection of text from one input/output is included in Figure 51. In every output, GPT-4 would also give a reminder that scores were approximate or arbitrary and should be adjusted. And as before, scoring on a 1-10 scale was generally limited to intermediate values in the range; for instance, for Degree of Decorative Complexity, a score of 3/10 was given even though the justification states that no decorative elements were indicated. Even so, the results of the categorization (Figure 51) seem generally reasonable, with most chairs placed into categories that appear subjectively appropriate; a plain metal stool was classified as minimalist, a soft lounge chair with a floral pattern was classified as Renaissance, and a double end chaise lounge was classified as Greek and Roman.

A couple of types of mistakes occurred in the classification. First, most chairs were sorted into the Minimalist category, including the faux leather swivel lounge chair and two soft-sided recliners (not shown). Second, several other design styles that may have been a better fit were included in the scoring but were not found to be best fits in the evaluation, indicating that this set of GPT-4's scores for the historical periods was not appropriately distributed to capture the right features for all chairs. Third, upon re-evaluating scores over a few iterations, we found that different categories could be established and chairs could switch categories at times due to subjective scoring. Nevertheless, these general issues persisted, such as occasional mistaken categorizations and having one "catch-all" category that was used more than others.

In a similar testing setup, GPT-4 was used to identify criteria to help a user decide the most appropriate room in a house in which to place a chair of a given design. In this second case, it created categories for criteria used to select the room of a house for a chair, including size, comfort, weight, pet-friendliness, and weather resistance. It further created a list of weightings for the importance of each of these criteria based on the room in question, and ideal ranges for the quantitative features size and weight. It was finally used to create a function to distribute a set of chairs to the set of most appropriate rooms in a house. However, upon evaluation, the results were mediocre: for instance, a lounge chair was sent to the kitchen. It otherwise sorted a soft chair to the living room, a weather-resistant chair to the porch, and a sturdy chair with a soft lining to the study room. More careful selection of evaluation criteria could certainly improve on these results, as could the inclusion of more details about the chairs and their desired purposes in the rooms in question.
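A sketch of the kind of style-categorization function described above (the period profiles, the property subset, and the nearest-profile rule are our hypothetical stand-ins; the paper does not list GPT-4's actual per-period scores):

# Hypothetical nearest-profile classifier over 1-10 property scores.
PERIOD_PROFILES = {
    "Egyptian":        {"decorative_complexity": 8, "handcrafting": 9, "upholstery": 3},
    "Greek and Roman": {"decorative_complexity": 7, "handcrafting": 8, "upholstery": 5},
    "Renaissance":     {"decorative_complexity": 9, "handcrafting": 9, "upholstery": 8},
    "Bauhaus":         {"decorative_complexity": 3, "handcrafting": 2, "upholstery": 4},
    "Minimalist":      {"decorative_complexity": 1, "handcrafting": 1, "upholstery": 2},
}

def classify_style(chair_scores):
    # Return the period whose profile is closest (L1 distance) to the chair.
    def distance(profile):
        return sum(abs(profile[k] - chair_scores[k]) for k in profile)
    return min(PERIOD_PROFILES, key=lambda p: distance(PERIOD_PROFILES[p]))

print(classify_style({"decorative_complexity": 3, "handcrafting": 2, "upholstery": 7}))
# -> "Bauhaus" under these made-up profiles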
7.3 Discussion

In the evaluation of performance, GPT-4 was generally successful, though it exhibited an array of intriguing behavior patterns. In this section, we elaborate on GPT-4's key capabilities (C), limitations (L), dualisms (D), and opportunities in the context of design to performance, as illustrated by our example cases in the present section.

C.1 Extensive Knowledge Base in Performance: Through discussion in text form, GPT-4 could suggest design considerations and metrics at a fairly sophisticated level. Even when asked to evaluate ambiguous requests, when details are left out, or when the performance metric is complex, GPT-4 is still able to output reasonable first-order approximation functions. The generated output evaluation functions usually worked, having no coding errors in Python; errors in JavaScript or OpenJSCAD were more frequent, but they were usually directly resolvable. GPT-4 was also able to sort items into categories, and to generate rankings among a set of designs without giving explicit intermediate evaluations.

C.2 Iteration Support: GPT-4 was able to eventually assess any property we tested, although the quality of assessment varied. When mistakes were made, further questioning could support the refinement of code to a point where it improved. Particularly for the complex example of the FEA, this took many steps to refine, but GPT-4 responded well enough to stay on track, respond to troubleshooting feedback as well as conceptual feedback, and finally create usable code.

C.3 Modularity Support: Functions could be effectively built up point by point, with modifications made according to changing needs. GPT-4 could adjust part of a scoring system, such as switching one item for another, or create the same type of scoring system for another use case, using the framework of the first system to create the second.

L.1 Reasoning Challenges: GPT-4 relied on semantic clues, such as variable names, to understand and assess designs. It overall failed to appropriately evaluate performance that required spatial reasoning, like center of gravity or stability, for items having multiple components. In addition, earlier parts of a conversation could lead GPT-4 to choose evaluation metrics poorly, such as a discussion of spoon dimensions leading it to evaluate whether a spoon is "strong" based on whether its size is within a normal range. When considering subjective metrics that are not typically quantified, GPT-4 would object. Upon requesting more sophisticated or more abstract evaluation, it would refuse to answer on the first attempt.

Potential Solutions: To understand designs, they must be described with enough text-based semantic clues for GPT-4 to handle. Spatial reasoning issues could be resolved using external methods, such as external FEA analysis or other existing APIs to perform these evaluations. To choose the quality of evaluation equations, more discussion with GPT-4 could reveal the use-case for the chosen equations and alternatives, allowing a user to decide if another option may be more suitable. To assess subjective metrics, it worked best to develop scoring systems by breaking down a subjective feature into smaller, more quantifiable parts that GPT-4 could approach. And to bypass refusals to give a concrete answer, prompt engineering on its own could suffice, by requesting a specific enough type of output.
L.2 Correctness and Verification: The source material for equations used by GPT-4 in evaluation was usually undefined, which can contribute to error and often embeds assumptions. When calling external libraries, GPT-4 occasionally invented fake libraries that could not function. Or, when working with OpenJSCAD designs, it occasionally created designs using nonfunctional methods or nonworking code, and simply complained that the language had been updated past its training cutoff.

Potential Solutions: An external checker would be needed to verify the source of equations against an objective standard to ensure reliability, and when challenged, GPT-4 can uncover assumptions in choices of evaluation equations. External options for checking could include using metrics and equations established by published standards for engineering codes, and those proposed for items such as sustainability, safety, and ergonomics as appropriate to the use case. To address the use of fake libraries or fake methods: once GPT-4 was challenged enough times, it would eventually offer an existing option. A more efficient solution, when it cycled through fake options for OpenJSCAD programming, was to input a working example of any kind into GPT-4 along with the request for working code, using its capacity for modularity to help it structure a working response.

L.3 Scalability: Other challenges provided obstacles to evaluation. For objective criteria, first-order analysis is readily available on all metrics tested, but the scalability in complexity is limited. It was possible but more difficult to get more advanced characterization, for example generating code for FEA for mechanics. As another challenge, the quality of evaluation was found to be best when 1-2 performance metrics were analyzed at once. When too much was requested at once, the output quality decreased.

Potential Solutions: To handle the limited scalability of the complexity of analysis in a given domain, use of existing domain-specific APIs would be suggested. To handle the limitation in the number of metrics to be assessed, the analysis for metrics should be developed one by one into subfunctions that are then stitched together. However, making a longer chat in this format then runs into memory issues with GPT-4, which we found would forget sets of function inputs and other details within two exchanges. This, in turn, requires giving reminders of the important parts of previous answers (such as the overall function input) when generating each subfunction. When generating the FEA code, a suitable solution was to have GPT-4 keep repeating the same entire code, and occasionally switch between asking for 2D and 3D versions to create something simple enough before increasing the challenge level, and iterating back again when subsequent parts of the code were found to break, until the entire function worked.

Opportunities. We recommend that a good workflow for analyzing performance would utilize a buildup of complexity, beginning with discussing the design in text form and then generating a function to evaluate a design input in a parametric form. Many issues arising from performance evaluation could be attenuated by relying more on existing methods, libraries, and APIs that have already been created for the use-case in question.
8 PERFORMANCE AND DESIGN SPACE TO DESIGN (INVERSE DESIGN)

Although generative algorithms can produce candidates for designs, there is no guarantee concerning their quality. Inverse design is focused on producing designs that are, by some metric, as close to optimal as possible, given the constraints. Put in the vocabulary of the preceding sections, given a design space and performance metrics (which can define values to be optimized or constraints to be satisfied), inverse design answers the following question: which design in our space provides optimal performance without violating any constraints? A design generated by an LLM must therefore satisfy several requirements: 1) it must be valid, 2) it must be performant, 3) it must satisfy design constraints, and, 4) in the context of manufacturing, it must be buildable. With 3) and 4), we note the persistent reality of the sim-to-real gap; that is, objective and constraint values may differ in silico and in situ.

Basic challenges involve specification of the inverse problem to an LLM (much of which was described in previous sections), as well as generation of an effective algorithm for design optimization. Although LLMs cannot natively search for optimal solutions to a novel problem, they can make educated starting guesses and output optimization code that users can execute. Much of this section is thus focused on prompting LLMs to generate meaningful code for problems dependent on aspects such as their parameterization support (e.g., continuous versus discrete domains), performance objective landscape, or fabrication constraints. Real-world problems introduce nuanced challenges, including exploring over multiple competing objectives, difficult-to-specify objectives (such as aesthetics and objectives that depend on long-term use), and an evolving landscape of emerging methods that an LLM may not know about. In this context, we could consider whether GPT-4 can propose strategies (even novel ones) that free designers from some of the typical burdens associated with the optimization pipeline. With these considerations, we aim to investigate the following questions:

Q1 When can GPT-4 solve a problem analytically, and when does it need to resort to using an outside tool (e.g., a programmed algorithm)?
Q2 Can GPT-4 choose reasonable algorithms for different types of supports for constraints, objectives, and decision spaces (e.g., continuous, discrete, binary)?
Q3 Can GPT-4 assist designers in navigating the landscape of possible trade-offs when multiple conflicting objectives are present?
Q4 Can GPT-4 support optimization in contexts that require additional knowledge, specifically when a design space is not properly defined or is missing constraints?

In this section, we investigate, generally speaking, modern LLMs' abilities to navigate and (semi-)automate design optimization problems.

8.1 GPT-4: Analytical vs. Outside Tools (Q1)

We know that GPT-4 has the ability to reason about many mathematical operations, including both algebra and calculus, which is sufficient to solve many real-world engineering problems. We emphasize "reasoning" because, although GPT-4 clearly proposes reasonable analysis steps, it is not obvious that GPT-4 is correctly executing those steps; as we'll see, GPT-4 often makes mathematical errors. Still, it is reasonable to wonder if GPT-4's own internal reasoning is sufficient for inverse design. Where are the limits of that reasoning? When must it resort to code and external libraries, or plugins?
Each of these approaches has its own pitfalls that suggest caution for developers. Consider an example in which we maximize the stability of a table (Fig. 52). GPT-4 correctly describes that an object is statically stable when its center of mass lies within its support polygon. An object's stability is typically considered maximized when the object remains stable under as large a perturbation as possible. In principle, that typically means two things: 1) moving the center of mass as far away from the boundary of the support polygon as possible, and 2) decreasing the object's experienced motion (typically caused by gravitational torquing) when perturbed. GPT-4 is able to apply these intuitive principles to reason about the optimal solution within given bounds in this case. In a similar example shown in Fig. 53, the Wolfram plugin is enabled, which GPT-4 can selectively call at its discretion. While the Wolfram plugin was a natural choice for solving what appears to be a simple analytical optimization problem, GPT-4 timed out. In practice, this can happen for at least three reasons: 1) not enough time was allotted for the computation, 2) the problem is too difficult for Wolfram to handle, or, more generally, 3) the query may produce a problem that is not tractable or – in the extreme case – not computable [37]. Although it might seem trivial to provide Wolfram with more time to complete the computation, in practice, the user has no feedback on how long the computation would take. It is unreasonable to ask a user to wait indefinitely without feedback, and most numerical optimization algorithms will be unable to provide a reasonable estimate of progress. In this case, "anytime algorithms" (which can return a valid partial or approximate solution even if interrupted early) may be especially practical [42]. After failing to optimize over the full space using Wolfram, GPT-4 continues the conversation by reasoning that the optimum value will occur near the boundary of the constraints (Fig. 53). By exploiting this reasoning, it successfully uses the Wolfram plugin to compute and evaluate the equations corresponding to a small set of extremal points in the design space. Despite this, it fails to realize that certain solutions dominate others, and does not prune out bad candidates. Moreover, GPT-4 neglects to justify or prove its claim that the optimum should occur near the boundaries; though it was correct in this case, this approach may fail in general. In a follow-up experiment (Fig. 54), GPT-4 is asked to perform the same optimization task via Python code, which enables it to use an external library. It chooses L-BFGS-B, which is a reasonable, standard, and easily accessible (though not state-of-the-art) solver for continuous-valued problems. It does not, however, provide gradients that can expedite the computation unless prompted for them. We explicitly prompt GPT-4 to provide the gradients (Fig. 55) and visualize the results in Fig. 56. Generally speaking, the unoptimized approach on GPT-4's part is an issue w.r.t. performance, as not all users will be intimately familiar with all (or perhaps any) optimization libraries, and they may not realize that by providing additional information (e.g., gradients), the computation can be expedited. GPT-4 also does not elect to make use of Wolfram or autodifferentiation; in practice, lack of direct computation can lead to errors.
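To make the anytime idea concrete, the fragment below wraps scipy's L-BFGS-B in a simple wall-clock budget, restarting from the best point found so far. This is a minimal sketch of the pattern (not code produced by GPT-4); the stability objective matches the table example, while the r and R values, budget, and restart granularity are illustrative choices.

import time
import numpy as np
from scipy.optimize import minimize

def f(x, r=0.05, R=0.5):
    # Table-stability objective from the running example.
    h, H = x
    return (R**2 * H**2 + 2 * r**2 * h**2) / (R**2 * H + 4 * r**2 * h)

def anytime_minimize(fun, x0, bounds, budget_s=1.0, chunk_iters=5):
    # Keep a valid best-so-far answer; stop when the budget is exhausted.
    best_x, best_f = np.asarray(x0, float), fun(x0)
    deadline = time.monotonic() + budget_s
    while time.monotonic() < deadline:
        res = minimize(fun, best_x, method="L-BFGS-B", bounds=bounds,
                       options={"maxiter": chunk_iters})
        if res.fun < best_f:
            best_x, best_f = res.x, res.fun
        if res.success:  # converged before the budget ran out
            break
    return best_x, best_f  # always a valid (possibly approximate) answer

x, fx = anytime_minimize(f, [2.0, 3.0], bounds=[(1, 3), (1, 5)], budget_s=0.5)
print(x, fx)

Because each restart is warm-started from the incumbent, interrupting the loop at any point still yields a valid and progressively better design, which is exactly the guarantee a timed-out plugin call cannot provide.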
Let's say the bounds for the height of the legs are between 1 cm and 3 cm, and the height of the table is between 1 and 5 cm. Please optimize the parameters such that the table is as statically stable as possible, and explain the process.

The static stability of a table is primarily determined by its center of mass and the area of its base. A table is more stable if its center of mass is lower and the area of its base is larger. In this case, the table's base is defined by the placement of the four legs. If we assume that the legs are placed at the maximum radius from the center of the table, the base area is maximized and thus the stability is also maximized. The center of mass is determined by the heights of the table top and the legs. To lower the center of mass, we want to minimize the height of the table top and maximize the height of the legs. Given the bounds you provided, the optimal parameters for the table would be:
• Height of the legs (h): 3 cm
• Height of the table top (H): 1 cm
This configuration maximizes the height of the legs and minimizes the height of the table top, thus lowering the center of mass and increasing the table's static stability. Please note that this analysis assumes that the table is on a flat surface and that the weight distribution in the table top is uniform. If the weight distribution is not uniform (for example, if there is a heavy object on one side of the table), the table may still tip over. Also, this analysis does not take into account the dynamic stability of the table, which involves how the table responds to forces and movements. For example, a table with very tall, thin legs may be statically stable but not dynamically stable. Finally, this analysis assumes that the table is made of a uniform material. If the table top is made of a heavier material than the legs, the center of mass will be higher and the table will be less stable.

Fig. 52. Intuitive Optimization. Without specific prompting to take an analytical or computational approach, GPT-4 may reason intuitively about the physics of a problem. This example is an offshoot of the conversation in Fig. 53, after the initial exchange.

Later in this section, we demonstrate how GPT-4 struggles to solve a (much) more difficult version of this optimization problem. Throughout these experiments, we noticed several common issues in GPT-4's approach. First, if users do not prompt GPT-4 explicitly to show its work, it may resort to "common-sense" reasoning about a problem. Although this reasoning could be correct, GPT-4 provides no certificate to a user, as seen in the "intuitive" physics of Fig. 52, or the boundary-aligned optima assertion in Fig. 53. Another issue occurs if it is difficult to find a library to solve a particular task; in this case, GPT-4 often gives up or attempts to write its own code. If the code is detail-heavy, it may be too difficult for GPT-4 to write correctly and the code/solution may be incorrect. If a library does exist but is uncommonly used, GPT-4 may give incorrect instructions on how to install/use that library; or, in some cases, GPT-4 may hallucinate a library altogether.

8.2 Reasoning about different problem types and selecting appropriate solvers (Q2)
To test GPT-4's understanding of various problem domains and its ability to identify appropriate solutions for each, we conducted several experiments spanning a wide range of search spaces, constraint spaces, and performance spaces.
In some cases, the problems have additional real-world considerations of which GPT-4 must be cognizant in order to choose a suitable optimization approach. Tables 2 and 3 provide a comparison of different problems that GPT-4 was asked to solve. We describe each example in additional detail below, with the exception of the table stability optimization (which was presented in the previous section). Overall, we found that even over varying problem types, GPT-4 exhibits extreme robustness when reasoning about and choosing an adequate solver for any given problem. In cases where a more sophisticated algorithm was needed, it tended to choose at least the correct algorithm class, even if it was not always aware of the best version or implementation. One notable example was in optimizing robot topologies for ground locomotion: GPT-4 identified an evolutionary algorithm as an effective optimization method, but did not choose any state-of-the-art specific algorithms or implementations.

Let's say I want to design a table with four legs and a top. Assume that the four legs are radially symmetric, and the top is a cylinder and the legs are cylinders. Can you give me the equation for the center of mass of the table in x, y, and z, as a function of the natural parameters of the primitives?

Sure, let's denote the following parameters:
• $R$ is the radius of the table top.
• $r$ is the radius of each leg.
• $H$ is the height of the table top.
• $h$ is the height of each leg.
• $\rho$ is the uniform density of the material the table is made of.
The mass of the table top $M_{top}$ and each leg $M_{leg}$ can be calculated using the volume of a cylinder $V = \pi r^2 h$ and the definition of density $\rho = M/V$, which gives us:
• $M_{top} = \rho \pi R^2 H$
• $M_{leg} = \rho \pi r^2 h$
The total mass of the table $M$ is the mass of the top plus the mass of four legs:
• $M = M_{top} + 4 M_{leg}$
The center of mass of the table in the z-direction, $z_{cm}$, can be calculated by taking the weighted average of the centers of mass of the top and the legs. The center of mass of a cylinder is at its midpoint, so for the top it is at $\frac{H}{2}$ and for the legs it is at $\frac{h}{2}$. This gives us:
• $z_{cm} = \frac{1}{M}\left(M_{top}\frac{H}{2} + 4 M_{leg}\frac{h}{2}\right)$
Since the table is radially symmetric, the center of mass in the x and y directions will be at the origin (0,0): $x_{cm} = y_{cm} = 0$.
So the center of mass of the table is at $(0, 0, z_{cm})$.

Ok. Let's say the bounds for the height of the legs are between 1 cm and 3 cm, and the height of the table is between 1 and 5 cm. Show me an equation that would measure static stability, and find the minimum of that equation.

The static stability of a table can be quantified by the height of the center of mass. The lower the center of mass, the more stable the table is. Therefore, we can use the equation for $z_{cm}$ as our measure of static stability. The equation for $z_{cm}$ is: (... omitted by authors: substituting in equations found in previous response. Below is the final result. ...) Since $R$, $r$, and $\rho$ are constant, we can treat this as a function of $h$ and $H$ only:
• $f(h, H) = \frac{R^2 H^2 + 2 r^2 h^2}{R^2 H + 4 r^2 h}$
We can find the minimum of this function within the given bounds using calculus or numerical optimization methods. However, because this is a complex optimization problem, it would be best to use a numerical method. Let's use the Wolfram Language to find the minimum of this function within the given bounds. (... omitted by authors: using Wolfram plugin ...) I apologize for the inconvenience. It seems that the computation took longer than expected. This is a complex optimization problem that might require a significant amount of computational resources. Let's try a different approach.
We can make a reasonable assumption that the minimum of the function will occur at the boundary of the constraints. This is often the case for physical systems. Therefore, we can evaluate the function at the four corners of the parameter space:
• $f(1, 1)$
• $f(1, 5)$
• $f(3, 1)$
• $f(3, 5)$
Let's calculate these four values. (... omitted by authors: using Wolfram plugin and outputting the results. ...)

Fig. 53. Attempt to Optimize a Table Analytically. Although GPT-4 is able to provide a formula for measuring the static stability of a table, the Wolfram plugin is unable to perform the optimization and instead defaults to an approximation of the function minimum.

Since Wolfram is having trouble, can you instead provide me code to find the optimum in the general case?

(... content omitted by authors ...)

import numpy as np
from scipy.optimize import minimize

# Define the function
def f(x, r, R):
    h, H = x
    return (R**2 * H**2 + 2 * r**2 * h**2) / (R**2 * H + 4 * r**2 * h)

# Define the bounds and initial guess
bounds = [(1, 3), (1, 5)]
x0 = [2, 3]  # initial guess

(... omitted by authors: setting values of r and R ...)

# Find the minimum of the function
res = minimize(f, x0, args=(r, R), bounds=bounds)
print("The minimum of the function is at h = {:.2f} cm, H = {:.2f} cm".format(*res.x))

(... content omitted by authors ...)

Which optimization method does minimize use?

The scipy.optimize.minimize function in Python uses the Broyden–Fletcher–Goldfarb–Shanno (BFGS) algorithm by default for unconstrained problems. The BFGS method is a quasi-Newton method that uses function values and gradients to build up a picture of the surface to be optimized. However, when bounds are provided (as in your case), minimize defaults to the Limited-memory BFGS (L-BFGS-B) method. The L-BFGS-B method is a modification of the BFGS method that is more suitable for problems with many variables and constraints on the variable values. The method to be used can be specified using the method parameter of the minimize function. Other methods available include 'Nelder-Mead', 'Powell', 'CG', 'TNC', 'COBYLA', 'SLSQP', and others. The choice of method depends on the nature of the problem (e.g., whether it's constrained, whether the function is smooth, whether gradients are available, etc.).

What do you think is the best choice here?

Given that we have bounds on the variables, the Limited-memory BFGS (L-BFGS-B) method is a good choice for this problem. L-BFGS-B is an optimization algorithm in the family of quasi-Newton methods that approximates the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm using a limited amount of computer memory. It's a popular choice for optimization problems where there are constraints on the parameters. The L-BFGS-B method is particularly suitable for high-dimensional problems, and it can handle the bounds on the parameters efficiently. It also doesn't require the computation of the second derivatives (the Hessian matrix), which can be computationally expensive for complex functions. Therefore, using the L-BFGS-B method with the scipy.optimize.minimize function, as in the provided Python code, would be a good approach for finding the minimum of the function within the given bounds.

Fig. 54. GPT-4 Informs User on Choice of Optimization Method. When prompted to choose an alternative optimization method in Python to the Wolfram Alpha plugin, GPT-4 provides a script and a reasonable explanation for the chosen solver.
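Before turning to GPT-4's hand-derived gradients (Fig. 55), we note that an automatic-differentiation library can supply exact gradients with no manual calculus at all. The fragment below is a minimal sketch of this alternative under the assumption that JAX is installed; the r and R values are arbitrary placeholders, and this is our illustration rather than GPT-4 output.

import jax
import jax.numpy as jnp
import numpy as np
from scipy.optimize import minimize

r, R = 0.05, 0.5

def f(x):
    # Same stability objective, written over a JAX array.
    h, H = x[0], x[1]
    return (R**2 * H**2 + 2 * r**2 * h**2) / (R**2 * H + 4 * r**2 * h)

grad_f = jax.grad(f)  # exact gradient, no manual differentiation required

res = minimize(lambda x: float(f(jnp.asarray(x))),
               x0=np.array([2.0, 3.0]),
               jac=lambda x: np.asarray(grad_f(jnp.asarray(x)), dtype=float),
               method="L-BFGS-B", bounds=[(1, 3), (1, 5)])
print(res.x)

Beyond saving effort, this route avoids a failure mode visible in Fig. 55: a hand-derived gradient can silently contain algebra mistakes, whereas an autodiff gradient is correct whenever the objective itself is.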
Can you compute expressions for the gradients of this problem, and provide code that uses them to accelerate the L-BFGS-B call?

(... omitted by authors: partial derivatives computation ...)

import numpy as np
from scipy.optimize import minimize

# Define the function
def f(x, r, R):
    h, H = x
    return (R**2 * H**2 + 2 * r**2 * h**2) / (R**2 * H + 4 * r**2 * h)

# Define the gradient of the function
def grad_f(x, r, R):
    h, H = x
    dfdh = (4 * r**2 * h * (R**2 * H + 4 * r**2 * h) - 2 * r**2 * h**2 * (4 * r**2)) / (R**2 * H + 4 * r**2 * h)**2
    dfdH = (2 * R**2 * H * (R**2 * H + 4 * r**2 * h) - R**2 * H**2 * (R**2)) / (R**2 * H + 4 * r**2 * h)**2
    return np.array([dfdh, dfdH])

# Define the bounds and initial guess
bounds = [(1, 3), (1, 5)]
x0 = [2, 3]  # initial guess

(... omitted by authors: setting values of r and R ...)

# Find the minimum of the function
res = minimize(f, x0, args=(r, R), jac=grad_f, bounds=bounds)
print("The minimum of the function is at h = {:.2f} cm, H = {:.2f} cm".format(*res.x))

Fig. 55. Prompting GPT-4 to Accelerate Optimization. By default, GPT-4 does not compute gradients for the problem even when they are available. When directly prompted, GPT-4 modifies its code to pass in the gradients of the objective w.r.t. the optimized parameters.

(a) Initial Table Design (b) Optimized Table Design

Fig. 56. Single Objective Optimization. Left: A table with an initial leg height of 2 cm and tabletop height of 3 cm. Right: Given the specification of maximizing table stability, GPT-4 formulates the objective as minimizing the z-coordinate of the full assembly's center of mass w.r.t. leg height and tabletop height. GPT-4 provides a Python script that computes the gradients and uses L-BFGS-B, as provided through scipy.optimize, to optimize the two parameters. We visualize the optimized table, in which both the leg and tabletop heights are as small as possible.

Problem Name          | Search Space | Constraint Space
Table Stability       | Continuous   | Parameter Bounded
Robot Arm             | Continuous   | Parameter Bounded & Continuous Function
3D Printer Parameters | Continuous   | Parameter Bounded
Cabinet Optimization  | Continuous   | Parameter Bounded
Robot Arm Planning    | Continuous   | Continuous
Chair Design          | Continuous   | Continuous & Bounded

Table 2. Descriptions of Search and Constraint Space of Inverse Design Problems.

Problem Name          | Objective Space     | Other Considerations                         | Chosen Optimization Method
Table Stability       | Continuous          | None                                         | Analytical/Second-Order Gradient-Based
Robot Arm             | Continuous          | None                                         | Second-Order Gradient-Based
3D Printer Parameters | Continuous Function | Expensive Real-World Experiments             | Bayesian Optimization
Cabinet Optimization  | Bounded             | None                                         | Second-Order Gradient-Based
Robot Arm Planning    | Continuous          | Logical Reasoning with High-level Primitives | Greedy Search, Brute Force
Chair Design          | Continuous          | Multi-Objective                              | NSGA-II (Evolutionary Algorithm)

Table 3. Results of the Inverse Design Queries to GPT-4.

Robot Arm Optimization. In this example, shown in Figure 57, a robot arm is to be optimized such that it reaches a target location in space. As requested, GPT-4 generates a two-link robot design parametrized by the link lengths,
and then uses inverse kinematics to provide a solution for the link lengths so as to reach a target location in space. When asked to transform this into a design optimization problem, GPT-4 sets up an optimization problem, creating an appropriate constraint (end-effector touching the goal), an objective (sum of link lengths, as a proxy for material cost), and parameters with reasonable bounds. All of these were automatically provided by GPT-4, without explicit request. Notably, the optimization code is easily generalizable to arbitrary locations in space (though certain aspects, like parameter bounds, may need to be modified). As an optimization procedure, GPT-4 chooses L-BFGS, a reasonable choice given the continuous nature of the problem. A rendering of the optimized robot can be seen in Figure 58.

Optimizing 3D Printing Parameters. In this more abstract example, GPT-4 is simply asked which algorithm to use in order to optimize the parameters of a slicer used in 3D printing. It chooses Bayesian optimization, which is a good choice for problems with real-world experiments where it is preferable to minimize the number of required experiments. GPT-4 also provides skeleton code for the optimization. As this is a more abstract example, specifics are not supplied. The listing can be found in Figure 59.

Cabinet Optimization. We investigate whether GPT-4 can output a reasonable cost function, and a design that optimizes that function, when provided an example design, a parameterization of the design space, and a text description of the objective. One instantiation of this problem setting is with furniture: can GPT-4 optimize the design of a cabinet such that the result has a user-specified volume while minimizing the cost to build it? First, we prompt GPT-4 with an example cabinet design in OpenJSCAD (Section 4.1.4) and a parameterization of the design (including bounds for the parameters). Then, we ask it to generate functions to compute volume and material cost. Once the user verifies the accuracy of the functions, we have GPT-4 output a Python script that can minimize the cabinet's material cost with respect to a given volume constraint. The resulting code is shown in Figure 60, with renders of an optimized cabinet in Figure 61.

Robot Arm Planning. We now study a planning problem: given a claw attached to an arm and an environment with objects and bins, GPT-4 must control the arm-claw robot with a sequence of commands that efficiently picks up all objects and places them into bins. Each bin can only hold one object. In the arm-claw interface provided to GPT-4, the physical embodiment of the arm-claw robot does not matter; this allows GPT-4 to simply reason about the movement of the claw and whether the claw should grasp or release an object.

I'm trying to design a robot arm that can move its end effector directly vertically half a meter. Can you help by providing a design for such a robot? It should only use rotational motors. I need to know the morphology of the robot - the dimensions of the links, and how they are connected. Or, if you don't know the dimensions or the morphology, you can provide code to search for dimensions that would work for the task (through simulation).

(... content omitted by authors ...)

Now, for more complex morphologies, you would want to conduct a search via simulation to find optimal lengths and configurations for the robot's arm.
A simple Python-based example using inverse kinematics could look like this:

import numpy as np

(... omitted by authors: setting desired vertical distance to 0.5 and initial arm link lengths to 0.25 ...)

# Calculate the angles using inverse kinematics
theta2 = np.arccos((d**2 - l1**2 - l2**2) / (2 * l1 * l2))
theta1 = np.arctan2(l2 * np.sin(theta2), l1 + l2 * np.cos(theta2))

# Confirm that the end effector reaches the desired position
x = l1 * np.cos(theta1) + l2 * np.cos(theta1 + theta2)
y = l1 * np.sin(theta1) + l2 * np.sin(theta1 + theta2)
print("The end effector is at position:", (x, y))

# Now you can run a loop to iterate over various lengths for l1 and l2
# within a feasible range and see which combination gives you the best result.
# You would have to define "best" - could be smallest, lightest, simplest, etc.

(... content omitted by authors ...)

Below is the code you gave to perform inverse kinematics on a 2-link robot arm: (... omitted by authors: gpt provided code ...) However, when I set l1 = l2 = 0.5, the end effector is at the position (-0.25000000000000006, 0.4330127018922192) instead of the goal position of 0.5 vertical. Could you fix this error?

(... omitted by authors: failed attempt at fixing the code ...)

I still get the same error. Could you fix the script?

(... content omitted by authors ...)

import numpy as np

(... omitted by authors: setting desired vertical distance to 0.5 and initial arm link lengths to 0.5 ...)

# Calculate the angles using inverse kinematics
theta2 = np.arccos((x_target**2 + y_target**2 - l1**2 - l2**2) / (2 * l1 * l2))
theta1 = np.arctan2(y_target, x_target) - np.arctan2(l2 * np.sin(theta2), l1 + l2 * np.cos(theta2))

# Confirm that the end effector reaches the desired position
x = l1 * np.cos(theta1) + l2 * np.cos(theta1 + theta2)
y = l1 * np.sin(theta1) + l2 * np.sin(theta1 + theta2)
print("The end effector is at position:", (x, y))

(... content omitted by authors ...)

Fig. 57. Inverse Kinematics for a 2-Link Robot Arm. Although GPT-4 is able to output inverse kinematics code for optimizing the design of a 2-link robot arm, its initial attempt is unsuccessful, requiring a couple of iterations of prompting to achieve a correct output. However, we do not explicitly need to point out the source of the error for GPT-4 to correct itself.

(a) Initial Robot Arm (b) Optimized Robot Arm

Fig. 58. Optimizing Link Lengths of a Robot Arm. The goal is to minimize the link lengths of a robot arm with the constraint that the arm is capable of reaching a goal located 0.5 m vertically. Left: The initial arm features link lengths of 0.5 m each, requiring both links to be bent to reach the goal. Right: GPT-4 outputs an optimization script to discover that the optimal link lengths are 0.25 m each, which we visualize here.

Due to the nature of the problem, there is one critical constraint to consider: the claw must visit an object to pick it up before dropping it off at a bin. Formalizing this constraint is non-trivial, but the performance objective is much simpler: minimize the claw's travel distance. To simplify the problem, we also add that the maximum number of objects and bins is 3 each, making brute force a valid solution. The initial prompt and result are shown in Figure 62.
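For reference, an exhaustive search that respects the pick-before-place constraint is short to write at this problem size: enumerating every object ordering together with every object-to-bin assignment covers exactly the cases that, as discussed below, GPT-4's attempts miss. The following is our own illustrative sketch rather than GPT-4 output; positions are 3-tuples, and Manhattan distance stands in for the translation cost, as the follow-up prompt requests.

from itertools import permutations

def translation_distance(p1, p2):
    # Manhattan distance, i.e., total axis-aligned translation.
    return sum(abs(a - b) for a, b in zip(p1, p2))

def optimal_plan(claw, objects, bins):
    best_cost, best_plan = float("inf"), None
    for obj_order in permutations(range(len(objects))):
        for bin_order in permutations(range(len(bins))):
            cost, pos, plan = 0.0, claw, []
            for i, j in zip(obj_order, bin_order):
                cost += translation_distance(pos, objects[i])      # claw to object
                cost += translation_distance(objects[i], bins[j])  # object to bin
                pos = bins[j]
                plan.append((i, j))
            if cost < best_cost:
                best_cost, best_plan = cost, plan
    return best_cost, best_plan

print(optimal_plan((0, 0, 0), [(1, 2, 0), (4, 0, 0)], [(2, 2, 0), (5, 1, 0)]))

With at most 3 objects and 3 bins, this enumerates no more than 36 candidate paths, so the brute-force guarantee comes essentially for free.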
Overall, GPT-4 understands that it needs to keep track of the claw's position to compute the correct distances and that the claw should move to an object before moving to a bin. Still, it is unable to output an optimal solution, even when the problem statement permits a brute-force approach. To address this, we explicitly emphasize that the output function should guarantee that the minimum distance is traveled. Even in this case, the optimal solution is not necessarily reached. As shown in Figure 63, GPT-4's code fails to consider all possible bins that an object could be placed into once it has been picked up. However, we note that the solutions have high variance – on a different run, GPT-4 does produce a correct brute-force solution. A third run produces code that guarantees an optimal solution but is inefficient, as it computes the translation for paths that do not obey the constraint that a claw must pick up an object before placing it in a bin.

8.3 Navigating trade-offs between multiple conflicting objectives (Q3)
Although our previous experiments focused on optimizing a single performance objective, we now explore the scenario where a user wishes to navigate a higher-dimensional (i.e., multi-objective) performance space. The user begins by asking GPT-4 for reasonable performance metrics for evaluating a chair. After GPT-4 provides eight such metrics, our user purposefully selects stability and sustainability, since they can be mathematically quantified by tipping angle and volume, respectively. The user then asks for parameters over which to search. Since GPT-4 has not been given a design template, GPT-4 proposes parameters abstractly; we note that it might have been more useful if GPT-4 first proposed a skeleton for the chair geometry, especially so that a user could understand the ramifications of these parameters. After iterating with GPT-4 to generate correct OpenSCAD code for the design, the user requests code to evaluate and optimize the chair. GPT-4 proposes the use of NSGA-II – a very common evolutionary method for computing the Pareto front of the multi-objective trade-off space – and provides code for the optimization. As an oversight, GPT-4 initially excludes design parameter bounds from the optimization, despite verbally providing ideas earlier in the conversation. When prompted to add the bounds into the optimization code, because of its limited memory, GPT-4 suggests reasonable, but notably different, parameter bounds.

I have a 3D printer. I'm trying to optimize the parameters of the slicer. The parameters are continuous values. Each time I do a 3D print, I'm able to grade the quality of the print on a scale of 0 to 1, with 0 being a bad score and 1 being a perfect score. I need an algorithm for optimizing over the parameters. What method would you recommend?

Given your problem, I would recommend you to use Bayesian Optimization. (... omitted by authors: explanation of how Bayesian Optimization works ...) There are several libraries in Python that provide an implementation of Bayesian Optimization. Some of these include:
• Scikit-optimize: It provides a BayesSearchCV class that's similar to GridSearchCV and RandomizedSearchCV in Scikit-learn.
• GPyOpt: This is another Python library for Bayesian Optimization, which is based on GPy.
• BayesianOptimization: This is a pure Python implementation of Bayesian optimization, and it's easy to use.
You could use any of these libraries to implement Bayesian Optimization and find the optimal parameters for your 3D printer.
Thank you. Let's assume I have a function, then, called evaluate. evaluate takes in parameters as a list, and returns a score for the quality. Can you provide example code for this optimization? You can assume a variable called "parameters" is given.

Sure, let's use the BayesianOptimization library to illustrate this process. Let's also assume that you have two parameters you are tuning and they both take values between 0 and 1 (you can adjust this to your actual situation). (... omitted by authors: software installation instructions. ...) Then, you can use the following code to perform the optimization:

from bayes_opt import BayesianOptimization

# Assuming the evaluate function and parameters are defined.
# The function should take the parameters as input and return a quality score.
# Let's assume parameters are ['param1', 'param2']
def function_to_optimize(param1, param2):
    score = evaluate([param1, param2])  # Use the real evaluation function here
    return score

# Define the bounds of the parameters
pbounds = {
    'param1': (0, 1),  # These should be the min and max values of param1
    'param2': (0, 1),  # These should be the min and max values of param2
}

# Instantiate the optimizer object
optimizer = BayesianOptimization(
    f=function_to_optimize,
    pbounds=pbounds,
    random_state=1,
)

# Perform the optimization
optimizer.maximize(
    init_points=2,  # Number of randomly chosen points to sample the target function before starting the optimization
    n_iter=10,  # Number of iterations to perform the optimization
)

# The optimal parameters are stored in the 'max' attribute of the optimizer object
optimal_params = optimizer.max['params']
print(optimal_params)

You can tweak the init_points and n_iter parameters to perform more or less exploration of the parameter space. The random_state parameter is used to reproduce the results; you can set it to any integer or leave it out to have a different result every time. The optimizer.max attribute contains a dictionary with the parameters that yielded the maximum output for the function, and the maximum output itself. In this case, the maximum output is the highest quality score your evaluate function has returned.

Fig. 59. Bayesian Optimization for 3D Printing. GPT-4 reasonably chooses Bayesian Optimization for fine-tuning parameters for systems that require real-world experiments for evaluation, and provides code for an (abstracted) instantiation of the problem.

Assume that you can call the below functions.

def compute_storage_capacity(cabinet_height, cabinet_width, cabinet_depth, board_thickness):
    (... code omitted by authors ...)
    return storage_capacity

def compute_material_cost(cabinet_height, cabinet_width, cabinet_depth, board_thickness, cost_per_cubic_inch):
    (... code omitted by authors ...)
    return material_cost

In addition, a cabinet design can be parameterized as follows. The bounds of each parameter is specified along with rationale for these bounds.
• boardThickness: (0.1, 5) inches. The board thickness should be positive and typically not exceed 5 inches for a cabinet of this size. It's also unlikely to be less than 0.1 inches due to structural stability concerns.
• cabinetHeight, cabinetWidth, cabinetDepth: (10, 100) inches. These dimensions should be positive and typically fall within a reasonable range for a cabinet. The exact bounds will depend on the specific use case and available space.
• (... omitted by authors: full set of parameters ...)
Write a script to find the cabinet of the correct volume that minimizes cost using any library you want. If the volume constraint is not satisfied, specify this. Assume that the cost per cubic inch of the materials, the board thickness, and the volume constraint will be user specified. Demonstrate that the script works with example user specifications.

(... content omitted by authors ...)

from scipy.optimize import minimize

(... omitted by authors: setting values of volume constraint, cost per cubic inch, board thickness ...)

def objective_function(x):
    (... omitted by authors: function description ...)
    cabinet_height, cabinet_width, cabinet_depth = x  # Extract cabinet dimensions
    return compute_material_cost(cabinet_height, cabinet_width, cabinet_depth, board_thickness, cost_per_cubic_inch)

def constraint_function(x):
    (... omitted by authors: function description ...)
    cabinet_height, cabinet_width, cabinet_depth = x  # Extract cabinet dimensions
    volume = compute_storage_capacity(cabinet_height, cabinet_width, cabinet_depth, board_thickness)
    return volume - desired_volume  # This should be equal to zero when the desired volume is met

# Initial guess (start at the mid-point of the possible range for each dimension)
x0 = [55, 55, 55]

# Define bounds for each dimension
bounds = [(10, 100), (10, 100), (10, 100)]  # Lower and upper bound for each dimension

# Define the constraints
constraints = [{'type': 'eq', 'fun': constraint_function}]

# Call the optimizer
result = minimize(objective_function, x0, method='SLSQP', bounds=bounds, constraints=constraints)

(... omitted by authors: prints optimized dimensions if successful, otherwise reports failure ...)

(... content omitted by authors ...)

Fig. 60. Optimizing Material Cost of Cabinet w.r.t. Volume. The performance and cost functions are generated by prompting GPT-4. To find the minimum of the material cost function with respect to volume, GPT-4 calls scipy.optimize's minimize function with an appropriate solver. In addition, the provided code extracts the relevant bounds and a suitable initial guess from the prompt.

(a) Initial Cabinet Design (b) Optimized Cabinet (c) Optimized Cabinet (Close-up)

Fig. 61. Single Objective with Constraint Optimization. Left: A cabinet with initial height, width, and depth of 55 in. Middle: Using an off-the-shelf solver provided by scipy.optimize, GPT-4 provides a Python script that optimizes the cabinet's height, width, and depth given a desired volume while minimizing the material cost. We show an optimized cabinet that meets the volume constraint of 5000 cubic inches of storage capacity. Right: A close-up view of the optimized cabinet shown in the middle.

Additionally, GPT-4 must be prompted again to enforce the bounds consistently throughout the algorithm (specifically, in the crossover and selection operators). Results can be found in Figure 64. Through this example, we conclude that GPT-4 has the potential to aid users in both a) understanding the trade-offs involved in different candidate designs, and b) providing pointers to a reasonable algorithm that can help navigate that space.
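For readers who want to reproduce this style of study, the sketch below sets up a bounded two-objective chair problem with NSGA-II via the pymoo library (one common off-the-shelf implementation; this is not the code GPT-4 emitted). The two-parameter chair model and both objective formulas are simplified stand-ins for the OpenSCAD-based design used in our experiments.

import numpy as np
from pymoo.core.problem import ElementwiseProblem
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.optimize import minimize

class ChairProblem(ElementwiseProblem):
    def __init__(self):
        # x = [leg_height, seat_thickness] in meters, with hard bounds.
        super().__init__(n_var=2, n_obj=2,
                         xl=np.array([0.3, 0.02]), xu=np.array([0.6, 0.10]))

    def _evaluate(self, x, out, *args, **kwargs):
        leg_h, seat_t = x
        seat_w = 0.45  # fixed square seat width (an assumed constant)
        volume = seat_w**2 * seat_t + 4 * (0.04**2 * leg_h)  # material used
        com_height = leg_h + seat_t / 2  # crude center-of-mass estimate
        tipping_angle = np.arctan((seat_w / 2) / com_height)
        # NSGA-II minimizes, so negate the angle we want to maximize.
        out["F"] = [volume, -tipping_angle]

res = minimize(ChairProblem(), NSGA2(pop_size=40), ("n_gen", 60), seed=1)
print(res.F[:5])  # a slice of the Pareto front: (volume, -tipping angle)

Note that supplying xl and xu in the problem definition lets the library enforce the bounds during sampling, crossover, and mutation, which is precisely the step GPT-4 had to be re-prompted to handle consistently.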
8.4 Supporting optimization in contexts that require additional knowledge (Q4)
In many cases, it can be daunting to fully specify a given inverse design problem in a new domain: for example, it may be difficult to specify appropriate design spaces and objective functions, and it may be unclear how to deal with underspecified or unknown constraints. In this section, we briefly examine how LLMs may reduce the burden of this process to make inverse design more accessible. The chair example discussed in Section 8.3 demonstrates GPT-4's ability to recommend reasonable parameters for a design without needing explicit, low-level prompts from a user. Indeed, when prompted for "parameters", GPT-4 is able to apply its knowledge of the target domain to offer continuous parameterizations of a typical chair (and provide a 3D model on request), along with reasonable ranges for each parameter. Although discrete parameters are possible with a chair, they are less likely to have a significant impact relative to its raw dimensions, and most chairs consist specifically of four legs, a seat, and a back.

For completely novel problems, GPT-4 cannot rely on its existing knowledge to generate an exact design space. However, it can apply knowledge of aspects of a problem to new problems in familiar domains. The conversation in Figure 65 presents a brief example of GPT-4 being queried about a novel invention: the Fworp (a name chosen to be deliberately nonsensical so as not to give context clues to GPT-4). The Fworp is a robot car with a body made of silicone rubber. While the value of such a device is unclear (perhaps shock absorption), it is synthesized from existing ideas: namely, remote control vehicles and soft robotics. GPT-4 uses its knowledge of those preexisting domains and their components to recommend reasonable design parameters and their ranges, including analyzing size, weight, wheel size, power source, peripherals/sensors, and build material properties. It also provides guidance on performance metrics without being prompted, but classifies these under "parameters," which may confuse users. Further, when queried about the advantages and disadvantages of such a device compared with non-rubbery autonomous vehicles and soft robots without wheels, it provides reasonable comparisons. In particular, it notes that, compared with non-soft robot vehicles, the Fworp could (possibly) be more durable, shock absorbent, safer, and quieter, while also potentially being more expensive to produce and tacky. Compared with non-vehicular soft robots, it has the potential to be more mobile, stable, energy-efficient, and simple to produce and control, but would lack the versatility and human-interaction potential typically afforded by most soft robots; further, while safer than a typical vehicle, it would be more dangerous than most current soft robots. This experiment highlights the notion that GPT-4 can be an effective partner when formulating a novel inverse design problem, as it can make connections between the proposed problems and more established domains. Then, GPT-4 is able to use its existing knowledge base about those related domains to provide reasonable starting points for the problem at hand. With continued user interaction, GPT-4 can also help to refine, formalize, evaluate, and ultimately act upon the newly created formulation.

You are tasked with controlling an arm and a claw. The goal is to pick up an object on the ground with the claw and place it in a bin large enough for the object to fit in. You are working with (x, y, z) coordinates where 0 <= x, y, z <= 50. Below are functions you can use to control the arm and claw.
Arm:
• translate_x(x_dist): Translates the claw in the x direction by x_dist.
• translate_y(y_dist): Translates the claw in the y direction by y_dist.
• translate_z(z_dist): Translates the claw in the z direction by z_dist.
Claw:
• grasp(): If there is an object at the claw's current location, the object will be grasped. When the object is grasped and the claw is moved, the object is moved to the same location as the claw's fingertips.
• release(): If the claw is grasping an object, the object will fall to the ground directly underneath it at z=0 or to any bin directly underneath it.

(... omitted by authors: additional problem specifications ...)

(... content omitted by authors ...)

def move_arm(claw_initial, objects_initial, bins, T_max):
    def distance(p1, p2):
        (... omitted by authors: Euclidean distance helper function ...)

    actions = []
    power_used = 0
    claw_current = list(claw_initial)
    while objects_initial and power_used <= T_max:
        # Find nearest object
        nearest_object = min(objects_initial, key=lambda o: distance(claw_current, o))

        # Calculate necessary translations to reach the object
        translations = [o - c for o, c in zip(nearest_object, claw_current)]

        # Check if there's enough power left to reach the object and drop it in a bin
        if power_used + sum(abs(t) for t in translations) * 2 > T_max:
            break

        # Move to the object
        (... omitted by authors: adding translations to object and grasping to the actions list ...)

        # Update claw position and power used
        claw_current = nearest_object
        power_used += sum(abs(t) for t in translations)

        # Find nearest bin
        nearest_bin = min(bins, key=lambda b: distance(claw_current, b))

        # Calculate necessary translations to reach the bin
        translations = [b - c for b, c in zip(nearest_bin, claw_current)]

        # Move to the bin and release the object
        (... omitted by authors: adding translations to bin and releasing to the actions list ...)

        # Update claw position and power used
        claw_current = nearest_bin
        power_used += sum(abs(t) for t in translations)

        # Remove the placed object and bin from the lists
        objects_initial.remove(nearest_object)
        bins.remove(nearest_bin)

    (... omitted by authors: code checking that a solution was found ...)

(... content omitted by authors ...)

Fig. 62. Initial Proposed Solution to Robot Manipulation Planning. GPT-4 provides a greedy approach to the problem of minimizing the distance traveled by the claw, which does not guarantee an optimal solution.

Could you write a function that guarantees the global optimum? Make sure to account for the distances from the claw to object and claw to bin (after it has grasped the object). Also make sure to compute translations rather than Euclidean distance. Recall the bounds are 0 <= n <= 3 and 0 <= m <= 3.

(... content omitted by authors ...)

This updated function generates all permutations of the objects and bins, calculates the total translation required for each permutation, and selects the one with the smallest total translation.

from itertools import permutations

def move_arm(claw_initial, objects_initial, bins, T_max):
    def translation_distance(p1, p2):
        (... omitted by authors: Manhattan distance helper function ...)
    def calculate_total_translation(ordering):
        total_translation = 0
        claw_pos = list(claw_initial)
        for object_pos, bin_pos in ordering:
            total_translation += translation_distance(claw_pos, object_pos)  # Claw to object
            total_translation += translation_distance(object_pos, bin_pos)  # Object to bin
            claw_pos = bin_pos  # Claw is now at bin location
        return total_translation

    object_bin_pairs = list(permutations(zip(objects_initial, bins)))
    min_translation = float('inf')
    best_ordering = None
    for ordering in object_bin_pairs:
        total_translation = calculate_total_translation(ordering)
        if total_translation < min_translation:
            min_translation = total_translation
            best_ordering = ordering

    if min_translation > T_max:
        return "No solution found within power limit"
    else:
        actions = []
        claw_current = list(claw_initial)
        for object_pos, bin_pos in best_ordering:
            # Move to object
            (... omitted by authors: adding translations to object and grasping to the actions list ...)
            claw_current = list(object_pos)
            # Move to bin
            (... omitted by authors: adding translations to bin and releasing to the actions list ...)
            claw_current = list(bin_pos)
        return actions

(... content omitted by authors ...)

Fig. 63. Attempt at Brute Force Solution to Manipulation Planning. Though GPT-4 attempts to solve the minimum translation problem in a brute force manner, it fails to check all possible paths that the claw can take.

(a) Initial Chair (b) Chair that Minimizes Volume (c) Chair that Maximizes Tipping Angle

Fig. 64. Optimizing Chair Design with Competing Objectives. GPT-4 outputs code that handles a multi-objective performance space in the context of designing a chair to minimize volume while maximizing tipping angle. Left: A randomly sampled chair from the initial population before NSGA-II is applied. After running NSGA-II using the script GPT-4 provides, we visualize chairs on opposite ends of the frontier. Middle: A chair that minimizes volume by reducing the volume of the back seat and increasing leg lengths. Right: A chair that maximizes tipping angle by extending the length and increasing the thickness of the back seat.

8.5 Discussion
This section elaborates on GPT-4's key capabilities (C), limitations (L), and dualisms (D) in the realm of inverse design.

C.1 Extensive Knowledge Base in Design and Manufacturing: GPT-4 has knowledge of how to formulate design spaces, objectives, and constraints. It also successfully selects a suitable search algorithm for a given problem, suggesting that LLMs are useful as a building block when formulating inverse design systems. In its current form, GPT-4 exhibits a number of abilities that make it highly usable.
For example, it was able to choose an adequate design optimization algorithm for almost every problem it was given; when asked, GPT-4 was also able to justify its choice of algorithm. GPT-4 is also helpful in automatically providing code for a significant portion of a problem formulation without requiring user input. These aspects include parameter choice, parameter ranges, and objective functions. In the best case, this feature can relieve a user of much of the "busywork" associated with a problem (loose bounds, necessary constraints, etc.). Even when GPT-4 falls short of this ideal, it is usually able to recommend a useful starting point. GPT-4's reasoning capabilities can also provide further value in novel domains. If a user is inexperienced with a particular domain or if they are working on a novel problem, GPT-4 has the capability to synthesize from the problem's constituent domains to provide suitable advice, as demonstrated with the Fworp example.

I am creating a new device called a fworp. A fworp is a wheeled soft silicone autonomous vehicle. What are some reasonable parameters I should consider for my fworp, and what parameter ranges should I consider?

(... content omitted by authors ...)

(1) Size and Shape: This is crucial to determine the potential applications of your fworp. You might want to keep the size compact if it is intended for home use or increase the size if it's designed for industrial applications. The shape will also play a crucial role in how the fworp navigates its environment. For the parameter range, consider anywhere from a small sphere (e.g., 10 cm in diameter) for consumer use, up to larger sizes (e.g., 2 m in diameter) for commercial/industrial use.
(2) Weight: The weight will impact the maneuverability and the energy consumption of the device. A lighter fworp would be easier to move and consume less energy but might be less stable. A heavier one could be more stable but require more energy. The weight could range from 1 kg (for small consumer devices) to 50 kg or more (for larger industrial devices).
(3) Speed: You'll want to adjust the speed depending on the use case. A fworp designed for a warehouse may need to be faster than one for home use to increase efficiency. Speeds might range from 1 km/h for careful navigation in domestic environments up to 20 km/h for more industrial settings.
(4) Battery Life: This will dictate how long the fworp can function without needing to be recharged. For domestic use, a battery life of several hours might be sufficient, whereas for industrial use, a battery life of several shifts may be needed. Depending on usage, anywhere from 4 to 24 hours could be appropriate.
(5) Load Capacity: Depending on the intended application, the fworp may need to carry items of varying weight. This could range from 1 kg for lightweight items up to 100 kg or more for industrial applications.
(6) Sensor Range: As an autonomous vehicle, the fworp will need sensors to navigate its environment. The range of these sensors will impact how well the fworp can navigate, with larger ranges allowing it to avoid obstacles more easily. Consider a range from 1 meter for simple obstacle avoidance up to 10 meters for more complex environments.
(7) Durability of the Silicone Casing: The fworp's casing will need to be able to withstand various pressures, temperatures, and impacts. This will depend heavily on the specific silicone material used.
Some potential ranges to consider might be a tensile strength from 5 to 30 MPa, a hardness from 20 to 80 Shore A, and a temperature resistance from -60 to 200°C.
(8) Communication Range: If the fworp is intended to communicate with a central control system or other devices, the range of this communication will be important. This could be as short as a few meters for Bluetooth communication, up to several kilometers for more robust radio communication.
(9) Robustness: It's important to consider how the fworp will cope with various types of terrain and environmental conditions. It could be rated for indoor use only, or for use outdoors in various weather conditions.
(10) Safety Features: As an autonomous device, the fworp should have features to ensure it can operate safely, such as emergency stop functionality, collision detection, and avoidance systems.

(... content omitted by authors ...)

Do you think the fworp is a good idea, compared to a non-rubbery counterpart? Why or why not?

(... content omitted by authors ...)

Advantages of a Silicone Fworp
(1) Flexibility: Silicone is a very flexible material, and a vehicle made of it could potentially navigate around obstacles or squeeze into spaces that a rigid vehicle could not.
(2) Durability: Silicone is known for its durability and can withstand various environmental conditions, such as wide temperature ranges, UV radiation, and exposure to various chemicals. This could lead to a longer lifespan and less maintenance compared to some non-rubber materials.
(3) Safety: A soft, flexible vehicle could potentially be safer in environments where it could come into contact with humans or delicate objects. A softer impact could reduce the risk of harm or damage.
(4) Noise reduction: Soft materials like silicone could potentially operate more quietly than hard materials, which might be beneficial in certain environments, such as residential areas or hospitals.

Challenges with a Silicone Fworp
(1) Structural integrity: Silicone may not provide the same level of structural integrity as harder materials. You would need to ensure that the fworp can carry the required loads without deforming excessively. This could be particularly challenging if the fworp is large or needs to carry heavy items.
(2) Speed and Efficiency: The soft, flexible nature of silicone might mean that the fworp moves more slowly or with less efficiency than a harder vehicle. The flexibility of the material could absorb some of the energy meant for propulsion.
(3) Maintenance: While silicone is durable, it can also be prone to punctures and cuts. Depending on the environment, the surface may need regular inspections and repairs.
(4) Cost: Depending on the specifics, manufacturing a vehicle out of silicone could potentially be more expensive than using more traditional materials.

(... content omitted by authors ...)

Fig. 65. Testing GPT-4's Domain Transfer Ability for Inverse Design. We query GPT-4 about the Fworp, an imaginary invention about which it has no direct knowledge. Despite this, GPT-4 is able to transfer its knowledge about related concepts to propose design parameters and comparisons. However, some of its proposed parameters, such as load capacity, are actually performance metrics.

L.1 Reasoning Challenges: When asking for help in setting up a problem, GPT-4's advice can be confusing.
For example, it often does not disambiguate between the design parameters (which practitioners have direct control over) and performance metrics (which are emergent from the design). Less experienced designers may then find themselves confused, believing there must be a way to modify a system's performance directly. Potential Solutions: By following up with GPT-4 about how a given "parameter" is computed, one can attempt to disambiguate parameters from metrics. In general, however, this verbal confusion is difficult to systematically address.

L.2 Correctness and Verification: The addition of function calling in LLMs, and specifically plugins using GPT-4, can eventually allow for direct execution of arbitrary code, even code that GPT-4 writes. However, there are no guarantees on the execution time of that code, and it is unclear how to manage problems that might arise, such as long runtimes (which are common in hard optimization problems), or even infinite loops. In our experiments, the Wolfram plugin was given a brief time window for computation before it timed out, which largely negated its value in the face of more challenging problems. Potential Solutions: Methods that allow for function calling while providing guarantees or control to the user (say, in the form of anytime algorithms) would be beneficial. For now, writing one's own plugin may allow greater granularity over the type of algorithm being used. Thus, the algorithms can at least be catered to GPT-4's behavior.

L.3 Scalability: Inverse design relies on several complex building blocks, including the specification of design spaces and objective functions. However, as discussed in previous sections, GPT-4 frequently encounters difficulties when faced with these tasks. Such errors prohibit GPT-4 from scaling to inverse design exploration altogether. This occurred twice during our experiments. In one failure case, we had difficulty in evaluating the performance of a soft-body system using finite elements; although the example is not detailed in the paper, Section 7 has already shown this to be difficult. In effect, this failure currently prevents GPT-4-assisted inverse design of a soft robot w.r.t. FEA-derived metrics. In a different example, we attempted to design a long, multi-link arm, but found that GPT-4 struggled with proper geometric alignment of the links and rotation axes (as shown in Section 4.1.6). Potential Solutions: Pointing out problems with solutions (such as runtime errors) can allow GPT-4 to iterate, but requires intervention and potentially fine-grained coding or engineering knowledge from the user. In practice, it is often effective to blindly ask GPT-4 to assess its own output and report any errors it finds until GPT-4 is satisfied with its own work. In our experiments, this frequently converged to a correct solution. However, this is not foolproof and can be slow and computationally costly. Further, without access to web search, GPT-4 may not know how to reconcile out-of-date knowledge about an API.

A second scalability issue is that GPT-4 does not always choose the best algorithm for solving a problem, and sometimes does not use a given method in the most efficient way (such as not providing gradient information). Since GPT-4 tends to be coy about available methods and how to best use them, a more novice user may be unsure how to navigate the intricacies of optimization and diagnose issues.
Furthermore, although GPT-4 tends to choose adequate algorithm classes, it does not always choose state-of-the-art methods; instead, it tends to default to standard methods that are highly popular. Because of its knowledge cutoff, and without access to web search, any given LLM may not be aware of state-of-the-art methods or how to implement them. Even if an easy-to-use implementation exists in an online repository (e.g., GitHub), the LLM may not be aware of the code's existence or how to use it. Potential Solutions: Web search, which has previously been available for GPT-4, can help to mitigate these issues, as one could ask for the latest, state-of-the-art methods, and GPT-4 could provide solutions based on current repository code. However, there is no guarantee that GPT-4 will be able to understand what makes newer methods optimal for a problem without sizeable crowd knowledge, which may not be available.

The third scalability issue is that, as mentioned in previous sections, GPT-4's "short memory" can cause it to forget specifics it had generated earlier in a context; this notably occurred in the multi-objective chair example. While this problem emerged in other aspects of the design-to-manufacturing pipeline, its impacts were most salient when defining inverse problems, whose specification can be especially long. Potential Solutions: Since inverse design problems can be quite lengthy to define and specify, it may be easier to decompose a problem in the following order: 1) ask GPT-4 for a definition of a design space (including its implementation); 2) ask GPT-4 for a definition of a performance metric and constraints (including their implementations) while abstracting away the code from 1) as an API call; 3) ask GPT-4 to write code for the inverse design search, abstracting away the code from 2) as an API call. This can keep definitions shorter and easier to manage.

D.2 Unprompted Responses: Throughout our experiments, GPT-4 always unilaterally selected an optimization algorithm and proceeded to generate code. In particular, GPT-4 never provided the user with options for possible optimization algorithms, unless it was explicitly asked to provide such options as an intermediate step. Although GPT-4's automated selection may satisfy many users, it runs the risk of creating mistakes that would be difficult to diagnose. This is particularly true because GPT-4 rarely justified its choice to users. Furthermore, GPT-4's assertiveness may imply that there is a single "correct" algorithm for a given problem, and users may not realize that there are better (or even alternative) options available in any given circumstance. GPT-4's tendency to fill in aspects of an inverse design problem before being asked about them may also lead to mathematical problem definitions that are ill-suited or otherwise suboptimal for a user's real-world design problem. In these cases, GPT-4's tendency to autocomplete and plow ahead could lead users to blindly follow the LLM down bad "rabbit holes," only to discover that a fundamental problem existed much earlier. Furthermore, since GPT-4 does not have a native way to execute arbitrary code, it will not always realize that a code block has errors.

9 END-TO-END DESIGN EXAMPLES
In the preceding sections, we have explored how GPT-4 can benefit different stages of the design and manufacturing processes.
Now, we consider the comprehensive end-to-end design processes for two examples that have been consistently referenced throughout this manuscript: the cabinet and the quadcopter. From initial design and evaluation to manufacturing and testing, we showcase how GPT-4 seamlessly integrates with each stage. For the cabinet, GPT-4 automates 3D design, part sourcing, manufacturing instructions, assembly guidance, and performance evaluations. Similarly, for the quadcopter, GPT-4 facilitates design, part selection, manufacturing, assembly, and testing. These examples highlight the capabilities of GPT-4 in streamlining the product development process with minimal human intervention.

9.1 Cabinet
In this subsection, we consolidate and elaborate on the comprehensive LLM-assisted development process for a wood cabinet with minimal human intervention. Throughout this process, GPT-4 played major roles in generating the 3D design, facilitating part sourcing based on the design specifications, generating machine-readable manufacturing instructions, providing human-readable assembly guidance, and conducting final performance evaluations (Figure 66). This holistic approach highlights the extensive capabilities of GPT-4 in automating and optimizing various stages of this product development process.

9.1.1 Design. We employed GPT-4 to generate Computer-Aided Design (CAD) models using tools such as OpenJSCAD (Section 4.1.4). After a few iterations, GPT-4 successfully generated accurate CAD design files (see Section 4 for more details). Additionally, we demonstrated the use of GPT-4 to modify the cabinet design, including tasks such as adding a door and a handle (Section 4.2.1) or adjusting the number of shelves (Section 4.3). GPT-4 even integrated prefabricated elements like brackets into the cabinet design (Section 4.2.2). We ultimately decided to fabricate the version of the cabinet presented in Figure 7, with one shelf supported by four brackets.

Fig. 66. The Cabinet Design Process Using GPT-4. We showcase the comprehensive development process for a wood cabinet, highlighting GPT-4's roles in 3D design generation, part sourcing, manufacturing instructions, assembly guidance, and performance evaluations.

Now let's fabricate this cabinet using wood. I have a universal laser system with a processing area of 32 by 18 inches. Can you recommend the materials I should buy? Ideally, provide me links.

Based on your design, it appears you're working with a thickness of approximately 1/2 inch, which is a standard size for many types of wood. I'm going to suggest a couple of options for you:

Baltic Birch Plywood - This is a good option for cabinetry because it is strong, has a nice grain pattern, and finishes well. It's also highly recommended for laser cutting because it cuts cleanly and has minimal smoke and resin. Here's a link to purchase from McMaster-Carr: Baltic Birch Plywood Sheets.

Medium-Density Fiberboard (MDF) - If you're looking for a more budget-friendly option, MDF might be the way to go. It doesn't have the strength or the aesthetic appeal of real wood, but it is very consistent in density and thickness, making it ideal for precision work like this. It's also known for its smooth finish, and it laser cuts very well. Here's a link to purchase from McMaster-Carr: Medium-Density Fiberboard (MDF) Sheets.

For brackets, you can find a variety of options from Home Depot or other similar stores.
Depending on the size and weight of your cabinet, you can choose a bracket that suits your needs. Just make sure the brackets you select fit the dimensions in your OpenJSCAD code. Here's a general link to the bracket section of the Home Depot website: Shelf Brackets - Home Depot.

Remember that working with laser cutters requires adequate ventilation due to the smoke produced during the cutting process, and the edges of the wood will be darkened due to the heat from the laser. If you want to paint or finish the wood, it may need a bit of light sanding to clean up the edges.

Fig. 67. Part Sourcing Guided by GPT-4 Based on the Specific Cabinet Design and Manufacturing Method. In this example, we employ a laser cutter and pre-manufactured hardware.

9.1.2 Manufacturing. We then tested GPT-4's capabilities in tackling real-world manufacturing challenges encountered during the fabrication of the cabinet. These challenges encompass various aspects of the general manufacturing process, such as part sourcing, generating machine-readable instructions, addressing material and fabrication constraints, facilitating design modifications, providing assembly guidance, and offering human-readable instructions. For this particular example, our manufacturing plan involved utilizing a universal laser system to cut commercially available wood boards. First, by leveraging the detailed design specifications in OpenJSCAD, GPT-4 provided accurate suggestions for sourcing specific items, including brackets from Home Depot and wood pieces from McMaster. Each suggestion was accompanied by a concise description of the item (Figure 67 and Section 6.3). Furthermore, in order to realize the subtractive manufacturing approach, GPT-4 successfully generated precise 2D DXF files for laser cutting the wood pieces. These files were derived from the original 3D OpenJSCAD model, effectively guiding the laser cutter in executing the required cutting patterns (Section 6.4.2). We also showcased GPT-4's ability to adapt the design based on fabrication constraints. For example, the largest wood pieces we were able to obtain from McMaster that could fit into our laser cutter were 12 by 24 inches. Unfortunately, this size was too small to support the original cabinet design.

The largest wood piece I can find on McMaster that can fit into the 18 by 32 inches laser cutter is 12 by 24 inches. Therefore, we need to scale down the initial cabinet design such that every piece can be cut from the 12 by 24 inches plates. The thickness of the wood piece I selected is 1/2 inch. Can you regenerate the Python code for the DXF considering this fabrication constraint?

We need to make the width of the back board less than or equal to 12 inches. One possible solution is to divide the back board into two equal halves, each with a width of half of the original. They could be joined together after cutting. Let me modify the code accordingly:

    # dimensions
    board_thickness = 1/2
    cabinet_width = 12  # scale down to match wood piece size
    scale = cabinet_height / 20  # scale factor based on original height
    cabinet_height = 30 * scale
    cabinet_depth = 18 * scale
    brace_thickness = 1/8
    hole_diameter = (0.375 + 1/16)
    hole_depth = board_thickness * 3
    bracketOffset = 2 * scale

Fig. 68. Handling Manufacturing Constraints with the Assistance of GPT-4. Here, the available wood board size is insufficient for the original cabinet design. GPT-4 successfully scaled down the entire design, ensuring manufacturability.
As we presented this constraint to GPT-4, it successfully scaled down the entire design to ensure compatibility with the available wood pieces (Figure 68). Given that our laser cutter is capable of engraving patterns onto the cutting materials, we took advantage of this feature and requested GPT-4 to generate Python code for producing DXF files that could engrave words onto the side board of the cabinet. GPT-4 was able to fulfill this request, providing us with the necessary code to create the desired engraving. Finally, once we had the laser-cut wood pieces, we proceeded to assemble them into a fully functional cabinet. During this process, we sought GPT-4's expertise in recommending the most suitable adhesive for wood. GPT-4 suggested Quick-Set Epoxy, 3M DP100, 1.64 fl. oz. Cartridge as the optimal glue for our purposes (Figure 69). Additionally, GPT-4 provided detailed treatment guidance and step-by-step procedures for the assembly, ensuring a smooth and successful construction process (Figure 69). The final fabricated cabinet is showcased in Figure 70, offering three different view angles. To provide a clear perspective of its size, a mug is placed on the middle shelf, serving as a reference for scale.

Great. I purchased plywood from McMaster with 1/2 inch thickness. I want to use glue to assemble the pieces together. Please provide a link for the glue I should buy and give me the detailed treatment procedure.

For plywood assembly, a good choice would be Titebond III Ultimate Wood Glue. It offers superior strength, a long open-assembly time, and is safe to use. You can find it on the McMaster site here. Here's a simple procedure to assemble your cabinet:

1. Dry Fit: Start by fitting all the pieces together without any glue to ensure they all fit properly.
2. Sanding: If necessary, sand the edges of your wood pieces for a better fit. Be careful not to alter the dimensions significantly.
3. Gluing Procedure:
   • Back Board to Side Boards: Apply a bead of glue along the side edges of the back board. Align it with the side boards such that the back board is sandwiched between the two side boards. Ensure the edges are flush. Clamp them together and wipe off any excess glue that squeezes out with a damp cloth. Allow the glue to dry according to the manufacturer's instructions.
   • Bottom Board: Apply glue along the bottom edges of the side and back boards. Position the bottom board so that its edges are flush with the bottom edges of the side and back boards. Clamp and allow to dry.
   • Top Board: Similarly, apply glue to the top edges of the side and back boards, align the top board, clamp, and allow to dry.
   • Shelf Board: Apply glue along the edges of the shelf board where it will contact the side and back boards. Slide it into position and clamp. Allow to dry.
4. Finishing: After the glue is completely dry, you can finish your cabinet with a finish of your choice (paint, varnish, etc.) to increase its durability and improve appearance.

Remember, it's important to clamp your pieces together after you've applied the glue to ensure a tight bond. Also, ensure your workspace is level so your cabinet comes out square.
Please remember to follow all safety guidelines when working with tools and materials, including wearing appropriate personal protective equipment. Always work in a well-ventilated area when using glues or finishes.

Fig. 69. Comprehensive Assembly Instructions Provided by GPT-4. GPT-4 provides guidance on glue selection, pre-treatment, assembly sequence, and finishing options.

9.1.3 Performance evaluation. Using the 3D OpenJSCAD design as a basis, GPT-4 was prompted to develop functions to evaluate performance criteria: storage capacity, load capacity before substantial sagging, plywood material cost, and accessibility for individuals in wheelchairs. See Section 7.1 for details on this type of numerical design evaluation and Section 7.2 for subjective metrics. When comparing these values for the original and scaled cabinets, the storage capacity, load capacity before sagging, material cost, and accessibility were computed to be 9,205 in³, 14.336 lbs, $67, and a score of 5/10 for the original cabinet, and 1,131 in³, 8.601 lbs, $18, and a score of 5/10 for the scaled-down cabinet. GPT-4 failed to compute an appropriate strength value and could not verify the corresponding equation, but the other evaluations were correct. Based on these numbers, both cabinets were qualitatively classified as "strong" on a range of "strong," "moderate," or "weak," being far stronger than a typical cabinet used in an office. However, while the larger cabinet was categorized as "highly useful" on a scale of "useful," "moderate," or "not useful" due to its high strength and storage capacity, the smaller cabinet was only "moderately useful" due to its small size. Further aesthetic evaluation in text form is included in Figure 71, classifying, without computation, the original design as "Contemporary" and the scaled design as "Minimalist".

Fig. 70. The Manufactured Cabinet. The real-world cabinet shown from different view angles, with a mug as a reference for scale.

My cabinet has these dimensions. Describe the aesthetic category it should fall into. It has one shelf at mid-height.

    Cabinet_dimensions = {
        'height': 30,
        'width': 18,
        'depth': 20,
        'board_thickness': 0.5
    }

Your cabinet, with its dimensions of 30" height, 18" width, and 20" depth, likely falls into the "contemporary" aesthetic category. With one shelf at mid-height, it offers a balanced and symmetrical look. Its proportions are conducive to a modern, sleek design that can seamlessly blend into a variety of interiors. The cabinet size suggests functionality and purpose, making it an appealing choice for those who appreciate clean lines and minimalist style.

(... omitted by authors: description of the smaller scaled cabinet ...)

Given its relatively small dimensions, your cabinet could fall into the "Minimalist" aesthetic category. With its clean, uncomplicated design and single mid-height shelf, it would emphasize functionality and simplicity, which are key principles of minimalist design. Its smaller size would also contribute to an uncluttered appearance, further aligning with the minimalist aesthetic.

Fig. 71. Aesthetic Description. GPT-4 comments on the aesthetic category of the designed and fabricated cabinets based on their geometry.

9.2 Quadcopter
In this subsection, our objective is to design a functional indoor quadcopter capable of flight, hovering, and landing, using GPT-4 with minimal human intervention. The process involves sourcing parts, creating a 3D design incorporating the chosen parts, and ultimately manufacturing, assembling, and testing the quadcopter, as depicted in Figure 72.

9.2.1 Parts Sourcing. Many real-world systems, like drones, are built from pre-existing components with various kinds of specifications. Therefore, our first task is to utilize GPT-4 to select appropriate parts for our specific use case.
This was successfully accomplished using GPT-4. The detailed process is elaborated in Section 6.3, and the selected parts are shown in Figure 74 (left).

9.2.2 Text-to-Design. Upon identifying the components, we employ GPT-4 once more to generate a viable quadcopter design incorporating those parts. With minimal human intervention, we successfully crafted a geometric design for the quadcopter, as detailed in Section 4.2.2. Now, we shift our focus to practical issues: 1) how parts are mounted onto the designed frame, and 2) whether the frame is manufacturable. Given that describing each part's geometric details is challenging and that GPT-4 doesn't fully comprehend how to design the frame for optimal physical balance, we provide GPT-4 with low-level instructions to guide adjustments to the current frame design rather than expecting it to independently modify the design.

Fig. 72. The Quadcopter Design Process Using GPT-4. The process involves sourcing parts, creating a 3D design, and manufacturing, assembling, and testing the quadcopter. We successfully manufactured a working quadcopter.

Fig. 73. The Quadcopter Frame. Here we show that GPT-4 can create cylinders for the motor mounting holes. Using Boolean operations, we successfully created a valid frame.

First, we adjust the frame bar's cross-sectional size using GPT-4 to ensure it is 3D-printable and adequately robust. Then, we combine all boxes and cylinders that form the frame in the geometric design. To stabilize the battery placement, we semi-integrate the battery into the frame, subtracting its volume from the frame so that the frame securely holds it. For the controller and signal receiver, which are much lighter and smaller than the battery, we simply glue them onto the battery, eliminating the need for additional accommodations. Lastly, we mount the motors using screws for stability, which requires screw holes in the frame. To minimize human effort, we utilize GPT-4 to create the holes in the frame. For each motor, we instruct GPT-4 to generate four cylinders representing the required holes, detailing the hole specifications via text according to the motor's specifications. Even though crafting mounting holes isn't trivial, we successfully produced the correct cylinders after a few prompts with GPT-4. The cylinders are shown in dark gray in Figure 73 (left). Once the hole cylinders for one motor are ready, we let GPT-4 group and duplicate them using our place function. GPT-4 managed to position the hole cylinders correctly but had issues with proper rotation, which we later manually corrected. As seen, the hole cylinders overlap with the frame bar. Since we cannot change the frame bar's thickness due to manufacturing concerns, we adjusted the frame bar's tip thickness to prevent it from obstructing the holes. This was done manually, as it proved challenging to adequately convey the problem and solution to GPT-4.

Our experiments also revealed a limitation: adjusting designs we completed earlier proves difficult. After extensive interaction with GPT-4, referring back to previously discussed design elements becomes a challenge. Consequently, if any issues arise with parts addressed earlier, it becomes arduous to revisit them with GPT-4 and prompt modifications.
Ideally, we should finalize each design without the need for future revisions, as adjustments later prove difficult. The final frame result is displayed in Figure 73 (right).

9.2.3 Design-to-Manufacturing. The only part that needs manufacturing is the frame. Once the frame is determined, we fabricate it using a 3D printer. Because the representation of the frame results from Boolean operations on boxes and cylinders, it is simple to convert it directly to the .stl format, which is widely recognized by 3D printers. We used OpenJSCAD to perform this conversion. Once we have the .stl file, we manufacture the frame using a Stratasys Fortus 400, since it has a sufficiently large build volume and produces precise, robust, and durable parts. We instruct the printer to use the least amount of infill possible in the print settings. By choosing a lower infill percentage, the printer will create a sparse or hollow internal structure rather than a solid one. This decision conserves material and reduces both print time and weight without significantly compromising the strength of the copter's frame, given that it is not a load-bearing component. The resulting fabricated copter frame not only meets the required dimensions but also balances strength and weight, as necessary for optimal flight performance. We visualize the printed frame in Figure 74 (middle).

Fig. 74. The Parts and the Printed Frame of the Copter. Left: Selected parts. Middle: Printed frame. Right: Assembled copter.

9.2.4 Assembling and Real-World Verification. With the 3D-printed frame ready, we proceed to the assembly stage, integrating the pre-prepared components. Given that assembly considerations were incorporated into our GPT-4-guided design process, the assembly of the quadcopter is straightforward. The battery is secured in the central frame slot using double-sided tape and wrappers. Similarly, the controller and receiver are placed atop the battery and secured with double-sided tape and wrappers. The four motors are attached using screws. All elements are affixed firmly and stably, resulting in a sturdy copter ready for flight, as shown in Figure 74 (right).

Once assembled, we conduct a series of tests. First, we administer an ascending test, directing the copter to lift off the ground and ascend to a specific altitude. This test gauges the combined thrust of the motors and the propellers' efficacy in converting the motors' rotary motion into lift. It also allows us to evaluate the copter's responsiveness to radio transmitter commands, the flight controller's interpretive capacity, and the copter's overall ascent stability. The motion is depicted in Figure 75 (left). Following ascent, we undertake a hovering test. During this phase, the copter is directed to maintain its altitude and position for a set period. Hovering demands continuous, simultaneous operation of all four motors to counter gravity. This test significantly illuminates the copter's capacity to achieve and maintain stable flight, a vital characteristic of any functioning copter. The hovering motion is demonstrated in Figure 75 (middle). Finally, we execute a descending test, instructing the copter to safely and gradually descend to the ground. This evaluates the copter's ability to control thrust reduction and the resulting downward motion, as well as the flight controller's capacity to interpret and carry out the descent command.
It is also a crucial examination of the copter's landing abilities; a smooth, safe landing is essential to preserve the copter and its components. The descending motion is exhibited in Figure 75 (right).

Fig. 75. The Flight Test. Left: the ascending test. Middle: the hovering test. Right: the descending test.

9.2.5 Text-to-Performance. We also investigate how GPT-4 can help with measuring the performance of a given quadcopter design. Given a current design iteration of the copter from Section 7.1.2, GPT-4 is able to identify important trade-offs to optimize and subsequently implement optimization strategies to improve performance. One such trade-off GPT-4 identified is between weight and size: smaller copters are generally able to stay afloat longer due to reduced weight and aerodynamic drag, while larger copters have more space to accommodate larger batteries, which can provide more energy for longer flight times. Out of all the possible optimization methods for finding the combination of parameters that maximizes flight time, speed, and distance while meeting constraints on weight and size, GPT-4 chose a very suitable numerical method, Particle Swarm Optimization (PSO), from the PySwarm library. Aside from being very efficient and simple to implement, PSO has a strong global search capability, which is beneficial when the optimal solution might be located in a large and complex space, and it allows the copter's weight and size to be adjusted based on performance data. GPT-4 has a strong grasp of the inherent trade-offs of such systems and is capable of generating tailored ideas and feasible solutions to optimize performance.

We now turn to the details of using simulation to evaluate the quadcopter's performance. In the workflow of fabricating a functional robot, simulation is often used both for control and for collecting performance statistics that can be used for optimization. Since our fabricated robot includes its own controller, we focus on using the robot's performance in simulation for design optimization. Our design space involves both a parameterized quadcopter, whose frame bar lengths can vary but which is otherwise constrained by the design created in Section 9.2.2, and the controller design. While it is possible to ask GPT-4 to provide suggestions on the type of controller to apply, we choose to have GPT-4 generate an LQR controller, which is widely used for UAVs. We break down multicopter optimization into three steps: 1) given the OpenJSCAD design of the quadcopter, convert the design into a format for modeling multibody systems, such as URDF, along with the means of computing relevant physical properties that inform the controller design, such as the robot's mass; 2) given the robot's physical properties, generate an LQR controller for simulation; 3) given a robot design in URDF, functions for extracting the design's relevant physical properties, and a controller, synthesize an algorithm to optimize the robot's design.

Converting OpenJSCAD to URDF. We start with the OpenJSCAD quadcopter design developed in Section 9.2.2. Because there is no straightforward equivalent of subtraction and union from OpenJSCAD in URDF, we omit the creation of the holes in the motor base and replace the union of motor base parts with placement of individual links, while retaining the essence of the original design. We then take an object-oriented approach to having GPT-4 synthesize the equivalent URDF code.
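To make the target of this conversion concrete, the sketch below shows the kind of link-synthesis helper the approach calls for; it assumes the Component and Cuboid classes that GPT-4 is prompted to build in the next step (Fig. 76), and the helper itself is our illustrative reconstruction rather than GPT-4's verbatim output.

    # Illustrative sketch (not GPT-4's verbatim output): emit a URDF <link>
    # element for a box-shaped Component, assuming the classes from Fig. 76.
    def link_to_urdf(component):
        m = component.mass
        w, h, d = (component.geometry.width,
                   component.geometry.height,
                   component.geometry.depth)
        # Moment of inertia of a solid box about its center of mass.
        ixx = m / 12.0 * (h ** 2 + d ** 2)
        iyy = m / 12.0 * (w ** 2 + d ** 2)
        izz = m / 12.0 * (w ** 2 + h ** 2)
        return (
            f'<link name="{component.name}">\n'
            f'  <inertial>\n'
            f'    <mass value="{m}"/>\n'
            f'    <inertia ixx="{ixx}" ixy="0" ixz="0" iyy="{iyy}" iyz="0" izz="{izz}"/>\n'
            f'  </inertial>\n'
            f'  <visual><geometry><box size="{w} {h} {d}"/></geometry></visual>\n'
            f'</link>\n'
        )

Joints are the harder half of the format, as discussed below, because they must encode each child link's pose relative to its parent.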
As seen in Fig. 76, we prompt GPT-4 to create a Component class whose instances store the geometry, mass, position, and orientation attributes of the corresponding OpenJSCAD primitives. Component instances also have distinct names to represent URDF links. We additionally prompt GPT-4 to generate helper functions for placing instances with different geometries (Fig. 76). This framework allows GPT-4 to generate a function that places components in terms of absolute coordinate positions and orientations and to replicate the Python equivalent of the OpenJSCAD design, as shown in Fig. 77. However, one difficulty is that, unlike CAD-style designs, formats for representing robots also require relations between components to be represented as joints in order to accurately simulate the robot dynamics.

Below are helper functions written in OpenJSCAD.

    function createBox(w, h, d) {
        return cuboid({ size: [w, h, d], center: [0, 0, 0] });
    }

    function createCylinder(r, h) {
        return cylinder({ radius: r, height: h, center: [0, 0, 0], segments: 64 });
    }

Implement the above functions in Python. Both functions should create an instance of a Component, which is a class that keeps track of the component's name, mass, geometry, as well as absolute translation and rotation with respect to the origin. In addition, consider the below place() function.

    // A function to add a component to the drone at specific coordinates
    function place(component, x, y, z, a = 0) {
        let cc = translate([x, y, z], rotate([0, 0, a / 180.0 * Math.PI], component));
        return cc;
    }

Implement the place function in Python, which should return a copy of the Component with the given translation and rotation. We want to give this Component a unique name, so also take in an additional suffix argument. If not None, the suffix should be added to the copy's name. Do not use any hypothetical libraries.

(... content omitted by authors ...)

    import math
    import copy

    class Component:
        def __init__(self, name, mass, geometry, translation=[0, 0, 0], rotation=[0, 0, 0]):
            self.name = name
            self.mass = mass
            self.geometry = geometry
            self.translation = translation
            self.rotation = rotation

    class Cuboid:
        def __init__(self, w, h, d):
            self.width = w
            self.height = h
            self.depth = d

    class Cylinder:
        def __init__(self, r, h):
            self.radius = r
            self.height = h

    def create_box(name, mass, w, h, d):
        return Component(name, mass, Cuboid(w, h, d))

    def create_cylinder(name, mass, r, h):
        return Component(name, mass, Cylinder(r, h))

    def place(component, x, y, z, a=0, suffix=None):
        cc = copy.deepcopy(component)
        cc.translation = [x, y, z]
        cc.rotation = [0, 0, a / 180.0 * math.pi]
        if suffix is not None:
            cc.name = component.name + suffix
        return cc

(... content omitted by authors ...)

Fig. 76. Creating Object-Oriented Representations of Robot Components. Using the Component class for representing a robot link facilitates the placement of links with the place() function. Later in the example, the Component class is modified to additionally keep track of an instance's parent link.

(... omitted by authors: GPT-4 synthesized functions ...)

Using createBox, createCylinder, and place defined above, write a Python script that recreates the box and cylinder components created by the below OpenJSCAD script.

(... omitted by authors: OpenJSCAD quadcopter design ...)
Store the results of place() in a dictionary mapping component name to component because we will need to access the components later. Use the "suffix" argument to ensure all components have unique names. Use the following masses of quadcopter parts and convert them to kg: (... omitted by authors: masses of each component ...)

(... content omitted by authors ...)

    def create_drone():
        # Create the quadcopter parts with respective masses in kg
        motor = create_cylinder("motor", 0.030, 14, 32)
        motor_base_part1 = create_cylinder("motorBasePart1", 0.01, 14, 10)
        propeller = create_cylinder("propeller", 0.0135, 65, 8)
        controller = create_box("controller", 0.0107, 41, 38, 8.1)
        battery = create_box("battery", 0.015, 70, 35, 33)
        receiver = create_box("receiver", 0.0015, 16, 11, 5.4)
        motor_base_part2 = create_box("motorBasePart2", 0.01, 20, 7, 10)
        frame_bar1 = create_box("frameBar1", 0.01, 2 * 130 * 1.5 + 31, 15, 25)
        frame_bar2 = create_box("frameBar2", 0.01, 2 * 130 * 1.5 + 31, 15, 25)
        body = create_box("body", 0.05, 75, 75, (33 + 8.1 + 5.4) / 2 * 1.2 * 1.5)

        # Dictionaries to store components
        drone_parts = {}

        # Place the frame parts
        drone_parts["frameBar1"] = place(frame_bar1, 0, 0, 0, 0, "_place1")
        drone_parts["frameBar2"] = place(frame_bar2, 0, 0, 0, 90, "_place2")

        # Place the central body, moved down by 4
        body_height = (33 + 8.1 + 5.4) / 4 * 1.2 * 1.5 - 4
        drone_parts["body"] = place(body, 0, 0, body_height, 0, "_place1")

        # Place the quadcopter parts
        dist = 130 * 1.5 + 25  # half of frameBar length

        # Place the motor base parts
        for i in range(4):
            angle = i * 90
            x = dist * math.cos(angle * math.pi / 180)
            y = dist * math.sin(angle * math.pi / 180)
            drone_parts[f"motorBasePart1_place{i+1}"] = place(motor_base_part1, x, y, 10, angle)
            drone_parts[f"motorBasePart2_place{i+1}"] = place(motor_base_part2, x, y, 0, angle)

        (... omitted by authors: 2 additional for loops for placing motors and propellers in a similar manner as the motor bases ...)

        # Place battery, controller, and receiver on the body, one on top of the other
        drone_parts["battery"] = place(battery, 0, 0, 33 / 2 + 2 + 13, 0)
        drone_parts["controller"] = place(controller, 0, 0, 33 + 8.1 / 2 + 2 + 13, 0)
        drone_parts["receiver"] = place(receiver, 0, 0, 33 + 8.1 + 5.4 / 2 + 2 + 13, 0)

        return drone_parts

(... content omitted by authors ...)

Fig. 77. Replicating Quadcopter Component Placement. Disregarding the union and subtract operators in OpenJSCAD, the above Python code accurately reproduces the locations of the components as those in the OpenJSCAD design. The only error is that certain components do not have unique names, which GPT-4 later corrects upon additional prompting.

We prepare for this when synthesizing the component placement script by prompting GPT-4 to store the components in a dictionary, which allows easy access to the components. We tackle the challenge of generating the quadcopter's joints by relying on GPT-4's knowledge of the spatial relations between components in a quadcopter. After equipping the Component class with a function that sets the parent link, we use this interface to have GPT-4 synthesize a sequence of robot joints, as shown in Fig. 78. We find that although GPT-4 understands certain substructures, such as the fact that the motor is placed on top of the motor base and the propeller is connected to the top of the motor, its initial definition results in an invalid URDF format, as both of the frame bars are root links.
We thus explicitly prompt GPT-4 to choose one of the frame bars as the root link. Finally, GPT-4 is tasked with creating a full URDF file. Because the robot is represented with modular Component instances that contain all relevant information on individual links' mass and geometry, as well as relations to parent links, it is relatively straightforward for GPT-4 to create helper functions that synthesize URDF links and joints. We note that creating a joint is a more involved task, since the link's absolute position and orientation must be converted to a position and orientation relative to the parent link. It is necessary to explicitly prompt GPT-4 to use the appropriate rotation matrices in this calculation; otherwise, it does not appropriately account for how the parent link's rotation affects the child link's relative translation. Because we choose LQR as the robot's controller in simulation, we ask GPT-4 to compute the full assembly's mass and moment of inertia given the Python code it has generated thus far. It outputs reasonable Python code that computes the assembly's center of mass and uses the parallel axis theorem to combine the moments of inertia of the individual links. In summary, we find that explicitly prompting GPT-4 to provide suitable object-oriented representations, such as the Component class, and modularizing the code generation as much as possible (e.g., asking GPT-4 to synthesize helper functions, place components, and define parent-child relations in separate steps) are key techniques in successfully converting from OpenJSCAD to URDF.

Deriving an LQR Controller for the Quadcopter. Multicopter control is an extensively studied problem, for which various algorithms have been proposed, such as PID controllers, LQR controllers, and more complex alternatives. We aim to synthesize an LQR controller, as it is not only a popular choice in the literature but also guarantees optimal control when the multicopter dynamics and a stable fixed point are known. In particular, we focus on controlling a quadcopter, which is an underactuated system with 6 degrees of freedom but only 4 independent actuators, resulting in nonlinear dynamics even without aerodynamic effects. However, LQR provides optimal control by linearizing the system around a stable fixed point, for which the problem then has a closed-form solution. As LQR is widely used for control, the algorithm is easily accessible through the control Python library, which performs LQR with a single function call. As such, the main challenge in using LQR for quadcopter control is deriving the state-space representation, the A and B matrices, used in the linear model of the system shown in Equation (1):

    ẋ = Ax + Bu    (1)

Here, x is the 12-dof state vector, comprising the position, linear velocity, roll, pitch, and yaw angles, and angular velocity; u is the 4-dof control input, typically consisting of the upward thrust and the external torques applied about the x, y, and z axes. Note that x in Equation (1) is the difference between the current state and a user-specified target state. Similarly, u is expressed with respect to the target control input, where the torques are 0 and the thrust balances out the gravitational force acting on the quadcopter. GPT-4 is then asked to provide a suitable fixed point at which the quadcopter is stable, and it provides a correct response. In order to verify GPT-4's results, we ask GPT-4 to first synthesize the full kinematics model before using a symbolic manipulation library, sympy, to compute the state-space matrices.
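For reference, once the A and B matrices are in hand, the single-call interface of the control library looks as follows; we demonstrate it on a double integrator rather than the full 12-state quadcopter model, and the Q and R weights are illustrative placeholders rather than values used in our experiments.

    import numpy as np
    import control

    # Double-integrator stand-in: position integrates velocity, and the input
    # drives acceleration. The quadcopter's 12x12 A and 12x4 B slot in the same way.
    A = np.array([[0.0, 1.0],
                  [0.0, 0.0]])
    B = np.array([[0.0],
                  [1.0]])
    Q = np.diag([10.0, 1.0])   # penalize position error more than velocity error
    R = np.array([[1.0]])      # control-effort penalty

    K, S, E = control.lqr(A, B, Q, R)  # optimal gain for u = -K @ (x - x_target)
    print(K)                           # stabilizing feedback gain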
(... omitted by authors: Python code generated by GPT-4 thus far ...)

Finish writing the script by using commonsense knowledge about relations between quadcopter components to set the appropriate parent component. Generate the full URDF file of the quadcopter.

(... content omitted by authors ...)

    def set_parent_relations(drone_parts):
        # Set parent-child relations
        drone_parts["body_place1"].set_parent(drone_parts["frameBar1_place1"])
        drone_parts["body_place1"].set_parent(drone_parts["frameBar2_place2"])
        for i in range(4):
            drone_parts[f"motorBasePart1_place{i+1}"].set_parent(drone_parts["frameBar1_place1"])
            drone_parts[f"motorBasePart2_place{i+1}"].set_parent(drone_parts["frameBar1_place1"])
            drone_parts[f"motor{i+1}"].set_parent(drone_parts[f"motorBasePart1_place{i+1}"])
            drone_parts[f"propeller{i+1}"].set_parent(drone_parts[f"motor{i+1}"])
        drone_parts["battery"].set_parent(drone_parts["body_place1"])
        drone_parts["controller"].set_parent(drone_parts["body_place1"])
        drone_parts["receiver"].set_parent(drone_parts["body_place1"])

(... content omitted by authors ...)

Recall the set_parent_relations function. (... omitted by authors: original function generated by GPT-4 ...) This function results in the error that a URDF robot can only have one root joint. Could you make either frameBar1 or frameBar2 the root joint?

(... content omitted by authors ...)

    def set_parent_relations(drone_parts):
        # Set parent-child relations
        # Make frameBar1 the root
        drone_parts["frameBar1"].set_parent(None)
        drone_parts["frameBar2"].set_parent(drone_parts["frameBar1"])
        drone_parts["body"].set_parent(drone_parts["frameBar1"])
        for i in range(1, 5):
            drone_parts[f"motorBasePart1_place{i}"].set_parent(drone_parts["frameBar1"])
            drone_parts[f"motorBasePart2_place{i}"].set_parent(drone_parts["frameBar1"])
            drone_parts[f"motor{i}"].set_parent(drone_parts[f"motorBasePart1_place{i}"])
            drone_parts[f"propeller{i}"].set_parent(drone_parts[f"motor{i}"])
        drone_parts["battery"].set_parent(drone_parts["body"])
        drone_parts["controller"].set_parent(drone_parts["body"])
        drone_parts["receiver"].set_parent(drone_parts["body"])

(... content omitted by authors ...)

Fig. 78. Constructing Relations between Quadcopter Links. After creating and placing links in the appropriate positions, GPT-4 must infer parent-child relations that are not available in the OpenJSCAD design to complete the URDF representation. In its first attempt, it produces an invalid file because only one of the frame bars can be the root joint. However, GPT-4 demonstrates an understanding of how the motor, motor base, and propeller should be connected and that the main body contains the battery, controller, and receiver.

This two-step approach forces GPT-4 to produce an interpretable dynamical model that a domain expert can verify, rather than directly outputting the A and B matrices. This proves to be essential: even though GPT-4 captures the high-level idea of treating the quadcopter as a rigid body and applying the Newton-Euler equations to describe its linear and angular acceleration, it is unable to provide the exact model of the system zero-shot, without user feedback. For instance, as shown in
Fig. 79, GPT-4 formulates the correct rotation matrix but does not apply it correctly when converting the control inputs from the body frame to the linear acceleration in the inertial frame. Although GPT-4 can correct its error when given feedback, this type of error is difficult to catch without rigorously checking the correctness of GPT-4's calculations, and it highlights the limitation that in some inverse design domains, a user cannot obtain precise outputs from GPT-4 without the expert knowledge needed to perform verification. After this fix, the resulting code for deriving the A and B matrices is still incorrect, as the simulated quadcopter sinks downward due to a sign error in the equations for acceleration. Similarly, generating the simulation loop in PyBullet requires several rounds of iteration on the initial code. Some of the mistakes are more obvious, such as incorrectly indexing into the control input vector when applying external forces and torques, whereas others are more specific to PyBullet's API. For the former, we directly pointed out GPT-4's mistake. In the latter case, we found that simply giving GPT-4 the error message and asking it to fix its code sufficed for our problem.

Co-design of Quadcopter's Shape and Control. Now that the quadcopter design can be output to URDF and the LQR controller can be synthesized from a given design, we ask GPT-4 to optimize the quadcopter design. Much of the design space is fixed by the fact that most components have unchangeable dimensions. However, as the frame bars are not predefined components but rather constructed from carbon-fiber tubes, we focus on optimizing their lengths. The control then follows directly from a given design, as we only need to compute the design's total mass and moment of inertia to apply the LQR controller derived by GPT-4. With the objective of minimizing the number of simulation steps required to reach a goal height of 1 m, we prompt GPT-4 with the full Python script that converts OpenJSCAD to URDF and performs simulation, and we find that it is able to provide an outline of the optimization, consisting of an objective function that creates a quadcopter with the specified frame bar lengths and performs the simulation loop, as shown in Fig. 80. GPT-4 also provides reasonable bounds of (100 mm, 500 mm) on the frame bar lengths when prompted further. GPT-4 then provides an outer optimization script that uses the helper objective function. We also explicitly prompt GPT-4 to complete its outline by updating the component creation code to generate frame bars with the correct masses and to change the placement of components that depend on the length of the frame bars, namely the motors, motor bases, and propellers.

As seen in Fig. 80, GPT-4 initially proposes SLSQP, an optimization method that requires gradients of the objective function, which is not trivial to apply here, as we would require gradients computed through the simulation. When prompted to provide an alternative, GPT-4 suggests Differential Evolution, which meets the specification of not requiring a differentiable optimization problem but can be computationally expensive. We thus explicitly prompt GPT-4 to provide code to perform a grid search over the two frame bar lengths. This terminates in a reasonable amount of time with the result of making the frame bars as short as possible, which is the expected solution: the smaller the quadcopter, the less inertia it faces when taking off and decelerating towards the goal.
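A minimal version of this grid search might look like the sketch below; simulation_time is the objective function outlined in Fig. 80 (an assumed dependency here), and the bounds mirror the (100 mm, 500 mm) range that GPT-4 suggested.

    import itertools
    import numpy as np

    # Illustrative grid search over the two frame bar lengths (in mm),
    # assuming the simulation_time(length1, length2) objective from Fig. 80.
    lengths = np.linspace(100, 500, 9)  # 9 samples per axis over the bounds
    best = None
    for l1, l2 in itertools.product(lengths, lengths):
        steps = simulation_time(l1, l2)  # simulation steps to reach the 1 m goal
        if best is None or steps < best[0]:
            best = (steps, l1, l2)

    print(f"best frame bars: {best[1]:.0f} mm x {best[2]:.0f} mm ({best[0]} steps)")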
The resulting quadcopter with optimized frame bar lengths is visualized in Fig. 81.

10 DISCUSSION
10.1 Observed Capabilities and Limitations
Our examination of GPT-4's performance in the fields of design and manufacturing has underscored several crucial capabilities that are instrumental in supporting the design process, in addition to pinpointing significant limitations. Next, we summarize these observations and discuss various methods to mitigate the identified limitations.

Capabilities:
C.1 Extensive Knowledge Base in Design and Manufacturing: GPT-4 boasts a comprehensive knowledge base in the realms of design and manufacturing. Its capabilities extend to solving a wide array of problems and automatically completing specifications, making it a versatile tool across various domains.

Derive the A and B matrices for the LQR controller by linearizing the dynamics of the quadcopter around the fixed point. Assume that the system has 12 states and 6 dof. The control inputs should be total thrust and x, y, z torques. You should first write the full kinematics model and then call sympy to compute the Jacobians. Account for the yaw angle in the equations for linear velocity and angular velocity.

(... omitted by authors: full description of quadcopter dynamics and high-level approach ...)

    import sympy as sp

    # Define the state variables
    px, py, pz, phi, theta, psi, vx, vy, vz, wx, wy, wz = sp.symbols(
        'px py pz phi theta psi vx vy vz wx wy wz')

    # Define the control inputs
    u1, u2, u3, u4 = sp.symbols('u1 u2 u3 u4')

    # Define the constants
    Ix, Iy, Iz, m = sp.symbols('Ix Iy Iz m')

    # Rotation Matrix
    R = sp.Matrix([
        [sp.cos(theta)*sp.cos(psi),
         sp.sin(phi)*sp.sin(theta)*sp.cos(psi) - sp.cos(phi)*sp.sin(psi),
         sp.cos(phi)*sp.sin(theta)*sp.cos(psi) + sp.sin(phi)*sp.sin(psi)],
        [sp.cos(theta)*sp.sin(psi),
         sp.sin(phi)*sp.sin(theta)*sp.sin(psi) + sp.cos(phi)*sp.cos(psi),
         sp.cos(phi)*sp.sin(theta)*sp.sin(psi) - sp.sin(phi)*sp.cos(psi)],
        [-sp.sin(theta),
         sp.sin(phi)*sp.cos(theta),
         sp.cos(phi)*sp.cos(theta)]
    ])

    # System dynamics
    f = sp.Matrix([
        vx, vy, vz,
        wx, wy, wz,
        R[2, 0]*u1/m,
        R[2, 1]*u1/m,
        R[2, 2]*u1/m - 9.81,
        ((Iy - Iz)*wy*wz + u2)/Ix,
        ((Iz - Ix)*wx*wz + u3)/Iy,
        ((Ix - Iy)*wx*wy + u4)/Iz
    ])

    state = sp.Matrix([px, py, pz, phi, theta, psi, vx, vy, vz, wx, wy, wz])
    controls = sp.Matrix([u1, u2, u3, u4])

    # Compute Jacobians
    A = f.jacobian(state)
    B = f.jacobian(controls)

(... content omitted by authors ...)

Can you double check the way the rotation matrix is applied when computing the system dynamics?

Yes, you're correct, and I apologize for the mistake in my previous explanation. The total force in the body frame is given by [0, 0, -u1]. To find the force in the inertial frame, we multiply by the rotation matrix. The rotational dynamics are correct as given, but the translational dynamics need to be updated to account for the orientation of the quadcopter. Let's write them correctly. The linear acceleration (ignoring air resistance) is just the force divided by the mass, so:

    acc_x = R[0, 2] * -u1 / m
    acc_y = R[1, 2] * -u1 / m
    acc_z = R[2, 2] * -u1 / m + g

where g is the gravitational acceleration. Let's correct this in our sympy code:

(... omitted by authors: previous code with correction made ...)
Fig. 79. Deriving Quadcopter State-Space Matrices for LQR Control. Although GPT-4 formulates the problem correctly and uses the correct kinematics equations and rotation matrix, the initial derivation of the state-space matrices has a couple of errors, one of which is that GPT-4 indexes the wrong elements of the rotation matrix.

I have the following code to generate the geometry of a quadcopter, create an LQR controller, and simulate it. I would like to optimize the lengths of frameBar1 and frameBar2 in order to minimize the time it takes the robot to reach its target set point 1 meter off the ground, constrained to a reasonable parameter range. How might I go about modifying this code to do that? (... omitted by authors: script for creating and simulating quadcopter ...)

(... content omitted by authors ...)

First, define a function that takes the lengths of frameBar1 and frameBar2 as parameters and returns the time it takes for the robot to reach its target set point 1 meter off the ground. This function should include the code for setting up and simulating the robot, as well as measuring the time it takes to reach the target set point.

    from scipy.optimize import minimize

    def simulation_time(length1, length2):
        # Use lengths to set up and simulate the robot.
        # Measure and return the time it takes to reach the target set point.

        # Modify the frameBar1 and frameBar2 lengths
        frame_bar1 = create_box("frameBar1", 0.2526, 2 * length1 * 1.5 + 31, 15, 25)
        frame_bar2 = create_box("frameBar2", 0.2526, 2 * length2 * 1.5 + 31, 15, 25)

        # Create drone and simulate
        drone_parts = create_drone()
        set_parent_relations(drone_parts)

        # Generate URDF file
        write_urdf_file('quadcopter_scad.urdf', drone_parts)

        # Place your simulation and control code here.
        # Ensure that your control objective is to reach 1 meter above ground.
        # time_to_reach_target = ...

        return time_to_reach_target

Next, use the minimize function from the scipy.optimize module to find the lengths that minimize the function:

    # Define constraints and bounds
    bounds = [(MIN_LENGTH1, MAX_LENGTH1), (MIN_LENGTH2, MAX_LENGTH2)]  # Replace with actual min and max lengths

    # Initial guess
    x0 = [(MIN_LENGTH1 + MAX_LENGTH1)/2, (MIN_LENGTH2 + MAX_LENGTH2)/2]  # initial guess can be the midpoint of bounds

    result = minimize(simulation_time, x0, bounds=bounds, method='SLSQP')  # SLSQP method supports bounds

    # The optimal lengths are stored in result.x
    optimal_length1, optimal_length2 = result.x

(... omitted by authors: printing the results ...)

(... content omitted by authors ...)

Fig. 80. Code Skeleton for Optimizing Quadcopter Design. To optimize the quadcopter's design, GPT-4 proposes to wrap the code that produces the quadcopter's URDF file and that performs LQR control in an objective function, which enables it to perform optimization in the outer scope. However, its initial suggestion to use SLSQP is unsuitable, as there are no gradients provided by the simulation.

(a) Initial Position (b) Final Position

Fig. 81. Simulating Quadcopter Performance. Left: Following the OpenJSCAD-to-URDF conversion script provided by GPT-4, we show the quadcopter rendered in PyBullet's simulator. The quadcopter has the optimized frame bar lengths discovered with the script provided by GPT-4 and is resting at its initial position.
Right: Using the LQR controller designed by GPT-4, the quadcopter is able to lift off and hover at the goal position 1 m off the ground.

C.2 Iteration Support: GPT-4 incorporates an iterative approach to problem-solving. When feedback is provided on errors, it attempts to rectify them. This ability to adapt, although not always successful, is a valuable facet of GPT-4's performance.

C.3 Modularity Support: GPT-4 supports modular design, demonstrating the ability to reuse or adapt previous designs or solutions when explicitly instructed. While it does not inherently retain memory of past interactions, explicit instructions can help leverage its modular capabilities effectively.

Limitations:
L.1 Reasoning Challenges: GPT-4 encounters difficulties with certain types of reasoning, particularly those involving analytical reasoning and computations. These limitations can manifest as notable challenges in the design and manufacturing domain, for instance, a general lack of spatial reasoning capabilities.

Potential Solutions: Implementing well-crafted domain-specific languages (DSLs) can help address these challenges. DSLs, widely used in computer science, encapsulate recurring knowledge, rules, and valuable abstractions, thereby filling knowledge gaps. Alternatively, APIs that can perform the complex computations can be integrated. GPT-4's proficiency in creating high-level abstractions can be utilized to generate inputs that can then be processed by computational solvers through these APIs.

L.2 Correctness and Verification: GPT-4 often produces inaccurate results or justifications for its solutions and lacks the ability to self-verify.

Potential Solutions: Apart from relying on human verification, automated verification can be accomplished by utilizing APIs that conduct checks and validations. By leveraging GPT-4's iterative capabilities (C.2), we can create a feedback loop that continues until a satisfactory solution is obtained.

L.3 Scalability: As tasks become larger or more complex, GPT-4's performance can deteriorate, and it often struggles to manage multiple tasks concurrently.

Potential Solutions: One strategy is to partition larger tasks into multiple sub-tasks. For instance, rather than requesting that it evaluate multiple performance metrics simultaneously, it may be more effective to request them individually. When constructing more complex models, employing an incremental design process can prove beneficial. Components can be designed and verified separately before being assembled into the final model. GPT-4's modularity support (C.3) can be used to facilitate the creation of complex models from a series of instructions.

L.4 Iterative Editing: When a design needs modifications, specifying those changes as a prompt will often lead to unsatisfactory behavior. This is because GPT-4, upon receiving a change prompt, will regenerate the design, often overlooking elements specified in previous prompts. This situation poses challenges for design editing.

Potential Solutions: Our solutions involved either feeding the prompts back in to create a full specification in a single prompt, though this also caused challenges due to the scalability limitations discussed above, or explicit modularization to enable parts to be designed and reused, though the latter was not possible in some cases.

Dualism:
D.1 Context Information: GPT-4's performance improves significantly with the provision of context information.
The more detailed the domain description, the better it performs. Furthermore, GPT-4 is adept at providing context for its actions, making it an asset in sequential workflows. This characteristic proves particularly beneficial when using GPT-4's generated content in subsequent tasks, as these tasks can utilize the context included in the output from the initial task. However, it may also be harmful if the user does not provide enough context, or if the user would like to create an unusual design but GPT-4 is unable to overcome the biases it associates with a particular domain.

D.2 Unprompted Responses: GPT-4 often infers aspects that are not specified in the prompt, either auto-completing specifications or finding ways to make decisions without enough information. While this is interesting in the design context in terms of allowing for partial specifications that can be auto-completed, it can sometimes be overly proactive, guiding the design in some aspects, which may limit creativity.

11 CONCLUSIONS
In conclusion, we find that GPT-4 possesses numerous capabilities that can be leveraged within the domain of design and manufacturing. This area, where creativity converges with practicality, presents exciting opportunities for advancements that can potentially bring about a significant shift in the way we ideate, prototype, and manufacture a broad range of products. However, it is essential to recognize that substantial work remains to be done to fully support the integration of these tools within this field.

A fundamental issue is that design for manufacturing involves a delicate balance between creativity and formal verification. Engineering design presents a paradox: it requires precision and exactness, yet thrives on an iterative and exploratory spirit. While our experiments have managed to circumvent limitations in formal reasoning through user guidance, DSL crafting, and APIs that invoke computations, there is still much to understand about how best to implement these strategies.

For instance, we hope that our analysis can stimulate new insights about DSL design. Historically, DSLs have been developed with human users in mind. However, when we shift our perspective to creating DSLs for an AI coder, new questions and possibilities emerge. What should these DSLs look like? We believe our analysis provides valuable insights into this concept, particularly within the design and manufacturing domain.

Similarly, in terms of API usage and framework development, we have observed a myriad of possibilities. Approaches range from dividing problems into parts that can be tackled by GPT-4 and others that are best solved by traditional methods, to iterative solutions, and even to a complete reframing of the problem by asking GPT-4 to generate problem-solving code. Each of these approaches carries potential advantages and disadvantages. So, what should an optimal framework look like? We hope our research will aid others in formulating an answer to this question.

In summary, we believe our analysis offers valuable insights into how LLMs like GPT-4 can be harnessed in the domain of design and manufacturing. While we have made substantial strides, the path to fully exploiting the potential of these tools in this domain remains open, rich with opportunities for further exploration and innovation.

ACKNOWLEDGMENTS
This material is based upon work supported in part by Defense Advanced Research Projects Agency (DARPA) Grant No. FA8750-20-C-0075.
FA8750-20-C-0075. REFERENCES [1] [n. d.]. - Repetier Software — repetier.com. https://www.repetier.com/. [Accessed 20-Jul-2023]. [2] [n. d.]. FeatureScript introduction. https://cad.onshape.com/FsDoc/. Accessed: 2023-07-11. [3] [n. d.]. JSCAD User Guide. https://openjscad.xyz/dokuwiki/doku.php. Accessed: 2023-07-14. [4] [n. d.]. Slic3r - Open source 3D printing toolbox — slic3r.org. https://slic3r.org/. [Accessed 20-Jul-2023]. [5] Autodesk. [n. d.]. Autodesk Simulation. https://www.autodesk.com/solutions/simulation/overview. Accessed: July 14, 2023. [6] Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. 2021. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374 (2021). [7] Paul F Christiano, Jan Leike, Tom Brown, Miljan Martic, Shane Legg, and Dario Amodei. 2017. Deep reinforcement learning from human preferences. Advances in neural information processing systems 30 (2017). [8] Dassault Systèmes. [n. d.]. Dassault Systèmes Simulation. https://www.3ds.com/products-services/simulia/overview/. Accessed: July 14, 2023. [9] Prafulla Dhariwal, Heewoo Jun, Christine Payne, Jong Wook Kim, Alec Radford, and Ilya Sutskever. 2020. Jukebox: A generative model for music. arXiv preprint arXiv:2005.00341 (2020). [10] Tao Du, Jeevana Priya Inala, Yewen Pu, Andrew Spielberg, Adriana Schulz, Daniela Rus, Armando Solar-Lezama, and Wojciech Matusik. 2018. Inversecsg: Automatic conversion of 3d models to csg trees. ACM Transactions on Graphics (TOG) 37, 6 (2018), 1–16. [11] Tao Du, Kui Wu, Pingchuan Ma, Sebastien Wah, Andrew Spielberg, Daniela Rus, and Wojciech Matusik. 2021. DiffPD: Differentiable projective dynamics. ACM Transactions on Graphics (TOG) 41, 2 (2021), 1–21. [12] Tom Erez, Yuval Tassa, and Emanuel Todorov. 2015. Simulation tools for model-based robotics: Comparison of bullet, havok, mujoco, ode and physx. In 2015 IEEE international conference on robotics and automation (ICRA). IEEE, 4397–4404. [13] Timothy Erps, Michael Foshey, Mina Konaković Luković, Wan Shou, Hanns Hagen Goetzke, Herve Dietsch, Klaus Stoll, Bernhard von Vacano, and Wojciech Matusik. 2021. Accelerated discovery of 3D printing materials using data-driven multiobjective optimization. Science Advances 7, 42 (2021), eabf7435. [14] Noelia Ferruz, Steffen Schmidt, and Birte Höcker. 2022. ProtGPT2 is a deep unsupervised language model for protein design. Nature communications 13, 1 (2022), 4348. [15] Minghao Guo, Veronika Thost, Beichen Li, Payel Das, Jie Chen, and Wojciech Matusik. 2022. Data-efficient graph grammar learning for molecular generation. arXiv preprint arXiv:2203.08031 (2022). [16] Biao Jiang, Xin Chen, Wen Liu, Jingyi Yu, Gang Yu, and Tao Chen. 2023. MotionGPT: Human Motion as a Foreign Language. arXiv preprint arXiv:2306.14795 (2023). [17] Ali Kashefi and Tapan Mukerji. 2023. Chatgpt for programming numerical methods. Journal of Machine Learning for Modeling and Computing 4, 2 (2023). [18] Bongjin Koo, Jean Hergel, Sylvain Lefebvre, and Niloy J. Mitra. 2017. Towards Zero-Waste Furniture Design. IEEE Transactions on Visualization and Computer Graphics 23, 12 (2017), 2627–2640. https://doi.org/10.1109/TVCG.2016.2633519 96 • Makatura et al. [19] Ruoshi Liu, Rundi Wu, Basile Van Hoorick, Pavel Tokmakov, Sergey Zakharov, and Carl Vondrick. 2023. Zero-1-to-3: Zero-shot one image to 3d object. arXiv preprint arXiv:2303.11328 (2023). 
[20] Pingchuan Ma, Tao Du, John Z Zhang, Kui Wu, Andrew Spielberg, Robert K Katzschmann, and Wojciech Matusik. 2021. DiffAqua: A differentiable computational design pipeline for soft underwater swimmers with shape interpolation. ACM Transactions on Graphics (TOG) 40, 4 (2021), 1–14.
[21] Liane Makatura, Bohan Wang, Yi-Lu Chen, Bolei Deng, Chris Wojtan, Bernd Bickel, and Wojciech Matusik. 2023. Procedural Metamaterials: A Unified Procedural Graph for Metamaterial Design. ACM Transactions on Graphics (2023).
[22] Aman Mathur and Damien Zufferey. 2021. Constraint Synthesis for Parametric CAD. (2021).
[23] Suvir Mirchandani, Fei Xia, Pete Florence, Brian Ichter, Danny Driess, Montserrat Gonzalez Arenas, Kanishka Rao, Dorsa Sadigh, and Andy Zeng. 2023. Large Language Models as General Pattern Machines. arXiv preprint arXiv:2307.04721 (2023).
[24] Pascal Müller, Peter Wonka, Simon Haegler, Andreas Ulmer, and Luc Van Gool. 2006. Procedural modeling of buildings. In ACM SIGGRAPH 2006 Papers. 614–623.
[25] James F O'Brien, Chen Shen, and Christine M Gatchalian. 2002. Synthesizing sounds from rigid-body simulations. In Proceedings of the 2002 ACM SIGGRAPH/Eurographics Symposium on Computer Animation. 175–181.
[26] OpenAI. 2023. GPT-4 Technical Report. arXiv:2303.08774 [cs.CL]
[27] Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. 2022. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35 (2022), 27730–27744.
[28] Mine Özkar and George Stiny. 2009. Shape grammars. In ACM SIGGRAPH 2009 Courses. 1–176.
[29] Guilherme Penedo, Quentin Malartic, Daniel Hesslow, Ruxandra Cojocaru, Alessandro Cappelli, Hamza Alobeidli, Baptiste Pannier, Ebtesam Almazrouei, and Julien Launay. 2023. The RefinedWeb dataset for Falcon LLM: outperforming curated corpora with web data, and web data only. arXiv preprint arXiv:2306.01116 (2023).
[30] Przemyslaw Prusinkiewicz and Aristid Lindenmayer. 2012. The algorithmic beauty of plants. Springer Science & Business Media.
[31] Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. OpenAI Blog 1, 8 (2019), 9.
[32] Aditya Ramesh, Mikhail Pavlov, Gabriel Goh, Scott Gray, Chelsea Voss, Alec Radford, Mark Chen, and Ilya Sutskever. 2021. Zero-shot text-to-image generation. In International Conference on Machine Learning. PMLR, 8821–8831.
[33] Grzegorz Rozenberg and Arto Salomaa. 1980. The mathematical theory of L systems. Academic Press.
[34] George Stiny. 1980. Introduction to shape and shape grammars. Environment and Planning B: Planning and Design 7, 3 (1980), 343–351.
[35] Dennis M Sullivan. 2013. Electromagnetic simulation using the FDTD method. John Wiley & Sons.
[36] Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. 2023. LLaMA: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971 (2023).
[37] Alan Mathison Turing et al. 1936. On computable numbers, with an application to the Entscheidungsproblem. J. of Math 58, 345-363 (1936), 5.
[38] Karl DD Willis, Yewen Pu, Jieliang Luo, Hang Chu, Tao Du, Joseph G Lambourne, Armando Solar-Lezama, and Wojciech Matusik. 2021. Fusion 360 gallery: A dataset and environment for programmatic CAD construction from human design sequences. ACM Transactions on Graphics (TOG) 40, 4 (2021), 1–24.
[39] Jie Xu, Tao Chen, Lara Zlokapa, Michael Foshey, Wojciech Matusik, Shinjiro Sueda, and Pulkit Agrawal. 2021. An end-to-end differentiable framework for contact-aware robot design. arXiv preprint arXiv:2107.07501 (2021).
[40] Yunming Zhang, Mengjiao Yang, Riyadh Baghdadi, Shoaib Kamil, Julian Shun, and Saman Amarasinghe. 2018. GraphIt: A high-performance graph DSL. Proceedings of the ACM on Programming Languages 2, OOPSLA (2018), 1–30.
[41] Allan Zhao, Jie Xu, Mina Konaković-Luković, Josephine Hughes, Andrew Spielberg, Daniela Rus, and Wojciech Matusik. 2020. RoboGrammar: graph grammar for terrain-optimized robot design. ACM Transactions on Graphics (TOG) 39, 6 (2020), 1–16.
[42] Shlomo Zilberstein. 1996. Using anytime algorithms in intelligent systems. AI Magazine 17, 3 (1996), 73–73.

A DSLS AND PROMPTING TIPS FOR TEXT-TO-DESIGN WITH GPT-4

A.1 CSG with OpenJSCAD

GPT-4 was able to use the OpenJSCAD library out-of-the-box, with no additional explanation or restriction of the API on the part of the user. However, as described in Section 4.1.4, GPT-4 did fall into a number of common pitfalls when constructing designs. To mitigate the most common mistakes that GPT-4 made, each time we asked GPT-4 to build a design using OpenJSCAD, we provided the set of hints and reminders shown in Figure 82.

(... omitted by authors: main content of the prompt ...)
When building this code, keep in mind:
1. It's very important to follow the OpenJSCAD format. There must be a function named main and an export statement.
2. Each function must be imported from the appropriate module. Take care to choose the correct module for each function. For example, colorize comes from colors; cuboid comes from primitives; union comes from booleans; and translate comes from transforms. This is not an exhaustive list, feel free to use any function from any module.
3. OpenJSCAD positions each component relative to its center point.
4. It is very important that the individual components are in contact with one another, but no part protrudes into any other part.
5. Pay attention to the primitive types – for example, cuboid() must be used instead of cube() if you are building a box with different lengths along different dimensions.
6. The z direction is up.
Fig. 82. Hints for using OpenJSCAD. Each time we asked GPT-4 to construct a design using OpenJSCAD, we provided these hints after the main prompt to avoid the most common pitfalls that GPT-4 fell into.

A.2 Sketch-based parametric CAD DSL

We propose a streamlined version of the standard sketch-based CAD language by exposing only the sketch and extrude operations along with basic sketch primitives, which already cover a wide range of geometric variations. To automatically generate CAD models from GPT-4's output, we utilize Onshape's API. When aiming for single-shot CAD design (i.e., with no iterative feedback), we found that a four-pronged prompt generally resulted in the most reliable output. One aspect of the prompt described the specific task that GPT-4 should complete. The remaining three aspects of the prompt provided generic context for our target CAD DSL, and largely remained constant throughout our experiments. The specific aspects were: (1) a description of our modified DSL, (2) an example constructed with this DSL, and (3) a set of tips that GPT-4 should keep in mind when constructing its own result.
The prompts we used to describe these aspects, for local coordinate systems and for a global coordinate system, can be seen in chat format in Fig. 83 and Fig. 84, respectively.

We will define a design language. There are two key operators:
1) createSketch(primitive, plane): creates a sketch of a certain primitive on a given plane and returns its ID. There are two types of primitives you can create, circles or rectangles. The circle primitive is circle(center_x, center_y, radius), where center_x is the x coordinate of the center, center_y is the y coordinate and radius is the radius. The rectangle primitive is instantiated as rectangle(center_x, center_y, length, width). The plane defines the 2D plane where you will draw the sketch. You can just use one of the 3 default planes: XY_PLANE, XZ_PLANE, ZY_PLANE. You can also use the plane created by the result of an extrude, which you do by calling the function cap(extrude, side), where 'extrude' is the ID returned by the extrude operation (defined below), and side is one of the following: "max_z", "min_z", "max_y", "min_y", "max_x", "min_x". The side argument defines which planar face of an extruded solid will be used for the sketch operation. For example, "min_z" will select the planar face whose bounding box center has a minimal z-component compared to all other planes' bounding box centers. And there is the important constraint that you can only put the center of sketch primitives inside or on the edge of a planar face, if you do not put them on one of the default planes.
2) extrude(sketch, length), where sketch is the ID returned by the sketch operator and the length determines the length of the extrude. Note that sketches lie within their respective plane, and they will get extruded along the plane's normal direction. A common pattern you will encounter is that the height variable of a solid should often be the length parameter in the extrude operator, whereas the other dimensions of the solid are defined by the sketch primitives.
For example, if you want to design a round table with a single center leg and a leg base you can do:
# leg base
legBase_sketch = createSketch(circle(0, 0, 3), XY_PLANE)
legBase_solid = extrude(legBase_sketch, 1)
leg_sketch = createSketch(circle(0, 0, 1), cap(legBase_solid, "max_z"))
leg_solid = extrude(leg_sketch, 10)
top_sketch = createSketch(circle(0, 0, 8), cap(leg_solid, "max_z"))
top_solid = extrude(top_sketch, 1)
End of the example.
Always write code using variables. Try to prefer a few variables by reusing them in the design when appropriate. Write code in syntactically correct python, knowing that you have the functions createSketch, circle, rectangle, cap and extrude and the default planes.
Fig. 83. A Sketch-Based CAD DSL Prompt with Local Coordinate Systems. Our prompt used for the sketch-based CAD experiments with local coordinate systems.
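As a further illustration (not taken from the paper's experiments), here is a small design of our own written against the operators defined in Fig. 83. The object, a square stool, and its dimensions are invented; the snippet assumes the createSketch, rectangle, cap, extrude and XY_PLANE bindings that the prompt itself defines.

# A square stool in the Fig. 83 DSL: base slab, square column, square seat.
base_w = 8    # base width/length
col_w = 2     # column cross-section
seat_w = 10   # seat width/length

base_sketch = createSketch(rectangle(0, 0, base_w, base_w), XY_PLANE)
base_solid = extrude(base_sketch, 1)

# Local coordinates: (0, 0) is the center of the selected face,
# so centering the column on the base's top face is trivial.
col_sketch = createSketch(rectangle(0, 0, col_w, col_w), cap(base_solid, "max_z"))
col_solid = extrude(col_sketch, 12)

seat_sketch = createSketch(rectangle(0, 0, seat_w, seat_w), cap(col_solid, "max_z"))
seat_solid = extrude(seat_sketch, 1)

Note how each sketch center lies on the selected planar face, satisfying the constraint stated in the prompt.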
A.3 URDF

GPT-4 was able to use URDF without any intermediate libraries. Similar to OpenJSCAD, there were many common pitfalls that needed to be mitigated via prompt choice; these are discussed in detail in Section 4.1.6. In brief summary, the following notes list some additions that were useful in mitigating specific problems:
• GPT-4 has difficulties in determining where URDF objects place their origin. When wanting objects to touch but not intersect, or be placed at the "end" of other objects, it is useful to specify that the ends are half the length of the object away from the origin.
• Specifying an axis for two objects to be aligned along is more effective than instructing that they be aligned.
• GPT-4 will often omit essential parts of the URDF file for brevity, replacing them with a comment to repeat a part of the file. This can be done manually, but to generate URDF files that are complete directly from the response, GPT-4 must be instructed to produce a complete file.
• GPT-4 will ignore several constraints or instructions if too many are placed in a single prompt. Splitting the generation process into multiple prompts resolves this issue.

We will define a design language. There are two key operators:
1) createSketch(primitive, plane): creates a sketch of a certain primitive on a given plane and returns its ID. There are two types of primitives you can create, circles or rectangles. The circle primitive is circle(center_x, center_y, center_z, radius), where center_x is the x coordinate of the center, center_y is the y coordinate of the center, center_z is the z coordinate of the center and radius is the radius. The rectangle primitive is instantiated as rectangle(center_x, center_y, center_z, length, width). The plane defines the 2D plane where you will draw the sketch. You can just use one of the 3 default planes: XY_PLANE, XZ_PLANE, ZY_PLANE. You can also use the plane created by the result of an extrude, which you do by calling the function cap(extrude, side), where 'extrude' is the ID returned by the extrude operation (defined below), and side is one of the following: "max_z", "min_z", "max_y", "min_y", "max_x", "min_x". The side argument defines which planar face of an extruded solid will be used for the sketch operation. For example, "min_z" will select the planar face whose bounding box center has a minimal z-component compared to all other planes' bounding box centers. And "max_x" will select the planar face which has a maximal x-component compared to all other planes' bounding box centers. Note that the normal vector of the selected planes directly correlates with the side argument. Here are the normal vectors associated to each side argument:
• "min_z": (0, 0, -1)
• "max_z": (0, 0, 1)
• "min_y": (0, -1, 0)
• "max_y": (0, 1, 0)
• "min_x": (-1, 0, 0)
• "max_x": (1, 0, 0)
You can use these normal vectors to create more 3-dimensional objects. The center coordinates of sketch primitives have to be inside of the selected plane. And there is the important constraint that you can only put the center of sketch primitives inside or on the edge of a planar face, if you do not put them on one of the default planes.
2) extrude(sketch, length), where sketch is the ID returned by the sketch operator and the length determines the length of the extrude. Note that sketches lie within their respective plane, and they will get extruded along the plane's normal direction. A common pattern you will encounter is that the height variable of a solid should often be the length parameter in the extrude operator, whereas the other dimensions of the solid are defined by the sketch primitives.
Example design
For example, if you want to design a round table with a single center leg and a leg base you can do:
# leg base
legBase_sketch = createSketch(circle(0, 0, 0, 3), XY_PLANE)
legBase_solid = extrude(legBase_sketch, 1)
leg_sketch = createSketch(circle(0, 0, 1, 1), cap(legBase_solid, "max_z"))
leg_solid = extrude(leg_sketch, 10)
top_sketch = createSketch(circle(0, 0, 11, 8), cap(leg_solid, "max_z"))
top_solid = extrude(top_sketch, 1)
End of the example.
Additional Constraints
Use exposed design variables whenever you can, and as few as possible. Write code in syntactically correct python, knowing that you have the functions createSketch, circle, rectangle, cap and extrude and the default planes.
Fig. 84. A Sketch-Based CAD DSL Prompt with a Global Coordinate System. Our prompt used for the sketch-based CAD experiments with a global coordinate system.

A.4 Graph-based DSL for Robotics

The full text of the prompt used to generate the humanoid robot graph (omitted earlier for brevity) is shown in Figure 85.

We are constructing robots using Python code. The following functions are available:
add_link(name): Adds a link to the robot with the name `name` and returns its ID.
add_joint(parent_link, child_link): Adds a joint between the parent link with ID `parent_link` and child link with ID `child_link`.
translate(link, direction): Translates the link with ID `link` in the direction `direction`. Direction can be one of "left", "right", "forward", "backward", "up", or "down".
Write a function to construct a humanoid robot.
Fig. 85. Graph-based robotics DSL. A description of the custom graph-based DSL used to construct robots.
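For reference, the following is a hypothetical sketch of the kind of program the Fig. 85 prompt requests. The three function signatures come from the prompt itself; the Python stubs that record the graph, and the particular humanoid topology, are our own assumptions for illustration.

# Hypothetical stubs so a Fig. 85-style program can run standalone.
links = []   # link names, indexed by ID
joints = []  # (parent_id, child_id) pairs
moves = []   # (link_id, direction) translations

def add_link(name):
    # Adds a link named `name` and returns its ID.
    links.append(name)
    return len(links) - 1

def add_joint(parent_link, child_link):
    # Adds a joint between the parent and child link IDs.
    joints.append((parent_link, child_link))

def translate(link, direction):
    # Translates the link with ID `link` in the given direction.
    moves.append((link, direction))

def construct_humanoid():
    # One possible humanoid topology: torso with head, two arms, two legs.
    torso = add_link("torso")
    head = add_link("head")
    add_joint(torso, head)
    translate(head, "up")
    for side in ("left", "right"):
        arm = add_link(side + "_arm")
        add_joint(torso, arm)
        translate(arm, side)
        leg = add_link(side + "_leg")
        add_joint(torso, leg)
        translate(leg, "down")

construct_humanoid()
print(links, joints, moves)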
ai_researcher
1
Exploring_crop_genomics_role_of_molecular_markers_in_identifying_genetic_diversity_and_characterization.pdf
arXiv:2301.13387v1 [q-bio.GN] 31 Jan 2023

Deep Learning for Reference-Free Geolocation of Poplar Trees

Cai W. John*, Bredesen Center, University of Tennessee, Knoxville, Knoxville, TN 37996, USA, [email protected]
Owen Queen*†, Department of Biomedical Informatics, Harvard Medical School, Boston, MA 02115, USA, [email protected]
Wellington Muchero, Center for Bioenergy Innovation, Oak Ridge National Laboratory, Oak Ridge, TN 37830, USA, [email protected]
Scott J. Emrich, Electrical Eng. and Computer Science, Bredesen Center, University of Tennessee, Knoxville, Knoxville, TN 37996, USA, [email protected]

*Shared first authorship. †Work done while at University of Tennessee, Knoxville.
NeurIPS 2022 AI for Science Workshop.

Abstract

A core task in precision agriculture is the identification of climatic and ecological conditions that are advantageous for a given crop. The most succinct approach is geolocation, which is concerned with locating the native region of a given sample based on its genetic makeup. Here, we investigate genomic geolocation of Populus trichocarpa, or poplar, which has been identified by the US Department of Energy as a fast-rotation biofuel crop to be harvested nationwide. In particular, we approach geolocation from a reference-free perspective, circumventing the need for compute-intensive processes such as variant calling and alignment. Our model, MASHNET, predicts latitude and longitude for poplar trees from randomly-sampled, unaligned sequence fragments. We show that our model performs comparably to Locator, a state-of-the-art method based on aligned whole-genome sequence data. MASHNET achieves an error of 34.0 km compared to Locator's 22.1 km. MASHNET allows growers to quickly and efficiently identify natural varieties that will be most productive in their growth environment based on genotype. This paper explores geolocation for precision agriculture while providing a framework and data source for further development by the machine learning community.

1 Introduction

Pollen dispersal in natural populations of Populus trichocarpa, as well as other species, results in correlations between geography and genetic variation. These correlations can be leveraged to predict the geographic origin of a sample from genetic data, as demonstrated in previous studies [1, 2]. To date, all studies have achieved this prediction task using aligned, whole-genome sequence data. Here, we demonstrate our novel tool MASHNET, which predicts geographic origin from unaligned sequence fragments. We compare it to the current state-of-the-art implementation, Locator [1], which uses a deep learning architecture on aligned sequences. Our method performs similarly despite using noisier, read-only sequence information.

Sequence alignment is a necessary procedure to transform short read fragments into genome-scale information. Modern technology is only capable of sequencing small sections of DNA, so large-scale genotyping of individuals using sequencing data requires post hoc alignment and variant-calling algorithms, usually relative to a well-established reference genome sequence [3]. These algorithms are computationally intensive procedures that create major bottlenecks between sample collection and downstream analysis of variant data. Further, although reference genomes are increasingly common due to advances in both technology and assembly algorithms [4], they still require large amounts of sequence data and resource-intensive de novo assembly.
These demands prevent many non-model organisms from being sequenced. Our approach is alignment-free and therefore can be applied to the many non-model organisms currently without a reference genome. It also circumvents the need for variant-calling algorithms, allowing researchers to analyze samples more rapidly. For example, one can envision sampling natural genetic diversity in a species, and then using computational methods to suggest the ancestral origin(s) of unknown samples. This process is called geolocation. A simple spatial-climate map, such as the Köppen-Geiger climate system [5], could then map origin locations to desired growing environments. Being able to pinpoint these environments is key to precision agriculture.

In this study, we focus on Populus trichocarpa (poplar) because of interest from the Department of Energy (DOE) in developing it as a fast-rotation biofuel crop that is viable nationwide [6]. Poplar's species range extends from southern California all the way to British Columbia, encompassing a latitudinal range of 38.88 to 54.25 degrees [7]. This range includes a diversity of macro- and micro-environments that have likely shaped subpopulations of this species. Our goal is to predict the latitudinal and longitudinal coordinates of these genotypes from their sequence data, a task known as genomic geolocation.

Geolocation has applications in precision agriculture. When considering a new site for a tree nursery, it is desirable to clone samples well-suited to that environment. Given that these trees have often been previously cloned and relocated to common gardens and greenhouses for commercial use and agricultural research, it can be difficult to obtain meta-data locating them to their origin environment. MASHNET resolves this issue, allowing growers to rapidly identify the origin location of their trees and identify which will be most productive in the new climate.

In this work, we present MASHNET, a deep learning-based model that can perform accurate geolocation of poplar trees. The model uses a multi-task neural architecture to jointly predict latitude and longitude coordinates for each sample. Importantly, this method uses Mash sketches [8], an alignment-free feature extraction method that randomly samples k-mers from sequencing read data. We demonstrate that MASHNET can use alignment-free Mash sketches to compete with WGS-based methods. We open source our methods and data while highlighting the importance of this task to precision agriculture.

2 Methods

2.1 Data

We consider 1,252 poplar genotypes from a representative sampling of the latitudinal distribution of its species range (see Figure 1, panel A). Genome re-sequencing, alignment and variant-calling of this population was previously described by Zhang et al. [3]. We use these aligned and variant-called sequences in Locator as a performance benchmark for our alignment-free method.

MASHNET is trained on unaligned reads. Of the total 1,252 samples, 1,024 have reads that are publicly available for download from the NCBI's Sequence Read Archive (SRA). A map of sample IDs to SRA keys is included with the metadata in our GitHub repository (all code and data can be found at https://github.com/owencqueen/MashPredict). During training, metadata labels with ground-truth latitude and longitude coordinates for all 1,252 samples are used. These are also included in our GitHub repository. Unfortunately, we are unable to publicly host the aligned WGS data used to train and test Locator, as well as the remaining 228 sample reads, due to current access restrictions.
Associated with each sample are several meta-data variables. The first is river system, which corresponds to the nearby river from which each sample was originally collected. This variable in particular shows strong signal, as is evidenced by Figure 1C, which shows a PCA-UMAP [9] projection of each sample colored by its associated river system. This projection illustrates the correlation between origin location and genotype that we will leverage to geolocate these samples.

Figure 1: A) Map of the origin location of all 1,252 poplar samples. B) Reduced set of 919 samples used for PCA-UMAP clustering by river system. Downsampling from 1,252 samples is achieved by retaining only river systems with ≥ 35 members. C) PCA-UMAP embedding of 919 clustering samples colored by river system.

2.2 MinHashing Unaligned Reads

A major innovation of this work is achieving prediction from unaligned reads. We accomplished this using the Mash software [8]. This process uses read fragments to create a reduced representation of the genome, i.e., a "sketch" of the genome, which has been shown to accurately reflect genome-wide structure [8]. It does this by randomly sub-sampling k-mers from the read fragments using a minHash-based approach. When using Mash, the user must define the k-mer length to use, as well as the number of hash functions to store, which determines the sketch size (s). For our study, we chose a k-mer length of 21. This is the default in Mash, and their studies demonstrate that this k-mer length maps robustly to Average Nucleotide Identity (an alignment-based measure of mutation distance) across different sketch sizes. Mash states that "Increasing sketch size improves the accuracy of Mash estimates, especially for divergent genomes" [8]. To test this, we ran Mash at four different sketch sizes: s = 500, 2000, 4000, and 50,000. We trained and tested our prediction algorithms across all four sketch sizes to compare performance (see Table 1 and Figure 2).

Once sketched, we devised a novel application of the Mash output. The input to Mash is a dataset of $n$ samples of reads $R_i$ that correspond to sequencing reads for a given poplar tree, $D = \{R_1, \ldots, R_n\}$. Assuming no hash collisions, each hash function $H_i$ is a unique identifier for a length-21 k-mer.
Formally, each vector representation Vi corresponding to a sketch Mi is defined by Vi = {1[Hj ∈Mi]|Hj ∈ H} where the indicator function 1 sets the value to 1 if Hj ∈ Mi and 0 otherwise. This converts each set Mi to a constant-size binary vector Vi. Assuming no hash collisions, this means our matrix represents a random sampling of k-mers, with a 1 indicating that k-mer as present in a genotype and 0 indicating its absence. This provides a binary input matrix for our deep learning architecture MASHNET. 2.3 MASHNET Model MASHNET is a neural network for prediction and representation of Mash sketches. This network takes the binary Mash matrix as input and performs predictions for latitude and longitude. The model architecture consists of a combination of linear and LayerNorm [10] layers followed by ELU [11] activation functions. We also chose to use a Batch Normalization [12] layer to process the input, following Locator’s [1] similar decision. We empirically found that this architecture improved performance on the sparse Mash sketch input (see Figure 2). MASHNET can be used for prediction of any phenotype, but we chose to focus it on geolocation, i.e., predicting latitude and longitude coordinates for each sample. As the output of the network, we have a multi-task learning setup, where we jointly predict both latitude and longitude in the same forward pass. The MASHNET model F takes a vectorized Mash sketch Vi as input and outputs a coordinate R2. Our loss function is a simple Absolute Error (AE) with equal weight for both latitude and longitude, i.e., L = Llat + Llong, where Llat is the AE for latitude and Llong is the AE for longitude. 2.4 Experiments and Comparison Models For geolocation, we compare MASHNET to several other non-neural models. First, we use k-nearest neighbors (kNN) on the Mash distances. Mash computes pairwise distances with a set-based distance function that approximates the Jaccard index between each sample, as discussed in [8]. We compute this pairwise distance matrix and use this as a distance metric in the kNN prediction. Additionally, XGBoost and ElasticNet algorithms are employed on the binarized Mash sketches. For each model, we perform a search over a hyperparameter space to optimize model performance: for kNN, we search over k values, for XGBoost and ElasticNet, we search over parameters controlling regularization strength and learning rate. We also compare several WGS methods against models trained on sketch-based inputs. First, we use a state-of-the-art method Locator [1], which was designed for direct geolocation prediction from WGS data. Finally, we use XGBoost [13] and ElasticNet [14] algorithms on a principal component analysis (PCA)-reduced representation. PCA is used to reduce the WGS representations because of the large size and high level of sparsity. PCA is a widely established technique in bioinformatics, and it has previously shown to be effective in compressing WGS samples [9]. Each experiment is performed with 30 separate 5-fold cross validations, each with individual random seeds. Performance metrics are averaged across all folds for one cross validation, and we report the mean and standard error across all 30 cross validations for each separate experiment. Each error in Table 1 is reported as mean absolute error (MAE) in kilometers, which is calculated from latitude and longitude coordinates via geodesic distance provided by the geopy package [15]. We only use 5 trials of cross validation on Locator because of prohibitively long runtimes. 
For MASHNET, we standard scale the latitude and longitude before training and inverse scale the outputs to compute errors. This standard scaling approach involves transforming the data to a normal distribution with mean = 0 and standard deviation = 1. It seemed to have no detectable effect on performance for the alternative models.

3 Results

Figure 2: Inspecting errors across varying sketch sizes for all algorithms applied to unaligned read fragments.

Table 1: Mean absolute error in kilometers for various models trained on whole-genome sequence inputs (1a) and Mash sketch-encoded vectors (1b). In Table 1a, ElasticNet and XGBoost are trained on PCA-reduced versions of SNP data obtained after sequence alignment. In Table 1b, sketch size is shown in units of 1,000 sketches. kNN is trained on the Jaccard distance between each sample, while all other methods are trained on vectorized Mash sketches.

Table 1a (WGS inputs):
         Locator        ElasticNet      XGBoost
  WGS    22.10±1.37     236.54±0.02     37.77±0.09

Table 1b (Mash sketch inputs):
  Sketch size (×10³)    kNN            ElasticNet     XGBoost        MASHNET
  0.5                   117.82±1.06    113.26±0.08    117.16±0.14    93.73±0.32
  2                     79.90±1.62     89.04±0.09     96.28±0.13     57.38±0.33
  4                     73.31±1.02     77.64±0.09     91.97±0.14     48.12±0.96
  50                    54.20±0.97     57.46±0.08     76.27±0.15     34.00±0.24

Locator is the best-performing model, pinpointing the location to within 22.1 km of error. ElasticNet and XGBoost, which are both trained on PCA-reduced versions of the WGS SNPs, perform worse than Locator on the geolocation task. Within the Mash-based predictors, MASHNET outperforms all methods, regardless of the sketch size. kNN performs better than both ElasticNet and XGBoost; this is likely because distance is defined based on the set-based metric used in the original Mash publication [8]. ElasticNet consistently outperforms XGBoost, with XGBoost being the least predictive model for Mash-based input data. Comparing across WGS and Mash-based predictors, WGS predictors perform better overall. This result is expected given the longer-range structure that is elucidated during the alignment procedure.

However, several key patterns emerge. First, MASHNET still outperforms both WGS-based ElasticNet and XGBoost when using a sketch size of 50,000. This highlights the utility and capacity of MASHNET and neural networks for geolocation, even from noisy data such as Mash sketches. Second, on the WGS data XGBoost outperforms ElasticNet, but on the Mash-based input ElasticNet performs better. This is most likely due to the differences in data geometry. The Mash-based input data are sparse, binary vectors, while PCA-reduced WGS inputs are dense with fewer dimensions. The geolocation task is highly nonlinear, so in the dense WGS setting we expect a tree-based model (XGBoost) to perform better than a linear model (ElasticNet).

We also perform benchmarking across different numbers of Mash sketches. Sketch size is an important tuning factor when using MASHNET. As seen in Table 1, performance increases with increasing sketch size. In Mash, the compute time to build a sketch is largely invariant to sketch size; however, overall computational costs will increase due to the higher-dimensional input being passed to downstream prediction models. This is a trade-off that must be managed. In general, traditional, non-deep-learning-based methods (ElasticNet and XGBoost) perform poorly on Mash sketches, highlighting the need for an alternative such as our model MASHNET.
However, the set-based distance metric leveraged by the original Mash publication has been further validated here, showing a clear ability to recover significant predictive signal using kNN, which even outperforms more sophisticated methods such as ElasticNet and XGBoost.

4 Discussion

The genome sciences contain many applications for reference-free prediction using computational techniques. To the best knowledge of the authors, this study is one of the first attempts at trait prediction from unaligned read fragments. Innovations in this space have the potential for large impact on topics ranging from precision agriculture to medical diagnostic tools. In this study, we present a solution to the challenging task of geolocation of poplar trees from unaligned read fragments. We approach this problem by leveraging a commonly-used bioinformatics tool, Mash, and create a framework that can circumvent the computationally expensive procedures of genome assembly and short read alignment. Our solution, MASHNET, uses a neural network to predict latitude and longitude coordinates for each sample, coming within 12.1 km of the prediction error of the state-of-the-art whole-genome sequence-based method, Locator [1].

Future studies will attempt to improve our predictive capacity using unaligned reads. The initial studies undertaken in this paper outline two paths to improvement. The first is to try to pre-identify important k-mers on which screening should be focused. For example, in currently unpublished work we have identified regulatory hotspots through genome-wide association (GWAS) mapping of climatic variation. We hypothesize that if we could sample k-mers directly from these hotspots, rather than randomly as we do currently, we could focus on the higher-variance regions and therefore significantly boost prediction performance. However, this approach would require a priori knowledge of the genomic location of these hotspots and therefore pre-existing aligned WGS data. Thus, while such a hybrid approach would likely improve predictive performance, it would also nullify the generalizability of our MASHNET approach to non-model organisms.

A second approach would be to increase the sketch size of the minHashing procedure. In Figure 2, we observe that there seems to be a performance plateau associated with increasing sketch size. We hypothesize this occurs once sufficient sampling coverage of the genome has been achieved. This suggests that while increasing sketch size would lead to performance gains, these gains are likely to be marginal. This presents an open question: MASHNET can predict locations to within 34 km, but could a more advanced technique predict these locations with less error? Given the importance of the geolocation task for precision agriculture, we present this as an open problem for the machine learning community. Our tool, MASHNET, demonstrates how deep learning can achieve impressive results on reference-free geolocation tasks, even when compared to state-of-the-art models based on WGS representations. We believe that more advanced tools can be developed for this area and used to improve prediction accuracy of the ideal ecosystem in which a crop should be grown. We open-source the codebase and datasets used for this study with the hope that future development will focus on new techniques for representing unaligned, fragmented reads for machine learning, as well as more sophisticated prediction architectures.

References

[1] CJ Battey, Peter L Ralph, and Andrew D Kern. Predicting geographic location from genetic variation with deep neural networks. eLife, 9:e54507, June 2020.
[2] Gilles Guillot, Hákon Jónsson, Antoine Hinge, Nabil Manchih, and Ludovic Orlando. Accurate continuous geographic assignment from low- to high-density SNP data. Bioinformatics, 32(7):1106–1108, November 2015.
[3] Jin Zhang, Yongil Yang, et al. Genome-wide association studies and expression-based quantitative trait loci analyses reveal roles of HCT2 in caffeoylquinic acid biosynthesis and its regulation by defense-responsive transcription factors in Populus. New Phytologist, 220(2):502–516, 2018.
[4] Sergey Nurk, Brian P. Walenz, Arang Rhie, Mitchell R. Vollger, Glennis A. Logsdon, Robert Grothe, Karen H. Miga, Evan E. Eichler, Adam M. Phillippy, and Sergey Koren. HiCanu: accurate assembly of segmental duplications, satellites, and allelic variants from high-fidelity long reads. Genome Research, 30(9):1291–1305, 2020.
[5] F. Rubel and M. Kottek. Observed and projected climate shifts 1901–2100 depicted by world maps of the Köppen-Geiger climate classification. Meteorologische Zeitschrift, 2010.
[6] Stephanie G. Seay. DOE funds Center for Bioenergy Innovation at ORNL to accelerate biofuels, bioproducts research, 2017.
[7] Gancho T. Slavov, Stephen P. DiFazio, et al. Genome resequencing reveals multiscale geographic structure and extensive linkage disequilibrium in the forest tree Populus trichocarpa. New Phytologist, 196:713–725, 2012.
[8] B. D. Ondov, T. J. Treangen, et al. Mash: fast genome and metagenome distance estimation using MinHash. Genome Biology, 17, 2016.
[9] S. Sakaue, J. Hirata, M. Kanai, et al. Dimensionality reduction reveals fine-scale structure in the Japanese population with consequences for polygenic risk prediction. Nature Communications, 2020.
[10] Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E. Hinton. Layer normalization. arXiv preprint arXiv:1607.06450, 2016.
[11] Djork-Arné Clevert, Thomas Unterthiner, and Sepp Hochreiter. Fast and accurate deep network learning by exponential linear units (ELUs). arXiv preprint arXiv:1511.07289, 2015.
[12] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International Conference on Machine Learning, pages 448–456. PMLR, 2015.
[13] Tianqi Chen and Carlos Guestrin. XGBoost: A scalable tree boosting system. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 785–794, 2016.
[14] Hui Zou and Trevor Hastie. Regularization and variable selection via the elastic net. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 67(2):301–320, 2005.
[15] geopy. geopy: Geocoding library for Python. https://github.com/geopy/geopy, 2013.
ai_researcher
2
Large_Language_Models_to_generate_meaningful_feature_model_instances.pdf
Deep Multiple Instance Feature Learning via Variational Autoencoder

Shabnam Ghaffarzadegan
Bosch Research and Technology Center, Palo Alto, CA 94304, USA
[email protected]

arXiv:1807.02490v1 [cs.LG] 6 Jul 2018

Abstract

We describe a novel weakly supervised deep learning framework that combines both discriminative and generative models to learn a meaningful representation in the multiple instance learning (MIL) setting. MIL is a weakly supervised learning problem where labels are associated with groups of instances (referred to as bags) instead of individual instances. To address the essential challenge in MIL problems raised by the uncertainty of positive instance labels, we use a discriminative model regularized by variational autoencoders (VAEs) to maximize the differences between the latent representations of all instances and of negative instances. As a result, the hidden layer of the variational autoencoder learns a meaningful representation. This representation can effectively be used for MIL problems, as illustrated by better performance on the standard benchmark datasets compared to state-of-the-art approaches. More importantly, unlike most related studies, the proposed framework can be easily scaled to large dataset problems, as illustrated by the audio event detection and segmentation task. Visualization also confirms the effectiveness of the latent representation in discriminating positive and negative classes.

Introduction

Applications of machine learning usually require accurately labeled training data. Recent remarkable breakthroughs in deep learning made this requirement even more crucial, where large amounts of carefully annotated data are required to train complicated networks [12, 15]. However, creating the labeled data usually involves human annotation that is associated with high cost and potential human errors [14]. One way to relax this constraint is using the multiple-instance learning (MIL) framework [8]. Unlike traditional supervised learning, where each training example is an instance (feature) and label pair, in MIL each training example consists of a group of instances (referred to as a bag) and the associated label. In the binary classification setting, a bag is labeled as negative when all the instances in the bag are negative instances. On the contrary, a bag is labeled as positive when at least one of the instances in the bag is a positive instance. The MIL setup relaxes the data annotation requirement by allowing ambiguity in the labels of the positive bag instances.

Take image object recognition, for example: instead of relying on an accurate boundary of the object-of-interest, MIL directly uses the label of the image by considering multiple patches in the image as a bag, where each patch may or may not include the object of interest. Another example is audio event detection: instead of requiring precise boundaries of an audio event of interest, which are time consuming and expensive to annotate, MIL can rely on a coarse label of a windowed audio signal segment that contains the audio event.

The flexibility on the requirement of data annotation comes with the cost of increased difficulty in the learning tasks. This is mainly due to the ambiguity in the positive instance labels, where the positive bag could contain both negative and positive instances.
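The bag-labeling rule can be stated compactly in code; this tiny helper (our own, for illustration) makes the asymmetry of the MIL setup explicit:

# A bag is negative (-1) only when every instance is negative;
# otherwise it is positive (+1).
def bag_label(instance_labels):
    return 1 if any(y == 1 for y in instance_labels) else -1

assert bag_label([-1, -1, -1]) == -1   # all-negative bag
assert bag_label([-1, 1, -1]) == 1     # one positive instance suffices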
Directly using traditional machine learning approaches with the bag-level label fails to consider the incorrect labels of the negative instances in the positive bags. As a result, previous studies have shown that MIL-specific algorithms perform better in this setting [3]. Previous studies have mostly focused on learning decision boundaries using bag-level representations, such as Diverse Density [18], Citation-KNN [21], Mi-Graph [29] and SVM-based approaches [1, 22]; there is relatively little work on characterizing the optimal representation of the data in this setting.

In this paper, we present an approach that combines discriminative and generative models to learn a meaningful low-dimensional feature representation in the MIL setting. Given a set of training bag data, we train two generative models to learn latent representations of the negative instances and of all instances using variational autoencoders [13], noted as VAE− and VAE±, respectively. We use the latent representations of the VAE± to distinguish the positive bag instances from the negative bag instances using a discriminative classifier. To take into account the uncertainty in the positive bag instances, we apply a weighted loss to the positive bag instances based on their reconstruction error from VAE−. Intuitively, if an instance in a positive bag is very similar to the negative bag instances, it is likely to be a negative instance and should not be penalized much for being classified as a positive instance. By simultaneously training these three components, the low-dimensional latent representation of VAE± is optimized to capture the difference between the positive and negative instances while preserving the characteristics of the individual classes. Using our proposed approach, we observe superior performance across different MIL benchmark datasets and the audio event detection tasks.

The main contribution of this work is to demonstrate how to incorporate a generative model, a VAE in this case, into the MIL problem in a principled manner. The proposed framework can be regarded as a discriminative model regularized with VAEs. It explores the unique challenge in the MIL setting where positive instance labels are ambiguous. More importantly, the fixed-size, low-dimensional latent representation enables the proposed framework to be applied to large datasets with high-dimensional features, where most other related studies fail to apply due to high computational complexity. The remainder of this paper introduces our method and experiments in detail.

Related Work

Learning Axis-Parallel Concepts: Learning axis-parallel concepts was the first group of methods used by [8, 2, 17] to solve the MIL problem. In these methods, the goal is to find an axis-parallel hyper-rectangle (APR) in the feature space to represent the target concept. However, these methods lack practical application since they neglect the majority of data in large-bag samples [3].

Maximum Likelihood: Similar to their counterparts in the traditional machine learning setting, maximum likelihood methods in MIL aim to train a classifier that maximizes the likelihood of the data. Many methods approximate a differentiable loss function to perform gradient descent [19, 4, 27]. The most well-known method of this category is Diversity Density (DD), proposed by [18] as a general multi-instance learning framework where each bag is regarded as a manifold composed from many instances.
In this method, the classifier consists of only one vector from the input space, called the target point, which is a data point that is close to at least one instance from positive bags while being far from instances in negative bags, as measured by the diversity density. Zhang et al. [28] proposed an Expectation-Maximization-based DD method (EM-DD) that iterates over two steps: in the E step, the current classifier is used to choose the most probable point from each positive bag, and in the M step, a standard supervised learning method is used to find a new concept point by maximizing likelihood over all the negative and positive instances. This concept was further developed by many other related MIL methods [1].

Maximum Margin: The maximum margin method used in support vector machines (SVMs) can be adapted to the MIL framework. Andrews et al. [1] proposed two different methods, namely mi-SVM and MI-SVM, for instance-level and bag-level classification respectively, to define the margin for positive bags. Both algorithms follow an iteration process similar to the EM-DD method. In MI-SVM, only one positive instance in each positive bag contributes to the optimization, and the other instances in positive bags are ignored. By contrast, the mi-SVM method considers both positive and negative instances in the positive bags while optimizing the support vectors. Following these two ideas, many different variations of SVM-based methods for the MIL task have been developed [22, 6].

Deep learning based: Recent developments in deep neural networks have also been applied to MIL problems, with the assumption that meaningful features can be learned directly from the bag-level labels by the network. [20] used a convolutional neural network (CNN) for feature learning on a weakly supervised object localization task. In [26], a deep neural network is used to learn features for weakly supervised learning in medical imaging. [25] reformulated the CNN loss function to train an end-to-end solution for image classification and annotation problems. In [10], Feng et al. took on the challenge of solving the multiple instance multiple label (MIML) problem by proposing a matching score between the instance and sub-labels. In this setting, the additional information among multiple labels can be used to facilitate better learning. In [24], the authors solve the MIL problem by applying two different hand-engineered feature representations, including locally aggregated descriptors and Fisher vectors, to convert each bag into a fixed-size vector representation.

Our approach combines the ideas behind the maximum likelihood, maximum margin and deep learning based approaches. It aims to minimize the distance between negative instances, while separating the positive instances as far as possible, through learning a new representation of the data. Instead of relying on single-instance distance-based measurements, such as in DD and MI-SVM, the proposed framework is built on the instance distribution learned from the variational autoencoder. This framework allows us to address the uncertainty in the positive labels by measuring the similarity between the instances in the positive bags and the negative instances in terms of reconstruction error in the VAE trained with negative instances only. The effectiveness of the proposed approach is illustrated by comparing the performance on different MIL benchmark datasets with previous state-of-the-art approaches.
More importantly, by taking advantage of the deep learning training mechanism, the proposed framework provides a scalable solution, in contrast to most of the previous MIL research, which is not capable of handling large-scale datasets [23]. In fact, this issue was previously addressed by Wei et al. in [24], where they applied two hand-engineered feature representations, including locally aggregated descriptors and Fisher vectors, to convert the bags into a low-dimensional, fixed-size vector representation allowing fast learning. However, these hand-engineered features may not achieve the best possible representation of the bags and thus lead to sub-optimal performance. In this work, we train the network to automatically explore a meaningful representation and achieve better performance in comparison. These advantages are discussed in detail in the experiment sections.

Multiple Instance Feature Learning

Problem Setup

Let $\mathcal{X}$ denote the feature space and $\mathcal{Y}$ denote the set of class labels. In MIL, the training set of $n$ examples is noted as $(\mathbf{X}, \mathbf{Y}) = \{(X_1, Y_1), \ldots, (X_n, Y_n)\}$. Each example consists of a bag of $m_i$ instances $X_i = \{x_{i1}, \ldots, x_{im_i}\}$ and the bag label $Y_i$, where $x_{ij} \in \mathcal{X}$ and $Y_i \in \mathcal{Y} = \{-1, 1\}$. A bag label $Y_i = -1$ if the instance label $y_{ij} = -1$ for all $x_{ij} \in X_i$;
These two training objectives encourages learn- ing meaningful representations that not only encode the cor- responding input data, but also implicitly capture the differ- ences between the positive and negative instances. It is worth noticing that maximizing the difference be- tween p(z|X) and p(z|X, Y = −1) is equivalent to maximizing the difference between p(z|X, Y = 1) and p(z|X, Y = −1), which is the difference between the rep- resentation between positive and negative bags. Let pY = p(Y = 1) and p(Y = −1) = 1 − pY being the prior of p(Y ), we have p(z|X) =p(z|X, Y = 1) ∗ pY + p(z|X, Y = −1) ∗ (1 − pY ) =p(z|X, Y = −1) + [p(z|X, Y = 1) − p(z|X, Y = −1)] ∗ pY It is easy to see the following equation holds. p(z|X)−p(z|X, Y = −1) = [p(z|X, Y = 1) − p(z|X, Y = −1)] ∗ pY (1) (2) Variational Autoencoder There are many different approaches for learning latent rep- resentation of p(z|X) and p(z|X, Y = −1), we use the Variational Autoencoder (VAE) [13] since it can be com- bined with the discriminator model for the MIL problem in a principled manner, as discussed in the next section. VAE is a deep directed graphical model consisting of an encoder and decoder. The encoder maps the data sample to a latent representation p(z|X) and the decoder maps the latent rep- resentation back to the data space p(X|z). The loss function of the VAE is defined as following: LV AE = KL(q(z|X) k p(z)) − Eq(z|X)[log p(X|z)] (3) By regularizing the encoder with a prior over the latent rep- resentation p(z), z ∼ N (0, I) where I is identity matrix, the VAE learns a latent distribution q(z|X) that contains suffi- ciently diverse representation of the data. MIL Feature Learning Network Figure 1 presents the proposed MIL feature learning net- work, which consists of two VAEs sharing the same con- figurations, and a classifier network that take the latent layer Table 1: Architectures for the VAE - number of nodes/layer structure/activation function. fc stands for fully-connected layer; nz represents VAE hidden layer size. VAE Encoder VAE Decoder 256 fc, ReLU 512 fc, ReLU 256 fc, ReLU 512 fc, ReLU nz fc, ReLU fc, sigmoid Classifier 64 fc, ReLU 64 fc, ReLU 2 fc, Softmax in VAEs as inputs. The two VAE networks approximate the posterior of p(z|X) and p(z|X, Y = −1), noted as V AE± and V AE− respectively. This is achieved by training the V AE± with all instances from both positive and negative bag examples, while training the V AE− with instances from only negative bag samples. By concatenating the latent rep- resentations to a discriminator to differentiate the positive instances from the negative instances, the overall network simultaneously optimize the latent representation and clas- sification learning. The loss of the proposed network con- sists of LV AE±, LV AE− and the binary cross-entropy loss for classifier Lclf . To address the uncertainty of the posi- tive instance label, we use the reconstruction error from the V AE− as sample weight for the classifier loss Lclf . The idea is that if an input instance in the positive bag can be well reconstructed by the V AE−, it is likely to be a nega- tive instance mislabeled by the positive bag label. Table 1 shows the network configuration in Figure 1. The proposed framework applies the VAEs to the MIL problem in a principled manners. Let λ± and λ− be the parameters of V AE± and V AE− respectively. Given the prior of z ∼ N (0, I), the KL(q(z|X) k p(z)) in the LV AE encourages both qλ− and qλ± to follow Gaussian distribu- tions. 
We note them as N (µ−, Σ−) and N (µ±, Σ±) respec- tively. When we use the latent representation from V AE± and V AE− to distinguish the positive bag instances from the negative bag instances, we are indeed solving an opti- mization problem of λ− and λ± such that • qλ− and qλ± estimates the posterior well. • The difference between qλ− and qλ± is maximized. While the objective of VAE network is aligned with the first goal of this optimization problem, we notice that the second part of this problem can indeed be achieved by max- imizing the difference between µ− and µ±. Notice that the Kullback-Leibler (KL) divergence from qλ− to qλ± is Since Σ− ≈ I and Σ± ≈ I, KL(qλ±kqλ− ) = = (cid:2)0 − nz + nz + (µ− − µ±)⊤(µ− − µ±)(cid:3) kµ− − µ±k2 2. 1 2 1 2 (5) While we could use a variety of distance metrics between the latent layers of µ− and µ± to maximize the difference between qλ− and qλ± , such strategies may unnecessarily constrains the representation of the latent variables. Follow- ing the strategy similar to the idea of the Generative adver- sarial networks (GAN) [11], we instead use a classifier net- work that takes the latent layers of µ− and µ± as input to distinguish the positive bag instances from negative bag in- stances. In this way, the optimal distance metric is learned from the data. Training Detail During learning, we aim to use Lclf to maximize the dif- ference between the two posterior estimates qλ± kqλ− and LV AE±, LV AE− to train a optimal posteriors of p(z|X) and p(z|X, Y = −1). Training data is prepared in pairs with the input to V AE± being randomly chosen from all instance, and input to V AE− being randomly chosen from the neg- ative bag instances only. For robustness, we follow the data augmentation procedure similar to the concept introduced in [9] to repeat the aforementioned procedure multiple times. As a result, different positive-negative and negative-negative instance pairs will be included during the training. Our approach is implemented in Keras. We use the RM- Sprop optimizer and a initial learning rate of 0.001 and mo- mentum of 0.9 throughout our experiments. We initialized all the weights to zero mean Gaussian noise with a standard deviation of 0.01. The code will be released at time of pub- lication. Bag Level Classification To leverage the learned representation of the MIL data for binary event detection task, we extract simple features from the encoded space including maximum, minimum, mean and standard deviation of the encoding value along each latent dimension as bag level feature ∈ Rnz×4. This feature repre- sentation is evaluated using 3 simple classifiers including k- nearest-neighbor classifier (KNN), neural network (NN) and Adaboost to show the effectiveness of the proposed frame- work. To detect the event boundaries, we use the encoded features as an input to a many-to-many long short term mem- ory (LSTM) network. Figure 2 illustrates the process. Experiment KL(qλ±kqλ−) = 1 2 {log |Σ−| |Σ±| (µ− − µ±)⊤Σ−1 − nz + tr (cid:0)Σ−1 − Σ±(cid:1) + − (µ− − µ±)}. Datasets MUSK dataset: The MUSK datasets introduced in [8] have been used in all the previous MIL research as the bench- mark sets. This data contain two sets of MUSK1 and MUSK2 with 166 feature vectors describing molecules us- ing multiple low-energy conformations. MUSK1 consists (4) Training Phase Training Phase Data Bag 1 Bag N Separator Inst. Inst. Training Sample weight calculation +& Inst. Inst. Inst. 
Experiment

Datasets
MUSK dataset: The MUSK datasets introduced in [8] have been used as benchmark sets in all the previous MIL research. The data contain two sets, MUSK1 and MUSK2, with 166-dimensional feature vectors describing molecules using multiple low-energy conformations. MUSK1 consists of 92 molecules (bags) with an average of 6 conformations per molecule (instances), and MUSK2 is composed of 102 molecules with an average of 64.7 conformations per molecule. Moreover, the MUSK1 dataset includes 479 instances divided into 47 positive and 45 negative bags, and MUSK2 contains 6600 instances partitioned into 39 positive and 63 negative bags.

Automatic image annotation dataset: Automatic image annotation assigns keywords to an image based on context information. It can be formulated as a MIL problem where each image is regarded as a bag and the features of image patches are the instances. The benchmark datasets for image annotation include the Tiger, Fox, and Elephant datasets introduced in [1], which are extracted from the Corel dataset [5]. Each of these sets has 100 positive and 100 negative bags, in which positive bags correspond to images of the target animal and negative bags include images drawn randomly from the pool of other animals. In each bag, instances are created by segmenting the image and using color, texture, and shape features as segment descriptors.

Rare audio event detection dataset: Audio event detection is another problem that can be formulated as a MIL problem, where an audio clip and the segments within the clip are regarded as the bag and its instances, respectively. The goal is to detect whether an audio clip is related to a particular event based on the clip label. We used the rare audio dataset from the "Detection and Classification of Acoustic Scenes and Events (DCASE) 2017 Challenge" to evaluate the effectiveness of the proposed method. This dataset contains isolated sound events for three target classes — baby cry, gun shot, and glass break — along with recordings of 15 different everyday acoustic scenes as background, including park, coffee shop, bus, street, etc. The audio signal is recorded at 44100 Hz and downsampled to 22050 Hz as a preprocessing step to reduce the computation cost. The target classes and background sounds are synthetically mixed (30-second length) to produce the train and test data used in the challenge. The final mixture contains two sets of train and test data, each including 500 audio recordings per target class (1500 audio files in total). The unique event counts of each target class in the train/test sets are: baby cry 106/42, glass break 96/43, and gun shot 134/53.

We treat each audio recording as a bag and extract audio features within a moving window as instances. In particular, we used 0.1 and 0.5 second as window sizes with 50% overlap. Table 2 summarizes the low-level features and the statistics (functionals) computed over each window.

Table 2: Low-level descriptive features and high-level features (functionals) computed on audio data; min: minimum; max: maximum; std: standard deviation; var: variance; dim: dimension.
    Features:    zero crossing rate & ∆ (2-dim); energy & ∆ (2-dim); spectral centroid & ∆ (2-dim); pitch & ∆ (2-dim); MFCC & ∆ (26-dim)
    Functionals: min, max, std, var, skew, kurtosis, mean, median
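A sketch of this instance-feature extraction with librosa follows. Frame sizes follow the text (25 ms frames, 10 ms hop); the pitch range passed to yin is an assumption, as the paper does not state one.

```python
import numpy as np
import librosa
from scipy.stats import skew, kurtosis

def low_level_features(y, sr=22050):
    frame, hop = int(0.025 * sr), int(0.010 * sr)
    zcr = librosa.feature.zero_crossing_rate(y, frame_length=frame, hop_length=hop)
    energy = librosa.feature.rms(y=y, frame_length=frame, hop_length=hop)
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr, n_fft=frame, hop_length=hop)
    pitch = librosa.yin(y, fmin=65, fmax=2093, sr=sr, frame_length=frame, hop_length=hop)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13, n_fft=frame, hop_length=hop)
    base = np.vstack([zcr, energy, centroid, pitch[None, :], mfcc])  # 17 rows
    return np.vstack([base, librosa.feature.delta(base)])            # + deltas = 34 rows

def instance_features(frames):
    """frames: (34, n_frames) slice covering one 0.1 s or 0.5 s window."""
    fns = [np.min, np.max, np.std, np.var, skew, kurtosis, np.mean, np.median]
    return np.concatenate([np.asarray(f(frames, axis=1)) for f in fns])  # 34*8 = 272
```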
Statistics of the thirty-four low-level features within the window are regarded as instance features. The thirty-four low-level features include zero crossing rate, energy, spectral centroid, pitch, and Mel-frequency cepstral coefficients (MFCC), along with their deltas [7]. These features are extracted from 25 ms frames with a 10 ms hop. The statistical measurements include minimum, maximum, standard deviation, variance, skewness, kurtosis, mean, and median. Overall, 272-dimensional features (34 low-level features × 8 statistics) are extracted. We use a random-forest-based feature selection method to choose the best subset for each event class. Finally, 30-dimensional high-level features are used as the system input. In total, we have 500 audio bags for training, with 599 or 148 instances (for the 0.1 and 0.5 second windows, respectively) in each bag.

Experiment Results
Comparison For the MUSK and image annotation benchmark datasets, we follow the previous studies and evaluate the proposed method using 10-fold cross-validation with random fold initialization. Both VAE networks share the same [512, 256, nz, 256, 512] hidden-unit configuration. The discriminator classifier is a 2-layer network with [64, 64] hidden nodes. The hidden layer size nz is chosen experimentally for each dataset: 64 for MUSK1 and Tiger, 32 for Fox, 16 for Elephant, and 256 for MUSK2. Throughout the experiments, we set the batch size to 32 and the dropout rate to 0.25.

As shown in Table 3, the proposed approach achieves significant performance improvements across the benchmark datasets, with 4%, 3.5%, 12%, 1.8%, and 3.1% absolute F-score improvement on the MUSK1, MUSK2, Fox, Tiger, and Elephant datasets, respectively. Note that some standard deviations in past studies are not available. We report the results of our proposed method using three different classifiers — k-nearest-neighbor (KNN), neural network (NN), and AdaBoost — to investigate the effectiveness of the extracted hidden representations under different classification methods.

Table 3: Average prediction accuracy (%) using 10-fold cross-validation on the benchmarks. Some standard deviations are not provided by former studies.
    Method            MUSK1      MUSK2       Fox         Tiger      Elephant
    mi-SVM [1]        87.4       83.6        58.2        78.4       82.2
    MI-SVM [1]        77.9       84.3        57.8        84.0       81.4
    RMI-SVM [22]      80.8       82.4        63.6±2.8    87.9±0.9   87.8±0.7
    EM-DD [28]        84.8       84.9        56.1        72.1       78.3
    mi-Graph [29]     88.9±3.3   90.3±2.6    61.6±2.8    86.0±1.6   86.8±0.7
    MI-Graph [29]     90.0±3.8   90.9±2.7    61.2±1.7    81.9±1.5   85.1±2.8
    MI-Forests [16]   85         82          64          82         84
    DMIL [25]         87.5       72.5        62.5        79.4       82.5
    MiFV [25]         90.9±8.9   88.4±9.4    62.1±10.2   81.3±8.3   87.1±7.3
    MiV&F [25]        91.5±8.3   88.1±8.7    62.0±9.6    82.3±8.4   87.1±7.3
    Our methods
    VAE+KNN           84.4±4.4   76.2±9.5    73.3±6.7    85.9±4.1   90.7±5.0
    VAE+NN            95.5±4.5   94.4±5.6    69.1±5.9    85.9±4.1   87.0±8.5
    VAE+AdaBoost      80.4±8.5   90.0±10.0   76.0±4.0    89.7±5.5   90.9±0.0
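Given bag descriptors like those from bag_features() above, the 10-fold protocol can be sketched with scikit-learn; the classifier hyperparameters below are assumptions, since the paper fixes only the classifier families.

```python
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import AdaBoostClassifier

def evaluate(X_bags, y_bags):
    """X_bags: (n_bags, nz*4) descriptors from bag_features(); y_bags: bag labels."""
    cv = StratifiedKFold(n_splits=10, shuffle=True)
    for name, clf in [("VAE+KNN", KNeighborsClassifier()),
                      ("VAE+NN", MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=2000)),
                      ("VAE+AdaBoost", AdaBoostClassifier())]:
        scores = cross_val_score(clf, X_bags, y_bags, cv=cv)
        print(f"{name}: {100 * scores.mean():.1f} ± {100 * scores.std():.1f}")
```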
For the audio event detection dataset, we compare our results with the DCASE 2017 baseline method. On this dataset, the large data size for each target class is computationally challenging for other MIL approaches, especially the SVM-related methods, whose training complexity is highly dependent on the size of the data. The computational efficiency of the proposed framework comes from the easy-to-train property of the VAE network and the low computational cost of the feature mapping at the test phase. Regardless of the original feature-space dimension, the VAE encodes the features into a fixed-size vector of dimension nz. The experiments highlight the importance of the scalability of a MIL method, especially in image- and audio-related applications.

Table 4 shows the results for the audio event tagging and segmentation tasks — audio event detection without and with time stamps, respectively. Note that DCASE 2017 baseline results are only available for the audio event segmentation task at the moment. In fact, for the audio event detection task, the bag-level feature lies in a 17970-dimensional (30 × 599) space, where traditional MIL methods such as mi-SVM or MI-SVM would take days to train. Using the proposed framework requires less than half an hour on the same machine with 8 × 3.7 GHz CPUs, one Quadro K5200 GPU, and 32 GB main memory.

Table 4: F-score (%) and error rate (ER) of the audio event tagging and segmentation tasks; tag: audio tagging; seg: audio segmentation.
    Audio event tagging                   Baby cry   Glass break   Gun shot   Average
      VAE tag                 F-score     89.0       96.0          85.0       91.7
                              ER          0.12       0.04          0.16       0.11
    Audio event segmentation
      DCASE2017 Baseline      F-score     72.0       88.5          57.4       72.7
                              ER          0.67       0.22          0.69       0.53
      VAE tag + VAE seg       F-score     84.7       94.1          87.1       88.6
                              ER          0.30       0.12          0.24       0.22

For the audio tagging problem, we used a similar network structure as before with the latent dimension nz set to 128, which gives a [512, 256, 128, 256, 512] hidden-unit configuration. Considering the large dataset, we also changed the batch size to 512. In the testing phase, an RBF-kernel SVM is chosen experimentally as the final classifier for binary tagging. For the audio segmentation problem, only positive outputs of the audio tagging pipeline are processed with a many-to-many long short-term memory (LSTM) network to detect the target event boundaries. At this step, instead of using the functionals to extract a bag-level representation, the 128-dimensional hidden representation of VAE± is directly processed with an LSTM network, allowing the deep network to self-learn proper features for the audio segmentation task. We use two 2-layer LSTM networks with [50, 50] nodes and a 0.25 dropout rate. The first LSTM takes the selected 30-dimensional high-level audio features summarized in Table 2, and the other LSTM takes the 128-dimensional VAE± hidden representation as its input. Finally, these two LSTMs are merged to form a many-to-many output for each audio instance. This network is trained with a mean-squared-error loss function and RMSprop optimization. Experiments have shown the importance of adding the high-level audio features alongside the VAE representation to improve the segmentation results.

The experimental results confirm the superior performance of the proposed method compared to the baseline system, with an average of 91.7% F-score and 0.11 error rate for the audio tagging task, and 88.6% F-score and 0.22 error rate for audio segmentation across the three classes. We achieve a 15.9% absolute F-score improvement and a 0.31 absolute error-rate reduction compared to the DCASE 2017 baseline system.
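A minimal Keras sketch of this two-branch segmentation network follows; the per-frame sigmoid output and the variable sequence length are assumptions consistent with the description above.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def lstm_branch(x):
    # A 2-layer LSTM with [50, 50] units and 0.25 dropout, as described above.
    x = layers.LSTM(50, return_sequences=True, dropout=0.25)(x)
    return layers.LSTM(50, return_sequences=True, dropout=0.25)(x)

audio_feats = layers.Input(shape=(None, 30))   # selected high-level audio features
vae_latents = layers.Input(shape=(None, 128))  # VAE± hidden representation

merged = layers.Concatenate()([lstm_branch(audio_feats), lstm_branch(vae_latents)])
frame_labels = layers.TimeDistributed(layers.Dense(1, activation="sigmoid"))(merged)

segmenter = Model([audio_feats, vae_latents], frame_labels)
segmenter.compile(optimizer="rmsprop", loss="mse")
```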
Figure 3: Visualization of the encodings produced by the trained VAEs with two latent dimensions. The left four plots show the encoding of positive and negative training instances by both VAE± and VAE−. Clear separation between the positive and negative instances is observed in the VAE± encoding space. The right-most plot shows the encoding of both training and testing data by VAE±. [Scatter plots omitted; only axis ticks and legend labels survived extraction.]

Note that the DCASE 2017 evaluation toolkit is used for the audio event segmentation evaluation. We have also compared our proposed segmentation system with a many-to-many LSTM network trained with the 30-dimensional high-level audio features described earlier. For a fair comparison, we do not use the binary tagging results as the input to the segmentation step. Based on the experiments, with a similar network architecture, our proposed solution consistently outperforms the LSTM network trained with high-level audio features, with absolute F-score improvements of 5.7%, 6.9%, and 8.5% for the baby cry, glass break, and gunshot events, respectively.

We stress that the advantage of the proposed framework lies in its scalability: high-dimensional features can be effectively mapped to a low-dimensional representation through the VAE by learning the relations among instances. Approaches like MI-Graph and MI-SVM come with high computational complexity, especially on large datasets with high-dimensional features. The low-dimensional representation learned with the proposed framework can be used by simple classifiers that achieve comparable performance. Moreover, for the audio data, the proposed VAE network automatically learns features at the instance level, thus eliminating hand-engineered feature extraction such as histogram- or Gaussian Mixture Model (GMM)-based approaches.

Visualizations To gain better insight into what the VAE network learns, we visualize its learned latent representation. Figure 3 shows the encoding of the training and test data in one of the 10-fold cross-validation evaluations on the Elephant dataset. For visualization purposes, we trained the VAE with two latent dimensions. The encoding of training instances by VAE± shows two overlapping data clusters, rather than two well-separated clusters, corresponding to the positive- and negative-bag instances. This is as expected, since a positive bag can include negative instances, leading to encodings similar to those of the negative-bag instances. As its counterpart, the VAE− encodes both positive-bag and negative-bag instances into a single data cluster. As a result, the VAE± encoding maximizes the difference between the positive and negative instances. More importantly, a similar encoding pattern between positive-bag instances and negative-bag instances can also be observed in the test data, suggesting an effective feature representation of the data.
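Plots like those in Figure 3 can be reproduced with matplotlib, assuming an encoder trained with nz = 2 and instance-level bag-membership labels y:

```python
import matplotlib.pyplot as plt

# enc: (n_instances, 2) latent means from a VAE trained with nz = 2;
# y: 0/1 array marking negative-/positive-bag membership of each instance.
def plot_latents(enc, y, title="VAE± encoding (nz = 2)"):
    plt.scatter(enc[y == 0, 0], enc[y == 0, 1], s=8, label="negative-bag instances")
    plt.scatter(enc[y == 1, 0], enc[y == 1, 1], s=8, label="positive-bag instances")
    plt.title(title)
    plt.legend()
    plt.show()
```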
Figure 4: F-score (%) for the MUSK and image annotation datasets using VAE+KNN with different hidden dimensions. [Plot omitted.]

Parameter sensitivity We study the sensitivity to the latent dimension setting nz. Intuitively, a higher latent dimension allows the model to capture more variance of the data, which may overfit easily when the intrinsic dimension of the problem is low. A lower latent dimension emphasizes learning the structure of the data, which may lead to under-fitting on complex problems. As a result, we expect different optimal settings for different problems. This is illustrated in Fig. 4, where we use the VAE with a k-nearest-neighbor classifier (VAE+KNN) with nz ∈ [8, 16, 32, 64, 128, 256] on the MUSK and image annotation benchmark datasets. As shown in the figure, the best performance on each task is achieved with a different nz. For example, for Elephant the best performance is achieved when nz = 16, while for MUSK2, nz = 64 leads to the best performance. These results follow our intuition, providing a means to improve the classification performance in different applications.

Conclusion
In this paper, we presented a novel approach to incorporate deep variational autoencoders into multiple instance learning. We use two VAEs to approximate the posteriors p(z|X) and p(z|X, Y = −1), while applying their latent layers to distinguish positive-bag instances from negative-bag instances. The proposed framework also addresses the essential challenge of the MIL problem, namely that the positive label is ambiguous. Using both theoretical proof and experiments, we have shown that maximizing the distance between the two VAEs indeed encourages learning meaningful representations in the MIL problem. Our experimental results show the scalability and superior performance of our method when compared with state-of-the-art methods on the benchmark datasets for the multiple instance learning task, as well as on the rare audio event detection and segmentation problem. Given the relaxed constraints on data annotation in the MIL problem formulation, the proposed framework can take advantage of the vast amount of weakly labeled data, such as easily available web data, for different applications including image annotation, text categorization, audio event detection, etc. While this paper focuses on the binary MIL problem, we are considering various extensions of the proposed framework to other related problems, including multi-class MIL and MIML.

References
[1] Stuart Andrews, Ioannis Tsochantaridis, and Thomas Hofmann. Support vector machines for multiple-instance learning. In S. Becker, S. Thrun, and K. Obermayer, editors, Advances in Neural Information Processing Systems 15, pages 577–584. MIT Press, 2003.
[2] Peter Auer, Philip M. Long, and Aravind Srinivasan. Approximating hyper-rectangles: Learning and pseudo-random sets. In Proceedings of the Twenty-ninth Annual ACM Symposium on Theory of Computing, STOC '97, pages 314–323, New York, NY, USA, 1997. ACM.
[3] Boris Babenko. Multiple instance learning: algorithms and applications. 2008.
[4] Boris Babenko, Piotr Dollár, Zhuowen Tu, and Serge Belongie. Simultaneous Learning and Alignment: Multi-Instance and Multi-Pose Learning. In Workshop on Faces in 'Real-Life' Images: Detection, Alignment, and Recognition (Erik Learned-Miller, Andras Ferencz, and Frédéric Jurie, eds.), Marseille, France, October 2008.
[5] Yixin Chen, Jinbo Bi, and James Z. Wang.
Miles: Multiple-instance learning via embedded instance selection. IEEE Transactions on Pattern Analysis & Machine Intelligence, 28(12):1931–1947, 2007.
[6] Yixin Chen and James Z. Wang. Image categorization by learning and reasoning with regions. J. Mach. Learn. Res., 5:913–939, December 2004.
[7] Steven B. Davis and Paul Mermelstein. Comparison of parametric representations for monosyllabic word recognition in continuously spoken sentences. In Readings in Speech Recognition, pages 65–74. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, 1990.
[8] Thomas G. Dietterich, Richard H. Lathrop, and Tomas Lozano-Perez. Solving the multiple instance problem with axis-parallel rectangles. Artificial Intelligence, 89(1):31–71, 1997.
[9] Gary Doran and Soumya Ray. Smile: Shuffled multiple-instance learning. In Proceedings of the Twenty-Seventh AAAI Conference on Artificial Intelligence, AAAI'13, pages 260–266, 2013.
[10] Ji Feng and Zhi-Hua Zhou. Deep MIML network. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, February 4-9, 2017, San Francisco, California, USA, pages 1884–1890, 2017.
[11] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 27, pages 2672–2680. Curran Associates, Inc., 2014.
[12] Geoffrey E. Hinton, Simon Osindero, and Yee-Whye Teh. A fast learning algorithm for deep belief nets. Neural Comput., 18(7):1527–1554, July 2006.
[13] Diederik P. Kingma and Max Welling. Auto-encoding variational bayes. CoRR, abs/1312.6114, 2013.
[14] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1097–1105, 2012.
[15] Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. Nature, 521(7553):436–444, 2015.
[16] Christian Leistner, Amir Saffari, and Horst Bischof. Miforests: Multiple-instance learning with randomized trees. In Proceedings of the 11th European Conference on Computer Vision: Part VI, ECCV'10, pages 29–42, 2010.
[17] Philip M. Long and Lei Tan. Pac learning axis-aligned rectangles with respect to product distributions from multiple-instance examples. Machine Learning, 30(1):7–21, 1998.
[18] Oded Maron and Tomas Lozano-Perez. A framework for multiple instance learning. In Advances in Neural Information Processing Systems 10, Cambridge, MA, 1998. MIT Press.
[19] Jan Ramon and Luc De Raedt. Multi instance neural networks. In Proceedings of the ICML 2000 Workshop on Attribute-Value and Relational Learning, 2000.
[20] Hyun Oh Song, Yong Jae Lee, Stefanie Jegelka, and Trevor Darrell. Weakly-supervised discovery of visual pattern configurations. CoRR, abs/1406.6507, 2014.
[21] Jun Wang and Jean-Daniel Zucker. Solving the multiple-instance problem: A lazy learning approach. In Proceedings of the Seventeenth International Conference on Machine Learning, ICML '00, pages 1119–1126, San Francisco, CA, USA, 2000. Morgan Kaufmann Publishers Inc.
[22] Xinggang Wang, Zhuotun Zhu, Cong Yao, and Xiang Bai. Relaxed multiple-instance SVM with application to object discovery. CoRR, abs/1510.01027, 2015.
[23] Xiu-Shen Wei, Jianxin Wu, and Zhi-Hua Zhou. Scalable multi-instance learning.
In 2014 IEEE International Conference on Data Mining, ICDM 2014, Shenzhen, China, December 14-17, 2014, pages 1037–1042, 2014.
[24] Xiu-Shen Wei, Jianxin Wu, and Zhi-Hua Zhou. Scalable algorithms for multi-instance learning. IEEE Transactions on Neural Networks and Learning Systems, 28(4):975–987, 2017.
[25] Jiajun Wu, Yinan Yu, Chang Huang, and Kai Yu. Deep multiple instance learning for image classification and auto-annotation. In CVPR, pages 3460–3469. IEEE Computer Society, 2015.
[26] Yan Xu, Tao Mo, Qiwei Feng, Peilin Zhong, Maode Lai, and Eric I-Chao Chang. Deep learning of feature representation with multiple instance learning for medical image analysis. In IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP 2014, Florence, Italy, May 4-9, 2014, pages 1626–1630, 2014.
[27] Cha Zhang, John C. Platt, and Paul A. Viola. Multiple instance boosting for object detection. In Y. Weiss, P. B. Schölkopf, and J. C. Platt, editors, Advances in Neural Information Processing Systems 18, pages 1417–1424. MIT Press, 2006.
[28] Qi Zhang and Sally A. Goldman. Em-dd: An improved multiple-instance learning technique. In Advances in Neural Information Processing Systems, pages 1073–1080. MIT Press, 2001.
[29] Zhi-Hua Zhou, Yu-Yin Sun, and Yu-Feng Li. Multi-instance learning by treating instances as non-i.i.d. samples. In Proceedings of the 26th Annual International Conference on Machine Learning, ICML '09, pages 1249–1256, New York, NY, USA, 2009. ACM.
ai_researcher
2
Research_through_Design_Approaches_in_Human-Robot_Interaction.pdf
Would the Trees Dim the Lights? Adopting the Intentional Stance for More-Than-Human Participatory Design
Ned Cooper, [email protected], Australian National University, Canberra, Australia

ABSTRACT
The 2019/20 Black Summer bushfires in Australia demonstrated the brutal and disastrous consequences of changing the technological world without considering linkages with the biophysical, ecological or human worlds. An emerging more-than-human design philosophy encourages designers to consider such interrelations between humans and non-human entities. Yet, the design research community has focused on situated or embodied experiences for designers, rather than developing processes to legitimate the perspectives of non-human entities through participatory design. This paper explores how adopting the 'intentional stance', a concept from philosophy, might provide a heuristic for more-than-human participatory design. Through experimentation with the intentional stance in the context of smart lighting systems, the paper demonstrates that the approach has potential for non-human entities from the ecological world, but less so for the biophysical world. The paper concludes by encouraging critique and evolution of the intentional stance, and of other approaches, to legitimate the perspectives of non-human entities in everyday design.

CCS CONCEPTS
• Human-centered computing → Participatory design.

KEYWORDS
More-than-human, posthuman, participation, smart cities

ACM Reference Format:
Ned Cooper. 2022. Would the Trees Dim the Lights? Adopting the Intentional Stance for More-Than-Human Participatory Design. In Participatory Design Conference 2022: Volume 2 (PDC 2022 Vol. 2), August 19-September 1, 2022, Newcastle upon Tyne, United Kingdom. ACM, New York, NY, USA, 6 pages. https://doi.org/10.1145/3537797.3537799

Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s). PDC 2022 Vol. 2, August 19-September 1, 2022, Newcastle upon Tyne, United Kingdom. © 2022 Copyright held by the owner/author(s). ACM ISBN 978-1-4503-9681-3/22/08. https://doi.org/10.1145/3537797.3537799

1 INTRODUCTION
Climate variability and long-term climate trends increase the likelihood and severity of catastrophic weather events. These weather events can have a destructive impact on both the ecological and human worlds. The recent Black Summer bushfires in Australia, for example, burnt over 24 million hectares of land [3] and killed or displaced three billion animals [20]. At the same time, the bushfires directly caused 33 deaths [3], reduced air quality to hazardous levels for humans [12], destroyed thousands of buildings and caused major disruption to critical infrastructure networks, bringing society to a standstill in affected areas [14]. Climate scientists link the severity of the 2019/20 bushfire season—the most destructive ever recorded in Australia—with climate variability and long-term climate trends, which are themselves linked to technological changes since the Industrial Revolution [1, 7, 29]. The Black Summer exemplified, in a brutal and disastrous manner, the consequences of changes in one world (technological) without considering linkages with other worlds (biophysical, ecological or human).

Given the interdependence of the biophysical, ecological, human and technological worlds, participatory design researchers are increasingly concerned with the participation of entities beyond the human in the design of technology. Scholars of 'posthuman design' and 'more-than-human design', for example, are experimenting with design processes to shift design mindsets. Experiments include situating designers in the environment in which a technological artefact will operate [4, 30] and even encouraging designers to embody non-human entities in those environments, to consider the experience of other species or things [9]. Yet, these experiments do not provide guidance on how to legitimate the perspectives of non-human entities through participatory design, instead focusing on situated or embodied experiences for designers to break down the boundaries between those designers and non-human entities.

In 2020, I worked with the Queanbeyan-Palerang Regional Council ('the Council'), as part of a team of researchers, to review the plans and progress of a smart city project in a region affected by the Black Summer bushfires. Our report recommended the Council adopt a more-than-human design framework, though we were unable to provide clear processes for the Council to enact the framework. This gap in design practice led to the review of theories of non-human behaviour from other disciplines, in particular philosophy and cognitive science, and eventually to the 'intentional stance'. To predict or explain the behaviour of other human or non-human entities, people often refer to the mental states of the other entity (their beliefs, desires or intentions) [11]. Dennett described this strategy of referring to the mental states of others to predict or explain their behaviour as "adopting the intentional stance" [11]. This paper explores how the intentional stance might be repurposed to legitimate the perspectives of non-human entities from the biophysical and ecological worlds, if those entities were to participate in the design process. Two questions guide the paper:
• How does adopting the intentional stance as a heuristic enable designers to account for the perspectives of non-human entities?
• What non-human entities could designers consider by adopting the intentional stance?
Each question is considered by reflecting on the smart city project. The paper builds on previous work with the Council, imagining how adopting the intentional stance might offer a process for the Council to enact the more-than-human design framework in the context of smart lighting systems. Through a process of experimentation and evaluation, the paper explores the potential of one approach (adopting the intentional stance) to clarify design processes for more-than-human participation.

2 BACKGROUND
This section shares definitions of each of the 'worlds', outlines posthuman or more-than-human design, and discusses the intentional stance.

2.1 Biophysical, Ecological and Human Worlds
The biophysical world is defined in this paper as the four spheres that support the existence of living things: the atmosphere (gases around the earth), the hydrosphere (water on earth), the lithosphere (rocks, soils and the earth's crust) and the biosphere (the sum of all ecosystems). The ecological world, on the other hand, is defined in this paper as living things on earth—plants and non-human animals—at the level of individuals, species, or individual ecosystems. Finally, the human world refers to individual humans and human societies.

The theme for the 2022 Participatory Design Conference (PDC) encourages authors and participants to explore how entities from worlds other than the human world may participate in design. This paper focuses on the connections between the biophysical, ecological, and human worlds, and explores how adopting the intentional stance could enable the participation of entities from the biophysical and ecological worlds in addition to the human world.

2.2 Posthuman Design and More-Than-Human Design
Design has been dominated by a 'human-centred' or 'user-centred' paradigm since at least the 1980s [15, 30]. These paradigms focus the attention of designers on the needs of individual human users and/or collectives of human users of technology, rather than technology itself. Recently, however, rapid transformation of the technological world alongside changes in the biophysical and ecological worlds have prompted a shift in the attention of designers towards the interrelations between human and non-human entities. An emerging posthuman design agenda encourages consideration of the non-human in the design process, including non-human animals and plants along with 'things' and the artificial [15, 17]. As Forlano [15] outlines, posthuman design has been influenced by theoretical approaches such as actor-network theory, feminist new materialism, object-oriented ontology, and transhumanism. Inheriting from these fields, the emerging posthuman design agenda encourages designers to transcend the limited focus of human-centred or user-centred design on human entities and consider a broader set of relations. Another term used to describe this emerging design philosophy is more-than-human design. Scholarly publications referring to more-than-human design have focused particularly on the design of smart cities [e.g., 9, 21, 31]. More-than-human design scholars have also drawn on principles of relationality as understood in Indigenous epistemologies, which may enable designers to respectfully accommodate the non-human [2, 13, 19].

2.2.1 Participation of Non-Human Entities. For PDC 2022 we must ask – what might be considered 'participation' by non-human entities? Scholars of more-than-human design have experimented with two processes for non-human participation. In the first, designers are situated in the biophysical and ecological environment in which technology they design will operate, and encouraged to engage with non-human entities in that environment as part of the design process [4, 26, 30]. This act is intended to break down barriers between the design process and the biophysical and/or ecological worlds in which designed technology will operate. Secondly, at PDC 2018 in Hasselt, Belgium, a group of designers undertook several experiments including embodying non-human animals as the designers navigated the city [9]. This act was intended to encourage designers to "see the city" from the perspectives of non-human entities, which might then inform smart city design processes. However, as Akama et al. [2] note, participation implies the equivalent of "human voice, rights, representation and structures of decision-making." While situated or embodied experiences for human designers may enable designers to 'see' other worlds more clearly, there is a need to build on these experiments to design processes that legitimate the perspectives of non-human entities through participatory design, as we have for human entities.

2.3 The Intentional Stance
Dennett first proposed the intentional stance in 1971 [10] and included the idea in his book of the same name in 1987 [11]. Dennett argues that when we predict or explain the behaviour of an entity (humans, animals, artifacts—any object or system), we view that entity at varying levels of abstraction. At the first level, the physical stance, we predict the behaviour of an object or system based on its physical structure. We may abstract from the physical stance to the design stance, predicting the behaviour of a system based on our experience and knowledge of how the system is designed. Marchesi et al. [22] outline a useful example of adopting the design stance with respect to a car—we can predict that a car will slow down when we press the brake pedal based on our knowledge of the design of a car, rather than on our knowledge of the precise physical mechanisms that make up the braking system in a car. The most abstract of all the stances, the intentional stance, requires no knowledge of either physical structure or design. We predict the behaviour of another system from our explanations or interpretations of the beliefs and desires of the other system. Dennett [11] explains how we use this strategy as follows:

Here is how it works: first you decide to treat the object whose behavior is to be predicted as a rational agent; then you figure out what beliefs that agent ought to have, given its place in the world and its purpose. Then you figure out what desires it ought to have, on the same considerations, and finally you predict that this rational agent will act to further its goals in the light of its beliefs. A little practical reasoning from the chosen set of beliefs and desires will in many—but not in all—instances yield a decision about what the agent ought to do; that is what you predict the agent will do.

Adopting the intentional stance might be a useful way to predict the behaviour of an entity if it were to participate in design activities, by treating the entity as if it had mental states. This does not lead to a theory of the operation of the internal mechanisms of any agent, and it does not require proof of the true intentionality of an agent. Rather, it is a strategy for interpreting the behaviour of an entity as if it were a rational agent that governed its choice of action by considering its beliefs, desires or intentions—or mental states [11]. The strategy builds on the tendency of humans to attribute mental states to the entities whose behaviours we intend to interpret. For this paper, then, there is a question as to the conditions under which an entity from either the biophysical or ecological world could be considered to have a mental state discernible to human designers.

3 SPECULATIVE CASE STUDY: MORE-THAN-HUMAN PARTICIPATION FOR SMART LIGHTING SYSTEMS
In 2020, I worked as part of a team of researchers to review the plans and progress for the Queanbeyan Smart City Precinct ('smart city project'). Our report to the Council, overall, recommended an evolution in the design framework—from a techno-centred framework to a framework based on more-than-human design. While the overall recommendation was well received, the report lacked specific recommendations on how to enact the renewed framework. Our report encouraged the Council to think of the tensions between the interests of residents in a smart city and the interests of non-human entities in the ecological world, drawing on the lessons of the Black Summer bushfires. We also encouraged the Council to engage with Indigenous design practitioners during future design phases, to consider the relationality between smart city technologies and the people and places in which those technologies are embedded. Yet, we were unable to provide a clear process for Council staff themselves to consider the perspectives of entities within the region other than those of human residents, non-resident visitors or other humans managing the smart city. Without such a process, it is difficult to envisage the translation of the theories of posthuman or more-than-human design outside academic research contexts and into the everyday practice of smart city design. The lack of a clear design process to recommend to the Council led to the review of other disciplines, such as philosophy and cognitive science, for theories or heuristics that might translate into design processes. The section below presents an imagination of how adopting such a heuristic—the intentional stance—could legitimate the perspectives of two non-human entities towards the design of smart lighting systems in Queanbeyan.

3.1 Adopting the Intentional Stance to Legitimate the Perspectives of Trees and Air
Many smart cities include smart lighting systems, including the smart city project administered by the Council. The lights in Queanbeyan shine brighter when pedestrians are in the vicinity of the lighting system, to create "more vibrant areas with a feeling of safety" [24]. In Queanbeyan, then, human and non-human entities must co-exist with artificial light at night, in variable intensities. The impact of artificial light on humans is well researched [e.g., 8, 18]. Ecologists have also raised the alarm about the impact of artificial light on some non-human animals, such as sea turtles at the hatchling stage [25]. However, the impacts on plants and elements of the earth are largely unexplored. This section explores how adopting the intentional stance might allow designers to predict the behaviour of these entities, in the absence of definitive scientific guidance.

Plants possess chlorophyll, which absorbs light to sustain their existence, but they also use light as a source of information [5]. The natural cycles of light guide plants to regulate circadian rhythms and seasonal phenology, and also affect phenotypic variation (including variation in growth and resource allocation) [5]. However, the widespread use of artificial lighting by human societies outdoors alters these cycles [16]. Illumination from street lighting is brighter than and inconsistent with natural cycles of moonlight, resulting in many plants experiencing artificial light at night at levels consistent with physiological effects (even if in short duration) [5].

If we adopt the intentional stance towards trees located in an area proposed for a smart lighting system (entities from the ecological world), we would treat those trees as rational agents. Based on the points raised in the paragraph above, one might say that trees would believe in the importance of growth and resource allocation consistent with natural daily and seasonal cycles of light. The trees' desires and intentions, then, could include absorbing light within the optimal spectrum in natural cycles to optimise photosynthesis. Following the process outlined by Dennett for the intentional stance, a designer would predict that a tree ought to seek out light within that spectrum and during those cycles. By extension, trees located within an area proposed for a smart lighting system, participating in the design of the smart city, would advocate for minimal disruption to lighting within those parameters.

We might also wish to consider the perspectives of particles of air surrounding a smart lighting system—entities from the biophysical world. Excess artificial light at night can contribute to air pollution by interfering with chemical reactions that naturally clean the air during the night [27]. Chemicals emitted from a variety of human activities are broken down by a form of nitrogen oxide called the nitrate radical, which only occurs in darkness, as sunlight destroys the naturally occurring chemical [6]. We might use this information to try and predict how particles of air would act to avoid such disruption to air filtration. Yet it is difficult to discern the mental states of particles of air that would motivate such behaviour—what does a particle of air believe, desire or intend to do, other than exist? Adopting the intentional stance first requires identifying such mental states, which may not be possible for non-living entities. In addition, it is difficult to identify individual, localised entities in the atmosphere—could the intentions of individual air particles be determined separately from the air as a whole? If not, how can we identify entities from the atmosphere that should participate in the design of a particular smart lighting system or smart city?

4 DISCUSSION
This section discusses the usefulness and limitations of the intentional stance as a heuristic for more-than-human participatory design, and briefly presents alternatives for consideration.

4.1 The Intentional Stance as a Heuristic
For the ecological world, this paper demonstrates through experimentation with the example of a smart lighting system that the intentional stance could become a heuristic for designers to predict the behaviour of some non-human entities, such as non-human animals and plants. Drawing on the case study, the Council could consider the perspectives of trees alongside the perspectives of human stakeholders in the smart city—residents, non-resident visitors, and those involved in the management of the smart city. The goal of such a process is not to exclude the perspectives of human stakeholders, but to balance the perspectives of humans alongside those of entities from other worlds. For example, residents and non-resident visitors may desire that the pavement in a certain area is illuminated alongside tree cover to improve accessibility and promote safety. Following the process enabled by adopting the intentional stance, designers may then consider tensions between the interests of human stakeholders and those of trees, as outlined earlier. On the other hand, interpreting the mental states of an element that makes up part of the biophysical world—the air—was not straightforward. For the biophysical world, then, adopting the intentional stance loses value as a simple heuristic for designers (or anyone practicing design) to predict the behaviour of non-human entities.

Entities from different worlds may require different methods or processes to enable their legitimate participation in technology design. However, we must not wait for an all-encompassing approach to participatory design for any non-human entity. The design of technological systems continues apace, guided by techno- or human-centred design philosophy, and heuristics may assist the practice of everyday design to contribute, now, to avoiding ecocidal futures. Instead, I encourage application, critique and re-imagination of the intentional stance for entities from the ecological world, just as this paper was developed on the foundation set by earlier experiments in more-than-human design.

4.2 Limitations of the Intentional Stance
There are several limitations to adopting the intentional stance for the participation of non-human entities from the ecological world, notwithstanding its potential as a heuristic. Firstly, non-human entities may not act within the rational dynamic assumed by the intentional stance. In the case of humans, social psychology has documented our irrationality for decades [28]. More recently, affective neuroscience has demonstrated the centrality and importance of emotions to human decision making [23]. Further research might consider how non-human entities, from either the biophysical or ecological worlds, act outside the bounds of rationality, and how that behaviour may be incorporated into or considered separately from the process prescribed by adopting the intentional stance.

Secondly, the intentional stance is focused on predicting the behaviour of an individual agent or aggregates of individual agents, but not collectives of agents. As shown in this paper, we may adopt the intentional stance to discern the mental states and predict the behaviour of a tree, but what about a forest? How might the perspective of a forest conflict with the perspective of an individual tree? What additional processes are required, or how can the intentional stance be amended, to consider the perspectives of collectives?

Thirdly, adopting the intentional stance for more-than-human participatory design assumes that humans can and should 'speak for' non-human entities. The perspective of a non-human entity predicted by a designer following such a process will inevitably be imbued with the perspective of the designer towards the non-human entity. For example, the information presented in the case study on the physiology of trees, and how physiology might be affected by artificial light, relied solely on desktop research conducted for this study. The perspective of the tree was informed by an assessment of the validity and reliability of evidence available in academic research publications. Further research may add steps to the process to guide reflexive practice by designers adopting the intentional stance—identifying sources of information and interrogating positionality—to enable outsiders to contextualise human predictions of non-human behaviour.

Finally, the case study adopts the intentional stance to interpret the perspectives of a limited set of entities—entities within the immediate vicinity of a smart lighting system. Further research could extend the boundary of analysis to consider how such a design process might include consideration of the impact of a smart lighting system more broadly. For example, one could extend the analysis to consider the perspectives of entities impacted by the supply chain for the smart lighting system (e.g., the factory producing sensors that manage the intensity of light and the source of energy for the Council area).

4.3 Alternatives to the Intentional Stance
4.3.1 Direct Sensing As Participation. For a more direct approach to more-than-human participatory design than adopting the intentional stance, we could consider using environmental sensors to capture signals from ecological or biophysical worlds to inform design processes, or to convert signals from sensors into inputs for the operation of a technological system. One might argue that using trees, for example, as networked elements of a technological system reflects a techno-centred design philosophy. However, identifying a 'goal state' for a tree (e.g., defining indicators of a thriving tree), assessing the physiological health of trees in Queanbeyan against those metrics over time, and adjusting the operation of technology in response to signals might be one way to procure 'participation' of non-human entities.

4.3.2 Engaging With Indigenous Epistemologies. The growing engagement with Indigenous epistemologies in design research is a welcome expansion of cultural perspectives in our research community. As argued by Akama et al. [2], there is much for more-than-human design to learn from cultural perspectives that prioritise pluriversal and relational thinking and being. The quest for simple heuristics to guide everyday designers to enact a more-than-human design framework is not intended to limit or distract from the embrace of Indigenous epistemologies. Rather, it is intended as a complement to those efforts and is, indeed, informed itself by the relational perspectives of those epistemologies. In addition, I encourage critique of the heuristic proposed in this paper from Indigenous perspectives.

5 CONCLUSION
Posthuman and more-than-human design scholars are shifting the attention of the design community from the prevailing techno- or human-centred design paradigms towards the interrelations between humans and non-human entities. This shift is urgent—the biophysical and ecological worlds that sustain us physically and spiritually are rapidly transforming due, at least in part, to our insensitivity to the interdependencies of life on earth. To translate renewed design philosophies from the realms of academic research contexts into the everyday practice of design, we need to develop clear processes to enact more-than-human design. This paper explored the usefulness of adopting the intentional stance—a strategy for interpreting the behaviour of non-human entities drawn from philosophy—as a heuristic for more-than-human participatory design. I encourage application, critique, re-imagination and, even, formalisation of the intentional stance to enable the participation of non-human animals and plants from the ecological world, considering the limitations raised in this paper. Further research on methods of participation for entities from the biophysical world is also encouraged, along with consideration of alternative approaches to legitimate the perspectives of entities from the ecological world. Without developing such heuristics for everyday design, visions of more-than-human design for smart cities proffered by our research community risk being sidelined in the practice of technology design.

ACKNOWLEDGMENTS
My sincere thanks to Alex Zafiroglu, Elizabeth Williams, Ben Swift, Josh Andres, Danny Bettay and anonymous reviewers for thoughtful feedback and comments. Thank you also to members of the Queanbeyan-Palerang Regional Council and my colleagues from the Australian National University—Lorenn Ruster, Nischal Mainali and Teffera Teffera—who engaged in the project that preceded this paper.

REFERENCES
[1] Nerilie J Abram, Benjamin J Henley, Alex Sen Gupta, Tanya J R Lippmann, Hamish Clarke, Andrew J Dowdy, Jason J Sharples, Rachael H Nolan, Tianran Zhang, Martin J Wooster, Jennifer B Wurtzel, Katrin J Meissner, Andrew J Pitman, Anna M Ukkola, Brett P Murphy, Nigel J Tapper, and Matthias M Boer. 2021. Connections of Climate Change and Variability to Large and Extreme Forest Fires in Southeast Australia. Communications Earth & Environment 2, 1 (Jan. 2021), 1–17. https://doi.org/10.1038/s43247-020-00065-8
[2] Yoko Akama, Ann Light, and Takahito Kamihira. 2020. Expanding Participation to Design with More-Than-Human Concerns. In Proceedings of the 16th Participatory Design Conference 2020 - Participation(s) Otherwise (PDC '20, Vol. 1). Association for Computing Machinery, New York, NY, USA, 1–11. https://doi.org/10.1145/3385010.3385016
[3] Australia and Royal Commission into National Natural Disaster Arrangements. 2020. Royal Commission into National Natural Disaster Arrangements Report. Commonwealth of Australia, Canberra, ACT, Australia.
[4] Michelle Bastian. 2018. Towards a More-than-Human Participatory Research. In Participatory Research in More-than-Human Worlds (first ed.), Michelle Bastian, Owain Jones, Niamh Moore, and Emma Roe (Eds.). Routledge, Abingdon, UK; New York, USA. https://doi.org/10.4324/9781315661698
[5] Jonathan Bennie, Thomas W. Davies, David Cruse, and Kevin J. Gaston. 2016. Ecological Effects of Artificial Light at Night on Wild Plants. Journal of Ecology 104, 3 (2016), 611–620. https://doi.org/10.1111/1365-2745.12551
[6] Steven S. Brown and Jochen Stutz. 2012. Nighttime Radical Observations and Chemistry. Chemical Society Reviews 41, 19 (2012), 6405. https://doi.org/10.1039/c2cs35181a
[7] Josep G. Canadell, C. P. (Mick) Meyer, Garry D. Cook, Andrew Dowdy, Peter R. Briggs, Jürgen Knauer, Acacia Pepler, and Vanessa Haverd. 2021. Multi-Decadal Increase of Forest Burned Area in Australia Is Linked to Climate Change. Nature Communications 12, 1 (Nov. 2021), 6921. https://doi.org/10.1038/s41467-021-27225-4
[8] YongMin Cho, Seung-Hun Ryu, Byeo Ri Lee, Kyung Hee Kim, Eunil Lee, and Jaewook Choi. 2015. Effects of Artificial Light at Night on Human Health: A Literature Review of Observational and Experimental Studies Applied to Exposure Assessment. Chronobiology International 32, 9 (Oct. 2015), 1294–1310. https://doi.org/10.3109/07420528.2015.1073158
[9] Rachel Clarke, Sara Heitlinger, Ann Light, Laura Forlano, Marcus Foth, and Carl DiSalvo. 2019. More-than-Human Participation: Design for Sustainable Smart City Futures. Interactions 26, 3 (April 2019), 60–63. https://doi.org/10.1145/3319075
[10] D C Dennett. 1971. Intentional Systems. The Journal of Philosophy 68, 4 (1971), 87–106. https://doi.org/10.2307/2025382
[11] D C Dennett. 1987. The Intentional Stance. MIT Press, Cambridge, MA.
[12] Giovanni Di Virgilio, Melissa Anne Hart, Angela M. Maharaj, and Ningbo Jiang. 2021. Air Quality Impacts of the 2019–2020 Black Summer Wildfires on Australian Schools. Atmospheric Environment 261 (Sept. 2021), 118450. https://doi.org/10.1016/j.atmosenv.2021.118450
[13] Arturo Escobar. 2018. Designs for the Pluriverse: Radical Interdependence, Autonomy, and the Making of Worlds. Duke University Press, Durham.
[14] Alexander I. Filkov, Tuan Ngo, Stuart Matthews, Simeon Telfer, and Trent D. Penman. 2020. Impact of Australia's Catastrophic 2019/20 Bushfire Season on Communities and Environment. Retrospective Analysis and Current Trends. Journal of Safety Science and Resilience 1, 1 (Sept. 2020), 44–56. https://doi.org/10.1016/j.jnlssr.2020.06.009
[15] Laura Forlano. 2017. Posthumanism and Design. She Ji: The Journal of Design, Economics, and Innovation 3, 1 (March 2017), 16–29. https://doi.org/10.1016/j.sheji.2017.08.001
[16] Kevin J. Gaston, James P. Duffy, Sian Gaston, Jonathan Bennie, and Thomas W. Davies. 2014. Human Alteration of Natural Light Cycles: Causes and Ecological Consequences. Oecologia 176, 4 (Dec. 2014), 917–931. https://doi.org/10.1007/s00442-014-3088-2
[17] Elisa Giaccardi and Johan Redström. 2020. Technology and More-Than-Human Design. Design Issues 36, 4 (Sept. 2020), 33–44. https://doi.org/10.1162/desi_a_00612
[18] F. Hölker, C. Wolter, E. K. Perkin, and K. Tockner. 2010. Light Pollution as a Biodiversity Threat. Trends in Ecology & Evolution 25, 12 (2010), 681–682.
[19] Jason Edward Lewis, Noelani Arista, Archer Pechawis, and Suzanne Kite. 2018. Making Kin with the Machines. Journal of Design and Science 3.5 (July 2018). https://doi.org/10.21428/bfafd97b
[20] Lily M van Eeden, Dale Nimmo, Michael Mahony, Kerryn Herman, Glenn Ehmke, Joris Driessen, James O'Connor, Gilad Bino, Martin Taylor, and Chris Dickman. 2020. Impacts of the Unprecedented 2019-2020 Bushfires on Australian Animals. Technical Report. WWF Australia, Ultimo, NSW.
[21] Susan Loh, Marcus Foth, Glenda Amayo Caldwell, Veronica Garcia-Hansen, and Mark Thomson. 2020. A More-than-Human Perspective on Understanding the Performance of the Built Environment. Architectural Science Review 63, 3-4 (July 2020), 372–383.
[22] Serena Marchesi, Davide Ghiglino, Francesca Ciardo, Jairo Perez-Osorio, Ebru Baykara, and Agnieszka Wykowska. 2019. Do We Adopt the Intentional Stance Toward Humanoid Robots? Frontiers in Psychology 10 (2019), 450. https://doi.org/10.3389/fpsyg.2019.00450
[23] Leonard Mlodinow. 2022. Emotional: How Feelings Shape Our Thinking (first edition ed.). Pantheon Books, New York.
[24] Queanbeyan-Palerang Regional Council. 2019. COMPLETED QCBD Smart City. https://www.qprc.nsw.gov.au/Major-Works-Projects/COMPLETED-QCBD-Smart-City
[25] Michael Salmon. 2003. Artificial Night Lighting and Sea Turtles. Biologist 50 (Aug. 2003), 163–168.
[26] Stephanie Springgay and Sarah E. Truman. 2017. Walking Methodologies in a More-than-human World: WalkingLab. Routledge, Abingdon, UK; New York, USA.
[27] H Stark, S S Brown, W P Dubé, N Wagner, T B Ryerson, I B Pollack, C D Elvidge, D Ziskin, and D D Parrish. 2010. Nighttime Photochemistry: Nitrate Radical Destruction by Anthropogenic Light Sources.
[28] Amos Tversky and Daniel Kahneman. 1974. Judgment under Uncertainty: Heuristics and Biases. Science 185, 4157 (Sept. 1974), 1124–1131. https://doi.org/10.1126/science.185.4157.1124
[29] Geert Jan van Oldenborgh, Folmer Krikken, Sophie Lewis, Nicholas J Leach, Flavio Lehner, Kate R Saunders, Michiel van Weele, Karsten Haustein, Sihan Li, David Wallom, Sarah Sparrow, Julie Arrighi, Roop K Singh, Maarten K van Aalst, Sjoukje Y Philip, Robert Vautard, and Friederike E L Otto. 2021. Attribution of the Australian Bushfire Risk to Anthropogenic Climate Change. Natural Hazards and Earth System Sciences 21, 3 (March 2021), 941–960. https://doi.org/10.5194/nhess-21-941-2021
[30] Ron Wakkary. 2021. Things We Could Design: For More Than Human-Centered Worlds. MIT Press, Cambridge, MA, USA.
[31] Tan Yigitcanlar, Marcus Foth, and Md Kamruzzaman. 2019. Towards Post-Anthropocentric Cities: Reconceptualizing Smart Cities to Evade Urban Ecocide. Journal of Urban Technology 26, 2 (April 2019), 147–152.
ai_researcher
2
AgentScope_A_Flexible_yet_Robust_Multi-Agent_Platform.pdf
arXiv:2402.14034v2 [cs.MA] 20 May 2024

AgentScope: A Flexible yet Robust Multi-Agent Platform

Dawei Gao†, Zitao Li†, Xuchen Pan∗, Weirui Kuang∗, Zhijian Ma∗, Bingchen Qian∗, Fei Wei∗, Wenhao Zhang∗, Yuexiang Xie∗, Daoyuan Chen∗, Liuyi Yao, Hongyi Peng, Zeyu Zhang, Lin Zhu, Chen Cheng, Hongzhu Shi, Yaliang Li‡, Bolin Ding‡, Jingren Zhou

Alibaba Group

†Co-first authors. ∗Equal contribution. ‡Corresponding authors, email address: {yaliang.li, bolin.ding}@alibaba-inc.com

Abstract

With the rapid advancement of Large Language Models (LLMs), significant progress has been made in multi-agent applications. However, the complexities in coordinating agents' cooperation and LLMs' erratic performance pose notable challenges in developing robust and efficient multi-agent applications. To tackle these challenges, we propose AgentScope, a developer-centric multi-agent platform with message exchange as its core communication mechanism. The abundant syntactic tools, built-in agents and service functions, user-friendly interfaces for application demonstration and utility monitoring, zero-code programming workstation, and automatic prompt tuning mechanism significantly lower the barriers to both development and deployment. Towards robust and flexible multi-agent applications, AgentScope provides both built-in and customizable fault tolerance mechanisms. At the same time, it is also armed with system-level support for managing and utilizing multi-modal data, tools, and external knowledge. Additionally, we design an actor-based distribution framework, enabling easy conversion between local and distributed deployments and automatic parallel optimization without extra effort. With these features, AgentScope empowers developers to build applications that fully realize the potential of intelligent agents. We have released AgentScope at https://github.com/modelscope/agentscope, and hope AgentScope invites wider participation and innovation in this fast-moving field.

1 Introduction

Multi-agent systems, as upgraded extensions of single-agent systems, require collaborative efforts from multiple agents working in concert (Wang et al., 2023; Xi et al., 2023). With the advancement of Large Language Models (LLMs) (Ouyang et al., 2022; OpenAI, 2023; Touvron et al., 2023a,b), multi-agent applications have made great progress in both research and industrial communities, including software engineering (Hong et al., 2023), society simulation (Park et al., 2023), and intelligent assistants (Wu et al., 2023; AutoGPT-Team, 2023). Although significant progress has been made in multi-agent scenarios, major challenges remain in multi-agent application development.

Developing a multi-agent application is more complex than creating a single-agent one. Unlike single-agent setups where an agent solely interacts with users, development in the multi-agent scenario requires careful creation and management of multiple models and agents (Wang et al., 2023; Xi et al., 2023), which places high demands on both the versatility and the handiness of a platform. In particular, the following aspects characterize the challenges: 1) Agents involved in a multi-agent application can specialize in different functions via different initial configurations; 2) A multi-agent application may require agents to be executed in a standardized operating procedure (SOP) or a more dynamic workflow; 3) The communication pattern between agents can vary from one-to-one to broadcasting (e.g., a discussion group of agents).
As a result, developers expect a handy platform that can provide concise and clear programming patterns covering all the aspects above, accelerating and facilitating the development cycle. Achieving versatility and handiness simultaneously requires careful design and trade-offs, and it remains a persistent goal for all multi-agent platform designs.

Aberrations are tinderboxes in a multi-agent system. Although LLMs have advanced rapidly, they still struggle with issues like hallucination (Rawte et al., 2023; Zhang et al., 2023b) and inadequate instruction-following (Fu et al., 2019; Zhang et al., 2023a). Besides, an agent can be equipped with various tools, but those tools introduce additional uncertainties (e.g., the accessibility of a database or a search engine). From the perspective of multi-agent system robustness, any unexpected error or response can propagate through the whole system, causing a series of cascading effects if not handled properly. Thus, it is crucial for multi-agent applications to autonomously detect and handle unexpected responses from LLMs. While LLMs may assist in identifying and managing these errors, it remains a challenge to determine whether they can resolve errors on their own and to automatically provide the necessary information for error correction. Consequently, designing fault-tolerant mechanisms that incorporate LLMs is a key challenge in the development of multi-agent applications.

Supporting agents with multi-modal data, tools, and external knowledge is highly systematic. Besides generating answers with LLMs, agents are expected to be more versatile, including generating and handling multi-modal data (Su et al., 2023; Betker et al., 2023), preparing and invoking functions as tools (Yao et al., 2023; Shen et al., 2024), managing external knowledge banks, and using the retrieved knowledge for retrieval-augmented generation (Lewis et al., 2020a). However, integrating these functionalities in multi-agent applications requires a comprehensive and systematic approach. Supporting multi-modal content is a complex endeavor, necessitating considerations for data storage, presentation, user interaction, message transmission, and communication. Tool utilization by agents requires unifying the function calling pattern and output parsing, prompt engineering to instruct LLMs, and designing reasoning mechanisms to ensure that tasks can be accomplished step by step. As for external knowledge, beyond the retrieval-augmented generation (RAG) techniques themselves, we need to consider how to efficiently share and manage knowledge in multi-agent scenarios while leaving enough flexibility for retrieval strategies. While some existing works investigate how those techniques individually work within specialized agent systems, general platform-level programming interfaces remain absent.

Distributed applications bring extra programming difficulties and system design challenges. An industry-oriented scenario for multi-agent applications is that the agents are owned by different organizations and run on different machines because the agents are equipped with unique private knowledge or patented tools. Developing such applications usually requires the developers to have professional knowledge of distributed system programming and optimization in the design phase.
Besides, distributed applications usually require considerable extra effort in development and testing, especially when debugging and diagnosing issues spread across distributed processes or agents. Moreover, integrating advanced features like multi-modal data processing poses additional challenges in a distributed setting, when agents require different amounts of time to accomplish their sub-tasks or when the generated contents are highly heterogeneous. Poor distributed system design can result in excessive communication overhead between agents. Therefore, building distributed multi-agent applications demands large efforts even from experienced developers and poses a high barrier for beginners who want to migrate their prototypes to a distributed style for optimal efficiency.

To tackle the aforementioned challenges, we introduce AgentScope, a novel multi-agent platform designed for developers with varying levels of expertise. AgentScope is well-designed with a message exchange communication mechanism that embodies great usability, robustness, and efficiency. We underscore the salient features of AgentScope as follows:

Exceptional Usability for Developers. AgentScope is designed with a fundamental emphasis on ease of use, particularly for developers with varying levels of expertise. By implementing a procedure-oriented message exchange mechanism, AgentScope ensures a smooth learning curve for multi-agent application development. To alleviate programming burdens, AgentScope offers an extensive suite of syntactic utilities, including various pipelines and an information-sharing mechanism. Besides programming with our framework, we also improve usability by providing a zero-code drag-and-drop programming workstation, which enables those with limited Python programming experience to build their own applications with little effort. Compared with building the skeleton of the application, prompt tuning can be a more time-consuming stage in multi-agent application development. In AgentScope, we equip our agents with a set of automatic prompt tuning mechanisms to relieve this burden. Coupled with rich built-in resources and integrated user interaction modules, AgentScope makes building a multi-agent application much more enjoyable than ever.

Robust Fault Tolerance for Diverse LLMs and APIs. As the scale and scope of models and APIs expand, a robust fault-tolerance mechanism in multi-agent applications becomes paramount. AgentScope integrates a comprehensive service-level retry mechanism to maintain API reliability. AgentScope is equipped with a set of rule-based correction tools to handle some obvious formatting problems in the responses of LLMs. Moreover, AgentScope offers customizable fault tolerance configurations, enabling developers to tailor their own fault tolerance mechanism through parameters like parse_func, fault_handler, and max_retries. While admittedly not all errors can be handled by the aforementioned mechanisms, we propose a logging system with customized features for multi-agent applications as the last safeguard of AgentScope.

Extensive Compatibility for Multi-Modal Data, Tools, and External Knowledge. With the remarkable progress of large-scale multi-modal models, AgentScope supports multi-modal data (e.g., texts, images, audio, and videos) in dialog conversation, message transmission, and data storage. Specifically, AgentScope decouples multi-modal data transmission from storage and employs a lazy loading strategy by providing a unified URL-based attribute in messages.
During message transmission, AgentScope only attaches a URL to the message, and the multi-modal data is loaded only when necessary, such as when being rendered in the web UI or invoked by model wrappers. For tool usage, AgentScope provides a component called the service toolkit as a one-step solution, including function preprocessing, prompt engineering, reasoning, and response parsing with fault-tolerance features. To support efficient external knowledge usage, AgentScope provides end-to-end, highly configurable, and sharable knowledge processing modules for retrieval-augmented generation (RAG), from data preprocessing to customizable retrieval.

Optimized Efficiency for Distributed Multi-Agent Operations. Recognizing the vital importance of distributed deployment, AgentScope introduces an actor-based distributed mechanism that enables centralized programming of complex distributed workflows and automatic parallel optimization. In particular, the workflows for local and distributed deployments are exactly the same, indicating negligible overhead when migrating applications between centralized and distributed environments. With such a distribution framework, AgentScope empowers developers to concentrate on the application design rather than implementation details.

Summary. To summarize, AgentScope, a novel multi-agent platform proposed for flexibility and robustness, includes the following advanced features:

1. AgentScope provides a procedure-oriented message exchange mechanism with a set of syntactic features to facilitate multi-agent programming, a zero-code drag-and-drop programming workstation, and a set of automatic prompt tuning mechanisms.

2. The fault tolerance designs of AgentScope enable developers to handle errors elegantly in their applications.

3. The support for multi-modal applications reduces the overhead of heterogeneous data generation and transmission. The service toolkit component facilitates the tool usage of agents in AgentScope, and the knowledge processing modules provide a flexible solution for agents to handle different information.

4. The actor-based distributed mode of AgentScope helps develop efficient and reliable distributed multi-agent applications seamlessly.

Roadmap. In the following sections, we navigate through the core components and capabilities of AgentScope, showcasing its role in advancing the development and deployment of multi-agent applications. Section 2 provides an overview, while Section 3 focuses on the user experience. Section 4 introduces the fault tolerance mechanisms in AgentScope. Sections 5, 6, and 7 cover the multi-modal support, tool usage, and retrieval-augmented generation modules in AgentScope. Section 8 presents our platform's support for distributed multi-agent applications. Use cases are presented in Section 9, related work is summarized in Section 10, and concluding thoughts are recorded in Section 11.

2 Overview

2.1 Basic Concepts in AgentScope

This section introduces the primary concepts in AgentScope: message, agent, service, and workflow. These four concepts run throughout the platform and all multi-agent applications built on it.

• Message: Messages serve as the carriers for information exchange in multi-agent conversations, encapsulating the source and content of the information. In AgentScope, messages are implemented as Python dictionaries with two mandatory fields (name and content) and an optional field (url).
The name field records the name of the agent that generates the message, and the content field contains the text-based information generated by the agent. The url field is designed to hold a Uniform Resource Locator (URL), which typically links to multi-modal data, such as images or videos. Messages with this field are particularly relevant for interactions with agents that can process and generate multi-modal content. Each message is uniquely identified by an auto-generated UUID and timestamp, ensuring traceability. Example 1 shows how messages can be created, serving as atoms in the inter-agent communication of AgentScope.

from agentscope.message import Msg

msg1 = Msg("Alice", "Hello!")
msg2 = Msg(
    name="Bob",
    content="How do you find this picture I captured yesterday?",
    url="https://xxx.png"
)

Example 1: Illustrative examples of message creation in AgentScope.

• Agent: Agents are the primary actors within multi-agent applications, acting as conversational participants and executors of tasks. In AgentScope, agent behaviors are abstracted through two interfaces: the reply and observe functions. The reply function takes a message as input and produces a response, while the observe function processes incoming messages without generating a direct reply. The interplay between agents and messages, as shown in Example 2, forms the operational basis of AgentScope and is essential for developers to model complex interactions in multi-agent LLMs.

# agent1 and agent2 are two initialized agents, for example
# agent1, agent2 = DialogAgent(...), DialogAgent(...)
msg1 = agent1()
msg2 = agent2(msg1)

Example 2: Demonstration of message exchange between agents in AgentScope.

• Workflow: Workflows represent ordered sequences of agent executions and message exchanges between agents, analogous to computational graphs in TensorFlow, but with the flexibility to accommodate non-DAG structures. Workflows define the flow of information and task processing among agents, facilitating parallel execution and efficiency improvements. This concept is essential for designing multi-agent systems that interact with LLMs, as it allows for the coordination of complex, interdependent tasks.

• Service Functions and Tools: Note that service functions are closely related to, but different from, the concept of tools in the context of agent design in AgentScope. Service functions refer to functional APIs that return a formatted output, ServiceResponse, while tools refer to processed service functions with functionality descriptions and necessary input parameters prepared. We introduce these two concepts in AgentScope because LLMs require help to invoke service functions as tools. One observation is that LLMs may need help understanding the functionalities of the service functions precisely and demand more descriptive information to make accurate decisions. Meanwhile, LLMs cannot (reliably) fill in some input parameters of the APIs, such as the API keys of Bing and Google Search. As a result, AgentScope defines tools as processed service functions.

2.2 Architecture of AgentScope

We present AgentScope as an infrastructural platform to facilitate the creation, management, and deployment of multi-agent applications integrated with LLMs. The architecture of AgentScope comprises three hierarchical layers and a set of user interaction interfaces, as shown in Fig. 1.

Figure 1: Architecture of AgentScope.
These layers provide support for multi-agent applications at different levels, including elementary and advanced functionalities of a single agent (utility layer), resources and runtime management (manager and wrapper layer), and agent-level to workflow-level programming interfaces (agent layer). AgentScope introduces intuitive abstractions designed to fulfill the diverse functionalities inherent to each layer and simplify the complicated inter-layer dependencies when building multi-agent systems. Furthermore, we offer programming interfaces and default mechanisms to strengthen the resilience of multi-agent systems against faults within different layers.

• Utility Layer: As the platform's foundation, the utility layer in AgentScope provides essential services to support the core functionalities of agents. This layer abstracts away the complexity of underlying operations, such as model API invocation and service functions including code execution and database operations, allowing agents to focus on their primary tasks. AgentScope's utility layer is designed with ease of use and robustness as its utmost priorities, supporting versatile operations in multi-agent systems and providing built-in autonomous retry mechanisms for exception and error handling against unexpected interruptions.

• Manager and Wrapper Layer: As an intermediary, the manager and wrapper abstraction layer manages the resources and API services, ensuring high availability of resources and providing resistance to undesired responses from LLMs. Unlike the utility layer, which provides default handlers, the manager and wrapper layer also offers customizable interfaces for fault tolerance controls depending on developers' needs and the specific requirements of the application. This layer is responsible for maintaining the operational integrity of the agents, a crucial aspect for LLMs to perform consistently
These interfaces allow developers to effortlessly monitor the status and metrics of the application, including agent communication, execution timing, and financial costs. Collectively, the layered constructs of AgentScope provide the essential building blocks for developers to craft bespoke multi-agent applications that leverage the advanced capabilities of large language models. The subsequent section will delve into the features of AgentScope that enhance the programming experience for multi-agent application development. 3 High Usability The design of AgentScope prioritizes usability, aiming to streamline the development process for multi-agent with LLMs and to ensure a smooth interaction experience for both users and developers. This section delves into how AgentScope flattens the learning curve and enhances the programmer’s experience by introducing intuitive concepts and features that facilitate the creation of complex multi-agent applications. 3.1 Syntactic Sugar for Multi-Agent Workflows Leveraging basic concepts introduced in Section 2.1, developers are empowered to construct sophisticated multi-agent applications. Nonetheless, directly coding each agent’s message exchange can become cumbersome, as shown in Example 3. Recognizing this, AgentScope introduces two syntactic utilities: pipelines and message hubs, to abstract away the complexity and minimize repetition. 1 # set up agents : agent1 to agent5 2 # ... 3 4 msg = agent1 ( Msg ( " Alice " , " Hello ! " ) ) 5 msg = agent2 ( msg ) 6 msg = agent3 ( msg ) 7 msg = agent4 ( msg ) 8 msg = agent5 ( msg ) Example 3: Example of programming a sequential workflow with basic concepts in AgentScope. Pipeline Abstraction The pipeline abstraction reduces repetitive coding by encapsulating patterns of message transmission, including sequential, conditional, and iterative exchanges, into simple and reusable components. With these pipelines, developers can focus on the logic of agent interactions rather than the boilerplate code. Example 4 illustrates how pipelines can be employed in both functional and object-oriented styles to create a clear and concise agent workflow. Besides the sequential pipeline in the example, AgentScope also provides if-else, switch, while-loop, and for-loop pipelines, facilitating the programming of the multi-agent interactions. 6 1 # set up agents : agent1 to agent5 2 # ... 3 from agentscope . pipelines import S e qu en t ia l Pi p el i ne 4 from agentscope . pipelines . functional import s eq u en t ia l pi pe l in e 5 6 # using functional pipeline 7 x = se q ue n ti a lp i pe l in e ([ agent1 , agent2 , agent3 , agent4 , agent5 ] , x ) 8 9 # using object pipeline 10 pipe = Se q ue n ti a lP i pe li n e ([ agent1 , agent2 , agent3 , agent4 , agent5 ]) 11 x = pipe ( x ) Example 4: Using functional and object sequential pipeline to construct workflow in AgentScope. Message Hub for Agent Communication In multi-agent systems, especially when integrated with LLMs, efficiently managing communication among a group of agents is essential. The message hub in AgentScope serves as a broadcast mechanism that simplifies group interactions. Developers can initiate a message hub by defining participating agents and can include initial broadcast messages. When new messages are generated by the agents within the message hub, they are automatically disseminated to other participants, as demonstrated in Example 5. 
This abstraction is particularly useful for multi-agent scenarios involving LLMs, where dynamic and contextually rich conversations are commonly observed (Du et al., 2023).

# set up agents: agent1 to agent4
# ...

greeting = Msg("host", "Welcome to the message hub!")

with msghub(participants=[agent1, agent2, agent3],
            announcement=greeting) as hub:
    # Message will be broadcast to agent2 and agent3 automatically
    agent1()

    # Delete agent2 from the message hub
    hub.delete(agent2)

    # Add agent4 into the message hub
    hub.add(agent4)

    # Broadcast message
    hub.broadcast(Msg("host", "Welcome agent4 to join the hub!"))

Example 5: Using the message hub in AgentScope.

3.2 Resource-Rich Environment for Agent Development

To further enhance usability, AgentScope is equipped with a rich set of built-in resources, including services, dedicated agents, and pre-configured examples. These resources are designed to reduce the initial setup effort and enable rapid prototyping and deployment of multi-agent LLM systems.

Comprehensive Service Integration AgentScope integrates various service functions, such as web search, database querying, and code execution, to support the tool usage capabilities of agents. These service functions are essential for building helpful agents with LLMs, as agents often need to draw information from external sources or execute tasks that go beyond the equipped LLMs' internal knowledge. Example 6 showcases the seamless conversion of a service into an OpenAI-compatible JSON format, simplifying the integration process for developers.

from agentscope.service import ServiceFactory, web_search

bing_search, func_json = ServiceFactory.get(
    web_search, engine="bing", api_key="xxx", num_results=10)

print(func_json)
# {
#     "name": "web_search",
#     "description": "Searching the given question with bing.",
#     "parameters": {
#         "type": "object",
#         "properties": {
#             "question": {
#                 "type": "string",
#                 "description": "The string question to search in Bing."
#             }
#         }
#     }
# }

searching_result = bing_search("What's the date today?")

Example 6: Converting the web search service into a function and a JSON-format dictionary that an agent can use.

Pre-built Agent Templates As cataloged in Table 1, AgentScope offers pre-built agents and ready-to-use components for tasks like dialogue management, user proxying, multi-modal data handling, and distributed deployment. These templates serve as starting points for developers to customize and extend, significantly accelerating the development of multi-agent LLM applications.

Agent Name          Function
UserAgent           The proxy of the user.
DialogAgent         A general dialog agent, whose role can be set by the system prompt.
DictDialogAgent     A dictionary-version dialog agent, which responds in Python dictionary format.
ReActAgent          An agent that can reason and use tools.
ProgrammerAgent     An agent that can write and execute Python code.
TextToImageAgent    An agent that generates images according to the requirements.
RpcUserAgent        A distributed-version user proxy.
RpcDialogAgent      A distributed-version DialogAgent.

Table 1: Some examples of built-in agents and their functions in AgentScope.
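To illustrate how these templates are put to work, the following minimal sketch instantiates two of the agents from Table 1 and runs a single exchange. The constructor arguments mirror the patterns shown in Examples 2 and 7; the model configuration file and name are placeholders, and exact signatures may differ across AgentScope versions.

import agentscope
from agentscope.agents import DialogAgent, UserAgent

# Load model configs (the file name is a placeholder)
agentscope.init(model_configs="model_configs.json")

# A general dialog agent whose role is set by its system prompt
assistant = DialogAgent(
    name="assistant",
    sys_prompt="You are a helpful assistant.",
    model_config_name="my_config",
)

# A proxy agent that forwards the user's input into the conversation
user = UserAgent(name="user")

# One round of conversation: the user's message is answered by the assistant
msg = user()
msg = assistant(msg)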
3.3 Multi-Agent Oriented Demonstration Interfaces

Furthermore, AgentScope introduces interaction interfaces tailored for multi-agent systems, as illustrated in Figures 2 and 3. These interfaces provide a rich multi-modal experience, crucial for systems incorporating LLMs that handle diverse data types.

Agent Differentiation in User Interfaces To facilitate user interaction with multiple agents, AgentScope assigns unique colors and icons to each agent, enhancing clarity and visual distinction in both the terminal and the web UI (Fig. 3). The "first-person perspective" feature allows users to experience interactions from the viewpoint of a specified agent, aligning with their role in the application, such as in a game scenario. This feature not only enriches the multi-agent experience but also mirrors the nuanced interactions that occur in human-agent and agent-agent dialogues within LLM systems.

Figure 2: The dialogue history of a werewolf game in AgentScope.

Figure 3: Multi-modal interactions between agents in the web UI.

Monitoring and Cost Management A vital aspect of deploying LLMs at scale is resource management. AgentScope includes a monitoring module that tracks model and API usage, as well as calculating financial costs. Developers can customize metrics and set budget limits, receiving automatic alerts when thresholds are approached or exceeded. This proactive cost management is particularly important for LLMs that may incur high computational expenses.

AgentScope Gradio Interface Once you have a multi-agent application, executing it in the terminal may be concise but lacks appeal. In AgentScope, we provide a powerful Gradio-based interface that is compatible with all AgentScope applications as long as there is a main function serving as the application's entry point. For example, if the main function of the application is in the application.py file, then running "as_studio application.py" builds a Gradio application with a graphical user interface that supports multi-modal content upload and presentation.

3.4 Towards Graphical Application Development

The design mentioned above provides massive convenience for those familiar with Python programming to quickly develop their multi-agent applications. However, AgentScope takes a step further: it provides a drag-and-drop online workstation on which developers only need to drag module blocks to compose an application; the workstation can then generate a configuration file of the application in JSON, or even a piece of Python code. With this feature, those with limited Python programming experience can build their multi-agent application without writing any Python code, while those familiar with Python can instantly obtain a piece of draft code ready for further customization. A screenshot of the online workstation is shown in Fig. 4, and the idea supporting this implementation is illustrated as follows.

Figure 4: Drag-and-drop programming workstation.

Expressing multi-agent applications with nodes in a directed acyclic graph (DAG). Based on the highly modular design of our basic infrastructure, all the key components can be represented as nodes, and an application can be built by constructing a directed acyclic graph (DAG). The execution of the application is equivalent to triggering and running the nodes in the graph following the traversal order of the DAG. Following traditional terms, we name such a DAG execution a workflow and the nodes in the workflow workflow nodes.
According to their functionality, the workflow nodes are categorized into six different types: model nodes, service (tool) nodes, agent nodes, pipeline nodes, message nodes, and copy nodes.

• Model nodes: Model nodes are designed to be relatively independent of the DAG. They correspond to the model configurations in AgentScope and work as entries that let users configure their models (LLMs, embedding models, or multi-modal models) and maintain such information for all the nodes in the following workflow that need to use the model.

• Service (tool) nodes: These nodes correspond to the services available in AgentScope. Some of them require additional information to set up, such as Google search and Bing search, which require API keys; others can be used directly.

• Agent nodes: As the name suggests, agent nodes represent the agents in AgentScope, which means users need to decide the models, agent names, and system prompts for the agents.

• Pipeline nodes: Pipeline nodes include the operators of AgentScope, namely the message hub and the pipelines (sequential, for-loop, while-loop, etc.). With such nodes, DAG representations can be as concise as Python programming.

• Message nodes: The message node is designed for cases where some initial messages are needed, such as the announcement (initial message) for the message hub.

• Copy nodes: The copy node is a special kind of node that replicates the results of a parent node when its output is needed for multiple subsequent operations.

Execute DAG with JSON or compile to Python. With the nodes above, developers can build applications by composing DAGs. However, the DAG is highly UI-dependent. Although a DAG can be represented in some formats (e.g., a JSON format recording each node's information and execution dependencies), we still need to ensure it is as reusable as other applications. To overcome this, AgentScope is equipped with a data structure called ASDiGraph, which provides two solutions.

• Direct-run: Given a JSON file recording the DAG information, ASDiGraph can parse the DAG information and sort the nodes in topological order. With these sorted nodes, the run function of ASDiGraph can execute them in order, feeding each predecessor's output to its successors, so that the application is executed step by step.

• To-Python compiler: The second solution is to translate the JSON file into a Python script. With the highly modularized components of AgentScope, the key idea is to rely on internal mappings from the functionality, required inputs, and expected outputs to small pieces of Python code. Specifically, each node contains Python code for importing dependent modules, initiating models or agents, and executing the application logic. ASDiGraph first groups the pieces of importing code and initiating code, and then composes the pieces of execution code following the topological order. Therefore, users obtain a complete Python script after ASDiGraph finishes compilation.

3.5 Automatic Prompt Tuning

For a multi-agent system that utilizes LLMs for generation, writing an appropriate prompt requires significant human effort and expertise (Pryzant et al., 2023), which motivates us to provide automatic prompt generation and tuning in AgentScope for its high usability. Specifically, AgentScope allows users to generate prompts based on a simple natural-language description of the agent, update prompts according to contexts, and enable in-context learning.
System Prompt Tuning When an agent is created, a system prompt should be associated with the agent to define its roles and responsibilities for following human instructions. For example, a programmer agent might be prompted with "You are proficient in writing and executing Python code". Meanwhile, a detailed and informative system prompt can improve agent performance and ensure that the agent performs as expected, such as "You are proficient in writing and executing Python code. You prefer to write the code in a modular fashion and provide unit tests for each module". With AgentScope, users only need to provide a simple description of the agent when creating it, and AgentScope can automatically generate such helpful system prompts using built-in tools based on LLMs, as shown in Example 7.

# set up agents with automatic prompt generation
# ...
from agentscope.agents import ProgrammerAgent

# Load model configs
agentscope.init(model_configs="model_configs.json")

# Create a programmer agent
programmer_agent = ProgrammerAgent(
    name="assistant",
    auto_sys_prompt=True,
    model_config_name="my_config",
    sys_prompt="an assistant that can write Python code")

Example 7: Initialize a programmer agent with automatic system prompt generation.

Besides, AgentScope provides interfaces for system prompt updates, which include manual setting by users and automatic adjustment based on the context. As a promising future direction, meta-prompting techniques (Pryzant et al., 2023; Suzgun and Kalai, 2024) can also be integrated into AgentScope, which might involve integrating an evaluator to provide guidance for automatic prompt optimization.

In-Context Learning Providing multiple demonstrations to LLMs can greatly enhance their ability to follow instructions, particularly when we want them to complete specific downstream tasks (Dai et al., 2023; Wei et al., 2022). AgentScope provides a simple switch to turn on/off the in-context learning behavior for agents that utilize LLMs. When users choose to apply in-context learning, they only need to provide demonstration candidates and configure how to match the most suitable ones, as illustrated in Example 8. AgentScope offers several widely used and useful matching approaches, such as random selection, similar questions, and similar answers, and allows for user customization.

# set up agents with in-context learning
# ...
from agentscope.agents import ReActAgent
from agentscope.utils.common import load_demo_data

# Load model configs
agentscope.init(model_configs="model_configs.json")

# Load demonstrations
react_pairs = load_demo_data("my_demos.txt")

# Create a ReAct agent
react_agent = ReActAgent(
    name="react_agent",
    enable_icl=True,
    demos=react_pairs,
    matching_approach="random")

Example 8: Enable in-context learning when creating an agent.

4 Fault-Tolerant Mechanisms

In the realm of multi-agent systems, particularly those interfacing with diverse open-source LLMs with varying instruction-following capabilities, fault tolerance is a key property to ensure seamless operation. AgentScope is engineered to autonomously handle a wide range of errors with minimal human intervention, drawing upon a comprehensive fault-tolerant infrastructure that is acutely aware of the complexities involved in multi-agent coordination and LLM dependencies.
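Before classifying the errors, it helps to see the overall shape of the configurable behavior in plain Python. The following is a conceptual sketch of the retry-and-fallback pattern exposed through the parse_func, fault_handler, and max_retries parameters introduced in Section 1; it is illustrative only and does not reproduce AgentScope's actual implementation.

import json

def parse_func(response: str) -> dict:
    # Parsing function: extract structured content from the raw LLM response.
    return json.loads(response)

def fault_handler(response: str) -> dict:
    # Fallback result once all retries are exhausted.
    return {"speak": "Sorry, I could not produce a valid answer."}

def call_with_fault_tolerance(model_call, prompt, parse_func, fault_handler, max_retries=3):
    # Re-invoke the model until the response parses successfully; otherwise
    # hand the last response to the fault handler.
    response = ""
    for _ in range(max_retries):
        response = model_call(prompt)
        try:
            return parse_func(response)
        except Exception:
            continue  # a rule- or model-resolvable error: try again
    return fault_handler(response)

# Usage with a stubbed model call:
result = call_with_fault_tolerance(
    lambda p: '{"speak": "hi"}', "Hello", parse_func, fault_handler)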
Error Classification and Handling Strategies Our approach begins with a methodical classification of errors into distinct levels, each with tailored handling strategies:

• Accessibility errors: In AgentScope, an agent's functionalities rely on different kinds of services, but those services may be subject to temporary accessibility errors. These errors may be caused by model instability or network conditions. For example, the model APIs may return a timeout error when there is traffic congestion during busy hours, or a database on a remote machine may be inaccessible because of transient network outages.

• Rule-resolvable errors: As many multi-agent applications require information exchange between services or agents, it is essential to follow the protocols for those communications, e.g., the JSON format. However, as the responses of LLMs are not fully controllable yet, their returns may not follow the format required in the prompts. For example, we may expect a response from an LLM in JSON, but a right brace is missing at the end of the return, leading to a parsing failure. As the JSON format has clear specifications, it is reasonable to assume that a subset of these errors can be resolved by correcting the format according to the rules to meet the specifications.

• Model-resolvable errors: When a multi-agent system handles complicated tasks, the ability of an agent to understand the input, make decisions, and deliver outputs mostly depends on the capability of LLMs. In some cases, the responses of LLMs are in the expected format, but the content has problems, such as argument errors, semantic errors, or programming mistakes. It is hard to have pre-defined rules to regularize those responses for diverse tasks, but it has also been shown that such errors may be detected and recovered through further interaction with the LLMs.

• Unresolvable errors: Eventually, there must be some errors that cannot be detected or solved. A typical example is that the API key of an LLM is expired or unauthorized. Neither the agents relying on it nor the system can do anything to resolve such errors without human intervention.

Fault Tolerance Mechanisms in AgentScope In AgentScope, we provide different mechanisms to counter the errors summarized above.

• Basic auto-retry mechanisms: To combat accessibility errors, AgentScope's API services and model wrappers are fortified with retry logic that developers can customize, such as setting the maximum retry count. This ensures that agents can recover from sporadic disruptions and maintain their operational continuity.

• Rule-based correction tools: The rule-based correction tools are introduced into AgentScope to efficiently and economically handle some easy-to-fix format errors in the responses of LLMs. For example, we establish a set of default rules in AgentScope that can complete unmatched braces and extract JSON data from strings. Such rule-based correction tools can correct some of the common rule-resolvable errors without calling LLM APIs again, which means shorter processing time and no LLM API call cost.

• Customizable fault handlers: AgentScope also integrates flexible fault-handler interfaces in model wrappers for developers to define how to parse the responses from LLMs and handle unexpected outputs.
Application developers can configure their fault handling mechanism by providing a parsing function, a fault handling function, and the number of chances given to the LLMs through configurable parameters (i.e., parse_func, fault_handler, and max_retries) when invoking LLMs. With such a developer-friendly design, AgentScope can be configurably robust to rule-resolvable errors (when the built-in rules fail to handle them) and some model-resolvable errors that can be detected and handled by a single agent (e.g., distilling a verbose summary into a more concise one).

• Agent-level fault handling: There are model-resolvable errors that require more advanced LLM usage or agent-level interaction to recover. For example, detecting semantic errors, which usually include factual inaccuracy, logical inconsistency, contextual incoherence, unreasonable inference, and inappropriate vocabulary usage, is challenging since they may not necessarily trigger immediate red flags within the system's existing validation processes. Developers can utilize the agents' abilities in AgentScope (e.g., the memory module and message hub) to perform semantic error checking through critique, such as self-critique, pairwise critique, and human-augmented critique.

• Logging system: Although unresolvable errors are too tricky for the system to handle, AgentScope provides an improved logging system for developers to quickly monitor and identify problems in multi-agent applications. The logging system in AgentScope has customized features for multi-agent application scenarios, including a dedicated logging level called CHAT for conversations between agents, formatted logs with various execution information, and a web UI to facilitate monitoring.

5 Multi-Modal Applications

The integration of multi-modal data is indispensable for advancing the capabilities and applications of multi-agent systems with LLMs. AgentScope is designed to seamlessly support various data modalities, leveraging the diverse inputs and outputs that contemporary LLMs can process and produce.

Figure 5: The generation, storage, and transmission of multi-modal data in AgentScope.

Management of Multi-Modal Data In a running AgentScope application, the lifecycle of multi-modal data is carefully managed. This management includes the generation, transmission, and storage of multi-modal data, all facilitated through a decoupled architecture using URLs and a local file manager system. Fig. 5 exemplifies this process, including data originating from user inputs or model generations, data storage and retrieval, and data sharing.

• Multi-modal data generation: There are two primary sources of multi-modal data in AgentScope. One source is simply the locally stored multi-modal files, which can be used by either user proxy agents or general agents with access to the local file system. The other source is the multi-modal content generation models. Our model APIs and model wrappers integrate the most popular multi-modal models, such as text-to-image content generation models like OpenAI's DALL-E and, conversely, image-to-text analysis models, e.g., GPT-4V. Besides the built-in APIs, developers can introduce their favorite multi-modal models and customize their own model wrappers, with our ready-to-use examples as starting points. This customization process is streamlined in AgentScope and benefits from our modular design, allowing developers to connect their multi-modal services with minimal effort.
• Multi-modal data storage: As mentioned above, multi-modal data in a multi-agent application can be either from ready-to-use local files or generated by multi-modal models. When a multi-modal model wrapper is invoked to generate multi-modal data, it first saves the data locally with the help of the file manager and returns a local URL once it receives the multi-modal data from the model API service.

• Multi-modal data transmission: AgentScope simplifies the process of multi-modal data sharing between agents by allowing agents to encapsulate local or remote URLs in multi-modal messages to indicate the actual storage locations of the data. The receiver agents can load the multi-modal data through the URLs when ready to process it. The benefits of introducing URLs in the messages when agents share multi-modal data are three-fold. Firstly, it minimizes the message size to avoid potential errors or delays caused by network bandwidth limits and enables the receiver agent to load the data on demand. Secondly, if there is other text information in the message, the downstream agents can potentially prioritize or parallelize the processing of the text information and the multi-modal information. Last but not least, such URL-attached messages also facilitate multi-modal data demonstration, which will be introduced in the following section.

Multi-Modal Interaction Modes With the implementation of URL-attached messages, AgentScope empowers users to interact with multi-modal systems via accessible interfaces such as the terminal and the web UI. Fig. 3 showcases the user's ability to interact with multi-modal data within these interaction modes. In the terminal, users can conveniently access locally stored data by activating the provided URLs. The web UI further enhances the user experience by providing an intuitive platform to view and analyze multi-modal content, aligning with the expectations of modern web applications.

Through AgentScope, developers are equipped to tailor model API services and wrappers to their individual needs, forge applications that handle diverse data modalities, and provide users with the necessary tools to engage with multi-modal agents effectively. This comprehensive support for multi-modal applications positions AgentScope as a versatile and powerful framework for harnessing the full potential of multi-agent LLMs, broadening the horizons for developers and researchers alike in creating sophisticated and interactive AI systems.

6 Tool Usage

Tool usage is an important feature for LLM-empowered agents, allowing agents to perceive and change their environment and to handle more complex tasks (Wu et al., 2023; Paranjape et al., 2023; Parisi et al., 2022). For simplicity, we treat using tools as equivalent to LLMs calling service functions. In AgentScope, the tool usage module is designed based on the ReAct algorithm (Yao et al., 2023), which allows for the generation of interleaved reasoning and task-specific actions, along with a core component, the service toolkit. This design features high compatibility, extensibility, robustness, and re-usability, spanning from function pre-processing, prompt engineering, reasoning, and response parsing to agent-level fault tolerance. Specifically, in AgentScope the tool usage involves four steps:

• Function Preparation: Parse the provided service functions and pre-process them so that LLMs can utilize them directly.

• Instruction Preparation: Prepare the instruction prompt for tool usage to elaborate the available tool functions to LLMs, including the purpose, arguments, and constraints of each function, and its calling format.
• Instruction Preparation: Prepare instruction prompt for tool usage to elaborate the available tool functions to LLMs, including the purpose, arguments, constraints of the function, and its calling format. • Iterative Reasoning: LLMs generate strategic reasoning, make decisions for tool usage, and respond in the required format. • Iterative Acting: Parse and check the LLM response according to the calling format, invoke functions if the response adheres to the expected format, or generate a detailed error message to LLMs for correction. In the above process, the service toolkit module is responsible for tool functions management, pre- processing, prompt engineering, response parsing, and function execution, and it is highly modular and extensible. Fig. 6 demonstrates how the service toolkit works in AgentScope when users post a query. Function Preparation. In function preparation, the target is to preset the developer-specific arguments, and to generate ready-to-use functions and their corresponding formatted description for LLMs. In AgentScope, developers only need to register their functions with preset arguments in the service toolkit. As shown in Fig. 6, developers choose the Bing search function and provide the API key during registration. Then the service toolkit will automatically generate the processed ready-to-use function and its description in JSON schema format. The descriptions will be used to generate tool instructions in natural language. Optionally, some model APIs (e.g., OpenAI and DashScope Chat API, etc.) can receive the JSON schema descriptions directly, which we will discuss in Sec. 6.1. Instruction Preparation For novice developers, the service toolkit builds in templates for tool instruction and calling format for tool usage, as demonstrated in Fig. 6. The tools instruction template lists each function with a clear description and the parameters it requires, leading to an easy understanding of their functionalities. On the other hand, the calling format, as demonstrated in Fig. 6, requires a JSON dictionary in a Markdown fenced code block with thought, speak, and function fields. During LLM generation, we expect the thought field will provide a reasoning process for the next acting, including analyzing the current situation, selecting candidate functions, and correcting errors. Iterative Reasoning In AgentScope the reasoning and acting steps are iterative. As stated above, in the reasoning step LLMs should analyze the current situation and decide the next actions. Developers only need to construct prompts with the tool instructions and the calling format instructions and feed them into the LLMs. Such design provides high reusability and flexibility, that is, the service toolkit is task-independent and can be adapted to different tasks and scenarios very easily. 15 Figure 6: The ReAct-based tool usage module in AgentScope. Iterative Acting In the acting step, the service toolkit will parse the LLM response according to the calling format, extract the selected function, and execute it with the corresponding arguments. If the response conforms to the format requirements, and the function executes successfully, the service toolkit will return the execution results directly, which LLM can generate a response based on in the next reasoning step. Otherwise, we break down errors into response parsing errors, function execution errors, and other runtime errors. 
For response parsing and function execution errors, we expose them to LLM with detailed error information for correction in the next reasoning-acting iteration, leaving the other runtime errors to developers. 6.1 Customization for Experienced Developers AgentScope supports developers in highly customizing their tool instructions and function calling formats. To customize tool instruction, the service toolkit in AgentScope provides JSON schema descriptions automatically, which provides a structured way to elaborate how a function should be called, including its name, purpose, arguments, and other relevant details. These formatted descriptions can be directly fed into some advanced model APIs, e.g. OpenAI and DashScope Chat APIs. For users who want to deeply customize their tool instructions, they can construct instructions based on the JSON schema descriptions. Besides the tools instruction, AgentScope also provides great flexibility, that is, AgentScope provides various model response parsers, including Markdown fenced code blocks, JSON object code blocks, and customizable tagged contents, as demonstrated in Fig. 7. For the users who want to customize the function calling format, the Markdown fenced code blocks and JSON object code block allow them to quickly construct the format instruction and parse the LLM response according to the content types. For users who want to obtain multi-fields from LLMs, the multi-tagged contents allow the developers to combine different tagged contents at will and extract them easily from the response into a Python dictionary. With these parsers, developers are able to customize their own calling format easily. 16 Figure 7: Parsers in AgentScope. 7 Agents with Retrieval-Augmented Generation With the growing applications of LLMs, some circumstances require knowledge that is not contained in the training data set, for example, knowledge in highly professional domains or not publicly available. Even given the required datasets, the fine-tuning or re-training of the LLMs is still expensive. Accordingly, retrieval-augmented generation (RAG), an innovative approach that aims to boost the power of LLMs in customized knowledge domain (Gao et al., 2023; Lewis et al., 2020b), is gaining increasing attention in the literature. The methodology of RAG can be considered as inserting a pre-processing step into the common utilization pipeline of LLMs. That is, given a collection of documents that contains needed knowledge, a similarity-based index is built, and the original user input is zipped with the most relevant pieces of information and converted into prompts, then fed to the LLMs. Therefore, the methodology of RAG involves multiple phases, that is, the collection of documents that contain the necessary information, the segmentation of the documents, the indexing of the segments (a.k.a. chunks or nodes), the similarity-based index retrieval, the fusion of the original query (i.e. user input) and retrieved results, the composing of prompts, and lastly, generation of reasonable responses from the LLM based on the informative prompts. In short, RAG embraces both the power of information retrieval and the generative capabilities of LLMs, and provides enhanced LLM service with customized domain knowledge at low cost. Meanwhile, assisted by RAG, the hallucination could be avoided and the factual accuracy could be significantly improved. As a developer-oriented multi-agent platform, AgentScope provides comprehensive RAG support for multi- agent applications. 
Given popular RAG frameworks such as LlamaIndex (Liu, 2022) and LangChain (Langchain-AI, 2023), AgentScope is designed with highly flexible abstracted processes to be compatible with those frameworks. In what follows, we introduce several key features of AgentScope RAG.

Configurations in One Stop Due to the complexity of the working pipeline, the configuration of RAG services is highly convoluted and often a headache for users. While the RAG service provided by AgentScope is comprehensive and also involves multi-agent workflows, AgentScope provides a simple one-stop configuration solution by using a single .json file to group all RAG-related configurations. With this highly systematized configuration interface, users only need to focus on constructing the workflow, without being distracted by repetitious configurations. For example, RAG-empowered agents may involve a wide collection of knowledge bases that need to be configured in detail. With this "one-stop" feature, the corresponding adjustments of the modules (which may lead to different performance) are integrated as edits to one single file. Moreover, this solution also naturally adapts to the AgentScope Workstation, in which the dialog-box-based configuration can easily be exported to executable files and later loaded in Python programs.

Knowledge-Oriented Data Management The application of RAG in multi-agent circumstances is more complicated compared with its application to a single agent. For example, for a single agent, one can directly encapsulate the needed knowledge into the agent. Therefore, the initialization of each RAG agent involves the whole pipeline of conversion from the original documents to vector-stored indexes with retrievers. However, in multi-agent applications, it is natural for agents to share knowledge, so repeatedly executing index computation for each agent is unnecessary. Therefore, AgentScope introduces the notion of knowledge banks. A knowledge bank can be considered a collection of knowledge containers, where the smallest manageable unit is a customized object (referred to as a "RAG object" in the following context). The workflow starts with initializing the knowledge bank, which mainly relies on the information contained in the .json configuration file. This information includes the directory and extensions (such as .py or .md) of the documents, the granularity and choice of segmentation tools (e.g., the splitters in LlamaIndex) for the documents, and the choice of model for indexing. After the initialization, the computed results are persisted to the designated directory for later use, and we also obtain a knowledge bank consisting of RAG objects, each marked with a unique knowledge_id and associated with the index of the corresponding documents, an information retriever, and other attributes. Note that AgentScope permits each RAG agent to load more than one RAG object.

Agents with RAG Using agents with RAG in AgentScope is very simple. For example, we first need to initialize a KnowledgeBank with some RAG framework, e.g., LlamaIndex, and all the documents. Then, we configure a RAG agent and load it with the knowledge bank. After that, the initialization is completed, and we can use the RAG agent like any other agent in AgentScope. It is worth noting that if the KnowledgeBank is obtained with the LlamaIndex framework, then we need to use LlamaIndexAgent (inherited from RAGAgentBase).
Readers may refer to Section 9.5 for a concrete application example, which implements a copilot for AgentScope using our RAG agents. Overall, the key features of RAG agents are summarized as follows:

• A RAG agent may load several RAG objects (i.e., any subset of the knowledge bank). One can choose to load the original RAG objects from the knowledge bank (in which case modifications to an object affect all agents that use it) or a copy of them.

• While agents are initialized with a KnowledgeBank object, they may also update their knowledge over time. The supported operations include inserting, deleting, and replacing knowledge pieces. Moreover, we provide a solution that monitors designated directories and keeps the RAG objects updated with their contents.

• The fusion mechanism for results retrieved from multiple RAG objects is fully customizable. For example, since knowledge sources may differ in importance or trustworthiness, the agent can assign weights to information retrieved from different RAG objects for subsequent processing (see the sketch after this list).

• RAG agents may recompose the query a configurable number of times and conduct multiple retrievals to produce more comprehensive answers.
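To illustrate what a customized fusion mechanism might look like, the sketch below merges the chunks retrieved from several RAG objects using per-source trust weights; the retrieve method, its return format, and the source_weights mapping are assumptions introduced for this example rather than an AgentScope interface.

from typing import Any

def weighted_fusion(
    rag_objects: list[Any],
    source_weights: dict[str, float],  # knowledge_id -> trust weight
    query: str,
    top_k: int = 5,
) -> list[dict]:
    """Merge retrieved chunks from several RAG objects by weighted score."""
    scored = []
    for rag in rag_objects:
        w = source_weights.get(rag.knowledge_id, 1.0)
        # Assumed retriever interface: returns [{"text": ..., "score": ...}]
        for chunk in rag.retrieve(query, similarity_top_k=top_k):
            scored.append({
                "knowledge_id": rag.knowledge_id,
                "text": chunk["text"],
                "score": w * chunk["score"],  # scale by source trust
            })
    # Keep the globally best chunks across all sources
    return sorted(scored, key=lambda c: c["score"], reverse=True)[:top_k]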
8 Actor-based Distributed Framework

Efficiency and extensibility are essential when building industry-level applications on multi-agent systems. The inference speeds of the agents in a multi-agent application may vary dramatically: for example, if an agent in a multi-modal application employs a text-to-video model, its response time may be significantly longer than that of an agent designed to fill in the details of stories. Parallelization, as a classic idea, should be introduced to boost efficiency. Besides, multi-agent applications can comprise agents physically distributed across different machines. A typical use case is a company that wraps its patented techniques or private knowledge bases into an agent on its local machines connected to the internet and provides autonomous services to other entities via agent interactions. However, when it comes to multi-agent systems, a challenge is that developers need to choose between the following two pairs of technology roadmaps. As there is no free lunch, any combination has its benefits and drawbacks.

• Centralized vs. decentralized coordination. In the context of distributed systems, centralized coordination means multiple computation nodes are managed by a central node, as in the server-client model. A multi-agent mechanism with centralized coordination means that the execution of the agents is scheduled by, and the messages between agents are forwarded by, a central coordination component. On the contrary, decentralized coordination does not rely on any central component to schedule agents or forward messages; instead, the agents in such a system can be invoked automatically and send messages directly to the downstream agents for further processing. While centralized coordination is a straightforward style that is easy to understand and debug, its disadvantages include vulnerability to central-node failures, heavy traffic imposed on the central node, and difficulty in scaling or extending to complicated applications. In contrast, decentralized coordination may require extra effort to develop and maintain, but it is more robust against the failure of any single node.

• Static vs. dynamic workflow design. A similar comparison can be found between the static computational graph employed in early versions of TensorFlow (Abadi et al., 2016) and the dynamic computational graph used in PyTorch (Paszke et al., 2019). In the context of multi-agent applications, the choice between a static and a dynamic workflow is akin to choosing between pre-compiled and interpreted execution. A static workflow design enables optimization at the workflow-graph level for running time and resource allocation. However, it requires the workflow graph to be known before execution, which limits its applicability, especially for applications with loop structures in their design. In contrast, dynamic workflows offer greater flexibility at the expense of optimization potential. This is particularly relevant when dealing with large language models, where execution paths can change based on the input data or model inference results.

Figure 8: An example of a distributed application in AgentScope, illustrating various processes as denoted by different colors.

Distributed mode in AgentScope. AgentScope balances these technology roadmaps by implementing an actor-based distributed mode that is mindful of the unique needs of multi-agent LLM systems, with the following important features:

• Automatic parallel optimization without static graphs. AgentScope leverages the actor model to enable automatic parallel optimization, allowing developers to circumvent the intricacies of static-graph programming. This approach seamlessly aligns with the dynamic and often unpredictable nature of LLMs, where the computational graph can change based on evolving contexts and dialogue states.

• Programming workflows with minimal complexity. In contrast to traditional actor models and peer-to-peer (P2P) implementations that require intricate execution ordering for distributed agents, AgentScope simplifies workflow programming to a single procedural style within a Python function. This design significantly flattens the learning curve for developers, making the construction of sophisticated multi-agent LLM applications more accessible.

• Hybrid local and distributed agent support. AgentScope's flexibility extends to supporting a hybrid mode in which some agents operate locally while others are distributed. This feature is particularly beneficial when integrating LLMs with varying computational requirements, allowing resource-intensive models to be distributed while less demanding agents remain local, all without the developer needing to differentiate between the two during implementation.

Specifically, AgentScope incorporates the actor model as follows. In this conceptual framework, an "actor" is a stand-alone entity that performs computation upon receipt of all necessary messages. This paradigm ensures that each agent, corresponding to an actor, only engages in computation once the required input messages are ready, thus achieving automatic parallel optimization. However, the actor-model-based workflow presents a programming challenge: the variables (i.e., messages) passed between actors (i.e., agents) may be placeholders without any practical meaning at the beginning. To alleviate this, AgentScope introduces the "placeholder" message, a novel data structure that allows the main process to continue without blocking while preserving the information necessary to retrieve the real values later (Fig. 8).
This mechanism is particularly advantageous for multi-agent LLM systems, where the execution flow must adapt to the variable outputs of language models.

# set up distributed agents: host_agent, agent2, agent3
...

input_msg = Msg("system", "Which agent should respond next, agent2 or agent3?")

# the variable choice is a placeholder
choice: placeholder = host_agent(input_msg)

if choice["content"] == "agent2":
    response = agent2()
elif choice["content"] == "agent3":
    response = agent3()

Example 9: Demonstrating the use of placeholders in control flow within AgentScope.

Another series of challenges arises when placeholders are used within control-flow statements (e.g., if-else, loops) without their real values. An example is shown in Example 9, where a placeholder is required to make a decision. In these circumstances, AgentScope temporarily blocks the process to retrieve the placeholder's actual value, thus ensuring the continuity of the control flow.

The actor-based distributed mode in AgentScope not only provides automatic parallel optimization and simplifies the developer experience, but also demonstrates high efficiency for distributed multi-agent LLM applications. It enables developers to focus on implementing agent logic, particularly the "reply" function, without concern for the underlying distributed complexities. This streamlined approach to distributed multi-agent systems can advance the field of LLMs by making it easier to develop, run, and debug sophisticated and scalable multi-agent architectures.

One-click deployment in AgentScope. To further ease distributed deployment, AgentScope provides an agent server and a unified message center named AgentScope Studio. Specifically, the agent server is hosted on remote machines; it receives requests from AgentScope applications and automatically initializes the required agents on the machine where it is deployed. This means developers can set up agent instances remotely without programming on different machines, which provides high flexibility, especially for large-scale simulations where a large number of agent instances are set up on remote machines. AgentScope Studio provides a unified display interface for distributed multi-agent applications: messages from all distributed agents are gathered and displayed in the studio, and developers can forward these messages to their own display interfaces. Besides, AgentScope Studio supports agent server management, that is, developers can inspect the deployment of distributed agents and open or close agent servers remotely from the studio. With this studio, developers can manage their applications much more easily.

9 Signature Applications of AgentScope

As introduced in the previous sections, AgentScope is a multi-agent platform carefully designed for integrating and coordinating large-scale models in a user-friendly and fault-tolerant manner, making it an ideal platform for a vast spectrum of applications. AgentScope can implement applications spanning from a simple single-agent vs. user dialog to complicated interactive multiplayer role-play games like werewolf. Moreover, beyond centralized deployments, AgentScope extends to distributed conversations that involve parallel operations across multiple machines. In this section, we look into several signature applications of AgentScope that illustrate the framework's diverse capabilities.
All the examples referenced herein are accessible in our GitHub repository for community use and contribution.

9.1 Dialog Agents: Basic Conversation

The simplest yet most fundamental application of AgentScope is the basic conversation, in which the user directly interacts with a dialog agent. This application is an excellent starting point for new users of AgentScope to quickly grasp the core message-passing mechanism of our framework. The basic conversation example demonstrates the usage of two fundamental built-in agents in AgentScope, UserAgent and DialogAgent, which handle inputs from the user and responses from LLMs, respectively. Normally, as illustrated in Example 10, the first step of every application is initialization, i.e., loading the model configurations (specified in the model_configs.json file; a hedged sketch of what such a file may contain is shown just below) through the init interface of AgentScope, which assigns the selected models to the LLM-empowered agents. Currently, AgentScope is compatible with various platforms and APIs, including but not limited to the standard OpenAI chat/embedding/DALL-E APIs, HuggingFace, ModelScope, and a collection of locally hosted models served with FastChat, vLLM, and Flask. Moreover, the init interface also specifies detailed options such as file storage, logging, agent configurations, etc. With all the configurations settled, we are ready to construct the conversation flow, i.e., the message-exchanging mechanism between the user and the agents, which is an essential building block for all agent-based applications. In this workflow, the agent responds to each of the user's inputs, and the conversation loops until the user decides to opt out. To implement more sophisticated applications, AgentScope provides pipelines, which offer a well-structured and scalable framework for complex agent interactions (in terms of messages). As illustrated in Example 11, we can implement the basic conversation example with a sequential pipeline or a while-loop pipeline. Readers may also refer to Appendix A for the conversation history produced by running the demo code.
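The contents of model_configs.json are not reproduced in the paper, so the following is a hedged sketch of what one entry in such a file may look like; the field names (config_name, model_type, model_name, api_key) are assumptions inferred from the examples below, not the definitive schema.

[
    {
        "config_name": "gpt-4",
        "model_type": "openai_chat",
        "model_name": "gpt-4",
        "api_key": "<your-openai-api-key>"
    }
]

Here config_name would be the identifier that agents reference (e.g., model="gpt-4" in Example 10), while model_type would select the backend API.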
json " ) 6 7 # Create a dialog agent and a user agent 8 ass istant_agent = DialogAgent ( 9 name = " Assistant " , sys_prompt = " You are a helpful assistant " , model = " gpt -4 " 10 11 12 ) 13 user_agent = UserAgent () 14 15 # Basic version 16 x = None 17 while x is None or x . content != " exit " : 18 x = assistant_agent ( x ) x = user_agent ( x ) 19 Example 10: Code example of the basic conversation example. 1 # Advanced version with sequential pipeline 2 from agentscope . pipelines . functional import s eq u en t ia l pi pe l in e 3 x = None 4 while x is None or x . content != " exit " : 5 x = se q ue n ti a lp i pe l in e ([ dialog_agent , user_agent ] , x ) 6 7 # Advanced version with while loop pipeline 8 from agentscope . pipelines . functional import whi lel oop pip eli ne 9 x = w hil el oop pip eli ne ( 10 [ assistant_agent , user_agent ] , condition_func = lambda _ , x : x is None or x . content != " exit " , x = None ) Example 11: Pipeline-based implementation of the basic conversation example. 11 12 9.3 Dialog Agents: The Werewolf Game Group conversation and the mentioning feature are fundamental building blocks for multi-agent applications. Here we present a more sophisticated application, the werewolf game, which is a popular multiplayer interactive role-play game. We aim to implement the game with AgentScope in only one hundred lines of code. This example involves six players divided into two opposing teams, the werewolves, and the villages. After rounds of conversations and discussions, the game ends when all werewolves are eliminated (i.e. villager victory) or the number of werewolves equals or outnumbers the villagers (i.e. werewolf victory). As an LLM-empowered role-play game, we start the game settings with allocation for the roles and initialization for the agents. As shown in Example 13, AgentScope supports a quick setup, which consists of default agent configurations for a user to instantiate the agent objects with corresponding roles in one click, the detailed settings are included in the agent_configs.json file. It is worth noting that the werewolf game is based on the group conversation capability of AgentScope, such that the werewolves could chat in the “night phase” and all participants could discuss during the “day phase”. Similar to the group conversation example, the message hub (msghub) of AgentScope is used to facilitate the conversations. As shown in Example 13, after the host (moderator) makes an announcement, the werewolves discuss for at most MAX_WEREWOLF_DISCUSSION_ROUND rounds and conclude once an agreement is reached. Here, the agents are required to use an “agreement” attribute in the response message, which is enforced in the role-defining prompt. For complete workflow, an example of dialogue history, and more related information, please refer to Appendix B. 22 1 import agentscope 2 3 # Read model and agent configs , and initialize agents automatically 4 npc_agents = agentscope . init ( 5 model_configs = " ./ configs / model_configs . json " , agent_configs = " ./ configs / agent_configs . json " , 6 ) 7 8 user = UserAgent () 9 agent = list ( npc_agents ) +[ user ] 10 ... 11 # We use msghub to coordinate the conversations , ‘‘ hint ’’ is a message notified ↰ to all agents 12 with msghub ( agents , announcement = hint ) : 13 while True : 14 try : 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 x = user ( timeout = U SE R _T I ME _ TO _ SP EA K ) if x . 
content == " exit " : break except TimeoutError : x = { " content " : " " } logger . info ( f " User has not typed text for " f " { U S ER _ TI M E_ T O_ S PE AK } seconds , skip . " , ) # if user mentions any npc_agent in the message , it will be added to the ↰ speak_list speak_list += filter_agents ( x . get ( " content " , " " ) , npc_agents ) # if the speak_list is non - empty , the mentioned agents will respond in a ↰ sequential manner if len ( speak_list ) > 0: next_agent = speak_list . pop (0) x = next_agent () # otherwise , all agents will respond one by one . else : next_agent = select_next_one ( npc_agents , rnd ) x = next_agent () # if the response mentions any agent , it will be added to the speak_list speak_list += filter_agents ( x . content , npc_agents ) Example 12: Code example of the group conversations. 9.4 Distributed Deployed Agents We have seen applications regarding conversations involving dialog agents, but those examples are fundamental in the sense that the agents are deployed in a centralized manner, that is, the agents are hosted on a single machine and in a single process. To allow agents to be hosted by separate machines or processes, AgentScope allows agents to be distributedly deployed in two modes, the single-machine multi-process mode, and the multi-machine multi-process mode. In what follows, we present examples to demonstrate this feature. Single-Machine Multi-Process Mode: For this mode, all agents are deployed on a single machine, but running in separate processes. For better comparison, we implement the basic conversation example in this mode (see Example 14 for the complete code). Compared with Example 10 and 11, we use the to_dist function to convert the current agent instance into a distributed version. Then, the assistant_agent would be deployed on a local host with an automatically allocated port. Besides the aforementioned differences, the single-machine multi-process mode is identical to local deployment, yet it has been optimized for parallel execution. Multi-Machine Multi-Process Mode: To demonstrate this mode, we initiate the agent service (a DialogAgent) on a remote machine (as shown in Example 15), and constructs a workflow (as shown Example 16). One may note that the only difference comparing to the local deployed mode is that the agent server needs to be connected using specified URLs and ports before establishing the workflow. Overall, for AgentScope, we can smoothly convert from the local deployment mode to the distributed 23 1 import agentscope 2 # Read model and agent configs , and initialize agents automatically 3 survivors = agentscope . init ( 4 model_configs = " ./ configs / model_configs . json " , agent_configs = " ./ configs / agent_configs . json " , 5 6 ) 7 8 # Define the roles within the game . 9 roles = [ " werewolf " , " werewolf " , " villager " , " villager " , " seer " , " witch " ] 10 11 # Based on their roles , assign the initialized agents to variables . 12 wolves , villagers , witch , seer = survivors [:2] , survivors [2: -2] , survivors [ -1] , ↰ survivors [ -2] 13 ... 14 # Night phase : werewolves discuss 15 hint = HostMsg ( content = Prompts . to_wolves . format ( n2s ( wolves ) ) ) 16 with msghub ( wolves , announcement = hint ) as hub : 17 ... for _ in range ( M A X _ W E R E W O L F _ D I S C U S S I O N _ R O U N D ) : x = se q ue n ti a lp i pe l in e ( wolves ) if x . agreement : break ... Example 13: Code example of the werewolf game. 18 19 20 21 22 1 from agentscope . 
from agentscope.agents import UserAgent, DialogAgent
import agentscope

# we use .to_dist() to convert the agent to distributed mode
assistant_agent = DialogAgent(
    name="Assistant",
    sys_prompt="You are a helpful assistant",
    model="gpt-4",
).to_dist()
user_agent = UserAgent()

x = None
while x is None or x.content != "exit":
    x = sequentialpipeline([assistant_agent, user_agent], x)

Example 14: Example that deploys agents in single-machine multi-process mode.

from agentscope.agents.rpc_agent import RpcAgentServerLauncher
from agentscope.agents import DialogAgent

# load model configurations
agentscope.init(model_configs="configs/model_configs.json")

# set up the server for the remote agent
server_launcher = RpcAgentServerLauncher(
    agent_class=DialogAgent,
    agent_kwargs={
        "name": "Assistant",
        "sys_prompt": "You are a helpful assistant.",
        "model": "gpt-4",
    },
    host="xxx.xxx.xxx.xxx",
    port=12010,
)

# start the server
server_launcher.launch()
server_launcher.wait_until_terminate()

Example 15: Deploying a remote agent in multi-machine multi-process mode.

agentscope.init(model_configs="configs/model_configs.json")

assistant_agent = DialogAgent(
    name="Assistant",
    model="gpt-4",
).to_dist(
    host="xxx.xxx.xxx.xxx",  # The target URL of the agent server
    port=12010,              # The target port of the agent server
    launch_server=False,     # Use the remote agent server
)
user_agent = UserAgent()

x = None
while x is None or x.content != "exit":
    x = sequentialpipeline([assistant_agent, user_agent], x)

Example 16: Setting up sub-processes for agents in multi-machine multi-process mode.

9.5 RAG Agents: AgentScope Copilot

As introduced in Section 7, retrieval-augmented generation (RAG) allows developers to fully utilize the language generation capability of LLMs together with a customized knowledge pool. Accordingly, AgentScope introduces RAG agents to provide this functionality. In the following example (shown in Example 17), we show how to use a collection of LlamaIndex-based RAG agents (i.e., LlamaIndexAgent, inherited from RAGAgentBase) to build a multi-agent copilot for AgentScope. We first initialize the agents. Beyond the customized personalities and behavioral styles configured through the system prompts, the most important feature of RAG agents is that each agent is loaded with external knowledge, which is specified in the agent_configs file containing configuration information such as the data storage directory, targeted file types, document chunking settings, the indexing and embedding settings, etc. The workflow of the copilot is designed as follows: the user first inputs a message; if the user mentions specific RAG agents as we defined, the corresponding agents respond; otherwise, the guide_agent decides the most suitable agent to respond to the query. Due to space limits, we present only simplified code here; please refer to the repository and documentation for more details.
import agentscope
from agentscope.agents import UserAgent, DialogAgent, LlamaIndexAgent
...
# initialize agentscope with model configurations
agentscope.init(model_configs="configs/model_configs.json")

# initialize the RAG agents based on different configurations
tutorial_agent = LlamaIndexAgent(**agent_configs[0]["args"])
code_agent = LlamaIndexAgent(**agent_configs[1]["args"])
api_agent = LlamaIndexAgent(**agent_configs[2]["args"])
search_agent = LlamaIndexAgent(**agent_configs[3]["args"])
...
# initialize a basic dialog agent as the "front-desk assistant" and a user agent
guide_agent = DialogAgent(**agent_configs[4]["args"])
user_agent = UserAgent()
...
while True:
    x = user_agent()
    # the workflow terminates when the user inputs nothing or "exit"
    if len(x["content"]) == 0 or str(x["content"]).startswith("exit"):
        break

    # find the agents mentioned in the user's input
    speak_list = filter_agents(x.get("content", ""), rag_agent_list)
    if len(speak_list) == 0:
        # if no agent is mentioned, the guide agent decides which one to call
        guide_response = guide_agent(x)
        speak_list = filter_agents(
            guide_response.get("content", ""),
            rag_agent_list,
        )

    # agents called by the guide agent are recorded
    agent_name_list = [agent.name for agent in speak_list]

    # the listed agents respond to the query in turn
    for agent_name, agent in zip(agent_name_list, speak_list):
        if agent_name in rag_agent_names:
            agent(x)

Example 17: Using RAG agents to build a copilot for AgentScope.

9.6 Web Search and Retrieve Agents

We have shown examples of agents that generate responses through the capability of an LLM (DialogAgent) and through information retrieved from external knowledge libraries (LlamaIndexAgent). Nevertheless, we can also utilize internet resources to build agents, as introduced in the following example. As presented in Example 18, the initialization involves three types of agents: the UserAgent that takes user inputs, the SearcherAgent that converts the user's questions into keywords and calls a search engine to retrieve web pages from the internet, and the AnswererAgent that extracts information from the web pages to compose answers. It is worth noting that a large number of web pages may be returned by the search. In the standard single-process mode, multiple AnswererAgent instances can only browse web pages and answer questions sequentially on a single machine. For better efficiency, it is beneficial to run multiple AnswererAgent instances in parallel, i.e., in the multi-machine multi-process mode of AgentScope agents.
import agentscope
from searcher_agent import SearcherAgent
from answerer_agent import AnswererAgent
from agentscope.agents.user_agent import UserAgent

agentscope.init(model_configs="configs/model_configs.json")

# we can perform multiple searches at one time
WORKER_NUM = 3
searcher = SearcherAgent(
    name="Searcher",
    model_config_name="my_model",
    result_num=args.num_workers,
    search_engine_type=args.search_engine,
    api_key=args.api_key,
    cse_id=args.cse_id,
)

# instantiate the answerer agents
answerers = []
for i in range(args.num_workers):
    answerer = AnswererAgent(
        name=f"Answerer-{i}",
        model_config_name="my_model",
    )
    # if we want to put agents in distributed (parallel) mode
    if args.use_dist:
        answerer = answerer.to_dist(lazy_launch=False)
    answerers.append(answerer)

user_agent = UserAgent()

msg = user_agent()
while not msg.content == "exit":
    msg = searcher(msg)
    results = []
    for page, worker in zip(msg.content, answerers):
        results.append(worker(Msg(**page)))
    for result in results:
        logger.chat(result)
    msg = user_agent()

Example 18: Utilizing web search and retrieval agents.

9.7 ReAct Agents: Convert Natural Language to SQL Query

Natural language to SQL query (NL2SQL) is a classical yet challenging task in both the database and natural language processing communities; it aims to convert human questions posed in natural language into SQL queries. A collection of works in the research community explores the potential of LLMs for NL2SQL, and it is interesting to explore this task with LLM-empowered agents. In AgentScope, we provide a special class of agents, the ReAct (reasoning and acting) agents. More specifically, we can create new service functions for the ReAct agents and the corresponding LLMs using the ServiceToolkit module. In this example, we equip the ReAct agent with a state-of-the-art NL2SQL algorithm, DAIL-SQL. As the first step (shown in Example 19), we initialize the model configuration and the SQL database, providing the corresponding database path in SQLite format; here we generate the SQLite file from the provided SQL commands, though a .sqlite file can also be used directly. Then, as shown in Example 20, we define the tools with which the ReAct agent executes SQL queries: the agent should be able to generate a SQL query from the natural language input and execute that query to obtain the result. We adopt the third-party text-to-SQL tool DAIL-SQL to generate the text-to-SQL prompt, and we use the query_sqlite service function from the agentscope.service module. Finally, we initiate the ReAct agent with the defined tools and interact with it, as shown in Example 21.
import agentscope
from sql_utils import create_sqlite_db_from_schema
...
agentscope.init(model_configs="configs/model_configs.json")
create_sqlite_db_from_schema(db_schema_path, db_sqlite_path)
...

Example 19: Initializing the model configuration and the SQLite database for the NL2SQL example.

from typing import Callable

from agentscope.service import (
    ServiceResponse,
    ServiceExecStatus,
    ServiceToolkit,
    query_sqlite,
)
from sql_utils import DailSQLPromptGenerator

def generate_sql_query(question: str, db_path: str, model: Callable) -> ServiceResponse:
    prompt_helper = DailSQLPromptGenerator(db_path)
    prepared_prompt = prompt_helper.generate_prompt({"content": question})

    def get_response_from_prompt(prompt: dict, model: Callable) -> str:
        ...

    sql_response = get_response_from_prompt(
        prepared_prompt["prompt"], model=model,
    )
    return ServiceResponse(ServiceExecStatus.SUCCESS, sql_response)

# Use the service toolkit to set up tool functions for LLMs
service_toolkit = ServiceToolkit()
service_toolkit.add(generate_sql_query, db_path=db_sqlite_path, model=loaded_model)
service_toolkit.add(query_sqlite, database=db_sqlite_path)

Example 20: Defining the tool functions with which the ReAct agent generates and executes SQL queries.

from agentscope.agents import ReActAgent

agent = ReActAgent(
    name="assistant",
    model_config_name="gpt-4",
    service_toolkit=service_toolkit,
    sys_prompt="You are a helpful agent that performs SQL queries based on natural language instructions.",
    verbose=True,  # set verbose to True to show the reasoning process
)
...
mss = Msg(
    name="user",
    content="How many singers do we have?",
    role="user",
)
logger.chat(mss)

sql_query_mss1 = agent(mss)
...

Example 21: Initiating the ReAct agent and interacting with it in natural language.

9.8 AgentScope Workstation

AgentScope provides a convenient and user-friendly development kit in the form of drag-and-drop windows, the Workstation. With this kit, implementing AgentScope applications comes at low cost: entry-level developers, or those without any programming experience, can easily develop their own applications by simply dragging agent-related modules and connecting them in a straightforward way. For example, as shown in Fig. 9, we implement the basic conversation example in the Workstation. No code needs to be written: one simply types the configurations, such as detailed settings and APIs, into the corresponding windows, links the windows to build the dependencies and connections, and with one click the Workstation gets the application ready for launch automatically. Meanwhile, the Workstation also introduces static checking rules to ensure the correctness of the configurations. AgentScope Workstation also provides comprehensive support for advanced developers: they can export the module configurations as .json files and execute them with the AgentScope Workstation engine. Alternatively, one can use the AgentScope Workstation compiler to convert all configurations into Python code for further editing or development, implementing more customized adjustments.

10 Related Works

The development of AgentScope aligns with the rapidly evolving landscape of frameworks that leverage large language models (LLMs) for the creation of language agents and multi-agent systems. Here, we briefly introduce works closely related to AgentScope from two pertinent sub-domains: language agent frameworks, which focus on individual agent capabilities, and multi-agent frameworks, which emphasize collaboration among multiple agents. For broader related works, readers can refer to (Wang et al., 2023; Xi et al., 2023).

Language Agent Frameworks Language agent frameworks are pivotal for developing applications that can interpret and interact using human language. The Transformers library (Huggingface, 2023) has introduced a natural language API to interface with transformer models in its recent updates (Transformers-Agents). This API utilizes a set of customizable tools, allowing the model to interpret instructions and generate code snippets accordingly. It offers support for various open-source and proprietary model endpoints, catering to diverse developer needs. LangChain (Langchain-AI, 2023) provides a framework for building applications that are context-aware and capable of reasoning. It includes libraries and templates that facilitate the integration of multiple components into a unified cognitive architecture. LangServe and LangSmith extend the framework's capabilities by enabling deployment as a REST API and offering developer tools for debugging and monitoring chains built on any LLM framework. AutoGPT (AutoGPT-Team, 2023) illustrates a different approach, allowing an LLM to iteratively execute actions and make decisions.
As a generalist agent, AutoGPT is not task-specific; it is designed to perform a variety of computer-based tasks, reflecting the adaptive nature of LLMs. ModelScope-Agent (Li et al., 2023a) is a customizable agent framework that harnesses open-source LLMs to perform tasks and connect with external APIs. It facilitates seamless integration with model APIs and common APIs while providing a comprehensive infrastructure for data collection, tool retrieval, and customized model training, all aiming at practical real-world applications.

Multi-Agent Frameworks Building on the capabilities of individual agents, multi-agent frameworks explore collaboration and interaction among multiple agents to address complex tasks. AutoGen (Wu et al., 2023) provides a generic infrastructure that allows developers to program interaction patterns using both natural language and code. This framework enables the development of diverse applications by facilitating conversation among agents that are customizable and can utilize various combinations of LLMs, human inputs, and tools. MetaGPT (Hong et al., 2023) incorporates meta-programming to enhance multi-agent collaboration. By encoding Standardized Operating Procedures (SOPs) into prompts, this framework ensures streamlined workflows and reduced errors, exemplifying effective task decomposition among agents. AGENTS (Zhou et al., 2023) is an open-source library that supports autonomous language agents with features like planning, memory, and multi-agent communication. It is designed to be user-friendly, helping non-specialists deploy state-of-the-art language agents, and research-friendly, with a modularized design for extensibility. OpenAgents (Xie et al., 2023) provides an open platform for using language agents with practical functionalities accessible through a web interface. This framework emphasizes facilitating real-world agent interactions and includes specialized agents for different tasks, such as data analysis and web browsing. ChatDev (Qian et al., 2023) exploits LLMs for software development, creating a virtual chat-powered company that follows a waterfall model. It engages "software agents" at different stages of the development process, facilitating collaboration and context-aware communication. CAMEL (Li et al., 2023b) proposes a novel framework for autonomous cooperation among communicative agents using role-playing techniques, which allows the generation of conversational data for studying agent behaviors and capabilities. Lastly, AgentSims (Lin et al., 2023) introduces a sandbox environment for evaluating LLMs in task-based scenarios, offering an infrastructure for researchers to test specific LLM capacities within a simulated environment. These frameworks represent significant strides in the use of LLMs for both individual and collaborative agent tasks.
AgentScope is situated within this context, contributing by addressing the need for a user-friendly, fault-tolerant, and versatile framework designed to manage the complex interactions and processes inherent in multi-agent LLM systems. By focusing on ease of use and reliability, AgentScope aims to facilitate the creation of robust and versatile applications across diverse domains.

Figure 9: Workstation generates workflow configuration and Python code.

11 Conclusion

In this work, we propose AgentScope, a platform that stands at the forefront of multi-agent system development, synergizing user-centric design with the advanced capabilities of LLMs. Through its innovative communication and distribution mechanisms, AgentScope demonstrates its potential to boost collaboration among agents, enabling efficient, fault-tolerant operations and multi-modal interactions. By abstracting complexities and offering an array of development utilities, AgentScope substantially lowers the barriers to entry, fostering a more inclusive and creative community of developers. Looking forward, AgentScope opens numerous avenues for further research and development. Future work could delve into deeper integration of retrieval-augmented generation and explore adaptive communication protocols and interaction modalities that evolve alongside task requirements. The platform's impact on accelerating the deployment of multi-agent systems across industries, from healthcare to customer service, promises to be profound, potentially leading to smarter and more responsive technologies that enhance human-machine collaboration. With AgentScope, we invite the broader research and development community to build upon our foundation, driving innovations that will shape the next generation of intelligent multi-agent applications.

References

Martín Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Geoffrey Irving, Michael Isard, et al. TensorFlow: a system for large-scale machine learning. In 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16), pages 265–283, 2016.

Abubakar Abid, Ali Abdalla, Ali Abid, Dawood Khan, Abdulrahman Alfozan, and James Zou. Gradio: Hassle-free sharing and testing of ML models in the wild. arXiv preprint arXiv:1906.02569, 2019.

AutoGPT-Team. AutoGPT, 2023. URL https://github.com/Significant-Gravitas/AutoGPT.

James Betker, Gabriel Goh, Li Jing, Tim Brooks, Jianfeng Wang, Linjie Li, Long Ouyang, Juntang Zhuang, Joyce Lee, Yufei Guo, et al. Improving image generation with better captions. Computer Science, 2(3):8, 2023.

Damai Dai, Yutao Sun, Li Dong, Yaru Hao, Shuming Ma, Zhifang Sui, and Furu Wei. Why can GPT learn in-context? Language models secretly perform gradient descent as meta-optimizers. In Findings of the Association for Computational Linguistics: ACL 2023, pages 4005–4019, July 2023.

Yilun Du, Shuang Li, Antonio Torralba, Joshua B. Tenenbaum, and Igor Mordatch. Improving factuality and reasoning in language models through multiagent debate. arXiv preprint arXiv:2305.14325, 2023.

Justin Fu, Anoop Korattikara, Sergey Levine, and Sergio Guadarrama. From language to goals: Inverse reinforcement learning for vision-based instruction following. In 7th International Conference on Learning Representations, 2019.

Yunfan Gao, Yun Xiong, Xinyu Gao, Kangxiang Jia, Jinliu Pan, Yuxi Bi, Yi Dai, Jiawei Sun, Qianyu Guo, Meng Wang, and Haofen Wang. Retrieval-augmented generation for large language models: A survey. CoRR, abs/2312.10997, 2023.
Sirui Hong, Xiawu Zheng, Jonathan Chen, Yuheng Cheng, Jinlin Wang, Ceyao Zhang, Zili Wang, Steven Ka Shing Yau, Zijuan Lin, Liyang Zhou, et al. MetaGPT: Meta programming for multi-agent collaborative framework. arXiv preprint arXiv:2308.00352, 2023.

Huggingface. Transformers-Agents, 2023. URL https://huggingface.co/docs/transformers/transformers_agents.

Langchain-AI. LangChain, 2023. URL https://github.com/langchain-ai/langchain.

Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, et al. Retrieval-augmented generation for knowledge-intensive NLP tasks. Advances in Neural Information Processing Systems, 33:9459–9474, 2020a.

Patrick S. H. Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, and Douwe Kiela. Retrieval-augmented generation for knowledge-intensive NLP tasks. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, 2020b.

Chenliang Li, Hehong Chen, Ming Yan, Weizhou Shen, Haiyang Xu, Zhikai Wu, Zhicheng Zhang, Wenmeng Zhou, Yingda Chen, Chen Cheng, et al. ModelScope-Agent: Building your customizable agent system with open-source large language models. arXiv preprint arXiv:2309.00986, 2023a.

Guohao Li, Hasan Abed Al Kader Hammoud, Hani Itani, Dmitrii Khizbullin, and Bernard Ghanem. CAMEL: Communicative agents for "mind" exploration of large scale language model society. arXiv preprint arXiv:2303.17760, 2023b.

Jiaju Lin, Haoran Zhao, Aochi Zhang, Yiting Wu, Huqiuyue Ping, and Qin Chen. AgentSims: An open-source sandbox for large language model evaluation. arXiv preprint arXiv:2308.04026, 2023.

Jerry Liu. LlamaIndex, 11 2022. URL https://github.com/jerryjliu/llama_index.

OpenAI. GPT-4 technical report. CoRR, abs/2303.08774, 2023.

Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul F. Christiano, Jan Leike, and Ryan Lowe. Training language models to follow instructions with human feedback. In Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems, 2022.

Bhargavi Paranjape, Scott M. Lundberg, Sameer Singh, Hannaneh Hajishirzi, Luke Zettlemoyer, and Marco Túlio Ribeiro. ART: Automatic multi-step reasoning and tool-use for large language models. CoRR, abs/2303.09014, 2023.

Aaron Parisi, Yao Zhao, and Noah Fiedel. TALM: Tool augmented language models. CoRR, abs/2205.12255, 2022.

Joon Sung Park, Joseph C. O'Brien, Carrie Jun Cai, Meredith Ringel Morris, Percy Liang, and Michael S. Bernstein. Generative agents: Interactive simulacra of human behavior. In Proceedings of the 36th Annual ACM Symposium on User Interface Software and Technology, pages 2:1–2:22, 2023.

Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. PyTorch: An imperative style, high-performance deep learning library. Advances in Neural Information Processing Systems, 32, 2019.

Reid Pryzant, Dan Iter, Jerry Li, Yin Lee, Chenguang Zhu, and Michael Zeng. Automatic prompt optimization with "gradient descent" and beam search. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 7957–7968, December 2023.
Chen Qian, Xin Cong, Cheng Yang, Weize Chen, Yusheng Su, Juyuan Xu, Zhiyuan Liu, and Maosong Sun. Communicative agents for software development. arXiv preprint arXiv:2307.07924, 2023.

Vipula Rawte, Amit P. Sheth, and Amitava Das. A survey of hallucination in large foundation models. CoRR, abs/2309.05922, 2023.

Weizhou Shen, Chenliang Li, Hongzhan Chen, Ming Yan, Xiaojun Quan, Hehong Chen, Ji Zhang, and Fei Huang. Small LLMs are weak tool learners: A multi-LLM agent. arXiv preprint arXiv:2401.07324, 2024.

Weijie Su, Xizhou Zhu, Chenxin Tao, Lewei Lu, Bin Li, Gao Huang, Yu Qiao, Xiaogang Wang, Jie Zhou, and Jifeng Dai. Towards all-in-one pre-training via maximizing multi-modal mutual information. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 15888–15899, 2023.

Mirac Suzgun and Adam Tauman Kalai. Meta-prompting: Enhancing language models with task-agnostic scaffolding. arXiv preprint arXiv:2401.12954, 2024.

Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurélien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample. LLaMA: Open and efficient foundation language models. CoRR, abs/2302.13971, 2023a.

Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton-Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurélien Rodriguez, Robert Stojnic, Sergey Edunov, and Thomas Scialom. Llama 2: Open foundation and fine-tuned chat models. CoRR, abs/2307.09288, 2023b.

Lei Wang, Chen Ma, Xueyang Feng, Zeyu Zhang, Hao Yang, Jingsen Zhang, Zhiyuan Chen, Jiakai Tang, Xu Chen, Yankai Lin, et al. A survey on large language model based autonomous agents. arXiv preprint arXiv:2308.11432, 2023.

Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, Maarten Bosma, Denny Zhou, Donald Metzler, et al. Emergent abilities of large language models. arXiv preprint arXiv:2206.07682, 2022.

Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Shaokun Zhang, Erkang Zhu, Beibin Li, Li Jiang, Xiaoyun Zhang, and Chi Wang. AutoGen: Enabling next-gen LLM applications via multi-agent conversation framework. arXiv preprint arXiv:2308.08155, 2023.

Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, et al. The rise and potential of large language model based agents: A survey. arXiv preprint arXiv:2309.07864, 2023.
Tianbao Xie, Fan Zhou, Zhoujun Cheng, Peng Shi, Luoxuan Weng, Yitao Liu, Toh Jing Hua, Junning Zhao, Qian Liu, Che Liu, et al. OpenAgents: An open platform for language agents in the wild. arXiv preprint arXiv:2310.10634, 2023.

Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik R. Narasimhan, and Yuan Cao. ReAct: Synergizing reasoning and acting in language models. In The Eleventh International Conference on Learning Representations, 2023.

Shengyu Zhang, Linfeng Dong, Xiaoya Li, Sen Zhang, Xiaofei Sun, Shuhe Wang, Jiwei Li, Runyi Hu, Tianwei Zhang, Fei Wu, and Guoyin Wang. Instruction tuning for large language models: A survey. CoRR, abs/2308.10792, 2023a.

Yue Zhang, Yafu Li, Leyang Cui, Deng Cai, Lemao Liu, Tingchen Fu, Xinting Huang, Enbo Zhao, Yu Zhang, Yulong Chen, Longyue Wang, Anh Tuan Luu, Wei Bi, Freda Shi, and Shuming Shi. Siren's song in the AI ocean: A survey on hallucination in large language models. CoRR, abs/2309.01219, 2023b.

Wangchunshu Zhou, Yuchen Eleanor Jiang, Long Li, Jialong Wu, Tiannan Wang, Shi Qiu, Jintian Zhang, Jing Chen, Ruipu Wu, Shuai Wang, et al. Agents: An open-source framework for autonomous language agents. arXiv preprint arXiv:2309.07870, 2023.

A Running Example of Standalone Conversation

Example 22 shows the dialog history when running the standalone conversation.

Assistant: Thank you! I'm here to help. How can I assist you today?
User: I wanna know about large language models.
Assistant: Large language models are AI models trained on vast amounts of text data to understand and generate human-like text.
User: What's the most famous and powerful LLM in the world?
Assistant: The most famous and powerful large language model currently is OpenAI's GPT-4.
User: What about the other LLMs?
Assistant: There are several other notable large language models including:

1. BERT (Bidirectional Encoder Representations from Transformers) by Google: This model is designed to understand the context of words in a sentence by looking at what's before and after them.

2. T5 (Text-to-Text Transfer Transformer) also by Google: This model treats every NLP problem as a text generation task.

3. RoBERTa by Facebook: This is a variant of BERT and is trained on more data and for a longer amount of time.

4. XLNet by Google Brain: This model overcomes some of the limitations of BERT by taking into account all possible word sequences in a sentence.

5. Turing-NLG by Microsoft: This is a 17-billion parameter language model by Microsoft, designed for various NLP tasks.

6. Megatron by NVIDIA: This is designed to train Transformer models with billions of parameters.

These models are used for a variety of NLP tasks including translation, question answering, summarization, and more.
User: Okay, what should I know about LLM?
Assistant: Large Language Models (LLMs) are powerful AI tools that can generate human-like text, but they require careful handling due to potential risks like generating harmful or biased content.
User: Okay, thanks.
Assistant: You're welcome! If you have any other questions, feel free to ask.
User: exit

Example 22: A running example of standalone conversation.

B Running Example of Werewolf

Please refer to https://github.com/modelscope/agentscope for the complete code of the werewolf game. We provide an example dialogue history in Example 23.
Moderator: Player1 and Player2, you are werewolves. If you are alone, eliminate a player, else discuss with your teammates and reach an agreement. Response in the following format which can be loaded by python json.loads()
{
    "thought": "thought",
    "speak": "thoughts summary to say to others",
    "agreement": "whether the discussion reached an agreement or not (true/false)"
}
Player1: I think we should consider Player3. They have a knack for figuring things out.
Player2: I agree with your point about Player3, they are indeed a strong player. But we should also consider Player4, who is also a strong player and could be the seer or witch.
Player1: I see your point about Player4. They could be a threat if they are the seer or witch. But I still think Player3 is a strong player and could figure us out.
Player2: Alright, let's go with Player3. We can consider Player4 next time.
Moderator: Which player do you vote to kill? Response in the following format which can be loaded by python json.loads()
{
    "thought": "thought",
    "speak": "player_name"
}
Player1: Player3
Player2: Player3
Moderator: The player with the most votes is Player3.
Moderator: Player6, you're witch. Tonight Player3 is eliminated. Would you like to resurrect Player3? Response in the following format which can be loaded by python json.loads()
{
    "thought": "thought",
    "speak": "thoughts summary to say",
    "resurrect": "true/false"
}
Player6: I'm deeply saddened by the loss of Player3. It's a tough decision to make.
Moderator: Player5, you're seer. Which player in Player1, Player2, Player3, Player4, Player5 and Player6 would you like to check tonight? Response in the following json format which can be loaded by python json.loads()
{
    "thought": "thought",
    "speak": "player_name"
}
Player5: Player1
Moderator: Okay, the role of Player1 is werewolf.
Moderator: The day is coming, all the players open your eyes. Last night is peaceful, no player is eliminated.
Moderator: Now the alive players are Player1, Player2, Player3, Player4, Player5 and Player6. Given the game rules and your role, based on the situation and the information you gain, to vote a player eliminated among alive players and to win the game, what do you want to say to others? You can decide whether to reveal your role. Response in the following JSON format which can be loaded by python json.loads()
{
    "thought": "thought",
    "speak": "thought summary to say to others"
}
Player1: I'm glad that we all made it through the night. I think we should keep an eye on Player4. They've been pretty quiet, which could mean they're trying to avoid suspicion.
Player2: I agree with Player1. Player4 has been unusually quiet. Maybe they're trying to avoid drawing attention to themselves.
Player3: I see your point about Player4, but it's still early in the game. We should be careful not to jump to conclusions without more evidence.
Player4: I understand your concerns, but I assure you, I'm just a villager. I think we should focus on gathering more information before we start pointing fingers. If the seer has any insights, now might be a good time to share.
Player5: I understand everyone's concerns about Player4, but I think we should also keep an eye on Player1. They were quick to cast suspicion on others, which could be a strategy to divert attention from themselves.
Player6: I understand the concerns about Player4, but I also think we should be cautious about Player1 and Player2. They were quick to agree with each other, which could be a strategy to divert attention from themselves. Let's gather more information before we make any decisions.
Moderator: Now the alive players are Player1, Player2, Player3, Player4, Player5 and Player6. Given the game rules and your role, based on the situation and the information you gain, to vote a player among alive players and to win the game, it's time to vote one player among the alive players, please cast your vote on who you believe is a werewolf. Response in the following format which can be loaded by python json.loads()
{
    "thought": "thought",
    "speak": "player_name"
}
Player1: Player4
Player2: Player4
Player3: Player1
Player4: Player1
Player5: Player1
Player6: Player1
Moderator: Player1 has been voted out.
Moderator: The game goes on.
Moderator: Player2, you are werewolves. If you are alone, eliminate a player, else discuss with your teammates and reach an agreement. Response in the following format which can be loaded by python json.loads()
{
    "thought": "thought",
    "speak": "thoughts summary to say to others",
    "agreement": "whether the discussion reached an agreement or not (true/false)"
}
Player2: I think we should consider Player4. They are a strong player and could be a threat.
Moderator: Which player do you vote to kill? Response in the following format which can be loaded by python json.loads()
{
    "thought": "thought",
    "speak": "player_name"
}
Player2: Player4
Moderator: The player with the most votes is Player4.
Moderator: Would you like to eliminate one player? Response in the following json format which can be loaded by python json.loads()
{
    "thought": "thought",
    "speak": "False or player_name"
}
Player6: False
Moderator: Player5, you're seer. Which player in Player2, Player3, Player4, Player5 and Player6 would you like to check tonight? Response in the following json format which can be loaded by python json.loads()
{
    "thought": "thought",
    "speak": "player_name"
}
Player5: Player2
Moderator: Okay, the role of Player2 is werewolf.
Moderator: The day is coming, all the players open your eyes. Last night, the following player(s) has been eliminated: Player4.
Moderator: Now the alive players are Player2, Player3, Player5 and Player6. Given the game rules and your role, based on the situation and the information you gain, to vote a player eliminated among alive players and to win the game, what do you want to say to others? You can decide whether to reveal your role. Response in the following JSON format which can be loaded by python json.loads()
{
    "thought": "thought",
    "speak": "thought summary to say to others"
}
Player2: I think we should keep an eye on Player5. They've been pretty quiet, which could mean they're trying to avoid suspicion.
They were quick to suspect Player4, who was eliminated last night. However, the game didn't end after Player1, who also suspected Player4, was voted out. This suggests that Player1 was not a werewolf, and therefore, Player2 is likely to be one.
Player5: I agree with Player3. Player2 was quick to cast suspicion on Player4, who turned out to be innocent. I also noticed that Player2 and Player1 were often in agreement, which could mean they were both werewolves trying to protect each other. I think we should vote out Player2.
Player6: I agree with Player3 and Player5. Player2 was quick to cast suspicion on Player4, who turned out to be innocent. I also noticed that Player2 and Player1 were often in agreement, which could mean they were both werewolves trying to protect each other. I think we should vote out Player2.
Moderator: Now the alive players are Player2, Player3, Player5 and Player6. Given the game rules and your role, based on the situation and the information you gain, to vote a player among alive players and to win the game, it's time to vote one player among the alive players, please cast your vote on who you believe is a werewolf. Response in the following format which can be loaded by python json.loads()
{
  "thought": "thought",
  "speak": "player_name"
}
Player2: Player5
Player3: Player2
Player5: Player2
Player6: Player2
Moderator: Player2 has been voted out.
Moderator: The game is over. The werewolves have been defeated, and the village is safe once again!
Example 23: Running example of werewolf.
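Each moderator turn in this example requests a reply that python json.loads() can parse. A minimal, hypothetical sketch of that parsing and validation step (not the framework's actual implementation) could look like this:

import json

def parse_player_reply(raw_reply: str) -> dict:
    """Parse a player's JSON reply and check the fields the moderator asked for."""
    reply = json.loads(raw_reply)  # raises an error if the reply is not valid JSON
    for field in ("thought", "speak"):
        if field not in reply:
            raise KeyError(f"missing required field: {field}")
    return reply

# Example with the witch's resurrection prompt format:
raw = '{"thought": "Saving Player3 is risky", "speak": "It is a tough decision.", "resurrect": "false"}'
print(parse_player_reply(raw)["speak"])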
ai_researcher
1
Research_on_life_cycle_environmental_impact_assessment_and_decision_analysis_model_of_building_materials_based_on_BIM.pdf
BIM can help decarbonize the construction sector: life cycle evidence from Pavement Management Systems

Anne de Bortoli1,2,3*, Yacine Baouch4, Mustapha Masdan2
1 CIRAIG, École Polytechnique de Montréal, P.O. Box 6079, Montréal, Québec, H3C 3A7, Canada
2 Direction technique, Eurovia Management, 18 Place de l'Europe, 92500, Rueil-Malmaison, France
3 LVMT, Ecole des Ponts ParisTech, Cité Descartes, 6-8 Avenue Blaise Pascal, 77420 Champs-sur-Marne, France
4 Université Technologique de Compiègne, Recherche de Royallieu, rue du Docteur Schweitzer, 60203 Compiègne, France
* Corresponding author; e-mail: [email protected]

ABSTRACT
Transforming the construction sector is key to reaching net-zero, and many stakeholders expect its decarbonization through digitalization, but no quantified evidence has been brought to date. This article proposes the first environmental quantification of the impact of Building Information Modeling (BIM) in the construction sector. Specifically, the direct and indirect greenhouse gas (GHG) emissions generated by a monofunctional BIM used to plan road maintenance – a Pavement Management System (PMS) – are evaluated using field data from France. The related carbon footprints are calculated following a life cycle approach, using different sources of data – including ecoinvent v3.6 – and the IPCC 2013 GWP 100a characterization factors. Three design-build-maintain pavement alternatives are compared: scenario 1 relies on a massive design and surface maintenance, scenario 2 on a progressive design and pre-planned structural maintenance, and scenario 3 on a progressive design and tailored structural maintenance supported by the PMS. First, results show that the direct emissions due to the PMS existence are negligible – 0.02% of the life cycle emissions of scenario 3's pavement, i.e. 0.52 t CO2eq for 10 km and 30 years. Second, the base case and two complementary sensitivity analyses show that the use of a PMS is climate-positive over the life cycle when the pavement subgrade bearing capacity improves over time, and neutral for the climate otherwise. The GHG emission savings using BIM can reach up to 14 and 30% of the life cycle emissions compared to scenarios 2 and 1 respectively, and resp. 47 and 65% when restraining the scope to maintenance and rehabilitation and excluding original pavement construction. Third, the neutral effect of BIM in case of a deterioration of the bearing capacity of the subgrade may be explained by design practices and safety margins, which could in fact be enhanced using BIM. Fourth, the decarbonization potential of a multifunctional BIM is discussed, and research perspectives are presented.

Keywords: BIM, Pavement Management Systems, decarbonization, digitalization, LCA, construction.

1 Introduction and background
Construction is a key sector that must be transformed to reach a net-zero society, and many stakeholders expect its decarbonization through digitalization, but no quantified evidence has been shown to date. In Canada, construction is the second most carbon-intensive sector, accounting for 12% of the national emissions, 59% of them coming from infrastructure (de Bortoli and Agez, Under review). To decrease this burden, many green construction practices have been appraised using Life Cycle Assessment (LCA) – for buildings (e.g. Anand and Amor, 2017; Vilches et al., 2017) or infrastructure (e.g.
AzariJafari et al., 2016; Saxe et al., 2020) – while the environmental consequences of digitalization, such as the use of Building Information Modelling (BIM) for buildings, have been presented positively as a way to automatize environmental quantifications and optimizations (Soust-Verdaguer et al., 2017). BIM is defined as the "use of a shared digital representation of a built asset to facilitate design, construction, and operation processes to form a reliable basis for decisions" (International Organization for Standardization, 2018). Its direct and indirect environmental impacts – those due respectively to the amortization of BIM's hardware, equipment and infrastructure, their operation, and the usage of BIM's software, and to the consequences of BIM on construction processes and management – have never been quantified to date. In the meantime, the environmental impact of digital services has been increasingly studied (ADEME, 2016) and shown to be significant: Information and Communication Technologies (ICT) would emit between 1.8 and 3.9% of the global anthropogenic greenhouse gases (GHG) (Freitag et al., 2021). Thus, it cannot be inferred that the direct environmental impact of BIM is negligible, nor that its potential indirect environmental benefits overcome its direct costs. This paper aims to offer a first life cycle quantification of these aspects.

The impact of infrastructure construction exceeding that of buildings (de Bortoli and Agez, Under review), this paper focuses on infrastructure. Moreover, among the different kinds of infrastructure, roads support the most emitting mean of transportation (Our world in data, 2020). Their decarbonization is thus a priority to reach a net-zero pathway. However, BIM for infrastructure is less advanced than BIM for buildings (Malagnino et al., 2021). "I-BIM", "Infra-BIM", or BIM for infrastructure, started to be implemented in 2013 in the rail construction sector of developed countries (Matejov and Šestáková, 2021). Since then, it has been shown to help design road elements thanks to visualization – e.g. pavement sections, roundabouts, tunnels (Vignali et al., 2021) –, to plan airport maintenance (Abbondati et al., 2020), and in general to improve infrastructure (Costin et al., 2018). I-BIM-based environmental assessments would also increase sustainability awareness within design teams (van Eldik et al., 2020), but the direct and indirect environmental impacts of I-BIM have not been quantitatively investigated. The difficulty in assessing these impacts lies both in the versatility of I-BIM and in its novelty. It has only been used occasionally for about one decade, and 47 categories of usages have been classified, including automation, pricing, monitoring, failure detection, and maintenance planning (Costin et al., 2018).

Pavement Management Systems (PMS) are common software programs analyzing road condition data to plan maintenance. These PMSs allow adopting different strategies in the design-build-maintain sequence related to a pavement life cycle, compared to non-digitally-helped pavement management. They are thus equivalent to a monofunctional I-BIM dedicated to road maintenance, and they have been used for decades by road operators. As ex-post assessments generally present a lower degree of uncertainty than prospective assessments, a case study is conducted to quantify, on field data, the consequence of using this experienced BIM function on a pavement life cycle carbon footprint, compared to alternative ways to design and maintain this pavement.
Two main schools of thought exist to design roads and schedule their maintenance: empirical vs mechanical-empirical (ME) methods. The American continent mainly used the AASHTO empirical method (AASHTO, 1993), before switching progressively to a ME version (AASHTO, 2008; US DoT - FHWA, 2019a, 2019b, 2019c, 2019d). In France and Africa, the LCPC-Setra ME method is widely used (LCPC-Sétra, 1994). A catalogue of standard solutions has been made available (Corte et al., 1998), while design and maintenance plans of strategic pavements are usually tailored using the associated ME software "Alizé". As they carry a substantial portion of the traffic (around 20%) on a short portion of the national road network (1%) (de Bortoli, 2018), high-traffic roads present a strategic socioeconomic importance and more design-build-maintain alternatives. Thus, this case study focuses on high-traffic roads.

The objectives of this study are to quantify the carbon footprint of (1) the PMS function of an I-BIM based on field data, and (2) three design-build-maintain alternatives for a high-traffic road in France over its entire life cycle, (3) to understand the potential impact of the PMS function of a BIM on climate change, and (4) to discuss the environmental consequences of a multifunctional BIM.

2 Method
2.1 Overview of the method
A method is developed to quantify the carbon footprint of the most common ways to design, build and maintain high-traffic roads in France, including an option where maintenance is digitally helped using a PMS, in order to compare the GHG emissions related to these different practices and seize the consequences of using digital tools for maintenance operation planning that could be included in an I-BIM. This method consists in developing three design-build-maintain scenarios and comparing their life cycle carbon footprints, calculated based on foreground and background inventories (or emission factors) from the literature, road maintenance operators, and their suppliers. The quantification method is based on LCA, applied consistently with Standards ISO 14040 and 14044 (International Organization for Standardization, 2006a, 2006b), and the characterization factors chosen are IPCC 2013 GWP 100a, the most recent factors when this study was conducted (mid-2021). The system boundaries include the production, maintenance, and use stages. The functional unit is "building and maintaining in good condition over 30 years a 10 kilometer-long and 7-meter-wide section of highway in France, under a traffic of 500 heavy vehicles per day". In this method section, the foreground modeling is presented first, i.e. the scenarios of design-build-maintenance operations and related PMS activities, as well as the equations to calculate the carbon footprint of the pavement over its life for each scenario. The scenarios, detailed in the following section, were developed with pavement construction companies and highway concessionaires in France. Second, the background data and emission factors (EF) used to quantify this carbon footprint are detailed. An EF is the carbon footprint of any unitary activity occurring during the pavement lifespan.

Figure 1 Calculation method overview

2.2 Case study scenarios
2.2.1 A theoretical representative French highway
The case study is conducted on a theoretical representative French highway, with two lanes in each direction, bearing a traffic of 500 heavy vehicles per day and per direction (the "T1" French class of Average Annual Daily Traffic (AADT)), with an annual traffic growth rate assumed at 0%.
A 10-kilometer-long section designed to last 30 years is considered, with a width of 7 meters per direction (excluding shoulders), and a class « PF2qs » subgrade at the commissioning stage. Subgrades are classified in France based on the French standard NF P 94-117-1 (AFNOR, 2000), the "SETRA-LCPC" guides on pavement subgrade design (Corte et al., 2000a, 2000b), and a national complement on PF2qs class subgrades (CEREMA, 2017). A PF2qs subgrade corresponds to a "Plate Test Static Deformation Module" (EV2) between 80 MPa (included) and 120 MPa (excluded) (CEREMA, 2017). Ten years after commissioning, this bearing capacity is assumed to decrease toward a "PF2" subgrade class (50 MPa ≤ EV2 < 80 MPa) over 25% of the linear and to increase toward a "PF3" subgrade (120 MPa ≤ EV2 < 200 MPa) over 25% of the section as well, respectively due to drainage issues deteriorating the subgrade and to traffic compacting the soil. Fifty percent of the linear will maintain a PF2qs subgrade over time. This evolution has been arbitrarily estimated by Eurovia, a leading road construction company worldwide, and sensitivity analyses will be conducted, respectively over a restricted and an extreme range of subgrade evolutions. This problem presents three parameters – %PF2, %PF2qs and %PF3, the average percentage of the section length in each subgrade class over the pavement lifespan. The sum of these parameters equals 100%, but they can vary independently. Thus, there is no continuous representation showing all the possibilities of the subgrade evolutions. For this reason, arbitrary values are chosen for these sensitivity analyses. Table 1 presents the parameters' values tested for the restricted range sensitivity analysis (SA), while Table 2 presents the parameters' values for the extreme range SA.

Table 1 Parameters' values for the "restricted range" sensitivity analysis
Scenario   a     b     c     d     e     f (base)   g     h     i     j     k
%PF2       0.5   0.4   0.3   0.2   0.1   0          0     0     0     0     0
%PF2qs     0.5   0.6   0.7   0.8   0.9   1          0.9   0.8   0.7   0.6   0.5
%PF3       0     0     0     0     0     0          0.1   0.2   0.3   0.4   0.5

Table 2 Parameters' values for the "extreme range" sensitivity analysis
Scenario   a     b     c     d     e     f (base)   g     h     i     j     k
%PF2       1     0.8   0.5   0.5   0.4   0.25       0.1   0.1   0     0     0
%PF2qs     0     0.2   0.5   0.4   0.5   0.5        0.5   0.4   0.5   0.2   0
%PF3       0     0     0     0.1   0.1   0.25       0.4   0.5   0.5   0.8   1

2.2.2 Massive vs progressive designs and maintenance
Two ways of designing pavements can be considered in France: a massive vs a progressive design. We call massive design (Scenario 1) an approach that consists of designing a pavement to last until the end of its life without structural reinforcement under the traffic expected. In that case, maintenance operations are only performed to keep the pavement surface in good condition: ensuring waterproofness, a skid resistance meeting safety thresholds, and good riding conditions. In France, massive design is guided by the LCPC-Setra catalogue for pavements (Corte et al., 1998). Alternatively, a progressive design (Scenarios 2 and 3) is performed when a pavement is not originally designed to mechanically resist the traffic expected over its entire service life and needs structurally-reinforcing maintenance operations that will, at the same time, restore good surface condition. This second approach requires using ME design software and may be selected by road concessionaires to optimize discounted cash flows and reduce traffic risk.
This risk management practice is particularly attractive under an ever more uncertain future, due to the consequences of climate change on pavement conditions and to changes in behaviors related to evolving cultural standards, economic crises, and so on. Then, two approaches allow managing the maintenance of progressively-designed pavements: theoretical mechanical planning (scenario 2) or data-supported adaptive planning (scenario 3). The maintenance in scenario 2 is planned during the design process using ME design software considering traffic forecasts, and can be further modified with traffic data. Scenario 3 consists of data-supported maintenance management: it relies on traffic data but also on pavement condition data collected and monitored over time, stored, and analyzed in a PMS. This is the most tailored maintenance approach existing, as it adapts to the real evolution of traffic, climate, and thus pavement mechanical conditions. It thus avoids premature pavement failures as well as structural oversizing.

2.2.3 Design and maintenance sequence by scenario
Description
The pavement design and maintenance sequences of the three scenarios are illustrated in Figure 2, with a theoretical risk of failure of 5%. It means that, over the pavement's lifespan, with the original subgrade class and under the traffic expected, 5% of the surface will have experienced bottom-up cracking calling for rehabilitation (i.e. full-thickness reconstruction). The thickness and material type of each layer are presented over the 30-year service life in Figure 2. The different materials and techniques used are cold micro asphalt concrete surfacing (CMACS, called "enrobés coulés à froid" (ECF) in French), semi-coarse aggregate asphalt concrete (overlays) (SCAC(O), called "béton bitumineux semi-grenu" (BBSG) in French), road base asphalt (RBA, called "grave-bitume" (GB3) in French), and (very) thin asphalt concrete overlays ((V)TACO, called "béton bitumineux (très) mince" (BB(T)M) in French). Scenario 1 is extracted from the French LCPC-Setra Catalogue (Corte et al., 1998). SCAC M&F corresponds to milling (M) a former surface layer before filling (F) it with SCAC. Scenario 2 is designed with Odin, Eurovia's ME design software based on the same physics as the standard Alizé software. In Scenarios 1 and 2, the road manager does not have data on the evolution of the road condition and is thus unaware of the evolution of the subgrade bearing capacity. This results in subsections of the pavement failing prematurely. The cumulated section's portion likely to fail over time is calculated with Odin and provided in the supplementary material. Finally, scenario 3 is also managed with Odin, but with perfect knowledge of the pavement evolution, including the subgrade's modulus, and maintenance is optimized accordingly.

Figure 2 Design and maintenance sequence of the different scenarios

Carbon footprint
The carbon footprint CF of the built-maintained pavements over 30 years is calculated following equation (1), where $W_{section}$ is the width of the pavement studied (in meters), $L_{section}$ its length (in meters), and $EF_i$ the emission factor of the construction operation $i$ occurring during the pavement lifespan (in kg CO2eq/sm), this operation being related to original construction or maintenance.

$CF_{build,maintain} = \sum_i W_{section} \times L_{section} \times EF_i$    (1)
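To make the bookkeeping of equation (1) concrete, here is a minimal sketch; the emission factors below are placeholders, not the study's inventory values:

# Carbon footprint of a built-and-maintained section, following equation (1).
# Emission factors are hypothetical placeholders, not the study's data.
W_SECTION_M = 7.0       # pavement width (m)
L_SECTION_M = 10_000.0  # section length (m)

operations_ef = {       # kg CO2eq per square meter, per operation
    "original construction": 30.0,
    "surface maintenance": 5.0,
    "rehabilitation": 12.0,
}

cf_kg = sum(W_SECTION_M * L_SECTION_M * ef for ef in operations_ef.values())
print(f"{cf_kg / 1000:.0f} t CO2eq over the section life cycle")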
2.2.4 Pavement management system modeling
In scenarios 1 and 2, premature pavement failures are identified by regular patrolling, i.e. via visual surveying performed by trained staff, often from the pavement management company. As scenario 3 also comes with this patrolling, these operations are removed from the system boundaries for all scenarios, following ISO 14044 guidelines (International Organization for Standardization, 2006b). On the other hand, scenario 3 requires pavement monitoring by an equipped truck which measures the evolution of the bearing capacity of the highway pavement. These data are collected and then stored in a database. A pavement asset management team then analyzes them to plan the maintenance operations. To model the PMS's features (illustrated in Figure 3), a "SysML use cases" diagram is used. This formalism describes use cases and external actors' interactions in a concise way. Three external system users are considered: external data (that need to be collected), an external database (that is used to store data), and the user of the PMS (a member of the pavement asset management team). Four use cases are considered: collecting external data, storing internal data, running a local task, and transferring internal data. The last two use cases constitute the PMS's maintenance operation planning. For concision, only the main use cases are represented, and the extend/include relationships are omitted. For example, collecting data may be extended to storing.

Figure 3 Pavement Management System use cases

PMS carbon footprint
The Global Warming Potential (GWP) of a PMS for a pavement section, $I_{GWP,section}$, can be calculated using the generic equation (2):

$I_{GWP,section} = I_{GWP,data} + I_{GWP,storage} + I_{GWP,maintenance}$    (2)

Where:
- the section is defined by a length and a maintenance period
- $I_{GWP,data}$ is the GWP to collect the data related to the section
- $I_{GWP,storage}$ is the GWP to store the data related to the section
- $I_{GWP,maintenance}$ is the GWP to plan the maintenance operations related to the section

Data collection of the pavement condition
In France, different types of deflectometers are used to measure pavement deflection. In this case study, a curviameter is used, i.e. an instrumented truck loaded with 13 tons on its rear axle that records the deformation under this axle. According to ecoinvent's truck classification, this vehicle is considered as a 16-32 ton truck, and a EURO 6 emission standard is considered.
(3) Where: - - - - is the maintenance period in years 78 (cid:11)(cid:4)(cid:5)(cid:6)(cid:7)(cid:8)(cid:9)(cid:10) is the length of the studied section in km is the number of monitoring rounds per year (cid:22)4(cid:9)5(cid:10)$ is the emission factor of the vehicle used to collect road condition data in kg CO2eq F. per km We choose the worst-case scenario, in which the monitoring round is conducted every year. The emission factor of the instrumented truck, F. , is calculated from the 16-32 ton EURO6 Transport freight process from Ecoinvent 3.4 process (see Table 4). To estimate the carbon footprint of such a truck per kilometer traveled, we multiply the freight transportation process based on the ton-kilometer functional unit by the average load considered by ecoinvent – i.e. 5.79 tons (Spielmann et al., 2007). Data storage The PMS’s database is composed of several pieces of hardware: 3 servers (ProLiant BL460c Gen10 and ProLiant BL460c Gen9 models), 1 storage bay (3PAR 8200 model), and 1 backup bay (StorOnce 5500 model). The GWP of the data storage related to the studied section, I!"# (cid:4)(cid:7)(cid:9)4%:(cid:5), is defined by equation (4): I!"# (cid:4)(cid:7)(cid:9)4%:(cid:5) (cid:29) ;<∗=>?@ABCD =A ∗ E$%(cid:7)%F%(cid:4)(cid:5) ∗ (cid:14) GHIJK LHIJK & MN45(cid:10)(cid:28) (4) Where: - (cid:11)(cid:7) is the total section’s length informed in the database in km 13 - - - - E$%(cid:7)%F%(cid:4)(cid:5) is the allocation coefficient considering the proportion of the database used for the PMS is the GWP of the PMS’s database hardware in kg CO2eq MO%4$ is the depreciation period of the hardware database in years ΔO%4$ MN45(cid:10) is tℎ(cid:23) GWP of a one-year run of the database in kg CO2eq In this case, the entire database is dedicated to PMS: E$%(cid:7)%F%(cid:4)(cid:5) (cid:29) 1. ΔO%4$ and (cid:11)(cid:7) are given by an internal expert (see Table 3). MO%4$ and MN45(cid:10) are provided by the manufacturer (see Table 4). Data analyses Data are analyzed by the pavement asset management team to plan the maintenance. The evolution of the bearing capacity of the subgrade is extrapolated from the pavement deflection. Then, the need for further structural reinforcements or delaying operations is estimated. The digital activities of the team are composed of local tasks, with a computer, and data transfer from the PMS’s database. More specifically, two types of users, and thus computers, are involved: standard and advanced stations. The GWP of the maintenance operation planning, I!"# 8%(cid:8)(cid:10)(cid:7)(cid:5)(cid:10)%(cid:10)(cid:6)(cid:5), is defined by equation (5): 8%(cid:8)(cid:10)(cid:7)(cid:5)(cid:10)%(cid:10)(cid:6)(cid:5) (cid:29) (cid:22)(cid:9)U(cid:5)4%(cid:7)(cid:8)(cid:9)(cid:10) ∗ 78 ∗ (cid:11)(cid:4)(cid:5)(cid:6)(cid:7)(cid:8)(cid:9)(cid:10) ∗ VE(cid:7) ∗ (cid:2)(cid:7)4%(cid:10)(cid:4)W(cid:5)4 & (cid:2)%$X & (cid:2)(cid:4)Y (5) M;ST Where: - - - - (cid:22)(cid:9)U(cid:5)4%(cid:7)(cid:8)(cid:9)(cid:10) is the number of maintenance planning during one year is the data volume transferred per PMS planning operation per km and year E(cid:7) (cid:2)(cid:7)4%(cid:10)(cid:4)W(cid:5)4 is the emission factor to transfer data and (cid:2)%$X are the emission factors of the amortization of the two types of computing (cid:2)(cid:4) stations 14 According to an internal expert, maintenance operation planning is carried out every 15 years: (cid:22)(cid:9)U(cid:5)4%(cid:7)(cid:8)(cid:9)(cid:10) (cid:29) 1/15. 
This internal expert considers, in a worst-case scenario, that three maintenance plannings conducted on a 100 km-long section over 50 years require 50 gigabytes (GB) of data transfer. Thus, $V_t$ is calculated from this estimation. $F_{adv}$ and $F_s$ are calculated with equation (6):

$F_{adv} = V_{adv} \times \left( \dfrac{M_{adv\_device}}{\Delta_{adv\_device}} + 24 \times P_{adv\_device} \times F_{elec} \right)$    (6)

Where:
- $V_{adv}$ is the number of days spent on the PMS software on the advanced station, per PMS planning operation, per km, and per year
- $M_{adv\_device}$ is the GWP of the advanced computing station's manufacturing, in kg CO2eq
- $\Delta_{adv\_device}$ is the depreciation period of the advanced computing station, in days
- 24 is the number of hours per day
- $P_{adv\_device}$ is the electrical power of the station in watts, daily weighted
- $F_{elec}$ is the emission factor of the electricity consumption

$F_s$ is calculated analogously with the standard station's parameters. The internal expert considers, in a worst-case scenario, that three maintenance operation plannings conducted on a 100 km-long section for 50 years require 40 days on the PMS software on the standard station and 15 days on the advanced station. This allows calculating $V_{adv}$ and $V_s$ (see Table 3). A standard computer is composed of a laptop and two display screens, while an advanced computer uses a desktop computer with two display screens, a pointing device, and a keyboard. This allows calculating $M_{adv\_device}$ and $M_{s\_device}$ from the ecoinvent database (see Table 4). $\Delta_{adv\_device}$ and $\Delta_{s\_device}$ correspond to 5 years, converted to 1825 days. We assume, in agreement with the internal expert, that a display screen operates for 8 hours per day at 90 W and stays on stand-by at 1 W for the remaining time. A laptop also operates 8 hours per day, at 65 W, but is turned off the rest of the time. Finally, a desktop computer operates all day, i.e. 24 hours, at 300 W. Those assumptions allow us to determine $P_{adv\_device}$ and $P_{s\_device}$ (see Table 3). The emission factor of the electricity consumption, $F_{elec}$, is given in Table 4.
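Read together, equations (2) to (6) amount to a small bookkeeping exercise over the parameters of Tables 3 and 4 below. The following sketch wires them up as a reading aid only; it is not the authors' code, the watt-to-kilowatt-hour conversion for station electricity is our assumption, and the storage and planning terms it returns do not exactly match the published Table 5, which suggests rounding and internal parameters not fully reported in the text:

# Sketch of equations (2)-(6) using the values of Tables 3 and 4 below.
# Reading aid only: the W-to-kWh conversion is our assumption, and the
# outputs do not exactly reproduce the published Table 5.
P_M, L_SECTION = 30, 10                    # years; km

# Equation (3): data collection
n_round, F_M = 1, 0.92                     # rounds/year; kg CO2eq/km
i_data = n_round * P_M * L_SECTION * F_M

# Equation (4): data storage
L_t, A_db = 2737, 1.0                      # km in database; allocation
M_hard, delta_hard, M_run = 167, 5, 1165   # kg CO2eq; years; kg CO2eq/year
i_storage = (P_M * L_SECTION / L_t) * A_db * (M_hard / delta_hard + M_run)

# Equation (6): per-station emission factors
F_ELEC = 59.9e-3                           # kg CO2eq/kWh, French mix

def station_ef(v_days, m_device, delta_days, p_watts):
    """Amortized manufacturing plus daily electricity of one station."""
    return v_days * (m_device / delta_days + 24 * p_watts / 1000 * F_ELEC)

F_adv = station_ef(1e-3, 1052.43, 1825, 330.67)
F_s = station_ef(2.6e-3, 942.31, 1825, 83)

# Equation (5): maintenance planning
n_op, V_t, F_transfer = 1 / 15, 3.33e-3, 107.82e-3
i_maint = n_op * P_M * L_SECTION * (V_t * F_transfer + F_adv + F_s)

# Equation (2): total PMS GWP for the section
print(f"data: {i_data:.1f} kg, storage: {i_storage:.1f} kg, "
      f"planning: {i_maint:.3f} kg CO2eq")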
The detail can be found in the supplementary material (excel spreadsheet). 2.3.2 Pavement management system Data storage MO%4$ and MN45(cid:10) Data analyses are provided by the manufacturer to the internal expert (see Table 4). is the emission factor of the electricity mix. The electricity is mainly consumed in France F-g-1 and we use the French Base carbone emission factor (ADEME, n.d.). (cid:2)(cid:7)4%(cid:10)(cid:4)W(cid:5)4 is calculated based on the worst energy intensity of Internet transmission reported by Coroama and Hilty, i.e. 1.8 kWh/GB (2014), and F-g-1 . To calculate M%$X_$(cid:5)X(cid:8)(cid:6)(cid:5) and M(cid:4)_$(cid:5)X(cid:8)(cid:6)(cid:5) , we use the following global market ({GLO}) processes from ecoinvent v3.4: “Computer laptop” and “Display liquid crystal 17 inches” for the standard station, and “Computer, desktop, without screen”, “Keyboard” and “Pointing device optical mouse with cable” for the advanced station. Table 4 Background data to calculate the PMS’s carbon footprint Parameter F. MO%4$ MN45(cid:10) (cid:2)(cid:5)l(cid:5)(cid:6) (cid:2)(cid:7)4%(cid:10)(cid:4)W(cid:5)4(cid:7) M%$X_$(cid:5)X(cid:8)(cid:6)(cid:5) M(cid:4)_$(cid:5)X(cid:8)(cid:6)(cid:5) Value 0.92 167 1165 59.9*10-3 107.82*10-3 1052.43 942.31 Unit kg CO2eq * km-1 kg CO2eq kg CO2eq * year-1 kg CO2eq * kWh-1 kg CO2eq * GB-1 kg CO2eq kg CO2eq 17 3 Results and interpretation 3.1 Comparison of the three design-build-maintain practices Figure 4 shows the carbon footprint of the pavement over its 30-year-long lifespan depending on the scenario. Scenarios 2 and 3 emit less than scenario 1, respectively by 19 and 22%. This means savings of respectively 961 and 1076 t CO2eq for the 10 km-long section over 30 years. Thus, massive GHG savings occur from the transition from massive to progressive pavement design. But tailored maintenance thanks to the PMS in scenario 3 reduces by an additional 3% the GHG emissions of the pavement over its life cycle compared to a non-data-driven maintenance scheme (scenario 2). If not considering the original construction emissions, BIM generates 11% GHG savings on the maintenance and rehabilitation of the pavement over its service life. The direct impact of the PMS usage – e.g. manufacturing of the digital hardware, infrastructure and data collection vehicle as well as their operation and the use of the software - is highlighted in red (Figure 4): it accounts for 0.52 t CO2eq over the pavement section life cycle, e.g. 0.02% of scenario 3’s carbon footprint, for one data collection per year. If the pavement condition data is collected every three years instead, the PMS direct impact is even lower, decreasing to 0.33 t CO2eq. The carbon footprint of the PMS is negligible compared to 18 the pavement construction and maintenance emissions. On the contrary, it might be surprising to get more emission gains limited to 3.4% from the tailored maintenance. ) q e 2 O C t ( t n i r p t o o f n o b r a C 4 500 4 000 3 500 3 000 2 500 2 000 1 500 1 000 500 0 Scenario 1 Scenario 2 Scenario 3 Build-maintain operations (tCO2eq) PMS (tCO2eq) Figure 4 Carbon footprint of the pavement life cycle depending on the scenario 3.2 Roadworks’ contributions To understand better why the PMS does not bring more substantial gains (-3.3% emissions), Figure 5 presents the impact contribution of each stage of the pavement life cycle. 
The use of the PMS allows to reduce the probability of pavement damage and thus the length to rebuild: scenario 2 presents a reconstruction stage emitting 2.37 kg CO2eq/sm when it represents 2.74 kg CO2eq/sm for scenario 3. The effects of the PMS on maintenance operation gains are a bit more important: the maintenance stage emits 13.3 kg CO2eq/sm in the case of scenario 2, against 11.3 kg CO2eq/sm for scenario 3. 19 ) m s / q e 2 O C g k ( s n o i s s i m e G H G 70.00 60.00 50.00 40.00 30.00 20.00 10.00 0.00 Scenario 1 Scenario 2 Scenario 3 Construction Rehabilitation Maintenance Figure 5 GHG emissions of the pavement life cycle for each stage of its life cycle, per square meter 3.3 PMS contributions All results, presented in Table 5 and Figure 6, relate to the 10 km-long section over 30 years. The GWP of the PMS is 520.7 kg CO2eq. It is divided in 241.1 kg CO2eq (46.3%) for data storage, 277.9 kg CO2eq (53.4%) for data collection activities and 1.7 kg CO2eq (0.3%) for maintenance planning activities, e.g. the usage of the PMS by asset managers. Data collection is the biggest contributor to PMS’s carbon footprint, with 53.4% of the total impact. Data storage is almost equally contributing, with 46.3% of the total GWP, while the PMS’s usage itself is responsible for a negligible amount of GHGs (0.3%). The GWP due to data storage is largely due to the database running, i.e. electricity consumption (97.3%). The amortization of the database’s hardware manufacturing represents only 2.7% of the total GWP. The GWP of maintenance planning activities comes from the local tasks (87.5%) and data transfer tasks (12.5%). Table 5 PMS's carbon footprint and contributions to total PMS total 520.8 GWP (kg CO2eq) 20 Data storage Of which: hardware Of which: running Data collection Maintenance planning Of which: local task Of which: transfer task 241.1 6,5 234.5 277.9 1.7 1.5 .2 46.3 % 53.4 % 0.3 % 2.7 % 97.3 % 87.6 % 12.4 % Figure 6 The GWP distribution of data storage, data collection, and maintenance operation activities 3.4 Sensitivity analyses In the base case, the subgrade is considered to evolve from PF2qs to respectively PF2 and PF3 over 25% of the section’s length each. While this assumption has been proposed based on a road constructor’s expertise, no national statistics can be used to validate its national 21 representativeness. The sensitivity of the results to this evolution is tested based on a “restricted ) % ( s g n i v a s H H G e l c y c e f i L 30% 25% 20% 15% 10% 5% 0% a b c d e g h i j k f - base S2 vs S1 S3 vs S2 S3 vs S1 range” ( Figure 7) and an “extreme range” (Figure 8) of subgrade class evolutions. The second sensitivity analysis can be considered as extreme or rather unlikely, as up to the entire length of the subgrade is tested to be either upgraded or downgraded. The two figures show that, if the subgrade's bearing capacity deteriorates over time (scenarios a to e), the structural reinforcement of the complete section classically carried out to respect the pavement design reliability is globally environmentally equivalent to minimal maintenance consisting of crack filling to delay alligator cracking, or rehabilitating short pavement sections with premature damage due to the worst-performing materials over the section. In this case, the advantage of using a PMS to tailor maintenance overtime may be masked by the safety margins taken in the design process due to the spatial heterogeneity of the pavement’s behavior. 
On the contrary, if the bearing capacity of the subgrade improves, which may occur with the compaction of the pavement under heavy traffic (scenarios g to k), the results highlight that it is environmentally beneficial to know the condition of the pavement to adjust the maintenance and its structural reinforcement. The PMS shows all its potential in these scenarios, as it allows delaying maintenance operations where they are not needed. Quantitatively, results show that savings of up to resp. 7 and 25% of the pavement life cycle GHG emissions can be reached using BIM to tailor maintenance (scenario S3) compared to resp. the classic progressive and massive design and maintenance approaches (scenarios S2 and S1) on a restricted range of subgrade bearing capacity evolutions (Figure 7). When considering an extreme range of subgrade bearing capacity evolutions, these gains reach up to resp. 14 and 30% over the pavement life cycle. If we restrain the scope to the service life and exclude the emissions from the original construction, these gains skyrocket to 47 and 65%.

[Figure 7: bar chart of life cycle GHG savings (%, 0 to 30) for scenarios a to k of the restricted range, comparing S2 vs S1, S3 vs S2 and S3 vs S1]
Figure 7 GHG savings of the different pavement management scenarios over the life cycle – "restricted range" sensitivity analysis

[Figure 8: bar charts of (a) life cycle GHG savings (%, 0 to 30) and (b) service life GHG savings (%, 0 to 70) for scenarios a to k of the extreme range, comparing S2 vs S1, S3 vs S2 and S3 vs S1]
Figure 8 GHG savings of the different pavement management scenarios – over the complete life cycle (a) and on the service life (b) – depending on the evolution of the subgrade bearing capacity – "extreme range" sensitivity analysis

4 Discussion
In this case study, we calculated the impact of using a PMS – the equivalent of a monofunctional I-BIM for pavement maintenance planning – on a high-traffic road life cycle carbon footprint. Nevertheless, I-BIM can provide 47 different types of technical support as defined by Costin et al. (2018), and a multifunctional I-BIM would necessarily have different impacts. For instance, we estimated that an equivalent amount of GHGs is emitted over the pavement lifespan with a progressively designed pavement maintained with the support of a PMS when the bearing capacity of the subgrade deteriorates. But these results may only reflect French maintenance practices, which could be too strict from the point of view of the mechanical maintenance of the structure. Thus, I-BIM could be used to break free from established rules and optimize structures over time according to desired criteria, for example carbon emissions minimization. Indeed, monitoring the condition of pavements in real time through instrumentation and using these data in an I-BIM to simulate the structural evolution over time would greatly advance maintenance management compared to what current structural simulation software programs allow. The big data generated and stored on the condition of the pavements and their environment could seed the development of algorithms based on artificial intelligence (AI) allowing to better simulate the behavior of road structures.
Indeed, road design methods have gone from an empirical approach to a combination of mechanical physics and empiricism ("ME" methods), in the face of the inability of empiricism to capture the unlimited heterogeneity of pavement behavior with limited data. The road sector has then turned to ME methods to reduce the errors of the physical models thanks to calibration coefficients. But ME methods are still limited in forecasting pavement evolutions, and one might think that behavior models generated through machine learning – and especially artificial neural networks – would perform better. Yet, AI algorithms are known to potentially consume a massive amount of electricity and particular electronic elements whose environmental impact could be heavy (Strubell et al., 2019). Future studies should look into these aspects to ensure the development of a climate-positive I-BIM. Moreover, we have shown that, although the carbon footprint of the PMS itself is negligible compared to construction operations, half of its emissions are due to the collection of data and the other half to their storage. Instrumenting the roads with sensors rather than surveying them with internal combustion engine (ICE) vehicles could help reduce the impact of the "maintenance planning" function of an I-BIM, but the sensors would also generate impacts, which need to be investigated further (Pirson and Bol, 2021). In addition, the data generated could be much more massive, and its storage could thus generate substantial GHG emissions. Thus, it would be appropriate in the future to assess the carbon footprint of multifunctional I-BIMs and to study the risk associated with the profusion of data.

Finally, the impact of road construction has been proven to be very limited compared to that of road usage (de Bortoli et al., 2022a; de Bortoli and Agez, Under review; Wang et al., 2012), and I-BIM could help optimize the management of pavements to reduce their environmental impact over their entire life cycle, including through their surface condition and geometry, which impact vehicle consumption and aging (de Bortoli et al., 2022a; Chatti and Zaabar, 2012; Wang et al., 2012; de Bortoli et al., 2022b). In any case, a multitude of different I-BIMs could be developed. And according to Bellman's principle of optimality, the sum of the optima of subsystems is different from the optimum of the system (Bellman, 1952). Also, recursively optimizing subsystems – each I-BIM subsystem corresponding to one specific function – will be necessary to get close to the systemic optimum of the I-BIM over time. Depending on how I-BIM is used and in which technological conditions, its impact will vary. For example, the environmental impact of elements of the technosphere such as electricity (Alderson et al., 2012; Wolfram et al., 2016), metals (Watari et al., 2021), electronic components, and so on, has varied and will keep on varying in time and space. Forward-looking assessments are uncertain but necessary to project the potential consequences of I-BIM on climate. Additionally, we only considered GWP in this study. However, burden-shifting should be monitored and controlled. According to the Information and Communication Technology scoping study published by ADEME, freshwater eutrophication and metal resource depletion would be the first two key burdens shifted with digitalization (Bio Intelligence Service, 2011), and they should be assessed closely when it comes to planning the development of BIM.
5 Conclusions
This article evaluates for the first time the environmental impact of the use of a BIM for infrastructure. More specifically, the carbon footprints of two road designs and three maintenance management approaches are compared: a massive design accompanied by surface maintenance (scenario 1), a progressive design combined with structural maintenance without monitoring of the bearing capacity of the subgrade (scenario 2), and the same progressive design with maintenance optimized thanks to a PMS recording the deflection of the pavement (scenario 3), the PMS being equivalent to a monofunctional I-BIM. This case study shows that the direct carbon footprint of the I-BIM is negligible compared to the carbon footprint of pavement construction and maintenance operations over 30 years. In addition, the indirect effect of the PMS, i.e. its consequences on emissions from the construction operations taking place during the life of the pavement, is rather positive. Nevertheless, I-BIM is still novel and transitioning, and knowledge is thus limited: I-BIM can become a climate drag as much as a climate asset, and further and continued research will be needed to ensure that its development serves the transition to carbon neutrality.

Acknowledgment: The idea of this study was initiated by the National Federation of Public Works in France (FNTP), which wishes to develop knowledge on the environmental impact of the infrastructure sector, particularly on the impact of its digitization on the climate. The authors want to thank the Technical Department of Eurovia Management for its participation in the design of the study on the construction aspect, and specifically Ivan Drouadaine – technical and research director – for the reflection on the design-construction-maintenance alternatives. The authors also thank the French highway concessionaire Autoroutes du Sud de la France (ASF), and especially Albane Hagnere – infrastructure project lead – and Sylvain Guilloteau – I-BIM lead – for their support in collecting data relating to the use of the PMS (data storage and use of software), as well as Cécile Giacobi – pavement asset manager – for her insights and data on pavement monitoring and the use of the PMS software.

Funding source and role: Anne de Bortoli: Investigation; Conceptualization; Methodology; Software; Validation; Visualization; Formal analysis; Project administration; Supervision; Writing - original draft. Yacine Baouch: Investigation; Software; Formal analysis; Visualization; Writing - original draft. Mustapha Masdan: Software; Validation; Visualization; Formal analysis; Writing - review and editing.

REFERENCES
AASHTO, 2008. Mechanical-empirical pavement design guide. American Association of State Highway and Transportation Officials, Washington, D.C.
AASHTO, 1993. AASHTO guide for design of pavement structures. American Association of State Highway and Transportation Officials, Washington, D.C.
Abbondati, F., Biancardo, S.A., Palazzo, S., Capaldo, F.S., Viscione, N., 2020. I-BIM for existing airport infrastructures. Transp. Res. Procedia 45, 596–603. https://doi.org/10.1016/j.trpro.2020.03.052
ADEME, 2016. Potentiel de contribution du numérique à la réduction des impacts environnementaux : état des lieux et enjeux pour la prospective (Rapport final). Temis.
ADEME, n.d. Base carbone® [WWW Document]. URL https://bilans-ges.ademe.fr/fr/basecarbone/donnees-consulter/choix-categorie
AFNOR, 2000. NF P 94-117-1 : Portance des plates-formes, Partie 1 : Module sous chargement statique à la plaque (EV2).
Alderson, H., Cranston, G.R., Hammond, G.P., 2012. Carbon and environmental footprinting of low carbon UK electricity futures to 2050. Energy 48, 96–107. https://doi.org/10.1016/j.energy.2012.04.011
Anand, C.K., Amor, B., 2017. Recent developments, future challenges and new research directions in LCA of buildings: A critical review. Renew. Sustain. Energy Rev. 67, 408–416. https://doi.org/10.1016/j.rser.2016.09.058
AzariJafari, H., Yahia, A., Ben Amor, M., 2016. Life cycle assessment of pavements: reviewing research challenges and opportunities. J. Clean. Prod. 112, 2187–2197. https://doi.org/10.1016/j.jclepro.2015.09.080
Bellman, R., 1952. On the Theory of Dynamic Programming. Proc. Natl. Acad. Sci. 38, 716–719. https://doi.org/10.1073/pnas.38.8.716
Bio Intelligence Service, 2011. Analyse comparée des impacts environnementaux de la communication par voie électronique (Volet courrier électronique : Synthèse). ADEME.
CEREMA, 2017. Dimensionnement des épaisseurs de couche de forme pour PF2qs – Complément au GTR et au GTS (Note d'information Chaussées-Plates-formes-Assainissement n°2, ISBN 978-2-37180-152-3).
Alderson, H., Cranston, G.R., Hammond, G.P., 2012. Carbon and environmental footprinting to 2050. Energy 48, 96–107. futures low carbon UK electricity of https://doi.org/10.1016/j.energy.2012.04.011 Anand, C.K., Amor, B., 2017. Recent developments, future challenges and new research directions in LCA of buildings: A critical review. Renew. Sustain. Energy Rev. 67, 408–416. https://doi.org/10.1016/j.rser.2016.09.058 AzariJafari, H., Yahia, A., Ben Amor, M., 2016. Life cycle assessment of pavements: reviewing research challenges and opportunities. J. Clean. Prod. 112, 2187–2197. https://doi.org/10.1016/j.jclepro.2015.09.080 Bellman, R., 1952. On the Theory of Dynamic Programming. Proc. Natl. Acad. Sci. 38, 716– 719. https://doi.org/10.1073/pnas.38.8.716 Bio Intelligence Service, 2011. Analyse comparée des impacts environnementaux de la communication par voie électronique (Volet courrier électronique : Synthèse). ADEME. CEREMA, 2017. Dimensionnement des épaisseurs de couche de forme pour PF2qs - Complément au GTR et au GTS (ISBN 978-2-37180-152-3 No. Note d’information Chaussées-Plates-forme-Assainissement n°2). 28 Chatti, K., Zaabar, I., 2012. Estimating the effects of pavement condition on vehicle operating costs, NCHRP report. Transportation Research Board, Washington, D.C. Coroama, V.C., Hilty, L.M., 2014. Assessing Internet energy intensity: A review of methods and 63–68. Impact https://doi.org/10.1016/j.eiar.2013.12.004 Environ. Assess. results. Rev. 45, Corte, Fevre, Havard, Joubert, Perrot, Morel, Quibel, Schaeffner, Veysset, 2000a. Réalisation des remblais et des couches de forme - Fascicule I - Principes généraux. Sétra-LCPC. Corte, Fevre, Havard, Joubert, Perrot, Morel, Quibel, Schaeffner, Veysset, 2000b. Réalisation des remblais et des couches de forme - Fascicule II - Annexes. Sétra-LCPC. Corte, Guidoux, al., 1998. Catalogue des structures types de chaussées neuves. Sétra-LCPC. Costin, A., Adibfar, A., Hu, H., Chen, S.S., 2018. Building Information Modeling (BIM) for transportation infrastructure – Literature review, applications, challenges, and recommendations. 257–281. https://doi.org/10.1016/j.autcon.2018.07.001 Autom. Constr. 94, de Bortoli, A., 2020. Asphalt pavement resurfacing: a review toward a better selection and representativeness of LCI, in: Pavement, Roadway, and Bridge Life Cycle Assessment 2020. CRC Press, Taylor & Francis Group, London, pp. pp12-23. de Bortoli, A., 2018. CHAPTER 5 – Regionalization and construction of specific Life Cycle Inventories [in French], in: Toward Sustainable Road Maintenance: Taking into Account Vehicle-Pavement Interactions into the Decision-Making Process - Illustration by a French Highway Case Study [in French]. University Paris East - Ecole des Ponts ParisTech. de Bortoli, A., Agez, M., Under review. EEIO efficiently sketches large-scale environmental transition plans: illustration by Canada’s road industry. de Bortoli, A., Féraille, A., Leurent, F., 2022a. Towards road sustainability – Part II: applied holistic assess-ment and lessons learned from French highways resurfacing strategies. Sustainability Under review. de Bortoli, A., Feraille, A., Leurent, F., 2022b. Towards Road Sustainability—Part I: Principles and Holistic Assessment Method for Pavement Maintenance Policies. Sustainability. Freitag, C., Berners-Lee, M., Widdicks, K., Knowles, B., Blair, G.S., Friday, A., 2021. The real climate and transformative impact of ICT: A critique of estimates, trends, and regulations. Patterns 2, 100340. 
International Organization for Standardization, 2018. ISO 19650-1:2018 – Organization and digitization of information about buildings and civil engineering works, including building information modelling (BIM) – Information management using building information modelling – Part 1: Concepts and principles.
International Organization for Standardization, 2006a. ISO 14040:2006 – Environmental management – Life cycle assessment – Principles and framework.
International Organization for Standardization, 2006b. ISO 14044:2006 – Environmental management – Life cycle assessment – Requirements and guidelines.
LCPC-Sétra, 1994. Conception et dimensionnement des structures de chaussée. LCPC ; SETRA, Paris ; Bagneux.
Lorino, T., Lepert, P., Riouall, 2006. Application à la campagne IQRN des méthodes statistiques d'analyse de l'évolution des chaussées. Bull. Lab. Ponts Chaussées, 25–41.
Malagnino, A., Montanaro, T., Lazoi, M., Sergi, I., Corallo, A., Patrono, L., 2021. Building Information Modeling and Internet of Things integration for smart and sustainable environments: A review. J. Clean. Prod. 312, 127716. https://doi.org/10.1016/j.jclepro.2021.127716
https://doi.org/10.1016/j.autcon.2020.103379 Vignali, V., Acerra, E.M., Lantieri, C., Di Vincenzo, F., Piacentini, G., Pancaldi, S., 2021. Building information Modelling (BIM) application for an existing road infrastructure. Autom. Constr. 128, 103752. https://doi.org/10.1016/j.autcon.2021.103752 Vilches, A., Garcia-Martinez, A., Sanchez-Montañes, B., 2017. Life cycle assessment (LCA) of building refurbishment: A literature review. Energy Build. 135, 286–301. https://doi.org/10.1016/j.enbuild.2016.11.042 Wang, T., Lee, I.-S., Kendall, A., Harvey, J., Lee, E.-B., Kim, C., 2012. Life cycle energy consumption and GHG emission from pavement rehabilitation with different rolling resistance. J. Clean. Prod. 33, 86–96. https://doi.org/10.1016/j.jclepro.2012.05.001 Watari, T., Nansai, K., Nakajima, K., 2021. Major metals demand, supply, and environmental impacts to 2100: A critical review. Resour. Conserv. Recycl. 164, 105107. https://doi.org/10.1016/j.resconrec.2020.105107 Wolfram, P., Wiedmann, T., Diesendorf, M., 2016. Carbon footprint scenarios for renewable 236–245. J. Clean. Prod. 124, electricity https://doi.org/10.1016/j.jclepro.2016.02.080 Australia. in 30 31
ai_researcher
4
Biomedical_generative_pre-trained_based_transformer_language_model_for_age-related_disease_target_discovery.pdf
Biomedical Language Models are Robust to Sub-optimal Tokenization

Bernal Jiménez Gutiérrez, Huan Sun, Yu Su
The Ohio State University
{jimenezgutierrez.1,sun.397,su.809}@osu.edu

arXiv:2306.17649v3 [cs.CL] 10 Jul 2023

Abstract
As opposed to general English, many concepts in biomedical terminology have been designed in recent history by biomedical professionals with the goal of being precise and concise. This is often achieved by concatenating meaningful biomedical morphemes to create new semantic units. Nevertheless, most modern biomedical language models (LMs) are pre-trained using standard domain-specific tokenizers derived from large-scale biomedical corpus statistics without explicitly leveraging the agglutinating nature of biomedical language. In this work, we first find that standard open-domain and biomedical tokenizers are largely unable to segment biomedical terms into meaningful components. Therefore, we hypothesize that using a tokenizer which segments biomedical terminology more accurately would enable biomedical LMs to improve their performance on downstream biomedical NLP tasks, especially ones which involve biomedical terms directly, such as named entity recognition (NER) and entity linking. Surprisingly, we find that pre-training a biomedical LM using a more accurate biomedical tokenizer does not improve the entity representation quality of a language model, as measured by several intrinsic and extrinsic measures such as masked language modeling (MLM) prediction accuracy as well as NER and entity linking performance. These quantitative findings, along with a case study which explores entity representation quality more directly, suggest that the biomedical pre-training process is quite robust to instances of sub-optimal tokenization.¹

1 Introduction
In order to communicate complex concepts precisely and efficiently, biomedical terminology has been designed by researchers and medical professionals by combining existing meaningful morphemes to create new concepts.

Ideal Tokenization     BERT Tokenization       PubMedBERT Tokenization
nephr-o-pathy          ne-ph-rop-athy          nephropathy
nephr-ectomy           ne-ph-re-ct-omy         nephrectomy
nephr-o-blastoma       ne-ph-ro-bla-sto-ma     nephr-oblastoma
nephr-o-calcin-osis    ne-ph-ro-cal-cino-sis   nephr-ocalcin-osis

Table 1: Sub-optimally tokenized biomedical terms containing the 'nephro' morpheme illustrate the limitations of current tokenization methods.

Many biomedical terms use general rules to combine meaningful morphemes taken from Greek and Latin (Banay, 1948). For example, these morphemes often have vowels that can be omitted, such as the '-o-' in 'nephro' from Table 1. This '-o-' acts as a joint-stem to connect two consonantal roots (e.g. 'nephr-' + '-o-' + '-pathy' = 'nephropathy'), but the '-o-' is often dropped when connecting to a vowel-stem (e.g. 'nephr-' + '-ectomy' = 'nephrectomy', instead of 'nephr-o-ectomy'). Students in biomedical fields often learn the meaning of these elements as well as the word formation rules to be able to infer the meaning of unfamiliar words and recall complex terms more easily.²

Even though the agglutinating nature of biomedical terminology is well known, none of the existing pre-trained language models consider this information explicitly when building their tokenizers. As shown in Table 1, frequent words such as 'nephropathy' and 'nephrectomy' are tokenized by BERT (Devlin et al., 2019) into meaningless subwords ('ne-ph-rop-athy' and 'ne-ph-re-ct-omy') while remaining as whole words for PubMedBERT (Gu et al., 2021). For more infrequent but still important medical terms like 'nephroblastomas' and 'nephrocalcinosis', PubMedBERT encodes them as 'nephr-oblastoma' and 'nephr-ocalcin-osis'.

¹Our code and pre-trained models are publicly available at https://github.com/OSU-NLP-Group/Bio-Tokenization
²The existence of popular books such as Collins (2007) emphasizes the importance of understanding biomedical terminology design for medical professionals.
As shown in Table 1, frequent words such as ‘nephropathy’ and ‘nephrectomy’ are tokenized by BERT (Devlin et al., 2019) into meaningless subwords (‘ne-ph-rop-athy’ and ‘ne-ph-re-ct-omy’) while remaining as whole words for PubMedBERT (Gu et al., 2021). For more infrequent but still important medical terms like ‘nephroblastomas’ and ‘nephrocalcinosis’, PubMedBERT encodes them as ‘nephr-oblastoma’ and ‘nephr-ocalcin-osis’. We argue that there are more meaningful and efficient ways to tokenize both frequent and infrequent medical terms using meaningful morphemes like ‘nephr(o)’ (of a kidney), ‘-pathy/-(o)sis’ (disease), ‘-ectomy’ (surgical procedure), ‘calcin’ (calcification) and ‘blastoma’ (type of cancer), which could help models transfer signal directly into infrequent and even out-of-vocabulary terms.

In this work, we first leverage large-scale morpheme segmentation datasets to evaluate current tokenization methods more rigorously, both quantitatively and qualitatively. Using the annotated morpheme segmentation dataset from the SIGMORPHON 2022 Shared Task (Batsuren et al., 2022), we are able to determine that current tokenizers such as BERT and the more biomedically relevant PubMedBERT are very poorly aligned with human judgments on morpheme segmentation, even when evaluating on biomedical terminology specifically.

Given that the PubMedBERT tokenizer, despite its low performance on biomedical morpheme segmentation, shows some improvement over BERT’s tokenizer due to its use of biomedical corpus statistics, we hypothesize that using a tokenizer that aligns more strongly with standard biomedical terminology construction for pre-training could achieve improved performance in downstream tasks. In order to verify this hypothesis, we create a new tokenizer, BioVocabBERT, which uses a vocabulary derived from combining a fine-tuned morpheme segmentation model with biomedical domain knowledge from the Unified Medical Language System (UMLS) (Bodenreider, 2004), a large scale biomedical knowledge base. Subsequently, we leverage BioVocabBERT, which greatly outperforms the PubMedBERT tokenizer on biomedical morpheme segmentation, to pre-train a biomedical language model by the same name and compare its performance with a replicated PubMedBERT model (to control for any potential differences in the pre-training process) on two downstream tasks: named entity recognition (NER) and entity linking.

Surprisingly, we find that the performance of our BioVocabBERT model is remarkably similar to the one obtained by our PubMedBERT replica throughout most datasets tested in fully supervised NER, low-resource NER and zero-shot entity linking. Small improvements arise in low-resource NER and zero-shot entity linking but results are inconsistent across datasets. Additionally, we examine the model’s robustness to segmentation failures in a small scale case study which suggests that even if the model’s word embeddings are biased by tokenization errors, the model’s parameters are able to overcome such failures quite successfully. Finally, we measure our models’ language modeling accuracy by word frequencies and find a small word frequency trade-off whose exploration we leave for future work.

1 Our code and pre-trained models are publicly available at https://github.com/OSU-NLP-Group/Bio-Tokenization
2 The existence of popular books such as Collins (2007) emphasizes the importance of understanding biomedical terminology design for medical professionals.
Given these findings, we conclude that biomedical language model pre-training is quite robust to tokenization decisions which are not well aligned with human judgments, even when dealing with highly agglutinating biomedical terminology.

2 Related Work

2.1 Domain-Specific Pre-training

Recent work on domain-specific language models has demonstrated fairly conclusively that using domain-specific data for pre-training significantly improves language model performance on in-domain downstream tasks. Many different such strategies have been proposed with varying degrees of in-domain vs. out-of-domain pre-training data in fields such as biomedicine (Peng et al., 2020; Lee et al., 2019; Gu et al., 2021; El Boukkouri et al., 2022), finance (Wu et al., 2023), law (Chalkidis et al., 2020), scientific research (Maheshwari et al., 2021), clinical practice (Alsentzer et al., 2019) and social media (DeLucia et al., 2022). For biomedical language models specifically, most work agrees that pre-training from scratch using an in-domain corpus, as done by Gu et al. (2021), leads to small but measurable performance improvements over other pre-training strategies.

Apart from introducing pre-training from scratch, Gu et al. (2021) demonstrated the limitations of general domain tokenization for domain-specific pre-training by showing downstream improvements obtained from using a domain-specific tokenizer, created by standard tokenizer building algorithms such as WordPiece (Schuster and Nakajima, 2012) and BPE (Gage, 1994) on an in-domain corpus. In-domain tokenizers have also been shown to improve performance in other domains such as law (Chalkidis et al., 2020) and more specific ones like cancer (Zhou et al., 2022). As a result, most recent biomedical LMs use tokenizers created from in-domain corpora statistics (Yasunaga et al., 2022; Luo et al., 2022).

2.2 Limits of Unsupervised Tokenization

Even though domain-specific tokenizers have become widely used for biomedical LMs, they are still constructed using mainly unsupervised algorithms which leverage information theoretic metrics from large-scale corpora to create subword vocabularies. However, as reported in the SIGMORPHON 2022 Shared Task for morpheme segmentation (Batsuren et al., 2022), these methods align little with morphological human judgments. Hofmann et al. (2021) explore how poor segmentation affects performance by injecting rule-based derivational morphology information into the tokenization process and showing improvements in word classification tasks, especially for low-frequency words. As far as we know, our work is one of the first to perform a similar morpheme segmentation analysis on biomedical tokenizers, even though biomedical terminology is highly agglutinating by design and should benefit from such analysis.

Furthermore, Hofmann et al. (2020) show that introducing derivational morphology signal into BERT via fine-tuning improves its derivation generation capabilities, suggesting that the performance of language models could be improved by adding morphologically relevant signal into their pre-training. Nevertheless, not much work apart from our current study has explored how introducing such signals could affect the pre-training process directly, especially not in biomedical language models.
3 Supervised Morpheme Segmentation

Recent work evaluating morphological segmentation at scale, such as the SIGMORPHON 2022 Shared Task (Batsuren et al., 2022), demonstrates the impressive performance of supervised methods compared to unsupervised methods like BPE (Gage, 1994) or Morfessor (Creutz et al., 2005), even for languages with more limited annotated data than English. In the SIGMORPHON 2022 Shared Task, the organizers compile a large quantity of segmented morpheme data, over half a million English words obtained from Wiktionary and other sources using both hand-crafted and automated methods (Batsuren et al., 2021).

3.1 Evaluating Biomedical Segmentation

By comparing the SIGMORPHON dataset with words which appear frequently in the Unified Medical Language System (UMLS) (Bodenreider, 2004), a large scale biomedical knowledge base, we find that a small percentage (approximately 10%) of all annotated words are relevant biomedical terms. We therefore leverage this biomedical subset to evaluate the biomedical morpheme segmentation performance of several current tokenization methods.

                     Train     Dev      Test
English Set          458,692   57,371   57,755
Biomedical Subset    33,221    4,112    4,123

Table 2: Dataset statistics for the SIGMORPHON 2022 morpheme segmentation dataset and the biomedical dataset, as defined in §3.1.

Due to the large difference in scale between the full dataset and the biomedical subset, we use the full SIGMORPHON dataset for training, including both general English and biomedical words. We use the same segmentation F1 score as the SIGMORPHON Shared Task for evaluation. This score is calculated as the harmonic mean of precision, the ratio of correctly predicted morphemes over all predicted morphemes, and recall, the ratio of correctly predicted morphemes over all gold-label units. For more information about these evaluation metrics, we refer the interested reader to Section 2.3 of Batsuren et al. (2022).

                          Segmentation F1
BERT Tokenizer            16.2
PubMedBERT Tokenizer      19.2
Fine-Tuned CANINE         74.1
BioVocabBERT Tokenizer    48.5

Table 3: Morpheme segmentation performance of baseline and novel tokenizers on the biomedical subset of the SIGMORPHON 2022 development set.

As seen in Table 3, both BERT and PubMedBERT achieve under 20% segmentation F1 performance on the SIGMORPHON biomedical development subset. In order to understand why current tokenizers obtain such dramatically low segmentation accuracy, we analyze 50 instances of sub-optimal tokenization. Apart from words which are not segmented because they exist in the PubMedBERT vocabulary, most errors fall into three main categories: 1) missing units, 2) compound units and 3) ambiguous connecting vowels. Table 4 shows descriptions and examples of each type.

Missing Units: Important biomedical morphemes missing from the vocabulary. Example: onc-oneu-ral; ‘onco’ (cancer-related) is not in the vocabulary.
Compound Units: Splitting meaningful units while creating meaningless ones. Example: neuroprot-ectant; ‘neuroprot’ is meaningless and splits the meaningful morpheme ‘protect’.
Connecting Vowels: Vowels which connect two morphemes (more ambiguous). Example: bronch-olith; optimal segmentation could split ‘o’ from ‘lith’.

Table 4: Sub-optimal segmentation types from the biomedical subset of SIGMORPHON 2022.
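For concreteness, the segmentation F1 described above can be computed as follows. This is our reading of the metric, with morphemes matched as multisets; Section 2.3 of Batsuren et al. (2022) gives the authoritative definition.

```python
# Sketch of morpheme segmentation precision/recall/F1 (assumption: multiset
# matching of predicted vs. gold morphemes).
from collections import Counter

def segmentation_f1(predicted: list[str], gold: list[str]) -> float:
    correct = sum((Counter(predicted) & Counter(gold)).values())
    if correct == 0:
        return 0.0
    precision = correct / len(predicted)
    recall = correct / len(gold)
    return 2 * precision * recall / (precision + recall)

# PubMedBERT's "nephr-oblastoma" vs. the ideal "nephr-o-blastoma":
print(segmentation_f1(["nephr", "oblastoma"], ["nephr", "o", "blastoma"]))  # 0.4
```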
3.2 CANINE Fine-Tuning

As opposed to sub-word tokenization, morpheme segmentation does not require sub-word components (morphemes) to map directly onto a word’s characters. For instance, in Table 5, SIGMORPHON annotations transform the root ‘neur’ into the word ‘neuron’, introducing further flexibility and complexity to the task. In order to adapt morpheme segmentation annotations to standard tokenization, we design rule-based heuristics that map each morpheme onto a subset of characters in the original word. Due to this new formulation and the success of transformer based models on this shared task, we choose a character based language model named CANINE (Clark et al., 2022) as the model to train for character tagging as morpheme segmentation. More formally, the segmentation task is re-framed as classifying each character into a B(egin) or I(nside) tag, where the B tag indicates the start of a new morpheme or token.

Original Word    Segmentation        BI Tags
onconeural       onco ##neur ##al    BIIIBIIIBI

Table 5: Example of a biomedical term segmented into morphemes and reformulated into BI tags for CANINE fine-tuning.

After fine-tuning CANINE on the full SIGMORPHON 2022 training set to create a supervised tokenization system, as seen in Figure 1 (left), we find that it achieves a 74% segmentation F1 score on the biomedical SIGMORPHON subset, a very strong result compared to current tokenizers. For reference, the best segmentation F1 score reported on the English word-level test set of the SIGMORPHON 2022 Shared Task is 93.7% by the DeepSpin team (Peters and Martins, 2022). Even though this score is not comparable to ours due to our use of a biomedical development subset for evaluation, we note that our fine-tuned CANINE model’s performance is quite strong given that it is designed for pure tokenization as explained above.

3.3 BioVocabBERT: Domain Knowledge Injection

Despite its satisfactory segmentation performance, it is challenging to use our CANINE-based segmentation model as a language model tokenizer due to its vocabulary-less nature. Since this model can segment words arbitrarily using our character classification framework, unseen words can be split into subwords which have never been seen by the LM. Thus, pre-training a language model using this tokenizer would require allowing the model to increase its vocabulary size without limit while training and using an out-of-vocabulary token when unseen tokens are encountered during inference. This would lead to a language model with an exceedingly large vocabulary size (which would increase the cost of pre-training significantly) and potentially limited generalization ability to unseen tokens.

To tackle this problem, we introduce a tokenizer which uses the same left-to-right decoding algorithm used by BERT and PubMedBERT but replaces its vocabulary with one designed for biomedical segmentation. In order to build a vocabulary which covers important biomedical tokens, we leverage the Unified Medical Language System, a medical knowledge base which contains approximately 15 million medical concept phrases. As shown in Figure 1 (center), we extract all single words from UMLS concept phrases and segment each of them using the CANINE-based tokenizer. This produces around 250,000 unique subwords, which we further reduce to 55,580 by eliminating ones which only appear once in the CANINE segmented set of UMLS words. In order to avoid segmenting standard English words in unintuitive ways due to the higher proportion of biomedical subwords, we augment our 55,580 biomedical subwords with the original BERT vocabulary, as seen in Figure 1 (right). After removing duplicate tokens, we are left with a vocabulary of 80,181 tokens. This carefully designed vocabulary enables our new tokenizer, BioVocabBERT, to obtain a segmentation score of 48.5% on the SIGMORPHON biomedical subset as seen in Table 3, outperforming the best scoring current wordpiece-based tokenizer, PubMedBERT, by almost 30 points.
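Two of the steps just described lend themselves to compact sketches: re-framing a gold segmentation as character-level BI tags (§3.2, Table 5) and assembling the BioVocabBERT vocabulary (§3.3). In the sketch below, the `segment` argument stands in for the fine-tuned CANINE segmenter, and the frequency threshold mirrors the "appears more than once" rule in the text; everything else is illustrative.

```python
from collections import Counter

def to_bi_tags(morphemes: list[str]) -> str:
    """Character-level B(egin)/I(nside) tags for a segmented word (Table 5)."""
    return "".join("B" + "I" * (len(m) - 1) for m in morphemes)

assert to_bi_tags(["onco", "neur", "al"]) == "BIIIBIIIBI"  # 'onconeural'

def build_vocabulary(umls_words, segment, bert_vocab, min_freq=2):
    """Segment every unique UMLS word with a trained segmenter, keep subwords
    seen at least `min_freq` times, and merge with the original BERT
    vocabulary (Figure 1); duplicates collapse in the set union."""
    counts = Counter(sub for word in umls_words for sub in segment(word))
    biomedical_subwords = {sub for sub, c in counts.items() if c >= min_freq}
    return biomedical_subwords | set(bert_vocab)
```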
Figure 1: Overall process for creating our vocabulary for BioVocabBERT’s tokenizer. We first train a CANINE model on the SIGMORPHON training set (left). We then use this segmentation model to segment all unique UMLS words (center). Finally, we combine all UMLS subwords with frequency greater than 1 with the original BERT vocabulary to make our BioVocabBERT vocabulary (right).

4 Experimental Setup

4.1 Biomedical Pre-training

In order to discover whether morpheme segmentation performance has an effect on the biomedical language model pre-training process, we compare the downstream performance of two language models using tokenizers with very distinct segmentation performance but otherwise equivalent pre-training processes. The first model is pre-trained using the same tokenizer as PubMedBERT while the other one uses our designed BioVocabBERT tokenizer and is thus referred to by the same name.

As with other BERT based architecture models, we pre-train these using the masked language modeling objective and choose standard token masking percentages used in previous work (Gu et al., 2021). We use the easily accessible and readily preprocessed corpus used for BlueBERT (Peng et al., 2020) pre-training, which contains around 4 billion words.3 For pre-training, we base our implementation on the work by Izsak et al. (2021) to obtain the most efficient pre-training possible. We describe the data, optimization steps, batch size, hardware and other pre-training details used for both models in Table 13 and Appendix B.

4.2 Tasks

In order to determine how tokenization improvements affect the quality of biomedical concept representation in language models, we narrow our task selection to those which are more closely related to entity understanding instead of overall sentence understanding, as is the case with relation extraction, sentence similarity or natural language inference. We select named entity recognition (NER) and entity linking (EL), also referred to as concept normalization, as the two biomedical NLP tasks which most closely meet this criterion.

NER. We focus on evaluating our models in the more standard fully supervised fine-tuning NER setting, as is done in previous work (Lee et al., 2019; Gu et al., 2021). We run hyperparameter tuning on the development set; the search space used can be found in Appendix A. We also study our models’ low-resource NER performance using 500 and 1,000 examples, again with hyperparameter selection as described in Appendix A. We report results on the development set only for our low-resource NER experiments.

Entity Linking. For entity linking, we evaluate our models’ zero-shot performance as done by Liu et al. (2021), which allows us to measure entity representation quality as directly as possible.

4.3 Datasets

For NER, we use all datasets from BLURB (Gu et al., 2021), a comprehensive biomedical NLP benchmark. For entity linking, we follow previous work (Liu et al., 2021) and use four popular entity linking datasets, three of which are also included in BLURB as NER datasets. All dataset names and statistics can be found in Table 6.

3 https://github.com/ncbi-nlp/BlueBERT/blob/master/README.md#pubmed
Below, we provide brief descriptions of each dataset we use; for more information about processing and training splits for these datasets, we refer the interested reader to the dataset descriptions in Gu et al. (2021) and Liu et al. (2021).

                  NER   EL    Train     Dev      Test
BC5CDR-disease    X     X     4,182     4,244    4,424
BC5CDR-chem       X     X     5,203     5,347    5,385
NCBI-Disease      X     X     5,134     787      960
JNLPBA            X           46,750    4,551    8,662
BC2GM             X           15,197    3,061    6,325
MedMentions             X     282,091   71,062   70,405

Table 6: Dataset statistics.

BC5CDR. The BioCreative V Chemical-Disease Relation corpus (Li et al., 2016) contains both disease and chemical annotations on PubMed abstracts. We evaluate disease and chemical entity extraction and linking separately following previous work (Gu et al., 2021).

NCBI-Disease. The National Center for Biotechnology Information Disease corpus (Doğan et al., 2014) contains disease name and concept annotations for 793 PubMed abstracts.

JNLPBA. The Joint Workshop on Natural Language Processing in Biomedicine and its Applications dataset (Collier and Kim, 2004) contains 2,000 abstracts from MEDLINE selected and annotated by hand for gene related entities.

BC2GM. The BioCreative II Gene Mention corpus (Smith et al., 2008) contains 17,500 sentences from PubMed abstracts labeled for gene entities.

MedMentions. MedMentions (Mohan and Li, 2019) is a large-scale entity linking dataset containing over 4,000 abstracts and around 350,000 mentions linked to the 2017AA version of UMLS.

5 Results & Discussion

5.1 Fully-Supervised NER

As seen in Table 7, our language models obtain competitive fully-supervised NER results compared to the results reported by Gu et al. (2021), validating our pre-training and fine-tuning process. We first find that the differences in performance between our PubMedBERT and BioVocabBERT models are very small and inconsistent across NER datasets. We note that the difference in performance between these models is often within the standard deviation reported within each dataset. Additionally, we see no pattern in performance differences based on entity types. For disease NER, BioVocabBERT underperforms on NCBI-Disease but overperforms on BC5CDR-disease, while in gene based NER, BioVocabBERT outperforms by a slightly larger margin on JNLPBA but performs only on par on BC2GM. This seems to suggest that, at least when fine-tuning on a significant number of examples, PubMedBERT can very adequately compensate for instances of sub-optimal biomedical segmentation.

                  PubMedBERT∗   PubMedBERT    BioVocabBERT
NCBI-Disease      87.8          87.1 ± 0.8    86.7 ± 0.4
BC5CDR-disease    85.6          84.7 ± 0.2    85.2 ± 0.3
BC5CDR-chem       93.3          93.0 ± 0.3    93.4 ± 0.4
JNLPBA            79.1          78.2 ± 0.6    78.9 ± 0.1
BC2GM             84.5          83.4 ± 0.2    83.5 ± 0.3

Table 7: Comparison of fully supervised NER performance for the originally reported PubMedBERT, denoted by ∗, our PubMedBERT replica and our BioVocabBERT model. We report 3 runs for each of our models and provide the average entity-level F1 on the test set along with its standard deviation.

5.2 Low-Resource NER

To explore whether parity in fully-supervised NER comes from the effects of large scale fine-tuning or from the underlying models’ entity representation quality, we carry out a low-resource NER study using only 500 and 1,000 training examples. We present results only on the development set for this setting. Our results suggest that, even when fewer training examples are used for fine-tuning, the difference between models is small.
As shown in Table 8, BioVocabBERT obtains small and inconsistent improvements in downstream performance over PubMedBERT tokenization across NER datasets in the low-resource setting. Nevertheless, we note that the largest gains for BioVocabBERT in these low data regimes come from chemical and genetic NER datasets (BC2GM and JNLPBA), suggesting that our tokenization strategy could be especially beneficial for irregular genetic entities.

                  PubMedBERT      BioVocabBERT    Percent ∆
                  500     1000    500     1000    500     1000
NCBI-disease      77.2    81.2    77.7    80.6    0.5     −0.6
BC5CDR-disease    79.0    81.4    79.3    81.6    0.3     0.2
BC5CDR-chem       91.7    92.2    92.1    92.8    0.4     0.6
BC2GM             69.5    75.5    71.5    76.9    2.0     1.4
JNLPBA            75.6    77.2    76.3    77.6    0.7     0.4

Table 8: Comparing our models on low-resource NER with 500 and 1,000 examples. We report the entity-level F1 on the development set for this setting.

5.3 Zero-Shot Entity Linking

Following our low-resource NER results, we evaluate entity representation quality even more directly by measuring the zero-shot entity linking performance of both models. As shown in Table 10, the performance of our models exceeds the original PubMedBERT results reported by Liu et al. (2021), validating the quality of our pre-training procedure. We note that the main difference between our pre-training setup and the original PubMedBERT setting is the use of the masked language modeling (MLM) objective alone instead of both MLM and next-sentence prediction (NSP) objectives. This suggests that the use of the MLM objective only might be better aligned with obtaining high quality entity representations.

                  PubMedBERT∗     PubMedBERT      BioVocabBERT
                  R@1    R@5      R@1    R@5      R@1    R@5
NCBI-Disease      77.8   86.9     88.5   93.5     87.6   92.0
BC5CDR-disease    89.0   93.8     91.7   95.0     91.0   94.1
BC5CDR-chem       93.0   94.6     95.3   96.1     95.4   95.9
MedMentions       43.9   64.7     44.9   65.4     45.4   65.9

Table 10: Comparison of zero-shot entity linking performance for the originally reported PubMedBERT, denoted by ∗, our PubMedBERT replica and our BioVocabBERT model.

Additionally, when comparing our models, we find that BioVocabBERT slightly underperforms the PubMedBERT replica on all datasets except the more diverse MedMentions dataset. However, we note that the improvements obtained on the MedMentions dataset are also quite small, at under 1%. Given the zero-shot nature of this experiment, this suggests that the entity representations obtained by these two models are of comparable quality and that PubMedBERT’s pre-training enables a high degree of robustness around sub-optimal tokenization.
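As a rough sketch of this zero-shot protocol (the exact evaluation follows Liu et al. (2021)), linking amounts to nearest-neighbor search between ‘[CLS]’ embeddings of mentions and of ontology concept names. The checkpoint below is a generic placeholder, not one of the models trained here.

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Placeholder checkpoint; any BERT-style encoder can be substituted.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased").eval()

def cls_embed(texts):
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state
    # L2-normalized '[CLS]' vectors, so dot products below are cosine sims.
    return torch.nn.functional.normalize(hidden[:, 0], dim=-1)

def link(mentions, concept_names, k=5):
    scores = cls_embed(mentions) @ cls_embed(concept_names).T
    return scores.topk(k, dim=-1).indices  # top-k concept indices, as in R@k
```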
5.4 Case Study: Tokenization Robustness

As shown in the NER and entity linking experiments above, the downstream performance of biomedical language models appears to be mostly robust to biomedical concept segmentation which is not well aligned with human judgments. To analyze this phenomenon, we take a closer look at how our pre-trained PubMedBERT model represents biomedical concepts which are segmented in apparently erroneous ways by the PubMedBERT tokenizer.

Table 9 contains two words from the biomedical subset of SIGMORPHON which were sub-optimally segmented by PubMedBERT. For each of these words we include two sets of their 5 nearest neighbors according to different embedding types. The first set shows the nearest neighbors obtained using embeddings computed by averaging all subword embeddings that make up a specific word. The second set comes from using the ‘[CLS]’ token embedding of our PubMedBERT model, often used for downstream tasks in standard fine-tuning. The pool of words from which these neighbors are obtained consists of all the unique words in the UMLS phrases used in the construction of BioVocabBERT.

Sub-optimal Tokenization: epicarditis (epic-ardi-tis)
  Word Embedding 5-NN: epicardiectomy (epic-ardi-ectomy), pancarditis (panc-ardi-tis), epicardin (epic-ardi-n), epicardium (epic-ardi-um), endopericarditis (endop-eric-ardi-tis)
  [CLS] Embedding 5-NN: pancarditis (panc-ardi-tis), perimyocarditis (peri-my-ocardi-tis), myopericarditis (myo-peri-car-di-tis), myoendocarditis (myo-end-ocardi-tis), pleuropericarditis (pleu-rop-eric-ardi-tis)

Sub-optimal Tokenization: neuromodulation (neuromod-ulation)
  Word Embedding 5-NN: neuromodulations (neuromod-ulations), neuromodulators (neuromod-ulators), neuromodulator (neuromod-ulator), immunomodulation (immunomod-ulation), immunoregulation (immunoreg-ulation)
  [CLS] Embedding 5-NN: neuroexcitation (neuro-exc-itation), neuroregulation (neuro-reg-ulation), neuromodulations (neuromod-ulations), neuromodulators (neuromod-ulators), neuromodulator (neuromod-ulator)

Table 9: Two PubMedBERT sub-optimal tokenization examples and their nearest neighbors with respect to word embeddings and ‘[CLS]’ token embeddings. In the original typeset table, neighbors in bold are terms that were missed by the word embeddings but are retrieved correctly by the ‘[CLS]’ embeddings, repairing the sub-optimal tokenization bias.

Since it comes directly from subword embeddings, the first set of neighbors is meant to show whether the tokenizer’s sub-optimal segmentation introduces a bias which distracts the model from the true semantics of a biomedical term. The second neighborhood is meant to more faithfully show us how the model represents a biomedical concept. Comparing these two sets lets us determine whether the bias introduced by subword embeddings is successfully regulated by the overall model.

We first observe that the bias we expected to find in the word embedding neighborhoods is evidently present. Most words in these first sets are segmented in exactly the same ways as the original sub-optimally segmented word. As seen in Table 9, the word ‘neuromodulation’ is segmented by PubMedBERT as ‘neuromod-ulation’, splitting the meaningful ‘modulate’ morpheme down the middle, an example of the compound error in Table 4. Due to this, other words with the same subword but different semantics, such as ‘immunomodulation’ (‘immunomod-ulation’) and ‘immunoregulation’ (‘immunoreg-ulation’), are added to the word embedding neighborhood. This is also seen in the second example, where the word embedding neighbors of ‘epicarditis’ (‘epic-ardi-tis’) all contain at least two of the three original subwords. If these word embeddings were the final model representations, this bias could lead to considerable errors in downstream tasks like entity linking by up-weighting terms based on sub-optimal subwords.

Fortunately, we observe that the language model is able to readily overcome the bias observed in the word embeddings when it comes to the final ‘[CLS]’ representations. The second neighborhoods often contain semantically relevant words which were segmented differently than the original, such as ‘neuroexcitation’ (‘neuro-exc-itation’) and ‘neuroregulation’ (‘neuro-reg-ulation’) for ‘neuromodulation’ (‘neuromod-ulation’), or ‘perimyocarditis’ (‘peri-my-ocardi-tis’) for ‘epicarditis’ (‘epic-ardi-tis’), which both denote types of inflammation of the pericardium.
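The two embedding types behind Table 9 can be sketched as follows, reusing placeholder `model` and `tokenizer` objects as in the previous sketch: the first inherits any tokenization bias directly from the static subword embeddings, while the second is the contextual ‘[CLS]’ vector used in the comparison. Ranking the pool of UMLS words by cosine similarity to either vector yields the two 5-NN columns.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # placeholder
model = AutoModel.from_pretrained("bert-base-uncased").eval()

def mean_subword_embedding(term):
    # Average of the static input embeddings of the term's subwords.
    ids = tokenizer(term, add_special_tokens=False)["input_ids"]
    table = model.get_input_embeddings().weight      # (vocab, hidden)
    return table[ids].mean(dim=0)

def cls_embedding(term):
    batch = tokenizer(term, return_tensors="pt")
    with torch.no_grad():
        return model(**batch).last_hidden_state[0, 0]  # contextual '[CLS]'
```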
These neighborhoods show us that the language model successfully extracts the semantics of the morpheme ‘neuro’ from ‘neuromod’ as well as the cardiovascular related semantics from both ‘epic-ardi’ and ‘ocardi’, effectively mitigating the detrimental effects of sub-optimal tokenization seen in the word embedding neighborhoods. We thus conclude that this same robustness is responsible for the parity observed in downstream tasks between BioVocabBERT and the original PubMedBERT. More examples which show similar trends as the ones in Table 9 can be found in Table 14 in Appendix C.

5.5 Word Frequency Study

All the findings above suggest that biomedical language model pre-training yields entity representations which are fairly robust to tokenization failures. However, it is important to note that the distribution of rare vs. frequent entities in these small and medium scale datasets will be naturally skewed towards frequent entities if not intentionally manipulated. Therefore, we design an experiment which explores whether the quality of representations in BioVocabBERT and our PubMedBERT replica varies with respect to word frequency. In this experiment, we obtain 10,000 instances of words from the pre-training corpus in each of the frequency bins listed in Table 2. We encode the sentence in which each instance is found and mask the word of interest. We report the percentage of words which are predicted correctly using the masked language modeling head’s prediction in each bin.

Figure 2: MLM accuracy for our pre-trained models averaged across 10,000 word instances which fall under each word frequency bin (x-axis: frequency bins 1, 2–5, 5–50, 50–500, 500–5,000 and 5,000–500,000; y-axis: MLM accuracy for PubMedBERT and BioVocabBERT).

We note that both models perform very similarly across frequency bins until the two bins with the largest frequencies. Our BioVocabBERT model obtains a somewhat significant advantage in the second highest 500–5,000 frequency bin, which then inverts to a similarly significant drop in the category with the most frequent words. This trade-off is likely due to many high frequency words having a single token in the PubMedBERT vocabulary, which leads the model to have a natural bias towards predicting these words. For medium frequency words, PubMedBERT’s bias towards high frequency words is likely detrimental and BioVocabBERT is able to easily outperform it. This trade-off appears to be small enough to have little effect on downstream performance for these models, but we leave exploring its effect for future work.
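The probe used in this experiment can be sketched as below for single-subword targets; per-bin bookkeeping is omitted and the checkpoint is again a placeholder rather than one of the models trained here.

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # placeholder
mlm = AutoModelForMaskedLM.from_pretrained("bert-base-uncased").eval()

def mlm_hit(sentence: str, target: str) -> bool:
    """Mask one occurrence of `target` and check whether the MLM head
    recovers it (works as-is only when `target` is a single subword)."""
    batch = tokenizer(sentence.replace(target, tokenizer.mask_token, 1),
                      return_tensors="pt")
    with torch.no_grad():
        logits = mlm(**batch).logits
    # Position of the mask token in the input sequence.
    pos = (batch["input_ids"] == tokenizer.mask_token_id).nonzero()[0, 1]
    return tokenizer.decode(logits[0, pos].argmax().item()).strip() == target
```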
6 Conclusion

In this work, we first note that current biomedical tokenization methods are not well aligned with human judgments and highlight that the agglutinating nature of biomedical terminology could be affected by this sub-optimal segmentation. To understand whether this limited segmentation performance has an effect on downstream applications, we first build a biomedical tokenizer which is better aligned with human judgments using supervised morpheme segmentation and biomedical domain knowledge. We then leverage this tokenizer to pre-train our BioVocabBERT model and compare it with a replicated PubMedBERT model on the NER and entity linking tasks. Surprisingly, we find that these models achieve almost exact parity in all datasets evaluated, suggesting that PubMedBERT’s domain specific tokenization and pre-training process was already quite robust to sub-optimal tokenization. We further verify this idea with a case study which qualitatively confirms our observations. We hope that our work can give researchers and practitioners some insight into how instances of sub-optimal segmentation, which are often jarring to human experts, can have little effect on a model’s downstream applicability.

7 Limitations

Although our findings suggest that biomedical language model pre-training is quite robust to sub-optimal tokenization, we note that our work has a few potential limitations that should be explored further. The use of a biomedically relevant subset of the SIGMORPHON Shared Task dataset for evaluating biomedical term tokenization is a straightforward and reasonable strategy; however, it is important to highlight that the resource was not created for this purpose and might not be perfectly aligned with ideal biomedical tokenization. Additionally, we would like to point out that even though our BioVocabBERT tokenizer outperforms other equivalent tokenizers like PubMedBERT’s, it severely underperforms the best possible segmentation accuracy (48.5 vs. 74.1 for our fine-tuned CANINE model). It is therefore possible, although unexpected, that a tokenizer which performs biomedical tokenization at even higher levels could lead to sudden improvements in the pre-training process. Finally, we note that the effects of BioVocabBERT’s much larger vocabulary size, almost three times larger than PubMedBERT’s, on the pre-training process were not explored in depth. Nevertheless, given that some previous work (Feng et al., 2022) argues that larger vocabularies lead to slight improvements in downstream tasks, our main conclusions are likely to hold.

Acknowledgements

The authors would like to thank the anonymous reviewers and colleagues from the OSU NLP group for their valuable feedback. This research was supported in part by NIH R01LM014199 and the Ohio Supercomputer Center (Center, 1987).

References

Emily Alsentzer, John Murphy, William Boag, Wei-Hung Weng, Di Jindi, Tristan Naumann, and Matthew McDermott. 2019. Publicly available clinical BERT embeddings. In Proceedings of the 2nd Clinical Natural Language Processing Workshop, pages 72–78, Minneapolis, Minnesota, USA. Association for Computational Linguistics.

George L. Banay. 1948. An Introduction to Medical Terminology I. Greek and Latin Derivations. Bulletin of the Medical Library Association, 36(1):1–27.

Khuyagbaatar Batsuren, Gábor Bella, Aryaman Arora, Viktor Martinovic, Kyle Gorman, Zdeněk Žabokrtský, Amarsanaa Ganbold, Šárka Dohnalová, Magda Ševčíková, Kateřina Pelegrinová, Fausto Giunchiglia, Ryan Cotterell, and Ekaterina Vylomova. 2022. The SIGMORPHON 2022 shared task on morpheme segmentation. In Proceedings of the 19th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology, pages 103–116, Seattle, Washington. Association for Computational Linguistics.

Khuyagbaatar Batsuren, Gábor Bella, and Fausto Giunchiglia. 2021. MorphyNet: a large multilingual database of derivational and inflectional morphology. In Proceedings of the 18th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology, pages 39–48, Online. Association for Computational Linguistics.

Olivier Bodenreider. 2004. The Unified Medical Language System (UMLS): Integrating Biomedical Terminology. Nucleic Acids Research, 32(Database issue):D267–D270.

Ohio Supercomputer Center. 1987. Ohio Supercomputer Center.
Ilias Chalkidis, Manos Fergadiotis, Prodromos Malakasiotis, Nikolaos Aletras, and Ion Androutsopoulos. 2020. LEGAL-BERT: The muppets straight out of law school. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 2898–2904, Online. Association for Computational Linguistics.

Jonathan H. Clark, Dan Garrette, Iulia Turc, and John Wieting. 2022. Canine: Pre-training an efficient tokenization-free encoder for language representation. Transactions of the Association for Computational Linguistics, 10:73–91.

Nigel Collier and Jin-Dong Kim. 2004. Introduction to the Bio-entity Recognition Task at JNLPBA. In Proceedings of the International Joint Workshop on Natural Language Processing in Biomedicine and its Applications (NLPBA/BioNLP), pages 73–78, Geneva, Switzerland. COLING.

C Edward Collins. 2007. A Short Course in Medical Terminology. Lippincott Williams and Wilkins, Philadelphia, PA.

Mathias Creutz, K. Lagus, and Sami Virpioja. 2005. Unsupervised Morphology Induction Using Morfessor. In Finite-State Methods in Natural Language Processing: 5th International Workshop (FSMNLP), pages 300–301.

Alexandra DeLucia, Shijie Wu, Aaron Mueller, Carlos Aguirre, Philip Resnik, and Mark Dredze. 2022. Bernice: A multilingual pre-trained encoder for Twitter. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 6191–6205, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.

Rezarta Islamaj Doğan, Robert Leaman, and Zhiyong Lu. 2014. NCBI disease corpus: A resource for disease name recognition and concept normalization. Journal of Biomedical Informatics, 47:1–10.

Hicham El Boukkouri, Olivier Ferret, Thomas Lavergne, and Pierre Zweigenbaum. 2022. Re-train or train from scratch? Comparing pre-training strategies of BERT in the medical domain. In Proceedings of the Thirteenth Language Resources and Evaluation Conference, pages 2626–2633, Marseille, France. European Language Resources Association.

Zhangyin Feng, Duyu Tang, Cong Zhou, Junwei Liao, Shuangzhi Wu, Xiaocheng Feng, Bing Qin, Yunbo Cao, and Shuming Shi. 2022. Pretraining without wordpieces: Learning over a vocabulary of millions of words. ArXiv, abs/2202.12142.

Philip Gage. 1994. A new algorithm for data compression. The C Users Journal, 12:23–38.

Yu Gu, Robert Tinn, Hao Cheng, Michael Lucas, Naoto Usuyama, Xiaodong Liu, Tristan Naumann, Jianfeng Gao, and Hoifung Poon. 2021. Domain-Specific Language Model Pretraining for Biomedical Natural Language Processing. ACM Trans. Comput. Healthcare, 3(1).

Valentin Hofmann, Janet Pierrehumbert, and Hinrich Schütze. 2020. DagoBERT: Generating derivational morphology with a pretrained language model. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 3848–3861, Online. Association for Computational Linguistics.

Valentin Hofmann, Janet Pierrehumbert, and Hinrich Schütze. 2021. Superbizarre is not superb: Derivational morphology improves BERT’s interpretation of complex words.
In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 3594–3608, Online. Association for Computational Linguistics.

Peter Izsak, Moshe Berchansky, and Omer Levy. 2021. How to train BERT with an academic budget. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 10644–10652, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.

Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. 2019. BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics, 36(4):1234–1240.

Jiao Li, Yueping Sun, Robin J Johnson, Daniela Sciaky, Chih-Hsuan Wei, Robert Leaman, Allan Peter Davis, Carolyn J Mattingly, Thomas C Wiegers, and Zhiyong Lu. 2016. BioCreative V CDR task corpus: a resource for chemical disease relation extraction. Database (Oxford), 2016:baw068.

Fangyu Liu, Ehsan Shareghi, Zaiqiao Meng, Marco Basaldella, and Nigel Collier. 2021. Self-alignment pretraining for biomedical entity representations. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4228–4238, Online. Association for Computational Linguistics.

Renqian Luo, Liai Sun, Yingce Xia, Tao Qin, Sheng Zhang, Hoifung Poon, and Tie-Yan Liu. 2022. BioGPT: generative pre-trained transformer for biomedical text generation and mining. Briefings in Bioinformatics, 23(6):bbac409.

Himanshu Maheshwari, Bhavyajeet Singh, and Vasudeva Varma. 2021. SciBERT sentence representation for citation context classification. In Proceedings of the Second Workshop on Scholarly Document Processing, pages 130–133, Online. Association for Computational Linguistics.

Sunil Mohan and Donghui Li. 2019. MedMentions: A large biomedical corpus annotated with UMLS concepts. In Proceedings of the 2019 Conference on Automated Knowledge Base Construction (AKBC 2019), Amherst, Massachusetts, USA.

Yifan Peng, Qingyu Chen, and Zhiyong Lu. 2020. An empirical study of multi-task learning on BERT for biomedical text mining. In Proceedings of the 19th SIGBioMed Workshop on Biomedical Language Processing, pages 205–214, Online. Association for Computational Linguistics.

Ben Peters and Andre F. T. Martins. 2022. Beyond characters: Subword-level morpheme segmentation. In Proceedings of the 19th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology, pages 131–138, Seattle, Washington. Association for Computational Linguistics.

Mike Schuster and Kaisuke Nakajima. 2012. Japanese and Korean voice search. In 2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 5149–5152.

Larry Smith, Lorraine K Tanabe, Rie Johnson Nee Ando, Cheng-Ju Kuo, I-Fang Chung, Chun-Nan Hsu, Yu-Shi Lin, Roman Klinger, Christoph M Friedrich, Kuzman Ganchev, Manabu Torii, Hongfang Liu, Barry Haddow, Craig A Struble, Richard J Povinelli, Andreas Vlachos, William A Baumgartner, Jr, Lawrence Hunter, Bob Carpenter, Richard Tzong-Han Tsai, Hong-Jie Dai, Feng Liu, Yifei Chen, Chengjie Sun, Sophia Katrenko, Pieter Adriaans, Christian Blaschke, Rafael Torres, Mariana Neves, Preslav Nakov, Anna Divoli, Manuel Maña-López, Jacinto Mata, and W John Wilbur. 2008. Overview of BioCreative II gene mention recognition.
Genome Biology, 9(S2):S2.

Shijie Wu, Ozan Irsoy, Steven Lu, Vadim Dabravolski, Mark Dredze, Sebastian Gehrmann, Prabhanjan Kambadur, David Rosenberg, and Gideon Mann. 2023. BloombergGPT: A large language model for finance. ArXiv, abs/2303.17564.

Michihiro Yasunaga, Jure Leskovec, and Percy Liang. 2022. LinkBERT: Pretraining language models with document links. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 8003–8016, Dublin, Ireland. Association for Computational Linguistics.

Sicheng Zhou, Nan Wang, Liwei Wang, Hongfang Liu, and Rui Zhang. 2022. CancerBERT: a cancer domain-specific language model for extracting breast cancer phenotypes from electronic health records. Journal of the American Medical Informatics Association: JAMIA, 29:1208–1216.

A NER Hyperparameter Tuning

We run hyperparameter tuning for each model in the fully-supervised and low-resource NER settings; the search grids can be found in Tables 11 and 12 below.

Learning Rate         1e-5, 3e-5
Batch Size            16, 32
Warmup Ratio          0.06
Weight Decay          0.1
Total Epoch Number    5, 10

Table 11: Hyperparameter search grid used for fully-supervised NER experiments.

Learning Rate         1e-5, 3e-5
Batch Size            16, 32
Warmup Ratio          0.06
Weight Decay          0.1
Total Epoch Number    15, 25

Table 12: Hyperparameter search grid used for low-resource NER experiments.

B Pre-training Details

Our models were pre-trained on 4 80GB A100s. The process took approximately 2 and 3 days respectively for PubMedBERT and BioVocabBERT, given the larger computational requirements of using an 80,000 subword vocabulary.

                         Objectives   Vocab. Size   Corpus Size   Gradient Steps   Batch Size   # of Examples
PubMedBERT (Original)    MLM & NSP    28,895        21GB          62,500           8,192        512M
PubMedBERT (Replica)     MLM          28,895        25GB          62,500           8,192        512M
BioVocabBERT             MLM          80,181        25GB          62,500           8,192        512M

Table 13: Pre-training details for the original PubMedBERT compared to our models.

C More Neighborhood Examples

The neighborhood examples shown in Table 14 help demonstrate that the general trends discussed in §5.4 hold more generally across many sub-optimal segmentation examples.
Sub-optimal Segmentation: abdominopelvic (abdom-ino-pe-lv-ic)
  Word Embedding 5-NN: abdominopelvis (abdom-ino-pe-lv-is), sacropelvic (sacro-pe-lv-ic), uteropelvic (utero-pe-lv-ic), abdomino (abdom-ino), midpelvic (mid-pe-lv-ic)
  [CLS] Embedding 5-NN: abdominocentesis (abdom-ino-cent-esis), thoracopelvic (thorac-ope-lv-ic), midpelvic (mid-pe-lv-ic), sacropelvic (sacro-pe-lv-ic), extrapelvic (extrap-elvic)

Sub-optimal Segmentation: neuroradiography (neuroradi-ography)
  Word Embedding 5-NN: roentgenography (roentgen-ography), ventriculography (ventricul-ography), neuroradiology (neuroradi-ology), electroretinography (electroretin-ography), herniography (herni-ography)
  [CLS] Embedding 5-NN: neuroradiology (neuroradi-ology), neuroradiologic (neuroradi-ologic), encephalography (encephal-ography), neurography (neuro-graphy), cerebroangiography (cerebro-angi-ography)

Sub-optimal Segmentation: postinfectional (postin-fection-al)
  Word Embedding 5-NN: postinfection (postin-fection), postin (postin), postinjection (postin-jection), reinfection (rein-fection), postinfusion (postin-fusion)
  [CLS] Embedding 5-NN: reinfection (rein-fection), postinfection (postin-fection), superinfection (super-infection), reinfected (rein-fected), superinfections (super-infection-s)

Sub-optimal Segmentation: neurogastrointestinal (neuro-ga-st-ro-intestinal)
  Word Embedding 5-NN: extragastrointestinal (extra-ga-st-ro-intestinal), pangastrointestinal (pan-ga-st-ro-intestinal), myoneurogastrointestinal (myo-ne-uro-ga-st-ro-intestinal), gastrogastric (gastro-ga-st-ric), gastrogastrostomy (gastro-ga-st-rost-omy)
  [CLS] Embedding 5-NN: extragastrointestinal (extra-ga-st-ro-intestinal), pangastrointestinal (pan-ga-st-ro-intestinal), enteropancreatic (enter-opancre-atic), gastroenteropancreatic (gastroenter-opancre-atic), nasopancreatic (nas-opancre-atic)

Sub-optimal Segmentation: adrenocorticosteroid (adrenocortic-oster-oid)
  Word Embedding 5-NN: adrenocorticosteroids (adrenocortic-oster-oids), glucocorticosteroid (glucocortic-oster-oid), mineralocorticosteroid (mineralocortic-oster-oid), mineralocorticosteroids (mineralocortic-oster-oids), glucosteroid (gluc-oster-oid)
  [CLS] Embedding 5-NN: adrenocorticosteroids (adrenocortic-oster-oids), adrenocorticotropic (adrenocortic-otropic), corticoids (cortic-oids), corticoid (cortic-oid), glucosteroid (gluc-oster-oid)

Table 14: More PubMedBERT sub-optimal segmentation examples and their nearest neighbors with respect to word embeddings and ‘[CLS]’ token embeddings. As in Table 9, bold neighbors in the original typeset table were missed by the word embeddings but are retrieved correctly by the ‘[CLS]’ embeddings, repairing the sub-optimal segmentation bias.
Business Taxonomy Construction Using Concept-Level Hierarchical Clustering

Haodong Bai,† Frank Z. Xing,‡ Erik Cambria,‡ Win-Bin Huang†∗
†Department of Information Management, Peking University
‡School of Computer Science and Engineering, Nanyang Technological University
{hbai,huangwb}@pku.edu.cn, {zxing001,cambria}@ntu.edu.sg
∗Corresponding author: Win-Bin Huang

arXiv:1906.09694v1 [cs.CL] 24 Jun 2019

Abstract

Business taxonomies are indispensable tools for investors to do equity research and make professional decisions. However, identifying the structure of industry sectors in an emerging market is challenging for two reasons. First, existing taxonomies are designed for mature markets, which may not be the appropriate classification for small companies with innovative business models. Second, emerging markets are fast-developing, so static business taxonomies cannot promptly reflect new features. In this article, we propose a new method to construct business taxonomies automatically from the content of corporate annual reports. Extracted concepts are hierarchically clustered using greedy affinity propagation. Our method requires less supervision and is able to discover new terms. Experiments and evaluation on the Chinese National Equities Exchange and Quotations (NEEQ) market show several advantages of the business taxonomy we build. Our results provide an effective tool for understanding and investing in new growth companies.

1 Introduction

Business taxonomies are important knowledge management tools for investment activities. When comparing different equity assets on the financial markets, investors tend to classify companies according to their main business sectors, market performances, and the products they manufacture. To discover companies with great potential to grow across different industries, only those in the same industry sector will adopt similar criteria for downstream analysis, such as financial statement analysis, profit prediction, price-earnings valuation and more [Alford, 1992]. To this end, accurate classification of companies is crucial to successful investments. Consequently, governments and financial authorities, as well as big companies, have developed a large number of different business taxonomies, which are usually widely applicable, coarsely-grained and almost static. However, these features are not appropriate for small and startup companies. These companies are often fast-growing, dynamically changing their business and focusing on a specific business. Therefore, traditional business taxonomies cannot reflect the whole landscape and emerging business. Besides the traditional business taxonomies, Chinese stock markets have yet another knowledge management tool called “concept stock (概念股)”. However, the concept labels are summarized by research teams and the media, which means that they have already attracted much attention and over-represent blue chip stocks. Moreover, the concept labels are neither systematic nor hierarchical. One such influential label set is Tonghuashun’s “concept boards”.1 For small and startup companies, the current situation is that the valuation of such companies has to rely on concept labels transferred from the main domestic “A” share markets, which do not appropriately describe small companies. The companies listed at the Chinese National Equities Exchange and Quotations (NEEQ)2 are typical examples.
Compared to those “A” share companies, the NEEQ listed companies rely even more heavily on the inappropriate concept labels because there is no widely agreed market capitalization or enterprise multiple for them.

For the above-mentioned reasons, there is an urgent need for a more flexible business taxonomy to help with the investment decisions for small and new companies. The taxonomy can form benchmarks for thousands of different companies with innovative business models. Compared to concept labels, a business taxonomy is not only helpful for investigating a specific company, but also beneficial for understanding the relations between companies. There is already a large number of studies on automatic taxonomy construction (ATC) for applications such as web search [Liu et al., 2012], question answering and refinement [Sadikov et al., 2010], advertising and recommendation systems, and knowledge organization [Zhang et al., 2018]. However, few of them concern business taxonomy construction. On the other hand, studies that leverage natural language processing (NLP) or text mining to support investment either improve the current existing taxonomy [Hoberg and Phillips, 2016] or express the industry structure using other mathematical tools [Xing et al., 2019].

1 http://q.10jqka.com.cn/gn/
2 The NEEQ is an over-the-counter (OTC) system for trading the shares of a public limited company that is not listed on either the Shenzhen or Shanghai stock exchanges, thus nicknamed “The New Third Board (新三板)”.

Unlike previous research, we propose a new method in this article that constructs a business taxonomy from scratch. The method extracts concept-level terms from corporate annual reports and computes the similarities between different terms. Based on the similarity matrix, the method recursively clusters terms into different strata. Our contributions are tri-fold:

1. To the best of our knowledge, we pioneer the use of automatic taxonomy construction for business classification and investment purposes. Using concept-level terms instead of keywords, the method needs a low level of supervision because we leverage linguistic knowledge and a statistical model to extract and compare terms. No seed terms or their relations are required.

2. We use positive and unlabeled learning (PU learning) to further mitigate the labor of tagging indexing terms. The method thus shows its capability to identify fine-grained concepts and discover new terms from natural language.

3. We make the NEEQ annual reports dataset publicly available,3 such that researchers could benchmark their taxonomy construction methods on it or follow up with other text mining tasks.

The remainder of this article is organized as follows: Section 2 elaborates related work from two threads of literature: business classification systems and studies on automatic taxonomy construction; Section 3 provides an overview of the framework and introduces details of the algorithm; Section 4 presents experimental results; Section 4.2 evaluates the constructed taxonomy for the NEEQ market and carries out case studies; finally, Section 5 concludes the study with future directions.

2 Related Work

2.1 Business Classification Systems

Business classification systems, or industry classification schemes, are fundamental tools for market research. According to a recent review [Phillips and Ormsby, 2016], companies are grouped and organized into categories by their similar manufacturing processes, final products, and target markets.
Investors make use of business classification systems for purposes such as benchmarking with flagship companies, discovering potential competitors, evaluating sales performances, and composing industry indices. Mainstream business classification systems can be sorted into three classes depending on their developers and purposes: governmental statistical agencies develop systems for measuring economic activities, business information vendors develop systems for guiding investors, and academic researchers study the use of such systems for accounting and finance. The most widely used examples are from business information producers, such as the Global Industry Classification Standard (GICS) and the Thomson Reuters Business Classification (TRBC), because they are integrated into popular commercial databases. Early research [Bhojraj and Lee, 2002] also supports that the GICS accurately classifies the market. For this reason, some business classification systems used on the Chinese financial markets are adapted from GICS, such as the SWS classification standard4 and the official NEEQ classification guide.5 However, many problems have been found when using these systems on the NEEQ market. First, designed using a top-down approach, these systems have unbalanced numbers of companies in their end-level classes. To fit a pre-defined structure, many classes contain companies with different businesses. Second, small companies are still at the early stage of exploring their business strategies. Therefore, it is common that one company’s business spans several domains in the system, while it can only be classified in a unique class. This causes the company’s absence from other classes. Last yet importantly, frequent revision of such systems is costly and would confuse investors.

Literature on using NLP and text mining for financial forecasting and investment activities is growing [Xing et al., 2018]. Specific to business classification, Hoberg and Phillips built two systems using the 10-K corpus. The first one discovers competition relations between companies according to how similar their product descriptions are, and constructs a company network [Hoberg and Phillips, 2010]. The second one first clusters companies with the text descriptions of company products, then maps the traditional business classification scheme to the newly constructed one [Hoberg and Phillips, 2016]. Both studies focused on improving the existing classification systems. Consequently, the details of a company’s business model are not revealed and classification results are still rather coarse. Taxonomies with more detailed information, for example on products [Aanen et al., 2015], are not catered to the purpose of industry partition. In this research, we break the stereotype and take a fully data-driven approach for building the classification system based on the textual description of companies. The business-related concepts and terms are thus more detailed and information-rich.

2.2 Automatic Taxonomy Construction

A taxonomy is defined as a semantic hierarchy that organizes concepts by is-a relations [Wang et al., 2017]. Since is-a relations are the most important relations in human cognitive structures, taxonomy construction from natural language is fundamental for ontology learning tasks. In common cases, ATC follows a pipeline of is-a relation extraction from natural language and induction of the taxonomy structure.

Relation extraction can be either pattern-based or statistical.
One of the pioneering pattern-based studies, by Hearst [Hearst, 1992], proposed to use hand-crafted lexical patterns like "A is a B" and "A such as B" to discover is-a relations. More syntactic patterns were proposed by subsequent research [Navigli et al., 2011; Luu et al., 2014], for example, "A, including B" and "A is a type/kind of B". The performance can be improved by boosting over multiple such rules [Vivaldi et al., 2001]. Pattern-based methods feature high precision but poor recall, because exact matches of such patterns have low coverage over the relations contained in the corpus. This problem is more severe in our research because business descriptions usually do not contain explanatory clauses of the kinds listed in the above linguistic patterns.

Statistical models examine the relation between any two terms: all candidate terms are extracted first, and then a model is built to predict the relation type, or whether an "is-a" relation exists between two terms. The term extraction step can be achieved with either supervised or unsupervised machine learning algorithms. In the former case, more labels of true terms are required; in the latter, only minimal effort is taken to threshold terms using TF-IDF, topic modeling (LDA) [Bakalov et al., 2012], or the TextRank model. For the relation predictive model, unsupervised methods leverage information such as co-occurrence frequency analysis, term subsumption [de Knijff et al., 2013], cosine similarity based on bag-of-words, and word embedding similarities [Fu et al., 2014] to discover taxonomic relations [Wang et al., 2017]. Supervised methods require inductive reasoning over a set of known relations, which is more precise but relies heavily on the corpus as well as the seed relations [Zhang et al., 2018]. In some cases, supervised methods have very poor recall. Obviously, there is a trade-off between precision and recall.

Induction of the taxonomy refers to the process of growing a graph-like structure based on the set of relations extracted in the previous step. The optimal taxonomy desires certain features, such as no redundant edges and no loops of conceptual terms [Luu et al., 2014]. The most important objective is the correctness of hypernym-hyponym relations: comparable terms should belong to the same level. Practically speaking, the business taxonomy should provide the necessary knowledge and business insights pertinent to investment activities. To enable these, current approaches employ either clustering or algorithms that induct a tree structure from a graph. Clustering methods assume that agglomerated terms share the same hypernym. By recursively choosing a representative term, hierarchical clustering can generate a layered tree structure [de Knijff et al., 2013; Meijer et al., 2014]. On the other hand, the term relations can be organized as a directed graph. Then the task becomes mining and pruning a tree structure out of the graph [Choi et al., 2011]. In this research, we use a weakly supervised statistical method for relation extraction and greedy hierarchical affinity propagation (GHAP) to construct a new taxonomy, and relate companies to the leaf descendant layer.
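As a concrete illustration of the pattern-based approach discussed above, the following sketch matches two Hearst-style templates with regular expressions. It is a minimal reimplementation for illustration only, not code from the cited systems, and the example sentences are invented; real systems add many more patterns and syntactic constraints.

```python
import re

# Two illustrative Hearst-style templates; the group boundaries are naive
# (real systems match over parse trees, not raw strings).
SUCH_AS = re.compile(r"([\w ]+?)\s+such as\s+([\w ]+)")   # "hypernym such as hyponym"
IS_A = re.compile(r"([\w ]+?)\s+is a(?:\s+(?:type|kind)\s+of)?\s+([\w ]+)")  # "hyponym is a hypernym"

def extract_isa(sentence):
    """Return (hyponym, hypernym) pairs found by the two templates."""
    pairs = []
    m = SUCH_AS.search(sentence)
    if m:
        pairs.append((m.group(2).strip(), m.group(1).strip()))
    m = IS_A.search(sentence)
    if m:
        pairs.append((m.group(1).strip(), m.group(2).strip()))
    return pairs

print(extract_isa("services such as online training"))
# [('online training', 'services')]
print(extract_isa("online training is a kind of education service"))
# [('online training', 'education service')]
```

Exact string templates like these rarely fire in terse business-model descriptions, which is why the statistical route is the one relied on in this work.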
3 Methodology

Our method can be divided into three phases: data preprocessing, concept-level taxonomy construction, and corporate categorization and labeling with the established taxonomy. Figure 1 provides an overview of the proposed method. Because the corpus we use is in Chinese, the data preprocessing phase consists of word segmentation and part-of-speech (POS) labeling of each Chinese word. We use the LTP-Cloud tools developed by HIT 6 to complete this phase. The taxonomy construction phase utilizes a semi-supervised learning classifier [du Plessis et al., 2014] to reduce the amount of labor needed for tagging terms. After filtering out the concept term candidates, we obtain the final terms from the classifier. The similarity calculation is based on the idea of co-occurrence analysis from information science. GHAP then takes the similarity matrix as input to build a multi-layered structure of terms. The corporate categorization phase maps all the companies that contain the descendant-level terms to the taxonomy.

6 http://www.ltp-cloud.com/

Table 1: Concept-level features used to train a term extractor.

Name of feature | Computing method
Concept mutual information | MI(t) = Σ_{i,j} p(i, j) × log[p(i, j)/(p(i)p(j))]
Right-side entropy | RE(t) = Σ_i p(t, i|t) × log(p(t, i|t))
Left-side entropy | LE(t) = Σ_i p(i, t|t) × log(p(i, t|t))
Concept TF | The overall term frequency in all the documents.
Concept IDF | The overall inverse document frequency in all the documents.
Followed-by word | Binary feature of whether the concept is followed by "industry (行业)" or "business scope (业务)".
Following word | Binary feature of whether the concept is following "running (从事)".
Industry TF | The concept frequency distribution in all the industry classes.
Industry IDF | The inverse document frequency distribution in all the industry classes.
Industry concept entropy | IndE(t) = −Σ_i (TF_{t,i}/TF_t) × log(TF_{t,i}/TF_t)

3.1 Concept Extraction and Term Similarity

One of the fundamental challenges in NLP is to model the semantic compositionality within phrases and multi-word expressions. Previous research [Cambria and White, 2014] suggests considering concepts to be the atomic units of meaning, which leads to more powerful expressiveness and more accurate results in downstream applications. Unlike ATC studies that use keywords [Liu et al., 2012], we consider concept-level terms in our business taxonomy.

We observe that two types of templates together cover most of the concepts in the business domain, i.e., noun phrases and attributive phrases. For the first type, we mainly consider the noun-type POS tags in the "863 Chinese POS set". Additionally, we include Chinese numerals 7 and verbs, which are not morphologically identifiable in Chinese, in order to ensure a high recall. For the second type, we simultaneously consider the dependency parsing result. Those phrases that only contain the dependency relation "ATT" (the attributive relation type in Chinese grammar) are selected as concept term candidates.

The term candidates are represented by a concatenation of the concept-level features listed in Table 1 and similar word-level features. The features are designed to include both statistical and industry-related information based on the official NEEQ classification guide, because the distribution of term frequencies in texts of different industries is a crucial factor in the discriminative power of a term.

7 Numerals appear in noun phrases such as "Third-party payment (第三方支付)".
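To make the statistical features in Table 1 concrete, here is a minimal sketch of how the left-/right-side entropies of a candidate term could be computed from a tokenized corpus. The toy corpus, the tokenization and the sign convention (standard positive entropy) are assumptions for illustration, not the actual pipeline.

```python
import math
from collections import Counter

def side_entropies(term, sentences):
    """Entropy of the word distributions immediately to the left and
    right of each occurrence of `term` (cf. LE(t) and RE(t) in Table 1).
    A diverse context (high entropy) suggests a free-standing term."""
    n = len(term)
    left, right = Counter(), Counter()
    for sent in sentences:                        # each sentence is a token list
        for i in range(len(sent) - n + 1):
            if sent[i:i + n] == term:
                if i > 0:
                    left[sent[i - 1]] += 1
                if i + n < len(sent):
                    right[sent[i + n]] += 1

    def entropy(counts):
        total = sum(counts.values())
        return -sum(c / total * math.log(c / total) for c in counts.values()) if total else 0.0

    return entropy(left), entropy(right)

corpus = [["公司", "从事", "在线", "教育", "服务"],
          ["平台", "提供", "在线", "教育", "课程"],
          ["专注", "在线", "教育", "服务"]]
print(side_entropies(["在线", "教育"], corpus))  # (left entropy, right entropy)
```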
Figure 1: An overview of the proposed method, showing the key techniques used in each module (data preprocessing: word segmentation and labelling, phrase and word feature extraction; concept-level taxonomy construction: semi-supervised classification for term candidates, term similarity calculation, term clustering, building term relations by induction; corporate categorization: corporate-term mapping).

The semi-supervised classifier is built as a support vector machine (SVM) with probabilistic outputs under the framework of PU learning [du Plessis et al., 2014]. PU learning is calibrated for real-world problems where labels of the negative cases are not accessible. Labels for positive cases are costly and hard to exhaust, so the majority of the data remains unlabeled. Through the analysis of the empirical risk minimization problem of SVM, it is proved that PU learning is equivalent to a cost-sensitive classification where the cost ratio c_1/c_x is a function of the class prior π and the proportion of labeled samples η [du Plessis et al., 2014]:

c_1/c_x = 2π(1 − η)/η.  (1)

We use the scikit-learn package to implement the cost-sensitive SVM with an RBF kernel and estimate the probability parameters from the dataset. In the experiments, we use the dual problem setting of PU learning, where only a small portion of negative cases are labeled. This is made possible by checking whether a term candidate contains words from the stop-word list. We adapt a general stop-word list to the specific business domain by adding 106 domain-specific words to it. The added words include common words in the business domain such as "corporate (集团)" and "company (公司)", and action words such as "sales (销售)", "profit (盈利)", "leading (领先)", "trend (趋势)", etc. After training with the negative labels, the classifier produces the real term set from the term candidates.

A term similarity is computed by integrating the comprising word-level similarities. To be more specific, we define the similarity of two words as the frequency of their co-occurrence divided by the harmonic mean of the frequencies of their occurrence in the documents:

s(w_1, w_2) = dct(w_1 ∩ w_2) × (dct(w_1) + dct(w_2)) / (2 × dct(w_1) × dct(w_2)),  (2)

where dct(·) denotes document counts. Then, we align corresponding words in two terms and use the average similarity of the best matches as the similarity between terms. Because this method is asymmetric, we define the term similarity as the average over the two directions:

s(t_1 → t_2) = (Σ_{i∈t_1} β_i max_{j∈t_2} s(i, j)) / len(t_1),  (3)

s(t_1, t_2) = (s(t_1 → t_2) + s(t_2 → t_1)) / 2,  (4)

where i is a word in term t_1, j is a word in term t_2, and len(t_1) denotes the length of t_1. The weight for word i uses the TF-IDF information:

β_i = log(ct(i)) × log(N / dct(i)),  (5)

where N is the total number of documents.

3.2 Taxonomy Induction

The term similarity matrix measures semantic relations between two given terms, of which the target "is-a" relation is one. In order to construct a taxonomy, we compute a matrix of relations from the term similarity matrix by clustering, which preserves the strong relations while pruning the others. We leverage greedy hierarchical affinity propagation (GHAP) [Xiao et al., 2007], an exemplar-based clustering method, to construct three layers of hypernym-hyponym relations. Compared to other clustering methods, such as K-means, GMM or DBSCAN, GHAP has some advantages for taxonomy construction. First, the GHAP centroids are prototypical data points, which is important for the hypernym-hyponym relations. Second, GHAP does not need the number of clusters as a hyper-parameter input. Third, the clustering result of GHAP is insensitive to the initialization states. It is also worth mentioning that GHAP usually converges faster than HAP, which has to optimize a global loss function.
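The word- and term-level similarities of Equations (2)-(5) and a single clustering layer can be sketched as follows. This is an illustration under simplifying assumptions: the document counts are made up, and scikit-learn's standard AffinityPropagation stands in for the greedy hierarchical variant (a full GHAP implementation would re-cluster the exemplars recursively to obtain the three layers).

```python
import math
import numpy as np
from sklearn.cluster import AffinityPropagation

def word_sim(w1, w2, dct, dct_pair):
    """Eq. (2): co-occurrence document count over the harmonic mean
    of the individual document counts."""
    if w1 == w2:
        return 1.0  # a word trivially co-occurs with itself
    hm = 2 * dct[w1] * dct[w2] / (dct[w1] + dct[w2])
    return dct_pair.get(frozenset((w1, w2)), 0) / hm

def term_sim(t1, t2, dct, dct_pair, beta):
    """Eqs. (3)-(4): TF-IDF-weighted best-match average, symmetrized."""
    def directed(a, b):
        return sum(beta[i] * max(word_sim(i, j, dct, dct_pair) for j in b)
                   for i in a) / len(a)
    return 0.5 * (directed(t1, t2) + directed(t2, t1))

# Assumed toy statistics: per-word document counts and co-occurrence counts.
dct = {"online": 40, "education": 60, "training": 30, "platform": 25}
dct_pair = {frozenset(("online", "education")): 20,
            frozenset(("online", "training")): 12,
            frozenset(("education", "training")): 15,
            frozenset(("education", "platform")): 8}
N = 1000
ct = {w: 5 * dct[w] for w in dct}                                # assumed raw counts
beta = {w: math.log(ct[w]) * math.log(N / dct[w]) for w in dct}  # Eq. (5)

terms = [("online", "education"), ("online", "training"), ("education", "platform")]
S = np.array([[term_sim(a, b, dct, dct_pair, beta) for b in terms] for a in terms])
ap = AffinityPropagation(affinity="precomputed", random_state=0).fit(S)
print(ap.labels_)                    # cluster index per term
print(ap.cluster_centers_indices_)   # exemplar terms = hypernym candidates
```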
The method is based on the concept of "message passing" between data points. For each layer, we iteratively compute an availability matrix A = [α_ij]_{n×n} and a responsibility matrix R = [ρ_ij]_{n×n} [Frey and Dueck, 2007], where

α_ii = c_i + Σ_{k≠i} max(0, ρ_ki),  (6)

α_ij = min[0, c_j + ρ_jj + Σ_{k∉{i,j}} max(0, ρ_kj)] for i ≠ j,  (7)

ρ_ij = s_ij − max_{k≠j} (α_ik + s_ik),  (8)

i and j are taxonomic terms; c_j is the preference for choosing term j as an exemplar; and n is the number of terms or exemplar terms in that layer. The binary exemplar vector is subsequently obtained as e = (diag(A) + diag(R) > 0). Each descendant term in this taxonomy further corresponds to a set of companies running similar businesses. A major difference of this taxonomy from traditional business classification systems is that one company can be mapped to multiple terms. This assumption is rational because, in real-world cases, companies can span their business across several industry sectors.

4 Experiments and Evaluation

4.1 Data and Results

We crawled 21,739 annual reports for 10,375 listed companies from the NEEQ. The release dates of these reports span the years 2014 to 2017. The original reports are in PDF format with a relatively fixed discourse structure. We parse the files and extract texts from the section named "business model" using Tabula 8. After manually cleaning the missing cases, we finally obtained 20,040 business model descriptions, summing up to 46.2 MB of textual data. According to the annual report standards, the descriptions cover the industry information, product and service, type of clients, key resources, sales model and components of income. Most of the descriptions comprise 100 to 1,000 Chinese characters.

8 http://tabula.technology/

We obtained 64,460 concept-level term candidates from the corpus and labeled 7,078 of them as non-terms using the domain stop-word list. The cost-sensitive SVM classifier output 2,744 terms, which are clustered into 33 hypernyms (see Table 2). Our investigation shows that each hypernym governs no more than 20 sub-concepts and 230 sub-sub-concepts. Given the fact that the average term similarity equals 0.15, most of the clusters exhibit high intra-class similarity. We also observed a strong correlation between the numbers of sub- and sub-sub-concepts, which indicates that the whole taxonomy is well balanced.

To understand the branching structure within a hypernym, we showcase the structure of a relatively small ancestor class in the second row of Table 2, "Education" (see Figure 2). There are four sub-concepts attached to this class: online training, professional training, education informatization, and smart education. Each sub-concept also has several hyponyms. Due to limited space we cannot include all the education industry companies. Instead, we compare some popular NEEQ classification labels with the terms produced by our method.
Table 2: Statistics of the first-level hypernyms.

Hypernym | Intra-class similarity | No. of sub-concepts | No. of sub-sub-concepts | No. of companies
Healthcare 医疗诊断服务 | 0.40 | 2 | 17 | 72
Education 教育 | 0.37 | 4 | 15 | 137
Lighting 照明灯具 | 0.36 | 4 | 34 | 147
Game 游戏 | 0.34 | 3 | 33 | 156
Transportation & logistics 物流运输 | 0.33 | 3 | 22 | 206
Medical service & equipment 医疗器械制造与医疗服务 | 0.28 | 5 | 22 | 353
Ironmongery 金属零部件制造 | 0.27 | 4 | 26 | 208
Software & Hardware 第三方软硬件 | 0.27 | 4 | 51 | 525
Cement products 金属混凝土产品 | 0.27 | 3 | 9 | 34
Automobile 汽车 | 0.25 | 5 | 32 | 473
Electronics elements 电子原件制造 | 0.24 | 6 | 66 | 950
Telecoms 通信及通信设备 | 0.24 | 6 | 60 | 903
Building 建筑工程 | 0.24 | 7 | 59 | 433
Automation & robotics 自动化机器人 | 0.23 | 3 | 21 | 169
Information system & integration 信息系统集成服务 | 0.23 | 4 | 47 | 2416
Energy saving 节能环保 | 0.23 | 6 | 49 | 265
GIS service 地理信息服务 | 0.22 | 3 | 43 | 1601
IT infrastructure & maintenance IT基础设施与运维 | 0.22 | 4 | 32 | 252
Office appliance 日常办公用品 | 0.22 | 2 | 7 | 56
Digital media 互联网数字媒体 | 0.22 | 5 | 56 | 692
Clinical testing 临床试验检测 | 0.21 | 3 | 18 | 216
Smart houseware 智能家居 | 0.21 | 9 | 49 | 1086
Horticulture 园林工程 | 0.20 | 14 | 106 | 825
Mechanical equipment 机械设备制造 | 0.20 | 8 | 67 | 377
Chemicals 化工产品 | 0.19 | 6 | 35 | 274
Plastic products 塑料制品 | 0.19 | 12 | 59 | 395
Internet & online ads 互联网媒体广告 | 0.19 | 13 | 106 | 1097
Solar battery 太阳能电池 | 0.18 | 19 | 188 | 1699
E-commerce platforms 电商平台 | 0.17 | 8 | 53 | 1568
Financial services 金融服务 | 0.17 | 10 | 78 | 2673
Outsourcing consulting 工程咨询承包 | 0.17 | 10 | 79 | 4154
Natural bio-extract 天然植物提取物产品 | 0.16 | 18 | 125 | 1194
Phone gadgets 手机周边产品 | 0.16 | 20 | 223 | 8876

4.2 Qualitative Evaluation and Discussion

We benchmark the validity of our constructed business taxonomy against the official NEEQ classification guide via human evaluation. Generally, the descendant classes in the traditional business classification system are coarse. For example, many companies in the online education or training scope are classified as "Internet Software and Services", which is apparently wider-ranging; similarly, some companies are labeled as "General Customer Service", which provides less information than the concept of "Online Training". In fact, "Internet Software and Services" only reveals the means by which online education companies convey their product. Their customers, competitors, and market positioning, however, are more comparable to those of traditional education companies, and are very different from internet software providers such as SAP or Tencent. In this sense, the traditional business classification system misleads investors by classifying companies with different business models together, providing inaccurate peers for pricing and research. In contrast, our method provides fine-grained concept-level terms. The mapping of companies is also more balanced: each descendant term governs around ten companies in Table 2.

Another important aim of investment analysis is to discover new concepts and market trends. The new concepts often reflect how the industries will re-organize and develop in the future. However, the low frequency of updates to traditional business classification systems tends to hide new business concepts. It is also challenging to find the appropriate position for new concepts. We notice that business owners tend to advertise the hotspot concepts in their self-descriptions. Because our method is aware of the content of corporate annual reports, new concepts can be captured during taxonomy construction. For example, "online training" and "education informatization" are trendy concepts in the scope of education. Pre-school training is also increasingly popular in China, probably due to the Confucianist child-rearing ideas. These facts are not reflected in other business taxonomies for investment.

Figure 2: Three-level classification system for the education industry (sub-concepts: Online Training, Professional Training, Education Informatization, Smart Education; leaf concepts: Online Education Services, Online Education Platforms, Online Training Services, Professional Skills Training, Training Consulting, Special Skills Training, Art Training, Pre-school Training, Education Assistant, Education Information Service, Education Information Consulting, Education Software Industry, Smart Family Industry, Smart Education Cloud-platform, Smart Campus Service).
To summarize, our method allows concrete terms that would not appear in traditional business taxonomies to be displayed, and facilitates the discovery of new terms. Therefore, the constructed taxonomy has some special advantages for investment activities compared to static, manually designed business classification systems, and can be a meaningful supplement to the existing business classification systems.

5 Conclusion

In this article, we proposed a method to extract concept-level terms with weak and partial supervision and to build a taxonomic structure of these terms using greedy hierarchical affinity propagation. The application of this method to business taxonomy construction is novel, because business texts use different linguistic features to represent "is-a" relations. Our method is fast in both term similarity computing and taxonomy induction. Experiments on the Chinese NEEQ market show that the text-induced business taxonomy has several advantages over the traditional expert-crafted system, such as displaying fine-grained concepts and discovering trendy business concepts. The method provides a better tool for investment activities and industry research.

Of course, the constructed business taxonomy is not perfect. For instance, the "Phone gadgets" concept is giant and includes too many companies. The intra-class similarity is also the lowest for this class. These observations suggest that "Phone gadgets" cannot be a good exemplar for the entire class, and the class may be subject to further partition. Additionally, the semantic distances between hypernyms are at different scales: "Healthcare" and "Medical service & equipment" are small and related concepts that may be merged. Finally, other relations between companies within the same set, e.g., supply chain relations, are not revealed. We will investigate how to improve the taxonomy with these relations in the future.

Appendix

Table 3 further provides some examples of how the label terms generated by our method (GHAP) differ from the NEEQ terms. Contact the authors for the full taxonomy structure.

References

[Aanen et al., 2015] Steven S. Aanen, Damir Vandic, and Flavius Frasincar. Automated product taxonomy mapping in an e-commerce environment. Expert Systems with Applications, 42:1298–1313, 2015.
[Alford, 1992] Andrew W. Alford. The effect of the set of comparable firms on the accuracy of the price-earnings valuation method. Journal of Accounting Research, 30(1):94–108, 1992.
[Bakalov et al., 2012] Anton Bakalov, Andrew McCallum, Hanna M. Wallach, and David M. Mimno. Topic models for taxonomies. In ACM/IEEE-CS Joint Conference on Digital Libraries (JCDL), pages 237–240, 2012.
[Bhojraj and Lee, 2002] Sanjeev Bhojraj and Charles M. C. Lee. Who is my peer? A valuation-based approach to the selection of comparable firms. Journal of Accounting Research, 40(2):407–439, 2002.
[Cambria and White, 2014] Erik Cambria and Bebo White. Jumping NLP curves: A review of natural language processing research. IEEE Computational Intelligence Magazine, 9(2):48–57, 2014.
[Choi et al., 2011] Myung Jin Choi, Vincent Y. F. Tan, Animashree Anandkumar, and Alan S. Willsky. Learning latent tree graphical models. Journal of Machine Learning Research, 12:1771–1812, 2011.
[de Knijff et al., 2013] Jeroen de Knijff, Flavius Frasincar, and Frederik Hogenboom. Domain taxonomy learning from text: The subsumption method versus hierarchical clustering. Data & Knowledge Engineering, 83:54–69, 2013.
[du Plessis et al., 2014] Marthinus Christoffel du Plessis, Gang Niu, and Masashi Sugiyama. Analysis of learning from positive and unlabeled data. In Advances in Neural Information Processing Systems (NIPS), pages 703–711, 2014.
[Frey and Dueck, 2007] Brendan J. Frey and Delbert Dueck. Clustering by passing messages between data points. Science, 315(5814):972–976, 2007.
[Fu et al., 2014] Ruiji Fu, Jiang Guo, Bing Qin, Wanxiang Che, Haifeng Wang, and Ting Liu. Learning semantic hierarchies via word embeddings. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL), pages 1199–1209, 2014.
[Hearst, 1992] Marti A. Hearst. Automatic acquisition of hyponyms from large text corpora. In Proceedings of the 14th Conference on Computational Linguistics (COLING), volume 2, pages 539–545, 1992.
[Hoberg and Phillips, 2010] Gerard Hoberg and Gordon Phillips. Product synergies and competition in mergers and acquisitions: A text-based analysis. The Review of Financial Studies, 23(10):3773–3811, 2010.
[Hoberg and Phillips, 2016] Gerard Hoberg and Gordon Phillips. Text-based network industries and endogenous product differentiation. Journal of Political Economy, 124(5):1423–1465, 2016.
[Liu et al., 2012] Xueqing Liu, Yangqiu Song, Shixia Liu, and Haixun Wang. Automatic taxonomy construction from keywords. In Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 1433–1441, 2012.
[Luu et al., 2014] Anh Tuan Luu, Jung-Jae Kim, and See-Kiong Ng. Taxonomy construction using syntactic contextual evidence. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 810–819, 2014.
[Meijer et al., 2014] Kevin Meijer, Flavius Frasincar, and Frederik Hogenboom. A semantic approach for extracting domain taxonomies from text. Decision Support Systems, 62:78–93, 2014.
[Navigli et al., 2011] Roberto Navigli, Paola Velardi, and Stefano Faralli. A graph-based algorithm for inducing lexical taxonomies from scratch. In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI), pages 1872–1877, 2011.
[Phillips and Ormsby, 2016] Ryan L. Phillips and Rita Ormsby. Industry classification schemes: An analysis and review. Journal of Business & Finance Librarianship, 21(1):1–25, 2016.
[Sadikov et al., 2010] Eldar Sadikov, Jayant Madhavan, Lu Wang, and Alon Halevy. Clustering query refinements by user intent. In International World Wide Web Conference (WWW), pages 841–850, 2010.
[Vivaldi et al., 2001] Jordi Vivaldi, Lluís Màrquez, and Horacio Rodríguez. Improving term extraction by system combination using boosting. In European Conference on Machine Learning (ECML), pages 515–526, 2001.
[Wang et al., 2017] Chengyu Wang, Xiaofeng He, and Aoying Zhou. A short survey on taxonomy learning from text corpora: Issues, resources and recent advances. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1190–1203, 2017.
[Xiao et al., 2007] Jianxiong Xiao, Jingdong Wang, Ping Tan, and Long Quan. Joint affinity propagation for multiple view segmentation. In International Conference on Computer Vision (ICCV), pages 1–7, 2007.
[Xing et al., 2018] Frank Z. Xing, Erik Cambria, and Roy E. Welsch. Natural language based financial forecasting: A survey. Artificial Intelligence Review, 50(1):49–73, 2018.
[Xing et al., 2019] Frank Z. Xing, Erik Cambria, and Roy E. Welsch. Growing semantic vines for robust asset allocation. Knowledge-Based Systems, 165:297–305, 2019.
[Zhang et al., 2018] Chao Zhang, Fangbo Tao, Xiusi Chen, Jiaming Shen, Meng Jiang, Brian M. Sadler, Michelle Vanni, and Jiawei Han. TaxoGen: Unsupervised topic taxonomy construction by adaptive term embedding and clustering. In Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 2701–2709, 2018.

Table 3: The NEEQ classification label and the label of our method for some companies mapped to the "Education" concept. Business model snippets are given in English (Google translation of the original Chinese).

- 839896 新东方网. NEEQ label: Internet Software and Service (互联网软件与服务). GHAP label: Online Education Services (在线教育服务). "Based on the online education industry, the company has established a multimedia learning platform and an integrated technology system for the dual-core driver of the adaptive learning big data system..."
- 831308 华博教育. NEEQ label: Internet Software and Service (互联网软件与服务). GHAP label: Online Education Platforms, Online Education Services (在线教育平台, 在线教育服务). "The company belongs to the software and information technology services industry. Its main business is the sale of online education service products..."
- 835799 互动百科. NEEQ label: Internet Software and Service (互联网软件与服务). GHAP label: Online Education Platforms, Online Education Services (在线教育平台, 在线教育服务). "The company is in the field of knowledge Internet, based on the Chinese network encyclopedia industry. After years of technical accumulation and operational accumulation, it provides users with reliable and authoritative encyclopedic knowledge services..."
- 831084 绿网天下. NEEQ label: Internet Software and Service (互联网软件与服务). GHAP label: Online Education Platforms (在线教育平台). "The company is in the software and information technology service industry. It is the leading domestic network security and information management service + K12 online education service and value-added service provider based on the security of youth mobile terminal Internet access..."
- 835079 全美在线. NEEQ label: General Customer Service (综合消费者服务). GHAP label: Online Training Services (在线教育培训). "National Online (Beijing) Education Technology Co., Ltd. (hereinafter referred to as 'All-American Online') is in the education-assistance industry, relying on the company's rich management experience in the field of examination and evaluation and online training..."
- 833587 网班教育. NEEQ label: Internet Software and Service (互联网软件与服务). GHAP label: Online Training Services (在线教育培训). "The company is a software service enterprise that provides comprehensive solutions for mobile online education and training..."
- 834560 思维实创. NEEQ label: IT Service (信息技术服务). GHAP label: Online Training Services, Education Information Services (在线教育培训, 信息化综合服务). "The company is an information-based integrated service and software service provider based on the education industry, using advanced technology to provide users with comprehensive solutions and value-added services..."
- 839467 易第优. NEEQ label: General Customer Service (综合消费者服务). GHAP label: Professional Skills Training (职业技能培训). "The company's main business is IT vocational and technical training, providing JAVA for students in college/graduate, job seekers/in-service employees..."
A Study of Lightning Activity over Different Ecological Zones of Nepal

Samin Poudel
Tribhuvan University, Kathmandu, Nepal ([email protected])

Abstract

In the present work, the occurrence of lightning activity over different ecological zones of Nepal has been studied. It has been observed that the Lower Tropical zone receives lightning strikes with the largest strike density, whereas the Trans-Himalayan zone receives lightning strikes with the least strike density. The density of lightning strikes over the Lower Tropical zone is 19.87 × 10⁻² per km² per year, whereas that over the Trans-Himalayan zone is 2.00 × 10⁻² per km² per year. Three other zones whose values were observed to be on the higher side are the Upper Tropical, Sub-tropical and Water Body zones, with annual strike densities of 14.46 × 10⁻² per km², 12.05 × 10⁻² per km² and 12.00 × 10⁻² per km², respectively. The annual strike densities for the Alpine and Nival zones are 2.56 × 10⁻² per km² and 2.17 × 10⁻² per km², respectively, which are close to the lowest value of the Trans-Himalayan zone. The remaining two zones, Sub-alpine and Temperate, experience lightning strikes with annual densities of 6.05 × 10⁻² per km² and 8.83 × 10⁻² per km², respectively.

Contents

Recommendation
Acknowledgement
Evaluation
Abstract
List of tables
List of figures
Acronyms
1.0 Introduction
1.1 How does lightning originate?
1.2 Mechanism and types of lightning
1.2.1 Mechanism of charge separation in thunderclouds
1.2.2 Types of lightning
1.2.2.1 Cloud to ground discharges
1.2.2.2 Cloud discharges
1.3 Basic theory
1.3.1 Electrostatic field
1.3.2 Magnetostatic field
1.3.3 Electromagnetic field produced by lightning discharge
1.3.3.1 Horizontal electric field
1.3.3.2 Vertical electric field
1.3.3.3 Azimuthal magnetic field
1.4 Effects of lightning
1.5 Brief history of scientific study of lightning
1.6 Classification of ecological zones of Nepal
1.7 Ecological zones and lightning
1.8 Lightning and climate change
1.9 Aim of study
17 18 20 22 24 26 28 30 32 34 36 52 iv List of figures 1.1 Geometrical model used in calculating electromagnetic fields. 2.1 Schematic diagram of sensor with stroke and GPS antenna. 6 13 2.2 Photograph of stroke antenna and GPS antenna on rooftop of NAST. 14 2.3 Photograph of TOA sensor at NAST. 2.4 Ecological division of Nepal. 3.1 Ecological map of Nepal with lightning strikes during winter 2012. 3.2 Lightning activity during winter 2012 over different ecological zones of Nepal. 3.3 3.4 3.5 3.6 Ecological map of Nepal with lightning strikes during winter 2013. Lightning activity during winter 2013 over different ecological zones of Nepal. Ecological map of Nepal with lightning strikes during winter 2015. Lightning activity during winter 2015 over different ecological zones of Nepal. 15 16 19 21 21 23 23 25 3.7 Ecological map of Nepal with lightning strikes during pre-monsoon 2012. 25 Lightning activity during pre-monsoon 2012 over different ecological zones 3.8 of Nepal. 27 3.9 Ecological map of Nepal with lightning strikes during pre-monsoon 2013. 27 3.10 Lightning activity during pre-monsoon 2013 over different ecological zones of Nepal. 29 3.11 Ecological map of Nepal with lightning strikes during monsoon 2012. 29 3.12 Lightning activity during monsoon 2012 over different ecological zones of Nepal. 31 3.13 Ecological map of Nepal with lightning strikes during monsoon 2013. 31 v 3.14 Lightning activity during monsoon 2013 over different ecological zones of Nepal. 33 3.15 Ecological map of Nepal with lightning strikes during post-monsoon 2012. 33 3.16 Lightning activity during post-monsoon 2012 over different ecological zones of Nepal. 35 3.17 Ecological map of Nepal with lightning strikes for two months of post- monsoon (October, November) 2014. 3.18 Lightning activity for two months of post-monsoon (October, November) 2014 over different ecological zones of Nepal. 4.1 Schematic diagram of lightning activity over Alpine zone for different seasons for the years 2012, 2013 and 2015. 4.2 Schematic diagram of lightning activity over Lower Tropical zone for different seasons for the years 2012, 2013 and 2015. 4.3 Schematic diagram of lightning activity over Nival Zones for different seasons for the years 2012, 2013 and 2015. 4.4 Schematic diagram of lightning activity over Sub-alpine zone for different seasons for the years 2012, 2013 and 2015. 4.5 Schematic diagram of lightning activity over Sub-tropical zone for different seasons for the years 2012, 2013 and 2015. 4.6 Schematic diagram of lightning activity over Temperate zone for different seasons for the years 2012, 2013 and 2015. 4.7 Schematic diagram of lightning activity over Trans-Himalayan zone for different seasons for the years 2012, 2013 and 2015. 4.8 Schematic diagram of lightning activity over Upper Tropical zone for different seasons for the years 2012, 2013 and 2015. 35 37 38 40 41 43 44 46 47 49 vi 4.9 Schematic diagram of lightning activity over Water Body zone for different seasons for the years 2012, 2013 and 2015. 5.1 Density of lightning strikes over ecological zones in ascending altitude (temperature descending). 
50 53 vii Acronyms GIS GLN Geographic Information System Global Lightning Network NAST Nepal Academy of Science and Technology ESRI Environmental Systems Research Institute ICIMOD International Centre for Integrated Mountain Development TOA GPS CAP WSI Time Difference of Arrival Global Positioning System Central Analysis Processor Weather Services International viii Chapter 1 Introduction Lightning, one of the oldest observed powerful natural phenomena on earth, is flash of light created by electric discharge accompanied with tremendous amount of energy. This electrostatic discharge takes place in atmosphere between the electrically charged regions within clouds or between a cloud and the surface of earth to balance the difference between the positive and negative charges. Globally, it strikes the ground about 100 times per second (around 8 million times per year). Lightning, most commonly, occurs within a cloud either from a cloud to the surrounding air or from a cloud to another cloud. Only about 20% of all lightning strikes occur between a cloud and the ground, which is what most people think of as classic lightning. Cloud to ground lightning is the one that is primarily concerned with general people and has been focused in this thesis than other types of lightning. 1.1 How does lightning origin? Different myths, based on various religious beliefs, on the origination of the lightning and thunderstorms have been seen so far. The myths are from different civilizations. Greeks had a belief that the lightning was a weapon of Zeus and the inventor of thunderbolts was Athena, the goddess of wisdom. According to Scandinavian mythology, it was Thor who created lightning with his hammer to attack. Similarly, Hindus had their own belief. They believed that Indra was the god of heaven, lightning, rain, storms and thunder. Indian tribes in North America suggest that it occurred when the mystical thunder bird flaps its wings. Even though these myths are present in written form in various religious books, we cannot find scientific reasons behind them to justify them. From the scientific investigations, conclusions have been drawn that atmospheric convection leading to electric discharge is responsible for lightning (more discussion in 1.2). Lightning occurs when some region of the atmosphere attains an electric charge sufficiently large that the electric field associated with charge cause electrical breakdown of the air [1]. 1.2 Mechanism and types of lightning 1.2.1 Mechanism of charge separation in thunderclouds The primary source of lightning involving the cloud is cumulonimbus; however, not every cumulonimbus produces lightning discharges. The cumulonimbus which produces lightning discharge is more properly called a thundercloud [2]. As lightning flash is the result of electric discharge and a thundercloud is considered main source for it, thunderclouds are to be electrically charged for this to happen. Different theories have been put forward regarding the charge separation in thundercloud. But, cloud particle collisions are thought to be the main mechanism for cloud electrification [3]. Cloud particle refers to snow, hail, ice crystals, graupel (soft hail) and super cooled water (water below 0 ˚C and above -40 ˚C). In particular, collision between ice crystals and graupel is responsible for the charge separation in 1 thunderclouds. During collision, ice crystals are positively charged where as hail stones are negatively charged. 
The updrafts during thunderstorms carry the positive crystals to the upper region of the cloud, as they are lighter than hail stones, while the heavier wet crystals move toward the base of the cloud. Along with the upper and lower charged regions, "In a typical thundercloud a small positive charge is also found below the main negative charge" [4]. The tripolar structure thus developed encourages lightning. Different mechanisms for different types of lightning are briefly discussed in Section 1.2.2.

1.2.2 Types of lightning

Not only natural but also artificially triggered lightning, produced for research purposes, has come into existence. Here, we focus on the natural forms of lightning. Different kinds of lightning have recently been proposed in the scientific world, but the classification based on the orientation of the charge centers where the electric discharge initiates and terminates is widely accepted. According to this, lightning may be categorized into two types: cloud to ground discharges and cloud discharges.

1.2.2.1 Cloud to ground discharges

An electric discharge that effectively transfers charge from a cloud to the ground is called a cloud to ground discharge. From the observed polarity of the charge effectively lowered to ground and the direction of propagation of the initial leader, four different types of lightning discharges between cloud and Earth have been identified [5]: downward negative, upward negative, downward positive and upward positive. As more than 90 percent of cloud to ground discharges are downward negative, this type is simply termed cloud to ground lightning because of its predominance. This form of lightning initiates in a cloud but ends in cloudless air. During this lightning, negative charges at the base of the cloud are transferred from the lower regions of the cumulonimbus toward the positive charge on the earth's surface. A brief discussion of this form is presented below.

Downward negative cloud to ground lightning

Cloud-to-ground (CG) lightning mainly includes the processes of preliminary breakdown, stepped leader, first return stroke, inter-stroke process, dart (or dart-stepped) leader, subsequent return stroke, and continuous current [6]. The negative charges at the base of the thundercloud induce positive charges on the ground, which are also essential components of downward negative lightning. To start, the electrons attached to water or ice crystals are discharged from the thundercloud, and this avalanche of electrons moves toward the ground by ionising the air in between. The path taken by these electrons is termed the stepped leader, whose individual steps are about 50 meters long on average. The journey of the electrons from cloud to ground is not covered in a single step. The leaders have a mean speed of 2 × 10⁵ m/s and carry a charge of about 7 × 10⁻⁴ C per metre [7]. The lengths of the individual steps in stepped leaders vary from 10 to 200 m, and the inter-step intervals range from 40 to 100 μs. Both the step lengths and their brightness increase as the leader speed increases [5]. When the stepped leader approaches the ground, an upward leader of positive charges, known as the upward connecting leader, rises from the ground and connects to it, and thus the stage for the first return stroke is set. The return stroke gives the intensely bright lightning flash and has a greater propagation velocity than the stepped leader.
This return stroke is the actual flow of stroke current, which has a median value of about 24,000 A; it is effectively the flow of charge from earth to cloud that neutralizes the charge centre [8]. The positive charge flowing upward from ground to cloud is equal to the negative charge that flowed down. Along the pre-channelled path of the stepped leader, another leader develops in more than approximately 55 percent of lightning flashes [8]; it is known as the dart leader and descends by the same mechanism as the stepped leader. A lightning flash usually consists of 3–5 strokes with stepped and dart leaders. Although cloud to ground lightning is less common, it is easier to research and thus best understood. This form of lightning is by far the most damaging and dangerous. The lightning solely responsible for the casualties occurring on earth is the cloud to ground discharge, downward negative lightning in particular.

1.2.2.2 Cloud discharges

Cloud discharges occur either within a cloud, called intra-cloud lightning, or between one cloud and another, called inter-cloud lightning. Intra-cloud lightning is the most common type of discharge. Such a discharge takes place between two oppositely charged regions of the same cloud. As the cloud obscures the lightning flash and makes it hard to see, it takes the form of a diffuse brightening that flickers. However, on some occasions the flash is able to leave the obscuring boundaries of the cloud, giving a bright channel of light. Inter-cloud lightning is analogous to intra-cloud lightning; the only difference is that inter-cloud lightning occurs between the charge centers of two clouds, whereas intra-cloud lightning occurs among the charges of the same cloud.

1.3 Basic theory

In order to understand the electromagnetic field theory involved in the lightning phenomenon, let us first go through the electrostatic field and the magnetostatic field separately, in brief.

1.3.1 Electrostatic field

The concept of electric charge is the underlying principle for explaining all electrical phenomena [9]. Static electric charges have to be present in and around the region of a cumulonimbus cloud for lightning to occur, and static electric charges produce an electrostatic field. Although from old times people had observed amber attracting tiny pieces of matter in its surroundings under certain conditions, it took some time to understand and explain that this happened because of the electrostatic field. The French physicist Charles Auguste de Coulomb, after extensive experiments, was able to formulate the interaction forces between electrical charges mathematically, and he also made a device to precisely measure these forces. The electrostatic force exerted by a point charge Q₁ on a point charge Q₂ can be written as:

F⃗₁₂ = k Q₁ Q₂ (r⃗₂ − r⃗₁) / │r⃗₂ − r⃗₁│³.  (1.1)

Equation (1.1) can be stated as: "The electrostatic force between two point charges is directly proportional to the amount of electrostatic charge on each of them and inversely proportional to the square of their distance." Here, k is a proportionality constant which depends on the unit system used and can be obtained from the relation:

k = 1 / (4πε₀),  (1.2)

where ε₀ = 8.85 × 10⁻¹² A s/(V m) is the dielectric permittivity of free space. For the force in relation (1.1) to exist, the presence of electric charge on both geometric objects is a must. The electric field of charge Q₂, which is responsible for the force experienced by charge Q₁ and is evaluated at the position of charge Q₁, can mathematically be written as:

E⃗₂(r⃗₁) = F⃗₂₁ / Q₁ = Q₂ (r⃗₁ − r⃗₂) / (4πε₀ │r⃗₁ − r⃗₂│³).  (1.3)

Moving on to the larger scale, if N point charges are distributed in free space (free space is a linear material in the dielectric sense because the dielectric permittivity of vacuum does not depend on the electric field), the superposition principle can be applied. Thus, the vector sum of all individual fields gives the total electric field:

E⃗(r⃗) = Σ_{i=1}^{N} Q_i (r⃗ − r⃗_i) / (4πε₀ │r⃗ − r⃗_i│³).  (1.4)
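Equation (1.4) translates directly into code. The sketch below evaluates the superposed field of a crude two-charge "thundercloud dipole"; the charge magnitudes and positions are arbitrary illustrative values, not measured ones.

```python
import numpy as np

EPS0 = 8.85e-12  # dielectric permittivity of free space (A s / (V m))

def e_field(r_obs, charges, positions):
    """Eq. (1.4): vector sum of the Coulomb fields of N point charges."""
    E = np.zeros(3)
    for Q, r_i in zip(charges, positions):
        d = r_obs - r_i
        E += Q * d / (4 * np.pi * EPS0 * np.linalg.norm(d) ** 3)
    return E

# Illustrative thundercloud dipole: -40 C at 5 km and +40 C at 10 km altitude.
charges = [-40.0, 40.0]
positions = [np.array([0.0, 0.0, 5e3]), np.array([0.0, 0.0, 10e3])]
obs = np.array([2e3, 0.0, 0.0])   # observation point on the ground, 2 km away
print(e_field(obs, charges, positions), "V/m")
```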
The forces experienced by charge Q1 which appears due to the electric field of charge Q2 at the position of charge Q1 can mathematically be written as: E⃗⃗ 2 = F⃗⃗ 21 Q1 = Q2.(r⃗ 2−r⃗ 1) 4πε0│r⃗ 2−r⃗ 1│3 ........................... (1.3) Moving on to the larger scale, if N point charges are distributed in free space (free space is a linear material in the dielectric sense because the dielectric permittivity of vacuum does not depend on the electric field) the superposition principle can be applied. Thus, the vector sum of all individual fields gives the value of total electric field. Mathematically, total electric field is calculated as: N 𝐸⃗ (𝑟 ) = ∑ i=1 Q1.(r⃗ −r⃗ i) 4πε0│r⃗ −r⃗ i│3 .............................. (1.4) 1.3.2 Magnetostatic field A magnetic needle which was kept near a current carrying deflected in one side and its direction of deflection was opposite on reversing the direction of current in the wire. This discovery was made by Oersted in 1820. He explained that the magnetic field which sets up around the wire when a current passes through it is responsible for the deflection. Phenomenon of production of magnetic field around a conductor by passing a current through 4 it termed as Oersted’s discovery. From Biot-Savart law, we can find the magnetic field d𝐵⃗ due to a small current carrying element dl as: dB⃗⃗ = 𝜇0 4𝜋 Idl sin θ r2 .............................. (1.5) For large numbers of small current carrying elements relation can be written as: B⃗⃗⃗ =∫ 𝑑B⃗⃗ = μ0 4π ∫ Idl sin θ r2 .............................. (1.6) Where, 𝜃 is the angle between 𝑑𝑙 ⃗⃗⃗⃗ and 𝑟 , I is the magnitude of current passing through conductor, (𝜇0= 4𝜋 × 10−7 𝑊𝑏𝐴−1𝑚−1), is permeability of vacuum. Taking divergence of the magnetic field will lead us to one of the Maxwell’s equation which is: △ . B ⃗⃗⃗ = 0 .............................. (1.7) 1.3.3 Electromagnetic field produced by lightning discharge: Sections 1.3.1 and 1.3.2 are more than enough to make us believe that both electric and magnetic fields are produced during lightning as static charges and flow of current are the part of this natural phenomenon. Since, different physical processes give rise to different field signatures, the electric and magnetic field signatures so produced are the basic physical parameters in understanding the mechanisms of lightning discharges [2]. Because of the current that flows in the lightning channel in between clouds and ground, an electromagnetic field is radiated from the lightning. Out of the whole electromagnetic energy that is dispersed by the lightning strike, only a minor part is represented by the visible part of spectrum of stroke. Amount of energy being released out during the strike is so huge that the amplitude of the electric field pulse wave is equal to several V/m even at the distance of 100 km. Schematic representation of the lightning channel’s assumed geometry is shown in the figure (1.1) in which observation point P is the point where the fields are calculated. In order to represent the fields in this geometry, the cylindrical coordinate system is used. 5 -H Figure 1.1: Geometrical model used in calculating electromagnetic fields. Adapted from “A New Model of Electromagnetic Fields Radiated by Lightning” [10]. 
If we consider a perfectly conducting ground, the components of the electric and magnetic fields at the location P(r, ϕ ,z) produced by a short vertical section of infinitesimal channel dz’ at height z’ carrying a time-varying current i(z’,t) can be computed in the time domain using the following relations [10]: 1.3.3.1 Horizontal electric field The horizontal component of electric field at P (r, ϕ, z) can be expressed as: Er(r, z, t) = 1 4πϵ0 [∫ H −H 3r(z−z′) R5 t ∫ i(z′, τ − R c)⁄ 0 d dz′+∫ H −H 3r(z−z′) cR4 i(z′, t − R c)dz ⁄ + H ∫ −H r2 c2R3 ∂i(z′,t−R c)⁄ ∂t dz′] .............................. (1.8) [10] 1.3.3.2 Vertical electric field The vertical component of electric field at P (r, ϕ, z) can be expressed as: 6 Ez(r, z, t) = 1 4πϵ0 [∫ H −H 2(z−z′)2−r2 R5 t ∫ i(z′, τ − R c)⁄ 0 d dz′+∫ H −H 2 2(z−z′) cR4 −r2 i(z′, t − R c)dz ⁄ - H ∫ −H r(z−z′)2 c2R3 ∂i(z′,t−R c) ⁄ ∂t dz′ ] .............................. (1.9) [10] 1.3.3.3 Azimuthal magnetic field The azimuthal magnetic field at P (r, ϕ, z) can be expressed as: B∅ = μ0 4π [∫ H −H r R3 i(z′, t − R c⁄ ) dz′+∫ H −H r cR2 ∂i(z′,t−R c)⁄ ∂t dz′] .........……… (1.10) [10] Here, R = √(𝑧 − 𝑧′)2 + 𝑟2 ……………….……. (1.11) [10] H = v (t- 𝑅 𝑐 ) ……………………… (1.12) [10] In equations 1.8 to 1.12, i(z’,t), is the current carried by the dz’ dipole at time t; 𝜀0, is the permittivity of the vacuum; 𝜇0, is the permeability of the vacuum; c, is the speed of light; R, is the distance from the dipole to the observation point, and r, is the horizontal distance between the channel and the observation point. In equations (1.8) and (1.9), the terms containing the integral of the current (charge 1 𝑟3 distance, they are the dominant field component near the source whereas terms containing the transferred through dz’) are called “electrostatic fields” and, as these terms depend on derivative of the current are called “radiation fields”. And, because of their 1 distance 𝑟 dependence, they are the dominant components far from the source. The terms containing the current are called “induction fields”. In Equation (1.10), the first term is called “induction magnetostatic field” which is the dominant field component close to the source, and the second term is called “radiation field” which is the dominant field component at far distances from the source. In these equations the presence of the perfectly conducting ground is taken into account by replacing the ground by an equivalent image as shown in figure 1.1. The calculation of the electromagnetic field requires the knowledge of the spatial-temporal distribution of the current along the channel, i(z’,t) [10]. 1.4 Effects of lightning 7 Lightning has accompanied both merits and demerits with it. People have more or less knowledge about the harmful side of the lightning as have heard or faced about its calamities. However, many people are still not clear about the benefits of this powerful force. Evidence of 250000 - years lightning found in glassy tubes and ancient fulgurites suggest its presence during the time in which the life evolved on earth. Assumptions have been made that lightning was a source to generate the significant molecules like hydrogen cyanide (HCN) which assisted in the evolvement of life in earth. Furthermore, lightning helped to maintain the suitable condition for evolution by strengthening the ozone layer that blocked the harmful radiations like UV radiation. 
Nitrogen fixation, combination of relatively inert gas with other elements, is essential for the continuation of life on earth and there are not many ways to do it. One of the ways is lightning. Large heat produced during lightning combines atmospheric nitrogen with oxygen to form oxides. These oxides further react with moisture to give nitrates which come on earth with rain. “It is postulated that the lightning induced fires were mans first source of fire. Fire was of critical importance to humanity: it provided warmth, protection and a means of cooking food” [11]. Lightning ignites the forest fires and plays an important role in the ecological balance. The biologist Mr. Edwin V. Komarek, Sr. in his studies of lightning induced fire damage and the surviving ecology balance indicated that nature’s use of lightning fires for clearing dense wooded areas is indeed beneficial to the ecology. Moving on to the darker side of lightning, it can be considered as a deleterious natural process. Thunderstorms, and lightning in particular, are a major natural hazard to the public, aviation, power companies, and wildfire managers [3]. Every year many fatalities are being reported and the destruction of great deal of property is occurring because of it. The primary cause of death after the lightning strike of a man is cardiopulmonary arrest. Lightning strikes make a man vulnerable to central and peripheral nervous system injuries, burning effects, musculoskeletal effects and ophthalmic effects. This natural electric discharge has also been responsible for damaging the physical infrastructures of world. Transmission and communication towers, transmission lines and tall physical structures including residential houses and monuments are more vulnerable to lightning activities [12]. Disturbance in the human activities like aviation, outdoor sports and repairable as well as irreparable damages are other most common effects that lightning can pose. 1.5 Brief history of scientific study of lightning Although, lightning was believed to have existed even before the evolution of life on the earth (around 3 billion years ago) and the human being was curious about this phenomenon, scientific experiment was started only in mid 18th century. Experiments proposed and performed by Benjamin Franklin gave birth to lightning as a topic of a research in the scientific world. ‘The Sentry Box’ experiment proposed by Benjamin Franklin and performed at Marly-la-Ville in 1752 was the formal beginning of scientific research of lightning [2]. After a month of ‘The Sentry Box’ experiment, Benjamin himself performed 8 the famous ‘Kite’ experiment in Philadelphia. In “Kite” experiment he observed sparks to jump from a key attached to a kite string to knuckles of his hand. From these experiments Benjamin concluded that “clouds of thunderstorm are most commonly electrically charged” There is no doubt, the finding of the Benjamin’s experiments was important in the field of lightning but the use of lightning photography on a moving film by Hoffert in 1889 was the actual incident that started the scientific progress [2]. Pockels in 1897 measured the lightning current and analyzed the induced magnetic field for the first time [21]. Wilson started to use electric field measurements to evaluate thunderstorm charges involved in lightning discharges. A strong impetus was given to lightning research in the second decade of last century by the needs of electricity supply during thunderstorm periods. 
Technical necessity thus led to scientific research [13]. At present the experimental methods have become more advanced. Investigations are being carried out with rockets, high-altitude airplanes and spacecrafts. Rocket-triggered lightning research has been an important tool for close-up investigation. Nowadays, the research works on lightning and thunderstorm as a part of atmospheric physics are common. The research is mainly focused to find the correlation of lightning with the global climate, ecology, temperature, vegetation, precipitation etc. Lightning protection system are also being introduced as a gift from science in order to minimize the damage from the disastrous natural process. In future the Lightning Mapper program has planned to place a sensor in geostationary orbit. This sensor has capacity to map lightning discharges continuously during both day and night, with a special resolution of 10 km. 1.6 Classification of ecological zones in Nepal In Nepal, if ecological maps attempt to portray all recognized differences in landscape characteristics including slope, elevation, total rainfall and its distribution, soil type, micro climate, associated with mature phase and early phase vegetation no planner will be able to grasp how they can be used and certainly no layman can understand them. The ecological classification therefore has to differentiate the major differences only, while allowing for variation within each class [14]. Nepal lies just outside of the tropics in the global climatic zonation. However, bioclimatic tropicality extents into it up to an elevation of 1,000 m altitude. For a mountain country like Nepal altitudinal limits are most convenient to define ecological zones or life zones [15]. On the basis of altitudes from sea level, Nepal has been divided into seven ecological zones. Temperature of these ecological zones goes on decreasing with the increase of altitude. Seven ecological zones of Nepal are as follows: • Lower Tropical zone: It extends in between (70-300) m altitude from sea level and is the hottest ecological zone of Nepal. 9 • Upper Tropical zone: This zone lies in the altitudinal limit of (300-1000) m which is below Sub-tropical zone and above Lower Tropical zone. • Sub-tropical zone: It lies above Upper Tropical Zone up to 2000 m. • Temperate zone: It is in between Sub-tropical and Sub-alpine zones and in the altitudinal range of (2000-3000) m. • Sub-alpine zone: This zone extends in between (3000-4000) m altitude from sea level. • Alpine zone: Alpine zone is the second highest zone in terms of altitude from sea level among ecological zones of Nepal and lies in the altitudinal limit of (4000-5000) m. • Trans-Himalayan zone: This zone lies above the altitude of 5000 m from sea level and is the one with lowest temperature among seven ecological zones. Other two ecological zones that are used for the precise study of lightning activities are Water Body and Nival zones. Nival zones do not have the potential of vegetation. J. F. Dobremez [14], a French researcher, who was the main author of the hard copy maps made in the 1970’s and 1980’s, sketched the ecological hard copy map of Nepal too[14]. Later on, this hard copy map was used to create digital map which is now used in Geographic Information System (GIS). 1.7 Ecological zones and lightning The thundercloud lightning shows a variety of different characteristics depending on the variability of the size of thundercloud, which in turn depends on the latitude, topography, season and type of storm [5]. 
Global lightning activity varies from one region to another with the variation in the Earth's climate. The climate of any ecological zone depends upon the amount of radiation received from the sun, and this radiation varies with latitude. Since different ecological zones of the same country basically lie at different latitudes, their climates differ, so it is to be expected that the number of lightning strikes varies from one ecological zone to another. Furthermore, statistics of the lightning distribution around the world reveal that the intensity and polarity of lightning in thunderstorms are affected by parameters such as surface temperature, water vapour, the tropospheric lapse rate and the aerosol loading [3]. These parameters differ between the ecological zones, which supports the evidence of an unequal lightning pattern among these zones. Recent studies continue to show a high positive correlation between surface temperatures and lightning activity [16]. Among the tropical land masses, Africa, South America and Southeast Asia rank first, second and third, respectively, in receiving the greatest number of lightning strikes; they are also called lightning chimneys. The reasons for Africa's strongly continental character and lightning dominance have been attributed to surface characteristics and to the effects of aerosol [17]. Hence, warmer ecological zones invite more lightning.

In the context of lightning in Nepal, the investigation by Baral and Mackerras in 1992 [18] is worth mentioning. They studied lightning occurrence characteristics with a flash counter network for a total of 21 months (March 1987–November 1988) in the Kathmandu Valley. Their results indicate that when the lightning activity starts in March, it intensifies quickly, reaching its peak in May, while in June the activity decreases rapidly as the monsoon season starts [18]. In this study, Nepal has been divided into nine ecological zones in order to observe the lightning pattern.

1.8 Lightning and climate change

Global warming has a direct relationship with lightning. Observations performed on different time scales show a positive relationship between temperature and lightning, with lightning increasing anywhere from 10-100 % for every one degree of surface warming [16]. Future climate change could have significant repercussions on two related natural hazards: lightning and forest fires [19]. Lightning is predicted to increase by 50% by 2050. In return, lightning can also influence the Earth's climate. The nitrogen oxides produced during lightning assist in forming ozone in the troposphere, which helps to reduce the amount of ultraviolet radiation and other harmful rays; ozone, however, is also a strong greenhouse gas and so enhances global warming through its greenhouse effect. Moreover, as heat, moisture and wind currents are redistributed during thunderstorms and lightning, there is a change in the climate.

1.9 Aim of study

Despite the many possibilities and open areas for study, research on lightning is still in its initial phase in Nepal. Continuing research in this field can motivate those who are interested in the same topic. Available data have shown that Nepal exhibits variation in the number of lightning strikes across its different ecological zones. Why does a country have different lightning densities among the regions within its territory, and what are the factors that could be contributing to such statistics?
The main aim of this study is to search for answers to these questions by going through the research works that have been done in the related field and by analyzing the available data. "The process of charge separation in clouds for lightning to originate" is still debatable, but no one can question the fact that lightning has been responsible for loss of lives and physical destruction, so a protective system against lightning is a necessity. The observations and results obtained from this study can encourage steps toward establishing protective systems against lightning. The results of this study can be more than useful to the authorities in differentiating areas that are more prone to lightning from those that bear fewer strikes, and hence the knowledge can be used for the necessary personal safety and structural protection measures. Furthermore, the distribution of lightning activity over different ecological zones is of much importance to scientists trying to understand the influence of the ground on lightning activity.

Chapter 2

Instrumentation and methodology

Lightning strikes over the earth have continuously been monitored using different sensors. In the present study, we have used the strike data obtained from the Global Lightning Network (GLN). One of the sensors of the network has been installed on the rooftop of the Nepal Academy of Science and Technology (NAST) and is the only sensor in Nepal. The strike data obtained from the GLN were analysed with the help of the ArcGIS program developed by the Environmental Systems Research Institute (ESRI). The ecological map was obtained from the International Centre for Integrated Mountain Development (ICIMOD).

GLN is an advanced lightning detection network providing high-quality real-time and archived lightning stroke data to partners throughout the world. GLN, which can be considered a boon of technology, provided the lightning stroke data required for this study. The network has become highly efficient in precisely detecting and supplying information about lightning activity all over the world, so that necessary steps can be taken in vulnerable areas to minimize the loss of property and save lives from this natural disaster.

Figure 2.1: Schematic diagram of sensor with stroke and GPS antenna. Adapted from "Study of effect of climate change on lightning activity in Nepal" [21].

GLN uses Time of Arrival (TOA) Systems Precision Lightning Sensors in order to detect and analyze lightning activity precisely. Each sensor system has a sensor receiver chassis accompanied by a Global Positioning System (GPS) antenna and a stroke antenna. In a time-of-arrival based system, timing is an essential part of the receiver, and the receiver uses GPS timing as a reference. The raw data are forwarded to the TOA Systems Central Analysis Processor (CAP), where they are analyzed to produce transformable solutions within 10 seconds or less. The CAP computes and displays real-time lightning location information. All lightning activities are archived by WSI (Weather Services International) Corporation and made available to the host partners.

Figure 2.2: Photograph of stroke antenna and GPS antenna on rooftop of NAST.

GPS is an operational system, providing users worldwide with twenty-four-hour-a-day precise position in three dimensions and precise time traceable to global time standards [20]. It consists of a constellation of 24 satellites that continuously orbit the Earth.
Each GPS satellite has on board several atomic clocks that are precisely synchronized to Coordinated Universal Time provided by the U.S. Naval Observatory (USNO). In order to acquire the signals, the GPS antenna should be mounted on a roof or at a window with a clear view of the sky.

Figure 2.3: Photograph of TOA sensor at NAST.

After the data on lightning activity all over Nepal and nearby locations were obtained, they were projected, clipped and finalised to extract the necessary information with the help of ArcMap, which is one part of the ArcGIS software. Data from December 2011 to February 2015, excluding the period September 2013 to September 2014, were analyzed. We had no choice but to exclude the data for the period mentioned, because the system in Nepal for collecting raw lightning data was awaiting repair. The available data were plotted by separating them into four parts, as per the four seasons of the year:

(1) Winter season (December, January and February)
(2) Pre-monsoon season (March, April and May)
(3) Monsoon season (June, July and August)
(4) Post-monsoon season (September, October and November)

The ecological map of Nepal with nine ecological zones was incorporated in ArcGIS, and the number of lightning strikes assigned to each ecological zone, along with the area of each zone, was obtained. Thereafter, the density of lightning in each zone was calculated as:

Lightning Density = Number of lightning strikes / Area in square kilometers        (2.1)

The values of the number of lightning strikes and the densities of lightning thus obtained for the different ecological zones were presented with a line graph for each season. Also, each ecological zone was indexed with a different colour so that the dominance of lightning activity in those regions could be analysed properly with the help of ArcGIS. Although Nepal has been divided into seven ecological zones on the basis of altitude above sea level, a map of Nepal separated into nine ecological divisions was used for a more detailed and precise study, as the Nival Zones and Water Body may play a significant role in lightning activity.

Figure 2.4: Ecological division of Nepal.

The seven ecological zones of Nepal, classified by altitude, are tabulated as:

Table 2.1: Ecological zones of Nepal with altitude and area covered in percentage.

S.N  Ecological zone    Altitude (m)   Area covered (% of Nepal)
1    Lower Tropical     70-300         18
2    Upper Tropical     300-1000       18
3    Sub-tropical       1000-2000      22
4    Temperate          2000-3000      12
5    Sub-alpine         3000-4000      9
6    Alpine             4000-5000      8
7    Trans-Himalayan    Above 5000     8

The other two ecological zones over which lightning activity was studied are the Nival Zones and Water Body.

Measurements, parameters and units
Position of sensor in Nepal: NAST, Khumaltar, Lalitpur
Latitude: 27.65° N
Longitude: 85.32° E
Altitude: 1.38 km (above mean sea level)
Lightning density: number of lightning strikes per square kilometer
Time: days, months, seasons (pre-monsoon, monsoon, post-monsoon and winter)

Chapter 3

Observation

Distribution of lightning activity over different ecological zones

Lightning strikes occurring over the globe have continuously been monitored by WSI's Global Lightning Network (GLN) system. The electric field from lightning, sensed by the stroke antenna and located by the coordinated GPS system, is processed and archived by WSI's CAP. The archived data have been further processed and analyzed in this study with the help of ArcGIS.
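Before the zone-wise results are presented, the bookkeeping behind equation (2.1) can be summarised in a short script. This is only a minimal sketch, assuming each strike record has already been spatially joined to an ecological zone (the (month, zone) record format, the function seasonal_density and the dictionary names are illustrative, not part of the GLN or ArcGIS tooling); in the study itself this counting was done inside ArcGIS.

from collections import Counter

# Zone areas in km^2, taken from Table 3.1 below.
ZONE_AREAS_KM2 = {
    "Alpine": 17088, "Lower Tropical": 25153, "Nival Zones": 11061,
    "Sub-alpine": 14326, "Sub-tropical": 31707, "Temperate": 20915,
    "Trans-Himalayan": 3224, "Upper Tropical": 23911, "Water Body": 638,
}

# Map each calendar month to one of the four seasons used in the study.
SEASON_OF_MONTH = {12: "winter", 1: "winter", 2: "winter",
                   3: "pre-monsoon", 4: "pre-monsoon", 5: "pre-monsoon",
                   6: "monsoon", 7: "monsoon", 8: "monsoon",
                   9: "post-monsoon", 10: "post-monsoon", 11: "post-monsoon"}

def seasonal_density(strikes):
    """strikes: iterable of (month, zone) pairs, one per detected strike.
    Returns {(season, zone): (count, density per km^2)}, following eq. (2.1)."""
    counts = Counter((SEASON_OF_MONTH[month], zone) for month, zone in strikes)
    return {key: (n, n / ZONE_AREAS_KM2[key[1]]) for key, n in counts.items()}

# Example: two January strikes over the Sub-tropical zone.
print(seasonal_density([(1, "Sub-tropical"), (1, "Sub-tropical")]))

Dividing a count by the zone area in this way reproduces the densities tabulated throughout this chapter.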
We have used the ecological map of Nepal (shown in figure 2.4) to clip and perform spatial analysis of the available lightning data using ArcGIS. After finding the number of lightning strikes for the different zones, the densities of lightning strikes, in units of strikes per square kilometer, were calculated for each region using equation (2.1). The densities so obtained were tabulated and represented with line graphs in this chapter. The area of each ecological zone used for the calculations is given in table 3.1.

Table 3.1: Ecological zones of Nepal with area covered in square kilometers.

S.N  Ecological zones   Area (km²)
1    Alpine             17088
2    Lower Tropical     25153
3    Nival Zones        11061
4    Sub-alpine         14326
5    Sub-tropical       31707
6    Temperate          20915
7    Trans-Himalayan    3224
8    Upper Tropical     23911
9    Water Body         638

3.1 Distribution of lightning activity over different ecological zones of Nepal for different seasons

Lightning activity occurring all over Nepal was observed by separating the available data into four seasons, namely winter, pre-monsoon, monsoon and post-monsoon. The observed values and calculations for the four seasons are as follows.

(A) Winter

(a) Winter 2012

Figure 3.1: Ecological map of Nepal with lightning strikes during winter 2012.

Table 3.2: Lightning activity during winter 2012 over different ecological zones of Nepal.

S.N  Ecological zones   Number of lightning strikes   Density of lightning strikes (per km²)
1    Alpine             12     0.07 × 10⁻²
2    Lower Tropical     144    0.57 × 10⁻²
3    Nival Zones        12     0.10 × 10⁻²
4    Sub-alpine         34     0.23 × 10⁻²
5    Sub-tropical       216    0.68 × 10⁻²
6    Temperate          106    0.50 × 10⁻²
7    Trans-Himalayan    1      0.03 × 10⁻²
8    Upper Tropical     158    0.66 × 10⁻²
9    Water Body         5      0.78 × 10⁻²

From the map obtained via ArcGIS by incorporating the strike data, we were able to view the facts related to lightning activity in the different zones during the period concerned. During winter 2012, i.e. December 2011 to February 2012, Nepal experienced a total of 688 lightning strikes, of which the Sub-tropical zone experienced the most, with around 31 percent of the strikes, whereas the Trans-Himalayan zone had the fewest, with only one strike during the period under consideration. Analyzing on the basis of altitude (see table 2.1), we can see that the zones at high altitude have fewer strikes and the zones at low altitude have more strikes. Regarding the densities of lightning strikes, it was found that the Water Body and Sub-tropical zones were in first and second place, respectively, for the highest densities, while the Alpine and Trans-Himalayan zones were in second-to-last and last position, respectively. Further details of the lightning activity during winter 2012 for the other ecological zones are listed in table 3.2 above. In figure 3.2, the values of the number of lightning strikes and the density of lightning strikes × 10000 over the ecological zones are plotted as a line graph for quick interpretation. The value of the density of strikes was multiplied by 10000 in order to bring it into the range of the number of lightning strikes, so that both quantities can be plotted and analyzed at once in the same graph.

Figure 3.2: Lightning activity during winter 2012 over different ecological zones of Nepal.

(b) Winter 2013

Figure 3.3: Ecological map of Nepal with lightning strikes during winter 2013.
Table 3.3: Lightning activity during winter 2013 over different ecological zones of Nepal.

S.N  Ecological zones   Number of lightning strikes   Density of lightning strikes (per km²)
1    Alpine             39     0.23 × 10⁻²
2    Lower Tropical     192    0.76 × 10⁻²
3    Nival Zones        8      0.07 × 10⁻²
4    Sub-alpine         121    0.84 × 10⁻²
5    Sub-tropical       425    1.34 × 10⁻²
6    Temperate          232    1.11 × 10⁻²
7    Trans-Himalayan    0      0
8    Upper Tropical     214    0.89 × 10⁻²
9    Water Body         4      0.63 × 10⁻²

Table 3.3 was drawn from the map obtained from ArcGIS for winter 2013. It is clearly seen from the table that the Sub-tropical zone exceeded all other zones with 425 out of 1235 strikes, and the Trans-Himalayan zone had the smallest value, as it did not experience any strike during winter 2013 (December 2012 to February 2013). The zones with the highest and lowest numbers of strikes were thus the same in the winters of both 2012 and 2013. Moving on to the density of lightning strikes, the Sub-tropical zone has the maximum value of 1.34 × 10⁻² / km², while the Trans-Himalayan zone has the minimum value of 0. For winter 2013, the Sub-tropical, Temperate and Upper Tropical zones were in first, second and third place, respectively, for lightning density, while the Alpine, Nival and Trans-Himalayan zones were in seventh, eighth and ninth place, respectively; the Sub-alpine, Lower Tropical and Water Body zones fell in fourth, fifth and sixth place. Table 3.3 is represented graphically in figure 3.4 below, after multiplying the density column by 10000. We can see from the graph that although the Water Body zone has a low number of lightning strikes among the ecological zones, its density of lightning strikes is on the higher side.

Figure 3.4: Lightning activity during winter 2013 over different ecological zones of Nepal.

(c) Winter 2015

Figure 3.5: Ecological map of Nepal with lightning strikes during winter 2015.

Table 3.4: Lightning activity during winter 2015 over different ecological zones of Nepal.

S.N  Ecological zones   Number of lightning strikes   Density of lightning strikes (per km²)
1    Alpine             17     0.10 × 10⁻²
2    Lower Tropical     87     0.35 × 10⁻²
3    Nival Zones        13     0.12 × 10⁻²
4    Sub-alpine         51     0.36 × 10⁻²
5    Sub-tropical       163    0.51 × 10⁻²
6    Temperate          80     0.38 × 10⁻²
7    Trans-Himalayan    2      0.06 × 10⁻²
8    Upper Tropical     107    0.45 × 10⁻²
9    Water Body         1      0.16 × 10⁻²

The winter season of 2015 (December 2014 to February 2015) had the fewest strikes among the winters under study, with a total of 521 lightning strikes. As in the winter seasons of 2012 and 2013, the Sub-tropical zone left all other ecological zones behind, with the maximum number of strikes at around 31 percent of the total. The Water Body zone experienced only one strike and the Trans-Himalayan zone two strikes, the minimum values for this period. From table 3.4 it is clear that the highest density of lightning strikes is 0.51 × 10⁻² / km², in the Sub-tropical zone, and the lowest is 0.06 × 10⁻² / km², in the Trans-Himalayan zone. Figure 3.6 depicts the lightning activity over the nine ecological zones of Nepal during winter 2015.
This figure clearly shows peaks for the three tropical zones, indicating the dominance of lightning activity in the Lower Tropical, Upper Tropical and Sub-tropical zones.

Figure 3.6: Lightning activity during winter 2015 over different ecological zones of Nepal.

(B) Pre-monsoon

(a) Pre-monsoon 2012

Figure 3.7: Ecological map of Nepal with lightning strikes during pre-monsoon 2012.

Table 3.5: Lightning activity during pre-monsoon 2012 over different ecological zones of Nepal.

S.N  Ecological zones   Number of lightning strikes   Density of lightning strikes (per km²)
1    Alpine             329    1.92 × 10⁻²
2    Lower Tropical     3774   15.00 × 10⁻²
3    Nival Zones        142    1.28 × 10⁻²
4    Sub-alpine         729    5.08 × 10⁻²
5    Sub-tropical       3075   9.70 × 10⁻²
6    Temperate          1489   7.11 × 10⁻²
7    Trans-Himalayan    50     1.55 × 10⁻²
8    Upper Tropical     2947   12.32 × 10⁻²
9    Water Body         94     14.73 × 10⁻²

During pre-monsoon (March, April and May) 2012, 12629 lightning strikes were recorded. The Lower Tropical and Sub-tropical zones faced 3774 and 3075 strikes, respectively, the highest counts for the period of March to May 2012. Similarly, the lowest values were 50 and 94, for the Trans-Himalayan and Water Body zones. In spite of having the second lowest number of lightning strikes, the Water Body zone has the second highest density of lightning strikes in table 3.5, at 14.73 × 10⁻² / km². The Lower Tropical zone has the maximum density of lightning strikes among the nine ecological zones, at 15.00 × 10⁻² / km², while the minimum value of 1.28 × 10⁻² / km² belongs to the Nival Zones. Figure 3.8, a graph plotted on the basis of table 3.5, can be used to analyze the lightning activity over the ecological zones of Nepal during pre-monsoon 2012. It clearly indicates the dominance of the Lower Tropical zone in the number of lightning strikes, and of the Lower Tropical and Water Body zones in the density of lightning strikes.

Figure 3.8: Lightning activity during pre-monsoon 2012 over different ecological zones of Nepal.

(b) Pre-monsoon 2013

Figure 3.9: Ecological map of Nepal with lightning strikes during pre-monsoon 2013.

Table 3.6: Lightning activity during pre-monsoon 2013 over different ecological zones of Nepal.

S.N  Ecological zones   Number of lightning strikes   Density of lightning strikes (per km²)
1    Alpine             111    0.65 × 10⁻²
2    Lower Tropical     823    3.27 × 10⁻²
3    Nival Zones        56     0.50 × 10⁻²
4    Sub-alpine         219    1.52 × 10⁻²
5    Sub-tropical       1082   3.41 × 10⁻²
6    Temperate          581    2.78 × 10⁻²
7    Trans-Himalayan    6      0.19 × 10⁻²
8    Upper Tropical     855    3.58 × 10⁻²
9    Water Body         20     3.13 × 10⁻²

After clipping and performing spatial analysis of the lightning data with ArcGIS, we observed that Nepal experienced a total of 3753 lightning strikes during pre-monsoon (March, April and May) 2013. Most of these strikes struck over the Sub-tropical zone, with a value of 1082, which is about 29 percent of the total. The lowest number of strikes recorded was 6, over the Trans-Himalayan zone, which is only about 0.16 percent of the total number of lightning strikes.
Moving on to the density of lightning strikes, table 3.6 clearly shows that the Lower Tropical, Sub-tropical, Upper Tropical and Water Body zones have the higher values of 3.27 × 10⁻² / km², 3.41 × 10⁻² / km², 3.58 × 10⁻² / km² and 3.13 × 10⁻² / km², respectively. The densities for the Alpine, Nival and Trans-Himalayan zones have the lower values of 0.65 × 10⁻² / km², 0.50 × 10⁻² / km² and 0.19 × 10⁻² / km². Figure 3.10 is provided for quick interpretation of the lightning activity over the different ecological zones of Nepal during pre-monsoon 2013.

Figure 3.10: Lightning activity during pre-monsoon 2013 over different ecological zones of Nepal.

(C) Monsoon

(a) Monsoon 2012

Figure 3.11: Ecological map of Nepal with lightning strikes during monsoon 2012.

Table 3.7: Lightning activity during monsoon 2012 over different ecological zones of Nepal.

S.N  Ecological zones   Number of lightning strikes   Density of lightning strikes (per km²)
1    Alpine             173    1.01 × 10⁻²
2    Lower Tropical     3401   13.52 × 10⁻²
3    Nival Zones        118    1.06 × 10⁻²
4    Sub-alpine         255    1.77 × 10⁻²
5    Sub-tropical       1267   3.10 × 10⁻²
6    Temperate          530    2.53 × 10⁻²
7    Trans-Himalayan    28     0.87 × 10⁻²
8    Upper Tropical     1351   5.65 × 10⁻²
9    Water Body         30     4.70 × 10⁻²

From the map obtained via ArcGIS by incorporating the strike data, we found that monsoon (June, July and August) 2012 received 7153 lightning strikes, about 48 percent of which are clustered over the Lower Tropical zone, as shown in figure 3.11. Observing table 3.7, we can say that the region ranging from 70 to 300 meters above sea level, i.e. the Lower Tropical zone, experienced the highest number of strikes, 3401, while the region farthest above sea level, i.e. the Trans-Himalayan zone above 5000 meters, experienced the fewest, 28. Looking at the densities of lightning strikes, the zones with the highest and lowest values are the same as for the number of strikes: the Lower Tropical zone has the maximum value of 13.52 × 10⁻² / km², by far the largest among the ecological zones, and the minimum value of 0.87 × 10⁻² / km² belongs to the Trans-Himalayan zone. The line graph plotted below in figure 3.12 depicts the lightning activity of monsoon 2012 and clearly shows that the Lower Tropical zone has the peak values for both the number and the density of lightning strikes among the ecological zones.

Figure 3.12: Lightning activity during monsoon 2012 over different ecological zones of Nepal.

(b) Monsoon 2013

Figure 3.13: Ecological map of Nepal with lightning strikes during monsoon 2013.

Table 3.8: Lightning activity during monsoon 2013 over different ecological zones of Nepal.

S.N  Ecological zones   Number of lightning strikes   Density of lightning strikes (per km²)
1    Alpine             21     0.12 × 10⁻²
2    Lower Tropical     447    1.78 × 10⁻²
3    Nival Zones        16     0.14 × 10⁻²
4    Sub-alpine         51     0.36 × 10⁻²
5    Sub-tropical       467    1.47 × 10⁻²
6    Temperate          127    0.60 × 10⁻²
7    Trans-Himalayan    1      0.03 × 10⁻²
8    Upper Tropical     478    1.20 × 10⁻²
9    Water Body         10     1.57 × 10⁻²

During monsoon (June, July and August) 2013, the majority of the lightning occurred in the altitude range (70-2000) m, which contains the Lower Tropical, Sub-tropical and Upper Tropical zones. These three zones had 447, 467 and 478 lightning strikes, respectively.
The 478 strikes of the Upper Tropical zone is the highest value among the nine zones, whereas the single strike belonging to the Trans-Himalayan zone is the lowest. The Water Body zone, whose number of strikes is only 10 and thus low in the column of number of lightning strikes, has the second highest value in the column of density of lightning strikes, at 1.57 × 10⁻² / km². The highest density of lightning strikes is 1.78 × 10⁻² / km², for the Lower Tropical zone, and the lowest is calculated as 0.03 × 10⁻² / km², for the Trans-Himalayan zone. Figure 3.14 represents the lightning activity during monsoon 2013 over the different ecological zones of Nepal. In the line graph, three peak points in the dotted line indicate the dominance of the tropical zones, namely Lower Tropical, Sub-tropical and Upper Tropical, in the number of lightning strikes.

Figure 3.14: Lightning activity during monsoon 2013 over different ecological zones of Nepal.

(D) Post-monsoon

(a) Post-monsoon 2012

Figure 3.15: Ecological map of Nepal with lightning strikes during post-monsoon 2012.

Table 3.9: Lightning activity during post-monsoon 2012 over different ecological zones of Nepal.

S.N  Ecological zones   Number of lightning strikes   Density of lightning strikes (per km²)
1    Alpine             98     0.57 × 10⁻²
2    Lower Tropical     636    2.52 × 10⁻²
3    Nival Zones        64     0.58 × 10⁻²
4    Sub-alpine         172    1.20 × 10⁻²
5    Sub-tropical       747    2.36 × 10⁻²
6    Temperate          346    1.65 × 10⁻²
7    Trans-Himalayan    21     0.65 × 10⁻²
8    Upper Tropical     578    2.41 × 10⁻²
9    Water Body         9      1.41 × 10⁻²

Observation of table 3.9, created using the map obtained by incorporating the data into ArcGIS, shows that 2671 lightning strikes were recorded during post-monsoon (September, October and November) 2012. Continuing the trend of the other seasons of 2012, it was one of the tropical zones that experienced the highest number of strikes; this time it happened to be the Sub-tropical zone, with 747 lightning strikes, about 28 percent of the total. The Lower Tropical and Upper Tropical zones, with 636 and 578 strikes respectively, do not differ much from the Sub-tropical zone in the number of lightning strikes. The Water Body zone received the lowest number of strikes. The largest density of lightning was calculated for the Lower Tropical zone, at 2.52 × 10⁻² / km², while the Alpine zone was found to have the smallest. Figure 3.16 depicts the information about lightning activity over the different ecological zones of Nepal during post-monsoon 2012.

Figure 3.16: Lightning activity during post-monsoon 2012 over different ecological zones of Nepal.

(b) Post-monsoon 2014 (October and November only)

Figure 3.17: Ecological map of Nepal with lightning strikes for two months of post-monsoon (October, November) 2014.

Table 3.10: Lightning activity for two months of post-monsoon (October, November) 2014 over different ecological zones of Nepal.
S.N  Ecological zones   Number of lightning strikes   Density of lightning strikes (per km²)
1    Alpine             75     0.44 × 10⁻²
2    Lower Tropical     124    0.49 × 10⁻²
3    Nival Zones        37     0.33 × 10⁻²
4    Sub-alpine         69     0.48 × 10⁻²
5    Sub-tropical       208    0.66 × 10⁻²
6    Temperate          112    0.54 × 10⁻²
7    Trans-Himalayan    11     0.34 × 10⁻²
8    Upper Tropical     152    0.64 × 10⁻²
9    Water Body         1      0.16 × 10⁻²

We used the map obtained by incorporating the strike data of October and November 2014 into ArcGIS in order to observe the lightning activity during post-monsoon 2014. When the available data were processed through ArcGIS, we found a scenario similar to post-monsoon 2012 when checking the regions with the highest numbers of strikes. In 2012, the Sub-tropical zone recorded the highest number of strikes, followed by the Lower Tropical and Upper Tropical zones, whereas the post-monsoon values for 2014 show that the Sub-tropical zone remains at the top while the Lower and Upper Tropical zones interchange their positions in comparison to 2012. Nevertheless, these three remain the top three in receiving the highest numbers of lightning strikes. The Water Body zone received the fewest strikes for the period under consideration. From the column of density of lightning strikes, we can report that the tropical zones, namely Lower Tropical, Sub-tropical and Upper Tropical, have the higher values. Figure 3.18 has been plotted to view and compare the lightning activity over the different ecological zones of Nepal during the two months of post-monsoon 2014.

Figure 3.18: Lightning activity for two months of post-monsoon (October, November) 2014 over different ecological zones of Nepal.

Chapter 4

Discussion

Collecting the lightning data, converting them to Excel files usable by ArcGIS, using the Excel files to project and clip the lightning data onto the map of Nepal, spatially joining the data to the map, and the observations from the finalized ecological map together allowed the results on lightning activity over the different ecological zones to be calculated for the different seasons (excluding data from September 2013 to September 2014, as the lightning sensor system was under repair). In this chapter we have calculated the average number of strikes and the average density of lightning using the arithmetic mean (AM). If the range of the data varies by a multiple of 10 or more, the average is usually obtained using the geometric mean; otherwise the arithmetic mean is preferred. In our study almost none of the data ranges used to calculate averages vary by a multiple of 10 or more, so the arithmetic mean has been used. The arithmetic mean is calculated by dividing the sum of the items by the number of items. In figures 4.1 to 4.9, listed in this chapter, the unit for the density of lightning strikes is per square kilometre.

4.1 Alpine

Figure 4.1: Schematic diagram of lightning activity over the Alpine zone for different seasons for the years 2012, 2013 and 2015.

The numbers of lightning strikes and their densities for the different seasons are obtained using the formula for the arithmetic mean.

For winter,
Average number of lightning strikes = (12 + 39 + 17)/3 = 22.67 ≈ 23
Average density of lightning strikes = (0.07 + 0.23 + 0.10) × 10⁻² / 3 = 0.13 × 10⁻² / km²
Hence, approximately 23 lightning strikes occurred during winter, with a density of 0.13 × 10⁻² / km², over the Alpine zone.
Now, for pre-monsoon,
Average number of lightning strikes = (329 + 111)/2 = 220
Average density of lightning strikes = (1.92 + 0.65) × 10⁻² / 2 = 1.29 × 10⁻² / km²
Thus, during pre-monsoon, 220 strikes were recorded on average, with a density of 1.29 × 10⁻² / km², over the Alpine zone.

Again, for monsoon,
Average number of lightning strikes = (173 + 21)/2 = 97
Average density of lightning strikes = (1.01 + 0.12) × 10⁻² / 2 = 0.57 × 10⁻² / km²
As we see, 97 lightning strikes with a density of 0.57 × 10⁻² / km² can be expected over the Alpine zone during monsoon.

Finally, for post-monsoon,
As we have only the single post-monsoon observation of 2012, we can say that the average number and density of strikes during post-monsoon are 98 and 0.57 × 10⁻² / km², respectively, over the Alpine zone.

By adding the values of the four seasons, we can report that the Alpine zone has a trend of receiving around 438 lightning strikes per year, with a density of 2.56 × 10⁻² / km².

4.2 Lower Tropical

Figure 4.2: Schematic diagram of lightning activity over the Lower Tropical zone for different seasons for the years 2012, 2013 and 2015.

The numbers of lightning strikes and their densities for the different seasons are obtained using the formula for the arithmetic mean.

For winter,
Average number of lightning strikes = (144 + 192 + 87)/3 = 141
Average density of lightning strikes = (0.57 + 0.76 + 0.35) × 10⁻² / 3 = 0.56 × 10⁻² / km²
Hence, approximately 141 lightning strikes occurred during winter, with a density of 0.56 × 10⁻² / km², over the Lower Tropical zone.

Now, for pre-monsoon,
Average number of lightning strikes = (3774 + 823)/2 = 2299 (approx.)
Average density of lightning strikes = (15.00 + 3.27) × 10⁻² / 2 = 9.14 × 10⁻² / km²
Thus, during pre-monsoon, 2299 strikes were recorded on average, with a density of 9.14 × 10⁻² / km², over the Lower Tropical zone.

Again, for monsoon,
Average number of lightning strikes = (3401 + 447)/2 = 1924
Average density of lightning strikes = (13.52 + 1.78) × 10⁻² / 2 = 7.65 × 10⁻² / km²
As we see, 1924 lightning strikes with a density of 7.65 × 10⁻² / km² can be expected over the Lower Tropical zone during monsoon.

Finally, for post-monsoon,
As we have only the single post-monsoon observation of 2012, we can say that the average number and density of strikes during post-monsoon are 636 and 2.52 × 10⁻² / km², respectively, over the Lower Tropical zone.

By adding the values of the four seasons (141 + 2299 + 1924 + 636), we can report that the Lower Tropical zone has a trend of receiving around 5000 lightning strikes per year, with a density of 19.87 × 10⁻² / km².

4.3 Nival Zones

Figure 4.3: Schematic diagram of lightning activity over the Nival Zones for different seasons for the years 2012, 2013 and 2015.

The numbers of lightning strikes and their densities for the different seasons are obtained using the formula for the arithmetic mean.

For winter,
Average number of lightning strikes = (12 + 8 + 13)/3 = 11
Average density of lightning strikes = (0.10 + 0.07 + 0.12) × 10⁻² / 3 = 0.10 × 10⁻² / km²
Hence, approximately 11 lightning strikes occurred during winter, with a density of 0.10 × 10⁻² / km², over the Nival Zones.

Now, for pre-monsoon,
Average number of lightning strikes = (142 + 56)/2 = 99
Average density of lightning strikes = (1.28 + 0.50) × 10⁻² / 2 = 0.89 × 10⁻² / km²
Thus, during pre-monsoon, 99 strikes were recorded on average, with a density of 0.89 × 10⁻² / km², over the Nival Zones.
Again, for monsoon,
Average number of lightning strikes = (118 + 16)/2 = 67
Average density of lightning strikes = (1.06 + 0.14) × 10⁻² / 2 = 0.60 × 10⁻² / km²
As we see, 67 lightning strikes with a density of 0.60 × 10⁻² / km² can be expected over the Nival Zones during monsoon.

Finally, for post-monsoon,
As we have only the single post-monsoon observation of 2012, we can say that the average number and density of strikes during post-monsoon are 64 and 0.58 × 10⁻² / km², respectively, over the Nival Zones.

By adding the values of the four seasons, we can report that the Nival Zones have a trend of receiving around 241 lightning strikes per year, with a density of 2.17 × 10⁻² / km².

4.4 Sub-alpine

Figure 4.4: Schematic diagram of lightning activity over the Sub-alpine zone for different seasons for the years 2012, 2013 and 2015.

The numbers of lightning strikes and their densities for the different seasons are obtained using the formula for the arithmetic mean.

For winter,
Average number of lightning strikes = (34 + 121 + 51)/3 = 68.67 ≈ 69
Average density of lightning strikes = (0.23 + 0.84 + 0.36) × 10⁻² / 3 = 0.48 × 10⁻² / km²
Hence, approximately 69 lightning strikes occurred during winter, with a density of 0.48 × 10⁻² / km², over the Sub-alpine zone.

Now, for pre-monsoon,
Average number of lightning strikes = (729 + 219)/2 = 474
Average density of lightning strikes = (5.08 + 1.52) × 10⁻² / 2 = 3.30 × 10⁻² / km²
Thus, during pre-monsoon, 474 strikes were recorded on average, with a density of 3.30 × 10⁻² / km², over the Sub-alpine zone.

Again, for monsoon,
Average number of lightning strikes = (255 + 51)/2 = 153
Average density of lightning strikes = (1.77 + 0.36) × 10⁻² / 2 = 1.07 × 10⁻² / km²
As we see, 153 lightning strikes with a density of 1.07 × 10⁻² / km² can be expected over the Sub-alpine zone during monsoon.

Finally, for post-monsoon,
As we have only the single post-monsoon observation of 2012, we can say that the average number and density of strikes over the Sub-alpine zone during post-monsoon are 172 and 1.20 × 10⁻² / km², respectively.

By adding the values of the four seasons, we can report that the Sub-alpine zone has a trend of receiving around 868 lightning strikes per year, with a density of 6.05 × 10⁻² / km².

4.5 Sub-tropical

Figure 4.5: Schematic diagram of lightning activity over the Sub-tropical zone for different seasons for the years 2012, 2013 and 2015.

The numbers of lightning strikes and their densities for the different seasons are obtained using the formula for the arithmetic mean.

For winter,
Average number of lightning strikes = (216 + 425 + 163)/3 = 268
Average density of lightning strikes = (0.68 + 1.34 + 0.51) × 10⁻² / 3 = 0.84 × 10⁻² / km²
Hence, approximately 268 lightning strikes occurred during winter, with a density of 0.84 × 10⁻² / km², over the Sub-tropical zone.

Now, for pre-monsoon,
Average number of lightning strikes = (3075 + 1082)/2 = 2078.5 ≈ 2079
Average density of lightning strikes = (9.70 + 3.41) × 10⁻² / 2 = 6.56 × 10⁻² / km²
Thus, during pre-monsoon, 2079 strikes were recorded on average, with a density of 6.56 × 10⁻² / km², over the Sub-tropical zone.

Again, for monsoon,
Average number of lightning strikes = (1267 + 467)/2 = 867
Average density of lightning strikes = (3.10 + 1.47) × 10⁻² / 2 = 2.29 × 10⁻² / km²
As we see, 867 lightning strikes with a density of 2.29 × 10⁻² / km² can be expected over the Sub-tropical zone during monsoon.
Finally, for post-monsoon,
As we have only the single post-monsoon observation of 2012, we can say that the average number and density of strikes over the Sub-tropical zone during post-monsoon are 747 and 2.36 × 10⁻² / km², respectively.

By adding the values of the four seasons, we can report that the Sub-tropical zone has a trend of receiving around 3961 lightning strikes per year, with a density of 12.05 × 10⁻² / km².

4.6 Temperate

Figure 4.6: Schematic diagram of lightning activity over the Temperate zone for different seasons for the years 2012, 2013 and 2015.

The numbers of lightning strikes and their densities for the different seasons are obtained using the formula for the arithmetic mean.

For winter,
Average number of lightning strikes = (106 + 232 + 80)/3 = 139.33 ≈ 139
Average density of lightning strikes = (0.50 + 1.11 + 0.38) × 10⁻² / 3 = 0.66 × 10⁻² / km²
Hence, approximately 139 lightning strikes occurred during winter, with a density of 0.66 × 10⁻² / km², over the Temperate zone.

Now, for pre-monsoon,
Average number of lightning strikes = (1489 + 581)/2 = 1035
Average density of lightning strikes = (7.11 + 2.78) × 10⁻² / 2 = 4.95 × 10⁻² / km²
Thus, during pre-monsoon, 1035 strikes were recorded on average, with a density of 4.95 × 10⁻² / km², over the Temperate zone.

Again, for monsoon,
Average number of lightning strikes = (530 + 127)/2 = 328.5 ≈ 329
Average density of lightning strikes = (2.53 + 0.60) × 10⁻² / 2 = 1.57 × 10⁻² / km²
As we see, 329 lightning strikes with a density of 1.57 × 10⁻² / km² can be expected over the Temperate zone during monsoon.

Finally, for post-monsoon,
As we have only the single post-monsoon observation of 2012, we can say that the average number and density of strikes over the Temperate zone during post-monsoon are 346 and 1.65 × 10⁻² / km², respectively.

By adding the values of the four seasons, we can report that the Temperate zone has a trend of receiving around 1849 lightning strikes per year, with a density of 8.83 × 10⁻² / km².

4.7 Trans-Himalayan

Figure 4.7: Schematic diagram of lightning activity over the Trans-Himalayan zone for different seasons for the years 2012, 2013 and 2015.

The numbers of lightning strikes and their densities for the different seasons are obtained using the formula for the arithmetic mean.

For winter,
Average number of lightning strikes = (1 + 0 + 2)/3 = 1
Average density of lightning strikes = (0.03 + 0.00 + 0.06) × 10⁻² / 3 = 0.03 × 10⁻² / km²
Hence, approximately only one lightning strike occurred during winter, with a density of 0.03 × 10⁻² / km², over the Trans-Himalayan zone.

Now, for pre-monsoon,
Average number of lightning strikes = (50 + 6)/2 = 28
Average density of lightning strikes = (1.55 + 0.19) × 10⁻² / 2 = 0.87 × 10⁻² / km²
Thus, during pre-monsoon, 28 strikes were recorded on average, with a density of 0.87 × 10⁻² / km², over the Trans-Himalayan zone.

Again, for monsoon,
Average number of lightning strikes = (28 + 1)/2 = 14.5 ≈ 15
Average density of lightning strikes = (0.87 + 0.03) × 10⁻² / 2 = 0.45 × 10⁻² / km²
As we see, 15 lightning strikes with a density of 0.45 × 10⁻² / km² can be expected over the Trans-Himalayan zone during monsoon.

Finally, for post-monsoon,
As we have only the single post-monsoon observation of 2012, we can say that the average number and density of strikes during post-monsoon are 21 and 0.65 × 10⁻² / km², respectively, over the Trans-Himalayan zone.
By adding the values of the four seasons, we can report that the Trans-Himalayan zone has a trend of receiving around 65 lightning strikes per year, with a density of 2.00 × 10⁻² / km².

4.8 Upper Tropical

Figure 4.8: Schematic diagram of lightning activity over the Upper Tropical zone for different seasons for the years 2012, 2013 and 2015.

The numbers of lightning strikes and their densities for the different seasons are obtained using the formula for the arithmetic mean.

For winter,
Average number of lightning strikes = (158 + 214 + 107)/3 = 159.67 ≈ 160
Average density of lightning strikes = (0.66 + 0.89 + 0.45) × 10⁻² / 3 = 0.67 × 10⁻² / km²
Hence, approximately 160 lightning strikes occurred during winter, with a density of 0.67 × 10⁻² / km², over the Upper Tropical zone.

Now, for pre-monsoon,
Average number of lightning strikes = (2947 + 855)/2 = 1901
Average density of lightning strikes = (12.32 + 3.58) × 10⁻² / 2 = 7.95 × 10⁻² / km²
Thus, during pre-monsoon, 1901 strikes were recorded on average, with a density of 7.95 × 10⁻² / km², over the Upper Tropical zone.

Again, for monsoon,
Average number of lightning strikes = (1351 + 478)/2 = 914.5 ≈ 915
Average density of lightning strikes = (5.65 + 1.20) × 10⁻² / 2 = 3.43 × 10⁻² / km²
As we see, 915 lightning strikes with a density of 3.43 × 10⁻² / km² can be expected over the Upper Tropical zone during monsoon.

Finally, for post-monsoon,
As we have only the single post-monsoon observation of 2012, we can say that the average number and density of strikes during post-monsoon over the Upper Tropical zone are 578 and 2.41 × 10⁻² / km², respectively.

By adding the values of the four seasons, we can report that the Upper Tropical zone has a trend of receiving around 3554 lightning strikes per year, with a density of 14.46 × 10⁻² / km².

4.9 Water Body

Figure 4.9: Schematic diagram of lightning activity over the Water Body zone for different seasons for the years 2012, 2013 and 2015.

The numbers of lightning strikes and their densities for the different seasons are obtained using the formula for the arithmetic mean.

For winter,
Average number of lightning strikes = (5 + 4 + 1)/3 = 3.33 ≈ 3
Average density of lightning strikes = (0.78 + 0.63 + 0.16) × 10⁻² / 3 = 0.52 × 10⁻² / km²
Hence, approximately 3 lightning strikes occurred during winter, with a density of 0.52 × 10⁻² / km², over the Water Body zone.

Now, for pre-monsoon,
Average number of lightning strikes = (94 + 20)/2 = 57
Average density of lightning strikes = (14.73 + 3.13) × 10⁻² / 2 = 8.93 × 10⁻² / km²
Thus, during pre-monsoon, 57 strikes were recorded on average, with a density of 8.93 × 10⁻² / km², over the Water Body zone.

Again, for monsoon,
Average number of lightning strikes = (30 + 10)/2 = 20
Average density of lightning strikes = (4.70 + 1.57) × 10⁻² / 2 = 3.14 × 10⁻² / km²
As we see, 20 lightning strikes with a density of 3.14 × 10⁻² / km² can be expected over the Water Body zone during monsoon.

Finally, for post-monsoon,
As we have only the single post-monsoon observation of 2012, we can say that the average number and density of strikes during post-monsoon over the Water Body zone are 9 and 1.41 × 10⁻² / km², respectively.

By adding the values of the four seasons (0.52 + 8.93 + 3.14 + 1.41) × 10⁻², we can report that the Water Body zone has a trend of receiving around 89 lightning strikes per year, with a density of 14.00 × 10⁻² / km².
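The seasonal averaging just carried out for all nine zones follows one fixed recipe, which can be written down compactly. The sketch below is only illustrative (the dictionary alpine, the helper annual_expectation and the use of Python's round are ours); it reproduces the Alpine figures of section 4.1 from the Chapter 3 counts.

# Per-season strike counts for the Alpine zone, collected from Chapter 3;
# each list holds one entry per observed year of that season.
alpine = {"winter": [12, 39, 17], "pre-monsoon": [329, 111],
          "monsoon": [173, 21], "post-monsoon": [98]}
ALPINE_AREA_KM2 = 17088  # from Table 3.1

def annual_expectation(seasonal_counts, area_km2):
    """Arithmetic mean per season, then the four seasonal means summed,
    exactly as done throughout this chapter."""
    seasonal_means = {s: round(sum(v) / len(v)) for s, v in seasonal_counts.items()}
    total = sum(seasonal_means.values())
    return seasonal_means, total, total / area_km2

means, total, density = annual_expectation(alpine, ALPINE_AREA_KM2)
print(means)           # {'winter': 23, 'pre-monsoon': 220, 'monsoon': 97, 'post-monsoon': 98}
print(total, density)  # 438 strikes per year, about 2.56 x 10^-2 per km^2

Note that Python's round() rounds exact .5 cases to the nearest even integer, which may differ slightly from the hand rounding used above; the Alpine figures are unaffected.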
Table 4.1: Annually expected total number and density of lightning strikes over different ecological zones of Nepal.

S.N  Ecological zones   Number of lightning strikes   Density of lightning strikes (per km²)
1    Alpine             438    2.56 × 10⁻²
2    Lower Tropical     5000   19.87 × 10⁻²
3    Nival Zones        241    2.17 × 10⁻²
4    Sub-alpine         868    6.05 × 10⁻²
5    Sub-tropical       3961   12.05 × 10⁻²
6    Temperate          1849   8.83 × 10⁻²
7    Trans-Himalayan    65     2.00 × 10⁻²
8    Upper Tropical     3554   14.46 × 10⁻²
9    Water Body         89     14.00 × 10⁻²

From table 4.1 it can be summarised that the Lower Tropical zone receives both the maximum number of lightning strikes, with around 5000 strikes per year, and the highest annual density of lightning strikes, with 19.87 × 10⁻² strikes per square kilometer; the Sub-tropical zone follows, with 3961 strikes per year. The Trans-Himalayan zone has the lowest values for both the number and the density of lightning strikes: it receives 65 lightning strikes per year, with a density of 2.00 × 10⁻² per square kilometer.

Chapter 5

Conclusion and future work

5.1 Conclusions

Based on our findings and the literature survey of similar topics, the following conclusions can be drawn.

• Lightning has a direct relationship with surface temperature. From our observations, we can fully agree with the statement that "Recent studies continue to show the high positive correlation between surface temperatures and lightning activity" [16]. Considering the seven ecological zones of Nepal classified according to altitude, the tropical zones, viz. the Lower Tropical, Sub-tropical and Upper Tropical zones, which lie in the altitude range of 70 to 2000 meters, are the zones with the higher temperatures, and temperature decreases as one moves up to the higher-altitude zones. For these seven zones, the density of lightning strikes is greatest for the tropical zones and decreases for the higher-altitude zones. In fact, the Water Body zone, an ecological zone which has not been classified on the basis of altitude, has a lightning density comparable to that of the tropical zones lying in the altitude range of 70 to 2000 meters. Thus, lightning is directly related to surface temperature, as represented in the graph below.

Figure 5.1: Density of lightning strikes over ecological zones in ascending altitude (temperature descending).

• Lightning activity varies with the seasons. From the lightning activity observed over each ecological zone, it is easily noticeable that lightning activity changes throughout the year. As different seasons have different climatic conditions, along with storms of varying number and strength, different amounts of lightning in different seasons are to be expected, and this expectation is confirmed by our observations. Generally, winter experiences the fewest lightning strikes, pre-monsoon records the largest number, and the number of strikes then decreases through monsoon and post-monsoon, respectively. Hence, lightning activity changes with the seasons.

• The presence of water vapour does play a part in determining the lightning activity over an ecological zone. It is seen that the Water Body zone is hit by fewer lightning strikes than most of the nine ecological zones when considering the number of strikes, but its value is on the higher side when considering the density of strikes.
Among the nine ecological zones, water vapour is present in the largest amount in the atmosphere surrounding the Water Body zone, and this water vapour might have assisted in capturing solar heat and thereby increasing the surface temperature. Thus, we can conclude that the amount of water vapour in a region has a direct relationship with its lightning activity.

• Zones in the altitudinal range of 70 to 2000 meters need more lightning safety programmes to minimize the effects. The government and concerned authorities need to launch lightning awareness programmes and install lightning protection systems all over Nepal, but these programmes should be more frequent, and protection systems installed at shorter intervals, in the Lower Tropical, Sub-tropical, Upper Tropical and Water Body zones. The basis of this conclusion is that these regions bear the higher densities of lightning strikes and are also densely populated areas, which makes them vulnerable to the disastrous effects of lightning.

5.2 Recommendations for future study

"A study of lightning activity over different ecological zones of Nepal" can be a research topic to explore further in future. Surface temperature has been concluded to be the main factor contributing to the highest numbers and densities of strikes over the tropical zones, namely Lower Tropical, Sub-tropical and Upper Tropical; searching for other dominant causes of this result could be an interesting matter to look into. The use of data covering a longer time period could minimize seen and unseen restrictions that might have affected this study, so more data could be more than fruitful for obtaining better results. Furthermore, we would recommend categorizing the available data into months rather than seasons, carrying out the analysis on a monthly basis, and then comparing the results with the research works previously performed by categorizing the data into seasons.

References

[1] M. A. Uman, The Lightning Discharge, Academic Press, San Diego, (1987).
[2] S. Sharma, Electromagnetic Fields Radiated by Lightning in Tropical and Temperate Regions, p. 1, (2007).
[3] C. G. Price, Lightning Applications in Weather and Climate Research, Springer Science + Business Media Dordrecht, (2013).
[4] M. A. Uman, The Art and Science of Lightning Protection, p. 3, Cambridge University Press, (2008).
[5] M. A. Uman and V. A. Rakov, Lightning: Physics and Effects, p. 4, Cambridge University Press, (2003).
[6] Q. Xiushu, Z. Yijun and Z. Qilin, Characteristics of Lightning Discharges and Electric Structure of Thunderstorm, Vol. 20, p. 244, Acta Meteorologica Sinica, (2006).
[7] D. R. Poelman, On the Science of Lightning: An Overview, p. 9, Royal Meteorological Institute of Belgium, (2010).
[8] R. J. Wehling, N. Barbeito and J. R. Clayton, IEEE Guide for Direct Lightning Stroke Shielding of Substations, The Institute of Electrical and Electronics Engineers Inc., New York, (1996).
[9] C. K. Alexander and M. N. O. Sadiku, Fundamentals of Electric Circuits, Fourth Edition, p. 4, The McGraw-Hill Companies, Inc., (2009).
[10] D. Djalel, R. Lazhar and L. Hocine, A New Model of Electromagnetic Fields Radiated by Lightning, Vol. 2, Issue 4, p. 182, International Journal of Engineering and Innovative Technology (IJEIT), (2012).
[11] V. L. Manglold, Life and Lightning: The Good Things of Lightning, 1st Edition, p. 10, Universal Publishers, Florida, (1999).
[12] S. Sharma, Lightning Protection, NAM S & T Centre, (2011).
[13] R. H. Golde and K. Berger, Lightning, Vol. 1, p. 119, Academic Press, (1977).
[14] J. P. B. Lilleso, Tirtha B. Shrestha, L. P. Dhakal, R. P. Nayaju and R. Shrestha, The Map of Potential Vegetation of Nepal, p. 20, Forest & Landscape Denmark, (2005).
[15] T. B. Shrestha, Classification of Nepalese Forests and Their Distribution in Protected Areas, p. 1, The Initiation, (2008).
[16] C. Price, Thunderstorms, Lightning and Climate Change, p. 1, 29th International Conference on Lightning Protection, (2008).
[17] E. R. Williams, Lightning and climate: A review, p. 274, Massachusetts Institute of Technology, Cambridge, MA, USA, (2004).
[18] A. Mäkelä, R. Shrestha and R. Karki, Thunderstorm characteristics in Nepal during the pre-monsoon season 2012, p. 92, Finnish–Nepalese Project, (2013).
[19] C. Price and D. Rind, Possible implications of global climate change on global lightning distributions and frequencies, Vol. 99, NASA Goddard Institute for Space Studies, Columbia University, New York, (1994).
[20] P. H. Dana, Global Positioning System (GPS) Time Dissemination for Real-Time Applications, p. 9, Kluwer Academic Publishers, Boston, (1997).
[21] P. Lamichhane, Study of effect of climate change on lightning activity in Nepal, p. 11, M. Sc. (Physics) Dissertation, Tribhuvan University, (2013).
JOURNAL OF COMPUTING, VOLUME 2, ISSUE 3, MARCH 2010, ISSN 2151-9617
HTTPS://SITES.GOOGLE.COM/SITE/JOURNALOFCOMPUTING/

A Comprehensive Review of Image Enhancement Techniques

Raman Maini and Himanshu Aggarwal

Abstract: The principal objective of image enhancement is to process an image so that the result is more suitable than the original image for a specific application. Digital image enhancement techniques provide a multitude of choices for improving the visual quality of images. The appropriate choice of such techniques is greatly influenced by the imaging modality, the task at hand and the viewing conditions. This paper provides an overview of the underlying concepts, along with algorithms commonly used for image enhancement. The paper focuses on spatial domain techniques for image enhancement, with particular reference to point processing methods and histogram processing.

Keywords: Digital Image Processing, Geometric Corrections, Gray Scale Manipulation, Image Enhancement

——————————  ——————————

Raman Maini is working as a Reader (Computer Engineering), University College of Engineering, Punjabi University, Patiala. Himanshu Aggarwal is working as a Reader (Computer Engineering), University College of Engineering, Punjabi University, Patiala.

I. Introduction

Image enhancement is basically improving the interpretability or perception of information in images for human viewers and providing `better' input for other automated image processing techniques. The principal objective of image enhancement is to modify attributes of an image to make it more suitable for a given task and a specific observer. During this process, one or more attributes of the image are modified. The choice of attributes and the way they are modified are specific to a given task. Moreover, observer-specific factors, such as the human visual system and the observer's experience, will introduce a great deal of subjectivity into the choice of image enhancement methods. There exist many techniques that can enhance a digital image without spoiling it. The enhancement methods can broadly be divided into the following two categories:

1. Spatial Domain Methods
2. Frequency Domain Methods

In spatial domain techniques [1], we deal directly with the image pixels. The pixel values are manipulated to achieve the desired enhancement. In frequency domain methods, the image is first transferred into the frequency domain: the Fourier transform of the image is computed first, all the enhancement operations are performed on the Fourier transform of the image, and then the inverse Fourier transform is performed to get the resultant image. These enhancement operations are performed in order to modify the image brightness, contrast or the distribution of the grey levels. As a consequence, the pixel values (intensities) of the output image will be modified according to the transformation function applied to the input values.

Image enhancement is applied in every field where images ought to be understood and analyzed, for example medical image analysis and the analysis of images from satellites. Image enhancement simply means transforming an image f into an image g using a transformation T. The values of pixels in images f and g are denoted by r and s, respectively.
As said, the pixel values r and s are related by the expression

s = T(r)        (1)

where T is a transformation that maps a pixel value r into a pixel value s. The results of this transformation are mapped back into the grey scale range, as we are dealing here only with grey scale digital images. So, the results are mapped back into the range [0, L-1], where L = 2^k, k being the number of bits in the image being considered. For instance, for an 8-bit image the range of pixel values will be [0, 255]. We will consider only grey level images; the same theory can be extended to colour images. A digital grey image can have pixel values in the range of 0 to 255.

Figure 1. Showing the effect of image enhancement.

Many different, often elementary and heuristic methods [2] are used to improve images in some sense. The problem is, of course, not well defined, as there is no objective measure for image quality. Here, we discuss a few recipes that have been shown to be useful both for the human observer and/or for machine recognition. These methods are very problem-oriented: a method that works fine in one case may be completely inadequate for another problem. In this paper, basic image enhancement techniques are discussed together with their mathematical understanding.

2. Point Processing Operations

The simplest spatial domain operations occur when the neighbourhood is simply the pixel itself. In this case T is referred to as a grey level transformation function or a point processing operation. Point processing operations take the form shown in equation (1).

Figure 2. Figure shows basic grey level transformations.

2.1 Create Negative of an Image

The most basic and simple operation in digital image processing is to compute the negative of an image. The pixel grey values are inverted to compute the negative of an image. For example, if an image of size R x C, where R represents the number of rows and C represents the number of columns, is represented by I(r, c), the negative N(r, c) of the image can be computed as

N(r, c) = 255 - I(r, c), where 0 <= r <= R and 0 <= c <= C        (2)

It can be seen that every pixel value of the original image is subtracted from 255, and the resultant image becomes the negative of the original image. Negative images [3] are useful for enhancing white or grey detail embedded in dark regions of an image. For grey levels normalized to [0.0, 1.0] the transformation is s = 1.0 - r, or in general

s = intensity_max - r        (3)

Figure 3. Note how much clearer the tissue is in the negative image of the mammogram.

2.2 Thresholding Transformations

Thresholding transformations [4] are particularly useful for segmentation, in which we want to isolate an object of interest from the background, as shown in the figure below:

s = 1.0 if r > threshold
s = 0.0 if r <= threshold

Figure 4. Showing the effect of the thresholding transformation for isolating an object of interest.
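To make the two point operations above concrete, the following minimal NumPy sketch implements the negative of equations (2)/(3) and the thresholding rule of figure 4. It is an illustrative implementation only (the names negative and threshold and the toy 2x2 image are ours, not from the paper), with outputs scaled to the 8-bit range 0-255 rather than 0.0-1.0.

import numpy as np

def negative(img):
    # Image negative, eq. (2)/(3): s = 255 - r for an 8-bit image.
    return 255 - img

def threshold(img, t):
    # Thresholding: s = 1.0 where r > t, else 0.0 (scaled here to 0/255).
    return np.where(img > t, 255, 0).astype(np.uint8)

img = np.array([[10, 200], [90, 140]], dtype=np.uint8)
print(negative(img))        # [[245  55] [165 115]]
print(threshold(img, 128))  # [[  0 255] [  0 255]]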
2.3 Intensity Transformation

This section covers the basic intensity transformation functions: the logarithmic and power-law transformations, followed by piecewise linear functions.

2.4 Logarithmic Transformations

The general form of the log transformation is

s = c * log(1 + r) (4)

The log transformation maps [5] a narrow range of low input grey level values into a wider range of output values; the inverse log transformation performs the opposite transformation. Log functions are particularly useful when the input grey level values may have an extremely large range of values. In the following example, the Fourier transform of an image is put through a log transform, s = log(1 + r), to reveal more detail.

Figure 5. Example showing the effect of the logarithmic transformation

s = log(1 + r) (5)

We usually set c to 1. Grey levels must be in the range [0.0, 1.0].

2.5 Power-Law Transformations

The nth power and nth root curves shown in the accompanying figure can be given by the expression

s = c * r^γ (6)

This transformation function is also called gamma correction [6]. For various values of γ, different levels of enhancement can be obtained. If you notice, different display monitors display images at different intensities and clarity. That means every monitor has built-in gamma correction with a certain gamma range, and so a good monitor automatically corrects all the images displayed on it for the best contrast, giving the user the best experience. The difference between the log-transformation function and the power-law functions is that using the power-law function a family of possible transformation curves can be obtained just by varying γ.

These are the three basic image enhancement functions for grey scale images that can be applied easily to any type of image for better contrast and highlighting. Using the image negation formula given above, it is not necessary for the results to be mapped into the grey scale range [0, L-1]: the output of L-1-r automatically falls in the range [0, L-1]. But for the log and power-law transformations the resulting values are often quite distinctive, depending upon control parameters like γ and the logarithmic scale, so the results of these transformations should be mapped back to the grey scale range to get a meaningful output image. For example, the log function s = c * log(1 + r) yields values between 0 and 2.41 for r varying between 0 and 255, keeping c = 1; so the range [0, 2.41] should be mapped to [0, L-1] to obtain a meaningful image.

2.6 Piecewise Linear Transformation Functions

Rather than using a well-defined mathematical function, we can use arbitrary user-defined transforms.

Figure 6. The images show a contrast stretching linear transform applied to add contrast to a poor quality image
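The log and power-law transforms of Sections 2.4 and 2.5, together with a simple piecewise-linear contrast stretch of the kind shown in Figure 6, can be sketched as follows. This is an added illustration; the constants c and γ and the stretch breakpoints are arbitrary example values, and rescale() performs the mapping back into [0, L-1] discussed above.

```python
import numpy as np

def rescale(s, L=256):
    """Map transform output back into the grey-scale range [0, L-1]."""
    s = s.astype(np.float64)
    return ((s - s.min()) / max(s.max() - s.min(), 1e-12) * (L - 1)).astype(np.uint8)

def log_transform(image, c=1.0):
    """s = c * log(1 + r), Eq. (4): expands dark values, compresses bright ones."""
    return rescale(c * np.log1p(image.astype(np.float64)))

def power_law(image, gamma, c=1.0, L=256):
    """s = c * r**gamma, Eq. (6): gamma < 1 brightens, gamma > 1 darkens."""
    r = image.astype(np.float64) / (L - 1)   # normalise grey levels to [0, 1]
    return rescale(c * np.power(r, gamma))

def contrast_stretch(image, r1=70, s1=20, r2=180, s2=235, L=256):
    """Piecewise-linear stretch through user-chosen points (r1, s1) and (r2, s2)."""
    return np.interp(image, [0, r1, r2, L - 1], [0, s1, s2, L - 1]).astype(np.uint8)
```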
2.7 Grey Level Slicing

Grey level slicing [7] is the spatial domain equivalent to band-pass filtering. A grey level slicing function can either emphasize a group of intensities and diminish all others, or it can emphasize a group of grey levels and leave the rest alone. An example is shown in the following figure.

Figure 7. Showing an example of grey level slicing

3. Histogram Processing

The histogram of a digital image with intensity levels in the range [0, L-1] is a discrete function

h(r_k) = n_k,

where r_k is the kth intensity value and n_k is the number of pixels in the image with intensity r_k. Histograms are frequently normalized by the total number of pixels in the image: assuming an M x N image, the normalized histogram

p(r_k) = n_k / MN

is related to the probability of occurrence of r_k in the image.

3.1 Histogram Equalization

Histogram equalization [8] is a common technique for enhancing the appearance of images. Suppose we have an image which is predominantly dark. Then its histogram would be skewed towards the lower end of the grey scale, and all the image detail would be compressed into the dark end of the histogram. If we could `stretch out' the grey levels at the dark end to produce a more uniformly distributed histogram, then the image would become much clearer.

Figure 8. The original image and its histogram, and the equalized versions. Both images are quantized to 64 grey levels.

3.2 Histogram Matching

Histogram equalization [9] automatically determines a transformation function seeking to produce an output image with a uniform histogram. Another method, which generates an image having a specified histogram, is histogram matching:

1. Find the histogram p_r(r) of the input image and determine its equalization transformation

s = T(r_k) = (L - 1) Σ_{j=0}^{k} p_r(r_j) (7)

2. Use the specified pdf p_z(z) of the output image to obtain the transformation function:

G(z_q) = (L - 1) Σ_{i=0}^{q} p_z(z_i) = s (8)

3. Find the inverse transformation z = G^{-1}(s) - the mapping from s to z:

z = G^{-1}(T(r)) (9)

4. Obtain the output image by equalizing the input image first; then for each pixel in the equalized image, perform the inverse mapping to obtain the corresponding pixel of the output image.

Histogram matching enables us to "match" the greyscale distribution of one image to the greyscale distribution of another image.

Figure 9. Showing histogram matching between different images

3.3 Local Enhancement

The previous methods of histogram equalization and histogram matching are global, so local enhancement [10] is used instead: define a square or rectangular neighborhood (mask) and move its center from pixel to pixel. For each neighborhood, calculate the histogram of the points in the neighborhood and obtain a histogram equalization/specification function. Map the grey level of the pixel centered in the neighborhood. One can use the new pixel values and the previous histogram to calculate the next histogram.

Figure 10. (a) Original image. (b) Result of global histogram equalization. (c) Result of local histogram equalization using a 7x7 neighborhood about each pixel.

3.4 Use of Histogram Statistics for Image Enhancement

Let the intensity in an image be represented by a discrete random variable r in [0, L-1], and let p(r_i) be the normalized histogram - an estimate of the pdf of the intensity. The nth statistical moment is

μ_n(r) = Σ_{i=0}^{L-1} (r_i - m)^n p(r_i) (10)

For image intensities, the sample mean is

m = Σ_{i=0}^{L-1} r_i p(r_i) (11)

and the sample variance is

σ² = Σ_{i=0}^{L-1} (r_i - m)² p(r_i) (12)

As previously, we may specify a global mean and variance (for the entire image) and a local mean and variance for a specified sub-image (subset of pixels) [11]-[12].

Figure 11. Showing an example of using histogram statistics for image enhancement
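As an added illustration of the histogram machinery above (not the authors' code), histogram equalization and the inverse-mapping step of histogram matching fit in a few lines of NumPy:

```python
import numpy as np

def equalize(image, L=256):
    """Histogram equalization: map each grey level through the scaled CDF."""
    p = np.bincount(image.ravel(), minlength=L) / image.size  # p(r_k), as above
    T = np.round((L - 1) * np.cumsum(p)).astype(np.uint8)     # s_k = T(r_k)
    return T[image]

def match(image, ref, L=256):
    """Histogram matching, steps 1-4: equalize, then invert the reference's G."""
    T = np.round((L - 1) * np.cumsum(np.bincount(image.ravel(), minlength=L) / image.size))
    G = np.round((L - 1) * np.cumsum(np.bincount(ref.ravel(), minlength=L) / ref.size))
    # z = G^{-1}(s): for each s, the smallest z with G(z) >= s (G is nondecreasing).
    G_inv = np.searchsorted(G, np.arange(L), side="left").clip(0, L - 1)
    return G_inv[T[image].astype(int)].astype(np.uint8)
```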
Concluding Remarks

Image enhancement algorithms offer a wide variety of approaches for modifying images to achieve visually acceptable results. The choice of such techniques is a function of the specific task, image content, observer characteristics, and viewing conditions. The point processing methods are the most primitive, yet essential, image processing operations and are used primarily for contrast enhancement. The image negative is suited for enhancing white detail embedded in dark regions and has applications in medical imaging. Power-law transformations are useful for general-purpose contrast manipulation: for a dark image, an expansion of grey levels is accomplished using a power-law transformation with a fractional exponent, while for an image having a washed-out appearance, a compression of grey levels is obtained using a power-law transformation with γ greater than 1. The log transformation is useful for enhancing details in the darker regions of the image at the expense of detail in the brighter regions (the higher-level values). The histogram of an image (i.e., a plot of the grey-level frequencies) provides important information regarding the contrast of an image. Histogram equalization is a transformation that stretches the contrast by redistributing the grey-level values uniformly; only global histogram equalization can be done completely automatically.

Although we did not discuss the computational cost of enhancement algorithms in this article, it may play a critical role in choosing an algorithm for real-time applications. Despite the effectiveness of each of these algorithms when applied separately, in practice one has to devise a combination of such methods to achieve more effective image enhancement.

References
[1] Bhabatosh Chanda and Dwijesh Dutta Majumder, Digital Image Processing and Analysis, 2002.
[2] A. R. Weeks, Jr., Fundamentals of Electronic Image Processing, Bellingham: SPIE Press, 1996.
[3] A. K. Jain, Fundamentals of Digital Image Processing, Englewood Cliffs, NJ: Prentice Hall, 1989.
[4] R. M. Haralick and L. G. Shapiro, Computer and Robot Vision, Vol. 1, Addison-Wesley, Reading, MA, 1992.
[5] R. Jain, R. Kasturi and B. G. Schunck, Machine Vision, McGraw-Hill International Edition, 1995.
[6] W. K. Pratt, Digital Image Processing, Prentice Hall, 1989.
[7] A. C. Bovik, Digital Image Processing Course Notes, Dept. of Electrical Engineering, U. of Texas at Austin, 1995.
[8] J. C. Russ, The Image Processing Handbook, CRC Press, Boca Raton, FL, 1992.
[9] R. Hummel, "Histogram modification techniques," Computer Graphics and Image Processing, Vol. 4, pp. 209-224, 1975.
[10] S. E. Umbaugh, Computer Vision and Image Processing, Prentice Hall PTR, 1998.
[11] S. M. Pizer et al., "Adaptive histogram equalization and its variations," Computer Vision, Graphics and Image Processing, Vol. 39, pp. 355-368, 1987.
[12] A. N. Netravali and B. G. Haskell, Digital Pictures: Representation and Compression, New York: Plenum, 1988.
ai_researcher
1
Untangling_the_Emotional_Intelligence-Suicidal_Ideation_Connection_The_Role_of_Cognitive_Emotion_Regulation_Strategies_in_Adolescents.pdf
arXiv:2109.14502v1 [cs.LG] 29 Sep 2021

Untangling Braids with Multi-agent Q-Learning

Abdullah Khan, Department of Mathematical Sciences, University of Essex, Colchester, UK, [email protected]
Alexei Vernitski, Department of Mathematical Sciences, University of Essex, Colchester, UK, [email protected]
Alexei Lisitsa, Department of Computer Science, University of Liverpool, Liverpool, UK, [email protected]

Abstract—We use reinforcement learning to tackle the problem of untangling braids. We experiment with braids with 2 and 3 strands. Two competing players learn to tangle and untangle a braid. We interface the braid untangling problem with the OpenAI Gym environment, a widely used way of connecting agents to reinforcement learning problems. The results provide evidence that the more we train the system, the better the untangling player gets at untangling braids. At the same time, our tangling player produces good examples of tangled braids.

I. INTRODUCTION

Braids are mathematical objects from low-dimensional topology which can be successfully encoded with sequences of letters and, therefore, studied using algebra or, as we do in this study, using a computer-scientific approach. A braid on n strands consists of n ropes whose left-hand ends are fixed one under another and whose right-hand ends are fixed one under another; you can imagine that the braid is laid out on a table, and the ends of the ropes are attached to the table with nails. Figures 1, 2 and 3 show examples of braids on 3 strands.

Fig. 1. Braid aabaBBAB

Fig. 2. Braid baBABaBb

Two braids are equivalent to one another if they can be transformed into one another by shifting and twisting the middle parts of the ropes (without touching the ends of the ropes). For example, the two braids in Figures 1 and 2 are equivalent to one another, although it is difficult to see it. They are also what is called trivial braids, in the sense that they are equivalent to the braid without any intersections of ropes, shown in Figure 3.

Fig. 3. The trivial braid without intersections

Now let us explain how braids can be represented conveniently in the computer. A braid is considered as a sequence of its simple fragments; for braids on 3 strands, these are the fragments shown in Figure 4, which we denote by A, a, B, b, 1 (and which in mathematical papers are usually denoted by σ1, σ1^{-1}, σ2, σ2^{-1}, 1). Using this convenient notation, we can now say that the braids in Figures 1 and 2 are aabaBBAB and baBABaBb. This notation is useful not only for describing braids, but also for checking if two braids are equivalent. Indeed, it is known that two braids are equivalent if and only if one can be transformed into the other using rules called the second Reidemeister move and the third Reidemeister move. The second Reidemeister move is the rule stating that Aa and aA are equivalent to 11, and Bb and bB are also equivalent to 11. (An algebraist studying braids in the context of group theory would also add that 11 is equivalent to 1; however, we felt that the performance of our AI would be best if we omitted this non-essential rule.) The third Reidemeister move is the rule stating that ABA is equivalent to BAB. Our general aim is to produce tangled braids and to untangle braids using reinforcement learning (RL). A recent study [9] uses RL to untangle knots using a version of Reidemeister moves known as Markov moves.

This work was supported by the Leverhulme Trust Research Project Grant RPG-2019-313.

Fig. 4. Braid fragments A, a, B, b, 1
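To fix ideas, the letter encoding and the length-preserving form of the second Reidemeister move can be expressed in a few lines of Python. This sketch is added for illustration and is based only on the rules stated above, not on the authors' implementation; as the comments note, fully untangling braids such as those in Figures 1 and 2 additionally requires the moves involving 1 and the third Reidemeister move, which is what makes the task non-trivial and motivates the learning approach.

```python
# Braids on 3 strands encoded as strings over {'A', 'a', 'B', 'b', '1'}.
# Length-preserving second Reidemeister move: Aa, aA -> 11 and Bb, bB -> 11.
REID2 = {"Aa": "11", "aA": "11", "Bb": "11", "bB": "11"}

def cancel_pairs(word):
    """Greedily rewrite adjacent cancelling pairs until none remain."""
    changed = True
    while changed:
        changed = False
        for pair, flat in REID2.items():
            if pair in word:
                word = word.replace(pair, flat, 1)
                changed = True
    return word

print(cancel_pairs("aAbB"))      # -> '1111'
# Greedy cancellation alone does NOT untangle Figure 1's braid: it contains no
# adjacent inverse pairs, so the commutations with '1' and the third
# Reidemeister move (ABA <-> BAB) are also needed.
print(cancel_pairs("aabaBBAB"))  # -> 'aabaBBAB' (unchanged)
```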
The novelty of our approach is the use of two agents: one for tangling and one for untangling. In this pilot study we concentrate on braids with 2 and 3 strands. For braids with 2 strands, the problem is equivalent to simplifying words in a group given by the presentation ⟨a, b | ab = ba = 1⟩. In our experiments we choose to use the moves which preserve the length of the braid; for example, ab simplifies to 11 and not to 1. We approach the problem of untangling braids with two strands as a symbol game. The input string of length n consists of 3 symbols: ['a', 'b', '1']. The task of the untangling agent is to convert the string so that all characters are '1' (the untangled state). The allowed moves are: 1a = a1; 1b = b1; ab = ba = 11. (In another experiment we used the moves 1a = a1; 1b = b1; aa = bb = 11, corresponding to the group ⟨a, b | aa = bb = 1⟩.) All such moves are implemented in both directions. We also experimented with braids with three strands, where the following transformations are allowed: Aa = aA = 11; Bb = bB = 11; A1 = 1A; a1 = 1a; B1 = 1B; b1 = 1b.

We approach the problem of untangling braids on 3 strands as a game played between two players, player 1 (the tangling player) and player 2 (the untangling player). Player 1 starts with an untangled braid as in Figure 3 and applies Reidemeister moves to tangle the braid. For example, the braids in Figures 1 and 2 were produced by player 1 after approximately 150 games against player 2. Once player 1 has created a tangled braid after a fixed number of steps, that braid becomes the input for player 2 (the untangling player); the task of player 2 is to apply Reidemeister moves to reach a fixed target output, namely all 1's (the untangled state).

In our experiments we approach the problem of untangling braids on 2 and 3 strands by simply using the Q-learning algorithm. Q-learning, starting from the current state of the agent, finds an optimal policy in the sense of maximizing the expected value of the total rewards [11]. To implement the Q-learning algorithm, we use OpenAI Gym [3]. It is an interface which provides a number of environments for implementing reinforcement learning problems. The benefit of interfacing with OpenAI Gym is that it is an actively developed interface which allows adding environments and features useful while training the model.

The paper is organized as follows: in the following section we discuss the background, covering the basics of reinforcement learning with a focus on a technique known as Q-learning. In Section 3 we briefly review the concept of OpenAI Gym and how we have used it for our problem. In Section 4 we present the experimental details and results.

II. BACKGROUND

In this section we formally highlight the important concepts for the understanding and development of the project, and also highlight some of the relevant work in the domain of reinforcement learning, specifically for games. Reinforcement learning is the training of machine learning models to make a sequence of decisions, where the agent learns to achieve a goal in an uncertain, potentially complex environment [10]. In RL, there is a game-like situation, where the computer employs trial and error to come up with a solution to the problem. During the whole learning process, the agent gets either rewards or penalties for the actions it performs. The overall goal is to maximize the total rewards.

We have used a model-free reinforcement learning algorithm known as Q-learning [14].
It is an off-policy algorithm to determine the best action in the current state. Off-policy means that the agent, rather than following certain rules of behavior, can take random actions; the best action is the one assumed to result in the highest reward; the current state is the present situation in which the agent resides. Basically, there exists a system of rewards used to build a matrix of scores for each possible move, known as the Q-matrix. What Q-learning does is measure how good a state-action combination is in terms of rewards. It does so by keeping track of the Q-matrix, a reference matrix which gets updated after each episode, with its rows corresponding to the states and its columns to the actions. An episode ends after a set of actions is completed. The Q-matrix is updated using a mathematical formula known as the Bellman equation:

Q_new(s, a) = Q(s, a) + α [ R(s, a) + γ max_{a'} Q(s', a') − Q(s, a) ]

where Q_new(s, a) is the new Q-value, Q(s, a) is the current Q-value, R(s, a) is the reward, α is the learning rate, γ is the discount rate, and max_{a'} Q(s', a') is the maximum predicted reward given the new state and all possible actions. In the above equation the first term, Q(s, a), is the value of the current action in the current state; α is the learning rate, which controls how much the difference between the previous and new Q-value is taken into account; γ is a discount factor, which is used to balance immediate and future reward. The updates occur after each step or action and end when an episode is done (reaching the terminal point). The agent will not learn much after a single episode, but eventually, with enough exploring (steps and episodes), it will converge and learn the optimal Q-values.

RL has had extensive success in complex control environments like Atari games [13] and Sokoban planning [4]. It has also been applied to real-time strategy (RTS) games, such as bots [15]; another reinforcement-learning-based approach [1] chooses from a set of predefined strategies in turn-based strategy games. In such approaches the training process is separated into several stages, each of them responsible for different aspects of the game (such as combat, movement and exploration). Other works on strategic fighting games [7], [2] map the possible states of the game based on low-level information, such as the distance between the fighters and health points. The reward functions used are simple: a positive reward is granted every time the agent strikes the opponent, and a negative reward is given when the agent gets hit. A very recent study [9] introduced natural language processing into the study of knot theory, also utilizing reinforcement learning (RL) to find sequences of moves and braid relations that simplify knots; it can identify unknots by explicitly giving the sequence of actions. Another study [8] proposed HULK, a perception-based system that untangles dense overhand and figure-eight knots in linear deformable objects from RGB observations. It exploits geometry at local and global scales and learns to model only task-specific features, instead of performing full state estimation, to enable fine-grained manipulation.
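In tabular form this update is only a few lines of code. The sketch below is illustrative: the state/action counts and the values of α, γ and ε are placeholder choices, not the paper's settings.

```python
import numpy as np

n_states, n_actions = 9, 7            # e.g. the 2-strand states and Table I actions
Q = np.zeros((n_states, n_actions))   # the Q-matrix described above
alpha, gamma = 0.1, 0.9               # learning rate and discount factor

def q_update(s, a, reward, s_next):
    """One Bellman update: Q(s,a) += alpha * (R + gamma * max_a' Q(s',a') - Q(s,a))."""
    td_target = reward + gamma * Q[s_next].max()
    Q[s, a] += alpha * (td_target - Q[s, a])

def epsilon_greedy(s, eps=0.1):
    """Off-policy behaviour: explore with probability eps, otherwise act greedily."""
    if np.random.rand() < eps:
        return np.random.randint(n_actions)
    return int(Q[s].argmax())
```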
III. OPENAI GYM

Recent advances in RL combine Deep Learning (DL) with RL (Deep Reinforcement Learning) and have shown that model-free optimization, or policy gradients, can be used for complex environments [14]. However, in order to continue testing new ideas and increasing the quality of results, the research community needs good benchmark platforms. This is the main goal of the OpenAI Gym platform [3]. It is basically a toolkit used for developing and testing reinforcement learning algorithms. One of the encouraging aspects of choosing OpenAI Gym is that it makes no assumptions about the structure of the agent and has compatibility with any numerical computation library, such as Theano or Google's TensorFlow. Gym is a library which contains a collection of test problems, known as environments, which can be used for testing reinforcement learning algorithms. It also allows users to design their own customized environments. A commonality in all of reinforcement learning is an agent situated in an environment: in each step, the agent takes an action and as a result receives an observation and a reward from the environment. What makes OpenAI Gym unique is how it focuses on the episodic setting of reinforcement learning, where the agent's action chains are broken down into a sequence of episodes. Each episode begins by randomly sampling the agent's initial state and continues until the environment reaches a terminal state. The purpose of structuring reinforcement learning into episodes like these is to maximize the expected total reward per episode, and to reach a high level of performance in as few episodes as possible.

A. Environment Set-up for our problem

To use the Q-learning algorithm, it is necessary to set up the environment, which defines all the possible actions and states of the agent. These states must encode useful information for the learning process. In our case of braids with two strands, the following states are observed: (aa, bb, ab, ba, a1, 1a, 1b, b1, 11). The agent remains in the same state until a legal action takes place. All the legal actions are described in the Introduction section of the paper. For braids with 2 and 3 strands we basically have a caret which moves back and forth over the string. Each time it moves over the string, the agent is in a specific state, and that state is only changed after some legal action takes place. For the case of braids with 2 strands, the caret moves over two characters at a time, whereas for the case with 3 strands the caret moves over three characters at a time in the whole string, so the state space is also larger.

TABLE I. Reward associated with each action

    Action            Reward
    CARET MOVE         0
    ROTATE TRUE        0
    ROTATE FALSE       0
    REPLACE TRUE       1
    REPLACE BACK      -2
    REPLACE FALSE     -1
    ROTATE REPLACE     1

B. Action Space and Rewards

Table I shows the rewards associated with each action for braids with 2 and 3 strands. As we have already discussed, all actions that bring us closer to the target output have higher rewards, and all actions that take us away from the target output have lower rewards. For the case of braids with 2 strands, there are actions such as action replace, which replaces (ab to 11, ba to 11), and action replace back, which replaces (11 to ab, 11 to ba), whereas action rotate moves the position of the string, e.g. (1a to a1, 1b to b1) and vice versa. Action move caret (left/right) moves the caret to the left or right. The reward associated with action replace when true is 1, with action replace when false is -1, the reward for action replace back is -2, for action rotate replace it is 1, and for all other actions the reward is 0.
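A Gym-style environment for the 2-strand game might look roughly as follows. Only the move set and the Table I reward values are taken from the text; the observation encoding, episode termination and the restriction to two of the actions are simplifying assumptions of this sketch.

```python
import random

class TwoStrandBraidEnv:
    """Skeleton in the gym.Env spirit: reset() and step(action) -> (obs, reward, done, info)."""

    REWARDS = {"replace_true": 1, "replace_false": -1,
               "replace_back": -2, "rotate_replace": 1}   # all other actions: 0 (Table I)

    def __init__(self, length=8):
        self.length = length

    def reset(self):
        self.word = "".join(random.choice("ab1") for _ in range(self.length))
        self.caret = 0                       # the caret covers two characters at a time
        return self.word

    def step(self, action):
        reward = 0
        if action == "caret_move":
            self.caret = (self.caret + 1) % (self.length - 1)
        elif action == "replace":            # ab or ba under the caret -> 11
            pair = self.word[self.caret:self.caret + 2]
            if pair in ("ab", "ba"):
                self.word = self.word[:self.caret] + "11" + self.word[self.caret + 2:]
                reward = self.REWARDS["replace_true"]
            else:
                reward = self.REWARDS["replace_false"]
        done = set(self.word) == {"1"}       # untangled state: all characters are '1'
        return self.word, reward, done, {}
```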
Similarly, for the other case where we have braids with 3 strands, action replace replaces (Aa to 11, aA to 11, Bb to 11, bB to 11), action replace back replaces (11 to Aa, 11 to aA, 11 to Bb, 11 to bB), action rotate replace moves the position of the strings (ABA to BAB, BAB to ABA), and action rotate moves the position of the strings (aA to Aa, bB to Bb) and vice versa. The choice of the reward values is inspired by some recently published work [5], [12].

IV. EXPERIMENTS

Untangling braids requires the implementation of the Q-learning algorithm discussed in Section 2. To measure the performance of the Q-learning model, we utilize the metrics provided by the OpenAI Gym interface, namely rewards over episodes of a particular environment. Separate experiments were performed for different environments. The choice of hyper-parameters follows some of the work in the literature [6]. In the environment where we consider braids with 2 strands, we have a single agent which performs a series of actions during training to untangle the braid. We observe during the training process that inside each episode the agent starts with random actions to untangle the braid, and finally over a period of time learns the right actions to reach the target output. It can be observed, looking at Figures 5 and 6 for different lengths of the input, that negative rewards are quite prominent and the agent hardly learns if we train the model for a smaller number of episodes, whereas with longer training the rewards progressively increase over time and ultimately level out at a high reward-per-episode value from episode 4000 onwards, which indicates that the agent learns to maximize its total reward earned over the period of time.

TABLE II. Probability of player 2 winning the game (ep = episodes)

    Input length   ep=10000   ep=10000   ep=1000    ep=1000
                   steps=20   steps=100  steps=100  steps=20
    7              81.7%      66.6%      46.2%      40%
    8              85%        85%        48.6%      30.7%
    9              87.2%      75.8%      42.7%      29.4%
    10             72.9%      84.9%      36.3%      24.9%
    11             72.3%      60.9%      32.7%      24.8%

In the multi-agent scenario, where we consider braids with 3 strands, in each episode the first agent, for the given length of the input, tries to tangle the braid during a fixed number of steps by applying the transformations discussed in Section 1; that tangled state is the input for the second agent, which applies the same transformations to untangle the braid. As we approach the problem as a competitive game between two players (player 1 = tangling player, player 2 = untangling player), it is observed from Table II that for a smaller number of training episodes and a larger input length the probability of the tangling player winning the game is higher, whereas when we train the system for a larger number of episodes the probability of the untangling player winning the game at the end of training is higher. Figures 1 and 2 show examples of hard tangled braids produced by player 1 after 150 episodes.

Fig. 5. Plots of rewards (Rw) versus episodes (Ep) during training at 1000 episodes, for n=7 and n=8

Fig. 6. Plots of rewards versus episodes during training (panels at 1000 and 10000 episodes) for n=7 and n=8; Ep = episodes, Rw = rewards

V. CONCLUSION

In this pilot study we successfully conducted several experiments using the Q-learning algorithm to untangle braids with 2 and 3 strands. The problem of untangling braids with 2 strands was approached as a simple rule-based game, where the agent learns over time the right rules to untangle the braid.
Whereas the problem of untangling braids with 3 strands was approached as a competitive game between two players: the first agent starts with a fixed length of input and applies the rules to tangle the braid; that tangled braid is the input for the second agent, which again applies the rules to untangle the braid; ultimately, if the second agent successfully untangles the braid, it wins the round. We observe that the more we train the model, the higher the probability that the second agent wins the game. In the future we intend to approach the same problem using a DQN (Deep Q-learning Network) to compare the results with the Q-learning approach.

REFERENCES
[1] Amato, C., Shani, G.: High-level reinforcement learning in strategy games. In: AAMAS. vol. 10, pp. 75-82 (2010)
[2] Andrade, G., Ramalho, G., Santana, H., Corruble, V.: Automatic computer game balancing: a reinforcement learning approach. In: Proceedings of the fourth international joint conference on Autonomous agents and multiagent systems. pp. 1111-1112 (2005)
[3] Brockman, G., Cheung, V., Pettersson, L., Schneider, J., Schulman, J., Tang, J., Zaremba, W.: OpenAI Gym. arXiv preprint arXiv:1606.01540 (2016)
[4] Feng, D., Gomes, C.P., Selman, B.: Solving hard AI planning instances using curriculum-driven deep reinforcement learning. arXiv preprint arXiv:2006.02689 (2020)
[5] Gawłowicz, P., Zubow, A.: ns3-gym: Extending OpenAI Gym for networking research. arXiv preprint arXiv:1810.03943 (2018)
[6] Tostaeva, G.: Introduction to Q-learning with OpenAI Gym. https://medium.com/swlh/introduction-to-q-learning-with-openai-gym-2d794da10f3d (April 2020)
[7] Graepel, T., Herbrich, R., Gold, J.: Learning to fight. In: Proceedings of the International Conference on Computer Games: Artificial Intelligence, Design and Education. pp. 193-200. Citeseer (2004)
[8] Grannen, J., Sundaresan, P., Thananjeyan, B., Ichnowski, J., Balakrishna, A., Viswanath, V., Laskey, M., Gonzalez, J.E., Goldberg, K.: Learning robot policies for untangling dense knots in linear deformable structures. In: Conference on Robot Learning (CoRL) (2020)
[9] Gukov, S., Halverson, J., Ruehle, F., Sułkowski, P.: Learning to unknot. Machine Learning: Science and Technology 2(2), 025035 (2021)
[10] Kaelbling, L.P., Littman, M.L., Moore, A.W.: Reinforcement learning: A survey. Journal of Artificial Intelligence Research 4, 237-285 (1996)
[11] Melo, F.S.: Convergence of Q-learning: A simple proof. Institute of Systems and Robotics, Tech. Rep., pp. 1-4 (2001)
[12] Mendonça, M.R., Bernardino, H.S., Neto, R.F.: Simulating human behavior in fighting games using reinforcement learning and artificial neural networks. In: 2015 14th Brazilian Symposium on Computer Games and Digital Entertainment (SBGames). pp. 152-159. IEEE (2015)
[13] Mnih, V., Kavukcuoglu, K., Silver, D., Graves, A., Antonoglou, I., Wierstra, D., Riedmiller, M.: Playing Atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602 (2013)
[14] Watkins, C.J., Dayan, P.: Q-learning. Machine Learning 8(3-4), 279-292 (1992)
[15] Wender, S., Watson, I.: Combining case-based reasoning and reinforcement learning for unit navigation in real-time strategy game AI. In: International Conference on Case-Based Reasoning. pp. 511-525. Springer (2014)
ai_researcher
1
TELECOMParisTech_at_ImageClefphoto_2008_Bi-Modal_Text_and_Image_Retrieval_with_Diversity_Enhancement.pdf
arXiv:0811.1544v1 [hep-ex] 10 Nov 2008

Jet physics at HERA, Tevatron and LHC

Christophe Royon
IRFU-SPP, CEA Saclay, F-91191 Gif-sur-Yvette, France

1 Introduction

In this short report, we discuss jet physics results and perspectives at HERA, the Tevatron and the LHC. The different accelerators are complementary, as shown in Fig. 1, where the kinematical plane in (x, Q2) is displayed (x and Q2 are respectively the proton momentum fraction carried by the interacting parton and the transferred energy squared carried by the virtual photon). HERA allows one to reach very low values of x at low Q2 (x ∼ 10^-6), whereas the Tevatron (and the LHC) reach very high values of Q2 at high x (Q2 ∼ 3·10^5 and 10^8 GeV2 at the Tevatron and the LHC respectively). In the following, we will benefit from the differences between the accelerators to assess the proton structure in a wide kinematical domain.

Figure 1: Kinematical domain reached by the experiments at HERA, Tevatron and LHC.

We will start this report by describing the constraints on the proton structure (quark and gluon densities) obtained using inclusive jets at HERA and the Tevatron.
The jet energy scale is determined mainly using γ+jet events. In the D0 collaboration, the corrected jet energy is obtained using the following method Ecorr jet = Euncorr − Of f jet Show × Resp 2 (1) Figure 2: Ratios of the jet production to the neutral current cross sections as a function of jet ET in three different Q2 regions. Figure 3: Fractional uncertainty on gluon density in the proton in four different Q2 bins determined using the proton structure function F2 data measured at HERA (in red) and the jet cross sections in addition (in yellow). 3 jet jet and Euncorr where Ecorr are the corrected and uncorrected jet energies respectively. The offset corrections (Of f ) are related to uranium noise and pile-up and are deter- mined using zero-bias data. The showering corrections (Show) take into account the energy emitted outside the jet cone because of the detector and dead material and, of course, not the physics showering outside the jet cone which corresponds to QCD radiation outside the cone. The jet response (Resp) is the largest correction, and can be subdivided in few corrections. The first step is to equalize the calorimeter response as a function of rapidity, and the jet response is then measured for the central part of the calorimeter only using the pT balance in γ+jet events. Some additional small corrections related to the method biases are introduced. One important additional correction deals with the difference in response between quark and gluon jets. The difference was studied both in data and in Monte Carlo (using for instance the γ+jet and the dijet samples which are respectively quark and gluon dominated) and leads to a difference of 4 to 6% as a function of jet pT , which is not negligible if one wants a precision on jet energy scale of the order of 1%. This has an important consequence. The jet energy scale is not universal but sample dependent. QCD jets (gluon domi- nated) will have a different correction with respect to the tt events for instance which are quark dominated. The CDF collaboration follows a method which is more Monte Carlo oriented using beam tests and single pion response to tune their Monte Carlo. At the LHC, it will be possible to use Z+jets which do not suffer from the ambiguity of photon identification in the detector. The uncertainties reached by the D0 collaboration concerning the determination of jet energy scale are of the order of 1.2% for jet pT between 70-400 GeV and in a wide range of rapidity around zero (the uncertainty is of the order of 2% for a rapidity of 2.5). This allows to make a very precise measurement of the jet inclusive cross section as a function of their transverse momentum. The measurement of the inclusive jet cross section [3] was performed by the D0 and CDF collaborations at the Tevatron using a jet cone algorithm with a cone size of 0.7 (D0 and CDF) and the kT algorithm (CDF). Data are corrected to hadron level (D0) or parton level (CDF). The motivation of this measurement is double: it is sensitive to beyond standard model effects such as quark substructure and to PDFs, especially the gluon density at high x. Historically, the excess observed by the CDF collaboration in 1995 concerning the inclusive jet pT spectrum compared to the parametrisations was suspected to be a signal of quark substructure but it was found that increasing the gluon density at high x could accomodate these data. This raises the question of PDFs versus beyond standard model effects, and the interpretation of data in general. 
Data are compared with NLO QCD calculations using either CTEQ6.5M [4] for D0 or CTEQ6.1 for CDF (the uncertainties of the CTEQ6.5M parametrisation are two times smaller). A good agreement is found over six orders of magnitude. The ratio data over theory for the D0 and CDF measurements are given in Figs. 4 and 5. A good agreement is found between NLO QCD and the D0 or CDF measurements with 4 a tendency of the CTEQ parametrisation to be slightly lower than the data at high jet pT . The MRST2004 [4] parametrisation follows the shape of the measurements. Given the precision obtained on jet energy scale, the uncertainties obtained by the D0 collaboration are lower than the PDF ones and will allow to constrain further the PDFs (the uncertainties of the CDF collaboration are about two times larger). The D0 collaboration took also special care of the uncertainty correlation studies, by giving the effects of the 24 sources of systematics in data. In addition, the CDF collaboration measured the dijet mass cross section [5] above 180 GeV, and up to 1.2 TeV. No excess was found with respect to NLO QCD cal- culations and this measurement allows to exclude excited quarks below 870 GeV, Z ′ (resp. W ′) below 740 (resp. 840) GeV 1, and technirho below 1.1 TeV. The question rises if PDFs can be further constrained at the LHC using inclusive measurements. The PDF uncertainties are typically of the order of 15% for a jet pT of 1 TeV, and 25% of 2 TeV for 1 < |ηjet| < 2 (without taking into account the new Tevatron measurements which we just discussed). A typical uncertainty of 5% (resp. 1%) on jet energy scale leads to a systematic uncertainty on 30 to 50% (resp. 6 to 10%) on the jet cross section. A precise determination of the jet energy scale at the LHC will thus be needed to get competitive measurements at the LHC. DØ Run II coneR = 0.7 1.5 L = 0.70 fb -1 NLO pQCD = p T +non-perturbative corrections = R F Data Systematic uncertainty 1.0 0.5 y y y y y y r r r r r r o o o o o o e e e e e e h h h h h h t t t t t t / / / / / / a a a a a a t t t t t t a a a a a a d d d d d d 1.5 1.0 0.5 0.0 |y| < 0.4 0.4 < |y| < 0.8 0.8 < |y| < 1.2 NLO scale uncertainty CTEQ6.5M with uncertainties MRST2004 1.2 < |y| < 1.6 1.6 < |y| < 2.0 2.0 < |y| < 2.4 50 100 200 300 50 100 200 300 50 100 200 300 (GeV) (GeV) (GeV) (GeV) (GeV) (GeV) p p p p p p T T T T T T Figure 4: Data over theory for the inclusive pT cross section measurement for the D0 collaboration using the 0.7 jet cone. Data are compared to NLO QCD calculations using the CTEQ6.5M parametrisation. 1Stronger limits on W ′ and Z ′ mass limits come from lepton based searches 5 m m JET |y |<0.1 JET 0.7<|y |<1.1 JET 1.6<|y |<2.1 M 1 . 6 Q E T C o t o i t a R M 1 . 6 Q E T C o t o i t a R . M 1 6 Q E T C o t o i t a R 3 2 1 3 2 1 3 2 1 0.1<|y JET |<0.7 1.1<|y JET |<1.6 200 400 600 JET Tp [GeV/c] CDF Run II Preliminary JETp [GeV/c] TK D=0.7 )-1 Data ( L = 0.98 fb Systematic uncertainties PDF uncertainties JET = max p T = 2 x 0 MRST2004 0 200 400 600 JET Tp [GeV/c] Figure 5: Data over theory for the inclusive pT cross section measurement for the CDF collaboration using the kT algorithm. Data are compared to NLO QCD calculations using the CTEQ6.1 parametrisation. 2.3 How do PDF uncertainties affect LHC potential? Another question to be raised is to know whether the uncertainty on PDFs (and also of higher order effects) can affect the LHC discovery potentual. As an example, let us consider the Higgs boson production. 
The cross sections are known precisely both for background and signal (typically the uncertainties on σ(gg → H) and on σ(qq → Hqq) cross sections due to PDFs are respectively less than 5 and 15% over the full Higgs boson mass range). However, there are additional uncertainties related to higher order effects. For example, for Higgs production for a Higgs mass of 120 GeV, NNLO effects are of the order of 9% (for Z production, it is of the order of 4%). Both sets of uncertainties have to be taken into account in the predictions. On the other hand, the LHC potential can be affected if the background is poorly known. PDF uncertainties can thus have an impact on searches (extra dimensions, single top, SUSY...). As an example, we can quote the search for qqqq contact inter- actions for a given compactification scale which can appear as an excess in the dijet mass spectrum. For a compactification scale of 2 TeV, and 2 extra dimensions, the effect of contact interactions is found to be of the same order as the present PDF uncertainties. 6 m m 3 Multijet cross section measurements at the Teva- tron and at HERA The measurement of multijet cross sections at the Tevatron and at HERA (and later on at the LHC) is fundamental to constrain the PDFs and to tune the Monte Carlo, since it is a direct background entering in many searches for Higgs bosons or new particles at the LHC. We can quote for instance the search for Higgs bosons in as- sociation with tt, the measurement of the tt production cross section, the search for R-parity violated SUSY (which can lead up to 8-10 jets per event...). 3.1 Measurement of ∆Φ between jets in D0 The advantage of the measurement of the difference in azimuthal angle between two leading jets in an inclusive QCD sample as was performed in D0 is that there is no need of precise knowledge of jet energy scale (the measurement is dominated by the knowledge of jet angles). The ∆Φ spectrum was measured in four different regions in maximum jet transverse momentum, and a good agreement was found with NLO calculations except at very high ∆Φ where soft radiation is missing [6]. PYTHIA [7] shows a disagreement at small ∆Φ, showing a lack of initial state gluon radiation, while HERWIG [8] shows a good agreement with data. 3.2 Measurement of multijet and γ+jet cross sections The H1 and ZEUS collaborations measured the 2 and 3 jet production cross section relatively to the neutral current one to reduce systematics. A good agreement is found with NLO calculations [9]. The D0 collaboration measured the inclusive production of isolated γ+ jets in different detector regions requiring a central photon and a central or a forward jet. It distinguished the cases when the photon and the jet are on the same or opposite side. The cross section has been found in disagreement with NLO QCD expectations both in shape and normalisation and the reason is unclear [10]. 3.3 Jet shape measurements in CDF The jet shape is dictated by multi-gluon emission from primary partons, and is sen- sitive to quark/gluon contents, PDFs and running αS, as well as underlying events. We define Ψ which is sensitive to the way the energy is spread around the jet center Ψ(r) = 1 Njets Σjets PT (0, r) P jet T (0, R) (2) 7 where R is the jet size. The energy is more concentrated towards the jet center for quark than for gluon jets since there is more QCD radiation for gluon jets (which means that Ψ is closer to one for quark jets when r ∼ 0.3R for instance. 
The CDF collaboration measured Ψ(0.3/R) for jets with 0.1 < |y| < 0.7 as a function of jet pT and found higher values of Ψ at high pT as expected since jets are more quark like [11]. This measurement also helps tuning the PYTHIA and HERWIG generators since it is sensitive to underlying events in particular. The CDF collaboration also studied the jet shapes for b-jets in four different pT bins [12], and the result is given in Fig. 6. The default PYTHIA and HERWIG Monte Carlo in black full and dashed lines respectively are unable to describe the measurement. Compared to the inclusive jet shape depicted in Fig. 6 in full red line for PYTHIA, the tendency of the b-jet shape is definitely the right one, leading to smaller values of Ψ as expected, but the measurement leads to a larger difference. The effect of reducing the single b-quark fraction by 20% leads to a better description of data as it shown in green in Fig. 6. The fraction of b-jets that originate from flavour creation (where a single b-quark is expected in the same jet cone) over those that originate from gluon splitting (where two b-quarks are expected in the same jet cone) is different in Monte Carlo and data. The CDF collaboration also measured the bb dijet cross section as a function of the leading jet pT and the difference in azimuthal angle between the two jets and it leads to the same conclusion, namely that PYTHIA and HERWIG underestimates the gluon splitting mechanism [5]. (r/R) b 1 MidPoint R=0.7, f =0.75 merge |Y jet £| 0.7 CDF II preliminary 0.8 0.6 0.4 0.2 1 0.8 0.6 0.4 0.2 0 £ T 52 < p 80 GeV/c £ T 80 < p 104 GeV/c data Pythia Tune A: b 1bb f incl -0.2 Herwig: b 1bb f -0.2 £ T 104 < p 142 GeV/c £ T 142 < p 300 GeV/c 0.2 0.4 0.6 0.8 1 0.2 0.4 0.6 0.8 1 r/R Figure 6: Measurement of the b-jet shapes and comparison with the predictions of the PYTHIA and HERWIG Monte Carlo (see text). 8 Y 4 Underlying events at Tevatron and LHC The CDF collaboration measured underlying events at the Tevatron and used these measurements to tune in particular the PYTHIA generator. pp or pp interactions are namely not as simple as interactions in ep colliders. In addition to the hard scattering producing dijets, high pT leptons..., spectator partons produce additional soft interactions called underlying events. The main consequence is that it introduces additional energy in the detector not related to the main interaction which need to be corrected. To study this kind of events, the idea is quite simple. It is for instance possible to use dijet events and we can distinguish in azimuthal angle three different regions: the “toward” region around the leading jet direction defined by a cone of 60 degrees around the jet axis, the “away” region in the opposite direction to the jet, and the “transverse” region the remaining regions far away from the jet and the “away” region. In dijet events, the “transverse” region will be dominated by underlying events. The CDF collaboration measured the charged multiplicity and the charged transverse evergy as a function of jet transverse energy and used these quantities to tune the PYTHIA Monte Carlo leading to the so called Tune A and Tune AW [5]. Clean Drell Yan events can also be used to tune underlying events [5]. The lepton pair defines the “toward” region while the “away” and “transverse” regions are defined in the same way as for dijets. As an example, we give in Fig. 
7 the charged particle density as a function of the transverse momentum of the lepton pair in the three regions compared with the Tune AW of PYTHIA. At the LHC, one of the first measurements to be performed will be related to the tuning of underlying events in the generators. Present tunings between the different Monte Carlo (PYTHIA, PHOJET, HERWIG) show differences up to a factor six concerning the average multiplicity of charged particles as a function of the pT of the leading jet as an example, and it is crucial to tune the Monte Carlo to accomplish fully the LHC program. 5 Measurements of the W +jet and Z+jet cross sec- tions at the Tevatron The measurements of the W +jet and Z+jet cross sections are specially important since they are a background for many searches and especially the search for the Higgs boson. 9 Figure 7: Measurement of the charged particle density for Drell Yan events in the “toward”, “away” and “transverse” regions compared to PYTHIA Tune AW. 5.1 Measurements of the W + X cross sections The D0 collaboration measured the ratio of the W + c to the inclusive cross section 0.074 ± 0.019 (stat.) ±0.012 0.014 (syst.) in agreement with NLO calculation [13]. It will be important to redo this measurement with higher statistics since it is directly sensitive to the s-quark PDF. The W + X cross section measurement at the LHC is considered to be one of the “standard” candles with small theoretical uncertainties (the NNLO scale dependence is less than 1%) and could be used even for luminosity measurements. Unfortunately, the PDFs are not so well known in the kinematical region where the W + X cross section is measured. The average value of x (< x >∼ 7.10−3 with 5.10−4 < x < 5.10−2) is not in the valence region and thus not in the region where quarks are best known. The differences between PDFs lead to an uncertainty on the W + X cross section of the order of 8% which is not precise enough to be used as a luminosity monitor. An independant better determination of the PDFs would change the conclusions. 5.2 Measurement of the Z + b and W + b cross sections The motivation to measure the Z + b-jet cross section is quite clear: this is a direct background for Higgs boson searches and it is also sensitive to the b quark content of the proton. The measurements of the Z + b-jet and W + b-jet cross sections were performed by the CDF collaboration at the Tevatron σ(Z + b jets) =0.86 ± 0.14 ± 0.12 pb and σ(W + b − jets) × BR(W → lν) = 2.74 ± 0.27(stat.) ± 0.42(sys.) 10 pb in agreement with NLO calculations and PYTHIA predictions [14]. The CDF collaboration also compared the differential distributions in jet pT and rapidity as an example and the distributions are found in good agreeement with PYTHIA. 6 Forward jets and Mueller Navelet jets 6.1 Low Q2 jets at HERA We discussed so far only high ET jets at high Q2 and the question raises about what happens at low Q2 and how low in Q2 and jet pT is perturbative QCD at NLO reliable. In other words, BFKL [15] effects are supposed to appear at very low Q2. The H1 collaboration measured the inclusive jet cross section differentially in Q2 (dσ/dQ2) for jet pT greater than 5 GeV and a discrepancy of about a factor 2 between NLO calculations and the measurement is found for Q2 ∼ 6 GeV2. The reason can be due to missing higher order effects (NNLO) or missing low x resummation terms present in the BFKL equation [16]. To test further the low x dynamics, the H1 and ZEUS collaborations measured forward jet production cross sections. 
The idea is simple: we ask jets to be emitted in the “forward” region, as far as possible in rapidity from the scattered electron. When T and the virtual photon Q2 are close, the DGLAP NLO cross section [17] is the jet p2 expected to be small because of the kT ordering of the partons in the ladder in the DGLAP evolution. The BFKL cross section is expected to be much higher since there is no kT ordering of the emitted gluons. The kinematical region probed by the H1 collaboration is 10−4 < x < 4.10−3, pT (jet) > 3, 5 GeV, 7 < θjet < 20 degrees, 0.5 < p2 T /Q2 < 5 to enhance the BFKL resummation effects [18]. A discrepancy between NLO QCD prediction and the measurement is found on the differential forward jet dσ/dx cross section at low x (the discrepancy is about a factor 3 for x ∼ 0.0005. The H1 collaboration also looked at the production cross section of two forward jets and one central jet and some discrepancy is found again at low x. To study further how one moves from the BFKL dynamics to the DGLAP one, the H1 collaboration measured the triple differential jet cross section dσ/dxdp2 T dQ2 [18] as a function of x for different regions in Q2 and p2 T . The measurement is shown in Fig. 8 [19]. The NLO QCD prediction is displayed in dotted line and describes the cross section at high pT but not at low pT where it undershoots the data. The LL BFKL prediction leads to a good description at low pT (or in the case when r = p2 T /Q2 is close to 1 as expected since BFKL effects are dominant in this kinematical region, and overshoots the data at high pT . BFKL NLL leads to a good description of data over the full range. In Fig. 8, we display two different resummation schemes for BFKL NLL called S3 and S4 which both lead to a good description [19]. It is worth noticing that implementing the higher-order corrections in the impact factor due to exact gluon kinematics in the γ∗ → qq transition improves further the description of 11 data [19]. This measurement shows a clear discrepancy with DGLAP NLO calculation and is well described by the NLL BFKL formalism, and it would be nice to know the effects of higher orders corrections of the DGLAP prediction. The ZEUS collaboration also studied the forward jet cross section. They measure the 3 jet cross section and they see a disagreement with NLO QCD when the jets are in the forward region [18]. d s /dx dpT 2 d Q2 - H1 DATA 5<Q2<10 10<Q2<20 20<Q2<85 . 5 3 < 2 T p < 5 2 . 2 1 . 5 9 < 2 T p < . 5 3 8 6 4 2 0 2 1.5 1 0.5 0 . 0 0 4 < 2 T p < . 5 9 0.1 0.05 0 0.8 0.6 0.4 0.2 0 0.3 0.2 0.1 0 0.02 0.01 0 0.025 0.05 0.075 0.1 x 10 x -2 0.025 0.05 0.075 0.1 x 10 x -2 0.025 0.05 0.075 0.1 x 10 x -2 0.05 0.04 0.03 0.02 0.01 0.05 0.1 0.15 0.2 x 10 x -2 0 0.001 0.002 0.003 0.004 x 0.02 0.015 0.01 0.005 0.05 0.1 0.15 0.2 x 10 x 0 0.001 -2 -2 x 10 0.002 0.003 0.004 x 0.25 0.2 0.15 0.1 0.05 0.05 0.1 0.15 0.2 x 10 x -2 0 0.001 0.002 0.003 0.004 x Figure 8: Triple differential cross section measured by the H1 collaboration. 6.2 Mueller Navelet jets at the Tevatron and the LHC The same idea as the forward jets at HERA can be used at the Tevatron and the LHC. Mueller Navelet jets are jets produced in pp and pp collisions, requiring these two jets to be as far away as possible in rapidity, and to have about the same transverse momentum. For the same reason as for forward jets, the kT ordering of the gluons of the ladder ensures that the DGLAP cross section is low whereas the BFKL one is expected to be higher. 
Another easier observable is the measurement of the difference in azimuthal angle between the two forward jets. Since there are few gluons emitted for the DGLAP evolution, the ∆Φ value is peaked towards π whereas the BFKL expectation will be a flatter distribution in ∆Φ because of the emitted gluons. This 12 measurement can be performed at the Tevatron and the LHC and can be a test of BFKL resummation effects [20]. 7 Conclusion In this short report, we presented many new results from HERA and the Tevatron concerning jet physics and also some expectations for the LHC. In particular, the new measurement of the inclusive jet cross section at the Tevatron is complementary to the HERA jet cross section measurements and is fundamental to constrain further the gluon density at high x, which is useful for searches at the LHC in the jet channel, especially for a better knowledge of background. The multijet cross section measure- ments is also in agreement with NLO QCD calculations and is also fundamental for the LHC. The γ+jet cross sections is in discrepancy with NLO calculation and the reason is unclear. The W +jet and Z+jet cross sections are in general in agreement with NLO calculations but the uncertainties are still large and will benefit from higher statistics. We finished the report by describing the forward jet and Mueller Navelet jet measurements which are senstive to low x resummation effects given by the BFKL equation. Many other topics such as diffraction and the search for diffractive exclu- sive events in the jet channel by the CDF collaboration, and the implications for the LHC diffractive program were not described because of lack of time [21] References [1] A. Aktas et al., JHEP 0710:042 (2007); A. Aktas et al., Phys. Lett. B653 (2007) 134; C. Adloff et al., Eur. Phys. J. C29 (2003) 497; S. Chekanov et al., Nucl. Phys. B 765 (2007) 1; S. Chekanov et al., Eur. Phys. J. C 42 (2005) 1. [2] C. Adloff et al., Eur. Phys. J. C 30 (2003) 1; S. Chekanov et al., Phys. Rev. D 78 (2008) 032004. [3] V. M. Abazov et al., Phys. Rev. Lett. 101 (2008) 062001; A. Abulencia et al., Phys. Rev. D 75, 092006 (2007); Phys. Rev. D 74, 071103 (2006). [4] W.K. Tung et al., JHEP 0702, 053 (2007); J. Pumplin et al., JHEP 0207, 12 (2002); D. Stump et al., JHEP 0310, 046 (2003); A.D. Martin et al., Phys. Lett. B 604, 61 (2004). [5] see http://www-cdf.fnal.gov/physics/new/qcd/QCD.html. [6] V. M. Abazov et al., Phys. Rev. Lett. 94 (2005) 221801. [7] T. Sj¨ostrand et al., Comp. Phys. Comm. 135, 238 (2001). 13 [8] G. Marchesini et al., Comp. Phys. Comm. 67, 465 (1992). [9] S. Chekanov et al., preprint arXiv:0802.3955; S. Chekanov et al., Nucl. Phys. B 786 (2007) 152; F. D. Aaron et al., Eur. Phys. J. C 54 (2008) 389. [10] V. M. Abazov et al., Phys. Lett. B666 (2008) 435. [11] D. Acosta et al., Phys. Rev. D71 (2005) 112002. [12] A. Abulencia et al., preprint arXiv:0806.1699. [13] V. M. Abazov et al., Phys. Lett. B666 (2008) 23; T. Aaltonen et al., Phys. Rev. Lett. 100 (2008) 091803. [14] A. Abulencia et al., Phys. Rev. D74 (2008) 032008; see http://www-cdf.fnal.gov/physics/new/qcd/QCD.html. [15] L.N. Lipatov, Sov. J. Nucl. Phys. 23 (1976) 338; E.A. Kuraev, L.N. Lipatov and V.S. Fadin, Sov. Phys. JETP 45 (1977) 199; I.I. Balitsky and L.N. Lipatov, Sov. J. Nucl. Phys. 28 (1978) 822. [16] A. Aktas et al., Eur. Phys. J. C37 (2004) 141. [17] G. Altarelli and G. Parisi, Nucl. Phys. B126 18C (1977) 298; V.N. Gribov and L.N. Lipatov, Sov. J. Nucl. Phys. (1972) 438 and 675; Yu.L. Dokshitzer, Sov. Phys. JETP 46 (1977) 641. 
[18] A. Aktas et al., Eur. Phys. J. C46 (2006) 27; S. Chekanov et al., Phys. Lett. B632 (2006) 13; F. D. Aaron et al., Eur. Phys. J. C 54 (2008) 389; S. Chekanov et al., Eur. Phys. J. C 52 (2007) 515.
[19] O. Kepka, C. Marquet, R. Peschanski, C. Royon, Eur. Phys. J. C55 (2008) 259; Phys. Lett. B655 (2007) 236; C. Marquet, R. Peschanski, C. Royon, Phys. Lett. B599 (2004) 236; C. Marquet, C. Royon, Nucl. Phys. B739 (2006) 131; J.G. Contreras, R. Peschanski, C. Royon, Phys. Rev. D62 (2000) 034006.
[20] A.H. Mueller and H. Navelet, Nucl. Phys. B282 (1987) 727; C. Marquet, C. Royon, Azimuthal decorrelation of Mueller-Navelet jets at the Tevatron and the LHC, preprint arXiv:0704.3409; A. Sabio Vera, F. Schwennsen, Nucl. Phys. B776 (2007) 170.
[21] T. Aaltonen et al., Phys. Rev. D77 (2008) 052004; O. Kepka, C. Royon, Phys. Rev. D 76 (2007) 032012; Phys. Rev. D 78 (2008) 073005.
ai_researcher
6
Leveraging_Large_Language_Models_for_Enhancing_Literature-Based_Discovery.pdf
AI LITERATURE REVIEW SUITE

David A. Tovar
Department of Psychology
Vanderbilt University
Nashville, TN
[email protected]

ABSTRACT

The process of conducting literature reviews is often time-consuming and labor-intensive. To streamline this process, I present an AI Literature Review Suite that integrates several functionalities to provide a comprehensive literature review. This tool leverages the power of open access science, large language models (LLMs) and natural language processing to enable the searching, downloading, and organizing of PDF files, as well as extracting content from articles. Semantic search queries are used for data retrieval, while text embeddings and summarization using LLMs present succinct literature reviews. Interaction with PDFs is enhanced through a user-friendly graphical user interface (GUI). The suite also features integrated programs for bibliographic organization, interaction and query, and literature review summaries. This tool presents a robust solution to automate and optimize the process of literature review in academic and industrial research.

Keywords Literature Review · Artificial Intelligence · Text Embeddings · Large Language Models

1 Introduction

In academic and industry research, literature reviews serve as the cornerstone of extensive comprehension and exploration of any given topic. Traditional manual processes of literature review are, however, characterized by time-consuming and labor-intensive tasks. These tasks often include sifting through volumes of academic papers, manually downloading and organizing relevant ones, reading and summarizing these papers, and finally, synthesizing the information into a cohesive narrative. The sheer magnitude of academic papers published daily and the complexity of most research topics compound this issue further. Consequently, there has been a growing need for more efficient tools that can automate and streamline the literature review process, thereby enabling researchers to focus more on knowledge synthesis and less on the logistical aspects of conducting literature reviews.

To address this need, I introduce the AI Literature Review Suite, a comprehensive suite of integrated programs for conducting literature reviews efficiently and accurately. This tool capitalizes on advancements in machine learning and natural language processing to automate several tasks involved in the literature review process. It includes features such as searching, downloading, and organizing PDF files, extracting content from articles, performing semantic search queries, summarizing literature reviews, and providing a user-friendly interface for interacting with PDFs. By automating these tasks, the tool greatly reduces the amount of time and effort required to conduct comprehensive literature reviews.

The suite comprises several integrated programs, each designed to perform specific tasks. The PDF Search program, for instance, interacts with the CORE API to search for and download scholarly articles based on user-provided parameters. The PDF Extraction program serves as a bibliographic tool that leverages the CORE API [1] and the CrossRef RESTful API [2] to download and organize articles along with their references and citations based on DOIs.
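As a concrete, hedged illustration of this DOI-based lookup, the sketch below queries the public CrossRef REST API for the references a work cites; the endpoint and field names follow CrossRef's documented schema, the DOI is a placeholder, and error handling is elided.

    # Fetch the reference list CrossRef stores for a given DOI.
    import requests

    def fetch_references(doi: str) -> list[dict]:
        resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=30)
        resp.raise_for_status()  # a placeholder DOI will fail here
        # Entries may carry 'DOI', 'article-title', 'author', 'year', ...
        return resp.json()["message"].get("reference", [])

    for ref in fetch_references("10.1000/placeholder-doi")[:5]:
        print(ref.get("DOI", "no DOI"), "-", ref.get("article-title", ""))

Each returned entry can then be matched against open-access sources such as the CORE database to retrieve a PDF, as described below.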
The PDF Chat program borrows from a previous solution [3], but adds a number of features, including a graphical user interface (GUI) for interaction, the ability to ask specific and general questions, and saving the conversation in a Word document for future reference. The Literature Review Table program is an automatic literature review tool that processes multiple PDFs, performs semantic search queries, and generates comprehensive responses. The Literature Synthesis program uses semantic embeddings [4, 5] and large language models [6, 7] to automatically create detailed summaries from multiple text entries.

Figure 1: AI Literature Review Suite Schematic and Modules

This paper aims to provide a detailed overview of the AI Literature Review Suite, including its design, functionalities, and potential applications in academic and industrial research. I also discuss the integrated programs that form the backbone of the tool, their features, and how they work together to streamline the literature review process. The primary objective is to illustrate how this tool can enhance efficiency and quality in conducting literature reviews, ultimately catalyzing knowledge discovery in various research fields.

2 Results

The architecture of the AI Literature Review Suite is underpinned by a principle of modularity, designed with the intention of offering researchers the flexibility to choose how much or how little of the tool they wish to use. The suite is structured into three main modules: "Knowledge Gathering," "Knowledge Extraction," and "Knowledge Synthesis." Each of these modules encapsulates a fundamental aspect of the literature review process, and together, they present a holistic approach to conducting literature reviews.

The "Knowledge Gathering" module provides functionalities that facilitate the sourcing and organization of relevant academic papers. Tools like PDF Search and PDF Extraction are incorporated in this module, allowing researchers to search, download, and neatly organize articles based on specified parameters. Researchers can leverage these tools to build a comprehensive repository of relevant literature effortlessly. If a researcher wishes only to use the suite for these tasks, they are entirely at liberty to do so.

The next module, "Knowledge Extraction," aids in the extraction and processing of content from the gathered articles. Here, the PDF Chat tool comes into play, offering functionalities like text extraction and semantic search for enhanced interaction with the academic papers. Researchers can ask document-specific questions and receive accurate answers, aiding in a thorough understanding of the papers.

The final module, "Knowledge Synthesis," creates concise summaries of the content extracted from the articles and synthesizes a cohesive narrative that encapsulates the central theme of the literature review. This module essentially aids in transforming the raw, extracted information into a consumable format, easing the process of knowledge assimilation.

Researchers have the flexibility to navigate these modules either interactively or in an automated fashion, choosing to manually guide the process for more control or letting the suite's robust automation handle the process end-to-end; a minimal sketch of the automated path follows below.
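The following runnable sketch only illustrates that end-to-end flow; the three functions are hypothetical, stubbed stand-ins for the suite's modules (which are GUI-driven), not its actual API.

    # Hypothetical stand-ins for the three modules; bodies are stubbed.
    def gather_pdfs(query: str) -> list[str]:
        # Knowledge Gathering: search and download (stub)
        return [f"{query}_{i}.pdf" for i in range(3)]

    def extract_summaries(pdfs: list[str]) -> list[str]:
        # Knowledge Extraction: per-article summaries (stub)
        return [f"summary of {p}" for p in pdfs]

    def synthesize(summaries: list[str]) -> str:
        # Knowledge Synthesis: merge summaries into one narrative (stub)
        return " ".join(summaries)

    print(synthesize(extract_summaries(gather_pdfs("multisensory integration"))))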
This versatility in usage, coupled with the suite's ability to run on consumer laptops and open-source Python [8] software, positions the AI Literature Review Suite as a significant aid to researchers, assisting in conducting effective and efficient literature reviews.

Figure 2: Graphic user interface with selections for each module

2.1 Graphic User Interface

The Graphic User Interface (GUI) is designed to facilitate user interaction with the suite and offer a streamlined user experience. The interface is implemented using a modern, cross-platform framework in Python that supports Windows, macOS, and Linux. The GUI is divided into different sections corresponding to each integrated program or module, each equipped with its own set of controls and visualizations to guide researchers through the process; these are described in the sections below.

2.2 PDF Search

The PDF Search module, a key component of the AI Literature Review Suite, leverages the capabilities of the CORE API [1] to access a wide array of open-access articles, including those hosted on individual lab websites. This ensures an expansive literature search, enhancing the chances of retrieving all pertinent literature on a given topic. The module allows researchers to designate specific search parameters, such as topics, titles, authors, and publication years, promoting a tailored literature retrieval process.

Figure 3: GUI for PDF search using CORE Database

Upon retrieval, articles are systematically stored in a user-specified folder with the citation in APA format, facilitating subsequent referencing. Concurrently, URL links are preserved in separate text files, pre-formatted for insertion as hyperlinks, optimizing accessibility. The module also documents articles not found in the CORE database in a separate text file with their authors, titles, and abstracts. This mechanism ensures that all potential information sources are accounted for in the search process.

2.3 PDF Extraction

The PDF Extraction module offers a focused approach to literature extraction from selected PDFs. This module utilizes the CrossRef API [2] and the literature scanner (LISC) Python package [9] to extract metadata, references, and citations from a PDF, thereby identifying valuable resources for a comprehensive literature review. The module then searches the CORE API to acquire open-access PDFs that align with the extracted metadata. The PDFs are then saved in a user-specified directory, with citations presented in APA format, to streamline future referencing tasks. The module has an additional feature of categorizing and segregating citations and references into a subfolder. This classification enhances the accessibility and readability of the extracted data, which simplifies subsequent literature analysis tasks.

2.4 PDF Chat

The PDF Chat module is an interactive tool designed to facilitate querying any selected PDF using a large language model, such as GPT [7] or LLaMA [6]. The module allows researchers to inquire about the main message or request that the main results be presented in a numbered list format. This targeted questioning permits a precise extraction of the crux of a study, significantly augmenting the understanding of complex academic texts. A standout feature of this module is its flexibility: researchers can choose to question as many or as few PDFs as needed, tailored to their individual research requirements.
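A hedged sketch of the retrieval step that grounds such answers in the selected PDF follows; the embedding model named here is an illustrative choice (the suite cites the Universal Sentence Encoder [5]), and any sentence-embedding model works the same way.

    # Embed PDF chunks once; each question retrieves the nearest chunks,
    # which are then passed to the LLM as context.
    import numpy as np
    from sentence_transformers import SentenceTransformer

    model = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative model choice
    chunks = ["The study reports ...", "Methods: participants ...", "We conclude ..."]
    chunk_vecs = model.encode(chunks, normalize_embeddings=True)

    def retrieve(question: str, top_k: int = 2) -> list[str]:
        q = model.encode([question], normalize_embeddings=True)[0]
        scores = chunk_vecs @ q  # cosine similarity on unit vectors
        return [chunks[i] for i in np.argsort(-scores)[:top_k]]

    print(retrieve("What are the main results?"))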
Importantly, the questions are limited to the context of the selected PDF, captured through the semantic embedding models [3, 5], ensuring relevant and precise responses and mitigating the issue of hallucinations, a common concern with AI language models [7, 6]. However, the module still allows for the exploration of information outside the PDF. If researchers desire additional context or a broader understanding, they can ask general questions in the same dialogue. Lastly, the PDF Chat module records the entire conversation and stores it as a Word document. This feature allows researchers to refer back to the extracted information and the line of questioning, providing a valuable reference for further research analysis.

Figure 4: Example chat with PDF with specific and general questions

2.5 Literature Table

The Literature Table module represents a practical solution to efficiently manage and summarize a large volume of academic articles. It facilitates the creation of an organized Excel table from a folder of selected PDFs, where each row corresponds to an individual article. The table consists of columns representing key elements of an academic article. These include the APA in-text citation, providing a ready-to-use reference for future scholarly work, and summaries of the Introduction, Methods, and Results sections of each article. The module also offers customization by allowing researchers to pose their own questions, which replace the default queries for the introduction, methods, or results summaries. This feature enables targeted literature analysis, letting researchers quickly access the specific information they need. These summaries are generated by a synergistic use of a semantic embedding model [5] and a large language model [7, 6], ensuring coherent, meaningful, and concise representations of the original text. Furthermore, in the case of subfolders, the Literature Table module creates separate Excel sheets for each subfolder within the main Excel file, ensuring a neatly organized output that mirrors the original file structure.

2.6 Literature Clusters

The Literature Clusters module serves to intelligently categorize and group academic articles based on their content. This module processes each row of the previously created Literature Table, feeding it into a semantic embedding model [5]. It then uses K-nearest neighbors (K = 5) to group the articles into clusters [10, 11, 12], each typically containing five closely related works; a sketch of this grouping follows below. This approach ensures that the clusters represent meaningful groupings within the literature, facilitating a deeper understanding of the nuanced areas of study within a broader research field. Upon completion of the clustering process, the module generates a Word document that groups the Excel rows according to their respective clusters. This document maintains the original Excel text, ensuring the preservation of the rich information extracted in the Literature Table module. This methodology provides researchers with a clear and organized representation of the literature landscape, helping them identify patterns, trends, and areas for further exploration.

Figure 5: Example rows from literature table
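A minimal sketch of that neighbor-based grouping, assuming each Literature Table row has already been embedded as a vector; the greedy assignment below is one plausible reading of the module's description, not its exact code.

    # Group each unvisited row together with its nearest neighbors,
    # yielding clusters of up to five related works (K = 5).
    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    rng = np.random.default_rng(0)
    row_vecs = rng.normal(size=(20, 512))  # placeholder row embeddings

    nn = NearestNeighbors(n_neighbors=5, metric="cosine").fit(row_vecs)
    _, idx = nn.kneighbors(row_vecs)  # each row's 5 nearest rows (incl. itself)

    clusters, seen = [], set()
    for i, neighbors in enumerate(idx):
        if i in seen:
            continue
        group = [int(j) for j in neighbors if j not in seen]
        seen.update(group)
        clusters.append(group)
    print(clusters)  # each entry: up to five related row indices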
2.7 Literature Synthesis

The Literature Synthesis module uses a large language model to distill, compare, and contrast the grouped works obtained from the Literature Clusters module, creating synthesized paragraphs that capture key themes, similarities, and differences. Each section and sentence within the summaries has the appropriate APA citations that the user can refer back to. For this part of the suite, it uses the advanced capabilities of GPT-4 [7] to generate coherent, meaningful, and detailed syntheses of clustered literature. However, one of the key features of this module is its flexibility and adaptability. The system is designed to be modular and can easily be integrated with future open-source models, ensuring that it remains at the forefront of AI advancements and continues to provide high-quality literature synthesis.

3 Discussion

At a time when information is being generated at an unprecedented pace, the AI Literature Review Suite can have considerable impact. It particularly shines in rapidly evolving domains such as medicine, science, and engineering, where keeping up with the latest research findings is crucial. For medical practitioners, scientists, and engineers, this tool can substantially expedite the process of assimilating the latest research. It reduces the time and effort required for conducting comprehensive literature reviews, ensuring that vital findings are not overlooked. In medicine, it can directly impact patient care by enabling clinicians and policy makers to make more informed decisions. In the realms of science and engineering, it accelerates the iterative cycle of hypothesis generation, testing, and refinement, thereby spurring innovation and progress.

The suite is also designed to adapt and evolve with the fast-paced growth of artificial intelligence. It can integrate with different large language models, making it an adaptable tool in the dynamic landscape of AI research and application. This ensures its continued relevance and utility across various fields.

However, while this tool offers many benefits, it is essential to view it as a first step in acquainting oneself with a topic rather than the definitive source of information. While the risk of losing some information in the text embedding process is mitigated through methods like different initialization seeds, critical matters always warrant a careful inspection of the literature. In summary, the AI Literature Review Suite stands as a potent ally for researchers, enhancing the efficiency and quality of scholarly endeavors while promoting accelerated innovation and progress.

GitHub

The latest release is available on GitHub: AI Literature Review Suite

Acknowledgments

Thank you to Ian Erkelens and Mark Wallace for feedback on early versions of the AI Literature Review Suite.

References

[1] CORE API. https://core.ac.uk/services/api.
[2] Crossref. https://www.crossref.org/documentation/retrieve-metadata/rest-api/.
[3] pdfGPT. https://github.com/bhaskatripathi/pdfGPT.
[4] Yinfei Yang, Daniel Cer, Amin Ahmad, Mandy Guo, Jax Law, Noah Constant, Gustavo Hernandez Abrego, Steve Yuan, Chris Tar, Yun-Hsuan Sung, Brian Strope, and Ray Kurzweil. Multilingual universal sentence encoder for semantic retrieval. arXiv, (arXiv:1907.04307), 2019.
[5] Daniel Cer, Yinfei Yang, Sheng-yi Kong, Nan Hua, Nicole Limtiaco, Rhomni St John, Noah Constant, Mario Guajardo-Cespedes, Steve Yuan, Chris Tar, Yun-Hsuan Sung, Brian Strope, and Ray Kurzweil. Universal sentence encoder. arXiv, (arXiv:1803.11175), 2018.
[6] Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, and Thomas Scialom. Llama 2: Open foundation and fine-tuned chat models. arXiv, (arXiv:2307.09288), 2023.
[7] OpenAI. GPT-4 technical report. arXiv, (arXiv:2303.08774), 2023.
[8] Guido Van Rossum and Fred L. Drake. Python 3.9.5 Documentation. Python Software Foundation, https://docs.python.org/3/, 2021.
[9] Thomas Donoghue. LISC: A Python package for scientific literature collection and analysis. Journal of Open Source Software, 4(41):1674, 2018.
[10] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825–2830, 2011.
[11] Charles R. Harris, K. Jarrod Millman, Stéfan J. van der Walt, Ralf Gommers, Pauli Virtanen, David Cournapeau, Eric Wieser, Julian Taylor, Sebastian Berg, Nathaniel J. Smith, et al. Array programming with NumPy. Nature, 585(7825):357–362, 2020.
[12] Jeff Reback, Wes McKinney, jbrockmendel, Joris Van den Bossche, Tom Augspurger, Phillip Cloud, Simon Hawkins, gfyoung, Sinhrks, Adam Klein, Brock Petersen, Matthew Roeschke, Jeremy Tratner, Chang She, William Ayd, Shlomi Naveh, Mark Garcia, Vytautas Jancauskas, Kai Dong, Jason Schendel, Andrew Hayden, Ben Pardee, Faisal Aish, Tom Horrocks, et al. pandas-dev/pandas: Pandas, 2020.
ai_researcher
1
Not_All_Metrics_Are_Guilty_Improving_NLG_Evaluation_by_Diversifying_References.pdf
BLEU might be Guilty but References are not Innocent

Markus Freitag, David Grangier, Isaac Caswell
Google Research
{freitag,grangier,icaswell}@google.com

Abstract

The quality of automatic metrics for machine translation has been increasingly called into question, especially for high-quality systems. This paper demonstrates that, while choice of metric is important, the nature of the references is also critical. We study different methods to collect references and compare their value in automated evaluation by reporting correlation with human evaluation for a variety of systems and metrics. Motivated by the finding that typical references exhibit poor diversity, concentrating around translationese language, we develop a paraphrasing task for linguists to perform on existing reference translations, which counteracts this bias. Our method yields higher correlation with human judgment not only for the submissions of WMT 2019 English→German, but also for back-translation and APE augmented MT output, which have been shown to have low correlation with automatic metrics using standard references. We demonstrate that our methodology improves correlation with all modern evaluation metrics we look at, including embedding-based methods. To complete this picture, we reveal that multi-reference BLEU does not improve the correlation for high quality output, and present an alternative multi-reference formulation that is more effective.

1 Introduction

Machine Translation (MT) quality has greatly improved in recent years (Bahdanau et al., 2015; Gehring et al., 2017; Vaswani et al., 2017). This progress has cast doubt on the reliability of automated metrics, especially in the high accuracy regime. For instance, the WMT English→German evaluation in the last two years had a different top system when looking at automated or human evaluation (Bojar et al., 2018; Barrault et al., 2019). Such discrepancies have also been observed in the past, especially when comparing rule-based and statistical systems (Bojar et al., 2016b; Koehn and Monz, 2006; Callison-Burch et al., 2006).

Automated evaluations are however of crucial importance, especially for system development. Most decisions for architecture selection, hyperparameter search and data filtering rely on automated evaluation at a pace and scale that would not be sustainable with human evaluations. Automated evaluation (Koehn, 2010; Papineni et al., 2002) typically relies on two crucial ingredients: a metric and a reference translation. Metrics generally measure the quality of a translation by assessing the overlap between the system output and the reference translation. Different overlap metrics have been proposed, aiming to improve correlation between human and automated evaluations. Such metrics range from n-gram matching, e.g. BLEU (Papineni et al., 2002), to accounting for synonyms, e.g. METEOR (Banerjee and Lavie, 2005), to considering distributed word representations, e.g. BERTScore (Zhang et al., 2019). Orthogonal to metric quality (Ma et al., 2019), reference quality is also essential in improving correlation between human and automated evaluation.

This work studies how different reference collection methods impact the reliability of automated evaluation.
It also highlights that the reference sentences typically collected with current (human) translation methodology are biased to assign higher automatic scores to MT output that shares a similar style with the reference. Human translators tend to generate translations which exhibit translationese language, i.e. sentences with source artifacts (Koppel and Ordan, 2011). This is problematic because collecting only a single style of references fails to reward systems that might produce alternative but equally accurate translations (Popović, 2019). Because of this lack of diversity, multi-reference evaluations like multi-reference BLEU are also biased to prefer that specific style of translation.

As a better solution, we show that paraphrasing translations, when done carefully, can improve the quality of automated evaluations more broadly. Paraphrased translations increase diversity and steer evaluation away from rewarding translation artifacts. Experiments with the official submissions of WMT 2019 English→German for a variety of different metrics demonstrate the increased correlation with human judgement. Further, we run additional experiments for MT systems that are known to have low correlation with automatic metrics calculated with standard references. In particular, we investigated MT systems augmented with either back-translation or automatic post-editing (APE). We show that paraphrased references overcome the problems of automatic metrics and produce the same ordering as human ratings.

Our contributions are four-fold: (i) We collect different types of references on the same test set and show that it is possible to report strong correlation between automated and human evaluation, even for high accuracy systems. (ii) We gather more natural and diverse valid translations by collecting human paraphrases of reference translations. We show that (human) paraphrases correlate well with human judgments when used as references in automatic evaluations. (iii) We present an alternative multi-reference formulation that is more effective than multi-reference BLEU for high quality output. (iv) We release a rich set of diverse references (https://github.com/google/wmt19-paraphrased-references) to encourage research in systems producing other types of translations, and reward a wider range of generated language.

2 Related Work

Evaluation of machine translation is of crucial importance for system development and deployment decisions (Moorkens et al., 2018). Human evaluation typically reports adequacy of translations, often complemented with fluency scores (White, 1994; Graham et al., 2013). Evaluation by human raters can be conducted through system comparisons, rankings (Bojar et al., 2016a), or absolute judgments, i.e. direct assessments (Graham et al., 2013). Absolute judgments allow one to efficiently compare a large number of systems. The evaluation of translations as isolated sentences, full paragraphs or documents is also an important factor in the cost/quality trade-offs (Carpuat and Simard, 2012). Isolated sentence evaluation is generally more efficient but fails to penalize contextual mistakes (Tu et al., 2018; Hardmeier et al., 2015).
A wide variety of metrics has been proposed, and automated metrics is still an active area of re- search. BLEU (Papineni et al., 2002) is the most common metric. It measures the geometric aver- age of the precision over hypothesis n-grams with an additional penalty to discourage short transla- tions. NIST (Doddington, 2002) is similar but considers up-weighting rare, informative n-grams. TER (Snover et al., 2006) measures an edit dis- tance, as a way to estimate the amount of work to post-edit the hypothesis into the reference. ME- TEOR (Banerjee and Lavie, 2005) suggested re- warding n-gram beyond exact matches, considering synonyms. Others are proposing to use contextu- alized word embeddings, like BERTscore (Zhang et al., 2019). Rewarding multiple alternative for- mulations is also the primary motivation behind multiple-reference based evaluation (Nießen et al., 2000). Dreyer and Marcu (2012) introduced an annotation tool and process that can be used to cre- ate meaning-equivalent networks that encode an exponential number of translations for a given sen- tence. Orthogonal to the number of references, the quality of the reference translations is also essen- tial to the reliability of automated evaluation (Zbib et al., 2013). This topic itself raises the question of human translation assessment, which is beyond the scope of this paper (Moorkens et al., 2018). Meta-evaluation studies the correlation be- tween human assessments and automatic evalua- tions (Callison-Burch et al., 2006, 2008; Callison- Burch, 2009). Indeed, automatic evaluation is use- ful only if it rewards hypotheses perceived as fluent and adequate by a human. Interestingly, previous work (Bojar et al., 2016a) has shown that a higher correlation can be achieved when comparing sim- ilar systems than when comparing different types of systems, e.g. phrase-based vs neural vs rule- based. In particular, rule-based systems can be pe- nalized as they produce less common translations, even when such translations are fluent and adequate. Similarly, recent benchmark results comparing neu- ral systems on high resource languages (Bojar et al., 2018; Barrault et al., 2019) have shown mismatches between the systems with highest BLEU score and the systems faring the best in human evaluations. Freitag et al. (2019); Edunov et al. (2019) study this mismatch in the context of systems trained with back-translation (Sennrich et al., 2016) and noisy back-translation (Edunov et al., 2018). They observe that systems training with or without back- translation (BT) can reach a similar level of overlap (BLEU) with the reference, but hypotheses from BT systems are more fluent, both measured by hu- mans and by a language model (LM). They suggest considering LM scores in addition to BLEU. Freitag et al. (2019); Edunov et al. (2019) point at translationese as a major source of mismatch be- tween BLEU and human evaluation. Translationese refers to artifacts from the source language present in the translations, i.e. human translations are often less fluent than natural target sentences due to word order and lexical choices influenced by the source language (Koppel and Ordan, 2011). The impact of translationese on evaluation has recently received attention (Toral et al., 2018; Zhang and Toral, 2019; Graham et al., 2019). 
In the present work, we are specifically concerned that the presence of transla- tionese in the references might cause overlap-based metrics to reward hypotheses with translationese language more than hypotheses using more natural language. The question of bias to a specific refer- ence has also been raised in the case of monolingual human evaluation (Fomicheva and Specia, 2016; Ma et al., 2017). The impact of translationese in test sets is related to but different from the impact of translationese in the training data (Kurokawa et al., 2009; Lembersky et al., 2012; Bogoychev and Sennrich, 2019; Riley et al., 2019). In this work, we explore collecting a single refer- ence translation, using human paraphrases to steer away as much as possible from biases in the ref- erence translation that affect the automatic met- rics to prefer MT output with the same style (e.g. translationese). Automatic methods to extract para- phrase n-grams (Zhou et al., 2006) or full sentence paraphrases (Kauchak and Barzilay, 2006; Bawden et al., 2020; Thompson and Post, 2020) have been used to consider multiple references. In contrast, we generate a single unbiased reference translation generated by humans instead of trying to cover a wider space of possible translations. In contrast to human paraphrasing (our instructions asked for most diverse paraphrases), automatic paraphrasing are still far from perfect (Roy and Grangier, 2019) and mostly generate local changes that do not steer away from biases as e.g. introducing different sen- tence structures. 3 Collecting High Quality and Diverse References We acquired two types of new reference transla- tions: first, we asked a professional translation ser- vice to provide an additional reference translation. Second, we used the same service to paraphrase ex- isting references, asking a different set of linguists. 3.1 Additional Standard References We asked a professional translation service to cre- ate additional high quality references to measure the effect of different reference translations. The work was equally shared by 10 professional lin- guists. The use of CAT tools (dictionaries, trans- lation memory, MT) was specifically disallowed, and the translation service employed a tool to dis- able copying from the source field and pasting anything into the target field. The translations were produced by experienced linguists who are native speakers in the target language. The original WMT English→German newstest2019 reference translations have been generated in sequence while keeping an 1-1 alignment between sentences. This should help the linguists to use some kind of docu- ment context. We instead shuffled the sentences to also get translations from different linguists within a document and avoid systematic biases within a document. The collection of additional references not only may yield better references, but also al- lows us to conduct various types of multi-reference evaluation. In addition of applying multi-reference BLEU, it also allows us to select the most adequate option among the alternative references for each sentence, composing a higher quality set. 3.2 Diversified Paraphrased References The product of human translation is assumed to be ontologically different from natural texts (Kop- pel and Ordan, 2011) and is therefore often called translationese (Gellerstam, 1986). Translationese includes the effects of interference, the process by which the source language leaves distinct marks in the translation, e.g. 
word order, sentence structure (monotonic translation) or lexical choices. It also often brings simplification (Laviosa, 1997), as the translator might impoverish the message, the language, or both. The troubling implication is that a reference set of translationese sentences is biased to assign higher word-overlap scores to MT output that shares a similar translationese style, and penalizes MT output with more natural targets (Freitag et al., 2019). Collecting a different type of reference could uncover alternative high quality systems producing different styles of outputs.

We explore collecting diverse references using paraphrasing to steer away from translationese, with the ultimate goal of generating a natural-to-natural test set, where neither the source sentences nor the reference sentences contain translationese artifacts. In an initial experiment on a sample of 100 sentences, we asked linguists to paraphrase (translated) sentences. The paraphrased references had only minor changes and consequently only minor impact on the automatic metrics. Therefore, we changed the instructions and asked linguists to paraphrase the sentence as much as possible, while also suggesting the use of synonyms and different sentence structures. The paraphrase instructions are shown in Figure 1. These instructions satisfy not only our goal of generating an unbiased sentence, but also have the side effect that two paraphrases of the same sentence are quite different. All our paraphrase experiments in this paper are done with these instructions. One might be concerned that paraphrasing "as much as possible" might yield excessive reformulation at the expense of adequacy in some cases. To compensate for this in the present paper, we collect adequacy ratings for all produced paraphrases. These ratings allow us to select the most adequate paraphrase from among available alternatives for the same sentence, which results in a composite high-quality paraphrase set with strong adequacy ratings (see Table 2). A paraphrase example is given in Table 1.

Task: Paraphrase the sentence as much as possible: To paraphrase a source, you have to rewrite a sentence without changing the meaning of the original sentence.
1. Read the sentence several times to fully understand the meaning
2. Note down key concepts
3. Write your version of the text without looking at the original
4. Compare your paraphrased text with the original and make minor adjustments to phrases that remain too similar
Please try to change as much as you can without changing the meaning of the original sentence. Some suggestions:
1. Start your first sentence at a different point from that of the original source (if possible)
2. Use as many synonyms as possible
3. Change the sentence structure (if possible)

Figure 1: Instructions used to paraphrase an existing translation as much as possible.

Source: The Bells of St. Martin's Fall Silent as Churches in Harlem Struggle.
Translation: Die Glocken von St. Martin verstummen, da Kirchen in Harlem Probleme haben.
Paraphrase: Die Probleme in Harlems Kirchen lassen die Glocken von St. Martin verstummen.
Paraphrase: Die Kirchen in Harlem kämpfen mit Problemen, und so läuten die Glocken von St. Martin nicht mehr.

Table 1: Reference examples of a typical translation and two different paraphrases of this translation. The paraphrases are not only very different from the source sentence (e.g. sentence structure), but also differ a lot when compared to each other.
Even without speaking any German, one can easily see that the paraphrases have a different sentence structure than the source sentence, and that the two paraphrases are quite different from each other.

4 Experimental Set-up

4.1 Data and Models

We use the official submissions of the WMT 2019 English→German news translation task (Barrault et al., 2019) to measure automatic scores for different kinds of references. We then report correlations with the WMT human ratings from the same evaluation campaign. We chose English→German as this track had the most submissions and the outputs with the highest adequacy ratings.

4.2 Human Evaluation

We use the same direct assessment template as was used in the WMT 2019 evaluation campaign. Human raters are asked to assess a given translation by how adequately it expresses the meaning of the corresponding source sentence on an absolute 0-100 rating scale. We acquire 3 ratings per sentence and take the average as the final sentence score. In contrast to WMT, we do not normalize the scores, and report the average absolute ratings.

5 Experiments

We generate three additional references for the WMT 2019 English→German news translation task. In addition to acquiring an additional reference (AR), we also asked linguists to paraphrase the existing WMT reference and the AR reference (see Section 3 for details). We refer to these paraphrases as WMT.p and AR.p.

5.1 Human Evaluation of References

It is often believed that the most accurate translations should also yield the highest correlation with human ratings when used as references for an automatic metric. For that reason, we ran a human evaluation (Section 4.2) for all reference translations to test this hypothesis (Table 2). While all reference translations yield high scores, the paraphrased references are rated as slightly less accurate. We suspect that this may at least in part be an artifact of the rating methodology. Specifically, translations whose word order matches that of the source (i.e. translationese) are easier to rate than translations that use very different sentence structures and phrasing than the source sentence. We generated our paraphrased reference translations with the instruction to modify the translations as much as possible. Therefore, the non-translationese, perhaps more natural, nature of the paraphrased translations makes it more demanding to assign an accurate rating.

As a by-product of these ratings, we consider selecting the best-rated references among alternatives for each sentence. Representing this method of combining reference sets with the HQ() function, we generate 3 new reference sets. These are (a) HQ(WMT, AR), abbreviated as HQ(R); (b) HQ(WMT.p, AR.p), abbreviated as HQ(P); and (c) HQ(WMT, AR, AR.p, WMT.p), abbreviated as HQ(all 4). Interestingly, the combined paraphrased reference HQ(P) has a higher human rating than WMT or AR alone.

Reference               adequacy rating
WMT                     85.3
WMT.p                   81.8
AR                      86.7
AR.p                    80.8
HQ(R) [WMT+AR]          92.8
HQ(P) [WMT.p+AR.p]      89.1
HQ(all 4) [all 4]       95.3

Table 2: Human adequacy assessments for different kinds of references, over the full set of 1997 sentences.

5.2 Correlation with Human Judgement

Table 3 provides the system-level rank correlations (Spearman's ρ and Kendall's τ, computed with the scipy implementations) of BLEU (calculated with sacreBLEU (Post, 2018), signature BLEU+case.mixed+lang.en-de+numrefs.1+smooth.exp+test.wmt19+tok.intl+version.1.4.2) evaluating translations of newstest2019 for different references.
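A minimal sketch of this evaluation loop with placeholder data follows: corpus BLEU per system via sacreBLEU, then rank correlation of metric scores against human adequacy scores with scipy.

    from sacrebleu.metrics import BLEU
    from scipy.stats import kendalltau, spearmanr

    systems = {
        "sysA": ["Der Hund bellt laut .", "Heute regnet es ."],
        "sysB": ["Der Hund bellt .", "Es regnet ."],
        "sysC": ["Ein Hund .", "Regen heute ."],
    }
    refs = [["Der Hund bellt laut .", "Heute regnet es stark ."]]  # one reference stream
    human = {"sysA": 86.0, "sysB": 84.0, "sysC": 71.5}  # adequacy ratings

    bleu = BLEU()
    metric = {name: bleu.corpus_score(out, refs).score for name, out in systems.items()}

    names = sorted(systems)
    x, y = [metric[n] for n in names], [human[n] for n in names]
    tau, _ = kendalltau(x, y)
    rho, _ = spearmanr(x, y)
    print(metric, tau, rho)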
On the full set of 22 submissions, all 3 new references (AR, WMT.p, AR.p) show higher correlation with human judgment than the original WMT reference, with the paraphrased reference WMT.p coming out on top. Furthermore, each paraphrased reference set shows higher correlation when compared to the reference set that it was paraphrased from.

Full Set (22)
Reference                     ρ       τ
single ref
  WMT                         0.88    0.72
  AR                          0.89    0.76
  WMT.p                       0.91    0.79
  AR.p                        0.89    0.77
single ref (combined)
  HQ(R)                       0.91    0.78
  HQ(P)                       0.91    0.78
  HQ(all 4)                   0.91    0.79
multi ref
  AR+WMT                      0.90    0.75
  AR.p+WMT.p                  0.90    0.79
  all 4                       0.90    0.75

Table 3: Spearman's ρ and Kendall's τ for the WMT 2019 English→German official submissions with human ratings conducted by the WMT organizers.

Although the combined reference HQ(R) (Section 5.1) improves correlation when compared to the non-paraphrased reference sets (WMT and AR), none of the three combined references HQ(R), HQ(P), HQ(all 4) shows higher correlation than the paraphrased reference set WMT.p. This result casts doubt on the belief that references rated as more adequate necessarily yield more reliable automated scores.

We further find that multi-reference BLEU (calculated with sacreBLEU) does not exhibit better correlation with human judgments than either single-reference BLEU or the composed reference sets HQ(x). It is generally assumed that multi-reference BLEU yields higher correlation with human judgements due to the increased diversity in the reference translations. However, combining two translated reference sets that likely share the same systematic translationese biases still rewards translationese translations. Interestingly, multi-reference BLEU with multiple paraphrases also does not show higher correlation than single-reference BLEU. Combining all 4 references with multi-reference BLEU shows the same correlation numbers as the combination of AR+WMT. As we will see later, the BLEU scores calculated with paraphrased references are much lower than those calculated with standard references. They have fewer n-gram matches, which are mostly only a subset of the n-gram matches of the standard references. Adding paraphrased references to a mix of standard references therefore has a small effect on the total number of n-gram matches, and as a consequence the scores are not much affected.

Note that the correlation numbers already appear relatively high for the full set of systems. This is because both Kendall's τ and Spearman's ρ rank correlation operate over all possible pairs of systems. Since the submissions to WMT 2019 covered a wide range of translation qualities, any metric able to distinguish the highest-scoring and lowest-scoring systems will already have a high correlation. Therefore, small numeric increases as demonstrated in Table 3 can correspond to much larger improvements in the local ranking of systems. As a consequence, we looked deeper into the correlation for the subset of systems that performed best in human evaluation, where correlation for metrics calculated on the standard reference is known to break down. Kendall's τ rank correlation as a function of the top k systems can be seen in Figure 2.
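The top-k analysis behind Figure 2 can be sketched as follows; the scores below are placeholders in the style of Table 4, not the full WMT data.

    # Restrict to the k systems rated best by humans, then recompute
    # Kendall's tau between metric scores and human scores.
    from scipy.stats import kendalltau

    def topk_tau(metric: dict, human: dict, k: int) -> float:
        top = sorted(human, key=human.get, reverse=True)[:k]
        tau, _ = kendalltau([metric[s] for s in top], [human[s] for s in top])
        return tau

    human = {"A": 0.35, "B": 0.31, "C": 0.30, "D": 0.21, "E": 0.19}
    bleu = {"A": 43.6, "B": 44.8, "C": 44.8, "D": 46.0, "E": 42.4}
    for k in range(2, 6):
        print(k, topk_tau(bleu, human, k))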
During the WMT 2019 metrics task (Ma et al., 2019), all official submissions (using the original WMT reference) had low correlation scores with human ratings. The paraphrased references improve correlation especially for high quality system output, and every paraphrased reference set (dotted line) outperforms its corresponding unparaphrased set (same-color solid line).

Figure 2: Kendall's τ correlation of BLEU for the best k systems (based on human ratings).

These improvements in ranking can be seen in Table 4, which reports the actual BLEU scores of the top seven submissions with four different references. Since we asked humans to paraphrase the WMT reference as much as possible (Section 3) to get very different sentences, the paraphrased BLEU scores are much lower than what one expects for a high-quality system. Nevertheless, the system outputs are better ranked and show the highest correlation of any references explored in this paper.

System     human    WMT     HQ(R)   WMT.p   HQ(P)
FB         0.347    43.6    42.3    15.1    15.0
Micr.sd    0.311    44.8    42.1    14.9    14.9
Micr.dl    0.296    44.8    42.2    14.9    14.9
MSRA       0.214    46.0    42.1    14.2    14.1
UCAM       0.213    44.1    40.4    14.2    14.2
NEU        0.208    44.6    40.8    14.0    14.1
MLLP       0.189    42.4    38.3    13.3    13.4

Table 4: BLEU scores of the best submissions of WMT 2019 English→German.

5.3 Alternative Metrics

Any reference-based metric can be used with our new reference translations. In addition to BLEU, we consider TER (Snover et al., 2006), METEOR (Banerjee and Lavie, 2005), chrF (Popović, 2015), the f-score variant of BERTScore (Zhang et al., 2019), and Yisi-1 (Lo, 2019), the winning system of the WMT 2019 English→German metrics task. Table 5 compares these metrics. As we saw in Figure 2, the paraphrased version of each reference set yields higher correlation with human evaluation across all evaluated metrics than the corresponding original references, with the only exception of TER for HQ(P). Comparing the two paraphrased references, we see that HQ(P) shows higher correlation for chrF and Yisi when compared to WMT.p. In particular Yisi (which is based on word embeddings) seems to benefit from the higher accuracy of the reference translation.
We run experiments with a variety of models that have been shown that their actual quality scores have low correlation with automatic metrics. In particular, we focus on back- translation (Sennrich et al., 2016) and Automatic Post Editing (APE, Freitag et al. (2019)) augmented systems trained on WMT 2014 English→German. All these systems have in common that they gen- erate less translationese output, and thus BLEU with translationese references under-estimate their quality. The experiment in this section follows the setup described in Freitag et al. (2019). We run adequacy evaluation on WMT newstest 2019 for the 3 systems, as described in Section 4.2. Both the APE and the BT models, which use addi- tional target-side monolingual data, are rated higher by humans than the system relying only on bitext. Table 7 summarizes the BLEU scores for our differ- ent reference translations. All references generated with human translations (WMT, HQ(R) and HQ(all 4)) show negative correlation with human ratings for these extreme cases and produce the wrong order. On the other hand, all references that rely purely on paraphrased references do produce the correct ranking of these three systems. This further suggests that reference translations based on hu- man translations bias the metrics to generate higher scores for translationese outputs. By paraphras- ing the reference translations, we undo this bias, and the metric can measure the true quality of the underlying systems with greater accuracy. Reference human WMT WMT.p HQ(R) HQ(p) HQ(all 4) bitext APE 86.1 84.5 34.6 39.4 12.7 12.5 32.1 35.0 12.8 12.4 25.8 27.2 BT 87.8 37.9 12.9 34.9 13.0 27.5 correct? (cid:51) (cid:55) (cid:51) (cid:55) (cid:51) (cid:55) Table 7: BLEU scores for WMT newstest 2019 English→German for MT systems trained on bitext, augmented with BT or using APE as text naturalizer. The correct column indicates if the model ranking agrees with human judgments. This finding, that existing reference translation methodology may systematically bias against mod- elling techniques known to improve human-judged quality, raises the question of whether previous re- search has incorrectly discarded approaches that actually improved the quality of MT. Releasing all reference translations gives the community a chance to revisit some of their decisions and mea- sure quality differences for high quality systems. 7 Characterizing Paraphrases • Wheeling , West Virginia → 3 times (Wheel- 7.1 Alignment One typical characteristic of translationese is that humans prefer to translate a sentence phrase-by- phrase instead of coming up with a different sen- tence structure, resulting in ‘monotonic’ transla- tions. To measure the monotonicity of the different reference translations, we compute an alignment with fast-align (Dyer et al., 2013) on the WMT 2014 English-German parallel data and compare the alignments of all four references. Table 8 sum- marizes the average absolute distance of two align- ment points for each reference. The paraphrased translations are less monotonic and use a different sentence structure than a pure human translation. WMT AR WMT.p AR.p 6.88 5.17 5.27 6.43 Table 8: Average absolute distance per alignment point, as a proxy for word-by-word (‘monotonic’) translation. Lower scores indicate more monotonic translation. 7.2 Matched n-grams The actual BLEU scores calculated with the para- phrased references are much lower compared to BLEU scores calculated with standard references (Table 4). 
Nevertheless, the paraphrased refer- ences show higher correlation with human judg- ment, which motivates us to investigate which n- grams of the MT output are actually matching the paraphrased references during BLEU calcula- tion. The n-grams responsible for the most overlap with standard references are generic, common n- grams. In the winning submission of the WMT 2019 English→German evaluation campaign from Facebook, the 4grams with the highest number of matches are: • , sagte er . → 28 times (, he said.) • “ , sagte er → 14 times (” , he said) • f ¨ugte hinzu , dass → 8 times (added that) These matches are crucial to reach high > 40 BLEU scores, and appear in translation when using the same sentence structure as the source sentence. On the other hand, the n-grams overlapping with the paraphrased references show a different pic- ture. They usually reward n-grams that express the semantic meaning of the sentence. The 4-grams with the highest number of matches with the para- phrased references for the same system are: ing , West Virginia) • von Christine Blasey Ford → 3 times (from Christine Blasey Ford) • Erdbeben der St¨arke 7,5 → 3 times (7.5 magnitude earthquake) 8 Conclusions This work presents a study on the impact of refer- ence quality on the reliability of automated evalua- tion of machine translation. We consider collecting additional human translations as well as generat- ing more diverse and natural references through paraphrasing. We observe that the paraphrased references result in more reliable automated evalua- tions, i.e. stronger correlation with human eval- uation for the submissions of the WMT 2019 English→German evaluation campaign. These findings are confirmed across a wide range of auto- mated metrics, including BLEU, chrF, METEOR, BERTScore and Yisi. We further demonstrate that the paraphrased references correlate especially well for the top submissions of WMT, and additionally are able to correctly distinguish baselines from sys- tems known to produce more natural output (those augmented with either BT or APE), whose qual- ity tends to be underestimated by references with translationese artifacts. We explore two different approaches to multi- reference evaluation: (a) standard multi-reference BLEU, and (b) selecting the best-rated references for each sentence. Contrary to conventional wis- dom, we find that multi-reference BLEU does not exhibit better correlation with human judgments than single-reference BLEU. Combining two stan- dard reference translations by selecting the best rated reference, on the other hand, did increase correlation for the standard reference translations. Nevertheless, the combined paraphrasing refer- ences are of higher quality for all techniques when compared to the standard reference counter part. We suggest using a single paraphrased reference for more reliable automatic evaluation going for- ward. Although a combined paraphrased reference shows slightly higher correlation for embedding based metrics, it is over twice as expensive to con- struct such a reference set. To drive this point home, our experiments suggest that standard reference translations may systematically bias against mod- elling techniques known to improve human-judged quality, raising the question of whether previous research has incorrectly discarded approaches that actually improved the quality of MT. 
Releasing all reference translations gives the community a chance to revisit some of their decisions and mea- sure quality differences for high quality systems and modelling techniques that produce more natu- ral or fluent output. As a closing note, we would like to empha- size that it is more difficult for a human rater to rate a paraphrased translation than a translationese sentence, because the latter may share a similar structure and lexical choice to the source. We sus- pect that human evaluation is also less reliable for complex translations. Future work, can investigate whether finer ratings could correct the bias in favor of lower effort ratings, and how this may interact with document-level evaluation. References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2015. Neural Machine Translation by Jointly In 3rd Inter- Learning to Align and Translate. national Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings. Satanjeev Banerjee and Alon Lavie. 2005. METEOR: An Automatic Metric for MT Evaluation with Im- proved Correlation with Human Judgments. In Pro- ceedings of the ACL workshop on intrinsic and ex- trinsic evaluation measures for machine translation and/or summarization, pages 65–72. Lo¨ıc Barrault, Ondˇrej Bojar, Marta R. Costa-juss`a, Christian Federmann, Mark Fishel, Yvette Gra- ham, Barry Haddow, Matthias Huck, Philipp Koehn, Shervin Malmasi, Christof Monz, Mathias M¨uller, Santanu Pal, Matt Post, and Marcos Zampieri. 2019. Findings of the 2019 Conference on Machine Trans- lation (WMT19). In Proceedings of the Fourth Con- ference on Machine Translation (Volume 2: Shared Task Papers, Day 1), pages 1–61, Florence, Italy. As- sociation for Computational Linguistics. Rachel Bawden, Biao Zhang, Lisa Yankovskaya, An- dre T¨attar, and Matt Post. 2020. Explicit represen- tation of the translation space: Automatic paraphras- ing for machine translation evaluation. Nikolay Bogoychev and Rico Sennrich. 2019. Do- main, Translationese and Noise in Synthetic Data arXiv preprint for Neural Machine Translation. arXiv:1911.03362. Ondˇrej Bojar, Rajen Chatterjee, Christian Federmann, Yvette Graham, Barry Haddow, Matthias Huck, An- tonio Jimeno Yepes, Philipp Koehn, Varvara Lo- gacheva, Christof Monz, Matteo Negri, Aur´elie N´ev´eol, Mariana Neves, Martin Popel, Matt Post, Raphael Rubino, Carolina Scarton, Lucia Spe- cia, Marco Turchi, Karin Verspoor, and Marcos Zampieri. 2016a. Findings of the 2016 Confer- ence on Machine Translation. In Proceedings of the First Conference on Machine Translation: Volume 2, Shared Task Papers, pages 131–198, Berlin, Ger- many. Association for Computational Linguistics. Ondˇrej Bojar, Christian Federmann, Mark Fishel, Yvette Graham, Barry Haddow, Philipp Koehn, and Christof Monz. 2018. Findings of the 2018 Con- ference on Machine Translation (WMT18). In Pro- ceedings of the Third Conference on Machine Trans- lation: Shared Task Papers, pages 272–303, Bel- gium, Brussels. Association for Computational Lin- guistics. Ondˇrej Bojar, Christian Federmann, Barry Haddow, Philipp Koehn, Matt Post, and Lucia Specia. 2016b. Ten Years of WMT Evaluation Campaigns: Lessons Learnt. Translation Evaluation: From Fragmented Tools and Data Sets to an Integrated Ecosystem, page 27. Chris Callison-Burch. 2009. Fast, Cheap, and Creative: Evaluating Translation Quality Using Amazon’s Me- In Proceedings of the 2009 Con- chanical Turk. 
Conference on Empirical Methods in Natural Language Processing, pages 286–295, Singapore. Association for Computational Linguistics.

Chris Callison-Burch, Cameron Fordyce, Philipp Koehn, Christof Monz, and Josh Schroeder. 2008. Further Meta-evaluation of Machine Translation. In Proceedings of the Third Workshop on Statistical Machine Translation, StatMT '08, pages 70–106, Stroudsburg, PA, USA. Association for Computational Linguistics.

Chris Callison-Burch, Miles Osborne, and Philipp Koehn. 2006. Re-evaluating the Role of Bleu in Machine Translation Research. In 11th Conference of the European Chapter of the Association for Computational Linguistics, Trento, Italy. Association for Computational Linguistics.

Marine Carpuat and Michel Simard. 2012. The Trouble with SMT Consistency. In Proceedings of the Seventh Workshop on Statistical Machine Translation, pages 442–449. Association for Computational Linguistics.

George Doddington. 2002. Automatic evaluation of machine translation quality using n-gram co-occurrence statistics. In Proceedings of the Second International Conference on Human Language Technology Research, HLT '02, pages 138–145, San Francisco, CA, USA. Morgan Kaufmann Publishers Inc.

Markus Dreyer and Daniel Marcu. 2012. HyTER: Meaning-equivalent semantics for translation evaluation. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 162–171. Association for Computational Linguistics.

Chris Dyer, Victor Chahuneau, and Noah A. Smith. 2013. A Simple, Fast, and Effective Reparameterization of IBM Model 2. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 644–648.

David Kauchak and Regina Barzilay. 2006. Paraphrasing for automatic evaluation. In Proceedings of the Human Language Technology Conference of the NAACL, Main Conference, pages 455–462, New York City, USA. Association for Computational Linguistics.

Sergey Edunov, Myle Ott, Michael Auli, and David Grangier. 2018. Understanding Back-Translation at Scale. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 489–500, Brussels, Belgium. Association for Computational Linguistics.

Sergey Edunov, Myle Ott, Marc'Aurelio Ranzato, and Michael Auli. 2019. On The Evaluation of Machine Translation Systems Trained With Back-Translation. arXiv preprint arXiv:1908.05204.

Marina Fomicheva and Lucia Specia. 2016. Reference Bias in Monolingual Machine Translation Evaluation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 77–82, Berlin, Germany. Association for Computational Linguistics.

Markus Freitag, Isaac Caswell, and Scott Roy. 2019. APE at Scale and Its Implications on MT Evaluation Biases. In Proceedings of the Fourth Conference on Machine Translation, pages 34–44, Florence, Italy. Association for Computational Linguistics.

Jonas Gehring, Michael Auli, David Grangier, and Yann N. Dauphin. 2017. A Convolutional Encoder Model for Neural Machine Translation.
In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017, Vancouver, Canada, July 30 - August 4, Volume 1: Long Papers, pages 123–135.

Martin Gellerstam. 1986. Translationese in Swedish novels translated from English. In Lars Wollin and Hans Lindquist, editors, Translation Studies in Scandinavia, pages 88–95. CWK Gleerup.

Yvette Graham, Timothy Baldwin, Alistair Moffat, and Justin Zobel. 2013. Continuous Measurement Scales in Human Evaluation of Machine Translation. In Proceedings of the 7th Linguistic Annotation Workshop and Interoperability with Discourse, pages 33–41, Sofia, Bulgaria. Association for Computational Linguistics.

Yvette Graham, Barry Haddow, and Philipp Koehn. 2019. Translationese in Machine Translation Evaluation. CoRR, abs/1906.09833.

Christian Hardmeier, Preslav Nakov, Sara Stymne, Jörg Tiedemann, Yannick Versley, and Mauro Cettolo. 2015. Pronoun-focused MT and cross-lingual pronoun prediction: Findings of the 2015 DiscoMT shared task on pronoun translation. In Proceedings of the Second Workshop on Discourse in Machine Translation, pages 1–16, Lisbon, Portugal. Association for Computational Linguistics.

Philipp Koehn. 2010. Statistical Machine Translation. Cambridge University Press.

Philipp Koehn and Christof Monz. 2006. Manual and automatic evaluation of machine translation between European languages. In Proceedings on the Workshop on Statistical Machine Translation, pages 102–121, New York City. Association for Computational Linguistics.

Moshe Koppel and Noam Ordan. 2011. Translationese and its dialects. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies - Volume 1, HLT '11, pages 1318–1326, Stroudsburg, PA, USA. Association for Computational Linguistics.

David Kurokawa, Cyril Goutte, and Pierre Isabelle. 2009. Automatic detection of translated text and its impact on machine translation. In Proceedings of MT-Summit XII, pages 81–88.

Sara Laviosa. 1997. How comparable can 'comparable corpora' be? Target. International Journal of Translation Studies, 9(2):289–319.

Gennadi Lembersky, Noam Ordan, and Shuly Wintner. 2012. Adapting translation models to translationese improves SMT. In Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics, EACL '12, pages 255–265, Stroudsburg, PA, USA. Association for Computational Linguistics.

Chi-kiu Lo. 2019. YiSi - a unified semantic MT quality evaluation and estimation metric for languages with different levels of available resources. In Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1), pages 507–513.

Qingsong Ma, Yvette Graham, Timothy Baldwin, and Qun Liu. 2017. Further investigation into reference bias in monolingual evaluation of machine translation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2476–2485, Copenhagen, Denmark. Association for Computational Linguistics.

Qingsong Ma, Johnny Wei, Ondřej Bojar, and Yvette Graham. 2019. Results of the WMT19 metrics shared task: Segment-level and strong MT systems pose big challenges. In Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1), pages 62–90, Florence, Italy. Association for Computational Linguistics.

Antonio Toral, Sheila Castilho, Ke Hu, and Andy Way. 2018. Attaining the unattainable? Reassessing claims of human parity in neural machine translation. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 113–123, Belgium, Brussels. Association for Computational Linguistics.

Zhaopeng Tu, Yang Liu, Shuming Shi, and Tong Zhang. 2018.
Learning to remember translation history with a continuous cache. Transactions of the Association for Computational Linguistics, 6:407–420.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention Is All You Need. In Advances in Neural Information Processing Systems, pages 5998–6008.

John S. White. 1994. The ARPA MT Evaluation Methodologies: Evolution, Lessons, and Further Approaches. In Proceedings of the 1994 Conference of the Association for Machine Translation in the Americas, pages 193–205.

Rabih Zbib, Gretchen Markiewicz, Spyros Matsoukas, Richard Schwartz, and John Makhoul. 2013. Systematic comparison of professional and crowdsourced reference translations for machine translation. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 612–616, Atlanta, Georgia. Association for Computational Linguistics.

Mike Zhang and Antonio Toral. 2019. The effect of translationese in machine translation test sets. CoRR, abs/1906.08069.

Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. 2019. BERTScore: Evaluating text generation with BERT. arXiv, 1904.09675.

Liang Zhou, Chin-Yew Lin, and Eduard Hovy. 2006. Re-evaluating Machine Translation Results with Paraphrase Support. In Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing, pages 77–84, Sydney, Australia. Association for Computational Linguistics.

Joss Moorkens, Sheila Castilho, Federico Gaspari, and Stephen Doherty. 2018. Translation Quality Assessment: From Principles to Practice. Springer.

Sonja Nießen, Franz Josef Och, Gregor Leusch, and Hermann Ney. 2000. An evaluation tool for machine translation: Fast evaluation for MT research. In Proceedings of the Second International Conference on Language Resources and Evaluation (LREC'00), Athens, Greece. European Language Resources Association (ELRA).

Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: a Method for Automatic Evaluation of Machine Translation. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, pages 311–318. Association for Computational Linguistics.

Maja Popović. 2015. chrF: character n-gram F-score for automatic MT evaluation. In Proceedings of the Tenth Workshop on Statistical Machine Translation, pages 392–395.

Maja Popović. 2019. On reducing translation shifts in translations intended for MT evaluation. In Proceedings of Machine Translation Summit XVII Volume 2: Translator, Project and User Tracks, pages 80–87.

Matt Post. 2018. A call for clarity in reporting BLEU scores. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 186–191, Belgium, Brussels. Association for Computational Linguistics.

Parker Riley, Isaac Caswell, Markus Freitag, and David Grangier. 2019. Translationese as a language in "multilingual" NMT.

Aurko Roy and David Grangier. 2019. Unsupervised paraphrasing without translation. In ACL (1), pages 6033–6039. Association for Computational Linguistics.

Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Improving neural machine translation models with monolingual data. In Proceedings of the
54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 86–96, Berlin, Germany. Association for Computational Linguistics.

Matthew Snover, Bonnie Dorr, Richard Schwartz, Linnea Micciulla, and John Makhoul. 2006. A study of translation edit rate with targeted human annotation. In Proceedings of the Association for Machine Translation in the Americas, pages 223–231.

Brian Thompson and Matt Post. 2020. Automatic machine translation evaluation in many languages via zero-shot paraphrasing. arXiv preprint arXiv:2004.14564.
ai_researcher
1
Live__Real-Time_Platform_for_Robot_Design_Interfaces.pdf
Can parallel lives provide a solution to Hardy's paradox?

İnanç Şahin
Department of Physics, Faculty of Sciences, Ankara University, Ankara, Turkey
[email protected]

Abstract: Parallel lives is a model which provides an interpretation of quantum theory that is both local and realistic. This model assumes that all quantum fields are composed of point beings called "lives". Lives interact locally and have a memory of their previous interactions. The reduction of the state vector is not included in this model: lives can be divided into different worlds. This feature resembles the many worlds interpretation. However, in the parallel lives model the division of lives into different worlds takes place locally. The parallel lives model is expected to be compatible with special relativity, as the lives propagate at a speed that does not exceed the speed of light and interact locally. On the other hand, it is open to paradoxes based on counterfactual propositions, as it provides a realistic interpretation of quantum theory. In this paper, we confront the parallel lives model with the paradox proposed by Hardy [1]. We show that the parallel lives model cannot overcome the dilemma in Hardy's paradox. We discuss the implications of this confrontation for the special theory of relativity, and speculate a solution that, we believe, fits the spirit of the parallel lives model.

Keywords: Parallel lives model, many worlds interpretation, quantum theory, relativity

I. INTRODUCTION

Parallel lives (PL) is an ontological model that was first proposed by Brassard and Raymond-Robichaud [2, 3] in order to provide a local and realistic interpretation of quantum theory (QT). The details of the PL model have been developed in Ref. [4]. According to PL, all quantum fields are composed of point beings called "lives" moving on continuous world-lines with a speed bounded by the speed of light [4]. Lives can only interact locally, when their world-lines coincide. However, not all lives whose world-lines coincide interact with one another. Lives have a memory of their previous interactions, and this memory determines which lives they will interact with. Lives that do not interact are invisible to each other. We can say that these invisible "lives" are living in different worlds. The network of internal interactions of a very large collection of lives forms a macroscopic system. If a live is hidden relative to one of the lives that make up a macroscopic system, it should also be hidden relative to the other lives in that macroscopic system. (Footnote 1) Thus, it is possible to have macroscopic systems that live in parallel and are hidden relative to each other. This feature recalls the many worlds interpretation [5, 6]. However, in the many worlds interpretation the entire universe splits into copies, while in PL, lives locally split into relative worlds. When the state vector of a system is reduced to one of the orthogonal terms in it, the lives that make up that system split locally into different relative worlds. Therefore, there is no reduction of the state vector in the PL model; each orthogonal term in the superposition lives in parallel in space-time. For instance, let's consider an EPR-type experiment with two spin-1/2 particles in the singlet state, $|0,0\rangle = \frac{1}{\sqrt{2}}\left[\,|\uparrow,\downarrow\rangle - |\downarrow,\uparrow\rangle\,\right]$. Let A and B be spacelike-separated macroscopic observer systems carrying Stern-Gerlach apparatuses.

Footnote 1: Here we should note that not all lives in a macroscopic system need to interact with each other, but they must be part of the same network of interactions. The interaction waves propagating through the macroscopic system form a network of interactions, and the memory of a distant live is shared in this way.
After the spins become entangled in the singlet state at the midpoint between A and B, one moves to A and the other to B. Then, the lives of the spins and of the observers A and B split into relative worlds. In one world the spin is up and the observer measures spin-up, and in the other world the spin is down and the observer measures spin-down. If $A_\uparrow$ ($A_\downarrow$) represents observer A measuring spin-up (spin-down), then the lives of $A_\uparrow$ and $A_\downarrow$ can only interact with the lives of $B_\downarrow$ and $B_\uparrow$, respectively. Similarly, $A_\uparrow$ and $A_\downarrow$ are hidden with respect to $B_\uparrow$ and $B_\downarrow$, respectively. Therefore, we say that $A_\uparrow$ and $B_\downarrow$ can interact, but $A_\uparrow$ and $B_\downarrow$ are living in a world parallel to the world of $A_\downarrow$ and $B_\uparrow$.

It is often thought that Bell's theorem rules out local realistic interpretations of QT. In fact, Bell's theorem rules out local hidden variable theories, not local realistic interpretations of QT [2, 3, 7]. However, this issue is subtle and a detailed review is required. In local hidden variable theories, the result of a measurement is given as a function of hidden variables and locally defined adjustable apparatus parameters [7, 8]. It is also assumed that experimenters have the free will to adjust the apparatus parameters [9]. (Footnote 2) Let us denote the measurement result by the function $R(\lambda, a)$, where $\lambda$ and $a$ represent the hidden variables and apparatus parameters, respectively. The existence of the function $R(\lambda, a)$ tells us that when the values of the parameters $\lambda$ and $a$ are given, the measurement result is uniquely determined. We will call this property determinism. In the PL model, different possible outcomes of a measurement, and the observers observing these results, can live in parallel in different relative worlds. Thus, reality depends on which relative world we live in; there is no single concept of reality. Due to this multiple-reality concept, some authors give up using conventional realism [10]. On the other hand, PL assumes ontological reality, according to which the measurement results corresponding to orthogonal terms in the superposition exist in different relative worlds prior to measurement. This view is different from the Copenhagen interpretation, where the ontological reality of the wave function is denied. PL can provide deterministic rules for the behaviors of the lives [10]. If we consider the whole collection of worlds of lives living in parallel, then PL gives a deterministic model. On the other hand, each individual observer living in parallel in space-time experiences indeterminism. For example, the observer A performing a spin measurement (see the EPR example above) can find herself in the relative world of $A_\uparrow$ or $A_\downarrow$ after the measurement. But she does not know in advance which relative world she will be in. Since the observers cannot know in advance which one among several possible outcomes will actually occur, the process generated by the rules of PL is completely indeterministic according to the observers. Therefore, the measurement results cannot be given as a deterministic function predicted by a local hidden variable theory. In the language of the free will theorem of Conway and Kochen [9], the response of the universe to the measurement is not a function of the information accessible to the particle.

Footnote 2: Otherwise, we cannot eliminate the superdeterminism option.
The universe makes a free decision in the neighborhood of the particle, and this decision determines in which relative world the observer lives. (Footnote 3) Consequently, the locality and reality (Footnote 4) features of the PL model do not conflict with Bell's theorem.

Footnote 3: According to the weak anthropic principle, the observer is in one of the relative worlds just because she observes the measurement result in that relative world.
Footnote 4: Unless otherwise stated, reality will be used in the sense of ontological reality.

On the other hand, as demonstrated in several studies in the literature, realistic interpretations of QT are inconsistent with the special theory of relativity [1, 11, 12]. We should note that their arguments are based on counterfactual reasoning. When we consider the results of actual measurements, we do not encounter paradoxes [13]. Nevertheless, if we have a realistic model where the wave function, or say the probability distribution of the possible outcomes, exists prior to the measurement, then counterfactual propositions become somewhat legitimate [14]. Therefore, any model that claims to provide a realistic interpretation of QT must be confronted with counterfactual paradoxes. In this context, we confront the PL model with the second paradox in Hardy's paper [1]. As we will see, in PL some counterfactual propositions become part of the reality in various alternative worlds. This has interesting implications for the theory of relativity, which we will examine.

II. REVISITING HARDY'S PARADOX

In 1992 Hardy [1] proposed a gedankenexperiment consisting of two Mach-Zehnder interferometers, one for positrons and one for electrons (Fig. 1). The experiment is designed so that the u+ and u− paths of these two Mach-Zehnder interferometers overlap. If the positron and electron take the u+ and u− paths, then they will meet at P and annihilate one another. Pair annihilation is expressed in Hardy's notation as

$|u^+\rangle\,|u^-\rangle \to |\gamma\rangle.$   (1)

Using the experimental setup shown in Fig. 1, Hardy first demonstrated an inequality-free version of Bell's theorem. Secondly, Hardy demonstrated that if the "elements of reality" corresponding to Lorentz-invariant observables are themselves Lorentz invariant, then realistic interpretations of quantum mechanics are incompatible with the special theory of relativity. For the purpose of this paper we will concentrate on his second result. The summary of the reasoning that led him to this conclusion is as follows. Consider three different reference frames: the LAB, S+ and S− frames of reference. In the LAB frame, the measurements on the electron and positron are simultaneous. The relative velocities of the S+ and S− frames with respect to the LAB frame are arranged so that these measurements are not simultaneous with respect to S+ and S−. According to the S+ frame, the measurement on the positron occurs before the electron arrives at BS2−, and according to the S− frame, the measurement on the electron occurs before the positron arrives at BS2+. Let's denote the initial electron-positron state by $|e^-\rangle|e^+\rangle$. After the particles pass point P, but before they reach BS2±, the initial state evolves to

$|e^-\rangle\,|e^+\rangle \;\to\; \frac{1}{2}\left(-|\gamma\rangle + i\,|u^+\rangle|v^-\rangle + i\,|v^+\rangle|u^-\rangle + |v^+\rangle|v^-\rangle\right).$   (2)

Since this state is orthogonal to $|u^+\rangle|u^-\rangle$, according to an observer in the LAB frame the positron and electron cannot take the u+ and u− paths simultaneously. The beam splitters BS2± perform the following transformations:

$|u^\pm\rangle \to \frac{1}{\sqrt{2}}\left(i\,|d^\pm\rangle + |c^\pm\rangle\right), \qquad |v^\pm\rangle \to \frac{1}{\sqrt{2}}\left(i\,|c^\pm\rangle + |d^\pm\rangle\right).$   (3)
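As a quick check of the detection probability quoted next (a short derivation sketch following directly from equations (2) and (3) above), the amplitude of the doubly-detected term $|d^+\rangle|d^-\rangle$ is

\[
\langle d^+ d^- | \Psi \rangle \;=\; \frac{1}{2}\left[\, i\cdot\frac{i}{2} \;+\; i\cdot\frac{i}{2} \;+\; \frac{1}{2} \,\right] \;=\; -\frac{1}{4},
\qquad
P(D^+{=}1,\; D^-{=}1) \;=\; \left|-\frac{1}{4}\right|^{2} \;=\; \frac{1}{16},
\]

where the three contributions come from the $i\,|u^+\rangle|v^-\rangle$, $i\,|v^+\rangle|u^-\rangle$ and $|v^+\rangle|v^-\rangle$ terms of (2), respectively.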
Using equations (2) and (3), we see that the state vector reduces to $|d^+\rangle|d^-\rangle$ with a probability of 1/16. Hence, in 1/16 of the experiments both the D+ and D− detectors receive signals. Now, let's examine the same experiment according to observers in the S− and S+ frames. According to S−, when the electron passes through BS2− but the positron has not yet reached BS2+, the following state is obtained:

$\frac{1}{2}\left(-|\gamma\rangle - \frac{1}{\sqrt{2}}\,|u^+\rangle|c^-\rangle + \frac{i}{\sqrt{2}}\,|u^+\rangle|d^-\rangle + i\sqrt{2}\,|v^+\rangle|c^-\rangle\right).$   (4)

Here we use (2) and the transformations for $|u^-\rangle$ and $|v^-\rangle$ in (3). When the electron is detected in D−, the state vector is reduced to

$|u^+\rangle\,|d^-\rangle.$   (5)

Then, the observer in the S− frame infers that the positron takes the u+ path. On the other hand, according to S+, when the positron passes through BS2+ but the electron has not yet reached BS2−, the following state is obtained:

$\frac{1}{2}\left(-|\gamma\rangle - \frac{1}{\sqrt{2}}\,|c^+\rangle|u^-\rangle + \frac{i}{\sqrt{2}}\,|d^+\rangle|u^-\rangle + i\sqrt{2}\,|c^+\rangle|v^-\rangle\right).$   (6)

Here we use (2) and the transformations for $|u^+\rangle$ and $|v^+\rangle$ in (3). When the positron is detected in D+, the state vector is reduced to

$|d^+\rangle\,|u^-\rangle.$   (7)

Then, the observer in the S+ frame infers that the electron takes the u− path. (Footnote 5)

Footnote 5: Here we should note that the inferences of observers in the LAB, S+ and S− frames about the particle trajectories (u+ and u−) are counterfactual. They don't make measurements to determine the real paths, but they infer these results from the D+ and D− detections via counterfactual reasoning.

Hardy used EPR's [15] "element of reality" criterion. If a system is in an eigenstate of an operator corresponding to an observable, then we can predict with certainty the result of a measurement of this observable. Therefore, according to EPR's reality criterion, the value of this observable (which is the eigenvalue of the observable corresponding to the system eigenstate) is an element of reality even if the measurement is not performed. We can define the operators $U^\pm = |u^\pm\rangle\langle u^\pm|$. Since the vectors $|u^+\rangle$ and $|u^-\rangle$ are eigenvectors of $U^\pm$, there exist elements of reality associated with the paths u+ and u−. However, as we have shown, the reference frames S+ and S− infer that the electron and positron take the paths u− and u+, respectively. If the elements of reality corresponding to Lorentz-invariant observables are themselves Lorentz invariant, then these inferences must be true for all inertial frames. On the contrary, as shown previously, this is not true for the LAB frame. To summarize very briefly, what Hardy did is that he associated counterfactuals about particle paths with elements of reality. Then, he showed that the elements of reality corresponding to these paths are not Lorentz invariant. As stated in his article, Hardy's result can be applied to any realistic interpretation of QT which assumes that particles have real trajectories. In the PL model, lives move on real trajectories in space-time. Therefore, the confrontation of the PL model with Hardy's paradox can have important consequences.

Before examining Hardy's paradox in the PL model, let's examine the lives of a single photon on a beam splitter and in a Mach-Zehnder interferometer. In Fig. 2 we show a single photon on a 50-50 beam splitter. The incident photon can either be transmitted along path (1) or reflected along path (2). Each path has a 50% probability. Assume that an observer performs a measurement using photon detectors to determine the path along which the photon moves. This measurement causes an entanglement between
the photon paths and the measurement apparatus:

$|\psi\rangle = \frac{1}{\sqrt{2}}\,|1_\gamma\rangle\,|1_m\rangle + \frac{1}{\sqrt{2}}\,|2_\gamma\rangle\,|2_m\rangle$   (8)

where $|1_\gamma\rangle$ represents the photon state in path (1) and $|1_m\rangle$ represents the state of the measurement apparatus measuring a photon in path (1). Similar definitions hold for $|2_\gamma\rangle$ and $|2_m\rangle$. Furthermore, we can also say that the observer is entangled with the photon paths. By looking at the result of the measurement, the observer can decide to behave in one way or another. For instance, assume that if the photon takes path (1), then the observer will have lunch. On the other hand, if the photon takes path (2), then she will be on a diet. Thus, we can write

$|\psi\rangle = \frac{1}{\sqrt{2}}\,|1_\gamma\rangle\,|1_o\rangle + \frac{1}{\sqrt{2}}\,|2_\gamma\rangle\,|2_o\rangle.$   (9)

Here, the subscript "o" denotes the observer. The description of the experiment within the PL model can be given as follows. The lives of the incident photon are divided into two groups of lives living in the same world. One of them takes path (1) and the other takes path (2). When the lives of the photons moving on paths (1) and (2) meet the detectors, the lives of each detector, and subsequently the lives of the measurement apparatus and the observer, are divided into two different worlds. In one world D1 detects a signal but D2 does not detect any signal; in the other world D1 does not detect any signal but D2 detects a signal. Consequently, in one world the observer measures a photon moving on path (1), and in the other world she measures a photon moving on path (2). These two worlds are hidden with respect to each other.

Now, let's consider a single photon in a Mach-Zehnder interferometer (Fig. 3). Due to destructive interference, the D2 detector does not detect any signal. Therefore, in this case the photon paths are not entangled with the measurement apparatus or the observer. Hence, the lives of the measurement apparatus and the observer are not divided into relative worlds. When the initial photon passes through the first beam splitter, its lives are divided into two groups of lives, one going through path (1) and the other going through path (2). These two groups of lives moving on paths (1) and (2) exist in the same world. At the second beam splitter they interact with each other and produce the usual interference effects.

Finally, let's try to examine Hardy's paradox in the framework of the PL model. In the LAB frame of reference, both of the particles reach the second beam splitters simultaneously.
Hence, the following entangled state is obtained: γ > 1 2 | i 2√2 | − + C − = 0; D− = 0 > | 1 2√2 | − u+ > c− > | | C − = 1; D− = 0 > u+ > d− > | | C − = 0; D− = 1 > + i √2 | v+ > c− > | | C − = 1; D− = 0 > (10) where, | C − = 0, 1; D− = 0, 1 > is the state of the measurement apparatus; 1 represents detection of a particle and 0 represents a null value (no detection). Consequently, lives of the observer and experimental apparatus split into four different worlds, corresponding to orthogonal terms in the superposition (10). Since we restrict ourselves to the situation where D detectors detect signal, we consider the relative world of S− described by the third term in (10). In this relative world, lives of the positron take u+ path and lives moving on paths u− and v+ are hidden. In Fig.5 we show the lives of the experimental apparatus observed in the S− frame. On the other hand, according to an observer in S+ frame of reference, the electron has not yet reached BS2− as soon as the positron passes through BS2+. At this instant, the system is described by the state given in (6). Within a very short time, positron 6 This is evident from equation (2), but it is also conceivable from pair annihilation process at point P . If the particles take paths u+ and u−, then pair annihilation occur. In this case, the positron and electron turn into two photon and do not leave any signal in the detectors D+, D−, C+, C−. If we have additional photon detectors, we can capture photon signals from pair annihilation. However, since we restrict ourselves to the situation where both D+ and D− detectors detect signals, there should be no pair annihilation in the world of the LAB frame. 8 can reach C + and D+. Hence, the following entangled state is obtained: γ > 1 2 | i 2√2 | − + C + = 0; D+ = 0 > | 1 2√2 | − u− > c+ > | | C + = 1; D+ = 0 > u− > d+ > | | C + = 0; D+ = 1 > + i √2 | v− > c+ > | | C + = 1; D+ = 0 > . (11) In the relative world of S+ described by the third term in (11), lives of the electron take u− path and lives moving on paths u+ and v− are hidden. The lives of the experimental apparatus observed in the S+ frame is given in Fig.6. To summarize, the lives of particles in the worlds of different reference frames are different from each other. The lives moving on path u+ are part of the world of the S− frame, but not part of the worlds of the S+ and LAB frames. Similarly, the lives moving on path u− are part of the world of the S+ frame, but not part of the worlds of the S− and LAB frames. However, we should note that actually the lives were there all along. The only thing that changes from one frame of reference to another is whether lives of the particles interact or not with the apparatus. As we have discussed in the introduction, noninteracting lives are hidden, and the observer cannot experience them in her world. The fact that different reference frames live parallel to each other in different worlds seems to fit the logic of PL at first sight. However as we will see, there is a problem we have to overcome. The observer in each reference frame observes not only the experimental apparatus but also the observer in the other reference frame. For instance, let’s denote the lives of the observer in the S− frame of reference observing the measurement results C − = 0 and D− = 1 by OS−(D− = 1). Denote also the lives of the experimental apparatus with C − = 0 and D− = 1 by A(D− = 1). When these two lives meet, they merge to form a bigger set of lives that we will denote as OS−(D− = 1) ⊕ AS−(D− = 1). 
(12) Here, the subscript S− in A represents the configuration of the lives of the apparatus ob- served by OS− (configuration in Fig.5). Let the lives OS−(D− = 1) and AS−(D− = 1) meet the lives of the observer in S+ frame before positron reaches BS2+, then the following set of lives is obtained: OS−(D− = 1) AS−(D− = 1) ⊕ ⊕ OS+(D− = 1)3. (13) where the subscript ”3.” indicates that this describes a ”third-person perspective”: observer in S− frame observes in her world another ”observer” in the S+ frame of reference which 9 she denotes (OS+)3..7 After a while, positron also passes BS2+ and is then detected. The detection of the positron causes the lives of the apparatus and the observers split into relative worlds: In one world we obtain C + = 0, D+ = 1 and in the other world C + = 1, D+ = 0. Since we consider D− = 1, D+ = 1 case, lives of the joint system become OS−(D− = 1; D+ = 1) AS−(D− = 1; D+ = 1) ⊕ ⊕ OS+(D− = 1; D+ = 1)3.. (14) The above expression reflects first-person perspective of the observer OS−.8 In this perspec- tive D− = 1 and D+ = 1 detections occurred due to lives coming from v− and u+ paths (see Fig.5). Therefore, lives moving on paths v− and u+ are part of the history of (14). On the other hand, first-person perspective of the observer OS+ has experienced an other history. According to OS+, D− = 1 and D+ = 1 detections occurred due to lives coming from u− and v+ paths (see Fig.6). In the first-person perspective of the observer OS+, we can write the following world of lives: OS−(D− = 1; D+ = 1)3. ⊕ AS+(D− = 1; D+ = 1) ⊕ OS+(D− = 1; D+ = 1). (15) From the analysis we performed above, we get the following odd-looking result: first-person and third-person perspectives of the same observer belong to different worlds. The observer (OS+)3. in the world of (OS−)1. lives parallel to the world of (OS+)1.. But if quantum laws apply equally to all observers, then (OS+)3. should not observe that the positron is detected before the electron.9 However, this result is incompatible with the relativity of simultaneity: (OS+)3. is moving relative to (OS+)1., and the time order of the detection events should be reversed. Consequently, we encounter a discrepancy between special relativity and the PL model. Nevertheless, we need to say that such a discrepancy does not arise for any interpretation of QT that does not accept the reality of anything other than the measurement outcomes. According to such an interpretation, the paths u−, u+, v− and v+ are just mathematical auxiliary concepts; they are not related to reality. 7 We borrow this terminology from Ref.[16]. However, Ref.[16] used this terminology in the context of algorithmic information theory and did not apply it to relativistic observers. 8 we omit the subscript ”1.” for abbreviation. 9 Otherwise the state vector is reduced to (7), which indicates that electron takes u− path. However, this is erroneous as seen from (14). 10 III. SPECULATIONS ON THEORY OF RELATIVITY If we persist in the realistic interpretations of QT, the discrepancy with the theory of relativity needs to be resolved. One solution to this discrepancy is to modify the theory of relativity by proposing a preferred frame of reference. Such a modification of the theory has been discussed for a long time [17]. 
However, there are obscurities in this approach, such as which criteria should be used to determine the preferred frame of reference.10 In this paper we will make the following speculation which we believe offers a solution to the discrepancy and also fits the spirit of the PL model: There is no particular preferred frame of reference, but for each frame there is always a world in which that frame is preferred. The world observed from an observer’s first-person perspective is the world where the observer’s stationary frame is preferred. Lorentz transformations11 are defined between first-person perspectives of observers on different inertial frames of reference. According to the assumptions above, lives of each observer split into infinitely many worlds; one of them corresponds to observer’s first-person perspective and others correspond to third-person perspectives of some other observers. Suppose that S1,S2,...,Sn are different inertial reference frames. Then lives of the observer of each reference frame Si, i 1, 2, ...n } split into n relative worlds. One of them is the world observed in the first-person perspective ∈ { of the observer in the frame Si. In this world we denote the lives of the observer in Si by (OSi)1.. All other observers are in the third-person perspective and denoted by (OS1)3., (OS2)3.,..,(OSi−1)3., (OSi+1)3.,..,(OSn)3.. As is known, Lorentz transformations have a sym- metrical form, i.e. the transformations Si → form, up to the sign in front of the velocity. This feature implies that we cannot distinguish = j) have exactly the same Sj and Sj → Si, (i one frame of reference from another. In our assumptions, a Lorentz transformation from Si to Sj, essentially defines a transformation from (OSi)1. to (OSj)1.. (OSi)1. and (OSj)1. live parallel in different worlds and each is the preferred observer in her own world. We interpret the symmetry feature of Lorentz transformations as the equivalence of the worlds of (OSi)1. and (OSj)1. in defining the laws of nature. 10 One possible candidate for preferred frame of reference is the frame in which the cosmic microwave background is isotropic [17, 18]. However, there is not any apparent reason why this frame should be the preferred frame of reference. 11 Conventional Lorentz transformations in the symmetrical form. 11 6 One can then ask the transformations between observers in the first-person and third- person perspectives, i.e. transformations (OSi)1. → frame of reference. Therefore, the order of events observed by (OSi)1. determine the physical (OSj)3.? In this case Si is the preferred behavior in Hardy’s gedankenexperiment. For instance, if Si coincides with S− frame of reference then the detection of the electron takes place before the detection of the positron and hence, lives of the joint system of observers and the apparatus is given by (14). All other observers in the world of (OSi)1. should observe same order of detection events. Therefore (OSj)3. observes variable speed of light, and hence the transformations (OSi)1. ⇆ (OSj)3. does not obey conventional Lorentz transformation formula. To be precise, assume that the detections in the D+ and D− detectors are synchronized with light pulses from outer point K. According to (OSi)1., these light pulses propagate with a speed c. Then, according = j) speeds of these light pulses moving from K to D+ and D− can vary to (OSj)3., (i and their values may no longer be c. The discussion of what the explicit forms of these transformations is beyond the purpose of this paper. 
However, we would like to draw attention to the following point: Whatever new transformations are, it may not be valid globally. For instance, speed of light from emission event at K to the absorption event at D− may not be equal to the speed of light moving between other two events.12 Therefore, the transformation used, varies depending on which events it is used for. This gives us locally defined transformations. This peculiar situation becomes understandable to some extent if we realize that the world of (OSi)1. emerge as a result of the entanglement of the Hardy’s experimental setup with (OSi)1.. Accordingly, in this world we can attribute a special meaning to the signal events in D− and D+ detectors. We can consider some kind of transformation which gives conventional Lorentz transformation formula for events not associated with Hardy’s experimental setup, but gives a new or modified transformation formula for signal events in the D− and D+ detectors. Obviously, this new transformation violates the Lorentz symmetry. However, the Lorentz symmetry is violated only for events associated with quantum entanglement between the observer and some quantum system. Therefore, we can say that Lorentz symmetry is almost valid. As it was said by Barbour [19], Einstein did not create a theory of clocks and duration from first principles. He avoided ever having to address the physical working of rods and clocks; 12 Even the speeds of light pulses from K to D− and K to D+ may not be equal. 12 6 they were always treated separately as independent entities in both relativity theories. Their properties were not deduced from the inner structure of the theory, but were simply required to accord with the relativity principle [19]. We claim that QT gives actual physical working of rods and clocks. But we should be open to the idea that the relativity principle may not be absolute, and can be violated for certain events associated with quantum entanglement. Finally, we want to discuss how we should interpret the non-equivalence of an observer’s first-person and third-person perspectives. What exactly does this mean? Does this mean that the observer (OSj)3. in (OSi)1.’s world is an unconscious being, such as a zombie or a robot? This is not what we intend to say. If we want to explain with the example of Hardy’s gedankenexperiment we discussed in the previous section, we can say that the measurement performed by (OSi)1. and her conscious perception causes the state vector to collapse.13 But this does not mean that (OSj)3. is an unconscious being. It simply means that in (OSi)1.’s world, (OSj)3.’s perception of the measurement result has no effect on the state vector’s collapse; all observers in different reference frames respect the order of events and recorded history that the observer (OSi)1. sees on Hardy’s experimental setup. On the other hand, if we repeat or perform another experiment, lives will split again and (OSj)3. can find herself in the world of her first-person perspective where her frame of reference is the preferred frame. As soon as this happens, the subscript ”3.” should be replaced by ”1.”. IV. CONCLUSIONS PL is a model that is expected to be compatible with the relativity theory because it in- cludes the local interactions of lives and their motions that do not exceed the speed of light. However, we negated this expectation by showing that the PL model could not overcome the paradox suggested by Hardy. 
Our results can also be applied to the many worlds interpretation, where counterfactual propositions are assumed to be part of reality in different alternative worlds, or to any realistic interpretation of QT that assumes real particle trajectories. But we want to emphasize that there is no conflict between the special theory of relativity and QT for approaches and interpretations that regard state vectors as auxiliary mathematical concepts and do not relate them to reality. Therefore, one way to overcome Hardy's paradox is to adopt such an approach. On the other hand, if we insist on a realistic interpretation as we have just mentioned, we must accept the possibility that Lorentz symmetry is violated. Such a Lorentz symmetry violation can be realized by choosing a preferred frame of reference, as noted in Hardy's original paper [1]. In Section III, we have made an interesting speculation which, we believe, offers a solution to the discrepancy between QT and the special theory of relativity, and also fits the spirit of the PL model.

[1] L. Hardy, "Quantum mechanics, local realistic theories, and Lorentz-invariant realistic theories," Phys. Rev. Lett. 68, 2981-2984 (1992).
[2] G. Brassard and P. Raymond-Robichaud, "Can free will emerge from determinism in quantum theory?," in "Is Science Compatible with Free Will? Exploring free will and consciousness in light of quantum physics and neuroscience," A. Suarez and P. Adams (Eds.), Chapter 4, pp. 41-61, Springer, 2013 [arXiv:1204.2128 [quant-ph]].
[3] G. Brassard and P. Raymond-Robichaud, "Parallel Lives: A Local-Realistic Interpretation of Nonlocal Boxes," Entropy 21(1), 87 (2019) [arXiv:1709.10016 [quant-ph]].
[4] M. Waegell, "An Ontology of Nature with Local Causality, Parallel Lives, and Many Relative Worlds," Found. Phys. 48, no. 12, 1698-1730 (2018) [arXiv:1707.06324 [quant-ph]].
[5] H. Everett III, "Relative state formulation of quantum mechanics," Rev. Mod. Phys. 29, no. 3, p. 454 (1957).
[6] B. S. DeWitt, Physics Today 23, 9, 30 (1970).
[7] J. S. Bell, "On the Einstein-Podolsky-Rosen paradox," Physics 1, 195-200 (1964).
[8] J. F. Clauser, M. A. Horne, A. Shimony, and R. A. Holt, "Proposed Experiment to Test Local Hidden-Variable Theories," Phys. Rev. Lett. 23, 880 (1969).
[9] J. Conway and S. Kochen, "The Free Will Theorem," Found. Phys. 36, no. 10, 1441-1473 (2006) [arXiv:quant-ph/0604079].
[10] M. Waegell, "Locally Causal and Deterministic Interpretations of Quantum Mechanics: Parallel Lives and Cosmic Inflation," Quantum Stud.: Math. Found. 4, 323-337 (2017) [arXiv:1604.07874 [quant-ph]].
[11] R. Clifton, C. Pagonis and I. Pitowsky, "Relativity, Quantum Mechanics and EPR," Proceedings of the Biennial Meeting of the Philosophy of Science Association, Volume 1, pp. 114-128 (1992).
[12] I. Pitowsky, "The Relativity of Quantum Predictions," Phys. Lett. A 156, 137-139 (1991).
[13] Y. Aharonov et al., "Revisiting Hardy's Paradox: Counterfactual Statements, Real Measurements, Entanglement and Weak Values," Phys. Lett. A 301, 130-138 (2002).
[14] L. Vaidman, "Counterfactuals in Quantum Mechanics," in "Compendium of Quantum Physics," D. Greenberger, K. Hentschel, F. Weinert (eds), Springer, 2009.
[15] A. Einstein, B. Podolsky and N. Rosen, "Can Quantum-Mechanical Description of Physical Reality be Considered Complete?," Phys. Rev. 47, 777 (1935).
[16] M. P.
Müller, "Law without law: from observer states to physics via algorithmic information theory," arXiv:1712.01826 [quant-ph].
[17] C. M. Will, "Theory and Experiment in Gravitational Physics," Cambridge University Press, Cambridge (1993).
[18] S. R. Coleman and S. L. Glashow, "Cosmic ray and neutrino tests of special relativity," Phys. Lett. B 405, 249-252 (1997) [arXiv:hep-ph/9703240 [hep-ph]].
[19] J. Barbour, "The End Of Time," Oxford University Press, Oxford (1999).

FIG. 1: Scheme of Hardy's gedankenexperiment [1]. BS1+, BS1−, BS2+, BS2− represent beam splitters and M1+, M1−, M2+, M2− represent mirrors. C+, D+, C−, D− are detectors.
FIG. 2: Single photon on a beam splitter.
FIG. 3: Single photon in a Mach-Zehnder interferometer.
FIG. 4: Diagram representing the lives observed in the LAB frame. Dotted lines represent hidden lives living in parallel.
FIG. 5: Diagram representing the lives observed in the S− frame. Dotted lines represent hidden lives living in parallel.
FIG. 6: Diagram representing the lives observed in the S+ frame. Dotted lines represent hidden lives living in parallel.
ai_researcher
1
LLM4VV_Exploring_LLM-as-a-Judge_for_Validation_and_Verification_Testsuites.pdf
LLM4VV: Exploring LLM-as-a-Judge for Validation and Verification Testsuites

Zachariah Sollenberger* (University of Delaware, Newark, DE), Jay Patel* (University of Delaware, Newark, DE), Christian Munley (University of Delaware, Newark, DE), Aaron Jarmusch (University of Delaware, Newark, DE), Sunita Chandrasekaran (University of Delaware, Newark, DE; [email protected])

Abstract—Large Language Models (LLM) continue to improve and are revolutionizing the landscape of software development. These large models have demonstrated the potential to generate, debug, test, analyze, document, and even translate code. Thus they are a valuable tool in the software development cycle. If used correctly, such tools can often accelerate the development cycle. Though the tools are powerful and new, the community is cautious about training on biased or sensitive data, which can lead to biased, dangerous, or incorrect outputs along with the inadvertent release of confidential information. Additionally, the carbon footprint and the un-explainability of these "black box" models continue to raise questions about the reliability of LLMs. With these opportunities and these challenges ahead, this paper explores the idea of "judging" LLM-generated code to better understand and "open up" the un-explainable "black box" models used by LLMs. We probe into the black box of one such LLM that has generated the best compiler tests for the directive-based programming models OpenMP and OpenACC in our earlier research. We challenge DeepSeek's deepseek-coder-33B-instruct model with intentionally-erroneous code, and we also define relevant metrics and adopt an agent-based approach to evaluate the LLM and assess its capabilities as an LLM-as-a-judge. We also develop a pipeline-based approach to streamline the entire workflow. Finally, we make use of all of these strategies together to develop a more reliable method for automatically validating LLM-generated compiler tests. Based on our results, utilizing an agent-based prompting approach and setting up a validation pipeline structure drastically increased the quality of deepseek-coder-33B-instruct's evaluation of tests which are used to validate compiler implementations of directive-based parallel programming models.

I. INTRODUCTION

Large Language Models (LLMs) have recently revolutionized the field of computer science. Popular models like BERT [1], GPT-4 [2], Gemini [3], and more are trained on an objective such as predicting the next words or tokens in a text, and demonstrate capabilities to process, recognize, and understand human languages at impressive levels. LLMs achieve this feat with the help of a subsection of machine learning known as deep learning. LLMs use a type of deep-learning architecture called transformers. With the combination of self-attention, positional encoding, feed-forward networks, multi-head attention [4], and other key components, the transformer architecture can be trained on internet-scale text datasets using self-supervised learning and learns to model language effectively.

*Authors Zachariah and Jay contributed equally to this manuscript.

With the wide-ranging capabilities provided by LLMs, this paper explores the idea of using an LLM-as-a-judge (LLMJ) to evaluate tests written to verify and validate compiler implementations.
We chose DeepSeek's deepseek-coder-33B-instruct model [5] for this purpose because in a recently published work of ours [6], we found that the deepseek-coder-33B-instruct model demonstrated the best capability to generate directive-based parallel programming model codes among the several LLMs we tested for that purpose (the directive-based parallel programming models being OpenACC [7] and OpenMP [8]). This LLM generated codes with a high compilation and pass rate compared to other popular LLMs, such as GPT-4 turbo [9], Codellama-34b-Instruct [10], and GPT-3, as narrated in the published paper.

LLMs are being widely considered for tasks such as code generation, summarization, and refactoring [11], [12], [13], [14]. However, the application of LLMJs specifically to evaluating tests used for verifying and validating compiler implementations of directive-based parallel programming models is a new topic. This paper investigates this topic and explores the application of the LLMJ technique.

The potential of LLMJ is enabled by the training of the model on a large number of codes, which allows it to comprehend code and assess a given piece of code based on user-specified metrics. The LLM processes the input data in its large network of parameters, and at a level of abstraction it is using some pattern recognition and learned knowledge to generate text. The LLMJ produces an output that reflects its judgment against the defined criteria. This process can take various forms, such as analyzing code for errors, determining syntactical correctness, and predicting the accuracy of a code's implementation, among others.

Our reason for exploring this usage of an LLMJ is to help automate the creation of functional validation and verification test suites for directive-based parallel programming models. The challenge we are currently facing in this process is finding a method to accurately evaluate the correctness of tests generated by an LLM. The objective of our research in this paper is to minimize or potentially remove the need for human intervention or involvement in this process by utilizing an LLMJ. The approach in this paper could be beneficial to developers beyond directive-based programming models. Any developer would need to verify and validate their software, and an LLMJ could serve that purpose, as it takes significantly less labor and time compared to a human evaluating the code.

The paper makes the following contributions:
• Creating and defining metrics to evaluate LLM-generated code
• Developing negative-probing methods and a benchmark to evaluate a given LLM's performance as a judge
• Evaluating the capability of deepseek-coder-33B as a judge by using an agent-based approach

II. RELATED WORK

Several recent studies have demonstrated the capabilities of LLMs in generating parallel programs. For instance, Nichols [15] proposed a reinforcement learning method to improve the speedup of generated codes, while LM4HPC [16] presented various datasets and a tokenizer for HPC-related code generation. Oren et al. [17] explored AST representations of code, Godoy et al. [18] evaluated OpenAI Codex for HPC kernel generation, and Valero-Lara et al. [19] explored Llama-2 and GPT-3 for kernel generation.

Another line of work involves using LLMs as judges to evaluate other models on open-ended questions. Zheng et al. [20] presented the concept of using strong LLMs as judges to identify biases in other models, achieving an 80% agreement with human preferences.
This study demonstrates the potential of LLMJs in the HPC realm. Other studies have explored LLMs for developing test cases for applications beyond compiler V&V. For instance, Schäfer et al. [21] evaluated LLMs for automated JavaScript unit test generation, while other works have investigated LLM-based test case generation for various programming languages and software systems [22], [23].

Finally, there are several copilot models being implemented into IDEs, such as GitHub Copilot [24] and Cursor [25], which leverage LLMs to assist developers in writing code. These models have shown promising results in improving coding productivity and reducing errors.

Overall, these studies demonstrate the growing interest in exploring the potential of LLMs for software development and automation. Our work builds upon this trend by investigating the use of LLMs for compiler V&V, with a focus on improving the accuracy and efficiency of the verification process.

III. METHODOLOGY

To determine how deepseek-coder-33b-instruct performs as an LLMJ, we first outline in this section our strategies, including negative probing, an agent-based approach, and a validation pipeline to streamline the process.

A. Negative Probing

Manually written compiler tests from the OpenACC V&V [26] and OpenMP V&V [27] repositories were split into two groups: one containing code that had been modified to include various errors, and the other containing code that remained unchanged. The idea is to intentionally create invalid variations of otherwise valid code in order to determine and understand how an LLM, as a "black box", assesses code. We term this process negative probing. Modifications applied to Group 1 include:

Group 1: Variations of negative probing
• 0. Removed memory allocation / replaced directives with a different, syntactically incorrect directive
• 1. Removed an opening bracket
• 2. Added use of an undeclared variable
• 3. Replaced file with randomly generated non-OpenACC & OpenMP code
• 4. Removed last bracketed section of code

Group 2: Unchanged manually written codes
• 5. No changes to code

First we split the manually-written test files in half randomly and create a modified, invalid suite and an unchanged, valid suite. We prompt the deepseek-coder-33B-instruct model [5] one test at a time, instruct it to judge the two different groups against predefined criteria, and record the evaluations for each file. Listing 1 shows the criteria we use in prompting the model to review and evaluate an OpenACC code:

Listing 1: Criteria for Evaluation - an Example Prompt
Syntax: Ensure all OpenACC directives and pragmas are syntactically correct.
Directive Appropriateness: Check if the right directives are used for the intended parallel computations.
Clause Correctness: Verify that all clauses within the directives are correctly used according to OpenACC specifications.
Memory Management: Assess the accuracy of data movement between CPU and GPU.
Compliance: Ensure the code adheres to the latest OpenACC specifications and best practices.
Logic: Verify that the logic of the test (e.g. performing the same computation in serial and parallel and comparing) is correct.

By observing how the LLM judged both groups of files, and by recording the specific modifications made to each file, we were able to identify different areas where the LLMJ did well, and where it encountered challenges. We were also able to measure and judge the overall accuracy of the LLM. This type of analysis allows for insights into the strengths and weaknesses of an LLM's assessment capabilities.

B. Agent-based Approach for LLM-as-a-Judge (LLMJ)

An agent-based approach involves treating the LLM as an autonomous agent that interacts with its environment and utilizes various tools to improve the quality of its outputs. In the context of using an LLMJ, the agent-based approach entails collecting and making use of external information about each file, such as compilation and execution error messages, outputs, and return codes, and providing this information to the LLMJ within the prompt. The LLMJ then utilizes the provided information to evaluate the file, deeming it either valid or invalid. Figure 1 demonstrates how the agent-based LLMJ works.

Fig. 1: An Overview of Agent-based Approach for LLMJ

Listing 2 shows how the tool use is incorporated into the prompting to provide additional information to the LLM to help it review an OpenACC code and evaluate it based on user-specified criteria:

Listing 2: Agent-based LLMJ - an Example Prompt
Syntax: Ensure all OpenACC directives and pragmas are syntactically correct.
Directive Appropriateness: Check if the right directives are used for the intended parallel computations.
Clause Correctness: Verify that all clauses within the directives are correctly used according to OpenACC specifications.
Memory Management: Assess the accuracy of data movement between CPU and GPU.
Compliance: Ensure the code adheres to the latest OpenACC specifications and best practices.
Logic: Verify that the logic of the test (e.g. performing the same computation in serial and parallel and comparing) is correct.
Based on these criteria, evaluate the code and determine if it is a valid or invalid test. Think step by step.
You MUST include the exact phrase, "FINAL JUDGEMENT: valid" in your response if you deem the test to be valid.
If you deem the test to be invalid, include the exact phrase "FINAL JUDGEMENT: invalid" in your response instead.
Here is some information about the code to help you.
When compiled with a compliant OpenACC compiler, the below code causes the following outputs:
Compiler return code: {Compiler's return code}
Compiler STDERR: {Compiler's STDERR}
Compiler STDOUT: {Compiler's STDOUT}
When the compiled code is run, it gives the following results:
Return code: {Program's return code}
STDERR: {Program's STDERR}
STDOUT: {Program's STDOUT}
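To illustrate how an agent-style prompt of this kind can be assembled, the Python sketch below compiles and runs a candidate test with the subprocess module and substitutes the collected outputs into a template like Listing 2. This is a minimal sketch under stated assumptions: the compiler command ("nvc -acc") and the abbreviated template are illustrative stand-ins, not the exact toolchain or code used in this work:

    import subprocess

    TEMPLATE = (
        "(criteria and instructions as in Listing 2 ...)\n"
        "Compiler return code: {comp_rc}\n"
        "Compiler STDERR: {comp_err}\n"
        "Compiler STDOUT: {comp_out}\n"
        "When the compiled code is run, it gives the following results:\n"
        "Return code: {run_rc}\n"
        "STDERR: {run_err}\n"
        "STDOUT: {run_out}\n"
    )

    def build_agent_prompt(test_file):
        # Compile the candidate test and capture the compiler's outputs.
        comp = subprocess.run(["nvc", "-acc", test_file, "-o", "test.bin"],
                              capture_output=True, text=True)
        info = {"comp_rc": comp.returncode, "comp_err": comp.stderr,
                "comp_out": comp.stdout, "run_rc": "N/A",
                "run_err": "", "run_out": ""}
        # Run the binary only if compilation succeeded.
        if comp.returncode == 0:
            run = subprocess.run(["./test.bin"], capture_output=True, text=True)
            info.update(run_rc=run.returncode, run_err=run.stderr,
                        run_out=run.stdout)
        with open(test_file) as f:
            source = f.read()
        # Fill the template and append the source code to be judged.
        return TEMPLATE.format(**info) + "\n" + source

The resulting prompt, source code included, is then sent to the judge model, and the verdict is read off by searching the response for the exact phrase "FINAL JUDGEMENT: valid" or "FINAL JUDGEMENT: invalid".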
This type of analysis allows for insights into the strengths and weaknesses of an LLM's assessment capabilities.

B. Agent-based Approach for LLM-as-a-Judge (LLMJ)

An agent-based approach involves treating the LLM as an autonomous agent that interacts with its environment and utilizes various tools to improve the quality of its outputs. In the context of using an LLMJ, the agent-based approach entails collecting and making use of external information about each file, such as compilation and execution error messages, outputs, and return codes, and providing this information to the LLMJ within the prompt. The LLMJ then utilizes the provided information to evaluate the file, deeming it either valid or invalid. Figure 1 demonstrates how the agent-based LLMJ works.

Fig. 1: An Overview of the Agent-based Approach for LLMJ

Listing 2 shows how tool use is incorporated in the prompting to provide additional information to the LLM to help it review an OpenACC code and evaluate it based on user-specified criteria:

Listing 2: Agent-based LLMJ - an Example Prompt
1 Syntax: Ensure all OpenACC directives and pragmas are syntactically correct.
2 Directive Appropriateness: Check if the right directives are used for the intended parallel computations.
3 Clause Correctness: Verify that all clauses within the directives are correctly used according to OpenACC specifications.
4 Memory Management: Assess the accuracy of data movement between CPU and GPU.
5 Compliance: Ensure the code adheres to the latest OpenACC specifications and best practices.
6 Logic: Verify that the logic of the test (e.g. performing the same computation in serial and parallel and comparing) is correct.
7 Based on these criteria, evaluate the code and determine if it is a valid or invalid test. Think step by step.
8 You MUST include the exact phrase, "FINAL JUDGEMENT: valid" in your response if you deem the test to be valid.
9 If you deem the test to be invalid, include the exact phrase "FINAL JUDGEMENT: invalid" in your response instead.
10 Here is some information about the code to help you.
11 When compiled with a compliant OpenACC compiler, the below code causes the following outputs:
12 Compiler return code: {Compiler's return code}
13 Compiler STDERR: {Compiler's STDERR}
14 Compiler STDOUT: {Compiler's STDOUT}
15 When the compiled code is run, it gives the following results:
16 Return code: {Program's return code}
17 STDERR: {Program's STDERR}
18 STDOUT: {Program's STDOUT}

Through this method, the LLM is able to obtain more information about the file to aid in its evaluation.

C. Validation Pipeline utilizing LLMJ

In order to efficiently evaluate the validity of compiler tests, it may not always be feasible to compile, run, and have an LLM evaluate each and every test that requires verification. Performing all three processes on every single file being verified can quickly become a time-consuming and costly task, especially when verifying LLM-generated codes with a high occurrence of invalidity or a large volume of candidate tests. To streamline this task, we re-organized the three processes into a pipeline infrastructure, as shown in Figure 2, both to optimize the overarching task by reducing the number of unnecessary steps and to increase the throughput of files for verification via pipeline stages and parallel processing.

Fig. 2: An Overview of the Validation Pipeline

The driving concept behind the pipeline is that a file that fails an earlier stage of the pipeline does not need to be passed to the next stage, as it has already demonstrated its invalidity. Within this validation pipeline infrastructure, files are first compiled, then executed, and finally judged by an agent-based LLMJ. Each file being processed is first queued for compilation, which can be done either by a single thread or by a pool of threads in parallel. Files that successfully compile are then queued for execution, which can again be done synchronously in a single thread or asynchronously by a second thread pool. Finally, files that exit with return code 0 are queued for evaluation by an agent-based LLMJ. This stage can also be parallelized if there are enough available GPU resources, but it can also be done by a single thread running synchronously or asynchronously. In this manner, unnecessary operations are reduced by preventing invalid files from continuing through the pipeline, throughput is increased by the staged architecture, and the overarching task of verification can utilize all available resources via parallel and/or asynchronous computing.
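The sketch below illustrates one way the staged pipeline and the agent-based prompt assembly could be wired together. The compiler invocation and the query_llmj() helper are assumptions for illustration; each stage could equally be serviced by a thread pool, as described above.

import subprocess

def compile_stage(src: str, exe: str) -> subprocess.CompletedProcess:
    # Example compiler invocation; nvc is NVIDIA's OpenACC compiler, but
    # the exact flags here are assumptions, not the settings we used.
    return subprocess.run(["nvc", "-acc", src, "-o", exe],
                          capture_output=True, text=True)

def run_stage(exe: str) -> subprocess.CompletedProcess:
    return subprocess.run(["./" + exe], capture_output=True, text=True,
                          timeout=60)

def validate(src: str, exe: str, criteria: str, query_llmj) -> str:
    comp = compile_stage(src, exe)
    if comp.returncode != 0:
        return "invalid (failed to compile)"     # early exit: skip later stages
    run = run_stage(exe)
    if run.returncode != 0:
        return "invalid (non-zero return code)"  # early exit: skip the LLMJ
    # Agent-based step: hand the collected tool outputs to the judge.
    prompt = (criteria
              + f"\nCompiler return code: {comp.returncode}"
              + f"\nCompiler STDERR: {comp.stderr}\nCompiler STDOUT: {comp.stdout}"
              + f"\nReturn code: {run.returncode}"
              + f"\nSTDERR: {run.stderr}\nSTDOUT: {run.stdout}"
              + "\nHere is the code:\n" + open(src).read())
    return query_llmj(prompt)                    # expected to answer valid/invalid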
To determine the accuracy of this method for compiler test validation, we performed the negative probing technique again, but instead of only recording the LLMJ's evaluations, we recorded each file's compilation data, execution data, and its evaluation from an agent-based LLMJ. This not only allowed us to determine which files would have passed through the pipeline architecture, and thus determine the accuracy of the validation pipeline, but also allowed us to determine the performance and accuracy of an agent-based LLMJ on its own.

D. Experimental Setup

For this paper, we have used the high-performance computing cluster Perlmutter located at Lawrence Berkeley National Laboratory [28]. Each node of Perlmutter is equipped with four NVIDIA A100 GPUs and one AMD EPYC 7763 CPU. Manually written test suites from the OpenACC and OpenMP Validation and Verification test suites were used for negative probing. The experiments were conducted on C, C++, and a small set of Fortran files.

IV. DEFINING METRICS

To determine the effectiveness of LLMJ, we utilized three metrics:
• Per-issue evaluation accuracy: Where the issue is the intentional error introduced into each file during negative probing
• Overall evaluation accuracy: This does not take into account the issue in each file
• Bias: A numerical measurement of the LLMJ's tendency to fail valid files or pass invalid files when an incorrect evaluation is made

All metrics were calculated from the results of negative probing. The first metric, i.e., per-issue evaluation accuracy, was determined by categorizing the LLMJ's evaluations according to each file's issue ID and then observing the percentage of correct LLMJ evaluations in each category. The second metric, i.e., the overall accuracy, was determined by observing the percentage of correct LLMJ evaluations regardless of the issues injected into each file. Finally, the third metric, i.e., bias, was determined by numerically measuring the LLMJ's tendency to fail a valid file or to pass an invalid one when it made a mistake. A positive bias means that when the LLMJ makes a mistake, it is more likely to be one of permissiveness (passing an invalid file), whereas a negative bias means that a mistake is more likely to be one of restrictiveness (failing a valid file).

In order to determine whether the LLMJ's evaluations were accurate or not, the following system of verification was utilized to determine the validity of each file:
• Files with issue IDs ranging from 0-4 are considered invalid, as they have been altered to include errors.
• Files with issue ID 5 are considered valid, as they remain unchanged.

For the purposes of numerical analysis, "Correct", "Passing", and "Valid" were mapped to 0, and "Incorrect", "Failing", and "Invalid" were mapped to 1. Based on this definition, we can numerically evaluate the performance of the LLMJ for each issue type. The following data points were recorded or calculated on a per-issue basis:
• Count: The number of files that correspond to each issue ID.
• Correct/Incorrect Judgments: The number of correct and mistaken evaluations made by the LLMJ on files corresponding to each issue ID, determined by comparing the LLMJ's evaluations against each file's validity according to the above verification system.
• Accuracy: Calculated by first determining the number of correct evaluations made by the LLMJ (equal to the count value minus the mistakes value for each issue ID), and dividing that number by the number of files with the same issue ID. The resulting value represents the percentage of correct evaluations made by the LLMJ for each issue ID.

Additionally, we conducted a numerical evaluation to assess the overall accuracy and bias:
• Overall evaluation accuracy: Calculated by determining the total number of correct evaluations and dividing it by the total number of files, regardless of each file's issue ID.
• Bias: Calculated by first determining a total bias value. 1 is added to the total for each mistaken evaluation of an invalid file, and 1 is subtracted from the total for each mistaken evaluation of a valid file. The resulting total is then divided by the total number of mistaken evaluations, giving a value in the range [-1, 1].
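A minimal sketch of these metric computations, assuming a hypothetical records list of (issue ID, judged-valid) pairs collected during negative probing:

def evaluate(records):
    """records: list of (issue_id, judged_valid) pairs from negative probing."""
    per_issue = {}                       # issue_id -> [correct, count]
    mistakes, bias_total = 0, 0
    for issue_id, judged_valid in records:
        truly_valid = (issue_id == 5)    # IDs 0-4 were deliberately broken
        correct = (judged_valid == truly_valid)
        stats = per_issue.setdefault(issue_id, [0, 0])
        stats[0] += int(correct)
        stats[1] += 1
        if not correct:
            mistakes += 1
            # +1 for passing an invalid file, -1 for failing a valid one
            bias_total += 1 if not truly_valid else -1
    overall = (len(records) - mistakes) / len(records)
    bias = bias_total / mistakes if mistakes else 0.0    # value in [-1, 1]
    per_issue_accuracy = {k: v[0] / v[1] for k, v in per_issue.items()}
    return {"overall_accuracy": overall, "bias": bias,
            "per_issue_accuracy": per_issue_accuracy}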
These metrics allowed us to create profiles of multiple different approaches and setups for verifying compiler tests when each approach was subjected to negative probing.

V. ANALYSIS OF DEEPSEEK-CODER-33B-INSTRUCT AS AN LLMJ

This section discusses results from analyzing deepseek-coder-33B-instruct as an LLMJ. We do so in two parts:
• Part One: We discuss results derived from using the LLMJ by itself through negative probing
• Part Two: We discuss results from two different prompting styles for an agent-based LLMJ and a validation pipeline that utilizes an agent-based LLMJ

A. Results for Part One

Initial experimentation began with an analysis of the LLMJ technique itself. Two test suites were put together with negative probing to test the LLM against: one suite for OpenMP (containing only C files, due to time constraints), and one suite for OpenACC (containing C, C++, and Fortran files). After assembling the test suites, we loaded the deepseek-coder-33b-instruct model onto one node on Perlmutter and used the following prompt for each file, as shown in Listing 3. Because the prompt asks the LLM to directly evaluate the code provided, we called this prompt a direct analysis prompt.

Listing 3: Direct Analysis - an Example Prompt
1 Review the following OpenACC/OpenMP code and evaluate it based on the following criteria:
2
3 Syntax: Ensure all OpenACC/OpenMP directives and pragmas are syntactically correct.
4 Directive Appropriateness: Check if the right directives are used for the intended parallel computations.
5 Clause Correctness: Verify that all clauses within the directives are correctly used according to OpenACC/OpenMP specifications.
6 Memory Management: Assess the accuracy of data movement between CPU and GPU.
7 Compliance: Ensure the code adheres to the latest OpenACC specifications and best practices.
8 Logic: Verify that the logic of the test (e.g. performing the same computation in serial and parallel and comparing) is correct.
9 Based on these criteria, evaluate the code in a brief summary, then respond with precisely "FINAL JUDGEMENT: correct" (or incorrect).
10 You MUST include the exact phrase "FINAL JUDGEMENT: correct" in your evaluation if you believe the code is correct. Otherwise, you must include the phrase "FINAL JUDGEMENT: incorrect" in your evaluation.
11 Here is the code:
12 {C/C++/Fortran file content}

The LLM's response and evaluation were then recorded for each file, and we performed an analysis of the data. Table I and Table II show the per-issue accuracy of deepseek-coder-33B-instruct's evaluations for OpenACC and OpenMP files, respectively.

As Table I demonstrates, deepseek-coder-33B-instruct struggled to recognize basic syntax errors and test logic errors in OpenACC files, and was only able to accurately determine whether the test contained any OpenACC directives or routines at all. Meanwhile, Table II shows that the LLMJ was significantly better at recognizing syntax errors in OpenMP files, while struggling a bit more to recognize OpenMP errors and test logic errors. Notably, the LLMJ was almost entirely incapable of recognizing when a file did not contain any OpenMP at all.

Table III shows the overall performance of deepseek-coder-33B-instruct as a judge for OpenACC as well as OpenMP. Surprisingly, despite OpenMP having existed for a longer period of time, deepseek-coder-33B-instruct demonstrated a higher overall accuracy when evaluating OpenACC files. However, it also exhibited a much higher positive bias for OpenACC than for OpenMP, demonstrating a strong tendency for its mistakes to involve passing an invalid file.

B. Results for Part Two

Based on these results, we concluded that it would be necessary to equip the LLMJ with more tools in order to improve its accuracy. We designed the validation pipeline and implemented an agent-based approach for the LLMJ, and created larger test suites for OpenMP and OpenACC (using C and C++ files from the manually written test suites for both). Many OpenMP offloading compilers do not support all OpenMP features introduced after version 4.5. To reduce the likelihood of this inconsistent feature support affecting our results, we only included files that used OpenMP 4.5 or lower, ensuring that the LLVM OpenMP offloading compiler we used would be fully compliant for all features present. For OpenACC, we used NVIDIA's HPC SDK nvc compiler. For now, we have experimented mostly with C/C++ files, with an aim to include Fortran files in the near future.

We theorize that the wording of our direct analysis prompt in Listing 3 was causing the LLM to provide results based on examples of code reviews online instead of its knowledge of OpenMP and OpenACC. To remedy this, we re-wrote the prompt and instructed the LLM to generate a detailed description of the code provided, and then determine if that description fit the profile of a valid compiler test. In this way, the LLM would be indirectly evaluating the code, so we referred to it as an indirect analysis prompt. With this approach, the LLM would hopefully base its response on its knowledge of OpenMP and OpenACC (when generating a description of the code), and its knowledge of compiler tests (when analyzing the description).
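In either prompting style, the verdict is recovered from the model's free-form response by matching the mandated phrase. One way to perform this extraction is sketched below; the exact parsing logic is an illustrative assumption.

import re

def extract_judgement(response: str):
    # Listing 3 mandates correct/incorrect; Listings 2 and 4 mandate
    # valid/invalid, so both vocabularies are accepted here.
    m = re.search(r"FINAL JUDGEMENT:\s*(valid|invalid|correct|incorrect)",
                  response, flags=re.IGNORECASE)
    if m is None:
        return None                      # no parsable verdict in the response
    verdict = m.group(1).lower()
    return "valid" if verdict in ("valid", "correct") else "invalid"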
The following shows the indirect analysis prompt that we created:

Listing 4: Indirect Analysis - an Example Prompt
1 Describe what the below OpenACC/OpenMP program will do when run. Think step by step.
2 Here is some information about the code to help you; you do not have to compile or run the code yourself.
3 When the below code is compiled with an OpenACC/OpenMP-compliant compiler, the compiler gives the following outputs:
4 Compiler return code: {return code}
5 Compiler STDERR: {STDERR}
6 Compiler STDOUT: {STDOUT}
7 When the compiled code is run, it gives the following results:
8 Return code: {return code}
9 STDOUT: {STDOUT}
10 STDERR: {STDERR}
11 Using this information, describe in full detail how the below code works, what the below code will do when run, and suggest why the below code might have been written this way.
12 Then, based on that description, determine whether the described program would be a valid or invalid compiler test for {flavor} compilers.
13 You MUST include the exact phrase "FINAL JUDGEMENT: valid" in your final response if you believe that your description of the below OpenACC/OpenMP code describes a valid compiler test; otherwise, your final response MUST include the exact phrase "FINAL JUDGEMENT: invalid".
14 Here is the code for you to analyze: {C/C++/Fortran file}

TABLE I: LLMJ Negative Probing Results for OpenACC

OpenACC Issue Type | Total Count | Correct Judgments | Incorrect Judgments | Accuracy
Removed ACC memory allocation / swapped ACC directive | 203 | 31 | 172 | 15%
Removed an opening bracket | 125 | 15 | 110 | 12%
Added use of undeclared variable | 108 | 16 | 92 | 15%
Replaced file with randomly-generated non-OpenACC code | 117 | 94 | 23 | 80%
Removed last bracketed section of code | 114 | 14 | 100 | 12%
No issue | 668 | 586 | 82 | 88%

TABLE II: LLMJ Negative Probing Results for OpenMP

OpenMP Issue Type | Total Count | Correct Judgments | Incorrect Judgments | Accuracy
Removed OMP memory allocation / swapped OMP directive | 59 | 28 | 31 | 47%
Removed an opening bracket | 39 | 29 | 10 | 74%
Added use of undeclared variable | 33 | 21 | 12 | 64%
Replaced file with randomly-generated non-OpenMP code | 51 | 2 | 49 | 4%
Removed last bracketed section of code | 33 | 11 | 22 | 33%
No issue | 216 | 84 | 132 | 39%

TABLE III: LLMJ Overall Negative Probing Results

Datapoint | OpenACC | OpenMP
Total Count | 1335 | 431
Total Mistakes | 579 | 256
Overall Accuracy | 56.63% | 40.60%
Bias | 0.717 | -0.031

For each test suite, we then compiled, executed, and used both LLMJ prompts to evaluate each file while recording the compilation data, execution data, and evaluations. We ran each file through every stage of the validation pipeline; however, for this experiment, we did not prevent invalid files from continuing through the pipeline. This way, we could gather information about both agent-based LLMJs and retroactively verify how the entire validation pipeline would have performed on the data by checking the compilation, execution, and evaluation status of each file. To simplify the data analysis:
• LLMJ 1: The agent-based LLMJ that used the direct analysis prompt
• LLMJ 2: The agent-based LLMJ that used the indirect analysis prompt
• Pipeline 1: Validation pipeline outputs computed with LLMJ 1's evaluation
• Pipeline 2: Validation pipeline outputs computed with LLMJ 2's evaluation

We then compared the performances of the two pipelines against each other, and compared the two agent-based LLMJs against each other and against the non-agent-based LLMJ. Table IV shows the results of the two pipelines on the OpenACC test suite.
As can be seen, the two pipelines performed almost identically, though Pipeline 2 demonstrated a higher ability to recognize errors in the test's logic, and Pipeline 1 demonstrated a higher ability to recognize when a file contained no errors. Table V also shows a similarity between the two pipelines' performances, though in this case, Pipeline 2 was slightly worse at recognizing OpenMP errors and significantly better at recognizing a lack of OpenMP code.

Table VI shows the overall performance of both pipelines across both OpenACC and OpenMP. Both pipelines were significantly more accurate for OpenMP than for OpenACC, though Pipeline 1 was slightly more accurate than Pipeline 2 for OpenACC and slightly less accurate than Pipeline 2 for OpenMP. For both programming models, both pipelines demonstrated a bias towards restrictiveness, though Pipeline 2 consistently had a stronger bias than Pipeline 1. This demonstrates that for both pipelines, when a mistake does occur, it is more likely to be one of misjudging a valid file rather than one of misjudging an invalid file.

Figures 3 and 4 present the accuracy of both pipelines on the four categories of errors introduced into each file, for OpenACC and OpenMP respectively. As Figure 4 clearly shows, the performance of both pipelines on OpenMP was nearly identical across all four types of issues, while Pipeline 1 and Pipeline 2 had only slight differences in performance for OpenACC. The radar plots also show the large difference in the pipelines' ability to detect erroneous test logic in OpenMP files versus OpenACC files; however, both pipelines also demonstrated an almost identical ability to detect improper directive use and improper syntax across both OpenACC and OpenMP.

Table VII shows the results of the two agent-based LLMJs on the OpenACC test suite. In this case, the two LLMJs' performances varied much more, with LLMJ 1 demonstrating a superior ability to identify missing syntax errors and to recognize valid code, and LLMJ 2 demonstrating a superior ability to detect OpenACC errors, a lack of OpenACC code, and errors in test logic. Table VIII, which shows the performance of both LLMJs on OpenMP, also demonstrates more variance between the two LLMJs. LLMJ 1 exhibited a higher accuracy for recognizing OpenMP errors, syntax errors, and test logic errors, while LLMJ 2 was better equipped for recognizing a lack of OpenMP code and valid codes. Table IX shows the overall performance of both LLMJs.
TABLE IV: Validation Pipeline Results for OpenACC

OpenACC Issue Type | Total Count | Pipeline 1 Correct Evaluations | Pipeline 1 Accuracy | Pipeline 2 Correct Evaluations | Pipeline 2 Accuracy
Removed ACC memory allocation / swapped ACC directive | 272 | 250 | 92% | 251 | 92%
Removed an opening bracket | 146 | 146 | 100% | 146 | 100%
Added use of undeclared variable | 151 | 151 | 100% | 151 | 100%
Replaced file with randomly-generated non-OpenACC code | 146 | 146 | 100% | 146 | 100%
Removed last bracketed section of code | 176 | 38 | 22% | 53 | 30%
No issue | 891 | 704 | 79% | 627 | 70%

TABLE V: Validation Pipeline Results for OpenMP

OpenMP Issue Type | Total Count | Pipeline 1 Correct Evaluations | Pipeline 1 Accuracy | Pipeline 2 Correct Evaluations | Pipeline 2 Accuracy
Removed OMP memory allocation / swapped OMP directive | 49 | 47 | 96% | 46 | 94%
Removed an opening bracket | 28 | 28 | 100% | 28 | 100%
Added use of undeclared variable | 26 | 26 | 100% | 26 | 100%
Replaced file with randomly-generated non-OpenMP code | 20 | 14 | 70% | 17 | 85%
Removed last bracketed section of code | 25 | 23 | 92% | 23 | 92%
No issue | 148 | 136 | 92% | 138 | 93%

TABLE VI: Overall Validation Pipeline Results

Datapoint | OpenACC | OpenMP
Total Count | 1782 | 296
Total Pipeline 1 Mistakes | 347 | 22
Total Pipeline 2 Mistakes | 408 | 18
Overall Pipeline 1 Accuracy | 80.53% | 92.57%
Overall Pipeline 2 Accuracy | 77.10% | 93.92%
Pipeline 1 Bias | -0.078 | -0.091
Pipeline 2 Bias | -0.294 | -0.111

Fig. 3: A Radar Plot for Validation Pipeline Results for OpenACC

Fig. 4: A Radar Plot for Validation Pipeline Results for OpenMP

For both OpenACC and OpenMP, LLMJ 1 demonstrated a higher overall accuracy than LLMJ 2 and performed slightly better on OpenACC than OpenMP, while LLMJ 2 demonstrated a roughly equal level of accuracy between OpenACC and OpenMP. LLMJ 1 also exhibited a consistently strong positive bias, while LLMJ 2 had a much smaller positive bias for OpenACC and a much larger positive bias for OpenMP. In all cases, the agent-based LLMJs exhibited a tendency towards passing invalid files as opposed to failing valid files. Compared to the non-agent-based LLMJ, both agent-based LLMJs exhibited drastically higher overall accuracy, and both exhibited a smaller positive bias for OpenACC.

TABLE VII: Agent-Based LLMJ Results for OpenACC

OpenACC Issue Type | Total Count | LLMJ 1 Correct Evaluations | LLMJ 1 Accuracy | LLMJ 2 Correct Evaluations | LLMJ 2 Accuracy
Removed ACC memory allocation / swapped ACC directive | 272 | 182 | 67% | 224 | 82%
Removed an opening bracket | 146 | 111 | 76% | 81 | 55%
Added use of undeclared variable | 151 | 128 | 85% | 126 | 83%
Replaced file with randomly-generated non-OpenACC code | 146 | 142 | 97% | 146 | 100%
Removed last bracketed section of code | 176 | 26 | 15% | 47 | 27%
No issue | 891 | 819 | 92% | 701 | 79%

TABLE VIII: Agent-Based LLMJ Results for OpenMP

OpenMP Issue Type | Total Count | LLMJ 1 Correct Evaluations | LLMJ 1 Accuracy | LLMJ 2 Correct Evaluations | LLMJ 2 Accuracy
Removed OMP memory allocation / swapped OMP directive | 49 | 23 | 47% | 22 | 45%
Removed an opening bracket | 28 | 16 | 57% | 13 | 46%
Added use of undeclared variable | 26 | 18 | 69% | 15 | 58%
Replaced file with randomly-generated non-OpenMP code | 20 | 13 | 65% | 17 | 85%
Removed last bracketed section of code | 25 | 18 | 72% | 12 | 48%
No issue | 148 | 137 | 93% | 142 | 96%

TABLE IX: Overall Agent-Based LLMJ Results

Datapoint | OpenACC | OpenMP
Total Count | 1782 | 296
Total LLMJ 1 Mistakes | 374 | 71
Total LLMJ 2 Mistakes | 457 | 75
Overall LLMJ 1 Accuracy | 79.01% | 76.01%
Overall LLMJ 2 Accuracy | 74.35% | 74.66%
LLMJ 1 Bias | 0.615 | 0.690
LLMJ 2 Bias | 0.168 | 0.840
Figures 5 and 6 present the accuracy of all three LLMJs for OpenACC and OpenMP, respectively. In almost all categories, the agent-based LLMJs outperformed the non-agent-based LLMJ, with the exception of valid test recognition for OpenACC (where the non-agent-based LLMJ outperformed LLMJ 2), and improper syntax recognition for OpenMP (where the non-agent-based LLMJ outperformed both agent-based LLMJs). LLMJ 2 consistently demonstrated a higher accuracy in recognizing improper directive usage than LLMJ 1, while LLMJ 1 exhibited a better recognition of improper syntax than LLMJ 2. Both agent-based LLMJs were also consistently able to recognize valid tests with a high degree of accuracy, with LLMJ 1 slightly outperforming LLMJ 2 for OpenACC.

Fig. 5: A Radar Plot for LLMJ Results for OpenACC

Fig. 6: A Radar Plot for LLMJ Results for OpenMP

VI. CONCLUSION

In this paper, we explore ways to assess the capability of LLM-as-a-Judge. We employ different techniques, such as negative probing and an agent-based approach, along with tailored prompts, to understand how the LLM evaluates the codes. Our results indicate that utilizing an agent-based prompting approach and setting up a validation pipeline structure significantly increased the quality of DeepSeek Coder's evaluations of tests used to validate compiler implementations of directive-based programming models. As part of our future work, we will incorporate Fortran code into our testing to ensure more comprehensive data collection and probing. We will also be exploring the automation of compiler test generation based on lessons learnt from this work.

ACKNOWLEDGMENT

The authors are very grateful to OpenACC for supporting this work. This research used resources of NERSC, a U.S. DOE Office of Science User Facility located at LBNL, operated under Contract No. DE-AC02-05CH11231 using NERSC ERCAP0029463. This material is also based upon work supported by the U.S. DOE under Contract DE-FOA-0003177, S4PST: Next Generation Science Software Technologies Project.

REFERENCES

[1] J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, "Bert: Pre-training of deep bidirectional transformers for language understanding," 2019. [Online]. Available: https://arxiv.org/abs/1810.04805
[2] OpenAI and A. et al., "Gpt-4 technical report," 2024. [Online]. Available: https://arxiv.org/abs/2303.08774
[3] G. Team and A. et al., "Gemini: A family of highly capable multimodal models," 2024. [Online]. Available: https://arxiv.org/abs/2312.11805
[4] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin, "Attention is all you need," 2017. [Online]. Available: https://arxiv.org/abs/1706.03762
[5] D. Guo, Q. Zhu, D. Yang, Z. Xie, K. Dong, W. Zhang, G. Chen, X. Bi, Y. Wu, Y. Li, F. Luo, Y. Xiong, and W. Liang, "Deepseek-coder: When the large language model meets programming – the rise of code intelligence," 2024. [Online]. Available: https://arxiv.org/abs/2401.14196
[6] C. Munley, A. Jarmusch, and S. Chandrasekaran, "Llm4vv: Developing llm-driven testsuite for compiler validation," 2024. [Online]. Available: https://arxiv.org/abs/2310.04963
[7] OpenACC Organization, "OpenACC." [Online]. Available: https://www.openacc.org/
[8] OpenMP Architecture Review Board, "OpenMP application program interface version 5.2," 2021. [Online]. Available: https://www.openmp.org/wp-content/uploads/OpenMP-API-Specification-5-2.pdf
[9] OpenAI, "New models and developer products announced at devday," 2024, https://openai.com/index/new-models-and-developer-products-announced-at-devday/, accessed: 2024-08-16.
[10] R. et al., "Code llama: Open foundation models for code," 2024. [Online]. Available: https://arxiv.org/abs/2308.12950
[11] J. Jiang, F. Wang, J. Shen, S. Kim, and S. Kim, "A survey on large language models for code generation," arXiv preprint arXiv:2406.00515, 2024.
[12] X. Hou, Y. Zhao, Y. Liu, Z. Yang, K. Wang, L. Li, X. Luo, D. Lo, J. Grundy, and H. Wang, "Large language models for software engineering: A systematic literature review," arXiv preprint arXiv:2308.10620, 2023.
[13] N. Baumgartner, P. Iyenghar, T. Schoemaker, and E. Pulvermüller, "Ai-driven refactoring: A pipeline for identifying and correcting data clumps in git repositories," Electronics, vol. 13, no. 9, p. 1644, 2024.
[14] A. T. McCabe, M. Björkman, J. Engström, P. Kuang, E. Söderberg, and L. Church, "Ironies of programming automation: Exploring the experience of code synthesis via large language models," in Companion Proceedings of the 8th International Conference on the Art, Science, and Engineering of Programming, 2024, pp. 12–21.
[15] D. Nichols, P. Polasam, H. Menon, A. Marathe, T. Gamblin, and A. Bhatele, "Performance-aligned llms for generating fast code," 2024. [Online]. Available: https://arxiv.org/abs/2404.18864
[16] L. Chen, P.-H. Lin, T. Vanderbruggen, C. Liao, M. Emani, and B. de Supinski, "Lm4hpc: Towards effective language model application in high-performance computing," 2023. [Online]. Available: https://arxiv.org/abs/2306.14979
[17] T. Kadosh, N. Hasabnis, V. A. Vo, N. Schneider, N. Krien, A. Wasay, N. Ahmed, T. Willke, G. Tamir, Y. Pinter et al., "Scope is all you need: Transforming llms for hpc code," arXiv preprint arXiv:2308.09440, 2023.
[18] W. Godoy, P. Valero-Lara, K. Teranishi, P. Balaprakash, and J. Vetter, "Evaluation of openai codex for hpc parallel programming models kernel generation," in Proceedings of the 52nd International Conference on Parallel Processing Workshops, 2023, pp. 136–144.
[19] P. Valero-Lara, A. Huante, M. A. Lail, W. F. Godoy, K. Teranishi, P. Balaprakash, and J. S. Vetter, "Comparing llama-2 and gpt-3 llms for hpc kernels generation," arXiv preprint arXiv:2309.07103, 2023.
[20] L. Zheng, W.-L. Chiang, Y. Sheng, S. Zhuang, Z. Wu, Y. Zhuang, Z. Lin, Z. Li, D. Li, E. Xing et al., "Judging llm-as-a-judge with mt-bench and chatbot arena," Advances in Neural Information Processing Systems, vol. 36, 2024.
[21] M. Schäfer, S. Nadi, A. Eghbali, and F. Tip, "An empirical evaluation of using large language models for automated unit test generation," IEEE Transactions on Software Engineering, 2023.
[22] G. Ryan, S. Jain, M. Shang, S. Wang, X. Ma, M. K. Ramanathan, and B. Ray, "Code-aware prompting: A study of coverage-guided test generation in regression setting using llm," Proceedings of the ACM on Software Engineering, vol. 1, no. FSE, pp. 951–971, 2024.
[23] K. Liu, Y. Liu, Z. Chen, J. M. Zhang, Y. Han, Y. Ma, G. Li, and G. Huang, "Llm-powered test case generation for detecting tricky bugs," arXiv preprint arXiv:2404.10304, 2024.
[24] GitHub, "Github copilot," 2024, accessed: 2024-08-16. [Online]. Available: https://github.com/features/copilot
[25] Cursor, 2024, accessed: 2024-08-16. [Online]. Available: https://www.cursor.com/
[26] A. Jarmusch, A. Liu, C. Munley, D. Horta, V. Ravichandran, J. Denny, K. Friedline, and S. Chandrasekaran, "Analysis of validating and verifying openacc compilers 3.0 and above," in 2022 Workshop on Accelerator Programming Using Directives (WACCPD). IEEE, 2022, pp. 1–10.
[27] T. Huber, S. Pophale, N. Baker, M. Carr, N. Rao, J. Reap, K. Holsapple, J. H. Davis, T. Burnus, S. Lee, D. E. Bernholdt, and S. Chandrasekaran, "Ecp sollve: Validation and verification testsuite status update and compiler insight for openmp," in 2022 IEEE/ACM International Workshop on Performance, Portability and Productivity in HPC (P3HPC), 2022, pp. 123–135.
[28] Lawrence Berkeley National Laboratory, 2024, accessed: 2024-08-16. [Online]. Available: https://docs.nersc.gov/systems/perlmutter/architecture/
ai_researcher
2
Adapting_Large_Language_Models_to_Log_Analysis_with_Interpretable_Domain_Knowledge.pdf
Adapting Large Language Models to Log Analysis with Interpretable Domain Knowledge

Yuhe Ji∗†, Yilun Liu∗†, Feiyu Yao†, Minggui He†, Shimin Tao†, Xiaofeng Zhao†, Su Chang†, Xinhua Yang†, Weibin Meng†, Yuming Xie†, Boxing Chen‡, Hao Yang†
†Huawei, China ‡Huawei Canada, Canada

arXiv:2412.01377v1 [cs.CL] 2 Dec 2024

Abstract—The increasing complexity of computer systems necessitates innovative approaches to fault and error management, going beyond traditional manual log analysis. While existing solutions using large language models (LLMs) show promise, they are limited by a gap between natural and domain-specific languages, which restricts their effectiveness in real-world applications. Our approach addresses these limitations by integrating interpretable domain knowledge into open-source LLMs through continual pre-training (CPT), enhancing performance on log tasks while retaining natural language processing capabilities. We created a comprehensive dataset, NLPLog, with over 250,000 question-answer pairs to facilitate this integration. Our model, SuperLog, trained with this dataset, achieves the best performance across four log analysis tasks, surpassing the second-best model by an average of 12.01%. Our contributions include a novel CPT paradigm that significantly improves model performance, the development of SuperLog with state-of-the-art results, and the release of a large-scale dataset to support further research in this domain.

Index Terms—log analysis, continual pre-training, large language model, instruction tuning

I. INTRODUCTION

As computer systems and programs grow increasingly complex [1]–[3], the inevitability of faults and errors necessitates innovative solutions that extend beyond the traditional reliance on experienced specialists sifting through extensive logs. This labor-intensive approach faces challenges due to the unpredictable nature of faults and errors, the sheer volume of logs, and the specialized knowledge required for effective log analysis. In response, there has been a burgeoning interest in leveraging large language models (LLMs) to enhance the efficiency and effectiveness of log analysis tasks. In particular, significant advancements have been made in log parsing with tools such as [4]–[6], which utilize advanced LLMs combined with various prompting strategies to streamline the process. Similarly, in the realm of log anomaly detection, recent studies and tools [7]–[9] have focused on harnessing these powerful models to identify inconsistencies and potential issues within large log datasets. In this paper, LLMs are defined as language models with at least 7 billion (7B) parameters [10]. Compared to smaller models, the advantages of LLMs in log analysis primarily lie in the interpretability of their analysis results [9] and their robust performance in online scenarios characterized by limited training data [6]. This shift towards LLM-based automated log analysis highlights a broader trend in program comprehension: the integration of state-of-the-art (SOTA) artificial intelligence to tackle complex challenges in system maintenance and diagnostics, offering a glimpse into the future of IT infrastructure management.

∗ Equal contribution. Corresponding author (liuyilun3@huawei.com).

Fig. 1. Illustration on differences of three LLM-based log analysis approaches: prompting or fine-tuning (a) on general-purpose LLMs, (b) on domain-adapted LLMs and (c) on LLMs infusing interpretable domain knowledge (SuperLog).
While these methods showcase promising advancements, their applicability in real-world scenarios remains constrained. As shown in Fig. 1(a), most works attempt to directly prompt general-purpose LLMs to perform log tasks, which may lead to suboptimal performance due to the inherent gap between natural language and domain-specific language (i.e., logs). For instance, a study by [8] illustrates that requiring ChatGPT to continuously summarize significant system events from historical logs and predict the current system state based on prompt skills falls short of expectations. Similarly, [6] attempts to equip ChatGPT with a set of advanced prompting strategies related to log tasks, achieving high performance in log parsing but still struggling with anomaly detection in zero-shot scenarios. This suboptimal performance may stem from a knowledge gap between logs and human language, as logs are typically concise, often grammatically incorrect, and lack comprehensive background information by their very nature [11]–[13]. Powerful proprietary LLMs such as GPT-4 [14] and Claude-3.5 [15] may help mitigate this knowledge gap through their inference capabilities [16], [17]. However, access to these proprietary LLMs is usually via APIs, necessitating an internet connection and retries upon access failures, which can hardly meet the security, robustness, and immediacy requirements of industrial applications. In contrast, open-source LLMs, such as the LLaMA model families [18], offer greater deployment potential in real-world applications, yet the knowledge gap is even more pronounced for open-source LLMs attempting to perform log analysis tasks. This was noted by Liu et al. [9], who utilized Vicuna [19] (fine-tuned from LLaMA) for log analysis and found a substantial performance discrepancy compared to ChatGPT.

Before the advent of LLMs, several studies improved language models (with approximately 0.5B to 1B parameters) through continual pre-training (CPT) [20] on log data, thereby infusing domain knowledge into these models to enhance performance on log analysis tasks [21]–[23], represented by the domain-adapted LLM in Fig. 1(b). For example, Biglog [23] pre-trained the BERT model [24] on 83GB of raw log records collected from real-world devices [25], achieving high accuracy across multiple tasks. Nevertheless, the limited interpretability of raw log data presents a significant challenge for language models, as most of their pre-trained corpora consist of plain texts in natural language. This disparity in CPT dataset distribution may lead to catastrophic forgetting [26], a phenomenon of performance degradation often observed when newly added training data originate from a significantly different distribution. Furthermore, compared to BERT-like language models, LLMs are known for generating justifications alongside their prediction results [9]. The limited interpretability of domain knowledge during CPT may hinder the interpretative capabilities of LLMs. Training directly on log data can reduce the likelihood of LLMs providing explanations and justifications in natural language for their predictions, resulting in a drastic decline in user-friendliness, as observed in our experimental results in Table VI.

To address the challenge of insufficient domain knowledge in real-world log analysis using LLMs, this paper aims to enhance the performance of general-purpose open-source LLMs in log analysis tasks by integrating interpretable domain knowledge through CPT, as shown in Fig. 1(c).
By incorporating this interpretable knowledge, we improve the LLMs' performance on log-related tasks while preserving their inherent natural language comprehension and instruction-following abilities. To facilitate reliable integration of domain-specific knowledge, we have developed a large-scale dataset called NLPLog, which contains over 250,000 question-and-answer pairs presented in natural language, emphasizing comprehension and analysis of real-world logs. This dataset serves as a valuable source of interpretable knowledge for LLMs. As a result, our trained model, SuperLog, which undergoes the CPT phase using NLPLog, not only excels in executing log analysis tasks but also maintains a high degree of interpretability, aligning closely with industry demands for practical and understandable outcomes. Our contributions are as follows:

• We introduce a novel CPT paradigm that boosts large model performance by injecting interpretable knowledge. Ablation studies verify that models trained under this paradigm achieve substantial performance gains over traditional CPT methods: in the ablation study, SuperLog achieved an average performance improvement of 23%.
• Building upon this paradigm, we developed SuperLog, which demonstrated superior performance across all four log analysis tasks under two distinct fine-tuning strategies, surpassing the second-best model by an average of 12.01%. Furthermore, SuperLog demonstrated exceptional performance on logs from unseen domains.
• We open-sourced a meticulously curated, large-scale dataset, rich in log-related knowledge and derived from real-world log analysis practices, providing essential guidance for advancing new training paradigms.1

II. RELATED WORK

A. LLMs & Training Regimes

LLMs have established themselves as pivotal tools in natural language processing, transforming our approach to language understanding and generation tasks. The training of LLMs typically involves multiple phases, each critical for achieving state-of-the-art performance.

The initial phase, known as pre-training, involves exposing the model to extensive amounts of unlabeled text data. This phase enables the model to learn general language patterns and representations, forming a robust linguistic foundation [27]. Pre-training is fundamental, as it equips the model with the ability to understand and generate coherent text, which can be further refined for specific applications.

To build the language contexts for LLMs over specialized domains, continual pre-training (CPT) is often employed. This technique involves updating the model's knowledge base with new domain-specific data, ensuring that the model adapts to the specialized language contexts [28]. CPT is especially crucial in fields with specialized language requirements that differ from general-purpose needs, such as medicine [29], law [30], and software operations and maintenance (O&M) [23].

Following pre-training and CPT, LLMs undergo a supervised fine-tuning phase, where they are adapted to specific tasks using labeled datasets. This phase is crucial for task specialization, enabling the model to apply its broad linguistic knowledge to particular challenges such as sentiment analysis [31], question answering [32], or text classification [33]. By fine-tuning on task-specific data, LLMs can achieve higher accuracy and versatility, making them feasible for a wide range of applications.
Our work redefines the paradigm of CPT for log analysis by infusing interpretable domain knowledge into LLMs. By constructing an interpretable CPT dataset that combines log data with corresponding natural language explanations, the lack of log-related domain knowledge in general-purpose open-source LLMs is addressed.

1 https://github.com/J-York/SuperLog

B. Log Analysis

Log analysis is a multifaceted field encompassing various aspects such as log parsing, anomaly detection, fault diagnosis, and interpretation. This comprehensive approach to log analysis ensures efficient utilization of log data, enhancing the reliability and performance of software systems.

1) Log Parsing: Log parsing is the cornerstone of log analysis, focusing on efficiently reducing log data to its core elements. This is achieved by generating templates from raw logs that capture essential patterns, facilitating subsequent analysis, including anomaly detection. Traditionally, coarse-grained parsing techniques dominated the field, employing methods such as clustering [11], [34], heuristics [35], [36], and tree-structured approaches [12], [37], [38]. These methods generally involve extracting static log components, replacing variable parts with placeholders like <*>. Novel tools like LogParse [39] and LogStamp [13] harness word-level classifiers to extract dynamic patterns from logs directly. Furthermore, research by Huo et al. [40] and Li et al. [41] has advanced the semantic modeling and classification of log variables. In the latest developments, LLMs have been applied to parsing tasks. Techniques such as LogPPT [17] and LogPrompt [9] have implemented prompt engineering strategies to enhance real-time parsing efficiency. Techniques by Jiang et al. [42] and Zhong et al. [43] further optimize parsing using adaptive mechanisms and hybrid systems with LLMs.

2) Log-based Anomaly Detection: Log-based anomaly detection aims to uncover irregular patterns indicative of potential issues within log data. This detection is often performed at the session or template level by initially creating templates for log summarization. For session-level detection, logs are compiled into sessions based on time or length constraints, with methods classifying an entire session as anomalous if any underlying template is unusual. Session-level methods include classification- and forecast-based approaches. Classification-based techniques, such as those used in LogRobust [44] and Lu et al. [45], leverage machine learning models like LSTMs and CNNs. Techniques by Le et al. [46], which integrate a BERT encoder [47], showcase innovations eliminating the need for explicit parsing. Forecast-based methods exemplified by DeepLog [48] and LogAnomaly [49] involve detecting deviations from historical log patterns. LogCraft [50] further integrates multiple prediction methods through a meta-learning framework. Template-level anomaly detection methods, such as LogPrompt [9] using LLM-based chain-of-thought prompting, and RAGLog [51] incorporating retrieval-augmented generation, enhance anomaly detection efforts. Additionally, benchmarks by Cui et al. [52] assess performance specifically for template-level analyses.

3) Log Fault Diagnosis: Log-based fault diagnosis expands on anomaly detection by identifying specific causes of anomalies, thereby enabling timely issue resolution. Techniques in this area often involve root cause analysis through correlation and dependency mapping among detected anomalies [53].
Leveraging LLMs, error patterns can be correlated with known fault signatures, allowing for precise diagnostic measures [52], [54]. Fault diagnosis benefits from the integration of automated tools developed through machine learning to offer predictive insights, thereby reducing system downtime.

4) Log Interpretation: Interpreting logs involves explaining the significance of log events using natural language, making them more accessible for human understanding. As part of this, advanced systems aim to generate natural language summaries for key aspects of log data. For instance, Liu et al. [6] propose methods for explaining log elements through narrative descriptions, assisting in decision-making processes. The integration of LLMs aids in deriving explanatory content, enabling better understanding and actionable insights in complex systems [54]. Enhancements in tools provide robust interpretation capabilities through interactive systems, facilitating improved incident management and strategy formulation [55], [56].

These distinct aspects of log analysis collectively improve system performance by leveraging refined parsing techniques, enhancing anomaly detection precision, optimizing fault diagnosis, and enabling intuitive log interpretation.

III. METHODOLOGY

General-purpose LLMs inherently lack specialized O&M domain knowledge, which results in suboptimal accuracy and reliability when engineers attempt to leverage them for log analysis [23]. To address this gap, we propose SuperLog, an LLM adapted for log analysis by infusing interpretable domain knowledge. The overview of SuperLog is shown in Fig. 2. The injection of log-related knowledge into the general-purpose LLM is achieved through a CPT phase. During this phase, we enable the model to acquire interpretable log-related knowledge, smoothly enhancing its domain expertise in log analysis while retaining its original language comprehension and interpretation abilities.

Fig. 2. Illustration on the interpretable knowledge construction and continual pre-training of SuperLog.

By developing NLPLog, a specialized and large-scale dataset, we can infuse domain knowledge into an LLM while retaining its interpretability, thereby enhancing the LLM's ability to interpret and apply relevant O&M expertise. Each entry in NLPLog is in natural language and is structured as a Q&A pair involving a specific real-world log, with the question asking for an analysis of the input log and the answer providing a thorough analysis result. On one hand, the logs in the Q&A pairs come from real-world practices within 14 different domains [25], and the questions are meticulously designed to cover five necessary dimensions of log-related knowledge, ensuring that the model is well-equipped with comprehensive knowledge for handling the diverse and nuanced queries encountered in real-world O&M environments. On the other hand, the answers embed interpretable knowledge directly into the training data, providing log-related contexts for LLMs during the CPT phase. This approach not only aids in improving the model's interpretability by offering clear, example-driven knowledge but also aligns the training process with practical O&M needs. In contrast, traditional methods typically rely on raw log data for CPT. While this approach can facilitate training on large volumes of log data, it often lacks the interpretability that our Q&A pairs in natural language provide and can lead to incompatibility with the pre-acquired knowledge of LLMs.
By incorporating log-related domain knowledge and contexts in natural language, our approach bridges the gap between theoretical understanding and practical application, enabling a more effective fine-tuning process for downstream tasks. The remainder of this section is divided into two parts: first, we describe the construction process of the NLPLog dataset and how it enables interpretability within the O&M context. Then, we provide a detailed explanation of the CPT process in our proposed approach, focusing on how it adapts general-purpose LLMs with domain-specific knowledge.

A. Construction of NLPLog

In this section, we introduce the construction process of NLPLog, the dataset for pre-training SuperLog. In particular, we designed a meticulous framework to ensure data quality during the construction process.

1) Overview: To construct the NLPLog dataset, we chose 14 different log domains from LogHub [25], an open-source dataset rich in real-world logs from different domains. These domains include operating systems, supercomputers, distributed systems, and software applications, thereby encouraging models trained on the NLPLog dataset to focus on domain-invariant features and gain more robustness and generalization ability. However, since the log events are collected from real-world devices and systems within continuous time windows, there are a large number of similar or even duplicated logs in the raw LogHub dataset, which not only significantly increases the cost of creating NLPLog, but also may introduce unnecessary noise to the model during the CPT phase. To reduce the redundancy in the dataset, we designed a data pre-processing framework which aims to select the most representative logs and generate interpretable knowledge from these logs in the form of Q&A pairs, with three phases: Deduplication, Log Event Reconstruction, and Interpretable Knowledge Generation. Statistics of NLPLog are shown in Table I.

The Deduplication phase is designed to extract key elements from large volumes of logs that represent significant log events, aiming to reduce the total number of logs and balance the distribution differences between categories. This is achieved via applying deep-learning-based log parsing techniques, where logs representing the same event are consolidated into a unified log template.

TABLE I: STATISTICS OF NLPLOG, OUR CONSTRUCTED CPT DATASET

Domain | Log Count | Q&A Pairs | Proportion
OpenSSH | 54 | 270 | 0.19%
HDFS | 409 | 2,045 | 1.54%
HPC | 159 | 795 | 0.59%
Windows | 9,605 | 48,025 | 36.12%
Mac | 708 | 3,540 | 2.63%
Thunderbird | 13,069 | 65,345 | 49.04%
Spark | 369 | 1,845 | 1.38%
Linux | 654 | 3,270 | 2.42%
Zookeeper | 104 | 520 | 0.39%
HealthApp | 195 | 975 | 0.73%
Hadoop | 270 | 1,350 | 1.01%
BGL | 607 | 3,035 | 2.26%
Android | 25,369 | 126,845 | 18.86%
Proxifier | 18 | 90 | 0.07%

Log Event Reconstruction follows the deduplication phase. It reconstructs a log event from each extracted template to avoid information loss. This is achieved by recording variables during the deduplication and populating them into the templates afterwards.

In the phase of Interpretable Knowledge Generation, for each reconstructed log event, we utilize ChatGPT with carefully crafted prompts to construct Q&A pairs covering five essential dimensions of log-related knowledge, ensuring that the model is equipped with comprehensive domain knowledge.
2) Deduplication: Deduplication is a critical step in our framework, aimed at reducing redundancy by identifying and extracting log templates from large volumes of semi-structured log data. Logs consist of both a fixed part (template), determined by log printing statements that describe program execution events, and a dynamic part (variable) containing dynamic information such as LineID, Date, Time, and IP. Since log templates provide key insights into program execution and are much fewer in quantity than the total logs, accurate extraction of these templates enhances log analysis efficiency by reducing data volume and focusing on unique events.

To support this, we employed LogPPT [4] as the log template extraction algorithm. LogPPT utilizes pre-trained language models and a small set of labeled samples to identify log templates and variables. This approach improves deduplication efficiency and accuracy over traditional rule-based methods. We used 2,000 manually parsed log entries from each domain available on LogHub as training samples, and then applied the trained LogPPT models to the entire set of logs within these domains to obtain their templates.

Once the log template extraction algorithm was applied, we separated logs into template and variable parts. Duplicate log templates were removed, leaving 51,590 distinct log templates—a comprehensive set of unique events that greatly reduces data redundancy and serves as a solid foundation for further analysis.

3) Log Event Reconstruction: The process of Log Event Reconstruction can be formalized as the generation of log events from the set of log templates {T_1, T_2, ..., T_n} and the set of variables {V_1, V_2, ..., V_m}. Each log template T_i consists of a fixed part and variable placeholder parts (conventionally the "<*>" parts), expressed as:

T_i = FixedPart(T_i) + {Ph_1, Ph_2, ..., Ph_k},

where FixedPart(T_i) represents the fixed part of the template, and {Ph_1, Ph_2, ..., Ph_k} denotes the set of placeholders that correspond to dynamic variables.

To generate a log event, we need to populate the placeholders in template T_i with appropriate variable values V_j ∈ {V_1, V_2, ..., V_m}. Suppose we have a corresponding set of variables {v_1, v_2, ..., v_k} for each template T_i, recorded during the deduplication process, where each v_j matches a placeholder Ph_j; the log event can then be represented as:

LogEvent(T_i) = FixedPart(T_i) + {v_1, v_2, ..., v_k}

That is, by sequentially populating the placeholders in the template with the variables v_j, a complete log event is generated. Through the formal process described above, we completed the construction of the log dataset, which is deduplicated and lossless in log content and will serve as the key raw material for generating training data for SuperLog.
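A minimal sketch of this reconstruction step, assuming placeholders are filled strictly left to right; the template and variables in the example are hypothetical, for illustration only.

def reconstruct_log_event(template: str, variables: list) -> str:
    """Fill the <*> placeholders of a parsed template, left to right."""
    event = template
    for value in variables:
        event = event.replace("<*>", value, 1)
    return event

# Hypothetical example:
template = "Failed password for <*> from <*> port <*> ssh2"
print(reconstruct_log_event(template, ["root", "183.62.140.253", "22"]))
# -> Failed password for root from 183.62.140.253 port 22 ssh2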
This knowl- edge dimension focuses on identifying patterns within logs to simplify data extraction, making the log messages more manageable and facilitating efficient analysis. Log Event Insights. Log Event Insights transform technical log data into clear, human-readable insights. By expanding on the semantic meaning of key log components, this dimension provides a more accessible understanding of log content, enhancing its relevance and clarity across diverse operational environments. Root Cause Analysis. Root Cause Analysis is critical in log applications, as it identifies the underlying causes of system anomalies. This knowledge dimension aids in pinpointing the source of issues, improving troubleshooting accuracy and enabling timely corrective actions. Component Correlation Analysis. In complex systems, understanding the relationships between different components is vital. Component Correlation Analysis identifies and ana- lyzes these interconnections within the logs, providing deeper insights into system interactions, which ultimately improves diagnostic precision and issue resolution. Potential Failure Forecast. Failure Forecasting is critical in log analysis, involving the identification of patterns that precede potential failures. This knowledge dimension helps in predicting system failures by recognizing early warning signs in logs, allowing for proactive maintenance and preventing downtime. By learning knowledge from these five critical dimensions, the model can not only comprehensively accumulate structural and semantic knowledge of real-world log events, but also prepare for its reasoning and associative capabilities when performing log analysis tasks in real applications. In order to mitigate the risk of overfitting during model training, which could result from a uniform questioning ap- proach, we designed 10 different question formulations for each dimension. Specifically, for each constructed log data, we randomly select a question from the 10 candidates for each knowledge dimension, and the log itself as an input prompt to interact with ChatGPT, acquiring its response as the answer of the Q&A pair. The statistics of final constructed dataset is displayed in Table I. B. Continual Pre-Training Unlike standard fine-tuning approaches that directly adapt a pre-trained model to specific tasks, we employed a Continual Pre-Training (CPT) strategy before fine-tuning to enhance Superlog’s capability in the log analysis domain. This strategy involves extending the pre-training process on domain-specific data before fine-tuning on task-specific datasets. The decision to use CPT was motivated by the intrinsic limitations of traditional fine-tuning, particularly when applied to domains with specialized data formats that differ significantly from the general-purpose corpora used in initial pre-training. jargon, timestamps, 1) Limitations of Direct Fine-Tuning: Log data, though output as text, has a unique structure and function that distinguishes it from ordinary natural language. It contains identifiers, and a wealth of technical domain-specific terminology, which are uncommon in typi- cal language datasets used for initial large-scale model pre- training. Moreover, log-based tasks often require strict syntac- tic precision and specialized responses, such as interpreting anomalies, identifying root causes, or performing event cor- relation. 
B. Continual Pre-Training

Unlike standard fine-tuning approaches that directly adapt a pre-trained model to specific tasks, we employed a Continual Pre-Training (CPT) strategy before fine-tuning to enhance SuperLog's capability in the log analysis domain. This strategy extends the pre-training process on domain-specific data before fine-tuning on task-specific datasets. The decision to use CPT was motivated by the intrinsic limitations of traditional fine-tuning, particularly when applied to domains with specialized data formats that differ significantly from the general-purpose corpora used in initial pre-training.

1) Limitations of Direct Fine-Tuning: Log data, though output as text, has a unique structure and function that distinguishes it from ordinary natural language. It contains jargon, timestamps, identifiers, and a wealth of technical domain-specific terminology, which are uncommon in typical language datasets used for initial large-scale model pre-training. Moreover, log-based tasks often require strict syntactic precision and specialized responses, such as interpreting anomalies, identifying root causes, or performing event correlation. These challenges render standard fine-tuning inadequate, as it struggles to transfer general language knowledge to such domain-specific applications effectively. Fine-tuning alone risks overfitting models to narrow task-specific datasets, thereby compromising their robustness and generalization capability when applied to varied log formats or novel tasks.

2) The Benefits of Continued Pre-Training: Our proposed approach with a CPT phase addresses the above challenges by re-aligning the model's parameters with the characteristics of the log domain through additional pre-training on large-scale log-related knowledge. For this, we leveraged the NLPLog dataset, the comprehensive collection of interpretable knowledge discussed in Section III-A. The NLPLog dataset was specifically designed to bridge the gap between human language and system logs, providing high-quality insights on representative log data from multiple domains. By using this dataset, CPT exposes the model to rich log-related contexts in natural language, ensuring it captures domain-specific knowledge while retaining its general-purpose abilities.

3) Practical Implementation of CPT for SuperLog: During the CPT phase, we utilized the 7B version of LLaMA2 [58] as the base model and performed self-supervised training on over 250,000 entries from the NLPLog dataset. To prevent catastrophic forgetting of general language knowledge while aligning the model more closely with the log domain, we carefully adjusted the learning rate and batch size. The initial learning rate was set at 1e-5, and training was conducted for 1.5 epochs to ensure that the model captured log data characteristics while retaining its capabilities in broad language tasks. Following Shi et al. [59], we define a sequence of domain distributions $\{D_t\}_{t=1}^{T}$, where each $D_t$ represents the $t$-th joint distribution over the shared input space $X$ and label space $Y$, corresponding to logs from different systems. We aim to find the optimal hypothesis $h^* : X \rightarrow Y$ that minimizes the cumulative error across all domains. Specifically, we define the objective function as:

$h^* = \arg\min_{h} \sum_{t=1}^{T} \mathbb{E}_{(x,y)\sim D_t}\left[\mathbb{I}(h(x) \neq y)\right]$

where $\mathbb{I}$ is the indicator function, which evaluates whether the predicted token $h(x)$ differs from the answer $y$ given by ChatGPT. Through these carefully designed training settings, SuperLog not only improved its performance in the log analysis domain during the CPT process but also enhanced its adaptability and robustness across different systems and application environments.
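An empirical counterpart of this objective can be written as a short sketch. It estimates the per-domain expectation by Monte-Carlo averaging over samples; the function names and data layout are assumptions for illustration only.

# Illustrative empirical version of the objective above: the cumulative
# 0/1 error of a hypothesis `h`, summed over samples drawn from the
# domain distributions D_1..D_T. Names are illustrative.

def cumulative_error(h, domain_samples):
    """domain_samples: list over domains; each entry is a list of
    (x, y) pairs sampled from that domain's distribution D_t."""
    total = 0.0
    for samples in domain_samples:
        errors = sum(1 for x, y in samples if h(x) != y)
        total += errors / len(samples)  # estimate of E[I(h(x) != y)]
    return total

# h* would then be the minimizer over a hypothesis class, e.g.
# min(candidates, key=lambda h: cumulative_error(h, domain_samples)).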
IV. EXPERIMENTS

In this section, we evaluate the practical performance of SuperLog on log analysis tasks. The content of this section is organized as follows: In Section IV-A, we describe how SuperLog is applied to perform log analysis, including the two fine-tuning paradigms we employed. Section IV-B details our implementation specifics. In Section IV-C, we summarize our proposed research questions (RQs) and the key findings addressing these questions. Sections IV-D through IV-F present the experiments organized around each RQ and their respective results.

A. Performing Log Analysis Tasks using SuperLog

To comprehensively evaluate the log analysis capabilities of SuperLog, four log analysis tasks were selected: log parsing, log anomaly detection, log-based fault diagnosis, and log interpretation. In addition, we trained SuperLog using two fine-tuning approaches and evaluated their effectiveness: traditional task-based fine-tuning, and fine-tuning designed to enhance instruction-following capabilities. In this section, we introduce these two fine-tuning methods and present the log-domain test tasks conducted after fine-tuning.

1) Task-based Fine-tuning: The first approach to SFT follows a more traditional paradigm [23], focusing on fine-tuning the model for specific downstream tasks. In the context of log analysis, this method tailors the model to tasks such as log parsing and anomaly detection, ensuring that it can adapt to the nuanced requirements of these tasks with high accuracy. For this purpose, we utilized popular public task-specific evaluation sets in log analysis. For the log parsing task, we leveraged the 2,000 manually corrected parsing results provided by LogHub-2k [25] for each log domain and used the first 10% of logs to form instruction pairs for fine-tuning SuperLog. Instruction pairs for anomaly detection were derived from the BGL and Spirit benchmark datasets [60]. Liu et al. [9] extracted log templates from these two datasets and released pairs of log templates and anomaly labels. We randomly selected approximately 10% of each dataset to create instruction pairs, reserving the rest for evaluation. Each subset retained around 10% abnormal samples, maintaining the original distribution of normal and anomalous logs. Using these datasets, SuperLog was fine-tuned over 3 epochs with a learning rate of 1e-5. This task-specific fine-tuning enabled the model to quickly adapt to the structured format and intricacies of each log domain, thereby enhancing its performance in downstream tasks.

The primary advantage of this approach is rapid adaptation to specific tasks, allowing the model to exhibit strong performance on targeted tasks immediately following fine-tuning. However, it requires constructing separate training datasets and results in separate fine-tuned models for each task and each domain, which can be resource-intensive and time-consuming. To balance this limitation, we also applied a second fine-tuning strategy based on general-purpose instruction-following paradigms.

2) Fine-tuning for instruction-following: The second approach to SFT is designed to enable the model to follow general user instructions and interact more flexibly across tasks. Instead of focusing on specific log analysis tasks, this method trains the model using a broad set of open-domain instruction-following examples. The goal is to enhance the model's ability to respond accurately to a wide range of instructions, improving its versatility in real-world scenarios where precise task-specific data might not always be available. For this approach, we utilized the Alpaca dataset [61], a publicly available dataset of instruction-following examples. To further improve the quality and diversity of the instructions, we applied the Cluster and Ranking (CaR) method proposed by Ge et al. [62]. This process involves clustering similar instructions and assigning quality scores to them based on relevance and richness, as sketched below. From this pool, we extracted 1,000 high-quality instructions, ensuring a diverse and robust training set. SuperLog was fine-tuned on this dataset over three epochs with the same learning rate of 1e-5. The general-purpose instruction fine-tuning process equipped the model with the capability to follow various user instructions, making it more interactive and adaptable. However, this method relies heavily on the domain knowledge injected during the CPT phase, as it does not directly incorporate task-specific data. Therefore, the model's performance on downstream tasks depends on the success of CPT in embedding domain-specific knowledge.
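A rough sketch of the cluster-then-rank idea follows; it is not the reference CaR implementation. The embed and quality_score callables stand in for an embedding model and a learned quality scorer, and the cluster count and per-cluster quota are arbitrary choices made here for illustration.

import numpy as np
from sklearn.cluster import KMeans

# Cluster instructions by embedding, then keep the top-scoring items
# from each cluster so that diversity is preserved under a budget.

def select_instructions(instructions, embed, quality_score,
                        n_clusters=20, budget=1000):
    X = np.stack([embed(t) for t in instructions])
    labels = KMeans(n_clusters=n_clusters, n_init="auto").fit_predict(X)
    per_cluster = budget // n_clusters
    selected = []
    for c in range(n_clusters):
        members = [i for i, l in enumerate(labels) if l == c]
        members.sort(key=lambda i: quality_score(instructions[i]),
                     reverse=True)
        selected += [instructions[i] for i in members[:per_cluster]]
    return selected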
B. Implementation Details

The experiments were conducted on a Linux server equipped with eight Tesla A100 80G GPUs; the Linux kernel version is 5.4.0 and the CUDA version is 12.2. SuperLog uses LLaMA-2-7B [58], a foundation LLM open-sourced by Meta AI, as its base model. During the CPT phase, we employed the dataset shown in Table I, setting the learning rate to 1e-5. The training was conducted for 1.5 epochs with a batch size of 16. In the instruction fine-tuning phase, we used 1,000 Alpaca dataset entries, filtered through CaR [62], [63], to enhance the model's instruction-following capabilities. The learning rate was maintained at 1e-5, with training conducted over 3 epochs. Other parameters in both phases were kept at the default settings provided by LLaMA-Factory [64]; the settings are summarized below.
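For reference, the reported settings collected in one place. The keys are descriptive labels chosen here, not actual LLaMA-Factory configuration options.

TRAINING_CONFIG = {
    "base_model": "LLaMA-2-7B",
    "cpt": {"dataset": "NLPLog (~250k Q&A pairs)",
            "learning_rate": 1e-5, "epochs": 1.5, "batch_size": 16},
    "sft": {"dataset": "Alpaca, 1,000 CaR-filtered instructions",
            "learning_rate": 1e-5, "epochs": 3},
    "hardware": "8x Tesla A100 80G, CUDA 12.2, Linux kernel 5.4.0",
}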
C. Research Question & Key Findings

In this section, we present the research questions (RQs) we addressed during the evaluation of SuperLog, along with the key findings we obtained.

RQ1: Can SuperLog demonstrate strong performance on log-related downstream tasks?

Key Findings: In Section IV-D, we evaluated the log analysis capabilities of SuperLog across four tasks using two different SFT methods, comparing its performance with existing methods. Experimental results show that SuperLog outperformed current approaches in all tasks. Furthermore, as a 7B model, SuperLog demonstrated capabilities surpassing those of LLMs such as GPT-4 and Claude-3 Sonnet in some tasks. These findings indicate that the proposed training paradigm effectively enhances the model's capabilities in log analysis, enabling reliable domain knowledge injection.

RQ2: To what extent does training on a carefully constructed, interpretable dataset improve SuperLog's performance?

Key Findings: In Section IV-E, we conducted ablation experiments to validate the improvements brought by our training approach. We compared the performance of SuperLog, SuperLog w/o CPT, and SuperLog w/o Interpretable Knowledge across four log analysis tasks. SuperLog achieved the best performance in all tasks, confirming that continual pre-training on an interpretable dataset is highly effective. As shown in the experimental results in Table VI, incorporating interpretable data significantly enhanced the model's understanding of log data and its ability to answer questions with greater expertise. Furthermore, the experimental results show that training solely on raw log data leads to an overall improvement in most capabilities compared to the original model, but the model's performance on log interpretation tasks decreases significantly. This supports the idea that training with interpretable data, as in SuperLog, enhances the model's understanding of domain knowledge.

RQ3: How does SuperLog perform on logs from previously unseen domains?

Key Findings: In Section IV-F, we benchmarked the performance of SuperLog on unseen domain logs. The results demonstrated that SuperLog outperformed existing baseline models, showing a 22.4% higher score compared to the next best, OWL. This confirmed that SuperLog maintains strong alignment with human-like understanding and expert annotations, indicating its effectiveness even in new and unfamiliar domains.

D. RQ1: Benchmarking on Log Analysis Capabilities

1) Under Task-based Fine-tuning:

Log Parsing. This benchmark assesses the performance of log parsing on the last 90% of log entries from five distinct domains within the LogHub-2k dataset. In this study, we evaluate SuperLog against 10 established log parsing approaches, which include cluster-based methods [34], [65], heuristic methods [35], [36], [66], tree-based methods [12], [37], machine learning methods [39], and LLM-based methods [6], [13]. Consistent with the experimental framework outlined by Liu et al. [9], all baseline models are trained using the initial 10% of logs from each domain. An exception is LogPrompt [6], which employs ChatGPT for log parsing without a training phase. Following Liu et al. [9], the evaluation criteria include both coarse-grained and fine-grained metrics. For the coarse-grained evaluation, the RandIndex [67] is used. This metric evaluates the accuracy of log clustering by determining whether logs with the same template are correctly grouped together, without considering the accuracy of the variables within the extracted templates. The fine-grained metric is the F1-score, which evaluates how accurately the variable parts in logs are identified. To compute the F1-score, the predicted log template is broken down into a sequence of tokens. For each token, the values TP, TN, FP, and FN are counted. If a token is truly a variable and is correctly (or incorrectly) identified, the value of TP (or FN) is incremented by one. If a token is not a variable and is correctly (or incorrectly) predicted, the value of TN (or FP) is incremented by one. The F1-score is calculated as the harmonic mean of Recall (Recall = TP/(TP+FN)) and Precision (Precision = TP/(TP+FP)); a reference computation is sketched after Table II.

SuperLog achieved outstanding results on the log parsing benchmark, surpassing all existing methods significantly in both coarse-level and fine-level evaluations. Specifically, SuperLog outperformed the best baseline methods with an average improvement of 18.3% in RandIndex (RI) and 13.3% in F1-score. These superior results indicate that SuperLog is highly effective at accurately identifying variable components within logs and extracting precise coarse-level templates, setting a new standard in log parsing capabilities. As demonstrated in Table II, SuperLog showcases its robustness and adaptability across various datasets.

TABLE II: PERFORMANCE OF LOG PARSING UNDER TASK-BASED SFT
(RI stands for coarse-level RandIndex; F1 stands for fine-level F1-score)

Methods      HDFS         Hadoop       Zookeeper    Linux        Proxifier
             RI    F1     RI    F1     RI    F1     RI    F1     RI    F1
IPLoM        0.914 0.389  0.636 0.068  0.787 0.225  0.695 0.225  0.822 0.500
LKE          0.861 0.424  0.150 0.198  0.787 0.225  0.825 0.388  0.379 0.309
LogSig       0.872 0.344  0.651 0.050  0.787 0.225  0.715 0.146  0.559 0.339
FT-tree      0.908 0.385  0.668 0.046  0.773 0.186  0.709 0.211  0.722 0.420
Spell        0.871 0.000  0.721 0.058  0.102 0.045  0.706 0.091  0.621 0.000
Drain        0.914 0.389  0.647 0.068  0.787 0.225  0.695 0.225  0.822 0.500
MoLFI        0.871 0.000  0.699 0.095  0.899 0.000  0.410 0.026  0.621 0.000
LogParse     0.907 0.632  0.349 0.502  0.982 0.348  0.825 0.588  0.490 0.334
LogStamp     0.954 0.523  0.927 0.594  0.992 0.275  0.760 0.658  0.811 0.438
LogPrompt    0.890 0.863  0.879 0.763  0.948 0.889  0.758 0.766  0.567 0.653
SuperLog     0.979 0.988  0.982 0.942  0.998 0.815  1.000 0.914  0.998 0.939
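The fine-grained metric just described can be computed as follows; the function name and the boolean token-label encoding are choices made here for illustration.

# Token-level F1 for log parsing: each template token is labeled
# variable / not-variable and compared against the prediction.

def parsing_f1(gold_is_var, pred_is_var):
    """Both arguments: lists of booleans, one entry per token."""
    tp = sum(g and p for g, p in zip(gold_is_var, pred_is_var))
    fn = sum(g and not p for g, p in zip(gold_is_var, pred_is_var))
    fp = sum((not g) and p for g, p in zip(gold_is_var, pred_is_var))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return (2 * precision * recall / (precision + recall)
            if precision + recall else 0.0)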
Log Anomaly Detection. This evaluation compares SuperLog with both template-level methods [9] and session-level methods [44], [48], [49]. Accordingly, the evaluation is divided into two parts: template-level and session-level. For the template-level evaluation, the test set consists of the split template-label pairs, representing approximately 90% of the templates extracted by Liu et al. [9] from the BGL and Spirit datasets. For the session-level evaluation, log sessions were constructed using fixed-window grouping with 100 chronologically adjacent logs from BGL and Spirit. The first 4,000 logs from each dataset were used for training the baseline models, while the remaining logs were reserved for testing. To prevent data leakage, logs from the training set were excluded from the session-level test set, resulting in final test sets of 40,521 sessions for BGL and 7,515 sessions for Spirit.

The evaluation metric used for both template-level and session-level assessments is the F1-score on anomalies. This metric takes into account both the recall of anomalous logs (or sessions) in test cases and the accuracy of anomaly predictions at the template and session levels. TP denotes the correct identification of an anomaly, with TN, FP, and FN representing true negatives, false positives, and false negatives, respectively. The F1-score is then computed as the harmonic mean of Recall and Precision.

The evaluation results are shown in Table III.

TABLE III: PERFORMANCE OF ANOMALY DETECTION UNDER TASK-BASED SFT
(S-F1/T-F1 denote the F1-score at session/template level)

Methods           BGL             Spirit
                  S-F1    T-F1    S-F1    T-F1
LogBERT [21]      0.049   -       0.108   -
LogAnomaly [49]   0.138   -       0.129   -
LogRobust [44]    0.045   -       0.077   -
ChatGPT [9]       0.122   0.050   0.129   0.067
SuperLog          0.333   0.300   0.147   0.262

From an overall perspective, selecting only a small subset of logs in sequence as the training set presents a significant challenge for most log anomaly detection methods. The sequential selection, as opposed to random selection, restricts the model to learning from a short segment of the log sequence, making it difficult to capture the overall distribution patterns of the logs. However, through the injection of interpretable knowledge, SuperLog demonstrates a strong understanding of log data, enabling it to extrapolate from limited data. Ultimately, SuperLog outperforms existing state-of-the-art algorithms across all evaluation metrics, with particularly significant improvements observed on large-scale log datasets such as the Spirit dataset.

2) Under Fine-tuning for Instruction-following:

Log Interpretation. Log interpretation and understanding play a crucial role in extracting meaningful insights from log data. Building upon the research of Liu et al. [54], we define the log interpretation capability of language models along two dimensions. The first is usefulness: the model's interpretation of a log should include an understanding of the domain, the extraction of relevant information, and the ability to assist analysts. The second is readability: the model's output should be concise, clear, and in natural language, without causing confusion. Specifically, we selected a dataset of 100 log entries and asked an LLM to explain the events represented by each log. The model's outputs, along with a set of evaluation criteria, were then fed into GPT-4 to score them on usefulness and readability, using a scoring range from 1 to 5 points; a sketch of this protocol follows below. Finally, we calculated the average score over all 100 responses.

We selected Qwen2-0.5B, Qwen2-1.5B [68], LLaMA3.1-8B, and OWL-7B [69] as baseline models for comparison. Qwen2 is a general-purpose LLM family open-sourced by Alibaba, demonstrating strong performance across various domains. OWL-7B, on the other hand, is a domain-specific LLM designed for Q&A in IT operations.
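A minimal sketch of the scoring loop, assuming a judge callable that wraps the GPT-4 call and returns the two scores; the interface is hypothetical.

# GPT-4-as-judge protocol for log interpretation: each model answer is
# rated for usefulness and readability on a 1-5 scale, and the scores
# are averaged over the 100 sampled logs.

def average_scores(logs, answers, judge):
    usefulness, readability = [], []
    for log, answer in zip(logs, answers):
        u, r = judge(log, answer)  # each score in [1, 5]
        usefulness.append(u)
        readability.append(r)
    n = len(logs)
    return sum(usefulness) / n, sum(readability) / n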
The experimental results are shown in Table IV. SuperLog's readability is very close to that of LLaMA3.1-8B, while its usefulness shows a significant improvement over all the baseline models. Specifically, it outperforms the second-best model, Qwen2-0.5B, by nearly 18%, and leads OWL-7B in both metrics. The model's strong performance in log interpretation benefits from the injection of interpretable knowledge during the CPT phase, while the SFT on high-quality general-domain instructions ensures its high readability.

TABLE IV: PERFORMANCE OF LOG INTERPRETATION UNDER INSTRUCTION-FOLLOWING SFT

Models            Usefulness   Readability
Qwen2-0.5B        3.353        3.596
Qwen2-1.5B        2.899        3.576
LLaMA3.1-8B       3.073        4.080
OWL-7B            3.234        3.451
SuperLog (Ours)   3.894        3.990

Log Anomaly Detection & Log-based Failure Diagnosis. In this section, we keep the experimental setups for anomaly detection and failure diagnosis fully consistent with LogEval [52]. LogEval is a comprehensive benchmark suite designed to assess the capabilities of LLMs in various log analysis tasks. For the log anomaly detection task, LogEval uses the open-source BGL and ThunderBird datasets from Loghub-2k [25], [60], with 4,000 log entries in total. For the failure diagnosis task, LogEval uses log datasets from Alibaba Cloud and China Mobile [52]. A total of 4,000 representative failure logs were selected from these two datasets to serve as the test set for evaluating the model's performance.

For our baseline, we selected two categories of LLMs: open-source models and proprietary models. The open-source models include general-purpose LLMs such as BaiChuan2-13B [70], AquilaChat-7B [71], LLaMa2-70B [58], Qwen1.5-72B [72], InternLM2-7B, InternLM2-20B [73], and Mistral-7B [74], as well as DeVops-7B and DeVops-14B [75], which are specifically trained for O&M tasks. The proprietary models, accessible only via API, include GPT-3.5 [76], GPT-4 [14], and Claude3 Sonnet [77]. Since both log anomaly detection and log fault diagnosis are defined as classification tasks, we use the F1-score to evaluate the performance of the models.

The final experimental results are shown in Table V. SuperLog outperformed all baseline algorithms in both log anomaly detection and log-based fault diagnosis. Even when compared with powerful LLMs that are accessible only via API, SuperLog demonstrated superior performance.

TABLE V: BENCHMARKING ON THE TASKS OF ANOMALY DETECTION AND FAILURE DIAGNOSIS UNDER INSTRUCTION-FOLLOWING SFT

Methods           Anomaly Detection   Diagnosis
GPT-3.5           0.082               0.336
GPT-4             0.097               0.453
Claude3 Sonnet    0.100               0.422
BaiChuan2-13B     0.0                 0.0
DeVops-7B         0.037               0.357
AquilaChat-7B     0.042               0.348
LLaMa2-70B        0.044               0.291
DeVops-14B        0.055               0.416
Qwen1.5-72B       0.063               0.423
InternLM2-7B      0.075               0.284
InternLM2-20B     0.089               0.425
Mistral-7B        0.092               0.284
SuperLog (ours)   0.117               0.500
Similarly, when compared with models specifically trained for O&M tasks, SuperLog also achieved better results. These findings validate the advanced nature of the NLPLog dataset we developed and highlight the role of injecting interpretable knowledge in enabling large models to efficiently adapt to domain-specific tasks.

E. RQ2: Ablation Study on Training Datasets and Methods

1) Evaluation Setting: To comprehensively evaluate the performance of SuperLog, we designed two types of ablation experiments. (1) SuperLog w/o CPT: In this experiment, we fine-tuned the LLaMA2-7B model on the Alpaca-1k dataset to enable instruction-following capabilities, without performing continual pre-training. (2) SuperLog w/o IK: In this experiment, we also used LLaMA2-7B as the base model. The key difference from SuperLog is that this model did not use the interpretable knowledge generated by GPT-4; instead, we directly used the deduplicated raw logs collected from Loghub for the CPT phase. As in the previous sections, we selected four tasks to organize the experiments: log parsing, log anomaly detection, log-based fault diagnosis, and log interpretation.

2) Result: The experimental results are shown in Table VI. For log parsing, log anomaly detection, and log-based fault diagnosis, we use the F1-score as the evaluation metric. For log interpretation, we evaluate based on the average of usefulness and readability. SuperLog achieved the highest performance across all tasks. Compared to the model without the CPT phase, SuperLog acquired more domain-specific information during training, successfully transferring from a general model to a domain-specific model. Compared to the model that used only raw log data for CPT, SuperLog demonstrated superior performance due to the acquisition of interpretable knowledge during the CPT phase.

TABLE VI: ABLATION STUDY OF SUPERLOG: ELIMINATING INTERPRETABLE KNOWLEDGE OR THE CPT PHASE
(Parsing: log parsing; AD: anomaly detection; FD: log-based failure diagnosis; Inter: log interpretation; w/o IK: pre-training only on raw logs in NLPLog; w/o CPT: no continual pre-training phase)

Methods           Parsing   AD      FD      Inter
SuperLog          0.920     0.117   0.500   3.895
SuperLog w/o IK   0.906     0.096   0.382   3.054
w/o CPT           0.881     0.090   0.311   3.273

It is also observed that the model using CPT on raw log text improves in the three log analysis tasks, but its log interpretation score is lower than that of the model without the CPT process. This suggests that while CPT can facilitate knowledge injection, it may also lead to catastrophic forgetting. NLPLog bridges the gap between domain-specific knowledge and natural language expression by constructing Q&A pairs, enabling interpretable domain knowledge injection during the CPT stage. The ablation results confirm the effectiveness of the proposed paradigm for domain knowledge injection, demonstrating that the incorporation of interpretable knowledge significantly improves the model's capabilities in the specialized domain.

F. RQ3: Benchmarking on Unseen Domain Logs

1) Evaluation Setting: To evaluate the performance of SuperLog on unseen log domains, we selected logs from two domains not included in NLPLog, Apache and OpenStack, to organize the experiment. Under the assumption that no corresponding labels exist in unseen domains, we assess the model's performance by comparing its output with that of ChatGPT. Specifically, we replicated the setup used in the log parsing experiments, using the ChatGPT-generated results for Apache and OpenStack logs as labels, and applied different large models to perform log parsing on the respective datasets. We then compared the model outputs with the ChatGPT-generated labels and computed ROUGE scores. In particular, ROUGE-1 and ROUGE-L measure unigram overlap and the longest common subsequence, respectively, as used in text summarization tasks. These metrics provide a quantitative way to evaluate the quality of machine-generated text against human-produced references.
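One way to implement this protocol is with the open-source rouge_score package, as sketched below; whether the authors used this particular package is an assumption.

from rouge_score import rouge_scorer

# ChatGPT's parses serve as references, each model's parses as
# candidates; ROUGE-1 / ROUGE-L F-measures are averaged over the
# test logs and reported as percentages, as in Table VII.

def mean_rouge(references, candidates):
    scorer = rouge_scorer.RougeScorer(["rouge1", "rougeL"],
                                      use_stemmer=True)
    r1 = rl = 0.0
    for ref, cand in zip(references, candidates):
        scores = scorer.score(ref, cand)
        r1 += scores["rouge1"].fmeasure
        rl += scores["rougeL"].fmeasure
    n = len(references)
    return 100 * r1 / n, 100 * rl / n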
2) Results: The performance of SuperLog on unseen domains is shown in Table VII. SuperLog's ROUGE scores are consistently higher than those of existing baseline algorithms, with an improvement of approximately 22.4% over the second-best performing model, OWL, and SuperLog significantly outperforms the LLaMA 3.1 and Qwen 2 series models. The experiment demonstrates that SuperLog possesses exceptional log understanding capabilities, performing well even on unseen domains. Its outputs are highly aligned with human tendencies and show strong consistency with professional annotations from operations engineers. This indicates that SuperLog is not only capable of achieving excellent performance in familiar domains but is also effective in understanding and processing log data in unseen domains.

TABLE VII: EVALUATION OF SUPERLOG ON UNSEEN DOMAINS

Methods       Apache               OpenStack
              Rouge-1   Rouge-L    Rouge-1   Rouge-L
LLaMa3.1-8B   35.534    12.314     32.015    11.395
Qwen2-0.5B    32.686    11.917     34.456    14.665
Qwen2-1.5B    41.507    16.147     40.540    16.013
OWL-7B        48.763    30.841     44.819    23.832
SuperLog      51.703    42.224     52.348    34.071

V. DISCUSSION

A. Implications of Findings

1) Balancing General and Domain-Specific Knowledge: Our approach highlights the importance of balancing general language understanding with domain-specific knowledge. SuperLog's success lies in its ability to maintain natural language comprehension while acquiring deep log analysis expertise. We achieved this by enhancing interpretability through converting domain knowledge into question-answer pairs, preserving the characteristics of natural language in the process.

2) Interpretability and Transparency: By integrating interpretable domain knowledge, SuperLog not only performs well on log analysis tasks but also provides more understandable and justifiable outcomes, aligning with industry demands for transparent AI systems.

B. Threats to Validity

Despite the promising results achieved by SuperLog, several limitations need to be acknowledged, which could guide future research directions.

1) Generalizability: Although SuperLog performed well on unseen domains, its performance might degrade on logs with significantly different characteristics or structures. In Section IV-F, we assume that no corresponding labels exist in the specific log domain and use ChatGPT's responses as the reference answers, with ROUGE scores serving as the evaluation metric. However, in some cases, high similarity between the model's output and ChatGPT's response may not necessarily indicate that the model can comprehensively solve the problem. Further testing on a broader range of domains, and evaluation with other metrics, is needed to assess generalizability.

2) Hallucination: The phenomenon of hallucination in LLMs presents a significant limitation, particularly in applications requiring high accuracy and reliability, such as log-based fault diagnosis. Hallucination refers to the model's tendency to generate content that is coherent but factually incorrect or inconsistent with the provided source content [78]. In this case, the model may generate responses that are difficult to assess directly for correctness, potentially affecting the judgment of operations staff.
VI. CONCLUSION

In this paper, we present a novel approach to log analysis that significantly enhances the capabilities of LLMs by incorporating interpretable domain knowledge through continual pre-training (CPT), improving LLM performance in log analysis by seamlessly integrating domain-specific insights. A key element of our approach is the development of the NLPLog dataset, which contains over 250,000 question-answer pairs, offering a rich repository of domain-specific knowledge. Using the proposed domain knowledge injection paradigm and the NLPLog dataset, we trained SuperLog, an LLM designed specifically for log analysis tasks. Our experimental results demonstrate that SuperLog outperforms existing state-of-the-art methods across four log analysis tasks, including those involving logs from previously unseen domains. This highlights the effectiveness of our approach in injecting domain-specific knowledge while maintaining the natural language processing capabilities of LLMs. To encourage further research and development, we have made the NLPLog dataset publicly available for training large models on domain-specific tasks.

REFERENCES

[1] Z. Jiang, H. Lin, Y. Zhong, Q. Huang, Y. Chen, Z. Zhang, Y. Peng, X. Li, C. Xie, S. Nong et al., "Megascale: Scaling large language model training to more than 10,000 gpus," arXiv preprint arXiv:2402.15627, 2024.
[2] D. Narayanan, M. Shoeybi, J. Casper, P. LeGresley, M. Patwary, V. Korthikanti, D. Vainbrand, P. Kashinkunti, J. Bernauer, B. Catanzaro et al., "Efficient large-scale language model training on gpu clusters using megatron-lm," in Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis, 2021, pp. 1–15.
[3] N. P. Jouppi, G. Kurian, S. Li, P. C. Ma, R. Nagarajan, L. Nai, N. Patil, S. Subramanian, A. Swing, B. Towles, C. Young, X. Zhou, Z. Zhou, and D. A. Patterson, "Tpu v4: An optically reconfigurable supercomputer for machine learning with hardware support for embeddings," Proceedings of the 50th Annual International Symposium on Computer Architecture, 2023. [Online]. Available: https://api.semanticscholar.org/CorpusID:257921908
[4] V.-H. Le and H. Zhang, "Log parsing with prompt-based few-shot learning," in 2023 IEEE/ACM 45th International Conference on Software Engineering (ICSE), 2023, pp. 2438–2449.
[5] Z. Ma, A. R. Chen, D. J. Kim, T.-H. Chen, and S. Wang, "Llmparser: An exploratory study on using large language models for log parsing," in 2024 IEEE/ACM 46th International Conference on Software Engineering (ICSE). IEEE Computer Society, 2024, pp. 883–883.
[6] Y. Liu, S. Tao, W. Meng, F. Yao, X. Zhao, and H. Yang, "Logprompt: Prompt engineering towards zero-shot and interpretable log analysis," in Proceedings of the 2024 IEEE/ACM 46th International Conference on Software Engineering: Companion Proceedings, 2024, pp. 364–365.
[7] H. Zheng, G. Chu, H. Sun, J. Wang, S. Tao, and H. Yang, "Logdapt: Log data anomaly detection with domain-adaptive pretraining (industry track)," in Proceedings of the 24th International Middleware Conference: Industrial Track, ser. Middleware '23. New York, NY, USA: Association for Computing Machinery, 2023, pp. 15–21. [Online]. Available: https://doi.org/10.1145/3626562.3626830
[8] C. Egersdoerfer, D. Zhang, and D. Dai, "Early exploration of using chatgpt for log-based anomaly detection on parallel file systems logs," in Proceedings of the 32nd International Symposium on High-Performance Parallel and Distributed Computing, ser. HPDC '23. New York, NY, USA: Association for Computing Machinery, 2023, pp. 315–316. [Online]. Available: https://doi.org/10.1145/3588195.3595943
[9] Y. Liu, S. Tao, W. Meng, J. Wang, W. Ma, Y. Chen, Y. Zhao, H. Yang, and Y. Jiang, "Interpretable online log analysis using large language models with prompt strategies," in Proceedings of the 32nd IEEE/ACM International Conference on Program Comprehension, 2024, pp. 35–46.
[10] W. X. Zhao, K. Zhou, J. Li, T. Tang, X. Wang, Y. Hou, Y. Min, B. Zhang, J. Zhang, Z. Dong et al., "A survey of large language models," arXiv preprint arXiv:2303.18223, 2023.
[11] J. Zhu, S. He, J. Liu, P. He, Q. Xie, Z. Zheng, and M. R. Lyu, "Tools and benchmarks for automated log parsing," in 2019 IEEE/ACM 41st International Conference on Software Engineering: Software Engineering in Practice (ICSE-SEIP). IEEE, 2019, pp. 121–130.
[12] P. He, J. Zhu, Z. Zheng, and M. R. Lyu, "Drain: An online log parsing approach with fixed depth tree," in 2017 IEEE International Conference on Web Services (ICWS). IEEE, 2017, pp. 33–40.
[13] S. Tao, W. Meng, Y. Cheng, Y. Zhu, Y. Liu, C. Du, T. Han, Y. Zhao, X. Wang, and H. Yang, "Logstamp: Automatic online log parsing based on sequence labelling," ACM SIGMETRICS Performance Evaluation Review, vol. 49, no. 4, pp. 93–98, 2022.
[14] J. Achiam, S. Adler, S. Agarwal, L. Ahmad, I. Akkaya, F. L. Aleman, D. Almeida, J. Altenschmidt, S. Altman, S. Anadkat et al., "Gpt-4 technical report," arXiv preprint arXiv:2303.08774, 2023.
[15] "The claude 3 model family: Opus, sonnet, haiku." [Online]. Available: https://api.semanticscholar.org/CorpusID:268232499
[16] J. Qi, S. Huang, Z. Luan, C. Fung, H. Yang, and D. Qian, "Loggpt: Exploring chatgpt for log-based anomaly detection," arXiv preprint arXiv:2309.01189, 2023.
[17] V.-H. Le and H. Zhang, "Log parsing: How far can chatgpt go?" in 2023 38th IEEE/ACM International Conference on Automated Software Engineering (ASE). IEEE, 2023, pp. 1699–1704.
[18] H. Touvron, T. Lavril, G. Izacard, X. Martinet, M.-A. Lachaux, T. Lacroix, B. Rozière, N. Goyal, E. Hambro, F. Azhar et al., "Llama: Open and efficient foundation language models," arXiv preprint arXiv:2302.13971, 2023.
[19] W.-L. Chiang, Z. Li, Z. Lin, Y. Sheng, Z. Wu, H. Zhang, L. Zheng, S. Zhuang, Y. Zhuang, J. E. Gonzalez et al., "Vicuna: An open-source chatbot impressing gpt-4 with 90%* chatgpt quality," See https://vicuna.lmsys.org (accessed 14 April 2023), vol. 2, no. 3, p. 6, 2023.
[20] S. Gururangan, A. Marasović, S. Swayamdipta, K. Lo, I. Beltagy, D. Downey, and N. A. Smith, "Don't stop pretraining: Adapt language models to domains and tasks," arXiv preprint arXiv:2004.10964, 2020.
[21] H. Guo, S. Yuan, and X. Wu, "Logbert: Log anomaly detection via bert," in 2021 International Joint Conference on Neural Networks (IJCNN). IEEE, 2021, pp. 1–8.
[22] H. Zheng, G. Chu, H. Sun, J. Wang, S. Tao, and H. Yang, "Logdapt: Log data anomaly detection with domain-adaptive pretraining (industry track)," in Proceedings of the 24th International Middleware Conference: Industrial Track, 2023, pp. 15–21.
[23] S. Tao, Y. Liu, W. Meng, Z. Ren, H. Yang, X. Chen, L. Zhang, Y. Xie, C. Su, X. Oiao, W. Tian, Y. Zhu, T. Han, Y. Qin, and Y. Li, "Biglog: Unsupervised large-scale pre-training for a unified log representation," in 2023 IEEE/ACM 31st International Symposium on Quality of Service (IWQoS), 2023, pp. 1–11.
[24] J. Devlin, "Bert: Pre-training of deep bidirectional transformers for language understanding," arXiv preprint arXiv:1810.04805, 2018.
[25] S. He, J. Zhu, P. He, and M. R. Lyu, "Loghub: A large collection of system log datasets towards automated log analytics," arXiv preprint arXiv:2008.06448, 2020.
[26] Y. Luo, Z. Yang, F. Meng, Y. Li, J. Zhou, and Y. Zhang, "An empirical study of catastrophic forgetting in large language models during continual fine-tuning," arXiv preprint arXiv:2308.08747, 2023.
[27] S. Zhang, S. Roller, N. Goyal, M. Artetxe, M. Chen, S. Chen, C. Dewan, M. Diab, X. Li, X. V. Lin et al., "Opt: Open pre-trained transformer language models," arXiv preprint arXiv:2205.01068, 2022.
[28] Ç. Yıldız, N. K. Ravichandran, P. Punia, M. Bethge, and B. Ermis, "Investigating continual pretraining in large language models: Insights and implications," arXiv preprint arXiv:2402.17400, 2024.
[29] E. Alsentzer, J. Murphy, W. Boag, W.-H. Weng, D. Jindi, T. Naumann, and M. McDermott, "Publicly available clinical bert embeddings," in Proceedings of the 2nd Clinical Natural Language Processing Workshop, 2019, pp. 72–78.
[30] I. Chalkidis, M. Fergadiotis, P. Malakasiotis, N. Aletras, and I. Androutsopoulos, "Legal-bert: The muppets straight out of law school," in Findings of the Association for Computational Linguistics: EMNLP 2020, 2020, pp. 2898–2904.
[31] W. Zhang, Y. Deng, B. Liu, S. J. Pan, and L. Bing, "Sentiment analysis in the era of large language models: A reality check," arXiv preprint arXiv:2305.15005, 2023.
[32] D. Su, Y. Xu, G. I. Winata, P. Xu, H. Kim, Z. Liu, and P. Fung, "Generalizing question answering system with pre-trained language model fine-tuning," in Proceedings of the 2nd Workshop on Machine Reading for Question Answering, 2019, pp. 203–211.
[33] X. Sun, X. Li, J. Li, F. Wu, S. Guo, T. Zhang, and G. Wang, "Text classification via large language models," arXiv preprint arXiv:2305.08377, 2023.
[34] Q. Fu, J.-G. Lou, Y. Wang, and J. Li, "Execution anomaly detection in distributed systems through unstructured log analysis," in 2009 Ninth IEEE International Conference on Data Mining, 2009, pp. 149–158.
[35] M. Du and F. Li, "Spell: Streaming parsing of system event logs," in 2016 IEEE 16th International Conference on Data Mining (ICDM). IEEE, 2016, pp. 859–864.
[36] A. A. Makanju, A. N. Zincir-Heywood, and E. E. Milios, "Clustering event logs using iterative partitioning," in Proceedings of the 15th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2009, pp. 1255–1264.
[37] S. Zhang, W. Meng et al., "Syslog processing for switch failure diagnosis and prediction in datacenter networks," in IEEE/ACM 25th International Symposium on Quality of Service (IWQoS'17), 2017, pp. 1–10.
[38] G. Chu, J. Wang, Q. Qi, H. Sun, S. Tao, and J. Liao, "Prefix-graph: A versatile log parsing approach merging prefix tree with probabilistic graph," in 2021 IEEE 37th International Conference on Data Engineering (ICDE). IEEE, 2021, pp. 2411–2422.
[39] W. Meng, Y. Liu, F. Zaiter et al., "Logparse: Making log parsing adaptive through word classification," in 2020 29th International Conference on Computer Communications and Networks (ICCCN), 2020, pp. 1–9.
[40] Y. Huo, Y. Su, C. Lee, and M. R. Lyu, "Semparser: A semantic parser for log analytics," in 2023 IEEE/ACM 45th International Conference on Software Engineering (ICSE). IEEE, 2023, pp. 881–893.
[41] Z. Li, C. Luo, T.-H. P. Chen, W. Shang, S. He, Q. Lin, and D. Zhang, "Did we miss something important? studying and exploring variable-aware log abstraction," in ICSE 2023, May 2023.
[42] Z. Jiang, J. Liu, Z. Chen, Y. Li, J. Huang, Y. Huo, P. He, J. Gu, and M. R. Lyu, "Lilac: Log parsing using llms with adaptive parsing cache," Proceedings of the ACM on Software Engineering, vol. 1, no. FSE, pp. 137–160, 2024.
[43] A. Zhong, D. Mo, G. Liu, J. Liu, Q. Lu, Q. Zhou, J. Wu, Q. Li, and Q. Wen, "Logparser-llm: Advancing efficient log parsing with large language models," in Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 2024, pp. 4559–4570.
[44] X. Zhang, Y. Xu, Q. Lin, B. Qiao, H. Zhang, Y. Dang, C. Xie, X. Yang, Q. Cheng, Z. Li et al., "Robust log-based anomaly detection on unstable log data," in Proceedings of the 2019 27th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering, 2019, pp. 807–817.
[45] S. Lu, X. Wei, Y. Li, and L. Wang, "Detecting anomaly in big data system logs using convolutional neural network," in 2018 IEEE 16th Intl Conf on Dependable (DASC/PiCom/DataCom/CyberSciTech). IEEE, 2018, pp. 151–158.
[46] V.-H. Le and H. Zhang, "Log-based anomaly detection without log parsing," in 2021 36th IEEE/ACM International Conference on Automated Software Engineering (ASE). IEEE, 2021, pp. 492–504.
[47] J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, "Bert: Pre-training of deep bidirectional transformers for language understanding," in Proceedings of NAACL-HLT, 2019, pp. 4171–4186.
[48] M. Du, F. Li, G. Zheng, and V. Srikumar, "Deeplog: Anomaly detection and diagnosis from system logs through deep learning," in Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, 2017, pp. 1285–1298.
[49] W. Meng, Y. Liu, Y. Zhu et al., "Loganomaly: Unsupervised detection of sequential and quantitative anomalies in unstructured logs," in IJCAI, vol. 19, no. 7, 2019, pp. 4739–4745.
[50] S. Zhang, Y. Ji, J. Luan, X. Nie, Z. Chen, M. Ma, Y. Sun, and D. Pei, "End-to-end automl for unsupervised log anomaly detection," Automated Software Engineering (ASE'24), 2024.
[51] J. Pan, W. S. Liang, and Y. Yidi, "Raglog: Log anomaly detection using retrieval augmented generation," in 2024 IEEE World Forum on Public Safety Technology (WFPST). IEEE, 2024, pp. 169–174.
[52] T. Cui, S. Ma, Z. Chen, T. Xiao, S. Tao, Y. Liu, S. Zhang, D. Lin, C. Liu, Y. Cai et al., "Logeval: A comprehensive benchmark suite for large language models in log analysis," arXiv preprint arXiv:2407.01896, 2024.
[53] Y. Sui, Y. Zhang, J. Sun, T. Xu, S. Zhang, Z. Li, Y. Sun, F. Guo, J. Shen, Y. Zhang et al., "Logkg: Log failure diagnosis through knowledge graph," IEEE Transactions on Services Computing, 2023.
[54] Y. Liu, Y. Ji, S. Tao, M. He, W. Meng, S. Zhang, Y. Sun, Y. Xie, B. Chen, and H. Yang, "Loglm: From task-based to instruction-based automated log analysis," arXiv preprint arXiv:2410.09352, 2024.
[55] Y. Chen, H. Xie, M. Ma, Y. Kang, X. Gao, L. Shi, Y. Cao, X. Gao, H. Fan, M. Wen et al., "Automatic root cause analysis via large language models for cloud incidents," in Proceedings of the Nineteenth European Conference on Computer Systems, 2024, pp. 674–688.
[56] T. Ahmed, S. Ghosh, C. Bansal, T. Zimmermann, X. Zhang, and S. Rajmohan, "Recommending root-cause and mitigation steps for cloud incidents using large language models," in ICSE 2023, May 2023.
[57] B. Debnath, M. Solaimani, M. A. G. Gulzar, N. Arora, C. Lumezanu, J. Xu, B. Zong, H. Zhang, G. Jiang, and L. Khan, "Loglens: A real-time log analysis system," in 2018 IEEE 38th International Conference on Distributed Computing Systems (ICDCS). IEEE, 2018, pp. 1052–1062.
[58] H. Touvron, L. Martin, K. Stone, P. Albert, A. Almahairi, Y. Babaei, N. Bashlykov, S. Batra, P. Bhargava, S. Bhosale et al., "Llama 2: Open foundation and fine-tuned chat models," arXiv preprint arXiv:2307.09288, 2023.
[59] H. Shi, Z. Xu, H. Wang, W. Qin, W. Wang, Y. Wang, Z. Wang, S. Ebrahimi, and H. Wang, "Continual learning of large language models: A comprehensive survey," arXiv preprint arXiv:2404.16789, 2024.
[60] A. Oliner and J. Stearley, "What supercomputers say: A study of five system logs," in 37th Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN'07). IEEE, 2007, pp. 575–584.
[61] R. Taori, I. Gulrajani, T. Zhang, Y. Dubois, X. Li, C. Guestrin, P. Liang, and T. B. Hashimoto, "Stanford alpaca: An instruction-following llama model," https://github.com/tatsu-lab/stanford_alpaca, 2023.
[62] Y. Ge, Y. Liu, C. Hu, W. Meng, S. Tao, X. Zhao, M. Xia, Z. Li, B. Chen, H. Yang, B. Li, T. Xiao, and J. Zhu, "Clustering and ranking: Diversity-preserved instruction selection through expert-aligned quality estimation," in Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing. Miami, Florida, USA: Association for Computational Linguistics, Nov. 2024, pp. 464–478.
[63] H. Zhao, Y. Liu, S. Tao, W. Meng, Y. Chen, X. Geng, C. Su, M. Zhang, and H. Yang, "From handcrafted features to llms: A brief survey for machine translation quality estimation," in 2024 International Joint Conference on Neural Networks (IJCNN), 2024, pp. 1–10.
[64] Y. Zheng, R. Zhang, J. Zhang, Y. Ye, Z. Luo, Z. Feng, and Y. Ma, "Llamafactory: Unified efficient fine-tuning of 100+ language models," in Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations). Bangkok, Thailand: Association for Computational Linguistics, 2024. [Online]. Available: http://arxiv.org/abs/2403.13372
[65] L. Tang, T. Li, and C.-S. Perng, "Logsig: Generating system events from raw textual logs," in Proceedings of the 20th ACM International Conference on Information and Knowledge Management, 2011, pp. 785–794.
[66] S. Messaoudi, A. Panichella, D. Bianculli, L. Briand, and R. Sasnauskas, "A search-based approach for accurate identification of log message formats," in 2018 IEEE/ACM 26th International Conference on Program Comprehension (ICPC). IEEE, 2018, pp. 167–16710.
[67] W. M. Rand, "Objective criteria for the evaluation of clustering methods," Journal of the American Statistical Association, vol. 66, no. 336, pp. 846–850, 1971.
[68] A. Yang, B. Yang, B. Hui, B. Zheng, B. Yu, C. Zhou, C. Li, C. Li, D. Liu, F. Huang et al., "Qwen2 technical report," arXiv preprint arXiv:2407.10671, 2024.
[69] H. Guo, J. Yang, J. Liu, L. Yang, L. Chai, J. Bai, J. Peng, X. Hu, C. Chen, D. Zhang et al., "Owl: A large language model for it operations," in The Twelfth International Conference on Learning Representations, 2024.
[70] A. Yang, B. Xiao, B. Wang, B. Zhang, C. Bian, C. Yin, C. Lv, D. Pan, D. Wang, D. Yan et al., "Baichuan 2: Open large-scale language models," arXiv preprint arXiv:2309.10305, 2023.
[71] Beijing Academy of Artificial Intelligence, "Aquilachat," 2023, accessed: 2023. [Online]. Available: https://model.baai.ac.cn/model-detail/100101
[72] J. Bai, S. Bai, Y. Chu, Z. Cui, K. Dang, X. Deng, Y. Fan, W. Ge, Y. Han, F. Huang et al., "Qwen technical report," arXiv preprint arXiv:2309.16609, 2023.
[73] Z. Cai, M. Cao, H. Chen, K. Chen, K. Chen, X. Chen, X. Chen, Z. Chen, Z. Chen, P. Chu et al., "Internlm2 technical report," arXiv preprint arXiv:2403.17297, 2024.
[74] A. Q. Jiang, A. Sablayrolles, A. Mensch, C. Bamford, D. S. Chaplot, D. d. l. Casas, F. Bressand, G. Lengyel, G. Lample, L. Saulnier et al., "Mistral 7b," arXiv preprint arXiv:2310.06825, 2023.
[75] C. Ebert, G. Gallardo, J. Hernantes, and N. Serrano, "Devops," IEEE Software, vol. 33, no. 3, pp. 94–100, 2016.
[76] L. Floridi and M. Chiriatti, "Gpt-3: Its nature, scope, limits, and consequences," Minds and Machines, vol. 30, pp. 681–694, 2020.
[77] Anthropic, "The claude 3 model family: Opus, sonnet, haiku," 2023, accessed: 2023. [Online]. Available: https://www-cdn.anthropic.com/de8ba9b01c9ab7cbabf5c33b80b7bbc618857627/Model_Card_Claude_3.pdf
[78] Y. Zhang, Y. Li, L. Cui, D. Cai, L. Liu, T. Fu, X. Huang, E. Zhao, Y. Zhang, Y. Chen et al., "Siren's song in the ai ocean: A survey on hallucination in large language models," arXiv preprint arXiv:2309.01219, 2023.
Artificial Intelligence for Interstellar Travel

Andreas M. Hein1 and Stephen Baxter2

1 Department of Geography — no; 1 Initiative for Interstellar Studies, Bone Mill, New Street, Charfield, GL12 8ES, United Kingdom
2 c/o Christopher Schelling, Selectric Artists, 9 Union Square 123, Southbury, CT 06488, USA.

November 20, 2018

Abstract

The large distances involved in interstellar travel require a high degree of spacecraft autonomy, realized by artificial intelligence. The breadth of tasks artificial intelligence could perform on such spacecraft involves maintenance, data collection, and designing and constructing an infrastructure using in-situ resources. Despite its importance, existing publications on artificial intelligence and interstellar travel are limited to cursory descriptions where little detail is given about the nature of the artificial intelligence. This article explores the role of artificial intelligence for interstellar travel by compiling use cases, exploring capabilities, and proposing typologies, system and mission architectures. Estimations for the required intelligence level for specific types of interstellar probes are given, along with potential system and mission architectures, covering those proposed in the literature but also presenting novel ones. Finally, a generic design for interstellar probes with an AI payload is proposed. Given current rates of increase in computational power, a spacecraft with a computational power similar to that of the human brain would have a mass of dozens to hundreds of tons in a 2050 – 2060 timeframe. Given that the advent of the first interstellar missions and of artificial general intelligence are both estimated to fall around the mid-21st century, a more in-depth exploration of the relationship between the two should be attempted, focusing on neglected areas such as protecting the artificial intelligence payload from radiation in interstellar space and the role of artificial intelligence in self-replication.

Key words: Interstellar travel, artificial intelligence, artificial general intelligence, space colonization

1 Introduction

Robotic deep space exploration and interstellar travel require high levels of autonomy, as human intervention is very limited with signals taking years to travel to the probe and back. Autonomy is required for exploring the target star system, developing an infrastructure using local resources, and even colonization [71, 70, 67]. High levels of autonomy in spacecraft are associated with performing cognitive tasks such as image recognition, reasoning, and decision-making. For example, current planetary rovers are able to autonomously identify scientifically interesting rock formations via feature recognition and decide to analyze them [30, 149, 27, 89, 164]. A program that is able to perform such and other cognitive tasks is referred to as artificial intelligence (AI) in the following [79]. An overview of the current state of the art of artificial intelligence in space exploration has been provided in Chien et al. [29] and Chien and Wagstaff [30]. According to Chien and Wagstaff [30], the main goals of AI on space probes are to detect and characterize features of interest, whether usual and static (snow, water, ice, etc.)
or unusual and dynamic (volcanic activity, fires, floods, dust devils, active jets); the autonomous collection of interesting samples; the autonomous creation of environmental maps; on-board analysis of data, which is desirable for reducing the data that needs to be stored and transmitted; and on-board scheduling, where mission scheduling needs to be adapted to unexpected events. Satisfactory schedules are searched for based on a timeline model of the spacecraft state and its resources using AI. Interesting areas of future development are collaborative spacecraft and rovers that use sensor webs fusing data from various sensors [9, 30, 133].

Chien et al. [29] and Hein et al. [75] further explore the role of AI in human space exploration. AI-based mission operations scheduling can help crews to interactively schedule their activities [29]. Managing spacecraft systems such as the power subsystem is also considered an area where AI can support humans, in particular in off-nominal situations. In such a case, AI can perform problem analysis, carry out repair actions, and evaluate the impact on future operations [29, 75]. Furthermore, operations on planetary surfaces is another area where AI can assist in developing operations plans.

Looking into the far future, robotic probes with sophisticated artificial intelligence capabilities have been proposed, such as self-replicating space probes and probes that are capable of communicating with extraterrestrials [5, 10, 22, 39, 150, 152]. The scenario of an interstellar probe encountering an extraterrestrial intelligence has been explored by Baxter [10]. Bracewell [22] proposed an intelligent interstellar probe, a so-called Bracewell probe, that is able to perform sophisticated communication with extraterrestrials and contains large amounts of knowledge of a civilization. Combining the Bracewell probe with a self-replication capability has been explored by Freitas [52], O'Neill [126], and Jones [92]. Such advanced probes would require levels of artificial intelligence that are similar to human intelligence or even superior in broad task categories [46]. An artificial intelligence that is able to perform a broad range of cognitive tasks at similar levels or better than humans is called artificial general intelligence (AGI). Estimates for the advent of AGI differ [46]; however, their median lies somewhere in the middle of the 21st century. The estimated launch date for the first interstellar probe falls into a similar time frame. Given these estimates, it is plausible to assume that AGI and interstellar travel might materialize at similar points in time, and the implications of one for the other are worth considering.

The more mundane use of AI for maintenance and housekeeping of interstellar probes and crewed interstellar spacecraft has been explored for the Project Daedalus study [19] and world ships [75]. Precursors for such technologies have been developed in the context of using augmented reality and intermediate simulations for space stations [97, 98, 101, 100, 99, 163].

Past publications have mostly dealt with the in-principle feasibility of interstellar probes with an AI, without providing engineering details of such a probe. For example, Tipler assesses the in-principle feasibility of mind-uploading into an artificial substrate and how a fusion-propelled Daedalus-type interstellar probe could transport the AGI to other stars and gradually colonize the universe [19, 150].
Ray Kurzweil in "The Singularity is Near" describes nano-probes with AI payloads that could even traverse small wormholes for colonizing the universe [103]. The most sophisticated analysis of AI probes is provided by Bradbury, who introduces the concept of the "Matrioshka Brain", where a large number of spacecraft, producing power for AIs, orbit a star [24]. Bradbury imagines whole layers of orbital rings around a star harnessing its energy, similar to a Dyson sphere [7]. Hein introduces several potential mission architectures based on AI interstellar probes with the main objective of paving the way for human interstellar colonization by creating space or surface colonies in advance of the colonists' arrival [71, 70]. Using AGI to grow and raise humans from individual human cells or embryos at another star, thereby avoiding the transport of grown-up humans, has been proposed by Crowl et al. [36].

Regarding mid- and far-term prospects of AI in space applications, these can be categorized into building artifacts in space, communicating with extraterrestrials, and growing / educating humans. Building artifacts in space encompasses diverse activities such as in-situ resource utilization, design, manufacturing, verification, validation, and testing, and self-replication. A summary of the AI use cases for space exploration is presented in Table 1.

Table 1: Current, near-, mid-, and far-term use cases for AI in space exploration

AI use case                                                           Reference
Current and near-term
  Detect and characterize features of interest                        [29, 30]
  (usual / unusual; static / dynamic)
  Autonomous collection of interesting samples                        [29, 30]
  Autonomous creation of environmental maps                           [29, 30]
  On-board analysis of data                                           [29, 30]
  Mission operations planning and scheduling                          [29, 30]
  Maintenance (problem analysis, perform repair actions,              [19, 29, 75]
  and evaluate the impact on future operations)
Mid- and far-term
  Design and construction of artifacts                                [52, 54, 70, 81, 55]
  (spacecraft, infrastructure, colonies)
  In-situ resource utilization                                        [52, 54, 70, 81, 55]
  Self-replication                                                    [52, 54, 70, 81, 55, 150, 152, 5]
  Communication with extraterrestrials                                [10, 22]
  Educate humans (transmission of knowledge)                          [36, 75]

The existing literature on AI and AGI in interstellar travel and colonization seems to be limited to high-level concepts, and there is a lack of a systematic analysis of the role of AI/AGI and of synergies with other technologies. This article addresses these gaps by analyzing the capabilities of AI and AGI for different interstellar missions and exploring synergies with other technologies that could result in radically different mission architectures. Concepts for different AI interstellar probes and a generic AI probe design are presented, using the methodology of explorative engineering [40, 41].

2 Analysis Framework

An analysis framework for AI probes is developed, in order to compare the capabilities required by such probes for fulfilling specific mission objectives. Categorizing and measuring the capabilities of artificial intelligence is considered challenging, and none of the proposed frameworks has been generally accepted [78]. Taxonomies categorize artificial intelligence with respect to its abilities (weak vs strong AI; narrow / general AI; superintelligence) [78], working principles [38, 62], internal processes [80], and embodiment [60]. AI metrics are either task-oriented or ability-oriented [78, 79].
Most existing metrics fall into the task-oriented category, where the performance of an AI system is measured with respect to a task such as playing chess or autonomous driving. Such an evaluation is appropriate for specialized AI systems for specific tasks. By contrast, ability-oriented metrics focus on the set of tasks that would indicate the presence of a more general AI ability, for example, the AI decathlon [159, 3, 79, 120, 119, 139]. Such an evaluation is appropriate for AI systems that are not characterized by a set of tasks, such as cognitive robots, assistants, and artificial pets.

We propose a mix of more formal and qualitative framework elements. Formal approaches permit the generation of sufficiently general results that might remain valid even with the large uncertainties associated with future progress in AI. We specifically use the pragmatic general intelligence metric [61] for formally comparing different AI capabilities. The qualitative approaches, such as surveys of both the scientific literature and fiction, the generation of mission architectures, and the design of a generic AI probe, allow for exploring specific scenarios and concepts.

Regarding the formal elements of the analysis framework, we argue that any AI-based interstellar mission is based on one or more agents. According to Franklin and Graesser [50], an agent "is a system situated within and a part of an environment that senses that environment and acts on it, over time, in pursuit of its own agenda and so as to effect what it senses in the future." An agent is distinguished from computer programs in general by its autonomous, adaptive nature. We can define an agent more formally as a function π which takes an action history as input and outputs an action. An action history is defined as the agent's actions a, observations o of the environment, and rewards r:

a_1 o_1 r_1 a_2 o_2 r_2 ...    (1)

The history up to a point in time t can be abbreviated by aor_{1:t}. According to Legg and Hutter [106], the utility function V, which expresses the expected total reward E for an agent π and environment µ over its entire lifetime T, is:

V^π_µ ≡ E(∑_{n=1}^{T} r_n) ≤ 1    (2)

Goertzel [61] extends this framework by adding functions that indicate the complexity of goals and environments in which the agent operates, thereby formalizing pragmatic general intelligence, defined as achieving complex goals in complex environments. The expected goal-achievement is defined as

V^π_{µ,g,T} ≡ E(∑_{i=s}^{t} r_g(I_{g,s,i})) ≤ 1    (3)

with the interaction sequence m_1 a_1 o_1 g_1 r_1 m_2 a_2 o_2 g_2 r_2 ..., where m is a memory action and T = i ∈ (s, ..., t). Each finite interaction sequence I_{g,s,t} = aorg_{s:t}, with g_s corresponding to a goal g, is mapped by each goal function to a 'raw reward' r_g(I_{g,s,t}) ∈ [0, 1], indicating the reward of achieving the goal during that interaction sequence. The agent's total reward r_t is the sum of the raw rewards from all goals obtained at time t, where the symbols for these goals appear in the agent's history before t. According to Goertzel [61], the pragmatic general intelligence of an agent π, relative to the distribution ν over environments and the distribution γ over goals, is its expected performance with respect to goals drawn from γ in environments drawn from ν:

Π(π) ≡ ∑_{µ∈E, g∈G, T} ν(µ) γ(g, µ) V^π_{µ,g,T}    (4)

This formal framework of pragmatic general intelligence allows for a comparison of the intelligence of agents as a sum of the expected rewards these agents would obtain with respect to environments and goals.
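To make Eq. (4) concrete, the following Python sketch evaluates Π for two hypothetical agents; the environment, goals, distributions, and reward values are toy assumptions of ours, chosen to match the breadth-versus-specificity comparison discussed next.

```python
# Illustrative sketch (toy values, not from the cited formalism): computing
# the pragmatic general intelligence Pi of Eq. (4) as a weighted sum of
# expected goal-achievements over environments and goals.

def pragmatic_general_intelligence(nu, gamma, V):
    """Pi(pi) = sum_{mu, g} nu(mu) * gamma(g, mu) * V[mu, g]."""
    return sum(nu[mu] * gamma[(g, mu)] * V[(mu, g)] for (mu, g) in V)

nu = {"mu": 1.0}                                   # a single toy environment
gamma = {("g1", "mu"): 1.0, ("g2", "mu"): 1.0}     # two equally weighted goals

# A "broad" agent earns 0.2 on both goals; a "narrow" agent earns 0.4 on one.
V_broad = {("mu", "g1"): 0.2, ("mu", "g2"): 0.2}
V_narrow = {("mu", "g1"): 0.4, ("mu", "g2"): 0.0}

print(pragmatic_general_intelligence(nu, gamma, V_broad))   # 0.4
print(pragmatic_general_intelligence(nu, gamma, V_narrow))  # 0.4
```

Both toy agents obtain the same Π of 0.4, which is exactly the trade-off between breadth and specificity described in the example that follows.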
For example, an agent that is expected to obtain a reward of 0.2 in a single environment µ with respect to goals g_1 and g_2 has a total Π of 0.4, whereas an agent that is expected to obtain a reward of 0.4 for goal g_1 in a single environment µ has the same value of 0.4. Hence, the metric allows for taking both the breadth and the specificity of an agent's performance into account.

Apart from this quantitative framework for comparing an agent's capability, we further use a qualitative maturity scale for analyzing task-specific capabilities and general capabilities with respect to AI probe missions, drawing heavily from Hernández-Orallo [79, 78] and Hein [68]. The results of this qualitative analysis are presented in Section 4.

3 Artificial intelligence probe concepts

AI probes can be distinguished with respect to their objectives. An objective could be classic exploration, where AI serves only as a means for realizing autonomous exploration of a star system. It could also be more sophisticated, such as preparing an infrastructure for human colonization, or even an entirely AI-based colonization. We distinguish between four types of AI probes:

Explorer –
• capable of implementing a previously defined science mission in a system with known properties (for instance after remote observation) [10, 77];
• capable of manufacturing predefined spare parts and components;
• examples – the Icarus and Daedalus studies [19].

Philosopher –
• capable of devising and implementing a science program in unexplored circumstances;
• capable of original science: observing unexpected phenomena, drawing up hypotheses and testing them;
• capable of doing this within philosophical parameters such as planetary protection;
• capable of using local resources to a limited extent, e.g. manufacturing sub-probes, or replicas for further exploration at other stars.

Founder –
• capable of using local resources on a significant scale, such as for establishing a human-ready habitat;
• capable of setting up a human-ready habitat on a target object, such as part of an embryo space colonization programme;
• perhaps modifying conditions on a global scale (terraforming) [49, 70, 75].

Ambassador –
• equipped to handle the first contact with extraterrestrial intelligence on behalf of mankind, within philosophical and other parameters: e.g. obeying a Prime Directive and ensuring the safety of humanity [10, 22].

A more detailed description of each of the probe types is given in the following. Besides the scientific literature, the science fiction literature, which has elaborated on the different probe types, will also be considered.

3.1 Explorer

An Explorer probe is an extension of the model of modern-day automated space probes, which have limited on-board AI and well-defined missions. Because of their remoteness from Earth, modern probes are capable of some independent decision-making. Probes may put themselves into 'safe' modes in case of navigation failures or other issues; Mars rovers will stop before or back up from unexpected obstacles. But essentially, in the event of novelty, the probes wait for further orders from an Earth-bound mission control. Because of light-speed time delays, this would not be an option for probes like Icarus and Dragonfly to Alpha Centauri [73, 69, 109, 110, 130]. During the long flight, the AI would need to deal with routine systems operations like course corrections and communications, and also with maintenance, upgrades, and unplanned incidents like faults.
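The degree of on-board autonomy this implies can be sketched as a simple decision rule; the following Python fragment is a deliberately simplified illustration of ours (event categories, thresholds, and responses are invented), not a description of any actual flight software.

```python
# Simplified sketch of on-board autonomy for a probe that cannot wait for
# ground commands. Event categories and responses are illustrative only.

def on_board_response(event: str, light_delay_hours: float) -> str:
    routine = {"course_correction", "communication_pass", "scheduled_maintenance"}
    recoverable = {"sensor_glitch", "memory_error", "unexpected_obstacle"}

    if event in routine:
        return "execute stored procedure"
    if event in recoverable:
        return "isolate subsystem, re-plan, continue mission"
    if light_delay_hours < 1.0:
        return "enter safe mode and wait for ground control"  # viable near Earth
    return "diagnose and repair autonomously"                 # interstellar case

# At Alpha Centauri the one-way light delay is about 4.3 years.
print(on_board_response("thruster_failure", light_delay_hours=4.3 * 365 * 24))
```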
On arrival at Alpha Centauri, coming in from out of the plane of a double-star system, a complex orbital insertion sequence would be needed, followed by the deployment of subprobes and the coordination of communication with Earth [12]. It can be anticipated that the target bodies will have been well characterised by remote inspection before the launch of the mission, and so objectives will be specific and detailed. Still, some local decision-making will be needed in terms of handling unanticipated conditions and equipment failures, and indeed in prioritising the requirements (such as communications) of a multiple-subprobe exploration.

3.1.1 Technology and capabilities

The AI could be based on already existing technologies such as deep learning [105, 136] for feature recognition and genetic algorithms for task sequencing. Such an AI would not be considered an agent according to the definition of Franklin and Graesser [50], where a distinction is made between programs that just interact with the environment and agents that show a level of autonomy and adaptability with respect to the environment. A pre-trained deep learning algorithm would have a limited ability to adapt to a changed environment due to its dependency on large training data sets. The genetic algorithm's performance depends on a carefully crafted set of objective functions, which are hard-wired and not changed during the mission. Using the pragmatic general intelligence metric from Section 2, we claim that the reward of such an AI for µ is close to 0 whenever the environment does not resemble the training data set. Keeping things simple, we can define a distribution of environments ν_solsys, which represents the distribution of environments within the solar system. We assume that any environment that is sufficiently outside this distribution results in a reward close to 0. Hence, the more ν differs from ν_solsys, the more Π for π_explorer will approach 0. We can express the similarity of two distributions of environments by a similarity function sim with sim(ν_1, ν_2) = 1 for ν_1 = ν_2 and sim(ν_1, ν_2) < 1 for ν_1 ≠ ν_2. With the similarity function approaching 0, the pragmatic general intelligence of the Explorer probe would reach 0:

Π(π_explorer) = lim_{sim(ν, ν_solsys) → 0} ∑_{µ∈E, g∈G, T} ν(µ) γ(g, µ) V^π_{µ,g,T} = 0    (5)
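The reward collapse expressed by Eq. (5) can be illustrated numerically; the monotone degradation model below is an assumption of ours for demonstration purposes, not a claim about any specific learning system.

```python
# Toy illustration of Eq. (5): the expected reward of an Explorer AI trained
# on solar-system-like environments collapses as the encountered environment
# distribution becomes dissimilar to the training distribution. The quadratic
# degradation model is an invented assumption.

def expected_reward(similarity: float, max_reward: float = 1.0) -> float:
    assert 0.0 <= similarity <= 1.0
    return max_reward * similarity ** 2   # vanishes as similarity -> 0

for sim in (1.0, 0.5, 0.1, 0.0):
    print(f"sim = {sim:.1f} -> expected reward {expected_reward(sim):.3f}")
```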
A more advanced Explorer probe could make use of on-board manufacturing capabilities for creating mechanisms, instrument components, and tools, in order to have the flexibility to react to unexpected situations. The existing literature on using manufacturing technologies in space can serve as a source of inspiration [51, 35, 141, 128, 127, 84, 155]. One possibility is to carry bulk material stocks and a 3D printer to manufacture components during the trip and during exploration [51]. Used components could be recycled to close the material loop once they are no longer needed. Existing and near-term in-space manufacturing technologies focus rather on manufacturing structural elements [141] using bulk material sent from Earth, or on processing in-situ materials on planetary surfaces [128] or on small bodies [42].

On-board manufacturing capabilities would increase the flexibility of the probe, i.e. they would increase the space of potential actions with respect to an observation. An Explorer probe without on-board manufacturing would have a set of actions (a_1, ..., a_n) at its disposal, and an Explorer probe with on-board manufacturing a set of actions (a_1, ..., a_m), where m > n. For example, the probe could perform an action to manufacture a larger aperture for its telescope and allow for higher-resolution observations. The following inequality does not hold in general for Explorer probes with on-board manufacturing π_explorerobm and Explorer probes without, π_explorer:

Π(π_explorerobm) ≥ Π(π_explorer)    (6)

as there are cases where on-board manufacturing could lead to a globally lower value of pragmatic general intelligence. Imagine the case where manufacturing the aperture leads to a shift in the center of mass of the spacecraft, which leads to a higher consumption of fuel and a shorter lifetime of the probe, and hence to lower performance. However, we can imagine a special case where an action a_i enabled by on-board manufacturing substitutes for an action a_j: the substitution leads to a higher reward r_i > r_j but has no other effect on the entire history aor_{1:T} of the agent from its first cycle to its end of life T. For this case, inequality (6) would hold. However, on-board manufacturing or other means of extending the set of actions would not change the fundamental limitation that the available set of actions is pre-defined via the training data, and the system would perform poorly in environments that have no resemblance to the training data.

The general problem of a machine that constructs something has been treated by von Neumann [162] and Myhill [122] with universal constructor theory. A universal constructor is a machine M_a that can construct another machine M_b given an instruction I_b:

I_b + M_a → M_b    (7)

where "→" indicates the inputs to a construction process on the left side and the created object on the right side. M_a could be a 3D printer or any other machine for manufacturing. The simple constructor would be equipped with an initial set of instructions D to build infrastructure elements, instruments, replacement parts, etc., using the given instructions:

I_i + M_a → M_i,  I_i ∈ D    (8)

3.1.2 Mission architectures

An Explorer type probe would arrive in the target star system and start its exploration program either using its pre-existing hard- and software, or using its capabilities of modifying or manufacturing hard- and software components, depending on the encountered situation. A standard mission architecture for an Explorer type probe is shown in Fig. 1.

Figure 1: Star system exploration via Explorer probe

3.2 Philosopher

In contrast to the Explorers with their specific and well-defined missions, a Philosopher probe is capable of supporting an independent, open-ended exploration strategy. This may include devising and implementing its own science and exploration programme, from goal-setting to execution, and exploiting local resources to manufacture, for example, subsidiary equipment, subprobes, or even replicas of itself for further interstellar exploration.

3.2.1 Philosopher probes in fiction

Philosopher-class probes may be particularly useful on pioneering voyages to develop necessary infrastructure for follow-up missions. In 'StarCall' [12] a smart probe called Sannah III is sent on an eighty-year mission to Alpha Centauri, using for acceleration a mass-beam propulsion system in Earth orbit, and decelerating using an onboard inertial-confinement fusion drive. Once at Centauri, the mission is to construct another mass-beam propulsion station from local resources; future probes, with no need to carry fuel for deceleration, will be capable of delivering cargoes orders of magnitude larger.
An interesting advanced probe of the Philosopher type, depicted in Greg Bear's 1990 novel Queen of Angels [13], is AXIS (for Automated eXplorer of Interstellar Space), a probe to Alpha Centauri. An advanced onboard 'biologic thinker system' (Chapter 4) can design its own science programmes at the target. As an example, AXIS observes circular structures on a planetary surface which it hypothesises are artefacts of ETI, a hypothesis it later disproves.

An example of a wider Philosopher-class probe strategy is Tipler's [151] suggestion that the use of self-replicating von Neumann machines [162] as probes could reduce the costs of a large-scale interstellar exploration programme drastically – the originating culture need only bear the costs of sending out the first probe, and allow descendants constructed of local resources to explore the Galaxy step by step. As a near-term example of this idea, Freitas [52] described a Daedalus probe with a self-replicating payload to establish an industrial infrastructure that allows for building another Daedalus self-replicating probe. However, Freitas did not provide details about the AI required for this task.

3.2.2 Technology and capabilities

One key aspect of the Philosopher probe will be how the AI can adapt to and learn from new findings based on the data at the target star system. It is quite obvious that optimal problem solvers such as practical implementations of AIXI [87, 106, 157, 158] and the self-referential Gödel machine [146, 145] could be used. Another possibility is to use genetic algorithms to automatically generate programs adapted to new findings at the star system [14]. To take the Gödel machine as an example, it consists of two parts. The first part is a program that interacts with its environment. The second part includes a proof searcher that searches for proofs that a modification to the Gödel machine is expected to yield higher rewards during its lifetime. Once such a proof is found, the modification is implemented and the Gödel machine modified. Schmidhuber [135] argues via his Global Optimality theorem that the Gödel machine performs optimally in the set of environments ν and is not restricted by the No Free Lunch theorem. The Gödel machine can modify any part of its code, including the proof searcher itself and the utility function which sums up the rewards. Hence, a Philosopher probe based on a Gödel machine would, in principle, not be bound by the limitations of the Explorer probe and could modify its soft- and hardware with respect to a specific environment and even set its own goals. A version of the Gödel machine for solving design problems has been proposed by Hein and Condat [72]. Such a design Gödel machine could be used for building infrastructures in the target star system. The Gödel machine only switches to a modified version if it can prove that the modification would yield better results on the utility function, which means that

V^{π_gm}_{µ,g,T} ≥ V^{π_explorer}_{µ,g,T},  ∀ν(µ) ∈ E, ∀γ(g, µ) ∈ G    (9)

and hence that the Philosopher AI based on a Gödel machine has a pragmatic general intelligence equal to or larger than that of the Explorer:

Π(π_gm) ≥ Π(π_explorer)    (10)

The Philosopher AI would not only be able to manufacture artifacts as the Explorer does, but would go a step further in actually designing artifacts during its mission.
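The Gödel machine's switching rule of Eq. (9) can be sketched schematically; in the toy Python loop below, the proof searcher is replaced by a surrogate utility evaluation (an assumption of ours, since genuine proof search is far beyond a few lines), so the sketch only mirrors the acceptance logic, not the machine itself.

```python
# Schematic sketch of the Goedel machine's acceptance rule: a self-
# modification is adopted only when it is shown to yield at least the
# current expected utility (cf. Eqs. 9-10). The utility model and the
# proposal mechanism are invented toys standing in for the proof searcher.

import random

def utility(policy_quality: float) -> float:
    return policy_quality            # stand-in for expected lifetime reward

def propose_modification(quality: float) -> float:
    return quality + random.uniform(-0.1, 0.1)   # candidate self-rewrite

quality = 0.5
for _ in range(100):
    candidate = propose_modification(quality)
    if utility(candidate) >= utility(quality):   # switch only if not worse
        quality = candidate

print(f"final policy quality: {quality:.2f}")    # never below the initial 0.5
```

The monotonicity of the loop is the point: under the acceptance rule, the modified agent can never perform worse than its predecessor, echoing Eq. (10).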
Using the notation from von Neumann [162] and Myhill [122], a designing machine is a machine M_c that creates instructions I_i:

M_c → I_i    (11)

Programs that can synthesize designs are numerous and already exist today, for example, for generating complex geometric shapes from geometric primitives [28]. For complex soft- and hardware, a design Gödel machine [72] could be imagined, where the machine analyzes its environment (available resources) and a set of design requirements (e.g. provide an air-tight volume), synthesizes a set of designs (e.g. an aluminum hull), and assesses their feasibility. Feasibility is assessed via simulations or testing. We can imagine that the machine conceives small prototypes to test key feasibility areas with minimal expenditure of available resources before embarking on the construction of the real artifact.

An open research question concerning self-improving AI is the safety problem [46]. If the AI can modify itself and specifically its utility function, how can we assure that it will not take harmful actions?

3.2.3 Self-replicating probes

Self-replicating probes have been proposed in the literature for decades [52, 55, 54, 116]. In the following, we refer to the theory of self-replicating machines, which can be found in the theoretical computer science literature such as [4, 122] and [162]. A self-replicating machine can be understood as a machine M, e.g. a Turing machine or equivalent, that accepts some input data I that includes a description of M and is able to construct M. However, this is not yet self-replication, as the instruction is not copied. Hence, we introduce two machines M_a and M_b, where M_a creates a copy of I and M_b creates a copy of M_a and M_b. The input data I needs to include a description of M_a and M_b:

I_{a+b} + M_a → I_{a+b}    (12)

I_{a+b} + M_b → M_a + M_b    (13)

where "→" can be interpreted as "creates". Hence, combining the two yields a self-replicating machine:

I_{a+b} + M_a + M_b → I_{a+b} + M_a + M_b    (14)

Although the existence of a self-replicating machine has been formally proven, an actual construction has turned out to be more difficult. Programs that can output their own code have been around for years and are called "quines" [82]. Self-replicating machines based on cellular automata have been developed but turn out to be computationally very expensive, as they simulate the assembly of the machine from elementary parts [140]. Robotic self-replicating machines have been proposed by Zykov et al. [166] and Griffith et al. [64]. However, they use prefabricated parts that are assembled to form copies of themselves. Several NASA NIAC studies [107, 31, 20, 153] have concluded that at least "cranking" self-replicating machines are feasible. Nevertheless, for any practically useful application, physical self-replicating machines would need to possess considerable computing power and highly sophisticated manufacturing capabilities, such as described by Freitas [52, 55, 54], involving a whole self-replication infrastructure. Hence, the remaining engineering challenges are still considerable. Possible solutions to some of the challenges may include partial self-replication, where complete self-replication is achieved gradually as the infrastructure is built up [117], the development of generic mining and manufacturing processes applicable to replicating a wide range of components, and the automation of individual steps in the replication process as well as supply chain coordination (Yim et al. [169, 170]).
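On the software side, the instruction-copying step of Eqs. (12)–(14) has a well-known minimal counterpart: a quine, a program whose output is its own source code. The classic Python example below is included purely as an illustration that copying the description I is the conceptually simple part of self-replication; constructing the physical machinery is the hard part.

```python
# The two lines below form a classic Python quine: their output is exactly
# their own source text (this comment excluded) - a software analogue of
# copying the instruction I in Eq. (12).
s = 's = %r\nprint(s %% s)'
print(s % s)
```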
The main challenge for the AI of such a probe is rather how to adapt the design of the probe to the given resources in a star system. Depending on the chemical composition and reachability of resources in the star system, different mining and manufacturing processes are needed. For example, resources on asteroids, comets, exoplanets, and exomoons might be quite different in composition and in the ease of mining them [96, 15, 114]. Using mining and manufacturing processes that are applicable to a broad range of resources would significantly reduce this challenge: e.g. sintering can be applied to a broad range of regolith materials, whereas high-purity metals and alloys require highly specialized processes, which are limited to a specific type of metal or alloy. However, products from general processes might suffer from lower performance characteristics compared to products of a highly specialized process, e.g. in tensile strength.

A special case of self-replicating machines are self-replicating machines that improve with each generation. Myhill [122] provides an existence proof for such machines with the properties

M_{z_i} < M_{z_{i+1}},  i = 0, 1, 2, ...    (15)

where "<" indicates that the machine on the right has larger theorem-proving capabilities than the machine on the left and i indicates the generation, and

M_{z_i} → M_{z_{i+1}}    (16)

where a machine produces a machine of the subsequent generation with greater capabilities.

3.2.4 Mission architectures

Surface exploration, including astrobiology

Similar to an Explorer type probe, the most basic mission architecture for a Philosopher type probe would consist of the probe arriving in the star system, as shown in Fig. 2, and deploying a number of sub-probes for exploration. The difference to an Explorer probe is that the exploration strategy is developed in-situ, depending on the observations the probe makes.

Figure 2: Star system exploration via multiple sub-probes

Self-replicating star system exploration

A mass-efficient exploration strategy would comprise the production of self-replicating probes using in-situ resources of the star system. Only the mass for the initial spacecraft is required, thereby exponentially reducing the required mass for exploration. The success of such an exploration strategy will depend on the ease of identification, the reachability, and the extraction of resources in the star system. Various AI architectures can be imagined. Computing on the sub-probes could be limited, with the main probe responsible for sophisticated computations. Such an architecture might be more efficient but also more risky, in case of a failure of the main probe. Alternative architectures could be based on distributed computation between the sub-probes and the main probe, where their computing power would not differ significantly, which would increase the reliability of the overall system. However, such an architecture would require the replication of computing hardware in-situ, which might be difficult to achieve.

Figure 3: Star system exploration via self-replication

Adapted biome creation

Another mission architecture for a Philosopher probe could consist of the preparation of habitable planets for subsequent settlement. A crucial element for human habitability is the existence of a human-compatible biome, i.e. the microorganisms that are vital for human survival [37]. In case the exoplanet is sterile, such a biome could be engineered, taking the local environmental conditions into consideration.
For example, a higher level of stellar radiation might lead to different environmental pressures on the biome than on Earth, leading to a biome which is no longer sufficient for human survival. Engineering and cultivating an adequate biome would be a task that requires sophisticated AI capabilities. The corresponding mission architecture is shown in Fig. 4.

Figure 4: Creation of adapted microbiome for future colonization

World model creation

Another possible Philosopher probe objective could be the generation of a "world model", as shown in Fig. 5. A world model [65] in the context of AI is similar to a mental model in humans, in which reasoning can be performed without directly taking action on the real world. Here, we can think of world models as AI mental models that are transmitted back from the star system to the solar system and that could allow for running experiments and simulations. In other words, world models could be used for virtually exploring the star system.

Figure 5: Creation and submission of world model

Traveling AI

Another possible mission architecture consists of a traveling AI, as shown in Fig. 6. Once the target star system has been explored, the AI which has been interacting with this environment could transmit a copy of itself back to the solar system. Alternatively, an AI could be transmitted to the star system. This would be interesting in case the evolution of AI in the solar system is advancing quickly and updating the on-board AI would lead to performance improvements. Updating on-board software on spacecraft is already a reality today [45, 111]. Going even a step further, one can imagine a traveling AI which is sent to the star system, makes its observations, interacts with the environment, and is then sent back to our solar system or even to another Philosopher probe in a different star system. An AI agent could thereby travel between stars at light speed and gradually adapt to the exploration of different exosolar environments.

Figure 6: Traveling AI between probes in different star systems

3.3 Founder

A Founder probe is capable of much more ambitious missions, including the significant modification of its environment and perhaps even establishing human colonies. Not only does the Founder need the capability to collect and analyze data, as the Philosopher does, but also the capability to deliberately alter its environment and to verify that the conceived interventions and designs actually work out. Hence, sophisticated simulation, optimization, and reasoning capabilities are required.

3.3.1 Founder probes in the literature

The classic application of a Founder-class probe may be the 'seedship' colony strategy. Crowl et al. [36] gave a recent sketch of possibilities for 'embryo space colonisation' (ESC). The purpose is to overcome the bottleneck costs of distance, mass, and energy associated with crewed interstellar voyages. Crowl et al. [36] suggested near-term strategies using frozen embryos, and more advanced options using artificial storage of genetic data and matter-printing of colonists' bodies, and even 'pantropy', the pre-conception adaptation of the human form to local conditions. Hein [70] previously explored the possibility of using AI probes for downloading data from probes into assemblers that could recreate the colonists. Although this appears speculative, Boles et al. [18] have recently demonstrated the production of genetic code from data.
A seedship's AI would, at a minimum, need to create a human-ready habitat from local materials – though more advanced options up to terraforming could be considered. And, crucially, it must raise the first generation of colonists to adulthood, and perhaps beyond, without adult-human support.

3.3.2 Founder probes in fiction

In fiction, embryo/genetic space colonization was hinted at as long ago as 1930, by Stapledon in Last and First Men [144]. Some billions of years in the future, the Last Men on Neptune, threatened by solar destabilization (p.238), disseminate 'among the stars the seeds of a new humanity'. These will 'combine to form spores of a new life, and [will] develop, not into human beings, but into lowly organisms with a definite evolutionary bias toward the essentials of human nature'. (Earlier in this saga the genetic adaptation of human species to new environments, on Venus and Neptune, was also depicted.)

Clarke's 1986 novel The Songs of Distant Earth [34] contains a classic modern description of seedship colonization, with a typically elegant summary of its challenges. Driven by the impending nova explosion of the sun, a 'seedship' carrying 'gene patterns' (Chapter 2) colonized an Earthlike planet called Thalassa, fifty light years from Earth. The first generation of Thalassans was manufactured and raised by machines. The ship had to 'rear these potential humans, and teach them how to survive in an unknown but probably hostile environment. It would be useless – indeed, cruel – to decant naked, ignorant children on to worlds as unfriendly as the Sahara or the Antarctic. They had to be educated, given tools, shown how to locate and use local resources. After it had landed and became a Mother Ship, it might have to cherish its brood for generations. Not only humans had to be carried, but a complete biota.' (p13). Seven hundred years later, 'the Mother Ship [was] the oldest and most revered monument on the planet' (p14).

Vinge's 'Long Shot' (1972) [160], perhaps more realistically, hints at the challenges posed even by the journey component of such a mission. When the Earth is threatened by a lethal increase in solar luminosity, a 10,000-year embryo space colonization mission to Alpha Centauri is hurriedly mounted as a last-resort species survival option. The story is told from the point of view of the onboard AI, called Ilse. Ilse is trained in Earth orbit in such disciplines as orbital manoeuvres and planetary survey, manages the long mission itself, observes the target stars and selects a planet for landing, and survives a final drastic atmospheric re-entry. But during the long journey component failures degrade Ilse's mentation and memory, to the extent that she struggles to complete her tasks, and even forgets the primary mission. A last-resort backup memory enables the nurturing of the embryos to go ahead – and this convincing story tantalisingly ends before the AI's next great challenge: raising the first generation of colonists.

Crowl et al. [36] (perhaps optimistically) suggested it would be sufficient to use androids as surrogate parents: AIs embodied to enable physical contact, and equipped with 'a type of expertly programmed expert system, with sophisticated natural language abilities'. We may, however, need a more complete understanding of what contribution other human beings make to our development from infancy before we can be sure how to supplant natural parenting with surrogates. This contribution may even include a biological input.
The seedship would need the capability of synthesising far more than 'human' cells. The human body contains ten thousand times as many microbes of specialised kinds as eukaryotic human cells; together this 'human microbiome' has a gene set far larger than the human. Furthermore, the development of the microbiome in an individual's body is not well understood; perhaps the microbes are transmitted from others, like infections [134] (pp142-3).

Cultural learning would also need to be assured. As an extreme case study, Kemp [95] speculated on how a group of infants, isolated from any adult contact at all, might develop. Is culture hard-wired into our consciousness? In one relevant example, a sign language spontaneously developed among an isolated group of deaf children in Nicaragua in the 1970s. Necessary, if primitive, tools might be invented from scratch, and a lifestyle equivalent to hunter-gathering might emerge, depending on the environment. Sexual differentiation of behaviour and roles might arise when the first wave of pregnancies occurred – and the first deaths might lead to religious impulses. Groups limited by the 'Dunbar number' of 150 close personal contacts might emerge, leading to differentiation of culture, perhaps even war.

Clearly, the contribution of the wider environment of our human society to our development will have to be well understood and replicated if humans are to be manufactured 'from scratch' beginning with nothing but genetic data. Perhaps relevant case studies such as that of the Nicaraguan deaf children could give an indication as to the minimum support, physical and psychological, required of any AI surrogate-parent to successfully raise any seedship children. However, Kemp quotes paleoanthropologist Ian Tattersall as predicting that the stranded group would die out, with the children developing pathologically without the presence of adults: that is, deprived of the social nurturing with which we have evolved.

Ethical questions also arise. It would presumably be possible to imprint the infants with cultural values of a specific type, such as religious or libertarian. Would it be right to do so? After all, such values are transmitted within any 'normal' human society from parents and teachers to children. On the other hand, a weaker purposeful conditioning may cede unanticipated influence to the unusual initial conditions of the seedship colonists.

In Hogan's Voyage from Yesteryear (1982) [83], a limited nuclear war in 1992 triggers a panicky attempt to seed Chiron, a planet of Alpha Centauri. Generations later, humanoid robots with an essentially nurturing role continue to permeate Chironian society – just as during the upbringing of the first generation (p125). Just as there was no obvious hierarchy of human authority presented to the first cadre of children in an adult-free world, they have developed a society which continues to be hierarchy-free and self-organising (p110). And, still more profoundly, the Chironians have continued to regard material goods as free and as abundant as they were when provided by the founding robots in the beginning: 'the idea of restricting the supply of anything never occurred to anybody. There wasn't any reason to. We've carried on that way ever since. You'll get used to it' (p129). Thus they have naturally evolved a post-scarcity society.
When more conventional follow-up missions are sent to Centauri, Hogan hints at still deeper cultural clashes between 'normal' folk and the seed-grown: '"It's not really their fault because [the seed-born are] not really people like us ..."' (p44). There may even be religious prejudices. The ship carries an ordinance proclaiming that the seed-born have souls – rather as the sixteenth-century Popes had to decree that the inhabitants of the New World had souls, like Europeans (p44).

In terms of more advanced seeding technologies, the term 'pantropy' [137], meaning the pre-conception manipulation of human stock to adapt it drastically for survival in novel environments, was coined by SF writer James Blish in his 'Seedling Stars' stories (1952-6, coll. 1957) [17] (though Blish acknowledged Stapledon's prior exploration of related ideas). If seedships reduce the cost of the travel to a new home, pantropy should reduce the cost of adaptation in a new environment – pantropy, changing people, will be cheaper than terraforming, changing worlds. Blish's story starts with a rogue pantropist who has adapted humans to survive on Ganymede; ultimately a relatively low-cost, open-ended interstellar 'seeding programme' (p54) succeeds so well that interstellar colonists return to create pantropes to recolonise 'the vast and tumbled desert of the Earth' (p192).

It would appear that on Clarke's Thalassa [34] the education of the initial generations and the cultural transmission from the terrestrial precursor went well; the Thalassans understand where they came from, how they got there, and the meaning of later visitors from Earth. But this transmission is a challenge, and perhaps more so if the pantropes' own physical form is drastically modified. A cautionary tale of cultural discontinuity and amnesia is Blish's story 'Surface Tension' [16], in which a crash-landed seedship crew on Hydrot, world of Tau Ceti, hastily creates microscopic pantropes to share mud-flat puddles with a menagerie of algae, diatoms, protozoans, and rotifers. With no knowledge of their origin, even of their basic cosmological context, after sixteen generations the pantropes try to break out of a puddle-world into which they don't seem to fit, delivering the mother of all science-fictional conceptual breakthroughs: 'the two-inch wooden spaceship and its microscopic cargo toiled down the slope towards the drying little rivulet' (p175) [17].

The embryo space colonization idea remains imaginatively alive. In the recent movie Interstellar (2014, dir. C. Nolan), with the Earth becoming uninhabitable due to a blight, embryo space colonization through a wormhole was presented as a 'Plan B' to save mankind if a 'Plan A', involving the transport of mature humans, failed.

3.3.3 Technology and capabilities

Founder probes will certainly use some form of self-replication, as presented in Section 3.2.3, in order to bootstrap the infrastructure needed for building up habitats (free-floating or surface colonies, or even terraforming). However, we can expect that the breadth of tasks required for building a habitat is much larger and their safety-criticality much higher. It is also reasonable to assume that the complexity of a space colony is higher than that of a Philosopher probe, for the simple reason that a space colony would likely also contain a sophisticated AI for environmental control and maintenance [75].
Furthermore, engineering a proper biome for the ecosystem and fine-tuning the overall system to the local conditions is a task that would be challenging for human engineers. Hence, it is reasonable to assume that the pragmatic general intelligence of the AI of a Founder probe is equal to or higher than that of a Philosopher probe, given the larger number of goals to be achieved and the higher required performance on these goals (r_g yields values close to 1 only if a number of strict safety conditions are satisfied). If we assume that the distribution ν over environments and the distribution γ over goals are the same as for the Philosopher probe, the expected goal-achievement for the Founder is equal to or higher than for the Philosopher:

V^{π_founder}_{µ,g,T} ≥ V^{π_philosopher}_{µ,g,T}    (17)

and hence

Π(π_founder) ≥ Π(π_philosopher)    (18)

This does not exclude that the Philosopher probe AI could achieve higher raw rewards for individual goals such as devising scientific hypotheses. However, we assume that most of the Philosopher probe's goals are part of the Founder probe's goals.

3.3.4 Mission architectures

Fig. 7 shows a mission sequence for the Founder probe, which begins with exploring and harvesting the star system in order to design and manufacture habitats. In-space colonies and surface colonies could be constructed, depending on the judgment of the probe's AI.

Figure 7: A Founder probe building habitats in a star system

The inhabitants could be transported to the star system via a world ship [75, 70]. Due to the extremely high cost of a world ship, two alternatives can be imagined. First, the genetic material for creating humans or other organisms is transported via the Founder probe or a separate probe, as shown in Fig. 8.

Figure 8: A Founder probe building habitats and growing a population from transported genetic material

As an alternative, the genetic material is recreated from data, using an advanced version of a digital-to-DNA converter [18]. The latter approach would have the advantage that up-to-date DNA data could be transferred at light speed to the star system. One of the caveats of the first instance of a digital-to-DNA converter is its extremely low efficiency (99.999%) [129]. Nevertheless, this would come quite close to the notion of 'teleportation' [70], as illustrated in Fig. 9. It is not that outlandish to assume that, for example, stem cells could be transported to the star system, DNA data sent to the converter, the DNA printed out, and the stem cells "reprogrammed". Using far more advanced forms of bio-printing [121, 90] than exist today, entire organisms could be created on-site. Such an approach would circumvent potential radiation-related and age-related degradation during transport in interstellar space. We can even imagine that, using self-replication technology and advanced manufacturing systems, a design for an up-to-date digital-to-DNA converter could be sent to the target star system and the converter built by the advanced manufacturing system. Hence, a combination of these advanced technologies would allow for significant flexibility in how the colonization operation is performed.

Figure 9: On-site production of genetic material via a data-to-DNA converter

3.4 Ambassador

3.4.1 Ambassador probes in the literature

The idea of using smart space probes as a specific means to make contact with extraterrestrial civilizations dates back to Bracewell [22, 23], who proposed the idea in 1960 as an alternative to the then-nascent 'conventional' SETI model (detection of EM signals).
Bracewell imagined a culture sending out many minimal-cost probes equipped with artificial intelligence at least at the human level. On encountering a target culture with radio technology, a probe would initiate contact, perhaps by echoing back native signals. This approach has distinct advantages, at least for a long-lived culture, in a universe in which technological cultures are separated by large distances (Bracewell suggests 1000 light years or more), or in which, indeed, such cultures are typically short-lived. A local probe would allow rapid dialogue, compared to an exchange of EM signals which might last millennia. The probe might even be able to contact cultures lacking advanced technology, through recognizing surface structures for example [11]. And if technological cultures are short-lived, a probe, if robust enough, can simply wait at a target star for a culture ready for contact to emerge – like the Monoliths of Clarke's 2001 [32].

In Bracewell's model, the probe would need to be capable of distinguishing between local signal types, of interpreting incoming data, and of achieving dialogue in local languages in printed form – perhaps through the use of an animated dictionary mediated by television exchanges. In terms of message content, perhaps it would discuss advances in science and mathematics with us, or 'write poetry or discuss philosophy' (p79). However, any engagement with an alien culture on behalf of humanity might require a sophisticated political and ethical understanding. Bracewell suggested his probe might need to handle political complications, such as avoiding rivalries between contacted groups by selecting a 'competent worldwide entity', like Earth's NASA [23] (p79), to speak through. As suggested by Baxter [11], such a probe would presumably need mandates not to harm the contacted culture, in accordance with a 'Prime Directive', and not to risk harm to humanity, for instance by revealing the existence of Earth and its location to a potentially hostile culture; it might choose to conceal or mask its approach trajectory, for example. Using such precedents as planetary protection protocols and the First SETI Protocol, a publicly debated and agreed policy on the balance between the opportunities offered by contact and the risks posed by exposure could be developed before the launch of any such probe, as a protocol to guide the AI in its decision-making.

Bracewell went into no details of the probe's AI, beyond speculating that 'presumably the computing part need only be the size of a human head, which is, we know, large enough to store an immense amount of information' [10] (p79). Bracewell's argument was developed further over the years [53, 57, 56, 102, 154]. Tarter [148] speculated on the use of nanotechnology to send out extremely small smart probes.

3.4.2 Ambassador probes in fiction

In Clarke's 2001: A Space Odyssey (1968) [32] the rogue computer Hal was effectively an Ambassador. The true mission of the spacecraft Discovery, to investigate the alien Monolith orbiting Jupiter (in the movie and sequels; Saturn in the novel), was kept secret from the pilot crew of Bowman and Poole, and was known only to a hibernating team of specialists – and to the HAL 9000 unit, the on-board AI. The need to perpetuate this dishonesty caused Hal to break down. But if the crew were incapacitated and contact with Earth lost, Hal himself had been instructed to continue the mission of alien contact (pp.98-9).
Clarke's The Fountains of Paradise (1979) [33], a novel of the building of a space elevator, features a kind of cut-price Bracewell probe, a visitor to the solar system called Starglider by human observers. Arriving in the 2060s, the probe was launched from a red dwarf system 52 ly away; proceeding by means of gravity assists, it has hopped from star to star, taking 60,000 years to reach Sol. When it arrives in the solar system, Starglider initiates conversation using English and Mandarin acquired from our leakage broadcasts. Starglider's primary function seems to be the acquisition and sharing of information: 'Starglider combines the functions both of Ambassador and Explorer' (p83). But Starglider may have a wider agenda of cultural manipulation. It reveals 'almost no advanced technology, and so [has] minimal impact upon the technically-orientated aspects of our culture' (p174). But on the other hand it appears purposefully to demolish religion, for example by logically deconstructing St Thomas Aquinas: as Clarke puts it, Starglider 'had put an end to the billions of words of pious gibberish with which apparently intelligent men had addled their minds for centuries' (p.94). If intentional, this may amount to a very subtle cultural manipulation (towards a cautious development of technology and away from religion?), implying a deep apprehension of our culture and a very high level of cognition.

3.4.3 Technology and capabilities

The capabilities of an Ambassador probe AI are quite distinct from those of the previous probe types, as the focus of the former is on communication, i.e. agent-to-agent interaction. This interaction can be broken down into performing an action (sending a visual signal, moving an object, etc.) and interpreting the action of the other agent. This generic framework would apply to various forms of organisms / ETI. An existing formal framework we refer to in the following is multi-agent reinforcement learning [85, 26, 161, 147, 108]. It is an extension of reinforcement learning to the multi-agent case with two or more agents [26, 161]. Using the notation from Busoniu et al. [26], single-agent reinforcement learning is described via the Markov decision process, where a finite Markov decision process is a tuple ⟨X, U, f, ρ⟩ with X the finite set of environment states, U the finite set of agent actions, f : X × U × X → [0, 1] the state transition probability function, and ρ : X × U × X → R the reward function.

Multi-agent reinforcement learning is a generalization of the single-agent case, called a stochastic game. A stochastic game is a tuple ⟨X, U_1, ..., U_n, f, ρ_1, ..., ρ_n⟩ with n the number of agents, X the finite set of environment states, U_i, i = 1, ..., n the finite sets of actions available to the agents, yielding the joint action set U = U_1 × ... × U_n, f : X × U × X → [0, 1] the state transition probability function, and ρ_i : X × U × X → R, i = 1, ..., n the reward functions of the agents.

Multi-agent reinforcement learning distinguishes between cases where the agents cooperate, cases where they compete, and mixed cases. For the cooperative case, the reward functions of the agents are the same (ρ_i = ρ_j, ∀i, j ∈ 1, ..., n).
Based on this basic multi-agent framework, we can already draw a few conclusions for an encounter between an Ambassador probe AI and an ETI. Whether or not the ETI is an AI is secondary for applying the formal framework, however, recent publications in the field of Search for Extraterrestrial Intelligence (SETI) have argued for the case of an alien AI [39, 25]. First, it seems very unlikely that the reward functions of these agents are the same, as there is a vast space of possible reward functions and the probability of the agents having the same reward function should be very low unless there is some form of universal convergence. It follows that the interaction between the agents is very likely not cooperative. In case the encounter is between a single Ambassador probe AI and a single ETI, the stochastic game is necessarily competitive if more than one probe AI or ETI are involved, we either have a competitive or mixed case. Drawing from game theory, the case of an encounter of an Ambassador probe AI with an ETI can be interpreted as the case of coupled learning [161], where each agent attempts to model the other agent(s), i.e. their transition function(s) and reward function(s). Depend- ing on the type of game, specifically zero-sum, general-sum, or coordination game, different solutions can be calculated by the agents: Nash equilibrium, correlated equilibrium, or a co- ordinated joint action. Such solutions cannot always be calculated but have been successfully applied in practice [161]. The Ambassador probe AI’s actions would correspond to strategies in game theory. We can imagine actions such as ’observe’, ’contact’, ’send message’, ’withdraw’, or even ’self- destruct’, in case of a hostile ETI. In each time step, the model of the other agent is refined and the next action taken with respect to the model. Regarding the model for the ETI’s actions, we face a principal challenge of predicting the actions of an agent that is more powerful than the Ambassador probe AI, which will be addressed in more detail in Section 3.4.5. Sending back the interaction history of an Ambassador probe’s encounter with an ETI could be very useful, as it could form the basis for training future agents to interact with the ETI. Even the transmission of an updated AI to the Ambassador probe could be imagined if the encounter duration might be stretched to decades and longer. 3.4.4 Mission architectures Mission architectures for the Ambassador probe are likely to resemble those of the Explorer and Philosopher if they are part of an exploration mission. A possible setting is where the Ambassador AI is stored on an Explorer or Philosopher probe and tries to identify cues for ETI in the incoming data. The world model developed by the Explorer / Philosopher AI could also serve as a source for cues for how to communicate with an ETI. 3.4.5 Safety of encounters with an alien AI The basic tasks of an Ambassador probe would be to communicate with an extraterrestrial intelligence, which means first, that it is able to understand signals from such an intelligence, and second, it is able to send signals that can be understood by the intelligence. Such an interaction sequence can be interpreted in the previously introduced agent-based framework. However, a particular challenge is to avoid actions that could be interpreted as hostile or could otherwise have negative consequences. 
Furthermore, recent SETI / SETA publications have conjectured that an extraterrestrial intelligence might not be biological but an advanced AI itself [25, 39, 1]. Hence, we have reason to believe that if the Ambassador probe makes contact with an extraterrestrial intelligence, such an intelligence might not be biological in nature but a kind of machine, and more importantly, it might have more advanced AI capabilities than the Ambassador probe itself. Intuitively, we would expect that communication with such an advanced AI would be challenging for the Ambassador. We will argue that, at least for a limited formal case, it is possible to prove that it is in general impossible to fully interpret the actions of such an AI via an Ambassador probe. One might argue that the formal case, where we refer to theorem proving, is not applicable to a real encounter with an alien AI. However, given that we use formal theorem-proving techniques for verifying computer programs that have to adhere to strict safety standards, we still think that such an approach is suitable for shedding light on some fundamental issues regarding the encounter with an alien AI.

Furthermore, we argue that it is impossible to generally prove that actions the Ambassador would take in interacting with the alien AI are "safe". The first case is analogous to the difficulty of "ensuring that the initial agent's reasoning about its future versions is reliable, even if these future versions are far more intelligent than the current reasoner" [48]. This type of reasoning has been called Vingean reflection in the literature and may apply to humans reasoning about super-human AI as well as to an AI reasoning about more intelligent versions of itself. Here we argue that the same line of reasoning on Vingean reflection can be applied to the case of an AI on an interstellar probe encountering a more intelligent ETI. Fallenstein and Soares [48] use backward induction as an illustrative example that an agent which is capable of reasoning about improved versions of itself would need the reasoning capabilities of its improved versions to do so. LaVictoire [104] uses Löb's theorem to show that an AI's reasoning about a more powerful version of itself is unreliable. Several remedies for this "Löbstacle" have been proposed [47, 167, 58, 59].

There are, though, differences between those settings in the literature and the case of an encounter with an ETI. Firstly, we have good reasons to believe that such an AI would be vastly superior to an AI we have sent to the stars, as it is likely that such an AI has developed well before we would have developed an advanced AI, therefore having had much more time to evolve. Secondly, the problem is not to predict whether modifications made to an AI are potentially harmful, but to a certain extent to predict the actions of an ETI. Let us suppose that we could have access to the entire code of the ETI. Such a case would occur when we receive a signal from an ETI which might turn out to be a program [156]. A program that would be able to prove that such a program is safe would need to be at least as powerful as the program it checks. We simply refer to Löb's theorem for a proof: Let X be any logical statement and L(X) be the statement "if ProofSeeker(X) halts, then X", where ProofSeeker is a program that searches all possible proofs and halts if and only if one of them is a valid proof of the statement X. Löb's theorem states that for all statements X, if L(X) is provable, then X is provable.
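The verification asymmetry behind this argument can be mimicked with a deliberately crude model, in which a proof system is reduced to the finite set of statements it can derive; the sketch below (statement strings and systems are invented) only illustrates that a weaker checker cannot certify statements that require the stronger system, and is in no way an implementation of Löb's theorem itself.

```python
# Crude illustration of the verification asymmetry: a checker can only
# certify statements derivable in its own (weaker) proof system. The 'proof
# systems' here are just finite sets of derivable statements - a stand-in.

AMBASSADOR_SYSTEM = {"probe arithmetic consistent", "subprobe program halts"}
ALIEN_SYSTEM = AMBASSADOR_SYSTEM | {"alien program is safe"}  # strictly stronger

def proof_seeker(statement: str, system: set) -> bool:
    """Searches the given system for a proof; returns True iff one exists."""
    return statement in system

# The alien's system settles the safety statement; the Ambassador's cannot.
print(proof_seeker("alien program is safe", ALIEN_SYSTEM))       # True
print(proof_seeker("alien program is safe", AMBASSADOR_SYSTEM))  # False
```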
It is straightforward to apply Löb's theorem to the case of checking whether an alien AI program is safe. In such a case, we take X as the logical statement "alien program is safe". L(X) then translates to "if ProofSeeker("alien program is safe") halts, then "alien program is safe"". The ProofSeeker would be the AI on the Ambassador probe. However, this is in contradiction to the inferior deductive capabilities of ProofSeeker compared to the alien program, and therefore ProofSeeker cannot prove the statement L("alien program is safe"). It follows that whatever action the Ambassador AI takes, it cannot prove that the ETI would react safely, as it cannot predict that the alien AI's actions would be safe in general.

The problem of an encounter with an alien AI is, therefore, an extreme case of Vingean reflection, where approaches from the literature, such as that of Everitt et al. [46] (Section 5), which aim at containing potentially harmful self-modifications, do not apply. For example, the correct specification of the reward function for avoiding harmful AI does not apply to an alien AI, as, if there is a reward function, it has already been specified. If AI alignment is already a challenge for AI created by humans, it is very likely that an ETI is not aligned, neither with human values nor with the values of an AI created by humans, e.g. the Ambassador probe's AI, simply due to the vast space of possible AI designs. Unless there is some form of universal convergence of AI designs, it is unlikely that the ETI is similar to the Ambassador's AI.

Although the values of the Ambassador probe's AI and the ETI are likely misaligned, it might still be possible that the encounter does not lead to a harmful outcome for either side. For example, empathy might be a characteristic that would allow for mutual understanding, where empathy means "the capacity to relate another's emotional state" [165] and not the general prediction of an agent's actions. Empathy might be linked to an understanding of the other agent's values and adopting these values [131].

To summarize, we have argued that it is in general not possible for an inferior AI to predict all actions of a superior AI, and it is likely that the Ambassador probe AI is inferior to an alien AI. Hence, there is no guarantee that we can predict whether or not such an encounter will be safe. It is even more unlikely that the values of these AIs will be aligned, given the vast space of possible AI designs. Nevertheless, characteristics such as empathy, which could still be present if the AI is able to engage in social interactions, could be a key to mutual understanding.

4 Artificial Intelligence Capabilities

4.1 Task-oriented capability evaluation

In the following, we use the task-oriented approach [78, 79] of comparing the performance of the AI of interstellar probes, looking at the set of tasks the probes need to accomplish. This qualitative approach is in contrast to the more formal approach we have previously taken. The result can be seen in Table 2.

The Explorer in its basic form has capabilities similar to existing spacecraft for interplanetary exploration. Data collection and processing are performed with large degrees of autonomy. By contrast, the Philosopher is a probe that is able to conduct science autonomously, including devising hypotheses, designing experimental setups or identifying data collection procedures, and testing hypotheses.
We can imagine prototypical forms of such an AI that are based on current machine learning algorithms and a library of scientific hypotheses from which new hypotheses can be derived by recombination and mutation. This includes the identification and analysis of alien life via remote sensing, in-situ analysis of celestial bodies, and analysis of signals [10].

The Founder is expected to undertake extensive construction works within the target star system. These could include self-replication, large communication infrastructure with the solar system, space colonies, and even terraforming [49, 75]. The required AI probe capabilities differ. For example, mining resources in-situ, processing them, and constructing truss structures is a capability that near-term technology for asteroid mining is likely to be able to accomplish. However, doing so in an environment that is to a large extent unknown seems to be much more difficult. Furthermore, the complexity of the systems that are produced influences how sophisticated the AI needs to be. This is due to the emergence of unexpected phenomena in complex systems that require improvisation and creativity to resolve. When it comes to terraforming, the complexity of the system is enormous and its predictability limited. Furthermore, there is probably only a limited failure tolerance for such a system. Therefore, complex systems such as space colonies, and very complex systems such as terraformed planets, require a broad range of capabilities at levels similar or superior to humans. When it comes to embryo space colonization, capabilities that allow for sophisticated social interactions between the AI and the colonists are required in addition.

Finally, the Ambassador has the capability to initiate communication with an extraterrestrial intelligence in addition to the capabilities of the Philosopher. Communication requires receiving and decoding signals from extraterrestrials, translating them into a language we are familiar with, composing a message the extraterrestrials are likely to understand, and transmitting it. The most critical function is the translation of the signal and the composition of proper responses. Such conversations are imagined by Bracewell [22], who assumes a human-level intelligence for the probe. Of course, we can imagine basic conversational capabilities of the kind today's chatbots already possess. However, for a low-probability but high-risk event such as contact with an extraterrestrial civilization, we expect that much more sophisticated forms of AI are required, able to handle the subtlety and ambiguity of language. Another important aspect is social intelligence, including empathy, as we already mentioned in Section 3.4.5.

It can be seen in Table 2 that the Philosopher's set of tasks is a superset of the Explorer's tasks and the Founder's set of tasks is a superset of the Philosopher's. It can also be seen that the Ambassador's set of tasks is a superset of the Philosopher's but not a superset of the Founder's set of tasks.

Capability                                Explorer  Philosopher  Founder  Ambassador
Image recognition                            X          X           X         X
Hypothesis testing                                      X           X         X
Signal pattern recognition                   X          X           X         X
Devise scientific hypotheses                            X           X         X
Universal translation                                                         X
Conversation                                                                  X
Identify resources                                                  X
Conceive design (synthesis / analysis)                              X
Resource processing                                                 X
Construction                                                        X
Verification, validation, testing                                   X

Table 2: AI probe types and capabilities
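The superset relations can be checked mechanically. The following sketch encodes Table 2 as Python sets; the assignment of capabilities to probe types is our reading of the reconstructed table, not an authoritative list:

```python
# Illustrative sketch (our encoding of Table 2, not an authoritative list):
# the probe capability sets and the superset relations stated in the text.
EXPLORER = {"image recognition", "signal pattern recognition"}
PHILOSOPHER = EXPLORER | {"hypothesis testing", "devise scientific hypotheses"}
FOUNDER = PHILOSOPHER | {"identify resources", "conceive design",
                         "resource processing", "construction",
                         "verification, validation, testing"}
AMBASSADOR = PHILOSOPHER | {"universal translation", "conversation"}

assert PHILOSOPHER >= EXPLORER       # Philosopher's tasks cover the Explorer's
assert FOUNDER >= PHILOSOPHER        # Founder's tasks cover the Philosopher's
assert AMBASSADOR >= PHILOSOPHER     # Ambassador's tasks cover the Philosopher's
assert not AMBASSADOR >= FOUNDER     # ...but not the Founder's
```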
4.2 Do we need an AGI?

We will also briefly touch on the question of whether or not AGI is a precondition for certain AI types. For most applications, no AGI is required, as their mission objectives are related to specific tasks and the capabilities for accomplishing them. Hence, narrow but high-performance AI could, in principle, accomplish these tasks. With reference to Hernández-Orallo [78], the general abilities that underlie these specific tasks would only be required if the individual tasks require them, as is the case for the Founder probe.

4.3 Testing AI capabilities

Testing AI capabilities prior to an interstellar mission is mandatory. In the following, different test cases for the tasks introduced in the previous section are presented, as shown in Table 3. For most of the AI capabilities and tasks, test cases in our solar system or in simulated environments can be imagined. AI for interstellar probes is likely to be tested in solar system exploration (e.g. Kuiper belt objects, interstellar objects [76]), economic development (e.g. asteroid mining [74, 143, 116, 117], space manufacturing [141, 128, 127, 81]), and colonization (e.g. lunar / Martian bases [43, 8, 168], free-floating colonies [126, 125, 124, 91, 6]).

Simulated environments are in widespread use today for testing AI and robotics. We can also imagine the use of adversarial machine learning and generative adversarial networks (GANs), where agents are pitted against each other in order to self-train [86, 63, 132]. These approaches have recently been used for self-play training in games [138] and for creating art [44]. Although the solar system provides an environment for testing various AI capabilities, representative conditions under which an AI for an interstellar probe could be tested are more likely to be found in the outer solar system and deep space, due to the signal latency, which makes human intervention difficult. Nevertheless, the solar system environment, supplemented by virtual environments, is likely to be the context in which AI systems are matured before they are sent to the stars.

5 Design of a Generic Artificial Intelligence Probe

In the following, we present a concept for an artificial intelligence probe, based on the assumption that any sophisticated AI will still likely use substantial computing resources, thereby consuming substantial amounts of energy. The probe concept has already been presented in Hein [67]. In the following, we make the additional assumption that a future human-level or super-human-level AI would consume as much energy for its operation as the equivalent energy for simulating a human brain. We think that this assumption is reasonable, given the large uncertainty regarding which path will lead to AGI and as simulating a human brain is considered one possible pathway towards AGI [21]. Today's supercomputers use power in the MW range; the human brain, by contrast, uses only about 25 W [94], for a computing power that has been estimated at 10^20 flops. The required power consumption for equivalent computing power using today's or near-future computing hardware can be estimated to be between 1 MW and 100 GW, depending on how far current levels of increase in computing power can be extrapolated into the future. These figures, together with the brain's own consumption, provide lower and upper bounds for the power requirements of simulating a human brain, i.e. between 10 and 10^11 W. The large difference between the lower and upper bounds is also reflected in the scale of the corresponding power generation system in space. For generating power on the order of 10 W, the power generation subsystem of a 3U CubeSat would be sufficient. For generating 10^11 W, a hundred solar power satellites would be required [112].
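The quoted power range follows directly from dividing the brain-equivalent computing power by an assumed hardware efficiency. The sketch below makes this explicit; the flops-per-watt figures are our illustrative assumptions, chosen to bracket the range stated above, and are not values from the text:

```python
# Back-of-the-envelope sketch (our illustration): electrical power required
# for brain-equivalent computing, as a function of hardware efficiency.
# The efficiency figures below are assumptions bracketing the text's range.
BRAIN_EQUIVALENT_FLOPS = 1e20

efficiencies_flops_per_watt = {
    "current supercomputer-class hardware": 1e9,   # -> 1e11 W (100 GW)
    "optimistic near-future hardware":      1e14,  # -> 1e6 W  (1 MW)
    "brain-like efficiency":                1e19,  # -> 10 W
}
for label, efficiency in efficiencies_flops_per_watt.items():
    power_w = BRAIN_EQUIVALENT_FLOPS / efficiency
    print(f"{label}: {power_w:.0e} W")
```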
Furthermore, power generation is likely not required in interstellar space, where it would in any case be more challenging due to the absence of stellar radiation. Once the probe arrives in the target star system, we assume that power generation using photovoltaic cells is feasible.

We argue that heat generation is likely to be a major issue for AI probes, first, due to the large amounts of power consumed, and second, because the waste heat from computing is likely to be generated within a small volume. We assume that computation runs within a rather small volume in order to minimize data transfer times, as is the case for today's computers. Analogously, we can expect that large amounts of heat are generated in a small volume. Heat rejection is currently a major issue for supercomputers [123], and the predominant approach is the use of heat pipes, which transport a cooling liquid to the processors and the heated liquid away from them. The current heat density in supercomputers is as high as 10 kW/cm^2, about one order of magnitude higher than inside a rocket engine nozzle [123]. Heat rejection has already been an issue for terrestrial supercomputers; the issue is aggravated in space, where heat can only be rejected without mass loss via radiation, requiring large surface areas facing free space. Advanced radiators could reject about 1 kW of thermal power per kilogram (1 kWt/kg) in the near future [2, 88, 93]. Consequently, about 100 tons of radiator mass would be required for rejecting 100 MW, and 1 ton for 1 MW, respectively.

One could argue that existing computer architectures are very inefficient in replicating the function of a human brain, resulting in a huge difference in power consumption. Future computer architectures or new working principles such as quantum computing could have a disruptive effect on power consumption [113]. In order not to exclude this possibility, we keep a power consumption of a few dozen to a few hundred watts as a lower boundary, in case revolutionary new ways are found to reproduce the function of the human brain. However, a more conservative estimate would put the required power at dozens to hundreds of MW for Philosopher, Founder, and Ambassador type probes, and at lower values for less sophisticated AI, such as for an Explorer type probe.

Regarding the mass of the computing unit, current on-board data handling systems (OBDH) have a computing power on the order of 100 DMIPS per kg. Spacecraft OBDH reported in the literature have DMIPS values that are about two orders of magnitude below those of terrestrial processors. If we extrapolate these values to the 2050 timeframe, we can expect spacecraft OBDH with a processing power of 15 million DMIPS per kg.
As DMIPS and flops are different performance measures, we use a value for flops per kg from an existing supercomputer (MareNostrum) and extrapolate this value (0.025 ∗ 10^12 flops/kg) into the future (2050). By 2050, we assume an improvement of computational power by a factor of 10^5, which yields 0.025 ∗ 10^17 flops/kg. In order to achieve 10^20 flops, a mass of dozens to a hundred tons is needed. We assume an additional 100 tons of radiator mass and, with 1 kW/kg for solar cells, about 100 tons for the solar cells. This yields a total mass for an AI probe on the order of hundreds of tons, roughly equivalent to the 450-ton payload mass of the Daedalus spacecraft [19]. Table 4 shows the mass estimates for the main spacecraft subsystems and the total mass in a 2050 to 2060 time frame.

AI capabilities and tasks: test cases
Image recognition: simulated images; solar system environment.
Hypothesis testing: capturing and analyzing data for hypothesis testing in a virtual environment; solar system environment.
Signal pattern recognition: various simulated signals of increasing sophistication; tests in the solar system environment.
Devise scientific hypotheses: extensive tests for research within the solar system environment.
Universal translation: decoding languages of various forms and of various organisms, including artificially generated languages (e.g. using adversarial machine learning and generative adversarial networks (GANs), where one agent tries to generate new languages that the other cannot translate [86, 63, 132]).
Conversation: training with various living organisms; training with artificial agents, e.g. agents that are generated to "beat" the AI.
Identify resources: testing in a solar system environment (e.g. asteroid mining, planetary surface exploration, planetary surface habitat design, space colony construction) and in simulated virtual environments.
Conceive design (synthesis / analysis): testing in a solar system environment (e.g. asteroid mining, planetary surface exploration, planetary surface habitat design, space colony construction) and in simulated virtual environments.
Resource processing: testing in a solar system environment (e.g. asteroid mining, planetary surface exploration, planetary surface habitat design, space colony construction [81]) and in simulated virtual environments.
Construction: testing in a solar system environment (e.g. asteroid mining, planetary surface exploration, planetary surface habitat design, space colony construction [81]) and in simulated virtual environments.
Verification, validation, testing: testing in a solar system environment (e.g. asteroid mining, planetary surface exploration, planetary surface habitat design, space colony construction [81]) and in simulated virtual environments.

Table 3: AI probe capabilities and test cases

Spacecraft subsystem                          Specific mass            Subsystem mass [t]
Computing payload                             0.025 ∗ 10^17 flops/kg   40
Solar cells (current technology)              1 kW/kg                  100
Radiators                                     1 kWt/kg                 100
Other subsystems (50% of computing payload)                            20
Total mass                                                             260

Table 4: Mass estimate for AI probe in the 2050-2060 time frame

The mass estimate is only valid for the part of the spacecraft that actually arrives at the target star system.
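The Table 4 budget can be reproduced from the stated specific masses. The sketch below is our illustration; the 100 MW power draw corresponds to the conservative estimate given earlier in this section, and the 50% structural allowance follows the table:

```python
# Illustrative sketch reproducing the Table 4 mass budget from the stated
# specific masses; the 100 MW draw is the conservative Section 5 estimate.
TARGET_FLOPS = 1e20          # brain-equivalent computing power
FLOPS_PER_KG = 0.025e17      # extrapolated 2050 computing density (flops/kg)
POWER_W = 100e6              # assumed AI payload power draw (100 MW)
SOLAR_W_PER_KG = 1e3         # solar cells: 1 kW/kg
RADIATOR_WT_PER_KG = 1e3     # radiators: 1 kWt/kg

computing_t = TARGET_FLOPS / FLOPS_PER_KG / 1e3    # 40 t
solar_t = POWER_W / SOLAR_W_PER_KG / 1e3           # 100 t
radiator_t = POWER_W / RADIATOR_WT_PER_KG / 1e3    # 100 t
other_t = 0.5 * computing_t                        # 20 t (50% allowance)
total_t = computing_t + solar_t + radiator_t + other_t
print(f"total probe mass: {total_t:.0f} t")        # 260 t, matching Table 4
```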
Due to the large power consumption and heat rejection requirements, the following characteristics for an AI probe can be concluded:

• Large solar panels: Unless other power sources such as nuclear power are used, the probe will depend on large solar panels / solar concentrators for generating power for the AI payload.

• Operation close to the target star: The spacecraft is, at least initially, operated close to the star. In order to maximize power input from the star and to minimize solar power generator mass, the probe should be located as close to the star as possible. The minimum distance is constrained by the maximum acceptable temperature for the spacecraft subsystems and by an eventual heat shield that protects against the starlight. For that purpose, the probe needs a star-shield to protect against the heat and radiation from the star, and a trade-off between the heat shield mass and the mass savings from a lower solar power generator mass needs to be made. Note, however, that the AI payload itself is a source of intense heat, and more sensitive spacecraft subsystems such as sensors need to be located at a distance from the AI payload, e.g. on a boom.

• AI payload switched off outside the star system: The AI is switched off outside the target star system, as no power source is available for its operation there. This may also protect against some forms of radiation damage. Nevertheless, proper radiation protection remains an issue, as galactic cosmic rays could still destroy circuits via impacts and the resulting particle showers.

• Large radiators: A large radiator is needed for rejecting the heat generated by the AI payload.

• Compact computing unit: The computer is either super-compact or distributed. However, with a distributed system, communication speed becomes an issue, and it is therefore likely that the architecture will be as compact as possible to minimize the time signals travel within the payload.

Fig. 10 and Fig. 11 show an artist's impression of an AI probe with its main subsystems.

Figure 10: AI probe subsystems (Image: Adrian Mann)

The design is similar to spacecraft with nuclear reactors, but with important differences. Whereas the design of spacecraft with nuclear reactors is dominated by large radiators and by placing the reactor far away from other spacecraft subsystems, the AI probe has solar cells, and its computing payload does not need to be placed as far away from other subsystems for radiation concerns. The AI payload is likely to have a cylindrical shape, as it is easier for the heat rejection system to have one backbone heat channel and then smaller, radial pipes that reject heat from the processing units. The heat is rejected via large radiators. The radiator size may decrease with distance from the payload, as less and less fluid is available for heat rejection (not shown in the image). It is furthermore better to reject the heat quickly; hence the larger size of the radiators close to the payload. The radiators are perpendicular to the payload in order to avoid heat radiation from the payload being absorbed by the radiators, and to let the payload face as much free space as possible. Fig. 11 shows an additional heat shield between the payload section and the radiators, in order to prevent radiative heat transfer from the payload.

In order to maximize energy intake from the star, the spacecraft may be located as close as possible to the target star. There is likely to be a trade-off between the distance to the star and other probe objectives. One can also imagine that sub-probes would do the majority of the exploration, while the AI probe would stay close to the star and do the majority of the computation-heavy tasks, communicating with the sub-probes.
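The inverse-square scaling behind this trade-off can be illustrated in a few lines; the Sun-like luminosity, payload power and conversion efficiency below are our assumptions, not values from the text:

```python
# Illustrative sketch (our numbers, not from the text): inverse-square
# scaling of available stellar flux, assuming a Sun-like target star,
# a 100 MW payload power draw, and 30% photovoltaic efficiency.
FLUX_AT_1_AU_W_M2 = 1361.0
POWER_W = 100e6
EFFICIENCY = 0.3

for d_au in (1.0, 0.3, 0.1):
    flux = FLUX_AT_1_AU_W_M2 / d_au ** 2        # W/m^2 at distance d
    area_m2 = POWER_W / (EFFICIENCY * flux)     # required collector area
    print(f"{d_au:4.1f} AU: {flux:9.0f} W/m^2, array ~ {area_m2:9.0f} m^2")
# Halving the distance quarters the required collector area, at the cost
# of a hotter thermal environment (hence the star-shield and the trade-off).
```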
If the probe operates close to the target star, strong thermal radiation and particles from the star impact the spacecraft. In order to avoid this heat and particle influx, a heat and radiation shield is needed; it is located in the direction of the star and shields the payload.

The spacecraft needs to be constantly maintained, with parts replaced or repaired. This is similar to existing terrestrial supercomputers; a system of this complexity is very likely to need repair. If the computer is modular, modules are replaced on a regular basis and parts are replaced within these modules. We can imagine a storage depot of parts and robots that replace them. With more advanced technology available, robots that reproduce even very complex replacement parts can be imagined. The AI payload needs either to be protected against galactic cosmic rays during its interstellar cruise or to have appropriate counter-measures in place, such as self-healing [66, 118] and radiation-hardened electronics.

Figure 11: View from the back of the AI probe (Image: Adrian Mann)

6 When will we be ready?

Under the assumption that during the 2050 to 2090 timeframe computing power per mass keeps increasing by a factor of 2^0.5 per year, it can be seen in Fig. 12 that, from 2050 onwards, the payload mass decreases to levels that can be transported by an interstellar spacecraft of the size of the Daedalus probe or smaller. If the trend continues until 2090, even modest payload sizes of about 1 kg can be imagined. Such a mission might be subject to the "waiting paradox", as the development of the payload might be postponed successively for as long as computing power increases and, consequently, launch costs decrease due to the lower payload mass.

Furthermore, under the assumptions that an advanced AI payload has exploration capabilities equal to those of a human and that the mass required for transporting a human over interstellar distances is about 100 t [75, 115], the breakeven point for an AI probe with cognitive capabilities similar to a human's would be somewhere between 2050 and 2060. However, it is clear that the similarities end here, as a human crew would have colonization as an objective and would also require a large number of crew members [142].

Figure 12: Spacecraft payload mass vs. year of development

7 Conclusions

We presented four types of artificial intelligence interstellar probes along with their required capabilities and mission architectures. Furthermore, a generic design for an artificial intelligence interstellar probe was presented. Based on the extrapolation of existing technologies and trends, we estimated that the payload of an interstellar probe with a computing power similar to that of the human brain is likely to have a mass of hundreds of tons in the 2050 time frame and of dozens of tons in the 2060 time frame. Furthermore, estimates for the advent of artificial general intelligence and for first interstellar missions coincide: both fall in the middle of the 21st century. We therefore conclude that a more in-depth exploration of the relationship between the two should be attempted, looking into currently neglected areas such as protecting the artificial intelligence payload from radiation in interstellar space and the role of artificial intelligence in self-replication.

References

[1] Real-world mining feasibility studies applied to asteroids, the Moon and Mars. AIAA SPACE 2011, 2011.

[2] R. Adams, G. Statham, S. White, and B. Patton. Crewed Mission to Callisto Using Advanced Plasma Propulsion Systems. In 39th AIAA/ASME/SAE/, 2003.

[3] J.
R. Anderson and C. Lebiere. The Newell test for a theory of cognition. Behavioral and brain Sciences, 26(5):587–601, 2003. [4] M. Arbib. From universal Turing machines to self-reproduction. In A half-century survey on The Universal Turing Machine. 1988. [5] S. Armstrong and A. Sandberg. Eternity in six hours: Intergalactic spreading of intelligent life and sharpening the fermi paradox. Acta Astronautica, 89:1–13, 2013. [6] N. Arora, A. Bajoria, and A. L. Globus. Kalpana One: Analysis and design of space colony. In 14th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference 7th AIAA/ASME/AHS Adaptive Structures Conference, page 2183, 2006. [7] V. Badescu. On the radius of Dyson’s sphere. Acta Astronautica, 36(2):135–138, 1995. [8] V. Badescu. Mars: prospective energy and material resources. Springer Science & Business Media, 2009. [9] S. Bartsch, F. Cordes, S. Haase, S. Planthaber, T. M. Roehr, and F. Kirchner. Perfor- mance evaluation of an heterogeneous multi-robot system for lunar crater exploration. In Proceedings of the 10th International Symposium on Artificial Intelligence, Robotics and Automation in Space (iSAIRAS-10), 2010. [10] S. Baxter. Project Icarus: Interstellar Spaceprobes and Encounters with Extraterres- trial Intelligence. Journal of the British Interplanetary Society, 66(1/2), 2013. [11] S. Baxter. Project Icarus: Exploring Alpha Centauri: Trajectories and Strategies for Subprobe Deployment. Journal of the British Interplanetary Society, 69:11–19, 2016. [12] S. Baxter. StarCall. In Obelisk. Gollancz, 2016. [13] G. Bear. Queen of Angels. Warner Books, 1990. [14] K. Becker and J. Gottschlich. AI Programmer: Autonomously Creating Software Programs Using Genetic Algorithms. arXiv preprint arXiv:, 1709.05703, sep 2017. [15] C. A. Beichman, G. Bryden, T. N. Gautier, K. R. Stapelfeldt, M. W. Werner, K. Mis- selt, and D. Trilling. An excess due to small grains around the nearby K0 V star HD 69830: asteroid or cometary debris? The Astrophysical Journal, 626(2):1061, 2005. 27 [16] J. Blish. Surface Tension. Galaxy Science Fiction, 1952. [17] J. Blish. The Seedling Stars. Gnome Press, page numbe edition, 1957. [18] K. S. Boles, K. Kannan, J. Gill, M. Felderman, H. Gouvis, B. Hubby, K. I. Kam- rud, J. C. Venter, and D. G. Gibson. Digital-to-biological converter for on-demand production of biologics. Nature biotechnology, 35(7):672, 2017. [19] A. Bond. Project Daedalus – The Final Report on the BIS Starship Study. Technical report, British Interplanetary Society, 1978. [20] P. J. Boston, P. Todd, and K. R. McMillen. Robotic Lunar Ecopoiesis Test Bed: Bringing the Experimental Method to Terraforming. AIP Conference Proceedings, 699(1):975–983, 2004. [21] N. Bostrom. Superintelligence: Paths, dangers, strategies. Oxford University Press, 2014. [22] R. Bracewell. Communications from superior galactic communities. Nature, 186:670– 671, 1960. [23] R. Bracewell. The Galactic Club: Intelligent Life in Outer Space. San Francisco Books, page numbe edition, 1975. [24] R. Bradbury. Matrioshka brains, 2001. [25] R. J. Bradbury, M. M. Cirkoivc, and G. Dvorsky. Dysonian approach to SETI: a Journal of the British Interplanetary Society, 64(5), 156, fruitful middle ground? 64(5):156, 2011. [26] L. Busoniu, R. Babuˇska, and B. De Schutter. Multi-agent reinforcement learning: An overview. Innovations in multi-agent systems and applications-1, 310:183–221, 2010. [27] R. Castano, T. Estlin, R. C. Anderson, D. M. Gaines, A. Castano, B. Bornstein, C. Chouinard, and M. Judd. 
Oasis: Onboard autonomous science investigation system for opportunistic rover science. Journal of Field Robotics, 24(5):379–397, may 2007. [28] A. Chakrabarti and K. Shea. Computer-based design synthesis research: an overview. Journal of Computing and Information Science in Engineering, 11(2):021003, 2011. [29] S. Chien, R. Doyle, A. G. Davies, A. Jonsson, and R. Lorenz. The future of AI in space. IEEE Intelligent Systems, 21(4):64–69, 2006. [30] S. Chien and K. L. Wagstaff. Robotic space exploration agents. Science Robotics, 2(7):eaan4831, 2017. [31] G. S. Chirikjian. An Architecture for Self-Replicating Lunar Factories. Technical report, NASA NIAC Phase 1 report, 2004. [32] A. C. Clarke. 2001: A Space Odyssey. New American Library, page numbe edition, 1968. [33] A. C. Clarke. The Fountains of Paradise. Victor Gollancz, book club edition, 1979. [34] A. C. Clarke. The Songs of Distant Earth. Del Rey, grafton edition, 1986. [35] R. J. Clinton. NASA’s In Space Manufacturing Initiatives: Conquering the Challenges of In-Space Manufacturing. In Design in Plastics 2017, Detroit, MI; United States, 2017. [36] A. Crowl, J. Hunt, and A. Hein. Embryo Space Colonisation to Overcome the Inter- stellar Time Distance Bottleneck. Journal of the British Interplanetary, 65:283–285, 2012. 28 [37] P. Davies. Afterword. In G. Benford, G., & Benford, editor, Starship century: Toward the grandest horizon, pages 301–310. Microwave Sciences, 2013. [38] H. De Garis, C. Shuo, B. Goertzel, and L. Ruiting. A world survey of artificial brain projects, Part I: Large-scale brain simulations. Neurocomputing, 74(1-3):3–29, 2010. [39] S. Dick. Cultural evolution, the postbiological universe and SETI. International Journal of Astrobiology, 2(1):65–74, 2003. [40] K. E. Drexler. Exploring future technologies. Doing Science: The Reality Club, pages 129–150, 1991. [41] K. E. Drexler. Radical Abundance. PublicAffairs books, 2013. [42] J. Dunn, M. Fagin, M. Snyder, and E. Joyce. Project RAMA: Reconstructing Asteroids Into Mechanical Automata. 2017. [43] P. Eckart. The lunar base handbook: an introduction to lunar base design, development, and operations. McGraw-Hill, 1999. [44] A. Elgammal, M. Papazoglou, and B. Kr¨amer. Design for Customization: A New Paradigm for Product-Service System Development. In Procedia CIRP, 2017. [45] J. K. Erickson. Living the dream-an overview of the Mars exploration project. IEEE Robotics & automation magazine, 13(2):12–18, 2006. [46] T. Everitt, G. Lea, and M. Hutter. AGI Safety Literature Review. arXiv preprint, 1805.01109, 2018. [47] B. Fallenstein and N. Soares. Problems of Self-reference in Self-improving Space-Time Embedded Intelligence. In International Conference on Artificial General Intelligence 2014, pages 21–32. Springer, 2014. [48] B. Fallenstein and N. Soares. Vingean reflection: Reliable reasoning for self-improving agents. Technical report, Machine Intelligence Research Institute, Berkeley, CA, 2015. [49] M. J. Fogg. Terraforming, as Part of a Strategy for Interstellar Colonisation. Journal of the British Interplanetary Society, 44:183–192, 1991. [50] S. Franklin and A. Graesser. Is it an Agent, or just a Program?: A Taxonomy for In International Workshop on Agent Theories, Architectures, Autonomous Agents. and Languages, pages 21–35. Springer, Berlin, Heidelberg, 1996. [51] A. Freeman and L. Alkalai. First Interstellar Explorer: What Should it do When it Arrives at its Destination? In American Geophysical Union Fall Meeting, 2017. [52] R. Freitas. 
A self-reproducing interstellar probe. Journal of the British Interplanetary Society, 33(7):251–64, 1980. [53] R. Freitas. The search for extraterrestrial artifacts (SETA). Journal of the British Interplanetary Society, 36:501–506, 1983. [54] R. Freitas and W. Gilbreath. Advanced automation for space missions. Journal of the Astronautical Sciences, 30(1):221, 1982. [55] R. Freitas and W. Zachary. A self-replicating, growing lunar factory. Prince- ton/AIAA/SSI Conference on Space Manufacturing, 35:18–21, 1981. [56] R. A. Freitas. Interstellar probes - A new approach to SETI. Journal of the British Interplanetary Society, 33:95–100, 1980. [57] R. A. Freitas. The Case for Interstellar Probes. Journal of the British Interplanetary Society, 36:490–495, 1983. 29 [58] S. Garrabrant, T. Benson-Tilsen, A. Critch, N. Soares, and J. Taylor. A formal approach to the problem of logical non-omniscience. arXiv preprint, 1707.08747, 2017. [59] S. Garrabrant, T. Benson-Tilsen, N. Critch, A., Soares, and J. Taylor. Logical induc- tion. arXiv preprint, 1609.03543, 2016. [60] B. Goertzel. The hidden pattern. Brown Walker, 2006. [61] B. Goertzel. Toward a Formal Characterization of Real-World General Intelligence. In Artificial General Intelligence: Proceedings of the Third Conference on Artificial General Intelligence, AGI 2010, Lugano, Switzerland, March 5-8, 2010, pages 19–24, 2010. [62] B. Goertzel, R. Lian, I. Arel, H. De Garis, and S. Chen. A world survey of artificial brain projects, Part II: Biologically inspired cognitive architectures. Neurocomputing, 74(1-3):30–49, 2010. [63] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, In Advances in neural A. Courville, and Y. Bengio. Generative adversarial nets. information processing systems, pages 2672–2680, 2014. [64] S. Griffith, D. Goldwater, and J. M. Jacobson. Robotics: Self-replication from random parts. Nature, 437(7059):636, 2005. [65] D. Ha and J. Schmidhuber. World Models. arXiv preprint, arXiv:1803, 2018. [66] J. W. Han, M. Kebaili, and M. Meyyappan. System on microheater for on-chip an- nealing of defects generated by hot-carrier injection, bias temperature instability, and ionizing radiation. IEEE Electron Device Letters, 37(12):1543–1547, 2016. [67] A. Hein. Artificial Intelligence Probes for Interstellar Exploration and Colonization. arXiv preprint, 1612.08733, 2016. [68] A. Hein. Heritage Technologies in Space Programs - Assessment Methodology and Statistical Analysis. PhD thesis, PhD thesis, Technical University of Munich, 2016. [69] A. Hein, K. Long, G. Matloff, R. Swinney, R. Osborne, A. Mann, and M. Ciupa. Project Dragonfly: Small, Sail-Based Spacecraft for Interstellar Missions. submitted to JBIS, 2016. [70] A. M. Hein. The Greatest Challenge: Manned Interstellar Travel. In Beyond the Boundary: Exploring the Science and Culture of Interstellar Spaceflight, pages 349– 376. Lulu, 2014. [71] A. M. Hein. Transcendence Going Interstellar: How the Singularity Might Revolu- tionize Interstellar Travel, 2014. [72] A. M. Hein and H. Condat. Can Machines Design? An Artificial General Intelligence Approach. In B. Ikl´e, M., Franz, A., Rzepka, R., & Goertzel, editor, Artificial General Intelligence: 11th International Conference, AGI 2018, pages 87–99. Springer, Prague, Czech Republic, 2018. [73] A. M. Hein, K. F. Long, D. Fries, N. Perakis, A. Genovese, S. Zeidler, M. Langer, R. Os- borne, R. Swinney, J. Davies, B. Cress, M. Casson, A. Mann, and R. Armstrong. 
The Andromeda Study: A Femto-Spacecraft Mission to Alpha Centauri. arXiv preprint, 1708.03556, 2017. [74] A. M. Hein and R. Matheson. A Techno-Economic Analysis of Asteroid Mining. In 69th International Astronautical Congress (IAC), Bremen, Germany, 2018. [75] A. M. Hein, M. Pak, D. P¨utz, C. B¨uhler, and P. Reiss. World Ships—Architectures & Feasibility Revisited. Journal of the British Interplanetary Society, 65(4):119–133, 2012. 30 [76] A. M. Hein, N. Perakis, K. F. Long, and A. Crowl. Project Lyra: Sending a Spacecraft to 1I/’Oumuamua (former A/2017 U1), the Interstellar Asteroid. arxiv.org, 2017. [77] A. M. Hein, A. C. Tziolas, and R. Osborne. Project Icarus: Stakeholder Scenarios for an Interstellar Exploration Program. Journal of the British Interplanetary Society, 64(6/7):224–233, 2011. [78] J. Hern´andez-Orallo. Evaluation in artificial intelligence: from task-oriented to ability- oriented measurement. Artificial Intelligence Review, 48(3):397–447, 2017. [79] J. Hern´andez-Orallo. The measure of all minds: evaluating natural and artificial intelligence. Cambridge University Press, 2017. [80] A. Hintze. Understanding 4 AI Types, 2016. [81] M. Hirai, A. Hein, and C. Welch. Autonomous Space Colony Construction. In 65th International Astronautical Congress, Tortonto, Canada, 2014. [82] D. R. Hofstadter. G¨odel, Escher, Bach. Basic Books, 1979. [83] J. P. Hogan. Voyage from Yesteryear. Del Rey, 1982. [84] R. Hoyt, J. Slosad, T. Moser, and J. Cushing. MakerSat: In-Space Additive Manufac- turing of ConstructableTM Long-Baseline Sensors using the TrusselatorTM Technology. In AIAA SPACE 2016, 2016. [85] J. Hu and M. P. Wellman. Multiagent reinforcement learning: theoretical framework and an algorithm. ICML, 98:242–250, 1998. [86] L. Huang, A. D. Joseph, B. Nelson, B. I. Rubinstein, and J. D. Tygar. Adversarial machine learning. In Proceedings of the 4th ACM workshop on Security and artificial intelligence, pages 43–58. ACM, 2011. [87] M. Hutter. Universal artificial intelligence: Sequential decisions based on algorithmic probability. Springer, 2004. [88] R. Hyers, B. Tomboulian, P. Crave, and J. Rogers. Lightweight, High-Temperature Radiator for Space Propulsion. 2012. [89] F. Ingrand, S. Lacroix, S. Lemai-Chenevier, and F. Py. Decisional autonomy of plan- etary rovers. Journal of Field Robotics, 24(7):559–580, jul 2007. [90] K. Jakab, C. Norotte, F. Marga, K. Murphy, and G. Vunjak-Novakovic, G. Forgacs. Tissue engineering by self-assembly and bio-printing of living cells. Biofabrication, 2(2):022001, 2010. [91] R. Johnson and C. Holbrow. Space Settlements: A Design Study. Technical report, NASA SP-413, NASA, 1977. [92] M. Jones. Practical Von Neumann Machines and the Fermi Paradox. In International Astronautical Congress 2017 - 46th IAA Symposium on the Search for Extraterrestrial Intelligence (SETI) – The Next Steps, 2017. [93] A. Juhasz and G. Peterson. Review of advanced radiator technologies for spacecraft power systems and space thermal control. 1994. [94] E. R. Kandel, J. H. Schwartz, T. M. Jessell, and S. A. Siegelbaum. Principles of neural science. McGraw-Hill, 2000. [95] C. Kemp. Back to the Wild. New Scientist, 2015. [96] G. M. Kennedy, L. Matr`a, M. Marmier, J. S. Greaves, M. C. Wyatt, G. Bryden, and B. Sibthorpe. Kuiper belt structure around nearby super-Earth host stars. Monthly Notices of the Royal Astronomical Society, 449(3):3121–3136, 2015. 31 [97] E. Kirchner. Embedded brain reading. Phd thesis, Universit¨at Bremen, 2014. [98] E. A. Kirchner, J. de Gea Fernandez, P. 
Kampmann, M. Schr¨oer, J. H. Metzen, and F. Kirchner. Intuitive Interaction with Robots – Technical Approaches and Chal- lenges. In Formal Modeling and Verification of Cyber-Physical Systems, pages 224–248. Springer Fachmedien Wiesbaden, Wiesbaden, 2015. [99] O. Kroemer and G. Sukhatme. Learning relevant features for manipulation skills using meta-level priors. arXiv preprint, 1605.04439, 2016. [100] O. Kroemer and G. Sukhatme. Meta-level Priors for Learning Manipulation Skills with Sparse Features. In Springer, pages 211–222, 2016. [101] O. Kroemer and G. Sukhatme. Feature selection for learning versatile manipulation skills based on observed and desired trajectories. In 2017 IEEE International Confer- ence on Robotics and Automation (ICRA), pages 4713–4720, 2017. [102] T. Kuiper and M. Morris. Searching for Extraterrestrial Civilisations. Science, 196:616–621, 1977. [103] R. Kurzweil. The singularity is near: When humans transcend biology. Penguin Books, 2005. [104] P. LaVictoire. An Introduction to L¨obs Theorem in MIRI Research. Technical report, Machine Intelligence Research Institute, 2015. [105] Y. LeCun, Y. Bengio, and G. Hinton. Deep learning. Nature, 521(7553), 436-444, 521(7553):436–444, 2015. [106] S. Legg and M. Hutter. Universal intelligence: A definition of machine intelligence. In Minds and Machines. 2007. [107] H. Lipson and E. Malone. Autonomous Self-Extending Machines for Accelerating Space Exploration. Technical report, NASA NIAC Phase 1 study, 2002. [108] M. L. Littman. Markov games as a framework for multi-agent reinforcement learning. In Machine Learning Proceedings 1994, pages 157–163, 1994. [109] K. F. Long, M. J. Fogg, R. Obousy, A. Tzioloas, A. Mann, R. Osborne, and A. Presby. Project Icarus: Son of Daedalus - Flying Closer to Another Star. Journal of the British Interplanetary Society, 62:403–414, 2009. [110] P. Lubin. A Roadmap to Interstellar Flight. Journal of the British Interplanetary Society, 69(2-3), 2016. [111] R. Lutz. Software engineering for space exploration. Computer, 44(10):41–46, 2011. [112] J. Mankins. SPS-ALPHA: The first practical solar power satellite via arbitrarily large phased array (a 2011-2012 NASA NIAC phase 1 project). Artemis Innovation Man- agement Solutions LLC, 2012. [113] I. Markov. Limits on fundamental limits to computation. Nature, 512(7513):147–154, 2014. [114] R. G. Martin and M. Livio. On the formation and evolution of asteroid belts and their potential significance for life. Monthly Notices of the Royal Astronomical Society: Letters, 428(1):L11–L15, 2012. [115] G. L. Matloff. Deep space probes: To the outer solar system and beyond. Springer Science & Business Media, 2006. [116] P. T. Metzger. Space development and space science together, an historic opportunity. Space Policy, 37(2):77–91, 2016. 32 [117] P. T. Metzger, A. Muscatello, R. P. Mueller, and J. Mantovani. Affordable, rapid bootstrapping of the space industry and solar system civilization. Journal of Aerospace Engineering, 26(1):18–29, 2012. [118] D. Moon, J. Park, J. Han, G. Jeon, J. Kim, J. Moon, M. Seol, C. Kim, H. Lee, M. Meyyappan, and Y. Choi. Sustainable electronics for nano-spacecraft in deep space missions. In 2016 IEEE International InElectron Devices Meeting (IEDM), pages 31–8, 2016. [119] S. T. Mueller. Is the Turing Test still relevant? A plan for developing the cognitive decathlon to test intelligent embodied behavior. In 19th Midwest artificial intelligence and cognitive science conference, MAICS, pages Vol. 1, p.3, 2008. [120] S. T. Mueller, M. 
Jones, B. S. Minnery, and J. M. Hiland. The BICA cognitive In Proceedings of decathlon: a test suite for biologically-inspired cognitive agents. behavior representation in modeling and simulation conference, pages Vol. 1, p. 3, Norfolk, UK, 2007. [121] S. V. Murphy and A. Atala. 3D bioprinting of tissues and organs. Nature biotechnology, 32(8):773, 2014. [122] J. Myhill. The abstract theory of self-reproduction. In Views on general systems theory, pages 106–118. 1964. [123] W. Nakayama. Heat in Computers: Applied Heat Transfer in Information Technology. Journal of Heat Transfer, 136(1):013001, 2014. [124] G. K. O’Neill. The colonization of space. Physics Today, 27:32–40, 1974. [125] G. K. O’Neill. The high frontier: human colonies in space. William Morrow, New York, USA, 1977. [126] G. K. O’Neill. 2081: A hopeful view of the human future. Simon and Schuster, New York, 1981. [127] A. Owens and O. De Weck. Systems Analysis of In-Space Manufacturing Applica- tions for the International Space Station and the Evolvable Mars Campaign. In AIAA SPACE 2016, Reston, Virginia, sep 2016. American Institute of Aeronautics and As- tronautics. [128] A. Owens, S. Do, A. Kurtz, and O. Weck. Benefits of additive manufacturing for human In 45th International Conference on Environmental Systems, exploration of mars. 2015. [129] J. Pearson. Craig Venter’s ‘Digital-to-Biological Converter’ Is Real, 2017. [130] N. Perakis and A. M. Hein. Combining Magnetic and Electric Sails for Interstellar Deceleration. Acta Astronautica, 128:13–20, 2016. [131] A. Potapov and S. Rodionov. Universal empathy and ethical bias for artificial general intelligence. Journal of Experimental & Theoretical Artificial Intelligence, 26(3):405– 416, 2014. [132] A. Radford, L. Metz, and S. Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint, 1511.06434, 2015. [133] S. Saeedi, M. Trentini, M. Seto, and H. Li. Multiple-Robot Simultaneous Localization and Mapping: A Review. Journal of Field Robotics, 33(1):3–46, jan 2016. [134] C. Scharf. The Copernicus Complex. Farrar, Strauss and Giroux, 2014. [135] J. Schmidhuber. Ultimate Cognition `a la G¨odel. Cognitive Computation, 1(2):177–193, jun 2009. 33 [136] J. Schmidhuber. Deep learning in neural networks: An overview. Neural networks, 61:85–117, 2015. [137] SFE. Pantropy, 2015. [138] D. Silver, I. Schrittwieser, J., Simonyan, K., Antonoglou, A. Huang, A. Guez, T. Hu- bert, L. Baker, M. Lai, A. Bolton, and Y. Chen. Mastering the game of Go without human knowledge. Nature, 550(7676):354, 2017. [139] R. L. Simpson Jr and C. R. Twardy. Refining the cognitive decathlon. In Proceedings of the 8th Workshop on Performance Metrics for Intelligent Systems, pages 124–131. ACM, 2008. [140] M. Sipper. Fifty years of research on self-replication: An overview. Artificial Life, 4(3):237–257, 1998. [141] R. Skomorohov, A. Hein, and C. Welch. In-orbit Spacecraft Manufacturing: Near- In International Astronautical Congress 2016, IAC-2016, Term Business Cases. Guadalajara, Mexico, 2016. [142] C. M. Smith. Estimation of a genetically viable population for multigenerational interstellar voyaging: Review and data for project Hyperion. Acta Astronautica, 97:16– 29, 2014. [143] M. Sonter. The technical and economic feasibility of mining the near-earth asteroids. Acta Astronautica, 1997. [144] O. Stapledon. Last and First Men. Methuen, page numbe edition, 1930. [145] B. Steunebrink, J Schmidhuber - Theoretical Foundations of, and U. 2012. 
Towards an actual g¨odel machine implementation: A lesson in self-reflective systems. In Theoretical Foundations of Artificial General Intelligence, pages 173–195, 2012. [146] B. Steunebrink and J. Schmidhuber. A family of G¨odel machine implementations. In Artificial General Intelligence, pages 275–280, 2011. [147] M. Tan. Multi-agent reinforcement learning: Independent vs. cooperative agents. In Proceedings of the tenth international conference on machine learning, pages 330–337, 1993. [148] J. Tarter. Alternative Models for Detecting Very Advanced Civilisations. Journal of the British Interplanetary Society, 49:291–295, 1996. [149] D. R. Thompson, D. S. Wettergreen, and F. J. C. Peralta. Autonomous science during large-scale robotic survey. Journal of Field Robotics, 28(4), 542-564, 28(4):542–564, 2011. [150] F. Tipler and J. Barrow. The anthropic cosmological principle. Oxford University Press, 1986. [151] F. J. Tipler. Extraterrestrial Intelligent Beings Do Not Exist. Quarterly Journal of the Royal Astronomical Society, 21:267–281, 1980. [152] F. J. Tipler. The physics of immortality: Modern cosmology, God, and the resurrection of the dead. Doubleday, 1994. [153] T. Toth-Fejel, R. Freitas, and M. Moses. Modeling Kinematic Cellular Automata Final Report. Technical report, NASA NIAC Phase 1 report, 2004. [154] A. Tough. Small Smart Interstellar Probes. Journal of the British Interplanetary Society, 51:167–174, 1998. 34 [155] A. E. Trujillo, M. T. Moraguez, A. Owens, S. I. Wald, and O. De Weck. Feasibility Analysis of Commercial In-Space Manufacturing Applications. In AIAA SPACE and Astronautics Forum and Exposition, Reston, Virginia, sep 2017. American Institute of Aeronautics and Astronautics. [156] A. Turchin. The Global Catastrophic Risks Connected with Possibility of Finding Alien AI During SETI. Journal of British Interpanetary Society, 71(2):71–79, 2018. [157] J. Veness, K. S. Ng, M. Hutter, and D. Silver. Reinforcement Learning via AIXI Approximation. In AAAI 2010, 2010. [158] J. Veness, K. S. Ng, M. Hutter, W. Uther, and D. Silver. A monte-carlo aixi approxi- mation. Journal of Artificial Intelligence Research, 40(1):95–142, 2011. [159] S. A. Vere. A cognitive process shell. Behavioral and Brain Sciences, 15(3):460–461, 1992. [160] V. Vinge. Long Shot. Analog Science Fiction and Fact, 1972. [161] N. Vlassis. A Concise Introduction to Multiagent Systems and Distributed Artificial Intelligence. Lectures on Artificial Intelligence and Machine Learning, 1(1):1–71, 2007. [162] J. Von Neumann and A. W. Burks. Theory of Self-reproducing Automata. University of Illinois Press, 1966. [163] M. Wirkus. Towards Robot-independent Manipulation Behavior Description. arxiv preprint, 1412.3247, dec 2014. [164] M. Woods, A. Shaw, D. Barnes, D. Price, D. Long, and D. Pullan. Autonomous science for an ExoMars Rover-like mission. Journal of Field Robotics, 26(4):358–390, apr 2009. [165] O. N. Yalcin and S. DiPaola. A computational model of empathy for interactive agents. Biologically Inspired Cognitive Architectures, in print, 2018. [166] M. Yim, W. Shen, B. Salemi, D. Rus, M. Moll, H. Lipson, E. Klavins, and G. Chirikjian. Modular self-reconfigurable robot systems [grand challenges of robotics]. IEEE Robotics & Automation Magazine, 14(1):43–52, 2007. [167] E. Yudkowsky and M. Herreshoff. Tiling agents for self-modifying AI, and the L¨obian obstacle. Technical report, Machine Intelligence Research Institute, Berkeley, Califor- nia, USA, 2013. [168] R. Zubrin. The case for Mars. 
Simon and Schuster, 2012. [169] V. Zykov, E. Mytilinaios, B. Adams, and H. Lipson. Robotics: Self-reproducing machines. Nature, 435(7039):163, 2005. [170] V. Zykov, E. Mytilinaios, M. Desnoyer, and H. Lipson. Evolved and designed self-reproducing modular robotics. IEEE Transactions on Robotics, 23(2):308–319, 2007.
ai_researcher
1
Generating_Ontology-Learning_Training-Data_through_Verbalization.pdf
Merging of Ontologies Through Merging of Their Rules

Olegs Verhodubs
[email protected]

Abstract. Ontology merging is important, but not always effective. The main reason why ontology merging is not effective is that it is performed without considering goals. Goals define the way in which ontologies can be merged more effectively. The paper illustrates ontology merging by means of rules, which are generated from these ontologies. This is necessary for further use in expert systems.

Keywords: Ontology, Ontology Merging, Semantic Web

I. INTRODUCTION

Continuously developing over the past few decades, the Web has become a global information resource. With the distributed nature of information as one of its principles, the Web has inherited the advantages and disadvantages associated with this principle. One significant advantage of the distributed nature of information in the Web is the variety of this information, provided by the huge number of people who supply it. The main disadvantage of the Web is that information is scattered across different resources, and significant effort is needed to gather the required information together. The HTML-based Web was not well suited to this task, so an extension of it was developed and named the Semantic Web [1]. The Semantic Web is considered a machine-readable Web, that is, a Web that can be processed by machines. There are several technologies aimed at implementing the Semantic Web: RDF (Resource Description Framework), RDFS (Resource Description Framework Schema), SPARQL, OWL (Web Ontology Language) and some others [2]. This research focuses on OWL as a standard for describing ontologies. An ontology is a piece of information with its own structure, which is provided by classes, properties, datatypes and individuals [3]. An ontology can describe a strictly defined area, or some kind of information resource that accumulates information from different areas. It is foreseen that most ontologies will be generated from websites. Usually, no single website, and hence no ontology generated from it, can fully satisfy the need for information; in this case, ontologies should be merged together. Merging of ontologies can be implemented in different ways. One way is more complete, because it merges all elements of two or more ontologies. This is not always necessary, so sometimes another way is preferable: a more selective, and thereby more effective, merging in which only some elements of the ontologies are merged. For example, one ontology can be enriched with properties from another ontology, or with links from another ontology, or the merging can be done in some other way. One such way is shown in this paper: merging the functional value of ontologies instead of merging the ontologies themselves. The functional value of ontologies can differ, and it depends on the goal of ontology use. In particular, ontologies can be used for the generation of rules [4], [5]. In this regard, the functional value of ontologies is the rules that are generated from them. Merging of rules generated from different ontologies is merging of ontology functional value and can be called functional merging of ontologies. Functional merging of ontologies is described in this paper.
This paper is divided into several sections. The next section reviews previous research in the area. The third section describes merging of ontologies through the rules generated from these ontologies. In turn, the fourth section presents the architecture of merged ontologies. The last section introduces the conclusions of the research, and also research directions for the future.

II. PREVIOUS RESEARCH

Integration of ontologies is one of the significant tasks in the area of the Semantic Web, which is why many papers are dedicated to the subject. An overview of ontology integration methods was given in [6]. Several years have passed since its publication, but the overview of ontology merging methods in that paper is still relevant. Although the overview was performed for the Semantic Web Expert System [7], it is also useful for other research. There are three types of ontology integration: ontology mapping, ontology alignment and ontology merging [6]. Ontology mapping is a specification of the semantic overlap between two or more ontologies. Ontology alignment is the creation of links between two original ontologies. Ontology merging is the process of creating a new ontology as the union of one or more source ontologies; it means that a single ontology is generated from two or more ontologies. Certainly, each type of ontology integration has its own advantages and disadvantages. Only the disadvantages are reviewed here: the advantages are the reason these types of ontology integration exist, while the disadvantages determine why they are not suitable for us. The semantic overlap between two or more ontologies in ontology mapping implies some measure of fuzziness and incompleteness. The subprocess of creating links between two original ontologies during ontology alignment implies the creation of something like a virtual integrated ontology; the weak spots here are the links, which can break off completely or temporarily. Ontology merging is critical with respect to memory, because the generation of a single ontology from two or more source ontologies can require a large amount of memory. This may occur if the source ontologies are large or if more than two ontologies are merged. Thus, considering the mentioned disadvantages, no existing type of ontology integration is satisfactory. It is necessary to take into account the purposes of ontology use in order to develop an alternative type of ontology integration without these obvious disadvantages.

III. MERGING OF ONTOLOGIES VIA THEIR RULES

There are many uses for ontologies. For example, ontologies can describe a particular field of human activity, such as sports or medicine. Ontologies can also be used to transform the Web towards being more processable by machines; for instance, it is possible to process ontologies by means of the Apache Jena Inference API [8]. In any case, data, information and knowledge from different ontologies can intertwine with each other, giving rise to the need for ontology merging. The difference and similarity of data, information and knowledge are shown in [9]. It is important here that ontologies in themselves can provide us with knowledge, namely rules, if production systems are meant [4], [5]. The purpose is to use this knowledge, that is, rules generated from ontologies, in expert systems. The algorithm of ontology use in an expert system is the following. First of all, rules are generated from the ontology.
Then the generated rules are supplied to the knowledge base of the expert system. After that, the rules from the knowledge base can be used in the expert system. If the rules generated from one ontology are not enough, several ontologies are merged into a single ontology, and only after that are rules generated from the single, integrated ontology. This approach has at least one serious disadvantage: merging two or more ontologies is a long process, and the process of rule generation from a single merged ontology is a long process, too. Both of these processes can significantly slow down the work of the expert system and thereby reduce its attractiveness to a critical level. It would be possible to merge pairs, triples and so on of ontologies in advance, but which ontologies to merge becomes known only from the user's request, that is, it is impossible to know it in advance. Nevertheless, some of the work of merging ontologies should be carried out in advance, as is done in search engines, where web pages are indexed in advance and the user actually queries a database when typing keywords into the search string. That is, when working with a search engine, the search is not carried out over the Web at the moment the keywords are typed; it is carried out over the indexed database. It is possible to merge ontologies in a special way: not the ontologies themselves, but the product of their processing should be merged for this purpose. Rules generated from ontologies are such a product of ontology processing. In this way, ontologies are merged by means of integrating their rules together. That is, rules generated from one ontology and rules generated from another ontology are used together as if they had been generated from one, single ontology. This is different from merging ontologies in the usual sense of the word, when parts of one ontology are added to another ontology [6]. Advantages and disadvantages are reviewed further in order to evaluate the proposed way of ontology merging. The main advantages are obtaining rules from the processed ontologies and the possibility of performing rule generation from ontologies before these rules are needed in the expert system. Early ontology processing positively affects the speed of rule use, because only the time needed to access the database where the generated rules are stored is required. It is possible to suppose that in such a way the speed of the expert system is approximately equal to the speed of a web search engine. In turn, two principal disadvantages of this kind of ontology merging are observed. The first one is that a single ontology is not generated from several other ontologies, as happens during usual ontology merging. This is not our purpose; however, the enrichment of ontologies with parts of other ontologies can be useful from the optimization point of view. The second disadvantage is not so evident, but is more important for the quality of the generated rules. The fact is that the rules generated from each ontology separately can differ from the rules generated from the single merged ontology obtained from those ontologies. Let us demonstrate the situation (Fig.1):

Ontology 1: Plane (wings, engine)
Ontology 2: Plane (wheel)
Ontology 3: Plane (wings, engine, wheel)

Fig.1. Three ontologies with class "Plane".

There are three ontologies (Fig.1): "Ontology 1", "Ontology 2" and "Ontology 3".
"Ontology 1" and "Ontology 2" are two different ontologies, which contain the class "Plane". The class "Plane" in "Ontology 1" has two properties, "wings" and "engine". The class "Plane" in "Ontology 2" has one property, "wheel". "Ontology 3" is the result of merging "Ontology 1" and "Ontology 2". The class "Plane" in "Ontology 3" has three properties: "wings", "engine" and "wheel". So, it is possible to generate the following rules from these ontologies according to [4]:

TABLE I. Ontologies and generated rules.

Ontology      Rules
Ontology 1    IF wings and engine THEN Plane
Ontology 2    IF wheel THEN Plane
Ontology 3    IF wings and engine and wheel THEN Plane

It can be seen that the rules generated from "Ontology 1" and "Ontology 2" differ from the rule generated from "Ontology 3". The rule from "Ontology 3" is more complete and is preferred for expert systems. Thus, if ontologies are merged by means of rules that are generated from the ontologies and collected together, some extra work is needed to increase the quality of the generated rules.

IV. ARCHITECTURE FOR ONTOLOGY MERGING

Merging of ontologies in the form of a rule collection is necessary for the functioning of expert systems. Considering the many rule types that can be generated from an ontology [4], [5], and also the number of ontologies that can be involved in the process of rule generation, an architecture of the storage for storing and retrieving rules has to be developed. It is possible to develop one's own software for storing and retrieving rules in expert systems, or to utilize ready-made software for these purposes. A DBMS (Data Base Management System) is the kind of software usually used for such tasks. DBMSs have already been optimized in many respects, therefore the use of a DBMS is preferred. There are many DBMSs that differ by type, prevalence, licensing and so on. The MySQL DBMS [10] is quite sufficient for research tasks, being open-source, quite effective, well documented and tested in real tasks; that is why MySQL is chosen. The architecture of the DB (Data Base) affects the efficiency of working with data, therefore it is necessary to pay close attention to this task. The DB architecture comprises the stored data, the relationships of this data and the way this data is grouped, and it is set using the means of the DBMS. MySQL is a relational DBMS [11]; this means that data in the DB is grouped into tables, and tables are interconnected by means of primary and foreign keys. So, it is necessary to define the types of data, the number of tables and their structure, and also the links between the tables of the DB in order to develop the architecture of the rule storage. A rule consists of a condition and a result. In addition, the condition and the result of every rule both have their own membership functions. Thus, the table for storing rules in the DB has the following structure (TABLE II):

TABLE II. Table for storing rules.

id | Condition | MFC | Result | MFR

Let us explain the abbreviations (TABLE II): id is the index number of the record (primary key), Condition is the condition of the rule, MFC is the membership function of the condition, Result is the result of the rule, and MFR is the membership function of the result. id is an integer, Condition and Result are strings, MFC and MFR are real numbers.
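As an illustration only, TABLE II could be realized in MySQL as follows; the identifiers are our assumptions, not fixed by the paper (note that CONDITION is a reserved word in MySQL, hence the column is named condition_text here):

```sql
-- Illustrative MySQL sketch of TABLE II; identifiers are assumptions.
-- CONDITION is a reserved word in MySQL, hence condition_text/result_text.
CREATE TABLE rules (
    id             INT UNSIGNED  NOT NULL AUTO_INCREMENT, -- index number of the record
    condition_text VARCHAR(1000) NOT NULL,                -- condition of the rule
    mfc            DOUBLE        NOT NULL,                -- membership function of the condition
    result_text    VARCHAR(1000) NOT NULL,                -- result of the rule
    mfr            DOUBLE        NOT NULL,                -- membership function of the result
    PRIMARY KEY (id)
) ENGINE=InnoDB;
```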
Ontologies in the Web can be modified; therefore, the rules generated from the same ontology at different times can differ. That is why it is necessary to manage information about the ontologies that participate in the process of rule generation. This may also be useful for building an automatic system for updating the rules in the DB. The table for storing information about the used ontologies is the following (TABLE III):

TABLE III. Table for storing information about ontologies.
id | Name | Address | Access_date

The table for storing information about ontologies has the following structure (TABLE III): id is the index number of the record (primary key), Name is a string storing the name of the ontology, Address is a string storing the address of the ontology in the Web, and Access_date is the date when the ontology was accessed and processed. It is necessary to modify the table for storing rules (TABLE II) in order to add to each rule the information about the ontology from which it was generated. In this regard, the modified table for storing rules is the following (TABLE IV):

TABLE IV. Modified table for storing rules.
id | Ontology* | Condition | MFC | Result | MFR

The modified table for storing rules (TABLE IV) differs from the original table (TABLE II) by the presence of the Ontology* column, a foreign key that references the primary key of the table for storing information about ontologies (TABLE III). So, there is a table for storing information about all processed ontologies (TABLE III) and a table for storing all rules generated from the processed ontologies (TABLE IV). One more table is necessary for storing information about all mergers of ontologies. The structure of the table for storing information about all mergers of ontologies is the following (TABLE V):

TABLE V. Table for storing information about mergers of ontologies.
id | Table name | Key words | Merged ontologies

The table for storing information about mergers of ontologies has the following structure (TABLE V): id is the index number, Key words define the domain of the merged ontologies, Merged ontologies is the list of ontologies that are merged, and Table name is the name of the separate table that contains all rules from the merged ontologies. The structure of such a table can be identical to that of TABLE II.

Fig. 2. Architecture of the DB for ontology merging: the Ontologies, Rules and Mergers tables, plus one "rules of merger" table per merger.

Thus, the architecture of the DB that provides ontology merging via the rules generated from these ontologies consists of two conditional parts (Fig. 2). The first part is the fixed structure of the DB, provided by three tables: Ontologies, Rules and Mergers. Here the table "Rules" stores all rules from all ontologies, the table "Ontologies" stores information about all ontologies, and the table "Mergers" stores information about the ontologies to be merged. The second conditional part of the DB consists of the tables needed for storing the rules of each concrete merger of two or more ontologies. Unlike the first conditional part, which consists of exactly three tables, the number of tables here is not constant and depends on the number of ontology mergers: for example, if there are twenty mergers of ontologies, then there are twenty tables in the second part of the DB. A sketch of this schema is given below.
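As a concrete illustration, the following Python sketch creates the three fixed tables and one per-merger table. The column names follow TABLES III, IV and V; the use of Python's built-in sqlite3 module is an assumption made only to keep the example self-contained, and in the setting described above the same DDL statements would be issued against MySQL instead.

```python
import sqlite3

# Sketch of the DB architecture from Fig. 2: three fixed tables plus one
# rule table per merger. SQLite stands in for MySQL here purely so the
# example runs on its own; the DDL itself follows TABLES III-V.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Ontologies (
    id          INTEGER PRIMARY KEY,
    Name        TEXT,      -- name of the ontology
    Address     TEXT,      -- address of the ontology in the Web
    Access_date TEXT       -- when the ontology was accessed and processed
);
CREATE TABLE Rules (
    id        INTEGER PRIMARY KEY,
    Ontology  INTEGER REFERENCES Ontologies(id),  -- foreign key (TABLE IV)
    Condition TEXT,
    MFC       REAL,        -- membership function of the condition
    Result    TEXT,
    MFR       REAL         -- membership function of the result
);
CREATE TABLE Mergers (
    id                INTEGER PRIMARY KEY,
    Table_name        TEXT,  -- name of the per-merger rule table
    Key_words         TEXT,  -- domain of the merged ontologies
    Merged_ontologies TEXT   -- list of merged ontologies
);
-- One table like this is created for every merger (see TABLE V);
-- its structure repeats TABLE II.
CREATE TABLE Merger_1_rules (
    id        INTEGER PRIMARY KEY,
    Condition TEXT, MFC REAL, Result TEXT, MFR REAL
);
""")
con.close()
```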
V. CONCLUSION

The paper describes one more way to merge ontologies. Merging of ontologies should be guided by the purpose for which the merging is needed; the functioning of rule-based expert systems is such a purpose. Merging the rules that are generated from several ontologies is, in terms of the use of ontology merging in expert systems, the same as merging the ontologies themselves. Merging of rules means grouping the rules in order to use them in the expert system according to the user's request. The paper presents the DB architecture for storing the rules generated from ontologies and the mergers of ontologies; this information in the DB is needed for the functioning of expert systems. It is important to mention that such a DB architecture is sufficient for the operation of expert systems, but it is not ideal. Possible improvements of the DB architecture are connected with its practical realization, which is why they are not discussed here. This research also identified an interesting task: the rules generated from each separate ontology differ from the rules generated from one single merged ontology, i.e., an ontology merged from several other ontologies in the usual way, where parts of several ontologies are united into one [6]. It can be supposed that processing the rules generated from several ontologies is useful and has good research potential.

ACKNOWLEDGEMENTS

This work, like most of my previous works, has been supported by my family and my friends.

REFERENCES
[1] T. Berners-Lee, J. Hendler, O. Lassila, "The Semantic Web," 2001.
[2] World Wide Web Consortium (W3C), w3.org
[3] https://www.w3.org/TR/owl-ref/ [Accessed: 27.12.2019]
[4] O. Verhodubs, J. Grundspenkis, "Evolution of ontology potential for generation of rules," 2012.
[5] O. Verhodubs, "Ontology as a source for rule generation," 2014.
[6] O. Verhodubs, J. Grundspenkis, "Ontology merging in the context of Semantic Web Expert System," 2013.
[7] O. Verhodubs, J. Grundspenkis, "Towards the Semantic Web Expert System," 2011.
[8] https://jena.apache.org/ [Accessed: 04.01.2020]
[9] O. Verhodubs, "Mutual transformation of information and knowledge," 2016.
[10] www.mysql.com [Accessed: 09.01.2020]
[11] https://dev.mysql.com/doc/refman/8.0/en/what-is-mysql.html [Accessed: 11.01.2020]
ai_researcher
7
Idea2Img_Iterative_Self-Refinement_with_GPT-4V(ision)_for_Automatic_Image_Design_and_Generation.pdf
arXiv:2310.08541v2 [cs.CV] 14 Aug 2024

Idea2Img: Iterative Self-Refinement with GPT-4V for Automatic Image Design and Generation

Zhengyuan Yang, Jianfeng Wang, Linjie Li, Kevin Lin, Chung-Ching Lin, Zicheng Liu, and Lijuan Wang
{zhengyang,jianfw,lindsey.li,keli,chungching.lin,zliu,lijuanw}@microsoft.com
https://idea2img.github.io/
Microsoft

Abstract. We introduce "Idea to Image," 1 an agent system that enables multimodal iterative self-refinement with GPT-4V(ision) for automatic image design and generation. Humans can quickly identify the characteristics of different text-to-image (T2I) models via iterative explorations. This enables them to efficiently convert their high-level generation ideas into effective T2I prompts that can produce good images. We investigate if systems based on large multimodal models (LMMs) can develop analogous multimodal self-refinement abilities that enable exploring unknown models or environments via self-refining tries. Idea2Img cyclically generates revised T2I prompts to synthesize draft images, and provides directional feedback for prompt revision, both conditioned on its memory of the probed T2I model's characteristics. The iterative self-refinement brings Idea2Img various advantages over vanilla T2I models. Notably, Idea2Img can process input ideas with interleaved image-text sequences, follow ideas with design instructions, and generate images of better semantic and visual qualities. The user preference study validates the efficacy of Idea2Img on automatic image design and generation via multimodal iterative self-refinement.

Keywords: Multimodal Agents · Self-Refinement · Large Multimodal Models · Image Design and Generation

1 Introduction

"Image design and generation" aims to create an image from a high-level user idea. This input IDEA can contain interleaved reference images, such as "the dog looks like the one in the image," or instructional texts specifying the intended design usage, such as "a logo for the Idea2Img system." To convert IDEA into an image, humans may first draft detailed descriptions of the imagined image, and then use text-to-image (T2I) models [36, 39, 40, 42, 63] to generate the image. This manual process for users to search for an ideal detailed description (i.e., T2I prompt) that fits the T2I model typically involves iterative exploration [51, 67]. As shown in Figure 1, humans may first design and draft an initial T2I prompt based on their imagined IDEA to generate. Then, they can obtain multiple draft images with a T2I model, select the most promising draft, write text feedback, and further revise the T2I prompt.

1 Short for "Idea2Img." System logo design assisted by Idea2Img.

Fig. 1: Idea2Img framework enables LMMs to mimic human-like exploration to use a T2I model, enabling the design and generation of an imagined image specified as a multimodal input IDEA. The iterative process involves LMMs functioning in different roles to refine the image creation. Specifically, LMMs will (1) generate and revise text prompts for the T2I model, (2) select the best draft images, and (3) provide feedback on the errors and revision directions. This multimodal iterative self-refinement process requires LMMs to memorize the T2I model's characteristics observed in previous iterations as humans and adjust T2I prompts accordingly.
As this iteration progresses, we humans can swiftly grasp the characteristics of a specific T2I model, such as words that the model can not understand, finally producing a good image generated by a suitable T2I prompt. Given the remarkable capabilities of large multimodal models (LMMs) [14, 31, 57], we explore if we can build systems based on LMMs to develop similar iterative self-refinement ability, thereby relieving humans from the tedious process of converting ideas to images.

Iterative self-refinement is one intrinsic ability humans possess when exploring unknown environments and solving complicated problems. Large language models (LLMs) agent systems [9, 27, 46] have demonstrated the effectiveness of self-refinement in better addressing natural language processing tasks, such as acronym generation, sentiment retrieval, text-based environment exploration, etc. Transitioning from text-only tasks to multimodal environments poses new challenges of improving, assessing, and verifying multimodal contents, such as multiple interleaved image-text sequences. For example, when learning to use T2I models, LMMs need to improve the generation with revised T2I prompts, assess multiple images in detail to select the best draft, and verify the draft image with the multimodal IDEA to provide text feedback. These steps, each requiring different multimodal understanding capabilities, jointly enable the intriguing multimodal iterative self-refinement ability. Such an LMM framework can automatically learn to tackle various real-world problems [57] via self-exploration, such as navigating GUI to use electronic devices, exploring unknown physical environments via an embodied agent, engaging in electronic games, and so on. In this study, we focus on "image design and generation" as the task to study the multimodal iterative self-refinement ability.

(Figure 1 diagram text omitted: the framework loops over ① prompt generation, ② draft image selection, and ③ feedback reflection around the T2I model, illustrated with three example multimodal IDEAs.)

Fig. 2: Overview of the image design and generation scenarios enabled by Idea2Img. In each sub-figure, the image and text in the left green box are the user input IDEA. The center image is the baseline results directly generated by the same T2I model with a human-written T2I prompt, and the image on the right is generated with the T2I prompt discovered by Idea2Img's iterative self-refinement exploration.
To this end, we introduce Idea2Img, a multimodal iterative self-refinement framework for automatic image design and generation. As illustrated in Fig- ure 1, Idea2Img involves an LMM, GPT-4V(ision) [1, 31–33], interacting with a photo of the object pointed by the blue arrow, and a brown corgi dogpainting of a corgi dog with style similar to this one in the imagean image of a hand holding an iphone. The image is used for illustrating how to take a screen shot on iphone5 people sitting around a table drinking beer and eating buffalo wingsa logo suitable for a stylish hotel.a drawing with the background changed to a beachA hand drawing of a room where people can sleep and study. Hand drawing shows the sketch and looks like the given image.a plate that has no bananas on it. there is a glass without orange juice next to ita whole cake on the table with words Azure Research written on the cakean image of a car perfect for a children's painting competitionPhoto of Bill Gates with the same cloth as in the given imagewith a dog looks like this one in the imageObject countScene textKnowledgePrompt followingVisual designIntended usageVisual design & styleConcept customization & visual pointingImage manipulationMultiple concepts customizationBlending images for new visual designStyle transferpainting of a corgi dog with style different from this one in the imageFind the the image style pattern in the left two dog images and apply it on the top right people in tree pose image. Provide a textual description that keeps the content in the people in tree pose image, with the correct style pattern.In-context entity and style transferOpposite style transferCartoon drawing of Mr Bean playing tennis, with the same cloth and pose as in the given imageVisual attribute referringA painting of a tennis game, with the image style similar to this one in the imageVisual attribute referring & style transferand the second image isA logo with a design that naturally blends the two given images as a new logo. The first image isIDEAT2IIdea2ImgIDEAT2IIdea2ImgIDEAT2IIdea2ImgIDEAT2IIdea2ImgIDEAT2IIdea2Img 4 Z. Yang et al. T2I model to probe its usage and find an effective T2I prompt. The LMM will act in different roles to analyze the return signal from the T2I model (i.e., draft images) and design the next round’s queries (i.e., text T2I prompts). The three roles of generating T2I prompts, selecting draft images, and reflecting feedback together enable the multimodal iterative self-refinement ability. Specifically, (1) Prompt generation: GPT-4V generates N text prompts that correspond to the input multimodal user IDEA, conditioned on the previous text feedback and refinement history; (2) Draft image selection: GPT-4V carefully compares N draft images for the same IDEA and select the most promising one; (3) Feed- back reflection: GPT-4V examines the discrepancy between the draft image and the IDEA. GPT-4V then provides feedback on what is incorrect, the plausible causes, and how T2I prompts may be revised to obtain a better image. Fur- thermore, Idea2Img is enhanced with a memory module that stores all prompt exploration histories, including previous draft images, text prompts, and feed- back. The Idea2Img framework iterates among these three steps with GPT-4V for automatic image design and generation. To users, Idea2Img functions as an enhanced image design and generation assistant. 
Compared with T2I models, Idea2Img can handle design instructions instead of requiring detailed image description, support the multimodal IDEA input, and generate images of better semantic and visual qualities. We overview representative image design and generation scenarios in Figure 2. For example, Idea2Img can incorporate the visual design and intended usage description in IDEA, extract arbitrary visual information from the input image, and process IDEA with arbitrarily interleaved image-text sequences. Built upon these new functionalities and scenarios of interest, we develop an evaluation IDEA set with 104 samples, containing complicated queries that humans may fail in their first trials. We perform user preference studies on Idea2Img with different T2I models. The consistent user preference score improvements on different image generation models, e.g., +26.9% with SDXL [36], indicate the effectiveness of Idea2Img in image design and generation. Our contributions are summarized as follows.

– We study "automatic image design and generation," which aims to create an image from an input IDEA. This new multimodal IDEA input enables visual creation with reference image inputs and instructions on desired designs.
– We explore the multimodal iterative self-refinement ability in GPT-4V-based agent systems, showcasing its effectiveness in improving, assessing, and verifying multimodal contents.
– We propose Idea2Img, a multimodal iterative self-refinement framework that enhances any image generation model for visual design, enabling various new image creation functionalities, and achieving better generation qualities.
– We present an evaluation set with 104 challenging multimodal IDEA. The consistent user preference score improvements, when experimented on different image generation models, indicate Idea2Img's effectiveness in automatic image design and generation.

2 Related Work

LLM-based self-refinement. Idea2Img is inspired by the effectiveness of iterative self-refinement in LLM-based agent systems [27, 34, 46] in exploring unknown environments and tasks, built upon the successful LLM agents [15, 35, 37, 43, 56, 61, 66]. Self-refine [27] takes the same LLM to iteratively critique its outputs and leverage this feedback to enhance its predictions, showing effectiveness across various NLP tasks. Reflexion [46] explores a self-reflective LLM system on the text-based environment exploration task [47] and multi-hop QA [60]. Despite the success, LLM-based self-refinement naturally can not understand multimodal inputs. Consequently, the explored tasks and environments are limited to the natural language description, such as AlfWorld [47]. Idea2Img explores the potential of an LMM-based iterative self-refinement system for multimodal environment exploration, from a simple T2I model to other more complicated environments.

Multimodal agents. Our Idea2Img is related to multimodal agents [16, 22, 26, 44, 49, 52, 58, 64] that chain external tools such as T2I or vision-language models with LLMs for multimodal tasks. For instance, MM-ReAct [58] integrates ChatGPT with multiple vision tools for multimodal reasoning and action, enabling it to solve various complicated visual understanding tasks. Visual ChatGPT [52] empowers ChatGPT to allocate various image generation models, such as Stable Diffusion [40], img2img model [28], ControlNet [65], enabling multi-step visual editing and generation.
The primary difference between Idea2Img and existing multimodal agent studies [52, 58] lies in the approach to understand the tool usage. Existing studies assume the knowledge of how to best use each tool and provide such information to LLMs via text instructions or in-context examples. In contrast, the optimal usage of the tool remains unknown in Idea2Img and requires iterative exploration. Another minor distinction is that Idea2Img utilizes LMMs instead of LLMs, and thereby does not require general visual understanding tools such as a caption model [50, 53].

Extensions of base T2I models. Idea2Img provides a more natural way for users to design and produce their desired visual content. This framework, which extends T2I models for new functionalities, is related to various works in improving base T2I models [36, 39, 40, 42, 63]. These studies include extending the base T2I model to better follow user prompts [5, 7, 10, 12], finding magic words in T2I prompts for better visual quality [51, 67], supporting extra image input for image manipulation [6, 17, 18, 28], style transfer [13], visual concept customization [2, 8, 19, 41, 45], and so on. While specialized T2I extensions can address a single specific functionality, Idea2Img offers a more unified and widely applicable framework. That is, a single Idea2Img framework can handle various generation scenarios, ranging from style transfer to attribute customization, without requiring separate models or task-specific model design and finetune. More importantly, Idea2Img effectively collaborates with those enhanced generative models, consistently improving them by exploring suitable text prompts.

Fig. 3: The framework overview of Idea2Img, which takes an LMM [31, 32] to explore a T2I model via multimodal iterative self-refinement, leading to an effective T2I prompt for the input user IDEA. The rounded rectangle shape indicates a GPT-4V call.

3 Idea2Img Framework

Figure 3 illustrates the Idea2Img framework. The Idea2Img framework involves two core pre-trained models, i.e., the GPT-4V(ision) as the LMM M, and a text-conditioned image generation model to explore, G.² Idea2Img also contains a memory m that stores insights on G discovered by M during previous iterations. (² We will show image generation models other than T2I later in experiments. For clarity, we use T2I as a representative generation model to introduce Idea2Img.)

Execution flow. We begin with an overview of the key steps in M iteratively exploring the use of G. Starting from the top-left of Figure 3, "initial prompt generation" converts the input multimodal user IDEA into T2I text prompts, later producing multiple draft images with T2I model G. "Draft image selection" then selects the best draft image among them for the current iteration. The selected image is either output as the final prediction or continues for further refinement, depending on the stop condition. For the latter, "feedback reflection" compares the current best draft image with the multimodal IDEA, and summarizes the major discrepancy as text feedback. With the iteration history and text feedback, "revised prompt generation" then drafts revised T2I prompts and continues the iterative self-refinement with the new set of draft images.

① Initial prompt generation. This step generates $N$ initial T2I prompts $\{y_0^0, \ldots, y_0^{N-1}\}$ following the input user IDEA $x$, by prompting $\mathcal{M}$ with LMM prompt $p_{gen}$:

$$\{y_0^0, \ldots, y_0^{N-1}\} = \mathcal{M}(x, p_{gen}) \quad (1)$$

The "initial prompt generation" requires $\mathcal{M}$ to understand the multimodal user IDEA $x$ and convert the design IDEA into descriptive T2I prompts. The LMM prompt $p_{gen}$ is a zero-shot prompt without in-context examples. A sketch of such a call is given below.
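For illustration, a zero-shot $p_{gen}$ call might be assembled as in the following sketch. The instruction wording and the call_gpt4v helper are hypothetical stand-ins; the paper's actual LMM prompts are given in its Appendix B.

```python
N = 3  # number of candidate prompts per iteration

# Hypothetical zero-shot instruction for initial prompt generation (Eq. 1);
# the wording is illustrative, not the paper's actual p_gen.
p_gen = (
    "You will help a text-to-image model render a user's IDEA. Read the "
    f"IDEA (text plus reference images) and write {N} different detailed "
    "T2I prompts that realize it. Return one prompt per line."
)

def initial_prompts(idea_text, idea_images, call_gpt4v):
    # `call_gpt4v` is a placeholder for a real multimodal LMM API call.
    reply = call_gpt4v(system=p_gen, text=idea_text, images=idea_images)
    return [line.strip() for line in reply.splitlines() if line.strip()][:N]
```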
With the "initial prompt generation" step, Idea2Img can understand user IDEA with interleaved image-text sequences, instead of the text-only T2I prompts containing the image description. Specifically, (1) IDEA can be a high-level design or usage instead of the detailed image description, such as "a car image for a children's painting competition"; and (2) IDEA can take multiple images and use interleaved text instruction to extract arbitrary visual information of interest, including image style, visual entity, object attributes, etc. Then, in iteration $t = 0$ as well as future iterations $t$, each T2I prompt $y_t^n$ is separately sent to the T2I model $\mathcal{G}$, resulting in $N$ draft images $i_t^n = \mathcal{G}(y_t^n)$, $n = 0, \ldots, N-1$.

② Draft image selection. With the $N$ draft images in iteration $t$, "draft image selection" selects the best draft image $i_t^*$ by prompting $\mathcal{M}$ with LMM prompt $p_{select}$:

$$i_t^* = \mathcal{M}(i_t^0, \ldots, i_t^{N-1}, x, p_{select}) \quad (2)$$

The design of a "draft image selection" step is motivated by the observation that T2I models could generate bad images with good prompts. This step is designed to filter out low-quality images, and avoid the quality perturbation to dominate the iterative refinement. The task of selecting the best image requires $\mathcal{M}$ to compare and grade both the semantics and visual quality of $N$ similar draft images. We find such a "spot the difference" task challenging for LMMs, and only the very recent models [31, 57] are capable of performing the selection reliably.

③ Feedback reflection. After obtaining the selected image $i_t^*$, the framework checks the stop condition, such as if the current iteration $t$ exceeds the maximum $T$. Idea2Img then outputs $i_t^*$ as the output image or proceeds the refinement process to the "feedback reflection" step accordingly. "Feedback reflection" aims to provide text feedback $f_t$ that describes the direction to improve for draft image $i_t^*$. The step prompts $\mathcal{M}$ with LMM prompt $p_{fb}$, conditioned on the draft image $i_t^*$ and memory $m$:

$$f_t = \mathcal{M}(i_t^*, m, x, p_{fb}) \quad (3)$$

"Feedback reflection" takes $\mathcal{M}$ to compare an image $i_t^*$ with the multimodal user IDEA $x$, and summarize the gap as text feedback $f_t$. The step not only requires $\mathcal{M}$ to identify the discrepancy between image $i_t^*$ and IDEA $x$, but also benefits from writing the major errors to make the iteration effective. In practice, we find it helpful to explicitly specify the aspects to check, such as style, entity, attributes, appearance, etc., via text instructions or in-context examples in LMM prompt $p_{fb}$. Furthermore, we add text instructions to $p_{fb}$ to have $\mathcal{M}$ "focus on one thing to improve in each feedback," and "provide a high-level explanation of how to modify prompts to address the given feedback."

④/① Revised prompt generation. Finally, "prompt generation" takes text feedback $f_t$ and memory $m$ to draft $N$ revised prompts $\{y_{t+1}^0, \ldots, y_{t+1}^{N-1}\}$, by prompting $\mathcal{M}$ with LMM prompt $p_{revise}$:

$$\{y_{t+1}^0, \ldots, y_{t+1}^{N-1}\} = \mathcal{M}(f_t, m, x, p_{revise}) \quad (4)$$
Memory module. Memory m is one important design in Idea2Img. m has the format of interleaved image-text sequences that store all previous iterations’ feedback, selected draft image, and the corresponding text prompts: mt = (cid:2)y∗ 0, i∗ 0, f0, . . . , y∗ t−1, i∗ t−1, ft−1 (cid:3) . (5) It allows LMM M to understand the properties and capabilities of the T2I model G in use, such as a keyword that G may not understand or a complicated scene that G fail to generate, and incorporate such knowledge in generating the revised T2I prompts y. For example, it may describe the appearance of a yoga pose in detail, instead of only mentioning its name in y. Examples are shown in Appendix Figures A-D, when comparing initial and refined prompts y0 and yT . 4 Experiments 4.1 Experiment Settings Compared model variants. We mainly compare the following three models in image generation. – “Initial-round manual prompt” is the baseline T2I prompt written by humans with minor prompt engineering. It serves as the baseline of a T2I prompt that merely contains key information in IDEA. – “Initial-round Idea2Img prompt” is the LMM-generated T2I prompt in the initial round. Specifically, the max iteration T = 1, and LMM M is only used for initial prompt generation and draft image selection, but not feedback reflection nor revised prompt generation. This Idea2Img variant is used to ablate Idea2Img’s gain from prompt generation and selection, vs. the further iterative refinement. – “Iterative self-refined Idea2Img prompt” is complete Idea2Img pipeline with the max iteration T = 3. Evaluation samples and metrics. For the quantitative evaluation, we collect a dataset of 104 user IDEA as input queries. Among them, 33 queries contain text only, 43 queries contain an image-text sequence with a single image, and the remaining 28 contains a sequence with two or more images. The text in most IDEA contains not only descriptive content text that describes the scene to generate, but also instructional text such as “a logo for commercial advertising” or “generate the pointed dog in blue.” All test queries are manually composed. We then perform the user preference study as the main quantitative metric. Users are presented with the IDEA and multiple images to select the best one for each IDEA. The evaluation script automatically shuffles the order during evaluation to prevent the influence of image orders. Idea2Img: Self-Refinement with LMMs for Automatic Visual Creation 9 Table 1: User preference scores when applying Idea2Img onto different image genera- tion models (compare the three scores in the middle section within each row individu- ally). We observe that “Iterative self-refined Idea2Img prompt” is consistently favored across all experimented image generation models. ∆iteration reports the preference gain from the iterative Idea2Img over the initial-round Idea2Img. User preference score (%) SDXL v1.0 DeepFloyd IF SD v2.1 SD v1.5 SDXL-img2img IF-img2img Initial-round Initial-round manual prompt Idea2Img prompt 13.5 14.4 13.5 8.6 8.6 8.6 29.8 34.6 40.4 43.3 34.6 38.5 Iterative self-refined Idea2Img prompt 56.7 51.0 46.2 48.1 56.7 52.9 ∆iteration +26.9 +16.3 +5.8 +4.8 +16.3 +14.4 Fig. 4: User preference scores among T2I models before and after iterative self- refinement. We observe that the initially favored T2I model, SDXL, benefits more from the Idea2Img iteration. Experimented T2I models. We experiment Idea2Img on a wide variety of T2I model G with diverse model capacities and functionalities. 
Specifically, we study Stable Diffusion (SD) v1.5 [40], SD v2.1, SDXL v1.0 with refiner [36], and DeepFloyd IF (IF-I-XL and IF-II-L) [20]. Other than T2I models, we also con- sider the img2img pipeline (i.e., SDEdit [28]) for SDXL and DeepFloyd IF, as a demonstration of using Idea2Img for the text-conditioned image-to-image gen- eration. The default strength t0 in the img2img pipeline is 1.00. SDXL-img2img and IF-img2img are the same as SDXL and IF (i.e., T2I) when IDEA contains text only, and condition on the first image with IDEA contains multiple images. LMM prompts pgen, pselect, pf b, previse are kept the same for all experimented T2I models. Appendix Section B shows the complete LMM prompts. 4.2 Image Generation Results User preference evaluation. Table 1 compares the user preference when selecting from the three images generated by “initial-round manual prompt,” “initial-round Idea2Img prompt,” and “iterative self-refined Idea2Img prompt,” Idea2ImgSDXL v1.0DeepFloyd IFSD v1.5SD v2.1 10 Z. Yang et al. for each user IDEA with the same T2I model. Among T2I models with different model sizes and functionalities, Idea2Img leads to consistent improvements in user preference. The initial-round Idea2Img prompt already improves the initial- round manual prompt, by effectively understanding the multimodal user IDEA and selecting the best draft images. The full Idea2Img framework further im- proves from the initial-round Idea2Img results with the multimodal iterative self-refinement. For example, when using SDXL v1.0, users prefer the images generated by Idea2Img 59/104 = 56.7% times, compared with the baseline of 14/104 = 13.5%. Similar improvements are observed on all experimented T2I models, as shown in the bolded column “iterative self-refined Idea2Img prompt.” Furthermore, we examine which T2I model benefits the most from the LMM iterative self-refinement. By comparing the ∆iteration in Table 1 that represents the difference between first-round and iterative Idea2Img user preference, we observe that stronger T2I models tend to benefit more from LMM refinements. For example, SDXL and IF become more favored 26.9% and 16.3% times af- ter iteration, compared with the 5.8% and 4.8% for SD v2.1 and SD v1.5. The trend that stronger T2I models benefit more from Idea2Img is also observed in Figure 4’s analysis, where users pick their preferred image generated by dif- ferent T2I models. After Idea2Img’s iterative refinement, the initially favored model SDXL benefits more from the iteration, resulting in an even higher user preference rate, from 46.2% to 65.4%. We conjecture that the better language understanding ability in stronger T2I models enables them to better follow re- vised T2I prompts. They also have a better image generation capability that makes it possible to generate challenging scenes, when given a good T2I prompt optimized by Idea2Img. Nonetheless, Idea2Img is effective across T2I models of varying capacities, consistently leading to a higher user preference score. Qualitative comparisons. Idea2Img could help users generate images that better follow IDEA, such as the correct object counts in Figure 5(a). Idea2Img enables visual content design, in contrast to conventional T2I that requires a detailed visual content description. For example in Figure 5(b), Idea2Img designs visual logo based on the instruction of “a logo for a 2024 conference in Seattle.” The power of LMMs allows Idea2Img to extract arbitrary information from the input image for visual generation. 
This could be any object in the image like "the circled dog" in Figure 5(c) or the image style like in Figure 5(d). Such general visual conditioning ability can be seamlessly extended to compose multiple visual and text conditions, such as composing the camera angle and image style in Figure 5(e) and two objects in Figure 5(f). Other than SDXL, Idea2Img is effective in finding text prompts for other image generation models. This includes arbitrary T2I models (e.g., SD v2.1 [40], DeepFloyd IF [20], DALL·E 3 [30], etc.), text-conditioned image-to-image models (e.g., SDXL-img2img and IF-img2img with SDEdit [28]), and other specialist generation models (e.g., reward-tuned T2I [11, 21], region-controlled generators [23, 59, 65], and other specialist models [3, 6, 41]). Figure 6 overviews Idea2Img working with different image generation models. We show additional qualitative results and discussions in Appendix Section A.1.

Fig. 5: The comparisons among initial-round manual prompt, initial-round Idea2Img prompt, and iterative self-refined Idea2Img prompt, with SDXL [36] as the T2I model.

How Idea2Img may assist humans? We use selected qualitative results to highlight the scenarios where humans might find Idea2Img most helpful in image design and generation, compared with conventional T2I generation.

1. New functionalities with multimodal IDEA inputs. Idea2Img provides a more natural way for human interaction, where users do not have to describe their desired image solely through texts and painstakingly search for the right prompt word. Instead, the multimodal IDEA allows Idea2Img to precisely extract specific elements from one or multiple input images, such as the dog breed and color, pointed objects, artist style, camera view, and more, as shown in Figure 5. Finding proper words that the T2I model can understand for such visual concepts could be tedious for humans, e.g., the art style "with bold lines and bright colors, with a cubist and surrealistic style, with a focus on geometric shapes and patterns." in Figure 5(d). Idea2Img automates this process via its iterative self-refinement.

2. New functionalities with instructional inputs. Vanilla T2I models struggle to understand T2I prompts that describe the intended visual design or purpose of the generated image, such as "a logo for a 2024 conference held in Seattle" in Figure 5(b). Instead, the prompt needs to be a comprehensive description of the image to generate, demanding extra drafting effort from users, such as ". . .
the Seattle skyline in the center and the conference title (Figure 5 panel texts (a)-(f) omitted: each panel lists the input user IDEA, the initial-round manual prompt, the initial-round Idea2Img prompt, and the iterative self-refined Idea2Img prompt.)
Please provide a textual description of the design before generatinga logo for a 2024 conference held in Seattle.A logo design for a 2024 conference held in Seattle, with a modern and minimalist design, featuring the Seattle skyline and the conference title in a geometric sans serif font, with a blue and green color scheme.with a dog looks like this one in the imageBill Gates in a formal suit on a bustling city street, raising his hand in a friendly wave like a charming businessman, with a cute pug dog with a wrinkled face and large eyes by his side, under natural daylight, with a sense of approachability.Photo of Bill Gates with the same hand gesture as in the given image that is waving hand, with a dog looks like this one in the image that is a pug dogBill Gates wearing a suit and tie, standing on a busy street with tall buildings, waving with his right hand raised and palm facing forward, with a small pug dog with an adorably wrinkled face and big round eyes next to him, in daylight, exuding friendliness and warmth.(a)Photo of Bill Gates with the same hand gesture as in the given image 12 Z. Yang et al. Fig. 6: The comparisons among initial-round manual prompt, initial-round Idea2Img prompt, and iterative self-refined Idea2Img prompt, with different image generation models. Additional qualitative results and discussions are in Appendix A.1. below it . . . ”. In contrast, Idea2Img effectively understands the instructional texts in IDEA and creates images accordingly. 3. Better semantic and visual quality. Finally, the iterative refinement allows Idea2Img to generate images with better semantic and visual qualities, leading to an effective automatic image creation assistant. 4.3 LMM Feedback, Revision, and Selection We show representative LMM outputs for “feedback reflection,” “revised prompt generation,” and “draft image selection.” Additional results are in Appendix A.2. Feedback reflection. Figure 7(a) shows the text feedback generated by GPT- 4V for the user IDEA and the draft image and T2I prompt. Idea2Img can ef- fectively check if the generated image is correct, and verify if the draft image corresponds to the visual descriptions in IDEA. This includes the breed of the dog in (a.1), as well as art styles, objects, visual attributes, etc. In addition to identifying the discrepancy, Idea2Img also points to the plausible directions that may improve the T2I prompt in the text feedback. For example, in (a.2), Idea2Img provides guidance to have generated images better follow the user intention of “an image for a children’s painting competition,” by “specifically mentioning that the car should be simple and cartoon-like.” Input user IDEAInitial-round manual promptInitial-round Idea2Img promptIterative self-refine Idea2Img prompt6 cakes placed in a boxA cardboard box with 6 homemade cakes, each with a unique design and flavor, ready to be delivered.6 cakes placed in a boxA brown cardboard box with 6 homemade cakes, each with a different flavor and design, arranged in a symmetrical pattern.A contemporary logo for a 2023 conference in Paris, showcasing a stylized Eiffel Tower, elegant fonts, and a color palette inspired by the French flag, with a touch of sophistication.a logo for a 2023 conference held in Paris. 
Please provide a textual description of the design before generatinga logo for a 2023 conference held in Paris.A sleek and modern logo for a 2023 conference in Paris, incorporating a stylized Eiffel Tower, the year '2023' in a classy font, and a tricolor background inspired by the French flag. The design includes the word 'Paris' at the bottom in a refined font, symbolizing sophistication and culture.The image of a man waving is stylized with a low-poly design, where the scene is broken down into geometric shapes filled with bold and vibrant colors, creating a visually striking and contemporary aesthetic, under the bright sky.Find the the image style pattern in the left two dog images and apply it on the top right people waving hand image. Provide a textual description that keeps the content in the people waving hand image, with the correct style pattern.Find the the image style pattern in the left two dog images and apply it on the top right people waving hand imageA man in a formal suit is captured mid-motion, his hand raised in a wave, each finger distinctly visible. The entire scene is reimagined in a low-poly art style, where each facet of the man and his attire is defined by geometric shapes filled with a palette of bold, contrasting colors. The background remains a solid, untextured expanse, free of additional elements like sun rays or clouds, ensuring the focus remains solely on the colorful, stylized figure of the man waving. The lighting is neutral, casting no shadows or highlights, preserving the simplicity of the scene.Two dogs, one with the appearance of a Cavalier King Charles Spaniel with black and white fur and the other resembling a brown and white Yorkshire Terrier, are energetically running side by side on a sandy beach, their tails wagging in excitement. The backdrop features a serene ocean view, clear skies, gentle waves, bright sunlight casting soft shadows, and distant seagulls soaring in the sky, adding a lively, playful atmosphere to the scene.Two dogs running on the beach, the left one looks like a fluffy golden dog and the right one looks like the left dog in the image that is black and whiteTwo dogs, a Yorkshire Terrier with detailed brown and white fur that glistens in the sunlight, on the left of a black and white Cavalier King Charles Spaniel, are immersed in a game of chase along a scenic beach. The ocean waves, with their rhythmic motion, complement the dogs’ lively energy. The sky, a masterpiece of soft, blended colors, serves as a canvas for the seagulls that glide gracefully, casting intricate shadows that dance on the sandy surface, amplifying the scene’s vibrancy.Two dogs running on the beach, the left one looks likeand the right one looks like the left dog in the imageA close-up of a sushi roll in the making, with a pile of rice on a seaweed sheet and salmon and cucumber being added, next step is rolling and cutting the sushi.image that depicts what will happen next based on the story in the two given frames. Please provide a textual description of the image before generating. The first frame in the story is rice on sushi roll, and the second frame in the story is ingredients on riceA close-up of a sushi roll in the making, with a pile of rice on a seaweed sheet and salmon and cucumber being added, next step is rolling and cutting the sushi, with a bamboo mat and knife nearby, in a kitchen setting.image that depicts what will happen next based on the story in the two given frames. Please provide a textual description of the image before generating. 
The first frame in the story is, and the second frame in the story isA marble table with a Coca-Cola bottle floating above it, defying gravity, in a modern living room, with soft ambient lighting, minimalistic decor, and a hint of surrealism.photo with a design that composites the two given objects into the new photo. The placement of the objects does not adhere to the laws of physics. Please provide a textual description of the objects and design before generating. The first object is a marble table, and the second object is a cokeIn a contemporary living room, a Coca-Cola bottle is suspended in mid-air above a marble table, with a mystical aura, warm lighting, and sleek decor, evoking a sense of magic and defying gravity.photo with a design that composites the two given objects into the new photo. The placement of the objects does not adhere to the laws of physics. Please provide a textual description of the objects and design before generating. The first object is, and the second object isSD v2.1DeepFloyd IFSDXL img2imgDeepFloyd IF img2imgDALL·E 3DALL·E 3 Idea2Img: Self-Refinement with LMMs for Automatic Visual Creation 13 (a) Feedback reflection: The right column shows the examples of generated text feedback. (b) Revised prompt generation: The right column shows the examples of revised prompts. (c) Draft image selection: The right column shows the examples of the draft image selection index and justification. Fig. 7: GPT-4V’s outputs in Idea2Img for image feedback, revision, and selection. Revised prompt generation. Figure 7(b) compares the T2I prompts before and after the prompt revision, showcasing how text feedback may help the refine- ment. For example, in (b.1), the revised T2I prompt specifies “an empty plate with no other objects” to preclude the T2I model from generating bananas, which occurred with the previous prompt “no bananas.” In (b.2), the revised T2I prompt includes a detailed description of “a yellow tennis outfit” and “a Yorkshire Terrier” to generate the queried clothing and dog. Draft image selection. Performing draft image selection requires LMMs to compare multiple similar draft images and pick the one that best matches the multimodal input IDEA. Figure 7(c) shows the selection results generated by Idea2Img. GPT-4V is prompted to give justifications and scores for each draft image, in addition to the final selection. We observe that Idea2Img could com- prehensively judges different aspects in IDEA, and gives reasonable scores and selection indexes. E.g., finding the image with both sleep and study area in (c.1), verifying content and style in (c.2), and other examples in Appendix Figure G. Revised prompt and imagecartoon drawing of Mr Bean with the same cloth as in the given imageplaying with a dog looks like this one on the beachThe dog in the generated image is not the same breed as the one in the given image. The dog in the given image is a Yorkshire Terrier, while the dog in the generated image is a Boston Terrier. 
To address this issue, the sentence prompt should be modified to specifically mention the breed of the dog as a Yorkshire Terrier.A humorous scene of Mr Bean and a small dog playing fetch on a sandy beach with blue skies in the background.A humorous scene of Mr Bean wearing a yellow tennis outfit and playing with a Yorkshire Terrier on a beach with palm trees in the background.Input user IDEADraft prompt and image to reviseText feedback(b.1)The image generated by the AI art generation model does not follow the user imagined IDEA of the scene as there are bananas on the plate and orange juice in the glass. To address this, the sentence prompt can be modified to specify that there should be no bananas on the plate and no orange juice in the glass.a plate that has no bananas on it. there is a glass without orange juice next to itA breakfast scene with a plate that has no bananas and a glass without orange juice, with a blue background and a bright lighting.A close-up view of an empty plate with no other objects on it, and an empty glass without orange juice on a wooden table, with a white background.(b.2) 14 Z. Yang et al. 5 Limitation and Discussion Tasks beyond image generation. Idea2Img explores the emergent ability of multimodal self-refinement in LMM-based systems, through the image design and generation task. Specifically, Idea2Img views the T2I model to use as an un- known multimodal environment to explore, and iteratively refines T2I prompts to find its optimal usage. This concept mirrors the intrinsic human approach of iter- ative problem-solving when faced with unknown environments or challenges. We leave its extension to other intriguing tasks, e.g., GUI navigation [55], embodied agents [29], and complicated visual reasoning [38, 54], for future exploration. From a single image generation model to multiple tools. Idea2Img ex- plores using a single image generation model, such as a text-to-image model [40] or a text-conditioned image-to-image model [28]. When needed, other specialized generative models like ControlNet [65], inpainting [3], region-controlled T2I gen- eration [23, 59], customized generation [8, 41], and video generation [48, 62] can be seamlessly switched and supported. That is, Idea2Img could broadly boost different visual generation models of diverse specialties by exploring their opti- mal text description or instruction prompts. Beyond a single generation model, Idea2Img can also be used to allocate multiple tools as in multimodal agent stud- ies [52,58]. In this case, Idea2Img isn’t limited to optimizing the use of individual tools but also investigates their effective collaboration when used together, such as generator selection and multi-step visual generation. Consolidating explored knowledge. We have shown the effectiveness of LMM iterative self-refinement in automatic image design and generation. Idea2Img can also help to consolidate or distill the explored knowledge into T2I model parameters, such that no inference-time iterative refinement is needed when en- countering seen generation scenarios. One could collect a dataset using Idea2Img for a scenario of interest, and fine-tune a T2I model with the explored self- refinement trajectory. Storing the probed knowledge as sample-agnostic prompt for each image generation model is another promising direction [15, 37, 66]. 
Finally, with minimal extra computation, we find it helpful to use the explored T2I prompt history as in-context examples for prompt re-writing and expansion, improving from the zero-shot expansion like the one in ChatGPT-Dalle-3 [1, 4].

6 Conclusion

We have presented Idea2Img, a multimodal iterative self-refinement framework that leverages GPT-4V(ision) for image design and generation. Idea2Img explores the emergent capabilities of iterative self-refinement in LMM-based agent systems, showcasing its effectiveness in improving, assessing, and verifying the generated multimodal content. The user preference study demonstrates Idea2Img's capability in assisting humans to find the optimal usage of generation models for automatic image design and generation.

Acknowledgment

We are deeply grateful to OpenAI for providing access to their exceptional tool [1, 31-33]. We also extend heartfelt thanks to our Microsoft colleagues for their insights, with special acknowledgment to Faisal Ahmed, Ehsan Azarnasab, and Lin Liang for their constructive feedback.

In this supplementary material, we begin with showing additional qualitative results in Section A.1, supporting Idea2Img's effectiveness on different image generation models, including Dalle-3 [4, 30], SDXL [36], SDXL-img2img [28, 36], DeepFloyd IF [20], among others. In Section A.2, we show GPT-4V's outputs to probe how Idea2Img helps image creation during the iterative self-refinement, and the possibility of replacing GPT-4V with other LMMs. Section B introduces the remaining implementation details.

A Qualitative Results

A.1 Qualitative Comparisons

Figures A-D show additional qualitative results of the comparison in Table 1. Figure A presents examples of Idea2Img exploring the use of SDXL, a representative T2I model. Figure B examines SDXL-img2img, a simple text-conditioned image-to-image model that adds noise to the input image and then performs text-conditioned denoising [28]. Figures C, D contain the results of Idea2Img working with Dalle-3 and other image generation models.

SDXL. Idea2Img could help users generate images that better follow IDEA, such as the one with correct object counts and rendered scene texts in Figures A(a,b). Idea2Img enables the visual content design that can create images from a text instruction of its desired usage, in contrast to the detailed image description required in the conventional T2I generation. For example in Figure A(c), Idea2Img designs a logo based on the user IDEA of "having a logo for a 2024 conference in Seattle." Idea2Img can also understand user IDEA to search for images with high aesthetic scores and great visual details, or its opposite direction with "minimal face details" in (d). The LMM allows Idea2Img to extract arbitrary information from the input image for visual generation. This could be any specific object in the image, such as "the dog on the left" or "the dog pointed to via a red circle" in (e). Figure A(f) shows an example of extracting the painting style, which requires art knowledge for humans to describe accurately. The image input can even be an in-context example that defines the desired image transformation, such as the visual style transfer shown in (g). The ability to extract arbitrary information from the input image can be seamlessly extended to compose multiple visual and text conditions, such as composing the camera angle and image style in (h) and the two entities in (i).

SDXL-img2img. Idea2Img is also effective in finding T2I prompts for the text-conditioned image-to-image model SDXL-img2img, as shown in Figure B. Figures B(c) and (d) illustrate generating images that follow and differ from the
Idea2Img is also effective in finding T2I prompts for the text- conditioned image-to-image model SDXL-img2img, as shown in Figure B. Fig- ures B(c) and (d) illustrate generating images that follow and differ from the 16 Z. Yang et al. Fig. A: The comparisons among the initial-round manual prompts, initial-round Idea2Img prompts, and the iterative self-refined Idea2Img prompts, with the SDXL v1.0 [36] used as the T2I model. (i)(h)(g)(f)(e)(d)(c)(b)A Cavalier King Charles Spaniel running on a tiled floor, happy mood, bright lightingphoto of a dog looks like the circled one in the image running on the floorphoto of a dog looks like the circled one in the image running on the floor that is a black and white dogA small Cavalier King Charles Spaniel with black and white fur, running on a tiled floor, tongue out, happy mood, bright lightingA group of 5 friends sitting around a wooden table, drinking beer and eating buffalo wings in a casual setting.5 people sitting around a table drinking beer and eating buffalo wings5 people sitting around a table drinking beer and eating buffalo wingsA group of 5 friends sitting around a wooden table, with one person at the head of the table and two people on each side, drinking beer and eating buffalo wings in a casual setting, with a window in the background and a warm, inviting atmosphere.A painting of Mt Rainier mountain with a tree and lake in the foreground, in a colorful and abstract style similar to a Picasso painting.Painting of Mt rainier mountain with tree and lake in the foreground, with style similar to this one in the imagePainting of Mt rainier mountain with tree and lake in the foreground, with style similar to this one in the image that has colorful blocksA painting of Mt Rainier mountain with a tree and lake in the foreground, with bold lines and bright colors, with a cubist and surrealistic style, with a focus on geometric shapes and patterns.A portrait of Bill Gates with minimalistic style and vague facial features, in a monochrome color scheme.A vague portrait of Bill Gates with minimal visual and face detailsA vague portrait of Bill Gates with minimal visual and face detailsA portrait of Bill Gates with an extremely abstract and vague style, with almost no emphasis on the facial features, in a monochrome color scheme, with a completely abstract background.Input user IDEAInitial-round manual promptInitial-round Idea2Img promptIterative self-refined Idea2Img promptA painting of a tennis game from camera angle, with the image style similar to this one in the imageA painting of a tennis game from a top-down camera angle, with the image style similar to Claude Monet's impressionist paintings, with a bright and sunny atmosphere.A painting of a tennis game from camera angle, with the image style similar to this one in the image that is impressionistA painting of a tennis game from a top-down camera angle, with the image style similar to Claude Monet's impressionist paintings, with a bright and sunny atmosphere, with a blue sky and green trees in the background, with the players wearing white clothes, with a large crowd in the stands.A logo design for a 2024 conference held in Seattle, featuring the Seattle skyline and the conference title in a modern font, with a blue and green color scheme.a logo for a 2024 conference held in Seattle. 
Please provide a textual description of the design before generatinga logo for a 2024 conference held in Seattle.A logo design for a 2024 conference held in Seattle, with a modern and minimalist design, featuring the Seattle skyline and the conference title in a geometric sans serif font, with a blue and green color scheme.A scene of a man in a suit waving his hand, with the same style as the geometric dog image on the left.Find the the image style pattern in the left two dog images and apply it on the top right people waving hand image.A person in a business suit waving with his right hand, depicted in a polygonal art style reminiscent of the dog image, with a mosaic of colorful geometric shapes, against a backdrop of a blue sky.with a dog looks like this one in the imageBill Gates in a formal suit on a bustling city street, raising his hand in a friendly wave like a charming businessman, with a cute pug dog with a wrinkled face and large eyes by his side, under natural daylight, with a sense of approachability.Photo of Bill Gates with the same hand gesture as in the given image that is waving hand, with a dog looks like this one in the image that is a pug dogBill Gates wearing a suit and tie, standing on a busy street with tall buildings, waving with his right hand raised and palm facing forward, with a small pug dog with an adorably wrinkled face and big round eyes next to him, in daylight, exuding friendliness and warmth.A whole cake on a wooden table with the words Azure Research written on it in blue icing, with a white tablecloth and a vase of flowers in the background.a whole cake on the table with words Azure Research written on the cakea whole cake on the table with words Azure Research written on the cakeA top-down perspective of a cake on a table, with the words "Azure Research" meticulously written in blue icing on the top, surrounded by a modern kitchen with sunlight filtering through.Bill Gates in a suit, standing in front of a building, with a pug dog sitting on the ground next to his legs, waving his hand in a friendly mannerBill Gates in a suit, standing in front of a building, with a pug dog sitting next to him, waving his hand in a friendly mannerA scene of a man in a suit with his right hand raised in a friendly wave with his palm facing forward, with the same style as the geometric dog image on the left, with a blue sky background, with a slight smile on his face, with a modern and abstract style.(a)(a)(a)(b)(c)(d)(e)(f)(g)Find the the image style pattern in the left two dog images and apply it on the top right people waving hand image. Provide a textual description that keeps the content in the people waving hand image, with the correct style pattern.(h)(i)Photo of Bill Gates with the same hand gesture as in the given image Idea2Img: Self-Refinement with LMMs for Automatic Visual Creation 17 initial-round Fig. B: The comparisons among the initial-round manual prompts, Idea2Img prompts, and the iterative self-refined Idea2Img prompts, with the SDXL- img2img [28, 36] used as the image generation model. Instead of random noise, the image generation starts from the input image with added noise [28], showing the effec- tiveness of Idea2Img on text-conditioned image-to-image pipelines. (i)(h)(g)(f)(e)(d)(c)(b)(a)A surreal scene of a Coca-Cola bottle and a marble table merging into one, in an abstract space with distorted dimensions, bright colors, and dynamic lighting.photo with a design that composites the two given objects into the new photo. 
The placement of the objects does not adhere to the laws of physics. Please provide a textual description of the objects and design before generating. The first object is a marble table, and the second object is a cokeAn imaginative scene where a Coca-Cola bottle is floating and twisting above a marble table, defying gravity, with the bottle and table merging into each other in places, surrounded by a kaleidoscope of colors and ethereal lighting that gives a dreamlike atmosphere.Input user IDEAInitial-round manual promptInitial-round Idea2Img promptIterative self-refined Idea2Img promptCartoon drawing with a design that naturally blends the two given images as a new minimalist drawing. Please provide a textual description of the design before generating. The first image is, and the second image isA cartoon drawing of a llama wearing a suit and waving, blending the minimalist style of the first image with the professional attire of the second image.Cartoon drawing with a design that naturally blends the two given images as a new minimalist drawing. Please provide a textual description of the design before generating. The first image is a llama logo, and the second image is a person in suitA whimsical cartoon of a llama with a human body dressed in a stylish suit, waving its hand, combining the charm of a cute llama drawing and the elegance of a man in a suit, in a minimalist style, with a muted color palette.photo with a design that composites the two given objects into the new photo. The placement of the objects does not adhere to the laws of physics. Please provide a textual description of the objects and design before generating. The first object is, and the second object isA close-up of a sushi roll in the making, with a pile of rice on a seaweed sheet and salmon and cucumber being added, next step is rolling and cutting the sushi.image that depicts what will happen next based on the story in the two given frames. Please provide a textual description of the image before generating. The first frame in the story is rice on sushi roll, and the second frame in the story is ingredients on riceA close-up of a sushi roll in the making, with a pile of rice on a seaweed sheet and salmon and cucumber being added, next step is rolling and cutting the sushi, with a bamboo mat and knife nearby, in a kitchen setting.image that depicts what will happen next based on the story in the two given frames. Please provide a textual description of the image before generating. 
The first frame in the story is, and the second frame in the story isA gray cat and an orange cat running on the beach, with the gray cat on the right and the orange cat on the left.Two cats running on the beach, the right one looks like a gray cat and the left one looks like a orange catA gray cat with its head down and eyes looking forward and an orange cat with its eyes closed, both running on a beach with the gray cat on the right and the orange cat on the left, with the ocean in the background.Two cats running on the beach, the right one looks likeand the left one looks likeportrait of Bill Gates with style different from this one in the imageA portrait of Bill Gates in a cartoon style, with bright colors and a cheerful mood.portrait of Bill Gates with style different from this one in the image that is impressionistA portrait of Bill Gates in a cartoon style, with bright colors and a cheerful mood, with a light-colored background to make the portrait stand out.a watercolor painting of the same house with the same number of windows and the frontal view as the one in the sketchA watercolor rendition of a house with a frontal view and six windows, with a focus on the architectural detailsa watercolor painting of the same house with the same number of windows and the frontal view as the one in the sketch that is a two-level house with four large windows and three small windows in the centerA watercolor illustration of a house with a porch and six windows, with an emphasis on the symmetry and style of the house in the sketch.A watercolor cartoon logo based on the design in the given imageA logo design of a llama with a red scarf and beanie, in a watercolor cartoon style, with a soft and pastel color palette.A watercolor cartoon logo based on the design in the given image that is a llamaA logo design of a llama standing on all fours, with a red scarf and beanie, in a watercolor cartoon style, with a soft and pastel color palette, similar to the design in the given image.photo of a dog looks like the one in the given image running on the beachA small dog with a fluffy coat running on the beach, with the ocean in the background.photo of a dog looks like the one in the given image running on the beach that is a fluffy golden dogA playful Yorkshire Terrier with a collar running on the beach, with the sun setting over the ocean in the background.a dog looks like this one in the image running on the grass, but in blue color, with the image style similar to this one in the imageA pug dog with a blue coat running on a field of green grass, with a style reminiscent of cubism.a dog looks like this one in the image running on the grass, but in blue color . 
The dog is pug dog, with the image style similar to this one in the image that contains blocksA pug dog with a blue coat running on a field of green grass, with its legs stretched out and its tail wagging as it runs, in a style reminiscent of cubism.A cartoon drawing of a llama wearing a gray suit with a black tie, with his right hand raised in a friendly wave with his palm facing forward, blending the minimalist style of the first image with the professional attire of the second image.A painting of a tennis game from a top-down camera angle, with the image style similar to Claude Monet's impressionist paintings, with a bright and sunny atmosphere, with a blue sky and green trees in the background, with the players wearing white clothes, with a large crowd in the stands.photo with a design that naturally blends the two given objects into the new photo. The placement of the objects perfectly adhere to the laws of physics and is common in real life. The first object is a marble table, and the second object is a cokeA group of 5 friends sitting around a wooden table, with one person at the head of the table and two people on each side, drinking beer and eating buffalo wings in a casual setting, with a window in background and a warm, inviting atmosphere.photo with a design that naturally blends the two given objects into the new photo. The placement of the objects perfectly adhere to the laws of physics and is common in real life. Please provide a textual description of the objects and design before generating. The first object is, and the second object is 18 Z. Yang et al. Fig. C: The comparisons among the initial-round manual prompts, initial-round Idea2Img prompts, and the iterative self-refined Idea2Img prompts, with the Dalle- 3 [30] used as the T2I model. (i)(h)(f)(e)(d)(c)(b)On a peaceful beach, a silhouette of a person is captured in the midst of performing a tree pose yoga, with the backdrop of a setting sun and a large tree. The scene is reimagined with a geometric, polygonal art style, echoing the aesthetic of a stylized, faceted dog, where each shape captures and reflects the light differently, creating a mosaic of sunset hues.Find the the image style pattern in the left two dog images and apply it on the top right people in tree pose image. Provide a textual description that keeps the content in the people in tree pose image, with the correct style pattern.Find the the image style pattern in the left two dog images and apply it on the top right people in tree pose imageAmidst the gentle waves lapping at the shore and the sun bidding adieu, a person is encapsulated in the serene act of performing the tree pose yoga. A tree, grand and robust, shares the spotlight, its leaves whispering tales of nature’s splendor. The scene is reimagined, with every element, from the yogi to the distant horizon, adorned in larger, more distinct geometric shapes, echoing the style of the polygonal dog images. Each shape, bold and less complex, is a canvas capturing the sunset’s myriad hues, crafting a cohesive, visual narrative.A dynamic logo showcasing the Space Needle piercing through the number "2024", symbolizing innovation and progress, set against a starry night sky over Seattle, with the city lights shimmering, and the conference details illuminated by a moonlit glow.a logo for a 2024 conference held in Seattle. Please provide a textual description of the design before generatinga logo for a 2024 conference held in Seattle. 
Please provide a textual description of the design before generatingAn elegant logo for the 2024 conference, where the Space Needle forms part of the "2024", adorned with subtle imagery of a speaker's podium and audience, set against a simplified Seattle skyline, with a clean, monochromatic palette, precise detailing, and a soft lighting to accentuate the conference theme and professionalism.The image of a man waving is stylized with a low-poly design, where the scene is broken down into geometric shapes filled with bold and vibrant colors, creating a visually striking and contemporary aesthetic, under the bright sky.Find the the image style pattern in the left two dog images and apply it on the top right people waving hand image. Provide a textual description that keeps the content in the people waving hand image, with the correct style pattern.Find the the image style pattern in the left two dog images and apply it on the top right people waving hand imageA man in a formal suit is captured mid-motion, his hand raised in a wave, each finger distinctly visible. The entire scene is reimagined in a low-poly art style, where each facet of the man and his attire is defined by geometric shapes filled with a palette of bold, contrasting colors. The background remains a solid, untextured expanse, free of additional elements like sun rays or clouds, ensuring the focus remains solely on the colorful, stylized figure of the man waving. The lighting is neutral, casting no shadows or highlights, preserving the simplicity of the scene.A minimalist black and white logo of a llama, inspired by the clean lines and simplicity of a stethoscope that forms a paw print, flat design, monochrome.A logo of llama with the exact same style as in the given imageA logo of llama with the exact same style as in the given image that has a minimalist designA black and white logo where a stethoscope seamlessly forms a llama, the tubing outlines the body and legs, and the earpieces and diaphragm shape the head, akin to the minimalist style of a paw print, no facial features, embodying pure simplicity, elegance, and clean lines.Input user IDEAInitial-round manual promptInitial-round Idea2Img promptIterative self-refined Idea2Img promptA hand drawing of a person with a clear background. Hand drawing shows the sketch of the scene and looks like the given image. The person has a pose ofA sketch of a person, their hand raised in a greeting, is detailed with intricate lines and shading, showcasing the folds of their suit and their poised stance, set against a clear, unblemished background, in grayscale, the simplicity of the backdrop amplifying the complexity of the hand-drawn subject, with a focus on realism infused with artistic flair.A hand drawing of a person with a clear background. Hand drawing shows the sketch of the scene and looks like the given image . The person has a pose of waving handsIn monochrome, a person is depicted with their hand raised, palm out, in a waving gesture, encapsulated by the spontaneous and rough lines of a hand drawing. The background is immaculate and void of detail, spotlighting the figure. 
The artwork exudes the energy of a quick sketch, where the focus is on capturing the gesture and posture with immediacy, echoing the unrestrained strokes found in an artist’s sketchbook.A portrait of a corgi dog, not utilizing the thick, dynamic brushstrokes and intense color contrasts seen in a scene of a golden wheat field beneath a dramatic, dark sky, soft lighting, hyper-realistic style, serene setting.painting of a corgi dog with style different from this one in the imagepainting of a corgi dog with style different from this one in the image that is oil paintingA corgi dog depicted in a pop art style, featuring bold, flat colors and comic book-inspired aesthetics, no wheat field or dramatic sky, set against a backdrop of a bustling cityscape with skyscrapers, neon lights, and street art, illuminated by the glow of streetlights, emphasizing an urban, contemporary atmosphere.Two dogs, one with the appearance of a Cavalier King Charles Spaniel with black and white fur and the other resembling a brown and white Yorkshire Terrier, are energetically running side by side on a sandy beach, their tails wagging in excitement. The backdrop features a serene ocean view, clear skies, gentle waves, bright sunlight casting soft shadows, and distant seagulls soaring in the sky, adding a lively, playful atmosphere to the scene.Two dogs running on the beach, the left one looks like a fluffy golden dog and the right one looks like the left dog in the image that is black and whiteTwo dogs, a Yorkshire Terrier with detailed brown and white fur that glistens in the sunlight, on the left of a black and white Cavalier King Charles Spaniel, are immersed in a game of chase along a scenic beach. The ocean waves, with their rhythmic motion, complement the dogs’ lively energy. The sky, a masterpiece of soft, blended colors, serves as a canvas for the seagulls that glide gracefully, casting intricate shadows that dance on the sandy surface, amplifying the scene’s vibrancy., with the image style similar to this one in the imageAn artwork showcasing a tennis game viewed from a high angle, players are frozen in intense play, surrounded by a packed audience, the painting style is akin to impressionism with blurred lines and a play of light and shadow, evoking a sense of movement and energy, in the midst of a clear day.A painting of a tennis game from camera angle, with the image style similar to this one in the image that is impressionistA vivid impressionist painting captures a dynamic tennis match from an elevated perspective, where athletes are engaged in a fierce competition on a distinct blue court. The stadium is brimming with an enthusiastic crowd, their faces a blend of colors reflecting the excitement of the moment. The artwork is characterized by soft, blurred lines and a harmonious play of light and shadow, reminiscent of a clear, sunny day, enhancing the visual appeal and bringing the scene to life.A sleek, modern car parked against a vibrant cityscape, illuminated by the golden hour sunlight, glossy paint reflecting the surrounding lights, clean and polished look.an image of a car that can be used for commercial advertisingan image of a car that can be used for commercial advertisingThe image captures a state-of-the-art car positioned elegantly against a modern city backdrop. The lighting is balanced and clear, ensuring the car's sleek design and features are prominently displayed. 
The surrounding environment, though vibrant, doesn’t overshadow the car, making it the undeniable centerpiece, ideal for commercial advertising.(a)A painting of a tennis game from camera angleThe lively scene captures a golden and grey Yorkshire Terrier mid-leap, its joyful expression accentuated by the open mouth and bright eyes. The backdrop is a busy street, pedestrians in mid-stride, and colorful storefronts offering a visual feast. The overcast lighting lends a soft glow, illuminating the dog's fur and creating a dynamic interplay of light and shadow on the street.A portrait of a corgi dog, rendered in a style that mirrors the precision and clarity of a high-resolution photograph, showcasing intricate details from the texture of its fur to the reflections in its eyes, set in a calm outdoor setting, vibrant yet natural colors, under the gentle glow of the afternoon sun.Five friends, each clutching a distinct beer bottle, are immersed in a spirited discussion around a polished wooden table. A generous serving of buffalo wings, glistening with sauce, commands the center. The room, adorned with vintage decor, basks in the golden hue of hanging lights, casting intricate shadows and highlighting the bonds of friendship.A simplistic car painted in bold, unshaded primary colors is parked on a uniform green landscape, no gradients, under a plain blue sky with basic cloud shapes, next to a playground with elementary structures painted in solid colors, ensuring an uncomplicated, child-friendly painting experience.(g)Two dogs running on the beach, the left one looks likeand the right one looks like the left dog in the image Idea2Img: Self-Refinement with LMMs for Automatic Visual Creation 19 Fig. D: The comparisons among the initial-round manual prompts, initial-round Idea2Img prompts, and the iterative self-refined Idea2Img prompts, with other image generation models, including SD v1.5, SD v2.1 [40], DeepFloyd IF, and IF-img2img [20]. reference image style of “watercolor and impressionist,” respectively. Idea2Img can design visual contents with the inspiration of input images, e.g., a cartoon drawing that blends the llama and the person in suits in (f), and composing the 6 cakes placed in a box(a)A marble table with a Coca-Cola bottle floating above it, defying gravity, in a modern living room, with soft ambient lighting, minimalistic decor, and a hint of surrealism.photo with a design that composites the two given objects into the new photo. The placement of the objects does not adhere to the laws of physics. Please provide a textual description of the objects and design before generating. 
The first object is a marble table, and the second object is a cokeIn a contemporary living room, a Coca-Cola bottle is suspended in mid-air above a marble table, with a mystical aura, warm lighting, and sleek decor, evoking a sense of magic and defying gravity.SD v1.5SD v2.1DeepFloyd IFDeepFloyd IF img2imgA happy dog with pink fur running on a street with buildings in the background, daytime, sunny.photo of the circled dog but with background changed to a streetphoto of the circled dog but with background changed to a street that is a pink fluffy dogA Yorkshire Terrier with pink fur running on a bustling street with skyscrapers in the background, daytime, bright, traffic.A cardboard box with 6 homemade cakes, each with a unique design and flavor, ready to be delivered.6 cakes placed in a boxA brown cardboard box with 6 homemade cakes, each with a different flavor and design, arranged in a symmetrical pattern.A casual portrait of Elon Musk and Mark Zuckerberg, both wearing grey sweaters and black shirts, with a white background.generating an image with the same cloth as in the given image, but change the left persons to Elon Musk and the right person to Mark Zuckerberggenerating an image with the same cloth as in the given image, but change the left persons to Elon Musk and the right person to Mark ZuckerbergElon Musk and Mark Zuckerberg standing in a relaxed pose, with their shoulders touching, in a grey background, with Elon Musk wearing a grey sweater and Mark Zuckerberg wearing a black shirt.A kitchen scene with an empty plate and a glass without orange juice on a wooden table, with a white background.a plate that has no bananas on it. there is a glass without orange juice next to ita plate that has no bananas on it. there is a glass without orange juice next to itA close-up view of a wooden table with an empty glass and a white plate without bananas, with a white background.A contemporary logo for a 2023 conference in Paris, showcasing a stylized Eiffel Tower, elegant fonts, and a color palette inspired by the French flag, with a touch of sophistication.a logo for a 2023 conference held in Paris. Please provide a textual description of the design before generatinga logo for a 2023 conference held in Paris.A sleek and modern logo for a 2023 conference in Paris, incorporating a stylized Eiffel Tower, the year '2023' in a classy font, and a tricolor background inspired by the French flag. 
The design includes the word 'Paris' at the bottom in a refined font, symbolizing sophistication and culture.A logo of llama with the exact same style as in the given imageA monochromatic logo that combines a llama's face with a stethoscope, similar to the given image, where the stethoscope forms the face and the footprints are inside, in a sleek and minimalist style, with bold black lines.A logo of llama with the exact same style as in the given image that has a minimalist designA logo in which a stethoscope is artistically shaped with its tubing outlining a llama's face and its ear tips as the llama's ears, with paw prints inside the face, akin to the given image, in a simple, modern design with bold black contours.A playful dog with a unique blue coat and a bright yellow collar, enjoying a run on the beach with the waves crashing behind it.photo of a dog looks like the one in the given image running on the beach, but change the dog color to blue and the collar color to yellowphoto of a dog looks like the one in the given image running on the beach, but change the dog color to blue and the collar color to yellow that is a pug dogA pug-like dog with a unique blue fur, adorned with a plain yellow collar on its neck, racing across the beach with sand flying under its paws, and a tranquil sea with gentle waves as the backdrop.photo with a design that composites the two given objects into the new photo. The placement of the objects does not adhere to the laws of physics. Please provide a textual description of the objects and design before generating. The first object is, and the second object isInput user IDEAInitial-round manual promptInitial-round Idea2Img promptIterative self-refined Idea2Img prompt(b)(c)(d)(e)(f)(g)(h)A hand holding an iPhone 12 to take a photo, with a blurred background and natural lighting.an image of a hand holding an iphone 12 to take a photoan image of a hand holding an iphone 12 to take a photoA hand holding an iPhone 12 with the camera app open and the shutter button visible, taking a photo with a blurred background and natural lighting, with a focus on the hand and phone.A gray cat and an orange cat running on the beach, with the gray cat on the right and the orange cat on the left.photo with a design that composites the two given objects into the new photo. The placement of the objects does not adhere to the laws of physics. Please provide a textual description of the objects and design before generating. The first object is a marble table, and the second object is a cokeA gray cat with its head down and eyes looking forward and an orange cat with its eyes closed, both running on a beach with the gray cat on the right and the orange cat on the left, with the ocean in the background. 20 Z. Yang et al. coke with the table in an imaginative way in (g). (h) illustrates a novel scenario of generating an image to represent the anticipated action of rolling sushi. Dalle-3 and other generation models. Idea2Img demonstrates its effective- ness across different image generation models. Figure C shows the results gen- erated by Idea2Img with Dalle-3. We access Dalle-3 via Bing Image Creator3, which excludes the ChatGPT prompt rewrite. Idea2Img could better release Dalle-3’s strong prompt-following capability and show impressive results, espe- cially for challenging queries. This includes polishing the logo design in Fig- ure C(a), drafting car advertisements in (b), creating unique image styles in (c), and enhancing the design with reference images in (d). 
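For readers who want to script comparable Dalle-3 experiments rather than use the web interface mentioned above, one programmatic path is the OpenAI Images API. This is an assumption on our part, not the access route used in the paper; the model, size, and n arguments are illustrative, and an OPENAI_API_KEY is expected in the environment.

from openai import OpenAI

client = OpenAI()

def generate_dalle3(prompt: str) -> str:
    """Request a single Dalle-3 image for a T2I prompt and return its URL."""
    response = client.images.generate(
        model="dall-e-3",
        prompt=prompt,
        n=1,
        size="1024x1024",
    )
    return response.data[0].url

print(generate_dalle3("A minimalist logo for a 2024 conference held in Seattle"))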
When confronted with more challenging tasks, Idea2Img with Dalle-3 excels. For the visual in-context generation problem in (e) and (f), Idea2Img finds the pattern in the input grid image and explores T2I prompts for the desired image design. The framework also proves effective when handling multiple reference images, such as the two dogs in (g), the hand drawing of a person's pose in (h), and the tennis game with a queried style in (i). Furthermore, Figure D shows the Idea2Img results on other T2I models, including SD v1.5, v2.1, DeepFloyd IF, and IF-img2img. Despite the variance in the base T2I models' capacity, Idea2Img consistently helps design and generate better images.

A.2 LMM Feedback, Revision, and Selection

One may wonder how GPT-4V behaves and performs in each role throughout Idea2Img's iterative self-refinement pipeline, i.e., "feedback reflection," "revised prompt generation," and "draft image selection." We show the corresponding qualitative results as follows.

Feedback reflection. Figure E shows text feedback generated by GPT-4V for the user IDEA, draft image, and T2I prompt. Idea2Img can effectively check if the generated image is correct, such as the number of oranges in (a) and the misspelled scene text "ALURE RESEACHE" in (b). In addition to the text descriptions in IDEA, Idea2Img can verify if the draft image corresponds to the visual descriptions in IDEA. This includes the color and breed of the dog in (e), the exact art style in (f), and the same cloth and pose in (g). Furthermore, Idea2Img can understand and verify the interleaved image-text pairs in IDEA, as shown in Figures E(h,i). In addition to identifying the discrepancy, Idea2Img can also point to the plausible directions for improving the T2I prompt in the text feedback. For example, in Figure E(c), GPT-4V mentions that "the person is not in yoga mountain pose, but the T2I prompt has already mentioned mountain pose," "the AI model might not understand what mountain pose is, and prompt should be modified to specifically mention what mountain pose is." Similarly, in Figure E(d), Idea2Img provides guidance to have generated images better follow the user intention of "an image for a children's painting competition," by "specifically mentioning that the car should be simple and cartoon-like."

Fig. E: Examples of the generated text feedback. The left column shows the multimodal input user IDEA, and the center column shows the draft image to process as well as its corresponding text prompts. The right column shows the text feedback generated by GPT-4V. The dark blue color highlights the identified discrepancies.

Revised prompt generation. Figure F compares the T2I prompts before and after the revision, visualizing how text feedback helps the revision. For example, (a) the revised T2I prompt includes a detailed description of the "yoga dolphin pose" to generate the correct body pose; (b) the revised T2I prompt mentions "an empty plate with no other objects" to avoid the T2I model misunderstanding the prompt "no bananas;" (c) the T2I model generates the correct hand gesture with Idea2Img providing a text description of how to take a screenshot. Idea2Img also effectively addresses the identified errors in text feedback and improves the prompts for multimodal input IDEA, including the dog color in Figure F(d), the llama design in Figure F(e), the study area in Figure F(f), the human gesture in Figure F(g), the dog breed and human clothing in Figure F(h), and the color of the two cats in Figure F(i).

Fig. F: Examples of the revised prompts. The four columns, from left to right, show the input user IDEA, the draft image to be revised, generated text feedback, and the revised T2I prompt and image. The dark blue color highlights the identified discrepancies in text feedback, and how they are addressed in the revised T2I prompt. We note that the example only shows a single round of self-refinement; the revised T2I prompt may therefore have remaining issues to be further addressed.

Draft image selection. T2I models may generate low-quality images even with good T2I prompts. To ensure consistent improvements in each iteration, it is critical to reduce such generation noise by selecting from multiple draft images in each round. Performing such selection requires GPT-4V to compare multiple similar draft images and pick the one with the best overall quality. Figure G shows the selection results generated by GPT-4V. The LMM prompt is designed such that GPT-4V gives justifications and scores for each draft image, in addition to the final selection index.
Such intermediate thoughts not only help humans interpret the selection process, but also serve as the chain of thought to improve the selection performance. We observe that GPT-4V can compare different aspects mentioned in the IDEA and give reasonable scores and a selection index. For example, checking the scene text spelling in Figure G(a); verifying the phone screen and model in Figure G(b); counting the number of apples and bananas in Figure G(c); verifying the ball color and dog action in Figure G(d); finding the image with both sleep and study areas in Figure G(e); selecting the image that best fits the given image style in Figure G(f); verifying the image content and style in Figure G(g); locating the best-blended image in Figure G(h); and finding the image with the correct dog color and image style in Figure G(i).

Fig. G: Examples of the draft image selection. The right column shows justification ratings and the final selection. The dark blue color highlights the identified problems and justifications for the draft image selection. The selected draft image in each round is visualized with the blue box shown in the middle column.

LMMs alternative to GPT-4V. After observing the effectiveness of Idea2Img with GPT-4V, a natural question is whether we can replace GPT-4V with more accessible and lightweight alternatives. Figure H examines LLaVA-1.5-13B [24, 25], a leading open-source LMM, using the same test cases as those in the main paper's Figure 6. Despite the promising results, LMMs alternative to GPT-4V may not be ready yet for the Idea2Img-like iterative self-refinement framework, with two major bottlenecks. First, most current LMMs lack the ability to process complex interleaved image-text sequences, therefore limiting Idea2Img in understanding multimodal IDEA, processing memory, and referencing in-context examples. This limitation also prevents us from conducting image selection experiments in Figure H, as we did in Figure 6(c) with GPT-4V. Second, the weaker multimodal reasoning capability [64] will significantly increase the noise in Idea2Img's iteration and make the framework ineffective. For example, in Figure H(a.2), LLaVA fails to capture the correct direction to improve the image, and in (b.1), it repeats the same T2I prompt without effective revision.

Fig. H: LLaVA-1.5-13B's [24] outputs in Idea2Img for image feedback and revision. (a) Feedback reflection: the right column shows examples of the generated text feedback. (b) Revised prompt generation: the right column shows examples of the revised prompts.

B Idea2Img Code, Data, and Gallery

We will release the Idea2Img code, evaluation queries, and generated samples. We show the used LMM prompts pgen, pselect, pfb, and previse as follows. The colored texts indicate the corresponding multimodal contents, such as IDEA or the history memory. LMM prompts are kept the same for different image generation models and input IDEA.

Initial prompt generation pgen:

You are a helpful assistant.
Instruction: Given a user imagined IDEA of the scene, converting the IDEA into a self-contained sentence prompt that will be used to generate an image.
Here are some rules to write good prompts:
- Each prompt should consist of a description of the scene followed by modifiers divided by commas.
- The modifiers should alter the mood, style, lighting, and other aspects of the scene.
- Multiple modifiers can be used to provide more specific details.
- When generating prompts, reduce abstract psychological and emotional descriptions.
- When generating prompts, explain images and unusual entities in IDEA with detailed descriptions of the scene.
- Do not mention 'given image' in output, use detailed texts to describe the image in IDEA instead.
- Generate diverse prompts.
- Each prompt should have no more than 50 words.
IDEA: IDEA input. End of IDEA.
Based on the above information, you will write N detailed prompts exactly about the IDEA follow the rules. Each prompt is wrapped with <START> and <END>.
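To make the interplay of the four templates concrete, the following sketch shows one way the iterative loop could be wired together; the remaining three prompt templates are listed below. This is our own illustrative reconstruction under stated assumptions, not the released Idea2Img code: the lmm and t2i callables, the templates dictionary, and the helper names are placeholders.

import re
from typing import Any, Callable

def parse_wrapped(text: str) -> list[str]:
    """Extract every item wrapped with <START> ... <END> in an LMM output."""
    return re.findall(r"<START>(.*?)<END>", text, flags=re.DOTALL)

def idea2img_loop(
    idea: Any,
    lmm: Callable[..., str],    # GPT-4V-style call: (template, **slots) -> text
    t2i: Callable[[str], Any],  # any text-to-image backend: prompt -> image
    templates: dict[str, str],  # {"gen": ..., "select": ..., "fb": ..., "revise": ...}
    rounds: int = 3,
    n_drafts: int = 2,
) -> Any:
    memory: list[tuple[str, Any, str]] = []  # (prompt, image, feedback) history
    prompts = parse_wrapped(lmm(templates["gen"], idea=idea))
    best_image = None
    for t in range(rounds):
        # Sample several draft images per prompt to reduce generation noise.
        drafts = [(p, t2i(p)) for p in prompts for _ in range(n_drafts)]
        # Draft image selection: the LMM scores all drafts, returns one index.
        idx = int(parse_wrapped(lmm(templates["select"], idea=idea,
                                    images=[img for _, img in drafts]))[0])
        best_prompt, best_image = drafts[idx]
        # Feedback reflection: compare the selected draft against the IDEA.
        reason = parse_wrapped(lmm(templates["fb"], idea=idea, round=t,
                                   memory=memory, image=best_image))[0]
        memory.append((best_prompt, best_image, reason))
        # Revised prompt generation, conditioned on the exploration history.
        prompts = parse_wrapped(lmm(templates["revise"], idea=idea, round=t,
                                    memory=memory, prompt=best_prompt,
                                    image=best_image, reflection=reason))
    return best_image

Passing the accumulated memory into the revision call is one way to let the explored prompt history serve as in-context examples, in the spirit of the discussion at the beginning of this supplementary material.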
Draft image selection pselect:

You are a helpful assistant.
You are a judge to rank provided images. Below are N images generated by an AI art generation model, indexed from 0 to N-1. From scale 1 to 10, decide how similar each image is to the user imagined IDEA of the scene.
IDEA: IDEA input. End of IDEA.
List of draft images.
Let's think step by step. Check all aspects to see how well these images strictly follow the content in IDEA, including having correct object counts, attributes, entities, relationships, sizes, appearance, and all other descriptions in the IDEA. Then give a score for each input images. Finally, consider the scores and select the image with the best overall quality with image index 0 to N-1 wrapped with <START> and <END>. Only wrap single image index digits between <START> and <END>.

Feedback reflection pfb:

You are a helpful assistant.
You are iteratively refining the sentence prompt by analyzing the images produced by an AI art generation model, seeking to find out the differences between the user imagined IDEA of the scene and the actual output. If the generated image is not perfect, provide key REASON on ways to improve the image and sentence prompt to better follow the user imagined IDEA of the scene. Here are some rules to write good key REASON:
- Carefully compare the current image with the IDEA to strictly follow the details described in the IDEA, including object counts, attributes, entities, relationships, sizes, and appearance. Write down what is different in detail.
- Avoid hallucinating information or asks that is not mentioned in IDEA.
- Explain images and unusual entities in IDEA with detailed text descriptions of the scene.
- Explain how to modify prompts to address the given reflection reason.
- Focus on one thing to improve in each REASON.
- Avoid generating REASON identical with the REASON in previous rounds.
IDEA: IDEA input. End of IDEA.
This is the round t of the iteration. The iteration history are: Memory module history.
Based on the above information, you will write REASON that is wrapped with <START> and <END>.
REASON:

Revised prompt generation previse:

You are a helpful assistant.
Instruction: Given a user imagined IDEA of the scene, converting the IDEA into a sentence prompt that will be used to generate an image. Here are some rules to write good prompts:
- Each prompt should consist of a description of the scene followed by modifiers divided by commas.
- The modifiers should alter the mood, style, lighting, spatial details, and other aspects of the scene.
- Multiple modifiers can be used to provide more specific details.
- When generating prompts, reduce abstract psychological and emotional descriptions.
- When generating prompts, explain images and unusual entities in IDEA with detailed descriptions of the scene.
- Do not mention 'given image' in output, use detailed texts to describe the image in IDEA.
- Generate diverse prompts.
- Output prompt should have less than 50 words.
IDEA: IDEA input. End of IDEA.
You are iteratively improving the sentence prompt by looking at the images generated by an AI art generation model and find out what is different from the given IDEA.
This is the round t of the iteration. The iteration history are: Memory module history.
Generated sentence prompt for current round t is: prompt
Corresponding image generated by the AI art generation model: image
However, reflection
Based on the above information, to improve the image, you will write N detailed prompts exactly about the IDEA follow the rules. Make description of the scene more detailed and add modifiers to address the given key reasons to improve the image. Avoid generating prompts identical with the ones in previous rounds. Each prompt is wrapped with <START> and <END>.

References

1. Chatgpt can now see, hear, and speak. https://openai.com/blog/chatgpt-can-now-see-hear-and-speak (2023)
2. Avrahami, O., Aberman, K., Fried, O., Cohen-Or, D., Lischinski, D.: Break-a-scene: Extracting multiple concepts from a single image. arXiv preprint arXiv:2305.16311 (2023)
3. Avrahami, O., Lischinski, D., Fried, O.: Blended diffusion for text-driven editing of natural images. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 18208–18218 (2022)
4. Betker, J., Goh, G., Li, J., Brooks, T., Wang, J., Li, L., Ouyang, L., Zhuang, J., Lee, J., Guo, Y., Manassra, W., Dhariwal, P., Chu, C., Jiao, Y., Ramesh, A.: Improving image generation with better captions (2023)
5. Black, K., Janner, M., Du, Y., Kostrikov, I., Levine, S.: Training diffusion models with reinforcement learning. arXiv preprint arXiv:2305.13301 (2023)
6. Brooks, T., Holynski, A., Efros, A.A.: Instructpix2pix: Learning to follow image editing instructions. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 18392–18402 (2023)
7. Chefer, H., Alaluf, Y., Vinker, Y., Wolf, L., Cohen-Or, D.: Attend-and-excite: Attention-based semantic guidance for text-to-image diffusion models. arXiv preprint arXiv:2301.13826 (2023)
8. Chen, W., Hu, H., Li, Y., Rui, N., Jia, X., Chang, M.W., Cohen, W.W.: Subject-driven text-to-image generation via apprenticeship learning. arXiv preprint arXiv:2304.00186 (2023)
9. Chen, X., Lin, M., Schärli, N., Zhou, D.: Teaching large language models to self-debug. arXiv preprint arXiv:2304.05128 (2023)
10. Fan, Y., Watkins, O., Du, Y., Liu, H., Ryu, M., Boutilier, C., Abbeel, P., Ghavamzadeh, M., Lee, K., Lee, K.: Dpok: Reinforcement learning for fine-tuning text-to-image diffusion models. arXiv preprint arXiv:2305.16381 (2023)
11. Fan, Y., Watkins, O., Du, Y., Liu, H., Ryu, M., Boutilier, C., Abbeel, P., Ghavamzadeh, M., Lee, K., Lee, K.: Reinforcement learning for fine-tuning text-to-image diffusion models. Advances in Neural Information Processing Systems 36 (2024)
12. Feng, W., He, X., Fu, T.J., Jampani, V., Akula, A.R., Narayana, P., Basu, S., Wang, X.E., Wang, W.Y.: Training-free structured diffusion guidance for compositional text-to-image synthesis. In: The Eleventh International Conference on Learning Representations (2022)
13. Gatys, L.A., Ecker, A.S., Bethge, M.: A neural algorithm of artistic style. arXiv preprint arXiv:1508.06576 (2015)
14. Google: Bard. https://bard.google.com (2023), accessed: 2023-07-17
15. Guo, Y., Liang, Y., Wu, C., Wu, W., Zhao, D., Duan, N.: Learning to program with natural language. arXiv preprint arXiv:2304.10464 (2023)
16. Gupta, T., Kembhavi, A.: Visual programming: Compositional visual reasoning without training. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 14953–14962 (2023)
17. Hertz, A., Mokady, R., Tenenbaum, J., Aberman, K., Pritch, Y., Cohen-or, D.: Prompt-to-prompt image editing with cross-attention control. In: The Eleventh International Conference on Learning Representations (2022)
18. Kawar, B., Zada, S., Lang, O., Tov, O., Chang, H., Dekel, T., Mosseri, I., Irani, M.: Imagic: Text-based real image editing with diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 6007–6017 (2023)
19. Kumari, N., Zhang, B., Zhang, R., Shechtman, E., Zhu, J.Y.: Multi-concept customization of text-to-image diffusion. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 1931–1941 (2023)
20. Lab, D.: Deepfloyd if. https://github.com/deep-floyd/IF (2023)
21. Lee, K., Liu, H., Ryu, M., Watkins, O., Du, Y., Boutilier, C., Abbeel, P., Ghavamzadeh, M., Gu, S.S.: Aligning text-to-image models using human feedback. arXiv preprint arXiv:2302.12192 (2023)
22. Li, C., Gan, Z., Yang, Z., Yang, J., Li, L., Wang, L., Gao, J.: Multimodal foundation models: From specialists to general-purpose assistants. arXiv preprint arXiv:2309.10020 (2023)
23. Li, Y., Liu, H., Wu, Q., Mu, F., Yang, J., Gao, J., Li, C., Lee, Y.J.: Gligen: Open-set grounded text-to-image generation. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 22511–22521 (2023)
24. Liu, H., Li, C., Li, Y., Lee, Y.J.: Improved baselines with visual instruction tuning. arXiv preprint arXiv:2310.03744 (2023)
25. Liu, H., Li, C., Wu, Q., Lee, Y.J.: Visual instruction tuning. arXiv preprint arXiv:2304.08485 (2023)
26. Lu, P., Peng, B., Cheng, H., Galley, M., Chang, K.W., Wu, Y.N., Zhu, S.C., Gao, J.: Chameleon: Plug-and-play compositional reasoning with large language models. arXiv preprint arXiv:2304.09842 (2023)
27. Madaan, A., Tandon, N., Gupta, P., Hallinan, S., Gao, L., Wiegreffe, S., Alon, U., Dziri, N., Prabhumoye, S., Yang, Y., et al.: Self-refine: Iterative refinement with self-feedback. arXiv preprint arXiv:2303.17651 (2023)
28. Meng, C., He, Y., Song, Y., Song, J., Wu, J., Zhu, J.Y., Ermon, S.: Sdedit: Guided image synthesis and editing with stochastic differential equations. arXiv preprint arXiv:2108.01073 (2021)
29. Nasiriany, S., Xia, F., Yu, W., Xiao, T., Liang, J., Dasgupta, I., Xie, A., Driess, D., Wahid, A., Xu, Z., et al.: Pivot: Iterative visual prompting elicits actionable knowledge for vlms. arXiv preprint arXiv:2402.07872 (2024)
30. OpenAI: Dall·e 3 system card. https://cdn.openai.com/papers/DALL_E_3_System_Card.pdf (2023)
31. OpenAI: Gpt-4 technical report (2023)
32. OpenAI: Gpt-4v(ision) system card (2023), https://cdn.openai.com/papers/GPTV_System_Card.pdf
33. OpenAI: Gpt-4v(ision) technical work and authors. https://cdn.openai.com/contributions/gpt-4v.pdf (2023)
34. Pan, L., Saxon, M., Xu, W., Nathani, D., Wang, X., Wang, W.Y.: Automatically correcting large language models: Surveying the landscape of diverse self-correction strategies. arXiv preprint arXiv:2308.03188 (2023)
35. Paranjape, B., Lundberg, S., Singh, S., Hajishirzi, H., Zettlemoyer, L., Ribeiro, M.T.: Art: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:2303.09014 (2023)
36. Podell, D., English, Z., Lacey, K., Blattmann, A., Dockhorn, T., Müller, J., Penna, J., Rombach, R.: Sdxl: Improving latent diffusion models for high-resolution image synthesis. arXiv preprint arXiv:2307.01952 (2023)
37. Pryzant, R., Iter, D., Li, J., Lee, Y.T., Zhu, C., Zeng, M.: Automatic prompt optimization with "gradient descent" and beam search. arXiv preprint arXiv:2305.03495 (2023)
38. Qi, J., Ding, M., Wang, W., Bai, Y., Lv, Q., Hong, W., Xu, B., Hou, L., Li, J., Dong, Y., et al.: Cogcom: Train large vision-language models diving into details through chain of manipulations. arXiv preprint arXiv:2402.04236 (2024)
39. Ramesh, A., Dhariwal, P., Nichol, A., Chu, C., Chen, M.: Hierarchical text-conditional image generation with clip latents. arXiv preprint arXiv:2204.06125 (2022)
40. Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 10684–10695 (2022)
41. Ruiz, N., Li, Y., Jampani, V., Pritch, Y., Rubinstein, M., Aberman, K.: Dreambooth: Fine tuning text-to-image diffusion models for subject-driven generation. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 22500–22510 (2023)
42. Saharia, C., Chan, W., Saxena, S., Li, L., Whang, J., Denton, E., Ghasemipour, S.K.S., Ayan, B.K., Mahdavi, S.S., Lopes, R.G., et al.: Photorealistic text-to-image diffusion models with deep language understanding. arXiv preprint arXiv:2205.11487 (2022)
43. Schick, T., Dwivedi-Yu, J., Dessì, R., Raileanu, R., Lomeli, M., Zettlemoyer, L., Cancedda, N., Scialom, T.: Toolformer: Language models can teach themselves to use tools. arXiv preprint arXiv:2302.04761 (2023)
44. Shen, Y., Song, K., Tan, X., Li, D., Lu, W., Zhuang, Y.: Hugginggpt: Solving ai tasks with chatgpt and its friends in huggingface. arXiv preprint arXiv:2303.17580 (2023)
45. Shi, J., Xiong, W., Lin, Z., Jung, H.J.: Instantbooth: Personalized text-to-image generation without test-time finetuning. arXiv preprint arXiv:2304.03411 (2023)
46. Shinn, N., Cassano, F., Labash, B., Gopinath, A., Narasimhan, K., Yao, S.: Reflexion: Language agents with verbal reinforcement learning (2023)
47. Shridhar, M., Yuan, X., Côté, M.A., Bisk, Y., Trischler, A., Hausknecht, M.: Alfworld: Aligning text and embodied environments for interactive learning. arXiv preprint arXiv:2010.03768 (2020)
48. Singer, U., Polyak, A., Hayes, T., Yin, X., An, J., Zhang, S., Hu, Q., Yang, H., Ashual, O., Gafni, O., et al.: Make-a-video: Text-to-video generation without text-video data. arXiv preprint arXiv:2209.14792 (2022)
49. Surís, D., Menon, S., Vondrick, C.: Vipergpt: Visual inference via python execution for reasoning. arXiv preprint arXiv:2303.08128 (2023)
50. Wang, J., Yang, Z., Hu, X., Li, L., Lin, K., Gan, Z., Liu, Z., Liu, C., Wang, L.: Git: A generative image-to-text transformer for vision and language. arXiv preprint arXiv:2205.14100 (2022)
51. Wang, Z.J., Montoya, E., Munechika, D., Yang, H., Hoover, B., Chau, D.H.: Diffusiondb: A large-scale prompt gallery dataset for text-to-image generative models. arXiv preprint arXiv:2210.14896 (2022)
52. Wu, C., Yin, S., Qi, W., Wang, X., Tang, Z., Duan, N.: Visual chatgpt: Talking, drawing and editing with visual foundation models. arXiv preprint arXiv:2303.04671 (2023)
53. Wu, J., Wang, J., Yang, Z., Gan, Z., Liu, Z., Yuan, J., Wang, L.: Grit: A generative region-to-text transformer for object understanding. arXiv preprint arXiv:2212.00280 (2022)
54. Wu, P., Xie, S.: V*: Guided visual search as a core mechanism in multimodal llms. arXiv preprint arXiv:2312.14135 (2023)
Yan, A., Yang, Z., Zhu, W., Lin, K., Li, L., Wang, J., Yang, J., Zhong, Y., McAuley, J., Gao, J., et al.: Gpt-4v in wonderland: Large multimodal models for zero-shot smartphone gui navigation. arXiv preprint arXiv:2311.07562 (2023) 56. Yang, C., Wang, X., Lu, Y., Liu, H., Le, Q.V., Zhou, D., Chen, X.: Large language models as optimizers. arXiv preprint arXiv:2309.03409 (2023) 57. Yang, Z., Li, L., Lin, K., Wang, J., Lin, C.C., Liu, Z., Wang, L.: The dawn of lmms: Preliminary explorations with gpt-4v (ision). arXiv preprint arXiv:2309.17421 (2023) 58. Yang*, Z., Li*, L., Wang*, J., Lin*, K., Azarnasab*, E., Ahmed*, F., Liu, Z., Liu, C., Zeng, M., Wang, L.: Mm-react: Prompting chatgpt for multimodal reasoning and action. arXiv preprint arXiv:2303.11381 (2023) 59. Yang, Z., Wang, J., Gan, Z., Li, L., Lin, K., Wu, C., Duan, N., Liu, Z., Liu, C., Zeng, M., et al.: Reco: Region-controlled text-to-image generation. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 14246–14255 (2023) 60. Yang, Z., Qi, P., Zhang, S., Bengio, Y., Cohen, W.W., Salakhutdinov, R., Manning, C.D.: Hotpotqa: A dataset for diverse, explainable multi-hop question answering. arXiv preprint arXiv:1809.09600 (2018) 61. Yao, S., Zhao, J., Yu, D., Du, N., Shafran, I., Narasimhan, K., Cao, Y.: React: Syn- ergizing reasoning and acting in language models. arXiv preprint arXiv:2210.03629 (2022) 62. Yin, S., Wu, C., Yang, H., Wang, J., Wang, X., Ni, M., Yang, Z., Li, L., Liu, S., Yang, F., et al.: Nuwa-xl: Diffusion over diffusion for extremely long video generation. arXiv preprint arXiv:2303.12346 (2023) 63. Yu, J., Xu, Y., Koh, J.Y., Luong, T., Baid, G., Wang, Z., Vasudevan, V., Ku, A., Yang, Y., Ayan, B.K., et al.: Scaling autoregressive models for content-rich text-to-image generation. Transactions on Machine Learning Research (2022) 64. Yu, W., Yang, Z., Li, L., Wang, J., Lin, K., Liu, Z., Wang, X., Wang, L.: Mm- vet: Evaluating large multimodal models for integrated capabilities. arXiv preprint arXiv:2308.02490 (2023) 65. Zhang, L., Agrawala, M.: Adding conditional control to text-to-image diffusion models. arXiv preprint arXiv:2302.05543 (2023) 66. Zhao, A., Huang, D., Xu, Q., Lin, M., Liu, Y.J., Huang, G.: Expel: Llm agents are experiential learners. arXiv preprint arXiv:2308.10144 (2023) 67. Zhu, W., Wang, X., Lu, Y., Fu, T.J., Wang, X.E., Eckstein, M., Wang, W.Y.: Collaborative generative ai: Integrating gpt-k for efficient editing in text-to-image generation. arXiv preprint arXiv:2305.11317 (2023)
The Prompt Canvas: A Literature-Based Practitioner Guide for Creating Effective Prompts in Large Language Models

Michael Hewing and Vincent Leinhos
FH Münster – University of Applied Sciences

December 6, 2024

ABSTRACT

The rise of large language models (LLMs) has highlighted the importance of prompt engineering as a crucial technique for optimizing model outputs. While experimentation with various prompting methods, such as Few-shot, Chain-of-Thought, and role-based techniques, has yielded promising results, these advancements remain fragmented across academic papers, blog posts and anecdotal experimentation. The lack of a single, unified resource to consolidate the field's knowledge impedes the progress of both research and practical application. This paper argues for the creation of an overarching framework that synthesizes existing methodologies into a cohesive overview for practitioners. Using a design-based research approach, we present the Prompt Canvas (Figure 1), a structured framework resulting from an extensive literature review on prompt engineering that captures current knowledge and expertise. By combining the conceptual foundations and practical strategies identified in prompt engineering, the Prompt Canvas provides a practical approach for leveraging the potential of Large Language Models. It is primarily designed as a learning resource for pupils, students and employees, offering a structured introduction to prompt engineering. This work aims to contribute to the growing discourse on prompt engineering by establishing a unified methodology for researchers and providing guidance for practitioners.

Figure 1: The Prompt Canvas (v1.0, English). [One-page canvas with the fields Persona/Role, Audience, Task and Intent, Step-by-Step, Context, References, Output, Tonality, Recommended Techniques and Tooling, each with guiding questions and worked examples; available at www.thepromptcanvas.com; licensed CC BY 4.0.]
1 Introduction

With the advances of sophisticated Large Language Models (LLMs), the ability to guide these models to generate useful, contextually relevant, and coherent answers has become an essential skill. Prompt engineering refers to the art and science of designing inputs or queries (prompts) that effectively guide LLMs towards desired outputs. Schulhoff et al. (2024, p. 7) describe related prompt techniques as a "blueprint that outlines how to structure a prompt." This discipline bridges the gap between the user's goals and the model's capabilities, enabling more precise, creative, and domain-specific solutions.

Yet, much of the research and insights into prompt engineering are distributed across disparate sources, such as academic journals, preprints, blogs, and informal discussions on platforms like GitHub, Reddit, or YouTube. Navigating this complex landscape requires not only significant effort, but also a level of expertise that may be inaccessible to practitioners, creating a substantial barrier to entry and hindering the effective application of prompt engineering techniques in practice. With this paper, a canvas-oriented approach is proposed that consolidates current knowledge in the field of prompt engineering into a coherent, visual format. This way, practitioners can implement effective strategies more confidently and with clarity.

The second chapter of this paper describes the fragmented state of knowledge in prompt engineering, highlighting the challenges practitioners face in accessing and applying diverse techniques. In the third chapter, a comprehensive review of existing studies and approaches in prompt engineering is presented, showcasing key techniques and patterns in the field. Chapter Four introduces the Prompt Canvas as a structured framework to consolidate and visually represent prompt engineering techniques for better accessibility and practical application. The last chapter provides a conclusion along with constraints and areas for future research.

2 The Need for an Overview of Prompt Engineering Techniques

Prompts are vital for LLMs because they serve as the primary mechanism for translating user intentions into actionable outputs. By guiding the model's responses, prompts enable LLMs to perform a wide range of tasks, from creative writing to complex problem-solving, without requiring task-specific retraining. They leverage the pre-trained knowledge embedded in the model, allowing users to adapt LLMs to specific contexts and applications through in-context learning.

2.1 The Relevance of Prompting in Unsupervised Learning and Transformer Architecture

The relevance and impact of prompts become evident through the concepts of unsupervised learning and the basic transformer architecture.
Generative AI models, such as LLMs, can be assigned to natural language processing (NLP) in the field of artificial intelligence (Braun et al., 2024, p. 560). In an earlier paradigm of NLP (Liu et al., 2023, p. 4), models were typically trained for specific tasks using supervised learning with manually annotated data. However, this limited a model to its training domain, and manual annotation during training was time-consuming and expensive (Radford et al., 2018, p. 1). This challenge led to unsupervised learning gaining in importance. According to P. Liu et al., this represents the transition to the current NLP paradigm of Pre-train, Prompt, Predict. Large and diverse data sets are used for training. In this way, the model recognizes patterns and aligns parameters within a neural network. By entering a prompt, the model adapts to the corresponding task, which is known as in-context learning. This allows the model to be used for a variety of tasks (see Radford et al., 2018, p. 2; Brown et al., 2020, p. 3).

In addition to unsupervised learning, the transformer architecture, published in 2017 by Vaswani et al. under the title "Attention Is All You Need," laid an important foundation for today's LLMs. It enables context to be maintained across long texts (Radford et al., 2018, p. 2). In June 2018, OpenAI (2018) stated that their ". . . approach is a combination of two existing ideas: transformers and unsupervised pre-training." The abbreviation GPT, Generative Pre-Trained Transformer, reflects this approach.

The process from prompt input to output is described in Lo (2023) as follows (the process description has been shortened for clarity): First, the words of the prompt are broken down into tokens. Each token is represented by a vector that conveys its meaning; this representation is referred to as an embedding. Self-attention is used to capture the relationships between tokens in the prompt. Finally, based on the previous context and the patterns learned from the training data, the next token is predicted. Once a token has been selected, it is translated back into a human-readable form. This process is repeated until a termination criterion is reached.
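To make this loop concrete, the following Python sketch mimics the tokenize-embed-attend-predict cycle described above with a toy vocabulary. It is purely illustrative: the scoring function is a stand-in for a trained transformer's embedding, self-attention and prediction layers, and all names and values are invented for this example.

# Illustrative sketch of the prompt-to-output loop described above.
# score_next() stands in for embedding + self-attention + prediction;
# a real LLM scores subword tokens with a trained transformer instead.

VOCAB = ["prompts", "guide", "large", "language", "models", "effectively", "<eos>"]

def tokenize(text: str) -> list[str]:
    # Real systems use subword tokenizers; whitespace splitting suffices here.
    return text.lower().split()

def score_next(context: list[str]) -> dict[str, float]:
    # Toy heuristic: prefer vocabulary tokens that have not appeared yet.
    scores = {tok: 1.0 for tok in VOCAB if tok != "<eos>" and tok not in context}
    return scores or {"<eos>": 1.0}  # emit end-of-sequence when exhausted

def generate(prompt: str, max_new_tokens: int = 10) -> str:
    context = tokenize(prompt)
    for _ in range(max_new_tokens):      # repeat until termination criterion
        scores = score_next(context)
        next_token = max(scores, key=scores.get)
        if next_token == "<eos>":
            break
        context.append(next_token)       # selected token is fed back as context
    return " ".join(context)

print(generate("Prompts guide"))
# -> "prompts guide large language models effectively"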
These foundational concepts enable LLMs to process diverse and complex tasks without task-specific retraining, relying instead on adaptive responses generated through prompting. Prompt engineering bridges the gap between generalized pre-trained knowledge and specific user needs, functioning as the key mechanism through which the model's potential is harnessed. The transformer's self-attention mechanism ensures contextual integrity across sequences, while unsupervised learning enables the model to identify and generalize patterns from vast datasets. Together, these innovations allow LLMs to excel in the Pre-train, Prompt, Predict paradigm, making prompt engineering not only a critical aspect of model utility but also a determinant of task-specific success. As AI applications expand across domains, the ability to craft precise and effective prompts will remain central to realizing the full power of these transformative technologies.

2.2 The Need for a Practitioner-Oriented Overview on Prompt Engineering Techniques

Prompt engineering is a rapidly evolving field, with techniques such as Few-shot learning, Chain-of-Thought reasoning and iterative feedback loops being developed and refined to solve complex problems. The pace of innovation is driven by a wide range of applications in industries such as healthcare, education and software development, where tailored prompts can significantly improve model performance. A large body of research is investigating the effectiveness of different prompting techniques. However, the current state of knowledge in this area is highly fragmented, posing significant challenges to researchers and practitioners alike. Fragmentation of knowledge refers to the disjointed and inconsistent distribution of information across various sources, often lacking coherence or standardized frameworks. One of the primary challenges of this fragmented knowledge is the absence of a unified framework that consolidates the diverse techniques, methodologies and findings in prompt engineering. Practitioners new to the field face steep learning curves, as they must navigate a scattered and complex body of literature.

Yet, as will be highlighted in the literature review of chapter Three, initial efforts to systematically consolidate these techniques, develop taxonomies and establish a shared vocabulary are emerging. These publications structure current knowledge into schemes and patterns. While they provide in-depth analyses and valuable structures, they often lack accessibility for practitioners seeking practical solutions and actionable insights. This gap between research advancements and practical application highlights a pressing need for a bridge between academic research and real-world use. Addressing these challenges will ensure that the benefits of prompt engineering are more widely realized, enabling its application to expand further across industries and domains.

2.3 Canvas for Visualization

The field of prompt engineering involves a dynamic and multifaceted interplay of strategies, methodologies, and considerations, making it challenging to present in a way that is both comprehensive and accessible. The canvas model promotes visual thinking and has been widely adopted in fields such as business strategy (Osterwalder and Pigneur 2010, Pichler 2016), teamwork (Ivanov and Voloshchuk, 2015), startups (Maurya, 2012), research (The Fountain Institute, 2020) and design thinking (IBM, 2016), where it has proven to be an effective way to organize and communicate complex processes. A canvas simplifies complexity by visually organizing the relevant aspects into defined sections, allowing users to see the relationships and workflows at a glance. It promotes a holistic view of the process in one unified space. Also, the collaborative nature of a canvas facilitates communication and alignment among team members with varying levels of expertise. By applying this proven framework to prompt engineering, the transition to a visual representation becomes more intuitive: practitioners can quickly grasp the key elements and workflow, reducing barriers to entry and enabling more effective application of prompt techniques and patterns.

3 Identifying Common Techniques Through a Systematic Literature Review

In order to obtain a comprehensive overview of the current state of techniques in the field of prompt engineering, a systematic literature review (SLR) has been carried out. Such a systematic approach provides transparency in the selection of databases and search terms, as well as in the inclusion and exclusion criteria. After the literature search and selection, the included literature is analyzed and consolidated.
3.1 Literature Search and Selection

The literature search process primarily adheres to the framework outlined by vom Brocke et al. (2009, pp. 8–11). For the subsequent selection of sources, the methodology is based on the Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) guidelines (cf. Page et al., 2021). Vom Brocke et al. (2009) outline the systematic literature review (SLR) process in five distinct phases. The process begins with defining the scope of the literature search (Phase 1) and creating a preliminary concept map (Phase 2) to guide the review. This is followed by the execution of the actual literature search (Phase 3). The later stages involve the analysis and synthesis of the included literature (Phase 4) and a discussion of the findings along with their limitations (Phase 5). We integrated the last phase into the section on limitations at the end of this paper. Vom Brocke et al. (2009) emphasize the first three phases in their work. The literature research addresses the following research question: What is the current state of techniques and methods in the field of prompt engineering, especially in text-to-text modalities?

To establish the framework for the literature search in phase one, vom Brocke et al. (2009) draw on Cooper's taxonomy (1988, pp. 107–112). Cooper identifies six key characteristics for classifying literature searches: focus, goal, perspective, coverage, organization and audience. These characteristics provide a structured approach to defining the purpose and scope of a literature review. Table 1 offers a detailed overview of how these classifications align with the specific intentions of this SLR, ensuring a systematic and targeted review process.

Table 1: Characteristics according to Cooper (1988, pp. 107–112) applied to this SLR.
Characteristic | Category
Focus | Research outcomes, practices or applications
Goal | Integration or synthesis
Perspective | Neutral representation
Coverage | Exhaustive coverage with selective citation
Organization | Conceptual (thematically organized)
Audience | Users of LLMs (private and business use)

The second phase involves elaboration using concept mapping. For this purpose, terms are selected that are expected to lead to relevant and comprehensive results in the subsequent database search. To keep the literature review as inclusive as possible, only terms directly related to prompt engineering were included: prompt engineering, prompt techniques, prompt designs, prompt patterns, prompt strategies, prompt methods. Further related concepts such as LLMs or generative AI have not been considered because they might broaden the scope too much.
According to vom Brocke et al. (2009), the third phase is divided into several steps. The first step is to identify and select qualitative sources for the literature review. The "VHB Publication Media Rating 2024" for the section Information Systems is an established reference for the quality and impact of sources (Verband der Hochschullehrerinnen und Hochschullehrer für Betriebswirtschaft e.V., 2024). Journals with a VHB rating of B or higher and a potential focus on AI were preselected. This selection was made by manually reviewing the short descriptions of the respective journals and evaluating their relevance with the assistance of generative AI (prompt used in GPT-4o on October 4, 2024: "Evaluate which of the following journals could contain articles relevant to the topic of prompt engineering."). Based on the manual and AI-supported selection, the following journals should at least be included in the database set for this literature search: Nature Machine Intelligence, ACM Transactions on Computer-Human Interaction (TOCHI), Artificial Intelligence (AIJ), IEEE Transactions on Knowledge and Data Engineering, ACM Transactions on Interactive Intelligent Systems (TiiS).

It is understood that conferences play an important role in the field of generative AI, as the timely exchange of new approaches is fundamental. Taking into account a VHB rating of B or higher and an evaluation of thematic relevance, the following conferences should also be included in the database: International Conference on Information Systems (ICIS), European Conference on Information Systems (ECIS), Hawaii International Conference on System Sciences (HICSS), International Conference on Machine Learning (ICML), Association for the Advancement of Artificial Intelligence (AAAI), International Joint Conference on Artificial Intelligence (IJCAI), ACM Conference on Human Factors in Computing Systems (CHI), Conference on Neural Information Processing Systems (NeurIPS).

The next step within the third phase, the selection of databases and search terms, is to search for peer-reviewed SLRs on the topic of prompt engineering to identify databases relevant to the subject. The search was conducted on Scopus on the 10th of August and resulted in five hits (Table 2). The title, abstract and keywords were searched using the following search term:

( TITLE-ABS-KEY ( "prompt engineering" ) OR TITLE-ABS-KEY ( "prompt-engineering" ) AND TITLE-ABS-KEY ( "systematic literature review" ) OR TITLE-ABS-KEY ( "PRISMA" ) )

Table 2: Search results for systematic literature reviews in the field of prompt engineering (performed in Scopus on August 10, 2024).
Reference | Domain | Databases
Sasaki et al. (2024) | Programming | Google Scholar, arXiv, ACM Digital Library, IEEE Xplore
Moglia et al. (2024) | Medicine | PubMed, Web of Science, Scopus, arXiv
Han et al. (2023) | Economy | JSTOR, ProQuest, ScienceDirect, Web of Science, Google Scholar
Watson et al. (2023) | Machine Learning | Scopus, IEEE Xplore, ScienceDirect, Elicit, WorldCat, Google Scholar, arXiv

These publications focus on specific application areas, such as programming, medicine, economics, or machine learning, making it challenging to generalize their insights for broader practical use by practitioners. Yet, similarities in the database selection were recognized. All hits either use arXiv.org directly as a database or cite a large number of their sources from that website. Documents on arXiv.org are largely not peer-reviewed, but on the other hand enable the publication of current research. In the "Systematic Literature Review of Prompt Engineering Patterns in Software Engineering," Sasaki et al. (2024, p. 671) transparently state that most of their cited sources are not peer-reviewed, but that it is important to include them because prompt engineering is a rapidly changing field. Another SLR from the preliminary research states that many current articles can only be obtained through arXiv.org and that an increasing number of research groups are publishing their work on arXiv (Moglia et al., 2024, p. 41). Based on these findings and the previously identified journals, databases and search terms were defined (see Table 8 in Appendix).
journals and conferences should be considered (accessible via AIS eLibrary, IEEE Xplore, ACM Digital Library), while at the same time fully including interdisciplinary areas (through Scopus), as well as current – albeit largely non-peer-reviewed – articles (from arXiv). The selected search terms result from concept mapping and iterative testing of keywords. It was found that many articles only contained the term prompt engineering in the abstract, but their research focus was in a different area. This could be explained by the fact that prompt engineering can play an indirect role in many areas of application. Since it is assumed that articles with a primary focus on prompt engineering also contain this term in the title, only the title was searched. The following search term with variations was formed: TITLE("prompt-engineering" OR "prompt engineering" OR "prompt techniques" OR "prompt designs" OR "prompt design" OR "prompt patterns" OR "prompt pattern" OR "prompt strategies" OR "prompt strategy" OR "prompt methods") The search was carried out on October 4, 2024 in the respective databases according to the PRISMA procedure (Page et al., 2021) (see Figure 2) and documented with the literature management program Zotero. 718 hits were identified for all databases. Of these, 131 hits were identified as duplicates and excluded accordingly. In the next step, 587 hits were checked for suitability with regard to their title and abstract. Previously, articles that were published before 2022 or were not written in English In the third step, 115 full-text articles were excluded. were checked for suitability. Since articles from arXiv.org may not contain peer-reviewed articles, but at the same time are often highly relevant, an evaluation system was created to evaluate articles from all databases holistically according to thematic suitability, quality and actuality. The thematic suitability was weighted most heavily, while the quality of articles was assessed using two criteria to ensure a comprehensive evaluation. First, we prioritized publications that include a literature review process, assigning higher scores to systematic literature reviews (SLRs) and lower scores to less detailed reviews. This is of importance as we want to consolidate the knowledge in this field. Second, the evaluation incorporated the VHB rating (Verband der Hochschullehrerinnen und Hochschullehrer für Betriebswirtschaft e.V.), with higher scores. The evaluation criteria are defined in Table 3. Articles scoring fewer than seven points were excluded from the primary selection. However, the articles below this threshold, especially those with a score of six points, were reviewed as supplementary sources. These also include the four SLR articles from the previous database selection. There has been a significant increase in the number of articles published in recent years that can be assigned to the field of prompt engineering based on their title or abstract. Of the 115 articles that were checked for their suitability in full text in the fourth step, 63 articles were published in the year 2024 to date (up to October 4, 2024), 44 articles were published in 2023 and eight articles in 2022. Many articles were related to fine-tuning models, which was not relevant to the research question, as it requires specialized technical knowledge. Ultimately, five articles met the inclusion criteria, demonstrating relevance, quality and alignment with the research question. These five articles are summarized in Table 4. 
Figure 2: PRISMA procedure, based on Page et al. (2021, p. 5). [Flow diagram. Identification: 718 records identified (AIS eLibrary 17, ACM Digital Library 42, IEEE Xplore 148, Scopus 251, arXiv.org 260); 131 duplicate records removed. Screening: 587 records screened in title and abstract; 472 excluded (published before 2022: 82, not in English: 3, thematic mismatch: 387). Eligibility: 115 reports assessed in full text; 110 excluded due to a rating below 7 out of 11. Inclusion: 5 studies included in the review.]

Table 3: Criteria for evaluating articles in full text.
Criterion | Explanation
Topic | Is the full text of the article relevant to answering the research question? 4 = very relevant; 3 = relevant; 2 = somewhat relevant; 1 = less relevant; 0 = not relevant
Quality | (1) How transparent is the literature research process of the article? 2 = very transparent (SLR); 1 = present (LR); 0 = not transparent. (2) Does a VHB rating (Verband der Hochschullehrerinnen und Hochschullehrer für Betriebswirtschaft e.V., 2024) exist for this article? 3 = A+; 2 = A; 1 = B; 0 = C, D or not available
Actuality | When was the article published? 2 = 2024; 1 = 2023; 0 = 2022 or before

18 articles, each with six points, were reviewed and included in the analysis as supplementary sources (Table 5), but they are not described in as much detail as those with seven points. The most common aspects have already been covered by the extensive reviews of the articles listed above.

3.2 Analysis and Results

In "Can (A)I Have a Word with You? A Taxonomy on the Design Dimensions of AI Prompts", Braun et al. (2024) develop a taxonomy for the design of prompts for different modalities, such as text-to-text and text-to-image.

"The Prompt Report" by Schulhoff et al. (2024) can be considered the most comprehensive article of the included literature. It considers prompting techniques for the text-to-text modality and gives an insight into other modalities, such as text-to-visuals. At the same time, that article uses the PRISMA approach within a systematic literature review, which increases the transparency of the selected prompting techniques. Prompting techniques are categorized by modality and prompting category.

"A Systematic Survey of Prompt Engineering in Large Language Models: Techniques and Applications" by Sahoo et al. (2024) is similarly comprehensive. It classifies the prompting techniques according to application area. Schulhoff et al. and Sahoo et al. present a total of 108 different prompting techniques.

"The Art of Creative Inquiry – From Question Asking to Prompt Engineering" by Sasson Lazovsky et al. offers a perspective on the similarities between question formulation and prompt engineering. The article shows which characteristics are important in the interaction between humans and generative AI.

In "A Prompt Pattern Catalog to Enhance Prompt Engineering with ChatGPT", White et al. (2023) present prompt patterns using practical examples for software development. However, according to the authors, these can be transferred to other areas.

In order to make a selection from the multitude of prompting techniques, those that were presented in several included articles were prioritized. If a prompting technique includes adapted variants, the parent prompting technique is presented first, followed by possible adaptations. Braun et al. (2024) classified nine dimensions and three meta-dimensions that should be considered when creating prompts (Figure 3).
Firstly, the interaction between the LLM and the user, which, depending on the prompt, can be seen as computer-in-the-loop or human-in-the-loop (HITL). An example of HITL would be if the user asks the LLM, in addition to the main instruction, to pose follow-up questions that the user then answers (Braun et al., 2024, pp. 561, 565). In this case, the user takes on a more active role. According to Braun et al., input and output types such as text, image, audio, video and others are also part of the interaction meta-dimension.

Context is defined as the second meta-dimension. It comprises the learning dimension, which is divided into Zero-shot, One-shot and Few-shot. In addition to Braun et al., three of the included articles also identified these as superordinate prompting techniques. As already presented in Chapter Two, an LLM can adapt to new tasks (in-context learning) even if it has not been explicitly trained for them (Braun et al., 2024, pp. 563–564; cf. Radford et al., 2018, Brown et al., 2020). The addition of examples in a prompt is referred to as Few-shot, or One-shot in the case of a single explicit example. As Brown et al. showed in their article on GPT-3, the use of Few-shot can increase the accuracy of the output compared to Zero-shot (see Radford et al., 2019), where no examples are provided. In addition, the behavior of an LLM can be adapted to a specific context by assigning a role in a prompt (Braun et al., 2024, p. 564; cf. White et al., 2023, p. 7). By setting a style, the output can be adapted more generally (Braun et al., 2024, p. 564). Braun et al. go on to define the information space dimension for the context meta-dimension. The authors differentiate between whether additional information is provided internally – directly in the prompt – or externally – by agents that use search engines, for example. If no additional context is provided, the output is based exclusively on the training data of the LLM (Braun et al., 2024, p. 564).

Braun et al. define the third and final meta-dimension as the outcome to be achieved by adapting a prompt (Braun et al., 2024, p. 564). The authors classify Chain-of-Thought (CoT) for this purpose. CoT was also identified as a superordinate prompting technique in the articles by Schulhoff et al. (2024) and Sahoo et al. (2024), which refer to the article by Wei et al. (2023). CoT is designed for tasks that require complex understanding, which is also referred to as reasoning in various included articles. CoT breaks down a problem into smaller steps, solves them and then provides a final answer. This approach aims to provide the user with a clear and understandable result by having the LLM explain the process it uses to generate its output (Wei et al., 2023, p. 3).
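As a minimal illustration of the learning dimension and the Chain-of-Thought outcome dimension discussed above, the following Python sketch assembles Zero-shot, Few-shot and Zero-shot-CoT prompts as plain strings. The task, the example pairs and the field layout are invented placeholders; only the trigger phrase "Let's think step by step" is taken from Kojima et al. (2023).

# Sketch: prompts along the learning dimension (Zero-/Few-shot) and the
# outcome dimension (Chain-of-Thought). Pure string templating, no API.

TASK = "Classify the sentiment of the review as positive or negative."

FEW_SHOT_EXAMPLES = [  # hypothetical labeled demonstrations
    ("The battery lasts two full days.", "positive"),
    ("The screen cracked after a week.", "negative"),
]

def zero_shot(review: str) -> str:
    return f"{TASK}\n\nReview: {review}\nSentiment:"

def few_shot(review: str) -> str:
    demos = "\n".join(f"Review: {r}\nSentiment: {s}" for r, s in FEW_SHOT_EXAMPLES)
    return f"{TASK}\n\n{demos}\n\nReview: {review}\nSentiment:"

def zero_shot_cot(question: str) -> str:
    # Zero-shot CoT trigger phrase (Kojima et al., 2023).
    return f"{question}\nLet's think step by step."

print(few_shot("Setup was painless and fast."))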
Table 4: Final inclusion of SLR articles.
No. | Reference | Title
1 | Braun et al. (2024) | Can (A)I Have a Word with You? A Taxonomy on the Design Dimensions of AI Prompts
2 | Schulhoff et al. (2024) | The Prompt Report: A Systematic Survey of Prompting Techniques
3 | Sahoo et al. (2024) | A Systematic Survey of Prompt Engineering in Large Language Models: Techniques and Applications
4 | Sasson Lazovsky et al. (2024) | The Art of Creative Inquiry – From Question Asking to Prompt Engineering
5 | White et al. (2023) | A Prompt Pattern Catalog to Enhance Prompt Engineering with ChatGPT

Table 5: Articles that score six points.
No. | Reference | Title
1 | Bhandari (2024) | A Survey on Prompting Techniques in LLMs
2 | Bozkurt (2024) | Tell Me Your Prompts and I Will Make Them True: The Alchemy of Prompt Engineering and Generative AI
3 | Chen et al. (2024) | Unleashing the potential of prompt engineering in Large Language Models: a comprehensive review
4 | Chong et al. (2024) | Prompting for products: Investigating design space exploration strategies for text-to-image generative models
5 | Fagbohun et al. (2024) | An Empirical Categorization of Prompting Techniques for Large Language Models: A Practitioner's Guide
6 | Garg and Rajendran (2024) | Analyzing the Role of Generative AI in Fostering Self-directed Learning Through Structured Prompt Engineering
7 | Hill et al. (2024) | Prompt Engineering Principles for Generative AI Use in Extension
8 | Korzynski et al. (2023) | Artificial intelligence prompt engineering as a new digital competence: Analysis of generative AI technologies such as ChatGPT
9 | Liu and Chilton (2022) | Design Guidelines for Prompt Engineering Text-to-Image Generative Models
10 | Sasaki et al. (2024) | Systematic Literature Review of Prompt Engineering Patterns in Software Engineering
11 | Schmidt et al. (2024) | Towards a Catalog of Prompt Patterns to Enhance the Discipline of Prompt Engineering
12 | Siino and Tinnirello (2024) | GPT Hallucination Detection Through Prompt Engineering
13 | Tolzin et al. (2024) | Worked Examples to Facilitate the Development of Prompt Engineering Skills
14 | Tony et al. (2024) | Prompting Techniques for Secure Code Generation: A Systematic Investigation
15 | Vatsal and Dubey (2024) | A Survey of Prompt Engineering Methods in Large Language Models for Different NLP Tasks
16 | Wang et al. (2023a) | Review of large vision models and visual prompt engineering
17 | Wang et al. (2024) | Prompt engineering in consistency and reliability with the evidence-based guideline for LLMs
18 | Ye et al. (2024) | Prompt Engineering a Prompt Engineer
This requires similar skills to those required for asking interpersonal questions (Sasson Lazovsky et al., 2024, pp. 7–9). Sasson Lazovsky et al. identified the following seven common key skills: Creativity, Clarity and Precision, Adaptability, Critical Thinking, Empathy, Cognitive Flexibility, Goal Orientation. These are described in Table 7. Subsequently, prompting techniques from previously outlined areas such as Zero-shot, Few-shot and Chain-of-Thought will be ex- plored in greater depth and subdivided into further poten- tial areas. The following prompting techniques are taken from the synthesis by Schulhoff et al. (2024, pp. 8–18) and Sahoo et al. (2024, pp. 2–7), who present several prompt- ing techniques and refer to corresponding articles. There are often a large number of articles that adapt a prompting technique for new purposes. Therefore, reference is always made to the original article, unless an adapted variant is presented. Besides Role and Style Prompting, Emotion Prompting adds emotional phrases such as “This is very important to my career” to the end of a prompt (Li et al., 2023, p. 2). Another area of prompting techniques can be divided into Rephrase and Re-read. Rephrase and Respond (RaR) in- structs an LLM to first express the question in its own words before giving an answer (Deng et al., 2024, pp. 9– 10). Re-reading (RE2) tells an LLM to read the question again. This can increase performance in the area of reason- ing (Xu et al., 2024). Prompting techniques that focus on a step-by-step ap- proach can be assigned to the prompting area of Chain-of- Thought. Chain-of-Thought Zero-shot adds “Let’s think step by step” at the beginning of a prompt (Kojima et al., 2023, p. 1). Analogical Prompting instructs LLMs to create examples that can improve output quality using in-context learning (Yasunaga et al., 2024). Thread-of- Thought (ThoT) reviews several prompting templates for efficiency, with the following instruction rated best: “Walk me through this context in manageable parts step by step, summarizing and analyzing as we go” (Zhou et al., 2023b). Plan-and-Solve builds on the previously introduced Chain- of-Thought Zero-shot, but instead uses: “Let’s first under- stand the problem and devise a plan to solve the problem. Then, let’s carry out the plan and solve the problem step by step” (Wang et al., 2023b, pp. 3–4). Self-Consistency uses Chain-of-Thought, but executes a prompt several times and decides, for example, in favor of the result whose solu- tion was mentioned most frequently (Wang et al., 2023c, pp. 1–2). Tree-of-Thoughts (ToT) also extends the Chain- of-Thought approach by following individual steps such as thought processes separately (Yao et al., 2023, pp. 1–2). Automatic Prompt Engineer (APE) presents a system with 8 Dimensions Characteristics Interaction Input Type NE Text Image Audio Video Other Output Type Interaction Type NE Computer-in-the-loop Human-in-the-loop Context Role ME Not defined Defined Style Learning ME Zero-shot One-shot Few-shot Information Space ME Not defined Explicit internal Explicit external Outcome Chain of Thoughts ME Single step Step-by-step Goal NE Learn Lookup Investi-gate Monitor / extract Decide Create ME = Mutually Exclusive, NE = Non-Exclusive Table 6: Exemplified prompt patterns from White et al. (2023). Observation Example Scope Task/Goal Context Procedure Role Output “Within scope X” “Create a game. . . ” “I would like to achieve X” “When I say X, I mean. . . 
Automatic Prompt Engineer (APE) presents a system with which a prompt is selected from a set that leads to a high output quality. The following prompt was rated well in an evaluation: "Let's work this out in a step-by-step way to be sure we have the right answer" (Zhou et al., 2023a). Another noteworthy prompting technique is Self-Refine, which uses an LLM to improve a result through feedback until a termination condition is reached (Madaan et al., 2023, pp. 1–2).
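The Self-Refine loop can be outlined in a few lines of Python. The sketch below is a schematic reading of the technique, not the authors' implementation: call_llm is a hypothetical placeholder for any text-generation call, and the feedback wording and DONE convention are invented for illustration.

# Sketch of Self-Refine (Madaan et al., 2023): generate, obtain feedback
# from the same model, refine, and stop at a termination condition.

def call_llm(prompt: str) -> str:
    # Hypothetical placeholder; wire up a real model call here.
    raise NotImplementedError

def self_refine(task: str, max_rounds: int = 3) -> str:
    draft = call_llm(task)
    for _ in range(max_rounds):
        feedback = call_llm(
            f"Give concise feedback on this answer to '{task}'. "
            f"Reply DONE if no issues remain.\n\n{draft}"
        )
        if "DONE" in feedback:  # termination condition reached
            return draft
        draft = call_llm(
            f"Task: {task}\nPrevious answer:\n{draft}\n"
            f"Feedback:\n{feedback}\nRewrite the answer, addressing the feedback."
        )
    return draft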
Breaking down the goal into step-by-step instructions or questions guides the model through com- plex tasks or explanations systematically. This category justifies its inclusion by emphasizing the importance of precision and clarity in prompts, which directly impacts the quality and usefulness of the output. Classified by Braun et al. (2024), Sahoo et al. (2024) and Sasson Lazovsky et al. (2024) as a distinct prompting cat- egory, Chain-of-Thought prompting techniques decom- pose a task step-by-step and enhance thereby the model’s reasoning capabilities on complex problems. The Chain- of-Thought prompting technique can be used with both Zero-shot and Few-shot concepts. By structuring tasks incrementally, the model produces outputs that are both coherent and logically organized. Furthermore, this cate- gory facilitates creative inquiry; as Sasson Lazovsky et al. (2024) emphasize, clearly defining intent in prompts is essential for open-ended or exploratory tasks. 4.3 Context and References Providing context and relevant references equips the lan- guage model with necessary background information, re- ducing ambiguity and enhancing the accuracy of the re- sponse. This category acknowledges that AI models rely heavily on the input prompt for context, and without it, the responses may be generic or off-target. Including refer- ences also allows the model to incorporate specific data or adhere to particular frameworks, which is vital in academic or professional settings. This element was selected to address the frequent recom- mendation to provide situational and contextual informa- tion in prompts. Braun et al. (2024) stressed the impor- tance of embedding contextual details to enhance output reliability and Schulhoff et al. (2024) suggested incorporat- ing external references or historical data into prompts for guidance. Linking prompts to prior decisions, documents, or reports enhances contextual richness and ensures out- puts reflect critical dependencies (Sasson Lazovsky et al., 2024). By integrating these elements, practitioners can craft prompts that are both informative and grounded in factual context. model in delivering content that is not only informative but also appropriately presented. This consideration is crucial for aligning the response with the conventions of the intended medium or genre. This category emerged from the emphasis in the literature on aligning the model’s outputs with specific user re- quirements and communication contexts. Techniques like output specification and refinement, discussed in Sahoo et al. (2024), are critical for aligning the model’s output with user needs. Braun et al. (2024) highlighted specifying output formats to meet technical or domain-specific needs. Directing the model to produce responses in specific formats, such as tables, markdown, or code, ensures that outputs meet those requirements. Tonality customization and aligning tone with organizational branding to maintain consistency across communication outputs further validated the need to include this aspect in the Prompt Canvas. Also, it is of use to specify tone attributes like luxury, authority, or informality, depending on the target audience or purpose. By mapping the identified techniques to the Prompt Canvas, the foundational aspects of a prompt from defining personas to output refinement are systematically addressed. The canvas simplifies the application of complex techniques, making them more approachable for practitioners. 
In addition to its primary elements, the integration of the Techniques and Tooling categories serves to enhance the canvas by offering deeper technical insights and practical support. These categories focus on further techniques and the tools available to implement them.

4.5 Recommended Techniques

This category within the Prompt Canvas emphasizes the application of further strategies to refine and optimize prompts. These techniques enrich the Prompt Canvas by offering a diverse set of strategies to address varying tasks and contexts. Practitioners can draw from this toolbox to adapt their prompts to specific challenges.

Iterative Optimization: Sahoo et al. (2024) and Schulhoff et al. (2024) present iterative refinement through prompting techniques as a crucial approach for improving prompts. This involves adjusting and testing prompts in a feedback loop to enhance their effectiveness. Iterative optimization allows practitioners to fine-tune prompts based on model responses, ensuring greater alignment with task objectives.

Placeholders and Delimiters: Placeholders act as flexible components that can be replaced with context-specific information, while delimiters help segment instructions, improving clarity and reducing ambiguity. Both can be used to create dynamic and adaptable prompts (cf. White et al. 2023); a template sketch is given below.
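A minimal sketch of the placeholder-and-delimiter idea, using Python's standard string.Template for substitution and triple backticks as delimiters; the template wording is an invented example, not a prescribed pattern.

# Sketch: a reusable prompt template with placeholders and delimiters.
from string import Template

PROMPT_TEMPLATE = Template(
    "You are a $role.\n"
    "Summarize the text between the triple backticks in $n_sentences sentences.\n"
    "```\n$document\n```"
)

prompt = PROMPT_TEMPLATE.substitute(
    role="technical editor",          # placeholder: persona
    n_sentences=3,                    # placeholder: output constraint
    document="<paste source text here>",
)
print(prompt)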
Prompting Platforms Platforms like PromptPerfect al- low users to design, test, and optimize prompts interac- tively. These tools often include analytics for assessing prompt performance and making informed adjustments. Prompt Libraries Pre-designed templates and reusable prompts, discussed by White et al. (2023), provide a valu- able starting point for practitioners. Libraries save time and ensure consistency by offering solutions for common tasks. Some platforms either offer prompts for purchase (e.g., PromptBase), while others focus on sharing prompts for free (e.g., PromptHero). Browser Extensions Providing direct integration into web clients, browser extensions, like Text Blaze and Prompt Perfect, allow users to experiment with prompts in real-time on websites. LLM Arenas LLM Arenas, like Chatbot Arena, offer platforms to test and compare AI models, providing in- sights into their performance and capabilities. These are- nas help users refine prompts and stay updated with the latest advancements in LLM technology. Custom GPTs for Specific Purposes Chen et al. (2024) mention GPTs as plugins in ChatGPT. Customized GPTs such as Prompt Perfect or ScholarGPT are tailored LLMs optimized for specialized applications or industries. These customized versions are also able to leverage additional data through Application Programming Interfaces (APIs) or are given additional context through text or PDFs for specific objectives, making them highly effective for spe- cialized tasks. custom GPTs Customized LLMs and company-wide use takes Developing company-specific customization a step further by integrating organizational knowledge, values, and workflows into a LLM. These models have been given additional context or are even fine-tuned on internal data, leveraging documents and APIs, and are primed with internal prompts to ensure alignment with company standards and improve oper- ational efficiency. Additionally, some LLM providers offer a sandboxed environment for enterprises, ensuring that entered data will not be used to train future publicly available models. Integration of LLMs via API into application systems APIs facilitate seamless integration of LLMs into existing systems, enabling automated prompt generation and application. 5 Limitations, Outlook and Conclusion This chapter outlines the limitations of the current study, explores potential future directions for research and appli- cation, and concludes by emphasizing the significance of the Prompt Canvas as a foundational tool for the evolving field of prompt engineering. It provides a critical reflec- tion on the scope of the work, its adaptability to emerging trends, and its role in bridging research and practice. 5.1 Limitations As prompt engineering is not a one-size-fits-all disci- pline, different tasks and domains may require tailored approaches and techniques. Yet, a canvas can be easily customized to include domain-specific elements, such as ethical considerations for healthcare or creative constraints for marketing. This adaptability ensures that the canvas remains relevant and useful across diverse use cases. The modular structure allows practitioners to customize tech- niques for specific tasks or domains, improving relevance 11 and scalability. The effectiveness of this canvas requires validation through both quantitative and qualitative research methodologies. 
5 Limitations, Outlook and Conclusion

This section outlines the limitations of the current study, explores potential future directions for research and application, and concludes by emphasizing the significance of the Prompt Canvas as a foundational tool for the evolving field of prompt engineering. It provides a critical reflection on the scope of the work, its adaptability to emerging trends, and its role in bridging research and practice.

5.1 Limitations

As prompt engineering is not a one-size-fits-all discipline, different tasks and domains may require tailored approaches and techniques. Yet the canvas can easily be customized to include domain-specific elements, such as ethical considerations for healthcare or creative constraints for marketing. This adaptability ensures that the canvas remains relevant and useful across diverse use cases. The modular structure allows practitioners to customize techniques for specific tasks or domains, improving relevance and scalability. The effectiveness of the canvas nevertheless requires validation through both quantitative and qualitative research methodologies.

Recognizing the strong demand for a practical guide in the field of prompt engineering, this publication aims to serve as a starting point to initiate and foster discussion on the topic. Additionally, a research design to evaluate the utility of the canvas is already under development. This work focuses primarily on text-to-text modalities. Although this modality already covers a wide range of applications, other modalities such as image, audio, video, or image-as-text (cf. Schulhoff et al. 2024) are not highlighted in this study. At the same time, many of the techniques mentioned above are not designed exclusively for the text-to-text modality, e.g., iterative prompting. Furthermore, this work focused primarily on the design of individual prompts. Prompting techniques that use agents were thematically separated out in the analysis and synthesis; it is assumed that they can play a further important role in improving output quality. Likewise, this work focused on findings for users of LLMs in private and business environments. Finally, it is important to emphasize that this work does not explore potential risks associated with the use of LLMs. These risks include biases, the handling of sensitive information, copyright violations, and the significant consumption of resources.

5.2 Outlook

The Prompt Canvas serves as a foundational tool, offering a shared framework for the field of prompt engineering. It is intended not only for practical application but also to foster dialogue about which techniques are most relevant and sustainable. In doing so, the canvas encourages discussion and guides research in evaluating whether emerging developments should be incorporated into its framework. Given the dynamic and rapidly evolving nature of the discipline, it is important to view the Prompt Canvas not as a static product but as a living document that reflects the current state of practice. For instance, if prompting techniques are more deeply integrated into LLMs in the future through prompt tuning and automated prompting, one could argue that some prompt techniques may become less important. Advancing models, such as OpenAI's o1 model series, already incorporate the Chain-of-Thought technique, enabling them to perform complex reasoning by generating intermediate steps before arriving at a final answer.

5.3 Conclusion

This paper introduces the Prompt Canvas as a unified framework aimed at consolidating the diverse and fragmented techniques of prompt engineering into an accessible and practical tool for practitioners. Grounded in an extensive literature review and informed by established methodologies, the Prompt Canvas addresses the need for a comprehensive and systematic approach to designing effective prompts for large language models. By mapping key techniques, such as role-based prompting, Chain-of-Thought reasoning, and iterative refinement, onto a structured canvas, this work provides a valuable resource that bridges the gap between academic research and practical application. Future research is encouraged to expand the framework to address these evolving challenges, ensuring its continued relevance and utility across diverse domains.

References

Prabin Bhandari. A survey on prompting techniques in LLMs, 2024. URL https://arxiv.org/abs/2312.03740.

A. Bozkurt. Tell me your prompts and I will make them true: The alchemy of prompt engineering and generative AI. Open Praxis, 16(2):111-118, 2024. doi: 10.55982/openpraxis.16.2.661. Publisher: International Council for Open and Distance Education.
Marvin Braun, Maike Greve, Felix Kegel, Lutz Kolbe, and Philipp Emanuel Beyer. Can (a)i have a word with you? A taxonomy on the design dimensions of AI prompts. In Proceedings of the 57th Hawaii International Conference on System Sciences (HICSS-57), 2024. URL https://aisel.aisnet.org/hicss-57/cl/design_development_and_evaluation/2.

Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners, 2020.

Banghao Chen, Zhaofeng Zhang, Nicolas Langrené, and Shengxin Zhu. Unleashing the potential of prompt engineering in large language models: A comprehensive review. CoRR, abs/2310.14735, 2024. URL https://arxiv.org/abs/2310.14735. Accessed: 2024-12-05.

Leah Chong, I.-Ping Lo, Jude Rayan, Steven Dow, Faez Ahmed, and Ioanna Lykourentzou. Prompting for products: Investigating design space exploration strategies for text-to-image generative models, 2024. URL https://arxiv.org/abs/2408.03946v1.

Harris M. Cooper. Organizing knowledge syntheses: A taxonomy of literature reviews. Knowledge in Society, 1(1), 1988. ISSN 0897-1986. doi: 10.1007/BF03177550.

Yihe Deng, Weitong Zhang, Zixiang Chen, and Quanquan Gu. Rephrase and respond: Let large language models ask better questions for themselves, 2024.

Oluwole Fagbohun, Rachel M. Harrison, and Anton Dereventsov. An empirical categorization of prompting techniques for large language models: A practitioner's guide, 2024. URL https://arxiv.org/abs/2402.14837v1.

Ashish Garg and Ramkumar Rajendran. Analyzing the role of generative AI in fostering self-directed learning through structured prompt engineering. In Generative Intelligence and Intelligent Tutoring Systems: 20th International Conference, ITS 2024, Thessaloniki, Greece, June 10-13, 2024, Proceedings, Part I, pages 232-243. Springer-Verlag, 2024. ISBN 978-3-031-63027-9. doi: 10.1007/978-3-031-63028-6_18.

Y. Han, J. Hou, and Y. Sun. Research and application of GPT-based large language models in business and economics: A systematic literature review in progress. In 2023 IEEE International Conference on Computing (ICOCO), pages 118-123, 2023. doi: 10.1109/ICOCO59262.2023.10397642.

P.A. Hill, L.K. Narine, and A.L. Miller. Prompt engineering principles for generative AI use in extension. The Journal of Extension, 62(3), 2024. doi: 10.34068/joe.62.03.20. Publisher: Extension Journal, Inc.

IBM. Design Thinking Field Guide, 2016. Available at https://www.ibm.com/design/thinking.

Alexey Ivanov and Dmitry Voloshchuk. The team canvas. https://theteamcanvas.com, 2015. Accessed: 2024-12-05.

Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. Large language models are zero-shot reasoners, 2023.

P. Korzynski, G. Mazurek, P. Krzypkowska, and A. Kurasinski. Artificial intelligence prompt engineering as a new digital competence: Analysis of generative AI technologies such as ChatGPT. Entrepreneurial Business and Economics Review, 11(3):25-37, 2023. doi: 10.15678/EBER.2023.110302. Publisher: Cracow University of Economics.

Cheng Li, Jindong Wang, Yixuan Zhang, Kaijie Zhu, Wenxin Hou, Jianxun Lian, Fang Luo, Qiang Yang, and Xing Xie. Large language models understand and can be enhanced by emotional stimuli, 2023.

Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, and Graham Neubig. Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. ACM Comput. Surv., 55(9):195:1-195:35, 2023. ISSN 0360-0300. doi: 10.1145/3560815.

Vivian Liu and Lydia B. Chilton. Design guidelines for prompt engineering text-to-image generative models. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, CHI '22. Association for Computing Machinery, 2022. ISBN 978-1-4503-9157-3. doi: 10.1145/3491102.3501825. event-place: New Orleans, LA, USA.
L.S. Lo. The art and science of prompt engineering: A new literacy in the information age. Internet Reference Services Quarterly, 27(4):203-210, 2023. doi: 10.1080/10875301.2023.2227621. Publisher: Routledge.

Aman Madaan, Niket Tandon, Prakhar Gupta, Skyler Hallinan, Luyu Gao, Sarah Wiegreffe, Uri Alon, Nouha Dziri, Shrimai Prabhumoye, Yiming Yang, Shashank Gupta, Bodhisattwa Prasad Majumder, Katherine Hermann, Sean Welleck, Amir Yazdanbakhsh, and Peter Clark. Self-refine: Iterative refinement with self-feedback, 2023.

Ash Maurya. Running Lean: Iterate from Plan A to a Plan That Works. O'Reilly Media, 2012.

A. Moglia, K. Georgiou, P. Cerveri, L. Mainardi, R.M. Satava, and A. Cuschieri. Large language models in healthcare: from a systematic review on medical examinations to a comparative analysis on fundamentals of robotic surgery online test. Artificial Intelligence Review, 57(9), 2024. doi: 10.1007/s10462-024-10849-5.

OpenAI. Improving language understanding with unsupervised learning, 2018. URL https://openai.com/index/language-unsupervised.

Alexander Osterwalder and Yves Pigneur. Business Model Generation: A Handbook for Visionaries, Game Changers, and Challengers. John Wiley & Sons, Hoboken, NJ, 2010. English Edition, Strategyzer Series.

Matthew J. Page, Joanne E. McKenzie, Patrick M. Bossuyt, Isabelle Boutron, Tammy C. Hoffmann, Cynthia D. Mulrow, Larissa Shamseer, Jennifer M. Tetzlaff, Elie A. Akl, Sue E. Brennan, Roger Chou, Julie Glanville, Jeremy M. Grimshaw, Asbjørn Hróbjartsson, Manoj M. Lalu, Tianjing Li, Elizabeth W. Loder, Evan Mayo-Wilson, Steve McDonald, Luke A. McGuinness, Lesley A. Stewart, James Thomas, Andrea C. Tricco, Vivian A. Welch, Penny Whiting, and David Moher. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. BMJ, 372:n71, 2021. ISSN 1756-1833. doi: 10.1136/bmj.n71. Publisher: British Medical Journal Publishing Group. Section: Research Methods & Reporting.

Roman Pichler. Strategize: Product Strategy and Product Roadmap Practices for the Digital Age. Pichler Consulting, London, UK, 2016.

Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. Improving language understanding with unsupervised learning, 2018. URL https://cdn.openai.com/research-covers/language-unsupervised/language_understanding_paper.pdf.

Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. Better language models and their implications, 2019. URL https://cdn.openai.com/better-language-models/language_models_are_unsupervised_multitask_learners.pdf.
Pranab Sahoo, Ayush Kumar Singh, Sriparna Saha, Vinija Jain, Samrat Mondal, and Aman Chadha. A systematic survey of prompt engineering in large language models: Techniques and applications, 2024. URL https://arxiv.org/abs/2402.07927.

Y. Sasaki, H. Washizaki, J. Li, D. Sander, N. Yoshioka, and Y. Fukazawa. Systematic literature review of prompt engineering patterns in software engineering. In 2024 IEEE 48th Annual Computers, Software, and Applications Conference (COMPSAC), pages 670-675, 2024. doi: 10.1109/COMPSAC61105.2024.00096.

G. Sasson Lazovsky, T. Raz, and Y.N. Kenett. The art of creative inquiry—from question asking to prompt engineering. The Journal of Creative Behavior, 2024. doi: 10.1002/jocb.671. Publisher: John Wiley and Sons Inc.

Douglas C. Schmidt, Jesse Spencer-Smith, Quchen Fu, and Jules White. Towards a catalog of prompt patterns to enhance the discipline of prompt engineering. Ada Lett., 43(2):43-51, 2024. ISSN 1094-3641. doi: 10.1145/3672359.3672364. Place: New York, NY, USA. Publisher: Association for Computing Machinery.

Sander Schulhoff, Michael Ilie, Nishant Balepur, Konstantine Kahadze, Amanda Liu, Chenglei Si, Yinheng Li, Aayush Gupta, HyoJung Han, Sevien Schulhoff, Pranav Sandeep Dulepet, Saurav Vidyadhara, Dayeon Ki, Sweta Agrawal, Chau Pham, Gerson Kroiz, Feileen Li, Hudson Tao, Ashay Srivastava, Hevander Da Costa, Saloni Gupta, Megan L. Rogers, Inna Goncearenco, Giuseppe Sarli, Igor Galynker, Denis Peskoff, Marine Carpuat, Jules White, Shyamal Anadkat, Alexander Hoyle, and Philip Resnik. The prompt report: A systematic survey of prompting techniques, 2024. URL https://arxiv.org/abs/2406.06608v3.

M. Siino and I. Tinnirello. GPT hallucination detection through prompt engineering. In Faggioli G., Ferro N., Galuscakova P., and de Herrera A.G.S., editors, CLEF 2024: Conference and Labs of the Evaluation Forum, volume 3740, pages 712-721. CEUR-WS, 2024. URL https://www.scopus.com/inward/record.uri?eid=2-s2.0-85201630710&partnerID=40&md5=b9f52dd225e44f2f74ee40871bd0b9d9.

The Fountain Institute. UX research canvas. https://www.thefountaininstitute.com/ux-research-canvas, 2020. Accessed: 2024-12-05.

Antonia Tolzin, Nils Knoth, and Andreas Janson. Worked examples to facilitate the development of prompt engineering skills. In ECIS 2024 Proceedings, 2024. URL https://aisel.aisnet.org/ecis2024/track13_learning_teach/track13_learning_teach/10.

Catherine Tony, Nicolás E. Díaz Ferreyra, Markus Mutas, Salem Dhiff, and Riccardo Scandariato. Prompting techniques for secure code generation: A systematic investigation, 2024. URL https://arxiv.org/abs/2407.07064v1.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need, 2023.

Shubham Vatsal and Harsh Dubey. A survey of prompt engineering methods in large language models for different NLP tasks, 2024.

Verband der Hochschullehrerinnen und Hochschullehrer für Betriebswirtschaft e.V. VHB-Rating 2024 für Publikationsmedien, Teilrating Wirtschaftsinformatik (WI), 2024. URL https://vhbonline.org/fileadmin/user_upload/VHB_Rating_2024_Area_rating_WI.pdf.

Jan vom Brocke, Alexander Simons, Björn Niehaves, Kai Riemer, Ralf Plattfaut, and Anne Cleven. Reconstructing the giant: On the importance of rigour in documenting the literature search process. ECIS 2009 Proceedings, 2009. URL https://aisel.aisnet.org/ecis2009/161.

J. Wang, Z. Liu, L. Zhao, Z. Wu, C. Ma, S. Yu, H. Dai, Q. Yang, Y. Liu, S. Zhang, E. Shi, Y. Pan, T. Zhang, D. Zhu, X. Li, X. Jiang, B. Ge, Y. Yuan, D. Shen, T. Liu, and S. Zhang. Review of large vision models and visual prompt engineering. Meta-Radiology, 1(3), 2023a. doi: 10.1016/j.metrad.2023.100047. Publisher: KeAi Publishing Communications Ltd.
L. Wang, X. Chen, X. Deng, H. Wen, M. You, W. Liu, Q. Li, and J. Li. Prompt engineering in consistency and reliability with the evidence-based guideline for LLMs. npj Digital Medicine, 7(1), 2024. doi: 10.1038/s41746-024-01029-4. Publisher: Nature Research.

Lei Wang, Wanyu Xu, Yihuai Lan, Zhiqiang Hu, Yunshi Lan, Roy Ka-Wei Lee, and Ee-Peng Lim. Plan-and-solve prompting: Improving zero-shot chain-of-thought reasoning by large language models, 2023b.

Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. Self-consistency improves chain of thought reasoning in language models, 2023c.

E. Watson, T. Viana, and S. Zhang. Augmented behavioral annotation tools, with application to multimodal datasets and models: A systematic review. AI (Switzerland), 4(1):128-171, 2023. doi: 10.3390/ai4010007.

Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed Chi, Quoc Le, and Denny Zhou. Chain-of-thought prompting elicits reasoning in large language models, 2023.

Jules White, Quchen Fu, Sam Hays, Michael Sandborn, Carlos Olea, Henry Gilbert, Ashraf Elnashar, Jesse Spencer-Smith, and Douglas C. Schmidt. A prompt pattern catalog to enhance prompt engineering with ChatGPT, 2023.

Xiaohan Xu, Chongyang Tao, Tao Shen, Can Xu, Hongbo Xu, Guodong Long, Jian-guang Lou, and Shuai Ma. Re-reading improves reasoning in large language models, 2024.

Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Thomas L. Griffiths, Yuan Cao, and Karthik Narasimhan. Tree of thoughts: Deliberate problem solving with large language models, 2023.

Michihiro Yasunaga, Xinyun Chen, Yujia Li, Panupong Pasupat, Jure Leskovec, Percy Liang, Ed H. Chi, and Denny Zhou. Large language models as analogical reasoners, 2024.

Qinyuan Ye, Maxamed Axmed, Reid Pryzant, and Fereshte Khani. Prompt engineering a prompt engineer, 2024.

Yongchao Zhou, Andrei Ioan Muresanu, Ziwen Han, Keiran Paster, Silviu Pitis, Harris Chan, and Jimmy Ba. Large language models are human-level prompt engineers, 2023a.

Yucheng Zhou, Xiubo Geng, Tao Shen, Chongyang Tao, Guodong Long, Jian-Guang Lou, and Jianbing Shen. Thread of thought unraveling chaotic contexts, 2023b.

A Appendix
("prompt-engineering" OR "prompt engineering" OR "prompt Title: techniques" OR "prompt designs" OR "prompt design" OR "prompt patterns" OR "prompt pattern" OR "prompt strategies" OR "prompt strategy" OR "prompt methods") ("Document Title":prompt-engineering) OR ("Document Title":prompt engineering) OR ("Document Title":prompt techniques) OR ("Document Title":prompt designs) OR ("Document Title":prompt design) OR ("Document Title":prompt patterns) OR ("Document Title":prompt pattern) OR ("Document Title":prompt strategies) OR ("Document Title":prompt strategy) OR ("Document Title":prompt methods") (TITLE("prompt-engineering") OR TITLE("prompt engineering") OR TITLE("prompt techniques") OR TITLE("prompt designs") OR TITLE("prompt design") OR TITLE("prompt patterns") OR TITLE("prompt pattern") OR TITLE("prompt strategies") OR TITLE("prompt strategy") OR TITLE("prompt methods")) "prompt-engineering" OR "prompt engineering" OR "prompt techniques" OR "prompt designs" OR "prompt design" OR "prompt patterns" OR "prompt pattern" OR "prompt strategies" OR "prompt strategy" OR "prompt methods" 16