| column | type | values / lengths |
| dataset | stringclasses | 9 values |
| input | stringlengths | 351 to 18.6k |
| output | stringlengths | 8 to 7.48k |
| id | stringlengths | 20 to 47 |
| messages | listlengths | 2 to 2 |
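Each row pairs a task-formatted input prompt with its reference output, a unique id, and a two-turn messages list that mirrors the same input/output pair in chat form. Below is a minimal sketch of loading and inspecting rows with this schema via the Hugging Face datasets library; the dataset path "your-org/science-instruct-mix" and the "train" split are placeholder assumptions, not confirmed identifiers for this dataset.

```python
# Minimal sketch, assuming rows follow the schema above.
# The dataset path and split name are placeholders, not confirmed identifiers.
from datasets import load_dataset

ds = load_dataset("your-org/science-instruct-mix", split="train")

for row in ds.select(range(3)):
    # Each row carries the source task name, the raw input/output pair,
    # a unique id, and a two-message chat transcript.
    print(row["dataset"], row["id"])
    user_msg, assistant_msg = row["messages"]  # listlengths is always 2
    # In the preview rows shown here, the chat messages mirror input/output verbatim.
    assert user_msg["role"] == "user" and user_msg["content"] == row["input"]
    assert assistant_msg["role"] == "assistant" and assistant_msg["content"] == row["output"]
```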
science.multicite_intent_classification
Examine the text below, which includes a citation marked by "<cite>" and "</cite>". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text. Provide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose. Text with Citation: Both the public dataset and the proprietary dataset are <cite>Das et al. (2018)</cite> , using the same train/val/test splits for FB15k-237. We extend the publicly available implementation of <cite>Das et al. (2018)</cite> for our experimentation.
["Similarities", "Uses"]
science.multicite_intent_classification.2102
[ { "content": "Examine the text below, which includes a citation marked by \"<cite>\" and \"</cite>\". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.\n\nProvide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.\n\nText with Citation:\nBoth the public dataset and the proprietary dataset are <cite>Das et al. (2018)</cite> , using the same train/val/test splits for FB15k-237. We extend the publicly available implementation of <cite>Das et al. (2018)</cite> for our experimentation.", "role": "user" }, { "content": "[\"Similarities\", \"Uses\"]", "role": "assistant" } ]
science.mup_single_document_summarization
Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise. Title of the Paper: Evaluating Language-biased image classification based on semantic compositionality Main Content: 1 INTRODUCTION . Language is not only a fundamental tool enabling humans to communicate with each other : it also shapes other aspects of perception and cognition , ranging from influencing visual processing to enabling abstract thinking ( Vygotsky , 1934 ; Miller , 1951 ; Waxman & Markow , 1995 ; Colas et al. , 2021 ) . Analogous to this human ability , recent artificial intelligence technology has also utilized language to drive the training of visual skills in machines . For instance , the OpenAI team introduced CLIP ( Contrastive Language–Image Pre-training ) , consisting of joint learning of language and vision ( Radford et al. , 2021 ) . The CLIP model efficiently learns visual concepts from natural language supervision and can be applied to various visual tasks in a zero-shot manner . Joint learning of language and vision has also been utilized in the Visual Question Answering ( VQA ) literature , where artificial models are asked to answer questions about visual content ( Antol et al. , 2015 ; Lu et al. , 2016 ; Fukui et al. , 2016 ) . While joint training with language effectively fosters the acquisition of general abilities for visual recognition/understanding , its abstraction can produce biased recognition for either humans or machines . The Stroop effect known in human cognitive science literature is one of such biases ( Stroop , 1935 ) . When human participants observe a mismatched word , where the word “ red ” is printed with the “ blue ” color , and are asked to answer the “ color ” of the word , their response to the word can be slower than when they see the word “ red ” with the “ red ” color . In addition to the color-word interference , language also interferes with human image category recognition , known as picture-word interference ( Rosinski , 1977 ; Lupker , 1979 ) . When a participant observes an image coupled with an incongruent category word ( e.g. , Figure 1 , right ) , the image categorization speed can be delayed . Similar to the effect in humans , recent deep learning systems jointly trained on language and vision , e.g. , CLIP , also show object recognition interference when words are embedded in images ( Goh et al. , 2021 ) . Specifically , by simply adding a handwritten word to an image , the object recognition can be biased to the superimposed word . A critical aspect of picture-word interference in humans is that it depends on the semantic relationship between images and superimposed words . For instance , the reaction time on the image classification for word-superimposed images is slower when the image category is semantically similar to the word one ( Rosinski , 1977 ) . Based on the accumulated findings , previous works have commonly suggested that at least two different processes mediate the effect in humans , as introduced in detail in Section 2.2 . Specifically , when a participant observes a word-superimposed image , an activation process synthesizes semantic representations corresponding to the superimposed word and image ( Figure 2 , activation process ) . Based on the representations , it has been assumed that a selection process decides which possible activation is the answer for the current task . ( Figure 2 , selection process ) . 
The activated representation is shared across visual word forms and images , and the dual activation by words and images confuses the decision in the selection process . These human cognitive findings raise the question of whether interferences in artificial models have some similarities with those observed in humans . Indeed , some artificial models show languagebiased image classification , but one possibility is that the bias might emerge irrespective of the semantic interaction between words and images . When we consider a model with text and image encoders for picture-word interference ( Figure 2 ) , we can regard the functional role of the image encoder as synthesizing semantic representations for word-superimposed images . Although pictureword interference in humans depends on shared semantic representations of written words and images , this is not the only mechanism that can produce interference . For instance , the image encoder of the model may assign the written word ” dog ” a different semantic representation than the visual image ” dog ” and show strong preferential activation to written words while ignoring image contents . Since joint learning of language and vision is expected to enable machines to acquire highly interacted representations of texts and images to solve various general tasks , observing such independent representations and a bias for one of them can be problematic and show a limit of the joint training process . In this study , we import methodological tools from the cognitive science literature to assess biases of artificial models in image classification for word-superimposed images . Specifically , we introduce a paradigm to evaluate picture-word interference , in which an agent has to predict the image superordinate/basic category for images with superordinate/basic words ( Figure 1 ) . Our benchmark dataset is a set of images with superimposed words and consists of a mixture of natural image datasets ( Cichy et al. , 2016 ; Mohsenzadeh et al. , 2019 ) and hierarchical word labels ( Lin et al. , 2014 ; Krizhevsky et al. , 2009 ) . Our benchmark task aims to test artificial models with a text encoder and an image encoder ( Figure 2 ) . The task enables us to test 1 ) whether language-biased decisions happen across different category levels and 2 ) the extent to which picture-word interference in artificial models depends on the semantic similarity between superimposed words and images . Furthermore , by importing a methodological tool from the neuroscience literature , called representational similarity analysis ( RSA ) ( Kriegeskorte et al. , 2008 ) , we can also test 3 ) whether the image encoder has a common semantic representation for superimposed words and images . Out of various joint learning models of language and vision , the present study evaluates the CLIP model using the dataset because the model consists of dual processing of visual and text encoders and is developed to be applied to various general visual tasks without fine-tuning the dataset . To anticipate the result , we show that 1 ) presenting words disturbs the CLIP image classification even across different category levels , 2 ) the effect does not depend on the semantic relationship between images and words , and 3 ) the superimposed word representation in the CLIP image encoder is not shared with the image representation . The present study provides an open-source code to reproduce the results and the full dataset . 2 RELATED WORKS . 
Joint learning of language and vision in artificial models : Some recent works in machine learning have reported language-biased image recognition for artificial models trained with joint learning of language and vision . One example is the CLIP model ( Radford et al. , 2021 ) . The CLIP architecture consists of an image encoder and a text encoder ( Figure 2 ) . The image encoder is based on a ResNet ( He et al. , 2016 ) or Vision Transformer ( Dosovitskiy et al. , 2020 ) , and the text encoder is a Language Transformer model ( Vaswani et al. , 2017 ) . The model is trained on a large dataset of image-text pairs with contrastive objectives , where it learns to align text and image representations for each pair . The pre-trained model can be used for various visual tasks in a zero-shot manner . For instance , when the model is applied for the image classification task , the learned text encoder synthesizes a zero-shot linear classifier for the descriptions , including classification labels , e.g. , ” a photograph of [ label ] ” ( Figure 2 ) . The learned image encoder outputs the representation for an input image , and the text classifier decides which label representation is the closest to the image representation . Languagebiased image classification is reported in the CLIP image classification task as a typographic attack ( Goh et al. , 2021 ) . As shown in Figure 1 , when a written word is superimposed on an image , the CLIP misclassifies the image category as the superimposed word . Another example of language-biased image classification is reported in the context of the Visual Question Answering ( VQA ) task ( Antol et al. , 2015 ; Lu et al. , 2016 ; Fukui et al. , 2016 ) . This task ’ s goal is to answer questions about visual content , e.g. , a text question ” What color is the dog ? ” for a dog image . The task also needs joint learning of language and vision , i.e. , an image encoder to recognize what the visual content is and a text encoder to understand the question . However , joint learning tends to be biased by superficial correlations between texts and images in the training data and ignore visual input to solve a task ( Agrawal et al. , 2016 ) . For instance , even when a model sees a green ( untypical ) banana and is asked about its color , the model tends to answer ” the color is yellow ” while ignoring the actual image content . Based on the observations , recent VQA models have attempted to remove the biases ( Agrawal et al. , 2018 ; Ramakrishnan et al. , 2018 ) . Picture-word interference in humans : It is essential to understand how the language-biased processing in artificial models is related to the human mechanisms discussed in the cognitive science literature . This subsection reviews the functional connection between text/image encoders in artificial models and picture-word interference in humans ( Figure 2 ) . Picture-word interference refers to language-biased recognition for word-superimposed images , and previous works in cognitive science have investigated this effect using various experimental paradigms ( reviewed by Bürki et al . ( 2020 ) ) . Here , we focus on interference when a participant observes a word-superimposed image and answers the image category for the image to compare the underlying cognitive mechanisms with artificial models ( Rosinski , 1977 ) . The discussion about what type of mechanisms mediate picture-word interference is still controversial in the cognitive science literature , especially the stage at which the interference emerges . 
However , we can summarize common functional mechanisms shared across different theoretical views and ask whether artificial models acquire these fundamental components in the current study . First , most views assume that visual words and images activate common semantic representations , i.e. , a written word ” dog ” activates similar semantic representations to an image ” dog. ” Second , they commonly assume that interference emerges due to some types of selection process . The type of selection process is extensively debated and differs depending on the views . One is the lexical selection hypothesis , and the selection works to compare the semantic representation competitively ( Levelt et al. , 1999 ; Roelofs , 1992 ) . This selection relies on relative activation in semantic representations , and picture-word interference emerges because the activation by words induces misselection when it is semantically close to the activation by images . However , the lexical selection is inconsistent with some behavioral findings , e.g. , the activation amplitude does not always correspond to the effect size of interference . Another theoretical view assumes the response selection , which is closer to the decision stage than the lexical selection ( Finkbeiner & Caramazza , 2006 ) . Since the response selection assumes additional processing for relative activated representations , it can explain the inconsistency of the phenomenon with the lexical selection assumption . We first ask whether one observes patterns of interference at the behavioral level that are similar to humans and then study more precisely the learned representations to see if the mechanisms proposed to explain human interference could also be at play in the artificial model . To investigate machine mechanisms in terms of the cognitive science findings , we regard an image encoder in the artificial model as the processing to activate semantic representations from images ( Figure 2 ) . We also consider a text encoder as the selection process , which tries to find the answers according to the task objectives ( superordinate/basic selection ) .
This paper proposes a testbed for picture-word interference in image classification models. The authors specifically investigate the performance of CLIP to understand whether such language-biased model shows similar interferences to the ones observed in humans. The dataset consists of images superimposed with words, representing two different category levels (basic and superordinate). The experiments on CLIP show that the model is affected by superimposed words, but independently of their semantic relationship with the underlying image. Further analysis shows that CLIP image representations are different from the ones of superimposed words, while this does not happen in two ImageNet-based CNNs.
science.mup_single_document_summarization.8732
[ { "content": "Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.\n\nTitle of the Paper:\nEvaluating Language-biased image classification based on semantic compositionality\n\nMain Content:\n1 INTRODUCTION . Language is not only a fundamental tool enabling humans to communicate with each other : it also shapes other aspects of perception and cognition , ranging from influencing visual processing to enabling abstract thinking ( Vygotsky , 1934 ; Miller , 1951 ; Waxman & Markow , 1995 ; Colas et al. , 2021 ) . Analogous to this human ability , recent artificial intelligence technology has also utilized language to drive the training of visual skills in machines . For instance , the OpenAI team introduced CLIP ( Contrastive Language–Image Pre-training ) , consisting of joint learning of language and vision ( Radford et al. , 2021 ) . The CLIP model efficiently learns visual concepts from natural language supervision and can be applied to various visual tasks in a zero-shot manner . Joint learning of language and vision has also been utilized in the Visual Question Answering ( VQA ) literature , where artificial models are asked to answer questions about visual content ( Antol et al. , 2015 ; Lu et al. , 2016 ; Fukui et al. , 2016 ) . While joint training with language effectively fosters the acquisition of general abilities for visual recognition/understanding , its abstraction can produce biased recognition for either humans or machines . The Stroop effect known in human cognitive science literature is one of such biases ( Stroop , 1935 ) . When human participants observe a mismatched word , where the word “ red ” is printed with the “ blue ” color , and are asked to answer the “ color ” of the word , their response to the word can be slower than when they see the word “ red ” with the “ red ” color . In addition to the color-word interference , language also interferes with human image category recognition , known as picture-word interference ( Rosinski , 1977 ; Lupker , 1979 ) . When a participant observes an image coupled with an incongruent category word ( e.g. , Figure 1 , right ) , the image categorization speed can be delayed . Similar to the effect in humans , recent deep learning systems jointly trained on language and vision , e.g. , CLIP , also show object recognition interference when words are embedded in images ( Goh et al. , 2021 ) . Specifically , by simply adding a handwritten word to an image , the object recognition can be biased to the superimposed word . A critical aspect of picture-word interference in humans is that it depends on the semantic relationship between images and superimposed words . For instance , the reaction time on the image classification for word-superimposed images is slower when the image category is semantically similar to the word one ( Rosinski , 1977 ) . Based on the accumulated findings , previous works have commonly suggested that at least two different processes mediate the effect in humans , as introduced in detail in Section 2.2 . Specifically , when a participant observes a word-superimposed image , an activation process synthesizes semantic representations corresponding to the superimposed word and image ( Figure 2 , activation process ) . Based on the representations , it has been assumed that a selection process decides which possible activation is the answer for the current task . ( Figure 2 , selection process ) . 
The activated representation is shared across visual word forms and images , and the dual activation by words and images confuses the decision in the selection process . These human cognitive findings raise the question of whether interferences in artificial models have some similarities with those observed in humans . Indeed , some artificial models show languagebiased image classification , but one possibility is that the bias might emerge irrespective of the semantic interaction between words and images . When we consider a model with text and image encoders for picture-word interference ( Figure 2 ) , we can regard the functional role of the image encoder as synthesizing semantic representations for word-superimposed images . Although pictureword interference in humans depends on shared semantic representations of written words and images , this is not the only mechanism that can produce interference . For instance , the image encoder of the model may assign the written word ” dog ” a different semantic representation than the visual image ” dog ” and show strong preferential activation to written words while ignoring image contents . Since joint learning of language and vision is expected to enable machines to acquire highly interacted representations of texts and images to solve various general tasks , observing such independent representations and a bias for one of them can be problematic and show a limit of the joint training process . In this study , we import methodological tools from the cognitive science literature to assess biases of artificial models in image classification for word-superimposed images . Specifically , we introduce a paradigm to evaluate picture-word interference , in which an agent has to predict the image superordinate/basic category for images with superordinate/basic words ( Figure 1 ) . Our benchmark dataset is a set of images with superimposed words and consists of a mixture of natural image datasets ( Cichy et al. , 2016 ; Mohsenzadeh et al. , 2019 ) and hierarchical word labels ( Lin et al. , 2014 ; Krizhevsky et al. , 2009 ) . Our benchmark task aims to test artificial models with a text encoder and an image encoder ( Figure 2 ) . The task enables us to test 1 ) whether language-biased decisions happen across different category levels and 2 ) the extent to which picture-word interference in artificial models depends on the semantic similarity between superimposed words and images . Furthermore , by importing a methodological tool from the neuroscience literature , called representational similarity analysis ( RSA ) ( Kriegeskorte et al. , 2008 ) , we can also test 3 ) whether the image encoder has a common semantic representation for superimposed words and images . Out of various joint learning models of language and vision , the present study evaluates the CLIP model using the dataset because the model consists of dual processing of visual and text encoders and is developed to be applied to various general visual tasks without fine-tuning the dataset . To anticipate the result , we show that 1 ) presenting words disturbs the CLIP image classification even across different category levels , 2 ) the effect does not depend on the semantic relationship between images and words , and 3 ) the superimposed word representation in the CLIP image encoder is not shared with the image representation . The present study provides an open-source code to reproduce the results and the full dataset . 2 RELATED WORKS . 
Joint learning of language and vision in artificial models : Some recent works in machine learning have reported language-biased image recognition for artificial models trained with joint learning of language and vision . One example is the CLIP model ( Radford et al. , 2021 ) . The CLIP architecture consists of an image encoder and a text encoder ( Figure 2 ) . The image encoder is based on a ResNet ( He et al. , 2016 ) or Vision Transformer ( Dosovitskiy et al. , 2020 ) , and the text encoder is a Language Transformer model ( Vaswani et al. , 2017 ) . The model is trained on a large dataset of image-text pairs with contrastive objectives , where it learns to align text and image representations for each pair . The pre-trained model can be used for various visual tasks in a zero-shot manner . For instance , when the model is applied for the image classification task , the learned text encoder synthesizes a zero-shot linear classifier for the descriptions , including classification labels , e.g. , ” a photograph of [ label ] ” ( Figure 2 ) . The learned image encoder outputs the representation for an input image , and the text classifier decides which label representation is the closest to the image representation . Languagebiased image classification is reported in the CLIP image classification task as a typographic attack ( Goh et al. , 2021 ) . As shown in Figure 1 , when a written word is superimposed on an image , the CLIP misclassifies the image category as the superimposed word . Another example of language-biased image classification is reported in the context of the Visual Question Answering ( VQA ) task ( Antol et al. , 2015 ; Lu et al. , 2016 ; Fukui et al. , 2016 ) . This task ’ s goal is to answer questions about visual content , e.g. , a text question ” What color is the dog ? ” for a dog image . The task also needs joint learning of language and vision , i.e. , an image encoder to recognize what the visual content is and a text encoder to understand the question . However , joint learning tends to be biased by superficial correlations between texts and images in the training data and ignore visual input to solve a task ( Agrawal et al. , 2016 ) . For instance , even when a model sees a green ( untypical ) banana and is asked about its color , the model tends to answer ” the color is yellow ” while ignoring the actual image content . Based on the observations , recent VQA models have attempted to remove the biases ( Agrawal et al. , 2018 ; Ramakrishnan et al. , 2018 ) . Picture-word interference in humans : It is essential to understand how the language-biased processing in artificial models is related to the human mechanisms discussed in the cognitive science literature . This subsection reviews the functional connection between text/image encoders in artificial models and picture-word interference in humans ( Figure 2 ) . Picture-word interference refers to language-biased recognition for word-superimposed images , and previous works in cognitive science have investigated this effect using various experimental paradigms ( reviewed by Bürki et al . ( 2020 ) ) . Here , we focus on interference when a participant observes a word-superimposed image and answers the image category for the image to compare the underlying cognitive mechanisms with artificial models ( Rosinski , 1977 ) . The discussion about what type of mechanisms mediate picture-word interference is still controversial in the cognitive science literature , especially the stage at which the interference emerges . 
However , we can summarize common functional mechanisms shared across different theoretical views and ask whether artificial models acquire these fundamental components in the current study . First , most views assume that visual words and images activate common semantic representations , i.e. , a written word ” dog ” activates similar semantic representations to an image ” dog. ” Second , they commonly assume that interference emerges due to some types of selection process . The type of selection process is extensively debated and differs depending on the views . One is the lexical selection hypothesis , and the selection works to compare the semantic representation competitively ( Levelt et al. , 1999 ; Roelofs , 1992 ) . This selection relies on relative activation in semantic representations , and picture-word interference emerges because the activation by words induces misselection when it is semantically close to the activation by images . However , the lexical selection is inconsistent with some behavioral findings , e.g. , the activation amplitude does not always correspond to the effect size of interference . Another theoretical view assumes the response selection , which is closer to the decision stage than the lexical selection ( Finkbeiner & Caramazza , 2006 ) . Since the response selection assumes additional processing for relative activated representations , it can explain the inconsistency of the phenomenon with the lexical selection assumption . We first ask whether one observes patterns of interference at the behavioral level that are similar to humans and then study more precisely the learned representations to see if the mechanisms proposed to explain human interference could also be at play in the artificial model . To investigate machine mechanisms in terms of the cognitive science findings , we regard an image encoder in the artificial model as the processing to activate semantic representations from images ( Figure 2 ) . We also consider a text encoder as the selection process , which tries to find the answers according to the task objectives ( superordinate/basic selection ) .", "role": "user" }, { "content": "This paper proposes a testbed for picture-word interference in image classification models. The authors specifically investigate the performance of CLIP to understand whether such language-biased model shows similar interferences to the ones observed in humans. The dataset consists of images superimposed with words, representing two different category levels (basic and superordinate). The experiments on CLIP show that the model is affected by superimposed words, but independently of their semantic relationship with the underlying image. Further analysis shows that CLIP image representations are different from the ones of superimposed words, while this does not happen in two ImageNet-based CNNs.", "role": "assistant" } ]
science.bioasq_list_qa
Examine the following excerpts from biomedical literature. Your objective is to extract relevant information to answer the posed question. Format your answer as a JSON array. Context: The world's first prospective controlled studies with the biologic human anti-IL4R antibody dupilumab for the indication "atopic dermatitis" were published in 2014. These motivated (1) to extend the studies to dupilumab and (2) to clinically test antagonization of other target molecules of TH2 polarized, atopic inflammation, e.g., IL-13, IL-31, IL-22, TSLP, and CRTH2. A Among the recently developed antiasthma biologic drugs, the monoclonal antibody dupilumab is very promising because of its ability to inhibit the biological effects of both IL-4 and IL-13. Indeed, dupilumab prevents IL-4/13 interactions with the α-subunit of the IL-4 receptor complex. BACKGROUND: Dupilumab, a fully human monoclonal antibody that blocks interleukin-4 and interleukin-13, has shown efficacy in patients with asthma and elevated eosinophil levels. In early-phase trials, dupilumab, a fully human mAb targeting IL-4 receptor α, markedly improved disease activity, but the effect of IL-4/IL-13 blockade on AD at the molecular level has not been characterized. Examples include the efficacy of omalizumab in patients with severe refractory atopic asthma characterized by raised serum total IgE, mepolizumab, reslizumab, and benralizumab in patients with recurrent eosinophilic exacerbations characterized by blood and sputum eosinophilia despite high doses of corticosteroids, and lebrikizumab, pitrakinra, dupilumab, and tralokinumab that target the IL-4/IL-13 signalling pathways in patients with eosinophilic asthma or raised serum periostin. BACKGROUND: Moderate-to-severe asthma remains poorly treated. We evaluated the efficacy and safety of dupilumab (SAR231893/REGN668), a fully human monoclonal antibody to the alpha subunit of the interleukin-4 receptor, in patients with persistent, moderate-to-severe asthma and elevated eosinophil levels. With respect to immune dysregulation, dupilumab, a fully human monoclonal antibody directed at the IL-4 receptor alpha subunit was recently shown to be effective in treating adults with moderate-to-severe AD. Dupilumab, a fully human monoclonal antibody that blocks interleukin-4 and interleukin-13, has shown efficacy in patients with asthma and elevated eosinophil levels. We evaluated the efficacy and safety of dupilumab (SAR231893/REGN668), a fully human monoclonal antibody to the alpha subunit of the interleukin-4 receptor, in patients with persistent, moderate-to-severe asthma and elevated eosinophil levels. Indeed, dupilumab prevents IL-4/13 interactions with the α-subunit of the IL-4 receptor complex. We evaluated the efficacy and safety of dupilumab (SAR231893/REGN668), a fully human monoclonal antibody to the alpha subunit of the interleukin-4 receptor, in patients with persistent, moderate-to-severe asthma and elevated eosinophil levels. BACKGROUND: Dupilumab, a fully human monoclonal antibody that blocks interleukin-4 and interleukin-13, has shown efficacy in patients with asthma and elevated eosinophil levels. The blockade by dupilumab of these key drivers of type 2 helper T-cell (Th2)-mediated inflammation could help in the treatment of related diseases, including atopic dermatitis. Indeed, dupilumab prevents IL-4/13 interactions with the α-subunit of the IL-4 receptor complex. 
We evaluated the efficacy and safety of dupilumab (SAR231893/REGN668), a fully human monoclonal antibody to the alpha subunit of the interleukin-4 receptor, in patients with persistent, moderate-to-severe asthma and elevated eosinophil levels. Dupilumab, a fully human monoclonal antibody that blocks interleukin-4 and interleukin-13, has shown efficacy in patients with asthma and elevated eosinophil levels. Question: Which interleukins are inhibited by Dupilumab?
["interleukin-4", "interleukin-13"]
science.bioasq_list_qa.18
[ { "content": "Examine the following excerpts from biomedical literature. Your objective is to extract relevant information to answer the posed question. Format your answer as a JSON array.\n\nContext: The world's first prospective controlled studies with the biologic human anti-IL4R antibody dupilumab for the indication \"atopic dermatitis\" were published in 2014. These motivated (1) to extend the studies to dupilumab and (2) to clinically test antagonization of other target molecules of TH2 polarized, atopic inflammation, e.g., IL-13, IL-31, IL-22, TSLP, and CRTH2. A\nAmong the recently developed antiasthma biologic drugs, the monoclonal antibody dupilumab is very promising because of its ability to inhibit the biological effects of both IL-4 and IL-13. Indeed, dupilumab prevents IL-4/13 interactions with the α-subunit of the IL-4 receptor complex.\nBACKGROUND: Dupilumab, a fully human monoclonal antibody that blocks interleukin-4 and interleukin-13, has shown efficacy in patients with asthma and elevated eosinophil levels.\nIn early-phase trials, dupilumab, a fully human mAb targeting IL-4 receptor α, markedly improved disease activity, but the effect of IL-4/IL-13 blockade on AD at the molecular level has not been characterized.\nExamples include the efficacy of omalizumab in patients with severe refractory atopic asthma characterized by raised serum total IgE, mepolizumab, reslizumab, and benralizumab in patients with recurrent eosinophilic exacerbations characterized by blood and sputum eosinophilia despite high doses of corticosteroids, and lebrikizumab, pitrakinra, dupilumab, and tralokinumab that target the IL-4/IL-13 signalling pathways in patients with eosinophilic asthma or raised serum periostin.\nBACKGROUND: Moderate-to-severe asthma remains poorly treated. We evaluated the efficacy and safety of dupilumab (SAR231893/REGN668), a fully human monoclonal antibody to the alpha subunit of the interleukin-4 receptor, in patients with persistent, moderate-to-severe asthma and elevated eosinophil levels.\nWith respect to immune dysregulation, dupilumab, a fully human monoclonal antibody directed at the IL-4 receptor alpha subunit was recently shown to be effective in treating adults with moderate-to-severe AD.\nDupilumab, a fully human monoclonal antibody that blocks interleukin-4 and interleukin-13, has shown efficacy in patients with asthma and elevated eosinophil levels.\nWe evaluated the efficacy and safety of dupilumab (SAR231893/REGN668), a fully human monoclonal antibody to the alpha subunit of the interleukin-4 receptor, in patients with persistent, moderate-to-severe asthma and elevated eosinophil levels.\nIndeed, dupilumab prevents IL-4/13 interactions with the α-subunit of the IL-4 receptor complex.\nWe evaluated the efficacy and safety of dupilumab (SAR231893/REGN668), a fully human monoclonal antibody to the alpha subunit of the interleukin-4 receptor, in patients with persistent, moderate-to-severe asthma and elevated eosinophil levels.\nBACKGROUND: Dupilumab, a fully human monoclonal antibody that blocks interleukin-4 and interleukin-13, has shown efficacy in patients with asthma and elevated eosinophil levels. 
The blockade by dupilumab of these key drivers of type 2 helper T-cell (Th2)-mediated inflammation could help in the treatment of related diseases, including atopic dermatitis.\nIndeed, dupilumab prevents IL-4/13 interactions with the α-subunit of the IL-4 receptor complex.\nWe evaluated the efficacy and safety of dupilumab (SAR231893/REGN668), a fully human monoclonal antibody to the alpha subunit of the interleukin-4 receptor, in patients with persistent, moderate-to-severe asthma and elevated eosinophil levels.\nDupilumab, a fully human monoclonal antibody that blocks interleukin-4 and interleukin-13, has shown efficacy in patients with asthma and elevated eosinophil levels.\n\nQuestion: Which interleukins are inhibited by Dupilumab?", "role": "user" }, { "content": "[\"interleukin-4\", \"interleukin-13\"]", "role": "assistant" } ]
science.discomat_te
| | Photochromic glasses | Photochromic glasses | Thermochromic glasses | Thermochromic glasses | Thermochromic glasses | | | 1 | 2 [9] | 3 [4] | 4 | 5 | | SiO2 | 67.0 | 59 | - | - | - | | B2O3 | 22.7 | 21 | 44.5 | 32.5 | 55.0 | | La2O3 | - | - | 44.5 | 44.5 | - | | Al2O3 | 4.0 | 9 | 0.3 | 0.3 | 18.0 | | CaO | - | - | 9.8 | 9.8 | - | | CdO | 2.0 | - | 0.9 | 0.9 | 0.9 | | M x O y | - | - | - | 12 TiO2 | - | | ZnO | - | - | - | - | - | | Li2O | - | 2.5 | - | - | 18.0 | | Na | 6.2 NaCl, 3.5 Na2SiF6 | - | - | - | - | | K2O | - | 8.5 | - | - | - | | Ag | - | 0.5 | 1.25 | 1.25 | 1.25 | | Cl | 0.5 | ~ | 0.5 | 0.5 | 0.5 | | CuO | 1.65 | 0.015 | 0.032 | 0.032 | 0.032 | | SnO2 | 0.2 | | | | | Caption: As-batched glass compositions (wt%) Examine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance: {"value": "xx", "type": "Other"} {"value": "xx", "type": "Glass_Compound_Amount", "constituent": "xx", "unit": "xx", "material": "xx"} Process each cell sequentially, and for any unknown attributes, use "xx" as a placeholder. Each JSON should be on a new line. Output:
{"value": "1", "type": "Other"} {"value": "2", "type": "Other"} {"value": "3", "type": "Other"} {"value": "4", "type": "Other"} {"value": "5", "type": "Other"} {"value": "67.0", "type": "Glass_Compound_Amount", "constituent": "SiO2", "unit": "wt", "material": "1"} {"value": "59", "type": "Glass_Compound_Amount", "constituent": "SiO2", "unit": "wt", "material": "2 [9]"} {"value": "22.7", "type": "Glass_Compound_Amount", "constituent": "B2O3", "unit": "wt", "material": "1"} {"value": "21", "type": "Other"} {"value": "44.5", "type": "Other"} {"value": "32.5", "type": "Other"} {"value": "55.0", "type": "Other"} {"value": "44.5", "type": "Other"} {"value": "44.5", "type": "Other"} {"value": "4.0", "type": "Glass_Compound_Amount", "constituent": "Al2O3", "unit": "wt", "material": "1"} {"value": "9", "type": "Other"} {"value": "0.3", "type": "Other"} {"value": "0.3", "type": "Other"} {"value": "18.0", "type": "Other"} {"value": "9.8", "type": "Other"} {"value": "9.8", "type": "Other"} {"value": "2.0", "type": "Glass_Compound_Amount", "constituent": "CdO", "unit": "wt", "material": "1"} {"value": "0.9", "type": "Other"} {"value": "0.9", "type": "Other"} {"value": "0.9", "type": "Other"} {"value": "2.5", "type": "Other"} {"value": "18.0", "type": "Other"} {"value": "8.5", "type": "Other"} {"value": "0.5", "type": "Glass_Compound_Amount", "constituent": "Cl", "unit": "wt", "material": "1"} {"value": "1.25", "type": "Other"} {"value": "1.25", "type": "Other"} {"value": "1.25", "type": "Other"} {"value": "0.5", "type": "Other"} {"value": "0.5", "type": "Other"} {"value": "0.5", "type": "Other"} {"value": "0.5", "type": "Other"} {"value": "1.65", "type": "Glass_Compound_Amount", "constituent": "CuO", "unit": "wt", "material": "1"} {"value": "0.015", "type": "Other"} {"value": "0.032", "type": "Other"} {"value": "0.032", "type": "Other"} {"value": "0.032", "type": "Other"} {"value": "0.2", "type": "Glass_Compound_Amount", "constituent": "SnO2", "unit": "wt", "material": "1"}
science.discomat_te.102
[ { "content": "| | Photochromic glasses | Photochromic glasses | Thermochromic glasses | Thermochromic glasses | Thermochromic glasses |\n| | 1 | 2 [9] | 3 [4] | 4 | 5 |\n| SiO2 | 67.0 | 59 | - | - | - |\n| B2O3 | 22.7 | 21 | 44.5 | 32.5 | 55.0 |\n| La2O3 | - | - | 44.5 | 44.5 | - |\n| Al2O3 | 4.0 | 9 | 0.3 | 0.3 | 18.0 |\n| CaO | - | - | 9.8 | 9.8 | - |\n| CdO | 2.0 | - | 0.9 | 0.9 | 0.9 |\n| M x O y | - | - | - | 12 TiO2 | - |\n| ZnO | - | - | - | - | - |\n| Li2O | - | 2.5 | - | - | 18.0 |\n| Na | 6.2 NaCl, 3.5 Na2SiF6 | - | - | - | - |\n| K2O | - | 8.5 | - | - | - |\n| Ag | - | 0.5 | 1.25 | 1.25 | 1.25 |\n| Cl | 0.5 | ~ | 0.5 | 0.5 | 0.5 |\n| CuO | 1.65 | 0.015 | 0.032 | 0.032 | 0.032 |\n| SnO2 | 0.2 | | | | |\n\nCaption: As-batched glass compositions (wt%)\n\nExamine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:\n{\"value\": \"xx\", \"type\": \"Other\"}\n{\"value\": \"xx\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"xx\", \"unit\": \"xx\", \"material\": \"xx\"}\n\nProcess each cell sequentially, and for any unknown attributes, use \"xx\" as a placeholder. Each JSON should be on a new line.\n\nOutput:", "role": "user" }, { "content": "{\"value\": \"1\", \"type\": \"Other\"}\n{\"value\": \"2\", \"type\": \"Other\"}\n{\"value\": \"3\", \"type\": \"Other\"}\n{\"value\": \"4\", \"type\": \"Other\"}\n{\"value\": \"5\", \"type\": \"Other\"}\n{\"value\": \"67.0\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"SiO2\", \"unit\": \"wt\", \"material\": \"1\"}\n{\"value\": \"59\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"SiO2\", \"unit\": \"wt\", \"material\": \"2 [9]\"}\n{\"value\": \"22.7\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"B2O3\", \"unit\": \"wt\", \"material\": \"1\"}\n{\"value\": \"21\", \"type\": \"Other\"}\n{\"value\": \"44.5\", \"type\": \"Other\"}\n{\"value\": \"32.5\", \"type\": \"Other\"}\n{\"value\": \"55.0\", \"type\": \"Other\"}\n{\"value\": \"44.5\", \"type\": \"Other\"}\n{\"value\": \"44.5\", \"type\": \"Other\"}\n{\"value\": \"4.0\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Al2O3\", \"unit\": \"wt\", \"material\": \"1\"}\n{\"value\": \"9\", \"type\": \"Other\"}\n{\"value\": \"0.3\", \"type\": \"Other\"}\n{\"value\": \"0.3\", \"type\": \"Other\"}\n{\"value\": \"18.0\", \"type\": \"Other\"}\n{\"value\": \"9.8\", \"type\": \"Other\"}\n{\"value\": \"9.8\", \"type\": \"Other\"}\n{\"value\": \"2.0\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"CdO\", \"unit\": \"wt\", \"material\": \"1\"}\n{\"value\": \"0.9\", \"type\": \"Other\"}\n{\"value\": \"0.9\", \"type\": \"Other\"}\n{\"value\": \"0.9\", \"type\": \"Other\"}\n{\"value\": \"2.5\", \"type\": \"Other\"}\n{\"value\": \"18.0\", \"type\": \"Other\"}\n{\"value\": \"8.5\", \"type\": \"Other\"}\n{\"value\": \"0.5\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Cl\", \"unit\": \"wt\", \"material\": \"1\"}\n{\"value\": \"1.25\", \"type\": \"Other\"}\n{\"value\": \"1.25\", \"type\": \"Other\"}\n{\"value\": \"1.25\", \"type\": \"Other\"}\n{\"value\": \"0.5\", \"type\": \"Other\"}\n{\"value\": \"0.5\", \"type\": \"Other\"}\n{\"value\": \"0.5\", \"type\": \"Other\"}\n{\"value\": \"0.5\", \"type\": \"Other\"}\n{\"value\": \"1.65\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"CuO\", \"unit\": \"wt\", \"material\": \"1\"}\n{\"value\": \"0.015\", \"type\": \"Other\"}\n{\"value\": \"0.032\", \"type\": \"Other\"}\n{\"value\": 
\"0.032\", \"type\": \"Other\"}\n{\"value\": \"0.032\", \"type\": \"Other\"}\n{\"value\": \"0.2\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"SnO2\", \"unit\": \"wt\", \"material\": \"1\"}", "role": "assistant" } ]
science.evidence_inference
Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as: - Treatment: The specific medical procedure or drug being evaluated. - Control: The standard or placebo used for comparison. If absent, use `null`. - Result: The health outcome being assessed. - Impact: The observed effect of the treatment on the result, categorized as: ("increased", "decreased", "no change"). - Justification: The supporting details for the observed impact. Format the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>]. Abstract: ABSTRACT.BACKGROUND: Preventive pharmacotherapy for migraine is not satisfactory because of the low efficacy/tolerability ratio of many available drugs. Novel and more efficient preventive strategies are therefore warranted. Abnormal excitability of cortical areas appears to play a pivotal role in migraine pathophysiology. Transcranial direct current stimulation (tDCS) is a non-invasive and safe technique that is able to durably modulate the activity of the underlying cerebral cortex, and is being tested in various medical indications. The results of small open studies using tDCS in migraine prophylaxis are conflicting, possibly because the optimal stimulation settings and the brain targets were not well chosen. We have previously shown that the cerebral cortex, especially the visual cortex, is hyperresponsive in migraine patients between attacks and provided evidence from evoked potential studies that this is due to a decreased cortical preactivation level. If one accepts this concept, anodal tDCS over the visual cortex may have therapeutic potentials in migraine prevention, as it is able to increase neuronal firing. ABSTRACT.OBJECTIVE: To study the effects of anodal tDCS on visual cortex activity in healthy volunteers (HV) and episodic migraine without aura patients (MoA), and its potentials for migraine prevention. ABSTRACT.METHODS: We recorded pattern-reversal visual evoked potentials (VEP) before and after a 15-min session of anodal tDCS over the visual cortex in 11 HV and 13 MoA interictally. Then 10 MoA patients reporting at least 4 attacks/month subsequently participated in a therapeutic study, and received 2 similar sessions of tDCS per week for 8 weeks as migraine preventive therapy. ABSTRACT.RESULTS: In HV as well as in MoA, anodal tDCS transiently increased habituation of the VEP N1P1 component. VEP amplitudes were not modified by tDCS. Preventive treatment with anodal tDCS turned out to be beneficial in MoA: migraine attack frequency, migraine days, attack duration and acute medication intake significantly decreased during the treatment period compared to pre-treatment baseline (all p < 0.05), and this benefit persisted on average 4.8 weeks after the end of tDCS. ABSTRACT.CONCLUSIONS: Anodal tDCS over the visual cortex is thus able to increase habituation to repetitive visual stimuli in healthy volunteers and in episodic migraineurs, who on average lack habituation interictally. Moreover, 2 weekly sessions of anodal tDCS had a significant preventive anti- migraine effect, proofing the concept that the low preactivation level of the visual cortex in migraine patients can be corrected by an activating neurostimulation. The therapeutic results indicate that a larger sham-controlled trial using the same tDCS protocol is worthwhile. Extracted Data:
[["Transcranial direct current stimulation (tDCS)", "baseline", "acute medication intake", "significantly decreased", "Preventive treatment with anodal tDCS turned out to be beneficial in MoA: migraine attack frequency, migraine days, attack duration and acute medication intake significantly decreased during the treatment period compared to pre-treatment baseline (all p \u003c 0.05), and this benefit persisted on average 4.8 weeks after the end of tDCS."], ["Transcranial direct current stimulation (tDCS)", "baseline", "migraine attack frequency, migraine days, attack duration ", "significantly decreased", "Preventive treatment with anodal tDCS turned out to be beneficial in MoA: migraine attack frequency, migraine days, attack duration and acute medication intake significantly decreased during the treatment period compared to pre-treatment baseline (all p \u003c 0.05),"]]
science.evidence_inference.131
[ { "content": "Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:\n\n- Treatment: The specific medical procedure or drug being evaluated.\n- Control: The standard or placebo used for comparison. If absent, use `null`.\n- Result: The health outcome being assessed.\n- Impact: The observed effect of the treatment on the result, categorized as: (\"increased\", \"decreased\", \"no change\").\n- Justification: The supporting details for the observed impact.\n\nFormat the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].\n\nAbstract:\nABSTRACT.BACKGROUND: Preventive pharmacotherapy for migraine is not satisfactory because of the low efficacy/tolerability ratio of many available drugs. Novel and more efficient preventive strategies are therefore warranted. Abnormal excitability of cortical areas appears to play a pivotal role in migraine pathophysiology. Transcranial direct current stimulation (tDCS) is a non-invasive and safe technique that is able to durably modulate the activity of the underlying cerebral cortex, and is being tested in various medical indications. The results of small open studies using tDCS in migraine prophylaxis are conflicting, possibly because the optimal stimulation settings and the brain targets were not well chosen. We have previously shown that the cerebral cortex, especially the visual cortex, is hyperresponsive in migraine patients between attacks and provided evidence from evoked potential studies that this is due to a decreased cortical preactivation level. If one accepts this concept, anodal tDCS over the visual cortex may have therapeutic potentials in migraine prevention, as it is able to increase neuronal firing.\n\nABSTRACT.OBJECTIVE: To study the effects of anodal tDCS on visual cortex activity in healthy volunteers (HV) and episodic migraine without aura patients (MoA), and its potentials for migraine prevention.\n\nABSTRACT.METHODS: We recorded pattern-reversal visual evoked potentials (VEP) before and after a 15-min session of anodal tDCS over the visual cortex in 11 HV and 13 MoA interictally. Then 10 MoA patients reporting at least 4 attacks/month subsequently participated in a therapeutic study, and received 2 similar sessions of tDCS per week for 8 weeks as migraine preventive therapy.\n\nABSTRACT.RESULTS: In HV as well as in MoA, anodal tDCS transiently increased habituation of the VEP N1P1 component. VEP amplitudes were not modified by tDCS. Preventive treatment with anodal tDCS turned out to be beneficial in MoA: migraine attack frequency, migraine days, attack duration and acute medication intake significantly decreased during the treatment period compared to pre-treatment baseline (all p < 0.05), and this benefit persisted on average 4.8 weeks after the end of tDCS.\n\nABSTRACT.CONCLUSIONS: Anodal tDCS over the visual cortex is thus able to increase habituation to repetitive visual stimuli in healthy volunteers and in episodic migraineurs, who on average lack habituation interictally. Moreover, 2 weekly sessions of anodal tDCS had a significant preventive anti- migraine effect, proofing the concept that the low preactivation level of the visual cortex in migraine patients can be corrected by an activating neurostimulation. 
The therapeutic results indicate that a larger sham-controlled trial using the same tDCS protocol is worthwhile.\n\nExtracted Data:", "role": "user" }, { "content": "[[\"Transcranial direct current stimulation (tDCS)\", \"baseline\", \"acute medication intake\", \"significantly decreased\", \"Preventive treatment with anodal tDCS turned out to be beneficial in MoA: migraine attack frequency, migraine days, attack duration and acute medication intake significantly decreased during the treatment period compared to pre-treatment baseline (all p \\u003c 0.05), and this benefit persisted on average 4.8 weeks after the end of tDCS.\"], [\"Transcranial direct current stimulation (tDCS)\", \"baseline\", \"migraine attack frequency, migraine days, attack duration \", \"significantly decreased\", \"Preventive treatment with anodal tDCS turned out to be beneficial in MoA: migraine attack frequency, migraine days, attack duration and acute medication intake significantly decreased during the treatment period compared to pre-treatment baseline (all p \\u003c 0.05),\"]]", "role": "assistant" } ]
science.biored_ner
Analyze the provided biomedical abstract and identify distinct entities. Focus on the specified categories: ["CellLine", "Variant", "Species", "Disease", "Chemical", "Gene"]. Ensure the output is strictly the JSON object. Source Text: Localisation of the Becker muscular dystrophy gene on the short arm of the X chromosome by linkage to cloned DNA sequences. A linkage study in 30 Becker muscular dystrophy (BMD) kindreds using three cloned DNA sequences from the X chromosome which demonstrate restriction fragment length polymorphisms (RFLPs), suggests that the BMD gene is located on the short arm of the X chromosome, in the p21 region. The genes for Becker and Duchenne dystrophies must therefore be closely linked, if not allelic, and any future DNA probes found to be of practical use in one disorder should be equally applicable to the other. The linkage analysis also provides data on the frequency of recombination along the short arm of the X chromosome, and across the centromeric region..
{"CellLine": [], "Chemical": [], "Disease": ["BMD", "Becker and Duchenne dystrophies", "Becker muscular dystrophy"], "Gene": ["Becker muscular dystrophy gene", "BMD"], "Species": [], "Variant": []}
science.biored_ner.15
[ { "content": "Analyze the provided biomedical abstract and identify distinct entities. Focus on the specified categories: [\"CellLine\", \"Variant\", \"Species\", \"Disease\", \"Chemical\", \"Gene\"].\n\nEnsure the output is strictly the JSON object.\n\nSource Text:\n\nLocalisation of the Becker muscular dystrophy gene on the short arm of the X chromosome by linkage to cloned DNA sequences. A linkage study in 30 Becker muscular dystrophy (BMD) kindreds using three cloned DNA sequences from the X chromosome which demonstrate restriction fragment length polymorphisms (RFLPs), suggests that the BMD gene is located on the short arm of the X chromosome, in the p21 region. The genes for Becker and Duchenne dystrophies must therefore be closely linked, if not allelic, and any future DNA probes found to be of practical use in one disorder should be equally applicable to the other. The linkage analysis also provides data on the frequency of recombination along the short arm of the X chromosome, and across the centromeric region..", "role": "user" }, { "content": "{\"CellLine\": [], \"Chemical\": [], \"Disease\": [\"BMD\", \"Becker and Duchenne dystrophies\", \"Becker muscular dystrophy\"], \"Gene\": [\"Becker muscular dystrophy gene\", \"BMD\"], \"Species\": [], \"Variant\": []}", "role": "assistant" } ]
science.scifact_entailment
Determine the relationship between them and generate a JSON response. Scientific Claim: MUC1-C activates the NF-κB p65 signaling pathway by interacting with IκB kinase ß. Research Abstract: Nuclear factor-κB (NF-κB) is constitutively activated in diverse human malignancies by mechanisms that are not understood. The MUC1 oncoprotein is aberrantly overexpressed by most human carcinomas and, similarly to NF-κB, blocks apoptosis and induces transformation. This study demonstrates that overexpression of MUC1 in human carcinoma cells is associated with constitutive activation of NF-κB p65. We show that MUC1 interacts with the high-molecular-weight IκB kinase (IKK) complex in vivo and that the MUC1 cytoplasmic domain binds directly to IKKβ and IKKγ. Interaction of MUC1 with both IKKβ and IKKγ is necessary for IKKβ activation, resulting in phosphorylation and degradation of IκBα. Studies in non-malignant epithelial cells show that MUC1 is recruited to the TNF-R1 complex and interacts with IKKβ–IKKγ in response to TNFα stimulation. TNFα-induced recruitment of MUC1 is dependent on TRADD and TRAF2, but not the death-domain kinase RIP1. In addition, MUC1-mediated activation of IKKβ is dependent on TAK1 and TAB2. These findings indicate that MUC1 is important for physiological activation of IKKβ and that overexpression of MUC1, as found in human cancers, confers sustained induction of the IKKβ–NF-κB p65 pathway.
{"verdict": "SUPPORT", "evidence": ["These findings indicate that MUC1 is important for physiological activation of IKK\u03b2 and that overexpression of MUC1, as found in human cancers, confers sustained induction of the IKK\u03b2\u2013NF-\u03baB p65 pathway."]}
science.scifact_entailment.476
[ { "content": "Determine the relationship between them and generate a JSON response. Scientific Claim: \nMUC1-C activates the NF-κB p65 signaling pathway by interacting with IκB kinase ß.\n\nResearch Abstract:\nNuclear factor-κB (NF-κB) is constitutively activated in diverse human malignancies by mechanisms that are not understood.\nThe MUC1 oncoprotein is aberrantly overexpressed by most human carcinomas and, similarly to NF-κB, blocks apoptosis and induces transformation.\nThis study demonstrates that overexpression of MUC1 in human carcinoma cells is associated with constitutive activation of NF-κB p65.\nWe show that MUC1 interacts with the high-molecular-weight IκB kinase (IKK) complex in vivo and that the MUC1 cytoplasmic domain binds directly to IKKβ and IKKγ.\nInteraction of MUC1 with both IKKβ and IKKγ is necessary for IKKβ activation, resulting in phosphorylation and degradation of IκBα.\nStudies in non-malignant epithelial cells show that MUC1 is recruited to the TNF-R1 complex and interacts with IKKβ–IKKγ in response to TNFα stimulation.\nTNFα-induced recruitment of MUC1 is dependent on TRADD and TRAF2, but not the death-domain kinase RIP1.\nIn addition, MUC1-mediated activation of IKKβ is dependent on TAK1 and TAB2.\nThese findings indicate that MUC1 is important for physiological activation of IKKβ and that overexpression of MUC1, as found in human cancers, confers sustained induction of the IKKβ–NF-κB p65 pathway.", "role": "user" }, { "content": "{\"verdict\": \"SUPPORT\", \"evidence\": [\"These findings indicate that MUC1 is important for physiological activation of IKK\\u03b2 and that overexpression of MUC1, as found in human cancers, confers sustained induction of the IKK\\u03b2\\u2013NF-\\u03baB p65 pathway.\"]}", "role": "assistant" } ]
science.evidence_inference
Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as: - Treatment: The specific medical procedure or drug being evaluated. - Control: The standard or placebo used for comparison. If absent, use `null`. - Result: The health outcome being assessed. - Impact: The observed effect of the treatment on the result, categorized as: ("increased", "decreased", "no change"). - Justification: The supporting details for the observed impact. Format the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>]. Abstract: ABSTRACT.BACKGROUND: Etoricoxib is a highly selective COX-2 inhibitor which was evaluated for the treatment of rheumatoid arthritis (RA). ABSTRACT.METHODS: Double-blind, randomized, placebo and active comparator-controlled, 12-week study conducted at 67 sites in 28 countries. Eligible patients were chronic NSAID users who demonstrated a clinical worsening of arthritis upon withdrawal of prestudy NSAIDs. Patients received either placebo, etoricoxib 90 mg once daily, or naproxen 500 mg twice daily (2:2:1 allocation ratio). Primary efficacy measures included direct assessment of arthritis by counts of tender and swollen joints, and patient and investigator global assessments of disease activity. Key secondary measures included the Stanford Health Assessment Questionnaire, patient global assessment of pain, and the percentage of patients who achieved ACR20 responder criteria response (a composite of pain, inflammation, function, and global assessments). Tolerability was assessed by adverse events and routine laboratory evaluations. ABSTRACT.RESULTS: 1171 patients were screened, 891 patients were randomized (N = 357 for placebo, N = 353 for etoricoxib, and N = 181 for naproxen), and 687 completed 12 weeks of treatment (N = 242 for placebo, N = 294 for etoricoxib, and N = 151 for naproxen). Compared with patients receiving placebo, patients receiving etoricoxib and naproxen showed significant improvements in all efficacy endpoints (p<0.05). Treatment responses were similar between the etoricoxib and naproxen groups for all endpoints. The percentage of patients who achieved ACR20 responder criteria response was 41% in the placebo group, 59% in the etoricoxib group, and 58% in the naproxen group. Etoricoxib and naproxen were both generally well tolerated. ABSTRACT.CONCLUSIONS: In this study, etoricoxib 90 mg once daily was more effective than placebo and similar in efficacy to naproxen 500 mg twice daily for treating patients with RA over 12 weeks. Etoricoxib 90 mg was generally well tolerated in RA patients. Extracted Data:
[[" etoricoxib 90 mg once daily, naproxen 500 mg twice daily", "placebo", "Patient global assessment of disease activity; investigator global assessment of disease activity; Tender joint count; Swollen joint count; Patient global assessment of pain;Health Assessment Questionnaire disability ; Serum C-reactive protein; ACR20 responder criteria; Discontinuation due to lack of efficacy ", "significantly increased", "Compared with patients receiving placebo, patients receiving etoricoxib and naproxen showed significant improvements in all efficacy endpoints (p\u0026lt;0.05)."], [" etoricoxib 90 mg once daily, naproxen 500 mg twice daily", "placebo", "Patient global assessment of disease activity; investigator global assessment of disease activity; Tender joint count; Swollen joint count; Patient global assessment of pain;Health Assessment Questionnaire disability ; Serum C-reactive protein; ACR20 responder criteria; Discontinuation due to lack of efficacy ", "significantly increased", "Compared with patients receiving placebo, patients receiving etoricoxib and naproxen showed significant improvements in all efficacy endpoints (p\u003c0.05)."], [" etoricoxib 90 mg once daily", " naproxen 500 mg twice daily", "Patient global assessment of disease activity; investigator global assessment of disease activity; Tender joint count; Swollen joint count; Patient global assessment of pain;Health Assessment Questionnaire disability ; Serum C-reactive protein; ACR20 responder criteria; Discontinuation due to lack of efficacy ", "no significant difference", "Treatment responses were similar between the etoricoxib and naproxen groups for all endpoints."]]
science.evidence_inference.192
[ { "content": "Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:\n\n- Treatment: The specific medical procedure or drug being evaluated.\n- Control: The standard or placebo used for comparison. If absent, use `null`.\n- Result: The health outcome being assessed.\n- Impact: The observed effect of the treatment on the result, categorized as: (\"increased\", \"decreased\", \"no change\").\n- Justification: The supporting details for the observed impact.\n\nFormat the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].\n\nAbstract:\nABSTRACT.BACKGROUND: Etoricoxib is a highly selective COX-2 inhibitor which was evaluated for the treatment of rheumatoid arthritis (RA).\n\nABSTRACT.METHODS: Double-blind, randomized, placebo and active comparator-controlled, 12-week study conducted at 67 sites in 28 countries. Eligible patients were chronic NSAID users who demonstrated a clinical worsening of arthritis upon withdrawal of prestudy NSAIDs. Patients received either placebo, etoricoxib 90 mg once daily, or naproxen 500 mg twice daily (2:2:1 allocation ratio). Primary efficacy measures included direct assessment of arthritis by counts of tender and swollen joints, and patient and investigator global assessments of disease activity. Key secondary measures included the Stanford Health Assessment Questionnaire, patient global assessment of pain, and the percentage of patients who achieved ACR20 responder criteria response (a composite of pain, inflammation, function, and global assessments). Tolerability was assessed by adverse events and routine laboratory evaluations.\n\nABSTRACT.RESULTS: 1171 patients were screened, 891 patients were randomized (N = 357 for placebo, N = 353 for etoricoxib, and N = 181 for naproxen), and 687 completed 12 weeks of treatment (N = 242 for placebo, N = 294 for etoricoxib, and N = 151 for naproxen). Compared with patients receiving placebo, patients receiving etoricoxib and naproxen showed significant improvements in all efficacy endpoints (p<0.05). Treatment responses were similar between the etoricoxib and naproxen groups for all endpoints. The percentage of patients who achieved ACR20 responder criteria response was 41% in the placebo group, 59% in the etoricoxib group, and 58% in the naproxen group. Etoricoxib and naproxen were both generally well tolerated.\n\nABSTRACT.CONCLUSIONS: In this study, etoricoxib 90 mg once daily was more effective than placebo and similar in efficacy to naproxen 500 mg twice daily for treating patients with RA over 12 weeks. 
Etoricoxib 90 mg was generally well tolerated in RA patients.\n\nExtracted Data:", "role": "user" }, { "content": "[[\" etoricoxib 90 mg once daily, naproxen 500 mg twice daily\", \"placebo\", \"Patient global assessment of disease activity; investigator global assessment of disease activity; Tender joint count; Swollen joint count; Patient global assessment of pain;Health Assessment Questionnaire disability ; Serum C-reactive protein; ACR20 responder criteria; Discontinuation due to lack of efficacy \", \"significantly increased\", \"Compared with patients receiving placebo, patients receiving etoricoxib and naproxen showed significant improvements in all efficacy endpoints (p\\u0026lt;0.05).\"], [\" etoricoxib 90 mg once daily, naproxen 500 mg twice daily\", \"placebo\", \"Patient global assessment of disease activity; investigator global assessment of disease activity; Tender joint count; Swollen joint count; Patient global assessment of pain;Health Assessment Questionnaire disability ; Serum C-reactive protein; ACR20 responder criteria; Discontinuation due to lack of efficacy \", \"significantly increased\", \"Compared with patients receiving placebo, patients receiving etoricoxib and naproxen showed significant improvements in all efficacy endpoints (p\\u003c0.05).\"], [\" etoricoxib 90 mg once daily\", \" naproxen 500 mg twice daily\", \"Patient global assessment of disease activity; investigator global assessment of disease activity; Tender joint count; Swollen joint count; Patient global assessment of pain;Health Assessment Questionnaire disability ; Serum C-reactive protein; ACR20 responder criteria; Discontinuation due to lack of efficacy \", \"no significant difference\", \"Treatment responses were similar between the etoricoxib and naproxen groups for all endpoints.\"]]", "role": "assistant" } ]
science.scifact_entailment
Determine the relationship between them and generate a JSON response. Scientific Claim: Upon viral challenge, influenza-specific memory CD4+ T cells greatly diminish the early production of inflammatory chemokines in the lung. Research Abstract: Inflammation induced by recognition of pathogen-associated molecular patterns markedly affects subsequent adaptive responses. We asked whether the adaptive immune system can also affect the character and magnitude of innate inflammatory responses. We found that the response of memory, but not naive, CD4+ T cells enhances production of multiple innate inflammatory cytokines and chemokines (IICs) in the lung and that, during influenza infection, this leads to early control of virus. Memory CD4+ T cell–induced IICs and viral control require cognate antigen recognition and are optimal when memory cells are either T helper type 1 (TH1) or TH17 polarized but are independent of interferon-γ (IFN-γ) and tumor necrosis factor-α (TNF-α) production and do not require activation of conserved pathogen recognition pathways. This represents a previously undescribed mechanism by which memory CD4+ T cells induce an early innate response that enhances immune protection against pathogens.
{"verdict": "CONTRADICT", "evidence": ["We found that the response of memory, but not naive, CD4+ T cells enhances production of multiple innate inflammatory cytokines and chemokines (IICs) in the lung and that, during influenza infection, this leads to early control of virus."]}
science.scifact_entailment.872
[ { "content": "Determine the relationship between them and generate a JSON response. Scientific Claim: \nUpon viral challenge, influenza-specific memory CD4+ T cells greatly diminish the early production of inflammatory chemokines in the lung.\n\nResearch Abstract:\nInflammation induced by recognition of pathogen-associated molecular patterns markedly affects subsequent adaptive responses.\nWe asked whether the adaptive immune system can also affect the character and magnitude of innate inflammatory responses.\nWe found that the response of memory, but not naive, CD4+ T cells enhances production of multiple innate inflammatory cytokines and chemokines (IICs) in the lung and that, during influenza infection, this leads to early control of virus.\nMemory CD4+ T cell–induced IICs and viral control require cognate antigen recognition and are optimal when memory cells are either T helper type 1 (TH1) or TH17 polarized but are independent of interferon-γ (IFN-γ) and tumor necrosis factor-α (TNF-α) production and do not require activation of conserved pathogen recognition pathways.\nThis represents a previously undescribed mechanism by which memory CD4+ T cells induce an early innate response that enhances immune protection against pathogens.", "role": "user" }, { "content": "{\"verdict\": \"CONTRADICT\", \"evidence\": [\"We found that the response of memory, but not naive, CD4+ T cells enhances production of multiple innate inflammatory cytokines and chemokines (IICs) in the lung and that, during influenza infection, this leads to early control of virus.\"]}", "role": "assistant" } ]
science.biored_ner
Analyze the provided biomedical abstract and identify distinct entities. Focus on the specified categories: ["CellLine", "Variant", "Species", "Disease", "Chemical", "Gene"]. Ensure the output is strictly the JSON object. Source Text: Mutation in the auxiliary calcium-channel subunit CACNA2D4 causes autosomal recessive cone dystrophy. Retinal signal transmission depends on the activity of high voltage-gated l-type calcium channels in photoreceptor ribbon synapses. We recently identified a truncating frameshift mutation in the Cacna2d4 gene in a spontaneous mouse mutant with profound loss of retinal signaling and an abnormal morphology of ribbon synapses in rods and cones. The Cacna2d4 gene encodes an l-type calcium-channel auxiliary subunit of the alpha (2) delta type. Mutations in its human orthologue, CACNA2D4, were not yet known to be associated with a disease. We performed mutation analyses of 34 patients who received an initial diagnosis of night blindness, and, in two affected siblings, we detected a homozygous nucleotide substitution (c.2406C-->A) in CACNA2D4. The mutation introduces a premature stop codon that truncates one-third of the corresponding open reading frame. Both patients share symptoms of slowly progressing cone dystrophy. These findings represent the first report of a mutation in the human CACNA2D4 gene and define a novel gene defect that causes autosomal recessive cone dystrophy.
{"CellLine": [], "Chemical": ["calcium"], "Disease": ["cone dystrophy", "night blindness"], "Gene": ["Cacna2d4", "CACNA2D4", "l-type calcium-channel auxiliary subunit of the alpha (2) delta type"], "Species": ["human", "mouse", "patients"], "Variant": ["c.2406C--\u003eA"]}
science.biored_ner.47
[ { "content": "Analyze the provided biomedical abstract and identify distinct entities. Focus on the specified categories: [\"CellLine\", \"Variant\", \"Species\", \"Disease\", \"Chemical\", \"Gene\"].\n\nEnsure the output is strictly the JSON object.\n\nSource Text:\n\nMutation in the auxiliary calcium-channel subunit CACNA2D4 causes autosomal recessive cone dystrophy. Retinal signal transmission depends on the activity of high voltage-gated l-type calcium channels in photoreceptor ribbon synapses. We recently identified a truncating frameshift mutation in the Cacna2d4 gene in a spontaneous mouse mutant with profound loss of retinal signaling and an abnormal morphology of ribbon synapses in rods and cones. The Cacna2d4 gene encodes an l-type calcium-channel auxiliary subunit of the alpha (2) delta type. Mutations in its human orthologue, CACNA2D4, were not yet known to be associated with a disease. We performed mutation analyses of 34 patients who received an initial diagnosis of night blindness, and, in two affected siblings, we detected a homozygous nucleotide substitution (c.2406C-->A) in CACNA2D4. The mutation introduces a premature stop codon that truncates one-third of the corresponding open reading frame. Both patients share symptoms of slowly progressing cone dystrophy. These findings represent the first report of a mutation in the human CACNA2D4 gene and define a novel gene defect that causes autosomal recessive cone dystrophy.", "role": "user" }, { "content": "{\"CellLine\": [], \"Chemical\": [\"calcium\"], \"Disease\": [\"cone dystrophy\", \"night blindness\"], \"Gene\": [\"Cacna2d4\", \"CACNA2D4\", \"l-type calcium-channel auxiliary subunit of the alpha (2) delta type\"], \"Species\": [\"human\", \"mouse\", \"patients\"], \"Variant\": [\"c.2406C--\\u003eA\"]}", "role": "assistant" } ]
science.multicite_intent_classification
Examine the text below, which includes a citation marked by "<cite>" and "</cite>". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text. Provide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose. Text with Citation: For example the patterns used by Yangarber et al. (2000) are the subject-verb-object tuples from the dependency tree (the remainder of the dependency parse is discarded) while <cite>Sudo et al. (2003)</cite> allow any subtree within the dependency parse to act as an extraction pattern.
["Background"]
science.multicite_intent_classification.1666
[ { "content": "Examine the text below, which includes a citation marked by \"<cite>\" and \"</cite>\". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.\n\nProvide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.\n\nText with Citation:\nFor example the patterns used by Yangarber et al. (2000) are the subject-verb-object tuples from the dependency tree (the remainder of the dependency parse is discarded) while <cite>Sudo et al. (2003)</cite> allow any subtree within the dependency parse to act as an extraction pattern.", "role": "user" }, { "content": "[\"Background\"]", "role": "assistant" } ]
science.mup_single_document_summarization
Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise. Title of the Paper: A More Globally Accurate Dimensionality Reduction Method Using Triplets Main Content: 1 INTRODUCTION . Information visualization using dimensionality reduction ( DR ) is a fundamental step for gaining insight about a dataset . Motivated by the fact that the humans essentially think in two or three dimensions , DR has been studied extensively ( Tenenbaum et al. , 2000 ; Kohonen , 1998 ; Roweis & Saul , 2000 ; Hinton & Roweis , 2003 ; Maaten & Hinton , 2008 ) . However , how to choose the best DR method for visualizing a given dataset is yet unresolved . Despite the plethora of quantitative measures such as trustworthiness-continuity ( Kaski et al. , 2003 ) , mean ( smoothed ) precision-recall ( Venna et al. , 2010 ) , and nearest-neighbor accuracy ( Van Der Maaten et al. , 2009 ) , there is no standard procedure to assess the quality of a low-dimensional embedding in general . Also these measures only focus on the local neighborhood of each point in the original space and low-dimensional embedding and fail to reflect global aspects of the data . We will argue that selecting a DR method based on these local measures is misleading . In this paper , we mainly focus on the global properties of the data . These properties include the relative placements of the clusters in the dataset as well as revelation of outlier points that are located far away from the rest of the points in the high-dimensional space . It appears that quantifying the global properties is significantly more complicated and certainly local quality measures of DR ( such as precision-recall1 ) do not capture these properties . Thus we resort to an evaluation based on visual clues that allows the practitioner to build confidence about the obtained DR results . To this end , we propose a number of transformations that happen naturally to real-world datasets . We argue that any DR method that claims to preserve the global structure of the data , should be able to handle these transformations . Ackerman et al . ( 2010 ) took a similar more theoretical approach for the task of clustering where certain natural properties are used to taxonomize different clustering algorithms . Next , we introduce a new DR method , called TriMap . The main idea behind TriMap is to capture higher orders of structure in the data by considering the relative similarities of a chosen set of triplets of points . The method minimizes a loss over triplets that measures how well a triplet is satisfied . This loss is made robust by capping it with a damping function . We provide compelling experimental 1We verified on small datasets that other common measures such as trustworthiness-continuity ( Kaski et al. , 2003 ) are also inadequate for evaluating the global structure . Also these measures are too expensive to calculate for moderately large datasets . results on large natural datasets . We show that t-SNE and LargeVis , which are both based on preserving the pairwise ( dis ) similarities of the points , tend to form spurious clusters with clean boundaries and fail to properly reflect the global aspects of the data . On the other hand , TriMap ’ s DR results preserve more of the global structure of the data as well as reveals outliers . 2 TESTING THE GLOBAL ACCURACY OF DR METHODS . Our proposed tests mimic the natural scenarios that happen in the real-world datasets . 
In each case we run the test on four DR methods : t-SNE , which is currently the most widely used DR method ; LargeVis , a more recent method that is also based on pairwise ( dis ) similarities ; TriMap , our new method based on triplet information ; and PCA a global method which projects the data onto the two directions of largest variance . We defer the description of t-SNE & LargeVis as well as our new TriMap method to later sections . As a running example , we perform the tests on the MNIST dataset2 which contains 70,000 images of handwritten digits ( represented by 784 dimensions ) 3 . We normalized the data to have pixel values between [ 0 , 1 ] and verified that the results are consistent over multiple runs of all methods . For each embedding , we calculated the area under the mean precision- 2 http : //yann.lecun.com/exdb/mnist/ 3For ease of comparison , we use the same initial solution for t-SNE and TriMap , whenever possible . The current implementation of LargeVis does not support initial solutions . recall curve ( AUC ) ( Venna et al. , 2010 ) .4 We also calculate the nearest-neighbor classification accuracy for each embedding . The ( AUC , NN-Accuracy ) values are shown on the bottom of each figure . The main reason for including these local measure in the results is to emphasize the fact that these measures fail to reflect the globally properties of the embedding , discussed in the following . Figure 5 ( ? ) shows the results on the full dataset by the four methods . Partial observation test A DR tool should be invariant to removing a subset of points from the dataset . The placement of the remaining points ( or clusters ) after running the DR method on the reduced data set should remain relatively unchanged compared to the embedding created from the full dataset . In a first test , the subset of removed points are selected at random . This test mimics the fact that in reality we only see a sample from the data distribution and the DR method should be stable w.r.t . the sample size . Figure 5 ( 2.1.a ) shows the results after removing % 90 of the dataset at random . t-SNE and TriMap both produce good results in that the clusters of the reduced datasets are located roughly in the same arrangement as in the full dataset . However , LargeVis moves the clusters around ( e.g . cluster ‘ 1 ’ ) . For labeled datasets , we might be interested in visualizing a particular subset of the classes , leading to our second test . Figure 5 ( 2.1.b ) gives the results on running the methods on just the subset of even digits of MNIST . As can be seen , only the TriMap method is able to preserve the relative distances between all the clusters after removing the odd digits , while t-SNE and LargeVis place the remaining clusters at arbitrary positions . 4More precisely , for each point we fix a neighborhood size of 20 in the high-dimension as the “ relevant points ” and vary the neighborhood size in the low-dimension between 1 and 100 . This gives us a precision and recall curve for each point . The AUC measure is the area under the mean curve for all points . Outlier test Natural datasets frequently contain outliers due to measurement or experimental errors . Detecting and removing the outliers is key step in statistical analysis that should be facilitated with the DR methods . We can check the ability of the DR methods to reveal outliers by adding artificial outlier points to the original dataset . 
Figure 5 ( 2.2 ) shows the DR results after shifting point X ( originally placed inside cluster ‘ 8 ’ by all methods ) far away in a random direction . The point X is clearly an outlier in the new dataset ( as verified by the PCA result ) . Surprisingly , t-SNE and LargeVis both place the outlier inside the cluster ‘ 3 ’ , which happens to be the nearest cluster to this point in higher dimension . More disturbingly , adding a single outlier this way consistently rearranges the relative location of the clusters for t-SNE and LargeVis . TriMap shows the outlier and preserve the structure . Multiple scales test A DR tool should be able to reflect the global structure of the data at different scales . For instance , a dataset might consist of multiple clusters where each cluster itself may contain multiple sub-clusters and so on . The practitioner can do a rudimentary test by duplicating and shifting the natural dataset at hand . That is , for each point xn of the original dataset , we add a point xn + c , where c is a fixed random shift which is chosen large enough such that the two copies are guaranteed to be far apart in the high-dimensional space5 . When applied on the duplicated dataset , the DR method should be able to show each copy separately . Figure 5 ( 2.3 ) illustrates the results on a duplicated MNIST dataset . We expect to see two identical copies of the same dataset in the low-dimensional embedding , which can be verified by the PCA result . Curiously enough , both t-SNE and LargeVis fail to provide any meaningful global structure6 . In fact , both methods tend to split some of the clusters into smaller sub-clusters . On the other hand , TriMap successfully recovers the two identical copies , which look similar to the original dataset in Figure 5 ( ? ) . 3 SKETCH OF THE T-SNE AND LARGEVIS . The t-SNE method ( Maaten & Hinton , 2008 ) is perhaps the most commonly used DR method in practice . The main idea of t-SNE is to reflect the pairwise similarities of the points by minimizing a KL divergence between the normalized pairwise similarities in the high-dimensional space and the low-dimensional embedding . The t-SNE method has O ( N2 ) time complexity for N points . However , the complexity can be reduced to O ( N logN ) by approximating the gradient using treebased algorithms ( Van Der Maaten , 2014 ) . t-SNE is easy to apply to general datasets and usually produces nicely separated clusters . Also it has been proven theoretically ( Arora et al. , 2018 ) that if the clusters are separated far enough in higher dimension , then t-SNE separates them in the embedding 7 . However , as we showed in the previous section , the DR results produced by t-SNE are sometimes extremely misleading : the whole dataset is simply collapsed into an “ orb ” and the outliers are shown as “ inliers ” . The latter can be explained by fact that for an outlier point , the pairwise similarities to the points that are closest to the outlier point dominate the rest , causing the method to pull back the outlier point to the closest cluster . LargeVis is a more recent method ( Tang et al. , 2016 ) that also aims to preserve the pairwise ( dis ) similarities of the points in the high-dimensional space in the low-dimensional embedding . To avoid the O ( N2 ) complexity of t-SNE , LargeVis uses a negative sampling approach ( Mikolov et al. , 2013 ) to randomly select a subset of the dissimilar pairs . However , the weights of all the dissimilar pairs are set to a positive constant . 
This causes the LargeVis to lose global information in the data , as we showed in the previous section . Overall , LargeVis forms well-separated clusters . However , the outlier points that are far away or in-between the clusters are pushed back into the closest clusters . This is a consequence of using a noisy distribution for negative sampling which tends to pick points from the denser regions with higher probability . For example , moving single points that lie between multiple large clusters inside the nearest cluster increases the likelihood of the model ( because a single point has a very small probability of being selected as a dissimilar example by any of points that lie inside the large clusters ) . 5Similar results obtained by shifting the datasets in 4 directions . 6Verified with a large range of the perplexity parameter for t-SNE ( from the default of 30 to 150 ) . 7The proof requires that ” early exaggeration ” is used in all iterations of training .
In this paper, the authors present a novel dimensionality reduction method named TriMap. TriMap attempts to improve upon the widely-adopted t-SNE algorithm by incorporating global distances through the use of triplets, rather than pairwise comparisons. The authors compare to t-SNE, as well as a newer method called LargeVis which also claims to impose a global distances metric. The authors show that their method is more robust to the addition or removal of clusters and outliers and provides a more meaningful global distance relative to the methods against which they compare.
science.mup_single_document_summarization.9824
[ { "content": "Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.\n\nTitle of the Paper:\nA More Globally Accurate Dimensionality Reduction Method Using Triplets\n\nMain Content:\n1 INTRODUCTION . Information visualization using dimensionality reduction ( DR ) is a fundamental step for gaining insight about a dataset . Motivated by the fact that the humans essentially think in two or three dimensions , DR has been studied extensively ( Tenenbaum et al. , 2000 ; Kohonen , 1998 ; Roweis & Saul , 2000 ; Hinton & Roweis , 2003 ; Maaten & Hinton , 2008 ) . However , how to choose the best DR method for visualizing a given dataset is yet unresolved . Despite the plethora of quantitative measures such as trustworthiness-continuity ( Kaski et al. , 2003 ) , mean ( smoothed ) precision-recall ( Venna et al. , 2010 ) , and nearest-neighbor accuracy ( Van Der Maaten et al. , 2009 ) , there is no standard procedure to assess the quality of a low-dimensional embedding in general . Also these measures only focus on the local neighborhood of each point in the original space and low-dimensional embedding and fail to reflect global aspects of the data . We will argue that selecting a DR method based on these local measures is misleading . In this paper , we mainly focus on the global properties of the data . These properties include the relative placements of the clusters in the dataset as well as revelation of outlier points that are located far away from the rest of the points in the high-dimensional space . It appears that quantifying the global properties is significantly more complicated and certainly local quality measures of DR ( such as precision-recall1 ) do not capture these properties . Thus we resort to an evaluation based on visual clues that allows the practitioner to build confidence about the obtained DR results . To this end , we propose a number of transformations that happen naturally to real-world datasets . We argue that any DR method that claims to preserve the global structure of the data , should be able to handle these transformations . Ackerman et al . ( 2010 ) took a similar more theoretical approach for the task of clustering where certain natural properties are used to taxonomize different clustering algorithms . Next , we introduce a new DR method , called TriMap . The main idea behind TriMap is to capture higher orders of structure in the data by considering the relative similarities of a chosen set of triplets of points . The method minimizes a loss over triplets that measures how well a triplet is satisfied . This loss is made robust by capping it with a damping function . We provide compelling experimental 1We verified on small datasets that other common measures such as trustworthiness-continuity ( Kaski et al. , 2003 ) are also inadequate for evaluating the global structure . Also these measures are too expensive to calculate for moderately large datasets . results on large natural datasets . We show that t-SNE and LargeVis , which are both based on preserving the pairwise ( dis ) similarities of the points , tend to form spurious clusters with clean boundaries and fail to properly reflect the global aspects of the data . On the other hand , TriMap ’ s DR results preserve more of the global structure of the data as well as reveals outliers . 2 TESTING THE GLOBAL ACCURACY OF DR METHODS . Our proposed tests mimic the natural scenarios that happen in the real-world datasets . 
In each case we run the test on four DR methods : t-SNE , which is currently the most widely used DR method ; LargeVis , a more recent method that is also based on pairwise ( dis ) similarities ; TriMap , our new method based on triplet information ; and PCA a global method which projects the data onto the two directions of largest variance . We defer the description of t-SNE & LargeVis as well as our new TriMap method to later sections . As a running example , we perform the tests on the MNIST dataset2 which contains 70,000 images of handwritten digits ( represented by 784 dimensions ) 3 . We normalized the data to have pixel values between [ 0 , 1 ] and verified that the results are consistent over multiple runs of all methods . For each embedding , we calculated the area under the mean precision- 2 http : //yann.lecun.com/exdb/mnist/ 3For ease of comparison , we use the same initial solution for t-SNE and TriMap , whenever possible . The current implementation of LargeVis does not support initial solutions . recall curve ( AUC ) ( Venna et al. , 2010 ) .4 We also calculate the nearest-neighbor classification accuracy for each embedding . The ( AUC , NN-Accuracy ) values are shown on the bottom of each figure . The main reason for including these local measure in the results is to emphasize the fact that these measures fail to reflect the globally properties of the embedding , discussed in the following . Figure 5 ( ? ) shows the results on the full dataset by the four methods . Partial observation test A DR tool should be invariant to removing a subset of points from the dataset . The placement of the remaining points ( or clusters ) after running the DR method on the reduced data set should remain relatively unchanged compared to the embedding created from the full dataset . In a first test , the subset of removed points are selected at random . This test mimics the fact that in reality we only see a sample from the data distribution and the DR method should be stable w.r.t . the sample size . Figure 5 ( 2.1.a ) shows the results after removing % 90 of the dataset at random . t-SNE and TriMap both produce good results in that the clusters of the reduced datasets are located roughly in the same arrangement as in the full dataset . However , LargeVis moves the clusters around ( e.g . cluster ‘ 1 ’ ) . For labeled datasets , we might be interested in visualizing a particular subset of the classes , leading to our second test . Figure 5 ( 2.1.b ) gives the results on running the methods on just the subset of even digits of MNIST . As can be seen , only the TriMap method is able to preserve the relative distances between all the clusters after removing the odd digits , while t-SNE and LargeVis place the remaining clusters at arbitrary positions . 4More precisely , for each point we fix a neighborhood size of 20 in the high-dimension as the “ relevant points ” and vary the neighborhood size in the low-dimension between 1 and 100 . This gives us a precision and recall curve for each point . The AUC measure is the area under the mean curve for all points . Outlier test Natural datasets frequently contain outliers due to measurement or experimental errors . Detecting and removing the outliers is key step in statistical analysis that should be facilitated with the DR methods . We can check the ability of the DR methods to reveal outliers by adding artificial outlier points to the original dataset . 
Figure 5 ( 2.2 ) shows the DR results after shifting point X ( originally placed inside cluster ‘ 8 ’ by all methods ) far away in a random direction . The point X is clearly an outlier in the new dataset ( as verified by the PCA result ) . Surprisingly , t-SNE and LargeVis both place the outlier inside the cluster ‘ 3 ’ , which happens to be the nearest cluster to this point in higher dimension . More disturbingly , adding a single outlier this way consistently rearranges the relative location of the clusters for t-SNE and LargeVis . TriMap shows the outlier and preserve the structure . Multiple scales test A DR tool should be able to reflect the global structure of the data at different scales . For instance , a dataset might consist of multiple clusters where each cluster itself may contain multiple sub-clusters and so on . The practitioner can do a rudimentary test by duplicating and shifting the natural dataset at hand . That is , for each point xn of the original dataset , we add a point xn + c , where c is a fixed random shift which is chosen large enough such that the two copies are guaranteed to be far apart in the high-dimensional space5 . When applied on the duplicated dataset , the DR method should be able to show each copy separately . Figure 5 ( 2.3 ) illustrates the results on a duplicated MNIST dataset . We expect to see two identical copies of the same dataset in the low-dimensional embedding , which can be verified by the PCA result . Curiously enough , both t-SNE and LargeVis fail to provide any meaningful global structure6 . In fact , both methods tend to split some of the clusters into smaller sub-clusters . On the other hand , TriMap successfully recovers the two identical copies , which look similar to the original dataset in Figure 5 ( ? ) . 3 SKETCH OF THE T-SNE AND LARGEVIS . The t-SNE method ( Maaten & Hinton , 2008 ) is perhaps the most commonly used DR method in practice . The main idea of t-SNE is to reflect the pairwise similarities of the points by minimizing a KL divergence between the normalized pairwise similarities in the high-dimensional space and the low-dimensional embedding . The t-SNE method has O ( N2 ) time complexity for N points . However , the complexity can be reduced to O ( N logN ) by approximating the gradient using treebased algorithms ( Van Der Maaten , 2014 ) . t-SNE is easy to apply to general datasets and usually produces nicely separated clusters . Also it has been proven theoretically ( Arora et al. , 2018 ) that if the clusters are separated far enough in higher dimension , then t-SNE separates them in the embedding 7 . However , as we showed in the previous section , the DR results produced by t-SNE are sometimes extremely misleading : the whole dataset is simply collapsed into an “ orb ” and the outliers are shown as “ inliers ” . The latter can be explained by fact that for an outlier point , the pairwise similarities to the points that are closest to the outlier point dominate the rest , causing the method to pull back the outlier point to the closest cluster . LargeVis is a more recent method ( Tang et al. , 2016 ) that also aims to preserve the pairwise ( dis ) similarities of the points in the high-dimensional space in the low-dimensional embedding . To avoid the O ( N2 ) complexity of t-SNE , LargeVis uses a negative sampling approach ( Mikolov et al. , 2013 ) to randomly select a subset of the dissimilar pairs . However , the weights of all the dissimilar pairs are set to a positive constant . 
This causes the LargeVis to lose global information in the data , as we showed in the previous section . Overall , LargeVis forms well-separated clusters . However , the outlier points that are far away or in-between the clusters are pushed back into the closest clusters . This is a consequence of using a noisy distribution for negative sampling which tends to pick points from the denser regions with higher probability . For example , moving single points that lie between multiple large clusters inside the nearest cluster increases the likelihood of the model ( because a single point has a very small probability of being selected as a dissimilar example by any of points that lie inside the large clusters ) . 5Similar results obtained by shifting the datasets in 4 directions . 6Verified with a large range of the perplexity parameter for t-SNE ( from the default of 30 to 150 ) . 7The proof requires that ” early exaggeration ” is used in all iterations of training .", "role": "user" }, { "content": "In this paper, the authors present a novel dimensionality reduction method named TriMap. TriMap attempts to improve upon the widely-adopted t-SNE algorithm by incorporating global distances through the use of triplets, rather than pairwise comparisons. The authors compare to t-SNE, as well as a newer method called LargeVis which also claims to impose a global distances metric. The authors show that their method is more robust to the addition or removal of clusters and outliers and provides a more meaningful global distance relative to the methods against which they compare.", "role": "assistant" } ]
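The TriMap row above describes t-SNE as minimizing a KL divergence between pairwise-similarity distributions in the original space and the embedding. For reference, the standard form of that objective from van der Maaten and Hinton (2008) is reproduced below as an editorial aid; it is quoted from the general literature rather than from this excerpt.

```latex
% t-SNE cost: KL divergence between the input-space similarities P
% and the embedding-space similarities Q (Student-t kernel).
C = \mathrm{KL}(P \,\|\, Q) = \sum_{i \neq j} p_{ij} \log \frac{p_{ij}}{q_{ij}},
\qquad
q_{ij} = \frac{\left(1 + \lVert y_i - y_j \rVert^2\right)^{-1}}
              {\sum_{k \neq l} \left(1 + \lVert y_k - y_l \rVert^2\right)^{-1}},
```

where the y_i are the low-dimensional embedding points and the p_{ij} are the symmetrized Gaussian similarities computed in the original space.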
science.scifact_entailment
Determine the relationship between them and generate a JSON response. Scientific Claim: Incidence of heart failure increased by 10% in women since 1979. Research Abstract: CONTEXT The epidemic of heart failure has yet to be fully investigated, and data on incidence, survival, and sex-specific temporal trends in community-based populations are limited. OBJECTIVE To test the hypothesis that the incidence of heart failure has declined and survival after heart failure diagnosis has improved over time but that secular trends have diverged by sex. DESIGN, SETTING, AND PARTICIPANTS Population-based cohort study using the resources of the Rochester Epidemiology Project conducted in Olmsted County, Minnesota. Patients were 4537 Olmsted County residents (57% women; mean [SD] age, 74 [14] years) with a diagnosis of heart failure between 1979 and 2000. Framingham criteria and clinical criteria were used to validate the diagnosis MAIN OUTCOME MEASURES Incidence of heart failure and survival after heart failure diagnosis. RESULTS The incidence of heart failure was higher among men (378/100 000 persons; 95% confidence interval [CI], 361-395 for men; 289/100 000 persons; 95% CI, 277-300 for women) and did not change over time among men or women. After a mean follow-up of 4.2 years (range, 0-23.8 years), 3347 deaths occurred, including 1930 among women and 1417 among men. Survival after heart failure diagnosis was worse among men than women (relative risk, 1.33; 95% CI, 1.24-1.43) but overall improved over time (5-year age-adjusted survival, 43% in 1979-1984 vs 52% in 1996-2000, P<.001). However, men and younger persons experienced larger survival gains, contrasting with less or no improvement for women and elderly persons. CONCLUSION In this community-based cohort, the incidence of heart failure has not declined during 2 decades, but survival after onset of heart failure has increased overall, with less improvement among women and elderly persons.
{"verdict": "CONTRADICT", "evidence": ["RESULTS The incidence of heart failure was higher among men (378/100 000 persons; 95% confidence interval [CI], 361-395 for men; 289/100 000 persons; 95% CI, 277-300 for women) and did not change over time among men or women."]}
science.scifact_entailment.382
[ { "content": "Determine the relationship between them and generate a JSON response. Scientific Claim: \nIncidence of heart failure increased by 10% in women since 1979.\n\nResearch Abstract:\nCONTEXT The epidemic of heart failure has yet to be fully investigated, and data on incidence, survival, and sex-specific temporal trends in community-based populations are limited.\nOBJECTIVE To test the hypothesis that the incidence of heart failure has declined and survival after heart failure diagnosis has improved over time but that secular trends have diverged by sex.\nDESIGN, SETTING, AND PARTICIPANTS Population-based cohort study using the resources of the Rochester Epidemiology Project conducted in Olmsted County, Minnesota.\nPatients were 4537 Olmsted County residents (57% women; mean [SD] age, 74 [14] years) with a diagnosis of heart failure between 1979 and 2000.\nFramingham criteria and clinical criteria were used to validate the diagnosis MAIN OUTCOME MEASURES Incidence of heart failure and survival after heart failure diagnosis.\nRESULTS The incidence of heart failure was higher among men (378/100 000 persons; 95% confidence interval [CI], 361-395 for men; 289/100 000 persons; 95% CI, 277-300 for women) and did not change over time among men or women.\nAfter a mean follow-up of 4.2 years (range, 0-23.8 years), 3347 deaths occurred, including 1930 among women and 1417 among men.\nSurvival after heart failure diagnosis was worse among men than women (relative risk, 1.33; 95% CI, 1.24-1.43) but overall improved over time (5-year age-adjusted survival, 43% in 1979-1984 vs 52% in 1996-2000, P<.001).\nHowever, men and younger persons experienced larger survival gains, contrasting with less or no improvement for women and elderly persons.\nCONCLUSION In this community-based cohort, the incidence of heart failure has not declined during 2 decades, but survival after onset of heart failure has increased overall, with less improvement among women and elderly persons.", "role": "user" }, { "content": "{\"verdict\": \"CONTRADICT\", \"evidence\": [\"RESULTS The incidence of heart failure was higher among men (378/100 000 persons; 95% confidence interval [CI], 361-395 for men; 289/100 000 persons; 95% CI, 277-300 for women) and did not change over time among men or women.\"]}", "role": "assistant" } ]
science.discomat_te
| | Batch | Batch | Batch | Actual (from ICP) | Actual (from ICP) | Actual (from ICP) | | | P2O5 | V2O5 | Fe2O3 | P2O5 | V2O5 | Fe2O3 | | | 0.40 | 0.60 | 0.00 | 0.398 | 0.602 | 0.000 | | | 0.40 | 0.50 | 0.10 | 0.401 | 0.502 | 0.097 | | | 0.40 | 0.40 | 0.20 | 0.407 | 0.402 | 0.191 | | | 0.40 | 0.35 | 0.25 | 0.401 | 0.349 | 0.250 | | [A] | 0.40 | 0.30 | 0.30 | 0.421 | 0.301 | 0.279 | | [B] | 0.40 | 0.30 | 0.30 | 0.398 | 0.294 | 0.309 | Caption: Batch and actual composition (molar fraction) of various vanadium phosphate glasses containing Fe2O3 Examine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance: {"value": "xx", "type": "Other"} {"value": "xx", "type": "Glass_Compound_Amount", "constituent": "xx", "unit": "xx", "material": "xx"} Process each cell sequentially, and for any unknown attributes, use "xx" as a placeholder. Each JSON should be on a new line. Output:
{"value": "0.40", "type": "Glass_Compound_Amount", "constituent": "P2O5", "unit": "mol", "material": "xx"} {"value": "0.60", "type": "Glass_Compound_Amount", "constituent": "V2O5", "unit": "mol", "material": "xx"} {"value": "0.00", "type": "Other"} {"value": "0.398", "type": "Glass_Compound_Amount", "constituent": "P2O5", "unit": "mol", "material": "xx"} {"value": "0.602", "type": "Glass_Compound_Amount", "constituent": "V2O5", "unit": "mol", "material": "xx"} {"value": "0.000", "type": "Other"} {"value": "0.40", "type": "Glass_Compound_Amount", "constituent": "P2O5", "unit": "mol", "material": "xx"} {"value": "0.50", "type": "Glass_Compound_Amount", "constituent": "V2O5", "unit": "mol", "material": "xx"} {"value": "0.10", "type": "Glass_Compound_Amount", "constituent": "Fe2O3", "unit": "mol", "material": "xx"} {"value": "0.401", "type": "Glass_Compound_Amount", "constituent": "P2O5", "unit": "mol", "material": "xx"} {"value": "0.502", "type": "Glass_Compound_Amount", "constituent": "V2O5", "unit": "mol", "material": "xx"} {"value": "0.097", "type": "Glass_Compound_Amount", "constituent": "Fe2O3", "unit": "mol", "material": "xx"} {"value": "0.40", "type": "Glass_Compound_Amount", "constituent": "P2O5", "unit": "mol", "material": "xx"} {"value": "0.40", "type": "Glass_Compound_Amount", "constituent": "V2O5", "unit": "mol", "material": "xx"} {"value": "0.20", "type": "Glass_Compound_Amount", "constituent": "Fe2O3", "unit": "mol", "material": "xx"} {"value": "0.407", "type": "Glass_Compound_Amount", "constituent": "P2O5", "unit": "mol", "material": "xx"} {"value": "0.402", "type": "Glass_Compound_Amount", "constituent": "V2O5", "unit": "mol", "material": "xx"} {"value": "0.191", "type": "Glass_Compound_Amount", "constituent": "Fe2O3", "unit": "mol", "material": "xx"} {"value": "0.40", "type": "Glass_Compound_Amount", "constituent": "P2O5", "unit": "mol", "material": "xx"} {"value": "0.35", "type": "Glass_Compound_Amount", "constituent": "V2O5", "unit": "mol", "material": "xx"} {"value": "0.25", "type": "Glass_Compound_Amount", "constituent": "Fe2O3", "unit": "mol", "material": "xx"} {"value": "0.401", "type": "Glass_Compound_Amount", "constituent": "P2O5", "unit": "mol", "material": "xx"} {"value": "0.349", "type": "Glass_Compound_Amount", "constituent": "V2O5", "unit": "mol", "material": "xx"} {"value": "0.250", "type": "Glass_Compound_Amount", "constituent": "Fe2O3", "unit": "mol", "material": "xx"} {"value": "0.40", "type": "Glass_Compound_Amount", "constituent": "P2O5", "unit": "mol", "material": "xx"} {"value": "0.30", "type": "Glass_Compound_Amount", "constituent": "V2O5", "unit": "mol", "material": "xx"} {"value": "0.30", "type": "Glass_Compound_Amount", "constituent": "Fe2O3", "unit": "mol", "material": "xx"} {"value": "0.421", "type": "Glass_Compound_Amount", "constituent": "P2O5", "unit": "mol", "material": "xx"} {"value": "0.301", "type": "Glass_Compound_Amount", "constituent": "V2O5", "unit": "mol", "material": "xx"} {"value": "0.279", "type": "Glass_Compound_Amount", "constituent": "Fe2O3", "unit": "mol", "material": "xx"} {"value": "0.40", "type": "Glass_Compound_Amount", "constituent": "P2O5", "unit": "mol", "material": "xx"} {"value": "0.30", "type": "Glass_Compound_Amount", "constituent": "V2O5", "unit": "mol", "material": "xx"} {"value": "0.30", "type": "Glass_Compound_Amount", "constituent": "Fe2O3", "unit": "mol", "material": "xx"} {"value": "0.398", "type": "Glass_Compound_Amount", "constituent": "P2O5", "unit": "mol", "material": "xx"} {"value": "0.294", "type": 
"Glass_Compound_Amount", "constituent": "V2O5", "unit": "mol", "material": "xx"} {"value": "0.309", "type": "Glass_Compound_Amount", "constituent": "Fe2O3", "unit": "mol", "material": "xx"}
science.discomat_te.169
[ { "content": "| | Batch | Batch | Batch | Actual (from ICP) | Actual (from ICP) | Actual (from ICP) |\n| | P2O5 | V2O5 | Fe2O3 | P2O5 | V2O5 | Fe2O3 |\n| | 0.40 | 0.60 | 0.00 | 0.398 | 0.602 | 0.000 |\n| | 0.40 | 0.50 | 0.10 | 0.401 | 0.502 | 0.097 |\n| | 0.40 | 0.40 | 0.20 | 0.407 | 0.402 | 0.191 |\n| | 0.40 | 0.35 | 0.25 | 0.401 | 0.349 | 0.250 |\n| [A] | 0.40 | 0.30 | 0.30 | 0.421 | 0.301 | 0.279 |\n| [B] | 0.40 | 0.30 | 0.30 | 0.398 | 0.294 | 0.309 |\n\nCaption: Batch and actual composition (molar fraction) of various vanadium phosphate glasses containing Fe2O3\n\nExamine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:\n{\"value\": \"xx\", \"type\": \"Other\"}\n{\"value\": \"xx\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"xx\", \"unit\": \"xx\", \"material\": \"xx\"}\n\nProcess each cell sequentially, and for any unknown attributes, use \"xx\" as a placeholder. Each JSON should be on a new line.\n\nOutput:", "role": "user" }, { "content": "{\"value\": \"0.40\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"P2O5\", \"unit\": \"mol\", \"material\": \"xx\"}\n{\"value\": \"0.60\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"V2O5\", \"unit\": \"mol\", \"material\": \"xx\"}\n{\"value\": \"0.00\", \"type\": \"Other\"}\n{\"value\": \"0.398\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"P2O5\", \"unit\": \"mol\", \"material\": \"xx\"}\n{\"value\": \"0.602\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"V2O5\", \"unit\": \"mol\", \"material\": \"xx\"}\n{\"value\": \"0.000\", \"type\": \"Other\"}\n{\"value\": \"0.40\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"P2O5\", \"unit\": \"mol\", \"material\": \"xx\"}\n{\"value\": \"0.50\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"V2O5\", \"unit\": \"mol\", \"material\": \"xx\"}\n{\"value\": \"0.10\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Fe2O3\", \"unit\": \"mol\", \"material\": \"xx\"}\n{\"value\": \"0.401\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"P2O5\", \"unit\": \"mol\", \"material\": \"xx\"}\n{\"value\": \"0.502\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"V2O5\", \"unit\": \"mol\", \"material\": \"xx\"}\n{\"value\": \"0.097\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Fe2O3\", \"unit\": \"mol\", \"material\": \"xx\"}\n{\"value\": \"0.40\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"P2O5\", \"unit\": \"mol\", \"material\": \"xx\"}\n{\"value\": \"0.40\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"V2O5\", \"unit\": \"mol\", \"material\": \"xx\"}\n{\"value\": \"0.20\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Fe2O3\", \"unit\": \"mol\", \"material\": \"xx\"}\n{\"value\": \"0.407\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"P2O5\", \"unit\": \"mol\", \"material\": \"xx\"}\n{\"value\": \"0.402\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"V2O5\", \"unit\": \"mol\", \"material\": \"xx\"}\n{\"value\": \"0.191\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Fe2O3\", \"unit\": \"mol\", \"material\": \"xx\"}\n{\"value\": \"0.40\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"P2O5\", \"unit\": \"mol\", \"material\": \"xx\"}\n{\"value\": \"0.35\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"V2O5\", \"unit\": \"mol\", \"material\": \"xx\"}\n{\"value\": \"0.25\", \"type\": \"Glass_Compound_Amount\", 
\"constituent\": \"Fe2O3\", \"unit\": \"mol\", \"material\": \"xx\"}\n{\"value\": \"0.401\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"P2O5\", \"unit\": \"mol\", \"material\": \"xx\"}\n{\"value\": \"0.349\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"V2O5\", \"unit\": \"mol\", \"material\": \"xx\"}\n{\"value\": \"0.250\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Fe2O3\", \"unit\": \"mol\", \"material\": \"xx\"}\n{\"value\": \"0.40\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"P2O5\", \"unit\": \"mol\", \"material\": \"xx\"}\n{\"value\": \"0.30\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"V2O5\", \"unit\": \"mol\", \"material\": \"xx\"}\n{\"value\": \"0.30\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Fe2O3\", \"unit\": \"mol\", \"material\": \"xx\"}\n{\"value\": \"0.421\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"P2O5\", \"unit\": \"mol\", \"material\": \"xx\"}\n{\"value\": \"0.301\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"V2O5\", \"unit\": \"mol\", \"material\": \"xx\"}\n{\"value\": \"0.279\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Fe2O3\", \"unit\": \"mol\", \"material\": \"xx\"}\n{\"value\": \"0.40\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"P2O5\", \"unit\": \"mol\", \"material\": \"xx\"}\n{\"value\": \"0.30\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"V2O5\", \"unit\": \"mol\", \"material\": \"xx\"}\n{\"value\": \"0.30\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Fe2O3\", \"unit\": \"mol\", \"material\": \"xx\"}\n{\"value\": \"0.398\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"P2O5\", \"unit\": \"mol\", \"material\": \"xx\"}\n{\"value\": \"0.294\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"V2O5\", \"unit\": \"mol\", \"material\": \"xx\"}\n{\"value\": \"0.309\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Fe2O3\", \"unit\": \"mol\", \"material\": \"xx\"}", "role": "assistant" } ]
science.scierc_ner
Analyze the given abstract from a computer science paper. Identify and list entities in a structured JSON format. Focus on capturing distinct elements relevant to the research context. Input Abstract: An attempt has been made to use an Augmented Transition Network as a procedural dialog model. The development of such a model appears to be important in several respects: as a device to represent and to use different dialog schemata proposed in empirical conversation analysis; as a device to represent and to use models of verbal interaction; as a device combining knowledge about dialog schemata and about verbal interaction with knowledge about task-oriented and goal-directed dialogs. A standard ATN should be further developed in order to account for the verbal interactions of task-oriented dialogs.
Output Entities: {"Generic": ["models"], "Material": ["task-oriented dialogs", "task-oriented and goal-directed dialogs"], "Method": ["dialog model", "Augmented Transition Network", "conversation analysis"], "Metric": [], "OtherScientificTerm": ["dialog schemata", "verbal interaction"], "Task": []}
science.scierc_ner.6
[ { "content": "Analyze the given abstract from a computer science paper. Identify and list entities in a structured JSON format. Focus on capturing distinct elements relevant to the research context.\n\nInput Abstract:\n\nAn attempt has been made to use an Augmented Transition Network as a procedural dialog model. The development of such a model appears to be important in several respects: as a device to represent and to use different dialog schemata proposed in empirical conversation analysis; as a device to represent and to use models of verbal interaction; as a device combining knowledge about dialog schemata and about verbal interaction with knowledge about task-oriented and goal-directed dialogs. A standard ATN should be further developed in order to account for the verbal interactions of task-oriented dialogs.", "role": "user" }, { "content": "Output Entities:\n\n{\"Generic\": [\"models\"], \"Material\": [\"task-oriented dialogs\", \"task-oriented and goal-directed dialogs\"], \"Method\": [\"dialog model\", \"Augmented Transition Network\", \"conversation analysis\"], \"Metric\": [], \"OtherScientificTerm\": [\"dialog schemata\", \"verbal interaction\"], \"Task\": []}", "role": "assistant" } ]
science.mup_single_document_summarization
Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise. Title of the Paper: Coresets for Clustering with Missing Values Main Content: 1 Introduction . We consider coresets and approximation algorithms for k-clustering problems , particularly k-MEANS1 and more generally ( k , z ) -CLUSTERING ( see Definition 2.1 ) , for points in Rd with missing values ( coordinates ) . The presence of missing values in data sets is a common phenomenon , and dealing with it is a fundamental challenge in data science . While data imputation is a very popular method for handling missing values , it often requires prior knowledge which might not be available , or statistical assumptions on the missing values that might be difficult to verify [ All01 , LR19 ] . In contrast , our worst-case approach does not requires any prior knowledge . Specifically , in our context of clustering , the distance dist ( x , c ) between a clustering center point c and a data point x is evaluated only on the available ( i.e. , non-missing ) coordinates . Similar models that aim to minimize clustering costs using only the available coordinates have been proposed in previous work [ HB01 , Wag04 , CCB16 , WLH+19 ] , and some other relevant works were discussed in a survey [ HC10 ] . Clustering under this distance function , which is evaluated only on the available coordinates , is a formidable computational challenge , because distances do not satisfy the triangle inequality , and 1In the usual k-MEANS problem ( without missing coordinates ) , the input is a data set X ⊂ Rd and the goal is to find a center set C ⊂ Rd , |C| = k that minimizes the sum of squared distances from every x ∈ X to C. 35th Conference on Neural Information Processing Systems ( NeurIPS 2021 ) . therefore many classical and effective clustering algorithms , such as k-MEANS++ [ AV07 ] , can not be readily applied or even be defined properly . Despite the algorithmic interest in clustering with missing values , the problem is still not well understood and only a few results are known . In a pioneering work , Gao , Langberg and Schulman [ GLS08 ] initiated the algorithmic study of the k-CENTER problem with missing values . They took a geometric perspective and interpreted the k-CENTER with missing values problem as an affine-subspace clustering problem , and followup work [ GLS10 , LS13 ] has subsequently improved and generalized their algorithm . Only very recently , approximation algorithms for objectives other than k-CENTER , particularly k-MEANS , were obtained for the limited case of at most one missing coordinate in each input point [ MF19 ] or for constant number of missing coordinates [ EFG+21 ] . We focus on designing coresets for clustering with missing values . Roughly speaking , an -coreset is a small proxy of the data set , such that the clustering objective is preserved within ( 1± ) factor for all center sets ( see Definition 2.2 for formal definition ) . Efficient constructions of small -coresets usually lead to efficient approximations schemes , since the input size is reduced to that of the coreset , see e.g . [ HJLW18 , FRS19 , MF19 ] . 
Moreover , apart from speeding up approximation algorithms in the classical setting ( offline computation ) , coresets can also be applied to design streaming [ HM04 , FS05 , BFL+17 ] , distributed [ BEL13 , RPS15 , BLK18 ] , and dynamic algorithms [ Cha09 , HK20 ] , which are effective methods/models for dealing with big data , and recently coresets were used even in neural networks [ MOB+20 ] . 1.1 Our Results . Coresets . Our main result , stated in Theorem 1.1 , is a near-linear time construction of coresets for k-MEANS with missing values . Here , an -coreset for k-MEANS for a data set X in Rd with missing coordinates is a weighted subset S ⊆ X with weights w : S → R+ , such that ∀C ⊂ Rd , |C| = k , ∑ x∈S w ( x ) · dist2 ( x , C ) ∈ ( 1± ) ∑ x∈X dist2 ( x , C ) , where dist ( x , c ) : = √∑ i : xi not missing ( xi − ci ) 2 , and dist ( x , C ) : = minc∈C dist ( x , c ) ; note that the center set C does not contain missing values . More generally , our coreset also works for ( k , z ) CLUSTERING , which includes k-MEDIAN ( see Definition 2.1 and Definition 2.2 ) . Throughout , we use Õ ( f ) to denote O ( f poly log f ) . Theorem 1.1 ( Informal version of Theorem 3.1 ) . There is an algorithm that , given 0 < < 1/2 , integers d , j , k ≥ 1 , and a set X ⊂ Rd of n points each having at most j missing values , it constructs with constant probability an -coreset for k-MEANS onX of sizem = ( jk ) O ( min { j , k } ) · ( −1d log n ) 2 , and runs in time Õ ( ( jk ) O ( min { j , k } ) · nd+m ) . Our coreset size is only a low-degree polynomial of d , and log n , and can thus deal with moderatelyhigh dimension or large data set . The dependence on k ( number of clusters ) and j ( maximum number of missing values per point ) is also a low-degree polynomial as long as at least one of k and j is small . Actually , we justify in Theorem 1.2 that this exponential dependence in min { j , k } can not be further improved , as long as the coreset size is in a similar parameter regime , i.e. , the coreset size is of the form f ( j , k ) · poly ( −1d log n ) . We provide the proof of Theorem 1.2 in the full version . Theorem 1.2 . Consider the k-MEANS with missing values problem in Rd ? where each point can have at most j missing coordinates . Assume there is an algorithm that constructs an -coreset of size f ( j , k ) · poly ( −1d log n ) , then f ( j , k ) can not be as small as 2o ( min ( j , k ) ) . Furthermore , the space complexity of our construction algorithm is near-linear , and since our coreset is clearly mergeable , it is possible to apply the merge-and-reduce method [ HM04 ] to convert our construction into a streaming algorithm of space poly log n. Prior to our result , the only known coreset construction for clustering with missing values is for the special case j = 1 [ MF19 ] 2 and has size kO ( k ) · ( −2d log n ) . Since our coreset has size poly ( k −1d log n ) when j = 1 , it improves the dependence on k over that of [ MF19 ] by a factor of kO ( k ) . 2In fact , [ MF19 ] considers a slightly more general setting where the input are arbitrary lines that are not necessarily axis-parallel . Near-linear time PTAS for k-MEANS with missing values . Very recently , a PTAS for kMEANS with missing values , was obtained by Eiben , Fomin , Golovach , Lochet , Panolan , and Simonov [ EFG+21 ] . 
Its time bound is quadratic , namely O ( 2poly ( jk/ ) · n2d ) , and since our coreset can be constructed in near-linear time , we can speedup this PTAS to near-linear time by first constructing our coreset and then running this PTAS on the coreset . Corollary 1.3 ( Near-linear time PTAS for k-MEANS with missing values ) . There is an algorithm that , given 0 < < 1/2 , integers d , j , k ≥ 1 , and a set X ⊂ Rd of n points each having at most j missing values , it finds with constant probability a ( 1 + ) -approximation for k-MEANS on X , and runs in time Õ ( ( jk ) O ( min { j , k } ) · nd+ 2poly ( jk/ ) · dO ( 1 ) ) . Experiments . We implement our algorithm and validate its performance on various real and synthetic data sets in Section 4 . Our coreset exhibits flexible tradeoffs between coreset size and accuracy , and generally outperforms a uniform-sampling baseline and a baseline that is based on imputation , in both error rate and stability , especially when the coreset size is relatively small . In particular , on each data set , a coreset of moderate size 2000 ( which is 0.5 % -5 % of the data sets ) achieves low empirical error ( 5 % -20 % ) . We further demonstrate an application and use our coresets to accelerate a Lloyd ’ s-style heuristic adapted to the missing-values setting . The experiments suggest that running the heuristic on top of our coresets gives equally good solutions ( error < 1 % relative to running on the original data set ) but is much faster ( speedup > 5x ) . 1.2 Technical Overview . Our coreset construction is based on the importance sampling framework introduced by Feldman and Langberg [ FL11 ] and subsequently improved and generalized by [ FSS20 , BJKW21 ] . In the framework , one first computes an importance score σx for every data point x ∈ X , and then draws independent samples with probabilities proportional to these scores . When no values are missing , the importance scores can be computed easily , even for general metric spaces [ VX12b , FSS20 , BJKW21 ] . However , a significant challenge with missing values is that distances do not satisfy the triangle inequality , hence importance scores can not be easily computed . We overcome this hurdle using a method introduced by Varadarajan and Xiao [ VX12a ] for projective clustering ( where the triangle inequality similarly does not hold ) . They reduce the importancescore computation to the construction of a coreset for k-CENTER objective ; this method is quite different from earlier approaches , e.g . [ FL11 , VX12b , FSS20 , BJKW21 ] , and yields a coreset for k-MEANS whose size depends linearly on log n and of course on the size of the k-CENTER coreset . ( Mathematically , this arises from the sum of all importance scores . ) We make use of this reduction , and thus focus on constructing ( efficiently ) a small coreset for k-CENTER with missing values . An immediate difficulty is how to deal with the missing values . We show that it is possible to find a collection of subsets of coordinates I ( so each I ∈ I is a subset of [ d ] ) , such that if we construct k-CENTER coresets SI on the data set “ restricted ” to each I ∈ I , then the union of these SI ’ s is a k-CENTER coreset for the original data set with missing values . Crucially , we ensure that each “ restricted ” data set does not contain any missing value , so that it is possible to use a classical coreset construction for k-CENTER . 
Finally , we show in a technical lemma how to find a collection as necessary of size |I| ≤ ( jk ) O ( min { j , k } ) . Since a “ restricted ” data set does not contain any missing values , we can use a classical k-CENTER coreset construction , and a standard construction has size O ( k −d ) [ AP02 ] , which is known to be tight . We bypass this −d limitation by observing that actually Õ ( 1 ) -coreset for k-CENTER suffices , even though the final coreset error is . We observe that an Õ ( 1 ) -coreset can be constructed using a variant of Gonzalez ’ s algorithm [ Gon85 ] . To implement Gonzalez ’ s algorithm , a key step is to find the furthest neighbor of a given subset of at most O ( k ) points , and a naive implementation of this runs in linear time , which overall yields a quadratic-time coreset construction , because the aforementioned reduction of [ VX12a ] actually requires Θ ( n/k ) successive runs of Gonzalez ’ s algorithm . To resolve this issue , we propose a fully-dynamic implementation of Gonzalez ’ s algorithm so that a furthest-point query is answered in time poly ( k log n ) , and the point-set is updated between successive runs instead of constructed from scratch . Our dynamic algorithm is based on a random-projection method that was proposed for furthest-point queries in the streaming setting [ Ind03 ] . Specifically , we project the ( restricted ) data set onto several random directions , and on each projected ( one-dimensional ) data set we apply a data structure for intervals .
The paper describes the first algorithms for coresets for k-means clustering with missing data coordinates for some points in d-dimensions, with provable guarantees. The algorithm runs in linear time O(nd) times a factor depending only on j (the maximum number of missing coordinates per point) and k (the number of allowed clusters). A SODA 2021 paper provides a PTAS for this problem, but it has worse dependence on j, k, and epsilon and has a base cost of O(n^2 d), so quadratic in n. This paper demonstrates that the approach is implementable and practical.
science.mup_single_document_summarization.11396
[ { "content": "Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.\n\nTitle of the Paper:\nCoresets for Clustering with Missing Values\n\nMain Content:\n1 Introduction . We consider coresets and approximation algorithms for k-clustering problems , particularly k-MEANS1 and more generally ( k , z ) -CLUSTERING ( see Definition 2.1 ) , for points in Rd with missing values ( coordinates ) . The presence of missing values in data sets is a common phenomenon , and dealing with it is a fundamental challenge in data science . While data imputation is a very popular method for handling missing values , it often requires prior knowledge which might not be available , or statistical assumptions on the missing values that might be difficult to verify [ All01 , LR19 ] . In contrast , our worst-case approach does not requires any prior knowledge . Specifically , in our context of clustering , the distance dist ( x , c ) between a clustering center point c and a data point x is evaluated only on the available ( i.e. , non-missing ) coordinates . Similar models that aim to minimize clustering costs using only the available coordinates have been proposed in previous work [ HB01 , Wag04 , CCB16 , WLH+19 ] , and some other relevant works were discussed in a survey [ HC10 ] . Clustering under this distance function , which is evaluated only on the available coordinates , is a formidable computational challenge , because distances do not satisfy the triangle inequality , and 1In the usual k-MEANS problem ( without missing coordinates ) , the input is a data set X ⊂ Rd and the goal is to find a center set C ⊂ Rd , |C| = k that minimizes the sum of squared distances from every x ∈ X to C. 35th Conference on Neural Information Processing Systems ( NeurIPS 2021 ) . therefore many classical and effective clustering algorithms , such as k-MEANS++ [ AV07 ] , can not be readily applied or even be defined properly . Despite the algorithmic interest in clustering with missing values , the problem is still not well understood and only a few results are known . In a pioneering work , Gao , Langberg and Schulman [ GLS08 ] initiated the algorithmic study of the k-CENTER problem with missing values . They took a geometric perspective and interpreted the k-CENTER with missing values problem as an affine-subspace clustering problem , and followup work [ GLS10 , LS13 ] has subsequently improved and generalized their algorithm . Only very recently , approximation algorithms for objectives other than k-CENTER , particularly k-MEANS , were obtained for the limited case of at most one missing coordinate in each input point [ MF19 ] or for constant number of missing coordinates [ EFG+21 ] . We focus on designing coresets for clustering with missing values . Roughly speaking , an -coreset is a small proxy of the data set , such that the clustering objective is preserved within ( 1± ) factor for all center sets ( see Definition 2.2 for formal definition ) . Efficient constructions of small -coresets usually lead to efficient approximations schemes , since the input size is reduced to that of the coreset , see e.g . [ HJLW18 , FRS19 , MF19 ] . 
Moreover , apart from speeding up approximation algorithms in the classical setting ( offline computation ) , coresets can also be applied to design streaming [ HM04 , FS05 , BFL+17 ] , distributed [ BEL13 , RPS15 , BLK18 ] , and dynamic algorithms [ Cha09 , HK20 ] , which are effective methods/models for dealing with big data , and recently coresets were used even in neural networks [ MOB+20 ] . 1.1 Our Results . Coresets . Our main result , stated in Theorem 1.1 , is a near-linear time construction of coresets for k-MEANS with missing values . Here , an -coreset for k-MEANS for a data set X in Rd with missing coordinates is a weighted subset S ⊆ X with weights w : S → R+ , such that ∀C ⊂ Rd , |C| = k , ∑ x∈S w ( x ) · dist2 ( x , C ) ∈ ( 1± ) ∑ x∈X dist2 ( x , C ) , where dist ( x , c ) : = √∑ i : xi not missing ( xi − ci ) 2 , and dist ( x , C ) : = minc∈C dist ( x , c ) ; note that the center set C does not contain missing values . More generally , our coreset also works for ( k , z ) CLUSTERING , which includes k-MEDIAN ( see Definition 2.1 and Definition 2.2 ) . Throughout , we use Õ ( f ) to denote O ( f poly log f ) . Theorem 1.1 ( Informal version of Theorem 3.1 ) . There is an algorithm that , given 0 < < 1/2 , integers d , j , k ≥ 1 , and a set X ⊂ Rd of n points each having at most j missing values , it constructs with constant probability an -coreset for k-MEANS onX of sizem = ( jk ) O ( min { j , k } ) · ( −1d log n ) 2 , and runs in time Õ ( ( jk ) O ( min { j , k } ) · nd+m ) . Our coreset size is only a low-degree polynomial of d , and log n , and can thus deal with moderatelyhigh dimension or large data set . The dependence on k ( number of clusters ) and j ( maximum number of missing values per point ) is also a low-degree polynomial as long as at least one of k and j is small . Actually , we justify in Theorem 1.2 that this exponential dependence in min { j , k } can not be further improved , as long as the coreset size is in a similar parameter regime , i.e. , the coreset size is of the form f ( j , k ) · poly ( −1d log n ) . We provide the proof of Theorem 1.2 in the full version . Theorem 1.2 . Consider the k-MEANS with missing values problem in Rd ? where each point can have at most j missing coordinates . Assume there is an algorithm that constructs an -coreset of size f ( j , k ) · poly ( −1d log n ) , then f ( j , k ) can not be as small as 2o ( min ( j , k ) ) . Furthermore , the space complexity of our construction algorithm is near-linear , and since our coreset is clearly mergeable , it is possible to apply the merge-and-reduce method [ HM04 ] to convert our construction into a streaming algorithm of space poly log n. Prior to our result , the only known coreset construction for clustering with missing values is for the special case j = 1 [ MF19 ] 2 and has size kO ( k ) · ( −2d log n ) . Since our coreset has size poly ( k −1d log n ) when j = 1 , it improves the dependence on k over that of [ MF19 ] by a factor of kO ( k ) . 2In fact , [ MF19 ] considers a slightly more general setting where the input are arbitrary lines that are not necessarily axis-parallel . Near-linear time PTAS for k-MEANS with missing values . Very recently , a PTAS for kMEANS with missing values , was obtained by Eiben , Fomin , Golovach , Lochet , Panolan , and Simonov [ EFG+21 ] . 
Its time bound is quadratic , namely O ( 2poly ( jk/ ) · n2d ) , and since our coreset can be constructed in near-linear time , we can speedup this PTAS to near-linear time by first constructing our coreset and then running this PTAS on the coreset . Corollary 1.3 ( Near-linear time PTAS for k-MEANS with missing values ) . There is an algorithm that , given 0 < < 1/2 , integers d , j , k ≥ 1 , and a set X ⊂ Rd of n points each having at most j missing values , it finds with constant probability a ( 1 + ) -approximation for k-MEANS on X , and runs in time Õ ( ( jk ) O ( min { j , k } ) · nd+ 2poly ( jk/ ) · dO ( 1 ) ) . Experiments . We implement our algorithm and validate its performance on various real and synthetic data sets in Section 4 . Our coreset exhibits flexible tradeoffs between coreset size and accuracy , and generally outperforms a uniform-sampling baseline and a baseline that is based on imputation , in both error rate and stability , especially when the coreset size is relatively small . In particular , on each data set , a coreset of moderate size 2000 ( which is 0.5 % -5 % of the data sets ) achieves low empirical error ( 5 % -20 % ) . We further demonstrate an application and use our coresets to accelerate a Lloyd ’ s-style heuristic adapted to the missing-values setting . The experiments suggest that running the heuristic on top of our coresets gives equally good solutions ( error < 1 % relative to running on the original data set ) but is much faster ( speedup > 5x ) . 1.2 Technical Overview . Our coreset construction is based on the importance sampling framework introduced by Feldman and Langberg [ FL11 ] and subsequently improved and generalized by [ FSS20 , BJKW21 ] . In the framework , one first computes an importance score σx for every data point x ∈ X , and then draws independent samples with probabilities proportional to these scores . When no values are missing , the importance scores can be computed easily , even for general metric spaces [ VX12b , FSS20 , BJKW21 ] . However , a significant challenge with missing values is that distances do not satisfy the triangle inequality , hence importance scores can not be easily computed . We overcome this hurdle using a method introduced by Varadarajan and Xiao [ VX12a ] for projective clustering ( where the triangle inequality similarly does not hold ) . They reduce the importancescore computation to the construction of a coreset for k-CENTER objective ; this method is quite different from earlier approaches , e.g . [ FL11 , VX12b , FSS20 , BJKW21 ] , and yields a coreset for k-MEANS whose size depends linearly on log n and of course on the size of the k-CENTER coreset . ( Mathematically , this arises from the sum of all importance scores . ) We make use of this reduction , and thus focus on constructing ( efficiently ) a small coreset for k-CENTER with missing values . An immediate difficulty is how to deal with the missing values . We show that it is possible to find a collection of subsets of coordinates I ( so each I ∈ I is a subset of [ d ] ) , such that if we construct k-CENTER coresets SI on the data set “ restricted ” to each I ∈ I , then the union of these SI ’ s is a k-CENTER coreset for the original data set with missing values . Crucially , we ensure that each “ restricted ” data set does not contain any missing value , so that it is possible to use a classical coreset construction for k-CENTER . 
Finally , we show in a technical lemma how to find a collection as necessary of size |I| ≤ ( jk ) O ( min { j , k } ) . Since a “ restricted ” data set does not contain any missing values , we can use a classical k-CENTER coreset construction , and a standard construction has size O ( k −d ) [ AP02 ] , which is known to be tight . We bypass this −d limitation by observing that actually Õ ( 1 ) -coreset for k-CENTER suffices , even though the final coreset error is . We observe that an Õ ( 1 ) -coreset can be constructed using a variant of Gonzalez ’ s algorithm [ Gon85 ] . To implement Gonzalez ’ s algorithm , a key step is to find the furthest neighbor of a given subset of at most O ( k ) points , and a naive implementation of this runs in linear time , which overall yields a quadratic-time coreset construction , because the aforementioned reduction of [ VX12a ] actually requires Θ ( n/k ) successive runs of Gonzalez ’ s algorithm . To resolve this issue , we propose a fully-dynamic implementation of Gonzalez ’ s algorithm so that a furthest-point query is answered in time poly ( k log n ) , and the point-set is updated between successive runs instead of constructed from scratch . Our dynamic algorithm is based on a random-projection method that was proposed for furthest-point queries in the streaming setting [ Ind03 ] . Specifically , we project the ( restricted ) data set onto several random directions , and on each projected ( one-dimensional ) data set we apply a data structure for intervals .", "role": "user" }, { "content": "The paper describes the first algorithms for coresets for k-means clustering with missing data coordinates for some points in d-dimensions, with provably guarantees. The algorithm runs in linear time O(nd) times factor depending only on j (most possible missing values per coordinate) and k (number of allowed clusters). A SODA 2021 paper provides a PTAS for this problem, but it has worse dependence on j, k, epsilon and has a base cost of O(n^2 d), so quadratic in n. This paper demonstrates their approach is implementable, and practical.", "role": "assistant" } ]
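The technical overview in this example leans on two classical ingredients that are easy to sketch: a distance evaluated only on available coordinates, and Gonzalez's farthest-point traversal for k-center. The toy below illustrates just those ingredients and is not the paper's construction; NaN marks a missing value, and letting both arguments carry NaNs is a simplification, since the paper's centers are complete points:

```python
import math

def dist(x, c):
    """Euclidean distance evaluated only on coordinates present in both points
    (NaN marks a missing value)."""
    return math.sqrt(sum((xi - ci) ** 2
                         for xi, ci in zip(x, c)
                         if not (math.isnan(xi) or math.isnan(ci))))

def gonzalez(points, k):
    """Greedy farthest-point traversal: the classic k-center heuristic,
    starting from an arbitrary point."""
    centers = [points[0]]
    while len(centers) < min(k, len(points)):
        farthest = max(points, key=lambda p: min(dist(p, c) for c in centers))
        centers.append(farthest)
    return centers

nan = float("nan")
data = [(0.0, 0.0), (1.0, nan), (9.0, 9.0), (nan, 10.0), (5.0, 5.0)]
print(gonzalez(data, 2))  # [(0.0, 0.0), (9.0, 9.0)]
```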
science.qasper_abstractive_qa
Examine the provided sections from a research document and address the accompanying query. The text is divided by newlines. Document Content: An Evaluation Dataset for Intent Classification and Out-of-Scope Prediction Task-oriented dialog systems need to know when a query falls outside their range of supported intents, but current text classification corpora only define label sets that cover every example. We introduce a new dataset that includes queries that are out-of-scope---i.e., queries that do not fall into any of the system's supported intents. This poses a new challenge because models cannot assume that every query at inference time belongs to a system-supported intent class. Our dataset also covers 150 intent classes over 10 domains, capturing the breadth that a production task-oriented agent must handle. We evaluate a range of benchmark classifiers on our dataset along with several different out-of-scope identification schemes. We find that while the classifiers perform well on in-scope intent classification, they struggle to identify out-of-scope queries. Our dataset and evaluation fill an important gap in the field, offering a way of more rigorously and realistically benchmarking text classification in task-driven dialog systems. Introduction Task-oriented dialog systems have become ubiquitous, providing a means for billions of people to interact with computers using natural language. Moreover, the recent influx of platforms and tools such as Google's DialogFlow or Amazon's Lex for building and deploying such systems makes them even more accessible to various industries and demographics across the globe. Tools for developing such systems start by guiding developers to collect training data for intent classification: the task of identifying which of a fixed set of actions the user wishes to take based on their query. Relatively few public datasets exist for evaluating performance on this task, and those that do exist typically cover only a very small number of intents (e.g. BIBREF0, which has 7 intents). Furthermore, such resources do not facilitate analysis of out-of-scope queries: queries that users may reasonably make, but fall outside of the scope of the system-supported intents. Figure FIGREF1 shows example query-response exchanges between a user and a task-driven dialog system for personal finance. In the first user-system exchange, the system correctly identifies the user's intent as an in-scope balance query. In the second and third exchanges, the user queries with out-of-scope inputs. In the second exchange, the system incorrectly identifies the query as in-scope and yields an unrelated response. In the third exchange, the system correctly classifies the user's query as out-of-scope, and yields a fallback response. Out-of-scope queries are inevitable for a task-oriented dialog system, as most users will not be fully cognizant of the system's capabilities, which are limited by the fixed number of intent classes. Correctly identifying out-of-scope cases is thus crucial in deployed systems—both to avoid performing the wrong action and also to identify potential future directions for development. However, this problem has seen little attention in analyses and evaluations of intent classification systems. This paper fills this gap by analyzing intent classification performance with a focus on out-of-scope handling. To do so, we constructed a new dataset with 23,700 queries that are short and unstructured, in the same style made by real users of task-oriented systems. 
The queries cover 150 intents, plus out-of-scope queries that do not fall within any of the 150 in-scope intents. We evaluate a range of benchmark classifiers and out-of-scope handling methods on our dataset. BERT BIBREF1 yields the best in-scope accuracy, scoring 96% or above even when we limit the training data or introduce class imbalance. However, all methods struggle with identifying out-of-scope queries. Even when a large number of out-of-scope examples are provided for training, there is a major performance gap, with the best system scoring 66% out-of-scope recall. Our results show that while current models work on known classes, they have difficulty on out-of-scope queries, particularly when data is not plentiful. This dataset will enable future work to address this key gap in the research and development of dialog systems. All data introduced in this paper can be found at https://github.com/clinc/oos-eval. Dataset We introduce a new crowdsourced dataset of 23,700 queries, including 22,500 in-scope queries covering 150 intents, which can be grouped into 10 general domains. The dataset also includes 1,200 out-of-scope queries. Table TABREF2 shows examples of the data. Dataset ::: In-Scope Data Collection We defined the intents with guidance from queries collected using a scoping crowdsourcing task, which prompted crowd workers to provide questions and commands related to topic domains in the manner they would interact with an artificially intelligent assistant. We manually grouped data generated by scoping tasks into intents. To collect additional data for each intent, we used the rephrase and scenario crowdsourcing tasks proposed by BIBREF2. For each intent, there are 100 training queries, which is representative of what a team with a limited budget could gather while developing a task-driven dialog system. Along with the 100 training queries, there are 20 validation and 30 testing queries per intent. Dataset ::: Data Preprocessing and Partitioning For all queries collected, all tokens were down-cased, and all end-of-sentence punctuation was removed. Additionally, all duplicate queries were removed and replaced. In an effort to reduce bias in the in-scope data, we placed all queries from a given crowd worker in a single split (train, validation, or test). This avoids the potential issue of similar queries from a crowd worker ending up in both the train and test sets, for instance, which would make the train and test distributions unrealistically similar. We note that this is a recommendation from concurrent work by BIBREF3. We also used this procedure for the out-of-scope set, except that we split the data into train/validation/test based on task prompt instead of worker. Dataset ::: Dataset Variants In addition to the full dataset, we consider three variations. First, Small, in which there are only 50 training queries per each in-scope intent, rather than 100. Second, Imbalanced, in which intents have either 25, 50, 75, or 100 training queries. Third, OOS+, in which there are 250 out-of-scope training examples, rather than 100. These are intended to represent production scenarios where data may be in limited or uneven supply. Benchmark Evaluation ::: Classifier Models SVM: A linear support vector machine with bag-of-words sentence representations. MLP: A multi-layer perceptron with USE embeddings BIBREF4 as input. FastText: A shallow neural network that averages embeddings of n-grams BIBREF5. 
CNN: A convolutional neural network with non-static word embeddings initialized with GloVe BIBREF6. BERT: A neural network that is trained to predict elided words in text and then fine-tuned on our data BIBREF1. Platforms: Several platforms exist for the development of task-oriented agents. We consider Google's DialogFlow and Rasa NLU with spacy-sklearn. Benchmark Evaluation ::: Out-of-Scope Prediction We use three baseline approaches for the task of predicting whether a query is out-of-scope: (1) oos-train, where we train an additional (i.e. 151st) intent on out-of-scope training data; (2) oos-threshold, where we use a threshold on the classifier's probability estimate; and (3) oos-binary, a two-stage process where we first classify a query as in- or out-of-scope, then classify it into one of the 150 intents if classified as in-scope. To reduce the severity of the class imbalance between in-scope versus out-of-scope query samples (i.e., 15,000 versus 250 queries for OOS+), we investigate two strategies when using oos-binary: one where we undersample the in-scope data and train using 1,000 in-scope queries sampled evenly across all intents (versus 250 out-of-scope), and another where we augment the 250 OOS+ out-of-scope training queries with 14,750 sentences sampled from Wikipedia. From a development point of view, the oos-train and oos-binary methods both require careful curation of an out-of-scope training set, and this set can be tailored to individual systems. The oos-threshold method is a more general decision rule that can be applied to any model that produces a probability. In our evaluation, the out-of-scope threshold was chosen to be the value which yielded the highest validation score across all intents, treating out-of-scope as its own intent. Benchmark Evaluation ::: Metrics We consider two performance metrics for all scenarios: (1) accuracy over the 150 intents, and (2) recall on out-of-scope queries. We use recall to evaluate out-of-scope since we are more interested in cases where such queries are predicted as in-scope, as this would mean a system gives the user a response that is completely wrong. Precision errors are less problematic as the fallback response will prompt the user to try again, or inform the user of the system's scope of supported domains. Results ::: Results with oos-train Table TABREF14 presents results for all models across the four variations of the dataset. First, BERT is consistently the best approach for in-scope, followed by MLP. Second, out-of-scope query performance is much lower than in-scope across all methods. Training on less data (Small and Imbalanced) yields models that perform slightly worse on in-scope queries. The trend is mostly the opposite when evaluating out-of-scope, where recall increases under the Small and Imbalanced training conditions. Under these two conditions, the size of the in-scope training set was decreased, while the number of out-of-scope training queries remained constant. This indicates that out-of-scope performance can be increased by increasing the relative number of out-of-scope training queries. We do just that in the OOS+ setting—where the models were trained on the full training set as well as 150 additional out-of-scope queries—and see that performance on out-of-scope increases substantially, yet still remains low relative to in-scope accuracy. Results ::: Results with oos-threshold In-scope accuracy using the oos-threshold approach is largely comparable to oos-train. 
Out-of-scope recall tends to be much higher on Full, but several models suffer greatly on the limited datasets. BERT and MLP are the top oos-threshold performers, and for several models the threshold approach provided erratic results, particularly FastText and Rasa. Results ::: Results with oos-binary Table TABREF19 compares classifier performance using the oos-binary scheme. In-scope accuracy suffers for all models using the undersampling scheme when compared to training on the full dataset using the oos-train and oos-threshold approaches shown in Table TABREF14. However, out-of-scope recall improves compared to oos-train on Full but not OOS+. Augmenting the out-of-scope training set appears to help improve both in-scope and out-of-scope performance compared to undersampling, but out-of-scope performance remains weak. Prior Work In most other analyses and datasets, the idea of out-of-scope data is not considered, and instead the output classes are intended to cover all possible queries (e.g., TREC BIBREF7). Recent work by BIBREF8 considers a similar problem they call out-of-distribution detection. They use other datasets or classes excluded during training to form the out-of-distribution samples. This means that the out-of-scope samples are from a small set of coherent classes that differ substantially from the in-distribution samples. Similar experiments were conducted for evaluating unknown intent discovery models in BIBREF9. In contrast, our out-of-scope queries cover a broad range of phenomena and are similar in style and often similar in topic to in-scope queries, representing things a user might say given partial knowledge of the capabilities of a system. Table TABREF20 compares our dataset with other short-query intent classification datasets. The Snips BIBREF0 dataset and the dataset presented in BIBREF10 are the most similar to the in-scope part of our work, with the same type of conversational agent requests. Like our work, both of these datasets were bootstrapped using crowdsourcing. However, the Snips dataset has only a small number of intents and an enormous number of examples of each. Snips does present a low-data variation, with 70 training queries per intent, in which performance drops slightly. The dataset presented in BIBREF10 has a large number of intent classes, yet also contains a wide range of samples per intent class (ranging from 24 to 5,981 queries per intent, and so is not constrained in all cases). BIBREF11 created datasets with constrained training data, but with very few intents, presenting a very different type of challenge. We also include the TREC query classification datasets BIBREF7, which have a large set of labels, but they describe the desired response type (e.g., distance, city, abbreviation) rather than the action intents we consider. Moreover, TREC contains only questions and no commands. Crucially, none of the other datasets summarized in Table TABREF20 offer a feasible way to evaluate out-of-scope performance. The Dialog State Tracking Challenge (DSTC) datasets are another related resource. Specifically, DSTC 1 BIBREF12, DSTC 2 BIBREF13, and DSTC 3 BIBREF14 contain “chatbot style" queries, but the datasets are focused on state tracking. Moreover, most if not all queries in these datasets are in-scope. In contrast, the focus of our analysis is on both in- and out-of-scope queries that challenge a virtual assistant to determine whether it can provide an acceptable response. 
Conclusion This paper analyzed intent classification and out-of-scope prediction methods with a new dataset consisting of carefully collected out-of-scope data. Our findings indicate that certain models like BERT perform better on in-scope classification, but all methods investigated struggle with identifying out-of-scope queries. Models that incorporate more out-of-scope training data tend to improve on out-of-scope performance, yet such data is expensive and difficult to generate. We believe our analysis and dataset will lead to developing better, more robust dialog systems. All datasets introduced in this paper can be found at https://github.com/clinc/oos-eval. Table 1: Sample queries from our dataset. The out-of-scope queries are similar in style to the in-scope queries. Table 2: Benchmark classifier results under each data condition using the oos-train (top half) and oos-threshold (bottom half) prediction methods. Table 4: Classification dataset properties. Ours has the broadest range of intents and specially collected out-ofscope queries. We consider “chatbot style” queries to be short, possibly unstructured questions and commands. Query: How was the dataset annotated? JSON Response:
{"answer": "intents are annotated manually with guidance from queries collected using a scoping crowdsourcing task", "evidence": ["We defined the intents with guidance from queries collected using a scoping crowdsourcing task, which prompted crowd workers to provide questions and commands related to topic domains in the manner they would interact with an artificially intelligent assistant. We manually grouped data generated by scoping tasks into intents. To collect additional data for each intent, we used the rephrase and scenario crowdsourcing tasks proposed by BIBREF2. For each intent, there are 100 training queries, which is representative of what a team with a limited budget could gather while developing a task-driven dialog system. Along with the 100 training queries, there are 20 validation and 30 testing queries per intent."]}
science.qasper_abstractive_qa.46.0
[ { "content": "Examine the provided sections from a research document and address the accompanying query. The text is divided by newlines.\n\nDocument Content:\n\nAn Evaluation Dataset for Intent Classification and Out-of-Scope Prediction\n\nTask-oriented dialog systems need to know when a query falls outside their range of supported intents, but current text classification corpora only define label sets that cover every example. We introduce a new dataset that includes queries that are out-of-scope---i.e., queries that do not fall into any of the system's supported intents. This poses a new challenge because models cannot assume that every query at inference time belongs to a system-supported intent class. Our dataset also covers 150 intent classes over 10 domains, capturing the breadth that a production task-oriented agent must handle. We evaluate a range of benchmark classifiers on our dataset along with several different out-of-scope identification schemes. We find that while the classifiers perform well on in-scope intent classification, they struggle to identify out-of-scope queries. Our dataset and evaluation fill an important gap in the field, offering a way of more rigorously and realistically benchmarking text classification in task-driven dialog systems.\n\nIntroduction\nTask-oriented dialog systems have become ubiquitous, providing a means for billions of people to interact with computers using natural language. Moreover, the recent influx of platforms and tools such as Google's DialogFlow or Amazon's Lex for building and deploying such systems makes them even more accessible to various industries and demographics across the globe.\nTools for developing such systems start by guiding developers to collect training data for intent classification: the task of identifying which of a fixed set of actions the user wishes to take based on their query. Relatively few public datasets exist for evaluating performance on this task, and those that do exist typically cover only a very small number of intents (e.g. BIBREF0, which has 7 intents). Furthermore, such resources do not facilitate analysis of out-of-scope queries: queries that users may reasonably make, but fall outside of the scope of the system-supported intents.\nFigure FIGREF1 shows example query-response exchanges between a user and a task-driven dialog system for personal finance. In the first user-system exchange, the system correctly identifies the user's intent as an in-scope balance query. In the second and third exchanges, the user queries with out-of-scope inputs. In the second exchange, the system incorrectly identifies the query as in-scope and yields an unrelated response. In the third exchange, the system correctly classifies the user's query as out-of-scope, and yields a fallback response.\nOut-of-scope queries are inevitable for a task-oriented dialog system, as most users will not be fully cognizant of the system's capabilities, which are limited by the fixed number of intent classes. Correctly identifying out-of-scope cases is thus crucial in deployed systems—both to avoid performing the wrong action and also to identify potential future directions for development. However, this problem has seen little attention in analyses and evaluations of intent classification systems.\nThis paper fills this gap by analyzing intent classification performance with a focus on out-of-scope handling. 
To do so, we constructed a new dataset with 23,700 queries that are short and unstructured, in the same style made by real users of task-oriented systems. The queries cover 150 intents, plus out-of-scope queries that do not fall within any of the 150 in-scope intents.\nWe evaluate a range of benchmark classifiers and out-of-scope handling methods on our dataset. BERT BIBREF1 yields the best in-scope accuracy, scoring 96% or above even when we limit the training data or introduce class imbalance. However, all methods struggle with identifying out-of-scope queries. Even when a large number of out-of-scope examples are provided for training, there is a major performance gap, with the best system scoring 66% out-of-scope recall. Our results show that while current models work on known classes, they have difficulty on out-of-scope queries, particularly when data is not plentiful. This dataset will enable future work to address this key gap in the research and development of dialog systems. All data introduced in this paper can be found at https://github.com/clinc/oos-eval.\n\nDataset\nWe introduce a new crowdsourced dataset of 23,700 queries, including 22,500 in-scope queries covering 150 intents, which can be grouped into 10 general domains. The dataset also includes 1,200 out-of-scope queries. Table TABREF2 shows examples of the data.\n\nDataset ::: In-Scope Data Collection\nWe defined the intents with guidance from queries collected using a scoping crowdsourcing task, which prompted crowd workers to provide questions and commands related to topic domains in the manner they would interact with an artificially intelligent assistant. We manually grouped data generated by scoping tasks into intents. To collect additional data for each intent, we used the rephrase and scenario crowdsourcing tasks proposed by BIBREF2. For each intent, there are 100 training queries, which is representative of what a team with a limited budget could gather while developing a task-driven dialog system. Along with the 100 training queries, there are 20 validation and 30 testing queries per intent.\n\nDataset ::: Data Preprocessing and Partitioning\nFor all queries collected, all tokens were down-cased, and all end-of-sentence punctuation was removed. Additionally, all duplicate queries were removed and replaced.\nIn an effort to reduce bias in the in-scope data, we placed all queries from a given crowd worker in a single split (train, validation, or test). This avoids the potential issue of similar queries from a crowd worker ending up in both the train and test sets, for instance, which would make the train and test distributions unrealistically similar. We note that this is a recommendation from concurrent work by BIBREF3. We also used this procedure for the out-of-scope set, except that we split the data into train/validation/test based on task prompt instead of worker.\n\nDataset ::: Dataset Variants\nIn addition to the full dataset, we consider three variations. First, Small, in which there are only 50 training queries per each in-scope intent, rather than 100. Second, Imbalanced, in which intents have either 25, 50, 75, or 100 training queries. Third, OOS+, in which there are 250 out-of-scope training examples, rather than 100. 
These are intended to represent production scenarios where data may be in limited or uneven supply.\n\nBenchmark Evaluation ::: Classifier Models\nSVM: A linear support vector machine with bag-of-words sentence representations.\nMLP: A multi-layer perceptron with USE embeddings BIBREF4 as input.\nFastText: A shallow neural network that averages embeddings of n-grams BIBREF5.\nCNN: A convolutional neural network with non-static word embeddings initialized with GloVe BIBREF6.\nBERT: A neural network that is trained to predict elided words in text and then fine-tuned on our data BIBREF1.\nPlatforms: Several platforms exist for the development of task-oriented agents. We consider Google's DialogFlow and Rasa NLU with spacy-sklearn.\n\nBenchmark Evaluation ::: Out-of-Scope Prediction\nWe use three baseline approaches for the task of predicting whether a query is out-of-scope: (1) oos-train, where we train an additional (i.e. 151st) intent on out-of-scope training data; (2) oos-threshold, where we use a threshold on the classifier's probability estimate; and (3) oos-binary, a two-stage process where we first classify a query as in- or out-of-scope, then classify it into one of the 150 intents if classified as in-scope.\nTo reduce the severity of the class imbalance between in-scope versus out-of-scope query samples (i.e., 15,000 versus 250 queries for OOS+), we investigate two strategies when using oos-binary: one where we undersample the in-scope data and train using 1,000 in-scope queries sampled evenly across all intents (versus 250 out-of-scope), and another where we augment the 250 OOS+ out-of-scope training queries with 14,750 sentences sampled from Wikipedia.\nFrom a development point of view, the oos-train and oos-binary methods both require careful curation of an out-of-scope training set, and this set can be tailored to individual systems. The oos-threshold method is a more general decision rule that can be applied to any model that produces a probability. In our evaluation, the out-of-scope threshold was chosen to be the value which yielded the highest validation score across all intents, treating out-of-scope as its own intent.\n\nBenchmark Evaluation ::: Metrics\nWe consider two performance metrics for all scenarios: (1) accuracy over the 150 intents, and (2) recall on out-of-scope queries. We use recall to evaluate out-of-scope since we are more interested in cases where such queries are predicted as in-scope, as this would mean a system gives the user a response that is completely wrong. Precision errors are less problematic as the fallback response will prompt the user to try again, or inform the user of the system's scope of supported domains.\n\nResults ::: Results with oos-train\nTable TABREF14 presents results for all models across the four variations of the dataset. First, BERT is consistently the best approach for in-scope, followed by MLP. Second, out-of-scope query performance is much lower than in-scope across all methods. Training on less data (Small and Imbalanced) yields models that perform slightly worse on in-scope queries. The trend is mostly the opposite when evaluating out-of-scope, where recall increases under the Small and Imbalanced training conditions. Under these two conditions, the size of the in-scope training set was decreased, while the number of out-of-scope training queries remained constant. This indicates that out-of-scope performance can be increased by increasing the relative number of out-of-scope training queries. 
We do just that in the OOS+ setting—where the models were trained on the full training set as well as 150 additional out-of-scope queries—and see that performance on out-of-scope increases substantially, yet still remains low relative to in-scope accuracy.\n\nResults ::: Results with oos-threshold\nIn-scope accuracy using the oos-threshold approach is largely comparable to oos-train. Out-of-scope recall tends to be much higher on Full, but several models suffer greatly on the limited datasets. BERT and MLP are the top oos-threshold performers, and for several models the threshold approach provided erratic results, particularly FastText and Rasa.\n\nResults ::: Results with oos-binary\nTable TABREF19 compares classifier performance using the oos-binary scheme. In-scope accuracy suffers for all models using the undersampling scheme when compared to training on the full dataset using the oos-train and oos-threshold approaches shown in Table TABREF14. However, out-of-scope recall improves compared to oos-train on Full but not OOS+. Augmenting the out-of-scope training set appears to help improve both in-scope and out-of-scope performance compared to undersampling, but out-of-scope performance remains weak.\n\nPrior Work\nIn most other analyses and datasets, the idea of out-of-scope data is not considered, and instead the output classes are intended to cover all possible queries (e.g., TREC BIBREF7). Recent work by BIBREF8 considers a similar problem they call out-of-distribution detection. They use other datasets or classes excluded during training to form the out-of-distribution samples. This means that the out-of-scope samples are from a small set of coherent classes that differ substantially from the in-distribution samples. Similar experiments were conducted for evaluating unknown intent discovery models in BIBREF9. In contrast, our out-of-scope queries cover a broad range of phenomena and are similar in style and often similar in topic to in-scope queries, representing things a user might say given partial knowledge of the capabilities of a system.\nTable TABREF20 compares our dataset with other short-query intent classification datasets. The Snips BIBREF0 dataset and the dataset presented in BIBREF10 are the most similar to the in-scope part of our work, with the same type of conversational agent requests. Like our work, both of these datasets were bootstrapped using crowdsourcing. However, the Snips dataset has only a small number of intents and an enormous number of examples of each. Snips does present a low-data variation, with 70 training queries per intent, in which performance drops slightly. The dataset presented in BIBREF10 has a large number of intent classes, yet also contains a wide range of samples per intent class (ranging from 24 to 5,981 queries per intent, and so is not constrained in all cases).\nBIBREF11 created datasets with constrained training data, but with very few intents, presenting a very different type of challenge. We also include the TREC query classification datasets BIBREF7, which have a large set of labels, but they describe the desired response type (e.g., distance, city, abbreviation) rather than the action intents we consider. Moreover, TREC contains only questions and no commands. Crucially, none of the other datasets summarized in Table TABREF20 offer a feasible way to evaluate out-of-scope performance.\nThe Dialog State Tracking Challenge (DSTC) datasets are another related resource. 
Specifically, DSTC 1 BIBREF12, DSTC 2 BIBREF13, and DSTC 3 BIBREF14 contain “chatbot style\" queries, but the datasets are focused on state tracking. Moreover, most if not all queries in these datasets are in-scope. In contrast, the focus of our analysis is on both in- and out-of-scope queries that challenge a virtual assistant to determine whether it can provide an acceptable response.\n\nConclusion\nThis paper analyzed intent classification and out-of-scope prediction methods with a new dataset consisting of carefully collected out-of-scope data. Our findings indicate that certain models like BERT perform better on in-scope classification, but all methods investigated struggle with identifying out-of-scope queries. Models that incorporate more out-of-scope training data tend to improve on out-of-scope performance, yet such data is expensive and difficult to generate. We believe our analysis and dataset will lead to developing better, more robust dialog systems.\nAll datasets introduced in this paper can be found at https://github.com/clinc/oos-eval.\n\nTable 1: Sample queries from our dataset. The out-of-scope queries are similar in style to the in-scope queries.\n\nTable 2: Benchmark classifier results under each data condition using the oos-train (top half) and oos-threshold (bottom half) prediction methods.\n\nTable 4: Classification dataset properties. Ours has the broadest range of intents and specially collected out-ofscope queries. We consider “chatbot style” queries to be short, possibly unstructured questions and commands.\n\nQuery: How was the dataset annotated?\n\nJSON Response:", "role": "user" }, { "content": "{\"answer\": \"intents are annotated manually with guidance from queries collected using a scoping crowdsourcing task\", \"evidence\": [\"We defined the intents with guidance from queries collected using a scoping crowdsourcing task, which prompted crowd workers to provide questions and commands related to topic domains in the manner they would interact with an artificially intelligent assistant. We manually grouped data generated by scoping tasks into intents. To collect additional data for each intent, we used the rephrase and scenario crowdsourcing tasks proposed by BIBREF2. For each intent, there are 100 training queries, which is representative of what a team with a limited budget could gather while developing a task-driven dialog system. Along with the 100 training queries, there are 20 validation and 30 testing queries per intent.\"]}", "role": "assistant" } ]
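Of the three out-of-scope schemes described in this example, the threshold rule is simple enough to sketch directly. An illustrative decision function — the 0.7 cutoff and the label names are placeholders, not values from the paper, which tunes the threshold on validation data:

```python
def predict_with_oos_threshold(probs, threshold=0.7):
    """probs: dict mapping intent label -> classifier probability estimate.
    Returns the top intent, or "out_of_scope" when the classifier is not confident."""
    label, p = max(probs.items(), key=lambda kv: kv[1])
    return label if p >= threshold else "out_of_scope"

print(predict_with_oos_threshold({"balance": 0.92, "transfer": 0.05}))  # balance
print(predict_with_oos_threshold({"balance": 0.40, "transfer": 0.35}))  # out_of_scope
```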
science.scifact_entailment
Determine the relationship between them and generate a JSON response. Scientific Claim: Energy balance requires hypothalamic glutamate neurotransmission. Research Abstract: The melanocortin receptor 4 (MC4R) is a well-established mediator of body weight homeostasis. However, the neurotransmitter(s) that mediate MC4R function remain largely unknown; as a result, little is known about the second-order neurons of the MC4R neural pathway. Single-minded 1 (Sim1)-expressing brain regions, which include the paraventricular nucleus of hypothalamus (PVH), represent key brain sites that mediate melanocortin action. We conditionally restored MC4R expression in Sim1 neurons in the background of Mc4r-null mice. The restoration dramatically reduced obesity in Mc4r-null mice. The anti-obesity effect was completely reversed by selective disruption of glutamate release from those same Sim1 neurons. The reversal was caused by lower energy expenditure and hyperphagia. Corroboratively, selective disruption of glutamate release from adult PVH neurons led to rapid obesity development via reduced energy expenditure and hyperphagia. Thus, this study establishes glutamate as the primary neurotransmitter that mediates MC4Rs on Sim1 neurons in body weight regulation.
{"verdict": "SUPPORT", "evidence": ["The anti-obesity effect was completely reversed by selective disruption of glutamate release from those same Sim1 neurons.", "Corroboratively, selective disruption of glutamate release from adult PVH neurons led to rapid obesity development via reduced energy expenditure and hyperphagia.", "Thus, this study establishes glutamate as the primary neurotransmitter that mediates MC4Rs on Sim1 neurons in body weight regulation."]}
science.scifact_entailment.249
[ { "content": "Determine the relationship between them and generate a JSON response. Scientific Claim: \nEnergy balance requires hypothalamic glutamate neurotransmission.\n\nResearch Abstract:\nThe melanocortin receptor 4 (MC4R) is a well-established mediator of body weight homeostasis.\nHowever, the neurotransmitter(s) that mediate MC4R function remain largely unknown; as a result, little is known about the second-order neurons of the MC4R neural pathway.\nSingle-minded 1 (Sim1)-expressing brain regions, which include the paraventricular nucleus of hypothalamus (PVH), represent key brain sites that mediate melanocortin action.\nWe conditionally restored MC4R expression in Sim1 neurons in the background of Mc4r-null mice.\nThe restoration dramatically reduced obesity in Mc4r-null mice.\nThe anti-obesity effect was completely reversed by selective disruption of glutamate release from those same Sim1 neurons.\nThe reversal was caused by lower energy expenditure and hyperphagia.\nCorroboratively, selective disruption of glutamate release from adult PVH neurons led to rapid obesity development via reduced energy expenditure and hyperphagia.\nThus, this study establishes glutamate as the primary neurotransmitter that mediates MC4Rs on Sim1 neurons in body weight regulation.", "role": "user" }, { "content": "{\"verdict\": \"SUPPORT\", \"evidence\": [\"The anti-obesity effect was completely reversed by selective disruption of glutamate release from those same Sim1 neurons.\", \"Corroboratively, selective disruption of glutamate release from adult PVH neurons led to rapid obesity development via reduced energy expenditure and hyperphagia.\", \"Thus, this study establishes glutamate as the primary neurotransmitter that mediates MC4Rs on Sim1 neurons in body weight regulation.\"]}", "role": "assistant" } ]
science.scifact_entailment
Determine the relationship between them and generate a JSON response. Scientific Claim: JMJD6 catalyzes the hydroxylation of C-terminal lysine and supresses transcriptional activity. Research Abstract: Jumonji domain-containing 6 (JMJD6) is a member of the Jumonji C domain-containing family of proteins. Compared to other members of the family, the cellular activity of JMJD6 is still not clearly defined and its biological function is still largely unexplored. Here we report that JMJD6 is physically associated with the tumor suppressor p53. We demonstrated that JMJD6 acts as an α-ketoglutarate- and Fe(II)-dependent lysyl hydroxylase to catalyze p53 hydroxylation. We found that p53 indeed exists as a hydroxylated protein in vivo and that the hydroxylation occurs mainly on lysine 382 of p53. We showed that JMJD6 antagonizes p53 acetylation, promotes the association of p53 with its negative regulator MDMX, and represses transcriptional activity of p53. Depletion of JMJD6 enhances p53 transcriptional activity, arrests cells in the G1 phase, promotes cell apoptosis, and sensitizes cells to DNA damaging agent-induced cell death. Importantly, knockdown of JMJD6 represses p53-dependent colon cell proliferation and tumorigenesis in vivo, and significantly, the expression of JMJD6 is markedly up-regulated in various types of human cancer especially in colon cancer, and high nuclear JMJD6 protein is strongly correlated with aggressive clinical behaviors of colon adenocarcinomas. Our results reveal a novel posttranslational modification for p53 and support the pursuit of JMJD6 as a potential biomarker for colon cancer aggressiveness and a potential target for colon cancer intervention.
{"verdict": "SUPPORT", "evidence": ["We demonstrated that JMJD6 acts as an \u03b1-ketoglutarate- and Fe(II)-dependent lysyl hydroxylase to catalyze p53 hydroxylation.", "We found that p53 indeed exists as a hydroxylated protein in vivo and that the hydroxylation occurs mainly on lysine 382 of p53.", "Depletion of JMJD6 enhances p53 transcriptional activity, arrests cells in the G1 phase, promotes cell apoptosis, and sensitizes cells to DNA damaging agent-induced cell death."]}
science.scifact_entailment.425
[ { "content": "Determine the relationship between them and generate a JSON response. Scientific Claim: \nJMJD6 catalyzes the hydroxylation of C-terminal lysine and supresses transcriptional activity.\n\nResearch Abstract:\nJumonji domain-containing 6 (JMJD6) is a member of the Jumonji C domain-containing family of proteins.\nCompared to other members of the family, the cellular activity of JMJD6 is still not clearly defined and its biological function is still largely unexplored.\nHere we report that JMJD6 is physically associated with the tumor suppressor p53.\nWe demonstrated that JMJD6 acts as an α-ketoglutarate- and Fe(II)-dependent lysyl hydroxylase to catalyze p53 hydroxylation.\nWe found that p53 indeed exists as a hydroxylated protein in vivo and that the hydroxylation occurs mainly on lysine 382 of p53.\nWe showed that JMJD6 antagonizes p53 acetylation, promotes the association of p53 with its negative regulator MDMX, and represses transcriptional activity of p53.\nDepletion of JMJD6 enhances p53 transcriptional activity, arrests cells in the G1 phase, promotes cell apoptosis, and sensitizes cells to DNA damaging agent-induced cell death.\nImportantly, knockdown of JMJD6 represses p53-dependent colon cell proliferation and tumorigenesis in vivo, and significantly, the expression of JMJD6 is markedly up-regulated in various types of human cancer especially in colon cancer, and high nuclear JMJD6 protein is strongly correlated with aggressive clinical behaviors of colon adenocarcinomas.\nOur results reveal a novel posttranslational modification for p53 and support the pursuit of JMJD6 as a potential biomarker for colon cancer aggressiveness and a potential target for colon cancer intervention.", "role": "user" }, { "content": "{\"verdict\": \"SUPPORT\", \"evidence\": [\"We demonstrated that JMJD6 acts as an \\u03b1-ketoglutarate- and Fe(II)-dependent lysyl hydroxylase to catalyze p53 hydroxylation.\", \"We found that p53 indeed exists as a hydroxylated protein in vivo and that the hydroxylation occurs mainly on lysine 382 of p53.\", \"Depletion of JMJD6 enhances p53 transcriptional activity, arrests cells in the G1 phase, promotes cell apoptosis, and sensitizes cells to DNA damaging agent-induced cell death.\"]}", "role": "assistant" } ]
science.multicite_intent_classification
Examine the text below, which includes a citation marked by "<cite>" and "</cite>". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text. Provide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose. Text with Citation: Additionally, in cases where authors are rhetorically similar, our framework outperforms techniques outlined by Akiva (2013) and <cite>Aldebei (2015)</cite> , which both rely on word occurrences as features.
["Differences"]
science.multicite_intent_classification.1423
[ { "content": "Examine the text below, which includes a citation marked by \"<cite>\" and \"</cite>\". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.\n\nProvide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.\n\nText with Citation:\nAdditionally, in cases where authors are rhetorically similar, our framework outperforms techniques outlined by Akiva (2013) and <cite>Aldebei (2015)</cite> , which both rely on word occurrences as features.", "role": "user" }, { "content": "[\"Differences\"]", "role": "assistant" } ]
science.evidence_inference
Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as: - Treatment: The specific medical procedure or drug being evaluated. - Control: The standard or placebo used for comparison. If absent, use `null`. - Result: The health outcome being assessed. - Impact: The observed effect of the treatment on the result, categorized as: ("increased", "decreased", "no change"). - Justification: The supporting details for the observed impact. Format the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>]. Abstract: ABSTRACT.BACKGROUND: Statins have anti-inflammatory and immunomodulatory properties in addition to lipid-lowering effects. ABSTRACT.OBJECTIVES: To report the 12-month extension of a phase II trial evaluating the efficacy, safety and tolerability of atorvastatin 40 mg/d added to interferon beta-1b (IFNB-1b) in relapsing-remitting multiple sclerosis (RRMS). ABSTRACT.METHODS: In the randomized, multicenter, parallel-group, rater-blinded core study, 77 RRMS patients started IFNB-1b. At month three they were randomized 1∶1 to receive atorvastatin 40 mg/d or not in addition to IFNB-1b until month 15. In the subsequent extension study, patients continued with unchanged medication for another 12 months. Data at study end were compared to data at month three of the core study. ABSTRACT.RESULTS: 27 of 72 patients that finished the core study entered the extension study. 45 patients were lost mainly due to a safety analysis during the core study including a recruitment stop for the extension study. The primary end point, the proportion of patients with new lesions on T2-weighted images was equal in both groups (odds ratio 1.926; 95% CI 0.265–14.0007; p = 0.51). All secondary endpoints including number of new lesions and total lesion volume on T2-weighted images, total number of Gd-enhancing lesions on T1-weighted images, volume of grey and white matter, EDSS, MSFC, relapse rate, number of relapse-free patients and neutralizing antibodies did not show significant differences either. The combination therapy was well tolerated. ABSTRACT.CONCLUSIONS: Atorvastatin 40 mg/day in addition to IFNB-1b did not have any beneficial effects on RRMS compared to IFNB-1b monotherapy over a period of 24 months. ABSTRACT.TRIAL REGISTRATION: ClinicalTrials.gov NCT01111656 Extracted Data:
[["Atorvastatin 40 mg", "Control", "Gd-enhancing lesions on T1-weighted images", "no significant difference", "All secondary endpoints including number of new lesions and total lesion volume on T2-weighted images, total number of Gd-enhancing lesions on T1-weighted images, volume of grey and white matter, EDSS, MSFC, relapse rate, number of relapse-free patients and neutralizing antibodies did not show significant differences either."], ["Atorvastatin 40 mg", "Control", "New lesions on T2-weighted images ", "no significant difference", "All secondary endpoints including number of new lesions and total lesion volume on T2-weighted images, total number of Gd-enhancing lesions on T1-weighted images, volume of grey and white matter, EDSS, MSFC, relapse rate, number of relapse-free patients and neutralizing antibodies did not show significant differences either."], ["Atorvastatin 40 mg", "Control", "Grey and white matter volumes", "no significant difference", "All secondary endpoints including number of new lesions and total lesion volume on T2-weighted images, total number of Gd-enhancing lesions on T1-weighted images, volume of grey and white matter, EDSS, MSFC, relapse rate, number of relapse-free patients and neutralizing antibodies did not show significant differences either."], ["Atorvastatin 40 mg", "Control", "Number of relapse-free patients", "no significant difference", "All secondary endpoints including number of new lesions and total lesion volume on T2-weighted images, total number of Gd-enhancing lesions on T1-weighted images, volume of grey and white matter, EDSS, MSFC, relapse rate, number of relapse-free patients and neutralizing antibodies did not show significant differences either."], ["Atorvastatin 40 mg", "Control", "Total lesion volume", "no significant difference", "All secondary endpoints including number of new lesions and total lesion volume on T2-weighted images, total number of Gd-enhancing lesions on T1-weighted images, volume of grey and white matter, EDSS, MSFC, relapse rate, number of relapse-free patients and neutralizing antibodies did not show significant differences either."]]
science.evidence_inference.168
[ { "content": "Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:\n\n- Treatment: The specific medical procedure or drug being evaluated.\n- Control: The standard or placebo used for comparison. If absent, use `null`.\n- Result: The health outcome being assessed.\n- Impact: The observed effect of the treatment on the result, categorized as: (\"increased\", \"decreased\", \"no change\").\n- Justification: The supporting details for the observed impact.\n\nFormat the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].\n\nAbstract:\nABSTRACT.BACKGROUND: Statins have anti-inflammatory and immunomodulatory properties in addition to lipid-lowering effects.\n\nABSTRACT.OBJECTIVES: To report the 12-month extension of a phase II trial evaluating the efficacy, safety and tolerability of atorvastatin 40 mg/d added to interferon beta-1b (IFNB-1b) in relapsing-remitting multiple sclerosis (RRMS).\n\nABSTRACT.METHODS: In the randomized, multicenter, parallel-group, rater-blinded core study, 77 RRMS patients started IFNB-1b. At month three they were randomized 1∶1 to receive atorvastatin 40 mg/d or not in addition to IFNB-1b until month 15. In the subsequent extension study, patients continued with unchanged medication for another 12 months. Data at study end were compared to data at month three of the core study.\n\nABSTRACT.RESULTS: 27 of 72 patients that finished the core study entered the extension study. 45 patients were lost mainly due to a safety analysis during the core study including a recruitment stop for the extension study. The primary end point, the proportion of patients with new lesions on T2-weighted images was equal in both groups (odds ratio 1.926; 95% CI 0.265–14.0007; p = 0.51). All secondary endpoints including number of new lesions and total lesion volume on T2-weighted images, total number of Gd-enhancing lesions on T1-weighted images, volume of grey and white matter, EDSS, MSFC, relapse rate, number of relapse-free patients and neutralizing antibodies did not show significant differences either. 
The combination therapy was well tolerated.\n\nABSTRACT.CONCLUSIONS: Atorvastatin 40 mg/day in addition to IFNB-1b did not have any beneficial effects on RRMS compared to IFNB-1b monotherapy over a period of 24 months.\n\nABSTRACT.TRIAL REGISTRATION: ClinicalTrials.gov NCT01111656\n\nExtracted Data:", "role": "user" }, { "content": "[[\"Atorvastatin 40 mg\", \"Control\", \"Gd-enhancing lesions on T1-weighted images\", \"no significant difference\", \"All secondary endpoints including number of new lesions and total lesion volume on T2-weighted images, total number of Gd-enhancing lesions on T1-weighted images, volume of grey and white matter, EDSS, MSFC, relapse rate, number of relapse-free patients and neutralizing antibodies did not show significant differences either.\"], [\"Atorvastatin 40 mg\", \"Control\", \"New lesions on T2-weighted images \", \"no significant difference\", \"All secondary endpoints including number of new lesions and total lesion volume on T2-weighted images, total number of Gd-enhancing lesions on T1-weighted images, volume of grey and white matter, EDSS, MSFC, relapse rate, number of relapse-free patients and neutralizing antibodies did not show significant differences either.\"], [\"Atorvastatin 40 mg\", \"Control\", \"Grey and white matter volumes\", \"no significant difference\", \"All secondary endpoints including number of new lesions and total lesion volume on T2-weighted images, total number of Gd-enhancing lesions on T1-weighted images, volume of grey and white matter, EDSS, MSFC, relapse rate, number of relapse-free patients and neutralizing antibodies did not show significant differences either.\"], [\"Atorvastatin 40 mg\", \"Control\", \"Number of relapse-free patients\", \"no significant difference\", \"All secondary endpoints including number of new lesions and total lesion volume on T2-weighted images, total number of Gd-enhancing lesions on T1-weighted images, volume of grey and white matter, EDSS, MSFC, relapse rate, number of relapse-free patients and neutralizing antibodies did not show significant differences either.\"], [\"Atorvastatin 40 mg\", \"Control\", \"Total lesion volume\", \"no significant difference\", \"All secondary endpoints including number of new lesions and total lesion volume on T2-weighted images, total number of Gd-enhancing lesions on T1-weighted images, volume of grey and white matter, EDSS, MSFC, relapse rate, number of relapse-free patients and neutralizing antibodies did not show significant differences either.\"]]", "role": "assistant" } ]
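The evidence-inference prompt above fixes the output shape — a JSON array whose entries are five-element lists in the order treatment, control, result, impact, justification — which lends itself to a simple shape check. The sketch below is a hypothetical helper, not part of the task definition; it deliberately does not enforce the ("increased", "decreased", "no change") impact labels, because the sample outputs in these records use longer phrases such as "no significant difference" and "significantly decreased".

import json

def check_evidence_entries(output_text):
    """Shape-check an evidence-inference output: a JSON array of five-element entries."""
    problems = []
    try:
        entries = json.loads(output_text)
    except json.JSONDecodeError as exc:
        return ["output is not valid JSON: %s" % exc]
    if not isinstance(entries, list):
        return ["top-level value is not a JSON array"]
    field_names = ("treatment", "control", "result", "impact", "justification")
    for i, entry in enumerate(entries):
        if not isinstance(entry, list) or len(entry) != 5:
            problems.append("entry %d does not have exactly 5 elements" % i)
            continue
        for name, value in zip(field_names, entry):
            if value is None and name == "control":
                continue  # the task allows a null control when no comparator is reported
            if not isinstance(value, str):
                problems.append("entry %d: %s is not a string" % (i, name))
    return problems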
science.evidence_inference
Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as: - Treatment: The specific medical procedure or drug being evaluated. - Control: The standard or placebo used for comparison. If absent, use `null`. - Result: The health outcome being assessed. - Impact: The observed effect of the treatment on the result, categorized as: ("increased", "decreased", "no change"). - Justification: The supporting details for the observed impact. Format the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>]. Abstract: ABSTRACT.OBJECTIVES:: This study examined the neural adaptations associated with a low-volume Wingate-based High Intensity Interval Training (HIIT). ABSTRACT.METHODS:: Fourteen recreationally trained males were divided into an experimental (HIIT) and a control group to determine whether a short-term (4 weeks) Wingate-based HIIT program could alter the Hoffmann (H-) reflex, volitional (V-) wave and maximum voluntary contraction (MVC) of the plantar-flexor muscles, and the peak power achieved during a Wingate test. ABSTRACT.RESULTS:: Absolute and relative peak power increased in the HIIT group (ABS_Ppeak: +14.7%, P=0.001; and REL_Ppeak: +15.0%, P=0.001), but not in the control group (ABS_Ppeak: P=0.466; and REL_Ppeak: P=0.493). However, no significant changes were found in the MVC (P>0.05 for both groups). There was a significant increase in H-reflex size after HIIT (+24.5%, P=0.004), while it remained unchanged in the control group (P=0.134). No significant changes were observed either in the V-wave or in the Vwave/Mwave ratio (P>0.05 for both groups). ABSTRACT.CONCLUSION:: The Wingate-based training led to an increased peak power together with a higher spinal excitability. However, no changes were found either in the volitional wave or in the MVC, indicating a lack of adaptation in the central motor drive. Extracted Data:
[["low-volume Wingate-based High Intensity Interval Training (HIIT", "control", "maximum voluntary contraction (MVC)", "no significant difference", "However, no significant changes were found in the MVC (P\u0026gt;0.05 for both groups)."], ["low-volume Wingate-based High Intensity Interval Training (HIIT", "control", "maximum voluntary contraction (MVC)", "no significant difference", "no significant changes were found in the MVC (P\u003e0.05 for both groups)."]]
science.evidence_inference.214
[ { "content": "Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:\n\n- Treatment: The specific medical procedure or drug being evaluated.\n- Control: The standard or placebo used for comparison. If absent, use `null`.\n- Result: The health outcome being assessed.\n- Impact: The observed effect of the treatment on the result, categorized as: (\"increased\", \"decreased\", \"no change\").\n- Justification: The supporting details for the observed impact.\n\nFormat the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].\n\nAbstract:\nABSTRACT.OBJECTIVES:: This study examined the neural adaptations associated with a low-volume Wingate-based High Intensity Interval Training (HIIT).\n\nABSTRACT.METHODS:: Fourteen recreationally trained males were divided into an experimental (HIIT) and a control group to determine whether a short-term (4 weeks) Wingate-based HIIT program could alter the Hoffmann (H-) reflex, volitional (V-) wave and maximum voluntary contraction (MVC) of the plantar-flexor muscles, and the peak power achieved during a Wingate test.\n\nABSTRACT.RESULTS:: Absolute and relative peak power increased in the HIIT group (ABS_Ppeak: +14.7%, P=0.001; and REL_Ppeak: +15.0%, P=0.001), but not in the control group (ABS_Ppeak: P=0.466; and REL_Ppeak: P=0.493). However, no significant changes were found in the MVC (P>0.05 for both groups). There was a significant increase in H-reflex size after HIIT (+24.5%, P=0.004), while it remained unchanged in the control group (P=0.134). No significant changes were observed either in the V-wave or in the Vwave/Mwave ratio (P>0.05 for both groups).\n\nABSTRACT.CONCLUSION:: The Wingate-based training led to an increased peak power together with a higher spinal excitability. However, no changes were found either in the volitional wave or in the MVC, indicating a lack of adaptation in the central motor drive.\n\nExtracted Data:", "role": "user" }, { "content": "[[\"low-volume Wingate-based High Intensity Interval Training (HIIT\", \"control\", \"maximum voluntary contraction (MVC)\", \"no significant difference\", \"However, no significant changes were found in the MVC (P\\u0026gt;0.05 for both groups).\"], [\"low-volume Wingate-based High Intensity Interval Training (HIIT\", \"control\", \"maximum voluntary contraction (MVC)\", \"no significant difference\", \"no significant changes were found in the MVC (P\\u003e0.05 for both groups).\"]]", "role": "assistant" } ]
science.evidence_inference
Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as: - Treatment: The specific medical procedure or drug being evaluated. - Control: The standard or placebo used for comparison. If absent, use `null`. - Result: The health outcome being assessed. - Impact: The observed effect of the treatment on the result, categorized as: ("increased", "decreased", "no change"). - Justification: The supporting details for the observed impact. Format the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>]. Abstract: ABSTRACT.BACKGROUND: the SMILE studies proved the prognostic benefit of zofenopril vs. placebo or other ACE-inhibitors (ACEIs) in post-acute myocardial infarction (AMI). In this retrospective pooled analysis of these studies we assessed whether the zofenopril effect is influenced by gender. ABSTRACT.METHODS: the four double-blind, randomized, parallel-group SMILE studies, compared the efficacy and safety of 6–48 week treatment with zofenopril 60 mg/day with that of placebo, lisinopril 10 mg/day or ramipril 10 mg/day in 3630 AMI patients. This pooled analysis compared treatment efficacy (1-year combined occurrence of death or hospitalization for CV causes) in 2733 men and 897 women. ABSTRACT.RESULTS: women were older than men, had a higher prevalence of diabetes and of other major CV risk factors. The risk of a major CV event was significantly larger for women (23% vs. 17% men, p<0.001). Between-gender risk difference was more marked for people living in Southern (+54%) than in Northern Europe (+12%). In both genders zofenopril similarly reduced the 1-year risk of CV morbidity and mortality vs. placebo (−39% men, p = 0.0001; −40% women, p = 0.005). The risk reduction was more marked with zofenopril than with the other ACEIs, particularly in men (−27%, p = 0.012; women: −14%, p = 0.479). The drug safety profile was similar between genders in zofenopril-treated patients, while it was worse in women treated with other ACEIs. ABSTRACT.CONCLUSIONS: post-AMI women are at higher risk of CV complications than men, particularly when living in Mediterranean countries. Their response to ACE-inhibition varies according to the type of drug and is usually better in men. Extracted Data:
[["zofenopril 60 mg/day", "placebo", "risk of 1-year CV morbidity and mortality ", "significantly decreased", "In both genders zofenopril similarly reduced the 1-year risk of CV morbidity and mortality vs. placebo (\u221239% men, p\u200a=\u200a0.0001; \u221240% women, p\u200a=\u200a0.005)."]]
science.evidence_inference.237
[ { "content": "Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:\n\n- Treatment: The specific medical procedure or drug being evaluated.\n- Control: The standard or placebo used for comparison. If absent, use `null`.\n- Result: The health outcome being assessed.\n- Impact: The observed effect of the treatment on the result, categorized as: (\"increased\", \"decreased\", \"no change\").\n- Justification: The supporting details for the observed impact.\n\nFormat the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].\n\nAbstract:\nABSTRACT.BACKGROUND: the SMILE studies proved the prognostic benefit of zofenopril vs. placebo or other ACE-inhibitors (ACEIs) in post-acute myocardial infarction (AMI). In this retrospective pooled analysis of these studies we assessed whether the zofenopril effect is influenced by gender.\n\nABSTRACT.METHODS: the four double-blind, randomized, parallel-group SMILE studies, compared the efficacy and safety of 6–48 week treatment with zofenopril 60 mg/day with that of placebo, lisinopril 10 mg/day or ramipril 10 mg/day in 3630 AMI patients. This pooled analysis compared treatment efficacy (1-year combined occurrence of death or hospitalization for CV causes) in 2733 men and 897 women.\n\nABSTRACT.RESULTS: women were older than men, had a higher prevalence of diabetes and of other major CV risk factors. The risk of a major CV event was significantly larger for women (23% vs. 17% men, p<0.001). Between-gender risk difference was more marked for people living in Southern (+54%) than in Northern Europe (+12%). In both genders zofenopril similarly reduced the 1-year risk of CV morbidity and mortality vs. placebo (−39% men, p = 0.0001; −40% women, p = 0.005). The risk reduction was more marked with zofenopril than with the other ACEIs, particularly in men (−27%, p = 0.012; women: −14%, p = 0.479). The drug safety profile was similar between genders in zofenopril-treated patients, while it was worse in women treated with other ACEIs.\n\nABSTRACT.CONCLUSIONS: post-AMI women are at higher risk of CV complications than men, particularly when living in Mediterranean countries. Their response to ACE-inhibition varies according to the type of drug and is usually better in men.\n\nExtracted Data:", "role": "user" }, { "content": "[[\"zofenopril 60 mg/day\", \"placebo\", \"risk of 1-year CV morbidity and mortality \", \"significantly decreased\", \"In both genders zofenopril similarly reduced the 1-year risk of CV morbidity and mortality vs. placebo (\\u221239% men, p\\u200a=\\u200a0.0001; \\u221240% women, p\\u200a=\\u200a0.005).\"]]", "role": "assistant" } ]
science.mup_single_document_summarization
Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise. Title of the Paper: An experimental study of layer-level training speed and its impact on generalization Main Content: How optimization influences the generalization ability of a DNN is still an active area of research . This work aims to unveil and study a factor of influence : the speed at which each layer trains . In our preliminary work , we develop a visualization technique and an optimization algorithm to monitor and control the layer rotation rate , a tentative measure of layer-level training speed , and show that it has a remarkably consistent and substantial impact on generalization . Our experiments further suggest that weight decay ’ s and adaptive gradients methods ’ impact on both generalization performance and speed of convergence are solely due to layer rotation rate changes compared to vanilla SGD , offering a novel interpretation of these widely used techniques , and providing supplementary evidence that layer-level training speed indeed impacts generalization . Besides these fundamental findings , we also expect that on a practical level , the tools we introduce will reduce the meta-parameter tuning required to get the best generalization out of a deep network . 1 INTRODUCTION . Generalization and gradient propagation are two popular themes in the deep learning literature . Concerning generalization , it has been observed that a network ’ s ability to generalize depends on a subtle interaction between the optimization procedure and the training data ( Zhang et al. , 2017a ; Arpit et al. , 2017 ) . Concerning gradient propagation , several works have shown that the norm of gradients can gradually increase or decrease as a function of layer depth ( i.e . vanishing and exploding gradients ( Bengio et al. , 1994 ; Hochreiter , 1998 ; Glorot & Bengio , 2010 ) ) , so that some layers are trained faster than others . This work explores an interaction between generalization and the intricate nature of gradient propagation in deep networks , and focuses on the following research question : how does the speed at which each layer trains influence generalization ? Our endeavour is motivated by the following intuition : if the training data influences a neural network ’ s generalization ability when using gradient-based optimization ( Zhang et al. , 2017a ; Arpit et al. , 2017 ) , the input and feedback signals that a layer receives ( during the network ’ s forward and backward passes ) could also influence the generalization ability induced by the layer ’ s training . These signals result from a transformation involving the other layers of the network such that , for example , the input signals of the last layer could be more conducive to good generalization if the first layers have been significantly updated already , instead of being randomly initialized ( cfr . the works on transfer learning ( Donahue et al. , 2014 ; Oquab et al. , 2014 ) ) . More generally , the speed at which each layer trains during the network ’ s training , since it directly influences how the input and feedback signals of the other layers evolve over training , could have an impact on generalization . Figure 1 supports our intuition with a toy example where training a single layer of an 11 layer MLP network , although always reaching 100 % train accuracy , results in different test accuracies depending on the layer ’ s localisation in the network architecture . 
Our study starts from an educated guess about how to measure layer-level training speed appropriately : we measure it through layer rotation rates , i.e . the rates at which the weight vectors of layers rotate ( another approach would be to measure the norm of the weight updates at each training step , as is done in Bengio et al . ( 1994 ) ; Hochreiter ( 1998 ) ; Glorot & Bengio ( 2010 ) ; Pascanu et al . ( 2013 ) ; Arjovsky et al . ( 2016 ) ) . The study is then composed of the three following steps : 1 . Developing tools to monitor and control layer rotation rates ; 2 . Using our controlling tool to systematically explore layer rotation rate configurations , varying the layers which are prioritized ( first layers , last layers , or no prioritization ) and the global rotation rate value ( high or low rate , for all layers ) ; 1 3 . Using our monitoring tool to study the layer rotation rates that emerge from standard training settings.1 The outcomes of our study , supported by an extensive amount of experiments , are the following : ( i ) layer rotation rates have a consistent and substantial impact on generalization ; ( ii ) weight decay is a key ingredient for enabling the emergence of beneficial layer rotation rates during SGD training ; ( iii ) adaptive gradient methods ’ impact on generalization and training speed does not result from parameter-level , but rather from layer-level adaptation of the learning rate ; While the influence of layer-level training speed on generalization has remained unstudied , our observations thus suggest that its impact is ubiquitous in current deep learning applications . Our preliminary work offers useful guidance for meta-parameter tuning and novel insights around two widely used techniques : weight decay and adaptive gradient methods . Moreover , while layer rotation rate as a measure of layer-level training speed originated from an educated guess , the impressive consistency of its impact on generalization supports the pertinence of this choice . Our work thus also contributes to the open problem of correctly measuring layer-level training speed . To encourage further validation of our claims , the tools and source code used to create all the figures of this paper are provided at -github link hidden to preserve anonymity- ( code uses the Keras ( Chollet et al. , 2015 ) and TensorFlow ( Agarwal et al. , 2016 ) libraries ) . We also encourage interested readers to browse the supplementary material of this paper , as additional results are presented and discussed . 2 RELATED WORK . Recent works have demonstrated that generalization in deep neural networks was largely due to the optimization procedure and its puzzling interaction with the training data ( Zhang et al. , 2017a ; Arpit et al. , 2017 ) . Our paper discloses an aspect of the optimization procedure that influences generalization in deep learning : the rate at which each layer ’ s weight vector is rotated . This novel 1Our preliminary study focuses on convolutional neural networks used for image classification . factor complements batch size and global learning rate , two parameters that have been extensively studied in the light of generalization ( Keskar et al. , 2017 ; Jastrzebski et al. , 2017 ; Smith & Le , 2017 ; Smith & Topin , 2017 ; Hoffer et al. , 2017 ; Masters & Luschi , 2018 ) . The works studying the vanishing and exploding gradients problems ( Bengio et al. , 1994 ; Hochreiter , 1998 ; Glorot & Bengio , 2010 ) heavily inspired this paper . 
These works introduce two ideas which are central to our investigation : the notion of layer-level training speed and the fact that SGD does not necessarily train all layers at the same speed during training . Our work explores the same phenomena , but studies them in the light of generalization instead of trainability and speed of convergence . Our paper also proposes Layca , an algorithm to control the rate at which each layer ’ s weight is rotated during training . It is related to the works that sought solutions to the gradient propagation problems at optimization level ( Pascanu et al. , 2013 ; Hazan et al. , 2015 ; Singh et al. , 2015 ; Arjovsky et al. , 2016 ; Pennington et al. , 2017 ) . These works , however , do not use weight rotation as a measure of layer-level training speed , and also focus on speed of convergence instead of generalization . Recently , a series of papers proposed optimization algorithms similar to Layca and observed an impact on generalization ( Yu et al. , 2017 ; Zhang et al. , 2017b ; Ginsburg et al. , 2018 ) . Section A.2 in our Supplementary Material provides evidence that these methods may be equivalent to Layca in practice , despite avoiding some of Layca ’ s operations . Our paper thus supplements these works by providing an extensive study of the phenomena underlying their observations . Several works have recently argued that weight decay ’ s regularization effect emerged from its ability to increase the effective learning rate ( van Laarhoven , 2017 ; Hoffer et al. , 2018 ; Anonymous , 2019 ) . A concise description of when and to what extent weight decay increases the effective learning rate is however lacking , such that using weight decay is still necessary to benefit from its regularization effect in practice.2 Our work also analyses weight decay , but from the perspective of layer rotation rates instead of effective learning rates . We show that this new perspective enables a more succinct description of weight decay ’ s regularizing effect , that we are able to reproduce without any additional meta-parameter tuning when using Layca , our tool for controlling layer rotation rates . 3 TOOLS FOR MONITORING AND CONTROLLING LAYER ROTATION RATES . This paper ’ s goal is to study the relation between layer-level training speed and generalization . However , the notion of layer-level training speed is unclear , and its control through SGD is potentially difficult because of the intricate nature of gradient propagation ( cfr . vanishing and exploding gradients ) . Therefore , our work starts by making an educated guess about how to measure layer-level training speed and by developing tools to monitor and control this metric during training . This section presents the metric and tools , while the impact on generalization is studied in Sections 4 and 5 . 3.1 HOW CAN WE MEASURE LAYER-LEVEL TRAINING SPEED ? . Training speed can be understood as the speed with which a model converges to its optimal solution -not to be confounded with learning rate , which is only one of the parameters that affect training speed in current deep learning applications . The notion of layer-level training speed is ill-posed , since a layer does not have a loss of its own : all layers optimize the same global loss function . Given a training step , how can we know by how much each layer ’ s update contributed to the improvement of the global loss ? Or , in other words , how can we measure at what rate relevant features are learned by each layer individually ? 
Previous work on vanishing and exploding gradients focused on the norm and variance of gradients as a measure of layer-level training speed ( Bengio et al. , 1994 ; Hochreiter , 1998 ; Glorot & Bengio , 2010 ) . Provided the empirical work on activation and weight binarization during ( Courbariaux & David , 2015 ; Rastegari et al. , 2016 ; Hubara et al. , 2016 ) or after training ( Agrawal et al. , 2014 ; 2van Laarhoven ( 2017 ) proposes to keep the norm of the weights fixed to 1 in order to eliminate the effect of weight decay , but not to reproduce it . Hoffer et al . ( 2018 ) ; Anonymous ( 2019 ) are able to reproduce the regularizing effect of weight decay by tuning the effective learning rate , but their tuning strategy consists in copying the effective learning rate that emerges when training the same network with weight decay . Algorithm 1 Layca , an algorithm that enables control over the amount of weight rotation per step for each layer through its learning rate parameter ( cfr . Section 3.2 ) . Require : o , an optimizer ( SGD is the default choice ) Require : T , the number of training steps L is the number of layers in the network for l=0 to L-1 do Require : ρl ( t ) , a layer ’ s learning rate schedule Require : wl0 , the initial multiplicative weights of layer l end for t← 0 while t < T do s0t , ... , s L−1 t = getStep ( o , w 0 t , ... , w L−1 t ) ( get the updates of the selected optimizer ) for l=0 to L-1 do slt ← slt − ( slt·w l t ) w l t wlt·wlt ( project step on space orthogonal to wlt ) slt ← slt‖w l t‖2 ‖slt‖2 ( rotation-based normalization ) wlt+1 ← wlt + ρl ( t ) slt ( perform update ) wlt+1 ← wlt+1 ‖wl0‖2 ‖wlt+1‖2 ( project weights back on sphere ) end for t← t+ 1 end while Carbonnelle & De Vleeschouwer , 2018 ) , we argue that the norm of a weight vector does not matter , but only its orientation . Therefore , we suggest to measure training speed through the rotation rate of a layer ’ s weight vector ( also denoted by layer rotation rate in this paper ) . More precisely , let wtl be the flattened weight tensor of the lth layer at optimization step t , then the rotation rate of layer l between steps t1 and t2 is defined as the angle between wt1l and w t2 l divided by the number of performed steps t2 − t1 . 3 In order to visualize how fast layers rotate during training , we propose to inspect how the cosine distance between each layer ’ s current weight vector and its initialization evolves across training steps . We denote this visualization tool by layer-wise angle deviation curves hereafter .
This paper insists layer-level training speed is crucial for generalization ability. The layer-level training speed is measured by angle between weights at different time stamps in this paper. To control the amount of weight rotation, which means the degree of angle movement, this paper proposes a new algorithm, Layca. This algorithm projects the gradient vector of SGD (or update vector of other variants) onto the space orthogonal to the current weight vector, and adjust the length of the update vector to achieve the desirable angle movement. This paper conducted several experiments to verify the helpfulness of Layca.
science.mup_single_document_summarization.9610
[ { "content": "Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.\n\nTitle of the Paper:\nAn experimental study of layer-level training speed and its impact on generalization\n\nMain Content:\nHow optimization influences the generalization ability of a DNN is still an active area of research . This work aims to unveil and study a factor of influence : the speed at which each layer trains . In our preliminary work , we develop a visualization technique and an optimization algorithm to monitor and control the layer rotation rate , a tentative measure of layer-level training speed , and show that it has a remarkably consistent and substantial impact on generalization . Our experiments further suggest that weight decay ’ s and adaptive gradients methods ’ impact on both generalization performance and speed of convergence are solely due to layer rotation rate changes compared to vanilla SGD , offering a novel interpretation of these widely used techniques , and providing supplementary evidence that layer-level training speed indeed impacts generalization . Besides these fundamental findings , we also expect that on a practical level , the tools we introduce will reduce the meta-parameter tuning required to get the best generalization out of a deep network . 1 INTRODUCTION . Generalization and gradient propagation are two popular themes in the deep learning literature . Concerning generalization , it has been observed that a network ’ s ability to generalize depends on a subtle interaction between the optimization procedure and the training data ( Zhang et al. , 2017a ; Arpit et al. , 2017 ) . Concerning gradient propagation , several works have shown that the norm of gradients can gradually increase or decrease as a function of layer depth ( i.e . vanishing and exploding gradients ( Bengio et al. , 1994 ; Hochreiter , 1998 ; Glorot & Bengio , 2010 ) ) , so that some layers are trained faster than others . This work explores an interaction between generalization and the intricate nature of gradient propagation in deep networks , and focuses on the following research question : how does the speed at which each layer trains influence generalization ? Our endeavour is motivated by the following intuition : if the training data influences a neural network ’ s generalization ability when using gradient-based optimization ( Zhang et al. , 2017a ; Arpit et al. , 2017 ) , the input and feedback signals that a layer receives ( during the network ’ s forward and backward passes ) could also influence the generalization ability induced by the layer ’ s training . These signals result from a transformation involving the other layers of the network such that , for example , the input signals of the last layer could be more conducive to good generalization if the first layers have been significantly updated already , instead of being randomly initialized ( cfr . the works on transfer learning ( Donahue et al. , 2014 ; Oquab et al. , 2014 ) ) . More generally , the speed at which each layer trains during the network ’ s training , since it directly influences how the input and feedback signals of the other layers evolve over training , could have an impact on generalization . 
Figure 1 supports our intuition with a toy example where training a single layer of an 11 layer MLP network , although always reaching 100 % train accuracy , results in different test accuracies depending on the layer ’ s localisation in the network architecture . Our study starts from an educated guess about how to measure layer-level training speed appropriately : we measure it through layer rotation rates , i.e . the rates at which the weight vectors of layers rotate ( another approach would be to measure the norm of the weight updates at each training step , as is done in Bengio et al . ( 1994 ) ; Hochreiter ( 1998 ) ; Glorot & Bengio ( 2010 ) ; Pascanu et al . ( 2013 ) ; Arjovsky et al . ( 2016 ) ) . The study is then composed of the three following steps : 1 . Developing tools to monitor and control layer rotation rates ; 2 . Using our controlling tool to systematically explore layer rotation rate configurations , varying the layers which are prioritized ( first layers , last layers , or no prioritization ) and the global rotation rate value ( high or low rate , for all layers ) ; 1 3 . Using our monitoring tool to study the layer rotation rates that emerge from standard training settings.1 The outcomes of our study , supported by an extensive amount of experiments , are the following : ( i ) layer rotation rates have a consistent and substantial impact on generalization ; ( ii ) weight decay is a key ingredient for enabling the emergence of beneficial layer rotation rates during SGD training ; ( iii ) adaptive gradient methods ’ impact on generalization and training speed does not result from parameter-level , but rather from layer-level adaptation of the learning rate ; While the influence of layer-level training speed on generalization has remained unstudied , our observations thus suggest that its impact is ubiquitous in current deep learning applications . Our preliminary work offers useful guidance for meta-parameter tuning and novel insights around two widely used techniques : weight decay and adaptive gradient methods . Moreover , while layer rotation rate as a measure of layer-level training speed originated from an educated guess , the impressive consistency of its impact on generalization supports the pertinence of this choice . Our work thus also contributes to the open problem of correctly measuring layer-level training speed . To encourage further validation of our claims , the tools and source code used to create all the figures of this paper are provided at -github link hidden to preserve anonymity- ( code uses the Keras ( Chollet et al. , 2015 ) and TensorFlow ( Agarwal et al. , 2016 ) libraries ) . We also encourage interested readers to browse the supplementary material of this paper , as additional results are presented and discussed . 2 RELATED WORK . Recent works have demonstrated that generalization in deep neural networks was largely due to the optimization procedure and its puzzling interaction with the training data ( Zhang et al. , 2017a ; Arpit et al. , 2017 ) . Our paper discloses an aspect of the optimization procedure that influences generalization in deep learning : the rate at which each layer ’ s weight vector is rotated . This novel 1Our preliminary study focuses on convolutional neural networks used for image classification . factor complements batch size and global learning rate , two parameters that have been extensively studied in the light of generalization ( Keskar et al. , 2017 ; Jastrzebski et al. 
, 2017 ; Smith & Le , 2017 ; Smith & Topin , 2017 ; Hoffer et al. , 2017 ; Masters & Luschi , 2018 ) . The works studying the vanishing and exploding gradients problems ( Bengio et al. , 1994 ; Hochreiter , 1998 ; Glorot & Bengio , 2010 ) heavily inspired this paper . These works introduce two ideas which are central to our investigation : the notion of layer-level training speed and the fact that SGD does not necessarily train all layers at the same speed during training . Our work explores the same phenomena , but studies them in the light of generalization instead of trainability and speed of convergence . Our paper also proposes Layca , an algorithm to control the rate at which each layer ’ s weight is rotated during training . It is related to the works that sought solutions to the gradient propagation problems at optimization level ( Pascanu et al. , 2013 ; Hazan et al. , 2015 ; Singh et al. , 2015 ; Arjovsky et al. , 2016 ; Pennington et al. , 2017 ) . These works , however , do not use weight rotation as a measure of layer-level training speed , and also focus on speed of convergence instead of generalization . Recently , a series of papers proposed optimization algorithms similar to Layca and observed an impact on generalization ( Yu et al. , 2017 ; Zhang et al. , 2017b ; Ginsburg et al. , 2018 ) . Section A.2 in our Supplementary Material provides evidence that these methods may be equivalent to Layca in practice , despite avoiding some of Layca ’ s operations . Our paper thus supplements these works by providing an extensive study of the phenomena underlying their observations . Several works have recently argued that weight decay ’ s regularization effect emerged from its ability to increase the effective learning rate ( van Laarhoven , 2017 ; Hoffer et al. , 2018 ; Anonymous , 2019 ) . A concise description of when and to what extent weight decay increases the effective learning rate is however lacking , such that using weight decay is still necessary to benefit from its regularization effect in practice.2 Our work also analyses weight decay , but from the perspective of layer rotation rates instead of effective learning rates . We show that this new perspective enables a more succinct description of weight decay ’ s regularizing effect , that we are able to reproduce without any additional meta-parameter tuning when using Layca , our tool for controlling layer rotation rates . 3 TOOLS FOR MONITORING AND CONTROLLING LAYER ROTATION RATES . This paper ’ s goal is to study the relation between layer-level training speed and generalization . However , the notion of layer-level training speed is unclear , and its control through SGD is potentially difficult because of the intricate nature of gradient propagation ( cfr . vanishing and exploding gradients ) . Therefore , our work starts by making an educated guess about how to measure layer-level training speed and by developing tools to monitor and control this metric during training . This section presents the metric and tools , while the impact on generalization is studied in Sections 4 and 5 . 3.1 HOW CAN WE MEASURE LAYER-LEVEL TRAINING SPEED ? . Training speed can be understood as the speed with which a model converges to its optimal solution -not to be confounded with learning rate , which is only one of the parameters that affect training speed in current deep learning applications . 
The notion of layer-level training speed is ill-posed , since a layer does not have a loss of its own : all layers optimize the same global loss function . Given a training step , how can we know by how much each layer ’ s update contributed to the improvement of the global loss ? Or , in other words , how can we measure at what rate relevant features are learned by each layer individually ? Previous work on vanishing and exploding gradients focused on the norm and variance of gradients as a measure of layer-level training speed ( Bengio et al. , 1994 ; Hochreiter , 1998 ; Glorot & Bengio , 2010 ) . Provided the empirical work on activation and weight binarization during ( Courbariaux & David , 2015 ; Rastegari et al. , 2016 ; Hubara et al. , 2016 ) or after training ( Agrawal et al. , 2014 ; 2van Laarhoven ( 2017 ) proposes to keep the norm of the weights fixed to 1 in order to eliminate the effect of weight decay , but not to reproduce it . Hoffer et al . ( 2018 ) ; Anonymous ( 2019 ) are able to reproduce the regularizing effect of weight decay by tuning the effective learning rate , but their tuning strategy consists in copying the effective learning rate that emerges when training the same network with weight decay . Algorithm 1 Layca , an algorithm that enables control over the amount of weight rotation per step for each layer through its learning rate parameter ( cfr . Section 3.2 ) . Require : o , an optimizer ( SGD is the default choice ) Require : T , the number of training steps L is the number of layers in the network for l=0 to L-1 do Require : ρl ( t ) , a layer ’ s learning rate schedule Require : wl0 , the initial multiplicative weights of layer l end for t← 0 while t < T do s0t , ... , s L−1 t = getStep ( o , w 0 t , ... , w L−1 t ) ( get the updates of the selected optimizer ) for l=0 to L-1 do slt ← slt − ( slt·w l t ) w l t wlt·wlt ( project step on space orthogonal to wlt ) slt ← slt‖w l t‖2 ‖slt‖2 ( rotation-based normalization ) wlt+1 ← wlt + ρl ( t ) slt ( perform update ) wlt+1 ← wlt+1 ‖wl0‖2 ‖wlt+1‖2 ( project weights back on sphere ) end for t← t+ 1 end while Carbonnelle & De Vleeschouwer , 2018 ) , we argue that the norm of a weight vector does not matter , but only its orientation . Therefore , we suggest to measure training speed through the rotation rate of a layer ’ s weight vector ( also denoted by layer rotation rate in this paper ) . More precisely , let wtl be the flattened weight tensor of the lth layer at optimization step t , then the rotation rate of layer l between steps t1 and t2 is defined as the angle between wt1l and w t2 l divided by the number of performed steps t2 − t1 . 3 In order to visualize how fast layers rotate during training , we propose to inspect how the cosine distance between each layer ’ s current weight vector and its initialization evolves across training steps . We denote this visualization tool by layer-wise angle deviation curves hereafter .", "role": "user" }, { "content": "This paper insists layer-level training speed is crucial for generalization ability. The layer-level training speed is measured by angle between weights at different time stamps in this paper. To control the amount of weight rotation, which means the degree of angle movement, this paper proposes a new algorithm, Layca. This algorithm projects the gradient vector of SGD (or update vector of other variants) onto the space orthogonal to the current weight vector, and adjust the length of the update vector to achieve the desirable angle movement. 
This paper conducts several experiments to verify the effectiveness of Layca.", "role": "assistant" } ]
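The per-layer Layca step quoted in Algorithm 1 above reduces to a projection, a rescaling, and a norm reset, plus the cosine-distance curve used for monitoring. Below is a minimal NumPy sketch of those operations — an illustrative reconstruction from the quoted pseudocode, not the authors' released code; the function names, the flattened-vector representation, and the small `eps` guard are assumptions added here.

```python
import numpy as np

def layca_update(w, step, w0_norm, rho, eps=1e-12):
    """One Layca step for a single layer (cf. Algorithm 1 in the excerpt).

    w       -- current flattened weight vector of the layer
    step    -- raw update proposed by the wrapped optimizer (SGD by default)
    w0_norm -- L2 norm of the layer's weights at initialization
    rho     -- layer-wise learning rate, i.e. the rotation-rate control
    """
    # Project the step onto the space orthogonal to the current weights.
    step = step - (np.dot(step, w) / np.dot(w, w)) * w
    # Rotation-based normalization: give the step the same L2 norm as w.
    step = step * np.linalg.norm(w) / (np.linalg.norm(step) + eps)
    # Apply the update, then project the weights back on the initial-norm sphere.
    w_new = w + rho * step
    return w_new * w0_norm / (np.linalg.norm(w_new) + eps)

def angle_deviation(w, w0):
    """Cosine distance to the initial weights (layer-wise angle deviation curve)."""
    cos = np.dot(w, w0) / (np.linalg.norm(w) * np.linalg.norm(w0))
    return 1.0 - cos
```

In a training loop this would be applied to each layer's flattened kernel after every optimizer step, with `angle_deviation` logged per layer to draw the deviation curves described in the excerpt.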
science.multicite_intent_classification
Examine the text below, which includes a citation marked by "<cite>" and "</cite>". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text. Provide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose. Text with Citation: We aim to categorize a clause based on its aspectual property and more specifically, based on the type of Situation Entity (SE) 1 (e.g., events, states, generalizing statements and generic statements) the clause introduces to the discourse, following the recent work by<cite> (Friedrich et al., 2016)</cite> .
["Uses"]
science.multicite_intent_classification.1466
[ { "content": "Examine the text below, which includes a citation marked by \"<cite>\" and \"</cite>\". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.\n\nProvide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.\n\nText with Citation:\nWe aim to categorize a clause based on its aspectual property and more specifically, based on the type of Situation Entity (SE) 1 (e.g., events, states, generalizing statements and generic statements) the clause introduces to the discourse, following the recent work by<cite> (Friedrich et al., 2016)</cite> .", "role": "user" }, { "content": "[\"Uses\"]", "role": "assistant" } ]
science.qasper_abstractive_qa
Examine the provided sections from a research document and address the accompanying query. The text is divided by newlines. Document Content: Data Collection for Interactive Learning through the Dialog This paper presents a dataset collected from natural dialogs which enables to test the ability of dialog systems to learn new facts from user utterances throughout the dialog. This interactive learning will help with one of the most prevailing problems of open domain dialog system, which is the sparsity of facts a dialog system can reason about. The proposed dataset, consisting of 1900 collected dialogs, allows simulation of an interactive gaining of denotations and questions explanations from users which can be used for the interactive learning. Introduction Nowadays, dialog systems are usually designed for a single domain BIBREF0 . They store data in a well-defined format with a fixed number of attributes for entities that the system can provide. Because data in this format can be stored as a two-dimensional table within a relational database, we call the data flat. This data representation allows the system to query the database in a simple and efficient way. It also allows to keep the dialog state in the form of slots (which usually correspond to columns in the table) and track it through the dialog using probabilistic belief tracking BIBREF1 , BIBREF2 . However, the well-defined structure of the database of a typical dialog system comes with a high cost of extending it as every piece of new information has to fit the format. This is especially a problem when we one is adapting the system for a new domain because its entities could have different attributes. A dialog system based on knowledge bases offers many advantages. First, the knowledge base, which can be represented as knowledge graph containing entities connected by relations, is much more flexible than the relational database. Second, freely available knowledge bases, such as Freebase, Wikidata, etc. contain an enormous amount of structured information, and are still growing. A dialog system which is capable of working with this type of information would be therefore very useful. In this paper we propose a dataset aiming to help develop and evaluate dialog systems based on knowledge bases by interactive learning motivated in Section "Motivation" Section "Dialog policies" describes policies that can be used for retrieving information from knowledge bases. In Section "Dialog Simulation" is introduced a dialog simulation from natural conversations which we use for evaluation of interactive learning. The dataset collection process allowing the dialog simulation is described in Section "Dataset Collection Process" and is followed by properties of the resulting dataset in Section "Dataset Properties" Evaluation guidelines with proposed metrics can be found in Section "Interactive Learning Evaluation" The planned future work is summarized in Section "Future Work" We conclude the paper with Section "Conclusion" Motivation From the point of view of dialog systems providing general information from a knowledge base, the most limiting factor is that a large portion of the questions is understood poorly. Current approaches BIBREF3 , BIBREF4 can only achieve around 50% accuracy on some question answering datasets. Therefore, we think that there is a room for improvements which can be achieved by interactively asking for additional information in conversational dialogs with users. 
This extra information can be used for improving policies of dialog systems. We call this approach the interactive learning from dialogs. We can improve dialog systems in several aspects through interactive learning in a direct interaction with users. First, the most straightforward way obviously is getting the correct answer for questions that the system does not know. We can try to ask users for answers on questions that the system encountered in a conversation with a different user and did not understand it. Second, the system can ask the user for a broader explanation of a question. This explanation could help the system to understand the question and provide the correct answer. In addition, the system can learn correct policy for the question which allows providing answers without asking any extra information for similar questions next time. We hypothesize that users are willing to give such explanations because it could help them to find answers for their own questions. The last source of information that we consider for interactive learning is rephrasing, which could help when the system does know the concept but does not know the correct wording. This area is extensively studied for the purposes of information retrieval BIBREF5 , BIBREF6 . The main purpose of the collected dataset is to enable interactive learning using the steps proposed above and potentially to evaluate how different systems perform on this task. Dialog policies The obvious difficulty when developing a dialog system is finding a way how to identify the piece of information that the user is interested in. This is especially a problem for dialog systems based on knowledge graphs containing a large amount of complex structured information. While a similar problem is being solved in a task of question answering, dialog systems have more possibilities of identifying the real intention of the user. For example, a dialog system can ask for additional information during the dialog. We distinguish three different basic approaches to requesting knowledge bases: A combination of the above approaches is also possible. For example, we can imagine scenarios where the dialog system starts with hand-crafted rules, which are subsequently interactively improved through dialogs with its users. With a growing demand for open domain dialog systems, it shows that creating hand-crafted policies does not scale well - therefore, machine learning approaches are gaining on popularity. Many public datasets for offline learning have been published BIBREF8 , BIBREF7 . However, to our knowledge, no public datasets for interactive learning are available. To fill this gap, we collected a dataset which enables to train interactively learned policies through a simulated interaction with users. Dialog Simulation Offline evaluation of interactive dialogs on real data is difficult because different policies can lead to different variants of the dialog. Our solution to this issue is to collect data in a way that allows us to simulate all dialog variants possible according to any policy. The dialog variants we are considering for interactive learning differ only in presence of several parts of the dialog. Therefore, we can collect dialogs containing all information used for interactive learning and omit those parts that were not requested by the policy. We collected the dataset (see Section "Dataset Collection Process" ) that enables simulation where the policy can decide how much extra information to the question it requests. 
If the question is clear to the system it can attempt to answer the question without any other information. It can also ask for a broader explanation with a possibility to answer the question afterwards. If the system decides not to answer the question, we can simulate rerouting the question to another user, to try to obtain the answer from them. The principle of simulated user's answer is shown in the Figure 1 . Note that the simulated user’s answer can be incorrect because human users naturally made mistakes. We intentionally keep these mistakes in the dataset because real systems must address them as well. Dataset Collection Process A perfect data collection scenario for our dataset would use real running dialog system providing general information from the knowledge base to real users. This system could then ask for explanations and answers for questions which it is not able to answer. However, getting access to systems with real users is usually hard. Therefore, we used the crowdsourcing platform CrowdFlower (CF) for our data collection. A CF worker gets a task instructing them to use our chat-like interface to help the system with a question which is randomly selected from training examples of Simple questions BIBREF7 dataset. To complete the task user has to communicate with the system through the three phase dialog discussing question paraphrase (see Section "Interactive Learning Evaluation" ), explanation (see Section "Future Work" ) and answer of the question (see Section "Conclusion" ). To avoid poor English level of dialogs we involved CF workers from English speaking countries only. The collected dialogs has been annotated (see Section "Acknowledgments" ) by expert annotators afterwards. The described procedure leads to dialogs like the one shown in the Figure 2 . Question Paraphrasing At beginning of the dialog, the system is requesting the user to paraphrase question that the system does not understand. The main goal of this first phase is to let the user get familiar with the presented question and to get alternative wordings of the posed question. Question Explanation In the second phase, the user is asked for an explanation of the question. We expect the explanation to be different enough from the original question (in terms of the number of common words between the question and the explanation). If the explanation is too similar to the question, the user is notified that their explanation is not broad enough and they must provide a better one. Question Answer With the valid explanation the dialog turns into the last phase where the user is asked for a correct answer to the original question. The system requires the user to answer with a full sentence. In practical experiments this has shown as a useful decision because it improves system's ability to reveal cheaters. We can simply measure the connection (in terms of common words ) between question and the answer sentence. This allows to reject completely irrelevant answers. Annotation The correct answer for question in each dialog is available from Simple questions dataset. Answers are in form of Freebase entities identified by unique id. For evaluation purposes we need information whether dialog contains the answer which is consistent with the entity from Simple questions, the answer with another entity or whether the dialog does not contain any answer. While the annotation process is quite simple, we did not need crowdsourcing for the process. 
Natural Language Understanding (NLU) The collection system needs to recognize following dialog acts from user utterances during all phases of the dialog: – user does not want to provide requested information, – user agrees to provide requested information, – user does not know the requested information, – user tries chit chat with the system (hello, bye, who are you...), – none of the above, interpreted as user is giving information requested by the system. Parsing of the dialog acts is made by hand written rules using templates and keyword spotting. The templates and keywords were manually collected from frequent expressions used by CF workers during preparation runs of the dataset collection process (google it, check wikipedia, I would need... $\rightarrow $ Negate). Dataset Properties We collected the dataset with 1900 dialogs and 8533 turns. Topics discussed in dialogs are questions randomly chosen from training examples of Simple questions BIBREF7 dataset. From this dataset we also took the correct answers in form of Freebase entities. Our dataset consists of standard data split into training, development and test files. The basic properties of those files are as follows: Each file contains complete dialogs enriched by outputs of NLU (see Section "Natural Language Understanding (NLU)" ) that were used during the data collection. On top of that, each dialog is labeled by the correct answer for the question and expert annotation of the user answer hint which tells whether the hint points to the correct answer, incorrect answer, or no answer at all. 351 of all collected dialogs contain correct answer provided by users and 702 dialogs have incorrect answer. In the remaining 847 dialogs users did not want to answer the question. The collected dialogs also contain 1828 paraphrases and 1539 explanations for 1870 questions. An answer for a question was labeled as correct by annotators only when it was evident to them that the answer points to the same Freebase entity that was present in Simple questions dataset for that particular question. However, a large amount of questions from that dataset is quite general - with many possible answers. Therefore lot of answers from users were labeled as incorrect even though those answers perfectly fit the question. Our annotators identified that 285 of the incorrect answers were answers for such general questions. Example of this situation can be demonstrated by question 'Name an actor' which was correctly answered by 'Brad Pitt is an actor', however, to be consistent with Simple questions annotation, which is 'Kelly Atwood', annotators were forced to mark it as an incorrect answer. Interactive Learning Evaluation A perfect interactive learning model would be able to learn anything interactively from test dialogs during testing, which would allow us to measure progress of the model from scratch over the course of time. However, a development of such model would be unnecessarily hard, therefore we provide training dialogs which can be used for feature extraction and other engineering related to interactive learning from dialogs in natural language. Model development is further supported with labeled validation data for parameter tuning. We propose two evaluation metrics for comparing interactive learning models. First metric (see Section "Efficiency Score" ) scores amount of information required by the model, second metric (see Section "Answer Extraction Accuracy" ) is accuracy of answer extraction from user utterances. 
All models must base their answers only on information gained from training dialogs and testing dialogs seen during the simulation so far, to ensure that the score will reflect the interactive learning of the model instead of general question answering. Efficiency Score The simulation of dialogs from our dataset allows to evaluate how efficient a dialog system is in using information gained from users. The dialog system should maximize the number of correctly answered questions without requesting too many explanations and answers from users. To evaluate different systems using the collected data, we propose the following evaluation measure: $$ S_D = \frac{n_c - w_i n_i - w_e n_e - w_a n_a}{|D|}$$ (Eq. 20) Here, $n_c$ denotes the number of correctly answered questions, $n_i$ denotes the number of incorrectly answered questions, $n_e$ denotes the number of requested explanations, $n_a$ denotes the number of requested answers and $|D|$ denotes the number of simulated dialogs in the dataset. $w_i$ , $w_e$ , $w_a$ are penalization weights. The penalization weights are used to compensate for different costs of obtaining different types of information from the user. For example, gaining broader explanation from the user is relatively simple because it is in their favor to cooperate with the system on a question they are interested in. However, obtaining correct answers from users is significantly more difficult because the system does not always have the chance to ask the question and the user does not have to know the correct answer for it. To make the evaluations comparable between different systems we recommend using our evaluation scripts included with the dataset with following penalization weights that reflect our intuition for gaining information from users: – incorrect answers are penalized significantly, – explanations are quite cheap; therefore, we will penalize them just slightly, – gaining question’s answer from users is harder than gaining explanations. Answer Extraction Accuracy It is quite challenging to find appropriate entity in the knowledge base even though the user provided the correct answer. Therefore, we propose another metric relevant to our dataset. This metric is the accuracy of entity extraction which measures how many times was extracted a correct answer from answer hints provided by the user in dialogs annotated as correctly answered. Future Work Our future work will be mainly focused on providing a baseline system for interactive learning which will be evaluated on the dataset. We are also planning improvements for dialog management that is used to gain explanations during the data collection. We believe that with conversation about specific aspects of the discussed question it will be possible to gain even more interesting information from users. The other area of our interest is in possibilities to improve question answering accuracy on test questions of Simple question dataset with the extra information contained in the collected dialogs. Conclusion In this paper, we presented a novel way how to evaluate different interactive learning approaches for dialog models. The evaluation covers two challenging aspects of interactive learning. First, it scores efficiency of using information gained from users in simulated question answering dialogs. Second, it measures accuracy on answer hints understanding. For purposes of evaluation we collected a dataset from conversational dialogs with workers on crowdsourcing platform CrowdFlower. 
Those dialogs were annotated with expert annotators and published under Creative Commons 4.0 BY-SA license on lindat. We also provide evaluation scripts with the dataset that should ensure comparable evaluation of different interactive learning approaches. Acknowledgments This work was funded by the Ministry of Education, Youth and Sports of the Czech Republic under the grant agreement LK11221 and core research funding, SVV project 260 224, and GAUK grant 1170516 of Charles University in Prague. It used language resources stored and distributed by the LINDAT/CLARIN project of the Ministry of Education, Youth and Sports of the Czech Republic (project LM2015071). Figure 1: Unknown questions can be rerouted between users. We can, for example, use chitchat to get correct answers. The challenge is in generalizing the collected question-answer pairs using the knowledge base in order to apply them to previously unseen questions. Table 1: Table of turn and dialog counts for dataset splits. Query: How was this data collected? JSON Response:
{"answer": "The crowdsourcing platform CrowdFlower was used to obtain natural dialog data that prompted the user to paraphrase, explain, and/or answer a question from a Simple questions BIBREF7 dataset. The CrowdFlower users were restricted to English-speaking countries to avoid dialogs with poor English.", "evidence": ["However, getting access to systems with real users is usually hard. Therefore, we used the crowdsourcing platform CrowdFlower (CF) for our data collection.", "A CF worker gets a task instructing them to use our chat-like interface to help the system with a question which is randomly selected from training examples of Simple questions BIBREF7 dataset. To complete the task user has to communicate with the system through the three phase dialog discussing question paraphrase (see Section \"Interactive Learning Evaluation\" ), explanation (see Section \"Future Work\" ) and answer of the question (see Section \"Conclusion\" ). To avoid poor English level of dialogs we involved CF workers from English speaking countries only. The collected dialogs has been annotated (see Section \"Acknowledgments\" ) by expert annotators afterwards."]}
science.qasper_abstractive_qa.19.0
[ { "content": "Examine the provided sections from a research document and address the accompanying query. The text is divided by newlines.\n\nDocument Content:\n\nData Collection for Interactive Learning through the Dialog\n\nThis paper presents a dataset collected from natural dialogs which enables to test the ability of dialog systems to learn new facts from user utterances throughout the dialog. This interactive learning will help with one of the most prevailing problems of open domain dialog system, which is the sparsity of facts a dialog system can reason about. The proposed dataset, consisting of 1900 collected dialogs, allows simulation of an interactive gaining of denotations and questions explanations from users which can be used for the interactive learning.\n\nIntroduction\nNowadays, dialog systems are usually designed for a single domain BIBREF0 . They store data in a well-defined format with a fixed number of attributes for entities that the system can provide. Because data in this format can be stored as a two-dimensional table within a relational database, we call the data flat. This data representation allows the system to query the database in a simple and efficient way. It also allows to keep the dialog state in the form of slots (which usually correspond to columns in the table) and track it through the dialog using probabilistic belief tracking BIBREF1 , BIBREF2 .\nHowever, the well-defined structure of the database of a typical dialog system comes with a high cost of extending it as every piece of new information has to fit the format. This is especially a problem when we one is adapting the system for a new domain because its entities could have different attributes.\nA dialog system based on knowledge bases offers many advantages. First, the knowledge base, which can be represented as knowledge graph containing entities connected by relations, is much more flexible than the relational database. Second, freely available knowledge bases, such as Freebase, Wikidata, etc. contain an enormous amount of structured information, and are still growing. A dialog system which is capable of working with this type of information would be therefore very useful.\nIn this paper we propose a dataset aiming to help develop and evaluate dialog systems based on knowledge bases by interactive learning motivated in Section \"Motivation\" Section \"Dialog policies\" describes policies that can be used for retrieving information from knowledge bases. In Section \"Dialog Simulation\" is introduced a dialog simulation from natural conversations which we use for evaluation of interactive learning. The dataset collection process allowing the dialog simulation is described in Section \"Dataset Collection Process\" and is followed by properties of the resulting dataset in Section \"Dataset Properties\" Evaluation guidelines with proposed metrics can be found in Section \"Interactive Learning Evaluation\" The planned future work is summarized in Section \"Future Work\" We conclude the paper with Section \"Conclusion\" \n\nMotivation\nFrom the point of view of dialog systems providing general information from a knowledge base, the most limiting factor is that a large portion of the questions is understood poorly.\nCurrent approaches BIBREF3 , BIBREF4 can only achieve around 50% accuracy on some question answering datasets. Therefore, we think that there is a room for improvements which can be achieved by interactively asking for additional information in conversational dialogs with users. 
This extra information can be used for improving policies of dialog systems. We call this approach the interactive learning from dialogs.\nWe can improve dialog systems in several aspects through interactive learning in a direct interaction with users. First, the most straightforward way obviously is getting the correct answer for questions that the system does not know. We can try to ask users for answers on questions that the system encountered in a conversation with a different user and did not understand it. Second, the system can ask the user for a broader explanation of a question. This explanation could help the system to understand the question and provide the correct answer. In addition, the system can learn correct policy for the question which allows providing answers without asking any extra information for similar questions next time. We hypothesize that users are willing to give such explanations because it could help them to find answers for their own questions. The last source of information that we consider for interactive learning is rephrasing, which could help when the system does know the concept but does not know the correct wording. This area is extensively studied for the purposes of information retrieval BIBREF5 , BIBREF6 .\nThe main purpose of the collected dataset is to enable interactive learning using the steps proposed above and potentially to evaluate how different systems perform on this task.\n\nDialog policies\nThe obvious difficulty when developing a dialog system is finding a way how to identify the piece of information that the user is interested in. This is especially a problem for dialog systems based on knowledge graphs containing a large amount of complex structured information. While a similar problem is being solved in a task of question answering, dialog systems have more possibilities of identifying the real intention of the user. For example, a dialog system can ask for additional information during the dialog.\nWe distinguish three different basic approaches to requesting knowledge bases:\nA combination of the above approaches is also possible. For example, we can imagine scenarios where the dialog system starts with hand-crafted rules, which are subsequently interactively improved through dialogs with its users. With a growing demand for open domain dialog systems, it shows that creating hand-crafted policies does not scale well - therefore, machine learning approaches are gaining on popularity. Many public datasets for offline learning have been published BIBREF8 , BIBREF7 . However, to our knowledge, no public datasets for interactive learning are available. To fill this gap, we collected a dataset which enables to train interactively learned policies through a simulated interaction with users.\n\nDialog Simulation\nOffline evaluation of interactive dialogs on real data is difficult because different policies can lead to different variants of the dialog. Our solution to this issue is to collect data in a way that allows us to simulate all dialog variants possible according to any policy.\nThe dialog variants we are considering for interactive learning differ only in presence of several parts of the dialog. Therefore, we can collect dialogs containing all information used for interactive learning and omit those parts that were not requested by the policy.\nWe collected the dataset (see Section \"Dataset Collection Process\" ) that enables simulation where the policy can decide how much extra information to the question it requests. 
If the question is clear to the system it can attempt to answer the question without any other information. It can also ask for a broader explanation with a possibility to answer the question afterwards. If the system decides not to answer the question, we can simulate rerouting the question to another user, to try to obtain the answer from them. The principle of simulated user's answer is shown in the Figure 1 .\nNote that the simulated user’s answer can be incorrect because human users naturally made mistakes. We intentionally keep these mistakes in the dataset because real systems must address them as well.\n\nDataset Collection Process\nA perfect data collection scenario for our dataset would use real running dialog system providing general information from the knowledge base to real users. This system could then ask for explanations and answers for questions which it is not able to answer.\nHowever, getting access to systems with real users is usually hard. Therefore, we used the crowdsourcing platform CrowdFlower (CF) for our data collection.\nA CF worker gets a task instructing them to use our chat-like interface to help the system with a question which is randomly selected from training examples of Simple questions BIBREF7 dataset. To complete the task user has to communicate with the system through the three phase dialog discussing question paraphrase (see Section \"Interactive Learning Evaluation\" ), explanation (see Section \"Future Work\" ) and answer of the question (see Section \"Conclusion\" ). To avoid poor English level of dialogs we involved CF workers from English speaking countries only. The collected dialogs has been annotated (see Section \"Acknowledgments\" ) by expert annotators afterwards.\nThe described procedure leads to dialogs like the one shown in the Figure 2 .\n\nQuestion Paraphrasing\nAt beginning of the dialog, the system is requesting the user to paraphrase question that the system does not understand. The main goal of this first phase is to let the user get familiar with the presented question and to get alternative wordings of the posed question.\n\nQuestion Explanation\nIn the second phase, the user is asked for an explanation of the question. We expect the explanation to be different enough from the original question (in terms of the number of common words between the question and the explanation). If the explanation is too similar to the question, the user is notified that their explanation is not broad enough and they must provide a better one.\n\nQuestion Answer\nWith the valid explanation the dialog turns into the last phase where the user is asked for a correct answer to the original question. The system requires the user to answer with a full sentence. In practical experiments this has shown as a useful decision because it improves system's ability to reveal cheaters. We can simply measure the connection (in terms of common words ) between question and the answer sentence. This allows to reject completely irrelevant answers.\n\nAnnotation\nThe correct answer for question in each dialog is available from Simple questions dataset. Answers are in form of Freebase entities identified by unique id. For evaluation purposes we need information whether dialog contains the answer which is consistent with the entity from Simple questions, the answer with another entity or whether the dialog does not contain any answer. 
While the annotation process is quite simple, we did not need crowdsourcing for the process.\n\nNatural Language Understanding (NLU)\nThe collection system needs to recognize following dialog acts from user utterances during all phases of the dialog:\n– user does not want to provide requested information,\n– user agrees to provide requested information,\n– user does not know the requested information,\n– user tries chit chat with the system (hello, bye, who are you...),\n– none of the above, interpreted as user is giving information requested by the system.\nParsing of the dialog acts is made by hand written rules using templates and keyword spotting. The templates and keywords were manually collected from frequent expressions used by CF workers during preparation runs of the dataset collection process (google it, check wikipedia, I would need... $\\rightarrow $ Negate).\n\nDataset Properties\nWe collected the dataset with 1900 dialogs and 8533 turns. Topics discussed in dialogs are questions randomly chosen from training examples of Simple questions BIBREF7 dataset. From this dataset we also took the correct answers in form of Freebase entities.\nOur dataset consists of standard data split into training, development and test files. The basic properties of those files are as follows:\nEach file contains complete dialogs enriched by outputs of NLU (see Section \"Natural Language Understanding (NLU)\" ) that were used during the data collection. On top of that, each dialog is labeled by the correct answer for the question and expert annotation of the user answer hint which tells whether the hint points to the correct answer, incorrect answer, or no answer at all.\n351 of all collected dialogs contain correct answer provided by users and 702 dialogs have incorrect answer. In the remaining 847 dialogs users did not want to answer the question. The collected dialogs also contain 1828 paraphrases and 1539 explanations for 1870 questions.\nAn answer for a question was labeled as correct by annotators only when it was evident to them that the answer points to the same Freebase entity that was present in Simple questions dataset for that particular question. However, a large amount of questions from that dataset is quite general - with many possible answers. Therefore lot of answers from users were labeled as incorrect even though those answers perfectly fit the question. Our annotators identified that 285 of the incorrect answers were answers for such general questions. Example of this situation can be demonstrated by question 'Name an actor' which was correctly answered by 'Brad Pitt is an actor', however, to be consistent with Simple questions annotation, which is 'Kelly Atwood', annotators were forced to mark it as an incorrect answer.\n\nInteractive Learning Evaluation\nA perfect interactive learning model would be able to learn anything interactively from test dialogs during testing, which would allow us to measure progress of the model from scratch over the course of time. However, a development of such model would be unnecessarily hard, therefore we provide training dialogs which can be used for feature extraction and other engineering related to interactive learning from dialogs in natural language. Model development is further supported with labeled validation data for parameter tuning.\nWe propose two evaluation metrics for comparing interactive learning models. 
First metric (see Section \"Efficiency Score\" ) scores amount of information required by the model, second metric (see Section \"Answer Extraction Accuracy\" ) is accuracy of answer extraction from user utterances. All models must base their answers only on information gained from training dialogs and testing dialogs seen during the simulation so far, to ensure that the score will reflect the interactive learning of the model instead of general question answering.\n\nEfficiency Score\nThe simulation of dialogs from our dataset allows to evaluate how efficient a dialog system is in using information gained from users. The dialog system should maximize the number of correctly answered questions without requesting too many explanations and answers from users. To evaluate different systems using the collected data, we propose the following evaluation measure: \n$$ \nS_D = \\frac{n_c - w_i n_i - w_e n_e - w_a n_a}{|D|}$$ (Eq. 20) \nHere, $n_c$ denotes the number of correctly answered questions, $n_i$ denotes the number of incorrectly answered questions, $n_e$ denotes the number of requested explanations, $n_a$ denotes the number of requested answers and $|D|$ denotes the number of simulated dialogs in the dataset. $w_i$ , $w_e$ , $w_a$ are penalization weights.\nThe penalization weights are used to compensate for different costs of obtaining different types of information from the user. For example, gaining broader explanation from the user is relatively simple because it is in their favor to cooperate with the system on a question they are interested in. However, obtaining correct answers from users is significantly more difficult because the system does not always have the chance to ask the question and the user does not have to know the correct answer for it.\nTo make the evaluations comparable between different systems we recommend using our evaluation scripts included with the dataset with following penalization weights that reflect our intuition for gaining information from users:\n– incorrect answers are penalized significantly,\n– explanations are quite cheap; therefore, we will penalize them just slightly,\n– gaining question’s answer from users is harder than gaining explanations.\n\nAnswer Extraction Accuracy\nIt is quite challenging to find appropriate entity in the knowledge base even though the user provided the correct answer. Therefore, we propose another metric relevant to our dataset. This metric is the accuracy of entity extraction which measures how many times was extracted a correct answer from answer hints provided by the user in dialogs annotated as correctly answered.\n\nFuture Work\nOur future work will be mainly focused on providing a baseline system for interactive learning which will be evaluated on the dataset. We are also planning improvements for dialog management that is used to gain explanations during the data collection. We believe that with conversation about specific aspects of the discussed question it will be possible to gain even more interesting information from users. The other area of our interest is in possibilities to improve question answering accuracy on test questions of Simple question dataset with the extra information contained in the collected dialogs.\n\nConclusion\nIn this paper, we presented a novel way how to evaluate different interactive learning approaches for dialog models. The evaluation covers two challenging aspects of interactive learning. 
First, it scores efficiency of using information gained from users in simulated question answering dialogs. Second, it measures accuracy on answer hints understanding.\nFor purposes of evaluation we collected a dataset from conversational dialogs with workers on crowdsourcing platform CrowdFlower. Those dialogs were annotated with expert annotators and published under Creative Commons 4.0 BY-SA license on lindat. We also provide evaluation scripts with the dataset that should ensure comparable evaluation of different interactive learning approaches.\n\nAcknowledgments\nThis work was funded by the Ministry of Education, Youth and Sports of the Czech Republic under the grant agreement LK11221 and core research funding, SVV project 260 224, and GAUK grant 1170516 of Charles University in Prague. It used language resources stored and distributed by the LINDAT/CLARIN project of the Ministry of Education, Youth and Sports of the Czech Republic (project LM2015071).\n\nFigure 1: Unknown questions can be rerouted between users. We can, for example, use chitchat to get correct answers. The challenge is in generalizing the collected question-answer pairs using the knowledge base in order to apply them to previously unseen questions.\n\nTable 1: Table of turn and dialog counts for dataset splits.\n\nQuery: How was this data collected?\n\nJSON Response:", "role": "user" }, { "content": "{\"answer\": \"The crowdsourcing platform CrowdFlower was used to obtain natural dialog data that prompted the user to paraphrase, explain, and/or answer a question from a Simple questions BIBREF7 dataset. The CrowdFlower users were restricted to English-speaking countries to avoid dialogs with poor English.\", \"evidence\": [\"However, getting access to systems with real users is usually hard. Therefore, we used the crowdsourcing platform CrowdFlower (CF) for our data collection.\", \"A CF worker gets a task instructing them to use our chat-like interface to help the system with a question which is randomly selected from training examples of Simple questions BIBREF7 dataset. To complete the task user has to communicate with the system through the three phase dialog discussing question paraphrase (see Section \\\"Interactive Learning Evaluation\\\" ), explanation (see Section \\\"Future Work\\\" ) and answer of the question (see Section \\\"Conclusion\\\" ). To avoid poor English level of dialogs we involved CF workers from English speaking countries only. The collected dialogs has been annotated (see Section \\\"Acknowledgments\\\" ) by expert annotators afterwards.\"]}", "role": "assistant" } ]
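The efficiency score S_D defined in the dialog-dataset excerpt above is a single weighted count, restated below in Python for quick reference. The default penalty weights are illustrative placeholders only — the excerpt orders them qualitatively (incorrect answers penalized most, explanations least) and defers the exact values to the released evaluation scripts.

```python
def efficiency_score(n_correct, n_incorrect, n_explanations, n_answers,
                     n_dialogs, w_i=5.0, w_e=0.2, w_a=1.0):
    """S_D = (n_c - w_i*n_i - w_e*n_e - w_a*n_a) / |D|.

    The default weights are stand-ins, not the official ones.
    """
    return (n_correct
            - w_i * n_incorrect
            - w_e * n_explanations
            - w_a * n_answers) / n_dialogs

# Hypothetical run: 40 correct, 10 incorrect, 60 explanations and
# 25 answers requested over 200 simulated dialogs.
print(efficiency_score(40, 10, 60, 25, 200))  # -0.235
```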
science.evidence_inference
Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as: - Treatment: The specific medical procedure or drug being evaluated. - Control: The standard or placebo used for comparison. If absent, use `null`. - Result: The health outcome being assessed. - Impact: The observed effect of the treatment on the result, categorized as: ("increased", "decreased", "no change"). - Justification: The supporting details for the observed impact. Format the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>]. Abstract: ABSTRACT.BACKGROUND:: The aim of this study was to prospectively compare single-dose intravenous antibiotic prophylaxis vs. no prophylaxis before minor cystoscopic procedures, including punch biopsy and transurethral resection (TUR) of small bladder tumors. ABSTRACT.MATERIALS AND METHODS:: A total of 200 patients with a mean age of 47.3 years old (range: 19–84 years old) with initial negative urine cultures were recruited. All patients underwent a diagnostic cystoscopy. Patients were then randomized into 2 groups: One group that did not receive antibiotics (100 patients) and the other group that received antibiotic treatment (100 patients with a single intravenous dose of cefoperazone). All patients had urine analysis and urine cultures on the second day after the operation. Additionally, clinical parameters including fever and dysuria were recorded. In 15% of the patients, incidental additional interventions such as punch biopsy or TUR of a small bladder tumor that were similarly distributed in both groups were performed. ABSTRACT.RESULTS:: In 1 patient from the antibiotic group and 2 patients from the no prophylaxis group, the urine cultures after cystoscopy were positive. No statistically significant difference was observed between these groups based on the microbiological and clinical parameters. ABSTRACT.CONCLUSION:: The current study provides evidence that no antibiotic prophylaxis is required before diagnostic cystoscopy in patients without bacteriuria. But, the absolute risk of infection was small, suggesting that a much larger study is required. Extracted Data:
[["Single-dose intravenous antibiotic prophylaxis", "No prophylaxis", "Microbiological and clinical parameters", "no significant difference", "No statistically significant difference was observed between these groups based on the microbiological and clinical parameters."]]
science.evidence_inference.57
[ { "content": "Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:\n\n- Treatment: The specific medical procedure or drug being evaluated.\n- Control: The standard or placebo used for comparison. If absent, use `null`.\n- Result: The health outcome being assessed.\n- Impact: The observed effect of the treatment on the result, categorized as: (\"increased\", \"decreased\", \"no change\").\n- Justification: The supporting details for the observed impact.\n\nFormat the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].\n\nAbstract:\nABSTRACT.BACKGROUND:: The aim of this study was to prospectively compare single-dose intravenous antibiotic prophylaxis vs. no prophylaxis before minor cystoscopic procedures, including punch biopsy and transurethral resection (TUR) of small bladder tumors.\n\nABSTRACT.MATERIALS AND METHODS:: A total of 200 patients with a mean age of 47.3 years old (range: 19–84 years old) with initial negative urine cultures were recruited. All patients underwent a diagnostic cystoscopy. Patients were then randomized into 2 groups: One group that did not receive antibiotics (100 patients) and the other group that received antibiotic treatment (100 patients with a single intravenous dose of cefoperazone). All patients had urine analysis and urine cultures on the second day after the operation. Additionally, clinical parameters including fever and dysuria were recorded. In 15% of the patients, incidental additional interventions such as punch biopsy or TUR of a small bladder tumor that were similarly distributed in both groups were performed.\n\nABSTRACT.RESULTS:: In 1 patient from the antibiotic group and 2 patients from the no prophylaxis group, the urine cultures after cystoscopy were positive. No statistically significant difference was observed between these groups based on the microbiological and clinical parameters.\n\nABSTRACT.CONCLUSION:: The current study provides evidence that no antibiotic prophylaxis is required before diagnostic cystoscopy in patients without bacteriuria. But, the absolute risk of infection was small, suggesting that a much larger study is required.\n\nExtracted Data:", "role": "user" }, { "content": "[[\"Single-dose intravenous antibiotic prophylaxis\", \"No prophylaxis\", \"Microbiological and clinical parameters\", \"no significant difference\", \"No statistically significant difference was observed between these groups based on the microbiological and clinical parameters.\"]]", "role": "assistant" } ]
science.discomat_te
| | LAS1 | LAS1 | LAS3 | LAS3 | YAS1 | YAS1 | YAS3 | YAS3 | | | X | N | X | N | X | N | X | N | | Si-O | 0.223 | 0.234 | 0.290 | 0.250 | 0.220 | 0.218 | 0.300 | 0.251 | | Al-O | 0.098 | 0.093 | 0.154 | 0.119 | 0.092 | 0.082 | 0.160 | 0.119 | | La-O/Y-O | 0.172 | 0.088 | 0.056 | 0.024 | 0.183 | 0.122 | 0.040 | 0.022 | | O-O | 0.194 | 0.499 | 0.253 | 0.534 | 0.200 | 0.488 | 0.262 | 0.535 | | Si-Si | 0.064 | 0.027 | 0.083 | 0.029 | 0.060 | 0.024 | 0.086 | 0.030 | | Si-Al | 0.056 | 0.022 | 0.032 | 0.028 | 0.050 | 0.018 | 0.091 | 0.028 | | Si-La/Si-Y | 0.099 | 0.021 | 0.088 | 0.006 | 0.100 | 0.027 | 0.023 | 0.005 | | Al-Al | 0.012 | 0.004 | 0.024 | 0.007 | 0.011 | 0.003 | 0.024 | 0.007 | | Al-La/Al-Y | 0.044 | 0.008 | 0.017 | 0.003 | 0.042 | 0.010 | 0.012 | 0.003 | | La-La/Y-Y | 0.038 | 0.004 | 0.003 | ~ 0 | 0.042 | 0.008 | 0.002 | ~ 0 | Caption: X-ray (X) and neutron (N) weighting factors of the partial structure factors Examine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance: {"value": "xx", "type": "Other"} {"value": "xx", "type": "Glass_Compound_Amount", "constituent": "xx", "unit": "xx", "material": "xx"} Process each cell sequentially, and for any unknown attributes, use "xx" as a placeholder. Each JSON should be on a new line. Output:
{"value": "0.223", "type": "Other"} {"value": "0.234", "type": "Other"} {"value": "0.290", "type": "Other"} {"value": "0.250", "type": "Other"} {"value": "0.220", "type": "Other"} {"value": "0.218", "type": "Other"} {"value": "0.300", "type": "Other"} {"value": "0.251", "type": "Other"} {"value": "0.098", "type": "Other"} {"value": "0.093", "type": "Other"} {"value": "0.154", "type": "Other"} {"value": "0.119", "type": "Other"} {"value": "0.092", "type": "Other"} {"value": "0.082", "type": "Other"} {"value": "0.160", "type": "Other"} {"value": "0.119", "type": "Other"} {"value": "0.172", "type": "Other"} {"value": "0.088", "type": "Other"} {"value": "0.056", "type": "Other"} {"value": "0.024", "type": "Other"} {"value": "0.183", "type": "Other"} {"value": "0.122", "type": "Other"} {"value": "0.040", "type": "Other"} {"value": "0.022", "type": "Other"} {"value": "0.194", "type": "Other"} {"value": "0.499", "type": "Other"} {"value": "0.253", "type": "Other"} {"value": "0.534", "type": "Other"} {"value": "0.200", "type": "Other"} {"value": "0.488", "type": "Other"} {"value": "0.262", "type": "Other"} {"value": "0.535", "type": "Other"} {"value": "0.064", "type": "Other"} {"value": "0.027", "type": "Other"} {"value": "0.083", "type": "Other"} {"value": "0.029", "type": "Other"} {"value": "0.060", "type": "Other"} {"value": "0.024", "type": "Other"} {"value": "0.086", "type": "Other"} {"value": "0.030", "type": "Other"} {"value": "0.056", "type": "Other"} {"value": "0.022", "type": "Other"} {"value": "0.032", "type": "Other"} {"value": "0.028", "type": "Other"} {"value": "0.050", "type": "Other"} {"value": "0.018", "type": "Other"} {"value": "0.091", "type": "Other"} {"value": "0.028", "type": "Other"} {"value": "0.099", "type": "Other"} {"value": "0.021", "type": "Other"} {"value": "0.088", "type": "Other"} {"value": "0.006", "type": "Other"} {"value": "0.100", "type": "Other"} {"value": "0.027", "type": "Other"} {"value": "0.023", "type": "Other"} {"value": "0.005", "type": "Other"} {"value": "0.012", "type": "Other"} {"value": "0.004", "type": "Other"} {"value": "0.024", "type": "Other"} {"value": "0.007", "type": "Other"} {"value": "0.011", "type": "Other"} {"value": "0.003", "type": "Other"} {"value": "0.024", "type": "Other"} {"value": "0.007", "type": "Other"} {"value": "0.044", "type": "Other"} {"value": "0.008", "type": "Other"} {"value": "0.017", "type": "Other"} {"value": "0.003", "type": "Other"} {"value": "0.042", "type": "Other"} {"value": "0.010", "type": "Other"} {"value": "0.012", "type": "Other"} {"value": "0.003", "type": "Other"} {"value": "0.038", "type": "Other"} {"value": "0.004", "type": "Other"} {"value": "0.003", "type": "Other"} {"value": "0.042", "type": "Other"} {"value": "0.008", "type": "Other"} {"value": "0.002", "type": "Other"}
science.discomat_te.29
[ { "content": "| | LAS1 | LAS1 | LAS3 | LAS3 | YAS1 | YAS1 | YAS3 | YAS3 |\n| | X | N | X | N | X | N | X | N |\n| Si-O | 0.223 | 0.234 | 0.290 | 0.250 | 0.220 | 0.218 | 0.300 | 0.251 |\n| Al-O | 0.098 | 0.093 | 0.154 | 0.119 | 0.092 | 0.082 | 0.160 | 0.119 |\n| La-O/Y-O | 0.172 | 0.088 | 0.056 | 0.024 | 0.183 | 0.122 | 0.040 | 0.022 |\n| O-O | 0.194 | 0.499 | 0.253 | 0.534 | 0.200 | 0.488 | 0.262 | 0.535 |\n| Si-Si | 0.064 | 0.027 | 0.083 | 0.029 | 0.060 | 0.024 | 0.086 | 0.030 |\n| Si-Al | 0.056 | 0.022 | 0.032 | 0.028 | 0.050 | 0.018 | 0.091 | 0.028 |\n| Si-La/Si-Y | 0.099 | 0.021 | 0.088 | 0.006 | 0.100 | 0.027 | 0.023 | 0.005 |\n| Al-Al | 0.012 | 0.004 | 0.024 | 0.007 | 0.011 | 0.003 | 0.024 | 0.007 |\n| Al-La/Al-Y | 0.044 | 0.008 | 0.017 | 0.003 | 0.042 | 0.010 | 0.012 | 0.003 |\n| La-La/Y-Y | 0.038 | 0.004 | 0.003 | ~ 0 | 0.042 | 0.008 | 0.002 | ~ 0 |\n\nCaption: X-ray (X) and neutron (N) weighting factors of the partial structure factors\n\nExamine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:\n{\"value\": \"xx\", \"type\": \"Other\"}\n{\"value\": \"xx\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"xx\", \"unit\": \"xx\", \"material\": \"xx\"}\n\nProcess each cell sequentially, and for any unknown attributes, use \"xx\" as a placeholder. Each JSON should be on a new line.\n\nOutput:", "role": "user" }, { "content": "{\"value\": \"0.223\", \"type\": \"Other\"}\n{\"value\": \"0.234\", \"type\": \"Other\"}\n{\"value\": \"0.290\", \"type\": \"Other\"}\n{\"value\": \"0.250\", \"type\": \"Other\"}\n{\"value\": \"0.220\", \"type\": \"Other\"}\n{\"value\": \"0.218\", \"type\": \"Other\"}\n{\"value\": \"0.300\", \"type\": \"Other\"}\n{\"value\": \"0.251\", \"type\": \"Other\"}\n{\"value\": \"0.098\", \"type\": \"Other\"}\n{\"value\": \"0.093\", \"type\": \"Other\"}\n{\"value\": \"0.154\", \"type\": \"Other\"}\n{\"value\": \"0.119\", \"type\": \"Other\"}\n{\"value\": \"0.092\", \"type\": \"Other\"}\n{\"value\": \"0.082\", \"type\": \"Other\"}\n{\"value\": \"0.160\", \"type\": \"Other\"}\n{\"value\": \"0.119\", \"type\": \"Other\"}\n{\"value\": \"0.172\", \"type\": \"Other\"}\n{\"value\": \"0.088\", \"type\": \"Other\"}\n{\"value\": \"0.056\", \"type\": \"Other\"}\n{\"value\": \"0.024\", \"type\": \"Other\"}\n{\"value\": \"0.183\", \"type\": \"Other\"}\n{\"value\": \"0.122\", \"type\": \"Other\"}\n{\"value\": \"0.040\", \"type\": \"Other\"}\n{\"value\": \"0.022\", \"type\": \"Other\"}\n{\"value\": \"0.194\", \"type\": \"Other\"}\n{\"value\": \"0.499\", \"type\": \"Other\"}\n{\"value\": \"0.253\", \"type\": \"Other\"}\n{\"value\": \"0.534\", \"type\": \"Other\"}\n{\"value\": \"0.200\", \"type\": \"Other\"}\n{\"value\": \"0.488\", \"type\": \"Other\"}\n{\"value\": \"0.262\", \"type\": \"Other\"}\n{\"value\": \"0.535\", \"type\": \"Other\"}\n{\"value\": \"0.064\", \"type\": \"Other\"}\n{\"value\": \"0.027\", \"type\": \"Other\"}\n{\"value\": \"0.083\", \"type\": \"Other\"}\n{\"value\": \"0.029\", \"type\": \"Other\"}\n{\"value\": \"0.060\", \"type\": \"Other\"}\n{\"value\": \"0.024\", \"type\": \"Other\"}\n{\"value\": \"0.086\", \"type\": \"Other\"}\n{\"value\": \"0.030\", \"type\": \"Other\"}\n{\"value\": \"0.056\", \"type\": \"Other\"}\n{\"value\": \"0.022\", \"type\": \"Other\"}\n{\"value\": \"0.032\", \"type\": \"Other\"}\n{\"value\": \"0.028\", \"type\": \"Other\"}\n{\"value\": \"0.050\", \"type\": \"Other\"}\n{\"value\": \"0.018\", \"type\": \"Other\"}\n{\"value\": 
\"0.091\", \"type\": \"Other\"}\n{\"value\": \"0.028\", \"type\": \"Other\"}\n{\"value\": \"0.099\", \"type\": \"Other\"}\n{\"value\": \"0.021\", \"type\": \"Other\"}\n{\"value\": \"0.088\", \"type\": \"Other\"}\n{\"value\": \"0.006\", \"type\": \"Other\"}\n{\"value\": \"0.100\", \"type\": \"Other\"}\n{\"value\": \"0.027\", \"type\": \"Other\"}\n{\"value\": \"0.023\", \"type\": \"Other\"}\n{\"value\": \"0.005\", \"type\": \"Other\"}\n{\"value\": \"0.012\", \"type\": \"Other\"}\n{\"value\": \"0.004\", \"type\": \"Other\"}\n{\"value\": \"0.024\", \"type\": \"Other\"}\n{\"value\": \"0.007\", \"type\": \"Other\"}\n{\"value\": \"0.011\", \"type\": \"Other\"}\n{\"value\": \"0.003\", \"type\": \"Other\"}\n{\"value\": \"0.024\", \"type\": \"Other\"}\n{\"value\": \"0.007\", \"type\": \"Other\"}\n{\"value\": \"0.044\", \"type\": \"Other\"}\n{\"value\": \"0.008\", \"type\": \"Other\"}\n{\"value\": \"0.017\", \"type\": \"Other\"}\n{\"value\": \"0.003\", \"type\": \"Other\"}\n{\"value\": \"0.042\", \"type\": \"Other\"}\n{\"value\": \"0.010\", \"type\": \"Other\"}\n{\"value\": \"0.012\", \"type\": \"Other\"}\n{\"value\": \"0.003\", \"type\": \"Other\"}\n{\"value\": \"0.038\", \"type\": \"Other\"}\n{\"value\": \"0.004\", \"type\": \"Other\"}\n{\"value\": \"0.003\", \"type\": \"Other\"}\n{\"value\": \"0.042\", \"type\": \"Other\"}\n{\"value\": \"0.008\", \"type\": \"Other\"}\n{\"value\": \"0.002\", \"type\": \"Other\"}", "role": "assistant" } ]
science.multicite_intent_classification
Examine the text below, which includes a citation marked by "<cite>" and "</cite>". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text. Provide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose. Text with Citation: Results Table 3 reveals that the unmodified BiDAF model is almost as effective as the <cite>Levy et al. (2017)</cite> model in terms of zero-shot F1 on the original UWRE test set.
["Differences"]
science.multicite_intent_classification.879
[ { "content": "Examine the text below, which includes a citation marked by \"<cite>\" and \"</cite>\". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.\n\nProvide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.\n\nText with Citation:\nResults Table 3 reveals that the unmodified BiDAF model is almost as effective as the <cite>Levy et al. (2017)</cite> model in terms of zero-shot F1 on the original UWRE test set.", "role": "user" }, { "content": "[\"Differences\"]", "role": "assistant" } ]
science.biored_ner
Analyze the provided biomedical abstract and identify distinct entities. Focus on the specified categories: ["CellLine", "Variant", "Species", "Disease", "Chemical", "Gene"]. Ensure the output is strictly the JSON object. Source Text: Randomized comparison of olanzapine versus risperidone for the treatment of first-episode schizophrenia: 4-month outcomes. OBJECTIVE: The authors compared 4-month treatment outcomes for olanzapine versus risperidone in patients with first-episode schizophrenia spectrum disorders. METHOD: One hundred twelve subjects (70% male; mean age=23.3 years [SD = 5.1]) with first-episode schizophrenia (75%), schizophreniform disorder (17%), or schizoaffective disorder (8%) were randomly assigned to treatment with olanzapine (2.5-20 mg/day) or risperidone (1-6 mg/day). RESULTS: Response rates did not significantly differ between olanzapine (43.7%, 95% CI=28.8%-58.6%) and risperidone (54.3%, 95% CI=39.9%-68.7%). Among those responding to treatment, more subjects in the olanzapine group (40.9%, 95% CI=16.8%-65.0%) than in the risperidone group (18.9%, 95% CI=0%-39.2%) had subsequent ratings not meeting response criteria. Negative symptom outcomes and measures of parkinsonism and akathisia did not differ between medications. Extrapyramidal symptom severity scores were 1.4 (95% CI=1.2-1.6) with risperidone and 1.2 (95% CI=1.0-1.4) with olanzapine. Significantly more weight gain occurred with olanzapine than with risperidone: the increase in weight at 4 months relative to baseline weight was 17.3% (95% CI=14.2%-20.5%) with olanzapine and 11.3% (95% CI=8.4%-14.3%) with risperidone. Body mass index at baseline and at 4 months was 24.3 (95% CI=22.8-25.7) versus 28.2 (95% CI=26.7-29.7) with olanzapine and 23.9 (95% CI=22.5-25.3) versus 26.7 (95% CI=25.2-28.2) with risperidone. CONCLUSIONS: Clinical outcomes with risperidone were equal to those with olanzapine, and response may be more stable. Olanzapine may have an advantage for motor side effects. Both medications caused substantial rapid weight gain, but weight gain was greater with olanzapine.
{"CellLine": [], "Chemical": ["risperidone", "olanzapine", "Olanzapine"], "Disease": ["weight gain", "Extrapyramidal symptom", "parkinsonism", "schizophrenia", "schizoaffective disorder", "akathisia", "schizophreniform disorder"], "Gene": [], "Species": ["patients"], "Variant": []}
science.biored_ner.84
[ { "content": "Analyze the provided biomedical abstract and identify distinct entities. Focus on the specified categories: [\"CellLine\", \"Variant\", \"Species\", \"Disease\", \"Chemical\", \"Gene\"].\n\nEnsure the output is strictly the JSON object.\n\nSource Text:\n\nRandomized comparison of olanzapine versus risperidone for the treatment of first-episode schizophrenia: 4-month outcomes. OBJECTIVE: The authors compared 4-month treatment outcomes for olanzapine versus risperidone in patients with first-episode schizophrenia spectrum disorders. METHOD: One hundred twelve subjects (70% male; mean age=23.3 years [SD = 5.1]) with first-episode schizophrenia (75%), schizophreniform disorder (17%), or schizoaffective disorder (8%) were randomly assigned to treatment with olanzapine (2.5-20 mg/day) or risperidone (1-6 mg/day). RESULTS: Response rates did not significantly differ between olanzapine (43.7%, 95% CI=28.8%-58.6%) and risperidone (54.3%, 95% CI=39.9%-68.7%). Among those responding to treatment, more subjects in the olanzapine group (40.9%, 95% CI=16.8%-65.0%) than in the risperidone group (18.9%, 95% CI=0%-39.2%) had subsequent ratings not meeting response criteria. Negative symptom outcomes and measures of parkinsonism and akathisia did not differ between medications. Extrapyramidal symptom severity scores were 1.4 (95% CI=1.2-1.6) with risperidone and 1.2 (95% CI=1.0-1.4) with olanzapine. Significantly more weight gain occurred with olanzapine than with risperidone: the increase in weight at 4 months relative to baseline weight was 17.3% (95% CI=14.2%-20.5%) with olanzapine and 11.3% (95% CI=8.4%-14.3%) with risperidone. Body mass index at baseline and at 4 months was 24.3 (95% CI=22.8-25.7) versus 28.2 (95% CI=26.7-29.7) with olanzapine and 23.9 (95% CI=22.5-25.3) versus 26.7 (95% CI=25.2-28.2) with risperidone. CONCLUSIONS: Clinical outcomes with risperidone were equal to those with olanzapine, and response may be more stable. Olanzapine may have an advantage for motor side effects. Both medications caused substantial rapid weight gain, but weight gain was greater with olanzapine.", "role": "user" }, { "content": "{\"CellLine\": [], \"Chemical\": [\"risperidone\", \"olanzapine\", \"Olanzapine\"], \"Disease\": [\"weight gain\", \"Extrapyramidal symptom\", \"parkinsonism\", \"schizophrenia\", \"schizoaffective disorder\", \"akathisia\", \"schizophreniform disorder\"], \"Gene\": [], \"Species\": [\"patients\"], \"Variant\": []}", "role": "assistant" } ]
science.scifact_entailment
Determine the relationship between them and generate a JSON response. Scientific Claim: Lmna knock-in models are appropriate for testing therapeutic approaches against Hutchinson-Gilford progeria syndrome (HGPS). Research Abstract: Hutchinson-Gilford progeria syndrome (HGPS) is a childhood premature aging disease caused by a spontaneous point mutation in lamin A (encoded by LMNA), one of the major architectural elements of the mammalian cell nucleus. The HGPS mutation activates an aberrant cryptic splice site in LMNA pre-mRNA, leading to synthesis of a truncated lamin A protein and concomitant reduction in wild-type lamin A. Fibroblasts from individuals with HGPS have severe morphological abnormalities in nuclear envelope structure. Here we show that the cellular disease phenotype is reversible in cells from individuals with HGPS. Introduction of wild-type lamin A protein does not rescue the cellular disease symptoms. The mutant LMNA mRNA and lamin A protein can be efficiently eliminated by correction of the aberrant splicing event using a modified oligonucleotide targeted to the activated cryptic splice site. Upon splicing correction, HGPS fibroblasts assume normal nuclear morphology, the aberrant nuclear distribution and cellular levels of lamina-associated proteins are rescued, defects in heterochromatin-specific histone modifications are corrected and proper expression of several misregulated genes is reestablished. Our results establish proof of principle for the correction of the premature aging phenotype in individuals with HGPS.
{"verdict": "NEI", "evidence": []}
science.scifact_entailment.453
[ { "content": "Determine the relationship between them and generate a JSON response. Scientific Claim: \nLmna knock-in models are appropriate for testing therapeutic approaches against Hutchinson-Gilford progeria syndrome (HGPS).\n\nResearch Abstract:\nHutchinson-Gilford progeria syndrome (HGPS) is a childhood premature aging disease caused by a spontaneous point mutation in lamin A (encoded by LMNA), one of the major architectural elements of the mammalian cell nucleus.\nThe HGPS mutation activates an aberrant cryptic splice site in LMNA pre-mRNA, leading to synthesis of a truncated lamin A protein and concomitant reduction in wild-type lamin A. Fibroblasts from individuals with HGPS have severe morphological abnormalities in nuclear envelope structure.\nHere we show that the cellular disease phenotype is reversible in cells from individuals with HGPS.\nIntroduction of wild-type lamin A protein does not rescue the cellular disease symptoms.\nThe mutant LMNA mRNA and lamin A protein can be efficiently eliminated by correction of the aberrant splicing event using a modified oligonucleotide targeted to the activated cryptic splice site.\nUpon splicing correction, HGPS fibroblasts assume normal nuclear morphology, the aberrant nuclear distribution and cellular levels of lamina-associated proteins are rescued, defects in heterochromatin-specific histone modifications are corrected and proper expression of several misregulated genes is reestablished.\nOur results establish proof of principle for the correction of the premature aging phenotype in individuals with HGPS.", "role": "user" }, { "content": "{\"verdict\": \"NEI\", \"evidence\": []}", "role": "assistant" } ]
science.scierc_ner
Analyze the given abstract from a computer science paper. Identify and list entities in a structured JSON format. Focus on capturing distinct elements relevant to the research context. Input Abstract: Dividing sentences in chunks of words is a useful preprocessing step for parsing, information extraction and information retrieval. (Ramshaw and Marcus, 1995) have introduced a 'convenient' data representation for chunking by converting it to a tagging task. In this paper we will examine seven different data representations for the problem of recognizing noun phrase chunks. We will show that the data representation choice has a minor influence on chunking performance. However, equipped with the most suitable data representation, our memory-based learning chunker was able to improve the best published chunking results for a standard data set.
Output Entities: {"Generic": [], "Material": ["data set"], "Method": ["data representation", "memory-based learning chunker", "data representations"], "Metric": [], "OtherScientificTerm": [], "Task": ["recognizing noun phrase chunks", "tagging task", "parsing", "information retrieval", "Dividing sentences in chunks of words", "information extraction"]}
science.scierc_ner.27
[ { "content": "Analyze the given abstract from a computer science paper. Identify and list entities in a structured JSON format. Focus on capturing distinct elements relevant to the research context.\n\nInput Abstract:\n\nDividing sentences in chunks of words is a useful preprocessing step for parsing, information extraction and information retrieval. (Ramshaw and Marcus, 1995) have introduced a 'convenient' data representation for chunking by converting it to a tagging task. In this paper we will examine seven different data representations for the problem of recognizing noun phrase chunks. We will show that the data representation choice has a minor influence on chunking performance. However, equipped with the most suitable data representation, our memory-based learning chunker was able to improve the best published chunking results for a standard data set.", "role": "user" }, { "content": "Output Entities:\n\n{\"Generic\": [], \"Material\": [\"data set\"], \"Method\": [\"data representation\", \"memory-based learning chunker\", \"data representations\"], \"Metric\": [], \"OtherScientificTerm\": [], \"Task\": [\"recognizing noun phrase chunks\", \"tagging task\", \"parsing\", \"information retrieval\", \"Dividing sentences in chunks of words\", \"information extraction\"]}", "role": "assistant" } ]
science.scifact_entailment
Determine the relationship between them and generate a JSON response. Scientific Claim: taa1 tar1 tar2 triple mutants in Arabidopsis do not make root meristem during embryogenesis. Research Abstract: Plants have evolved a tremendous ability to respond to environmental changes by adapting their growth and development. The interaction between hormonal and developmental signals is a critical mechanism in the generation of this enormous plasticity. A good example is the response to the hormone ethylene that depends on tissue type, developmental stage, and environmental conditions. By characterizing the Arabidopsis wei8 mutant, we have found that a small family of genes mediates tissue-specific responses to ethylene. Biochemical studies revealed that WEI8 encodes a long-anticipated tryptophan aminotransferase, TAA1, in the essential, yet genetically uncharacterized, indole-3-pyruvic acid (IPA) branch of the auxin biosynthetic pathway. Analysis of TAA1 and its paralogues revealed a link between local auxin production, tissue-specific ethylene effects, and organ development. Thus, the IPA route of auxin production is key to generating robust auxin gradients in response to environmental and developmental cues.
{"verdict": "NEI", "evidence": []}
science.scifact_entailment.915
[ { "content": "Determine the relationship between them and generate a JSON response. Scientific Claim: \ntaa1 tar1 tar2 triple mutants in Arabidopsis do not make root meristem during embryogenesis.\n\nResearch Abstract:\nPlants have evolved a tremendous ability to respond to environmental changes by adapting their growth and development.\nThe interaction between hormonal and developmental signals is a critical mechanism in the generation of this enormous plasticity.\nA good example is the response to the hormone ethylene that depends on tissue type, developmental stage, and environmental conditions.\nBy characterizing the Arabidopsis wei8 mutant, we have found that a small family of genes mediates tissue-specific responses to ethylene.\nBiochemical studies revealed that WEI8 encodes a long-anticipated tryptophan aminotransferase, TAA1, in the essential, yet genetically uncharacterized, indole-3-pyruvic acid (IPA) branch of the auxin biosynthetic pathway.\nAnalysis of TAA1 and its paralogues revealed a link between local auxin production, tissue-specific ethylene effects, and organ development.\nThus, the IPA route of auxin production is key to generating robust auxin gradients in response to environmental and developmental cues.", "role": "user" }, { "content": "{\"verdict\": \"NEI\", \"evidence\": []}", "role": "assistant" } ]
science.qasper_abstractive_qa
Examine the provided sections from a research document and address the accompanying query. The text is divided by newlines. Document Content: Why So Down? The Role of Negative (and Positive) Pointwise Mutual Information in Distributional Semantics In distributional semantics, the pointwise mutual information ($\mathit{PMI}$) weighting of the cooccurrence matrix performs far better than raw counts. There is, however, an issue with unobserved pair cooccurrences as $\mathit{PMI}$ goes to negative infinity. This problem is aggravated by unreliable statistics from finite corpora which lead to a large number of such pairs. A common practice is to clip negative $\mathit{PMI}$ ($\mathit{\texttt{-} PMI}$) at $0$, also known as Positive $\mathit{PMI}$ ($\mathit{PPMI}$). In this paper, we investigate alternative ways of dealing with $\mathit{\texttt{-} PMI}$ and, more importantly, study the role that negative information plays in the performance of a low-rank, weighted factorization of different $\mathit{PMI}$ matrices. Using various semantic and syntactic tasks as probes into models which use either negative or positive $\mathit{PMI}$ (or both), we find that most of the encoded semantics and syntax come from positive $\mathit{PMI}$, in contrast to $\mathit{\texttt{-} PMI}$ which contributes almost exclusively syntactic information. Our findings deepen our understanding of distributional semantics, while also introducing novel $PMI$ variants and grounding the popular $PPMI$ measure. Introduction Dense word vectors (or embeddings) are a key component in modern NLP architectures for tasks such as sentiment analysis, parsing, and machine translation. These vectors can be learned by exploiting the distributional hypothesis BIBREF0, paraphrased by BIBREF1 as “a word is characterized by the company that it keeps”, usually by constructing a cooccurrence matrix over a training corpus, re-weighting it using Pointwise Mutual Information ($\mathit {PMI}$) BIBREF2, and performing a low-rank factorization to obtain dense vectors. Unfortunately, $\mathit {PMI}(w,c)$ goes to negative infinity when the word-context pair $(w,c)$ does not appear in the training corpus. Due to unreliable statistics, this happens very frequently in finite corpora. Many models work around this issue by clipping negative $\mathit {PMI}$ values at 0, a measure known as Positive $\mathit {PMI}$ ($\mathit {PPMI}$), which works very well in practice. An unanswered question is: “What is lost/gained by collapsing the negative $\mathit {PMI}$ spectrum to 0?”. Understanding which type of information is captured by $\mathit {\texttt {-}PMI}$ can help in tailoring models for optimal performance. In this work, we attempt to answer this question by studying the kind of information contained in the negative and positive spectrums of $\mathit {PMI}$ ($\mathit {\texttt {-}PMI}$ and $\mathit {\texttt {+}PMI}$). We evaluate weighted factorization of different matrices which use either $\mathit {\texttt {-}PMI}$, $\mathit {\texttt {+}PMI}$, or both on various semantic and syntactic tasks. Results show that $\mathit {\texttt {+}PMI}$ alone performs quite well on most tasks, capturing both semantics and syntax, in contrast to $\mathit {\texttt {-}PMI}$, which performs poorly on nearly all tasks, except those that test for syntax. 
Our main contribution is deepening our understanding of distributional semantics by extending BIBREF1's paraphrase of the distributional hypothesis to “a word is not only characterized by the company that it keeps, but also by the company it rejects”. Our secondary contributions are the proposal of two $PMI$ variants that account for the spectrum of $\mathit {\texttt {-}PMI}$, and the justification of the popular $PPMI$ measure. In this paper, we first look at related work ($§$SECREF2), then study $\mathit {\texttt {-}PMI}$ and ways of accounting for it ($§$SECREF3), describe experiments ($§$SECREF4), analyze results ($§$SECREF5), and close with ideas for future work ($§$SECREF6). PMI & Matrix Factorization PMI: A cooccurrence matrix $M$ is constructed by sliding a symmetric window over the subsampled BIBREF9 training corpus and for each center word $w$ and context word $c$ within the window, incrementing $M_{wc}$. $\mathit {PMI}$ is then equal to: where * denotes summation over the corresponding index. To deal with negative values, we propose clipped $\mathit {PMI}$, which is equivalent to $\mathit {PPMI}$ when $z = 0$. Matrix factorization: LexVec BIBREF15 performs the factorization $M^{\prime } = WC^\top $, where $M^{\prime }$ is any transformation of $M$ (such as $\mathit {PPMI}$), and $W, C$ are the word and context embeddings respectively. By sliding a symmetric window over the training corpus (window sampling), LexVec performs one Stochastic Gradient Descent (SGD) step every time a $(w,c)$ pair is observed, minimizing Additionally, for every center word $w$, $k$ negative words BIBREF9 are drawn from the unigram context distribution $P_n$ (negative sampling) and SGD steps taken to minimize: Thus the loss function prioritizes the correct approximation of frequently cooccurring pairs and of pairs where either word occurs with high frequency; these are pairs for which we have more reliable statistics. In our experiments, we use LexVec over Singular Value Decomposition (SVD) because a) Empirical results shows it outperforms SVD BIBREF15. b) The weighting of reconstruction errors by statistical confidence is particularly important for $\mathit {\texttt {-}PMI}$, where negative cooccurrence between a pair of frequent words is more significant and should be better approximated than that between a pair of rare words. GloVe's matrix factorization is even more unsuitable for our experiments as its loss weighting — a monotonically increasing function of $M_{wc}$ — ignores reconstruction errors of non-cooccurring pairs. Spectrum of PMI: To better understand the distribution of $\mathit {CPMI}$ values, we plot a histogram of $10^5$ pairs randomly sampled by window sampling and negative sampling in fig:hist, setting $z=-5$. We can clearly see the spectrum of $\mathit {\texttt {-}PMI}$ that is collapsed when we use $\mathit {PPMI}$ ($z=0$). In practice we find that $z=-2$ captures most of the negative spectrum and consistently gives better results than smaller values so we use this value for the rest of this paper. We suspect this is due to the large number of non-cooccurring pairs ($41.7\%$ in this sample) which end up dominating the loss function when $z$ is too small. Normalization: We also experiment with normalized $\mathit {PMI}$ ($\mathit {NPMI}$) BIBREF7: such that $NPMI(w,c) = -1$ when $(w,c)$ never cooccur, $NPMI(w,c) = 0$ when they are independent, and $NPMI(w,c) = 1$ when they always cooccur together. 
This effectively captures the entire negative spectrum, but has the downside of normalization which discards scale information. In practice we find this works poorly if done symmetrically, so we introduce a variant called $\mathit {NNEGPMI}$ which only normalizes $\mathit {\texttt {-}PMI}$: We also experimented with Laplace smoothing as in BIBREF18 for various pseudocounts but found it to work consistently worse than both $\mathit {CPMI_z}$ and $\mathit {NNEGPMI}$ so we omit further discussion in this paper. Materials In order to identify the role that $\mathit {\texttt {-}PMI}$ and $\mathit {\texttt {+}PMI}$ play in distributional semantics, we train LexVec models that skip SGD steps when target cell values are $>0$ or $\le 0$, respectively. For example, $-\mathit {CPMI}_{\texttt {-}2}$ skips steps when $\mathit {CPMI}_{\texttt {-}2}(w,c) > 0$. Similarly, the $\mathit {\texttt {+}PPMI}$ model skips SGD steps when $\mathit {PPMI}(w,c) \le 0$. We compare these to models that include both negative and positive information to see how the two interact. We use the default LexVec configuration for all $\mathit {PMI}$ variants: fixed window of size 2, embedding dimension of 300, 5 negative samples, positional contexts, context distribution smoothing of $.75$, learning rate of $.025$, no subword information, and negative distribution power of $.75$. We train on a lowercased, alphanumerical 2015 Wikipedia dump with $3.8$B tokens, discarding tokens with frequency $< 100$, for a vocabulary size of $303,517$ words. For comparison, we include results for a randomly initialized, non-trained embedding to establish task baselines. Semantics: To evaluate word-level semantics, we use the SimLex BIBREF19 and Rare Word (RW) BIBREF20 word similarity datasets, and the Google Semantic (GSem) analogies BIBREF9. We evaluate sentence-level semantics using averaged bag of vectors (BoV) representations on the Semantic Textual Similarity (STSB) task BIBREF21 and Word Content (WC) probing task (identify from a list of words which is contained in the sentence representation) from SentEval BIBREF22. Syntax: Similarly, we use the Google Syntactic analogies (GSyn) BIBREF9 to evaluate word-level syntactic information, and Depth (Dep) and Top Constituent (TopC) (of the input sentence's constituent parse tree) probing tasks from SentEval BIBREF22 for sentence-level syntax. Classifiers for all SentEval probing tasks are multilayer perceptrons with a single hidden layer of 100 units and dropout of $.1$. Our final syntactic task is part-of-speech (POS) tagging using the same BiLSTM-CRF setup as BIBREF23 but using only word embeddings (no hand-engineered features) as input, trained on the WSJ section of the Penn Treebank BIBREF24. Results All results are shown in tab:senteval. Negative PMI: We observe that using only $\mathit {\texttt {-}PMI}$ (rows $\mathit {\texttt {-}CPMI_{\texttt {-}2}}$ and $\mathit {\texttt {-}NNEGPMI}$) performs similarly to all other models in POS tagging and both syntactic probing tasks, but very poorly on all semantic tasks, strongly supporting our main claim that $\mathit {\texttt {-}PMI}$ mostly encodes syntactic information. Our hypothesis for this is that the grammar that generates language implicitly creates negative cooccurrence and so $\mathit {\texttt {-}PMI}$ encodes this syntactic information. 
Interestingly, this idea creates a bridge between distributional semantics and the argument by BIBREF17 that indirect negative evidence might play an important role in human language acquisition of grammar. Positive PMI: The $\mathit {\texttt {+}PPMI}$ model performs as well or better as the full spectrum models on nearly all tasks, clearly indicating that $\mathit {\texttt {+}PMI}$ encodes both semantic and syntactic information. Why incorporate -PMI? $\mathit {\texttt {+}PPMI}$ only falters on the RW and analogy tasks, and we hypothesize this is where $\mathit {\texttt {-}PMI}$ is useful: in the absence of positive information, negative information can be used to improve rare word representations and word analogies. Analogies are solved using nearest neighbor lookups in the vector space, and so accounting for negative cooccurrence effectively repels words with which no positive cooccurrence was observed. In future work, we will explore incorporating $\mathit {\texttt {-}PMI}$ only for rare words (where it is most needed). Full spectrum models: The $\mathit {PPMI}$, $\mathit {CPMI_{\texttt {-}2}}$, and $\mathit {NNEGPMI}$ models perform similarly, whereas the $\mathit {NPMI}$ model is significantly worst on nearly all semantic tasks. We thus conclude that accounting for scale in the positive spectrum is more important than in the negative spectrum. We hypothesize this is because scale helps to uniquely identify words, which is critical for semantics (results on $WC$ task correlate strongly with performance on semantic tasks), but in syntax, words with the same function should be indistinguishable. Since $\mathit {\texttt {+}PMI}$ encodes both semantics and syntax, scale must be preserved, whereas $\mathit {\texttt {-}PMI}$ encodes mostly syntax, and so scale information can be discarded. Collapsing the negative spectrum: The $\mathit {PPMI}$ model, which collapses the negative spectrum to zero, performs almost identically to the $\mathit {CPMI_{\texttt {-}2}}$ and $\mathit {NNEGPMI}$ models that account for the range of negative values. This is justified by 1) Our discussion which shows that $\mathit {\texttt {+}PMI}$ is far more informative than $\mathit {\texttt {-}PMI}$ and 2) Looking at fig:hist, we see that collapsed values — interval $(-5,0]$ — account for only $11\%$ of samples compared to $41.7\%$ for non-collapsed negative values. Conclusions and Future Work In this paper, we evaluated existing and novel ways of incorporating $\mathit {\texttt {-}PMI}$ into word embedding models based on explicit weighted matrix factorization, and, more importantly, studied the role that $\mathit {\texttt {-}PMI}$ and $\mathit {\texttt {+}PMI}$ each play in distributional semantics, finding that “a word is not only characterized by the company that it keeps, but also by the company it rejects”. In future work, we wish to further study the link between our work and language acquisition, and explore the fact the $\mathit {\texttt {-}PMI}$ is almost purely syntactic to (possibly) subtract syntax from the full spectrum models, studying the frontier (if there is one) between semantics and syntax. Acknowledgments This research was partly supported by CAPES and CNPq (projects 312114/2015-0, 423843/2016-8, and 140402/2018-7). Figure 1: CPMI -5 histogram (bucket width equal to .2) of 105 sampled pairs using window sampling and negative sampling. Number of samples in interval: [−5,−5] = 41695, (−5, 0] = 11001, [−2, 0] = 10759, (0,∞) = 47304 Query: What are the disadvantages to clipping negative PMI? 
JSON Response:
{"answer": "It may lead to poor rare word representations and word analogies.", "evidence": ["Why incorporate -PMI? $\\mathit {\\texttt {+}PPMI}$ only falters on the RW and analogy tasks, and we hypothesize this is where $\\mathit {\\texttt {-}PMI}$ is useful: in the absence of positive information, negative information can be used to improve rare word representations and word analogies. Analogies are solved using nearest neighbor lookups in the vector space, and so accounting for negative cooccurrence effectively repels words with which no positive cooccurrence was observed. In future work, we will explore incorporating $\\mathit {\\texttt {-}PMI}$ only for rare words (where it is most needed)."]}
science.qasper_abstractive_qa.63.1
[ { "content": "Examine the provided sections from a research document and address the accompanying query. The text is divided by newlines.\n\nDocument Content:\n\nWhy So Down? The Role of Negative (and Positive) Pointwise Mutual Information in Distributional Semantics\n\nIn distributional semantics, the pointwise mutual information ($\\mathit{PMI}$) weighting of the cooccurrence matrix performs far better than raw counts. There is, however, an issue with unobserved pair cooccurrences as $\\mathit{PMI}$ goes to negative infinity. This problem is aggravated by unreliable statistics from finite corpora which lead to a large number of such pairs. A common practice is to clip negative $\\mathit{PMI}$ ($\\mathit{\\texttt{-} PMI}$) at $0$, also known as Positive $\\mathit{PMI}$ ($\\mathit{PPMI}$). In this paper, we investigate alternative ways of dealing with $\\mathit{\\texttt{-} PMI}$ and, more importantly, study the role that negative information plays in the performance of a low-rank, weighted factorization of different $\\mathit{PMI}$ matrices. Using various semantic and syntactic tasks as probes into models which use either negative or positive $\\mathit{PMI}$ (or both), we find that most of the encoded semantics and syntax come from positive $\\mathit{PMI}$, in contrast to $\\mathit{\\texttt{-} PMI}$ which contributes almost exclusively syntactic information. Our findings deepen our understanding of distributional semantics, while also introducing novel $PMI$ variants and grounding the popular $PPMI$ measure.\n\nIntroduction\nDense word vectors (or embeddings) are a key component in modern NLP architectures for tasks such as sentiment analysis, parsing, and machine translation. These vectors can be learned by exploiting the distributional hypothesis BIBREF0, paraphrased by BIBREF1 as “a word is characterized by the company that it keeps”, usually by constructing a cooccurrence matrix over a training corpus, re-weighting it using Pointwise Mutual Information ($\\mathit {PMI}$) BIBREF2, and performing a low-rank factorization to obtain dense vectors.\nUnfortunately, $\\mathit {PMI}(w,c)$ goes to negative infinity when the word-context pair $(w,c)$ does not appear in the training corpus. Due to unreliable statistics, this happens very frequently in finite corpora. Many models work around this issue by clipping negative $\\mathit {PMI}$ values at 0, a measure known as Positive $\\mathit {PMI}$ ($\\mathit {PPMI}$), which works very well in practice. An unanswered question is: “What is lost/gained by collapsing the negative $\\mathit {PMI}$ spectrum to 0?”. Understanding which type of information is captured by $\\mathit {\\texttt {-}PMI}$ can help in tailoring models for optimal performance.\nIn this work, we attempt to answer this question by studying the kind of information contained in the negative and positive spectrums of $\\mathit {PMI}$ ($\\mathit {\\texttt {-}PMI}$ and $\\mathit {\\texttt {+}PMI}$). We evaluate weighted factorization of different matrices which use either $\\mathit {\\texttt {-}PMI}$, $\\mathit {\\texttt {+}PMI}$, or both on various semantic and syntactic tasks. Results show that $\\mathit {\\texttt {+}PMI}$ alone performs quite well on most tasks, capturing both semantics and syntax, in contrast to $\\mathit {\\texttt {-}PMI}$, which performs poorly on nearly all tasks, except those that test for syntax. 
Our main contribution is deepening our understanding of distributional semantics by extending BIBREF1's paraphrase of the distributional hypothesis to “a word is not only characterized by the company that it keeps, but also by the company it rejects”. Our secondary contributions are the proposal of two $PMI$ variants that account for the spectrum of $\\mathit {\\texttt {-}PMI}$, and the justification of the popular $PPMI$ measure.\nIn this paper, we first look at related work ($§$SECREF2), then study $\\mathit {\\texttt {-}PMI}$ and ways of accounting for it ($§$SECREF3), describe experiments ($§$SECREF4), analyze results ($§$SECREF5), and close with ideas for future work ($§$SECREF6).\n\nPMI & Matrix Factorization\nPMI: A cooccurrence matrix $M$ is constructed by sliding a symmetric window over the subsampled BIBREF9 training corpus and for each center word $w$ and context word $c$ within the window, incrementing $M_{wc}$. $\\mathit {PMI}$ is then equal to:\nwhere * denotes summation over the corresponding index. To deal with negative values, we propose clipped $\\mathit {PMI}$,\nwhich is equivalent to $\\mathit {PPMI}$ when $z = 0$.\nMatrix factorization: LexVec BIBREF15 performs the factorization $M^{\\prime } = WC^\\top $, where $M^{\\prime }$ is any transformation of $M$ (such as $\\mathit {PPMI}$), and $W, C$ are the word and context embeddings respectively. By sliding a symmetric window over the training corpus (window sampling), LexVec performs one Stochastic Gradient Descent (SGD) step every time a $(w,c)$ pair is observed, minimizing\nAdditionally, for every center word $w$, $k$ negative words BIBREF9 are drawn from the unigram context distribution $P_n$ (negative sampling) and SGD steps taken to minimize:\nThus the loss function prioritizes the correct approximation of frequently cooccurring pairs and of pairs where either word occurs with high frequency; these are pairs for which we have more reliable statistics.\nIn our experiments, we use LexVec over Singular Value Decomposition (SVD) because a) Empirical results shows it outperforms SVD BIBREF15. b) The weighting of reconstruction errors by statistical confidence is particularly important for $\\mathit {\\texttt {-}PMI}$, where negative cooccurrence between a pair of frequent words is more significant and should be better approximated than that between a pair of rare words. GloVe's matrix factorization is even more unsuitable for our experiments as its loss weighting — a monotonically increasing function of $M_{wc}$ — ignores reconstruction errors of non-cooccurring pairs.\nSpectrum of PMI: To better understand the distribution of $\\mathit {CPMI}$ values, we plot a histogram of $10^5$ pairs randomly sampled by window sampling and negative sampling in fig:hist, setting $z=-5$. We can clearly see the spectrum of $\\mathit {\\texttt {-}PMI}$ that is collapsed when we use $\\mathit {PPMI}$ ($z=0$). In practice we find that $z=-2$ captures most of the negative spectrum and consistently gives better results than smaller values so we use this value for the rest of this paper. We suspect this is due to the large number of non-cooccurring pairs ($41.7\\%$ in this sample) which end up dominating the loss function when $z$ is too small.\nNormalization: We also experiment with normalized $\\mathit {PMI}$ ($\\mathit {NPMI}$) BIBREF7:\nsuch that $NPMI(w,c) = -1$ when $(w,c)$ never cooccur, $NPMI(w,c) = 0$ when they are independent, and $NPMI(w,c) = 1$ when they always cooccur together. 
This effectively captures the entire negative spectrum, but has the downside of normalization which discards scale information. In practice we find this works poorly if done symmetrically, so we introduce a variant called $\\mathit {NNEGPMI}$ which only normalizes $\\mathit {\\texttt {-}PMI}$:\nWe also experimented with Laplace smoothing as in BIBREF18 for various pseudocounts but found it to work consistently worse than both $\\mathit {CPMI_z}$ and $\\mathit {NNEGPMI}$ so we omit further discussion in this paper.\n\nMaterials\nIn order to identify the role that $\\mathit {\\texttt {-}PMI}$ and $\\mathit {\\texttt {+}PMI}$ play in distributional semantics, we train LexVec models that skip SGD steps when target cell values are $>0$ or $\\le 0$, respectively. For example, $-\\mathit {CPMI}_{\\texttt {-}2}$ skips steps when $\\mathit {CPMI}_{\\texttt {-}2}(w,c) > 0$. Similarly, the $\\mathit {\\texttt {+}PPMI}$ model skips SGD steps when $\\mathit {PPMI}(w,c) \\le 0$. We compare these to models that include both negative and positive information to see how the two interact.\nWe use the default LexVec configuration for all $\\mathit {PMI}$ variants: fixed window of size 2, embedding dimension of 300, 5 negative samples, positional contexts, context distribution smoothing of $.75$, learning rate of $.025$, no subword information, and negative distribution power of $.75$. We train on a lowercased, alphanumerical 2015 Wikipedia dump with $3.8$B tokens, discarding tokens with frequency $< 100$, for a vocabulary size of $303,517$ words.\nFor comparison, we include results for a randomly initialized, non-trained embedding to establish task baselines.\nSemantics: To evaluate word-level semantics, we use the SimLex BIBREF19 and Rare Word (RW) BIBREF20 word similarity datasets, and the Google Semantic (GSem) analogies BIBREF9. We evaluate sentence-level semantics using averaged bag of vectors (BoV) representations on the Semantic Textual Similarity (STSB) task BIBREF21 and Word Content (WC) probing task (identify from a list of words which is contained in the sentence representation) from SentEval BIBREF22.\nSyntax: Similarly, we use the Google Syntactic analogies (GSyn) BIBREF9 to evaluate word-level syntactic information, and Depth (Dep) and Top Constituent (TopC) (of the input sentence's constituent parse tree) probing tasks from SentEval BIBREF22 for sentence-level syntax. Classifiers for all SentEval probing tasks are multilayer perceptrons with a single hidden layer of 100 units and dropout of $.1$. Our final syntactic task is part-of-speech (POS) tagging using the same BiLSTM-CRF setup as BIBREF23 but using only word embeddings (no hand-engineered features) as input, trained on the WSJ section of the Penn Treebank BIBREF24.\n\nResults\nAll results are shown in tab:senteval.\nNegative PMI: We observe that using only $\\mathit {\\texttt {-}PMI}$ (rows $\\mathit {\\texttt {-}CPMI_{\\texttt {-}2}}$ and $\\mathit {\\texttt {-}NNEGPMI}$) performs similarly to all other models in POS tagging and both syntactic probing tasks, but very poorly on all semantic tasks, strongly supporting our main claim that $\\mathit {\\texttt {-}PMI}$ mostly encodes syntactic information.\nOur hypothesis for this is that the grammar that generates language implicitly creates negative cooccurrence and so $\\mathit {\\texttt {-}PMI}$ encodes this syntactic information. 
Interestingly, this idea creates a bridge between distributional semantics and the argument by BIBREF17 that indirect negative evidence might play an important role in human language acquisition of grammar. Positive PMI: The $\\mathit {\\texttt {+}PPMI}$ model performs as well or better as the full spectrum models on nearly all tasks, clearly indicating that $\\mathit {\\texttt {+}PMI}$ encodes both semantic and syntactic information.\nWhy incorporate -PMI? $\\mathit {\\texttt {+}PPMI}$ only falters on the RW and analogy tasks, and we hypothesize this is where $\\mathit {\\texttt {-}PMI}$ is useful: in the absence of positive information, negative information can be used to improve rare word representations and word analogies. Analogies are solved using nearest neighbor lookups in the vector space, and so accounting for negative cooccurrence effectively repels words with which no positive cooccurrence was observed. In future work, we will explore incorporating $\\mathit {\\texttt {-}PMI}$ only for rare words (where it is most needed).\nFull spectrum models: The $\\mathit {PPMI}$, $\\mathit {CPMI_{\\texttt {-}2}}$, and $\\mathit {NNEGPMI}$ models perform similarly, whereas the $\\mathit {NPMI}$ model is significantly worst on nearly all semantic tasks. We thus conclude that accounting for scale in the positive spectrum is more important than in the negative spectrum. We hypothesize this is because scale helps to uniquely identify words, which is critical for semantics (results on $WC$ task correlate strongly with performance on semantic tasks), but in syntax, words with the same function should be indistinguishable. Since $\\mathit {\\texttt {+}PMI}$ encodes both semantics and syntax, scale must be preserved, whereas $\\mathit {\\texttt {-}PMI}$ encodes mostly syntax, and so scale information can be discarded.\nCollapsing the negative spectrum: The $\\mathit {PPMI}$ model, which collapses the negative spectrum to zero, performs almost identically to the $\\mathit {CPMI_{\\texttt {-}2}}$ and $\\mathit {NNEGPMI}$ models that account for the range of negative values. This is justified by 1) Our discussion which shows that $\\mathit {\\texttt {+}PMI}$ is far more informative than $\\mathit {\\texttt {-}PMI}$ and 2) Looking at fig:hist, we see that collapsed values — interval $(-5,0]$ — account for only $11\\%$ of samples compared to $41.7\\%$ for non-collapsed negative values.\n\nConclusions and Future Work\nIn this paper, we evaluated existing and novel ways of incorporating $\\mathit {\\texttt {-}PMI}$ into word embedding models based on explicit weighted matrix factorization, and, more importantly, studied the role that $\\mathit {\\texttt {-}PMI}$ and $\\mathit {\\texttt {+}PMI}$ each play in distributional semantics, finding that “a word is not only characterized by the company that it keeps, but also by the company it rejects”. In future work, we wish to further study the link between our work and language acquisition, and explore the fact the $\\mathit {\\texttt {-}PMI}$ is almost purely syntactic to (possibly) subtract syntax from the full spectrum models, studying the frontier (if there is one) between semantics and syntax.\n\nAcknowledgments\nThis research was partly supported by CAPES and CNPq (projects 312114/2015-0, 423843/2016-8, and 140402/2018-7).\n\nFigure 1: CPMI -5 histogram (bucket width equal to .2) of 105 sampled pairs using window sampling and negative sampling. 
Number of samples in interval: [−5,−5] = 41695, (−5, 0] = 11001, [−2, 0] = 10759, (0,∞) = 47304\n\nQuery: What are the disadvantages to clipping negative PMI?\n\nJSON Response:", "role": "user" }, { "content": "{\"answer\": \"It may lead to poor rare word representations and word analogies.\", \"evidence\": [\"Why incorporate -PMI? $\\\\mathit {\\\\texttt {+}PPMI}$ only falters on the RW and analogy tasks, and we hypothesize this is where $\\\\mathit {\\\\texttt {-}PMI}$ is useful: in the absence of positive information, negative information can be used to improve rare word representations and word analogies. Analogies are solved using nearest neighbor lookups in the vector space, and so accounting for negative cooccurrence effectively repels words with which no positive cooccurrence was observed. In future work, we will explore incorporating $\\\\mathit {\\\\texttt {-}PMI}$ only for rare words (where it is most needed).\"]}", "role": "assistant" } ]
science.multicite_intent_classification
Examine the text below, which includes a citation marked by "<cite>" and "</cite>". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text. Provide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose. Text with Citation: PS Arg1 Extractor: We implement features described in<cite> Wang and Lan (2015)</cite> and add novel features.
["Uses"]
science.multicite_intent_classification.2012
[ { "content": "Examine the text below, which includes a citation marked by \"<cite>\" and \"</cite>\". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.\n\nProvide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.\n\nText with Citation:\nPS Arg1 Extractor: We implement features described in<cite> Wang and Lan (2015)</cite> and add novel features.", "role": "user" }, { "content": "[\"Uses\"]", "role": "assistant" } ]
science.discomat_te
| Temperature (degC) | 120 | 300 | 500 | 650 | 800 | 1000 | | Apparent density (g/cm3) | 0.078 | 0.070 | 0.064 | 0.075 | 0.082 | 0.104 | Caption: Apparent density of silica gel powders heat treated at different temperatures for 2 h Examine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance: {"value": "xx", "type": "Other"} {"value": "xx", "type": "Glass_Compound_Amount", "constituent": "xx", "unit": "xx", "material": "xx"} Process each cell sequentially, and for any unknown attributes, use "xx" as a placeholder. Each JSON should be on a new line. Output:
{"value": "120", "type": "Other"} {"value": "300", "type": "Other"} {"value": "500", "type": "Other"} {"value": "650", "type": "Other"} {"value": "800", "type": "Other"} {"value": "1000", "type": "Other"} {"value": "0.078", "type": "Other"} {"value": "0.070", "type": "Other"} {"value": "0.064", "type": "Other"} {"value": "0.075", "type": "Other"} {"value": "0.082", "type": "Other"} {"value": "0.104", "type": "Other"}
science.discomat_te.125
[ { "content": "| Temperature (degC) | 120 | 300 | 500 | 650 | 800 | 1000 |\n| Apparent density (g/cm3) | 0.078 | 0.070 | 0.064 | 0.075 | 0.082 | 0.104 |\n\nCaption: Apparent density of silica gel powders heat treated at different temperatures for 2 h\n\nExamine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:\n{\"value\": \"xx\", \"type\": \"Other\"}\n{\"value\": \"xx\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"xx\", \"unit\": \"xx\", \"material\": \"xx\"}\n\nProcess each cell sequentially, and for any unknown attributes, use \"xx\" as a placeholder. Each JSON should be on a new line.\n\nOutput:", "role": "user" }, { "content": "{\"value\": \"120\", \"type\": \"Other\"}\n{\"value\": \"300\", \"type\": \"Other\"}\n{\"value\": \"500\", \"type\": \"Other\"}\n{\"value\": \"650\", \"type\": \"Other\"}\n{\"value\": \"800\", \"type\": \"Other\"}\n{\"value\": \"1000\", \"type\": \"Other\"}\n{\"value\": \"0.078\", \"type\": \"Other\"}\n{\"value\": \"0.070\", \"type\": \"Other\"}\n{\"value\": \"0.064\", \"type\": \"Other\"}\n{\"value\": \"0.075\", \"type\": \"Other\"}\n{\"value\": \"0.082\", \"type\": \"Other\"}\n{\"value\": \"0.104\", \"type\": \"Other\"}", "role": "assistant" } ]
science.mup_single_document_summarization
Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise. Title of the Paper: Where can quantum kernel methods make a big difference? Main Content: 1 INTRODUCTION . Since the birth of quantum computing , researchers have been looking for the best place to apply quantum algorithms . The first two quantum algorithms devised were Grover ( 1996 ) and Shor ( 1994 ) . They proved the advantages of quantum algorithms in some specific problems such as search and factorization theoretically . With the rise of artificial intelligence , more and more quantum algorithms combine quantum computing with machine learning . For example , quantum neural network ( QNN ) was first proposed by Ezhov & Ventura ( 2000 ) but was only defined in general terms at the physical level . Ricks & Ventura ( 2003 ) defined an approach to train QNNs , but the complexity of its method is exponential . Subsequently , Lloyd et al . ( 2013 ) , Blacoe et al . ( 2013 ) , and Rebentrost et al . ( 2014 ) tried to introduce quantum computing into clustering , distributed semantics , and SVMs , respectively , but their approaches were too limited to theory . As researchers introduce quantum into various machine learning algorithms , Schuld et al . ( 2015 ) , Biamonte et al . ( 2017 ) , Kopczyk ( 2018 ) , Ciliberto et al . ( 2018 ) have started to summarize and sort out the concept of quantum machine learning . The physical implementation of quantum computers has made great strides in recent years . In 2019 , Arute et al . ( 2019 ) announced the achievement of quantum hegemony , a milestone event in quantum computing . Also , thanks to the emergence of several quantum computing platforms , such as those of IBM and Google , it has become possible for ordinary researchers to translate their research on quantum machine learning algorithms from theory to practice . For example , Farhi & Neven ( 2018 ) , Quek et al . ( 2021 ) , Verdon et al . ( 2019 ) , Garg et al . ( 2019 ) , Srivastava et al . ( 2020 ) , and Meichanetzidis et al . ( 2020 ) have demonstrated the value of quantum machine learning in machine learning tasks by practice , respectively . On the other hand , one of the most famous machine learning algorithms is the kernel method . A detailed description of the kernel method has been given by Burges ( 1998a ) , Muller et al . ( 2001 ) , Scholkopf ( 2001 ) and Hofmann et al . ( 2008 ) . Inspired by the classical kernel approach , Rebentrost et al . ( 2014 ) proposed a quantum kernel approach based on SVM , but only theoretically feasible . It was not until Schuld & Killoran ( 2019 ) , and Havlı́ček et al . ( 2019 ) systematically proposed two feasible implementations of quantum kernel methods that made quantum kernel methods became one of the most mature and practically valuable quantum machine learning methods . Later , Blank et al . ( 2020 ) , Wang et al . ( 2021 ) , Kusumoto et al . ( 2021 ) , and Peters et al . ( 2021 ) experimentally demonstrated the superiority of quantum kernel methods on some datasets . However , none of them have systematically explored the conditions when quantum kernel methods exist to their advantage . Schuld ( 2021 ) summarizes the connection between quantum kernel methods and classical kernel methods . However , it is still unclear when quantum kernel methods will have advantages over classical kernel methods . 
In this paper , we conclude under what circumstances the quantum kernel method is better or worse than the classical kernel method . Specifically , • We propose that the quantum kernel function is probabilistic and classify the existing kernel functions . • We propose a distance-based criterion δ to determine whether the quantum kernel method has the quantum advantage for a given dataset and demonstrate this experimentally . • We explore and find the relationship between the superiority of quantum kernel methods in two dimensions and ( 1 ) the complexity of the data pattern ( 2 ) the data based on Mersenne Twister random distribution . 2 BACKGROUND AND RELATED WORK . Classical Kernel . Kernel methods , summarised by Muller et al . ( 2001 ) and Hofmann et al . ( 2008 ) , are an important class of machine learning methods that carry out machine learning by defining which data points are similar to each other and which are not . Mathematically , the similarity is a distance in the data space , i.e. , the distance between digital representations of data points . Specifically , the kernel method uses a feature mapping function fc to map data from a point in the original input space O to a higher-dimensional Hilbert feature space Fc , i.e. , fc : O → Fc , making separability between data classes more explicit . One of the most famous methods is the support vector machine ( SVM ) proposed by Burges ( 1998b ) . One important factor that makes the kernel method successful is the kernel track . Scholkopf ( 2001 ) pointed out that instead of explicitly calculating the distance in high-dimensional Hilbert space , this distance can be calculated implicitly in low-dimensional input space by the kernel function K , but with the same effect . It can reduce the computational effort significantly and avoid a large number of calculations . A nonlinear classification problem is one of the classical machine learning problems , and kernel methods can effectively handle such problems . Recall that in classical kernel methods , such as support vector machine , a data point xi ∈ Rn is mapped into a potentially much higher dimensional feature space Fc via a nonlinear mapping function fc , where xi is represented by ϕ ( xi ) , i.e. , fc : xi → ϕ ( xi ) . In space Fc , the nonlinear classification problem becomes a linear classification problem and simplifies the problem . The inner product of ϕ ( xi ) Tϕ ( xj ) is often seen as distances between xi and xj in the new space Fc . Quantum Kernel . The quantum kernel method is a kernel method designed to run on quantum computers based on quantum computing properties . Its principle is almost identical to the classical kernel method except that it maps the data point from the original input space O to the quantum Hilbert space Fq , i.e. , fq : O → Fq . The key to the quantum kernel methods is the quantum mapping function fq . We can view the feature mapping function fq as the key to define the quantum kernel methods . Thus , if the quantum kernel approach is superior , the superiority lies in the quantum mapping function . The mechanism of quantum kernel methods is basically the same as that of classical kernel methods . The data point xi is mapped from the original input space O to the quantum state space Fq , i.e. , fq : xi → |ϕ ( xi ) ⟩ , where the |·⟩ denotes a vector and physically it represents a state of some quantum system . The ⟨·| is the Hermitian Conjugate of the vector |·⟩ . 
In practice , the feature map is realized by acting the circuit U ( xi ) on the initial quantum state |0n⟩ , i.e. , |ϕ ( xi ) ⟩ = U ( xi ) |0n⟩ . ( 1 ) The quantum kernel can be obtained by running the circuit U† ( xj ) U ( xi ) on the initial quantum state |0n⟩ , where U† is the Hermitian conjugate of U . Then estimate | ⟨0n|U† ( xj ) U ( xi ) |0n⟩ |2 by counting the frequency of the 0n output as a value of k ( xi , xj ) . Fig.1A shows the process flow of the quantum kernel method and classical kernel method . Quantum Kernel Method Based On Pauli Feature Map . Following the IBM quantum computing platform , we take two qubits as an example . The general expression of a 2-qubit quantum kernel is k ( xi , xj ) = | ⟨ϕ ( xi ) |ϕ ( xj ) ⟩ |2 = | 〈 02 ∣∣U† ( xj ) U ( xi ) ∣∣02〉 |2 . ( 2 ) By the definition of Havlı́ček et al . ( 2019 ) , the quantum circuit U is realized by U ( x⃗ ) = Uϕ ( x⃗ ) H ⊗ 2Uϕ ( x⃗ ) H ⊗ 2 , where the ⊗ is the Kronecker Product of two matrices . For the Secondorder Pauli-Z evolution circuit , Uϕ ( x⃗ ) = exp ( i ( x0Z0 + x1Z1 + ( π − x0 ) ( π − x1 ) Z0Z1 ) ) , where Z0 , Z1 are quantum Z-Gates , and H is the quantum Hadamard-Gate . We denote the corresponding quantum kernel method as the Z-ZZ quantum kernel method , and the corresponding feature map is showed in Fig.1B ( 3 ) . The feature maps of the Z quantum kernel method and the ZZ quantum kernel method are shown in Fig.1B ( 1 ) and Fig.1B ( 2 ) , respectively . In this paper , all references to quantum kernel methods refer to the Z-ZZ quantum kernel method unless otherwise stated . In section 4.3 , we compare these three quantum kernel methods . Support Vector Machine . Support Vector Machine is a maximal margin classifier . It is seen as one of the most successful cases of the kernel approach . SVMs are dedicated to finding a hyper-plane that separates different classes and makes the margin as large as possible . In general cases , i.e. , nonlinear cases , the data is mapped non-linearly to high dimensional Hilbert space by a mapping function . Then the distance between two data points can be calculated using the kernel function . Suppose we have a set of data points D = { ( x1 , y1 ) , · · · , ( xn , yn ) } , where xi ∈ Rd and yi ∈ { −1 , +1 } . According to Burges ( 1998b ) , the nonlinear SVM can be modified and expressed by an optimization problem as maximize : LD ≡ n∑ i αi − 1 2 n∑ i , j=0 αiαjyiyjk ( xi , xj ) ( 3 ) s.t . 0 ≤ αi ≤ C and ∑n i αiyi = 0 , where i = 1 , · · · , n. The decision function is f ( x ) = sign ( Ns∑ i=1 αiyiϕ ( si ) Tϕ ( x ) + b ) = sign ( Ns∑ i=1 αiyik ( si , x ) + b ) , ( 4 ) where si are the support vectors and Ns is the number of support vectors . The SVM-based quantum kernel method is very similar in principle to the traditional SVM , except that the computation of the kernel is performed on a quantum computer . Havlı́ček et al . ( 2019 ) refer to it as quantum kernel estimation . We show the specific estimation method in Section 4.2 . 3 METHODS . 3.1 QUANTUM KERNEL IS A PROBABILISTIC KERNEL . The kernel function is an equation for measuring similarity . In vector space , we estimate the similarity of vectors utilizing vector kernel functions . Similarly , graph kernels describe the similarity of two graphs , and tree kernels compare the similarity of trees , which are often used in natural language processing . A question arises as to how to define kernels or what kind of kernel functions can be effectively used or not . There is no answer to this question . 
Mercer ( 1909 ) argues that a valid kernel function needs to satisfy symmetry and positive definiteness . However , some kernel functions that do not obey Mercer ’ s condition still achieve good results in some specific tasks , such as the widely used sigmoid kernel function proposed by Lin & Lin ( 2003 ) . Mix kernel function proposed by Smits & Jordaan ( 2002 ) tries to achieve better properties when combining different kernel functions . The mechanism of the quantum kernel function is similar to some traditional kernel functions . It follows the Mercer theorem and is a practical kernel function that expands the family of kernel functions . However , its implementation is based on quantum superposition states and entanglement . Since the values obtained are based on probabilities in a statistical sense , we call it a probabilistic kernel function . For correspondence , we call the classical kernel function a deterministic kernel function . We try to clearly show the relationship and difference between different kernel functions by a diagram ( Fig.2 ) . It is worth noting that there are no guarantees for one kernel to work better than the other in all cases , according to the No Free Lunch Theorem ( Wolpert & Macready ( 1997 ) ) . Choosing different kernel functions in various subjects will achieve better results . The primary purpose of this paper is to investigate under what circumstances the quantum kernel method is better or worse than the classical kernel method .
The paper investigates the circumstances under which quantum kernel methods will be superior to classical kernel methods. The criterion is based on a threshold on the ratio of the inter- and intra-class distances of the (binary) training data. Validation of the proposed criterion is carried out on a range of toy and real datasets.
science.mup_single_document_summarization.7264
[ { "content": "Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.\n\nTitle of the Paper:\nWhere can quantum kernel methods make a big difference?\n\nMain Content:\n1 INTRODUCTION . Since the birth of quantum computing , researchers have been looking for the best place to apply quantum algorithms . The first two quantum algorithms devised were Grover ( 1996 ) and Shor ( 1994 ) . They proved the advantages of quantum algorithms in some specific problems such as search and factorization theoretically . With the rise of artificial intelligence , more and more quantum algorithms combine quantum computing with machine learning . For example , quantum neural network ( QNN ) was first proposed by Ezhov & Ventura ( 2000 ) but was only defined in general terms at the physical level . Ricks & Ventura ( 2003 ) defined an approach to train QNNs , but the complexity of its method is exponential . Subsequently , Lloyd et al . ( 2013 ) , Blacoe et al . ( 2013 ) , and Rebentrost et al . ( 2014 ) tried to introduce quantum computing into clustering , distributed semantics , and SVMs , respectively , but their approaches were too limited to theory . As researchers introduce quantum into various machine learning algorithms , Schuld et al . ( 2015 ) , Biamonte et al . ( 2017 ) , Kopczyk ( 2018 ) , Ciliberto et al . ( 2018 ) have started to summarize and sort out the concept of quantum machine learning . The physical implementation of quantum computers has made great strides in recent years . In 2019 , Arute et al . ( 2019 ) announced the achievement of quantum hegemony , a milestone event in quantum computing . Also , thanks to the emergence of several quantum computing platforms , such as those of IBM and Google , it has become possible for ordinary researchers to translate their research on quantum machine learning algorithms from theory to practice . For example , Farhi & Neven ( 2018 ) , Quek et al . ( 2021 ) , Verdon et al . ( 2019 ) , Garg et al . ( 2019 ) , Srivastava et al . ( 2020 ) , and Meichanetzidis et al . ( 2020 ) have demonstrated the value of quantum machine learning in machine learning tasks by practice , respectively . On the other hand , one of the most famous machine learning algorithms is the kernel method . A detailed description of the kernel method has been given by Burges ( 1998a ) , Muller et al . ( 2001 ) , Scholkopf ( 2001 ) and Hofmann et al . ( 2008 ) . Inspired by the classical kernel approach , Rebentrost et al . ( 2014 ) proposed a quantum kernel approach based on SVM , but only theoretically feasible . It was not until Schuld & Killoran ( 2019 ) , and Havlı́ček et al . ( 2019 ) systematically proposed two feasible implementations of quantum kernel methods that made quantum kernel methods became one of the most mature and practically valuable quantum machine learning methods . Later , Blank et al . ( 2020 ) , Wang et al . ( 2021 ) , Kusumoto et al . ( 2021 ) , and Peters et al . ( 2021 ) experimentally demonstrated the superiority of quantum kernel methods on some datasets . However , none of them have systematically explored the conditions when quantum kernel methods exist to their advantage . Schuld ( 2021 ) summarizes the connection between quantum kernel methods and classical kernel methods . However , it is still unclear when quantum kernel methods will have advantages over classical kernel methods . 
In this paper , we conclude under what circumstances the quantum kernel method is better or worse than the classical kernel method . Specifically , • We propose that the quantum kernel function is probabilistic and classify the existing kernel functions . • We propose a distance-based criterion δ to determine whether the quantum kernel method has the quantum advantage for a given dataset and demonstrate this experimentally . • We explore and find the relationship between the superiority of quantum kernel methods in two dimensions and ( 1 ) the complexity of the data pattern ( 2 ) the data based on Mersenne Twister random distribution . 2 BACKGROUND AND RELATED WORK . Classical Kernel . Kernel methods , summarised by Muller et al . ( 2001 ) and Hofmann et al . ( 2008 ) , are an important class of machine learning methods that carry out machine learning by defining which data points are similar to each other and which are not . Mathematically , the similarity is a distance in the data space , i.e. , the distance between digital representations of data points . Specifically , the kernel method uses a feature mapping function fc to map data from a point in the original input space O to a higher-dimensional Hilbert feature space Fc , i.e. , fc : O → Fc , making separability between data classes more explicit . One of the most famous methods is the support vector machine ( SVM ) proposed by Burges ( 1998b ) . One important factor that makes the kernel method successful is the kernel track . Scholkopf ( 2001 ) pointed out that instead of explicitly calculating the distance in high-dimensional Hilbert space , this distance can be calculated implicitly in low-dimensional input space by the kernel function K , but with the same effect . It can reduce the computational effort significantly and avoid a large number of calculations . A nonlinear classification problem is one of the classical machine learning problems , and kernel methods can effectively handle such problems . Recall that in classical kernel methods , such as support vector machine , a data point xi ∈ Rn is mapped into a potentially much higher dimensional feature space Fc via a nonlinear mapping function fc , where xi is represented by ϕ ( xi ) , i.e. , fc : xi → ϕ ( xi ) . In space Fc , the nonlinear classification problem becomes a linear classification problem and simplifies the problem . The inner product of ϕ ( xi ) Tϕ ( xj ) is often seen as distances between xi and xj in the new space Fc . Quantum Kernel . The quantum kernel method is a kernel method designed to run on quantum computers based on quantum computing properties . Its principle is almost identical to the classical kernel method except that it maps the data point from the original input space O to the quantum Hilbert space Fq , i.e. , fq : O → Fq . The key to the quantum kernel methods is the quantum mapping function fq . We can view the feature mapping function fq as the key to define the quantum kernel methods . Thus , if the quantum kernel approach is superior , the superiority lies in the quantum mapping function . The mechanism of quantum kernel methods is basically the same as that of classical kernel methods . The data point xi is mapped from the original input space O to the quantum state space Fq , i.e. , fq : xi → |ϕ ( xi ) ⟩ , where the |·⟩ denotes a vector and physically it represents a state of some quantum system . The ⟨·| is the Hermitian Conjugate of the vector |·⟩ . 
In practice , the feature map is realized by acting the circuit U ( xi ) on the initial quantum state |0n⟩ , i.e. , |ϕ ( xi ) ⟩ = U ( xi ) |0n⟩ . ( 1 ) The quantum kernel can be obtained by running the circuit U† ( xj ) U ( xi ) on the initial quantum state |0n⟩ , where U† is the Hermitian conjugate of U . Then estimate | ⟨0n|U† ( xj ) U ( xi ) |0n⟩ |2 by counting the frequency of the 0n output as a value of k ( xi , xj ) . Fig.1A shows the process flow of the quantum kernel method and classical kernel method . Quantum Kernel Method Based On Pauli Feature Map . Following the IBM quantum computing platform , we take two qubits as an example . The general expression of a 2-qubit quantum kernel is k ( xi , xj ) = | ⟨ϕ ( xi ) |ϕ ( xj ) ⟩ |2 = | 〈 02 ∣∣U† ( xj ) U ( xi ) ∣∣02〉 |2 . ( 2 ) By the definition of Havlı́ček et al . ( 2019 ) , the quantum circuit U is realized by U ( x⃗ ) = Uϕ ( x⃗ ) H ⊗ 2Uϕ ( x⃗ ) H ⊗ 2 , where the ⊗ is the Kronecker Product of two matrices . For the Secondorder Pauli-Z evolution circuit , Uϕ ( x⃗ ) = exp ( i ( x0Z0 + x1Z1 + ( π − x0 ) ( π − x1 ) Z0Z1 ) ) , where Z0 , Z1 are quantum Z-Gates , and H is the quantum Hadamard-Gate . We denote the corresponding quantum kernel method as the Z-ZZ quantum kernel method , and the corresponding feature map is showed in Fig.1B ( 3 ) . The feature maps of the Z quantum kernel method and the ZZ quantum kernel method are shown in Fig.1B ( 1 ) and Fig.1B ( 2 ) , respectively . In this paper , all references to quantum kernel methods refer to the Z-ZZ quantum kernel method unless otherwise stated . In section 4.3 , we compare these three quantum kernel methods . Support Vector Machine . Support Vector Machine is a maximal margin classifier . It is seen as one of the most successful cases of the kernel approach . SVMs are dedicated to finding a hyper-plane that separates different classes and makes the margin as large as possible . In general cases , i.e. , nonlinear cases , the data is mapped non-linearly to high dimensional Hilbert space by a mapping function . Then the distance between two data points can be calculated using the kernel function . Suppose we have a set of data points D = { ( x1 , y1 ) , · · · , ( xn , yn ) } , where xi ∈ Rd and yi ∈ { −1 , +1 } . According to Burges ( 1998b ) , the nonlinear SVM can be modified and expressed by an optimization problem as maximize : LD ≡ n∑ i αi − 1 2 n∑ i , j=0 αiαjyiyjk ( xi , xj ) ( 3 ) s.t . 0 ≤ αi ≤ C and ∑n i αiyi = 0 , where i = 1 , · · · , n. The decision function is f ( x ) = sign ( Ns∑ i=1 αiyiϕ ( si ) Tϕ ( x ) + b ) = sign ( Ns∑ i=1 αiyik ( si , x ) + b ) , ( 4 ) where si are the support vectors and Ns is the number of support vectors . The SVM-based quantum kernel method is very similar in principle to the traditional SVM , except that the computation of the kernel is performed on a quantum computer . Havlı́ček et al . ( 2019 ) refer to it as quantum kernel estimation . We show the specific estimation method in Section 4.2 . 3 METHODS . 3.1 QUANTUM KERNEL IS A PROBABILISTIC KERNEL . The kernel function is an equation for measuring similarity . In vector space , we estimate the similarity of vectors utilizing vector kernel functions . Similarly , graph kernels describe the similarity of two graphs , and tree kernels compare the similarity of trees , which are often used in natural language processing . A question arises as to how to define kernels or what kind of kernel functions can be effectively used or not . There is no answer to this question . 
Mercer ( 1909 ) argues that a valid kernel function needs to satisfy symmetry and positive definiteness . However , some kernel functions that do not obey Mercer ’ s condition still achieve good results in some specific tasks , such as the widely used sigmoid kernel function proposed by Lin & Lin ( 2003 ) . Mix kernel function proposed by Smits & Jordaan ( 2002 ) tries to achieve better properties when combining different kernel functions . The mechanism of the quantum kernel function is similar to some traditional kernel functions . It follows the Mercer theorem and is a practical kernel function that expands the family of kernel functions . However , its implementation is based on quantum superposition states and entanglement . Since the values obtained are based on probabilities in a statistical sense , we call it a probabilistic kernel function . For correspondence , we call the classical kernel function a deterministic kernel function . We try to clearly show the relationship and difference between different kernel functions by a diagram ( Fig.2 ) . It is worth noting that there are no guarantees for one kernel to work better than the other in all cases , according to the No Free Lunch Theorem ( Wolpert & Macready ( 1997 ) ) . Choosing different kernel functions in various subjects will achieve better results . The primary purpose of this paper is to investigate under what circumstances the quantum kernel method is better or worse than the classical kernel method .", "role": "user" }, { "content": "The paper investigates the circumstances under which quantum kernel methods will be superior to classical kernel methods. The criteria is based on a threshold of the ratio of the inter- and intra-class distances of the (binary) training data. Validation of the proposed criteria is carried out on a range of toy and real datasets.", "role": "assistant" } ]
science.multicite_intent_classification
Examine the text below, which includes a citation marked by "<cite>" and "</cite>". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text. Provide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose. Text with Citation: Currently, word_align depends on charalign<cite> (Church, 1993)</cite> to generate a starting point, which limits its applicability to European languages since char_align was designed for language pairs that share a common alphabet.
["Uses", "Background"]
science.multicite_intent_classification.2130
[ { "content": "Examine the text below, which includes a citation marked by \"<cite>\" and \"</cite>\". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.\n\nProvide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.\n\nText with Citation:\nCurrently, word_align depends on charalign<cite> (Church, 1993)</cite> to generate a starting point, which limits its applicability to European languages since char_align was designed for language pairs that share a common alphabet.", "role": "user" }, { "content": "[\"Uses\", \"Background\"]", "role": "assistant" } ]
science.scifact_entailment
Determine the relationship between them and generate a JSON response. Scientific Claim: Sudden death can occur in patients with orthostatic hypertension without cardiac conduction abnormalities. Research Abstract: BACKGROUND Patients with familial amyloid polyneuropathy, a rare hereditary form of amyloidosis, have progressive autonomic neuropathy. The disease usually does not induce heart failure but is associated with sudden death, conduction disturbances, and an increased risk of complications during anesthesia. Although cardiac sympathetic denervation has been clearly demonstrated, the postsynaptic status of the cardiac autonomic nervous system remains unelucidated. METHODS AND RESULTS Twenty-one patients were studied (age, 39+/-11 years; normal coronary arteries; left ventricular ejection fraction 68+/-9%). To evaluate the density and affinity constants of myocardial muscarinic receptors, PET with (11)C-MQNB (methylquinuclidinyl benzilate), a specific hydrophilic antagonist, was used. Cardiac beta-receptor functional efficiency was studied by the heart rate (HR) response to intravenous infusion of isoproterenol (5 minutes after 2 mg of atropine, 5, 10, and 15 ng/kg per minute during 5 minutes per step). The mean muscarinic receptor density was higher in patients than in control subjects (B'(max), 35.5+/-8.9 versus 26.1+/-6.7 pmol/mL, P=0.003), without change in receptor affinity. The increase in HR after injection of atropine as well as of MQNB was lower in patients compared with control subjects despite a similar basal HR (DeltaHR after atropine, 11+/-21% versus 62+/-17%; P<0.001), consistent with parasympathetic denervation. Incremental infusion of isoproterenol induced a similar increase in HR in patients and control subjects. CONCLUSIONS Cardiac autonomic denervation in familial amyloid polyneuropathy results in an upregulation of myocardial muscarinic receptors but without change in cardiac beta-receptor responsiveness to catecholamines.
{"verdict": "NEI", "evidence": []}
science.scifact_entailment.725
[ { "content": "Determine the relationship between them and generate a JSON response. Scientific Claim: \nSudden death can occur in patients with orthostatic hypertension without cardiac conduction abnormalities.\n\nResearch Abstract:\nBACKGROUND Patients with familial amyloid polyneuropathy, a rare hereditary form of amyloidosis, have progressive autonomic neuropathy.\nThe disease usually does not induce heart failure but is associated with sudden death, conduction disturbances, and an increased risk of complications during anesthesia.\nAlthough cardiac sympathetic denervation has been clearly demonstrated, the postsynaptic status of the cardiac autonomic nervous system remains unelucidated.\nMETHODS AND RESULTS Twenty-one patients were studied (age, 39+/-11 years; normal coronary arteries; left ventricular ejection fraction 68+/-9%).\nTo evaluate the density and affinity constants of myocardial muscarinic receptors, PET with (11)C-MQNB (methylquinuclidinyl benzilate), a specific hydrophilic antagonist, was used.\nCardiac beta-receptor functional efficiency was studied by the heart rate (HR) response to intravenous infusion of isoproterenol (5 minutes after 2 mg of atropine, 5, 10, and 15 ng/kg per minute during 5 minutes per step).\nThe mean muscarinic receptor density was higher in patients than in control subjects (B'(max), 35.5+/-8.9 versus 26.1+/-6.7 pmol/mL, P=0.003), without change in receptor affinity.\nThe increase in HR after injection of atropine as well as of MQNB was lower in patients compared with control subjects despite a similar basal HR (DeltaHR after atropine, 11+/-21% versus 62+/-17%; P<0.001), consistent with parasympathetic denervation.\nIncremental infusion of isoproterenol induced a similar increase in HR in patients and control subjects.\nCONCLUSIONS Cardiac autonomic denervation in familial amyloid polyneuropathy results in an upregulation of myocardial muscarinic receptors but without change in cardiac beta-receptor responsiveness to catecholamines.", "role": "user" }, { "content": "{\"verdict\": \"NEI\", \"evidence\": []}", "role": "assistant" } ]
science.mup_single_document_summarization
Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise. Title of the Paper: What Do Deep Nets Learn? Class-wise Patterns Revealed in the Input Space Main Content: 1 INTRODUCTION . Deep neural networks ( DNNs ) are a family of powerful models that have demonstrated superior learning capabilities in a wide range of applications such as image classification , object detection and natural language processing . However , DNNs are often applied as a black box with limited understanding of what the model has learned from the data . Existing understandings about DNNs have mostly been developed in the deep representation space or using the attention map . DNNs are known to be able to learn high quality representations ( Donahue et al. , 2014 ) , and the representations are well associated with the attention map of the model on the inputs ( Zhou et al. , 2016 ; Selvaraju et al. , 2016 ) . It has also been found that DNNs trained on high resolution images like ImageNet are biased towards texture ( Geirhos et al. , 2019 ) . While these works have significantly contributed to the understanding of DNNs , a method that can intuitively visualize what DNNs learn for each class in the input space ( rather than the deep representation space ) is still missing . Recently , the above understandings have been challenged by the vulnerabilities of DNNs to backdoor ( Szegedy et al. , 2014 ; Goodfellow et al. , 2015 ) and adversarial attacks ( Gu et al. , 2017 ; Chen et al. , 2017 ) . The backdoor vulnerability is believed to be caused by the preference of learning high frequency patterns ( Chen et al. , 2017 ; Liu et al. , 2020 ; Wang et al. , 2020 ) . Nevertheless , no existing method is able to reliably reveal the backdoor patterns , even though it has been well learned into the backdoored model . Adversarial attacks can easily fool state-of-the-art DNNs by either sample-wise ( Goodfellow et al. , 2016 ) or universal ( Moosavi-Dezfooli et al. , 2017 ) adversarial perturbations . One recent explanation for the adversarial vulnerability is that , besides robust features , DNNs also learn useful ( to the prediction ) yet non-robust features which are sensitive to small perturbations ( Ilyas et al. , 2019 ) . Adversarial training , one state-of-the-art adversarial defense method , has been shown can train DNNs to learn sample-wise robust features ( Madry et al. , 2018 ; Ilyas et al. , 2019 ) . However , it is still not clear if adversarially trained DNNs can learn a robust pattern for each class . In this paper , we focus on image classification tasks and propose a visualization method that can reveal the pattern learned by DNNs for each class in the input space . Different from sample-wise visualization methods like attention maps , we aim to reveal the knowledge ( or pattern ) learned by DNNs for each class . Moreover , we reveal these patterns in the input space rather than the deep representation space . This is because input space patterns are arguably much easier to interpret . Furthermore , we are interested in a visualization method that can provide new insights into the backdoor and adversarial vulnerabilities of DNNs , both of which are input space vulnerabilities ( Szegedy et al. , 2014 ; Ma et al. , 2018 ) . Given a target class , a canvas image , and a subset of images from the nontarget classes , our method searches for a single pattern ( a set of pixels ) from the canvas image that is highly predictive of the target class . 
In other words , when the pattern is attached to images from any other ( i.e . nontarget ) classes , the model will consistently predict them as the target class . Figure 1 illustrates a few examples of the class-wise patterns revealed by our method for DNNs trained on natural ( clean ) CIFAR-10 ( Krizhevsky , 2009 ) and ImageNet ( Deng et al. , 2009 ) datasets . In summary , our main contributions are : 1 ) We propose a visualization method to reveal the classwise patterns learned by DNNs in the input space , and show the difference to attention maps and universal adversarial perturbations . 2 ) With the proposed visualization method , we show that DNNs trained on natural datasets can learn a consistent and predictive pattern for each class , and the pattern contains abstract shapes along with some texture . This sheds new lights on the current texture bias understanding of DNNs . 3 ) When applied on backdoored DNNs , our method can reveal the trigger patterns learned by the model from the poisoned dataset . Our method can serve as an effective tool to assist the detection of backdoored models . 4 ) The existence of class-wise predictive patterns in the input space indicates that even DNNs trained on clean data can have backdoors , and the class-wise patterns identified by our method can be readily applied to “ backdoor ” attack the model . 5 ) By examining the patterns learned by DNNs trained in the adversarially setting , we find that adversarially trained models learn more simplified shape patterns . 2 RELATED WORK . General Understandings of DNNs . DNNs are known to learn more complex and higher quality representations than traditional models . Features learned at intermediate layers of AlexNet have been found to contain both simple patterns like lines and corners and high level shapes ( Donahue et al. , 2014 ) . These features have been found crucial for the superior performance of DNNs ( He et al. , 2015 ) . The exceptional representation learning capability of DNNs has also been found related to structures of the networks like depth and width ( Safran & Shamir , 2017 ; Telgarsky , 2016 ) . One recent work found that ImageNet-trained DNNs are biased towards texture features ( Geirhos et al. , 2019 ) . Attention maps have also been used to develop better understandings of the decisions made by DNNs on a given input ( Simonyan et al. , 2014 ; Springenberg et al. , 2015 ; Zeiler & Fergus , 2014 ; Gan et al. , 2015 ) . The Grad-CAM technique proposed by Selvaraju et al . ( 2016 ) utilizes input gradients to produce intuitive attention maps . Whilst these works mostly focus on deep representations or sample-wise attention , an understanding and visualization of what DNNs learn for each class in the input space is still missing from the current literature . Understanding Vulnerabilities of DNNs . Recent works have found that DNNs are vulnerable to backdoor and adversarial attacks . A backdoor attack implants a backdoor trigger into a victim model by injecting the trigger into a small proportion of training data ( Gu et al. , 2017 ; Liu et al. , 2018 ) . The model trained on poisoned dataset will learn a noticeable correlation between the trigger and a target label . A backdoored model behaves normally on clean test data , yet consistently predict a target ( incorrect ) label whenever the trigger appears in a test example ( Zhao et al. , 2020 ; Yao et al. , 2019 ; Liu et al. , 2020 ) . This is believed to be caused by the fact that DNNs tend to learn more high frequency ( e.g . 
backdoor ) patterns ( Chen et al. , 2017 ; Liu et al. , 2020 ; Wang et al. , 2020 ) . However , it is still unclear whether DNNs can learn such patterns from natural ( clean ) data . Moreover , despite a few attempts ( Wang et al. , 2019 ; Qiao et al. , 2019 ) , the trigger pattern still can not be reliably revealed , even though it has been well learned by the backdoored model . DNNs can also be easily fooled by small , imperceptible adversarial perturbations into making incorrect predictions ( Szegedy et al. , 2014 ; Goodfellow et al. , 2016 ) . Adversarial perturbations can be either sample-wise ( Madry et al. , 2018 ) or universal ( Moosavi-Dezfooli et al. , 2017 ) . This has been found to be caused by learning useful ( to prediction ) but nonrobust ( to adversarial perturbation ) features ( Ilyas et al. , 2019 ) . Meanwhile , adversarial training has been shown to learn more robust features and deliver effective defenses ( Madry et al. , 2018 ) . However , existing understandings of adversarial training are established based on sample-wise attention ( Ilyas et al. , 2019 ) . It still unclear , from the class-wise perspective , what robust or nonrobust input patterns look like . In this paper , we will propose a method to reveal the patterns ( e.g . backdoor or adversarially robust/nonrobust ) learned by DNNs for each class . 3 PROPOSED VISUALIZATION METHOD . In this section , we first define the input space class-wise pattern searching problem , then introduce our proposed searching method . Motivation and Intuition . We focus on image classification with deep neural networks . We denote the training and test dataset as Dtrain and Dtest , respectively . Given a DNN model f trained on a K-class Dtrain and a target class y ∈ { 1 , · · · , K } , our goal is to find an input space pattern , i.e , a small set of pixels , that are extremely predictive of the target class . A highly predictive pattern of a class can largely capture the knowledge the model learned for the class . In backdoor attack , a predictive ( i.e . backdoor trigger ) pattern learned by the model can even control the model ’ s prediction . Intuitively , a predictive pattern of a target class should be able to make the model consistently predict the target class whenever it is attached to images from any other ( e.g . nontarget ) classes . Class-wise Pattern Searching . For a target class y , our method searches for a predictive pattern py from a canvas image xc , based on a small test subset Dn of images from the nontarget classes ( i.e . Dn ⊂ Dtest ) . The canvas image xc is the image where the pattern ( a set of pixels ) is extracted . The search is done via an optimization process based on a mixed input between the canvas image xc and an image xn ∈ Dn . The mixed input x̃ is defined as follows : x̃ =m ∗ xc + ( 1−m ) ∗ xn , ( 1 ) where m is a mask that has the same size as either xc or xn , and mij ≥ 0 . The mixed input image is labeled as the target class y regardless of its original class . This mixing strategy is reminiscent of the mixup ( Zhang et al. , 2018 ) data augmentation algorithm . However , we do not mix the class labels and our purpose is for pattern optimization rather than data augmentation . 
During the searching process, the mask is iteratively updated to minimize the following loss: $L = -\log f_y(\tilde{x}) + \alpha \frac{1}{n}\|m\|_1$ (2), where $f_y$ is the network's probability output for the target class $y$, $\|\cdot\|_1$ is the L1 norm, $\alpha$ is a parameter that balances the two loss terms, and $n$ is the size of the input image as well as of the mask. The first term is the commonly used cross-entropy loss. The second term increases the sparsity of the mask, as we are interested in simple patterns with a small number of highly predictive pixels. During the search, we pair the canvas image $x_c$ randomly with images from $D_n$ and iteratively update the mask $m$ using standard Stochastic Gradient Descent (SGD) while keeping the model parameters unchanged. At each iteration, the mask $m$ is also clipped into [0, 1]. Once a mask is learned, we further clip values smaller than $\gamma$ to zero and values larger than $\gamma$ to one; we denote this clipped mask by $m_\gamma$. We then extract the pattern from the canvas image as $p_y = m_\gamma * x_c$. The $\gamma$ parameter can be chosen flexibly for different applications: a large $\gamma$ may lead to a less predictive pattern, while a small $\gamma$ produces more of a sample-wise pattern that overfits to the canvas image. The above search is applied to $N$ canvas images to generate $N$ patterns for each class, and we select the pattern with the lowest loss value as the final pattern of the class. This additional step finds the most predictive pattern by exploring different canvases. The complete procedure of our method is described in Algorithm 1 in Appendix A.

Canvas Sampling. We propose four sampling strategies for selecting the $N$ canvas images: positive sampling, negative sampling, random sampling, and the white canvas. Positive sampling selects the top-$N$ most confident images from the target class according to the logits of model $f$. Negative sampling selects the top-$N$ images most confidently misclassified from any nontarget class into the target class. Random sampling randomly chooses $N$ images from the target class $y$. The white canvas simply uses an image with all white pixels as the canvas. Both positive and negative sampling aim to find the examples best learned by the model, but from different perspectives: well learned correctly (positive) vs. well learned incorrectly (negative). The white canvas is interesting since a pattern found from it has the texture "removed", which is useful when only shape features are of interest. The patterns found with different canvases are compared in Figure 4. After applying our method to each class, we obtain a set of class-wise patterns $P = \{p_1, \cdots, p_K\}$. This set of predictive patterns can reveal the knowledge learned by model $f$ for each class from a unique perspective.

Why is it Class-wise? At first sight, one might wonder whether the discovered pattern is sample-wise rather than class-wise, given the use of the canvas sample. Note, however, that even though we use a single sample as a canvas, the pattern found by the optimization algorithm depends on how the model has learned the entire class, in terms of its loss. This is particularly evident in the case of the all-white canvas, which bears no relation to any input sample. Hence our designation of the pattern as being "class-wise".
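The search loop just described can be condensed into a short PyTorch-style sketch. This is a schematic re-implementation of Equations (1)-(2) under stated assumptions — a pretrained classifier `model`, a canvas tensor `x_c` of shape (C, H, W), and a data loader over non-target images are assumed to exist, and all hyperparameter values are placeholders — not the authors' released code.

```python
import torch
import torch.nn.functional as F

def search_class_pattern(model, x_c, nontarget_loader, target_y,
                         alpha=1e-3, gamma=0.5, steps=500, lr=0.1, device="cpu"):
    """Optimize a mask m so that m * x_c pasted onto non-target images is
    predicted as target_y (Eq. 1-2 above), then binarize m at threshold gamma."""
    model.eval().to(device)
    x_c = x_c.to(device)                          # canvas image, shape (C, H, W)
    m = torch.full_like(x_c, 0.5, requires_grad=True)
    opt = torch.optim.SGD([m], lr=lr)

    it = iter(nontarget_loader)
    for _ in range(steps):
        try:
            x_n, _ = next(it)                     # batch of non-target images
        except StopIteration:
            it = iter(nontarget_loader)
            x_n, _ = next(it)
        x_n = x_n.to(device)

        x_mix = m * x_c + (1.0 - m) * x_n         # Eq. 1, broadcast over the batch
        logits = model(x_mix)
        target = torch.full((x_n.size(0),), target_y, dtype=torch.long, device=device)
        loss = F.cross_entropy(logits, target) + alpha * m.abs().mean()  # Eq. 2

        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            m.clamp_(0.0, 1.0)                    # keep the mask inside [0, 1]

    m_gamma = (m.detach() > gamma).float()        # hard threshold at gamma
    return m_gamma * x_c                          # class-wise pattern p_y

# Usage (assuming `model`, `canvas`, `loader` exist):
# pattern = search_class_pattern(model, canvas, loader, target_y=3)
```

Repeating this search over N canvases and keeping the lowest-loss pattern, as described above, gives the final class-wise pattern.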
While our method can find consistent and predictive classwise patterns in the experiments , it might still be extendable . For example , using multiple positive canvas images at the same time , using noise rather than the non-target images , or using universal adversarial perturbation ( UAP ) ( Moosavi-Dezfooli et al. , 2017 ) but in a more controlled manner . We leave further explorations of these methods as our future work . Difference to Universal Adversarial Perturbation . UAP can also be applied to craft class-wise adversarial patterns that can make the model predict an adversarial target class . In this view , both UAP and our method find predictive patterns to the target class . However , the two methods work in different ways . By fooling the network , UAP explores the unlearned space ( low-probability “ pockets ” ) of the network ( Szegedy et al. , 2014 ; Ma et al. , 2018 ) . In contrast , our method is a searching ( rather than perturbing ) method that does not rely on adversarial perturbations . Thus , it has to find the optimal pixel locations in the input space that are well-learned by the model for the pattern to be predictive of the class . In Section 4.2 and Appendix E , we have experiments showing the difference of the patterns found by class-wise UAP and our method .
The paper proposes a simple method for visualizing the patterns learned by deep neural networks in the supervised classification setting. Informally, suppose you have an image x that is "representative" of the class y, and let X be a set of images that belong to other classes. The authors propose an optimization problem that looks for a mask (i.e., a set of pixels), along with values of those pixels, such that when this pattern is added to any image in X, the model predicts the new image to have the label y. This optimization problem can be solved using iterative thresholding, and the level of sparsity can be controlled, as the authors study. Despite its simplicity, the method can reveal clear patterns, particularly on high-resolution images such as ImageNet. The authors then show how this method can be used to interpret neural networks, detect backdoor attacks during training, and verify robustness.
science.mup_single_document_summarization.1244
[ { "content": "Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.\n\nTitle of the Paper:\nWhat Do Deep Nets Learn? Class-wise Patterns Revealed in the Input Space\n\nMain Content:\n1 INTRODUCTION . Deep neural networks ( DNNs ) are a family of powerful models that have demonstrated superior learning capabilities in a wide range of applications such as image classification , object detection and natural language processing . However , DNNs are often applied as a black box with limited understanding of what the model has learned from the data . Existing understandings about DNNs have mostly been developed in the deep representation space or using the attention map . DNNs are known to be able to learn high quality representations ( Donahue et al. , 2014 ) , and the representations are well associated with the attention map of the model on the inputs ( Zhou et al. , 2016 ; Selvaraju et al. , 2016 ) . It has also been found that DNNs trained on high resolution images like ImageNet are biased towards texture ( Geirhos et al. , 2019 ) . While these works have significantly contributed to the understanding of DNNs , a method that can intuitively visualize what DNNs learn for each class in the input space ( rather than the deep representation space ) is still missing . Recently , the above understandings have been challenged by the vulnerabilities of DNNs to backdoor ( Szegedy et al. , 2014 ; Goodfellow et al. , 2015 ) and adversarial attacks ( Gu et al. , 2017 ; Chen et al. , 2017 ) . The backdoor vulnerability is believed to be caused by the preference of learning high frequency patterns ( Chen et al. , 2017 ; Liu et al. , 2020 ; Wang et al. , 2020 ) . Nevertheless , no existing method is able to reliably reveal the backdoor patterns , even though it has been well learned into the backdoored model . Adversarial attacks can easily fool state-of-the-art DNNs by either sample-wise ( Goodfellow et al. , 2016 ) or universal ( Moosavi-Dezfooli et al. , 2017 ) adversarial perturbations . One recent explanation for the adversarial vulnerability is that , besides robust features , DNNs also learn useful ( to the prediction ) yet non-robust features which are sensitive to small perturbations ( Ilyas et al. , 2019 ) . Adversarial training , one state-of-the-art adversarial defense method , has been shown can train DNNs to learn sample-wise robust features ( Madry et al. , 2018 ; Ilyas et al. , 2019 ) . However , it is still not clear if adversarially trained DNNs can learn a robust pattern for each class . In this paper , we focus on image classification tasks and propose a visualization method that can reveal the pattern learned by DNNs for each class in the input space . Different from sample-wise visualization methods like attention maps , we aim to reveal the knowledge ( or pattern ) learned by DNNs for each class . Moreover , we reveal these patterns in the input space rather than the deep representation space . This is because input space patterns are arguably much easier to interpret . Furthermore , we are interested in a visualization method that can provide new insights into the backdoor and adversarial vulnerabilities of DNNs , both of which are input space vulnerabilities ( Szegedy et al. , 2014 ; Ma et al. , 2018 ) . 
Given a target class , a canvas image , and a subset of images from the nontarget classes , our method searches for a single pattern ( a set of pixels ) from the canvas image that is highly predictive of the target class . In other words , when the pattern is attached to images from any other ( i.e . nontarget ) classes , the model will consistently predict them as the target class . Figure 1 illustrates a few examples of the class-wise patterns revealed by our method for DNNs trained on natural ( clean ) CIFAR-10 ( Krizhevsky , 2009 ) and ImageNet ( Deng et al. , 2009 ) datasets . In summary , our main contributions are : 1 ) We propose a visualization method to reveal the classwise patterns learned by DNNs in the input space , and show the difference to attention maps and universal adversarial perturbations . 2 ) With the proposed visualization method , we show that DNNs trained on natural datasets can learn a consistent and predictive pattern for each class , and the pattern contains abstract shapes along with some texture . This sheds new lights on the current texture bias understanding of DNNs . 3 ) When applied on backdoored DNNs , our method can reveal the trigger patterns learned by the model from the poisoned dataset . Our method can serve as an effective tool to assist the detection of backdoored models . 4 ) The existence of class-wise predictive patterns in the input space indicates that even DNNs trained on clean data can have backdoors , and the class-wise patterns identified by our method can be readily applied to “ backdoor ” attack the model . 5 ) By examining the patterns learned by DNNs trained in the adversarially setting , we find that adversarially trained models learn more simplified shape patterns . 2 RELATED WORK . General Understandings of DNNs . DNNs are known to learn more complex and higher quality representations than traditional models . Features learned at intermediate layers of AlexNet have been found to contain both simple patterns like lines and corners and high level shapes ( Donahue et al. , 2014 ) . These features have been found crucial for the superior performance of DNNs ( He et al. , 2015 ) . The exceptional representation learning capability of DNNs has also been found related to structures of the networks like depth and width ( Safran & Shamir , 2017 ; Telgarsky , 2016 ) . One recent work found that ImageNet-trained DNNs are biased towards texture features ( Geirhos et al. , 2019 ) . Attention maps have also been used to develop better understandings of the decisions made by DNNs on a given input ( Simonyan et al. , 2014 ; Springenberg et al. , 2015 ; Zeiler & Fergus , 2014 ; Gan et al. , 2015 ) . The Grad-CAM technique proposed by Selvaraju et al . ( 2016 ) utilizes input gradients to produce intuitive attention maps . Whilst these works mostly focus on deep representations or sample-wise attention , an understanding and visualization of what DNNs learn for each class in the input space is still missing from the current literature . Understanding Vulnerabilities of DNNs . Recent works have found that DNNs are vulnerable to backdoor and adversarial attacks . A backdoor attack implants a backdoor trigger into a victim model by injecting the trigger into a small proportion of training data ( Gu et al. , 2017 ; Liu et al. , 2018 ) . The model trained on poisoned dataset will learn a noticeable correlation between the trigger and a target label . 
A backdoored model behaves normally on clean test data , yet consistently predict a target ( incorrect ) label whenever the trigger appears in a test example ( Zhao et al. , 2020 ; Yao et al. , 2019 ; Liu et al. , 2020 ) . This is believed to be caused by the fact that DNNs tend to learn more high frequency ( e.g . backdoor ) patterns ( Chen et al. , 2017 ; Liu et al. , 2020 ; Wang et al. , 2020 ) . However , it is still unclear whether DNNs can learn such patterns from natural ( clean ) data . Moreover , despite a few attempts ( Wang et al. , 2019 ; Qiao et al. , 2019 ) , the trigger pattern still can not be reliably revealed , even though it has been well learned by the backdoored model . DNNs can also be easily fooled by small , imperceptible adversarial perturbations into making incorrect predictions ( Szegedy et al. , 2014 ; Goodfellow et al. , 2016 ) . Adversarial perturbations can be either sample-wise ( Madry et al. , 2018 ) or universal ( Moosavi-Dezfooli et al. , 2017 ) . This has been found to be caused by learning useful ( to prediction ) but nonrobust ( to adversarial perturbation ) features ( Ilyas et al. , 2019 ) . Meanwhile , adversarial training has been shown to learn more robust features and deliver effective defenses ( Madry et al. , 2018 ) . However , existing understandings of adversarial training are established based on sample-wise attention ( Ilyas et al. , 2019 ) . It still unclear , from the class-wise perspective , what robust or nonrobust input patterns look like . In this paper , we will propose a method to reveal the patterns ( e.g . backdoor or adversarially robust/nonrobust ) learned by DNNs for each class . 3 PROPOSED VISUALIZATION METHOD . In this section , we first define the input space class-wise pattern searching problem , then introduce our proposed searching method . Motivation and Intuition . We focus on image classification with deep neural networks . We denote the training and test dataset as Dtrain and Dtest , respectively . Given a DNN model f trained on a K-class Dtrain and a target class y ∈ { 1 , · · · , K } , our goal is to find an input space pattern , i.e , a small set of pixels , that are extremely predictive of the target class . A highly predictive pattern of a class can largely capture the knowledge the model learned for the class . In backdoor attack , a predictive ( i.e . backdoor trigger ) pattern learned by the model can even control the model ’ s prediction . Intuitively , a predictive pattern of a target class should be able to make the model consistently predict the target class whenever it is attached to images from any other ( e.g . nontarget ) classes . Class-wise Pattern Searching . For a target class y , our method searches for a predictive pattern py from a canvas image xc , based on a small test subset Dn of images from the nontarget classes ( i.e . Dn ⊂ Dtest ) . The canvas image xc is the image where the pattern ( a set of pixels ) is extracted . The search is done via an optimization process based on a mixed input between the canvas image xc and an image xn ∈ Dn . The mixed input x̃ is defined as follows : x̃ =m ∗ xc + ( 1−m ) ∗ xn , ( 1 ) where m is a mask that has the same size as either xc or xn , and mij ≥ 0 . The mixed input image is labeled as the target class y regardless of its original class . This mixing strategy is reminiscent of the mixup ( Zhang et al. , 2018 ) data augmentation algorithm . However , we do not mix the class labels and our purpose is for pattern optimization rather than data augmentation . 
During the searching process , the mask is iteratively updated to minimize the following loss : L = − log fy ( x̃ ) + α 1 n ‖m‖1 , ( 2 ) where , fy is network ’ s probability output with respect to target class y , ‖·‖1 is the L1 norm , α is a parameter that balances the two loss terms , and n is the size of the input image as well as the mask . The first loss term is the commonly used cross entropy loss . The second term increases the sparsity of mask as we are interested in simple patterns with a small number of highly predictive pixels . During the search process , we pair the canvas image xc randomly with images from Dn , and iteratively update the mask m using standard Stochastic Gradient Decent ( SGD ) while keeping the model parameters unchanged . At each iteration , the mask m will also be clipped into [ 0 , 1 ] . Once a mask is learned , we further clip the values in the mask that are smaller than γ to zero , larger than γ to one . We denote this clipped mask by mγ . We then extract the pattern from the canvas image by py =mγ ∗ xc . The γ parameter can be flexibly determined in different applications . A large γ may lead to less predictive pattern while a small γ will produce more of a sample-wise pattern that overfits to the canvas image . The above search method is repeatedly applied to N canvas images to generate N patterns for each class . We then select the pattern that has the lowest loss value as the final pattern of the class . This additional step is to find the most predictive pattern by exploring different canvases . The complete procedure of our method is described in Algorithm 1 in Appendix A. Canvas Sampling . We propose four different sampling strategies for the selection of the N canvas images : positive sampling , negative sampling , random sampling and white canvas . Positive sampling selects the top-N confident images from the target class according to the logits output of model f . Negative sampling selects the top-N most confidently misclassified images from any nontarget class into the target class . The random sampling randomly chooses N images from the target class y . The white canvas simply uses an image with all white pixels as the canvas . Both positive and the negative sampling aim to find the most well-learned examples by the model , but from different perspectives : well-learned correctly ( e.g . positive ) vs. well-learned incorrectly ( e.g . negative ) . The white canvas is interesting since the pattern found from the white canvas will have the texture “ removed ” , which is useful for scenarios where only the shape features are of interest . The patterns found based on different canvases are compared in Figure 4 . After applying our method on each class , we can obtain a set of class-wise patterns : P = { p1 , · · · , pK } . This set of predictive patterns can revel the knowledge learned by model f for each class from a unique perspective . Why is it Class-wise ? At first sight , one might wonder if the discovered pattern could be samplewise , rather than class-wise , given the use of the canvas sample . Note that , however , even though we are using a single sample as a canvas , the pattern found by the optimization algorithm is dependent on how the model has learnt the entire class , in terms of its loss . This is particularly evident in the case of the all white canvas , which bears no relation to any input sample . Hence our designation of the pattern as being “ class-wise ” . 
While our method can find consistent and predictive classwise patterns in the experiments , it might still be extendable . For example , using multiple positive canvas images at the same time , using noise rather than the non-target images , or using universal adversarial perturbation ( UAP ) ( Moosavi-Dezfooli et al. , 2017 ) but in a more controlled manner . We leave further explorations of these methods as our future work . Difference to Universal Adversarial Perturbation . UAP can also be applied to craft class-wise adversarial patterns that can make the model predict an adversarial target class . In this view , both UAP and our method find predictive patterns to the target class . However , the two methods work in different ways . By fooling the network , UAP explores the unlearned space ( low-probability “ pockets ” ) of the network ( Szegedy et al. , 2014 ; Ma et al. , 2018 ) . In contrast , our method is a searching ( rather than perturbing ) method that does not rely on adversarial perturbations . Thus , it has to find the optimal pixel locations in the input space that are well-learned by the model for the pattern to be predictive of the class . In Section 4.2 and Appendix E , we have experiments showing the difference of the patterns found by class-wise UAP and our method .", "role": "user" }, { "content": "The papers proposes a simple method for visualizing the patterns learned by deep neural networks in the supervised classification setting. Informally, suppose you have an image x that is \"representative\" of the class y and let X be a set of images that belong to other classes. The authors propose an optimization problem that looks for a mask (i.e. set of pixels) along with values of those pixels such that when this pattern is added to any image in X, the model will predict the new image to have the label y. This optimization problem can be solved using iterative thresholding and one may control the level of sparsity as the authors studied. Despite its simplicity, it can reveal clear patterns, particularly on high resolution images, such as ImageNet. The authors, then, show how this method can be used to interpret neural networks, detect backdoor attacks during training, and verify robustness.", "role": "assistant" } ]
science.qasper_abstractive_qa
Examine the provided sections from a research document and address the accompanying query. The text is divided by newlines. Document Content: Automated News Suggestions for Populating Wikipedia Entity Pages Wikipedia entity pages are a valuable source of information for direct consumption and for knowledge-base construction, update and maintenance. Facts in these entity pages are typically supported by references. Recent studies show that as much as 20\% of the references are from online news sources. However, many entity pages are incomplete even if relevant information is already available in existing news articles. Even for the already present references, there is often a delay between the news article publication time and the reference time. In this work, we therefore look at Wikipedia through the lens of news and propose a novel news-article suggestion task to improve news coverage in Wikipedia, and reduce the lag of newsworthy references. Our work finds direct application, as a precursor, to Wikipedia page generation and knowledge-base acceleration tasks that rely on relevant and high quality input sources. We propose a two-stage supervised approach for suggesting news articles to entity pages for a given state of Wikipedia. First, we suggest news articles to Wikipedia entities (article-entity placement) relying on a rich set of features which take into account the \emph{salience} and \emph{relative authority} of entities, and the \emph{novelty} of news articles to entity pages. Second, we determine the exact section in the entity page for the input article (article-section placement) guided by class-based section templates. We perform an extensive evaluation of our approach based on ground-truth data that is extracted from external references in Wikipedia. We achieve a high precision value of up to 93\% in the \emph{article-entity} suggestion stage and upto 84\% for the \emph{article-section placement}. Finally, we compare our approach against competitive baselines and show significant improvements. Article–Entity Placement In this step we learn the function INLINEFORM0 to correctly determine whether INLINEFORM1 should be suggested for INLINEFORM2 , basically a binary classification model (0=`non-relevant' and 1=`relevant'). Note that we are mainly interested in finding the relevant pairs in this task. For every news article, the number of disambiguated entities is around 30 (but INLINEFORM3 is suggested for only two of them on average). Therefore, the distribution of `non-relevant' and `relevant' pairs is skewed towards the earlier, and by simply choosing the `non-relevant' label we can achieve a high accuracy for INLINEFORM4 . Finding the relevant pairs is therefore a considerable challenge. An article INLINEFORM0 is suggested to INLINEFORM1 by our function INLINEFORM2 if it fulfills the following properties. The entity INLINEFORM3 is salient in INLINEFORM4 (a central concept), therefore ensuring that INLINEFORM5 is about INLINEFORM6 and that INLINEFORM7 is important for INLINEFORM8 . Next, given the fact there might be many articles in which INLINEFORM9 is salient, we also look at the reverse property, namely whether INLINEFORM10 is important for INLINEFORM11 . We do this by comparing the authority of INLINEFORM12 (which is a measure of popularity of an entity, such as its frequency of mention in a whole corpus) with the authority of its co-occurring entities in INLINEFORM13 , leading to a feature we call relative authority. 
The intuition is that for an entity that has overall lower authority than its co-occurring entities, a news article is more easily of importance. Finally, if the article we are about to suggest is already covered in the entity profile INLINEFORM14 , we do not wish to suggest redundant information, hence the novelty. Therefore, the learning objective of INLINEFORM15 should fulfill the following properties. Table TABREF21 shows a summary of the computed features for INLINEFORM16 . Salience: entity INLINEFORM0 should be a salient entity in news article INLINEFORM1 Relative Authority: the set of entities INLINEFORM0 with which INLINEFORM1 co-occurs should have higher authority than INLINEFORM2 , making INLINEFORM3 important for INLINEFORM4 Novelty: news article INLINEFORM0 should provide novel information for entity INLINEFORM1 taking into account its profile INLINEFORM2 Baseline Features. As discussed in Section SECREF2 , a variety of features that measure salience of an entity in text are available from the NLP community. We reimplemented the ones in Dunietz and Gillick BIBREF11 . This includes a variety of features, e.g. positional features, occurrence frequency and the internal POS structure of the entity and the sentence it occurs in. Table 2 in BIBREF11 gives details. Relative Entity Frequency. Although frequency of mention and positional features play some role in baseline features, their interaction is not modeled by a single feature nor do the positional features encode more than sentence position. We therefore suggest a novel feature called relative entity frequency, INLINEFORM0 , that has three properties.: (i) It rewards entities for occurring throughout the text instead of only in some parts of the text, measured by the number of paragraphs it occurs in (ii) it rewards entities that occur more frequently in the opening paragraphs of an article as we model INLINEFORM1 as an exponential decay function. The decay corresponds to the positional index of the news paragraph. This is inspired by the news-specific discourse structure that tends to give short summaries of the most important facts and entities in the opening paragraphs. (iii) it compares entity frequency to the frequency of its co-occurring mentions as the weight of an entity appearing in a specific paragraph, normalized by the sum of the frequencies of other entities in INLINEFORM2 . DISPLAYFORM0 where, INLINEFORM0 represents a news paragraph from INLINEFORM1 , and with INLINEFORM2 we indicate the set of all paragraphs in INLINEFORM3 . The frequency of INLINEFORM4 in a paragraph INLINEFORM5 is denoted by INLINEFORM6 . With INLINEFORM7 and INLINEFORM8 we indicate the number of paragraphs in which entity INLINEFORM9 occurs, and the total number of paragraphs, respectively. Relative Authority. In this case, we consider the comparative relevance of the news article to the different entities occurring in it. As an example, let us consider the meeting of the Sudanese bishop Elias Taban with Hillary Clinton. Both entities are salient for the meeting. However, in Taban's Wikipedia page, this meeting is discussed prominently with a corresponding news reference, whereas in Hillary Clinton's Wikipedia page it is not reported at all. 
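A compact sketch of the authority signals discussed in this passage is given below: the a priori authority of an entity estimated from its mention frequency in the corpus, and the relative authority of an entity in an article as the fraction of its co-occurring entities with higher a priori authority (these definitions are spelled out formally in the passage that follows; the PageRank variant of a priori authority is omitted here). The data structures are toy stand-ins, not the paper's actual pipeline.

```python
from collections import Counter
from typing import Dict, List

def a_priori_authority(corpus_entities: List[List[str]]) -> Dict[str, float]:
    """P(e | corpus): relative mention frequency of each entity over all articles."""
    counts = Counter(e for article in corpus_entities for e in article)
    total = sum(counts.values())
    return {e: c / total for e, c in counts.items()}

def relative_authority(entity: str, article_entities: List[str],
                       authority: Dict[str, float]) -> float:
    """Fraction of co-occurring entities in the article whose a priori authority
    exceeds that of `entity` (higher value -> the article matters more for it)."""
    others = [e for e in article_entities if e != entity]
    if not others:
        return 0.0
    higher = sum(authority.get(e, 0.0) > authority.get(entity, 0.0) for e in others)
    return higher / len(others)

# Toy example: Clinton is mentioned far more often in the corpus than Taban,
# so an article covering both is relatively more important for Taban.
corpus = [["Hillary Clinton"]] * 50 + [["Elias Taban", "Hillary Clinton"]]
auth = a_priori_authority(corpus)
article = ["Elias Taban", "Hillary Clinton"]
print(relative_authority("Elias Taban", article, auth))      # 1.0
print(relative_authority("Hillary Clinton", article, auth))  # 0.0
```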
We believe this is not just an omission in Clinton's page but mirrors the fact that for the lesser known Taban the meeting is big news whereas for the more famous Clinton these kind of meetings are a regular occurrence, not all of which can be reported in what is supposed to be a selection of the most important events for her. Therefore, if two entities co-occur, the news is more relevant for the entity with the lower a priori authority. The a priori authority of an entity (denoted by INLINEFORM0 ) can be measured in several ways. We opt for two approaches: (i) probability of entity INLINEFORM1 occurring in the corpus INLINEFORM2 , and (ii) authority assessed through centrality measures like PageRank BIBREF16 . For the second case we construct the graph INLINEFORM3 consisting of entities in INLINEFORM4 and news articles in INLINEFORM5 as vertices. The edges are established between INLINEFORM6 and entities in INLINEFORM7 , that is INLINEFORM8 , and the out-links from INLINEFORM9 , that is INLINEFORM10 (arrows present the edge direction). Starting from a priori authority, we proceed to relative authority by comparing the a priori authority of co-occurring entities in INLINEFORM0 . We define the relative authority of INLINEFORM1 as the proportion of co-occurring entities INLINEFORM2 that have a higher a priori authority than INLINEFORM3 (see Equation EQREF28 . DISPLAYFORM0 As we might run the danger of not suggesting any news articles for entities with very high a priori authority (such as Clinton) due to the strict inequality constraint, we can relax the constraint such that the authority of co-occurring entities is above a certain threshold. News Domain Authority. The news domain authority addresses two main aspects. Firstly, if bundled together with the relative authority feature, we can ensure that dependent on the entity authority, we suggest news from authoritative sources, hence ensuring the quality of suggested articles. The second aspect is in a news streaming scenario where multiple news domains report the same event — ideally only articles coming from authoritative sources would fulfill the conditions for the news suggestion task. The news domain authority is computed based on the number of news references in Wikipedia coming from a particular news domain INLINEFORM0 . This represents a simple prior that a news article INLINEFORM1 is from domain INLINEFORM2 in corpus INLINEFORM3 . We extract the domains by taking the base URLs from the news article URLs. An important feature when suggesting an article INLINEFORM0 to an entity INLINEFORM1 is the novelty of INLINEFORM2 w.r.t the already existing entity profile INLINEFORM3 . Studies BIBREF17 have shown that on comparable collections to ours (TREC GOV2) the number of duplicates can go up to INLINEFORM4 . This figure is likely higher for major events concerning highly authoritative entities on which all news media will report. Given an entity INLINEFORM0 and the already added news references INLINEFORM1 up to year INLINEFORM2 , the novelty of INLINEFORM3 at year INLINEFORM4 is measured by the KL divergence between the language model of INLINEFORM5 and articles in INLINEFORM6 . We combine this measure with the entity overlap of INLINEFORM7 and INLINEFORM8 . The novelty value of INLINEFORM9 is given by the minimal divergence value. Low scores indicate low novelty for the entity profile INLINEFORM10 . N(n|e) = n'Nt-1{DKL((n') || (n)) + DKL((N) || (n)). DKL((n') || (n)). 
(1-) jaccard((n'),(n))} where INLINEFORM0 is the KL divergence of the language models ( INLINEFORM1 and INLINEFORM2 ), whereas INLINEFORM3 is the mixing weight ( INLINEFORM4 ) between the language models INLINEFORM5 and the entity overlap in INLINEFORM6 and INLINEFORM7 . Here we introduce the evaluation setup and analyze the results for the article–entity (AEP) placement task. We only report the evaluation metrics for the `relevant' news-entity pairs. A detailed explanation on why we focus on the `relevant' pairs is provided in Section SECREF16 . Baselines. We consider the following baselines for this task. B1. The first baseline uses only the salience-based features by Dunietz and Gillick BIBREF11 . B2. The second baseline assigns the value relevant to a pair INLINEFORM0 , if and only if INLINEFORM1 appears in the title of INLINEFORM2 . Learning Models. We use Random Forests (RF) BIBREF23 . We learn the RF on all computed features in Table TABREF21 . The optimization on RF is done by splitting the feature space into multiple trees that are considered as ensemble classifiers. Consequently, for each classifier it computes the margin function as a measure of the average count of predicting the correct class in contrast to any other class. The higher the margin score the more robust the model. Metrics. We compute precision P, recall R and F1 score for the relevant class. For example, precision is the number of news-entity pairs we correctly labeled as relevant compared to our ground truth divided by the number of all news-entity pairs we labeled as relevant. The following results measure the effectiveness of our approach in three main aspects: (i) overall performance of INLINEFORM0 and comparison to baselines, (ii) robustness across the years, and (iii) optimal model for the AEP placement task. Performance. Figure FIGREF55 shows the results for the years 2009 and 2013, where we optimized the learning objective with instances from year INLINEFORM0 and evaluate on the years INLINEFORM1 (see Section SECREF46 ). The results show the precision–recall curve. The red curve shows baseline B1 BIBREF11 , and the blue one shows the performance of INLINEFORM2 . The curve shows for varying confidence scores (high to low) the precision on labeling the pair INLINEFORM3 as `relevant'. In addition, at each confidence score we can compute the corresponding recall for the `relevant' label. For high confidence scores on labeling the news-entity pairs, the baseline B1 achieves on average a precision score of P=0.50, while INLINEFORM4 has P=0.93. We note that with the drop in the confidence score the corresponding precision and recall values drop too, and the overall F1 score for B1 is around F1=0.2, in contrast we achieve an average score of F1=0.67. It is evident from Figure FIGREF55 that for the years 2009 and 2013, INLINEFORM0 significantly outperforms the baseline B1. We measure the significance through the t-test statistic and get a p-value of INLINEFORM1 . The improvement we achieve over B1 in absolute numbers, INLINEFORM2 P=+0.5 in terms of precision for the years between 2009 and 2014, and a similar improvement in terms of F1 score. The improvement for recall is INLINEFORM3 R=+0.4. The relative improvement over B1 for P and F1 is almost 1.8 times better, while for recall we are 3.5 times better. In Table TABREF58 we show the overall scores for the evaluation metrics for B1 and INLINEFORM4 . Finally, for B2 we achieve much poorer performance, with average scores of P=0.21, R=0.20 and F1=0.21. Robustness. 
In Table TABREF58 , we show the overall performance for the years between 2009 and 2013. An interesting observation we make is that we have a very robust performance and the results are stable across the years. If we consider the experimental setup, where for year INLINEFORM0 we optimize the learning objective with only 74k training instances and evaluate on the rest of the instances, it achieves a very good performance. We predict with F1=0.68 the remaining 469k instances for the years INLINEFORM1 . The results are particularly promising considering the fact that the distribution between our two classes is highly skewed. On average the number of `relevant' pairs account for only around INLINEFORM0 of all pairs. A good indicator to support such a statement is the kappa (denoted by INLINEFORM1 ) statistic. INLINEFORM2 measures agreement between the algorithm and the gold standard on both labels while correcting for chance agreement (often expected due to extreme distributions). The INLINEFORM3 scores for B1 across the years is on average INLINEFORM4 , while for INLINEFORM5 we achieve a score of INLINEFORM6 (the maximum score for INLINEFORM7 is 1). In Figure FIGREF60 we show the impact of the individual feature groups that contribute to the superior performance in comparison to the baselines. Relative entity frequency from the salience feature, models the entity salience as an exponentially decaying function based on the positional index of the paragraph where the entity appears. The performance of INLINEFORM0 with relative entity frequency from the salience feature group is close to that of all the features combined. The authority and novelty features account to a further improvement in terms of precision, by adding roughly a 7%-10% increase. However, if both feature groups are considered separately, they significantly outperform the baseline B1. Query: What features are used to represent the salience and relative authority of entities? JSON Response:
{"answer": "Salience features positional features, occurrence frequency and the internal POS structure of the entity and the sentence it occurs in.\nThe relative authority of entity features: comparative relevance of the news article to the different entities occurring in it.", "evidence": ["Baseline Features. As discussed in Section SECREF2 , a variety of features that measure salience of an entity in text are available from the NLP community. We reimplemented the ones in Dunietz and Gillick BIBREF11 . This includes a variety of features, e.g. positional features, occurrence frequency and the internal POS structure of the entity and the sentence it occurs in. Table 2 in BIBREF11 gives details."]}
science.qasper_abstractive_qa.122.0
[ { "content": "Examine the provided sections from a research document and address the accompanying query. The text is divided by newlines.\n\nDocument Content:\n\nAutomated News Suggestions for Populating Wikipedia Entity Pages\n\nWikipedia entity pages are a valuable source of information for direct consumption and for knowledge-base construction, update and maintenance. Facts in these entity pages are typically supported by references. Recent studies show that as much as 20\\% of the references are from online news sources. However, many entity pages are incomplete even if relevant information is already available in existing news articles. Even for the already present references, there is often a delay between the news article publication time and the reference time. In this work, we therefore look at Wikipedia through the lens of news and propose a novel news-article suggestion task to improve news coverage in Wikipedia, and reduce the lag of newsworthy references. Our work finds direct application, as a precursor, to Wikipedia page generation and knowledge-base acceleration tasks that rely on relevant and high quality input sources. We propose a two-stage supervised approach for suggesting news articles to entity pages for a given state of Wikipedia. First, we suggest news articles to Wikipedia entities (article-entity placement) relying on a rich set of features which take into account the \\emph{salience} and \\emph{relative authority} of entities, and the \\emph{novelty} of news articles to entity pages. Second, we determine the exact section in the entity page for the input article (article-section placement) guided by class-based section templates. We perform an extensive evaluation of our approach based on ground-truth data that is extracted from external references in Wikipedia. We achieve a high precision value of up to 93\\% in the \\emph{article-entity} suggestion stage and upto 84\\% for the \\emph{article-section placement}. Finally, we compare our approach against competitive baselines and show significant improvements.\n\nArticle–Entity Placement\nIn this step we learn the function INLINEFORM0 to correctly determine whether INLINEFORM1 should be suggested for INLINEFORM2 , basically a binary classification model (0=`non-relevant' and 1=`relevant'). Note that we are mainly interested in finding the relevant pairs in this task. For every news article, the number of disambiguated entities is around 30 (but INLINEFORM3 is suggested for only two of them on average). Therefore, the distribution of `non-relevant' and `relevant' pairs is skewed towards the earlier, and by simply choosing the `non-relevant' label we can achieve a high accuracy for INLINEFORM4 . Finding the relevant pairs is therefore a considerable challenge.\nAn article INLINEFORM0 is suggested to INLINEFORM1 by our function INLINEFORM2 if it fulfills the following properties. The entity INLINEFORM3 is salient in INLINEFORM4 (a central concept), therefore ensuring that INLINEFORM5 is about INLINEFORM6 and that INLINEFORM7 is important for INLINEFORM8 . Next, given the fact there might be many articles in which INLINEFORM9 is salient, we also look at the reverse property, namely whether INLINEFORM10 is important for INLINEFORM11 . We do this by comparing the authority of INLINEFORM12 (which is a measure of popularity of an entity, such as its frequency of mention in a whole corpus) with the authority of its co-occurring entities in INLINEFORM13 , leading to a feature we call relative authority. 
The intuition is that for an entity that has overall lower authority than its co-occurring entities, a news article is more easily of importance. Finally, if the article we are about to suggest is already covered in the entity profile INLINEFORM14 , we do not wish to suggest redundant information, hence the novelty. Therefore, the learning objective of INLINEFORM15 should fulfill the following properties. Table TABREF21 shows a summary of the computed features for INLINEFORM16 .\nSalience: entity INLINEFORM0 should be a salient entity in news article INLINEFORM1 \nRelative Authority: the set of entities INLINEFORM0 with which INLINEFORM1 co-occurs should have higher authority than INLINEFORM2 , making INLINEFORM3 important for INLINEFORM4 \nNovelty: news article INLINEFORM0 should provide novel information for entity INLINEFORM1 taking into account its profile INLINEFORM2 \nBaseline Features. As discussed in Section SECREF2 , a variety of features that measure salience of an entity in text are available from the NLP community. We reimplemented the ones in Dunietz and Gillick BIBREF11 . This includes a variety of features, e.g. positional features, occurrence frequency and the internal POS structure of the entity and the sentence it occurs in. Table 2 in BIBREF11 gives details.\nRelative Entity Frequency. Although frequency of mention and positional features play some role in baseline features, their interaction is not modeled by a single feature nor do the positional features encode more than sentence position. We therefore suggest a novel feature called relative entity frequency, INLINEFORM0 , that has three properties.: (i) It rewards entities for occurring throughout the text instead of only in some parts of the text, measured by the number of paragraphs it occurs in (ii) it rewards entities that occur more frequently in the opening paragraphs of an article as we model INLINEFORM1 as an exponential decay function. The decay corresponds to the positional index of the news paragraph. This is inspired by the news-specific discourse structure that tends to give short summaries of the most important facts and entities in the opening paragraphs. (iii) it compares entity frequency to the frequency of its co-occurring mentions as the weight of an entity appearing in a specific paragraph, normalized by the sum of the frequencies of other entities in INLINEFORM2 . DISPLAYFORM0 \nwhere, INLINEFORM0 represents a news paragraph from INLINEFORM1 , and with INLINEFORM2 we indicate the set of all paragraphs in INLINEFORM3 . The frequency of INLINEFORM4 in a paragraph INLINEFORM5 is denoted by INLINEFORM6 . With INLINEFORM7 and INLINEFORM8 we indicate the number of paragraphs in which entity INLINEFORM9 occurs, and the total number of paragraphs, respectively.\nRelative Authority. In this case, we consider the comparative relevance of the news article to the different entities occurring in it. As an example, let us consider the meeting of the Sudanese bishop Elias Taban with Hillary Clinton. Both entities are salient for the meeting. However, in Taban's Wikipedia page, this meeting is discussed prominently with a corresponding news reference, whereas in Hillary Clinton's Wikipedia page it is not reported at all. 
We believe this is not just an omission in Clinton's page but mirrors the fact that for the lesser known Taban the meeting is big news whereas for the more famous Clinton these kind of meetings are a regular occurrence, not all of which can be reported in what is supposed to be a selection of the most important events for her. Therefore, if two entities co-occur, the news is more relevant for the entity with the lower a priori authority.\nThe a priori authority of an entity (denoted by INLINEFORM0 ) can be measured in several ways. We opt for two approaches: (i) probability of entity INLINEFORM1 occurring in the corpus INLINEFORM2 , and (ii) authority assessed through centrality measures like PageRank BIBREF16 . For the second case we construct the graph INLINEFORM3 consisting of entities in INLINEFORM4 and news articles in INLINEFORM5 as vertices. The edges are established between INLINEFORM6 and entities in INLINEFORM7 , that is INLINEFORM8 , and the out-links from INLINEFORM9 , that is INLINEFORM10 (arrows present the edge direction).\nStarting from a priori authority, we proceed to relative authority by comparing the a priori authority of co-occurring entities in INLINEFORM0 . We define the relative authority of INLINEFORM1 as the proportion of co-occurring entities INLINEFORM2 that have a higher a priori authority than INLINEFORM3 (see Equation EQREF28 . DISPLAYFORM0 \nAs we might run the danger of not suggesting any news articles for entities with very high a priori authority (such as Clinton) due to the strict inequality constraint, we can relax the constraint such that the authority of co-occurring entities is above a certain threshold.\nNews Domain Authority. The news domain authority addresses two main aspects. Firstly, if bundled together with the relative authority feature, we can ensure that dependent on the entity authority, we suggest news from authoritative sources, hence ensuring the quality of suggested articles. The second aspect is in a news streaming scenario where multiple news domains report the same event — ideally only articles coming from authoritative sources would fulfill the conditions for the news suggestion task.\nThe news domain authority is computed based on the number of news references in Wikipedia coming from a particular news domain INLINEFORM0 . This represents a simple prior that a news article INLINEFORM1 is from domain INLINEFORM2 in corpus INLINEFORM3 . We extract the domains by taking the base URLs from the news article URLs.\nAn important feature when suggesting an article INLINEFORM0 to an entity INLINEFORM1 is the novelty of INLINEFORM2 w.r.t the already existing entity profile INLINEFORM3 . Studies BIBREF17 have shown that on comparable collections to ours (TREC GOV2) the number of duplicates can go up to INLINEFORM4 . This figure is likely higher for major events concerning highly authoritative entities on which all news media will report.\nGiven an entity INLINEFORM0 and the already added news references INLINEFORM1 up to year INLINEFORM2 , the novelty of INLINEFORM3 at year INLINEFORM4 is measured by the KL divergence between the language model of INLINEFORM5 and articles in INLINEFORM6 . We combine this measure with the entity overlap of INLINEFORM7 and INLINEFORM8 . The novelty value of INLINEFORM9 is given by the minimal divergence value. Low scores indicate low novelty for the entity profile INLINEFORM10 .\nN(n|e) = n'Nt-1{DKL((n') || (n)) + DKL((N) || (n)).\nDKL((n') || (n)). 
(1-) jaccard((n'),(n))} where INLINEFORM0 is the KL divergence of the language models ( INLINEFORM1 and INLINEFORM2 ), whereas INLINEFORM3 is the mixing weight ( INLINEFORM4 ) between the language models INLINEFORM5 and the entity overlap in INLINEFORM6 and INLINEFORM7 .\nHere we introduce the evaluation setup and analyze the results for the article–entity (AEP) placement task. We only report the evaluation metrics for the `relevant' news-entity pairs. A detailed explanation on why we focus on the `relevant' pairs is provided in Section SECREF16 .\nBaselines. We consider the following baselines for this task.\nB1. The first baseline uses only the salience-based features by Dunietz and Gillick BIBREF11 .\nB2. The second baseline assigns the value relevant to a pair INLINEFORM0 , if and only if INLINEFORM1 appears in the title of INLINEFORM2 .\nLearning Models. We use Random Forests (RF) BIBREF23 . We learn the RF on all computed features in Table TABREF21 . The optimization on RF is done by splitting the feature space into multiple trees that are considered as ensemble classifiers. Consequently, for each classifier it computes the margin function as a measure of the average count of predicting the correct class in contrast to any other class. The higher the margin score the more robust the model.\nMetrics. We compute precision P, recall R and F1 score for the relevant class. For example, precision is the number of news-entity pairs we correctly labeled as relevant compared to our ground truth divided by the number of all news-entity pairs we labeled as relevant.\nThe following results measure the effectiveness of our approach in three main aspects: (i) overall performance of INLINEFORM0 and comparison to baselines, (ii) robustness across the years, and (iii) optimal model for the AEP placement task.\nPerformance. Figure FIGREF55 shows the results for the years 2009 and 2013, where we optimized the learning objective with instances from year INLINEFORM0 and evaluate on the years INLINEFORM1 (see Section SECREF46 ). The results show the precision–recall curve. The red curve shows baseline B1 BIBREF11 , and the blue one shows the performance of INLINEFORM2 . The curve shows for varying confidence scores (high to low) the precision on labeling the pair INLINEFORM3 as `relevant'. In addition, at each confidence score we can compute the corresponding recall for the `relevant' label. For high confidence scores on labeling the news-entity pairs, the baseline B1 achieves on average a precision score of P=0.50, while INLINEFORM4 has P=0.93. We note that with the drop in the confidence score the corresponding precision and recall values drop too, and the overall F1 score for B1 is around F1=0.2, in contrast we achieve an average score of F1=0.67.\nIt is evident from Figure FIGREF55 that for the years 2009 and 2013, INLINEFORM0 significantly outperforms the baseline B1. We measure the significance through the t-test statistic and get a p-value of INLINEFORM1 . The improvement we achieve over B1 in absolute numbers, INLINEFORM2 P=+0.5 in terms of precision for the years between 2009 and 2014, and a similar improvement in terms of F1 score. The improvement for recall is INLINEFORM3 R=+0.4. The relative improvement over B1 for P and F1 is almost 1.8 times better, while for recall we are 3.5 times better. In Table TABREF58 we show the overall scores for the evaluation metrics for B1 and INLINEFORM4 . 
Finally, for B2 we achieve much poorer performance, with average scores of P=0.21, R=0.20 and F1=0.21.\nRobustness. In Table TABREF58 , we show the overall performance for the years between 2009 and 2013. An interesting observation we make is that we have a very robust performance and the results are stable across the years. If we consider the experimental setup, where for year INLINEFORM0 we optimize the learning objective with only 74k training instances and evaluate on the rest of the instances, it achieves a very good performance. We predict with F1=0.68 the remaining 469k instances for the years INLINEFORM1 .\nThe results are particularly promising considering the fact that the distribution between our two classes is highly skewed. On average the number of `relevant' pairs account for only around INLINEFORM0 of all pairs. A good indicator to support such a statement is the kappa (denoted by INLINEFORM1 ) statistic. INLINEFORM2 measures agreement between the algorithm and the gold standard on both labels while correcting for chance agreement (often expected due to extreme distributions). The INLINEFORM3 scores for B1 across the years is on average INLINEFORM4 , while for INLINEFORM5 we achieve a score of INLINEFORM6 (the maximum score for INLINEFORM7 is 1).\nIn Figure FIGREF60 we show the impact of the individual feature groups that contribute to the superior performance in comparison to the baselines. Relative entity frequency from the salience feature, models the entity salience as an exponentially decaying function based on the positional index of the paragraph where the entity appears. The performance of INLINEFORM0 with relative entity frequency from the salience feature group is close to that of all the features combined. The authority and novelty features account to a further improvement in terms of precision, by adding roughly a 7%-10% increase. However, if both feature groups are considered separately, they significantly outperform the baseline B1.\n\nQuery: What features are used to represent the salience and relative authority of entities?\n\nJSON Response:", "role": "user" }, { "content": "{\"answer\": \"Salience features positional features, occurrence frequency and the internal POS structure of the entity and the sentence it occurs in.\\nThe relative authority of entity features: comparative relevance of the news article to the different entities occurring in it.\", \"evidence\": [\"Baseline Features. As discussed in Section SECREF2 , a variety of features that measure salience of an entity in text are available from the NLP community. We reimplemented the ones in Dunietz and Gillick BIBREF11 . This includes a variety of features, e.g. positional features, occurrence frequency and the internal POS structure of the entity and the sentence it occurs in. Table 2 in BIBREF11 gives details.\"]}", "role": "assistant" } ]
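Two of the article–entity features described in the passage above, relative authority and novelty, are simple enough to sketch in a few lines of Python. The function names, the smoothing constant, the mixing weight, and the exact way the KL and Jaccard terms are combined are assumptions of this sketch — the novelty equation itself did not survive extraction — so treat it as an illustration of the described intent rather than the paper's formula.

```python
import math

def relative_authority(entity, co_occurring, prior, slack=0.0):
    """Fraction of co-occurring entities whose a priori authority (corpus
    probability or PageRank score) is higher than that of `entity`;
    slack > 0 mimics the relaxed threshold mentioned for very prominent entities."""
    if not co_occurring:
        return 0.0
    higher = sum(1 for e in co_occurring
                 if prior.get(e, 0.0) > prior.get(entity, 0.0) - slack)
    return higher / len(co_occurring)

def kl_divergence(p, q, eps=1e-9):
    # Smoothed KL divergence between two unigram language models (dicts of word -> prob).
    keys = set(p) | set(q)
    return sum(p.get(w, eps) * math.log(p.get(w, eps) / q.get(w, eps)) for w in keys)

def novelty(candidate_lm, candidate_ents, profile, mix=0.5):
    """Minimum mixed divergence of a candidate article against the articles already
    referenced in the entity profile; `profile` is a list of (language_model, entity_set)
    pairs. Using (1 - Jaccard) makes high entity overlap lower the score, matching
    'low scores indicate low novelty'; the mixing weight 0.5 is arbitrary."""
    best = float("inf")
    for lm, ents in profile:
        jac = len(candidate_ents & ents) / max(len(candidate_ents | ents), 1)
        best = min(best, mix * kl_divergence(lm, candidate_lm) + (1.0 - mix) * (1.0 - jac))
    return best

authority = {"Hillary_Clinton": 0.9, "Elias_Taban": 0.02, "South_Sudan": 0.3}
print(relative_authority("Elias_Taban", ["Hillary_Clinton", "South_Sudan"], authority))   # 1.0
print(relative_authority("Hillary_Clinton", ["Elias_Taban", "South_Sudan"], authority))   # 0.0

profile = [({"meeting": 0.5, "clinton": 0.5}, {"Hillary_Clinton"})]
print(novelty({"meeting": 0.4, "taban": 0.6}, {"Elias_Taban", "Hillary_Clinton"}, profile))
```

The authority example reproduces the Taban/Clinton intuition from the passage: the lesser-known entity gets relative authority 1.0, the famous one 0.0.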
science.mup_single_document_summarization
Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise. Title of the Paper: Gotta Go Fast When Generating Data with Score-Based Models Main Content: Score-based ( denoising diffusion ) generative models have recently gained a lot of success in generating realistic and diverse data . These approaches define a forward diffusion process for transforming data to noise and generate data by reversing it ( thereby going from noise to data ) . Unfortunately , current score-based models generate data very slowly due to the sheer number of score network evaluations required by numerical SDE solvers . In this work , we aim to accelerate this process by devising a more efficient SDE solver . Existing approaches rely on the Euler-Maruyama ( EM ) solver , which uses a fixed step size . We found that naively replacing it with other SDE solvers fares poorly - they either result in low-quality samples or become slower than EM . To get around this issue , we carefully devise an SDE solver with adaptive step sizes tailored to score-based generative models piece by piece . Our solver requires only two score function evaluations per step , rarely rejects samples , and leads to high-quality samples . Our approach generates data 2 to 10 times faster than EM while achieving better or equal sample quality . For high-resolution images , our method leads to significantly higher quality samples than all other methods tested . Our SDE solver has the benefit of requiring no step size tuning . 1 INTRODUCTION . Score-based generative models ( Song and Ermon , 2019 ; 2020 ; Ho et al. , 2020 ; Jolicoeur-Martineau et al. , 2020 ; Song et al. , 2020a ; Piché-Taillefer , 2021 ) have been very successful at generating data from various modalities , such as images ( Ho et al. , 2020 ; Song et al. , 2020a ) , audio ( Chen et al. , 2020 ; Kong et al. , 2020 ; Mittal et al. , 2021 ; Kameoka et al. , 2020 ) , and graphs ( Niu et al. , 2020 ) . They have further been used effectively for super-resolution ( Saharia et al. , 2021 ; Kadkhodaie and Simoncelli , 2020 ) , inpainting ( Kadkhodaie and Simoncelli , 2020 ) , source separation ( Jayaram and Thickstun , 2020 ) , and image-to-image translation ( Sasaki et al. , 2021 ) . In most of these applications , scorebased models achieved superior performances in terms of quality and diversity than the historically dominant Generative Adversarial Networks ( GANs ) ( Goodfellow et al. , 2014 ) . Score-based models can be understood in two main classes : those based on a Variance Exploding ( VE ) diffusion process ( Song and Ermon , 2019 ) and those based on a Variance Preserving ( VP ) one ( Ho et al. , 2020 ) . Both diffusion processes progressively transform real data into Gaussian noise ; N ( 0 , σ2maxI ) for VE where σ2max is very large , and N ( 0 , I ) for VP . The diffusion process ( VE , VP , etc . ) is then reversed in order to generate real data from Gaussian noise . Reversing the process requires the score function , which is estimated with a neural network ( known as a score network ) . Although very powerful , score-based models generate data through an undesirably long iterative process ; meanwhile , other state-of-the-art methods such as GANs generate data from a single forward pass of a neural network . Increasing the speed of the generative process is thus an active area of research . Chen et al . ( 2020 ) and San-Roman et al . 
( 2021 ) proposed faster step size schedules for VP diffusions that still yield relatively good quality/diversity metrics . Although fast , these schedules are arbitrary , require careful tuning , and the optimal schedules will vary from one model to another . Block et al . ( 2020 ) proposed generating data progressively from low to high-resolution images and show that the scheme improves speed . Similarly , Nichol and Dhariwal ( 2021 ) proposed generating low-resolution images and then upscale them since generating low-resolution images is quicker . They further suggested to accelerate VP-based models by learning dimension-specific noise rather than assuming equal noise everywhere . Note that these methods do not affect the data generation algorithm and would thus be complementary to our methods . Song et al . ( 2020a ) and Song et al . ( 2020b ) proposed removing the noise from the data generation algorithm and solve an Ordinary Differential Equation ( ODE ) rather than a Stochastic Differential Equation ( SDE ) ; they report being able to converge much faster when there is no noise . Although it improves the generation speed , Song et al . ( 2020a ) report obtaining lower-quality images when using the ODE formulation for the VE process ( Song et al. , 2020a ) . We will later show that our SDE solver generally leads to better results than ODE solvers at similar speeds . Thus , existing methods for acceleration often require considerable step size/schedule tuning ( this is also true for the baseline approach ) and do not always work for both VE and VP processes . To improve speed and remove the need for step size/schedule tuning , we propose to solve the reverse diffusion process using SDE solvers with adaptive step sizes . It turns out that off-the-shelf SDE solvers are ill-suited for generative modeling and exhibit either ( 1 ) divergence , ( 2 ) slower data generation than the baseline , or ( 3 ) significantly worse quality than the baseline ( see Appendix A ) . This can be attributed to distinct features of the SDEs that arise in score-based generative models that set them apart from the SDEs traditionally considered in the numerical SDE solver literature , namely : ( 1 ) the codomain of the unknown function is extremely high-dimensional , especially in the case of image generation ; ( 2 ) evaluating the score function is computationally expensive , requiring a forward pass of a large mini-batch through a large neural network ; ( 3 ) the required precision of the solution is smaller than usual because we are satisfied as long as the error is not perceptible ( e.g. , one RGB increment on an image ) . We devise our own SDE solver with these features in mind , resulting in an algorithm that can get around the problems encountered by off-the-shelf solvers . To address high dimensionality , we use the ` 2 norm rather than the ` ∞ norm to measure the error across different dimensions to prevent a single pixel from slowing down the solver . To address the cost of score function evaluations while still obtaining high precision , we ( 1 ) take the minimum number of score function evaluations needed for adaptive step sizes ( two evaluations ) , and ( 2 ) use extrapolation to get high precision at no extra cost . To take advantage of the reduced requirement for precision , we set the absolute tolerance for the error according to the range of RGB values . 
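The design choices listed above (an l2 rather than l-infinity error norm, and an absolute tolerance on the scale of one RGB increment) are the usual ingredients of an adaptive step-size controller. The sketch below is a generic version of that acceptance test and step update, not the paper's exact algorithm; the tolerance values, safety factor, and clipping bounds are placeholders.

```python
import numpy as np

def step_error(x_low, x_high, x_prev, atol=0.0078, rtol=0.05):
    """Scaled l2 error between a lower- and higher-order proposal.
    atol ~ one RGB increment for images scaled to [-1, 1] (placeholder value)."""
    scale = atol + rtol * np.maximum(np.abs(x_prev), np.abs(x_low))
    return float(np.sqrt(np.mean(((x_high - x_low) / scale) ** 2)))

def adapt_step(h, err, safety=0.9, order=1.0):
    """Accept the step if err <= 1 and rescale h as in standard adaptive solvers."""
    accept = err <= 1.0
    h_new = h * min(2.0, max(0.2, safety * err ** (-1.0 / (order + 1.0))))
    return accept, h_new

x_prev = np.zeros(4)
x_low = np.array([0.010, 0.0, 0.0, 0.0])
x_high = np.array([0.012, 0.0, 0.0, 0.0])
err = step_error(x_low, x_high, x_prev)
print(adapt_step(0.1, err))
```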
Our main contribution is a new SDE solver tailored to score-based generative models with the following benefits : • Our solver is much faster than the baseline methods , i.e . reverse-diffusion method with Langevin dynamics and Euler-Maruyama ( EM ) ; • It yields higher quality/diversity samples than EM when using the same computing budget ; • It does not require any step size or schedule tuning ; • It can be used to quickly solve any type of diffusion process ( e.g. , VE , VP ) 2 BACKGROUND . 2.1 SCORE-BASED MODELING WITH SDES . Let x ( 0 ) ∈ Rd be a sample from the data distribution pdata . The sample is gradually corrupted over time through a Forward Diffusion Process ( FDP ) , a common type of Stochastic Differential Equation ( SDE ) : dx = f ( x , t ) dt+ g ( t ) dw , ( 1 ) where f ( x , t ) : Rd × R→ Rd is the drift , g ( t ) : R→ R is the diffusion coefficient and w ( t ) is the Wiener process indexed by t ∈ [ 0 , 1 ] . Data points and their probability distribution evolve along the trajectories { x ( t ) } 1t=0 and { pt ( x ) } 1t=0 respectively , with p0 ≡ pdata . The functions f and g are chosen such that x ( 1 ) be approximately Gaussian and independent from x ( 0 ) . Inference is achieved by reversing this diffusion , drawing x ( 1 ) from its Gaussian distribution and solving the Reverse Diffusion Process ( RDP ) equal to : dx = [ f ( x , t ) − g ( t ) 2∇x log pt ( x ) ] dt+ g ( t ) dw̄ , ( 2 ) where∇x log pt ( x ) is referred to as the score of the distribution at time t ( Hyvärinen , 2005 ) and w̄ ( t ) is the Wiener process in which time flows backward ( Anderson , 1982 ) . One can observe from Equation 2 that the RDP requires knowledge of the score ( or pt ) , which we do not have access to . Fortunately , it can be estimated by a neural network ( referred to as the score network ) by optimizing the following objective : L ( θ ) = Ex ( t ) ∼p ( x ( t ) |x ( 0 ) ) , x ( 0 ) ∼pdata [ λ ( t ) 2 ∥∥sθ ( x ( t ) , t ) −∇x ( t ) log pt ( x ( t ) |x ( 0 ) ) ∥∥22 ] , ( 3 ) where λ ( t ) : R→ R is a weighting function generally chosen to be inversely proportional to : E [ ∥∥∇x ( t ) log pt ( x ( t ) |x ( 0 ) ) ∥∥22 ] . One can demonstrate that the minimizer of that objective θ∗ will be such that sθ∗ ( x , t ) = ∇x log pt ( x ) ( Vincent , 2011 ) , allowing us to approximate the reverse diffusion process . As can be seen , evaluating the objective requires the ability to generate samples from the FDP at arbitrary times t. Thankfully , as long as the drift is affine ( i.e. , f ( x , t ) = Ax + B ) , the transition kernel p ( x ( t ) |x ( 0 ) ) will always be normally distributed ( Särkkä and Solin , 2019 ) , which means that we can solve the forward diffusion in a single step . Furthermore , the score of the Gaussian transition kernel is trivial to compute , making the loss an inexpensive training objective . There are two primary choices for the FDP in the literature , which we discuss below . 2.2 VARIANCE EXPLODING ( VE ) PROCESS . The Variance Exploding ( VE ) process consists in the following FDP : dx = √ d [ σ2 ( t ) ] dt dw . Its associated transition kernel is : x ( t ) |x ( 0 ) ∼ N ( x ( 0 ) , [ σ2 ( t ) − σ2 ( 0 ) ] I ) ≈ N ( x ( 0 ) , σ2 ( t ) I ) . In practice , we let σ ( t ) = σmin ( σmax σmin ) t , where σmin = 0.01 and σmax ≈ maxi ∑N j=1 ||x ( i ) − x ( j ) || is the maximum Euclidean distance between two samples from the dataset { x ( i ) } Ni=1 ( Song and Ermon , 2020 ) . 
Using the maximum Euclidean distance ensures that x ( 1 ) does not depend on x ( 0 ) ; thus , x ( 1 ) is approximately distributed as N ( 0 , σ2 ( 1 ) I ) .
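To make the VE process and the baseline Euler-Maruyama reverse-time solver discussed above concrete, here is a small NumPy sketch. It follows the definitions in this excerpt (geometric schedule sigma(t) = sigma_min (sigma_max / sigma_min)^t, zero drift for VE, reverse SDE dx = [f - g^2 score] dt + g dw-bar). sigma_max = 50 and the plugged-in analytic score are placeholders for illustration only; a real sampler would use a trained score network.

```python
import numpy as np

def ve_sigma(t, sigma_min=0.01, sigma_max=50.0):
    # sigma(t) = sigma_min * (sigma_max / sigma_min)^t  (geometric schedule)
    return sigma_min * (sigma_max / sigma_min) ** t

def ve_g(t, sigma_min=0.01, sigma_max=50.0):
    # Diffusion coefficient g(t) = sqrt(d sigma^2(t) / dt) for the VE SDE.
    return ve_sigma(t, sigma_min, sigma_max) * np.sqrt(2.0 * np.log(sigma_max / sigma_min))

def em_reverse_sampler(score_fn, shape, n_steps=1000, rng=np.random.default_rng(0)):
    """Baseline Euler-Maruyama integration of the reverse VE SDE,
    dx = -g(t)^2 * score(x, t) dt + g(t) dw_bar, integrated from t = 1 to t = 0."""
    t = 1.0
    dt = -1.0 / n_steps
    x = ve_sigma(1.0) * rng.standard_normal(shape)   # x(1) ~ N(0, sigma(1)^2 I)
    for _ in range(n_steps):
        g = ve_g(t)
        drift = -(g ** 2) * score_fn(x, t)            # f(x, t) = 0 for VE
        x = x + drift * dt + g * np.sqrt(-dt) * rng.standard_normal(shape)
        t += dt
    return x

# Purely to exercise the loop, plug in the analytic score of data distributed
# as a standard Gaussian: score(x, t) = -x / (1 + sigma(t)^2).
samples = em_reverse_sampler(lambda x, t: -x / (1.0 + ve_sigma(t) ** 2), shape=(4, 2))
print(samples.shape)
```

This fixed-step loop is the quadratic-cost baseline the paper sets out to beat; its adaptive solver replaces the constant dt with an error-controlled step size.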
The paper presents a new SDE solver for the reverse process in score-based models. The solver is fast, produces high-quality samples, and avoids some step-size tuning. The paper provides theoretical analysis of the stability and bias of the algorithm, and includes experiments comparing it to several baseline methods.
science.mup_single_document_summarization.3407
[ { "content": "Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.\n\nTitle of the Paper:\nGotta Go Fast When Generating Data with Score-Based Models\n\nMain Content:\nScore-based ( denoising diffusion ) generative models have recently gained a lot of success in generating realistic and diverse data . These approaches define a forward diffusion process for transforming data to noise and generate data by reversing it ( thereby going from noise to data ) . Unfortunately , current score-based models generate data very slowly due to the sheer number of score network evaluations required by numerical SDE solvers . In this work , we aim to accelerate this process by devising a more efficient SDE solver . Existing approaches rely on the Euler-Maruyama ( EM ) solver , which uses a fixed step size . We found that naively replacing it with other SDE solvers fares poorly - they either result in low-quality samples or become slower than EM . To get around this issue , we carefully devise an SDE solver with adaptive step sizes tailored to score-based generative models piece by piece . Our solver requires only two score function evaluations per step , rarely rejects samples , and leads to high-quality samples . Our approach generates data 2 to 10 times faster than EM while achieving better or equal sample quality . For high-resolution images , our method leads to significantly higher quality samples than all other methods tested . Our SDE solver has the benefit of requiring no step size tuning . 1 INTRODUCTION . Score-based generative models ( Song and Ermon , 2019 ; 2020 ; Ho et al. , 2020 ; Jolicoeur-Martineau et al. , 2020 ; Song et al. , 2020a ; Piché-Taillefer , 2021 ) have been very successful at generating data from various modalities , such as images ( Ho et al. , 2020 ; Song et al. , 2020a ) , audio ( Chen et al. , 2020 ; Kong et al. , 2020 ; Mittal et al. , 2021 ; Kameoka et al. , 2020 ) , and graphs ( Niu et al. , 2020 ) . They have further been used effectively for super-resolution ( Saharia et al. , 2021 ; Kadkhodaie and Simoncelli , 2020 ) , inpainting ( Kadkhodaie and Simoncelli , 2020 ) , source separation ( Jayaram and Thickstun , 2020 ) , and image-to-image translation ( Sasaki et al. , 2021 ) . In most of these applications , scorebased models achieved superior performances in terms of quality and diversity than the historically dominant Generative Adversarial Networks ( GANs ) ( Goodfellow et al. , 2014 ) . Score-based models can be understood in two main classes : those based on a Variance Exploding ( VE ) diffusion process ( Song and Ermon , 2019 ) and those based on a Variance Preserving ( VP ) one ( Ho et al. , 2020 ) . Both diffusion processes progressively transform real data into Gaussian noise ; N ( 0 , σ2maxI ) for VE where σ2max is very large , and N ( 0 , I ) for VP . The diffusion process ( VE , VP , etc . ) is then reversed in order to generate real data from Gaussian noise . Reversing the process requires the score function , which is estimated with a neural network ( known as a score network ) . Although very powerful , score-based models generate data through an undesirably long iterative process ; meanwhile , other state-of-the-art methods such as GANs generate data from a single forward pass of a neural network . Increasing the speed of the generative process is thus an active area of research . Chen et al . ( 2020 ) and San-Roman et al . 
( 2021 ) proposed faster step size schedules for VP diffusions that still yield relatively good quality/diversity metrics . Although fast , these schedules are arbitrary , require careful tuning , and the optimal schedules will vary from one model to another . Block et al . ( 2020 ) proposed generating data progressively from low to high-resolution images and show that the scheme improves speed . Similarly , Nichol and Dhariwal ( 2021 ) proposed generating low-resolution images and then upscale them since generating low-resolution images is quicker . They further suggested to accelerate VP-based models by learning dimension-specific noise rather than assuming equal noise everywhere . Note that these methods do not affect the data generation algorithm and would thus be complementary to our methods . Song et al . ( 2020a ) and Song et al . ( 2020b ) proposed removing the noise from the data generation algorithm and solve an Ordinary Differential Equation ( ODE ) rather than a Stochastic Differential Equation ( SDE ) ; they report being able to converge much faster when there is no noise . Although it improves the generation speed , Song et al . ( 2020a ) report obtaining lower-quality images when using the ODE formulation for the VE process ( Song et al. , 2020a ) . We will later show that our SDE solver generally leads to better results than ODE solvers at similar speeds . Thus , existing methods for acceleration often require considerable step size/schedule tuning ( this is also true for the baseline approach ) and do not always work for both VE and VP processes . To improve speed and remove the need for step size/schedule tuning , we propose to solve the reverse diffusion process using SDE solvers with adaptive step sizes . It turns out that off-the-shelf SDE solvers are ill-suited for generative modeling and exhibit either ( 1 ) divergence , ( 2 ) slower data generation than the baseline , or ( 3 ) significantly worse quality than the baseline ( see Appendix A ) . This can be attributed to distinct features of the SDEs that arise in score-based generative models that set them apart from the SDEs traditionally considered in the numerical SDE solver literature , namely : ( 1 ) the codomain of the unknown function is extremely high-dimensional , especially in the case of image generation ; ( 2 ) evaluating the score function is computationally expensive , requiring a forward pass of a large mini-batch through a large neural network ; ( 3 ) the required precision of the solution is smaller than usual because we are satisfied as long as the error is not perceptible ( e.g. , one RGB increment on an image ) . We devise our own SDE solver with these features in mind , resulting in an algorithm that can get around the problems encountered by off-the-shelf solvers . To address high dimensionality , we use the ` 2 norm rather than the ` ∞ norm to measure the error across different dimensions to prevent a single pixel from slowing down the solver . To address the cost of score function evaluations while still obtaining high precision , we ( 1 ) take the minimum number of score function evaluations needed for adaptive step sizes ( two evaluations ) , and ( 2 ) use extrapolation to get high precision at no extra cost . To take advantage of the reduced requirement for precision , we set the absolute tolerance for the error according to the range of RGB values . 
Our main contribution is a new SDE solver tailored to score-based generative models with the following benefits : • Our solver is much faster than the baseline methods , i.e . reverse-diffusion method with Langevin dynamics and Euler-Maruyama ( EM ) ; • It yields higher quality/diversity samples than EM when using the same computing budget ; • It does not require any step size or schedule tuning ; • It can be used to quickly solve any type of diffusion process ( e.g. , VE , VP ) 2 BACKGROUND . 2.1 SCORE-BASED MODELING WITH SDES . Let x ( 0 ) ∈ Rd be a sample from the data distribution pdata . The sample is gradually corrupted over time through a Forward Diffusion Process ( FDP ) , a common type of Stochastic Differential Equation ( SDE ) : dx = f ( x , t ) dt+ g ( t ) dw , ( 1 ) where f ( x , t ) : Rd × R→ Rd is the drift , g ( t ) : R→ R is the diffusion coefficient and w ( t ) is the Wiener process indexed by t ∈ [ 0 , 1 ] . Data points and their probability distribution evolve along the trajectories { x ( t ) } 1t=0 and { pt ( x ) } 1t=0 respectively , with p0 ≡ pdata . The functions f and g are chosen such that x ( 1 ) be approximately Gaussian and independent from x ( 0 ) . Inference is achieved by reversing this diffusion , drawing x ( 1 ) from its Gaussian distribution and solving the Reverse Diffusion Process ( RDP ) equal to : dx = [ f ( x , t ) − g ( t ) 2∇x log pt ( x ) ] dt+ g ( t ) dw̄ , ( 2 ) where∇x log pt ( x ) is referred to as the score of the distribution at time t ( Hyvärinen , 2005 ) and w̄ ( t ) is the Wiener process in which time flows backward ( Anderson , 1982 ) . One can observe from Equation 2 that the RDP requires knowledge of the score ( or pt ) , which we do not have access to . Fortunately , it can be estimated by a neural network ( referred to as the score network ) by optimizing the following objective : L ( θ ) = Ex ( t ) ∼p ( x ( t ) |x ( 0 ) ) , x ( 0 ) ∼pdata [ λ ( t ) 2 ∥∥sθ ( x ( t ) , t ) −∇x ( t ) log pt ( x ( t ) |x ( 0 ) ) ∥∥22 ] , ( 3 ) where λ ( t ) : R→ R is a weighting function generally chosen to be inversely proportional to : E [ ∥∥∇x ( t ) log pt ( x ( t ) |x ( 0 ) ) ∥∥22 ] . One can demonstrate that the minimizer of that objective θ∗ will be such that sθ∗ ( x , t ) = ∇x log pt ( x ) ( Vincent , 2011 ) , allowing us to approximate the reverse diffusion process . As can be seen , evaluating the objective requires the ability to generate samples from the FDP at arbitrary times t. Thankfully , as long as the drift is affine ( i.e. , f ( x , t ) = Ax + B ) , the transition kernel p ( x ( t ) |x ( 0 ) ) will always be normally distributed ( Särkkä and Solin , 2019 ) , which means that we can solve the forward diffusion in a single step . Furthermore , the score of the Gaussian transition kernel is trivial to compute , making the loss an inexpensive training objective . There are two primary choices for the FDP in the literature , which we discuss below . 2.2 VARIANCE EXPLODING ( VE ) PROCESS . The Variance Exploding ( VE ) process consists in the following FDP : dx = √ d [ σ2 ( t ) ] dt dw . Its associated transition kernel is : x ( t ) |x ( 0 ) ∼ N ( x ( 0 ) , [ σ2 ( t ) − σ2 ( 0 ) ] I ) ≈ N ( x ( 0 ) , σ2 ( t ) I ) . In practice , we let σ ( t ) = σmin ( σmax σmin ) t , where σmin = 0.01 and σmax ≈ maxi ∑N j=1 ||x ( i ) − x ( j ) || is the maximum Euclidean distance between two samples from the dataset { x ( i ) } Ni=1 ( Song and Ermon , 2020 ) . 
Using the maximum Euclidean distance ensures that x ( 1 ) does not depend on x ( 0 ) ; thus , x ( 1 ) is approximately distributed as N ( 0 , σ2 ( 1 ) I ) .", "role": "user" }, { "content": "The paper presents a new SDE solver for the reverse process in score-based models. The solver is fast, produces high-quality samples, and avoids some step-size tuning. The paper provides theoretical analysis of the stability and bias of the algorithm, and includes experiments comparing it to several baseline methods.", "role": "assistant" } ]
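As a companion to the training objective (Eq. 3) in the excerpt above, the sketch below estimates the denoising score-matching loss for the VE kernel with the common weighting lambda(t) = sigma(t)^2. The schedule constants and the toy analytic score are assumptions for illustration; in practice score_fn would be a neural score network.

```python
import numpy as np

def ve_sigma(t, sigma_min=0.01, sigma_max=50.0):
    # Same geometric VE schedule as in the sampler sketch; sigma_max is a placeholder,
    # the excerpt sets it to roughly the maximum pairwise distance in the dataset.
    return sigma_min * (sigma_max / sigma_min) ** np.asarray(t)

def ve_dsm_loss(score_fn, x0, rng=np.random.default_rng(0)):
    """Monte-Carlo estimate of the denoising score-matching objective for the VE
    kernel. For x(t) | x(0) ~ N(x(0), sigma(t)^2 I), the conditional score is
    -(x(t) - x(0)) / sigma(t)^2, and we weight by lambda(t) = sigma(t)^2."""
    n, d = x0.shape
    t = rng.uniform(1e-5, 1.0, size=n)
    sigma = ve_sigma(t)[:, None]
    xt = x0 + sigma * rng.standard_normal((n, d))
    target = -(xt - x0) / sigma ** 2
    residual = score_fn(xt, t) - target
    return float(np.mean(0.5 * np.sum((sigma * residual) ** 2, axis=1)))

# Toy check with data concentrated at the origin: the optimal score of p_t is then
# approximately -x / sigma(t)^2, so the loss should be near zero.
x0 = np.zeros((16, 3))
print(ve_dsm_loss(lambda x, t: -x / ve_sigma(t)[:, None] ** 2, x0))
```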
science.bioasq_list_qa
Examine the following excerpts from biomedical literature. Your objective is to extract relevant information to answer the posed question. Format your answer as a JSON array. Context: Analyses included correlations and a regression analysis between depressive symptoms and unmet expectations with the Five-Factor Model personality traits (extraversion, agreeableness, conscientiousness, emotional stability, openness to experience) as measured by the Ten-Item Personality Inventory. The questionnaire measures each of the five major facets of personality: openness, extroversion, conscientiousness, agreeableness and neuroticism. DESIGN: Adoptive mothers (N = 136) were surveyed for depressive symptoms using the Center for Epidemiologic Studies-Depression Scale (CES-D) and the Edinburgh Postnatal Depression Scale (EPDS). Analyses included correlations and a regression analysis between depressive symptoms and unmet expectations with the Five-Factor Model personality traits (extraversion, agreeableness, conscientiousness, emotional stability, openness to experience) as measured by the Ten-Item Personality Inventory. Analyses included correlations and a regression analysis between depressive symptoms and unmet expectations with the Five-Factor Model personality traits (extraversion, agreeableness, conscientiousness, emotional stability, openness to experience) as measured by the Ten-Item Personality Inventory. Question: What personality traits can be evaluated with the Ten Item Personality Inventory.
["openness", "extroversion", "conscientiousness", "agreeableness", "neuroticism"]
science.bioasq_list_qa.311
[ { "content": "Examine the following excerpts from biomedical literature. Your objective is to extract relevant information to answer the posed question. Format your answer as a JSON array.\n\nContext: Analyses included correlations and a regression analysis between depressive symptoms and unmet expectations with the Five-Factor Model personality traits (extraversion, agreeableness, conscientiousness, emotional stability, openness to experience) as measured by the Ten-Item Personality Inventory.\nThe questionnaire measures each of the five major facets of personality: openness, extroversion, conscientiousness, agreeableness and neuroticism.\nDESIGN: Adoptive mothers (N = 136) were surveyed for depressive symptoms using the Center for Epidemiologic Studies-Depression Scale (CES-D) and the Edinburgh Postnatal Depression Scale (EPDS). Analyses included correlations and a regression analysis between depressive symptoms and unmet expectations with the Five-Factor Model personality traits (extraversion, agreeableness, conscientiousness, emotional stability, openness to experience) as measured by the Ten-Item Personality Inventory.\nAnalyses included correlations and a regression analysis between depressive symptoms and unmet expectations with the Five-Factor Model personality traits (extraversion, agreeableness, conscientiousness, emotional stability, openness to experience) as measured by the Ten-Item Personality Inventory.\n\nQuestion: What personality traits can be evaluated with the Ten Item Personality Inventory.", "role": "user" }, { "content": "[\"openness\", \"extroversion\", \"conscientiousness\", \"agreeableness\", \"neuroticism\"]", "role": "assistant" } ]
science.mup_single_document_summarization
Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.

Title of the Paper:
Scalable Learning and MAP Inference for Nonsymmetric Determinantal Point Processes

Main Content:
1 INTRODUCTION. Determinantal point processes (DPPs) have proven useful for numerous machine learning tasks. For example, recent uses include summarization (Sharghi et al., 2018), recommender systems (Wilhelm et al., 2018), neural network compression (Mariet & Sra, 2016), kernel approximation (Li et al., 2016), multi-modal output generation (Elfeki et al., 2019), and batch selection, both for stochastic optimization (Zhang et al., 2017) and for active learning (Bıyık et al., 2019). For subset selection problems where the ground set of items to select from has cardinality M, the typical DPP is parameterized by an M × M kernel matrix. Most prior work has been concerned with symmetric DPPs, where the kernel must equal its transpose. However, recent work has considered the more general class of nonsymmetric DPPs (NDPPs) and shown that these have additional useful modeling power (Brunel, 2018; Gartrell et al., 2019). In particular, unlike symmetric DPPs, which can only model negative correlations between items, NDPPs allow modeling of positive correlations, where the presence of item i in the selected set increases the probability that some other item j will also be selected. There are many intuitive examples of how positive correlations can be of practical importance. For example, consider a product recommendation task for a retail website, where a camera is found in a user's shopping cart, and the goal is to display several other items that might be purchased. Relative to an empty cart, the presence of the camera probably increases the probability of buying an accessory like a tripod. Although NDPPs can theoretically model such behavior, the existing approach for NDPP learning and inference (Gartrell et al., 2019) is often impractical in terms of both storage and runtime requirements. These algorithms require memory quadratic in M and time quadratic (for inference) or cubic (for learning) in M; for the not-unusual M of 1 million, this requires storing 8TB-size objects in memory, with runtime millions or billions of times slower than that of a linear-complexity method. In this work, we make the following contributions: Learning: We propose a new decomposition of the NDPP kernel which reduces the storage and runtime requirements of learning and inference to linear in M. Fortuitously, the modified decomposition retains all of the previous decomposition's modeling power, as it covers the same part of the NDPP kernel space. The algebraic manipulations we apply to get linear complexity for this decomposition can not be applied to prior work, meaning that our new decomposition is crucial for scalability. Inference: After learning, prior NDPP work applies a DPP conditioning algorithm to do subset expansion (Gartrell et al., 2019), with quadratic runtime in M. However, prior work does not examine the general problem of MAP inference for NDPPs, i.e., solving the problem of finding the highest-probability subset under a DPP. For symmetric DPPs, there exists a standard greedy MAP inference algorithm that is linear in M. In this work, we develop a version of this algorithm that is also linear for low-rank NDPPs. The low-rank requirement is unique to NDPPs, and highlights the fact that the transformation of the algorithm from the symmetric to the nonsymmetric space is non-trivial. To the best of our knowledge, this is the first MAP algorithm proposed for NDPPs. We combine the above contributions through experiments that involve learning NDPP kernels and applying MAP inference to these kernels to do subset selection for several real-world datasets. These experiments demonstrate that our algorithms are much more scalable, and that the new kernel decomposition matches the predictive performance of the decomposition from prior work.

2 BACKGROUND. Consider a finite set Y = {1, 2, ..., M} of cardinality M, which we will also denote by [[M]]. A DPP on [[M]] defines a probability distribution over all of its 2^M subsets. It is parameterized by a matrix L ∈ R^{M×M}, called the kernel, such that the probability of each subset Y ⊆ [[M]] is proportional to the determinant of its corresponding principal submatrix: Pr(Y) ∝ det(L_Y). The normalization constant for this distribution can be expressed as a single M × M determinant: Σ_{Y⊆[[M]]} det(L_Y) = det(L + I) (Kulesza et al., 2012, Theorem 2.1). Hence, Pr(Y) = det(L_Y) / det(L + I). We will use P_L to denote this distribution. For intuition about the kernel parameters, notice that the probabilities of singletons {i} and {j} are proportional to L_ii and L_jj, respectively. Hence, it is common to think of L's diagonal as representing item qualities. The probability of a pair {i, j} is proportional to det(L_{i,j}) = L_ii L_jj − L_ij L_ji. Thus, if −L_ij L_ji < 0, this indicates i and j interact negatively. Similarly, if −L_ij L_ji > 0, then i and j interact positively. Therefore, off-diagonal terms determine item interactions. (The vague term "interactions" can be replaced by the more precise term "correlations" if we consider the DPP's marginal kernel instead; see Gartrell et al. (2019, Section 2.1) for an extensive discussion.) In order to ensure that P_L defines a probability distribution, all principal minors of L must be non-negative: det(L_Y) ≥ 0. Matrices that satisfy this property are called P0-matrices (Fang, 1989, Definition 1). There is no known generative method or matrix decomposition that fully covers the space of all P0 matrices, although there are many that partially cover the space (Tsatsomeros, 2004). One common partial solution is to use a decomposition that covers the space of symmetric P0 matrices. By restricting to the space of symmetric matrices, one can exploit the fact that L ∈ P0 if L is positive semidefinite (PSD)* (Prussing, 1986). (*Recall that a matrix L ∈ R^{M×M} is defined to be PSD if and only if x^T L x ≥ 0 for all x ∈ R^M.) Any symmetric PSD matrix can be written as the Gramian matrix of some set of vectors: L := VV^T, where V ∈ R^{M×K}. Hence, the VV^T decomposition provides an easy means of generating the entire space of symmetric P0 matrices. It also has a nice intuitive interpretation: we can view the i-th row of V as a length-K feature vector describing item i. Unfortunately, the symmetry requirement limits the types of correlations that a DPP can capture. A symmetric model is able to capture only nonpositive interactions between items, since L_ij L_ji = L_ij^2 ≥ 0, whereas a nonsymmetric L can also capture positive correlations. (Again, see Gartrell et al. (2019, Section 2.1) for more intuition.) To expand coverage to nonsymmetric matrices in P0, it is natural to consider nonsymmetric PSD matrices. In what follows, we denote by P0^+ the set of all nonsymmetric (and symmetric) PSD matrices. Any nonsymmetric PSD matrix is in P0 (Gartrell et al., 2019, Lemma 1), so P0^+ ⊆ P0. However, unlike in the symmetric case, the set of nonsymmetric PSD matrices does not fully cover the set of nonsymmetric P0 matrices. For example, consider L = [[1, 5/3], [1/2, 1]] with det(L_{1}), det(L_{2}), det(L_{1,2}) ≥ 0, but x^T L x < 0 for x = (−1, 1)^T. Still, nonsymmetric PSD matrices cover a large enough portion of the P0 space to be useful in practice, as evidenced by the experiments of Gartrell et al. (2019). This work covered the P0^+ space by using the following decomposition: L := S + A, with S := VV^T for V ∈ R^{M×K}, and A := BC^T − CB^T for B, C ∈ R^{M×K}. This decomposition makes use of the fact that any matrix L can be decomposed uniquely as the sum of a symmetric matrix S = (L + L^T)/2 and a skew-symmetric matrix A = (L − L^T)/2. All skew-symmetric matrices A are trivially PSD, since x^T A x = 0 for all x ∈ R^M. Hence, the L here is guaranteed to be PSD simply because its S uses the standard Gramian decomposition VV^T. In this work we will also only consider P0^+, and leave to future work the problem of finding tractable ways to cover the rest of P0. We propose a new decomposition of L that also covers the P0^+ space, but allows for more scalable learning. As in prior work, our decomposition has inner dimension K that could be as large as M, but is usually much smaller in practice. Our algorithms work well for modest values of K. In cases where the natural K is larger (e.g., natural language processing), random projections can often be used to significantly reduce K (Gillenwater et al., 2012a).

3 NEW KERNEL DECOMPOSITION AND SCALABLE LEARNING. Prior work on NDPPs proposed a maximum likelihood estimation (MLE) algorithm (Gartrell et al., 2019). Due to that work's particular kernel decomposition, this algorithm had complexity cubic in the number of items M. Here, we propose a kernel decomposition that reduces this to linear in M. We begin by showing that our new decomposition covers the space of P0^+ matrices. Before diving in, let us define Σ_i := [[0, λ_i], [−λ_i, 0]] as shorthand for a 2×2 block matrix with zeros on-diagonal and opposite values off-diagonal. Then, our proposed decomposition is as follows:

L := S + A, with S := VV^T and A := BCB^T,   (1)

where V, B ∈ R^{M×K}, and C ∈ R^{K×K} is a block-diagonal matrix with some diagonal blocks of the form Σ_i, with λ_i > 0, and zeros elsewhere. The following lemma shows that this decomposition covers the space of P0^+ matrices. Lemma 1. Let A ∈ R^{M×M} be a skew-symmetric matrix with rank ℓ ≤ M. Then, there exist B ∈ R^{M×ℓ} and positive numbers λ_1, ..., λ_{⌊ℓ/2⌋}, such that A = BCB^T, where C ∈ R^{ℓ×ℓ} is the block-diagonal matrix with ⌊ℓ/2⌋ diagonal blocks of size 2 given by Σ_i, i = 1, ..., ⌊ℓ/2⌋, and zero elsewhere. The proof of Lemma 1 and all subsequent results can be found in Appendix F. With this decomposition in hand, we now proceed to show that it can be used for linear-time MLE learning. To do so, we must show that the corresponding NDPP log-likelihood objective and gradient can be computed in time linear in M. Given a collection of n observed subsets {Y_1, ..., Y_n} composed of items from Y = [[M]], the full formulation of the regularized log-likelihood is:

φ(V, B, C) = (1/n) Σ_{i=1}^{n} log det(V_{Y_i} V_{Y_i}^T + B_{Y_i} C B_{Y_i}^T) − log det(VV^T + BCB^T + I) − R(V, B),   (2)

where V_{Y_i} ∈ R^{|Y_i|×K} denotes a matrix composed of the rows of V that correspond to the items in Y_i. The regularization term, R(V, B), is defined as follows:

R(V, B) = α Σ_{i=1}^{M} (1/μ_i) ‖v_i‖_2^2 + β Σ_{i=1}^{M} (1/μ_i) ‖b_i‖_2^2,   (3)

where μ_i counts the number of occurrences of item i in the training set, v_i and b_i are rows of V and B, respectively, and α, β > 0 are tunable hyperparameters. This regularization is similar to that of prior works (Gartrell et al., 2017; 2019). We omit regularization for C. Theorem 1 shows that computing the regularized log-likelihood and its gradient both have time complexity linear in M. The complexities also depend on K, the rank of the NDPP, and K′, the size of the largest observed subset in the data. For many real-world datasets we observe that K′ ≪ M, and we set K = K′. Hence, linearity in M means that we can efficiently perform learning for datasets with very large ground sets, which is impossible with the cubic-complexity L decomposition in prior work (Gartrell et al., 2019). Theorem 1. Given an NDPP with kernel L = VV^T + BCB^T, parameterized by V of rank K, B of rank K, and a K × K matrix C, we can compute the regularized log-likelihood (Eq. 2) and its gradient in O(MK^2 + K^3 + nK′^3) time, where K′ is the size of the largest of the n training subsets.
Nonsymmetric determinantal point processes (NDPPs) have recently received attention because they allow modeling of both negative and positive correlations between items. This paper develops scalable learning and MAP inference algorithms with space and time complexity linear in the ground set size, a substantial improvement over previous approaches. Experimental results show that the algorithms scale significantly better and roughly match the predictive performance of prior work.
science.mup_single_document_summarization.797
[ { "content": "Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.\n\nTitle of the Paper:\nScalable Learning and MAP Inference for Nonsymmetric Determinantal Point Processes\n\nMain Content:\n1 INTRODUCTION . Determinantal point processes ( DPPs ) have proven useful for numerous machine learning tasks . For example , recent uses include summarization ( Sharghi et al. , 2018 ) , recommender systems ( Wilhelm et al. , 2018 ) , neural network compression ( Mariet & Sra , 2016 ) , kernel approximation ( Li et al. , 2016 ) , multi-modal output generation ( Elfeki et al. , 2019 ) , and batch selection , both for stochastic optimization ( Zhang et al. , 2017 ) and for active learning ( Bıyık et al. , 2019 ) . For subset selection problems where the ground set of items to select from has cardinality M , the typical DPP is parameterized by an M ×M kernel matrix . Most prior work has been concerned with symmetric DPPs , where the kernel must equal its transpose . However , recent work has considered the more general class of nonsymmetric DPPs ( NDPPs ) and shown that these have additional useful modeling power ( Brunel , 2018 ; Gartrell et al. , 2019 ) . In particular , unlike symmetric DPPs , which can only model negative correlations between items , NDPPs allow modeling of positive correlations , where the presence of item i in the selected set increases the probability that some other item j will also be selected . There are many intuitive examples of how positive correlations can be of practical importance . For example , consider a product recommendation task for a retail website , where a camera is found in a user ’ s shopping cart , and the goal is to display several other items that might be purchased . Relative to an empty cart , the presence of the camera probably increases the probability of buying an accessory like a tripod . Although NDPPs can theoretically model such behavior , the existing approach for NDPP learning and inference ( Gartrell et al. , 2019 ) is often impractical in terms of both storage and runtime requirements . These algorithms require memory quadratic in M and time quadratic ( for inference ) or cubic ( for learning ) in M ; for the not-unusual M of 1 million , this requires storing 8TB-size objects in memory , with runtime millions or billions of times slower than that of a linear-complexity method . In this work , we make the following contributions : Learning : We propose a new decomposition of the NDPP kernel which reduces the storage and runtime requirements of learning and inference to linear in M . Fortuitously , the modified decomposition retains all of the previous decomposition ’ s modeling power , as it covers the same part of the NDPP kernel space . The algebraic manipulations we apply to get linear complexity for this decomposition can not be applied to prior work , meaning that our new decomposition is crucial for scalability . Inference : After learning , prior NDPP work applies a DPP conditioning algorithm to do subset expansion ( Gartrell et al. , 2019 ) , with quadratic runtime in M . However , prior work does not examine the general problem of MAP inference for NDPPs , i.e. , solving the problem of finding the highestprobability subset under a DPP . For symmetric DPPs , there exists a standard greedy MAP inference algorithm that is linear in M . In this work , we develop a version of this algorithm that is also linear for low-rank NDPPs . 
The low-rank requirement is unique to NDPPs , and highlights the fact that the transformation of the algorithm from the symmetric to the nonsymmetric space is non-trivial . To the best of our knowledge , this is the first MAP algorithm proposed for NDPPs . We combine the above contributions through experiments that involve learning NDPP kernels and applying MAP inference to these kernels to do subset selection for several real-world datasets . These experiments demonstrate that our algorithms are much more scalable , and that the new kernel decomposition matches the predictive performance of the decomposition from prior work . 2 BACKGROUND . Consider a finite set Y = { 1 , 2 , . . . , M } of cardinalityM , which we will also denote by [ [ M ] ] . A DPP on [ [ M ] ] defines a probability distribution over all of its 2M subsets . It is parameterized by a matrix L ∈ RM×M , called the kernel , such that the probability of each subset Y ⊆ [ [ M ] ] is proportional to the determinant of its corresponding principal submatrix : Pr ( Y ) ∝ det ( LY ) . The normalization constant for this distribution can be expressed as a single M ×M determinant : ∑ Y⊆ [ [ M ] ] det ( LY ) = det ( L + I ) ( Kulesza et al. , 2012 , Theorem 2.1 ) . Hence , Pr ( Y ) = det ( LY ) / det ( L + I ) . We will use PL to denote this distribution . For intuition about the kernel parameters , notice that the probabilities of singletons { i } and { j } are proportional to Lii and Ljj , respectively . Hence , it is common to think of L ’ s diagonal as representing item qualities . The probability of a pair { i , j } is proportional to det ( L { i , j } ) = LiiLjj − LijLji . Thus , if −LijLji < 0 , this indicates i and j interact negatively . Similarly , if −LijLji > 0 , then i and j interact positively . Therefore , off-diagonal terms determine item interactions . ( The vague term “ interactions ” can be replaced by the more precise term “ correlations ” if we consider the DPP ’ s marginal kernel instead ; see Gartrell et al . ( 2019 , Section 2.1 ) for an extensive discussion . ) In order to ensure that PL defines a probability distribution , all principal minors of L must be non-negative : det ( LY ) ≥ 0 . Matrices that satisfy this property are called P0-matrices ( Fang , 1989 , Definition 1 ) . There is no known generative method or matrix decomposition that fully covers the space of all P0 matrices , although there are many that partially cover the space ( Tsatsomeros , 2004 ) . One common partial solution is to use a decomposition that covers the space of symmetric P0 matrices . By restricting to the space of symmetric matrices , one can exploit the fact that L ∈ P0 if L is positive semidefinite ( PSD ) * ( Prussing , 1986 ) . Any symmetric PSD matrix can be written as the Gramian matrix of some set of vectors : L : = V V > , where V ∈ RM×K . Hence , the V V > decomposition provides an easy means of generating the entire space of symmetric P0 matrices . It also has a nice intuitive interpretation : we can view the i-th row of V as a length-K feature vector describing item i . Unfortunately , the symmetry requirement limits the types of correlations that a DPP can capture . A symmetric model is able to capture only nonpositive interactions between items , since LijLji = L2ij ≥ 0 , whereas a nonsymmetric L can also capture positive correlations . ( Again , see Gartrell et al . ( 2019 , Section 2.1 ) for more intuition . 
) To expand coverage to nonsymmetric matrices in P0 , it is natural to consider nonsymmetric PSD matrices . In what follows , we denote by P+0 the set of all nonsymmetric ( and symmetric ) PSD matrices . Any nonsymmetric PSD matrix is in P0 ( Gartrell et al. , 2019 , Lemma 1 ) , so P+0 ⊆ P0 . However , unlike in the symmetric case , the set of nonsymmetric PSD * Recall that a matrix L ∈ RM×M is defined to be PSD if and only if x > Lx ≥ 0 , for all x ∈ RM . matrices does not fully cover the set of nonsymmetric P0 matrices . For example , consider L = ( 1 5/3 1/2 1 ) with det ( L { 1 } ) , det ( L { 2 } ) , det ( L { 1,2 } ) ≥ 0 , but x > Lx < 0 for x = ( −1 1 ) . Still , nonsymmetric PSD matrices cover a large enough portion of the P0 space to be useful in practice , as evidenced by the experiments of Gartrell et al . ( 2019 ) . This work covered the P+0 space by using the following decomposition : L : = S + A , with S : = V V > for V ∈ RM×K , and A : = BC > −CB > for B , C ∈ RM×K . This decomposition makes use of the fact that any matrix L can be decomposed uniquely as the sum of a symmetric matrix S = ( L + LT ) /2 and a skew-symmetric matrix A = ( L−LT ) /2 . All skew-symmetric matrices A are trivially PSD , since x > Ax = 0 for all x ∈ RM . Hence , the L here is guaranteed to be PSD simply because its S uses the standard Gramian decomposition V V > . In this work we will also only consider P+0 , and leave to future work the problem of finding tractable ways to cover the rest of P0 . We propose a new decomposition of L that also covers the P+0 space , but allows for more scalable learning . As in prior work , our decomposition has inner dimension K that could be as large as M , but is usually much smaller in practice . Our algorithms work well for modest values of K. In cases where the natural K is larger ( e.g. , natural language processing ) , random projections can often be used to significantly reduce K ( Gillenwater et al. , 2012a ) . 3 NEW KERNEL DECOMPOSITION AND SCALABLE LEARNING . Prior work on NDPPs proposed a maximum likelihood estimation ( MLE ) algorithm ( Gartrell et al. , 2019 ) . Due to that work ’ s particular kernel decomposition , this algorithm had complexity cubic in the number of items M . Here , we propose a kernel decomposition that reduces this to linear in M . We begin by showing that our new decomposition covers the space of P+0 matrices . Before diving in , let us define Σi : = ( 0 λi −λi 0 ) as shorthand for a 2× 2 block matrix with zeros on-diagonal and opposite values off-diagonal . Then , our proposed decomposition is as follows : L : = S + A , with S : = V V > and A : = BCB > , ( 1 ) where V , B ∈ RM×K , and C ∈ RK×K is a block-diagonal matrix with some diagonal blocks of the form Σi , with λi > 0 , and zeros elsewhere . The following lemma shows that this decomposition covers the space of P+0 matrices . Lemma 1 . Let A ∈ RM×M be a skew-symmetric matrix with rank ` ≤ M . Then , there exist B ∈ RM× ` and positive numbers λ1 , . . . , λb ` /2c , such that A = BCB > , where C ∈ R ` × ` is the block-diagonal matrix with b ` /2c diagonal blocks of size 2 given by Σi , i = 1 , . . . , b ` /2c and zero elsewhere . The proof of Lemma 1 and all subsequent results can be found in Appendix F. With this decomposition in hand , we now proceed to show that it can be used for linear-time MLE learning . To do so , we must show that corresponding NDPP log-likelihood objective and gradient can be computed in time linear in M . 
Given a collection of n observed subsets { Y1 , ... , Yn } composed of items from Y = [ [ M ] ] , the full formulation of the regularized log-likelihood is : φ ( V , B , C ) = 1 n n∑ i=1 log det ( VYiV > Yi +BYiCB > Yi ) − log det ( V V > +BCB > + I ) −R ( V , B ) , ( 2 ) where VYi ∈ R|Yi|×K denotes a matrix composed of the rows of V that correspond to the items in Yi . The regularization term , R ( V , B ) , is defined as follows : R ( V , B ) = α M∑ i=1 1 µi ‖vi‖22 + β M∑ i=1 1 µi ‖bi‖22 , ( 3 ) where µi counts the number of occurrences of item i in the training set , vi and bi are rows of V and B , respectively , and α , β > 0 are tunable hyperparameters . This regularization is similar to that of prior works ( Gartrell et al. , 2017 ; 2019 ) . We omit regularization for C. Theorem 1 shows that computing the regularized log-likelihood and its gradient both have time complexity linear in M . The complexities also depend on K , the rank of the NDPP , and K ′ , the size of the largest observed subset in the data . For many real-world datasets we observe that K ′ M , and we set K = K ′ . Hence , linearity in M means that we can efficiently perform learning for datasets with very large ground sets , which is impossible with the cubic-complexity L decomposition in prior work ( Gartrell et al. , 2019 ) . Theorem 1 . Given an NDPP with kernel L = V V > +BCB > , parameterized by V of rank K , B of rank K , and a K ×K matrix C , we can compute the regularized log-likelihood ( Eq . 2 ) and its gradient in O ( MK2 +K3 +nK ′3 ) time , where K ′ is the size of the largest of the n training subsets .", "role": "user" }, { "content": "Nonsymmetric determinantal point processes (NDPPs) received some attention recently because they allow modeling of both negative and positive correlations between items. This paper developed scalable learning and MAP inference algorithms with space and time complexity linear in ground set size, which is a huge improvement compared to previous approaches. Experimental results show that the algorithms scale significantly better, and can roughly match the predictive performance of prior work.", "role": "assistant" } ]
science.discomat_te
| | Al | Ni | La | Ti | B | Ref. |
| r (nm) | 0.143 | 0.124 | 0.183 | 0.147 | 0.086 | [20] |
| ΔH_mix, Ti-X (kJ/mol) | -30 | -35 | 20 | - | - | [21] |
| ΔH_mix, B-X (kJ/mol) | 0 | -24 | -47 | - | - | [21] |

Caption: Atomic radius r and heat of mixing ΔH_mix between the added atoms (Ti and B) and the constituent atoms (Al, Ni and La).

Examine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:
{"value": "xx", "type": "Other"}
{"value": "xx", "type": "Glass_Compound_Amount", "constituent": "xx", "unit": "xx", "material": "xx"}

Process each cell sequentially, and for any unknown attributes, use "xx" as a placeholder. Each JSON should be on a new line.

Output:
{"value": "0.143", "type": "Other"} {"value": "0.124", "type": "Other"} {"value": "0.183", "type": "Other"} {"value": "0.147", "type": "Other"} {"value": "0.086", "type": "Other"} {"value": "20", "type": "Other"} {"value": "0", "type": "Other"}
science.discomat_te.17
[ { "content": "| | Al | Ni | La | Ti | B | Ref. |\n| r (nm) | 0.143 | 0.124 | 0.183 | 0.147 | 0.086 | [20] |\n| D H Ti - X mix (kJ/mol) | -30 | -35 | 20 | - | - | [21] |\n| D H B - X mix (kJ/mol) | 0 | -24 | -47 | - | - | [21] |\n\nCaption: Atomic radius r and heat of mixing DH mix between the added atoms (Ti and B) and the constituent atoms (Al, Ni and La).\n\nExamine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:\n{\"value\": \"xx\", \"type\": \"Other\"}\n{\"value\": \"xx\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"xx\", \"unit\": \"xx\", \"material\": \"xx\"}\n\nProcess each cell sequentially, and for any unknown attributes, use \"xx\" as a placeholder. Each JSON should be on a new line.\n\nOutput:", "role": "user" }, { "content": "{\"value\": \"0.143\", \"type\": \"Other\"}\n{\"value\": \"0.124\", \"type\": \"Other\"}\n{\"value\": \"0.183\", \"type\": \"Other\"}\n{\"value\": \"0.147\", \"type\": \"Other\"}\n{\"value\": \"0.086\", \"type\": \"Other\"}\n{\"value\": \"20\", \"type\": \"Other\"}\n{\"value\": \"0\", \"type\": \"Other\"}", "role": "assistant" } ]
science.scierc_ner
Analyze the given abstract from a computer science paper. Identify and list entities in a structured JSON format. Focus on capturing distinct elements relevant to the research context. Input Abstract: We describe a novel approach to statistical machine translation that combines syntactic information in the source language with recent advances in phrasal translation. This method requires a source-language dependency parser, target language word segmentation and an unsupervised word alignment component. We align a parallel corpus, project the source dependency parse onto the target sentence, extract dependency treelet translation pairs, and train a tree-based ordering model. We describe an efficient decoder and show that using these tree-based models in combination with conventional SMT models provides a promising approach that incorporates the power of phrasal SMT with the linguistic generality available in a parser.
Output Entities: {"Generic": ["approach"], "Material": ["parallel corpus"], "Method": ["source-language dependency parser", "phrasal SMT", "SMT models", "parser", "decoder", "target language word segmentation", "tree-based ordering model", "unsupervised word alignment component"], "Metric": [], "OtherScientificTerm": ["linguistic generality", "dependency treelet translation pairs", "source dependency parse", "syntactic information"], "Task": ["statistical machine translation", "phrasal translation"]}
science.scierc_ner.14
[ { "content": "Analyze the given abstract from a computer science paper. Identify and list entities in a structured JSON format. Focus on capturing distinct elements relevant to the research context.\n\nInput Abstract:\n\nWe describe a novel approach to statistical machine translation that combines syntactic information in the source language with recent advances in phrasal translation. This method requires a source-language dependency parser, target language word segmentation and an unsupervised word alignment component. We align a parallel corpus, project the source dependency parse onto the target sentence, extract dependency treelet translation pairs, and train a tree-based ordering model. We describe an efficient decoder and show that using these tree-based models in combination with conventional SMT models provides a promising approach that incorporates the power of phrasal SMT with the linguistic generality available in a parser.", "role": "user" }, { "content": "Output Entities:\n\n{\"Generic\": [\"approach\"], \"Material\": [\"parallel corpus\"], \"Method\": [\"source-language dependency parser\", \"phrasal SMT\", \"SMT models\", \"parser\", \"decoder\", \"target language word segmentation\", \"tree-based ordering model\", \"unsupervised word alignment component\"], \"Metric\": [], \"OtherScientificTerm\": [\"linguistic generality\", \"dependency treelet translation pairs\", \"source dependency parse\", \"syntactic information\"], \"Task\": [\"statistical machine translation\", \"phrasal translation\"]}", "role": "assistant" } ]
science.multicite_intent_classification
Examine the text below, which includes a citation marked by "<cite>" and "</cite>". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text. Provide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose. Text with Citation: MP-HCNN<cite> (Rao et al., 2018)</cite> is the first neural model that captures the characteristics of social media domain.
["Uses"]
science.multicite_intent_classification.1088
[ { "content": "Examine the text below, which includes a citation marked by \"<cite>\" and \"</cite>\". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.\n\nProvide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.\n\nText with Citation:\nMP-HCNN<cite> (Rao et al., 2018)</cite> is the first neural model that captures the characteristics of social media domain.", "role": "user" }, { "content": "[\"Uses\"]", "role": "assistant" } ]
science.evidence_inference
Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as: - Treatment: The specific medical procedure or drug being evaluated. - Control: The standard or placebo used for comparison. If absent, use `null`. - Result: The health outcome being assessed. - Impact: The observed effect of the treatment on the result, categorized as: ("increased", "decreased", "no change"). - Justification: The supporting details for the observed impact. Format the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>]. Abstract: ABSTRACT.BACKGROUND & OBJECTIVES:: The complementary and alternative medicines (CAM) have not been systematically evaluated for the management of HIV/AIDS patients. In a prospective, single-site, open-label, non-randomized, controlled, pilot trial, we evaluated a polyherbal formulation (PHF) for its safety and efficacy in treating subjects with HIV-AIDS. ABSTRACT.METHODS:: A total of 32 and 31 subjects were enrolled under the PHF and highly active antiretroviral treatment (HAART) arms, respectively, and followed up for a period of 24 months. Plasma viral RNA, CD4 cell count and blood chemistry were monitored at 3-month intervals. Following mid-term safety evaluation, 12 subjects from the PHF arm were shifted to HAART and were followed separately as PHF-to-HAART arm, for the rest of the period. ABSTRACT.RESULTS:: The HAART arm was characterized by significant improvements in CD4 cell count (154.4 cells/μl/year, P<0.001) and reduction in plasma viral load within 3 to 6 months (-0.431+ 0.004 log10 IU/month, P<0.001). In contrast, the PHF arm showed a profile of CD4 cell loss at remarkably slower kinetics (14.3 cells/μl/year, P=0.021) and insignificant reduction in the viral load. The PHF and HAART arms did not differ significantly in the occurrence of AIDS-related illnesses over the study period of 24 months. In the PHF-to-HAART arm, the rates of CD4 count and reduction in viral load were significant and comparable to that of the HAART group. In the PHF arm, at 1 month, a significant increase in CD4 cell count and a concomitant decrease in viral load were seen. ABSTRACT.INTERPRETATION & CONCLUSIONS:: The PHF appears to have provided protection by delaying the kinetics of CD4 cell reduction. Given the several study limitations, drawing assertive inferences from the data is challenging. Future studies with a stringent study design are warranted to confirm these findings. Extracted Data:
[["Highly active antiretroviral treatment", "No treatment", "Viral load", "significantly decreased", ":\r\r\r\r\nThe HAART arm was characterized by significant improvements in CD4 cell count (154.4 cells/\u03bcl/year, P\u003c0.001) and reduction in plasma viral load within 3 to 6 months (-0.431+ 0.004 log10 IU/month, P\u003c0.001). "], ["Polyherbal formulation", "No treatment", "Viral load", "no significant difference", "In contrast, the PHF arm showed a profile of CD4 cell loss at remarkably slower kinetics (14.3 cells/\u03bcl/year, P=0.021) and insignificant reduction in the viral load. "], ["Highly active antiretroviral treatment", "No treatment", "CD4 count", "significantly increased", ":\r\r\r\r\nThe HAART arm was characterized by significant improvements in CD4 cell count (154.4 cells/\u03bcl/year, P\u003c0.001) and reduction in plasma viral load within 3 to 6 months (-0.431+ 0.004 log10 IU/month, P\u003c0.001). "]]
science.evidence_inference.59
[ { "content": "Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:\n\n- Treatment: The specific medical procedure or drug being evaluated.\n- Control: The standard or placebo used for comparison. If absent, use `null`.\n- Result: The health outcome being assessed.\n- Impact: The observed effect of the treatment on the result, categorized as: (\"increased\", \"decreased\", \"no change\").\n- Justification: The supporting details for the observed impact.\n\nFormat the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].\n\nAbstract:\nABSTRACT.BACKGROUND & OBJECTIVES:: The complementary and alternative medicines (CAM) have not been systematically evaluated for the management of HIV/AIDS patients. In a prospective, single-site, open-label, non-randomized, controlled, pilot trial, we evaluated a polyherbal formulation (PHF) for its safety and efficacy in treating subjects with HIV-AIDS.\n\nABSTRACT.METHODS:: A total of 32 and 31 subjects were enrolled under the PHF and highly active antiretroviral treatment (HAART) arms, respectively, and followed up for a period of 24 months. Plasma viral RNA, CD4 cell count and blood chemistry were monitored at 3-month intervals. Following mid-term safety evaluation, 12 subjects from the PHF arm were shifted to HAART and were followed separately as PHF-to-HAART arm, for the rest of the period.\n\nABSTRACT.RESULTS:: The HAART arm was characterized by significant improvements in CD4 cell count (154.4 cells/μl/year, P<0.001) and reduction in plasma viral load within 3 to 6 months (-0.431+ 0.004 log10 IU/month, P<0.001). In contrast, the PHF arm showed a profile of CD4 cell loss at remarkably slower kinetics (14.3 cells/μl/year, P=0.021) and insignificant reduction in the viral load. The PHF and HAART arms did not differ significantly in the occurrence of AIDS-related illnesses over the study period of 24 months. In the PHF-to-HAART arm, the rates of CD4 count and reduction in viral load were significant and comparable to that of the HAART group. In the PHF arm, at 1 month, a significant increase in CD4 cell count and a concomitant decrease in viral load were seen.\n\nABSTRACT.INTERPRETATION & CONCLUSIONS:: The PHF appears to have provided protection by delaying the kinetics of CD4 cell reduction. Given the several study limitations, drawing assertive inferences from the data is challenging. Future studies with a stringent study design are warranted to confirm these findings.\n\nExtracted Data:", "role": "user" }, { "content": "[[\"Highly active antiretroviral treatment\", \"No treatment\", \"Viral load\", \"significantly decreased\", \":\\r\\r\\r\\r\\nThe HAART arm was characterized by significant improvements in CD4 cell count (154.4 cells/\\u03bcl/year, P\\u003c0.001) and reduction in plasma viral load within 3 to 6 months (-0.431+ 0.004 log10 IU/month, P\\u003c0.001). \"], [\"Polyherbal formulation\", \"No treatment\", \"Viral load\", \"no significant difference\", \"In contrast, the PHF arm showed a profile of CD4 cell loss at remarkably slower kinetics (14.3 cells/\\u03bcl/year, P=0.021) and insignificant reduction in the viral load. 
\"], [\"Highly active antiretroviral treatment\", \"No treatment\", \"CD4 count\", \"significantly increased\", \":\\r\\r\\r\\r\\nThe HAART arm was characterized by significant improvements in CD4 cell count (154.4 cells/\\u03bcl/year, P\\u003c0.001) and reduction in plasma viral load within 3 to 6 months (-0.431+ 0.004 log10 IU/month, P\\u003c0.001). \"]]", "role": "assistant" } ]
science.scifact_entailment
Determine the relationship between them and generate a JSON response. Scientific Claim: Side effects associated with antidepressants increases risk of stroke. Research Abstract: OBJECTIVES To investigate the association between antidepressant treatment and risk of several potential adverse outcomes in older people with depression and to examine risks by class of antidepressant, duration of use, and dose. DESIGN Cohort study of people aged 65 and over diagnosed as having depression. SETTING 570 general practices in the United Kingdom supplying data to the QResearch primary care database. PARTICIPANTS 60,746 patients diagnosed as having a new episode of depression between the ages of 65 and 100 years from 1 January 1996 to 31 December 2007 and followed up until 31 December 2008. MAIN OUTCOME MEASURES Hazard ratios associated with antidepressant use for all cause mortality, attempted suicide/self harm, myocardial infarction, stroke/transient ischaemic attack, falls, fractures, upper gastrointestinal bleeding, epilepsy/seizures, road traffic accidents, adverse drug reactions, and hyponatraemia, adjusted for a range of potential confounding variables. Hazard ratios were calculated for antidepressant class (tricyclic and related antidepressants, selective serotonin reuptake inhibitors, other antidepressants), dose, and duration of use and for commonly prescribed individual drugs. RESULTS 54,038 (89.0%) patients received at least one prescription for an antidepressant during follow-up. A total of 1,398,359 antidepressant prescriptions were issued: 764,659 (54.7%) for selective serotonin reuptake inhibitors, 442,192 (31.6%) for tricyclic antidepressants, 2203 (0.2%) for monoamine oxidase inhibitors, and 189,305 (13.5%) for the group of other antidepressants. The associations with the adverse outcomes differed significantly between the antidepressant classes for seven outcomes. Selective serotonin reuptake inhibitors were associated with the highest adjusted hazard ratios for falls (1.66, 95% confidence interval 1.58 to 1.73) and hyponatraemia (1.52, 1.33 to 1.75) compared with when antidepressants were not being used. The group of other antidepressants was associated with the highest adjusted hazard ratios for all cause mortality (1.66, 1.56 to 1.77), attempted suicide/self harm (5.16, 3.90 to 6.83), stroke/transient ischaemic attack (1.37, 1.22 to 1.55), fracture (1.64, 1.46 to 1.84), and epilepsy/seizures (2.24, 1.60 to 3.15), compared with when antidepressants were not being used. Tricyclic antidepressants did not have the highest hazard ratio for any of the outcomes. Significantly different associations also existed between the individual drugs for the same seven outcomes; trazodone (tricyclic antidepressant), mirtazapine, and venlafaxine (both in the group of other antidepressants) were associated with the highest rates for some of these outcomes. Absolute risks over 1 year for all cause mortality were 7.04% for patients while not taking antidepressants, 8.12% for those taking tricyclic antidepressants, 10.61% for selective serotonin reuptake inhibitors, and 11.43% for other antidepressants. CONCLUSIONS Selective serotonin reuptake inhibitors and drugs in the group of other antidepressants were associated with an increased risk of several adverse outcomes compared with tricyclic antidepressants. Among individual drugs, trazodone, mirtazapine, and venlafaxine were associated with the highest risks for some outcomes. 
As this is an observational study, it is susceptible to confounding by indication, channelling bias, and residual confounding, so differences in characteristics between patients prescribed different antidepressant drugs that could account for some of the associations between the drugs and the adverse outcomes may remain. Further research is needed to confirm these findings, but the risks and benefits of different antidepressants should be carefully evaluated when these drugs are prescribed to older people.
{"verdict": "SUPPORT", "evidence": ["The group of other antidepressants was associated with the highest adjusted hazard ratios for all cause mortality (1.66, 1.56 to 1.77), attempted suicide/self harm (5.16, 3.90 to 6.83), stroke/transient ischaemic attack (1.37, 1.22 to 1.55), fracture (1.64, 1.46 to 1.84), and epilepsy/seizures (2.24, 1.60 to 3.15), compared with when antidepressants were not being used."]}
science.scifact_entailment.709
[ { "content": "Determine the relationship between them and generate a JSON response. Scientific Claim: \nSide effects associated with antidepressants increases risk of stroke.\n\nResearch Abstract:\nOBJECTIVES To investigate the association between antidepressant treatment and risk of several potential adverse outcomes in older people with depression and to examine risks by class of antidepressant, duration of use, and dose.\nDESIGN Cohort study of people aged 65 and over diagnosed as having depression.\nSETTING 570 general practices in the United Kingdom supplying data to the QResearch primary care database.\nPARTICIPANTS 60,746 patients diagnosed as having a new episode of depression between the ages of 65 and 100 years from 1 January 1996 to 31 December 2007 and followed up until 31 December 2008.\nMAIN OUTCOME MEASURES Hazard ratios associated with antidepressant use for all cause mortality, attempted suicide/self harm, myocardial infarction, stroke/transient ischaemic attack, falls, fractures, upper gastrointestinal bleeding, epilepsy/seizures, road traffic accidents, adverse drug reactions, and hyponatraemia, adjusted for a range of potential confounding variables.\nHazard ratios were calculated for antidepressant class (tricyclic and related antidepressants, selective serotonin reuptake inhibitors, other antidepressants), dose, and duration of use and for commonly prescribed individual drugs.\nRESULTS 54,038 (89.0%) patients received at least one prescription for an antidepressant during follow-up.\nA total of 1,398,359 antidepressant prescriptions were issued: 764,659 (54.7%) for selective serotonin reuptake inhibitors, 442,192 (31.6%) for tricyclic antidepressants, 2203 (0.2%) for monoamine oxidase inhibitors, and 189,305 (13.5%) for the group of other antidepressants.\nThe associations with the adverse outcomes differed significantly between the antidepressant classes for seven outcomes.\nSelective serotonin reuptake inhibitors were associated with the highest adjusted hazard ratios for falls (1.66, 95% confidence interval 1.58 to 1.73) and hyponatraemia (1.52, 1.33 to 1.75) compared with when antidepressants were not being used.\nThe group of other antidepressants was associated with the highest adjusted hazard ratios for all cause mortality (1.66, 1.56 to 1.77), attempted suicide/self harm (5.16, 3.90 to 6.83), stroke/transient ischaemic attack (1.37, 1.22 to 1.55), fracture (1.64, 1.46 to 1.84), and epilepsy/seizures (2.24, 1.60 to 3.15), compared with when antidepressants were not being used.\nTricyclic antidepressants did not have the highest hazard ratio for any of the outcomes.\nSignificantly different associations also existed between the individual drugs for the same seven outcomes; trazodone (tricyclic antidepressant), mirtazapine, and venlafaxine (both in the group of other antidepressants) were associated with the highest rates for some of these outcomes.\nAbsolute risks over 1 year for all cause mortality were 7.04% for patients while not taking antidepressants, 8.12% for those taking tricyclic antidepressants, 10.61% for selective serotonin reuptake inhibitors, and 11.43% for other antidepressants.\nCONCLUSIONS Selective serotonin reuptake inhibitors and drugs in the group of other antidepressants were associated with an increased risk of several adverse outcomes compared with tricyclic antidepressants.\nAmong individual drugs, trazodone, mirtazapine, and venlafaxine were associated with the highest risks for some outcomes.\nAs this is an observational study, it is 
susceptible to confounding by indication, channelling bias, and residual confounding, so differences in characteristics between patients prescribed different antidepressant drugs that could account for some of the associations between the drugs and the adverse outcomes may remain.\nFurther research is needed to confirm these findings, but the risks and benefits of different antidepressants should be carefully evaluated when these drugs are prescribed to older people.", "role": "user" }, { "content": "{\"verdict\": \"SUPPORT\", \"evidence\": [\"The group of other antidepressants was associated with the highest adjusted hazard ratios for all cause mortality (1.66, 1.56 to 1.77), attempted suicide/self harm (5.16, 3.90 to 6.83), stroke/transient ischaemic attack (1.37, 1.22 to 1.55), fracture (1.64, 1.46 to 1.84), and epilepsy/seizures (2.24, 1.60 to 3.15), compared with when antidepressants were not being used.\"]}", "role": "assistant" } ]
science.evidence_inference
Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as: - Treatment: The specific medical procedure or drug being evaluated. - Control: The standard or placebo used for comparison. If absent, use `null`. - Result: The health outcome being assessed. - Impact: The observed effect of the treatment on the result, categorized as: ("increased", "decreased", "no change"). - Justification: The supporting details for the observed impact. Format the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>]. Abstract: ABSTRACT.INTRODUCTION: The effect of cardiovascular disease (CVD) prevention measures aimed at elderly patients requires further evidence. We investigated the effect of statin treatment (targeted to achieve guideline goals) on CVD outcomes in different age groups to determine whether statins are more beneficial in the elderly. ABSTRACT.MATERIAL AND METHODS: The primary endpoint of this post hoc analysis of the GREek Atorvastatin and Coronary-heart-disease Evaluation (GREACE) study (n = 1,600 patients with established coronary heart disease (CHD), mean follow-up 3 years) was the absolute and relative CVD event (a composite of death, myocardial infarction, revascularization, unstable angina, heart failure and stroke) risk reduction in age quartiles (each n = 200). Patients on "structured care" with atorvastatin (n = 800) followed up by the university clinic and treated to lipid goal were compared with the corresponding quartiles on "usual care" (n = 800) followed up by specialists or general practitioners of the patient's choice outside the hospital. ABSTRACT.RESULTS: In the elderly (mean age 69 ±4 and 70 ±3 years in the "structured" and "usual care", respectively) the absolute CVD event reduction between "structured" and "usual care" was 16.5% (p < 0.0001), while in the younger patients (mean age 51 ±3 years and 52 ±3 years in the "structured" and "usual care", respectively) this was 8.5% (p = 0.016); relative risk reduction (RRR) 60% (p < 0.0001) vs. 42% respectively (p = 0.001). The elderly had higher rates of chronic kidney disease and higher uric acid levels, plus an increased prevalence of diabetes, metabolic syndrome and non-alcoholic fatty liver disease. These factors might contribute to the increased CVD risk in older patients. ABSTRACT.CONCLUSIONS: All age groups benefited from statin treatment, but the elderly on "structured care" had a greater absolute and relative CVD risk reduction than the younger patients when compared with the corresponding patients assigned to "usual care". These findings suggest that we should not deprive older patients of CVD prevention treatment and lipid target achievement. Extracted Data:
[["\u201cstructured care\u201d with atorvastatin", "usual care", "absolute cardiovascular disease (CVD) event reduction", "significantly increased", "In the elderly (mean age 69 \u00b14 and 70 \u00b13 years in the \u201cstructured\u201d and \u201cusual care\u201d, respectively) the absolute CVD event reduction between \u201cstructured\u201d and \u201cusual care\u201d was 16.5% (p \u003c 0.0001), while in the younger patients (mean age 51 \u00b13 years and 52 \u00b13 years in the \u201cstructured\u201d and \u201cusual care\u201d, respectively) this was 8.5% (p = 0.016)"]]
science.evidence_inference.231
[ { "content": "Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:\n\n- Treatment: The specific medical procedure or drug being evaluated.\n- Control: The standard or placebo used for comparison. If absent, use `null`.\n- Result: The health outcome being assessed.\n- Impact: The observed effect of the treatment on the result, categorized as: (\"increased\", \"decreased\", \"no change\").\n- Justification: The supporting details for the observed impact.\n\nFormat the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].\n\nAbstract:\nABSTRACT.INTRODUCTION: The effect of cardiovascular disease (CVD) prevention measures aimed at elderly patients requires further evidence. We investigated the effect of statin treatment (targeted to achieve guideline goals) on CVD outcomes in different age groups to determine whether statins are more beneficial in the elderly.\n\nABSTRACT.MATERIAL AND METHODS: The primary endpoint of this post hoc analysis of the GREek Atorvastatin and Coronary-heart-disease Evaluation (GREACE) study (n = 1,600 patients with established coronary heart disease (CHD), mean follow-up 3 years) was the absolute and relative CVD event (a composite of death, myocardial infarction, revascularization, unstable angina, heart failure and stroke) risk reduction in age quartiles (each n = 200). Patients on \"structured care\" with atorvastatin (n = 800) followed up by the university clinic and treated to lipid goal were compared with the corresponding quartiles on \"usual care\" (n = 800) followed up by specialists or general practitioners of the patient's choice outside the hospital.\n\nABSTRACT.RESULTS: In the elderly (mean age 69 ±4 and 70 ±3 years in the \"structured\" and \"usual care\", respectively) the absolute CVD event reduction between \"structured\" and \"usual care\" was 16.5% (p < 0.0001), while in the younger patients (mean age 51 ±3 years and 52 ±3 years in the \"structured\" and \"usual care\", respectively) this was 8.5% (p = 0.016); relative risk reduction (RRR) 60% (p < 0.0001) vs. 42% respectively (p = 0.001). The elderly had higher rates of chronic kidney disease and higher uric acid levels, plus an increased prevalence of diabetes, metabolic syndrome and non-alcoholic fatty liver disease. These factors might contribute to the increased CVD risk in older patients.\n\nABSTRACT.CONCLUSIONS: All age groups benefited from statin treatment, but the elderly on \"structured care\" had a greater absolute and relative CVD risk reduction than the younger patients when compared with the corresponding patients assigned to \"usual care\". 
These findings suggest that we should not deprive older patients of CVD prevention treatment and lipid target achievement.\n\nExtracted Data:", "role": "user" }, { "content": "[[\"\\u201cstructured care\\u201d with atorvastatin\", \"usual care\", \"absolute cardiovascular disease (CVD) event reduction\", \"significantly increased\", \"In the elderly (mean age 69 \\u00b14 and 70 \\u00b13 years in the \\u201cstructured\\u201d and \\u201cusual care\\u201d, respectively) the absolute CVD event reduction between \\u201cstructured\\u201d and \\u201cusual care\\u201d was 16.5% (p \\u003c 0.0001), while in the younger patients (mean age 51 \\u00b13 years and 52 \\u00b13 years in the \\u201cstructured\\u201d and \\u201cusual care\\u201d, respectively) this was 8.5% (p = 0.016)\"]]", "role": "assistant" } ]
science.biored_ner
Analyze the provided biomedical abstract and identify distinct entities. Focus on the specified categories: ["CellLine", "Variant", "Species", "Disease", "Chemical", "Gene"]. Ensure the output is strictly the JSON object. Source Text: PTPN22 Is a Critical Regulator of Fcg Receptor-Mediated Neutrophil Activation. Neutrophils act as a first line of defense against bacterial and fungal infections, but they are also important effectors of acute and chronic inflammation. Genome-wide association studies have established that the gene encoding the protein tyrosine phosphatase nonreceptor 22 (PTPN22) makes an important contribution to susceptibility to autoimmune disease, notably rheumatoid arthritis. Although PTPN22 is most highly expressed in neutrophils, its function in these cells remains poorly characterized. We show in this article that neutrophil effector functions, including adhesion, production of reactive oxygen species, and degranulation induced by immobilized immune complexes, were reduced in Ptpn22(-/-) neutrophils. Tyrosine phosphorylation of Lyn and Syk was altered in Ptpn22(-/-) neutrophils. On stimulation with immobilized immune complexes, Ptpn22(-/-) neutrophils manifested reduced activation of key signaling intermediates. Ptpn22(-/-) mice were protected from immune complex-mediated arthritis, induced by the transfer of arthritogenic serum. In contrast, in vivo neutrophil recruitment following thioglycollate-induced peritonitis and in vitro chemotaxis were not affected by lack of PTPN22. Our data suggest an important role for PTPN22-dependent dephosphorylation events, which are required to enable full FcgR-induced activation, pointing to an important role for this molecule in neutrophil function.
{"CellLine": [], "Chemical": ["thioglycollate", "reactive oxygen species"], "Disease": ["rheumatoid arthritis", "acute and chronic inflammation", "arthritis", "fungal infections", "peritonitis", "autoimmune disease"], "Gene": ["Ptpn22", "Lyn", "PTPN22", "Fcg Receptor", "Syk", "FcgR", "protein tyrosine phosphatase nonreceptor 22"], "Species": ["mice"], "Variant": []}
science.biored_ner.74
[ { "content": "Analyze the provided biomedical abstract and identify distinct entities. Focus on the specified categories: [\"CellLine\", \"Variant\", \"Species\", \"Disease\", \"Chemical\", \"Gene\"].\n\nEnsure the output is strictly the JSON object.\n\nSource Text:\n\nPTPN22 Is a Critical Regulator of Fcg Receptor-Mediated Neutrophil Activation. Neutrophils act as a first line of defense against bacterial and fungal infections, but they are also important effectors of acute and chronic inflammation. Genome-wide association studies have established that the gene encoding the protein tyrosine phosphatase nonreceptor 22 (PTPN22) makes an important contribution to susceptibility to autoimmune disease, notably rheumatoid arthritis. Although PTPN22 is most highly expressed in neutrophils, its function in these cells remains poorly characterized. We show in this article that neutrophil effector functions, including adhesion, production of reactive oxygen species, and degranulation induced by immobilized immune complexes, were reduced in Ptpn22(-/-) neutrophils. Tyrosine phosphorylation of Lyn and Syk was altered in Ptpn22(-/-) neutrophils. On stimulation with immobilized immune complexes, Ptpn22(-/-) neutrophils manifested reduced activation of key signaling intermediates. Ptpn22(-/-) mice were protected from immune complex-mediated arthritis, induced by the transfer of arthritogenic serum. In contrast, in vivo neutrophil recruitment following thioglycollate-induced peritonitis and in vitro chemotaxis were not affected by lack of PTPN22. Our data suggest an important role for PTPN22-dependent dephosphorylation events, which are required to enable full FcgR-induced activation, pointing to an important role for this molecule in neutrophil function.", "role": "user" }, { "content": "{\"CellLine\": [], \"Chemical\": [\"thioglycollate\", \"reactive oxygen species\"], \"Disease\": [\"rheumatoid arthritis\", \"acute and chronic inflammation\", \"arthritis\", \"fungal infections\", \"peritonitis\", \"autoimmune disease\"], \"Gene\": [\"Ptpn22\", \"Lyn\", \"PTPN22\", \"Fcg Receptor\", \"Syk\", \"FcgR\", \"protein tyrosine phosphatase nonreceptor 22\"], \"Species\": [\"mice\"], \"Variant\": []}", "role": "assistant" } ]
science.evidence_inference
Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as: - Treatment: The specific medical procedure or drug being evaluated. - Control: The standard or placebo used for comparison. If absent, use `null`. - Result: The health outcome being assessed. - Impact: The observed effect of the treatment on the result, categorized as: ("increased", "decreased", "no change"). - Justification: The supporting details for the observed impact. Format the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>]. Abstract: ABSTRACT.ABSTRACT.BACKGROUND:: Limited evidence is available regarding the effects of insoles on pediatric flexible flatfoot because of the heterogeneity and low methodological quality of previous studies. The purpose of this prospective trial is to examine the short-term effects of customized arch support insoles on symptomatic flexible flatfoot in children by using the International Classification of Functioning, randomized controlled Disability, and Health (ICF) framework. ABSTRACT.ABSTRACT.METHODS:: This study was conducted in a rehabilitation outpatient clinic of a teaching hospital. Fifty-two children with symptomatic flexible flatfoot were included. The children in the treatment group wore customized arch support insoles for 12 weeks, whereas those in the control group did not wear the insoles. Both clinical and radiographic measurements, including the navicular drop, foot posture index, Beighton hypermobility score, talonavicular coverage angle, calcaneal inclination angle, and calcaneal-first metatarsal angle, were used for diagnosing flexible flatfoot. Physical activity (10-m normal and fast walking, stair ascent, stair descent, and chair rising), physical function, and psychometric properties (Pediatric Outcome Data Collection Instrument and Pediatric Quality of Life Inventory) were evaluated at the baseline and 12 weeks after the intervention. ABSTRACT.ABSTRACT.RESULTS:: Compared with the control group, the treatment group exhibited significant improvement in pain/comfort (P = .048), physical health (P = .035), stair ascent time (P = .015), upper extremity and physical function (P = .016), and transfer and basic mobility (P = .042) during the intervention period. ABSTRACT.ABSTRACT.CONCLUSION:: Children with flexible flatfoot who wore customized arch support insoles for 12 weeks exhibited significantly improved pain/comfort, physical health, stair ascent time, upper extremity and physical function, and transfer and basic mobility. These variables belong to the domains of body functions and structures and activity and participation in the ICF framework. However, because the groups were not comparable, additional studies with larger sample sizes should be conducted. Extracted Data:
[["arch support insoles for 12 weeks", "did not wear the insoles", "stair ascent time", "significantly increased", "the treatment group exhibited significant improvement in pain/comfort (P\u200a=\u200a.048), physical health (P\u200a=\u200a.035), stair ascent time (P\u200a=\u200a.015"]]
science.evidence_inference.246
[ { "content": "Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:\n\n- Treatment: The specific medical procedure or drug being evaluated.\n- Control: The standard or placebo used for comparison. If absent, use `null`.\n- Result: The health outcome being assessed.\n- Impact: The observed effect of the treatment on the result, categorized as: (\"increased\", \"decreased\", \"no change\").\n- Justification: The supporting details for the observed impact.\n\nFormat the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].\n\nAbstract:\nABSTRACT.ABSTRACT.BACKGROUND:: Limited evidence is available regarding the effects of insoles on pediatric flexible flatfoot because of the heterogeneity and low methodological quality of previous studies. The purpose of this prospective trial is to examine the short-term effects of customized arch support insoles on symptomatic flexible flatfoot in children by using the International Classification of Functioning, randomized controlled Disability, and Health (ICF) framework.\n\nABSTRACT.ABSTRACT.METHODS:: This study was conducted in a rehabilitation outpatient clinic of a teaching hospital. Fifty-two children with symptomatic flexible flatfoot were included. The children in the treatment group wore customized arch support insoles for 12 weeks, whereas those in the control group did not wear the insoles. Both clinical and radiographic measurements, including the navicular drop, foot posture index, Beighton hypermobility score, talonavicular coverage angle, calcaneal inclination angle, and calcaneal-first metatarsal angle, were used for diagnosing flexible flatfoot. Physical activity (10-m normal and fast walking, stair ascent, stair descent, and chair rising), physical function, and psychometric properties (Pediatric Outcome Data Collection Instrument and Pediatric Quality of Life Inventory) were evaluated at the baseline and 12 weeks after the intervention.\n\nABSTRACT.ABSTRACT.RESULTS:: Compared with the control group, the treatment group exhibited significant improvement in pain/comfort (P = .048), physical health (P = .035), stair ascent time (P = .015), upper extremity and physical function (P = .016), and transfer and basic mobility (P = .042) during the intervention period.\n\nABSTRACT.ABSTRACT.CONCLUSION:: Children with flexible flatfoot who wore customized arch support insoles for 12 weeks exhibited significantly improved pain/comfort, physical health, stair ascent time, upper extremity and physical function, and transfer and basic mobility. These variables belong to the domains of body functions and structures and activity and participation in the ICF framework. However, because the groups were not comparable, additional studies with larger sample sizes should be conducted.\n\nExtracted Data:", "role": "user" }, { "content": "[[\"arch support insoles for 12 weeks\", \"did not wear the insoles\", \"stair ascent time\", \"significantly increased\", \"the treatment group exhibited significant improvement in pain/comfort (P\\u200a=\\u200a.048), physical health (P\\u200a=\\u200a.035), stair ascent time (P\\u200a=\\u200a.015\"]]", "role": "assistant" } ]
science.multicite_intent_classification
Examine the text below, which includes a citation marked by "<cite>" and "</cite>". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text. Provide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose. Text with Citation: The yield of the derived tree resulting from the operations of the derivation tree γ of Fig. 3 defined in<cite> (Kallmeyer and Kuhlmann, 2012)</cite> , w 2 where x, y denotes a tuple of strings.
["Background"]
science.multicite_intent_classification.2314
[ { "content": "Examine the text below, which includes a citation marked by \"<cite>\" and \"</cite>\". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.\n\nProvide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.\n\nText with Citation:\nThe yield of the derived tree resulting from the operations of the derivation tree γ of Fig. 3 defined in<cite> (Kallmeyer and Kuhlmann, 2012)</cite> , w 2 where x, y denotes a tuple of strings.", "role": "user" }, { "content": "[\"Background\"]", "role": "assistant" } ]
science.scierc_ner
Analyze the given abstract from a computer science paper. Identify and list entities in a structured JSON format. Focus on capturing distinct elements relevant to the research context. Input Abstract: One of the major problems one is faced with when decomposing words into their constituent parts is ambiguity: the generation of multiple analyses for one input word, many of which are implausible. In order to deal with ambiguity, the MORphological PArser MORPA is provided with a probabilistic context-free grammar (PCFG), i.e. it combines a 'conventional' context-free morphological grammar to filter out ungrammatical segmentations with a probability-based scoring function which determines the likelihood of each successful parse. Consequently, remaining analyses can be ordered along a scale of plausibility. Test performance data will show that a PCFG yields good results in morphological parsing. MORPA is a fully implemented parser developed for use in a text-to-speech conversion system.
Output Entities: {"Generic": [], "Material": [], "Method": ["probability-based scoring function", "parser", "MORphological PArser MORPA", "\u0027conventional\u0027 context-free morphological grammar", "probabilistic context-free grammar (PCFG)"], "Metric": [], "OtherScientificTerm": ["ungrammatical segmentations", "ambiguity"], "Task": ["generation", "text-to-speech conversion system", "morphological parsing", "parse"]}
science.scierc_ner.34
[ { "content": "Analyze the given abstract from a computer science paper. Identify and list entities in a structured JSON format. Focus on capturing distinct elements relevant to the research context.\n\nInput Abstract:\n\nOne of the major problems one is faced with when decomposing words into their constituent parts is ambiguity: the generation of multiple analyses for one input word, many of which are implausible. In order to deal with ambiguity, the MORphological PArser MORPA is provided with a probabilistic context-free grammar (PCFG), i.e. it combines a 'conventional' context-free morphological grammar to filter out ungrammatical segmentations with a probability-based scoring function which determines the likelihood of each successful parse. Consequently, remaining analyses can be ordered along a scale of plausibility. Test performance data will show that a PCFG yields good results in morphological parsing. MORPA is a fully implemented parser developed for use in a text-to-speech conversion system.", "role": "user" }, { "content": "Output Entities:\n\n{\"Generic\": [], \"Material\": [], \"Method\": [\"probability-based scoring function\", \"parser\", \"MORphological PArser MORPA\", \"\\u0027conventional\\u0027 context-free morphological grammar\", \"probabilistic context-free grammar (PCFG)\"], \"Metric\": [], \"OtherScientificTerm\": [\"ungrammatical segmentations\", \"ambiguity\"], \"Task\": [\"generation\", \"text-to-speech conversion system\", \"morphological parsing\", \"parse\"]}", "role": "assistant" } ]
science.scierc_ner
Analyze the given abstract from a computer science paper. Identify and list entities in a structured JSON format. Focus on capturing distinct elements relevant to the research context. Input Abstract: Image composition (or mosaicing) has attracted a growing attention in recent years as one of the main elements in video analysis and representation. In this paper we deal with the problem of global alignment and super-resolution. We also propose to evaluate the quality of the resulting mosaic by measuring the amount of blurring. Global registration is achieved by combining a graph-based technique -- that exploits the topological structure of the sequence induced by the spatial overlap -- with a bundle adjustment which uses only the homographies computed in the previous steps. Experimental comparison with other techniques shows the effectiveness of our approach.
Output Entities: {"Generic": ["techniques"], "Material": [], "Method": ["graph-based technique", "bundle adjustment"], "Metric": ["amount of blurring"], "OtherScientificTerm": ["topological structure", "spatial overlap", "homographies"], "Task": ["Image composition (or mosaicing)", "video analysis and representation", "global alignment", "mosaic", "Global registration", "super-resolution"]}
science.scierc_ner.23
[ { "content": "Analyze the given abstract from a computer science paper. Identify and list entities in a structured JSON format. Focus on capturing distinct elements relevant to the research context.\n\nInput Abstract:\n\nImage composition (or mosaicing) has attracted a growing attention in recent years as one of the main elements in video analysis and representation. In this paper we deal with the problem of global alignment and super-resolution. We also propose to evaluate the quality of the resulting mosaic by measuring the amount of blurring. Global registration is achieved by combining a graph-based technique -- that exploits the topological structure of the sequence induced by the spatial overlap -- with a bundle adjustment which uses only the homographies computed in the previous steps. Experimental comparison with other techniques shows the effectiveness of our approach.", "role": "user" }, { "content": "Output Entities:\n\n{\"Generic\": [\"techniques\"], \"Material\": [], \"Method\": [\"graph-based technique\", \"bundle adjustment\"], \"Metric\": [\"amount of blurring\"], \"OtherScientificTerm\": [\"topological structure\", \"spatial overlap\", \"homographies\"], \"Task\": [\"Image composition (or mosaicing)\", \"video analysis and representation\", \"global alignment\", \"mosaic\", \"Global registration\", \"super-resolution\"]}", "role": "assistant" } ]
science.qasper_abstractive_qa
Examine the provided sections from a research document and address the accompanying query. The text is divided by newlines. Document Content: Improving Implicit Discourse Relation Classification by Modeling Inter-dependencies of Discourse Units in a Paragraph We argue that semantic meanings of a sentence or clause can not be interpreted independently from the rest of a paragraph, or independently from all discourse relations and the overall paragraph-level discourse structure. With the goal of improving implicit discourse relation classification, we introduce a paragraph-level neural networks that model inter-dependencies between discourse units as well as discourse relation continuity and patterns, and predict a sequence of discourse relations in a paragraph. Experimental results show that our model outperforms the previous state-of-the-art systems on the benchmark corpus of PDTB. Implicit Discourse Relation Recognition Since the PDTB BIBREF7 corpus was created, a surge of studies BIBREF8 , BIBREF3 , BIBREF9 , BIBREF10 have been conducted for predicting discourse relations, primarily focusing on the challenging task of implicit discourse relation classification when no explicit discourse connective phrase was presented. Early studies BIBREF11 , BIBREF3 , BIBREF2 , BIBREF12 focused on extracting linguistic and semantic features from two discourse units. Recent research BIBREF13 , BIBREF14 , BIBREF15 , BIBREF16 tried to model compositional meanings of two discourse units by exploiting interactions between words in two units with more and more complicated neural network models, including the ones using neural tensor BIBREF5 , BIBREF17 , BIBREF18 and attention mechanisms BIBREF6 , BIBREF19 , BIBREF20 . Another trend is to alleviate the shortage of annotated data by leveraging related external data, such as explicit discourse relations in PDTB BIBREF9 , BIBREF19 , BIBREF21 and unlabeled data obtained elsewhere BIBREF12 , BIBREF19 , often in a multi-task joint learning framework. However, nearly all the previous works assume that a pair of discourse units is independent from its wider paragraph-level contexts and build their discourse relation prediction models based on only two relevant discourse units. In contrast, we model inter-dependencies of discourse units in a paragraph when building discourse unit representations; in addition, we model global continuity and patterns in a sequence of discourse relations, including both implicit and explicit relations. Hierarchical neural network models BIBREF22 , BIBREF23 have been applied to RST-style discourse parsing BIBREF24 mainly for the purpose of generating text-level hierarchical discourse structures. In contrast, we use hierarchical neural network models to build context-aware sentence representations in order to improve implicit discourse relation prediction. Paragraph Encoding Abstracting latent representations from a long sequence of words, such as a paragraph, is a challenging task. While several novel neural network models BIBREF25 , BIBREF26 have been introduced in recent years for encoding a paragraph, Recurrent Neural Network (RNN)-based methods remain the most effective approaches. RNNs, especially the long-short term memory (LSTM) BIBREF27 models, have been widely used to encode a paragraph for machine translation BIBREF28 , dialogue systems BIBREF29 and text summarization BIBREF30 because of its ability in modeling long-distance dependencies between words. 
In addition, among four typical pooling methods (sum, mean, last and max) for calculating sentence representations from RNN-encoded hidden states for individual words, max-pooling along with bidirectional LSTM (Bi-LSTM) BIBREF31 yields the current best universal sentence representation method BIBREF32 . We adopted a similar neural network architecture for paragraph encoding. Fine-tune Discourse Relation Predictions Using a CRF Layer Data analysis and many linguistic studies BIBREF11 , BIBREF33 , BIBREF34 , BIBREF35 have repeatedly shown that discourse relations feature continuity and patterns (e.g., a temporal relation is likely to be followed by another temporal relation). Especially, BIBREF11 firstly reported that patterns exist between implicit discourse relations and their neighboring explicit discourse relations. Motivated by these observations, we aim to improve implicit discourse relation detection by making use of easily identifiable explicit discourse relations and taking into account global patterns of discourse relation distributions. Specifically, we add an extra CRF layer at the top of the softmax prediction layer (shown in figure 3 ) to fine-tune predicted discourse relations by considering their inter-dependencies. The Conditional Random Fields BIBREF36 (CRF) layer updates a state transition matrix, which can effectively adjust the current label depending on proceeding and following labels. Both training and decoding of the CRF layer can be solved efficiently by using the Viterbi algorithm. With the CRF layer, the model jointly assigns a sequence of discourse relations between each two adjacent discourse units in a paragraph, including both implicit and explicit relations, by considering relevant discourse unit representations as well as global discourse relation patterns. Dataset and Preprocessing The Penn Discourse Treebank (PDTB): We experimented with PDTB v2.0 BIBREF7 which is the largest annotated corpus containing 36k discourse relations in 2,159 Wall Street Journal (WSJ) articles. In this work, we focus on the top-level discourse relation senses which are consist of four major semantic classes: Comparison (Comp), Contingency (Cont), Expansion (Exp) and Temporal (Temp). We followed the same PDTB section partition BIBREF12 as previous work and used sections 2-20 as training set, sections 21-22 as test set, and sections 0-1 as development set. Table 1 presents the data distributions we collected from PDTB. Preprocessing: The PDTB dataset documents its annotations as a list of discourse relations, with each relation associated with its two discourse units. To recover the paragraph context for a discourse relation, we match contents of its two annotated discourse units with all paragraphs in corresponding raw WSJ article. When all the matching was completed, each paragraph was split into a sequence of discourse units, with one discourse relation (implicit or explicit) between each two adjacent discourse units. Following this method, we obtained 14,309 paragraphs in total, each contains 3.2 discourse units on average. Table 2 shows the distribution of paragraphs based on the number of discourse units in a paragraph. Baseline Models and Systems We compare the performance of our neural network model with several recent discourse relation recognition systems that only consider two relevant discourse units. 
BIBREF12 : improves implicit discourse relation prediction by creating more training instances from the Gigaword corpus utilizing explicitly mentioned discourse connective phrases. BIBREF5 : a gated relevance network (GRN) model with tensors to capture semantic interactions between words from two discourse units. BIBREF9 : a convolutional neural network model that leverages relations between different styles of discourse relations annotations (PDTB and RST BIBREF24 ) in a multi-task joint learning framework. BIBREF6 : a multi-level attention-over-attention model to dynamically exploit features from two discourse units for recognizing an implicit discourse relation. BIBREF21 : a novel pipelined adversarial framework to enable an adaptive imitation competition between the implicit network and a rival feature discriminator with access to connectives. BIBREF18 : a Simple Word Interaction Model (SWIM) with tensors that captures both linear and quadratic relations between words from two discourse units. BIBREF19 : an attention-based LSTM neural network that leverages explicit discourse relations in PDTB and unannotated external data in a multi-task joint learning framework. Evaluation Settings On the PDTB corpus, both binary classification and multi-way classification settings are commonly used to evaluate the implicit discourse relation recognition performance. We noticed that all the recent works report class-wise implicit relation prediction performance in the binary classification setting, while none of them report detailed performance in the multi-way classification setting. In the binary classification setting, separate “one-versus-all” binary classifiers were trained, and each classifier is to identify one class of discourse relations. Although separate classifiers are generally more flexible in combating with imbalanced distributions of discourse relation classes and obtain higher class-wise prediction performance, one pair of discourse units may be tagged with all four discourse relations without proper conflict resolution. Therefore, the multi-way classification setting is more appropriate and natural in evaluating a practical end-to-end discourse parser, and we mainly evaluate our proposed models using the four-way multi-class classification setting. Since none of the recent previous work reported class-wise implicit relation classification performance in the multi-way classification setting, for better comparisons, we re-implemented the neural tensor network architecture (so-called SWIM in BIBREF18 ) which is essentially a Bi-LSTM model with tensors and report its detailed evaluation result in the multi-way classification setting. As another baseline, we report the performance of a Bi-LSTM model without tensors as well. Both baseline models take two relevant discourse units as the only input. For additional comparisons, We also report the performance of our proposed models in the binary classification setting. Experimental Results Multi-way Classification: The first section of table 3 shows macro average F1-scores and accuracies of previous works. The second section of table 3 shows the multi-class classification results of our implemented baseline systems. Consistent with results of previous works, neural tensors, when applied to Bi-LSTMs, improved implicit discourse relation prediction performance. However, the performance on the three small classes (Comp, Cont and Temp) remains low. 
The third section of table 3 shows the multi-class classification results of our proposed paragraph-level neural network models that capture inter-dependencies among discourse units. The first row shows the performance of a variant of our basic model, where we only identify implicit relations and ignore identifying explicit relations by setting the $\alpha $ in equation (5) to be 0. Compared with the baseline Bi-LSTM model, the only difference is that this model considers paragraph-wide contexts and model inter-dependencies among discourse units when building representation for individual DU. We can see that this model has greatly improved implicit relation classification performance across all the four relations and improved the macro-average F1-score by over 7 percents. In addition, compared with the baseline Bi-LSTM model with tensor, this model improved implicit relation classification performance across the three small classes, with clear performance gains of around 2 and 8 percents on contingency and temporal relations respectively, and overall improved the macro-average F1-score by 2.2 percents. The second row shows the performance of our basic paragraph-level model which predicts both implicit and explicit discourse relations in a paragraph. Compared to the variant system (the first row), the basic model further improved the classification performance on the first three implicit relations. Especially on the contingency relation, the classification performance was improved by another 1.42 percents. Moreover, the basic model yields good performance for recognizing explicit discourse relations as well, which is comparable with previous best result (92.05% macro F1-score and 93.09% accuracy as reported in BIBREF11 ). After untying parameters in the softmax prediction layer, implicit discourse relation classification performance was improved across all four relations, meanwhile, the explicit discourse relation classification performance was also improved. The CRF layer further improved implicit discourse relation recognition performance on the three small classes. In summary, our full paragraph-level neural network model achieves the best macro-average F1-score of 48.82% in predicting implicit discourse relations, which outperforms previous neural tensor network models (e.g., BIBREF18 ) by more than 2 percents and outperforms the best previous system BIBREF19 by 1 percent. Binary Classification: From table 4 , we can see that compared against the best previous systems, our paragraph-level model with untied parameters in the prediction layer achieves F1-score improvements of 6 points on Comparison and 7 points on Temporal, which demonstrates that paragraph-wide contexts are important in detecting minority discourse relations. Note that the CRF layer of the model is not suitable for binary classification. Impact of Paragraph Length To understand the influence of paragraph lengths to our paragraph-level models, we divide paragraphs in the PDTB test set into several subsets based on the number of DUs in a paragraph, and then evaluate our proposed models on each subset separately. From Figure 4 , we can see that our paragraph-level models (the latter three) overall outperform DU-pair baselines across all the subsets. As expected, the paragraph-level models achieve clear performance gains on long paragraphs (with more than 5 DUs) by extensively modeling mutual influences of DUs in a paragraph. 
But somewhat surprisingly, the paragraph-level models achieve noticeable performance gains on short paragraphs (with 2 or 3 DUs) as well. We hypothesize that by learning more appropriate discourse-aware DU representations in long paragraphs, our paragraph-level models reduce bias of using DU representations in predicting discourse relations, which benefits discourse relation prediction in short paragraphs as well. Figure 2: Untie Parameters in the Prediction Layer Figure 3: Fine-tune Discourse Relations with a CRF layer. Table 1: Distributions of Four Top-level Discourse Relations in PDTB. Table 5: Multi-class Classification Results of Ensemble Models on PDTB. Query: What discourse relations does it work best/worst for? JSON Response:
{"answer": "Best: Expansion (Exp). Worst: Comparison (Comp).", "evidence": ["The Penn Discourse Treebank (PDTB): We experimented with PDTB v2.0 BIBREF7 which is the largest annotated corpus containing 36k discourse relations in 2,159 Wall Street Journal (WSJ) articles. In this work, we focus on the top-level discourse relation senses which are consist of four major semantic classes: Comparison (Comp), Contingency (Cont), Expansion (Exp) and Temporal (Temp). We followed the same PDTB section partition BIBREF12 as previous work and used sections 2-20 as training set, sections 21-22 as test set, and sections 0-1 as development set. Table 1 presents the data distributions we collected from PDTB.", "Multi-way Classification: The first section of table 3 shows macro average F1-scores and accuracies of previous works. The second section of table 3 shows the multi-class classification results of our implemented baseline systems. Consistent with results of previous works, neural tensors, when applied to Bi-LSTMs, improved implicit discourse relation prediction performance. However, the performance on the three small classes (Comp, Cont and Temp) remains low."]}
science.qasper_abstractive_qa.13.0
[ { "content": "Examine the provided sections from a research document and address the accompanying query. The text is divided by newlines.\n\nDocument Content:\n\nImproving Implicit Discourse Relation Classification by Modeling Inter-dependencies of Discourse Units in a Paragraph\n\nWe argue that semantic meanings of a sentence or clause can not be interpreted independently from the rest of a paragraph, or independently from all discourse relations and the overall paragraph-level discourse structure. With the goal of improving implicit discourse relation classification, we introduce a paragraph-level neural networks that model inter-dependencies between discourse units as well as discourse relation continuity and patterns, and predict a sequence of discourse relations in a paragraph. Experimental results show that our model outperforms the previous state-of-the-art systems on the benchmark corpus of PDTB.\n\nImplicit Discourse Relation Recognition\nSince the PDTB BIBREF7 corpus was created, a surge of studies BIBREF8 , BIBREF3 , BIBREF9 , BIBREF10 have been conducted for predicting discourse relations, primarily focusing on the challenging task of implicit discourse relation classification when no explicit discourse connective phrase was presented. Early studies BIBREF11 , BIBREF3 , BIBREF2 , BIBREF12 focused on extracting linguistic and semantic features from two discourse units. Recent research BIBREF13 , BIBREF14 , BIBREF15 , BIBREF16 tried to model compositional meanings of two discourse units by exploiting interactions between words in two units with more and more complicated neural network models, including the ones using neural tensor BIBREF5 , BIBREF17 , BIBREF18 and attention mechanisms BIBREF6 , BIBREF19 , BIBREF20 . Another trend is to alleviate the shortage of annotated data by leveraging related external data, such as explicit discourse relations in PDTB BIBREF9 , BIBREF19 , BIBREF21 and unlabeled data obtained elsewhere BIBREF12 , BIBREF19 , often in a multi-task joint learning framework.\nHowever, nearly all the previous works assume that a pair of discourse units is independent from its wider paragraph-level contexts and build their discourse relation prediction models based on only two relevant discourse units. In contrast, we model inter-dependencies of discourse units in a paragraph when building discourse unit representations; in addition, we model global continuity and patterns in a sequence of discourse relations, including both implicit and explicit relations.\nHierarchical neural network models BIBREF22 , BIBREF23 have been applied to RST-style discourse parsing BIBREF24 mainly for the purpose of generating text-level hierarchical discourse structures. In contrast, we use hierarchical neural network models to build context-aware sentence representations in order to improve implicit discourse relation prediction.\n\nParagraph Encoding\nAbstracting latent representations from a long sequence of words, such as a paragraph, is a challenging task. While several novel neural network models BIBREF25 , BIBREF26 have been introduced in recent years for encoding a paragraph, Recurrent Neural Network (RNN)-based methods remain the most effective approaches. RNNs, especially the long-short term memory (LSTM) BIBREF27 models, have been widely used to encode a paragraph for machine translation BIBREF28 , dialogue systems BIBREF29 and text summarization BIBREF30 because of its ability in modeling long-distance dependencies between words. 
In addition, among four typical pooling methods (sum, mean, last and max) for calculating sentence representations from RNN-encoded hidden states for individual words, max-pooling along with bidirectional LSTM (Bi-LSTM) BIBREF31 yields the current best universal sentence representation method BIBREF32 . We adopted a similar neural network architecture for paragraph encoding.\n\nFine-tune Discourse Relation Predictions Using a CRF Layer\nData analysis and many linguistic studies BIBREF11 , BIBREF33 , BIBREF34 , BIBREF35 have repeatedly shown that discourse relations feature continuity and patterns (e.g., a temporal relation is likely to be followed by another temporal relation). Especially, BIBREF11 firstly reported that patterns exist between implicit discourse relations and their neighboring explicit discourse relations.\nMotivated by these observations, we aim to improve implicit discourse relation detection by making use of easily identifiable explicit discourse relations and taking into account global patterns of discourse relation distributions. Specifically, we add an extra CRF layer at the top of the softmax prediction layer (shown in figure 3 ) to fine-tune predicted discourse relations by considering their inter-dependencies.\nThe Conditional Random Fields BIBREF36 (CRF) layer updates a state transition matrix, which can effectively adjust the current label depending on proceeding and following labels. Both training and decoding of the CRF layer can be solved efficiently by using the Viterbi algorithm. With the CRF layer, the model jointly assigns a sequence of discourse relations between each two adjacent discourse units in a paragraph, including both implicit and explicit relations, by considering relevant discourse unit representations as well as global discourse relation patterns.\n\nDataset and Preprocessing\nThe Penn Discourse Treebank (PDTB): We experimented with PDTB v2.0 BIBREF7 which is the largest annotated corpus containing 36k discourse relations in 2,159 Wall Street Journal (WSJ) articles. In this work, we focus on the top-level discourse relation senses which are consist of four major semantic classes: Comparison (Comp), Contingency (Cont), Expansion (Exp) and Temporal (Temp). We followed the same PDTB section partition BIBREF12 as previous work and used sections 2-20 as training set, sections 21-22 as test set, and sections 0-1 as development set. Table 1 presents the data distributions we collected from PDTB.\nPreprocessing: The PDTB dataset documents its annotations as a list of discourse relations, with each relation associated with its two discourse units. To recover the paragraph context for a discourse relation, we match contents of its two annotated discourse units with all paragraphs in corresponding raw WSJ article. When all the matching was completed, each paragraph was split into a sequence of discourse units, with one discourse relation (implicit or explicit) between each two adjacent discourse units. Following this method, we obtained 14,309 paragraphs in total, each contains 3.2 discourse units on average. 
Table 2 shows the distribution of paragraphs based on the number of discourse units in a paragraph.\n\nBaseline Models and Systems\nWe compare the performance of our neural network model with several recent discourse relation recognition systems that only consider two relevant discourse units.\n BIBREF12 : improves implicit discourse relation prediction by creating more training instances from the Gigaword corpus utilizing explicitly mentioned discourse connective phrases.\n BIBREF5 : a gated relevance network (GRN) model with tensors to capture semantic interactions between words from two discourse units.\n BIBREF9 : a convolutional neural network model that leverages relations between different styles of discourse relations annotations (PDTB and RST BIBREF24 ) in a multi-task joint learning framework.\n BIBREF6 : a multi-level attention-over-attention model to dynamically exploit features from two discourse units for recognizing an implicit discourse relation.\n BIBREF21 : a novel pipelined adversarial framework to enable an adaptive imitation competition between the implicit network and a rival feature discriminator with access to connectives.\n BIBREF18 : a Simple Word Interaction Model (SWIM) with tensors that captures both linear and quadratic relations between words from two discourse units.\n BIBREF19 : an attention-based LSTM neural network that leverages explicit discourse relations in PDTB and unannotated external data in a multi-task joint learning framework.\n\nEvaluation Settings\nOn the PDTB corpus, both binary classification and multi-way classification settings are commonly used to evaluate the implicit discourse relation recognition performance. We noticed that all the recent works report class-wise implicit relation prediction performance in the binary classification setting, while none of them report detailed performance in the multi-way classification setting. In the binary classification setting, separate “one-versus-all” binary classifiers were trained, and each classifier is to identify one class of discourse relations. Although separate classifiers are generally more flexible in combating with imbalanced distributions of discourse relation classes and obtain higher class-wise prediction performance, one pair of discourse units may be tagged with all four discourse relations without proper conflict resolution. Therefore, the multi-way classification setting is more appropriate and natural in evaluating a practical end-to-end discourse parser, and we mainly evaluate our proposed models using the four-way multi-class classification setting.\nSince none of the recent previous work reported class-wise implicit relation classification performance in the multi-way classification setting, for better comparisons, we re-implemented the neural tensor network architecture (so-called SWIM in BIBREF18 ) which is essentially a Bi-LSTM model with tensors and report its detailed evaluation result in the multi-way classification setting. As another baseline, we report the performance of a Bi-LSTM model without tensors as well. Both baseline models take two relevant discourse units as the only input.\nFor additional comparisons, We also report the performance of our proposed models in the binary classification setting.\n\nExperimental Results\nMulti-way Classification: The first section of table 3 shows macro average F1-scores and accuracies of previous works. The second section of table 3 shows the multi-class classification results of our implemented baseline systems. 
Consistent with results of previous works, neural tensors, when applied to Bi-LSTMs, improved implicit discourse relation prediction performance. However, the performance on the three small classes (Comp, Cont and Temp) remains low.\nThe third section of table 3 shows the multi-class classification results of our proposed paragraph-level neural network models that capture inter-dependencies among discourse units. The first row shows the performance of a variant of our basic model, where we only identify implicit relations and ignore identifying explicit relations by setting the $\\alpha $ in equation (5) to be 0. Compared with the baseline Bi-LSTM model, the only difference is that this model considers paragraph-wide contexts and model inter-dependencies among discourse units when building representation for individual DU. We can see that this model has greatly improved implicit relation classification performance across all the four relations and improved the macro-average F1-score by over 7 percents. In addition, compared with the baseline Bi-LSTM model with tensor, this model improved implicit relation classification performance across the three small classes, with clear performance gains of around 2 and 8 percents on contingency and temporal relations respectively, and overall improved the macro-average F1-score by 2.2 percents.\nThe second row shows the performance of our basic paragraph-level model which predicts both implicit and explicit discourse relations in a paragraph. Compared to the variant system (the first row), the basic model further improved the classification performance on the first three implicit relations. Especially on the contingency relation, the classification performance was improved by another 1.42 percents. Moreover, the basic model yields good performance for recognizing explicit discourse relations as well, which is comparable with previous best result (92.05% macro F1-score and 93.09% accuracy as reported in BIBREF11 ).\nAfter untying parameters in the softmax prediction layer, implicit discourse relation classification performance was improved across all four relations, meanwhile, the explicit discourse relation classification performance was also improved. The CRF layer further improved implicit discourse relation recognition performance on the three small classes. In summary, our full paragraph-level neural network model achieves the best macro-average F1-score of 48.82% in predicting implicit discourse relations, which outperforms previous neural tensor network models (e.g., BIBREF18 ) by more than 2 percents and outperforms the best previous system BIBREF19 by 1 percent.\nBinary Classification: From table 4 , we can see that compared against the best previous systems, our paragraph-level model with untied parameters in the prediction layer achieves F1-score improvements of 6 points on Comparison and 7 points on Temporal, which demonstrates that paragraph-wide contexts are important in detecting minority discourse relations. Note that the CRF layer of the model is not suitable for binary classification.\n\nImpact of Paragraph Length\nTo understand the influence of paragraph lengths to our paragraph-level models, we divide paragraphs in the PDTB test set into several subsets based on the number of DUs in a paragraph, and then evaluate our proposed models on each subset separately. From Figure 4 , we can see that our paragraph-level models (the latter three) overall outperform DU-pair baselines across all the subsets. 
As expected, the paragraph-level models achieve clear performance gains on long paragraphs (with more than 5 DUs) by extensively modeling mutual influences of DUs in a paragraph. But somewhat surprisingly, the paragraph-level models achieve noticeable performance gains on short paragraphs (with 2 or 3 DUs) as well. We hypothesize that by learning more appropriate discourse-aware DU representations in long paragraphs, our paragraph-level models reduce bias of using DU representations in predicting discourse relations, which benefits discourse relation prediction in short paragraphs as well.\n\nFigure 2: Untie Parameters in the Prediction Layer\n\nFigure 3: Fine-tune Discourse Relations with a CRF layer.\n\nTable 1: Distributions of Four Top-level Discourse Relations in PDTB.\n\nTable 5: Multi-class Classification Results of Ensemble Models on PDTB.\n\nQuery: What discourse relations does it work best/worst for?\n\nJSON Response:", "role": "user" }, { "content": "{\"answer\": \"Best: Expansion (Exp). Worst: Comparison (Comp).\", \"evidence\": [\"The Penn Discourse Treebank (PDTB): We experimented with PDTB v2.0 BIBREF7 which is the largest annotated corpus containing 36k discourse relations in 2,159 Wall Street Journal (WSJ) articles. In this work, we focus on the top-level discourse relation senses which are consist of four major semantic classes: Comparison (Comp), Contingency (Cont), Expansion (Exp) and Temporal (Temp). We followed the same PDTB section partition BIBREF12 as previous work and used sections 2-20 as training set, sections 21-22 as test set, and sections 0-1 as development set. Table 1 presents the data distributions we collected from PDTB.\", \"Multi-way Classification: The first section of table 3 shows macro average F1-scores and accuracies of previous works. The second section of table 3 shows the multi-class classification results of our implemented baseline systems. Consistent with results of previous works, neural tensors, when applied to Bi-LSTMs, improved implicit discourse relation prediction performance. However, the performance on the three small classes (Comp, Cont and Temp) remains low.\"]}", "role": "assistant" } ]
science.qasper_abstractive_qa
Examine the provided sections from a research document and address the accompanying query. The text is divided by newlines. Document Content: Combating Adversarial Misspellings with Robust Word Recognition To combat adversarial spelling mistakes, we propose placing a word recognition model in front of the downstream classifier. Our word recognition models build upon the RNN semicharacter architecture, introducing several new backoff strategies for handling rare and unseen words. Trained to recognize words corrupted by random adds, drops, swaps, and keyboard mistakes, our method achieves 32% relative (and 3.3% absolute) error reduction over the vanilla semi-character model. Notably, our pipeline confers robustness on the downstream classifier, outperforming both adversarial training and off-the-shelf spell checkers. Against a BERT model fine-tuned for sentiment analysis, a single adversarially-chosen character attack lowers accuracy from 90.3% to 45.8%. Our defense restores accuracy to 75% 1 . Surprisingly, better word recognition does not always entail greater robustness. Our analysis reveals that robustness also depends upon a quantity that we denote the sensitivity. Introduction Despite the rapid progress of deep learning techniques on diverse supervised learning tasks, these models remain brittle to subtle shifts in the data distribution. Even when the permissible changes are confined to barely-perceptible perturbations, training robust models remains an open challenge. Following the discovery that imperceptible attacks could cause image recognition models to misclassify examples BIBREF0 , a veritable sub-field has emerged in which authors iteratively propose attacks and countermeasures. For all the interest in adversarial computer vision, these attacks are rarely encountered outside of academic research. However, adversarial misspellings constitute a longstanding real-world problem. Spammers continually bombard email servers, subtly misspelling words in efforts to evade spam detection while preserving the emails' intended meaning BIBREF1 , BIBREF2 . As another example, programmatic censorship on the Internet has spurred communities to adopt similar methods to communicate surreptitiously BIBREF3 . In this paper, we focus on adversarially-chosen spelling mistakes in the context of text classification, addressing the following attack types: dropping, adding, and swapping internal characters within words. These perturbations are inspired by psycholinguistic studies BIBREF4 , BIBREF5 which demonstrated that humans can comprehend text altered by jumbling internal characters, provided that the first and last characters of each word remain unperturbed. First, in experiments addressing both BiLSTM and fine-tuned BERT models, comprising four different input formats: word-only, char-only, word+char, and word-piece BIBREF6 , we demonstrate that an adversary can degrade a classifier's performance to that achieved by random guessing. This requires altering just two characters per sentence. Such modifications might flip words either to a different word in the vocabulary or, more often, to the out-of-vocabulary token UNK. Consequently, adversarial edits can degrade a word-level model by transforming the informative words to UNK. Intuitively, one might suspect that word-piece and character-level models would be less susceptible to spelling attacks as they can make use of the residual word context. However, our experiments demonstrate that character and word-piece models are in fact more vulnerable. 
We show that this is due to the adversary's effective capacity for finer grained manipulations on these models. While against a word-level model, the adversary is mostly limited to UNK-ing words, against a word-piece or character-level model, each character-level add, drop, or swap produces a distinct input, providing the adversary with a greater set of options. Second, we evaluate first-line techniques including data augmentation and adversarial training, demonstrating that they offer only marginal benefits here, e.g., a BERT model achieving $90.3$ accuracy on a sentiment classification task, is degraded to $64.1$ by an adversarially-chosen 1-character swap in the sentence, which can only be restored to $69.2$ by adversarial training. Third (our primary contribution), we propose a task-agnostic defense, attaching a word recognition model that predicts each word in a sentence given a full sequence of (possibly misspelled) inputs. The word recognition model's outputs form the input to a downstream classification model. Our word recognition models build upon the RNN-based semi-character word recognition model due to BIBREF7 . While our word recognizers are trained on domain-specific text from the task at hand, they often predict UNK at test time, owing to the small domain-specific vocabulary. To handle unobserved and rare words, we propose several backoff strategies including falling back on a generic word recognizer trained on a larger corpus. Incorporating our defenses, BERT models subject to 1-character attacks are restored to $88.3$ , $81.1$ , $78.0$ accuracy for swap, drop, add attacks respectively, as compared to $69.2$ , $63.6$ , and $50.0$ for adversarial training Fourth, we offer a detailed qualitative analysis, demonstrating that a low word error rate alone is insufficient for a word recognizer to confer robustness on the downstream task. Additionally, we find that it is important that the recognition model supply few degrees of freedom to an attacker. We provide a metric to quantify this notion of sensitivity in word recognition models and study its relation to robustness empirically. Models with low sensitivity and word error rate are most robust. Model Sensitivity In computer vision, an important factor determining the success of an adversary is the norm constraint on the perturbations allowed to an image ( $|| \bf x - \bf x^{\prime }||_{\infty } < \epsilon $ ). Higher values of $\epsilon $ lead to a higher chance of mis-classification for at least one $\bf x^{\prime }$ . Defense methods such as quantization BIBREF22 and thermometer encoding BIBREF23 try to reduce the space of perturbations available to the adversary by making the model invariant to small changes in the input. In NLP, we often get such invariance for free, e.g., for a word-level model, most of the perturbations produced by our character-level adversary lead to an UNK at its input. If the model is robust to the presence of these UNK tokens, there is little room for an adversary to manipulate it. Character-level models, on the other hand, despite their superior performance in many tasks, do not enjoy such invariance. This characteristic invariance could be exploited by an attacker. Thus, to limit the number of different inputs to the classifier, we wish to reduce the number of distinct word recognition outputs that an attacker can induce, not just the number of words on which the model is “fooled”. We denote this property of a model as its sensitivity. 
We can quantify this notion for a word recognition system $W$ as the expected number of unique outputs it assigns to a set of adversarial perturbations. Given a sentence $s$ from the set of sentences $\mathcal {S}$ , let $A(s) = {s_1}^{\prime } , {s_2}^{\prime }, \dots , {s_n}^{\prime }$ denote the set of $n$ perturbations to it under attack type $A$ , and let $V$ be the function that maps strings to an input representation for the downstream classifier. For a word level model, $V$ would transform sentences to a sequence of word ids, mapping OOV words to the same UNK ID. Whereas, for a char (or word+char, word-piece) model, $V$ would map inputs to a sequence of character IDs. Formally, sensitivity is defined as $$S_{W,V}^A=\mathbb {E}_{s}\left[\frac{\#_{u}(V \circ W({s_1}^{\prime }), \dots , V \circ W({s_n}^{\prime }))}{n}\right] ,$$ (Eq. 12) where $V \circ W (s_i)$ returns the input representation (of the downstream classifier) for the output string produced by the word-recognizer $W$ using $s_i$ and $\#_{u}(\cdot )$ counts the number of unique arguments. Intuitively, we expect a high value of $S_{W, V}^A$ to lead to a lower robustness of the downstream classifier, since the adversary has more degrees of freedom to attack the classifier. Thus, when using word recognition as a defense, it is prudent to design a low sensitivity system with a low error rate. However, as we will demonstrate, there is often a trade-off between sensitivity and error rate. Robustness to adversarial attacks We use sentiment analysis and paraphrase detection as downstream tasks, as for these two tasks, 1-2 character edits do not change the output labels. For sentiment classification, we systematically study the effect of character-level adversarial attacks on two architectures and four different input formats. The first architecture encodes the input sentence into a sequence of embeddings, which are then sequentially processed by a BiLSTM. The first and last states of the BiLSTM are then used by the softmax layer to predict the sentiment of the input. We consider three input formats for this architecture: (1) Word-only: where the input words are encoded using a lookup table; (2) Char-only: where the input words are encoded using a separate single-layered BiLSTM over their characters; and (3) Word $+$ Char: where the input words are encoded using a concatenation of (1) and (2) . The second architecture uses the fine-tuned BERT model BIBREF26 , with an input format of word-piece tokenization. This model has recently set a new state-of-the-art on several NLP benchmarks, including the sentiment analysis task we consider here. All models are trained and evaluated on the binary version of the sentence-level Stanford Sentiment Treebank BIBREF24 dataset with only positive and negative reviews. We also consider the task of paraphrase detection. Here too, we make use of the fine-tuned BERT BIBREF26 , which is trained and evaluated on the Microsoft Research Paraphrase Corpus (MRPC) BIBREF27 . Two common methods for dealing with adversarial examples include: (1) data augmentation (DA) BIBREF28 ; and (2) adversarial training (Adv) BIBREF29 . In DA, the trained model is fine-tuned after augmenting the training set with an equal number of examples randomly attacked with a 1-character edit. In Adv, the trained model is fine-tuned with additional adversarial examples (selected at random) that produce incorrect predictions from the current-state classifier. 
The process is repeated iteratively, generating and adding newer adversarial examples from the updated classifier model, until the adversarial accuracy on dev set stops improving. In Table 3 , we examine the robustness of the sentiment models under each attack and defense method. In the absence of any attack or defense, BERT (a word-piece model) performs the best ( $90.3\%$ ) followed by word+char models ( $80.5\%$ ), word-only models ( $79.2\%$ ) and then char-only models ( $70.3\%$ ). However, even single-character attacks (chosen adversarially) can be catastrophic, resulting in a significantly degraded performance of $46\%$ , $57\%$ , $59\%$ and $33\%$ , respectively under the `all' setting. Intuitively, one might suppose that word-piece and character-level models would be more robust to such attacks given they can make use of the remaining context. However, we find that they are the more susceptible. To see why, note that the word `beautiful' can only be altered in a few ways for word-only models, either leading to an UNK or an existing vocabulary word, whereas, word-piece and character-only models treat each unique character combination differently. This provides more variations that an attacker can exploit. Following similar reasoning, add and key attacks pose a greater threat than swap and drop attacks. The robustness of different models can be ordered as word-only $>$ word+char $>$ char-only $\sim $ word-piece, and the efficacy of different attacks as add $>$ key $>$ drop $>$ swap. Next, we scrutinize the effectiveness of defense methods when faced against adversarially chosen attacks. Clearly from table 3 , DA and Adv are not effective in this case. We observed that despite a low training error, these models were not able to generalize to attacks on newer words at test time. ATD spell corrector is the most effective on keyboard attacks, but performs poorly on other attack types, particularly the add attack strategy. The ScRNN model with pass-through backoff offers better protection, bringing back the adversarial accuracy within $5\%$ range for the swap attack. It is also effective under other attack classes, and can mitigate the adversarial effect in word-piece models by $21\%$ , character-only models by $19\%$ , and in word, and word+char models by over $4.5\%$ . This suggests that the direct training signal of word error correction is more effective than the indirect signal of sentiment classification available to DA and Adv for model robustness. We observe additional gains by using background models as a backoff alternative, because of its lower word error rate (WER), especially, under the swap and drop attacks. However, these gains do not consistently translate in all other settings, as lower WER is necessary but not sufficient. Besides lower error rate, we find that a solid defense should furnish the attacker the fewest options to attack, i.e. it should have a low sensitivity. As we shall see in section § "Understanding Model Sensitivity" , the backoff neutral variation has the lowest sensitivity due to mapping UNK predictions to a fixed neutral word. Thus, it results in the highest robustness on most of the attack types for all four model classes. Table 4 shows the accuracy of BERT on 200 examples from the dev set of the MRPC paraphrase detection task under various attack and defense settings. We re-trained the ScRNN model variants on the MRPC training set for these experiments. 
Again, we find that simple 1-2 character attacks can bring down the accuracy of BERT significantly ( $89\%$ to $31\%$ ). Word recognition models can provide an effective defense, with both our pass-through and neutral variants recovering most of the accuracy. While the neutral backoff model is effective on 2-char attacks, it hurts performance in the no attack setting, since it incorrectly modifies certain correctly spelled entity names. Since the two variants are already effective, we did not train a background model for this task. Table 2: Word Error Rates (WER) of ScRNN with each backoff strategy, plus ATD and an ScRNN trained only on the background corpus (78K vocabulary) The error rates include 5.25% OOV words. Figure 2: Effect of sensitivity and word error rate on robustness (depicted by the bubble sizes) in word-only models (left) and char-only models (right). Query: Why is the adversarial setting appropriate for misspelling recognition? JSON Response:
{"answer": "Adversarial misspellings are a real-world problem", "evidence": ["For all the interest in adversarial computer vision, these attacks are rarely encountered outside of academic research. However, adversarial misspellings constitute a longstanding real-world problem. Spammers continually bombard email servers, subtly misspelling words in efforts to evade spam detection while preserving the emails\u0027 intended meaning BIBREF1 , BIBREF2 . As another example, programmatic censorship on the Internet has spurred communities to adopt similar methods to communicate surreptitiously BIBREF3 ."]}
science.qasper_abstractive_qa.169.3
[ { "content": "Examine the provided sections from a research document and address the accompanying query. The text is divided by newlines.\n\nDocument Content:\n\nCombating Adversarial Misspellings with Robust Word Recognition\n\nTo combat adversarial spelling mistakes, we propose placing a word recognition model in front of the downstream classifier. Our word recognition models build upon the RNN semicharacter architecture, introducing several new backoff strategies for handling rare and unseen words. Trained to recognize words corrupted by random adds, drops, swaps, and keyboard mistakes, our method achieves 32% relative (and 3.3% absolute) error reduction over the vanilla semi-character model. Notably, our pipeline confers robustness on the downstream classifier, outperforming both adversarial training and off-the-shelf spell checkers. Against a BERT model fine-tuned for sentiment analysis, a single adversarially-chosen character attack lowers accuracy from 90.3% to 45.8%. Our defense restores accuracy to 75%\n1 . Surprisingly, better word recognition does not always entail greater robustness. Our analysis reveals that robustness also depends upon a quantity that we denote the sensitivity.\n\nIntroduction\nDespite the rapid progress of deep learning techniques on diverse supervised learning tasks, these models remain brittle to subtle shifts in the data distribution. Even when the permissible changes are confined to barely-perceptible perturbations, training robust models remains an open challenge. Following the discovery that imperceptible attacks could cause image recognition models to misclassify examples BIBREF0 , a veritable sub-field has emerged in which authors iteratively propose attacks and countermeasures.\nFor all the interest in adversarial computer vision, these attacks are rarely encountered outside of academic research. However, adversarial misspellings constitute a longstanding real-world problem. Spammers continually bombard email servers, subtly misspelling words in efforts to evade spam detection while preserving the emails' intended meaning BIBREF1 , BIBREF2 . As another example, programmatic censorship on the Internet has spurred communities to adopt similar methods to communicate surreptitiously BIBREF3 .\nIn this paper, we focus on adversarially-chosen spelling mistakes in the context of text classification, addressing the following attack types: dropping, adding, and swapping internal characters within words. These perturbations are inspired by psycholinguistic studies BIBREF4 , BIBREF5 which demonstrated that humans can comprehend text altered by jumbling internal characters, provided that the first and last characters of each word remain unperturbed.\nFirst, in experiments addressing both BiLSTM and fine-tuned BERT models, comprising four different input formats: word-only, char-only, word+char, and word-piece BIBREF6 , we demonstrate that an adversary can degrade a classifier's performance to that achieved by random guessing. This requires altering just two characters per sentence. Such modifications might flip words either to a different word in the vocabulary or, more often, to the out-of-vocabulary token UNK. Consequently, adversarial edits can degrade a word-level model by transforming the informative words to UNK. Intuitively, one might suspect that word-piece and character-level models would be less susceptible to spelling attacks as they can make use of the residual word context. 
However, our experiments demonstrate that character and word-piece models are in fact more vulnerable. We show that this is due to the adversary's effective capacity for finer grained manipulations on these models. While against a word-level model, the adversary is mostly limited to UNK-ing words, against a word-piece or character-level model, each character-level add, drop, or swap produces a distinct input, providing the adversary with a greater set of options.\nSecond, we evaluate first-line techniques including data augmentation and adversarial training, demonstrating that they offer only marginal benefits here, e.g., a BERT model achieving $90.3$ accuracy on a sentiment classification task, is degraded to $64.1$ by an adversarially-chosen 1-character swap in the sentence, which can only be restored to $69.2$ by adversarial training.\nThird (our primary contribution), we propose a task-agnostic defense, attaching a word recognition model that predicts each word in a sentence given a full sequence of (possibly misspelled) inputs. The word recognition model's outputs form the input to a downstream classification model. Our word recognition models build upon the RNN-based semi-character word recognition model due to BIBREF7 . While our word recognizers are trained on domain-specific text from the task at hand, they often predict UNK at test time, owing to the small domain-specific vocabulary. To handle unobserved and rare words, we propose several backoff strategies including falling back on a generic word recognizer trained on a larger corpus. Incorporating our defenses, BERT models subject to 1-character attacks are restored to $88.3$ , $81.1$ , $78.0$ accuracy for swap, drop, add attacks respectively, as compared to $69.2$ , $63.6$ , and $50.0$ for adversarial training\nFourth, we offer a detailed qualitative analysis, demonstrating that a low word error rate alone is insufficient for a word recognizer to confer robustness on the downstream task. Additionally, we find that it is important that the recognition model supply few degrees of freedom to an attacker. We provide a metric to quantify this notion of sensitivity in word recognition models and study its relation to robustness empirically. Models with low sensitivity and word error rate are most robust.\n\nModel Sensitivity\nIn computer vision, an important factor determining the success of an adversary is the norm constraint on the perturbations allowed to an image ( $|| \\bf x - \\bf x^{\\prime }||_{\\infty } < \\epsilon $ ). Higher values of $\\epsilon $ lead to a higher chance of mis-classification for at least one $\\bf x^{\\prime }$ . Defense methods such as quantization BIBREF22 and thermometer encoding BIBREF23 try to reduce the space of perturbations available to the adversary by making the model invariant to small changes in the input.\nIn NLP, we often get such invariance for free, e.g., for a word-level model, most of the perturbations produced by our character-level adversary lead to an UNK at its input. If the model is robust to the presence of these UNK tokens, there is little room for an adversary to manipulate it. Character-level models, on the other hand, despite their superior performance in many tasks, do not enjoy such invariance. This characteristic invariance could be exploited by an attacker. 
Thus, to limit the number of different inputs to the classifier, we wish to reduce the number of distinct word recognition outputs that an attacker can induce, not just the number of words on which the model is “fooled”. We denote this property of a model as its sensitivity.\nWe can quantify this notion for a word recognition system $W$ as the expected number of unique outputs it assigns to a set of adversarial perturbations. Given a sentence $s$ from the set of sentences $\\mathcal {S}$ , let $A(s) = {s_1}^{\\prime } , {s_2}^{\\prime }, \\dots , {s_n}^{\\prime }$ denote the set of $n$ perturbations to it under attack type $A$ , and let $V$ be the function that maps strings to an input representation for the downstream classifier. For a word level model, $V$ would transform sentences to a sequence of word ids, mapping OOV words to the same UNK ID. Whereas, for a char (or word+char, word-piece) model, $V$ would map inputs to a sequence of character IDs. Formally, sensitivity is defined as \n$$S_{W,V}^A=\\mathbb {E}_{s}\\left[\\frac{\\#_{u}(V \\circ W({s_1}^{\\prime }), \\dots , V \\circ W({s_n}^{\\prime }))}{n}\\right] ,$$ (Eq. 12) \nwhere $V \\circ W (s_i)$ returns the input representation (of the downstream classifier) for the output string produced by the word-recognizer $W$ using $s_i$ and $\\#_{u}(\\cdot )$ counts the number of unique arguments.\nIntuitively, we expect a high value of $S_{W, V}^A$ to lead to a lower robustness of the downstream classifier, since the adversary has more degrees of freedom to attack the classifier. Thus, when using word recognition as a defense, it is prudent to design a low sensitivity system with a low error rate. However, as we will demonstrate, there is often a trade-off between sensitivity and error rate.\n\nRobustness to adversarial attacks\nWe use sentiment analysis and paraphrase detection as downstream tasks, as for these two tasks, 1-2 character edits do not change the output labels.\nFor sentiment classification, we systematically study the effect of character-level adversarial attacks on two architectures and four different input formats. The first architecture encodes the input sentence into a sequence of embeddings, which are then sequentially processed by a BiLSTM. The first and last states of the BiLSTM are then used by the softmax layer to predict the sentiment of the input. We consider three input formats for this architecture: (1) Word-only: where the input words are encoded using a lookup table; (2) Char-only: where the input words are encoded using a separate single-layered BiLSTM over their characters; and (3) Word $+$ Char: where the input words are encoded using a concatenation of (1) and (2) .\nThe second architecture uses the fine-tuned BERT model BIBREF26 , with an input format of word-piece tokenization. This model has recently set a new state-of-the-art on several NLP benchmarks, including the sentiment analysis task we consider here. All models are trained and evaluated on the binary version of the sentence-level Stanford Sentiment Treebank BIBREF24 dataset with only positive and negative reviews.\nWe also consider the task of paraphrase detection. Here too, we make use of the fine-tuned BERT BIBREF26 , which is trained and evaluated on the Microsoft Research Paraphrase Corpus (MRPC) BIBREF27 .\nTwo common methods for dealing with adversarial examples include: (1) data augmentation (DA) BIBREF28 ; and (2) adversarial training (Adv) BIBREF29 . 
In DA, the trained model is fine-tuned after augmenting the training set with an equal number of examples randomly attacked with a 1-character edit. In Adv, the trained model is fine-tuned with additional adversarial examples (selected at random) that produce incorrect predictions from the current-state classifier. The process is repeated iteratively, generating and adding newer adversarial examples from the updated classifier model, until the adversarial accuracy on dev set stops improving.\nIn Table 3 , we examine the robustness of the sentiment models under each attack and defense method. In the absence of any attack or defense, BERT (a word-piece model) performs the best ( $90.3\\%$ ) followed by word+char models ( $80.5\\%$ ), word-only models ( $79.2\\%$ ) and then char-only models ( $70.3\\%$ ). However, even single-character attacks (chosen adversarially) can be catastrophic, resulting in a significantly degraded performance of $46\\%$ , $57\\%$ , $59\\%$ and $33\\%$ , respectively under the `all' setting.\nIntuitively, one might suppose that word-piece and character-level models would be more robust to such attacks given they can make use of the remaining context. However, we find that they are the more susceptible. To see why, note that the word `beautiful' can only be altered in a few ways for word-only models, either leading to an UNK or an existing vocabulary word, whereas, word-piece and character-only models treat each unique character combination differently. This provides more variations that an attacker can exploit. Following similar reasoning, add and key attacks pose a greater threat than swap and drop attacks. The robustness of different models can be ordered as word-only $>$ word+char $>$ char-only $\\sim $ word-piece, and the efficacy of different attacks as add $>$ key $>$ drop $>$ swap.\nNext, we scrutinize the effectiveness of defense methods when faced against adversarially chosen attacks. Clearly from table 3 , DA and Adv are not effective in this case. We observed that despite a low training error, these models were not able to generalize to attacks on newer words at test time. ATD spell corrector is the most effective on keyboard attacks, but performs poorly on other attack types, particularly the add attack strategy.\nThe ScRNN model with pass-through backoff offers better protection, bringing back the adversarial accuracy within $5\\%$ range for the swap attack. It is also effective under other attack classes, and can mitigate the adversarial effect in word-piece models by $21\\%$ , character-only models by $19\\%$ , and in word, and word+char models by over $4.5\\%$ . This suggests that the direct training signal of word error correction is more effective than the indirect signal of sentiment classification available to DA and Adv for model robustness.\nWe observe additional gains by using background models as a backoff alternative, because of its lower word error rate (WER), especially, under the swap and drop attacks. However, these gains do not consistently translate in all other settings, as lower WER is necessary but not sufficient. Besides lower error rate, we find that a solid defense should furnish the attacker the fewest options to attack, i.e. it should have a low sensitivity.\nAs we shall see in section § \"Understanding Model Sensitivity\" , the backoff neutral variation has the lowest sensitivity due to mapping UNK predictions to a fixed neutral word. 
Thus, it results in the highest robustness on most of the attack types for all four model classes.\nTable 4 shows the accuracy of BERT on 200 examples from the dev set of the MRPC paraphrase detection task under various attack and defense settings. We re-trained the ScRNN model variants on the MRPC training set for these experiments. Again, we find that simple 1-2 character attacks can bring down the accuracy of BERT significantly ( $89\\%$ to $31\\%$ ). Word recognition models can provide an effective defense, with both our pass-through and neutral variants recovering most of the accuracy. While the neutral backoff model is effective on 2-char attacks, it hurts performance in the no attack setting, since it incorrectly modifies certain correctly spelled entity names. Since the two variants are already effective, we did not train a background model for this task.\n\nTable 2: Word Error Rates (WER) of ScRNN with each backoff strategy, plus ATD and an ScRNN trained only on the background corpus (78K vocabulary) The error rates include 5.25% OOV words.\n\nFigure 2: Effect of sensitivity and word error rate on robustness (depicted by the bubble sizes) in word-only models (left) and char-only models (right).\n\nQuery: Why is the adversarial setting appropriate for misspelling recognition?\n\nJSON Response:", "role": "user" }, { "content": "{\"answer\": \"Adversarial misspellings are a real-world problem\", \"evidence\": [\"For all the interest in adversarial computer vision, these attacks are rarely encountered outside of academic research. However, adversarial misspellings constitute a longstanding real-world problem. Spammers continually bombard email servers, subtly misspelling words in efforts to evade spam detection while preserving the emails\\u0027 intended meaning BIBREF1 , BIBREF2 . As another example, programmatic censorship on the Internet has spurred communities to adopt similar methods to communicate surreptitiously BIBREF3 .\"]}", "role": "assistant" } ]
science.multicite_intent_classification
Examine the text below, which includes a citation marked by "<cite>" and "</cite>". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text. Provide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose. Text with Citation: We extend the tests made in <cite>Agirre et al. (2008)</cite> , who used different types of semantic information, obtaining significant improvements in two constituency parsers, showing how semantic information helps in constituency parsing.
["Extends", "Differences"]
science.multicite_intent_classification.1198
[ { "content": "Examine the text below, which includes a citation marked by \"<cite>\" and \"</cite>\". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.\n\nProvide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.\n\nText with Citation:\nWe extend the tests made in <cite>Agirre et al. (2008)</cite> , who used different types of semantic information, obtaining significant improvements in two constituency parsers, showing how semantic information helps in constituency parsing.", "role": "user" }, { "content": "[\"Extends\", \"Differences\"]", "role": "assistant" } ]
science.scifact_entailment
Determine the relationship between them and generate a JSON response. Scientific Claim: The effect of Lipopolysaccharides on kidney barrier function is dependent on inflammation levels. Research Abstract: Podocyte dysfunction, represented by foot process effacement and proteinuria, is often the starting point for progressive kidney disease. Therapies aimed at the cellular level of the disease are currently not available. Here we show that induction of urokinase receptor (uPAR) signaling in podocytes leads to foot process effacement and urinary protein loss via a mechanism that includes lipid-dependent activation of αvβ3 integrin. Mice lacking uPAR (Plaur−/−) are protected from lipopolysaccharide (LPS)-mediated proteinuria but develop disease after expression of a constitutively active β3 integrin. Gene transfer studies reveal a prerequisite for uPAR expression in podocytes, but not in endothelial cells, for the development of LPS-mediated proteinuria. Mechanistically, uPAR is required to activate αvβ3 integrin in podocytes, promoting cell motility and activation of the small GTPases Cdc42 and Rac1. Blockade of αvβ3 integrin reduces podocyte motility in vitro and lowers proteinuria in mice. Our findings show a physiological role for uPAR signaling in the regulation of kidney permeability.
{"verdict": "NEI", "evidence": []}
science.scifact_entailment.791
[ { "content": "Determine the relationship between them and generate a JSON response. Scientific Claim: \nThe effect of Lipopolysaccharides on kidney barrier function is dependent on inflammation levels.\n\nResearch Abstract:\nPodocyte dysfunction, represented by foot process effacement and proteinuria, is often the starting point for progressive kidney disease.\nTherapies aimed at the cellular level of the disease are currently not available.\nHere we show that induction of urokinase receptor (uPAR) signaling in podocytes leads to foot process effacement and urinary protein loss via a mechanism that includes lipid-dependent activation of αvβ3 integrin.\nMice lacking uPAR (Plaur−/−) are protected from lipopolysaccharide (LPS)-mediated proteinuria but develop disease after expression of a constitutively active β3 integrin.\nGene transfer studies reveal a prerequisite for uPAR expression in podocytes, but not in endothelial cells, for the development of LPS-mediated proteinuria.\nMechanistically, uPAR is required to activate αvβ3 integrin in podocytes, promoting cell motility and activation of the small GTPases Cdc42 and Rac1.\nBlockade of αvβ3 integrin reduces podocyte motility in vitro and lowers proteinuria in mice.\nOur findings show a physiological role for uPAR signaling in the regulation of kidney permeability.", "role": "user" }, { "content": "{\"verdict\": \"NEI\", \"evidence\": []}", "role": "assistant" } ]
science.multicite_intent_classification
Examine the text below, which includes a citation marked by "<cite>" and "</cite>". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text. Provide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose. Text with Citation: First, Chen et al. (2017) compared results across SemEval's laptop and restaurant reviews in English (Pontiki et al., 2014) , a Twitter dataset <cite>(Dong et al., 2014)</cite> and their own Chinese news comments dataset.
["Background"]
science.multicite_intent_classification.1935
[ { "content": "Examine the text below, which includes a citation marked by \"<cite>\" and \"</cite>\". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.\n\nProvide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.\n\nText with Citation:\nFirst, Chen et al. (2017) compared results across SemEval's laptop and restaurant reviews in English (Pontiki et al., 2014) , a Twitter dataset <cite>(Dong et al., 2014)</cite> and their own Chinese news comments dataset.", "role": "user" }, { "content": "[\"Background\"]", "role": "assistant" } ]
science.multicite_intent_classification
Examine the text below, which includes a citation marked by "<cite>" and "</cite>". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text. Provide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose. Text with Citation: The real challenge for psycholinguistic research is not the extent to which the theoretical framework above is supported by current results in the lab but rather to increase the precision of dependency length measurements and investigate the experimental conditions in which the following theoretical predictions are observed [20, <cite>21]</cite> : one principle beating the other, coexistence, collaboration between principles or the very same trade-off causing the delusion that word order constraints have relaxed dramatically or even disappeared.
["Uses", "Background"]
science.multicite_intent_classification.228
[ { "content": "Examine the text below, which includes a citation marked by \"<cite>\" and \"</cite>\". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.\n\nProvide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.\n\nText with Citation:\nThe real challenge for psycholinguistic research is not the extent to which the theoretical framework above is supported by current results in the lab but rather to increase the precision of dependency length measurements and investigate the experimental conditions in which the following theoretical predictions are observed [20, <cite>21]</cite> : one principle beating the other, coexistence, collaboration between principles or the very same trade-off causing the delusion that word order constraints have relaxed dramatically or even disappeared.", "role": "user" }, { "content": "[\"Uses\", \"Background\"]", "role": "assistant" } ]
science.scifact_entailment
Determine the relationship between them and generate a JSON response. Scientific Claim: Rotator cuff exercises are more effective than general exercise therapy in reducing pain and improving function of the shoulder. Research Abstract: OBJECTIVE To evaluate if a specific exercise strategy, targeting the rotator cuff and scapula stabilisers, improves shoulder function and pain more than unspecific exercises in patients with subacromial impingement syndrome, thereby decreasing the need for arthroscopic subacromial decompression. DESIGN Randomised, participant and single assessor blinded, controlled study. SETTING Department of orthopaedics in a Swedish university hospital. PARTICIPANTS 102 patients with long standing (over six months) persistent subacromial impingement syndrome in whom earlier conservative treatment had failed, recruited through orthopaedic specialists. INTERVENTIONS The specific exercise strategy consisted of strengthening eccentric exercises for the rotator cuff and concentric/eccentric exercises for the scapula stabilisers in combination with manual mobilisation. The control exercise programme consisted of unspecific movement exercises for the neck and shoulder. Patients in both groups received five to six individual guided treatment sessions during 12 weeks. In between these supervised sessions the participants performed home exercises once or twice a day for 12 weeks. MAIN OUTCOME MEASURES The primary outcome was the Constant-Murley shoulder assessment score evaluating shoulder function and pain. Secondary outcomes were patients' global impression of change because of treatment and decision regarding surgery. RESULTS Most (97, 95%) participants completed the 12 week study. There was a significantly greater improvement in the Constant-Murley score in the specific exercise group than in the control exercise group (24 points (95% confidence interval 19 to 28.0) v 9 points (5 to 13); mean difference between group: 15 points (8.5 to 20.6)). Significantly more patients in the specific exercise group reported successful outcome (defined as large improvement or recovered) in the patients' global assessment of change because of treatment: 69% (35/51) v 24% (11/46); odds ratio 7.6, 3.1 to 18.9; P<0.001. A significantly lower proportion of patients in the specific exercise group subsequently chose to undergo surgery: 20% (10/51) v 63% (29/46); odds ratio 7.7, 3.1 to 19.4; P<0.001). CONCLUSION A specific exercise strategy, focusing on strengthening eccentric exercises for the rotator cuff and concentric/eccentric exercises for the scapula stabilisers, is effective in reducing pain and improving shoulder function in patients with persistent subacromial impingement syndrome. By extension, this exercise strategy reduces the need for arthroscopic subacromial decompression within the three month timeframe used in the study. TRIAL REGISTRATION Clinical trials NCT01037673.
{"verdict": "SUPPORT", "evidence": ["There was a significantly greater improvement in the Constant-Murley score in the specific exercise group than in the control exercise group (24 points (95% confidence interval 19 to 28.0) v 9 points (5 to 13); mean difference between group: 15 points (8.5 to 20.6))."]}
science.scifact_entailment.697
[ { "content": "Determine the relationship between them and generate a JSON response. Scientific Claim: \nRotator cuff exercises are more effective than general exercise therapy in reducing pain and improving function of the shoulder.\n\nResearch Abstract:\nOBJECTIVE To evaluate if a specific exercise strategy, targeting the rotator cuff and scapula stabilisers, improves shoulder function and pain more than unspecific exercises in patients with subacromial impingement syndrome, thereby decreasing the need for arthroscopic subacromial decompression.\nDESIGN Randomised, participant and single assessor blinded, controlled study.\nSETTING Department of orthopaedics in a Swedish university hospital.\nPARTICIPANTS 102 patients with long standing (over six months) persistent subacromial impingement syndrome in whom earlier conservative treatment had failed, recruited through orthopaedic specialists.\nINTERVENTIONS The specific exercise strategy consisted of strengthening eccentric exercises for the rotator cuff and concentric/eccentric exercises for the scapula stabilisers in combination with manual mobilisation.\nThe control exercise programme consisted of unspecific movement exercises for the neck and shoulder.\nPatients in both groups received five to six individual guided treatment sessions during 12 weeks.\nIn between these supervised sessions the participants performed home exercises once or twice a day for 12 weeks.\nMAIN OUTCOME MEASURES The primary outcome was the Constant-Murley shoulder assessment score evaluating shoulder function and pain.\nSecondary outcomes were patients' global impression of change because of treatment and decision regarding surgery.\nRESULTS Most (97, 95%) participants completed the 12 week study.\nThere was a significantly greater improvement in the Constant-Murley score in the specific exercise group than in the control exercise group (24 points (95% confidence interval 19 to 28.0) v 9 points (5 to 13); mean difference between group: 15 points (8.5 to 20.6)).\nSignificantly more patients in the specific exercise group reported successful outcome (defined as large improvement or recovered) in the patients' global assessment of change because of treatment: 69% (35/51) v 24% (11/46); odds ratio 7.6, 3.1 to 18.9; P<0.001.\nA significantly lower proportion of patients in the specific exercise group subsequently chose to undergo surgery: 20% (10/51) v 63% (29/46); odds ratio 7.7, 3.1 to 19.4; P<0.001).\nCONCLUSION A specific exercise strategy, focusing on strengthening eccentric exercises for the rotator cuff and concentric/eccentric exercises for the scapula stabilisers, is effective in reducing pain and improving shoulder function in patients with persistent subacromial impingement syndrome.\nBy extension, this exercise strategy reduces the need for arthroscopic subacromial decompression within the three month timeframe used in the study.\nTRIAL REGISTRATION Clinical trials NCT01037673.", "role": "user" }, { "content": "{\"verdict\": \"SUPPORT\", \"evidence\": [\"There was a significantly greater improvement in the Constant-Murley score in the specific exercise group than in the control exercise group (24 points (95% confidence interval 19 to 28.0) v 9 points (5 to 13); mean difference between group: 15 points (8.5 to 20.6)).\"]}", "role": "assistant" } ]
science.qasper_abstractive_qa
Examine the provided sections from a research document and address the accompanying query. The text is divided by newlines. Document Content: Deep LSTM for Large Vocabulary Continuous Speech Recognition Recurrent neural networks (RNNs), especially long short-term memory (LSTM) RNNs, are effective network for sequential task like speech recognition. Deeper LSTM models perform well on large vocabulary continuous speech recognition, because of their impressive learning ability. However, it is more difficult to train a deeper network. We introduce a training framework with layer-wise training and exponential moving average methods for deeper LSTM models. It is a competitive framework that LSTM models of more than 7 layers are successfully trained on Shenma voice search data in Mandarin and they outperform the deep LSTM models trained by conventional approach. Moreover, in order for online streaming speech recognition applications, the shallow model with low real time factor is distilled from the very deep model. The recognition accuracy have little loss in the distillation process. Therefore, the model trained with the proposed training framework reduces relative 14\% character error rate, compared to original model which has the similar real-time capability. Furthermore, the novel transfer learning strategy with segmental Minimum Bayes-Risk is also introduced in the framework. The strategy makes it possible that training with only a small part of dataset could outperform full dataset training from the beginning. Introduction Recently, deep neural network has been widely employed in various recognition tasks. Increasing the depth of neural network is a effective way to improve the performance, and convolutional neural network (CNN) has benefited from it in visual recognition task BIBREF0 . Deeper long short-term memory (LSTM) recurrent neural networks (RNNs) are also applied in large vocabulary continuous speech recognition (LVCSR) task, because LSTM networks have shown better performance than Fully-connected feed-forward deep neural network BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 . Training neural network becomes more challenge when it goes deep. A conceptual tool called linear classifier probe is introduced to better understand the dynamics inside a neural network BIBREF5 . The discriminating features of linear classifier is the hidden units of a intermediate layer. For deep neural networks, it is observed that deeper layer's accuracy is lower than that of shallower layers. Therefore, the tool shows the difficulty of deep neural model training visually. Layer-wise pre-training is a successful method to train very deep neural networks BIBREF6 . The convergence becomes harder with increasing the number of layers, even though the model is initialized with Xavier or its variants BIBREF7 , BIBREF8 . But the deeper network which is initialized with a shallower trained network could converge well. The size of LVCSR training dataset goes larger and training with only one GPU becomes high time consumption inevitably. Therefore, parallel training with multi-GPUs is more suitable for LVCSR system. Mini-batch based stochastic gradient descent (SGD) is the most popular method in neural network training procedure. Asynchronous SGD is a successful effort for parallel training based on it BIBREF9 , BIBREF10 . It can many times speed up the training time without decreasing the accuracy. 
Besides, synchronous SGD is another effective effort, where the parameter server waits for every works to finish their computation and sent their local models to it, and then it sends updated model back to all workers BIBREF11 . Synchronous SGD converges well in parallel training with data parallelism, and is also easy to implement. In order to further improve the performance of deep neural network with parallel training, several methods are proposed. Model averaging method achieves linear speedup, as the final model is averaged from all parameters of local models in different workers BIBREF12 , BIBREF13 , but the accuracy decreases compared with single GPU training. Moreover, blockwise model-updating filter (BMUF) provides another almost linear speedup approach with multi-GPUs on the basis of model averaging. It can achieve improvement or no-degradation of recognition performance compared with mini-batch SGD on single GPU BIBREF14 . Moving averaged (MA) approaches are also proposed for parallel training. It is demonstrated that the moving average of the parameters obtained by SGD performs as well as the parameters that minimize the empirical cost, and moving average parameters can be used as the estimator of them, if the size of training data is large enough BIBREF15 . One pass learning is then proposed, which is the combination of learning rate schedule and averaged SGD using moving average BIBREF16 . Exponential moving average (EMA) is proposed as a non-interference method BIBREF17 . EMA model is not broadcasted to workers to update their local models, and it is applied as the final model of entire training process. EMA method is utilized with model averaging and BMUF to further decrease the character error rate (CER). It is also easy to implement in existing parallel training systems. Frame stacking can also speed up the training time BIBREF18 . The super frame is stacked by several regular frames, and it contains the information of them. Thus, the network can see multiple frames at a time, as the super frame is new input. Frame stacking can also lead to faster decoding. For streaming voice search service, it needs to display intermediate recognition results while users are still speaking. As a result, the system needs to fulfill high real-time requirement, and we prefer unidirectional LSTM network rather than bidirectional one. High real-time requirement means low real time factor (RTF), but the RTF of deep LSTM model is higher inevitably. The dilemma of recognition accuracy and real-time requirement is an obstacle to the employment of deep LSTM network. Deep model outperforms because it contains more knowledge, but it is also cumbersome. As a result, the knowledge of deep model can be distilled to a shallow model BIBREF19 . It provided a effective way to employ the deep model to the real-time system. In this paper, we explore a entire deep LSTM RNN training framework, and employ it to real-time application. The deep learning systems benefit highly from a large quantity of labeled training data. Our first and basic speech recognition system is trained on 17000 hours of Shenma voice search dataset. It is a generic dataset sampled from diverse aspects of search queries. The requirement of speech recognition system also addressed by specific scenario, such as map and navigation task. The labeled dataset is too expensive, and training a new model with new large dataset from the beginning costs lots of time. 
Thus, it is natural to think of transferring the knowledge from basic model to new scenario's model. Transfer learning expends less data and less training time than full training. In this paper, we also introduce a novel transfer learning strategy with segmental Minimum Bayes-Risk (sMBR). As a result, transfer training with only 1000 hours data can match equivalent performance for full training with 7300 hours data. Our deep LSTM training framework for LVCSR is presented in Section 2. Section 3 describes how the very deep models does apply in real world applications, and how to transfer the model to another task. The framework is analyzed and discussed in Section 4, and followed by the conclusion in Section 5. Layer-wise Training with Soft Target and Hard Target Gradient-based optimization of deep LSTM network with random initialization get stuck in poor solution easily. Xavier initialization can partially solve this problem BIBREF7 , so this method is the regular initialization method of all training procedure. However, it does not work well when it is utilized to initialize very deep model directly, because of vanishing or exploding gradients. Instead, layer-wise pre-training method is a effective way to train the weights of very deep architecture BIBREF6 , BIBREF20 . In layer-wise pre-training procedure, a one-layer LSTM model is firstly trained with normalized initialization. Sequentially, two-layers LSTM model's first layer is initialized by trained one-layer model, and its second layer is regularly initialized. In this way, a deep architecture is layer-by-layer trained, and it can converge well. In conventional layer-wise pre-training, only parameters of shallower network are transfered to deeper one, and the learning targets are still the alignments generated by HMM-GMM system. The targets are vectors that only one state's probability is one, and the others' are zeros. They are known as hard targets, and they carry limited knowledge as only one state is active. In contrast, the knowledge of shallower network should be also transfered to deeper one. It is obtained by the softmax layer of existing model typically, so each state has a probability rather than only zero or one, and called as soft target. As a result, the deeper network which is student network learns the parameters and knowledge from shallower one which is called teacher network. When training the student network from the teacher network, the final alignment is the combination of hard target and soft target in our layer-wise training phase. The final alignment provides various knowledge which transfered from teacher network and extracted from true labels. If only soft target is learned, student network perform no better than teacher network, but it could outperform teacher network as it also learns true labels. The deeper network spends less time to getting the same level of original network than the network trained from the beginning, as a period of low performance is skipped. Therefore, training with hard and soft target is a time saving method. For large training dataset, training with the whole dataset still spends too much time. A network firstly trained with only a small part of dataset could go deeper as well, and so the training time reducing rapidly. When the network is deep enough, it then trained on the entire dataset to get further improvement. There is no gap of accuracy between these two approaches, but latter one saves much time. 
Training Data A large quantity of labeled data is needed for training a more accurate acoustic model. We collect the 17000 hours labeled data from Shenma voice search, which is one of the most popular mobile search engines in China. The dataset is created from anonymous online users' search queries in Mandarin, and all audio file's sampling rate is 16kHz, recorded by mobile phones. This dataset consists of many different conditions, such as diverse noise even low signal-to-noise, babble, dialects, accents, hesitation and so on. In the Amap, which is one of the most popular web mapping and navigation services in China, users can search locations and navigate to locations they want though voice search. To present the performance of transfer learning with sequence discriminative training, the model trained from Shenma voice search which is greneric scenario transfer its knowledge to the model of Amap voice search. 7300 hours labeled data is collected in the similar way of Shenma voice search data collection. Two dataset is divided into training set, validation set and test set separately, and the quantity of them is shown in Table TABREF10 . The three sets are split according to speakers, in order to avoid utterances of same speaker appearing in three sets simultaneously. The test sets of Shenma and Amap voice search are called Shenma Test and Amap Test. Experimental setup LSTM RNNs outperform conventional RNNs for speech recognition system, especially deep LSTM RNNs, because of its long-range dependencies more accurately for temporal sequence conditions BIBREF26 , BIBREF23 . Shenma and Amap voice search is a streaming service that intermediate recognition results displayed while users are still speaking. So as for online recognition in real time, we prefer unidirectional LSTM model rather than bidirectional one. Thus, the training system is unidirectional LSTM-based. A 26-dimensional filter bank and 2-dimensional pitch feature is extracted for each frame, and is concatenated with first and second order difference as the final input of the network. The super frame are stacked by 3 frames without overlapping. The architecture we trained consists of two LSTM layers with sigmoid activation function, followed by a full-connection layer. The out layer is a softmax layer with 11088 hidden markov model (HMM) tied-states as output classes, the loss function is cross-entropy (CE). The performance metric of the system in Mandarin is reported with character error rate (CER). The alignment of frame-level ground truth is obtained by GMM-HMM system. Mini-batched SGD is utilized with momentum trick and the network is trained for a total of 4 epochs. The block learning rate and block momentum of BMUF are set as 1 and 0.9. 5-gram language model is leveraged in decoder, and the vocabulary size is as large as 760000. Differentials of recurrent layers is limited to range [-10000,10000], while gradients are clipped to range [-5, 5] and cell activations clipped to range [-50, 50]. After training with CE loss, sMBR loss is employed to further improve the performance. It has shown that BMUF outperforms traditional model averaging method, and it is utilized at the synchronization phase. After synchronizing with BMUF, EMA method further updates the model in non-interference way. The training system is deployed on the MPI-based HPC cluster where 8 GPUs. Each GPU processes non-overlap subset split from the entire large scale dataset in parallel. 
Local models from distributed workers synchronize with each other in decentralized way. In the traditional model averaging and BMUF method, a parameter server waits for all workers to send their local models, aggregate them, and send the updated model to all workers. Computing resource of workers is wasted until aggregation of the parameter server done. Decentralized method makes full use of computing resource, and we employ the MPI-based Mesh AllReduce method. It is mesh topology as shown in Figure FIGREF12 . There is no centralized parameter server, and peer to peer communication is used to transmit local models between workers. Local model INLINEFORM0 of INLINEFORM1 -th worker in INLINEFORM2 workers cluster is split to INLINEFORM3 pieces INLINEFORM4 , and send to corresponding worker. In the aggregation phase, INLINEFORM5 -th worker computed INLINEFORM6 splits of model INLINEFORM7 and send updated model INLINEFORM8 back to workers. As a result, all workers participate in aggregation and no computing resource is dissipated. It is significant to promote training efficiency, when the size of neural network model is too large. The EMA model is also updated additionally, but not broadcasting it. Transfer Learning 2-layers distilled model of Shenma voice search has shown a impressive performance on Shenma Test, and we call it Shenma model. It is trained for generic search scenario, but it has less adaptation for specific scenario like Amap voice search. Training with very large dataset using CE loss is regarded as improvement of frame level recognition accuracy, and sMBR with less dataset further improves accuracy as sequence discriminative training. If robust model of generic scenario is trained, there is no need to train a model with very large dataset, and sequence discriminative training with less dataset is enough. Therefore, on the basis of Shenma model, it is sufficient to train a new Amap model with small dataset using sMBR. As shown in Table TABREF18 , Shenma model presents the worst performance among three methods, since it does not trained for Amap scenario. 2-layers Shenma model further trained with sMBR achieves about 8.1% relative reduction, compared with 2-layers regular-trained Amap model. Both training sMBR datasets contain the same 1000 hours data. As a result, with the Shenma model, only about 14% data usage achieves lower CER, and it leads to great time and cost saving with less labeled data. Besides, transfer learning with sMBR does not use the alignment from the HMM-GMM system, so it also saves huge amount of time. Table 1. The time summation of different sets of Shenma voice search and Amap. Table 2. The CER of 6 to 9-layers models trained by regular Xavier Initialization, layer-wise training with CE criterion and CE + sMBR criteria. The teacher of 9-layer model is 8-layers sMBR model, while the others’ teacher is CE model. Query: how small of a dataset did they train on? JSON Response:
{"answer": "23085 hours of data", "evidence": ["Two dataset is divided into training set, validation set and test set separately, and the quantity of them is shown in Table TABREF10 . The three sets are split according to speakers, in order to avoid utterances of same speaker appearing in three sets simultaneously. The test sets of Shenma and Amap voice search are called Shenma Test and Amap Test."]}
science.qasper_abstractive_qa.190.0
[ { "content": "Examine the provided sections from a research document and address the accompanying query. The text is divided by newlines.\n\nDocument Content:\n\nDeep LSTM for Large Vocabulary Continuous Speech Recognition\n\nRecurrent neural networks (RNNs), especially long short-term memory (LSTM) RNNs, are effective network for sequential task like speech recognition. Deeper LSTM models perform well on large vocabulary continuous speech recognition, because of their impressive learning ability. However, it is more difficult to train a deeper network. We introduce a training framework with layer-wise training and exponential moving average methods for deeper LSTM models. It is a competitive framework that LSTM models of more than 7 layers are successfully trained on Shenma voice search data in Mandarin and they outperform the deep LSTM models trained by conventional approach. Moreover, in order for online streaming speech recognition applications, the shallow model with low real time factor is distilled from the very deep model. The recognition accuracy have little loss in the distillation process. Therefore, the model trained with the proposed training framework reduces relative 14\\% character error rate, compared to original model which has the similar real-time capability. Furthermore, the novel transfer learning strategy with segmental Minimum Bayes-Risk is also introduced in the framework. The strategy makes it possible that training with only a small part of dataset could outperform full dataset training from the beginning.\n\nIntroduction\nRecently, deep neural network has been widely employed in various recognition tasks. Increasing the depth of neural network is a effective way to improve the performance, and convolutional neural network (CNN) has benefited from it in visual recognition task BIBREF0 . Deeper long short-term memory (LSTM) recurrent neural networks (RNNs) are also applied in large vocabulary continuous speech recognition (LVCSR) task, because LSTM networks have shown better performance than Fully-connected feed-forward deep neural network BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 .\nTraining neural network becomes more challenge when it goes deep. A conceptual tool called linear classifier probe is introduced to better understand the dynamics inside a neural network BIBREF5 . The discriminating features of linear classifier is the hidden units of a intermediate layer. For deep neural networks, it is observed that deeper layer's accuracy is lower than that of shallower layers. Therefore, the tool shows the difficulty of deep neural model training visually.\nLayer-wise pre-training is a successful method to train very deep neural networks BIBREF6 . The convergence becomes harder with increasing the number of layers, even though the model is initialized with Xavier or its variants BIBREF7 , BIBREF8 . But the deeper network which is initialized with a shallower trained network could converge well.\nThe size of LVCSR training dataset goes larger and training with only one GPU becomes high time consumption inevitably. Therefore, parallel training with multi-GPUs is more suitable for LVCSR system. Mini-batch based stochastic gradient descent (SGD) is the most popular method in neural network training procedure. Asynchronous SGD is a successful effort for parallel training based on it BIBREF9 , BIBREF10 . It can many times speed up the training time without decreasing the accuracy. 
Besides, synchronous SGD is another effective effort, where the parameter server waits for every works to finish their computation and sent their local models to it, and then it sends updated model back to all workers BIBREF11 . Synchronous SGD converges well in parallel training with data parallelism, and is also easy to implement.\nIn order to further improve the performance of deep neural network with parallel training, several methods are proposed. Model averaging method achieves linear speedup, as the final model is averaged from all parameters of local models in different workers BIBREF12 , BIBREF13 , but the accuracy decreases compared with single GPU training. Moreover, blockwise model-updating filter (BMUF) provides another almost linear speedup approach with multi-GPUs on the basis of model averaging. It can achieve improvement or no-degradation of recognition performance compared with mini-batch SGD on single GPU BIBREF14 .\nMoving averaged (MA) approaches are also proposed for parallel training. It is demonstrated that the moving average of the parameters obtained by SGD performs as well as the parameters that minimize the empirical cost, and moving average parameters can be used as the estimator of them, if the size of training data is large enough BIBREF15 . One pass learning is then proposed, which is the combination of learning rate schedule and averaged SGD using moving average BIBREF16 . Exponential moving average (EMA) is proposed as a non-interference method BIBREF17 . EMA model is not broadcasted to workers to update their local models, and it is applied as the final model of entire training process. EMA method is utilized with model averaging and BMUF to further decrease the character error rate (CER). It is also easy to implement in existing parallel training systems.\nFrame stacking can also speed up the training time BIBREF18 . The super frame is stacked by several regular frames, and it contains the information of them. Thus, the network can see multiple frames at a time, as the super frame is new input. Frame stacking can also lead to faster decoding.\nFor streaming voice search service, it needs to display intermediate recognition results while users are still speaking. As a result, the system needs to fulfill high real-time requirement, and we prefer unidirectional LSTM network rather than bidirectional one. High real-time requirement means low real time factor (RTF), but the RTF of deep LSTM model is higher inevitably. The dilemma of recognition accuracy and real-time requirement is an obstacle to the employment of deep LSTM network. Deep model outperforms because it contains more knowledge, but it is also cumbersome. As a result, the knowledge of deep model can be distilled to a shallow model BIBREF19 . It provided a effective way to employ the deep model to the real-time system.\nIn this paper, we explore a entire deep LSTM RNN training framework, and employ it to real-time application. The deep learning systems benefit highly from a large quantity of labeled training data. Our first and basic speech recognition system is trained on 17000 hours of Shenma voice search dataset. It is a generic dataset sampled from diverse aspects of search queries. The requirement of speech recognition system also addressed by specific scenario, such as map and navigation task. The labeled dataset is too expensive, and training a new model with new large dataset from the beginning costs lots of time. 
Thus, it is natural to think of transferring the knowledge from basic model to new scenario's model. Transfer learning expends less data and less training time than full training. In this paper, we also introduce a novel transfer learning strategy with segmental Minimum Bayes-Risk (sMBR). As a result, transfer training with only 1000 hours data can match equivalent performance for full training with 7300 hours data.\nOur deep LSTM training framework for LVCSR is presented in Section 2. Section 3 describes how the very deep models does apply in real world applications, and how to transfer the model to another task. The framework is analyzed and discussed in Section 4, and followed by the conclusion in Section 5.\n\nLayer-wise Training with Soft Target and Hard Target\nGradient-based optimization of deep LSTM network with random initialization get stuck in poor solution easily. Xavier initialization can partially solve this problem BIBREF7 , so this method is the regular initialization method of all training procedure. However, it does not work well when it is utilized to initialize very deep model directly, because of vanishing or exploding gradients. Instead, layer-wise pre-training method is a effective way to train the weights of very deep architecture BIBREF6 , BIBREF20 . In layer-wise pre-training procedure, a one-layer LSTM model is firstly trained with normalized initialization. Sequentially, two-layers LSTM model's first layer is initialized by trained one-layer model, and its second layer is regularly initialized. In this way, a deep architecture is layer-by-layer trained, and it can converge well.\nIn conventional layer-wise pre-training, only parameters of shallower network are transfered to deeper one, and the learning targets are still the alignments generated by HMM-GMM system. The targets are vectors that only one state's probability is one, and the others' are zeros. They are known as hard targets, and they carry limited knowledge as only one state is active. In contrast, the knowledge of shallower network should be also transfered to deeper one. It is obtained by the softmax layer of existing model typically, so each state has a probability rather than only zero or one, and called as soft target. As a result, the deeper network which is student network learns the parameters and knowledge from shallower one which is called teacher network. When training the student network from the teacher network, the final alignment is the combination of hard target and soft target in our layer-wise training phase. The final alignment provides various knowledge which transfered from teacher network and extracted from true labels. If only soft target is learned, student network perform no better than teacher network, but it could outperform teacher network as it also learns true labels.\nThe deeper network spends less time to getting the same level of original network than the network trained from the beginning, as a period of low performance is skipped. Therefore, training with hard and soft target is a time saving method. For large training dataset, training with the whole dataset still spends too much time. A network firstly trained with only a small part of dataset could go deeper as well, and so the training time reducing rapidly. When the network is deep enough, it then trained on the entire dataset to get further improvement. 
There is no gap of accuracy between these two approaches, but latter one saves much time.\n\nTraining Data\nA large quantity of labeled data is needed for training a more accurate acoustic model. We collect the 17000 hours labeled data from Shenma voice search, which is one of the most popular mobile search engines in China. The dataset is created from anonymous online users' search queries in Mandarin, and all audio file's sampling rate is 16kHz, recorded by mobile phones. This dataset consists of many different conditions, such as diverse noise even low signal-to-noise, babble, dialects, accents, hesitation and so on.\nIn the Amap, which is one of the most popular web mapping and navigation services in China, users can search locations and navigate to locations they want though voice search. To present the performance of transfer learning with sequence discriminative training, the model trained from Shenma voice search which is greneric scenario transfer its knowledge to the model of Amap voice search. 7300 hours labeled data is collected in the similar way of Shenma voice search data collection.\nTwo dataset is divided into training set, validation set and test set separately, and the quantity of them is shown in Table TABREF10 . The three sets are split according to speakers, in order to avoid utterances of same speaker appearing in three sets simultaneously. The test sets of Shenma and Amap voice search are called Shenma Test and Amap Test.\n\nExperimental setup\nLSTM RNNs outperform conventional RNNs for speech recognition system, especially deep LSTM RNNs, because of its long-range dependencies more accurately for temporal sequence conditions BIBREF26 , BIBREF23 . Shenma and Amap voice search is a streaming service that intermediate recognition results displayed while users are still speaking. So as for online recognition in real time, we prefer unidirectional LSTM model rather than bidirectional one. Thus, the training system is unidirectional LSTM-based.\nA 26-dimensional filter bank and 2-dimensional pitch feature is extracted for each frame, and is concatenated with first and second order difference as the final input of the network. The super frame are stacked by 3 frames without overlapping. The architecture we trained consists of two LSTM layers with sigmoid activation function, followed by a full-connection layer. The out layer is a softmax layer with 11088 hidden markov model (HMM) tied-states as output classes, the loss function is cross-entropy (CE). The performance metric of the system in Mandarin is reported with character error rate (CER). The alignment of frame-level ground truth is obtained by GMM-HMM system. Mini-batched SGD is utilized with momentum trick and the network is trained for a total of 4 epochs. The block learning rate and block momentum of BMUF are set as 1 and 0.9. 5-gram language model is leveraged in decoder, and the vocabulary size is as large as 760000. Differentials of recurrent layers is limited to range [-10000,10000], while gradients are clipped to range [-5, 5] and cell activations clipped to range [-50, 50]. After training with CE loss, sMBR loss is employed to further improve the performance.\nIt has shown that BMUF outperforms traditional model averaging method, and it is utilized at the synchronization phase. After synchronizing with BMUF, EMA method further updates the model in non-interference way. The training system is deployed on the MPI-based HPC cluster where 8 GPUs. 
Each GPU processes non-overlap subset split from the entire large scale dataset in parallel.\nLocal models from distributed workers synchronize with each other in decentralized way. In the traditional model averaging and BMUF method, a parameter server waits for all workers to send their local models, aggregate them, and send the updated model to all workers. Computing resource of workers is wasted until aggregation of the parameter server done. Decentralized method makes full use of computing resource, and we employ the MPI-based Mesh AllReduce method. It is mesh topology as shown in Figure FIGREF12 . There is no centralized parameter server, and peer to peer communication is used to transmit local models between workers. Local model INLINEFORM0 of INLINEFORM1 -th worker in INLINEFORM2 workers cluster is split to INLINEFORM3 pieces INLINEFORM4 , and send to corresponding worker. In the aggregation phase, INLINEFORM5 -th worker computed INLINEFORM6 splits of model INLINEFORM7 and send updated model INLINEFORM8 back to workers. As a result, all workers participate in aggregation and no computing resource is dissipated. It is significant to promote training efficiency, when the size of neural network model is too large. The EMA model is also updated additionally, but not broadcasting it.\n\nTransfer Learning\n2-layers distilled model of Shenma voice search has shown a impressive performance on Shenma Test, and we call it Shenma model. It is trained for generic search scenario, but it has less adaptation for specific scenario like Amap voice search. Training with very large dataset using CE loss is regarded as improvement of frame level recognition accuracy, and sMBR with less dataset further improves accuracy as sequence discriminative training. If robust model of generic scenario is trained, there is no need to train a model with very large dataset, and sequence discriminative training with less dataset is enough. Therefore, on the basis of Shenma model, it is sufficient to train a new Amap model with small dataset using sMBR. As shown in Table TABREF18 , Shenma model presents the worst performance among three methods, since it does not trained for Amap scenario. 2-layers Shenma model further trained with sMBR achieves about 8.1% relative reduction, compared with 2-layers regular-trained Amap model. Both training sMBR datasets contain the same 1000 hours data. As a result, with the Shenma model, only about 14% data usage achieves lower CER, and it leads to great time and cost saving with less labeled data. Besides, transfer learning with sMBR does not use the alignment from the HMM-GMM system, so it also saves huge amount of time.\n\nTable 1. The time summation of different sets of Shenma voice search and Amap.\n\nTable 2. The CER of 6 to 9-layers models trained by regular Xavier Initialization, layer-wise training with CE criterion and CE + sMBR criteria. The teacher of 9-layer model is 8-layers sMBR model, while the others’ teacher is CE model.\n\nQuery: how small of a dataset did they train on?\n\nJSON Response:", "role": "user" }, { "content": "{\"answer\": \"23085 hours of data\", \"evidence\": [\"Two dataset is divided into training set, validation set and test set separately, and the quantity of them is shown in Table TABREF10 . The three sets are split according to speakers, in order to avoid utterances of same speaker appearing in three sets simultaneously. The test sets of Shenma and Amap voice search are called Shenma Test and Amap Test.\"]}", "role": "assistant" } ]
science.evidence_inference
Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as: - Treatment: The specific medical procedure or drug being evaluated. - Control: The standard or placebo used for comparison. If absent, use `null`. - Result: The health outcome being assessed. - Impact: The observed effect of the treatment on the result, categorized as: ("increased", "decreased", "no change"). - Justification: The supporting details for the observed impact. Format the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>]. Abstract: ABSTRACT: The non-competitive N-methyl-D-aspartate receptor antagonist ketamine leads to transient psychosis-like symptoms and impairments in oculomotor performance in healthy volunteers. This study examined whether the adverse effects of ketamine on oculomotor performance can be reversed by the atypical antipsychotic risperidone. In this randomized double-blind, placebo-controlled study, 72 healthy participants performed smooth pursuit eye movements (SPEM), prosaccades (PS) and antisaccades (AS) while being randomly assigned to one of four drug groups (intravenous 100 ng ml−1 ketamine, 2 mg oral risperidone, 100 ng ml−1 ketamine plus 2 mg oral risperidone, placebo). Drug administration did not lead to harmful adverse events. Ketamine increased saccadic frequency and decreased velocity gain of SPEM (all P<0.01) but had no significant effects on PS or AS (all P⩾0.07). An effect of risperidone was observed for amplitude gain and peak velocity of PS and AS, indicating hypometric gain and slower velocities compared with placebo (both P⩽0.04). No ketamine by risperidone interactions were found (all P⩾0.26). The results confirm that the administration of ketamine produces oculomotor performance deficits similar in part to those seen in schizophrenia. The atypical antipsychotic risperidone did not reverse ketamine-induced deteriorations. These findings do not support the cognitive enhancing potential of risperidone on oculomotor biomarkers in this model system of schizophrenia and point towards the importance of developing alternative performance-enhancing compounds to optimise pharmacological treatment of schizophrenia. Extracted Data:
[["Ketamine ", "Placebo", "Smooth pursuit saccadic frequency", "significantly increased", "Ketamine increased saccadic frequency and decreased velocity gain of SPEM (all P\u003c0.01) but had no significant effects on PS or AS (all P\u2a7e0.07)."]]
science.evidence_inference.92
[ { "content": "Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:\n\n- Treatment: The specific medical procedure or drug being evaluated.\n- Control: The standard or placebo used for comparison. If absent, use `null`.\n- Result: The health outcome being assessed.\n- Impact: The observed effect of the treatment on the result, categorized as: (\"increased\", \"decreased\", \"no change\").\n- Justification: The supporting details for the observed impact.\n\nFormat the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].\n\nAbstract:\nABSTRACT: The non-competitive N-methyl-D-aspartate receptor antagonist ketamine leads to transient psychosis-like symptoms and impairments in oculomotor performance in healthy volunteers. This study examined whether the adverse effects of ketamine on oculomotor performance can be reversed by the atypical antipsychotic risperidone. In this randomized double-blind, placebo-controlled study, 72 healthy participants performed smooth pursuit eye movements (SPEM), prosaccades (PS) and antisaccades (AS) while being randomly assigned to one of four drug groups (intravenous 100 ng ml−1 ketamine, 2 mg oral risperidone, 100 ng ml−1 ketamine plus 2 mg oral risperidone, placebo). Drug administration did not lead to harmful adverse events. Ketamine increased saccadic frequency and decreased velocity gain of SPEM (all P<0.01) but had no significant effects on PS or AS (all P⩾0.07). An effect of risperidone was observed for amplitude gain and peak velocity of PS and AS, indicating hypometric gain and slower velocities compared with placebo (both P⩽0.04). No ketamine by risperidone interactions were found (all P⩾0.26). The results confirm that the administration of ketamine produces oculomotor performance deficits similar in part to those seen in schizophrenia. The atypical antipsychotic risperidone did not reverse ketamine-induced deteriorations. These findings do not support the cognitive enhancing potential of risperidone on oculomotor biomarkers in this model system of schizophrenia and point towards the importance of developing alternative performance-enhancing compounds to optimise pharmacological treatment of schizophrenia.\n\nExtracted Data:", "role": "user" }, { "content": "[[\"Ketamine \", \"Placebo\", \"Smooth pursuit saccadic frequency\", \"significantly increased\", \"Ketamine increased saccadic frequency and decreased velocity gain of SPEM (all P\\u003c0.01) but had no significant effects on PS or AS (all P\\u2a7e0.07).\"]]", "role": "assistant" } ]
science.qasper_abstractive_qa
Examine the provided sections from a research document and address the accompanying query. The text is divided by newlines. Document Content: Question Answering for Privacy Policies: Combining Computational and Legal Perspectives Privacy policies are long and complex documents that are difficult for users to read and understand, and yet, they have legal effects on how user data is collected, managed and used. Ideally, we would like to empower users to inform themselves about issues that matter to them, and enable them to selectively explore those issues. We present PrivacyQA, a corpus consisting of 1750 questions about the privacy policies of mobile applications, and over 3500 expert annotations of relevant answers. We observe that a strong neural baseline underperforms human performance by almost 0.3 F1 on PrivacyQA, suggesting considerable room for improvement for future systems. Further, we use this dataset to shed light on challenges to question answerability, with domain-general implications for any question answering system. The PrivacyQA corpus offers a challenging corpus for question answering, with genuine real-world utility. Introduction Privacy policies are the documents which disclose the ways in which a company gathers, uses, shares and manages a user's data. As legal documents, they function using the principle of notice and choice BIBREF0, where companies post their policies, and theoretically, users read the policies and decide to use a company's products or services only if they find the conditions outlined in its privacy policy acceptable. Many legal jurisdictions around the world accept this framework, including the United States and the European Union BIBREF1, BIBREF2. However, the legitimacy of this framework depends upon users actually reading and understanding privacy policies to determine whether company practices are acceptable to them BIBREF3. In practice this is seldom the case BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8, BIBREF9, BIBREF10. This is further complicated by the highly individual and nuanced compromises that users are willing to make with their data BIBREF11, discouraging a `one-size-fits-all' approach to notice of data practices in privacy documents. With devices constantly monitoring our environment, including our personal space and our bodies, lack of awareness of how our data is being used easily leads to problematic situations where users are outraged by information misuse, but companies insist that users have consented. The discovery of increasingly egregious uses of data by companies, such as the scandals involving Facebook and Cambridge Analytica BIBREF12, have further brought public attention to the privacy concerns of the internet and ubiquitous computing. This makes privacy a well-motivated application domain for NLP researchers, where advances in enabling users to quickly identify the privacy issues most salient to them can potentially have large real-world impact. [1]https://play.google.com/store/apps/details?id=com.gotokeep.keep.intl [2]https://play.google.com/store/apps/details?id=com.viber.voip [3]A question might not have any supporting evidence for an answer within the privacy policy. Motivated by this need, we contribute PrivacyQA, a corpus consisting of 1750 questions about the contents of privacy policies, paired with over 3500 expert annotations. 
The goal of this effort is to kickstart the development of question-answering methods for this domain, to address the (unrealistic) expectation that a large population should be reading many policies per day. In doing so, we identify several understudied challenges to our ability to answer these questions, with broad implications for systems seeking to serve users' information-seeking intent. By releasing this resource, we hope to provide an impetus to develop systems capable of language understanding in this increasingly important domain. Data Collection ::: Answer Selection To identify legally sound answers, we recruit seven experts with legal training to construct answers to Turker questions. Experts identify relevant evidence within the privacy policy, as well as provide meta-annotation on the question's relevance, subjectivity, OPP-115 category BIBREF49, and how likely any privacy policy is to contain the answer to the question asked. Data Collection ::: Analysis Table.TABREF17 presents aggregate statistics of the PrivacyQA dataset. 1750 questions are posed to our imaginary privacy assistant over 35 mobile applications and their associated privacy documents. As an initial step, we formulate the problem of answering user questions as an extractive sentence selection task, ignoring for now background knowledge, statistical data and legal expertise that could otherwise be brought to bear. The dataset is partitioned into a training set featuring 27 mobile applications and 1350 questions, and a test set consisting of 400 questions over 8 policy documents. This ensures that documents in training and test splits are mutually exclusive. Every question is answered by at least one expert. In addition, in order to estimate annotation reliability and provide for better evaluation, every question in the test set is answered by at least two additional experts. Table TABREF14 describes the distribution over first words of questions posed by crowdworkers. We also observe low redundancy in the questions posed by crowdworkers over each policy, with each policy receiving ~49.94 unique questions despite crowdworkers independently posing questions. Questions are on average 8.4 words long. As declining to answer a question can be a legally sound response but is seldom practically useful, answers to questions where a minority of experts abstain to answer are filtered from the dataset. Privacy policies are ~3000 words long on average. The answers to the question asked by the users typically have ~100 words of evidence in the privacy policy document. Data Collection ::: Analysis ::: Categories of Questions Questions are organized under nine categories from the OPP-115 Corpus annotation scheme BIBREF49: First Party Collection/Use: What, why and how information is collected by the service provider Third Party Sharing/Collection: What, why and how information shared with or collected by third parties Data Security: Protection measures for user information Data Retention: How long user information will be stored User Choice/Control: Control options available to users User Access, Edit and Deletion: If/how users can access, edit or delete information Policy Change: Informing users if policy information has been changed International and Specific Audiences: Practices pertaining to a specific group of users Other: General text, contact information or practices not covered by other categories. For each question, domain experts indicate one or more relevant OPP-115 categories. 
We mark a category as relevant to a question if it is identified as such by at least two annotators. If no such category exists, the category is marked as `Other' if atleast one annotator has identified the `Other' category to be relevant. If neither of these conditions is satisfied, we label the question as having no agreement. The distribution of questions in the corpus across OPP-115 categories is as shown in Table.TABREF16. First party and third party related questions are the largest categories, forming nearly 66.4% of all questions asked to the privacy assistant. Experimental Setup ::: Answerability Identification Baselines We define answerability identification as a binary classification task, evaluating model ability to predict if a question can be answered, given a question in isolation. This can serve as a prior for downstream question-answering. We describe three baselines on the answerability task, and find they considerably improve performance over a majority-class baseline. SVM: We define 3 sets of features to characterize each question. The first is a simple bag-of-words set of features over the question (SVM-BOW), the second is bag-of-words features of the question as well as length of the question in words (SVM-BOW + LEN), and lastly we extract bag-of-words features, length of the question in words as well as part-of-speech tags for the question (SVM-BOW + LEN + POS). This results in vectors of 200, 201 and 228 dimensions respectively, which are provided to an SVM with a linear kernel. CNN: We utilize a CNN neural encoder for answerability prediction. We use GloVe word embeddings BIBREF50, and a filter size of 5 with 64 filters to encode questions. BERT: BERT BIBREF51 is a bidirectional transformer-based language-model BIBREF52. We fine-tune BERT-base on our binary answerability identification task with a learning rate of 2e-5 for 3 epochs, with a maximum sequence length of 128. Experimental Setup ::: Privacy Question Answering Our goal is to identify evidence within a privacy policy for questions asked by a user. This is framed as an answer sentence selection task, where models identify a set of evidence sentences from all candidate sentences in each policy. Experimental Setup ::: Privacy Question Answering ::: Evaluation Metric Our evaluation metric for answer-sentence selection is sentence-level F1, implemented similar to BIBREF30, BIBREF16. Precision and recall are implemented by measuring the overlap between predicted sentences and sets of gold-reference sentences. We report the average of the maximum F1 from each n$-$1 subset, in relation to the heldout reference. Results and Discussion ::: What makes Questions Unanswerable? We further ask legal experts to identify potential causes of unanswerability of questions. This analysis has considerable implications. While past work BIBREF17 has treated unanswerable questions as homogeneous, a question answering system might wish to have different treatments for different categories of `unanswerable' questions. The following factors were identified to play a role in unanswerability: Incomprehensibility: If a question is incomprehensible to the extent that its meaning is not intelligible. Relevance: Is this question in the scope of what could be answered by reading the privacy policy. Ill-formedness: Is this question ambiguous or vague. An ambiguous statement will typically contain expressions that can refer to multiple potential explanations, whereas a vague statement carries a concept with an unclear or soft definition. 
Silence: Other policies answer this type of question but this one does not. Atypicality: The question is of a nature such that it is unlikely for any policy policy to have an answer to the question. Our experts attempt to identify the different `unanswerable' factors for all 573 such questions in the corpus. 4.18% of the questions were identified as being incomprehensible (for example, `any difficulties to occupy the privacy assistant'). Amongst the comprehendable questions, 50% were identified as likely to have an answer within the privacy policy, 33.1% were identified as being privacy-related questions but not within the scope of a privacy policy (e.g., 'has Viber had any privacy breaches in the past?') and 16.9% of questions were identified as completely out-of-scope (e.g., `'will the app consume much space?'). In the questions identified as relevant, 32% were ill-formed questions that were phrased by the user in a manner considered vague or ambiguous. Of the questions that were both relevant as well as `well-formed', 95.7% of the questions were not answered by the policy in question but it was reasonable to expect that a privacy policy would contain an answer. The remaining 4.3% were described as reasonable questions, but of a nature generally not discussed in privacy policies. This suggests that the answerability of questions over privacy policies is a complex issue, and future systems should consider each of these factors when serving user's information seeking intent. We examine a large-scale dataset of “natural” unanswerable questions BIBREF54 based on real user search engine queries to identify if similar unanswerability factors exist. It is important to note that these questions have previously been filtered, according to a criteria for bad questions defined as “(questions that are) ambiguous, incomprehensible, dependent on clear false presuppositions, opinion-seeking, or not clearly a request for factual information.” Annotators made the decision based on the content of the question without viewing the equivalent Wikipedia page. We randomly sample 100 questions from the development set which were identified as unanswerable, and find that 20% of the questions are not questions (e.g., “all I want for christmas is you mariah carey tour”). 12% of questions are unlikely to ever contain an answer on Wikipedia, corresponding closely to our atypicality category. 3% of questions are unlikely to have an answer anywhere (e.g., `what guides Santa home after he has delivered presents?'). 7% of questions are incomplete or open-ended (e.g., `the south west wind blows across nigeria between'). 3% of questions have an unresolvable coreference (e.g., `how do i get to Warsaw Missouri from here'). 4% of questions are vague, and a further 7% have unknown sources of error. 2% still contain false presuppositions (e.g., `what is the only fruit that does not have seeds?') and the remaining 42% do not have an answer within the document. This reinforces our belief that though they have been understudied in past work, any question answering system interacting with real users should expect to receive such unanticipated and unanswerable questions. Acknowledgements This research was supported in part by grants from the National Science Foundation Secure and Trustworthy Computing program (CNS-1330596, CNS-1330214, CNS-15-13957, CNS-1801316, CNS-1914486, CNS-1914444) and a DARPA Brandeis grant on Personalized Privacy Assistants (FA8750-15-2-0277). 
The US Government is authorized to reproduce and distribute reprints for Governmental purposes not withstanding any copyright notation. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the NSF, DARPA, or the US Government. The authors would like to extend their gratitude to Elias Wright, Gian Mascioli, Kiara Pillay, Harrison Kay, Eliel Talo, Alexander Fagella and N. Cameron Russell for providing their valuable expertise and insight to this effort. The authors are also grateful to Eduard Hovy, Lorrie Cranor, Florian Schaub, Joel Reidenberg, Aditya Potukuchi and Igor Shalyminov for helpful discussions related to this work, and to the three anonymous reviewers of this draft for their constructive feedback. Finally, the authors would like to thank all crowdworkers who consented to participate in this study. Figure 1: Examples of privacy-related questions users ask, drawn from two mobile applications: Keep1and Viber.2Policy evidence represents sentences in the privacy policy that are relevant for determining the answer to the user’s question.3 Table 2: Ten most frequent first words in questions in the PRIVACYQA dataset. We observe high lexical diversity in prefixes with 35 unique first word types and 131 unique combinations of first and second words. Table 6: Performance of baselines on PRIVACYQA dataset. Query: Who were the experts used for annotation? JSON Response:
{"answer": "Individuals with legal training", "evidence": ["To identify legally sound answers, we recruit seven experts with legal training to construct answers to Turker questions. Experts identify relevant evidence within the privacy policy, as well as provide meta-annotation on the question\u0027s relevance, subjectivity, OPP-115 category BIBREF49, and how likely any privacy policy is to contain the answer to the question asked."]}
science.qasper_abstractive_qa.211.0
[ { "content": "Examine the provided sections from a research document and address the accompanying query. The text is divided by newlines.\n\nDocument Content:\n\nQuestion Answering for Privacy Policies: Combining Computational and Legal Perspectives\n\nPrivacy policies are long and complex documents that are difficult for users to read and understand, and yet, they have legal effects on how user data is collected, managed and used. Ideally, we would like to empower users to inform themselves about issues that matter to them, and enable them to selectively explore those issues. We present PrivacyQA, a corpus consisting of 1750 questions about the privacy policies of mobile applications, and over 3500 expert annotations of relevant answers. We observe that a strong neural baseline underperforms human performance by almost 0.3 F1 on PrivacyQA, suggesting considerable room for improvement for future systems. Further, we use this dataset to shed light on challenges to question answerability, with domain-general implications for any question answering system. The PrivacyQA corpus offers a challenging corpus for question answering, with genuine real-world utility.\n\nIntroduction\nPrivacy policies are the documents which disclose the ways in which a company gathers, uses, shares and manages a user's data. As legal documents, they function using the principle of notice and choice BIBREF0, where companies post their policies, and theoretically, users read the policies and decide to use a company's products or services only if they find the conditions outlined in its privacy policy acceptable. Many legal jurisdictions around the world accept this framework, including the United States and the European Union BIBREF1, BIBREF2. However, the legitimacy of this framework depends upon users actually reading and understanding privacy policies to determine whether company practices are acceptable to them BIBREF3. In practice this is seldom the case BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8, BIBREF9, BIBREF10. This is further complicated by the highly individual and nuanced compromises that users are willing to make with their data BIBREF11, discouraging a `one-size-fits-all' approach to notice of data practices in privacy documents.\nWith devices constantly monitoring our environment, including our personal space and our bodies, lack of awareness of how our data is being used easily leads to problematic situations where users are outraged by information misuse, but companies insist that users have consented. The discovery of increasingly egregious uses of data by companies, such as the scandals involving Facebook and Cambridge Analytica BIBREF12, have further brought public attention to the privacy concerns of the internet and ubiquitous computing. This makes privacy a well-motivated application domain for NLP researchers, where advances in enabling users to quickly identify the privacy issues most salient to them can potentially have large real-world impact.\n[1]https://play.google.com/store/apps/details?id=com.gotokeep.keep.intl [2]https://play.google.com/store/apps/details?id=com.viber.voip [3]A question might not have any supporting evidence for an answer within the privacy policy.\nMotivated by this need, we contribute PrivacyQA, a corpus consisting of 1750 questions about the contents of privacy policies, paired with over 3500 expert annotations. 
The goal of this effort is to kickstart the development of question-answering methods for this domain, to address the (unrealistic) expectation that a large population should be reading many policies per day. In doing so, we identify several understudied challenges to our ability to answer these questions, with broad implications for systems seeking to serve users' information-seeking intent. By releasing this resource, we hope to provide an impetus to develop systems capable of language understanding in this increasingly important domain.\n\nData Collection ::: Answer Selection\nTo identify legally sound answers, we recruit seven experts with legal training to construct answers to Turker questions. Experts identify relevant evidence within the privacy policy, as well as provide meta-annotation on the question's relevance, subjectivity, OPP-115 category BIBREF49, and how likely any privacy policy is to contain the answer to the question asked.\n\nData Collection ::: Analysis\nTable.TABREF17 presents aggregate statistics of the PrivacyQA dataset. 1750 questions are posed to our imaginary privacy assistant over 35 mobile applications and their associated privacy documents. As an initial step, we formulate the problem of answering user questions as an extractive sentence selection task, ignoring for now background knowledge, statistical data and legal expertise that could otherwise be brought to bear. The dataset is partitioned into a training set featuring 27 mobile applications and 1350 questions, and a test set consisting of 400 questions over 8 policy documents. This ensures that documents in training and test splits are mutually exclusive. Every question is answered by at least one expert. In addition, in order to estimate annotation reliability and provide for better evaluation, every question in the test set is answered by at least two additional experts.\nTable TABREF14 describes the distribution over first words of questions posed by crowdworkers. We also observe low redundancy in the questions posed by crowdworkers over each policy, with each policy receiving ~49.94 unique questions despite crowdworkers independently posing questions. Questions are on average 8.4 words long. As declining to answer a question can be a legally sound response but is seldom practically useful, answers to questions where a minority of experts abstain to answer are filtered from the dataset. Privacy policies are ~3000 words long on average. The answers to the question asked by the users typically have ~100 words of evidence in the privacy policy document.\n\nData Collection ::: Analysis ::: Categories of Questions\nQuestions are organized under nine categories from the OPP-115 Corpus annotation scheme BIBREF49:\nFirst Party Collection/Use: What, why and how information is collected by the service provider\nThird Party Sharing/Collection: What, why and how information shared with or collected by third parties\nData Security: Protection measures for user information\nData Retention: How long user information will be stored\nUser Choice/Control: Control options available to users\nUser Access, Edit and Deletion: If/how users can access, edit or delete information\nPolicy Change: Informing users if policy information has been changed\nInternational and Specific Audiences: Practices pertaining to a specific group of users\nOther: General text, contact information or practices not covered by other categories.\nFor each question, domain experts indicate one or more relevant OPP-115 categories. 
We mark a category as relevant to a question if it is identified as such by at least two annotators. If no such category exists, the category is marked as `Other' if atleast one annotator has identified the `Other' category to be relevant. If neither of these conditions is satisfied, we label the question as having no agreement. The distribution of questions in the corpus across OPP-115 categories is as shown in Table.TABREF16. First party and third party related questions are the largest categories, forming nearly 66.4% of all questions asked to the privacy assistant.\n\nExperimental Setup ::: Answerability Identification Baselines\nWe define answerability identification as a binary classification task, evaluating model ability to predict if a question can be answered, given a question in isolation. This can serve as a prior for downstream question-answering. We describe three baselines on the answerability task, and find they considerably improve performance over a majority-class baseline.\nSVM: We define 3 sets of features to characterize each question. The first is a simple bag-of-words set of features over the question (SVM-BOW), the second is bag-of-words features of the question as well as length of the question in words (SVM-BOW + LEN), and lastly we extract bag-of-words features, length of the question in words as well as part-of-speech tags for the question (SVM-BOW + LEN + POS). This results in vectors of 200, 201 and 228 dimensions respectively, which are provided to an SVM with a linear kernel.\nCNN: We utilize a CNN neural encoder for answerability prediction. We use GloVe word embeddings BIBREF50, and a filter size of 5 with 64 filters to encode questions.\nBERT: BERT BIBREF51 is a bidirectional transformer-based language-model BIBREF52. We fine-tune BERT-base on our binary answerability identification task with a learning rate of 2e-5 for 3 epochs, with a maximum sequence length of 128.\n\nExperimental Setup ::: Privacy Question Answering\nOur goal is to identify evidence within a privacy policy for questions asked by a user. This is framed as an answer sentence selection task, where models identify a set of evidence sentences from all candidate sentences in each policy.\n\nExperimental Setup ::: Privacy Question Answering ::: Evaluation Metric\nOur evaluation metric for answer-sentence selection is sentence-level F1, implemented similar to BIBREF30, BIBREF16. Precision and recall are implemented by measuring the overlap between predicted sentences and sets of gold-reference sentences. We report the average of the maximum F1 from each n$-$1 subset, in relation to the heldout reference.\n\nResults and Discussion ::: What makes Questions Unanswerable?\nWe further ask legal experts to identify potential causes of unanswerability of questions. This analysis has considerable implications. While past work BIBREF17 has treated unanswerable questions as homogeneous, a question answering system might wish to have different treatments for different categories of `unanswerable' questions. The following factors were identified to play a role in unanswerability:\nIncomprehensibility: If a question is incomprehensible to the extent that its meaning is not intelligible.\nRelevance: Is this question in the scope of what could be answered by reading the privacy policy.\nIll-formedness: Is this question ambiguous or vague. 
An ambiguous statement will typically contain expressions that can refer to multiple potential explanations, whereas a vague statement carries a concept with an unclear or soft definition.\nSilence: Other policies answer this type of question but this one does not.\nAtypicality: The question is of a nature such that it is unlikely for any policy policy to have an answer to the question.\nOur experts attempt to identify the different `unanswerable' factors for all 573 such questions in the corpus. 4.18% of the questions were identified as being incomprehensible (for example, `any difficulties to occupy the privacy assistant'). Amongst the comprehendable questions, 50% were identified as likely to have an answer within the privacy policy, 33.1% were identified as being privacy-related questions but not within the scope of a privacy policy (e.g., 'has Viber had any privacy breaches in the past?') and 16.9% of questions were identified as completely out-of-scope (e.g., `'will the app consume much space?'). In the questions identified as relevant, 32% were ill-formed questions that were phrased by the user in a manner considered vague or ambiguous. Of the questions that were both relevant as well as `well-formed', 95.7% of the questions were not answered by the policy in question but it was reasonable to expect that a privacy policy would contain an answer. The remaining 4.3% were described as reasonable questions, but of a nature generally not discussed in privacy policies. This suggests that the answerability of questions over privacy policies is a complex issue, and future systems should consider each of these factors when serving user's information seeking intent.\nWe examine a large-scale dataset of “natural” unanswerable questions BIBREF54 based on real user search engine queries to identify if similar unanswerability factors exist. It is important to note that these questions have previously been filtered, according to a criteria for bad questions defined as “(questions that are) ambiguous, incomprehensible, dependent on clear false presuppositions, opinion-seeking, or not clearly a request for factual information.” Annotators made the decision based on the content of the question without viewing the equivalent Wikipedia page. We randomly sample 100 questions from the development set which were identified as unanswerable, and find that 20% of the questions are not questions (e.g., “all I want for christmas is you mariah carey tour”). 12% of questions are unlikely to ever contain an answer on Wikipedia, corresponding closely to our atypicality category. 3% of questions are unlikely to have an answer anywhere (e.g., `what guides Santa home after he has delivered presents?'). 7% of questions are incomplete or open-ended (e.g., `the south west wind blows across nigeria between'). 3% of questions have an unresolvable coreference (e.g., `how do i get to Warsaw Missouri from here'). 4% of questions are vague, and a further 7% have unknown sources of error. 2% still contain false presuppositions (e.g., `what is the only fruit that does not have seeds?') and the remaining 42% do not have an answer within the document. 
This reinforces our belief that though they have been understudied in past work, any question answering system interacting with real users should expect to receive such unanticipated and unanswerable questions.\n\nAcknowledgements\nThis research was supported in part by grants from the National Science Foundation Secure and Trustworthy Computing program (CNS-1330596, CNS-1330214, CNS-15-13957, CNS-1801316, CNS-1914486, CNS-1914444) and a DARPA Brandeis grant on Personalized Privacy Assistants (FA8750-15-2-0277). The US Government is authorized to reproduce and distribute reprints for Governmental purposes not withstanding any copyright notation. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the NSF, DARPA, or the US Government. The authors would like to extend their gratitude to Elias Wright, Gian Mascioli, Kiara Pillay, Harrison Kay, Eliel Talo, Alexander Fagella and N. Cameron Russell for providing their valuable expertise and insight to this effort. The authors are also grateful to Eduard Hovy, Lorrie Cranor, Florian Schaub, Joel Reidenberg, Aditya Potukuchi and Igor Shalyminov for helpful discussions related to this work, and to the three anonymous reviewers of this draft for their constructive feedback. Finally, the authors would like to thank all crowdworkers who consented to participate in this study.\n\nFigure 1: Examples of privacy-related questions users ask, drawn from two mobile applications: Keep1and Viber.2Policy evidence represents sentences in the privacy policy that are relevant for determining the answer to the user’s question.3\n\nTable 2: Ten most frequent first words in questions in the PRIVACYQA dataset. We observe high lexical diversity in prefixes with 35 unique first word types and 131 unique combinations of first and second words.\n\nTable 6: Performance of baselines on PRIVACYQA dataset.\n\nQuery: Who were the experts used for annotation?\n\nJSON Response:", "role": "user" }, { "content": "{\"answer\": \"Individuals with legal training\", \"evidence\": [\"To identify legally sound answers, we recruit seven experts with legal training to construct answers to Turker questions. Experts identify relevant evidence within the privacy policy, as well as provide meta-annotation on the question\\u0027s relevance, subjectivity, OPP-115 category BIBREF49, and how likely any privacy policy is to contain the answer to the question asked.\"]}", "role": "assistant" } ]
science.scifact_entailment
Determine the relationship between them and generate a JSON response. Scientific Claim: Autophagy deficiency in the liver increases vulnerability to insulin resistance. Research Abstract: Despite growing interest and a recent surge in papers, the role of autophagy in glucose and lipid metabolism is unclear. We produced mice with skeletal muscle–specific deletion of Atg7 (encoding autophagy-related 7). Unexpectedly, these mice showed decreased fat mass and were protected from diet-induced obesity and insulin resistance; this phenotype was accompanied by increased fatty acid oxidation and browning of white adipose tissue (WAT) owing to induction of fibroblast growth factor 21 (Fgf21). Mitochondrial dysfunction induced by autophagy deficiency increased Fgf21 expression through induction of Atf4, a master regulator of the integrated stress response. Mitochondrial respiratory chain inhibitors also induced Fgf21 in an Atf4-dependent manner. We also observed induction of Fgf21, resistance to diet-induced obesity and amelioration of insulin resistance in mice with autophagy deficiency in the liver, another insulin target tissue. These findings suggest that autophagy deficiency and subsequent mitochondrial dysfunction promote Fgf21 expression, a hormone we consequently term a 'mitokine', and together these processes promote protection from diet-induced obesity and insulin resistance.
{"verdict": "CONTRADICT", "evidence": ["Unexpectedly, these mice showed decreased fat mass and were protected from diet-induced obesity and insulin resistance; this phenotype was accompanied by increased fatty acid oxidation and browning of white adipose tissue (WAT) owing to induction of fibroblast growth factor 21 (Fgf21).", "We also observed induction of Fgf21, resistance to diet-induced obesity and amelioration of insulin resistance in mice with autophagy deficiency in the liver, another insulin target tissue.", "These findings suggest that autophagy deficiency and subsequent mitochondrial dysfunction promote Fgf21 expression, a hormone we consequently term a 'mitokine', and together these processes promote protection from diet-induced obesity and insulin resistance."]}
science.scifact_entailment.99
[ { "content": "Determine the relationship between them and generate a JSON response. Scientific Claim: \nAutophagy deficiency in the liver increases vulnerability to insulin resistance.\n\nResearch Abstract:\nDespite growing interest and a recent surge in papers, the role of autophagy in glucose and lipid metabolism is unclear.\nWe produced mice with skeletal muscle–specific deletion of Atg7 (encoding autophagy-related 7).\nUnexpectedly, these mice showed decreased fat mass and were protected from diet-induced obesity and insulin resistance; this phenotype was accompanied by increased fatty acid oxidation and browning of white adipose tissue (WAT) owing to induction of fibroblast growth factor 21 (Fgf21).\nMitochondrial dysfunction induced by autophagy deficiency increased Fgf21 expression through induction of Atf4, a master regulator of the integrated stress response.\nMitochondrial respiratory chain inhibitors also induced Fgf21 in an Atf4-dependent manner.\nWe also observed induction of Fgf21, resistance to diet-induced obesity and amelioration of insulin resistance in mice with autophagy deficiency in the liver, another insulin target tissue.\nThese findings suggest that autophagy deficiency and subsequent mitochondrial dysfunction promote Fgf21 expression, a hormone we consequently term a 'mitokine', and together these processes promote protection from diet-induced obesity and insulin resistance.", "role": "user" }, { "content": "{\"verdict\": \"CONTRADICT\", \"evidence\": [\"Unexpectedly, these mice showed decreased fat mass and were protected from diet-induced obesity and insulin resistance; this phenotype was accompanied by increased fatty acid oxidation and browning of white adipose tissue (WAT) owing to induction of fibroblast growth factor 21 (Fgf21).\", \"We also observed induction of Fgf21, resistance to diet-induced obesity and amelioration of insulin resistance in mice with autophagy deficiency in the liver, another insulin target tissue.\", \"These findings suggest that autophagy deficiency and subsequent mitochondrial dysfunction promote Fgf21 expression, a hormone we consequently term a 'mitokine', and together these processes promote protection from diet-induced obesity and insulin resistance.\"]}", "role": "assistant" } ]
science.multicite_intent_classification
Examine the text below, which includes a citation marked by "<cite>" and "</cite>". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text. Provide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose. Text with Citation: vq-wav2vec <cite>(Baevski et al., 2019b)</cite> learns vector quantized (VQ) representations of audio data using a future time-step prediction task.
["Background"]
science.multicite_intent_classification.3
[ { "content": "Examine the text below, which includes a citation marked by \"<cite>\" and \"</cite>\". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.\n\nProvide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.\n\nText with Citation:\nvq-wav2vec <cite>(Baevski et al., 2019b)</cite> learns vector quantized (VQ) representations of audio data using a future time-step prediction task.", "role": "user" }, { "content": "[\"Background\"]", "role": "assistant" } ]
science.multicite_intent_classification
Examine the text below, which includes a citation marked by "<cite>" and "</cite>". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text. Provide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose. Text with Citation: For all Freebase KG configurations,<cite> Gardner et al. (2014)</cite> have 1000 paths for most relations (approx. 6 of the relations have between 230 and 973).
["Background"]
science.multicite_intent_classification.1663
[ { "content": "Examine the text below, which includes a citation marked by \"<cite>\" and \"</cite>\". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.\n\nProvide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.\n\nText with Citation:\nFor all Freebase KG configurations,<cite> Gardner et al. (2014)</cite> have 1000 paths for most relations (approx. 6 of the relations have between 230 and 973).", "role": "user" }, { "content": "[\"Background\"]", "role": "assistant" } ]
science.discomat_te
| Sample name | G | GC1 | GC2 | GC3 | GC4 |
| Heat treated time at 660degC | 0h | 4h | 8h | 16h | 32h |
| Er3+ concentration (mol/dm3) | 1.796 | 1.785 | 1.783 | 1.776 | 1.784 |
| Density (g/cm3) | 3.046 | 3.030 | 3.028 | 3.015 | 3.030 |
| Thickness (cm) | 0.150 | 0.156 | 0.156 | 0.156 | 0.156 |
| n C (at 656.3nm) | 1.527 | 1.530 | 1.530 | 1.530 | 1.525 |
| n D (at 589.3nm) | 1.530 | 1.533 | 1.532 | 1.532 | 1.528 |
| n F (at 486.1nm) | 1.535 | 1.539 | 1.538 | 1.538 | 1.533 |

Caption: Heat treated mode and some physical properties of the samples

Examine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:
{"value": "xx", "type": "Other"}
{"value": "xx", "type": "Glass_Compound_Amount", "constituent": "xx", "unit": "xx", "material": "xx"}

Process each cell sequentially, and for any unknown attributes, use "xx" as a placeholder. Each JSON should be on a new line.

Output:
{"value": "1.796", "type": "Other"} {"value": "1.785", "type": "Other"} {"value": "1.783", "type": "Other"} {"value": "1.776", "type": "Other"} {"value": "1.784", "type": "Other"} {"value": "3.046", "type": "Other"} {"value": "3.030", "type": "Other"} {"value": "3.028", "type": "Other"} {"value": "3.015", "type": "Other"} {"value": "3.030", "type": "Other"} {"value": "0.150", "type": "Other"} {"value": "0.156", "type": "Other"} {"value": "0.156", "type": "Other"} {"value": "0.156", "type": "Other"} {"value": "0.156", "type": "Other"} {"value": "1.527", "type": "Other"} {"value": "1.530", "type": "Other"} {"value": "1.530", "type": "Other"} {"value": "1.530", "type": "Other"} {"value": "1.525", "type": "Other"} {"value": "1.530", "type": "Other"} {"value": "1.533", "type": "Other"} {"value": "1.532", "type": "Other"} {"value": "1.532", "type": "Other"} {"value": "1.528", "type": "Other"} {"value": "1.535", "type": "Other"} {"value": "1.539", "type": "Other"} {"value": "1.538", "type": "Other"} {"value": "1.538", "type": "Other"} {"value": "1.533", "type": "Other"}
science.discomat_te.483
[ { "content": "| Sample name | G | GC1 | GC2 | GC3 | GC4 |\n| Heat treated time at 660degC | 0h | 4h | 8h | 16h | 32h |\n| Er3+ concentration (mol/dm3) | 1.796 | 1.785 | 1.783 | 1.776 | 1.784 |\n| Density (g/cm3) | 3.046 | 3.030 | 3.028 | 3.015 | 3.030 |\n| Thickness (cm) | 0.150 | 0.156 | 0.156 | 0.156 | 0.156 |\n| n C (at 656.3nm) | 1.527 | 1.530 | 1.530 | 1.530 | 1.525 |\n| n D (at 589.3nm) | 1.530 | 1.533 | 1.532 | 1.532 | 1.528 |\n| n F (at 486.1nm) | 1.535 | 1.539 | 1.538 | 1.538 | 1.533 |\n\nCaption: Heat treated mode and some physical properties of the samples\n\nExamine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:\n{\"value\": \"xx\", \"type\": \"Other\"}\n{\"value\": \"xx\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"xx\", \"unit\": \"xx\", \"material\": \"xx\"}\n\nProcess each cell sequentially, and for any unknown attributes, use \"xx\" as a placeholder. Each JSON should be on a new line.\n\nOutput:", "role": "user" }, { "content": "{\"value\": \"1.796\", \"type\": \"Other\"}\n{\"value\": \"1.785\", \"type\": \"Other\"}\n{\"value\": \"1.783\", \"type\": \"Other\"}\n{\"value\": \"1.776\", \"type\": \"Other\"}\n{\"value\": \"1.784\", \"type\": \"Other\"}\n{\"value\": \"3.046\", \"type\": \"Other\"}\n{\"value\": \"3.030\", \"type\": \"Other\"}\n{\"value\": \"3.028\", \"type\": \"Other\"}\n{\"value\": \"3.015\", \"type\": \"Other\"}\n{\"value\": \"3.030\", \"type\": \"Other\"}\n{\"value\": \"0.150\", \"type\": \"Other\"}\n{\"value\": \"0.156\", \"type\": \"Other\"}\n{\"value\": \"0.156\", \"type\": \"Other\"}\n{\"value\": \"0.156\", \"type\": \"Other\"}\n{\"value\": \"0.156\", \"type\": \"Other\"}\n{\"value\": \"1.527\", \"type\": \"Other\"}\n{\"value\": \"1.530\", \"type\": \"Other\"}\n{\"value\": \"1.530\", \"type\": \"Other\"}\n{\"value\": \"1.530\", \"type\": \"Other\"}\n{\"value\": \"1.525\", \"type\": \"Other\"}\n{\"value\": \"1.530\", \"type\": \"Other\"}\n{\"value\": \"1.533\", \"type\": \"Other\"}\n{\"value\": \"1.532\", \"type\": \"Other\"}\n{\"value\": \"1.532\", \"type\": \"Other\"}\n{\"value\": \"1.528\", \"type\": \"Other\"}\n{\"value\": \"1.535\", \"type\": \"Other\"}\n{\"value\": \"1.539\", \"type\": \"Other\"}\n{\"value\": \"1.538\", \"type\": \"Other\"}\n{\"value\": \"1.538\", \"type\": \"Other\"}\n{\"value\": \"1.533\", \"type\": \"Other\"}", "role": "assistant" } ]
science.multicite_intent_classification
Examine the text below, which includes a citation marked by "<cite>" and "</cite>". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text. Provide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose. Text with Citation: In order to compare ROOT9 with the state-of-the-art, we have also evaluated it in the<cite> Weeds et al. (2014)</cite> datasets.
["Uses"]
science.multicite_intent_classification.1692
[ { "content": "Examine the text below, which includes a citation marked by \"<cite>\" and \"</cite>\". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.\n\nProvide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.\n\nText with Citation:\nIn order to compare ROOT9 with the state-of-the-art, we have also evaluated it in the<cite> Weeds et al. (2014)</cite> datasets.", "role": "user" }, { "content": "[\"Uses\"]", "role": "assistant" } ]
science.discomat_te
| | Alkali germanate glasses | Alkali germanate glasses | Alkali germanate glasses | Alkali germanate glasses | Alkali germanate glasses | Alkali germano-phosphate glasses | Alkali germano-phosphate glasses | Alkali germano-phosphate glasses | Alkali germano-phosphate glasses | Alkali germano-phosphate glasses | Alkali germano-phosphate glasses | Alkali germano-phosphate glasses | Alkali germano-phosphate glasses | Alkali germano-phosphate glasses |
| | Li | Na | K | Rb | Cs | Sodium | Sodium | Sodium | Potassium | Potassium | Potassium | Rubidium | Rubidium | Rubidium |
| | | | | | | 8:1 | 6:1 | 4:1 | 8:1 | 6:1 | 4:1 | 8:1 | 6:1 | 4:1 |
| Density maximum | 17.5-20 | 15 | 10 | 15 | 17.5 | 19 | 19 | 25 | 8 | 10 | 12 | 20 | 25 | 20 |
| Predicted maximum from Raman spectra | 17.5-20 | 15 | 15 | 12.5 | 12.5 | 19 | 25 | 25 | 20 | 21 | 25 | 20 | 18 | 15 |

Caption: Maximum in density of alkali germanate and alkali germano-phosphate glasses vs. maximum density predicted from Raman data

Examine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:
{"value": "xx", "type": "Other"}
{"value": "xx", "type": "Glass_Compound_Amount", "constituent": "xx", "unit": "xx", "material": "xx"}

Process each cell sequentially, and for any unknown attributes, use "xx" as a placeholder. Each JSON should be on a new line.

Output:
{"value": "8", "type": "Other"} {"value": "1", "type": "Other"} {"value": "6", "type": "Other"} {"value": "1", "type": "Other"} {"value": "4", "type": "Other"} {"value": "1", "type": "Other"} {"value": "8", "type": "Other"} {"value": "1", "type": "Other"} {"value": "6", "type": "Other"} {"value": "1", "type": "Other"} {"value": "4", "type": "Other"} {"value": "1", "type": "Other"} {"value": "8", "type": "Other"} {"value": "1", "type": "Other"} {"value": "6", "type": "Other"} {"value": "1", "type": "Other"} {"value": "4", "type": "Other"} {"value": "1", "type": "Other"} {"value": "17.5", "type": "Other"} {"value": "20", "type": "Other"} {"value": "15", "type": "Other"} {"value": "10", "type": "Other"} {"value": "15", "type": "Other"} {"value": "17.5", "type": "Other"} {"value": "19", "type": "Other"} {"value": "19", "type": "Other"} {"value": "25", "type": "Other"} {"value": "8", "type": "Other"} {"value": "10", "type": "Other"} {"value": "12", "type": "Other"} {"value": "20", "type": "Other"} {"value": "25", "type": "Other"} {"value": "20", "type": "Other"} {"value": "17.5", "type": "Other"} {"value": "20", "type": "Other"} {"value": "15", "type": "Other"} {"value": "15", "type": "Other"} {"value": "12.5", "type": "Other"} {"value": "12.5", "type": "Other"} {"value": "19", "type": "Other"} {"value": "25", "type": "Other"} {"value": "25", "type": "Other"} {"value": "20", "type": "Other"} {"value": "21", "type": "Other"} {"value": "25", "type": "Other"} {"value": "20", "type": "Other"} {"value": "18", "type": "Other"} {"value": "15", "type": "Other"}
science.discomat_te.0
[ { "content": "| | Alkali germanate glasses | Alkali germanate glasses | Alkali germanate glasses | Alkali germanate glasses | Alkali germanate glasses | Alkali germano-phosphate glasses | Alkali germano-phosphate glasses | Alkali germano-phosphate glasses | Alkali germano-phosphate glasses | Alkali germano-phosphate glasses | Alkali germano-phosphate glasses | Alkali germano-phosphate glasses | Alkali germano-phosphate glasses | Alkali germano-phosphate glasses |\n| | Li | Na | K | Rb | Cs | Sodium | Sodium | Sodium | Potassium | Potassium | Potassium | Rubidium | Rubidium | Rubidium |\n| | | | | | | 8:1 | 6:1 | 4:1 | 8:1 | 6:1 | 4:1 | 8:1 | 6:1 | 4:1 |\n| Density maximum | 17.5-20 | 15 | 10 | 15 | 17.5 | 19 | 19 | 25 | 8 | 10 | 12 | 20 | 25 | 20 |\n| Predicted maximum from Raman spectra | 17.5-20 | 15 | 15 | 12.5 | 12.5 | 19 | 25 | 25 | 20 | 21 | 25 | 20 | 18 | 15 |\n\nCaption: Maximum in density of alkali germanate and alkali germano-phosphate glasses vs. maximum density predicted from Raman data\n\nExamine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:\n{\"value\": \"xx\", \"type\": \"Other\"}\n{\"value\": \"xx\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"xx\", \"unit\": \"xx\", \"material\": \"xx\"}\n\nProcess each cell sequentially, and for any unknown attributes, use \"xx\" as a placeholder. Each JSON should be on a new line.\n\nOutput:", "role": "user" }, { "content": "{\"value\": \"8\", \"type\": \"Other\"}\n{\"value\": \"1\", \"type\": \"Other\"}\n{\"value\": \"6\", \"type\": \"Other\"}\n{\"value\": \"1\", \"type\": \"Other\"}\n{\"value\": \"4\", \"type\": \"Other\"}\n{\"value\": \"1\", \"type\": \"Other\"}\n{\"value\": \"8\", \"type\": \"Other\"}\n{\"value\": \"1\", \"type\": \"Other\"}\n{\"value\": \"6\", \"type\": \"Other\"}\n{\"value\": \"1\", \"type\": \"Other\"}\n{\"value\": \"4\", \"type\": \"Other\"}\n{\"value\": \"1\", \"type\": \"Other\"}\n{\"value\": \"8\", \"type\": \"Other\"}\n{\"value\": \"1\", \"type\": \"Other\"}\n{\"value\": \"6\", \"type\": \"Other\"}\n{\"value\": \"1\", \"type\": \"Other\"}\n{\"value\": \"4\", \"type\": \"Other\"}\n{\"value\": \"1\", \"type\": \"Other\"}\n{\"value\": \"17.5\", \"type\": \"Other\"}\n{\"value\": \"20\", \"type\": \"Other\"}\n{\"value\": \"15\", \"type\": \"Other\"}\n{\"value\": \"10\", \"type\": \"Other\"}\n{\"value\": \"15\", \"type\": \"Other\"}\n{\"value\": \"17.5\", \"type\": \"Other\"}\n{\"value\": \"19\", \"type\": \"Other\"}\n{\"value\": \"19\", \"type\": \"Other\"}\n{\"value\": \"25\", \"type\": \"Other\"}\n{\"value\": \"8\", \"type\": \"Other\"}\n{\"value\": \"10\", \"type\": \"Other\"}\n{\"value\": \"12\", \"type\": \"Other\"}\n{\"value\": \"20\", \"type\": \"Other\"}\n{\"value\": \"25\", \"type\": \"Other\"}\n{\"value\": \"20\", \"type\": \"Other\"}\n{\"value\": \"17.5\", \"type\": \"Other\"}\n{\"value\": \"20\", \"type\": \"Other\"}\n{\"value\": \"15\", \"type\": \"Other\"}\n{\"value\": \"15\", \"type\": \"Other\"}\n{\"value\": \"12.5\", \"type\": \"Other\"}\n{\"value\": \"12.5\", \"type\": \"Other\"}\n{\"value\": \"19\", \"type\": \"Other\"}\n{\"value\": \"25\", \"type\": \"Other\"}\n{\"value\": \"25\", \"type\": \"Other\"}\n{\"value\": \"20\", \"type\": \"Other\"}\n{\"value\": \"21\", \"type\": \"Other\"}\n{\"value\": \"25\", \"type\": \"Other\"}\n{\"value\": \"20\", \"type\": \"Other\"}\n{\"value\": \"18\", \"type\": \"Other\"}\n{\"value\": \"15\", \"type\": \"Other\"}", "role": "assistant" } ]
science.scifact_entailment
Determine the relationship between them and generate a JSON response. Scientific Claim: Ca2+ cycling controls whole-body energy homeostasis in beige fat. Research Abstract: Uncoupling protein 1 (UCP1) plays a central role in nonshivering thermogenesis in brown fat; however, its role in beige fat remains unclear. Here we report a robust UCP1-independent thermogenic mechanism in beige fat that involves enhanced ATP-dependent Ca2+ cycling by sarco/endoplasmic reticulum Ca2+-ATPase 2b (SERCA2b) and ryanodine receptor 2 (RyR2). Inhibition of SERCA2b impairs UCP1-independent beige fat thermogenesis in humans and mice as well as in pigs, a species that lacks a functional UCP1 protein. Conversely, enhanced Ca2+ cycling by activation of α1- and/or β3-adrenergic receptors or the SERCA2b-RyR2 pathway stimulates UCP1-independent thermogenesis in beige adipocytes. In the absence of UCP1, beige fat dynamically expends glucose through enhanced glycolysis, tricarboxylic acid metabolism and pyruvate dehydrogenase activity for ATP-dependent thermogenesis through the SERCA2b pathway; beige fat thereby functions as a 'glucose sink' and improves glucose tolerance independently of body weight loss. Our study uncovers a noncanonical thermogenic mechanism through which beige fat controls whole-body energy homeostasis via Ca2+ cycling.
{"verdict": "SUPPORT", "evidence": ["Conversely, enhanced Ca2+ cycling by activation of \u03b11- and/or \u03b23-adrenergic receptors or the SERCA2b-RyR2 pathway stimulates UCP1-independent thermogenesis in beige adipocytes.", "In the absence of UCP1, beige fat dynamically expends glucose through enhanced glycolysis, tricarboxylic acid metabolism and pyruvate dehydrogenase activity for ATP-dependent thermogenesis through the SERCA2b pathway; beige fat thereby functions as a 'glucose sink' and improves glucose tolerance independently of body weight loss.", "Our study uncovers a noncanonical thermogenic mechanism through which beige fat controls whole-body energy homeostasis via Ca2+ cycling."]}
science.scifact_entailment.144
[ { "content": "Determine the relationship between them and generate a JSON response. Scientific Claim: \nCa2+ cycling controls whole-body energy homeostasis in beige fat.\n\nResearch Abstract:\nUncoupling protein 1 (UCP1) plays a central role in nonshivering thermogenesis in brown fat; however, its role in beige fat remains unclear.\nHere we report a robust UCP1-independent thermogenic mechanism in beige fat that involves enhanced ATP-dependent Ca2+ cycling by sarco/endoplasmic reticulum Ca2+-ATPase 2b (SERCA2b) and ryanodine receptor 2 (RyR2).\nInhibition of SERCA2b impairs UCP1-independent beige fat thermogenesis in humans and mice as well as in pigs, a species that lacks a functional UCP1 protein.\nConversely, enhanced Ca2+ cycling by activation of α1- and/or β3-adrenergic receptors or the SERCA2b-RyR2 pathway stimulates UCP1-independent thermogenesis in beige adipocytes.\nIn the absence of UCP1, beige fat dynamically expends glucose through enhanced glycolysis, tricarboxylic acid metabolism and pyruvate dehydrogenase activity for ATP-dependent thermogenesis through the SERCA2b pathway; beige fat thereby functions as a 'glucose sink' and improves glucose tolerance independently of body weight loss.\nOur study uncovers a noncanonical thermogenic mechanism through which beige fat controls whole-body energy homeostasis via Ca2+ cycling.", "role": "user" }, { "content": "{\"verdict\": \"SUPPORT\", \"evidence\": [\"Conversely, enhanced Ca2+ cycling by activation of \\u03b11- and/or \\u03b23-adrenergic receptors or the SERCA2b-RyR2 pathway stimulates UCP1-independent thermogenesis in beige adipocytes.\", \"In the absence of UCP1, beige fat dynamically expends glucose through enhanced glycolysis, tricarboxylic acid metabolism and pyruvate dehydrogenase activity for ATP-dependent thermogenesis through the SERCA2b pathway; beige fat thereby functions as a 'glucose sink' and improves glucose tolerance independently of body weight loss.\", \"Our study uncovers a noncanonical thermogenic mechanism through which beige fat controls whole-body energy homeostasis via Ca2+ cycling.\"]}", "role": "assistant" } ]
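The fact-checking record above pairs a claim with an abstract and expects a single JSON object. As a rough sketch of how such an output could be validated, the snippet below checks the verdict against an assumed label set (only SUPPORT is visible here; CONTRADICT and NEI are inferred from the task family) and confirms that each evidence string is copied verbatim from the abstract text.

```python
import json

# Label set is an assumption; only "SUPPORT" appears in the record above.
ALLOWED_VERDICTS = {"SUPPORT", "CONTRADICT", "NEI"}

def check_verdict(output_json, abstract_text):
    """Return True if the verdict label is allowed and every evidence
    sentence appears verbatim in the abstract text."""
    payload = json.loads(output_json)
    if payload.get("verdict") not in ALLOWED_VERDICTS:
        return False
    return all(sentence in abstract_text
               for sentence in payload.get("evidence", []))
```

Membership is checked against the raw abstract string, so the check tolerates whatever sentence segmentation the evidence list uses, as in the gold answer above.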
science.multicite_intent_classification
Examine the text below, which includes a citation marked by "<cite>" and "</cite>". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text. Provide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose. Text with Citation: Our models are compared against the best attention-based systems in<cite> (Vinyals et al., 2015a)</cite> , including the state-of-the-art result of 92.8 F 1 .
["Uses"]
science.multicite_intent_classification.1376
[ { "content": "Examine the text below, which includes a citation marked by \"<cite>\" and \"</cite>\". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.\n\nProvide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.\n\nText with Citation:\nOur models are compared against the best attention-based systems in<cite> (Vinyals et al., 2015a)</cite> , including the state-of-the-art result of 92.8 F 1 .", "role": "user" }, { "content": "[\"Uses\"]", "role": "assistant" } ]
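The citation-intent records reduce to choosing labels from a closed inventory. A tiny validation sketch follows; the full label inventory is an assumption, since only `Uses` and `Similarities` occur in the records shown here.

```python
import json

# Assumed inventory; only "Uses" and "Similarities" appear in the records above.
ASSUMED_INTENTS = {"Background", "Motivation", "Uses", "Extends",
                   "Similarities", "Differences", "FutureWork"}

def valid_intents(prediction_json):
    labels = json.loads(prediction_json)
    return isinstance(labels, list) and len(labels) > 0 and all(
        label in ASSUMED_INTENTS for label in labels)

print(valid_intents('["Uses"]'))  # True for the gold answer above
```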
science.evidence_inference
Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as: - Treatment: The specific medical procedure or drug being evaluated. - Control: The standard or placebo used for comparison. If absent, use `null`. - Result: The health outcome being assessed. - Impact: The observed effect of the treatment on the result, categorized as: ("increased", "decreased", "no change"). - Justification: The supporting details for the observed impact. Format the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>]. Abstract: ABSTRACT.BACKGROUND:: Clinically optimized focusing of drug administration to specific need of patient with bispectral index (BIS) monitoring results in reduced dose and faster recovery of consciousness. This study was planned with an aim to study and compare the conventional clinical end point or BIS on the requirement of dosage of propofol, hemodynamic effects, and BIS alterations following propofol induction. ABSTRACT.METHODS:: 70 patients, ASA I and II, 20-60 years undergoing elective surgical procedure under general anesthesia with endotracheal intubation were selected and divided into two groups. Group A received (inj.) fentanyl (2 μg/kg), followed 3 min later by inj. propofol at the rate of 30 mg/kg/hr infusion till the loss of response to verbal command while group B received inj. fentanyl (2 μg/kg), followed 3 min later by inj. propofol at the rate of 30 mg/kg/hr infusion. The end point of hypnosis was when the BIS value was sustained for 1 min at 48±2. The patients were intubated. Total induction dose of propofol was noted in each group. The value of BIS and hemodynamic parameters (heart rate, systolic/diastolic blood pressure) were noted at the time of loss of consciousness, at the time of intubation, and 1 min after intubation, thereafter every minute for first 10 min and thereafter every 10 min till end of surgery. Any involuntary muscle activity such as jerky movements, dystonic posturing, and opisthotonos were also recorded. ABSTRACT.RESULTS:: The mean dose of propofol used in groups A and B were 1.85±0.48 mg/kg and 1.79±0.41 mg/kg, respectively. The dosage used in group B were less but not clinically significant (P=0.575). On comparing the dosage of propofol in males among the groups there was a significantly lower dosage of propofol required in group B (2.06±0.45 mg/kg and 1.83±0.32 mg/kg, respectively, P=0.016). This decrease however was not seen in female patients dosage being 1.65±0.44 mg/kg and 1.75±0.49 mg/kg, respectively (P=0.372). The hemodynamic variables including heart rate, systolic/diastolic blood pressure and BIS were comparable within the group at induction, post-induction, and intubation. However, there was a significant increase in all the parameters at postintubation readings (P<0.001). ABSTRACT.CONCLUSION:: No significant difference in the induction dose of propofol was observed when assessed clinically (loss of verbal response) or by BIS monitoring. Traditional teaching to titrate the dose of propofol and depth of anesthesia during intubation by loss of verbal response is as good as BIS value monitoring. Extracted Data:
[["propofol induction guided by BIS", "propofol induction guided by clinical signs", "mean dose of propofol required for induction", "no significant difference", "No significant difference in the induction dose of propofol was observed when assessed clinically (loss of verbal response) or by BIS monitoring."], ["propofol induction guided by BIS", "propofol induction guided by clinical signs", "dosage of propofol required by females", "no significant difference", "The mean dose of propofol used in groups A and B were 1.85\u00b10.48 mg/kg and 1.79\u00b10.41 mg/kg, respectively. The dosage used in group B were less but not clinically significant (P=0.575). On comparing the dosage of propofol in males among the groups there was a significantly lower dosage of propofol required in group B (2.06\u00b10.45 mg/kg and 1.83\u00b10.32 mg/kg, respectively, P=0.016). This decrease however was not seen in female patients dosage being 1.65\u00b10.44 mg/kg and 1.75\u00b10.49 mg/kg, respectively (P=0.372)."]]
science.evidence_inference.22
[ { "content": "Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:\n\n- Treatment: The specific medical procedure or drug being evaluated.\n- Control: The standard or placebo used for comparison. If absent, use `null`.\n- Result: The health outcome being assessed.\n- Impact: The observed effect of the treatment on the result, categorized as: (\"increased\", \"decreased\", \"no change\").\n- Justification: The supporting details for the observed impact.\n\nFormat the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].\n\nAbstract:\nABSTRACT.BACKGROUND:: Clinically optimized focusing of drug administration to specific need of patient with bispectral index (BIS) monitoring results in reduced dose and faster recovery of consciousness. This study was planned with an aim to study and compare the conventional clinical end point or BIS on the requirement of dosage of propofol, hemodynamic effects, and BIS alterations following propofol induction.\n\nABSTRACT.METHODS:: 70 patients, ASA I and II, 20-60 years undergoing elective surgical procedure under general anesthesia with endotracheal intubation were selected and divided into two groups. Group A received (inj.) fentanyl (2 μg/kg), followed 3 min later by inj. propofol at the rate of 30 mg/kg/hr infusion till the loss of response to verbal command while group B received inj. fentanyl (2 μg/kg), followed 3 min later by inj. propofol at the rate of 30 mg/kg/hr infusion. The end point of hypnosis was when the BIS value was sustained for 1 min at 48±2. The patients were intubated. Total induction dose of propofol was noted in each group. The value of BIS and hemodynamic parameters (heart rate, systolic/diastolic blood pressure) were noted at the time of loss of consciousness, at the time of intubation, and 1 min after intubation, thereafter every minute for first 10 min and thereafter every 10 min till end of surgery. Any involuntary muscle activity such as jerky movements, dystonic posturing, and opisthotonos were also recorded.\n\nABSTRACT.RESULTS:: The mean dose of propofol used in groups A and B were 1.85±0.48 mg/kg and 1.79±0.41 mg/kg, respectively. The dosage used in group B were less but not clinically significant (P=0.575). On comparing the dosage of propofol in males among the groups there was a significantly lower dosage of propofol required in group B (2.06±0.45 mg/kg and 1.83±0.32 mg/kg, respectively, P=0.016). This decrease however was not seen in female patients dosage being 1.65±0.44 mg/kg and 1.75±0.49 mg/kg, respectively (P=0.372). The hemodynamic variables including heart rate, systolic/diastolic blood pressure and BIS were comparable within the group at induction, post-induction, and intubation. However, there was a significant increase in all the parameters at postintubation readings (P<0.001).\n\nABSTRACT.CONCLUSION:: No significant difference in the induction dose of propofol was observed when assessed clinically (loss of verbal response) or by BIS monitoring. 
Traditional teaching to titrate the dose of propofol and depth of anesthesia during intubation by loss of verbal response is as good as BIS value monitoring.\n\nExtracted Data:", "role": "user" }, { "content": "[[\"propofol induction guided by BIS\", \"propofol induction guided by clinical signs\", \"mean dose of propofol required for induction\", \"no significant difference\", \"No significant difference in the induction dose of propofol was observed when assessed clinically (loss of verbal response) or by BIS monitoring.\"], [\"propofol induction guided by BIS\", \"propofol induction guided by clinical signs\", \"dosage of propofol required by females\", \"no significant difference\", \"The mean dose of propofol used in groups A and B were 1.85\\u00b10.48 mg/kg and 1.79\\u00b10.41 mg/kg, respectively. The dosage used in group B were less but not clinically significant (P=0.575). On comparing the dosage of propofol in males among the groups there was a significantly lower dosage of propofol required in group B (2.06\\u00b10.45 mg/kg and 1.83\\u00b10.32 mg/kg, respectively, P=0.016). This decrease however was not seen in female patients dosage being 1.65\\u00b10.44 mg/kg and 1.75\\u00b10.49 mg/kg, respectively (P=0.372).\"]]", "role": "assistant" } ]
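The evidence-inference records return a list of 5-element tuples. The sketch below names the positions and flags unexpected effect labels; note that the gold answer above uses strings such as `no significant difference`, which go beyond the three categories listed in the prompt, so the accepted vocabulary here is an assumption broadened to match the visible data.

```python
import json

FIELDS = ("treatment", "control", "outcome", "effect", "justification")
# Broadened effect vocabulary: the prompt lists three categories, but the gold
# tuples above also use "significantly increased" / "no significant difference".
ASSUMED_EFFECTS = {"increased", "decreased", "no change",
                   "significantly increased", "significantly decreased",
                   "no significant difference"}

def parse_ico_tuples(output_json):
    """Turn the raw 5-element tuples into dicts and validate the effect field."""
    parsed = []
    for row in json.loads(output_json):
        if len(row) != 5:
            raise ValueError("expected 5 fields per tuple, got %d" % len(row))
        record = dict(zip(FIELDS, row))
        if record["effect"] not in ASSUMED_EFFECTS:
            raise ValueError("unexpected effect label: %r" % record["effect"])
        parsed.append(record)
    return parsed
```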
science.evidence_inference
Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as: - Treatment: The specific medical procedure or drug being evaluated. - Control: The standard or placebo used for comparison. If absent, use `null`. - Result: The health outcome being assessed. - Impact: The observed effect of the treatment on the result, categorized as: ("increased", "decreased", "no change"). - Justification: The supporting details for the observed impact. Format the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>]. Abstract: ABSTRACT.BACKGROUND: Manipulation of total homocysteine concentration with oral methionine is associated with impairment of endothelial-dependent vasodilation. This may be caused by increased oxidative stress. Vitamin C is an aqueous phase antioxidant vitamin and free radical scavenger. We hypothesised that if the impairment of endothelial function related to experimental hyperhomocysteinaemia was free radically mediated then co-administration of vitamin C should prevent this. ABSTRACT.METHODS: Ten healthy adults took part in this crossover study. Endothelial function was determined by measuring forearm blood flow (FBF) in response to intra-arterial infusion of acetylcholine (endothelial-dependent) and sodium nitroprusside (endothelial-independent). Subjects received methionine (100 mg/Kg) plus placebo tablets, methionine plus vitamin C (2 g orally) or placebo drink plus placebo tablets. Study drugs were administered at 9 am on each study date, a minimum of two weeks passed between each study. Homocysteine (tHcy) concentration was determined at baseline and after 4 hours. Endothelial function was determined at 4 hours. Responses to the vasoactive substances are expressed as the area under the curve of change in FBF from baseline. Data are mean plus 95% Confidence Intervals. ABSTRACT.RESULTS: Following oral methionine tHcy concentration increased significantly versus placebo. At this time endothelial-dependent responses were significantly reduced compared to placebo (31.2 units [22.1-40.3] vs. 46.4 units [42.0-50.8], p < 0.05 vs. Placebo). Endothelial-independent responses were unchanged. Co-administration of vitamin C did not alter the increase in homocysteine or prevent the impairment of endothelial-dependent responses (31.4 [19.5-43.3] vs. 46.4 units [42.0-50.8], p < 0.05 vs. Placebo) ABSTRACT.CONCLUSIONS: This study demonstrates that methionine increased tHcy with impairment of the endothelial-dependent vasomotor responses. Administration of vitamin C did not prevent this impairment and our results do not support the hypothesis that the endothelial impairment is mediated by adverse oxidative stress. Extracted Data:
[["methionine (100 mg/Kg) plus placebo tablets", "placebo drink plus placebo tablets", "Homocysteine (tHcy) concentration", "significantly increased", "Following oral methionine tHcy concentration increased significantly versus placebo."]]
science.evidence_inference.191
[ { "content": "Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:\n\n- Treatment: The specific medical procedure or drug being evaluated.\n- Control: The standard or placebo used for comparison. If absent, use `null`.\n- Result: The health outcome being assessed.\n- Impact: The observed effect of the treatment on the result, categorized as: (\"increased\", \"decreased\", \"no change\").\n- Justification: The supporting details for the observed impact.\n\nFormat the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].\n\nAbstract:\nABSTRACT.BACKGROUND: Manipulation of total homocysteine concentration with oral methionine is associated with impairment of endothelial-dependent vasodilation. This may be caused by increased oxidative stress. Vitamin C is an aqueous phase antioxidant vitamin and free radical scavenger. We hypothesised that if the impairment of endothelial function related to experimental hyperhomocysteinaemia was free radically mediated then co-administration of vitamin C should prevent this.\n\nABSTRACT.METHODS: Ten healthy adults took part in this crossover study. Endothelial function was determined by measuring forearm blood flow (FBF) in response to intra-arterial infusion of acetylcholine (endothelial-dependent) and sodium nitroprusside (endothelial-independent). Subjects received methionine (100 mg/Kg) plus placebo tablets, methionine plus vitamin C (2 g orally) or placebo drink plus placebo tablets. Study drugs were administered at 9 am on each study date, a minimum of two weeks passed between each study. Homocysteine (tHcy) concentration was determined at baseline and after 4 hours. Endothelial function was determined at 4 hours. Responses to the vasoactive substances are expressed as the area under the curve of change in FBF from baseline. Data are mean plus 95% Confidence Intervals.\n\nABSTRACT.RESULTS: Following oral methionine tHcy concentration increased significantly versus placebo. At this time endothelial-dependent responses were significantly reduced compared to placebo (31.2 units [22.1-40.3] vs. 46.4 units [42.0-50.8], p < 0.05 vs. Placebo). Endothelial-independent responses were unchanged. Co-administration of vitamin C did not alter the increase in homocysteine or prevent the impairment of endothelial-dependent responses (31.4 [19.5-43.3] vs. 46.4 units [42.0-50.8], p < 0.05 vs. Placebo)\n\nABSTRACT.CONCLUSIONS: This study demonstrates that methionine increased tHcy with impairment of the endothelial-dependent vasomotor responses. Administration of vitamin C did not prevent this impairment and our results do not support the hypothesis that the endothelial impairment is mediated by adverse oxidative stress.\n\nExtracted Data:", "role": "user" }, { "content": "[[\"methionine (100 mg/Kg) plus placebo tablets\", \"placebo drink plus placebo tablets\", \"Homocysteine (tHcy) concentration\", \"significantly increased\", \"Following oral methionine tHcy concentration increased significantly versus placebo.\"]]", "role": "assistant" } ]
science.biored_ner
Analyze the provided biomedical abstract and identify distinct entities. Focus on the specified categories: ["CellLine", "Variant", "Species", "Disease", "Chemical", "Gene"]. Ensure the output is strictly the JSON object. Source Text: Pre-treatment of bupivacaine-induced cardiovascular depression using different lipid formulations of propofol. BACKGROUND: Pre-treatment with lipid emulsions has been shown to increase lethal doses of bupivacaine, and the lipid content of propofol may alleviate bupivacaine-induced cardiotoxicity. The aim of this study is to investigate the effects of propofol in intralipid or medialipid emulsions on bupivacaine-induced cardiotoxicity. METHODS: Rats were anaesthetised with ketamine and were given 0.5 mg/kg/min propofol in intralipid (Group P), propofol in medialipid (Group L), or saline (Group C) over 20 min. Thereafter, 2 mg/kg/min bupivacaine 0.5% was infused. We recorded time to first dysrhythmia occurrence, respective times to 25% and 50% reduction of the heart rate (HR) and mean arterial pressure, and time to asystole and total amount of bupivacaine consumption. Blood and tissue samples were collected following asystole. RESULTS: The time to first dysrhythmia occurrence, time to 25% and 50% reductions in HR, and time to asystole were longer in Group P than the other groups. The cumulative bupivacaine dose given at those time points was higher in Group P. Plasma bupivacaine levels were significantly lower in Group P than in Group C. Bupivacaine levels in the brain and heart were significantly lower in Group P and Group L than in Group C. CONCLUSION: We conclude that pre-treatment with propofol in intralipid, compared with propofol in medialipid or saline, delayed the onset of bupivacaine-induced cardiotoxic effects as well as reduced plasma bupivacaine levels. Further studies are needed to explore tissue bupivacaine levels of propofol in medialipid and adapt these results to clinical practice.
{"CellLine": [], "Chemical": ["lipid", "propofol", "lipid emulsions", "intralipid", "medialipid", "ketamine", "bupivacaine", "Bupivacaine"], "Disease": ["cardiovascular depression", "cardiotoxic", "cardiotoxicity", "asystole", "dysrhythmia"], "Gene": [], "Species": ["Rats"], "Variant": []}
science.biored_ner.98
[ { "content": "Analyze the provided biomedical abstract and identify distinct entities. Focus on the specified categories: [\"CellLine\", \"Variant\", \"Species\", \"Disease\", \"Chemical\", \"Gene\"].\n\nEnsure the output is strictly the JSON object.\n\nSource Text:\n\nPre-treatment of bupivacaine-induced cardiovascular depression using different lipid formulations of propofol. BACKGROUND: Pre-treatment with lipid emulsions has been shown to increase lethal doses of bupivacaine, and the lipid content of propofol may alleviate bupivacaine-induced cardiotoxicity. The aim of this study is to investigate the effects of propofol in intralipid or medialipid emulsions on bupivacaine-induced cardiotoxicity. METHODS: Rats were anaesthetised with ketamine and were given 0.5 mg/kg/min propofol in intralipid (Group P), propofol in medialipid (Group L), or saline (Group C) over 20 min. Thereafter, 2 mg/kg/min bupivacaine 0.5% was infused. We recorded time to first dysrhythmia occurrence, respective times to 25% and 50% reduction of the heart rate (HR) and mean arterial pressure, and time to asystole and total amount of bupivacaine consumption. Blood and tissue samples were collected following asystole. RESULTS: The time to first dysrhythmia occurrence, time to 25% and 50% reductions in HR, and time to asystole were longer in Group P than the other groups. The cumulative bupivacaine dose given at those time points was higher in Group P. Plasma bupivacaine levels were significantly lower in Group P than in Group C. Bupivacaine levels in the brain and heart were significantly lower in Group P and Group L than in Group C. CONCLUSION: We conclude that pre-treatment with propofol in intralipid, compared with propofol in medialipid or saline, delayed the onset of bupivacaine-induced cardiotoxic effects as well as reduced plasma bupivacaine levels. Further studies are needed to explore tissue bupivacaine levels of propofol in medialipid and adapt these results to clinical practice.", "role": "user" }, { "content": "{\"CellLine\": [], \"Chemical\": [\"lipid\", \"propofol\", \"lipid emulsions\", \"intralipid\", \"medialipid\", \"ketamine\", \"bupivacaine\", \"Bupivacaine\"], \"Disease\": [\"cardiovascular depression\", \"cardiotoxic\", \"cardiotoxicity\", \"asystole\", \"dysrhythmia\"], \"Gene\": [], \"Species\": [\"Rats\"], \"Variant\": []}", "role": "assistant" } ]
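For the entity-extraction records, a simple sanity check is that every extracted mention occurs verbatim in the abstract. The helper below is a sketch of that check only; it counts surface matches and does not attempt entity normalization or offset recovery.

```python
import json

def span_report(abstract, extraction_json):
    """For each entity type, count how many extracted mentions appear
    verbatim in the abstract text."""
    extraction = json.loads(extraction_json)
    return {
        entity_type: sum(1 for mention in mentions if mention in abstract)
        for entity_type, mentions in extraction.items()
    }

# With the record above, "bupivacaine" is listed under Chemical and occurs
# verbatim in the abstract, so it contributes to the Chemical count.
```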
science.biored_ner
Analyze the provided biomedical abstract and identify distinct entities. Focus on the specified categories: ["CellLine", "Variant", "Species", "Disease", "Chemical", "Gene"]. Ensure the output is strictly the JSON object. Source Text: Skewed X inactivation in an X linked nystagmus family resulted from a novel, p.R229G, missense mutation in the FRMD7 gene. AIMS: This study aimed to identify the underlying genetic defect of a large Turkish X linked nystagmus (NYS) family. METHODS: Both Xp11 and Xq26 loci were tested by linkage analysis. The 12 exons and intron-exon junctions of the FRMD7 gene were screened by direct sequencing. X chromosome inactivation analysis was performed by enzymatic predigestion of DNA with a methylation-sensitive enzyme, followed by PCR of the polymorphic CAG repeat of the androgen receptor gene. RESULTS: The family contained 162 individuals, among whom 28 had NYS. Linkage analysis confirmed the Xq26 locus. A novel missense c.686C>G mutation, which causes the substitution of a conserved arginine at amino acid position 229 by glycine (p.R229G) in exon 8 of the FRMD7 gene, was observed. This change was not documented in 120 control individuals. The clinical findings in a female who was homozygous for the mutation were not different from those of affected heterozygous females. Skewed X inactivation was remarkable in the affected females of the family. CONCLUSIONS: A novel p.R229G mutation in the FRMD7 gene causes the NYS phenotype, and skewed X inactivation influences the manifestation of the disease in X linked NYS females.
{"CellLine": [], "Chemical": [], "Disease": ["X linked NYS", "NYS", "X linked nystagmus", "genetic defect"], "Gene": ["FRMD7", "androgen receptor"], "Species": [], "Variant": ["arginine at amino acid position 229 by glycine", "c.686C\u003eG", "p.R229G", "CAG repeat"]}
science.biored_ner.7
[ { "content": "Analyze the provided biomedical abstract and identify distinct entities. Focus on the specified categories: [\"CellLine\", \"Variant\", \"Species\", \"Disease\", \"Chemical\", \"Gene\"].\n\nEnsure the output is strictly the JSON object.\n\nSource Text:\n\nSkewed X inactivation in an X linked nystagmus family resulted from a novel, p.R229G, missense mutation in the FRMD7 gene. AIMS: This study aimed to identify the underlying genetic defect of a large Turkish X linked nystagmus (NYS) family. METHODS: Both Xp11 and Xq26 loci were tested by linkage analysis. The 12 exons and intron-exon junctions of the FRMD7 gene were screened by direct sequencing. X chromosome inactivation analysis was performed by enzymatic predigestion of DNA with a methylation-sensitive enzyme, followed by PCR of the polymorphic CAG repeat of the androgen receptor gene. RESULTS: The family contained 162 individuals, among whom 28 had NYS. Linkage analysis confirmed the Xq26 locus. A novel missense c.686C>G mutation, which causes the substitution of a conserved arginine at amino acid position 229 by glycine (p.R229G) in exon 8 of the FRMD7 gene, was observed. This change was not documented in 120 control individuals. The clinical findings in a female who was homozygous for the mutation were not different from those of affected heterozygous females. Skewed X inactivation was remarkable in the affected females of the family. CONCLUSIONS: A novel p.R229G mutation in the FRMD7 gene causes the NYS phenotype, and skewed X inactivation influences the manifestation of the disease in X linked NYS females.", "role": "user" }, { "content": "{\"CellLine\": [], \"Chemical\": [], \"Disease\": [\"X linked NYS\", \"NYS\", \"X linked nystagmus\", \"genetic defect\"], \"Gene\": [\"FRMD7\", \"androgen receptor\"], \"Species\": [], \"Variant\": [\"arginine at amino acid position 229 by glycine\", \"c.686C\\u003eG\", \"p.R229G\", \"CAG repeat\"]}", "role": "assistant" } ]
science.mup_single_document_summarization
Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise. Title of the Paper: Attraction-Repulsion Actor-Critic for Continuous Control Reinforcement Learning Main Content: In reinforcement learning , robotic control tasks are often useful for understanding how agents perform in environments with deceptive rewards where the agent can easily become trapped into suboptimal solutions . One way to avoid these local optima is to use a population of agents to ensure coverage of the policy space ( a form of exploration ) , yet learning a population with the “ best ” coverage is still an open problem . In this work , we present a novel approach to population-based RL in continuous control that leverages properties of normalizing flows to perform attractive and repulsive operations between current members of the population and previously observed policies . Empirical results on the MuJoCo suite demonstrate a high performance gain for our algorithm compared to prior work , including Soft-Actor Critic ( SAC ) . 1 INTRODUCTION . Many important reinforcement learning ( RL ) tasks , such as those in robotics and self-driving cars , are challenging due to large action and state spaces ( Lee et al. , 2018 ) . In particular , environments with large continuous action spaces are prone to deceptive rewards , i.e . fall into local optima in learning ( Conti et al. , 2018 ) . Applying traditional policy optimization algorithms to these domains often leads to locally optimal , yet globally sub-optimal policies . The agent should then explore the reward landscape more thoroughly in order to avoid falling into these local optima . Not all RL domains that require exploration are suitable for understanding how to train agents that are robust to deceptive rewards . For example , Montezuma ’ s Revenge , a game in the Atari Learning Environment ( Bellemare et al. , 2013 ) , has sparse rewards ; algorithms that perform the best on this task encourage exploration by providing a denser intrinsic reward to the agent to encourage exploration ( Tang et al. , 2017 ) . On the other hand , many robotic control problems , such as those found in MuJoCo ( Todorov et al. , 2012 ) , provide the agent with a dense reward signal , yet their high-dimensional action spaces induce a multimodal , often deceptive , reward landscape . For example , in the biped environments , coordinating both arms and legs is crucial for performing well on even simple tasks such as forward motion . However , simply learning to maximize the reward can be detrimental across training : agents will tend to run and fall further away from the start point rather than discovering stable and efficient walking motion . In this setting , exploration serves to provide a more reliable learning signal for the agent by covering more different types of actions during learning . One way to maximize action space coverage is the maximum entropy RL framework ( Ziebart , 2010 ) , which prevents variance collapse by adding a policy entropy auxiliary objective . One such prominent algorithm , Soft Actor-Critic ( SAC , Haarnoja et al . ( 2018 ) ) , has been shown to excel in large continuous action spaces . To further improve on exploration properties of SAC , one can maintain a population of agents that cover non-identical sections of the policy space . 
To prevent premature convergence , a diversity-preserving mechanism is typically put in place ; balancing the objective and the diversity term becomes key to converging to a global optimum ( Hong et al. , 2018 ) . This paper studies a particular family of population-based exploration methods , which conduct coordinated local search in the policy space . Prior work on population-based strategies improves performance on robotic control domains through stochastic perturbation on a single actor ’ s parameter ( Pourchot & Sigaud , 2019 ) or a set of actor ’ s parameters ( Conti et al. , 2018 ; Khadka & Tumer , 2018 ; Liu et al. , 2017 ) . We hypothesize that exploring directly in the policy space will be more effective than perturbing the parameters of the policy , as the latter does not guarantee diversity ( i.e. , different neural network parameterizations can approximately represent the same function ) . Given a population of RL agents , we enforce local exploration using an Attraction-Repulsion ( AR ) mechanism . The later consists in adding an auxiliary loss to encourage pairwise attraction or repulsion between members of a population , as measured by a divergence term . We make use of the KullbackLeibler ( KL ) divergence because of its desirable statistical properties and its easiness of computation . However , naively maximizing the KL term between two Gaussian policies can be detrimental ( e.g . drives both means apart ) . Because of this , we parametrize the policy with a general family of distributions called Normalizing Flows ( NFs , Rezende & Mohamed , 2015 ) ; this modification allows to improve upon AR+Gaussian ( see Appendix Figure 6 ) . NFs are shown to improve the expressivity of the policies using invertible mappings while maintaining entropy guarantees ( Mazoure et al. , 2019 ; Tang & Agrawal , 2018 ) . Nonlinear density estimators have also been previously used for deep RL problems in contexts of distributional RL ( Doan et al. , 2018 ) and reward shaping ( Tang et al. , 2017 ) . The AR objective blends particularly well with SAC , since computing the KL requires stochastic policies with tractable densities for each agent . 2 PRELIMINARIES . We first formalize the RL setting in a Markov decision process ( MDP ) . A discrete-time , finite-horizon , MDP ( Bellman , 1957 ; Puterman , 2014 ) is described by a state space S , an action spaceA , a transition function P : S ×A× S 7→ R+ , and a reward function r : S ×A 7→ R.1 On each round t , an agent interacting with this MDP observes the current state st ∈ S , selects an action at ∈ A , and observes a reward r ( st , at ) ∈ R upon transitioning to a new state st+1 ∼ P ( st , at ) . Let γ ∈ [ 0 , 1 ] be a discount factor . The goal of an agent evolving in a discounted MDP is to learn a policy π : S × A 7→ [ 0 , 1 ] such as taking action at ∼ π ( ·|st ) would maximize the expected sum of discounted returns , V π ( s ) = Eπ [ ∞∑ t=0 γtr ( st , at ) |s0 = s ] . In the following , we use ρπ to denote the trajectory distribution induced by following policy π . If S or A are vector spaces , action and space vectors are respectively denoted by a and s . 2.1 DISCOVERING NEW SOLUTIONS THROUGH POPULATION-BASED ATTRACTION-REPULSION . Consider evolving a population of M agents , also called individuals , { πθm } Mm=1 , each agent corresponding to a policy with its own parameters . In order to discover new solutions , we aim to generate agents that can mimic some target policy while following a path different from those of other policies . 
Let G denote an archive of policies encountered in previous generations of the population . A natural way of enforcing π to be different from or similar to the policies contained in G is by augmenting the loss of the agent with an Attraction-Repulsion ( AR ) term : LAR = − E π′∼G [ βπ′DKL [ π||π′ ] ] , ( 1 ) where π′ is an archived policy and βπ′ is a coefficient weighting the relative importance of the Kullback-Leibler ( KL ) divergence between π and π′ , which we will choose to be a function of the average reward ( see Sec . 3.2 below ) . Intuitively , Eq . 1 adds to the agent objective a weighted average distance between the current and the archived policies . For βπ′ ≥ 0 , the agent tends to move away from the archived policy ’ s behavior ( i.e . repulsion , see Figure 1 ) a ) . On the other hand , βπ′ < 0 encourages the agent π to imitate π′ ( i.e . attraction ) . Requirements for AR In order for agents within a population to be trained using the proposed AR-based loss ( Eq . 1 ) , we have the following requirements : 1 . Their policies should be stochastic , so that the KL-divergence between two policies is well-defined . 1A and S can be either discrete or continuous . 2 . Their policies should have tractable distributions , so that the KL-divergence can be computed easily , either with closed-form solution or Monte Carlo estimation . Several RL algorithms enjoy such properties ( Haarnoja et al. , 2018 ; Schulman et al. , 2015 ; 2017 ) . In particular , the soft actor-critic ( SAC , Haarnoja et al. , 2018 ) is a straightforward choice , as it currently outperforms other candidates and is off-policy , thus maintains a single critic shared among all agents ( instead of one critic per agent ) , which reduces computation costs . 2.2 SOFT ACTOR-CRITIC . SAC ( Haarnoja et al. , 2018 ) is an off-policy learning algorithm which finds the information projection of the Boltzmann Q-function onto the set of diagonal Gaussian policies Π : π = arg min π′∈Π DKL ( π′ ( .|st ) ∥∥∥∥exp ( 1αQπold ( st , . ) ) Zπold ( st ) ) , where α ∈ ( 0 , 1 ) controls the temperature , i.e . the peakedness of the distribution . The policy π , critic Q , and value function V are optimized according to the following loss functions : Lπ , SAC = Est∼B [ Eat∼π [ α log π ( at|st ) −Q ( st , at ) ] ] ( 2 ) LQ = E ( s , a , r , s′ ) ∼B [ { Q ( s , a ) − ( r + γV πν ( s′ ) ) } 2 ] ( 3 ) LV = Est∼D [ 1 2 { V πν ( st ) − Eat∼π [ Q ( st , at ) − α log π ( at|st ) ] } 2 ] , ( 4 ) where B is the replay buffer . The policy used in SAC as introduced in Haarnoja et al . ( 2018 ) is Gaussian , which is both stochastic and tractable , thus compatible with our AR loss function in Eq . 1 . Together with the AR loss in Eq . 1 , the final policy loss becomes : Lπ = Lπ , SAC + LAR ( 5 ) However , Gaussian policies are arguably of limited expressibility ; we can improve on the family of policy distributions without sacrificing qualities necessary for AR or SAC by using Normalizing Flows ( NFs , Rezende & Mohamed , 2015 ) . 2.3 NORMALIZING FLOWS . NFs ( Rezende & Mohamed , 2015 ) were introduced as a means of transforming simple distributions into more complex distributions using learnable and invertible functions . Given a random variable z0 with density q0 , they define a set of differentiable and invertible functions , { fi } Ni=1 , which generate a sequence of d-dimensional random variables , { zi } Ni=1 . Because SAC uses explicit , yet simple parametric policies , NFs can be used to transform the SAC policy into a richer one ( e.g. 
, multimodal ) without risk loss of information . For example , Mazoure et al . ( 2019 ) enhanced SAC using a family of radial contractions around a point z0 ∈ Rd , f ( z ) = z + β α+ ||z− z0||2 ( z− z0 ) ( 6 ) for α ∈ R+ and β ∈ R. This results in a rich set of policies comprised of an initial noise sample a0 , a state-noise embedding hθ ( a0 , st ) , and a flow { fφi } Ni=1 of arbitrary length N , parameterized by φ = { φi } Ni=1 . Sampling from the policy πφ , θ ( at|st ) can be described by the following set of equations : a0 ∼ N ( 0 , I ) ; z = hθ ( a0 , st ) ; at = fφN ◦ fφN−1 ◦ ... ◦ fφ1 ( z ) , ( 7 ) where hθ = a0σI + µ ( st ) depends on the state and the noise variance σ > 0 . Different SAC policies can thus be crafted by parameterizing their NFs layers .
The paper proposes an ensemble method for reinforcement learning in which the policy updates are modulated with a loss which encourages diversity among all experienced policies. It is a combination of SAC, normalizing flow policies, and an approach to diversity considered by Hong et al. (2018). The work seems rather incremental and the experiments have some methodological flaws. Specifically the main results (Fig. 4) are based on a comparison between 4 different codebases which makes it impossible to make meaningful conclusions as pointed out e.g. by [1]. The authors mention that their work is built on the work of Hong et al. (2018) yet the comparisons do not seem to include it as a baseline. I'm also concerned about how exactly are environment steps counted: in Algorithm 1 on line 27, it seems that the fitness which is used for training is evaluated by interacting with the environment yet these interactions are not counted towards total_step.
science.mup_single_document_summarization.18848
[ { "content": "Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.\n\nTitle of the Paper:\nAttraction-Repulsion Actor-Critic for Continuous Control Reinforcement Learning\n\nMain Content:\nIn reinforcement learning , robotic control tasks are often useful for understanding how agents perform in environments with deceptive rewards where the agent can easily become trapped into suboptimal solutions . One way to avoid these local optima is to use a population of agents to ensure coverage of the policy space ( a form of exploration ) , yet learning a population with the “ best ” coverage is still an open problem . In this work , we present a novel approach to population-based RL in continuous control that leverages properties of normalizing flows to perform attractive and repulsive operations between current members of the population and previously observed policies . Empirical results on the MuJoCo suite demonstrate a high performance gain for our algorithm compared to prior work , including Soft-Actor Critic ( SAC ) . 1 INTRODUCTION . Many important reinforcement learning ( RL ) tasks , such as those in robotics and self-driving cars , are challenging due to large action and state spaces ( Lee et al. , 2018 ) . In particular , environments with large continuous action spaces are prone to deceptive rewards , i.e . fall into local optima in learning ( Conti et al. , 2018 ) . Applying traditional policy optimization algorithms to these domains often leads to locally optimal , yet globally sub-optimal policies . The agent should then explore the reward landscape more thoroughly in order to avoid falling into these local optima . Not all RL domains that require exploration are suitable for understanding how to train agents that are robust to deceptive rewards . For example , Montezuma ’ s Revenge , a game in the Atari Learning Environment ( Bellemare et al. , 2013 ) , has sparse rewards ; algorithms that perform the best on this task encourage exploration by providing a denser intrinsic reward to the agent to encourage exploration ( Tang et al. , 2017 ) . On the other hand , many robotic control problems , such as those found in MuJoCo ( Todorov et al. , 2012 ) , provide the agent with a dense reward signal , yet their high-dimensional action spaces induce a multimodal , often deceptive , reward landscape . For example , in the biped environments , coordinating both arms and legs is crucial for performing well on even simple tasks such as forward motion . However , simply learning to maximize the reward can be detrimental across training : agents will tend to run and fall further away from the start point rather than discovering stable and efficient walking motion . In this setting , exploration serves to provide a more reliable learning signal for the agent by covering more different types of actions during learning . One way to maximize action space coverage is the maximum entropy RL framework ( Ziebart , 2010 ) , which prevents variance collapse by adding a policy entropy auxiliary objective . One such prominent algorithm , Soft Actor-Critic ( SAC , Haarnoja et al . ( 2018 ) ) , has been shown to excel in large continuous action spaces . To further improve on exploration properties of SAC , one can maintain a population of agents that cover non-identical sections of the policy space . 
To prevent premature convergence , a diversity-preserving mechanism is typically put in place ; balancing the objective and the diversity term becomes key to converging to a global optimum ( Hong et al. , 2018 ) . This paper studies a particular family of population-based exploration methods , which conduct coordinated local search in the policy space . Prior work on population-based strategies improves performance on robotic control domains through stochastic perturbation on a single actor ’ s parameter ( Pourchot & Sigaud , 2019 ) or a set of actor ’ s parameters ( Conti et al. , 2018 ; Khadka & Tumer , 2018 ; Liu et al. , 2017 ) . We hypothesize that exploring directly in the policy space will be more effective than perturbing the parameters of the policy , as the latter does not guarantee diversity ( i.e. , different neural network parameterizations can approximately represent the same function ) . Given a population of RL agents , we enforce local exploration using an Attraction-Repulsion ( AR ) mechanism . The later consists in adding an auxiliary loss to encourage pairwise attraction or repulsion between members of a population , as measured by a divergence term . We make use of the KullbackLeibler ( KL ) divergence because of its desirable statistical properties and its easiness of computation . However , naively maximizing the KL term between two Gaussian policies can be detrimental ( e.g . drives both means apart ) . Because of this , we parametrize the policy with a general family of distributions called Normalizing Flows ( NFs , Rezende & Mohamed , 2015 ) ; this modification allows to improve upon AR+Gaussian ( see Appendix Figure 6 ) . NFs are shown to improve the expressivity of the policies using invertible mappings while maintaining entropy guarantees ( Mazoure et al. , 2019 ; Tang & Agrawal , 2018 ) . Nonlinear density estimators have also been previously used for deep RL problems in contexts of distributional RL ( Doan et al. , 2018 ) and reward shaping ( Tang et al. , 2017 ) . The AR objective blends particularly well with SAC , since computing the KL requires stochastic policies with tractable densities for each agent . 2 PRELIMINARIES . We first formalize the RL setting in a Markov decision process ( MDP ) . A discrete-time , finite-horizon , MDP ( Bellman , 1957 ; Puterman , 2014 ) is described by a state space S , an action spaceA , a transition function P : S ×A× S 7→ R+ , and a reward function r : S ×A 7→ R.1 On each round t , an agent interacting with this MDP observes the current state st ∈ S , selects an action at ∈ A , and observes a reward r ( st , at ) ∈ R upon transitioning to a new state st+1 ∼ P ( st , at ) . Let γ ∈ [ 0 , 1 ] be a discount factor . The goal of an agent evolving in a discounted MDP is to learn a policy π : S × A 7→ [ 0 , 1 ] such as taking action at ∼ π ( ·|st ) would maximize the expected sum of discounted returns , V π ( s ) = Eπ [ ∞∑ t=0 γtr ( st , at ) |s0 = s ] . In the following , we use ρπ to denote the trajectory distribution induced by following policy π . If S or A are vector spaces , action and space vectors are respectively denoted by a and s . 2.1 DISCOVERING NEW SOLUTIONS THROUGH POPULATION-BASED ATTRACTION-REPULSION . Consider evolving a population of M agents , also called individuals , { πθm } Mm=1 , each agent corresponding to a policy with its own parameters . In order to discover new solutions , we aim to generate agents that can mimic some target policy while following a path different from those of other policies . 
Let G denote an archive of policies encountered in previous generations of the population . A natural way of enforcing π to be different from or similar to the policies contained in G is by augmenting the loss of the agent with an Attraction-Repulsion ( AR ) term : LAR = − E π′∼G [ βπ′DKL [ π||π′ ] ] , ( 1 ) where π′ is an archived policy and βπ′ is a coefficient weighting the relative importance of the Kullback-Leibler ( KL ) divergence between π and π′ , which we will choose to be a function of the average reward ( see Sec . 3.2 below ) . Intuitively , Eq . 1 adds to the agent objective a weighted average distance between the current and the archived policies . For βπ′ ≥ 0 , the agent tends to move away from the archived policy ’ s behavior ( i.e . repulsion , see Figure 1 ) a ) . On the other hand , βπ′ < 0 encourages the agent π to imitate π′ ( i.e . attraction ) . Requirements for AR In order for agents within a population to be trained using the proposed AR-based loss ( Eq . 1 ) , we have the following requirements : 1 . Their policies should be stochastic , so that the KL-divergence between two policies is well-defined . 1A and S can be either discrete or continuous . 2 . Their policies should have tractable distributions , so that the KL-divergence can be computed easily , either with closed-form solution or Monte Carlo estimation . Several RL algorithms enjoy such properties ( Haarnoja et al. , 2018 ; Schulman et al. , 2015 ; 2017 ) . In particular , the soft actor-critic ( SAC , Haarnoja et al. , 2018 ) is a straightforward choice , as it currently outperforms other candidates and is off-policy , thus maintains a single critic shared among all agents ( instead of one critic per agent ) , which reduces computation costs . 2.2 SOFT ACTOR-CRITIC . SAC ( Haarnoja et al. , 2018 ) is an off-policy learning algorithm which finds the information projection of the Boltzmann Q-function onto the set of diagonal Gaussian policies Π : π = arg min π′∈Π DKL ( π′ ( .|st ) ∥∥∥∥exp ( 1αQπold ( st , . ) ) Zπold ( st ) ) , where α ∈ ( 0 , 1 ) controls the temperature , i.e . the peakedness of the distribution . The policy π , critic Q , and value function V are optimized according to the following loss functions : Lπ , SAC = Est∼B [ Eat∼π [ α log π ( at|st ) −Q ( st , at ) ] ] ( 2 ) LQ = E ( s , a , r , s′ ) ∼B [ { Q ( s , a ) − ( r + γV πν ( s′ ) ) } 2 ] ( 3 ) LV = Est∼D [ 1 2 { V πν ( st ) − Eat∼π [ Q ( st , at ) − α log π ( at|st ) ] } 2 ] , ( 4 ) where B is the replay buffer . The policy used in SAC as introduced in Haarnoja et al . ( 2018 ) is Gaussian , which is both stochastic and tractable , thus compatible with our AR loss function in Eq . 1 . Together with the AR loss in Eq . 1 , the final policy loss becomes : Lπ = Lπ , SAC + LAR ( 5 ) However , Gaussian policies are arguably of limited expressibility ; we can improve on the family of policy distributions without sacrificing qualities necessary for AR or SAC by using Normalizing Flows ( NFs , Rezende & Mohamed , 2015 ) . 2.3 NORMALIZING FLOWS . NFs ( Rezende & Mohamed , 2015 ) were introduced as a means of transforming simple distributions into more complex distributions using learnable and invertible functions . Given a random variable z0 with density q0 , they define a set of differentiable and invertible functions , { fi } Ni=1 , which generate a sequence of d-dimensional random variables , { zi } Ni=1 . Because SAC uses explicit , yet simple parametric policies , NFs can be used to transform the SAC policy into a richer one ( e.g. 
, multimodal ) without risk loss of information . For example , Mazoure et al . ( 2019 ) enhanced SAC using a family of radial contractions around a point z0 ∈ Rd , f ( z ) = z + β α+ ||z− z0||2 ( z− z0 ) ( 6 ) for α ∈ R+ and β ∈ R. This results in a rich set of policies comprised of an initial noise sample a0 , a state-noise embedding hθ ( a0 , st ) , and a flow { fφi } Ni=1 of arbitrary length N , parameterized by φ = { φi } Ni=1 . Sampling from the policy πφ , θ ( at|st ) can be described by the following set of equations : a0 ∼ N ( 0 , I ) ; z = hθ ( a0 , st ) ; at = fφN ◦ fφN−1 ◦ ... ◦ fφ1 ( z ) , ( 7 ) where hθ = a0σI + µ ( st ) depends on the state and the noise variance σ > 0 . Different SAC policies can thus be crafted by parameterizing their NFs layers .", "role": "user" }, { "content": "The paper proposes an ensemble method for reinforcement learning in which the policy updates are modulated with a loss which encourages diversity among all experienced policies. It is a combination of SAC, normalizing flow policies, and an approach to diversity considered by Hong et al. (2018). The work seems rather incremental and the experiments have some methodological flaws. Specifically the main results (Fig. 4) are based on a comparison between 4 different codebases which makes it impossible to make meaningful conclusions as pointed out e.g. by [1]. The authors mention that their work is built on the work of Hong et al. (2018) yet the comparisons do not seem to include it as a baseline. I'm also concerned about how exactly are environment steps counted: in Algorithm 1 on line 27, it seems that the fitness which is used for training is evaluated by interacting with the environment yet these interactions are not counted towards total_step.", "role": "assistant" } ]
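The excerpt in the record above defines two computable pieces: the attraction–repulsion term of Eq. 1 and the radial flow map of Eq. 6. The NumPy sketch below instantiates both for diagonal Gaussian policies so the effect of the sign of β can be checked numerically. It is an illustration under assumptions, not the authors' implementation: the closed-form Gaussian KL stands in for the Monte Carlo estimate a flow-based policy would need, and the archive contents and β values are toy choices.

```python
import numpy as np

def radial_flow(z, z0, alpha, beta):
    """f(z) = z + beta / (alpha + ||z - z0||) * (z - z0), the radial map of Eq. 6."""
    diff = z - z0
    return z + beta / (alpha + np.linalg.norm(diff)) * diff

def gaussian_kl(mu_p, sigma_p, mu_q, sigma_q):
    """Closed-form KL(N(mu_p, diag sigma_p^2) || N(mu_q, diag sigma_q^2))."""
    var_p, var_q = sigma_p ** 2, sigma_q ** 2
    return float(np.sum(np.log(sigma_q / sigma_p)
                        + (var_p + (mu_p - mu_q) ** 2) / (2.0 * var_q) - 0.5))

def attraction_repulsion(mu, sigma, archive, betas):
    """L_AR = -E_{pi' ~ G}[beta_{pi'} * KL(pi || pi')], Eq. 1, averaged over the archive."""
    terms = [beta * gaussian_kl(mu, sigma, mu_a, sigma_a)
             for (mu_a, sigma_a), beta in zip(archive, betas)]
    return -float(np.mean(terms))

# Toy 2-D check: positive beta (repulsion) makes L_AR more negative as the KL
# grows, so minimizing the total loss rewards moving away from that archived
# policy; negative beta (attraction) does the opposite.
mu, sigma = np.zeros(2), np.ones(2)
archive = [(np.array([1.0, 0.0]), np.ones(2)), (np.array([0.0, 2.0]), np.ones(2))]
print("L_AR:", attraction_repulsion(mu, sigma, archive, betas=[0.5, -0.5]))
print("flowed point:", radial_flow(np.array([0.3, -0.2]), z0=np.zeros(2), alpha=1.0, beta=0.5))
```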
science.mup_single_document_summarization
Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise. Title of the Paper: Non-deep Networks Main Content: 1 INTRODUCTION . Deep Neural Networks ( DNNs ) have revolutionized the fields of machine learning , computer vision , and natural language processing . As their name suggests , a key characteristic of DNNs is that they are deep . That is , they have a large depth , which can be defined as the length of the longest path from an input neuron to an output neuron . Often a neural network can be described as a linear sequence of layers , i.e . groups of neurons with no intra-group connections . In such cases , the depth of a network is its number of layers . It has been generally accepted that large depth is an essential component for high-performing networks because depth increases the representational ability of a network and helps learn increasingly abstract features ( He et al. , 2016a ) . In fact , one of the primary reasons given for the success of ResNets is that they allow training very deep networks with as many as 1000 layers ( He et al. , 2016a ) . As such , state-of-the-art performance is increasingly achieved by training models with large depth , and what qualifies as “ deep ” has shifted from “ 2 or more layers ” in the early days of deep learning to the “ tens or hundreds of layers ” routinely used in today ’ s models . For example , as shown in Figure 1 , competitive benchmarks such as ImageNet are dominated by very deep models ( He et al. , 2016a ; Huang et al. , 2017 ; Tan & Le , 2019 ) with at least 30 layers , whereas models with fewer than 30 layers perform substantially worse . The best-performing model with fewer than 20 layers has a top-1 accuracy of only 75.2 , substantially lower than accuracies achievable with 30 or more layers when evaluated with a single image crop ( He et al. , 2015 ; Tan & Le , 2019 ) . But is large depth always necessary ? This question is worth asking because large depth is not without drawbacks . A deeper network leads to more sequential processing and higher latency ; it is harder to parallelize and less suitable for applications that require fast responses . In this paper , we study whether it is possible to achieve high performance with “ non-deep ” neural networks , especially networks with ∼10 layers . We find that , contrary to conventional wisdom , this is indeed possible . We present a network design that is non-deep and performs competitively against its deep counterparts . We refer to our architecture as ParNet ( Parallel Networks ) . We show , for the first time , that a classification network with a depth of just 12 can achieve accuracy greater than 80 % on ImageNet , 96 % on CIFAR10 , and 81 % on CIFAR100 . We also show that a detection network with a low-depth ( 12 ) backbone can achieve an AP of 48 % on MS-COCO . Note that the number of parameters in ParNet is comparable to state-of-the-art models , as illustrated in Figure 1 . A key design choice in ParNet is the use of parallel subnetworks . Instead of arranging layers sequentially , we arrange layers in parallel subnetworks . This design is “ embarrassingly parallel ” , in the sense that there are no connections between the subnetworks except at the beginning and the end . This allows us to reduce the depth of the network while maintaining high accuracy . It is worth noting that our parallel structures are distinct from “ widening ” a network by increasing the number of neurons in a layer . 
ParNet not only helps us answer a scientific question about the necessity of large depth , but also offers practical advantages . Due to the parallel substructures , ParNet can be efficiently parallelized across multiple processors . We find that ParNet can be effectively parallelized and outperforms ResNets in terms of both speed and accuracy . Note that this is achieved despite the extra latency introduced by the communication between processing units . This shows that in the future , with possibly specialized hardware to further mitigate communication latency , ParNet-like architectures could be used for creating extremely fast recognition systems . We also study the scaling rules for ParNet . Specifically , we show that ParNet can be effectively scaled by increasing width , resolution , and number of branches , all while keeping depth constant . We observe that the performance of ParNet does not saturate and increases as we increase computational throughput . This suggests that by increasing compute further , one can achieve even higher performance while maintaining small depth ( ∼10 ) and low latency . To summarize , our contributions are three-fold : • We show , for the first time , that a neural network with a depth of only 12 can achieve high performance on very competitive benchmarks ( 80.7 % on ImageNet , 96 % on CIFAR10 , 81 % on CIFAR100 ) . • We show how parallel structures in ParNet can be utilized for fast , low-latency inference . • We study the scaling rules for ParNet and demonstrate effective scaling with constant low depth . 2 RELATED WORK . Analyzing importance of depth . There exists a rich literature analyzing the importance of depth in neural networks . The classic work of Cybenko et al . showed that a single-layer neural network with sigmoid activations can approximate any function with arbitrarily small error ( Cybenko , 1989 ) . However , one needs to use a network with sufficiently large width , which can drastically increase the parameter count . Subsequent works have shown that , to approximate a function , a deep network with non-linearity needs exponentially fewer parameters than its shallow counterpart ( Liang & Srikant , 2017 ) . This is often cited as one of the major advantages of large depth . Several works have also empirically analyzed the importance of depth and came to the conclusion that under a fixed parameter budget , deeper networks perform better than their shallow counter- parts ( Eigen et al. , 2013 ; Urban et al. , 2017 ) . However , in such analysis , prior works have only studied shallow networks with a linear , sequential structure , and it is unclear whether the conclusion still holds with alternative designs . In this work , we show that , contrary to conventional wisdom , a shallow network can perform surprisingly well , and the key is to have parallel substructures . Scaling DNNs . There have been many exciting works that have studied the problem of scaling neural networks . Tan & Le ( 2019 ) showed that increasing depth , width , and resolution leads to effective scaling of convolutional networks . We also study scaling rules but focus on the low-depth regime . We find that one can increase the number of branches , width , and resolution to effectively scale ParNet while keeping depth constant and low . Zagoruyko & Komodakis ( 2016 ) showed that shallower networks with a large width can achieve similar performance to deeper ResNets . We also scale our networks by increasing their width . 
However , we consider networks that are much shallower – a depth of just 12 compared to 50 considered for ImageNet by Zagoruyko & Komodakis ( 2016 ) – and introduce parallel substructures . Shallow networks . Shallow networks have attracted attention in theoretical machine learning . With infinite width , a single-layer neural network behaves like a Gaussian Process , and one can understand the training procedure in terms of kernel methods ( Jacot et al. , 2018 ) . However , such models do not perform competitively when compared to state-of-the-art networks ( Li et al. , 2019 ) . We provide empirical proof that non-deep networks can be competitive with their deep counterparts . Multi-stream networks . Multi-stream neural networks have been used in a variety of computer vision tasks such as segmentation ( Chen et al. , 2016 ; 2017 ) , detection ( Lin et al. , 2017 ) , and video classification ( Wu et al. , 2016 ) . The HRNet architecture maintains multi-resolution streams throughout the forward pass ( Wang et al. , 2020 ) ; these streams are fused together at regular intervals to exchange information . We also use streams with different resolutions , but our network is much shallower ( 12 vs. 38 for the smallest HRNet for classification ) and the streams are fused only once , at the very end , making parallelization easier . 3 METHOD . In this section , we develop and analyze ParNet , a network architecture that is much less deep but still achieves high performance on competitive benchmarks . ParNet consists of parallel substructures that process features at different resolutions . We refer to these parallel substructures as streams . Features from different streams are fused at a later stage in the network , and these fused features are used for the downstream task . Figure 2a provides a schematic representation of ParNet . 3.1 PARNET BLOCK . In ParNet , we utilize VGG-style blocks ( Simonyan & Zisserman , 2015 ) . To see whether non-deep networks can achieve high performance , we empirically find that VGG-style blocks are more suitable than ResNet-style blocks ( Table 8 ) . In general , training VGG-style networks is more difficult than their ResNet counterparts ( He et al. , 2016a ) . But recent work shows that it is easier to train net- works with such blocks if one uses a “ structural reparameterization ” technique ( Ding et al. , 2021 ) . During training , one uses multiple branches over the 3×3 convolution blocks . Once trained , the multiple branches can be fused into one 3×3 convolution . Hence , one ends up with a plain network consisting of only 3×3 block and non-linearity . This reparameterization or fusion of blocks helps reduce latency during inference . We borrow our initial block design from Rep-VGG ( Ding et al. , 2021 ) and modify it to make it more suitable for our non-deep architecture . One challenge with a non-deep network with only 3×3 convolutions is that the receptive field is rather limited . To address this , we build a SkipSqueeze-Excitation ( SSE ) layer which is based on the Squeeze-and-Excitation ( SE ) design ( Hu et al. , 2018 ) . Vanilla Squeeze-and-Excitation is not suitable for our purpose as it increases the depth of the network . Hence we use a Skip-Squeeze-Excitation design which is applied alongside the skip connection and uses a single fully-connected layer . We find that this design helps increase performance ( Table 7 ) . Figure 2b provides a schematic representation of our modified Rep-VGG block with the Skip-Squeeze-Excitation module . 
We refer to this block as the RepVGG-SSE . One concern , especially with large-scale datasets such as ImageNet , is that a non-deep network may not have sufficient non-linearity , limiting its representational power . Thus we replace the ReLU activation with SiLU ( Ramachandran et al. , 2017 ) . 3.2 DOWNSAMPLING AND FUSION BLOCK . Apart from the RepVGG-SSE block , whose input and output have the same size , ParNet also contains Downsampling and Fusion blocks . The Downsampling block reduces resolution and increases width to enable multi-scale processing , while the Fusion block combines information from multiple resolutions . In the Downsampling block , there is no skip connection ; instead , we add a single-layered SE module parallel to the convolution layer . Additionally , we add 2D average pooling in the 1×1 convolution branch . The Fusion block is similar to the Downsampling block but contains an extra concatenation layer . Because of concatenation , the input to the Fusion block has twice as many channels as a Downsampling block . Hence , to reduce the parameter count , we use convolution with group 2 . Please refer to Figure A1 in the appendix for a schematic representation of the Downsampling and Fusion blocks .
The paper proposes ParNet, a manually designed CNN architecture with a depth of only 12 that is built from parallel subnetworks rather than the traditional deep stack of sequential blocks. Experiments show that ParNet is the first CNN to achieve over 80% accuracy on ImageNet at a depth of 12. ParNet also achieves a competitive AP of 48% on MS-COCO object detection.
science.mup_single_document_summarization.5533
[ { "content": "Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.\n\nTitle of the Paper:\nNon-deep Networks\n\nMain Content:\n1 INTRODUCTION . Deep Neural Networks ( DNNs ) have revolutionized the fields of machine learning , computer vision , and natural language processing . As their name suggests , a key characteristic of DNNs is that they are deep . That is , they have a large depth , which can be defined as the length of the longest path from an input neuron to an output neuron . Often a neural network can be described as a linear sequence of layers , i.e . groups of neurons with no intra-group connections . In such cases , the depth of a network is its number of layers . It has been generally accepted that large depth is an essential component for high-performing networks because depth increases the representational ability of a network and helps learn increasingly abstract features ( He et al. , 2016a ) . In fact , one of the primary reasons given for the success of ResNets is that they allow training very deep networks with as many as 1000 layers ( He et al. , 2016a ) . As such , state-of-the-art performance is increasingly achieved by training models with large depth , and what qualifies as “ deep ” has shifted from “ 2 or more layers ” in the early days of deep learning to the “ tens or hundreds of layers ” routinely used in today ’ s models . For example , as shown in Figure 1 , competitive benchmarks such as ImageNet are dominated by very deep models ( He et al. , 2016a ; Huang et al. , 2017 ; Tan & Le , 2019 ) with at least 30 layers , whereas models with fewer than 30 layers perform substantially worse . The best-performing model with fewer than 20 layers has a top-1 accuracy of only 75.2 , substantially lower than accuracies achievable with 30 or more layers when evaluated with a single image crop ( He et al. , 2015 ; Tan & Le , 2019 ) . But is large depth always necessary ? This question is worth asking because large depth is not without drawbacks . A deeper network leads to more sequential processing and higher latency ; it is harder to parallelize and less suitable for applications that require fast responses . In this paper , we study whether it is possible to achieve high performance with “ non-deep ” neural networks , especially networks with ∼10 layers . We find that , contrary to conventional wisdom , this is indeed possible . We present a network design that is non-deep and performs competitively against its deep counterparts . We refer to our architecture as ParNet ( Parallel Networks ) . We show , for the first time , that a classification network with a depth of just 12 can achieve accuracy greater than 80 % on ImageNet , 96 % on CIFAR10 , and 81 % on CIFAR100 . We also show that a detection network with a low-depth ( 12 ) backbone can achieve an AP of 48 % on MS-COCO . Note that the number of parameters in ParNet is comparable to state-of-the-art models , as illustrated in Figure 1 . A key design choice in ParNet is the use of parallel subnetworks . Instead of arranging layers sequentially , we arrange layers in parallel subnetworks . This design is “ embarrassingly parallel ” , in the sense that there are no connections between the subnetworks except at the beginning and the end . This allows us to reduce the depth of the network while maintaining high accuracy . 
It is worth noting that our parallel structures are distinct from “ widening ” a network by increasing the number of neurons in a layer . ParNet not only helps us answer a scientific question about the necessity of large depth , but also offers practical advantages . Due to the parallel substructures , ParNet can be efficiently parallelized across multiple processors . We find that ParNet can be effectively parallelized and outperforms ResNets in terms of both speed and accuracy . Note that this is achieved despite the extra latency introduced by the communication between processing units . This shows that in the future , with possibly specialized hardware to further mitigate communication latency , ParNet-like architectures could be used for creating extremely fast recognition systems . We also study the scaling rules for ParNet . Specifically , we show that ParNet can be effectively scaled by increasing width , resolution , and number of branches , all while keeping depth constant . We observe that the performance of ParNet does not saturate and increases as we increase computational throughput . This suggests that by increasing compute further , one can achieve even higher performance while maintaining small depth ( ∼10 ) and low latency . To summarize , our contributions are three-fold : • We show , for the first time , that a neural network with a depth of only 12 can achieve high performance on very competitive benchmarks ( 80.7 % on ImageNet , 96 % on CIFAR10 , 81 % on CIFAR100 ) . • We show how parallel structures in ParNet can be utilized for fast , low-latency inference . • We study the scaling rules for ParNet and demonstrate effective scaling with constant low depth . 2 RELATED WORK . Analyzing importance of depth . There exists a rich literature analyzing the importance of depth in neural networks . The classic work of Cybenko et al . showed that a single-layer neural network with sigmoid activations can approximate any function with arbitrarily small error ( Cybenko , 1989 ) . However , one needs to use a network with sufficiently large width , which can drastically increase the parameter count . Subsequent works have shown that , to approximate a function , a deep network with non-linearity needs exponentially fewer parameters than its shallow counterpart ( Liang & Srikant , 2017 ) . This is often cited as one of the major advantages of large depth . Several works have also empirically analyzed the importance of depth and came to the conclusion that under a fixed parameter budget , deeper networks perform better than their shallow counter- parts ( Eigen et al. , 2013 ; Urban et al. , 2017 ) . However , in such analysis , prior works have only studied shallow networks with a linear , sequential structure , and it is unclear whether the conclusion still holds with alternative designs . In this work , we show that , contrary to conventional wisdom , a shallow network can perform surprisingly well , and the key is to have parallel substructures . Scaling DNNs . There have been many exciting works that have studied the problem of scaling neural networks . Tan & Le ( 2019 ) showed that increasing depth , width , and resolution leads to effective scaling of convolutional networks . We also study scaling rules but focus on the low-depth regime . We find that one can increase the number of branches , width , and resolution to effectively scale ParNet while keeping depth constant and low . 
Zagoruyko & Komodakis ( 2016 ) showed that shallower networks with a large width can achieve similar performance to deeper ResNets . We also scale our networks by increasing their width . However , we consider networks that are much shallower – a depth of just 12 compared to 50 considered for ImageNet by Zagoruyko & Komodakis ( 2016 ) – and introduce parallel substructures . Shallow networks . Shallow networks have attracted attention in theoretical machine learning . With infinite width , a single-layer neural network behaves like a Gaussian Process , and one can understand the training procedure in terms of kernel methods ( Jacot et al. , 2018 ) . However , such models do not perform competitively when compared to state-of-the-art networks ( Li et al. , 2019 ) . We provide empirical proof that non-deep networks can be competitive with their deep counterparts . Multi-stream networks . Multi-stream neural networks have been used in a variety of computer vision tasks such as segmentation ( Chen et al. , 2016 ; 2017 ) , detection ( Lin et al. , 2017 ) , and video classification ( Wu et al. , 2016 ) . The HRNet architecture maintains multi-resolution streams throughout the forward pass ( Wang et al. , 2020 ) ; these streams are fused together at regular intervals to exchange information . We also use streams with different resolutions , but our network is much shallower ( 12 vs. 38 for the smallest HRNet for classification ) and the streams are fused only once , at the very end , making parallelization easier . 3 METHOD . In this section , we develop and analyze ParNet , a network architecture that is much less deep but still achieves high performance on competitive benchmarks . ParNet consists of parallel substructures that process features at different resolutions . We refer to these parallel substructures as streams . Features from different streams are fused at a later stage in the network , and these fused features are used for the downstream task . Figure 2a provides a schematic representation of ParNet . 3.1 PARNET BLOCK . In ParNet , we utilize VGG-style blocks ( Simonyan & Zisserman , 2015 ) . To see whether non-deep networks can achieve high performance , we empirically find that VGG-style blocks are more suitable than ResNet-style blocks ( Table 8 ) . In general , training VGG-style networks is more difficult than their ResNet counterparts ( He et al. , 2016a ) . But recent work shows that it is easier to train net- works with such blocks if one uses a “ structural reparameterization ” technique ( Ding et al. , 2021 ) . During training , one uses multiple branches over the 3×3 convolution blocks . Once trained , the multiple branches can be fused into one 3×3 convolution . Hence , one ends up with a plain network consisting of only 3×3 block and non-linearity . This reparameterization or fusion of blocks helps reduce latency during inference . We borrow our initial block design from Rep-VGG ( Ding et al. , 2021 ) and modify it to make it more suitable for our non-deep architecture . One challenge with a non-deep network with only 3×3 convolutions is that the receptive field is rather limited . To address this , we build a SkipSqueeze-Excitation ( SSE ) layer which is based on the Squeeze-and-Excitation ( SE ) design ( Hu et al. , 2018 ) . Vanilla Squeeze-and-Excitation is not suitable for our purpose as it increases the depth of the network . Hence we use a Skip-Squeeze-Excitation design which is applied alongside the skip connection and uses a single fully-connected layer . 
We find that this design helps increase performance ( Table 7 ) . Figure 2b provides a schematic representation of our modified Rep-VGG block with the Skip-Squeeze-Excitation module . We refer to this block as the RepVGG-SSE . One concern , especially with large-scale datasets such as ImageNet , is that a non-deep network may not have sufficient non-linearity , limiting its representational power . Thus we replace the ReLU activation with SiLU ( Ramachandran et al. , 2017 ) . 3.2 DOWNSAMPLING AND FUSION BLOCK . Apart from the RepVGG-SSE block , whose input and output have the same size , ParNet also contains Downsampling and Fusion blocks . The Downsampling block reduces resolution and increases width to enable multi-scale processing , while the Fusion block combines information from multiple resolutions . In the Downsampling block , there is no skip connection ; instead , we add a single-layered SE module parallel to the convolution layer . Additionally , we add 2D average pooling in the 1×1 convolution branch . The Fusion block is similar to the Downsampling block but contains an extra concatenation layer . Because of concatenation , the input to the Fusion block has twice as many channels as a Downsampling block . Hence , to reduce the parameter count , we use convolution with group 2 . Please refer to Figure A1 in the appendix for a schematic representation of the Downsampling and Fusion blocks .", "role": "user" }, { "content": "The paper proposes to manually design a new 12-depth CNN architecture ParNet based on parallel subnetworks instead of traditionally deeply stacked blocks. Experiments show that ParNet is the first CNN achieving over 80% accuracy on ImageNet with 12 depth only. ParNet also achieves a competitive AP of 48% on MS-COCO for object detection.", "role": "assistant" } ]
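Illustrative note on the ParNet entry above: the core idea — a few shallow streams processed independently and fused only once at the end — can be sketched in a few lines of PyTorch. This is a toy sketch under stated assumptions, not the paper's implementation: the stream count, channel widths, and the plain conv block standing in for the RepVGG-SSE block (and its train-time reparameterization branches) are all invented here for brevity.

```python
# Toy sketch of a shallow, parallel-stream classifier in the spirit of ParNet.
# All sizes and the block design are illustrative assumptions, not the paper's.
import torch
import torch.nn as nn

class ConvBlock(nn.Module):
    """3x3 conv + BN + SiLU; a stand-in for the paper's RepVGG-SSE block."""
    def __init__(self, c_in, c_out, stride=1):
        super().__init__()
        self.conv = nn.Conv2d(c_in, c_out, 3, stride=stride, padding=1, bias=False)
        self.bn = nn.BatchNorm2d(c_out)
        self.act = nn.SiLU()

    def forward(self, x):
        return self.act(self.bn(self.conv(x)))

class ToyParallelNet(nn.Module):
    """Three shallow streams at different resolutions, fused once at the end."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.stem = ConvBlock(3, 32, stride=2)
        # Streams share no connections with each other ("embarrassingly parallel").
        self.streams = nn.ModuleList([
            nn.Sequential(ConvBlock(32, 64, stride=s), ConvBlock(64, 64), ConvBlock(64, 64))
            for s in (1, 2, 4)  # each stream works at a different downsampling factor
        ])
        self.head = nn.Linear(64 * 3, num_classes)

    def forward(self, x):
        x = self.stem(x)
        feats = [s(x).mean(dim=(2, 3)) for s in self.streams]  # global average pool per stream
        return self.head(torch.cat(feats, dim=1))               # single fusion point

model = ToyParallelNet()
print(model(torch.randn(2, 3, 64, 64)).shape)  # torch.Size([2, 10])
```

Because the streams exchange no information until the final concatenation, each entry of `streams` could in principle be dispatched to a separate device, which is the latency argument the entry above makes for low-depth parallel designs.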
science.bioasq_list_qa
Examine the following excerpts from biomedical literature. Your objective is to extract relevant information to answer the posed question. Format your answer as a JSON array. Context: We analyzed the mRNA levels of catecholamine-synthesizing enzymes: tyrosine hydroxylase (TH), aromatic L-amino acid decarboxylase (AAAD), dopamine-β-hydroxylase (DBH) and phenylethanolamine N-methyltransferase (PNMT) in adrenal glands of 18 pigs with chronic systolic non-ischaemic HF (tachycardia-induced cardiomyopathy due to right ventricle pacing) and 6 sham-operated controls. This study aimed to investigate physical exercise-related changes in gene expression of catecholamine biosynthetic enzymes (tyrosine hydroxylase, dopamine-ß-hydroxylase and phenylethanolamine N-methyltransferase) and cyclic adenosine monophosphate response element-binding (CREB) in the adrenal medulla, concentrations of catecholamines and corticosterone (CORT) in the plasma and the weight of adrenal glands of chronically psychosocially stressed adult rats exposed daily to 20 min treadmill running for 12 weeks. In the adrenal medulla, despite the absence of morphological changes, immunohistochemistry for tyrosine hydroxylase, dopamine β-hydroxylase and phenyl-ethanolamine-N-methyltransferase demonstrated an increased immunopositivity for these cathecolamine-synthesizing enzymes after intense exercise. Using isolated adrenal medulla we observed no difference in basal catecholamine secretion percentile between obese and lean animals. However, the percentile of catecholamine secretion stimulated by high K+ concentration was lower in the obese group. There was a decrease in the tyrosine hydroxylase enzyme expression (57.3%, P<0.004) in adrenal glands of obese mice. Interestingly, the expression of dopamine beta-hydroxylase was also reduced (47.0%, P<0.005). Phenylethanolamine N-methyltransferase expression was not affected. Differential housing (single vs. group) and social defeat of rats is known to alter the activity of catecholamine-synthesizing enzymes in the medulla. The present studies examined the effect of 70 days of triad (3 rats per large cage) and individual housing of male rats on adrenal mRNA levels of tyrosine hydroxylase (TH), dopamine-beta-hydroxylase (DBH) and phenylethanolamine-N-methyltransferase (PNMT) and on TH protein levels. As a first step toward understanding the molecular mechanisms by which catecholamine synthesis is controlled in the tumor, we measured the levels of mRNA coding for the catecholamine synthesizing enzyme, tyrosine hydroxylase (TH) and catecholamines in 6 pheochromocytomas and 2 normal adrenal glands. We also examined the gene expression of the messengers of other catecholamine synthesizing enzymes, dopamine beta-hydroxylase (DBH) and aromatic 1-amino acid decarboxylase (AADC) in pheochromocytomas. These findings indicate that catecholamine overproduction in pheochromocytomas is mediated by the overexpression of genes coding for catecholamines synthesizing enzymes, TH, DBH, and AADC. 
he present study investigated the cellular localization of 3 catecholamine biosynthetic enzymes, tyrosine hydroxylase (TH), dopamine beta-hydroxylase (DBH), and phenylethanolamine N-methyltransferase (PNMT) tyrosine hydroxylase (TH) the rate limiting catecholamine biosynthetic enzyme and also of dopamine beta-hydroxylase (DBH) tyrosine hydroxylase (TH), dopamine-beta-hydroxylase (DBH) and phenylethanolamine-N-methyltransferase (PNMT) in sympathetic ganglia and adrenals This study aimed to investigate physical exercise-related changes in gene expression of catecholamine biosynthetic enzymes (tyrosine hydroxylase, dopamine-ß-hydroxylase and phenylethanolamine N-methyltransferase) and cyclic adenosine monophosphate response element-binding (CREB) in the adrenal medulla, concentrations of catecholamines and corticosterone (CORT) in the plasma and the weight of adrenal glands of chronically psychosocially stressed adult rats exposed daily to 20 min treadmill running for 12 weeks As a first step toward understanding the molecular mechanisms by which catecholamine synthesis is controlled in the tumor, we measured the levels of mRNA coding for the catecholamine synthesizing enzyme, tyrosine hydroxylase (TH) and catecholamines in 6 pheochromocytomas and 2 normal adrenal glands We analyzed the mRNA levels of catecholamine-synthesizing enzymes: tyrosine hydroxylase (TH), aromatic L-amino acid decarboxylase (AAAD), dopamine-β-hydroxylase (DBH) and phenylethanolamine N-methyltransferase (PNMT) in adrenal glands of 18 pigs with chronic systolic non-ischaemic HF (tachycardia-induced cardiomyopathy due to right ventricle pacing) and 6 sham-operated controls This study aimed to investigate physical exercise-related changes in gene expression of catecholamine biosynthetic enzymes (tyrosine hydroxylase, dopamine-ß-hydroxylase and phenylethanolamine N-methyltransferase) and cyclic adenosine monophosphate response element-binding (CREB) in the adrenal medulla, concentrations of catecholamines and corticosterone (CORT) in the plasma and the weight of adrenal glands of chronically psychosocially stressed adult rats exposed daily to 20 min treadmill running for 12 weeks. We used immunocytochemistry to study the ontogeny of leu-enkephalin and the catecholamine-synthesizing enzymes dopamine beta-hydroxylase and phenylethanolamine N-methyltransferase in adjacent sections of 14 fetal rhesus and 31 fetal human adrenal glands. As a first step toward understanding the molecular mechanisms by which catecholamine synthesis is controlled in the tumor, we measured the levels of mRNA coding for the catecholamine synthesizing enzyme, tyrosine hydroxylase (TH) and catecholamines in 6 pheochromocytomas and 2 normal adrenal glands. Question: Which enzymes synthesize catecholamines in adrenal glands?
["Tyrosine Hydroxylase", "Aromatic L-amino acid decarboxylase", "Dopamine \u03b2-hydroxylase", "Phenylethanolamine N-methyltransferase"]
science.bioasq_list_qa.400
[ { "content": "Examine the following excerpts from biomedical literature. Your objective is to extract relevant information to answer the posed question. Format your answer as a JSON array.\n\nContext: We analyzed the mRNA levels of catecholamine-synthesizing enzymes: tyrosine hydroxylase (TH), aromatic L-amino acid decarboxylase (AAAD), dopamine-β-hydroxylase (DBH) and phenylethanolamine N-methyltransferase (PNMT) in adrenal glands of 18 pigs with chronic systolic non-ischaemic HF (tachycardia-induced cardiomyopathy due to right ventricle pacing) and 6 sham-operated controls.\nThis study aimed to investigate physical exercise-related changes in gene expression of catecholamine biosynthetic enzymes (tyrosine hydroxylase, dopamine-ß-hydroxylase and phenylethanolamine N-methyltransferase) and cyclic adenosine monophosphate response element-binding (CREB) in the adrenal medulla, concentrations of catecholamines and corticosterone (CORT) in the plasma and the weight of adrenal glands of chronically psychosocially stressed adult rats exposed daily to 20 min treadmill running for 12 weeks.\nIn the adrenal medulla, despite the absence of morphological changes, immunohistochemistry for tyrosine hydroxylase, dopamine β-hydroxylase and phenyl-ethanolamine-N-methyltransferase demonstrated an increased immunopositivity for these cathecolamine-synthesizing enzymes after intense exercise.\nUsing isolated adrenal medulla we observed no difference in basal catecholamine secretion percentile between obese and lean animals. However, the percentile of catecholamine secretion stimulated by high K+ concentration was lower in the obese group. There was a decrease in the tyrosine hydroxylase enzyme expression (57.3%, P<0.004) in adrenal glands of obese mice. Interestingly, the expression of dopamine beta-hydroxylase was also reduced (47.0%, P<0.005). Phenylethanolamine N-methyltransferase expression was not affected.\nDifferential housing (single vs. group) and social defeat of rats is known to alter the activity of catecholamine-synthesizing enzymes in the medulla. 
The present studies examined the effect of 70 days of triad (3 rats per large cage) and individual housing of male rats on adrenal mRNA levels of tyrosine hydroxylase (TH), dopamine-beta-hydroxylase (DBH) and phenylethanolamine-N-methyltransferase (PNMT) and on TH protein levels.\nAs a first step toward understanding the molecular mechanisms by which catecholamine synthesis is controlled in the tumor, we measured the levels of mRNA coding for the catecholamine synthesizing enzyme, tyrosine hydroxylase (TH) and catecholamines in 6 pheochromocytomas and 2 normal adrenal glands.\nWe also examined the gene expression of the messengers of other catecholamine synthesizing enzymes, dopamine beta-hydroxylase (DBH) and aromatic 1-amino acid decarboxylase (AADC) in pheochromocytomas.\nThese findings indicate that catecholamine overproduction in pheochromocytomas is mediated by the overexpression of genes coding for catecholamines synthesizing enzymes, TH, DBH, and AADC.\nhe present study investigated the cellular localization of 3 catecholamine biosynthetic enzymes, tyrosine hydroxylase (TH), dopamine beta-hydroxylase (DBH), and phenylethanolamine N-methyltransferase (PNMT)\ntyrosine hydroxylase (TH) the rate limiting catecholamine biosynthetic enzyme and also of dopamine beta-hydroxylase (DBH)\ntyrosine hydroxylase (TH), dopamine-beta-hydroxylase (DBH) and phenylethanolamine-N-methyltransferase (PNMT) in sympathetic ganglia and adrenals\nThis study aimed to investigate physical exercise-related changes in gene expression of catecholamine biosynthetic enzymes (tyrosine hydroxylase, dopamine-ß-hydroxylase and phenylethanolamine N-methyltransferase) and cyclic adenosine monophosphate response element-binding (CREB) in the adrenal medulla, concentrations of catecholamines and corticosterone (CORT) in the plasma and the weight of adrenal glands of chronically psychosocially stressed adult rats exposed daily to 20 min treadmill running for 12 weeks\nAs a first step toward understanding the molecular mechanisms by which catecholamine synthesis is controlled in the tumor, we measured the levels of mRNA coding for the catecholamine synthesizing enzyme, tyrosine hydroxylase (TH) and catecholamines in 6 pheochromocytomas and 2 normal adrenal glands\nWe analyzed the mRNA levels of catecholamine-synthesizing enzymes: tyrosine hydroxylase (TH), aromatic L-amino acid decarboxylase (AAAD), dopamine-β-hydroxylase (DBH) and phenylethanolamine N-methyltransferase (PNMT) in adrenal glands of 18 pigs with chronic systolic non-ischaemic HF (tachycardia-induced cardiomyopathy due to right ventricle pacing) and 6 sham-operated controls\nThis study aimed to investigate physical exercise-related changes in gene expression of catecholamine biosynthetic enzymes (tyrosine hydroxylase, dopamine-ß-hydroxylase and phenylethanolamine N-methyltransferase) and cyclic adenosine monophosphate response element-binding (CREB) in the adrenal medulla, concentrations of catecholamines and corticosterone (CORT) in the plasma and the weight of adrenal glands of chronically psychosocially stressed adult rats exposed daily to 20 min treadmill running for 12 weeks.\nWe used immunocytochemistry to study the ontogeny of leu-enkephalin and the catecholamine-synthesizing enzymes dopamine beta-hydroxylase and phenylethanolamine N-methyltransferase in adjacent sections of 14 fetal rhesus and 31 fetal human adrenal glands.\nAs a first step toward understanding the molecular mechanisms by which catecholamine synthesis is controlled in the tumor, we 
measured the levels of mRNA coding for the catecholamine synthesizing enzyme, tyrosine hydroxylase (TH) and catecholamines in 6 pheochromocytomas and 2 normal adrenal glands.\n\nQuestion: Which enzymes synthesize catecholamines in adrenal glands?", "role": "user" }, { "content": "[\"Tyrosine Hydroxylase\", \"Aromatic L-amino acid decarboxylase\", \"Dopamine \\u03b2-hydroxylase\", \"Phenylethanolamine N-methyltransferase\"]", "role": "assistant" } ]
science.multicite_intent_classification
Examine the text below, which includes a citation marked by "<cite>" and "</cite>". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text. Provide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose. Text with Citation: PRA, as one of the most promising research innovations for knowledge base completion, has also attracted considerable attention<cite> [Lao et al., 2015</cite>; Gardner and Mitchell, 2015; Wang et al., 2016;<cite> Nickel et al., 2016]</cite> .
["Background"]
science.multicite_intent_classification.2195
[ { "content": "Examine the text below, which includes a citation marked by \"<cite>\" and \"</cite>\". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.\n\nProvide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.\n\nText with Citation:\nPRA, as one of the most promising research innovations for knowledge base completion, has also attracted considerable attention<cite> [Lao et al., 2015</cite>; Gardner and Mitchell, 2015; Wang et al., 2016;<cite> Nickel et al., 2016]</cite> .", "role": "user" }, { "content": "[\"Background\"]", "role": "assistant" } ]
science.discomat_te
| Assignment | Observed (cm-1) | Ref. values (cm-1) | | Mn2+-O stretching | 250 | 240-290 [2,3,19] | | Mn3+-O stretching | - | 370-400 [3] | | Si-O-Si rocking motion of the bridging oxygen | 460 | 460 [20,21] | | ring bending of [B3O9]3- | 525 | 525 [2] | | B-O-B bending between two trigonal boron atoms | 690 | 690-730 [3,22,23] | | Si-O-Si bending motion of the bridging oxygen | 805 | 785-810 [20,21] | | B-O stretching in tetrahedron [BO4]- | 930 | 900-1000 [2,22,23] | | Si-O-Si asymmetric stretching | 1080 | 1075-1120 [21] | | Si-O-B asymmetric stretching | - | 1150 [24,25] | | B-O stretching in [BO2O-] n chains | 1230 | 1220-1270 [22,23] | | B-O- stretching of [BO2O-] n chains | 1390 | 1350-1400 [2,22,23] | Caption: Assignations of IR bands for the obtained samples. Examine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance: {"value": "xx", "type": "Other"} {"value": "xx", "type": "Glass_Compound_Amount", "constituent": "xx", "unit": "xx", "material": "xx"} Process each cell sequentially, and for any unknown attributes, use "xx" as a placeholder. Each JSON should be on a new line. Output:
{"value": "250", "type": "Other"} {"value": "240", "type": "Other"} {"value": "290", "type": "Other"} {"value": "370", "type": "Other"} {"value": "400", "type": "Other"} {"value": "460", "type": "Other"} {"value": "460", "type": "Other"} {"value": "525", "type": "Other"} {"value": "525", "type": "Other"} {"value": "690", "type": "Other"} {"value": "690", "type": "Other"} {"value": "730", "type": "Other"} {"value": "805", "type": "Other"} {"value": "785", "type": "Other"} {"value": "810", "type": "Other"} {"value": "930", "type": "Other"} {"value": "900", "type": "Other"} {"value": "1000", "type": "Other"} {"value": "1080", "type": "Other"} {"value": "1075", "type": "Other"} {"value": "1120", "type": "Other"} {"value": "1150", "type": "Other"} {"value": "1230", "type": "Other"} {"value": "1220", "type": "Other"} {"value": "1270", "type": "Other"} {"value": "1390", "type": "Other"} {"value": "1350", "type": "Other"} {"value": "1400", "type": "Other"}
science.discomat_te.238
[ { "content": "| Assignment | Observed (cm-1) | Ref. values (cm-1) |\n| Mn2+-O stretching | 250 | 240-290 [2,3,19] |\n| Mn3+-O stretching | - | 370-400 [3] |\n| Si-O-Si rocking motion of the bridging oxygen | 460 | 460 [20,21] |\n| ring bending of [B3O9]3- | 525 | 525 [2] |\n| B-O-B bending between two trigonal boron atoms | 690 | 690-730 [3,22,23] |\n| Si-O-Si bending motion of the bridging oxygen | 805 | 785-810 [20,21] |\n| B-O stretching in tetrahedron [BO4]- | 930 | 900-1000 [2,22,23] |\n| Si-O-Si asymmetric stretching | 1080 | 1075-1120 [21] |\n| Si-O-B asymmetric stretching | - | 1150 [24,25] |\n| B-O stretching in [BO2O-] n chains | 1230 | 1220-1270 [22,23] |\n| B-O- stretching of [BO2O-] n chains | 1390 | 1350-1400 [2,22,23] |\n\nCaption: Assignations of IR bands for the obtained samples.\n\nExamine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:\n{\"value\": \"xx\", \"type\": \"Other\"}\n{\"value\": \"xx\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"xx\", \"unit\": \"xx\", \"material\": \"xx\"}\n\nProcess each cell sequentially, and for any unknown attributes, use \"xx\" as a placeholder. Each JSON should be on a new line.\n\nOutput:", "role": "user" }, { "content": "{\"value\": \"250\", \"type\": \"Other\"}\n{\"value\": \"240\", \"type\": \"Other\"}\n{\"value\": \"290\", \"type\": \"Other\"}\n{\"value\": \"370\", \"type\": \"Other\"}\n{\"value\": \"400\", \"type\": \"Other\"}\n{\"value\": \"460\", \"type\": \"Other\"}\n{\"value\": \"460\", \"type\": \"Other\"}\n{\"value\": \"525\", \"type\": \"Other\"}\n{\"value\": \"525\", \"type\": \"Other\"}\n{\"value\": \"690\", \"type\": \"Other\"}\n{\"value\": \"690\", \"type\": \"Other\"}\n{\"value\": \"730\", \"type\": \"Other\"}\n{\"value\": \"805\", \"type\": \"Other\"}\n{\"value\": \"785\", \"type\": \"Other\"}\n{\"value\": \"810\", \"type\": \"Other\"}\n{\"value\": \"930\", \"type\": \"Other\"}\n{\"value\": \"900\", \"type\": \"Other\"}\n{\"value\": \"1000\", \"type\": \"Other\"}\n{\"value\": \"1080\", \"type\": \"Other\"}\n{\"value\": \"1075\", \"type\": \"Other\"}\n{\"value\": \"1120\", \"type\": \"Other\"}\n{\"value\": \"1150\", \"type\": \"Other\"}\n{\"value\": \"1230\", \"type\": \"Other\"}\n{\"value\": \"1220\", \"type\": \"Other\"}\n{\"value\": \"1270\", \"type\": \"Other\"}\n{\"value\": \"1390\", \"type\": \"Other\"}\n{\"value\": \"1350\", \"type\": \"Other\"}\n{\"value\": \"1400\", \"type\": \"Other\"}", "role": "assistant" } ]
science.qasper_abstractive_qa
Examine the provided sections from a research document and address the accompanying query. The text is divided by newlines. Document Content: Neural User Simulation for Corpus-based Policy Optimisation for Spoken Dialogue Systems User Simulators are one of the major tools that enable offline training of task-oriented dialogue systems. For this task the Agenda-Based User Simulator (ABUS) is often used. The ABUS is based on hand-crafted rules and its output is in semantic form. Issues arise from both properties such as limited diversity and the inability to interface a text-level belief tracker. This paper introduces the Neural User Simulator (NUS) whose behaviour is learned from a corpus and which generates natural language, hence needing a less labelled dataset than simulators generating a semantic output. In comparison to much of the past work on this topic, which evaluates user simulators on corpus-based metrics, we use the NUS to train the policy of a reinforcement learning based Spoken Dialogue System. The NUS is compared to the ABUS by evaluating the policies that were trained using the simulators. Cross-model evaluation is performed i.e. training on one simulator and testing on the other. Furthermore, the trained policies are tested on real users. In both evaluation tasks the NUS outperformed the ABUS. Motivation and Related Work Ideally the DM's policy would be trained by interacting with real users. Although there are models that support on-line learning BIBREF15 , for the majority of RL algorithms, which require a lot of interactions, this is impractical. Furthermore, a set of users needs to be recruited every time a policy is trained. This makes common practices such as hyper-parameter optimization prohibitively expensive. Thus, it is natural to try to learn from a dataset which needs to be recorded only once, but can be used over and over again. A problem with learning directly from recorded dialogue corpora is that the state space that was visited during the collection of the data is limited; the size of the recorded corpus usually falls short of the requirements for training a statistical DM. However, even if the size of the corpus is large enough the optimal dialogue strategy is likely not to be contained within it. A solution is to transform the static corpus into a dynamic tool: a user simulator. The user simulator (US) is trained on a dialogue corpus to learn what responses a real user would provide in a given dialogue context. The US is trained using supervised learning since the aim is for it to learn typical user behaviour. For the DM, however, we want optimal behaviour which is why supervised learning cannot be used. By interacting with the SDS, the trained US can be used to train the DM's policy. The DM's policy is optimised using the feedback given by either the user simulator or a separate evaluator. Any number of dialogues can be generated using the US and dialogue strategies that are not in the recorded corpus can be explored. Most user-simulators work on the level of user semantics. These usually consist of a user dialogue act (e.g. inform, or request) and a corresponding slot-value pair. The first statistical user simulator BIBREF16 used a simple bi-gram model INLINEFORM0 to predict the next user act INLINEFORM1 given the last system act INLINEFORM2 . It has the advantage of being purely probabilistic and domain-independent. 
However, it does not take the full dialogue history into account and is not conditioned on a goal, leading to incoherent user behaviour throughout a dialogue. BIBREF17 , BIBREF18 attempted to overcome goal inconsistency by proposing a graph-based model. However, developing the graph structure requires extensive domain-specific knowledge. BIBREF19 combined features from Sheffler and Young's work with Eckert's Model, by conditioning a set of probabilities on an explicit representation of the user goal and memory. A Markov Model is also used by BIBREF20 . It uses a large feature vector to describe the user's current state, which helps to compensate for the Markov assumption. However, the model is not conditioned on any goal. Therefore, it is not used to train a dialogue policy since it is impossible to determine whether the user goal was fulfilled. A hidden Markov model was proposed by BIBREF21 , which was also not used to train a policy. BIBREF22 cast user simulation as an inverse reinforcement learning problem where the user is modelled as a decision-making agent. The model did not incorporate a user goal and was hence not used to train a policy. The most prominent user model for policy optimisation is the Agenda-Based User Simulator BIBREF6 , which represents the user state elegantly as a stack of necessary user actions, called the agenda. The mechanism that generates the user response and updates the agenda does not require any data, though it can be improved using data. The model is conditioned on a goal for which it has update rules in case the dialogue system expresses that it cannot fulfil the goal. BIBREF4 modelled user simulation as a sequence-to-sequence task. The model can keep track of the dialogue history and user behaviour is learned entirely from data. However, goal changes were not modelled, even though a large proportion of dialogues within their dataset (DSTC2) contains goal changes. Their model outperformed the ABUS on statistical metrics, which is not surprising given that it was trained by optimising a statistical metric and the ABUS was not. The aforementioned work focuses on user simulation at the semantic level. Multiple issues arise from this approach. Firstly, annotating the user-response with the correct semantics is costly. More data could be collected, if the US were to output natural language. Secondly, research suggests that the two modules of an SDS performing Spoken Language Understanding (SLU) and belief tracking should be jointly trained as a single entity BIBREF23 , BIBREF24 , BIBREF25 , BIBREF26 , BIBREF27 . In fact in the second Dialogue State Tracking Challenge (DSTC2) BIBREF28 , the data of which this work uses, systems which used no external SLU module outperformed all systems that only used an external SLU Module. Training the policy of a DM in a simulated environment, when also using a joint system for SLU and belief tracking is not possible without a US that produces natural language. Thirdly, a US is sometimes augmented with an error model which generates a set of competing hypotheses with associated confidence scores trying to replicate the errors of the speech recogniser. When the error model matches the characteristics of the speech recogniser more accurately, the SDS performs better BIBREF29 . However, speech recognition errors are badly modelled based on user semantics since they arise (mostly) due to the phonetics of the spoken words and not their semantics BIBREF30 . 
Thus, an SDS that is trained with a natural language based error model is likely to outperform one trained with a semantic error model when tested on real users. Sequence-to-sequence learning for word-level user simulation is performed in BIBREF31 , though the model is not conditioned on any goal and hence not used for policy optimisation. A word-level user simulator was also used in BIBREF32 where it was built by augmenting the ABUS with a natural language generator. Goal Generator The Goal Generator generates a random goal INLINEFORM0 at the start of the dialogue. It consists of a set of constraints, INLINEFORM1 , which specify the required venue e.g. (food=Spanish, area=north) and a number of requests, INLINEFORM2 , that specify the information that the NUS wants about the final venue e.g. the address or the phone number. The possibilities for INLINEFORM3 and INLINEFORM4 are defined by the ontology. In DSTC2 INLINEFORM5 can consist of a maximum of three constraints; food, area and pricerange. Whether each of the three is present is independently sampled with a probability of 0.66, 0.62 and 0.58 respectively. These probabilities were estimated from the DSTC2 data set. If no constraint is sampled then the goal is re-sampled. For each slot in INLINEFORM6 a value (e.g. north for area) is sampled uniformly from the ontology. Similarly, the presence of a request is independently sampled, followed by re-sampling if zero requests were chosen. When training the sequence-to-sequence model, the Goal Generator is not used, but instead the goal labels from the DSTC2 dataset are used. In DSTC2 one goal-label is given to the entire dialogue. This goal is always the final goal. If the user's goal at the start of the dialogue is (food=eritrean, area=south), which is changed to (food=spanish, area=south), due to the non-existence of an Eritrean restaurant in the south, using only the final goal is insufficient to model the dialogue. The final goal can only be used for the requests as they are not altered during a dialogue. DSTC2 also provides turn-specific labels. These contain the constraints and requests expressed by the user up until and including the current turn. When training a policy with the NUS, such labels would not be available as they “predict the future", i.e. when the turn-specific constraints change from (area=south) to (food=eritrean, area=south) it means that the user will inform the system about her desire to eat Eritrean food in the current turn. In related work on user-simulation for which the DSTC2 dataset was used, the final goal was used for the entire dialogue BIBREF4 , BIBREF33 , BIBREF34 . As stated above, we do not believe this to be sufficient. The following describes how to update the turn-specific constraint labels such that their behaviour can be replicated when training a DM's policy, whilst allowing goal changes to be modelled. The update strategy is illustrated in Table TABREF4 with an example. The final turn keeps its constraints, from which we iterate backwards through the list of DSTC2's turn-specific constraints. The constraints of a turn will be set to the updated constraints of the succeeding turn, besides if the same slot is present with a different value. In that case the value will be kept. The behaviour of the updated turn-specific goal-labels can be replicated when the NUS is used to train a DM's policy. In the example, the food type changed due to the SDS expressing that there is no restaurant serving Eritrean food in the south. 
When deploying the NUS to train a policy, the goal is updated when the SDS outputs the canthelp dialogue act. Testing with a simulated user In Schatzmann et. al schatztmann2005effects cross-model evaluation is proposed to compare user simulators. First, the user simulators to be evaluated are used to train INLINEFORM0 policy each. Then these policies are tested using the different user simulators and the results averaged. BIBREF7 showed that a strategy learned with a good user model still performs well when tested on poor user models. If a policy performs well on all user simulators and not just on the one that it was trained on, it indicates that the US with which it was trained is diverse and realistic, and thus the policy is likely to perform better on real users. For each US five policies ( INLINEFORM1 ), each using a different random seed for initialisation, are trained. Results are reported for both the best and the average performance on 1000 test dialogues. The ABUS is programmed to always mention the new goal after a goal change. In order to not let this affect our results we implement the same for the NUS by re-sampling a sentence if the new goal is not mentioned. Testing with real users Though the above test is already more indicative of policy performance on real users than measuring statistical metrics of user behaviour, a better test is to test with human users. For the test on human users, two policies for each US that was used for training are chosen from the five policies. The first policy is the one that performed best when tested on the NUS. The second is the one that performed best when tested on the ABUS. This choice of policies is motivated by a type of overfitting to be seen in Sec. SECREF17 . The evaluation of the trained dialogue policies in interaction with real users follows a similar set-up to BIBREF40 . Users are recruited through the Amazon Mechanical Turk (AMT) service. 1000 dialogues (250 per policy) were gathered. The learnt policies were incorporated into an SDS pipeline with a commercial ASR system. The AMT users were asked to find a restaurant that matches certain constraints and find certain requests. Subjects were randomly allocated to one of the four analysed systems. After each dialogue the users were asked whether they judged the dialogue to be successful or not which was then translated to the reward measure. Cross-Model Evaluation Table TABREF18 shows the results of the cross-model evaluation after 4000 training dialogues. The policies trained with the NUS achieved an average success rate (SR) of 94.0% and of 96.6% when tested on the ABUS and the NUS, respectively. By comparison, the policies trained with the ABUS achieved average SRs of 99.5% and 45.5% respectively. Thus, training with the NUS leads to policies that can perform well on both USs, which is not the case for training with the ABUS. Furthermore, the best SRs when tested on the ABUS are similar at 99.9% (ABUS) and 99.8% (NUS). When tested on the NUS the best SRs were 71.5% (ABUS) and 98.0% (NUS). This shows that the behaviour of the Neural User Simulator is realistic and diverse enough to train policies that can also perform very well on the Agenda-Based User Simulator. Of the five policies, for each US, the policy performing best on the NUS was not the best performing policy on the ABUS. This could indicate that the policy “overfits” to a particular user simulator. Overfitting usually manifests itself in worse results as the model is trained for longer. 
Five policies trained on each US for only 1000 dialogues were also evaluated, the results of which can be seen in Table TABREF19 . After training for 1000 dialogues, the average SR of the policies trained on the NUS when tested on the ABUS was 97.3% in comparison to 94.0% after 4000 dialogues. This behaviour was observed for all five seeds, which indicates that the policy indeed overfits to the NUS. For the policies trained with the ABUS this was not observed. This could indicate that the policy can learn to exploit some of the shortcomings of the trained NUS. Figure 1: General Architecture of the Neural User Simulator. The System Output is passed to the Feature Extractor. It generates a new feature vector that is appended to the Feature History, which is passed to the sequence-to-sequence model to produce the user utterance. At the start of the dialogue the Goal Generator generates a goal, which might change during the course of the dialogue. Figure 2: Sequence-To-Sequence model of the Neural User Simulator. Here, the NUS is generating the user response to the third system output. The white, light-grey and dark blocks represent the RNN encoder, a fully-connected layer and the RNN decoder respectively. The previous output of the decoder is passed to its input for the next time-step. v3:1 are the first three feature vectors (see Sec. 4.2). Table 3: As Table 2 but trained for 1000 dialogues. Query: by how much did nus outperform abus? JSON Response:
{"answer": "Average success rate is higher by 2.6 percent points.", "evidence": ["Table TABREF18 shows the results of the cross-model evaluation after 4000 training dialogues. The policies trained with the NUS achieved an average success rate (SR) of 94.0% and of 96.6% when tested on the ABUS and the NUS, respectively. By comparison, the policies trained with the ABUS achieved average SRs of 99.5% and 45.5% respectively. Thus, training with the NUS leads to policies that can perform well on both USs, which is not the case for training with the ABUS. Furthermore, the best SRs when tested on the ABUS are similar at 99.9% (ABUS) and 99.8% (NUS). When tested on the NUS the best SRs were 71.5% (ABUS) and 98.0% (NUS). This shows that the behaviour of the Neural User Simulator is realistic and diverse enough to train policies that can also perform very well on the Agenda-Based User Simulator."]}
science.qasper_abstractive_qa.243.0
[ { "content": "Examine the provided sections from a research document and address the accompanying query. The text is divided by newlines.\n\nDocument Content:\n\nNeural User Simulation for Corpus-based Policy Optimisation for Spoken Dialogue Systems\n\nUser Simulators are one of the major tools that enable offline training of task-oriented dialogue systems. For this task the Agenda-Based User Simulator (ABUS) is often used. The ABUS is based on hand-crafted rules and its output is in semantic form. Issues arise from both properties such as limited diversity and the inability to interface a text-level belief tracker. This paper introduces the Neural User Simulator (NUS) whose behaviour is learned from a corpus and which generates natural language, hence needing a less labelled dataset than simulators generating a semantic output. In comparison to much of the past work on this topic, which evaluates user simulators on corpus-based metrics, we use the NUS to train the policy of a reinforcement learning based Spoken Dialogue System. The NUS is compared to the ABUS by evaluating the policies that were trained using the simulators. Cross-model evaluation is performed i.e. training on one simulator and testing on the other. Furthermore, the trained policies are tested on real users. In both evaluation tasks the NUS outperformed the ABUS.\n\nMotivation and Related Work\nIdeally the DM's policy would be trained by interacting with real users. Although there are models that support on-line learning BIBREF15 , for the majority of RL algorithms, which require a lot of interactions, this is impractical. Furthermore, a set of users needs to be recruited every time a policy is trained. This makes common practices such as hyper-parameter optimization prohibitively expensive. Thus, it is natural to try to learn from a dataset which needs to be recorded only once, but can be used over and over again.\nA problem with learning directly from recorded dialogue corpora is that the state space that was visited during the collection of the data is limited; the size of the recorded corpus usually falls short of the requirements for training a statistical DM. However, even if the size of the corpus is large enough the optimal dialogue strategy is likely not to be contained within it.\nA solution is to transform the static corpus into a dynamic tool: a user simulator. The user simulator (US) is trained on a dialogue corpus to learn what responses a real user would provide in a given dialogue context. The US is trained using supervised learning since the aim is for it to learn typical user behaviour. For the DM, however, we want optimal behaviour which is why supervised learning cannot be used. By interacting with the SDS, the trained US can be used to train the DM's policy. The DM's policy is optimised using the feedback given by either the user simulator or a separate evaluator. Any number of dialogues can be generated using the US and dialogue strategies that are not in the recorded corpus can be explored.\nMost user-simulators work on the level of user semantics. These usually consist of a user dialogue act (e.g. inform, or request) and a corresponding slot-value pair. The first statistical user simulator BIBREF16 used a simple bi-gram model INLINEFORM0 to predict the next user act INLINEFORM1 given the last system act INLINEFORM2 . It has the advantage of being purely probabilistic and domain-independent. 
However, it does not take the full dialogue history into account and is not conditioned on a goal, leading to incoherent user behaviour throughout a dialogue. BIBREF17 , BIBREF18 attempted to overcome goal inconsistency by proposing a graph-based model. However, developing the graph structure requires extensive domain-specific knowledge. BIBREF19 combined features from Sheffler and Young's work with Eckert's Model, by conditioning a set of probabilities on an explicit representation of the user goal and memory. A Markov Model is also used by BIBREF20 . It uses a large feature vector to describe the user's current state, which helps to compensate for the Markov assumption. However, the model is not conditioned on any goal. Therefore, it is not used to train a dialogue policy since it is impossible to determine whether the user goal was fulfilled. A hidden Markov model was proposed by BIBREF21 , which was also not used to train a policy. BIBREF22 cast user simulation as an inverse reinforcement learning problem where the user is modelled as a decision-making agent. The model did not incorporate a user goal and was hence not used to train a policy. The most prominent user model for policy optimisation is the Agenda-Based User Simulator BIBREF6 , which represents the user state elegantly as a stack of necessary user actions, called the agenda. The mechanism that generates the user response and updates the agenda does not require any data, though it can be improved using data. The model is conditioned on a goal for which it has update rules in case the dialogue system expresses that it cannot fulfil the goal. BIBREF4 modelled user simulation as a sequence-to-sequence task. The model can keep track of the dialogue history and user behaviour is learned entirely from data. However, goal changes were not modelled, even though a large proportion of dialogues within their dataset (DSTC2) contains goal changes. Their model outperformed the ABUS on statistical metrics, which is not surprising given that it was trained by optimising a statistical metric and the ABUS was not.\nThe aforementioned work focuses on user simulation at the semantic level. Multiple issues arise from this approach. Firstly, annotating the user-response with the correct semantics is costly. More data could be collected, if the US were to output natural language. Secondly, research suggests that the two modules of an SDS performing Spoken Language Understanding (SLU) and belief tracking should be jointly trained as a single entity BIBREF23 , BIBREF24 , BIBREF25 , BIBREF26 , BIBREF27 . In fact in the second Dialogue State Tracking Challenge (DSTC2) BIBREF28 , the data of which this work uses, systems which used no external SLU module outperformed all systems that only used an external SLU Module. Training the policy of a DM in a simulated environment, when also using a joint system for SLU and belief tracking is not possible without a US that produces natural language. Thirdly, a US is sometimes augmented with an error model which generates a set of competing hypotheses with associated confidence scores trying to replicate the errors of the speech recogniser. When the error model matches the characteristics of the speech recogniser more accurately, the SDS performs better BIBREF29 . However, speech recognition errors are badly modelled based on user semantics since they arise (mostly) due to the phonetics of the spoken words and not their semantics BIBREF30 . 
Thus, an SDS that is trained with a natural language based error model is likely to outperform one trained with a semantic error model when tested on real users. Sequence-to-sequence learning for word-level user simulation is performed in BIBREF31 , though the model is not conditioned on any goal and hence not used for policy optimisation. A word-level user simulator was also used in BIBREF32 where it was built by augmenting the ABUS with a natural language generator.\n\nGoal Generator\nThe Goal Generator generates a random goal INLINEFORM0 at the start of the dialogue. It consists of a set of constraints, INLINEFORM1 , which specify the required venue e.g. (food=Spanish, area=north) and a number of requests, INLINEFORM2 , that specify the information that the NUS wants about the final venue e.g. the address or the phone number. The possibilities for INLINEFORM3 and INLINEFORM4 are defined by the ontology. In DSTC2 INLINEFORM5 can consist of a maximum of three constraints; food, area and pricerange. Whether each of the three is present is independently sampled with a probability of 0.66, 0.62 and 0.58 respectively. These probabilities were estimated from the DSTC2 data set. If no constraint is sampled then the goal is re-sampled. For each slot in INLINEFORM6 a value (e.g. north for area) is sampled uniformly from the ontology. Similarly, the presence of a request is independently sampled, followed by re-sampling if zero requests were chosen.\nWhen training the sequence-to-sequence model, the Goal Generator is not used, but instead the goal labels from the DSTC2 dataset are used. In DSTC2 one goal-label is given to the entire dialogue. This goal is always the final goal. If the user's goal at the start of the dialogue is (food=eritrean, area=south), which is changed to (food=spanish, area=south), due to the non-existence of an Eritrean restaurant in the south, using only the final goal is insufficient to model the dialogue. The final goal can only be used for the requests as they are not altered during a dialogue. DSTC2 also provides turn-specific labels. These contain the constraints and requests expressed by the user up until and including the current turn. When training a policy with the NUS, such labels would not be available as they “predict the future\", i.e. when the turn-specific constraints change from (area=south) to (food=eritrean, area=south) it means that the user will inform the system about her desire to eat Eritrean food in the current turn.\nIn related work on user-simulation for which the DSTC2 dataset was used, the final goal was used for the entire dialogue BIBREF4 , BIBREF33 , BIBREF34 . As stated above, we do not believe this to be sufficient. The following describes how to update the turn-specific constraint labels such that their behaviour can be replicated when training a DM's policy, whilst allowing goal changes to be modelled. The update strategy is illustrated in Table TABREF4 with an example. The final turn keeps its constraints, from which we iterate backwards through the list of DSTC2's turn-specific constraints. The constraints of a turn will be set to the updated constraints of the succeeding turn, besides if the same slot is present with a different value. In that case the value will be kept. The behaviour of the updated turn-specific goal-labels can be replicated when the NUS is used to train a DM's policy. In the example, the food type changed due to the SDS expressing that there is no restaurant serving Eritrean food in the south. 
When deploying the NUS to train a policy, the goal is updated when the SDS outputs the canthelp dialogue act.\n\nTesting with a simulated user\nIn Schatzmann et. al schatztmann2005effects cross-model evaluation is proposed to compare user simulators. First, the user simulators to be evaluated are used to train INLINEFORM0 policy each. Then these policies are tested using the different user simulators and the results averaged. BIBREF7 showed that a strategy learned with a good user model still performs well when tested on poor user models. If a policy performs well on all user simulators and not just on the one that it was trained on, it indicates that the US with which it was trained is diverse and realistic, and thus the policy is likely to perform better on real users. For each US five policies ( INLINEFORM1 ), each using a different random seed for initialisation, are trained. Results are reported for both the best and the average performance on 1000 test dialogues. The ABUS is programmed to always mention the new goal after a goal change. In order to not let this affect our results we implement the same for the NUS by re-sampling a sentence if the new goal is not mentioned.\n\nTesting with real users\nThough the above test is already more indicative of policy performance on real users than measuring statistical metrics of user behaviour, a better test is to test with human users. For the test on human users, two policies for each US that was used for training are chosen from the five policies. The first policy is the one that performed best when tested on the NUS. The second is the one that performed best when tested on the ABUS. This choice of policies is motivated by a type of overfitting to be seen in Sec. SECREF17 . The evaluation of the trained dialogue policies in interaction with real users follows a similar set-up to BIBREF40 . Users are recruited through the Amazon Mechanical Turk (AMT) service. 1000 dialogues (250 per policy) were gathered. The learnt policies were incorporated into an SDS pipeline with a commercial ASR system. The AMT users were asked to find a restaurant that matches certain constraints and find certain requests. Subjects were randomly allocated to one of the four analysed systems. After each dialogue the users were asked whether they judged the dialogue to be successful or not which was then translated to the reward measure.\n\nCross-Model Evaluation\nTable TABREF18 shows the results of the cross-model evaluation after 4000 training dialogues. The policies trained with the NUS achieved an average success rate (SR) of 94.0% and of 96.6% when tested on the ABUS and the NUS, respectively. By comparison, the policies trained with the ABUS achieved average SRs of 99.5% and 45.5% respectively. Thus, training with the NUS leads to policies that can perform well on both USs, which is not the case for training with the ABUS. Furthermore, the best SRs when tested on the ABUS are similar at 99.9% (ABUS) and 99.8% (NUS). When tested on the NUS the best SRs were 71.5% (ABUS) and 98.0% (NUS). This shows that the behaviour of the Neural User Simulator is realistic and diverse enough to train policies that can also perform very well on the Agenda-Based User Simulator.\nOf the five policies, for each US, the policy performing best on the NUS was not the best performing policy on the ABUS. This could indicate that the policy “overfits” to a particular user simulator. Overfitting usually manifests itself in worse results as the model is trained for longer. 
Five policies trained on each US for only 1000 dialogues were also evaluated, the results of which can be seen in Table TABREF19 . After training for 1000 dialogues, the average SR of the policies trained on the NUS when tested on the ABUS was 97.3% in comparison to 94.0% after 4000 dialogues. This behaviour was observed for all five seeds, which indicates that the policy indeed overfits to the NUS. For the policies trained with the ABUS this was not observed. This could indicate that the policy can learn to exploit some of the shortcomings of the trained NUS.\n\nFigure 1: General Architecture of the Neural User Simulator. The System Output is passed to the Feature Extractor. It generates a new feature vector that is appended to the Feature History, which is passed to the sequence-to-sequence model to produce the user utterance. At the start of the dialogue the Goal Generator generates a goal, which might change during the course of the dialogue.\n\nFigure 2: Sequence-To-Sequence model of the Neural User Simulator. Here, the NUS is generating the user response to the third system output. The white, light-grey and dark blocks represent the RNN encoder, a fully-connected layer and the RNN decoder respectively. The previous output of the decoder is passed to its input for the next time-step. v3:1 are the first three feature vectors (see Sec. 4.2).\n\nTable 3: As Table 2 but trained for 1000 dialogues.\n\nQuery: by how much did nus outperform abus?\n\nJSON Response:", "role": "user" }, { "content": "{\"answer\": \"Average success rate is higher by 2.6 percent points.\", \"evidence\": [\"Table TABREF18 shows the results of the cross-model evaluation after 4000 training dialogues. The policies trained with the NUS achieved an average success rate (SR) of 94.0% and of 96.6% when tested on the ABUS and the NUS, respectively. By comparison, the policies trained with the ABUS achieved average SRs of 99.5% and 45.5% respectively. Thus, training with the NUS leads to policies that can perform well on both USs, which is not the case for training with the ABUS. Furthermore, the best SRs when tested on the ABUS are similar at 99.9% (ABUS) and 99.8% (NUS). When tested on the NUS the best SRs were 71.5% (ABUS) and 98.0% (NUS). This shows that the behaviour of the Neural User Simulator is realistic and diverse enough to train policies that can also perform very well on the Agenda-Based User Simulator.\"]}", "role": "assistant" } ]
science.biored_ner
Analyze the provided biomedical abstract and identify distinct entities. Focus on the specified categories: ["CellLine", "Variant", "Species", "Disease", "Chemical", "Gene"]. Ensure the output is strictly the JSON object. Source Text: Elevation of ADAM10, ADAM17, MMP-2 and MMP-9 expression with media degeneration features CaCl2-induced thoracic aortic aneurysm in a rat model. PURPOSE: This study was designed to establish a rat model of thoracic aortic aneurysm (TAA) by calcium chloride (CaCl(2))-induced arterial injury and to explore the potential role of a disintegrin and metalloproteinase (ADAM), matrix metalloproteinases (MMPs) and their endogenous inhibitors (TIMPs) in TAA formation. METHODS: Thoracic aorta of male Sprague-Dawley rats was exposed to 0.5M CaCl(2) or normal saline (NaCl). After 12weeks, animals were euthanized, and CaCl(2)-treated, CaCl(2)-untreated (n=12) and NaCl-treated aortic segments (n=12) were collected for histological and molecular assessments. MMP-TIMP and ADAM mRNAs were semi-quantitatively analyzed and protein expressions were determined by immunohistochemistry. RESULTS: Despite similar external diameters among CaCl(2)-treated, non-CaCl(2)-treated and NaCl-treated segments, aneurymal alteration (n=6, 50%), media degeneration with regional disruption, fragmentation of elastic fiber, and increased collagen deposition (n=12, 100%) were demonstrated in CaCl(2)-treated segments. MMP-2, MMP-9, ADAM-10 and ADAM-17 mRNA levels were increased in CaCl(2)-treated segments (all p<0.01), with trends of elevation in CaCl(2)-untreated segments, as compared with NaCl-treated segments. Immunohistochemistry displayed significantly increased expressions of MMP-2, MMP-9, ADAM-10 and ADAM-17 (all p<0.01) in intima and media for CaCl(2)-treated segments. TIMP mRNA and tissue levels did not differ obviously among the three aortic segments. CONCLUSION: This study establishes a TAA model by periarterial CaCl(2) exposure in rats, and demonstrates a significant elevation of expression of MMP-2, MMP-9, ADAM10 and ADAM17 in the pathogenesis of vascular remodeling.
{"CellLine": [], "Chemical": ["CaCl2", "CaCl(2)", "NaCl", "calcium chloride"], "Disease": ["thoracic aortic aneurysm", "TAA", "arterial injury"], "Gene": ["a disintegrin and metalloproteinase", "MMPs", "ADAM-10", "ADAM-17", "ADAM", "MMP", "TIMP", "MMP-9", "ADAM10", "matrix metalloproteinases", "ADAM17", "MMP-2"], "Species": ["rats", "rat"], "Variant": []}
science.biored_ner.95
[ { "content": "Analyze the provided biomedical abstract and identify distinct entities. Focus on the specified categories: [\"CellLine\", \"Variant\", \"Species\", \"Disease\", \"Chemical\", \"Gene\"].\n\nEnsure the output is strictly the JSON object.\n\nSource Text:\n\nElevation of ADAM10, ADAM17, MMP-2 and MMP-9 expression with media degeneration features CaCl2-induced thoracic aortic aneurysm in a rat model. PURPOSE: This study was designed to establish a rat model of thoracic aortic aneurysm (TAA) by calcium chloride (CaCl(2))-induced arterial injury and to explore the potential role of a disintegrin and metalloproteinase (ADAM), matrix metalloproteinases (MMPs) and their endogenous inhibitors (TIMPs) in TAA formation. METHODS: Thoracic aorta of male Sprague-Dawley rats was exposed to 0.5M CaCl(2) or normal saline (NaCl). After 12weeks, animals were euthanized, and CaCl(2)-treated, CaCl(2)-untreated (n=12) and NaCl-treated aortic segments (n=12) were collected for histological and molecular assessments. MMP-TIMP and ADAM mRNAs were semi-quantitatively analyzed and protein expressions were determined by immunohistochemistry. RESULTS: Despite similar external diameters among CaCl(2)-treated, non-CaCl(2)-treated and NaCl-treated segments, aneurymal alteration (n=6, 50%), media degeneration with regional disruption, fragmentation of elastic fiber, and increased collagen deposition (n=12, 100%) were demonstrated in CaCl(2)-treated segments. MMP-2, MMP-9, ADAM-10 and ADAM-17 mRNA levels were increased in CaCl(2)-treated segments (all p<0.01), with trends of elevation in CaCl(2)-untreated segments, as compared with NaCl-treated segments. Immunohistochemistry displayed significantly increased expressions of MMP-2, MMP-9, ADAM-10 and ADAM-17 (all p<0.01) in intima and media for CaCl(2)-treated segments. TIMP mRNA and tissue levels did not differ obviously among the three aortic segments. CONCLUSION: This study establishes a TAA model by periarterial CaCl(2) exposure in rats, and demonstrates a significant elevation of expression of MMP-2, MMP-9, ADAM10 and ADAM17 in the pathogenesis of vascular remodeling.", "role": "user" }, { "content": "{\"CellLine\": [], \"Chemical\": [\"CaCl2\", \"CaCl(2)\", \"NaCl\", \"calcium chloride\"], \"Disease\": [\"thoracic aortic aneurysm\", \"TAA\", \"arterial injury\"], \"Gene\": [\"a disintegrin and metalloproteinase\", \"MMPs\", \"ADAM-10\", \"ADAM-17\", \"ADAM\", \"MMP\", \"TIMP\", \"MMP-9\", \"ADAM10\", \"matrix metalloproteinases\", \"ADAM17\", \"MMP-2\"], \"Species\": [\"rats\", \"rat\"], \"Variant\": []}", "role": "assistant" } ]
science.discomat_te
| Glasses | B=H/Kc (m-1/2) | P C (N) | P C R* (N) | P C (N) | P C L* (N) | | Float glass | 8500 | 0.25-0.50 | 0.30 | 1-2 | - | | SLS1 | 7500 | 0.10-0.25 | 0.70 | 0.25-0.50 | 1.18 | | SLS2 | 7350 | 0.10-025 | 0.68 | 0.25-0.50 | 1.17 | | SLS3 | 6800 | 1-2 | 1.01 | 2-3 | 1.75 | | SLS4 | 6350 | 1-2 | 1.34 | 2-3 | 2.28 | Caption: Brittleness (B), experimental and calculated radial-crack-initiation loads (P C R and P C R* respectively) and experimental and calculated lateral crack threshold loads (P C L and P C L* respectively) for the SLS glasses Examine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance: {"value": "xx", "type": "Other"} {"value": "xx", "type": "Glass_Compound_Amount", "constituent": "xx", "unit": "xx", "material": "xx"} Process each cell sequentially, and for any unknown attributes, use "xx" as a placeholder. Each JSON should be on a new line. Output:
{"value": "8500", "type": "Other"} {"value": "0.25", "type": "Other"} {"value": "0.50", "type": "Other"} {"value": "0.30", "type": "Other"} {"value": "1", "type": "Other"} {"value": "2", "type": "Other"} {"value": "7500", "type": "Other"} {"value": "0.10", "type": "Other"} {"value": "0.25", "type": "Other"} {"value": "0.70", "type": "Other"} {"value": "0.25", "type": "Other"} {"value": "0.50", "type": "Other"} {"value": "1.18", "type": "Other"} {"value": "7350", "type": "Other"} {"value": "0.10", "type": "Other"} {"value": "025", "type": "Other"} {"value": "0.68", "type": "Other"} {"value": "0.25", "type": "Other"} {"value": "0.50", "type": "Other"} {"value": "1.17", "type": "Other"} {"value": "6800", "type": "Other"} {"value": "1", "type": "Other"} {"value": "2", "type": "Other"} {"value": "1.01", "type": "Other"} {"value": "2", "type": "Other"} {"value": "3", "type": "Other"} {"value": "1.75", "type": "Other"} {"value": "6350", "type": "Other"} {"value": "1", "type": "Other"} {"value": "2", "type": "Other"} {"value": "1.34", "type": "Other"} {"value": "2", "type": "Other"} {"value": "3", "type": "Other"} {"value": "2.28", "type": "Other"}
science.discomat_te.237
[ { "content": "| Glasses | B=H/Kc (m-1/2) | P C (N) | P C R* (N) | P C (N) | P C L* (N) |\n| Float glass | 8500 | 0.25-0.50 | 0.30 | 1-2 | - |\n| SLS1 | 7500 | 0.10-0.25 | 0.70 | 0.25-0.50 | 1.18 |\n| SLS2 | 7350 | 0.10-025 | 0.68 | 0.25-0.50 | 1.17 |\n| SLS3 | 6800 | 1-2 | 1.01 | 2-3 | 1.75 |\n| SLS4 | 6350 | 1-2 | 1.34 | 2-3 | 2.28 |\n\nCaption: Brittleness (B), experimental and calculated radial-crack-initiation loads (P C R and P C R* respectively) and experimental and calculated lateral crack threshold loads (P C L and P C L* respectively) for the SLS glasses\n\nExamine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:\n{\"value\": \"xx\", \"type\": \"Other\"}\n{\"value\": \"xx\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"xx\", \"unit\": \"xx\", \"material\": \"xx\"}\n\nProcess each cell sequentially, and for any unknown attributes, use \"xx\" as a placeholder. Each JSON should be on a new line.\n\nOutput:", "role": "user" }, { "content": "{\"value\": \"8500\", \"type\": \"Other\"}\n{\"value\": \"0.25\", \"type\": \"Other\"}\n{\"value\": \"0.50\", \"type\": \"Other\"}\n{\"value\": \"0.30\", \"type\": \"Other\"}\n{\"value\": \"1\", \"type\": \"Other\"}\n{\"value\": \"2\", \"type\": \"Other\"}\n{\"value\": \"7500\", \"type\": \"Other\"}\n{\"value\": \"0.10\", \"type\": \"Other\"}\n{\"value\": \"0.25\", \"type\": \"Other\"}\n{\"value\": \"0.70\", \"type\": \"Other\"}\n{\"value\": \"0.25\", \"type\": \"Other\"}\n{\"value\": \"0.50\", \"type\": \"Other\"}\n{\"value\": \"1.18\", \"type\": \"Other\"}\n{\"value\": \"7350\", \"type\": \"Other\"}\n{\"value\": \"0.10\", \"type\": \"Other\"}\n{\"value\": \"025\", \"type\": \"Other\"}\n{\"value\": \"0.68\", \"type\": \"Other\"}\n{\"value\": \"0.25\", \"type\": \"Other\"}\n{\"value\": \"0.50\", \"type\": \"Other\"}\n{\"value\": \"1.17\", \"type\": \"Other\"}\n{\"value\": \"6800\", \"type\": \"Other\"}\n{\"value\": \"1\", \"type\": \"Other\"}\n{\"value\": \"2\", \"type\": \"Other\"}\n{\"value\": \"1.01\", \"type\": \"Other\"}\n{\"value\": \"2\", \"type\": \"Other\"}\n{\"value\": \"3\", \"type\": \"Other\"}\n{\"value\": \"1.75\", \"type\": \"Other\"}\n{\"value\": \"6350\", \"type\": \"Other\"}\n{\"value\": \"1\", \"type\": \"Other\"}\n{\"value\": \"2\", \"type\": \"Other\"}\n{\"value\": \"1.34\", \"type\": \"Other\"}\n{\"value\": \"2\", \"type\": \"Other\"}\n{\"value\": \"3\", \"type\": \"Other\"}\n{\"value\": \"2.28\", \"type\": \"Other\"}", "role": "assistant" } ]
science.discomat_te
| Composition, mol% | Composition, mol% | Composition, mol% | T g, degC | T x, degC | T p, degC | DT =(T x - T p), degC | density, gcm-3 | n | | BaO | TiO2 | SiO2 | T g, degC | T x, degC | T p, degC | DT =(T x - T p), degC | density, gcm-3 | n | | 30 | 50 | 20 | 727 | 825 | 840 | 98 | 4.38 | 1.99 | | 30 | 40 | 30 | 746 | 879 | 929 | 133 | 4.22 | 1.91 | | 25 | 35 | 40 | 745 | 888 | 940 | 143 | 3.94 | 1.84 | | 30 | 30 | 40 | 750 | 877 | 940 | 127 | 4.12 | 1.83 | | 30 | 20 | 50 | 740 | 878 | 953 | 138 | 3.93 | 1.75 | | 40 | 20 | 40 | 712 | 787 | 808 | 75 | 4.29 | 1.76 | | 33.3 | 16.7 | 50 | 747 | 885 | 970 | 138 | 4.01 | 1.74 | | 45 | 15 | 40 | 694 | 757 | 778 | 63 | 4.41 | 1.75 | | 40 | 10 | 50 | 721 | 858 | 877 | 150 | 4.17 | 1.71 | Caption: Glass composition, glass transition temperature, T g, crystallization onset temperature, T x, crystallization peak temperature, T p, DT(=T x - T g), density, and refractive index, n, of the present BaO-TiO2-SiO2 (BTS) glasses Examine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance: {"value": "xx", "type": "Other"} {"value": "xx", "type": "Glass_Compound_Amount", "constituent": "xx", "unit": "xx", "material": "xx"} Process each cell sequentially, and for any unknown attributes, use "xx" as a placeholder. Each JSON should be on a new line. Output:
{"value": "30", "type": "Glass_Compound_Amount", "constituent": "BaO", "unit": "mol", "material": "xx"} {"value": "50", "type": "Glass_Compound_Amount", "constituent": "TiO2", "unit": "mol", "material": "xx"} {"value": "20", "type": "Glass_Compound_Amount", "constituent": "SiO2", "unit": "mol", "material": "xx"} {"value": "727", "type": "Other"} {"value": "825", "type": "Other"} {"value": "840", "type": "Other"} {"value": "98", "type": "Other"} {"value": "4.38", "type": "Other"} {"value": "1.99", "type": "Other"} {"value": "30", "type": "Glass_Compound_Amount", "constituent": "BaO", "unit": "mol", "material": "xx"} {"value": "40", "type": "Glass_Compound_Amount", "constituent": "TiO2", "unit": "mol", "material": "xx"} {"value": "30", "type": "Glass_Compound_Amount", "constituent": "SiO2", "unit": "mol", "material": "xx"} {"value": "746", "type": "Other"} {"value": "879", "type": "Other"} {"value": "929", "type": "Other"} {"value": "133", "type": "Other"} {"value": "4.22", "type": "Other"} {"value": "1.91", "type": "Other"} {"value": "25", "type": "Glass_Compound_Amount", "constituent": "BaO", "unit": "mol", "material": "xx"} {"value": "35", "type": "Glass_Compound_Amount", "constituent": "TiO2", "unit": "mol", "material": "xx"} {"value": "40", "type": "Glass_Compound_Amount", "constituent": "SiO2", "unit": "mol", "material": "xx"} {"value": "745", "type": "Other"} {"value": "888", "type": "Other"} {"value": "940", "type": "Other"} {"value": "143", "type": "Other"} {"value": "3.94", "type": "Other"} {"value": "1.84", "type": "Other"} {"value": "30", "type": "Glass_Compound_Amount", "constituent": "BaO", "unit": "mol", "material": "xx"} {"value": "30", "type": "Glass_Compound_Amount", "constituent": "TiO2", "unit": "mol", "material": "xx"} {"value": "40", "type": "Glass_Compound_Amount", "constituent": "SiO2", "unit": "mol", "material": "xx"} {"value": "750", "type": "Other"} {"value": "877", "type": "Other"} {"value": "940", "type": "Other"} {"value": "127", "type": "Other"} {"value": "4.12", "type": "Other"} {"value": "1.83", "type": "Other"} {"value": "30", "type": "Glass_Compound_Amount", "constituent": "BaO", "unit": "mol", "material": "xx"} {"value": "20", "type": "Glass_Compound_Amount", "constituent": "TiO2", "unit": "mol", "material": "xx"} {"value": "50", "type": "Glass_Compound_Amount", "constituent": "SiO2", "unit": "mol", "material": "xx"} {"value": "740", "type": "Other"} {"value": "878", "type": "Other"} {"value": "953", "type": "Other"} {"value": "138", "type": "Other"} {"value": "3.93", "type": "Other"} {"value": "1.75", "type": "Other"} {"value": "40", "type": "Glass_Compound_Amount", "constituent": "BaO", "unit": "mol", "material": "xx"} {"value": "20", "type": "Glass_Compound_Amount", "constituent": "TiO2", "unit": "mol", "material": "xx"} {"value": "40", "type": "Glass_Compound_Amount", "constituent": "SiO2", "unit": "mol", "material": "xx"} {"value": "712", "type": "Other"} {"value": "787", "type": "Other"} {"value": "808", "type": "Other"} {"value": "75", "type": "Other"} {"value": "4.29", "type": "Other"} {"value": "1.76", "type": "Other"} {"value": "33.3", "type": "Glass_Compound_Amount", "constituent": "BaO", "unit": "mol", "material": "xx"} {"value": "16.7", "type": "Glass_Compound_Amount", "constituent": "TiO2", "unit": "mol", "material": "xx"} {"value": "50", "type": "Glass_Compound_Amount", "constituent": "SiO2", "unit": "mol", "material": "xx"} {"value": "747", "type": "Other"} {"value": "885", "type": "Other"} {"value": "970", "type": "Other"} {"value": "138", 
"type": "Other"} {"value": "4.01", "type": "Other"} {"value": "1.74", "type": "Other"} {"value": "45", "type": "Glass_Compound_Amount", "constituent": "BaO", "unit": "mol", "material": "xx"} {"value": "15", "type": "Glass_Compound_Amount", "constituent": "TiO2", "unit": "mol", "material": "xx"} {"value": "40", "type": "Glass_Compound_Amount", "constituent": "SiO2", "unit": "mol", "material": "xx"} {"value": "694", "type": "Other"} {"value": "757", "type": "Other"} {"value": "778", "type": "Other"} {"value": "63", "type": "Other"} {"value": "4.41", "type": "Other"} {"value": "1.75", "type": "Other"} {"value": "40", "type": "Glass_Compound_Amount", "constituent": "BaO", "unit": "mol", "material": "xx"} {"value": "10", "type": "Glass_Compound_Amount", "constituent": "TiO2", "unit": "mol", "material": "xx"} {"value": "50", "type": "Glass_Compound_Amount", "constituent": "SiO2", "unit": "mol", "material": "xx"} {"value": "721", "type": "Other"} {"value": "858", "type": "Other"} {"value": "877", "type": "Other"} {"value": "150", "type": "Other"} {"value": "4.17", "type": "Other"} {"value": "1.71", "type": "Other"}
science.discomat_te.190
[ { "content": "| Composition, mol% | Composition, mol% | Composition, mol% | T g, degC | T x, degC | T p, degC | DT =(T x - T p), degC | density, gcm-3 | n |\n| BaO | TiO2 | SiO2 | T g, degC | T x, degC | T p, degC | DT =(T x - T p), degC | density, gcm-3 | n |\n| 30 | 50 | 20 | 727 | 825 | 840 | 98 | 4.38 | 1.99 |\n| 30 | 40 | 30 | 746 | 879 | 929 | 133 | 4.22 | 1.91 |\n| 25 | 35 | 40 | 745 | 888 | 940 | 143 | 3.94 | 1.84 |\n| 30 | 30 | 40 | 750 | 877 | 940 | 127 | 4.12 | 1.83 |\n| 30 | 20 | 50 | 740 | 878 | 953 | 138 | 3.93 | 1.75 |\n| 40 | 20 | 40 | 712 | 787 | 808 | 75 | 4.29 | 1.76 |\n| 33.3 | 16.7 | 50 | 747 | 885 | 970 | 138 | 4.01 | 1.74 |\n| 45 | 15 | 40 | 694 | 757 | 778 | 63 | 4.41 | 1.75 |\n| 40 | 10 | 50 | 721 | 858 | 877 | 150 | 4.17 | 1.71 |\n\nCaption: Glass composition, glass transition temperature, T g, crystallization onset temperature, T x, crystallization peak temperature, T p, DT(=T x - T g), density, and refractive index, n, of the present BaO-TiO2-SiO2 (BTS) glasses\n\nExamine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:\n{\"value\": \"xx\", \"type\": \"Other\"}\n{\"value\": \"xx\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"xx\", \"unit\": \"xx\", \"material\": \"xx\"}\n\nProcess each cell sequentially, and for any unknown attributes, use \"xx\" as a placeholder. Each JSON should be on a new line.\n\nOutput:", "role": "user" }, { "content": "{\"value\": \"30\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"BaO\", \"unit\": \"mol\", \"material\": \"xx\"}\n{\"value\": \"50\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"TiO2\", \"unit\": \"mol\", \"material\": \"xx\"}\n{\"value\": \"20\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"SiO2\", \"unit\": \"mol\", \"material\": \"xx\"}\n{\"value\": \"727\", \"type\": \"Other\"}\n{\"value\": \"825\", \"type\": \"Other\"}\n{\"value\": \"840\", \"type\": \"Other\"}\n{\"value\": \"98\", \"type\": \"Other\"}\n{\"value\": \"4.38\", \"type\": \"Other\"}\n{\"value\": \"1.99\", \"type\": \"Other\"}\n{\"value\": \"30\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"BaO\", \"unit\": \"mol\", \"material\": \"xx\"}\n{\"value\": \"40\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"TiO2\", \"unit\": \"mol\", \"material\": \"xx\"}\n{\"value\": \"30\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"SiO2\", \"unit\": \"mol\", \"material\": \"xx\"}\n{\"value\": \"746\", \"type\": \"Other\"}\n{\"value\": \"879\", \"type\": \"Other\"}\n{\"value\": \"929\", \"type\": \"Other\"}\n{\"value\": \"133\", \"type\": \"Other\"}\n{\"value\": \"4.22\", \"type\": \"Other\"}\n{\"value\": \"1.91\", \"type\": \"Other\"}\n{\"value\": \"25\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"BaO\", \"unit\": \"mol\", \"material\": \"xx\"}\n{\"value\": \"35\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"TiO2\", \"unit\": \"mol\", \"material\": \"xx\"}\n{\"value\": \"40\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"SiO2\", \"unit\": \"mol\", \"material\": \"xx\"}\n{\"value\": \"745\", \"type\": \"Other\"}\n{\"value\": \"888\", \"type\": \"Other\"}\n{\"value\": \"940\", \"type\": \"Other\"}\n{\"value\": \"143\", \"type\": \"Other\"}\n{\"value\": \"3.94\", \"type\": \"Other\"}\n{\"value\": \"1.84\", \"type\": \"Other\"}\n{\"value\": \"30\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"BaO\", \"unit\": \"mol\", \"material\": 
\"xx\"}\n{\"value\": \"30\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"TiO2\", \"unit\": \"mol\", \"material\": \"xx\"}\n{\"value\": \"40\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"SiO2\", \"unit\": \"mol\", \"material\": \"xx\"}\n{\"value\": \"750\", \"type\": \"Other\"}\n{\"value\": \"877\", \"type\": \"Other\"}\n{\"value\": \"940\", \"type\": \"Other\"}\n{\"value\": \"127\", \"type\": \"Other\"}\n{\"value\": \"4.12\", \"type\": \"Other\"}\n{\"value\": \"1.83\", \"type\": \"Other\"}\n{\"value\": \"30\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"BaO\", \"unit\": \"mol\", \"material\": \"xx\"}\n{\"value\": \"20\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"TiO2\", \"unit\": \"mol\", \"material\": \"xx\"}\n{\"value\": \"50\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"SiO2\", \"unit\": \"mol\", \"material\": \"xx\"}\n{\"value\": \"740\", \"type\": \"Other\"}\n{\"value\": \"878\", \"type\": \"Other\"}\n{\"value\": \"953\", \"type\": \"Other\"}\n{\"value\": \"138\", \"type\": \"Other\"}\n{\"value\": \"3.93\", \"type\": \"Other\"}\n{\"value\": \"1.75\", \"type\": \"Other\"}\n{\"value\": \"40\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"BaO\", \"unit\": \"mol\", \"material\": \"xx\"}\n{\"value\": \"20\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"TiO2\", \"unit\": \"mol\", \"material\": \"xx\"}\n{\"value\": \"40\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"SiO2\", \"unit\": \"mol\", \"material\": \"xx\"}\n{\"value\": \"712\", \"type\": \"Other\"}\n{\"value\": \"787\", \"type\": \"Other\"}\n{\"value\": \"808\", \"type\": \"Other\"}\n{\"value\": \"75\", \"type\": \"Other\"}\n{\"value\": \"4.29\", \"type\": \"Other\"}\n{\"value\": \"1.76\", \"type\": \"Other\"}\n{\"value\": \"33.3\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"BaO\", \"unit\": \"mol\", \"material\": \"xx\"}\n{\"value\": \"16.7\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"TiO2\", \"unit\": \"mol\", \"material\": \"xx\"}\n{\"value\": \"50\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"SiO2\", \"unit\": \"mol\", \"material\": \"xx\"}\n{\"value\": \"747\", \"type\": \"Other\"}\n{\"value\": \"885\", \"type\": \"Other\"}\n{\"value\": \"970\", \"type\": \"Other\"}\n{\"value\": \"138\", \"type\": \"Other\"}\n{\"value\": \"4.01\", \"type\": \"Other\"}\n{\"value\": \"1.74\", \"type\": \"Other\"}\n{\"value\": \"45\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"BaO\", \"unit\": \"mol\", \"material\": \"xx\"}\n{\"value\": \"15\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"TiO2\", \"unit\": \"mol\", \"material\": \"xx\"}\n{\"value\": \"40\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"SiO2\", \"unit\": \"mol\", \"material\": \"xx\"}\n{\"value\": \"694\", \"type\": \"Other\"}\n{\"value\": \"757\", \"type\": \"Other\"}\n{\"value\": \"778\", \"type\": \"Other\"}\n{\"value\": \"63\", \"type\": \"Other\"}\n{\"value\": \"4.41\", \"type\": \"Other\"}\n{\"value\": \"1.75\", \"type\": \"Other\"}\n{\"value\": \"40\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"BaO\", \"unit\": \"mol\", \"material\": \"xx\"}\n{\"value\": \"10\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"TiO2\", \"unit\": \"mol\", \"material\": \"xx\"}\n{\"value\": \"50\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"SiO2\", \"unit\": \"mol\", \"material\": \"xx\"}\n{\"value\": \"721\", \"type\": \"Other\"}\n{\"value\": \"858\", \"type\": \"Other\"}\n{\"value\": \"877\", 
\"type\": \"Other\"}\n{\"value\": \"150\", \"type\": \"Other\"}\n{\"value\": \"4.17\", \"type\": \"Other\"}\n{\"value\": \"1.71\", \"type\": \"Other\"}", "role": "assistant" } ]
science.mup_single_document_summarization
Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise. Title of the Paper: Genetic Algorithm for Constrained Molecular Inverse Design Main Content: 1 INTRODUCTION . When the search space is complex or partially known , it is difficult to optimize current solutions using gradient descent ( Jin & Ha , 1997 ; Ahmad et al. , 2010 ) . Because genetic algorithms find approximate solutions , they are effective in exploring vast unknown space such as molecular search space . ( Holland , 1992 ; Jin & Ha , 1997 ; Ahmad et al. , 2010 ; Henault et al. , 2020 ) . In drug discovery , optimizing complex pharmacological properties using genetic algorithms have been widely studied and extended ( Leardi , 2001 ) . Deep learning-based molecular design is being actively conducted to improve pharmacological properties and has served as a powerful tool ( Gómez-Bombarelli et al. , 2018 ) . However , inferred drug candidates depending on the architecture did not satisfy the heuristic guidelines set by chemical researchers ( George & Hautier , 2020 ) . Furthermore , this not only produces nonsynthesizable molecules but also causes optimization problems for non-linear structure-activity relationships ( Gómez-Bombarelli et al. , 2018 ; Vanhaelen et al. , 2020 ) . Most molecular design methods first generate molecular structures and then calculate properties of the structures resulting in high computational cost ( Duvenaud et al. , 2015 ; Gilmer et al. , 2017 ; Feinberg et al. , 2018 ; Yang et al. , 2019 ) . Conversely , inverse molecular design specifies target properties in advance and then systematically explores the chemical space to discover molecular structures that have the desired properties ( Sanchez-Lengeling & Aspuru-Guzik , 2018 ) . Recent advances in molecular inverse design of genetic algorithms compete with or even surpass deep learning-based methods ( Yoshikawa et al. , 2018 ; Jensen , 2019 ; Nigam et al. , 2019 ; Polishchuk , 2020 ; Ahn et al. , 2020 ; Nigam et al. , 2021b ; a ) . This indicates that the computational methods of genetic algorithms in a chemical domain are more effective for exploring the vast chemical space ( Ahn et al. , 2020 ) . However , the genetic operators cause difficulty in lead optimization where the molecular structure is constrained ( Hasançebi & Erbatur , 2000 ) . A common molecular design strategy is to narrow the chemical search space , starting with known potential molecules ( Lim et al. , 2020 ) . The scaffold , which is the ” core ” of the molecule is intentionally maintained to preserve basic bioactivity ( Hu et al. , 2016 ) . This is directly involved in the interaction with the target protein ( Zhao & Dietrich , 2015 ; Lim et al. , 2020 ) . In lead identification process , scaffolds with biological activity to the target protein are identified ( Zhao & Dietrich , 2015 ) . In lead optimization process , it is important to optimize the SAR ( Structure-Activity Relationship ) properties while staying in the chemical space associated with the privileged scaffold ( Langevin et al. , 2020 ) . Lead optimization can be described as a multiple optimization problem in scaffold constraints ( Langevin et al. , 2020 ) . Most studies have focused on the application of generative models to the field of medicinal chemistry , and studies related to structure-constrained lead optimization have not been extensively explored . 
In this regard , we introduce a genetic algorithm featuring constrained molecular inverse design . We use graph and SELFIES descriptors to generate valid molecules through a two-phase GA-based pipeline . In addition , we introduced a two-phase optimization to ensure that molecular generation does not fail to optimize for specific properties while adhering to structural constraints . 2 RELATED WORK . Among the various methods for constrained optimization in a genetic algorithm , the basic one is designing effective penalty functions ( Yeniay , 2005 ) . The functions , which impose a penalty to fitness value , are widely used for constrained optimization ( Yeniay , 2005 ; Fletcher , 2013 ) . The individual fitness value is determined by combining the objective function value with the constraint violation penalty . A dominant relationship exists between the constraint penalty and the objective function value ( Coello , 2000 ) . One of the various methods is focusing on the selection of practicable solutions . To solve the problem of constrained optimization using a genetic algorithm , a two-phase framework which is MultiObjective Evolutionary Algorithm ( MOEA ) was introduced ( Venkatraman & Yen , 2005 ) . In the first phase , MoEA confirms the constraint satisfaction of solutions using a penalty function . The algorithm ranks solutions based on violations of constraints , completely disregarding objective function . When one or more feasible solutions are identified , the second phase continues . In the second phase , individual fitness is reassigned according to the objective function-constraint violation space and bi-objective optimization is performed ( Venkatraman & Yen , 2005 ) . Structure-constrained molecular optimization work was first presented in the JT-VAE ( Jin et al. , 2018a ) study . In this study , they create a tree-structured scaffold for constraining the structure of a molecule and then generate a molecular graph using a message-passing network ( Jin et al. , 2018a ) . GCPN ( You et al. , 2018 ) uses goal-directed reinforcement learning to generate molecules with desired properties and similarities . VJTNN ( Jin et al. , 2018b ) treated molecular optimization as a graph-to-graph translation problem and solved it through MMPA ( Matched Molecular Pair Analysis ) ( Dalke et al. , 2018 ) dataset . DEFactor ( Assouel et al. , 2018 ) generated molecules with optimized penalized LogP while maintaining molecular similarity through differentiable conditional probability-based graph VAE . Glen & Payne proposed a method for generating molecules through domain-specific rule-based crossover and mutation to ensure molecular structure constraints ( Glen & Payne , 1995 ) . To produce offspring molecules with the parent ’ s substructures , the crossover is used as a strategy to cut the end part of the molecule and link it to the end part of another similar molecule . To produce new molecules , twelve operators of mutation were defined : atomic insertion , atomic deletion , etc ( Glen & Payne , 1995 ) . ChemGE ( Yoshikawa et al. , 2018 ) uses the grammatical evolution of SMILES ( Simplified Molecular-Input Line-Entry System ) ( Weininger , 1988 ) to optimize penalized LogP and KITH protein inhibitors . Their study converts SMILES into integer sequences using a chromosomal mapping process ( Yoshikawa et al. , 2018 ) . Subsequently , operators of mutation are used to optimize grammatical evolutionary molecular populations . 
GA-GB ( Jensen , 2019 ) and MolFinder ( Kwon & Lee , 2021 ) defined expert rules for operators of crossover and mutation to ensure the validity of the structure when generating molecules . GEGL ( Ahn et al. , 2020 ) created optimized SMILES strings for penalized LogP by presenting a reinforcement learning contained expert policy . Various studies have proposed SMILES-based approaches since they are convenient to convert a complex 3D chemical structure to a simple 1D string ( Yoshikawa et al. , 2018 ; Kwon & Lee , 2021 ; Ahn et al. , 2020 ) . However , adding atoms or square brackets to a string indeed changes the structure globally not locally ( Dalke , 2018 ) . It is difficult to generate a completely valid molecule because SMILES are context sensitive ( Dalke , 2018 ; Kwon & Lee , 2021 ) . Recently , the development and application of SELFIES ( Self-Referencing Embedded Strings ) , which is a 100 % valid string representation , has been implemented in the molecular inverse design to cope with this problem ( Krenn et al. , 2020 ) . The GA-D ( Nigam et al. , 2019 ) generated SELFIES strings using a random operator of mutation while maintaining validity . Furthermore , this study maintained the molecular diversity of the population through an adaptive penalty of the deep neural discriminator ( Nigam et al. , 2019 ) . The STONED ( Nigam et al. , 2021b ) study was introduced for performing local chemical space search and molecular interpolation using SELFIES . Furthermore , the mutation site was limited to terminal 10 % to maintain the molecular scaffold ( Nigam et al. , 2021b ) . SELFIES makes it possible to generate new molecules through the random operation without relying on expert rules in operations of mutation ( Nigam et al. , 2021b ) . 3 PROPOSED METHOD . In this section , we introduce a novel genetic algorithm for constrained optimization in molecular inverse design . Our algorithm generates molecules suitable for the target properties while constraining the structural similarity of a target molecule . We use two strategies to satisfy the constraint conditions . First , our algorithm constructs a population that always satisfies the structural similarity condition in the first phase . Second , the algorithm selects the appropriate molecular descriptors according to the genetic operators to ensure the validity of the molecule . The detailed whole process is shown in figure 1 . The algorithm starts by constructing a population depending on whether the target molecule or similar molecules exist in the dataset . In the case of the target molecule or its similar molecule existing in the dataset , the population consists of molecules above the similarity calculated by the similarity between the molecules in the dataset and the target molecule . On the contrary , a target molecular SMILES is randomly arranged to construct a population according to the population size . Next , our algorithm confirms constraint satisfaction from the population and proceeds with bioptimization for property and constraint conditions . The detailed process is explained in the subsections below . 3.1 TWO-PHASE MOLECULAR OPTIMIZATION . In the work of constrained molecular optimization , the optimization often failed when a large number of molecules were subject to penalty included in the population . Therefore , we focused on ensuring that the population always contains only feasible solutions . 
Inspired by MOEA ( Venkatraman & Yen , 2005 ) , we divided the process into constraint satisfaction and bi-optimization , each of which is further explained in the following . Constraint Satisfaction for Population In this phase , the algorithm searches for feasible solutions considering only the structural similarity . The objective function is not considered in this process . The algorithm measures the Tanimoto coefficient of how similar it is to the reference molecule ( Nigam et al. , 2019 ) . The similarity between the target molecule m and the product molecule m′ is expressed as sim ( m , m′ ) . SimilarityPenalty ( m ) is given to each molecule according to as follows : SimilarityPenalty ( m ) = { 0 , sim ( m , m′ ) ≥ δ −106 , sim ( m , m′ ) < δ ( 1 ) SimilarityPenalty ( m ) is set to 0 if sim ( m , m′ ) ≥ δ ( Nigam et al. , 2019 ) . Otherwise , a death penalty of −106 is given . Next , the algorithm assigns fitness value to each individual by equation 2 and selects only feasible solutions . A fitness function is expressed as follows : F ( m ) = SimilarityPenalty ( m ) ( 2 ) Bi-Optimization for Property and Constraint Condition In this phase , the algorithm switches to bi-optimization for the fitness function value and the constraint condition when one or more feasible solutions are found . Our work simultaneously is to maximize the penalized LogP value while retaining the molecular structure . The penalized LogP ( J ( m ) ) of each molecule is expressed as follows : J ( m ) = LopP ( m ) − SAScore ( m ) −RingPenalty ( m ) ( 3 ) The higher score for J ( m ) has a more suitable structural profile as a drug ( Yoshikawa et al. , 2018 ) . LogP ( m ) is the octanol-water partition coefficient for the molecule m. SAScore ( m ) is the Synthetic Accessibility score , which is a quantitative score for whether a molecular structure can be synthesized ( Ertl & Schuffenhauer , 2009 ) . The higher the score , the more difficult it is to synthesize molecules . By giving a penalty according to the score , it is possible to prevent the generation of molecules that can not be synthesized.RingPenalty ( m ) is used to give a penalty to prevent the creation of molecules with unrealistically many carbon rings . This function is a penalty for molecules with seven or more rings of carbon ( Yoshikawa et al. , 2018 ) . The algorithm reassigns the fitness value to each individual according to equation 4 and sorted by rank . A fitness function can be represented as follows : F ( m ) = J ( m ) − SimilarityPenalty ( m ) ( 4 ) We wanted not only to preserve the superior individuals but also to replace the inferior individuals through reproduction . A portion of the generated molecules are replaced while the rest of the are kept through the selection module . The probability of replacing a molecule is determined using a smooth logistic function based on a ranking of fitness ( Nigam et al. , 2019 ) .
The authors proposed a genetic algorithm for molecule generation. The reward is a combination of several objectives like Eqn.(3). The genetic operations include crossover and mutation. The authors conduct experiments to optimize the LogP of molecules.
science.mup_single_document_summarization.7251
[ { "content": "Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.\n\nTitle of the Paper:\nGenetic Algorithm for Constrained Molecular Inverse Design\n\nMain Content:\n1 INTRODUCTION . When the search space is complex or partially known , it is difficult to optimize current solutions using gradient descent ( Jin & Ha , 1997 ; Ahmad et al. , 2010 ) . Because genetic algorithms find approximate solutions , they are effective in exploring vast unknown space such as molecular search space . ( Holland , 1992 ; Jin & Ha , 1997 ; Ahmad et al. , 2010 ; Henault et al. , 2020 ) . In drug discovery , optimizing complex pharmacological properties using genetic algorithms have been widely studied and extended ( Leardi , 2001 ) . Deep learning-based molecular design is being actively conducted to improve pharmacological properties and has served as a powerful tool ( Gómez-Bombarelli et al. , 2018 ) . However , inferred drug candidates depending on the architecture did not satisfy the heuristic guidelines set by chemical researchers ( George & Hautier , 2020 ) . Furthermore , this not only produces nonsynthesizable molecules but also causes optimization problems for non-linear structure-activity relationships ( Gómez-Bombarelli et al. , 2018 ; Vanhaelen et al. , 2020 ) . Most molecular design methods first generate molecular structures and then calculate properties of the structures resulting in high computational cost ( Duvenaud et al. , 2015 ; Gilmer et al. , 2017 ; Feinberg et al. , 2018 ; Yang et al. , 2019 ) . Conversely , inverse molecular design specifies target properties in advance and then systematically explores the chemical space to discover molecular structures that have the desired properties ( Sanchez-Lengeling & Aspuru-Guzik , 2018 ) . Recent advances in molecular inverse design of genetic algorithms compete with or even surpass deep learning-based methods ( Yoshikawa et al. , 2018 ; Jensen , 2019 ; Nigam et al. , 2019 ; Polishchuk , 2020 ; Ahn et al. , 2020 ; Nigam et al. , 2021b ; a ) . This indicates that the computational methods of genetic algorithms in a chemical domain are more effective for exploring the vast chemical space ( Ahn et al. , 2020 ) . However , the genetic operators cause difficulty in lead optimization where the molecular structure is constrained ( Hasançebi & Erbatur , 2000 ) . A common molecular design strategy is to narrow the chemical search space , starting with known potential molecules ( Lim et al. , 2020 ) . The scaffold , which is the ” core ” of the molecule is intentionally maintained to preserve basic bioactivity ( Hu et al. , 2016 ) . This is directly involved in the interaction with the target protein ( Zhao & Dietrich , 2015 ; Lim et al. , 2020 ) . In lead identification process , scaffolds with biological activity to the target protein are identified ( Zhao & Dietrich , 2015 ) . In lead optimization process , it is important to optimize the SAR ( Structure-Activity Relationship ) properties while staying in the chemical space associated with the privileged scaffold ( Langevin et al. , 2020 ) . Lead optimization can be described as a multiple optimization problem in scaffold constraints ( Langevin et al. , 2020 ) . Most studies have focused on the application of generative models to the field of medicinal chemistry , and studies related to structure-constrained lead optimization have not been extensively explored . 
In this regard , we introduce a genetic algorithm featuring constrained molecular inverse design . We use graph and SELFIES descriptors to generate valid molecules through a two-phase GA-based pipeline . In addition , we introduced a two-phase optimization to ensure that molecular generation does not fail to optimize for specific properties while adhering to structural constraints . 2 RELATED WORK . Among the various methods for constrained optimization in a genetic algorithm , the basic one is designing effective penalty functions ( Yeniay , 2005 ) . The functions , which impose a penalty to fitness value , are widely used for constrained optimization ( Yeniay , 2005 ; Fletcher , 2013 ) . The individual fitness value is determined by combining the objective function value with the constraint violation penalty . A dominant relationship exists between the constraint penalty and the objective function value ( Coello , 2000 ) . One of the various methods is focusing on the selection of practicable solutions . To solve the problem of constrained optimization using a genetic algorithm , a two-phase framework which is MultiObjective Evolutionary Algorithm ( MOEA ) was introduced ( Venkatraman & Yen , 2005 ) . In the first phase , MoEA confirms the constraint satisfaction of solutions using a penalty function . The algorithm ranks solutions based on violations of constraints , completely disregarding objective function . When one or more feasible solutions are identified , the second phase continues . In the second phase , individual fitness is reassigned according to the objective function-constraint violation space and bi-objective optimization is performed ( Venkatraman & Yen , 2005 ) . Structure-constrained molecular optimization work was first presented in the JT-VAE ( Jin et al. , 2018a ) study . In this study , they create a tree-structured scaffold for constraining the structure of a molecule and then generate a molecular graph using a message-passing network ( Jin et al. , 2018a ) . GCPN ( You et al. , 2018 ) uses goal-directed reinforcement learning to generate molecules with desired properties and similarities . VJTNN ( Jin et al. , 2018b ) treated molecular optimization as a graph-to-graph translation problem and solved it through MMPA ( Matched Molecular Pair Analysis ) ( Dalke et al. , 2018 ) dataset . DEFactor ( Assouel et al. , 2018 ) generated molecules with optimized penalized LogP while maintaining molecular similarity through differentiable conditional probability-based graph VAE . Glen & Payne proposed a method for generating molecules through domain-specific rule-based crossover and mutation to ensure molecular structure constraints ( Glen & Payne , 1995 ) . To produce offspring molecules with the parent ’ s substructures , the crossover is used as a strategy to cut the end part of the molecule and link it to the end part of another similar molecule . To produce new molecules , twelve operators of mutation were defined : atomic insertion , atomic deletion , etc ( Glen & Payne , 1995 ) . ChemGE ( Yoshikawa et al. , 2018 ) uses the grammatical evolution of SMILES ( Simplified Molecular-Input Line-Entry System ) ( Weininger , 1988 ) to optimize penalized LogP and KITH protein inhibitors . Their study converts SMILES into integer sequences using a chromosomal mapping process ( Yoshikawa et al. , 2018 ) . Subsequently , operators of mutation are used to optimize grammatical evolutionary molecular populations . 
GA-GB ( Jensen , 2019 ) and MolFinder ( Kwon & Lee , 2021 ) defined expert rules for operators of crossover and mutation to ensure the validity of the structure when generating molecules . GEGL ( Ahn et al. , 2020 ) created optimized SMILES strings for penalized LogP by presenting a reinforcement learning contained expert policy . Various studies have proposed SMILES-based approaches since they are convenient to convert a complex 3D chemical structure to a simple 1D string ( Yoshikawa et al. , 2018 ; Kwon & Lee , 2021 ; Ahn et al. , 2020 ) . However , adding atoms or square brackets to a string indeed changes the structure globally not locally ( Dalke , 2018 ) . It is difficult to generate a completely valid molecule because SMILES are context sensitive ( Dalke , 2018 ; Kwon & Lee , 2021 ) . Recently , the development and application of SELFIES ( Self-Referencing Embedded Strings ) , which is a 100 % valid string representation , has been implemented in the molecular inverse design to cope with this problem ( Krenn et al. , 2020 ) . The GA-D ( Nigam et al. , 2019 ) generated SELFIES strings using a random operator of mutation while maintaining validity . Furthermore , this study maintained the molecular diversity of the population through an adaptive penalty of the deep neural discriminator ( Nigam et al. , 2019 ) . The STONED ( Nigam et al. , 2021b ) study was introduced for performing local chemical space search and molecular interpolation using SELFIES . Furthermore , the mutation site was limited to terminal 10 % to maintain the molecular scaffold ( Nigam et al. , 2021b ) . SELFIES makes it possible to generate new molecules through the random operation without relying on expert rules in operations of mutation ( Nigam et al. , 2021b ) . 3 PROPOSED METHOD . In this section , we introduce a novel genetic algorithm for constrained optimization in molecular inverse design . Our algorithm generates molecules suitable for the target properties while constraining the structural similarity of a target molecule . We use two strategies to satisfy the constraint conditions . First , our algorithm constructs a population that always satisfies the structural similarity condition in the first phase . Second , the algorithm selects the appropriate molecular descriptors according to the genetic operators to ensure the validity of the molecule . The detailed whole process is shown in figure 1 . The algorithm starts by constructing a population depending on whether the target molecule or similar molecules exist in the dataset . In the case of the target molecule or its similar molecule existing in the dataset , the population consists of molecules above the similarity calculated by the similarity between the molecules in the dataset and the target molecule . On the contrary , a target molecular SMILES is randomly arranged to construct a population according to the population size . Next , our algorithm confirms constraint satisfaction from the population and proceeds with bioptimization for property and constraint conditions . The detailed process is explained in the subsections below . 3.1 TWO-PHASE MOLECULAR OPTIMIZATION . In the work of constrained molecular optimization , the optimization often failed when a large number of molecules were subject to penalty included in the population . Therefore , we focused on ensuring that the population always contains only feasible solutions . 
Inspired by MOEA ( Venkatraman & Yen , 2005 ) , we divided the process into constraint satisfaction and bi-optimization , each of which is further explained in the following . Constraint Satisfaction for Population In this phase , the algorithm searches for feasible solutions considering only the structural similarity . The objective function is not considered in this process . The algorithm measures the Tanimoto coefficient of how similar it is to the reference molecule ( Nigam et al. , 2019 ) . The similarity between the target molecule m and the product molecule m′ is expressed as sim ( m , m′ ) . SimilarityPenalty ( m ) is given to each molecule according to as follows : SimilarityPenalty ( m ) = { 0 , sim ( m , m′ ) ≥ δ −106 , sim ( m , m′ ) < δ ( 1 ) SimilarityPenalty ( m ) is set to 0 if sim ( m , m′ ) ≥ δ ( Nigam et al. , 2019 ) . Otherwise , a death penalty of −106 is given . Next , the algorithm assigns fitness value to each individual by equation 2 and selects only feasible solutions . A fitness function is expressed as follows : F ( m ) = SimilarityPenalty ( m ) ( 2 ) Bi-Optimization for Property and Constraint Condition In this phase , the algorithm switches to bi-optimization for the fitness function value and the constraint condition when one or more feasible solutions are found . Our work simultaneously is to maximize the penalized LogP value while retaining the molecular structure . The penalized LogP ( J ( m ) ) of each molecule is expressed as follows : J ( m ) = LopP ( m ) − SAScore ( m ) −RingPenalty ( m ) ( 3 ) The higher score for J ( m ) has a more suitable structural profile as a drug ( Yoshikawa et al. , 2018 ) . LogP ( m ) is the octanol-water partition coefficient for the molecule m. SAScore ( m ) is the Synthetic Accessibility score , which is a quantitative score for whether a molecular structure can be synthesized ( Ertl & Schuffenhauer , 2009 ) . The higher the score , the more difficult it is to synthesize molecules . By giving a penalty according to the score , it is possible to prevent the generation of molecules that can not be synthesized.RingPenalty ( m ) is used to give a penalty to prevent the creation of molecules with unrealistically many carbon rings . This function is a penalty for molecules with seven or more rings of carbon ( Yoshikawa et al. , 2018 ) . The algorithm reassigns the fitness value to each individual according to equation 4 and sorted by rank . A fitness function can be represented as follows : F ( m ) = J ( m ) − SimilarityPenalty ( m ) ( 4 ) We wanted not only to preserve the superior individuals but also to replace the inferior individuals through reproduction . A portion of the generated molecules are replaced while the rest of the are kept through the selection module . The probability of replacing a molecule is determined using a smooth logistic function based on a ranking of fitness ( Nigam et al. , 2019 ) .", "role": "user" }, { "content": "The authors proposed a genetic algorithm for molecule generation. The reward is a combination of several objectives like Eqn.(3). The genetic operations include crossover and mutation. The authors conduct experiments to optimize the LogP of molecules.", "role": "assistant" } ]
science.scifact_entailment
Determine the relationship between them and generate a JSON response. Scientific Claim: Propriospinal interneurons that play a role in the plastic reorganization of spinal circuits are integral for recovery from spinal cord injury. Research Abstract: Spinal cord injuries (SCIs) in humans and experimental animals are often associated with varying degrees of spontaneous functional recovery during the first months after injury. Such recovery is widely attributed to axons spared from injury that descend from the brain and bypass incomplete lesions, but its mechanisms are uncertain. To investigate the neural basis of spontaneous recovery, we used kinematic, physiological and anatomical analyses to evaluate mice with various combinations of spatially and temporally separated lateral hemisections with or without the excitotoxic ablation of intrinsic spinal cord neurons. We show that propriospinal relay connections that bypass one or more injury sites are able to mediate spontaneous functional recovery and supraspinal control of stepping, even when there has been essentially total and irreversible interruption of long descending supraspinal pathways in mice. Our findings show that pronounced functional recovery can occur after severe SCI without the maintenance or regeneration of direct projections from the brain past the lesion and can be mediated by the reorganization of descending and propriospinal connections. Targeting interventions toward augmenting the remodeling of relay connections may provide new therapeutic strategies to bypass lesions and restore function after SCI and in other conditions such as stroke and multiple sclerosis.
{"verdict": "SUPPORT", "evidence": ["We show that propriospinal relay connections that bypass one or more injury sites are able to mediate spontaneous functional recovery and supraspinal control of stepping, even when there has been essentially total and irreversible interruption of long descending supraspinal pathways in mice.", "Our findings show that pronounced functional recovery can occur after severe SCI without the maintenance or regeneration of direct projections from the brain past the lesion and can be mediated by the reorganization of descending and propriospinal connections."]}
science.scifact_entailment.636
science.evidence_inference
Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as: - Treatment: The specific medical procedure or drug being evaluated. - Control: The standard or placebo used for comparison. If absent, use `null`. - Result: The health outcome being assessed. - Impact: The observed effect of the treatment on the result, categorized as: ("increased", "decreased", "no change"). - Justification: The supporting details for the observed impact. Format the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>]. Abstract: ABSTRACT: A disconcerting proportion of Canadian nurses are physically inactive and report poor cardiovascular health. Web-based interventions incorporating feedback and group features may represent opportune, convenient, and cost-effective methods for encouraging physical activity (PA) in order to improve the levels of PA and cardiovascular health of nurses. The purpose of this parallel-group randomized trial was to examine the impact of an intervention providing participants with feedback from an activity monitor coupled with a web-based individual, friend or team PA challenge, on the PA and cardiovascular health of nurses working in a cardiovascular setting. <p> Methods: Nurses were randomly assigned in a 1:1:1 ratio to one of the following intervention "challenge" groups: (1) individual, (2) friend or (3) team. Nurses wore a Tractivity® activity monitor throughout a baseline week and 6-week intervention. Height, body mass, body fat percentage, waist circumference, resting blood pressure (BP) and heart rate were assessed, and body mass index (BMI) was calculated, during baseline and within 1 week post-intervention. Data were analyzed using descriptive statistics and general linear model procedures for repeated measures. <p> Results: 76 nurses (97% female; age: 46 ± 11 years) participated. Weekly moderate-to-vigorous intensity PA (MVPA) changed over time (F = 4.022, df = 4.827, p = 0.002, η2 = 0.055), and was greater during intervention week 2 when compared to intervention week 6 (p = 0.011). Daily steps changed over time (F = 7.668, df = 3.910, p < 0.001, η2 = 0.100), and were greater during baseline and intervention weeks 1, 2, 3, and 5 when compared to intervention week 6 (p < 0.05). No differences in weekly MVPA or daily steps were observed between groups (p > 0.05). No changes in body mass, BMI or waist circumference were observed within or between groups (p > 0.05). Decreases in body fat percentage (−0.8 ± 4.8%, p = 0.015) and resting systolic BP (−2.6 ± 8.8 mmHg, p = 0.019) were observed within groups, but not between groups (p > 0.05). <p> Conclusions: A web-based intervention providing feedback and a PA challenge initially impacted the PA, body fat percentage and resting systolic BP of nurses working in a cardiovascular setting, though increases in PA were short-lived. The nature of the PA challenge did not differentially impact outcomes. Alternative innovative strategies to improve and sustain nurses' PA should be developed and their effectiveness evaluated. Extracted Data:
[["individual or friend ", "team", " Weekly moderate-to-vigorous intensity PA (MVPA) levels", "no significant difference", "No differences in weekly MVPA or daily steps were observed between groups (p \u003e 0.05)."]]
science.evidence_inference.238
science.scierc_ner
Analyze the given abstract from a computer science paper. Identify and list entities in a structured JSON format. Focus on capturing distinct elements relevant to the research context. Input Abstract: The project presented here is a part of a long term research program aiming at a full lexicon grammar for Polish (SyntLex). The main of this project is computer-assisted acquisition and morpho-syntactic description of verb-noun collocations in Polish. We present methodology and resources obtained in three main project phases which are: dictionary-based acquisition of collocation lexicon, feasibility study for corpus-based lexicon enlargement phase, corpus-based lexicon enlargement and collocation description. In this paper we focus on the results of the third phase. The presented here corpus-based approach permitted us to triple the size the verb-noun collocation dictionary for Polish. In the paper we describe the SyntLex Dictionary of Collocations and announce some future research intended to be a separate project continuation.
Output Entities: {"Generic": ["phases"], "Material": ["Polish", "SyntLex Dictionary of Collocations", "verb-noun collocation dictionary"], "Method": ["corpus-based approach", "lexicon grammar for Polish (SyntLex)"], "Metric": [], "OtherScientificTerm": [], "Task": ["corpus-based lexicon enlargement and collocation description", "computer-assisted acquisition and morpho-syntactic description of verb-noun collocations", "corpus-based lexicon enlargement phase", "feasibility study", "dictionary-based acquisition of collocation lexicon"]}
science.scierc_ner.24
science.multicite_intent_classification
Examine the text below, which includes a citation marked by "<cite>" and "</cite>". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text. Provide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose. Text with Citation: We implement the features in<cite> Wang and Lan (2015)</cite> and add our novel features shown in Table 1 .
["Uses"]
science.multicite_intent_classification.2022
science.mup_single_document_summarization
Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise. Title of the Paper: Generalizable Adversarial Training via Spectral Normalization Main Content: 1 INTRODUCTION . Despite their impressive performance on many supervised learning tasks , deep neural networks ( DNNs ) are often highly susceptible to adversarial perturbations imperceptible to the human eye ( Szegedy et al. , 2013 ; Goodfellow et al. , 2014b ) . These “ adversarial attacks '' have received enormous attention in the machine learning literature over recent years ( Goodfellow et al. , 2014b ; Moosavi Dezfooli et al. , 2016 ; Carlini & Wagner , 2016 ; Kurakin et al. , 2016 ; Papernot et al. , 2016 ; Carlini & Wagner , 2017 ; Papernot et al. , 2017 ; Madry et al. , 2018 ; Tramèr et al. , 2018 ) . Adversarial attack studies have mainly focused on developing effective attack and defense schemes . While attack schemes attempt to mislead a trained classifier via additive perturbations to the input , defense mechanisms aim to train classifiers robust to these perturbations . Although existing defense methods result in considerably better performance compared to standard training methods , the improved performance can still be far below the performance in non-adversarial settings ( Athalye et al. , 2018 ; Schmidt et al. , 2018 ) . A standard adversarial training scheme involves fitting a classifier using adversarially-perturbed samples ( Szegedy et al. , 2013 ; Goodfellow et al. , 2014b ) with the intention of producing a trained classifier with better robustness to attacks on future ( i.e . test ) samples . Madry et al . ( 2018 ) provides a robust optimization interpretation of the adversarial training approach , demonstrating that this strategy finds the optimal classifier minimizing the average worst-case loss over an adversarial ball centered at each training sample . This minimax interpretation can also be extended to distributionally-robust training methods ( Sinha et al. , 2018 ) where the offered robustness is over a Wasserstein-ball around the empirical distribution of training data . Recently , Schmidt et al . ( 2018 ) have shown that standard adversarial training produces networks that generalize poorly . The performance of adversarially-trained DNNs over test samples can be significantly worse than their training performance , and this gap can be far greater than the generalization ∗Equal Contributors gap achieved using standard empirical risk minimization ( ERM ) . This discrepancy suggests that the overall adversarial test performance can be improved by applying effective regularization schemes during adversarial training . In this work , we propose using spectral normalization ( SN ) ( Miyato et al. , 2018 ) as a computationallyefficient and statistically-powerful regularization scheme for adversarial training of DNNs . SN has been successfully implemented and applied for DNNs in the context of generative adversarial networks ( GANs ) ( Goodfellow et al. , 2014a ) , resulting in state-of-the-art deep generative models for several benchmark tasks ( Miyato et al. , 2018 ) . Moreover , SN ( Tsuzuku et al. , 2018 ) and other similar Lipschitz regularization techniques ( Cisse et al. , 2017 ) have been successfully applied in non-adversarial training settings to improve the robustness of ERM-trained networks to adversarial attacks . The theoretical results in ( Bartlett et al. , 2017 ; Neyshabur et al. 
, 2017a ) and empirical results in ( Yoshida & Miyato , 2017 ) also suggest that SN can close the generalization gap for DNNs in non-adversarial ERM setting . On the theoretical front , we extend the standard notion of margin loss to adversarial settings . We leverage the PAC-Bayes generalization framework ( McAllester , 1999 ) to prove generalization bounds for spectrally-normalized DNNs in terms of our defined adversarial margin loss . We obtain adversarial generalization error bounds for three well-known gradient-based attack schemes : fast gradient method ( FGM ) ( Goodfellow et al. , 2014b ) , projected gradient method ( PGM ) ( Kurakin et al. , 2016 ) , and Wasserstein risk minimization ( WRM ) ( Sinha et al. , 2018 ) . Our theoretical analysis shows that the adversarial generalization error will vanish by applying SN to all layers . On the empirical front , we show that SN can significantly improve the test performance of adversarially-trained DNNs . We perform numerical experiments over various standard datasets and DNN architectures . In almost all of our experiments , we obtain a better test performance after applying SN . For example , Figure 1 shows the training and validation performance for AlexNet fit on the CIFAR10 dataset using FGM , PGM , and WRM , resulting in adversarial test accuracy improvements of 9 , 11 , and 4 percent , respectively . To perform our numerical experiments , we develop a computationally-efficient approach for normalizing the spectral norm of convolution layers with arbitrary stride and padding schemes . To summarize , the main contributions of this work are : 1 . Proposing SN as a regularization scheme for adversarial training of DNNs , 2 . Extending concepts of margin-based generalization analysis to adversarial settings and proving margin-based generalization bounds for three gradient-based adversarial attack schemes , 3 . Developing an efficient method for normalizing the spectral norm of convolutional layers in deep convolution networks , 4 . Numerically demonstrating the improved test and generalization performance of DNNs trained with SN . 2 PRELIMINARIES . In this section , we first review some standard concepts of margin-based generalization analysis in learning theory . We then extend these notions to adversarial training settings . 2.1 SUPERVISED LEARNING , DEEP NEURAL NETWORKS , GENERALIZATION ERROR . Consider samples { ( x1 , y1 ) , . . . , ( xn , yn ) } drawn i.i.d from underlying distribution PX , Y . We suppose X ∈ X and Y ∈ { 1 , 2 , . . . , m } where m represents the number of different labels . Given loss function ` and function class F = { fw , w ∈ W } parameterized by w , a supervised learner aims to find the optimal function in F minimizing the expected loss ( risk ) averaged over the underlying distribution P . We consider Fnn as the class of d-layer neural networks with h hidden units per layer and activation functions σ : R→ R. Each fw : X → Rm in Fnn maps a data point x to an m-dimensional vector . Specifically , we can express each fw ∈ Fnn as fw ( x ) = Wdσ ( Wd−1 · · ·σ ( W1x ) · · · ) ) . We use ‖Wi‖2 to denote the spectral norm of matrix Wi , defined as the largest singular value of Wi , and ‖Wi‖F to denote Wi ’ s Frobenius norm . A classifier fw ’ s performance over the true distribution of data can be different from the training performance over the empirical distribution of training samples P̂ . 
The difference between the empirical and true averaged losses , evaluated on respectively training and test samples , is called the generalization error . Similar to Neyshabur et al . ( 2017a ) , we evaluate a DNN ’ s generalization performance using its expected margin loss defined for margin parameter γ > 0 as Lγ ( fw ) : = P ( fw ( X ) [ Y ] ≤ γ + max j 6=Y fw ( X ) [ j ] ) , ( 1 ) where fw ( X ) [ j ] denotes the jth entry of fw ( X ) ∈ Rm . For a given data point X , we predict the label corresponding to the maximum entry of fw ( X ) . Also , we use L̂γ ( fw ) to denote the empirical margin loss averaged over the training samples . The goal of margin-based generalization analysis is to provide theoretical comparison between the true and empirical margin risks . 2.2 ADVERSARIAL ATTACKS , ADVERSARIAL TRAINING . A supervised learner observes only the training samples and hence does not know the true distribution of data . Then , a standard approach to train a classifier is to minimize the empirical expected loss ` over function class F = { fw : w ∈ W } , which is min w∈W 1 n n∑ i=1 ` ( fw ( xi ) , yi ) . ( 2 ) This approach is called empirical risk minimization ( ERM ) . For better optimization performance , the loss function ` is commonly chosen to be smooth . Hence , 0-1 and margin losses are replaced by smooth surrogate loss functions such as the cross-entropy loss . However , we still use the margin loss as defined in ( 1 ) for evaluating the test and generalization performance of DNN classifiers . While ERM training usually achieves good performance over DNNs , several recent observations reveal that adding some adversarially-chosen perturbation to each sample can significantly drop the trained DNN ’ s performance . Given norm function ‖ · ‖ and adversarial noise power > 0 , the adversarial additive noise for sample ( x , y ) and classifier fw is defined to be δadvw ( x ) : = argmax ‖δ‖≤ ` ( fw ( x + δ ) , y ) . ( 3 ) To provide adversarial robustness against the above attack scheme , a standard technique , which is called adversarial training , follows ERM training over the adversarially-perturbed samples by solving min w∈W 1 n n∑ i=1 ` ( fw ( xi + δ adv w ( xi ) ) , yi ) : = min w∈W 1 n n∑ i=1 max ‖δi‖≤ ` ( fw ( xi + δi ) , yi ) . ( 4 ) However , ( 3 ) and ( 4 ) are intractable optimization problems . Therefore , several schemes have been proposed in the literature to approximate the optimal solution of ( 3 ) . In this work , we analyze the generalization performance of the following three gradient-based methods for approximating the solution to ( 3 ) . We note that several other attack schemes such as DeepFool ( Moosavi Dezfooli et al. , 2016 ) , CW attacks ( Carlini & Wagner , 2017 ) , target and least-likely attacks ( Kurakin et al. , 2016 ) have been introduced and examined in the literature , which can lead to interesting future directions for this work . 1 . Fast Gradient Method ( FGM ) ( Goodfellow et al. , 2014b ) : FGM approximates the solution to ( 3 ) by considering a linearized DNN loss around a given data point . Hence , FGM perturbs ( x , y ) by adding the following noise vector : δfgmw ( x ) : = argmax ‖δ‖≤ δT∇x ` ( fw ( x ) , y ) . ( 5 ) For the special case of ` ∞-norm ‖ · ‖∞ , the above representation of FGM recovers the fast gradient sign method ( FGSM ) where each data point ( x , y ) is perturbed by the -normalized sign vector of the loss ’ s gradient . For ` 2-norm ‖ · ‖2 , we similarly normalize the loss ’ s gradient vector to have Euclidean norm . 2 . 
Projected Gradient Method ( PGM ) ( Kurakin et al. , 2016 ) : PGM is the iterative version of FGM and applies projected gradient descent to solve ( 3 ) . PGM follows the following update rules for a given r number of steps : ∀1 ≤ i ≤ r : δpgm , i+1w ( x ) : = ∏ B , ‖·‖ ( 0 ) { δpgm , iw ( x ) + αν ( i ) w } , ( 6 ) ν ( i ) w : = argmax ‖δ‖≤1 δT∇x ` ( fw ( x + δ pgm , i w ( x ) ) , y ) . Here , we first find the direction ν ( i ) w along which the loss at the ith perturbed point changes the most , and then we move the perturbed point along this direction by stepsize α followed by projecting the resulting perturbation onto the set { δ : ‖δ‖ ≤ } with -bounded norm . 3 . Wasserstein Risk Minimization ( WRM ) ( Sinha et al. , 2018 ) : WRM solves the following variant of ( 3 ) for data-point ( x , y ) where the norm constraint in ( 3 ) is replaced by a norm-squared Lagrangian penalty term : δwrmw ( x ) : = argmax δ ` ( fw ( x + δ ) , y ) − λ 2 ‖δ‖2 . ( 7 ) As discussed earlier , the optimization problem ( 3 ) is generally intractable . However , in the case of Euclidean norm ‖ · ‖2 , if we assume∇x ` ( fw ( x ) , y ) ’ s Lipschitz constant is upper-bounded by λ , then WRM optimization ( 7 ) results in solving a convex optimization problem and can be efficiently solved using gradient methods . To obtain efficient adversarial defense schemes , we can substitute δfgmw , δ pgm w , or δ wrm w for δ adv w in ( 4 ) . Instead of fitting the classifier over true adversarial examples , which are NP-hard to obtain , we can instead train the DNN over FGM , PGM , or WRM-adversarially perturbed samples .
The paper first provides a generalization bounds for adversarial training, showing that the error bound depends on Lipschitz constant. This motivates the use of spectral regularization (similar to Miyato et al 2018) in adversarial training. Using spectral regularization to improve robustness is not new, but it's interesting to combine spectral regularization and adversarial training. Experimental results show significant improvement over vanilla adversarial training.
science.mup_single_document_summarization.10679
science.qasper_abstractive_qa
Examine the provided sections from a research document and address the accompanying query. The text is divided by newlines. Document Content: An Iterative Approach for Identifying Complaint Based Tweets in Social Media Platforms Twitter is a social media platform where users express opinions over a variety of issues. Posts offering grievances or complaints can be utilized by private/ public organizations to improve their service and promptly gauge a low-cost assessment. In this paper, we propose an iterative methodology which aims to identify complaint based posts pertaining to the transport domain. We perform comprehensive evaluations along with releasing a novel dataset for the research purposes. Introduction With the advent of social media platforms, increasing user base address their grievances over these platforms, in the form of complaints. According to BIBREF0, complaint is considered to be a basic speech act used to express negative mismatch between the expectation and reality. Transportation and its related logistics industries are the backbones of every economy. Many transport organizations rely on complaints gathered via these platforms to improve their services, hence understanding these are important for: (1) linguists to identify human expressions of criticism and (2) organizations to improve their query response time and address concerns effectively. Presence of inevitable noise, sparse content along with rephrased and structurally morphed instances of posts, make the task at hand difficult BIBREF1. Previous works BIBREF2 in the domain of complaint extraction have focused on static datasets only. These are not robust to changes in the trends reflected, information flow and linguistic variations. We propose an iterative, semi-supervised approach for identification of complaint based tweets, having the ability to be replicated for stream of information flow. The preference of a semi-supervised approach over supervised ones is due to the stated reasons: (a) the task of isolating the training set, make supervised tasks less attractive and impractical and (b) imbalance between the subjective and objective classes lead to poor performance. Proposed Methodology We aimed to mimic the presence of sparse/noisy content distribution, mandating the need to curate a novel dataset via specific lexicons. We scraped 500 random posts from recognized transport forum. A pool of 50 uni/bi-grams was created based on tf-idf representations, extracted from the posts, which was further pruned by annotators. Querying posts on Twitter with extracted lexicons led to a collection of $19,300$ tweets. In order to have lexical diversity, we added 2500 randomly sampled tweets to our dataset. In spite of the sparse nature of these posts, the lexical characteristics act as information cues. Figure FIGREF4 pictorially represents our methodology. Our approach required an initial set of informative tweets for which we employed two human annotators annotating a random sub-sample of the original dataset. From the 1500 samples, 326 were marked as informative and 1174 as non informative ($\kappa =0.81$), discriminated on this criteria: Is the tweet addressing any complaint or raising grievances about modes of transport or services/ events associated with transportation such as traffic; public or private transport?. An example tweet marked as informative: No, metro fares will be reduced ???, but proper fare structure needs to presented right, it's bad !!!. 
We utilized tf-idf for the identification of initial seed phrases from the curated set of informative tweets. 50 terms having the highest tf-idf scores were passed through the complete dataset and based on sub-string match, the transport relevant tweets were identified. The redundant tweets were filtered based on the cosine similarity score. Implicit information indicators were identified based on domain relevance score, a metric used to gauge the coverage of n-gram (1,2,3) when evaluated against a randomly created pool of posts. We collected a pool of 5000 randomly sampled tweets different from the data collection period. The rationale behind having such a metric was to discard commonly occurring n-grams normalized by random noise and include ones which are of lexical importance. We used terms associated with high domain relevance score (threshold determined experimentally) as seed phrases for the next set of iterations. The growing dictionary augments the collection process. The process ran for 4 iterations providing us 7200 transport relevant tweets as no new lexicons were identified. In order to identify linguistic signals associated with the complaint posts, we randomly sampled a set of 2000 tweets which was used as training set, manually annotated into distinct labels: complaint relevant (702) and complaint non-relevant (1298) ($\kappa =0.79$). We employed these features on our dataset. Linguistic markers. To capture linguistic aspects of complaints, we utilized Bag of Words, count of POS tags and Word2vec clusters. Sentiment markers. We used quantified score based on the ratio of tokens mentioned in the following lexicons: MPQA, NRC, VADER and Stanford. Information specific markers. These account for a set of handcrafted features associated with complaint, we used the stated markers (a) Text-Meta Data, this includes the count of URL's, hashtags, user mentions, special symbols and user mentions, used to enhance retweet impact; (b) Request Identification, we employed the model presented in BIBREF3 to identify if a specific tweet assertion is a request; (c) Intensifiers, we make use of feature set derived from the number of words starting with capital letters and the repetition of special symbols (exclamation, questions marks) within the same post; (d) Politeness Markers, we utilize the politeness score of the tweet extracted from the model presented in BIBREF3; (e) Pronoun Variation, these have the ability to reveal the personal involvement or intensify involvement. We utilize the frequency of pronoun types $\lbrace \textit {first, second, third, demonstrative and indefinite}$} using pre-defined dictionaries. From the pool of 7200 transport relevant tweets, we sampled 3500 tweets which were used as the testing set. The results are reported in TableTABREF5 with 10 fold cross-validation. With increasing the number of iterations, the pool of seed phrases gets refined and augments the selection of transport relevant tweets. The proposed pipeline is tailored to identify complaint relevant tweets in a noisy scenario. Results Table TABREF5 reflects that the BOW model provided the best results, both in terms of accuracy and F1-score. The best result achieved by a sentiment model was the Stanford Sentiment ($0.63$ F1-score), with others within the same range and linguistic-based features collectively giving the best performance. 
Conclusion and Future Work In this paper, we presented a novel semi-supervised pipeline along with a novel dataset for identification of complaint based posts in the transport domain. The proposed methodology can be expanded for other fields by altering the lexicons used for the creation of information cues. There are limitations to this analysis; we do not use neural networks which mandate a large volume of data. In the future, we aim to identify demographic features for identification of complaint based posts on social media platforms. Figure 1: Pictorial representation of the proposed pipeline. Table 1: Performance of various linguistic, sentiment and information specific features on our dataset. Classifier utilized Logistic Regression (Elastic Net regularization), as it gave the best performance as compared to its counterparts. Query: What language is explored in this paper? JSON Response:
{"answer": "English language", "evidence": ["Figure FIGREF4 pictorially represents our methodology. Our approach required an initial set of informative tweets for which we employed two human annotators annotating a random sub-sample of the original dataset. From the 1500 samples, 326 were marked as informative and 1174 as non informative ($\\kappa =0.81$), discriminated on this criteria: Is the tweet addressing any complaint or raising grievances about modes of transport or services/ events associated with transportation such as traffic; public or private transport?. An example tweet marked as informative: No, metro fares will be reduced ???, but proper fare structure needs to presented right, it\u0027s bad !!!."]}
science.qasper_abstractive_qa.74.0
[ { "content": "Examine the provided sections from a research document and address the accompanying query. The text is divided by newlines.\n\nDocument Content:\n\nAn Iterative Approach for Identifying Complaint Based Tweets in Social Media Platforms\n\nTwitter is a social media platform where users express opinions over a variety of issues. Posts offering grievances or complaints can be utilized by private/ public organizations to improve their service and promptly gauge a low-cost assessment. In this paper, we propose an iterative methodology which aims to identify complaint based posts pertaining to the transport domain. We perform comprehensive evaluations along with releasing a novel dataset for the research purposes.\n\nIntroduction\nWith the advent of social media platforms, increasing user base address their grievances over these platforms, in the form of complaints. According to BIBREF0, complaint is considered to be a basic speech act used to express negative mismatch between the expectation and reality. Transportation and its related logistics industries are the backbones of every economy. Many transport organizations rely on complaints gathered via these platforms to improve their services, hence understanding these are important for: (1) linguists to identify human expressions of criticism and (2) organizations to improve their query response time and address concerns effectively.\nPresence of inevitable noise, sparse content along with rephrased and structurally morphed instances of posts, make the task at hand difficult BIBREF1. Previous works BIBREF2 in the domain of complaint extraction have focused on static datasets only. These are not robust to changes in the trends reflected, information flow and linguistic variations. We propose an iterative, semi-supervised approach for identification of complaint based tweets, having the ability to be replicated for stream of information flow. The preference of a semi-supervised approach over supervised ones is due to the stated reasons: (a) the task of isolating the training set, make supervised tasks less attractive and impractical and (b) imbalance between the subjective and objective classes lead to poor performance.\n\nProposed Methodology\nWe aimed to mimic the presence of sparse/noisy content distribution, mandating the need to curate a novel dataset via specific lexicons. We scraped 500 random posts from recognized transport forum. A pool of 50 uni/bi-grams was created based on tf-idf representations, extracted from the posts, which was further pruned by annotators. Querying posts on Twitter with extracted lexicons led to a collection of $19,300$ tweets. In order to have lexical diversity, we added 2500 randomly sampled tweets to our dataset. In spite of the sparse nature of these posts, the lexical characteristics act as information cues.\nFigure FIGREF4 pictorially represents our methodology. Our approach required an initial set of informative tweets for which we employed two human annotators annotating a random sub-sample of the original dataset. From the 1500 samples, 326 were marked as informative and 1174 as non informative ($\\kappa =0.81$), discriminated on this criteria: Is the tweet addressing any complaint or raising grievances about modes of transport or services/ events associated with transportation such as traffic; public or private transport?. 
An example tweet marked as informative: No, metro fares will be reduced ???, but proper fare structure needs to presented right, it's bad !!!.\nWe utilized tf-idf for the identification of initial seed phrases from the curated set of informative tweets. 50 terms having the highest tf-idf scores were passed through the complete dataset and based on sub-string match, the transport relevant tweets were identified. The redundant tweets were filtered based on the cosine similarity score. Implicit information indicators were identified based on domain relevance score, a metric used to gauge the coverage of n-gram (1,2,3) when evaluated against a randomly created pool of posts.\nWe collected a pool of 5000 randomly sampled tweets different from the data collection period. The rationale behind having such a metric was to discard commonly occurring n-grams normalized by random noise and include ones which are of lexical importance. We used terms associated with high domain relevance score (threshold determined experimentally) as seed phrases for the next set of iterations. The growing dictionary augments the collection process. The process ran for 4 iterations providing us 7200 transport relevant tweets as no new lexicons were identified. In order to identify linguistic signals associated with the complaint posts, we randomly sampled a set of 2000 tweets which was used as training set, manually annotated into distinct labels: complaint relevant (702) and complaint non-relevant (1298) ($\\kappa =0.79$). We employed these features on our dataset.\nLinguistic markers. To capture linguistic aspects of complaints, we utilized Bag of Words, count of POS tags and Word2vec clusters.\nSentiment markers. We used quantified score based on the ratio of tokens mentioned in the following lexicons: MPQA, NRC, VADER and Stanford.\nInformation specific markers. These account for a set of handcrafted features associated with complaint, we used the stated markers (a) Text-Meta Data, this includes the count of URL's, hashtags, user mentions, special symbols and user mentions, used to enhance retweet impact; (b) Request Identification, we employed the model presented in BIBREF3 to identify if a specific tweet assertion is a request; (c) Intensifiers, we make use of feature set derived from the number of words starting with capital letters and the repetition of special symbols (exclamation, questions marks) within the same post; (d) Politeness Markers, we utilize the politeness score of the tweet extracted from the model presented in BIBREF3; (e) Pronoun Variation, these have the ability to reveal the personal involvement or intensify involvement. We utilize the frequency of pronoun types $\\lbrace \\textit {first, second, third, demonstrative and indefinite}$} using pre-defined dictionaries.\nFrom the pool of 7200 transport relevant tweets, we sampled 3500 tweets which were used as the testing set. The results are reported in TableTABREF5 with 10 fold cross-validation. With increasing the number of iterations, the pool of seed phrases gets refined and augments the selection of transport relevant tweets. The proposed pipeline is tailored to identify complaint relevant tweets in a noisy scenario.\n\nResults\nTable TABREF5 reflects that the BOW model provided the best results, both in terms of accuracy and F1-score. 
The best result achieved by a sentiment model was the Stanford Sentiment ($0.63$ F1-score), with others within the same range and linguistic-based features collectively giving the best performance.\n\nConclusion and Future Work\nIn this paper, we presented a novel semi-supervised pipeline along with a novel dataset for identification of complaint based posts in the transport domain. The proposed methodology can be expanded for other fields by altering the lexicons used for the creation of information cues. There are limitations to this analysis; we do not use neural networks which mandate a large volume of data. In the future, we aim to identify demographic features for identification of complaint based posts on social media platforms.\n\nFigure 1: Pictorial representation of the proposed pipeline.\n\nTable 1: Performance of various linguistic, sentiment and information specific features on our dataset. Classifier utilized Logistic Regression (Elastic Net regularization), as it gave the best performance as compared to its counterparts.\n\nQuery: What language is explored in this paper?\n\nJSON Response:", "role": "user" }, { "content": "{\"answer\": \"English language\", \"evidence\": [\"Figure FIGREF4 pictorially represents our methodology. Our approach required an initial set of informative tweets for which we employed two human annotators annotating a random sub-sample of the original dataset. From the 1500 samples, 326 were marked as informative and 1174 as non informative ($\\\\kappa =0.81$), discriminated on this criteria: Is the tweet addressing any complaint or raising grievances about modes of transport or services/ events associated with transportation such as traffic; public or private transport?. An example tweet marked as informative: No, metro fares will be reduced ???, but proper fare structure needs to presented right, it\\u0027s bad !!!.\"]}", "role": "assistant" } ]
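The complaint-tweet record above builds its lexicon by ranking uni/bi-grams from the informative tweets by tf-idf and keeping the top 50 as seed phrases. The snippet below is only a rough sketch of that single step; the vectorizer settings, the max-per-term aggregation, and the English stop-word list are assumptions rather than details stated in the paper.

```python
# Illustrative tf-idf seed-phrase extraction, following the record above.
# scikit-learn is assumed; the aggregation (max tf-idf per term) is a guess.
from sklearn.feature_extraction.text import TfidfVectorizer
import numpy as np

def top_seed_phrases(informative_tweets, k=50):
    vec = TfidfVectorizer(ngram_range=(1, 2), stop_words="english")
    X = vec.fit_transform(informative_tweets)      # sparse matrix: docs x terms
    scores = X.max(axis=0).toarray().ravel()       # best tf-idf score per term
    terms = vec.get_feature_names_out()
    order = np.argsort(scores)[::-1][:k]
    return [terms[i] for i in order]
```

The paper then prunes these phrases against a pool of randomly sampled tweets using a domain-relevance score before the next collection iteration; that filtering step is omitted from the sketch.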
science.multicite_intent_classification
Examine the text below, which includes a citation marked by "<cite>" and "</cite>". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text. Provide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose. Text with Citation: However, many agreement studies have restricted annotators to using a single sense, which can significantly lower inter-annotator agreement (IAA) in the presence of ambiguous or polysemous usages; indeed, multiple studies have shown that when allowed, annotators readily assign multiple senses to a single usage (Véronis, 1998; Murray and Green, 2004; <cite>Erk et al., 2009</cite>; Passonneau et al., 2012b) .
["Background", "Motivation"]
science.multicite_intent_classification.1287
[ { "content": "Examine the text below, which includes a citation marked by \"<cite>\" and \"</cite>\". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.\n\nProvide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.\n\nText with Citation:\nHowever, many agreement studies have restricted annotators to using a single sense, which can significantly lower inter-annotator agreement (IAA) in the presence of ambiguous or polysemous usages; indeed, multiple studies have shown that when allowed, annotators readily assign multiple senses to a single usage (Véronis, 1998; Murray and Green, 2004; <cite>Erk et al., 2009</cite>; Passonneau et al., 2012b) .", "role": "user" }, { "content": "[\"Background\", \"Motivation\"]", "role": "assistant" } ]
science.mup_single_document_summarization
Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise. Title of the Paper: GQ-Net: Training Quantization-Friendly Deep Networks Main Content: 1 INTRODUCTION . Neural network quantization is a technique to reduce the size of deep networks and to bypass computationally and energetically expensive floating-point arithmetic operations in favor of efficient integer arithmetic on quantized versions of model weights and activations . Network quantization has been the focus of intensive research in recent years ( Rastegari et al. , 2016 ; Zhou et al. , 2016 ; Jacob et al. , 2018 ; Krishnamoorthi , 2018 ; Jung et al. , 2018 ; Louizos et al. , 2019 ; Nagel et al. , 2019 ; Gong et al. , 2019 ) , with most works belonging to one of two categories . The first line of work quantizes parts of the network while leaving a portion of its operations , e.g . computations in the first and last network layers in floating point . While such networks can be highly efficient , using bitwidths down to 5 or 4 bits with minimal loss in network accuracy ( Zhang et al. , 2018 ; Jung et al. , 2018 ) , they may be difficult to deploy in certain practical settings , due to the complexity of extra floating point hardware needed to execute the non-quantized portions of the network . Another line of work aims for ease of real world deployment by quantizing the entire network , including all weights and activations in all convolutional and fully connected layers ; we term this scheme strict quantization . Maintaining accuracy under strict quantization is considerably more challenging . While nearly lossless 8-bit strictly quantized networks have been proposed ( Jacob et al. , 2018 ) , to date state-of-the-art 4 bit networks incur large losses in accuracy compared to full precision reference models . For example , the strict 4-bit ResNet-18 model in Louizos et al . ( 2019 ) has 61.52 % accuracy , compared to 69.76 % for the full precision model , while the strict 4-bit MobileNet-v2 model in Krishnamoorthi ( 2018 ) has 62.00 % accuracy , compared to 71.88 % accuracy in full precision . To understand the difficulty of training accurate low-bitwidth strictly quantized networks , consider a common training procedure which begins with a pre-trained network , quantizes the model , then applies fine-tuning using straight-through estimators ( STE ) for gradient updates until the model achieves sufficient quantized accuracy . This process faces two problems . First , as the pre-trained model was not initially trained with the task of being subsequently quantized in mind , it may not be “ quantization-friendly ” . That is , the fine-tuning process may need to make substantial changes to the initial model in order to transform it to an accurate quantized model . Second , fine-tuning a model , especially at low bitwidths , is difficult due to the lack of accurate gradient information provided by STE . In particular , fine-tuning using STE is done by updating a model represented internally with floating point values using gradients computed at the nearest quantizations of the floating point values . Thus for example , if we apply 4 bit quantization to floating point model parameters in the range [ 0 , 1 ] , a random parameter will incur an average round-off error of 1/32 , which will be incorporated into the error in the STE gradient for this parameter , leading to possibly ineffective fine-tuning . 
To address these problems , we propose GQ-Net , a guided quantization training algorithm . The main goal of GQ-Net is to produce an accurate and quantization-friendly full precision model , i.e . a model whose quantized version , obtained by simply rounding each full precision value to its nearest quantized point , has nearly the same accuracy as itself . To do this , we design a loss function for the model which includes two components , one to minimize error with respect to the training labels , and another component to minimize the distributional difference between the model ’ s outputs and the outputs of the model ’ s quantized version . This loss function has the effect of guiding the optimization process towards a model which is both accurate , by virtue of minimizing the first loss component , and which is also similar enough to its quantized version due to minimization of the second component to ensure that the quantized model is also accurate . In addition , because the first component of the loss function deals only with floating point values , it provides accurate gradient information during optimization , in contrast to STE-based optimization which uses biased gradients at rounded points , which further improves the accuracy of the quantized model . Since GQ-Net directly produces a quantized model which does not require further fine-tuning , the number of epochs required to train GQ-Net is substantially less than the total number of epochs needed to train and fine-tune a model using the traditional quantization approach , leading to significantly reduced wall-clock training time . We note that GQ-Net ’ s technique is independent of and can be used in conjunction with other techniques for improving quantization accuracy , as we demonstrate in Section 4.3 . Finally , we believe that the guided training technique we propose can also be applied to other neural network structural optimization problems such as network pruning . We implemented GQ-Net in PyTorch and our codebase and trained models are publicly available 1 . We validated GQ-Net on the ImageNet classification task with the widely used ResNet-18 and 1An anonymous codebase has been submitted to OpenReview . The GitHub repository will be made public after the review process . compact MobileNet-v1/v2 models , and also performed a thorough set of ablation experiments to study different aspects of our technique . In terms of quantization accuracy loss compared to reference floating point models , GQ-Net strictly quantized using 4-bit weights and activations surpasses existing state-of-the-art strict methods by up to 2.7× , and also improves upon these methods even when they use higher bitwidths . In particular , 4-bit GQ-Net applied to ResNet-18 achieves 66.68 % top-1 accuracy , compared to 61.52 % accuracy in Louizos et al . ( 2019 ) and a reference floating point accuracy of 69.76 % , while on MobileNet-v2 GQ-Net achieves 66.15 % top-1 accuracy compared to 62.00 % accuracy in Krishnamoorthi ( 2018 ) and a reference floating point accuracy of 71.88 % . Additionally , GQ-Net achieves these results using layer-wise quantization , as opposed to channel-wise quantization in Krishnamoorthi ( 2018 ) , which further enhances the efficiency and practicality of the technique . 2 RELATED WORKS . Neural network quantization has been the subject of extensive investigation in recent years . Quantization can be applied to different part of neural networks , including weights , activations or gradients . Courbariaux et al . 
( 2015 ) , Hou et al . ( 2016 ) , Zhou et al . ( 2017 ) , Hou & Kwok ( 2018 ) and other works quantized model weights to binary , ternary or multi-bit integers to reduce model size . Wei et al . ( 2018 ) quantized activations of object detection models for knowledge transfer . Alistarh et al . ( 2016 ) , Hou et al . ( 2019 ) quantized model gradients to accelerate distributed training . Another line of work quantizes both weights and activations to accelerate model inference by utilizing fix-point or integer arithmetic . These works include Courbariaux et al . ( 2016 ) , Rastegari et al . ( 2016 ) , Gysel et al . ( 2016 ) , Krishnamoorthi ( 2018 ) , Choi et al . ( 2018 ) , Zhang et al . ( 2018 ) , Jung et al . ( 2018 ) . A large set of methods have been proposed to improve training or fine-tuning for network quantization . Straight through estimators ( Bengio et al. , 2013 ) ( STE ) propagate gradients through non-differentiable operations with the identity mapping . Other training methods “ soften ” nondifferentiable operations to similar differentiable ones in order for gradients to pass through , then gradually anneal to piecewise continuous functions by applying stronger constraints . This line of works include Louizos et al . ( 2019 ) , Gong et al . ( 2019 ) , Bai et al . ( 2018 ) . Some works regard quantization as a stochastic process that produces parameterized discrete distributions , and guides training using gradients with respect to these parameters Soudry et al . ( 2014 ) , Shayer et al . ( 2018 ) . Another line of works does not require fine tuning , and instead re-calibrates or modifies the original network to recover accuracy using little or even no data He & Cheng ( 2018 ) , Nagel et al . ( 2019 ) , Meller et al . ( 2019 ) . Several recent works have focused on quantizing all parts of a network , typically in order to support deployment using only integer arithmetic units and avoiding the cost and complexity of additional floating point units . Gysel et al . ( 2016 ) proposed performing network inference using dynamic fixed-point arithmetic , where bitwidths for the integer and mantissa parts are determined based on a model ’ s weight distribution . Jacob et al . ( 2018 ) ; Krishnamoorthi ( 2018 ) proposed the quantization training and deployment algorithm behind the Tensorflow-Lite quantization runtime , which generates strictly quantized networks that can be easily implemented in hardware . Louizos et al . ( 2019 ) proposed a training method for strictly quantized models based on annealing a smooth quantization function to a piecewise continuous one . There has also been recent work on using parameterized quantizers which are optimized during quantization training . Choi et al . ( 2018 ) introduced learnable upper bounds to control the range of quantization . Zhang et al . ( 2018 ) proposed quantizers with a learnable basis which an be executed using fixed-point arithmetic . Jung et al . ( 2018 ) proposed to optimize weight scaling and quantization ranges jointly from task losses . 3 GQ-NET . In this section we describe the architecture of our proposed GQ-Net and then discuss components of the architecture which can be tuned to improve performance . 3.1 GQ-NET ARCHITECTURE . The major components of GQ-Net include the following , and are illustrated in Figure 1 : 1 . An L-layer neural network hW ( · ) with all computations performed using full precision floating point arithmetic . Here W = { W1 , . . . 
, WL } denotes the parameter ( weights ) of the model , with Wi , i ∈ 1 . . . L being the weights in layer i and expressed in floating point . 2 . The quantized model ĥW , Q ( · ) built from hW ( · ) . Here Q = { Qw1 , . . . , QwL , Qa0 , . . . , QaL } is a set of quantizers , i.e . mappings from floating point to ( scaled ) integer values ; the quantizers may be parameterized , and we describe how to optimize these parameters in Section 3.2 . Qwi quantizes weights Wi and Q a i quantizes activations in layer i . Let x0 denote an input to hW . To construct output ĥW , Q ( x0 ) of the quantized network , we proceed layer by layer . We first quantize the weights in layers i = 1 , . . . , L as ŵi = Qwi ( wi ) , and also quantize the input by setting x̂0 = Q a 0 ( x0 ) . we compute the quantized activations x̂i in layer i iteratively for i = 1 , . . . , L using x̂i = Qai ( x̃i ) , where x̃i = gi ( ŵi ∗ x̂i−1 ) , and gi ( · ) denotes the nonlinearity function in layer i and ∗ denotes convolution . Note that since ŵi and x̂i−1 are quantized , x̃i can be computed using integer or fixed point arithmetic . 3 . Next , we construct a loss function L incorporating both the training loss Lf of the full precision model hW and a loss Lq capturing the difference between hW and the quantized model ĥW , Q . L = ωfLf + ωqLq ( 1 ) Here ωf , ωq ∈ R are parameters capturing the relative importance of training loss versus distributional loss . In this paper , we focus on image classification networks , and thus we set Lf to be the cross-entropy loss between outputs from hW and the training labels . In addition , we set Lq = DKL ( σ ( hW ( · ) ) ||σ ( ĥW , Q ( · ) ) ) , where σ denotes the softmax function , to be the KL divergence between distributions σ ( hW ) and σ ( ĥW , Q ) on each input . Hence , minimizing the second term in L corresponds to pushing the floating point and quantized models to behave similarly to each other . Since the weight parameters W appear in both terms in L , the two terms can give conflicting signals for updating W during the optimization of L , causing the optimization to be unstable . We discuss how to deal with this problem in Section 3.2 . To train GQ-Net , we successively take mini-batches of training samples and labels and use them to compute L during the forward pass and propagate gradients with respect to W and the parameters of Q during the backward pass in order to minimize L. After L has converged sufficiently , we take the quantized weights in ĥW , Q ( · ) as the quantized model .
This work introduces GQ-Net, a novel technique that trains quantization friendly networks that facilitate for 4 bit weights and activations. This is achieved by introducing a loss function that consists of a linear combination of two components: one that aims to minimize the error of the network on the training labels of the dataset and one that aims to minimize the discrepancy of the model output with respect to the output of the model when the weights and activations are quantized. The authors argue that this has the effect of “guiding” the optimization procedure in finding networks that can be quantized without loss of performance. For the discrepancy metric the authors use the KL divergence from the predictive distribution of the floating point model to the one of the quantized model. The authors then propose several extra techniques that boost the performance of their method: 1. scheduling the weighting coefficients of the two loss terms (something which reminisces iterative pruning methods), 2. stopping the gradient of the floating point model w.r.t. the second loss term, 3. learning the parameters of the uniform quantizer, 4. alternating optimization between the weights and the parameters of the quantizers and 5. using separate batch normalization statistics for the floating point and quantized models. The authors then evaluate their method on Imagenet classification using ResNet-18 and Mobilenet v1 / v2, while also performing an ablation study about the extra tricks that they propose.
science.mup_single_document_summarization.18670
[ { "content": "Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.\n\nTitle of the Paper:\nGQ-Net: Training Quantization-Friendly Deep Networks\n\nMain Content:\n1 INTRODUCTION . Neural network quantization is a technique to reduce the size of deep networks and to bypass computationally and energetically expensive floating-point arithmetic operations in favor of efficient integer arithmetic on quantized versions of model weights and activations . Network quantization has been the focus of intensive research in recent years ( Rastegari et al. , 2016 ; Zhou et al. , 2016 ; Jacob et al. , 2018 ; Krishnamoorthi , 2018 ; Jung et al. , 2018 ; Louizos et al. , 2019 ; Nagel et al. , 2019 ; Gong et al. , 2019 ) , with most works belonging to one of two categories . The first line of work quantizes parts of the network while leaving a portion of its operations , e.g . computations in the first and last network layers in floating point . While such networks can be highly efficient , using bitwidths down to 5 or 4 bits with minimal loss in network accuracy ( Zhang et al. , 2018 ; Jung et al. , 2018 ) , they may be difficult to deploy in certain practical settings , due to the complexity of extra floating point hardware needed to execute the non-quantized portions of the network . Another line of work aims for ease of real world deployment by quantizing the entire network , including all weights and activations in all convolutional and fully connected layers ; we term this scheme strict quantization . Maintaining accuracy under strict quantization is considerably more challenging . While nearly lossless 8-bit strictly quantized networks have been proposed ( Jacob et al. , 2018 ) , to date state-of-the-art 4 bit networks incur large losses in accuracy compared to full precision reference models . For example , the strict 4-bit ResNet-18 model in Louizos et al . ( 2019 ) has 61.52 % accuracy , compared to 69.76 % for the full precision model , while the strict 4-bit MobileNet-v2 model in Krishnamoorthi ( 2018 ) has 62.00 % accuracy , compared to 71.88 % accuracy in full precision . To understand the difficulty of training accurate low-bitwidth strictly quantized networks , consider a common training procedure which begins with a pre-trained network , quantizes the model , then applies fine-tuning using straight-through estimators ( STE ) for gradient updates until the model achieves sufficient quantized accuracy . This process faces two problems . First , as the pre-trained model was not initially trained with the task of being subsequently quantized in mind , it may not be “ quantization-friendly ” . That is , the fine-tuning process may need to make substantial changes to the initial model in order to transform it to an accurate quantized model . Second , fine-tuning a model , especially at low bitwidths , is difficult due to the lack of accurate gradient information provided by STE . In particular , fine-tuning using STE is done by updating a model represented internally with floating point values using gradients computed at the nearest quantizations of the floating point values . Thus for example , if we apply 4 bit quantization to floating point model parameters in the range [ 0 , 1 ] , a random parameter will incur an average round-off error of 1/32 , which will be incorporated into the error in the STE gradient for this parameter , leading to possibly ineffective fine-tuning . 
To address these problems , we propose GQ-Net , a guided quantization training algorithm . The main goal of GQ-Net is to produce an accurate and quantization-friendly full precision model , i.e . a model whose quantized version , obtained by simply rounding each full precision value to its nearest quantized point , has nearly the same accuracy as itself . To do this , we design a loss function for the model which includes two components , one to minimize error with respect to the training labels , and another component to minimize the distributional difference between the model ’ s outputs and the outputs of the model ’ s quantized version . This loss function has the effect of guiding the optimization process towards a model which is both accurate , by virtue of minimizing the first loss component , and which is also similar enough to its quantized version due to minimization of the second component to ensure that the quantized model is also accurate . In addition , because the first component of the loss function deals only with floating point values , it provides accurate gradient information during optimization , in contrast to STE-based optimization which uses biased gradients at rounded points , which further improves the accuracy of the quantized model . Since GQ-Net directly produces a quantized model which does not require further fine-tuning , the number of epochs required to train GQ-Net is substantially less than the total number of epochs needed to train and fine-tune a model using the traditional quantization approach , leading to significantly reduced wall-clock training time . We note that GQ-Net ’ s technique is independent of and can be used in conjunction with other techniques for improving quantization accuracy , as we demonstrate in Section 4.3 . Finally , we believe that the guided training technique we propose can also be applied to other neural network structural optimization problems such as network pruning . We implemented GQ-Net in PyTorch and our codebase and trained models are publicly available 1 . We validated GQ-Net on the ImageNet classification task with the widely used ResNet-18 and 1An anonymous codebase has been submitted to OpenReview . The GitHub repository will be made public after the review process . compact MobileNet-v1/v2 models , and also performed a thorough set of ablation experiments to study different aspects of our technique . In terms of quantization accuracy loss compared to reference floating point models , GQ-Net strictly quantized using 4-bit weights and activations surpasses existing state-of-the-art strict methods by up to 2.7× , and also improves upon these methods even when they use higher bitwidths . In particular , 4-bit GQ-Net applied to ResNet-18 achieves 66.68 % top-1 accuracy , compared to 61.52 % accuracy in Louizos et al . ( 2019 ) and a reference floating point accuracy of 69.76 % , while on MobileNet-v2 GQ-Net achieves 66.15 % top-1 accuracy compared to 62.00 % accuracy in Krishnamoorthi ( 2018 ) and a reference floating point accuracy of 71.88 % . Additionally , GQ-Net achieves these results using layer-wise quantization , as opposed to channel-wise quantization in Krishnamoorthi ( 2018 ) , which further enhances the efficiency and practicality of the technique . 2 RELATED WORKS . Neural network quantization has been the subject of extensive investigation in recent years . Quantization can be applied to different part of neural networks , including weights , activations or gradients . Courbariaux et al . 
( 2015 ) , Hou et al . ( 2016 ) , Zhou et al . ( 2017 ) , Hou & Kwok ( 2018 ) and other works quantized model weights to binary , ternary or multi-bit integers to reduce model size . Wei et al . ( 2018 ) quantized activations of object detection models for knowledge transfer . Alistarh et al . ( 2016 ) , Hou et al . ( 2019 ) quantized model gradients to accelerate distributed training . Another line of work quantizes both weights and activations to accelerate model inference by utilizing fix-point or integer arithmetic . These works include Courbariaux et al . ( 2016 ) , Rastegari et al . ( 2016 ) , Gysel et al . ( 2016 ) , Krishnamoorthi ( 2018 ) , Choi et al . ( 2018 ) , Zhang et al . ( 2018 ) , Jung et al . ( 2018 ) . A large set of methods have been proposed to improve training or fine-tuning for network quantization . Straight through estimators ( Bengio et al. , 2013 ) ( STE ) propagate gradients through non-differentiable operations with the identity mapping . Other training methods “ soften ” nondifferentiable operations to similar differentiable ones in order for gradients to pass through , then gradually anneal to piecewise continuous functions by applying stronger constraints . This line of works include Louizos et al . ( 2019 ) , Gong et al . ( 2019 ) , Bai et al . ( 2018 ) . Some works regard quantization as a stochastic process that produces parameterized discrete distributions , and guides training using gradients with respect to these parameters Soudry et al . ( 2014 ) , Shayer et al . ( 2018 ) . Another line of works does not require fine tuning , and instead re-calibrates or modifies the original network to recover accuracy using little or even no data He & Cheng ( 2018 ) , Nagel et al . ( 2019 ) , Meller et al . ( 2019 ) . Several recent works have focused on quantizing all parts of a network , typically in order to support deployment using only integer arithmetic units and avoiding the cost and complexity of additional floating point units . Gysel et al . ( 2016 ) proposed performing network inference using dynamic fixed-point arithmetic , where bitwidths for the integer and mantissa parts are determined based on a model ’ s weight distribution . Jacob et al . ( 2018 ) ; Krishnamoorthi ( 2018 ) proposed the quantization training and deployment algorithm behind the Tensorflow-Lite quantization runtime , which generates strictly quantized networks that can be easily implemented in hardware . Louizos et al . ( 2019 ) proposed a training method for strictly quantized models based on annealing a smooth quantization function to a piecewise continuous one . There has also been recent work on using parameterized quantizers which are optimized during quantization training . Choi et al . ( 2018 ) introduced learnable upper bounds to control the range of quantization . Zhang et al . ( 2018 ) proposed quantizers with a learnable basis which an be executed using fixed-point arithmetic . Jung et al . ( 2018 ) proposed to optimize weight scaling and quantization ranges jointly from task losses . 3 GQ-NET . In this section we describe the architecture of our proposed GQ-Net and then discuss components of the architecture which can be tuned to improve performance . 3.1 GQ-NET ARCHITECTURE . The major components of GQ-Net include the following , and are illustrated in Figure 1 : 1 . An L-layer neural network hW ( · ) with all computations performed using full precision floating point arithmetic . Here W = { W1 , . . . 
, WL } denotes the parameter ( weights ) of the model , with Wi , i ∈ 1 . . . L being the weights in layer i and expressed in floating point . 2 . The quantized model ĥW , Q ( · ) built from hW ( · ) . Here Q = { Qw1 , . . . , QwL , Qa0 , . . . , QaL } is a set of quantizers , i.e . mappings from floating point to ( scaled ) integer values ; the quantizers may be parameterized , and we describe how to optimize these parameters in Section 3.2 . Qwi quantizes weights Wi and Q a i quantizes activations in layer i . Let x0 denote an input to hW . To construct output ĥW , Q ( x0 ) of the quantized network , we proceed layer by layer . We first quantize the weights in layers i = 1 , . . . , L as ŵi = Qwi ( wi ) , and also quantize the input by setting x̂0 = Q a 0 ( x0 ) . we compute the quantized activations x̂i in layer i iteratively for i = 1 , . . . , L using x̂i = Qai ( x̃i ) , where x̃i = gi ( ŵi ∗ x̂i−1 ) , and gi ( · ) denotes the nonlinearity function in layer i and ∗ denotes convolution . Note that since ŵi and x̂i−1 are quantized , x̃i can be computed using integer or fixed point arithmetic . 3 . Next , we construct a loss function L incorporating both the training loss Lf of the full precision model hW and a loss Lq capturing the difference between hW and the quantized model ĥW , Q . L = ωfLf + ωqLq ( 1 ) Here ωf , ωq ∈ R are parameters capturing the relative importance of training loss versus distributional loss . In this paper , we focus on image classification networks , and thus we set Lf to be the cross-entropy loss between outputs from hW and the training labels . In addition , we set Lq = DKL ( σ ( hW ( · ) ) ||σ ( ĥW , Q ( · ) ) ) , where σ denotes the softmax function , to be the KL divergence between distributions σ ( hW ) and σ ( ĥW , Q ) on each input . Hence , minimizing the second term in L corresponds to pushing the floating point and quantized models to behave similarly to each other . Since the weight parameters W appear in both terms in L , the two terms can give conflicting signals for updating W during the optimization of L , causing the optimization to be unstable . We discuss how to deal with this problem in Section 3.2 . To train GQ-Net , we successively take mini-batches of training samples and labels and use them to compute L during the forward pass and propagate gradients with respect to W and the parameters of Q during the backward pass in order to minimize L. After L has converged sufficiently , we take the quantized weights in ĥW , Q ( · ) as the quantized model .", "role": "user" }, { "content": "This work introduces GQ-Net, a novel technique that trains quantization friendly networks that facilitate for 4 bit weights and activations. This is achieved by introducing a loss function that consists of a linear combination of two components: one that aims to minimize the error of the network on the training labels of the dataset and one that aims to minimize the discrepancy of the model output with respect to the output of the model when the weights and activations are quantized. The authors argue that this has the effect of “guiding” the optimization procedure in finding networks that can be quantized without loss of performance. For the discrepancy metric the authors use the KL divergence from the predictive distribution of the floating point model to the one of the quantized model. The authors then propose several extra techniques that boost the performance of their method: 1. 
scheduling the weighting coefficients of the two loss terms (something which reminisces iterative pruning methods), 2. stopping the gradient of the floating point model w.r.t. the second loss term, 3. learning the parameters of the uniform quantizer, 4. alternating optimization between the weights and the parameters of the quantizers and 5. using separate batch normalization statistics for the floating point and quantized models. The authors then evaluate their method on Imagenet classification using ResNet-18 and Mobilenet v1 / v2, while also performing an ablation study about the extra tricks that they propose.", "role": "assistant" } ]
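The GQ-Net record above defines a two-part objective, L = ω_f·L_f + ω_q·L_q, where L_f is the cross-entropy of the floating-point model and L_q is the KL divergence from its softmax output to that of the quantized copy. The sketch below is an illustration of that loss in PyTorch, not the authors' implementation; the optional detach mirrors the stop-gradient trick listed in the review summary, and all argument names are placeholders.

```python
# Minimal sketch of the guided-quantization loss from the record above.
# Model and quantizer wiring are omitted; only the two loss terms are shown.
import torch
import torch.nn.functional as F

def gq_loss(full_logits, quant_logits, labels, w_f=1.0, w_q=1.0,
            stop_grad_fp=True):
    # Task loss on the floating-point branch: exact gradients, no STE needed.
    l_f = F.cross_entropy(full_logits, labels)
    # Distributional loss: KL( softmax(full) || softmax(quantized) ).
    p_f = F.softmax(full_logits, dim=1)
    if stop_grad_fp:
        p_f = p_f.detach()                 # guide the quantized branch only
    log_q = F.log_softmax(quant_logits, dim=1)
    l_q = F.kl_div(log_q, p_f, reduction="batchmean")
    return w_f * l_f + w_q * l_q
```

Because the same weights feed both branches, the two terms can pull in different directions; the review summary notes that scheduling ω_f and ω_q over training is one of the tricks used to keep the optimization stable.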
science.discomat_te
| Glass | Gas | Temperature range (degC) | S 0 (x1016 atoms/gm-atm) | -DH S (cal/mol) | | SiO2 (present study) | He | 69-861 | 8.34 | 710 | | SiO2 (Shelby) | He | 103-636 | 9.98 | 450 | | SiO2 (literature avg.) | He | 25-1050 | 6.73 | 760 | | | | | | | | GeO2 (present study) | He | 115-790 | 4.74 | 150 | | GeO2 (Shelby) | He | 110-176 | 3.43 | 240 | | GeO2 (Doremus) | He | 300-500 | 4.23 | 0 | | | | | | | | SiO2 (present study) | Ne | 271-846 | 3.77 | 1540 | | SiO2 (Shelby) | Ne | 194-457 | 3.80 | 1320 | | SiO2 (literature avg.) | Ne | 25-1050 | 3.16 | 1660 | | | | | | | | GeO2 (present study) | Ne | 101-756 | 4.19 | 0 | | GeO2 (Doremus) | Ne | 300-500 | 4.23 | 0 | Caption: Temperature dependence parameters for helium and neon solubility in vitreous silica and germania Examine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance: {"value": "xx", "type": "Other"} {"value": "xx", "type": "Glass_Compound_Amount", "constituent": "xx", "unit": "xx", "material": "xx"} Process each cell sequentially, and for any unknown attributes, use "xx" as a placeholder. Each JSON should be on a new line. Output:
{"value": "69", "type": "Other"} {"value": "861", "type": "Other"} {"value": "8.34", "type": "Other"} {"value": "710", "type": "Other"} {"value": "103", "type": "Other"} {"value": "636", "type": "Other"} {"value": "9.98", "type": "Other"} {"value": "450", "type": "Other"} {"value": "25", "type": "Other"} {"value": "1050", "type": "Other"} {"value": "6.73", "type": "Other"} {"value": "760", "type": "Other"} {"value": "115", "type": "Other"} {"value": "790", "type": "Other"} {"value": "4.74", "type": "Other"} {"value": "150", "type": "Other"} {"value": "110", "type": "Other"} {"value": "176", "type": "Other"} {"value": "3.43", "type": "Other"} {"value": "240", "type": "Other"} {"value": "300", "type": "Other"} {"value": "500", "type": "Other"} {"value": "4.23", "type": "Other"} {"value": "0", "type": "Other"} {"value": "271", "type": "Other"} {"value": "846", "type": "Other"} {"value": "3.77", "type": "Other"} {"value": "1540", "type": "Other"} {"value": "194", "type": "Other"} {"value": "457", "type": "Other"} {"value": "3.80", "type": "Other"} {"value": "1320", "type": "Other"} {"value": "25", "type": "Other"} {"value": "1050", "type": "Other"} {"value": "3.16", "type": "Other"} {"value": "1660", "type": "Other"} {"value": "101", "type": "Other"} {"value": "756", "type": "Other"} {"value": "4.19", "type": "Other"} {"value": "0", "type": "Other"} {"value": "300", "type": "Other"} {"value": "500", "type": "Other"} {"value": "4.23", "type": "Other"} {"value": "0", "type": "Other"}
science.discomat_te.370
[ { "content": "| Glass | Gas | Temperature range (degC) | S 0 (x1016 atoms/gm-atm) | -DH S (cal/mol) |\n| SiO2 (present study) | He | 69-861 | 8.34 | 710 |\n| SiO2 (Shelby) | He | 103-636 | 9.98 | 450 |\n| SiO2 (literature avg.) | He | 25-1050 | 6.73 | 760 |\n| | | | | |\n| GeO2 (present study) | He | 115-790 | 4.74 | 150 |\n| GeO2 (Shelby) | He | 110-176 | 3.43 | 240 |\n| GeO2 (Doremus) | He | 300-500 | 4.23 | 0 |\n| | | | | |\n| SiO2 (present study) | Ne | 271-846 | 3.77 | 1540 |\n| SiO2 (Shelby) | Ne | 194-457 | 3.80 | 1320 |\n| SiO2 (literature avg.) | Ne | 25-1050 | 3.16 | 1660 |\n| | | | | |\n| GeO2 (present study) | Ne | 101-756 | 4.19 | 0 |\n| GeO2 (Doremus) | Ne | 300-500 | 4.23 | 0 |\n\nCaption: Temperature dependence parameters for helium and neon solubility in vitreous silica and germania\n\nExamine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:\n{\"value\": \"xx\", \"type\": \"Other\"}\n{\"value\": \"xx\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"xx\", \"unit\": \"xx\", \"material\": \"xx\"}\n\nProcess each cell sequentially, and for any unknown attributes, use \"xx\" as a placeholder. Each JSON should be on a new line.\n\nOutput:", "role": "user" }, { "content": "{\"value\": \"69\", \"type\": \"Other\"}\n{\"value\": \"861\", \"type\": \"Other\"}\n{\"value\": \"8.34\", \"type\": \"Other\"}\n{\"value\": \"710\", \"type\": \"Other\"}\n{\"value\": \"103\", \"type\": \"Other\"}\n{\"value\": \"636\", \"type\": \"Other\"}\n{\"value\": \"9.98\", \"type\": \"Other\"}\n{\"value\": \"450\", \"type\": \"Other\"}\n{\"value\": \"25\", \"type\": \"Other\"}\n{\"value\": \"1050\", \"type\": \"Other\"}\n{\"value\": \"6.73\", \"type\": \"Other\"}\n{\"value\": \"760\", \"type\": \"Other\"}\n{\"value\": \"115\", \"type\": \"Other\"}\n{\"value\": \"790\", \"type\": \"Other\"}\n{\"value\": \"4.74\", \"type\": \"Other\"}\n{\"value\": \"150\", \"type\": \"Other\"}\n{\"value\": \"110\", \"type\": \"Other\"}\n{\"value\": \"176\", \"type\": \"Other\"}\n{\"value\": \"3.43\", \"type\": \"Other\"}\n{\"value\": \"240\", \"type\": \"Other\"}\n{\"value\": \"300\", \"type\": \"Other\"}\n{\"value\": \"500\", \"type\": \"Other\"}\n{\"value\": \"4.23\", \"type\": \"Other\"}\n{\"value\": \"0\", \"type\": \"Other\"}\n{\"value\": \"271\", \"type\": \"Other\"}\n{\"value\": \"846\", \"type\": \"Other\"}\n{\"value\": \"3.77\", \"type\": \"Other\"}\n{\"value\": \"1540\", \"type\": \"Other\"}\n{\"value\": \"194\", \"type\": \"Other\"}\n{\"value\": \"457\", \"type\": \"Other\"}\n{\"value\": \"3.80\", \"type\": \"Other\"}\n{\"value\": \"1320\", \"type\": \"Other\"}\n{\"value\": \"25\", \"type\": \"Other\"}\n{\"value\": \"1050\", \"type\": \"Other\"}\n{\"value\": \"3.16\", \"type\": \"Other\"}\n{\"value\": \"1660\", \"type\": \"Other\"}\n{\"value\": \"101\", \"type\": \"Other\"}\n{\"value\": \"756\", \"type\": \"Other\"}\n{\"value\": \"4.19\", \"type\": \"Other\"}\n{\"value\": \"0\", \"type\": \"Other\"}\n{\"value\": \"300\", \"type\": \"Other\"}\n{\"value\": \"500\", \"type\": \"Other\"}\n{\"value\": \"4.23\", \"type\": \"Other\"}\n{\"value\": \"0\", \"type\": \"Other\"}", "role": "assistant" } ]
science.multicite_intent_classification
Examine the text below, which includes a citation marked by "<cite>" and "</cite>". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text. Provide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose. Text with Citation: Although the BOSWE model can be regarded as a shallow approach, its overall results are comparable to those of deep learning approaches (<cite>Dong and Zhang, 2016</cite>; Tay et al., 2018) .
["Similarities"]
science.multicite_intent_classification.194
[ { "content": "Examine the text below, which includes a citation marked by \"<cite>\" and \"</cite>\". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.\n\nProvide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.\n\nText with Citation:\nAlthough the BOSWE model can be regarded as a shallow approach, its overall results are comparable to those of deep learning approaches (<cite>Dong and Zhang, 2016</cite>; Tay et al., 2018) .", "role": "user" }, { "content": "[\"Similarities\"]", "role": "assistant" } ]
science.evidence_inference
Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as: - Treatment: The specific medical procedure or drug being evaluated. - Control: The standard or placebo used for comparison. If absent, use `null`. - Result: The health outcome being assessed. - Impact: The observed effect of the treatment on the result, categorized as: ("increased", "decreased", "no change"). - Justification: The supporting details for the observed impact. Format the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>]. Abstract: ABSTRACT.PURPOSE: A link between poor sleep quality and Alzheimer's disease (AD) has recently been suggested. Since endogenous melatonin levels are already reduced at preclinical AD stages, it is important to ask whether replenishing the missing hormone would be beneficial in AD and whether any such effects would be related to the presence of sleep disorder in patients. ABSTRACT.PATIENTS AND METHODS: The effects of add-on prolonged-release melatonin (PRM) (2 mg) to standard therapy on cognitive functioning and sleep were investigated in 80 patients (men [50.7%], women [49.3%], average age 75.3 years [range, 52–85 years]) diagnosed with mild to moderate AD, with and without insomnia comorbidity, and receiving standard therapy (acetylcholinesterase inhibitors with or without memantine). In this randomized, double-blind, parallel-group study, patients were treated for 2 weeks with placebo and then randomized (1:1) to receive 2 mg of PRM or placebo nightly for 24 weeks, followed by 2 weeks placebo. The AD Assessment Scale–Cognition (ADAS-Cog), Instrumental Activities of Daily Living (IADL), Mini–Mental State Examination (MMSE), sleep, as assessed by the Pittsburgh Sleep Quality Index (PSQI) and a daily sleep diary, and safety parameters were measured. ABSTRACT.RESULTS: Patients treated with PRM (24 weeks) had significantly better cognitive performance than those treated with placebo, as measured by the IADL (P=0.004) and MMSE (P=0.044). Mean ADAS-Cog did not differ between the groups. Sleep efficiency, as measured by the PSQI, component 4, was also better with PRM (P=0.017). In the comorbid insomnia (PSQI ≥6) subgroup, PRM treatment resulted in significant and clinically meaningful effects versus the placebo, in mean IADL (P=0.032), MMSE score (+1.5 versus −3 points) (P=0.0177), and sleep efficiency (P=0.04). Median ADAS-Cog values (−3.5 versus +3 points) (P=0.045) were significantly better with PRM. Differences were more significant at longer treatment duration. PRM was well tolerated, with an adverse event profile similar to that of placebo. ABSTRACT.CONCLUSION: Add-on PRM has positive effects on cognitive functioning and sleep maintenance in AD patients compared with placebo, particularly in those with insomnia comorbidity. The results suggest a possible causal link between poor sleep and cognitive decline. Extracted Data:
[["Prolonged-release melatonin", "Placebo", "IADL cognitive performance ", "significantly increased", "Patients treated with PRM (24 weeks) had significantly better cognitive performance than those treated with placebo, as measured by the IADL (P=0.004) and MMSE (P=0.044)."], ["Prolonged-release melatonin", "Placebo", "MMSE cognitive performance ", "significantly increased", "Patients treated with PRM (24 weeks) had significantly better cognitive performance than those treated with placebo, as measured by the IADL (P=0.004) and MMSE (P=0.044)."], ["Prolonged-release melatonin", "Placebo", "Pittsburgh Sleep Quality Index sleep efficiency", "significantly increased", "Sleep efficiency, as measured by the PSQI, component 4, was also better with PRM (P=0.017)."], ["Prolonged-release melatonin", "Placebo", "Adverse events ", "no significant difference", "Differences were more significant at longer treatment duration. PRM was well tolerated, with an adverse event profile similar to that of placebo."]]
science.evidence_inference.134
[ { "content": "Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:\n\n- Treatment: The specific medical procedure or drug being evaluated.\n- Control: The standard or placebo used for comparison. If absent, use `null`.\n- Result: The health outcome being assessed.\n- Impact: The observed effect of the treatment on the result, categorized as: (\"increased\", \"decreased\", \"no change\").\n- Justification: The supporting details for the observed impact.\n\nFormat the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].\n\nAbstract:\nABSTRACT.PURPOSE: A link between poor sleep quality and Alzheimer's disease (AD) has recently been suggested. Since endogenous melatonin levels are already reduced at preclinical AD stages, it is important to ask whether replenishing the missing hormone would be beneficial in AD and whether any such effects would be related to the presence of sleep disorder in patients.\n\nABSTRACT.PATIENTS AND METHODS: The effects of add-on prolonged-release melatonin (PRM) (2 mg) to standard therapy on cognitive functioning and sleep were investigated in 80 patients (men [50.7%], women [49.3%], average age 75.3 years [range, 52–85 years]) diagnosed with mild to moderate AD, with and without insomnia comorbidity, and receiving standard therapy (acetylcholinesterase inhibitors with or without memantine). In this randomized, double-blind, parallel-group study, patients were treated for 2 weeks with placebo and then randomized (1:1) to receive 2 mg of PRM or placebo nightly for 24 weeks, followed by 2 weeks placebo. The AD Assessment Scale–Cognition (ADAS-Cog), Instrumental Activities of Daily Living (IADL), Mini–Mental State Examination (MMSE), sleep, as assessed by the Pittsburgh Sleep Quality Index (PSQI) and a daily sleep diary, and safety parameters were measured.\n\nABSTRACT.RESULTS: Patients treated with PRM (24 weeks) had significantly better cognitive performance than those treated with placebo, as measured by the IADL (P=0.004) and MMSE (P=0.044). Mean ADAS-Cog did not differ between the groups. Sleep efficiency, as measured by the PSQI, component 4, was also better with PRM (P=0.017). In the comorbid insomnia (PSQI ≥6) subgroup, PRM treatment resulted in significant and clinically meaningful effects versus the placebo, in mean IADL (P=0.032), MMSE score (+1.5 versus −3 points) (P=0.0177), and sleep efficiency (P=0.04). Median ADAS-Cog values (−3.5 versus +3 points) (P=0.045) were significantly better with PRM. Differences were more significant at longer treatment duration. PRM was well tolerated, with an adverse event profile similar to that of placebo.\n\nABSTRACT.CONCLUSION: Add-on PRM has positive effects on cognitive functioning and sleep maintenance in AD patients compared with placebo, particularly in those with insomnia comorbidity. 
The results suggest a possible causal link between poor sleep and cognitive decline.\n\nExtracted Data:", "role": "user" }, { "content": "[[\"Prolonged-release melatonin\", \"Placebo\", \"IADL cognitive performance \", \"significantly increased\", \"Patients treated with PRM (24 weeks) had significantly better cognitive performance than those treated with placebo, as measured by the IADL (P=0.004) and MMSE (P=0.044).\"], [\"Prolonged-release melatonin\", \"Placebo\", \"MMSE cognitive performance \", \"significantly increased\", \"Patients treated with PRM (24 weeks) had significantly better cognitive performance than those treated with placebo, as measured by the IADL (P=0.004) and MMSE (P=0.044).\"], [\"Prolonged-release melatonin\", \"Placebo\", \"Pittsburgh Sleep Quality Index sleep efficiency\", \"significantly increased\", \"Sleep efficiency, as measured by the PSQI, component 4, was also better with PRM (P=0.017).\"], [\"Prolonged-release melatonin\", \"Placebo\", \"Adverse events \", \"no significant difference\", \"Differences were more significant at longer treatment duration. PRM was well tolerated, with an adverse event profile similar to that of placebo.\"]]", "role": "assistant" } ]
science.discomat_te
| | Glass | Glass | Glass | Glass | | | G1 | G7 | G8 | G9 | | Calculated thickness (micrometers) | 26.70+-2.60 | 13.39+-4.98 | 23.90+-1.80 | 25.00+-4.02 | | Experimental thickness (micrometers) | 26.32+-3.00 | 16.11+-3.00 | 22.75+-3.00 | 23.70+-3.00 | | Agreement | Yes | Yes | Yes | Yes | Caption: Verification of the model validity for the thickness study. Examine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance: {"value": "xx", "type": "Other"} {"value": "xx", "type": "Glass_Compound_Amount", "constituent": "xx", "unit": "xx", "material": "xx"} Process each cell sequentially, and for any unknown attributes, use "xx" as a placeholder. Each JSON should be on a new line. Output:
{"value": "26.70", "type": "Other"} {"value": "13.39", "type": "Other"} {"value": "23.90", "type": "Other"} {"value": "25.00", "type": "Other"} {"value": "26.32", "type": "Other"} {"value": "16.11", "type": "Other"} {"value": "22.75", "type": "Other"} {"value": "23.70", "type": "Other"}
science.discomat_te.454
[ { "content": "| | Glass | Glass | Glass | Glass |\n| | G1 | G7 | G8 | G9 |\n| Calculated thickness (micrometers) | 26.70+-2.60 | 13.39+-4.98 | 23.90+-1.80 | 25.00+-4.02 |\n| Experimental thickness (micrometers) | 26.32+-3.00 | 16.11+-3.00 | 22.75+-3.00 | 23.70+-3.00 |\n| Agreement | Yes | Yes | Yes | Yes |\n\nCaption: Verification of the model validity for the thickness study.\n\nExamine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:\n{\"value\": \"xx\", \"type\": \"Other\"}\n{\"value\": \"xx\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"xx\", \"unit\": \"xx\", \"material\": \"xx\"}\n\nProcess each cell sequentially, and for any unknown attributes, use \"xx\" as a placeholder. Each JSON should be on a new line.\n\nOutput:", "role": "user" }, { "content": "{\"value\": \"26.70\", \"type\": \"Other\"}\n{\"value\": \"13.39\", \"type\": \"Other\"}\n{\"value\": \"23.90\", \"type\": \"Other\"}\n{\"value\": \"25.00\", \"type\": \"Other\"}\n{\"value\": \"26.32\", \"type\": \"Other\"}\n{\"value\": \"16.11\", \"type\": \"Other\"}\n{\"value\": \"22.75\", \"type\": \"Other\"}\n{\"value\": \"23.70\", \"type\": \"Other\"}", "role": "assistant" } ]
science.scifact_entailment
Determine the relationship between them and generate a JSON response. Scientific Claim: ERAP1 SNPs are epistatic to HLA alleles for ankylosing spondylitis. Research Abstract: Ankylosing spondylitis (AS) is a common, highly heritable, inflammatory arthritis for which HLA-B*27 is the major genetic risk factor, although its role in the aetiology of AS remains elusive. To better understand the genetic basis of the MHC susceptibility loci, we genotyped 7,264 MHC SNPs in 22,647 AS cases and controls of European descent. We impute SNPs, classical HLA alleles and amino-acid residues within HLA proteins, and tested these for association to AS status. Here we show that in addition to effects due to HLA-B*27 alleles, several other HLA-B alleles also affect susceptibility. After controlling for the associated haplotypes in HLA-B, we observe independent associations with variants in the HLA-A, HLA-DPB1 and HLA-DRB1 loci. We also demonstrate that the ERAP1 SNP rs30187 association is not restricted only to carriers of HLA-B*27 but also found in HLA-B*40:01 carriers independently of HLA-B*27 genotype.
{"verdict": "SUPPORT", "evidence": ["We also demonstrate that the ERAP1 SNP rs30187 association is not restricted only to carriers of HLA-B*27 but also found in HLA-B*40:01 carriers independently of HLA-B*27 genotype."]}
science.scifact_entailment.239
[ { "content": "Determine the relationship between them and generate a JSON response. Scientific Claim: \nERAP1 SNPs are epistatic to HLA alleles for ankylosing spondylitis.\n\nResearch Abstract:\nAnkylosing spondylitis (AS) is a common, highly heritable, inflammatory arthritis for which HLA-B*27 is the major genetic risk factor, although its role in the aetiology of AS remains elusive.\nTo better understand the genetic basis of the MHC susceptibility loci, we genotyped 7,264 MHC SNPs in 22,647 AS cases and controls of European descent.\nWe impute SNPs, classical HLA alleles and amino-acid residues within HLA proteins, and tested these for association to AS status.\nHere we show that in addition to effects due to HLA-B*27 alleles, several other HLA-B alleles also affect susceptibility.\nAfter controlling for the associated haplotypes in HLA-B, we observe independent associations with variants in the HLA-A, HLA-DPB1 and HLA-DRB1 loci.\nWe also demonstrate that the ERAP1 SNP rs30187 association is not restricted only to carriers of HLA-B*27 but also found in HLA-B*40:01 carriers independently of HLA-B*27 genotype.", "role": "user" }, { "content": "{\"verdict\": \"SUPPORT\", \"evidence\": [\"We also demonstrate that the ERAP1 SNP rs30187 association is not restricted only to carriers of HLA-B*27 but also found in HLA-B*40:01 carriers independently of HLA-B*27 genotype.\"]}", "role": "assistant" } ]