dataset (string, 9 distinct values) | input (string, 351–18.6k chars) | output (string, 8–7.48k chars) | id (string, 20–47 chars) | messages (list of 2 chat turns) |
---|---|---|---|---|
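For readers who want to work with rows in this shape, here is a minimal sketch of loading and inspecting the dataset with the Hugging Face `datasets` library. The hub identifier `your-org/your-dataset` is a placeholder (the real ID is not given in this dump), and the column names follow the schema in the header above; each `messages` entry is the same example rendered as a two-turn chat (user prompt, assistant answer).

```python
# Minimal sketch: load the dataset and inspect one row.
# Assumption: "your-org/your-dataset" is a placeholder hub ID; replace it with the
# real identifier, or point load_dataset at local JSON/JSONL files instead.
from datasets import load_dataset

ds = load_dataset("your-org/your-dataset", split="train")

# Columns per the schema above: dataset, input, output, id, messages
print(ds.column_names)

row = ds[0]
print(row["dataset"], row["id"])
print(row["input"][:200])   # prompt text
print(row["output"][:200])  # reference answer (often a JSON string)

# Each `messages` entry is a two-turn chat: user prompt, then assistant answer.
for msg in row["messages"]:
    print(msg["role"], "->", msg["content"][:80])
```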
science.qasper_abstractive_qa | Examine the provided sections from a research document and address the accompanying query. The text is divided by newlines.
Document Content:
Is change the only constant? Profile change perspective on #LokSabhaElections2019
Users on Twitter are identified with the help of their profile attributes that consists of username, display name, profile image, to name a few. The profile attributes that users adopt can reflect their interests, belief, or thematic inclinations. Literature has proposed the implications and significance of profile attribute change for a random population of users. However, the use of profile attribute for endorsements and to start a movement have been under-explored. In this work, we consider #LokSabhaElections2019 as a movement and perform a large-scale study of the profile of users who actively made changes to profile attributes centered around #LokSabhaElections2019. We collect the profile metadata for 49.4M users for a period of 2 months from April 5, 2019 to June 5, 2019 amid #LokSabhaElections2019. We investigate how the profile changes vary for the influential leaders and their followers over the social movement. We further differentiate the organic and inorganic ways to show the political inclination from the prism of profile changes. We report how the addition of election campaign related keywords lead to spread of behavior contagion and further investigate it with respect to "Chowkidar Movement" in detail.
Profile Attribute Analysis ::: Political Handles
Increase in the Internet user-base and widespread adoption of smartphones in the country has led to an increase in the importance of social media campaigns during the elections. This is evident by the fact that in our dataset (Set $U$), we found that a total of 54 handles got verified throughout 61 snapshots. Moreover the average number of followers of handles in set $P$ increased from $3,08,716.84$ to $3,19,881.09$ over the 61 snapshots. Given that the average followers of set $P$ increased by approximately $3\%$ and 54 profile handles got verified in the span of 61 days, we believe that efforts of leaders on social media campaign can't be denied.
We consider 5 major profile attributes for further analysis. These attributes are username, display name, profile image, location and description respectively. The username is a unique handle or screen name associated with each twitter user that appears on the profile URL and is used to communicate with each other on Twitter. Some username examples are $@narendramodi$, $@RahulGandhi$ etc. The display name on the other hand, is merely a personal identifier that is displayed on the profile page of a user, e.g. `Narendra Modi', `Rahul Gandhi', etc. The description is a string to describe the account, while the location is user-defined profile location. We considered the 5 profile attributes as stated above since these attributes are key elements of a users identity and saliently define users likes and values BIBREF2.
For the set of users in $P$, the total number of changes in these 5 profile attributes over 61 snapshots is 974. Figure FIGREF12 shows the distribution of profile changes for specific attributes over all the 61 snapshots. The most changed attribute among the political handles is the profile image followed by display name. In the same Figure, there is a sharp increase in the number of changes to the display name attribute on 23rd May, 2019. The increase is because @narendramodi removed Chowkidar from the display name of his Twitter handle which resulted in many other political handles doing the same. Another interesting trend is that none of the handles changed their username over all the snapshots. This could be attributed to the fact that Twitter discourages a change of usernames for verified accounts as the verified status might not remain intact after the change BIBREF8.
Profile Attribute Analysis ::: Follower Handles
To characterize the profile changing behavior amongst the followers, we study the profile snapshots of all the users in set $S$ over 9 snapshots.
Of all the active user accounts, $1,363,499$ users made at least one change to their profile attributes over all the snapshots. Thus, over $3\%$ of all the active users engaged in some form of profile change activity over the 9 snapshots. Of these $1,363,499$ users, more than $95\%$ of the users made only 1 change to their profile, and around 275 users made more than 20 changes. On an average, each one of these $1,363,499$ users made about 2 changes to their profiles with a standard deviation of $1.42$.
In order to compare the profile change behavior of political accounts and follower accounts, we analyzed the profile changes of set $\mathbf {P}$ over the 9 snapshots only. For the same 9 snapshots, $54.9\%$ of all users in set $\mathbf {P}$ made at least one change to their profile attributes. This is in sharp contrast to the the users of set $\mathbf {S}$, where only $3\%$ of users made any changes to their profile attributes. Similarly, users of set $\mathbf {P}$ made $75.32\%$ changes to their profiles on an average, which is 25 times more than the set $\mathbf {S}$.
INSIGHT 1: Political handles are more likely to engage in profile changing behavior as compared to their followers.
In Figure FIGREF15, we plot the number of changes in given attributes over all the snapshots for the users in set $S$. From this plot, we find that not all the attributes are modified at an equal rate. Profile image and Location are the most changed profile attributes and account for nearly $34\%$ and $25\%$ respectively of the total profile changes in our dataset. We analyze the trends in Figure FIGREF12 and find that the political handles do not change their usernames at all. This is in contrast to the trend in Figure FIGREF15 where we see that there are a lot of handles that change their usernames multiple times. The most likely reason for the same is that most of the follower handles are not verified and would not loose their verified status on changing their username.
The follower handles also use their usernames to show support to a movement or person and revert back to their original username later. Out of the $79,991$ username changes, $1,711$ users of set ${S}$ went back and forth to the same name which included adding of Election related keywords. Examples of few such cases in our dataset are shown in Table TABREF16.
INSIGHT 2: Users do not change all profile attributes equally, and the profile attributes that political and follower handles focus on are different.
Profile Attribute Analysis ::: Similarity analysis
Change in an attribute in the profile involves replacing an existing attribute with a new one. We analyze the similarity between the current, new display names and usernames used by users in set $\mathbf {P}$ and $\mathbf {S}$ respectively. For this analysis, we consider only the users who have changed their username or display name at least once in all 9 snapshots. We use the Longest Common Subsequence (LCS) to compute the similarity between two strings. Thus, for each user, we compute the average LCS score between all possible pairs of unique display names and usernames adopted by them. A high average LCS score indicates that the newer username or display name were not significantly different from the previous ones.
The graph in Figure FIGREF18 shows that nearly $80\%$ of the users in set $S$ made more significant changes to their Display Names as compared to the political handles. (This can be seen by considering a horizontal line on the graph. For any given fraction of users, the average LCS score of set $S$ (Followers) is lesser than set $P$ (Politicians) for approximately $80\%$ percent of users). For around $10\%$ of the users in set $P$, the new and old profile attributes were unrelated (LCS score < $0.5$) and for users in set $S$, this value went up to $30\%$.
INSIGHT 3: Political handles tend to make new changes related to previous attribute values. However, the followers make comparatively less related changes to previous attribute values.
Political support through profile meta data
In this section, we first describe the phenomenon of mention of political parties names in the profile attributes of users. This is followed by the analysis of profiles that make specific mentions of political handles in their profile attributes. Both of these constitute an organic way of showing support to a party and does not involve any direct campaigning by the parties. We also study in detail the #MainBhiChowkidar campaign and analyze the corresponding change in profile attributes associated with it.
Political support through profile meta data ::: Who follows who
We observe that a lot of users in our dataset wrote that they are proud to be followed by a famous political handle in their description. We show examples of such cases in Table TABREF22.
We perform an exhaustive search of all such mentions in the description attribute of the users and find $1,164$ instances of this phenomenon. This analysis and the party mention analysis in Section SECREF19, both testify that people often display their political inclination on Twitter via their profile attributes itself.
INSIGHT 4: Twitter users often display their political inclinations in their profile attributes itself and are pretty open about it.
Related Work ::: Adoption of behavior from leaders
The real and virtual worlds are intertwined in a way that any speech, event, or action of leaders may trigger changes in the behavior of people online BIBREF1, BIBREF21. Opinion leaders have been defined as `the individuals who were likely to influence other persons in their immediate environment.' BIBREF22 Opinion leaders on social media platforms make it easier for their followers to adopt behaviors BIBREF9, BIBREF6. The performance and effectiveness of the opinion leader directly affect the contagion of the leader BIBREF9. On Twitter, opinion leaders show high motivation for information seeking and public expression. Moreover, they make a significant contribution to people's political involvement BIBREF21.
Our work contributes to this literature in two major ways. First, we investigate how followers' profile attribute changes are similar to or different from those of the influential leaders in the country, and how much of these changes are driven by the leaders. Then we see how users change their profile information in the midst of an election in the largest democracy in the world.
Discussion
In this paper, we study the use of profile attributes on Twitter as a major form of endorsement during #LokSabhaElections2019. We identify $1,293$ verified political leaders' profiles and extract $49,433,640$ unique followers. We collect the daily and weekly snapshots of the political leaders and the follower handles from 5th April, 2019 to 5th June, 2019. Due to the constraints of the Twitter API, we collected a total of 9 snapshots of the followers' profiles and 61 snapshots of the political handles. We consider the 5 major attributes for the analysis: Profile Name, Username, Description, Profile Image, and Location.
Firstly, we analyze the most changed attributes among the 5 major attributes. We find that most of the users made at most 4 changes during the election campaign, with profile image being the most changed attribute for both the political and the follower handles. We also show that political handles made more changes to profile attributes than the follower handles. About $80\%$ of the changes made by the followers in the period of data collection were related to #LokSabhaElections2019. While the profile handles made changes, the changes mostly involved going back and forth to the same profile attribute, with the changed attribute signaling active support for a political party or political leader.
We argue that we can predict the political inclination of a user using just the user's profile attributes. We further show that the presence of a party name in the profile attributes can be considered organic behavior and signals support for a party. However, we argue that the addition of election campaign related keywords to the profile is a form of inorganic behavior. The inorganic behavior analysis is in line with behavior contagion, where the followers tend to adapt to the behavior of their opinion leaders. The “Chowkidar movement” showed a similar effect in #LokSabhaElections2019, evident in how other political leaders and followers of the BJP party added Chowkidar to their profile attributes after @narendramodi did. We thus argue that people don't shy away from showing support for political parties through profile information.
We believe our analysis will be helpful in understanding how social movements like elections are perceived and sustained through profile attributes on social media. Our findings provide baseline data for studies of election campaigns and for further analysis of people's perspectives on elections.
Figure 4: Distribution of the last tweet time of Twitter users in set S. Only 5 million users have tweeted since the start of 2019, supporting the hypothesis that only a minority of users are responsible for most of the political opinions.
Table 2: Example of users who changed their username once and later reverted back to their old username.
Figure 8: The tweet in which Narendra Modi urges Twitter users to remove Chowkidar from their profiles.
Query: How do profile changes vary for influential leads and their followers over the social movement?
JSON Response: | {"answer": "Influential leaders are more likely to change their profile attributes than their followers; the leaders do not change their usernames, while their followers change their usernames a lot; the leaders tend to make new changes related to previous attribute values, while the followers make comparatively less related changes to previous attribute values.", "evidence": ["INSIGHT 1: Political handles are more likely to engage in profile changing behavior as compared to their followers.", "In Figure FIGREF15, we plot the number of changes in given attributes over all the snapshots for the users in set $S$. From this plot, we find that not all the attributes are modified at an equal rate. Profile image and Location are the most changed profile attributes and account for nearly $34\\%$ and $25\\%$ respectively of the total profile changes in our dataset. We analyze the trends in Figure FIGREF12 and find that the political handles do not change their usernames at all. This is in contrast to the trend in Figure FIGREF15 where we see that there are a lot of handles that change their usernames multiple times. The most likely reason for the same is that most of the follower handles are not verified and would not loose their verified status on changing their username.", "INSIGHT 3: Political handles tend to make new changes related to previous attribute values. However, the followers make comparatively less related changes to previous attribute values."]} | science.qasper_abstractive_qa.188.1 | [
{
"content": "Examine the provided sections from a research document and address the accompanying query. The text is divided by newlines.\n\nDocument Content:\n\nIs change the only constant? Profile change perspective on #LokSabhaElections2019\n\nUsers on Twitter are identified with the help of their profile attributes that consists of username, display name, profile image, to name a few. The profile attributes that users adopt can reflect their interests, belief, or thematic inclinations. Literature has proposed the implications and significance of profile attribute change for a random population of users. However, the use of profile attribute for endorsements and to start a movement have been under-explored. In this work, we consider #LokSabhaElections2019 as a movement and perform a large-scale study of the profile of users who actively made changes to profile attributes centered around #LokSabhaElections2019. We collect the profile metadata for 49.4M users for a period of 2 months from April 5, 2019 to June 5, 2019 amid #LokSabhaElections2019. We investigate how the profile changes vary for the influential leaders and their followers over the social movement. We further differentiate the organic and inorganic ways to show the political inclination from the prism of profile changes. We report how the addition of election campaign related keywords lead to spread of behavior contagion and further investigate it with respect to \"Chowkidar Movement\" in detail.\n\nProfile Attribute Analysis ::: Political Handles\nIncrease in the Internet user-base and widespread adoption of smartphones in the country has led to an increase in the importance of social media campaigns during the elections. This is evident by the fact that in our dataset (Set $U$), we found that a total of 54 handles got verified throughout 61 snapshots. Moreover the average number of followers of handles in set $P$ increased from $3,08,716.84$ to $3,19,881.09$ over the 61 snapshots. Given that the average followers of set $P$ increased by approximately $3\\%$ and 54 profile handles got verified in the span of 61 days, we believe that efforts of leaders on social media campaign can't be denied.\nWe consider 5 major profile attributes for further analysis. These attributes are username, display name, profile image, location and description respectively. The username is a unique handle or screen name associated with each twitter user that appears on the profile URL and is used to communicate with each other on Twitter. Some username examples are $@narendramodi$, $@RahulGandhi$ etc. The display name on the other hand, is merely a personal identifier that is displayed on the profile page of a user, e.g. `Narendra Modi', `Rahul Gandhi', etc. The description is a string to describe the account, while the location is user-defined profile location. We considered the 5 profile attributes as stated above since these attributes are key elements of a users identity and saliently define users likes and values BIBREF2.\nFor the set of users in $P$, the total number of changes in these 5 profile attributes over 61 snapshots is 974. Figure FIGREF12 shows the distribution of profile changes for specific attributes over all the 61 snapshots. The most changed attribute among the political handles is the profile image followed by display name. In the same Figure, there is a sharp increase in the number of changes to the display name attribute on 23rd May, 2019. 
The increase is because @narendramodi removed Chowkidar from the display name of his Twitter handle which resulted in many other political handles doing the same. Another interesting trend is that none of the handles changed their username over all the snapshots. This could be attributed to the fact that Twitter discourages a change of usernames for verified accounts as the verified status might not remain intact after the change BIBREF8.\n\nProfile Attribute Analysis ::: Follower Handles\nTo characterize the profile changing behavior amongst the followers , we study the profile snapshots of all the users in set $S$ over 9 snapshots.\nOf all the active user accounts, $1,363,499$ users made at least one change to their profile attributes over all the snapshots. Thus, over $3\\%$ of all the active users engaged in some form of profile change activity over the 9 snapshots. Of these $1,363,499$ users, more than $95\\%$ of the users made only 1 change to their profile, and around 275 users made more than 20 changes. On an average, each one of these $1,363,499$ users made about 2 changes to their profiles with a standard deviation of $1.42$.\nIn order to compare the profile change behavior of political accounts and follower accounts, we analyzed the profile changes of set $\\mathbf {P}$ over the 9 snapshots only. For the same 9 snapshots, $54.9\\%$ of all users in set $\\mathbf {P}$ made at least one change to their profile attributes. This is in sharp contrast to the the users of set $\\mathbf {S}$, where only $3\\%$ of users made any changes to their profile attributes. Similarly, users of set $\\mathbf {P}$ made $75.32\\%$ changes to their profiles on an average, which is 25 times more than the set $\\mathbf {S}$.\nINSIGHT 1: Political handles are more likely to engage in profile changing behavior as compared to their followers.\nIn Figure FIGREF15, we plot the number of changes in given attributes over all the snapshots for the users in set $S$. From this plot, we find that not all the attributes are modified at an equal rate. Profile image and Location are the most changed profile attributes and account for nearly $34\\%$ and $25\\%$ respectively of the total profile changes in our dataset. We analyze the trends in Figure FIGREF12 and find that the political handles do not change their usernames at all. This is in contrast to the trend in Figure FIGREF15 where we see that there are a lot of handles that change their usernames multiple times. The most likely reason for the same is that most of the follower handles are not verified and would not loose their verified status on changing their username.\nThe follower handles also use their usernames to show support to a movement or person and revert back to their original username later. Out of the $79,991$ username changes, $1,711$ users of set ${S}$ went back and forth to the same name which included adding of Election related keywords. Examples of few such cases in our dataset are shown in Table TABREF16.\nINSIGHT 2: Users do not change all profile attributes equally, and the profile attributes that political and follower handles focus on are different.\n\nProfile Attribute Analysis ::: Similarity analysis\nChange in an attribute in the profile involves replacing an existing attribute with a new one. We analyze the similarity between the current, new display names and usernames used by users in set $\\mathbf {P}$ and $\\mathbf {S}$ respectively. 
For this analysis, we consider only the users who have changed their username or display name at least once in all 9 snapshots. We use the Longest Common Subsequence (LCS) to compute the similarity between two strings. Thus, for each user, we compute the average LCS score between all possible pairs of unique display names and usernames adopted by them. A high average LCS score indicates that the newer username or display name were not significantly different from the previous ones.\nThe graph in Figure FIGREF18 shows that nearly $80\\%$ of the users in set $S$ made more significant changes to their Display Names as compared to the political handles. (This can be seen by considering a horizontal line on the graph. For any given fraction of users, the average LCS score of set $S$ (Followers) is lesser than set $P$ (Politicians) for approximately $80\\%$ percent of users). For around $10\\%$ of the users in set $P$, the new and old profile attributes were unrelated (LCS score < $0.5$) and for users in set $S$, this value went up to $30\\%$.\nINSIGHT 3: Political handles tend to make new changes related to previous attribute values. However, the followers make comparatively less related changes to previous attribute values.\n\nPolitical support through profile meta data\nIn this section, we first describe the phenomenon of mention of political parties names in the profile attributes of users. This is followed by the analysis of profiles that make specific mentions of political handles in their profile attributes. Both of these constitute an organic way of showing support to a party and does not involve any direct campaigning by the parties. We also study in detail the #MainBhiChowkidar campaign and analyze the corresponding change in profile attributes associated with it.\n\nPolitical support through profile meta data ::: Who follows who\nWe observe that a lot of users in our dataset wrote that they are proud to be followed by a famous political handle in their description. We show example of such cases in Table TABREF22.\nWe perform an exhaustive search of all such mentions in the description attribute of the users and find $1,164$ instances of this phenomenon. This analysis and the party mention analysis in Section SECREF19, both testify that people often display their political inclination on Twitter via their profile attributes itself.\nINSIGHT 4: Twitter users often display their political inclinations in their profile attributes itself and are pretty open about it.\n\nRelated Work ::: Adoption of behavior from leaders\nThe real and virtual world are intertwined in a way that any speech, event, or action of leaders may trigger the changes is behavior of people online BIBREF1, BIBREF21. Opinion leaders have been defined as `the individuals who were likely to influence other persons in their immediate environment.' BIBREF22 Opinion leaders on social media platforms help in adoption of behavior by followers easily BIBREF9, BIBREF6. The performance and effectiveness of the opinion leader directly affect the contagion of the leader BIBREF9. On Twitter, opinion leaders show high motivation for information seeking and public expression. Moreover, they make a significant contribution to people's political involvement BIBREF21.\nOur work contributes to this literature in two major ways. First, we investigate how profile attribute changes are similar or different from the influential leaders in the country and how much of these changes are the result of the same. 
Then we see how the users change their profile information in the midst of election in the largest democracy in the world.\n\nDiscussion\nIn this paper, we study the use of profile attributes on Twitter as the major endorsement during #LokSabhaElectios2019. We identify $1,293$ verified political leader's profile and extract $49,433,640$ unique followers. We collect the daily and weekly snapshots of the political leaders and the follower handles from 5th April, 2019 to 5th June, 2019. With the constraint of Twitter API, we collected a total of 9 snapshots of the follower's profile and 61 snapshots of Political handles. We consider the 5 major attributes for the analysis, including, Profile Name, Username, Description, Profile Image, and Location.\nFirstly, we analyze the most changed attributes among the 5 major attributes. We find that most of the users made at-most 4 changes during the election campaign, with profile image being the most changed attribute for both the political as well as follower handles. We also show that political handles made more changes to profile attributes than the follower handles. About $80\\%$ changes made by the followers in the period of data collection was related to #LokSabhaElectios2019. While the profile handles made changes, the changes mostly included going back and forth to same profile attribute, with the change in attribute being active support to a political party or political leader.\nWe argue that we can predict the political inclination of a user using just the profile attribute of the users. We further show that the presence of party name in the profile attribute can be considered as an organic behavior and signals support to a party. However, we argue that the addition of election campaign related keywords to the profile is a form of inorganic behavior. The inorganic behavior analysis falls inline with behavior contagion, where the followers tend to adapt to the behavior of their opinion leaders. The “Chowkidar movement” showed a similar effect in the #LokSabhaElectios2019, which was evident by how the other political leaders and followers of BJP party added chowkidar to their profile attributes after @narendramodi did. We thus, argue that people don't shy away from showing support to political parties through profile information.\nWe believe our analysis will be helpful to understand how social movement like election are perceived and sustained through profile attributes on social media. Our findings provide baseline data for study in the direction of election campaign and to further analyze people's perspective on elections.\n\nFigure 4: Distribution of last tweet time of twitterer in set S . Only 5 million users tweeted since start of 2019, in favor of hypothesis that a minority of users are only responsible for most of the political opinions.\n\nTable 2: Example of users who changed their username once and later reverted back to their old username.\n\nFigure 8: The tweet in which Narendra Modi urges Twitter users to remove Chowkidar from their profiles.\n\nQuery: How do profile changes vary for influential leads and their followers over the social movement?\n\nJSON Response:",
"role": "user"
},
{
"content": "{\"answer\": \"Influential leaders are more likely to change their profile attributes than their followers; the leaders do not change their usernames, while their followers change their usernames a lot; the leaders tend to make new changes related to previous attribute values, while the followers make comparatively less related changes to previous attribute values.\", \"evidence\": [\"INSIGHT 1: Political handles are more likely to engage in profile changing behavior as compared to their followers.\", \"In Figure FIGREF15, we plot the number of changes in given attributes over all the snapshots for the users in set $S$. From this plot, we find that not all the attributes are modified at an equal rate. Profile image and Location are the most changed profile attributes and account for nearly $34\\\\%$ and $25\\\\%$ respectively of the total profile changes in our dataset. We analyze the trends in Figure FIGREF12 and find that the political handles do not change their usernames at all. This is in contrast to the trend in Figure FIGREF15 where we see that there are a lot of handles that change their usernames multiple times. The most likely reason for the same is that most of the follower handles are not verified and would not loose their verified status on changing their username.\", \"INSIGHT 3: Political handles tend to make new changes related to previous attribute values. However, the followers make comparatively less related changes to previous attribute values.\"]}",
"role": "assistant"
}
]
|
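The reference output for the `science.qasper_abstractive_qa` row above is a JSON object with an `answer` string and an `evidence` list of quoted passages. Below is a small sketch of parsing and sanity-checking a model response against that shape; the field names come from the example row, while the function name, error handling, and sample response are illustrative only.

```python
import json

def parse_qa_response(raw: str) -> dict:
    """Parse an abstractive-QA response and check its shape.

    Expected shape (from the example row): {"answer": str, "evidence": [str, ...]}.
    """
    obj = json.loads(raw)  # raises json.JSONDecodeError (a ValueError) on malformed JSON
    if not isinstance(obj.get("answer"), str):
        raise ValueError("missing or non-string 'answer' field")
    evidence = obj.get("evidence")
    if not isinstance(evidence, list) or not all(isinstance(e, str) for e in evidence):
        raise ValueError("'evidence' must be a list of strings")
    return obj

# Usage with a tiny hand-written response:
resp = '{"answer": "Leaders change profiles more often than followers.", "evidence": ["INSIGHT 1: ..."]}'
parsed = parse_qa_response(resp)
print(parsed["answer"], len(parsed["evidence"]))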
science.scifact_entailment | Determine the relationship between them and generate a JSON response. Scientific Claim:
RAD52 is involved in break-induced DNA replication (BIR).
Research Abstract:
Human cancers are characterized by the presence of oncogene-induced DNA replication stress (DRS), making them dependent on repair pathways such as break-induced replication (BIR) for damaged DNA replication forks.
To better understand BIR, we performed a targeted siRNA screen for genes whose depletion inhibited G1 to S phase progression when oncogenic cyclin E was overexpressed.
RAD52, a gene dispensable for normal development in mice, was among the top hits.
In cells in which fork collapse was induced by oncogenes or chemicals, the Rad52 protein localized to DRS foci.
Depletion of Rad52 by siRNA or knockout of the gene by CRISPR/Cas9 compromised restart of collapsed forks and led to DNA damage in cells experiencing DRS.
Furthermore, in cancer-prone, heterozygous APC mutant mice, homozygous deletion of the Rad52 gene suppressed tumor growth and prolonged lifespan.
We therefore propose that mammalian RAD52 facilitates repair of collapsed DNA replication forks in cancer cells. | {"verdict": "SUPPORT", "evidence": ["We therefore propose that mammalian RAD52 facilitates repair of collapsed DNA replication forks in cancer cells."]} | science.scifact_entailment.650 | [
{
"content": "Determine the relationship between them and generate a JSON response. Scientific Claim: \nRAD52 is involved in break-induced DNA replication (BIR).\n\nResearch Abstract:\nHuman cancers are characterized by the presence of oncogene-induced DNA replication stress (DRS), making them dependent on repair pathways such as break-induced replication (BIR) for damaged DNA replication forks.\nTo better understand BIR, we performed a targeted siRNA screen for genes whose depletion inhibited G1 to S phase progression when oncogenic cyclin E was overexpressed.\nRAD52, a gene dispensable for normal development in mice, was among the top hits.\nIn cells in which fork collapse was induced by oncogenes or chemicals, the Rad52 protein localized to DRS foci.\nDepletion of Rad52 by siRNA or knockout of the gene by CRISPR/Cas9 compromised restart of collapsed forks and led to DNA damage in cells experiencing DRS.\nFurthermore, in cancer-prone, heterozygous APC mutant mice, homozygous deletion of the Rad52 gene suppressed tumor growth and prolonged lifespan.\nWe therefore propose that mammalian RAD52 facilitates repair of collapsed DNA replication forks in cancer cells.",
"role": "user"
},
{
"content": "{\"verdict\": \"SUPPORT\", \"evidence\": [\"We therefore propose that mammalian RAD52 facilitates repair of collapsed DNA replication forks in cancer cells.\"]}",
"role": "assistant"
}
]
|
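For the `science.scifact_entailment` rows, the reference output is a JSON object with a `verdict` label and an `evidence` list of sentences copied verbatim from the abstract. A useful check, sketched below, is that every evidence string really is a sentence of the abstract. Only the label "SUPPORT" is visible in the row above, so the full allowed-label tuple is an assumption to confirm against the task definition; the function name and sample data are illustrative.

```python
import json

# Assumption: allowed verdict labels. Only "SUPPORT" appears in the row above;
# the other labels are guesses and should be checked against the task definition.
ALLOWED_VERDICTS = ("SUPPORT", "CONTRADICT", "NEI")

def check_entailment_output(raw: str, abstract_sentences: list[str]) -> dict:
    """Validate a verdict/evidence response against the source abstract."""
    obj = json.loads(raw)
    if obj.get("verdict") not in ALLOWED_VERDICTS:
        raise ValueError(f"unexpected verdict: {obj.get('verdict')!r}")
    for sent in obj.get("evidence", []):
        # Evidence is expected to be quoted verbatim from the abstract.
        if sent not in abstract_sentences:
            raise ValueError(f"evidence sentence not found in abstract: {sent[:60]!r}")
    return obj

abstract = [
    "RAD52, a gene dispensable for normal development in mice, was among the top hits.",
    "We therefore propose that mammalian RAD52 facilitates repair of collapsed DNA replication forks in cancer cells.",
]
out = '{"verdict": "SUPPORT", "evidence": ["We therefore propose that mammalian RAD52 facilitates repair of collapsed DNA replication forks in cancer cells."]}'
print(check_entailment_output(out, abstract)["verdict"])
```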
science.biored_ner | Analyze the provided biomedical abstract and identify distinct entities. Focus on the specified categories: ["CellLine", "Variant", "Species", "Disease", "Chemical", "Gene"].
Ensure the output is strictly the JSON object.
Source Text:
Association of a FGFR-4 gene polymorphism with bronchopulmonary dysplasia and neonatal respiratory distress. BACKGROUND: Bronchopulmonary dysplasia (BPD) is the most common chronic lung disease of premature birth, characterized by impaired alveolar development and inflammation. Pathomechanisms contributing to BPD are poorly understood. However, it is assumed that genetic factors predispose to BPD and other pulmonary diseases of preterm neonates, such as neonatal respiratory distress syndrome (RDS). For association studies, genes upregulated during alveolarization are major candidates for genetic analysis, for example, matrix metalloproteinases (MMPs) and fibroblast growth factors (FGFs) and their receptors (FGFR). OBJECTIVE: Determining genetic risk variants in a Caucasian population of premature neonates with BPD and RDS. Methods. We genotyped 27 polymorphisms within 14 candidate genes via restriction fragment length polymorphism (RFLP): MMP-1, -2, -9, and -12, -16, FGF receptors 2 and 4, FGF-2, -3, -4, -7, and -18, Signal-Regulatory Protein a (SIRPA) and Thyroid Transcription Factor-1 (TTF-1). RESULTS: Five single nucleotide polymorphisms (SNPs) in MMP-9, MMP-12, FGFR-4, FGF-3, and FGF-7 are associated (P < 0.05) with RDS, defined as surfactant application within the first 24 hours after birth. One of them, in FGFR-4 (rs1966265), is associated with both RDS (P = 0.003) and BPD (P = 0.023). CONCLUSION: rs1966265 in FGF receptor 4 is a possible genetic key variant in alveolar diseases of preterm newborns. | {"CellLine": [], "Chemical": [], "Disease": ["lung disease", "inflammation", "alveolar diseases", "premature birth", "Bronchopulmonary dysplasia", "RDS", "pulmonary diseases", "neonatal respiratory distress", "neonatal respiratory distress syndrome", "BPD", "bronchopulmonary dysplasia", "impaired alveolar development"], "Gene": ["Signal-Regulatory Protein a", "FGF receptors 2 and 4", "FGF-2, -3, -4, -7, and -18", "FGFR-4", "MMPs", "FGF-7", "FGF receptor 4", "Thyroid Transcription Factor-1", "MMP-12", "SIRPA", "FGF-3", "MMP-9", "FGFR", "matrix metalloproteinases", "MMP-1, -2, -9, and -12, -16", "FGFs", "TTF-1", "fibroblast growth factors"], "Species": [], "Variant": ["rs1966265"]} | science.biored_ner.61 | [
{
"content": "Analyze the provided biomedical abstract and identify distinct entities. Focus on the specified categories: [\"CellLine\", \"Variant\", \"Species\", \"Disease\", \"Chemical\", \"Gene\"].\n\nEnsure the output is strictly the JSON object.\n\nSource Text:\n\nAssociation of a FGFR-4 gene polymorphism with bronchopulmonary dysplasia and neonatal respiratory distress. BACKGROUND: Bronchopulmonary dysplasia (BPD) is the most common chronic lung disease of premature birth, characterized by impaired alveolar development and inflammation. Pathomechanisms contributing to BPD are poorly understood. However, it is assumed that genetic factors predispose to BPD and other pulmonary diseases of preterm neonates, such as neonatal respiratory distress syndrome (RDS). For association studies, genes upregulated during alveolarization are major candidates for genetic analysis, for example, matrix metalloproteinases (MMPs) and fibroblast growth factors (FGFs) and their receptors (FGFR). OBJECTIVE: Determining genetic risk variants in a Caucasian population of premature neonates with BPD and RDS. Methods. We genotyped 27 polymorphisms within 14 candidate genes via restriction fragment length polymorphism (RFLP): MMP-1, -2, -9, and -12, -16, FGF receptors 2 and 4, FGF-2, -3, -4, -7, and -18, Signal-Regulatory Protein a (SIRPA) and Thyroid Transcription Factor-1 (TTF-1). RESULTS: Five single nucleotide polymorphisms (SNPs) in MMP-9, MMP-12, FGFR-4, FGF-3, and FGF-7 are associated (P < 0.05) with RDS, defined as surfactant application within the first 24 hours after birth. One of them, in FGFR-4 (rs1966265), is associated with both RDS (P = 0.003) and BPD (P = 0.023). CONCLUSION: rs1966265 in FGF receptor 4 is a possible genetic key variant in alveolar diseases of preterm newborns.",
"role": "user"
},
{
"content": "{\"CellLine\": [], \"Chemical\": [], \"Disease\": [\"lung disease\", \"inflammation\", \"alveolar diseases\", \"premature birth\", \"Bronchopulmonary dysplasia\", \"RDS\", \"pulmonary diseases\", \"neonatal respiratory distress\", \"neonatal respiratory distress syndrome\", \"BPD\", \"bronchopulmonary dysplasia\", \"impaired alveolar development\"], \"Gene\": [\"Signal-Regulatory Protein a\", \"FGF receptors 2 and 4\", \"FGF-2, -3, -4, -7, and -18\", \"FGFR-4\", \"MMPs\", \"FGF-7\", \"FGF receptor 4\", \"Thyroid Transcription Factor-1\", \"MMP-12\", \"SIRPA\", \"FGF-3\", \"MMP-9\", \"FGFR\", \"matrix metalloproteinases\", \"MMP-1, -2, -9, and -12, -16\", \"FGFs\", \"TTF-1\", \"fibroblast growth factors\"], \"Species\": [], \"Variant\": [\"rs1966265\"]}",
"role": "assistant"
}
]
|
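The `science.biored_ner` rows expect a JSON object keyed by the entity types listed in the prompt (`CellLine`, `Variant`, `Species`, `Disease`, `Chemical`, `Gene`), each mapped to a list of surface mentions. The sketch below checks that the keys are present and that every predicted mention actually occurs in the source text; the helper name, sample text, and sample prediction are illustrative, not part of the dataset.

```python
import json

ENTITY_TYPES = ["CellLine", "Variant", "Species", "Disease", "Chemical", "Gene"]

def check_ner_output(raw: str, source_text: str) -> dict:
    """Validate a BioRED-style NER response: expected keys, mentions present in the text."""
    obj = json.loads(raw)
    missing_keys = [t for t in ENTITY_TYPES if t not in obj]
    if missing_keys:
        raise ValueError(f"missing entity-type keys: {missing_keys}")
    for etype in ENTITY_TYPES:
        for mention in obj[etype]:
            if mention not in source_text:
                raise ValueError(f"{etype} mention not in source text: {mention!r}")
    return obj

text = "rs1966265 in FGF receptor 4 is a possible genetic key variant in alveolar diseases."
pred = json.dumps({
    "CellLine": [], "Variant": ["rs1966265"], "Species": [],
    "Disease": ["alveolar diseases"], "Chemical": [], "Gene": ["FGF receptor 4"],
})
print(sum(len(v) for v in check_ner_output(pred, text).values()), "mentions verified")
```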
science.mup_single_document_summarization | Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.
Title of the Paper:
Curriculum Design for Teaching via Demonstrations: Theory and Applications
Main Content:
1 Introduction . Imitation learning is a paradigm in which a learner acquires a new set of skills by imitating a teacher ’ s behavior . The importance of imitation learning is realized in real-world applications where the desired behavior can not be explicitly defined but can be demonstrated easily . These applications include the settings involving both human-to-machine interaction [ 1–4 ] , and human-to-human interaction [ 5 , 6 ] . The two most popular approaches to imitation learning are Behavioral Cloning ( BC ) [ 7 ] and Inverse Reinforcement Learning ( IRL ) [ 8 ] . BC algorithms aim to directly match the behavior of the teacher using supervised learning methods . IRL algorithms operate in a two-step approach : first , a reward function explaining the teacher ’ s behavior is inferred ; then , the learner adopts a policy corresponding to the inferred reward . In the literature , imitation learning has been extensively studied from the learner ’ s point of view to design efficient learning algorithms [ 9–15 ] . However , much less work is done from the teacher ’ s point of view to reduce the number of demonstrations required to achieve the learning objective . In this paper , we focus on the problem of Teaching via Demonstrations ( TvD ) , where a helpful teacher assists the imitation learner in converging quickly by designing a personalized curriculum [ 16–20 ] . Despite a substantial amount of work on curriculum design for reinforcement learning agents [ 21– 27 ] , curriculum design for imitation learning agents is much less investigated . Prior work on curriculum design for IRL learners has focused on two concrete settings : noninteractive and interactive . In the non-interactive setting [ 17 , 18 ] , the teacher provides a near-optimal set of demonstrations as a single batch . These curriculum strategies do not incorporate 35th Conference on Neural Information Processing Systems ( NeurIPS 2021 ) . any feedback from the learner , hence unable to adapt the teaching to the learner ’ s progress . In the interactive setting [ 28 ] , the teacher can leverage the learner ’ s progress to adaptively choose the next demonstrations to accelerate the learning process . However , the existing state-of-the-art work [ 28 ] has proposed interactive curriculum algorithms that are based on learning dynamics of a specific IRL learner model ( i.e. , the learner ’ s gradient update rule ) ; see further discussion in Section 1.1 . In contrast , we focus on designing an interactive curriculum algorithm with theoretical guarantees that is agnostic to the learner ’ s dynamics . This will enable the algorithm to be applicable for a broad range of learner models , and in practical settings where the learner ’ s internal model is unknown ( such as tutoring systems with human learners ) . A detailed comparison between our curriculum algorithm and the prior state-of-the-art algorithms from [ 18 , 28 ] is presented in Section 1.1 . Our approach is motivated by works on curriculum design for supervised learning and reinforcement learning algorithms that use a ranking over the training examples using a difficulty score [ 29–35 ] . In particular , our work is inspired by theoretical results on curriculum learning for linear regression models [ 32 ] . We define difficulty scores for any demonstration based on the teacher ’ s optimal policy and the learner ’ s current policy . 
We then study the differential effect of the difficulty scores on the learning progress for two popular imitation learners : Maximum Causal Entropy Inverse Reinforcement Learning ( MaxEnt-IRL ) [ 10 ] and Cross-Entropy loss-based Behavioral Cloning ( CrossEntBC ) [ 36 ] . Our main contributions are as follows:1 1 . Our analysis for both MaxEnt-IRL and CrossEnt-BC learners leads to a unified curriculum strategy , i.e. , a preference ranking over demonstrations . This ranking is obtained based on the ratio between the demonstration ’ s likelihood under the teacher ’ s optimal policy and the learner ’ s current policy . Experiments on a synthetic car driving environment validate our curriculum strategy . 2 . For the MaxEnt-IRL learner , we prove that our curriculum strategy achieves a linear convergence rate ( under certain mild technical conditions ) , notably without requiring access to the learner ’ s dynamics . 3 . We adapt our curriculum strategy to the learner-centric setting where a teacher agent is not present through the use of task-specific difficulty scores . As a proof of concept , we show that our strategy accelerates the learning process in synthetic navigation-based environments . 1.1 Comparison to Existing Approaches on Curriculum Design for Imitation Learning . In the non-interactive setting , [ 18 ] have proposed a batch teaching algorithm ( SCOT ) by showing that the teaching problem can be formulated as a set cover problem . In contrast , our algorithm is interactive in nature and hence , can leverage the learner ’ s progress ( see experimental results in Section 5 ) . In the interactive setting , [ 28 ] have proposed the Omniscient algorithm ( OMN ) based on the iterative machine teaching ( IMT ) framework [ 37 ] . Their algorithm obtains strong convergence guarantees for the MaxEnt-IRL learner model ; however , requires exact knowledge of the learner ’ s dynamics ( i.e , the learner ’ s update rule ) . Our algorithm on the other hand is agnostic to the learner ’ s dynamics and is applicable to a broader family of learner models ( see Sections 4 and 5 ) . Also for the interactive setting , [ 28 ] have proposed the Blackbox algorithm ( BBOX ) as a heuristic to apply the OMN algorithm when the learner ’ s dynamics are unknown—this makes the BBOX algorithm more widely applicable than OMN . However , this heuristic algorithm is still based on the gradient functional form of the linear MaxEnt-IRL learner model ( see Footnote 2 ) , and does not provide any convergence guarantees . In contrast , our algorithm is derived independent of any specific learner model and we provide a theoretical analysis of our algorithm for different learner models ( see Theorems 1 , 2 , and 3 ) . Another crucial difference is that the BBOX algorithm requires access to the true reward function of the environment , which precludes it from being applied to learnercentric settings where no teacher agent is present . In comparison , our algorithm is applicable to learner-centric settings ( see experimental results in Section 6 ) . 1.2 Additional Related Work on Curriculum Design and Teaching . Curriculum design . Curriculum design for supervised learning settings has been extensively studied in the literature . Early works present the idea of designing a curriculum comprising of tasks with increasing difficulty to train a machine learning model [ 29–31 ] . However , these approaches require 1Github repo : https : //github.com/adishs/neurips2021_curriculum-teaching-demonstrations_code . 
task-specific knowledge for designing heuristic difficulty measures . Recent works have tackled the problem of automating curriculum design [ 38 , 39 ] . There is also an increasing interest in theoretically analyzing the impact of a curriculum ( ordering ) of training tasks on the convergence of supervised learner models [ 32 , 40 , 41 ] . In particular , our work builds on the idea of difficulty scores of the training examples studied in [ 32 ] . The existing results on curriculum design for sequential decision-making settings are mostly empirical in nature . Similar to the supervised learning settings , the focus on curriculum design for reinforcement learning settings has been shifted from hand-crafted approaches [ 34 , 35 ] to automatic methods [ 21–23 , 25 ] . We refer the reader to a recent survey [ 42 ] on curriculum design for reinforcement learning . The curriculum learning paradigm has also been studied in psychology literature [ 43–47 ] . One key aspect in these works has been to design algorithms that account for the pedagogical intentions of a teacher , which often aims to explicitly demonstrate specific skills rather than just provide an optimal demonstration for a task . We see our work as complementary to these . Machine teaching . The algorithmic teaching problem considers the interaction between a teacher and a learner where the teacher ’ s objective is to find an optimal training sequence to steer the learner towards a desired goal [ 37 , 48–50 ] . Most of the work in machine teaching for supervised learning settings is on batch teaching where the teacher provides a batch of teaching examples at once without any adaptation . The question of how a teacher should adaptively select teaching examples for a learner has been addressed recently in supervised learning settings [ 51–56 ] . Furthermore , [ 16–18 , 28 , 57 , 58 ] have studied algorithmic teaching for sequential decision-making tasks . In particular , [ 17 , 18 ] have proposed batch teaching algorithms for an IRL agent , where the teacher decides the entire set of demonstrations to provide to the learner before any interaction . These teaching algorithms do not leverage any feedback from the learner . In contrast , as discussed in Section 1.1 , [ 28 ] have proposed interactive teaching algorithms ( namely OMN and BBOX ) for an IRL agent , where the teacher takes into account how the learner progresses . The works of [ 57 , 58 ] are complementary to ours and study algorithmic teaching when the learner has a differet worldview than the teacher or has its own specific preferences . 2 Formal Problem Setup . Here , we formalize our problem setting which is based on prior work on sequential teaching [ 28 , 37 ] . Environment . We consider an environment defined as a Markov Decision Process ( MDP ) M : = ( S , A , T , γ , P0 , RE ) , where the state and action spaces are denoted by S and A , respectively . T : S × S × A → [ 0 , 1 ] is the transition dynamics , γ is the discounting factor , and P0 : S → [ 0 , 1 ] is an initial distribution over states S. A policy π : S × A → [ 0 , 1 ] is a mapping from a state to a probability distribution over actions . The underlying reward function is given by RE : S ×A → R. Teacher-learner interaction . We consider a setting with two agents : a teacher and a sequential learner . The teacher has access to the full MDPM and has a target policy πE ( e.g. , a near-optimal policy w.r.t . RE ) . The learner knows the MDPM but not the reward function RE , i.e. , has only access to M \ RE . 
The teacher ’ s goal is to provide an informative sequence of demonstrations to teach the policy πE to the learner . Here , a teacher ’ s demonstration ξ = { ( sξτ , a ξ τ ) } τ=0,1 , ... is obtained by first choosing an initial state sξ0 ∈ S ( where P0 ( sξ0 ) > 0 ) and then choosing a trajectory , sequence of state-action pairs , obtained by executing the policy πE in the MDPM . The interaction between the teacher and the learner is formally described in Algorithm 1 . For simplicity , we assume that the teacher directly observes the learner ’ s policy πLt at any time t. In practice , the teacher could approximately infer the policy πLt by probing the learner and using Monte Carlo methods . Generic learner model . Here , we describe a generic learner update rule for Algorithm 1 . Let Θ ⊆ Rd be a parameter space . The learner searches for a policy in the following parameterized policy space : ΠΘ : = { πθ : S ×A → [ 0 , 1 ] , where θ ∈ Θ } . For the policy search , the learner sequentially minimizes a loss function ` that depends on the policy parameter θ and the demonstration ξ = { ( sξτ , a ξ τ ) } τ provided by the teacher . More concretely , we consider ` ( ξ , θ ) : = − logP ( ξ|θ ) , where P ( ξ|θ ) = P0 ( sξ0 ) · ∏ τ πθ ( aξτ |sξτ ) · T ( sξτ+1|sξτ , aξτ ) is the likelihood ( probability ) of the demon- Algorithm 1 Teacher-Learner Interaction 1 : Initialization : Initial knowledge of learner πL1 . 2 : for t = 1 , 2 , . . . do 3 : Teacher observes the learner ’ s current policy πLt . 4 : Teacher provides demonstration ξt to the learner . 5 : Learner updates its policy to πLt+1 using ξt . stration ξ under policy πθ in the MDP M. At time t , upon receiving a demonstration ξt provided by the teacher , the learner performs the following online projected gradient descent update : θt+1 ← ProjΘ [ θt − ηtgt ] , where ηt is the learning rate , and gt = [ ∇θ ` ( ξt , θ ) ] θ=θt . Note that the parameter θ1 reflects the initial knowledge of the learner . Given the learner ’ s current parameter θt at time t , the learner ’ s policy is defined as πLt : = πθt . Teaching objective . For any policy π , the value ( total expected reward ) of π in the MDPM is defined as V π : = ∑ s , a ∑∞ τ=0 γ τ ·P { Sτ = s | π , M } ·π ( a | s ) ·RE ( s , a ) , where P { Sτ = s | π , M } denotes the probability of visiting the state s after τ steps by following the policy π . Let πL denote the learner ’ s final policy at the end of teaching . The performance of the policy πL ( w.r.t . πE ) inM can be evaluated via ∣∣∣V πE − V πL∣∣∣ [ 9 , 59 ] . The teaching objective is to ensure that the learner ’ s final policy -approximates the teacher ’ s policy , i.e. , ∣∣∣V πE − V πL ∣∣∣ ≤ . The teacher aims to provide an optimized sequence of demonstrations { ξt } t=1,2 , ... to the learner to achieve the teaching objective . The teacher ’ s performance is then measured by the number of demonstrations required to achieve this objective . Based on existing work [ 28 , 37 ] , we assume that ∃ θ∗ ∈ Θ such that πE = πθ∗ ( we refer to θ∗ as the target teaching parameter ) . Similar to [ 28 ] , we assume that a smoothness condition holds in the policy parameter space : |V πθ − V πθ′ | ≤ O ( f ( ‖θ − θ′‖ ) ) ∀θ , θ′ ∈ Θ . Then , the teaching objective in terms of V π convergence can be reduced to the convergence in the parameter space , i.e. , we can focus on the quantity ‖θ∗ − θt‖ . 
| This paper considers the problem of providing experts demonstrations to a learner, modeled as a Maximum Entropy IRL or a Cross Entropy Behavioral Cloning algorithm, in order for them to achieve a target performance on some task. The key contribution of the paper is to provide a demonstration ranking strategy which can be greedily utilized to define a curriculum. This strategy resulting in convergence for both types of learners. Importantly, and in contrast to prior work, this approach can be done in a manner that does not explicitly see the learner's internal dynamics. Moreover, in the case of the maximum entropy IRL (with deterministic dynamics), the prove that the convergence is logarithmic. Finally, they show the efficacy of this approach in various domains. | science.mup_single_document_summarization.11206 | [
{
"content": "Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.\n\nTitle of the Paper:\nCurriculum Design for Teaching via Demonstrations: Theory and Applications\n\nMain Content:\n1 Introduction . Imitation learning is a paradigm in which a learner acquires a new set of skills by imitating a teacher ’ s behavior . The importance of imitation learning is realized in real-world applications where the desired behavior can not be explicitly defined but can be demonstrated easily . These applications include the settings involving both human-to-machine interaction [ 1–4 ] , and human-to-human interaction [ 5 , 6 ] . The two most popular approaches to imitation learning are Behavioral Cloning ( BC ) [ 7 ] and Inverse Reinforcement Learning ( IRL ) [ 8 ] . BC algorithms aim to directly match the behavior of the teacher using supervised learning methods . IRL algorithms operate in a two-step approach : first , a reward function explaining the teacher ’ s behavior is inferred ; then , the learner adopts a policy corresponding to the inferred reward . In the literature , imitation learning has been extensively studied from the learner ’ s point of view to design efficient learning algorithms [ 9–15 ] . However , much less work is done from the teacher ’ s point of view to reduce the number of demonstrations required to achieve the learning objective . In this paper , we focus on the problem of Teaching via Demonstrations ( TvD ) , where a helpful teacher assists the imitation learner in converging quickly by designing a personalized curriculum [ 16–20 ] . Despite a substantial amount of work on curriculum design for reinforcement learning agents [ 21– 27 ] , curriculum design for imitation learning agents is much less investigated . Prior work on curriculum design for IRL learners has focused on two concrete settings : noninteractive and interactive . In the non-interactive setting [ 17 , 18 ] , the teacher provides a near-optimal set of demonstrations as a single batch . These curriculum strategies do not incorporate 35th Conference on Neural Information Processing Systems ( NeurIPS 2021 ) . any feedback from the learner , hence unable to adapt the teaching to the learner ’ s progress . In the interactive setting [ 28 ] , the teacher can leverage the learner ’ s progress to adaptively choose the next demonstrations to accelerate the learning process . However , the existing state-of-the-art work [ 28 ] has proposed interactive curriculum algorithms that are based on learning dynamics of a specific IRL learner model ( i.e. , the learner ’ s gradient update rule ) ; see further discussion in Section 1.1 . In contrast , we focus on designing an interactive curriculum algorithm with theoretical guarantees that is agnostic to the learner ’ s dynamics . This will enable the algorithm to be applicable for a broad range of learner models , and in practical settings where the learner ’ s internal model is unknown ( such as tutoring systems with human learners ) . A detailed comparison between our curriculum algorithm and the prior state-of-the-art algorithms from [ 18 , 28 ] is presented in Section 1.1 . Our approach is motivated by works on curriculum design for supervised learning and reinforcement learning algorithms that use a ranking over the training examples using a difficulty score [ 29–35 ] . In particular , our work is inspired by theoretical results on curriculum learning for linear regression models [ 32 ] . 
We define difficulty scores for any demonstration based on the teacher ’ s optimal policy and the learner ’ s current policy . We then study the differential effect of the difficulty scores on the learning progress for two popular imitation learners : Maximum Causal Entropy Inverse Reinforcement Learning ( MaxEnt-IRL ) [ 10 ] and Cross-Entropy loss-based Behavioral Cloning ( CrossEntBC ) [ 36 ] . Our main contributions are as follows:1 1 . Our analysis for both MaxEnt-IRL and CrossEnt-BC learners leads to a unified curriculum strategy , i.e. , a preference ranking over demonstrations . This ranking is obtained based on the ratio between the demonstration ’ s likelihood under the teacher ’ s optimal policy and the learner ’ s current policy . Experiments on a synthetic car driving environment validate our curriculum strategy . 2 . For the MaxEnt-IRL learner , we prove that our curriculum strategy achieves a linear convergence rate ( under certain mild technical conditions ) , notably without requiring access to the learner ’ s dynamics . 3 . We adapt our curriculum strategy to the learner-centric setting where a teacher agent is not present through the use of task-specific difficulty scores . As a proof of concept , we show that our strategy accelerates the learning process in synthetic navigation-based environments . 1.1 Comparison to Existing Approaches on Curriculum Design for Imitation Learning . In the non-interactive setting , [ 18 ] have proposed a batch teaching algorithm ( SCOT ) by showing that the teaching problem can be formulated as a set cover problem . In contrast , our algorithm is interactive in nature and hence , can leverage the learner ’ s progress ( see experimental results in Section 5 ) . In the interactive setting , [ 28 ] have proposed the Omniscient algorithm ( OMN ) based on the iterative machine teaching ( IMT ) framework [ 37 ] . Their algorithm obtains strong convergence guarantees for the MaxEnt-IRL learner model ; however , requires exact knowledge of the learner ’ s dynamics ( i.e , the learner ’ s update rule ) . Our algorithm on the other hand is agnostic to the learner ’ s dynamics and is applicable to a broader family of learner models ( see Sections 4 and 5 ) . Also for the interactive setting , [ 28 ] have proposed the Blackbox algorithm ( BBOX ) as a heuristic to apply the OMN algorithm when the learner ’ s dynamics are unknown—this makes the BBOX algorithm more widely applicable than OMN . However , this heuristic algorithm is still based on the gradient functional form of the linear MaxEnt-IRL learner model ( see Footnote 2 ) , and does not provide any convergence guarantees . In contrast , our algorithm is derived independent of any specific learner model and we provide a theoretical analysis of our algorithm for different learner models ( see Theorems 1 , 2 , and 3 ) . Another crucial difference is that the BBOX algorithm requires access to the true reward function of the environment , which precludes it from being applied to learnercentric settings where no teacher agent is present . In comparison , our algorithm is applicable to learner-centric settings ( see experimental results in Section 6 ) . 1.2 Additional Related Work on Curriculum Design and Teaching . Curriculum design . Curriculum design for supervised learning settings has been extensively studied in the literature . Early works present the idea of designing a curriculum comprising of tasks with increasing difficulty to train a machine learning model [ 29–31 ] . 
However , these approaches require 1Github repo : https : //github.com/adishs/neurips2021_curriculum-teaching-demonstrations_code . task-specific knowledge for designing heuristic difficulty measures . Recent works have tackled the problem of automating curriculum design [ 38 , 39 ] . There is also an increasing interest in theoretically analyzing the impact of a curriculum ( ordering ) of training tasks on the convergence of supervised learner models [ 32 , 40 , 41 ] . In particular , our work builds on the idea of difficulty scores of the training examples studied in [ 32 ] . The existing results on curriculum design for sequential decision-making settings are mostly empirical in nature . Similar to the supervised learning settings , the focus on curriculum design for reinforcement learning settings has been shifted from hand-crafted approaches [ 34 , 35 ] to automatic methods [ 21–23 , 25 ] . We refer the reader to a recent survey [ 42 ] on curriculum design for reinforcement learning . The curriculum learning paradigm has also been studied in psychology literature [ 43–47 ] . One key aspect in these works has been to design algorithms that account for the pedagogical intentions of a teacher , which often aims to explicitly demonstrate specific skills rather than just provide an optimal demonstration for a task . We see our work as complementary to these . Machine teaching . The algorithmic teaching problem considers the interaction between a teacher and a learner where the teacher ’ s objective is to find an optimal training sequence to steer the learner towards a desired goal [ 37 , 48–50 ] . Most of the work in machine teaching for supervised learning settings is on batch teaching where the teacher provides a batch of teaching examples at once without any adaptation . The question of how a teacher should adaptively select teaching examples for a learner has been addressed recently in supervised learning settings [ 51–56 ] . Furthermore , [ 16–18 , 28 , 57 , 58 ] have studied algorithmic teaching for sequential decision-making tasks . In particular , [ 17 , 18 ] have proposed batch teaching algorithms for an IRL agent , where the teacher decides the entire set of demonstrations to provide to the learner before any interaction . These teaching algorithms do not leverage any feedback from the learner . In contrast , as discussed in Section 1.1 , [ 28 ] have proposed interactive teaching algorithms ( namely OMN and BBOX ) for an IRL agent , where the teacher takes into account how the learner progresses . The works of [ 57 , 58 ] are complementary to ours and study algorithmic teaching when the learner has a differet worldview than the teacher or has its own specific preferences . 2 Formal Problem Setup . Here , we formalize our problem setting which is based on prior work on sequential teaching [ 28 , 37 ] . Environment . We consider an environment defined as a Markov Decision Process ( MDP ) M : = ( S , A , T , γ , P0 , RE ) , where the state and action spaces are denoted by S and A , respectively . T : S × S × A → [ 0 , 1 ] is the transition dynamics , γ is the discounting factor , and P0 : S → [ 0 , 1 ] is an initial distribution over states S. A policy π : S × A → [ 0 , 1 ] is a mapping from a state to a probability distribution over actions . The underlying reward function is given by RE : S ×A → R. Teacher-learner interaction . We consider a setting with two agents : a teacher and a sequential learner . The teacher has access to the full MDPM and has a target policy πE ( e.g. 
, a near-optimal policy w.r.t . RE ) . The learner knows the MDPM but not the reward function RE , i.e. , has only access to M \\ RE . The teacher ’ s goal is to provide an informative sequence of demonstrations to teach the policy πE to the learner . Here , a teacher ’ s demonstration ξ = { ( sξτ , a ξ τ ) } τ=0,1 , ... is obtained by first choosing an initial state sξ0 ∈ S ( where P0 ( sξ0 ) > 0 ) and then choosing a trajectory , sequence of state-action pairs , obtained by executing the policy πE in the MDPM . The interaction between the teacher and the learner is formally described in Algorithm 1 . For simplicity , we assume that the teacher directly observes the learner ’ s policy πLt at any time t. In practice , the teacher could approximately infer the policy πLt by probing the learner and using Monte Carlo methods . Generic learner model . Here , we describe a generic learner update rule for Algorithm 1 . Let Θ ⊆ Rd be a parameter space . The learner searches for a policy in the following parameterized policy space : ΠΘ : = { πθ : S ×A → [ 0 , 1 ] , where θ ∈ Θ } . For the policy search , the learner sequentially minimizes a loss function ` that depends on the policy parameter θ and the demonstration ξ = { ( sξτ , a ξ τ ) } τ provided by the teacher . More concretely , we consider ` ( ξ , θ ) : = − logP ( ξ|θ ) , where P ( ξ|θ ) = P0 ( sξ0 ) · ∏ τ πθ ( aξτ |sξτ ) · T ( sξτ+1|sξτ , aξτ ) is the likelihood ( probability ) of the demon- Algorithm 1 Teacher-Learner Interaction 1 : Initialization : Initial knowledge of learner πL1 . 2 : for t = 1 , 2 , . . . do 3 : Teacher observes the learner ’ s current policy πLt . 4 : Teacher provides demonstration ξt to the learner . 5 : Learner updates its policy to πLt+1 using ξt . stration ξ under policy πθ in the MDP M. At time t , upon receiving a demonstration ξt provided by the teacher , the learner performs the following online projected gradient descent update : θt+1 ← ProjΘ [ θt − ηtgt ] , where ηt is the learning rate , and gt = [ ∇θ ` ( ξt , θ ) ] θ=θt . Note that the parameter θ1 reflects the initial knowledge of the learner . Given the learner ’ s current parameter θt at time t , the learner ’ s policy is defined as πLt : = πθt . Teaching objective . For any policy π , the value ( total expected reward ) of π in the MDPM is defined as V π : = ∑ s , a ∑∞ τ=0 γ τ ·P { Sτ = s | π , M } ·π ( a | s ) ·RE ( s , a ) , where P { Sτ = s | π , M } denotes the probability of visiting the state s after τ steps by following the policy π . Let πL denote the learner ’ s final policy at the end of teaching . The performance of the policy πL ( w.r.t . πE ) inM can be evaluated via ∣∣∣V πE − V πL∣∣∣ [ 9 , 59 ] . The teaching objective is to ensure that the learner ’ s final policy -approximates the teacher ’ s policy , i.e. , ∣∣∣V πE − V πL ∣∣∣ ≤ . The teacher aims to provide an optimized sequence of demonstrations { ξt } t=1,2 , ... to the learner to achieve the teaching objective . The teacher ’ s performance is then measured by the number of demonstrations required to achieve this objective . Based on existing work [ 28 , 37 ] , we assume that ∃ θ∗ ∈ Θ such that πE = πθ∗ ( we refer to θ∗ as the target teaching parameter ) . Similar to [ 28 ] , we assume that a smoothness condition holds in the policy parameter space : |V πθ − V πθ′ | ≤ O ( f ( ‖θ − θ′‖ ) ) ∀θ , θ′ ∈ Θ . Then , the teaching objective in terms of V π convergence can be reduced to the convergence in the parameter space , i.e. , we can focus on the quantity ‖θ∗ − θt‖ .",
"role": "user"
},
{
"content": "This paper considers the problem of providing experts demonstrations to a learner, modeled as a Maximum Entropy IRL or a Cross Entropy Behavioral Cloning algorithm, in order for them to achieve a target performance on some task. The key contribution of the paper is to provide a demonstration ranking strategy which can be greedily utilized to define a curriculum. This strategy resulting in convergence for both types of learners. Importantly, and in contrast to prior work, this approach can be done in a manner that does not explicitly see the learner's internal dynamics. Moreover, in the case of the maximum entropy IRL (with deterministic dynamics), the prove that the convergence is logarithmic. Finally, they show the efficacy of this approach in various domains.",
"role": "assistant"
}
]
|
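As a reading aid for the row above (not part of the dataset), the ranking idea described in the summary can be sketched in a few lines of Python. The helper names, the `policy[s][a]` lookup, and the greedy re-ranking loop are assumptions for illustration only; this is not the authors' released implementation.

```python
import numpy as np

def demo_log_likelihood(demo, policy):
    # Sum of log pi(a|s) over the state-action pairs of one demonstration.
    # `policy[s][a]` is a hypothetical lookup of the action probability.
    return sum(np.log(policy[s][a]) for s, a in demo)

def rank_demonstrations(demos, teacher_policy, learner_policy):
    # Score each demonstration by the log-ratio of its likelihood under the
    # teacher's policy versus the learner's current policy, then sort with the
    # highest ratio first; re-ranking after every learner update would give a
    # greedy curriculum of the kind the summary describes.
    scores = [
        demo_log_likelihood(d, teacher_policy) - demo_log_likelihood(d, learner_policy)
        for d in demos
    ]
    order = sorted(range(len(demos)), key=lambda i: -scores[i])
    return [demos[i] for i in order]
```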
science.multicite_intent_classification | Examine the text below, which includes a citation marked by "<cite>" and "</cite>". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.
Provide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.
Text with Citation:
Recently, <cite>Yang et al. (2019)</cite> showed that combining a BERT-based reader with passage retrieval using the Anserini IR toolkit yields a large improvement in question answering directly from a Wikipedia corpus, measured in terms of exact match on a standard benchmark (Chen et al., 2017). Interestingly, the approach of <cite>Yang et al. (2019)</cite> represents a simple method of combining BERT with off-the-shelf IR. In this paper, we build on these initial successes to explore how much further we can push this simple architecture by data augmentation, taking advantage of distant supervision techniques to gather more and higher-quality training data to fine-tune BERT. | ["Extends"] | science.multicite_intent_classification.2359 | [
{
"content": "Examine the text below, which includes a citation marked by \"<cite>\" and \"</cite>\". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.\n\nProvide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.\n\nText with Citation:\nRecently, <cite>Yang et al. (2019)</cite> showed that combining a BERT-based reader with passage retrieval using the Anserini IR toolkit yields a large improvement in question answering directly from a Wikipedia corpus, measured in terms of exact match on a standard benchmark (Chen et al., 2017) . Interestingly, the approach of <cite>Yang et al. (2019)</cite> represents a simple method to combining BERT with off-the-shelf IR. In this paper, we build on these initial successes to explore how much further we can push this simple architecture by data augmentation, taking advantage of distant supervision techniques to gather more and higher-quality * equal contribution training data to fine tune BERT.",
"role": "user"
},
{
"content": "[\"Extends\"]",
"role": "assistant"
}
]
|
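For the citation-intent rows in this section (this one and the similar ones further below), a toy cue-phrase heuristic makes the label space concrete. The cue lists and the fallback label are assumptions for illustration; this is not the annotation procedure behind the gold labels.

```python
import json

# Hypothetical cue phrases per intent label; purely illustrative.
CUES = {
    "Extends": ["build on", "building on", "we extend"],
    "Uses": ["we use", "following", "reimplementation of"],
    "Differences": ["in contrast", "contrary to", "the opposite"],
    "Similarities": ["similar observations", "consistent with"],
}

def guess_intents(context: str):
    # Return every label whose cues appear in the citation context,
    # falling back to "Background" when nothing matches.
    text = context.lower()
    labels = [label for label, cues in CUES.items() if any(c in text for c in cues)]
    return labels or ["Background"]

print(json.dumps(guess_intents("we build on these initial successes ...")))
```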
science.multicite_intent_classification | Examine the text below, which includes a citation marked by "<cite>" and "</cite>". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.
Provide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.
Text with Citation:
Table 1 presents the word error rates (WER) from the baseline CNN model trained with the SWB data when evaluated on the NIST 2000 CTS test set, for both cross-entropy (CE) training and sequence training (ST) using MMI. Also, the CNN models always gave better results, confirming similar observations from studies reported earlier <cite>[8]</cite> . | ["Uses", "Similarities"] | science.multicite_intent_classification.2074 | [
{
"content": "Examine the text below, which includes a citation marked by \"<cite>\" and \"</cite>\". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.\n\nProvide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.\n\nText with Citation:\nTable 1 presents the word error rates (WER) from the baseline CNN model trained with the SWB data when evaluated on the NIST 2000 CTS test set, for both cross-entropy (CE) training and sequence training (ST) using MMI. Also, the CNN models always gave better results, confirming similar observations from studies reported earlier <cite>[8]</cite> .",
"role": "user"
},
{
"content": "[\"Uses\", \"Similarities\"]",
"role": "assistant"
}
]
|
science.evidence_inference | Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:
- Treatment: The specific medical procedure or drug being evaluated.
- Control: The standard or placebo used for comparison. If absent, use `null`.
- Result: The health outcome being assessed.
- Impact: The observed effect of the treatment on the result, categorized as: ("increased", "decreased", "no change").
- Justification: The supporting details for the observed impact.
Format the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].
Abstract:
ABSTRACT.BACKGROUND: The aim of this study was to investigate the impact of a tenofovir disoproxil fumarate (TDF) plus ritonavir-boosted protease inhibitor (PI/r) regimen on renal function in Chinese HIV-infected patients.
ABSTRACT.METHODS: Seventy-five HIV-1 infected patients failing first-line antiretroviral therapy (ART) comprised the TDF+PI/r group. Seventy-five HIV-1 infected patients matched for gender, age, and renal function made up the control. All subjects completed follow-up visits over 48 weeks. CD4 cell count, plasma HIV-1 viral load, and urine protein level were assessed at the trial start (baseline, week 0) and at week 48. The serum creatinine and estimated glomerular filtration rate (eGFR) were monitored at each follow-up point. Change in eGFR from baseline to week 48 was also compared.
ABSTRACT.RESULTS: Compared to control, the TDF+PI/r group exhibited higher levels of serum creatinine (79 vs. 69.7 μmol/L, P<0.001) and a lower rate of eGFR (93.0 vs. 101.6 ml/min/1.73m2, P=0.009) at the end of week 48. Patients treated with TDF+PI/r showed greater decline in eGFR than control (−8.8 vs. 6.4ml/min/1.73m2, P<0.001). Compared to baseline renal function of the control group, the TDF+PI/r group exhibited a greater median decline in eGFR at the end of week 48 (P<0.001).
ABSTRACT.CONCLUSIONS: We found that a TDF+PI/r based ART regimen resulted in greater renal function decline over 48 weeks. Therefore, renal function should be monitored especially when TDF is used in combination with PI/r.
ABSTRACT.TRIAL REGISTRATION: ClinicalTrials.gov identifier: NCT00872417
Extracted Data: | [["a tenofovir disoproxil fumarate (TDF) plus ritonavir-boosted protease inhibitor (PI/r) group", "control", "serum creatinine", "significantly increased", "Compared to control, the TDF+PI/r group exhibited higher levels of serum creatinine (79 vs. 69.7 \u03bcmol/L, P\u0026lt;0.001) and a lower rate of eGFR (93.0 vs. 101.6 ml/min/1.73m2, P=0.009) at the end of week 48."]] | science.evidence_inference.198 | [
{
"content": "Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:\n\n- Treatment: The specific medical procedure or drug being evaluated.\n- Control: The standard or placebo used for comparison. If absent, use `null`.\n- Result: The health outcome being assessed.\n- Impact: The observed effect of the treatment on the result, categorized as: (\"increased\", \"decreased\", \"no change\").\n- Justification: The supporting details for the observed impact.\n\nFormat the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].\n\nAbstract:\nABSTRACT.BACKGROUND: The aim of this study was to investigate the impact of a tenofovir disoproxil fumarate (TDF) plus ritonavir-boosted protease inhibitor (PI/r) regimen on renal function in Chinese HIV-infected patients.\n\nABSTRACT.METHODS: Seventy-five HIV-1 infected patients failing first-line antiretroviral therapy (ART) comprised the TDF+PI/r group. Seventy-five HIV-1 infected patients matched for gender, age, and renal function made up the control. All subjects completed follow-up visits over 48 weeks. CD4 cell count, plasma HIV-1 viral load, and urine protein level were assessed at the trial start (baseline, week 0) and at week 48. The serum creatinine and estimated glomerular filtration rate (eGFR) were monitored at each follow-up point. Change in eGFR from baseline to week 48 was also compared.\n\nABSTRACT.RESULTS: Compared to control, the TDF+PI/r group exhibited higher levels of serum creatinine (79 vs. 69.7 μmol/L, P<0.001) and a lower rate of eGFR (93.0 vs. 101.6 ml/min/1.73m2, P=0.009) at the end of week 48. Patients treated with TDF+PI/r showed greater decline in eGFR than control (−8.8 vs. 6.4ml/min/1.73m2, P<0.001). Compared to baseline renal function of the control group, the TDF+PI/r group exhibited a greater median decline in eGFR at the end of week 48 (P<0.001).\n\nABSTRACT.CONCLUSIONS: We found that a TDF+PI/r based ART regimen resulted in greater renal function decline over 48 weeks. Therefore, renal function should be monitored especially when TDF is used in combination with PI/r.\n\nABSTRACT.TRIAL REGISTRATION: ClinicalTrials.gov identifier: NCT00872417\n\nExtracted Data:",
"role": "user"
},
{
"content": "[[\"a tenofovir disoproxil fumarate (TDF) plus ritonavir-boosted protease inhibitor (PI/r) group\", \"control\", \"serum creatinine\", \"significantly increased\", \"Compared to control, the TDF+PI/r group exhibited higher levels of serum creatinine (79 vs. 69.7 \\u03bcmol/L, P\\u0026lt;0.001) and a lower rate of eGFR (93.0 vs. 101.6 ml/min/1.73m2, P=0.009) at the end of week 48.\"]]",
"role": "assistant"
}
]
|
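To make the requested tuple layout concrete, here is a minimal sketch that packs one finding in the field order given in the prompt above. The helper name and the values (paraphrased from the abstract) are illustrative assumptions, not the row's gold annotation.

```python
import json

def make_finding(treatment, control, outcome, effect, justification):
    # Field order requested by the prompt:
    # [treatment, control, outcome, effect, justification]
    return [treatment, control, outcome, effect, justification]

findings = [
    make_finding(
        "TDF plus ritonavir-boosted protease inhibitor (TDF+PI/r)",
        "matched control group",
        "serum creatinine at week 48",
        "increased",
        "79 vs. 69.7 umol/L, P<0.001",
    )
]
print(json.dumps(findings))
```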
science.multicite_intent_classification | Examine the text below, which includes a citation marked by "<cite>" and "</cite>". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.
Provide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.
Text with Citation:
While our reordering step is a reimplementation of the <cite>Collins et al. (2005)</cite> system, contrary to their findings we do not see an improvement using the reordering step alone. | ["Uses", "Differences"] | science.multicite_intent_classification.289 | [
{
"content": "Examine the text below, which includes a citation marked by \"<cite>\" and \"</cite>\". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.\n\nProvide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.\n\nText with Citation:\nWhile our reordering step is a reimplementation of the <cite>Collins et al. (2005)</cite> system, contrary to their findings we do not see an improvement using the reordering step alone.",
"role": "user"
},
{
"content": "[\"Uses\", \"Differences\"]",
"role": "assistant"
}
]
|
science.multicite_intent_classification | Examine the text below, which includes a citation marked by "<cite>" and "</cite>". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.
Provide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.
Text with Citation:
Following <cite>Horn et al. (2014)</cite> , our system simplifies neither proper nouns, as identified by the Natural Language Toolkit (Bird et al., 2009) , nor words in our stoplist, which are already simple. | ["Uses"] | science.multicite_intent_classification.902 | [
{
"content": "Examine the text below, which includes a citation marked by \"<cite>\" and \"</cite>\". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.\n\nProvide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.\n\nText with Citation:\nFollowing <cite>Horn et al. (2014)</cite> , our system simplifies neither proper nouns, as identified by the Natural Language Toolkit (Bird et al., 2009) , nor words in our stoplist, which are already simple.",
"role": "user"
},
{
"content": "[\"Uses\"]",
"role": "assistant"
}
]
|
science.discomat_te | | Band position (cm-1) | Band assignment |
| 3440 | OH stretching [16,21] |
| 1625 | OH bending [16,21] |
| 1077 | Si-O-Si stretching [16,21] |
| 960 | Si-O-, Si-O-Zr and Si-OH stretching [16,21,24] |
| 800 | Si-O-Si symmetric stretching [16,21] |
| 700 | Zr-O-Zr bond [24] |
| 540 | Zr-O-Zr bond [24] |
| 460 | Si-O-Si rocking motions [16,21] |
Caption: Infrared bands and their assignments
Examine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:
{"value": "xx", "type": "Other"}
{"value": "xx", "type": "Glass_Compound_Amount", "constituent": "xx", "unit": "xx", "material": "xx"}
Process each cell sequentially, and for any unknown attributes, use "xx" as a placeholder. Each JSON should be on a new line.
Output: | {"value": "3440", "type": "Other"}
{"value": "1625", "type": "Other"}
{"value": "1077", "type": "Other"}
{"value": "960", "type": "Other"}
{"value": "800", "type": "Other"}
{"value": "700", "type": "Other"}
{"value": "540", "type": "Other"}
{"value": "460", "type": "Other"} | science.discomat_te.441 | [
{
"content": "| Band position (cm-1) | Band assignment |\n| 3440 | OH stretching [16,21] |\n| 1625 | OH bending [16,21] |\n| 1077 | Si-O-Si stretching [16,21] |\n| 960 | Si-O-, Si-O-Zr and Si-OH stretching [16,21,24] |\n| 800 | Si-O-Si symmetric stretching [16,21] |\n| 700 | Zr-O-Zr bond [24] |\n| 540 | Zr-O-Zr bond [24] |\n| 460 | Si-O-Si rocking motions [16,21] |\n\nCaption: Infrared bands and their assignments\n\nExamine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:\n{\"value\": \"xx\", \"type\": \"Other\"}\n{\"value\": \"xx\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"xx\", \"unit\": \"xx\", \"material\": \"xx\"}\n\nProcess each cell sequentially, and for any unknown attributes, use \"xx\" as a placeholder. Each JSON should be on a new line.\n\nOutput:",
"role": "user"
},
{
"content": "{\"value\": \"3440\", \"type\": \"Other\"}\n{\"value\": \"1625\", \"type\": \"Other\"}\n{\"value\": \"1077\", \"type\": \"Other\"}\n{\"value\": \"960\", \"type\": \"Other\"}\n{\"value\": \"800\", \"type\": \"Other\"}\n{\"value\": \"700\", \"type\": \"Other\"}\n{\"value\": \"540\", \"type\": \"Other\"}\n{\"value\": \"460\", \"type\": \"Other\"}",
"role": "assistant"
}
]
|
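Since every cell in the infrared-band table is a band position rather than a glass composition amount, the conversion asked for in the prompt reduces to emitting the generic record for each value. A minimal sketch follows; the list layout is an assumption made for illustration.

```python
import json

# Band positions (cm^-1) read off the infrared table above; none of them is a
# composition amount, so each one maps to the generic "Other" record.
band_positions_cm1 = ["3440", "1625", "1077", "960", "800", "700", "540", "460"]

for value in band_positions_cm1:
    print(json.dumps({"value": value, "type": "Other"}))
```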
science.biored_ner | Analyze the provided biomedical abstract and identify distinct entities. Focus on the specified categories: ["CellLine", "Variant", "Species", "Disease", "Chemical", "Gene"].
Ensure the output is strictly the JSON object.
Source Text:
Segregation of a M404V mutation of the p62/sequestosome 1 (p62/SQSTM1) gene with polyostotic Paget's disease of bone in an Italian family. Mutations of the p62/Sequestosome 1 gene (p62/SQSTM1) account for both sporadic and familial forms of Paget's disease of bone (PDB). We originally described a methionine-->valine substitution at codon 404 (M404V) of exon 8, in the ubiquitin protein-binding domain of p62/SQSTM1 gene in an Italian PDB patient. The collection of data from the patient's pedigree provided evidence for a familial form of PDB. Extension of the genetic analysis to other relatives in this family demonstrated segregation of the M404V mutation with the polyostotic PDB phenotype and provided the identification of six asymptomatic gene carriers. DNA for mutational analysis of the exon 8 coding sequence was obtained from 22 subjects, 4 PDB patients and 18 clinically unaffected members. Of the five clinically ascertained affected members of the family, four possessed the M404V mutation and exhibited the polyostotic form of PDB, except one patient with a single X-ray-assessed skeletal localization and one with a polyostotic disease who had died several years before the DNA analysis. By both reconstitution and mutational analysis of the pedigree, six unaffected subjects were shown to bear the M404V mutation, representing potential asymptomatic gene carriers whose circulating levels of alkaline phosphatase were recently assessed as still within the normal range. Taken together, these results support a genotype-phenotype correlation between the M404V mutation in the p62/SQSTM1 gene and a polyostotic form of PDB in this family. The high penetrance of the PDB trait in this family together with the study of the asymptomatic gene carriers will allow us to confirm the proposed genotype-phenotype correlation and to evaluate the potential use of mutational analysis of the p62/SQSTM1 gene in the early detection of relatives at risk for PDB. | {"CellLine": [], "Chemical": ["alkaline phosphatase"], "Disease": ["Paget\u0027s disease of bone", "polyostotic disease", "PDB"], "Gene": ["sequestosome 1", "p62", "Sequestosome 1", "SQSTM1"], "Species": ["patient", "patients"], "Variant": ["M404V", "methionine--\u003evaline substitution at codon 404"]} | science.biored_ner.3 | [
{
"content": "Analyze the provided biomedical abstract and identify distinct entities. Focus on the specified categories: [\"CellLine\", \"Variant\", \"Species\", \"Disease\", \"Chemical\", \"Gene\"].\n\nEnsure the output is strictly the JSON object.\n\nSource Text:\n\nSegregation of a M404V mutation of the p62/sequestosome 1 (p62/SQSTM1) gene with polyostotic Paget's disease of bone in an Italian family. Mutations of the p62/Sequestosome 1 gene (p62/SQSTM1) account for both sporadic and familial forms of Paget's disease of bone (PDB). We originally described a methionine-->valine substitution at codon 404 (M404V) of exon 8, in the ubiquitin protein-binding domain of p62/SQSTM1 gene in an Italian PDB patient. The collection of data from the patient's pedigree provided evidence for a familial form of PDB. Extension of the genetic analysis to other relatives in this family demonstrated segregation of the M404V mutation with the polyostotic PDB phenotype and provided the identification of six asymptomatic gene carriers. DNA for mutational analysis of the exon 8 coding sequence was obtained from 22 subjects, 4 PDB patients and 18 clinically unaffected members. Of the five clinically ascertained affected members of the family, four possessed the M404V mutation and exhibited the polyostotic form of PDB, except one patient with a single X-ray-assessed skeletal localization and one with a polyostotic disease who had died several years before the DNA analysis. By both reconstitution and mutational analysis of the pedigree, six unaffected subjects were shown to bear the M404V mutation, representing potential asymptomatic gene carriers whose circulating levels of alkaline phosphatase were recently assessed as still within the normal range. Taken together, these results support a genotype-phenotype correlation between the M404V mutation in the p62/SQSTM1 gene and a polyostotic form of PDB in this family. The high penetrance of the PDB trait in this family together with the study of the asymptomatic gene carriers will allow us to confirm the proposed genotype-phenotype correlation and to evaluate the potential use of mutational analysis of the p62/SQSTM1 gene in the early detection of relatives at risk for PDB.",
"role": "user"
},
{
"content": "{\"CellLine\": [], \"Chemical\": [\"alkaline phosphatase\"], \"Disease\": [\"Paget\\u0027s disease of bone\", \"polyostotic disease\", \"PDB\"], \"Gene\": [\"sequestosome 1\", \"p62\", \"Sequestosome 1\", \"SQSTM1\"], \"Species\": [\"patient\", \"patients\"], \"Variant\": [\"M404V\", \"methionine--\\u003evaline substitution at codon 404\"]}",
"role": "assistant"
}
]
|
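A small sketch of the output skeleton for the entity-extraction row above. The helper and the two example mentions are illustrative assumptions and are not part of the dataset.

```python
import json

CATEGORIES = ["CellLine", "Variant", "Species", "Disease", "Chemical", "Gene"]

def empty_prediction():
    # One list of unique surface mentions per entity category, matching the
    # JSON object format requested in the prompt above.
    return {category: [] for category in CATEGORIES}

prediction = empty_prediction()
prediction["Gene"].extend(["p62", "SQSTM1"])   # example mentions only
prediction["Variant"].append("M404V")          # example mention only
print(json.dumps(prediction))
```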
science.multicite_intent_classification | Examine the text below, which includes a citation marked by "<cite>" and "</cite>". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.
Provide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.
Text with Citation:
Word embedding initialization provides a consistent and considerable boost in this cross-lingual setup, up to 10% absolute improvements across 21 languages when only 500 projected training instances are available <cite>(Plank and Agić, 2018)</cite> . | ["Background"] | science.multicite_intent_classification.1241 | [
{
"content": "Examine the text below, which includes a citation marked by \"<cite>\" and \"</cite>\". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.\n\nProvide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.\n\nText with Citation:\nWord embedding initialization provides a consistent and considerable boost in this cross-lingual setup, up to 10% absolute improvements across 21 languages when only 500 projected training instances are available <cite>(Plank and Agić, 2018)</cite> .",
"role": "user"
},
{
"content": "[\"Background\"]",
"role": "assistant"
}
]
|
science.discomat_te | | mol% | LAS | NAS | KAS |
| Al2O3 | 9.93 | 9.80 | 9.98 |
| SiO2 | 61.80 | 59.70 | 61.11 |
| Li2O | 28.24 | 0.0120 | 0.0002 |
| Na2O | 0.0308 | 30.47 | 0.0105 |
| K2O | 0.0007 | 0.0017 | 28.88 |
| Tg (degC) | 496 | 518 | - |
| Density (g/cm3) | 2.394 | 2.503 | - |
| Shear modulus (GPa) | 34.8 | 26.7 | 20.8 |
| Young's modulus (GPa) | 85.6 | 66.9 | 52.4 |
| Poisson's ratio | 0.23 | 0.255 | 0.262 |
Caption: Glass chemistry determined by wet chemistry expressed in mol%, glass transition temperature by beam bending viscometry (where viscosity is equal to 1012 Pas), room temperature density, and elastic moduli analyzed by resonant ultrasound spectroscopy.
Examine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:
{"value": "xx", "type": "Other"}
{"value": "xx", "type": "Glass_Compound_Amount", "constituent": "xx", "unit": "xx", "material": "xx"}
Process each cell sequentially, and for any unknown attributes, use "xx" as a placeholder. Each JSON should be on a new line.
Output: | {"value": "9.93", "type": "Glass_Compound_Amount", "constituent": "Al2O3", "unit": "mol", "material": "LAS"}
{"value": "9.80", "type": "Glass_Compound_Amount", "constituent": "Al2O3", "unit": "mol", "material": "NAS"}
{"value": "9.98", "type": "Glass_Compound_Amount", "constituent": "Al2O3", "unit": "mol", "material": "KAS"}
{"value": "61.80", "type": "Glass_Compound_Amount", "constituent": "SiO2", "unit": "mol", "material": "LAS"}
{"value": "59.70", "type": "Other"}
{"value": "61.11", "type": "Other"}
{"value": "28.24", "type": "Glass_Compound_Amount", "constituent": "Li2O", "unit": "mol", "material": "LAS"}
{"value": "0.0120", "type": "Other"}
{"value": "0.0002", "type": "Other"}
{"value": "0.0308", "type": "Glass_Compound_Amount", "constituent": "Na2O", "unit": "mol", "material": "LAS"}
{"value": "30.47", "type": "Other"}
{"value": "0.0105", "type": "Other"}
{"value": "0.0007", "type": "Glass_Compound_Amount", "constituent": "K2O", "unit": "mol", "material": "LAS"}
{"value": "0.0017", "type": "Glass_Compound_Amount", "constituent": "K2O", "unit": "mol", "material": "NAS"}
{"value": "28.88", "type": "Glass_Compound_Amount", "constituent": "K2O", "unit": "mol", "material": "KAS"}
{"value": "496", "type": "Other"}
{"value": "518", "type": "Other"}
{"value": "2.394", "type": "Other"}
{"value": "2.503", "type": "Other"}
{"value": "34.8", "type": "Other"}
{"value": "26.7", "type": "Other"}
{"value": "20.8", "type": "Other"}
{"value": "85.6", "type": "Other"}
{"value": "66.9", "type": "Other"}
{"value": "52.4", "type": "Other"}
{"value": "0.23", "type": "Other"}
{"value": "0.255", "type": "Other"}
{"value": "0.262", "type": "Other"} | science.discomat_te.175 | [
{
"content": "| mol% | LAS | NAS | KAS |\n| Al2O3 | 9.93 | 9.80 | 9.98 |\n| SiO2 | 61.80 | 59.70 | 61.11 |\n| Li2O | 28.24 | 0.0120 | 0.0002 |\n| Na2O | 0.0308 | 30.47 | 0.0105 |\n| K2O | 0.0007 | 0.0017 | 28.88 |\n| Tg (degC) | 496 | 518 | - |\n| Density (g/cm3) | 2.394 | 2.503 | - |\n| Shear modulus (GPa) | 34.8 | 26.7 | 20.8 |\n| Young's modulus (GPa) | 85.6 | 66.9 | 52.4 |\n| Poisson's ratio | 0.23 | 0.255 | 0.262 |\n\nCaption: Glass chemistry determined by wet chemistry expressed in mol%, glass transition temperature by beam bending viscometry (where viscosity is equal to 1012 Pas), room temperature density, and elastic moduli analyzed by resonant ultrasound spectroscopy.\n\nExamine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:\n{\"value\": \"xx\", \"type\": \"Other\"}\n{\"value\": \"xx\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"xx\", \"unit\": \"xx\", \"material\": \"xx\"}\n\nProcess each cell sequentially, and for any unknown attributes, use \"xx\" as a placeholder. Each JSON should be on a new line.\n\nOutput:",
"role": "user"
},
{
"content": "{\"value\": \"9.93\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Al2O3\", \"unit\": \"mol\", \"material\": \"LAS\"}\n{\"value\": \"9.80\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Al2O3\", \"unit\": \"mol\", \"material\": \"NAS\"}\n{\"value\": \"9.98\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Al2O3\", \"unit\": \"mol\", \"material\": \"KAS\"}\n{\"value\": \"61.80\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"SiO2\", \"unit\": \"mol\", \"material\": \"LAS\"}\n{\"value\": \"59.70\", \"type\": \"Other\"}\n{\"value\": \"61.11\", \"type\": \"Other\"}\n{\"value\": \"28.24\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Li2O\", \"unit\": \"mol\", \"material\": \"LAS\"}\n{\"value\": \"0.0120\", \"type\": \"Other\"}\n{\"value\": \"0.0002\", \"type\": \"Other\"}\n{\"value\": \"0.0308\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Na2O\", \"unit\": \"mol\", \"material\": \"LAS\"}\n{\"value\": \"30.47\", \"type\": \"Other\"}\n{\"value\": \"0.0105\", \"type\": \"Other\"}\n{\"value\": \"0.0007\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"K2O\", \"unit\": \"mol\", \"material\": \"LAS\"}\n{\"value\": \"0.0017\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"K2O\", \"unit\": \"mol\", \"material\": \"NAS\"}\n{\"value\": \"28.88\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"K2O\", \"unit\": \"mol\", \"material\": \"KAS\"}\n{\"value\": \"496\", \"type\": \"Other\"}\n{\"value\": \"518\", \"type\": \"Other\"}\n{\"value\": \"2.394\", \"type\": \"Other\"}\n{\"value\": \"2.503\", \"type\": \"Other\"}\n{\"value\": \"34.8\", \"type\": \"Other\"}\n{\"value\": \"26.7\", \"type\": \"Other\"}\n{\"value\": \"20.8\", \"type\": \"Other\"}\n{\"value\": \"85.6\", \"type\": \"Other\"}\n{\"value\": \"66.9\", \"type\": \"Other\"}\n{\"value\": \"52.4\", \"type\": \"Other\"}\n{\"value\": \"0.23\", \"type\": \"Other\"}\n{\"value\": \"0.255\", \"type\": \"Other\"}\n{\"value\": \"0.262\", \"type\": \"Other\"}",
"role": "assistant"
}
]
|
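For the composition table above, cells in the mol% rows become Glass_Compound_Amount records carrying the constituent, unit, and glass label, while property rows (Tg, density, moduli, Poisson's ratio) fall back to the generic record. The sketch below annotates an illustrative subset under that assumption and does not try to reproduce the row's gold annotation exactly.

```python
import json

# Illustrative subset of the mol% cells, keyed as glass -> constituent -> amount.
compositions = {
    "LAS": {"Al2O3": "9.93", "SiO2": "61.80", "Li2O": "28.24"},
    "NAS": {"Al2O3": "9.80"},
    "KAS": {"Al2O3": "9.98", "K2O": "28.88"},
}
properties = ["496", "518", "2.394", "34.8"]  # e.g. Tg, density, shear modulus cells

for glass, constituents in compositions.items():
    for constituent, amount in constituents.items():
        print(json.dumps({
            "value": amount,
            "type": "Glass_Compound_Amount",
            "constituent": constituent,
            "unit": "mol",
            "material": glass,
        }))

for value in properties:
    print(json.dumps({"value": value, "type": "Other"}))
```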
science.multicite_intent_classification | Examine the text below, which includes a citation marked by "<cite>" and "</cite>". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.
Provide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.
Text with Citation:
The 10-fold cross-validation results of the two classifiers are provided in Table 1 using the metrics of precision, recall, and F-Measure. The performance of the classifiers is better for the Favor class for both targets when compared with the performance results for the Against class. The same percentage of common terms may not have been observed in tweets during the expression of negative stances towards the targets. Yet, completely the opposite pattern is observed in stance detection results of baseline systems given in <cite>[12]</cite>, i.e., better F-Measure rates have been obtained for the Against class when compared with the Favor class <cite>[12]</cite>. | ["Differences"] | science.multicite_intent_classification.87 | [
{
"content": "Examine the text below, which includes a citation marked by \"<cite>\" and \"</cite>\". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.\n\nProvide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.\n\nText with Citation:\ne 10-fold cross-validation results of the two classi ers are provided in Table 1 using the metrics of precision, recall, and F-Measure. e performance of the classi ers is be er for the Favor class for both targets when compared with the performance results for the Against class. e same percentage of common terms may not have been observed in tweets during the expression of negative stances towards the targets. Yet, completely the opposite pa ern is observed in stance detection results of baseline systems given in <cite>[12]</cite> , i.e., be er FMeasure rates have been obtained for the Against class when compared with the Favor class <cite>[12]</cite> .",
"role": "user"
},
{
"content": "[\"Differences\"]",
"role": "assistant"
}
]
|
science.mup_single_document_summarization | Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.
Title of the Paper:
Differentiable Learning Under Triage
Main Content:
1 Introduction . In recent years , there has been a raising interest on a new learning paradigm which seeks the development of predictive models that operate under different automation levels—models that take decisions for a given fraction of instances and leave the remaining ones to human experts . This new paradigm has been so far referred to as learning under algorithmic triage [ 1 ] , learning under human assistance [ 2 , 3 ] , learning to complement humans [ 4 , 5 ] , and learning to defer to an expert [ 6 ] . Here , one does not only has to find a predictive model but also a triage policy which determines who predicts each instance . The motivation that underpins learning under algorithmic triage is the observation that , while there are high-stake tasks where predictive models have matched , or even surpassed , the average performance of human experts [ 8 , 9 ] , they are still less accurate than human experts on some instances , where they make far more errors than average [ 1 ] . The main promise is that , by working together , human experts and predictive models are likely to achieve a considerably better performance than each of them would achieve on their own . While the above mentioned work has shown some success at fulfilling this promise , the interplay between the predictive accuracy of a predictive model and its human counterpart under algorithmic triage is not well understood . One of the main challenges in learning under algorithmic triage is that , for each potential triage policy , there is an optimal predictive model , however , the triage policy is also something one seeks to optimize , as first noted by De et al . [ 2 ] . In this context , previous work on learning under algorithmic triage can be naturally differentiated into two lines of work . The first line of work has developed rather general heuristic algorithms that do not enjoy theoretical guarantees [ 1 , 4 , 5 ] . The second has developed algorithms with theoretical guarantees [ 2 , 3 , 6 ] , however , they have focused on more 35th Conference on Neural Information Processing Systems ( NeurIPS 2021 ) . restrictive settings . More specifically , De et al . [ 2 , 3 ] focus on ridge regression and support vector machines and reduce the problem to the maximization of approximately submodular functions and Mozannar and Sontag [ 6 ] view the problem from the perspective of cost sensitive learning and introduce a convex and consistent surrogate loss of the objective function they consider . Our contributions . Our starting point is a theoretical investigation of the interplay between the prediction accuracy of supervised learning models and human experts under algorithmic triage . By doing so , we hope to better inform the design of general purpose techniques for training differentiable models under algorithmic triage . Our investigation yields the following insights : I . To find the optimal triage policy and predictive model , we need to take into account the amount of human expert disagreement , or expert uncertainty , on a per-instance level . II . We identify under which circumstances a predictive model that is optimal under full automation may be suboptimal under a desired level of triage . III . Given any predictive model and desired level of triage , the optimal triage policy is a deterministic threshold rule in which triage decisions are derived deterministically by thresholding the difference between the model and human errors on a per-instance level . 
Building on the above insights , we introduce a practical gradient-based algorithm that finds a sequence of predictive models and triage policies of increasing performance subject to a constraint on the maximum level of triage . We apply our gradient-based algorithm in a wide variety of supervised learning tasks using both synthetic and real-world data from two important applications—content moderation and scientific discovery . Our experiments illustrate our theoretical results and show that the models and triage policies provided by our algorithm outperform those provided by several competitive baselines1 . Further related work . Our work is also related to the areas of learning to defer and active learning . In learning to defer , the goal is to design machine learning models that are able to defer predictions [ 10– 18 ] . Most previous work focuses on supervised learning and design classifiers that learn to defer either by considering the defer action as an additional label value or by training an independent classifier to decide about deferred decisions . However , in this line of work , there are no human experts who make predictions whenever the classifiers defer them , in contrast with the literature on learning under algorithmic triage [ 1–6 ] —they just pay a constant cost every time they defer predictions . Moreover , the classifiers are trained to predict the labels of all samples in the training set , as in full automation . In active learning , the goal is to find which subset of samples one should label so that a model trained on these samples predicts accurately any sample at test time [ 19–26 ] . In contrast , in our work , the trained model only needs to predict accurately a fraction of samples picked by the triage policy at test time and rely on human experts to predict the remaining samples . 2 Supervised Learning under Triage . Let X ⊆ Rm be the feature domain , Y be the label domain , and assume features and labels are sampled from a ground truth distribution P ( x , y ) = P ( x ) P ( y |x ) . Moreover , let ŷ = h ( x ) be the label predictions provided by a human expert and assume they are sampled from a distribution P ( h |x ) , which models the disagreements amongst experts [ 27 ] . Then , in supervised learning under triage , one needs to find : ( i ) a triage policy π ( x ) : X → { 0 , 1 } , which determines who predicts each feature vector—a supervised learning model ( π ( x ) = 0 ) or a human expert ( π ( x ) = 1 ) ; ( ii ) a predictive model m ( x ) : X → Y , which needs to provide label predictions ŷ = m ( x ) for those feature vectors x for which π ( x ) = 0 . Here , similarly as in standard supervised learning , we look for the triage policy and the predictive model that result into the most accurate label predictions by minimizing a loss function ` ( ŷ , y ) . More formally , let Π be the set of all triage policies , then , given a hypothesis class of predictive modelsM , 1Our code and data are available at https : //github.com/Networks-Learning/differentiable-learning-under-triage our goal is to solve the following minimization problem2 : minimize π∈Π , m∈M L ( π , m ) subject to Ex [ π ( x ) ] ≤ b ( 1 ) where b is a given parameter that limits the level of triage , i.e. , the percentage of samples human experts need to provide predictions for , and L ( π , m ) = Ex , y , h [ ( 1− π ( x ) ) ` ( m ( x ) , y ) + π ( x ) ` ( h , y ) ] . 
( 2 ) Here , one might think of replacing h with its point estimate µh ( x ) = argminµh Eh|x [ ` ′ ( h , µh ) ] , where ` ′ ( · ) is a general loss function . However , the resulting objective would have a bias term , as formalized by the following proposition3 for the quadratic loss ` ′ ( h , µh ) = ( h− µh ) 2 : Proposition 1 Let ` ′ ( ŷ , y ) = ( ŷ − y ) 2 and assume there exist x ∈ X for which the distribution of human predictions P ( h |x ) is not a point mass . Then , the function L ( π , m ) = Ex , y [ ( 1− π ( x ) ) ` ′ ( m ( x ) , y ) + π ( x ) ` ′ ( µh ( x ) , y ) ] is a biased estimate of the true average loss defined in Eq . 2 . The above result implies that , to find the optimal triage policy and predictive model , we need to take into account the amount of expert disagreement , or expert uncertainty , on each feature vector x rather than just an average expert prediction . 3 On the Interplay Between Prediction Accuracy and Triage . Let m∗0 be the optimal predictive model under full automation , i.e. , m ∗ 0 = argminm∈M L ( π0 , m ) , where π0 ( x ) = 0 for all x ∈ X . Then , the following proposition tells us that , if the predictions made by m∗0 are less accurate than those by human experts on some instances , the model will always benefit from algorithmic triage : Proposition 2 If there is a subset V ⊂ X of positive measure under P such that∫ x∈V Ey|x [ ` ( m∗0 ( x ) , y ) ] dP > ∫ x∈V Ey , h|x [ ` ( h , y ) ] dP , then there exists a nontrivial triage policy π 6= π0 such that L ( π , m∗0 ) < L ( π0 , m∗0 ) . Moreover , if we rewrite the average loss as L ( π , m ) = Ex [ ( 1− π ( x ) ) Ey|x [ ` ( m ( x ) , y ) ] + π ( x ) Ey , h|x [ ` ( h , y ) ] ] , it becomes apparent that , for any modelm ∈M , the optimal triage policy π∗m = argminπ∈Π L ( π , m ) is a deterministic threshold rule in which triage decisions are derived by thresholding the difference between the model and human loss on a per-instance level . Formally , we have the following Theorem : Theorem 3 Let m ∈M be any fixed predictive model . Then , the optimal triage policy that minimize the loss L ( π , m ) subject to a constraint Ex [ π ( x ) ] ≤ b on the maximum level of triage is given by : π∗m , b ( x ) = { 1 if Ey|x [ ` ( m ( x ) , y ) − Eh|x [ ` ( h , y ) ] ] > tP , b , m 0 otherwise , ( 3 ) where tP , b , m = argmin τ≥0 Ex [ τ b+ max ( Ey|x [ ` ( m ( x ) , y ) − Eh|x [ ` ( h , y ) ] ] − τ , 0 ) ] . Then , if we plug in Eq . 3 into Eq . 1 , we can rewrite our minimization problem as : minimize m∈M L ( π∗m , b , m ) ( 4 ) where L ( π∗m , b , m ) = Ex [ Ey|x [ ` ( m ( x ) , y ) ] − THREStP , b , m ( Ey|x [ ` ( m ( x ) , y ) − Eh|x [ ` ( h , y ) ] ] , 0 ) ] ( 5 ) 2One might think that minimizing over the set of all possible triage policies is not a well-posed problem , i.e. , the optimal triage policy is an extremely complex function . However , our theoretical analysis will reveal that the optimal triage policy does have a simple form and its complexity depends on the considered hypothesis class of predictive models ( refer to Theorem 3 ) . 3All proofs can be found in Appendix A. with THRESt ( x , val ) = { x if x > t val otherwise . Here , note that , in the unconstrained case , tP , b , m = 0 and THRES0 ( x , 0 ) = max ( x , 0 ) . 
Next , building on the above expression , we prove that the optimal predictive model under full automation mθ∗0 is suboptimal under algorithmic triage as long as the average gradient across the subset of samples which are assigned to the human under the corresponding optimal triage policy π∗mθ∗0 , b is not zero . Formally , our main result is the following Proposition : Proposition 4 Let mθ∗0 be the optimal predictive model under full automation within a hypothesis class of parameterized modelsM ( Θ ) , π∗mθ∗0 , b the optimal triage policy for mθ∗0 defined in Eq . 3 for a given maximum level of triage b , and V = { x |π∗mθ∗0 , b ( x ) = 1 } . If∫ x∈V Ey|x [ ∇θ ` ( mθ ( x ) , y ) |θ=θ∗0 ] dP 6= 0 . ( 6 ) then it holds that L ( π∗mθ∗0 , b , mθ∗0 ) > minθ∈Θ L ( π ∗ mθ , b , mθ ) . Finally , we can also identify the circumstances under which any predictive model mθ′ within a hypothesis class of parameterized predictive modelsM ( Θ ) is suboptimal under algorithmic triage : Proposition 5 Let mθ′ be a predictive model within a hypothesis class of parameterized models M ( Θ ) , π∗mθ′ , b the optimal triage policy for mθ′ defined in Eq . 3 for a given maximum level of triage b , and V = { x |π∗mθ′ , b ( x ) = 0 } . If∫ x∈V Ey|x [ ∇θ ` ( mθ ( x ) , y ) |θ=θ′ ] dP 6= 0 . ( 7 ) then it holds that L ( π∗mθ′ , b , mθ′ ) > minθ∈Θ L ( π ∗ mθ , b , mθ ) . The above results will lay the foundations for our practical gradient-based algorithm for differentiable learning under triage in the next section . | This paper considers the setting of "algorithmic triage", where a predictive model can choose to route some data points to a human expert for manual review. The authors formally define the settings under which triage is useful, and show a motivating result that training a model for non-triage settings may be sub-optimal for learning under triage. They then derive the optimal triage policy as a deterministic threshold rule, show a gradient-based algorithm for this, and demonstrate strong empirical performance on several synthetic and real datasets. | science.mup_single_document_summarization.16323 | [
{
"content": "Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.\n\nTitle of the Paper:\nDifferentiable Learning Under Triage\n\nMain Content:\n1 Introduction . In recent years , there has been a raising interest on a new learning paradigm which seeks the development of predictive models that operate under different automation levels—models that take decisions for a given fraction of instances and leave the remaining ones to human experts . This new paradigm has been so far referred to as learning under algorithmic triage [ 1 ] , learning under human assistance [ 2 , 3 ] , learning to complement humans [ 4 , 5 ] , and learning to defer to an expert [ 6 ] . Here , one does not only has to find a predictive model but also a triage policy which determines who predicts each instance . The motivation that underpins learning under algorithmic triage is the observation that , while there are high-stake tasks where predictive models have matched , or even surpassed , the average performance of human experts [ 8 , 9 ] , they are still less accurate than human experts on some instances , where they make far more errors than average [ 1 ] . The main promise is that , by working together , human experts and predictive models are likely to achieve a considerably better performance than each of them would achieve on their own . While the above mentioned work has shown some success at fulfilling this promise , the interplay between the predictive accuracy of a predictive model and its human counterpart under algorithmic triage is not well understood . One of the main challenges in learning under algorithmic triage is that , for each potential triage policy , there is an optimal predictive model , however , the triage policy is also something one seeks to optimize , as first noted by De et al . [ 2 ] . In this context , previous work on learning under algorithmic triage can be naturally differentiated into two lines of work . The first line of work has developed rather general heuristic algorithms that do not enjoy theoretical guarantees [ 1 , 4 , 5 ] . The second has developed algorithms with theoretical guarantees [ 2 , 3 , 6 ] , however , they have focused on more 35th Conference on Neural Information Processing Systems ( NeurIPS 2021 ) . restrictive settings . More specifically , De et al . [ 2 , 3 ] focus on ridge regression and support vector machines and reduce the problem to the maximization of approximately submodular functions and Mozannar and Sontag [ 6 ] view the problem from the perspective of cost sensitive learning and introduce a convex and consistent surrogate loss of the objective function they consider . Our contributions . Our starting point is a theoretical investigation of the interplay between the prediction accuracy of supervised learning models and human experts under algorithmic triage . By doing so , we hope to better inform the design of general purpose techniques for training differentiable models under algorithmic triage . Our investigation yields the following insights : I . To find the optimal triage policy and predictive model , we need to take into account the amount of human expert disagreement , or expert uncertainty , on a per-instance level . II . We identify under which circumstances a predictive model that is optimal under full automation may be suboptimal under a desired level of triage . III . 
Given any predictive model and desired level of triage , the optimal triage policy is a deterministic threshold rule in which triage decisions are derived deterministically by thresholding the difference between the model and human errors on a per-instance level . Building on the above insights , we introduce a practical gradient-based algorithm that finds a sequence of predictive models and triage policies of increasing performance subject to a constraint on the maximum level of triage . We apply our gradient-based algorithm in a wide variety of supervised learning tasks using both synthetic and real-world data from two important applications—content moderation and scientific discovery . Our experiments illustrate our theoretical results and show that the models and triage policies provided by our algorithm outperform those provided by several competitive baselines1 . Further related work . Our work is also related to the areas of learning to defer and active learning . In learning to defer , the goal is to design machine learning models that are able to defer predictions [ 10– 18 ] . Most previous work focuses on supervised learning and design classifiers that learn to defer either by considering the defer action as an additional label value or by training an independent classifier to decide about deferred decisions . However , in this line of work , there are no human experts who make predictions whenever the classifiers defer them , in contrast with the literature on learning under algorithmic triage [ 1–6 ] —they just pay a constant cost every time they defer predictions . Moreover , the classifiers are trained to predict the labels of all samples in the training set , as in full automation . In active learning , the goal is to find which subset of samples one should label so that a model trained on these samples predicts accurately any sample at test time [ 19–26 ] . In contrast , in our work , the trained model only needs to predict accurately a fraction of samples picked by the triage policy at test time and rely on human experts to predict the remaining samples . 2 Supervised Learning under Triage . Let X ⊆ Rm be the feature domain , Y be the label domain , and assume features and labels are sampled from a ground truth distribution P ( x , y ) = P ( x ) P ( y |x ) . Moreover , let ŷ = h ( x ) be the label predictions provided by a human expert and assume they are sampled from a distribution P ( h |x ) , which models the disagreements amongst experts [ 27 ] . Then , in supervised learning under triage , one needs to find : ( i ) a triage policy π ( x ) : X → { 0 , 1 } , which determines who predicts each feature vector—a supervised learning model ( π ( x ) = 0 ) or a human expert ( π ( x ) = 1 ) ; ( ii ) a predictive model m ( x ) : X → Y , which needs to provide label predictions ŷ = m ( x ) for those feature vectors x for which π ( x ) = 0 . Here , similarly as in standard supervised learning , we look for the triage policy and the predictive model that result into the most accurate label predictions by minimizing a loss function ` ( ŷ , y ) . More formally , let Π be the set of all triage policies , then , given a hypothesis class of predictive modelsM , 1Our code and data are available at https : //github.com/Networks-Learning/differentiable-learning-under-triage our goal is to solve the following minimization problem2 : minimize π∈Π , m∈M L ( π , m ) subject to Ex [ π ( x ) ] ≤ b ( 1 ) where b is a given parameter that limits the level of triage , i.e. 
, the percentage of samples human experts need to provide predictions for , and L ( π , m ) = Ex , y , h [ ( 1− π ( x ) ) ` ( m ( x ) , y ) + π ( x ) ` ( h , y ) ] . ( 2 ) Here , one might think of replacing h with its point estimate µh ( x ) = argminµh Eh|x [ ` ′ ( h , µh ) ] , where ` ′ ( · ) is a general loss function . However , the resulting objective would have a bias term , as formalized by the following proposition3 for the quadratic loss ` ′ ( h , µh ) = ( h− µh ) 2 : Proposition 1 Let ` ′ ( ŷ , y ) = ( ŷ − y ) 2 and assume there exist x ∈ X for which the distribution of human predictions P ( h |x ) is not a point mass . Then , the function L ( π , m ) = Ex , y [ ( 1− π ( x ) ) ` ′ ( m ( x ) , y ) + π ( x ) ` ′ ( µh ( x ) , y ) ] is a biased estimate of the true average loss defined in Eq . 2 . The above result implies that , to find the optimal triage policy and predictive model , we need to take into account the amount of expert disagreement , or expert uncertainty , on each feature vector x rather than just an average expert prediction . 3 On the Interplay Between Prediction Accuracy and Triage . Let m∗0 be the optimal predictive model under full automation , i.e. , m ∗ 0 = argminm∈M L ( π0 , m ) , where π0 ( x ) = 0 for all x ∈ X . Then , the following proposition tells us that , if the predictions made by m∗0 are less accurate than those by human experts on some instances , the model will always benefit from algorithmic triage : Proposition 2 If there is a subset V ⊂ X of positive measure under P such that∫ x∈V Ey|x [ ` ( m∗0 ( x ) , y ) ] dP > ∫ x∈V Ey , h|x [ ` ( h , y ) ] dP , then there exists a nontrivial triage policy π 6= π0 such that L ( π , m∗0 ) < L ( π0 , m∗0 ) . Moreover , if we rewrite the average loss as L ( π , m ) = Ex [ ( 1− π ( x ) ) Ey|x [ ` ( m ( x ) , y ) ] + π ( x ) Ey , h|x [ ` ( h , y ) ] ] , it becomes apparent that , for any modelm ∈M , the optimal triage policy π∗m = argminπ∈Π L ( π , m ) is a deterministic threshold rule in which triage decisions are derived by thresholding the difference between the model and human loss on a per-instance level . Formally , we have the following Theorem : Theorem 3 Let m ∈M be any fixed predictive model . Then , the optimal triage policy that minimize the loss L ( π , m ) subject to a constraint Ex [ π ( x ) ] ≤ b on the maximum level of triage is given by : π∗m , b ( x ) = { 1 if Ey|x [ ` ( m ( x ) , y ) − Eh|x [ ` ( h , y ) ] ] > tP , b , m 0 otherwise , ( 3 ) where tP , b , m = argmin τ≥0 Ex [ τ b+ max ( Ey|x [ ` ( m ( x ) , y ) − Eh|x [ ` ( h , y ) ] ] − τ , 0 ) ] . Then , if we plug in Eq . 3 into Eq . 1 , we can rewrite our minimization problem as : minimize m∈M L ( π∗m , b , m ) ( 4 ) where L ( π∗m , b , m ) = Ex [ Ey|x [ ` ( m ( x ) , y ) ] − THREStP , b , m ( Ey|x [ ` ( m ( x ) , y ) − Eh|x [ ` ( h , y ) ] ] , 0 ) ] ( 5 ) 2One might think that minimizing over the set of all possible triage policies is not a well-posed problem , i.e. , the optimal triage policy is an extremely complex function . However , our theoretical analysis will reveal that the optimal triage policy does have a simple form and its complexity depends on the considered hypothesis class of predictive models ( refer to Theorem 3 ) . 3All proofs can be found in Appendix A. with THRESt ( x , val ) = { x if x > t val otherwise . Here , note that , in the unconstrained case , tP , b , m = 0 and THRES0 ( x , 0 ) = max ( x , 0 ) . 
Next , building on the above expression , we prove that the optimal predictive model under full automation mθ∗0 is suboptimal under algorithmic triage as long as the average gradient across the subset of samples which are assigned to the human under the corresponding optimal triage policy π∗mθ∗0 , b is not zero . Formally , our main result is the following Proposition : Proposition 4 Let mθ∗0 be the optimal predictive model under full automation within a hypothesis class of parameterized modelsM ( Θ ) , π∗mθ∗0 , b the optimal triage policy for mθ∗0 defined in Eq . 3 for a given maximum level of triage b , and V = { x |π∗mθ∗0 , b ( x ) = 1 } . If∫ x∈V Ey|x [ ∇θ ` ( mθ ( x ) , y ) |θ=θ∗0 ] dP 6= 0 . ( 6 ) then it holds that L ( π∗mθ∗0 , b , mθ∗0 ) > minθ∈Θ L ( π ∗ mθ , b , mθ ) . Finally , we can also identify the circumstances under which any predictive model mθ′ within a hypothesis class of parameterized predictive modelsM ( Θ ) is suboptimal under algorithmic triage : Proposition 5 Let mθ′ be a predictive model within a hypothesis class of parameterized models M ( Θ ) , π∗mθ′ , b the optimal triage policy for mθ′ defined in Eq . 3 for a given maximum level of triage b , and V = { x |π∗mθ′ , b ( x ) = 0 } . If∫ x∈V Ey|x [ ∇θ ` ( mθ ( x ) , y ) |θ=θ′ ] dP 6= 0 . ( 7 ) then it holds that L ( π∗mθ′ , b , mθ′ ) > minθ∈Θ L ( π ∗ mθ , b , mθ ) . The above results will lay the foundations for our practical gradient-based algorithm for differentiable learning under triage in the next section .",
"role": "user"
},
{
"content": "This paper considers the setting of \"algorithmic triage\", where a predictive model can choose to route some data points to a human expert for manual review. The authors formally define the settings under which triage is useful, and show a motivating result that training a model for non-triage settings may be sub-optimal for learning under triage. They then derive the optimal triage policy as a deterministic threshold rule, show a gradient-based algorithm for this, and demonstrate strong empirical performance on several synthetic and real datasets.",
"role": "assistant"
}
]
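The triage rule quoted in the record above reduces to a per-instance comparison of the expected model loss and the expected human loss, thresholded so that at most a fraction b of samples is routed to the human. A minimal sketch of that hard-assignment step follows; it is an illustration under the assumption that per-sample estimates of both losses are already available, not the authors' implementation, and all names are made up.

import numpy as np

def triage_mask(model_losses, human_losses, budget_b):
    # Per-sample estimate of E_y|x[loss(m(x), y)] - E_{y,h|x}[loss(h, y)]; positive means the human is better.
    gap = model_losses - human_losses
    k = int(budget_b * len(gap))            # at most a fraction budget_b of samples goes to the human
    if k == 0:
        return np.zeros(len(gap), dtype=bool)
    threshold = np.partition(gap, -k)[-k]   # plays the role of the threshold t_{P,b,m}
    return gap >= threshold                 # True -> defer this sample to the human expert (ties may slightly exceed the budget)

model_losses = np.array([0.9, 0.1, 0.4, 0.05, 0.7, 0.2, 0.6, 0.3])
human_losses = np.array([0.3, 0.2, 0.5, 0.10, 0.2, 0.4, 0.1, 0.3])
print(triage_mask(model_losses, human_losses, budget_b=0.25))

The full method described in the record alternates this assignment step with gradient updates of the model computed only on the machine-assigned samples, which is exactly why a model that is optimal under full automation can become suboptimal once triage is allowed.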
|
science.discomat_te | | Raman frequency (cm-1) | Suggested type of vibration |
| 26-50-65-75 | Folding of the TA modes coming from Brillouin zone boundary |
| 150 | g-In2Se3 crystal related zone center mode |
| 175 | In-Se vibrations |
| 180-202-212-220-234-245 | Folding of the TO and LO modes coming from Brillouin zone boundary |
| 235 | Se chain vibrations |
| 254 | Se8 rings vibrations |
| 490 | Second order RS |
Caption: Proposed interpretation of Raman spectra of both InSe2 and In2Se3 amorphous thin films
Examine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:
{"value": "xx", "type": "Other"}
{"value": "xx", "type": "Glass_Compound_Amount", "constituent": "xx", "unit": "xx", "material": "xx"}
Process each cell sequentially, and for any unknown attributes, use "xx" as a placeholder. Each JSON should be on a new line.
Output: | {"value": "26", "type": "Other"}
{"value": "50", "type": "Other"}
{"value": "65", "type": "Other"}
{"value": "75", "type": "Other"}
{"value": "150", "type": "Other"}
{"value": "175", "type": "Other"}
{"value": "180", "type": "Other"}
{"value": "202", "type": "Other"}
{"value": "212", "type": "Other"}
{"value": "220", "type": "Other"}
{"value": "234", "type": "Other"}
{"value": "245", "type": "Other"}
{"value": "235", "type": "Other"}
{"value": "254", "type": "Other"}
{"value": "490", "type": "Other"} | science.discomat_te.22 | [
{
"content": "| Raman frequency (cm-1) | Suggested type of vibration |\n| 26-50-65-75 | Folding of the TA modes coming from Brillouin zone boundary |\n| 150 | g-In2Se3 crystal related zone center mode |\n| 175 | In-Se vibrations |\n| 180-202-212-220-234-245 | Folding of the TO and LO modes coming from Brillouin zone boundary |\n| 235 | Se chain vibrations |\n| 254 | Se8 rings vibrations |\n| 490 | Second order RS |\n\nCaption: Proposed interpretation of Raman spectra of both InSe2 and In2Se3 amorphous thin films\n\nExamine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:\n{\"value\": \"xx\", \"type\": \"Other\"}\n{\"value\": \"xx\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"xx\", \"unit\": \"xx\", \"material\": \"xx\"}\n\nProcess each cell sequentially, and for any unknown attributes, use \"xx\" as a placeholder. Each JSON should be on a new line.\n\nOutput:",
"role": "user"
},
{
"content": "{\"value\": \"26\", \"type\": \"Other\"}\n{\"value\": \"50\", \"type\": \"Other\"}\n{\"value\": \"65\", \"type\": \"Other\"}\n{\"value\": \"75\", \"type\": \"Other\"}\n{\"value\": \"150\", \"type\": \"Other\"}\n{\"value\": \"175\", \"type\": \"Other\"}\n{\"value\": \"180\", \"type\": \"Other\"}\n{\"value\": \"202\", \"type\": \"Other\"}\n{\"value\": \"212\", \"type\": \"Other\"}\n{\"value\": \"220\", \"type\": \"Other\"}\n{\"value\": \"234\", \"type\": \"Other\"}\n{\"value\": \"245\", \"type\": \"Other\"}\n{\"value\": \"235\", \"type\": \"Other\"}\n{\"value\": \"254\", \"type\": \"Other\"}\n{\"value\": \"490\", \"type\": \"Other\"}",
"role": "assistant"
}
]
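The record above maps every dash-separated number in the first table column to a {"value", "type"} object with type "Other". A rough sketch of that conversion is shown below; the regex and the hard-coded "Other" type are simplifications that happen to match this particular table, and the general task also uses the Glass_Compound_Amount structure for composition cells.

import json
import re

table_rows = [
    "| 26-50-65-75 | Folding of the TA modes coming from Brillouin zone boundary |",
    "| 150 | g-In2Se3 crystal related zone center mode |",
    "| 490 | Second order RS |",
]

records = []
for row in table_rows:
    first_cell = row.strip("|").split("|")[0].strip()      # e.g. "26-50-65-75"
    for number in re.findall(r"\d+(?:\.\d+)?", first_cell): # one JSON object per numeric value
        records.append({"value": number, "type": "Other"})

for rec in records:
    print(json.dumps(rec))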
|
science.mup_single_document_summarization | Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.
Title of the Paper:
Efficient Sharpness-aware Minimization for Improved Training of Neural Networks
Main Content:
1 INTRODUCTION . Deep learning has achieved astounding performances in many fields by relying on larger numbers of parameters and increasingly sophisticated optimization algorithms . However , DNNs with far more parameters than training samples are more prone to poor generalization . Generalization is arguably the most fundamental and yet mysterious aspect of deep learning . Several studies have been conducted to better understand the generalization of DNNs and to train DNNs that generalize well across the natural distribution ( Keskar et al. , 2016 ; Neyshabur et al. , 2017 ; Chaudhari et al. , 2019 ; Zhang et al. , 2019 ; Wu et al. , 2020 ; Foret et al. , 2020 ; Zhang et al. , 2021 ) . For example , Keskar et al . ( 2016 ) investigate the effect of batch size on neural networks ’ generalization ability . Zhang et al . ( 2019 ) propose an optimizer for training DNNs with improved generalization ability . Specifically , Hochreiter & Schmidhuber ( 1995 ) , Li et al . ( 2017 ) and Dinh et al . ( 2017 ) argue that the geometry of the loss landscape affects generalization and DNNs with a flat minimum can generalize better . The recent work by Foret et al . ( 2020 ) proposes an effective training algorithm Sharpness Aware Minimizer ( SAM ) for obtaining a flat minimum . SAM employs a base optimizer such as Stochastic Gradient Descent ( Nesterov , 1983 ) or Adam ( Kingma & Ba , 2014 ) to minimize both the vanilla training loss and the sharpness . The sharpness , which describes the flatness of a minimum , is characterized using eigenvalues of the Hessian matrix by Keskar et al . ( 2016 ) . SAM quantifies the sharpness as the maximized change of training loss when a constraint perturbation is added to current weights . As a result , SAM leads to a flat minimum and significantly improves the generalization ability of the trained DNNs . SAM and its variants have been shown to outperform the state-of-the-art across a variety of deep learning benchmarks ( Kwon et al. , 2021 ; Chen et al. , 2021 ; Galatolo et al. , 2021 ; Zheng et al. , 2021 ) . Regrettably though , SAM and its variants achieve such remarkable performance at the expense of doubling the computational overhead of the given base optimizers , which minimize the training loss with a single forward and backward propagation step . SAM requires an additional propagation step compared to the base optimizers to resolve the weight perturbation for quantifying the sharpness . The extra propagation step requires the same computational overhead as the single propagation step used by base optimizers , resulting in SAM ’ s computational overhead being doubled ( 2× ) . As demonstrated in Figure 1 , SAM achieves higher test accuracy ( i.e. , 84.46 % vs. 81.89 % ) at the expense of sacrificing half of the training speed of the base optimizer ( i.e. , 276 imgs/s vs. 557 imgs/s ) . SGD SAM ESAM 80 81 82 83 84 85 86 A cc ur ac y ( % ) 100 200 300 400 500 600 700 Tr ai ni ng S pe ed ( i m ag es /s ) Accuracy Training Speed sharpness of DNNs . As a result , the sharpness calculated over the subsets can serve as an upper bound of the SAM ’ s sharpness , ensuring that SDS ’ s performance is comparable to that of SAM ’ s . We verify the effectiveness of ESAM on the CIFAR10 , CIFAR100 ( Krizhevsky et al. , 2009 ) and ImageNet ( Deng et al. , 2009 ) datasets with five different DNN architectures . The experimental results demonstrate that ESAM obtains flat minima at a cost of only 40 % ( vs. SAM ’ s 100 % ) extra computational overhead over base optimizers . 
More importantly , ESAM achieves better performance in terms of the test accuracy compared to SAM . In a nutshell , our contributions are as follows : • We propose two novel and effective training strategies Stochastic Weight Perturbation ( SWP ) and Sharpness-sensitive Data Selection ( SDS ) . Both strategies are designed to improve efficiency without sacrificing performance . The empirical results demonstrate that both of the proposed strategies can improve both the efficiency and effectiveness of SAM . • We introduce the ESAM , which integrates SWP and SDS . ESAM improves the generalization ability of DNNs with marginally additional computational cost compared to standard training . The rest of this paper is structured in this way . Section 2.1 introduces SAM and its computational issues . Section 2.2 and Section 2.3 discuss how the two proposed training strategies SWP and SDS are designed respectively . Section 3 verifies the effectiveness of ESAM across a variety of datasets and DNN architectures . Section 4 presents the related work and Section 5 concludes this paper . 2 METHODOLOGY . We start with recapitulating how SAM achieves a flat minimum with small sharpness , which is quantified by resolving a maximization problem . To compute the sharpness , SAM requires additional forward and backward propagation and results in the doubling of the computational overhead compared to base optimizers . Following that , we demonstrate how we derive and propose ESAM , which integrates SWP and SDS , to maximize efficiency while maintaining the performance . We introduce SWP and SDS in Sections 2.2 and 2.3 respectively . Algorithm 1 shows the overall proposed ESAM algorithm . Throughout this paper , we denote a neural network f with weight parameters θ as fθ . The weights are contained in the vector θ = ( θ1 , θ2 , . . . , θN ) , where N is the number of weight units in the neural network . Given a training dataset S that contains samples i.i.d . drawn from a distribution D , Algorithm 1 Efficient SAM ( ESAM ) Input : Network fθ , θ = ( θ1 , θ2 , . . . , θN ) , Training set S , Batch size b , Learning rate η > 0 , Neighborhood size ρ > 0 , Iterations A , SWP hyperparameter β , SDS hyperparameter γ . Output : A flat minimum solution θ̂ . 1 : for a = 1 to A do 2 : Sample a mini-batch B ⊂ S with size b . 3 : for n = 1 to N do 4 : if θn is chosen by probability β then 5 : n ← ρ1−β∇θnLB ( fθ ) . SWP in B1 6 : else 7 : n ← 0 8 : ̂← ( 1 , ... , N ) . Assign Weight Perturbation 9 : Compute ` ( fθ+̂ , xi , yi ) and construct B+ with selection ratio γ ( Equation 6 ) 10 : Compute gradients g = ∇θLB+ ( fθ+̂ ) . SDS in B2 11 : Update weights θ ← θ − ηg the network is trained to obtain optimal weights θ̂ via empirical risk minimization ( ERM ) , i.e. , θ̂ = arg min θ { LS ( fθ ) = 1 |S| ∑ ( xi , yi ) ∈S ` ( fθ , xi , yi ) } ( 1 ) where ` can be an arbitrary loss function . We take ` to be the cross entropy loss in this paper . The population loss is defined as LD ( fθ ) , E ( xi , yi ) ∼D ( ` ( fθ , xi , yi ) ) . In each training iteration , optimizers sample a mini-batch B ⊂ S with size b to update parameters . 2.1 SHARPNESS-AWARE MINIMIZATION AND ITS COMPUTATIONAL DRAWBACK . To improve the generalization capability of DNNs , Foret et al . ( 2020 ) proposed the SAM training strategy for searching flat minima . SAM trains DNNs by solving the following min-max optimization problem , min θ max : ‖ ‖2≤ρ LS ( fθ+ ) . 
( 2 ) Given θ , the inner optimization attempts to find a weight perturbation in Euclidean ball with radius ρ that maximizes the empirical loss . The maximized loss at weights θ is the sum of the empirical loss and the sharpness , which is defined to be RS ( fθ ) = max : ‖ ‖2 < ρ [ LS ( fθ+ ) − LS ( fθ ) ] . This sharpness is quantified by the maximal change of empirical loss when a constraint perturbation is added to θ . The min-max problem encourages SAM to find flat minima . For a certain set of weights θ , Foret et al . ( 2020 ) theoretically justifies that the population loss of DNNs can be upper-bounded by the sum of sharpness , empirical loss , and a regularization term on the norm of weights ( refer to Equation 3 ) . Thus , by minimizing the sharpness together with the empirical loss , SAM produces optimized solutions for DNNs with flat minima , and the resultant models can thus generalize better ( Foret et al. , 2020 ; Chen et al. , 2021 ; Kwon et al. , 2021 ) . LD ( fθ ) ≤ RS ( fθ ) + LS ( fθ ) + λ‖θ‖22 = max : ‖ ‖2≤ρ LS ( fθ+ ) + λ‖θ‖22 ( 3 ) In practice , SAM first approximately solves the inner optimization by means of a single-step gradient descent method , i.e. , ̂ = arg max : ‖ ‖2 < ρ LS ( fθ+ ) ≈ ρ∇θLS ( fθ ) . ( 4 ) The sharpness at weights θ is approximated by RS ( fθ ) = LS ( fθ+̂ ) − LS ( fθ ) . Then , a base optimizer , such as SGD ( Nesterov , 1983 ) or Adam ( Kingma & Ba , 2014 ) , updates the DNNs ’ weights to minimize LS ( fθ+̂ ) . We refer to LS ( fθ+̂ ) as the SAM loss . Overall , SAM requires two forward and two backward operations to update weights once . We refer to the forward and backward propagation for approximating ̂ as F1 and B1 and those for updating weights by base optimizers as F2 and B2 respectively . Although SAM can effectively improve the generalization of DNNs , it additionally requires one forward and one backward operation ( F1 and B1 ) in each training iteration . Thus , SAM results in a doubling of the computational overhead compared to the use of base optimizers . To improve the efficiency of SAM , we propose ESAM , which consists of two strategies—SWP and SDS , to accelerate the sharpness approximation phase and the weight updating phase . Specifically , on the one hand , when estimating ̂ around weight vector θ , SWP efficiently approximates ̂ by randomly selecting each parameter with a given probability to form a subset of weights to be perturbed . The reduction of the number of perturbed parameters results in lower computational overhead during the backward propagation . SWP rescales the resultant weight perturbation so as to assure that the expected weight perturbation equals to ̂ , and the generalization capability thus will not be significantly degraded . On the other hand , when updating weights via base optimizers , instead of computing the upper bound LB ( fθ+̂ ) over a whole batch of samples , SDS selects a subset of samples , B+ , whose loss values increase the most with respect to the perturbation ̂ . Optimizing the weights based on a fewer number of samples decreases the computational overhead ( in a linear fashion ) . We further justify that LB ( fθ+̂ ) can be upper bounded by LB+ ( fθ+̂ ) and consequently the generalization capability can be preserved . In general , ESAM works much more efficiently and performs as well as SAM in terms of its generalization capability . | This paper investigates the crucial efficiency issue of the sharpness-aware minimizer (SAM). 
SAM improves the generalization of DNNs but results in a double training time compared to vanilla training. By analyzing the min-max procedure of SAM, the authors observe the computational redundancy and then propose a method ESAM to improve the efficiency from the data and parameters perspectives. The authors argue that SAM can be approximated properly with fewer computations. Empirical results show that ESAM can reduce the extra training time in CIFAR10/100 and ImageNet datasets with improved accuracy compared to SAM. | science.mup_single_document_summarization.6798 | [
{
"content": "Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.\n\nTitle of the Paper:\nEfficient Sharpness-aware Minimization for Improved Training of Neural Networks\n\nMain Content:\n1 INTRODUCTION . Deep learning has achieved astounding performances in many fields by relying on larger numbers of parameters and increasingly sophisticated optimization algorithms . However , DNNs with far more parameters than training samples are more prone to poor generalization . Generalization is arguably the most fundamental and yet mysterious aspect of deep learning . Several studies have been conducted to better understand the generalization of DNNs and to train DNNs that generalize well across the natural distribution ( Keskar et al. , 2016 ; Neyshabur et al. , 2017 ; Chaudhari et al. , 2019 ; Zhang et al. , 2019 ; Wu et al. , 2020 ; Foret et al. , 2020 ; Zhang et al. , 2021 ) . For example , Keskar et al . ( 2016 ) investigate the effect of batch size on neural networks ’ generalization ability . Zhang et al . ( 2019 ) propose an optimizer for training DNNs with improved generalization ability . Specifically , Hochreiter & Schmidhuber ( 1995 ) , Li et al . ( 2017 ) and Dinh et al . ( 2017 ) argue that the geometry of the loss landscape affects generalization and DNNs with a flat minimum can generalize better . The recent work by Foret et al . ( 2020 ) proposes an effective training algorithm Sharpness Aware Minimizer ( SAM ) for obtaining a flat minimum . SAM employs a base optimizer such as Stochastic Gradient Descent ( Nesterov , 1983 ) or Adam ( Kingma & Ba , 2014 ) to minimize both the vanilla training loss and the sharpness . The sharpness , which describes the flatness of a minimum , is characterized using eigenvalues of the Hessian matrix by Keskar et al . ( 2016 ) . SAM quantifies the sharpness as the maximized change of training loss when a constraint perturbation is added to current weights . As a result , SAM leads to a flat minimum and significantly improves the generalization ability of the trained DNNs . SAM and its variants have been shown to outperform the state-of-the-art across a variety of deep learning benchmarks ( Kwon et al. , 2021 ; Chen et al. , 2021 ; Galatolo et al. , 2021 ; Zheng et al. , 2021 ) . Regrettably though , SAM and its variants achieve such remarkable performance at the expense of doubling the computational overhead of the given base optimizers , which minimize the training loss with a single forward and backward propagation step . SAM requires an additional propagation step compared to the base optimizers to resolve the weight perturbation for quantifying the sharpness . The extra propagation step requires the same computational overhead as the single propagation step used by base optimizers , resulting in SAM ’ s computational overhead being doubled ( 2× ) . As demonstrated in Figure 1 , SAM achieves higher test accuracy ( i.e. , 84.46 % vs. 81.89 % ) at the expense of sacrificing half of the training speed of the base optimizer ( i.e. , 276 imgs/s vs. 557 imgs/s ) . SGD SAM ESAM 80 81 82 83 84 85 86 A cc ur ac y ( % ) 100 200 300 400 500 600 700 Tr ai ni ng S pe ed ( i m ag es /s ) Accuracy Training Speed sharpness of DNNs . As a result , the sharpness calculated over the subsets can serve as an upper bound of the SAM ’ s sharpness , ensuring that SDS ’ s performance is comparable to that of SAM ’ s . 
We verify the effectiveness of ESAM on the CIFAR10 , CIFAR100 ( Krizhevsky et al. , 2009 ) and ImageNet ( Deng et al. , 2009 ) datasets with five different DNN architectures . The experimental results demonstrate that ESAM obtains flat minima at a cost of only 40 % ( vs. SAM ’ s 100 % ) extra computational overhead over base optimizers . More importantly , ESAM achieves better performance in terms of the test accuracy compared to SAM . In a nutshell , our contributions are as follows : • We propose two novel and effective training strategies Stochastic Weight Perturbation ( SWP ) and Sharpness-sensitive Data Selection ( SDS ) . Both strategies are designed to improve efficiency without sacrificing performance . The empirical results demonstrate that both of the proposed strategies can improve both the efficiency and effectiveness of SAM . • We introduce the ESAM , which integrates SWP and SDS . ESAM improves the generalization ability of DNNs with marginally additional computational cost compared to standard training . The rest of this paper is structured in this way . Section 2.1 introduces SAM and its computational issues . Section 2.2 and Section 2.3 discuss how the two proposed training strategies SWP and SDS are designed respectively . Section 3 verifies the effectiveness of ESAM across a variety of datasets and DNN architectures . Section 4 presents the related work and Section 5 concludes this paper . 2 METHODOLOGY . We start with recapitulating how SAM achieves a flat minimum with small sharpness , which is quantified by resolving a maximization problem . To compute the sharpness , SAM requires additional forward and backward propagation and results in the doubling of the computational overhead compared to base optimizers . Following that , we demonstrate how we derive and propose ESAM , which integrates SWP and SDS , to maximize efficiency while maintaining the performance . We introduce SWP and SDS in Sections 2.2 and 2.3 respectively . Algorithm 1 shows the overall proposed ESAM algorithm . Throughout this paper , we denote a neural network f with weight parameters θ as fθ . The weights are contained in the vector θ = ( θ1 , θ2 , . . . , θN ) , where N is the number of weight units in the neural network . Given a training dataset S that contains samples i.i.d . drawn from a distribution D , Algorithm 1 Efficient SAM ( ESAM ) Input : Network fθ , θ = ( θ1 , θ2 , . . . , θN ) , Training set S , Batch size b , Learning rate η > 0 , Neighborhood size ρ > 0 , Iterations A , SWP hyperparameter β , SDS hyperparameter γ . Output : A flat minimum solution θ̂ . 1 : for a = 1 to A do 2 : Sample a mini-batch B ⊂ S with size b . 3 : for n = 1 to N do 4 : if θn is chosen by probability β then 5 : n ← ρ1−β∇θnLB ( fθ ) . SWP in B1 6 : else 7 : n ← 0 8 : ̂← ( 1 , ... , N ) . Assign Weight Perturbation 9 : Compute ` ( fθ+̂ , xi , yi ) and construct B+ with selection ratio γ ( Equation 6 ) 10 : Compute gradients g = ∇θLB+ ( fθ+̂ ) . SDS in B2 11 : Update weights θ ← θ − ηg the network is trained to obtain optimal weights θ̂ via empirical risk minimization ( ERM ) , i.e. , θ̂ = arg min θ { LS ( fθ ) = 1 |S| ∑ ( xi , yi ) ∈S ` ( fθ , xi , yi ) } ( 1 ) where ` can be an arbitrary loss function . We take ` to be the cross entropy loss in this paper . The population loss is defined as LD ( fθ ) , E ( xi , yi ) ∼D ( ` ( fθ , xi , yi ) ) . In each training iteration , optimizers sample a mini-batch B ⊂ S with size b to update parameters . 2.1 SHARPNESS-AWARE MINIMIZATION AND ITS COMPUTATIONAL DRAWBACK . 
To improve the generalization capability of DNNs , Foret et al . ( 2020 ) proposed the SAM training strategy for searching flat minima . SAM trains DNNs by solving the following min-max optimization problem , min θ max : ‖ ‖2≤ρ LS ( fθ+ ) . ( 2 ) Given θ , the inner optimization attempts to find a weight perturbation in Euclidean ball with radius ρ that maximizes the empirical loss . The maximized loss at weights θ is the sum of the empirical loss and the sharpness , which is defined to be RS ( fθ ) = max : ‖ ‖2 < ρ [ LS ( fθ+ ) − LS ( fθ ) ] . This sharpness is quantified by the maximal change of empirical loss when a constraint perturbation is added to θ . The min-max problem encourages SAM to find flat minima . For a certain set of weights θ , Foret et al . ( 2020 ) theoretically justifies that the population loss of DNNs can be upper-bounded by the sum of sharpness , empirical loss , and a regularization term on the norm of weights ( refer to Equation 3 ) . Thus , by minimizing the sharpness together with the empirical loss , SAM produces optimized solutions for DNNs with flat minima , and the resultant models can thus generalize better ( Foret et al. , 2020 ; Chen et al. , 2021 ; Kwon et al. , 2021 ) . LD ( fθ ) ≤ RS ( fθ ) + LS ( fθ ) + λ‖θ‖22 = max : ‖ ‖2≤ρ LS ( fθ+ ) + λ‖θ‖22 ( 3 ) In practice , SAM first approximately solves the inner optimization by means of a single-step gradient descent method , i.e. , ̂ = arg max : ‖ ‖2 < ρ LS ( fθ+ ) ≈ ρ∇θLS ( fθ ) . ( 4 ) The sharpness at weights θ is approximated by RS ( fθ ) = LS ( fθ+̂ ) − LS ( fθ ) . Then , a base optimizer , such as SGD ( Nesterov , 1983 ) or Adam ( Kingma & Ba , 2014 ) , updates the DNNs ’ weights to minimize LS ( fθ+̂ ) . We refer to LS ( fθ+̂ ) as the SAM loss . Overall , SAM requires two forward and two backward operations to update weights once . We refer to the forward and backward propagation for approximating ̂ as F1 and B1 and those for updating weights by base optimizers as F2 and B2 respectively . Although SAM can effectively improve the generalization of DNNs , it additionally requires one forward and one backward operation ( F1 and B1 ) in each training iteration . Thus , SAM results in a doubling of the computational overhead compared to the use of base optimizers . To improve the efficiency of SAM , we propose ESAM , which consists of two strategies—SWP and SDS , to accelerate the sharpness approximation phase and the weight updating phase . Specifically , on the one hand , when estimating ̂ around weight vector θ , SWP efficiently approximates ̂ by randomly selecting each parameter with a given probability to form a subset of weights to be perturbed . The reduction of the number of perturbed parameters results in lower computational overhead during the backward propagation . SWP rescales the resultant weight perturbation so as to assure that the expected weight perturbation equals to ̂ , and the generalization capability thus will not be significantly degraded . On the other hand , when updating weights via base optimizers , instead of computing the upper bound LB ( fθ+̂ ) over a whole batch of samples , SDS selects a subset of samples , B+ , whose loss values increase the most with respect to the perturbation ̂ . Optimizing the weights based on a fewer number of samples decreases the computational overhead ( in a linear fashion ) . We further justify that LB ( fθ+̂ ) can be upper bounded by LB+ ( fθ+̂ ) and consequently the generalization capability can be preserved . 
In general , ESAM works much more efficiently and performs as well as SAM in terms of its generalization capability .",
"role": "user"
},
{
"content": "This paper investigates the crucial efficiency issue of the sharpness-aware minimizer (SAM). SAM improves the generalization of DNNs but results in a double training time compared to vanilla training. By analyzing the min-max procedure of SAM, the authors observe the computational redundancy and then propose a method ESAM to improve the efficiency from the data and parameters perspectives. The authors argue that SAM can be approximated properly with fewer computations. Empirical results show that ESAM can reduce the extra training time in CIFAR10/100 and ImageNet datasets with improved accuracy compared to SAM.",
"role": "assistant"
}
]
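The record above analyses SAM's two forward/backward passes per update. A hedged PyTorch-style sketch of that base update is given below; the helper names are assumptions and this is not the authors' code. ESAM's SWP would perturb only a randomly chosen subset of parameters (rescaled by 1/(1-beta)), and SDS would run the second pass only on the fraction gamma of samples whose loss rises most under the perturbation.

import torch

def sam_step(model, loss_fn, x, y, optimizer, rho=0.05):
    # First pass: gradient at the current weights, used to build the perturbation (Eq. 4).
    optimizer.zero_grad()
    loss_fn(model(x), y).backward()
    grads = [p.grad.detach().clone() if p.grad is not None else torch.zeros_like(p)
             for p in model.parameters()]
    grad_norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
    with torch.no_grad():
        for p, g in zip(model.parameters(), grads):
            p.add_(rho * g / (grad_norm + 1e-12))   # epsilon-hat = rho * grad / ||grad||
    optimizer.zero_grad()
    # Second pass: gradient at the perturbed weights drives the actual update.
    loss_fn(model(x), y).backward()
    with torch.no_grad():
        for p, g in zip(model.parameters(), grads):
            p.sub_(rho * g / (grad_norm + 1e-12))   # undo the perturbation before stepping
    optimizer.step()
    optimizer.zero_grad()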
|
science.qasper_abstractive_qa | Examine the provided sections from a research document and address the accompanying query. The text is divided by newlines.
Document Content:
Handling Divergent Reference Texts when Evaluating Table-to-Text Generation
Automatically constructed datasets for generating text from semi-structured data (tables), such as WikiBio, often contain reference texts that diverge from the information in the corresponding semi-structured data. We show that metrics which rely solely on the reference texts, such as BLEU and ROUGE, show poor correlation with human judgments when those references diverge. We propose a new metric, PARENT, which aligns n-grams from the reference and generated texts to the semi-structured data before computing their precision and recall. Through a large scale human evaluation study of table-to-text models for WikiBio, we show that PARENT correlates with human judgments better than existing text generation metrics. We also adapt and evaluate the information extraction based evaluation proposed by Wiseman et al (2017), and show that PARENT has comparable correlation to it, while being easier to use. We show that PARENT is also applicable when the reference texts are elicited from humans using the data from the WebNLG challenge.
PARENT
PARENT evaluates each instance separately, by computing the precision and recall of the generated text against both the reference and the table.
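As a rough illustration of this idea, the toy function below credits generated tokens supported by either the reference or the table and also measures recall of the table values. It is a word-level caricature with a made-up 50/50 recall weighting, not the n-gram and entailment formulation used by the actual metric.

def parent_style_precision_recall(generated, reference, table_values):
    gen = generated.lower().split()
    ref = set(reference.lower().split())
    tab = set(v.lower() for v in table_values)
    supported = [w for w in gen if w in ref or w in tab]   # token is fine if reference OR table backs it
    precision = len(supported) / max(len(gen), 1)
    recall_ref = len(ref & set(gen)) / max(len(ref), 1)
    recall_table = len(tab & set(gen)) / max(len(tab), 1)
    recall = 0.5 * recall_ref + 0.5 * recall_table         # the real metric mixes these with a weight lambda
    f1 = 2 * precision * recall / max(precision + recall, 1e-9)
    return precision, recall, f1

print(parent_style_precision_recall(
    "john smith was born in 1956 in london",
    "john smith is a british actor",
    ["john", "smith", "1956", "london"]))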
Experiments & Results
In this section we compare several automatic evaluation metrics by checking their correlation with the scores assigned by humans to table-to-text models. Specifically, given a set of models and their outputs on an evaluation set, we show these generated texts to humans to judge their quality, and obtain aggregated human evaluation scores for all the models (§ SECREF33 ). Next, to evaluate an automatic metric, we compute the scores it assigns to each model, and check the Pearson correlation between the metric scores and the human scores BIBREF21 .
Human Evaluation
We collected human judgments on the quality of the 16 models trained for WikiBio, plus the reference texts. Workers on a crowd-sourcing platform, proficient in English, were shown a table with pairs of generated texts, or a generated text and the reference, and asked to select the one they prefer. Figure FIGREF34 shows the instructions they were given. Paired comparisons have been shown to be superior to rating scales for comparing generated texts BIBREF24 . However, for measuring correlation the comparisons need to be aggregated into real-valued scores, INLINEFORM0 , for each of the INLINEFORM1 models. For this, we use Thurstone's method BIBREF22 , which assigns a score to each model based on how many times it was preferred over an alternative.
The data collection was performed separately for models in the WikiBio-Systems and WikiBio-Hyperparams categories. 1100 tables were sampled from the development set, and for each table we got 8 different sentence pairs annotated across the two categories, resulting in a total of 8800 pairwise comparisons. Each pair was judged by one worker only which means there may be noise at the instance-level, but the aggregated system-level scores had low variance (cf. Table TABREF32 ). In total around 500 different workers were involved in the annotation. References were also included in the evaluation, and they received a lower score than PG-Net, highlighting the divergence in WikiBio.
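A simplified Case V variant of Thurstone's aggregation is sketched below for concreteness; the probability clipping and the toy win matrix are assumptions for the sake of the example, not details taken from the study.

import numpy as np
from scipy.stats import norm

def thurstone_scores(wins):
    # wins[i, j] = number of times model i was preferred over model j.
    totals = wins + wins.T
    p = np.where(totals > 0, wins / np.clip(totals, 1, None), 0.5)
    p = np.clip(p, 0.01, 0.99)      # keep the probit transform finite
    z = norm.ppf(p)
    np.fill_diagonal(z, 0.0)
    return z.mean(axis=1)           # higher score = preferred more often

wins = np.array([[0, 8, 9],
                 [2, 0, 6],
                 [1, 4, 0]])
print(thurstone_scores(wins))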
Compared Metrics
Text only: We compare BLEU BIBREF8 , ROUGE BIBREF9 , METEOR BIBREF18 , CIDEr and CIDEr-D BIBREF25 using their publicly available implementations.
Information Extraction based: We compare the CS, RG and RG-F metrics discussed in § SECREF4 .
Text & Table: We compare a variant of BLEU, denoted as BLEU-T, where the values from the table are used as additional references. BLEU-T draws inspiration from iBLEU BIBREF26 but instead rewards n-grams which match the table rather than penalizing them. For PARENT, we compare both the word-overlap model (PARENT-W) and the co-occurrence model (PARENT-C) for determining entailment. We also compare versions where a single INLINEFORM0 is tuned on the entire dataset to maximize correlation with human judgments, denoted as PARENT*-W/C.
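For intuition, a BLEU-T-style score can be approximated by handing the flattened table values to a standard multi-reference BLEU implementation as one extra reference, for example with NLTK; the tokens below are invented and this is only a sketch of the idea, not the exact BLEU-T definition.

from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

hypothesis = "john smith was born in london in 1956".split()
reference  = "john smith is an english actor".split()
table_ref  = "john smith born 1956 london actor".split()   # table values flattened into a pseudo-reference

smooth = SmoothingFunction().method1
bleu   = sentence_bleu([reference], hypothesis, smoothing_function=smooth)
bleu_t = sentence_bleu([reference, table_ref], hypothesis, smoothing_function=smooth)
print(bleu, bleu_t)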
Correlation Comparison
We use bootstrap sampling (500 iterations) over the 1100 tables for which we collected human annotations to get an idea of how the correlation of each metric varies with the underlying data. In each iteration, we sample with replacement, tables along with their references and all the generated texts for that table. Then we compute aggregated human evaluation and metric scores for each of the models and compute the correlation between the two. We report the average correlation across all bootstrap samples for each metric in Table TABREF37 . The distribution of correlations for the best performing metrics are shown in Figure FIGREF38 .
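The bootstrap loop can be sketched as follows; the aggregation of human judgments is simplified to a mean here, whereas the study aggregates pairwise preferences with Thurstone's method, and all array names are assumptions.

import numpy as np

def bootstrap_system_correlation(human, metric, n_iter=500, seed=0):
    # human, metric: arrays of shape (n_tables, n_systems) with per-table, per-system scores.
    rng = np.random.default_rng(seed)
    n_tables = human.shape[0]
    corrs = []
    for _ in range(n_iter):
        idx = rng.integers(0, n_tables, size=n_tables)   # resample tables with replacement
        h = human[idx].mean(axis=0)                      # aggregate to one score per system
        m = metric[idx].mean(axis=0)
        corrs.append(np.corrcoef(h, m)[0, 1])            # system-level Pearson correlation
    return float(np.mean(corrs))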
Table TABREF37 also indicates whether PARENT is significantly better than a baseline metric. BIBREF21 suggest using the William's test for this purpose, but since we are computing correlations between only 4/13 systems at a time, this test has very weak power in our case. Hence, we use the bootstrap samples to obtain a INLINEFORM0 confidence interval of the difference in correlation between PARENT and any other metric and check whether this is above 0 BIBREF27 .
Correlations are higher for the systems category than the hyperparams category. The latter is a more difficult setting since very similar models are compared, and hence the variance of the correlations is also high. Commonly used metrics which only rely on the reference (BLEU, ROUGE, METEOR, CIDEr) have only weak correlations with human judgments. In the hyperparams category, these are often negative, implying that tuning models based on these may lead to selecting worse models. BLEU performs the best among these, and adding n-grams from the table as references improves this further (BLEU-T).
Among the extractive evaluation metrics, CS, which also only relies on the reference, has poor correlation in the hyperparams category. RG-F, and both variants of the PARENT metric achieve the highest correlation for both settings. There is no significant difference among these for the hyperparams category, but for systems, PARENT-W is significantly better than the other two. While RG-F needs a full information extraction pipeline in its implementation, PARENT-C only relies on co-occurrence counts, and PARENT-W can be used out-of-the-box for any dataset. To our knowledge, this is the first rigorous evaluation of using information extraction for generation evaluation.
On this dataset, the word-overlap model showed higher correlation than the co-occurrence model for entailment. In § SECREF47 we will show that for the WebNLG dataset, where more paraphrasing is involved between the table and text, the opposite is true. Lastly, we note that the heuristic for selecting INLINEFORM0 is sufficient to produce high correlations for PARENT, however, if human annotations are available, this can be tuned to produce significantly higher correlations (PARENT*-W/C).
Analysis
In this section we further analyze the performance of PARENT-W under different conditions, and compare to the other best metrics from Table TABREF37 .
To study the correlation as we vary the number of divergent references, we also collected binary labels from workers for whether a reference is entailed by the corresponding table. We define a reference as entailed when it mentions only information which can be inferred from the table. Each table and reference pair was judged by 3 independent workers, and we used the majority vote as the label for that pair. Overall, only INLINEFORM0 of the references were labeled as entailed by the table. Fleiss' INLINEFORM1 was INLINEFORM2 , which indicates a fair agreement. We found the workers sometimes disagreed on what information can be reasonably entailed by the table.
Figure FIGREF40 shows the correlations as we vary the percent of entailed examples in the evaluation set of WikiBio. Each point is obtained by fixing the desired proportion of entailed examples, and sampling subsets from the full set which satisfy this proportion. PARENT and RG-F remain stable and show a high correlation across the entire range, whereas BLEU and BLEU-T vary a lot. In the hyperparams category, the latter two have the worst correlation when the evaluation set contains only entailed examples, which may seem surprising. However, on closer examination we found that this subset tends to omit a lot of information from the tables. Systems which produce more information than these references are penalized by BLEU, but not in the human evaluation. PARENT overcomes this issue by measuring recall against the table in addition to the reference.
We check how different components in the computation of PARENT contribute to its correlation to human judgments. Specifically, we remove the probability INLINEFORM0 of an n-gram INLINEFORM1 being entailed by the table from Eqs. EQREF19 and EQREF23 . The average correlation for PARENT-W drops to INLINEFORM5 in this case. We also try a variant of PARENT with INLINEFORM6 , which removes the contribution of Table Recall (Eq. EQREF22 ). The average correlation is INLINEFORM7 in this case. With these components, the correlation is INLINEFORM8 , showing that they are crucial to the performance of PARENT.
BIBREF28 point out that hill-climbing on an automatic metric is meaningless if that metric has a low instance-level correlation to human judgments. In Table TABREF46 we show the average accuracy of the metrics in making the same judgments as humans between pairs of generated texts. Both variants of PARENT are significantly better than the other metrics, however the best accuracy is only INLINEFORM0 for the binary task. This is a challenging task, since there are typically only subtle differences between the texts. Achieving higher instance-level accuracies will require more sophisticated language understanding models for evaluation.
Related Work
Over the years several studies have evaluated automatic metrics for measuring text generation performance BIBREF29 , BIBREF30 , BIBREF31 , BIBREF32 , BIBREF33 , BIBREF34 , BIBREF35 . The only consensus from these studies seems to be that no single metric is suitable across all tasks. A recurring theme is that metrics like BLEU and NIST BIBREF36 are not suitable for judging content quality in NLG. Recently, BIBREF37 did a comprehensive study of several metrics on the outputs of state-of-the-art NLG systems, and found that while they showed acceptable correlation with human judgments at the system level, they failed to show any correlation at the sentence level. Ours is the first study which checks the quality of metrics when table-to-text references are divergent. We show that in this case even system level correlations can be unreliable.
Hallucination BIBREF38 , BIBREF39 refers to when an NLG system generates text which mentions extra information than what is present in the source from which it is generated. Divergence can be viewed as hallucination in the reference text itself. PARENT deals with hallucination by discounting n-grams which do not overlap with either the reference or the table.
PARENT draws inspiration from iBLEU BIBREF26 , a metric for evaluating paraphrase generation, which compares the generated text to both the source text and the reference. While iBLEU penalizes texts which match the source, here we reward such texts since our task values accuracy of generated text more than the need for paraphrasing the tabular content BIBREF40 . Similar to SARI for text simplification BIBREF41 and Q-BLEU for question generation BIBREF42 , PARENT falls under the category of task-specific metrics.
Conclusions
We study the automatic evaluation of table-to-text systems when the references diverge from the table. We propose a new metric, PARENT, which shows the highest correlation with humans across a range of settings with divergent references in WikiBio. We also perform the first empirical evaluation of information extraction based metrics BIBREF1 , and find RG-F to be effective. Lastly, we show that PARENT is comparable to the best existing metrics when references are elicited by humans on the WebNLG data.
Hyperparameters
After tuning we found the same set of hyperparameters to work well for both the table-to-text PG-Net, and the inverse information extraction PG-Net. The hidden state size of the biLSTMs was set to 200. The input and output vocabularies were set to 50000 most common words in the corpus, with additional special symbols for table attribute names (such as “birth-date”). The embeddings of the tokens in the vocabulary were initialized with Glove BIBREF43 . Learning rate of INLINEFORM0 was used during training, with the Adam optimizer, and a dropout of INLINEFORM1 was also applied to the outputs of the biLSTM. Models were trained till the loss on the dev set stopped dropping. Maximum length of a decoded text was set to 40 tokens, and that of the tables was set to 120 tokens. Various beam sizes and length normalization penalties were applied for the table-to-text system, which are listed in the main paper. For the information extraction system, we found a beam size of 8 and no length penalty to produce the highest F-score on the dev set.
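Collected as a configuration sketch (assumed key names; the two values elided in the text are left unset):

pg_net_config = {
    "hidden_size": 200,          # biLSTM hidden state size
    "vocab_size": 50_000,        # most common words, plus attribute-name symbols
    "embedding_init": "glove",
    "optimizer": "adam",
    "learning_rate": None,       # value elided in the source text
    "dropout": None,             # value elided in the source text
    "max_decode_len": 40,
    "max_table_len": 120,
    "ie_beam_size": 8,           # information-extraction PG-Net
    "ie_length_penalty": 0.0,
}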
Sample Outputs
Table TABREF55 shows some sample references and the corresponding predictions from the best performing model, PG-Net for WikiBio.
Figure 1: A table from the WikiBio dataset (right), its reference description and three hypothetical generated texts with scores assigned to them by automatic evaluation metrics. Text which cannot be inferred from the table is in red, and text which can be inferred but isn’t present in the reference is in green. PARENT is our proposed metric.
Figure 2: Instructions to crowd-workers for comparing two generated texts.
Figure 3: Distribution of metric correlations across 500 bootstrap samples. PRT = PARENT.
Figure 4: Correlation of the metrics to human judgment as the percentage of entailed examples in WikiBio is varied.
Figure 5: Histogram of the recall of the references against the table (Eq. 6), which is used to set 1 − λ. Lower values indicate that the metric relies more on the table and less on the reference.
Table 5: Performance of the Information Extraction system.
Table 6: Sample references and predictions from PG-Net with beam size 8. Information which is absent from the reference, but can be inferred from the table is in bold. Information which is present in the reference, but cannot be inferred from the table is in italics.
Query: How many people participated in their evaluation study of table-to-text models?
JSON Response: | {"answer": "about 500", "evidence": ["The data collection was performed separately for models in the WikiBio-Systems and WikiBio-Hyperparams categories. 1100 tables were sampled from the development set, and for each table we got 8 different sentence pairs annotated across the two categories, resulting in a total of 8800 pairwise comparisons. Each pair was judged by one worker only which means there may be noise at the instance-level, but the aggregated system-level scores had low variance (cf. Table TABREF32 ). In total around 500 different workers were involved in the annotation. References were also included in the evaluation, and they received a lower score than PG-Net, highlighting the divergence in WikiBio."]} | science.qasper_abstractive_qa.48.0 | [
{
"content": "Examine the provided sections from a research document and address the accompanying query. The text is divided by newlines.\n\nDocument Content:\n\nHandling Divergent Reference Texts when Evaluating Table-to-Text Generation\n\nAutomatically constructed datasets for generating text from semi-structured data (tables), such as WikiBio, often contain reference texts that diverge from the information in the corresponding semi-structured data. We show that metrics which rely solely on the reference texts, such as BLEU and ROUGE, show poor correlation with human judgments when those references diverge. We propose a new metric, PARENT, which aligns n-grams from the reference and generated texts to the semi-structured data before computing their precision and recall. Through a large scale human evaluation study of table-to-text models for WikiBio, we show that PARENT correlates with human judgments better than existing text generation metrics. We also adapt and evaluate the information extraction based evaluation proposed by Wiseman et al (2017), and show that PARENT has comparable correlation to it, while being easier to use. We show that PARENT is also applicable when the reference texts are elicited from humans using the data from the WebNLG challenge.\n\nPARENT\nPARENT evaluates each instance INLINEFORM0 separately, by computing the precision and recall of INLINEFORM1 against both INLINEFORM2 and INLINEFORM3 .\n\nExperiments & Results\nIn this section we compare several automatic evaluation metrics by checking their correlation with the scores assigned by humans to table-to-text models. Specifically, given INLINEFORM0 models INLINEFORM1 , and their outputs on an evaluation set, we show these generated texts to humans to judge their quality, and obtain aggregated human evaluation scores for all the models, INLINEFORM2 (§ SECREF33 ). Next, to evaluate an automatic metric, we compute the scores it assigns to each model, INLINEFORM3 , and check the Pearson correlation between INLINEFORM4 and INLINEFORM5 BIBREF21 .\n\nHuman Evaluation\nWe collected human judgments on the quality of the 16 models trained for WikiBio, plus the reference texts. Workers on a crowd-sourcing platform, proficient in English, were shown a table with pairs of generated texts, or a generated text and the reference, and asked to select the one they prefer. Figure FIGREF34 shows the instructions they were given. Paired comparisons have been shown to be superior to rating scales for comparing generated texts BIBREF24 . However, for measuring correlation the comparisons need to be aggregated into real-valued scores, INLINEFORM0 , for each of the INLINEFORM1 models. For this, we use Thurstone's method BIBREF22 , which assigns a score to each model based on how many times it was preferred over an alternative.\nThe data collection was performed separately for models in the WikiBio-Systems and WikiBio-Hyperparams categories. 1100 tables were sampled from the development set, and for each table we got 8 different sentence pairs annotated across the two categories, resulting in a total of 8800 pairwise comparisons. Each pair was judged by one worker only which means there may be noise at the instance-level, but the aggregated system-level scores had low variance (cf. Table TABREF32 ). In total around 500 different workers were involved in the annotation. 
References were also included in the evaluation, and they received a lower score than PG-Net, highlighting the divergence in WikiBio.\n\nCompared Metrics\nText only: We compare BLEU BIBREF8 , ROUGE BIBREF9 , METEOR BIBREF18 , CIDEr and CIDEr-D BIBREF25 using their publicly available implementations.\nInformation Extraction based: We compare the CS, RG and RG-F metrics discussed in § SECREF4 .\nText & Table: We compare a variant of BLEU, denoted as BLEU-T, where the values from the table are used as additional references. BLEU-T draws inspiration from iBLEU BIBREF26 but instead rewards n-grams which match the table rather than penalizing them. For PARENT, we compare both the word-overlap model (PARENT-W) and the co-occurrence model (PARENT-C) for determining entailment. We also compare versions where a single INLINEFORM0 is tuned on the entire dataset to maximize correlation with human judgments, denoted as PARENT*-W/C.\n\nCorrelation Comparison\nWe use bootstrap sampling (500 iterations) over the 1100 tables for which we collected human annotations to get an idea of how the correlation of each metric varies with the underlying data. In each iteration, we sample with replacement, tables along with their references and all the generated texts for that table. Then we compute aggregated human evaluation and metric scores for each of the models and compute the correlation between the two. We report the average correlation across all bootstrap samples for each metric in Table TABREF37 . The distribution of correlations for the best performing metrics are shown in Figure FIGREF38 .\nTable TABREF37 also indicates whether PARENT is significantly better than a baseline metric. BIBREF21 suggest using the William's test for this purpose, but since we are computing correlations between only 4/13 systems at a time, this test has very weak power in our case. Hence, we use the bootstrap samples to obtain a INLINEFORM0 confidence interval of the difference in correlation between PARENT and any other metric and check whether this is above 0 BIBREF27 .\nCorrelations are higher for the systems category than the hyperparams category. The latter is a more difficult setting since very similar models are compared, and hence the variance of the correlations is also high. Commonly used metrics which only rely on the reference (BLEU, ROUGE, METEOR, CIDEr) have only weak correlations with human judgments. In the hyperparams category, these are often negative, implying that tuning models based on these may lead to selecting worse models. BLEU performs the best among these, and adding n-grams from the table as references improves this further (BLEU-T).\nAmong the extractive evaluation metrics, CS, which also only relies on the reference, has poor correlation in the hyperparams category. RG-F, and both variants of the PARENT metric achieve the highest correlation for both settings. There is no significant difference among these for the hyperparams category, but for systems, PARENT-W is significantly better than the other two. While RG-F needs a full information extraction pipeline in its implementation, PARENT-C only relies on co-occurrence counts, and PARENT-W can be used out-of-the-box for any dataset. To our knowledge, this is the first rigorous evaluation of using information extraction for generation evaluation.\nOn this dataset, the word-overlap model showed higher correlation than the co-occurrence model for entailment. 
In § SECREF47 we will show that for the WebNLG dataset, where more paraphrasing is involved between the table and text, the opposite is true. Lastly, we note that the heuristic for selecting INLINEFORM0 is sufficient to produce high correlations for PARENT, however, if human annotations are available, this can be tuned to produce significantly higher correlations (PARENT*-W/C).\n\nAnalysis\nIn this section we further analyze the performance of PARENT-W under different conditions, and compare to the other best metrics from Table TABREF37 .\nTo study the correlation as we vary the number of divergent references, we also collected binary labels from workers for whether a reference is entailed by the corresponding table. We define a reference as entailed when it mentions only information which can be inferred from the table. Each table and reference pair was judged by 3 independent workers, and we used the majority vote as the label for that pair. Overall, only INLINEFORM0 of the references were labeled as entailed by the table. Fleiss' INLINEFORM1 was INLINEFORM2 , which indicates a fair agreement. We found the workers sometimes disagreed on what information can be reasonably entailed by the table.\nFigure FIGREF40 shows the correlations as we vary the percent of entailed examples in the evaluation set of WikiBio. Each point is obtained by fixing the desired proportion of entailed examples, and sampling subsets from the full set which satisfy this proportion. PARENT and RG-F remain stable and show a high correlation across the entire range, whereas BLEU and BLEU-T vary a lot. In the hyperparams category, the latter two have the worst correlation when the evaluation set contains only entailed examples, which may seem surprising. However, on closer examination we found that this subset tends to omit a lot of information from the tables. Systems which produce more information than these references are penalized by BLEU, but not in the human evaluation. PARENT overcomes this issue by measuring recall against the table in addition to the reference.\nWe check how different components in the computation of PARENT contribute to its correlation to human judgments. Specifically, we remove the probability INLINEFORM0 of an n-gram INLINEFORM1 being entailed by the table from Eqs. EQREF19 and EQREF23 . The average correlation for PARENT-W drops to INLINEFORM5 in this case. We also try a variant of PARENT with INLINEFORM6 , which removes the contribution of Table Recall (Eq. EQREF22 ). The average correlation is INLINEFORM7 in this case. With these components, the correlation is INLINEFORM8 , showing that they are crucial to the performance of PARENT.\n BIBREF28 point out that hill-climbing on an automatic metric is meaningless if that metric has a low instance-level correlation to human judgments. In Table TABREF46 we show the average accuracy of the metrics in making the same judgments as humans between pairs of generated texts. Both variants of PARENT are significantly better than the other metrics, however the best accuracy is only INLINEFORM0 for the binary task. This is a challenging task, since there are typically only subtle differences between the texts. Achieving higher instance-level accuracies will require more sophisticated language understanding models for evaluation.\n\nRelated Work\nOver the years several studies have evaluated automatic metrics for measuring text generation performance BIBREF29 , BIBREF30 , BIBREF31 , BIBREF32 , BIBREF33 , BIBREF34 , BIBREF35 . 
The only consensus from these studies seems to be that no single metric is suitable across all tasks. A recurring theme is that metrics like BLEU and NIST BIBREF36 are not suitable for judging content quality in NLG. Recently, BIBREF37 did a comprehensive study of several metrics on the outputs of state-of-the-art NLG systems, and found that while they showed acceptable correlation with human judgments at the system level, they failed to show any correlation at the sentence level. Ours is the first study which checks the quality of metrics when table-to-text references are divergent. We show that in this case even system level correlations can be unreliable.\nHallucination BIBREF38 , BIBREF39 refers to when an NLG system generates text which mentions extra information than what is present in the source from which it is generated. Divergence can be viewed as hallucination in the reference text itself. PARENT deals with hallucination by discounting n-grams which do not overlap with either the reference or the table.\nPARENT draws inspiration from iBLEU BIBREF26 , a metric for evaluating paraphrase generation, which compares the generated text to both the source text and the reference. While iBLEU penalizes texts which match the source, here we reward such texts since our task values accuracy of generated text more than the need for paraphrasing the tabular content BIBREF40 . Similar to SARI for text simplification BIBREF41 and Q-BLEU for question generation BIBREF42 , PARENT falls under the category of task-specific metrics.\n\nConclusions\nWe study the automatic evaluation of table-to-text systems when the references diverge from the table. We propose a new metric, PARENT, which shows the highest correlation with humans across a range of settings with divergent references in WikiBio. We also perform the first empirical evaluation of information extraction based metrics BIBREF1 , and find RG-F to be effective. Lastly, we show that PARENT is comparable to the best existing metrics when references are elicited by humans on the WebNLG data.\n\nHyperparameters\nAfter tuning we found the same set of hyperparameters to work well for both the table-to-text PG-Net, and the inverse information extraction PG-Net. The hidden state size of the biLSTMs was set to 200. The input and output vocabularies were set to 50000 most common words in the corpus, with additional special symbols for table attribute names (such as “birth-date”). The embeddings of the tokens in the vocabulary were initialized with Glove BIBREF43 . Learning rate of INLINEFORM0 was used during training, with the Adam optimizer, and a dropout of INLINEFORM1 was also applied to the outputs of the biLSTM. Models were trained till the loss on the dev set stopped dropping. Maximum length of a decoded text was set to 40 tokens, and that of the tables was set to 120 tokens. Various beam sizes and length normalization penalties were applied for the table-to-text system, which are listed in the main paper. For the information extraction system, we found a beam size of 8 and no length penalty to produce the highest F-score on the dev set.\n\nSample Outputs\nTable TABREF55 shows some sample references and the corresponding predictions from the best performing model, PG-Net for WikiBio.\n\nFigure 1: A table from the WikiBio dataset (right), its reference description and three hypothetical generated texts with scores assigned to them by automatic evaluation metrics. 
Text which cannot be inferred from the table is in red, and text which can be inferred but isn’t present in the reference is in green. PARENT is our proposed metric.\n\nFigure 2: Instructions to crowd-workers for comparing two generated texts.\n\nFigure 3: Distribution of metric correlations across 500 bootstrap samples. PRT = PARENT.\n\nFigure 4: Correlation of the metrics to human judgment as the percentage of entailed examples in WikiBio is varied.\n\nFigure 5: Histogram of the recall of the references against the table (Eq. 6), which is used to set 1 − λ. Lower values indicate that the metric relies more on the table and less on the reference.\n\nTable 5: Performance of the Information Extraction system.\n\nTable 6: Sample references and predictions from PG-Net with beam size 8. Information which is absent from the reference, but can be inferred from the table is in bold. Information which is present in the reference, but cannot be inferred from the table is in italics.\n\nQuery: How many people participated in their evaluation study of table-to-text models?\n\nJSON Response:",
"role": "user"
},
{
"content": "{\"answer\": \"about 500\", \"evidence\": [\"The data collection was performed separately for models in the WikiBio-Systems and WikiBio-Hyperparams categories. 1100 tables were sampled from the development set, and for each table we got 8 different sentence pairs annotated across the two categories, resulting in a total of 8800 pairwise comparisons. Each pair was judged by one worker only which means there may be noise at the instance-level, but the aggregated system-level scores had low variance (cf. Table TABREF32 ). In total around 500 different workers were involved in the annotation. References were also included in the evaluation, and they received a lower score than PG-Net, highlighting the divergence in WikiBio.\"]}",
"role": "assistant"
}
]
|
science.qasper_abstractive_qa | Examine the provided sections from a research document and address the accompanying query. The text is divided by newlines.
Document Content:
A Portuguese Native Language Identification Dataset
In this paper we present NLI-PT, the first Portuguese dataset compiled for Native Language Identification (NLI), the task of identifying an author's first language based on their second language writing. The dataset includes 1,868 student essays written by learners of European Portuguese, native speakers of the following L1s: Chinese, English, Spanish, German, Russian, French, Japanese, Italian, Dutch, Tetum, Arabic, Polish, Korean, Romanian, and Swedish. NLI-PT includes the original student text and four different types of annotation: POS, fine-grained POS, constituency parses, and dependency parses. NLI-PT can be used not only in NLI but also in research on several topics in the field of Second Language Acquisition and educational NLP. We discuss possible applications of this dataset and present the results obtained for the first lexical baseline system for Portuguese NLI.
Introduction
Several learner corpora have been compiled for English, such as the International Corpus of Learner English BIBREF0 . The importance of such resources has been increasingly recognized across a variety of research areas, from Second Language Acquisition to Natural Language Processing. Recently, we have seen substantial growth in this area and new corpora for languages other than English have appeared. For Romance languages, there are several corpora and resources for French, Spanish BIBREF1 , and Italian BIBREF2 .
Portuguese has also received attention in the compilation of learner corpora. There are two corpora compiled at the School of Arts and Humanities of the University of Lisbon: the corpus Recolha de dados de Aprendizagem do Português Língua Estrangeira (hereafter, Leiria corpus), with 470 texts and 70,500 tokens, and the Learner Corpus of Portuguese as Second/Foreign Language, COPLE2 BIBREF3 , with 1,058 texts and 201,921 tokens. The Corpus de Produções Escritas de Aprendentes de PL2, PEAPL2 compiled at the University of Coimbra, contains 516 texts and 119,381 tokens. Finally, the Corpus de Aquisição de L2, CAL2, compiled at the New University of Lisbon, contains 1,380 texts and 281,301 words, and it includes texts produced by adults and children, as well as a spoken subset.
The aforementioned Portuguese learner corpora contain very useful data for research, particularly for Native Language Identification (NLI), a task that has received much attention in recent years. NLI is the task of determining the native language (L1) of an author based on their second language (L2) linguistic productions BIBREF4 . NLI works by identifying language use patterns that are common to groups of speakers of the same native language. This process is underpinned by the presupposition that an author’s L1 disposes them towards certain language production patterns in their L2, as influenced by their mother tongue. A major motivation for NLI is studying second language acquisition. NLI models can enable analysis of inter-L1 linguistic differences, allowing us to study the language learning process and develop L1-specific pedagogical methods and materials.
However, there are limitations to using existing Portuguese data for NLI. An important issue is that the different corpora each contain data collected from different L1 backgrounds in varying amounts; they would need to be combined to have sufficient data for an NLI study. Another challenge concerns the annotations, as only two of the corpora (PEAPL2 and COPLE2) are linguistically annotated, and this is limited to POS tags. The different data formats used by each corpus present yet another challenge to their usage.
In this paper we present NLI-PT, a dataset collected for Portuguese NLI. The dataset is made freely available for research purposes. With the goal of unifying learner data collected from various sources, listed in Section "Collection methodology" , we applied a methodology which has been previously used for the compilation of language variety corpora BIBREF5 . The data was converted to a unified data format and uniformly annotated at different linguistic levels as described in Section "Preprocessing and annotation of texts" . To the best of our knowledge, NLI-PT is the only Portuguese dataset developed specifically for NLI; this will open avenues for research in this area.
Related Work
NLI has attracted a lot of attention in recent years. Due to the availability of suitable data, as discussed earlier, this attention has been particularly focused on English. The most notable examples are the two editions of the NLI shared task organized in 2013 BIBREF6 and 2017 BIBREF7 .
Even though most NLI research has been carried out on English data, an important research trend in recent years has been the application of NLI methods to other languages, as discussed in multilingual-nli. Recent NLI studies on languages other than English include Arabic BIBREF8 and Chinese BIBREF9 , BIBREF10 . To the best of our knowledge, no study has been published on Portuguese and the NLI-PT dataset opens new possibilities of research for Portuguese. In Section "A Baseline for Portuguese NLI" we present the first simple baseline results for this task.
Finally, as NLI-PT can be used in other applications besides NLI, it is important to point out that a number of studies have been published on educational NLP applications for Portuguese and on the compilation of learner language resources for Portuguese. Examples of such studies include grammatical error correction BIBREF11 , automated essay scoring BIBREF12 , academic word lists BIBREF13 , and the learner corpora presented in the previous section.
Collection methodology
The data was collected from three different learner corpora of Portuguese: (i) COPLE2; (ii) Leiria corpus, and (iii) PEAPL2 as presented in Table 1 .
The three corpora contain written productions from learners of Portuguese with different proficiency levels and native languages (L1s). In the dataset we included all the data in COPLE2 and sections of PEAPL2 and Leiria corpus.
The main variable we used for text selection was the presence of specific L1s. Since the three corpora consider different L1s, we decided to use the L1s present in the largest corpus, COPLE2, as the reference. Therefore, we included in the dataset texts corresponding to the following 15 L1s: Chinese, English, Spanish, German, Russian, French, Japanese, Italian, Dutch, Tetum, Arabic, Polish, Korean, Romanian, and Swedish. It was the case that some of the L1s present in COPLE2 were not documented in the other corpora. The number of texts from each L1 is presented in Table 2 .
Concerning the corpus design, there is some variability among the sources we used. Leiria corpus and PEAPL2 followed a similar approach for data collection and show a close design. They consider a close list of topics, called “stimulus”, which belong to three general areas: (i) the individual; (ii) the society; (iii) the environment. Those topics are presented to the students in order to produce a written text. As a whole, texts from PEAPL2 and Leiria represent 36 different stimuli or topics in the dataset. In COPLE2 corpus the written texts correspond to written exercises done during Portuguese lessons, or to official Portuguese proficiency tests. For this reason, the topics considered in COPLE2 corpus are different from the topics in Leiria and PEAPL2. The number of topics is also larger in COPLE2 corpus: 149 different topics. There is some overlap between the different topics considered in COPLE2, that is, some topics deal with the same subject. This overlap allowed us to reorganize COPLE2 topics in our dataset, reducing them to 112.
Due to the different distribution of topics in the source corpora, the 148 topics in the dataset are not represented uniformly. Three topics account for 48.7% of the total texts and, on the other hand, 72% of the topics are represented by 1-10 texts (Figure 1 ). This variability also affects text length. The longest text has 787 tokens and the shortest has only 16 tokens. Most texts, however, range roughly from 150 to 250 tokens. To better understand the distribution of texts in terms of word length we plot a histogram of all texts with their word length in bins of 10 (1-10 tokens, 11-20 tokens, 21-30 tokens and so on) (Figure 2 ).
The three corpora use the proficiency levels defined in the Common European Framework of Reference for Languages (CEFR), but they show differences in the number of levels they consider. There are five proficiency levels in COPLE2 and PEAPL2: A1, A2, B1, B2, and C1. But there are 3 levels in Leiria corpus: A, B, and C. The number of texts included from each proficiency level is presented in Table 4 .
Preprocessing and annotation of texts
As demonstrated earlier, these learner corpora use different formats. COPLE2 is mainly codified in XML, although it gives the possibility of getting the student version of the essay in TXT format. PEAPL2 and Leiria corpus are compiled in TXT format. In both corpora, the TXT files contain the student version with special annotations from the transcription. For the NLI experiments we were interested in a clean txt version of the students' text, together with versions annotated at different linguistics levels. Therefore, as a first step, we removed all the annotations corresponding to the transcription process in PEAPL2 and Leiria files. As a second step, we proceeded to the linguistic annotation of the texts using different NLP tools.
We annotated the dataset at two levels: Part of Speech (POS) and syntax. We performed the annotation with freely available tools for the Portuguese language. For POS we added a simple POS, that is, only type of word, and a fine-grained POS, which is the type of word plus its morphological features. We used the LX Parser BIBREF14 , for the simple POS and the Portuguese morphological module of Freeling BIBREF15 , for detailed POS. Concerning syntactic annotations, we included constituency and dependency annotations. For constituency parsing, we used the LX Parser, and for dependency, the DepPattern toolkit BIBREF16 .
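The LX Parser, Freeling, and DepPattern interfaces are not shown here, so the following sketch uses spaCy's Portuguese model (pt_core_news_sm) purely as a stand-in to illustrate the kind of token-level annotation described: simple POS, finer-grained tags/morphology, and dependency relations. Constituency parses, which NLI-PT also includes, are not produced by this substitute pipeline.

```python
# Illustrative stand-in only: spaCy's Portuguese model, not the
# LX Parser / Freeling / DepPattern toolchain actually used for NLI-PT.
# Requires: pip install spacy && python -m spacy download pt_core_news_sm
import spacy

nlp = spacy.load("pt_core_news_sm")

def annotate(text):
    """Return per-token POS, fine-grained tag, morphology and dependency info."""
    doc = nlp(text)
    return [
        {
            "token": tok.text,
            "pos": tok.pos_,          # simple POS (word class)
            "tag": tok.tag_,          # model-specific fine-grained tag
            "morph": str(tok.morph),  # morphological features
            "dep": tok.dep_,          # dependency relation
            "head": tok.head.text,    # governing token
        }
        for tok in doc
    ]

# Example:
# for row in annotate("Os alunos escreveram um texto em português."):
#     print(row)
```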
Applications
NLI-PT was developed primarily for NLI, but it can be used for other research purposes ranging from second language acquisition to educational NLP applications. Here are a few examples of applications in which the dataset can be used:
A Baseline for Portuguese NLI
To demonstrate the usefulness of the dataset we present the first lexical baseline for Portuguese NLI using a sub-set of NLI-PT. To the best of our knowledge, no study has been published on Portuguese NLI and our work fills this gap.
In this experiment we included the five L1s in NLI-PT which contain the largest number of texts in this sub-set and run a simple linear SVM BIBREF21 classifier using a bag of words model to identify the L1 of each text. The languages included in this experiment were Chinese (355 texts), English (236 texts), German (214 texts), Italian (216 texts), and Spanish (271 texts).
We evaluated the model using stratified 10-fold cross-validation, achieving 70% accuracy. An important limitation of this experiment is that it does not account for topic bias, an important issue in NLI BIBREF22 . This is due to the fact that NLI-PT is not balanced by topic and the model could be learning topic associations instead. In future work we would like to carry out experiments using syntactic features such as function words, syntactic relations and POS annotation.
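A minimal sketch of such a baseline (bag-of-words features, a linear SVM, and stratified 10-fold cross-validation) can be assembled with scikit-learn. The `texts` and `labels` inputs below are placeholders for the NLI-PT essays and their L1 labels; the exact preprocessing behind the reported 70% figure is not specified here, so this is an illustration rather than the authors' implementation.

```python
# Minimal sketch of the reported baseline: bag-of-words + linear SVM,
# scored with stratified 10-fold cross-validation (accuracy).
# `texts` and `labels` are assumed to be parallel lists of essay strings
# and L1 labels (e.g. "Chinese", "English", ...) loaded from NLI-PT.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import StratifiedKFold, cross_val_score

def evaluate_bow_svm(texts, labels):
    model = make_pipeline(
        CountVectorizer(lowercase=True),  # bag-of-words counts
        LinearSVC(),                      # linear SVM classifier
    )
    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
    scores = cross_val_score(model, texts, labels, cv=cv, scoring="accuracy")
    return scores.mean(), scores.std()

# Example (placeholder data):
# mean_acc, std_acc = evaluate_bow_svm(texts, labels)
# print(f"10-fold accuracy: {mean_acc:.2%} +/- {std_acc:.2%}")
```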
Conclusion and Future Work
This paper presented NLI-PT, the first Portuguese dataset compiled for NLI. NLI-PT contains 1,868 texts written by speakers of 15 L1s amounting to over 380,000 tokens.
As discussed in Section "Applications" , NLI-PT opens several avenues for future research. It can be used for different research purposes beyond NLI such as grammatical error correction and CALL. An experiment with the texts written by the speakers of five L1s: Chinese, English, German, Italian, and Spanish using a bag of words model achieved 70% accuracy. We are currently experimenting with different features taking advantage of the annotation available in NLI-PT thus reducing topic bias in classification.
In future work we would like to include more texts in the dataset following the same methodology and annotation.
Acknowledgement
We want to thank the research teams that have made available the data we used in this work: Centro de Estudos de Linguística Geral e Aplicada at Universidade de Coimbra (specially Cristina Martins) and Centro de Linguística da Universidade de Lisboa (particularly Amália Mendes).
This work was partially supported by Fundação para a Ciência e a Tecnologia (postdoctoral research grant SFRH/BPD/109914/2015).
Table 1: Distribution of the dataset: Number of texts, tokens, types, and type/token ratio (TTER) per source corpus.
Figure 1: Topic distribution by number of texts. Each bar represents one of the 148 topics.
Table 2: Distribution by L1s and source corpora.
Table 3: Number of different topics by source.
Figure 2: Histogram of document lengths, as measured by the number of tokens. The mean value is 204 with standard deviation of 103.
Table 4: Distribution by proficiency levels and by source corpus.
Query: Are the annotations automatic or manually created?
JSON Response: | {"answer": "Automatic", "evidence": ["We annotated the dataset at two levels: Part of Speech (POS) and syntax. We performed the annotation with freely available tools for the Portuguese language. For POS we added a simple POS, that is, only type of word, and a fine-grained POS, which is the type of word plus its morphological features. We used the LX Parser BIBREF14 , for the simple POS and the Portuguese morphological module of Freeling BIBREF15 , for detailed POS. Concerning syntactic annotations, we included constituency and dependency annotations. For constituency parsing, we used the LX Parser, and for dependency, the DepPattern toolkit BIBREF16 ."]} | science.qasper_abstractive_qa.141.0 | [
{
"content": "Examine the provided sections from a research document and address the accompanying query. The text is divided by newlines.\n\nDocument Content:\n\nA Portuguese Native Language Identification Dataset\n\nIn this paper we present NLI-PT, the first Portuguese dataset compiled for Native Language Identification (NLI), the task of identifying an author's first language based on their second language writing. The dataset includes 1,868 student essays written by learners of European Portuguese, native speakers of the following L1s: Chinese, English, Spanish, German, Russian, French, Japanese, Italian, Dutch, Tetum, Arabic, Polish, Korean, Romanian, and Swedish. NLI-PT includes the original student text and four different types of annotation: POS, fine-grained POS, constituency parses, and dependency parses. NLI-PT can be used not only in NLI but also in research on several topics in the field of Second Language Acquisition and educational NLP. We discuss possible applications of this dataset and present the results obtained for the first lexical baseline system for Portuguese NLI.\n\nIntroduction\nSeveral learner corpora have been compiled for English, such as the International Corpus of Learner English BIBREF0 . The importance of such resources has been increasingly recognized across a variety of research areas, from Second Language Acquisition to Natural Language Processing. Recently, we have seen substantial growth in this area and new corpora for languages other than English have appeared. For Romance languages, there are a several corpora and resources for French, Spanish BIBREF1 , and Italian BIBREF2 .\nPortuguese has also received attention in the compilation of learner corpora. There are two corpora compiled at the School of Arts and Humanities of the University of Lisbon: the corpus Recolha de dados de Aprendizagem do Português Língua Estrangeira (hereafter, Leiria corpus), with 470 texts and 70,500 tokens, and the Learner Corpus of Portuguese as Second/Foreign Language, COPLE2 BIBREF3 , with 1,058 texts and 201,921 tokens. The Corpus de Produções Escritas de Aprendentes de PL2, PEAPL2 compiled at the University of Coimbra, contains 516 texts and 119,381 tokens. Finally, the Corpus de Aquisição de L2, CAL2, compiled at the New University of Lisbon, contains 1,380 texts and 281,301 words, and it includes texts produced by adults and children, as well as a spoken subset.\nThe aforementioned Portuguese learner corpora contain very useful data for research, particularly for Native Language Identification (NLI), a task that has received much attention in recent years. NLI is the task of determining the native language (L1) of an author based on their second language (L2) linguistic productions BIBREF4 . NLI works by identifying language use patterns that are common to groups of speakers of the same native language. This process is underpinned by the presupposition that an author’s L1 disposes them towards certain language production patterns in their L2, as influenced by their mother tongue. A major motivation for NLI is studying second language acquisition. NLI models can enable analysis of inter-L1 linguistic differences, allowing us to study the language learning process and develop L1-specific pedagogical methods and materials.\nHowever, there are limitations to using existing Portuguese data for NLI. 
An important issue is that the different corpora each contain data collected from different L1 backgrounds in varying amounts; they would need to be combined to have sufficient data for an NLI study. Another challenge concerns the annotations as only two of the corpora (PEAPL2 and COPLE2) are linguistically annotated, and this is limited to POS tags. The different data formats used by each corpus presents yet another challenge to their usage.\nIn this paper we present NLI-PT, a dataset collected for Portuguese NLI. The dataset is made freely available for research purposes. With the goal of unifying learner data collected from various sources, listed in Section \"Collection methodology\" , we applied a methodology which has been previously used for the compilation of language variety corpora BIBREF5 . The data was converted to a unified data format and uniformly annotated at different linguistic levels as described in Section \"Preprocessing and annotation of texts\" . To the best of our knowledge, NLI-PT is the only Portuguese dataset developed specifically for NLI, this will open avenues for research in this area.\n\nRelated Work\nNLI has attracted a lot of attention in recent years. Due to the availability of suitable data, as discussed earlier, this attention has been particularly focused on English. The most notable examples are the two editions of the NLI shared task organized in 2013 BIBREF6 and 2017 BIBREF7 .\nEven though most NLI research has been carried out on English data, an important research trend in recent years has been the application of NLI methods to other languages, as discussed in multilingual-nli. Recent NLI studies on languages other than English include Arabic BIBREF8 and Chinese BIBREF9 , BIBREF10 . To the best of our knowledge, no study has been published on Portuguese and the NLI-PT dataset opens new possibilities of research for Portuguese. In Section \"A Baseline for Portuguese NLI\" we present the first simple baseline results for this task.\nFinally, as NLI-PT can be used in other applications besides NLI, it is important to point out that a number of studies have been published on educational NLP applications for Portuguese and on the compilation of learner language resources for Portuguese. Examples of such studies include grammatical error correction BIBREF11 , automated essay scoring BIBREF12 , academic word lists BIBREF13 , and the learner corpora presented in the previous section.\n\nCollection methodology\nThe data was collected from three different learner corpora of Portuguese: (i) COPLE2; (ii) Leiria corpus, and (iii) PEAPL2 as presented in Table 1 .\nThe three corpora contain written productions from learners of Portuguese with different proficiency levels and native languages (L1s). In the dataset we included all the data in COPLE2 and sections of PEAPL2 and Leiria corpus.\nThe main variable we used for text selection was the presence of specific L1s. Since the three corpora consider different L1s, we decided to use the L1s present in the largest corpus, COPLE2, as the reference. Therefore, we included in the dataset texts corresponding to the following 15 L1s: Chinese, English, Spanish, German, Russian, French, Japanese, Italian, Dutch, Tetum, Arabic, Polish, Korean, Romanian, and Swedish. It was the case that some of the L1s present in COPLE2 were not documented in the other corpora. The number of texts from each L1 is presented in Table 2 .\nConcerning the corpus design, there is some variability among the sources we used. 
Leiria corpus and PEAPL2 followed a similar approach for data collection and show a close design. They consider a close list of topics, called “stimulus”, which belong to three general areas: (i) the individual; (ii) the society; (iii) the environment. Those topics are presented to the students in order to produce a written text. As a whole, texts from PEAPL2 and Leiria represent 36 different stimuli or topics in the dataset. In COPLE2 corpus the written texts correspond to written exercises done during Portuguese lessons, or to official Portuguese proficiency tests. For this reason, the topics considered in COPLE2 corpus are different from the topics in Leiria and PEAPL2. The number of topics is also larger in COPLE2 corpus: 149 different topics. There is some overlap between the different topics considered in COPLE2, that is, some topics deal with the same subject. This overlap allowed us to reorganize COPLE2 topics in our dataset, reducing them to 112.\nDue to the different distribution of topics in the source corpora, the 148 topics in the dataset are not represented uniformly. Three topics account for a 48.7% of the total texts and, on the other hand, a 72% of the topics are represented by 1-10 texts (Figure 1 ). This variability affects also text length. The longest text has 787 tokens and the shortest has only 16 tokens. Most texts, however, range roughly from 150 to 250 tokens. To better understand the distribution of texts in terms of word length we plot a histogram of all texts with their word length in bins of 10 (1-10 tokens, 11-20 tokens, 21-30 tokens and so on) (Figure 2 ).\nThe three corpora use the proficiency levels defined in the Common European Framework of Reference for Languages (CEFR), but they show differences in the number of levels they consider. There are five proficiency levels in COPLE2 and PEAPL2: A1, A2, B1, B2, and C1. But there are 3 levels in Leiria corpus: A, B, and C. The number of texts included from each proficiency level is presented in Table 4 .\n\nPreprocessing and annotation of texts\nAs demonstrated earlier, these learner corpora use different formats. COPLE2 is mainly codified in XML, although it gives the possibility of getting the student version of the essay in TXT format. PEAPL2 and Leiria corpus are compiled in TXT format. In both corpora, the TXT files contain the student version with special annotations from the transcription. For the NLI experiments we were interested in a clean txt version of the students' text, together with versions annotated at different linguistics levels. Therefore, as a first step, we removed all the annotations corresponding to the transcription process in PEAPL2 and Leiria files. As a second step, we proceeded to the linguistic annotation of the texts using different NLP tools.\nWe annotated the dataset at two levels: Part of Speech (POS) and syntax. We performed the annotation with freely available tools for the Portuguese language. For POS we added a simple POS, that is, only type of word, and a fine-grained POS, which is the type of word plus its morphological features. We used the LX Parser BIBREF14 , for the simple POS and the Portuguese morphological module of Freeling BIBREF15 , for detailed POS. Concerning syntactic annotations, we included constituency and dependency annotations. 
For constituency parsing, we used the LX Parser, and for dependency, the DepPattern toolkit BIBREF16 .\n\nApplications\nNLI-PT was developed primarily for NLI, but it can be used for other research purposes ranging from second language acquisition to educational NLP applications. Here are a few examples of applications in which the dataset can be used:\n\nA Baseline for Portuguese NLI\nTo demonstrate the usefulness of the dataset we present the first lexical baseline for Portuguese NLI using a sub-set of NLI-PT. To the best of our knowledge, no study has been published on Portuguese NLI and our work fills this gap.\nIn this experiment we included the five L1s in NLI-PT which contain the largest number of texts in this sub-set and run a simple linear SVM BIBREF21 classifier using a bag of words model to identify the L1 of each text. The languages included in this experiment were Chinese (355 texts), English (236 texts), German (214 texts), Italian (216 texts), and Spanish (271 texts).\nWe evaluated the model using stratified 10-fold cross-validation, achieving 70% accuracy. An important limitation of this experiment is that it does not account for topic bias, an important issue in NLI BIBREF22 . This is due to the fact that NLI-PT is not balanced by topic and the model could be learning topic associations instead. In future work we would like to carry out using syntactic features such as function words, syntactic relations and POS annotation.\n\nConclusion and Future Work\nThis paper presented NLI-PT, the first Portuguese dataset compiled for NLI. NLI-PT contains 1,868 texts written by speakers of 15 L1s amounting to over 380,000 tokens.\nAs discussed in Section \"Applications\" , NLI-PT opens several avenues for future research. It can be used for different research purposes beyond NLI such as grammatical error correction and CALL. An experiment with the texts written by the speakers of five L1s: Chinese, English, German, Italian, and Spanish using a bag of words model achieved 70% accuracy. We are currently experimenting with different features taking advantage of the annotation available in NLI-PT thus reducing topic bias in classification.\nIn future work we would like to include more texts in the dataset following the same methodology and annotation.\n\nAcknowledgement\nWe want to thank the research teams that have made available the data we used in this work: Centro de Estudos de Linguística Geral e Aplicada at Universidade de Coimbra (specially Cristina Martins) and Centro de Linguística da Universidade de Lisboa (particularly Amália Mendes).\nThis work was partially supported by Fundação para a Ciência e a Tecnologia (postdoctoral research grant SFRH/BPD/109914/2015).\n\nTable 1: Distribution of the dataset: Number of texts, tokens, types, and type/token ratio (TTER) per source corpus.\n\nFigure 1: Topic distribution by number of texts. Each bar represents one of the 148 topics.\n\nTable 2: Distribution by L1s and source corpora.\n\nTable 3: Number of different topics by source.\n\nFigure 2: Histogram of document lengths, as measured by the number of tokens. The mean value is 204 with standard deviation of 103.\n\nTable 4: Distribution by proficiency levels and by source corpus.\n\nQuery: Are the annotations automatic or manually created?\n\nJSON Response:",
"role": "user"
},
{
"content": "{\"answer\": \"Automatic\", \"evidence\": [\"We annotated the dataset at two levels: Part of Speech (POS) and syntax. We performed the annotation with freely available tools for the Portuguese language. For POS we added a simple POS, that is, only type of word, and a fine-grained POS, which is the type of word plus its morphological features. We used the LX Parser BIBREF14 , for the simple POS and the Portuguese morphological module of Freeling BIBREF15 , for detailed POS. Concerning syntactic annotations, we included constituency and dependency annotations. For constituency parsing, we used the LX Parser, and for dependency, the DepPattern toolkit BIBREF16 .\"]}",
"role": "assistant"
}
]
|
science.multicite_intent_classification | Examine the text below, which includes a citation marked by "<cite>" and "</cite>". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.
Provide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.
Text with Citation:
Considering various contexts where we need definitions of phrases, we evaluated our method with four datasets including WordNet<cite> (Noraset et al., 2017)</cite> for general words, the Oxford dictionary (Gadetsky et al., 2018) for polysemous words, Urban Dictionary (Ni and Wang, 2017) for rare idioms or slangs, and a newlycreated Wikipedia dataset for entities. Experimental results demonstrate the effectiveness of our method against the three baselines stated above (Noraset et al., 2017; Ni and Wang, 2017; Gadetsky et al., 2018) . | ["Differences"] | science.multicite_intent_classification.1519 | [
{
"content": "Examine the text below, which includes a citation marked by \"<cite>\" and \"</cite>\". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.\n\nProvide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.\n\nText with Citation:\nConsidering various contexts where we need definitions of phrases, we evaluated our method with four datasets including WordNet<cite> (Noraset et al., 2017)</cite> for general words, the Oxford dictionary (Gadetsky et al., 2018) for polysemous words, Urban Dictionary (Ni and Wang, 2017) for rare idioms or slangs, and a newlycreated Wikipedia dataset for entities. Experimental results demonstrate the effectiveness of our method against the three baselines stated above (Noraset et al., 2017; Ni and Wang, 2017; Gadetsky et al., 2018) .",
"role": "user"
},
{
"content": "[\"Differences\"]",
"role": "assistant"
}
]
|
science.evidence_inference | Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:
- Treatment: The specific medical procedure or drug being evaluated.
- Control: The standard or placebo used for comparison. If absent, use `null`.
- Result: The health outcome being assessed.
- Impact: The observed effect of the treatment on the result, categorized as: ("increased", "decreased", "no change").
- Justification: The supporting details for the observed impact.
Format the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].
Abstract:
ABSTRACT.BACKGROUND: Despite the overall decreasing rate of undernutrition in children in recent years, its negative effect on growth and development makes it a main concern in the world. Applying an available and appropriate supplementary food is a major approach in treating children with Moderate Acute Malnutrition (MAM).
ABSTRACT.OBJECTIVE: To compare the effectiveness of a new supplementary blended flour (containing chickpea, rice, wheat and barley, named Shadameen) in combination with multivitamin/mineral supplement and nutritional counseling versus multivitamin/mineral supplement and nutritional counseling alone, in the treatment of children with MAM.
ABSTRACT.METHODS: This randomized controlled trial study was conducted at Heshmatiyeh Hospital in Sabzevar city in Iran, from January 2016 to December 2016. Seventy infants, aged 9 to 24 months with MAM who were referred from urban health centers to the hospital clinic were included. They were randomly assigned to receive, for about 3 months, either multivitamin / mineral supplement and nutritional counseling alone or in combination with an extra supplementary blended food. We analyzed weight, length, weight for length Z score (WLZ), weight for age Z score (WAZ) and length for weight Z score (LAZ), along with recovery rate and adverse events among the two groups. The data were analyzed using SPSS version 16. We used statistics, Chi-square, independent t-test, and Fisher's exact test for the analyses of primary and secondary outcomes.
ABSTRACT.RESULTS: The food supplementation infant's mean weight and WLZ and WAZ were greater compared with the other group (0.81±0.29 gr and 0.55±0.33 gr, p=0.002), (0.36±0.36 and 0.02±0.52, p=0.003), (0.40±0.33 and 0.09±0.37, p=0.001). The recovery rate in the food supplemented group was significantly higher than the other group (68.4%, 31.6%, p=0.001). No adverse reactions were observed. There were no significant differences in LAZ at the end of the study between the two groups (p=0.53).
ABSTRACT.CONCLUSION: This study showed that Shadameen in combination with multivitamin/mineral and counselling therapy can be more effective in decreasing the wasting rate of children with MAM than vitamin/mineral and counseling therapy alone.
ABSTRACT.TRIAL REGISTRATION: The trial was registered at the Iranian Registry of Clinical Trials (http://www.irct.ir) with the Irct ID: IRCT2015040921670N1.
ABSTRACT.FUNDING: This study was financially supported by the vice chancellor for research office, Mashhad University of Medical Sciences.
Extracted Data: | [["new supplementary blended flour (containing chickpea, rice, wheat and barley, named Shadameen) ", "multivitamin / mineral supplement and nutritional counseling alone", "The proportion of children who recovered", "significantly increased", "The food supplementation infant\u2019s mean weight and WLZ and WAZ were greater compared with the other group (0.81\u00b10.29 gr and 0.55\u00b10.33 gr, p=0.002), (0.36\u00b10.36 and 0.02\u00b10.52, p=0.003), (0.40\u00b10.33 and 0.09\u00b10.37, p=0.001). The recovery rate in the food supplemented group was significantly higher than the other group (68.4%, 31.6%, p=0.001). No adverse reactions were observed."]] | science.evidence_inference.204 | [
{
"content": "Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:\n\n- Treatment: The specific medical procedure or drug being evaluated.\n- Control: The standard or placebo used for comparison. If absent, use `null`.\n- Result: The health outcome being assessed.\n- Impact: The observed effect of the treatment on the result, categorized as: (\"increased\", \"decreased\", \"no change\").\n- Justification: The supporting details for the observed impact.\n\nFormat the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].\n\nAbstract:\nABSTRACT.BACKGROUND: Despite the decreasing rate of under nutrition children in recent years overall, the negative affect in growth and development make it as a main concern in the world. Applying an available and appropriate supplementary food is a major approach in treating children with Moderate Acute Malnutrition (MAM).\n\nABSTRACT.OBJECTIVE: To compare the effectiveness of a new supplementary blended flour (containing chickpea, rice, wheat and barley, named Shadameen) in combination with multivitamin/mineral supplement and nutritional counseling versus multivitamin/mineral supplement and nutritional counseling alone, in the treatment of children with MAM.\n\nABSTRACT.METHODS: This randomized controlled trial study was conducted at Heshmatiyeh Hospital in Sabzevar city in Iran, from January 2016 to December 2016. Seventy infants, aged 9 to 24 months with MAM who were referred from urban health centers to the hospital clinic were included. They were randomly assigned to receive, for about 3 months, either multivitamin / mineral supplement and nutritional counseling alone or in combination with an extra supplementary blended food. We analyzed weight, length, weight for length Z score (WLZ), weight for age Z score (WAZ) and length for weight Z score (LAZ), along with recovery rate and adverse events among the two groups. The data were analyzed using SPSS version 16. We used statistics, Chi-square, independent t-test, and Fisher's exact test for the analyses of primary and secondary outcomes.\n\nABSTRACT.RESULTS: The food supplementation infant's mean weight and WLZ and WAZ were greater compared with the other group (0.81±0.29 gr and 0.55±0.33 gr, p=0.002), (0.36±0.36 and 0.02±0.52, p=0.003), (0.40±0.33 and 0.09±0.37, p=0.001). The recovery rate in the food supplemented group was significantly higher than the other group (68.4%, 31.6%, p=0.001). No adverse reactions were observed. There were no significant differences in LAZ at the end of the study between the two groups (p=0.53).\n\nABSTRACT.CONCLUSION: This study showed that Shadameen in combination with multivitamin/mineral and counselling therapy can be more effective in decreasing the wasting rate of children with MAM than vitamin/mineral and counseling therapy alone.\n\nABSTRACT.TRIAL REGISTRATION: The trial was registered at the Iranian Registry of Clinical Trials (http://www.irct.ir) with the Irct ID: IRCT2015040921670N1.\n\nABSTRACT.FUNDING: This study was financially supported by the vice chancellor for research office, Mashhad University of Medical Sciences.\n\nExtracted Data:",
"role": "user"
},
{
"content": "[[\"new supplementary blended flour (containing chickpea, rice, wheat and barley, named Shadameen) \", \"multivitamin / mineral supplement and nutritional counseling alone\", \"The proportion of children who recovered\", \"significantly increased\", \"The food supplementation infant\\u2019s mean weight and WLZ and WAZ were greater compared with the other group (0.81\\u00b10.29 gr and 0.55\\u00b10.33 gr, p=0.002), (0.36\\u00b10.36 and 0.02\\u00b10.52, p=0.003), (0.40\\u00b10.33 and 0.09\\u00b10.37, p=0.001). The recovery rate in the food supplemented group was significantly higher than the other group (68.4%, 31.6%, p=0.001). No adverse reactions were observed.\"]]",
"role": "assistant"
}
]
|
science.bioasq_list_qa | Examine the following excerpts from biomedical literature. Your objective is to extract relevant information to answer the posed question. Format your answer as a JSON array.
Context: In addition, overexpression of H19 was found to downregulate DIRAS3 expression, promote mTOR phosphorylation and inhibit autophagy activation in cardiomyocytes exposed to high glucose.
In conclusion, our study suggested that H19 could inhibit autophagy in cardiomyocytes by epigenetically silencing of DIRAS3, which might provide novel insights into understanding the molecular mechanisms of diabetic cardiomyopathy.
LncRNA H19 inhibits autophagy by epigenetically silencing of DIRAS3 in diabetic cardiomyopathy.
The long non-coding RNA H19 promotes cardiomyocyte apoptosis in dilated cardiomyopathy.
H19 knockdown in the myocardium of DCM rats attenuated cardiomyocyte apoptosis and improved left ventricular structure and function. Adriamycin treatment was associated with elevated H19 and miR-675 expression and increased apoptosis in neonatal cardiomyocytes.
Moreover, H19 knockdown was found to increase PA2G4 expression and suppress apoptosis in cardiomyocytes exposed to adriamycin. In conclusion, our study suggests that H19/miR-675 axis is involved in the promotion of cardiomyocyte apoptosis by targeting PA2G4, which may provide a new therapeutic strategy for the treatment of adriamycin-induced DCM.
Downregulation of long non-coding RNA H19 promotes P19CL6 cells proliferation and inhibits apoptosis during late-stage cardiac differentiation via miR-19b-modulated Sox6.
Biological function analysis showed that knockdown of H19 promoted cell proliferation and inhibits cell apoptosis. H19 suppressed miR-19b expression and miR-19b targeted Sox6, which inhibited cell proliferation and promoted apoptosis in P19CL6 cells during late-stage cardiac differentiation.
Downregulation of H19 promoted cell proliferation and inhibited cell apoptosis during late-stage cardiac differentiation by regulating the negative role of miR-19b in Sox6 expression, which suggested that the manipulation of H19 expression could serve as a potential strategy for heart disease.
Knockdown and overexpression experiments showed that H19 induces a strong osteogenic phenotype by altering the NOTCH1 pathway.
A knockdown of H19 in valve interstitial cells (VICs) increased the expression of NOTCH1 and decreased the level of RUNX2 and BMP2, 2 downstream targets repressed by NOTCH1. In rescue experiments, the transfection of a vector encoding for the active Notch intracellular domain prevented H19-induced mineralization of valve interstitial cells
Long noncoding RNA H19 controls DUSP5/ERK1/2 axis in cardiac fibroblast proliferation and fibrosis.
H19 ectopic overexpression reduces DUSP5 abundance and increases the proliferation of cardiac fibroblast, whereas H19 silencing causes the opposite effects. In a broader perspective, these results demonstrated that LncRNA H19 contributes to cardiac fibroblast proliferation and fibrosis, which act in part through repression of DUSP5/ERK1/2.
Question: Which cellular functions are affected by lncRNA H19 in the heart? | ["inhibits autophagy", "promotes apoptosis", "inhibits proliferation", "induced mineralization of valve interstitial cells", "promotes cardiac fibroblast proliferation", "promotes fibrosis"] | science.bioasq_list_qa.542 | [
{
"content": "Examine the following excerpts from biomedical literature. Your objective is to extract relevant information to answer the posed question. Format your answer as a JSON array.\n\nContext: In addition, overexpression of H19 was found to downregulate DIRAS3 expression, promote mTOR phosphorylation and inhibit autophagy activation in cardiomyocytes exposed to high glucose.\nIn conclusion, our study suggested that H19 could inhibit autophagy in cardiomyocytes by epigenetically silencing of DIRAS3, which might provide novel insights into understanding the molecular mechanisms of diabetic cardiomyopathy.\nLncRNA H19 inhibits autophagy by epigenetically silencing of DIRAS3 in diabetic cardiomyopathy.\nThe long non-coding RNA H19 promotes cardiomyocyte apoptosis in dilated cardiomyopathy.\nH19 knockdown in the myocardium of DCM rats attenuated cardiomyocyte apoptosis and improved left ventricular structure and function. Adriamycin treatment was associated with elevated H19 and miR-675 expression and increased apoptosis in neonatal cardiomyocytes.\nMoreover, H19 knockdown was found to increase PA2G4 expression and suppress apoptosis in cardiomyocytes exposed to adriamycin. In conclusion, our study suggests that H19/miR-675 axis is involved in the promotion of cardiomyocyte apoptosis by targeting PA2G4, which may provide a new therapeutic strategy for the treatment of adriamycin-induced DCM.\nDownregulation of long non-coding RNA H19 promotes P19CL6 cells proliferation and inhibits apoptosis during late-stage cardiac differentiation via miR-19b-modulated Sox6.\nBiological function analysis showed that knockdown of H19 promoted cell proliferation and inhibits cell apoptosis. H19 suppressed miR-19b expression and miR-19b targeted Sox6, which inhibited cell proliferation and promoted apoptosis in P19CL6 cells during late-stage cardiac differentiation.\nDownregulation of H19 promoted cell proliferation and inhibited cell apoptosis during late-stage cardiac differentiation by regulating the negative role of miR-19b in Sox6 expression, which suggested that the manipulation of H19 expression could serve as a potential strategy for heart disease.\nKnockdown and overexpression experiments showed that H19 induces a strong osteogenic phenotype by altering the NOTCH1 pathway.\nA knockdown of H19 in valve interstitial cells (VICs) increased the expression of NOTCH1 and decreased the level of RUNX2 and BMP2, 2 downstream targets repressed by NOTCH1. In rescue experiments, the transfection of a vector encoding for the active Notch intracellular domain prevented H19-induced mineralization of valve interstitial cells\nLong noncoding RNA H19 controls DUSP5/ERK1/2 axis in cardiac fibroblast proliferation and fibrosis.\nH19 ectopic overexpression reduces DUSP5 abundance and increases the proliferation of cardiac fibroblast, whereas H19 silencing causes the opposite effects. In a broader perspective, these results demonstrated that LncRNA H19 contributes to cardiac fibroblast proliferation and fibrosis, which act in part through repression of DUSP5/ERK1/2.\n\nQuestion: Which cellular functions are affected by lncRNA H19 in the heart?",
"role": "user"
},
{
"content": "[\"inhibits autophagy\", \"promotes apoptosis\", \"inhibits proliferation\", \"induced mineralization of valve interstitial cells\", \"promotes cardiac fibroblast proliferation\", \"promotes fibrosis\"]",
"role": "assistant"
}
]
|
science.evidence_inference | Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:
- Treatment: The specific medical procedure or drug being evaluated.
- Control: The standard or placebo used for comparison. If absent, use `null`.
- Result: The health outcome being assessed.
- Impact: The observed effect of the treatment on the result, categorized as: ("increased", "decreased", "no change").
- Justification: The supporting details for the observed impact.
Format the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].
Abstract:
ABSTRACT: Systematic evaluation of child and adolescent psychiatric outpatient treatment is important but time-consuming. The aim of this paper was to study whether Interactive Voice Response (IVR) is a more effective method than a questionnaire sent by post when following up outpatient treatment in child and adolescent psychiatry. Eighty patients were recruited from a child and adolescent psychiatric outpatient unit in Sweden. One parent of each of the patients was randomized to complete the BCFPI follow-up form, using either IVR (n = 40) or postal survey (n = 40) one month after discharge. The response rate for complete answers was 65% in the IVR group and 38% in the postal survey group (p = 0.014). There was less need for reminders in the IVR group (p = 0.000). IVR is a promising and cost-effective method for evaluating evidence-based treatment in child and adolescent psychiatric care.
Extracted Data: | [["Interactive Voice Response", "Questionnaire sent by post", "Need for reminders", "significantly decreased", "There was less need for reminders in the IVR group (p = 0.000)."], ["Interactive Voice Response", "Questionnaire sent by post", "Responsiveness", "significantly increased", ". The response rate for complete answers was 65% in the IVR group and 38% in the postal survey group (p = 0.014)."]] | science.evidence_inference.62 | [
{
"content": "Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:\n\n- Treatment: The specific medical procedure or drug being evaluated.\n- Control: The standard or placebo used for comparison. If absent, use `null`.\n- Result: The health outcome being assessed.\n- Impact: The observed effect of the treatment on the result, categorized as: (\"increased\", \"decreased\", \"no change\").\n- Justification: The supporting details for the observed impact.\n\nFormat the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].\n\nAbstract:\nABSTRACT: Systematic evaluation of child and adolescent psychiatric outpatient treatment is important but time-consuming. The aim of this paper was to study whether Interactive Voice Response (IVR) is a more effective method than a questionnaire sent by post when following up outpatient treatment in child and adolescent psychiatry. Eighty patients were recruited from a child and adolescent psychiatric outpatient unit in Sweden. One parent of each of the patients was randomized to complete the BCFPI follow-up form, using either IVR (n = 40) or postal survey (n = 40) one month after discharge. The response rate for complete answers was 65% in the IVR group and 38% in the postal survey group (p = 0.014). There was less need for reminders in the IVR group (p = 0.000). IVR is a promising and cost-effective method for evaluating evidence-based treatment in child and adolescent psychiatric care.\n\nExtracted Data:",
"role": "user"
},
{
"content": "[[\"Interactive Voice Response\", \"Questionnaire sent by post\", \"Need for reminders\", \"significantly decreased\", \"There was less need for reminders in the IVR group (p = 0.000).\"], [\"Interactive Voice Response\", \"Questionnaire sent by post\", \"Responsiveness\", \"significantly increased\", \". The response rate for complete answers was 65% in the IVR group and 38% in the postal survey group (p = 0.014).\"]]",
"role": "assistant"
}
]
|
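The evidence-inference example above asks for a JSON array of five-element tuples. A small Python sketch of how such an output could be parsed and sanity-checked follows; note that the gold output uses "significantly increased/decreased" while the prompt lists only the bare impact labels, so the check below accepts both forms — that reconciliation, like the function name, is an assumption for illustration only.

```python
import json

# The prompt lists bare impact labels; the gold output above uses the
# "significantly ..."-prefixed forms, so both are accepted here.
BARE_IMPACTS = {"increased", "decreased", "no change"}
IMPACTS = BARE_IMPACTS | {"significantly " + x for x in ("increased", "decreased")}

def validate_extraction(raw: str) -> list:
    """Parse a JSON array of [treatment, control, result, impact, justification] tuples."""
    records = json.loads(raw)
    for i, rec in enumerate(records):
        if len(rec) != 5:
            raise ValueError(f"record {i}: expected 5 elements, got {len(rec)}")
        impact = rec[3]
        if impact not in IMPACTS:
            raise ValueError(f"record {i}: unexpected impact label {impact!r}")
    return records

if __name__ == "__main__":
    gold = ('[["Interactive Voice Response", "Questionnaire sent by post", '
            '"Need for reminders", "significantly decreased", '
            '"There was less need for reminders in the IVR group (p = 0.000)."]]')
    print(validate_extraction(gold))
```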
science.discomat_te | | Level | l (nm) | S obs | S calc | t (ms) |
| 1D2 | 355 | 0.6057 | 0.4278 | 36 |
| 1G4 | 463 | 0.4091 | 0.2859 | 483 |
| 3F2 | 655 | 0.0641 | 0.2118 | 764 |
| 3F3 | 682 | 0.9716 | 1.042 | 533 |
| 3H4 | 790 | 1.710 | 1.535 | 845 |
| 3H5 | 1210 | 1.251 | 1.252 | 3834 |
| 3F4 | 1626 | 3.419 | 3.497 | 4162 |
Caption: Spectroscopic properties of Tm-doped CaBaMg aluminate glass
Examine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:
{"value": "xx", "type": "Other"}
{"value": "xx", "type": "Glass_Compound_Amount", "constituent": "xx", "unit": "xx", "material": "xx"}
Process each cell sequentially, and for any unknown attributes, use "xx" as a placeholder. Each JSON should be on a new line.
Output: | {"value": "355", "type": "Other"}
{"value": "0.6057", "type": "Other"}
{"value": "0.4278", "type": "Other"}
{"value": "36", "type": "Other"}
{"value": "463", "type": "Other"}
{"value": "0.4091", "type": "Other"}
{"value": "0.2859", "type": "Other"}
{"value": "483", "type": "Other"}
{"value": "655", "type": "Other"}
{"value": "0.0641", "type": "Other"}
{"value": "0.2118", "type": "Other"}
{"value": "764", "type": "Other"}
{"value": "682", "type": "Other"}
{"value": "0.9716", "type": "Other"}
{"value": "1.042", "type": "Other"}
{"value": "533", "type": "Other"}
{"value": "790", "type": "Other"}
{"value": "1.710", "type": "Other"}
{"value": "1.535", "type": "Other"}
{"value": "845", "type": "Other"}
{"value": "1210", "type": "Other"}
{"value": "1.251", "type": "Other"}
{"value": "1.252", "type": "Other"}
{"value": "3834", "type": "Other"}
{"value": "1626", "type": "Other"}
{"value": "3.419", "type": "Other"}
{"value": "3.497", "type": "Other"}
{"value": "4162", "type": "Other"} | science.discomat_te.308 | [
{
"content": "| Level | l (nm) | S obs | S calc | t (ms) |\n| 1D2 | 355 | 0.6057 | 0.4278 | 36 |\n| 1G4 | 463 | 0.4091 | 0.2859 | 483 |\n| 3F2 | 655 | 0.0641 | 0.2118 | 764 |\n| 3F3 | 682 | 0.9716 | 1.042 | 533 |\n| 3H4 | 790 | 1.710 | 1.535 | 845 |\n| 3H5 | 1210 | 1.251 | 1.252 | 3834 |\n| 3F4 | 1626 | 3.419 | 3.497 | 4162 |\n\nCaption: Spectroscopic properties of Tm-doped CaBaMg aluminate glass\n\nExamine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:\n{\"value\": \"xx\", \"type\": \"Other\"}\n{\"value\": \"xx\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"xx\", \"unit\": \"xx\", \"material\": \"xx\"}\n\nProcess each cell sequentially, and for any unknown attributes, use \"xx\" as a placeholder. Each JSON should be on a new line.\n\nOutput:",
"role": "user"
},
{
"content": "{\"value\": \"355\", \"type\": \"Other\"}\n{\"value\": \"0.6057\", \"type\": \"Other\"}\n{\"value\": \"0.4278\", \"type\": \"Other\"}\n{\"value\": \"36\", \"type\": \"Other\"}\n{\"value\": \"463\", \"type\": \"Other\"}\n{\"value\": \"0.4091\", \"type\": \"Other\"}\n{\"value\": \"0.2859\", \"type\": \"Other\"}\n{\"value\": \"483\", \"type\": \"Other\"}\n{\"value\": \"655\", \"type\": \"Other\"}\n{\"value\": \"0.0641\", \"type\": \"Other\"}\n{\"value\": \"0.2118\", \"type\": \"Other\"}\n{\"value\": \"764\", \"type\": \"Other\"}\n{\"value\": \"682\", \"type\": \"Other\"}\n{\"value\": \"0.9716\", \"type\": \"Other\"}\n{\"value\": \"1.042\", \"type\": \"Other\"}\n{\"value\": \"533\", \"type\": \"Other\"}\n{\"value\": \"790\", \"type\": \"Other\"}\n{\"value\": \"1.710\", \"type\": \"Other\"}\n{\"value\": \"1.535\", \"type\": \"Other\"}\n{\"value\": \"845\", \"type\": \"Other\"}\n{\"value\": \"1210\", \"type\": \"Other\"}\n{\"value\": \"1.251\", \"type\": \"Other\"}\n{\"value\": \"1.252\", \"type\": \"Other\"}\n{\"value\": \"3834\", \"type\": \"Other\"}\n{\"value\": \"1626\", \"type\": \"Other\"}\n{\"value\": \"3.419\", \"type\": \"Other\"}\n{\"value\": \"3.497\", \"type\": \"Other\"}\n{\"value\": \"4162\", \"type\": \"Other\"}",
"role": "assistant"
}
]
|
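The materials-science example above maps every numeric table cell to a one-line JSON object. The sketch below shows one way such per-cell JSON lines could be generated from the pipe-delimited table; the numeric-cell heuristic and the blanket "Other" type are assumptions — correctly assigning the Glass_Compound_Amount type would require domain knowledge that is not encoded here.

```python
import json

# First rows of the table from the example above, kept pipe-delimited.
TABLE = """\
| Level | l (nm) | S obs | S calc | t (ms) |
| 1D2 | 355 | 0.6057 | 0.4278 | 36 |
| 1G4 | 463 | 0.4091 | 0.2859 | 483 |"""

def is_numeric(cell: str) -> bool:
    try:
        float(cell)
        return True
    except ValueError:
        return False

def cells_to_json(table: str):
    """Yield one JSON string per numeric cell, defaulting the type to 'Other'."""
    rows = [line.strip().strip("|").split("|") for line in table.splitlines()]
    for row in rows[1:]:                      # skip the header row
        for cell in (c.strip() for c in row):
            if is_numeric(cell):
                yield json.dumps({"value": cell, "type": "Other"})

if __name__ == "__main__":
    for line in cells_to_json(TABLE):
        print(line)
```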
science.scifact_entailment | Determine the relationship between them and generate a JSON response. Scientific Claim:
Physical activity level is associated with the difference in maximal oxygen consumption between black and white youth.
Research Abstract:
The purpose of this study was to examine differences in resting, submaximal, and maximal (VO2max) oxygen consumption (VO2) in African-American (n = 44) and Caucasian (n = 31) prepubertal children aged 5-10 yr.
Resting VO2 was measured via indirect calorimetry in the fasted state.
Submaximal VO2 and VO2max were determined during an all out, progressive treadmill exercise test appropriate for children.
Dual-energy X-ray absorptiometry was used to determine total fat mass (FM), soft lean tissue mass (LTM), and leg soft LTM.
Doubly labeled water was used to determine total energy expenditure (TEE) and activity energy expenditure (AEE).
A significant effect of ethnicity (P < 0.01) was found for VO2max but not resting or submaximal VO2, with African-American children having absolute VO2max approximately 15% lower than Caucasian children (1.21 +/- 0.032 vs. 1.43 +/-
0.031 l/min, respectively).
The lower VO2max persisted in African-American children after adjustment for soft LTM (1.23 +/- 0.025 vs. 1.39 +/-
0.031 l/min; P < 0.01), leg soft LTM (1.20 +/-
0.031 vs. 1.43 +/-
0.042 l/min; P < 0.01), and soft LTM and FM (1.23 +/-
0.025 vs. 1.39 +/-
0.031 l/min; P < 0.01).
The lower VO2max persisted also after adjustment for TEE (1.20 +/-
0.02 vs. 1.38 +/-
0.0028 l/min P < 0.001) and AEE (1.20 +/- 0.024 vs. 1.38 +/-
0.028 l/min; P < 0.001).
In conclusion, our data indicate that African-American and Caucasian children have similar rates of VO2 at rest and during submaximal exercise, but VO2max is approximately 15% lower in African-American children, independent of soft LTM, FM, leg LTM, TEE, and AEE. | {"verdict": "NEI", "evidence": []} | science.scifact_entailment.600 | [
{
"content": "Determine the relationship between them and generate a JSON response. Scientific Claim: \nPhysical activity level is associated with the difference in maximal oxygen consumption between black and white youth.\n\nResearch Abstract:\nThe purpose of this study was to examine differences in resting, submaximal, and maximal (VO2max) oxygen consumption (VO2) in African-American (n = 44) and Caucasian (n = 31) prepubertal children aged 5-10 yr.\nResting VO2 was measured via indirect calorimetry in the fasted state.\nSubmaximal VO2 and VO2max were determined during an all out, progressive treadmill exercise test appropriate for children.\nDual-energy X-ray absorptiometry was used to determine total fat mass (FM), soft lean tissue mass (LTM), and leg soft LTM.\nDoubly labeled water was used to determine total energy expenditure (TEE) and activity energy expenditure (AEE).\nA significant effect of ethnicity (P < 0.01) was found for VO2max but not resting or submaximal VO2, with African-American children having absolute VO2max approximately 15% lower than Caucasian children (1.21 +/- 0.032 vs. 1.43 +/-\n0.031 l/min, respectively).\nThe lower VO2max persisted in African-American children after adjustment for soft LTM (1.23 +/- 0.025 vs. 1.39 +/-\n0.031 l/min; P < 0.01), leg soft LTM (1.20 +/-\n0.031 vs. 1.43 +/-\n0.042 l/min; P < 0.01), and soft LTM and FM (1.23 +/-\n0.025 vs. 1.39 +/-\n0.031 l/min; P < 0.01).\nThe lower VO2max persisted also after adjustment for TEE (1.20 +/-\n0.02 vs. 1.38 +/-\n0.0028 l/min P < 0.001) and AEE (1.20 +/- 0.024 vs. 1.38 +/-\n0.028 l/min; P < 0.001).\nIn conclusion, our data indicate that African-American and Caucasian children have similar rates of VO2 at rest and during submaximal exercise, but VO2max is approximately 15% lower in African-American children, independent of soft LTM, FM, leg LTM, TEE, and AEE.",
"role": "user"
},
{
"content": "{\"verdict\": \"NEI\", \"evidence\": []}",
"role": "assistant"
}
]
|
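The fact-checking example above expects a JSON object with a verdict and an evidence list. A minimal sketch of assembling and checking that object follows; the full label set (SUPPORT, CONTRADICT, NEI) and the convention that an NEI verdict carries no evidence sentences are assumptions, since only NEI appears in this example.

```python
import json

# Assumed label set; only "NEI" is attested in the example above.
VERDICTS = {"SUPPORT", "CONTRADICT", "NEI"}

def make_verdict(verdict: str, evidence: list) -> str:
    """Build the {"verdict": ..., "evidence": [...]} response shown above."""
    if verdict not in VERDICTS:
        raise ValueError(f"unknown verdict {verdict!r}")
    if verdict == "NEI" and evidence:
        # Assumed convention: no supporting sentences when the abstract
        # provides insufficient information.
        raise ValueError("NEI verdicts should carry an empty evidence list")
    return json.dumps({"verdict": verdict, "evidence": evidence})

if __name__ == "__main__":
    print(make_verdict("NEI", []))
```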
science.mup_single_document_summarization | Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.
Title of the Paper:
Surreal-GAN:Semi-Supervised Representation Learning via GAN for uncovering heterogeneous disease-related imaging patterns
Main Content:
1 INTRODUCTION . Neuroimaging , conventional machine learning , and deep learning have offered unprecedented opportunities to understand the underlying mechanism of brain disorders over the past decades ( Davatzikos ( 2018 ) ) , and pave the road towards individualized precision medicine ( Bzdok & MeyerLindenberg ( 2018 ) ) . A large body of case-control studies leverage mass univariate group comparisons to derive structural or functional neuroimaging signatures ( Habeck et al . ( 2008 ) , Hampel et al . ( 2008 ) , Ewers et al . ( 2011 ) ) . However , these studies suffer from under-powered statistical inferences since the heterogeneous nature of neurological diseases usually violate the assumption that each group population are pathologically homogeneous . A developing body of methodologies aim to disentangle this heterogeneity with various machine learning methods . Zhang et al . ( 2016 ) used the Bayesian Latent Dirichlet Allocation ( LDA ) model to identify latent atrophy patterns from voxel-wise gray matter ( GM ) density maps . However , due to the nature of the LDA model , the density maps need to be first discretized , and the model hypothesized to exclusively capture brain atrophy while ignoring the potential regional enlargement . Young et al . ( 2018 ) proposed another method , Sustain , to uncover temporal and phenotypic heterogeneity by inferring both subtypes and stages . However , the model only handles around 10-20 large brain regions without more detailed information . Notably , both methods applied directly in the PT domain , confront main limitations in avoiding potential disease-unrelated brain variations . In con- trast , semi-supervised clustering methods ( Varol et al . ( 2016 ) , Dong et al . ( 2015 ) , Wen et al . ( 2021 ) , Yang et al . ( 2021 ) ) were proposed to cluster patients via the patterns or transformations between the reference group ( CN ) and the target patient group ( PT ) . A recent proposed deep learning-based semi-supervised clustering method , termed Smile-GAN ( Yang et al . ( 2021 ) ) , achieved better clustering performance by learning multiple mappings from CN to PT with an inverse clustering function for both regularization and clustering . Albeit these methods demonstrated potential in clinical applications , they have a major limitation . In particular , these methods model disease heterogeneity as a dichotomous process and derive a discrete output , i.e. , subtype , ignoring the fact that brain disorders progress along a continuum . Moreover , variations among subtypes were contributed by both spatial and temporal differences in disease patterns , thus confounding further clinical analysis. ( Fig . 1a ) To address the aforementioned limitations , we propose a novel method , Surreal-GAN ( SemiSupervised Representation Learning via GAN ) , for deriving heterogeneity-refined imaging signatures . Surreal-GAN is a significant extension of Smile-GAN under the same principle of semisupervised learning by transforming data from CN domain X to PT domain Y. Herein , the key extension is to represent complex disease-related heterogeneity with low dimensional representations with each dimension indicating the severity of one relatively homogeneous imaging patterns ( Fig . 1b ) . We refer to these low dimensional representations as R-indices ( ri , where i represent the ith dimension ) of disease pattern . 
The first key point of the method is modelling disease as a continuous process and learning infinite transformation directions from CN to PT , with each direction capturing a specific combination of patterns and severity . The idea is realized by learning one transformation function which takes both normal data and a continuous latent variable as inputs and output synthesized-PT data whose distribution is indistinguishable from that of real PT data . The second key point is controlling the monotonicity ( from light to severe with increasing index value ) of disease patterns through one monotonicity loss and double sampling procedure . The third key point is boosting the separation of captured disease patterns through one orthogonality regularization . The fourth key point is introducing a inverse mapping function consisting of a decomposition part and a reconstruction part which further ensure that patterns synthesized through transformation are exactly captured for inferring representations of disease patterns , the R-indices . Besides the above mentioned regularization , function continuity , transformation sparsity and inverse consistency further guide the model to capture meaningful imaging patterns . To support our claims , we first validated Surreal-GAN on semi-synthetic data with known ground truth of disease patterns and severity . We compared performance of Surreal-GAN to NMF ( Lee & Seung ( 1999 ) ) , Factor Analysis and LDA ( Blei et al . ( 2001 ) ) models , and subsequently studied the robustness of model under various conditions . Finally , we applied the model to one real dataset for Alzheimer ’ s disease ( AD ) , and demonstrated significant correlations between the proposed imaging signatures and clincial variables . 2 METHODS . The schematic diagram of Surreal-GAN is shown in Fig . 1d . The model is applied on regional volume data ( volumes of different graymatter ( GM ) and whitematter ( WM ) regions ) derived from structural MRI . The essential idea is to learn one transformation function f : X ∗ Z → Y which transforms CN data x to different synthesized PT data y′ = f ( x , z ) , with latent variable z specifying distinct combinations of patterns and severity . Here , the Latent ( LAT ) domain , Z = { z : 0 ≤ zi ≤ 1 ∀1 ≤ i ≤ M } , is a class of vectors with dimension M ( predefined number of patterns ) . We denote four different data distributions as x ∼ pcn ( x ) , y ∼ ppt ( y ) , y′ ∼ psyn ( y′ ) and z ∼ plat ( z ) , respectively , where z ∼ plat ( z ) is sampled from a multivariate uniform distribution U [ 0 , 1 ] M , rather than a categorical distribution as in Smile-GAN . A continuous z variable lay the foundation for learning continuous representations . In addition to the transformation function , an adversarial discriminator D is introduced to distinguish between real PT data y and synthesized PT data y′ , thereby ensuring that the synthesized PT data are indistinguishable from real PT data ( i.e . minimizing the distance between real PT data distribution and synthesized PT data distribution ) . A continuous latent variable does not necessarily lead to desired properties of CN to PT transformation ( e.g . separation/monotonicity of captured patterns ) . That is , there are a number of functions potentially achieving equality in distributions , thus making it hard to guarantee that the transformation function learned by the model is closely related to the underlying pathology progression process . 
Moreover , during the training procedure , the transformation function backboned by the neural network tends to ignore the latent variable z . Even when the Latent variable z is emphasized through regularization , different components of z variable do not naturally lead to separate disease patterns and are not positively correlated with pattern severity . Therefore , with the assumption that there is one true underlying function for real PT data y = h ( x , z ) , Surreal-GAN aims to boost the transformation function f to approximate the true underlying function h , while having the latent variable , z , contributing to recognizable imaging patterns rather than being random ” activators ” of the GAN . For this purpose , we constrain the function class of f via different types of regularization as follows : 1 ) encourage sparse transformations ; 2 ) enforce Lipschitz continuity of functions ; 3 ) introduce one inverse function g : Y → Z serving to decompose and reconstruct ; 4 ) boost orthogonality among synthesized/captured patterns ; 5 ) enforce monotonicity of synthesized patterns and their positive correlations with components of z . The training procedure of Surreal-GAN is thus a adversarial learning process constrained by these regularization . 2.1 ADVERSARIAL LOSS . The adversarial loss ( Goodfellow et al . ( 2014 ) ) is used for iterative training of discriminator D and the transformation function f , which can be written as : LGAN ( D , f ) = Ey∼ppt ( y ) [ log ( D ( y ) ) ] + Ex∼pcn ( x ) , z∼Plat ( z ) [ log ( 1−D ( f ( x , z ) ) ) ] ( 1 ) = Ey∼ppt ( y ) [ log ( D ( y ) ) ] + Ey′∼psyn ( y′ ) [ log ( 1−D ( y ′ ) ) ] ( 2 ) The transformation function f attempts to transform CN data to synthesized PT data so that they follow similar distributions as real PT data . The discriminator , providing a probability that y comes from the real data rather than transformation function , tries to identify the synthesized PT data and distinguish it from real PT data . Therefore , the discriminator D attempts to maximize the adversarial loss function while the transformation function f attempts to minimize against it . The training process can be denoted as : min f max D LGAN ( D , f ) = Ey∼ppt ( y ) [ log ( D ( y ) ) ] + Ey′∼psyn ( y′ ) [ log ( 1−D ( y ′ ) ) ] ( 3 ) 2.2 REGULARIZATION . 2.2.1 SPARSE TRANSFORMATION . We assumed that disease process will not change brain anatomy dramatically and primarily only affect certain regions throughout most of disease stages , which means that the true underlying transformation only moderately changes some regions while keeping the rest mildly affected with most of z variable values . Therefore , to encourage sparsity and control the distance of transformations , we defined one change loss to be the l1 distance between the synthesized PT data and the original CN data : Lchange ( f ) = Ex∼pcn ( x ) , z∼plat ( z ) [ ||f ( x , z ) − x||1 ] ( 4 ) 2.2.2 LIPSCHITIZ CONTINUITY AND INVERSE MAPPING . First , by enforcing the transformation function f to be K1-Lipschitz continuous , we can derive that , for fixed latent variable z = a and ∀x1 , x2 ∈ X , ||f ( x1 , a ) − f ( x2 , a ) ||2 ≤ K1||x1−x2||2 . Thus , by controlling the constant K1 , we constrained the transformation function to preserve the original distances among CN data instead of scattering them into the PT domain if a same combination of disease patterns is imposed to them . 
Second , to prevent latent variable z from being ignored , we introduced an inverse mapping function from PT domain to Latent domain , g : Y → Z . By Lemma1 ( section A.1 ) , with function g being K2-Lipschitz continuous and d ( · , · ) being any distance metric satisfying triangular inequality , we can derive that , ∀z1 , z2 ∼ plat ( z ) , z1 6= z2 , and x̄ ∼ pcn ( x ) , d ( f ( x̄ , z1 ) , f ( x̄ , z2 ) ) is lower bounded by ( d ( z1 , z2 ) K2 − 1 K2 ( d ( g ( f ( x̄ , z1 ) ) , z1 ) + d ( g ( f ( x̄ , z2 ) ) , z2 ) ) ) . Therefore , by minimizing the distance between sampled latent variable z and reconstructed latent variable g ( f ( x , z ) ) , we can control differences between synthesized patterns to be positively related to the distances between latent variables , and thus to be non-trivial ( i.e. , same CN data is transformed to significantly different PT data with different z variables ) . Therefore , we define a reconstruction loss as the l2 distance between g ( f ( x , z ) ) and z : Lrecons ( f , g ) = Ex∼pcn ( x ) , z∼plat ( z ) [ ||g ( f ( x , z ) ) − z||2 ] ( 5 ) | In this paper, the authors present a method for learning a representation of disease-related image patterns from regional volume information generated from structural MRI images. The method is based on a GAN approach, and different regularization methods on the latent representation are presented to improve the learning of disease-related representations. There are four contributions presented in the paper: 1. Modelling a disease as a continuous process 2. Controlling the monotonicity of the disease pattern 3. Separation of captured disease pattern 4. Inverse Mapping to ensure that disease patterns synthesized are captured The evaluation of the method is performed on semi-synthetic data sets and an actual Alzheimer’s disease data set. The results show that the method can identify two clinically informative patterns. | science.mup_single_document_summarization.5258 | [
{
"content": "Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.\n\nTitle of the Paper:\nSurreal-GAN:Semi-Supervised Representation Learning via GAN for uncovering heterogeneous disease-related imaging patterns\n\nMain Content:\n1 INTRODUCTION . Neuroimaging , conventional machine learning , and deep learning have offered unprecedented opportunities to understand the underlying mechanism of brain disorders over the past decades ( Davatzikos ( 2018 ) ) , and pave the road towards individualized precision medicine ( Bzdok & MeyerLindenberg ( 2018 ) ) . A large body of case-control studies leverage mass univariate group comparisons to derive structural or functional neuroimaging signatures ( Habeck et al . ( 2008 ) , Hampel et al . ( 2008 ) , Ewers et al . ( 2011 ) ) . However , these studies suffer from under-powered statistical inferences since the heterogeneous nature of neurological diseases usually violate the assumption that each group population are pathologically homogeneous . A developing body of methodologies aim to disentangle this heterogeneity with various machine learning methods . Zhang et al . ( 2016 ) used the Bayesian Latent Dirichlet Allocation ( LDA ) model to identify latent atrophy patterns from voxel-wise gray matter ( GM ) density maps . However , due to the nature of the LDA model , the density maps need to be first discretized , and the model hypothesized to exclusively capture brain atrophy while ignoring the potential regional enlargement . Young et al . ( 2018 ) proposed another method , Sustain , to uncover temporal and phenotypic heterogeneity by inferring both subtypes and stages . However , the model only handles around 10-20 large brain regions without more detailed information . Notably , both methods applied directly in the PT domain , confront main limitations in avoiding potential disease-unrelated brain variations . In con- trast , semi-supervised clustering methods ( Varol et al . ( 2016 ) , Dong et al . ( 2015 ) , Wen et al . ( 2021 ) , Yang et al . ( 2021 ) ) were proposed to cluster patients via the patterns or transformations between the reference group ( CN ) and the target patient group ( PT ) . A recent proposed deep learning-based semi-supervised clustering method , termed Smile-GAN ( Yang et al . ( 2021 ) ) , achieved better clustering performance by learning multiple mappings from CN to PT with an inverse clustering function for both regularization and clustering . Albeit these methods demonstrated potential in clinical applications , they have a major limitation . In particular , these methods model disease heterogeneity as a dichotomous process and derive a discrete output , i.e. , subtype , ignoring the fact that brain disorders progress along a continuum . Moreover , variations among subtypes were contributed by both spatial and temporal differences in disease patterns , thus confounding further clinical analysis. ( Fig . 1a ) To address the aforementioned limitations , we propose a novel method , Surreal-GAN ( SemiSupervised Representation Learning via GAN ) , for deriving heterogeneity-refined imaging signatures . Surreal-GAN is a significant extension of Smile-GAN under the same principle of semisupervised learning by transforming data from CN domain X to PT domain Y. 
Herein , the key extension is to represent complex disease-related heterogeneity with low dimensional representations with each dimension indicating the severity of one relatively homogeneous imaging patterns ( Fig . 1b ) . We refer to these low dimensional representations as R-indices ( ri , where i represent the ith dimension ) of disease pattern . The first key point of the method is modelling disease as a continuous process and learning infinite transformation directions from CN to PT , with each direction capturing a specific combination of patterns and severity . The idea is realized by learning one transformation function which takes both normal data and a continuous latent variable as inputs and output synthesized-PT data whose distribution is indistinguishable from that of real PT data . The second key point is controlling the monotonicity ( from light to severe with increasing index value ) of disease patterns through one monotonicity loss and double sampling procedure . The third key point is boosting the separation of captured disease patterns through one orthogonality regularization . The fourth key point is introducing a inverse mapping function consisting of a decomposition part and a reconstruction part which further ensure that patterns synthesized through transformation are exactly captured for inferring representations of disease patterns , the R-indices . Besides the above mentioned regularization , function continuity , transformation sparsity and inverse consistency further guide the model to capture meaningful imaging patterns . To support our claims , we first validated Surreal-GAN on semi-synthetic data with known ground truth of disease patterns and severity . We compared performance of Surreal-GAN to NMF ( Lee & Seung ( 1999 ) ) , Factor Analysis and LDA ( Blei et al . ( 2001 ) ) models , and subsequently studied the robustness of model under various conditions . Finally , we applied the model to one real dataset for Alzheimer ’ s disease ( AD ) , and demonstrated significant correlations between the proposed imaging signatures and clincial variables . 2 METHODS . The schematic diagram of Surreal-GAN is shown in Fig . 1d . The model is applied on regional volume data ( volumes of different graymatter ( GM ) and whitematter ( WM ) regions ) derived from structural MRI . The essential idea is to learn one transformation function f : X ∗ Z → Y which transforms CN data x to different synthesized PT data y′ = f ( x , z ) , with latent variable z specifying distinct combinations of patterns and severity . Here , the Latent ( LAT ) domain , Z = { z : 0 ≤ zi ≤ 1 ∀1 ≤ i ≤ M } , is a class of vectors with dimension M ( predefined number of patterns ) . We denote four different data distributions as x ∼ pcn ( x ) , y ∼ ppt ( y ) , y′ ∼ psyn ( y′ ) and z ∼ plat ( z ) , respectively , where z ∼ plat ( z ) is sampled from a multivariate uniform distribution U [ 0 , 1 ] M , rather than a categorical distribution as in Smile-GAN . A continuous z variable lay the foundation for learning continuous representations . In addition to the transformation function , an adversarial discriminator D is introduced to distinguish between real PT data y and synthesized PT data y′ , thereby ensuring that the synthesized PT data are indistinguishable from real PT data ( i.e . minimizing the distance between real PT data distribution and synthesized PT data distribution ) . A continuous latent variable does not necessarily lead to desired properties of CN to PT transformation ( e.g . 
separation/monotonicity of captured patterns ) . That is , there are a number of functions potentially achieving equality in distributions , thus making it hard to guarantee that the transformation function learned by the model is closely related to the underlying pathology progression process . Moreover , during the training procedure , the transformation function backboned by the neural network tends to ignore the latent variable z . Even when the Latent variable z is emphasized through regularization , different components of z variable do not naturally lead to separate disease patterns and are not positively correlated with pattern severity . Therefore , with the assumption that there is one true underlying function for real PT data y = h ( x , z ) , Surreal-GAN aims to boost the transformation function f to approximate the true underlying function h , while having the latent variable , z , contributing to recognizable imaging patterns rather than being random ” activators ” of the GAN . For this purpose , we constrain the function class of f via different types of regularization as follows : 1 ) encourage sparse transformations ; 2 ) enforce Lipschitz continuity of functions ; 3 ) introduce one inverse function g : Y → Z serving to decompose and reconstruct ; 4 ) boost orthogonality among synthesized/captured patterns ; 5 ) enforce monotonicity of synthesized patterns and their positive correlations with components of z . The training procedure of Surreal-GAN is thus a adversarial learning process constrained by these regularization . 2.1 ADVERSARIAL LOSS . The adversarial loss ( Goodfellow et al . ( 2014 ) ) is used for iterative training of discriminator D and the transformation function f , which can be written as : LGAN ( D , f ) = Ey∼ppt ( y ) [ log ( D ( y ) ) ] + Ex∼pcn ( x ) , z∼Plat ( z ) [ log ( 1−D ( f ( x , z ) ) ) ] ( 1 ) = Ey∼ppt ( y ) [ log ( D ( y ) ) ] + Ey′∼psyn ( y′ ) [ log ( 1−D ( y ′ ) ) ] ( 2 ) The transformation function f attempts to transform CN data to synthesized PT data so that they follow similar distributions as real PT data . The discriminator , providing a probability that y comes from the real data rather than transformation function , tries to identify the synthesized PT data and distinguish it from real PT data . Therefore , the discriminator D attempts to maximize the adversarial loss function while the transformation function f attempts to minimize against it . The training process can be denoted as : min f max D LGAN ( D , f ) = Ey∼ppt ( y ) [ log ( D ( y ) ) ] + Ey′∼psyn ( y′ ) [ log ( 1−D ( y ′ ) ) ] ( 3 ) 2.2 REGULARIZATION . 2.2.1 SPARSE TRANSFORMATION . We assumed that disease process will not change brain anatomy dramatically and primarily only affect certain regions throughout most of disease stages , which means that the true underlying transformation only moderately changes some regions while keeping the rest mildly affected with most of z variable values . Therefore , to encourage sparsity and control the distance of transformations , we defined one change loss to be the l1 distance between the synthesized PT data and the original CN data : Lchange ( f ) = Ex∼pcn ( x ) , z∼plat ( z ) [ ||f ( x , z ) − x||1 ] ( 4 ) 2.2.2 LIPSCHITIZ CONTINUITY AND INVERSE MAPPING . First , by enforcing the transformation function f to be K1-Lipschitz continuous , we can derive that , for fixed latent variable z = a and ∀x1 , x2 ∈ X , ||f ( x1 , a ) − f ( x2 , a ) ||2 ≤ K1||x1−x2||2 . 
Thus , by controlling the constant K1 , we constrained the transformation function to preserve the original distances among CN data instead of scattering them into the PT domain if a same combination of disease patterns is imposed to them . Second , to prevent latent variable z from being ignored , we introduced an inverse mapping function from PT domain to Latent domain , g : Y → Z . By Lemma1 ( section A.1 ) , with function g being K2-Lipschitz continuous and d ( · , · ) being any distance metric satisfying triangular inequality , we can derive that , ∀z1 , z2 ∼ plat ( z ) , z1 6= z2 , and x̄ ∼ pcn ( x ) , d ( f ( x̄ , z1 ) , f ( x̄ , z2 ) ) is lower bounded by ( d ( z1 , z2 ) K2 − 1 K2 ( d ( g ( f ( x̄ , z1 ) ) , z1 ) + d ( g ( f ( x̄ , z2 ) ) , z2 ) ) ) . Therefore , by minimizing the distance between sampled latent variable z and reconstructed latent variable g ( f ( x , z ) ) , we can control differences between synthesized patterns to be positively related to the distances between latent variables , and thus to be non-trivial ( i.e. , same CN data is transformed to significantly different PT data with different z variables ) . Therefore , we define a reconstruction loss as the l2 distance between g ( f ( x , z ) ) and z : Lrecons ( f , g ) = Ex∼pcn ( x ) , z∼plat ( z ) [ ||g ( f ( x , z ) ) − z||2 ] ( 5 )",
"role": "user"
},
{
"content": "In this paper, the authors present a method for learning a representation of disease-related image patterns from regional volume information generated from structural MRI images. The method is based on a GAN approach, and different regularization methods on the latent representation are presented to improve the learning of disease-related representations. There are four contributions presented in the paper: 1. Modelling a disease as a continuous process 2. Controlling the monotonicity of the disease pattern 3. Separation of captured disease pattern 4. Inverse Mapping to ensure that disease patterns synthesized are captured The evaluation of the method is performed on semi-synthetic data sets and an actual Alzheimer’s disease data set. The results show that the method can identify two clinically informative patterns.",
"role": "assistant"
}
]
|
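The Surreal-GAN row above spells out an adversarial loss, an l1 change penalty, and a latent reconstruction penalty (Eqs. 1-5). The PyTorch sketch below renders those three terms directly from the formulas as quoted; the network sizes, the simple MLP parameterizations of f, g and the discriminator, the toy random batches, and the equal weighting of the terms are all illustrative assumptions. It omits the paper's monotonicity and orthogonality regularizers and is not the authors' implementation.

```python
import torch
import torch.nn as nn

D_FEAT, D_LAT = 16, 3   # placeholder sizes: number of regional volumes, number of patterns M

# f: (CN data, latent z) -> synthesized PT data;  g: PT data -> latent;  D: discriminator
f = nn.Sequential(nn.Linear(D_FEAT + D_LAT, 64), nn.ReLU(), nn.Linear(64, D_FEAT))
g = nn.Sequential(nn.Linear(D_FEAT, 64), nn.ReLU(), nn.Linear(64, D_LAT), nn.Sigmoid())
D = nn.Sequential(nn.Linear(D_FEAT, 64), nn.ReLU(), nn.Linear(64, 1), nn.Sigmoid())
bce = nn.BCELoss()

def discriminator_loss(x_cn, y_pt):
    """Eqs. (1)-(3): D separates real PT data from synthesized PT data f(x, z)."""
    z = torch.rand(x_cn.size(0), D_LAT)                       # z ~ U[0, 1]^M
    y_syn = f(torch.cat([x_cn, z], dim=1)).detach()
    real = bce(D(y_pt), torch.ones(y_pt.size(0), 1))
    fake = bce(D(y_syn), torch.zeros(y_syn.size(0), 1))
    return real + fake

def generator_losses(x_cn):
    """Adversarial term plus the change (Eq. 4) and reconstruction (Eq. 5) penalties."""
    z = torch.rand(x_cn.size(0), D_LAT)
    y_syn = f(torch.cat([x_cn, z], dim=1))
    adv = bce(D(y_syn), torch.ones(x_cn.size(0), 1))           # make synthesized data look real
    change = (y_syn - x_cn).abs().sum(dim=1).mean()            # l1 distance ||f(x, z) - x||_1
    recon = torch.norm(g(y_syn) - z, dim=1).mean()             # l2 distance ||g(f(x, z)) - z||_2
    return adv, change, recon

if __name__ == "__main__":
    x_cn, y_pt = torch.randn(8, D_FEAT), torch.randn(8, D_FEAT)   # toy CN / PT batches
    print(float(discriminator_loss(x_cn, y_pt)))
    adv, change, recon = generator_losses(x_cn)
    (adv + change + recon).backward()                             # equal weights, purely illustrative
    print(float(adv), float(change), float(recon))
```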
science.qasper_abstractive_qa | Examine the provided sections from a research document and address the accompanying query. The text is divided by newlines.
Document Content:
An Evaluation Dataset for Intent Classification and Out-of-Scope Prediction
Task-oriented dialog systems need to know when a query falls outside their range of supported intents, but current text classification corpora only define label sets that cover every example. We introduce a new dataset that includes queries that are out-of-scope---i.e., queries that do not fall into any of the system's supported intents. This poses a new challenge because models cannot assume that every query at inference time belongs to a system-supported intent class. Our dataset also covers 150 intent classes over 10 domains, capturing the breadth that a production task-oriented agent must handle. We evaluate a range of benchmark classifiers on our dataset along with several different out-of-scope identification schemes. We find that while the classifiers perform well on in-scope intent classification, they struggle to identify out-of-scope queries. Our dataset and evaluation fill an important gap in the field, offering a way of more rigorously and realistically benchmarking text classification in task-driven dialog systems.
Introduction
Task-oriented dialog systems have become ubiquitous, providing a means for billions of people to interact with computers using natural language. Moreover, the recent influx of platforms and tools such as Google's DialogFlow or Amazon's Lex for building and deploying such systems makes them even more accessible to various industries and demographics across the globe.
Tools for developing such systems start by guiding developers to collect training data for intent classification: the task of identifying which of a fixed set of actions the user wishes to take based on their query. Relatively few public datasets exist for evaluating performance on this task, and those that do exist typically cover only a very small number of intents (e.g. BIBREF0, which has 7 intents). Furthermore, such resources do not facilitate analysis of out-of-scope queries: queries that users may reasonably make, but fall outside of the scope of the system-supported intents.
Figure FIGREF1 shows example query-response exchanges between a user and a task-driven dialog system for personal finance. In the first user-system exchange, the system correctly identifies the user's intent as an in-scope balance query. In the second and third exchanges, the user queries with out-of-scope inputs. In the second exchange, the system incorrectly identifies the query as in-scope and yields an unrelated response. In the third exchange, the system correctly classifies the user's query as out-of-scope, and yields a fallback response.
Out-of-scope queries are inevitable for a task-oriented dialog system, as most users will not be fully cognizant of the system's capabilities, which are limited by the fixed number of intent classes. Correctly identifying out-of-scope cases is thus crucial in deployed systems—both to avoid performing the wrong action and also to identify potential future directions for development. However, this problem has seen little attention in analyses and evaluations of intent classification systems.
This paper fills this gap by analyzing intent classification performance with a focus on out-of-scope handling. To do so, we constructed a new dataset with 23,700 queries that are short and unstructured, in the same style made by real users of task-oriented systems. The queries cover 150 intents, plus out-of-scope queries that do not fall within any of the 150 in-scope intents.
We evaluate a range of benchmark classifiers and out-of-scope handling methods on our dataset. BERT BIBREF1 yields the best in-scope accuracy, scoring 96% or above even when we limit the training data or introduce class imbalance. However, all methods struggle with identifying out-of-scope queries. Even when a large number of out-of-scope examples are provided for training, there is a major performance gap, with the best system scoring 66% out-of-scope recall. Our results show that while current models work on known classes, they have difficulty on out-of-scope queries, particularly when data is not plentiful. This dataset will enable future work to address this key gap in the research and development of dialog systems. All data introduced in this paper can be found at https://github.com/clinc/oos-eval.
Dataset
We introduce a new crowdsourced dataset of 23,700 queries, including 22,500 in-scope queries covering 150 intents, which can be grouped into 10 general domains. The dataset also includes 1,200 out-of-scope queries. Table TABREF2 shows examples of the data.
Dataset ::: In-Scope Data Collection
We defined the intents with guidance from queries collected using a scoping crowdsourcing task, which prompted crowd workers to provide questions and commands related to topic domains in the manner they would interact with an artificially intelligent assistant. We manually grouped data generated by scoping tasks into intents. To collect additional data for each intent, we used the rephrase and scenario crowdsourcing tasks proposed by BIBREF2. For each intent, there are 100 training queries, which is representative of what a team with a limited budget could gather while developing a task-driven dialog system. Along with the 100 training queries, there are 20 validation and 30 testing queries per intent.
Dataset ::: Out-of-Scope Data Collection
Out-of-scope queries were collected in two ways. First, using worker mistakes: queries written for one of the 150 intents that did not actually match any of the intents. Second, using scoping and scenario tasks with prompts based on topic areas found on Quora, Wikipedia, and elsewhere. To help ensure the richness of this additional out-of-scope data, each of these task prompts contributed to at most four queries. Since we use the same crowdsourcing method for collecting out-of-scope data, these queries are similar in style to their in-scope counterparts.
The out-of-scope data is difficult to collect, requiring expert knowledge of the in-scope intents to painstakingly ensure that no out-of-scope query sample is mistakenly labeled as in-scope (and vice versa). Indeed, roughly only 69% of queries collected with prompts targeting out-of-scope yielded out-of-scope queries. Of the 1,200 out-of-scope queries collected, 100 are used for validation and 100 are used for training, leaving 1,000 for testing.
Dataset ::: Data Preprocessing and Partitioning
For all queries collected, all tokens were down-cased, and all end-of-sentence punctuation was removed. Additionally, all duplicate queries were removed and replaced.
In an effort to reduce bias in the in-scope data, we placed all queries from a given crowd worker in a single split (train, validation, or test). This avoids the potential issue of similar queries from a crowd worker ending up in both the train and test sets, for instance, which would make the train and test distributions unrealistically similar. We note that this is a recommendation from concurrent work by BIBREF3. We also used this procedure for the out-of-scope set, except that we split the data into train/validation/test based on task prompt instead of worker.
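The preprocessing and partitioning paragraphs above describe three concrete steps — down-casing, stripping end-of-sentence punctuation, removing duplicates — plus a split that keeps each crowd worker inside a single partition. The Python sketch below illustrates that pipeline under stated assumptions: the (worker_id, query) input format and the fractional split sizes are simplifications rather than details from the paper, and dropped duplicates are not replaced as they were in the actual collection.

```python
import random
from collections import defaultdict

def normalize(query: str) -> str:
    """Down-case and strip end-of-sentence punctuation, as described above."""
    return query.strip().lower().rstrip(".?!")

def worker_disjoint_split(samples, seed=0, val_frac=0.2, test_frac=0.2):
    """Split (worker_id, query) pairs so each worker lands in exactly one split.

    The fractions are illustrative; the paper instead fixes per-intent counts
    (100 train / 20 validation / 30 test).
    """
    seen, by_worker = set(), defaultdict(list)
    for worker_id, query in samples:
        q = normalize(query)
        if q in seen:                      # drop duplicate queries
            continue
        seen.add(q)
        by_worker[worker_id].append(q)

    workers = sorted(by_worker)
    random.Random(seed).shuffle(workers)
    n_test = int(len(workers) * test_frac)
    n_val = int(len(workers) * val_frac)
    test_w = set(workers[:n_test])
    val_w = set(workers[n_test:n_test + n_val])

    splits = {"train": [], "val": [], "test": []}
    for w, queries in by_worker.items():
        key = "test" if w in test_w else "val" if w in val_w else "train"
        splits[key].extend(queries)
    return splits

if __name__ == "__main__":
    data = [("w1", "What is my balance?"),
            ("w2", "what is my balance"),
            ("w3", "Transfer $20 to savings.")]
    print(worker_disjoint_split(data))
```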
Dataset ::: Dataset Variants
In addition to the full dataset, we consider three variations. First, Small, in which there are only 50 training queries per each in-scope intent, rather than 100. Second, Imbalanced, in which intents have either 25, 50, 75, or 100 training queries. Third, OOS+, in which there are 250 out-of-scope training examples, rather than 100. These are intended to represent production scenarios where data may be in limited or uneven supply.
Benchmark Evaluation
To quantify the challenges that our new dataset presents, we evaluated the performance of a range of classifier models and out-of-scope prediction schemes.
Benchmark Evaluation ::: Out-of-Scope Prediction
We use three baseline approaches for the task of predicting whether a query is out-of-scope: (1) oos-train, where we train an additional (i.e. 151st) intent on out-of-scope training data; (2) oos-threshold, where we use a threshold on the classifier's probability estimate; and (3) oos-binary, a two-stage process where we first classify a query as in- or out-of-scope, then classify it into one of the 150 intents if classified as in-scope.
To reduce the severity of the class imbalance between in-scope versus out-of-scope query samples (i.e., 15,000 versus 250 queries for OOS+), we investigate two strategies when using oos-binary: one where we undersample the in-scope data and train using 1,000 in-scope queries sampled evenly across all intents (versus 250 out-of-scope), and another where we augment the 250 OOS+ out-of-scope training queries with 14,750 sentences sampled from Wikipedia.
From a development point of view, the oos-train and oos-binary methods both require careful curation of an out-of-scope training set, and this set can be tailored to individual systems. The oos-threshold method is a more general decision rule that can be applied to any model that produces a probability. In our evaluation, the out-of-scope threshold was chosen to be the value which yielded the highest validation score across all intents, treating out-of-scope as its own intent.
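The oos-threshold rule described above reduces to a simple decision: flag a query as out-of-scope when the classifier's top probability falls below a threshold tuned on validation data, with out-of-scope treated as its own intent. A compact NumPy sketch of that rule follows; the -1 encoding for out-of-scope, the 101-point threshold grid, and the toy Dirichlet probabilities are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def predict_with_threshold(probs: np.ndarray, threshold: float) -> np.ndarray:
    """Return intent indices, with -1 standing in for out-of-scope."""
    top = probs.max(axis=1)
    pred = probs.argmax(axis=1)
    pred[top < threshold] = -1
    return pred

def choose_threshold(val_probs: np.ndarray, val_labels: np.ndarray, grid=None) -> float:
    """Pick the threshold with the best validation accuracy, treating oos (-1) as its own class."""
    grid = np.linspace(0.0, 1.0, 101) if grid is None else grid
    scores = [(predict_with_threshold(val_probs, t) == val_labels).mean() for t in grid]
    return float(grid[int(np.argmax(scores))])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    probs = rng.dirichlet(np.ones(150), size=32)     # toy softmax outputs over 150 intents
    labels = rng.integers(-1, 150, size=32)          # -1 marks out-of-scope queries
    t = choose_threshold(probs, labels)
    print(t, predict_with_threshold(probs, t)[:5])
```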
Benchmark Evaluation ::: Metrics
We consider two performance metrics for all scenarios: (1) accuracy over the 150 intents, and (2) recall on out-of-scope queries. We use recall to evaluate out-of-scope since we are more interested in cases where such queries are predicted as in-scope, as this would mean a system gives the user a response that is completely wrong. Precision errors are less problematic as the fallback response will prompt the user to try again, or inform the user of the system's scope of supported domains.
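The two metrics above — accuracy over the 150 in-scope intents and recall on out-of-scope queries — can be written down in a few lines; the sketch below reuses the -1 out-of-scope encoding assumed in the previous snippet.

```python
import numpy as np

def in_scope_accuracy(pred: np.ndarray, gold: np.ndarray) -> float:
    """Accuracy over in-scope queries only (gold >= 0)."""
    mask = gold >= 0
    return float((pred[mask] == gold[mask]).mean())

def oos_recall(pred: np.ndarray, gold: np.ndarray) -> float:
    """Fraction of out-of-scope queries (gold == -1) predicted as out-of-scope."""
    mask = gold == -1
    return float((pred[mask] == -1).mean())

if __name__ == "__main__":
    gold = np.array([3, 7, -1, -1, 42])
    pred = np.array([3, 9, -1, 5, 42])
    print(in_scope_accuracy(pred, gold), oos_recall(pred, gold))
```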
Results ::: Results with oos-train
Table TABREF14 presents results for all models across the four variations of the dataset. First, BERT is consistently the best approach for in-scope, followed by MLP. Second, out-of-scope query performance is much lower than in-scope across all methods. Training on less data (Small and Imbalanced) yields models that perform slightly worse on in-scope queries. The trend is mostly the opposite when evaluating out-of-scope, where recall increases under the Small and Imbalanced training conditions. Under these two conditions, the size of the in-scope training set was decreased, while the number of out-of-scope training queries remained constant. This indicates that out-of-scope performance can be increased by increasing the relative number of out-of-scope training queries. We do just that in the OOS+ setting—where the models were trained on the full training set as well as 150 additional out-of-scope queries—and see that performance on out-of-scope increases substantially, yet still remains low relative to in-scope accuracy.
Results ::: Results with oos-threshold
In-scope accuracy using the oos-threshold approach is largely comparable to oos-train. Out-of-scope recall tends to be much higher on Full, but several models suffer greatly on the limited datasets. BERT and MLP are the top oos-threshold performers, and for several models the threshold approach provided erratic results, particularly FastText and Rasa.
Results ::: Results with oos-binary
Table TABREF19 compares classifier performance using the oos-binary scheme. In-scope accuracy suffers for all models using the undersampling scheme when compared to training on the full dataset using the oos-train and oos-threshold approaches shown in Table TABREF14. However, out-of-scope recall improves compared to oos-train on Full but not OOS+. Augmenting the out-of-scope training set appears to help improve both in-scope and out-of-scope performance compared to undersampling, but out-of-scope performance remains weak.
Prior Work
In most other analyses and datasets, the idea of out-of-scope data is not considered, and instead the output classes are intended to cover all possible queries (e.g., TREC BIBREF7). Recent work by BIBREF8 considers a similar problem they call out-of-distribution detection. They use other datasets or classes excluded during training to form the out-of-distribution samples. This means that the out-of-scope samples are from a small set of coherent classes that differ substantially from the in-distribution samples. Similar experiments were conducted for evaluating unknown intent discovery models in BIBREF9. In contrast, our out-of-scope queries cover a broad range of phenomena and are similar in style and often similar in topic to in-scope queries, representing things a user might say given partial knowledge of the capabilities of a system.
Table TABREF20 compares our dataset with other short-query intent classification datasets. The Snips BIBREF0 dataset and the dataset presented in BIBREF10 are the most similar to the in-scope part of our work, with the same type of conversational agent requests. Like our work, both of these datasets were bootstrapped using crowdsourcing. However, the Snips dataset has only a small number of intents and an enormous number of examples of each. Snips does present a low-data variation, with 70 training queries per intent, in which performance drops slightly. The dataset presented in BIBREF10 has a large number of intent classes, yet also contains a wide range of samples per intent class (ranging from 24 to 5,981 queries per intent, and so is not constrained in all cases).
BIBREF11 created datasets with constrained training data, but with very few intents, presenting a very different type of challenge. We also include the TREC query classification datasets BIBREF7, which have a large set of labels, but they describe the desired response type (e.g., distance, city, abbreviation) rather than the action intents we consider. Moreover, TREC contains only questions and no commands. Crucially, none of the other datasets summarized in Table TABREF20 offer a feasible way to evaluate out-of-scope performance.
The Dialog State Tracking Challenge (DSTC) datasets are another related resource. Specifically, DSTC 1 BIBREF12, DSTC 2 BIBREF13, and DSTC 3 BIBREF14 contain “chatbot style" queries, but the datasets are focused on state tracking. Moreover, most if not all queries in these datasets are in-scope. In contrast, the focus of our analysis is on both in- and out-of-scope queries that challenge a virtual assistant to determine whether it can provide an acceptable response.
Conclusion
This paper analyzed intent classification and out-of-scope prediction methods with a new dataset consisting of carefully collected out-of-scope data. Our findings indicate that certain models like BERT perform better on in-scope classification, but all methods investigated struggle with identifying out-of-scope queries. Models that incorporate more out-of-scope training data tend to improve on out-of-scope performance, yet such data is expensive and difficult to generate. We believe our analysis and dataset will lead to developing better, more robust dialog systems.
All datasets introduced in this paper can be found at https://github.com/clinc/oos-eval.
Figure 1: Example exchanges between a user (blue, right side) and a task-driven dialog system for personal finance (grey, left side). The system correctly identi-
Table 1: Sample queries from our dataset. The out-of-scope queries are similar in style to the in-scope queries.
Table 2: Benchmark classifier results under each data condition using the oos-train (top half) and oos-threshold (bottom half) prediction methods.
Table 3: Results of oos-binary experiments on OOS+, where we compare performance of undersampling (under) and augmentation using sentences from Wikipedia (wiki aug). The wiki aug approach was too large for the DialogFlow and Rasa classifiers.
Query: Where does the data come from?
JSON Response: | {"answer": "crowsourcing platform", "evidence": ["We introduce a new crowdsourced dataset of 23,700 queries, including 22,500 in-scope queries covering 150 intents, which can be grouped into 10 general domains. The dataset also includes 1,200 out-of-scope queries. Table TABREF2 shows examples of the data."]} | science.qasper_abstractive_qa.46.2 | [
{
"content": "Examine the provided sections from a research document and address the accompanying query. The text is divided by newlines.\n\nDocument Content:\n\nAn Evaluation Dataset for Intent Classification and Out-of-Scope Prediction\n\nTask-oriented dialog systems need to know when a query falls outside their range of supported intents, but current text classification corpora only define label sets that cover every example. We introduce a new dataset that includes queries that are out-of-scope---i.e., queries that do not fall into any of the system's supported intents. This poses a new challenge because models cannot assume that every query at inference time belongs to a system-supported intent class. Our dataset also covers 150 intent classes over 10 domains, capturing the breadth that a production task-oriented agent must handle. We evaluate a range of benchmark classifiers on our dataset along with several different out-of-scope identification schemes. We find that while the classifiers perform well on in-scope intent classification, they struggle to identify out-of-scope queries. Our dataset and evaluation fill an important gap in the field, offering a way of more rigorously and realistically benchmarking text classification in task-driven dialog systems.\n\nIntroduction\nTask-oriented dialog systems have become ubiquitous, providing a means for billions of people to interact with computers using natural language. Moreover, the recent influx of platforms and tools such as Google's DialogFlow or Amazon's Lex for building and deploying such systems makes them even more accessible to various industries and demographics across the globe.\nTools for developing such systems start by guiding developers to collect training data for intent classification: the task of identifying which of a fixed set of actions the user wishes to take based on their query. Relatively few public datasets exist for evaluating performance on this task, and those that do exist typically cover only a very small number of intents (e.g. BIBREF0, which has 7 intents). Furthermore, such resources do not facilitate analysis of out-of-scope queries: queries that users may reasonably make, but fall outside of the scope of the system-supported intents.\nFigure FIGREF1 shows example query-response exchanges between a user and a task-driven dialog system for personal finance. In the first user-system exchange, the system correctly identifies the user's intent as an in-scope balance query. In the second and third exchanges, the user queries with out-of-scope inputs. In the second exchange, the system incorrectly identifies the query as in-scope and yields an unrelated response. In the third exchange, the system correctly classifies the user's query as out-of-scope, and yields a fallback response.\nOut-of-scope queries are inevitable for a task-oriented dialog system, as most users will not be fully cognizant of the system's capabilities, which are limited by the fixed number of intent classes. Correctly identifying out-of-scope cases is thus crucial in deployed systems—both to avoid performing the wrong action and also to identify potential future directions for development. However, this problem has seen little attention in analyses and evaluations of intent classification systems.\nThis paper fills this gap by analyzing intent classification performance with a focus on out-of-scope handling. 
To do so, we constructed a new dataset with 23,700 queries that are short and unstructured, in the same style made by real users of task-oriented systems. The queries cover 150 intents, plus out-of-scope queries that do not fall within any of the 150 in-scope intents.\nWe evaluate a range of benchmark classifiers and out-of-scope handling methods on our dataset. BERT BIBREF1 yields the best in-scope accuracy, scoring 96% or above even when we limit the training data or introduce class imbalance. However, all methods struggle with identifying out-of-scope queries. Even when a large number of out-of-scope examples are provided for training, there is a major performance gap, with the best system scoring 66% out-of-scope recall. Our results show that while current models work on known classes, they have difficulty on out-of-scope queries, particularly when data is not plentiful. This dataset will enable future work to address this key gap in the research and development of dialog systems. All data introduced in this paper can be found at https://github.com/clinc/oos-eval.\n\nDataset\nWe introduce a new crowdsourced dataset of 23,700 queries, including 22,500 in-scope queries covering 150 intents, which can be grouped into 10 general domains. The dataset also includes 1,200 out-of-scope queries. Table TABREF2 shows examples of the data.\n\nDataset ::: In-Scope Data Collection\nWe defined the intents with guidance from queries collected using a scoping crowdsourcing task, which prompted crowd workers to provide questions and commands related to topic domains in the manner they would interact with an artificially intelligent assistant. We manually grouped data generated by scoping tasks into intents. To collect additional data for each intent, we used the rephrase and scenario crowdsourcing tasks proposed by BIBREF2. For each intent, there are 100 training queries, which is representative of what a team with a limited budget could gather while developing a task-driven dialog system. Along with the 100 training queries, there are 20 validation and 30 testing queries per intent.\n\nDataset ::: Out-of-Scope Data Collection\nOut-of-scope queries were collected in two ways. First, using worker mistakes: queries written for one of the 150 intents that did not actually match any of the intents. Second, using scoping and scenario tasks with prompts based on topic areas found on Quora, Wikipedia, and elsewhere. To help ensure the richness of this additional out-of-scope data, each of these task prompts contributed to at most four queries. Since we use the same crowdsourcing method for collecting out-of-scope data, these queries are similar in style to their in-scope counterparts.\nThe out-of-scope data is difficult to collect, requiring expert knowledge of the in-scope intents to painstakingly ensure that no out-of-scope query sample is mistakenly labeled as in-scope (and vice versa). Indeed, roughly only 69% of queries collected with prompts targeting out-of-scope yielded out-of-scope queries. Of the 1,200 out-of-scope queries collected, 100 are used for validation and 100 are used for training, leaving 1,000 for testing.\n\nDataset ::: Data Preprocessing and Partitioning\nFor all queries collected, all tokens were down-cased, and all end-of-sentence punctuation was removed. Additionally, all duplicate queries were removed and replaced.\nIn an effort to reduce bias in the in-scope data, we placed all queries from a given crowd worker in a single split (train, validation, or test). 
This avoids the potential issue of similar queries from a crowd worker ending up in both the train and test sets, for instance, which would make the train and test distributions unrealistically similar. We note that this is a recommendation from concurrent work by BIBREF3. We also used this procedure for the out-of-scope set, except that we split the data into train/validation/test based on task prompt instead of worker.\n\nDataset ::: Dataset Variants\nIn addition to the full dataset, we consider three variations. First, Small, in which there are only 50 training queries per each in-scope intent, rather than 100. Second, Imbalanced, in which intents have either 25, 50, 75, or 100 training queries. Third, OOS+, in which there are 250 out-of-scope training examples, rather than 100. These are intended to represent production scenarios where data may be in limited or uneven supply.\n\nBenchmark Evaluation\nTo quantify the challenges that our new dataset presents, we evaluated the performance of a range of classifier models and out-of-scope prediction schemes.\n\nBenchmark Evaluation ::: Out-of-Scope Prediction\nWe use three baseline approaches for the task of predicting whether a query is out-of-scope: (1) oos-train, where we train an additional (i.e. 151st) intent on out-of-scope training data; (2) oos-threshold, where we use a threshold on the classifier's probability estimate; and (3) oos-binary, a two-stage process where we first classify a query as in- or out-of-scope, then classify it into one of the 150 intents if classified as in-scope.\nTo reduce the severity of the class imbalance between in-scope versus out-of-scope query samples (i.e., 15,000 versus 250 queries for OOS+), we investigate two strategies when using oos-binary: one where we undersample the in-scope data and train using 1,000 in-scope queries sampled evenly across all intents (versus 250 out-of-scope), and another where we augment the 250 OOS+ out-of-scope training queries with 14,750 sentences sampled from Wikipedia.\nFrom a development point of view, the oos-train and oos-binary methods both require careful curation of an out-of-scope training set, and this set can be tailored to individual systems. The oos-threshold method is a more general decision rule that can be applied to any model that produces a probability. In our evaluation, the out-of-scope threshold was chosen to be the value which yielded the highest validation score across all intents, treating out-of-scope as its own intent.\n\nBenchmark Evaluation ::: Metrics\nWe consider two performance metrics for all scenarios: (1) accuracy over the 150 intents, and (2) recall on out-of-scope queries. We use recall to evaluate out-of-scope since we are more interested in cases where such queries are predicted as in-scope, as this would mean a system gives the user a response that is completely wrong. Precision errors are less problematic as the fallback response will prompt the user to try again, or inform the user of the system's scope of supported domains.\n\nResults ::: Results with oos-train\nTable TABREF14 presents results for all models across the four variations of the dataset. First, BERT is consistently the best approach for in-scope, followed by MLP. Second, out-of-scope query performance is much lower than in-scope across all methods. Training on less data (Small and Imbalanced) yields models that perform slightly worse on in-scope queries. 
The trend is mostly the opposite when evaluating out-of-scope, where recall increases under the Small and Imbalanced training conditions. Under these two conditions, the size of the in-scope training set was decreased, while the number of out-of-scope training queries remained constant. This indicates that out-of-scope performance can be increased by increasing the relative number of out-of-scope training queries. We do just that in the OOS+ setting—where the models were trained on the full training set as well as 150 additional out-of-scope queries—and see that performance on out-of-scope increases substantially, yet still remains low relative to in-scope accuracy.\n\nResults ::: Results with oos-threshold\nIn-scope accuracy using the oos-threshold approach is largely comparable to oos-train. Out-of-scope recall tends to be much higher on Full, but several models suffer greatly on the limited datasets. BERT and MLP are the top oos-threshold performers, and for several models the threshold approach provided erratic results, particularly FastText and Rasa.\n\nResults ::: Results with oos-binary\nTable TABREF19 compares classifier performance using the oos-binary scheme. In-scope accuracy suffers for all models using the undersampling scheme when compared to training on the full dataset using the oos-train and oos-threshold approaches shown in Table TABREF14. However, out-of-scope recall improves compared to oos-train on Full but not OOS+. Augmenting the out-of-scope training set appears to help improve both in-scope and out-of-scope performance compared to undersampling, but out-of-scope performance remains weak.\n\nPrior Work\nIn most other analyses and datasets, the idea of out-of-scope data is not considered, and instead the output classes are intended to cover all possible queries (e.g., TREC BIBREF7). Recent work by BIBREF8 considers a similar problem they call out-of-distribution detection. They use other datasets or classes excluded during training to form the out-of-distribution samples. This means that the out-of-scope samples are from a small set of coherent classes that differ substantially from the in-distribution samples. Similar experiments were conducted for evaluating unknown intent discovery models in BIBREF9. In contrast, our out-of-scope queries cover a broad range of phenomena and are similar in style and often similar in topic to in-scope queries, representing things a user might say given partial knowledge of the capabilities of a system.\nTable TABREF20 compares our dataset with other short-query intent classification datasets. The Snips BIBREF0 dataset and the dataset presented in BIBREF10 are the most similar to the in-scope part of our work, with the same type of conversational agent requests. Like our work, both of these datasets were bootstrapped using crowdsourcing. However, the Snips dataset has only a small number of intents and an enormous number of examples of each. Snips does present a low-data variation, with 70 training queries per intent, in which performance drops slightly. The dataset presented in BIBREF10 has a large number of intent classes, yet also contains a wide range of samples per intent class (ranging from 24 to 5,981 queries per intent, and so is not constrained in all cases).\nBIBREF11 created datasets with constrained training data, but with very few intents, presenting a very different type of challenge. 
We also include the TREC query classification datasets BIBREF7, which have a large set of labels, but they describe the desired response type (e.g., distance, city, abbreviation) rather than the action intents we consider. Moreover, TREC contains only questions and no commands. Crucially, none of the other datasets summarized in Table TABREF20 offer a feasible way to evaluate out-of-scope performance.\nThe Dialog State Tracking Challenge (DSTC) datasets are another related resource. Specifically, DSTC 1 BIBREF12, DSTC 2 BIBREF13, and DSTC 3 BIBREF14 contain “chatbot style\" queries, but the datasets are focused on state tracking. Moreover, most if not all queries in these datasets are in-scope. In contrast, the focus of our analysis is on both in- and out-of-scope queries that challenge a virtual assistant to determine whether it can provide an acceptable response.\n\nConclusion\nThis paper analyzed intent classification and out-of-scope prediction methods with a new dataset consisting of carefully collected out-of-scope data. Our findings indicate that certain models like BERT perform better on in-scope classification, but all methods investigated struggle with identifying out-of-scope queries. Models that incorporate more out-of-scope training data tend to improve on out-of-scope performance, yet such data is expensive and difficult to generate. We believe our analysis and dataset will lead to developing better, more robust dialog systems.\nAll datasets introduced in this paper can be found at https://github.com/clinc/oos-eval.\n\nFigure 1: Example exchanges between a user (blue, right side) and a task-driven dialog system for personal finance (grey, left side). The system correctly identi-\n\nTable 1: Sample queries from our dataset. The out-of-scope queries are similar in style to the in-scope queries.\n\nTable 2: Benchmark classifier results under each data condition using the oos-train (top half) and oos-threshold (bottom half) prediction methods.\n\nTable 3: Results of oos-binary experiments on OOS+, where we compare performance of undersampling (under) and augmentation using sentences from Wikipedia (wiki aug). The wiki aug approach was too large for the DialogFlow and Rasa classifiers.\n\nQuery: Where does the data come from?\n\nJSON Response:",
"role": "user"
},
{
"content": "{\"answer\": \"crowsourcing platform\", \"evidence\": [\"We introduce a new crowdsourced dataset of 23,700 queries, including 22,500 in-scope queries covering 150 intents, which can be grouped into 10 general domains. The dataset also includes 1,200 out-of-scope queries. Table TABREF2 shows examples of the data.\"]}",
"role": "assistant"
}
]
|
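The record above describes an oos-threshold scheme that rejects a query as out-of-scope when the classifier's highest probability estimate falls below a cutoff tuned on validation data. The Python sketch below illustrates only that decision rule; it is not the benchmark code evaluated in the record, and the function name, the fixed 0.5 cutoff, and the toy probability vectors are assumptions introduced here for illustration.

```python
import numpy as np

def oos_threshold_predict(probs, threshold, oos_label="oos"):
    """Return the argmax intent index, or `oos_label` when the top
    softmax probability falls below `threshold`."""
    top = int(np.argmax(probs))
    return oos_label if probs[top] < threshold else top

# Toy usage with three in-scope intents.
confident = np.array([0.90, 0.06, 0.04])
uncertain = np.array([0.40, 0.35, 0.25])
print(oos_threshold_predict(confident, threshold=0.5))  # -> 0 (in-scope intent index)
print(oos_threshold_predict(uncertain, threshold=0.5))  # -> 'oos'
```

In the record's setup the cutoff is instead selected as the value that maximizes validation performance with out-of-scope treated as its own intent; the constant used here is only a placeholder.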
science.evidence_inference | Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:
- Treatment: The specific medical procedure or drug being evaluated.
- Control: The standard or placebo used for comparison. If absent, use `null`.
- Result: The health outcome being assessed.
- Impact: The observed effect of the treatment on the result, categorized as: ("increased", "decreased", "no change").
- Justification: The supporting details for the observed impact.
Format the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].
Abstract:
ABSTRACT.BACKGROUND: Better techniques are needed to help consumers make lower calorie food choices. This pilot study examined the effect of menu labeling with caloric information and exercise equivalents (EE) on food selection. Participants, 62 females, ages 18-34, recruited for this study, ordered a fast food meal with menus that contained the names of the food (Lunch 1 (L1), control meal). One week later (Lunch 2 (L2), experiment meal), participants ordered a meal from one of three menus with the same items as the previous week: no calorie information, calorie information only, or calorie information and EE.
ABSTRACT.RESULTS: There were no absolute differences between groups in calories ordered from L1 to L2. However, it is noteworthy that calorie only and calorie plus exercise equivalents ordered about 16% (206 kcal) and 14% (162 kcal) fewer calories from Lunch 1 to Lunch 2, respectively; whereas, the no information group ordered only 2% (25 kcal) fewer.
ABSTRACT.CONCLUSIONS: Menu labeling alone may be insufficient to reduce calories; however, further research is needed in finding the most effective ways of presenting the menu labels for general public.
Extracted Data: | [["Lunch 2 (L2), experiment meal", "Lunch 1 (L1), control meal", "amount of calories", "no significant difference", "There were no absolute differences between groups in calories ordered from L1 to L2."]] | science.evidence_inference.228 | [
{
"content": "Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:\n\n- Treatment: The specific medical procedure or drug being evaluated.\n- Control: The standard or placebo used for comparison. If absent, use `null`.\n- Result: The health outcome being assessed.\n- Impact: The observed effect of the treatment on the result, categorized as: (\"increased\", \"decreased\", \"no change\").\n- Justification: The supporting details for the observed impact.\n\nFormat the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].\n\nAbstract:\nABSTRACT.BACKGROUND: Better techniques are needed to help consumers make lower calorie food choices. This pilot study examined the effect of menu labeling with caloric information and exercise equivalents (EE) on food selection. Participants, 62 females, ages 18-34, recruited for this study, ordered a fast food meal with menus that contained the names of the food (Lunch 1 (L1), control meal). One week later (Lunch 2 (L2), experiment meal), participants ordered a meal from one of three menus with the same items as the previous week: no calorie information, calorie information only, or calorie information and EE.\n\nABSTRACT.RESULTS: There were no absolute differences between groups in calories ordered from L1 to L2. However, it is noteworthy that calorie only and calorie plus exercise equivalents ordered about 16% (206 kcal) and 14% (162 kcal) fewer calories from Lunch 1 to Lunch 2, respectively; whereas, the no information group ordered only 2% (25 kcal) fewer.\n\nABSTRACT.CONCLUSIONS: Menu labeling alone may be insufficient to reduce calories; however, further research is needed in finding the most effective ways of presenting the menu labels for general public.\n\nExtracted Data:",
"role": "user"
},
{
"content": "[[\"Lunch 2 (L2), experiment meal\", \"Lunch 1 (L1), control meal\", \"amount of calories\", \"no significant difference\", \"There were no absolute differences between groups in calories ordered from L1 to L2.\"]]",
"role": "assistant"
}
]
|
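The evidence-inference record above defines a five-field JSON array format, [treatment, control, result, impact, justification], for each extracted finding. The sketch below shows one way such output might be parsed and sanity-checked; it is illustrative only — the helper function and its error handling are assumptions, and the accepted label set admits both the three categories named in the prompt and the "no significant difference" label used in the gold extraction above.

```python
import json

# Impact labels accepted here: the prompt lists three categories, while the
# gold extraction in the record uses "no significant difference", so both pass.
ALLOWED_IMPACTS = {"increased", "decreased", "no change", "no significant difference"}

def validate_findings(raw: str) -> list:
    """Parse a findings string and check that each entry is a 5-field
    [treatment, control, result, impact, justification] list with a known impact."""
    findings = json.loads(raw)
    for entry in findings:
        if len(entry) != 5:
            raise ValueError(f"expected 5 fields, got {len(entry)}: {entry}")
        if entry[3] not in ALLOWED_IMPACTS:
            raise ValueError(f"unexpected impact label: {entry[3]!r}")
    return findings

example = ('[["Lunch 2 (L2), experiment meal", "Lunch 1 (L1), control meal", '
           '"amount of calories", "no significant difference", '
           '"There were no absolute differences between groups in calories '
           'ordered from L1 to L2."]]')
print(validate_findings(example)[0][3])  # -> no significant difference
```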
science.mup_single_document_summarization | Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.
Title of the Paper:
Event Discovery for History Representation in Reinforcement Learning
Main Content:
1 INTRODUCTION . Deep Reinforcement Learning ( RL ) algorithms have been successfully applied to a range of challenging domains , from computer games ( Mnih et al. , 2013 ) to robotic control ( OpenAI et al. , 2018 ) . These approaches use a Neural Network ( NN ) both to represent the current observation of the environment and to learn the agent ’ s optimal policy , used to choose the next action . For instance , the state observation can be the current game frame or an image of the robot camera , and a Convolutional Neural Network ( CNN ) may be used to obtain a compact feature vector from it . However , often RL environments are only partially observable and having a significant representation of the past may be crucial for the agent ( Kapturowski et al. , 2019 ) . For instance , in the Atari 2600 Pong game , the state can be represented as two consecutive frames . In this way , the agent can determine both the position of the ball and the direction of its movement . More complex partially observable domains require a longer state history input to the agent . For instance , when navigating inside a 1-st person-view 3D labyrinth , the agent obtains little information from the current scene observation and needs several previous frames to localise itself . A common solution to represent the observation history is based on the use of Recurrent Neural Networks ( RNNs ) , where the RNN hidden-layer activation vector is input to the agent , possibly together with the current state observation ( Mnih et al. , 2016 ) . However , RL is characterized by highly nonstationary data , which makes training unstable ( Schulman et al. , 2016 ) and this instability is exacerbated when a recurrent network needs to be simultaneously trained to extract the agent ’ s input representation . We show in Sec . 5.2 that in some common cases an RNN-based history representation struggles to improve the agent ’ s results over an input representation composed of only the instantaneous observations . In this paper we propose a different direction , in which the agent ’ s observation history is represented using a set of discrete events , which describe important changes in the state of the world ( Orr et al. , 2018 ) . Environment-specific events are automatically discovered during training by clustering past observations and are then used as landmarks in our history representation . Intuitively , this is inspired by the common human behaviour : when making decisions , humans do not keep detailed visual information of the previous steps . For instance , while navigating through the halls of a building , it is sufficient to recall a few significant landmarks seen during the walk , e.g . specific doors or furniture . Following this idea , in this paper we propose an Event Discovery History Representation ( EDHR ) approach , composed of 3 iterative stages : experience collection , policy optimisation and event discovery . We discover events by clustering past state observations . In more detail , we maximize the Mutual Information ( MI ) between the latent representations of temporally-close frames to cluster the frame semantics . This is a form of self-supervision , in which no additional annotation is needed for the event discovery : the higher the predictability of one frame with respect to the other , the larger the semantic information shared by the two frames ( van den Oord et al. , 2018 ; Hjelm et al. , 2019 ; Anand et al. , 2019 ; Ji et al. , 2019 ) . 
Once clusters have been formed , the probability distribution of a given frame Ft for the set of current clusters is used as the semantic representation of the state observation at time t and is recorded in a longer history . Finally , the history is input to the agent together with an instantaneous observation representation , obtained , following ( Mnih et al. , 2013 ) , as a stack of the last 4 frames . This information is now used for policy optimisation . Note that our proposed history representation is independent of the specific RL approach . However , in all our experiments we use the PPO algorithm ( Schulman et al. , 2017 ) for the policy and the value function optimisation . The 3 stages are iterated during training , and thus past clusters can be modified and adapted to address new observations , while the agent is progressing through the environment . We summarize the contribution of this paper below . First , we use a modified version of the Invariant Information Clustering ( IIC ) algorithm ( Ji et al. , 2019 ) to discover significant events in the agent ’ s past observations . Second , we propose to replace common RNN-based history representations with a time-dependent probability distribution of the observations for the events so far discovered . We evaluate our history representation ( EDHR ) using the PPO algorithm on several environments of the Atari-57 benchmark ( Bellemare et al. , 2012 ) and on the 3D environment Obstacle Tower ( Juliani et al. , 2019 ) , showing that it provides a significant boost with respect to the most common input representation methods . Specifically , we show that EDHR outperforms plain PPO in cases , where history can increase observability , and it outperforms RNN-based methods in several common cases , while simultaneously requiring 2× less wall-clock time for training . The source code of the method and all the experiments is publicly available at the anonymous link : https : //github.com/iclr2020anon/EDHR and will be published after acceptance . 2 RELATED WORK . Partially Observable Markov Decision Processes . We consider a setting in which an agent interacts with an environment to maximize a cumulative reward signal . The interaction process is defined as a Partially Observable Markov Decision Process ( POMDP ) < S , A , T , R , Ω , O , γ > ( Monahan , 1982 ; Kaelbling et al. , 1998 ) , where S is a finite state space , A is a finite action space , R ( s , a ) is the reward function , T ( ·|s , a ) is the transition function , Ω is a set of possible observations , O is a function which maps states to probability distributions over observations and γ ∈ [ 0 , 1 ) is a discount factor used to compute the cumulative discounted reward . Within this framework , at each step t , the environment is in some unobserved state s ∈ S. When the agent takes an action a , the environment transits to the new state s′ with probability T ( s′|s , a ) , providing the agent with a new observation o′ ∈ Ω with probability O ( o′|s′ ) and reward r ∼ R ( s , a ) . POMDPs are an active research area and several approaches have been recently proposed to address partial observability . DQN ( Mnih et al. , 2013 ) is one of the earliest deep RL models directly trained using the high-dimensional observation ot as input ( i.e. , the raw pixels of the current frame ) . To deal with partial observability , the input of the model is composed of a stack of the 4 last grayscale frames , processed as 4 channels using a 2D CNN . 
We also use this observation representation which we call instantaneous to highlight the difference from the past state history . Hausknecht & Stone ( 2015 ) replace the fully-connected layer in DQN with an LSTM ( Hochreiter & Schmidhuber , 1997 ) layer . At each step , the model receives only one frame of the environment together with the LSTM hidden-layer value which represents the past . The idea was extended in ( Igl et al. , 2018 ) by inferring latent belief states . To simulate reduced observability and demonstrate the robustness of the method , Hausknecht & Stone ( 2015 ) introduce the flickering Atari games , obscuring the observation with a certain probability at each step . These methods focus on shortterm partial observability , such as noisy observations , and approximate the state with an implicitly inferred continuous history . In contrast , our method explicitly represents each step of the past using a dictionary of discovered events , and concatenates a sequence of steps in an event-based history . Our final goal is to augment the current state representation with important past changes of the environment . Recurrent layers ( e.g . LSTM or GRU ( Cho et al. , 2014 ) ) , are a common choice for modern RL architectures . The A3C algorithm ( Mnih et al. , 2016 ) showed a high mean performance with an LSTM layer . R2D2 ( Kapturowski et al. , 2019 ) is specifically focused on processing long-history sequences ( 80 steps with additional 40 burn-in steps ) using an LSTM . However , despite its widespread , RNNs have disadvantages within an RL framework because they increase the training instability ( Sec . 1 ) . In this paper we propose an alternative history representation and we empirically show in Sec . 5.2 that our proposal outperforms RNN-based solutions in several common cases . Unsupervised and Self-supervised Learning . Unsupervised and self-supervised learning aim to extract useful representations from data ( e.g . images , videos ) without annotation and reward signal . While common approaches are covered in recent surveys ( Jing & Tian , 2019 ; Kolesnikov et al. , 2019 ) , below we analyse the most relevant methods to this work . A first class of unsupervised feature-extraction methods is based on a generative encoding-decoding scheme . The methods typically minimize the reconstruction error in the pixel space , thus the representation is encouraged to capture pixel-level details , which may differ from semantics . Ha & Schmidhuber ( 2018 ) combine Variational Autoencoders ( Kingma & Welling , 2014 ) ( VAEs ) with RNNs in their World Models . A VAE is trained to reconstruct the current frame and simultaneously produce a latent , compressed frame representation which is input to an RNN predicting the next latent feature vector . The input of the RL agent consists of the latent vector of the current observation and the hidden state of the RNN . An alternative to a reconstruction loss in the input space is clustering in the feature space . DeepCluster ( Caron et al. , 2018 ) iteratively alternates between k-means clustering of the CNN features and updating the network ’ s parameters with the pseudo-labels obtained by means of clustering . Another important self-supervised paradigm is based on maximizing the Mutual Information ( MI ) between latent representations . Typically these methods rely on collecting pairs of positive and negative samples . Two positive samples should share the same semantics , while negatives should be perceptually different . 
A discriminator is trained to distinguish between positive and negative pairs , so computing a lower-bound on the MI of the learned latent representations . In CPC ( van den Oord et al. , 2018 ) and ST-DIM ( Anand et al. , 2019 ) positives are temporally close frames , while in IIC ( Ji et al. , 2019 ) simple data-augmentation techniques are applied on still images . In DIM ( Hjelm et al. , 2019 ) this idea is extended including locality , and MI is computed also between global and local representations of the same image . DIM was applied to several games of the Atari 2600 suite ( Bellemare et al. , 2012 ) in ST-DIM , however , the reported evaluation is based only the disentanglement degree of the obtained representations and does not consider the possible benefit for an RL task . Our event discovery stage ( Sec . 3 ) is based on IIC ( Ji et al. , 2019 ) , but we use as positives temporallyclose frames extracted from an agent trajectory instead of data-augmentation . Note that the main difference between our proposal and IIC is that our goal is not clustering , but clusters are used as a set of discrete events to represent the agent ’ s history , in contrast to RNN and other continuous history representations . | This paper proposes a new way to represent past history as input to an RL agent, that consists in clustering states and providing the (soft) cluster assignment of past states in the input. The clustering algorithm comes from previous work based on mutual information, where close (in time) observations are assumed to be semantically similar. The proposed scheme, named EDHR (Event Discovery History Representation), is shown to perform better than PPO and an RNN variant of PPO on (most of) 7 representative Atari games, and better than PPO on the Obstacle Tower benchmark. | science.mup_single_document_summarization.17212 | [
{
"content": "Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.\n\nTitle of the Paper:\nEvent Discovery for History Representation in Reinforcement Learning\n\nMain Content:\n1 INTRODUCTION . Deep Reinforcement Learning ( RL ) algorithms have been successfully applied to a range of challenging domains , from computer games ( Mnih et al. , 2013 ) to robotic control ( OpenAI et al. , 2018 ) . These approaches use a Neural Network ( NN ) both to represent the current observation of the environment and to learn the agent ’ s optimal policy , used to choose the next action . For instance , the state observation can be the current game frame or an image of the robot camera , and a Convolutional Neural Network ( CNN ) may be used to obtain a compact feature vector from it . However , often RL environments are only partially observable and having a significant representation of the past may be crucial for the agent ( Kapturowski et al. , 2019 ) . For instance , in the Atari 2600 Pong game , the state can be represented as two consecutive frames . In this way , the agent can determine both the position of the ball and the direction of its movement . More complex partially observable domains require a longer state history input to the agent . For instance , when navigating inside a 1-st person-view 3D labyrinth , the agent obtains little information from the current scene observation and needs several previous frames to localise itself . A common solution to represent the observation history is based on the use of Recurrent Neural Networks ( RNNs ) , where the RNN hidden-layer activation vector is input to the agent , possibly together with the current state observation ( Mnih et al. , 2016 ) . However , RL is characterized by highly nonstationary data , which makes training unstable ( Schulman et al. , 2016 ) and this instability is exacerbated when a recurrent network needs to be simultaneously trained to extract the agent ’ s input representation . We show in Sec . 5.2 that in some common cases an RNN-based history representation struggles to improve the agent ’ s results over an input representation composed of only the instantaneous observations . In this paper we propose a different direction , in which the agent ’ s observation history is represented using a set of discrete events , which describe important changes in the state of the world ( Orr et al. , 2018 ) . Environment-specific events are automatically discovered during training by clustering past observations and are then used as landmarks in our history representation . Intuitively , this is inspired by the common human behaviour : when making decisions , humans do not keep detailed visual information of the previous steps . For instance , while navigating through the halls of a building , it is sufficient to recall a few significant landmarks seen during the walk , e.g . specific doors or furniture . Following this idea , in this paper we propose an Event Discovery History Representation ( EDHR ) approach , composed of 3 iterative stages : experience collection , policy optimisation and event discovery . We discover events by clustering past state observations . In more detail , we maximize the Mutual Information ( MI ) between the latent representations of temporally-close frames to cluster the frame semantics . 
This is a form of self-supervision , in which no additional annotation is needed for the event discovery : the higher the predictability of one frame with respect to the other , the larger the semantic information shared by the two frames ( van den Oord et al. , 2018 ; Hjelm et al. , 2019 ; Anand et al. , 2019 ; Ji et al. , 2019 ) . Once clusters have been formed , the probability distribution of a given frame Ft for the set of current clusters is used as the semantic representation of the state observation at time t and is recorded in a longer history . Finally , the history is input to the agent together with an instantaneous observation representation , obtained , following ( Mnih et al. , 2013 ) , as a stack of the last 4 frames . This information is now used for policy optimisation . Note that our proposed history representation is independent of the specific RL approach . However , in all our experiments we use the PPO algorithm ( Schulman et al. , 2017 ) for the policy and the value function optimisation . The 3 stages are iterated during training , and thus past clusters can be modified and adapted to address new observations , while the agent is progressing through the environment . We summarize the contribution of this paper below . First , we use a modified version of the Invariant Information Clustering ( IIC ) algorithm ( Ji et al. , 2019 ) to discover significant events in the agent ’ s past observations . Second , we propose to replace common RNN-based history representations with a time-dependent probability distribution of the observations for the events so far discovered . We evaluate our history representation ( EDHR ) using the PPO algorithm on several environments of the Atari-57 benchmark ( Bellemare et al. , 2012 ) and on the 3D environment Obstacle Tower ( Juliani et al. , 2019 ) , showing that it provides a significant boost with respect to the most common input representation methods . Specifically , we show that EDHR outperforms plain PPO in cases , where history can increase observability , and it outperforms RNN-based methods in several common cases , while simultaneously requiring 2× less wall-clock time for training . The source code of the method and all the experiments is publicly available at the anonymous link : https : //github.com/iclr2020anon/EDHR and will be published after acceptance . 2 RELATED WORK . Partially Observable Markov Decision Processes . We consider a setting in which an agent interacts with an environment to maximize a cumulative reward signal . The interaction process is defined as a Partially Observable Markov Decision Process ( POMDP ) < S , A , T , R , Ω , O , γ > ( Monahan , 1982 ; Kaelbling et al. , 1998 ) , where S is a finite state space , A is a finite action space , R ( s , a ) is the reward function , T ( ·|s , a ) is the transition function , Ω is a set of possible observations , O is a function which maps states to probability distributions over observations and γ ∈ [ 0 , 1 ) is a discount factor used to compute the cumulative discounted reward . Within this framework , at each step t , the environment is in some unobserved state s ∈ S. When the agent takes an action a , the environment transits to the new state s′ with probability T ( s′|s , a ) , providing the agent with a new observation o′ ∈ Ω with probability O ( o′|s′ ) and reward r ∼ R ( s , a ) . POMDPs are an active research area and several approaches have been recently proposed to address partial observability . DQN ( Mnih et al. 
, 2013 ) is one of the earliest deep RL models directly trained using the high-dimensional observation ot as input ( i.e. , the raw pixels of the current frame ) . To deal with partial observability , the input of the model is composed of a stack of the 4 last grayscale frames , processed as 4 channels using a 2D CNN . We also use this observation representation which we call instantaneous to highlight the difference from the past state history . Hausknecht & Stone ( 2015 ) replace the fully-connected layer in DQN with an LSTM ( Hochreiter & Schmidhuber , 1997 ) layer . At each step , the model receives only one frame of the environment together with the LSTM hidden-layer value which represents the past . The idea was extended in ( Igl et al. , 2018 ) by inferring latent belief states . To simulate reduced observability and demonstrate the robustness of the method , Hausknecht & Stone ( 2015 ) introduce the flickering Atari games , obscuring the observation with a certain probability at each step . These methods focus on shortterm partial observability , such as noisy observations , and approximate the state with an implicitly inferred continuous history . In contrast , our method explicitly represents each step of the past using a dictionary of discovered events , and concatenates a sequence of steps in an event-based history . Our final goal is to augment the current state representation with important past changes of the environment . Recurrent layers ( e.g . LSTM or GRU ( Cho et al. , 2014 ) ) , are a common choice for modern RL architectures . The A3C algorithm ( Mnih et al. , 2016 ) showed a high mean performance with an LSTM layer . R2D2 ( Kapturowski et al. , 2019 ) is specifically focused on processing long-history sequences ( 80 steps with additional 40 burn-in steps ) using an LSTM . However , despite its widespread , RNNs have disadvantages within an RL framework because they increase the training instability ( Sec . 1 ) . In this paper we propose an alternative history representation and we empirically show in Sec . 5.2 that our proposal outperforms RNN-based solutions in several common cases . Unsupervised and Self-supervised Learning . Unsupervised and self-supervised learning aim to extract useful representations from data ( e.g . images , videos ) without annotation and reward signal . While common approaches are covered in recent surveys ( Jing & Tian , 2019 ; Kolesnikov et al. , 2019 ) , below we analyse the most relevant methods to this work . A first class of unsupervised feature-extraction methods is based on a generative encoding-decoding scheme . The methods typically minimize the reconstruction error in the pixel space , thus the representation is encouraged to capture pixel-level details , which may differ from semantics . Ha & Schmidhuber ( 2018 ) combine Variational Autoencoders ( Kingma & Welling , 2014 ) ( VAEs ) with RNNs in their World Models . A VAE is trained to reconstruct the current frame and simultaneously produce a latent , compressed frame representation which is input to an RNN predicting the next latent feature vector . The input of the RL agent consists of the latent vector of the current observation and the hidden state of the RNN . An alternative to a reconstruction loss in the input space is clustering in the feature space . DeepCluster ( Caron et al. , 2018 ) iteratively alternates between k-means clustering of the CNN features and updating the network ’ s parameters with the pseudo-labels obtained by means of clustering . 
Another important self-supervised paradigm is based on maximizing the Mutual Information ( MI ) between latent representations . Typically these methods rely on collecting pairs of positive and negative samples . Two positive samples should share the same semantics , while negatives should be perceptually different . A discriminator is trained to distinguish between positive and negative pairs , so computing a lower-bound on the MI of the learned latent representations . In CPC ( van den Oord et al. , 2018 ) and ST-DIM ( Anand et al. , 2019 ) positives are temporally close frames , while in IIC ( Ji et al. , 2019 ) simple data-augmentation techniques are applied on still images . In DIM ( Hjelm et al. , 2019 ) this idea is extended including locality , and MI is computed also between global and local representations of the same image . DIM was applied to several games of the Atari 2600 suite ( Bellemare et al. , 2012 ) in ST-DIM , however , the reported evaluation is based only the disentanglement degree of the obtained representations and does not consider the possible benefit for an RL task . Our event discovery stage ( Sec . 3 ) is based on IIC ( Ji et al. , 2019 ) , but we use as positives temporallyclose frames extracted from an agent trajectory instead of data-augmentation . Note that the main difference between our proposal and IIC is that our goal is not clustering , but clusters are used as a set of discrete events to represent the agent ’ s history , in contrast to RNN and other continuous history representations .",
"role": "user"
},
{
"content": "This paper proposes a new way to represent past history as input to an RL agent, that consists in clustering states and providing the (soft) cluster assignment of past states in the input. The clustering algorithm comes from previous work based on mutual information, where close (in time) observations are assumed to be semantically similar. The proposed scheme, named EDHR (Event Discovery History Representation), is shown to perform better than PPO and an RNN variant of PPO on (most of) 7 representative Atari games, and better than PPO on the Obstacle Tower benchmark.",
"role": "assistant"
}
]
|
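The summarization record above describes EDHR, in which each past frame is represented by its soft assignment over discovered event clusters, and a window of these per-frame distributions is fed to the agent together with the usual stack of the last 4 frames. The snippet below is a minimal sketch of that history construction under assumed shapes, with zero-padding at the start of an episode; it is not the authors' implementation (the record points to their repository for that), and the function name and toy numbers are invented here.

```python
import numpy as np

def event_history(cluster_probs, history_len):
    """Stack the last `history_len` per-frame cluster distributions into one
    flat history vector, left-padding with zeros early in an episode."""
    num_events = cluster_probs.shape[1]
    recent = cluster_probs[-history_len:]
    pad = np.zeros((history_len - len(recent), num_events))
    return np.concatenate([pad, recent], axis=0).reshape(-1)

# Toy usage: 3 observed frames, 4 discovered events, history of 5 steps.
rng = np.random.default_rng(0)
logits = rng.normal(size=(3, 4))
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)  # row-wise softmax
print(event_history(probs, history_len=5).shape)  # -> (20,)
```

In the method itself, the event clusters are rediscovered periodically by maximizing mutual information between temporally close frames, so the set of events can change as training progresses.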
science.mup_single_document_summarization | Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.
Title of the Paper:
On Large-Cohort Training for Federated Learning
Main Content:
1 Introduction . Federated learning ( FL ) [ 52 ] considers learning a model from multiple clients without directly sharing training data , often under the orchestration of a central server . In this work we focus on cross-device FL , in which the aim is to learn across a large population of edge devices [ 27 , Table 1 ] . A distinguishing characteristic of cross-device FL is partial participation of the client population : Due to systems constraints such as network size , the server typically only communicates with a subset of the clients at a time1 . For example , in the popular FedAvg algorithm [ 52 ] , at each communication round the server broadcasts its current model to a subset of available clients ( referred to as a cohort ) , who use the model to initialize local optimization and send their model updates back to the server . Intuitively , larger cohort sizes have the potential to improve the convergence of FL algorithms . By sampling more clients per round , we can observe a more representative sample of the underlying population—possibly reducing the number of communication rounds needed to achieve a given accuracy . This intuition is reflected in many convergence analyses of FL methods [ 29 , 31 , 32 , 62 , 69 ] , which generally show that asymptotic convergence rates improve as the cohort size increases . Larger cohorts can also provide privacy benefits . For example , when using the distributed differential privacy model [ 6 , 11 , 16 , 65 ] in federated learning , noise is typically added to the updates sent from the clients to the server [ 54 ] . This helps preserve privacy but can also mar the utility of the learned model . By dividing the noise among more clients , larger cohorts may mitigate detrimental effects of noise . Moreover , since privacy tends to decrease as a function of the number of communication 1In contrast , cross-silo settings often have a small set of clients , most of which participate in each round [ 27 ] . 35th Conference on Neural Information Processing Systems ( NeurIPS 2021 ) . rounds [ 1 , 17 ] , larger cohorts also have the potential to improve privacy in FL by reducing the number of rounds needed for convergence . Motivated by the potential benefits of large-cohort training , we systematically explore the impact of cohort size in realistic cross-device settings . Our results show that increasing the cohort size may not lead to significant convergence improvements in practice , despite their theoretical benefit [ 69 ] . Moreover , large-cohort training can introduce fundamental optimization and generalization issues . Our results are reminiscent of work on large-batch training in centralized settings , where larger batches can stagnate convergence improvements [ 14 , 19 , 51 , 70 , 71 ] , and even lead to generalization issues with deep neural networks [ 23 , 30 , 46–48 , 50 , 64 ] . While some of the challenges we identify with large-cohort training are parallel to issues that arise in large-batch centralized learning , others are unique to federated learning and have not been previously identified in the literature . Contributions . In this work , we provide a novel examination of cohort sizes in federated learning . We give a wide ranging empirical analysis spanning many popular federated algorithms and datasets ( Section 2 ) . Despite the many possible benefits of large-cohort training , we find that challenges exist in realizing these benefits ( Section 3 ) . 
We show that these issues are caused in part by distinctive characteristics of federated training dynamics ( Section 4 ) . Using these insights , we provide partial solutions to the challenges we identify ( Section 5 ) , focusing on how to adapt techniques from largebatch training , and the limitations of such approaches . Our solutions are designed to serve as simple benchmarks for future work . We conclude by discussing limitations and open problems ( Section 6 ) . Throughout , we attempt to uncover interesting theoretical questions , but remain firmly grounded in the practical realities of federated learning . 1.1 Related Work . Large-batch training . In non-federated settings , mini-batch stochastic gradient descent ( SGD ) and its variants are common choices for training machine learning models , particularly deep neural networks . While larger mini-batch sizes ostensibly allow for improved convergence ( in terms of the number of steps required to reach a desired accuracy ) , in practice speedups may quickly saturate when increasing the mini-batch size . This property of diminishing returns has been explored both empirically [ 14 , 19 , 51 , 64 ] and theoretically [ 48 , 70 ] . Beyond the issue of speedup saturation , numerous works have also observed a generalization gap when training deep neural networks with large batches [ 23 , 30 , 46 , 47 , 50 , 71 ] . Our work differs from these areas by specifically exploring how the cohort size ( the number of selected clients ) affects federated optimization methods . While some of the issues with large-batch training appear in large-cohort training , we also identify a number of new challenges introduced by the federated setting . Optimization for federated learning . Significant attention has been paid towards developing federated optimization techniques . Such work has focused on various aspects , including communicationefficiency [ 5 , 34 , 37 , 52 ] , data and systems heterogeneity [ 25 , 28 , 29 , 39–42 , 67 ] , and fairness [ 26 , 43 ] . We provide a description of some relevant methods in Section 2 , and defer readers to recent surveys such as [ 27 ] and [ 41 ] for additional background . One area pertinent to our work is that of variance reduction for federated learning , which can mitigate negative effects of data heterogeneity [ 28 , 29 , 73 ] . However , such methods often require clients to maintain state across rounds [ 29 , 73 ] , which may be infeasible in cross-device settings [ 27 ] . Moreover , such methods may not perform well in settings with limited client participation [ 62 ] . Many convergence analyses of federated optimization methods show that larger cohort sizes can lead to improved convergence rates , even without explicit variance reduction [ 31 , 32 , 69 ] . These analyses typically focus on asymptotic convergence , and require assumptions on learning rates and heterogeneity that may not hold in practice [ 9 , 27 ] . In this work , we attempt to see whether increasing the cohort size leads to improved convergence in practical , communication-limited settings . Client sampling . A number of works have explored how to select cohorts of a fixed size in crossdevice FL [ 10 , 12 , 18 , 59 , 63 ] . Such methods can yield faster convergence than random sampling by carefully selecting the clients that participate at each round , based on quantities such as the client loss . However , such approaches typically require the server to be able to choose which clients participate in a cohort . 
In practice , cohort selection in cross-device federated learning is often governed by client availability , and is not controlled by the server [ 7 , 61 ] . In this work we instead focus on the impact of size of the cohort , assuming the cohort is sampled at random . 2 Preliminaries . Federated optimization methods often aim to minimize a weighted average of client loss functions : min x f ( x ) : = K∑ k=1 pkfk ( x ) , ( 1 ) where K is total number of clients , the pk are client weights satisfying pk ≥ 0 , and fk is the loss function of client k. For practical reasons , pk is often set to the number of examples in client k ’ s local dataset [ 42 , 52 ] . To solve ( 1 ) , each client in a sampled cohort could send∇fk ( x ) to the server , and the server could then apply ( mini-batch ) SGD . This approach is referred to as FedSGD [ 52 ] . This requires communication for every model update , which may not be desirable in communication-limited settings . To address this , McMahan et al . [ 52 ] propose FedAvg , in which clients perform multiple epochs of local training , potentially reducing the number of communication rounds needed for convergence . We focus on a more general framework , FedOpt , introduced by Reddi et al . [ 62 ] that uses both client and server optimization . At each round , the server sends its model x to a cohort of clients C of size M . Each client ck ∈ C performs E epochs of training using mini-batch SGD with client learning rate ηc , producing a local model xk . Each client k ∈ C then communicates their client update ∆k to the server , where ∆k : = xk − x is the difference between the client ’ s local model and the server model . The server computes a weighted average ∆ of the client updates , and updates its own model via x′ = SERVEROPT ( x , ηs , ∆ ) , ( 2 ) where SERVEROPT ( x , ηs , g ) is some first-order optimizer , ηs is the server learning rate , and g is a gradient estimate . For example , if SERVEROPT is SGD , then SERVEROPT ( x , ηs , g ) = x − ηsg . The ∆ in ( 2 ) is referred to as a pseudo-gradient [ 62 ] . While ∆ may not be an unbiased estimate of ∇f , it can serve a somewhat comparable role ( though as we show in Section 4 , there are important distinctions ) . Full pseudo-code of FedOpt is given in Algorithm 1 . Algorithm 1 FedOpt framework Input : M , T E , x1 , ηc , ηs , SERVEROPT , { pk } Kk=1 for t = 1 , · · · , T do The server selects a cohort Ct of M clients uniformly at random , without replacement . The server sends xt to all clients in Ct. Each client k ∈ Ct performs E epochs of mini-batch SGD on fk with step-size ηc . After training , each k ∈ Ct has a local model xtk and sends ∆tk = xt − xtk to the server . The server computes a pseudo-gradient ∆t and updates its model via ∆t = ∑ k∈Ct pk∆ t k∑ k∈Ct pk , xt+1 = SERVEROPT ( xt , ηs , ∆t ) . Algorithm 1 generalizes a number of federated learning algorithms , including FedAvg [ 52 ] , FedAvgM [ 25 ] , FedAdagrad [ 62 ] , and FedAdam [ 62 ] . These are the cases where SERVEROPT is SGD , SGD with momentum , Adagrad [ 15 , 53 ] , and Adam [ 33 ] , respectively . FedSGD is realized when SERVEROPT is SGD , ηc = 1 , E = 1 , and each client performs full-batch gradient descent . 2.1 Experimental Setup . We aim to understand how the cohort size M impacts the performance of Algorithm 1 . In order to study this , we perform a wide-ranging empirical evaluation using various special cases of Algorithm 1 across multiple datasets , models , and tasks . We discuss the key facets of our experiments below . 
Datasets , models , and tasks . We use four datasets : CIFAR-100 [ 35 ] , EMNIST [ 13 ] , Shakespeare [ 8 ] , and Stack Overflow [ 3 ] . For CIFAR-100 , we use the client partitioning proposed by Reddi et al . [ 62 ] . The other three datasets have natural client partitions that we use . For EMNIST , the handwritten characters are partitioned by their author . For Shakespeare , speaking lines in Shakespeare plays are partitioned by their speaker . For Stack Overflow , posts on the forum are partitioned by their author . The number of clients and examples in the training and test sets are given in Table 1 . For CIFAR-100 , we train a ResNet-18 , replacing batch normalization layers with group normalization ( as proposed and empirically validated in federated settings by Hsieh et al . [ 24 ] ) . For EMNIST , we train a convolutional network with two convolutional layers , max-pooling , dropout , and two dense layers . For Shakespeare , we train an RNN with two LSTM layers to perform next-characterprediction . For Stack Overflow , we perform next-word-prediction using an RNN with a single LSTM layer . For full details on the models and datasets , see Appendix A.1 . Algorithms . We implement many special cases of Algorithm 1 , including FedSGD , FedAvg , FedAvgM , FedAdagrad , and FedAdam . We also develop two novel methods : FedLARS and FedLamb , which are the special cases of Algorithm 1 where SERVEROPT is LARS [ 71 ] and Lamb [ 72 ] , respectively . See Section 5 for the motivation and full details of these algorithms . Implementation and tuning . Unless otherwise specified , in Algorithm 1 clients perform E = 1 epochs of training with mini-batch SGD . Their batch size is fixed per-task . We set pk to be the number of examples in client k ’ s dataset . We tune learning rates for all algorithms and models using a held-out validation set : We perform T = 1500 rounds of training with M = 50 , E = 1 for each algorithm and model , varying ηc , ηs over { 10i | − 3 ≤ i ≤ 1 } and select the values that maximize the average validation performance over 5 random trials . All other hyperparameters ( such as momentum ) are fixed . For more details , see Appendix A . We provide open-source implementations of all simulations in TensorFlow Federated [ 4 ] 2 . All experiments were conducted using clusters of multi-core CPUs , though our results are independent of wall-clock time and amount of compute resources . Presentation of results . We apply the algorithms above to the tasks listed above with varying cohort sizes . For brevity , we present only a fraction of our results , selecting representative experiments to illustrate large-cohort training phenomena . The full set of experimental results can be found in Appendix B . We run 5 random trials for each experiment , varying the model initialization and which clients are sampled per round . In all subsequent figures , dark lines indicate the mean across the 5 trials , and shaded regions indicate one standard deviation above and below the mean . | This work introduces a wide empirical study of cohort sizes in local methods for federated learning problems. The authors explain the advantages and disadvantages of using a large cohort number. They provide connections with batch size in centralized optimization. They highlight 5 main aspects: Catastrophic Training Failures, Diminishing Returns, Generalization Failures, Fairness Concerns, and Decreased Data Efficiency. 
Also, the authors provide several solutions for overcoming the issues caused by a large cohort number. The authors provide several sequences of experiments with different models, datasets, and methods. | science.mup_single_document_summarization.13847 | [
{
"content": "Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.\n\nTitle of the Paper:\nOn Large-Cohort Training for Federated Learning\n\nMain Content:\n1 Introduction . Federated learning ( FL ) [ 52 ] considers learning a model from multiple clients without directly sharing training data , often under the orchestration of a central server . In this work we focus on cross-device FL , in which the aim is to learn across a large population of edge devices [ 27 , Table 1 ] . A distinguishing characteristic of cross-device FL is partial participation of the client population : Due to systems constraints such as network size , the server typically only communicates with a subset of the clients at a time1 . For example , in the popular FedAvg algorithm [ 52 ] , at each communication round the server broadcasts its current model to a subset of available clients ( referred to as a cohort ) , who use the model to initialize local optimization and send their model updates back to the server . Intuitively , larger cohort sizes have the potential to improve the convergence of FL algorithms . By sampling more clients per round , we can observe a more representative sample of the underlying population—possibly reducing the number of communication rounds needed to achieve a given accuracy . This intuition is reflected in many convergence analyses of FL methods [ 29 , 31 , 32 , 62 , 69 ] , which generally show that asymptotic convergence rates improve as the cohort size increases . Larger cohorts can also provide privacy benefits . For example , when using the distributed differential privacy model [ 6 , 11 , 16 , 65 ] in federated learning , noise is typically added to the updates sent from the clients to the server [ 54 ] . This helps preserve privacy but can also mar the utility of the learned model . By dividing the noise among more clients , larger cohorts may mitigate detrimental effects of noise . Moreover , since privacy tends to decrease as a function of the number of communication 1In contrast , cross-silo settings often have a small set of clients , most of which participate in each round [ 27 ] . 35th Conference on Neural Information Processing Systems ( NeurIPS 2021 ) . rounds [ 1 , 17 ] , larger cohorts also have the potential to improve privacy in FL by reducing the number of rounds needed for convergence . Motivated by the potential benefits of large-cohort training , we systematically explore the impact of cohort size in realistic cross-device settings . Our results show that increasing the cohort size may not lead to significant convergence improvements in practice , despite their theoretical benefit [ 69 ] . Moreover , large-cohort training can introduce fundamental optimization and generalization issues . Our results are reminiscent of work on large-batch training in centralized settings , where larger batches can stagnate convergence improvements [ 14 , 19 , 51 , 70 , 71 ] , and even lead to generalization issues with deep neural networks [ 23 , 30 , 46–48 , 50 , 64 ] . While some of the challenges we identify with large-cohort training are parallel to issues that arise in large-batch centralized learning , others are unique to federated learning and have not been previously identified in the literature . Contributions . In this work , we provide a novel examination of cohort sizes in federated learning . 
We give a wide ranging empirical analysis spanning many popular federated algorithms and datasets ( Section 2 ) . Despite the many possible benefits of large-cohort training , we find that challenges exist in realizing these benefits ( Section 3 ) . We show that these issues are caused in part by distinctive characteristics of federated training dynamics ( Section 4 ) . Using these insights , we provide partial solutions to the challenges we identify ( Section 5 ) , focusing on how to adapt techniques from largebatch training , and the limitations of such approaches . Our solutions are designed to serve as simple benchmarks for future work . We conclude by discussing limitations and open problems ( Section 6 ) . Throughout , we attempt to uncover interesting theoretical questions , but remain firmly grounded in the practical realities of federated learning . 1.1 Related Work . Large-batch training . In non-federated settings , mini-batch stochastic gradient descent ( SGD ) and its variants are common choices for training machine learning models , particularly deep neural networks . While larger mini-batch sizes ostensibly allow for improved convergence ( in terms of the number of steps required to reach a desired accuracy ) , in practice speedups may quickly saturate when increasing the mini-batch size . This property of diminishing returns has been explored both empirically [ 14 , 19 , 51 , 64 ] and theoretically [ 48 , 70 ] . Beyond the issue of speedup saturation , numerous works have also observed a generalization gap when training deep neural networks with large batches [ 23 , 30 , 46 , 47 , 50 , 71 ] . Our work differs from these areas by specifically exploring how the cohort size ( the number of selected clients ) affects federated optimization methods . While some of the issues with large-batch training appear in large-cohort training , we also identify a number of new challenges introduced by the federated setting . Optimization for federated learning . Significant attention has been paid towards developing federated optimization techniques . Such work has focused on various aspects , including communicationefficiency [ 5 , 34 , 37 , 52 ] , data and systems heterogeneity [ 25 , 28 , 29 , 39–42 , 67 ] , and fairness [ 26 , 43 ] . We provide a description of some relevant methods in Section 2 , and defer readers to recent surveys such as [ 27 ] and [ 41 ] for additional background . One area pertinent to our work is that of variance reduction for federated learning , which can mitigate negative effects of data heterogeneity [ 28 , 29 , 73 ] . However , such methods often require clients to maintain state across rounds [ 29 , 73 ] , which may be infeasible in cross-device settings [ 27 ] . Moreover , such methods may not perform well in settings with limited client participation [ 62 ] . Many convergence analyses of federated optimization methods show that larger cohort sizes can lead to improved convergence rates , even without explicit variance reduction [ 31 , 32 , 69 ] . These analyses typically focus on asymptotic convergence , and require assumptions on learning rates and heterogeneity that may not hold in practice [ 9 , 27 ] . In this work , we attempt to see whether increasing the cohort size leads to improved convergence in practical , communication-limited settings . Client sampling . A number of works have explored how to select cohorts of a fixed size in crossdevice FL [ 10 , 12 , 18 , 59 , 63 ] . 
Such methods can yield faster convergence than random sampling by carefully selecting the clients that participate at each round , based on quantities such as the client loss . However , such approaches typically require the server to be able to choose which clients participate in a cohort . In practice , cohort selection in cross-device federated learning is often governed by client availability , and is not controlled by the server [ 7 , 61 ] . In this work we instead focus on the impact of size of the cohort , assuming the cohort is sampled at random . 2 Preliminaries . Federated optimization methods often aim to minimize a weighted average of client loss functions : min x f ( x ) : = K∑ k=1 pkfk ( x ) , ( 1 ) where K is total number of clients , the pk are client weights satisfying pk ≥ 0 , and fk is the loss function of client k. For practical reasons , pk is often set to the number of examples in client k ’ s local dataset [ 42 , 52 ] . To solve ( 1 ) , each client in a sampled cohort could send∇fk ( x ) to the server , and the server could then apply ( mini-batch ) SGD . This approach is referred to as FedSGD [ 52 ] . This requires communication for every model update , which may not be desirable in communication-limited settings . To address this , McMahan et al . [ 52 ] propose FedAvg , in which clients perform multiple epochs of local training , potentially reducing the number of communication rounds needed for convergence . We focus on a more general framework , FedOpt , introduced by Reddi et al . [ 62 ] that uses both client and server optimization . At each round , the server sends its model x to a cohort of clients C of size M . Each client ck ∈ C performs E epochs of training using mini-batch SGD with client learning rate ηc , producing a local model xk . Each client k ∈ C then communicates their client update ∆k to the server , where ∆k : = xk − x is the difference between the client ’ s local model and the server model . The server computes a weighted average ∆ of the client updates , and updates its own model via x′ = SERVEROPT ( x , ηs , ∆ ) , ( 2 ) where SERVEROPT ( x , ηs , g ) is some first-order optimizer , ηs is the server learning rate , and g is a gradient estimate . For example , if SERVEROPT is SGD , then SERVEROPT ( x , ηs , g ) = x − ηsg . The ∆ in ( 2 ) is referred to as a pseudo-gradient [ 62 ] . While ∆ may not be an unbiased estimate of ∇f , it can serve a somewhat comparable role ( though as we show in Section 4 , there are important distinctions ) . Full pseudo-code of FedOpt is given in Algorithm 1 . Algorithm 1 FedOpt framework Input : M , T E , x1 , ηc , ηs , SERVEROPT , { pk } Kk=1 for t = 1 , · · · , T do The server selects a cohort Ct of M clients uniformly at random , without replacement . The server sends xt to all clients in Ct. Each client k ∈ Ct performs E epochs of mini-batch SGD on fk with step-size ηc . After training , each k ∈ Ct has a local model xtk and sends ∆tk = xt − xtk to the server . The server computes a pseudo-gradient ∆t and updates its model via ∆t = ∑ k∈Ct pk∆ t k∑ k∈Ct pk , xt+1 = SERVEROPT ( xt , ηs , ∆t ) . Algorithm 1 generalizes a number of federated learning algorithms , including FedAvg [ 52 ] , FedAvgM [ 25 ] , FedAdagrad [ 62 ] , and FedAdam [ 62 ] . These are the cases where SERVEROPT is SGD , SGD with momentum , Adagrad [ 15 , 53 ] , and Adam [ 33 ] , respectively . FedSGD is realized when SERVEROPT is SGD , ηc = 1 , E = 1 , and each client performs full-batch gradient descent . 2.1 Experimental Setup . 
We aim to understand how the cohort size M impacts the performance of Algorithm 1 . In order to study this , we perform a wide-ranging empirical evaluation using various special cases of Algorithm 1 across multiple datasets , models , and tasks . We discuss the key facets of our experiments below . Datasets , models , and tasks . We use four datasets : CIFAR-100 [ 35 ] , EMNIST [ 13 ] , Shakespeare [ 8 ] , and Stack Overflow [ 3 ] . For CIFAR-100 , we use the client partitioning proposed by Reddi et al . [ 62 ] . The other three datasets have natural client partitions that we use . For EMNIST , the handwritten characters are partitioned by their author . For Shakespeare , speaking lines in Shakespeare plays are partitioned by their speaker . For Stack Overflow , posts on the forum are partitioned by their author . The number of clients and examples in the training and test sets are given in Table 1 . For CIFAR-100 , we train a ResNet-18 , replacing batch normalization layers with group normalization ( as proposed and empirically validated in federated settings by Hsieh et al . [ 24 ] ) . For EMNIST , we train a convolutional network with two convolutional layers , max-pooling , dropout , and two dense layers . For Shakespeare , we train an RNN with two LSTM layers to perform next-characterprediction . For Stack Overflow , we perform next-word-prediction using an RNN with a single LSTM layer . For full details on the models and datasets , see Appendix A.1 . Algorithms . We implement many special cases of Algorithm 1 , including FedSGD , FedAvg , FedAvgM , FedAdagrad , and FedAdam . We also develop two novel methods : FedLARS and FedLamb , which are the special cases of Algorithm 1 where SERVEROPT is LARS [ 71 ] and Lamb [ 72 ] , respectively . See Section 5 for the motivation and full details of these algorithms . Implementation and tuning . Unless otherwise specified , in Algorithm 1 clients perform E = 1 epochs of training with mini-batch SGD . Their batch size is fixed per-task . We set pk to be the number of examples in client k ’ s dataset . We tune learning rates for all algorithms and models using a held-out validation set : We perform T = 1500 rounds of training with M = 50 , E = 1 for each algorithm and model , varying ηc , ηs over { 10i | − 3 ≤ i ≤ 1 } and select the values that maximize the average validation performance over 5 random trials . All other hyperparameters ( such as momentum ) are fixed . For more details , see Appendix A . We provide open-source implementations of all simulations in TensorFlow Federated [ 4 ] 2 . All experiments were conducted using clusters of multi-core CPUs , though our results are independent of wall-clock time and amount of compute resources . Presentation of results . We apply the algorithms above to the tasks listed above with varying cohort sizes . For brevity , we present only a fraction of our results , selecting representative experiments to illustrate large-cohort training phenomena . The full set of experimental results can be found in Appendix B . We run 5 random trials for each experiment , varying the model initialization and which clients are sampled per round . In all subsequent figures , dark lines indicate the mean across the 5 trials , and shaded regions indicate one standard deviation above and below the mean .",
"role": "user"
},
{
"content": "This work introduces a wide empirical study of cohort sizes in local methods for federated learning problems. The authors explain the advantages and disadvantages of using a large cohort number. They provide connections with batch size in centralized optimization. They highlight 5 main aspects: Catastrophic Training Failures, Diminishing Returns, Generalization Failures, Fairness Concerns, and Decreased Data Efficiency. Also, the authors provide several solutions how to overcome issues led by a large cohort number. Authors provide several sequences of experiments with different models, datasets, and methods.",
"role": "assistant"
}
]
|
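The federated-learning row above describes FedOpt (Algorithm 1): the server samples a cohort of M clients, each client runs E epochs of local SGD starting from the server model and returns its update delta, and the server applies a server-side optimizer to the weighted average of those deltas. The sketch below is a minimal, self-contained illustration of that loop on toy quadratic client losses; it is not the paper's code, and the quadratic objectives, client weights, and function names are assumptions made only for the example. With server SGD and a server learning rate of 1 it reduces to a FedAvg-style weighted model average.

```python
# Minimal sketch of the FedOpt loop from Algorithm 1 (server SGD gives a FedAvg-style update).
# The toy quadratic client losses and all names here are illustrative, not the paper's code.
import numpy as np

rng = np.random.default_rng(0)

# Each client k holds a synthetic quadratic loss f_k(x) = 0.5 * ||x - c_k||^2.
num_clients, dim = 100, 5
centers = rng.normal(size=(num_clients, dim))          # heterogeneity across clients
weights = rng.integers(10, 100, size=num_clients)      # p_k ~ number of local examples

def local_grad(k, x):
    return x - centers[k]                               # gradient of 0.5 * ||x - c_k||^2

def client_update(k, x_server, epochs=1, lr_client=0.1):
    x = x_server.copy()
    for _ in range(epochs):                             # E epochs of local SGD
        x -= lr_client * local_grad(k, x)
    return x_server - x                                 # delta_k = x_t - x_t^k, as in Algorithm 1

def fedopt(rounds=50, cohort_size=10, lr_server=1.0):
    x = np.zeros(dim)
    for _ in range(rounds):
        cohort = rng.choice(num_clients, size=cohort_size, replace=False)
        deltas = np.stack([client_update(k, x) for k in cohort])
        p = weights[cohort] / weights[cohort].sum()     # example-count weighting of the cohort
        pseudo_grad = (p[:, None] * deltas).sum(axis=0) # weighted pseudo-gradient
        x = x - lr_server * pseudo_grad                 # ServerOpt = SGD
    return x

print(fedopt(cohort_size=10))
print(fedopt(cohort_size=50))   # larger cohort: lower-variance pseudo-gradient per round
```

Increasing `cohort_size` in the last call lowers the variance of the per-round pseudo-gradient, which is exactly the theoretical benefit whose practical payoff the paper in this row calls into question.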
science.discomat_te | | Sample | A (cm)-1+-4% | n+-0.1% | t exp (ms)+-5% | K ds/dT(10-4 cmK -1)+-5% a | ph Nd | e(Nd 3+) |
| YABC 1% | 3.1 | 1.43 | 520 | -2.98 | 0.33+-0.08 | 0.87+-0.10 |
| YABC 2% | 6.9 | 1.43 | 430 | -2.98 | 0.47+-0.05 | 0.68+-0.06 |
| ZBLAN 1% | 2.8 | 1.51 | 500 | -9.04 | 0.27+-0.06 | 0.94+-0.08 |
| ZBLAN 2% | 5.3 | 1.51 | 432 | -9.04 | 0.41+-0.08 | 0.76+-0.10 |
| PGIZCa 1% | 4.1 | 1.56 | 410 | -2.72 | 0.37+-0.10 | 0.80+-0.13 |
| PGIZCa 2% | 9.0 | 1.56 | 285 | -2.72 | 0.48+-0.10 | 0.67+-0.13 |
Caption: Thermo-optical experimental parameters of the investigated glasses a
Examine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:
{"value": "xx", "type": "Other"}
{"value": "xx", "type": "Glass_Compound_Amount", "constituent": "xx", "unit": "xx", "material": "xx"}
Process each cell sequentially, and for any unknown attributes, use "xx" as a placeholder. Each JSON should be on a new line.
Output: | {"value": "3.1", "type": "Other"}
{"value": "1.43", "type": "Other"}
{"value": "520", "type": "Other"}
{"value": "0.33", "type": "Other"}
{"value": "0.87", "type": "Other"}
{"value": "6.9", "type": "Other"}
{"value": "1.43", "type": "Other"}
{"value": "430", "type": "Other"}
{"value": "0.47", "type": "Other"}
{"value": "0.68", "type": "Other"}
{"value": "2.8", "type": "Other"}
{"value": "1.51", "type": "Other"}
{"value": "500", "type": "Other"}
{"value": "0.27", "type": "Other"}
{"value": "0.94", "type": "Other"}
{"value": "5.3", "type": "Other"}
{"value": "1.51", "type": "Other"}
{"value": "432", "type": "Other"}
{"value": "0.41", "type": "Other"}
{"value": "0.76", "type": "Other"}
{"value": "4.1", "type": "Other"}
{"value": "1.56", "type": "Other"}
{"value": "410", "type": "Other"}
{"value": "0.37", "type": "Other"}
{"value": "0.80", "type": "Other"}
{"value": "9.0", "type": "Other"}
{"value": "1.56", "type": "Other"}
{"value": "285", "type": "Other"}
{"value": "0.48", "type": "Other"}
{"value": "0.67", "type": "Other"} | science.discomat_te.323 | [
{
"content": "| Sample | A (cm)-1+-4% | n+-0.1% | t exp (ms)+-5% | K ds/dT(10-4 cmK -1)+-5% a | ph Nd | e(Nd 3+) |\n| YABC 1% | 3.1 | 1.43 | 520 | -2.98 | 0.33+-0.08 | 0.87+-0.10 |\n| YABC 2% | 6.9 | 1.43 | 430 | -2.98 | 0.47+-0.05 | 0.68+-0.06 |\n| ZBLAN 1% | 2.8 | 1.51 | 500 | -9.04 | 0.27+-0.06 | 0.94+-0.08 |\n| ZBLAN 2% | 5.3 | 1.51 | 432 | -9.04 | 0.41+-0.08 | 0.76+-0.10 |\n| PGIZCa 1% | 4.1 | 1.56 | 410 | -2.72 | 0.37+-0.10 | 0.80+-0.13 |\n| PGIZCa 2% | 9.0 | 1.56 | 285 | -2.72 | 0.48+-0.10 | 0.67+-0.13 |\n\nCaption: Thermo-optical experimental parameters of the investigated glasses a\n\nExamine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:\n{\"value\": \"xx\", \"type\": \"Other\"}\n{\"value\": \"xx\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"xx\", \"unit\": \"xx\", \"material\": \"xx\"}\n\nProcess each cell sequentially, and for any unknown attributes, use \"xx\" as a placeholder. Each JSON should be on a new line.\n\nOutput:",
"role": "user"
},
{
"content": "{\"value\": \"3.1\", \"type\": \"Other\"}\n{\"value\": \"1.43\", \"type\": \"Other\"}\n{\"value\": \"520\", \"type\": \"Other\"}\n{\"value\": \"0.33\", \"type\": \"Other\"}\n{\"value\": \"0.87\", \"type\": \"Other\"}\n{\"value\": \"6.9\", \"type\": \"Other\"}\n{\"value\": \"1.43\", \"type\": \"Other\"}\n{\"value\": \"430\", \"type\": \"Other\"}\n{\"value\": \"0.47\", \"type\": \"Other\"}\n{\"value\": \"0.68\", \"type\": \"Other\"}\n{\"value\": \"2.8\", \"type\": \"Other\"}\n{\"value\": \"1.51\", \"type\": \"Other\"}\n{\"value\": \"500\", \"type\": \"Other\"}\n{\"value\": \"0.27\", \"type\": \"Other\"}\n{\"value\": \"0.94\", \"type\": \"Other\"}\n{\"value\": \"5.3\", \"type\": \"Other\"}\n{\"value\": \"1.51\", \"type\": \"Other\"}\n{\"value\": \"432\", \"type\": \"Other\"}\n{\"value\": \"0.41\", \"type\": \"Other\"}\n{\"value\": \"0.76\", \"type\": \"Other\"}\n{\"value\": \"4.1\", \"type\": \"Other\"}\n{\"value\": \"1.56\", \"type\": \"Other\"}\n{\"value\": \"410\", \"type\": \"Other\"}\n{\"value\": \"0.37\", \"type\": \"Other\"}\n{\"value\": \"0.80\", \"type\": \"Other\"}\n{\"value\": \"9.0\", \"type\": \"Other\"}\n{\"value\": \"1.56\", \"type\": \"Other\"}\n{\"value\": \"285\", \"type\": \"Other\"}\n{\"value\": \"0.48\", \"type\": \"Other\"}\n{\"value\": \"0.67\", \"type\": \"Other\"}",
"role": "assistant"
}
]
|
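The table-extraction rows in this dump (the one above and the similar one that follows) ask for one JSON object per numeric table cell. As a rough illustration of how that conversion could be scripted for a pipe-delimited table, the sketch below parses rows, strips the "+-" uncertainty suffixes, and emits {"value": ..., "type": "Other"} records. The regex, the cut-down toy table, and the blanket "Other" default are simplifications assumed for the example; cells that encode glass compositions would instead need the Glass_Compound_Amount fields filled in.

```python
# Rough sketch of the numeric-cell -> JSON conversion illustrated above for a
# pipe-delimited table. The regex and the "Other" default are simplifications;
# real glass-composition cells would need the Glass_Compound_Amount fields.
import json
import re

TABLE = """
| Sample | A (cm)-1+-4% | n+-0.1% | t exp (ms)+-5% |
| YABC 1% | 3.1 | 1.43 | 520 |
| ZBLAN 1% | 2.8 | 1.51 | 500 |
"""

NUMBER = re.compile(r"^[+-]?\d+(?:\.\d+)?$")

def numeric_cells(table_text):
    for line in table_text.strip().splitlines():
        cells = [c.strip() for c in line.strip("|").split("|")]
        for cell in cells:
            # Keep only the leading value of cells like "3.1" or "0.33+-0.08".
            value = cell.split("+-")[0].strip()
            if NUMBER.match(value):
                yield {"value": value, "type": "Other"}

for record in numeric_cells(TABLE):
    print(json.dumps(record))
```

Run as-is it prints one JSON line per numeric cell, mirroring the format of the gold output in the row above.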
science.discomat_te | | | Mg | Ca | Sr | Ba | Tm | Y | Al | F |
| Z i | +2 | +2 | +2 | +2 | +3 | +3 | +3 | -1 |
| | | | | | | | | |
| A ij (10-6 J) | Mg | Ca | Sr | Ba | Tm | Y | Al | F |
| Mg | 1.44 | 2.75 | 3.45 | 5.65 | 3.78 | 2.82 | 1.64 | 1.22 |
| Ca | | 5.65 | 7.22 | 12.22 | 7.72 | 5.57 | 2.99 | 2.48 |
| Sr | | | 9.28 | 15.84 | 9.85 | 7.06 | 3.72 | 3.16 |
| Ba | | | | 27.46 | 16.60 | 11.74 | 5.97 | 5.32 |
| Tm | | | | | 10.55 | 7.65 | 4.14 | 3.39 |
| Y | | | | | | 5.61 | 3.14 | 2.46 |
| Al | | | | | | | 1.93 | 1.30 |
| F | | | | | | | | 0.843 |
Caption: The potential and the parameters used in MD simulation
Examine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:
{"value": "xx", "type": "Other"}
{"value": "xx", "type": "Glass_Compound_Amount", "constituent": "xx", "unit": "xx", "material": "xx"}
Process each cell sequentially, and for any unknown attributes, use "xx" as a placeholder. Each JSON should be on a new line.
Output: | {"value": "+2", "type": "Other"}
{"value": "+2", "type": "Other"}
{"value": "+2", "type": "Other"}
{"value": "+2", "type": "Other"}
{"value": "+3", "type": "Other"}
{"value": "+3", "type": "Other"}
{"value": "+3", "type": "Other"}
{"value": "1.44", "type": "Other"}
{"value": "2.75", "type": "Other"}
{"value": "3.45", "type": "Other"}
{"value": "5.65", "type": "Other"}
{"value": "3.78", "type": "Other"}
{"value": "2.82", "type": "Other"}
{"value": "1.64", "type": "Other"}
{"value": "1.22", "type": "Other"}
{"value": "5.65", "type": "Other"}
{"value": "7.22", "type": "Other"}
{"value": "12.22", "type": "Other"}
{"value": "7.72", "type": "Other"}
{"value": "5.57", "type": "Other"}
{"value": "2.99", "type": "Other"}
{"value": "2.48", "type": "Other"}
{"value": "9.28", "type": "Other"}
{"value": "15.84", "type": "Other"}
{"value": "9.85", "type": "Other"}
{"value": "7.06", "type": "Other"}
{"value": "3.72", "type": "Other"}
{"value": "3.16", "type": "Other"}
{"value": "27.46", "type": "Other"}
{"value": "16.60", "type": "Other"}
{"value": "11.74", "type": "Other"}
{"value": "5.97", "type": "Other"}
{"value": "5.32", "type": "Other"}
{"value": "10.55", "type": "Other"}
{"value": "7.65", "type": "Other"}
{"value": "4.14", "type": "Other"}
{"value": "3.39", "type": "Other"}
{"value": "5.61", "type": "Other"}
{"value": "3.14", "type": "Other"}
{"value": "2.46", "type": "Other"}
{"value": "1.93", "type": "Other"}
{"value": "1.30", "type": "Other"}
{"value": "0.843", "type": "Other"} | science.discomat_te.434 | [
{
"content": "| | Mg | Ca | Sr | Ba | Tm | Y | Al | F |\n| Z i | +2 | +2 | +2 | +2 | +3 | +3 | +3 | -1 |\n| | | | | | | | | |\n| A ij (10-6 J) | Mg | Ca | Sr | Ba | Tm | Y | Al | F |\n| Mg | 1.44 | 2.75 | 3.45 | 5.65 | 3.78 | 2.82 | 1.64 | 1.22 |\n| Ca | | 5.65 | 7.22 | 12.22 | 7.72 | 5.57 | 2.99 | 2.48 |\n| Sr | | | 9.28 | 15.84 | 9.85 | 7.06 | 3.72 | 3.16 |\n| Ba | | | | 27.46 | 16.60 | 11.74 | 5.97 | 5.32 |\n| Tm | | | | | 10.55 | 7.65 | 4.14 | 3.39 |\n| Y | | | | | | 5.61 | 3.14 | 2.46 |\n| Al | | | | | | | 1.93 | 1.30 |\n| F | | | | | | | | 0.843 |\n\nCaption: The potential and the parameters used in MD simulation\n\nExamine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:\n{\"value\": \"xx\", \"type\": \"Other\"}\n{\"value\": \"xx\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"xx\", \"unit\": \"xx\", \"material\": \"xx\"}\n\nProcess each cell sequentially, and for any unknown attributes, use \"xx\" as a placeholder. Each JSON should be on a new line.\n\nOutput:",
"role": "user"
},
{
"content": "{\"value\": \"+2\", \"type\": \"Other\"}\n{\"value\": \"+2\", \"type\": \"Other\"}\n{\"value\": \"+2\", \"type\": \"Other\"}\n{\"value\": \"+2\", \"type\": \"Other\"}\n{\"value\": \"+3\", \"type\": \"Other\"}\n{\"value\": \"+3\", \"type\": \"Other\"}\n{\"value\": \"+3\", \"type\": \"Other\"}\n{\"value\": \"1.44\", \"type\": \"Other\"}\n{\"value\": \"2.75\", \"type\": \"Other\"}\n{\"value\": \"3.45\", \"type\": \"Other\"}\n{\"value\": \"5.65\", \"type\": \"Other\"}\n{\"value\": \"3.78\", \"type\": \"Other\"}\n{\"value\": \"2.82\", \"type\": \"Other\"}\n{\"value\": \"1.64\", \"type\": \"Other\"}\n{\"value\": \"1.22\", \"type\": \"Other\"}\n{\"value\": \"5.65\", \"type\": \"Other\"}\n{\"value\": \"7.22\", \"type\": \"Other\"}\n{\"value\": \"12.22\", \"type\": \"Other\"}\n{\"value\": \"7.72\", \"type\": \"Other\"}\n{\"value\": \"5.57\", \"type\": \"Other\"}\n{\"value\": \"2.99\", \"type\": \"Other\"}\n{\"value\": \"2.48\", \"type\": \"Other\"}\n{\"value\": \"9.28\", \"type\": \"Other\"}\n{\"value\": \"15.84\", \"type\": \"Other\"}\n{\"value\": \"9.85\", \"type\": \"Other\"}\n{\"value\": \"7.06\", \"type\": \"Other\"}\n{\"value\": \"3.72\", \"type\": \"Other\"}\n{\"value\": \"3.16\", \"type\": \"Other\"}\n{\"value\": \"27.46\", \"type\": \"Other\"}\n{\"value\": \"16.60\", \"type\": \"Other\"}\n{\"value\": \"11.74\", \"type\": \"Other\"}\n{\"value\": \"5.97\", \"type\": \"Other\"}\n{\"value\": \"5.32\", \"type\": \"Other\"}\n{\"value\": \"10.55\", \"type\": \"Other\"}\n{\"value\": \"7.65\", \"type\": \"Other\"}\n{\"value\": \"4.14\", \"type\": \"Other\"}\n{\"value\": \"3.39\", \"type\": \"Other\"}\n{\"value\": \"5.61\", \"type\": \"Other\"}\n{\"value\": \"3.14\", \"type\": \"Other\"}\n{\"value\": \"2.46\", \"type\": \"Other\"}\n{\"value\": \"1.93\", \"type\": \"Other\"}\n{\"value\": \"1.30\", \"type\": \"Other\"}\n{\"value\": \"0.843\", \"type\": \"Other\"}",
"role": "assistant"
}
]
|
science.evidence_inference | Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:
- Treatment: The specific medical procedure or drug being evaluated.
- Control: The standard or placebo used for comparison. If absent, use `null`.
- Result: The health outcome being assessed.
- Impact: The observed effect of the treatment on the result, categorized as: ("increased", "decreased", "no change").
- Justification: The supporting details for the observed impact.
Format the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].
Abstract:
ABSTRACT.BACKGROUND: Considering the crucial role of appropriate preventative strategies in reducing the rate of contrast-induced nephropathy (CIN) occurrence and its related morbidity and mortality, the effect of N-acetylcysteine (NAC), ascorbic acid (AA), and normal saline (NS) was investigated in the patient's undergone coronary angiography.
ABSTRACT.METHODS: In this clinical trial, 120 patients scheduled for elective coronary angiography with serum creatinine (Cr) level > 1.5 mg/dl or glomerular filtration rate (GFR) ≥ 60 selected by convenience method. Selected patients were allocated in three treatment groups randomly to receive oral NAC (600 mg/twice daily) plus NS (100 ml/hour) (Group A), oral AA (250 mg/twice daily) plus NS (100 ml/hour) (Group B) and NS (100 ml/hour) (Group C), respectively. The occurrence of CIN was evaluated based on serum Cr and GFR in three studied groups, before and after angiography procedure. The analysis of variance and paired t-test were used for data analysis by SPSS.
ABSTRACT.RESULTS: The serum Cr increased and GFR decreased significantly during the intervention in three groups (P < 0.010). However, the amounts of these changes were equal between groups (P > 0.050).
ABSTRACT.CONCLUSION: The study showed that nor the addition of NAC neither the addition of AA to sodium chloride infusion has more beneficial effect than hydration with sodium chloride, in the prevention of CIN.
Extracted Data: | [["N-acetylcysteine plus normal saline (Group A) and Ascorbic acid plus normal saline (Group B)", "Only normal saline (Group C)", "Serum creatinine", "no significant difference", "The serum Cr increased and GFR decreased significantly during the intervention in three groups (P \u003c 0.010). However, the amounts of these changes were equal between groups (P \u003e 0.050)."], ["N-acetylcysteine plus normal saline (Group A) and Ascorbic acid plus normal saline (Group B)", "Only normal saline (Group C)", "Glomerular filtration rate", "no significant difference", "The serum Cr increased and GFR decreased significantly during the intervention in three groups (P \u003c 0.010). However, the amounts of these changes were equal between groups (P \u003e 0.050)."]] | science.evidence_inference.33 | [
{
"content": "Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:\n\n- Treatment: The specific medical procedure or drug being evaluated.\n- Control: The standard or placebo used for comparison. If absent, use `null`.\n- Result: The health outcome being assessed.\n- Impact: The observed effect of the treatment on the result, categorized as: (\"increased\", \"decreased\", \"no change\").\n- Justification: The supporting details for the observed impact.\n\nFormat the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].\n\nAbstract:\nABSTRACT.BACKGROUND: Considering the crucial role of appropriate preventative strategies in reducing the rate of contrast-induced nephropathy (CIN) occurrence and its related morbidity and mortality, the effect of N-acetylcysteine (NAC), ascorbic acid (AA), and normal saline (NS) was investigated in the patient's undergone coronary angiography.\n\nABSTRACT.METHODS: In this clinical trial, 120 patients scheduled for elective coronary angiography with serum creatinine (Cr) level > 1.5 mg/dl or glomerular filtration rate (GFR) ≥ 60 selected by convenience method. Selected patients were allocated in three treatment groups randomly to receive oral NAC (600 mg/twice daily) plus NS (100 ml/hour) (Group A), oral AA (250 mg/twice daily) plus NS (100 ml/hour) (Group B) and NS (100 ml/hour) (Group C), respectively. The occurrence of CIN was evaluated based on serum Cr and GFR in three studied groups, before and after angiography procedure. The analysis of variance and paired t-test were used for data analysis by SPSS.\n\nABSTRACT.RESULTS: The serum Cr increased and GFR decreased significantly during the intervention in three groups (P < 0.010). However, the amounts of these changes were equal between groups (P > 0.050).\n\nABSTRACT.CONCLUSION: The study showed that nor the addition of NAC neither the addition of AA to sodium chloride infusion has more beneficial effect than hydration with sodium chloride, in the prevention of CIN.\n\nExtracted Data:",
"role": "user"
},
{
"content": "[[\"N-acetylcysteine plus normal saline (Group A) and Ascorbic acid plus normal saline (Group B)\", \"Only normal saline (Group C)\", \"Serum creatinine\", \"no significant difference\", \"The serum Cr increased and GFR decreased significantly during the intervention in three groups (P \\u003c 0.010). However, the amounts of these changes were equal between groups (P \\u003e 0.050).\"], [\"N-acetylcysteine plus normal saline (Group A) and Ascorbic acid plus normal saline (Group B)\", \"Only normal saline (Group C)\", \"Glomerular filtration rate\", \"no significant difference\", \"The serum Cr increased and GFR decreased significantly during the intervention in three groups (P \\u003c 0.010). However, the amounts of these changes were equal between groups (P \\u003e 0.050).\"]]",
"role": "assistant"
}
]
|
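The clinical-trial row above asks for findings as [treatment, control, outcome, impact, justification] arrays. A small validator like the sketch below can catch malformed extractions before they enter a downstream pipeline; the function name, error messages, and example string are my own. Note that the allowed-impact set is widened here because the gold answer in that row uses "no significant difference" even though the prompt's category list only names "no change".

```python
# Small validator for the [treatment, control, outcome, impact, justification] tuples
# requested above. The label set is widened because the gold answer in this row uses
# "no significant difference" while the prompt lists "no change".
import json

ALLOWED_IMPACTS = {"increased", "decreased", "no change", "no significant difference"}

def validate_findings(raw_json):
    findings = json.loads(raw_json)
    assert isinstance(findings, list), "top level must be a JSON array"
    for i, row in enumerate(findings):
        assert isinstance(row, list) and len(row) == 5, f"finding {i}: need 5 elements"
        treatment, control, outcome, impact, justification = row
        for name, field in [("treatment", treatment), ("outcome", outcome),
                            ("justification", justification)]:
            assert isinstance(field, str) and field, f"finding {i}: {name} must be a non-empty string"
        assert control is None or isinstance(control, str), f"finding {i}: bad control"
        assert impact in ALLOWED_IMPACTS, f"finding {i}: unknown impact {impact!r}"
    return findings

example = '[["NAC plus saline", "saline only", "serum creatinine", "no significant difference", "changes were equal between groups"]]'
print(validate_findings(example))
```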
science.qasper_abstractive_qa | Examine the provided sections from a research document and address the accompanying query. The text is divided by newlines.
Document Content:
What Does My QA Model Know? Devising Controlled Probes using Expert Knowledge
Open-domain question answering (QA) is known to involve several underlying knowledge and reasoning challenges, but are models actually learning such knowledge when trained on benchmark tasks? To investigate this, we introduce several new challenge tasks that probe whether state-of-the-art QA models have general knowledge about word definitions and general taxonomic reasoning, both of which are fundamental to more complex forms of reasoning and are widespread in benchmark datasets. As an alternative to expensive crowd-sourcing, we introduce a methodology for automatically building datasets from various types of expert knowledge (e.g., knowledge graphs and lexical taxonomies), allowing for systematic control over the resulting probes and for a more comprehensive evaluation. We find automatically constructing probes to be vulnerable to annotation artifacts, which we carefully control for. Our evaluation confirms that transformer-based QA models are already predisposed to recognize certain types of structural lexical knowledge. However, it also reveals a more nuanced picture: their performance degrades substantially with even a slight increase in the number of hops in the underlying taxonomic hierarchy, or as more challenging distractor candidate answers are introduced. Further, even when these models succeed at the standard instance-level evaluation, they leave much room for improvement when assessed at the level of clusters of semantically connected probes (e.g., all Isa questions about a concept).
Introduction
Automatically answering questions, especially in the open-domain setting (i.e., where minimal or no contextual knowledge is explicitly provided), requires bringing to bear considerable amount of background knowledge and reasoning abilities. For example, knowing the answers to the two questions in Figure FIGREF1 requires identifying a specific ISA relation (i.e., that cooking is a type of learned behavior) as well as recalling the definition of a concept (i.e., that global warming is defined as a worldwide increase in temperature). In the multiple-choice setting, which is the variety of question-answering (QA) that we focus on in this paper, there is also pragmatic reasoning involved in selecting optimal answer choices (e.g., while greenhouse effect might in some other context be a reasonable answer to the second question in Figure FIGREF1, global warming is a preferable candidate).
Recent successes in QA, driven largely by the creation of new resources BIBREF2, BIBREF3, BIBREF4, BIBREF5 and advances in model pre-training BIBREF6, BIBREF7, raise a natural question: do state-of-the-art multiple-choice QA (MCQA) models that excel at standard tasks really have basic knowledge and reasoning skills?
Most existing MCQA datasets are constructed through either expensive crowd-sourcing BIBREF8 or hand engineering effort, in the former case making it possible to collect large amounts of data at the cost of losing systematic control over the semantics of the target questions. Hence, doing a controlled experiment to answer such a question for QA is difficult given a lack of targeted challenge datasets.
Having definitive empirical evidence of model competence on any given phenomenon requires constructing a wide range of systematic tests. For example, in measuring competence of definitions, not only do we want to see that the model can handle individual questions such as Figure FIGREF1.1 inside of benchmark tasks, but that it can answer a wider range of questions that exhaustively cover a broad set of concepts and question perturbations (i.e., systematic adjustments to how the questions are constructed). The same applies to ISA reasoning; not only is it important to recognize in the question in Figure FIGREF1.1 that cooking is a learned behavior, but also that cooking is a general type of behavior or, through a few more inferential steps, a type of human activity.
In this paper, we look at systematically constructing such tests by exploiting the vast amounts of structured information contained in various types of expert knowledge such as knowledge graphs and lexical taxonomies. Our general methodology works as illustrated in Figure FIGREF1: given any MCQA model trained on a set of benchmark tasks, we systematically generate a set of synthetic dataset probes (i.e., MCQA renderings of the target information) from information in expert knowledge sources. We then use these probes to ask two empirical questions: 1) how well do models trained on benchmark tasks perform on these probing tasks and; 2) can such models be re-trained to master new challenges with minimal performance loss on their original tasks?
While our methodology is amenable to any knowledge source and set of models/benchmark tasks, we focus on probing state-of-the-art transformer models BIBREF7, BIBREF9 in the domain of science MCQA. For sources of expert knowledge, we use WordNet, a comprehensive lexical ontology, and other publicly available dictionary resources. We devise probes that measure model competence in definition and taxonomic knowledge in different settings (including hypernymy, hyponymy, and synonymy detection, and word sense disambiguation). This choice is motivated by fact that the science domain is considered particularly challenging for QA BIBREF10, BIBREF11, BIBREF12, and existing science benchmarks are known to involve widespread use of such knowledge (see BIBREF1, BIBREF13 for analysis), which is also arguably fundamental to more complex forms of reasoning.
We show that accurately probing QA models via synthetic datasets is not straightforward, as unexpected artifacts can easily arise in such data. This motivates our carefully constructed baselines and close data inspection to ensure probe quality.
Our results confirm that transformer-based QA models have a remarkable ability to recognize certain types of knowledge captured in our probes—even without additional fine-tuning. Such models can even outperform strong task-specific models trained directly on our probing tasks (e.g., on definitions, our best model achieves 77% test accuracy without specialized training, as opposed to 51% for a task-specific LSTM-based model). We also show that the same models can be effectively re-fine-tuned on small samples (even 100 examples) of probe data, and that high performance on the probes tends to correlate with a smaller drop in the model's performance on the original QA task.
Our comprehensive assessment reveals several interesting nuances to the overall positive trend. For example, the performance of even the best QA models degrades substantially on our hyponym probes (by 8-15%) when going from 1-hop links to 2-hops. Further, the accuracy of even our best models on the WordNetQA probe drops by 14-44% under our cluster-based analysis, which assesses whether a model knows several facts about each individual concept, rather than just being good at answering isolated questions. State-of-the-art QA models thus have much room to improve even in some fundamental building blocks, namely definitions and taxonomic hierarchies, of more complex forms of reasoning.
Dataset Probes and Construction ::: DictionaryQA
The DictionaryQA dataset is created from the GCIDE dictionary, which is a comprehensive open-source English dictionary built largely from the Webster's Revised Unabridged Dictionary BIBREF38. Each entry consists of a word, its part-of-speech, its definition, and an optional example sentence (see Table TABREF14). Overall, 33k entries (out of a total of 155k) contain example sentences/usages. As with the WordNet probes, we focus on this subset so as to contextualize each word being probed. In contrast to WordNet, GCIDE does not have ISA relations or explicit synsets, so we take each unique entry to be a distinct sense. We then use the dictionary entries to create a probe that centers around word-sense disambiguation, as described below.
Probing Methodology and Modeling
Given the probes above, we now can start to answer the empirical questions posed at the beginning. Our main focus is on looking at transformer-based MCQA models trained in the science domain (using the benchmarks shown in Table TABREF21). In this section, we provide details of MCQA and the target models, as well as several baselines that we use to sanity check our new datasets. To evaluate model competence, we look at a combination of model performance after science pre-training and after additional model fine-tuning using the lossless inoculation strategy of BIBREF22 (Section SECREF22). In Section SECREF24, we also discuss a cluster-level accuracy metric for measuring performance over semantic clusters.
Results and Findings ::: How well do pre-trained MCQA models do?
Science models that use non-transformer based encoders, such as the ESIM model with GloVe and ELMO, perform poorly across all probes, in many cases scoring near random chance, showing limits to how well they generalize from science to other tasks even with pre-trained GloVe and ELMO embeddings. In sharp contrast, the transformer models have mixed results, the most striking result being the RoBERTa models on the definitions and synonymy probes (achieving a test accuracy of 77% and 61%, respectively), which outperform several of the task-specific LSTM models trained directly on the probes. At first glance, this suggests that RoBERTa, which generally far outpaces even BERT across most probes, has high competence of definitions and synonyms even without explicit training on our new tasks.
Given the controlled nature of our probes, we can get a more detailed view of how well the science models are performing across different reasoning and distractor types, as shown in the first column of Figure FIGREF28 for ESIM and RoBERTa. The ESIM science model without training has uniformly poor performance across all categories, whereas the performance of RoBERTa is more varied. Across all datasets and number of hops (i.e., the rows in the heat maps), model performance for RoBERTa is consistently highest among examples with random distractors (i.e., the first column), and lowest in cases involving distractors that are closest in WordNet space (e.g., sister and ISA, or up/down, distractors of distance $k^{\prime }=1$). This is not surprising, given that, in the first case, random distractors are likely to be the easiest category (and the opposite for distractors close in space), but suggests that RoBERTa might only be getting the easiest cases correct.
Model performance also clearly degrades for hypernymy and hyponymy across all models as the number of hops $k$ increases (see red dashed boxes). For example, problems that involve hyponym reasoning with sister distractors of distance $k^{\prime }=1$ (i.e., the second column) degrades from 47% to 15% when the number of hops $k$ increases from 1 to 4. This general tendency persists even after additional fine-tuning, as we discuss next, and gives evidence that models are limited in their capacity for certain types of multi-hop inferences.
As discussed by BIBREF26, the choice of generation templates can have a significant effect on model performance. The results so far should therefore be regarded as a lower bound on model competence. It is possible that model performance is high for definitions, for example, because the associated templates best align with the science training distribution (which we know little about). For this reason, the subsequent inoculation step is important—it gives the model an opportunity to learn about our target templates and couple this learned knowledge with its general knowledge acquired during pre-training and science training (which is, again, what we aim to probe).
Results and Findings ::: Are Models Consistent across Clusters?
Table TABREF32 shows cluster-level accuracies for the different WordNetQA probes. As with performance across the different inference/distractor categories, these results are mixed. For some probes, such as definitions, our best models appear to be rather robust; e.g., our RoBERTa model has a cluster accuracy of $75\%$, meaning that it can answer all questions perfectly for 75% of the target concepts and that errors are concentrated on a small minority (25%) of concepts. On synonymy and hypernymy, both BERT and RoBERTa appear robust on the majority of concepts, showing that errors are similarly concentrated. In contrast, our best model on hyponymy has an accuracy of 36%, meaning that its errors are spread across many concepts, thus suggesting less robustness.
Table TABREF30 shows a selection of semantic clusters involving ISA reasoning, as well as the model performance over different answers (shown symbolically) and perturbations. For example, in the the second case, the cluster is based around the concept/synset oppose.v.06 and involves 4 inferences and a total 24 questions (i.e., inferences with perturbations). Our weakest model, ESIM, answers only 5 out of 24 questions correctly, whereas RoBERTa gets 21/24. In the other cases, RoBERTa gets all clusters correct, whereas BERT and ESIM get none of them correct.
We emphasize that these results only provide a crude look into model consistency and robustness. Recalling again the details in Table TABREF12, probes differ in terms of average size of clusters. Hyponymy, in virtue of having many more questions per cluster, might simply be a much more difficult dataset. In addition, such a strict evaluation does not take into account potential errors inside of clusters, which is an important issue that we discuss in the next section. We leave addressing such issues and coming up with more insightful cluster-based metrics for future work.
Figure 2: A portion of the WordNet ISA graph (top) and an example distractor function DISTR(τ) (bottom) used to generate distractor choices {a′1, a′2, a′3} for a question q based on information in the graph.
Figure 4: Inoculation plots showing accuracy of challenge tasks (red solid lines) and original tasks (red dashed lines) using the best aggregate model Ma,k∗ at each k number of challenge examples (x axis). We also plot the effect of using add-some inoculation shown in the blue (1x matching) and black (x2 matching) lines.
Query: Do they focus on Reading Comprehension or multiple choice question answering?
JSON Response: | {"answer": "MULTIPLE CHOICE QUESTION ANSWERING", "evidence": ["Automatically answering questions, especially in the open-domain setting (i.e., where minimal or no contextual knowledge is explicitly provided), requires bringing to bear considerable amount of background knowledge and reasoning abilities. For example, knowing the answers to the two questions in Figure FIGREF1 requires identifying a specific ISA relation (i.e., that cooking is a type of learned behavior) as well as recalling the definition of a concept (i.e., that global warming is defined as a worldwide increase in temperature). In the multiple-choice setting, which is the variety of question-answering (QA) that we focus on in this paper, there is also pragmatic reasoning involved in selecting optimal answer choices (e.g., while greenhouse effect might in some other context be a reasonable answer to the second question in Figure FIGREF1, global warming is a preferable candidate)."]} | science.qasper_abstractive_qa.28.0 | [
{
"content": "Examine the provided sections from a research document and address the accompanying query. The text is divided by newlines.\n\nDocument Content:\n\nWhat Does My QA Model Know? Devising Controlled Probes using Expert Knowledge\n\nOpen-domain question answering (QA) is known to involve several underlying knowledge and reasoning challenges, but are models actually learning such knowledge when trained on benchmark tasks? To investigate this, we introduce several new challenge tasks that probe whether state-of-the-art QA models have general knowledge about word definitions and general taxonomic reasoning, both of which are fundamental to more complex forms of reasoning and are widespread in benchmark datasets. As an alternative to expensive crowd-sourcing, we introduce a methodology for automatically building datasets from various types of expert knowledge (e.g., knowledge graphs and lexical taxonomies), allowing for systematic control over the resulting probes and for a more comprehensive evaluation. We find automatically constructing probes to be vulnerable to annotation artifacts, which we carefully control for. Our evaluation confirms that transformer-based QA models are already predisposed to recognize certain types of structural lexical knowledge. However, it also reveals a more nuanced picture: their performance degrades substantially with even a slight increase in the number of hops in the underlying taxonomic hierarchy, or as more challenging distractor candidate answers are introduced. Further, even when these models succeed at the standard instance-level evaluation, they leave much room for improvement when assessed at the level of clusters of semantically connected probes (e.g., all Isa questions about a concept).\n\nIntroduction\nAutomatically answering questions, especially in the open-domain setting (i.e., where minimal or no contextual knowledge is explicitly provided), requires bringing to bear considerable amount of background knowledge and reasoning abilities. For example, knowing the answers to the two questions in Figure FIGREF1 requires identifying a specific ISA relation (i.e., that cooking is a type of learned behavior) as well as recalling the definition of a concept (i.e., that global warming is defined as a worldwide increase in temperature). In the multiple-choice setting, which is the variety of question-answering (QA) that we focus on in this paper, there is also pragmatic reasoning involved in selecting optimal answer choices (e.g., while greenhouse effect might in some other context be a reasonable answer to the second question in Figure FIGREF1, global warming is a preferable candidate).\nRecent successes in QA, driven largely by the creation of new resources BIBREF2, BIBREF3, BIBREF4, BIBREF5 and advances in model pre-training BIBREF6, BIBREF7, raise a natural question: do state-of-the-art multiple-choice QA (MCQA) models that excel at standard tasks really have basic knowledge and reasoning skills?\nMost existing MCQA datasets are constructed through either expensive crowd-sourcing BIBREF8 or hand engineering effort, in the former case making it possible to collect large amounts of data at the cost of losing systematic control over the semantics of the target questions. Hence, doing a controlled experiment to answer such a question for QA is difficult given a lack of targeted challenge datasets.\nHaving definitive empirical evidence of model competence on any given phenomenon requires constructing a wide range of systematic tests. 
For example, in measuring competence of definitions, not only do we want to see that the model can handle individual questions such as Figure FIGREF1.1 inside of benchmark tasks, but that it can answer a wider range of questions that exhaustively cover a broad set of concepts and question perturbations (i.e., systematic adjustments to how the questions are constructed). The same applies to ISA reasoning; not only is it important to recognize in the question in Figure FIGREF1.1 that cooking is a learned behavior, but also that cooking is a general type of behavior or, through a few more inferential steps, a type of human activity.\nIn this paper, we look at systematically constructing such tests by exploiting the vast amounts of structured information contained in various types of expert knowledge such as knowledge graphs and lexical taxonomies. Our general methodology works as illustrated in Figure FIGREF1: given any MCQA model trained on a set of benchmark tasks, we systematically generate a set of synthetic dataset probes (i.e., MCQA renderings of the target information) from information in expert knowledge sources. We then use these probes to ask two empirical questions: 1) how well do models trained on benchmark tasks perform on these probing tasks and; 2) can such models be re-trained to master new challenges with minimal performance loss on their original tasks?\nWhile our methodology is amenable to any knowledge source and set of models/benchmark tasks, we focus on probing state-of-the-art transformer models BIBREF7, BIBREF9 in the domain of science MCQA. For sources of expert knowledge, we use WordNet, a comprehensive lexical ontology, and other publicly available dictionary resources. We devise probes that measure model competence in definition and taxonomic knowledge in different settings (including hypernymy, hyponymy, and synonymy detection, and word sense disambiguation). This choice is motivated by fact that the science domain is considered particularly challenging for QA BIBREF10, BIBREF11, BIBREF12, and existing science benchmarks are known to involve widespread use of such knowledge (see BIBREF1, BIBREF13 for analysis), which is also arguably fundamental to more complex forms of reasoning.\nWe show that accurately probing QA models via synthetic datasets is not straightforward, as unexpected artifacts can easily arise in such data. This motivates our carefully constructed baselines and close data inspection to ensure probe quality.\nOur results confirm that transformer-based QA models have a remarkable ability to recognize certain types of knowledge captured in our probes—even without additional fine-tuning. Such models can even outperform strong task-specific models trained directly on our probing tasks (e.g., on definitions, our best model achieves 77% test accuracy without specialized training, as opposed to 51% for a task-specific LSTM-based model). We also show that the same models can be effectively re-fine-tuned on small samples (even 100 examples) of probe data, and that high performance on the probes tends to correlate with a smaller drop in the model's performance on the original QA task.\nOur comprehensive assessment reveals several interesting nuances to the overall positive trend. For example, the performance of even the best QA models degrades substantially on our hyponym probes (by 8-15%) when going from 1-hop links to 2-hops. 
Further, the accuracy of even our best models on the WordNetQA probe drops by 14-44% under our cluster-based analysis, which assesses whether a model knows several facts about each individual concept, rather than just being good at answering isolated questions. State-of-the-art QA models thus have much room to improve even in some fundamental building blocks, namely definitions and taxonomic hierarchies, of more complex forms of reasoning.\n\nDataset Probes and Construction ::: DictionaryQA\nThe DictionaryQA dataset is created from the GCIDE dictionary, which is a comprehensive open-source English dictionary built largely from the Webster's Revised Unabridged Dictionary BIBREF38. Each entry consists of a word, its part-of-speech, its definition, and an optional example sentence (see Table TABREF14). Overall, 33k entries (out of a total of 155k) contain example sentences/usages. As with the WordNet probes, we focus on this subset so as to contextualize each word being probed. In contrast to WordNet, GCIDE does not have ISA relations or explicit synsets, so we take each unique entry to be a distinct sense. We then use the dictionary entries to create a probe that centers around word-sense disambiguation, as described below.\n\nProbing Methodology and Modeling\nGiven the probes above, we now can start to answer the empirical questions posed at the beginning. Our main focus is on looking at transformer-based MCQA models trained in the science domain (using the benchmarks shown in Table TABREF21). In this section, we provide details of MCQA and the target models, as well as several baselines that we use to sanity check our new datasets. To evaluate model competence, we look at a combination of model performance after science pre-training and after additional model fine-tuning using the lossless inoculation strategy of BIBREF22 (Section SECREF22). In Section SECREF24, we also discuss a cluster-level accuracy metric for measuring performance over semantic clusters.\n\nResults and Findings ::: How well do pre-trained MCQA models do?\nScience models that use non-transformer based encoders, such as the ESIM model with GloVe and ELMO, perform poorly across all probes, in many cases scoring near random chance, showing limits to how well they generalize from science to other tasks even with pre-trained GloVe and ELMO embeddings. In sharp contrast, the transformer models have mixed results, the most striking result being the RoBERTa models on the definitions and synonymy probes (achieving a test accuracy of 77% and 61%, respectively), which outperform several of the task-specific LSTM models trained directly on the probes. At first glance, this suggests that RoBERTa, which generally far outpaces even BERT across most probes, has high competence of definitions and synonyms even without explicit training on our new tasks.\nGiven the controlled nature of our probes, we can get a more detailed view of how well the science models are performing across different reasoning and distractor types, as shown in the first column of Figure FIGREF28 for ESIM and RoBERTa. The ESIM science model without training has uniformly poor performance across all categories, whereas the performance of RoBERTa is more varied. 
Across all datasets and number of hops (i.e., the rows in the heat maps), model performance for RoBERTa is consistently highest among examples with random distractors (i.e., the first column), and lowest in cases involving distractors that are closest in WordNet space (e.g., sister and ISA, or up/down, distractors of distance $k^{\\prime }=1$). This is not surprising, given that, in the first case, random distractors are likely to be the easiest category (and the opposite for distractors close in space), but suggests that RoBERTa might only be getting the easiest cases correct.\nModel performance also clearly degrades for hypernymy and hyponymy across all models as the number of hops $k$ increases (see red dashed boxes). For example, problems that involve hyponym reasoning with sister distractors of distance $k^{\\prime }=1$ (i.e., the second column) degrades from 47% to 15% when the number of hops $k$ increases from 1 to 4. This general tendency persists even after additional fine-tuning, as we discuss next, and gives evidence that models are limited in their capacity for certain types of multi-hop inferences.\nAs discussed by BIBREF26, the choice of generation templates can have a significant effect on model performance. The results so far should therefore be regarded as a lower bound on model competence. It is possible that model performance is high for definitions, for example, because the associated templates best align with the science training distribution (which we know little about). For this reason, the subsequent inoculation step is important—it gives the model an opportunity to learn about our target templates and couple this learned knowledge with its general knowledge acquired during pre-training and science training (which is, again, what we aim to probe).\n\nResults and Findings ::: Are Models Consistent across Clusters?\nTable TABREF32 shows cluster-level accuracies for the different WordNetQA probes. As with performance across the different inference/distractor categories, these results are mixed. For some probes, such as definitions, our best models appear to be rather robust; e.g., our RoBERTa model has a cluster accuracy of $75\\%$, meaning that it can answer all questions perfectly for 75% of the target concepts and that errors are concentrated on a small minority (25%) of concepts. On synonymy and hypernymy, both BERT and RoBERTa appear robust on the majority of concepts, showing that errors are similarly concentrated. In contrast, our best model on hyponymy has an accuracy of 36%, meaning that its errors are spread across many concepts, thus suggesting less robustness.\nTable TABREF30 shows a selection of semantic clusters involving ISA reasoning, as well as the model performance over different answers (shown symbolically) and perturbations. For example, in the the second case, the cluster is based around the concept/synset oppose.v.06 and involves 4 inferences and a total 24 questions (i.e., inferences with perturbations). Our weakest model, ESIM, answers only 5 out of 24 questions correctly, whereas RoBERTa gets 21/24. In the other cases, RoBERTa gets all clusters correct, whereas BERT and ESIM get none of them correct.\nWe emphasize that these results only provide a crude look into model consistency and robustness. Recalling again the details in Table TABREF12, probes differ in terms of average size of clusters. Hyponymy, in virtue of having many more questions per cluster, might simply be a much more difficult dataset. 
In addition, such a strict evaluation does not take into account potential errors inside of clusters, which is an important issue that we discuss in the next section. We leave addressing such issues and coming up with more insightful cluster-based metrics for future work.\n\nFigure 2: A portion of the WordNet ISA graph (top) and an example distractor function DISTR(τ) (bottom) used to generate distractor choices {a′1, a′2, a′3} for a question q based on information in the graph.\n\nFigure 4: Inoculation plots showing accuracy of challenge tasks (red solid lines) and original tasks (red dashed lines) using the best aggregate model Ma,k∗ at each k number of challenge examples (x axis). We also plot the effect of using add-some inoculation shown in the blue (1x matching) and black (x2 matching) lines.\n\nQuery: Do they focus on Reading Comprehension or multiple choice question answering?\n\nJSON Response:",
"role": "user"
},
{
"content": "{\"answer\": \"MULTIPLE CHOICE QUESTION ANSWERING\", \"evidence\": [\"Automatically answering questions, especially in the open-domain setting (i.e., where minimal or no contextual knowledge is explicitly provided), requires bringing to bear considerable amount of background knowledge and reasoning abilities. For example, knowing the answers to the two questions in Figure FIGREF1 requires identifying a specific ISA relation (i.e., that cooking is a type of learned behavior) as well as recalling the definition of a concept (i.e., that global warming is defined as a worldwide increase in temperature). In the multiple-choice setting, which is the variety of question-answering (QA) that we focus on in this paper, there is also pragmatic reasoning involved in selecting optimal answer choices (e.g., while greenhouse effect might in some other context be a reasonable answer to the second question in Figure FIGREF1, global warming is a preferable candidate).\"]}",
"role": "assistant"
}
]
|
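The probing record above reports cluster-level accuracy, where a WordNet concept cluster only counts as solved if every question built around it (all inferences and all perturbations) is answered correctly. As a rough illustration of that strict metric, a minimal sketch is given below; the input format (pairs of cluster id and per-question correctness) is an assumption made for the example, not the probe's actual file layout.

```python
from collections import defaultdict

def cluster_accuracy(predictions):
    """Strict cluster-level accuracy: a cluster is correct only if
    every one of its questions is answered correctly.

    `predictions` is assumed to be an iterable of
    (cluster_id, is_correct) pairs, an illustrative format rather
    than the probe's actual schema.
    """
    per_cluster = defaultdict(list)
    for cluster_id, is_correct in predictions:
        per_cluster[cluster_id].append(bool(is_correct))

    if not per_cluster:
        return 0.0
    solved = sum(1 for answers in per_cluster.values() if all(answers))
    return solved / len(per_cluster)


# Toy example: 2 of 3 clusters fully solved -> 0.666...
preds = [("oppose.v.06", True), ("oppose.v.06", False),
         ("bank.n.01", True), ("bank.n.01", True),
         ("arm.n.01", True)]
print(cluster_accuracy(preds))
```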
science.scifact_entailment | Determine the relationship between them and generate a JSON response. Scientific Claim:
Activated Cdk5 regulates DNA damage-induced neuronal death.
Research Abstract:
The phosphatidylinositol-3-kinase-like kinase ATM (ataxia-telangiectasia mutated) has a central role in coordinating DNA damage responses, including cell-cycle checkpoint control, DNA repair and apoptosis.
Mutations of ATM cause a spectrum of defects ranging from neurodegeneration to cancer predisposition.
However, the mechanism by which DNA damage activates ATM is poorly understood.
Here we show that Cdk5 (cyclin-dependent kinase 5), activated by DNA damage, directly phosphorylates ATM at Ser 794 in post-mitotic neurons.
Phosphorylation at Ser 794 precedes, and is required for, ATM autophosphorylation at Ser 1981, and activates ATM kinase activity.
The Cdk5-ATM signal regulates phosphorylation and function of the ATM targets p53 and H2AX.
Interruption of the Cdk5-ATM pathway attenuates DNA-damage-induced neuronal cell cycle re-entry and expression of the p53 targets PUMA and Bax, protecting neurons from death.
Thus, activation of Cdk5 by DNA damage serves as a critical signal to initiate the ATM response and regulate ATM-dependent cellular processes. | {"verdict": "SUPPORT", "evidence": ["Interruption of the Cdk5-ATM pathway attenuates DNA-damage-induced neuronal cell cycle re-entry and expression of the p53 targets PUMA and Bax, protecting neurons from death."]} | science.scifact_entailment.49 | [
{
"content": "Determine the relationship between them and generate a JSON response. Scientific Claim: \nActivated Cdk5 regulates DNA damage-induced neuronal death.\n\nResearch Abstract:\nThe phosphatidylinositol-3-kinase-like kinase ATM (ataxia-telangiectasia mutated) has a central role in coordinating DNA damage responses, including cell-cycle checkpoint control, DNA repair and apoptosis.\nMutations of ATM cause a spectrum of defects ranging from neurodegeneration to cancer predisposition.\nHowever, the mechanism by which DNA damage activates ATM is poorly understood.\nHere we show that Cdk5 (cyclin-dependent kinase 5), activated by DNA damage, directly phosphorylates ATM at Ser 794 in post-mitotic neurons.\nPhosphorylation at Ser 794 precedes, and is required for, ATM autophosphorylation at Ser 1981, and activates ATM kinase activity.\nThe Cdk5-ATM signal regulates phosphorylation and function of the ATM targets p53 and H2AX.\nInterruption of the Cdk5-ATM pathway attenuates DNA-damage-induced neuronal cell cycle re-entry and expression of the p53 targets PUMA and Bax, protecting neurons from death.\nThus, activation of Cdk5 by DNA damage serves as a critical signal to initiate the ATM response and regulate ATM-dependent cellular processes.",
"role": "user"
},
{
"content": "{\"verdict\": \"SUPPORT\", \"evidence\": [\"Interruption of the Cdk5-ATM pathway attenuates DNA-damage-induced neuronal cell cycle re-entry and expression of the p53 targets PUMA and Bax, protecting neurons from death.\"]}",
"role": "assistant"
}
]
|
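The entailment record above expects a JSON object with a verdict string and a list of evidence sentences copied from the abstract. Below is a hypothetical sanity check for such outputs; it only enforces the structure visible in the record (the full verdict label inventory is not shown there, so it is deliberately left unchecked).

```python
import json

def check_entailment_output(raw_json, abstract_sentences):
    """Lightweight structural check for a verdict/evidence response.

    Only the structure visible in the record above is enforced:
    a non-empty string `verdict` and an `evidence` list whose entries
    are sentences copied verbatim from the abstract.
    """
    data = json.loads(raw_json)
    problems = []

    if not isinstance(data.get("verdict"), str) or not data["verdict"]:
        problems.append("missing or empty 'verdict'")
    if not isinstance(data.get("evidence"), list):
        problems.append("'evidence' must be a list")
    else:
        for sent in data["evidence"]:
            if sent not in abstract_sentences:
                problems.append(f"evidence not found in abstract: {sent[:60]}...")
    return problems


abstract = [
    "Interruption of the Cdk5-ATM pathway attenuates DNA-damage-induced "
    "neuronal cell cycle re-entry and expression of the p53 targets PUMA "
    "and Bax, protecting neurons from death.",
]
response = json.dumps({"verdict": "SUPPORT", "evidence": abstract})
print(check_entailment_output(response, abstract))   # -> []
```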
science.mup_single_document_summarization | Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.
Title of the Paper:
CHIP: CHannel Independence-based Pruning for Compact Neural Networks
Main Content:
1 Introduction . Convolutional neural networks ( CNNs ) have obtained widespread adoptions in numerous important AI applications [ 17 , 48 , 47 , 12 , 11 , 44 , 35 ] . However , CNNs are inherently computation intensive and storage intensive , thereby posing severe challenges for their efficient deployment on resourceconstrained embedded platforms . To address these challenges , model compression is widely used to accelerate and compress CNN models on edge devices . To date , various types of compression strategies , such as network pruning [ 15 , 16 , 37 , 60 , 28 , 14 , 1 , 49 , 10 , 63 , 36 , 18 , 13 , 3 , 2 , 25 , 53 , 50 , 9 , 39 , 33 ] , quantization [ 15 , 55 , 43 , 8 ] , low-rank approximation [ 56 , 40 , 58 , 57 ] , knowledge distillation [ 22 , 41 ] and structured matrix-based construction [ 45 , 29 , 6 ] , have been proposed and explored . Among them , network pruning is the most popular and extensively studied model compression technique in both academia and industry . Based on their differences in pruning granularity , pruning approaches can be roughly categorized to weight pruning [ 16 , 15 ] and filter pruning [ 54 , 27 , 21 , 38 , 34 ] . Weight pruning focuses on the proper selection of the to-be-pruned weights within the filters . Although enabling a high compression ratio , this strategy meanwhile causes unstructured sparsity patterns , which are not well supported by the general-purpose hardware in practice . On the other hand , filter pruning emphasizes the removal of 35th Conference on Neural Information Processing Systems ( NeurIPS 2021 ) . the entire selected filters . The resulting structured sparsity patterns can be then properly leveraged by the off-the-shelf CPUs/GPUs to achieve acceleration in the real-world scenario . Existing Filter Pruning Methods . Motivated by the potential practical speedup offered by filter pruning , to date numerous research efforts have been conducted to study how to determine the important filters – the key component of efficient filter pruning . A well-known strategy is to utilize the norms of different filters to evaluate their importance . Such a `` smaller-norm-less-important '' hypothesis is adopted in several pioneering filter pruning works [ 27 , 19 ] . Later , considering the limitations of norm-based criterion in real scenarios , [ 20 ] proposes to utilize geometric medianbased criterion . More recently , first determining those important feature maps and then preserving the corresponding filters , instead of directly selecting the filters , become a popular strategy for filter pruning . As indicated in [ 31 ] , the features , by their natures , reflect and capture rich and important information and characteristics of both input data and filters , and hence measuring the importance of features can provide a better guideline to determine the important filters . Built on this pruning philosophy , several feature-guided filter pruning approaches [ 31 , 51 ] have been proposed and developed , and the evaluation results show their superior performance over the state-of-the-art filter-guided counterparts with respect to both task performance ( e.g. , accuracy ) and compression performance ( e.g. , model size and floating-point operations ( FLOPs ) reductions ) . Determining Importance : Intra-channel & Inter-channel Perspectives . These recent advancements on filter pruning indeed show the huge benefits of leveraging feature information to determine the importance of filters . 
To date , some feature-guided approaches measure the importance from the intra-channel perspective . In other words , no matter which importance metric is used , the importance of one feature map ( and its corresponding filter ) , is measured only upon the information of this feature map in its own channel . On the other aspect , the inter-channel perspective , which essentially determines the filter importance via using cross-channel information [ 20 , 42 , 46 , 52 , 26 ] , is still being further explored . To be specific , [ 20 ] and [ 42 ] adopt cross-channel geometric median and Hessian , respectively , to measure the channel importance . However , such measurement is based on filter instead of feature map information , and hence the rich and important feature characteristics are not properly identified and extracted . [ 52 , 26 ] also explore inter-channel-based filter pruning via introducing budget constraints across channels . However , such exploration and utilization of the inter-channel information are implicit and indirect , thereby limiting the practical pruning performance . Benefits of Inter-channel Perspective . In principle , the feature information across multiple channels , if being leveraged properly , can potentially provide richer knowledge for filter pruning than the intra-channel information . Specifically , this is because : 1 ) the importance of one filter , if being solely determined by its corresponding feature map , may be sensitive to input data ; while the cross-channel feature information can bring more stable and reliable measurement ; and 2 ) consider the essential mission of pruning is to remove the unnecessary redundancy , the inter-channel strategy can inherently better identify and capture the potential unnecessary correlations among different feature maps ( and the corresponding filters ) , and thereby unlocking the new opportunity of achieving better task and compression performance . Technical Preview and Contributions . Motivated by these promising potential benefits , in this paper we propose to explore and leverage the cross-channel feature information for efficient filter pruning . To be specific , we propose Channel Independence , a cross-channel correlation-based metric to measure the importance of filters . Channel independence can be intuitively understood as the measurement of `` replaceability '' : when the feature map of one filter is measured as exhibiting lower independence , it means this feature map tends to be more linearly dependent on other feature maps of other channels . In such a scenario , the contained information of this low-independence feature map is believed to have already been implicitly encoded in other feature maps – in other words , it does not contain useful information or knowledge . Therefore the corresponding filter , which outputs this low-independence feature map , is viewed as unimportant and can be safely removed without affecting the model capacity . Overall , the contributions of this paper are summarized as : • We propose channel independence , a metric that measures the correlation of multiple feature maps , to determine the importance of filters . Built from an inter-channel perspective , channel independence can identify and capture the filter importance in a more global and precise way , thereby providing a better guideline for filter pruning . 
• We systematically investigate and analyze the suitable quantification metric , the complexity of the measuring scheme and the sensitiveness & reliability of channel independence , and then we develop a low-cost fine-grained high-robustness channel independence calculation scheme for efficient filter pruning . • We empirically apply the channel independence-based importance determination in different filter pruning tasks . The evaluation results show that our proposed approach brings very high pruning performance with preserving high accuracy . Notably , on CIFAR-10 dataset our solution can bring 0.90 % and 0.94 % accuracy increase over baseline ResNet-56 and ResNet-110 models , respectively , and meanwhile the model size and FLOPs are reduced by 42.8 % and 47.4 % ( for ResNet-56 ) and 48.3 % and 52.1 % ( for ResNet-110 ) , respectively . On ImageNet dataset , our approach can achieve 40.8 % and 44.8 % storage and computation reductions , respectively , with 0.15 % accuracy increase over the baseline ResNet-50 model . 2 Preliminaries . Filter Pruning . For a CNN model with L layers , its l-th convolutional layer W l = { F l1 , F l 2 , · · · , F l cl } contains cl filters F li ∈ Rc l−1×kl×kl , where cl , cl−1 and kl denote the number of output channels , the number of input channels and the kernel size , respectively . In general , network pruning can be formulated as the following optimization problem : min { Wl } Ll=1 L ( Y , f ( X , W l ) ) , s.t . ∥W l∥0 ≤ κl , ( 1 ) where L ( · , · ) is the loss function , Y is the ground-truth labels , X is the input data , and f ( · , · ) is the output function of CNN model { W l } Ll=1 . Besides , ∥ · ∥0 is the ℓ0-norm that measures the number of non-zero filters in the set , and κl is the number of filters to be preserved in the l-th layer . Feature-guided Filter Pruning . Consider the feature maps , in principle , contain rich and important information of both filters and input data , approaches using feature information have become popular and achieved the state-of-the-art performance for filter pruning . To be specific , unlike the filter-guided methods that directly minimize the loss function involved with filters ( as Eq . 1 ) , the objective of feature-guided filter pruning is to minimize the following loss function : min { Al } Ll=1 L ( Y , Al ) , s.t . ∥Al∥0 ≤ κl , ( 2 ) where Al = { Al1 , Al2 , · · · , Alcl } ∈ R cl×h×w is a set of feature maps output from the l-th layer , and Ali ∈ Rh×w is the feature map corresponds to the i-th channel . In general , after the κl important feature maps are identified and selected , their corresponding κl filters are preserved after pruning . 3 The Proposed Method . 3.1 Motivation . As formulated in Eq . 2 , feature-guided filter pruning leverages the generated feature maps in each layer to identify the important filters . To achieve that , various types of feature information , such as the high ranks [ 31 ] and the scaling factors [ 51 ] , have been proposed and utilized to select the proper feature maps and the corresponding filters . A common point for these state-of-the-art approaches is that all of them focus on measuring the importance via using the information contained in each feature map . On the other hand , the correlation among different feature maps , as another type of rich information provided by the neural networks , is little exploited in the existing filter pruning works . Why Inter-channel Perspective ? 
We argue that the feature information across multiple channels is of significant importance and richness , and it can be leveraged towards efficient filter pruning . Such an inter-channel perspective is motivated by two promising benefits . First , filter pruning is essentially a data-driven strategy . When the importance of one filter solely depends on the information represented by its own generated feature map , the measurement of the importance may be unstable and sensitive to the slight change of input data . On the other hand , determining the importance built upon information contained in the multiple feature maps , if performed properly , can reduce the potential disturbance incurred by the change of input data , and thereby making the importance ranking more reliable and stable . Second , the inter-channel strategy , by its nature , can better model and capture the cross-channel correlation . In the context of model compression , these identified correlations can be interpreted as a type of architecture-level redundancy , which is exactly what filter pruning aims to remove . Therefore , inter-channel strategy can enable more aggressive pruning while still preserving high accuracy . | This paper proposes a channel pruning method named CHIP to improve the existing channel pruning frameworks. The proposed method measures the importance of channels through channel independence. Extensive experiments show the effectiveness of trunk pruning. | science.mup_single_document_summarization.14695 | [
{
"content": "Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.\n\nTitle of the Paper:\nCHIP: CHannel Independence-based Pruning for Compact Neural Networks\n\nMain Content:\n1 Introduction . Convolutional neural networks ( CNNs ) have obtained widespread adoptions in numerous important AI applications [ 17 , 48 , 47 , 12 , 11 , 44 , 35 ] . However , CNNs are inherently computation intensive and storage intensive , thereby posing severe challenges for their efficient deployment on resourceconstrained embedded platforms . To address these challenges , model compression is widely used to accelerate and compress CNN models on edge devices . To date , various types of compression strategies , such as network pruning [ 15 , 16 , 37 , 60 , 28 , 14 , 1 , 49 , 10 , 63 , 36 , 18 , 13 , 3 , 2 , 25 , 53 , 50 , 9 , 39 , 33 ] , quantization [ 15 , 55 , 43 , 8 ] , low-rank approximation [ 56 , 40 , 58 , 57 ] , knowledge distillation [ 22 , 41 ] and structured matrix-based construction [ 45 , 29 , 6 ] , have been proposed and explored . Among them , network pruning is the most popular and extensively studied model compression technique in both academia and industry . Based on their differences in pruning granularity , pruning approaches can be roughly categorized to weight pruning [ 16 , 15 ] and filter pruning [ 54 , 27 , 21 , 38 , 34 ] . Weight pruning focuses on the proper selection of the to-be-pruned weights within the filters . Although enabling a high compression ratio , this strategy meanwhile causes unstructured sparsity patterns , which are not well supported by the general-purpose hardware in practice . On the other hand , filter pruning emphasizes the removal of 35th Conference on Neural Information Processing Systems ( NeurIPS 2021 ) . the entire selected filters . The resulting structured sparsity patterns can be then properly leveraged by the off-the-shelf CPUs/GPUs to achieve acceleration in the real-world scenario . Existing Filter Pruning Methods . Motivated by the potential practical speedup offered by filter pruning , to date numerous research efforts have been conducted to study how to determine the important filters – the key component of efficient filter pruning . A well-known strategy is to utilize the norms of different filters to evaluate their importance . Such a `` smaller-norm-less-important '' hypothesis is adopted in several pioneering filter pruning works [ 27 , 19 ] . Later , considering the limitations of norm-based criterion in real scenarios , [ 20 ] proposes to utilize geometric medianbased criterion . More recently , first determining those important feature maps and then preserving the corresponding filters , instead of directly selecting the filters , become a popular strategy for filter pruning . As indicated in [ 31 ] , the features , by their natures , reflect and capture rich and important information and characteristics of both input data and filters , and hence measuring the importance of features can provide a better guideline to determine the important filters . Built on this pruning philosophy , several feature-guided filter pruning approaches [ 31 , 51 ] have been proposed and developed , and the evaluation results show their superior performance over the state-of-the-art filter-guided counterparts with respect to both task performance ( e.g. , accuracy ) and compression performance ( e.g. , model size and floating-point operations ( FLOPs ) reductions ) . 
Determining Importance : Intra-channel & Inter-channel Perspectives . These recent advancements on filter pruning indeed show the huge benefits of leveraging feature information to determine the importance of filters . To date , some feature-guided approaches measure the importance from the intra-channel perspective . In other words , no matter which importance metric is used , the importance of one feature map ( and its corresponding filter ) , is measured only upon the information of this feature map in its own channel . On the other aspect , the inter-channel perspective , which essentially determines the filter importance via using cross-channel information [ 20 , 42 , 46 , 52 , 26 ] , is still being further explored . To be specific , [ 20 ] and [ 42 ] adopt cross-channel geometric median and Hessian , respectively , to measure the channel importance . However , such measurement is based on filter instead of feature map information , and hence the rich and important feature characteristics are not properly identified and extracted . [ 52 , 26 ] also explore inter-channel-based filter pruning via introducing budget constraints across channels . However , such exploration and utilization of the inter-channel information are implicit and indirect , thereby limiting the practical pruning performance . Benefits of Inter-channel Perspective . In principle , the feature information across multiple channels , if being leveraged properly , can potentially provide richer knowledge for filter pruning than the intra-channel information . Specifically , this is because : 1 ) the importance of one filter , if being solely determined by its corresponding feature map , may be sensitive to input data ; while the cross-channel feature information can bring more stable and reliable measurement ; and 2 ) consider the essential mission of pruning is to remove the unnecessary redundancy , the inter-channel strategy can inherently better identify and capture the potential unnecessary correlations among different feature maps ( and the corresponding filters ) , and thereby unlocking the new opportunity of achieving better task and compression performance . Technical Preview and Contributions . Motivated by these promising potential benefits , in this paper we propose to explore and leverage the cross-channel feature information for efficient filter pruning . To be specific , we propose Channel Independence , a cross-channel correlation-based metric to measure the importance of filters . Channel independence can be intuitively understood as the measurement of `` replaceability '' : when the feature map of one filter is measured as exhibiting lower independence , it means this feature map tends to be more linearly dependent on other feature maps of other channels . In such a scenario , the contained information of this low-independence feature map is believed to have already been implicitly encoded in other feature maps – in other words , it does not contain useful information or knowledge . Therefore the corresponding filter , which outputs this low-independence feature map , is viewed as unimportant and can be safely removed without affecting the model capacity . Overall , the contributions of this paper are summarized as : • We propose channel independence , a metric that measures the correlation of multiple feature maps , to determine the importance of filters . 
Built from an inter-channel perspective , channel independence can identify and capture the filter importance in a more global and precise way , thereby providing a better guideline for filter pruning . • We systematically investigate and analyze the suitable quantification metric , the complexity of the measuring scheme and the sensitiveness & reliability of channel independence , and then we develop a low-cost fine-grained high-robustness channel independence calculation scheme for efficient filter pruning . • We empirically apply the channel independence-based importance determination in different filter pruning tasks . The evaluation results show that our proposed approach brings very high pruning performance with preserving high accuracy . Notably , on CIFAR-10 dataset our solution can bring 0.90 % and 0.94 % accuracy increase over baseline ResNet-56 and ResNet-110 models , respectively , and meanwhile the model size and FLOPs are reduced by 42.8 % and 47.4 % ( for ResNet-56 ) and 48.3 % and 52.1 % ( for ResNet-110 ) , respectively . On ImageNet dataset , our approach can achieve 40.8 % and 44.8 % storage and computation reductions , respectively , with 0.15 % accuracy increase over the baseline ResNet-50 model . 2 Preliminaries . Filter Pruning . For a CNN model with L layers , its l-th convolutional layer W l = { F l1 , F l 2 , · · · , F l cl } contains cl filters F li ∈ Rc l−1×kl×kl , where cl , cl−1 and kl denote the number of output channels , the number of input channels and the kernel size , respectively . In general , network pruning can be formulated as the following optimization problem : min { Wl } Ll=1 L ( Y , f ( X , W l ) ) , s.t . ∥W l∥0 ≤ κl , ( 1 ) where L ( · , · ) is the loss function , Y is the ground-truth labels , X is the input data , and f ( · , · ) is the output function of CNN model { W l } Ll=1 . Besides , ∥ · ∥0 is the ℓ0-norm that measures the number of non-zero filters in the set , and κl is the number of filters to be preserved in the l-th layer . Feature-guided Filter Pruning . Consider the feature maps , in principle , contain rich and important information of both filters and input data , approaches using feature information have become popular and achieved the state-of-the-art performance for filter pruning . To be specific , unlike the filter-guided methods that directly minimize the loss function involved with filters ( as Eq . 1 ) , the objective of feature-guided filter pruning is to minimize the following loss function : min { Al } Ll=1 L ( Y , Al ) , s.t . ∥Al∥0 ≤ κl , ( 2 ) where Al = { Al1 , Al2 , · · · , Alcl } ∈ R cl×h×w is a set of feature maps output from the l-th layer , and Ali ∈ Rh×w is the feature map corresponds to the i-th channel . In general , after the κl important feature maps are identified and selected , their corresponding κl filters are preserved after pruning . 3 The Proposed Method . 3.1 Motivation . As formulated in Eq . 2 , feature-guided filter pruning leverages the generated feature maps in each layer to identify the important filters . To achieve that , various types of feature information , such as the high ranks [ 31 ] and the scaling factors [ 51 ] , have been proposed and utilized to select the proper feature maps and the corresponding filters . A common point for these state-of-the-art approaches is that all of them focus on measuring the importance via using the information contained in each feature map . 
On the other hand , the correlation among different feature maps , as another type of rich information provided by the neural networks , is little exploited in the existing filter pruning works . Why Inter-channel Perspective ? We argue that the feature information across multiple channels is of significant importance and richness , and it can be leveraged towards efficient filter pruning . Such an inter-channel perspective is motivated by two promising benefits . First , filter pruning is essentially a data-driven strategy . When the importance of one filter solely depends on the information represented by its own generated feature map , the measurement of the importance may be unstable and sensitive to the slight change of input data . On the other hand , determining the importance built upon information contained in the multiple feature maps , if performed properly , can reduce the potential disturbance incurred by the change of input data , and thereby making the importance ranking more reliable and stable . Second , the inter-channel strategy , by its nature , can better model and capture the cross-channel correlation . In the context of model compression , these identified correlations can be interpreted as a type of architecture-level redundancy , which is exactly what filter pruning aims to remove . Therefore , inter-channel strategy can enable more aggressive pruning while still preserving high accuracy .",
"role": "user"
},
{
"content": "This paper proposes a channel pruning method named CHIP to improve the existing channel pruning frameworks. The proposed method measures the importance of channels through channel independence. Extensive experiments show the effectiveness of trunk pruning.",
"role": "assistant"
}
]
|
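The CHIP excerpt above describes channel independence only informally: a feature map is considered less independent, and its filter less important, when it is close to a linear combination of the other channels' feature maps. The sketch below is a rough proxy built from that description alone, a per-channel least-squares reconstruction residual, and should not be read as the paper's actual scoring scheme.

```python
import numpy as np

def channel_independence_proxy(feature_maps):
    """Illustrative proxy for per-channel independence.

    `feature_maps` has shape (channels, height, width). Each channel is
    flattened and we measure how poorly it is reconstructed by a
    least-squares combination of the remaining channels: a small
    residual means the channel is largely redundant (low independence).
    This mirrors the informal description in the excerpt above, not the
    exact metric used in the CHIP paper.
    """
    c = feature_maps.shape[0]
    flat = feature_maps.reshape(c, -1).astype(float)
    scores = np.zeros(c)
    for i in range(c):
        others = np.delete(flat, i, axis=0).T          # (h*w, c-1)
        target = flat[i]
        coef, *_ = np.linalg.lstsq(others, target, rcond=None)
        residual = target - others @ coef
        scores[i] = np.linalg.norm(residual) / (np.linalg.norm(target) + 1e-12)
    return scores   # low score -> channel is close to a linear combination of the rest


maps = np.random.rand(8, 4, 4)
maps[3] = 0.5 * maps[0] + 0.25 * maps[1]   # make channel 3 redundant on purpose
print(channel_independence_proxy(maps).round(3))   # channel 3 scores near 0
```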
science.multicite_intent_classification | Examine the text below, which includes a citation marked by "<cite>" and "</cite>". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.
Provide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.
Text with Citation:
Several research work have been reported since 2010 in this research field of hate speech detection (Kwok and Wang, 2013; Burnap and Williams, 2015; Djuric et al., 2015; <cite>Davidson et al., 2017</cite>; Malmasi and Zampieri, 2018; Schmidt and Wiegand, 2017; Fortuna and Nunes, 2018; ElSherief et al., 2018; Gambäck and Sikdar, 2017; Zhang et al., 2018; Mathur et al., 2018) . | ["Background"] | science.multicite_intent_classification.957 | [
{
"content": "Examine the text below, which includes a citation marked by \"<cite>\" and \"</cite>\". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.\n\nProvide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.\n\nText with Citation:\nSeveral research work have been reported since 2010 in this research field of hate speech detection (Kwok and Wang, 2013; Burnap and Williams, 2015; Djuric et al., 2015; <cite>Davidson et al., 2017</cite>; Malmasi and Zampieri, 2018; Schmidt and Wiegand, 2017; Fortuna and Nunes, 2018; ElSherief et al., 2018; Gambäck and Sikdar, 2017; Zhang et al., 2018; Mathur et al., 2018) .",
"role": "user"
},
{
"content": "[\"Background\"]",
"role": "assistant"
}
]
|
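The citation-intent record above marks the citation of interest with <cite>...</cite> tags before asking for its intent label. A small helper for pulling out the marked span (handy when batching such examples) might look like the sketch below; the tag names come from the record, while the function and variable names are purely illustrative.

```python
import re

CITE_RE = re.compile(r"<cite>(.*?)</cite>", re.DOTALL)

def extract_marked_citation(text):
    """Return (cited_span, context_without_tags) for a passage that
    marks one citation with <cite>...</cite>, as in the record above."""
    match = CITE_RE.search(text)
    if match is None:
        return None, text
    cited = match.group(1).strip()
    context = CITE_RE.sub(lambda m: m.group(1), text)
    return cited, context


sample = ("Several research work have been reported since 2010 "
          "(Kwok and Wang, 2013; <cite>Davidson et al., 2017</cite>; "
          "Zhang et al., 2018).")
span, clean = extract_marked_citation(sample)
print(span)    # Davidson et al., 2017
print(clean)   # same sentence with the tags removed
```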
science.evidence_inference | Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:
- Treatment: The specific medical procedure or drug being evaluated.
- Control: The standard or placebo used for comparison. If absent, use `null`.
- Result: The health outcome being assessed.
- Impact: The observed effect of the treatment on the result, categorized as: ("increased", "decreased", "no change").
- Justification: The supporting details for the observed impact.
Format the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].
Abstract:
ABSTRACT.BACKGROUND: Sulphadoxine-pyrimethamine (SP) is the only single dose therapy for uncomplicated malaria, but there is widespread resistance. At the time of this study, artemether-lumefantrine (AL) and chlorproguanil-dapsone (CPD), both multi-dose regimes, were considered possible alternatives to SP in Malawi. The aim of this study was to investigate the impact of poor adherence on the effectiveness of AL and CPD.
ABSTRACT.METHODS: Children ≥12 months and adults with uncomplicated malaria were randomized to receive AL, CPD or SP. Adherence was measured using a questionnaire and electronic monitoring devices, MEMSTM, pill bottles that recorded the date and time of opening. Day-7 plasma dapsone or lumefantrine concentrations were measured to examine their relationship with adherence and clinical response.
ABSTRACT.RESULTS: 841 patients were recruited. The day-28 adequate clinical and parasitological response (ACPR) rates, using intention to treat analysis (missing data treated as failure), were AL 85.2%, CPD 63.7% and SP 50%. ACPR rates for AL were higher than CPD or SP on days 28 and 42 (p ≤ 0.002 for all comparisons). CPD was more effective than SP on day-28 (p = 0.01), but not day-42. <p> Very high adherence was reported using the questionnaire, 100% for AL treated patients and 99.2% for the CPD group. Only three CPD participants admitted missing any doses. 164/181 (90.6%) of CPD treated patients took all their doses out of the MEMSTM container and they were more likely to have a day-28 ACPR than those who did not take all their medication out of the container, p = 0.024. Only 7/87 (8%) AL treated patients did not take all of their doses out of their MEMSTM container and none had treatment failure. <p> Median day-7 dapsone concentrations were higher in CPD treated patients with ACPR than in treatment failures, p = 0.012. There were no differences in day-7 dapsone or lumefantrine concentrations between those who took all their doses from the MEMSTM container and those who did not. A day-7 lumefantrine concentration reported to be predictive of AL treatment failure in Thailand was not useful in this population; only one of 16 participants with a concentration below this threshold (175 ng/ml) had treatment failure.
ABSTRACT.CONCLUSION: This study provides reassurance of the effectiveness of AL, even with unsupervised dosing, as it is rolled out across sub-Saharan Africa. Self-reported adherence appears to be an unreliable measure of adherence in this population.
Extracted Data: | [["Chlorproguanil-dapsone (CPD)", "Sulphadoxine-pyrimethamine (SP) ", "Effectiveness at day 28", "significantly increased", "CPD was more effective than SP on day-28 (p = 0.01), but not day-42."], ["Chlorproguanil-dapsone (CPD)", "Sulphadoxine-pyrimethamine (SP) ", "Effectiveness at day 42", "no significant difference", "CPD was more effective than SP on day-28 (p = 0.01), but not day-42."]] | science.evidence_inference.40 | [
{
"content": "Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:\n\n- Treatment: The specific medical procedure or drug being evaluated.\n- Control: The standard or placebo used for comparison. If absent, use `null`.\n- Result: The health outcome being assessed.\n- Impact: The observed effect of the treatment on the result, categorized as: (\"increased\", \"decreased\", \"no change\").\n- Justification: The supporting details for the observed impact.\n\nFormat the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].\n\nAbstract:\nABSTRACT.BACKGROUND: Sulphadoxine-pyrimethamine (SP) is the only single dose therapy for uncomplicated malaria, but there is widespread resistance. At the time of this study, artemether-lumefantrine (AL) and chlorproguanil-dapsone (CPD), both multi-dose regimes, were considered possible alternatives to SP in Malawi. The aim of this study was to investigate the impact of poor adherence on the effectiveness of AL and CPD.\n\nABSTRACT.METHODS: Children ≥12 months and adults with uncomplicated malaria were randomized to receive AL, CPD or SP. Adherence was measured using a questionnaire and electronic monitoring devices, MEMSTM, pill bottles that recorded the date and time of opening. Day-7 plasma dapsone or lumefantrine concentrations were measured to examine their relationship with adherence and clinical response.\n\nABSTRACT.RESULTS: 841 patients were recruited. The day-28 adequate clinical and parasitological response (ACPR) rates, using intention to treat analysis (missing data treated as failure), were AL 85.2%, CPD 63.7% and SP 50%. ACPR rates for AL were higher than CPD or SP on days 28 and 42 (p ≤ 0.002 for all comparisons). CPD was more effective than SP on day-28 (p = 0.01), but not day-42. <p> Very high adherence was reported using the questionnaire, 100% for AL treated patients and 99.2% for the CPD group. Only three CPD participants admitted missing any doses. 164/181 (90.6%) of CPD treated patients took all their doses out of the MEMSTM container and they were more likely to have a day-28 ACPR than those who did not take all their medication out of the container, p = 0.024. Only 7/87 (8%) AL treated patients did not take all of their doses out of their MEMSTM container and none had treatment failure. <p> Median day-7 dapsone concentrations were higher in CPD treated patients with ACPR than in treatment failures, p = 0.012. There were no differences in day-7 dapsone or lumefantrine concentrations between those who took all their doses from the MEMSTM container and those who did not. A day-7 lumefantrine concentration reported to be predictive of AL treatment failure in Thailand was not useful in this population; only one of 16 participants with a concentration below this threshold (175 ng/ml) had treatment failure.\n\nABSTRACT.CONCLUSION: This study provides reassurance of the effectiveness of AL, even with unsupervised dosing, as it is rolled out across sub-Saharan Africa. Self-reported adherence appears to be an unreliable measure of adherence in this population.\n\nExtracted Data:",
"role": "user"
},
{
"content": "[[\"Chlorproguanil-dapsone (CPD)\", \"Sulphadoxine-pyrimethamine (SP) \", \"Effectiveness at day 28\", \"significantly increased\", \"CPD was more effective than SP on day-28 (p = 0.01), but not day-42.\"], [\"Chlorproguanil-dapsone (CPD)\", \"Sulphadoxine-pyrimethamine (SP) \", \"Effectiveness at day 42\", \"no significant difference\", \"CPD was more effective than SP on day-28 (p = 0.01), but not day-42.\"]]",
"role": "assistant"
}
]
|
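The extraction record above asks for five-element tuples [treatment, control, result, impact, justification] and nominally restricts impact to "increased", "decreased" or "no change", yet the gold answer uses values such as "significantly increased". A hypothetical checker like the one below enforces the tuple shape and flags impact labels outside an assumed vocabulary, which makes exactly that mismatch visible; the extended label list is an assumption drawn from the gold answer, not an official specification.

```python
import json

ALLOWED_IMPACTS = {"increased", "decreased", "no change",
                   # the gold annotations above also use these variants
                   "significantly increased", "significantly decreased",
                   "no significant difference"}

def check_extraction(raw_json):
    """Validate a JSON array of [treatment, control, result, impact,
    justification] tuples, as requested in the record above."""
    rows = json.loads(raw_json)
    problems = []
    for i, row in enumerate(rows):
        if not isinstance(row, list) or len(row) != 5:
            problems.append(f"row {i}: expected 5 elements, got {row!r}")
            continue
        impact = row[3]
        if impact not in ALLOWED_IMPACTS:
            problems.append(f"row {i}: unexpected impact label {impact!r}")
    return problems


gold = json.dumps([
    ["Chlorproguanil-dapsone (CPD)", "Sulphadoxine-pyrimethamine (SP)",
     "Effectiveness at day 28", "significantly increased",
     "CPD was more effective than SP on day-28 (p = 0.01), but not day-42."],
])
print(check_extraction(gold))   # -> []
```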
science.qasper_abstractive_qa | Examine the provided sections from a research document and address the accompanying query. The text is divided by newlines.
Document Content:
To lemmatize or not to lemmatize: how word normalisation affects ELMo performance in word sense disambiguation
We critically evaluate the widespread assumption that deep learning NLP models do not require lemmatized input. To test this, we trained versions of contextualised word embedding ELMo models on raw tokenized corpora and on the corpora with word tokens replaced by their lemmas. Then, these models were evaluated on the word sense disambiguation task. This was done for the English and Russian languages. ::: The experiments showed that while lemmatization is indeed not necessary for English, the situation is different for Russian. It seems that for rich-morphology languages, using lemmatized training and testing data yields small but consistent improvements: at least for word sense disambiguation. This means that the decisions about text pre-processing before training ELMo should consider the linguistic nature of the language in question.
Introduction
Deep contextualised representations of linguistic entities (words and/or sentences) are used in many current state-of-the-art NLP systems. The most well-known examples of such models are arguably ELMo BIBREF0 and BERT BIBREF1.
A long-standing tradition in the field of applying deep learning to NLP tasks can be summarised as follows: as little pre-processing as possible. It is widely believed that lemmatization or other text input normalisation is not necessary. Advanced neural architectures based on character input (CNNs, BPE, etc.) are supposed to be able to learn how to handle spelling and morphology variations themselves, even for languages with rich morphology: `just add more layers!'. Contextualised embedding models follow this tradition: as a rule, they are trained on raw text collections, with minimal linguistic pre-processing. Below, we show that this is not entirely true.
It is known that for the previous generation of word embedding models (`static' ones like word2vec BIBREF2, where a word always has the same representation regardless of the context in which it occurs), lemmatization of the training and testing data improves their performance. BIBREF3 showed that this is true at least for semantic similarity and analogy tasks.
In this paper, we describe our experiments in finding out whether lemmatization helps modern contextualised embeddings (on the example of ELMo). We compare the performance of ELMo models trained on the same corpus before and after lemmatization. It is impossible to evaluate contextualised models on `static' tasks like lexical semantic similarity or word analogies. Because of this, we turned to word sense disambiguation in context (WSD) as an evaluation task.
In brief, we use contextualised representations of ambiguous words from the top layer of an ELMo model to train word sense classifiers and find out whether using lemmas instead of tokens helps in this task (see Section SECREF5). We experiment with the English and Russian languages and show that they differ significantly in the influence of lemmatization on the WSD performance of ELMo models.
Our findings and the contributions of this paper are:
Linguistic text pre-processing still matters in some tasks, even for contemporary deep representation learning algorithms.
For the Russian language, with its rich morphology, lemmatizing the training and testing data for ELMo representations yields small but consistent improvements in the WSD task. This is unlike English, where the differences are negligible.
Training ELMo
For the experiments described below, we trained our own ELMo models from scratch. For English, the training corpus consisted of the English Wikipedia dump from February 2017. For Russian, it was a concatenation of the Russian Wikipedia dump from December 2018 and the full Russian National Corpus (RNC). The RNC texts were added to the Russian Wikipedia dump so as to make the Russian training corpus more comparable in size to the English one (Wikipedia texts would comprise only half of the size). As Table TABREF3 shows, the English Wikipedia is still two times larger, but at least the order is the same.
The texts were tokenized and lemmatized with the UDPipe models for the respective languages, trained on the Universal Dependencies 2.3 treebanks BIBREF5. UDPipe yields a lemmatization accuracy of about 96% for English and 97% for Russian; thus, for the task at hand, we considered it to be gold and did not try to further improve the quality of normalisation itself (although it is not entirely error-free, see Section SECREF4).
ELMo models were trained on these corpora using the original TensorFlow implementation, for 3 epochs with batch size 192, on two GPUs. To train faster, we decreased the dimensionality of the LSTM layers from the default 4096 to 2048 for all the models.
Word sense disambiguation test sets
We used two WSD datasets for evaluation:
Senseval-3 for English BIBREF6
RUSSE'18 for Russian BIBREF7
The Senseval-3 dataset consists of lexical samples for nouns, verbs and adjectives; we used only noun target words:
argument
arm
atmosphere
audience
bank
degree
difference
difficulty
disc
image
interest
judgement
organization
paper
party
performance
plan
shelter
sort
source
An example for the ambiguous word argument is given below:
In some situations Postscript can be faster than the escape sequence type of printer control file. It uses post fix notation, where arguments come first and operators follow. This is basically the same as Reverse Polish Notation as used on certain calculators, and follows directly from the stack based approach.
In this sentence, the word `argument' is used in its mathematical sense, i.e. as an operand passed to an operator.
The RUSSE'18 dataset was created in 2018 for the shared task in Russian word sense induction. This dataset contains only nouns; the list of words with their English translations is given in Table TABREF30.
Originally, it also includes the words байка `tale/fleece' and гвоздика `clove/small nail', but their senses are ambiguous only in some inflectional forms (not in lemmas); therefore, we decided to exclude these words from evaluation.
The Russian dataset is more homogeneous compared to the English one, as for all the target words there is approximately the same number of context words in the examples. This is achieved by applying the lexical window (25 words before and after the target word) and cropping everything that falls outside of that window. In the English dataset, on the contrary, the whole paragraph with the target word is taken into account. We have tried cropping the examples for English as well, but it did not result in any change in the quality of classification. In the end, we decided not to apply the lexical window to the English dataset so as not to alter it and rather use it in the original form.
Here is an example from RUSSE'18 for the ambiguous word мандарин `mandarin' in the sense `Chinese official title':
“...дипломатического корпуса останкам богдыхана и императрицы обставлено было с необычайной торжественностью. Тысячи мандаринов и других высокопоставленных лиц разместились шпалерами на трех мраморных террасах ведущих к...”
`...the diplomatic bodies of the Bogdikhan and the Empress was furnished with extraordinary solemnity. Thousands of mandarins and other dignitaries were placed on three marble terraces leading to...'.
Table TABREF31 compares both datasets. Before usage, they were pre-processed in the same way as the training corpora for ELMo (see Section SECREF3), thus producing a lemmatized and a non-lemmatized version of each.
As we can see from Table TABREF31, for 20 target words in English there are 24 lemmas, and for 18 target words in Russian there are 36 different lemmas. These numbers are explained by occasional errors in the UDPipe lemmatization. Another interesting thing to observe is the number of distinct word forms for each language. For English, there are 39 distinct forms for 20 target nouns: singular and plural for every noun, except `atmosphere', which is used only in the singular form. Thus, the inflectional variability of English nouns is covered by the dataset almost completely. For Russian, we observe 132 distinct forms for 18 target nouns, giving more than 7 inflectional forms per word. Note that this still covers only about half of the full inflectional variability of Russian: the language features 12 distinct forms for each noun (6 cases and 2 numbers).
To sum up, the RUSSE'18 dataset is morphologically far more complex than Senseval-3, reflecting the properties of the respective languages. In the next section we will see that this leads to substantial differences in the comparisons between token-based and lemma-based ELMo models.
Experiments
Following BIBREF8, we decided to avoid using any standard train-test splits for our WSD datasets. Instead, we rely on per-word random splits and 5-fold cross-validation. This means that for each target word we randomly generate 5 different divisions of its context sentences list into train and test sets, and then train and test 5 different classifier models on this data. The resulting performance score for each target word is the average of 5 macro-F1 scores produced by these classifiers.
ELMo models can be employed for the WSD task in two different ways: either by fine-tuning the model or by extracting word representations from it and then using them as features in a downstream classifier. We decided to stick to the second (feature extraction) approach, since it is conceptually and computationally simpler. Additionally, BIBREF9 showed that for most NLP tasks (except those focused on sentence pairs) the performance of feature extraction and fine-tuning is nearly the same. Thus we extracted the single vector of the target word from the ELMo top layer (`target' rows in Table TABREF32) or the averaged ELMo top layer vectors of all words in the context sentence (`averaged' rows in Table TABREF32).
For comparison, we also report the scores of the `averaged vectors' representations with Continuous Skipgram BIBREF2 embedding models trained on the English or Russian Wikipedia dumps (`SGNS' rows): before the advent of contextualised models, this was one of the most widely used ways to `squeeze' the meaning of a sentence into a fixed-size vector. Of course it does not mean that the meaning of a sentence always determines the senses all its words are used in. However, averaging representations of words in contexts as a proxy to the sense of one particular word is a long established tradition in WSD, starting at least from BIBREF10. Also, since SGNS is a `static' embedding model, it is of course not possible to use only target word vectors as features: they would be identical whatever the context is.
Simple logistic regression was used as a classification algorithm. We also tested a multi-layer perceptron (MLP) classifier with 200-neurons hidden layer, which yielded essentially the same results. This leads us to believe that our findings are not classifier-dependent.
Table TABREF32 shows the results, together with the random and most frequent sense (MFS) baselines for each dataset.
First, ELMo outperforms SGNS for both languages, which comes as no surprise. Second, the approach with averaging representations from all words in the sentence is not beneficial for WSD with ELMo: for English data, it clearly loses to a single target word representation, and for Russian there are no significant differences (and using a single target word is preferable from the computational point of view, since it does not require the averaging operation). Thus, below we discuss only the single target word usage mode of ELMo.
But the most important part is the comparison between using tokens or lemmas in the train and test data. For the `static' SGNS embeddings, the choice does not significantly change the WSD scores for either language. The same is true for the English ELMo models, where the differences are negligible and seem to be simple fluctuations. However, for Russian, ELMo (target) on lemmas outperforms ELMo on tokens, with a small but significant improvement. The most plausible explanation is that (despite the purely character-based input of ELMo) the model trained on lemmas does not have to learn the idiosyncrasies of a particular language's morphology. Instead, it can use its (limited) capacity to better learn lexical semantic structures, leading to better WSD performance. The box plots FIGREF33 and FIGREF35 illustrate the score dispersion across words in the test sets for English and Russian, respectively (orange lines are medians). In the next section SECREF6 we analyse the results qualitatively.
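The evaluation pipeline described in this section (per-word random 5-fold splits, a single top-layer ELMo vector of the target word as features, a logistic regression classifier, and macro-F1 averaged over the folds) can be sketched as follows. The sketch assumes the contextualised vectors have already been extracted into an array; it is an illustration of the described setup, not the authors' code, and the use of stratified folds is a detail the paper leaves open.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import StratifiedKFold

def per_word_wsd_score(target_vectors, sense_labels, n_splits=5, seed=42):
    """Average macro-F1 over per-word 5-fold splits, as described above.

    `target_vectors` is assumed to be an (n_examples, dim) array of
    top-layer ELMo vectors for one ambiguous word; `sense_labels` holds
    the gold sense of that word in each context sentence.
    """
    X = np.asarray(target_vectors)
    y = np.asarray(sense_labels)
    # Stratified folds keep every sense present in each split.
    folds = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    scores = []
    for train_idx, test_idx in folds.split(X, y):
        clf = LogisticRegression(max_iter=1000)
        clf.fit(X[train_idx], y[train_idx])
        pred = clf.predict(X[test_idx])
        scores.append(f1_score(y[test_idx], pred, average="macro"))
    return float(np.mean(scores))


# Toy example with random stand-in vectors for a word with two senses.
rng = np.random.default_rng(0)
vectors = rng.normal(size=(40, 16))
labels = np.array([0, 1] * 20)
print(per_word_wsd_score(vectors, labels))
```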
Conclusion
We evaluated how the ability of ELMo contextualised word embedding models to disambiguate word senses depends on the nature of the training data. In particular, we compared the models trained on raw tokenized corpora and those trained on the corpora with word tokens replaced by their normal forms (lemmas). The models we trained are publicly available via the NLPL word embeddings repository BIBREF3.
In the majority of research papers on deep learning approaches to NLP, it is assumed that lemmatization is not necessary, especially when using powerful contextualised embeddings. Our experiments show that this is indeed true for languages with simple morphology (like English). However, for rich-morphology languages (like Russian), using lemmatized training data yields small but consistent improvements in the word sense disambiguation task. These improvements are not observed for rare words which lack inflected forms; this further supports our hypothesis that the better WSD scores of lemma-based models stem from their better handling of multiple word forms in morphology-rich languages.
Of course, lemmatization is by all means not a silver bullet. In other tasks, where inflectional properties of words are important, it can even hurt the performance. But this is true for any NLP systems, not only deep learning based ones.
The take-home message here is twofold: first, text pre-processing still matters for contemporary deep learning algorithms. Their impressive learning abilities do not always allow them to infer normalisation rules themselves, from simply optimising the language modelling task. Second, the nature of language at hand matters as well, and differences in this nature can result in different decisions being optimal or sub-optimal at the stage of deep learning models training. The simple truth `English is not representative of all languages on Earth' still holds here.
In the future, we plan to extend our work by including more languages in the analysis. Using Russian and English allowed us to hypothesise about the importance of the morphological character of a language, but we have only scratched the surface of linguistic diversity. To verify this claim, it is necessary to analyse both more strongly inflected languages like Russian and more weakly inflected (analytical) languages similar to English. This will help to find out whether inflectional differences matter for training deep learning models across human languages in general.
Table 1: Training corpora
Table 2: Target ambiguous words for Russian (RUSSE’18)
Table 4: Averaged macro-F1 scores for WSD
Figure 1: Word sense disambiguation performance on the English data across words (ELMo target models).
Figure 2: Word sense disambiguation performance on the Russian data across words (ELMo target models).
Table 5: F1 scores for target words from RUSSE’18 with significant differences between lemma-based and token-based models
Table 6: F1 scores for the excluded target words from RUSSE’18.
Query: How big was the corpora they trained ELMo on?
JSON Response: | {"answer": "2174000000, 989000000", "evidence": ["For the experiments described below, we trained our own ELMo models from scratch. For English, the training corpus consisted of the English Wikipedia dump from February 2017. For Russian, it was a concatenation of the Russian Wikipedia dump from December 2018 and the full Russian National Corpus (RNC). The RNC texts were added to the Russian Wikipedia dump so as to make the Russian training corpus more comparable in size to the English one (Wikipedia texts would comprise only half of the size). As Table TABREF3 shows, the English Wikipedia is still two times larger, but at least the order is the same."]} | science.qasper_abstractive_qa.167.1 | [
{
"content": "Examine the provided sections from a research document and address the accompanying query. The text is divided by newlines.\n\nDocument Content:\n\nTo lemmatize or not to lemmatize: how word normalisation affects ELMo performance in word sense disambiguation\n\nWe critically evaluate the widespread assumption that deep learning NLP models do not require lemmatized input. To test this, we trained versions of contextualised word embedding ELMo models on raw tokenized corpora and on the corpora with word tokens replaced by their lemmas. Then, these models were evaluated on the word sense disambiguation task. This was done for the English and Russian languages. ::: The experiments showed that while lemmatization is indeed not necessary for English, the situation is different for Russian. It seems that for rich-morphology languages, using lemmatized training and testing data yields small but consistent improvements: at least for word sense disambiguation. This means that the decisions about text pre-processing before training ELMo should consider the linguistic nature of the language in question.\n\nIntroduction\nDeep contextualised representations of linguistic entities (words and/or sentences) are used in many current state-of-the-art NLP systems. The most well-known examples of such models are arguably ELMo BIBREF0 and BERT BIBREF1.\nA long-standing tradition if the field of applying deep learning to NLP tasks can be summarised as follows: as minimal pre-processing as possible. It is widely believed that lemmatization or other text input normalisation is not necessary. Advanced neural architectures based on character input (CNNs, BPE, etc) are supposed to be able to learn how to handle spelling and morphology variations themselves, even for languages with rich morphology: `just add more layers!'. Contextualised embedding models follow this tradition: as a rule, they are trained on raw text collections, with minimal linguistic pre-processing. Below, we show that this is not entirely true.\nIt is known that for the previous generation of word embedding models (`static' ones like word2vec BIBREF2, where a word always has the same representation regardless of the context in which it occurs), lemmatization of the training and testing data improves their performance. BIBREF3 showed that this is true at least for semantic similarity and analogy tasks.\nIn this paper, we describe our experiments in finding out whether lemmatization helps modern contextualised embeddings (on the example of ELMo). We compare the performance of ELMo models trained on the same corpus before and after lemmatization. It is impossible to evaluate contextualised models on `static' tasks like lexical semantic similarity or word analogies. Because of this, we turned to word sense disambiguation in context (WSD) as an evaluation task.\nIn brief, we use contextualised representations of ambiguous words from the top layer of an ELMo model to train word sense classifiers and find out whether using lemmas instead of tokens helps in this task (see Section SECREF5). 
We experiment with the English and Russian languages and show that they differ significantly in the influence of lemmatization on the WSD performance of ELMo models.\nOur findings and the contributions of this paper are:\nLinguistic text pre-processing still matters in some tasks, even for contemporary deep representation learning algorithms.\nFor the Russian language, with its rich morphology, lemmatizing the training and testing data for ELMo representations yields small but consistent improvements in the WSD task. This is unlike English, where the differences are negligible.\n\nTraining ELMo\nFor the experiments described below, we trained our own ELMo models from scratch. For English, the training corpus consisted of the English Wikipedia dump from February 2017. For Russian, it was a concatenation of the Russian Wikipedia dump from December 2018 and the full Russian National Corpus (RNC). The RNC texts were added to the Russian Wikipedia dump so as to make the Russian training corpus more comparable in size to the English one (Wikipedia texts would comprise only half of the size). As Table TABREF3 shows, the English Wikipedia is still two times larger, but at least the order is the same.\nThe texts were tokenized and lemmatized with the UDPipe models for the respective languages trained on the Universal Dependencies 2.3 treebanks BIBREF5. UDPipe yields lemmatization accuracy about 96% for English and 97% for Russian; thus for the task at hand, we considered it to be gold and did not try to further improve the quality of normalisation itself (although it is not entirely error-free, see Section SECREF4).\nELMo models were trained on these corpora using the original TensorFlow implementation, for 3 epochs with batch size 192, on two GPUs. To train faster, we decreased the dimensionality of the LSTM layers from the default 4096 to 2048 for all the models.\n\nWord sense disambiguation test sets\nWe used two WSD datasets for evaluation:\nSenseval-3 for English BIBREF6\nRUSSE'18 for Russian BIBREF7\nThe Senseval-3 dataset consists of lexical samples for nouns, verbs and adjectives; we used only noun target words:\nargument\narm\natmosphere\naudience\nbank\ndegree\ndifference\ndifficulty\ndisc\nimage\ninterest\njudgement\norganization\npaper\nparty\nperformance\nplan\nshelter\nsort\nsource\nAn example for the ambiguous word argument is given below:\nIn some situations Postscript can be faster than the escape sequence type of printer control file. It uses post fix notation, where arguments come first and operators follow. This is basically the same as Reverse Polish Notation as used on certain calculators, and follows directly from the stack based approach.\nIt this sentence, the word `argument' is used in the sense of a mathematical operator.\nThe RUSSE'18 dataset was created in 2018 for the shared task in Russian word sense induction. This dataset contains only nouns; the list of words with their English translations is given in Table TABREF30.\nOriginally, it includes also the words russianбайка `tale/fleece' and russianгвоздика 'clove/small nail', but their senses are ambiguous only in some inflectional forms (not in lemmas), therefore we decided to exclude these words from evaluation.\nThe Russian dataset is more homogeneous compared to the English one, as for all the target words there is approximately the same number of context words in the examples. 
This is achieved by applying the lexical window (25 words before and after the target word) and cropping everything that falls outside of that window. In the English dataset, on the contrary, the whole paragraph with the target word is taken into account. We have tried cropping the examples for English as well, but it did not result in any change in the quality of classification. In the end, we decided not to apply the lexical window to the English dataset so as not to alter it and rather use it in the original form.\nHere is an example from the RUSSE'18 for the ambiguous word russianмандарин `mandarin' in the sense `Chinese official title':\nrussian“...дипломатического корпуса останкам богдыхана и императрицы обставлено было с необычайной торжественностью. Тысячи мандаринов и других высокопоставленных лиц разместились шпалерами на трех мраморных террасах ведущих к...”\n`...the diplomatic bodies of the Bogdikhan and the Empress was furnished with extraordinary solemnity. Thousands of mandarins and other dignitaries were placed on three marble terraces leading to...'.\nTable TABREF31 compares both datasets. Before usage, they were pre-processed in the same way as the training corpora for ELMo (see Section SECREF3), thus producing a lemmatized and a non-lemmatized versions of each.\nAs we can see from Table TABREF31, for 20 target words in English there are 24 lemmas, and for 18 target words in Russian there are 36 different lemmas. These numbers are explained by occasional errors in the UDPipe lemmatization. Another interesting thing to observe is the number of distinct word forms for every language. For English, there are 39 distinct forms for 20 target nouns: singular and plural for every noun, except `atmosphere' which is used only in the singular form. Thus, inflectional variability of English nouns is covered by the dataset almost completely. For Russian, we observe 132 distinct forms for 18 target nouns, giving more than 7 inflectional forms per each word. Note that this still covers only half of all the inflectional variability of Russian: this language features 12 distinct forms for each noun (6 cases and 2 numbers).\nTo sum up, the RUSSE'18 dataset is morphologically far more complex than the Senseval3, reflecting the properties of the respective languages. In the next section we will see that this leads to substantial differences regarding comparisons between token-based and lemma-based ELMo models.\n\nExperiments\nFollowing BIBREF8, we decided to avoid using any standard train-test splits for our WSD datasets. Instead, we rely on per-word random splits and 5-fold cross-validation. This means that for each target word we randomly generate 5 different divisions of its context sentences list into train and test sets, and then train and test 5 different classifier models on this data. The resulting performance score for each target word is the average of 5 macro-F1 scores produced by these classifiers.\nELMo models can be employed for the WSD task in two different ways: either by fine-tuning the model or by extracting word representations from it and then using them as features in a downstream classifier. We decided to stick to the second (feature extraction) approach, since it is conceptually and computationally simpler. Additionally, BIBREF9 showed that for most NLP tasks (except those focused on sentence pairs) the performance of feature extraction and fine-tuning is nearly the same. 
Thus we extracted the single vector of the target word from the ELMo top layer (`target' rows in Table TABREF32) or the averaged ELMo top layer vectors of all words in the context sentence (`averaged' rows in Table TABREF32).\nFor comparison, we also report the scores of the `averaged vectors' representations with Continuous Skipgram BIBREF2 embedding models trained on the English or Russian Wikipedia dumps (`SGNS' rows): before the advent of contextualised models, this was one of the most widely used ways to `squeeze' the meaning of a sentence into a fixed-size vector. Of course it does not mean that the meaning of a sentence always determines the senses all its words are used in. However, averaging representations of words in contexts as a proxy to the sense of one particular word is a long established tradition in WSD, starting at least from BIBREF10. Also, since SGNS is a `static' embedding model, it is of course not possible to use only target word vectors as features: they would be identical whatever the context is.\nSimple logistic regression was used as a classification algorithm. We also tested a multi-layer perceptron (MLP) classifier with 200-neurons hidden layer, which yielded essentially the same results. This leads us to believe that our findings are not classifier-dependent.\nTable TABREF32 shows the results, together with the random and most frequent sense (MFS) baselines for each dataset.\nFirst, ELMo outperforms SGNS for both languages, which comes as no surprise. Second, the approach with averaging representations from all words in the sentence is not beneficial for WSD with ELMo: for English data, it clearly loses to a single target word representation, and for Russian there are no significant differences (and using a single target word is preferable from the computational point of view, since it does not require the averaging operation). Thus, below we discuss only the single target word usage mode of ELMo.\nBut the most important part is the comparison between using tokens or lemmas in the train and test data. For the `static' SGNS embeddings, it does not significantly change the WSD scores for both languages. The same is true for English ELMo models, where differences are negligible and seem to be simple fluctuations. However, for Russian, ELMo (target) on lemmas outperforms ELMo on tokens, with small but significant improvement. The most plausible explanation for this is that (despite of purely character-based input of ELMo) the model does not have to learn idiosyncrasies of a particular language morphology. Instead, it can use its (limited) capacity to better learn lexical semantic structures, leading to better WSD performance. The box plots FIGREF33 and FIGREF35 illustrate the scores dispersion across words in the test sets for English and Russian correspondingly (orange lines are medians). In the next section SECREF6 we analyse the results qualitatively.\n\nConclusion\nWe evaluated how the ability of ELMo contextualised word embedding models to disambiguate word senses depends on the nature of the training data. In particular, we compared the models trained on raw tokenized corpora and those trained on the corpora with word tokens replaced by their normal forms (lemmas). The models we trained are publicly available via the NLPL word embeddings repository BIBREF3.\nIn the majority of research papers on deep learning approaches to NLP, it is assumed that lemmatization is not necessary, especially when using powerful contextualised embeddings. 
Our experiments show that this is indeed true for languages with simple morphology (like English). However, for rich-morphology languages (like Russian), using lemmatized training data yields small but consistent improvements in the word sense disambiguation task. These improvements are not observed for rare words which lack inflected forms; this further supports our hypothesis that better WSD scores of lemma-based models are related to them better handling multiple word forms in morphology-rich languages.\nOf course, lemmatization is by all means not a silver bullet. In other tasks, where inflectional properties of words are important, it can even hurt the performance. But this is true for any NLP systems, not only deep learning based ones.\nThe take-home message here is twofold: first, text pre-processing still matters for contemporary deep learning algorithms. Their impressive learning abilities do not always allow them to infer normalisation rules themselves, from simply optimising the language modelling task. Second, the nature of language at hand matters as well, and differences in this nature can result in different decisions being optimal or sub-optimal at the stage of deep learning models training. The simple truth `English is not representative of all languages on Earth' still holds here.\nIn the future, we plan to extend our work by including more languages into the analysis. Using Russian and English allowed us to hypothesise about the importance of morphological character of a language. But we only scratched the surface of the linguistic diversity. To verify this claim, it is necessary to analyse more strongly inflected languages like Russian as well as more weakly inflected (analytical) languages similar to English. This will help to find out if the inflection differences are important for training deep learning models across human languages in general.\n\nTable 1: Training corpora\n\nTable 2: Target ambiguous words for Russian (RUSSE’18)\n\nTable 4: Averaged macro-F1 scores for WSD\n\nFigure 1: Word sense disambiguation performance on the English data across words (ELMo target models).\n\nFigure 2: Word sense disambiguation performance on the Russian data across words (ELMo target models).\n\nTable 5: F1 scores for target words from RUSSE’18 with significant differences between lemma-based and token-based models\n\nTable 6: F1 scores for the excluded target words from RUSSE’18.\n\nQuery: How big was the corpora they trained ELMo on?\n\nJSON Response:",
"role": "user"
},
{
"content": "{\"answer\": \"2174000000, 989000000\", \"evidence\": [\"For the experiments described below, we trained our own ELMo models from scratch. For English, the training corpus consisted of the English Wikipedia dump from February 2017. For Russian, it was a concatenation of the Russian Wikipedia dump from December 2018 and the full Russian National Corpus (RNC). The RNC texts were added to the Russian Wikipedia dump so as to make the Russian training corpus more comparable in size to the English one (Wikipedia texts would comprise only half of the size). As Table TABREF3 shows, the English Wikipedia is still two times larger, but at least the order is the same.\"]}",
"role": "assistant"
}
]
|
science.qasper_abstractive_qa | Examine the provided sections from a research document and address the accompanying query. The text is divided by newlines.
Document Content:
Jasper: An End-to-End Convolutional Neural Acoustic Model
In this paper, we report state-of-the-art results on LibriSpeech among end-to-end speech recognition models without any external training data. Our model, Jasper, uses only 1D convolutions, batch normalization, ReLU, dropout, and residual connections. To improve training, we further introduce a new layer-wise optimizer called NovoGrad. Through experiments, we demonstrate that the proposed deep architecture performs as well or better than more complex choices. Our deepest Jasper variant uses 54 convolutional layers. With this architecture, we achieve 2.95% WER using a beam-search decoder with an external neural language model and 3.86% WER with a greedy decoder on LibriSpeech test-clean. We also report competitive results on the Wall Street Journal and the Hub5'00 conversational evaluation datasets.
Introduction
Conventional automatic speech recognition (ASR) systems typically consist of several independently learned components: an acoustic model to predict context-dependent sub-phoneme states (senones) from audio, a graph structure to map senones to phonemes, and a pronunciation model to map phonemes to words. Hybrid systems combine hidden Markov models to model state dependencies with neural networks to predict states BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 . Newer approaches such as end-to-end (E2E) systems reduce the overall complexity of the final system.
Our research builds on prior work that has explored using time-delay neural networks (TDNN), other forms of convolutional neural networks, and Connectionist Temporal Classification (CTC) loss BIBREF4 , BIBREF5 , BIBREF6 . We took inspiration from wav2letter BIBREF6 , which uses 1D-convolution layers. Liptchinsky et al. BIBREF7 improved wav2letter by increasing the model depth to 19 convolutional layers and adding Gated Linear Units (GLU) BIBREF8 , weight normalization BIBREF9 and dropout.
By building a deeper and larger capacity network, we aim to demonstrate that we can match or outperform non end-to-end models on the LibriSpeech and 2000hr Fisher+Switchboard tasks. Like wav2letter, our architecture, Jasper, uses a stack of 1D-convolution layers, but with ReLU and batch normalization BIBREF10 . We find that ReLU and batch normalization outperform other activation and normalization schemes that we tested for convolutional ASR. As a result, Jasper's architecture contains only 1D convolution, batch normalization, ReLU, and dropout layers – operators highly optimized for training and inference on GPUs.
It is possible to increase the capacity of the Jasper model by stacking these operations. Our largest version uses 54 convolutional layers (333M parameters), while our small model uses 34 (201M parameters). We use residual connections to enable this level of depth. We investigate a number of residual options and propose a new residual connection topology we call Dense Residual (DR).
Integrating our best acoustic model with a Transformer-XL BIBREF11 language model allows us to obtain new state-of-the-art (SOTA) results on LibriSpeech BIBREF12 test-clean of 2.95% WER and SOTA results among end-to-end models on LibriSpeech test-other. We show competitive results on Wall Street Journal (WSJ), and 2000hr Fisher+Switchboard (F+S). Using only greedy decoding without a language model we achieve 3.86% WER on LibriSpeech test-clean.
This paper makes the following contributions:
Jasper Architecture
Jasper is a family of end-to-end ASR models that replace acoustic and pronunciation models with a convolutional neural network. Jasper uses mel-filterbank features calculated from 20ms windows with a 10ms overlap, and outputs a probability distribution over characters per frame. Jasper has a block architecture: a Jasper B x R model has B blocks, each with R sub-blocks. Each sub-block applies the following operations: a 1D-convolution, batch norm, ReLU, and dropout. All sub-blocks in a block have the same number of output channels.
Each block input is connected directly into the last sub-block via a residual connection. The residual connection is first projected through a 1x1 convolution to account for different numbers of input and output channels, then through a batch norm layer. The output of this batch norm layer is added to the output of the batch norm layer in the last sub-block. The result of this sum is passed through the activation function and dropout to produce the output of the sub-block.
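To make the two paragraphs above concrete, the following is a minimal PyTorch-style sketch of one Jasper block: a stack of (1D-convolution, batch norm, ReLU, dropout) sub-blocks, with the block input projected through a 1x1 convolution and batch norm and added to the last sub-block's batch-norm output before the final activation and dropout. The class name, kernel-size padding, and dropout rate are illustrative assumptions and are not taken from the authors' released code.

import torch
import torch.nn as nn

class JasperBlock(nn.Module):
    """One Jasper block: R sub-blocks of (1D-conv, batch norm, ReLU, dropout),
    with a projected residual added before the last activation (sketch only)."""
    def __init__(self, in_ch, out_ch, kernel_size, num_sub_blocks, dropout=0.2):
        super().__init__()
        pad = kernel_size // 2  # keep the time dimension for odd kernel sizes
        subs, ch = [], in_ch
        for _ in range(num_sub_blocks):
            subs.append(nn.ModuleDict({
                "conv": nn.Conv1d(ch, out_ch, kernel_size, padding=pad),
                "bn": nn.BatchNorm1d(out_ch),
            }))
            ch = out_ch
        self.sub_blocks = nn.ModuleList(subs)
        # Residual path: 1x1 convolution to match channel counts, then batch norm.
        self.res_conv = nn.Conv1d(in_ch, out_ch, kernel_size=1)
        self.res_bn = nn.BatchNorm1d(out_ch)
        self.act = nn.ReLU(inplace=True)
        self.drop = nn.Dropout(dropout)

    def forward(self, x):
        res = self.res_bn(self.res_conv(x))
        out = x
        for i, sb in enumerate(self.sub_blocks):
            out = sb["bn"](sb["conv"](out))
            if i < len(self.sub_blocks) - 1:
                out = self.drop(self.act(out))   # full sub-block for all but the last
        out = out + res                          # add residual to the last BN output
        return self.drop(self.act(out))          # activation and dropout give the block output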
The sub-block architecture of Jasper was designed to facilitate fast GPU inference. Each sub-block can be fused into a single GPU kernel: dropout is not used at inference-time and is eliminated, batch norm can be fused with the preceding convolution, ReLU clamps the result, and residual summation can be treated as a modified bias term in this fused operation.
All Jasper models have four additional convolutional blocks: one pre-processing and three post-processing. See Figure FIGREF7 and Table TABREF8 for details.
We also build a variant of Jasper, Jasper Dense Residual (DR). Jasper DR follows DenseNet BIBREF15 and DenseRNet BIBREF16 , but instead of having dense connections within a block, the output of a convolution block is added to the inputs of all the following blocks. While DenseNet and DenseRNet concatenates the outputs of different layers, Jasper DR adds them in the same way that residuals are added in ResNet. As explained below, we find addition to be as effective as concatenation.
Normalization and Activation
In our study, we evaluate performance of models with:
3 types of normalization: batch norm BIBREF10 , weight norm BIBREF9 , and layer norm BIBREF17
3 types of rectified linear units: ReLU, clipped ReLU (cReLU), and leaky ReLU (lReLU)
2 types of gated units: gated linear units (GLU) BIBREF8 , and gated activation units (GAU) BIBREF18
All experiment results are shown in Table TABREF15 . We first experimented with a smaller Jasper5x3 model to pick the top 3 settings before training on larger Jasper models. We found that layer norm with GAU performed the best on the smaller model. Layer norm with ReLU and batch norm with ReLU came second and third in our tests. Using these three settings, we conducted further experiments on a larger Jasper10x4. For larger models, we noticed that batch norm with ReLU outperformed other choices. This led us to choose batch normalization and ReLU for our architecture.
During batching, all sequences are padded to match the longest sequence. These padded values caused issues when using layer norm. We applied a sequence mask to exclude padding values from the mean and variance calculation. Further, we computed mean and variance over both the time dimension and channels similar to the sequence-wise normalization proposed by Laurent et al. BIBREF19 . In addition to masking layer norm, we also applied masking prior to the convolution operation and masked the mean and variance calculation in batch norm. These results are shown in Table TABREF16 . Interestingly, we found that while masking before convolution gives a lower WER, using masks for both convolutions and batch norm results in worse performance.
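As an illustration of the sequence masking described above, the sketch below computes layer-norm statistics over the channel and time dimensions while excluding padded frames, then zeroes the padded positions. It is an assumed implementation of the idea (learnable gain and bias are omitted), not code from the paper.

import torch

def masked_layer_norm(x, lengths, eps=1e-5):
    """x: (batch, channels, time); lengths: (batch,) tensor of valid frames per utterance.
    Mean/variance are taken over channels and valid time steps only (sketch)."""
    batch, channels, time = x.shape
    mask = torch.arange(time, device=x.device)[None, :] < lengths[:, None]  # (batch, time)
    mask = mask[:, None, :].to(x.dtype)                                     # (batch, 1, time)
    count = mask.sum(dim=(1, 2), keepdim=True) * channels                   # valid elements
    mean = (x * mask).sum(dim=(1, 2), keepdim=True) / count
    var = (((x - mean) * mask) ** 2).sum(dim=(1, 2), keepdim=True) / count
    return (x - mean) / torch.sqrt(var + eps) * mask                        # keep padding at zero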
As a final note, we found that training with weight norm was very unstable leading to exploding activations.
Residual Connections
For models deeper than Jasper 5x3, we observe consistently that residual connections are necessary for training to converge. In addition to the simple residual and dense residual models described above, we investigated DenseNet BIBREF15 and DenseRNet BIBREF16 variants of Jasper. Both connect the outputs of each sub-block to the inputs of following sub-blocks within a block. DenseRNet, similar to Dense Residual, connects the output of each block to the input of all following blocks. DenseNet and DenseRNet combine residual connections using concatenation whereas Residual and Dense Residual use addition. We found that Dense Residual and DenseRNet perform similarly with each performing better on specific subsets of LibriSpeech. We decided to use Dense Residual for subsequent experiments. The main reason is that due to concatenation, the growth factor for DenseNet and DenseRNet requires tuning for deeper models whereas Dense Residual simply repeats its sub-blocks.
Language Model
A language model (LM) is a probability distribution over arbitrary symbol sequences INLINEFORM0 such that more likely sequences are assigned high probabilities. LMs are frequently used to condition beam search. During decoding, candidates are evaluated using both acoustic scores and LM scores. Traditional N-gram LMs have been augmented with neural LMs in recent work BIBREF20 , BIBREF21 , BIBREF22 .
We experiment with statistical N-gram language models BIBREF23 and neural Transformer-XL BIBREF11 models. Our best results use acoustic and word-level N-gram language models to generate a candidate list using beam search with a width of 2048. Next, an external Transformer-XL LM rescores the final list. All LMs were trained on datasets independently from acoustic models. We show results with the neural LM in our Results section. We observed a strong correlation between the quality of the neural LM (measured by perplexity) and WER as shown in Figure FIGREF20 .
NovoGrad
For training, we use either Stochastic Gradient Descent (SGD) with momentum or our own NovoGrad, an optimizer similar to Adam BIBREF14 , except that its second moments are computed per layer instead of per weight. Compared to Adam, it reduces memory consumption and we find it to be more numerically stable.
At each step INLINEFORM0 , NovoGrad computes the stochastic gradient INLINEFORM1 following the regular forward-backward pass. Then the second-order moment INLINEFORM2 is computed for each layer INLINEFORM3 similar to ND-Adam BIBREF27 : DISPLAYFORM0
The second-order moment INLINEFORM0 is used to re-scale gradients INLINEFORM1 before calculating the first-order moment INLINEFORM2 : DISPLAYFORM0
If L2-regularization is used, a weight decay INLINEFORM0 is added to the re-scaled gradient (as in AdamW BIBREF28 ): DISPLAYFORM0
Finally, new weights are computed using the learning rate INLINEFORM0 : DISPLAYFORM0
Using NovoGrad instead of SGD with momentum, we decreased the WER on dev-clean LibriSpeech from 4.00% to 3.64%, a relative improvement of 9% for Jasper DR 10x5. We will further analyze NovoGrad in forthcoming work.
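Since the displayed equations were lost in extraction (the DISPLAYFORM placeholders above), the sketch below only mirrors the verbal description of NovoGrad: a per-layer second moment, gradient re-scaling before the first moment, an optional AdamW-style weight decay added to the re-scaled gradient, and an update with the learning rate. The exact moment updates and the hyperparameter names (beta1, beta2, eps) are assumptions rather than the published formulas.

import torch

@torch.no_grad()
def novograd_step(params, grads, state, lr, beta1=0.95, beta2=0.98,
                  weight_decay=0.0, eps=1e-8):
    """One optimizer step over per-layer (param, grad) pairs.
    state[i] holds {'m': first-moment tensor, 'v': scalar second moment for layer i}."""
    for i, (w, g) in enumerate(zip(params, grads)):
        st = state.setdefault(i, {"m": torch.zeros_like(w),
                                  "v": torch.zeros((), device=w.device)})
        g_sq = (g * g).sum()                        # per-layer squared gradient norm
        st["v"] = beta2 * st["v"] + (1 - beta2) * g_sq
        g_scaled = g / (st["v"].sqrt() + eps)       # re-scale gradient by the layer statistic
        if weight_decay > 0:
            g_scaled = g_scaled + weight_decay * w  # decoupled weight decay (AdamW-style)
        st["m"] = beta1 * st["m"] + g_scaled        # first moment on the re-scaled gradient
        w -= lr * st["m"]                           # SGD-style update with learning rate lr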
Results
We evaluate Jasper across a number of datasets in various domains. In all experiments, we use dropout and weight decay as regularization. At training time, we use speed perturbation with fixed +/-10% BIBREF29 for LibriSpeech. For WSJ and Hub5'00, we use a random speed perturbation factor between [-10%, 10%] as each utterance is fed into the model. All models have been trained on NVIDIA DGX-1 in mixed precision BIBREF30 using OpenSeq2Seq BIBREF31 . Source code, training configurations, and pretrained models are available.
Read Speech
We evaluated the performance of Jasper on two read speech datasets: LibriSpeech and Wall Street Journal (WSJ). For LibriSpeech, we trained Jasper DR 10x5 using our NovoGrad optimizer for 400 epochs. We achieve SOTA performance on the test-clean subset and SOTA among end-to-end speech recognition models on test-other.
We trained a smaller Jasper 10x3 model with SGD with momentum optimizer for 400 epochs on a combined WSJ dataset (80 hours): LDC93S6A (WSJ0) and LDC94S13A (WSJ1). The results are provided in Table TABREF29 .
Conversational Speech
We also evaluate the Jasper model's performance on a conversational English corpus. The Hub5 Year 2000 (Hub5'00) evaluation (LDC2002S09, LDC2005S13) is widely used in academia. It is divided into two subsets: Switchboard (SWB) and Callhome (CHM). The training data for both the acoustic and language models consisted of the 2000hr Fisher+Switchboard training data (LDC2004S13, LDC2005S13, LDC97S62). Jasper DR 10x5 was trained using SGD with momentum for 50 epochs. We compare to other models trained using the same data and report Hub5'00 results in Table TABREF31 .
We obtain good results for SWB. However, there is work to be done to improve WER on harder tasks such as CHM.
Conclusions
We have presented a new family of neural architectures for end-to-end speech recognition. Inspired by wav2letter's convolutional approach, we build a deep and scalable model, which requires a well-designed residual topology, effective regularization, and a strong optimizer. As our architecture studies demonstrated, a combination of standard components leads to SOTA results on LibriSpeech and competitive results on other benchmarks. Our Jasper architecture is highly efficient for training and inference, and serves as a good baseline approach on top of which to explore more sophisticated regularization, data augmentation, loss functions, language models, and optimization strategies. We are interested to see if our approach can continue to scale to deeper models and larger datasets.
Figure 1: Jasper BxR model: B - number of blocks, R - number of sub-blocks.
Figure 2: Jasper Dense Residual
Table 1: Jasper 10x5: 10 blocks, each consisting of 5 1D-convolutional sub-blocks, plus 4 additional blocks.
Table 4: Residual Connections: Greedy WER, LibriSpeech for Jasper 10x3 after 400 epochs. All models sized to have roughly the same parameter count.
Table 2: Normalization and Activation: Greedy WER, LibriSpeech after 50 epochs
Table 3: Sequence Masking: Greedy WER, LibriSpeech for Jasper 10x4 after 50 epochs
Figure 3: LM perplexity vs WER. LibriSpeech dev-other. Varying perplexity is achieved by taking earlier or later snapshots during training.
Table 5: LibriSpeech, WER (%)
Table 6: WSJ End-to-End Models, WER (%)
Table 7: Hub5’00, WER (%)
Query: what were the baselines?
JSON Response: | {"answer": "LF-MMI Attention\nSeq2Seq \nRNN-T \nChar E2E LF-MMI \nPhone E2E LF-MMI \nCTC + Gram-CTC", "evidence": ["We also evaluate the Jasper model\u0027s performance on a conversational English corpus. The Hub5 Year 2000 (Hub5\u002700) evaluation (LDC2002S09, LDC2005S13) is widely used in academia. It is divided into two subsets: Switchboard (SWB) and Callhome (CHM). The training data for both the acoustic and language models consisted of the 2000hr Fisher+Switchboard training data (LDC2004S13, LDC2005S13, LDC97S62). Jasper DR 10x5 was trained using SGD with momentum for 50 epochs. We compare to other models trained using the same data and report Hub5\u002700 results in Table TABREF31 ."]} | science.qasper_abstractive_qa.96.0 | [
{
"content": "Examine the provided sections from a research document and address the accompanying query. The text is divided by newlines.\n\nDocument Content:\n\nJasper: An End-to-End Convolutional Neural Acoustic Model\n\nIn this paper, we report state-of-the-art results on LibriSpeech among end-to-end speech recognition models without any external training data. Our model, Jasper, uses only 1D convolutions, batch normalization, ReLU, dropout, and residual connections. To improve training, we further introduce a new layer-wise optimizer called NovoGrad. Through experiments, we demonstrate that the proposed deep architecture performs as well or better than more complex choices. Our deepest Jasper variant uses 54 convolutional layers. With this architecture, we achieve 2.95% WER using a beam-search decoder with an external neural language model and 3.86% WER with a greedy decoder on LibriSpeech test-clean. We also report competitive results on the Wall Street Journal and the Hub5'00 conversational evaluation datasets.\n\nIntroduction\nConventional automatic speech recognition (ASR) systems typically consist of several independently learned components: an acoustic model to predict context-dependent sub-phoneme states (senones) from audio, a graph structure to map senones to phonemes, and a pronunciation model to map phonemes to words. Hybrid systems combine hidden Markov models to model state dependencies with neural networks to predict states BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 . Newer approaches such as end-to-end (E2E) systems reduce the overall complexity of the final system.\nOur research builds on prior work that has explored using time-delay neural networks (TDNN), other forms of convolutional neural networks, and Connectionist Temporal Classification (CTC) loss BIBREF4 , BIBREF5 , BIBREF6 . We took inspiration from wav2letter BIBREF6 , which uses 1D-convolution layers. Liptchinsky et al. BIBREF7 improved wav2letter by increasing the model depth to 19 convolutional layers and adding Gated Linear Units (GLU) BIBREF8 , weight normalization BIBREF9 and dropout.\nBy building a deeper and larger capacity network, we aim to demonstrate that we can match or outperform non end-to-end models on the LibriSpeech and 2000hr Fisher+Switchboard tasks. Like wav2letter, our architecture, Jasper, uses a stack of 1D-convolution layers, but with ReLU and batch normalization BIBREF10 . We find that ReLU and batch normalization outperform other activation and normalization schemes that we tested for convolutional ASR. As a result, Jasper's architecture contains only 1D convolution, batch normalization, ReLU, and dropout layers – operators highly optimized for training and inference on GPUs.\nIt is possible to increase the capacity of the Jasper model by stacking these operations. Our largest version uses 54 convolutional layers (333M parameters), while our small model uses 34 (201M parameters). We use residual connections to enable this level of depth. We investigate a number of residual options and propose a new residual connection topology we call Dense Residual (DR).\nIntegrating our best acoustic model with a Transformer-XL BIBREF11 language model allows us to obtain new state-of-the-art (SOTA) results on LibriSpeech BIBREF12 test-clean of 2.95% WER and SOTA results among end-to-end models on LibriSpeech test-other. We show competitive results on Wall Street Journal (WSJ), and 2000hr Fisher+Switchboard (F+S). 
Using only greedy decoding without a language model we achieve 3.86% WER on LibriSpeech test-clean.\nThis paper makes the following contributions:\n\nJasper Architecture\nJasper is a family of end-to-end ASR models that replace acoustic and pronunciation models with a convolutional neural network. Jasper uses mel-filterbank features calculated from 20ms windows with a 10ms overlap, and outputs a probability distribution over characters per frame. Jasper has a block architecture: a Jasper INLINEFORM0 x INLINEFORM1 model has INLINEFORM2 blocks, each with INLINEFORM3 sub-blocks. Each sub-block applies the following operations: a 1D-convolution, batch norm, ReLU, and dropout. All sub-blocks in a block have the same number of output channels.\nEach block input is connected directly into the last sub-block via a residual connection. The residual connection is first projected through a 1x1 convolution to account for different numbers of input and output channels, then through a batch norm layer. The output of this batch norm layer is added to the output of the batch norm layer in the last sub-block. The result of this sum is passed through the activation function and dropout to produce the output of the sub-block.\nThe sub-block architecture of Jasper was designed to facilitate fast GPU inference. Each sub-block can be fused into a single GPU kernel: dropout is not used at inference-time and is eliminated, batch norm can be fused with the preceding convolution, ReLU clamps the result, and residual summation can be treated as a modified bias term in this fused operation.\nAll Jasper models have four additional convolutional blocks: one pre-processing and three post-processing. See Figure FIGREF7 and Table TABREF8 for details.\nWe also build a variant of Jasper, Jasper Dense Residual (DR). Jasper DR follows DenseNet BIBREF15 and DenseRNet BIBREF16 , but instead of having dense connections within a block, the output of a convolution block is added to the inputs of all the following blocks. While DenseNet and DenseRNet concatenates the outputs of different layers, Jasper DR adds them in the same way that residuals are added in ResNet. As explained below, we find addition to be as effective as concatenation.\n\nNormalization and Activation\nIn our study, we evaluate performance of models with:\n3 types of normalization: batch norm BIBREF10 , weight norm BIBREF9 , and layer norm BIBREF17 \n3 types of rectified linear units: ReLU, clipped ReLU (cReLU), and leaky ReLU (lReLU)\n2 types of gated units: gated linear units (GLU) BIBREF8 , and gated activation units (GAU) BIBREF18 \nAll experiment results are shown in Table TABREF15 . We first experimented with a smaller Jasper5x3 model to pick the top 3 settings before training on larger Jasper models. We found that layer norm with GAU performed the best on the smaller model. Layer norm with ReLU and batch norm with ReLU came second and third in our tests. Using these 3, we conducted further experiments on a larger Jasper10x4. For larger models, we noticed that batch norm with ReLU outperformed other choices. Thus, leading us to decide on batch normalization and ReLU for our architecture.\nDuring batching, all sequences are padded to match the longest sequence. These padded values caused issues when using layer norm. We applied a sequence mask to exclude padding values from the mean and variance calculation. Further, we computed mean and variance over both the time dimension and channels similar to the sequence-wise normalization proposed by Laurent et al. 
BIBREF19 . In addition to masking layer norm, we additionally applied masking prior to the convolution operation, and masking the mean and variance calculation in batch norm. These results are shown in Table TABREF16 . Interestingly, we found that while masking before convolution gives a lower WER, using masks for both convolutions and batch norm results in worse performance.\nAs a final note, we found that training with weight norm was very unstable leading to exploding activations.\n\nResidual Connections\nFor models deeper than Jasper 5x3, we observe consistently that residual connections are necessary for training to converge. In addition to the simple residual and dense residual model described above, we investigated DenseNet BIBREF15 and DenseRNet BIBREF16 variants of Jasper. Both connect the outputs of each sub-block to the inputs of following sub-blocks within a block. DenseRNet, similar to Dense Residual, connects the output of each output of each block to the input of all following blocks. DenseNet and DenseRNet combine residual connections using concatenation whereas Residual and Dense Residual use addition. We found that Dense Residual and DenseRNet perform similarly with each performing better on specific subsets of LibriSpeech. We decided to use Dense Residual for subsequent experiments. The main reason is that due to concatenation, the growth factor for DenseNet and DenseRNet requires tuning for deeper models whereas Dense Residual simply just repeats a sub-blocks.\n\nLanguage Model\nA language model (LM) is a probability distribution over arbitrary symbol sequences INLINEFORM0 such that more likely sequences are assigned high probabilities. LMs are frequently used to condition beam search. During decoding, candidates are evaluated using both acoustic scores and LM scores. Traditional N-gram LMs have been augmented with neural LMs in recent work BIBREF20 , BIBREF21 , BIBREF22 .\nWe experiment with statistical N-gram language models BIBREF23 and neural Transformer-XL BIBREF11 models. Our best results use acoustic and word-level N-gram language models to generate a candidate list using beam search with a width of 2048. Next, an external Transformer-XL LM rescores the final list. All LMs were trained on datasets independently from acoustic models. We show results with the neural LM in our Results section. We observed a strong correlation between the quality of the neural LM (measured by perplexity) and WER as shown in Figure FIGREF20 .\n\nNovoGrad\nFor training, we use either Stochastic Gradient Descent (SGD) with momentum or our own NovoGrad, an optimizer similar to Adam BIBREF14 , except that its second moments are computed per layer instead of per weight. Compared to Adam, it reduces memory consumption and we find it to be more numerically stable.\nAt each step INLINEFORM0 , NovoGrad computes the stochastic gradient INLINEFORM1 following the regular forward-backward pass. 
Then the second-order moment INLINEFORM2 is computed for each layer INLINEFORM3 similar to ND-Adam BIBREF27 : DISPLAYFORM0 \nThe second-order moment INLINEFORM0 is used to re-scale gradients INLINEFORM1 before calculating the first-order moment INLINEFORM2 : DISPLAYFORM0 \nIf L2-regularization is used, a weight decay INLINEFORM0 is added to the re-scaled gradient (as in AdamW BIBREF28 ): DISPLAYFORM0 \nFinally, new weights are computed using the learning rate INLINEFORM0 : DISPLAYFORM0 \nUsing NovoGrad instead of SGD with momentum, we decreased the WER on dev-clean LibriSpeech from 4.00% to 3.64%, a relative improvement of 9% for Jasper DR 10x5. We will further analyze NovoGrad in forthcoming work.\n\nResults\nWe evaluate Jasper across a number of datasets in various domains. In all experiments, we use dropout and weight decay as regularization. At training time, we use speed perturbation with fixed +/-10% BIBREF29 for LibriSpeech. For WSJ and Hub5'00, we use a random speed perturbation factor between [-10%, 10%] as each utterance is fed into the model. All models have been trained on NVIDIA DGX-1 in mixed precision BIBREF30 using OpenSeq2Seq BIBREF31 . Source code, training configurations, and pretrained models are available. \n\nRead Speech\nWe evaluated the performance of Jasper on two read speech datasets: LibriSpeech and Wall Street Journal (WSJ). For LibriSpeech, we trained Jasper DR 10x5 using our NovoGrad optimizer for 400 epochs. We achieve SOTA performance on the test-clean subset and SOTA among end-to-end speech recognition models on test-other.\nWe trained a smaller Jasper 10x3 model with SGD with momentum optimizer for 400 epochs on a combined WSJ dataset (80 hours): LDC93S6A (WSJ0) and LDC94S13A (WSJ1). The results are provided in Table TABREF29 .\n\nConversational Speech\nWe also evaluate the Jasper model's performance on a conversational English corpus. The Hub5 Year 2000 (Hub5'00) evaluation (LDC2002S09, LDC2005S13) is widely used in academia. It is divided into two subsets: Switchboard (SWB) and Callhome (CHM). The training data for both the acoustic and language models consisted of the 2000hr Fisher+Switchboard training data (LDC2004S13, LDC2005S13, LDC97S62). Jasper DR 10x5 was trained using SGD with momentum for 50 epochs. We compare to other models trained using the same data and report Hub5'00 results in Table TABREF31 .\nWe obtain good results for SWB. However, there is work to be done to improve WER on harder tasks such as CHM.\n\nConclusions\nWe have presented a new family of neural architectures for end-to-end speech recognition. Inspired by wav2letter's convolutional approach, we build a deep and scalable model, which requires a well-designed residual topology, effective regularization, and a strong optimizer. As our architecture studies demonstrated, a combination of standard components leads to SOTA results on LibriSpeech and competitive results on other benchmarks. Our Jasper architecture is highly efficient for training and inference, and serves as a good baseline approach on top of which to explore more sophisticated regularization, data augmentation, loss functions, language models, and optimization strategies. 
We are interested to see if our approach can continue to scale to deeper models and larger datasets.\n\nFigure 1: JasperBxRmodel: B - number of blocks,R - number of sub-blocks.\n\nFigure 2: Jasper Dense Residual\n\nTable 1: Jasper 10x5: 10 blocks, each consisting of 5 1Dconvolutional sub-blocks, plus 4 additional blocks.\n\nTable 4: Residual Connections: Greedy WER, LibriSpeech for Jasper 10x3 after 400 epochs. All models sized to have roughly the same parameter count.\n\nTable 2: Normalization and Activation: Greedy WER, LibriSpeech after 50 epochs\n\nTable 3: Sequence Masking: Greedy WER, LibriSpeech for Jasper 10x4 after 50 epochs\n\nFigure 3: LM perplexity vs WER. LibriSpeech dev-other. Varying perplexity is achieved by taking earlier or later snapshots during training.\n\nTable 5: LibriSpeech, WER (%)\n\nTable 6: WSJ End-to-End Models, WER (%)\n\nTable 7: Hub5’00, WER (%)\n\nQuery: what were the baselines?\n\nJSON Response:",
"role": "user"
},
{
"content": "{\"answer\": \"LF-MMI Attention\\nSeq2Seq \\nRNN-T \\nChar E2E LF-MMI \\nPhone E2E LF-MMI \\nCTC + Gram-CTC\", \"evidence\": [\"We also evaluate the Jasper model\\u0027s performance on a conversational English corpus. The Hub5 Year 2000 (Hub5\\u002700) evaluation (LDC2002S09, LDC2005S13) is widely used in academia. It is divided into two subsets: Switchboard (SWB) and Callhome (CHM). The training data for both the acoustic and language models consisted of the 2000hr Fisher+Switchboard training data (LDC2004S13, LDC2005S13, LDC97S62). Jasper DR 10x5 was trained using SGD with momentum for 50 epochs. We compare to other models trained using the same data and report Hub5\\u002700 results in Table TABREF31 .\"]}",
"role": "assistant"
}
]
|
science.evidence_inference | Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:
- Treatment: The specific medical procedure or drug being evaluated.
- Control: The standard or placebo used for comparison. If absent, use `null`.
- Result: The health outcome being assessed.
- Impact: The observed effect of the treatment on the result, categorized as: ("increased", "decreased", "no change").
- Justification: The supporting details for the observed impact.
Format the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].
Abstract:
ABSTRACT.BACKGROUND: A common problem in cardiac surgery patients is decreased central venous pressure (CVP) and systemic blood pressure during transfer from operation room to intensive care unit (ICU). Since these reductions may lead to dangerous complications, this study aimed to evaluate the effects of vasopressors and isotonic fluids on hemodynamic status of cardiac surgery patients during their transfer to ICU.
ABSTRACT.METHODS: This randomized single-blind clinical trial was conducted in Chamran Hospital (Isfahan, Iran). A total number of 75 consecutive patients undergoing planned first-time coronary artery bypass grafting entered our study. Systolic blood pressure (SBP), diastolic blood pressure (DBP), CVP, and pulse rate (PR) were recorded at 5 to 10 minutes before leaving operating room and immediately after hospitalization in the ICU. Subjects in the first group received 7 cc/kg intravenous normal saline (as an isotonic fluid) within 5-10 minutes. The second group received 10 mg ephedrine before being transferred to the ICU.
ABSTRACT. RESULTS: The mean age of participants was 61.0 ± 3.6 years. No significant change in PR was detected in normal saline group (P > 0.05). No significant differences were found in two groups in SBP before and after hospitalization in the ICU (P > 0.05). DBP and CVP had statistically significant reductions in both groups after hospitalization in the ICU (P < 0.001). The mean CVP was also reduced significantly (10.7 ± 2.9 vs. 8.2 ± 3.4; P < 0.001).
ABSTRACT.CONCLUSION: Significant reductions in mean values of DBP and CVP occur during cardiac surgery and after arrival to the ICU. These reductions cannot be prevented by administration of vasopressors, such as ephedrine, or infusion of isotonic fluids, such as normal saline. Further studies are required to assess whether more fluids, other types of fluids, or other drugs can stop this phenomenon.
Extracted Data: | [["Ephedrine", "Control (nothing)", "Systolic blood preasure", "no significant difference", "No significant differences were found in two groups in SBP before and after hospitalization in the ICU (P \u003e 0.05)"]] | science.evidence_inference.73 | [
{
"content": "Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:\n\n- Treatment: The specific medical procedure or drug being evaluated.\n- Control: The standard or placebo used for comparison. If absent, use `null`.\n- Result: The health outcome being assessed.\n- Impact: The observed effect of the treatment on the result, categorized as: (\"increased\", \"decreased\", \"no change\").\n- Justification: The supporting details for the observed impact.\n\nFormat the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].\n\nAbstract:\nABSTRACT.BACKGROUND: A common problem in cardiac surgery patients is decreased central venous pressure (CVP) and systemic blood pressure during transfer from operation room to intensive care unit (ICU). Since these reductions may lead to dangerous complications, this study aimed to evaluate the effects of vasopressors and isotonic fluids on hemodynamic status of cardiac surgery patients during their transfer to ICU.\n\nABSTRACT.METHODS: This randomized single-blind clinical trial was conducted in Chamran Hospital (Isfahan, Iran). A total number of 75 consecutive patients undergoing planned first-time coronary artery bypass grafting entered our study. Systolic blood pressure (SBP), diastolic blood pressure (DBP), CVP, and pulse rate (PR) were recorded at 5 to 10 minutes before leaving operating room and immediately after hospitalization in the ICU. Subjects in the first group received 7 cc/kg intravenous normal saline (as an isotonic fluid) within 5-10 minutes. The second group received 10 mg ephedrine before being transferred to the ICU.\n\nABSTRACT. RESULTS: The mean age of participants was 61.0 ± 3.6 years. No significant change in PR was detected in normal saline group (P > 0.05). No significant differences were found in two groups in SBP before and after hospitalization in the ICU (P > 0.05). DBP and CVP had statistically significant reductions in both groups after hospitalization in the ICU (P < 0.001). The mean CVP was also reduced significantly (10.7 ± 2.9 vs. 8.2 ± 3.4; P < 0.001).\n\nABSTRACT.CONCLUSION: Significant reductions in mean values of DBP and CVP occur during cardiac surgery and after arrival to the ICU. These reductions cannot be prevented by administration of vasopressors, such as ephedrine, or infusion of isotonic fluids, such as normal saline. Further studies are required to assess whether more fluids, other types of fluids, or other drugs can stop this phenomenon.\n\nExtracted Data:",
"role": "user"
},
{
"content": "[[\"Ephedrine\", \"Control (nothing)\", \"Systolic blood preasure\", \"no significant difference\", \"No significant differences were found in two groups in SBP before and after hospitalization in the ICU (P \\u003e 0.05)\"]]",
"role": "assistant"
}
]
|
science.multicite_intent_classification | Examine the text below, which includes a citation marked by "<cite>" and "</cite>". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.
Provide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.
Text with Citation:
One common approach to measuring compositionality is to compare distributional representations of an MWE and its component words (e.g., Schone and Jurafsky, 2001; Baldwin et al., 2003; Katz and Giesbrecht, 2006; Reddy et al., 2011; Schulte im Walde et al., 2013; <cite>Salehi et al., 2015</cite>) . The hypothesis behind <cite>this line of work</cite> is that the representation of a compositional MWE will be more similar to the representations of its component words than the representation of a non-compositional MWE will be to those of its component words. | ["Background"] | science.multicite_intent_classification.2111 | [
{
"content": "Examine the text below, which includes a citation marked by \"<cite>\" and \"</cite>\". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.\n\nProvide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.\n\nText with Citation:\nOne common approach to measuring compositionality is to compare distributional representations of an MWE and its component words (e.g., Schone and Jurafsky, 2001; Baldwin et al., 2003; Katz and Giesbrecht, 2006; Reddy et al., 2011; Schulte im Walde et al., 2013; <cite>Salehi et al., 2015</cite>) . The hypothesis behind <cite>this line of work</cite> is that the representation of a compositional MWE will be more similar to the representations of its component words than the representation of a non-compositional MWE will be to those of its component words.",
"role": "user"
},
{
"content": "[\"Background\"]",
"role": "assistant"
}
]
|
science.scifact_entailment | Determine the relationship between them and generate a JSON response. Scientific Claim:
Citrullinated proteins externalized in neutrophil extracellular traps act indirectly to perpetuate the inflammatory cycle via induction of autoantibodies.
Research Abstract:
The early events leading to the development of rheumatoid arthritis (RA) remain unclear, but formation of autoantibodies to citrullinated protein antigens (ACPAs) is considered a key pathogenic event.
Neutrophils isolated from patients with various autoimmune diseases display enhanced neutrophil extracellular trap (NET) formation, a phenomenon that exposes autoantigens in the context of immunostimulatory molecules.
We investigated whether aberrant NETosis occurs in RA, determined its triggers, and examined its deleterious inflammatory consequences.
Enhanced NETosis was observed in circulating and RA synovial fluid neutrophils compared to neutrophils from healthy controls and from patients with osteoarthritis (OA).
Further, netting neutrophils infiltrated RA synovial tissue, rheumatoid nodules, and skin.
NETosis correlated with ACPA presence and levels and with systemic inflammatory markers.
RA sera and immunoglobulin fractions from RA patients with high levels of ACPA and/or rheumatoid factor significantly enhanced NETosis, and the NETs induced by these autoantibodies displayed distinct protein content.
Indeed, during NETosis, neutrophils externalized the citrullinated autoantigens implicated in RA pathogenesis, and anti-citrullinated vimentin antibodies potently induced NET formation.
Moreover, the inflammatory cytokines interleukin-17A (IL-17A) and tumor necrosis factor-α (TNF-α) induced NETosis in RA neutrophils.
In turn, NETs significantly augmented inflammatory responses in RA and OA synovial fibroblasts, including induction of IL-6, IL-8, chemokines, and adhesion molecules.
These observations implicate accelerated NETosis in RA pathogenesis, through externalization of citrullinated autoantigens and immunostimulatory molecules that may promote aberrant adaptive and innate immune responses in the joint and in the periphery, and perpetuate pathogenic mechanisms in this disease. | {"verdict": "SUPPORT", "evidence": ["Indeed, during NETosis, neutrophils externalized the citrullinated autoantigens implicated in RA pathogenesis, and anti-citrullinated vimentin antibodies potently induced NET formation.", "Moreover, the inflammatory cytokines interleukin-17A (IL-17A) and tumor necrosis factor-\u03b1 (TNF-\u03b1) induced NETosis in RA neutrophils.", "In turn, NETs significantly augmented inflammatory responses in RA and OA synovial fibroblasts, including induction of IL-6, IL-8, chemokines, and adhesion molecules.", "These observations implicate accelerated NETosis in RA pathogenesis, through externalization of citrullinated autoantigens and immunostimulatory molecules that may promote aberrant adaptive and innate immune responses in the joint and in the periphery, and perpetuate pathogenic mechanisms in this disease."]} | science.scifact_entailment.178 | [
{
"content": "Determine the relationship between them and generate a JSON response. Scientific Claim: \nCitrullinated proteins externalized in neutrophil extracellular traps act indirectly to perpetuate the inflammatory cycle via induction of autoantibodies.\n\nResearch Abstract:\nThe early events leading to the development of rheumatoid arthritis (RA) remain unclear, but formation of autoantibodies to citrullinated protein antigens (ACPAs) is considered a key pathogenic event.\nNeutrophils isolated from patients with various autoimmune diseases display enhanced neutrophil extracellular trap (NET) formation, a phenomenon that exposes autoantigens in the context of immunostimulatory molecules.\nWe investigated whether aberrant NETosis occurs in RA, determined its triggers, and examined its deleterious inflammatory consequences.\nEnhanced NETosis was observed in circulating and RA synovial fluid neutrophils compared to neutrophils from healthy controls and from patients with osteoarthritis (OA).\nFurther, netting neutrophils infiltrated RA synovial tissue, rheumatoid nodules, and skin.\nNETosis correlated with ACPA presence and levels and with systemic inflammatory markers.\nRA sera and immunoglobulin fractions from RA patients with high levels of ACPA and/or rheumatoid factor significantly enhanced NETosis, and the NETs induced by these autoantibodies displayed distinct protein content.\nIndeed, during NETosis, neutrophils externalized the citrullinated autoantigens implicated in RA pathogenesis, and anti-citrullinated vimentin antibodies potently induced NET formation.\nMoreover, the inflammatory cytokines interleukin-17A (IL-17A) and tumor necrosis factor-α (TNF-α) induced NETosis in RA neutrophils.\nIn turn, NETs significantly augmented inflammatory responses in RA and OA synovial fibroblasts, including induction of IL-6, IL-8, chemokines, and adhesion molecules.\nThese observations implicate accelerated NETosis in RA pathogenesis, through externalization of citrullinated autoantigens and immunostimulatory molecules that may promote aberrant adaptive and innate immune responses in the joint and in the periphery, and perpetuate pathogenic mechanisms in this disease.",
"role": "user"
},
{
"content": "{\"verdict\": \"SUPPORT\", \"evidence\": [\"Indeed, during NETosis, neutrophils externalized the citrullinated autoantigens implicated in RA pathogenesis, and anti-citrullinated vimentin antibodies potently induced NET formation.\", \"Moreover, the inflammatory cytokines interleukin-17A (IL-17A) and tumor necrosis factor-\\u03b1 (TNF-\\u03b1) induced NETosis in RA neutrophils.\", \"In turn, NETs significantly augmented inflammatory responses in RA and OA synovial fibroblasts, including induction of IL-6, IL-8, chemokines, and adhesion molecules.\", \"These observations implicate accelerated NETosis in RA pathogenesis, through externalization of citrullinated autoantigens and immunostimulatory molecules that may promote aberrant adaptive and innate immune responses in the joint and in the periphery, and perpetuate pathogenic mechanisms in this disease.\"]}",
"role": "assistant"
}
]
|
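The scifact_entailment records in this file all target the same output schema: a JSON object with a `verdict` string and an `evidence` list of sentences copied verbatim from the abstract. Below is a minimal validation sketch for that schema; the helper name is hypothetical, and the assumption that a `CONTRADICT` label completes the set is an inference — only `SUPPORT` and `NEI` actually appear in these records.

```python
import json

# SUPPORT and NEI appear in the records here; CONTRADICT is an assumed third label.
ALLOWED_VERDICTS = {"SUPPORT", "CONTRADICT", "NEI"}

def check_verdict_record(raw: str) -> dict:
    """Parse an assistant response and check it against the verdict/evidence schema."""
    obj = json.loads(raw)
    assert set(obj) == {"verdict", "evidence"}, "exactly two keys expected"
    assert obj["verdict"] in ALLOWED_VERDICTS, f"unexpected verdict: {obj['verdict']}"
    assert isinstance(obj["evidence"], list) and all(isinstance(s, str) for s in obj["evidence"])
    if obj["verdict"] == "NEI":
        # NEI answers in this file carry an empty evidence list.
        assert obj["evidence"] == []
    return obj

if __name__ == "__main__":
    print(check_verdict_record('{"verdict": "NEI", "evidence": []}'))
```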
science.mup_single_document_summarization | Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.
Title of the Paper:
Residual Non-local Attention Networks for Image Restoration
Main Content:
1 INTRODUCTION . Image restoration aims to recover high-quality ( HQ ) images from their corrupted low-quality ( LQ ) observations and plays a fundamental role in various high-level vision tasks . It is a typical ill-posed problem due to the irreversible nature of the image degradation process . Some most widely studied image restoration tasks include image denoising , demosaicing , and compression artifacts reduction . By distinctively modelling the restoration process from LQ observations to HQ objectives , i.e. , without assumption for a specific restoration task when modelling , these tasks can be uniformly addressed in the same framework . Recently , deep convolutional neural network ( CNN ) has shown extraordinary capability of modelling various vision problems , ranging from low-level ( e.g. , image denoising ( Zhang et al. , 2017a ) , compression artifacts reduction ( Dong et al. , 2015 ) , and image super-resolution ( Kim et al. , 2016 ; Lai et al. , 2017 ; Tai et al. , 2017 ; Lim et al. , 2017 ; Zhang et al. , 2018a ; Haris et al. , 2018 ; Wang et al. , 2018c ; Zhang et al. , 2018b ; Wang et al. , 2018b ) ) to high-level ( e.g. , image recognition ( He et al. , 2016 ) ) vision applications . Stacked denoising auto-encoder ( Vincent et al. , 2008 ) is one of the best well-known CNN based model for image restoration . Dong et al . proposed SRCNN ( Dong et al. , 2014 ) for image superresolution and ARCNN ( Dong et al. , 2015 ) for image compression artifacts reduction . Both SRCNN and ARCNN achieved superior performance against previous works . By introducing residual learning to ease the training difficulty for deeper network , Zhang et al . proposed DnCNN ( Zhang et al. , 2017a ) for image denoising and compression artifacts reduction . The denoiser prior was lately introduced in IRCNN ( Zhang et al. , 2017b ) for fast image restoration . Mao et al . proposed a very deep fully convolutional encoding-decoding framework with symmetric skip connections for image restoration ( Mao et al. , 2016 ) . Tai et al . later proposed a very deep end-to-end persistent memory network ( MemNet ) for image restoration ( Tai et al. , 2017 ) and achieved promising results . These CNN based methods have demonstrated the great ability of CNN for image restoration tasks . However , there are mainly three issues in the existing CNN based methods above . First , the receptive field size of these networks is relatively small . Most of them extract features in a local way with convolutional operation , which fails to capture the long-range dependencies between pixels in the whole image . A larger receptive field size allows to make better use of training inputs and more context information . This would be very helpful to capture the latent degradation model of LQ images , especially when the images suffer from heavy corruptions . Second , distinctive ability of these networks is also limited . Let ’ s take image denoising as an example . For a noisy image , the noise may appear in both the plain and textural regions . Noise removal would be easier in the plain area than that in the textural one . It is desired to make the denoising model focus on textual area more . However , most previous denoising methods neglect to consider different contents in the noisy input and treat them equally . This would result in over-smoothed outputs and some textural details would also fail to be recovered . Third , all channel-wise features are treated equally in those networks . 
This naive treatment lacks flexibility in dealing with different types of information ( e.g. , low- and high-frequency information ) . For a set of features , some contain more information related to HQ image and the others may contain more information related to corruptions . The interdependencies among channels should be considered for more accurate image restoration . To address the above issues , we propose the very deep residual non-local attention networks ( RNAN ) for high-quality image restoration . We design residual local and non-local attention blocks as the basic building modules for the very deep network . Each attention block consists of trunk and mask branches . We introduce residual block ( He et al. , 2016 ; Lim et al. , 2017 ) for trunk branch and extract hierarchical features . For mask branch , we conduct feature downscaling and upscaling with largestride convolution and deconvolution to enlarge receptive field size . Furthermore , we incorporate non-local block in the mask branch to obtain residual non-local mixed attention . We apply RNAN for various restoration tasks , including image denoising , demosaicing , and compression artifacts reduction . Extensive experiments show that our proposed RNAN achieves state-of-the-art results compared with other recent leading methods in all tasks . To the best of our knowledge , this is the first time to consider residual non-local attention for image restoration problems . The main contributions of this work are three-fold : • We propose the very deep residual non-local networks for high-quality image restoration . The powerful networks are based on our proposed residual local and non-local attention blocks , which consist of trunk and mask branches . The network obtains non-local mixed attention with non-local block in the mask branch . Such attention mechanis helps to learn local and non-local information from the hierarchical features . • We propose residual non-local attention learning to train very deep networks by preserving more low-level features , being more suitable for image restoration . Using non-local lowlevel and high-level attention from the very deep network , we can pursue better network representational ability and finally obtain high-quality image restoration results . • We demonstrate with extensive experiments that our RNAN is powerful for various image restoration tasks . RNAN achieves superior results over leading methods for image denoising , demosaicing , compression artifacts reduction , and super-resolution . In addition , RNAN achieves superior performance with moderate model size and performs very fast . 2 RELATED WORK . Non-local prior . As a classical filtering algorithm , non-local means ( Buades et al. , 2005 ) is computed as weighted mean of all pixels of an image . Such operation allows distant pixels to contribute to the response of a position at a time . It was lately introduced in BM3D ( Dabov et al. , 2007b ) for image denoising . Recently , Wang et al . ( 2018a ) proposed non-local neural network by incorporating non-local operations in deep neural network for video classification . We can see that those methods mainly introduce non-local information in the trunk pipeline . Liu et al . ( 2018 ) proposed non-local recurrent network for image restoration . However , in this paper , we mainly focus on learning nonlocal attention to better guide feature extraction in trunk branch . Attention mechanisms . 
Generally , attention can be viewed as a guidance to bias the allocation of available processing resources towards the most informative components of an input ( Hu et al. , 2017 ) . Recently , tentative works have been proposed to apply attention into deep neural networks ( Wang et al. , 2017 ; Hu et al. , 2017 ) . It ’ s usually combined with a gating function ( e.g. , sigmoid ) to rescale the feature maps . Wang et al . ( 2017 ) proposed residual attention network for image classification with a trunk-and-mask attention mechanism . Hu et al . ( 2017 ) proposed squeezeand-excitation ( SE ) block to model channel-wise relationships to obtain significant performance improvement for image classification . In all , these works mainly aim to guide the network pay more attention to the regions of interested . However , few works have been proposed to investigate the effect of attention for image restoration tasks . Here , we want to enhances the network with distinguished power for noise and image content . Image restoration architectures . Stacked denoising auto-encoder ( Vincent et al. , 2008 ) is one of the most well-known CNN-based image restoration method . ( Dong et al. , 2015 ) proposed ARCNN for image compression artifact reduction with several stacked convolutional layers . With the help of residual learning and batch normalization ( Ioffe & Szegedy , 2015 ) , Zhang et al . proposed DnCNN ( Zhang et al. , 2017a ) for accurate image restoration and denoiser priors for image restoration in IRCNN ( Zhang et al. , 2017b ) . Recently , great progresses have been made in image restoration community , where Timofte et al . ( Timofte et al. , 2017 ) , Ancuti et al . ( Ancuti et al. , 2018 ) , and Blau et al . ( Blau et al. , 2018 ) lead the main competitions recently and achieved new research status and records . For example , Wang et al . ( Wang et al. , 2018c ) proposed a fully progressive image SR approach . However , most methods are plain networks and neglect to use non-local information . 3 RESIDUAL NON-LOCAL ATTENTION NETWORK FOR IMAGE RESTORATION . 3.1 FRAMEWORK . The framework of our proposed residual non-local attention network ( RNAN ) is shown in Figure 1 . Let ’ s denote IL and IH as the low-quality ( e.g. , noisy , blurred , or compressed images ) and highquality images . The reconstructed image IR can be obtained by IR = HRNAN ( IL ) , ( 1 ) where HRNAN denotes the function of our proposed RNAN . With the usage of global residual learning in pixel space , the main part of our network can concentrate on learning the degradation components ( e.g. , noisy , blurring , or compressed artifacts ) . The first and last convolutional layers are shallow feature extractor and reconstruction layer respectively . We propose residual local and non-local attention blocks to extract hierarchical attentionaware features . In addition to making the main network learn degradation components , we further concentrate on more challenging areas by using local and non-local attention . We only incorporate residual non-local attention block in low-level and high-level feature space . This is mainly because a few non-local modules can well offer non-local ability to the network for image restoration . Then RNAN is optimized with loss function . Several loss functions have been investigated , such as L2 ( Mao et al. , 2016 ; Zhang et al. , 2017a ; Tai et al. , 2017 ; Zhang et al. , 2017b ) , L1 ( Lim et al. , 2017 ; Zhang et al. , 2018c ) , perceptual and adversarial losses ( Ledig et al. , 2017 ) . 
To show the effectiveness of our RNAN , we choose to optimize the same loss function ( e.g. , L2 loss function ) as previous works . Given a training set { IiL , I i H } N i=1 , which contains N low-quality inputs and their high-quality counterparts . The goal of training RNAN is to minimize the L2 loss function L ( Θ ) = 1 N N∑ i=1 ∥∥HRNAN ( IiL ) − IiH∥∥2 , ( 2 ) where ‖·‖2 denotes l2 norm . As detailed in Section 4 , we use the same loss function as that in other compared methods . Such choice makes it clearer and more fair to see the effectiveness of our proposed RNAN . Then we give more details to residual local and non-local attention blocks . 3.2 RESIDUAL NON-LOCAL ATTENTION BLOCK . Our residual non-local attention network is constructed by stacking several residual local and nonlocal attention blocks shown in Figure 2 . Each attention block is divided into two parts : q residual blocks ( RBs ) in the beginning and end of attention block . Two branches in the middle part : trunk branch and mask branch . For non-local attention block , we incorporate non-local block ( NLB ) in the mask branch , resulting non-local attention . Then we give more details to those components . | This paper proposes a residual non-local attention network for image restoration. Specifically, the proposed method has local and non-local attention blocks to extract features which capture long-range dependencies. The local and non-local blocks consist of trunk branch and (non-) local mask branch. The proposed method is evaluated on image denoising, demosaicing, compression artifacts reduction, and super-resolution. | science.mup_single_document_summarization.10049 | [
{
"content": "Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.\n\nTitle of the Paper:\nResidual Non-local Attention Networks for Image Restoration\n\nMain Content:\n1 INTRODUCTION . Image restoration aims to recover high-quality ( HQ ) images from their corrupted low-quality ( LQ ) observations and plays a fundamental role in various high-level vision tasks . It is a typical ill-posed problem due to the irreversible nature of the image degradation process . Some most widely studied image restoration tasks include image denoising , demosaicing , and compression artifacts reduction . By distinctively modelling the restoration process from LQ observations to HQ objectives , i.e. , without assumption for a specific restoration task when modelling , these tasks can be uniformly addressed in the same framework . Recently , deep convolutional neural network ( CNN ) has shown extraordinary capability of modelling various vision problems , ranging from low-level ( e.g. , image denoising ( Zhang et al. , 2017a ) , compression artifacts reduction ( Dong et al. , 2015 ) , and image super-resolution ( Kim et al. , 2016 ; Lai et al. , 2017 ; Tai et al. , 2017 ; Lim et al. , 2017 ; Zhang et al. , 2018a ; Haris et al. , 2018 ; Wang et al. , 2018c ; Zhang et al. , 2018b ; Wang et al. , 2018b ) ) to high-level ( e.g. , image recognition ( He et al. , 2016 ) ) vision applications . Stacked denoising auto-encoder ( Vincent et al. , 2008 ) is one of the best well-known CNN based model for image restoration . Dong et al . proposed SRCNN ( Dong et al. , 2014 ) for image superresolution and ARCNN ( Dong et al. , 2015 ) for image compression artifacts reduction . Both SRCNN and ARCNN achieved superior performance against previous works . By introducing residual learning to ease the training difficulty for deeper network , Zhang et al . proposed DnCNN ( Zhang et al. , 2017a ) for image denoising and compression artifacts reduction . The denoiser prior was lately introduced in IRCNN ( Zhang et al. , 2017b ) for fast image restoration . Mao et al . proposed a very deep fully convolutional encoding-decoding framework with symmetric skip connections for image restoration ( Mao et al. , 2016 ) . Tai et al . later proposed a very deep end-to-end persistent memory network ( MemNet ) for image restoration ( Tai et al. , 2017 ) and achieved promising results . These CNN based methods have demonstrated the great ability of CNN for image restoration tasks . However , there are mainly three issues in the existing CNN based methods above . First , the receptive field size of these networks is relatively small . Most of them extract features in a local way with convolutional operation , which fails to capture the long-range dependencies between pixels in the whole image . A larger receptive field size allows to make better use of training inputs and more context information . This would be very helpful to capture the latent degradation model of LQ images , especially when the images suffer from heavy corruptions . Second , distinctive ability of these networks is also limited . Let ’ s take image denoising as an example . For a noisy image , the noise may appear in both the plain and textural regions . Noise removal would be easier in the plain area than that in the textural one . It is desired to make the denoising model focus on textual area more . 
However , most previous denoising methods neglect to consider different contents in the noisy input and treat them equally . This would result in over-smoothed outputs and some textural details would also fail to be recovered . Third , all channel-wise features are treated equally in those networks . This naive treatment lacks flexibility in dealing with different types of information ( e.g. , low- and high-frequency information ) . For a set of features , some contain more information related to HQ image and the others may contain more information related to corruptions . The interdependencies among channels should be considered for more accurate image restoration . To address the above issues , we propose the very deep residual non-local attention networks ( RNAN ) for high-quality image restoration . We design residual local and non-local attention blocks as the basic building modules for the very deep network . Each attention block consists of trunk and mask branches . We introduce residual block ( He et al. , 2016 ; Lim et al. , 2017 ) for trunk branch and extract hierarchical features . For mask branch , we conduct feature downscaling and upscaling with largestride convolution and deconvolution to enlarge receptive field size . Furthermore , we incorporate non-local block in the mask branch to obtain residual non-local mixed attention . We apply RNAN for various restoration tasks , including image denoising , demosaicing , and compression artifacts reduction . Extensive experiments show that our proposed RNAN achieves state-of-the-art results compared with other recent leading methods in all tasks . To the best of our knowledge , this is the first time to consider residual non-local attention for image restoration problems . The main contributions of this work are three-fold : • We propose the very deep residual non-local networks for high-quality image restoration . The powerful networks are based on our proposed residual local and non-local attention blocks , which consist of trunk and mask branches . The network obtains non-local mixed attention with non-local block in the mask branch . Such attention mechanis helps to learn local and non-local information from the hierarchical features . • We propose residual non-local attention learning to train very deep networks by preserving more low-level features , being more suitable for image restoration . Using non-local lowlevel and high-level attention from the very deep network , we can pursue better network representational ability and finally obtain high-quality image restoration results . • We demonstrate with extensive experiments that our RNAN is powerful for various image restoration tasks . RNAN achieves superior results over leading methods for image denoising , demosaicing , compression artifacts reduction , and super-resolution . In addition , RNAN achieves superior performance with moderate model size and performs very fast . 2 RELATED WORK . Non-local prior . As a classical filtering algorithm , non-local means ( Buades et al. , 2005 ) is computed as weighted mean of all pixels of an image . Such operation allows distant pixels to contribute to the response of a position at a time . It was lately introduced in BM3D ( Dabov et al. , 2007b ) for image denoising . Recently , Wang et al . ( 2018a ) proposed non-local neural network by incorporating non-local operations in deep neural network for video classification . We can see that those methods mainly introduce non-local information in the trunk pipeline . Liu et al . 
( 2018 ) proposed non-local recurrent network for image restoration . However , in this paper , we mainly focus on learning nonlocal attention to better guide feature extraction in trunk branch . Attention mechanisms . Generally , attention can be viewed as a guidance to bias the allocation of available processing resources towards the most informative components of an input ( Hu et al. , 2017 ) . Recently , tentative works have been proposed to apply attention into deep neural networks ( Wang et al. , 2017 ; Hu et al. , 2017 ) . It ’ s usually combined with a gating function ( e.g. , sigmoid ) to rescale the feature maps . Wang et al . ( 2017 ) proposed residual attention network for image classification with a trunk-and-mask attention mechanism . Hu et al . ( 2017 ) proposed squeezeand-excitation ( SE ) block to model channel-wise relationships to obtain significant performance improvement for image classification . In all , these works mainly aim to guide the network pay more attention to the regions of interested . However , few works have been proposed to investigate the effect of attention for image restoration tasks . Here , we want to enhances the network with distinguished power for noise and image content . Image restoration architectures . Stacked denoising auto-encoder ( Vincent et al. , 2008 ) is one of the most well-known CNN-based image restoration method . ( Dong et al. , 2015 ) proposed ARCNN for image compression artifact reduction with several stacked convolutional layers . With the help of residual learning and batch normalization ( Ioffe & Szegedy , 2015 ) , Zhang et al . proposed DnCNN ( Zhang et al. , 2017a ) for accurate image restoration and denoiser priors for image restoration in IRCNN ( Zhang et al. , 2017b ) . Recently , great progresses have been made in image restoration community , where Timofte et al . ( Timofte et al. , 2017 ) , Ancuti et al . ( Ancuti et al. , 2018 ) , and Blau et al . ( Blau et al. , 2018 ) lead the main competitions recently and achieved new research status and records . For example , Wang et al . ( Wang et al. , 2018c ) proposed a fully progressive image SR approach . However , most methods are plain networks and neglect to use non-local information . 3 RESIDUAL NON-LOCAL ATTENTION NETWORK FOR IMAGE RESTORATION . 3.1 FRAMEWORK . The framework of our proposed residual non-local attention network ( RNAN ) is shown in Figure 1 . Let ’ s denote IL and IH as the low-quality ( e.g. , noisy , blurred , or compressed images ) and highquality images . The reconstructed image IR can be obtained by IR = HRNAN ( IL ) , ( 1 ) where HRNAN denotes the function of our proposed RNAN . With the usage of global residual learning in pixel space , the main part of our network can concentrate on learning the degradation components ( e.g. , noisy , blurring , or compressed artifacts ) . The first and last convolutional layers are shallow feature extractor and reconstruction layer respectively . We propose residual local and non-local attention blocks to extract hierarchical attentionaware features . In addition to making the main network learn degradation components , we further concentrate on more challenging areas by using local and non-local attention . We only incorporate residual non-local attention block in low-level and high-level feature space . This is mainly because a few non-local modules can well offer non-local ability to the network for image restoration . Then RNAN is optimized with loss function . 
Several loss functions have been investigated , such as L2 ( Mao et al. , 2016 ; Zhang et al. , 2017a ; Tai et al. , 2017 ; Zhang et al. , 2017b ) , L1 ( Lim et al. , 2017 ; Zhang et al. , 2018c ) , perceptual and adversarial losses ( Ledig et al. , 2017 ) . To show the effectiveness of our RNAN , we choose to optimize the same loss function ( e.g. , L2 loss function ) as previous works . Given a training set { IiL , I i H } N i=1 , which contains N low-quality inputs and their high-quality counterparts . The goal of training RNAN is to minimize the L2 loss function L ( Θ ) = 1 N N∑ i=1 ∥∥HRNAN ( IiL ) − IiH∥∥2 , ( 2 ) where ‖·‖2 denotes l2 norm . As detailed in Section 4 , we use the same loss function as that in other compared methods . Such choice makes it clearer and more fair to see the effectiveness of our proposed RNAN . Then we give more details to residual local and non-local attention blocks . 3.2 RESIDUAL NON-LOCAL ATTENTION BLOCK . Our residual non-local attention network is constructed by stacking several residual local and nonlocal attention blocks shown in Figure 2 . Each attention block is divided into two parts : q residual blocks ( RBs ) in the beginning and end of attention block . Two branches in the middle part : trunk branch and mask branch . For non-local attention block , we incorporate non-local block ( NLB ) in the mask branch , resulting non-local attention . Then we give more details to those components .",
"role": "user"
},
{
"content": "This paper proposes a residual non-local attention network for image restoration. Specifically, the proposed method has local and non-local attention blocks to extract features which capture long-range dependencies. The local and non-local blocks consist of trunk branch and (non-) local mask branch. The proposed method is evaluated on image denoising, demosaicing, compression artifacts reduction, and super-resolution.",
"role": "assistant"
}
]
|
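The RNAN record above reconstructs I_R = H_RNAN(I_L) with a global residual connection in pixel space and trains with the plain L2 objective of its Eq. (2), L(Θ) = (1/N) Σ_i ||H_RNAN(I_L^i) − I_H^i||². The sketch below only illustrates that training setup on a toy stand-in network; it is not the authors' architecture (no attention blocks, and the layer sizes are arbitrary).

```python
import torch
import torch.nn as nn

class ToyRestorer(nn.Module):
    """Stand-in for H_RNAN: shallow feature extractor, small trunk, global residual in pixel space."""
    def __init__(self, channels: int = 3, feats: int = 16):
        super().__init__()
        self.head = nn.Conv2d(channels, feats, 3, padding=1)   # shallow feature extractor
        self.body = nn.Sequential(nn.Conv2d(feats, feats, 3, padding=1), nn.ReLU(),
                                  nn.Conv2d(feats, feats, 3, padding=1))
        self.tail = nn.Conv2d(feats, channels, 3, padding=1)   # reconstruction layer

    def forward(self, lq):
        # global residual learning: the trunk only has to model the degradation component
        return lq + self.tail(self.body(self.head(lq)))

model = ToyRestorer()
lq = torch.rand(4, 3, 32, 32)               # low-quality batch I_L
hq = torch.rand(4, 3, 32, 32)               # high-quality targets I_H
loss = torch.mean((model(lq) - hq) ** 2)    # Eq. (2): mean squared L2 reconstruction error
loss.backward()
```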
science.scifact_entailment | Determine the relationship between them and generate a JSON response. Scientific Claim:
A deficiency of vitamin B6 decreases blood levels of homocysteine.
Research Abstract:
BACKGROUND Homocysteine is a risk factor for coronary artery disease (CAD), although a causal relation remains to be proven.
The importance of determining direct causality rests in the fact that plasma homocysteine can be safely and inexpensively reduced by 25% with folic acid.
This reduction is maximally achieved by doses of 0.4 mg/d.
High-dose folic acid (5 mg/d) improves endothelial function in CAD, although the mechanism is controversial.
It has been proposed that improvement occurs through reduction in total (tHcy) or free (non-protein bound) homocysteine (fHcy).
We investigated the effects of folic acid on endothelial function before a change in homocysteine in patients with CAD.
METHODS AND RESULTS A randomized, placebo-controlled study of folic acid (5 mg/d) for 6 weeks was undertaken in 33 patients.
Endothelial function, assessed by flow-mediated dilatation (FMD), was measured before, at 2 and 4 hours after the first dose of folic acid, and after 6 weeks of treatment.
Plasma folate increased markedly by 1 hour (200 compared with 25.8 nmol/L; P<0.001).
FMD improved at 2 hours (83 compared with 47 microm; P<0.001) and was largely complete by 4 hours (101 compared with 51 microm; P<0.001).
tHcy did not significantly differ acutely (4-hour tHcy, 9.56 compared with 9.79 micromol/L; P=NS).
fHcy did not differ at 3 hours but was slightly reduced at 4 hours (1.55 compared with 1.78 micromol/L; P=0.02).
FMD improvement did not correlate with reductions in either fHcy or tHcy at any time.
CONCLUSIONS These data suggest that folic acid improves endothelial function in CAD acutely by a mechanism largely independent of homocysteine. | {"verdict": "NEI", "evidence": []} | science.scifact_entailment.26 | [
{
"content": "Determine the relationship between them and generate a JSON response. Scientific Claim: \nA deficiency of vitamin B6 decreases blood levels of homocysteine.\n\nResearch Abstract:\nBACKGROUND Homocysteine is a risk factor for coronary artery disease (CAD), although a causal relation remains to be proven.\nThe importance of determining direct causality rests in the fact that plasma homocysteine can be safely and inexpensively reduced by 25% with folic acid.\nThis reduction is maximally achieved by doses of 0.4 mg/d.\nHigh-dose folic acid (5 mg/d) improves endothelial function in CAD, although the mechanism is controversial.\nIt has been proposed that improvement occurs through reduction in total (tHcy) or free (non-protein bound) homocysteine (fHcy).\nWe investigated the effects of folic acid on endothelial function before a change in homocysteine in patients with CAD.\nMETHODS AND RESULTS A randomized, placebo-controlled study of folic acid (5 mg/d) for 6 weeks was undertaken in 33 patients.\nEndothelial function, assessed by flow-mediated dilatation (FMD), was measured before, at 2 and 4 hours after the first dose of folic acid, and after 6 weeks of treatment.\nPlasma folate increased markedly by 1 hour (200 compared with 25.8 nmol/L; P<0.001).\nFMD improved at 2 hours (83 compared with 47 microm; P<0.001) and was largely complete by 4 hours (101 compared with 51 microm; P<0.001).\ntHcy did not significantly differ acutely (4-hour tHcy, 9.56 compared with 9.79 micromol/L; P=NS).\nfHcy did not differ at 3 hours but was slightly reduced at 4 hours (1.55 compared with 1.78 micromol/L; P=0.02).\nFMD improvement did not correlate with reductions in either fHcy or tHcy at any time.\nCONCLUSIONS These data suggest that folic acid improves endothelial function in CAD acutely by a mechanism largely independent of homocysteine.",
"role": "user"
},
{
"content": "{\"verdict\": \"NEI\", \"evidence\": []}",
"role": "assistant"
}
]
|
science.multicite_intent_classification | Examine the text below, which includes a citation marked by "<cite>" and "</cite>". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.
Provide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.
Text with Citation:
Even though cultural transmission may aid the emergence of compositionality, recent results in evolutionary linguistics (Raviv et al., 2018) and deep learning<cite> (Kottur et al., 2017</cite>; Mordatch & Abbeel, 2018) show cultural transmission may not be necessary for compositionality to emerge. | ["Background"] | science.multicite_intent_classification.642 | [
{
"content": "Examine the text below, which includes a citation marked by \"<cite>\" and \"</cite>\". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.\n\nProvide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.\n\nText with Citation:\nEven though cultural transmission may aid the emergence of compositionality, recent results in evolutionary linguistics (Raviv et al., 2018) and deep learning<cite> (Kottur et al., 2017</cite>; Mordatch & Abbeel, 2018) show cultural transmission may not be necessary for compositionality to emerge.",
"role": "user"
},
{
"content": "[\"Background\"]",
"role": "assistant"
}
]
|
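In the multicite record above, the citation of interest is delimited by `<cite>`/`</cite>` tags inside the sentence, and the expected answer is a JSON array of intent labels. The helper below is a hypothetical preprocessing sketch (not part of the dataset) that extracts the marked span and serializes a label list in the same format.

```python
import json
import re

def cited_span(text: str) -> str:
    """Return the citation marked by <cite>...</cite>, stripped of the tags."""
    match = re.search(r"<cite>(.*?)</cite>", text, flags=re.DOTALL)
    return match.group(1).strip() if match else ""

sentence = ("recent results in deep learning<cite> (Kottur et al., 2017</cite>; "
            "Mordatch & Abbeel, 2018) show cultural transmission may not be necessary")
print(cited_span(sentence))          # "(Kottur et al., 2017"
print(json.dumps(["Background"]))    # answers are serialized as a JSON array of intent labels
```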
science.discomat_te | | Oscillator no. in Table 2 | Band frequencies (cm-1) | Assignments |
| 18 | 3735+-16 | Various combination modes involving certain first order (Si)O-H vibrations |
| 19 | 3810+-4 | Various combination modes involving certain first order (Si)O-H vibrations |
| 20 | 3911 | Various combination modes involving certain first order (Si)O-H vibrations |
| 21 | 4130+-2 | Various combination modes involving certain first order (Si)O-H vibrations |
| 22 | 4383+-2 | Various combination modes involving certain first order (Si)O-H vibrations |
| 23 | 4474+-9 | Various combination modes involving certain first order (Si)O-H vibrations |
| 24 | 4744+-2 | Various combination modes involving certain first order (Si)O-H vibrations |
| | | |
| 25 | 5250 | Combination mode involving certain first-order vibration(s) of H2O molecule |
| | | |
| 26 | 6200 | Unspecified water-related mode |
| | | |
| 27 | 7021 | First overtones of certain n (Si)O-H vibrations |
Caption: Tentative assignments of the multiphonon water-related bands in the 3700-7500 cm-1 region of IR absorption spectra of silicate glasses assumed by our research
Examine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:
{"value": "xx", "type": "Other"}
{"value": "xx", "type": "Glass_Compound_Amount", "constituent": "xx", "unit": "xx", "material": "xx"}
Process each cell sequentially, and for any unknown attributes, use "xx" as a placeholder. Each JSON should be on a new line.
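As a concrete reading of the cell-to-JSON convention in this record — inferred from the gold output that follows: one object per numeric cell, uncertainty suffixes such as `+-16` dropped, and every cell in this table typed `Other` — a rough, hypothetical parser sketch is shown here; it is not part of the task prompt.

```python
import json
import re

rows = [
    "| 18 | 3735+-16 | Various combination modes involving certain first order (Si)O-H vibrations |",
    "| 27 | 7021 | First overtones of certain n (Si)O-H vibrations |",
]

def cells_to_json(row: str):
    """Yield one {"value": ..., "type": "Other"} object per numeric cell, keeping only the leading number."""
    for cell in (c.strip() for c in row.strip("|").split("|")):
        match = re.match(r"^(\d+(?:\.\d+)?)", cell)   # leading number; '+-16' style uncertainty is ignored
        if match:
            yield {"value": match.group(1), "type": "Other"}

for row in rows:
    for obj in cells_to_json(row):
        print(json.dumps(obj))
```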
Output: | {"value": "18", "type": "Other"}
{"value": "3735", "type": "Other"}
{"value": "19", "type": "Other"}
{"value": "3810", "type": "Other"}
{"value": "20", "type": "Other"}
{"value": "3911", "type": "Other"}
{"value": "21", "type": "Other"}
{"value": "4130", "type": "Other"}
{"value": "22", "type": "Other"}
{"value": "4383", "type": "Other"}
{"value": "23", "type": "Other"}
{"value": "4474", "type": "Other"}
{"value": "24", "type": "Other"}
{"value": "4744", "type": "Other"}
{"value": "25", "type": "Other"}
{"value": "5250", "type": "Other"}
{"value": "26", "type": "Other"}
{"value": "6200", "type": "Other"}
{"value": "27", "type": "Other"}
{"value": "7021", "type": "Other"} | science.discomat_te.229 | [
{
"content": "| Oscillator no. in Table 2 | Band frequencies (cm-1) | Assignments |\n| 18 | 3735+-16 | Various combination modes involving certain first order (Si)O-H vibrations |\n| 19 | 3810+-4 | Various combination modes involving certain first order (Si)O-H vibrations |\n| 20 | 3911 | Various combination modes involving certain first order (Si)O-H vibrations |\n| 21 | 4130+-2 | Various combination modes involving certain first order (Si)O-H vibrations |\n| 22 | 4383+-2 | Various combination modes involving certain first order (Si)O-H vibrations |\n| 23 | 4474+-9 | Various combination modes involving certain first order (Si)O-H vibrations |\n| 24 | 4744+-2 | Various combination modes involving certain first order (Si)O-H vibrations |\n| | | |\n| 25 | 5250 | Combination mode involving certain first-order vibration(s) of H2O molecule |\n| | | |\n| 26 | 6200 | Unspecified water-related mode |\n| | | |\n| 27 | 7021 | First overtones of certain n (Si)O-H vibrations |\n\nCaption: Tentative assignments of the multiphonon water-related bands in the 3700-7500 cm-1 region of IR absorption spectra of silicate glasses assumed by our research\n\nExamine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:\n{\"value\": \"xx\", \"type\": \"Other\"}\n{\"value\": \"xx\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"xx\", \"unit\": \"xx\", \"material\": \"xx\"}\n\nProcess each cell sequentially, and for any unknown attributes, use \"xx\" as a placeholder. Each JSON should be on a new line.\n\nOutput:",
"role": "user"
},
{
"content": "{\"value\": \"18\", \"type\": \"Other\"}\n{\"value\": \"3735\", \"type\": \"Other\"}\n{\"value\": \"19\", \"type\": \"Other\"}\n{\"value\": \"3810\", \"type\": \"Other\"}\n{\"value\": \"20\", \"type\": \"Other\"}\n{\"value\": \"3911\", \"type\": \"Other\"}\n{\"value\": \"21\", \"type\": \"Other\"}\n{\"value\": \"4130\", \"type\": \"Other\"}\n{\"value\": \"22\", \"type\": \"Other\"}\n{\"value\": \"4383\", \"type\": \"Other\"}\n{\"value\": \"23\", \"type\": \"Other\"}\n{\"value\": \"4474\", \"type\": \"Other\"}\n{\"value\": \"24\", \"type\": \"Other\"}\n{\"value\": \"4744\", \"type\": \"Other\"}\n{\"value\": \"25\", \"type\": \"Other\"}\n{\"value\": \"5250\", \"type\": \"Other\"}\n{\"value\": \"26\", \"type\": \"Other\"}\n{\"value\": \"6200\", \"type\": \"Other\"}\n{\"value\": \"27\", \"type\": \"Other\"}\n{\"value\": \"7021\", \"type\": \"Other\"}",
"role": "assistant"
}
]
|
science.mup_single_document_summarization | Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.
Title of the Paper:
Superclass-Conditional Gaussian Mixture Model For Learning Fine-Grained Embeddings
Main Content:
1 INTRODUCTION . Training deep models with sufficient generalizability is of fundamental importance , which demands immense training data with fine-grained annotations ( Krizhevsky et al. , 2012 ; Brown et al. , 2020 ) . In many fields , however , data labeling requires domain-specific knowledge , such as medicine ( Sohoni et al. , 2020 ) , thus is prohibitive , and infeasible to be exhaustive . In this case , data for model training may only be “ coarsely ” labeled , while later the model is tested on a finer-grained classification task ( Bukchin et al. , 2021 ) . For example , consider an event prediction task for dialysis patients ( Inaguma et al. , 2019 ) . Hemodialysis is a major renal replacement therapy for patients with end-stage renal failure . These patients have to take hemodialysis thrice a week , each lasts for 4-5 hours . During the treatment , unexpected events , such as muscle cramp , perspiration , and dizziness , may happen as a result of lowering blood pressure , which need intensive medical care , thus should always be avoided . It is therefore an important medical issue to predict such events before an initiation of hemodialysis . In this task , binary labels , which mark the incidence of an event , can be collected . In contrast , finergrained labels that annotate different subtypes of events are seldom recorded . Since distinguishing different subtypes facilitates precise diagnoses , and helps physicians assess the risk for deciding whether to perform a hemodialysis ( with certain precautions ) , it is desirable that a model trained with coarse ( binary ) labels can perform well on a finer-grained multi-class ( subtypes ) task . To fill the gap of granularity between the training and testing scenarios , a practical way is to collect a few new records for a patient , with their fine-grained annotations . These data constitute a support set for fine-tuning a pre-trained model to the specific data distribution induced by the annotations of the target patient , for whom the adapted model is used for future predictions . Although massive finegrained annotation is impractical , annotating a few-shot set is feasible . In this work , we are interested in such a Cross-Granularity Few-Shot ( CGFS ) learning problem , where a model pre-trained on a set of coarse classes ( denoted as superclasses ) , needs to adapt to an unseen set of fine-grained target classes ( denoted as subclasses ) . The target subclasses could be descendants of the superclasses ( as in the aforementioned example ) , or could descend from other superclasses that are unobserved during pre-training . To be practical , the adaptation should only use a few samples from the subclasses . The CGFS problem is not limited to the above application . It occurs in a model ’ s lifespan when an application requires separating some subclasses from the superclasses yet when the training dataset was created these subclasses were unannotated . For example , it could occur in detecting rare pathology or variants using medical images that were coarsely described ( Oakden-Rayner et al. , 2020 ) , or personalizing a generic model trained on all historical users to a specific customer ( Luo et al. , 2020 ) . Despite its significance , CGFS can not be trivially solved by regularly training models with coarse labels , because typical losses for supervised learning aim to maximize inter-class boundaries but neglect intra-class variation . Thus , subclasses may arbitrarily and unevenly spread in every superclass . 
Recently , Bukchin et al . ( 2021 ) proposed to integrate coarse supervision and contrastive learning ( within superclasses ) for solving CGFS . However , their approach can not be readily used for medical records which typically contain static profiles and time series ( Che et al. , 2016 ) . This is due to the absence of a standard augmentation method for generating constrastive pairs on some data other than images . Also , since contrastive learning does not model subclass explicitly , their solution could be suboptimal ( as evaluated in Sec . 4 ) . Moreover , as their model was built upon MoCo ( He et al. , 2020 ) , and maintains many dictionaries for data sampling , its computational costs are high ( Sec . 4 ) . In this work , we propose a novel Superclass-Conditional Gaussian Mixture model ( SCGM ) to learn fine-grained embeddings for the CGFS problem . Our contributions are summarized as follows . • SCGM is agnostic to the encoder , thus is flexible to different applications . It models the generation of samples from hierarchical classes , and explicitly represents the unobserved subclasses by latent variables , without assuming their identities . It dynamically computes a Gaussian mixture for every sample conditioned on its superclass , and the model forms a hierarchy of Gaussian mixtures . • SCGM only adds a small overhead to an encoder , for parameterizing its distributions , thus is efficient . The model parameters are learned end-to-end by maximum likelihood estimation via a principled Expectation-Maximization ( EM ) algorithm . We also theoretically linked our loss function to InfoNCE ( Oord et al. , 2018 ) , explaining its effectiveness from a contrastive perspective . • In the experiments , we evaluated SCGM on both benchmark image datasets and a real-life medical dataset . Since SCGM is compatible with contrastive learning , we also tested it with a momentum encoder ( He et al. , 2020 ) . The results demonstrate SCGM on generic encoders has already outperformed the state-of-the-art ( SOTA ) baselines , with less computational costs , and it achieves boosted performance in some cases with momentum contrast integrated . 2 RELATED WORK . To our best knowledge , this is the first work to develop an SCGM model for underlying a framework that enables tackling CGFS across domains . The most relevant work ( Bukchin et al. , 2021 ) combines superclass-wise contrastive learning with coarse classification for preserving intra-class variation . Another work ( Yang et al. , 2021 ) used a three-step approach that pseudo-labels the embeddings ( pretrained by coarse classification and batch-wise contrastive learning ) by clustering every superclass . The pseudo-fine-labels were used to re-train the encoder . A similar three-step method ( Sohoni et al. , 2020 ) used a different loss for maximizing the worst-case expected accuracy over the pseudo-labeled subclasses . As discussed before , the former two methods require non-trivial searches for a suitable data augmentation method , which may be unavailable for some non-image data . By splitting training steps , the latter two methods could lead to suboptimal pseudo-labeling , and misleading labels could confuse the downstream steps . In contrast , our model is end-to-end . It explicitly infers the posterior of the subclass during coarse training . Moreover , it can achieve better performance without using as many computational resources as the model of ( Bukchin et al. , 2021 ) ( as evaluated in Sec . 4 ) . Learning with coarse supervision . 
Several works deal with coarse/fine labels from the perspective of weakly supervised learning ( Zhou , 2018 ) than to tackle the CGFS problem , including training methods that take advantage of a mixture of either balanced ( Ristin et al. , 2015 ; Guo et al. , 2018 ; Taherkhani et al. , 2019 ) or unbalanced ( Hsieh et al. , 2019 ; Liu et al. , 2019 ; Robinson et al. , 2020 ) coarse and fine labels . Among them , Liu et al . ( 2019 ) addressed a few-shot learning problem , but the model training assumes access to some fine labels and a graph of class hierarchy , which are unavailable in our problem . Thus it can not be adapted to solve our problem . Few-shot learning . Meta-learning has become a popular idea to handle the few-shot learning problem , which derives metric-based ( Vinyals et al. , 2016 ; Snell et al. , 2017 ) and optimization-based methods ( Finn et al. , 2017 ; Nichol et al. , 2018 ) . The idea has been extended to semi-supervised ( Ren et al. , 2018 ) , unsupervised ( Hsu et al. , 2018 ) and semantics-augmented ( Xing et al. , 2019 ) scenarios , when labels are scarce . However , none of them explores coarse labels for cross-granularity learning . Recently , many works observed that learning embeddings on all classes ( without episodic training ) , followed by simple fine-tuning , is superior to SOTA meta-learning methods ( Wang et al. , 2019 ; Dhillon et al. , 2020 ; Tian et al. , 2020 ) . In this work , similar to ( Bukchin et al. , 2021 ) , we focus on this paradigm to learn useful embeddings , and do not use meta-learning for pre-training . Embedding methods . The emerging self-supervised methods , such as contrastive methods ( Oord et al. , 2018 ; Chen et al. , 2020a ; He et al. , 2020 ) , are appealing in their ability to attain comparable embeddings to the supervised counterparts , and even surpass them when transferring to other tasks ( He et al. , 2020 ; Tian et al. , 2020 ) . Beyond instance-wise contrast , recent methods have explored instance-cluster contrast to boost performance ( Caron et al. , 2020 ; Li et al. , 2020 ) . In unsupervised methods , joint embedding and clustering has been found beneficial ( Caron et al. , 2018 ; Asano et al. , 2020 ) . Whereas , these methods never exploited coarse supervision . Thus , their embeddings/clusters do not necessarily reflect intra-class variation , which is important to the CGFS task . Vanilla Gaussian mixture ( GM ) model is unfit for the CGFS scenario . A recent work ( Manduchi et al. , 2021 ) extended GM to constrained clustering by conditioning every sample with a prior clustering preference . It is neither supervised by coarse classes nor hierarchically structured . Conventional hierarchical GM is used for hierarchical clustering ( Goldberger & Roweis , 2005 ; Olech & Paradowski , 2016 ; Athey et al. , 2019 ) by applying GM agglomeratively or divisively . These unsupervised methods only infer clusters , but do not pre-train embedding models for task adaptation . 3 SUPERCLASS-CONDITIONAL GAUSSIAN MIXTURE MODEL . Firstly , a word about some notations . Let Dtrain = { ( xi , yi ) } ni=1 be n sample-label training pairs , where yi ∈ Ysuper = { 1 , ... , c } is a superclass label . Each xi is associated with a latent ( unobserved ) subclass label ŷi ∈ Ysub = { 1 , ... , s } . Ysub relates to Ysuper by a hierarchical structure , i.e. , Ysub can be partitioned into c disjoint sets Ysub-1 , ... , Ysub-c , such that if ŷi ∈ Ysub-j , then yi = j ( 1 ≤ j ≤ c ) . Let fθ be an encoder ( i.e. 
, backbone network ) that is trained on Dtrain ( without knowing Ysub ) . It maps xi to a d-dimensional feature fθ ( xi ) . At test time , given a k-shot support set for a subset Ymsub ⊆ Ysub of m subclasses , i.e. , Dsupport = { ( xi , ŷi ) |ŷi ∈ Ymsub } mki=1 , the task is to train a classifier C : Rd → Ymsub with optimal accuracy on a test set of Ymsub subclasses . In our experiments , we also explored the case when the subclasses belong to superclasses that are not used for pre-training . As discussed in Sec . 2 , our focus is to train fθ for good embeddings without modifying fθ during the adaptation–a paradigm with SOTA few-shot performance ( Dhillon et al. , 2020 ; Tian et al. , 2020 ) . Formally , let fθ ( xi ) = vi , our goal is to find the model parameter θ that maximizes the likelihood of the posterior distribution pθ ( yi|vi ) on the observed data in Dtrain for classification tasks ( Grathwohl et al. , 2019 ) . To model the unobserved subclasses , we associate every vi with a latent variable zi to indicate to which subclass vi belongs . Suppose there are r possible subclasses , the log-likelihood to maximize can be rewritten by marginalizing out the latent variables . ℓ ( Dtrain ; θ ) = 1 n n∑ i=1 log [ pθ ( yi|vi ) ] = 1 n n∑ i=1 log [ r∑ zi=1 p ( yi|zi ) pθ ( zi|vi ) ] ( 1 ) where the distribution pθ ( zi|vi ) specifies the subclass membership of vi , and p ( yi|zi ) associates zi with a subclass partition Ysub-yi . Unlike some previous works ( Sohoni et al. , 2020 ) , which searched the number of subclasses for every superclass using a quality metric , we only assume a total number of r subclasses , and seek to infer their relationship with superclasses , i.e. , p ( yi|zi ) , without any prior on subclass partitions , so that the model is more generalizable . | The paper introduces a setup with the goal to adapt from a coarse pretrained model to unseen fine-grained labels. This problem is formulated as a superclass-subclass latent model and learned with maximum likelihood via expectation-maximization. The proposed approach, super-class conditional Gaussian mixture (SCGM) model, defines a hierarchical Gaussian distribution on the class hierarchy that models both the superclasses and subclasses. SCGM is evaluated on 6 image-net related datasets and a a real Dialysis-Event dataset collected by hospitals, and demonstrates competitive results under two evaluation setups: generalizing to seen and unseen superclass. | science.mup_single_document_summarization.3916 | [
{
"content": "Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.\n\nTitle of the Paper:\nSuperclass-Conditional Gaussian Mixture Model For Learning Fine-Grained Embeddings\n\nMain Content:\n1 INTRODUCTION . Training deep models with sufficient generalizability is of fundamental importance , which demands immense training data with fine-grained annotations ( Krizhevsky et al. , 2012 ; Brown et al. , 2020 ) . In many fields , however , data labeling requires domain-specific knowledge , such as medicine ( Sohoni et al. , 2020 ) , thus is prohibitive , and infeasible to be exhaustive . In this case , data for model training may only be “ coarsely ” labeled , while later the model is tested on a finer-grained classification task ( Bukchin et al. , 2021 ) . For example , consider an event prediction task for dialysis patients ( Inaguma et al. , 2019 ) . Hemodialysis is a major renal replacement therapy for patients with end-stage renal failure . These patients have to take hemodialysis thrice a week , each lasts for 4-5 hours . During the treatment , unexpected events , such as muscle cramp , perspiration , and dizziness , may happen as a result of lowering blood pressure , which need intensive medical care , thus should always be avoided . It is therefore an important medical issue to predict such events before an initiation of hemodialysis . In this task , binary labels , which mark the incidence of an event , can be collected . In contrast , finergrained labels that annotate different subtypes of events are seldom recorded . Since distinguishing different subtypes facilitates precise diagnoses , and helps physicians assess the risk for deciding whether to perform a hemodialysis ( with certain precautions ) , it is desirable that a model trained with coarse ( binary ) labels can perform well on a finer-grained multi-class ( subtypes ) task . To fill the gap of granularity between the training and testing scenarios , a practical way is to collect a few new records for a patient , with their fine-grained annotations . These data constitute a support set for fine-tuning a pre-trained model to the specific data distribution induced by the annotations of the target patient , for whom the adapted model is used for future predictions . Although massive finegrained annotation is impractical , annotating a few-shot set is feasible . In this work , we are interested in such a Cross-Granularity Few-Shot ( CGFS ) learning problem , where a model pre-trained on a set of coarse classes ( denoted as superclasses ) , needs to adapt to an unseen set of fine-grained target classes ( denoted as subclasses ) . The target subclasses could be descendants of the superclasses ( as in the aforementioned example ) , or could descend from other superclasses that are unobserved during pre-training . To be practical , the adaptation should only use a few samples from the subclasses . The CGFS problem is not limited to the above application . It occurs in a model ’ s lifespan when an application requires separating some subclasses from the superclasses yet when the training dataset was created these subclasses were unannotated . For example , it could occur in detecting rare pathology or variants using medical images that were coarsely described ( Oakden-Rayner et al. , 2020 ) , or personalizing a generic model trained on all historical users to a specific customer ( Luo et al. , 2020 ) . 
Despite its significance , CGFS can not be trivially solved by regularly training models with coarse labels , because typical losses for supervised learning aim to maximize inter-class boundaries but neglect intra-class variation . Thus , subclasses may arbitrarily and unevenly spread in every superclass . Recently , Bukchin et al . ( 2021 ) proposed to integrate coarse supervision and contrastive learning ( within superclasses ) for solving CGFS . However , their approach can not be readily used for medical records which typically contain static profiles and time series ( Che et al. , 2016 ) . This is due to the absence of a standard augmentation method for generating constrastive pairs on some data other than images . Also , since contrastive learning does not model subclass explicitly , their solution could be suboptimal ( as evaluated in Sec . 4 ) . Moreover , as their model was built upon MoCo ( He et al. , 2020 ) , and maintains many dictionaries for data sampling , its computational costs are high ( Sec . 4 ) . In this work , we propose a novel Superclass-Conditional Gaussian Mixture model ( SCGM ) to learn fine-grained embeddings for the CGFS problem . Our contributions are summarized as follows . • SCGM is agnostic to the encoder , thus is flexible to different applications . It models the generation of samples from hierarchical classes , and explicitly represents the unobserved subclasses by latent variables , without assuming their identities . It dynamically computes a Gaussian mixture for every sample conditioned on its superclass , and the model forms a hierarchy of Gaussian mixtures . • SCGM only adds a small overhead to an encoder , for parameterizing its distributions , thus is efficient . The model parameters are learned end-to-end by maximum likelihood estimation via a principled Expectation-Maximization ( EM ) algorithm . We also theoretically linked our loss function to InfoNCE ( Oord et al. , 2018 ) , explaining its effectiveness from a contrastive perspective . • In the experiments , we evaluated SCGM on both benchmark image datasets and a real-life medical dataset . Since SCGM is compatible with contrastive learning , we also tested it with a momentum encoder ( He et al. , 2020 ) . The results demonstrate SCGM on generic encoders has already outperformed the state-of-the-art ( SOTA ) baselines , with less computational costs , and it achieves boosted performance in some cases with momentum contrast integrated . 2 RELATED WORK . To our best knowledge , this is the first work to develop an SCGM model for underlying a framework that enables tackling CGFS across domains . The most relevant work ( Bukchin et al. , 2021 ) combines superclass-wise contrastive learning with coarse classification for preserving intra-class variation . Another work ( Yang et al. , 2021 ) used a three-step approach that pseudo-labels the embeddings ( pretrained by coarse classification and batch-wise contrastive learning ) by clustering every superclass . The pseudo-fine-labels were used to re-train the encoder . A similar three-step method ( Sohoni et al. , 2020 ) used a different loss for maximizing the worst-case expected accuracy over the pseudo-labeled subclasses . As discussed before , the former two methods require non-trivial searches for a suitable data augmentation method , which may be unavailable for some non-image data . By splitting training steps , the latter two methods could lead to suboptimal pseudo-labeling , and misleading labels could confuse the downstream steps . 
In contrast , our model is end-to-end . It explicitly infers the posterior of the subclass during coarse training . Moreover , it can achieve better performance without using as many computational resources as the model of ( Bukchin et al. , 2021 ) ( as evaluated in Sec . 4 ) . Learning with coarse supervision . Several works deal with coarse/fine labels from the perspective of weakly supervised learning ( Zhou , 2018 ) than to tackle the CGFS problem , including training methods that take advantage of a mixture of either balanced ( Ristin et al. , 2015 ; Guo et al. , 2018 ; Taherkhani et al. , 2019 ) or unbalanced ( Hsieh et al. , 2019 ; Liu et al. , 2019 ; Robinson et al. , 2020 ) coarse and fine labels . Among them , Liu et al . ( 2019 ) addressed a few-shot learning problem , but the model training assumes access to some fine labels and a graph of class hierarchy , which are unavailable in our problem . Thus it can not be adapted to solve our problem . Few-shot learning . Meta-learning has become a popular idea to handle the few-shot learning problem , which derives metric-based ( Vinyals et al. , 2016 ; Snell et al. , 2017 ) and optimization-based methods ( Finn et al. , 2017 ; Nichol et al. , 2018 ) . The idea has been extended to semi-supervised ( Ren et al. , 2018 ) , unsupervised ( Hsu et al. , 2018 ) and semantics-augmented ( Xing et al. , 2019 ) scenarios , when labels are scarce . However , none of them explores coarse labels for cross-granularity learning . Recently , many works observed that learning embeddings on all classes ( without episodic training ) , followed by simple fine-tuning , is superior to SOTA meta-learning methods ( Wang et al. , 2019 ; Dhillon et al. , 2020 ; Tian et al. , 2020 ) . In this work , similar to ( Bukchin et al. , 2021 ) , we focus on this paradigm to learn useful embeddings , and do not use meta-learning for pre-training . Embedding methods . The emerging self-supervised methods , such as contrastive methods ( Oord et al. , 2018 ; Chen et al. , 2020a ; He et al. , 2020 ) , are appealing in their ability to attain comparable embeddings to the supervised counterparts , and even surpass them when transferring to other tasks ( He et al. , 2020 ; Tian et al. , 2020 ) . Beyond instance-wise contrast , recent methods have explored instance-cluster contrast to boost performance ( Caron et al. , 2020 ; Li et al. , 2020 ) . In unsupervised methods , joint embedding and clustering has been found beneficial ( Caron et al. , 2018 ; Asano et al. , 2020 ) . Whereas , these methods never exploited coarse supervision . Thus , their embeddings/clusters do not necessarily reflect intra-class variation , which is important to the CGFS task . Vanilla Gaussian mixture ( GM ) model is unfit for the CGFS scenario . A recent work ( Manduchi et al. , 2021 ) extended GM to constrained clustering by conditioning every sample with a prior clustering preference . It is neither supervised by coarse classes nor hierarchically structured . Conventional hierarchical GM is used for hierarchical clustering ( Goldberger & Roweis , 2005 ; Olech & Paradowski , 2016 ; Athey et al. , 2019 ) by applying GM agglomeratively or divisively . These unsupervised methods only infer clusters , but do not pre-train embedding models for task adaptation . 3 SUPERCLASS-CONDITIONAL GAUSSIAN MIXTURE MODEL . Firstly , a word about some notations . Let Dtrain = { ( xi , yi ) } ni=1 be n sample-label training pairs , where yi ∈ Ysuper = { 1 , ... , c } is a superclass label . 
Each x_i is associated with a latent ( unobserved ) subclass label ŷ_i ∈ Y_sub = { 1 , ... , s } . Y_sub relates to Y_super by a hierarchical structure , i.e. , Y_sub can be partitioned into c disjoint sets Y_sub-1 , ... , Y_sub-c , such that if ŷ_i ∈ Y_sub-j , then y_i = j ( 1 ≤ j ≤ c ) . Let f_θ be an encoder ( i.e. , backbone network ) that is trained on D_train ( without knowing Y_sub ) . It maps x_i to a d-dimensional feature f_θ ( x_i ) . At test time , given a k-shot support set for a subset Y^m_sub ⊆ Y_sub of m subclasses , i.e. , D_support = { ( x_i , ŷ_i ) | ŷ_i ∈ Y^m_sub }_{i=1}^{mk} , the task is to train a classifier C : R^d → Y^m_sub with optimal accuracy on a test set of Y^m_sub subclasses . In our experiments , we also explored the case when the subclasses belong to superclasses that are not used for pre-training . As discussed in Sec . 2 , our focus is to train f_θ for good embeddings without modifying f_θ during the adaptation–a paradigm with SOTA few-shot performance ( Dhillon et al. , 2020 ; Tian et al. , 2020 ) . Formally , let f_θ ( x_i ) = v_i , our goal is to find the model parameter θ that maximizes the likelihood of the posterior distribution p_θ ( y_i | v_i ) on the observed data in D_train for classification tasks ( Grathwohl et al. , 2019 ) . To model the unobserved subclasses , we associate every v_i with a latent variable z_i to indicate to which subclass v_i belongs . Suppose there are r possible subclasses , the log-likelihood to maximize can be rewritten by marginalizing out the latent variables . ℓ ( D_train ; θ ) = (1/n) Σ_{i=1}^{n} log [ p_θ ( y_i | v_i ) ] = (1/n) Σ_{i=1}^{n} log [ Σ_{z_i=1}^{r} p ( y_i | z_i ) p_θ ( z_i | v_i ) ] ( 1 ) where the distribution p_θ ( z_i | v_i ) specifies the subclass membership of v_i , and p ( y_i | z_i ) associates z_i with a subclass partition Y_sub-y_i . Unlike some previous works ( Sohoni et al. , 2020 ) , which searched the number of subclasses for every superclass using a quality metric , we only assume a total number of r subclasses , and seek to infer their relationship with superclasses , i.e. , p ( y_i | z_i ) , without any prior on subclass partitions , so that the model is more generalizable .",
"role": "user"
},
{
"content": "The paper introduces a setup with the goal to adapt from a coarse pretrained model to unseen fine-grained labels. This problem is formulated as a superclass-subclass latent model and learned with maximum likelihood via expectation-maximization. The proposed approach, super-class conditional Gaussian mixture (SCGM) model, defines a hierarchical Gaussian distribution on the class hierarchy that models both the superclasses and subclasses. SCGM is evaluated on 6 image-net related datasets and a a real Dialysis-Event dataset collected by hospitals, and demonstrates competitive results under two evaluation setups: generalizing to seen and unseen superclass.",
"role": "assistant"
}
]
|
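To make the marginal likelihood in Equation ( 1 ) of the record above concrete, the following is a minimal, self-contained sketch of p(y|v) = Σ_z p(y|z) p_θ(z|v) averaged into a log-likelihood. The unit-variance Gaussian components, the even assignment of the r subclasses to the c superclasses, and all shapes and random tensors are illustrative assumptions; this is only the forward computation, not the authors' EM-based training code.

```python
# Hypothetical sketch of Equation (1): p(y|v) = sum_z p(y|z) * p_theta(z|v).
# Everything below (unit-variance components, even subclass-to-superclass split,
# random data) is assumed for illustration, not taken from the paper's implementation.
import numpy as np

rng = np.random.default_rng(0)
n, d = 128, 16        # number of samples and embedding dimension (assumed)
c, r = 4, 12          # number of superclasses and total number of subclasses (assumed)

v = rng.normal(size=(n, d))             # encoder outputs f_theta(x_i) = v_i
y = rng.integers(0, c, size=n)          # observed superclass labels
mu = rng.normal(size=(r, d))            # subclass (mixture component) means; learnable in practice

# p(y|z): indicator that subclass z belongs to superclass y; here the r subclasses
# are simply split evenly among the c superclasses.
z_to_y = np.repeat(np.arange(c), r // c)                                 # shape (r,)
p_y_given_z = (z_to_y[None, :] == np.arange(c)[:, None]).astype(float)  # shape (c, r)

def log_softmax(logits, axis=-1):
    shifted = logits - logits.max(axis=axis, keepdims=True)
    return shifted - np.log(np.exp(shifted).sum(axis=axis, keepdims=True))

# p_theta(z|v): softmax over unit-variance Gaussian log-densities around each component mean.
sq_dist = ((v[:, None, :] - mu[None, :, :]) ** 2).sum(-1)   # shape (n, r)
log_p_z_given_v = log_softmax(-0.5 * sq_dist, axis=1)       # shape (n, r)

# Marginalize out the latent subclass: p(y_i|v_i) = sum_z p(y_i|z) p_theta(z|v_i).
p_y_given_v = np.exp(log_p_z_given_v) @ p_y_given_z.T       # shape (n, c)
log_lik = np.log(p_y_given_v[np.arange(n), y] + 1e-12).mean()
print(f"average log-likelihood (Eq. 1): {log_lik:.4f}")
```

In a full model the component parameters, and the encoder producing v, would be fitted by the EM procedure the record describes or by gradient ascent on this objective.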
science.mup_single_document_summarization | Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.
Title of the Paper:
Set Norm and Equivariant Skip Connections: Putting the Deep in Deep Sets
Main Content:
1 INTRODUCTION Many real-world tasks involve predictions on sets or multisets , from point cloud classification ( Guo et al. , 2020 ; Wu et al. , 2015 ; Qi et al. , 2017a ) to the prediction of health outcomes from single-cell data ( Regev et al. , 2017 ; Lähnemann et al. , 2020 ; Liu et al. , 2021 ; Yuan et al. , 2017 ) . In the context of health prediction among other applications , there is the need for reliable methods that can be applied by application-based practitioners without the additional requirement of engineering models for every application task . An important property of models applied to input sets is permutation invariance : for any permutation of the instances in the input set , the model prediction stays the same . Deep Sets ( Zaheer et al. , 2017 ) and Set Transformer ( Lee et al. , 2019 ) are two general-purpose deep permutation invariant models . Such models have been proven to be universal approximators of permutation invariant functions under the right conditions ( Zaheer et al. , 2017 ; Wagstaff et al. , 2019 ) . In practice , however , the architectures are tailored to specific tasks to achieve good performance . In this work , we explore a more general approach to achieving good performance : making networks deeper , a strategy which has yielded benefit across many tasks ( He et al. , 2016b ; Wang et al. , 2019b ) . We first note that naively stacking more layers on both the Deep Sets and Set Transformer architectures hurts performance . Consider Figure 1 . Deep Sets 50 layers ( panel ( a ) ) has significantly worse performance than Deep Sets 3 layers due to the problem of vanishing gradients . Residual connections are often used to combat vanishing gradients , but even the Set Transformer architecture which consists of residual connections has worse performance at a higher depth ( panel ( b ) ) . Another strategy often used in combination with residual connections in deep architectures is the use of normalization layers , but the use of layer norm does not improve or deteriorates performance of both architectures at high depths , one with residual connections ( Set Transformer ) and one without ( Deep Sets ) . In fact , the Set Transformer paper ( Lee et al. , 2019 ) mentions layer norm as a component of their architecture but turns it off in actual implementation . In this work , we tackle the challenge of designing deep permutation invariant networks with the aim of providing general purpose architectures . • Based on real-world applications , we formalize desiderata for normalization layers when inputs are sets and systematically analyze options based on a unified framework . We introduce set norm , aimed at optimizing the desiderata we introduce . Set norm is a general , easy-to-implement normalization that can be applied on any architecture . We empirically demonstrate its benefit with ablation studies ( Section 3 , Section 5 ) . • We consider settings for residual connections which prevent vanishing gradients and maintain permutation equivariance . Through extensive experiments , we show that the best setting is that of equivariant residual connections ( ERC ) coupled the “ clean residual path ” principle ( Section 4 ) . To the best of our knowledge , we are the first to analyze this principle across architectural designs and to suggest a new benefit of certain residual connections : alleviation of information loss ( Section 5 ) . • We develop Deep Sets++ ( DS++ ) and Set Transformer++ ( ST++ ) , leveraging set norm and ERC . 
We demonstrate that both models yield comparable or improved performance over their existing counterparts across a variety of tasks ( Section 5 ) . Among other results , these new architectures yield better accuracy on point cloud classification than the task-specific architectures proposed in the original Deep Sets and Set Transformer papers . The fact that set norm and equivariant residual connections yield improvements on two different architectures suggests that they may also be able to benefit other permutation invariant architectures in the future . • Given growing interest in single-cell analysis ( Regev et al. , 2017 ) and the need for methods which can extract signal from single-cell data ( Lähnemann et al. , 2020 ) , we introduce a novel clinical single-cell dataset for prediction on sets called Flow-RBC . The dataset consists of red blood cell ( RBC ) measurements and hematocrit levels ( i.e . the fraction of blood volume occupied by RBCs ) for 100,000+ patients . The size and presence of a prediction target ( hematocrit ) makes this dataset unique , even among single-cell datasets in established repositories like the Human Cell Atlas ( Regev et al. , 2017 ) . Flow-RBC not only provides a new real-world benchmark for methodological innovation but also a potential source of information for biomedical science ( Section 5 ) .
2 BACKGROUND . Let S be the number of samples in a set . A function f : X^S → Y is permutation invariant if any permutation π of the input set results in the same output : f ( πx ) = f ( x ) . A function φ : X^S → Y^S is permutation equivariant if , for any permutation π , the outputs are permuted accordingly : φ ( πx ) = π φ ( x ) . A sum/max-decomposable function f : X^S → Y is one which can be expressed via an equivariant encoder φ : X^S → Z^S that maps each input sample in X to a latent space Z , a sum or max aggregation over the sample dimension of the encoder output , and an unconstrained decoder ρ : Z → Y : f ( x ) = ρ ( Σ φ ( x ) ) . ( 1 ) An architecture is a universal approximator of permutation invariant functions if and only if it is sum/max-decomposable with sufficient conditions on the latent space dimension ( Wagstaff et al. , 2019 ) . Both the Deep Sets and Set Transformer architectures can be written in the form of Equation ( 1 ) . Concretely , the Deep Sets architecture consists of an equivariant encoder made up of feedforward layers , a sum or max aggregation , and a decoder also made up of feedforward layers ( Zaheer et al. , 2017 ) . The Set Transformer encoder consists of multi-attention blocks called inducing point attention blocks ( Lee et al. , 2019 ) , a variant of the original transformer block ( Vaswani et al. , 2017 ) modified to handle large set sizes . The aggregation is learned via attention , and the decoder consists of transformer blocks with self-attention . Both architectures are permutation invariant since they consist of a permutation equivariant encoder and a permutation invariant aggregation over the outputs of the encoder . In order to maintain permutation invariance of the architectures upon modification , one needs to ensure that changes to the encoder preserve permutation equivariance , and changes to the aggregation maintain permutation invariance . In the next two sections , we discuss how to incorporate normalization layers and residual connections while maintaining permutation invariance overall .
3 NORMALIZATION FOR SET DATA . Incorporating normalization layers into the permutation equivariant encoders requires careful consideration , as not all normalization layers are appropriate to use . We study normalization layers which consist of two operations , standardization and transformation . This setting captures most common normalizations ( Ioffe & Szegedy , 2015 ; Ba et al. , 2016 ; Ulyanov et al. , 2016 ) . Let a ∈ R^{B×S×D} be the activation before the normalization operation , where B is the size of the batch , S is the number of samples in a set ( the tensor is zero-padded to the largest sample size ) , and D is the feature dimension . First , the activations are standardized based on a setting L which defines which dimensions utilize separate statistics . For instance , L = { B , S } denotes that each set in a batch and each sample in a set gets its own mean and standard deviation for standardization , which are repeated over the D dimension so that µ_L ( a ) , σ_L ( a ) ∈ R^{B×S×D} match the original activation tensor for elementwise subtraction and division . A standardization operation can be defined as : ā_L = ( a − µ_L ( a ) ) / σ_L ( a ) . ( 2 ) Next , the standardized activations are transformed through learned parameters which differ only over a setting of dimensions M . For instance , M = { D } denotes that each feature is transformed by a different scale and bias , which are shared across the sets in the batch and samples in the sets . Let γ_M , β_M ∈ R^{B×S×D} denote the learned parameters and ⊙ represent elementwise multiplication ; any transformation operation can be defined as : â_M = ā ⊙ γ_M + β_M . ( 3 ) We can now summarize a wide array of normalization possibilities simply via choice of L and M . 3.1 NORMALIZATION DESIDERATA . We formally introduce the desiderata for a general-purpose normalization layer for prediction on sets . In particular : ( i ) The normalization must be permutation equivariant . ( ii ) The computation of the statistics for standardization ( µ_L ( a ) , σ_L ( a ) ) should not be affected by the presence of sets of varying sizes . ( iii ) The normalization should not deteriorate performance due to removal of information that is useful for prediction . Item ( ii ) is meant to address the concern that real-world set inputs are often not all the same size , e.g . not all point cloud objects are made up of the same number of points , and not all patients will have the same number of cells in a blood sample . If the presence of different set sizes forces a choice for an application-specific implementation of a normalization layer , then such a normalization is not general-purpose . Item ( iii ) refers to the fact that the standardization operation in a normalization layer incurs a loss of information , as activations which were once of varying means and variances across certain dimensions are now forced to be zero mean and unit variance . For real-valued sets in particular , this can greatly limit predictive performance , e.g . if the removed means and variances would help in the output prediction . As an example , the mean cell volume of a blood sample can be indicative of patient health , and a normalization layer that makes it impossible to distinguish between patients ' mean cell volumes will restrict performance across a wide variety of health outcomes predictions . 3.2 THE ONLY SUITABLE TRANSFORMATION FOR SETS IS ON THE FEATURE DIMENSION . Only normalization layers adopting a per feature transformation , i.e . M = { D } , meet the permutation equivariance desideratum ( i ) , as per sample transformations break permutation equivariance and per set transformations cause the prediction output to change when an element ' s position in the batch changes ( see Appendix B for proofs ) . Proposition 1 . Per feature transformation is the only setting which maintains permutation equivariance and prediction behavior which is agnostic to batch position . Thus , to systematically compare normalizations , we only need to consider the settings for L . 3.3 EXISTING NORMALIZATIONS DO NOT MEET ALL DESIDERATA . We now consider possible settings for standardization : { } , { B } , { S } , { D } , { B , S } , { B , D } , { S , D } . We first note that L = { S } and L = { S , D } destroy permutation equivariance ( see Appendix B for proof ) . Of the remaining settings , we note that two already exist in the literature . The first , which we call feature norm , is used in the permutation invariant PointNet and PointNet++ architectures for point cloud classification ( Qi et al. , 2017a ; b ) and performs a per feature standardization followed by a per feature transformation , i.e . L = { D } , M = { D } ( see Figure 2 ( i ) ) . The second is layer norm , used in Set Transformer , which applies standardization individually to each set and each sample and transformation separately per feature , i.e . L = { B , S } , M = { D } ( see Figure 2 ( ii ) ) . Both these normalization layers have drawbacks . Feature norm calculates statistics over the batch and consequently may exhibit batch effects : sets in the batch are no longer independent , and performance can be heavily impacted by batch size ( Hoffer et al. , 2017 ) . Additionally , like in batch norm , in feature norm sets are standardized differently at training and test time.¹ Moreover , in the context of sets of varying sizes , each possible implementation of feature norm ( e.g . computing statistics over the batch , averaging statistics computed on each set separately ) will lead to each set ( or samples in the set ) having a different impact in the computation of the statistics . Therefore , the use of feature norm requires careful consideration , especially in applications having as input real-valued sets with wide ranges of both set sizes and first moments . A similar property holds for normalizations which calculate statistics over the batch ( B ∉ L ) such as L = { } . Layer norm , on the other hand , removes information which may be particularly useful for set prediction tasks . By performing standardization on each sample in each set separately , layer norm maps two samples whose activations differ in only a scale and bias to the same output . The inability to distinguish such samples after layer norm can hurt prediction , as we see in Section 5.3 . Proposition 2 . Applying layer norm in its most common placement ( after linear projection , before non-linearity ) removes mean and variance information from each sample . The use of layer norm can present a representation issue similar to the oversmoothing problem in Graph Convolutional Networks ( Li et al. , 2018 ) , where nodes ( samples ) get mapped to the same value as the number of layers in the network increases and neighborhoods of nodes become similar in representation . ¹ Feature norm is implemented as batch norm on a transposed tensor . Thus , test batches are standardized by a running mean and variance while training batches are standardized by their own batch statistics .
| The paper has four main contributions: 1. introduces the _set norm_ normalization layer in neural network models for set data, as opposed to feature normalization (aka. batch-norm) and layer normalization layers; 2. provides intuition behind a "cleaner" implementation of residual connections; 3. implements set norm and the cleaner residual connections in modified versions of existing Deep Sets and Set Transformers models; 4. introduces a new dataset called Flow-RBC with over 100,000 examples. Each "example" consists of a $(X,y)$ pair, where $X=\\{x_1, \dotsc, x_{1000}\\}$ is a set of measurements on 1000 red-blood cells from a patient, and $y$ is a hematocrit level label. Each $x_i$ consists of both volume and hemoglobin mass measurements. This dataset is significantly larger than similar existing datasets. | science.mup_single_document_summarization.8188 | [
{
"content": "Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.\n\nTitle of the Paper:\nSet Norm and Equivariant Skip Connections: Putting the Deep in Deep Sets\n\nMain Content:\n1 INTRODUCTION Many real-world tasks involve predictions on sets or multisets , from point cloud classification ( Guo et al. , 2020 ; Wu et al. , 2015 ; Qi et al. , 2017a ) to the prediction of health outcomes from single-cell data ( Regev et al. , 2017 ; Lähnemann et al. , 2020 ; Liu et al. , 2021 ; Yuan et al. , 2017 ) . In the context of health prediction among other applications , there is the need for reliable methods that can be applied by application-based practitioners without the additional requirement of engineering models for every application task . An important property of models applied to input sets is permutation invariance : for any permutation of the instances in the input set , the model prediction stays the same . Deep Sets ( Zaheer et al. , 2017 ) and Set Transformer ( Lee et al. , 2019 ) are two general-purpose deep permutation invariant models . Such models have been proven to be universal approximators of permutation invariant functions under the right conditions ( Zaheer et al. , 2017 ; Wagstaff et al. , 2019 ) . In practice , however , the architectures are tailored to specific tasks to achieve good performance . In this work , we explore a more general approach to achieving good performance : making networks deeper , a strategy which has yielded benefit across many tasks ( He et al. , 2016b ; Wang et al. , 2019b ) . We first note that naively stacking more layers on both the Deep Sets and Set Transformer architectures hurts performance . Consider Figure 1 . Deep Sets 50 layers ( panel ( a ) ) has significantly worse performance than Deep Sets 3 layers due to the problem of vanishing gradients . Residual connections are often used to combat vanishing gradients , but even the Set Transformer architecture which consists of residual connections has worse performance at a higher depth ( panel ( b ) ) . Another strategy often used in combination with residual connections in deep architectures is the use of normalization layers , but the use of layer norm does not improve or deteriorates performance of both architectures at high depths , one with residual connections ( Set Transformer ) and one without ( Deep Sets ) . In fact , the Set Transformer paper ( Lee et al. , 2019 ) mentions layer norm as a component of their architecture but turns it off in actual implementation . In this work , we tackle the challenge of designing deep permutation invariant networks with the aim of providing general purpose architectures . • Based on real-world applications , we formalize desiderata for normalization layers when inputs are sets and systematically analyze options based on a unified framework . We introduce set norm , aimed at optimizing the desiderata we introduce . Set norm is a general , easy-to-implement normalization that can be applied on any architecture . We empirically demonstrate its benefit with ablation studies ( Section 3 , Section 5 ) . • We consider settings for residual connections which prevent vanishing gradients and maintain permutation equivariance . Through extensive experiments , we show that the best setting is that of equivariant residual connections ( ERC ) coupled the “ clean residual path ” principle ( Section 4 ) . 
To the best of our knowledge , we are the first to analyze this principle across architectural designs and to suggest a new benefit of certain residual connections : alleviation of information loss ( Section 5 ) . • We develop Deep Sets++ ( DS++ ) and Set Transformer++ ( ST++ ) , leveraging set norm and ERC . We demonstrate that both models yield comparable or improved performance over their existing counterparts across a variety of tasks ( Section 5 ) . Among other results , these new architectures yield better accuracy on point cloud classification than the task-specific architectures proposed in the original Deep Sets and Set Transformer papers . The fact that set norm and equivariant residual connections yield improvements on two different architectures suggests that they may also be able to benefit other permutation invariant architectures in the future . • Given growing interest in single-cell analysis ( Regev et al. , 2017 ) and the need for methods which can extract signal from single-cell data ( Lähnemann et al. , 2020 ) , we introduce a novel clinical single-cell dataset for prediction on sets called Flow-RBC . The dataset consists of red blood cell ( RBC ) measurements and hematocrit levels ( i.e . the fraction of blood volume occupied by RBCs ) for 100,000+ patients . The size and presence of a prediction target ( hematocrit ) makes this dataset unique , even among single-cell datasets in established repositories like the Human Cell Atlas ( Regev et al. , 2017 ) . Flow-RBC not only provides a new real-world benchmark for methodological innovation but also a potential source of information for biomedical science ( Section 5 ) . 2 BACKGROUND . Let S be the number of samples in a set . A function f : XS ! Y is permutation invariant if any permutation ⇡ of the input set results in the same output : f ( ⇡x ) = f ( x ) . A function : XS ! YS permutation equivariant if , for any permutation ⇡ , the outputs are permuted accordingly : ( ⇡x ) = ⇡ ( x ) . A sum/max-decomposable function f : XS ! Y is one which can be expressed via an equivariant encoder : XS ! ZS that maps each input sample in X to a latent space Z , a sum or max aggregation over the sample dimension of the encoder output , and an unconstrained decoder ⇢ : Z ! Y : f ( x ) = ⇢ ⇣X ( x ) ⌘ . ( 1 ) An architecture is a universal approximator of permutation invariant functions if and only if it is sum/max-decomposable with sufficient conditions on the latent space dimension ( Wagstaff et al. , 2019 ) . Both the Deep Sets and Set Transformer architectures can be written in the form of Equation ( 1 ) . Concretely , Deep Sets architecture consists of an equivariant encoder made up of feedforward layers , a sum or max aggregation , and a decoder also made up of feedforward layers ( Zaheer et al. , 2017 ) . The Set Transformer encoder consists of multi-attention blocks called inducing point attention blocks ( Lee et al. , 2019 ) , a variant of the original transformer block ( Vaswani et al. , 2017 ) modified to handle large set sizes . The aggregation is learned via attention , and the decoder consists of transformer blocks with self-attention . Both architectures are permutation invariant since they consist of a permutation equivariant encoder and a permutation invariant aggregation over the outputs of the encoder . 
In order to maintain permutation invariance of the architectures upon modification , one needs to ensure that changes to the encoder preserve permutation equivariance , and changes to the aggregation maintain permutation invariance . In the next two sections , we discuss how to incorporate normalization layers and residual connections while maintaining permutation invariance overall . 3 NORMALIZATION FOR SET DATA . Incorporating normalization layers into the permutation equivariant encoders requires careful consideration , as not all normalization layers are appropriate to use . We study normalization layers which consist of two operations , standardization and transformation . This setting captures most common normalizations ( Ioffe & Szegedy , 2015 ; Ba et al. , 2016 ; Ulyanov et al. , 2016 ) . Let a 2 RB⇥S⇥D be the activation before the normalization operation , where B is the size of the batch , S is the number of samples in a set ( the tensor is zero-padded to the largest sample size ) , and D is the feature dimension . First , the activations are standardized based on a setting L which defines which dimensions utilize separate statistics . For instance , L = { B , S } denotes that each set in a batch and each sample in a set gets its own mean and standard deviation for standardization , which are repeated over the D dimension so that µL ( a ) , L ( a ) 2 RB⇥S⇥D match the original activation tensor for elementwise subtraction and division . A standardization operation can be defined as : ¯aL = a µL ( a ) L ( a ) . ( 2 ) Next , the standardized activations are transformed through learned parameters which differ only over a setting of dimensions M . For instance , M = { D } denotes that each feature is transformed by a different scale and bias , which are shared across the sets in the batch and samples in the sets . Let ! M , ! M 2 RB⇥S⇥D denote the learned parameters and represent elementwise multiplication , any transformation operation can be defined as : ˆaM = ¯a ! M + ! M . ( 3 ) We can now summarize a wide array of normalization possibilities simply via choice of L and M . 3.1 NORMALIZATION DESIDERATA . We formally introduce the desiderata for a general-purpose normalization layer for prediction on sets . In particular : ( i ) The normalization must be permutation equivariant . ( ii ) The computation of the statistics for standardization ( µL ( a ) , L ( a ) ) should not be affected by the presence of sets of varying sizes . ( iii ) The normalization should not deteriorate performance due to removal of information that is useful for prediction . Item ( ii ) is meant to address the concern that real-world set inputs are often not all the same size , e.g . not all point cloud objects are made up of the same number of points , and not all patients will have the same number of cells in a blood sample . If the presence of different set sizes forces a choice for an application-specific implementation of a normalization layer , then such a normalization is not general-purpose . Item ( iii ) refers to the fact that the standardization operation in a normalization layer incurs a loss of information , as activations which were once of varying means and variances across certain dimensions are now forced to be zero mean and unit variance . For real-valued sets in particular , this can greatly limit predictive performance , e.g . if the removed means and variances would help in the output prediction . 
As an example , the mean cell volume of a blood sample can be indicative of patient health , and a normalization layer that makes it impossible to distinguish between patients ’ mean cell volumes will restrict performance across a wide variety of health outcomes predictions . 3.2 THE ONLY SUITABLE TRANSFORMATION FOR SETS IS ON THE FEATURE DIMENSION .. Only normalization layers adopting a per feature transformation , i.e . M = { D } , meet the permutation equivariance desiderata ( i ) , as per sample transformations break permutation equivariance and per set transformations cause the prediction output to change when an element ’ s position in the batch changes ( see Appendix B for proofs ) . Proposition 1 . Per feature transformation is the only setting which maintains permutation equivariance and prediction behavior which is agnostic to batch position . Thus , to systematically compare normalizations , we only need to consider the settings for L . 3.3 EXISTING NORMALIZATIONS DO NOT MEET ALL DESIDERATA .. We now consider possible settings for standardization : { } , { B } , { S } , { D } , { B , S } , { B , D } , { S , D } . We first note that L = { S } and L = { S , D } destroy permutation equivariance ( see Appendix B , for proof ) . Of the remaining settings , we note that two already exist in the literature . The first , which we call feature norm , is used in the permutation invariant PointNet and PointNet++ architectures for point cloud classification ( Qi et al. , 2017a ; b ) and performs a per feature standardization followed by a per feature transformation , i.e . L = { D } , M = { D } ( see Figure 2 ( i ) ) . The second is layer norm , used in Set Transformer , which applies standardization individually to each set and each sample and transformation separately per feature , i.e . L = { B , S } , M = { D } ( see Figure 2 ( ii ) ) . Both these normalization layers have drawbacks . Feature norm calculates statistics over the batch and consequently may exhibit batch effects : sets in the batch are no longer independent , and performance can be heavily impacted by batch size ( Hoffer et al. , 2017 ) . Additionally , like in batch norm , in feature norm sets are standardized differently at training and test time.1 Moreover , in the context of sets of varying sizes , each possible implementation of feature norm ( e.g . computing statistics over the batch , averaging statistics computed on each set separately ) will lead to each set ( or samples in the set ) having a different impact in the computation of the statistics . Therefore , the use of feature norm requires careful consideration , especially in applications having as input real-valued sets with wide ranges of both set sizes and first moments . A similar property holds for normalizations which calculate statistics over the batch ( B 62 L ) such as L = { } . Layer norm , on the other hand , removes information which may be particularly useful for set prediction tasks . By performing standardization on each sample in each set separately , layer norm maps two samples whose activations differ in only a scale and bias to the same output . The inability to distinguish such samples after layer norm can hurt prediction , as we see in Section 5.3 . Proposition 2 . Applying layer norm in its most common placement ( after linear projection , before non-linearity ) removes mean and variance information from each sample . 
The use of layer norm can present a representation issue similar to the oversmoothing problem in Graph Convolutional Networks ( Li et al. , 2018 ) , where nodes ( samples ) get mapped to the same value as the number of layers in the network increase and neighborhoods of nodes become similar in representation . 1Feature norm is implemented as batch norm on a transposed tensor . Thus , test batches are standardized by a running mean and variance while training batches are standardized by their own batch statistics .",
"role": "user"
},
{
"content": "The paper has four main contributions: 1. introduces the _set norm_ normalization layer in neural network models for set data, as opposed to feature normalization (aka. batch-norm) and layer normalization layers; 2. provides intuition behind a \"cleaner\" implementation of residual connections; 3. implements set norm and the cleaner residual connections in modified versions of existing Deep Sets and Set Transformers models; 4. introduces a new dataset called Flow-RBC with over 100,000 examples. Each \"example\" consists of a $(X,y)$ pair, where $X=\\\\{x_1, \\dotsc, x_{1000}\\\\}$ is a set of measurements on 1000 red-blood cells from a patient, and $y$ is a hematocrit level label. Each $x_i$ consists of both volume and hemoglobin mass measurements. This dataset is significantly larger than similar existing datasets.",
"role": "assistant"
}
]
|
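The normalization discussion in the record above reduces each candidate layer to a choice of standardization dimensions L ( Equation 2 ) and transformation dimensions M ( Equation 3 ). The sketch below is one possible reading of that framework for an activation tensor of shape (B, S, D); the axis bookkeeping, the epsilon, and the layer-norm and feature-norm instantiations are assumptions for illustration, not the paper's reference implementation.

```python
# Hypothetical sketch of the standardize-then-transform framework (Eqs. 2-3 above).
# L names the dimensions that keep their own statistics; the statistics are computed
# over every other dimension. The transform here is per feature (M = {D}).
import numpy as np

def normalize(a, L, gamma, beta, eps=1e-5):
    dims = {"B": 0, "S": 1, "D": 2}
    reduce_axes = tuple(ax for name, ax in dims.items() if name not in L)  # dims that share stats
    mu = a.mean(axis=reduce_axes, keepdims=True)
    sigma = a.std(axis=reduce_axes, keepdims=True)
    a_bar = (a - mu) / (sigma + eps)        # Eq. (2): standardization
    return a_bar * gamma + beta             # Eq. (3): per-feature scale and bias

B, S, D = 2, 5, 3
rng = np.random.default_rng(1)
a = rng.normal(size=(B, S, D))
gamma, beta = np.ones((1, 1, D)), np.zeros((1, 1, D))   # learned in practice

layer_norm_out = normalize(a, L={"B", "S"}, gamma=gamma, beta=beta)  # own stats per set and sample
feature_norm_out = normalize(a, L={"D"}, gamma=gamma, beta=beta)     # own stats per feature, over the batch
print(layer_norm_out.shape, feature_norm_out.shape)
```

The other settings listed in the record, such as L = { }, { B } or { B , D }, drop out of the same function by changing the L argument.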
science.multicite_intent_classification | Examine the text below, which includes a citation marked by "<cite>" and "</cite>". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.
Provide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.
Text with Citation:
In this work, we propose to address both these issues by learning simple dependency link embeddings on 'head-argument' pairs (as a single concatenated unit), which allows us to work directly with linguistically-intuitive, higher-order substructures, and also fire significantly fewer and simpler features in dependency parsing, as opposed to word cluster and embedding features in previous work (Koo et al., 2008; <cite>Bansal et al., 2014</cite>) , while still maintaining <cite>their</cite> strong accuracies. | ["Motivation", "Differences"] | science.multicite_intent_classification.1944 | [
{
"content": "Examine the text below, which includes a citation marked by \"<cite>\" and \"</cite>\". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.\n\nProvide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.\n\nText with Citation:\nIn this work, we propose to address both these issues by learning simple dependency link embeddings on 'head-argument' pairs (as a single concatenated unit), which allows us to work directly with linguistically-intuitive, higher-order substructures, and also fire significantly fewer and simpler features in dependency parsing, as opposed to word cluster and embedding features in previous work (Koo et al., 2008; <cite>Bansal et al., 2014</cite>) , while still maintaining <cite>their</cite> strong accuracies.",
"role": "user"
},
{
"content": "[\"Motivation\", \"Differences\"]",
"role": "assistant"
}
]
|
science.qasper_abstractive_qa | Examine the provided sections from a research document and address the accompanying query. The text is divided by newlines.
Document Content:
Trouble on the Horizon: Forecasting the Derailment of Online Conversations as they Develop
Online discussions often derail into toxic exchanges between participants. Recent efforts mostly focused on detecting antisocial behavior after the fact, by analyzing single comments in isolation. To provide more timely notice to human moderators, a system needs to preemptively detect that a conversation is heading towards derailment before it actually turns toxic. This means modeling derailment as an emerging property of a conversation rather than as an isolated utterance-level event. ::: Forecasting emerging conversational properties, however, poses several inherent modeling challenges. First, since conversations are dynamic, a forecasting model needs to capture the flow of the discussion, rather than properties of individual comments. Second, real conversations have an unknown horizon: they can end or derail at any time; thus a practical forecasting model needs to assess the risk in an online fashion, as the conversation develops. In this work we introduce a conversational forecasting model that learns an unsupervised representation of conversational dynamics and exploits it to predict future derailment as the conversation develops. By applying this model to two new diverse datasets of online conversations with labels for antisocial events, we show that it outperforms state-of-the-art systems at forecasting derailment.
Introduction
“Ché saetta previsa vien più lenta.”
– Dante Alighieri, Divina Commedia, Paradiso
Antisocial behavior is a persistent problem plaguing online conversation platforms; it is both widespread BIBREF0 and potentially damaging to mental and emotional health BIBREF1, BIBREF2. The strain this phenomenon puts on community maintainers has sparked recent interest in computational approaches for assisting human moderators.
Prior work in this direction has largely focused on post-hoc identification of various kinds of antisocial behavior, including hate speech BIBREF3, BIBREF4, harassment BIBREF5, personal attacks BIBREF6, and general toxicity BIBREF7. The fact that these approaches only identify antisocial content after the fact limits their practicality as tools for assisting pre-emptive moderation in conversational domains.
Addressing this limitation requires forecasting the future derailment of a conversation based on early warning signs, giving the moderators time to potentially intervene before any harm is done (BIBREF8 BIBREF8, BIBREF9 BIBREF9, see BIBREF10 BIBREF10 for a discussion). Such a goal recognizes derailment as emerging from the development of the conversation, and belongs to the broader area of conversational forecasting, which includes future-prediction tasks such as predicting the eventual length of a conversation BIBREF11, whether a persuasion attempt will eventually succeed BIBREF12, BIBREF13, BIBREF14, whether team discussions will eventually lead to an increase in performance BIBREF15, or whether ongoing counseling conversations will eventually be perceived as helpful BIBREF16.
Approaching such conversational forecasting problems, however, requires overcoming several inherent modeling challenges. First, conversations are dynamic and their outcome might depend on how subsequent comments interact with each other. Consider the example in Figure FIGREF2: while no individual comment is outright offensive, a human reader can sense a tension emerging from their succession (e.g., dismissive answers to repeated questioning). Thus a forecasting model needs to capture not only the content of each individual comment, but also the relations between comments. Previous work has largely relied on hand-crafted features to capture such relations—e.g., similarity between comments BIBREF16, BIBREF12 or conversation structure BIBREF17, BIBREF18—, though neural attention architectures have also recently shown promise BIBREF19.
The second modeling challenge stems from the fact that conversations have an unknown horizon: they can be of varying lengths, and the to-be-forecasted event can occur at any time. So when is it a good time to make a forecast? Prior work has largely proposed two solutions, both resulting in important practical limitations. One solution is to assume (unrealistic) prior knowledge of when the to-be-forecasted event takes place and extract features up to that point BIBREF20, BIBREF8. Another compromising solution is to extract features from a fixed-length window, often at the start of the conversation BIBREF21, BIBREF15, BIBREF16, BIBREF9. Choosing a catch-all window-size is however impractical: short windows will miss information in comments they do not encompass (e.g., a window of only two comments would miss the chain of repeated questioning in comments 3 through 6 of Figure FIGREF2), while longer windows risk missing the to-be-forecasted event altogether if it occurs before the end of the window, which would prevent early detection.
In this work we introduce a model for forecasting conversational events that overcomes both these inherent challenges by processing comments, and their relations, as they happen (i.e., in an online fashion). Our main insight is that models with these properties already exist, albeit geared toward generation rather than prediction: recent work in context-aware dialog generation (or “chatbots”) has proposed sequential neural models that make effective use of the intra-conversational dynamics BIBREF22, BIBREF23, BIBREF24, while concomitantly being able to process the conversation as it develops (see BIBREF25 for a survey).
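As a rough illustration of the online processing just described, the toy loop below re-scores the conversation after every new comment and raises an alert once the estimated risk crosses a threshold, which is what would give moderators advance notice. The averaged-embedding comment encoder, the GRU state update, and the 0.8 threshold are placeholders invented for this sketch; they are not the components of the model introduced in this paper.

```python
# Toy online forecasting loop: update a conversation state per comment, emit a risk score,
# and stop early once the score exceeds a (placeholder) alert threshold.
import torch
import torch.nn as nn

torch.manual_seed(0)
VOCAB, DIM = 1000, 64
embed = nn.EmbeddingBag(VOCAB, DIM)      # crude comment encoder: averaged word embeddings (assumed)
state_cell = nn.GRUCell(DIM, DIM)        # carries the conversation state across comments
risk_head = nn.Linear(DIM, 1)            # maps the state to a derailment risk
ALERT_THRESHOLD = 0.8                    # assumed operating point

def forecast(conversation):
    """conversation: list of comments, each a list of word ids. Returns (risk_history, alert_index)."""
    state = torch.zeros(1, DIM)
    risks = []
    for t, token_ids in enumerate(conversation):
        comment_vec = embed(torch.tensor([token_ids]))   # (1, DIM)
        state = state_cell(comment_vec, state)           # online state update
        risk = torch.sigmoid(risk_head(state)).item()
        risks.append(risk)
        if risk > ALERT_THRESHOLD:
            return risks, t                              # early warning fires here
    return risks, None

toy_conversation = [[5, 42, 7], [99, 3], [123, 456, 789, 10]]
print(forecast(toy_conversation))
```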
In order for these systems to perform well in the generative domain they need to be trained on massive amounts of (unlabeled) conversational data. The main difficulty in directly adapting these models to the supervised domain of conversational forecasting is the relative scarcity of labeled data: for most forecasting tasks, at most a few thousands labeled examples are available, insufficient for the notoriously data-hungry sequential neural models.
To overcome this difficulty, we propose to decouple the objective of learning a neural representation of conversational dynamics from the objective of predicting future events. The former can be pre-trained on large amounts of unsupervised data, similarly to how chatbots are trained. The latter can piggy-back on the resulting representation after fine-tuning it for classification using relatively small labeled data. While similar pre-train-then-fine-tune approaches have recently achieved state-of-the-art performance in a number of NLP tasks—including natural language inference, question answering, and commonsense reasoning (discussed in Section SECREF2)—to the best of our knowledge this is the first attempt at applying this paradigm to conversational forecasting.
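A schematic of that decoupling might look as follows: an encoder is first fitted on plentiful unlabeled conversations with a generic unsupervised objective, and its weights are then reused and fine-tuned together with a small classification head on the scarce labeled forecasting data. The next-comment-vector objective, the dimensions, and the synthetic tensors are stand-ins for illustration only; the pre-training in the paper itself follows the dialog models it cites.

```python
# Two-stage sketch: (1) unsupervised pre-training of a conversation encoder,
# (2) supervised fine-tuning of the same encoder plus a small derailment head.
# Objectives and data below are synthetic placeholders.
import torch
import torch.nn as nn

torch.manual_seed(0)
DIM = 32
encoder = nn.GRU(DIM, DIM, batch_first=True)      # conversational encoder

# Stage 1: pre-train on unlabeled conversations (toy next-comment-vector objective).
predict_next = nn.Linear(DIM, DIM)
opt = torch.optim.Adam(list(encoder.parameters()) + list(predict_next.parameters()), lr=1e-3)
unlabeled = torch.randn(256, 10, DIM)             # 256 conversations, 10 comment vectors each
for _ in range(5):
    states, _ = encoder(unlabeled)                                    # (256, 10, DIM)
    loss = nn.functional.mse_loss(predict_next(states[:, :-1]), unlabeled[:, 1:])
    opt.zero_grad(); loss.backward(); opt.step()

# Stage 2: fine-tune on a small labeled set for the forecasting task.
classifier = nn.Linear(DIM, 1)                    # derailment head on the latest state
opt2 = torch.optim.Adam(list(encoder.parameters()) + list(classifier.parameters()), lr=1e-4)
labeled = torch.randn(64, 10, DIM)                # scarce labeled conversations
labels = torch.randint(0, 2, (64, 1)).float()     # 1 = conversation eventually derails
for _ in range(5):
    states, _ = encoder(labeled)
    logits = classifier(states[:, -1])            # forecast from the current conversation state
    loss = nn.functional.binary_cross_entropy_with_logits(logits, labels)
    opt2.zero_grad(); loss.backward(); opt2.step()
print("final supervised loss:", float(loss))
```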
To test the effectiveness of this new architecture in forecasting derailment of online conversations, we develop and distribute two new datasets. The first triples in size the highly curated `Conversations Gone Awry' dataset BIBREF9, where civil-starting Wikipedia Talk Page conversations are crowd-labeled according to whether they eventually lead to personal attacks; the second relies on in-the-wild moderation of the popular subreddit ChangeMyView, where the aim is to forecast whether a discussion will later be subject to moderator action due to “rude or hostile” behavior. In both datasets, our model outperforms existing fixed-window approaches, as well as simpler sequential baselines that cannot account for inter-comment relations. Furthermore, by virtue of its online processing of the conversation, our system can provide substantial prior notice of upcoming derailment, triggering on average 3 comments (or 3 hours) before an overtly toxic comment is posted.
To summarize, in this work we:
introduce the first model for forecasting conversational events that can capture the dynamics of a conversation as it develops;
build two diverse datasets (one entirely new, one extending prior work) for the task of forecasting derailment of online conversations;
compare the performance of our model against the current state-of-the-art, and evaluate its ability to provide early warning signs.
Our work is motivated by the goal of assisting human moderators of online communities by preemptively signaling at-risk conversations that might deserve their attention. However, we caution that any automated systems might encode or even amplify the biases existing in the training data BIBREF26, BIBREF27, BIBREF28, so a public-facing implementation would need to be exhaustively scrutinized for such biases BIBREF29.
Derailment Datasets
We consider two datasets, representing related but slightly different forecasting tasks. The first dataset is an expanded version of the annotated Wikipedia conversations dataset from BIBREF9. This dataset uses carefully-controlled crowdsourced labels, strictly filtered to ensure the conversations are civil up to the moment of a personal attack. This is a useful property for the purposes of model analysis, and hence we focus on this as our primary dataset. However, we are conscious of the possibility that these strict labels may not fully capture the kind of behavior that moderators care about in practice. We therefore introduce a secondary dataset, constructed from the subreddit ChangeMyView (CMV) that does not use post-hoc annotations. Instead, the prediction task is to forecast whether the conversation will be subject to moderator action in the future.
Wikipedia data. BIBREF9's `Conversations Gone Awry' dataset consists of 1,270 conversations that took place between Wikipedia editors on publicly accessible talk pages. The conversations are sourced from the WikiConv dataset BIBREF59 and labeled by crowdworkers as either containing a personal attack from within (i.e., hostile behavior by one user in the conversation directed towards another) or remaining civil throughout.
A series of controls are implemented to prevent models from picking up on trivial correlations. To prevent models from capturing topic-specific information (e.g., political conversations are more likely to derail), each attack-containing conversation is paired with a clean conversation from the same talk page, where the talk page serves as a proxy for topic. To force models to actually capture conversational dynamics rather than detecting already-existing toxicity, human annotations are used to ensure that all comments preceding a personal attack are civil.
To the end of more effective model training, we elected to expand the `Conversations Gone Awry' dataset, using the original annotation procedure. Since we found that the original data skewed towards shorter conversations, we focused this crowdsourcing run on longer conversations: ones with 4 or more comments preceding the attack. Through this additional crowdsourcing, we expanded the dataset to 4,188 conversations, which we are publicly releasing as part of the Cornell Conversational Analysis Toolkit (ConvoKit).
We perform an 80-20-20 train/dev/test split, ensuring that paired conversations end up in the same split in order to preserve the topic control. Finally, we randomly sample another 1 million conversations from WikiConv to use for the unsupervised pre-training of the generative component.
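A pair-preserving split like the one described above can be implemented by splitting at the level of conversation pairs and only then flattening, so that an attack-containing conversation and its paired clean conversation from the same talk page always land in the same partition. The helper below is a hypothetical sketch; the record reports an 80-20-20 split, and the dev/test fractions here are adjustable placeholders.

```python
# Sketch of a pair-preserving train/dev/test split: shuffle and split pairs, then flatten,
# so both members of every topic-controlled pair stay in the same partition.
import random

def pair_preserving_split(pairs, seed=0, dev_frac=0.1, test_frac=0.1):
    """pairs: list of (awry_conversation_id, clean_conversation_id) tuples (assumed format)."""
    pairs = list(pairs)
    random.Random(seed).shuffle(pairs)
    n_test = int(len(pairs) * test_frac)
    n_dev = int(len(pairs) * dev_frac)
    test, dev, train = pairs[:n_test], pairs[n_test:n_test + n_dev], pairs[n_test + n_dev:]
    flatten = lambda split: [conv_id for pair in split for conv_id in pair]
    return flatten(train), flatten(dev), flatten(test)

example_pairs = [(f"awry_{i}", f"clean_{i}") for i in range(10)]
train_ids, dev_ids, test_ids = pair_preserving_split(example_pairs)
print(len(train_ids), len(dev_ids), len(test_ids))
```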
Reddit CMV data. The CMV dataset is constructed from conversations collected via the Reddit API. In contrast to the Wikipedia-based dataset, we explicitly avoid the use of post-hoc annotation. Instead, we use as our label whether a conversation eventually had a comment removed by a moderator for violation of Rule 2: “Don't be rude or hostile to other users”.
Though the lack of post-hoc annotation limits the degree to which we can impose controls on the data (e.g., some conversations may contain toxic comments not flagged by the moderators) we do reproduce as many of the Wikipedia data's controls as we can. Namely, we replicate the topic control pairing by choosing pairs of positive and negative examples that belong to the same top-level post, following BIBREF12; and enforce that the removed comment was made by a user who was previously involved in the conversation. This process results in 6,842 conversations, to which we again apply a pair-preserving 80-20-20 split. Finally, we gather over 600,000 conversations that do not include any removed comment, for unsupervised pre-training.
Figure 3: Precision-recall curves and the area under each curve. To reduce clutter, we show only the curves for Wikipedia data (CMV curves are similar) and exclude the fixed-length window baselines (which perform worse).
Query: What are two datasets model is applied to?
JSON Response: | {"answer": "An expanded version of the existing \u0027Conversations Gone Awry\u0027 dataset and the ChangeMyView dataset, a subreddit whose only annotation is whether the conversation required action by the Reddit moderators. ", "evidence": ["We consider two datasets, representing related but slightly different forecasting tasks. The first dataset is an expanded version of the annotated Wikipedia conversations dataset from BIBREF9. This dataset uses carefully-controlled crowdsourced labels, strictly filtered to ensure the conversations are civil up to the moment of a personal attack. This is a useful property for the purposes of model analysis, and hence we focus on this as our primary dataset. However, we are conscious of the possibility that these strict labels may not fully capture the kind of behavior that moderators care about in practice. We therefore introduce a secondary dataset, constructed from the subreddit ChangeMyView (CMV) that does not use post-hoc annotations. Instead, the prediction task is to forecast whether the conversation will be subject to moderator action in the future.", "To test the effectiveness of this new architecture in forecasting derailment of online conversations, we develop and distribute two new datasets. The first triples in size the highly curated `Conversations Gone Awry\u0027 dataset BIBREF9, where civil-starting Wikipedia Talk Page conversations are crowd-labeled according to whether they eventually lead to personal attacks; the second relies on in-the-wild moderation of the popular subreddit ChangeMyView, where the aim is to forecast whether a discussion will later be subject to moderator action due to \u201crude or hostile\u201d behavior. In both datasets, our model outperforms existing fixed-window approaches, as well as simpler sequential baselines that cannot account for inter-comment relations. Furthermore, by virtue of its online processing of the conversation, our system can provide substantial prior notice of upcoming derailment, triggering on average 3 comments (or 3 hours) before an overtly toxic comment is posted."]} | science.qasper_abstractive_qa.226.1 | [
{
"content": "Examine the provided sections from a research document and address the accompanying query. The text is divided by newlines.\n\nDocument Content:\n\nTrouble on the Horizon: Forecasting the Derailment of Online Conversations as they Develop\n\nOnline discussions often derail into toxic exchanges between participants. Recent efforts mostly focused on detecting antisocial behavior after the fact, by analyzing single comments in isolation. To provide more timely notice to human moderators, a system needs to preemptively detect that a conversation is heading towards derailment before it actually turns toxic. This means modeling derailment as an emerging property of a conversation rather than as an isolated utterance-level event. ::: Forecasting emerging conversational properties, however, poses several inherent modeling challenges. First, since conversations are dynamic, a forecasting model needs to capture the flow of the discussion, rather than properties of individual comments. Second, real conversations have an unknown horizon: they can end or derail at any time; thus a practical forecasting model needs to assess the risk in an online fashion, as the conversation develops. In this work we introduce a conversational forecasting model that learns an unsupervised representation of conversational dynamics and exploits it to predict future derailment as the conversation develops. By applying this model to two new diverse datasets of online conversations with labels for antisocial events, we show that it outperforms state-of-the-art systems at forecasting derailment.\n\nIntroduction\n“Ché saetta previsa vien più lenta.”\n– Dante Alighieri, Divina Commedia, Paradiso\nAntisocial behavior is a persistent problem plaguing online conversation platforms; it is both widespread BIBREF0 and potentially damaging to mental and emotional health BIBREF1, BIBREF2. The strain this phenomenon puts on community maintainers has sparked recent interest in computational approaches for assisting human moderators.\nPrior work in this direction has largely focused on post-hoc identification of various kinds of antisocial behavior, including hate speech BIBREF3, BIBREF4, harassment BIBREF5, personal attacks BIBREF6, and general toxicity BIBREF7. The fact that these approaches only identify antisocial content after the fact limits their practicality as tools for assisting pre-emptive moderation in conversational domains.\nAddressing this limitation requires forecasting the future derailment of a conversation based on early warning signs, giving the moderators time to potentially intervene before any harm is done (BIBREF8 BIBREF8, BIBREF9 BIBREF9, see BIBREF10 BIBREF10 for a discussion). Such a goal recognizes derailment as emerging from the development of the conversation, and belongs to the broader area of conversational forecasting, which includes future-prediction tasks such as predicting the eventual length of a conversation BIBREF11, whether a persuasion attempt will eventually succeed BIBREF12, BIBREF13, BIBREF14, whether team discussions will eventually lead to an increase in performance BIBREF15, or whether ongoing counseling conversations will eventually be perceived as helpful BIBREF16.\nApproaching such conversational forecasting problems, however, requires overcoming several inherent modeling challenges. First, conversations are dynamic and their outcome might depend on how subsequent comments interact with each other. 
Consider the example in Figure FIGREF2: while no individual comment is outright offensive, a human reader can sense a tension emerging from their succession (e.g., dismissive answers to repeated questioning). Thus a forecasting model needs to capture not only the content of each individual comment, but also the relations between comments. Previous work has largely relied on hand-crafted features to capture such relations—e.g., similarity between comments BIBREF16, BIBREF12 or conversation structure BIBREF17, BIBREF18—, though neural attention architectures have also recently shown promise BIBREF19.\nThe second modeling challenge stems from the fact that conversations have an unknown horizon: they can be of varying lengths, and the to-be-forecasted event can occur at any time. So when is it a good time to make a forecast? Prior work has largely proposed two solutions, both resulting in important practical limitations. One solution is to assume (unrealistic) prior knowledge of when the to-be-forecasted event takes place and extract features up to that point BIBREF20, BIBREF8. Another compromising solution is to extract features from a fixed-length window, often at the start of the conversation BIBREF21, BIBREF15, BIBREF16, BIBREF9. Choosing a catch-all window-size is however impractical: short windows will miss information in comments they do not encompass (e.g., a window of only two comments would miss the chain of repeated questioning in comments 3 through 6 of Figure FIGREF2), while longer windows risk missing the to-be-forecasted event altogether if it occurs before the end of the window, which would prevent early detection.\nIn this work we introduce a model for forecasting conversational events that overcomes both these inherent challenges by processing comments, and their relations, as they happen (i.e., in an online fashion). Our main insight is that models with these properties already exist, albeit geared toward generation rather than prediction: recent work in context-aware dialog generation (or “chatbots”) has proposed sequential neural models that make effective use of the intra-conversational dynamics BIBREF22, BIBREF23, BIBREF24, while concomitantly being able to process the conversation as it develops (see BIBREF25 for a survey).\nIn order for these systems to perform well in the generative domain they need to be trained on massive amounts of (unlabeled) conversational data. The main difficulty in directly adapting these models to the supervised domain of conversational forecasting is the relative scarcity of labeled data: for most forecasting tasks, at most a few thousands labeled examples are available, insufficient for the notoriously data-hungry sequential neural models.\nTo overcome this difficulty, we propose to decouple the objective of learning a neural representation of conversational dynamics from the objective of predicting future events. The former can be pre-trained on large amounts of unsupervised data, similarly to how chatbots are trained. The latter can piggy-back on the resulting representation after fine-tuning it for classification using relatively small labeled data. 
While similar pre-train-then-fine-tune approaches have recently achieved state-of-the-art performance in a number of NLP tasks—including natural language inference, question answering, and commonsense reasoning (discussed in Section SECREF2)—to the best of our knowledge this is the first attempt at applying this paradigm to conversational forecasting.\nTo test the effectiveness of this new architecture in forecasting derailment of online conversations, we develop and distribute two new datasets. The first triples in size the highly curated `Conversations Gone Awry' dataset BIBREF9, where civil-starting Wikipedia Talk Page conversations are crowd-labeled according to whether they eventually lead to personal attacks; the second relies on in-the-wild moderation of the popular subreddit ChangeMyView, where the aim is to forecast whether a discussion will later be subject to moderator action due to “rude or hostile” behavior. In both datasets, our model outperforms existing fixed-window approaches, as well as simpler sequential baselines that cannot account for inter-comment relations. Furthermore, by virtue of its online processing of the conversation, our system can provide substantial prior notice of upcoming derailment, triggering on average 3 comments (or 3 hours) before an overtly toxic comment is posted.\nTo summarize, in this work we:\nintroduce the first model for forecasting conversational events that can capture the dynamics of a conversation as it develops;\nbuild two diverse datasets (one entirely new, one extending prior work) for the task of forecasting derailment of online conversations;\ncompare the performance of our model against the current state-of-the-art, and evaluate its ability to provide early warning signs.\nOur work is motivated by the goal of assisting human moderators of online communities by preemptively signaling at-risk conversations that might deserve their attention. However, we caution that any automated systems might encode or even amplify the biases existing in the training data BIBREF26, BIBREF27, BIBREF28, so a public-facing implementation would need to be exhaustively scrutinized for such biases BIBREF29.\n\nDerailment Datasets\nWe consider two datasets, representing related but slightly different forecasting tasks. The first dataset is an expanded version of the annotated Wikipedia conversations dataset from BIBREF9. This dataset uses carefully-controlled crowdsourced labels, strictly filtered to ensure the conversations are civil up to the moment of a personal attack. This is a useful property for the purposes of model analysis, and hence we focus on this as our primary dataset. However, we are conscious of the possibility that these strict labels may not fully capture the kind of behavior that moderators care about in practice. We therefore introduce a secondary dataset, constructed from the subreddit ChangeMyView (CMV) that does not use post-hoc annotations. Instead, the prediction task is to forecast whether the conversation will be subject to moderator action in the future.\nWikipedia data. BIBREF9's `Conversations Gone Awry' dataset consists of 1,270 conversations that took place between Wikipedia editors on publicly accessible talk pages. 
The conversations are sourced from the WikiConv dataset BIBREF59 and labeled by crowdworkers as either containing a personal attack from within (i.e., hostile behavior by one user in the conversation directed towards another) or remaining civil throughout.\nA series of controls are implemented to prevent models from picking up on trivial correlations. To prevent models from capturing topic-specific information (e.g., political conversations are more likely to derail), each attack-containing conversation is paired with a clean conversation from the same talk page, where the talk page serves as a proxy for topic. To force models to actually capture conversational dynamics rather than detecting already-existing toxicity, human annotations are used to ensure that all comments preceding a personal attack are civil.\nTo the ends of more effective model training, we elected to expand the `Conversations Gone Awry' dataset, using the original annotation procedure. Since we found that the original data skewed towards shorter conversations, we focused this crowdsourcing run on longer conversations: ones with 4 or more comments preceding the attack. Through this additional crowdsourcing, we expand the dataset to 4,188 conversations, which we are publicly releasing as part of the Cornell Conversational Analysis Toolkit (ConvoKit).\nWe perform an 80-20-20 train/dev/test split, ensuring that paired conversations end up in the same split in order to preserve the topic control. Finally, we randomly sample another 1 million conversations from WikiConv to use for the unsupervised pre-training of the generative component.\nReddit CMV data. The CMV dataset is constructed from conversations collected via the Reddit API. In contrast to the Wikipedia-based dataset, we explicitly avoid the use of post-hoc annotation. Instead, we use as our label whether a conversation eventually had a comment removed by a moderator for violation of Rule 2: “Don't be rude or hostile to other users”.\nThough the lack of post-hoc annotation limits the degree to which we can impose controls on the data (e.g., some conversations may contain toxic comments not flagged by the moderators) we do reproduce as many of the Wikipedia data's controls as we can. Namely, we replicate the topic control pairing by choosing pairs of positive and negative examples that belong to the same top-level post, following BIBREF12; and enforce that the removed comment was made by a user who was previously involved in the conversation. This process results in 6,842 conversations, to which we again apply a pair-preserving 80-20-20 split. Finally, we gather over 600,000 conversations that do not include any removed comment, for unsupervised pre-training.\n\nFigure 3: Precision-recall curves and the area under each curve. To reduce clutter, we show only the curves for Wikipedia data (CMV curves are similar) and exclude the fixed-length window baselines (which perform worse).\n\nQuery: What are two datasets model is applied to?\n\nJSON Response:",
"role": "user"
},
{
"content": "{\"answer\": \"An expanded version of the existing \\u0027Conversations Gone Awry\\u0027 dataset and the ChangeMyView dataset, a subreddit whose only annotation is whether the conversation required action by the Reddit moderators. \", \"evidence\": [\"We consider two datasets, representing related but slightly different forecasting tasks. The first dataset is an expanded version of the annotated Wikipedia conversations dataset from BIBREF9. This dataset uses carefully-controlled crowdsourced labels, strictly filtered to ensure the conversations are civil up to the moment of a personal attack. This is a useful property for the purposes of model analysis, and hence we focus on this as our primary dataset. However, we are conscious of the possibility that these strict labels may not fully capture the kind of behavior that moderators care about in practice. We therefore introduce a secondary dataset, constructed from the subreddit ChangeMyView (CMV) that does not use post-hoc annotations. Instead, the prediction task is to forecast whether the conversation will be subject to moderator action in the future.\", \"To test the effectiveness of this new architecture in forecasting derailment of online conversations, we develop and distribute two new datasets. The first triples in size the highly curated `Conversations Gone Awry\\u0027 dataset BIBREF9, where civil-starting Wikipedia Talk Page conversations are crowd-labeled according to whether they eventually lead to personal attacks; the second relies on in-the-wild moderation of the popular subreddit ChangeMyView, where the aim is to forecast whether a discussion will later be subject to moderator action due to \\u201crude or hostile\\u201d behavior. In both datasets, our model outperforms existing fixed-window approaches, as well as simpler sequential baselines that cannot account for inter-comment relations. Furthermore, by virtue of its online processing of the conversation, our system can provide substantial prior notice of upcoming derailment, triggering on average 3 comments (or 3 hours) before an overtly toxic comment is posted.\"]}",
"role": "assistant"
}
]
|
science.scifact_entailment | Determine the relationship between them and generate a JSON response. Scientific Claim:
Individuals with Alzheimers who participate in six months of physical activity improve cognitive function for up to 18 months.
Research Abstract:
CONTEXT Many observational studies have shown that physical activity reduces the risk of cognitive decline; however, evidence from randomized trials is lacking.
OBJECTIVE To determine whether physical activity reduces the rate of cognitive decline among older adults at risk.
DESIGN AND SETTING Randomized controlled trial of a 24-week physical activity intervention conducted between 2004 and 2007 in metropolitan Perth, Western Australia.
Assessors of cognitive function were blinded to group membership.
PARTICIPANTS We recruited volunteers who reported memory problems but did not meet criteria for dementia.
Three hundred eleven individuals aged 50 years or older were screened for eligibility, 89 were not eligible, and 52 refused to participate.
A total of 170 participants were randomized and 138 participants completed the 18-month assessment.
INTERVENTION Participants were randomly allocated to an education and usual care group or to a 24-week home-based program of physical activity.
MAIN OUTCOME MEASURE Change in Alzheimer Disease Assessment Scale-Cognitive Subscale (ADAS-Cog) scores (possible range, 0-70) over 18 months.
RESULTS In an intent-to-treat analysis, participants in the intervention group improved 0.26 points (95% confidence interval, -0.89 to 0.54) and those in the usual care group deteriorated 1.04 points (95% confidence interval, 0.32 to 1.82) on the ADAS-Cog at the end of the intervention.
The absolute difference of the outcome measure between the intervention and control groups was -1.3 points (95% confidence interval,-2.38 to -0.22) at the end of the intervention.
At 18 months, participants in the intervention group improved 0.73 points (95% confidence interval, -1.27 to 0.03) on the ADAS-Cog, and those in the usual care group improved 0.04 points (95% confidence interval, -0.46 to 0.88).
Word list delayed recall and Clinical Dementia Rating sum of boxes improved modestly as well, whereas word list total immediate recall, digit symbol coding, verbal fluency, Beck depression score, and Medical Outcomes 36-Item Short-Form physical and mental component summaries did not change significantly.
CONCLUSIONS In this study of adults with subjective memory impairment, a 6-month program of physical activity provided a modest improvement in cognition over an 18-month follow-up period.
TRIAL REGISTRATION anzctr.org.au Identifier: ACTRN12605000136606. | {"verdict": "SUPPORT", "evidence": ["RESULTS In an intent-to-treat analysis, participants in the intervention group improved 0.26 points (95% confidence interval, -0.89 to 0.54) and those in the usual care group deteriorated 1.04 points (95% confidence interval, 0.32 to 1.82) on the ADAS-Cog at the end of the intervention.", "CONCLUSIONS In this study of adults with subjective memory impairment, a 6-month program of physical activity provided a modest improvement in cognition over an 18-month follow-up period."]} | science.scifact_entailment.400 | [
{
"content": "Determine the relationship between them and generate a JSON response. Scientific Claim: \nIndividuals with Alzheimers who participate in six months of physical activity improve cognitive function for up to 18 months.\n\nResearch Abstract:\nCONTEXT Many observational studies have shown that physical activity reduces the risk of cognitive decline; however, evidence from randomized trials is lacking.\nOBJECTIVE To determine whether physical activity reduces the rate of cognitive decline among older adults at risk.\nDESIGN AND SETTING Randomized controlled trial of a 24-week physical activity intervention conducted between 2004 and 2007 in metropolitan Perth, Western Australia.\nAssessors of cognitive function were blinded to group membership.\nPARTICIPANTS We recruited volunteers who reported memory problems but did not meet criteria for dementia.\nThree hundred eleven individuals aged 50 years or older were screened for eligibility, 89 were not eligible, and 52 refused to participate.\nA total of 170 participants were randomized and 138 participants completed the 18-month assessment.\nINTERVENTION Participants were randomly allocated to an education and usual care group or to a 24-week home-based program of physical activity.\nMAIN OUTCOME MEASURE Change in Alzheimer Disease Assessment Scale-Cognitive Subscale (ADAS-Cog) scores (possible range, 0-70) over 18 months.\nRESULTS In an intent-to-treat analysis, participants in the intervention group improved 0.26 points (95% confidence interval, -0.89 to 0.54) and those in the usual care group deteriorated 1.04 points (95% confidence interval, 0.32 to 1.82) on the ADAS-Cog at the end of the intervention.\nThe absolute difference of the outcome measure between the intervention and control groups was -1.3 points (95% confidence interval,-2.38 to -0.22) at the end of the intervention.\nAt 18 months, participants in the intervention group improved 0.73 points (95% confidence interval, -1.27 to 0.03) on the ADAS-Cog, and those in the usual care group improved 0.04 points (95% confidence interval, -0.46 to 0.88).\nWord list delayed recall and Clinical Dementia Rating sum of boxes improved modestly as well, whereas word list total immediate recall, digit symbol coding, verbal fluency, Beck depression score, and Medical Outcomes 36-Item Short-Form physical and mental component summaries did not change significantly.\nCONCLUSIONS In this study of adults with subjective memory impairment, a 6-month program of physical activity provided a modest improvement in cognition over an 18-month follow-up period.\nTRIAL REGISTRATION anzctr.org.au Identifier: ACTRN12605000136606.",
"role": "user"
},
{
"content": "{\"verdict\": \"SUPPORT\", \"evidence\": [\"RESULTS In an intent-to-treat analysis, participants in the intervention group improved 0.26 points (95% confidence interval, -0.89 to 0.54) and those in the usual care group deteriorated 1.04 points (95% confidence interval, 0.32 to 1.82) on the ADAS-Cog at the end of the intervention.\", \"CONCLUSIONS In this study of adults with subjective memory impairment, a 6-month program of physical activity provided a modest improvement in cognition over an 18-month follow-up period.\"]}",
"role": "assistant"
}
]
|
science.evidence_inference | Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:
- Treatment: The specific medical procedure or drug being evaluated.
- Control: The standard or placebo used for comparison. If absent, use `null`.
- Result: The health outcome being assessed.
- Impact: The observed effect of the treatment on the result, categorized as: ("increased", "decreased", "no change").
- Justification: The supporting details for the observed impact.
Format the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].
Abstract:
ABSTRACT.PURPOSE: To report the frequency of binocular vision (BV) anomalies in children with specific learning disorders (SLD) and to assess the efficacy of vision therapy (VT) in children with a non-strabismic binocular vision anomaly (NSBVA).
ABSTRACT.METHODS: The study was carried out at a centre for learning disability (LD). Comprehensive eye examination and binocular vision assessment was carried out for 94 children (mean (SD) age: 15 (2.2) years) diagnosed with specific learning disorder. BV assessment was done for children with best corrected visual acuity of ≥6/9 – N6, cooperative for examination and free from any ocular pathology. For children with a diagnosis of NSBVA (n = 46), 24 children were randomized to VT and no intervention was provided to the other 22 children who served as experimental controls. At the end of 10 sessions of vision therapy, BV assessment was performed for both the intervention and non-intervention groups.
ABSTRACT.RESULTS: Binocular vision anomalies were found in 59 children (62.8%) among which 22% (n = 13) had strabismic binocular vision anomalies (SBVA) and 78% (n = 46) had a NSBVA. Accommodative infacility (AIF) was the commonest of the NSBVA and found in 67%, followed by convergence insufficiency (CI) in 25%. Post-vision therapy, the intervention group showed significant improvement in all the BV parameters (Wilcoxon signed rank test, p < 0.05) except negative fusional vergence.
ABSTRACT.CONCLUSION: Children with specific learning disorders have a high frequency of binocular vision disorders and vision therapy plays a significant role in improving the BV parameters. Children with SLD should be screened for BV anomalies as it could potentially be an added hindrance to the reading difficulty in this special population.
Extracted Data: | [["vision therapy (VT)", "no intervention", "binocular vision (BV)", "significantly increased", "Post-vision therapy, the intervention group showed significant improvement in all the BV parameters (Wilcoxon signed rank test, p < 0.05) except negative fusional vergence."]] | science.evidence_inference.234 | [
{
"content": "Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:\n\n- Treatment: The specific medical procedure or drug being evaluated.\n- Control: The standard or placebo used for comparison. If absent, use `null`.\n- Result: The health outcome being assessed.\n- Impact: The observed effect of the treatment on the result, categorized as: (\"increased\", \"decreased\", \"no change\").\n- Justification: The supporting details for the observed impact.\n\nFormat the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].\n\nAbstract:\nABSTRACT.PURPOSE: To report the frequency of binocular vision (BV) anomalies in children with specific learning disorders (SLD) and to assess the efficacy of vision therapy (VT) in children with a non-strabismic binocular vision anomaly (NSBVA).\n\nABSTRACT.METHODS: The study was carried out at a centre for learning disability (LD). Comprehensive eye examination and binocular vision assessment was carried out for 94 children (mean (SD) age: 15 (2.2) years) diagnosed with specific learning disorder. BV assessment was done for children with best corrected visual acuity of ≥6/9 – N6, cooperative for examination and free from any ocular pathology. For children with a diagnosis of NSBVA (n = 46), 24 children were randomized to VT and no intervention was provided to the other 22 children who served as experimental controls. At the end of 10 sessions of vision therapy, BV assessment was performed for both the intervention and non-intervention groups.\n\nABSTRACT.RESULTS: Binocular vision anomalies were found in 59 children (62.8%) among which 22% (n = 13) had strabismic binocular vision anomalies (SBVA) and 78% (n = 46) had a NSBVA. Accommodative infacility (AIF) was the commonest of the NSBVA and found in 67%, followed by convergence insufficiency (CI) in 25%. Post-vision therapy, the intervention group showed significant improvement in all the BV parameters (Wilcoxon signed rank test, p < 0.05) except negative fusional vergence.\n\nABSTRACT.CONCLUSION: Children with specific learning disorders have a high frequency of binocular vision disorders and vision therapy plays a significant role in improving the BV parameters. Children with SLD should be screened for BV anomalies as it could potentially be an added hindrance to the reading difficulty in this special population.\n\nExtracted Data:",
"role": "user"
},
{
"content": "[[\"vision therapy (VT)\", \"no intervention\", \"binocular vision (BV)\", \"significantly increased\", \"Post-vision therapy, the intervention group showed significant improvement in all the BV parameters (Wilcoxon signed rank test, p\\u0026nbsp;\\u0026lt;\\u0026nbsp;0.05) except negative fusional vergence.\"]]",
"role": "assistant"
}
]
|
science.qasper_abstractive_qa | Examine the provided sections from a research document and address the accompanying query. The text is divided by newlines.
Document Content:
Data Collection for Interactive Learning through the Dialog
This paper presents a dataset collected from natural dialogs which enables testing the ability of dialog systems to learn new facts from user utterances throughout the dialog. This interactive learning will help with one of the most prevalent problems of open-domain dialog systems, which is the sparsity of facts a dialog system can reason about. The proposed dataset, consisting of 1900 collected dialogs, allows simulation of interactively gaining denotations and question explanations from users, which can be used for interactive learning.
Introduction
Nowadays, dialog systems are usually designed for a single domain BIBREF0 . They store data in a well-defined format with a fixed number of attributes for entities that the system can provide. Because data in this format can be stored as a two-dimensional table within a relational database, we call the data flat. This data representation allows the system to query the database in a simple and efficient way. It also allows to keep the dialog state in the form of slots (which usually correspond to columns in the table) and track it through the dialog using probabilistic belief tracking BIBREF1 , BIBREF2 .
However, the well-defined structure of the database of a typical dialog system comes with a high cost of extending it, as every piece of new information has to fit the format. This is especially a problem when one is adapting the system for a new domain because its entities could have different attributes.
A dialog system based on knowledge bases offers many advantages. First, the knowledge base, which can be represented as a knowledge graph containing entities connected by relations, is much more flexible than the relational database. Second, freely available knowledge bases, such as Freebase, Wikidata, etc., contain an enormous amount of structured information and are still growing. A dialog system capable of working with this type of information would therefore be very useful.
In this paper we propose a dataset aiming to help develop and evaluate dialog systems based on knowledge bases through interactive learning, motivated in Section "Motivation". Section "Dialog policies" describes policies that can be used for retrieving information from knowledge bases. Section "Dialog Simulation" introduces a dialog simulation from natural conversations which we use for the evaluation of interactive learning. The dataset collection process allowing the dialog simulation is described in Section "Dataset Collection Process" and is followed by properties of the resulting dataset in Section "Dataset Properties". Evaluation guidelines with proposed metrics can be found in Section "Interactive Learning Evaluation". The planned future work is summarized in Section "Future Work". We conclude the paper with Section "Conclusion".
Motivation
From the point of view of dialog systems providing general information from a knowledge base, the most limiting factor is that a large portion of the questions is understood poorly.
Current approaches BIBREF3, BIBREF4 can only achieve around 50% accuracy on some question answering datasets. Therefore, we think that there is room for improvement, which can be achieved by interactively asking for additional information in conversational dialogs with users. This extra information can be used for improving the policies of dialog systems. We call this approach interactive learning from dialogs.
We can improve dialog systems in several aspects through interactive learning in a direct interaction with users. First, the most straightforward way obviously is getting the correct answer for questions that the system does not know. We can try to ask users for answers on questions that the system encountered in a conversation with a different user and did not understand it. Second, the system can ask the user for a broader explanation of a question. This explanation could help the system to understand the question and provide the correct answer. In addition, the system can learn correct policy for the question which allows providing answers without asking any extra information for similar questions next time. We hypothesize that users are willing to give such explanations because it could help them to find answers for their own questions. The last source of information that we consider for interactive learning is rephrasing, which could help when the system does know the concept but does not know the correct wording. This area is extensively studied for the purposes of information retrieval BIBREF5 , BIBREF6 .
The main purpose of the collected dataset is to enable interactive learning using the steps proposed above and potentially to evaluate how different systems perform on this task.
Dialog policies
The obvious difficulty when developing a dialog system is finding a way to identify the piece of information that the user is interested in. This is especially a problem for dialog systems based on knowledge graphs containing a large amount of complex structured information. While a similar problem is being solved in the task of question answering, dialog systems have more possibilities for identifying the real intention of the user. For example, a dialog system can ask for additional information during the dialog.
We distinguish three different basic approaches to requesting knowledge bases:
A combination of the above approaches is also possible. For example, we can imagine scenarios where the dialog system starts with hand-crafted rules, which are subsequently interactively improved through dialogs with its users. With the growing demand for open-domain dialog systems, it is becoming clear that creating hand-crafted policies does not scale well - therefore, machine learning approaches are gaining in popularity. Many public datasets for offline learning have been published BIBREF8, BIBREF7. However, to our knowledge, no public datasets for interactive learning are available. To fill this gap, we collected a dataset which enables training interactively learned policies through a simulated interaction with users.
Dialog Simulation
Offline evaluation of interactive dialogs on real data is difficult because different policies can lead to different variants of the dialog. Our solution to this issue is to collect data in a way that allows us to simulate all dialog variants possible according to any policy.
The dialog variants we are considering for interactive learning differ only in presence of several parts of the dialog. Therefore, we can collect dialogs containing all information used for interactive learning and omit those parts that were not requested by the policy.
We collected the dataset (see Section "Dataset Collection Process") in a way that enables simulation where the policy can decide how much extra information about the question it requests. If the question is clear to the system, it can attempt to answer the question without any other information. It can also ask for a broader explanation, with the possibility of answering the question afterwards. If the system decides not to answer the question, we can simulate rerouting the question to another user to try to obtain the answer from them. The principle of the simulated user's answer is shown in Figure 1.
Note that the simulated user’s answer can be incorrect because human users naturally make mistakes. We intentionally keep these mistakes in the dataset because real systems must address them as well.
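As an illustration of this simulation, here is a minimal Python sketch of replaying one collected dialog under a policy. The dictionary keys and the policy interface are assumptions made for the example, not the dataset's actual schema or evaluation scripts.

```python
def simulate_dialog(policy, dialog):
    """Replay one collected dialog under a policy, in the spirit of Figure 1.

    The policy may answer directly, request the recorded explanation, or give
    up, in which case the recorded user answer is used (simulating rerouting
    the question to another user).
    """
    stats = {"explanations": 0, "answers": 0, "correct": 0, "incorrect": 0}

    guess = policy.try_answer(dialog["question"])
    if guess is None:  # question unclear -> ask for a broader explanation
        stats["explanations"] += 1
        guess = policy.try_answer(dialog["question"], dialog["explanation"])

    if guess is None:  # still no answer -> reroute to a (simulated) user
        stats["answers"] += 1
        if dialog["user_answer_correct"]:  # the user's answer may be wrong
            stats["correct"] += 1
    elif guess == dialog["gold_entity"]:
        stats["correct"] += 1
    else:
        stats["incorrect"] += 1
    return stats


class AlwaysAskPolicy:
    """Trivial baseline: never answers on its own."""
    def try_answer(self, question, explanation=None):
        return None


example = {"question": "Name an actor.", "explanation": "Any film actor works.",
           "user_answer_correct": True, "gold_entity": "m.0xyz"}
print(simulate_dialog(AlwaysAskPolicy(), example))
# {'explanations': 1, 'answers': 1, 'correct': 1, 'incorrect': 0}
```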
Dataset Collection Process
A perfect data collection scenario for our dataset would use a real, running dialog system providing general information from the knowledge base to real users. This system could then ask for explanations and answers for questions which it is not able to answer.
However, getting access to systems with real users is usually hard. Therefore, we used the crowdsourcing platform CrowdFlower (CF) for our data collection.
A CF worker gets a task instructing them to use our chat-like interface to help the system with a question which is randomly selected from the training examples of the Simple questions BIBREF7 dataset. To complete the task, the user has to communicate with the system through a three-phase dialog discussing the paraphrase (see Section "Question Paraphrasing"), explanation (see Section "Question Explanation") and answer of the question (see Section "Question Answer"). To avoid dialogs with a poor level of English, we involved CF workers from English-speaking countries only. The collected dialogs have been annotated (see Section "Annotation") by expert annotators afterwards.
The described procedure leads to dialogs like the one shown in the Figure 2 .
Question Paraphrasing
At the beginning of the dialog, the system requests the user to paraphrase a question that the system does not understand. The main goal of this first phase is to let the user get familiar with the presented question and to get alternative wordings of the posed question.
Question Explanation
In the second phase, the user is asked for an explanation of the question. We expect the explanation to be different enough from the original question (in terms of the number of common words between the question and the explanation). If the explanation is too similar to the question, the user is notified that their explanation is not broad enough and they must provide a better one.
Question Answer
With a valid explanation, the dialog turns into the last phase, where the user is asked for a correct answer to the original question. The system requires the user to answer with a full sentence. In practical experiments this has proven to be a useful decision because it improves the system's ability to reveal cheaters. We can simply measure the connection (in terms of common words) between the question and the answer sentence, which allows us to reject completely irrelevant answers.
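As a concrete illustration of such a check, a minimal sketch follows; the stop-word list and the overlap threshold are arbitrary choices made for the example, not the authors' implementation.

```python
def shares_enough_words(question: str, answer: str, min_common: int = 1) -> bool:
    """Reject answer sentences sharing too few content words with the question."""
    stop = {"the", "a", "an", "is", "was", "of", "in", "to", "and", "what", "who"}
    q = {w for w in question.lower().split() if w not in stop}
    a = {w for w in answer.lower().split() if w not in stop}
    return len(q & a) >= min_common

print(shares_enough_words("Name an actor", "Brad Pitt is an actor"))  # True
print(shares_enough_words("Name an actor", "No idea, sorry"))         # False
```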
Annotation
The correct answer for the question in each dialog is available from the Simple questions dataset. Answers are in the form of Freebase entities identified by a unique id. For evaluation purposes we need to know whether the dialog contains an answer consistent with the entity from Simple questions, an answer with another entity, or no answer at all. Since the annotation process is quite simple, we did not need crowdsourcing for it.
Natural Language Understanding (NLU)
The collection system needs to recognize the following dialog acts from user utterances during all phases of the dialog:
– user does not want to provide requested information,
– user agrees to provide requested information,
– user does not know the requested information,
– user tries chit chat with the system (hello, bye, who are you...),
– none of the above, interpreted as user is giving information requested by the system.
Parsing of the dialog acts is done by hand-written rules using templates and keyword spotting. The templates and keywords were manually collected from frequent expressions used by CF workers during preparation runs of the dataset collection process (google it, check wikipedia, I would need... $\rightarrow $ Negate).
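A minimal keyword-spotting parser in this spirit is sketched below; the keyword lists are illustrative and not the ones actually used during collection.

```python
# Rules are checked in order; the first matching keyword decides the dialog act.
RULES = [
    ("Negate",   ["google it", "check wikipedia", "i would need", "no way"]),
    ("Affirm",   ["ok", "sure", "yes", "of course"]),
    ("DontKnow", ["i don't know", "no idea", "not sure"]),
    ("ChitChat", ["hello", "hi there", "bye", "who are you"]),
]

def parse_dialog_act(utterance: str) -> str:
    text = utterance.lower()
    for act, keywords in RULES:
        # Simple substring matching; a real system would match word boundaries.
        if any(k in text for k in keywords):
            return act
    return "Inform"  # none of the above: user is giving the requested information

print(parse_dialog_act("You could just google it."))  # Negate
print(parse_dialog_act("Brad Pitt is an actor."))     # Inform
```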
Dataset Properties
We collected the dataset with 1900 dialogs and 8533 turns. Topics discussed in dialogs are questions randomly chosen from training examples of Simple questions BIBREF7 dataset. From this dataset we also took the correct answers in form of Freebase entities.
Our dataset consists of a standard data split into training, development and test files. The basic properties of those files are as follows:
Each file contains complete dialogs enriched by the outputs of the NLU (see Section "Natural Language Understanding (NLU)") that were used during the data collection. On top of that, each dialog is labeled with the correct answer for the question and an expert annotation of the user answer hint which tells whether the hint points to the correct answer, an incorrect answer, or no answer at all.
351 of all collected dialogs contain a correct answer provided by users and 702 dialogs have an incorrect answer. In the remaining 847 dialogs, users did not want to answer the question. The collected dialogs also contain 1828 paraphrases and 1539 explanations for 1870 questions.
An answer for a question was labeled as correct by annotators only when it was evident to them that the answer points to the same Freebase entity that was present in the Simple questions dataset for that particular question. However, a large number of questions from that dataset are quite general - with many possible answers. Therefore, a lot of answers from users were labeled as incorrect even though those answers perfectly fit the question. Our annotators identified that 285 of the incorrect answers were answers for such general questions. This situation can be demonstrated by the question 'Name an actor', which was correctly answered by 'Brad Pitt is an actor'; however, to be consistent with the Simple questions annotation, which is 'Kelly Atwood', annotators were forced to mark it as an incorrect answer.
Interactive Learning Evaluation
A perfect interactive learning model would be able to learn anything interactively from test dialogs during testing, which would allow us to measure the progress of the model from scratch over the course of time. However, developing such a model would be unnecessarily hard; therefore, we provide training dialogs which can be used for feature extraction and other engineering related to interactive learning from dialogs in natural language. Model development is further supported with labeled validation data for parameter tuning.
We propose two evaluation metrics for comparing interactive learning models. The first metric (see Section "Efficiency Score") scores the amount of information required by the model; the second metric (see Section "Answer Extraction Accuracy") is the accuracy of answer extraction from user utterances. All models must base their answers only on information gained from training dialogs and from testing dialogs seen during the simulation so far, to ensure that the score reflects the interactive learning of the model instead of general question answering.
Efficiency Score
The simulation of dialogs from our dataset allows us to evaluate how efficient a dialog system is in using information gained from users. The dialog system should maximize the number of correctly answered questions without requesting too many explanations and answers from users. To evaluate different systems using the collected data, we propose the following evaluation measure:
$$S_D = \frac{n_c - w_i n_i - w_e n_e - w_a n_a}{|D|}$$ (Eq. 20)
Here, $n_c$ denotes the number of correctly answered questions, $n_i$ denotes the number of incorrectly answered questions, $n_e$ denotes the number of requested explanations, $n_a$ denotes the number of requested answers and $|D|$ denotes the number of simulated dialogs in the dataset. $w_i$ , $w_e$ , $w_a$ are penalization weights.
The penalization weights are used to compensate for different costs of obtaining different types of information from the user. For example, gaining broader explanation from the user is relatively simple because it is in their favor to cooperate with the system on a question they are interested in. However, obtaining correct answers from users is significantly more difficult because the system does not always have the chance to ask the question and the user does not have to know the correct answer for it.
To make the evaluations comparable between different systems we recommend using our evaluation scripts included with the dataset with following penalization weights that reflect our intuition for gaining information from users:
– incorrect answers are penalized significantly,
– explanations are quite cheap; therefore, we will penalize them just slightly,
– gaining question’s answer from users is harder than gaining explanations.
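Putting Eq. 20 into code gives a small sketch like the following; the weight values in the example call are placeholders, not the ones shipped with the official evaluation scripts.

```python
def efficiency_score(n_correct, n_incorrect, n_explanations, n_answers,
                     n_dialogs, w_i, w_e, w_a):
    """Eq. 20: reward correct answers, penalize mistakes and requested help."""
    return (n_correct - w_i * n_incorrect - w_e * n_explanations
            - w_a * n_answers) / n_dialogs

# e.g. 300 correct, 50 incorrect, 400 explanations and 120 answers over 697 dialogs
print(efficiency_score(300, 50, 400, 120, 697, w_i=5.0, w_e=0.2, w_a=1.0))  # ~ -0.215
```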
Answer Extraction Accuracy
It is quite challenging to find the appropriate entity in the knowledge base even when the user provided the correct answer. Therefore, we propose another metric relevant to our dataset. This metric is the accuracy of entity extraction, which measures how many times a correct answer was extracted from the answer hints provided by the user in dialogs annotated as correctly answered.
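A compact sketch of this metric is shown below; the Freebase-style ids are made up for illustration.

```python
def answer_extraction_accuracy(extracted_ids, gold_ids):
    """Fraction of correctly-answered dialogs whose extracted entity matches gold."""
    assert len(extracted_ids) == len(gold_ids) and gold_ids
    return sum(e == g for e, g in zip(extracted_ids, gold_ids)) / len(gold_ids)

print(answer_extraction_accuracy(["m.0abc1", "m.0abc2"], ["m.0abc1", "m.0zzz9"]))  # 0.5
```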
Future Work
Our future work will be mainly focused on providing a baseline system for interactive learning which will be evaluated on the dataset. We are also planning improvements to the dialog management that is used to gain explanations during the data collection. We believe that with conversation about specific aspects of the discussed question it will be possible to gain even more interesting information from users. Another area of our interest is the possibility of improving question answering accuracy on the test questions of the Simple questions dataset using the extra information contained in the collected dialogs.
Conclusion
In this paper, we presented a novel way to evaluate different interactive learning approaches for dialog models. The evaluation covers two challenging aspects of interactive learning.
For purposes of evaluation we collected a dataset from conversational dialogs with workers on crowdsourcing platform CrowdFlower. Those dialogs were annotated with expert annotators and published under Creative Commons 4.0 BY-SA license on lindat. We also provide evaluation scripts with the dataset that should ensure comparable evaluation of different interactive learning approaches.
Acknowledgments
This work was funded by the Ministry of Education, Youth and Sports of the Czech Republic under the grant agreement LK11221 and core research funding, SVV project 260 224, and GAUK grant 1170516 of Charles University in Prague. It used language resources stored and distributed by the LINDAT/CLARIN project of the Ministry of Education, Youth and Sports of the Czech Republic (project LM2015071).
Figure 1: Unknown questions can be rerouted between users. We can, for example, use chitchat to get correct answers. The challenge is in generalizing the collected question-answer pairs using the knowledge base in order to apply them to previously unseen questions.
Table 1: Table of turn and dialog counts for dataset splits.
Query: What is the average length of dialog?
JSON Response: | {"answer": "4.49 turns", "evidence": ["We collected the dataset with 1900 dialogs and 8533 turns. Topics discussed in dialogs are questions randomly chosen from training examples of Simple questions BIBREF7 dataset. From this dataset we also took the correct answers in form of Freebase entities."]} | science.qasper_abstractive_qa.19.1 | [
{
"content": "Examine the provided sections from a research document and address the accompanying query. The text is divided by newlines.\n\nDocument Content:\n\nData Collection for Interactive Learning through the Dialog\n\nThis paper presents a dataset collected from natural dialogs which enables to test the ability of dialog systems to learn new facts from user utterances throughout the dialog. This interactive learning will help with one of the most prevailing problems of open domain dialog system, which is the sparsity of facts a dialog system can reason about. The proposed dataset, consisting of 1900 collected dialogs, allows simulation of an interactive gaining of denotations and questions explanations from users which can be used for the interactive learning.\n\nIntroduction\nNowadays, dialog systems are usually designed for a single domain BIBREF0 . They store data in a well-defined format with a fixed number of attributes for entities that the system can provide. Because data in this format can be stored as a two-dimensional table within a relational database, we call the data flat. This data representation allows the system to query the database in a simple and efficient way. It also allows to keep the dialog state in the form of slots (which usually correspond to columns in the table) and track it through the dialog using probabilistic belief tracking BIBREF1 , BIBREF2 .\nHowever, the well-defined structure of the database of a typical dialog system comes with a high cost of extending it as every piece of new information has to fit the format. This is especially a problem when we one is adapting the system for a new domain because its entities could have different attributes.\nA dialog system based on knowledge bases offers many advantages. First, the knowledge base, which can be represented as knowledge graph containing entities connected by relations, is much more flexible than the relational database. Second, freely available knowledge bases, such as Freebase, Wikidata, etc. contain an enormous amount of structured information, and are still growing. A dialog system which is capable of working with this type of information would be therefore very useful.\nIn this paper we propose a dataset aiming to help develop and evaluate dialog systems based on knowledge bases by interactive learning motivated in Section \"Motivation\" Section \"Dialog policies\" describes policies that can be used for retrieving information from knowledge bases. In Section \"Dialog Simulation\" is introduced a dialog simulation from natural conversations which we use for evaluation of interactive learning. The dataset collection process allowing the dialog simulation is described in Section \"Dataset Collection Process\" and is followed by properties of the resulting dataset in Section \"Dataset Properties\" Evaluation guidelines with proposed metrics can be found in Section \"Interactive Learning Evaluation\" The planned future work is summarized in Section \"Future Work\" We conclude the paper with Section \"Conclusion\" \n\nMotivation\nFrom the point of view of dialog systems providing general information from a knowledge base, the most limiting factor is that a large portion of the questions is understood poorly.\nCurrent approaches BIBREF3 , BIBREF4 can only achieve around 50% accuracy on some question answering datasets. Therefore, we think that there is a room for improvements which can be achieved by interactively asking for additional information in conversational dialogs with users. 
This extra information can be used for improving policies of dialog systems. We call this approach the interactive learning from dialogs.\nWe can improve dialog systems in several aspects through interactive learning in a direct interaction with users. First, the most straightforward way obviously is getting the correct answer for questions that the system does not know. We can try to ask users for answers on questions that the system encountered in a conversation with a different user and did not understand it. Second, the system can ask the user for a broader explanation of a question. This explanation could help the system to understand the question and provide the correct answer. In addition, the system can learn correct policy for the question which allows providing answers without asking any extra information for similar questions next time. We hypothesize that users are willing to give such explanations because it could help them to find answers for their own questions. The last source of information that we consider for interactive learning is rephrasing, which could help when the system does know the concept but does not know the correct wording. This area is extensively studied for the purposes of information retrieval BIBREF5 , BIBREF6 .\nThe main purpose of the collected dataset is to enable interactive learning using the steps proposed above and potentially to evaluate how different systems perform on this task.\n\nDialog policies\nThe obvious difficulty when developing a dialog system is finding a way how to identify the piece of information that the user is interested in. This is especially a problem for dialog systems based on knowledge graphs containing a large amount of complex structured information. While a similar problem is being solved in a task of question answering, dialog systems have more possibilities of identifying the real intention of the user. For example, a dialog system can ask for additional information during the dialog.\nWe distinguish three different basic approaches to requesting knowledge bases:\nA combination of the above approaches is also possible. For example, we can imagine scenarios where the dialog system starts with hand-crafted rules, which are subsequently interactively improved through dialogs with its users. With a growing demand for open domain dialog systems, it shows that creating hand-crafted policies does not scale well - therefore, machine learning approaches are gaining on popularity. Many public datasets for offline learning have been published BIBREF8 , BIBREF7 . However, to our knowledge, no public datasets for interactive learning are available. To fill this gap, we collected a dataset which enables to train interactively learned policies through a simulated interaction with users.\n\nDialog Simulation\nOffline evaluation of interactive dialogs on real data is difficult because different policies can lead to different variants of the dialog. Our solution to this issue is to collect data in a way that allows us to simulate all dialog variants possible according to any policy.\nThe dialog variants we are considering for interactive learning differ only in presence of several parts of the dialog. Therefore, we can collect dialogs containing all information used for interactive learning and omit those parts that were not requested by the policy.\nWe collected the dataset (see Section \"Dataset Collection Process\" ) that enables simulation where the policy can decide how much extra information to the question it requests. 
If the question is clear to the system it can attempt to answer the question without any other information. It can also ask for a broader explanation with a possibility to answer the question afterwards. If the system decides not to answer the question, we can simulate rerouting the question to another user, to try to obtain the answer from them. The principle of simulated user's answer is shown in the Figure 1 .\nNote that the simulated user’s answer can be incorrect because human users naturally made mistakes. We intentionally keep these mistakes in the dataset because real systems must address them as well.\n\nDataset Collection Process\nA perfect data collection scenario for our dataset would use real running dialog system providing general information from the knowledge base to real users. This system could then ask for explanations and answers for questions which it is not able to answer.\nHowever, getting access to systems with real users is usually hard. Therefore, we used the crowdsourcing platform CrowdFlower (CF) for our data collection.\nA CF worker gets a task instructing them to use our chat-like interface to help the system with a question which is randomly selected from training examples of Simple questions BIBREF7 dataset. To complete the task user has to communicate with the system through the three phase dialog discussing question paraphrase (see Section \"Interactive Learning Evaluation\" ), explanation (see Section \"Future Work\" ) and answer of the question (see Section \"Conclusion\" ). To avoid poor English level of dialogs we involved CF workers from English speaking countries only. The collected dialogs has been annotated (see Section \"Acknowledgments\" ) by expert annotators afterwards.\nThe described procedure leads to dialogs like the one shown in the Figure 2 .\n\nQuestion Paraphrasing\nAt beginning of the dialog, the system is requesting the user to paraphrase question that the system does not understand. The main goal of this first phase is to let the user get familiar with the presented question and to get alternative wordings of the posed question.\n\nQuestion Explanation\nIn the second phase, the user is asked for an explanation of the question. We expect the explanation to be different enough from the original question (in terms of the number of common words between the question and the explanation). If the explanation is too similar to the question, the user is notified that their explanation is not broad enough and they must provide a better one.\n\nQuestion Answer\nWith the valid explanation the dialog turns into the last phase where the user is asked for a correct answer to the original question. The system requires the user to answer with a full sentence. In practical experiments this has shown as a useful decision because it improves system's ability to reveal cheaters. We can simply measure the connection (in terms of common words ) between question and the answer sentence. This allows to reject completely irrelevant answers.\n\nAnnotation\nThe correct answer for question in each dialog is available from Simple questions dataset. Answers are in form of Freebase entities identified by unique id. For evaluation purposes we need information whether dialog contains the answer which is consistent with the entity from Simple questions, the answer with another entity or whether the dialog does not contain any answer. 
While the annotation process is quite simple, we did not need crowdsourcing for the process.\n\nNatural Language Understanding (NLU)\nThe collection system needs to recognize following dialog acts from user utterances during all phases of the dialog:\n– user does not want to provide requested information,\n– user agrees to provide requested information,\n– user does not know the requested information,\n– user tries chit chat with the system (hello, bye, who are you...),\n– none of the above, interpreted as user is giving information requested by the system.\nParsing of the dialog acts is made by hand written rules using templates and keyword spotting. The templates and keywords were manually collected from frequent expressions used by CF workers during preparation runs of the dataset collection process (google it, check wikipedia, I would need... $\\rightarrow $ Negate).\n\nDataset Properties\nWe collected the dataset with 1900 dialogs and 8533 turns. Topics discussed in dialogs are questions randomly chosen from training examples of Simple questions BIBREF7 dataset. From this dataset we also took the correct answers in form of Freebase entities.\nOur dataset consists of standard data split into training, development and test files. The basic properties of those files are as follows:\nEach file contains complete dialogs enriched by outputs of NLU (see Section \"Natural Language Understanding (NLU)\" ) that were used during the data collection. On top of that, each dialog is labeled by the correct answer for the question and expert annotation of the user answer hint which tells whether the hint points to the correct answer, incorrect answer, or no answer at all.\n351 of all collected dialogs contain correct answer provided by users and 702 dialogs have incorrect answer. In the remaining 847 dialogs users did not want to answer the question. The collected dialogs also contain 1828 paraphrases and 1539 explanations for 1870 questions.\nAn answer for a question was labeled as correct by annotators only when it was evident to them that the answer points to the same Freebase entity that was present in Simple questions dataset for that particular question. However, a large amount of questions from that dataset is quite general - with many possible answers. Therefore lot of answers from users were labeled as incorrect even though those answers perfectly fit the question. Our annotators identified that 285 of the incorrect answers were answers for such general questions. Example of this situation can be demonstrated by question 'Name an actor' which was correctly answered by 'Brad Pitt is an actor', however, to be consistent with Simple questions annotation, which is 'Kelly Atwood', annotators were forced to mark it as an incorrect answer.\n\nInteractive Learning Evaluation\nA perfect interactive learning model would be able to learn anything interactively from test dialogs during testing, which would allow us to measure progress of the model from scratch over the course of time. However, a development of such model would be unnecessarily hard, therefore we provide training dialogs which can be used for feature extraction and other engineering related to interactive learning from dialogs in natural language. Model development is further supported with labeled validation data for parameter tuning.\nWe propose two evaluation metrics for comparing interactive learning models. 
First metric (see Section \"Efficiency Score\" ) scores amount of information required by the model, second metric (see Section \"Answer Extraction Accuracy\" ) is accuracy of answer extraction from user utterances. All models must base their answers only on information gained from training dialogs and testing dialogs seen during the simulation so far, to ensure that the score will reflect the interactive learning of the model instead of general question answering.\n\nEfficiency Score\nThe simulation of dialogs from our dataset allows to evaluate how efficient a dialog system is in using information gained from users. The dialog system should maximize the number of correctly answered questions without requesting too many explanations and answers from users. To evaluate different systems using the collected data, we propose the following evaluation measure: \n$$ \nS_D = \\frac{n_c - w_i n_i - w_e n_e - w_a n_a}{|D|}$$ (Eq. 20) \nHere, $n_c$ denotes the number of correctly answered questions, $n_i$ denotes the number of incorrectly answered questions, $n_e$ denotes the number of requested explanations, $n_a$ denotes the number of requested answers and $|D|$ denotes the number of simulated dialogs in the dataset. $w_i$ , $w_e$ , $w_a$ are penalization weights.\nThe penalization weights are used to compensate for different costs of obtaining different types of information from the user. For example, gaining broader explanation from the user is relatively simple because it is in their favor to cooperate with the system on a question they are interested in. However, obtaining correct answers from users is significantly more difficult because the system does not always have the chance to ask the question and the user does not have to know the correct answer for it.\nTo make the evaluations comparable between different systems we recommend using our evaluation scripts included with the dataset with following penalization weights that reflect our intuition for gaining information from users:\n– incorrect answers are penalized significantly,\n– explanations are quite cheap; therefore, we will penalize them just slightly,\n– gaining question’s answer from users is harder than gaining explanations.\n\nAnswer Extraction Accuracy\nIt is quite challenging to find appropriate entity in the knowledge base even though the user provided the correct answer. Therefore, we propose another metric relevant to our dataset. This metric is the accuracy of entity extraction which measures how many times was extracted a correct answer from answer hints provided by the user in dialogs annotated as correctly answered.\n\nFuture Work\nOur future work will be mainly focused on providing a baseline system for interactive learning which will be evaluated on the dataset. We are also planning improvements for dialog management that is used to gain explanations during the data collection. We believe that with conversation about specific aspects of the discussed question it will be possible to gain even more interesting information from users. The other area of our interest is in possibilities to improve question answering accuracy on test questions of Simple question dataset with the extra information contained in the collected dialogs.\n\nConclusion\nIn this paper, we presented a novel way how to evaluate different interactive learning approaches for dialog models. The evaluation covers two challenging aspects of interactive learning. 
First, it scores efficiency of using information gained from users in simulated question answering dialogs. Second, it measures accuracy on answer hints understanding.\nFor purposes of evaluation we collected a dataset from conversational dialogs with workers on crowdsourcing platform CrowdFlower. Those dialogs were annotated with expert annotators and published under Creative Commons 4.0 BY-SA license on lindat. We also provide evaluation scripts with the dataset that should ensure comparable evaluation of different interactive learning approaches.\n\nAcknowledgments\nThis work was funded by the Ministry of Education, Youth and Sports of the Czech Republic under the grant agreement LK11221 and core research funding, SVV project 260 224, and GAUK grant 1170516 of Charles University in Prague. It used language resources stored and distributed by the LINDAT/CLARIN project of the Ministry of Education, Youth and Sports of the Czech Republic (project LM2015071).\n\nFigure 1: Unknown questions can be rerouted between users. We can, for example, use chitchat to get correct answers. The challenge is in generalizing the collected question-answer pairs using the knowledge base in order to apply them to previously unseen questions.\n\nTable 1: Table of turn and dialog counts for dataset splits.\n\nQuery: What is the average length of dialog?\n\nJSON Response:",
"role": "user"
},
{
"content": "{\"answer\": \"4.49 turns\", \"evidence\": [\"We collected the dataset with 1900 dialogs and 8533 turns. Topics discussed in dialogs are questions randomly chosen from training examples of Simple questions BIBREF7 dataset. From this dataset we also took the correct answers in form of Freebase entities.\"]}",
"role": "assistant"
}
]
|
science.mup_single_document_summarization | Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.
Title of the Paper:
On the Adversarial Robustness of Vision Transformers
Main Content:
Following the success in advancing natural language processing and understanding , transformers are expected to bring revolutionary changes to computer vision . This work provides a comprehensive study on the robustness of vision transformers ( ViTs ) against adversarial perturbations . Tested on various white-box and transfer attack settings , we find that ViTs possess better adversarial robustness when compared with convolutional neural networks ( CNNs ) . This observation also holds for certified robustness . We summarize the following main observations contributing to the improved robustness of ViTs : 1 ) Features learned by ViTs contain less low-level information and are more generalizable , which contributes to superior robustness against adversarial perturbations . 2 ) Introducing convolutional or tokens-to-token blocks for learning low-level features in ViTs can improve classification accuracy but at the cost of adversarial robustness . 3 ) Increasing the proportion of transformers in the model structure ( when the model consists of both transformer and CNN blocks ) leads to better robustness . But for a pure transformer model , simply increasing the size or adding layers can not guarantee a similar effect . 4 ) Pre-training without adversarial training on larger datasets does not significantly improve adversarial robustness though it is critical for training ViTs . 5 ) Adversarial training is also applicable to ViT for training robust models . Furthermore , feature visualization and frequency analysis are conducted for explanation . The results show that ViTs are less sensitive to high-frequency perturbations than CNNs and there is a high correlation between how well the model learns low-level features and its robustness against different frequency-based perturbations . 1 INTRODUCTION . Transformers are originally applied in natural language processing ( NLP ) tasks as a type of deep neural network ( DNN ) mainly based on the self-attention mechanism ( Vaswani et al . ( 2017 ) ; Devlin et al . ( 2018 ) ; Brown et al . ( 2020 ) ) , and transformers with large-scale pre-training have achieved state-of-the-art results on many NLP tasks ( Devlin et al . ( 2018 ) ; Liu et al . ( 2019 ) ; Yang et al . ( 2019 ) ; Sun et al . ( 2019 ) ) . Recently , Dosovitskiy et al . ( 2020 ) applied a pure transformer directly to sequences of image patches ( i.e. , a vision transformer , ViT ) and showed that the Transformer itself can be competitive with convolutional neural networks ( CNN ) on image classification tasks . Since then transformers have been extended to various vision tasks and show competitive or even better performance compared to CNNs and recurrent neural networks ( RNNs ) ( Carion et al . ( 2020 ) ; Chen et al . ( 2020 ) ; Zhu et al . ( 2020 ) ) . While ViT and its variants hold promise toward a unified machine learning paradigm and architecture applicable to different data modalities , it remains unclear on the robustness of ViT against adversarial perturbations , which is critical for safe and reliable deployment of many real-world applications . In this work , we examine the adversarial robustness of ViTs on image classification tasks and make comparisons with CNN baselines . 
As highlighted in Figure 1 ( a ) , our experimental results illustrate the superior robustness of ViTs than CNNs in both white-box and black-box attack settings , based on which we make the following important findings : • Features learned by ViTs contain less low-level information and benefit adversarial robustness . ViTs achieve a lower attack success rate ( ASR ) of 51.9 % compared with a minimum of 83.3 % by CNNs in Figure 1 ( a ) . They are also less sensitive to high-frequency adversarial perturbations . • Using denoised randomized smoothing ( Salman et al. , 2020 ) , ViTs attain significantly better certified robustness than CNNs . • It takes the cost of adversarial robustness to improve the classification accuracy of ViTs by introducing blocks to help learn low-level features as shown in Figure 1 ( a ) . • Increasing the proportion of transformer blocks in the model leads to better robustness when the model consists of both transformer and CNN blocks . For example , the attack success rate ( ASR ) decreases from 87.1 % to 79.2 % when 10 additional transformer blocks are added to T2T-ViT-14 . However , increasing the size of a pure transformer model can not guarantee a similar effect , e.g. , the robustness of ViT-S/16 is better than that of ViT-B/16 in Figure 1 ( a ) . • Pre-training without adversarial training on larger datasets does not improve adversarial robustness though it is critical for training ViT . • The principle of adversarial training through min-max optimization ( Madry et al . ( 2017 ) ; Zhang et al . ( 2019 ) ) can be applied to train robust ViTs . 2 RELATED WORK . Transformer ( Vaswani et al . ( 2017 ) ) has achieved remarkable performance on many NLP tasks , and its robustness has been studied on those NLP tasks . Hsieh et al . ( 2019 ) ; Jin et al . ( 2020 ) ; Shi & Huang ( 2020 ) ; Li et al . ( 2020 ) ; Garg & Ramakrishnan ( 2020 ) ; Yin et al . ( 2020 ) conducted adversarial attacks on transformers including pre-trained models , and in their experiments transformers usually show better robustness compared to other models based on Long short-term memory ( LSTM ) or CNN , with a theoretical explanation provided in Hsieh et al . ( 2019 ) . However , due to the discrete nature of NLP models , these studies are focusing on discrete perturbations ( e.g. , word or character substitutions ) which are very different from small and continuous perturbations in computer vision tasks . Besides , Wang et al . ( 2020a ) improved the robustness of pre-trained transformers from an information-theoretic perspective , and Shi et al . ( 2020 ) ; Ye et al . ( 2020 ) ; Xu et al . ( 2020 ) studied the robustness certification of transformer-based models . To the best of our knowledge , this work is one of the first studies that investigates the adversarial robustness ( against small perturbations in the input pixel space ) of transformers on computer vision tasks . There are some concurrent works studying the adversarial robustness of ViTs . We supplement their contribution in Appendix G . 3 MODEL ARCHITECTURES . We first review the architecture of models investigated in our experiments including several vision transformers ( ViTs ) and CNN models . A detailed table of comparison is given in Table 5 . 3.1 VISION TRANSFORMERS . We consider the original ViT ( Dosovitskiy et al . ( 2020 ) ) and its four variants shown in Figure 1 ( b ) . Vision transformer ( ViT ) and data-efficient image transformer ( DeiT ) : ViT ( Dosovitskiy et al . 
( 2020 ) ) mostly follows the original design of Transformer ( Vaswani et al . ( 2017 ) ; Devlin et al . ( 2018 ) ) on language tasks . For a 2D image x ∈ RH×W×C with resolution H ×W and C channels , it is divided into a sequence ofN = H·WP 2 flattened 2D patches of size P ×P , xi ∈ R N× ( P 2·C ) ( 1 ≤ i ≤ N ) . The patches are first encoded into patch embeddings with a simple convolutional layer , where the kernel size and stride of the convolution is exactly P × P . In addition , there are also position embeddings to preserve positional information . Similar to BERT ( Devlin et al . ( 2018 ) ) , a large-scale pre-trained model for NLP , a special [ CLS ] token is added to output features for classification . DeiT ( Touvron et al . ( 2021 ) ) further improves the ViT ’ s performance using data augmentation or distillation from CNN teachers with an additional distillation token . We investigate ViT- { S , B , L } /16 , DeiT-S/16 and Dist-DeiT-B/16 as defined in the corresponding papers in the main text and discuss other structures in Appendix F. Hybrid of CNN and ViT ( CNN-ViT ) : Dosovitskiy et al . ( 2020 ) also proposed a hybrid architecture for ViTs by replacing raw image patches with patches extracted from a CNN feature map . This is equivalent to adding learned CNN blocks to the head of ViT as shown in Figure 1 ( b ) . Following Dosovitskiy et al . ( 2020 ) , we investigate ViT-B/16-Res in our experiments , where the input sequence is obtained by flattening the spatial dimensions of the feature maps from ResNet50 . Hybrid of T2T and ViT ( T2T-ViT ) : Yuan et al . ( 2021 ) proposed to overcome the limitations of the simple tokenization in ViTs , by progressively structurizing an image to tokens with a token-to-token ( T2T ) module , which recursively aggregates neighboring tokens into one token such that low-level structures can be better learned . T2T-ViT was shown to perform better than ViT when trained from scratch on a midsize dataset . We investigate T2T-ViT-14 and T2T-ViT-24 in our experiments . Hybrid of shifted windows and ViT ( Swin-T ) : Liu et al . ( 2021 ) computes the representations with shifted windows scheme , which brings greater efficiency by limiting self-attention computation to non-overlapping local windows while also allowing for cross-window connection . We investigate Swin-S/4 in the main text and discuss other structures in Appendix F . 3.2 CONVOLUTIONAL NEURAL NEWORKS . We study several CNN models for comparison , including ResNet18 ( He et al . ( 2016 ) ) , ResNet5032x4d ( He et al . ( 2016 ) ) , ShuffleNet ( Zhang et al . ( 2018 ) ) , MobileNet ( Howard et al . ( 2017 ) ) and VGG16 ( Simonyan & Zisserman ( 2014 ) ) . We also consider the SEResNet50 model , which uses the Squeeze-and-Excitation ( SE ) block ( Hu et al . ( 2018 ) ) that applies attention to channel dimensions to fuse both spatial and channel-wise information within local receptive fields at each layer . The aforementioned CNNs are all trained on ImageNet from scratch . For a better comparison with pre-trained transformers , we also consider two CNN models pre-trained on larger datasets : ResNeXt-32x4d-ssl ( Yalniz et al . ( 2019 ) ) pre-trained on YFCC100M ( Thomee et al . ( 2015 ) ) , and ResNet50-swsl pre-trained on IG-1B-Targeted ( Mahajan et al . ( 2018 ) ) using semi-weakly supervised methods ( Yalniz et al . ( 2019 ) ) . They are both fine-tuned on ImageNet . 4 ADVERSARIAL ROBUSTNESS EVALUATION METHODS . 
We consider the commonly used ` ∞-norm bounded adversarial attacks to evaluate the robustness of target models . An ` ∞ attack is usually formulated as solving a constrained optimization problem : max xadv L ( xadv , y ) s.t . ∥∥xadv − x0∥∥∞ ≤ , ( 1 ) where x0 is a clean example with label y , and we aim to find an adversarial example xadv within an ` ∞ ball with radius centered at x0 , such that the loss of the classifier L ( xadv , y ) is maximized . We consider untargeted attack in this paper , so an attack is successful if the perturbation successfully changes the model ’ s prediction . The attacks as well as a randomized smoothing method used in this paper are listed below . White-box attack Four white-box attacks are involved in our experiments . The Projected Gradient Decent ( PGD ) attack ( Madry et al . ( 2017 ) ) solves Eq . 1 by iteratively taking gradient ascent : xadvt+1 = Clipx0 , ( x adv t + α · sgn ( ∇xJ ( xadvt , y ) ) ) , ( 2 ) where xadvt stands for the solution after t iterations , and Clipx0 , ( · ) denotes clipping the values to make each xadvt+1 , i within [ x0 , i − , x0 , i + ] , according to the ` ∞ threat model . As a special case , Fast Gradient Sign Method ( FGSM ) ( Goodfellow et al . ( 2014 ) ) uses a single iteration with t = 1 . AutoAttack ( Croce & Hein ( 2020 ) ) is currently the strongest white-box attack which evaluates adversarial robustness with a parameter-free ensemble of diverse attacks . We also design a frequencybased attack for analysis , which conducts attack under an additional frequency constraint : xadvfreq = IDCT ( DCT ( x adv pgd − x0 ) Mf ) + x0 , ( 3 ) where DCT and IDCT stand for discrete cosine transform and inverse discrete cosine transform respectively , xadvpgd stands for the adversarial example generated by PGD , and Mf stands for the mask metric defined the frequency filter which is illustrated in Appendix B . We found this design similar to Wang et al . ( 2020b ) . Black-box attack We consider the transfer attack which studies whether an adversarial perturbation generated by attacking the source model can successfully fool the target model . This test not only evaluates the robustness of models under the black-box setting , but also becomes a sanity check for detecting the obfuscated gradient phenomenon ( Athalye et al . ( 2018 ) ) . Previous works have demonstrated that single-step attacks like FGSM enjoys better transferability than multi-step attacks ( Kurakin et al . ( 2017 ) ) . We thus use FGSM for transfer attack in our experiments . Denoised Randomized Smoothing We also evaluate the certified robustness of the models using randomized smoothing , where the robustness is evaluated as the certified radius , and the model is certified to be robust with high probability for perturbations within the radius . We follow Salman et al . ( 2020 ) to train a DnCNN ( Zhang et al . ( 2017 ) ) denoiser Dθ for each pre-trained model f with the “ stability ” objective with LCE denoting cross entropy and N denoting Gaussian distribution : LStab = E ( xi , yi ) ∈D , δLCE ( f ( Dθ ( xi + δ ) ) , f ( xi ) ) where δ ∼ N ( 0 , σ 2I ) . ( 4 ) Randomized smoothing is applied on the denoised classifier f ◦ Dθ for robustness certification : g ( x ) = arg max c∈Y P [ f ( Dθ ( x+ δ ) ) = c ] where δ ∼ N ( 0 , σ2I ) . ( 5 ) The certified radius is then calculated for the smoothed classifier as ( Cohen et al. 
, 2019 ) : R = σ/2 ( Φ−1 ( pA ) − Φ−1 ( pB ) ) , ( 6 ) , where Φ−1 is the inverse of the standard Gaussian CDF , pA = P ( f ( x+δ ) = cA ) is the confidence of the top-1 predicted class cA , and pB = max c≠cA P ( f ( x+ δ ) = c ) is the confidence for the second top class . Accordingly , given a perturbation radius , the certified accuracy under this perturbation radius can be evaluated by comparing the given radius to the certified radius R . | This work studies the adversarial robustness of vision transformers comprehensively. The author first presents their finding via empirical experiments. Concretely, they show the robustness of vision transformers from the perspective of adversarial attack, transferability, and certified robustness, adversarial training defense. Then, they analyze the reason behind the observation. | science.mup_single_document_summarization.6342 | [
{
"content": "Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.\n\nTitle of the Paper:\nOn the Adversarial Robustness of Vision Transformers\n\nMain Content:\nFollowing the success in advancing natural language processing and understanding , transformers are expected to bring revolutionary changes to computer vision . This work provides a comprehensive study on the robustness of vision transformers ( ViTs ) against adversarial perturbations . Tested on various white-box and transfer attack settings , we find that ViTs possess better adversarial robustness when compared with convolutional neural networks ( CNNs ) . This observation also holds for certified robustness . We summarize the following main observations contributing to the improved robustness of ViTs : 1 ) Features learned by ViTs contain less low-level information and are more generalizable , which contributes to superior robustness against adversarial perturbations . 2 ) Introducing convolutional or tokens-to-token blocks for learning low-level features in ViTs can improve classification accuracy but at the cost of adversarial robustness . 3 ) Increasing the proportion of transformers in the model structure ( when the model consists of both transformer and CNN blocks ) leads to better robustness . But for a pure transformer model , simply increasing the size or adding layers can not guarantee a similar effect . 4 ) Pre-training without adversarial training on larger datasets does not significantly improve adversarial robustness though it is critical for training ViTs . 5 ) Adversarial training is also applicable to ViT for training robust models . Furthermore , feature visualization and frequency analysis are conducted for explanation . The results show that ViTs are less sensitive to high-frequency perturbations than CNNs and there is a high correlation between how well the model learns low-level features and its robustness against different frequency-based perturbations . 1 INTRODUCTION . Transformers are originally applied in natural language processing ( NLP ) tasks as a type of deep neural network ( DNN ) mainly based on the self-attention mechanism ( Vaswani et al . ( 2017 ) ; Devlin et al . ( 2018 ) ; Brown et al . ( 2020 ) ) , and transformers with large-scale pre-training have achieved state-of-the-art results on many NLP tasks ( Devlin et al . ( 2018 ) ; Liu et al . ( 2019 ) ; Yang et al . ( 2019 ) ; Sun et al . ( 2019 ) ) . Recently , Dosovitskiy et al . ( 2020 ) applied a pure transformer directly to sequences of image patches ( i.e. , a vision transformer , ViT ) and showed that the Transformer itself can be competitive with convolutional neural networks ( CNN ) on image classification tasks . Since then transformers have been extended to various vision tasks and show competitive or even better performance compared to CNNs and recurrent neural networks ( RNNs ) ( Carion et al . ( 2020 ) ; Chen et al . ( 2020 ) ; Zhu et al . ( 2020 ) ) . While ViT and its variants hold promise toward a unified machine learning paradigm and architecture applicable to different data modalities , it remains unclear on the robustness of ViT against adversarial perturbations , which is critical for safe and reliable deployment of many real-world applications . In this work , we examine the adversarial robustness of ViTs on image classification tasks and make comparisons with CNN baselines . 
As highlighted in Figure 1 ( a ) , our experimental results illustrate the superior robustness of ViTs than CNNs in both white-box and black-box attack settings , based on which we make the following important findings : • Features learned by ViTs contain less low-level information and benefit adversarial robustness . ViTs achieve a lower attack success rate ( ASR ) of 51.9 % compared with a minimum of 83.3 % by CNNs in Figure 1 ( a ) . They are also less sensitive to high-frequency adversarial perturbations . • Using denoised randomized smoothing ( Salman et al. , 2020 ) , ViTs attain significantly better certified robustness than CNNs . • It takes the cost of adversarial robustness to improve the classification accuracy of ViTs by introducing blocks to help learn low-level features as shown in Figure 1 ( a ) . • Increasing the proportion of transformer blocks in the model leads to better robustness when the model consists of both transformer and CNN blocks . For example , the attack success rate ( ASR ) decreases from 87.1 % to 79.2 % when 10 additional transformer blocks are added to T2T-ViT-14 . However , increasing the size of a pure transformer model can not guarantee a similar effect , e.g. , the robustness of ViT-S/16 is better than that of ViT-B/16 in Figure 1 ( a ) . • Pre-training without adversarial training on larger datasets does not improve adversarial robustness though it is critical for training ViT . • The principle of adversarial training through min-max optimization ( Madry et al . ( 2017 ) ; Zhang et al . ( 2019 ) ) can be applied to train robust ViTs . 2 RELATED WORK . Transformer ( Vaswani et al . ( 2017 ) ) has achieved remarkable performance on many NLP tasks , and its robustness has been studied on those NLP tasks . Hsieh et al . ( 2019 ) ; Jin et al . ( 2020 ) ; Shi & Huang ( 2020 ) ; Li et al . ( 2020 ) ; Garg & Ramakrishnan ( 2020 ) ; Yin et al . ( 2020 ) conducted adversarial attacks on transformers including pre-trained models , and in their experiments transformers usually show better robustness compared to other models based on Long short-term memory ( LSTM ) or CNN , with a theoretical explanation provided in Hsieh et al . ( 2019 ) . However , due to the discrete nature of NLP models , these studies are focusing on discrete perturbations ( e.g. , word or character substitutions ) which are very different from small and continuous perturbations in computer vision tasks . Besides , Wang et al . ( 2020a ) improved the robustness of pre-trained transformers from an information-theoretic perspective , and Shi et al . ( 2020 ) ; Ye et al . ( 2020 ) ; Xu et al . ( 2020 ) studied the robustness certification of transformer-based models . To the best of our knowledge , this work is one of the first studies that investigates the adversarial robustness ( against small perturbations in the input pixel space ) of transformers on computer vision tasks . There are some concurrent works studying the adversarial robustness of ViTs . We supplement their contribution in Appendix G . 3 MODEL ARCHITECTURES . We first review the architecture of models investigated in our experiments including several vision transformers ( ViTs ) and CNN models . A detailed table of comparison is given in Table 5 . 3.1 VISION TRANSFORMERS . We consider the original ViT ( Dosovitskiy et al . ( 2020 ) ) and its four variants shown in Figure 1 ( b ) . Vision transformer ( ViT ) and data-efficient image transformer ( DeiT ) : ViT ( Dosovitskiy et al . 
( 2020 ) ) mostly follows the original design of Transformer ( Vaswani et al . ( 2017 ) ; Devlin et al . ( 2018 ) ) on language tasks . For a 2D image x ∈ RH×W×C with resolution H ×W and C channels , it is divided into a sequence ofN = H·WP 2 flattened 2D patches of size P ×P , xi ∈ R N× ( P 2·C ) ( 1 ≤ i ≤ N ) . The patches are first encoded into patch embeddings with a simple convolutional layer , where the kernel size and stride of the convolution is exactly P × P . In addition , there are also position embeddings to preserve positional information . Similar to BERT ( Devlin et al . ( 2018 ) ) , a large-scale pre-trained model for NLP , a special [ CLS ] token is added to output features for classification . DeiT ( Touvron et al . ( 2021 ) ) further improves the ViT ’ s performance using data augmentation or distillation from CNN teachers with an additional distillation token . We investigate ViT- { S , B , L } /16 , DeiT-S/16 and Dist-DeiT-B/16 as defined in the corresponding papers in the main text and discuss other structures in Appendix F. Hybrid of CNN and ViT ( CNN-ViT ) : Dosovitskiy et al . ( 2020 ) also proposed a hybrid architecture for ViTs by replacing raw image patches with patches extracted from a CNN feature map . This is equivalent to adding learned CNN blocks to the head of ViT as shown in Figure 1 ( b ) . Following Dosovitskiy et al . ( 2020 ) , we investigate ViT-B/16-Res in our experiments , where the input sequence is obtained by flattening the spatial dimensions of the feature maps from ResNet50 . Hybrid of T2T and ViT ( T2T-ViT ) : Yuan et al . ( 2021 ) proposed to overcome the limitations of the simple tokenization in ViTs , by progressively structurizing an image to tokens with a token-to-token ( T2T ) module , which recursively aggregates neighboring tokens into one token such that low-level structures can be better learned . T2T-ViT was shown to perform better than ViT when trained from scratch on a midsize dataset . We investigate T2T-ViT-14 and T2T-ViT-24 in our experiments . Hybrid of shifted windows and ViT ( Swin-T ) : Liu et al . ( 2021 ) computes the representations with shifted windows scheme , which brings greater efficiency by limiting self-attention computation to non-overlapping local windows while also allowing for cross-window connection . We investigate Swin-S/4 in the main text and discuss other structures in Appendix F . 3.2 CONVOLUTIONAL NEURAL NEWORKS . We study several CNN models for comparison , including ResNet18 ( He et al . ( 2016 ) ) , ResNet5032x4d ( He et al . ( 2016 ) ) , ShuffleNet ( Zhang et al . ( 2018 ) ) , MobileNet ( Howard et al . ( 2017 ) ) and VGG16 ( Simonyan & Zisserman ( 2014 ) ) . We also consider the SEResNet50 model , which uses the Squeeze-and-Excitation ( SE ) block ( Hu et al . ( 2018 ) ) that applies attention to channel dimensions to fuse both spatial and channel-wise information within local receptive fields at each layer . The aforementioned CNNs are all trained on ImageNet from scratch . For a better comparison with pre-trained transformers , we also consider two CNN models pre-trained on larger datasets : ResNeXt-32x4d-ssl ( Yalniz et al . ( 2019 ) ) pre-trained on YFCC100M ( Thomee et al . ( 2015 ) ) , and ResNet50-swsl pre-trained on IG-1B-Targeted ( Mahajan et al . ( 2018 ) ) using semi-weakly supervised methods ( Yalniz et al . ( 2019 ) ) . They are both fine-tuned on ImageNet . 4 ADVERSARIAL ROBUSTNESS EVALUATION METHODS . 
We consider the commonly used ` ∞-norm bounded adversarial attacks to evaluate the robustness of target models . An ` ∞ attack is usually formulated as solving a constrained optimization problem : max xadv L ( xadv , y ) s.t . ∥∥xadv − x0∥∥∞ ≤ , ( 1 ) where x0 is a clean example with label y , and we aim to find an adversarial example xadv within an ` ∞ ball with radius centered at x0 , such that the loss of the classifier L ( xadv , y ) is maximized . We consider untargeted attack in this paper , so an attack is successful if the perturbation successfully changes the model ’ s prediction . The attacks as well as a randomized smoothing method used in this paper are listed below . White-box attack Four white-box attacks are involved in our experiments . The Projected Gradient Decent ( PGD ) attack ( Madry et al . ( 2017 ) ) solves Eq . 1 by iteratively taking gradient ascent : xadvt+1 = Clipx0 , ( x adv t + α · sgn ( ∇xJ ( xadvt , y ) ) ) , ( 2 ) where xadvt stands for the solution after t iterations , and Clipx0 , ( · ) denotes clipping the values to make each xadvt+1 , i within [ x0 , i − , x0 , i + ] , according to the ` ∞ threat model . As a special case , Fast Gradient Sign Method ( FGSM ) ( Goodfellow et al . ( 2014 ) ) uses a single iteration with t = 1 . AutoAttack ( Croce & Hein ( 2020 ) ) is currently the strongest white-box attack which evaluates adversarial robustness with a parameter-free ensemble of diverse attacks . We also design a frequencybased attack for analysis , which conducts attack under an additional frequency constraint : xadvfreq = IDCT ( DCT ( x adv pgd − x0 ) Mf ) + x0 , ( 3 ) where DCT and IDCT stand for discrete cosine transform and inverse discrete cosine transform respectively , xadvpgd stands for the adversarial example generated by PGD , and Mf stands for the mask metric defined the frequency filter which is illustrated in Appendix B . We found this design similar to Wang et al . ( 2020b ) . Black-box attack We consider the transfer attack which studies whether an adversarial perturbation generated by attacking the source model can successfully fool the target model . This test not only evaluates the robustness of models under the black-box setting , but also becomes a sanity check for detecting the obfuscated gradient phenomenon ( Athalye et al . ( 2018 ) ) . Previous works have demonstrated that single-step attacks like FGSM enjoys better transferability than multi-step attacks ( Kurakin et al . ( 2017 ) ) . We thus use FGSM for transfer attack in our experiments . Denoised Randomized Smoothing We also evaluate the certified robustness of the models using randomized smoothing , where the robustness is evaluated as the certified radius , and the model is certified to be robust with high probability for perturbations within the radius . We follow Salman et al . ( 2020 ) to train a DnCNN ( Zhang et al . ( 2017 ) ) denoiser Dθ for each pre-trained model f with the “ stability ” objective with LCE denoting cross entropy and N denoting Gaussian distribution : LStab = E ( xi , yi ) ∈D , δLCE ( f ( Dθ ( xi + δ ) ) , f ( xi ) ) where δ ∼ N ( 0 , σ 2I ) . ( 4 ) Randomized smoothing is applied on the denoised classifier f ◦ Dθ for robustness certification : g ( x ) = arg max c∈Y P [ f ( Dθ ( x+ δ ) ) = c ] where δ ∼ N ( 0 , σ2I ) . ( 5 ) The certified radius is then calculated for the smoothed classifier as ( Cohen et al. 
, 2019 ) : R = σ/2 ( Φ−1 ( pA ) − Φ−1 ( pB ) ) , ( 6 ) , where Φ−1 is the inverse of the standard Gaussian CDF , pA = P ( f ( x+δ ) = cA ) is the confidence of the top-1 predicted class cA , and pB = max c≠cA P ( f ( x+ δ ) = c ) is the confidence for the second top class . Accordingly , given a perturbation radius , the certified accuracy under this perturbation radius can be evaluated by comparing the given radius to the certified radius R .",
"role": "user"
},
{
"content": "This work studies the adversarial robustness of vision transformers comprehensively. The author first presents their finding via empirical experiments. Concretely, they show the robustness of vision transformers from the perspective of adversarial attack, transferability, and certified robustness, adversarial training defense. Then, they analyze the reason behind the observation.",
"role": "assistant"
}
]
|
science.evidence_inference | Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:
- Treatment: The specific medical procedure or drug being evaluated.
- Control: The standard or placebo used for comparison. If absent, use `null`.
- Result: The health outcome being assessed.
- Impact: The observed effect of the treatment on the result, categorized as: ("increased", "decreased", "no change").
- Justification: The supporting details for the observed impact.
Format the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].
Abstract:
ABSTRACT:
Objectives.To perform a 6-week double-blind RCT in Raynaud's phenomenon (RP) comparing the plant extract St. John's Wort (SJW) to placebo. Methods. RP patients having at least 7 attacks per week were stratified by primary and secondary RP and within secondary by systemic sclerosis or other connective tissue disease. Subjects completed a daily standardized diary recording all RP attacks (frequency, duration and severity). Serum levels of 18 inflammatory and angiogenic cytokines were measured pre- and post-treatment. Results. Eighteen patients completed the study; 8 received SJW and 10 placebo. The decrease in mean number of attacks per day was 0.75 with SJW and 1.01 with placebo, P = 0.06. Attack duration and severity were not different between groups. Cytokine analyses demonstrated no between-groups differences. Combining treatment groups, those with >50% improvement in frequency of attacks yielded a significant increase in E-selectin (P = 0.049), MMP-9 (P = 0.011), G-CSF (P = 0.02), and VEGF (P = 0.012) pre- versus post-treatment. A ≥50% improvement in severity of attacks corresponded to a significant increase in levels of sVCAM-1 (P = 0.003), sICAM-1 (P = 0.007), and MCP-1 (P = 0.004). Conclusions. There were no clinical or biomarker benefit of SJW versus placebo in RP. However, combining all patients, there were changes in some cytokines that may be further investigated.
Extracted Data: | [["plant extract St. John\u0027s Wort (SJW)", "placebo", "mean number of attacks per day", "no significant difference", "The decrease in mean number of attacks per day was 0.75 with SJW and 1.01 with placebo, P = 0.06."]] | science.evidence_inference.41 | [
{
"content": "Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:\n\n- Treatment: The specific medical procedure or drug being evaluated.\n- Control: The standard or placebo used for comparison. If absent, use `null`.\n- Result: The health outcome being assessed.\n- Impact: The observed effect of the treatment on the result, categorized as: (\"increased\", \"decreased\", \"no change\").\n- Justification: The supporting details for the observed impact.\n\nFormat the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].\n\nAbstract:\nABSTRACT: \nObjectives.To perform a 6-week double-blind RCT in Raynaud's phenomenon (RP) comparing the plant extract St. John's Wort (SJW) to placebo. Methods. RP patients having at least 7 attacks per week were stratified by primary and secondary RP and within secondary by systemic sclerosis or other connective tissue disease. Subjects completed a daily standardized diary recording all RP attacks (frequency, duration and severity). Serum levels of 18 inflammatory and angiogenic cytokines were measured pre- and post-treatment. Results. Eighteen patients completed the study; 8 received SJW and 10 placebo. The decrease in mean number of attacks per day was 0.75 with SJW and 1.01 with placebo, P = 0.06. Attack duration and severity were not different between groups. Cytokine analyses demonstrated no between-groups differences. Combining treatment groups, those with >50% improvement in frequency of attacks yielded a significant increase in E-selectin (P = 0.049), MMP-9 (P = 0.011), G-CSF (P = 0.02), and VEGF (P = 0.012) pre- versus post-treatment. A ≥50% improvement in severity of attacks corresponded to a significant increase in levels of sVCAM-1 (P = 0.003), sICAM-1 (P = 0.007), and MCP-1 (P = 0.004). Conclusions. There were no clinical or biomarker benefit of SJW versus placebo in RP. However, combining all patients, there were changes in some cytokines that may be further investigated.\n\nExtracted Data:",
"role": "user"
},
{
"content": "[[\"plant extract St. John\\u0027s Wort (SJW)\", \"placebo\", \"mean number of attacks per day\", \"no significant difference\", \"The decrease in mean number of attacks per day was 0.75 with SJW and 1.01 with placebo, P = 0.06.\"]]",
"role": "assistant"
}
]
|
science.bioasq_list_qa | Examine the following excerpts from biomedical literature. Your objective is to extract relevant information to answer the posed question. Format your answer as a JSON array.
Context: Lymphangiogenesis plays an important role in cancer progression and is regulated by a complex mechanism that includes vascular endothelial growth factor (VEGF) signaling.
Our results demonstrate that NHT stimulates lymphangiogenesis via upregulation of VEGF-C and -D
Vegfc is essential for secondary angiogenesis, giving rise to veins and lymphatics
vegfc and vegfd cooperatively control lymphangiogenesis throughout the embryo, including during the formation of the trunk lymphatic vasculature.
The signals for lymphangiogenesis, VEGF-C and its receptor VEGF-R3
Vascular endothelial growth factor (VEGF) members VEGF-C and VEGF-D are both potent candidates for stimulating lymphangiogenesis,
Question: List factors that promote lymphangiogenesis. | ["VEGF-C", "VEGF-D", "VEGF-R3"] | science.bioasq_list_qa.508 | [
{
"content": "Examine the following excerpts from biomedical literature. Your objective is to extract relevant information to answer the posed question. Format your answer as a JSON array.\n\nContext: Lymphangiogenesis plays an important role in cancer progression and is regulated by a complex mechanism that includes vascular endothelial growth factor (VEGF) signaling.\nur results demonstrate that NHT stimulates lymphangiogenesis via upregulation of VEGF-C and -D\nVegfc is essential for secondary angiogenesis, giving rise to veins and lymphatics\nvegfc and vegfd cooperatively control lymphangiogenesis throughout the embryo, including during the formation of the trunk lymphatic vasculature.\nThe signals for lymphangiogenesis, VEGF-C and its receptor VEGF-R3\nVascular endothelial growth factor (VEGF) members VEGF-C and VEGF-D are both potent candidates for stimulating lymphangiogenesis,\n\nQuestion: List factors that promote lymphangiogenesis.",
"role": "user"
},
{
"content": "[\"VEGF-C\", \"VEGF-D\", \"VEGF-R3\"]",
"role": "assistant"
}
]
|
science.multicite_intent_classification | Examine the text below, which includes a citation marked by "<cite>" and "</cite>". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.
Provide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.
Text with Citation:
To facilitate comparison with the state of the art, we use the same treebanks and experimental setup as <cite>Naseem et al. (2012)</cite> . | ["Uses"] | science.multicite_intent_classification.691 | [
{
"content": "Examine the text below, which includes a citation marked by \"<cite>\" and \"</cite>\". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.\n\nProvide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.\n\nText with Citation:\nTo facilitate comparison with the state of the art, we use the same treebanks and experimental setup as <cite>Naseem et al. (2012)</cite> .",
"role": "user"
},
{
"content": "[\"Uses\"]",
"role": "assistant"
}
]
|
science.qasper_abstractive_qa | Examine the provided sections from a research document and address the accompanying query. The text is divided by newlines.
Document Content:
A BERT-Based Transfer Learning Approach for Hate Speech Detection in Online Social Media
Generated hateful and toxic content by a portion of users in social media is a rising phenomenon that motivated researchers to dedicate substantial efforts to the challenging direction of hateful content identification. We not only need an efficient automatic hate speech detection model based on advanced machine learning and natural language processing, but also a sufficiently large amount of annotated data to train a model. The lack of a sufficient amount of labelled hate speech data, along with the existing biases, has been the main issue in this domain of research. To address these needs, in this study we introduce a novel transfer learning approach based on an existing pre-trained language model called BERT (Bidirectional Encoder Representations from Transformers). More specifically, we investigate the ability of BERT at capturing hateful context within social media content by using new fine-tuning methods based on transfer learning. To evaluate our proposed approach, we use two publicly available datasets that have been annotated for racism, sexism, hate, or offensive content on Twitter. The results show that our solution obtains considerable performance on these datasets in terms of precision and recall in comparison to existing approaches. Consequently, our model can capture some biases in data annotation and collection process and can potentially lead us to a more accurate model.
Methodology
Here, we analyze the BERT transformer model on the hate speech detection task. BERT is a multi-layer bidirectional transformer encoder trained on the English Wikipedia and the Book Corpus containing 2,500M and 800M tokens, respectively, and has two models named BERTbase and BERTlarge. BERTbase contains an encoder with 12 layers (transformer blocks), 12 self-attention heads, and 110 million parameters whereas BERTlarge has 24 layers, 16 attention heads, and 340 million parameters. Extracted embeddings from BERTbase have 768 hidden dimensions BIBREF11. As the BERT model is pre-trained on general corpora, and for our hate speech detection task we are dealing with social media content, therefore as a crucial step, we have to analyze the contextual information extracted from BERT' s pre-trained layers and then fine-tune it using annotated datasets. By fine-tuning we update weights using a labelled dataset that is new to an already trained model. As an input and output, BERT takes a sequence of tokens in maximum length 512 and produces a representation of the sequence in a 768-dimensional vector. BERT inserts at most two segments to each input sequence, [CLS] and [SEP]. [CLS] embedding is the first token of the input sequence and contains the special classification embedding which we take the first token [CLS] in the final hidden layer as the representation of the whole sequence in hate speech classification task. The [SEP] separates segments and we will not use it in our classification task. To perform the hate speech detection task, we use BERTbase model to classify each tweet as Racism, Sexism, Neither or Hate, Offensive, Neither in our datasets. In order to do that, we focus on fine-tuning the pre-trained BERTbase parameters. By fine-tuning, we mean training a classifier with different layers of 768 dimensions on top of the pre-trained BERTbase transformer to minimize task-specific parameters.
Methodology ::: Fine-Tuning Strategies
Different layers of a neural network can capture different levels of syntactic and semantic information. The lower layer of the BERT model may contain more general information whereas the higher layers contain task-specific information BIBREF11, and we can fine-tune them with different learning rates. Here, four different fine-tuning approaches are implemented that exploit pre-trained BERTbase transformer encoders for our classification task. More information about these transformer encoders' architectures are presented in BIBREF11. In the fine-tuning phase, the model is initialized with the pre-trained parameters and then are fine-tuned using the labelled datasets. Different fine-tuning approaches on the hate speech detection task are depicted in Figure FIGREF8, in which $X_{i}$ is the vector representation of token $i$ in a tweet sample, and are explained in more detail as follows:
1. BERT based fine-tuning: In the first approach, which is shown in Figure FIGREF8, very few changes are applied to the BERTbase. In this architecture, only the [CLS] token output provided by BERT is used. The [CLS] output, which is equivalent to the [CLS] token output of the 12th transformer encoder, a vector of size 768, is given as input to a fully connected network without hidden layer. The softmax activation function is applied to the hidden layer to classify.
2. Insert nonlinear layers: Here, the first architecture is upgraded and an architecture with a more robust classifier is provided in which instead of using a fully connected network without hidden layer, a fully connected network with two hidden layers in size 768 is used. The first two layers use the Leaky Relu activation function with negative slope = 0.01, but the final layer, as the first architecture, uses softmax activation function as shown in Figure FIGREF8.
3. Insert Bi-LSTM layer: Unlike previous architectures that only use [CLS] as the input for the classifier, in this architecture all outputs of the latest transformer encoder are used in such a way that they are given as inputs to a bidirectional recurrent neural network (Bi-LSTM) as shown in Figure FIGREF8. After processing the input, the network sends the final hidden state to a fully connected network that performs classification using the softmax activation function.
4. Insert CNN layer: In this architecture shown in Figure FIGREF8, the outputs of all transformer encoders are used instead of using the output of the latest transformer encoder. So that the output vectors of each transformer encoder are concatenated, and a matrix is produced. The convolutional operation is performed with a window of size (3, hidden size of BERT which is 768 in BERTbase model) and the maximum value is generated for each transformer encoder by applying max pooling on the convolution output. By concatenating these values, a vector is generated which is given as input to a fully connected network. By applying softmax on the input, the classification operation is performed.
Experiments and Results ::: Error Analysis
Although we have very interesting results in term of recall, the precision of the model shows the portion of false detection we have. To understand better this phenomenon, in this section we perform a deep analysis on the error of the model. We investigate the test datasets and their confusion matrices resulted from the BERTbase + CNN model as the best fine-tuning approach; depicted in Figures FIGREF19 and FIGREF19. According to Figure FIGREF19 for Waseem-dataset, it is obvious that the model can separate sexism from racism content properly. Only two samples belonging to racism class are misclassified as sexism and none of the sexism samples are misclassified as racism. A large majority of the errors come from misclassifying hateful categories (racism and sexism) as hatless (neither) and vice versa. 0.9% and 18.5% of all racism samples are misclassified as sexism and neither respectively whereas it is 0% and 12.7% for sexism samples. Almost 12% of neither samples are misclassified as racism or sexism. As Figure FIGREF19 makes clear for Davidson-dataset, the majority of errors are related to hate class where the model misclassified hate content as offensive in 63% of the cases. However, 2.6% and 7.9% of offensive and neither samples are misclassified respectively.
To understand better the mislabeled items by our model, we did a manual inspection on a subset of the data and record some of them in Tables TABREF20 and TABREF21. Considering the words such as “daughters", “women", and “burka" in tweets with IDs 1 and 2 in Table TABREF20, it can be understood that our BERT based classifier is confused with the contextual semantic between these words in the samples and misclassified them as sexism because they are mainly associated to femininity. In some cases containing implicit abuse (like subtle insults) such as tweets with IDs 5 and 7, our model cannot capture the hateful/offensive content and therefore misclassifies. It should be noticed that even for a human it is difficult to discriminate against this kind of implicit abuses.
By examining more samples and with respect to recently studies BIBREF2, BIBREF24, BIBREF19, it is clear that many errors are due to biases from data collection BIBREF19 and rules of annotation BIBREF24 and not the classifier itself. Since Waseem et al.BIBREF5 created a small ad-hoc set of keywords and Davidson et al.BIBREF9 used a large crowdsourced dictionary of keywords (Hatebase lexicon) to sample tweets for training, they included some biases in the collected data. Especially for Davidson-dataset, some tweets with specific language (written within the African American Vernacular English) and geographic restriction (United States of America) are oversampled such as tweets containing disparage words “nigga", “faggot", “coon", or “queer", result in high rates of misclassification. However, these misclassifications do not confirm the low performance of our classifier because annotators tended to annotate many samples containing disrespectful words as hate or offensive without any presumption about the social context of tweeters such as the speaker’s identity or dialect, whereas they were just offensive or even neither tweets. Tweets IDs 6, 8, and 10 are some samples containing offensive words and slurs which arenot hate or offensive in all cases and writers of them used this type of language in their daily communications. Given these pieces of evidence, by considering the content of tweets, we can see in tweets IDs 3, 4, and 9 that our BERT-based classifier can discriminate tweets in which neither and implicit hatred content exist. One explanation of this observation may be the pre-trained general knowledge that exists in our model. Since the pre-trained BERT model is trained on general corpora, it has learned general knowledge from normal textual data without any purposely hateful or offensive language. Therefore, despite the bias in the data, our model can differentiate hate and offensive samples accurately by leveraging knowledge-aware language understanding that it has and it can be the main reason for high misclassifications of hate samples as offensive (in reality they are more similar to offensive rather than hate by considering social context, geolocation, and dialect of tweeters).
Conclusion
Conflating hatred content with offensive or harmless language causes online automatic hate speech detection tools to flag user-generated content incorrectly. Not addressing this problem may bring about severe negative consequences for both platforms and users such as decreasement of platforms' reputation or users abandonment. Here, we propose a transfer learning approach advantaging the pre-trained language model BERT to enhance the performance of a hate speech detection system and to generalize it to new datasets. To that end, we introduce new fine-tuning strategies to examine the effect of different layers of BERT in hate speech detection task. The evaluation results indicate that our model outperforms previous works by profiting the syntactical and contextual information embedded in different transformer encoder layers of the BERT model using a CNN-based fine-tuning strategy. Furthermore, examining the results shows the ability of our model to detect some biases in the process of collecting or annotating datasets. It can be a valuable clue in using the pre-trained BERT model to alleviate bias in hate speech datasets in future studies, by investigating a mixture of contextual information embedded in the BERT’s layers and a set of features associated to the different type of biases in data.
Table 1: Dataset statistics of the both Waseem-dataset (a) and Davidson-dataset (b). Splits are produced using stratified sampling to select 0.8, 0.1, and 0.1 portions of tweets from each class (racism/sexism/neither or hate/offensive/neither) for train, validation, and test samples, respectively.
Query: What biases does their model capture?
JSON Response: | {"answer": "Data annotation biases where tweet containing disrespectful words are annotated as hate or offensive without any presumption about the social context of tweeters", "evidence": ["By examining more samples and with respect to recently studies BIBREF2, BIBREF24, BIBREF19, it is clear that many errors are due to biases from data collection BIBREF19 and rules of annotation BIBREF24 and not the classifier itself. Since Waseem et al.BIBREF5 created a small ad-hoc set of keywords and Davidson et al.BIBREF9 used a large crowdsourced dictionary of keywords (Hatebase lexicon) to sample tweets for training, they included some biases in the collected data. Especially for Davidson-dataset, some tweets with specific language (written within the African American Vernacular English) and geographic restriction (United States of America) are oversampled such as tweets containing disparage words \u201cnigga\", \u201cfaggot\", \u201ccoon\", or \u201cqueer\", result in high rates of misclassification. However, these misclassifications do not confirm the low performance of our classifier because annotators tended to annotate many samples containing disrespectful words as hate or offensive without any presumption about the social context of tweeters such as the speaker\u2019s identity or dialect, whereas they were just offensive or even neither tweets. Tweets IDs 6, 8, and 10 are some samples containing offensive words and slurs which arenot hate or offensive in all cases and writers of them used this type of language in their daily communications. Given these pieces of evidence, by considering the content of tweets, we can see in tweets IDs 3, 4, and 9 that our BERT-based classifier can discriminate tweets in which neither and implicit hatred content exist. One explanation of this observation may be the pre-trained general knowledge that exists in our model. Since the pre-trained BERT model is trained on general corpora, it has learned general knowledge from normal textual data without any purposely hateful or offensive language. Therefore, despite the bias in the data, our model can differentiate hate and offensive samples accurately by leveraging knowledge-aware language understanding that it has and it can be the main reason for high misclassifications of hate samples as offensive (in reality they are more similar to offensive rather than hate by considering social context, geolocation, and dialect of tweeters)."]} | science.qasper_abstractive_qa.158.2 | [
{
"content": "Examine the provided sections from a research document and address the accompanying query. The text is divided by newlines.\n\nDocument Content:\n\nA BERT-Based Transfer Learning Approach for Hate Speech Detection in Online Social Media\n\nGenerated hateful and toxic content by a portion of users in social media is a rising phenomenon that motivated researchers to dedicate substantial efforts to the challenging direction of hateful content identification. We not only need an efficient automatic hate speech detection model based on advanced machine learning and natural language processing, but also a sufficiently large amount of annotated data to train a model. The lack of a sufficient amount of labelled hate speech data, along with the existing biases, has been the main issue in this domain of research. To address these needs, in this study we introduce a novel transfer learning approach based on an existing pre-trained language model called BERT (Bidirectional Encoder Representations from Transformers). More specifically, we investigate the ability of BERT at capturing hateful context within social media content by using new fine-tuning methods based on transfer learning. To evaluate our proposed approach, we use two publicly available datasets that have been annotated for racism, sexism, hate, or offensive content on Twitter. The results show that our solution obtains considerable performance on these datasets in terms of precision and recall in comparison to existing approaches. Consequently, our model can capture some biases in data annotation and collection process and can potentially lead us to a more accurate model.\n\nMethodology\nHere, we analyze the BERT transformer model on the hate speech detection task. BERT is a multi-layer bidirectional transformer encoder trained on the English Wikipedia and the Book Corpus containing 2,500M and 800M tokens, respectively, and has two models named BERTbase and BERTlarge. BERTbase contains an encoder with 12 layers (transformer blocks), 12 self-attention heads, and 110 million parameters whereas BERTlarge has 24 layers, 16 attention heads, and 340 million parameters. Extracted embeddings from BERTbase have 768 hidden dimensions BIBREF11. As the BERT model is pre-trained on general corpora, and for our hate speech detection task we are dealing with social media content, therefore as a crucial step, we have to analyze the contextual information extracted from BERT' s pre-trained layers and then fine-tune it using annotated datasets. By fine-tuning we update weights using a labelled dataset that is new to an already trained model. As an input and output, BERT takes a sequence of tokens in maximum length 512 and produces a representation of the sequence in a 768-dimensional vector. BERT inserts at most two segments to each input sequence, [CLS] and [SEP]. [CLS] embedding is the first token of the input sequence and contains the special classification embedding which we take the first token [CLS] in the final hidden layer as the representation of the whole sequence in hate speech classification task. The [SEP] separates segments and we will not use it in our classification task. To perform the hate speech detection task, we use BERTbase model to classify each tweet as Racism, Sexism, Neither or Hate, Offensive, Neither in our datasets. In order to do that, we focus on fine-tuning the pre-trained BERTbase parameters. 
By fine-tuning, we mean training a classifier with different layers of 768 dimensions on top of the pre-trained BERTbase transformer to minimize task-specific parameters.\n\nMethodology ::: Fine-Tuning Strategies\nDifferent layers of a neural network can capture different levels of syntactic and semantic information. The lower layer of the BERT model may contain more general information whereas the higher layers contain task-specific information BIBREF11, and we can fine-tune them with different learning rates. Here, four different fine-tuning approaches are implemented that exploit pre-trained BERTbase transformer encoders for our classification task. More information about these transformer encoders' architectures are presented in BIBREF11. In the fine-tuning phase, the model is initialized with the pre-trained parameters and then are fine-tuned using the labelled datasets. Different fine-tuning approaches on the hate speech detection task are depicted in Figure FIGREF8, in which $X_{i}$ is the vector representation of token $i$ in a tweet sample, and are explained in more detail as follows:\n1. BERT based fine-tuning: In the first approach, which is shown in Figure FIGREF8, very few changes are applied to the BERTbase. In this architecture, only the [CLS] token output provided by BERT is used. The [CLS] output, which is equivalent to the [CLS] token output of the 12th transformer encoder, a vector of size 768, is given as input to a fully connected network without hidden layer. The softmax activation function is applied to the hidden layer to classify.\n2. Insert nonlinear layers: Here, the first architecture is upgraded and an architecture with a more robust classifier is provided in which instead of using a fully connected network without hidden layer, a fully connected network with two hidden layers in size 768 is used. The first two layers use the Leaky Relu activation function with negative slope = 0.01, but the final layer, as the first architecture, uses softmax activation function as shown in Figure FIGREF8.\n3. Insert Bi-LSTM layer: Unlike previous architectures that only use [CLS] as the input for the classifier, in this architecture all outputs of the latest transformer encoder are used in such a way that they are given as inputs to a bidirectional recurrent neural network (Bi-LSTM) as shown in Figure FIGREF8. After processing the input, the network sends the final hidden state to a fully connected network that performs classification using the softmax activation function.\n4. Insert CNN layer: In this architecture shown in Figure FIGREF8, the outputs of all transformer encoders are used instead of using the output of the latest transformer encoder. So that the output vectors of each transformer encoder are concatenated, and a matrix is produced. The convolutional operation is performed with a window of size (3, hidden size of BERT which is 768 in BERTbase model) and the maximum value is generated for each transformer encoder by applying max pooling on the convolution output. By concatenating these values, a vector is generated which is given as input to a fully connected network. By applying softmax on the input, the classification operation is performed.\n\nExperiments and Results ::: Error Analysis\nAlthough we have very interesting results in term of recall, the precision of the model shows the portion of false detection we have. To understand better this phenomenon, in this section we perform a deep analysis on the error of the model. 
We investigate the test datasets and their confusion matrices resulted from the BERTbase + CNN model as the best fine-tuning approach; depicted in Figures FIGREF19 and FIGREF19. According to Figure FIGREF19 for Waseem-dataset, it is obvious that the model can separate sexism from racism content properly. Only two samples belonging to racism class are misclassified as sexism and none of the sexism samples are misclassified as racism. A large majority of the errors come from misclassifying hateful categories (racism and sexism) as hatless (neither) and vice versa. 0.9% and 18.5% of all racism samples are misclassified as sexism and neither respectively whereas it is 0% and 12.7% for sexism samples. Almost 12% of neither samples are misclassified as racism or sexism. As Figure FIGREF19 makes clear for Davidson-dataset, the majority of errors are related to hate class where the model misclassified hate content as offensive in 63% of the cases. However, 2.6% and 7.9% of offensive and neither samples are misclassified respectively.\nTo understand better the mislabeled items by our model, we did a manual inspection on a subset of the data and record some of them in Tables TABREF20 and TABREF21. Considering the words such as “daughters\", “women\", and “burka\" in tweets with IDs 1 and 2 in Table TABREF20, it can be understood that our BERT based classifier is confused with the contextual semantic between these words in the samples and misclassified them as sexism because they are mainly associated to femininity. In some cases containing implicit abuse (like subtle insults) such as tweets with IDs 5 and 7, our model cannot capture the hateful/offensive content and therefore misclassifies. It should be noticed that even for a human it is difficult to discriminate against this kind of implicit abuses.\nBy examining more samples and with respect to recently studies BIBREF2, BIBREF24, BIBREF19, it is clear that many errors are due to biases from data collection BIBREF19 and rules of annotation BIBREF24 and not the classifier itself. Since Waseem et al.BIBREF5 created a small ad-hoc set of keywords and Davidson et al.BIBREF9 used a large crowdsourced dictionary of keywords (Hatebase lexicon) to sample tweets for training, they included some biases in the collected data. Especially for Davidson-dataset, some tweets with specific language (written within the African American Vernacular English) and geographic restriction (United States of America) are oversampled such as tweets containing disparage words “nigga\", “faggot\", “coon\", or “queer\", result in high rates of misclassification. However, these misclassifications do not confirm the low performance of our classifier because annotators tended to annotate many samples containing disrespectful words as hate or offensive without any presumption about the social context of tweeters such as the speaker’s identity or dialect, whereas they were just offensive or even neither tweets. Tweets IDs 6, 8, and 10 are some samples containing offensive words and slurs which arenot hate or offensive in all cases and writers of them used this type of language in their daily communications. Given these pieces of evidence, by considering the content of tweets, we can see in tweets IDs 3, 4, and 9 that our BERT-based classifier can discriminate tweets in which neither and implicit hatred content exist. One explanation of this observation may be the pre-trained general knowledge that exists in our model. 
Since the pre-trained BERT model is trained on general corpora, it has learned general knowledge from normal textual data without any purposely hateful or offensive language. Therefore, despite the bias in the data, our model can differentiate hate and offensive samples accurately by leveraging knowledge-aware language understanding that it has and it can be the main reason for high misclassifications of hate samples as offensive (in reality they are more similar to offensive rather than hate by considering social context, geolocation, and dialect of tweeters).\n\nConclusion\nConflating hatred content with offensive or harmless language causes online automatic hate speech detection tools to flag user-generated content incorrectly. Not addressing this problem may bring about severe negative consequences for both platforms and users such as decreasement of platforms' reputation or users abandonment. Here, we propose a transfer learning approach advantaging the pre-trained language model BERT to enhance the performance of a hate speech detection system and to generalize it to new datasets. To that end, we introduce new fine-tuning strategies to examine the effect of different layers of BERT in hate speech detection task. The evaluation results indicate that our model outperforms previous works by profiting the syntactical and contextual information embedded in different transformer encoder layers of the BERT model using a CNN-based fine-tuning strategy. Furthermore, examining the results shows the ability of our model to detect some biases in the process of collecting or annotating datasets. It can be a valuable clue in using the pre-trained BERT model to alleviate bias in hate speech datasets in future studies, by investigating a mixture of contextual information embedded in the BERT’s layers and a set of features associated to the different type of biases in data.\n\nTable 1: Dataset statistics of the both Waseem-dataset (a) and Davidson-dataset (b). Splits are produced using stratified sampling to select 0.8, 0.1, and 0.1 portions of tweets from each class (racism/sexism/neither or hate/offensive/neither) for train, validation, and test samples, respectively.\n\nQuery: What biases does their model capture?\n\nJSON Response:",
"role": "user"
},
{
"content": "{\"answer\": \"Data annotation biases where tweet containing disrespectful words are annotated as hate or offensive without any presumption about the social context of tweeters\", \"evidence\": [\"By examining more samples and with respect to recently studies BIBREF2, BIBREF24, BIBREF19, it is clear that many errors are due to biases from data collection BIBREF19 and rules of annotation BIBREF24 and not the classifier itself. Since Waseem et al.BIBREF5 created a small ad-hoc set of keywords and Davidson et al.BIBREF9 used a large crowdsourced dictionary of keywords (Hatebase lexicon) to sample tweets for training, they included some biases in the collected data. Especially for Davidson-dataset, some tweets with specific language (written within the African American Vernacular English) and geographic restriction (United States of America) are oversampled such as tweets containing disparage words \\u201cnigga\\\", \\u201cfaggot\\\", \\u201ccoon\\\", or \\u201cqueer\\\", result in high rates of misclassification. However, these misclassifications do not confirm the low performance of our classifier because annotators tended to annotate many samples containing disrespectful words as hate or offensive without any presumption about the social context of tweeters such as the speaker\\u2019s identity or dialect, whereas they were just offensive or even neither tweets. Tweets IDs 6, 8, and 10 are some samples containing offensive words and slurs which arenot hate or offensive in all cases and writers of them used this type of language in their daily communications. Given these pieces of evidence, by considering the content of tweets, we can see in tweets IDs 3, 4, and 9 that our BERT-based classifier can discriminate tweets in which neither and implicit hatred content exist. One explanation of this observation may be the pre-trained general knowledge that exists in our model. Since the pre-trained BERT model is trained on general corpora, it has learned general knowledge from normal textual data without any purposely hateful or offensive language. Therefore, despite the bias in the data, our model can differentiate hate and offensive samples accurately by leveraging knowledge-aware language understanding that it has and it can be the main reason for high misclassifications of hate samples as offensive (in reality they are more similar to offensive rather than hate by considering social context, geolocation, and dialect of tweeters).\"]}",
"role": "assistant"
}
]
|
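The fine-tuning strategies described in the record above lend themselves to a compact implementation. Below is a minimal, non-authoritative sketch of the fourth strategy (a convolution over the outputs of all twelve transformer encoders), written with the Hugging Face transformers and PyTorch APIs; the number of filters, the three-class output size and the exact pooling arrangement are assumptions, since the record leaves them underspecified.

```python
import torch
import torch.nn as nn
from transformers import BertModel

class BertCNNClassifier(nn.Module):
    """Hedged sketch of the 'insert CNN layer' fine-tuning strategy."""
    def __init__(self, n_classes: int = 3, n_filters: int = 32):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        # A window of (3, hidden_size) slides along the token axis of each layer.
        self.conv = nn.Conv2d(1, n_filters,
                              kernel_size=(3, self.bert.config.hidden_size))
        self.classifier = nn.Linear(12 * n_filters, n_classes)

    def forward(self, input_ids, attention_mask=None):
        out = self.bert(input_ids, attention_mask=attention_mask,
                        output_hidden_states=True)
        pooled = []
        for layer in out.hidden_states[1:]:                 # the 12 encoder outputs
            fmap = torch.relu(self.conv(layer.unsqueeze(1)))        # (B, F, L-2, 1)
            pooled.append(fmap.squeeze(-1).max(dim=-1).values)      # max over tokens -> (B, F)
        return self.classifier(torch.cat(pooled, dim=-1))   # logits; softmax lives in the loss
```

In practice the returned logits would be fed to a cross-entropy loss, which applies the softmax mentioned in the record.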
science.discomat_te | | Isotope | Eg (keV) | G (mm/s) | Source material | Half life of precursor | b (%) |
| 119Sn | 23.87 | 0.647 | Ba 119m SnO 3 | 245 days | 8.58 |
| 125Te | 35.46 | 5.208 | Mg 125m TeO 6 | 57 days | 6.99 |
| 129I | 27.77 | 0.586 | Zn 129m Te | 33 days | Artificial |
Caption: Useful Mossbauer parameters a
Examine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:
{"value": "xx", "type": "Other"}
{"value": "xx", "type": "Glass_Compound_Amount", "constituent": "xx", "unit": "xx", "material": "xx"}
Process each cell sequentially, and for any unknown attributes, use "xx" as a placeholder. Each JSON should be on a new line.
Output: | {"value": "23.87", "type": "Other"}
{"value": "0.647", "type": "Other"}
{"value": "8.58", "type": "Other"}
{"value": "35.46", "type": "Other"}
{"value": "5.208", "type": "Other"}
{"value": "6.99", "type": "Other"}
{"value": "27.77", "type": "Other"}
{"value": "0.586", "type": "Other"} | science.discomat_te.155 | [
{
"content": "| Isotope | Eg (keV) | G (mm/s) | Source material | Half life of precursor | b (%) |\n| 119Sn | 23.87 | 0.647 | Ba 119m SnO 3 | 245 days | 8.58 |\n| 125Te | 35.46 | 5.208 | Mg 125m TeO 6 | 57 days | 6.99 |\n| 129I | 27.77 | 0.586 | Zn 129m Te | 33 days | Artificial |\n\nCaption: Useful Mossbauer parameters a\n\nExamine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:\n{\"value\": \"xx\", \"type\": \"Other\"}\n{\"value\": \"xx\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"xx\", \"unit\": \"xx\", \"material\": \"xx\"}\n\nProcess each cell sequentially, and for any unknown attributes, use \"xx\" as a placeholder. Each JSON should be on a new line.\n\nOutput:",
"role": "user"
},
{
"content": "{\"value\": \"23.87\", \"type\": \"Other\"}\n{\"value\": \"0.647\", \"type\": \"Other\"}\n{\"value\": \"8.58\", \"type\": \"Other\"}\n{\"value\": \"35.46\", \"type\": \"Other\"}\n{\"value\": \"5.208\", \"type\": \"Other\"}\n{\"value\": \"6.99\", \"type\": \"Other\"}\n{\"value\": \"27.77\", \"type\": \"Other\"}\n{\"value\": \"0.586\", \"type\": \"Other\"}",
"role": "assistant"
}
]
|
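Records like the one above ask for every numeric cell of a pipe-delimited table to be serialized as a JSON line. Purely as an illustration (not part of the dataset), a minimal Python sketch of that conversion is given below; it only emits the default "Other" type, since assigning Glass_Compound_Amount entries would additionally require the caption and composition context.

```python
import json

def numeric_cells_to_json(table: str) -> None:
    """Emit {"value": ..., "type": "Other"} for each numeric cell of a pipe table."""
    rows = [r for r in table.strip().splitlines() if r.strip()][1:]  # skip header row
    for row in rows:
        for cell in (c.strip() for c in row.strip("|").split("|")):
            try:
                float(cell.replace(",", ""))          # keep numeric cells only
            except ValueError:
                continue
            print(json.dumps({"value": cell, "type": "Other"}))
```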
science.discomat_te | | Transitions | n (cm-1) | Arad (s-1) | b | trad (ms) |
| 4I13/2 - 4I15/2 | 6574 | 377.6 | 1 | 2.648+-0.123 |
| 4I11/2 - 4I15/2 | 10,194 | 494.4 | 0.89 | 1.800+-0.069 |
| 4I13/2 | 3620 | 61.1 | 0.11 | 1.800+-0.069 |
| 4I9/2 - 4I15/2 | 12,146 | 245.6 | 0.439 | 2.534+-0.067 |
| 4I13/2 | 5572 | 147.2 | 0.528 | 2.534+-0.067 |
| 4I11/2 | 1952 | 1.8 | 0.032 | 2.534+-0.067 |
| 4F9/2 - 4I15/2 | 15,234 | 3517.2 | 0.9 | 0.254+-0.021 |
| 4I13/2 | 8660 | 168 | 0.043 | 0.254+-0.021 |
| 4I11/2 | 5040 | 224.1 | 0.057 | 0.254+-0.021 |
| 4I9/2 | 3088 | 12.8 | 0 | 0.254+-0.021 |
| 4S3/2 - 4I15/2 | 18,308 | 4052.4 | 0.694 | 0.165+-0.007 |
| 4I11/2 | 11,734 | 1639.7 | 0.281 | 0.165+-0.007 |
| 4I13/2 | 8114 | 127.6 | 0.022 | 0.165+-0.007 |
| 4I9/2 | 6162 | 214.7 | 0.003 | 0.165+-0.007 |
| 4F9/2 | 3074 | 2 | 0 | 0.165+-0.007 |
| 2H11/2 - 4I15/2 | 19,093 | 16,393.1 | 0.967 | 0.058+-0.001 |
| 4I13/2 | 12,519 | 305.8 | 0.018 | 0.058+-0.001 |
| 4I11/2 | 8899 | 160.6 | 0.009 | 0.058+-0.001 |
| 4I9/2 | 6947 | 290.3 | 0.002 | 0.058+-0.001 |
| 4F9/2 | 3859 | 61.4 | 0.004 | 0.058+-0.001 |
| 4S3/2 | 785 | 0.1 | 0 | 0.058+-0.001 |
Caption: Predicted spontaneous transition probabilities (Arad), branching ratios (b) and radiative lifetimes (trad) of Er3+ ions in TBZNbEr10 glass.
Examine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:
{"value": "xx", "type": "Other"}
{"value": "xx", "type": "Glass_Compound_Amount", "constituent": "xx", "unit": "xx", "material": "xx"}
Process each cell sequentially, and for any unknown attributes, use "xx" as a placeholder. Each JSON should be on a new line.
Output: | {"value": "6574", "type": "Other"}
{"value": "377.6", "type": "Other"}
{"value": "1", "type": "Other"}
{"value": "2.648", "type": "Other"}
{"value": "494.4", "type": "Other"}
{"value": "0.89", "type": "Other"}
{"value": "1.800", "type": "Other"}
{"value": "3620", "type": "Other"}
{"value": "61.1", "type": "Other"}
{"value": "0.11", "type": "Other"}
{"value": "1.800", "type": "Other"}
{"value": "245.6", "type": "Other"}
{"value": "0.439", "type": "Other"}
{"value": "2.534", "type": "Other"}
{"value": "5572", "type": "Other"}
{"value": "147.2", "type": "Other"}
{"value": "0.528", "type": "Other"}
{"value": "2.534", "type": "Other"}
{"value": "1952", "type": "Other"}
{"value": "1.8", "type": "Other"}
{"value": "0.032", "type": "Other"}
{"value": "2.534", "type": "Other"}
{"value": "3517.2", "type": "Other"}
{"value": "0.9", "type": "Other"}
{"value": "0.254", "type": "Other"}
{"value": "8660", "type": "Other"}
{"value": "168", "type": "Other"}
{"value": "0.043", "type": "Other"}
{"value": "0.254", "type": "Other"}
{"value": "5040", "type": "Other"}
{"value": "224.1", "type": "Other"}
{"value": "0.057", "type": "Other"}
{"value": "0.254", "type": "Other"}
{"value": "3088", "type": "Other"}
{"value": "12.8", "type": "Other"}
{"value": "0", "type": "Other"}
{"value": "0.254", "type": "Other"}
{"value": "4052.4", "type": "Other"}
{"value": "0.694", "type": "Other"}
{"value": "0.165", "type": "Other"}
{"value": "1639.7", "type": "Other"}
{"value": "0.281", "type": "Other"}
{"value": "0.165", "type": "Other"}
{"value": "8114", "type": "Other"}
{"value": "127.6", "type": "Other"}
{"value": "0.022", "type": "Other"}
{"value": "0.165", "type": "Other"}
{"value": "6162", "type": "Other"}
{"value": "214.7", "type": "Other"}
{"value": "0.003", "type": "Other"}
{"value": "0.165", "type": "Other"}
{"value": "3074", "type": "Other"}
{"value": "2", "type": "Other"}
{"value": "0", "type": "Other"}
{"value": "0.165", "type": "Other"}
{"value": "0.967", "type": "Other"}
{"value": "0.058", "type": "Other"}
{"value": "305.8", "type": "Other"}
{"value": "0.018", "type": "Other"}
{"value": "0.058", "type": "Other"}
{"value": "8899", "type": "Other"}
{"value": "160.6", "type": "Other"}
{"value": "0.009", "type": "Other"}
{"value": "0.058", "type": "Other"}
{"value": "6947", "type": "Other"}
{"value": "290.3", "type": "Other"}
{"value": "0.002", "type": "Other"}
{"value": "0.058", "type": "Other"}
{"value": "3859", "type": "Other"}
{"value": "61.4", "type": "Other"}
{"value": "0.004", "type": "Other"}
{"value": "0.058", "type": "Other"}
{"value": "785", "type": "Other"}
{"value": "0.1", "type": "Other"}
{"value": "0", "type": "Other"}
{"value": "0.058", "type": "Other"} | science.discomat_te.75 | [
{
"content": "| Transitions | n (cm-1) | Arad (s-1) | b | trad (ms) |\n| 4 I 13/2 - 4I15/2 | 6574 | 377.6 | 1 | 2.648+-0.123 |\n| 4 I 11/2 - 4I15/2 | 10,194 | 494.4 | 0.89 | 1.800+-0.069 |\n| 4I13/2 | 3620 | 61.1 | 0.11 | 1.800+-0.069 |\n| 4I9/2 - 4I15/2 | 12,146 | 245.6 | 0.439 | 2.534+-0.067 |\n| 4I13/2 | 5572 | 147.2 | 0.528 | 2.534+-0.067 |\n| 4I11/2 | 1952 | 1.8 | 0.032 | 2.534+-0.067 |\n| 4F9/2 - 4I15/2 | 15,234 | 3517.2 | 0.9 | 0.254+-0.021 |\n| 4I13/2 | 8660 | 168 | 0.043 | 0.254+-0.021 |\n| 4I11/2 | 5040 | 224.1 | 0.057 | 0.254+-0.021 |\n| 4I9/2 | 3088 | 12.8 | 0 | 0.254+-0.021 |\n| 4S3/2 - 4I15/2 | 18,308 | 4052.4 | 0.694 | 0.165+-0.007 |\n| 4I11/2 | 11,734 | 1639.7 | 0.281 | 0.165+-0.007 |\n| 4I13/2 | 8114 | 127.6 | 0.022 | 0.165+-0.007 |\n| 4I9/2 | 6162 | 214.7 | 0.003 | 0.165+-0.007 |\n| 4F9/2 | 3074 | 2 | 0 | 0.165+-0.007 |\n| 2H11/2 - 4I15/2 | 19,093 | 16,393.1 | 0.967 | 0.058+-0.001 |\n| 4I13/2 | 12,519 | 305.8 | 0.018 | 0.058+-0.001 |\n| 4I11/2 | 8899 | 160.6 | 0.009 | 0.058+-0.001 |\n| 4I9/2 | 6947 | 290.3 | 0.002 | 0.058+-0.001 |\n| 4F9/2 | 3859 | 61.4 | 0.004 | 0.058+-0.001 |\n| 4S3/2 | 785 | 0.1 | 0 | 0.058+-0.001 |\n\nCaption: Predicted spontaneous transition probabilities (Arad), branching ratios (b) and radiative lifetimes (trad) of Er3+ ions in TBZNbEr10 glass.\n\nExamine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:\n{\"value\": \"xx\", \"type\": \"Other\"}\n{\"value\": \"xx\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"xx\", \"unit\": \"xx\", \"material\": \"xx\"}\n\nProcess each cell sequentially, and for any unknown attributes, use \"xx\" as a placeholder. Each JSON should be on a new line.\n\nOutput:",
"role": "user"
},
{
"content": "{\"value\": \"6574\", \"type\": \"Other\"}\n{\"value\": \"377.6\", \"type\": \"Other\"}\n{\"value\": \"1\", \"type\": \"Other\"}\n{\"value\": \"2.648\", \"type\": \"Other\"}\n{\"value\": \"494.4\", \"type\": \"Other\"}\n{\"value\": \"0.89\", \"type\": \"Other\"}\n{\"value\": \"1.800\", \"type\": \"Other\"}\n{\"value\": \"3620\", \"type\": \"Other\"}\n{\"value\": \"61.1\", \"type\": \"Other\"}\n{\"value\": \"0.11\", \"type\": \"Other\"}\n{\"value\": \"1.800\", \"type\": \"Other\"}\n{\"value\": \"245.6\", \"type\": \"Other\"}\n{\"value\": \"0.439\", \"type\": \"Other\"}\n{\"value\": \"2.534\", \"type\": \"Other\"}\n{\"value\": \"5572\", \"type\": \"Other\"}\n{\"value\": \"147.2\", \"type\": \"Other\"}\n{\"value\": \"0.528\", \"type\": \"Other\"}\n{\"value\": \"2.534\", \"type\": \"Other\"}\n{\"value\": \"1952\", \"type\": \"Other\"}\n{\"value\": \"1.8\", \"type\": \"Other\"}\n{\"value\": \"0.032\", \"type\": \"Other\"}\n{\"value\": \"2.534\", \"type\": \"Other\"}\n{\"value\": \"3517.2\", \"type\": \"Other\"}\n{\"value\": \"0.9\", \"type\": \"Other\"}\n{\"value\": \"0.254\", \"type\": \"Other\"}\n{\"value\": \"8660\", \"type\": \"Other\"}\n{\"value\": \"168\", \"type\": \"Other\"}\n{\"value\": \"0.043\", \"type\": \"Other\"}\n{\"value\": \"0.254\", \"type\": \"Other\"}\n{\"value\": \"5040\", \"type\": \"Other\"}\n{\"value\": \"224.1\", \"type\": \"Other\"}\n{\"value\": \"0.057\", \"type\": \"Other\"}\n{\"value\": \"0.254\", \"type\": \"Other\"}\n{\"value\": \"3088\", \"type\": \"Other\"}\n{\"value\": \"12.8\", \"type\": \"Other\"}\n{\"value\": \"0\", \"type\": \"Other\"}\n{\"value\": \"0.254\", \"type\": \"Other\"}\n{\"value\": \"4052.4\", \"type\": \"Other\"}\n{\"value\": \"0.694\", \"type\": \"Other\"}\n{\"value\": \"0.165\", \"type\": \"Other\"}\n{\"value\": \"1639.7\", \"type\": \"Other\"}\n{\"value\": \"0.281\", \"type\": \"Other\"}\n{\"value\": \"0.165\", \"type\": \"Other\"}\n{\"value\": \"8114\", \"type\": \"Other\"}\n{\"value\": \"127.6\", \"type\": \"Other\"}\n{\"value\": \"0.022\", \"type\": \"Other\"}\n{\"value\": \"0.165\", \"type\": \"Other\"}\n{\"value\": \"6162\", \"type\": \"Other\"}\n{\"value\": \"214.7\", \"type\": \"Other\"}\n{\"value\": \"0.003\", \"type\": \"Other\"}\n{\"value\": \"0.165\", \"type\": \"Other\"}\n{\"value\": \"3074\", \"type\": \"Other\"}\n{\"value\": \"2\", \"type\": \"Other\"}\n{\"value\": \"0\", \"type\": \"Other\"}\n{\"value\": \"0.165\", \"type\": \"Other\"}\n{\"value\": \"0.967\", \"type\": \"Other\"}\n{\"value\": \"0.058\", \"type\": \"Other\"}\n{\"value\": \"305.8\", \"type\": \"Other\"}\n{\"value\": \"0.018\", \"type\": \"Other\"}\n{\"value\": \"0.058\", \"type\": \"Other\"}\n{\"value\": \"8899\", \"type\": \"Other\"}\n{\"value\": \"160.6\", \"type\": \"Other\"}\n{\"value\": \"0.009\", \"type\": \"Other\"}\n{\"value\": \"0.058\", \"type\": \"Other\"}\n{\"value\": \"6947\", \"type\": \"Other\"}\n{\"value\": \"290.3\", \"type\": \"Other\"}\n{\"value\": \"0.002\", \"type\": \"Other\"}\n{\"value\": \"0.058\", \"type\": \"Other\"}\n{\"value\": \"3859\", \"type\": \"Other\"}\n{\"value\": \"61.4\", \"type\": \"Other\"}\n{\"value\": \"0.004\", \"type\": \"Other\"}\n{\"value\": \"0.058\", \"type\": \"Other\"}\n{\"value\": \"785\", \"type\": \"Other\"}\n{\"value\": \"0.1\", \"type\": \"Other\"}\n{\"value\": \"0\", \"type\": \"Other\"}\n{\"value\": \"0.058\", \"type\": \"Other\"}",
"role": "assistant"
}
]
|
science.qasper_abstractive_qa | Examine the provided sections from a research document and address the accompanying query. The text is divided by newlines.
Document Content:
Attention-based method for categorizing different types of online harassment language
In the era of social media and networking platforms, Twitter has been doomed for abuse and harassment toward users specifically women. Monitoring the contents including sexism and sexual harassment in traditional media is easier than monitoring on the online social media platforms like Twitter, because of the large amount of user generated content in these media. So, the research about the automated detection of content containing sexual or racist harassment is an important issue and could be the basis for removing that content or flagging it for human evaluation. Previous studies have been focused on collecting data about sexism and racism in very broad terms. However, there is not much study focusing on different types of online harassment alone attracting natural language processing techniques. In this work, we present an attention-based approach for the detection of harassment in tweets and the detection of different types of harassment as well. Our approach is based on the Recurrent Neural Networks and particularly we are using a deep, classification specific attention mechanism. Moreover, we present a comparison between different variations of this attention-based approach.
Related Work
Waseem et al. BIBREF1 were the first to collect hateful tweets and categorize them as sexist, racist or neither. However, they did not provide specific definitions for each category. Jha and Mamidi BIBREF0 focused on just sexist tweets and proposed two categories of hostile and benevolent sexism. However, these categories were general, as they ignored other types of sexism happening in social media. Sharifirad S. and Matwin S. BIBREF2 proposed complementary categories of sexist language inspired by social science work. They categorized the sexist tweets into the categories of indirect harassment, information threat, sexual harassment and physical harassment. In the next year the same authors proposed BIBREF3 a more comprehensive categorization of online harassment in social media (e.g. Twitter) into the following categories: indirect harassment, information threat, sexual harassment, physical harassment and not sexist.
For the detection of hate speech in social media like twitter, many approaches have been proposed. Jha and Mamidi BIBREF0 tested support vector machine, bi-directional RNN encoder-decoder and FastText on hostile and benevolent sexist tweets. They also used SentiWordNet and subjectivity lexicon on the extracted phrases to show the polarity of the tweets. Sharifirad et al. BIBREF4 trained, tested and evaluated different classification methods on the SemEval2018 dataset and chose the classifier with the highest accuracy for testing on each category of sexist tweets to know the mental state and the affectual state of the user who tweets in each category. To overcome the limitations of small data sets on sexist speech detection, Sharifirad S. et al. BIBREF5 have applied text augmentation and text generation with certain success. They have generated new tweets by replacing words in order to increase the size of our training set. Moreover, in the presented text augmentation approach, the number of tweets in each class remains the same, but their words are augmented with words extracted from their ConceptNet relations and their description extracted from Wikidata. Zhang et al. BIBREF6 combined convolutional and gated recurrent networks to detect hate speech in tweets. Others have proposed different methods, which are not based on deep learning. Burnap and Williams BIBREF7 used Support Vector Machines, Random Forests and a meta-classifier to distinguish between hateful and non-hateful messages. A survey of recent research in the field is presented in BIBREF8. For the problem of the hate speech detection a few approaches have been proposed that are based on the Attention mechanism. Pavlopoulos et al. BIBREF9 have proposed a novel, classification-specific attention mechanism that improves the performance of the RNN further for the detection of abusive content in the web. Xie et al. BIBREF10 for emotion intensity prediction, which is a similar problem to ours, have proposed a novel attention mechanism for CNN model that associates attention-based weights for every convolution window. Park and Fung BIBREF11 transformed the classification into a 2-step problem, where abusive text first is distinguished from the non-abusive, and then the class of abuse (Sexism or Racism) is determined. However, while the first part of the two step classification performs quite well, it falls short in detecting the particular class the abusive text belongs to. Pitsilis et al. BIBREF12 have proposed a detection scheme that is an ensemble of RNN classifiers, which incorporates various features associated with user related information, such as the users’ tendency towards racism or sexism
Proposed methodology ::: Data augmentation
As described before, one crucial issue that we are trying to tackle in this work is that the given dataset is imbalanced. Particularly, there are only a few instances from the indirect and physical harassment categories in the train set, while there are many more in the validation and test sets for these categories. To tackle this issue we apply a back-translation method BIBREF13, where we translate the indirect and physical harassment tweets of the train set from English to German, French and Greek. After that, we translate them back to English in order to achieve data augmentation. These "noisy" data that have been translated back increase the number of indirect and physical harassment tweets and significantly boost the performance of our models.
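A minimal sketch of this augmentation step is shown below. The translate function is a stand-in for whichever machine-translation service or model is used (the text does not name one), so it is left unimplemented here; the rest is plain Python.

```python
def translate(text: str, src: str, tgt: str) -> str:
    """Placeholder for an external MT model/API; not specified in the text."""
    raise NotImplementedError

def back_translate(tweet: str, pivots=("de", "fr", "el")) -> list:
    """Return noisy paraphrases of an English tweet via en -> pivot -> en round trips."""
    round_trips = []
    for pivot in pivots:
        pivoted = translate(tweet, src="en", tgt=pivot)
        round_trips.append(translate(pivoted, src=pivot, tgt="en"))
    return round_trips

# Only the two rare classes are augmented, e.g.:
# train_set += [(aug, label) for tweet, label in train_set
#               if label in ("indirectH", "physicalH")
#               for aug in back_translate(tweet)]
```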
Another way to enrich our models is the use of pre-trained word embeddings, trained on 2B tweets (27B tokens) BIBREF14, for the initialization of the embedding layer.
Proposed methodology ::: Text processing
Before training our models we process the given tweets with a tweet pre-processor. The scope here is the cleaning and tokenization of the dataset.
Proposed methodology ::: RNN Model and Attention Mechanism
We are presenting an attention-based approach for the problem of the harassment detection in tweets. In this section, we describe the basic approach of our work. We are using RNN models because of their ability to deal with sequence information. The RNN model is a chain of GRU cells BIBREF15 that transforms the tokens $w_{1}, w_{2},..., w_{k}$ of each tweet to the hidden states $h_{1}, h_{2},..., h_{k}$, followed by an LR Layer that uses $h_{k}$ to classify the tweet as harassment or non-harassment (similarly for the other categories). Given the vocabulary V and a matrix E $\in $ $R^{d \times \vert V \vert }$ containing d-dimensional word embeddings, an initial $h_{0}$ and a tweet $w = <w_{1},.., w_{k}>$, the RNN computes $h_{1}, h_{2},..., h_{k}$, with $h_{t} \in R^{m}$, as follows:
where $h^{^{\prime }}_{t} \in R^{m}$ is the proposed hidden state at position t, obtained using the word embedding $x_{t}$ of token $w_{t}$ and the previous hidden state $h_{t-1}$, $\odot $ represents the element-wise multiplication, $r_{t} \in R^{m}$ is the reset gate, $z_{t} \in R^{m}$ is the update gate, $\sigma $ is the sigmoid function. Also $W_{h}, W_{z}, W_{r} \in R^{m \times d}$ and $U_{h}, U_{z}, U_{r} \in R^{m \times m}$, $b_{h}, b_{z}, b_{r} \in R^{m}$. After the computation of state $h_{k}$ the LR Layer estimates the probability that tweet w should be considered as harassment, with $W_{p} \in R^{1 \times m}, b_{p} \in R$:
We would like to add an attention mechanism similar to the one presented in BIBREF9, so that the LR Layer will consider the weighted sum $h_{sum}$ of all the hidden states instead of $h_{k}$:
$h_{sum} = \sum _{t=1}^{k} \alpha _{t}h_{t}$
$P_{attentionRNN} = \sigma (W_{p}h_{sum} + b_{p})$
Alternatively, we could pass $h_{sum}$ through an MLP with k layers and then the LR layer will estimate the corresponding probability. More formally,
$P_{attentionRNN} = \sigma (W_{p}h_{*} + b_{p})$
where $h_{*}$ is the state that comes out from the MLP. The weights $\alpha _{t}$ are produced by an attention mechanism presented in BIBREF9 (see Fig. FIGREF7), which is an MLP with l layers. This attention mechanism differs from most previous ones BIBREF16, BIBREF17, because it is used in a classification setting, where there is no previously generated output sub-sequence to drive the attention. It assigns larger weights $\alpha _{t}$ to hidden states $h_{t}$ corresponding to positions, where there is more evidence that the tweet should be harassment (or any other specific type of harassment) or not. In our work we are using four attention mechanisms instead of one that is presented in BIBREF9. Particularly, we are using one attention mechanism per category. Another element that differentiates our approach from Pavlopoulos et al. BIBREF9 is that we are using a projection layer for the word embeddings (see Fig. FIGREF2). In the next subsection we describe the Model Architecture of our approach.
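As a concrete (hedged) reading of the attention just described, the sketch below scores each hidden state $h_{t}$ with a small MLP, normalises the scores into weights $\alpha _{t}$ with a softmax over the sequence, and returns the weighted sum $h_{sum}$. The depth of the scoring MLP and its tanh activation are assumptions, since the text only specifies "an MLP with l layers".

```python
import torch
import torch.nn as nn

class ClassificationAttention(nn.Module):
    """One classification-specific attention head over GRU states."""
    def __init__(self, hidden_size: int = 128):
        super().__init__()
        # Assumed: a 2-layer scoring MLP with tanh activation.
        self.scorer = nn.Sequential(
            nn.Linear(hidden_size, hidden_size), nn.Tanh(),
            nn.Linear(hidden_size, 1))

    def forward(self, states: torch.Tensor) -> torch.Tensor:
        # states: (batch, seq_len, hidden_size)
        scores = self.scorer(states).squeeze(-1)             # one score per position
        alphas = torch.softmax(scores, dim=-1)               # attention weights a_t
        return (alphas.unsqueeze(-1) * states).sum(dim=1)    # weighted sum h_sum
```

Four such heads, one per category, are used in the architecture described in the next subsection.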
Proposed methodology ::: Model Architecture
The Embedding Layer is initialized using pre-trained word embeddings of dimension 200 from Twitter data that have been described in a previous sub-section. After the Embedding Layer, we apply a Spatial Dropout Layer, which drops a certain percentage of dimensions from each word vector in the training sample. The role of Dropout is to improve generalization performance by preventing activations from becoming strongly correlated BIBREF18. Spatial Dropout, which has been proposed in BIBREF19, is an alternative way to use dropout with convolutional neural networks, as it is able to drop out entire feature maps from the convolutional layer, which are then not used during pooling. After that, the word embeddings pass through a one-layer MLP, which has tanh as activation function and 128 hidden units, in order to project them into the vector space of our problem, considering that they have been pre-trained on text with a different subject. In the next step the embeddings are fed into a unidirectional GRU having 1 Stacked Layer and size 128. We prefer GRU over LSTM, because it is more efficient computationally. Also, the basic advantage of LSTM, which is the ability to keep in memory large text documents, does not hold here, because tweets are supposed to be rather short text documents. The output states of the GRU pass through four self-attentions like the one described above BIBREF9, because we are using one attention per category (see Fig. FIGREF7). Finally, a one-layer MLP having 128 nodes and ReLU as activation function computes the final score for each category. At this final stage we have avoided using a softmax function to decide the harassment type given that the tweet is a harassment, since otherwise we would have had to train our models taking into account only the harassment tweets, and this might have been a problem as the dataset is not large enough.
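The following PyTorch sketch mirrors the pipeline just described (embedding, spatial dropout, tanh projection, unidirectional GRU, one attention and one ReLU head per category, sigmoid scores). It is illustrative only: the vocabulary size and the dropout rate are placeholders, and a single linear scorer stands in for the deeper attention MLP sketched above.

```python
import torch
import torch.nn as nn

class MultiAttentionRNN(nn.Module):
    def __init__(self, vocab_size: int, emb_dim: int = 200, hidden: int = 128,
                 n_labels: int = 4, p_drop: float = 0.2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)   # init from pre-trained Twitter vectors
        self.spatial_drop = nn.Dropout2d(p_drop)         # drops whole embedding dimensions
        self.project = nn.Sequential(nn.Linear(emb_dim, hidden), nn.Tanh())
        self.gru = nn.GRU(hidden, hidden, batch_first=True)        # unidirectional, 1 layer
        self.att_scorers = nn.ModuleList(                           # one attention per category
            [nn.Linear(hidden, 1) for _ in range(n_labels)])
        self.heads = nn.ModuleList(
            [nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(), nn.Linear(hidden, 1))
             for _ in range(n_labels)])

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:        # tokens: (batch, seq_len)
        x = self.embed(tokens)
        # Dropout2d zeroes whole channels; treat embedding dimensions as channels.
        x = self.spatial_drop(x.transpose(1, 2).unsqueeze(-1)).squeeze(-1).transpose(1, 2)
        states, _ = self.gru(self.project(x))                       # (batch, seq_len, hidden)
        outputs = []
        for scorer, head in zip(self.att_scorers, self.heads):
            alphas = torch.softmax(scorer(states).squeeze(-1), dim=-1)
            h_sum = (alphas.unsqueeze(-1) * states).sum(dim=1)
            outputs.append(head(h_sum))
        return torch.sigmoid(torch.cat(outputs, dim=-1))            # one probability per category
```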
Experiments ::: Training Models
In this subsection we are giving the details of the training process of our models. Moreover, we are describing the different models that we compare in our experiments.
Batch size, which pertains to the number of training samples to consider at a time for updating our network weights, is set to 32, because our dataset is not large and small batches might help to generalize better. Also, we set the other hyperparameters as follows: epochs = 20, patience = 10. As the early-stopping criterion we choose the average AUC, because our dataset is imbalanced.
The training process is based on the optimization of the loss function mentioned below and it is carried out with the Adam optimizer BIBREF20, which is known for yielding quicker convergence. We set the learning rate equal to 0.001:
$L = \frac{1}{2}BCE(harassment) + \frac{1}{2}(\frac{1}{5}BCE(sexualH) + \frac{2}{5}BCE(indirectH)+\frac{2}{5}BCE(physicalH))$
where BCE is the binary cross-entropy loss function,
$BCE = -\frac{1}{n}\sum _{i=1}^{n}[y_{i}log(y^{^{\prime }}_{i}) + (1 - y_{i})log(1 - y^{^{\prime }}_{i})]$
$i$ denotes the $i$th training sample, $y$ is the binary representation of the true harassment label, and $y^{^{\prime }}$ is the predicted probability. In the loss function we have applied equal weight to both tasks. However, in the second task (type of harassment classification) we have applied higher weights to the categories that are harder to predict, due to the problem of class imbalance between the training, validation and test sets.
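A direct transcription of this loss could look as follows (a hedged sketch; the column ordering of the four outputs is an assumption):

```python
import torch.nn.functional as F

def combined_loss(pred, target):
    """pred, target: tensors of shape (batch, 4), columns assumed to be
    [harassment, sexualH, indirectH, physicalH]; pred holds probabilities."""
    bce = lambda j: F.binary_cross_entropy(pred[:, j], target[:, j])
    return 0.5 * bce(0) + 0.5 * (0.2 * bce(1) + 0.4 * bce(2) + 0.4 * bce(3))
```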
Experiments ::: Evaluation and Results
Each model produces four scores, and each score is the probability that a tweet includes harassment language, indirect, physical or sexual harassment language respectively. For any tweet, we first check the harassment-language score; if it is less than a specified threshold, then the harassment label is zero, so the other three labels are zero as well. If it is greater than or equal to that threshold, then the harassment label is one and the type of harassment is the one among these three that has the greatest score (highest probability). We set this threshold equal to 0.33.
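The decision rule above amounts to a few lines of plain Python (a sketch; the score keys are illustrative names only):

```python
THRESHOLD = 0.33
TYPES = ("indirectH", "physicalH", "sexualH")

def decide(scores: dict) -> dict:
    """scores maps 'harassment' and each type in TYPES to a predicted probability."""
    if scores["harassment"] < THRESHOLD:
        return {"harassment": 0, **{t: 0 for t in TYPES}}
    winner = max(TYPES, key=lambda t: scores[t])
    return {"harassment": 1, **{t: int(t == winner) for t in TYPES}}
```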
We compare eight different models in our experiments. Four of them have a Projected Layer (see Fig. FIGREF2), while the others do not, and this is the only difference between these two groups of our models. So, we actually include four base models in our experiments (each with or without a projected layer). Firstly, LastStateRNN is the classic RNN model, where the last state passes through an MLP and then the LR Layer estimates the corresponding probability. In contrast, in the AvgRNN model we consider the average vector of all states that come out of the cells. The AttentionRNN model is the one that has been presented in BIBREF9. Moreover, we introduce the MultiAttentionRNN model for harassment language detection, which, instead of one attention, includes four attentions, one for each category.
We have evaluated our models considering the F1 Score, which is the harmonic mean of precision and recall. We have run ten times the experiment for each model and considered the average F1 Score. The results are mentioned in Table TABREF11. Considering F1 Macro the models that include the multi-attention mechanism outperform the others and particularly the one with the Projected Layer has the highest performance. In three out of four pairs of models, the ones with the Projected Layer achieved better performance, so in most cases the addition of the Projected Layer had a significant enhancement.
Conclusion - Future work
We present an attention-based approach for the detection of harassment language in tweets and for the detection of different types of harassment as well. Our approach is based on Recurrent Neural Networks and, in particular, we use a deep, classification-specific attention mechanism. Moreover, we present a comparison between different variations of this attention-based approach and a few baseline methods. According to the results of our experiments and considering the F1 Score, the multi-attention method having a projected layer achieved the highest performance. Also, we tackled the problem of the imbalance between the training, validation and test sets by performing back-translation.
In the future, we would like to perform more experiments with this dataset applying different models using BERT BIBREF21. Also, we would like to apply the models presented in this work, in other datasets about hate speech in social media.
Fig. 1. Projection Layer
Table 2. The results considering F1 Score.
Query: Which variation provides the best results on this dataset?
JSON Response: | {"answer": "the model with multi-attention mechanism and a projected layer", "evidence": ["We have evaluated our models considering the F1 Score, which is the harmonic mean of precision and recall. We have run ten times the experiment for each model and considered the average F1 Score. The results are mentioned in Table TABREF11. Considering F1 Macro the models that include the multi-attention mechanism outperform the others and particularly the one with the Projected Layer has the highest performance. In three out of four pairs of models, the ones with the Projected Layer achieved better performance, so in most cases the addition of the Projected Layer had a significant enhancement."]} | science.qasper_abstractive_qa.113.0 | [
{
"content": "Examine the provided sections from a research document and address the accompanying query. The text is divided by newlines.\n\nDocument Content:\n\nAttention-based method for categorizing different types of online harassment language\n\nIn the era of social media and networking platforms, Twitter has been doomed for abuse and harassment toward users specifically women. Monitoring the contents including sexism and sexual harassment in traditional media is easier than monitoring on the online social media platforms like Twitter, because of the large amount of user generated content in these media. So, the research about the automated detection of content containing sexual or racist harassment is an important issue and could be the basis for removing that content or flagging it for human evaluation. Previous studies have been focused on collecting data about sexism and racism in very broad terms. However, there is not much study focusing on different types of online harassment alone attracting natural language processing techniques. In this work, we present an attention-based approach for the detection of harassment in tweets and the detection of different types of harassment as well. Our approach is based on the Recurrent Neural Networks and particularly we are using a deep, classification specific attention mechanism. Moreover, we present a comparison between different variations of this attention-based approach.\n\nRelated Work\nWaseem et al. BIBREF1 were the first who collected hateful tweets and categorized them into being sexist, racist or neither. However, they did not provide specific definitions for each category. Jha and Mamidi BIBREF0 focused on just sexist tweets and proposed two categories of hostile and benevolent sexism. However, these categories were general as they ignored other types of sexism happening in social media. Sharifirad S. and Matwin S. BIBREF2 proposed complimentary categories of sexist language inspired from social science work. They categorized the sexist tweets into the categories of indirect harassment, information threat, sexual harassment and physical harassment. In the next year the same authors proposed BIBREF3 a more comprehensive categorization of online harassment in social media e.g. twitter into the following categories, indirect harassment, information threat, sexual harassment, physical harassment and not sexist.\nFor the detection of hate speech in social media like twitter, many approaches have been proposed. Jha and Mamidi BIBREF0 tested support vector machine, bi-directional RNN encoder-decoder and FastText on hostile and benevolent sexist tweets. They also used SentiWordNet and subjectivity lexicon on the extracted phrases to show the polarity of the tweets. Sharifirad et al. BIBREF4 trained, tested and evaluated different classification methods on the SemEval2018 dataset and chose the classifier with the highest accuracy for testing on each category of sexist tweets to know the mental state and the affectual state of the user who tweets in each category. To overcome the limitations of small data sets on sexist speech detection, Sharifirad S. et al. BIBREF5 have applied text augmentation and text generation with certain success. They have generated new tweets by replacing words in order to increase the size of our training set. 
Moreover, in the presented text augmentation approach, the number of tweets in each class remains the same, but their words are augmented with words extracted from their ConceptNet relations and their description extracted from Wikidata. Zhang et al. BIBREF6 combined convolutional and gated recurrent networks to detect hate speech in tweets. Others have proposed different methods, which are not based on deep learning. Burnap and Williams BIBREF7 used Support Vector Machines, Random Forests and a meta-classifier to distinguish between hateful and non-hateful messages. A survey of recent research in the field is presented in BIBREF8. For the problem of the hate speech detection a few approaches have been proposed that are based on the Attention mechanism. Pavlopoulos et al. BIBREF9 have proposed a novel, classification-specific attention mechanism that improves the performance of the RNN further for the detection of abusive content in the web. Xie et al. BIBREF10 for emotion intensity prediction, which is a similar problem to ours, have proposed a novel attention mechanism for CNN model that associates attention-based weights for every convolution window. Park and Fung BIBREF11 transformed the classification into a 2-step problem, where abusive text first is distinguished from the non-abusive, and then the class of abuse (Sexism or Racism) is determined. However, while the first part of the two step classification performs quite well, it falls short in detecting the particular class the abusive text belongs to. Pitsilis et al. BIBREF12 have proposed a detection scheme that is an ensemble of RNN classifiers, which incorporates various features associated with user related information, such as the users’ tendency towards racism or sexism\n\nProposed methodology ::: Data augmentation\nAs described before one crucial issue that we are trying to tackle in this work is that the given dataset is imbalanced. Particularly, there are only a few instances from indirect and physical harassment categories respectively in the train set, while there are much more in the validation and test sets for these categories. To tackle this issue we applying a back-translation method BIBREF13, where we translate indirect and physical harassment tweets of the train set from english to german, french and greek. After that, we translate them back to english in order to achieve data augmentation. These \"noisy\" data that have been translated back, increase the number of indirect and physical harassment tweets and boost significantly the performance of our models.\nAnother way to enrich our models is the use of pre-trained word embeddings from 2B Twitter data BIBREF14 having 27B tokens, for the initialization of the embedding layer.\n\nProposed methodology ::: Text processing\nBefore training our models we are processing the given tweets using a tweet pre-processor. The scope here is the cleaning and tokenization of the dataset.\n\nProposed methodology ::: RNN Model and Attention Mechanism\nWe are presenting an attention-based approach for the problem of the harassment detection in tweets. In this section, we describe the basic approach of our work. We are using RNN models because of their ability to deal with sequence information. 
The RNN model is a chain of GRU cells BIBREF15 that transforms the tokens $w_{1}, w_{2},..., w_{k}$ of each tweet to the hidden states $h_{1}, h_{2},..., h_{k}$, followed by an LR Layer that uses $h_{k}$ to classify the tweet as harassment or non-harassment (similarly for the other categories). Given the vocabulary V and a matrix E $\\in $ $R^{d \\times \\vert V \\vert }$ containing d-dimensional word embeddings, an initial $h_{0}$ and a tweet $w = <w_{1},.., w_{k}>$, the RNN computes $h_{1}, h_{2},..., h_{k}$, with $h_{t} \\in R^{m}$, as follows:\nwhere $h^{^{\\prime }}_{t} \\in R^{m}$ is the proposed hidden state at position t, obtained using the word embedding $x_{t}$ of token $w_{t}$ and the previous hidden state $h_{t-1}$, $\\odot $ represents the element-wise multiplication, $r_{t} \\in R^{m}$ is the reset gate, $z_{t} \\in R^{m}$ is the update gate, $\\sigma $ is the sigmoid function. Also $W_{h}, W_{z}, W_{r} \\in R^{m \\times d}$ and $U_{h}, U_{z}, U_{r} \\in R^{m \\times m}$, $b_{h}, b_{z}, b_{r} \\in R^{m}$. After the computation of state $h_{k}$ the LR Layer estimates the probability that tweet w should be considered as harassment, with $W_{p} \\in R^{1 \\times m}, b_{p} \\in R$:\nWe would like to add an attention mechanism similar to the one presented in BIBREF9, so that the LR Layer will consider the weighted sum $h_{sum}$ of all the hidden states instead of $h_{k}$:\n$h_{sum} = \\sum _{t=1}^{k} \\alpha _{t}h_{t}$\n$P_{attentionRNN} = \\sigma (W_{p}h_{sum} + b_{p})$\nAlternatively, we could pass $h_{sum}$ through an MLP with k layers and then the LR layer will estimate the corresponding probability. More formally,\n$P_{attentionRNN} = \\sigma (W_{p}h_{*} + b_{p})$\nwhere $h_{*}$ is the state that comes out from the MLP. The weights $\\alpha _{t}$ are produced by an attention mechanism presented in BIBREF9 (see Fig. FIGREF7), which is an MLP with l layers. This attention mechanism differs from most previous ones BIBREF16, BIBREF17, because it is used in a classification setting, where there is no previously generated output sub-sequence to drive the attention. It assigns larger weights $\\alpha _{t}$ to hidden states $h_{t}$ corresponding to positions, where there is more evidence that the tweet should be harassment (or any other specific type of harassment) or not. In our work we are using four attention mechanisms instead of one that is presented in BIBREF9. Particularly, we are using one attention mechanism per category. Another element that differentiates our approach from Pavlopoulos et al. BIBREF9 is that we are using a projection layer for the word embeddings (see Fig. FIGREF2). In the next subsection we describe the Model Architecture of our approach.\n\nProposed methodology ::: Model Architecture\nThe Embedding Layer is initialized using pre-trained word embeddings of dimension 200 from Twitter data that have been described in a previous sub-section. After the Embedding Layer, we are applying a Spatial Dropout Layer, which drops a certain percentage of dimensions from each word vector in the training sample. The role of Dropout is to improve generalization performance by preventing activations from becoming strongly correlated BIBREF18. Spatial Dropout, which has been proposed in BIBREF19, is an alternative way to use dropout with convolutional neural networks as it is able to dropout entire feature maps from the convolutional layer which are then not used during pooling. 
After that, the word embeddings are passing through a one-layer MLP, which has tanh as activation function and 128 hidden units, in order to project them in the vector space of our problem considering that they have been pre-trained using text that has a different subject. In the next step the embeddings are fed in a unidirectional GRU having 1 Stacked Layer and size 128. We prefer GRU than LSTM, because it is more efficient computationally. Also the basic advantage of LSTM which is the ability to keep in memory large text documents, does not hold here, because tweets supposed to be not too large text documents. The output states of the GRU are passing through four self-attentions like the one described above BIBREF9, because we are using one attention per category (see Fig. FIGREF7). Finally, a one-layer MLP having 128 nodes and ReLU as activation function computes the final score for each category. At this final stage we have avoided using a softmax function to decide the harassment type considering that the tweet is a harassment, otherwise we had to train our models taking into account only the harassment tweets and this might have been a problem as the dataset is not large enough.\n\nExperiments ::: Training Models\nIn this subsection we are giving the details of the training process of our models. Moreover, we are describing the different models that we compare in our experiments.\nBatch size which pertains to the amount of training samples to consider at a time for updating our network weights, is set to 32, because our dataset is not large and small batches might help to generalize better. Also, we set other hyperparameters as: epochs = 20, patience = 10. As early stopping criterion we choose the average AUC, because our dataset is imbalanced.\nThe training process is based on the optimization of the loss function mentioned below and it is carried out with the Adam optimizer BIBREF20, which is known for yielding quicker convergence. We set the learning rate equal to 0.001:\n$L = \\frac{1}{2}BCE(harassment) + \\frac{1}{2}(\\frac{1}{5}BCE(sexualH) + \\frac{2}{5}BCE(indirectH)+\\frac{2}{5}BCE(physicalH))$\nwhere BCE is the binary cross-entropy loss function,\n$BCE = -\\frac{1}{n}\\sum _{i=1}^{n}[y_{i}log(y^{^{\\prime }}_{i}) + (1 - y_{i})log(1 - y^{^{\\prime }}_{i}))]$\n$i$ denotes the $i$th training sample, $y$ is the binary representation of true harassment label, and $y^{^{\\prime }}$ is the predicted probability. In the loss function we have applied equal weight to both tasks. However, in the second task (type of harassment classification) we have applied higher weight in the categories that it is harder to predict due to the problem of the class imbalance between the training, validation and test sets respectively.\n\nExperiments ::: Evaluation and Results\nEach model produces four scores and each score is the probability that a tweet includes harassment language, indirect, physical and sexual harassment language respectively. For any tweet, we first check the score of the harassment language and if it is less than a specified threshold, then the harassment label is zero, so the other three labels are zero as well. If it is greater than or equal to that threshold, then the harassment label is one and the type of harassment is the one among these three having that has the greatest score (highest probability). We set this threshold equal to 0.33.\nWe compare eight different models in our experiments. Four of them have a Projected Layer (see Fig. 
FIGREF2), while the others do not have, and this is the only difference between these two groups of our models. So, we actually include four models in our experiments (having a projected layer or not). Firstly, LastStateRNN is the classic RNN model, where the last state passes through an MLP and then the LR Layer estimates the corresponding probability. In contrast, in the AvgRNN model we consider the average vector of all states that come out of the cells. The AttentionRNN model is the one that it has been presented in BIBREF9. Moreover, we introduce the MultiAttentionRNN model for the harassment language detection, which instead of one attention, it includes four attentions, one for each category.\nWe have evaluated our models considering the F1 Score, which is the harmonic mean of precision and recall. We have run ten times the experiment for each model and considered the average F1 Score. The results are mentioned in Table TABREF11. Considering F1 Macro the models that include the multi-attention mechanism outperform the others and particularly the one with the Projected Layer has the highest performance. In three out of four pairs of models, the ones with the Projected Layer achieved better performance, so in most cases the addition of the Projected Layer had a significant enhancement.\n\nConclusion - Future work\nWe present an attention-based approach for the detection of harassment language in tweets and the detection of different types of harassment as well. Our approach is based on the Recurrent Neural Networks and particularly we are using a deep, classification specific attention mechanism. Moreover, we present a comparison between different variations of this attention-based approach and a few baseline methods. According to the results of our experiments and considering the F1 Score, the multi-attention method having a projected layer, achieved the highest performance. Also, we tackled the problem of the imbalance between the training, validation and test sets performing the technique of back-translation.\nIn the future, we would like to perform more experiments with this dataset applying different models using BERT BIBREF21. Also, we would like to apply the models presented in this work, in other datasets about hate speech in social media.\n\nFig. 1. Projection Layer\n\nTable 2. The results considering F1 Score.\n\nQuery: Which variation provides the best results on this dataset?\n\nJSON Response:",
"role": "user"
},
{
"content": "{\"answer\": \"the model with multi-attention mechanism and a projected layer\", \"evidence\": [\"We have evaluated our models considering the F1 Score, which is the harmonic mean of precision and recall. We have run ten times the experiment for each model and considered the average F1 Score. The results are mentioned in Table TABREF11. Considering F1 Macro the models that include the multi-attention mechanism outperform the others and particularly the one with the Projected Layer has the highest performance. In three out of four pairs of models, the ones with the Projected Layer achieved better performance, so in most cases the addition of the Projected Layer had a significant enhancement.\"]}",
"role": "assistant"
}
]
|
science.bioasq_list_qa | Examine the following excerpts from biomedical literature. Your objective is to extract relevant information to answer the posed question. Format your answer as a JSON array.
Context: The National Institute for Health and Care Excellence (NICE) invited Servier, the company manufacturing trifluridine and tipiracil (T/T; trade name: Lonsurf®), to submit evidence for the clinical and cost effectiveness of T/T compared with best supportive care (BSC) for metastatic colorectal cancer (third-line or later).
PURPOSE: Trifluridine/tipiracil (FTD/TPI; TAS-102, Lonsurf®), a novel form of chemotherapy for metastatic colorectal cancer (mCRC), has shown clinical benefit in the global, phase III RECOURSE trial, regardless of patient age.
BACKGROUND: Trifluridine/tipiracil (TAS-102, Lonsurf®), a novel oral anti-tumor agent combining an anti-neoplastic thymidine-based nucleoside analogue (trifluridine, FTD) with a thymidine phosphorylase inhibitor (tipiracil hydrochloride, TPI) presents a new treatment option for metastatic colorectal cancer (mCRC) patients refractory or intolerant to standard therapies.
These AEs may affect patient adherence, particularly with completely oral regimens, such as trifluridine/tipiracil (TAS-102, Lonsurf®), an antimetabolite agent for patients with mCRC refractory or intolerant to standard therapies.
Trifluridine/tipiracil has recently been approved for the treatment of adult patients with metastatic colorectal cancer (mCRC) who are refractory to or are not considered candidates for, current standard chemotherapy and biological therapy in the EU and USA and in unresectable advanced or recurrent CRC in Japan.
TAS-102 (Lonsurf) is an oral fluoropyrimidine that is formed by the combination of 2 active drugs: trifluridine (a nucleoside analog) and tipiracil hydrochloride (a thymidine phosphorylase inhibitor).
Evolocumab (Repatha) for patients with hypercholesterolemia whose condition has not been controlled by statins and other therapies; trifluridine/tipiracil (Lonsurf) for metastatic colorectal cancer; and blood coagulation factor VIII (Nuwiq) for adults and children with hemophilia A.
In May 2014, tablets containing both trifluridine and tipiracil hydrochloride (Lonsurf® tablets) were launched in Japan ahead of other countries, for the treatment of advanced/relapsed unresectable colorectal cancer.
Initial Evaluation of the Efficacy and Safety of Tablets Containing Trifluridine and Tipiracil Hydrochloride--Safety Measures Devised by a Multidisciplinary Team Including a Pharmaceutical Outpatient Clinic.
Question: Which drugs are included in the Lonsurf pill? | ["trifluridine", "tipiracil"] | science.bioasq_list_qa.605 | [
{
"content": "Examine the following excerpts from biomedical literature. Your objective is to extract relevant information to answer the posed question. Format your answer as a JSON array.\n\nContext: The National Institute for Health and Care Excellence (NICE) invited Servier, the company manufacturing trifluridine and tipiracil (T/T; trade name: Lonsurf®), to submit evidence for the clinical and cost effectiveness of T/T compared with best supportive care (BSC) for metastatic colorectal cancer (third-line or later).\nPURPOSE: Trifluridine/tipiracil (FTD/TPI; TAS-102, Lonsurf®), a novel form of chemotherapy for metastatic colorectal cancer (mCRC), has shown clinical benefit in the global, phase III RECOURSE trial, regardless of patient age.\nBACKGROUND: Trifluridine/tipiracil (TAS-102, Lonsurf®), a novel oral anti-tumor agent combining an anti-neoplastic thymidine-based nucleoside analogue (trifluridine, FTD) with a thymidine phosphorylase inhibitor (tipiracil hydrochloride, TPI) presents a new treatment option for metastatic colorectal cancer (mCRC) patients refractory or intolerant to standard therapies.\nThese AEs may affect patient adherence, particularly with completely oral regimens, such as trifluridine/tipiracil (TAS-102, Lonsurf®), an antimetabolite agent for patients with mCRC refractory or intolerant to standard therapies.\nTrifluridine/tipiracil has recently been approved for the treatment of adult patients with metastatic colorectal cancer (mCRC) who are refractory to or are not considered candidates for, current standard chemotherapy and biological therapy in the EU and USA and in unresectable advanced or recurrent CRC in Japan.\nTAS-102 (Lonsurf) is an oral fluoropyrimidine that is formed by the combination of 2 active drugs: trifluridine (a nucleoside analog) and tipiracil hydrochloride (a thymidine phosphorylase inhibitor).\nEvolocumab (Repatha) for patients with hypercholesterolemia whose condition has not been controlled by statins and other therapies; trifluridine/tipiracil (Lonsurf) for metastatic colorectal cancer; and blood coagulation factor VIII (Nuwiq) for adults and children with hemophilia A.\nIn May 2014, tablets containing both trifluridine and tipiracil hydrochloride (Lonsurf® tablets) were launched in Japan ahead of other countries, for the treatment of advanced/relapsed unresectable colorectal cancer.\nInitial Evaluation of the Efficacy and Safety of Tablets Containing Trifluridine and Tipiracil Hydrochloride--Safety Measures Devised by a Multidisciplinary Team Including a Pharmaceutical Outpatient Clinic.\n\nQuestion: Which drugs are included in the Lonsurf pill?",
"role": "user"
},
{
"content": "[\"trifluridine\", \"tipiracil\"]",
"role": "assistant"
}
]
|
science.discomat_te | | | | Elemental composition (at.%) | Elemental composition (at.%) | Elemental composition (at.%) | Elemental composition (at.%) | Elemental composition (at.%) | Elemental composition (at.%) |
| | | Si | B | Na | O | Ba | Zr |
| B5 | Nominal | 19.67 | 3.28 | 16.39 | 57.38 | 1.64 | 1.64 |
| B5 | PGAA | 19.1 (1.8) | 3.69 (1.0) | 15.37 (1.4) | 58.0 (0.5) | 1.3 (2.6) | 2.19 (2.5) |
| | | | | | | | |
| B10 | Nominal | 17.46 | 6.35 | 15.87 | 57.14 | 1.59 | 1.59 |
| B10 | PGAA | 19.1 (1.8) | 6.59 (1.0) | 14.3 (1.5) | 58.4 (0.5) | 1.3 (2.6) | 2.18 (2.4) |
| | | | | | | | |
| B15 | Nominal | 15.38 | 9.23 | 15.38 | 56.92 | 1.54 | 1.54 |
| B15 | PGAA | 15.24 (1.8) | 9.15 (0.9) | 13.08 (1.4) | 58.5 (0.4) | 1.3 (2.5) | 2.42 (2.4) |
Caption: Elemental composition (at.%) of the multi-component matrix glasses, nominal and the measured value by PGAA method
Examine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:
{"value": "xx", "type": "Other"}
{"value": "xx", "type": "Glass_Compound_Amount", "constituent": "xx", "unit": "xx", "material": "xx"}
Process each cell sequentially, and for any unknown attributes, use "xx" as a placeholder. Each JSON should be on a new line.
Output: | {"value": "19.67", "type": "Glass_Compound_Amount", "constituent": "Si", "unit": "mol", "material": "B5"}
{"value": "3.28", "type": "Glass_Compound_Amount", "constituent": "B", "unit": "mol", "material": "B5"}
{"value": "16.39", "type": "Glass_Compound_Amount", "constituent": "Na", "unit": "mol", "material": "B5"}
{"value": "57.38", "type": "Glass_Compound_Amount", "constituent": "O", "unit": "mol", "material": "B5"}
{"value": "1.64", "type": "Glass_Compound_Amount", "constituent": "Ba", "unit": "mol", "material": "B5"}
{"value": "1.64", "type": "Glass_Compound_Amount", "constituent": "Zr", "unit": "mol", "material": "B5"}
{"value": "19.1", "type": "Glass_Compound_Amount", "constituent": "Si", "unit": "mol", "material": "B5"}
{"value": "3.69", "type": "Glass_Compound_Amount", "constituent": "B", "unit": "mol", "material": "B5"}
{"value": "15.37", "type": "Glass_Compound_Amount", "constituent": "Na", "unit": "mol", "material": "B5"}
{"value": "58.0", "type": "Glass_Compound_Amount", "constituent": "O", "unit": "mol", "material": "B5"}
{"value": "1.3", "type": "Glass_Compound_Amount", "constituent": "Ba", "unit": "mol", "material": "B5"}
{"value": "2.19", "type": "Glass_Compound_Amount", "constituent": "Zr", "unit": "mol", "material": "B5"}
{"value": "17.46", "type": "Glass_Compound_Amount", "constituent": "Si", "unit": "mol", "material": "B10"}
{"value": "6.35", "type": "Glass_Compound_Amount", "constituent": "B", "unit": "mol", "material": "B10"}
{"value": "15.87", "type": "Glass_Compound_Amount", "constituent": "Na", "unit": "mol", "material": "B10"}
{"value": "57.14", "type": "Glass_Compound_Amount", "constituent": "O", "unit": "mol", "material": "B10"}
{"value": "1.59", "type": "Glass_Compound_Amount", "constituent": "Ba", "unit": "mol", "material": "B10"}
{"value": "1.59", "type": "Glass_Compound_Amount", "constituent": "Zr", "unit": "mol", "material": "B10"}
{"value": "19.1", "type": "Glass_Compound_Amount", "constituent": "Si", "unit": "mol", "material": "B10"}
{"value": "6.59", "type": "Glass_Compound_Amount", "constituent": "B", "unit": "mol", "material": "B10"}
{"value": "14.3", "type": "Glass_Compound_Amount", "constituent": "Na", "unit": "mol", "material": "B10"}
{"value": "58.4", "type": "Glass_Compound_Amount", "constituent": "O", "unit": "mol", "material": "B10"}
{"value": "1.3", "type": "Glass_Compound_Amount", "constituent": "Ba", "unit": "mol", "material": "B10"}
{"value": "2.18", "type": "Glass_Compound_Amount", "constituent": "Zr", "unit": "mol", "material": "B10"}
{"value": "15.38", "type": "Glass_Compound_Amount", "constituent": "Si", "unit": "mol", "material": "B15"}
{"value": "9.23", "type": "Glass_Compound_Amount", "constituent": "B", "unit": "mol", "material": "B15"}
{"value": "15.38", "type": "Glass_Compound_Amount", "constituent": "Na", "unit": "mol", "material": "B15"}
{"value": "56.92", "type": "Glass_Compound_Amount", "constituent": "O", "unit": "mol", "material": "B15"}
{"value": "1.54", "type": "Glass_Compound_Amount", "constituent": "Ba", "unit": "mol", "material": "B15"}
{"value": "1.54", "type": "Glass_Compound_Amount", "constituent": "Zr", "unit": "mol", "material": "B15"}
{"value": "15.24", "type": "Glass_Compound_Amount", "constituent": "Si", "unit": "mol", "material": "B15"}
{"value": "9.15", "type": "Glass_Compound_Amount", "constituent": "B", "unit": "mol", "material": "B15"}
{"value": "13.08", "type": "Glass_Compound_Amount", "constituent": "Na", "unit": "mol", "material": "B15"}
{"value": "58.5", "type": "Glass_Compound_Amount", "constituent": "O", "unit": "mol", "material": "B15"}
{"value": "1.3", "type": "Glass_Compound_Amount", "constituent": "Ba", "unit": "mol", "material": "B15"}
{"value": "2.42", "type": "Glass_Compound_Amount", "constituent": "Zr", "unit": "mol", "material": "B15"} | science.discomat_te.378 | [
{
"content": "| | | Elemental composition (at.%) | Elemental composition (at.%) | Elemental composition (at.%) | Elemental composition (at.%) | Elemental composition (at.%) | Elemental composition (at.%) |\n| | | Si | B | Na | O | Ba | Zr |\n| B5 | Nominal | 19.67 | 3.28 | 16.39 | 57.38 | 1.64 | 1.64 |\n| B5 | PGAA | 19.1 (1.8) | 3.69 (1.0) | 15.37 (1.4) | 58.0 (0.5) | 1.3 (2.6) | 2.19 (2.5) |\n| | | | | | | | |\n| B10 | Nominal | 17.46 | 6.35 | 15.87 | 57.14 | 1.59 | 1.59 |\n| B10 | PGAA | 19.1 (1.8) | 6.59 (1.0) | 14.3 (1.5) | 58.4 (0.5) | 1.3 (2.6) | 2.18 (2.4) |\n| | | | | | | | |\n| B15 | Nominal | 15.38 | 9.23 | 15.38 | 56.92 | 1.54 | 1.54 |\n| B15 | PGAA | 15.24 (1.8) | 9.15 (0.9) | 13.08 (1.4) | 58.5 (0.4) | 1.3 (2.5) | 2.42 (2.4) |\n\nCaption: Elemental composition (at.%) of the multi-component matrix glasses, nominal and the measured value by PGAA method\n\nExamine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:\n{\"value\": \"xx\", \"type\": \"Other\"}\n{\"value\": \"xx\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"xx\", \"unit\": \"xx\", \"material\": \"xx\"}\n\nProcess each cell sequentially, and for any unknown attributes, use \"xx\" as a placeholder. Each JSON should be on a new line.\n\nOutput:",
"role": "user"
},
{
"content": "{\"value\": \"19.67\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Si\", \"unit\": \"mol\", \"material\": \"B5\"}\n{\"value\": \"3.28\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"B\", \"unit\": \"mol\", \"material\": \"B5\"}\n{\"value\": \"16.39\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Na\", \"unit\": \"mol\", \"material\": \"B5\"}\n{\"value\": \"57.38\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"O\", \"unit\": \"mol\", \"material\": \"B5\"}\n{\"value\": \"1.64\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Ba\", \"unit\": \"mol\", \"material\": \"B5\"}\n{\"value\": \"1.64\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Zr\", \"unit\": \"mol\", \"material\": \"B5\"}\n{\"value\": \"19.1\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Si\", \"unit\": \"mol\", \"material\": \"B5\"}\n{\"value\": \"3.69\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"B\", \"unit\": \"mol\", \"material\": \"B5\"}\n{\"value\": \"15.37\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Na\", \"unit\": \"mol\", \"material\": \"B5\"}\n{\"value\": \"58.0\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"O\", \"unit\": \"mol\", \"material\": \"B5\"}\n{\"value\": \"1.3\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Ba\", \"unit\": \"mol\", \"material\": \"B5\"}\n{\"value\": \"2.19\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Zr\", \"unit\": \"mol\", \"material\": \"B5\"}\n{\"value\": \"17.46\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Si\", \"unit\": \"mol\", \"material\": \"B10\"}\n{\"value\": \"6.35\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"B\", \"unit\": \"mol\", \"material\": \"B10\"}\n{\"value\": \"15.87\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Na\", \"unit\": \"mol\", \"material\": \"B10\"}\n{\"value\": \"57.14\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"O\", \"unit\": \"mol\", \"material\": \"B10\"}\n{\"value\": \"1.59\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Ba\", \"unit\": \"mol\", \"material\": \"B10\"}\n{\"value\": \"1.59\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Zr\", \"unit\": \"mol\", \"material\": \"B10\"}\n{\"value\": \"19.1\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Si\", \"unit\": \"mol\", \"material\": \"B10\"}\n{\"value\": \"6.59\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"B\", \"unit\": \"mol\", \"material\": \"B10\"}\n{\"value\": \"14.3\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Na\", \"unit\": \"mol\", \"material\": \"B10\"}\n{\"value\": \"58.4\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"O\", \"unit\": \"mol\", \"material\": \"B10\"}\n{\"value\": \"1.3\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Ba\", \"unit\": \"mol\", \"material\": \"B10\"}\n{\"value\": \"2.18\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Zr\", \"unit\": \"mol\", \"material\": \"B10\"}\n{\"value\": \"15.38\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Si\", \"unit\": \"mol\", \"material\": \"B15\"}\n{\"value\": \"9.23\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"B\", \"unit\": \"mol\", \"material\": \"B15\"}\n{\"value\": \"15.38\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Na\", \"unit\": \"mol\", \"material\": \"B15\"}\n{\"value\": \"56.92\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"O\", \"unit\": \"mol\", \"material\": 
\"B15\"}\n{\"value\": \"1.54\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Ba\", \"unit\": \"mol\", \"material\": \"B15\"}\n{\"value\": \"1.54\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Zr\", \"unit\": \"mol\", \"material\": \"B15\"}\n{\"value\": \"15.24\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Si\", \"unit\": \"mol\", \"material\": \"B15\"}\n{\"value\": \"9.15\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"B\", \"unit\": \"mol\", \"material\": \"B15\"}\n{\"value\": \"13.08\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Na\", \"unit\": \"mol\", \"material\": \"B15\"}\n{\"value\": \"58.5\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"O\", \"unit\": \"mol\", \"material\": \"B15\"}\n{\"value\": \"1.3\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Ba\", \"unit\": \"mol\", \"material\": \"B15\"}\n{\"value\": \"2.42\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Zr\", \"unit\": \"mol\", \"material\": \"B15\"}",
"role": "assistant"
}
]
|
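The cell-to-JSON conversion requested in the discomat_te record above is mechanical enough to illustrate in code. The sketch below is only an illustration and is not part of the dataset: the "Glass_Compound_Amount" schema, the constituent order, and the pipe-delimited row format are taken from the record, while the helper names, the choice to strip the parenthesised uncertainties, and the choice to report the unit as "at.%" (the reference output above uses "mol") are assumptions made here.

```python
import json
import re

# Column order of the constituents, taken from the table header in the record above.
CONSTITUENTS = ["Si", "B", "Na", "O", "Ba", "Zr"]


def cell_to_record(value: str, constituent: str, material: str) -> dict:
    """Map one numeric cell to the JSON structure the prompt asks for.

    Parenthesised uncertainties such as "19.1 (1.8)" are stripped and only the
    central value is kept; the unit string follows the table caption (at.%).
    """
    central = re.match(r"[-+]?\d*\.?\d+", value.strip())
    if central is None:
        return {"value": value.strip(), "type": "Other"}
    return {
        "value": central.group(0),
        "type": "Glass_Compound_Amount",
        "constituent": constituent,
        "unit": "at.%",
        "material": material,
    }


def table_row_to_records(row: str) -> list:
    """Split one pipe-delimited data row, e.g. '| B5 | PGAA | 19.1 (1.8) | ... |'."""
    cells = [c.strip() for c in row.strip().strip("|").split("|")]
    material, _label, values = cells[0], cells[1], cells[2:]
    return [cell_to_record(v, c, material) for v, c in zip(values, CONSTITUENTS)]


if __name__ == "__main__":
    row = "| B5 | PGAA | 19.1 (1.8) | 3.69 (1.0) | 15.37 (1.4) | 58.0 (0.5) | 1.3 (2.6) | 2.19 (2.5) |"
    for record in table_row_to_records(row):
        print(json.dumps(record))
```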
science.mup_single_document_summarization | Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.
Title of the Paper:
Maximum Likelihood Estimation for Multimodal Learning with Missing Modality
Main Content:
1 INTRODUCTION . Multimodal learning is an important research area , which builds models to process and relate information between different modalities ( Ngiam et al. , 2011 ; Srivastava & Salakhutdinov , 2014 ; Baltrušaitis et al. , 2018 ) . Compared with unimodal learning , multimodal learning can achieve better performance by properly utilizing the multimodal data . It has been successfully used in many applications , such as multimodal emotion recognition ( Soleymani et al. , 2011 ; Mittal et al. , 2020 ) , multimedia event detection ( Li et al. , 2020 ) , and visual question-answering ( Yu et al. , 2019 ) . With the emergence of big data , multimodal learning becomes more and more important to combine the multimodal data from different sources . A number of previous works ( Tzirakis et al. , 2017 ; Zhang et al. , 2017 ; Elliott et al. , 2017 ; Kim et al. , 2020 ; Zhang et al. , 2020 ) have achieved great successes based on complete observations during the training process . However , in practice , the multimodal data may have missing modalities ( Du et al. , 2018 ; Ma et al. , 2021a ; b ) . This may be caused by various reasons . For instance , the sensor that collects the multimodal data is damaged or the network transmission fails . Examples of the multimodal data are shown in Figure 1 . In the past years , different approaches have been proposed to deal with modality missing . A simple and typical way ( Hastie et al. , 2009 ) is to directly discard the data with missing modalities . Since the information contained in the modality-missing data is neglected , such method often has limited performance . In addition , researchers ( Tran et al. , 2017 ; Chen & Zhang , 2020 ; Liu et al. , 2021 ; Ma et al. , 2021b ) have proposed approaches to heuristically combine the information of the modalitymissing data . However , most of these works lack theoretical explanations , and these empirical methods are often implemented using multiple training stages rather than an end-to-end manner , which lead to the information of the modality-missing data not being well exploited . To tackle above issues , we propose an efficient approach based on maximum likelihood estimation to effectively utilize the modality-missing data . To be specific , we present a likelihood function to characterize the conditional distributions of the modality-complete data and the modality-missing data , which is theoretically optimal . Furthermore , we adopt a generalized form of the softmax function to efficiently implement our maximum likelihood estimation algorithm . Such training strategy guarantees the computability of our framework in an end-to-end scheme . In this way , our approach can effectively leverage the information of the modality-missing data during the training process , Finally , we perform several experiments on real-world multimodal datasets , including eNTERFACE ’ 05 ( Martin et al. , 2006 ) and RAVDESS ( Livingstone & Russo , 2018 ) . The results show the effectiveness of our approach in handling the problem of modality missing . To summarize , our contribution is three-fold : • We design a likelihood function to learn the conditional distributions of the modalitycomplete data and the modality-missing data , which is theoretically optimal . • We develop a generalized form of the softmax function to implement our maximum likelihood estimation framework in an end-to-end manner , which is more effective than previous works . • We conduct a series of experiments on real-world multimodal datasets . 
The results validate the effectiveness of our approach , even when 95 % of the training data has missing modality . 2 METHODOLOGY . Our goal is to deal with the problem of modality missing in multimodal learning based on maximum likelihood estimation . In the following , we first show the problem formulation , and then describe the details of our framework . 2.1 PROBLEM FORMULATION . In this paper , we consider that the multimodal data has two modalities . Here , the random variables corresponding to these two modalities and their category labels are denoted as X , Y , and Z , respectively . In the training process , we assume that there are two independently observed datasets : modality-complete and modality-missing . We use DXY Z = { ( x ( i ) c , y ( i ) c , z ( i ) c ) | z ( i ) c ∈ Z = { 1 , 2 , · · · , |Z| } } nc i=1 to represent the modality-complete dataset , where x ( i ) c and y ( i ) c represent the two modalities of the i-th sample of DXY Z respectively , z ( i ) c is their corresponding category label , and the size of DXY Z is nc . We then use DXZ = { ( x ( i ) m , z ( i ) m ) | z ( i ) m ∈ Z = { 1 , 2 , · · · , |Z| } } nm i=1 to represent the modality-missing dataset , where the size of DXZ is nm . In addition , we adopt [ DXY Z ] XY to represent { ( x ( i ) c , y ( i ) c ) } nc i=1 . [ DXY Z ] Z , [ DXZ ] X , and [ DXZ ] Z are expressed in the same way . The multimodal data of DXY Z and DXZ are assumed to be i.i.d . generated from an unknown underlying joint distribution . By utilizing the knowledge of the modality-complete data and the modality-missing data , we hope our framework can predict the category labels correctly . 2.2 MAXIMUM LIKELIHOOD ESTIMATION FOR MISSING MODALITY . In this section , we first present how to design a likelihood function to learn the conditional distributions of the modality-complete data and the modality-missing data . Then , we show that by adopting a generalized form of the softmax function , we design a training strategy to implement our algorithm . 2.2.1 LIKELIHOOD FUNCTION ANALYSES . Maximum likelihood estimation is a statistical method of using the observed data to estimate the distribution by maximizing the likelihood function . The estimated distribution makes the observed data most likely ( Myung , 2003 ) . With this idea , we study the likelihood function on datasets DXY Z and DXZ . For the classification task , the conditional likelihood is commonly used . Inspired by this , we use a model QXY Z to learn the underlying joint distribution of DXY Z and DXZ . The conditional likelihood can be represented as : ` , P ( [ DXY Z ] Z , [ DXZ ] Z | [ DXY Z ] XY , [ DXZ ] X ; QXY Z ) a = P ( [ DXY Z ] Z | [ DXY Z ] XY ; QXY Z ) · P ( [ DXZ ] Z | [ DXZ ] X ; QXY Z ) b = ∏ ( x , y , z ) ∈DXY Z QZ|XY ( z|xy ) · ∏ ( x , z ) ∈DXZ QZ|X ( z|x ) ( 1 ) where the step a follows from the fact that datasets DXY Z and DXZ are observed independently , and the step b is due to that samples in each dataset are i.i.d . QZ|XY and QZ|X are conditional distributions of QXY Z . In this way , we show the likelihood function using the information of DXY Z and DXZ . Then , we use the negative log-likelihood as the loss function to train our deep learning model , i.e. 
, L , − log ` = − ∑ ( x , y , z ) ∈DXY Z logQZ|XY ( z|xy ) − ∑ ( x , z ) ∈DXZ logQZ|X ( z|x ) ( 2 ) It is worth noting that in ( Daniels , 1961 ; Lehmann , 2004 ) , maximum likelihood estimation is proved to be an asymptotically-efficient strategy , which guarantees the theoretical optimality of our method to deal with modality missing . To optimize L , we use deep neural networks to extract the k-dimensional feature representations from the observation ( x , y , z ) , which are represented as f ( x ) = [ f1 ( x ) , f2 ( x ) , · · · , fk ( x ) ] T , g ( y ) = [ g1 ( y ) , g2 ( y ) , · · · , gk ( y ) ] T , and h ( z ) = [ h1 ( z ) , h2 ( z ) , · · · , hk ( z ) ] T , respectively . We then utilize these features to learn QZ|XY and QZ|X in L. Our framework is shown in Figure 2 . In this way , we show the log-likelihood function L. By characterizing the conditional distributions of the modality-complete data and the modality-missing data , it leverages the underlying structure information behind the multimodal data , which constitutes the theoretical basis of our framework . 2.2.2 MAXIMUM LIKELIHOOD ESTIMATION IMPLEMENTATION . In fact , it is not easy to optimize the log-likelihood function L in Equation ( 2 ) by designing neural networks , which is mainly due to two reasons . Firstly , the representations of the high-dimensional data and the procedure to model them are complicated . Secondly , since QZ|XY and QZ|X in L are related , how to build models to learn their relationships is difficult . To address these two issues , we develop a generalized form of the softmax function to describe QXY Z as follows 1 : QXY Z ( x , y , z ) = RX ( x ) RY ( y ) RZ ( z ) exp ( φ T ( f ( x ) , g ( y ) ) h ( z ) ) ∑ x′ , y′ , z′ RX ( x ′ ) RY ( y′ ) RZ ( z′ ) exp ( φT ( f ( x′ ) , g ( y′ ) ) h ( z′ ) ) ( 3 ) where φ ( f , g ) represents the function to fuse features f and g. We study three forms of φ to investigate its effect in our framework , as shown in Figure 3 . RX , RY , and RZ represent the underlying marginal distributions of the variables X , Y , and Z , respectively . Their use makes the denominator of Equation ( 3 ) expressed in the form of the mean over RX , RY , and RZ , which serves as the normalization to make QXY Z a valid distribution and is helpful for our further derivation . In addition , the generalized softmax function we propose can be regarded as a generalization of softmax learning in ( Xu et al. , 2018 ) from unimodal learning to multimodal learning . In this way , we show the distribution QXY Z by adopting a generalized form of the softmax function , which has the following two benefits . Firstly , by depicting the representation of QXY Z , we can further derive QZ|XY and QZ|X . It makes our approach a unified framework to combine the information of the modality-complete data and the modality-missing data . Secondly , it avoids modeling the relationship between QZ|XY and QZ|X . In fact , the correlation between the high-dimensional data can be rather complex . Then , we derive conditional distributions QZ|XY and QZ|X from Equation ( 3 ) : QZ|XY ( z|xy ) = RZ ( z ) exp ( φT ( f ( x ) , g ( y ) ) h ( z ) ) ∑ z′ RZ ( z ′ ) exp ( φT ( f ( x ) , g ( y ) ) h ( z′ ) ) ( 4 ) and QZ|X ( z|x ) = RZ ( z ) ∑ y′ RY ( y ′ ) exp ( φT ( f ( x ) , g ( y′ ) ) h ( z ) ) ∑ z′ RZ ( z ′ ) ∑ y′ RY ( y ′ ) exp ( φT ( f ( x ) , g ( y′ ) ) h ( z′ ) ) ( 5 ) We can observe that by introducing RX , RY , and RZ into QXY Z , the derived QZ|XY and QZ|X are expressed in the form of the mean over RY and RZ . 
In practice , we can use the empirical mean as an estimation . Correspondingly , by plugging Equations ( 4 ) and ( 5 ) into Equation ( 2 ) , we can summarize the detailed steps to compute our objective function L , as shown in Algorithm 1 . It is worth pointing out that when we compute QZ|X , we need to use the information of the modality y . Since in the training process , the modality y of the dataset DXZ is missing , we utilize samples of the modality y of the dataset DXY Z to compute QZ|X . Finally , we utilize neural networks to extract features f , g , and h from the modality-complete data and the modality-missing data to optimize our log-likelihood functionL . It performs classification directly , which does not need to explicitly complement the modality-missing data before the classification task . 1Strictly speaking , RX and RY are probability density functions , and RZ is a probability mass function . The denominator of Equation ( 3 ) should be integrated over RX and RY . We use summation here for the simplicity of exposition . Algorithm 1 Compute our objective function on a mini-batch . Input : A modality-complete batch { ( x ( i ) c , y ( i ) c , z ( i ) c ) } n1 i=1 , where n1 is the batch size . A modality-missing batch { ( x ( i ) m , z ( i ) m ) } n2 i=1 , where n2 is the batch size . Neural networks with k output units : f , g , and h. Output : The value of our objective L. 1 : Compute empirical label distribution R̂Z : R̂Z ( z ) ← ∑n1 i=1 1 ( z ( i ) c =z ) + ∑n2 i=1 1 ( z ( i ) m =z ) n1+n2 , z = 1 , 2 , · · · , |Z| 2 : Compute QZ|XY : QZ|XY ( z ( i ) c |x ( i ) c , y ( i ) c ) ← R̂Z ( z ( i ) c ) exp ( φ T ( f ( x ( i ) c ) , g ( y ( i ) c ) ) h ( z ) ) ∑|Z| z′=1 RZ ( z ′ ) exp ( φT ( f ( x ( i ) c ) , g ( y ( i ) c ) ) h ( z′ ) ) , i = 1 , · · · , n1 3 : Compute QZ|X : QZ|X ( z ( i ) m |x ( i ) m ) ← R̂Z ( z ( i ) m ) 1 n1 ∑n1 j=1 exp ( φ T ( f ( x ( i ) m ) , g ( y ( j ) c ) ) h ( z ( i ) m ) ) ∑|Z| z′=1 RZ ( z ′ ) 1n1 ∑n1 j=1 exp ( φ T ( f ( x ( i ) m ) , g ( y ( j ) c ) ) h ( z′ ) ) , i = 1 , · · · , n2 4 : Compute our empirical objective L : −∑n1i=1 logQZ|XY ( z ( i ) c |x ( i ) c , y ( i ) c ) − ∑n2 i=1 logQZ|X ( z ( i ) m |x ( i ) m ) | The authors propose a probabilistic framework to improve the classification accuracy in instances when there exists missing data in the multi-modality datasets (where one of the modalities is the predictive label; however, this label is not assumed missing). To this end, they propose a generalized softmax function as the joint distribution of all modalities and the label, from which conditional distributions are derived, for computing the maximum likelihood estimate (MLE). Experimental results on eNTERFACE and RAVDESS datasets demonstrate improvements in classification accuracy over baselines. In addition, the authors investigate the influence of the influence of the backbone models and the fusion functions. | science.mup_single_document_summarization.3509 | [
{
"content": "Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.\n\nTitle of the Paper:\nMaximum Likelihood Estimation for Multimodal Learning with Missing Modality\n\nMain Content:\n1 INTRODUCTION . Multimodal learning is an important research area , which builds models to process and relate information between different modalities ( Ngiam et al. , 2011 ; Srivastava & Salakhutdinov , 2014 ; Baltrušaitis et al. , 2018 ) . Compared with unimodal learning , multimodal learning can achieve better performance by properly utilizing the multimodal data . It has been successfully used in many applications , such as multimodal emotion recognition ( Soleymani et al. , 2011 ; Mittal et al. , 2020 ) , multimedia event detection ( Li et al. , 2020 ) , and visual question-answering ( Yu et al. , 2019 ) . With the emergence of big data , multimodal learning becomes more and more important to combine the multimodal data from different sources . A number of previous works ( Tzirakis et al. , 2017 ; Zhang et al. , 2017 ; Elliott et al. , 2017 ; Kim et al. , 2020 ; Zhang et al. , 2020 ) have achieved great successes based on complete observations during the training process . However , in practice , the multimodal data may have missing modalities ( Du et al. , 2018 ; Ma et al. , 2021a ; b ) . This may be caused by various reasons . For instance , the sensor that collects the multimodal data is damaged or the network transmission fails . Examples of the multimodal data are shown in Figure 1 . In the past years , different approaches have been proposed to deal with modality missing . A simple and typical way ( Hastie et al. , 2009 ) is to directly discard the data with missing modalities . Since the information contained in the modality-missing data is neglected , such method often has limited performance . In addition , researchers ( Tran et al. , 2017 ; Chen & Zhang , 2020 ; Liu et al. , 2021 ; Ma et al. , 2021b ) have proposed approaches to heuristically combine the information of the modalitymissing data . However , most of these works lack theoretical explanations , and these empirical methods are often implemented using multiple training stages rather than an end-to-end manner , which lead to the information of the modality-missing data not being well exploited . To tackle above issues , we propose an efficient approach based on maximum likelihood estimation to effectively utilize the modality-missing data . To be specific , we present a likelihood function to characterize the conditional distributions of the modality-complete data and the modality-missing data , which is theoretically optimal . Furthermore , we adopt a generalized form of the softmax function to efficiently implement our maximum likelihood estimation algorithm . Such training strategy guarantees the computability of our framework in an end-to-end scheme . In this way , our approach can effectively leverage the information of the modality-missing data during the training process , Finally , we perform several experiments on real-world multimodal datasets , including eNTERFACE ’ 05 ( Martin et al. , 2006 ) and RAVDESS ( Livingstone & Russo , 2018 ) . The results show the effectiveness of our approach in handling the problem of modality missing . To summarize , our contribution is three-fold : • We design a likelihood function to learn the conditional distributions of the modalitycomplete data and the modality-missing data , which is theoretically optimal . 
• We develop a generalized form of the softmax function to implement our maximum likelihood estimation framework in an end-to-end manner , which is more effective than previous works . • We conduct a series of experiments on real-world multimodal datasets . The results validate the effectiveness of our approach , even when 95 % of the training data has missing modality . 2 METHODOLOGY . Our goal is to deal with the problem of modality missing in multimodal learning based on maximum likelihood estimation . In the following , we first show the problem formulation , and then describe the details of our framework . 2.1 PROBLEM FORMULATION . In this paper , we consider that the multimodal data has two modalities . Here , the random variables corresponding to these two modalities and their category labels are denoted as X , Y , and Z , respectively . In the training process , we assume that there are two independently observed datasets : modality-complete and modality-missing . We use DXY Z = { ( x ( i ) c , y ( i ) c , z ( i ) c ) | z ( i ) c ∈ Z = { 1 , 2 , · · · , |Z| } } nc i=1 to represent the modality-complete dataset , where x ( i ) c and y ( i ) c represent the two modalities of the i-th sample of DXY Z respectively , z ( i ) c is their corresponding category label , and the size of DXY Z is nc . We then use DXZ = { ( x ( i ) m , z ( i ) m ) | z ( i ) m ∈ Z = { 1 , 2 , · · · , |Z| } } nm i=1 to represent the modality-missing dataset , where the size of DXZ is nm . In addition , we adopt [ DXY Z ] XY to represent { ( x ( i ) c , y ( i ) c ) } nc i=1 . [ DXY Z ] Z , [ DXZ ] X , and [ DXZ ] Z are expressed in the same way . The multimodal data of DXY Z and DXZ are assumed to be i.i.d . generated from an unknown underlying joint distribution . By utilizing the knowledge of the modality-complete data and the modality-missing data , we hope our framework can predict the category labels correctly . 2.2 MAXIMUM LIKELIHOOD ESTIMATION FOR MISSING MODALITY . In this section , we first present how to design a likelihood function to learn the conditional distributions of the modality-complete data and the modality-missing data . Then , we show that by adopting a generalized form of the softmax function , we design a training strategy to implement our algorithm . 2.2.1 LIKELIHOOD FUNCTION ANALYSES . Maximum likelihood estimation is a statistical method of using the observed data to estimate the distribution by maximizing the likelihood function . The estimated distribution makes the observed data most likely ( Myung , 2003 ) . With this idea , we study the likelihood function on datasets DXY Z and DXZ . For the classification task , the conditional likelihood is commonly used . Inspired by this , we use a model QXY Z to learn the underlying joint distribution of DXY Z and DXZ . The conditional likelihood can be represented as : ` , P ( [ DXY Z ] Z , [ DXZ ] Z | [ DXY Z ] XY , [ DXZ ] X ; QXY Z ) a = P ( [ DXY Z ] Z | [ DXY Z ] XY ; QXY Z ) · P ( [ DXZ ] Z | [ DXZ ] X ; QXY Z ) b = ∏ ( x , y , z ) ∈DXY Z QZ|XY ( z|xy ) · ∏ ( x , z ) ∈DXZ QZ|X ( z|x ) ( 1 ) where the step a follows from the fact that datasets DXY Z and DXZ are observed independently , and the step b is due to that samples in each dataset are i.i.d . QZ|XY and QZ|X are conditional distributions of QXY Z . In this way , we show the likelihood function using the information of DXY Z and DXZ . Then , we use the negative log-likelihood as the loss function to train our deep learning model , i.e. 
, L , − log ` = − ∑ ( x , y , z ) ∈DXY Z logQZ|XY ( z|xy ) − ∑ ( x , z ) ∈DXZ logQZ|X ( z|x ) ( 2 ) It is worth noting that in ( Daniels , 1961 ; Lehmann , 2004 ) , maximum likelihood estimation is proved to be an asymptotically-efficient strategy , which guarantees the theoretical optimality of our method to deal with modality missing . To optimize L , we use deep neural networks to extract the k-dimensional feature representations from the observation ( x , y , z ) , which are represented as f ( x ) = [ f1 ( x ) , f2 ( x ) , · · · , fk ( x ) ] T , g ( y ) = [ g1 ( y ) , g2 ( y ) , · · · , gk ( y ) ] T , and h ( z ) = [ h1 ( z ) , h2 ( z ) , · · · , hk ( z ) ] T , respectively . We then utilize these features to learn QZ|XY and QZ|X in L. Our framework is shown in Figure 2 . In this way , we show the log-likelihood function L. By characterizing the conditional distributions of the modality-complete data and the modality-missing data , it leverages the underlying structure information behind the multimodal data , which constitutes the theoretical basis of our framework . 2.2.2 MAXIMUM LIKELIHOOD ESTIMATION IMPLEMENTATION . In fact , it is not easy to optimize the log-likelihood function L in Equation ( 2 ) by designing neural networks , which is mainly due to two reasons . Firstly , the representations of the high-dimensional data and the procedure to model them are complicated . Secondly , since QZ|XY and QZ|X in L are related , how to build models to learn their relationships is difficult . To address these two issues , we develop a generalized form of the softmax function to describe QXY Z as follows 1 : QXY Z ( x , y , z ) = RX ( x ) RY ( y ) RZ ( z ) exp ( φ T ( f ( x ) , g ( y ) ) h ( z ) ) ∑ x′ , y′ , z′ RX ( x ′ ) RY ( y′ ) RZ ( z′ ) exp ( φT ( f ( x′ ) , g ( y′ ) ) h ( z′ ) ) ( 3 ) where φ ( f , g ) represents the function to fuse features f and g. We study three forms of φ to investigate its effect in our framework , as shown in Figure 3 . RX , RY , and RZ represent the underlying marginal distributions of the variables X , Y , and Z , respectively . Their use makes the denominator of Equation ( 3 ) expressed in the form of the mean over RX , RY , and RZ , which serves as the normalization to make QXY Z a valid distribution and is helpful for our further derivation . In addition , the generalized softmax function we propose can be regarded as a generalization of softmax learning in ( Xu et al. , 2018 ) from unimodal learning to multimodal learning . In this way , we show the distribution QXY Z by adopting a generalized form of the softmax function , which has the following two benefits . Firstly , by depicting the representation of QXY Z , we can further derive QZ|XY and QZ|X . It makes our approach a unified framework to combine the information of the modality-complete data and the modality-missing data . Secondly , it avoids modeling the relationship between QZ|XY and QZ|X . In fact , the correlation between the high-dimensional data can be rather complex . Then , we derive conditional distributions QZ|XY and QZ|X from Equation ( 3 ) : QZ|XY ( z|xy ) = RZ ( z ) exp ( φT ( f ( x ) , g ( y ) ) h ( z ) ) ∑ z′ RZ ( z ′ ) exp ( φT ( f ( x ) , g ( y ) ) h ( z′ ) ) ( 4 ) and QZ|X ( z|x ) = RZ ( z ) ∑ y′ RY ( y ′ ) exp ( φT ( f ( x ) , g ( y′ ) ) h ( z ) ) ∑ z′ RZ ( z ′ ) ∑ y′ RY ( y ′ ) exp ( φT ( f ( x ) , g ( y′ ) ) h ( z′ ) ) ( 5 ) We can observe that by introducing RX , RY , and RZ into QXY Z , the derived QZ|XY and QZ|X are expressed in the form of the mean over RY and RZ . 
In practice , we can use the empirical mean as an estimation . Correspondingly , by plugging Equations ( 4 ) and ( 5 ) into Equation ( 2 ) , we can summarize the detailed steps to compute our objective function L , as shown in Algorithm 1 . It is worth pointing out that when we compute QZ|X , we need to use the information of the modality y . Since in the training process , the modality y of the dataset DXZ is missing , we utilize samples of the modality y of the dataset DXY Z to compute QZ|X . Finally , we utilize neural networks to extract features f , g , and h from the modality-complete data and the modality-missing data to optimize our log-likelihood functionL . It performs classification directly , which does not need to explicitly complement the modality-missing data before the classification task . 1Strictly speaking , RX and RY are probability density functions , and RZ is a probability mass function . The denominator of Equation ( 3 ) should be integrated over RX and RY . We use summation here for the simplicity of exposition . Algorithm 1 Compute our objective function on a mini-batch . Input : A modality-complete batch { ( x ( i ) c , y ( i ) c , z ( i ) c ) } n1 i=1 , where n1 is the batch size . A modality-missing batch { ( x ( i ) m , z ( i ) m ) } n2 i=1 , where n2 is the batch size . Neural networks with k output units : f , g , and h. Output : The value of our objective L. 1 : Compute empirical label distribution R̂Z : R̂Z ( z ) ← ∑n1 i=1 1 ( z ( i ) c =z ) + ∑n2 i=1 1 ( z ( i ) m =z ) n1+n2 , z = 1 , 2 , · · · , |Z| 2 : Compute QZ|XY : QZ|XY ( z ( i ) c |x ( i ) c , y ( i ) c ) ← R̂Z ( z ( i ) c ) exp ( φ T ( f ( x ( i ) c ) , g ( y ( i ) c ) ) h ( z ) ) ∑|Z| z′=1 RZ ( z ′ ) exp ( φT ( f ( x ( i ) c ) , g ( y ( i ) c ) ) h ( z′ ) ) , i = 1 , · · · , n1 3 : Compute QZ|X : QZ|X ( z ( i ) m |x ( i ) m ) ← R̂Z ( z ( i ) m ) 1 n1 ∑n1 j=1 exp ( φ T ( f ( x ( i ) m ) , g ( y ( j ) c ) ) h ( z ( i ) m ) ) ∑|Z| z′=1 RZ ( z ′ ) 1n1 ∑n1 j=1 exp ( φ T ( f ( x ( i ) m ) , g ( y ( j ) c ) ) h ( z′ ) ) , i = 1 , · · · , n2 4 : Compute our empirical objective L : −∑n1i=1 logQZ|XY ( z ( i ) c |x ( i ) c , y ( i ) c ) − ∑n2 i=1 logQZ|X ( z ( i ) m |x ( i ) m )",
"role": "user"
},
{
"content": "The authors propose a probabilistic framework to improve the classification accuracy in instances when there exists missing data in the multi-modality datasets (where one of the modalities is the predictive label; however, this label is not assumed missing). To this end, they propose a generalized softmax function as the joint distribution of all modalities and the label, from which conditional distributions are derived, for computing the maximum likelihood estimate (MLE). Experimental results on eNTERFACE and RAVDESS datasets demonstrate improvements in classification accuracy over baselines. In addition, the authors investigate the influence of the influence of the backbone models and the fusion functions.",
"role": "assistant"
}
]
|
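The generalized softmax in Equations (4) and (5) of the paper summarized above reduces to a few lines of array arithmetic once the features have been extracted. The sketch below is a minimal illustration rather than the authors' implementation: the neural feature extractors f, g, h and the fusion function phi are assumed to be given as precomputed arrays, and the max-shift stabilisation is an implementation choice added here.

```python
import numpy as np


def q_z_given_xy(phi_xy: np.ndarray, h: np.ndarray, r_z: np.ndarray) -> np.ndarray:
    """Q(z | x, y) from Eq. (4): proportional to R_Z(z) * exp(phi(f(x), g(y))^T h(z)).

    phi_xy : (k,)      fused feature phi(f(x), g(y)) for one modality-complete sample
    h      : (|Z|, k)  label embeddings h(z) for every class z
    r_z    : (|Z|,)    empirical label distribution R_Z
    """
    logits = h @ phi_xy                              # phi^T h(z) for each class
    weights = r_z * np.exp(logits - logits.max())    # shift by max for numerical stability
    return weights / weights.sum()


def q_z_given_x(phi_x_ys: np.ndarray, h: np.ndarray, r_z: np.ndarray) -> np.ndarray:
    """Q(z | x) from Eq. (5), replacing the expectation over R_Y with the empirical
    mean over the y samples of the modality-complete batch (as in Algorithm 1).

    phi_x_ys : (n1, k)  fused features phi(f(x), g(y_j)) for each complete sample y_j
    """
    logits = phi_x_ys @ h.T                          # (n1, |Z|)
    stable = np.exp(logits - logits.max())           # shared shift keeps the ratios intact
    weights = r_z * stable.mean(axis=0)              # average over the y samples
    return weights / weights.sum()


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    k, n_classes, n1 = 8, 4, 5
    r_z = np.full(n_classes, 1.0 / n_classes)
    h = rng.normal(size=(n_classes, k))
    print(q_z_given_xy(rng.normal(size=k), h, r_z))       # valid probability vector
    print(q_z_given_x(rng.normal(size=(n1, k)), h, r_z))  # valid probability vector
```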
science.scifact_entailment | Determine the relationship between them and generate a JSON response. Scientific Claim:
Human embryonic stem cells give rise to cell types from the outer embryonic germ layer, but not the other two layers.
Research Abstract:
Embryonic stem cells have the ability to remain undifferentiated and proliferate indefinitely in vitro while maintaining the potential to differentiate into derivatives of all three embryonic germ layers.
These cells have, therefore, potential for in vitro differentiation studies, gene function, and so on.
The aim of this study was to produce a human embryonic stem cell line.
An inner cell mass of a human blastocyst was separated and cultured on mouse embryonic fibroblasts in embryonic stem cell medium with related additives.
The established line was evaluated by morphology; passaging; freezing and thawing; alkaline phosphatase; Oct-4 expression; anti-surface markers including Tra-1-60 and Tra-1-81; and karyotype and spontaneous differentiation.
Differentiated cardiomyocytes and neurons were evaluated by transmission electron microscopy and immunocytochemistry.
Here, we report the derivation of a new embryonic stem cell line (Royan H1) from a human blastocyst that remains undifferentiated in morphology during continuous passaging for more than 30 passages, maintains a normal XX karyotype, is viable after freezing and thawing, and expresses alkaline phosphatase, Oct-4, Tra-1-60, and Tra-1-81.
These cells remain undifferentiated when grown on mouse embryonic fibroblast feeder layers in the presence or absence of recombinant human leukemia inhibitory factor.
Royan H1 cells can differentiate in vitro in the absence of feeder cells and can produce embryoid bodies that can further differentiate into beating cardiomyocytes as well as neurons.
These results define Royan H1 cells as a new human embryonic stem cell line. | {"verdict": "CONTRADICT", "evidence": ["Embryonic stem cells have the ability to remain undifferentiated and proliferate indefinitely in vitro while maintaining the potential to differentiate into derivatives of all three embryonic germ layers."]} | science.scifact_entailment.343 | [
{
"content": "Determine the relationship between them and generate a JSON response. Scientific Claim: \nHuman embryonic stem cells give rise to cell types from the outer embryonic germ layer, but not the other two layers.\n\nResearch Abstract:\nEmbryonic stem cells have the ability to remain undifferentiated and proliferate indefinitely in vitro while maintaining the potential to differentiate into derivatives of all three embryonic germ layers.\nThese cells have, therefore, potential for in vitro differentiation studies, gene function, and so on.\nThe aim of this study was to produce a human embryonic stem cell line.\nAn inner cell mass of a human blastocyst was separated and cultured on mouse embryonic fibroblasts in embryonic stem cell medium with related additives.\nThe established line was evaluated by morphology; passaging; freezing and thawing; alkaline phosphatase; Oct-4 expression; anti-surface markers including Tra-1-60 and Tra-1-81; and karyotype and spontaneous differentiation.\nDifferentiated cardiomyocytes and neurons were evaluated by transmission electron microscopy and immunocytochemistry.\nHere, we report the derivation of a new embryonic stem cell line (Royan H1) from a human blastocyst that remains undifferentiated in morphology during continuous passaging for more than 30 passages, maintains a normal XX karyotype, is viable after freezing and thawing, and expresses alkaline phosphatase, Oct-4, Tra-1-60, and Tra-1-81.\nThese cells remain undifferentiated when grown on mouse embryonic fibroblast feeder layers in the presence or absence of recombinant human leukemia inhibitory factor.\nRoyan H1 cells can differentiate in vitro in the absence of feeder cells and can produce embryoid bodies that can further differentiate into beating cardiomyocytes as well as neurons.\nThese results define Royan H1 cells as a new human embryonic stem cell line.",
"role": "user"
},
{
"content": "{\"verdict\": \"CONTRADICT\", \"evidence\": [\"Embryonic stem cells have the ability to remain undifferentiated and proliferate indefinitely in vitro while maintaining the potential to differentiate into derivatives of all three embryonic germ layers.\"]}",
"role": "assistant"
}
]
|
science.qasper_abstractive_qa | Examine the provided sections from a research document and address the accompanying query. The text is divided by newlines.
Document Content:
Text Summarization using Abstract Meaning Representation
With an ever increasing size of text present on the Internet, automatic summary generation remains an important problem for natural language understanding. In this work we explore a novel full-fledged pipeline for text summarization with an intermediate step of Abstract Meaning Representation (AMR). The pipeline proposed by us first generates an AMR graph of an input story, through which it extracts a summary graph and finally, generate summary sentences from this summary graph. Our proposed method achieves state-of-the-art results compared to the other text summarization routines based on AMR. We also point out some significant problems in the existing evaluation methods, which make them unsuitable for evaluating summary quality.
Background: AMR Parsing and Generation
AMR was introduced by BIBREF1 with the aim to induce work on statistical Natural Language Understanding and Generation. AMR represents meaning using graphs. AMR graphs are rooted, directed, edge and vertex labeled graphs. Figure FIGREF4 shows the graphical representation of the AMR graph of the sentence "I looked carefully all around me" generated by JAMR parser ( BIBREF2 ). The graphical representation was produced using AMRICA BIBREF3 . The nodes in the AMR are labeled with concepts as in Figure FIGREF4 around represents a concept. Edges contains the information regarding the relations between the concepts. In Figure FIGREF4 direction is the relation between the concepts look-01 and around. AMR relies on Propbank for semantic relations (edge labels). Concepts can also be of the form run-01 where the index 01 represents the first sense of the word run. Further details about the AMR can be found in the AMR guidelines BIBREF4 .
A lot of work has been done on parsing sentences to their AMR graphs. There are three main approaches to parsing. There is alignment based parsing BIBREF2 (JAMR-Parser), BIBREF5 which uses graph based algorithms for concept and relation identification. Second, grammar based parsers like BIBREF6 (CAMR) generate output by performing shift reduce transformations on output of a dependency parser. Neural parsing BIBREF7 , BIBREF8 is based on using seq2seq models for parsing, the main problem for neural methods is the absence of a huge corpus of human generated AMRs. BIBREF8 reduced the vocabulary size to tackle this problem while BIBREF7 used larger external corpus of external sentences.
Recently, some work has been done on producing meaningful sentences form AMR graphs. BIBREF2 used a number of tree to string conversion rules for generating sentences. BIBREF9 reformed the problem as a traveling salesman problem. BIBREF7 used seq2seq learning methods.
Step 1: Story to AMR
As the first step we convert the story sentences to their Abstract Meaning Representations. We use JAMR-Parser version 2 BIBREF2 as it’s openly available and has a performance close to the state of the art parsers for parsing the CNN-Dailymail corpus. For the AMR-bank we have the gold-standard AMR parses but we still parse the input stories with JAMR-Parser to study the effect of using graphs produced by JAMR-Parser instead of the gold-standard AMR graphs.
Step 2: Story AMR to Summary AMR
After parsing (Step 1) we have the AMR graphs for the story sentences. In this step we extract the AMR graphs of the summary sentences using story sentence AMRs. We divide this task in two parts. First is finding the important sentences from the story and then extracting the key information from those sentences using their AMR graphs.
Our algorithm is based on the idea that only few sentences are important from the point of view of summary i.e. there are only a few sentences which contain most of the important information and from these sentences we can generate the summary.
Hypothesis: Most of the information corresponding to a summary sentence can be found in only one sentence from the story.
To test this hypothesis, for each summary sentence we find the sentence from the story that contains maximum information of this summary sentence. We use ROGUE-1 BIBREF13 Recall scores (measures the ratio of number of words in the target summary that are contained in the predicted summary to the total number of words in the target summary) as the metric for the information contained in the story sentence. We consider the story sentence as the predicted summary and the summary sentence as the target summary. The results that we obtained for 5000 randomly chosen document summary pairs from the CNN-Dailymail corpus are given in figure FIGREF8 . The average recall score that we obtained is 79%. The score will be perfectly 1 when the summary sentence is directly picked up from a story sentence. Upon manual inspection of the summary sentence and the corresponding best sentence from the story we realized, when this score is more than 0.5 or 0.6, almost always the information in the summary sentence is contained in this chosen story sentence. The score for in these cases is not perfectly 1 because of stop words and different verb forms used in story and summary sentence. Around 80% of summary sentences have score above 0.5. So, our hypothesis seems to be correct for most of the summary sentences. This also suggests the highly extractive nature of the summary in the corpus.
Now the task in hand is to select few important sentences. Methods that use sentence extraction for summary generation can be used for the task. It is very common in summarization tasks specifically in news articles that a lot of information is contained in the initial few sentences. Choosing initial few sentences as the summary produces very strong baselines which the state-of-the-art methods beat only marginally. Ex. On the CNN-Dailymail corpus the state-of-the-art extractive method beats initial 3 sentences only by 0.4% as reported by BIBREF14 .
Using this idea of picking important sentences from the beginning, we propose two methods, first is to simply pick initial few sentences, we call this first-n method where n stands for the number of sentences. We pick initial 3 sentences for the CNN-Dailymail corpus i.e. first-3 and only the first sentence for the proxy report section (AMR Bank) i.e. first-1 as they produce the best scores on the ROGUE metric compared to any other first-n. Second, we try to capture the relation between the two most important entities (we define importance by the number of occurrences of the entity in the story) of the document. For this we simply find the first sentence which contains both these entities. We call this the first co-occurrence based sentence selection. We also select the first sentence along with first co-occurrence based sentence selection as the important sentences. We call this the first co-occurrence+first based sentence selection.
The datasets under consideration are news articles, where the most important information is typically an entity and a verb associated with it. To extract the important information from a sentence, we first find the entity being talked about; we take the most referred entity, i.e. the one that occurs most frequently in the text. For the main verb associated with this entity in the sentence, we find the verb closest to the entity in the AMR graph, where the closest verb is defined as the one that lies first on the path from the entity to the root.
We start by finding the position of the most referred entity in the graph, then find the verb closest to it, and finally select the subtree hanging from that verb as the summary AMR.
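For concreteness, a minimal sketch of the two selection heuristics described in this step is given below. It is not the authors' implementation: tokenisation is plain whitespace splitting without stemming or stop-word handling, the two most frequent entities are assumed to be supplied as strings, and only the unigram-recall matching and the first co-occurrence rule from the text above are reproduced.

```python
from collections import Counter
from typing import List, Optional, Tuple


def unigram_recall(reference: str, candidate: str) -> float:
    """Fraction of reference tokens covered by the candidate (clipped counts),
    i.e. the recall score used in the hypothesis test above; tokenisation is
    naive whitespace splitting with lower-casing and no stop-word removal."""
    ref = Counter(reference.lower().split())
    cand = Counter(candidate.lower().split())
    overlap = sum(min(n, cand[tok]) for tok, n in ref.items())
    return overlap / max(sum(ref.values()), 1)


def best_supporting_sentence(summary_sentence: str, story: List[str]) -> Tuple[int, float]:
    """Index and score of the story sentence that covers the summary sentence best."""
    scores = [unigram_recall(summary_sentence, s) for s in story]
    best = max(range(len(scores)), key=scores.__getitem__)
    return best, scores[best]


def first_cooccurrence_sentence(story: List[str], entity_a: str, entity_b: str) -> Optional[int]:
    """Index of the first story sentence mentioning both of the two most frequent
    entities (the first co-occurrence selection rule); None if no such sentence exists."""
    for i, sentence in enumerate(story):
        lowered = sentence.lower()
        if entity_a.lower() in lowered and entity_b.lower() in lowered:
            return i
    return None
```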
Baselines
In this section, we present the baseline models and analysis method used for each step of our pipeline.
For the CNN-Dailymail dataset, the Lead-3 model is considered a strong baseline; both the abstractive BIBREF16 and extractive BIBREF14 state-of-the-art methods on this dataset beat this baseline only marginally. The Lead-3 model simply produces the leading three sentences of the document as its summary.
The key step in our pipeline is step-2 i.e. summary graph extraction.
Directly comparing the Lead-3 baseline with the AMR-based pipeline to evaluate the effectiveness of step-2 is an unfair comparison because of the errors introduced by the imperfect parser and generator in the AMR pipeline. Thus, to evaluate the effectiveness of step-2 against the Lead-3 baseline, we need to nullify the effect of the errors introduced by the AMR parser and generator. We achieve this by introducing similar errors into the leading three sentences of each document: we generate the AMR graphs of the leading three sentences and then generate sentences back from these AMR graphs, using the same parser and generator as in our pipeline. We consider these generated sentences as the new baseline summary, which we shall refer to as the Lead-3-AMR baseline in the remainder of the paper.
For the proxy report section of the AMR bank, we consider the Lead-1-AMR model as the baseline. For this dataset we already have the gold-standard AMR graphs of the sentences. Therefore, we only need to nullify the error introduced by the generator.
Effects of using JAMR Parser
In this subsection we analyze the effect of using the JAMR parser for step-1 instead of the gold-standard AMR graphs. The first part of table TABREF14 reports the scores obtained with the gold-standard AMR graphs, while the second part reports the scores obtained when the JAMR parser is used for AMR graph generation. We have used the same Neural AMR for sentence generation in all methods. The scores of all methods, including the Lead-1-AMR baseline, drop significantly.
The usage of the JAMR Parser affects the scores of first co-occurrence+first and first-1 more than those of Lead-1-AMR. The drop in the ROGUE INLINEFORM0 score when we use first co-occurrence+first is around two ROGUE INLINEFORM1 points larger than when we use Lead-1-AMR. This is a surprising result, and we believe that it is worthy of further research.
Analyzing the effectiveness of AMR extraction
The aim of extracting summary graphs from the sentence AMR graphs is to drop the less important information from the sentences. If we achieve this perfectly, the ROGUE-1 Recall scores should remain almost the same (since we are not adding any new information) while the ROGUE-1 precision should go up (as we have thrown out some useless information), thus effectively improving the overall ROGUE-1 INLINEFORM0 score. The first two rows of table TABREF14 contain the scores obtained using the full AMR and the extracted AMR for generation, respectively. It is safe to say that extracting the AMR improves ROGUE-1 precision while reducing ROGUE-1 Recall only slightly, resulting in an overall improved ROGUE-1 INLINEFORM1 .
Results on the CNN-Dailymail corpus
In table TABREF18 we report the results on the CNN-Dailymail corpus, using the first-3 model. The first row contains the Lead-3-AMR baseline, and the results we achieve are competitive with it. The rest of the table contains the scores of the Lead-3 baseline followed by the state-of-the-art methods on the anonymized and non-anonymized versions of the dataset. The drop in the scores from Lead-3 (non-anonymized) to Lead-3-AMR is significant and is largely caused by the errors introduced by the parser and the generator.
Need of a new Dataset and Evaluation Metric
The ROGUE metric, by its design, has several properties that make it unsuitable for evaluating abstractive summaries. For example, ROGUE matches exact words rather than word stems, and it also counts stop words in the evaluation. One of the reasons why ROGUE-like metrics might never become suitable for evaluating abstractive summaries is their inability to recognize that sentences have been restructured. A good evaluation metric should compare the meaning of the sentences rather than the exact words. As we showed in section SECREF20, ROGUE is not suitable for evaluating summaries generated by the AMR pipeline.
We now show why the CNN-Dailymail corpus is not suitable for abstractive summarization. The summary points in the corpus are highly extractive in nature (see Section SECREF7 for details): most of them are simply picked up from some sentence in the story. Though this is already a good enough reason to start searching for a better dataset, it is not the biggest problem with the corpus. The dataset has the property that a lot of the important information lies in the first few sentences, and most of the summary points are directly picked from these sentences. Extractive methods based on sentence selection, like SummaRuNNer, are not actually performing well; their results are only slightly better than the Lead-3 baseline. The work does not report how many of the selected sentences are among the first few, and it might be the case that the sentences selected by the extractive methods are mostly among the first few sentences. The same problem can affect the abstractive methods, where most of the output might be getting copied from the initial few sentences.
These problems with the corpus evoke the need for another corpus in which important information is not concentrated in any particular location but is spread out more evenly, and in which the summaries are more abstractive in nature.
Possible Future Directions
As the proposed algorithm is a step-by-step process, we can focus on improving each step to produce better results. The most exciting improvements can be made in the summary graph extraction method, since not a lot of work has been done on extracting AMR graphs for summaries. In order to make this pipeline generalizable to any sort of text, we need to drop the hypothesis that each summary sentence is extracted from exactly one sentence. The natural direction therefore seems to be joining the AMR graphs of multiple similar sentences and then extracting the summary AMR from that larger graph; this amounts to clustering similar sentences and extracting a summary graph from each cluster. Another idea is to use AMR graphs for important sentence selection.
Table 1: Results of various algorithms after step 2 (AMR extraction)
Table 2: Results of various algorithms after step 3 (summary generation using JAMR)
Figure 4: The graphical representation of the truncated AMR graph of the sentence: 'Absurd as it might seem to me, a thousand miles from any human habitation and in danger of death, I took out of my pocket a sheet of paper and my fountain-pen'
Query: How are sentences selected from the summary graph?
JSON Response: | {"answer": " Two methods: first is to simply pick initial few sentences, second is to capture the relation between the two most important entities (select the first sentence which contains both these entities).", "evidence": ["Using this idea of picking important sentences from the beginning, we propose two methods, first is to simply pick initial few sentences, we call this first-n method where n stands for the number of sentences. We pick initial 3 sentences for the CNN-Dailymail corpus i.e. first-3 and only the first sentence for the proxy report section (AMR Bank) i.e. first-1 as they produce the best scores on the ROGUE metric compared to any other first-n. Second, we try to capture the relation between the two most important entities (we define importance by the number of occurrences of the entity in the story) of the document. For this we simply find the first sentence which contains both these entities. We call this the first co-occurrence based sentence selection. We also select the first sentence along with first co-occurrence based sentence selection as the important sentences. We call this the first co-occurrence+first based sentence selection."]} | science.qasper_abstractive_qa.125.0 | [
{
"content": "Examine the provided sections from a research document and address the accompanying query. The text is divided by newlines.\n\nDocument Content:\n\nText Summarization using Abstract Meaning Representation\n\nWith an ever increasing size of text present on the Internet, automatic summary generation remains an important problem for natural language understanding. In this work we explore a novel full-fledged pipeline for text summarization with an intermediate step of Abstract Meaning Representation (AMR). The pipeline proposed by us first generates an AMR graph of an input story, through which it extracts a summary graph and finally, generate summary sentences from this summary graph. Our proposed method achieves state-of-the-art results compared to the other text summarization routines based on AMR. We also point out some significant problems in the existing evaluation methods, which make them unsuitable for evaluating summary quality.\n\nBackground: AMR Parsing and Generation\nAMR was introduced by BIBREF1 with the aim to induce work on statistical Natural Language Understanding and Generation. AMR represents meaning using graphs. AMR graphs are rooted, directed, edge and vertex labeled graphs. Figure FIGREF4 shows the graphical representation of the AMR graph of the sentence \"I looked carefully all around me\" generated by JAMR parser ( BIBREF2 ). The graphical representation was produced using AMRICA BIBREF3 . The nodes in the AMR are labeled with concepts as in Figure FIGREF4 around represents a concept. Edges contains the information regarding the relations between the concepts. In Figure FIGREF4 direction is the relation between the concepts look-01 and around. AMR relies on Propbank for semantic relations (edge labels). Concepts can also be of the form run-01 where the index 01 represents the first sense of the word run. Further details about the AMR can be found in the AMR guidelines BIBREF4 .\nA lot of work has been done on parsing sentences to their AMR graphs. There are three main approaches to parsing. There is alignment based parsing BIBREF2 (JAMR-Parser), BIBREF5 which uses graph based algorithms for concept and relation identification. Second, grammar based parsers like BIBREF6 (CAMR) generate output by performing shift reduce transformations on output of a dependency parser. Neural parsing BIBREF7 , BIBREF8 is based on using seq2seq models for parsing, the main problem for neural methods is the absence of a huge corpus of human generated AMRs. BIBREF8 reduced the vocabulary size to tackle this problem while BIBREF7 used larger external corpus of external sentences.\nRecently, some work has been done on producing meaningful sentences form AMR graphs. BIBREF2 used a number of tree to string conversion rules for generating sentences. BIBREF9 reformed the problem as a traveling salesman problem. BIBREF7 used seq2seq learning methods.\n\nStep 1: Story to AMR\nAs the first step we convert the story sentences to their Abstract Meaning Representations. We use JAMR-Parser version 2 BIBREF2 as it’s openly available and has a performance close to the state of the art parsers for parsing the CNN-Dailymail corpus. For the AMR-bank we have the gold-standard AMR parses but we still parse the input stories with JAMR-Parser to study the effect of using graphs produced by JAMR-Parser instead of the gold-standard AMR graphs.\n\nStep 2: Story AMR to Summary AMR\nAfter parsing (Step 1) we have the AMR graphs for the story sentences. 
In this step we extract the AMR graphs of the summary sentences using story sentence AMRs. We divide this task in two parts. First is finding the important sentences from the story and then extracting the key information from those sentences using their AMR graphs.\nOur algorithm is based on the idea that only few sentences are important from the point of view of summary i.e. there are only a few sentences which contain most of the important information and from these sentences we can generate the summary.\nHypothesis: Most of the information corresponding to a summary sentence can be found in only one sentence from the story.\nTo test this hypothesis, for each summary sentence we find the sentence from the story that contains maximum information of this summary sentence. We use ROGUE-1 BIBREF13 Recall scores (measures the ratio of number of words in the target summary that are contained in the predicted summary to the total number of words in the target summary) as the metric for the information contained in the story sentence. We consider the story sentence as the predicted summary and the summary sentence as the target summary. The results that we obtained for 5000 randomly chosen document summary pairs from the CNN-Dailymail corpus are given in figure FIGREF8 . The average recall score that we obtained is 79%. The score will be perfectly 1 when the summary sentence is directly picked up from a story sentence. Upon manual inspection of the summary sentence and the corresponding best sentence from the story we realized, when this score is more than 0.5 or 0.6, almost always the information in the summary sentence is contained in this chosen story sentence. The score for in these cases is not perfectly 1 because of stop words and different verb forms used in story and summary sentence. Around 80% of summary sentences have score above 0.5. So, our hypothesis seems to be correct for most of the summary sentences. This also suggests the highly extractive nature of the summary in the corpus.\nNow the task in hand is to select few important sentences. Methods that use sentence extraction for summary generation can be used for the task. It is very common in summarization tasks specifically in news articles that a lot of information is contained in the initial few sentences. Choosing initial few sentences as the summary produces very strong baselines which the state-of-the-art methods beat only marginally. Ex. On the CNN-Dailymail corpus the state-of-the-art extractive method beats initial 3 sentences only by 0.4% as reported by BIBREF14 .\nUsing this idea of picking important sentences from the beginning, we propose two methods, first is to simply pick initial few sentences, we call this first-n method where n stands for the number of sentences. We pick initial 3 sentences for the CNN-Dailymail corpus i.e. first-3 and only the first sentence for the proxy report section (AMR Bank) i.e. first-1 as they produce the best scores on the ROGUE metric compared to any other first-n. Second, we try to capture the relation between the two most important entities (we define importance by the number of occurrences of the entity in the story) of the document. For this we simply find the first sentence which contains both these entities. We call this the first co-occurrence based sentence selection. We also select the first sentence along with first co-occurrence based sentence selection as the important sentences. 
We call this the first co-occurrence+first based sentence selection.\nAs the datasets under consideration are news articles. The most important information in them is about an entity and a verb associated with it. So, to extract important information from the sentence. We try to find the entity being talked about in the sentence, we consider the most referred entity (one that occurs most frequently in the text), now for the main verb associated with the entity in the sentence, we find the verb closest to this entity in the AMR graph. We define the closest verb as the one which lies first in the path from the entity to the root.\nWe start by finding the position of the most referred entity in the graph, then we find the closest verb to the entity. and finally select the subtree hanging from that verb as the summary AMR.\n\nBaselines\nIn this section, we present the baseline models and analysis method used for each step of our pipeline.\nFor the CNN-Dailymail dataset, the Lead-3 model is considered a strong baseline; both the abstractive BIBREF16 and extractive BIBREF14 state-of-the art methods on this dataset beat this baseline only marginally. The Lead-3 model simply produces the leading three sentences of the document as its summary.\nThe key step in our pipeline is step-2 i.e. summary graph extraction.\nDirectly comparing the Lead-3 baseline, with AMR based pipeline to evaluate the effectiveness of step-2 is an unfair comparison because of the errors introduced by imperfect parser and generator in the AMR pipeline. Thus to evaluate the effectiveness of step-2 against Lead-3 baseline, we need to nullify the effect of errors introduce by AMR parser and generator. We achieve this by trying to introduce similar errors in the leading thre sentences of each document. We generate the AMR graphs of the leading three sentences and then generate the sentences using these AMR graph. We use parser and generator that were used in our pipeline. We consider these generated sentences as the new baseline summary, we shall now refer to it Lead-3-AMR baseline in the remaining of the paper.\nFor the proxy report section of the AMR bank, we consider the Lead-1-AMR model as the baseline. For this dataset we already have the gold-standard AMR graphs of the sentences. Therefore, we only need to nullify the error introduced by the generator.\n\nEffects of using JAMR Parser\nIn this subsection we analyze the effect of using JAMR parser for step-1 instead of the gold-standard AMR graphs. First part of table TABREF14 has scores after using the gold-standard AMR graphs. In the second part of table TABREF14 we have included the scores of using the JAMR parser for AMR graph generation. We have used the same Neural AMR for sentence generation in all methods. Scores of all methods including the Lead-1-AMR baseline have dropped significantly.\nThe usage of JAMR Parser has affected the scores of first co-occurrence+first and first-1 more than that for the Lead-1-AMR. The drop in ROGUE INLINEFORM0 score when we use first co-occurrence+first is around two ROGUE INLINEFORM1 points more than when Lead-1-AMR. This is a surprising result, and we believe that it is worthy of further research.\n\nAnalyzing the effectiveness of AMR extraction\nThe aim of extracting summary graphs from the AMR graphs of the sentence is to drop the not so important information from the sentences. 
If we are able to achieve this perfectly, the ROGUE-1 Recall scores that we are getting should remain almost the same (since we are not add any new information) and the ROGUE-1 precision should go up (as we have thrown out some useless information); thus effectively improving the overall ROGUE-1 INLINEFORM0 score. In the first two rows of table TABREF14 we have the scores after using the full-AMR and extracted AMR for generation respectively. It is safe to say that extracting the AMR results in improved ROGUE-1 precision whereas ROGUE-1 Recall reduces only slightly, resulting in an overall improved ROGUE-1 INLINEFORM1 .\n\nResults on the CNN-Dailymail corpus\nIn table TABREF18 we report the results on the CNN-Dailymail corpus. We present scores by using the first-3 model. The first row contains the Lead-3-AMR baseline. The results we achieve are competitive with the Lead-3-AMR baseline. The rest of the table contains scores of Lead-3 baseline followed by the state-of-the-art method on the anonymized and non-anonymized versions of the dataset. The drop in the scores from the Lead-3(non-anonymized) to Lead-3-AMR is significant and is largely because of the error introduced by parser and generator.\n\nNeed of an new Dataset and Evaluation Metric\nROGUE metric, by it is design has lots of properties that make it unsuitable for evaluating abstractive summaries. For example, ROGUE matches exact words and not the stems of the words, it also considers stop words for evaluation. One of the reasons why ROGUE like metrics might never become suitable for evaluating abstractive summaries is its incapabilities of knowing if the sentences have been restructured. A good evaluation metric should be one where we compare the meaning of the sentence and not the exact words. As we showed section SECREF20 ROGUE is not suitable for evaluating summaries generated by the AMR pipeline.\nWe now show why the CNN-Dailymail corpus is not suitable for Abstractive summarization. The nature of summary points in the corpus is highly extractive (Section SECREF7 for details) where most of the summary points are simply picked up from some sentences in the story. Tough, this is a good enough reason to start searching for better dataset, it is not the biggest problem with the dataset. The dataset has the property that a lot of important information is in the first few sentences and most of the summary points are directly pick from these sentences. The extractive methods based on sentence selection like SummaRunNer are not actually performing well, the results they have got are only slightly better than the Lead-3 baseline. The work doesn't show how much of the selected sentences are among the first few and it might be the case that the sentences selected by the extractive methods are mostly among the first few sentences, the same can be the problem with the abstractive methods, where most the output might be getting copied from the initial few sentences.\nThese problems with this corpus evoke the need to have another corpus where we don't have so much concentration of important information at any location but rather the information is more spread out and the summaries are more abstractive in nature.\n\nPossible Future Directions\nAs this proposed algorithm is a step by step process we can focus on improving each step to produce better results. The most exciting improvements can be done in the summary graph extraction method. Not a lot of work has been done to extract AMR graphs for summaries. 
In order to make this pipeline generalizable for any sort of text, we need to get rid of the hypothesis that the summary is being extracted exactly from one sentence. So, the natural direction seems to be joining AMR graphs of multiple sentences that are similar and then extracting the summary AMR from that large graph. It will be like clustering similar sentences and then extracting a summary graph from each of these cluster. Another idea is to use AMR graphs for important sentence selection.\n\nTable 1: Results of various algorithms after step 2 (AMR extraction)\n\nTable 2: Results of various algorithms after step 3 - summary generation using JAMR)\n\nFigure 4: The graphical representation of the truncated AMR graph of the sentence : ’Absurd as it might seem to me , a thousand miles from any human habitation and in danger of death , I took out of my pocket a sheet of paper and my fountain - pen’\n\nQuery: How are sentences selected from the summary graph?\n\nJSON Response:",
"role": "user"
},
{
"content": "{\"answer\": \" Two methods: first is to simply pick initial few sentences, second is to capture the relation between the two most important entities (select the first sentence which contains both these entities).\", \"evidence\": [\"Using this idea of picking important sentences from the beginning, we propose two methods, first is to simply pick initial few sentences, we call this first-n method where n stands for the number of sentences. We pick initial 3 sentences for the CNN-Dailymail corpus i.e. first-3 and only the first sentence for the proxy report section (AMR Bank) i.e. first-1 as they produce the best scores on the ROGUE metric compared to any other first-n. Second, we try to capture the relation between the two most important entities (we define importance by the number of occurrences of the entity in the story) of the document. For this we simply find the first sentence which contains both these entities. We call this the first co-occurrence based sentence selection. We also select the first sentence along with first co-occurrence based sentence selection as the important sentences. We call this the first co-occurrence+first based sentence selection.\"]}",
"role": "assistant"
}
]
|
science.discomat_te | | Attribution | Ge-H | Si-OH | [SiOH]*[GeH]-1 | GeODC |
| Wavenumber (cm-1) or energy (eV) | 2185cm-1 | 3650cm-1 | - | 5.1eV |
| e (ppm-1 cm-1) or oscillator strength (f) | e =0.32x10-3 | e =2.35x10-3 | - | f =0.1 |
| Concentration, Dt =5s | 1470+-90ppm | 4780+-60ppm | 3.25+-0.23 | 1.5x1019/cm3 |
| Concentration, Dt =15s | 2480+-100ppm | 18770+-200ppm | 7.57+-0.16 | 2.5x1019/cm3 |
Caption: Determination of H-bearing species and GeODC defect concentrations in the OH-flooded P0 plate
Examine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:
{"value": "xx", "type": "Other"}
{"value": "xx", "type": "Glass_Compound_Amount", "constituent": "xx", "unit": "xx", "material": "xx"}
Process each cell sequentially, and for any unknown attributes, use "xx" as a placeholder. Each JSON should be on a new line.
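As a minimal sketch of what the requested output lines look like, the snippet below emits one JSON object per cell using the two schemas above; the `cells` list, the GeO2 example, and the `to_record` helper are hypothetical illustrations, not part of the dataset.

```python
import json

# Keys required by each schema given above.
REQUIRED = {
    "Other": ["value", "type"],
    "Glass_Compound_Amount": ["value", "type", "constituent", "unit", "material"],
}

def to_record(cell: dict) -> str:
    keys = REQUIRED[cell["type"]]
    # Any attribute not known for this cell falls back to the "xx" placeholder.
    return json.dumps({k: cell.get(k, "xx") for k in keys})

# Hypothetical cells; a real pipeline would first extract them from the table.
cells = [
    {"value": "1470", "type": "Other"},
    {"value": "10", "type": "Glass_Compound_Amount", "constituent": "GeO2", "unit": "mol%"},
]

for cell in cells:
    print(to_record(cell))  # one JSON object per line, as required
```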
Output: | {"value": "1470", "type": "Other"}
{"value": "4780", "type": "Other"}
{"value": "3.25", "type": "Other"}
{"value": "2480", "type": "Other"}
{"value": "18770", "type": "Other"}
{"value": "7.57", "type": "Other"} | science.discomat_te.85 | [
{
"content": "| Attribution | Ge-H | Si-OH | [SiOH]*[GeH]-1 | GeODC |\n| Wavenumber (cm-1) or energy (eV) | 2185cm-1 | 3650cm-1 | - | 5.1eV |\n| e (ppm-1 cm-1) or oscillator strength (f) | e =0.32x10-3 | e =2.35x10-3 | - | f =0.1 |\n| Concentration, Dt =5s | 1470+-90ppm | 4780+-60ppm | 3.25+-0.23 | 1.5x1019/cm3 |\n| Concentration, Dt =15s | 2480+-100ppm | 18770+-200ppm | 7.57+-0.16 | 2.5x1019/cm3 |\n\nCaption: Determination of H-bearing species and GeODC defects concentrations in the OH-flooded P0 plate\n\nExamine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:\n{\"value\": \"xx\", \"type\": \"Other\"}\n{\"value\": \"xx\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"xx\", \"unit\": \"xx\", \"material\": \"xx\"}\n\nProcess each cell sequentially, and for any unknown attributes, use \"xx\" as a placeholder. Each JSON should be on a new line.\n\nOutput:",
"role": "user"
},
{
"content": "{\"value\": \"1470\", \"type\": \"Other\"}\n{\"value\": \"4780\", \"type\": \"Other\"}\n{\"value\": \"3.25\", \"type\": \"Other\"}\n{\"value\": \"2480\", \"type\": \"Other\"}\n{\"value\": \"18770\", \"type\": \"Other\"}\n{\"value\": \"7.57\", \"type\": \"Other\"}",
"role": "assistant"
}
]
|
science.evidence_inference | Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:
- Treatment: The specific medical procedure or drug being evaluated.
- Control: The standard or placebo used for comparison. If absent, use `null`.
- Result: The health outcome being assessed.
- Impact: The observed effect of the treatment on the result, categorized as: ("increased", "decreased", "no change").
- Justification: The supporting details for the observed impact.
Format the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].
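As a minimal sketch of the expected output shape, the snippet below builds one such entry and checks the impact category; the `make_finding` helper and the example values are purely hypothetical and unrelated to the abstract that follows.

```python
import json

ALLOWED_IMPACTS = {"increased", "decreased", "no change"}

def make_finding(treatment, control, result, impact, justification):
    # One extracted finding, in the element order required above.
    if impact not in ALLOWED_IMPACTS:
        raise ValueError(f"impact must be one of {sorted(ALLOWED_IMPACTS)}")
    return [treatment, control, result, impact, justification]

# Purely hypothetical values, shown only to illustrate the array format.
findings = [make_finding("drug A", "placebo", "systolic blood pressure",
                         "decreased", "Mean systolic pressure fell versus placebo.")]
print(json.dumps(findings))
```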
Abstract:
ABSTRACT: Objective To assess the effectiveness of a primary care based programme of exercise on prescription among relatively inactive women over a two year period. <p> Design Randomised controlled trial. <p> Setting 17 primary care practices in Wellington, New Zealand <p> Participants 1089 women aged 40-74 not undertaking 30 minutes of moderate intensity physical activity on at least five days of the week <p> Intervention Brief physical activity intervention led by nurse with six month follow-up visit and monthly telephone support over nine months. <p> Main outcome measure Physical activity assessed at baseline and 12 and 24 months. Secondary outcomes were quality of life (SF-36), weight, waist circumference, blood pressure, concentrations of fasting serum lipids, glycated haemoglobin (HbA1c), glucose, insulin, and physical fitness. <p> Results Mean age was 58.9 (SD 7) years. Trial retention rates were 93% and 89% at 12 and 24 months, respectively. At baseline, 10% of intervention participants and 11% of control participants were achieving 150 minutes of at least moderate intensity physical activity a week. At 12 months rates increased to 43% and 30% and at 24 months to 39.3% and 32.8% (P<0.001), respectively. SF-36 physical functioning (P=0.03) and mental health (P<0.05) scores improved more in intervention compared with control participants, but role physical scores were significantly lower (P<0.01). There were no significant differences in clinical outcomes. More falls (P<0.001) and injuries (P=0.03) were recorded in the intervention group. <p> Conclusions This programme of exercise on prescription increased physical activity and quality of life over two years, although falls and injuries also increased. This finding supports the use of exercise on prescription programmes as part of population strategies to reduce physical inactivity. <p> Trial registration Australian New Zealand Clinical Trials Registry (ANZCTR) ANZCTRN012605000490673.
Extracted Data: | [["Primary care based exercise programme ", "Control", "Weekly physical activity", "significantly increased", "At 12 months rates increased to 43% and 30% and at 24 months to 39.3% and 32.8% (P\u003c0.001), respectively."], ["Primary care based exercise programme ", "Control", "SF-36 mental health improvement ", "significantly increased", "At 12 months rates increased to 43% and 30% and at 24 months to 39.3% and 32.8% (P\u003c0.001), respectively. SF-36 physical functioning (P=0.03) and mental health (P\u003c0.05) scores improved more in intervention compared with control participants, but role physical scores were significantly lower (P\u003c0.01)."], ["Primary care based exercise programme ", "Control", "Falls", "significantly increased", "More falls (P\u003c0.001) and injuries (P=0.03) were recorded in the intervention group."], ["Primary care based exercise programme ", "Control", "SF-36 physical functioning improvement ", "significantly increased", "SF-36 physical functioning (P=0.03) and mental health (P\u003c0.05) scores improved more in intervention compared with control participants,"]] | science.evidence_inference.110 | [
{
"content": "Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:\n\n- Treatment: The specific medical procedure or drug being evaluated.\n- Control: The standard or placebo used for comparison. If absent, use `null`.\n- Result: The health outcome being assessed.\n- Impact: The observed effect of the treatment on the result, categorized as: (\"increased\", \"decreased\", \"no change\").\n- Justification: The supporting details for the observed impact.\n\nFormat the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].\n\nAbstract:\nABSTRACT: Objective To assess the effectiveness of a primary care based programme of exercise on prescription among relatively inactive women over a two year period. <p> Design Randomised controlled trial. <p> Setting 17 primary care practices in Wellington, New Zealand <p> Participants 1089 women aged 40-74 not undertaking 30 minutes of moderate intensity physical activity on at least five days of the week <p> Intervention Brief physical activity intervention led by nurse with six month follow-up visit and monthly telephone support over nine months. <p> Main outcome measure Physical activity assessed at baseline and 12 and 24 months. Secondary outcomes were quality of life (SF-36), weight, waist circumference, blood pressure, concentrations of fasting serum lipids, glycated haemoglobin (HbA1c), glucose, insulin, and physical fitness. <p> Results Mean age was 58.9 (SD 7) years. Trial retention rates were 93% and 89% at 12 and 24 months, respectively. At baseline, 10% of intervention participants and 11% of control participants were achieving 150 minutes of at least moderate intensity physical activity a week. At 12 months rates increased to 43% and 30% and at 24 months to 39.3% and 32.8% (P<0.001), respectively. SF-36 physical functioning (P=0.03) and mental health (P<0.05) scores improved more in intervention compared with control participants, but role physical scores were significantly lower (P<0.01). There were no significant differences in clinical outcomes. More falls (P<0.001) and injuries (P=0.03) were recorded in the intervention group. <p> Conclusions This programme of exercise on prescription increased physical activity and quality of life over two years, although falls and injuries also increased. This finding supports the use of exercise on prescription programmes as part of population strategies to reduce physical inactivity. <p> Trial registration Australian New Zealand Clinical Trials Registry (ANZCTR) ANZCTRN012605000490673.\n\nExtracted Data:",
"role": "user"
},
{
"content": "[[\"Primary care based exercise programme \", \"Control\", \"Weekly physical activity\", \"significantly increased\", \"At 12 months rates increased to 43% and 30% and at 24 months to 39.3% and 32.8% (P\\u003c0.001), respectively.\"], [\"Primary care based exercise programme \", \"Control\", \"SF-36 mental health improvement \", \"significantly increased\", \"At 12 months rates increased to 43% and 30% and at 24 months to 39.3% and 32.8% (P\\u003c0.001), respectively. SF-36 physical functioning (P=0.03) and mental health (P\\u003c0.05) scores improved more in intervention compared with control participants, but role physical scores were significantly lower (P\\u003c0.01).\"], [\"Primary care based exercise programme \", \"Control\", \"Falls\", \"significantly increased\", \"More falls (P\\u003c0.001) and injuries (P=0.03) were recorded in the intervention group.\"], [\"Primary care based exercise programme \", \"Control\", \"SF-36 physical functioning improvement \", \"significantly increased\", \"SF-36 physical functioning (P=0.03) and mental health (P\\u003c0.05) scores improved more in intervention compared with control participants,\"]]",
"role": "assistant"
}
]
|
science.discomat_te | | l (nm) | n TE +-0.001 | n TM +-0.001 | Thickness mm +-0.1 | Number of TE/TM modes |
| Passive optical properties | Passive optical properties | Passive optical properties | Passive optical properties | Passive optical properties |
| 632.8 | 1.671 | 1.670 | 0.5 | 2/1 |
| 543.5 | 1.679 | 1.678 | 0.5 | 2/2 |
| | | | | |
| Peak (nm) | FWHM (nm) | t fit (ms) | t 1/e (ms) | l exc (nm) |
| Active optical properties | Active optical properties | Active optical properties | Active optical properties | Active optical properties |
| 1530 | 62+-4 | 2.3+-0.5 | 1.9+-0.5 | 514.5 |
Caption: Optical properties of the 90GeO2 -10TiO2 film doped with 1 mol% Er3+
Examine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:
{"value": "xx", "type": "Other"}
{"value": "xx", "type": "Glass_Compound_Amount", "constituent": "xx", "unit": "xx", "material": "xx"}
Process each cell sequentially, and for any unknown attributes, use "xx" as a placeholder. Each JSON should be on a new line.
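Before any per-cell JSON can be emitted, the pipe-delimited table above has to be split into cells; the snippet below is a small sketch of that step, with `raw` standing in for only the first rows of the table rather than the complete data.

```python
# Split a pipe-delimited table into a header row and data rows of stripped cells.
raw = """| l (nm) | n TE +-0.001 | n TM +-0.001 | Thickness mm +-0.1 | Number of TE/TM modes |
| 632.8 | 1.671 | 1.670 | 0.5 | 2/1 |
| 543.5 | 1.679 | 1.678 | 0.5 | 2/2 |"""

rows = [[cell.strip() for cell in line.strip().strip("|").split("|")]
        for line in raw.splitlines() if line.strip()]

header, *data = rows
for row in data:
    # Pair each numeric cell with its column header for later typing.
    print(dict(zip(header, row)))
```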
Output: | {"value": "632.8", "type": "Other"}
{"value": "1.671", "type": "Other"}
{"value": "1.670", "type": "Other"}
{"value": "0.5", "type": "Other"}
{"value": "2", "type": "Other"}
{"value": "1", "type": "Other"}
{"value": "543.5", "type": "Other"}
{"value": "1.679", "type": "Other"}
{"value": "1.678", "type": "Other"}
{"value": "0.5", "type": "Other"}
{"value": "2", "type": "Other"}
{"value": "2", "type": "Other"}
{"value": "1530", "type": "Other"}
{"value": "62", "type": "Other"}
{"value": "2.3", "type": "Other"}
{"value": "1.9", "type": "Other"}
{"value": "514.5", "type": "Other"} | science.discomat_te.460 | [
{
"content": "| l (nm) | n TE +-0.001 | n TM +-0.001 | Thickness mm +-0.1 | Number of TE/TM modes |\n| Passive optical properties | Passive optical properties | Passive optical properties | Passive optical properties | Passive optical properties |\n| 632.8 | 1.671 | 1.670 | 0.5 | 2/1 |\n| 543.5 | 1.679 | 1.678 | 0.5 | 2/2 |\n| | | | | |\n| Peak (nm) | FWHM (nm) | t fit (ms) | t 1/e (ms) | l exc (nm) |\n| Active optical properties | Active optical properties | Active optical properties | Active optical properties | Active optical properties |\n| 1530 | 62+-4 | 2.3+-0.5 | 1.9+-0.5 | 514.5 |\n\nCaption: Optical properties of the 90GeO2 -10TiO2 film doped with 1 mol% Er3+\n\nExamine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:\n{\"value\": \"xx\", \"type\": \"Other\"}\n{\"value\": \"xx\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"xx\", \"unit\": \"xx\", \"material\": \"xx\"}\n\nProcess each cell sequentially, and for any unknown attributes, use \"xx\" as a placeholder. Each JSON should be on a new line.\n\nOutput:",
"role": "user"
},
{
"content": "{\"value\": \"632.8\", \"type\": \"Other\"}\n{\"value\": \"1.671\", \"type\": \"Other\"}\n{\"value\": \"1.670\", \"type\": \"Other\"}\n{\"value\": \"0.5\", \"type\": \"Other\"}\n{\"value\": \"2\", \"type\": \"Other\"}\n{\"value\": \"1\", \"type\": \"Other\"}\n{\"value\": \"543.5\", \"type\": \"Other\"}\n{\"value\": \"1.679\", \"type\": \"Other\"}\n{\"value\": \"1.678\", \"type\": \"Other\"}\n{\"value\": \"0.5\", \"type\": \"Other\"}\n{\"value\": \"2\", \"type\": \"Other\"}\n{\"value\": \"2\", \"type\": \"Other\"}\n{\"value\": \"1530\", \"type\": \"Other\"}\n{\"value\": \"62\", \"type\": \"Other\"}\n{\"value\": \"2.3\", \"type\": \"Other\"}\n{\"value\": \"1.9\", \"type\": \"Other\"}\n{\"value\": \"514.5\", \"type\": \"Other\"}",
"role": "assistant"
}
]
|
science.evidence_inference | Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:
- Treatment: The specific medical procedure or drug being evaluated.
- Control: The standard or placebo used for comparison. If absent, use `null`.
- Result: The health outcome being assessed.
- Impact: The observed effect of the treatment on the result, categorized as: ("increased", "decreased", "no change").
- Justification: The supporting details for the observed impact.
Format the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].
Abstract:
ABSTRACT.OBJECTIVE: To evaluate the effect of creatine phosphate sodium on bispectral index (BIS) and recovery quality during the general anaesthesia emergence period in elderly patients.
ABSTRACT.METHODS: This randomized, double-blind, placebo-controlled study enrolled patients undergoing transabdominal cholecystectomy under general anaesthesia. Patients were randomly assigned to receive either creatine phosphate sodium (1.0 g/100 ml 0.9% saline; group P) or 100 ml 0.9% saline (group C) over 30 minutes during surgical incision. The BIS values were recorded at anaesthesia induction (T0), skin incision (T1), cutting the gallbladder (T2), suturing the peritoneum (T3), skin closure (T4), sputum suction (T5), extubation (T6) and 1 min (T7), 5 min (T8), 10 min (T9), and 15 min (T10) after extubation. The anaesthesia duration, operation time, waking time, extubation time, consciousness recovery time, time in the postanaesthesia care unit (PACU), and the Steward recovery scores at T7, T8, T9 and T10 were recorded.
ABSTRACT.RESULTS: A total of 120 elderly patients were randomized equally to the two groups. Compared with group C, the BIS values were significantly higher in group P at T5, T6, T7 and T8; and the Steward recovery scores at T7 and T8 were significantly higher in group P. The waking time, extubation time, consciousness recovery time and time in the PACU were significantly shorter in group P compared with group C.
ABSTRACT.CONCLUSION: Creatine phosphate sodium administered during transabdominal cholecystectomy can improve BIS values and recovery following general anaesthesia in elderly patients.
Extracted Data: | [["creatine phosphate sodium (1.0 g/100 ml 0.9% saline; group P)", "100 ml 0.9% saline (group C)", "The Steward recovery scores at T7 and T8", "significantly increased", "the Steward recovery scores at T7 and T8 were significantly higher in group P."], ["creatine phosphate sodium (1.0 g/100 ml 0.9% saline; group P)", "100 ml 0.9% saline (group C)", "The time of waking, extubation, recovery of consciousness and time in the PACU", "significantly decreased", "The waking time, extubation time, consciousness recovery time and time in the PACU were significantly shorter in group P compared with group C."]] | science.evidence_inference.60 | [
{
"content": "Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:\n\n- Treatment: The specific medical procedure or drug being evaluated.\n- Control: The standard or placebo used for comparison. If absent, use `null`.\n- Result: The health outcome being assessed.\n- Impact: The observed effect of the treatment on the result, categorized as: (\"increased\", \"decreased\", \"no change\").\n- Justification: The supporting details for the observed impact.\n\nFormat the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].\n\nAbstract:\nABSTRACT.OBJECTIVE: To evaluate the effect of creatine phosphate sodium on bispectral index (BIS) and recovery quality during the general anaesthesia emergence period in elderly patients.\n\nABSTRACT.METHODS: This randomized, double-blind, placebo-controlled study enrolled patients undergoing transabdominal cholecystectomy under general anaesthesia. Patients were randomly assigned to receive either creatine phosphate sodium (1.0 g/100 ml 0.9% saline; group P) or 100 ml 0.9% saline (group C) over 30 minutes during surgical incision. The BIS values were recorded at anaesthesia induction (T0), skin incision (T1), cutting the gallbladder (T2), suturing the peritoneum (T3), skin closure (T4), sputum suction (T5), extubation (T6) and 1 min (T7), 5 min (T8), 10 min (T9), and 15 min (T10) after extubation. The anaesthesia duration, operation time, waking time, extubation time, consciousness recovery time, time in the postanaesthesia care unit (PACU), and the Steward recovery scores at T7, T8, T9 and T10 were recorded.\n\nABSTRACT.RESULTS: A total of 120 elderly patients were randomized equally to the two groups. Compared with group C, the BIS values were significantly higher in group P at T5, T6, T7 and T8; and the Steward recovery scores at T7 and T8 were significantly higher in group P. The waking time, extubation time, consciousness recovery time and time in the PACU were significantly shorter in group P compared with group C.\n\nABSTRACT.CONCLUSION: Creatine phosphate sodium administered during transabdominal cholecystectomy can improve BIS values and recovery following general anaesthesia in elderly patients.\n\nExtracted Data:",
"role": "user"
},
{
"content": "[[\"creatine phosphate sodium (1.0 g/100 ml 0.9% saline; group P)\", \"100 ml 0.9% saline (group C)\", \"The Steward recovery scores at T7 and T8\", \"significantly increased\", \"the Steward recovery scores at T7 and T8 were significantly higher in group P.\"], [\"creatine phosphate sodium (1.0 g/100 ml 0.9% saline; group P)\", \"100 ml 0.9% saline (group C)\", \"The time of waking, extubation, recovery of consciousness and time in the PACU\", \"significantly decreased\", \"The waking time, extubation time, consciousness recovery time and time in the PACU were significantly shorter in group P compared with group C.\"]]",
"role": "assistant"
}
]
|
science.mup_single_document_summarization | Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.
Title of the Paper:
Adaptive Label Smoothing with Self-Knowledge
Main Content:
1 INTRODUCTION . In common practice , a neural network is trained to maximize the expected likelihood of observed targets , and the gradient with respect to the objective updates the learnable model parameters . With hard targets ( one-hot encoded ) , the maximum objective can be approached when a model assigns a high probability mass to the corresponding target label over the output space . That is , due to the normalizing activation functions ( i.e . softmax ) , a model is trained in order for logits to have a marked difference between the target logit and the other classes logits ( Müller et al. , 2019 ) . Despite its wide application and use , the maximum likelihood estimation with hard targets has been found to incur an overconfident problem : the predictive score of a model does not reflect the actual accuracy of the prediction . Consequently , this leads to degradation in model calibration ( Pereyra et al. , 2017 ) , as well as in model performance ( Müller et al. , 2019 ) . Additionally , this problem stands out more clearly with a limited number of samples , as a model is more prone to overfitting . To remedy such phenomenon , Szegedy et al . ( 2016 ) proposed label smoothing , in which one-hot encoded targets are replaced with smoothed targets . Label smoothing has boosted performance in computer vision ( Szegedy et al. , 2016 ) , and has been highly preferred in other domains , such as Natural Language Processing ( Vaswani et al. , 2017 ; Lewis et al. , 2020 ) . However , there are several aspects to be discussed in label smoothing . First , it comes with certain downsides , namely the static smoothing parameter . The smoothing regularizer fails to account for the change in probability mass over the course of training . Despite the fact that a model can benefit from adaptive control of the smoothing extent depending on the signs of overfitting and overconfidence , the smoothing parameter remains fixed throughout training in all instances . Another aspect of label smoothing for consideration is its connection to knowledge distillation ( Hinton et al. , 2015 ) . There have been attempts to bridge label smoothing with knowledge distillation , and the findings suggest the latter is an adaptive form of the former ( Tang et al. , 2021 ; Yuan et al. , 2020 ) . However , the regularization effect on overconfidence by self-knowledge distillation is still poorly understood and explored . To tackle the issues mentioned above , this work presents adaptive label smoothing with selfknowledge as a prior label distribution . Our regularizer allows a model to self-regulate the extent of smoothing based on the entropic level of model probability distribution , varying the amount per sample and per time step . Furthermore , our theoretical analysis suggests that self-knowledge distillation and the adaptive smoothing parameter have a strong regularization effect by rescaling gradients on logit space . Our work validates the efficacy of the proposed regularization method on machine translation tasks and achieves superior results in model performance and model calibration compared to other baselines . 2 PRELIMINARIES & RELATED WORK . 2.1 LABEL SMOOTHING . Label smoothing ( Szegedy et al. , 2016 ) was first introduced to prevent a model from making a peaked probability distribution . Since its introduction , it has been in wide application as a means of regularization ( Vaswani et al. , 2017 ; Lewis et al. , 2020 ) . 
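Label smoothing, and the adaptive weight introduced later in this section, can be written compactly. The snippet below is an illustrative PyTorch rendering of the smoothed cross-entropy of Equation 1 combined with the entropy-normalized weight of Equation 4, with a uniform prior standing in for the self-knowledge prior of Section 3.2; the tensor shapes and the toy usage line are assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def adaptive_uniform_label_smoothing(logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
    # Per-sample smoothing weight: alpha = 1 - H(p) / log|C| (Equation 4).
    # alpha is detached, mirroring the note that no gradient flows through it.
    num_classes = logits.size(-1)
    log_probs = F.log_softmax(logits, dim=-1)
    probs = log_probs.exp()
    entropy = -(probs * log_probs).sum(dim=-1)
    alpha = (1.0 - entropy / torch.log(torch.tensor(float(num_classes)))).detach()
    nll = -log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)  # hard-target term
    uniform_term = -log_probs.mean(dim=-1)                          # uniform-prior term of Equation 1
    return ((1.0 - alpha) * nll + alpha * uniform_term).mean()

# Toy usage with random logits; real use would take model outputs and gold labels.
loss = adaptive_uniform_label_smoothing(torch.randn(4, 10), torch.randint(0, 10, (4,)))
```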
In label smoothing , one-hot encoded ground-truth label ( y ) and a pre-defined prior label distribution ( q ) are mixed with the weight , the smoothing parameter ( α ) , forming a smoothed ground-truth label . A model with label smoothing is learned to maximize the likelihood of predicting the smoothed label distribution . Specifically , Lls ( x ( n ) , y ( n ) ) = − |C|∑ i=1 ( 1− α ) y ( n ) i logPθ ( yi|x ( n ) ) + αqi logPθ ( yi|x ( n ) ) ( 1 ) |C| denotes the number of classes , ( n ) the index of a sample in a batch , and Pθ the probability distribution mapped by a model . α is commonly set to 0.1 , and remains fixed throughout training ( Vaswani et al. , 2017 ; Lewis et al. , 2020 ) . A popular choice of q is an uniform distribution ( q ∼ U ( |C| ) ) , while unigram distribution is another option for dealing with an imbalanced label distribution ( Vaswani et al. , 2017 ; Szegedy et al. , 2016 ; Müller et al. , 2019 ; Pereyra et al. , 2017 ) . The pre-defined prior label distribution remains unchanged , hence the latter cross-entropy term in Equation 1 is equivalent to minimizing the KL divergence between the model prediction and the predefined label distribution . In line with the idea , Pereyra et al . ( 2017 ) proposed confidence penalty ( ConfPenalty ) that adds negative entropy term to the loss function , thereby minimizing the KL divergence between the uniform distribution and model probability distribution . Ghoshal et al . ( 2021 ) proposed low-rank adaptive label smoothing ( LORAS ) that jointly learns a noise distribution for softening targets and model parameters . Li et al . ( 2020 ) ; Krothapalli & Abbott ( 2020 ) introduced smoothing schemes that are data-dependent . 2.2 KNOWLEDGE DISTILLATION . Knowledge distillation ( Hinton et al. , 2015 ) aims to transfer the dark knowledge of ( commonly ) a larger and better performing teacher model to a student model ( Buciluundefined et al. , 2006 ) . The idea is to mix the ground-truth label with the model probability distribution of a teacher model , the result acting an adaptive version of label smoothing ( Tang et al. , 2021 ) . Lkd ( x ( n ) , y ( n ) ) = − |C|∑ i=1 ( 1− α ) y ( n ) i logPθ ( yi|x ( n ) ) + αPφ ( yi|x ( n ) ) logPθ ( yi|x ( n ) ) ( 2 ) φ and θ denote the parameters of a teacher model and a student model respectively . Similar to label smoothing , φ remains unchanged in training ; thus a student model is learned to minimize the KL divergence between its probability distribution and that of the teacher model . When Pφ follows a uniform distribution with temperature set to 1 , the loss function of knowledge distillation is identical to that of uniform label smoothing . Training a large teacher model can be computationally expensive ; for this reason , there have been attempts to replace the teacher model with the student model itself , called self-knowledge distillation ( Zhang et al. , 2019 ; Yuan et al. , 2020 ; Kim et al. , 2021 ; Zhang & Sabuncu , 2020 ) . TF-KD ( Yuan et al. , 2020 ) trains a student with a pre-trained teacher that is identical to the student in terms of structure . SKD-PRT ( Kim et al. , 2021 ) utilizes the previous epoch checkpoint as a teacher with linear increase in α. Zhang & Sabuncu ( 2020 ) incorporate beta distribution sampling ( BETA ) and self-knowledge distillation ( SD ) , and introduce instance-specific prior label distribution . Yun et al . 
( 2020 ) utilize self-knowledge distillation to minimize the predictive distribution of samples with the same class , encouraging consistent probability distribution within the same class . 3 METHODOLOGY . The core components of label smoothing are two-fold : smoothing parameter ( α ) and prior label distribution . The components determine how much to smooth the target label using which distribution , a process that requires careful choice of selection . In this section , we illustrate how to make the smoothing parameter adaptive . We also demonstrate how our adaptive smoothing parameter and self-knowledge distillation as a prior distribution act as a form of regularization with theoretical analysis on the gradients . 3.1 ADAPTIVE α An intuitive and ideal way of softening the hard target is to bring dynamic nature into choosing α ; a sample with low entropic level in model prediction receives a high smoothing parameter to further smooth the target label . In another scenario , in which high entropy of model prediction ( flat distribution ) is seen , the smoothing factor is decreased . With the intuition , our method computes the smoothing parameter on the fly during the forward propagation in training , relying on the entropic level of model probability distribution per sample , and per time step in case of sequential classification.1 H ( Pθ ( y|x ( n ) ) ) = − |C|∑ i=1 Pθ ( yi|x ( n ) ) logPθ ( yi|x ( n ) ) ( 3 ) The entropy quantifies the level of probability mass distributed across the label space ; therefore , low entropy is an indication of overfitting and overconfidence ( Pereyra et al. , 2017 ; Meister et al. , 2020 ) . Since entropy does not have a fixed range between 0 and 1 , one simple scheme is to normalize the entropy with maximum entropy ( log |C| ) . Hence , the normalization is capable of handling variable 1For notational simplicity , time step is not included in the equation hereafter . size of class set among different datasets . α ( n ) = 1− H ( Pθ ( y|x ( n ) ) ) log |C| ( 4 ) With this mechanism , a sample with high entropy is trained with low α , and a sample with low entropy receives high α . The computation for α is excluded from the computation graph for the gradient calculation , hence , the gradient does not flow through adaptive α ( n ) . There are two essential benefits of adopting the adaptive smoothing parameter . As the smoothing extent is determined by its own probability mass over the output space , the hyperparameter search on α is removed . Furthermore , it is strongly connected to the gradient rescaling effect on selfknowledge distillation , which will be dealt in Section 3.3 in detail . 3.2 SELF-KNOWLEDGE AS A PRIOR . Similar to ( Kim et al. , 2021 ; Liu et al. , 2021 ) , our regularizer loads a past student model checkpoint as teacher network parameters in the course of training . The intuition is to utilize past selfknowledge which is a generalized supervision , thereby hindering the model from overfitting to observations in the training set . φ = argmax θ g ( f ( X ′ ; θi ) , Y ′ ) , where θi ∈ Θ ( 5 ) Θ is a set of past model checkpoints in training , and function f is a specific task , which in our work is machine translation . X ′ and Y ′ are sets of input and ground-truth samples from a validation dataset2 , and the function g could be any proper evaluation metric for model generalization ( i.e . accuracy ) .3 Our work utilizes the n-gram matching score , BLEU ( Papineni et al. 
, 2002 ) being the function for finding the prior label distribution ( q ∼ Pφ ) . Combining the adaptive smoothing parameter and self-knowledge as a prior distribution , our loss function is as follows : L ( x ( n ) , y ( n ) ) = − |C|∑ i=1 ( 1− α ( n ) ) y ( n ) i logPθ ( yi|x ( n ) ) + α ( n ) Pφ ( yi|x ( n ) ) logPθ ( yi|x ( n ) ) ( 6 ) | This paper focuses on adaptive label smoothing that could reflect the change in probability distribution mapped by a model over the course of training. To deal with this issue, this paper proposes a label soothing scheme that brings dynamic nature into the smoothing parameter and the prior label distribution from the distilled knowledge. Specifically, the smoothing parameter is computed with the entropic level of model probability distribution per sample on the fly during the forward propagation in training. Besides, the prior label distribution is selected from the self-knowledge distillation. Experiments on various datasets demonstrate that the proposed adaptive label smoothing achieves state-of-the-art performance. | science.mup_single_document_summarization.9307 | [
{
"content": "Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.\n\nTitle of the Paper:\nAdaptive Label Smoothing with Self-Knowledge\n\nMain Content:\n1 INTRODUCTION . In common practice , a neural network is trained to maximize the expected likelihood of observed targets , and the gradient with respect to the objective updates the learnable model parameters . With hard targets ( one-hot encoded ) , the maximum objective can be approached when a model assigns a high probability mass to the corresponding target label over the output space . That is , due to the normalizing activation functions ( i.e . softmax ) , a model is trained in order for logits to have a marked difference between the target logit and the other classes logits ( Müller et al. , 2019 ) . Despite its wide application and use , the maximum likelihood estimation with hard targets has been found to incur an overconfident problem : the predictive score of a model does not reflect the actual accuracy of the prediction . Consequently , this leads to degradation in model calibration ( Pereyra et al. , 2017 ) , as well as in model performance ( Müller et al. , 2019 ) . Additionally , this problem stands out more clearly with a limited number of samples , as a model is more prone to overfitting . To remedy such phenomenon , Szegedy et al . ( 2016 ) proposed label smoothing , in which one-hot encoded targets are replaced with smoothed targets . Label smoothing has boosted performance in computer vision ( Szegedy et al. , 2016 ) , and has been highly preferred in other domains , such as Natural Language Processing ( Vaswani et al. , 2017 ; Lewis et al. , 2020 ) . However , there are several aspects to be discussed in label smoothing . First , it comes with certain downsides , namely the static smoothing parameter . The smoothing regularizer fails to account for the change in probability mass over the course of training . Despite the fact that a model can benefit from adaptive control of the smoothing extent depending on the signs of overfitting and overconfidence , the smoothing parameter remains fixed throughout training in all instances . Another aspect of label smoothing for consideration is its connection to knowledge distillation ( Hinton et al. , 2015 ) . There have been attempts to bridge label smoothing with knowledge distillation , and the findings suggest the latter is an adaptive form of the former ( Tang et al. , 2021 ; Yuan et al. , 2020 ) . However , the regularization effect on overconfidence by self-knowledge distillation is still poorly understood and explored . To tackle the issues mentioned above , this work presents adaptive label smoothing with selfknowledge as a prior label distribution . Our regularizer allows a model to self-regulate the extent of smoothing based on the entropic level of model probability distribution , varying the amount per sample and per time step . Furthermore , our theoretical analysis suggests that self-knowledge distillation and the adaptive smoothing parameter have a strong regularization effect by rescaling gradients on logit space . Our work validates the efficacy of the proposed regularization method on machine translation tasks and achieves superior results in model performance and model calibration compared to other baselines . 2 PRELIMINARIES & RELATED WORK . 2.1 LABEL SMOOTHING . Label smoothing ( Szegedy et al. 
, 2016 ) was first introduced to prevent a model from making a peaked probability distribution . Since its introduction , it has been in wide application as a means of regularization ( Vaswani et al. , 2017 ; Lewis et al. , 2020 ) . In label smoothing , one-hot encoded ground-truth label ( y ) and a pre-defined prior label distribution ( q ) are mixed with the weight , the smoothing parameter ( α ) , forming a smoothed ground-truth label . A model with label smoothing is learned to maximize the likelihood of predicting the smoothed label distribution . Specifically , Lls ( x ( n ) , y ( n ) ) = − |C|∑ i=1 ( 1− α ) y ( n ) i logPθ ( yi|x ( n ) ) + αqi logPθ ( yi|x ( n ) ) ( 1 ) |C| denotes the number of classes , ( n ) the index of a sample in a batch , and Pθ the probability distribution mapped by a model . α is commonly set to 0.1 , and remains fixed throughout training ( Vaswani et al. , 2017 ; Lewis et al. , 2020 ) . A popular choice of q is an uniform distribution ( q ∼ U ( |C| ) ) , while unigram distribution is another option for dealing with an imbalanced label distribution ( Vaswani et al. , 2017 ; Szegedy et al. , 2016 ; Müller et al. , 2019 ; Pereyra et al. , 2017 ) . The pre-defined prior label distribution remains unchanged , hence the latter cross-entropy term in Equation 1 is equivalent to minimizing the KL divergence between the model prediction and the predefined label distribution . In line with the idea , Pereyra et al . ( 2017 ) proposed confidence penalty ( ConfPenalty ) that adds negative entropy term to the loss function , thereby minimizing the KL divergence between the uniform distribution and model probability distribution . Ghoshal et al . ( 2021 ) proposed low-rank adaptive label smoothing ( LORAS ) that jointly learns a noise distribution for softening targets and model parameters . Li et al . ( 2020 ) ; Krothapalli & Abbott ( 2020 ) introduced smoothing schemes that are data-dependent . 2.2 KNOWLEDGE DISTILLATION . Knowledge distillation ( Hinton et al. , 2015 ) aims to transfer the dark knowledge of ( commonly ) a larger and better performing teacher model to a student model ( Buciluundefined et al. , 2006 ) . The idea is to mix the ground-truth label with the model probability distribution of a teacher model , the result acting an adaptive version of label smoothing ( Tang et al. , 2021 ) . Lkd ( x ( n ) , y ( n ) ) = − |C|∑ i=1 ( 1− α ) y ( n ) i logPθ ( yi|x ( n ) ) + αPφ ( yi|x ( n ) ) logPθ ( yi|x ( n ) ) ( 2 ) φ and θ denote the parameters of a teacher model and a student model respectively . Similar to label smoothing , φ remains unchanged in training ; thus a student model is learned to minimize the KL divergence between its probability distribution and that of the teacher model . When Pφ follows a uniform distribution with temperature set to 1 , the loss function of knowledge distillation is identical to that of uniform label smoothing . Training a large teacher model can be computationally expensive ; for this reason , there have been attempts to replace the teacher model with the student model itself , called self-knowledge distillation ( Zhang et al. , 2019 ; Yuan et al. , 2020 ; Kim et al. , 2021 ; Zhang & Sabuncu , 2020 ) . TF-KD ( Yuan et al. , 2020 ) trains a student with a pre-trained teacher that is identical to the student in terms of structure . SKD-PRT ( Kim et al. , 2021 ) utilizes the previous epoch checkpoint as a teacher with linear increase in α. 
Zhang & Sabuncu ( 2020 ) incorporate beta distribution sampling ( BETA ) and self-knowledge distillation ( SD ) , and introduce instance-specific prior label distribution . Yun et al . ( 2020 ) utilize self-knowledge distillation to minimize the predictive distribution of samples with the same class , encouraging consistent probability distribution within the same class . 3 METHODOLOGY . The core components of label smoothing are two-fold : smoothing parameter ( α ) and prior label distribution . The components determine how much to smooth the target label using which distribution , a process that requires careful choice of selection . In this section , we illustrate how to make the smoothing parameter adaptive . We also demonstrate how our adaptive smoothing parameter and self-knowledge distillation as a prior distribution act as a form of regularization with theoretical analysis on the gradients . 3.1 ADAPTIVE α An intuitive and ideal way of softening the hard target is to bring dynamic nature into choosing α ; a sample with low entropic level in model prediction receives a high smoothing parameter to further smooth the target label . In another scenario , in which high entropy of model prediction ( flat distribution ) is seen , the smoothing factor is decreased . With the intuition , our method computes the smoothing parameter on the fly during the forward propagation in training , relying on the entropic level of model probability distribution per sample , and per time step in case of sequential classification.1 H ( Pθ ( y|x ( n ) ) ) = − |C|∑ i=1 Pθ ( yi|x ( n ) ) logPθ ( yi|x ( n ) ) ( 3 ) The entropy quantifies the level of probability mass distributed across the label space ; therefore , low entropy is an indication of overfitting and overconfidence ( Pereyra et al. , 2017 ; Meister et al. , 2020 ) . Since entropy does not have a fixed range between 0 and 1 , one simple scheme is to normalize the entropy with maximum entropy ( log |C| ) . Hence , the normalization is capable of handling variable 1For notational simplicity , time step is not included in the equation hereafter . size of class set among different datasets . α ( n ) = 1− H ( Pθ ( y|x ( n ) ) ) log |C| ( 4 ) With this mechanism , a sample with high entropy is trained with low α , and a sample with low entropy receives high α . The computation for α is excluded from the computation graph for the gradient calculation , hence , the gradient does not flow through adaptive α ( n ) . There are two essential benefits of adopting the adaptive smoothing parameter . As the smoothing extent is determined by its own probability mass over the output space , the hyperparameter search on α is removed . Furthermore , it is strongly connected to the gradient rescaling effect on selfknowledge distillation , which will be dealt in Section 3.3 in detail . 3.2 SELF-KNOWLEDGE AS A PRIOR . Similar to ( Kim et al. , 2021 ; Liu et al. , 2021 ) , our regularizer loads a past student model checkpoint as teacher network parameters in the course of training . The intuition is to utilize past selfknowledge which is a generalized supervision , thereby hindering the model from overfitting to observations in the training set . φ = argmax θ g ( f ( X ′ ; θi ) , Y ′ ) , where θi ∈ Θ ( 5 ) Θ is a set of past model checkpoints in training , and function f is a specific task , which in our work is machine translation . 
X ′ and Y ′ are sets of input and ground-truth samples from a validation dataset2 , and the function g could be any proper evaluation metric for model generalization ( i.e . accuracy ) .3 Our work utilizes the n-gram matching score , BLEU ( Papineni et al. , 2002 ) being the function for finding the prior label distribution ( q ∼ Pφ ) . Combining the adaptive smoothing parameter and self-knowledge as a prior distribution , our loss function is as follows : L ( x ( n ) , y ( n ) ) = − |C|∑ i=1 ( 1− α ( n ) ) y ( n ) i logPθ ( yi|x ( n ) ) + α ( n ) Pφ ( yi|x ( n ) ) logPθ ( yi|x ( n ) ) ( 6 )",
"role": "user"
},
{
"content": "This paper focuses on adaptive label smoothing that could reflect the change in probability distribution mapped by a model over the course of training. To deal with this issue, this paper proposes a label soothing scheme that brings dynamic nature into the smoothing parameter and the prior label distribution from the distilled knowledge. Specifically, the smoothing parameter is computed with the entropic level of model probability distribution per sample on the fly during the forward propagation in training. Besides, the prior label distribution is selected from the self-knowledge distillation. Experiments on various datasets demonstrate that the proposed adaptive label smoothing achieves state-of-the-art performance.",
"role": "assistant"
}
]
|
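A minimal PyTorch sketch of the adaptive label smoothing objective summarized in the record above (Eqs. 3, 4 and 6): the per-sample smoothing weight comes from the normalized entropy of the student's own prediction, and the prior comes from a frozen past checkpoint acting as teacher. Tensor shapes, names, and defaults are illustrative assumptions, not the paper's released code.

```python
import math
import torch
import torch.nn.functional as F

def adaptive_ls_loss(student_logits, teacher_logits, targets):
    """student_logits, teacher_logits: (batch, num_classes); targets: (batch,) class ids."""
    num_classes = student_logits.size(-1)
    log_p = F.log_softmax(student_logits, dim=-1)
    p = log_p.exp()

    # Eqs. 3-4: alpha^(n) = 1 - H(P_theta) / log|C|, excluded from the gradient graph.
    entropy = -(p * log_p).sum(dim=-1)
    alpha = (1.0 - entropy / math.log(num_classes)).detach()

    # Prior label distribution from past self-knowledge (teacher is kept frozen).
    q = F.softmax(teacher_logits, dim=-1).detach()

    nll = -log_p.gather(-1, targets.unsqueeze(-1)).squeeze(-1)  # hard-label term
    kd = -(q * log_p).sum(dim=-1)                               # soft prior term
    return ((1.0 - alpha) * nll + alpha * kd).mean()            # Eq. 6, averaged over the batch
```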
science.evidence_inference | Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:
- Treatment: The specific medical procedure or drug being evaluated.
- Control: The standard or placebo used for comparison. If absent, use `null`.
- Result: The health outcome being assessed.
- Impact: The observed effect of the treatment on the result, categorized as: ("increased", "decreased", "no change").
- Justification: The supporting details for the observed impact.
Format the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].
Abstract:
ABSTRACT.OBJECTIVES: To evaluate the effect of certolizumab pegol (CZP) on productivity outside and within the home, and on participation in family, social and leisure activities in adult patients with psoriatic arthritis (PsA).
ABSTRACT.METHODS: RAPID-PsA (NCT01087788) is a phase 3, double-blind, placebo-controlled trial. 409 patients with active PsA were randomised 1:1:1 to placebo, CZP 200 mg every 2 weeks (Q2W) or CZP 400 mg every 4 weeks (Q4W). The arthritis-specific Work Productivity Survey (WPS) assessed the impact of PsA on paid work and household productivity, and participation in social activities during the preceding month. WPS responses were compared between treatment arms using a non-parametric bootstrap-t method.
ABSTRACT.RESULTS: At baseline, 56.6%, 60.1% and 61.5% of placebo, CZP 200 mg Q2W and CZP 400 mg Q4W patients were employed. By week 24, employed CZP patients reported an average of 1.0–1.8 and 3.0–3.9 fewer days of absenteeism and presenteeism, respectively, per month compared with 1.0 and 0.3 fewer days for placebo patients (p<0.05). Within the home, by week 24, CZP patients reported an average of 3.0–3.5 household work days gained per month versus 1.0 day for placebo (p<0.05). CZP patients also reported fewer days with reduced household productivity or days lost for participation in family, social and leisure activities. Improvements with CZP were seen as early as week 4 and continued to week 24.
ABSTRACT.CONCLUSIONS: CZP treatment significantly improved productivity at paid work and within the home, and resulted in greater participation in social activities for PsA patients.
ABSTRACT.TRIAL REGISTRATION NUMBER: NCT01087788.
Extracted Data: | [["CZP 200 mg every 2 weeks (Q2W) or CZP 400 mg every 4 weeks (Q4W)", "Placebo", "absenteeism and presenteeism", "significantly decreased", "employed CZP patients reported an average of 1.0\u20131.8 and 3.0\u20133.9 fewer days of absenteeism and presenteeism, respectively, per month compared with 1.0 and 0.3 fewer days for placebo patients (p\u003c0.05)."], ["CZP 200 mg every 2 weeks (Q2W) or CZP 400 mg every 4 weeks (Q4W)", "placebo", "household work days", "significantly increased", "CZP patients reported an average of 3.0\u20133.5 household work days gained per month versus 1.0 day for placebo (p\u003c0.05)."]] | science.evidence_inference.101 | [
{
"content": "Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:\n\n- Treatment: The specific medical procedure or drug being evaluated.\n- Control: The standard or placebo used for comparison. If absent, use `null`.\n- Result: The health outcome being assessed.\n- Impact: The observed effect of the treatment on the result, categorized as: (\"increased\", \"decreased\", \"no change\").\n- Justification: The supporting details for the observed impact.\n\nFormat the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].\n\nAbstract:\nABSTRACT.OBJECTIVES: To evaluate the effect of certolizumab pegol (CZP) on productivity outside and within the home, and on participation in family, social and leisure activities in adult patients with psoriatic arthritis (PsA).\n\nABSTRACT.METHODS: RAPID-PsA (NCT01087788) is a phase 3, double-blind, placebo-controlled trial. 409 patients with active PsA were randomised 1:1:1 to placebo, CZP 200 mg every 2 weeks (Q2W) or CZP 400 mg every 4 weeks (Q4W). The arthritis-specific Work Productivity Survey (WPS) assessed the impact of PsA on paid work and household productivity, and participation in social activities during the preceding month. WPS responses were compared between treatment arms using a non-parametric bootstrap-t method.\n\nABSTRACT.RESULTS: At baseline, 56.6%, 60.1% and 61.5% of placebo, CZP 200 mg Q2W and CZP 400 mg Q4W patients were employed. By week 24, employed CZP patients reported an average of 1.0–1.8 and 3.0–3.9 fewer days of absenteeism and presenteeism, respectively, per month compared with 1.0 and 0.3 fewer days for placebo patients (p<0.05). Within the home, by week 24, CZP patients reported an average of 3.0–3.5 household work days gained per month versus 1.0 day for placebo (p<0.05). CZP patients also reported fewer days with reduced household productivity or days lost for participation in family, social and leisure activities. Improvements with CZP were seen as early as week 4 and continued to week 24.\n\nABSTRACT.CONCLUSIONS: CZP treatment significantly improved productivity at paid work and within the home, and resulted in greater participation in social activities for PsA patients.\n\nABSTRACT.TRIAL REGISTRATION NUMBER: NCT01087788.\n\nExtracted Data:",
"role": "user"
},
{
"content": "[[\"CZP 200 mg every 2 weeks (Q2W) or CZP 400 mg every 4 weeks (Q4W)\", \"Placebo\", \"absenteeism and presenteeism\", \"significantly decreased\", \"employed CZP patients reported an average of 1.0\\u20131.8 and 3.0\\u20133.9 fewer days of absenteeism and presenteeism, respectively, per month compared with 1.0 and 0.3 fewer days for placebo patients (p\\u003c0.05).\"], [\"CZP 200 mg every 2 weeks (Q2W) or CZP 400 mg every 4 weeks (Q4W)\", \"placebo\", \"household work days\", \"significantly increased\", \"CZP patients reported an average of 3.0\\u20133.5 household work days gained per month versus 1.0 day for placebo (p\\u003c0.05).\"]]",
"role": "assistant"
}
]
|
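A small structural check for the extraction format used by the evidence-inference record above: the expected output is a JSON array of [treatment, control, result, impact, justification] entries. The impact label set below follows the gold outputs shown in the record ("significantly increased", "significantly decreased", "no significant difference"), which are finer-grained than the increased/decreased/no change wording in the prompt; everything else is an illustrative assumption rather than dataset tooling.

```python
import json

GOLD_IMPACTS = {"significantly increased", "significantly decreased", "no significant difference"}

def check_extraction(raw: str) -> list:
    entries = json.loads(raw)
    assert isinstance(entries, list), "top level must be a JSON array"
    for i, entry in enumerate(entries):
        assert isinstance(entry, list) and len(entry) == 5, f"entry {i} must have exactly 5 fields"
        impact = entry[3]
        assert impact in GOLD_IMPACTS, f"entry {i}: unexpected impact label {impact!r}"
    return entries
```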
science.multicite_intent_classification | Examine the text below, which includes a citation marked by "<cite>" and "</cite>". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.
Provide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.
Text with Citation:
Finally, we find the ranked BQ of all 244302 testing questions from the testing question set of <cite>VQA dataset</cite> (<cite>Antol et al. 2015</cite>) and collect them together, with the format {Image, M Q, 21 (BQ + corresponding similarity score)}, as our Basic Question Dataset (BQD). | ["Uses"] | science.multicite_intent_classification.1900 | [
{
"content": "Examine the text below, which includes a citation marked by \"<cite>\" and \"</cite>\". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.\n\nProvide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.\n\nText with Citation:\nFinally, we find the ranked BQ of all 244302 testing questions from the testing question set of <cite>VQA dataset</cite> (<cite>Antol et al. 2015</cite>) and collect them together, with the format {Image, M Q, 21 (BQ + corresponding similarity score)}, as our Basic Question Dataset (BQD).",
"role": "user"
},
{
"content": "[\"Uses\"]",
"role": "assistant"
}
]
|
science.evidence_inference | Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:
- Treatment: The specific medical procedure or drug being evaluated.
- Control: The standard or placebo used for comparison. If absent, use `null`.
- Result: The health outcome being assessed.
- Impact: The observed effect of the treatment on the result, categorized as: ("increased", "decreased", "no change").
- Justification: The supporting details for the observed impact.
Format the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].
Abstract:
ABSTRACT.BACKGROUND/AIMS:: Several countries are working to adapt clinical trial regulations to align the approval process to the level of risk for trial participants. The optimal framework to categorize clinical trials according to risk remains unclear, however. Switzerland is the first European country to adopt a risk-based categorization procedure in January 2014. We assessed how accurately and consistently clinical trials are categorized using two different approaches: an approach using criteria set forth in the new law (concept) or an intuitive approach (ad hoc).
ABSTRACT.METHODS:: This was a randomized controlled trial with a method-comparison study nested in each arm. We used clinical trial protocols from eight Swiss ethics committees approved between 2010 and 2011. Protocols were randomly assigned to be categorized in one of three risk categories using the concept or the ad hoc approach. Each protocol was independently categorized by the trial's sponsor, a group of experts and the approving ethics committee. The primary outcome was the difference in categorization agreement between the expert group and sponsors across arms. Linear weighted kappa was used to quantify agreements, with the difference between kappas being the primary effect measure.
ABSTRACT.RESULTS:: We included 142 of 231 protocols in the final analysis (concept = 78; ad hoc = 64). Raw agreement between the expert group and sponsors was 0.74 in the concept and 0.78 in the ad hoc arm. Chance-corrected agreement was higher in the ad hoc (kappa: 0.34 (95% confidence interval = 0.10–0.58)) than in the concept arm (0.27 (0.06–0.50)), but the difference was not significant (p = 0.67).
ABSTRACT.LIMITATIONS:: The main limitation was the large number of protocols excluded from the analysis mostly because they did not fit with the clinical trial definition of the new law.
ABSTRACT.CONCLUSION:: A structured risk categorization approach was not better than an ad hoc approach. Laws introducing risk-based approaches should provide guidelines, examples and templates to ensure correct application.
Extracted Data: | [["New law criteria (Concept)", "Intuitive approach (ad hoc)", "Agreement between Experts and Sponsors", "no significant difference", "Raw agreement between the expert group and sponsors was 0.74 in the concept and 0.78 in the ad hoc arm. Chance-corrected agreement was higher in the ad hoc (kappa: 0.34 (95% confidence interval = 0.10\u20130.58)) than in the concept arm (0.27 (0.06\u20130.50)), but the difference was not significant (p = 0.67)."], ["New law criteria (Concept)", "Intuitive approach (ad hoc)", "Kappa values", "no significant difference", "Chance-corrected agreement was higher in the ad hoc (kappa: 0.34 (95% confidence interval = 0.10\u20130.58)) than in the concept arm (0.27 (0.06\u20130.50)), but the difference was not significant (p = 0.67)."]] | science.evidence_inference.64 | [
{
"content": "Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:\n\n- Treatment: The specific medical procedure or drug being evaluated.\n- Control: The standard or placebo used for comparison. If absent, use `null`.\n- Result: The health outcome being assessed.\n- Impact: The observed effect of the treatment on the result, categorized as: (\"increased\", \"decreased\", \"no change\").\n- Justification: The supporting details for the observed impact.\n\nFormat the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].\n\nAbstract:\nABSTRACT.BACKGROUND/AIMS:: Several countries are working to adapt clinical trial regulations to align the approval process to the level of risk for trial participants. The optimal framework to categorize clinical trials according to risk remains unclear, however. Switzerland is the first European country to adopt a risk-based categorization procedure in January 2014. We assessed how accurately and consistently clinical trials are categorized using two different approaches: an approach using criteria set forth in the new law (concept) or an intuitive approach (ad hoc).\n\nABSTRACT.METHODS:: This was a randomized controlled trial with a method-comparison study nested in each arm. We used clinical trial protocols from eight Swiss ethics committees approved between 2010 and 2011. Protocols were randomly assigned to be categorized in one of three risk categories using the concept or the ad hoc approach. Each protocol was independently categorized by the trial's sponsor, a group of experts and the approving ethics committee. The primary outcome was the difference in categorization agreement between the expert group and sponsors across arms. Linear weighted kappa was used to quantify agreements, with the difference between kappas being the primary effect measure.\n\nABSTRACT.RESULTS:: We included 142 of 231 protocols in the final analysis (concept = 78; ad hoc = 64). Raw agreement between the expert group and sponsors was 0.74 in the concept and 0.78 in the ad hoc arm. Chance-corrected agreement was higher in the ad hoc (kappa: 0.34 (95% confidence interval = 0.10–0.58)) than in the concept arm (0.27 (0.06–0.50)), but the difference was not significant (p = 0.67).\n\nABSTRACT.LIMITATIONS:: The main limitation was the large number of protocols excluded from the analysis mostly because they did not fit with the clinical trial definition of the new law.\n\nABSTRACT.CONCLUSION:: A structured risk categorization approach was not better than an ad hoc approach. Laws introducing risk-based approaches should provide guidelines, examples and templates to ensure correct application.\n\nExtracted Data:",
"role": "user"
},
{
"content": "[[\"New law criteria (Concept)\", \"Intuitive approach (ad hoc)\", \"Agreement between Experts and Sponsors\", \"no significant difference\", \"Raw agreement between the expert group and sponsors was 0.74 in the concept and 0.78 in the ad hoc arm. Chance-corrected agreement was higher in the ad hoc (kappa: 0.34 (95% confidence interval = 0.10\\u20130.58)) than in the concept arm (0.27 (0.06\\u20130.50)), but the difference was not significant (p = 0.67).\"], [\"New law criteria (Concept)\", \"Intuitive approach (ad hoc)\", \"Kappa values\", \"no significant difference\", \"Chance-corrected agreement was higher in the ad hoc (kappa: 0.34 (95% confidence interval = 0.10\\u20130.58)) than in the concept arm (0.27 (0.06\\u20130.50)), but the difference was not significant (p = 0.67).\"]]",
"role": "assistant"
}
]
|
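The record above quantifies rater agreement with linear weighted kappa. A small worked example of that statistic on synthetic three-category risk labels (hypothetical values, not data from the trial):

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical risk categories assigned by two rater groups for eight protocols.
experts  = ["A", "A", "B", "C", "B", "A", "C", "B"]
sponsors = ["A", "B", "B", "C", "A", "A", "C", "C"]

kappa = cohen_kappa_score(experts, sponsors, weights="linear")
print(f"linear weighted kappa: {kappa:.2f}")
```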
science.mup_single_document_summarization | Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.
Title of the Paper:
On Success and Simplicity: A Second Look at Transferable Targeted Attacks
Main Content:
1 Introduction . Deep neural networks have achieved remarkable performance in various machine learning tasks , but are known to be vulnerable to adversarial attacks [ 1 ] . A key property of adversarial attacks that makes them critical in realistic , black-box scenarios is their transferability [ 2 , 3 ] . Current work on adversarial transferability has achieved great success for non-targeted attacks [ 4–12 ] , while several initial attempts [ 3 , 4 , 13 ] at targeted transferability have shown its extreme difficulty . Targeted transferability is known to be much more challenging and worth exploring since it can raise more critical concerns by fooling models into predicting a chosen , highly dangerous target class . However , so far state-of-the-art results can only be secured by resource-intensive transferable attacks [ 14–16 ] . Specifically , the FDA approach [ 14 , 15 ] is based on modeling layer-wise feature distributions by training target-class-specific auxiliary classifiers on large-scale labeled data , and then optimizing adversarial perturbations using these auxiliary classifiers from across the deep feature space . The TTP approach [ 16 ] is based on training target-class-specific Generative Adversarial Networks ( GANs ) through global and local distribution matching , and then using the trained generator to directly generate perturbations on any given input image . In this paper , we take a second , thorough look at current research on targeted transferability . Our main contribution is the finding that simple transferable attacks [ 4 , 6 , 8 ] that require neither model 35th Conference on Neural Information Processing Systems ( NeurIPS 2021 ) . training nor additional data can actually achieve surprisingly strong targeted transferability . We argue that this insight has been overlooked mainly because current research has unreasonably restricted the attack convergence by only using a small number of iterations ( see detailed discussion in Section 3 ) . Another key contribution of our work is , for the first time , demonstrating the general superiority of a very simple logit loss , which even outperforms the resource-intensive state of the art . In order to validate the general effectiveness of simple transferable attacks , in Section 4.1 , we conduct extensive experiments in a wide range of transfer scenarios . We test the commonly adopted single-model and ensemble transfer scenarios , but also introduce three new scenarios that are more challenging and realistic : an ensemble transfer scenario with little model similarity , a worse-case scenario with low-ranked target classes , and also a real-world attack on the Google Cloud Vision API . Experimental results in these new scenarios suggest that evaluation in only the commonly adopted , easy scenarios can not reveal the actual strength of different attacks , and may cause misleading comparative results . Additional experiments in Section 4.2 have shown the better performance of the simple transferable attacks than the state-of-the-art resource-intensive approaches . Finally , in Section 4.3 , inspired by the observation that the generated perturbations themselves reflect specific target semantics , we use the simple Logit attack to generate targeted Universal Adversarial Perturbations ( UAPs ) in a data-free manner . In contrast , recent advances in targeted UAPs [ 16–19 ] have inevitably relied on large-scale optimization over additional data . 
Overall , we hope our analysis of the weakness of commonly adopted attack settings and transfer scenarios will inspire a more meaningful evaluation on targeted transferability . 2 Related Work . In this section , we review existing simple transferable attacks ( Section 2.1 ) , and also recent resourceintensive transferable attacks ( Section 2.2 ) . Finally , we discuss related work on generating universal adversarial perturbations . 2.1 Simple Transferable Attacks . We refer to transferable attacks that require neither model training nor additional data , but only use iterative optimization on a single ( original ) image as simple transferable attacks . Simple transferable attacks have been extensively studied in the non-targeted case [ 4–12 ] , and also attempted in the targeted case [ 3 , 4 , 20 ] . These attacks are commonly built up on the well-known Iterative-Fast Gradient Sign Method ( I-FGSM ) [ 21 , 22 ] , which can be formulated as : x′0 = x , x ′ i+1 = x ′ i − α · sign ( ∇xJ ( x′i , yt ) ) , ( 1 ) where x′i denotes the perturbed image in the i-th iteration , and yt is the target class label . In order to ensure the imperceptibility , the added perturbations are restricted with respect to some Lp distance , i.e. , satisfying ‖x′ − x‖p ≤ . Current transferable attack methods have commonly adopted the L∞ distance , but can also be easily adapted to the L2 distance based on an L2 normalization [ 23 ] . For the loss function J ( · , · ) , most simple transferable attacks have adopted the Cross-Entropy ( CE ) loss . However , the CE loss has been recently shown to be insufficient in the targeted case due to its decreasing gradient problem [ 20 ] . To address this problem , the authors in [ 20 ] have proposed the Po+Trip loss , in which the Poincaré distance was used to adapt the gradients ’ magnitude : LPo = d ( u , v ) = arccosh ( 1 + δ ( u , v ) ) , δ ( u , v ) = 2 · ‖u− v‖22 ( 1− ‖u‖22 ) ( 1− ‖v‖22 ) , u = l ( x′ ) ‖l ( x′ ) ‖ , v = max { v − ξ , 0 } , ( 2 ) where u is the normalized logit vector and v is the one-hot vector with respect to the target class . ξ = 10−5 is a small constant to ensure numerical stability . The following triplet loss is also integrated for pushing the image away from the original class while pulling it into the target class : LTrip = [ D ( l ( x ′ ) , yt ) −D ( l ( x′ ) , yo ) + γ ] + , D ( l ( x′ ) , y ) = 1− ‖l ( x′ ) · y‖1 ‖l ( x′ ) ‖2‖y‖2 . ( 3 ) The overall loss function is then formulated as LPo+Trip = LPo + λLTrip . Note that in the original work , Po+Trip was evaluated in the commonly adopted , easy ensemble transfer scenario , which only involves models with similar architectures . In addition to devising new loss functions , there are other transfer methods [ 4 , 6 , 8 , 9 ] developed based on the assumption that preventing the attack optimization from overfitting to the specific source model can improve transferability . Such transfer methods can be easily plugged into different attacks without modifications , in contrast to the above methods that need to apply new attack loss functions . In this paper , we consider three [ 4 , 6 , 8 ] of such transfer methods that have been widely used in the literature , as described in the following text . Momentum Iterative-FGSM ( MI-FGSM ) [ 4 ] integrates a momentum term , which accumulates previous gradients in order to achieve more stable update directions . 
It can be expressed as : gi+1 = µ · gi + ∇xJ ( x′i , yt ) ‖∇xJ ( x′i , yt ) ‖1 , x′i+1 = x ′ i − α · sign ( gi ) , ( 4 ) where gi is the accumulated gradients at the i-th iteration , and µ is a decay factor . Another similar technique that instead uses the Nesterov accelerated gradient was explored in [ 9 ] . Translation Invariant-FGSM ( TI-FGSM ) [ 6 ] randomly translates the input image during attack optimization in order to prevent the attack from overfitting to the specific source model . This approach is inspired by the data augmentation techniques used for preventing overfitting in normal model training . Instead of calculating gradients for multiple translated images separately , the authors have proposed an approximate solution to accelerate the implementation . It is achieved by directly computing locally smoothed gradients on the original image via convolution with a kernel : x′i+1 = x ′ i − α · sign ( W ∗ ∇xJ ( x′i , yt ) ) , ( 5 ) where W is the convolution kernel used for smoothing . TI-FGSM was originally designed for boosting transferability with adversarially-trained models as target models and has been recently shown that a smaller kernel size should be used when transferring to normally-trained models [ 12 ] . Diverse Input-FGSM ( DI-FGSM ) [ 8 ] follows a similar idea to TI-FGSM , but applies random resizing and padding for data augmentation . Another important difference is that DI-FGSM randomizes augmentation parameters over iterations rather than fixing them as in TI-FGSM . The attack optimization of DI-FGSM can be formulated as : x′i+1 = x ′ i − α · sign ( ∇xJ ( T ( x′i , p ) , yt ) ) , ( 6 ) where the stochastic transformation T ( x′i , p ) is implemented with probability p at each iteration . In Section 4.1 , we demonstrate that simple transfer attacks with these three transfer methods can actually achieve surprisingly strong targeted transferability in a wide range of transfer scenarios . 2.2 Resource-Intensive Transferable Attacks . Due to the broad consensus that achieving targeted transferability is extremely difficult , recent researchers have resorted to resource-intensive approaches that require training target-class-specific models on large-scale additional data . Specifically , the Feature Distribution Attack ( FDA ) [ 14 ] follows the same attack pipeline as the above simple transferable attacks , but requires auxiliary classifiers that have been trained on additional labeled data as part of the source model . Each auxiliary classifier is a small , binary , one-versus-all classifier trained for a specific target class at a specific layer . That is to say , the number of auxiliary classifiers is the number of layers that are probed multiplied by the number of target classes that are required to model [ 14 ] . The attack loss function of FDA can be formulated as : LFDA = J ( Fl ( x′ ) , yt ) − η ‖Fl ( x′ ) −Fl ( x ) ‖2 ‖Fl ( x ) ‖2 , ( 7 ) where each auxiliary classifiers Fl ( · ) can model the probability that a feature map at layer l is from a specific target class yt . FDA ( N ) +xent [ 15 ] extends FDA by aggregating features from L layers and also incorporating the cross-entropy loss H ( · , · ) of the original network F ( · ) . The loss function of FDA ( N ) +xent can be expressed as : LFDA ( N ) +xent = ∑ l∈L λl ( LFDA + γH ( F ( x′ ) , yt ) ) , where ∑ l∈L λl = 1 . 
( 8 ) Very recently , TTP [ 16 ] has achieved state-of-the-art targeted transferability by directly generating perturbations using target-class-specific GANs that have been trained via matching the distributions of perturbations and a specific target class both globally and locally . Specifically , the global distribution matching is achieved by minimizing the Kullback Leibler ( KL ) divergence , and the local distribution matching is by enforcing the neighbourhood similarity . In order to further boost the performance , data augmentation techniques , such as image rotation , crop resize , horizontal flip , color jittering and gray-scale transformation , have been applied during model training . We refer the readers to [ 16 ] for more technical details of TTP . These two transferable attacks , FDA ( N ) +xent and TTP , are resource intensive due to the use of largescale model training and additional data . However , in Section 4.2 , we show that simple transferable attacks , which require neither model training nor additional data , can actually achieve even better performance than them . | The paper provides new insights for transfer-based targeted attacks. The paper describes that target attacks with simple logit loss archive high transferability under sufficient iterations. Various experiments with realistic settings demonstrate the transferability of attacks with the logit loss. In addition, the paper shows the usefulness of the logit loss for generating universal adversarial perturbations. | science.mup_single_document_summarization.11972 | [
{
"content": "Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.\n\nTitle of the Paper:\nOn Success and Simplicity: A Second Look at Transferable Targeted Attacks\n\nMain Content:\n1 Introduction . Deep neural networks have achieved remarkable performance in various machine learning tasks , but are known to be vulnerable to adversarial attacks [ 1 ] . A key property of adversarial attacks that makes them critical in realistic , black-box scenarios is their transferability [ 2 , 3 ] . Current work on adversarial transferability has achieved great success for non-targeted attacks [ 4–12 ] , while several initial attempts [ 3 , 4 , 13 ] at targeted transferability have shown its extreme difficulty . Targeted transferability is known to be much more challenging and worth exploring since it can raise more critical concerns by fooling models into predicting a chosen , highly dangerous target class . However , so far state-of-the-art results can only be secured by resource-intensive transferable attacks [ 14–16 ] . Specifically , the FDA approach [ 14 , 15 ] is based on modeling layer-wise feature distributions by training target-class-specific auxiliary classifiers on large-scale labeled data , and then optimizing adversarial perturbations using these auxiliary classifiers from across the deep feature space . The TTP approach [ 16 ] is based on training target-class-specific Generative Adversarial Networks ( GANs ) through global and local distribution matching , and then using the trained generator to directly generate perturbations on any given input image . In this paper , we take a second , thorough look at current research on targeted transferability . Our main contribution is the finding that simple transferable attacks [ 4 , 6 , 8 ] that require neither model 35th Conference on Neural Information Processing Systems ( NeurIPS 2021 ) . training nor additional data can actually achieve surprisingly strong targeted transferability . We argue that this insight has been overlooked mainly because current research has unreasonably restricted the attack convergence by only using a small number of iterations ( see detailed discussion in Section 3 ) . Another key contribution of our work is , for the first time , demonstrating the general superiority of a very simple logit loss , which even outperforms the resource-intensive state of the art . In order to validate the general effectiveness of simple transferable attacks , in Section 4.1 , we conduct extensive experiments in a wide range of transfer scenarios . We test the commonly adopted single-model and ensemble transfer scenarios , but also introduce three new scenarios that are more challenging and realistic : an ensemble transfer scenario with little model similarity , a worse-case scenario with low-ranked target classes , and also a real-world attack on the Google Cloud Vision API . Experimental results in these new scenarios suggest that evaluation in only the commonly adopted , easy scenarios can not reveal the actual strength of different attacks , and may cause misleading comparative results . Additional experiments in Section 4.2 have shown the better performance of the simple transferable attacks than the state-of-the-art resource-intensive approaches . 
Finally , in Section 4.3 , inspired by the observation that the generated perturbations themselves reflect specific target semantics , we use the simple Logit attack to generate targeted Universal Adversarial Perturbations ( UAPs ) in a data-free manner . In contrast , recent advances in targeted UAPs [ 16–19 ] have inevitably relied on large-scale optimization over additional data . Overall , we hope our analysis of the weakness of commonly adopted attack settings and transfer scenarios will inspire a more meaningful evaluation on targeted transferability . 2 Related Work . In this section , we review existing simple transferable attacks ( Section 2.1 ) , and also recent resourceintensive transferable attacks ( Section 2.2 ) . Finally , we discuss related work on generating universal adversarial perturbations . 2.1 Simple Transferable Attacks . We refer to transferable attacks that require neither model training nor additional data , but only use iterative optimization on a single ( original ) image as simple transferable attacks . Simple transferable attacks have been extensively studied in the non-targeted case [ 4–12 ] , and also attempted in the targeted case [ 3 , 4 , 20 ] . These attacks are commonly built up on the well-known Iterative-Fast Gradient Sign Method ( I-FGSM ) [ 21 , 22 ] , which can be formulated as : x′0 = x , x ′ i+1 = x ′ i − α · sign ( ∇xJ ( x′i , yt ) ) , ( 1 ) where x′i denotes the perturbed image in the i-th iteration , and yt is the target class label . In order to ensure the imperceptibility , the added perturbations are restricted with respect to some Lp distance , i.e. , satisfying ‖x′ − x‖p ≤ . Current transferable attack methods have commonly adopted the L∞ distance , but can also be easily adapted to the L2 distance based on an L2 normalization [ 23 ] . For the loss function J ( · , · ) , most simple transferable attacks have adopted the Cross-Entropy ( CE ) loss . However , the CE loss has been recently shown to be insufficient in the targeted case due to its decreasing gradient problem [ 20 ] . To address this problem , the authors in [ 20 ] have proposed the Po+Trip loss , in which the Poincaré distance was used to adapt the gradients ’ magnitude : LPo = d ( u , v ) = arccosh ( 1 + δ ( u , v ) ) , δ ( u , v ) = 2 · ‖u− v‖22 ( 1− ‖u‖22 ) ( 1− ‖v‖22 ) , u = l ( x′ ) ‖l ( x′ ) ‖ , v = max { v − ξ , 0 } , ( 2 ) where u is the normalized logit vector and v is the one-hot vector with respect to the target class . ξ = 10−5 is a small constant to ensure numerical stability . The following triplet loss is also integrated for pushing the image away from the original class while pulling it into the target class : LTrip = [ D ( l ( x ′ ) , yt ) −D ( l ( x′ ) , yo ) + γ ] + , D ( l ( x′ ) , y ) = 1− ‖l ( x′ ) · y‖1 ‖l ( x′ ) ‖2‖y‖2 . ( 3 ) The overall loss function is then formulated as LPo+Trip = LPo + λLTrip . Note that in the original work , Po+Trip was evaluated in the commonly adopted , easy ensemble transfer scenario , which only involves models with similar architectures . In addition to devising new loss functions , there are other transfer methods [ 4 , 6 , 8 , 9 ] developed based on the assumption that preventing the attack optimization from overfitting to the specific source model can improve transferability . Such transfer methods can be easily plugged into different attacks without modifications , in contrast to the above methods that need to apply new attack loss functions . 
In this paper , we consider three [ 4 , 6 , 8 ] of such transfer methods that have been widely used in the literature , as described in the following text . Momentum Iterative-FGSM ( MI-FGSM ) [ 4 ] integrates a momentum term , which accumulates previous gradients in order to achieve more stable update directions . It can be expressed as : gi+1 = µ · gi + ∇xJ ( x′i , yt ) ‖∇xJ ( x′i , yt ) ‖1 , x′i+1 = x ′ i − α · sign ( gi ) , ( 4 ) where gi is the accumulated gradients at the i-th iteration , and µ is a decay factor . Another similar technique that instead uses the Nesterov accelerated gradient was explored in [ 9 ] . Translation Invariant-FGSM ( TI-FGSM ) [ 6 ] randomly translates the input image during attack optimization in order to prevent the attack from overfitting to the specific source model . This approach is inspired by the data augmentation techniques used for preventing overfitting in normal model training . Instead of calculating gradients for multiple translated images separately , the authors have proposed an approximate solution to accelerate the implementation . It is achieved by directly computing locally smoothed gradients on the original image via convolution with a kernel : x′i+1 = x ′ i − α · sign ( W ∗ ∇xJ ( x′i , yt ) ) , ( 5 ) where W is the convolution kernel used for smoothing . TI-FGSM was originally designed for boosting transferability with adversarially-trained models as target models and has been recently shown that a smaller kernel size should be used when transferring to normally-trained models [ 12 ] . Diverse Input-FGSM ( DI-FGSM ) [ 8 ] follows a similar idea to TI-FGSM , but applies random resizing and padding for data augmentation . Another important difference is that DI-FGSM randomizes augmentation parameters over iterations rather than fixing them as in TI-FGSM . The attack optimization of DI-FGSM can be formulated as : x′i+1 = x ′ i − α · sign ( ∇xJ ( T ( x′i , p ) , yt ) ) , ( 6 ) where the stochastic transformation T ( x′i , p ) is implemented with probability p at each iteration . In Section 4.1 , we demonstrate that simple transfer attacks with these three transfer methods can actually achieve surprisingly strong targeted transferability in a wide range of transfer scenarios . 2.2 Resource-Intensive Transferable Attacks . Due to the broad consensus that achieving targeted transferability is extremely difficult , recent researchers have resorted to resource-intensive approaches that require training target-class-specific models on large-scale additional data . Specifically , the Feature Distribution Attack ( FDA ) [ 14 ] follows the same attack pipeline as the above simple transferable attacks , but requires auxiliary classifiers that have been trained on additional labeled data as part of the source model . Each auxiliary classifier is a small , binary , one-versus-all classifier trained for a specific target class at a specific layer . That is to say , the number of auxiliary classifiers is the number of layers that are probed multiplied by the number of target classes that are required to model [ 14 ] . The attack loss function of FDA can be formulated as : LFDA = J ( Fl ( x′ ) , yt ) − η ‖Fl ( x′ ) −Fl ( x ) ‖2 ‖Fl ( x ) ‖2 , ( 7 ) where each auxiliary classifiers Fl ( · ) can model the probability that a feature map at layer l is from a specific target class yt . FDA ( N ) +xent [ 15 ] extends FDA by aggregating features from L layers and also incorporating the cross-entropy loss H ( · , · ) of the original network F ( · ) . 
The loss function of FDA ( N ) +xent can be expressed as : LFDA ( N ) +xent = ∑ l∈L λl ( LFDA + γH ( F ( x′ ) , yt ) ) , where ∑ l∈L λl = 1 . ( 8 ) Very recently , TTP [ 16 ] has achieved state-of-the-art targeted transferability by directly generating perturbations using target-class-specific GANs that have been trained via matching the distributions of perturbations and a specific target class both globally and locally . Specifically , the global distribution matching is achieved by minimizing the Kullback Leibler ( KL ) divergence , and the local distribution matching is by enforcing the neighbourhood similarity . In order to further boost the performance , data augmentation techniques , such as image rotation , crop resize , horizontal flip , color jittering and gray-scale transformation , have been applied during model training . We refer the readers to [ 16 ] for more technical details of TTP . These two transferable attacks , FDA ( N ) +xent and TTP , are resource intensive due to the use of largescale model training and additional data . However , in Section 4.2 , we show that simple transferable attacks , which require neither model training nor additional data , can actually achieve even better performance than them .",
"role": "user"
},
{
"content": "The paper provides new insights for transfer-based targeted attacks. The paper describes that target attacks with simple logit loss archive high transferability under sufficient iterations. Various experiments with realistic settings demonstrate the transferability of attacks with the logit loss. In addition, the paper shows the usefulness of the logit loss for generating universal adversarial perturbations.",
"role": "assistant"
}
]
|
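A minimal sketch of a targeted I-FGSM step with the simple logit loss highlighted in the record above: each iteration pushes the target-class logit up while projecting the perturbation back into an L∞ ball. The budget, step size, and iteration count are common settings assumed for illustration, not values prescribed by the record.

```python
import torch

def targeted_ifgsm_logit(model, x, target, eps=16/255, step=2/255, iters=300):
    """x: (batch, C, H, W) images in [0, 1]; target: (batch,) desired class ids (long)."""
    x_adv = x.clone().detach()
    for _ in range(iters):
        x_adv.requires_grad_(True)
        logits = model(x_adv)
        # Simple logit loss: maximize the target-class logit (minimize its negative).
        loss = -logits.gather(1, target.unsqueeze(1)).sum()
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv - step * grad.sign()        # gradient-sign step on the loss
            x_adv = x + (x_adv - x).clamp(-eps, eps)  # project into the L-infinity ball
            x_adv = x_adv.clamp(0.0, 1.0)             # stay in the valid image range
    return x_adv.detach()
```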
science.qasper_abstractive_qa | Examine the provided sections from a research document and address the accompanying query. The text is divided by newlines.
Document Content:
UG18 at SemEval-2018 Task 1: Generating Additional Training Data for Predicting Emotion Intensity in Spanish
The present study describes our submission to SemEval 2018 Task 1: Affect in Tweets. Our Spanish-only approach aimed to demonstrate that it is beneficial to automatically generate additional training data by (i) translating training data from other languages and (ii) applying a semi-supervised learning method. We find strong support for both approaches, with those models outperforming our regular models in all subtasks. However, creating a stepwise ensemble of different models as opposed to simply averaging did not result in an increase in performance. We placed second (EI-Reg), second (EI-Oc), fourth (V-Reg) and fifth (V-Oc) in the four Spanish subtasks we participated in.
Introduction
Understanding the emotions expressed in a text or message is of high relevance nowadays. Companies are interested in this to get an understanding of the sentiment of their current customers regarding their products and the sentiment of their potential customers to attract new ones. Moreover, changes in a product or a company may also affect the sentiment of a customer. However, the intensity of an emotion is crucial in determining the urgency and importance of that sentiment. If someone is only slightly happy about a product, is a customer willing to buy it again? Conversely, if someone is very angry about customer service, his or her complaint might be given priority over somewhat milder complaints.
BIBREF0 present four tasks in which systems have to automatically determine the intensity of emotions (EI) or the intensity of the sentiment (Valence) of tweets in the languages English, Arabic, and Spanish. The goal is to either predict a continuous regression (reg) value or to do ordinal classification (oc) based on a number of predefined categories. The EI tasks have separate training sets for four different emotions: anger, fear, joy and sadness. Due to the large number of subtasks and the fact that this language does not have many resources readily available, we only focus on the Spanish subtasks. Our work makes the following contributions:
Our submissions ranked second (EI-Reg), second (EI-Oc), fourth (V-Reg) and fifth (V-Oc), demonstrating that the proposed method is accurate in automatically determining the intensity of emotions and sentiment of Spanish tweets. This paper will first focus on the datasets, the data generation procedure, and the techniques and tools used. Then we present the results in detail, after which we perform a small error analysis on the largest mistakes our model made. We conclude with some possible ideas for future work.
Data
For each task, the training data that was made available by the organizers is used, which is a selection of tweets with for each tweet a label describing the intensity of the emotion or sentiment BIBREF1 . Links and usernames were replaced by the general tokens URL and @username, after which the tweets were tokenized by using TweetTokenizer. All text was lowercased. In a post-processing step, it was ensured that each emoji is tokenized as a single token.
Translating Lexicons
Most lexical resources for sentiment analysis are in English. To still be able to benefit from these sources, the lexicons in the AffectiveTweets package were translated to Spanish, using the machine translation platform Apertium BIBREF5 .
All lexicons from the AffectiveTweets package were translated, except for SentiStrength. Instead of translating this lexicon, the English version was replaced by the Spanish variant made available by BIBREF6 .
For each subtask, the optimal combination of lexicons was determined. This was done by first calculating the benefits of adding each lexicon individually, after which only beneficial lexicons were added until the score did not increase anymore (e.g. after adding the best four lexicons the fifth one did not help anymore, so only four were added). The tests were performed using a default SVM model, with the set of word embeddings described in the previous section. Each subtask thus uses a different set of lexicons (see Table TABREF1 for an overview of the lexicons used in our final ensemble). For each subtask, this resulted in a (modest) increase on the development set, between 0.01 and 0.05.
Translating Data
The training set provided by BIBREF0 is not very large, so it was interesting to find a way to augment the training set. A possible method is to simply translate the datasets into other languages, leaving the labels intact. Since the present study focuses on Spanish tweets, all tweets from the English datasets were translated into Spanish. This new set of “Spanish” data was then added to our original training set. Again, the machine translation platform Apertium BIBREF5 was used for the translation of the datasets.
Algorithms Used
Three types of models were used in our system, a feed-forward neural network, an LSTM network and an SVM regressor. The neural nets were inspired by the work of Prayas BIBREF7 in the previous shared task. Different regression algorithms (e.g. AdaBoost, XGBoost) were also tried due to the success of SeerNet BIBREF8 , but our study was not able to reproduce their results for Spanish.
For both the LSTM network and the feed-forward network, a parameter search was done for the number of layers, the number of nodes and dropout used. This was done for each subtask, i.e. different tasks can have a different number of layers. All models were implemented using Keras BIBREF9 . After the best parameter settings were found, the results of 10 system runs to produce our predictions were averaged (note that this is different from averaging our different type of models in Section SECREF16 ). For the SVM (implemented in scikit-learn BIBREF10 ), the RBF kernel was used and a parameter search was conducted for epsilon. Detailed parameter settings for each subtask are shown in Table TABREF12 . Each parameter search was performed using 10-fold cross validation, as to not overfit on the development set.
Semi-supervised Learning
One of the aims of this study was to see if using semi-supervised learning is beneficial for emotion intensity tasks. For this purpose, the DISC BIBREF0 corpus was used. This corpus was created by querying certain emotion-related words, which makes it very suitable as a semi-supervised corpus. However, the specific emotion the tweet belonged to was not made public. Therefore, a method was applied to automatically assign the tweets to an emotion by comparing our scraped tweets to this new data set.
First, in an attempt to obtain the query-terms, we selected the 100 words which occurred most frequently in the DISC corpus, in comparison with their frequencies in our own scraped tweets corpus. Words that were clearly not indicators of emotion were removed. The rest was annotated per emotion or removed if it was unclear to which emotion the word belonged. This allowed us to create silver datasets per emotion, assigning tweets to an emotion if an annotated emotion-word occurred in the tweet.
Our semi-supervised approach is quite straightforward: first a model is trained on the training set and then this model is used to predict the labels of the silver data. This silver data is then simply added to our training set, after which the model is retrained. However, an extra step is applied to ensure that the silver data is of reasonable quality. Instead of training a single model initially, ten different models were trained which predict the labels of the silver instances. If the highest and lowest prediction do not differ more than a certain threshold the silver instance is maintained, otherwise it is discarded.
This results in two parameters that could be optimized: the threshold and the number of silver instances that would be added. This method can be applied to both the LSTM and feed-forward networks that were used. An overview of the characteristics of our data set with the final parameter settings is shown in Table TABREF14 . Usually, only a small subset of data was added to our training set, meaning that most of the silver data is not used in the experiments. Note that since only the emotions were annotated, this method is only applicable to the EI tasks.
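A rough sketch of the agreement-filtered self-training step described in this section: several regressors label the silver tweets, and an instance is kept only when their predictions agree within a threshold. Bootstrap resampling stands in here for the ten separately trained networks, and the SVR model, threshold, and cap are placeholder choices rather than the settings in Table TABREF14.

```python
import numpy as np
from sklearn.svm import SVR

def build_silver_set(train_X, train_y, silver_X, n_models=10, threshold=0.2, max_added=400):
    rng = np.random.default_rng(0)
    preds = []
    for _ in range(n_models):
        idx = rng.integers(0, len(train_X), len(train_X))   # bootstrap resample for model diversity
        model = SVR(kernel="rbf").fit(train_X[idx], train_y[idx])
        preds.append(model.predict(silver_X))
    preds = np.stack(preds)                                  # (n_models, n_silver)
    spread = preds.max(axis=0) - preds.min(axis=0)           # highest minus lowest prediction
    keep = np.where(spread < threshold)[0][:max_added]       # keep only high-agreement instances
    return silver_X[keep], preds.mean(axis=0)[keep]          # averaged score as the silver label
```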
Ensembling
To boost performance, the SVM, LSTM, and feed-forward models were combined into an ensemble. For both the LSTM and feed-forward approach, three different models were trained. The first model was trained on the training data (regular), the second model was trained on both the training and translated training data (translated) and the third one was trained on both the training data and the semi-supervised data (silver). Due to the nature of the SVM algorithm, semi-supervised learning does not help, so only the regular and translated model were trained in this case. This results in 8 different models per subtask. Note that for the valence tasks no silver training data was obtained, meaning that for those tasks the semi-supervised models could not be used.
Per task, the LSTM and feed-forward model's predictions were averaged over 10 prediction runs. Subsequently, the predictions of all individual models were combined into an average. Finally, models were removed from the ensemble in a stepwise manner if the removal increased the average score. This was done based on their original scores, i.e. starting out by trying to remove the worst individual model and working our way up to the best model. We only consider it an increase in score if the difference is larger than 0.002 (i.e. the difference between 0.716 and 0.718). If at some point the score does not increase and we are therefore unable to remove a model, the process is stopped and our best ensemble of models has been found. This process uses the scores on the development set of different combinations of models. Note that this means that the ensembles for different subtasks can contain different sets of models. The final model selections can be found in Table TABREF17 .
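A sketch of the stepwise pruning described in this section, assuming Pearson correlation on the development set as the score being maximized; the 0.002 margin and the worst-model-first order follow the description above, while the prediction format and scoring helper are illustrative assumptions.

```python
import numpy as np
from scipy.stats import pearsonr

def stepwise_prune(preds, dev_gold, min_gain=0.002):
    """preds: dict of model name -> dev-set predictions (np.array); dev_gold: np.array of gold scores."""
    score = lambda names: pearsonr(np.mean([preds[n] for n in names], axis=0), dev_gold)[0]
    keep = sorted(preds, key=lambda n: pearsonr(preds[n], dev_gold)[0])  # worst individual model first
    best = score(keep)
    for name in list(keep):
        if len(keep) == 1:
            break
        trial = [n for n in keep if n != name]
        gain = score(trial) - best
        if gain > min_gain:
            keep, best = trial, best + gain   # removal helped: drop the model and continue
        else:
            break                             # stop at the first unsuccessful removal
    return keep, best
```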
Results and Discussion
Table TABREF18 shows the results on the development set of all individuals models, distinguishing the three types of training: regular (r), translated (t) and semi-supervised (s). In Tables TABREF17 and TABREF18 , the letter behind each model (e.g. SVM-r, LSTM-r) corresponds to the type of training used. Comparing the regular and translated columns for the three algorithms, it shows that in 22 out of 30 cases, using translated instances as extra training data resulted in an improvement. For the semi-supervised learning approach, an improvement is found in 15 out of 16 cases. Moreover, our best individual model for each subtask (bolded scores in Table TABREF18 ) is always either a translated or semi-supervised model. Table TABREF18 also shows that, in general, our feed-forward network obtained the best results, having the highest F-score for 8 out of 10 subtasks.
However, Table TABREF19 shows that these scores can still be improved by averaging or ensembling the individual models. On the dev set, averaging our 8 individual models results in a better score for 8 out of 10 subtasks, while creating an ensemble beats all of the individual models as well as the average for each subtask. On the test set, however, only a small increase in score (if any) is found for stepwise ensembling, compared to averaging. Even though the results do not get worse, we cannot conclude that stepwise ensembling is a better method than simply averaging.
Our official scores (column Ens Test in Table TABREF19 ) have placed us second (EI-Reg, EI-Oc), fourth (V-Reg) and fifth (V-Oc) on the SemEval AIT-2018 leaderboard. However, it is evident that the results obtained on the test set are not always in line with those achieved on the development set. Especially on the anger subtask for both EI-Reg and EI-Oc, the scores are considerably lower on the test set in comparison with the results on the development set. Therefore, a small error analysis was performed on the instances where our final model made the largest errors.
Error Analysis
Due to some large differences between our results on the dev and test set of this task, we performed a small error analysis in order to see what caused these differences. For EI-Reg-anger, the gold labels were compared to our own predictions, and we manually checked 50 instances for which our system made the largest errors.
Some examples that were indicative of the shortcomings of our system are shown in Table TABREF20 . First of all, our system did not take into account capitalization. The implications of this are shown in the first sentence, where capitalization intensifies the emotion used in the sentence. In the second sentence, the name Imperator Furiosa is not understood. Since our texts were lowercased, our system was unable to capture the named entity and thought the sentence was about an angry emperor instead. In the third sentence, our system fails to capture that when you are so angry that it makes you laugh, it results in a reduced intensity of the angriness. Finally, in the fourth sentence, it is the figurative language me infla la vena (it inflates my vein) that the system is not able to understand.
The first two error-categories might be solved by including smart features regarding capitalization and named entity recognition. However, the last two categories are problems of natural language understanding and will be very difficult to fix.
Conclusion
To conclude, the present study described our submission for the Semeval 2018 Shared Task on Affect in Tweets. We participated in four Spanish subtasks and our submissions ranked second, second, fourth and fifth place. Our study aimed to investigate whether the automatic generation of additional training data through translation and semi-supervised learning, as well as the creation of stepwise ensembles, increase the performance of our Spanish-language models. Strong support was found for the translation and semi-supervised learning approaches; our best models for all subtasks use either one of these approaches. These results suggest that both of these additional data resources are beneficial when determining emotion intensity (for Spanish). However, the creation of a stepwise ensemble from the best models did not result in better performance compared to simply averaging the models. In addition, some signs of overfitting on the dev set were found. In future work, we would like to apply the methods (translation and semi-supervised learning) used on Spanish on other low-resource languages and potentially also on other tasks.
Table 1: Lexicons included in our final ensemble. NRC-10 and SentiWordNet are left out of the table because they never improved the score for a task.
Table 2: Parameter settings for the algorithms used. For feed-forward, we show the number of nodes per layer. The Dense column for LSTM shows whether a dense layer was added after the LSTM layers (with half the number of nodes as is shown in the Nodes column). The feed-forward networks always use a dropout of 0.001 after the first layer.
Table 3: Statistics and parameter settings of the semi-supervised learning experiments.
Table 4: Models included in our final ensemble.
Table 5: Scores for each individual model per subtask. Best individual score per subtask is bolded.
Table 6: Results on the dev and test set for averaging and stepwise ensembling the individual models. The last column shows our official results.
Table 7: Error analysis for the EI-Reg-anger subtask, with English translations.
Query: What subtasks did they participate in?
JSON Response: | {"answer": "Answer with content missing: (Subscript 1: \"We did not participate in subtask 5 (E-c)\") Authors participated in EI-Reg, EI-Oc, V-Reg and V-Oc subtasks.", "evidence": ["Our submissions ranked second (EI-Reg), second (EI-Oc), fourth (V-Reg) and fifth (V-Oc), demonstrating that the proposed method is accurate in automatically determining the intensity of emotions and sentiment of Spanish tweets. This paper will first focus on the datasets, the data generation procedure, and the techniques and tools used. Then we present the results in detail, after which we perform a small error analysis on the largest mistakes our model made. We conclude with some possible ideas for future work."]} | science.qasper_abstractive_qa.236.0 | [
{
"content": "Examine the provided sections from a research document and address the accompanying query. The text is divided by newlines.\n\nDocument Content:\n\nUG18 at SemEval-2018 Task 1: Generating Additional Training Data for Predicting Emotion Intensity in Spanish\n\nThe present study describes our submission to SemEval 2018 Task 1: Affect in Tweets. Our Spanish-only approach aimed to demonstrate that it is beneficial to automatically generate additional training data by (i) translating training data from other languages and (ii) applying a semi-supervised learning method. We find strong support for both approaches, with those models outperforming our regular models in all subtasks. However, creating a stepwise ensemble of different models as opposed to simply averaging did not result in an increase in performance. We placed second (EI-Reg), second (EI-Oc), fourth (V-Reg) and fifth (V-Oc) in the four Spanish subtasks we participated in.\n\nIntroduction\nUnderstanding the emotions expressed in a text or message is of high relevance nowadays. Companies are interested in this to get an understanding of the sentiment of their current customers regarding their products and the sentiment of their potential customers to attract new ones. Moreover, changes in a product or a company may also affect the sentiment of a customer. However, the intensity of an emotion is crucial in determining the urgency and importance of that sentiment. If someone is only slightly happy about a product, is a customer willing to buy it again? Conversely, if someone is very angry about customer service, his or her complaint might be given priority over somewhat milder complaints.\n BIBREF0 present four tasks in which systems have to automatically determine the intensity of emotions (EI) or the intensity of the sentiment (Valence) of tweets in the languages English, Arabic, and Spanish. The goal is to either predict a continuous regression (reg) value or to do ordinal classification (oc) based on a number of predefined categories. The EI tasks have separate training sets for four different emotions: anger, fear, joy and sadness. Due to the large number of subtasks and the fact that this language does not have many resources readily available, we only focus on the Spanish subtasks. Our work makes the following contributions:\nOur submissions ranked second (EI-Reg), second (EI-Oc), fourth (V-Reg) and fifth (V-Oc), demonstrating that the proposed method is accurate in automatically determining the intensity of emotions and sentiment of Spanish tweets. This paper will first focus on the datasets, the data generation procedure, and the techniques and tools used. Then we present the results in detail, after which we perform a small error analysis on the largest mistakes our model made. We conclude with some possible ideas for future work.\n\nData\nFor each task, the training data that was made available by the organizers is used, which is a selection of tweets with for each tweet a label describing the intensity of the emotion or sentiment BIBREF1 . Links and usernames were replaced by the general tokens URL and @username, after which the tweets were tokenized by using TweetTokenizer. All text was lowercased. In a post-processing step, it was ensured that each emoji is tokenized as a single token.\n\nTranslating Lexicons\nMost lexical resources for sentiment analysis are in English. 
To still be able to benefit from these sources, the lexicons in the AffectiveTweets package were translated to Spanish, using the machine translation platform Apertium BIBREF5 .\nAll lexicons from the AffectiveTweets package were translated, except for SentiStrength. Instead of translating this lexicon, the English version was replaced by the Spanish variant made available by BIBREF6 .\nFor each subtask, the optimal combination of lexicons was determined. This was done by first calculating the benefits of adding each lexicon individually, after which only beneficial lexicons were added until the score did not increase anymore (e.g. after adding the best four lexicons the fifth one did not help anymore, so only four were added). The tests were performed using a default SVM model, with the set of word embeddings described in the previous section. Each subtask thus uses a different set of lexicons (see Table TABREF1 for an overview of the lexicons used in our final ensemble). For each subtask, this resulted in a (modest) increase on the development set, between 0.01 and 0.05.\n\nTranslating Data\nThe training set provided by BIBREF0 is not very large, so it was interesting to find a way to augment the training set. A possible method is to simply translate the datasets into other languages, leaving the labels intact. Since the present study focuses on Spanish tweets, all tweets from the English datasets were translated into Spanish. This new set of “Spanish” data was then added to our original training set. Again, the machine translation platform Apertium BIBREF5 was used for the translation of the datasets.\n\nAlgorithms Used\nThree types of models were used in our system, a feed-forward neural network, an LSTM network and an SVM regressor. The neural nets were inspired by the work of Prayas BIBREF7 in the previous shared task. Different regression algorithms (e.g. AdaBoost, XGBoost) were also tried due to the success of SeerNet BIBREF8 , but our study was not able to reproduce their results for Spanish.\nFor both the LSTM network and the feed-forward network, a parameter search was done for the number of layers, the number of nodes and dropout used. This was done for each subtask, i.e. different tasks can have a different number of layers. All models were implemented using Keras BIBREF9 . After the best parameter settings were found, the results of 10 system runs to produce our predictions were averaged (note that this is different from averaging our different type of models in Section SECREF16 ). For the SVM (implemented in scikit-learn BIBREF10 ), the RBF kernel was used and a parameter search was conducted for epsilon. Detailed parameter settings for each subtask are shown in Table TABREF12 . Each parameter search was performed using 10-fold cross validation, as to not overfit on the development set.\n\nSemi-supervised Learning\nOne of the aims of this study was to see if using semi-supervised learning is beneficial for emotion intensity tasks. For this purpose, the DISC BIBREF0 corpus was used. This corpus was created by querying certain emotion-related words, which makes it very suitable as a semi-supervised corpus. However, the specific emotion the tweet belonged to was not made public. 
Therefore, a method was applied to automatically assign the tweets to an emotion by comparing our scraped tweets to this new data set.\nFirst, in an attempt to obtain the query-terms, we selected the 100 words which occurred most frequently in the DISC corpus, in comparison with their frequencies in our own scraped tweets corpus. Words that were clearly not indicators of emotion were removed. The rest was annotated per emotion or removed if it was unclear to which emotion the word belonged. This allowed us to create silver datasets per emotion, assigning tweets to an emotion if an annotated emotion-word occurred in the tweet.\nOur semi-supervised approach is quite straightforward: first a model is trained on the training set and then this model is used to predict the labels of the silver data. This silver data is then simply added to our training set, after which the model is retrained. However, an extra step is applied to ensure that the silver data is of reasonable quality. Instead of training a single model initially, ten different models were trained which predict the labels of the silver instances. If the highest and lowest prediction do not differ more than a certain threshold the silver instance is maintained, otherwise it is discarded.\nThis results in two parameters that could be optimized: the threshold and the number of silver instances that would be added. This method can be applied to both the LSTM and feed-forward networks that were used. An overview of the characteristics of our data set with the final parameter settings is shown in Table TABREF14 . Usually, only a small subset of data was added to our training set, meaning that most of the silver data is not used in the experiments. Note that since only the emotions were annotated, this method is only applicable to the EI tasks.\n\nEnsembling\nTo boost performance, the SVM, LSTM, and feed-forward models were combined into an ensemble. For both the LSTM and feed-forward approach, three different models were trained. The first model was trained on the training data (regular), the second model was trained on both the training and translated training data (translated) and the third one was trained on both the training data and the semi-supervised data (silver). Due to the nature of the SVM algorithm, semi-supervised learning does not help, so only the regular and translated model were trained in this case. This results in 8 different models per subtask. Note that for the valence tasks no silver training data was obtained, meaning that for those tasks the semi-supervised models could not be used.\nPer task, the LSTM and feed-forward model's predictions were averaged over 10 prediction runs. Subsequently, the predictions of all individual models were combined into an average. Finally, models were removed from the ensemble in a stepwise manner if the removal increased the average score. This was done based on their original scores, i.e. starting out by trying to remove the worst individual model and working our way up to the best model. We only consider it an increase in score if the difference is larger than 0.002 (i.e. the difference between 0.716 and 0.718). If at some point the score does not increase and we are therefore unable to remove a model, the process is stopped and our best ensemble of models has been found. This process uses the scores on the development set of different combinations of models. Note that this means that the ensembles for different subtasks can contain different sets of models. 
The final model selections can be found in Table TABREF17 .\n\nResults and Discussion\nTable TABREF18 shows the results on the development set of all individuals models, distinguishing the three types of training: regular (r), translated (t) and semi-supervised (s). In Tables TABREF17 and TABREF18 , the letter behind each model (e.g. SVM-r, LSTM-r) corresponds to the type of training used. Comparing the regular and translated columns for the three algorithms, it shows that in 22 out of 30 cases, using translated instances as extra training data resulted in an improvement. For the semi-supervised learning approach, an improvement is found in 15 out of 16 cases. Moreover, our best individual model for each subtask (bolded scores in Table TABREF18 ) is always either a translated or semi-supervised model. Table TABREF18 also shows that, in general, our feed-forward network obtained the best results, having the highest F-score for 8 out of 10 subtasks.\nHowever, Table TABREF19 shows that these scores can still be improved by averaging or ensembling the individual models. On the dev set, averaging our 8 individual models results in a better score for 8 out of 10 subtasks, while creating an ensemble beats all of the individual models as well as the average for each subtask. On the test set, however, only a small increase in score (if any) is found for stepwise ensembling, compared to averaging. Even though the results do not get worse, we cannot conclude that stepwise ensembling is a better method than simply averaging.\nOur official scores (column Ens Test in Table TABREF19 ) have placed us second (EI-Reg, EI-Oc), fourth (V-Reg) and fifth (V-Oc) on the SemEval AIT-2018 leaderboard. However, it is evident that the results obtained on the test set are not always in line with those achieved on the development set. Especially on the anger subtask for both EI-Reg and EI-Oc, the scores are considerably lower on the test set in comparison with the results on the development set. Therefore, a small error analysis was performed on the instances where our final model made the largest errors.\n\nError Analysis\nDue to some large differences between our results on the dev and test set of this task, we performed a small error analysis in order to see what caused these differences. For EI-Reg-anger, the gold labels were compared to our own predictions, and we manually checked 50 instances for which our system made the largest errors.\nSome examples that were indicative of the shortcomings of our system are shown in Table TABREF20 . First of all, our system did not take into account capitalization. The implications of this are shown in the first sentence, where capitalization intensifies the emotion used in the sentence. In the second sentence, the name Imperator Furiosa is not understood. Since our texts were lowercased, our system was unable to capture the named entity and thought the sentence was about an angry emperor instead. In the third sentence, our system fails to capture that when you are so angry that it makes you laugh, it results in a reduced intensity of the angriness. Finally, in the fourth sentence, it is the figurative language me infla la vena (it inflates my vein) that the system is not able to understand.\nThe first two error-categories might be solved by including smart features regarding capitalization and named entity recognition. 
However, the last two categories are problems of natural language understanding and will be very difficult to fix.\n\nConclusion\nTo conclude, the present study described our submission for the Semeval 2018 Shared Task on Affect in Tweets. We participated in four Spanish subtasks and our submissions ranked second, second, fourth and fifth place. Our study aimed to investigate whether the automatic generation of additional training data through translation and semi-supervised learning, as well as the creation of stepwise ensembles, increase the performance of our Spanish-language models. Strong support was found for the translation and semi-supervised learning approaches; our best models for all subtasks use either one of these approaches. These results suggest that both of these additional data resources are beneficial when determining emotion intensity (for Spanish). However, the creation of a stepwise ensemble from the best models did not result in better performance compared to simply averaging the models. In addition, some signs of overfitting on the dev set were found. In future work, we would like to apply the methods (translation and semi-supervised learning) used on Spanish on other low-resource languages and potentially also on other tasks.\n\nTable 1: Lexicons included in our final ensemble. NRC-10 and SentiWordNet are left out of the table because they never improved the score for a task.\n\nTable 2: Parameter settings for the algorithms used. For feed-forward, we show the number of nodes per layer. The Dense column for LSTM shows whether a dense layer was added after the LSTM layers (with half the number of nodes as is shown in the Nodes column). The feed-forward networks always use a dropout of 0.001 after the first layer.\n\nTable 3: Statistics and parameter settings of the semi-supervised learning experiments.\n\nTable 4: Models included in our final ensemble.\n\nTable 5: Scores for each individual model per subtask. Best individual score per subtask is bolded.\n\nTable 6: Results on the dev and test set for averaging and stepwise ensembling the individual models. The last column shows our official results.\n\nTable 7: Error analysis for the EI-Reg-anger subtask, with English translations.\n\nQuery: What subtasks did they participate in?\n\nJSON Response:",
"role": "user"
},
{
"content": "{\"answer\": \"Answer with content missing: (Subscript 1: \\\"We did not participate in subtask 5 (E-c)\\\") Authors participated in EI-Reg, EI-Oc, V-Reg and V-Oc subtasks.\", \"evidence\": [\"Our submissions ranked second (EI-Reg), second (EI-Oc), fourth (V-Reg) and fifth (V-Oc), demonstrating that the proposed method is accurate in automatically determining the intensity of emotions and sentiment of Spanish tweets. This paper will first focus on the datasets, the data generation procedure, and the techniques and tools used. Then we present the results in detail, after which we perform a small error analysis on the largest mistakes our model made. We conclude with some possible ideas for future work.\"]}",
"role": "assistant"
}
]
|
science.discomat_te | | Samples | d MAS , ppm (+-2) | d 3QMAS , ppm (+-1) | d iso ppm (+-2) | P Q, MHz (+-0.2) |
| BS55 | | | | |
| Ba-O-Si | 156 | -90 | 161 | 2.3 |
| BB46 | | | | |
| Ba-O-B | 183 | -111 | 196 | 3.7 |
| BBS433 | | | | |
| Ba-O-Si | 153 | -88 | 158 | 2.3 |
| Ba-O-B | 185 | -112 | 197 | 3.6 |
Caption: 17O NMR parameters for NBO in glass samples BS55, BB46 and BBS433 derived from 3QMAS spectra measured at 14.1 T a
Examine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:
{"value": "xx", "type": "Other"}
{"value": "xx", "type": "Glass_Compound_Amount", "constituent": "xx", "unit": "xx", "material": "xx"}
Process each cell sequentially, and for any unknown attributes, use "xx" as a placeholder. Each JSON should be on a new line.
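A minimal sketch of how such a cell-by-cell conversion could be scripted is given here for orientation. It is not part of the task prompt above or of the gold output that follows: it only emits the generic "Other" schema, and a purely mechanical pass over every numeric cell is just a starting point (the gold output below, for instance, does not annotate the d 3QMAS column at all). Deciding when the "Glass_Compound_Amount" schema applies requires domain knowledge.

import json

# Subset of the table above, kept in the same pipe-delimited layout.
table = """| Ba-O-Si | 156 | -90 | 161 | 2.3 |
| Ba-O-B | 183 | -111 | 196 | 3.7 |"""

def is_number(cell: str) -> bool:
    try:
        float(cell)
        return True
    except ValueError:
        return False

for row in table.splitlines():
    cells = [c.strip() for c in row.strip("|").split("|")]
    for cell in cells:
        if is_number(cell):
            # One JSON object per numeric cell, in the format requested above.
            print(json.dumps({"value": cell, "type": "Other"}))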
Output: | {"value": "156", "type": "Other"}
{"value": "161", "type": "Other"}
{"value": "2.3", "type": "Other"}
{"value": "183", "type": "Other"}
{"value": "196", "type": "Other"}
{"value": "3.7", "type": "Other"}
{"value": "153", "type": "Other"}
{"value": "158", "type": "Other"}
{"value": "2.3", "type": "Other"}
{"value": "185", "type": "Other"}
{"value": "197", "type": "Other"}
{"value": "3.6", "type": "Other"} | science.discomat_te.35 | [
{
"content": "| Samples | d MAS , ppm (+-2) | d 3QMAS , ppm (+-1) | d iso ppm (+-2) | P Q, MHz (+-0.2) |\n| BS55 | | | | |\n| Ba-O-Si | 156 | -90 | 161 | 2.3 |\n| BB46 | | | | |\n| Ba-O-B | 183 | -111 | 196 | 3.7 |\n| BBS433 | | | | |\n| Ba-O-Si | 153 | -88 | 158 | 2.3 |\n| Ba-O-B | 185 | -112 | 197 | 3.6 |\n\nCaption: 17O NMR parameters for NBO in glass samples BS55, BB46 and BBS433 derived from 3QMAS spectra measured at 14.1 T a\n\nExamine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:\n{\"value\": \"xx\", \"type\": \"Other\"}\n{\"value\": \"xx\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"xx\", \"unit\": \"xx\", \"material\": \"xx\"}\n\nProcess each cell sequentially, and for any unknown attributes, use \"xx\" as a placeholder. Each JSON should be on a new line.\n\nOutput:",
"role": "user"
},
{
"content": "{\"value\": \"156\", \"type\": \"Other\"}\n{\"value\": \"161\", \"type\": \"Other\"}\n{\"value\": \"2.3\", \"type\": \"Other\"}\n{\"value\": \"183\", \"type\": \"Other\"}\n{\"value\": \"196\", \"type\": \"Other\"}\n{\"value\": \"3.7\", \"type\": \"Other\"}\n{\"value\": \"153\", \"type\": \"Other\"}\n{\"value\": \"158\", \"type\": \"Other\"}\n{\"value\": \"2.3\", \"type\": \"Other\"}\n{\"value\": \"185\", \"type\": \"Other\"}\n{\"value\": \"197\", \"type\": \"Other\"}\n{\"value\": \"3.6\", \"type\": \"Other\"}",
"role": "assistant"
}
]
|
science.qasper_abstractive_qa | Examine the provided sections from a research document and address the accompanying query. The text is divided by newlines.
Document Content:
External Lexical Information for Multilingual Part-of-Speech Tagging
Morphosyntactic lexicons and word vector representations have both proven useful for improving the accuracy of statistical part-of-speech taggers. Here we compare the performances of four systems on datasets covering 16 languages, two of these systems being feature-based (MEMMs and CRFs) and two of them being neural-based (bi-LSTMs). We show that, on average, all four approaches perform similarly and reach state-of-the-art results. Yet better performances are obtained with our feature-based models on lexically richer datasets (e.g. for morphologically rich languages), whereas neural-based results are higher on datasets with less lexical variability (e.g. for English). These conclusions hold in particular for the MEMM models relying on our system MElt, which benefited from newly designed features. This shows that, under certain conditions, feature-based approaches enriched with morphosyntactic lexicons are competitive with respect to neural methods.
MElt
MElt BIBREF12 is a tagging system based on maximum entropy Markov models (MEMM) BIBREF5 , a class of discriminative models that are suitable for sequence labelling BIBREF5 . The basic set of features used by MElt is given in BIBREF12 . It is a superset of the feature sets used by BIBREF5 and BIBREF24 and includes both local standard features (for example the current word itself and its prefixes and suffixes of length 1 to 4) and contextual standard features (for example the tag just assigned to the preceding word). In particular, with respect to Ratnaparkhi's feature set, MElt's basic feature set lifts the restriction that local standard features used to analyse the internal composition of the current word should only apply to rare words.
One of the advantages of feature-based models such as MEMMs and CRFs is that complementary information can be easily added in the form of additional features. This was investigated for instance by BIBREF25 , whose best-performing model for PoS tagging dialogues was obtained with a version of MElt extended with dialogue-specific features. Yet the motivation of MElt's developers was first and foremost to investigate the best way to integrate lexical information extracted from large-scale morphosyntactic lexical resources into their models, on top of the training data BIBREF12 . They showed that performances are better when this external lexical information is integrated in the form of additional lexical features than when the external lexicon is used as constraints at tagging time. These lexical features can also be divided into local lexical features (for example the list of possible tags known to the external lexicon for the current word) and contextual lexical features (for example the list of possible tags known to the external lexicon for surrounding words). In particular, lexical contextual features provide a means to model the right context of the current word, made of words that have not yet been tagged by the system but for which the lexicon often provides a list of possible tags. Moreover, tagging accuracy for out-of-vocabulary (OOV) words is improved, as a result of the fact that words unknown to the training corpus might be known to the external lexicon.
Despite a few experiments published with MElt on languages other than French BIBREF12 , BIBREF40 , BIBREF41 , the original feature set used by MElt (standard and lexical features) was designed and tested mostly on this language, by building and evaluating tagging models on a variant of the French TreeBank. Since our goal was to carry out experiments in a multilingual setting, we have decided to design our own set of features, using the standard MElt features as a starting point. With respect to the original MElt feature set, we have added new ones, such as prefixes and suffixes of the following word, as well as a hybrid contextual feature obtained by concatenating the tag predicted for the preceding word and the tag(s) provided by the external lexicon for the following word.
In order to select the best performing feature set, we carried out a series of experiments using the multilingual dataset provided during the SPMRL parsing shared task BIBREF42 . This included discarding useless or harmful features and selecting the maximal length of the prefixes and suffixes to be used as features, both for the current word and for the following word.
We incorporated in MElt the best performing feature set, described in Table TABREF1 . All models discussed in this paper are based on this feature set.
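A minimal sketch of the kind of feature function described in this section follows: the word form, prefixes and suffixes of length 1 to 4 for the current and following word, the previously assigned tag, the external lexicon's tags for the current and following word, and the hybrid previous-tag/next-lexicon-tags feature. This is not MElt's actual implementation; the feature names, the set-valued lexicon and the sentence-boundary token are assumptions made for illustration.

# `lexicon` maps a word form to the set of PoS tags known to the external
# morphosyntactic lexicon; unknown words get the singleton {"_unk_"} as in Table TABREF1.
def token_features(tokens, i, prev_tag, lexicon):
    w = tokens[i]
    nxt = tokens[i + 1] if i + 1 < len(tokens) else "</s>"
    lex = lambda word: sorted(lexicon.get(word, {"_unk_"}))
    feats = {"word": w, "prev_tag": prev_tag,
             "lex_current": "|".join(lex(w)),
             "lex_next": "|".join(lex(nxt)),
             # hybrid contextual feature: previous prediction + lexicon tags of the next word
             "prev_tag+lex_next": prev_tag + "^" + "|".join(lex(nxt))}
    for k in range(1, 5):                      # prefixes/suffixes of length 1..4
        feats[f"prefix{k}"] = w[:k]
        feats[f"suffix{k}"] = w[-k:]
        feats[f"next_prefix{k}"] = nxt[:k]
        feats[f"next_suffix{k}"] = nxt[-k:]
    return feats

lexicon = {"chats": {"NOUN", "VERB"}, "les": {"DET", "PRON"}}
print(token_features(["les", "chats"], 0, "<s>", lexicon))

In an MEMM, features like these are fed to a maximum entropy classifier at each position, conditioning on the tags already predicted to the left.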
Corpora
We carried out our experiments on the Universal Dependencies v1.2 treebanks BIBREF21 , hereafter UD1.2, from which morphosyntactically annotated corpora can be trivially extracted. All UD1.2 corpora use a common tag set, the 17 universal PoS tags, which is an extension of the tagset proposed by BIBREF43 .
As our goal is to study the impact of lexical information for PoS tagging, we have restricted our experiments to UD1.2 corpora that cover languages for which we have morphosyntactic lexicons at our disposal, and for which BIBREF20 provide results. We considered UD1.2 corpora for the following 16 languages: Bulgarian, Croatian, Czech, Danish, English, French, German, Indonesian, Italian, Norwegian, Persian, Polish, Portuguese, Slovenian, Spanish and Swedish. Although this language list contains only one non-Indo-European (Indonesian), four major Indo-European sub-families are represented (Germanic, Romance, Slavic, Indo-Iranian). Overall, the 16 languages considered in our experiments are typologically, morphologically and syntactically fairly diverse.
Lexicons
We generate our external lexicons using the set of source lexicons listed in Table TABREF3 . Since external lexical information is exploited via features, there is no need for the external lexicons and the annotated corpora to use the same PoS inventory. Therefore, for each language, we simply extracted from the corresponding lexicon the PoS of each word based on its morphological tags, by removing all information provided except for its coarsest-level category. We also added entries for punctuations when the source lexicons did not contain any.
We also performed experiments in which we retained the full original tags provided by the lexicons, with all morphological features included. On average, results were slightly better than those presented in the paper, although not statistically significantly. Moreover, the granularity of tag inventories in the lexicons is diverse, which makes it difficult to draw general conclusions about results based on full tags. This is why we only report results based on (coarse) PoS extracted from the original lexicons.
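A minimal sketch of the coarse-PoS extraction described above is shown here. It assumes a simple tab-separated lexicon format in which the coarse category precedes the morphological features; the real source lexicons listed in Table TABREF3 use their own, varied conventions, so this format is only an assumption for illustration.

from collections import defaultdict

def load_coarse_lexicon(lines):
    lexicon = defaultdict(set)
    for line in lines:
        form, full_tag = line.rstrip("\n").split("\t")
        coarse = full_tag.split("-")[0]      # keep the coarsest category, drop all morphological features
        lexicon[form].add(coarse)
    return lexicon

sample = ["chanteuses\tNOUN-fem-plur", "chante\tVERB-pres-3sg", "chante\tNOUN-fem-sing"]
print(dict(load_coarse_lexicon(sample)))   # e.g. {'chanteuses': {'NOUN'}, 'chante': {'NOUN', 'VERB'}}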
Baseline models
In order to assess the respective contributions of external lexicons and word vector representations, we first compared the results of the three above-mentioned systems when trained without such additional lexical information. Table TABREF11 provides the results of MElt and MarMoT retrained on UD1.2 corpora, together with the results publised on the same corpora by BIBREF20 , using their best model not enhanced by external word vector representations —i.e. the model they call INLINEFORM0 , which is a bidirectional LSTM that combines both word and character embeddings.
These results show that Plank et al.'s (2016) bi-LSTM performs extremely well, surpassed by MarMoT on only 3 out of 16 datasets (Czech, French and Italian), and by MElt only once (Indonesian).
Models enriched with external lexical information
Table TABREF13 provides the results of four systems enriched with lexical information. The feature-based systems MElt and MarMoT, respectively based on MEMMs and CRFs, are extended with the lexical information provided by our morphosyntactic lexicons. This extension takes the form of additional features, as described in Section SECREF2 for MElt. The results reported by BIBREF20 for their bidirectional LSTM when initialised with Polyglot embeddings trained on full wikipedias are also included, together with their new system FREQBIN, also initialised with Polyglot embeddings. FREQBIN trains bi-LSTMs to predict for each input word both a PoS and a label that represents its log frequency in the training data. As they word it, “the idea behind this model is to make the representation predictive for frequency, which encourages the model not to share representations between common and rare words, thus benefiting the handling of rare tokens.”
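As a minimal sketch of the auxiliary label idea quoted above, a FREQBIN-style second target can be derived directly from training-set counts. The exact binning used below (the integer part of the base-2 logarithm of the count) is an illustrative assumption, not the published recipe.

import math
from collections import Counter

def frequency_labels(train_tokens):
    # Each training word gets an extra label derived from its log frequency;
    # rare words (hapaxes) end up in bin 0, frequent words in higher bins.
    counts = Counter(train_tokens)
    return {w: int(math.log(c, 2)) for w, c in counts.items()}

tokens = "the cat sat on the mat the end".split()
print(frequency_labels(tokens))   # e.g. {'the': 1, 'cat': 0, 'sat': 0, ...}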
The results, which are also displayed in Figures FIGREF14 and FIGREF15 , show that all systems reach very similar results on average, although discrepancies can be observed from one dataset to another, on which we shall comment shortly. The best performing system in terms of macro-average is MElt (96.60%). Both bi-LSTM systems reach the same score (96.58%), the difference with MElt's results being non-significant, whereas MarMoT is only 0.14% behind (96.46%). Given the better baseline scores of the neural approaches, these results show that the benefits of using external lexicons in the feature-based models MElt and MarMoT are much higher than those of using Polyglot word vector representations as initialisations for bi-LSTMs.
Yet these very similar overall results reflect a different picture when focusing on OOV tagging accuracy. The best models for OOV tagging accuracy are, by far, the FREQBIN models, which are beaten by MarMoT and by MElt only once each (on English and Danish respectively). The comparison on OOV tagging between MElt and MarMoT shows that MElt performs better on average than MarMoT, despite the fact that MarMoT's baseline results were better than those reached by MElt. This shows that the information provided by external morphosyntactic lexicons is better exploited by MElt's lexical features than by those used by MarMoT. On the other hand, the comparison of both bi-LSTM-based approaches confirms that the FREQBIN model is better by over 10% absolute on OOV tagging accuracy (94.28% vs. 83.59%), with a 65% lower error rate.
One of the important differences between the lexical information provided by an external lexicon and word vectors built from raw corpora, apart from the very nature of the lexical information provided, is the coverage and accuracy of this lexical information on rare words. All words in a morphosyntactic lexicon are associated with information of the same granularity and quality, which is not the case with word representations such as those provided by Polyglot. Models that take advantage of external lexicons should therefore perform comparatively better on datasets containing a higher proportion of rarer words, provided the lexicons' coverage is high. In order to confirm this intuition, we have used a lexical richness metric based on the type/token ratio. Since this ratio is well-known for being sensitive to corpus length, we normalised it by computing it over the first 60,000 tokens of each training set. When this normalised type/token ratio is plotted against the difference between the results of MElt and both bi-LSTM-based models, the expected correlation is clearly visible (see Figure FIGREF16). This explains why MElt obtains better results on the morphologically richer Slavic datasets (average normalised type/token ratio: 0.28, average accuracy difference: 0.32 compared to both bi-LSTM+Polyglot and FREQBIN+Polyglot) and, at the other end of the spectrum, significantly worse results on the English dataset (normalised type/token ratio: 0.15, average accuracy difference: -0.56 compared to bi-LSTM+Polyglot, -0.57 compared to FREQBIN+Polyglot).
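The normalised type/token ratio is straightforward to compute; a direct sketch follows (the toy corpus is for illustration only).

def normalised_ttr(tokens, window=60_000):
    # Ratio of distinct word forms to tokens, computed over the first 60,000
    # tokens so that corpora of different sizes stay comparable.
    head = tokens[:window]
    return len(set(head)) / len(head) if head else 0.0

corpus = ["the", "dog", "chases", "the", "cat"] * 20_000   # toy corpus, 100k tokens
print(round(normalised_ttr(corpus), 4))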
Table 1: Feature set used by our MElt models. The current word is wi = w1i . . . w ni i . Previously assigned tags for the two previous words are ti−2 and ti−1. The tag to be predicted for the current word is ti, which can be assigned any tag T in the tagset. The lex function applied to a word returns the set of all tags known to the lexicon for this word, or the singleton {_unk_} if the word is unknown to the lexicon. Boolean functions used by the local standard features have self-explanatory names.
Table 2: Information about the morphosyntactic lexicons used as external sources of lexical information in our MElt and MarMoT models. The number of entries and tagset sizes refers to the morphosyntactic lexicons we extracted and used in our models, not to the original resources.
Table 3: Overall accuracy (in %) of baseline systems, i.e. MElt and MarMoT models trained without external lexicons, and Plank et al.’s (2016) ~c + ~w models, which do not make use of Polyglot embeddings. Best scores are highlighted for each corpus.
Figure 1: Graphical visualisation of the overall tagging accuracies for all four types of enriched models. Detailed results are given in Table 4. Languages are sorted by increasing MElt’s overall tagging scores.
Figure 3: Difference between the tagging accuracy of lexicon-enhanced MElt models and each of the two types of Polyglot-enhanced neural bi-LSTM models plotted against training sets’ normalised token/type ratio.
Query: which datasets did they experiment with?
JSON Response: | {"answer": "Universal Dependencies v1.2 treebanks for the following 16 languages: Bulgarian, Croatian, Czech, Danish, English, French, German,\nIndonesian, Italian, Norwegian, Persian, Polish, Portuguese, Slovenian, Spanish, and Swedish", "evidence": ["We carried out our experiments on the Universal Dependencies v1.2 treebanks BIBREF21 , hereafter UD1.2, from which morphosyntactically annotated corpora can be trivially extracted. All UD1.2 corpora use a common tag set, the 17 universal PoS tags, which is an extension of the tagset proposed by BIBREF43 .", "As our goal is to study the impact of lexical information for PoS tagging, we have restricted our experiments to UD1.2 corpora that cover languages for which we have morphosyntactic lexicons at our disposal, and for which BIBREF20 provide results. We considered UD1.2 corpora for the following 16 languages: Bulgarian, Croatian, Czech, Danish, English, French, German, Indonesian, Italian, Norwegian, Persian, Polish, Portuguese, Slovenian, Spanish and Swedish. Although this language list contains only one non-Indo-European (Indonesian), four major Indo-European sub-families are represented (Germanic, Romance, Slavic, Indo-Iranian). Overall, the 16 languages considered in our experiments are typologically, morphologically and syntactically fairly diverse."]} | science.qasper_abstractive_qa.66.0 | [
{
"content": "Examine the provided sections from a research document and address the accompanying query. The text is divided by newlines.\n\nDocument Content:\n\nExternal Lexical Information for Multilingual Part-of-Speech Tagging\n\nMorphosyntactic lexicons and word vector representations have both proven useful for improving the accuracy of statistical part-of-speech taggers. Here we compare the performances of four systems on datasets covering 16 languages, two of these systems being feature-based (MEMMs and CRFs) and two of them being neural-based (bi-LSTMs). We show that, on average, all four approaches perform similarly and reach state-of-the-art results. Yet better performances are obtained with our feature-based models on lexically richer datasets (e.g. for morphologically rich languages), whereas neural-based results are higher on datasets with less lexical variability (e.g. for English). These conclusions hold in particular for the MEMM models relying on our system MElt, which benefited from newly designed features. This shows that, under certain conditions, feature-based approaches enriched with morphosyntactic lexicons are competitive with respect to neural methods.\n\nMElt\nMElt BIBREF12 is a tagging system based on maximum entropy Markov models (MEMM) BIBREF5 , a class of discriminative models that are suitable for sequence labelling BIBREF5 . The basic set of features used by MElt is given in BIBREF12 . It is a superset of the feature sets used by BIBREF5 and BIBREF24 and includes both local standard features (for example the current word itself and its prefixes and suffixes of length 1 to 4) and contextual standard features (for example the tag just assigned to the preceding word). In particular, with respect to Ratnaparkhi's feature set, MElt's basic feature set lifts the restriction that local standard features used to analyse the internal composition of the current word should only apply to rare words.\nOne of the advantages of feature-based models such as MEMMs and CRFs is that complementary information can be easily added in the form of additional features. This was investigated for instance by BIBREF25 , whose best-performing model for PoS tagging dialogues was obtained with a version of MElt extended with dialogue-specific features. Yet the motivation of MElt's developers was first and foremost to investigate the best way to integrate lexical information extracted from large-scale morphosyntactic lexical resources into their models, on top of the training data BIBREF12 . They showed that performances are better when this external lexical information is integrated in the form of additional lexical features than when the external lexicon is used as constraints at tagging time. These lexical features can also be divided into local lexical features (for example the list of possible tags known to the external lexicon for the current word) and contextual lexical features (for example the list of possible tags known to the external lexicon for surrounding words). In particular, lexical contextual features provide a means to model the right context of the current word, made of words that have not yet been tagged by the system but for which the lexicon often provides a list of possible tags. 
Moreover, tagging accuracy for out-of-vocabulary (OOV) words is improved, as a result of the fact that words unknown to the training corpus might be known to the external lexicon.\nDespite a few experiments published with MElt on languages other than French BIBREF12 , BIBREF40 , BIBREF41 , the original feature set used by MElt (standard and lexical features) was designed and tested mostly on this language, by building and evaluating tagging models on a variant of the French TreeBank. Since our goal was to carry out experiments in a multilingual setting, we have decided to design our own set of features, using the standard MElt features as a starting point. With respect to the original MElt feature set, we have added new ones, such as prefixes and suffixes of the following word, as well as a hybrid contextual feature obtained by concatenating the tag predicted for the preceding word and the tag(s) provided by the external lexicon for the following word.\nIn order to select the best performing feature set, we carried out a series of experiments using the multilingual dataset provided during the SPMRL parsing shared task BIBREF42 . This included discarding useless or harmful features and selecting the maximal length of the prefixes and suffixes to be used as features, both for the current word and for the following word.\nWe incorporated in MElt the best performing feature set, described in Table TABREF1 . All models discussed in this paper are based on this feature set.\n\nCorpora\nWe carried out our experiments on the Universal Dependencies v1.2 treebanks BIBREF21 , hereafter UD1.2, from which morphosyntactically annotated corpora can be trivially extracted. All UD1.2 corpora use a common tag set, the 17 universal PoS tags, which is an extension of the tagset proposed by BIBREF43 .\nAs our goal is to study the impact of lexical information for PoS tagging, we have restricted our experiments to UD1.2 corpora that cover languages for which we have morphosyntactic lexicons at our disposal, and for which BIBREF20 provide results. We considered UD1.2 corpora for the following 16 languages: Bulgarian, Croatian, Czech, Danish, English, French, German, Indonesian, Italian, Norwegian, Persian, Polish, Portuguese, Slovenian, Spanish and Swedish. Although this language list contains only one non-Indo-European (Indonesian), four major Indo-European sub-families are represented (Germanic, Romance, Slavic, Indo-Iranian). Overall, the 16 languages considered in our experiments are typologically, morphologically and syntactically fairly diverse.\n\nLexicons\nWe generate our external lexicons using the set of source lexicons listed in Table TABREF3 . Since external lexical information is exploited via features, there is no need for the external lexicons and the annotated corpora to use the same PoS inventory. Therefore, for each language, we simply extracted from the corresponding lexicon the PoS of each word based on its morphological tags, by removing all information provided except for its coarsest-level category. We also added entries for punctuations when the source lexicons did not contain any.\nWe also performed experiments in which we retained the full original tags provided by the lexicons, with all morphological features included. On average, results were slightly better than those presented in the paper, although not statistically significantly. 
Moreover, the granularity of tag inventories in the lexicons is diverse, which makes it difficult to draw general conclusions about results based on full tags. This is why we only report results based on (coarse) PoS extracted from the original lexicons.\n\nBaseline models\nIn order to assess the respective contributions of external lexicons and word vector representations, we first compared the results of the three above-mentioned systems when trained without such additional lexical information. Table TABREF11 provides the results of MElt and MarMoT retrained on UD1.2 corpora, together with the results publised on the same corpora by BIBREF20 , using their best model not enhanced by external word vector representations —i.e. the model they call INLINEFORM0 , which is a bidirectional LSTM that combines both word and character embeddings.\nThese results show that Plank et al.'s (2016) bi-LSTM performs extremely well, surpassed by MarMoT on only 3 out of 16 datasets (Czech, French and Italian), and by MElt only once (Indonesian).\n\nModels enriched with external lexical information\nTable TABREF13 provides the results of four systems enriched with lexical information. The feature-based systems MElt and MarMoT, respectively based on MEMMs and CRFs, are extended with the lexical information provided by our morphosyntactic lexicons. This extension takes the form of additional features, as described in Section SECREF2 for MElt. The results reported by BIBREF20 for their bidirectional LSTM when initialised with Polyglot embeddings trained on full wikipedias are also included, together with their new system FREQBIN, also initialised with Polyglot embeddings. FREQBIN trains bi-LSTMs to predict for each input word both a PoS and a label that represents its log frequency in the training data. As they word it, “the idea behind this model is to make the representation predictive for frequency, which encourages the model not to share representations between common and rare words, thus benefiting the handling of rare tokens.”\nThe results, which are also displayed in Figures FIGREF14 and FIGREF15 , show that all systems reach very similar results on average, although discrepancies can be observed from one dataset to another, on which we shall comment shortly. The best performing system in terms of macro-average is MElt (96.60%). Both bi-LSTM systems reach the same score (96.58%), the difference with MElt's results being non significant, whereas MarMoT is only 0.14% behind (96.46%). Given the better baseline scores of the neural approaches, these results show that the benefit of using external lexicons in the feature-based models MElt and MarMoT are much higher than those using Polyglot word vector representations as initialisations for bi-LSTMs.\nYet these very similar overall results reflect a different picture when focusing on OOV tagging accuracy. The best models for OOV tagging accuracy are, by far, FREQBIN models, which are beaten by MarMoT and by MElt only once each (on English and Danish respectively). The comparison on OOV tagging between MElt and MarMoT shows that MElt performs better on average than MarMoT, despite the fact that MarMoT's baseline results were better than those reached by MElt. This shows that the information provided by external morphosyntactic lexicons is better exploited by MElt's lexical features than by those used by MarMoT. 
On the other hand, the comparison of both bi-LSTM-based approaches confirm that the FREQBIN models is better by over 10% absolute on OOV tagging accuracy (94.28% vs. 83.59%), with 65% lower error rate.\nOne of the important differences between the lexical information provided by an external lexicon and word vectors built from raw corpora, apart from the very nature of the lexical information provided, is the coverage and accuracy of this lexical information on rare words. All words in a morphosyntactic lexicon are associated with information of a same granularity and quality, which is not the case with word representations such as provided by Polyglot. Models that take advantage of external lexicons should therefore perform comparatively better on datasets containing a higher proportion of rarer words, provided the lexicons' coverage is high. In order to confirm this intuition, we have used a lexical richness metric based on the type/token ratio. Since this ratio is well-known for being sensitive to corpus length, we normalised it by computing it over the 60,000 first tokens of each training set. When this normalised type/token ratio is plotted against the difference between the results of MElt and both bi-LSTM-based models, the expected correlation is clearly visible (see Figure FIGREF16 ). This explains why MElt obtains better results on the morphologically richer Slavic datasets (average normalised type/token ratio: 0.28, average accuracy difference: 0.32 compared to both bi-LSTM+Polyglot and FREQBIN+Polyglot) and, at the other end of the spectrum, significantly worse results on the English dataset (normalised type/token ratio: 0.15, average accuracy difference: -0.56 compared to bi-LSTM+Polyglot, -0.57 compared to FREQBIN+Polyglot).\n\nTable 1: Feature set used by our MElt models. The current word is wi = w1i . . . w ni i . Previously assigned tags for the two previous words are ti−2 and ti−1. The tag to be predicted for the current word is ti, which can be assigned any tag T in the tagset. The lex function applied to a word returns the set of all tags known to the lexicon for this word, or the singleton {_unk_} if the word is unknown to the lexicon. Boolean functions used by the local standard features have self-explanatory names.\n\nTable 2: Information about the morphosyntactic lexicons used as external sources of lexical information in our MElt and MarMoT models. The number of entries and tagset sizes refers to the morphosyntactic lexicons we extracted and used in our models, not to the original resources.\n\nTable 3: Overall accuracy (in %) of baseline systems, i.e. MElt and MarMoT models trained without external lexicons, and Plank et al.’s (2016) ~c + ~w models, which do not make use of Polyglot embeddings. Best scores are highlighted for each corpus.\n\nFigure 1: Graphical visualisation of the overall tagging accuracies for all four types of enriched models. Detailed results are given in Table 4. Languages are sorted by increasing MElt’s overall tagging scores.\n\nFigure 3: Difference between the tagging accuracy of lexicon-enhanced MElt models and each of the two types of Polyglot-enhanced neural bi-LSTM models plotted against training sets’ normalised token/type ratio.\n\nQuery: which datasets did they experiment with?\n\nJSON Response:",
"role": "user"
},
{
"content": "{\"answer\": \"Universal Dependencies v1.2 treebanks for the following 16 languages: Bulgarian, Croatian, Czech, Danish, English, French, German,\\nIndonesian, Italian, Norwegian, Persian, Polish, Portuguese, Slovenian, Spanish, and Swedish\", \"evidence\": [\"We carried out our experiments on the Universal Dependencies v1.2 treebanks BIBREF21 , hereafter UD1.2, from which morphosyntactically annotated corpora can be trivially extracted. All UD1.2 corpora use a common tag set, the 17 universal PoS tags, which is an extension of the tagset proposed by BIBREF43 .\", \"As our goal is to study the impact of lexical information for PoS tagging, we have restricted our experiments to UD1.2 corpora that cover languages for which we have morphosyntactic lexicons at our disposal, and for which BIBREF20 provide results. We considered UD1.2 corpora for the following 16 languages: Bulgarian, Croatian, Czech, Danish, English, French, German, Indonesian, Italian, Norwegian, Persian, Polish, Portuguese, Slovenian, Spanish and Swedish. Although this language list contains only one non-Indo-European (Indonesian), four major Indo-European sub-families are represented (Germanic, Romance, Slavic, Indo-Iranian). Overall, the 16 languages considered in our experiments are typologically, morphologically and syntactically fairly diverse.\"]}",
"role": "assistant"
}
]
|
science.biored_ner | Analyze the provided biomedical abstract and identify distinct entities. Focus on the specified categories: ["CellLine", "Variant", "Species", "Disease", "Chemical", "Gene"].
Ensure the output is strictly the JSON object.
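Purely as an illustration of the required output shape (this is not part of the task prompt; the source abstract and gold annotation follow below): given (mention, category) pairs from any extraction step, the strictly-JSON answer can be assembled with one list of mention strings per category, left empty when no mention of that category occurs.

import json

CATEGORIES = ["CellLine", "Variant", "Species", "Disease", "Chemical", "Gene"]

def to_answer(predictions):
    # Sort the categories alphabetically, as in the gold outputs of this dataset.
    answer = {c: [] for c in sorted(CATEGORIES)}
    for mention, category in predictions:
        if mention not in answer[category]:
            answer[category].append(mention)
    return json.dumps(answer)

print(to_answer([("Fluoxetine", "Chemical"), ("rats", "Species"), ("Cancer", "Disease")]))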
Source Text:
Fluoxetine improves the memory deficits caused by the chemotherapy agent 5-fluorouracil. Cancer patients who have been treated with systemic adjuvant chemotherapy have described experiencing deteriorations in cognition. A widely used chemotherapeutic agent, 5-fluorouracil (5-FU), readily crosses the blood-brain barrier and so could have a direct effect on brain function. In particular this anti mitotic drug could reduce cell proliferation in the neurogenic regions of the adult brain. In contrast reports indicate that hippocampal dependent neurogenesis and cognition are enhanced by the SSRI antidepressant Fluoxetine. In this investigation the behavioural effects of chronic (two week) treatment with 5-FU and (three weeks) with Fluoxetine either separately or in combination with 5-FU were tested on adult Lister hooded rats. Behavioural effects were tested using a context dependent conditioned emotional response test (CER) which showed that animals treated with 5-FU had a significant reduction in freezing time compared to controls. A separate group of animals was tested using a hippocampal dependent spatial working memory test, the object location recognition test (OLR). Animals treated only with 5-FU showed significant deficits in their ability to carry out the OLR task but co administration of Fluoxetine improved their performance. 5-FU chemotherapy caused a significant reduction in the number of proliferating cells in the sub granular zone of the dentate gyrus compared to controls. This reduction was eliminated when Fluoxetine was co administered with 5-FU. Fluoxetine on its own had no effect on proliferating cell number or behaviour. These findings suggest that 5-FU can negatively affect both cell proliferation and hippocampal dependent working memory and that these deficits can be reversed by the simultaneous administration of the antidepressant Fluoxetine. | {"CellLine": [], "Chemical": ["SSRI", "Fluoxetine", "5-FU", "5-fluorouracil"], "Disease": ["Cancer", "memory deficits"], "Gene": [], "Species": ["rats", "patients"], "Variant": []} | science.biored_ner.93 | [
{
"content": "Analyze the provided biomedical abstract and identify distinct entities. Focus on the specified categories: [\"CellLine\", \"Variant\", \"Species\", \"Disease\", \"Chemical\", \"Gene\"].\n\nEnsure the output is strictly the JSON object.\n\nSource Text:\n\nFluoxetine improves the memory deficits caused by the chemotherapy agent 5-fluorouracil. Cancer patients who have been treated with systemic adjuvant chemotherapy have described experiencing deteriorations in cognition. A widely used chemotherapeutic agent, 5-fluorouracil (5-FU), readily crosses the blood-brain barrier and so could have a direct effect on brain function. In particular this anti mitotic drug could reduce cell proliferation in the neurogenic regions of the adult brain. In contrast reports indicate that hippocampal dependent neurogenesis and cognition are enhanced by the SSRI antidepressant Fluoxetine. In this investigation the behavioural effects of chronic (two week) treatment with 5-FU and (three weeks) with Fluoxetine either separately or in combination with 5-FU were tested on adult Lister hooded rats. Behavioural effects were tested using a context dependent conditioned emotional response test (CER) which showed that animals treated with 5-FU had a significant reduction in freezing time compared to controls. A separate group of animals was tested using a hippocampal dependent spatial working memory test, the object location recognition test (OLR). Animals treated only with 5-FU showed significant deficits in their ability to carry out the OLR task but co administration of Fluoxetine improved their performance. 5-FU chemotherapy caused a significant reduction in the number of proliferating cells in the sub granular zone of the dentate gyrus compared to controls. This reduction was eliminated when Fluoxetine was co administered with 5-FU. Fluoxetine on its own had no effect on proliferating cell number or behaviour. These findings suggest that 5-FU can negatively affect both cell proliferation and hippocampal dependent working memory and that these deficits can be reversed by the simultaneous administration of the antidepressant Fluoxetine.",
"role": "user"
},
{
"content": "{\"CellLine\": [], \"Chemical\": [\"SSRI\", \"Fluoxetine\", \"5-FU\", \"5-fluorouracil\"], \"Disease\": [\"Cancer\", \"memory deficits\"], \"Gene\": [], \"Species\": [\"rats\", \"patients\"], \"Variant\": []}",
"role": "assistant"
}
]
|
science.qasper_abstractive_qa | Examine the provided sections from a research document and address the accompanying query. The text is divided by newlines.
Document Content:
Attention-based method for categorizing different types of online harassment language
In the era of social media and networking platforms, Twitter has been plagued by abuse and harassment toward users, especially women. Monitoring content that includes sexism and sexual harassment is easier in traditional media than on online social media platforms like Twitter, because of the large amount of user-generated content in these media. Research on the automated detection of content containing sexual or racist harassment is therefore an important issue and could be the basis for removing that content or flagging it for human evaluation. Previous studies have focused on collecting data about sexism and racism in very broad terms. However, there is not much work that focuses on the different types of online harassment using natural language processing techniques. In this work, we present an attention-based approach for the detection of harassment in tweets and for the detection of different types of harassment as well. Our approach is based on Recurrent Neural Networks and, in particular, uses a deep, classification-specific attention mechanism. Moreover, we present a comparison between different variations of this attention-based approach.
Introduction
In the era of social media and networking platforms, Twitter has been plagued by abuse and harassment toward users, especially women. In fact, online harassment has become very common on Twitter, and there has been a lot of criticism that Twitter has become a platform for many racists, misogynists and hate groups who can express themselves openly. Online harassment usually takes verbal or graphical form and is considered harassment because it is neither invited nor has the consent of the recipient. Monitoring content that includes sexism and sexual harassment is easier in traditional media than on online social media platforms like Twitter. The main reason is the large amount of user-generated content in these media. Research on the automated detection of content containing sexual harassment is therefore an important issue and could be the basis for removing that content or flagging it for human evaluation. The basic goal of this automatic classification is to significantly improve the process of detecting these types of hate speech on social media by reducing the time and effort required by human beings.
Previous studies have focused on collecting data about sexism and racism in very broad terms or have proposed two categories of sexism, benevolent and hostile sexism BIBREF0, which undermines other types of online harassment. However, there is not much work focusing on the different types of online harassment alone using natural language processing techniques.
In this paper we present our work, which is a part of the SociaL Media And Harassment Competition of the ECML PKDD 2019 Conference. The topic of the competition is the classification of different types of harassment, and it is divided into two tasks. The first one is the classification of tweets into harassment and non-harassment categories, while the second one is the classification into specific harassment categories such as indirect harassment, physical harassment and sexual harassment. We are using the dataset of the competition, which includes tweet texts labelled with the aforementioned categories. Our approach is based on Recurrent Neural Networks and, in particular, uses a deep, classification-specific attention mechanism. Moreover, we present a comparison between different variations of this attention-based approach, such as multi-attention and single-attention models. The next Section includes a short description of the related work, while the third Section includes a description of the dataset. After that, we describe our methodology. Finally, we describe the experiments and present the results and our conclusion.
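To make the idea concrete, a generic sketch of attention pooling over bidirectional recurrent states for tweet classification is given below. It is not the deep, classification-specific mechanism used in this work; the GRU choice, the layer sizes and the single-attention layout are illustrative assumptions, and PyTorch is used here only for brevity.

import torch
import torch.nn as nn

class AttentionRNNClassifier(nn.Module):
    def __init__(self, vocab_size=10_000, emb_dim=100, hidden=64, n_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.GRU(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.att_score = nn.Linear(2 * hidden, 1)      # one scalar attention score per time step
        self.out = nn.Linear(2 * hidden, n_classes)

    def forward(self, token_ids):                      # (batch, seq_len)
        states, _ = self.rnn(self.embed(token_ids))    # (batch, seq_len, 2*hidden)
        weights = torch.softmax(self.att_score(states).squeeze(-1), dim=1)
        pooled = (weights.unsqueeze(-1) * states).sum(dim=1)   # attention-weighted sum of states
        return self.out(pooled)                        # logits: harassment vs. non-harassment

model = AttentionRNNClassifier()
logits = model(torch.randint(0, 10_000, (8, 30)))      # 8 dummy tweets of 30 token ids
print(logits.shape)                                    # torch.Size([8, 2])

One way to build a multi-attention variant of this sketch is to keep several independent scoring layers and concatenate their pooled vectors before the output layer; the comparison between such multi-attention and single-attention set-ups is what the following sections report on.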
Related Work
Waseem et al. BIBREF1 were the first to collect hateful tweets and categorize them as sexist, racist or neither. However, they did not provide specific definitions for each category. Jha and Mamidi BIBREF0 focused only on sexist tweets and proposed the two categories of hostile and benevolent sexism. However, these categories were general, as they ignored other types of sexism happening in social media. Sharifirad S. and Matwin S. BIBREF2 proposed complementary categories of sexist language inspired by social science work. They categorized sexist tweets into indirect harassment, information threat, sexual harassment and physical harassment. In the following year, the same authors proposed BIBREF3 a more comprehensive categorization of online harassment in social media (e.g., Twitter) into the following categories: indirect harassment, information threat, sexual harassment, physical harassment and not sexist.
Many approaches have been proposed for the detection of hate speech in social media such as Twitter. Jha and Mamidi BIBREF0 tested a support vector machine, a bi-directional RNN encoder-decoder and FastText on hostile and benevolent sexist tweets. They also used SentiWordNet and a subjectivity lexicon on the extracted phrases to show the polarity of the tweets. Sharifirad et al. BIBREF4 trained, tested and evaluated different classification methods on the SemEval2018 dataset and chose the classifier with the highest accuracy for testing on each category of sexist tweets, in order to infer the mental and affectual state of the user who tweets in each category. To overcome the limitations of small datasets for sexist speech detection, Sharifirad S. et al. BIBREF5 applied text augmentation and text generation with some success. They generated new tweets by replacing words in order to increase the size of their training set. Moreover, in the presented text augmentation approach, the number of tweets in each class remains the same, but their words are augmented with words extracted from their ConceptNet relations and their descriptions extracted from Wikidata. Zhang et al. BIBREF6 combined convolutional and gated recurrent networks to detect hate speech in tweets. Others have proposed methods that are not based on deep learning. Burnap and Williams BIBREF7 used Support Vector Machines, Random Forests and a meta-classifier to distinguish between hateful and non-hateful messages. A survey of recent research in the field is presented in BIBREF8. For the problem of hate speech detection, a few approaches based on the attention mechanism have been proposed. Pavlopoulos et al. BIBREF9 proposed a novel, classification-specific attention mechanism that further improves the performance of the RNN for the detection of abusive content on the web. Xie et al. BIBREF10, for emotion intensity prediction, which is a problem similar to ours, proposed a novel attention mechanism for a CNN model that associates attention-based weights with every convolution window. Park and Fung BIBREF11 transformed the classification into a two-step problem, where abusive text is first distinguished from non-abusive text, and then the class of abuse (sexism or racism) is determined. However, while the first part of the two-step classification performs quite well, it falls short in detecting the particular class the abusive text belongs to. Pitsilis et al. BIBREF12 proposed a detection scheme that is an ensemble of RNN classifiers, which incorporates various features associated with user-related information, such as the users' tendency towards racism or sexism.
Proposed methodology ::: Data augmentation
As described before, one crucial issue that we try to tackle in this work is that the given dataset is imbalanced. In particular, there are only a few instances of the indirect and physical harassment categories in the train set, while there are many more in the validation and test sets for these categories. To tackle this issue we apply a back-translation method BIBREF13, where we translate the indirect and physical harassment tweets of the train set from English to German, French and Greek. After that, we translate them back to English in order to achieve data augmentation. These "noisy" back-translated data increase the number of indirect and physical harassment tweets and significantly boost the performance of our models.
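To make the augmentation step concrete, the sketch below shows one way to realize the English to German and back to English round trip with the MarianMT models from the `transformers` library. The paper does not state which translation system was used, so the specific model names and helper functions here are illustrative assumptions; the same pattern would be repeated for French and Greek.

```python
# Hedged sketch of back-translation augmentation (en -> de -> en); the concrete
# MT models are an assumption, since the paper does not name its translation tool.
from transformers import MarianMTModel, MarianTokenizer

def build_translator(model_name: str):
    tokenizer = MarianTokenizer.from_pretrained(model_name)
    model = MarianMTModel.from_pretrained(model_name)
    def translate(texts):
        batch = tokenizer(texts, return_tensors="pt", padding=True, truncation=True)
        return [tokenizer.decode(g, skip_special_tokens=True)
                for g in model.generate(**batch)]
    return translate

en_to_de = build_translator("Helsinki-NLP/opus-mt-en-de")
de_to_en = build_translator("Helsinki-NLP/opus-mt-de-en")

def back_translate(tweets):
    """Noisy paraphrases via English -> German -> English; analogous model pairs
    for French and Greek would produce two more paraphrases per tweet."""
    return de_to_en(en_to_de(tweets))

# Each paraphrase keeps the label of the tweet it came from, e.g.:
# train_set += [(new_text, label) for new_text, (old_text, label)
#               in zip(back_translate(minority_texts), minority_examples)]
```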
Another way to enrich our models is the use of pre-trained word embeddings learned from 2B tweets (27B tokens) BIBREF14 to initialize the embedding layer.
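The cited embeddings correspond to the publicly released GloVe vectors trained on Twitter (the 27B-token, 200-dimensional variant). A minimal way to turn them into an initialization matrix for the embedding layer is sketched below; the vocabulary mapping `vocab` (token to row index) is assumed to come from the competition data preprocessing.

```python
import numpy as np

def load_glove_matrix(vocab, path="glove.twitter.27B.200d.txt", dim=200):
    """Build an embedding matrix aligned with `vocab`; tokens without a GloVe vector
    keep a small random initialization and stay trainable with the rest of the layer."""
    matrix = np.random.normal(scale=0.1, size=(len(vocab), dim)).astype("float32")
    with open(path, encoding="utf-8") as f:
        for line in f:
            token, *values = line.rstrip().split(" ")
            if token in vocab and len(values) == dim:
                matrix[vocab[token]] = np.asarray(values, dtype="float32")
    return matrix
```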
Proposed methodology ::: RNN Model and Attention Mechanism
We present an attention-based approach to the problem of harassment detection in tweets. In this section, we describe the basic approach of our work. We use RNN models because of their ability to deal with sequence information. The RNN model is a chain of GRU cells BIBREF15 that transforms the tokens $w_{1}, w_{2},..., w_{k}$ of each tweet to the hidden states $h_{1}, h_{2},..., h_{k}$, followed by an LR Layer that uses $h_{k}$ to classify the tweet as harassment or non-harassment (and similarly for the other categories). Given the vocabulary V, a matrix E $\in $ $R^{d \times \vert V \vert }$ containing d-dimensional word embeddings, an initial $h_{0}$ and a tweet $w = <w_{1},.., w_{k}>$, the RNN computes $h_{1}, h_{2},..., h_{k}$, with $h_{t} \in R^{m}$, as follows:
where $h^{^{\prime }}_{t} \in R^{m}$ is the proposed hidden state at position t, obtained using the word embedding $x_{t}$ of token $w_{t}$ and the previous hidden state $h_{t-1}$, $\odot $ represents the element-wise multiplication, $r_{t} \in R^{m}$ is the reset gate, $z_{t} \in R^{m}$ is the update gate, $\sigma $ is the sigmoid function. Also $W_{h}, W_{z}, W_{r} \in R^{m \times d}$ and $U_{h}, U_{z}, U_{r} \in R^{m \times m}$, $b_{h}, b_{z}, b_{r} \in R^{m}$. After the computation of state $h_{k}$ the LR Layer estimates the probability that tweet w should be considered as harassment, with $W_{p} \in R^{1 \times m}, b_{p} \in R$:
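The displayed equations that the preceding "as follows:" and the colon above refer to appear to have been lost in extraction. The sketch below reconstructs the computation with the standard GRU formulation of BIBREF15, using exactly the symbols defined in the text (reset gate $r_t$, update gate $z_t$, proposed state $h^{\prime}_t$), followed by the LR layer $\sigma (W_{p}h_{k} + b_{p})$. The exact interpolation convention between $h_{t-1}$ and $h^{\prime}_t$ is the usual one and may differ cosmetically from the paper's rendering.

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def gru_step(x_t, h_prev, p):
    """One GRU cell update (standard formulation of the cited GRU paper)."""
    z = sigmoid(p["W_z"] @ x_t + p["U_z"] @ h_prev + p["b_z"])              # update gate z_t
    r = sigmoid(p["W_r"] @ x_t + p["U_r"] @ h_prev + p["b_r"])              # reset gate r_t
    h_cand = np.tanh(p["W_h"] @ x_t + p["U_h"] @ (r * h_prev) + p["b_h"])   # proposed state h'_t
    return (1.0 - z) * h_prev + z * h_cand                                  # new hidden state h_t

def harassment_probability(embedded_tokens, p):
    """Run the chain of GRU cells over x_1..x_k and apply the LR layer on h_k."""
    h = np.zeros_like(p["b_h"])                      # h_0
    for x_t in embedded_tokens:                      # each x_t is a d-dimensional embedding
        h = gru_step(x_t, h, p)
    return sigmoid(p["W_p"] @ h + p["b_p"])          # P(harassment | tweet)
```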
We would like to add an attention mechanism similar to the one presented in BIBREF9, so that the LR Layer will consider the weighted sum $h_{sum}$ of all the hidden states instead of $h_{k}$:
$h_{sum} = \sum _{t=1}^{k} \alpha _{t}h_{t}$
$P_{attentionRNN} = \sigma (W_{p}h_{sum} + b_{p})$
Alternatively, we could pass $h_{sum}$ through an MLP with k layers and then the LR layer will estimate the corresponding probability. More formally,
$P_{attentionRNN} = \sigma (W_{p}h_{*} + b_{p})$
where $h_{*}$ is the state that comes out of the MLP. The weights $\alpha _{t}$ are produced by the attention mechanism presented in BIBREF9 (see Fig. FIGREF7), which is an MLP with l layers. This attention mechanism differs from most previous ones BIBREF16, BIBREF17, because it is used in a classification setting, where there is no previously generated output sub-sequence to drive the attention. It assigns larger weights $\alpha _{t}$ to hidden states $h_{t}$ corresponding to positions where there is more evidence that the tweet should be classified as harassment (or as any other specific type of harassment) or not. In our work we use four attention mechanisms instead of the single one presented in BIBREF9; in particular, we use one attention mechanism per category. Another element that differentiates our approach from Pavlopoulos et al. BIBREF9 is that we use a projection layer for the word embeddings (see Fig. FIGREF2). In the next subsection we describe the model architecture of our approach.
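A minimal PyTorch sketch of this classification-specific attention follows: an l-layer MLP scores every hidden state, a softmax over positions produces the weights $\alpha_t$, and the weighted sum $h_{sum}$ is returned. The layer sizes and the tanh non-linearity inside the scorer are illustrative choices rather than values taken from the paper, and the multi-attention variant simply instantiates one such module per category.

```python
import torch
import torch.nn as nn

class DeepSelfAttention(nn.Module):
    """Classification-specific attention in the spirit of BIBREF9."""
    def __init__(self, hidden_size: int, attn_layers: int = 2):
        super().__init__()
        blocks = []
        for _ in range(attn_layers - 1):
            blocks += [nn.Linear(hidden_size, hidden_size), nn.Tanh()]
        blocks.append(nn.Linear(hidden_size, 1))      # one relevance score per position
        self.scorer = nn.Sequential(*blocks)

    def forward(self, states: torch.Tensor) -> torch.Tensor:
        # states: (batch, seq_len, hidden_size) GRU outputs h_1..h_k
        alphas = torch.softmax(self.scorer(states).squeeze(-1), dim=1)   # attention weights a_t
        return (alphas.unsqueeze(-1) * states).sum(dim=1)                # weighted sum h_sum
```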
Proposed methodology ::: Model Architecture
The Embedding Layer is initialized using the pre-trained word embeddings of dimension 200 from Twitter data described in a previous sub-section. After the Embedding Layer, we apply a Spatial Dropout Layer, which drops a certain percentage of dimensions from each word vector in the training sample. The role of Dropout is to improve generalization performance by preventing activations from becoming strongly correlated BIBREF18. Spatial Dropout, proposed in BIBREF19, is an alternative way to use dropout with convolutional neural networks, as it is able to drop out entire feature maps from the convolutional layer, which are then not used during pooling. After that, the word embeddings pass through a one-layer MLP with tanh as activation function and 128 hidden units, in order to project them into the vector space of our problem, considering that they have been pre-trained on text with a different subject. In the next step the embeddings are fed into a unidirectional GRU with one stacked layer and size 128. We prefer the GRU over the LSTM because it is computationally more efficient. Also, the basic advantage of the LSTM, its ability to keep long documents in memory, does not hold here, because tweets are not supposed to be very long texts. The output states of the GRU pass through four self-attention mechanisms like the one described above BIBREF9, because we use one attention per category (see Fig. FIGREF7). Finally, a one-layer MLP with 128 nodes and ReLU as activation function computes the final score for each category. At this final stage we avoid using a softmax function to decide the harassment type given that the tweet is harassment, because otherwise we would have to train our models on the harassment tweets only, and this might have been a problem as the dataset is not large enough.
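Putting the pieces together, the sketch below mirrors the described pipeline: GloVe-initialized embeddings, spatial dropout, a 128-unit tanh projection, a unidirectional GRU of size 128, one attention per category, and a 128-unit ReLU head per category. It reuses the `DeepSelfAttention` sketch shown earlier; the dropout rate and the final sigmoid outputs are assumptions, since the text does not specify them.

```python
import torch
import torch.nn as nn

class MultiAttentionRNN(nn.Module):
    def __init__(self, embedding_matrix, hidden=128, drop=0.3, num_categories=4):
        super().__init__()
        self.embed = nn.Embedding.from_pretrained(
            torch.as_tensor(embedding_matrix), freeze=False)
        self.spatial_drop = nn.Dropout2d(drop)                    # drops whole embedding channels
        self.project = nn.Sequential(
            nn.Linear(embedding_matrix.shape[1], hidden), nn.Tanh())
        self.gru = nn.GRU(hidden, hidden, batch_first=True)
        self.attentions = nn.ModuleList(
            [DeepSelfAttention(hidden) for _ in range(num_categories)])
        self.heads = nn.ModuleList(
            [nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(), nn.Linear(hidden, 1))
             for _ in range(num_categories)])

    def forward(self, token_ids):
        x = self.embed(token_ids)                                  # (B, T, 200)
        x = self.spatial_drop(x.permute(0, 2, 1).unsqueeze(-1))    # emulate spatial dropout
        x = x.squeeze(-1).permute(0, 2, 1)
        x = self.project(x)                                        # (B, T, 128)
        states, _ = self.gru(x)
        scores = [head(attn(states))                               # one score per category
                  for attn, head in zip(self.attentions, self.heads)]
        return torch.sigmoid(torch.cat(scores, dim=-1))            # four per-category probabilities
```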
Experiments ::: Evaluation and Results
Each model produces four scores, which are the probabilities that a tweet includes harassment language, indirect, physical and sexual harassment language, respectively. For any tweet, we first check the score of the harassment language: if it is less than a specified threshold, the harassment label is zero, so the other three labels are zero as well. If it is greater than or equal to the threshold, the harassment label is one and the type of harassment is the one among the three that has the greatest score (highest probability). We set this threshold equal to 0.33.
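The decision rule just described can be written directly as below; the score order (harassment, indirect, physical, sexual) follows the text.

```python
import numpy as np

def decide_labels(scores, threshold=0.33):
    """scores = [harassment, indirect, physical, sexual] probabilities for one tweet."""
    labels = np.zeros(4, dtype=int)
    if scores[0] >= threshold:
        labels[0] = 1                                   # harassment
        labels[1 + int(np.argmax(scores[1:]))] = 1      # most probable harassment type
    return labels                                       # all zeros when below the threshold
```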
We compare eight different models in our experiments. Four of them have a Projected Layer (see Fig. FIGREF2), while the others do not, and this is the only difference between the two groups of models. So, we effectively compare four base models, each with and without a projected layer. Firstly, LastStateRNN is the classic RNN model, where the last state passes through an MLP and then the LR Layer estimates the corresponding probability. In contrast, in the AvgRNN model we consider the average vector of all states that come out of the cells. The AttentionRNN model is the one presented in BIBREF9. Moreover, we introduce the MultiAttentionRNN model for harassment language detection, which, instead of one attention, includes four attentions, one for each category.
We evaluate our models using the F1 score, which is the harmonic mean of precision and recall. We ran the experiment ten times for each model and report the average F1 score. The results are given in Table TABREF11. In terms of macro F1, the models that include the multi-attention mechanism outperform the others, and in particular the one with the Projected Layer achieves the highest performance. In three out of four pairs of models, the one with the Projected Layer achieved better performance, so in most cases the addition of the Projected Layer brought a clear improvement.
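For reference, the reported metric can be reproduced with a macro-averaged F1 computed per run and then averaged over the repetitions; the use of scikit-learn here is an assumption, as the paper does not name its evaluation tooling.

```python
import numpy as np
from sklearn.metrics import f1_score

def average_macro_f1(per_run_predictions, y_true):
    """Average macro F1 over repeated runs; predictions and y_true are
    binary indicator arrays of shape (num_tweets, 4)."""
    return float(np.mean([f1_score(y_true, y_pred, average="macro")
                          for y_pred in per_run_predictions]))
```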
Table 1. Class distribution of the dataset.
Fig. 1. Projection Layer
Fig. 2. Attention mechanism, MLP with l Layers
Table 2. The results considering F1 Score.
Query: What dataset is used for this work?
JSON Response: | {"answer": "Twitter dataset provided by the organizers", "evidence": ["In this paper we present our work, which is a part of the SociaL Media And Harassment Competition of the ECML PKDD 2019 Conference. The topic of the competition is the classification of different types of harassment and it is divided in two tasks. The first one is the classification of the tweets in harassment and non-harassment categories, while the second one is the classification in specific harassment categories like indirect harassment, physical and sexual harassment as well. We are using the dataset of the competition, which includes text from tweets having the aforementioned categories. Our approach is based on the Recurrent Neural Networks and particularly we are using a deep, classi\ufb01cation specific attention mechanism. Moreover, we present a comparison between different variations of this attention-based approach like multi-attention and single attention models. The next Section includes a short description of the related work, while the third Section includes a description of the dataset. After that, we describe our methodology. Finally, we describe the experiments and we present the results and our conclusion."]} | science.qasper_abstractive_qa.113.2 | [
{
"content": "Examine the provided sections from a research document and address the accompanying query. The text is divided by newlines.\n\nDocument Content:\n\nAttention-based method for categorizing different types of online harassment language\n\nIn the era of social media and networking platforms, Twitter has been doomed for abuse and harassment toward users specifically women. Monitoring the contents including sexism and sexual harassment in traditional media is easier than monitoring on the online social media platforms like Twitter, because of the large amount of user generated content in these media. So, the research about the automated detection of content containing sexual or racist harassment is an important issue and could be the basis for removing that content or flagging it for human evaluation. Previous studies have been focused on collecting data about sexism and racism in very broad terms. However, there is not much study focusing on different types of online harassment alone attracting natural language processing techniques. In this work, we present an attention-based approach for the detection of harassment in tweets and the detection of different types of harassment as well. Our approach is based on the Recurrent Neural Networks and particularly we are using a deep, classification specific attention mechanism. Moreover, we present a comparison between different variations of this attention-based approach.\n\nIntroduction\nIn the era of social media and networking platforms, Twitter has been doomed for abuse and harassment toward users specifically women. In fact, online harassment becomes very common in Twitter and there have been a lot of critics that Twitter has become the platform for many racists, misogynists and hate groups which can express themselves openly. Online harassment is usually in the form of verbal or graphical formats and is considered harassment, because it is neither invited nor has the consent of the receipt. Monitoring the contents including sexism and sexual harassment in traditional media is easier than monitoring on the online social media platforms like Twitter. The main reason is because of the large amount of user generated content in these media. So, the research about the automated detection of content containing sexual harassment is an important issue and could be the basis for removing that content or flagging it for human evaluation. The basic goal of this automatic classification is that it will significantly improve the process of detecting these types of hate speech on social media by reducing the time and effort required by human beings.\nPrevious studies have been focused on collecting data about sexism and racism in very broad terms or have proposed two categories of sexism as benevolent or hostile sexism BIBREF0, which undermines other types of online harassment. However, there is no much study focusing on different types online harassment alone attracting natural language processing techniques.\nIn this paper we present our work, which is a part of the SociaL Media And Harassment Competition of the ECML PKDD 2019 Conference. The topic of the competition is the classification of different types of harassment and it is divided in two tasks. The first one is the classification of the tweets in harassment and non-harassment categories, while the second one is the classification in specific harassment categories like indirect harassment, physical and sexual harassment as well. 
We are using the dataset of the competition, which includes text from tweets having the aforementioned categories. Our approach is based on the Recurrent Neural Networks and particularly we are using a deep, classification specific attention mechanism. Moreover, we present a comparison between different variations of this attention-based approach like multi-attention and single attention models. The next Section includes a short description of the related work, while the third Section includes a description of the dataset. After that, we describe our methodology. Finally, we describe the experiments and we present the results and our conclusion.\n\nRelated Work\nWaseem et al. BIBREF1 were the first who collected hateful tweets and categorized them into being sexist, racist or neither. However, they did not provide specific definitions for each category. Jha and Mamidi BIBREF0 focused on just sexist tweets and proposed two categories of hostile and benevolent sexism. However, these categories were general as they ignored other types of sexism happening in social media. Sharifirad S. and Matwin S. BIBREF2 proposed complimentary categories of sexist language inspired from social science work. They categorized the sexist tweets into the categories of indirect harassment, information threat, sexual harassment and physical harassment. In the next year the same authors proposed BIBREF3 a more comprehensive categorization of online harassment in social media e.g. twitter into the following categories, indirect harassment, information threat, sexual harassment, physical harassment and not sexist.\nFor the detection of hate speech in social media like twitter, many approaches have been proposed. Jha and Mamidi BIBREF0 tested support vector machine, bi-directional RNN encoder-decoder and FastText on hostile and benevolent sexist tweets. They also used SentiWordNet and subjectivity lexicon on the extracted phrases to show the polarity of the tweets. Sharifirad et al. BIBREF4 trained, tested and evaluated different classification methods on the SemEval2018 dataset and chose the classifier with the highest accuracy for testing on each category of sexist tweets to know the mental state and the affectual state of the user who tweets in each category. To overcome the limitations of small data sets on sexist speech detection, Sharifirad S. et al. BIBREF5 have applied text augmentation and text generation with certain success. They have generated new tweets by replacing words in order to increase the size of our training set. Moreover, in the presented text augmentation approach, the number of tweets in each class remains the same, but their words are augmented with words extracted from their ConceptNet relations and their description extracted from Wikidata. Zhang et al. BIBREF6 combined convolutional and gated recurrent networks to detect hate speech in tweets. Others have proposed different methods, which are not based on deep learning. Burnap and Williams BIBREF7 used Support Vector Machines, Random Forests and a meta-classifier to distinguish between hateful and non-hateful messages. A survey of recent research in the field is presented in BIBREF8. For the problem of the hate speech detection a few approaches have been proposed that are based on the Attention mechanism. Pavlopoulos et al. BIBREF9 have proposed a novel, classification-specific attention mechanism that improves the performance of the RNN further for the detection of abusive content in the web. Xie et al. 
BIBREF10 for emotion intensity prediction, which is a similar problem to ours, have proposed a novel attention mechanism for CNN model that associates attention-based weights for every convolution window. Park and Fung BIBREF11 transformed the classification into a 2-step problem, where abusive text first is distinguished from the non-abusive, and then the class of abuse (Sexism or Racism) is determined. However, while the first part of the two step classification performs quite well, it falls short in detecting the particular class the abusive text belongs to. Pitsilis et al. BIBREF12 have proposed a detection scheme that is an ensemble of RNN classifiers, which incorporates various features associated with user related information, such as the users’ tendency towards racism or sexism\n\nProposed methodology ::: Data augmentation\nAs described before one crucial issue that we are trying to tackle in this work is that the given dataset is imbalanced. Particularly, there are only a few instances from indirect and physical harassment categories respectively in the train set, while there are much more in the validation and test sets for these categories. To tackle this issue we applying a back-translation method BIBREF13, where we translate indirect and physical harassment tweets of the train set from english to german, french and greek. After that, we translate them back to english in order to achieve data augmentation. These \"noisy\" data that have been translated back, increase the number of indirect and physical harassment tweets and boost significantly the performance of our models.\nAnother way to enrich our models is the use of pre-trained word embeddings from 2B Twitter data BIBREF14 having 27B tokens, for the initialization of the embedding layer.\n\nProposed methodology ::: RNN Model and Attention Mechanism\nWe are presenting an attention-based approach for the problem of the harassment detection in tweets. In this section, we describe the basic approach of our work. We are using RNN models because of their ability to deal with sequence information. The RNN model is a chain of GRU cells BIBREF15 that transforms the tokens $w_{1}, w_{2},..., w_{k}$ of each tweet to the hidden states $h_{1}, h_{2},..., h_{k}$, followed by an LR Layer that uses $h_{k}$ to classify the tweet as harassment or non-harassment (similarly for the other categories). Given the vocabulary V and a matrix E $\\in $ $R^{d \\times \\vert V \\vert }$ containing d-dimensional word embeddings, an initial $h_{0}$ and a tweet $w = <w_{1},.., w_{k}>$, the RNN computes $h_{1}, h_{2},..., h_{k}$, with $h_{t} \\in R^{m}$, as follows:\nwhere $h^{^{\\prime }}_{t} \\in R^{m}$ is the proposed hidden state at position t, obtained using the word embedding $x_{t}$ of token $w_{t}$ and the previous hidden state $h_{t-1}$, $\\odot $ represents the element-wise multiplication, $r_{t} \\in R^{m}$ is the reset gate, $z_{t} \\in R^{m}$ is the update gate, $\\sigma $ is the sigmoid function. Also $W_{h}, W_{z}, W_{r} \\in R^{m \\times d}$ and $U_{h}, U_{z}, U_{r} \\in R^{m \\times m}$, $b_{h}, b_{z}, b_{r} \\in R^{m}$. 
After the computation of state $h_{k}$ the LR Layer estimates the probability that tweet w should be considered as harassment, with $W_{p} \\in R^{1 \\times m}, b_{p} \\in R$:\nWe would like to add an attention mechanism similar to the one presented in BIBREF9, so that the LR Layer will consider the weighted sum $h_{sum}$ of all the hidden states instead of $h_{k}$:\n$h_{sum} = \\sum _{t=1}^{k} \\alpha _{t}h_{t}$\n$P_{attentionRNN} = \\sigma (W_{p}h_{sum} + b_{p})$\nAlternatively, we could pass $h_{sum}$ through an MLP with k layers and then the LR layer will estimate the corresponding probability. More formally,\n$P_{attentionRNN} = \\sigma (W_{p}h_{*} + b_{p})$\nwhere $h_{*}$ is the state that comes out from the MLP. The weights $\\alpha _{t}$ are produced by an attention mechanism presented in BIBREF9 (see Fig. FIGREF7), which is an MLP with l layers. This attention mechanism differs from most previous ones BIBREF16, BIBREF17, because it is used in a classification setting, where there is no previously generated output sub-sequence to drive the attention. It assigns larger weights $\\alpha _{t}$ to hidden states $h_{t}$ corresponding to positions, where there is more evidence that the tweet should be harassment (or any other specific type of harassment) or not. In our work we are using four attention mechanisms instead of one that is presented in BIBREF9. Particularly, we are using one attention mechanism per category. Another element that differentiates our approach from Pavlopoulos et al. BIBREF9 is that we are using a projection layer for the word embeddings (see Fig. FIGREF2). In the next subsection we describe the Model Architecture of our approach.\n\nProposed methodology ::: Model Architecture\nThe Embedding Layer is initialized using pre-trained word embeddings of dimension 200 from Twitter data that have been described in a previous sub-section. After the Embedding Layer, we are applying a Spatial Dropout Layer, which drops a certain percentage of dimensions from each word vector in the training sample. The role of Dropout is to improve generalization performance by preventing activations from becoming strongly correlated BIBREF18. Spatial Dropout, which has been proposed in BIBREF19, is an alternative way to use dropout with convolutional neural networks as it is able to dropout entire feature maps from the convolutional layer which are then not used during pooling. After that, the word embeddings are passing through a one-layer MLP, which has tanh as activation function and 128 hidden units, in order to project them in the vector space of our problem considering that they have been pre-trained using text that has a different subject. In the next step the embeddings are fed in a unidirectional GRU having 1 Stacked Layer and size 128. We prefer GRU than LSTM, because it is more efficient computationally. Also the basic advantage of LSTM which is the ability to keep in memory large text documents, does not hold here, because tweets supposed to be not too large text documents. The output states of the GRU are passing through four self-attentions like the one described above BIBREF9, because we are using one attention per category (see Fig. FIGREF7). Finally, a one-layer MLP having 128 nodes and ReLU as activation function computes the final score for each category. 
At this final stage we have avoided using a softmax function to decide the harassment type considering that the tweet is a harassment, otherwise we had to train our models taking into account only the harassment tweets and this might have been a problem as the dataset is not large enough.\n\nExperiments ::: Evaluation and Results\nEach model produces four scores and each score is the probability that a tweet includes harassment language, indirect, physical and sexual harassment language respectively. For any tweet, we first check the score of the harassment language and if it is less than a specified threshold, then the harassment label is zero, so the other three labels are zero as well. If it is greater than or equal to that threshold, then the harassment label is one and the type of harassment is the one among these three having that has the greatest score (highest probability). We set this threshold equal to 0.33.\nWe compare eight different models in our experiments. Four of them have a Projected Layer (see Fig. FIGREF2), while the others do not have, and this is the only difference between these two groups of our models. So, we actually include four models in our experiments (having a projected layer or not). Firstly, LastStateRNN is the classic RNN model, where the last state passes through an MLP and then the LR Layer estimates the corresponding probability. In contrast, in the AvgRNN model we consider the average vector of all states that come out of the cells. The AttentionRNN model is the one that it has been presented in BIBREF9. Moreover, we introduce the MultiAttentionRNN model for the harassment language detection, which instead of one attention, it includes four attentions, one for each category.\nWe have evaluated our models considering the F1 Score, which is the harmonic mean of precision and recall. We have run ten times the experiment for each model and considered the average F1 Score. The results are mentioned in Table TABREF11. Considering F1 Macro the models that include the multi-attention mechanism outperform the others and particularly the one with the Projected Layer has the highest performance. In three out of four pairs of models, the ones with the Projected Layer achieved better performance, so in most cases the addition of the Projected Layer had a significant enhancement.\n\nTable 1. Class distribution of the dataset.\n\nFig. 1. Projection Layer\n\nFig. 2. Attention mechanism, MLP with l Layers\n\nTable 2. The results considering F1 Score.\n\nQuery: What dataset is used for this work?\n\nJSON Response:",
"role": "user"
},
{
"content": "{\"answer\": \"Twitter dataset provided by the organizers\", \"evidence\": [\"In this paper we present our work, which is a part of the SociaL Media And Harassment Competition of the ECML PKDD 2019 Conference. The topic of the competition is the classification of different types of harassment and it is divided in two tasks. The first one is the classification of the tweets in harassment and non-harassment categories, while the second one is the classification in specific harassment categories like indirect harassment, physical and sexual harassment as well. We are using the dataset of the competition, which includes text from tweets having the aforementioned categories. Our approach is based on the Recurrent Neural Networks and particularly we are using a deep, classi\\ufb01cation specific attention mechanism. Moreover, we present a comparison between different variations of this attention-based approach like multi-attention and single attention models. The next Section includes a short description of the related work, while the third Section includes a description of the dataset. After that, we describe our methodology. Finally, we describe the experiments and we present the results and our conclusion.\"]}",
"role": "assistant"
}
]
|
science.mup_single_document_summarization | Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.
Title of the Paper:
Interpretable Complex-Valued Neural Networks for Privacy Protection
Main Content:
1 INTRODUCTION . Deep neural networks ( DNNs ) have shown superb capabilities to process massive volume of data , and local devices such as mobile phones , medical equipment , Internet of Things ( IoT ) devices have become major data entry points in recent years . Although on-device machine learning has exhibited various advantages , it usually burdens thin devices with overwhelming computational overhead . Yet offloading data or processed features to a cloud operator would put the individual privacy at risk . For example , if a user has a virtual assistant at home , it should not worry about its private data being collected and processed by an untrusted cloud operator . The operator , on the other hand , should not be able to recover original signals or their interpretation . However , as shown in the previous literature ( Dosovitskiy & Brox ( 2016 ) ; Zeiler & Fergus ( 2014 ) ; Mahendran & Vedaldi ( 2015 ) ; Shokri et al . ( 2017 ) ; Ganju et al . ( 2018 ) ; Melis et al . ( 2019 ) ) , intermediatelayer features face many privacy threats , where the adversary either reconstructs the input or infers unintended properties about the input . Hence , on-device processing encounters a dilemma , i.e . while we expect intermediate-layer features to yield high accuracy , we certainly would not want sensitive information to be leaked . Therefore , we propose a novel method which tweaks a conventional neural network into a complexvalued one , such that intermediate-layer features are released without sacrificing input privacy too much . More precisely , we turn the original real-valued features into complex-valued ones , rotate these features by a random angle , and feed them to the cloud for further processing . We face two significant challenges in the design . First , the tweaked features have to be correctly handled by the DNN on the cloud . Although network structures vary depending on the input types , it is desired that the piecewise linearity of features to be preserved , so that the resulting accuracy does not degrade too much compared with the original features . Second , an adversary attacker who intercepts the features in the middle should not be able to recover the exact input , nor can it figure out the correct output . Most importantly , the neural network is supposed to be trained using the original data without additional human supervision , and is efficient to conduct inference . ∗Quanshi Zhang is the corresponding author with the John Hopcroft Center and MoE Key Lab of Artificial Intelligence AI Institute , Shanghai Jiao Tong University , China . zqs1022 @ sjtu.edu.cn To overcome the first challenge , we tailor operations of DNN to make the succeeding computation at the cloud invariant to feature rotations , i.e. , the DNN can correctly handle the feature and preserve the rotated angle in the output . For the second issue , we adopt generative adversarial networks ( GAN ) to generate synthesized features to hide the original ones . It guarantees k-anonymity such that , when an adversary tries to recover the original input from the complex-valued features , it would get at least k different inputs ( Please see Appendix B for detailed discussion ) . Each of the k reconstructed inputs has equivalent probabilities of being the original input for the attacker . Apart from the GAN part , the training of the neural network adds little additional computation overhead and requires no extra denotations . Contributions of this study can be summarized as follows . 
We propose to transform conventional DNNs into complex-valued ones , which hide input information into a randomly chosen phase . Only the party with the phase information can retrieve the correct output . Without it , any party can neither recover the input nor the output . Tested on various datasets and DNNs , our method has been shown to yield only moderate computational overhead and little accuracy degradation , while significantly diminishes the adversary ’ s ability of inferring about the input . 2 RELATED WORK . Complex-valued neural networks . Using complex-valued features or parameters in neural networks has always been an interesting topic . From the computational perspective , Trabelsi et al . ( 2018 ) showed that complex-valued neural networks have a competitive performance with their real-valued counterparts . By augmenting recurrent neural networks with associative memory based on complexvalued vectors , Danihelka et al . ( 2016 ) achieved faster learning on memorization tasks . Complexvalued features can contain information in both the phase and the magnitude . An example is that Reichert & Serre ( 2013 ) used the phase to indicate properties of spike timing in cortical information processing . Zhang et al . ( 2019 ) used the quaternion-valued neural network to learn 3D-rotationequivariant features for 3D point cloud processing . We take advantage of complex-valued features such that original features can be hidden in an unknown phase . Also , Yang et al . ( 2019 ) shared a similar transformation-based approach as ours while it performed transformation on inputs rather than features , and it aimed to improve the adversarial robustness of the model . Privacy-preserving deep learning . Various privacy-preserving mechanisms have been proposed using different definitions of privacy : Osia et al . ( 2017 ) applied the Siamese architecture to separate the primary and private information so that the primary information was preserved in the feature . PrivyNet by Li et al . ( 2017 ) was proposed to decide the local DNN structure under the privacy constraint based on the peak signal-to-noise ratio or the pixel-wise Euclidean distance . Data nullification and random noise addition were introduced by Wang et al . ( 2018 ) to protect private information in the features , which guaranteed differential privacy . Cryptographic tools have been used to learn from sensitive data . Zhang et al . ( 2016 ) adopted homomorphic encryption , in particular , BGV encryption , to encrypt the private data and perform the high-order back-propagation on the encrypted data ; Mohassel & Zhang ( 2017 ) distributed the private data among two non-colluding servers who performed the secure two-party computation to train models on the joint data . Our scheme achieves k-anonymity privacy guarantee such that any adversary can at best reconstruct a number of synthetic inputs from the transformed features and thus it can not distinguish the original input from others . 3 THREAT MODEL AND PRIVACY GUARANTEE . In this section , we briefly introduce the threat model and the privacy guarantee that our scheme aims to achieve . Threat model . In our model , a user processes its raw data locally and transmits the neural network features to the cloud for further processing . An adversary would intercept the transferred features and perform either feature inversion attacks or property inference attacks on these features to either reconstruct inputs , or infer unintended information . 
Typical examples of the former attacks include feature inversion by Dosovitskiy & Brox ( 2016 ) , gradient-based visualization by Zeiler & Fergus ( 2014 ) ; Mahendran & Vedaldi ( 2015 ) , etc . And the latter ones include membership inference attack by Shokri et al . ( 2017 ) , passive property inference attack by Ganju et al . ( 2018 ) as well as feature leakage in collaborative learning by Melis et al . ( 2019 ) . k-anonymity . We aim to achieve the following privacy guarantee : when an adversary tries to reconstruct the original input from the transformed features , it can sythesize inputs belonging to k different prototypes with equivalent likelihoods , and only one of them corresponds to the original input . 4 APPROACH . 4.1 OVERVIEW . To fight against the above threats , we propose to revise the conventional DNN . The new structure is shown in Fig . 1a , where the entire network is divided into the following three modules . • Encoder is embedded inside a local device at the user end . The encoder extracts feature from the input , hides true feature by rotating the features by a randomized angle , and sends the encoded result to the cloud . • Processing module is located at the public cloud . This module receives and processes the encoded features without knowing the rotation . At the end of processing , the cloud returns the output to the user . • Decoder resides at the user ’ s local device . The decoder receives and decodes results sent from the cloud to obtain the final result . The three modules are jointly trained . Once trained , the encoder and decoder are placed at the local device , whereas the processing module resides on the cloud for public use . We will illustrate the detail of each module in the following section . 4.2 COMPLEX-VALUED NEURAL NETWORKS . Let ( I , y ) ∈ D denote an input and its label in the training dataset and g be the encoder at the local device . Given the input I , the intermediate-layer feature is computed as a = g ( I ) , ( 1 ) but we do not directly submit a to the cloud . Instead , we introduce a fooling counterpart b to construct a complex-valued feature as follows : x = exp ( iθ ) [ a+ bi ] , ( 2 ) where θ and b are randomly chosen . b is the fooling counterpart which does not contain any private information of a , but its magnitude is comparable with a to cause obfuscation . The encoded feature is then sent to the processing module Φ , which produces the complex-valued feature h = Φ ( x ) . Upon receiving h , the decoder makes prediction ŷ on I by inverting the complex-valued feature h back : ŷ = d ( < [ h · exp ( −iθ ) ] ) , ( 3 ) where d denotes the decoder module , which can be constructed as either a shallow network or just a softmax layer . < ( · ) denotes the operation of picking real parts of complex values . Processing Module Φ . The core design of the processing module is to allow the complex-valued feature h = Φ ( x ) to be successfully decoded later by the decoder . I.e . if we rotate the complexvalued feature a + bi by an angle θ , all the features of the following layers are supposed to be rotated by the same angle . We represent the processing module as cascaded functions of multiple layers Φ ( x ) = Φn ( Φn−1 ( · · ·Φ1 ( x ) ) ) , where Φj ( · ) denotes the function of the j-th layer and fj = Φj ( fj−1 ) represents the output of the j-th layer . Thus the processing module should have the following property : Φ ( f ( θ ) ) = eiθΦ ( f ) s.t . f ( θ ) , eiθf , ∀θ ∈ [ 0 , 2π ) . 
( 4 ) In other words , the function of each intermediate layer in the processing module should satisfy Φj ( f ( θ ) j−1 ) = e iθΦj ( fj−1 ) s.t . f ( θ ) j−1 , e iθfj−1 , ∀j ∈ [ 2 , . . . , n ] , ∀θ ∈ [ 0 , 2π ) . ( 5 ) to recursively prove Eqn . ( 4 ) . Let us consider six most common types of network layers to construct the processing module , i.e . the convolutional layer , the ReLU layer , the batch-normalization layer , the average/max pooling layer , the dropout layer , and the skip-connection operation . For the convolutional layer , we remove the bias term and obtain Conv ( f ) = w ⊗ f , which satisfies Eqn . ( 5 ) . Inspired by Trabelsi et al . ( 2018 ) , we replace the ReLU with the following non-linear layer : δ ( fijk ) = ‖fijk‖ max { ‖fijk‖ , c } · fijk ( 6 ) where fijk denotes the neural activation at the location ( i , j ) in the k-th channel of the feature , and c is a positive constant . Likewise , the batch-normalization operation is replaced by norm ( f lijk ) = f lijk√ El [ ‖f lijk‖2 ] , ( 7 ) where f l denotes the complex-valued tensor for the l-th sample in the batch . For max-pooling layers , we modify the rule such that the feature with the maximum norm in the region is selected , which ensures that max-pooling does not change the phase of its input . For the dropout layer , we randomly drops out both the real and imaginary parts of complex-valued features . Skip connections can be formulated as f + Φ ( f ) , where the inner module Φ ( f ) recursively satisfies Eqn . ( 4 ) . All above six operations satisfy Eqn . ( 4 ) . Please see supplementary materials for the proof . GAN-based Encoder . The objective of the encoder is to hide the real feature a of the input I in a certain phase θ of the encoded feature x = exp ( iθ ) [ a + bi ] . Let a′ = < [ x exp ( −iθ′ ) ] = < [ ( a+ bi ) exp ( iθ − iθ′ ) ] = < [ ( a+ bi ) exp ( i∆θ ) ] denote a feature decoded using a random angle θ′ 6= θ , where ∆θ = θ − θ′ . An ideal encoder requires ( i ) the decoded feature a′ contains sufficient information to cause obfuscation with the real feature a ; ( ii ) a′ and a follow the same distribution so that it is hard to distinguish which one is the real one . Hence , we train the encoder g with a range of ∆θ ’ s and b ’ s , and adopt a GAN to distinguish different values of them . Letting D denote the discriminator , we train the encoder g over the WGAN ( Arjovsky et al . ( 2017 ) ) loss : min g max D L ( g , D ) = EI∼pI [ D ( a ) − E∆θ∼U ( 0 , π ) , b 6=a [ D ( a′ ) ] ] = EI∼pI [ D ( g ( I ) ) − E∆θ∼U ( 0 , π ) , b 6=g ( I ) [ D ( < [ ( g ( I ) + bi ) ei∆θ ] ) ] ] . ( 8 ) g generates features to fool D. Upon convergence , the distribution of a′ approximates that of a . Note that the loss is the expectation over uniform-randomly distributed θ ∈ ( 0 , 2π ) . Ideally , the expectation in Eqn . ( 8 ) should also be taken over all possible ∆θ 6= 0 , and thus when an adversary tries to recover a by randomly rotating an angle , it will generate an infinite number of features following the same distribution as that of a. Empirically , we pick k − 1 such ∆θ ’ s uniformly over ( 0 , π ) and iterate through these k − 1 values as negative samples in the adversarial learning . The GAN-based encoder is the key to our k-anonymity guarantee such that it ensures the real feature can not be distinguished from at least k − 1 synthetic features . Please find the end of Appendix B to see the visualization effect of our privacy guarantee . 
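A compact sketch of the phase-preserving layer replacements described above, Eqns. (6) and (7) and the norm-based pooling rule, is given below using complex-valued arrays. The constant c, the epsilon guard and the tensor shapes are illustrative assumptions; the convolution, dropout and skip-connection cases follow directly from being linear or mask-based and are omitted.

```python
import numpy as np

def phase_relu(f, c=1.0):
    """Eqn (6): scale each complex activation by |f| / max(|f|, c); the phase is untouched."""
    mag = np.abs(f)
    return f * (mag / np.maximum(mag, c))

def phase_batchnorm(f, eps=1e-5):
    """Eqn (7): divide by the root-mean-square magnitude over the batch axis."""
    denom = np.sqrt(np.mean(np.abs(f) ** 2, axis=0, keepdims=True) + eps)
    return f / denom

def magnitude_maxpool(f, size=2):
    """Max-pooling that keeps, in each window, the activation with the largest norm,
    so the shared phase factor of the input is preserved."""
    b, c_, h, w = f.shape
    out = np.empty((b, c_, h // size, w // size), dtype=f.dtype)
    for i in range(0, h - h % size, size):
        for j in range(0, w - w % size, size):
            window = f[:, :, i:i + size, j:j + size].reshape(b, c_, -1)
            idx = np.argmax(np.abs(window), axis=-1)
            out[:, :, i // size, j // size] = np.take_along_axis(
                window, idx[..., None], axis=-1)[..., 0]
    return out

# Rotation equivariance check (Eqns 4-5): rotating the input rotates the output.
theta = 0.7
f = (np.random.randn(2, 3, 4, 4) + 1j * np.random.randn(2, 3, 4, 4)).astype(np.complex64)
rotated = np.exp(1j * theta) * f
assert np.allclose(phase_relu(rotated), np.exp(1j * theta) * phase_relu(f), atol=1e-5)
```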
The overall loss of learning is formulated as follows , which contains the adversarial loss in Eqn . ( 8 ) and a loss for the target task : min g , Φ , d max D Loss = min g , Φ , d [ max D L ( g , D ) + Ltask ( ŷ , y ) ] , ( 9 ) where Ltask ( ŷ , y ) represents the loss for the target task . Note that to simplify the implementation , we compute b = g ( I ′ ) as the feature of a randomly chosen sample I ′ 6= I where ideally I ′ has little correlation with I . In sum , the entire neural network [ g , Φ , d ] is trained with randomized variables θ and b to minimize the loss in Eqn . ( 9 ) . Once trained , the network [ g , Φ , d ] is fixed and publicly released . At the inference phase , the encoder secretly selects a fooling counterpart b and a rotation angle θ to encode its real feature a . The decoder receives the processed result h from the cloud , and make the final prediction based on Eqn . ( 3 ) . | In this paper, the authors propose to hide information in phase of the input features. They proposed that if each layer of processing layer, sitting outside of the local unit, is phase preserving, then they can recover the phase back. They propose a modification to the most popular layers in DNN to satisfy that property. | science.mup_single_document_summarization.17720 | [
{
"content": "Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.\n\nTitle of the Paper:\nInterpretable Complex-Valued Neural Networks for Privacy Protection\n\nMain Content:\n1 INTRODUCTION . Deep neural networks ( DNNs ) have shown superb capabilities to process massive volume of data , and local devices such as mobile phones , medical equipment , Internet of Things ( IoT ) devices have become major data entry points in recent years . Although on-device machine learning has exhibited various advantages , it usually burdens thin devices with overwhelming computational overhead . Yet offloading data or processed features to a cloud operator would put the individual privacy at risk . For example , if a user has a virtual assistant at home , it should not worry about its private data being collected and processed by an untrusted cloud operator . The operator , on the other hand , should not be able to recover original signals or their interpretation . However , as shown in the previous literature ( Dosovitskiy & Brox ( 2016 ) ; Zeiler & Fergus ( 2014 ) ; Mahendran & Vedaldi ( 2015 ) ; Shokri et al . ( 2017 ) ; Ganju et al . ( 2018 ) ; Melis et al . ( 2019 ) ) , intermediatelayer features face many privacy threats , where the adversary either reconstructs the input or infers unintended properties about the input . Hence , on-device processing encounters a dilemma , i.e . while we expect intermediate-layer features to yield high accuracy , we certainly would not want sensitive information to be leaked . Therefore , we propose a novel method which tweaks a conventional neural network into a complexvalued one , such that intermediate-layer features are released without sacrificing input privacy too much . More precisely , we turn the original real-valued features into complex-valued ones , rotate these features by a random angle , and feed them to the cloud for further processing . We face two significant challenges in the design . First , the tweaked features have to be correctly handled by the DNN on the cloud . Although network structures vary depending on the input types , it is desired that the piecewise linearity of features to be preserved , so that the resulting accuracy does not degrade too much compared with the original features . Second , an adversary attacker who intercepts the features in the middle should not be able to recover the exact input , nor can it figure out the correct output . Most importantly , the neural network is supposed to be trained using the original data without additional human supervision , and is efficient to conduct inference . ∗Quanshi Zhang is the corresponding author with the John Hopcroft Center and MoE Key Lab of Artificial Intelligence AI Institute , Shanghai Jiao Tong University , China . zqs1022 @ sjtu.edu.cn To overcome the first challenge , we tailor operations of DNN to make the succeeding computation at the cloud invariant to feature rotations , i.e. , the DNN can correctly handle the feature and preserve the rotated angle in the output . For the second issue , we adopt generative adversarial networks ( GAN ) to generate synthesized features to hide the original ones . It guarantees k-anonymity such that , when an adversary tries to recover the original input from the complex-valued features , it would get at least k different inputs ( Please see Appendix B for detailed discussion ) . 
Each of the k reconstructed inputs has equivalent probabilities of being the original input for the attacker . Apart from the GAN part , the training of the neural network adds little additional computation overhead and requires no extra denotations . Contributions of this study can be summarized as follows . We propose to transform conventional DNNs into complex-valued ones , which hide input information into a randomly chosen phase . Only the party with the phase information can retrieve the correct output . Without it , any party can neither recover the input nor the output . Tested on various datasets and DNNs , our method has been shown to yield only moderate computational overhead and little accuracy degradation , while significantly diminishes the adversary ’ s ability of inferring about the input . 2 RELATED WORK . Complex-valued neural networks . Using complex-valued features or parameters in neural networks has always been an interesting topic . From the computational perspective , Trabelsi et al . ( 2018 ) showed that complex-valued neural networks have a competitive performance with their real-valued counterparts . By augmenting recurrent neural networks with associative memory based on complexvalued vectors , Danihelka et al . ( 2016 ) achieved faster learning on memorization tasks . Complexvalued features can contain information in both the phase and the magnitude . An example is that Reichert & Serre ( 2013 ) used the phase to indicate properties of spike timing in cortical information processing . Zhang et al . ( 2019 ) used the quaternion-valued neural network to learn 3D-rotationequivariant features for 3D point cloud processing . We take advantage of complex-valued features such that original features can be hidden in an unknown phase . Also , Yang et al . ( 2019 ) shared a similar transformation-based approach as ours while it performed transformation on inputs rather than features , and it aimed to improve the adversarial robustness of the model . Privacy-preserving deep learning . Various privacy-preserving mechanisms have been proposed using different definitions of privacy : Osia et al . ( 2017 ) applied the Siamese architecture to separate the primary and private information so that the primary information was preserved in the feature . PrivyNet by Li et al . ( 2017 ) was proposed to decide the local DNN structure under the privacy constraint based on the peak signal-to-noise ratio or the pixel-wise Euclidean distance . Data nullification and random noise addition were introduced by Wang et al . ( 2018 ) to protect private information in the features , which guaranteed differential privacy . Cryptographic tools have been used to learn from sensitive data . Zhang et al . ( 2016 ) adopted homomorphic encryption , in particular , BGV encryption , to encrypt the private data and perform the high-order back-propagation on the encrypted data ; Mohassel & Zhang ( 2017 ) distributed the private data among two non-colluding servers who performed the secure two-party computation to train models on the joint data . Our scheme achieves k-anonymity privacy guarantee such that any adversary can at best reconstruct a number of synthetic inputs from the transformed features and thus it can not distinguish the original input from others . 3 THREAT MODEL AND PRIVACY GUARANTEE . In this section , we briefly introduce the threat model and the privacy guarantee that our scheme aims to achieve . Threat model . 
In our model , a user processes its raw data locally and transmits the neural network features to the cloud for further processing . An adversary would intercept the transferred features and perform either feature inversion attacks or property inference attacks on these features to either reconstruct inputs , or infer unintended information . Typical examples of the former attacks include feature inversion by Dosovitskiy & Brox ( 2016 ) , gradient-based visualization by Zeiler & Fergus ( 2014 ) ; Mahendran & Vedaldi ( 2015 ) , etc . And the latter ones include membership inference attack by Shokri et al . ( 2017 ) , passive property inference attack by Ganju et al . ( 2018 ) as well as feature leakage in collaborative learning by Melis et al . ( 2019 ) . k-anonymity . We aim to achieve the following privacy guarantee : when an adversary tries to reconstruct the original input from the transformed features , it can sythesize inputs belonging to k different prototypes with equivalent likelihoods , and only one of them corresponds to the original input . 4 APPROACH . 4.1 OVERVIEW . To fight against the above threats , we propose to revise the conventional DNN . The new structure is shown in Fig . 1a , where the entire network is divided into the following three modules . • Encoder is embedded inside a local device at the user end . The encoder extracts feature from the input , hides true feature by rotating the features by a randomized angle , and sends the encoded result to the cloud . • Processing module is located at the public cloud . This module receives and processes the encoded features without knowing the rotation . At the end of processing , the cloud returns the output to the user . • Decoder resides at the user ’ s local device . The decoder receives and decodes results sent from the cloud to obtain the final result . The three modules are jointly trained . Once trained , the encoder and decoder are placed at the local device , whereas the processing module resides on the cloud for public use . We will illustrate the detail of each module in the following section . 4.2 COMPLEX-VALUED NEURAL NETWORKS . Let ( I , y ) ∈ D denote an input and its label in the training dataset and g be the encoder at the local device . Given the input I , the intermediate-layer feature is computed as a = g ( I ) , ( 1 ) but we do not directly submit a to the cloud . Instead , we introduce a fooling counterpart b to construct a complex-valued feature as follows : x = exp ( iθ ) [ a+ bi ] , ( 2 ) where θ and b are randomly chosen . b is the fooling counterpart which does not contain any private information of a , but its magnitude is comparable with a to cause obfuscation . The encoded feature is then sent to the processing module Φ , which produces the complex-valued feature h = Φ ( x ) . Upon receiving h , the decoder makes prediction ŷ on I by inverting the complex-valued feature h back : ŷ = d ( < [ h · exp ( −iθ ) ] ) , ( 3 ) where d denotes the decoder module , which can be constructed as either a shallow network or just a softmax layer . < ( · ) denotes the operation of picking real parts of complex values . Processing Module Φ . The core design of the processing module is to allow the complex-valued feature h = Φ ( x ) to be successfully decoded later by the decoder . I.e . if we rotate the complexvalued feature a + bi by an angle θ , all the features of the following layers are supposed to be rotated by the same angle . 
We represent the processing module as cascaded functions of multiple layers Φ ( x ) = Φn ( Φn−1 ( · · ·Φ1 ( x ) ) ) , where Φj ( · ) denotes the function of the j-th layer and fj = Φj ( fj−1 ) represents the output of the j-th layer . Thus the processing module should have the following property : Φ ( f ( θ ) ) = eiθΦ ( f ) s.t . f ( θ ) , eiθf , ∀θ ∈ [ 0 , 2π ) . ( 4 ) In other words , the function of each intermediate layer in the processing module should satisfy Φj ( f ( θ ) j−1 ) = e iθΦj ( fj−1 ) s.t . f ( θ ) j−1 , e iθfj−1 , ∀j ∈ [ 2 , . . . , n ] , ∀θ ∈ [ 0 , 2π ) . ( 5 ) to recursively prove Eqn . ( 4 ) . Let us consider six most common types of network layers to construct the processing module , i.e . the convolutional layer , the ReLU layer , the batch-normalization layer , the average/max pooling layer , the dropout layer , and the skip-connection operation . For the convolutional layer , we remove the bias term and obtain Conv ( f ) = w ⊗ f , which satisfies Eqn . ( 5 ) . Inspired by Trabelsi et al . ( 2018 ) , we replace the ReLU with the following non-linear layer : δ ( fijk ) = ‖fijk‖ max { ‖fijk‖ , c } · fijk ( 6 ) where fijk denotes the neural activation at the location ( i , j ) in the k-th channel of the feature , and c is a positive constant . Likewise , the batch-normalization operation is replaced by norm ( f lijk ) = f lijk√ El [ ‖f lijk‖2 ] , ( 7 ) where f l denotes the complex-valued tensor for the l-th sample in the batch . For max-pooling layers , we modify the rule such that the feature with the maximum norm in the region is selected , which ensures that max-pooling does not change the phase of its input . For the dropout layer , we randomly drops out both the real and imaginary parts of complex-valued features . Skip connections can be formulated as f + Φ ( f ) , where the inner module Φ ( f ) recursively satisfies Eqn . ( 4 ) . All above six operations satisfy Eqn . ( 4 ) . Please see supplementary materials for the proof . GAN-based Encoder . The objective of the encoder is to hide the real feature a of the input I in a certain phase θ of the encoded feature x = exp ( iθ ) [ a + bi ] . Let a′ = < [ x exp ( −iθ′ ) ] = < [ ( a+ bi ) exp ( iθ − iθ′ ) ] = < [ ( a+ bi ) exp ( i∆θ ) ] denote a feature decoded using a random angle θ′ 6= θ , where ∆θ = θ − θ′ . An ideal encoder requires ( i ) the decoded feature a′ contains sufficient information to cause obfuscation with the real feature a ; ( ii ) a′ and a follow the same distribution so that it is hard to distinguish which one is the real one . Hence , we train the encoder g with a range of ∆θ ’ s and b ’ s , and adopt a GAN to distinguish different values of them . Letting D denote the discriminator , we train the encoder g over the WGAN ( Arjovsky et al . ( 2017 ) ) loss : min g max D L ( g , D ) = EI∼pI [ D ( a ) − E∆θ∼U ( 0 , π ) , b 6=a [ D ( a′ ) ] ] = EI∼pI [ D ( g ( I ) ) − E∆θ∼U ( 0 , π ) , b 6=g ( I ) [ D ( < [ ( g ( I ) + bi ) ei∆θ ] ) ] ] . ( 8 ) g generates features to fool D. Upon convergence , the distribution of a′ approximates that of a . Note that the loss is the expectation over uniform-randomly distributed θ ∈ ( 0 , 2π ) . Ideally , the expectation in Eqn . ( 8 ) should also be taken over all possible ∆θ 6= 0 , and thus when an adversary tries to recover a by randomly rotating an angle , it will generate an infinite number of features following the same distribution as that of a. 
Empirically , we pick k − 1 such ∆θ ’ s uniformly over ( 0 , π ) and iterate through these k − 1 values as negative samples in the adversarial learning . The GAN-based encoder is the key to our k-anonymity guarantee such that it ensures the real feature can not be distinguished from at least k − 1 synthetic features . Please find the end of Appendix B to see the visualization effect of our privacy guarantee . The overall loss of learning is formulated as follows , which contains the adversarial loss in Eqn . ( 8 ) and a loss for the target task : min g , Φ , d max D Loss = min g , Φ , d [ max D L ( g , D ) + Ltask ( ŷ , y ) ] , ( 9 ) where Ltask ( ŷ , y ) represents the loss for the target task . Note that to simplify the implementation , we compute b = g ( I ′ ) as the feature of a randomly chosen sample I ′ 6= I where ideally I ′ has little correlation with I . In sum , the entire neural network [ g , Φ , d ] is trained with randomized variables θ and b to minimize the loss in Eqn . ( 9 ) . Once trained , the network [ g , Φ , d ] is fixed and publicly released . At the inference phase , the encoder secretly selects a fooling counterpart b and a rotation angle θ to encode its real feature a . The decoder receives the processed result h from the cloud , and make the final prediction based on Eqn . ( 3 ) .",
"role": "user"
},
{
"content": "In this paper, the authors propose to hide information in phase of the input features. They proposed that if each layer of processing layer, sitting outside of the local unit, is phase preserving, then they can recover the phase back. They propose a modification to the most popular layers in DNN to satisfy that property.",
"role": "assistant"
}
]
|
science.bioasq_list_qa | Examine the following excerpts from biomedical literature. Your objective is to extract relevant information to answer the posed question. Format your answer as a JSON array.
Context: In addition, SNAP25, RAB18, FABP5, ARF5 and LDHA, which are related genes to schizophrenia and methamphetamine sensitization,
We aimed to study a SNAP-25 gene polymorphism, which is related to many psychiatric diseases, and FMS association in this prospective study.
Studies have shown a correlation between aberrant expression of the SNAP25 and a variety of brain diseases. Single nucleotide polymorphisms (SNPs) in this gene are associated with several psychiatric diseases, such as bipolar, schizophrenia, and attention-deficit/hyperactivity disorder.
Polymorphic variants of the SNAP-25 gene emerged as putative genetic components of impulsivity, as SNAP-25 protein plays an important role in the central nervous system, and its SNPs are associated with several psychiatric disorders.
SNAP-25 is a key component of the synaptic-vesicle fusion machinery, involved in several psychiatric diseases including schizophrenia and ADHD
Question: List psychiatric diseases that are associated with Synaptosome Associated Protein 25 (snap25). | ["schizophrenia", "attention-deficit/hyperactivity disorder", "bipolar"] | science.bioasq_list_qa.653 | [
{
"content": "Examine the following excerpts from biomedical literature. Your objective is to extract relevant information to answer the posed question. Format your answer as a JSON array.\n\nContext: In addition, SNAP25, RAB18, FABP5, ARF5 and LDHA, which are related genes to schizophrenia and methamphetamine sensitization,\nWe aimed to study a SNAP-25 gene polymorphism, which is related to many psychiatric diseases, and FMS association in this prospective study.\nStudies have shown a correlation between aberrant expression of the SNAP25 and a variety of brain diseases. Single nucleotide polymorphisms (SNPs) in this gene are associated with several psychiatric diseases, such as bipolar, schizophrenia, and attention-deficit/hyperactivity disorder.\nPolymorphic variants of the SNAP-25 gene emerged as putative genetic components of impulsivity, as SNAP-25 protein plays an important role in the central nervous system, and its SNPs are associated with several psychiatric disorders.\nSNAP-25 is a key component of the synaptic-vesicle fusion machinery, involved in several psychiatric diseases including schizophrenia and ADHD\n\nQuestion: List psychiatric diseases that are associated with Synaptosome Associated Protein 25 (snap25).",
"role": "user"
},
{
"content": "[\"schizophrenia\", \"attention-deficit/hyperactivity disorder\", \"bipolar\"]",
"role": "assistant"
}
]
|
science.scifact_entailment | Determine the relationship between them and generate a JSON response. Scientific Claim:
Genetic deficiency of mast cells leads to decreased weight gain in a mouse model of diet-induced obesity.
Research Abstract:
Although mast cell functions have classically been related to allergic responses, recent studies indicate that these cells contribute to other common diseases such as multiple sclerosis, rheumatoid arthritis, atherosclerosis, aortic aneurysm and cancer.
This study presents evidence that mast cells also contribute to diet-induced obesity and diabetes.
For example, white adipose tissue (WAT) from obese humans and mice contain more mast cells than WAT from their lean counterparts.
Furthermore, in the context of mice on a Western diet, genetically induced deficiency of mast cells, or their pharmacological stabilization, reduces body weight gain and levels of inflammatory cytokines, chemokines and proteases in serum and WAT, in concert with improved glucose homeostasis and energy expenditure.
Mechanistic studies reveal that mast cells contribute to WAT and muscle angiogenesis and associated cell apoptosis and cathepsin activity.
Adoptive transfer experiments of cytokine-deficient mast cells show that these cells, by producing interleukin-6 (IL-6) and interferon-gamma (IFN-gamma), contribute to mouse adipose tissue cysteine protease cathepsin expression, apoptosis and angiogenesis, thereby promoting diet-induced obesity and glucose intolerance.
Our results showing reduced obesity and diabetes in mice treated with clinically available mast cell-stabilizing agents suggest the potential of developing new therapies for these common human metabolic disorders. | {"verdict": "SUPPORT", "evidence": ["Furthermore, in the context of mice on a Western diet, genetically induced deficiency of mast cells, or their pharmacological stabilization, reduces body weight gain and levels of inflammatory cytokines, chemokines and proteases in serum and WAT, in concert with improved glucose homeostasis and energy expenditure."]} | science.scifact_entailment.300 | [
{
"content": "Determine the relationship between them and generate a JSON response. Scientific Claim: \nGenetic deficiency of mast cells leads to decreased weight gain in a mouse model of diet-induced obesity.\n\nResearch Abstract:\nAlthough mast cell functions have classically been related to allergic responses, recent studies indicate that these cells contribute to other common diseases such as multiple sclerosis, rheumatoid arthritis, atherosclerosis, aortic aneurysm and cancer.\nThis study presents evidence that mast cells also contribute to diet-induced obesity and diabetes.\nFor example, white adipose tissue (WAT) from obese humans and mice contain more mast cells than WAT from their lean counterparts.\nFurthermore, in the context of mice on a Western diet, genetically induced deficiency of mast cells, or their pharmacological stabilization, reduces body weight gain and levels of inflammatory cytokines, chemokines and proteases in serum and WAT, in concert with improved glucose homeostasis and energy expenditure.\nMechanistic studies reveal that mast cells contribute to WAT and muscle angiogenesis and associated cell apoptosis and cathepsin activity.\nAdoptive transfer experiments of cytokine-deficient mast cells show that these cells, by producing interleukin-6 (IL-6) and interferon-gamma (IFN-gamma), contribute to mouse adipose tissue cysteine protease cathepsin expression, apoptosis and angiogenesis, thereby promoting diet-induced obesity and glucose intolerance.\nOur results showing reduced obesity and diabetes in mice treated with clinically available mast cell-stabilizing agents suggest the potential of developing new therapies for these common human metabolic disorders.",
"role": "user"
},
{
"content": "{\"verdict\": \"SUPPORT\", \"evidence\": [\"Furthermore, in the context of mice on a Western diet, genetically induced deficiency of mast cells, or their pharmacological stabilization, reduces body weight gain and levels of inflammatory cytokines, chemokines and proteases in serum and WAT, in concert with improved glucose homeostasis and energy expenditure.\"]}",
"role": "assistant"
}
]
|
science.scierc_ner | Analyze the given abstract from a computer science paper. Identify and list entities in a structured JSON format. Focus on capturing distinct elements relevant to the research context.
Input Abstract:
Despite much recent progress on accurate semantic role labeling, previous work has largely used independent classifiers, possibly combined with separate label sequence models via Viterbi decoding. This stands in stark contrast to the linguistic observation that a core argument frame is a joint structure, with strong dependencies between arguments. We show how to build a joint model of argument frames, incorporating novel features that model these interactions into discriminative log-linear models. This system achieves an error reduction of 22% on all arguments and 32% on core arguments over a state-of-the art independent classifier for gold-standard parse trees on PropBank. | Output Entities:
{"Generic": [], "Material": ["PropBank"], "Method": ["label sequence models", "independent classifiers", "joint model of argument frames", "Viterbi decoding", "discriminative log-linear models"], "Metric": ["error reduction"], "OtherScientificTerm": ["features", "core argument frame", "gold-standard parse trees"], "Task": ["semantic role labeling"]} | science.scierc_ner.33 | [
{
"content": "Analyze the given abstract from a computer science paper. Identify and list entities in a structured JSON format. Focus on capturing distinct elements relevant to the research context.\n\nInput Abstract:\n\nDespite much recent progress on accurate semantic role labeling, previous work has largely used independent classifiers, possibly combined with separate label sequence models via Viterbi decoding. This stands in stark contrast to the linguistic observation that a core argument frame is a joint structure, with strong dependencies between arguments. We show how to build a joint model of argument frames, incorporating novel features that model these interactions into discriminative log-linear models. This system achieves an error reduction of 22% on all arguments and 32% on core arguments over a state-of-the art independent classifier for gold-standard parse trees on PropBank.",
"role": "user"
},
{
"content": "Output Entities:\n\n{\"Generic\": [], \"Material\": [\"PropBank\"], \"Method\": [\"label sequence models\", \"independent classifiers\", \"joint model of argument frames\", \"Viterbi decoding\", \"discriminative log-linear models\"], \"Metric\": [\"error reduction\"], \"OtherScientificTerm\": [\"features\", \"core argument frame\", \"gold-standard parse trees\"], \"Task\": [\"semantic role labeling\"]}",
"role": "assistant"
}
]
|
science.multicite_intent_classification | Examine the text below, which includes a citation marked by "<cite>" and "</cite>". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.
Provide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.
Text with Citation:
Another reason why the multi-source model does not work as well in our experiments as it does in those of <cite>Tyers et al. (2018)</cite> might be that we use pre-trained embeddings whereas <cite>Tyers et al. (2018)</cite> do not. | ["Differences"] | science.multicite_intent_classification.1856 | [
{
"content": "Examine the text below, which includes a citation marked by \"<cite>\" and \"</cite>\". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.\n\nProvide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.\n\nText with Citation:\nAnother reason why the multi-source model does not work as well in our experiments as it does in those of <cite>Tyers et al. (2018)</cite> might be that we use pre-trained embeddings whereas <cite>Tyers et al. (2018)</cite> do not.",
"role": "user"
},
{
"content": "[\"Differences\"]",
"role": "assistant"
}
]
|
science.evidence_inference | Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:
- Treatment: The specific medical procedure or drug being evaluated.
- Control: The standard or placebo used for comparison. If absent, use `null`.
- Result: The health outcome being assessed.
- Impact: The observed effect of the treatment on the result, categorized as: ("increased", "decreased", "no change").
- Justification: The supporting details for the observed impact.
Format the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].
Abstract:
ABSTRACT.OBJECTIVES:: This study aimed to assess and compare the microleakage of two bulk fill and one conventional composite in class II restorations of primary posterior teeth.
ABSTRACT.MATERIALS AND METHODS:: This in vitro, experimental study was conducted on 60 primary mandibular second molars, which were randomly divided into three groups. Standard class II cavities were prepared in teeth and restored with 3M bulk fill composite in group 1, SonicFill bulk fill composite in group 2 and Z250 conventional composite in group 3. Single Bond 2 bonding agent was used in all cavities. The teeth were then thermocycled and immersed in 1M silver nitrate solution. The teeth were then mesiodistally sectioned and evaluated under a stereomicroscope at×10 magnification. Dye penetration depth was recorded in microns and data were analyzed using one-way ANOVA.
ABSTRACT.RESULTS:: The mean (± standard deviation) dye penetration depth in the gingival margins was 543±523μm, 343±290μm and 597±590μm for 3M bulk fill, SonicFill and Z250 conventional composite, respectively. These values were 214±93μm, 302±127μm and 199±145μm in the occlusal margins, respectively. The three groups were not significantly different in terms of occlusal or gingival microleakage (P>0.05), but gingival margins showed significantly higher microleakage than occlusal margins in all three groups (P<0.05).
ABSTRACT.CONCLUSIONS:: Bulk fill composites are not significantly different from conventional composites in terms of microleakage.
Extracted Data: | [["SonicFill bulk fill", "Z250 conventional composite", "Microleakage of Two Bulk Fill and One Conventional Composite in Class II Restorations of Primary Posterior Teeth", "no significant difference", "The three groups were not significantly different in terms of occlusal or gingival microleakage (P\u003e0.05), but gingival margins showed significantly higher microleakage than occlusal margins in all three groups (P\u003c0.05)."], ["3M bulk fill", "SonicFill", "Microleakage of Two Bulk Fill and One Conventional Composite in Class II Restorations of Primary Posterior Teeth", "no significant difference", "The three groups were not significantly different in terms of occlusal or gingival microleakage (P\u003e0.05), but gingival margins showed significantly higher microleakage than occlusal margins in all three groups (P\u003c0.05)."], ["gingival margin of all groups", "occlusal margin of all groups", "Microleakage (dye penetration depth)", "significantly increased", "The three groups were not significantly different in terms of occlusal or gingival microleakage (P\u003e0.05), but gingival margins showed significantly higher microleakage than occlusal margins in all three groups (P\u003c0.05)."], ["3M bulk fill", "Z250 conventional composite", "Microleakage of Two Bulk Fill and One Conventional Composite in Class II Restorations of Primary Posterior Teeth", "no significant difference", "The three groups were not significantly different in terms of occlusal or gingival microleakage (P\u003e0.05), but gingival margins showed significantly higher microleakage than occlusal margins in all three groups (P\u003c0.05)."]] | science.evidence_inference.80 | [
{
"content": "Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:\n\n- Treatment: The specific medical procedure or drug being evaluated.\n- Control: The standard or placebo used for comparison. If absent, use `null`.\n- Result: The health outcome being assessed.\n- Impact: The observed effect of the treatment on the result, categorized as: (\"increased\", \"decreased\", \"no change\").\n- Justification: The supporting details for the observed impact.\n\nFormat the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].\n\nAbstract:\nABSTRACT.OBJECTIVES:: This study aimed to assess and compare the microleakage of two bulk fill and one conventional composite in class II restorations of primary posterior teeth.\n\nABSTRACT.MATERIALS AND METHODS:: This in vitro, experimental study was conducted on 60 primary mandibular second molars, which were randomly divided into three groups. Standard class II cavities were prepared in teeth and restored with 3M bulk fill composite in group 1, SonicFill bulk fill composite in group 2 and Z250 conventional composite in group 3. Single Bond 2 bonding agent was used in all cavities. The teeth were then thermocycled and immersed in 1M silver nitrate solution. The teeth were then mesiodistally sectioned and evaluated under a stereomicroscope at×10 magnification. Dye penetration depth was recorded in microns and data were analyzed using one-way ANOVA.\n\nABSTRACT.RESULTS:: The mean (± standard deviation) dye penetration depth in the gingival margins was 543±523μm, 343±290μm and 597±590μm for 3M bulk fill, SonicFill and Z250 conventional composite, respectively. These values were 214±93μm, 302±127μm and 199±145μm in the occlusal margins, respectively. The three groups were not significantly different in terms of occlusal or gingival microleakage (P>0.05), but gingival margins showed significantly higher microleakage than occlusal margins in all three groups (P<0.05).\n\nABSTRACT.CONCLUSIONS:: Bulk fill composites are not significantly different from conventional composites in terms of microleakage.\n\nExtracted Data:",
"role": "user"
},
{
"content": "[[\"SonicFill bulk fill\", \"Z250 conventional composite\", \"Microleakage of Two Bulk Fill and One Conventional Composite in Class II Restorations of Primary Posterior Teeth\", \"no significant difference\", \"The three groups were not significantly different in terms of occlusal or gingival microleakage (P\\u003e0.05), but gingival margins showed significantly higher microleakage than occlusal margins in all three groups (P\\u003c0.05).\"], [\"3M bulk fill\", \"SonicFill\", \"Microleakage of Two Bulk Fill and One Conventional Composite in Class II Restorations of Primary Posterior Teeth\", \"no significant difference\", \"The three groups were not significantly different in terms of occlusal or gingival microleakage (P\\u003e0.05), but gingival margins showed significantly higher microleakage than occlusal margins in all three groups (P\\u003c0.05).\"], [\"gingival margin of all groups\", \"occlusal margin of all groups\", \"Microleakage (dye penetration depth)\", \"significantly increased\", \"The three groups were not significantly different in terms of occlusal or gingival microleakage (P\\u003e0.05), but gingival margins showed significantly higher microleakage than occlusal margins in all three groups (P\\u003c0.05).\"], [\"3M bulk fill\", \"Z250 conventional composite\", \"Microleakage of Two Bulk Fill and One Conventional Composite in Class II Restorations of Primary Posterior Teeth\", \"no significant difference\", \"The three groups were not significantly different in terms of occlusal or gingival microleakage (P\\u003e0.05), but gingival margins showed significantly higher microleakage than occlusal margins in all three groups (P\\u003c0.05).\"]]",
"role": "assistant"
}
]
|
science.bioasq_list_qa | Examine the following excerpts from biomedical literature. Your objective is to extract relevant information to answer the posed question. Format your answer as a JSON array.
Context: We found conservation of heterozygous K27M mutations in H3F3A (n = 4) or HIST1H3B (n = 3) across all primary, contiguous, and metastatic tumor sites in all DIPGs
ACVR1 (n = 2), PIK3CA (n = 2), FGFR1 (n = 2), and MET (n = 1) were also intra-tumorally conserved
TP53 aberrations (n = 3 patients) varied by type and location between primary and metastatic tumors sites but were intra-tumorally conserved.
Evolutionary reconstruction indicates histone 3 (H3) K27M--including H3.2K27M--mutations potentially arise first and are invariably associated with specific, high-fidelity obligate partners throughout the tumour and its spread, from diagnosis to end-stage disease, suggesting mutual need for tumorigenesis.
These H3K27M ubiquitously-associated mutations involve alterations in TP53 cell-cycle (TP53/PPM1D) or specific growth factor pathways (ACVR1/PIK3R1).
Recently, sequencing of tumor cells revealed that histone H3 is frequently mutated in pediatric HGG, with up to 78 % of diffuse intrinsic pontine gliomas (DIPGs) carrying K27M and 36 % of non-brainstem gliomas carrying either K27M or G34R/V mutations.
Moreover, in diffuse intrinsic pontine gliomas (DIPG), recurrent somatic mutations of ACVR1 were found in association with HIST1H3B mutations
Recently, sequencing of tumor cells revealed that histone H3 is frequently mutated in pediatric HGG, with up to 78 % of diffuse intrinsic pontine gliomas (DIPGs) carrying K27M and 36 % of non-brainstem gliomas carrying either K27M or G34R/V mutations
K27M mutation in histone H3.3 defines clinically and biologically distinct subgroups of pediatric diffuse intrinsic pontine gliomas.
Recurrent mutations within the histone H3 genes H3F3A and HIST1H3B that convert K27 to methionine (H3K27M) and disrupt the global H3K27 methylation landscape and PRC2-dependent silencing, have recently been identified in pediatric high-grade gliomas including Diffuse Intrinsic Pontine Glioma (DIPG) and Glioblastoma multiforme (GBM; Type IV glioma).
Sequencing analysis showed c.83A>T mutations in the H3F3A or HIST1H3B gene in 77 % of our DIPG cohort.
Recently, sequencing of tumor cells revealed that histone H3 is frequently mutated in pediatric HGG, with up to 78 % of diffuse intrinsic pontine gliomas (DIPGs) carrying K27M and 36 % of non-brainstem gliomas carrying either K27M or G34R/V mutations.
K27M mutation in histone H3.3 defines clinically and biologically distinct subgroups of pediatric diffuse intrinsic pontine gliomas.
Question: Which driver mutations have been identified for Diffuse Intrinsic Pontine Glioma (DIPG)? | ["K27M in H3F3A", "HIST1H3B"] | science.bioasq_list_qa.451 | [
{
"content": "Examine the following excerpts from biomedical literature. Your objective is to extract relevant information to answer the posed question. Format your answer as a JSON array.\n\nContext: We found conservation of heterozygous K27M mutations in H3F3A (n = 4) or HIST1H3B (n = 3) across all primary, contiguous, and metastatic tumor sites in all DIPGs\nACVR1 (n = 2), PIK3CA (n = 2), FGFR1 (n = 2), and MET (n = 1) were also intra-tumorally conserved\nTP53 aberrations (n = 3 patients) varied by type and location between primary and metastatic tumors sites but were intra-tumorally conserved.\nEvolutionary reconstruction indicates histone 3 (H3) K27M--including H3.2K27M--mutations potentially arise first and are invariably associated with specific, high-fidelity obligate partners throughout the tumour and its spread, from diagnosis to end-stage disease, suggesting mutual need for tumorigenesis.\nThese H3K27M ubiquitously-associated mutations involve alterations in TP53 cell-cycle (TP53/PPM1D) or specific growth factor pathways (ACVR1/PIK3R1).\nRecently, sequencing of tumor cells revealed that histone H3 is frequently mutated in pediatric HGG, with up to 78 % of diffuse intrinsic pontine gliomas (DIPGs) carrying K27M and 36 % of non-brainstem gliomas carrying either K27M or G34R/V mutations.\nMoreover, in diffuse intrinsic pontine gliomas (DIPG), recurrent somatic mutations of ACVR1 were found in association with HIST1H3B mutations\nRecently, sequencing of tumor cells revealed that histone H3 is frequently mutated in pediatric HGG, with up to 78 % of diffuse intrinsic pontine gliomas (DIPGs) carrying K27M and 36 % of non-brainstem gliomas carrying either K27M or G34R/V mutations\nK27M mutation in histone H3.3 defines clinically and biologically distinct subgroups of pediatric diffuse intrinsic pontine gliomas.\nRecurrent mutations within the histone H3 genes H3F3A and HIST1H3B that convert K27 to methionine (H3K27M) and disrupt the global H3K27 methylation landscape and PRC2-dependent silencing, have recently been identified in pediatric high-grade gliomas including Diffuse Intrinsic Pontine Glioma (DIPG) and Glioblastoma multiforme (GBM; Type IV glioma).\nSequencing analysis showed c.83A>T mutations in the H3F3A or HIST1H3B gene in 77 % of our DIPG cohort.\nRecently, sequencing of tumor cells revealed that histone H3 is frequently mutated in pediatric HGG, with up to 78 % of diffuse intrinsic pontine gliomas (DIPGs) carrying K27M and 36 % of non-brainstem gliomas carrying either K27M or G34R/V mutations.\nK27M mutation in histone H3.3 defines clinically and biologically distinct subgroups of pediatric diffuse intrinsic pontine gliomas.\n\nQuestion: Which driver mutations have been identified for Diffuse Intrinsic Pontine Glioma (DIPG)?",
"role": "user"
},
{
"content": "[\"K27M in H3F3A\", \"HIST1H3B\"]",
"role": "assistant"
}
]
|
science.mup_single_document_summarization | Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.
Title of the Paper:
Zero-shot Transfer Learning for Gray-box Hyper-parameter Optimization
Main Content:
1 INTRODUCTION . Within the research community , the concentration of efforts towards solving the problem of hyperparameter optimization ( HPO ) has been mainly through sequential model-based optimization ( SMBO ) , i.e . iteratively fitting a probabilistic response model , typically a Gaussian process ( Rasmussen ( 2003 ) ) , to a history of observations of losses of the target response , and suggesting the next hyper-parameters via a policy , acquisition function , that balances exploration and exploitation by leveraging the uncertainty in the posterior distribution ( Jones et al . ( 1998 ) ; Wistuba et al . ( 2018 ) ; Snoek et al . ( 2012 ) ) . However , even when solutions are defined in conjunction with transfer learning techniques ( Bardenet et al . ( 2013 ) ; Wistuba et al . ( 2016 ) ; Feurer et al . ( 2015 ) ) , the performance of SMBO is heavily affected by the choice of the initial hyper-parameters . Furthermore , SMBO is sequential by design and additional acceleration by parallelization is not possible . In this paper , we present the problem of zero-shot hyper-parameter optimization as a meta-learning objective that exploits dataset information as part of the surrogate model . Instead of treating HPO as a black-box function , operating blindly on the response of the hyper-parameters alone , we treat it as a gray-box function ( Whitley et al . ( 2016 ) ) , by capturing the relationship among the dataset meta-features and hyper-parameters to approximate the response model . In this paper , we propose a novel formulation of HPO as a conditional gray-box function optimization problem , Section 4 , that allows us to regress from the dataset/hyper-parameter pair directly onto the response . Driven by the assumption that similar datasets should have similar response approximations , we introduce an additional data-driven similarity regularization objective to penalize the difference between the predicted response of similar datasets . In Section 5 , we perform an extensive battery of experiments that highlight the capacity of our universal model to serve as a solution for : ( 1 ) zero-shot HPO as a stand-alone task , ( 2 ) zero-shot as an initialization strategy for Bayesian Optimization ( BO ) , ( 3 ) transferable sequential model-based optimization . A summary of our contributions is : • a formulation of the zero-shot hyper-parameter optimization problem in which our response model predicts upfront the full set of hyper-parameter configurations to try , without access to observations of losses of the target response ; • a novel multi-task optimization objective that models the inherent similarity between datasets and their respective responses ; • three new meta-datasets with different search spaces and cardinalities to facilitate the experiments and serve as a benchmark for future work ; • an empirical demonstration of the performance of our approach through a battery of experiments that address the aforementioned research aspects , and a comparison against state-of-the-art HPO solutions for transfer-learning . 2 RELATED WORK . The straightforward zero-shot approaches for HPO consist of random search ( Bergstra & Bengio ( 2012 ) ) , or simply selecting hyper-parameters that perform well on general tasks ( Brazdil et al . ( 2003 ) ) . Some recent work has also shown that simply selecting random hyper-parameters from a restricted search space significantly outperforms existing solutions , and improves the performance of conventional SMBO approaches ( Perrone et al . ( 2019 ) ) . 
The restricted search space is created by eliminating regions that are further away from the best hyper-parameters of the training tasks . Another prominent direction for zero-shot HPO depends heavily on engineered meta-features , i.e . dataset characteristics ( Vanschoren ( 2018 ) ) , to measure the similarity of datasets . Following the assumption that the responses of similar datasets behave similarly to the hyper-parameters , it has been shown that even the simplest of meta-features ( Bardenet et al . ( 2013 ) ) improve the performance of single task BO algorithms ( Feurer et al . ( 2014 ; 2015 ) ) . The target response is initialized with the top-performing hyper-parameters of the dataset nearest neighbor in the meta-feature space . The shortcomings of using engineered meta-features are that they are hard to define ( Leite & Brazdil ( 2005 ) ) , and are often selected through trial-and-error or expert domain knowledge . As a remedy , replacing engineered meta-features with learned meta-features ( Jomaa et al . ( 2019 ) ) compensates for such limitations , by producing expressive meta-features agnostic to any meta-task , such as HPO . Zero-shot HPO is also posed as an optimization problem that aims to minimize the meta-loss over a collection of datasets ( Wistuba et al . ( 2015a ) ) by replacing the discrete minimum function with a differentiable softmin function as an approximation . The initial configurations boost the single task BO without any meta-features . In ( Wistuba et al . ( 2015b ) ) , hyper-parameter combinations are assigned a static ranking based on the cumulative average normalized error , and dataset similarity is estimated based on the relative ranking of these combinations . Winkelmolen et al . ( 2020 ) introduce a Bayesian Optimization solution for zero-shot HPO by iteratively fitting a surrogate model over the observed responses of different tasks , and selecting the next hyper-parameters and datasets that minimize the aggregated observed loss . Aside from zero-shot HPO , transfer learning is employed by learning better response models ( Wistuba et al . ( 2016 ) ) based on the similarity of the response . Feurer et al . ( 2018 ) propose an ensemble model for BO by building the target response model as a weighted sum of the predictions of base models as well as the target model . In addition to the transferable response models , Volpp et al . ( 2019 ) design a transferable acquisition function as a policy for hyper-parameter optimization defined in a reinforcement learning framework . As a replacement to the standard Gaussian process , Perrone et al . ( 2018 ) train a multi-task adaptive Bayesian linear regression model with a shared feature extractor that provides context information for each independent task . In contrast to the literature , we formulate the problem of zero-shot HPO as a gray-box function optimization problem , by designing a universal response model defined over the combined domain of datasets and hyper-parameters . We rely on the embeddings to estimate the similarities across datasets and design a novel multi-task optimization objective to regress directly on the response . This allows us to delineate from the complexity paired with Bayesian uncertainty , as well as the trouble of engineering similarity measures . 3 HYPER-PARAMETER OPTIMIZATION . 
Consider a dataset D = { ( x ( Train ) , y ( Train ) ) , ( x ( Val ) , y ( Val ) ) , ( x ( Test ) , y ( Test ) ) } for a supervised learning task , with training , validation and test splits of predictors x ∈ X and targets y ∈ Y . We aim at training a parametric approximation of the target using ŷ : = f ( θ , λ ) : X → Y , where θ ∈ Θ denotes the parameters and λ ∈ Λ its hyper-parameters , by minimizing a loss function L : Y × Y → R as : λ∗ = arg min λ∈Λ L ( y ( Val ) , f ( x ( Val ) ; θ∗ , λ ) ) s.t . θ∗ = arg min θ∈Θ L ( y ( Train ) , f ( x ( Train ) ; θ , λ ) ) ( 1 ) We hereafter denote the validation error as the response ` ( λ ) : = L ( y ( Val ) , f ( x ( Val ) ; θ∗ , λ ) ) . Unfortunately , a direct optimization of the response ` ( λ ) in terms of λ is not trivial , because θ∗ is the result of the minimization problem and its gradients with respect to λ are not easy to compute . Instead , in order to learn the optimal hyper-parameters λ we train a probabilistic surrogate ˆ̀ ( λ ; β ) : Λ× B → R parameterized by β ∈ B , with B as the space of response model parameters , that minimizes the log-likelihood of approximating the response ` ( λ ) over a set of K evaluations S : = { ( λ1 , ` ( λ1 ) ) , . . . , ( λK , ` ( λK ) ) } . We denote P as the probability of estimating the response given a surrogate model . Given the surrogate , the next hyper-parameter to be evaluated λ ( next ) is computed by maximizing an acquisition function A ( e.g . EI ( Močkus ( 1975 ) ) as : λ ( next ) : = arg max λ∈Λ A ( ˆ̀ ( λ ; β∗ ) ) s.t . β∗ : = arg min β∈B K∑ k=1 lnP ( ` ( λk ) , ˆ̀ ( λk ; β ) ) ( 2 ) 4 META-LEARNING OF CONDITIONAL GRAY-BOX SURROGATES . Let us define a collection of T datasets as { D ( 1 ) , . . . , D ( T ) } and let ` ( t ) ( λ ) measure the response of the hyper-parameter λ on the t-th dataset D ( t ) . Furthermore , assume we have previously evaluated K ( t ) many hyper-parameters λ ( t ) k , k ∈ { 1 , . . . , K ( t ) } on that particular dataset . We condition the surrogate ˆ̀ to capture the characteristics of the t-th dataset , by taking as input the meta-features representation of the dataset as φ ( t ) . Therefore , a dataset-aware surrogate can be trained using meta-learning over a cumulative objective function O ( β ) as : O ( β ) : = T∑ t=1 K ( t ) ∑ k=1 ( ` ( t ) ( λ ( t ) k ) − ˆ̀ ( λ ( t ) k , φ ( t ) ; β ) ) 2 ( 3 ) 4.1 THE META-FEATURE EXTRACTOR . Introducing engineered meta-features has had a significant impact on hyper-parameter optimization . However , learning meta-features across datasets of varying schema in a task-agnostic setting provides more representative characteristics than to rely on hard-to-tune empirical estimates . The meta-feature extractor is a set-based function ( Zaheer et al . ( 2017 ) ) that presents itself as an extended derivation of the Kolmogorov-Arnold representation theorem ( Krková ( 1992 ) ) , which states that a multi-variate function φ can be defined as an aggregation of univariate functions over single variables , Appendix B . Each supervised ( tabular ) dataset D ( t ) : = ( x ( t ) , y ( t ) ) consists of instances x ( t ) ∈ X ∈ RN×M and targets y ( t ) ∈ Y ∈ RN×C such that N , M and C represent the number of instances , predictors and targets respectively . The dataset can be further represented as a set of smaller components , set of sets , D ( t ) = { ( x ( t ) i , m , y ( t ) i , c ) | m ∈ { 1 , . . . , M } , i ∈ { 1 , . . . , N } , c ∈ { 1 , . . . , C } } . 
A tabular dataset composed of columns ( predictors , targets ) and rows ( instances ) is reduced to single predictor-target pairs instead of an instance-target pairs . Based on this representation , a meta-feature extractor parameterized as a neural network ( Jomaa et al . ( 2019 ) ) , is formulated in Equation 4 . For simplicity of notation , we drop the superscript ( t ) unless needed . φ ( D ) = h ( 1 MC M∑ m=1 C∑ c=1 g ( 1 N N∑ i=1 f ( xi , m , yi , c ) ) ) ( 4 ) with f : R2 → RKf , g : RKf → RKg and h : RKg → RK represented by neural networks with Kf , Kg , and K output units , respectively . This set-based formulation captures the correlation between each variable ( predictor ) and its assigned target and is permutation-invariant , i.e . the output is unaffected by the ordering of the pairs in the set . Other set-based functions such as ( Edwards & Storkey ( 2016 ) ; Lee et al . ( 2019 ) ) can also be used for meta-feature extraction , however , we focus on this deep-set formulation ( Jomaa et al . ( 2019 ) ) because it is proven to work properly for hyper-parameter optimization . | In this paper, the authors formulated a new objective function for HPO, which included an additional regularization term based on the dataset similarity. The authors used the distance between the meta-features of selected datasets to measure this dataset similarity and assumed that similar datasets should have similar hyper-parameters. The experiments are complete and demonstrate the advantages of using this new optimization formulation. | science.mup_single_document_summarization.1022 | [
{
"content": "Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.\n\nTitle of the Paper:\nZero-shot Transfer Learning for Gray-box Hyper-parameter Optimization\n\nMain Content:\n1 INTRODUCTION . Within the research community , the concentration of efforts towards solving the problem of hyperparameter optimization ( HPO ) has been mainly through sequential model-based optimization ( SMBO ) , i.e . iteratively fitting a probabilistic response model , typically a Gaussian process ( Rasmussen ( 2003 ) ) , to a history of observations of losses of the target response , and suggesting the next hyper-parameters via a policy , acquisition function , that balances exploration and exploitation by leveraging the uncertainty in the posterior distribution ( Jones et al . ( 1998 ) ; Wistuba et al . ( 2018 ) ; Snoek et al . ( 2012 ) ) . However , even when solutions are defined in conjunction with transfer learning techniques ( Bardenet et al . ( 2013 ) ; Wistuba et al . ( 2016 ) ; Feurer et al . ( 2015 ) ) , the performance of SMBO is heavily affected by the choice of the initial hyper-parameters . Furthermore , SMBO is sequential by design and additional acceleration by parallelization is not possible . In this paper , we present the problem of zero-shot hyper-parameter optimization as a meta-learning objective that exploits dataset information as part of the surrogate model . Instead of treating HPO as a black-box function , operating blindly on the response of the hyper-parameters alone , we treat it as a gray-box function ( Whitley et al . ( 2016 ) ) , by capturing the relationship among the dataset meta-features and hyper-parameters to approximate the response model . In this paper , we propose a novel formulation of HPO as a conditional gray-box function optimization problem , Section 4 , that allows us to regress from the dataset/hyper-parameter pair directly onto the response . Driven by the assumption that similar datasets should have similar response approximations , we introduce an additional data-driven similarity regularization objective to penalize the difference between the predicted response of similar datasets . In Section 5 , we perform an extensive battery of experiments that highlight the capacity of our universal model to serve as a solution for : ( 1 ) zero-shot HPO as a stand-alone task , ( 2 ) zero-shot as an initialization strategy for Bayesian Optimization ( BO ) , ( 3 ) transferable sequential model-based optimization . A summary of our contributions is : • a formulation of the zero-shot hyper-parameter optimization problem in which our response model predicts upfront the full set of hyper-parameter configurations to try , without access to observations of losses of the target response ; • a novel multi-task optimization objective that models the inherent similarity between datasets and their respective responses ; • three new meta-datasets with different search spaces and cardinalities to facilitate the experiments and serve as a benchmark for future work ; • an empirical demonstration of the performance of our approach through a battery of experiments that address the aforementioned research aspects , and a comparison against state-of-the-art HPO solutions for transfer-learning . 2 RELATED WORK . The straightforward zero-shot approaches for HPO consist of random search ( Bergstra & Bengio ( 2012 ) ) , or simply selecting hyper-parameters that perform well on general tasks ( Brazdil et al . 
( 2003 ) ) . Some recent work has also shown that simply selecting random hyper-parameters from a restricted search space significantly outperforms existing solutions , and improves the performance of conventional SMBO approaches ( Perrone et al . ( 2019 ) ) . The restricted search space is created by eliminating regions that are further away from the best hyper-parameters of the training tasks . Another prominent direction for zero-shot HPO depends heavily on engineered meta-features , i.e . dataset characteristics ( Vanschoren ( 2018 ) ) , to measure the similarity of datasets . Following the assumption that the responses of similar datasets behave similarly to the hyper-parameters , it has been shown that even the simplest of meta-features ( Bardenet et al . ( 2013 ) ) improve the performance of single task BO algorithms ( Feurer et al . ( 2014 ; 2015 ) ) . The target response is initialized with the top-performing hyper-parameters of the dataset nearest neighbor in the meta-feature space . The shortcomings of using engineered meta-features are that they are hard to define ( Leite & Brazdil ( 2005 ) ) , and are often selected through trial-and-error or expert domain knowledge . As a remedy , replacing engineered meta-features with learned meta-features ( Jomaa et al . ( 2019 ) ) compensates for such limitations , by producing expressive meta-features agnostic to any meta-task , such as HPO . Zero-shot HPO is also posed as an optimization problem that aims to minimize the meta-loss over a collection of datasets ( Wistuba et al . ( 2015a ) ) by replacing the discrete minimum function with a differentiable softmin function as an approximation . The initial configurations boost the single task BO without any meta-features . In ( Wistuba et al . ( 2015b ) ) , hyper-parameter combinations are assigned a static ranking based on the cumulative average normalized error , and dataset similarity is estimated based on the relative ranking of these combinations . Winkelmolen et al . ( 2020 ) introduce a Bayesian Optimization solution for zero-shot HPO by iteratively fitting a surrogate model over the observed responses of different tasks , and selecting the next hyper-parameters and datasets that minimize the aggregated observed loss . Aside from zero-shot HPO , transfer learning is employed by learning better response models ( Wistuba et al . ( 2016 ) ) based on the similarity of the response . Feurer et al . ( 2018 ) propose an ensemble model for BO by building the target response model as a weighted sum of the predictions of base models as well as the target model . In addition to the transferable response models , Volpp et al . ( 2019 ) design a transferable acquisition function as a policy for hyper-parameter optimization defined in a reinforcement learning framework . As a replacement to the standard Gaussian process , Perrone et al . ( 2018 ) train a multi-task adaptive Bayesian linear regression model with a shared feature extractor that provides context information for each independent task . In contrast to the literature , we formulate the problem of zero-shot HPO as a gray-box function optimization problem , by designing a universal response model defined over the combined domain of datasets and hyper-parameters . We rely on the embeddings to estimate the similarities across datasets and design a novel multi-task optimization objective to regress directly on the response . 
This allows us to delineate from the complexity paired with Bayesian uncertainty , as well as the trouble of engineering similarity measures . 3 HYPER-PARAMETER OPTIMIZATION . Consider a dataset D = { ( x ( Train ) , y ( Train ) ) , ( x ( Val ) , y ( Val ) ) , ( x ( Test ) , y ( Test ) ) } for a supervised learning task , with training , validation and test splits of predictors x ∈ X and targets y ∈ Y . We aim at training a parametric approximation of the target using ŷ : = f ( θ , λ ) : X → Y , where θ ∈ Θ denotes the parameters and λ ∈ Λ its hyper-parameters , by minimizing a loss function L : Y × Y → R as : λ∗ = arg min λ∈Λ L ( y ( Val ) , f ( x ( Val ) ; θ∗ , λ ) ) s.t . θ∗ = arg min θ∈Θ L ( y ( Train ) , f ( x ( Train ) ; θ , λ ) ) ( 1 ) We hereafter denote the validation error as the response ` ( λ ) : = L ( y ( Val ) , f ( x ( Val ) ; θ∗ , λ ) ) . Unfortunately , a direct optimization of the response ` ( λ ) in terms of λ is not trivial , because θ∗ is the result of the minimization problem and its gradients with respect to λ are not easy to compute . Instead , in order to learn the optimal hyper-parameters λ we train a probabilistic surrogate ˆ̀ ( λ ; β ) : Λ× B → R parameterized by β ∈ B , with B as the space of response model parameters , that minimizes the log-likelihood of approximating the response ` ( λ ) over a set of K evaluations S : = { ( λ1 , ` ( λ1 ) ) , . . . , ( λK , ` ( λK ) ) } . We denote P as the probability of estimating the response given a surrogate model . Given the surrogate , the next hyper-parameter to be evaluated λ ( next ) is computed by maximizing an acquisition function A ( e.g . EI ( Močkus ( 1975 ) ) as : λ ( next ) : = arg max λ∈Λ A ( ˆ̀ ( λ ; β∗ ) ) s.t . β∗ : = arg min β∈B K∑ k=1 lnP ( ` ( λk ) , ˆ̀ ( λk ; β ) ) ( 2 ) 4 META-LEARNING OF CONDITIONAL GRAY-BOX SURROGATES . Let us define a collection of T datasets as { D ( 1 ) , . . . , D ( T ) } and let ` ( t ) ( λ ) measure the response of the hyper-parameter λ on the t-th dataset D ( t ) . Furthermore , assume we have previously evaluated K ( t ) many hyper-parameters λ ( t ) k , k ∈ { 1 , . . . , K ( t ) } on that particular dataset . We condition the surrogate ˆ̀ to capture the characteristics of the t-th dataset , by taking as input the meta-features representation of the dataset as φ ( t ) . Therefore , a dataset-aware surrogate can be trained using meta-learning over a cumulative objective function O ( β ) as : O ( β ) : = T∑ t=1 K ( t ) ∑ k=1 ( ` ( t ) ( λ ( t ) k ) − ˆ̀ ( λ ( t ) k , φ ( t ) ; β ) ) 2 ( 3 ) 4.1 THE META-FEATURE EXTRACTOR . Introducing engineered meta-features has had a significant impact on hyper-parameter optimization . However , learning meta-features across datasets of varying schema in a task-agnostic setting provides more representative characteristics than to rely on hard-to-tune empirical estimates . The meta-feature extractor is a set-based function ( Zaheer et al . ( 2017 ) ) that presents itself as an extended derivation of the Kolmogorov-Arnold representation theorem ( Krková ( 1992 ) ) , which states that a multi-variate function φ can be defined as an aggregation of univariate functions over single variables , Appendix B . Each supervised ( tabular ) dataset D ( t ) : = ( x ( t ) , y ( t ) ) consists of instances x ( t ) ∈ X ∈ RN×M and targets y ( t ) ∈ Y ∈ RN×C such that N , M and C represent the number of instances , predictors and targets respectively . 
The dataset can be further represented as a set of smaller components , set of sets , D ( t ) = { ( x ( t ) i , m , y ( t ) i , c ) | m ∈ { 1 , . . . , M } , i ∈ { 1 , . . . , N } , c ∈ { 1 , . . . , C } } . A tabular dataset composed of columns ( predictors , targets ) and rows ( instances ) is reduced to single predictor-target pairs instead of an instance-target pairs . Based on this representation , a meta-feature extractor parameterized as a neural network ( Jomaa et al . ( 2019 ) ) , is formulated in Equation 4 . For simplicity of notation , we drop the superscript ( t ) unless needed . φ ( D ) = h ( 1 MC M∑ m=1 C∑ c=1 g ( 1 N N∑ i=1 f ( xi , m , yi , c ) ) ) ( 4 ) with f : R2 → RKf , g : RKf → RKg and h : RKg → RK represented by neural networks with Kf , Kg , and K output units , respectively . This set-based formulation captures the correlation between each variable ( predictor ) and its assigned target and is permutation-invariant , i.e . the output is unaffected by the ordering of the pairs in the set . Other set-based functions such as ( Edwards & Storkey ( 2016 ) ; Lee et al . ( 2019 ) ) can also be used for meta-feature extraction , however , we focus on this deep-set formulation ( Jomaa et al . ( 2019 ) ) because it is proven to work properly for hyper-parameter optimization .",
"role": "user"
},
{
"content": "In this papar, the authors formulated a new objective function for HBO, which included an additional regularization term based on the dataset similarity. The authors used the distance between the meta-features of selected datasets to measure this dataset similarity and assumpted that similar datasets should have similar hyper-paprameters. The experiments are complete and demonstrate the advantages of using this new optimization formulation.",
"role": "assistant"
}
]
|
science.evidence_inference | Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:
- Treatment: The specific medical procedure or drug being evaluated.
- Control: The standard or placebo used for comparison. If absent, use `null`.
- Result: The health outcome being assessed.
- Impact: The observed effect of the treatment on the result, categorized as: ("increased", "decreased", "no change").
- Justification: The supporting details for the observed impact.
Format the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].
Abstract:
ABSTRACT: Hepatitis C virus genotype 2 (HCV-2) slow responders poorly respond to 24 weeks of peginterferon (Peg-IFN) plus ribavirin (RBV). We evaluated the efficacy of extended 48-week regimen and the role of interleukin-28B (IL-28B) genotype in this clinical setting. Treatment-naïve HCV-2 patients not achieving rapid virologic response (RVR) by Peg-IFN alfa-2a 180 μg/week plus weight-based RBV (1,000–1,200 mg/day, cutoff body weight of 75 kg) were randomly assigned to receive a total duration of 48 (n = 94) or 24 (n = 93) weeks of therapy. The primary endpoint was sustained virologic response (SVR). Baseline patient characteristics to predict SVR were analyzed. Patients receiving 48 weeks of treatment had a greater SVR rate than those receiving 24 weeks of treatment (70.2% versus 46.2%, P = 0.001). Compared to patients treated for 24 weeks, the SVR rate in those treated for 48 weeks increased by 10.9% [95% CI: −5.9% to 27.7%] and 65.6% [95% CI: 44.5% to 86.7%] if they had IL-28B rs8099917 TT genotype, and GT/GG genotype, respectively (interaction P = 0.002). In conclusion, 48-week treatment with Peg-IFN plus weight-based RBV provides a greater SVR rate than 24-week treatment in treatment-naïve HCV-2 patients with unfavorable IL-28B genotypes who fail to achieve RVR.
Extracted Data: | [["peginterferon (Peg-IFN) plus ribavirin (RBV) for 48 weeks", "peginterferon (Peg-IFN) plus ribavirin (RBV) for 24 weeks", "The SVR rates if the patients had IL-28B rs8099917 TT genotype, and GT/GG genotype", "significantly increased", "Compared to patients treated for 24 weeks, the SVR rate in those treated for 48 weeks increased by 10.9% [95% CI: \u22125.9% to 27.7%] and 65.6% [95% CI: 44.5% to 86.7%] if they had IL-28B rs8099917 TT genotype, and GT/GG genotype, respectively (interaction P = 0.002)."]] | science.evidence_inference.130 | [
{
"content": "Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:\n\n- Treatment: The specific medical procedure or drug being evaluated.\n- Control: The standard or placebo used for comparison. If absent, use `null`.\n- Result: The health outcome being assessed.\n- Impact: The observed effect of the treatment on the result, categorized as: (\"increased\", \"decreased\", \"no change\").\n- Justification: The supporting details for the observed impact.\n\nFormat the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].\n\nAbstract:\nABSTRACT: Hepatitis C virus genotype 2 (HCV-2) slow responders poorly respond to 24 weeks of peginterferon (Peg-IFN) plus ribavirin (RBV). We evaluated the efficacy of extended 48-week regimen and the role of interleukin-28B (IL-28B) genotype in this clinical setting. Treatment-naïve HCV-2 patients not achieving rapid virologic response (RVR) by Peg-IFN alfa-2a 180 μg/week plus weight-based RBV (1,000–1,200 mg/day, cutoff body weight of 75 kg) were randomly assigned to receive a total duration of 48 (n = 94) or 24 (n = 93) weeks of therapy. The primary endpoint was sustained virologic response (SVR). Baseline patient characteristics to predict SVR were analyzed. Patients receiving 48 weeks of treatment had a greater SVR rate than those receiving 24 weeks of treatment (70.2% versus 46.2%, P = 0.001). Compared to patients treated for 24 weeks, the SVR rate in those treated for 48 weeks increased by 10.9% [95% CI: −5.9% to 27.7%] and 65.6% [95% CI: 44.5% to 86.7%] if they had IL-28B rs8099917 TT genotype, and GT/GG genotype, respectively (interaction P = 0.002). In conclusion, 48-week treatment with Peg-IFN plus weight-based RBV provides a greater SVR rate than 24-week treatment in treatment-naïve HCV-2 patients with unfavorable IL-28B genotypes who fail to achieve RVR.\n\nExtracted Data:",
"role": "user"
},
{
"content": "[[\"peginterferon (Peg-IFN) plus ribavirin (RBV) for 48 weeks\", \"peginterferon (Peg-IFN) plus ribavirin (RBV) for 24 weeks\", \"The SVR rates if the patients had IL-28B rs8099917 TT genotype, and GT/GG genotype\", \"significantly increased\", \"Compared to patients treated for 24 weeks, the SVR rate in those treated for 48 weeks increased by 10.9% [95% CI: \\u22125.9% to 27.7%] and 65.6% [95% CI: 44.5% to 86.7%] if they had IL-28B rs8099917 TT genotype, and GT/GG genotype, respectively (interaction P = 0.002).\"]]",
"role": "assistant"
}
]
|
science.evidence_inference | Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:
- Treatment: The specific medical procedure or drug being evaluated.
- Control: The standard or placebo used for comparison. If absent, use `null`.
- Result: The health outcome being assessed.
- Impact: The observed effect of the treatment on the result, categorized as: ("increased", "decreased", "no change").
- Justification: The supporting details for the observed impact.
Format the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].
Abstract:
ABSTRACT.BACKGROUND:: Premature neonates' feeding is of great importance due to its effective role in their growth. These neonates should reach an independent oral nutrition stage before being discharged from the Neonatal Intensive care Unit. Therefore, the researcher decided to conduct a study on the effect of palady and cup feeding on premature neonates' weight gain and their reaching full oral feeding time interval.
ABSTRACT.MATERIALS AND METHODS:: This is a clinical trial with a quantitative design conducted on 69 premature infants (gestational age between 29 and 32 weeks) who were assigned to cup (n = 34) and palady (n = 35) feeding groups through random allocation. The first feeding was administrated either by cup or palady method in each shift within seven sequential days (total of 21 cup and palady feedings). Then, the rest of feeding was administrated by gavage.
ABSTRACT.RESULTS:: Mean hospitalization time (cup = 39.01 and palady = 30.4; P < 0.001) and mean time interval to reach full oral feeding (cup = 33.7 and palady = 24.1; P < 0.001) were significantly lower in palady group compared to cup group. Mean weight changes of neonates 7 weeks after the intervention compared to those in the beginning of the intervention were significantly more in palady group compared to the cup group (cup = 146.7 and palady = 198.8; P < 0.001).
ABSTRACT.CONCLUSIONS:: The neonates in palady group reached full oral feeding earlier than those of cup group. Subjects' weight gain was also higher in palady group compared to the cup group. Premature neonates with over 30 weeks of gestational age and physiological stability can be fed by palady.
Extracted Data: | [["palady", "cup", "Mean weight changes of neonates 7 weeks", "significantly increased", "Mean weight changes of neonates 7 weeks after the intervention compared to those in the beginning of the intervention were significantly more in palady group compared to the cup group (cup = 146.7 and palady = 198.8; P \u003c 0.001)."], ["palady", "cup", "Mean hospitalization time", "significantly decreased", "Mean hospitalization time (cup = 39.01 and palady = 30.4; P \u003c 0.001) and mean time interval to reach full oral feeding (cup = 33.7 and palady = 24.1; P \u003c 0.001) were significantly lower in palady group compared to cup group."]] | science.evidence_inference.43 | [
{
"content": "Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:\n\n- Treatment: The specific medical procedure or drug being evaluated.\n- Control: The standard or placebo used for comparison. If absent, use `null`.\n- Result: The health outcome being assessed.\n- Impact: The observed effect of the treatment on the result, categorized as: (\"increased\", \"decreased\", \"no change\").\n- Justification: The supporting details for the observed impact.\n\nFormat the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].\n\nAbstract:\nABSTRACT.BACKGROUND:: Premature neonates' feeding is of great importance due to its effective role in their growth. These neonates should reach an independent oral nutrition stage before being discharged from the Neonatal Intensive care Unit. Therefore, the researcher decided to conduct a study on the effect of palady and cup feeding on premature neonates' weight gain and their reaching full oral feeding time interval.\n\nABSTRACT.MATERIALS AND METHODS:: This is a clinical trial with a quantitative design conducted on 69 premature infants (gestational age between 29 and 32 weeks) who were assigned to cup (n = 34) and palady (n = 35) feeding groups through random allocation. The first feeding was administrated either by cup or palady method in each shift within seven sequential days (total of 21 cup and palady feedings). Then, the rest of feeding was administrated by gavage.\n\nABSTRACT.RESULTS:: Mean hospitalization time (cup = 39.01 and palady = 30.4; P < 0.001) and mean time interval to reach full oral feeding (cup = 33.7 and palady = 24.1; P < 0.001) were significantly lower in palady group compared to cup group. Mean weight changes of neonates 7 weeks after the intervention compared to those in the beginning of the intervention were significantly more in palady group compared to the cup group (cup = 146.7 and palady = 198.8; P < 0.001).\n\nABSTRACT.CONCLUSIONS:: The neonates in palady group reached full oral feeding earlier than those of cup group. Subjects' weight gain was also higher in palady group compared to the cup group. Premature neonates with over 30 weeks of gestational age and physiological stability can be fed by palady.\n\nExtracted Data:",
"role": "user"
},
{
"content": "[[\"palady\", \"cup\", \"Mean weight changes of neonates 7 weeks\", \"significantly increased\", \"Mean weight changes of neonates 7 weeks after the intervention compared to those in the beginning of the intervention were significantly more in palady group compared to the cup group (cup = 146.7 and palady = 198.8; P \\u003c 0.001).\"], [\"palady\", \"cup\", \"Mean hospitalization time\", \"significantly decreased\", \"Mean hospitalization time (cup = 39.01 and palady = 30.4; P \\u003c 0.001) and mean time interval to reach full oral feeding (cup = 33.7 and palady = 24.1; P \\u003c 0.001) were significantly lower in palady group compared to cup group.\"]]",
"role": "assistant"
}
]
|
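The evidence-inference records above and below all share the same five-field extraction format, so a small validator is sketched here to make the expected shape concrete. It is illustrative only: the field order follows the prompt ([treatment, control, result, impact, justification]), `null` controls are accepted, and the impact check matches loosely because the gold answers use qualified phrases such as "significantly decreased" and "no significant difference" rather than only the bare labels listed in the prompt.

```python
import json

# Effect directions seen in the prompts and gold answers; answers may prefix
# them with qualifiers such as "significantly", so substring matching is used.
IMPACTS = ("increased", "decreased", "no change", "no significant difference")

def validate_extraction(raw: str) -> list:
    """Parse and sanity-check an evidence-inference answer string."""
    entries = json.loads(raw)
    if not isinstance(entries, list):
        raise ValueError("top level must be a JSON array")
    for i, entry in enumerate(entries):
        if not isinstance(entry, list) or len(entry) != 5:
            raise ValueError(f"entry {i} must have exactly 5 elements")
        treatment, control, result, impact, justification = entry
        if control is not None and not isinstance(control, str):
            raise ValueError(f"entry {i}: control must be a string or null")
        if not isinstance(impact, str) or not any(tag in impact for tag in IMPACTS):
            raise ValueError(f"entry {i}: unrecognised impact {impact!r}")
    return entries

if __name__ == "__main__":
    sample = '[["palady", "cup", "Mean hospitalization time", "significantly decreased", "..."]]'
    print(validate_extraction(sample))
```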
science.scifact_entailment | Determine the relationship between them and generate a JSON response. Scientific Claim:
ITAM phosphorylation allows for the transfer of the T cell receptor (TCR) signal from the echo-domain to the cytoplasmic tail of the T cell receptor (TCR).
Research Abstract:
T cell receptor (TCR-CD3) triggering involves both receptor clustering and conformational changes at the cytoplasmic tails of the CD3 subunits.
The mechanism by which TCRalphabeta ligand binding confers conformational changes to CD3 is unknown.
By using well-defined ligands, we showed that induction of the conformational change requires both multivalent engagement and the mobility restriction of the TCR-CD3 imposed by the plasma membrane.
The conformational change is elicited by cooperative rearrangements of two TCR-CD3 complexes and does not require accompanying changes in the structure of the TCRalphabeta ectodomains.
This conformational change at CD3 reverts upon ligand dissociation and is required for T cell activation.
Thus, our permissive geometry model provides a molecular mechanism that rationalizes how the information of ligand binding to TCRalphabeta is transmitted to the CD3 subunits and to the intracellular signaling machinery. | {"verdict": "NEI", "evidence": []} | science.scifact_entailment.356 | [
{
"content": "Determine the relationship between them and generate a JSON response. Scientific Claim: \nITAM phosphorylation allows for the transfer of the T cell receptor (TCR) signal from the echo-domain to the cytoplasmic tail of the T cell receptor (TCR).\n\nResearch Abstract:\nT cell receptor (TCR-CD3) triggering involves both receptor clustering and conformational changes at the cytoplasmic tails of the CD3 subunits.\nThe mechanism by which TCRalphabeta ligand binding confers conformational changes to CD3 is unknown.\nBy using well-defined ligands, we showed that induction of the conformational change requires both multivalent engagement and the mobility restriction of the TCR-CD3 imposed by the plasma membrane.\nThe conformational change is elicited by cooperative rearrangements of two TCR-CD3 complexes and does not require accompanying changes in the structure of the TCRalphabeta ectodomains.\nThis conformational change at CD3 reverts upon ligand dissociation and is required for T cell activation.\nThus, our permissive geometry model provides a molecular mechanism that rationalizes how the information of ligand binding to TCRalphabeta is transmitted to the CD3 subunits and to the intracellular signaling machinery.",
"role": "user"
},
{
"content": "{\"verdict\": \"NEI\", \"evidence\": []}",
"role": "assistant"
}
]
|
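The entailment record above returns a small fixed-schema object. The checker below is a sketch under stated assumptions: only "NEI" appears in this record, so "SUPPORT" and "CONTRADICT" are assumed to be the remaining verdict labels, and the evidence field is assumed to hold verbatim sentences from the abstract (empty when the verdict is "NEI").

```python
import json

# "NEI" is taken from the record above; "SUPPORT" and "CONTRADICT" are assumed
# to be the other two verdict labels.
VERDICTS = {"SUPPORT", "CONTRADICT", "NEI"}

def check_verdict(raw: str, abstract_sentences: list) -> dict:
    """Validate a {"verdict": ..., "evidence": [...]} answer string."""
    answer = json.loads(raw)
    if answer.get("verdict") not in VERDICTS:
        raise ValueError(f"unknown verdict: {answer.get('verdict')!r}")
    evidence = answer.get("evidence", [])
    if not isinstance(evidence, list):
        raise ValueError("evidence must be a list")
    # An NEI verdict is assumed to carry no supporting sentences.
    if answer["verdict"] == "NEI" and evidence:
        raise ValueError("NEI verdicts should have empty evidence")
    # Every cited sentence should come verbatim from the abstract.
    missing = [s for s in evidence if s not in abstract_sentences]
    if missing:
        raise ValueError(f"{len(missing)} evidence sentence(s) not found in abstract")
    return answer

if __name__ == "__main__":
    print(check_verdict('{"verdict": "NEI", "evidence": []}', []))
```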
science.evidence_inference | Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:
- Treatment: The specific medical procedure or drug being evaluated.
- Control: The standard or placebo used for comparison. If absent, use `null`.
- Result: The health outcome being assessed.
- Impact: The observed effect of the treatment on the result, categorized as: ("increased", "decreased", "no change").
- Justification: The supporting details for the observed impact.
Format the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].
Abstract:
ABSTRACT.BACKGROUND:: Angiogenesis is crucial for glioblastoma growth, and anti-vascular endothelial growth factor agents are widely used in recurrent glioblastoma patients. The number of circulating endothelial cells (CECs) is a surrogate marker for endothelial damage. We assessed their kinetics and explored their prognostic value in patients with recurrent glioblastoma.
ABSTRACT.METHODS:: In this side study of the BELOB trial, 141 patients with recurrent glioblastoma were randomised to receive single-agent bevacizumab or lomustine, or bevacizumab plus lomustine. Before treatment, after 4 weeks and after 6 weeks of treatment, CECs were enumerated.
ABSTRACT.RESULTS:: The number of CECs increased during treatment with bevacizumab plus lomustine, but not during treatment in the single-agent arms. In patients treated with lomustine single agent, higher absolute CEC numbers after 4 weeks (log10CEC hazard ratio (HR) 0.41, 95% CI 0.18–0.91) and 6 weeks (log10CEC HR 0.16, 95% CI 0.05–0.56) of treatment were associated with improved overall survival (OS). Absolute CEC numbers in patients receiving bevacizumab plus lomustine or bevacizumab single agent were not associated with OS.
ABSTRACT.CONCLUSION:: CEC numbers increased during treatment with bevacizumab plus lomustine but not during treatment with either agent alone, suggesting that this combination induced the greatest vascular damage. Although the absolute number of CECs was not associated with OS in patients treated with bevacizumab either alone or in combination, they could serve as a marker in glioblastoma patients receiving lomustine single agent.
Extracted Data: | [["bevacizumab plus lomustine", "baseline", "circulating endothelial cells (CECs) after 4 and 6 weeks", "significantly increased", "The number of CECs increased during treatment with bevacizumab plus lomustine, but not during treatment in the single-agent arms."], ["bevacizumab ", "baseline", "circulating endothelial cells (CECs) after 4 and 6 weeks", "no significant difference", "The number of CECs increased during treatment with bevacizumab plus lomustine, but not during treatment in the single-agent arms."], ["lomustine", "baseline", "circulating endothelial cells (CECs) after 4 and 6 weeks", "no significant difference", "The number of CECs increased during treatment with bevacizumab plus lomustine, but not during treatment in the single-agent arms."]] | science.evidence_inference.166 | [
{
"content": "Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:\n\n- Treatment: The specific medical procedure or drug being evaluated.\n- Control: The standard or placebo used for comparison. If absent, use `null`.\n- Result: The health outcome being assessed.\n- Impact: The observed effect of the treatment on the result, categorized as: (\"increased\", \"decreased\", \"no change\").\n- Justification: The supporting details for the observed impact.\n\nFormat the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].\n\nAbstract:\nABSTRACT.BACKGROUND:: Angiogenesis is crucial for glioblastoma growth, and anti-vascular endothelial growth factor agents are widely used in recurrent glioblastoma patients. The number of circulating endothelial cells (CECs) is a surrogate marker for endothelial damage. We assessed their kinetics and explored their prognostic value in patients with recurrent glioblastoma.\n\nABSTRACT.METHODS:: In this side study of the BELOB trial, 141 patients with recurrent glioblastoma were randomised to receive single-agent bevacizumab or lomustine, or bevacizumab plus lomustine. Before treatment, after 4 weeks and after 6 weeks of treatment, CECs were enumerated.\n\nABSTRACT.RESULTS:: The number of CECs increased during treatment with bevacizumab plus lomustine, but not during treatment in the single-agent arms. In patients treated with lomustine single agent, higher absolute CEC numbers after 4 weeks (log10CEC hazard ratio (HR) 0.41, 95% CI 0.18–0.91) and 6 weeks (log10CEC HR 0.16, 95% CI 0.05–0.56) of treatment were associated with improved overall survival (OS). Absolute CEC numbers in patients receiving bevacizumab plus lomustine or bevacizumab single agent were not associated with OS.\n\nABSTRACT.CONCLUSION:: CEC numbers increased during treatment with bevacizumab plus lomustine but not during treatment with either agent alone, suggesting that this combination induced the greatest vascular damage. Although the absolute number of CECs was not associated with OS in patients treated with bevacizumab either alone or in combination, they could serve as a marker in glioblastoma patients receiving lomustine single agent.\n\nExtracted Data:",
"role": "user"
},
{
"content": "[[\"bevacizumab plus lomustine\", \"baseline\", \"circulating endothelial cells (CECs) after 4 and 6 weeks\", \"significantly increased\", \"The number of CECs increased during treatment with bevacizumab plus lomustine, but not during treatment in the single-agent arms.\"], [\"bevacizumab \", \"baseline\", \"circulating endothelial cells (CECs) after 4 and 6 weeks\", \"no significant difference\", \"The number of CECs increased during treatment with bevacizumab plus lomustine, but not during treatment in the single-agent arms.\"], [\"lomustine\", \"baseline\", \"circulating endothelial cells (CECs) after 4 and 6 weeks\", \"no significant difference\", \"The number of CECs increased during treatment with bevacizumab plus lomustine, but not during treatment in the single-agent arms.\"]]",
"role": "assistant"
}
]
|
science.discomat_te | | No. | Contents of oxide (mol%) | Contents of oxide (mol%) | Contents of oxide (mol%) | Temperature of glass transition (+-7) (degC) | Temperature of glass transition (+-7) (degC) | Density (+-0.03) (g/cm3) |
| No. | TiO2 | R2O3 | RO | T g1 | T g2 | Density (+-0.03) (g/cm3) |
| 5.3 | 40.8 | 32.4 | 26.7 | 585 | 679 | 4.45 |
| 5.7 | 39.1 | 35.3 | 25.6 | 565 | 649 | 4.40 |
| 6.5 | 37.8 | 37.5 | 24.7 | 556 | 599 | 4.24 |
Caption: Influence of the chemical composition on the temperature of glass transition and density in the glass compositions with spinodal liquation
Examine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:
{"value": "xx", "type": "Other"}
{"value": "xx", "type": "Glass_Compound_Amount", "constituent": "xx", "unit": "xx", "material": "xx"}
Process each cell sequentially, and for any unknown attributes, use "xx" as a placeholder. Each JSON should be on a new line.
Output: | {"value": "5.3", "type": "Other"}
{"value": "40.8", "type": "Glass_Compound_Amount", "constituent": "TiO2", "unit": "mol", "material": "5.3"}
{"value": "32.4", "type": "Other"}
{"value": "26.7", "type": "Other"}
{"value": "585", "type": "Other"}
{"value": "679", "type": "Other"}
{"value": "4.45", "type": "Other"}
{"value": "5.7", "type": "Other"}
{"value": "39.1", "type": "Glass_Compound_Amount", "constituent": "TiO2", "unit": "mol", "material": "5.7"}
{"value": "35.3", "type": "Other"}
{"value": "25.6", "type": "Other"}
{"value": "565", "type": "Other"}
{"value": "649", "type": "Other"}
{"value": "4.40", "type": "Other"}
{"value": "6.5", "type": "Other"}
{"value": "37.8", "type": "Glass_Compound_Amount", "constituent": "TiO2", "unit": "mol", "material": "6.5"}
{"value": "37.5", "type": "Other"}
{"value": "24.7", "type": "Other"}
{"value": "556", "type": "Other"}
{"value": "599", "type": "Other"}
{"value": "4.24", "type": "Other"} | science.discomat_te.174 | [
{
"content": "| No. | Contents of oxide (mol%) | Contents of oxide (mol%) | Contents of oxide (mol%) | Temperature of glass transition (+-7) (degC) | Temperature of glass transition (+-7) (degC) | Density (+-0.03) (g/cm3) |\n| No. | TiO2 | R2O3 | RO | T g1 | T g2 | Density (+-0.03) (g/cm3) |\n| 5.3 | 40.8 | 32.4 | 26.7 | 585 | 679 | 4.45 |\n| 5.7 | 39.1 | 35.3 | 25.6 | 565 | 649 | 4.40 |\n| 6.5 | 37.8 | 37.5 | 24.7 | 556 | 599 | 4.24 |\n\nCaption: Influence of the chemical composition on the temperature of glass transition and density in the glass compositions with spinodal liquation\n\nExamine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:\n{\"value\": \"xx\", \"type\": \"Other\"}\n{\"value\": \"xx\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"xx\", \"unit\": \"xx\", \"material\": \"xx\"}\n\nProcess each cell sequentially, and for any unknown attributes, use \"xx\" as a placeholder. Each JSON should be on a new line.\n\nOutput:",
"role": "user"
},
{
"content": "{\"value\": \"5.3\", \"type\": \"Other\"}\n{\"value\": \"40.8\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"TiO2\", \"unit\": \"mol\", \"material\": \"5.3\"}\n{\"value\": \"32.4\", \"type\": \"Other\"}\n{\"value\": \"26.7\", \"type\": \"Other\"}\n{\"value\": \"585\", \"type\": \"Other\"}\n{\"value\": \"679\", \"type\": \"Other\"}\n{\"value\": \"4.45\", \"type\": \"Other\"}\n{\"value\": \"5.7\", \"type\": \"Other\"}\n{\"value\": \"39.1\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"TiO2\", \"unit\": \"mol\", \"material\": \"5.7\"}\n{\"value\": \"35.3\", \"type\": \"Other\"}\n{\"value\": \"25.6\", \"type\": \"Other\"}\n{\"value\": \"565\", \"type\": \"Other\"}\n{\"value\": \"649\", \"type\": \"Other\"}\n{\"value\": \"4.40\", \"type\": \"Other\"}\n{\"value\": \"6.5\", \"type\": \"Other\"}\n{\"value\": \"37.8\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"TiO2\", \"unit\": \"mol\", \"material\": \"6.5\"}\n{\"value\": \"37.5\", \"type\": \"Other\"}\n{\"value\": \"24.7\", \"type\": \"Other\"}\n{\"value\": \"556\", \"type\": \"Other\"}\n{\"value\": \"599\", \"type\": \"Other\"}\n{\"value\": \"4.24\", \"type\": \"Other\"}",
"role": "assistant"
}
]
|
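The table-to-JSON record above maps every numeric cell to either a plain {"value", "type": "Other"} object or a Glass_Compound_Amount object. The sketch below reproduces the gold output for the three rows shown; treating only the TiO2 column as a tracked constituent, with "mol" units and the row label reused as the material identifier, mirrors that particular answer rather than a general rule.

```python
import json

# Rows copied from the glass-composition table above:
# No. | TiO2 | R2O3 | RO | Tg1 | Tg2 | Density
ROWS = [
    ("5.3", "40.8", "32.4", "26.7", "585", "679", "4.45"),
    ("5.7", "39.1", "35.3", "25.6", "565", "649", "4.40"),
    ("6.5", "37.8", "37.5", "24.7", "556", "599", "4.24"),
]

def cells_to_json(rows):
    """Emit one JSON object per numeric cell, in the same order as the gold answer."""
    out = []
    for no, tio2, r2o3, ro, tg1, tg2, density in rows:
        out.append({"value": no, "type": "Other"})
        # Only the TiO2 content is labelled as a compound amount in the gold
        # answer; the row label doubles as the material identifier.
        out.append({"value": tio2, "type": "Glass_Compound_Amount",
                    "constituent": "TiO2", "unit": "mol", "material": no})
        for value in (r2o3, ro, tg1, tg2, density):
            out.append({"value": value, "type": "Other"})
    return out

if __name__ == "__main__":
    for obj in cells_to_json(ROWS):
        print(json.dumps(obj))
```

Running it prints the same twenty-one objects listed in the record's output column, one per line.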
science.evidence_inference | Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:
- Treatment: The specific medical procedure or drug being evaluated.
- Control: The standard or placebo used for comparison. If absent, use `null`.
- Result: The health outcome being assessed.
- Impact: The observed effect of the treatment on the result, categorized as: ("increased", "decreased", "no change").
- Justification: The supporting details for the observed impact.
Format the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].
Abstract:
ABSTRACT.OBJECTIVE: Macronutrient "preloads" can reduce postprandial glycemia by slowing gastric emptying and stimulating glucagon-like peptide-1 (GLP-1) secretion. An ideal preload would entail minimal additional energy intake and might be optimized by concurrent inhibition of dipeptidyl peptidase-4 (DPP-4). We evaluated the effects of a low-energy d-xylose preload, with or without sitagliptin, on gastric emptying, plasma intact GLP-1 concentrations, and postprandial glycemia in type 2 diabetes.
ABSTRACT.RESEARCH DESIGN AND METHODS: Twelve type 2 diabetic patients were studied on four occasions each. After 100 mg sitagliptin (S) or placebo (P) and an overnight fast, patients consumed a preload drink containing either 50 g d-xylose (X) or 80 mg sucralose (control [C]), followed after 40 min by a mashed potato meal labeled with 13C-octanoate. Blood was sampled at intervals. Gastric emptying was determined.
ABSTRACT.RESULTS: Both peak blood glucose and the amplitude of glycemic excursion were lower after PX and SC than PC (P < 0.01 for each) and were lowest after SX (P < 0.05 for each), while overall blood glucose was lower after SX than PC (P < 0.05). The postprandial insulin-to-glucose ratio was attenuated (P < 0.05) and gastric emptying was slower (P < 0.01) after d-xylose, without any effect of sitagliptin. Plasma GLP-1 concentrations were higher after d-xylose than control only before the meal (P < 0.05) but were sustained postprandially when combined with sitagliptin (P < 0.05).
ABSTRACT.CONCLUSIONS: In type 2 diabetes, acute administration of a d-xylose preload reduces postprandial glycemia and enhances the effect of a DPP-4 inhibitor.
Extracted Data: | [["sitagliptin and d-xylose", "placebo and sucralose", "peak blood glucose", "significantly decreased", "Both peak blood glucose and the amplitude of glycemic excursion were lower after PX and SC than PC (P \u003c 0.01 for each) and were lowest after SX (P \u003c 0.05 for each), while overall blood glucose was lower after SX than PC (P \u003c 0.05)."], ["sitagliptin and d-xylose", "placebo and sucralose", "amplitude of glycemic excursion", "significantly decreased", "Both peak blood glucose and the amplitude of glycemic excursion were lower after PX and SC than PC (P \u003c 0.01 for each) and were lowest after SX (P \u003c 0.05 for each), while overall blood glucose was lower after SX than PC (P \u003c 0.05)."], ["d-xylose", "sucralose", "plasma GLP-1 before consuming the meal", "significantly increased", "Plasma GLP-1 concentrations were higher after d-xylose than control only before the meal (P \u003c 0.05) but were sustained postprandially when combined with sitagliptin (P \u003c 0.05)."], ["sitagliptin and d-xylose", "placebo and sucralose", "overall blood glucose", "significantly decreased", "Both peak blood glucose and the amplitude of glycemic excursion were lower after PX and SC than PC (P \u003c 0.01 for each) and were lowest after SX (P \u003c 0.05 for each), while overall blood glucose was lower after SX than PC (P \u003c 0.05)."]] | science.evidence_inference.23 | [
{
"content": "Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:\n\n- Treatment: The specific medical procedure or drug being evaluated.\n- Control: The standard or placebo used for comparison. If absent, use `null`.\n- Result: The health outcome being assessed.\n- Impact: The observed effect of the treatment on the result, categorized as: (\"increased\", \"decreased\", \"no change\").\n- Justification: The supporting details for the observed impact.\n\nFormat the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].\n\nAbstract:\nABSTRACT.OBJECTIVE: Macronutrient \"preloads\" can reduce postprandial glycemia by slowing gastric emptying and stimulating glucagon-like peptide-1 (GLP-1) secretion. An ideal preload would entail minimal additional energy intake and might be optimized by concurrent inhibition of dipeptidyl peptidase-4 (DPP-4). We evaluated the effects of a low-energy d-xylose preload, with or without sitagliptin, on gastric emptying, plasma intact GLP-1 concentrations, and postprandial glycemia in type 2 diabetes.\n\nABSTRACT.RESEARCH DESIGN AND METHODS: Twelve type 2 diabetic patients were studied on four occasions each. After 100 mg sitagliptin (S) or placebo (P) and an overnight fast, patients consumed a preload drink containing either 50 g d-xylose (X) or 80 mg sucralose (control [C]), followed after 40 min by a mashed potato meal labeled with 13C-octanoate. Blood was sampled at intervals. Gastric emptying was determined.\n\nABSTRACT.RESULTS: Both peak blood glucose and the amplitude of glycemic excursion were lower after PX and SC than PC (P < 0.01 for each) and were lowest after SX (P < 0.05 for each), while overall blood glucose was lower after SX than PC (P < 0.05). The postprandial insulin-to-glucose ratio was attenuated (P < 0.05) and gastric emptying was slower (P < 0.01) after d-xylose, without any effect of sitagliptin. Plasma GLP-1 concentrations were higher after d-xylose than control only before the meal (P < 0.05) but were sustained postprandially when combined with sitagliptin (P < 0.05).\n\nABSTRACT.CONCLUSIONS: In type 2 diabetes, acute administration of a d-xylose preload reduces postprandial glycemia and enhances the effect of a DPP-4 inhibitor.\n\nExtracted Data:",
"role": "user"
},
{
"content": "[[\"sitagliptin and d-xylose\", \"placebo and sucralose\", \"peak blood glucose\", \"significantly decreased\", \"Both peak blood glucose and the amplitude of glycemic excursion were lower after PX and SC than PC (P \\u003c 0.01 for each) and were lowest after SX (P \\u003c 0.05 for each), while overall blood glucose was lower after SX than PC (P \\u003c 0.05).\"], [\"sitagliptin and d-xylose\", \"placebo and sucralose\", \"amplitude of glycemic excursion\", \"significantly decreased\", \"Both peak blood glucose and the amplitude of glycemic excursion were lower after PX and SC than PC (P \\u003c 0.01 for each) and were lowest after SX (P \\u003c 0.05 for each), while overall blood glucose was lower after SX than PC (P \\u003c 0.05).\"], [\"d-xylose\", \"sucralose\", \"plasma GLP-1 before consuming the meal\", \"significantly increased\", \"Plasma GLP-1 concentrations were higher after d-xylose than control only before the meal (P \\u003c 0.05) but were sustained postprandially when combined with sitagliptin (P \\u003c 0.05).\"], [\"sitagliptin and d-xylose\", \"placebo and sucralose\", \"overall blood glucose\", \"significantly decreased\", \"Both peak blood glucose and the amplitude of glycemic excursion were lower after PX and SC than PC (P \\u003c 0.01 for each) and were lowest after SX (P \\u003c 0.05 for each), while overall blood glucose was lower after SX than PC (P \\u003c 0.05).\"]]",
"role": "assistant"
}
]
|
science.biored_ner | Analyze the provided biomedical abstract and identify distinct entities. Focus on the specified categories: ["CellLine", "Variant", "Species", "Disease", "Chemical", "Gene"].
Ensure the output is strictly the JSON object.
Source Text:
Detection of EGFR mutations in plasma DNA from lung cancer patients by mass spectrometry genotyping is predictive of tumor EGFR status and response to EGFR inhibitors. AIMS: EGFR mutations now guide the clinical use of EGFR-targeted therapy in lung cancer. However, standard EGFR mutation analysis requires a minimum amount of tumor tissue, which may not be available in certain situations. In this study, we combined a mass spectrometry genotyping assay (Sequenom) with a mutant-enriched PCR (ME-PCR) to detect EGFR mutations in free plasma DNA from patients with lung cancer. METHOD: DNAs were extracted from 31 plasma samples from 31 patients and analyzed by both methods for EGFR Exon 19 deletion and EGFR L858R mutation. Results in plasma DNA samples were compared with EGFR mutation status obtained in tumor DNA (18/31 EGFR mutant). The relationship of EGFR mutation status in tumor and/or plasma samples to overall survival was assessed. RESULTS: The EGFR mutation status in plasma DNA was identical to the primary tumor in 61% of patients (19/31). By mass spectrometry genotyping, the plasma samples contained mutant DNA corresponding to 5/14 EGFR Exon 19 deletions and 3/4 EGFR L858R mutations previously diagnosed in the matched tumors. Two samples were positive in plasma DNA but negative in primary tumor tissue. Results were similar for samples studied by ME-PCR. For patients treated with erlotinib, overall survival was correlated with the presence of EGFR mutation in plasma and/or tumor tissue (p=0.002), with the two patients positive only in plasma DNA showing responses and favorable outcomes. CONCLUSION: The detection of EGFR mutations in plasma DNA samples by mass spectrometry genotyping and ME-PCR is feasible. A positive EGFR result in plasma DNA has a high predictive value for tumor EGFR status and for favorable clinical course on EGFR-targeted therapy and could therefore be useful in guiding clinical decisions in patients with insufficient or unavailable tumor specimens. | {"CellLine": [], "Chemical": ["erlotinib"], "Disease": ["tumor", "lung cancer", "tumors"], "Gene": ["EGFR"], "Species": ["patients"], "Variant": ["L858R"]} | science.biored_ner.35 | [
{
"content": "Analyze the provided biomedical abstract and identify distinct entities. Focus on the specified categories: [\"CellLine\", \"Variant\", \"Species\", \"Disease\", \"Chemical\", \"Gene\"].\n\nEnsure the output is strictly the JSON object.\n\nSource Text:\n\nDetection of EGFR mutations in plasma DNA from lung cancer patients by mass spectrometry genotyping is predictive of tumor EGFR status and response to EGFR inhibitors. AIMS: EGFR mutations now guide the clinical use of EGFR-targeted therapy in lung cancer. However, standard EGFR mutation analysis requires a minimum amount of tumor tissue, which may not be available in certain situations. In this study, we combined a mass spectrometry genotyping assay (Sequenom) with a mutant-enriched PCR (ME-PCR) to detect EGFR mutations in free plasma DNA from patients with lung cancer. METHOD: DNAs were extracted from 31 plasma samples from 31 patients and analyzed by both methods for EGFR Exon 19 deletion and EGFR L858R mutation. Results in plasma DNA samples were compared with EGFR mutation status obtained in tumor DNA (18/31 EGFR mutant). The relationship of EGFR mutation status in tumor and/or plasma samples to overall survival was assessed. RESULTS: The EGFR mutation status in plasma DNA was identical to the primary tumor in 61% of patients (19/31). By mass spectrometry genotyping, the plasma samples contained mutant DNA corresponding to 5/14 EGFR Exon 19 deletions and 3/4 EGFR L858R mutations previously diagnosed in the matched tumors. Two samples were positive in plasma DNA but negative in primary tumor tissue. Results were similar for samples studied by ME-PCR. For patients treated with erlotinib, overall survival was correlated with the presence of EGFR mutation in plasma and/or tumor tissue (p=0.002), with the two patients positive only in plasma DNA showing responses and favorable outcomes. CONCLUSION: The detection of EGFR mutations in plasma DNA samples by mass spectrometry genotyping and ME-PCR is feasible. A positive EGFR result in plasma DNA has a high predictive value for tumor EGFR status and for favorable clinical course on EGFR-targeted therapy and could therefore be useful in guiding clinical decisions in patients with insufficient or unavailable tumor specimens.",
"role": "user"
},
{
"content": "{\"CellLine\": [], \"Chemical\": [\"erlotinib\"], \"Disease\": [\"tumor\", \"lung cancer\", \"tumors\"], \"Gene\": [\"EGFR\"], \"Species\": [\"patients\"], \"Variant\": [\"L858R\"]}",
"role": "assistant"
}
]
|
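The NER record above expects a single JSON object keyed by six fixed entity categories, each holding a list of surface forms. The helper below only assembles that object from already-extracted (category, mention) pairs; the tagging itself is out of scope here, and the example mentions are copied from the gold answer above.

```python
import json

CATEGORIES = ("CellLine", "Variant", "Species", "Disease", "Chemical", "Gene")

def build_ner_output(pairs):
    """Group (category, mention) pairs into the fixed six-key JSON object."""
    output = {category: [] for category in CATEGORIES}
    for category, mention in pairs:
        if category not in output:
            raise ValueError(f"unknown category: {category!r}")
        if mention not in output[category]:   # keep first occurrence only
            output[category].append(mention)
    return output

if __name__ == "__main__":
    extracted = [
        ("Gene", "EGFR"),
        ("Chemical", "erlotinib"),
        ("Variant", "L858R"),
        ("Disease", "lung cancer"),
        ("Species", "patients"),
        ("Gene", "EGFR"),           # duplicate mention is dropped
    ]
    print(json.dumps(build_ner_output(extracted)))
```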
science.multicite_intent_classification | Examine the text below, which includes a citation marked by "<cite>" and "</cite>". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.
Provide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.
Text with Citation:
Moreover, the Bit-string result (92.6) is the same, i.e., has no statistically significant difference from the BROWN result (92.7), and also from the <cite>Bansal et al. (2014</cite>) SKIP DEP result (92.7). Therefore, the main contribution of these link embeddings is that their significantly simpler, smaller, and faster set of unary features can match the performance of complex, template-based BROWN features (and of the dependency-based word embedding features of <cite>Bansal et al. (2014)</cite> ), and also stack over them. | ["Similarities"] | science.multicite_intent_classification.1952 | [
{
"content": "Examine the text below, which includes a citation marked by \"<cite>\" and \"</cite>\". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.\n\nProvide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.\n\nText with Citation:\nMoreover, the Bit-string result (92.6) is the same, i.e., has no statistically significant difference from the BROWN result (92.7), and also from the <cite>Bansal et al. (2014</cite>) SKIP DEP result (92.7). Therefore, the main contribution of these link embeddings is that their significantly simpler, smaller, and faster set of unary features can match the performance of complex, template-based BROWN features (and of the dependency-based word embedding features of <cite>Bansal et al. (2014)</cite> ), and also stack over them.",
"role": "user"
},
{
"content": "[\"Similarities\"]",
"role": "assistant"
}
]
|
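Each record in this dump carries the same text twice: once in the flat input/output fields and once inside the messages array. The sketch below checks that internal consistency for one record; the field names (dataset, input, output, id, messages) are assumed from the layout visible here, and the messages are assumed to be exactly one user turn followed by one assistant turn.

```python
def check_record(record: dict) -> None:
    """Assert that the messages column mirrors the flat input/output columns."""
    messages = record["messages"]
    if len(messages) != 2:
        raise ValueError(f"{record['id']}: expected 2 messages, got {len(messages)}")
    user, assistant = messages
    if user["role"] != "user" or assistant["role"] != "assistant":
        raise ValueError(f"{record['id']}: unexpected role ordering")
    if user["content"].strip() != record["input"].strip():
        raise ValueError(f"{record['id']}: user turn does not match the input field")
    if assistant["content"].strip() != record["output"].strip():
        raise ValueError(f"{record['id']}: assistant turn does not match the output field")

if __name__ == "__main__":
    check_record({
        "dataset": "science.multicite_intent_classification",
        "id": "science.multicite_intent_classification.1952",
        "input": "Examine the text below ...",
        "output": '["Similarities"]',
        "messages": [
            {"role": "user", "content": "Examine the text below ..."},
            {"role": "assistant", "content": '["Similarities"]'},
        ],
    })
    print("record is internally consistent")
```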
science.mup_single_document_summarization | Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.
Title of the Paper:
Disentangled generative models for robust dynamical system prediction
Main Content:
1 INTRODUCTION . The robust prediction of dynamical systems behaviour remains an open question in machine learning , and engineering in general . The ability to make robust predictions is important not only for forecasting systems of interest like weather ( Garg et al. , 2021 ; Ravuri et al. , 2021 ) but even more so because it enables innovations in fields like system control , autonomous planning ( Hafner et al. , 2018 ) and computer aided engineering ( Brunton et al. , 2020 ) . In this context , the use of deep generative models has recently gained significant traction for sequence modelling ( Girin et al. , 2020 ) . Robustness of machine learning models can be considered along two axes : long-term prediction and out-of-distribution ( OOD ) performance . Accurate long-term prediction can be notoriously difficult in many dynamical systems , because error accumulation can diverge in finite time ( Zhou et al. , 2020 ; Raissi et al. , 2019 ) , a problem that even traditional solvers can suffer from . More importantly , machine learning techniques are known to suffer from poor OOD performance ( Goyal & Bengio , 2020 ) , i.e . when they are employed in a setting they had not encountered during the training phase . Before addressing the OOD problem , we must first define what constitutes as OOD in dynamical systems . We start by the observation that even simple dynamical systems , i.e the swinging pendulum or the =-body system , can have multiple continuous parameters that affect their evolution . These parameters can be manifested as differential equation coefficients , boundary or initial conditions etc . Our starting point is to consider distinct ranges of those parameters as separate domains . Under this view , it becomes apparent why OOD prediction of dynamical systems can be hard : capturing the whole range of those parameters in a single training set is unrealistic ( Fotiadis et al. , 2020 ) and further inductive biases are required ( Miladinović et al. , 2019 ; Bird & Williams , 2019 ; Barber et al. , 2021 ) . From a dynamical system point of view , different parameters can produce widely different trajectories in phase space . A motivating example can be bifurcations which occur when a small change in the parameters of a system causes a sudden qualitative change in its behaviour . We focus on the inductive bias of disentangled representations for which the dynamics are separated from the domain parameters . Many approaches based on the use of neural networks try to jointly learn the dynamics and the physical parameters , which results in convoluted representations and usually leads to overfitting ( Bengio et al. , 2012 ) . System identification can be used to extract parameters , but 1Code for reproducing our experiments at : https : //anonymous.4open.science/r/ dis-dyn-systems/ 2Animated phase-space and video predictions are available at : https : //bit.ly/dis-dyn-systems requires knowledge of the underlying system to be computationally effective ( Ayyad et al. , 2020 ) . We , instead , leverage advances in Variational Autoencoders ( VAEs ) ( Kingma & Welling , 2014 ) that enable learning disentangled representations . Disentanglement enables different latent variables to focus on different factors of variation of the data distribution , and has been applied in the context of image generation ( Higgins et al. , 2017 ; Kim & Mnih , 2018 ) . 
This can be extended to modelling dynamical systems by looking at disentanglement from a causal perspective : from all the generative models which can have the same marginal distribution , identify the one with the true causal factors . To map this idea to sequence modelling we treat the domain parameters of a dynamical system as factors of variation . Recent findings ( Locatello et al. , 2018 ) emphasize the vital role of inductive biases from models or data for useful disentanglement . Unsupervised disentanglement , based on the assumption of domain stationarity , is a promising direction ( Miladinović et al. , 2019 ; Li & Mandt , 2018 ) . Nevertheless , this leaves a wealth of ground truth domain parameters , which can be cheaply collected in simulated datasets . This type of privileged information originating from simulations has been shown to be effective for domain adaptation in computer vision tasks ( Sarafianos et al. , 2017 ; Lee et al. , 2018 ) . We thus use supervised disentanglement ( Locatello et al. , 2019 ) by leveraging the ground truth domain parameters . To the best of our knowledge , using domain parameters information this way , has not been previously explored in the dynamical system prediction setting . Contributions While others have treated domain parameters as factors of variation in the data distribution , our work is the first , to the best of our knowledge , that explicitly uses privileged information from simulated data to disentangle those domain parameters from dynamics in a supervised way . We furthermore conduct experiments both in the low-dimensional phase space of 3 dynamical systems and the high-dimensional video rendering of a swinging pendulum . Disentanglement has , in the past , been mostly applied to VAEs because they are easily amenable to it . We additionally apply disentanglement on a more powerful , hybrid , model with both stochastic and deterministic parts ( Hafner et al. , 2018 ) . In doing so , we not only assess disentanglement on a generative model outside boundaries of VAEs but furthermore we do it on a model which is considered state-of-the-art in long-term video prediction ( Saxena et al. , 2021 ) . In all cases , the prediction performance is assessed both in-distribution and also in OOD settings of increasing degrees of distribution shift . To our understanding , this is the first time such a rigorous OOD test is performed . Our results in phase-space demonstrate that disentangled models can better capture the variability of dynamical systems compared to baseline models both in-distribution and OOD . In modelling dynamics in video sequences , results indicate that disentanglement is beneficial both for long-term prediction and OOD prediction . Limitations This work focuses on dynamical system prediction . While the results can potentially open up many applications in general time-series modelling , this is out of the scope of this work . We prioritize to empirically study OOD downstream task performance and the inspection of the disentangled representations with appropriate metrics is left out of scope in this work . 2 RELATED WORK . VAEs and disentanglement Disentanglement aims to produce representations where separate factors of variation in the data are encoded into independent latent components . This can be seen as finding the true causal model of the data . While supervised disentanglement in generative models is a long-standing idea ( Mathieu et al. 
, 2016 ) , information-theoretic properties can be leveraged to allow unsupervised disentanglement in VAEs ( Higgins et al. , 2017 ; Kim & Mnih , 2018 ) . The impossibility result from ( Locatello et al. , 2018 ) suggested that disentangled learning is only possible by inductive biases coming either from the model or the data . Hence , the focus shifted back to semi- or weaklysupervised disentanglement approaches ( Locatello et al. , 2019 ; 2020 ) . While most of these methods focus on disentanglement metrics , we opt to directly assess using a downstream prediction task . Disentanglement in sequence modelling While disentanglement techniques are mainly tested in a static setting , there is a growing interest in applying it to sequence dynamics . Using a bottleneck based on physical knowledge , Iten et al . ( 2018 ) learn an interpretable representation that requires conditioning the decoder on time , but it can return physically inconsistent predictions in OOD data ( Barber et al. , 2021 ) . Deep state-space models ( SSMs ) have also employed techniques for disentangling content from dynamics ( Fraccaro et al. , 2017 ; Li & Mandt , 2018 ) , but , focus mostly on modelling variations in the content , failing to take dynamics into account . In hierarchical approaches ( Karl et al. , 2017 ) , different layers of latent variables correspond to different timescales : for example , in speech analysis for separating voice characteristics and phoneme-level attributes ( Hsu et al. , 2017 ) . In an approach similar to our work , Miladinović et al . ( 2019 ) separate the dynamics from sequencewide properties in dynamical systems like Lotka-Volterra , but do so in an unsupervised way which dismisses a wealth of cheap information and only assesses OOD generalization in a limited way . Feed-forward models for sequence modelling Deep SSM models are difficult to train as they require non-trivial inference schemes and a careful design of the dynamic model ( Krishnan et al. , 2015 ; Karl et al. , 2017 ) . Feed-forward models , with necessary inductive biases , have been used for sequence modelling in dynamical systems ( Greydanus et al. , 2019 ; Fotiadis et al. , 2020 ) . In works like Hamiltonian Neural Networks Greydanus et al . ( 2019 ) the domain is fixed ; together with Barber et al . ( 2021 ) , our work is an attempt in tackling domain variability . Privileged information for domain adaptation . Using privileged information during training has been shown to help with domain adaptation in computer vision tasks . Using segmentation masks of simulated urban scenery can improve semantic segmentation on the target domain ( Lee et al. , 2018 ) , while clip art data can help with domain transfer in an action recognition task ( Sarafianos et al. , 2017 ) . 3 METHODS . 3.1 VARIATIONAL AUTOENCODERS . Variational autoencoders ( VAEs ) ( Kingma & Welling , 2014 ) offer a principled approach to latent variable modeling by combining a variational inference model @ q ( z |x ) with a generative model ? \ ( x|z ) . As in other approximate inference methods , the goal is to maximize the evidence lower bound ( ELBO ) over the data : Lq , \ ( x ) = E @ q ( z |x ) [ log ? \ ( x | z ) ] − ! ( @ q ( z | x ) | | ? ( z ) ) ( 1 ) The first part of the ELBO is the reconstruction loss ( in our case the prediction loss ) and the second part is the Kullback-Leibler divergence that quantifies how close the posterior is to the prior . Design choices for the model We use an isotropic unit Gaussian prior ? 
( z ) = N ( z | 0 , I ) which helps to disentangle the learned representation ( Higgins et al. , 2017 ) . The approximate posterior ( encoder ) distribution is a Gaussian with diagonal covariance @ q ( z | x ) = N ( z | µI , I ) , allowing a closed form KL-divergence , while the decoder has a Laplace distribution ? \ ( x | z ) = Laplace ( x | µG , WI ) with constant diagonal covariance W > 0 , which is tuned empirically . This leads to an ! 1 loss that provides improved results in some problems ( Mathieu et al. , 2018 ) and empirically works better in our case . The parameters µI ≡ µI ( x ; q ) , I ≡ diag [ σI ( x ; q ) ] 2 , and µG ≡ µG ( z ; \ ) are computed via feed-forward neural networks . 3.2 DISENTANGLEMENT OF DOMAIN PARAMETERS IN LATENT SPACE . Apart from the disentanglement that stems from the choice of prior ? ( z ) , we explicitly disentangle part of latent space so that it corresponds to the domain parameters of each input sequence . We achieve this by using a regression loss term Lξ ( z1 : : , ξ ) between the ground truth factors of the domain parameters ξ ∈ R : and the output of the corresponding latents , z1 : : . We , empirically , opted for an ! 1 loss , corresponding to a Laplacian prior with mean ξ and unitary covariance . Previous methods have reported that binary cross-entropy works better than ! 2 ( Locatello et al. , 2019 ) but this does not fit well in a setting like ours . We hypothesize that BCE works better because of the implicit scaling . To address this , we propose applying a function G ( ` I8 ) which linearly scales the ` I8 between the min and max values of the corresponding factor of variation : G ( ` I8 ) = ` I8 · ( max ( b8 ) −min ( b8 ) ) +min ( b8 ) ( 2 ) where b8 are the domain parameters and their corresponding minimum and maximum values of domain parameters from the training set . In all cases , the regression term is weighted by a parameter X which is empirically tuned . Plugging these choices in results in the following loss function : Lq , \ ( x ) =E @ q ( z |x1 : = ) [ 1 W ‖x=+1 : =+ > − µG ( z ; \ ) ‖1 ] + 3 log W ( Prediction loss ) + ‖σI ( x1 : = ; q ) ‖22 − log diag [ σI ( x1 : = ; q ) ] 2 + ‖µI ( x1 : = ; q ) ‖22 ( KL-Divergence ) ( 3 ) + X ξG − G ( µI1 : : ( x1 : = ; q ) ) 1 } ( Sup . disentangl . loss ) Using the reparameterization trick ( Kingma & Welling , 2014 ) , the loss is amenable to optimzation by stochastic gradient descent , with batch size = . The model architecture can be seen in Figure 1 ( left ) . | The paper studies the performance of dynamical systems learned from data with a focus on out of distribution (OOD) evaluations. Authors consider the question whether disentangling dynamical system parameters in the latent space can improve the generalization of the models, which is perceived as privileged information available from the reference (ground truth) simulations. Authors carry out experiments on several dynamical systems: pendulum, Lotka-Volterra system and three-body problem. Additionally an experiment on video prediction of a singing pendulum is performed. Authors found that additional disentanglement can improve generalization performance of the models and in video prediction setting leads to better long-term predictions based on structural and perceptual image metrics. | science.mup_single_document_summarization.9111 | [
{
"content": "Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.\n\nTitle of the Paper:\nDisentangled generative models for robust dynamical system prediction\n\nMain Content:\n1 INTRODUCTION . The robust prediction of dynamical systems behaviour remains an open question in machine learning , and engineering in general . The ability to make robust predictions is important not only for forecasting systems of interest like weather ( Garg et al. , 2021 ; Ravuri et al. , 2021 ) but even more so because it enables innovations in fields like system control , autonomous planning ( Hafner et al. , 2018 ) and computer aided engineering ( Brunton et al. , 2020 ) . In this context , the use of deep generative models has recently gained significant traction for sequence modelling ( Girin et al. , 2020 ) . Robustness of machine learning models can be considered along two axes : long-term prediction and out-of-distribution ( OOD ) performance . Accurate long-term prediction can be notoriously difficult in many dynamical systems , because error accumulation can diverge in finite time ( Zhou et al. , 2020 ; Raissi et al. , 2019 ) , a problem that even traditional solvers can suffer from . More importantly , machine learning techniques are known to suffer from poor OOD performance ( Goyal & Bengio , 2020 ) , i.e . when they are employed in a setting they had not encountered during the training phase . Before addressing the OOD problem , we must first define what constitutes as OOD in dynamical systems . We start by the observation that even simple dynamical systems , i.e the swinging pendulum or the =-body system , can have multiple continuous parameters that affect their evolution . These parameters can be manifested as differential equation coefficients , boundary or initial conditions etc . Our starting point is to consider distinct ranges of those parameters as separate domains . Under this view , it becomes apparent why OOD prediction of dynamical systems can be hard : capturing the whole range of those parameters in a single training set is unrealistic ( Fotiadis et al. , 2020 ) and further inductive biases are required ( Miladinović et al. , 2019 ; Bird & Williams , 2019 ; Barber et al. , 2021 ) . From a dynamical system point of view , different parameters can produce widely different trajectories in phase space . A motivating example can be bifurcations which occur when a small change in the parameters of a system causes a sudden qualitative change in its behaviour . We focus on the inductive bias of disentangled representations for which the dynamics are separated from the domain parameters . Many approaches based on the use of neural networks try to jointly learn the dynamics and the physical parameters , which results in convoluted representations and usually leads to overfitting ( Bengio et al. , 2012 ) . System identification can be used to extract parameters , but 1Code for reproducing our experiments at : https : //anonymous.4open.science/r/ dis-dyn-systems/ 2Animated phase-space and video predictions are available at : https : //bit.ly/dis-dyn-systems requires knowledge of the underlying system to be computationally effective ( Ayyad et al. , 2020 ) . We , instead , leverage advances in Variational Autoencoders ( VAEs ) ( Kingma & Welling , 2014 ) that enable learning disentangled representations . 
Disentanglement enables different latent variables to focus on different factors of variation of the data distribution , and has been applied in the context of image generation ( Higgins et al. , 2017 ; Kim & Mnih , 2018 ) . This can be extended to modelling dynamical systems by looking at disentanglement from a causal perspective : from all the generative models which can have the same marginal distribution , identify the one with the true causal factors . To map this idea to sequence modelling we treat the domain parameters of a dynamical system as factors of variation . Recent findings ( Locatello et al. , 2018 ) emphasize the vital role of inductive biases from models or data for useful disentanglement . Unsupervised disentanglement , based on the assumption of domain stationarity , is a promising direction ( Miladinović et al. , 2019 ; Li & Mandt , 2018 ) . Nevertheless , this leaves a wealth of ground truth domain parameters , which can be cheaply collected in simulated datasets . This type of privileged information originating from simulations has been shown to be effective for domain adaptation in computer vision tasks ( Sarafianos et al. , 2017 ; Lee et al. , 2018 ) . We thus use supervised disentanglement ( Locatello et al. , 2019 ) by leveraging the ground truth domain parameters . To the best of our knowledge , using domain parameters information this way , has not been previously explored in the dynamical system prediction setting . Contributions While others have treated domain parameters as factors of variation in the data distribution , our work is the first , to the best of our knowledge , that explicitly uses privileged information from simulated data to disentangle those domain parameters from dynamics in a supervised way . We furthermore conduct experiments both in the low-dimensional phase space of 3 dynamical systems and the high-dimensional video rendering of a swinging pendulum . Disentanglement has , in the past , been mostly applied to VAEs because they are easily amenable to it . We additionally apply disentanglement on a more powerful , hybrid , model with both stochastic and deterministic parts ( Hafner et al. , 2018 ) . In doing so , we not only assess disentanglement on a generative model outside boundaries of VAEs but furthermore we do it on a model which is considered state-of-the-art in long-term video prediction ( Saxena et al. , 2021 ) . In all cases , the prediction performance is assessed both in-distribution and also in OOD settings of increasing degrees of distribution shift . To our understanding , this is the first time such a rigorous OOD test is performed . Our results in phase-space demonstrate that disentangled models can better capture the variability of dynamical systems compared to baseline models both in-distribution and OOD . In modelling dynamics in video sequences , results indicate that disentanglement is beneficial both for long-term prediction and OOD prediction . Limitations This work focuses on dynamical system prediction . While the results can potentially open up many applications in general time-series modelling , this is out of the scope of this work . We prioritize to empirically study OOD downstream task performance and the inspection of the disentangled representations with appropriate metrics is left out of scope in this work . 2 RELATED WORK . VAEs and disentanglement Disentanglement aims to produce representations where separate factors of variation in the data are encoded into independent latent components . 
This can be seen as finding the true causal model of the data . While supervised disentanglement in generative models is a long-standing idea ( Mathieu et al. , 2016 ) , information-theoretic properties can be leveraged to allow unsupervised disentanglement in VAEs ( Higgins et al. , 2017 ; Kim & Mnih , 2018 ) . The impossibility result from ( Locatello et al. , 2018 ) suggested that disentangled learning is only possible by inductive biases coming either from the model or the data . Hence , the focus shifted back to semi- or weakly-supervised disentanglement approaches ( Locatello et al. , 2019 ; 2020 ) . While most of these methods focus on disentanglement metrics , we opt to directly assess using a downstream prediction task . Disentanglement in sequence modelling While disentanglement techniques are mainly tested in a static setting , there is a growing interest in applying it to sequence dynamics . Using a bottleneck based on physical knowledge , Iten et al . ( 2018 ) learn an interpretable representation that requires conditioning the decoder on time , but it can return physically inconsistent predictions in OOD data ( Barber et al. , 2021 ) . Deep state-space models ( SSMs ) have also employed techniques for disentangling content from dynamics ( Fraccaro et al. , 2017 ; Li & Mandt , 2018 ) , but focus mostly on modelling variations in the content , failing to take dynamics into account . In hierarchical approaches ( Karl et al. , 2017 ) , different layers of latent variables correspond to different timescales : for example , in speech analysis for separating voice characteristics and phoneme-level attributes ( Hsu et al. , 2017 ) . In an approach similar to our work , Miladinović et al . ( 2019 ) separate the dynamics from sequence-wide properties in dynamical systems like Lotka-Volterra , but do so in an unsupervised way which dismisses a wealth of cheap information and only assesses OOD generalization in a limited way . Feed-forward models for sequence modelling Deep SSM models are difficult to train as they require non-trivial inference schemes and a careful design of the dynamic model ( Krishnan et al. , 2015 ; Karl et al. , 2017 ) . Feed-forward models , with necessary inductive biases , have been used for sequence modelling in dynamical systems ( Greydanus et al. , 2019 ; Fotiadis et al. , 2020 ) . In works like Hamiltonian Neural Networks ( Greydanus et al. , 2019 ) the domain is fixed ; together with Barber et al . ( 2021 ) , our work is an attempt at tackling domain variability . Privileged information for domain adaptation . Using privileged information during training has been shown to help with domain adaptation in computer vision tasks . Using segmentation masks of simulated urban scenery can improve semantic segmentation on the target domain ( Lee et al. , 2018 ) , while clip art data can help with domain transfer in an action recognition task ( Sarafianos et al. , 2017 ) . 3 METHODS . 3.1 VARIATIONAL AUTOENCODERS . Variational autoencoders ( VAEs ) ( Kingma & Welling , 2014 ) offer a principled approach to latent variable modeling by combining a variational inference model q_φ ( z | x ) with a generative model p_θ ( x | z ) . As in other approximate inference methods , the goal is to maximize the evidence lower bound ( ELBO ) over the data : L_{φ,θ} ( x ) = E_{q_φ ( z | x )} [ log p_θ ( x | z ) ] − D_KL ( q_φ ( z | x ) || p ( z ) ) ( 1 )
The first part of the ELBO is the reconstruction loss ( in our case the prediction loss ) and the second part is the Kullback-Leibler divergence that quantifies how close the posterior is to the prior . Design choices for the model We use an isotropic unit Gaussian prior p ( z ) = N ( z | 0 , I ) which helps to disentangle the learned representation ( Higgins et al. , 2017 ) . The approximate posterior ( encoder ) distribution is a Gaussian with diagonal covariance q_φ ( z | x ) = N ( z | µ_E , Σ_E ) , allowing a closed-form KL-divergence , while the decoder has a Laplace distribution p_θ ( x | z ) = Laplace ( x | µ_G , γI ) with constant diagonal covariance γ > 0 , which is tuned empirically . This leads to an ℓ1 loss that provides improved results in some problems ( Mathieu et al. , 2018 ) and empirically works better in our case . The parameters µ_E ≡ µ_E ( x ; φ ) , Σ_E ≡ diag [ σ_E ( x ; φ ) ]^2 , and µ_G ≡ µ_G ( z ; θ ) are computed via feed-forward neural networks . 3.2 DISENTANGLEMENT OF DOMAIN PARAMETERS IN LATENT SPACE . Apart from the disentanglement that stems from the choice of prior p ( z ) , we explicitly disentangle part of the latent space so that it corresponds to the domain parameters of each input sequence . We achieve this by using a regression loss term L_ξ ( z_{1:k} , ξ ) between the ground truth factors of the domain parameters ξ ∈ R^k and the output of the corresponding latents , z_{1:k} . We , empirically , opted for an ℓ1 loss , corresponding to a Laplacian prior with mean ξ and unitary covariance . Previous methods have reported that binary cross-entropy works better than ℓ2 ( Locatello et al. , 2019 ) but this does not fit well in a setting like ours . We hypothesize that BCE works better because of the implicit scaling . To address this , we propose applying a function s ( µ_{z_i} ) which linearly scales µ_{z_i} between the min and max values of the corresponding factor of variation : s ( µ_{z_i} ) = µ_{z_i} · ( max ( ξ_i ) − min ( ξ_i ) ) + min ( ξ_i ) ( 2 ) where the ξ_i are the domain parameters and their corresponding minimum and maximum values are taken from the training set . In all cases , the regression term is weighted by a parameter δ which is empirically tuned . Plugging these choices in results in the following loss function : L_{φ,θ} ( x ) = E_{q_φ ( z | x_{1:n} )} [ ( 1 / γ ) ‖ x_{n+1:n+o} − µ_G ( z ; θ ) ‖_1 ] + d log γ ( Prediction loss ) + ‖ σ_E ( x_{1:n} ; φ ) ‖_2^2 − log det ( diag [ σ_E ( x_{1:n} ; φ ) ]^2 ) + ‖ µ_E ( x_{1:n} ; φ ) ‖_2^2 ( KL-Divergence ) ( 3 ) + δ ‖ ξ − s ( µ_{E,1:k} ( x_{1:n} ; φ ) ) ‖_1 ( Sup . disentangl . loss ) Using the reparameterization trick ( Kingma & Welling , 2014 ) , the loss is amenable to optimization by stochastic gradient descent , with batch size n . The model architecture can be seen in Figure 1 ( left ) .",
"role": "user"
},
{
"content": "The paper studies the performance of dynamical systems learned from data with a focus on out of distribution (OOD) evaluations. Authors consider the question whether disentangling dynamical system parameters in the latent space can improve the generalization of the models, which is perceived as privileged information available from the reference (ground truth) simulations. Authors carry out experiments on several dynamical systems: pendulum, Lotka-Volterra system and three-body problem. Additionally an experiment on video prediction of a singing pendulum is performed. Authors found that additional disentanglement can improve generalization performance of the models and in video prediction setting leads to better long-term predictions based on structural and perceptual image metrics.",
"role": "assistant"
}
]
|
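The supervised-disentanglement objective described in the row above combines a VAE ELBO (Laplace decoder, unit-Gaussian prior) with an ℓ1 regression of the first k latent means onto the simulator's domain parameters. Below is a minimal sketch of such a loss in PyTorch; it is not the authors' implementation, and the function names, the scaling helper, and the weights gamma and delta are illustrative assumptions.

```python
# Hypothetical sketch (not the authors' code) of a supervised-disentanglement VAE loss:
# ELBO with a Laplace decoder (L1 prediction term), a unit-Gaussian prior, and an L1
# regression of the first k latent means onto known domain parameters xi.
import torch
import torch.nn.functional as F

def scale_to_param_range(mu_k, xi_min, xi_max):
    # Linearly rescale latent means into each domain parameter's [min, max] training range
    # (an analogue of the paper's Eq. 2; the name is ours).
    return mu_k * (xi_max - xi_min) + xi_min

def disentangled_vae_loss(x_future, x_pred, mu, logvar, xi, xi_min, xi_max,
                          gamma=1.0, delta=1.0):
    """x_future, x_pred: target and predicted frames; mu, logvar: encoder outputs (B, D_z);
    xi: ground-truth domain parameters (B, k); gamma: Laplace scale; delta: regression weight."""
    k = xi.shape[1]
    # Laplace decoder with constant scale gamma -> an L1 reconstruction/prediction loss.
    pred_loss = F.l1_loss(x_pred, x_future, reduction="mean") / gamma
    # KL divergence between N(mu, diag(exp(logvar))) and the isotropic unit-Gaussian prior.
    kl = 0.5 * torch.mean(torch.sum(mu.pow(2) + logvar.exp() - logvar - 1.0, dim=1))
    # Supervised disentanglement: L1 regression of the rescaled first k latent means onto xi.
    sup_loss = F.l1_loss(scale_to_param_range(mu[:, :k], xi_min, xi_max), xi, reduction="mean")
    return pred_loss + kl + delta * sup_loss
```

In use, mu, logvar, and x_pred would come from the model's forward pass on the conditioning frames, and xi_min, xi_max would be precomputed per-parameter bounds from the training set.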
science.multicite_intent_classification | Examine the text below, which includes a citation marked by "<cite>" and "</cite>". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.
Provide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.
Text with Citation:
There has also been growing interest in deep learning models for keyphrase generation <cite>(Meng et al. 2017</cite>; Chan et al. 2019) . | ["Motivation"] | science.multicite_intent_classification.2413 | [
{
"content": "Examine the text below, which includes a citation marked by \"<cite>\" and \"</cite>\". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.\n\nProvide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.\n\nText with Citation:\nThere has also been growing interest in deep learning models for keyphrase generation <cite>(Meng et al. 2017</cite>; Chan et al. 2019) .",
"role": "user"
},
{
"content": "[\"Motivation\"]",
"role": "assistant"
}
]
|
science.mup_single_document_summarization | Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.
Title of the Paper:
Additive Poisson Process: Learning Intensity of Higher-Order Interaction in Poisson Processes
Main Content:
1 INTRODUCTION . The Poisson process is a counting process used in a wide range of disciplines such as spatialtemporal sequential data in transportation ( Zhou et al. , 2018 ; 2021 ) , finance ( Ilalan , 2016 ) and ecology ( Thompson , 1955 ) to model the arrival rate by learning an intensity function . For a given time interval , the integral of the intensity function represents the average number of events occurring in that interval . The intensity function can be generalized to multiple dimensions . However , for most practical applications , learning the multi-dimensional intensity function becomes a challenge because of the sparse observations . Despite the recent advances of Poisson processes , most current Poisson process models are unable to learn the intensity function of a multi-dimensional Poisson process . Our research question is , “ Are there any good ways of approximating the high dimensional intensity function ? ” Our proposed model , the Additive Poisson Process ( APP ) , provides a novel solution to this problem . Throughout this paper , we will use a running example in a spatial-temporal setting . Say we want to learn the intensity function for a taxi to pick up customers at a given time and location . For this setting , each event is multi-dimensional ; that is , ( xi , yi , Wi ) , where a pair of xi and yi represents two spatial coordinates and Wi represents the day of the week . In addition , observation time is associated with each component of the event , which is represented as ti = ( txi , tyi , tWi ) . In this example , it is reasonable to assume txi = tyi = tWi as they are observed simultaneously . For any given location or time , we can expect at most a few pick-up events , which makes it difficult for any model to learn the low-valued intensity function . Figure 2b visualizes this problem . In this problem setup , if we would like to learn the intensity function at a given location ( x , y ) and day of the week W , the naı̈ve approach would be to learn the intensity at ( x , y , W ) directly from observations . This is extremely difficult because there could be only few events for a given location and day . However , there is useful information in lower-dimensional space ; for example , the marginalized observations at the location ( xi , yi ) across all days of the week , or on the day Wi at all locations . This information can be included into the model to improve the estimation of the joint intensity function . Using the information in lower-dimensional space provides a structured approach to include prior information based on the location or day of the week to improve the estimation of the joint intensity function . For example , a given location could be a shopping center or a hotel , where it is common for taxis to pick up passengers , and therefore we expect more passengers at this location . There could also be additional patterns that could be uncovered based on the day of the week . We can then use the observations of events to update our knowledge of the intensity function . Previous approaches , such as kernel density estimation ( KDE ) ( Rosenblatt , 1956 ) , are able to learn the joint intensity function by using information in lower dimensions . However , KDE suffers from the curse of dimensionality , which means that it requires a large size of samples to build an accurate model . In addition , the complexity of the model expands exponentially with respect to the number of dimensions , which makes it infeasible to compute . 
Bayesian approaches , such as using a mixture of beta distributions with a Dirichlet prior ( Kottas , 2006 ; Kottas & Sansó , 2007 ) and Reproducing Kernel Hilbert Space ( RKHS ) ( Flaxman et al. , 2017 ) , have been proposed to quantify the uncertainty for the intensity function . However , these approaches are often non-convex , making it difficult to obtain the globally optimal solution . Besides , if observations are sparse , it is hard for these approaches to learn a reasonable intensity function . Additional related work about Bayesian inference for Poisson processes and Poisson factorization can be found in Appendix A . In this paper , we propose a novel framework to learn the higher-order interaction effects of intensity functions in Poisson processes . Our model combines the techniques introduced by Luo & Sugiyama ( 2019 ) to model higher-order interactions between Poisson processes and by Friedman & Stuetzle ( 1981 ) in generalized additive models to learn the joint intensity function using samples in a lower dimensional space . Our proposed approach is to decompose a multi-dimensional Poisson process into lower-dimensional representations . For example , we have points ( xi ) Ni=1 in the x-dimension and ( yi ) Ni=1 in the y-dimension . Such data in the lower-dimensional space can be used to improve the estimation of the joint intensity function . This is different from the traditional approaches where only the joint occurrence of events is used to learn the joint intensity . We first show the connection between generalized additive models and Poisson processes , and then provide the connection between generalized additive models and the log-linear model ( Agresti , 2012 ) , which has a well-established theoretical background in information geometry ( Amari , 2016 ) . We draw parallels between the formulation of the generalized additive models and the binary loglinear model on a partially ordered set ( poset ) ( Sugiyama et al. , 2017 ) . The learning process in our model is formulated as a convex optimization problem to arrive at a unique optimal solution using natural gradient , which minimizes the Kullback-Leibler ( KL ) divergence from the sample distribution in a lower-dimensional space to the distribution modeled by the learned intensity function . This connection provides remarkable properties to our model : the ability to learn higher-order intensity functions using lower-dimensional projections , thanks to the Kolmogorov-Arnold representation theorem . This property makes it advantageous to use our proposed approach for cases where there are no observations , missing samples , or low event rates . Our model is flexible because it can capture the interaction effects between events in a Poisson process as a partial order structure in the log-linear model and the parameters of the model are fully customizable to meet the requirements of the application . Our empirical results show that our model effectively uses samples projected onto a lower dimensional space to estimate the higher-order intensity function . More importantly , our model is also robust to various sample sizes . 2 FORMULATION . We start this section by introducing the technical background in the Poisson process and its extension to a multi-dimensional Poisson process . We then introduce the Generalized Additive Model ( GAM ) and its connection to the Poisson process . 
This is followed by presenting our novel framework , called Additive Poisson Process ( APP ) , which is our main technical contribution and has a tight link to the Poisson process modeled by GAMs . We show that the learning of APP can be achieved via convex optimization using natural gradient . The Poisson process is characterized by an intensity function λ : R → R. An inhomogeneous Poisson process is an extension of a homogeneous Poisson process , where the arrival rate changes with time . The process with time-changing intensity λ ( t ) is defined as a counting process N ( t ) , which has an independent increment property . For any time t ≥ 0 and infinitesimal interval δ ≥ 0 , the probability of events count is p ( N ( t+δ ) −N ( t ) = 0 ) = 1−δλ ( t ) +o ( δ ) , p ( N ( t+δ ) −N ( t ) = 1 ) = δλ ( t ) +o ( δ ) , and p ( N ( t+δ ) −N ( t ) ≥ 2 ) = o ( δ ) , where o ( · ) denotes little-o notation ( Daley & VereJones , 2007 ) . This formulation can be generalized into the ( inhomogeneous ) multi-dimensional Poisson process , which can be used to learn the intensity function λ : RD → R from a realization of timestamps t1 , t2 , . . . , tN with ti ∈ [ 0 , T ] D. Each ti is the time of occurrence for the i-th event across D dimensions and T is the observation duration . The likelihood for the multi-dimensional Poisson process ( Daley & Vere-Jones , 2007 ) is given by p ( { ti } Ni=1 | λ ( t ) ) = exp ( − ∫ λ ( t ) dt ) N∏ i=1 λ ( ti ) , ( 1 ) where t = ( t ( 1 ) , . . . , t ( D ) ) ∈ RD . We define the functional prior on λ ( t ) as λ ( t ) : = g ( f ( t ) ) = exp ( f ( t ) ) . ( 2 ) The function g ( · ) is a positive function to guarantee the non-negativity of the intensity which we choose to be the exponential function , and our objective is to learn the function f ( · ) . The loglikelihood of the multi-dimensional Poisson process with the functional prior is described as log p ( { ti } Ni=1 |λ ( t ) ) = N∑ i=1 f ( ti ) − ∫ exp ( f ( t ) ) dt . ( 3 ) In the following sections , we introduce the generalized additive models and propose to model it by the log-linear model to learn f ( t ) and the normalizing term ( ∫ exp ( f ( t ) ) dt ) . 2.1 GENERALIZED ADDITIVE MODEL . In this section , we present the connection between Poisson processes and the Generalized Additive Model ( GAM ) proposed by Friedman & Stuetzle ( 1981 ) . The GAM projects higher-dimensional features into lower-dimensional space to apply smoothing functions to build a restricted class of non-parametric regression models . GAM is less affected by the curse of dimensionality compared to directly using smoothing in a higher-dimensional space . For a given set of processes J ⊆ [ D ] = { 1 , . . . , D } , the traditional GAM using one-dimensional projections is defined as log λJ ( t ) = ∑ j∈J fj ( t ( j ) ) − βJ with some smoothing function fj . In this paper , we extend it to include higher-order interactions between features in GAM by introducing terms that represent the multiplicative product between events . The k-th order GAM is defined as log λJ ( t ) = ∑ j∈J f { j } ( t ( j ) ) + ∑ j1 , j2∈J f { j1 , j2 } ( t ( j1 ) , t ( j2 ) ) + · · ·+ ∑ j1 , ... , jk∈J f { j1 , ... , jk } ( t ( j1 ) , . . . , t ( jk ) ) − βJ = ∑ I⊆J , |I|≤k fI ( t ( I ) ) − βJ , ( 4 ) where t ( I ) ∈ R|I| denotes the subvector ( t ( j ) ) j∈I of t with respect to I ⊆ [ D ] . 
The function fI : R|I| → R is a smoothing function to fit the data , and the normalization constant βJ for the intensity function is obtained as βJ = ∫ λJ ( t ) dt = ∫ exp ( ∑ I⊆J , |I|≤k fI ( t ( I ) ) ) dt . The definition of the additive model is in the same form as Equation ( 3 ) . In particular , if we compare Equation ( 3 ) and Equation ( 4 ) , we can see that the smoothing function f in Equation ( 3 ) is realized as the summation over lower-dimensional projections in Equation ( 4 ) . Learning of a continuous function using lower-dimensional projections is well known because of the Kolmogorov-Arnold representation theorem , which states that : Theorem 1 ( Kolmogorov–Arnold Representation Theorem ( Braun & Griebel , 2009 ; Kolmogorov , 1957 ) ) . Any multivariate continuous function can be represented as a superposition of one–dimensional functions ; that is , f ( t1 , . . . , tn ) = ∑2n+1 q=1 fq ( ∑n p=1 gq , p ( tp ) ) . Braun ( 2009 ) showed that the GAM is an approximation to the general form presented in Kolmogorov-Arnold representation theorem by replacing the range q ∈ { 1 , . . . , 2n + 1 } with I ⊆ J and the inner function gq , p by the identity if q = p and zero otherwise , yielding f ( t ) = ∑ I⊆J fI ( t ( I ) ) . Interestingly , the canonical form for additive models in Equation ( 4 ) can be rearranged to be in the same form as Kolmogorov-Arnold representation theorem . By letting f ( t ) = ∑ I⊆J fI ( t ( I ) ) = g−1 ( λ ( t ) ) and g ( · ) = exp ( · ) , we have λJ ( t ) = 1 exp ( βJ ) exp ( ∑ I⊆J fI ( t ( I ) ) ) ∝ exp ( ∑ I⊆J fI ( t ( I ) ) ) , ( 5 ) where we assume fI ( t ( I ) ) = 0 if |I| > k for the k-th order model and 1/ exp ( βJ ) is the normalization term for the intensity function . Based on the Kolmogorov-Arnold representation theorem , generalized additive models are able to learn the intensity of the higher-order interaction between Poisson processes by using projections into lower dimensional spaces . The log-likelihood function for a kth-order model is obtained by substituting λJ ( t ) in Equation ( 4 ) into λ ( t ) in Equation ( 3 ) , log p ( { t } Ni=1|λ ( t ) ) = N∑ i=1 ∑ I⊆J , |I|≤k fI ( t ( I ) ) − β′ , ( 6 ) where β′ is a constant given by β′ = ∫ λ ( t ) dt+ ∑ I⊆J βJ . In the following subsection we introduce a log-linear formulation equipped with partially ordered sample space , which aligns with the GAM formulation in Equation ( 5 ) . | This paper proposes a method to estimate the multi-dimensional intensity function of a Poisson process with a lower-dimensional projection. The proposed method is motivated by perspectives of information geometry and generalized additive models in statistics. The performance of the proposed method is extreme through synthetic examples and a real data analysis. | science.mup_single_document_summarization.5632 | [
{
"content": "Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.\n\nTitle of the Paper:\nAdditive Poisson Process: Learning Intensity of Higher-Order Interaction in Poisson Processes\n\nMain Content:\n1 INTRODUCTION . The Poisson process is a counting process used in a wide range of disciplines such as spatialtemporal sequential data in transportation ( Zhou et al. , 2018 ; 2021 ) , finance ( Ilalan , 2016 ) and ecology ( Thompson , 1955 ) to model the arrival rate by learning an intensity function . For a given time interval , the integral of the intensity function represents the average number of events occurring in that interval . The intensity function can be generalized to multiple dimensions . However , for most practical applications , learning the multi-dimensional intensity function becomes a challenge because of the sparse observations . Despite the recent advances of Poisson processes , most current Poisson process models are unable to learn the intensity function of a multi-dimensional Poisson process . Our research question is , “ Are there any good ways of approximating the high dimensional intensity function ? ” Our proposed model , the Additive Poisson Process ( APP ) , provides a novel solution to this problem . Throughout this paper , we will use a running example in a spatial-temporal setting . Say we want to learn the intensity function for a taxi to pick up customers at a given time and location . For this setting , each event is multi-dimensional ; that is , ( xi , yi , Wi ) , where a pair of xi and yi represents two spatial coordinates and Wi represents the day of the week . In addition , observation time is associated with each component of the event , which is represented as ti = ( txi , tyi , tWi ) . In this example , it is reasonable to assume txi = tyi = tWi as they are observed simultaneously . For any given location or time , we can expect at most a few pick-up events , which makes it difficult for any model to learn the low-valued intensity function . Figure 2b visualizes this problem . In this problem setup , if we would like to learn the intensity function at a given location ( x , y ) and day of the week W , the naı̈ve approach would be to learn the intensity at ( x , y , W ) directly from observations . This is extremely difficult because there could be only few events for a given location and day . However , there is useful information in lower-dimensional space ; for example , the marginalized observations at the location ( xi , yi ) across all days of the week , or on the day Wi at all locations . This information can be included into the model to improve the estimation of the joint intensity function . Using the information in lower-dimensional space provides a structured approach to include prior information based on the location or day of the week to improve the estimation of the joint intensity function . For example , a given location could be a shopping center or a hotel , where it is common for taxis to pick up passengers , and therefore we expect more passengers at this location . There could also be additional patterns that could be uncovered based on the day of the week . We can then use the observations of events to update our knowledge of the intensity function . Previous approaches , such as kernel density estimation ( KDE ) ( Rosenblatt , 1956 ) , are able to learn the joint intensity function by using information in lower dimensions . 
However , KDE suffers from the curse of dimensionality , which means that it requires a large size of samples to build an accurate model . In addition , the complexity of the model expands exponentially with respect to the number of dimensions , which makes it infeasible to compute . Bayesian approaches , such as using a mixture of beta distributions with a Dirichlet prior ( Kottas , 2006 ; Kottas & Sansó , 2007 ) and Reproducing Kernel Hilbert Space ( RKHS ) ( Flaxman et al. , 2017 ) , have been proposed to quantify the uncertainty for the intensity function . However , these approaches are often non-convex , making it difficult to obtain the globally optimal solution . Besides , if observations are sparse , it is hard for these approaches to learn a reasonable intensity function . Additional related work about Bayesian inference for Poisson processes and Poisson factorization can be found in Appendix A . In this paper , we propose a novel framework to learn the higher-order interaction effects of intensity functions in Poisson processes . Our model combines the techniques introduced by Luo & Sugiyama ( 2019 ) to model higher-order interactions between Poisson processes and by Friedman & Stuetzle ( 1981 ) in generalized additive models to learn the joint intensity function using samples in a lower dimensional space . Our proposed approach is to decompose a multi-dimensional Poisson process into lower-dimensional representations . For example , we have points ( xi ) Ni=1 in the x-dimension and ( yi ) Ni=1 in the y-dimension . Such data in the lower-dimensional space can be used to improve the estimation of the joint intensity function . This is different from the traditional approaches where only the joint occurrence of events is used to learn the joint intensity . We first show the connection between generalized additive models and Poisson processes , and then provide the connection between generalized additive models and the log-linear model ( Agresti , 2012 ) , which has a well-established theoretical background in information geometry ( Amari , 2016 ) . We draw parallels between the formulation of the generalized additive models and the binary loglinear model on a partially ordered set ( poset ) ( Sugiyama et al. , 2017 ) . The learning process in our model is formulated as a convex optimization problem to arrive at a unique optimal solution using natural gradient , which minimizes the Kullback-Leibler ( KL ) divergence from the sample distribution in a lower-dimensional space to the distribution modeled by the learned intensity function . This connection provides remarkable properties to our model : the ability to learn higher-order intensity functions using lower-dimensional projections , thanks to the Kolmogorov-Arnold representation theorem . This property makes it advantageous to use our proposed approach for cases where there are no observations , missing samples , or low event rates . Our model is flexible because it can capture the interaction effects between events in a Poisson process as a partial order structure in the log-linear model and the parameters of the model are fully customizable to meet the requirements of the application . Our empirical results show that our model effectively uses samples projected onto a lower dimensional space to estimate the higher-order intensity function . More importantly , our model is also robust to various sample sizes . 2 FORMULATION . 
We start this section by introducing the technical background in the Poisson process and its extension to a multi-dimensional Poisson process . We then introduce the Generalized Additive Model ( GAM ) and its connection to the Poisson process . This is followed by presenting our novel framework , called Additive Poisson Process ( APP ) , which is our main technical contribution and has a tight link to the Poisson process modeled by GAMs . We show that the learning of APP can be achieved via convex optimization using natural gradient . The Poisson process is characterized by an intensity function λ : R → R. An inhomogeneous Poisson process is an extension of a homogeneous Poisson process , where the arrival rate changes with time . The process with time-changing intensity λ ( t ) is defined as a counting process N ( t ) , which has an independent increment property . For any time t ≥ 0 and infinitesimal interval δ ≥ 0 , the probability of events count is p ( N ( t+δ ) −N ( t ) = 0 ) = 1−δλ ( t ) +o ( δ ) , p ( N ( t+δ ) −N ( t ) = 1 ) = δλ ( t ) +o ( δ ) , and p ( N ( t+δ ) −N ( t ) ≥ 2 ) = o ( δ ) , where o ( · ) denotes little-o notation ( Daley & VereJones , 2007 ) . This formulation can be generalized into the ( inhomogeneous ) multi-dimensional Poisson process , which can be used to learn the intensity function λ : RD → R from a realization of timestamps t1 , t2 , . . . , tN with ti ∈ [ 0 , T ] D. Each ti is the time of occurrence for the i-th event across D dimensions and T is the observation duration . The likelihood for the multi-dimensional Poisson process ( Daley & Vere-Jones , 2007 ) is given by p ( { ti } Ni=1 | λ ( t ) ) = exp ( − ∫ λ ( t ) dt ) N∏ i=1 λ ( ti ) , ( 1 ) where t = ( t ( 1 ) , . . . , t ( D ) ) ∈ RD . We define the functional prior on λ ( t ) as λ ( t ) : = g ( f ( t ) ) = exp ( f ( t ) ) . ( 2 ) The function g ( · ) is a positive function to guarantee the non-negativity of the intensity which we choose to be the exponential function , and our objective is to learn the function f ( · ) . The loglikelihood of the multi-dimensional Poisson process with the functional prior is described as log p ( { ti } Ni=1 |λ ( t ) ) = N∑ i=1 f ( ti ) − ∫ exp ( f ( t ) ) dt . ( 3 ) In the following sections , we introduce the generalized additive models and propose to model it by the log-linear model to learn f ( t ) and the normalizing term ( ∫ exp ( f ( t ) ) dt ) . 2.1 GENERALIZED ADDITIVE MODEL . In this section , we present the connection between Poisson processes and the Generalized Additive Model ( GAM ) proposed by Friedman & Stuetzle ( 1981 ) . The GAM projects higher-dimensional features into lower-dimensional space to apply smoothing functions to build a restricted class of non-parametric regression models . GAM is less affected by the curse of dimensionality compared to directly using smoothing in a higher-dimensional space . For a given set of processes J ⊆ [ D ] = { 1 , . . . , D } , the traditional GAM using one-dimensional projections is defined as log λJ ( t ) = ∑ j∈J fj ( t ( j ) ) − βJ with some smoothing function fj . In this paper , we extend it to include higher-order interactions between features in GAM by introducing terms that represent the multiplicative product between events . The k-th order GAM is defined as log λJ ( t ) = ∑ j∈J f { j } ( t ( j ) ) + ∑ j1 , j2∈J f { j1 , j2 } ( t ( j1 ) , t ( j2 ) ) + · · ·+ ∑ j1 , ... , jk∈J f { j1 , ... , jk } ( t ( j1 ) , . . . 
, t ( jk ) ) − βJ = ∑ I⊆J , |I|≤k fI ( t ( I ) ) − βJ , ( 4 ) where t ( I ) ∈ R|I| denotes the subvector ( t ( j ) ) j∈I of t with respect to I ⊆ [ D ] . The function fI : R|I| → R is a smoothing function to fit the data , and the normalization constant βJ for the intensity function is obtained as βJ = ∫ λJ ( t ) dt = ∫ exp ( ∑ I⊆J , |I|≤k fI ( t ( I ) ) ) dt . The definition of the additive model is in the same form as Equation ( 3 ) . In particular , if we compare Equation ( 3 ) and Equation ( 4 ) , we can see that the smoothing function f in Equation ( 3 ) is realized as the summation over lower-dimensional projections in Equation ( 4 ) . Learning of a continuous function using lower-dimensional projections is well known because of the Kolmogorov-Arnold representation theorem , which states that : Theorem 1 ( Kolmogorov–Arnold Representation Theorem ( Braun & Griebel , 2009 ; Kolmogorov , 1957 ) ) . Any multivariate continuous function can be represented as a superposition of one–dimensional functions ; that is , f ( t1 , . . . , tn ) = ∑2n+1 q=1 fq ( ∑n p=1 gq , p ( tp ) ) . Braun ( 2009 ) showed that the GAM is an approximation to the general form presented in Kolmogorov-Arnold representation theorem by replacing the range q ∈ { 1 , . . . , 2n + 1 } with I ⊆ J and the inner function gq , p by the identity if q = p and zero otherwise , yielding f ( t ) = ∑ I⊆J fI ( t ( I ) ) . Interestingly , the canonical form for additive models in Equation ( 4 ) can be rearranged to be in the same form as Kolmogorov-Arnold representation theorem . By letting f ( t ) = ∑ I⊆J fI ( t ( I ) ) = g−1 ( λ ( t ) ) and g ( · ) = exp ( · ) , we have λJ ( t ) = 1 exp ( βJ ) exp ( ∑ I⊆J fI ( t ( I ) ) ) ∝ exp ( ∑ I⊆J fI ( t ( I ) ) ) , ( 5 ) where we assume fI ( t ( I ) ) = 0 if |I| > k for the k-th order model and 1/ exp ( βJ ) is the normalization term for the intensity function . Based on the Kolmogorov-Arnold representation theorem , generalized additive models are able to learn the intensity of the higher-order interaction between Poisson processes by using projections into lower dimensional spaces . The log-likelihood function for a kth-order model is obtained by substituting λJ ( t ) in Equation ( 4 ) into λ ( t ) in Equation ( 3 ) , log p ( { t } Ni=1|λ ( t ) ) = N∑ i=1 ∑ I⊆J , |I|≤k fI ( t ( I ) ) − β′ , ( 6 ) where β′ is a constant given by β′ = ∫ λ ( t ) dt+ ∑ I⊆J βJ . In the following subsection we introduce a log-linear formulation equipped with partially ordered sample space , which aligns with the GAM formulation in Equation ( 5 ) .",
"role": "user"
},
{
"content": "This paper proposes a method to estimate the multi-dimensional intensity function of a Poisson process with a lower-dimensional projection. The proposed method is motivated by perspectives of information geometry and generalized additive models in statistics. The performance of the proposed method is extreme through synthetic examples and a real data analysis.",
"role": "assistant"
}
]
|
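The sub- and superscript layout of the likelihood and GAM formulas quoted in the row above was lost in extraction. A cleaned-up restatement in standard notation, reconstructed from the surrounding definitions rather than copied from the original paper, is:

```latex
% Requires amsmath. Equation tags follow the numbers used in the quoted text.
\begin{align}
p\big(\{t_i\}_{i=1}^{N} \mid \lambda(t)\big) &= \exp\!\Big(-\!\int \lambda(t)\,dt\Big)\,\prod_{i=1}^{N}\lambda(t_i),
  \qquad t=(t^{(1)},\dots,t^{(D)})\in\mathbb{R}^{D} \tag{1}\\
\lambda(t) &:= g(f(t)) = \exp(f(t)) \tag{2}\\
\log p\big(\{t_i\}_{i=1}^{N} \mid \lambda(t)\big) &= \sum_{i=1}^{N} f(t_i) - \int \exp(f(t))\,dt \tag{3}\\
\log \lambda_J(t) &= \sum_{I\subseteq J,\;|I|\le k} f_I\big(t^{(I)}\big) - \beta_J \tag{4}\\
\lambda_J(t) &= \frac{1}{\exp(\beta_J)}\exp\!\Big(\sum_{I\subseteq J} f_I\big(t^{(I)}\big)\Big)
  \;\propto\; \exp\!\Big(\sum_{I\subseteq J} f_I\big(t^{(I)}\big)\Big) \tag{5}
\end{align}
```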